One Video Call. $25.6 Million Gone.

In January 2024, a finance employee at the engineering firm Arup was invited to a video conference with the company's CFO and several colleagues. The faces were familiar. The voices sounded right. The employee approved 15 separate bank transfers totaling $25.6 million, roughly 270 million Norwegian kroner.

None of the other participants in the call were real. All were AI-generated deepfakes.

The Arup case has become a reference point in the debate over online authenticity, not because it was unusually sophisticated technically, but because it illustrates something fundamental: the attack vector was human trust, not a technical weakness. And this is precisely where Microsoft and the rest of the tech industry are now focusing their efforts.

C2PA: The Industry Standard to Save Trust

Microsoft's answer is not a new product; it is infrastructure. The company is a key player in the Coalition for Content Provenance and Authenticity (C2PA), an open industry standard that attaches cryptographically signed, tamper-evident metadata to digital content the moment it is created. Tamper-evident, not tamper-proof: the signature reveals manipulation but, as we will see, cannot prevent its own removal.

The signature records who created the content, with which tool and when, and whether it has been edited afterward. Microsoft Copilot has begun building in support for C2PA Content Credentials in AI-generated images, according to Windows Forum. The Azure AI Content Safety platform already offers real-time tools to detect and flag synthetic content.
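To make the mechanism concrete, here is a minimal conceptual sketch in Python of how a signed provenance manifest works: hash the asset, record creator, tool, and time, and sign the claim so that any later change to the asset or the claim is detectable. This illustrates the principle only; the real C2PA format uses JUMBF containers, standardized assertions, and X.509 certificate chains, and every field name below is invented for the example.

```python
# Conceptual sketch of a tamper-evident provenance manifest.
# NOT the real C2PA format: C2PA uses JUMBF containers, standardized
# assertions, and X.509 certificate chains. Field names are invented.
import hashlib
import json
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def create_manifest(asset: bytes, creator: str, tool: str,
                    key: Ed25519PrivateKey) -> dict:
    """Bind identity, tool, and time to a hash of the asset, then sign."""
    claim = {
        "creator": creator,
        "tool": tool,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "asset_sha256": hashlib.sha256(asset).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}


def verify_manifest(asset: bytes, manifest: dict,
                    public_key: Ed25519PublicKey) -> bool:
    """Any change to the asset or the claim invalidates the manifest."""
    claim = manifest["claim"]
    if claim["asset_sha256"] != hashlib.sha256(asset).hexdigest():
        return False  # asset was edited after signing
    payload = json.dumps(claim, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # claim itself was tampered with


key = Ed25519PrivateKey.generate()
asset = b"...image bytes..."
manifest = create_manifest(asset, "newsroom@example.org", "ExampleCam 1.0", key)
print(verify_manifest(asset, manifest, key.public_key()))            # True
print(verify_manifest(asset + b"edit", manifest, key.public_key()))  # False
```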

But Microsoft is far from alone. Google has integrated C2PA Assurance Level 2 directly into Pixel camera hardware. TikTok is making C2PA labeling mandatory for realistic AI content. YouTube uses it for content created with tools like Dream Screen. OpenAI and Adobe adopted C2PA v2.1 in 2024 for tamper-evident signing of AI output, while version 2.3 in 2025 added support for live broadcasts and text.

News agencies AFP and the BBC have piloted the standard in election coverage. Sony is integrating it into cameras. Cloudflare uses it for metadata preservation in the distribution chain.

$1.87 bn
Projected market value of fake image detection, 2026
$7.43 bn
Estimated market value, 2031 (31.7% CAGR)

The Competitors: A Market in Explosive Growth

C2PA is an open standard, not a product—and around it, a market of commercial players has emerged, competing to deliver solutions.

Adobe Content Credentials is the company's C2PA-compliant implementation, built into tools like GenStudio and Experience Manager, and visible on platforms like LinkedIn. It offers brand signatures, invisible watermarking, and fingerprints that survive export and editing in standard formats like JPEG, PNG, and MP4.

Truepic takes a different angle: the company positions itself as the "immune system for digital reality," combining machine learning-based deepfake detection with provenance infrastructure. While Adobe focuses on enterprise workflows, Truepic is more user-oriented and detection-focused.

Google's SynthID is a proprietary watermark that is invisible to the naked eye but detectable via the Gemini platform. It represents a parallel approach: instead of an open standard, a closed but highly robust system.

These approaches are complementary rather than mutually exclusive. Mordor Intelligence estimates that the market for fake image detection will grow from $1.87 billion in 2026 to $7.43 billion by 2031, an annual growth rate of 31.7%.
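Those two figures are internally consistent: compounding $1.87 billion at 31.7% a year over the five years from 2026 to 2031 lands within rounding of the cited endpoint, as this quick check shows.

```python
# Sanity check of the cited market projection:
# $1.87 bn compounded at 31.7% per year over 2026-2031 (five years).
value_2026 = 1.87  # billions of dollars
cagr = 0.317
value_2031 = value_2026 * (1 + cagr) ** 5
print(f"${value_2031:.2f} bn")  # ~$7.41 bn, within rounding of the cited $7.43 bn
```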

"C2PA proves origin—not truth. It is the difference between knowing who made something and knowing if it is true."

Known Weaknesses of the Standard

Critics of C2PA are not hard to find, and they point to problems that no cryptographic signature can solve on its own.

Manifest stripping is the simplest attack vector: metadata can be removed. An image without a C2PA signature is not necessarily fake; it could just as easily be an old photo, a screenshot, or an image from a source that has not implemented the standard. The absence of a stamp proves nothing.
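How trivial is stripping in practice? A short sketch using the Pillow imaging library: decoding an image and re-saving its pixels produces a fresh file with no embedded metadata, Content Credentials included. The filenames are placeholders.

```python
# Sketch: re-encoding an image silently discards embedded metadata,
# including any C2PA manifest. Filenames are placeholders; requires Pillow.
from PIL import Image

original = Image.open("signed_photo.jpg")          # may carry Content Credentials
stripped = Image.new(original.mode, original.size)
stripped.putdata(list(original.getdata()))         # copy the pixels only
stripped.save("stripped_photo.jpg")                # new file, no provenance

# Even a plain re-save drops most metadata unless it is passed on explicitly:
Image.open("signed_photo.jpg").save("resaved_photo.jpg")
```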

The analog hole is a subtler attack: re-photographing or re-recording content destroys the digital signatures entirely. The result is not "manipulated" content in any digital sense; it is simply new content.

Voluntary adoption is the core structural challenge. Malicious actors will, by definition, not label their content. The system protects against accidental errors and manipulation by honest actors—not against those who deliberately intend to mislead.

These are not hypothetical weaknesses. Research cited in the International AI Safety Report for 2026 emphasizes that even state-of-the-art automated detection systems drop from near-perfect accuracy in the lab to a 40–50% error rate under real-world conditions—particularly in live video conferences, exactly the context the Arup attack exploited.

Removing C2PA metadata takes seconds. Building trust in a system that relies on voluntary use takes years.

EU Sets Deadline: August 2026

Where voluntary adoption reaches its limits, regulation steps in. The EU's AI Act sets a clear deadline: on August 2, 2026, Article 50(2) comes into full force, making machine-readable labeling of AI-generated content mandatory.

The requirements apply to providers of generative AI models and systems, who must implement robust, detectable labeling methods (watermarking, metadata, provenance certificates for text) before content is placed on the market. Deployers must ensure that users see a visible label on first exposure.
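What might machine-readable labeling look like at its simplest? The sketch below uses Pillow to embed a text chunk in a PNG marking it as AI-generated. The key names are invented for illustration; actual compliance would rely on standardized schemes such as C2PA manifests or IPTC metadata fields rather than an ad hoc key.

```python
# Sketch: embedding a minimal machine-readable AI label in a PNG.
# The keys are invented for illustration; real compliance would use a
# standardized scheme such as C2PA manifests or IPTC metadata fields.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

label = PngInfo()
label.add_text("ai_generated", "true")             # hypothetical key
label.add_text("generator", "example-model-v1")    # hypothetical key

Image.open("generated.png").save("generated_labeled.png", pnginfo=label)

# A deployer-side check before first exposure to the user:
with Image.open("generated_labeled.png") as img:
    if img.text.get("ai_generated") == "true":
        print("Show an 'AI-generated' notice to the viewer")
```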

Sanctions are not symbolic: the AI Act's overall ceiling is fines of up to 35 million euros or 7% of global turnover for the most serious violations, with breaches of the transparency obligations capped at 15 million euros or 3%.

A voluntary draft "Code of Practice" for labeling was published in December 2025, with the final version expected by June 2026. A proposal under the "Digital Omnibus" package would allow a limited six-month extension, to February 2027, for systems already on the market, but the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) have publicly opposed any extension.

For Norwegian and European businesses, this means that C2PA is no longer just good practice; it could become a concrete compliance tool with legal backing.

August 2024: EU AI Act enters into force
2024: Google, TikTok, YouTube, OpenAI, and Adobe adopt C2PA v2.1; AFP and BBC pilot the standard in election coverage
December 2025: EU publishes voluntary draft "Code of Practice" for AI labeling
June 2026: Final "Code of Practice" expected to be approved
August 2, 2026: Article 50(2) of the EU AI Act enters into force; machine-readable labeling of AI content becomes mandatory

What's Next for Microsoft

Microsoft has not yet publicly announced a unified launch strategy for C2PA integration across its product portfolio. Bing, LinkedIn, Azure, and Microsoft 365 are all potential distribution channels—and LinkedIn has already begun showing Adobe Content Credentials information for content produced with compatible tools.

The Azure AI Content Safety platform is operational and gives developers API tools for content filtering and real-time detection of synthetic material. This is where the practical infrastructure is being built: not in a single product launch, but by making the tools available to those who will build applications on top.
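As an illustration of what building on that infrastructure looks like, here is a sketch using the official azure-ai-contentsafety Python SDK's standard text-analysis call. The endpoint and key are placeholders, and the synthetic-media detection features mentioned above may be exposed through other operations; this shows only the general calling pattern.

```python
# Sketch: calling Azure AI Content Safety from Python.
# Endpoint and key are placeholders. This is the SDK's standard
# text-moderation call; the synthetic-content detection features named
# in the article may be exposed through other operations or APIs.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

response = client.analyze_text(AnalyzeTextOptions(text="Text to screen..."))
for result in response.categories_analysis:
    print(result.category, result.severity)  # severity per harm category
```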

For the media industry, advertisers, and businesses handling sensitive transactions, the implication is simple: the technology is on its way, regulators are setting deadlines, and the Arup case shows what happens when trust infrastructure is missing. A digital provenance system is only as strong as the proportion of serious actors who actually use it—but without it, the alternative is that human trust remains the weakest link in the chain.