What If Big Tech's Deepfake Defences Fail? Implications for UK Online Safety Act Compliance

Deepfake detection technology is currently losing a high-stakes arms race against synthetic media generators, leaving UK digital evidence integrity in a state of forensic crisis. While the Online Safety Act 2023 mandates the removal of harmful content, it does not equip a barrister in the High Court to prove whether a specific video file has been manipulated by a diffusion model.
That scenario is no longer hypothetical. It is arriving in legal practice before the profession has the tools, or the training, to handle it properly.
Adam Mosseri, head of Instagram, closed 2025 with an unusually candid admission. Authenticity, he said, is "becoming infinitely reproducible." What made a creator's voice distinctive, the fact that it could not be faked, no longer holds. He was talking about content creators. He was also, whether he intended it or not, describing a forensic crisis.
Detection Is Losing the Arms Race
The honest picture from current research is uncomfortable. AI tools can detect deepfake images with around 97% accuracy in controlled conditions, according to a University of Florida study published this year. That sounds reassuring until you consider two things. First, controlled conditions are not real-world conditions. Second, the same study found that humans still outperform AI when it comes to detecting deepfake video. The medium most commonly used to fabricate evidence of human behaviour is also the medium where detection is weakest.
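A headline accuracy figure like 97% also deserves a base-rate check. If deepfakes are rare among the content a platform screens, even a detector that is right 97% of the time will flag far more genuine content than fake. The sketch below uses illustrative assumptions (97% sensitivity, 97% specificity, 1 in 1,000 uploads fake), not figures from the study cited above.

```python
# Base-rate sketch: why "97% accuracy" can still flood reviewers with
# false positives. All numbers here are illustrative assumptions.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(content is fake | detector flags it), via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Detector catches 97% of fakes, clears 97% of real content,
# and 1 in 1,000 uploads is actually a deepfake.
ppv = positive_predictive_value(0.97, 0.97, 0.001)
print(f"{ppv:.1%}")  # roughly 3% of flagged items are actually fake
```

On these assumptions, roughly 97 out of every 100 items the detector flags are genuine content wrongly accused, which is exactly the moderation-at-scale problem the compliance frameworks gloss over.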
The structural problem is that generative tools evolve faster than detection tools. Every time a detector learns to spot a particular spatial artefact or compression signature, the generators are updated to eliminate it. PXL Vision, one of the identity verification providers working in this space, has recently integrated research-derived detection techniques from the Idiap Research Institute, which represents genuine technical progress. We have not tested PXL Vision's implementation, and independent benchmarking across real-world conditions remains scarce. The broader point stands regardless: even the more sophisticated detection approaches remain reactive. They chase the last generation of fakes, not the current one.
The industry response has been to shift emphasis from detection to provenance. The Coalition for Content Provenance and Authenticity (C2PA) framework, now backed by Adobe, Microsoft, and others, embeds cryptographic metadata into content at the point of creation, creating a verifiable chain of custody. That is the right direction. The problem is provenance metadata is trivially stripped. A screenshot, a re-upload, a format conversion, and the chain is broken. For content that has circulated even briefly before anyone questions it, provenance data is often already gone.
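The fragility is easy to see in miniature. The toy model below is a simplified stand-in for C2PA, not the real specification: a "signed asset" is just pixel data plus a hash-based manifest, and a "screenshot" copies only the visible pixels.

```python
# Toy illustration of the provenance-stripping problem: metadata travels
# with the file, not with the image itself. Simplified stand-in for C2PA.
import hashlib

def sign_asset(pixels: bytes) -> dict:
    """Attach a provenance manifest at the point of creation."""
    digest = hashlib.sha256(pixels).hexdigest()
    return {"pixels": pixels,
            "manifest": {"issuer": "CameraVendor", "digest": digest}}

def screenshot(asset: dict) -> dict:
    """Re-capture copies only the visible pixels; the manifest is lost."""
    return {"pixels": asset["pixels"], "manifest": None}

def verify(asset: dict) -> bool:
    m = asset.get("manifest")
    return bool(m) and m["digest"] == hashlib.sha256(asset["pixels"]).hexdigest()

original = sign_asset(b"pretend image bytes")
assert verify(original)                   # chain of custody intact
assert not verify(screenshot(original))   # one re-capture and it is gone
```

The real C2PA manifest is cryptographically far richer than this, but the failure mode is the same: nothing binds the provenance record to content that has been re-captured or re-encoded.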
What the Online Safety Act Actually Requires
The Online Safety Act 2023 creates specific obligations for regulated platforms around deepfake content. Sections 181 to 186 address the sharing of intimate images without consent, including synthetic intimate images, and the Act treats AI-generated intimate content as equivalent to real imagery for these purposes. That is a significant legislative step.
Beyond intimate images, the Act's broader illegal content duties and the misinformation and disinformation provisions in the safety duties for categorised services create real compliance pressure. Large platforms with significant UK user bases are expected to have systems capable of identifying and acting on AI-generated content that violates these provisions. Ofcom's enforcement powers are not theoretical. The question is whether the technical means to discharge those obligations actually exist at scale.
The answer, right now, is: not reliably. Platforms cannot consistently distinguish AI-generated content from authentic content, particularly video, at the speed and volume required for automated moderation. They know this. Their own researchers know this. The compliance frameworks assume a detection capability that the underlying technology has not yet delivered.
For regulated platforms, that creates a liability gap. A platform that deploys detection systems it knows are unreliable, without disclosing that fact to Ofcom or adjusting its safety policies accordingly, is in a materially different position to one that is transparent about the limits of current technology and compensates with human review workflows and user reporting mechanisms.
Fraud Act Exposure and the Identity Verification Problem
The deepfake problem has a second legal dimension that receives less attention than it deserves. Synthetic identity fraud, where AI-generated faces and documents are used to pass identity verification checks during financial onboarding, is accelerating. PwC's 2026 fraud analysis identifies synthetic identity creation as one of the most significant emerging fraud vectors. The Fraud Act 2006 is the obvious instrument, but prosecuting synthetic identity fraud requires proving the identity was fabricated, which loops back to the same detection problem.
For financial services firms subject to the Money Laundering, Terrorist Financing and Transfer of Funds Regulations 2017, this creates a Know Your Customer dilemma that current guidance does not fully address. FCA-regulated firms are expected to conduct adequate customer due diligence. If a sophisticated deepfake passes a certified biometric check, and the firm had no reasonable means of detecting the fraud, the regulatory position is uncertain. That uncertainty will not resolve itself without either better detection technology or clearer regulatory guidance on what "adequate" means when the technology has known limits.
The Monday Morning Test for Lawyers
If you act for a platform with UK users, or advise clients in financial services, there are things you should be doing now, not when the first significant Ofcom enforcement action lands.
Start by auditing what your client's content moderation or identity verification systems actually claim to detect, and what independent evidence supports those claims. Vendor documentation is not independent evidence. A system marketed as detecting deepfakes with high accuracy needs scrutiny: accuracy on what dataset, at what video resolution, against what generation of synthetic content, and tested by whom?
If you are in contentious practice, build deepfake authentication into your evidence handling procedures. When digital content is material to a dispute, ask early whether provenance metadata exists, whether it has been preserved, and whether the original file or a copy is being produced. Courts are not yet consistently asking these questions. That will change.
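One concrete step that costs almost nothing: record a cryptographic hash of the file as received, so any later copy can be checked against it. The sketch below is a minimal illustration, not a substitute for proper forensic imaging or chain-of-custody procedure.

```python
# Minimal evidence-preservation step: fingerprint a file on receipt.
# Any later alteration, including silent re-encoding, changes the hash.
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def fingerprint(path: str) -> dict:
    """Return a SHA-256 digest and a UTC timestamp for the file as received."""
    data = Path(path).read_bytes()
    return {
        "file": path,
        "sha256": hashlib.sha256(data).hexdigest(),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
```

Comparing the recorded digest against a later copy will not tell you whether the content is a deepfake, but it will tell you, conclusively, whether the file you are looking at is the file your client originally sent.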
If you are advising on compliance with the Online Safety Act, the absence of reliable detection technology is not a defence. It is a risk factor that should appear explicitly in your client's risk assessment, with documented mitigation steps. Ofcom will look at what the platform knew about the limits of its systems and what it did about it.
The Uncomfortable Conclusion
Mosseri was right, even if his concern was for creators rather than courts. Authenticity is no longer a property you can assume. That should worry anyone whose professional work depends on trusting that a document, a recording, or an image is what it purports to be.
The technology to solve this problem reliably does not yet exist. The law, to its credit, has moved faster than the technology. The Online Safety Act creates real obligations. The Fraud Act creates real exposure. But obligations without reliable enforcement tools produce compliance theatre rather than actual safety.
The profession needs to understand this gap, not because lawyers are expected to be forensic scientists, but because the clients who will be harmed by it are already appearing in our practices. Technical literacy here is not optional. It is part of competent advice.
Lextrapolate helps law firms and legal teams develop the technical understanding to advise confidently on AI-related matters. If your practice is grappling with deepfake evidence or Online Safety Act compliance, we can help.
Sources
1. PXL Vision integrates deepfake detection technique from research with Idiap
2. Machines spot deepfake pictures better than humans, but people outperform AI in detecting deepfake videos
3. Deepfake Threats in 2026: Can We Detect What's Fake?
4. The Era of Deepfakes and Synthetic Identities
5. Will 2026 be the year deepfakes go mainstream?
Chris Jeyes
Barrister & Leading Junior
Founder of Lextrapolate. 20+ years at the Bar. Legal 500 Leading Junior. Helping lawyers and legal businesses use AI effectively, safely and compliantly.