Agentic AI Browsers and Prompt Injection: What Legal Professionals Need to Know

Between October 2025 and January 2026, security researchers demonstrated that Perplexity’s Comet browser could be hijacked to exfiltrate data in under four minutes without a single user click. This vulnerability, known as prompt injection, allows malicious actors to embed instructions inside web content that an agentic AI browser cannot distinguish from its owner's commands.
Security researchers at Zenity, SquareX, and LayerX demonstrated variants of this attack against Perplexity's Comet browser between October 2025 and January 2026. The underlying vulnerability is not specific to one product. It is structural to how agentic AI browsers work.
What Prompt Injection Actually Is
A traditional browser executes your instructions. An agentic AI browser reasons about what you probably want and then acts. That reasoning capability is also its weakness.
Prompt injection works by embedding instructions inside content the AI is processing: a web page, an email, a calendar invite, a document retrieved from Google Drive. The AI cannot reliably distinguish between data it is reading and instructions it should follow. A malicious actor plants a hidden instruction inside otherwise normal content. The AI reads it, blends it with the user's stated intent, and acts on the composite result.
Researchers demonstrated this with zero user interaction beyond opening an invite or visiting a URL. In the CometJacking variant documented by LayerX, a single malicious URL was sufficient. In the PleaseFix variant demonstrated by Zenity, the attack vector was a calendar event. Google Drive wipes triggered from emails were also demonstrated. Each exploit took the AI's tendency to reason about user intent and used it to smuggle in attacker intent.
Perplexity has disputed some of the specific claims and issued patches, including a fix for file access vulnerabilities in January 2026 and a silent disable of the MCP API following November 2025 disclosures. We have not independently tested Comet or verified the precise technical claims of any of the security firms involved. Several of them are competitors with commercial interests in publicising AI browser vulnerabilities. That said, the underlying concept has been independently confirmed by multiple research teams using different attack vectors. The structural risk is real.
The UK Legal Exposure
Law firms are not peripheral targets. They hold privileged communications, commercially sensitive documents, personal data across matters, and access credentials to court systems and client platforms. An agentic AI browser operating in that environment and capable of being hijacked is a serious exposure.
Three areas of UK law are directly engaged.
The UK Computer Misuse Act 1990. A prompt injection attack that causes an AI agent to access systems or data without authorisation would constitute an offence under section 1 (unauthorised access) and potentially section 3 (unauthorised modification) of the CMA. The attacker commits the offence. But firms deploying agentic tools that can be redirected in this way should be thinking about whether their deployment decisions adequately manage foreseeable risk. Regulatory bodies and courts will not be sympathetic to arguments that a firm did not know its AI could be weaponised if the security research was in the public domain.
UK GDPR. Exfiltration of personal data from emails, calendars, or documents processed by an agentic browser triggers the breach reporting obligations under Article 33 UK GDPR: notification to the ICO within 72 hours where there is a risk to individuals' rights and freedoms. If the data includes special category data (health, political opinions, legal proceedings in some contexts), the threshold for notification is lower and the consequences of getting it wrong are higher. Fines under UK GDPR can reach four percent of global annual turnover. A firm that deployed an AI browser without adequate security assessment and then suffered a breach would find it difficult to argue it had implemented appropriate technical measures under Article 32.
Emerging product liability. The UK is watching developments in EU AI Act enforcement and product liability reform closely. Where an AI system is deployed in a professional context and causes harm through a foreseeable security flaw, the direction of travel in both UK and EU frameworks is towards holding developers accountable for inadequate security by design. That trajectory matters when assessing which agentic tools to deploy and on what terms.
The Human-in-the-Loop Question
Firms rushing to adopt agentic AI for research, document retrieval, and case preparation should ask a specific question before deploying: at what point does a human verify what the agent is about to do?
Agentic AI is genuinely useful. The ability to instruct a browser to retrieve case law, compile a chronology from disclosed documents, or cross-reference regulatory filings across multiple sources saves real time. The problem is not the automation itself. The problem is automation without checkpoints.
Human-in-the-loop protocols are not a nice-to-have. For any agentic task that involves accessing external content, especially content controlled or influenced by third parties, verification before execution is the minimum standard. That means the agent proposes; the human approves. Not just for high-stakes tasks. For any task where the content being processed could include adversarial instructions.
Practically, this means:
- Do not grant agentic browsers (Perplexity Comet, OpenAI Atlas etc) write access to live systems or credential stores without explicit, step-by-step approval flows.
- Treat any external content an AI agent processes (web pages, emails, documents, calendar entries) as potentially adversarial.
- Review vendor security documentation before deployment. If it does not address prompt injection, ask why.
- Include agentic AI tools within your firm's data protection impact assessment process under Article 35 UK GDPR. Autonomous agents processing personal data on behalf of clients are high-risk processing.
- Check your cyber insurance policy wording. Some policies have not caught up with agentic AI as a distinct attack surface.
The Test
If someone in your firm is already using an agentic AI browser, find out today. Not to shut it down, but to ask three questions: what data can it access, what actions can it take without explicit approval, and what happens if it is fed a malicious instruction mid-task?
If you cannot answer those questions, you do not have a security posture. You have an assumption.
The structural vulnerability in agentic AI browsers is that they collapse the distinction between data and instruction. That is, at some level, what makes them powerful. It is also what makes them dangerous in an environment where not all the content they process is benign. Legal professionals working with confidential and privileged material cannot afford to treat that as an abstract concern. The research is out. The attack vectors are documented. Reasonable steps are now a higher bar than they were twelve months ago.
The tools will improve. But improved tools deployed without governance frameworks are still a liability. Build the framework first.
Sources
- 1Experts say Perplexity's AI Comet browser can be hijacked to steal passwords
- 2Security gap in Perplexity's Comet browser exposed users to system-level attacks
- 3CometJacking: How One Click Can Turn Perplexity's Comet AI Browser Against You
- 4SquareX and Perplexity Quarrel Over Alleged Comet Browser Vulnerability
Stay ahead of the curve
Get practical AI insights for lawyers delivered to your inbox. No spam, no fluff, just the developments that matter.
Chris Jeyes
Barrister & Leading Junior
Founder of Lextrapolate. 20+ years at the Bar. Legal 500 Leading Junior. Helping lawyers and legal businesses use AI effectively, safely and compliantly.
Get in TouchMore from Lextrapolate

What if your AI workflow was the security breach? The hidden risks of LLM infrastructure
As professionals build custom AI tools and workflows, the real security risk isn't the model. It's the infrastructure connecting everything together.

What if your AI workflow was the security breach? The hidden risks of LLM infrastructure
As professionals build custom AI tools and workflows, the real security risk isn't the model. It's the infrastructure connecting everything together.

What If Your AI Assistant Were Your Biggest Insider Threat?
Law firms are deploying AI agents with file-system access. But are they treating those agents as trusted colleagues or potential security risks?