Skip to main content
All articles

What If Your AI Assistant Were Your Biggest Insider Threat?

Cyber & Data9 March 20267 min read
Share
What If Your AI Assistant Were Your Biggest Insider Threat?

In 2026, a UK law firm’s greatest security vulnerability isn't a phishing email; it's the autonomous AI agent sitting inside the Document Management System. While the SRA mandates robust risk controls, the transition from passive chatbots to active agents has left a governance gap that traditional perimeters cannot close.

That is the position many law firms are now in with their AI agents. Not hypothetically. Right now.

This article is a thought experiment, but only just. The technology is real. The risks are real. The governance frameworks are lagging badly behind both.

From chatbot to autonomous agent: a bigger shift than it looks

Most law firms started their AI journey with a chatbot. Ask a question, get an answer. The system has no persistent memory, no access to live files, no ability to act. The security perimeter barely changes.

AI agents are different in kind, not just degree. An agent can be given a task, access the relevant files, draft the output, send the email, update the matter management system, and log the action, all without a human in the loop at each step. Some agents can browse the web, execute code, and call external APIs. They do not just answer questions. They act.

The International AI Safety Report 2026 identifies this autonomy as a central emerging risk. As AI systems gain the ability to take sequences of actions in pursuit of goals, the failure modes multiply. A poorly instructed agent does not just give a wrong answer. It does the wrong thing, repeatedly, at scale, before anyone notices.

For a law firm, the consequences of that are not abstract. Client data is processed. Privilege may be waived. Regulatory obligations under the Data Protection Act 2018 may be triggered. And the audit trail, if anyone thought to create one, may be the only thing standing between the firm and an ICO enforcement notice.

The insider threat model: why AI agents need to be treated like people

Security professionals think about insider threats in a particular way. The risk is not just the malicious employee. It is the well-intentioned one with too much access and too little oversight. The contractor who has credentials they no longer need. The junior who can see files that are nothing to do with their work.

AI agents map almost perfectly onto that threat model, and then exceed it.

An agent granted broad file-system access becomes, from a security perspective, an insider. If that agent is manipulated through prompt injection, where a malicious instruction is embedded in a document or email the agent processes, it can be redirected to exfiltrate data, forward privileged communications, or take actions entirely contrary to its operator's intentions. The 2026 Vanta report identifies prompt injection as one of the highest-priority AI security risks this year. The Hexnode analysis reaches the same conclusion.

The firm does not need to be hacked in the traditional sense. The agent does the work from the inside.

UK law offers no comfortable safe harbour here. Under the Data Protection Act 2018, a firm that deploys an agent with access to personal data bears full controller liability for what that agent does with it. The ICO's guidance on automated processing makes clear that accountability sits with the organisation, not the tool. If the agent processes data unlawfully, the firm processed data unlawfully.

Negligence principles compound this. A firm that deploys an autonomous agent without adequate access controls, audit logging, or human oversight mechanisms may struggle to argue it discharged its duty of care to clients whose data was compromised. The agent's autonomy does not diminish the firm's responsibility. If anything, it heightens it.

The governance gap is not a technology problem

Only 44% of organisations have a formal AI policy, according to Vanta's 2026 security research. That figure should alarm any managing partner who has already approved an AI agent deployment, or is about to.

A formal AI policy is not bureaucracy for its own sake. It answers the questions that matter when something goes wrong. Which systems can the agent access? Who authorised that access, and when? What does the agent log? Who reviews those logs? What is the escalation path if the agent behaves unexpectedly? Who has authority to shut it down?

In employment law terms, we would never on-board a human employee without answering those questions. The Cybersecurity Insiders 2026 report frames this precisely: AI agents are beginning to occupy roles in organisational hierarchies without the governance structures that apply to human occupants of those roles. The org chart is changing faster than the policies that govern it.

For law firms, the regulatory exposure sits across multiple regimes simultaneously. The SRA's transparency and supervision requirements. ICO accountability principles under UK GDPR. FCA conduct rules for firms in scope. Common law duties to clients. None of these pause while the firm gets its AI governance documentation in order.

What this means

If your firm is deploying or evaluating AI agents, these are the questions to answer before the next deployment sign-off.

Access controls. What can the agent actually reach? Start with the principle of least privilege. An agent that drafts client correspondence does not need access to HR files or the firm's financial systems. Map the access and cut everything that cannot be justified.

Audit logging. Can you reconstruct what the agent did, when, and why? If not, you have no basis for investigating an incident and no evidence to produce to a regulator. Logging is not optional.

Human oversight checkpoints. Which categories of action require human review before execution? Sending external communications and accessing client files should, at minimum, require a human in the loop. Automate the drafting; supervise the sending.

Policy documentation. Write down what the agent is authorised to do. This is both a governance control and a legal protection. If you cannot articulate the scope of authorisation, you cannot credibly argue the firm exercised reasonable oversight.

Incident response. Do you have a procedure for what happens when an agent behaves unexpectedly? Who gets called? Who has the authority to suspend the system? This needs to be decided before the incident, not during it.

The perimeter has moved

Traditional cybersecurity assumed a clear boundary between the organisation and the outside world. The job was to keep threats from crossing that line. AI agents dissolve that logic. The potential threat is already inside, already credentialed, already acting.

That is not a reason to avoid the technology. The productivity case for well-governed AI agents in legal practice is strong. Document review, drafting, research, matter management: the efficiency gains are genuine. But the security model has to catch up with the capability.

Firms that treat AI agents as they would a new member of staff, with defined access, supervision, accountability, and a clear record of what they are authorised to do, will be in a defensible position when something goes wrong. And in complex systems, something always eventually goes wrong.

Firms that treat agents as just another software tool and grant them access accordingly will find that the insider threat framework applies to them whether they designed for it or not.

The paralegal without a contract, working through the night, with keys to everything. Worth thinking about.

Sources

  1. 1International AI Safety Report 2026: Emerging Risks from General-Purpose AI
  2. 2Top AI Security Risks in 2026 - Hexnode
  3. 3Top 6 AI Security Trends for 2026 - Vanta
  4. 42026 is the Year AI Attacks Your Enterprise and Your Org Chart - Cybersecurity Insiders

Stay ahead of the curve

Get practical AI insights for lawyers delivered to your inbox. No spam, no fluff, just the developments that matter.

CJ

Chris Jeyes

Barrister & Leading Junior

Founder of Lextrapolate. 20+ years at the Bar. Legal 500 Leading Junior. Helping lawyers and legal businesses use AI effectively, safely and compliantly.

Get in Touch
AI agentscybersecurityinsider threatlaw firm securitydata protectionAI governanceautonomous AI