AI Legal News Roundup – 25 February 2025


Key Developments in Artificial Intelligence and Legal Practice

This update looks at developments at the intersection of artificial intelligence (AI) and the legal world. In general AI news, the biggest story is surely the release of Anthropic’s Claude 3.7 Sonnet, which introduces a new “extended thinking” mode.

1. UK Government Faces Backlash Over AI Copyright Proposals

The UK government’s recent proposal to allow AI companies to utilise copyrighted material without explicit permission has ignited significant opposition from the creative sector. Artists and authors argue that this move threatens their livelihoods by enabling tech firms to exploit their work without compensation. The government’s suggestion includes a “text and data mining exemption,” permitting AI developers to access and use creative content unless creators opt out—a process critics deem impractical and unjust. This debate underscores the tension between fostering AI innovation and safeguarding intellectual property rights.

The Guardian

2. Nvidia Challenges EU Antitrust Investigation

Nvidia has initiated legal proceedings against European Union antitrust regulators following their decision to investigate the company’s acquisition of AI startup Run:ai. Nvidia contends that the regulators’ actions contravene a prior court ruling that limits their authority over minor mergers. The outcome of this lawsuit could redefine the scope of EU regulatory power concerning mergers and acquisitions in the technology sector.

Reuters

3. Divergent Approaches to AI in Legal Recruitment

The Bar Council has prohibited the use of generative AI tools, such as ChatGPT and Google’s Gemini, in pupillage applications submitted through its online gateway. Applicants must now confirm that their submissions are original and free from AI assistance, aiming to ensure authenticity and integrity in the application process. In contrast, several law firms encourage the responsible use of AI to enhance applications, provided it does not compromise individuality. This dichotomy highlights the legal profession’s ongoing deliberation over AI’s role in recruitment and practice.

The Times

4. AI ‘Hallucinations’ Pose Challenges in Legal Proceedings

Recent incidents have highlighted the risks associated with AI-generated “hallucinations,” where AI systems produce false or misleading information. In one notable case, attorneys faced potential sanctions for submitting fictitious, AI-generated case citations in a lawsuit. This underscores the necessity for legal professionals to meticulously verify AI-generated content to maintain ethical standards and avoid the dissemination of inaccurate information.

Reuters

5. Linklaters Evaluates AI Competency Through Legal Exams

Prominent law firm Linklaters is assessing the proficiency of artificial intelligence in performing legal tasks by subjecting AI models to legal exams. The firm developed the LinksAI English law benchmark to test large language models’ ability to answer complex legal questions. While improvements have been noted, experts stress that AI should not be used to give legal advice on English law without human supervision, owing to occasional inaccuracies and a lack of nuance. This initiative reflects the legal industry’s cautious yet proactive approach to integrating AI into practice.

The Times

AI consultancy for lawyers and law firms

If you want to explore the intersection between your legal practice and AI – get in touch.
