Our Approach to AI

What is AI?

Artificial Intelligence (“AI”) can be a divisive subject. To some, it is a panacea that will enable all diseases to be cured and humankind to be freed from all manner of drudgery to enjoy a fulfilling life in the sun. To others, it heralds the doom of civilisation, with mass unemployment followed by Terminators and Skynet-controlled robots turning on their human makers.

The reality will hopefully be closer to the former than the latter. The potential of AI is enormous, and no technology has advanced this quickly, with progress seemingly visible in real time. The industrial revolution took decades to permeate, as did the use of computers; it took five to ten years for the internet to really take off, and smartphone adoption was quicker still. Now, with AI, labs and developers worldwide are racing to produce the leading frontier model. Progress is announced on X.com (formerly Twitter), proof is videoed, code is uploaded to GitHub, and rivals copy or improve on it and announce their own triumphs through the same channels. It is only a little over two years since ChatGPT became widely known, and the capabilities of the new reasoning models now emerging are vastly superior.

Yet in truth, AI is a tool like any other, just one for which the boundaries and true capabilities have yet to be fully defined. It is the latest step in a process of machine learning that began a long time ago but which has exploded in recent years with the advent of large language models (“LLMs”, not in the usual academic legal sense) and Generative Pre-trained Transformers (the “GPT” in ChatGPT). Together these describe AI that has been trained to recognise patterns in language by ingesting and learning from very large volumes of textual data, largely material from the internet. AI is not intelligent as such, yet, but it is very good at recognising patterns and, increasingly, relationships. Understanding that AI does not itself currently understand goes a long way towards recognising its limitations.
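To make the “pattern recognition, not understanding” point concrete, here is a deliberately simplified sketch in Python. It is a toy word-counting predictor invented purely for illustration, not the transformer architecture that actually underpins GPT models: it merely counts which word tends to follow which in a tiny sample of text, and predicts the next word on that basis alone.

    from collections import Counter, defaultdict

    # Toy illustration only: "learn" which word tends to follow which,
    # purely by counting patterns in a tiny sample of text.
    sample_text = (
        "the court held that the claim failed "
        "the court held that the appeal succeeded "
        "the court found that the claim was dismissed"
    )

    # For each word, count how often each other word follows it.
    follow_counts = defaultdict(Counter)
    words = sample_text.split()
    for current_word, next_word in zip(words, words[1:]):
        follow_counts[current_word][next_word] += 1

    def predict_next(word):
        # Return the most frequently observed follower of `word`:
        # pure pattern matching, with no grasp of what the words mean.
        followers = follow_counts.get(word)
        return followers.most_common(1)[0][0] if followers else None

    print(predict_next("court"))  # prints "held" (seen twice, versus "found" once)

Real LLMs work with vastly larger datasets, far richer statistical relationships and billions of parameters, but the underlying principle of predicting what plausibly comes next, rather than understanding it, is the same, which is why their output always needs checking.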

What does AI mean for legal practice?

Some lawyers may worry that the days of their profession are numbered. The reality is likely not as stark.

There are many tasks undertaken by (usually junior) lawyers or support staff that don’t really engage the legal brain. Reviewing vast quantities of documents or cross-referencing one thing against another is hardly the height of strategic thinking. Work of this kind can ultimately be dealt with adequately by a suitable AI model.

Detailed tactical thinking, and consideration based on the nuances of human behaviour, rather less so.

There have been firms whose business model is billing large fees on the back of large numbers of paralegals spending hours on relatively mundane tasks. This is not fulfilling for the staff involved, takes a long time, can result in errors, and inflates the bill for the client or opponent. There is no particular reason why clients should have to pay for this.

Even senior counsel in high-profile litigation may find themselves having to consider and cross-reference volumes of documents (or have a junior or devil do it for them). Again, this time could be shortened. 

Still more worthy of consideration are the multitude of business and administrative tasks required of any legal business: marketing, sales, strategy, onboarding, file and matter administration, customer service and updating, time management, billing… Many of these are repetitive functions that could be streamlined.

Overall, what I expect is that over the next five years, as AI develops further and matures, the major players in the legal industry will increasingly embrace it. They will be able to handle more matters, more smoothly and at lower cost to their clients. In time, those holding the purse strings will expect firms and chambers to have adapted to the new world and adopted solutions that improve the workflow.

There are challenges and risks. Foremost among the challenges is mistrust and fear among staff and principals in legal businesses. We have all had bad experiences with technological innovations, delayed website builds and the like. Add to that a more general fear of computers taking our jobs, and it can be difficult to persuade people to embrace new ways of working. Uppermost among the risks is the leakage of data and confidential information through improper data-sharing with new tools.

Yet these issues can be faced down and dealt with. There has always been resistance from senior lawyers to change – but lawyers didn’t always use computers, or store their client information on servers. Nor did they carry out research online rather than in the library. In time the debate over AI will fade and we will be left with law firms that are more efficient and effective than before, spending more time on quality work for clients and less time surrounded by lever arch files in the basement.

So what is required?

  • An honest appraisal of what functions we carry out in our legal businesses, and the tasks involved. You can only improve processes if you know what processes you follow. 
  • A further honest – perhaps painful – appraisal of relationships within firms and attitudes of staff and management to technological change and new ways of working. If you have staff who resist change and will actively try to defeat it, work needs to be done. 
  • We need to address training, both within firms and within the professions. Repetitive paralegal work may prove not to be viable, but it is a poor substitute for proper training and a grounding in professional principles anyway. There will be new roles arising from the use of new technology. Training is also required on how to use AI and what its limitations are, and on the increased threat from hostile actors with access to AI (it is no longer hard to fake or emulate voices, or to generate good-quality material for use in a social-engineering phishing attack).
  • Consider areas where AI may have a high impact but which carry limited risk for the firm. Strategic and business research, sales and marketing may be good places to start, then customer service and CX. Finally, when it is safe to do so (and by which time the AI will be more advanced still), move on to areas of the core legal service.
  • From the outset and as you go, ensure you are clear on your organisation’s AI and information technology policies. They may need revision and reassessment as you proceed. 
  • Engage a consultant who truly understands your business and your legal and regulatory imperatives, and who is sufficiently clued up on the AI and tech side of things. You probably don’t have the time to do all the research required, and pure tech people sometimes implement their preferred solution without an adequate appreciation of your needs and situation.
  • Above all, start with your staff. It is they who will have to work with your new systems, they who know how your processes work at the moment, and they who will determine whether the change succeeds or fails. With them on side you will be in a far stronger position than if you simply buy a system in and try to impose it on people who fear that, if it works, they will lose their jobs.
