Have No Fear: Embrace Generative AI But Understand The Risks

Generative artificial intelligence (AI) is probably the most significant disrupter of legal practice in decades.

Many forward-thinking law firms were already harnessing its power before the likes of ChatGPT hit mainstream headlines late last year.

Unlike earlier machine learning tools, such as those behind knowledge management systems, generative AI raises the game by facilitating access to vast amounts of knowledge (i.e. online content and digitally held content, both publicly available and private). Larger firms with the resources are already using generative AI for legal document automation, contract analysis, legal research, chatbots and streamlining eDiscovery processes.

For many in (and outside) the legal sector, the latest AI is an unknown and potentially quite terrifying development. You’d also be forgiven for remaining somewhat sceptical about the accuracy of the output you might get from generative AI.

But users and experts of generative AI are unanimous that the speed, power and accuracy of text produced by ChatGPT and its ilk are astonishing. Recognising this is, perhaps, an important first step to seriously considering its potential use in your legal practice. Importantly, that does not mean the output is faultless: the end results must always be checked critically for accuracy and omissions.

Risk factors

There are also associated risks to bear in mind. Generative AI is unregulated, and there are particular concerns around privacy, data protection and intellectual property. Note that the Information Commissioner’s Office (ICO) has recently warned business organisations not to be blind to the risks “in their rush to see opportunity”.

Hasty attempts to jump on the AI bullet train could lead to costly compliance breaches. The ICO is already reviewing whether key businesses have tackled privacy risks before introducing generative AI. It has also made clear it will take action where there is “risk of harm to people through poor use of their data”.

As Stephen Almond, the ICO’s executive director of regulatory risk, says: “Spend time at the outset to understand how AI is using personal information, mitigate any risks you become aware of, and then roll out your AI approach with confidence that it won't upset customers or regulators.”

Any regulation in the UK appears to be a long way off. In March this year, the government published a white paper revealing that there is no intention to introduce AI-specific legislation (unlike in the EU). “A heavy-handed and rigid approach can stifle innovation and slow AI adoption,” it says. Rather, it will be for existing regulators to take responsibility.

For now, firms would do well to get to know what generative AI can do for them and their clients. Identify and address the risks, and you won’t be left behind by the competition.

The ICO website (ico.org.uk) offers a raft of tools to help organisations deal with the risks associated with AI. Lawyers may also find useful the recently published statement from the Council of Bars and Law Societies of Europe on the use of AI in the justice system, which sets out its concerns.

Posted on 03.08.23