Business risks of AI adoption require careful consideration: Darren Mead

With the first global AI Safety Summit recently held in the UK and the government’s preceding discussion papers on the capabilities and risks of AI, media headlines have ranged from the optimistic to the apocalyptic.

For businesses across all sectors, there are undoubtedly huge opportunities in the adoption of AI, but there are also significant weaknesses and risks, which need to be carefully considered.

What are LLMs?

Large Language Models (LLMs) are a form of generative AI. The language-modelling research behind them stretches back decades, but they gained huge public exposure with the launch of the ChatGPT tool in late 2022.

Darren Mead shares his expert insight

Essentially, LLMs are trained on vast amounts of internet data and generate new content based on the prompts set by the user. They offer a wide variety of business benefits, including automated content creation, workplace training, market intelligence, data analysis and customer service via chatbots or virtual assistants.

Accuracy and bias

As with most disruptive technologies, LLMs are in their relative infancy and are therefore prone to flaws and weaknesses. Firstly, there is no guarantee of accuracy, as the content produced is only as accurate as the data fuelling it. They can also “hallucinate” information, producing false or misleading responses.

Because biases are prevalent in much real-world data, LLM-generated content can also reflect things like gender or racial biases back to us by favouring certain cultural references over others, and can even produce harmful or offensive material.

Going forward, we’ll see an increasingly hybrid approach, where AI models based on machine learning are overlaid with expert human knowledge to develop fine-tuned, industry-specific models.

Data security and controls


It’s also vital to consider data security. Whilst including data in an LLM query will not generally result in it being incorporated into the model, it will be stored by the provider, and sensitive data in prompts could be used to develop future versions. Individuals and organisations therefore need to take great care to avoid including organisational, employee or client data in prompts.

A more immediate issue is that many companies do not currently have documented controls in place that take account of LLMs, which, as an evolving technology, present their own challenges. Such companies could be open to both deliberate and unintentional misuse by employees.

Future threats

As a relatively new technology, we also need to consider the potential future business threats of LLMs.

There are evolving ethical and legal issues in using generative AI, an example being the class action lawsuit Google is facing for unauthorised scraping of website data to train its AI systems, which plaintiffs contend violated their privacy and property rights. Depending on the outcome, this could lead to increased governance of LLMs in future.


According to the government’s discussion paper on security risks in generative AI, use of this technology is also highly likely to accelerate the frequency and sophistication of criminal activities such as scams, fraud, impersonation, ransomware and data harvesting.

Whilst being early adopters may offer certain attractions, businesses need to proceed with caution to be confident that they are not placing their organisation, employees or clients at risk, especially as industry-designed LLM solutions, trained on sector-specific data, will continue to evolve and multiply over time.

Darren Mead is Director of Risk at Progeny.
