A growing body of research has shown that artificial intelligence chatbots can be tricked into performing harmful tasks, according to British regulators who are recommending businesses against using them in their operations.

The National Cyber Security Centre (NCSC) of Britain wrote in a pair of blog entries on Wednesday that researchers were still grappling with the security issues that could arise from algorithms known as large language models, or LLMs, that can provide human-sounding interactions.

The chatbots powered by AI are already being used, and some people envisage them replacing not only internet searches but also customer care jobs and sales calls.

According to the NCSC, there could be risks involved, especially if such models were integrated into other organizational business operations. Researchers and academics have regularly discovered ways to trick chatbots into ignoring their own built-in safeguards or executing rogue orders.

For instance, if a hacker carefully crafted their query, an AI-powered chatbot used by a bank might be duped into carrying out an unauthorized transaction.

"Organisations building services that use LLMs need to be careful, in the same way they would be if they were using a product or code library that was in beta," the NCSC stated in one of its blog entries, alluding to beta software releases.

"They might not fully trust that product and they might not allow it to handle transactions on the customer's behalf. To LLMs, similar caution should be used.

Authorities from all over the world are attempting to understand the emergence of LLMs like OpenAI's ChatGPT, which companies are integrating into a variety of services including sales and customer support. Authorities in the US and Canada claim they have observed hackers embracing the technology, which is another indication of the security ramifications of AI that are still being explored.

According to a recent Reuters/Ipsos study, many corporate employees use ChatGPT for simple tasks like composing emails, summarizing papers, and conducting exploratory research.

A quarter of those surveyed did not know whether their employer allowed the use of the technology, while 10% of respondents stated their superiors explicitly forbade the use of external AI technologies.

The rush to incorporate AI into business procedures, according to Oseloka Obiora, chief technology officer at cybersecurity company RiverSafe, would have "disastrous consequences" if business executives didn't put in place the required controls.

Senior leaders should reconsider before embracing the newest AI advances, he suggested. "To ensure the organization is safe from harm, assess the benefits and risks and implement the necessary cyber protection."

You Might Also Like

Leave A Comment

Stay connected

Newsletter

Calendar

Advertisement