Balancing Large Language Model Adoption With Robust API Security

The popularity of Large Language Models (LLMs) has prompted an unprecedented wave of interest and experimentation in AI and machine learning solutions. No longer limited to sporadic background research and writing assistance, LLMs have matured to the point where purpose-built solutions are being used within specific workflows to solve genuine business problems.

Industries such as retail, education, technology, and manufacturing are using LLMs to create innovative business solutions, delivering the required tools to automate complex processes, enhance customer experiences, and obtain actionable insights from large datasets.

APIs play a central role in democratising access to LLMs, offering a simplified interface for incorporating these models into an organisation’s applications, and for LLMs to communicate with each other. They frequently have access to a diverse library of sensitive data, automating the collection of information – in some cases personally identifiable information (PII) – that enables LLMs to provide tailored business solutions to meet specific needs.
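
To make that central role concrete, below is a minimal sketch of what such an integration typically looks like in Python. The endpoint URL, model name, header, and response field are hypothetical placeholders rather than any specific vendor's API; the point is that the credential and every prompt – which may contain PII – pass through this single interface, making it exactly the surface that must be secured.

```python
import os

import requests  # third-party HTTP client

# Illustrative placeholders -- not a specific vendor's endpoint or schema.
LLM_ENDPOINT = "https://llm.example.com/v1/chat"
API_KEY = os.environ["LLM_API_KEY"]  # keep credentials out of source code


def ask_llm(prompt: str) -> str:
    """Send a prompt to a hypothetical LLM API and return its reply."""
    response = requests.post(
        LLM_ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "example-model",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["reply"]  # response field is an assumption
```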

API Security Must Be a Key Consideration

Whether during LLM development or when using APIs to integrate multiple LLMs into existing technology stacks or applications, the effectiveness of the result depends entirely on the security posture of each API that ties the models together.

With organisations using multiple purpose-built LLMs that require numerous APIs, the lack of a robust API security monitoring and remediation strategy can have a snowball effect. It can expose vulnerabilities that may never have been considered, and leave APIs, and the data they handle, dangerously exposed to bad actors.

Before thinking about how to automate tasks, create content, and improve customer engagement, businesses must take a proactive stance towards API security throughout the entire lifecycle of an LLM. This includes:

  • Design and development: Without a proactive approach to API security, new vulnerabilities can be introduced.
  • Training and testing: Developers must anonymise and encrypt training data, and use adversarial testing to simulate attacks and identify vulnerabilities; a minimal anonymisation sketch follows this list.
  • Deployment: If secure deployment practices are not followed, unsecured APIs can be exploited by attackers to gain unauthorised access, manipulate data, or disrupt services.
  • Operation and monitoring: Without continuous monitoring, threats may go undetected, allowing attackers to exploit vulnerabilities for extended periods.
  • Maintenance and updates: Failure to apply API security patches and to undertake regular security audits can leave APIs vulnerable to known exploits and attacks.
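
As one example of the training-and-testing point above, here is a minimal sketch of regex-based anonymisation applied to records before they are used for training. The patterns are illustrative assumptions; production pipelines would rely on dedicated PII-detection tooling and combine redaction with encryption at rest.

```python
import re

# Minimal, illustrative PII patterns -- real pipelines would use dedicated
# PII-detection tooling rather than a handful of regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}


def anonymise(record: str) -> str:
    """Replace detected PII with typed placeholders before training."""
    for label, pattern in PII_PATTERNS.items():
        record = pattern.sub(f"<{label}>", record)
    return record


print(anonymise("Contact Jane at jane.doe@example.com or +44 1234 567890"))
# -> Contact Jane at <EMAIL> or <PHONE>
```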

The OWASP Top 10 for LLMs

Businesses are always evaluating emerging technologies with a view to improving operational efficiency. As the number of AI-enabled tools – and APIs – within enterprises proliferates, the security of LLMs is in the spotlight like never before. At the same time, cyber attackers are probing for new ways to compromise LLMs and gain access to an organisation's crown jewels: data that can be used to mount further attacks.

As a result, development teams should pay close attention to the Open Worldwide Application Security Project (OWASP) Top 10 for Large Language Model Applications. The list is continuously updated to reflect the most pertinent application security threats, and below I have detailed how the latest vulnerabilities apply to the development of LLMs.

  • Prompt Injection: Through unsecured APIs, hackers manipulate LLM input to cause unintended behaviour or gain unauthorised access. For example, if a chatbot API allows user inputs without any filtering, an attacker can trick it into revealing sensitive information or performing actions it was not designed to do (the first sketch after this list shows basic input screening).

  • Insecure Output Handling: Without output validation, LLM outputs may lead to downstream security exploits, including code execution that compromises systems and exposes data. APIs that deliver these outputs to other systems must therefore ensure the outputs are safe and contain no harmful content (the second sketch after this list shows one approach).

  • Training Data Poisoning: Training data poisoning involves injecting malicious data during the training phase to corrupt an LLM. APIs that handle training data must be secured to prevent unauthorised access and manipulation. If an API allows training data from external sources, an attacker could submit harmful data designed to poison the LLM.

  • Denial of Service: LLM Denial of Service (DoS) attacks overload LLMs with resource-heavy operations, causing service disruptions and increased costs. APIs are the gateways for these requests, making them prime targets for DoS attacks (the third sketch after this list shows per-client rate limiting).

  • Supply Chain Vulnerabilities: Developers must ensure that APIs interact only with trusted, secure third-party services and external datasets; otherwise, APIs that integrate third-party LLMs could be compromised.

  • Sensitive Information Disclosure: Failure to protect against disclosure of sensitive information in LLM outputs can result in legal consequences or a loss of competitive advantage.

  • Insecure Plugin Design: LLM plugins that process untrusted inputs or have insufficient access control risk severe exploits such as remote code execution. APIs that enable plugin integration must ensure new vulnerabilities are not introduced.

  • Excessive Agency: APIs that grant LLMs the ability to act autonomously must include mechanisms to control those actions. Without such controls, autonomous behaviour can jeopardise reliability, privacy, and trust (the final sketch after this list shows an action allow-list).

  • Overreliance: Failing to critically assess LLM outputs can lead to compromised decision making, security vulnerabilities, and legal liabilities. APIs that deliver LLM-generated outputs to decision-making systems must ensure these outputs are verified and validated.

  • Model Theft: Unauthorised access to proprietary LLMs risks model theft, loss of competitive advantage, and dissemination of sensitive information. APIs that provide access to the LLM must be designed to prevent excessive querying and reverse-engineering attempts; the rate-limiting sketch below helps here too.
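
To ground a few of these risks in code, the sketches below are minimal Python illustrations, not complete defences. First, prompt injection: a chatbot API can screen user input against a deny-list before it ever reaches the model. The patterns here are assumptions for illustration; real deployments layer such filters with model-side guardrails.

```python
import re

# Illustrative deny-list of injection markers -- an assumption for this
# sketch, not a complete defence against prompt injection.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (your|the) system prompt", re.IGNORECASE),
]


def screen_prompt(user_input: str) -> str:
    """Reject obviously hostile prompts before they reach the LLM."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_input):
            raise ValueError("potential prompt injection blocked")
    return user_input
```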
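
Second, insecure output handling: the complementary check sits on the way out, treating the model's answer as untrusted input to downstream systems. This sketch assumes the output will be rendered in a browser, so it strips script tags and HTML-escapes the rest; other consumers (shells, databases, plugins) need their own encoders.

```python
import html
import re

SCRIPT_TAG = re.compile(r"</?script[^>]*>", re.IGNORECASE)


def sanitise_output(llm_output: str) -> str:
    """Neutralise markup before an LLM answer is rendered in a browser."""
    without_scripts = SCRIPT_TAG.sub("", llm_output)
    return html.escape(without_scripts)  # encodes <, >, &, and quotes
```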
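
Third, denial of service (and, as noted above, model theft via excessive querying): an API fronting an LLM can cap how much work any single client may request. This is an in-memory token-bucket sketch; a production gateway would enforce the same idea, plus request-size and token-count limits, at the edge.

```python
import time
from collections import defaultdict


class TokenBucket:
    """Minimal in-memory, per-client rate limiter for an LLM-facing API."""

    def __init__(self, rate_per_sec: float = 1.0, burst: int = 5):
        self.rate = rate_per_sec  # tokens refilled per second
        self.burst = burst        # maximum stored tokens per client
        self.tokens = defaultdict(lambda: float(burst))
        self.last = defaultdict(time.monotonic)

    def allow(self, client_id: str) -> bool:
        """Return True if this client's request may proceed."""
        now = time.monotonic()
        elapsed = now - self.last[client_id]
        self.last[client_id] = now
        # Refill up to the burst ceiling, then spend one token per request.
        self.tokens[client_id] = min(
            self.burst, self.tokens[client_id] + elapsed * self.rate
        )
        if self.tokens[client_id] >= 1:
            self.tokens[client_id] -= 1
            return True
        return False


limiter = TokenBucket(rate_per_sec=0.5, burst=3)
if not limiter.allow("client-42"):
    print("429 Too Many Requests")  # reject the resource-heavy LLM call
```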
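
Finally, excessive agency: before executing any action an LLM proposes, the API can check it against an allow-list and require human approval for everything else. The action names and dispatcher here are hypothetical.

```python
# Actions the LLM may trigger autonomously; anything else needs a human.
ALLOWED_ACTIONS = {"lookup_order_status", "draft_reply"}


def dispatch(action: str, payload: dict) -> str:
    """Hypothetical dispatcher -- replace with real business logic."""
    return f"executed {action} with {payload}"


def execute_action(action: str, payload: dict,
                   approved_by_human: bool = False) -> str:
    """Gate LLM-proposed actions behind an allow-list plus human approval."""
    if action in ALLOWED_ACTIONS or approved_by_human:
        return dispatch(action, payload)
    raise PermissionError(f"action '{action}' requires human approval")


print(execute_action("lookup_order_status", {"order_id": 1234}))
```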

Don’t Run Before You Can Walk With LLMs

For many businesses, LLMs now sit at the cutting edge as they work out how the technology fits into their current ecosystem. APIs play a pivotal role in turning the implementation of LLMs within a business, and the return on investment they promise, into reality.

However, before thinking about how to automate tasks, create content, and improve customer engagement, businesses must prioritise API security throughout the entire lifecycle of an LLM. With the number of LLM-powered tools increasing exponentially and multi-LLM strategies becoming common within organisations, secure APIs are indispensable to making this happen.