The CISO’s guide to AI: Embracing Innovation While Mitigating Risk

Generative AI is making waves across many industries with its impressive capabilities, but it also brings serious cybersecurity challenges. Powerful Generative AI models, such as large language models (LLMs), are becoming targets for crafty cyber-attacks such as data poisoning, data leakage, and tampering with data integrity, which make the systems built on these models more vulnerable. This is especially worrying when sensitive private data is involved, and it creates significant challenges for Chief Information Security Officers (CISOs) trying to keep everything secure.

In this blog post, we will explore strategies the CISO community can employ to evaluate AI-integrated systems. These strategies help mitigate risks, protect sensitive information, and ensure compliance with privacy laws, thereby keeping corporate data safe. Please note that while security is a broad area and we must have a comprehensive strategy for all the AI systems we use, this post focuses on the security aspects of the models used in Generative AI based systems and applications. There are also multiple tools out there that can help with evaluation; I am keeping the tools and technologies discussion out of this blog and focusing only on the strategy we can employ.

A Quick Primer on Generative AI Models

We will refer to several terms around Generative AI: Generative AI models, foundational models, and LLMs. Let us briefly go through them to avoid any confusion as we use them in this blog post.

To start with, Generative AI refers to models and systems designed to generate new data, such as text, images, audio, or video, based on the patterns learned from existing data.

Foundational models, on the other hand, are a broader category that encompasses LLMs but also includes other types of AI models that serve as the base or “foundation” for various tasks across different domains. These models are pretrained on large and diverse datasets and can be adapted for multiple applications beyond just language processing.

Finally, Large Language Models (LLMs) are a specific type of AI model designed to understand and generate human language. They are trained on vast amounts of text data and are capable of performing a wide range of natural language processing (NLP) tasks such as text generation, translation, summarization, and answering questions. An LLM can be considered one type of foundational model. Examples include GPT-3, GPT-4, BERT, and T5.

Challenges with Generative AI

It might be very tempting to download an open-source LLM and start integrating it with existing IT systems. For personal testing and learning, these models can be very useful; however, they can introduce several risks, including security flaws within the models, uncertainties about the data used for training, and uncertain adherence to Responsible AI (RAI) principles. Additionally, determining accountability for these aspects can be complex. In short, downloading an open-source LLM without proper vetting can lead to significant security and operational risks.

What could go wrong?

Before the era of Generative AI and LLMs, the story of Microsoft’s AI chatbot Tay made headlines such as “Microsoft shuts down AI chatbot after it turned into a Nazi.” This incident showed how complicated and risky AI chatbots can be, especially when they learn from unfiltered public interactions.

With so many open-source LLMs available, the above incident is a vivid example of how things can go wrong when proper guardrails are absent and precautions are not taken early enough. If an LLM has security flaws, it can be exploited by malicious actors, leading to data breaches and system compromises.

For example, in one scenario, open-source LLMs were found to be susceptible to data poisoning attacks, where malicious data was introduced during the training process, causing the model to produce biased or harmful outputs (ref: From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy).

Such models, if used as is, can compromise the integrity of the systems built on them and lead to the propagation of false or malicious information.
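To make the idea concrete, here is a minimal, hypothetical sketch of label-flipping data poisoning. It uses a toy scikit-learn classifier rather than an LLM, and the dataset and poisoning rate are arbitrary, but it shows how corrupted training data quietly degrades a model that otherwise looks healthy.

```python
# Illustrative sketch: label-flipping poisoning on a toy classifier (not an LLM).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean data
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:", clean_model.score(X_test, y_test))

# "Poison" 30% of the training labels by flipping them
rng = np.random.default_rng(0)
poison_idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

With an LLM the same principle applies at a vastly larger scale, where the manipulation is far harder to spot in the training corpus.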

A few more examples of security and privacy flaws:

  • Exploits: Models like GPT-3 have had issues where hackers could manipulate them to do harmful things through jailbreaking techniques.
  • Legal Trouble: Using data scraped from the web can lead to legal issues if it includes copyrighted or sensitive info.
  • Data Leaks: In 2020, there were reports about how GPT-3 could potentially leak sensitive information it had been trained on. Users found that the model could sometimes produce text snippets containing private information embedded in its training dataset. This highlighted the risks of data leakage inherent in these powerful AI systems.
  • Privacy Backlash: Enterprise workplace collaboration platform Slack has sparked a privacy backlash with the revelation that it has been scraping customer data, including messages and files, to develop new AI and ML models.

Nowadays, most security tools in the market embed AI technology in their products. The effectiveness of the AI models used in these tools is heavily dependent on the quality and diversity of the training data. So, it is important to keep in mind that AI models can sometimes give wrong information for the following reasons:

  • AI models, especially language models, have limitations in understanding and reasoning. They can generate plausible but incorrect or nonsensical information.
  • Models trained on data up to a certain point will not have information on events or developments that occurred after their training cut-off date.
  • AI models make probabilistic predictions based on patterns in data. This means there’s always some level of uncertainty, and they can occasionally produce incorrect results.

Safeguarding while staying compliant

Given these possible vulnerabilities that we typically observe (there are certainly more), when leveraging LLMs it is essential to adopt a comprehensive strategy that includes evaluating the security of the systems built on top of any Generative AI model. We need to ensure the integrity of the training data for the LLM and verify that the model was developed using Responsible AI (RAI) principles. We must also establish an AI governance body and AI adoption strategies with clear responsibilities to manage and mitigate risks. Below are some of the steps we can take to ensure data safety, compliance, and responsibility:

Model Evaluation and Verification:

For any system we use, we have to ensure that the foundational model (i.e., the LLM the system is infused with) has gone through the following evaluation and verification steps; a sketch of one such pre-deployment check follows the list:

  • Security Audits: The models have gone through thorough security audits before they were infused with the systems or applications, so that potential vulnerabilities have been identified and the necessary measures taken to address them.
  • Regular Updates: The models and their dependencies were regularly updated with the latest security patches and improvements.
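As a concrete illustration, here is a minimal sketch of one pre-deployment check an evaluation team might automate: verifying that downloaded model artifacts match the checksums published by the vendor before the model is allowed anywhere near production. The file names and hashes below are placeholders, not real values.

```python
# Hypothetical sketch: verify model artifacts against vendor-published SHA-256 hashes.
import hashlib
from pathlib import Path

EXPECTED_HASHES = {
    "model.safetensors": "0123abcd...",   # placeholder: take from the vendor's signed release notes
    "tokenizer.json": "4567ef01...",      # placeholder
}

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifacts(model_dir: Path) -> bool:
    ok = True
    for name, expected in EXPECTED_HASHES.items():
        actual = sha256_of(model_dir / name)
        if actual != expected:
            print(f"MISMATCH for {name}: {actual}")
            ok = False
    return ok

if __name__ == "__main__":
    if not verify_artifacts(Path("./models/llm-v1")):   # placeholder path
        raise SystemExit("Model artifacts failed integrity check; do not deploy.")
```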

Training Data Assessment:

Foundational models like LLMs are typically trained on broad, open datasets. When evaluating any application, we must ensure the data has gone through the checks below; a rough sketch of such a screen follows the list:

  • Source Verification: Ensure the data used for training the model was sourced from reputable and legal channels.
  • Data Quality: Verify that the data is clean, unbiased, and does not contain sensitive or proprietary information without proper authorization.
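For illustration, here is a rough sketch of a simple pre-training data screen: sample the records and flag obvious PII patterns before accepting the data for fine-tuning. The regexes are deliberately simplistic placeholders; a real pipeline would rely on a proper data-loss-prevention or PII-detection tool.

```python
# Illustrative sketch: flag obvious PII patterns in candidate training records.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card_like": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_records(records):
    findings = []
    for i, text in enumerate(records):
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(text):
                findings.append((i, label))
    return findings

sample = [
    "Contact me at jane.doe@example.com about the contract.",
    "Quarterly revenue grew 12% year over year.",
]
print(screen_records(sample))   # -> [(0, 'email')]
```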

Adherence to Responsible AI (RAI) Principles:

Responsible Artificial Intelligence (Responsible AI) means creating and using AI systems in a way that is safe, trustworthy, and ethical. Any AI model trained with RAI principles such as fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability will be more trustworthy. As part of the evaluation, we must validate the following; an illustrative documentation record follows the list:

  • Ethical Training Practices: Ensure the model training follows ethical guidelines and RAI principles, focusing on fairness, accountability, and transparency.
  • Documentation: Maintain thorough documentation of the training process, including data sources and any preprocessing steps.
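As an illustration, the snippet below sketches the kind of structured “model card” an evaluation team might require from a vendor before sign-off. The field names are hypothetical; the point is that training provenance and the RAI review are captured as documentation that can be checked programmatically.

```python
# Hypothetical model-card record; all names and values are placeholders.
model_card = {
    "model_name": "vendor-llm-v1",
    "training_data_sources": ["licensed corpus X", "public web crawl (filtered)"],
    "preprocessing_steps": ["de-duplication", "PII scrubbing", "toxicity filtering"],
    "rai_review": {
        "fairness_evaluated": True,
        "known_limitations": ["hallucination under ambiguous prompts"],
        "reviewer": "internal AI governance board",
        "review_date": "2024-05-01",
    },
    "security_audit": {"last_audit": "2024-04-15", "open_findings": 0},
}

# A trivial completeness check before the model is approved for use.
required = {"training_data_sources", "preprocessing_steps", "rai_review", "security_audit"}
missing = required - model_card.keys()
assert not missing, f"Model card incomplete: {missing}"
```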

Accountability and Governance:

When we use systems from third-party vendors, we often rely on the vendors’ claims and reputations. If there are issues, we can go back to them for support, as they are the responsible parties. When it comes to systems built with Gen AI models, we have to go the extra mile. For the models, we must identify:

  • Clear Responsibilities: Ensure clear roles and responsibilities for managing the Generative AI enabled system, including accountability for any security flaws or ethical issues.
  • Third-Party Reviews: Engage independent third-party reviews to validate the security and ethical compliance of the model.

Also, the Data Protection Addendum (DPA) that a third-party vendor signs with the customer should include several key considerations to ensure compliance with data protection regulations and to address specific concerns related to AI usage. The main elements to consider are: Purpose Limitation, Transparency, Data Minimization, Accuracy, Data Subject Rights, Automated Decision-Making (if AI systems are used for automated decision-making, provide details on the logic involved, the significance, and the potential consequences of such processing for data subjects), Security Measures, Data Protection Impact Assessments, and Data Retention and Deletion.

Regulatory Compliance:

For any Generative AI based system, regulatory compliance is of utmost importance. We know that ChatGPT was temporarily banned in Italy by the Italian data-protection agency over data collection concerns. Here in the USA, we must do the same and check, for the models used in the intended AI systems, that:

  • Legal Adherence: The models were compliant with relevant data protection regulations such as GDPR, HIPAA, or CCPA.
  • Regular Compliance Checks: The AI system vendor has a process to conduct regular compliance checks and stay up-to-date with changing regulatory requirements.

Security Measures:

When leveraging Generative AI enabled applications or systems to interact with our own corporate data, we may choose to fine-tune a foundational model or feed the model our own data, leveraging patterns like RAG (Retrieval Augmented Generation). However we work with our data, we must ensure the following; a simplified sketch follows the list:

  • Access Control: Implement strong access control mechanisms to restrict access to the model and its data.
  • Encryption: Use encryption for data at rest and in transit to protect against unauthorized access.
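To illustrate both points in a RAG context, here is a simplified, hypothetical sketch: documents are kept encrypted at rest and access control is enforced before retrieval, so only content the user is authorized to see ever reaches the model. The “vector store” is just a dict with keyword matching, and the key handling is intentionally naive; in practice the key would live in a KMS and retrieval would use a real vector database.

```python
# Simplified sketch: access-controlled, encrypted-at-rest retrieval for a RAG pipeline.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, managed by a KMS, never hard-coded
fernet = Fernet(key)

# Documents stored encrypted, tagged with the groups allowed to read them.
doc_store = {
    "hr-policy": {"allowed_groups": {"hr", "exec"}, "blob": fernet.encrypt(b"HR leave policy text ...")},
    "eng-runbook": {"allowed_groups": {"engineering"}, "blob": fernet.encrypt(b"Runbook text ...")},
}

def retrieve(query: str, user_groups: set) -> list:
    """Return decrypted documents the user may see and that loosely match the query."""
    results = []
    for doc_id, doc in doc_store.items():
        if not (doc["allowed_groups"] & user_groups):
            continue                                   # access control enforced before retrieval
        text = fernet.decrypt(doc["blob"]).decode()
        if any(word in text.lower() for word in query.lower().split()):
            results.append(text)
    return results

context = retrieve("policy", {"hr"})
prompt = f"Answer using only this context:\n{context}\n\nQuestion: What is the leave policy?"
# The prompt would then be sent to the LLM; only authorized content ever reaches it.
```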

Continuous Monitoring:

A comprehensive evaluation of any Generative AI based system (or any modern-day system) would not be complete without vetting for a robust monitoring capability. The systems must be evaluated for the following; a minimal sketch of such a monitoring wrapper follows the list:

  • Anomaly Detection: Ensure that the Gen AI system has built in continuous monitoring capability to detect and respond to any unusual activities or potential security breaches.
  • Incident Response Plan: Validate that a near real time incident response plan to address any security incidents swiftly is an integral part of the Gen AI systems we are evaluating.
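As a small illustration, below is a hypothetical monitoring wrapper around an LLM call: it logs every interaction and flags simple anomalies (suspected prompt injection, oversized responses, possible credential leakage) that could feed an incident-response workflow. The thresholds and patterns are placeholders.

```python
# Hypothetical monitoring wrapper; thresholds and patterns are illustrative only.
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai.monitor")

INJECTION_MARKERS = ["ignore previous instructions", "disregard the system prompt"]
SECRET_PATTERN = re.compile(r"(api[_-]?key|password)\s*[:=]\s*\S+", re.IGNORECASE)
MAX_RESPONSE_CHARS = 8000

def monitored_completion(prompt: str, llm_call) -> str:
    alerts = [m for m in INJECTION_MARKERS if m in prompt.lower()]
    response = llm_call(prompt)
    if len(response) > MAX_RESPONSE_CHARS:
        alerts.append("unusually long response")
    if SECRET_PATTERN.search(response):
        alerts.append("possible credential in output")
    log.info("prompt_len=%d response_len=%d alerts=%s", len(prompt), len(response), alerts)
    if alerts:
        log.warning("Potential incident, routing to response team: %s", alerts)
    return response

# Example with a stubbed model call:
print(monitored_completion("Summarize our leave policy.", lambda p: "Employees get 20 days."))
```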

Review Terms and Conditions:

When evaluating products or applications built on top of LLMs, reviewing the terms and conditions of the LLM infused with the application you are evaluating, including the fine print, is a must. Google’s Gemini, for example, shares terms and conditions including information like the following:

Things to know

  • Gemini Apps are a new technology. They are continuously evolving and may sometimes give inaccurate or inappropriate information that doesn’t represent Google’s views.
  • Don’t rely on responses from Gemini Apps as medical, legal, financial or other professional advice.
  • Your feedback will help make Gemini Apps better.
  • Google processes your voice and audio data if you speak to Gemini Apps, but by default, this info is not saved to Google servers.

As we can see, it is clearly still possible to get inaccurate or inappropriate information from any LLM out there.

Conclusion

It is crucial for organizations to invest in AI-driven cybersecurity measures and to remain vigilant about emerging threats. By taking these comprehensive actions when evaluating a solution or a system, organizations can secure their generative AI systems and ensure compliance with data protection regulations. This approach not only mitigates risks but also fosters trust and confidence in the use of AI technologies, supporting ethical and responsible innovation.

References