Organizations are incorporating generative artificial intelligence (GenAI) into their business processes at an alarming rate, often without fully considering the security repercussions. And as more employees use AI on the job, the likelihood grows that they will leak confidential data about their company and its customers.
Responsible use of AI does not have to come at the cost of efficiency or innovation. Emphasizing a security-first approach to AI implementation will enhance public and private organizations' overall effectiveness and help them stay ahead of the competition.
According to an IBM Institute for Business Value survey, "96% of executives say adopting generative AI makes a security breach likely in their organization within the next three years." The risk of such a breach can be minimized with tried-and-true security practices such as establishing security boundaries, tracking data flows, and applying the principle of least privilege. Beyond traditional security risks, new data exfiltration methods are evolving with the advent of AI.
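As a concrete illustration, the sketch below shows how a deny-by-default, least-privilege check for GenAI tool access might look. The roles, dataset names, and policy entries are hypothetical placeholders, not a prescribed implementation.

```python
# Minimal sketch of a deny-by-default, least-privilege check for GenAI tool
# access. Roles, dataset names, and the policy itself are hypothetical.
from dataclasses import dataclass

POLICY = {
    "marketing-analyst": {"can_use_genai": True, "allowed_datasets": {"public-web-copy"}},
    "hr-specialist": {"can_use_genai": True, "allowed_datasets": {"anonymized-survey"}},
    "contractor": {"can_use_genai": False, "allowed_datasets": set()},
}

@dataclass
class AccessRequest:
    role: str
    dataset: str

def is_allowed(req: AccessRequest) -> bool:
    """Grant only what a role explicitly needs; everything else is denied."""
    entry = POLICY.get(req.role)
    if entry is None or not entry["can_use_genai"]:
        return False
    return req.dataset in entry["allowed_datasets"]

# A contractor asking to feed customer records into a GenAI tool is denied.
print(is_allowed(AccessRequest(role="contractor", dataset="customer-records")))  # False
```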
Large language models (LLMs) can memorize portions of their training data, which may then surface in the response to a later prompt. The GPT-J model, for example, memorizes at least 1% of its training data. Even anonymized data can be recovered through a process known as inference, in which a GenAI tool triangulates multiple data points to accurately reconstruct the original information.
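One way teams approximate this memorization risk is to plant unique "canary" strings in data they control and then scan model outputs for them. The sketch below assumes hypothetical canary values and output text; it is a rough illustration of the idea, not a complete memorization audit.

```python
# Rough sketch: scan model output for planted "canary" strings as a crude
# memorization signal. The canary values and the output text are hypothetical.
import re

CANARIES = [
    "canary-7f3a-9b21",          # unique marker planted in controlled training data
    "ACME-INTERNAL-PROJECT-X",   # hypothetical internal code name
]

def leaked_canaries(model_output: str) -> list[str]:
    """Return any canary strings that appear verbatim in the output."""
    return [c for c in CANARIES if re.search(re.escape(c), model_output)]

output = "Draft copy mentions ACME-INTERNAL-PROJECT-X launch dates."
print(leaked_canaries(output))  # ['ACME-INTERNAL-PROJECT-X']
```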
In 2023, it was reported that Amazon employees unknowingly shared sensitive company data with ChatGPT, which was subsequently leaked to outside users; experts estimate the damage in the millions of dollars. A similar incident occurred at Samsung the same year, resulting in a ban on the use of GenAI tools on company-owned devices. Once sensitive or proprietary data is passed to a third-party tool and embedded in the model weights, it can never be removed. For this reason, it is imperative that passwords and other highly sensitive information never be shared with an AI tool.
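A simple safeguard is to redact obvious secrets before a prompt ever leaves the organization. The following sketch uses a few illustrative regular expressions; a real deployment would rely on a proper data loss prevention tool rather than this short pattern list.

```python
# Minimal sketch: redact obvious secrets before sending a prompt to a
# third-party GenAI service. Patterns are illustrative, not an exhaustive policy.
import re

SECRET_PATTERNS = [
    re.compile(r"(?i)password\s*[:=]\s*\S+"),
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
    re.compile(r"\b(?:\d[ -]*?){13,16}\b"),  # crude card-number-like sequences
]

def redact(prompt: str) -> str:
    """Replace anything matching a secret pattern with a placeholder."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

print(redact("Summarize this config: password=hunter2 api_key=abc123"))
# Summarize this config: [REDACTED] [REDACTED]
```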
The introduction of bias into GenAI models is also a serious concern. Harmful biases regarding race, ethnicity, gender, socioeconomic status, and more have surfaced in the outputs of popular GenAI tools. Security is also threatened when systems trained on biased data misclassify threats, overlook real ones, or generate false positives. For these reasons, it is important to scrutinize and monitor the potential for bias in training data and AI-augmented decision-making.
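One basic monitoring step is to compare error rates across groups. The sketch below computes false positive rates per group from a hypothetical detection log; the field names and sample records are assumptions used only to illustrate the check.

```python
# Minimal sketch: compare false positive rates across groups in an
# AI-assisted threat-detection log. Field names and data are hypothetical.
from collections import defaultdict

# Each record: (group, flagged_by_model, actually_malicious)
events = [
    ("region_a", True, False),
    ("region_a", False, False),
    ("region_b", True, False),
    ("region_b", True, False),
    ("region_b", False, False),
]

def false_positive_rates(records):
    """Share of benign events that the model flagged, per group."""
    flagged = defaultdict(int)
    benign = defaultdict(int)
    for group, is_flagged, is_malicious in records:
        if not is_malicious:
            benign[group] += 1
            if is_flagged:
                flagged[group] += 1
    return {g: flagged[g] / benign[g] for g in benign}

print(false_positive_rates(events))  # {'region_a': 0.5, 'region_b': 0.666...}
```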
Whether it's a third-party tool or an in-house project, thorough research and a clear plan will go a long way toward reducing risks. When developing guidelines for AI implementation, the first step is to match the business case with available tools, remembering that some models are more suited to specific tasks than others.
Practicing a Secure by Design strategy from the ground up can future-proof an AI implementation. Its principles ensure that security is prioritized throughout the entire lifecycle of an AI product, and the methodology builds multiple layers of defense against cyberthreats.
During the planning stage, the security team's input is critical to a Secure by Design approach. Vendor trust is also vital: vendors should be evaluated for trustworthiness and contracts audited thoroughly, including regular monitoring of updates to vendor terms and conditions. Data quality should likewise be assessed against metrics such as accuracy, relevance, and completeness. Focusing on security during the adoption and integration of AI reduces the risk of compliance issues and costly remediations down the road.
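Completeness, for instance, can be spot-checked with a few lines of code before a dataset is handed to a vendor or a model. The required field names and acceptance threshold in this sketch are assumptions.

```python
# Minimal sketch: spot-check completeness of a dataset before it is shared
# with a vendor or used for training. Column names and threshold are assumed.
REQUIRED_FIELDS = ["customer_id", "consent_flag", "record_date"]

def completeness(rows: list[dict]) -> dict[str, float]:
    """Fraction of rows with a non-empty value for each required field."""
    total = len(rows) or 1
    return {
        field: sum(1 for r in rows if r.get(field) not in (None, "")) / total
        for field in REQUIRED_FIELDS
    }

rows = [
    {"customer_id": "c1", "consent_flag": True, "record_date": "2024-01-05"},
    {"customer_id": "c2", "consent_flag": None, "record_date": "2024-02-11"},
]
scores = completeness(rows)
print(scores)  # {'customer_id': 1.0, 'consent_flag': 0.5, 'record_date': 1.0}
assert all(v >= 0.5 for v in scores.values()), "Dataset fails the completeness bar"
```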
Data governance frameworks help develop working relationships across departments on AI policy, implementation, and monitoring. They can also reduce the likelihood of fines and data breaches and ensure brand safety.
The security team, however, is not the only group necessary for secure, efficient, and transparent AI implementation. Good data governance involves multiple stakeholders within an organization, such as legal, finance, and human resources (HR) representatives. These best practices come down to the policies that allow employees to manage and monitor data as it moves across the organization, including clear guidelines for how specific departments feed into the decision-making process. Ongoing employee training at every level of the organization also plays an important role. Data governance requires continual reassessment and helps prevent a "set it and forget it" mentality.
Requirements around legal and regulatory compliance underscore the value of good data governance. As laws continue to evolve, adhering to these frameworks can ensure AI investments rest on a stable foundation. Experts recommend that companies serious about creating or improving their policies consult data governance frameworks such as the NIST Cybersecurity Framework 2.0. The Coalition for Secure AI (CoSAI), Cloud Security Alliance (CSA), and National Institute of Standards and Technology (NIST) offer additional resources and data governance guidelines.
Visionary business leaders understand the importance of proper data governance while adopting this rapidly changing technology. Implementing a SAFER strategy and having a common-sense approach to AI and security will help protect company assets and keep customer information private. Here's how to practice a SAFER AI strategy:
Keeping security at the forefront from the get-go confers advantages, especially as tools and risks evolve. Safer AI is on the horizon as more users adhere to best practices through regulatory frameworks, international collaborations, and security-first use cases. The trend toward reliable, secure AI will benefit companies and customers alike as these tools continue to be woven into the fabric of today's digital lives.