Businesses Need New AI Governance in Cybersecurity and Privacy

Regulators increasingly expect executives to proactively address the bespoke risks associated with developing and deploying generative AI. Privacy and cybersecurity remain an important focus for businesses pursuing successful and responsible adoption, particularly with respect to transparent disclosure to consumers about AI use.

That disclosure should cover how consumer data will be processed by such tools and the rights available to control that processing. Businesses must also proactively address, and protect against, increasingly sophisticated AI-enhanced cybersecurity incidents.

To address these risks, management must set the tone by establishing governance infrastructure -- rules, standards, processes, and tools -- to guide the safe and compliant development and use of AI, in alignment with corporate values and strategies and with emerging and evolving legislation.

There is an inherent tension between generative AI development -- which requires processing of large amounts of data, including personally identifiable information, to train generative AI models -- and privacy laws, which dictate data minimization and purpose limitation. Privacy laws also require transparency and respect of individuals' rights to access, correct, and delete their data.

To mitigate privacy risks, lawmakers and regulators have begun adopting AI-specific legislation and expanding interpretations of existing laws to address AI-specific harms. For example, the Federal Trade Commission brought a complaint against Rite Aid under Section 5 of the FTC Act, alleging the company failed to take reasonable measures to prevent consumer harm when it deployed AI-based facial recognition technology to identify potential shoplifters.

Numerous other federal agencies have similarly hinted at potential AI regulation and enforcement, though the future is uncertain given the new administration and turnover in regulatory leadership.

At the state level, Colorado became the first to enact comprehensive AI legislation with the Colorado AI Act, which imposes certain obligations on developers and deployers of high-risk AI systems to prevent algorithmic discrimination.

Similarly, California regulators issued draft regulations under the California Consumer Privacy Act governing businesses' use of automated decision-making technology. In certain circumstances, the draft rules would require consumer notice, opt-out rights, and information access rights when a person's data is processed through those tools.

Lawmakers will likely continue adopting similar or more restrictive privacy legislation, and regulators will continue exercising existing powers to ensure generative AI deployment doesn't undermine privacy. Companies that fall short may face regulatory fines or be required to change product offerings or business practices.

Leadership must stay abreast of the legal landscape and emerging enforcement trends, and build strong foundations for data use through an AI governance program overseen by a centralized governing body and supported by well-documented controls. A solid governance program also promotes accountability, transparency, and AI literacy, and requires strong third-party risk management.

Although AI adoption supports quicker detection and prevention of cyberattacks, threat actors also use it to launch more widespread and sophisticated attacks. For example, threat actors have used generative AI to create deepfakes (artificial videos or images) that facilitate social engineering attacks, which solicit sensitive information from an organization's employees and circumvent authentication controls.

State regulators have noticed. The New York Department of Financial Services, for example, recently issued guidance for executives and senior leadership recommending mitigating controls for an organization's AI-specific cybersecurity risks.

Managers should ensure that companies review and update existing cybersecurity policies and procedures to address these new generative AI risks, using guidance such as New York's as a framework.

An AI governance program should include generative AI-specific risk training for information security and other personnel, and enhanced third-party vendor risk management programs that include due diligence, contractual protections, incident response, and vendor oversight.

The proliferation of generative AI technologies will continue to test businesses' cybersecurity and privacy risk mitigation structures. To minimize exposure, leaders should prioritize development and implementation of solid AI-specific governance programs.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Daniel Ilan is a partner with Cleary Gottlieb and focuses on intellectual property law, cybersecurity and privacy, and AI.

Megan Medeiros is a practice development lawyer with Cleary Gottlieb and focuses on intellectual property law.

Melissa Faragasso is an associate with Cleary Gottlieb and focuses on intellectual property and technology transactions.
