AI is here: how should CISOs respond?
With artificial intelligence (AI) use growing, Chief Information Security Officers (CISOs) play a critical role in its implementation and adoption. They need to prepare for the risks associated with AI content creation as well as AI-assisted security threats from attackers. By following some key best practices, we'll be better prepared to safely welcome our new robot overlords into the enterprise!
AI is growing fast!
The popularity of ChatGPT has sparked massive interest in the potential of generative AI, and many businesses are deploying it across the enterprise. AI technology is now in the wild, and it's moving faster than any other technology I've seen.
There are several compelling use cases for generative AI across the enterprise.
Issues and challenges
However, there are challenges to overcome, such as whether using AI at all will run afoul of laws and regulations in international markets.
Earlier this year OpenAI temporarily blocked the use of ChatGPT in Italy after the Italian Data Protection Authority accused it of unlawfully collecting user data. Meanwhile, German regulators are looking at whether ChatGPT adheres to the European General Data Protection Regulation (GDPR). In May, the European Parliament took a step closer to issuing the first rules on the use of artificial intelligence.
Another challenge is the set of issues around data collection and the accidental disclosure of personal or proprietary information. Companies need to secure their confidential information against leakage and ensure they aren't plagiarizing from other companies and individuals who are using the same tools they are. We've already seen reports of intellectual property being entered into public generative AI systems, which could impact a company's ability to defend its patents. One AI-powered transcription and note-taking service makes copies of any materials that are presented in Zoom calls it monitors.
The third major challenge is that AI-powered cyberattack software could try many possible approaches, learn from how we respond to each, and quickly adjust its tactics to devise an optimal strategy, all at a speed much faster than any human attacker. We have seen new sophisticated phishing attacks that are utilizing AI, including impersonating individuals both in writing and in speech. For example, an AI tool called PassGAN, short for Password Generative Adversarial Network, has been found to crack passwords faster and more efficiently than traditional methods.
CISOs and AI
As CISOs, we help leaders create an organizational strategy that provides guidelines for use and takes into account legal, ethical, and operational considerations.
When used responsibly and with proper governance frameworks in place, generative AI can provide businesses with advantages ranging from automated processes to optimization solutions.
Creating a comprehensive AI strategy
With new technologies such as generative AI come opportunities, but they also come with risks. A comprehensive AI strategy ensures privacy, security, and compliance.
Once your organization has assessed and prioritized use cases for generative AI, a governance framework needs to be established for AI services such as ChatGPT. Components of this framework include rules for data collection and retention, as well as policies that mitigate the risk of bias, anticipate ways the systems can be abused, and limit the harm they can do if used improperly.
A company's AI strategy should also cover how changes brought about by AI automation will affect employees and customers. Employee training initiatives can help ensure that everyone understands how these new technologies are changing day-to-day processes and how threat actors may already be using them to further increase the efficacy of their social engineering attacks. Customer experience teams should assess how changes resulting from AI implementation might impact customer service delivery so that they can adjust accordingly.
AI and security
A process for establishing and maintaining strong AI security standards is vital. What you need are guardrails that are specific to how AI functions: for example, which AI service it pulls content from and what it does with the information you feed into it.
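One way to picture such a guardrail is a redaction layer that strips sensitive strings from a prompt before it ever leaves the enterprise for an external AI service. The sketch below is illustrative only; the pattern list and function name are my own, and a real deployment would rely on a maintained data-loss-prevention rule set rather than three hand-written regexes:

```python
import re

# Hypothetical redaction patterns; a production guardrail would use a
# curated DLP rule set tuned to the organization's data.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholders before the
    prompt is sent to an external AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com, token sk-abcdefghij1234567890"))
```

The same chokepoint is also where you would log which AI service a request targets, giving you an audit trail for the "what did we feed into it" question.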
AI tools need to be designed with adversarial robustness in mind. We currently see this happening in the lab to improve training, but doing this in the ‘real’ world, against an unknown enemy, must be top-of-mind—especially in military and critical infrastructure scenarios.
With attackers looking closely at AI, your organization needs to plan and prepare its defenses right now. Here are a few practices to consider:
- Ensure you analyze your software code for bugs, malware, and behavioral anomalies. Signature 'scans' only look for what is known, and these new attacks will leverage unknown techniques and tools.
- When monitoring your logs, use AI to fight AI. Machine learning security log analysis is a great way to search for patterns and anomalies. It can incorporate endless variables and produce predictive intelligence, which in turn provides predictive actions.
- Update your cybersecurity training to reflect new threats such as AI-powered phishing, and your cybersecurity policies to counter the new AI password-cracking tools.
- Continue to monitor new uses of AI, including generative AI, to stay ahead of emerging risks.
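To make the "use AI to fight AI" idea concrete, here is a deliberately tiny illustration of the pattern-and-anomaly search described above. It is not a real ML pipeline: the log records are invented, and a simple z-score over per-user event volume stands in for a trained model, purely to show the statistical flavor of such analysis:

```python
from collections import Counter
from statistics import mean, stdev

def flag_anomalies(events, threshold=2.5):
    """Flag users whose event volume deviates from the population
    mean by more than `threshold` standard deviations.

    `events` is an iterable of (user, action) pairs, a toy stand-in
    for parsed SIEM records, which would carry many more features.
    """
    counts = Counter(user for user, _ in events)
    values = list(counts.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:  # all users behave identically; nothing stands out
        return []
    return [u for u, c in counts.items() if (c - mu) / sigma > threshold]

# Ten users with ~5 logins each, plus one user with a burst of 60.
normal = [(f"user{i}", "login") for i in range(10) for _ in range(5)]
burst = [("mallory", "login")] * 60
print(flag_anomalies(normal + burst))  # → ['mallory']
```

A production system would replace the z-score with a trained anomaly detector fed by many features (time of day, source IP, resource accessed), but the loop is the same: baseline, deviation, alert.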
These steps are critical to building trust with your employees, partners, and customers about whether you’re properly safeguarding their data.
Preparing for the future
To stay competitive, it’s essential for organizations to adopt AI technology while safeguarding against potential risks. By taking these steps now, companies can ensure they’re able to reap the full benefits of AI while minimizing exposure.
Gail Coury is SVP and CISO at F5.