As AI and generative AI become more integrated into business processes, the pressure is on risk and cybersecurity professionals to shield organizations from data compromise or damage to operations.
In this insight, the first of our How to… cybersecurity series, our cyber risk analytics consultants share practical steps and the strategic focus your organization needs to improve cybersecurity and respond effectively to AI-related breaches, while still taking advantage of AI opportunities.
Get tough on new AI tools and uses
Your IT staff may be facing daily requests to enable new AI tools, some of which have weak security controls or an unclear use case. Like any new technology you’re thinking of introducing into your IT environment, you’ll need to vet each tool and use case, documenting clearly defined usage parameters.
This is an area where cybersecurity can help build valuable collaborations between key players such as the CIO, CISO, data officer, risk manager and legal colleagues. Together, these business functions can review each new AI tool, set usage guidelines and disseminate information to the rest of your organization.
Industry-specific analytics can help you assess and quantify the risks associated with third-party AI tools and their potential impact on your business. By quantifying the likelihood and impact of AI-driven cyber risks, you can inform more effective and efficient decision-making on which tools to use and how to mitigate associated risks. Periodic technology rationalization assessments can also help determine if each AI tool in use is still necessary.
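A likelihood-and-impact assessment of this kind can be sketched in a few lines. The tools, scores, and threshold below are illustrative assumptions, not vendor assessments; a real program would draw scores from industry-specific analytics.

```python
# Minimal sketch: score each third-party AI tool on a 1-5 likelihood and
# 1-5 impact scale, then route high scorers to a formal review.

def risk_score(likelihood: int, impact: int) -> int:
    """Combine likelihood and impact into a single 1-25 risk score."""
    return likelihood * impact

# Hypothetical inventory: (tool, likelihood of data leakage, business impact)
inventory = [
    ("chat-assistant", 4, 5),
    ("code-helper", 2, 3),
    ("meeting-summarizer", 3, 2),
]

REVIEW_THRESHOLD = 12  # scores at or above this trigger a formal review

for tool, likelihood, impact in inventory:
    score = risk_score(likelihood, impact)
    action = "formal review" if score >= REVIEW_THRESHOLD else "standard onboarding"
    print(f"{tool}: score {score} -> {action}")
```

Even a simple scoring model like this gives the CIO, CISO and risk manager a shared, comparable basis for deciding which tools warrant deeper scrutiny.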
Check your data (and shadow data)
Data is the lifeblood of AI. But while data makes AI run, it also represents the greatest potential vulnerability. So-called ‘shadow data’ is data that falls outside of defined repositories in your environment. It's an endemic problem in many organizations, contributing to data leakage and associated regulatory compliance risk.
As you introduce AI to your business, you need to know the answers to the following questions on data:
- Do you have a named data officer?
- Do you have a published data strategy?
- Do you know where all your data is?
- Who, or what, has access to your data, including APIs and AI tools?
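Answering "do you know where all your data is?" usually starts with a sweep for files living outside approved repositories. Below is a minimal sketch of such a shadow-data check; the repository paths, file extensions, and sample files are illustrative assumptions.

```python
# Minimal sketch: flag sensitive-looking files found outside the repositories
# your data strategy approves ("shadow data").
from pathlib import Path

APPROVED_REPOS = [Path("/data/warehouse"), Path("/data/crm")]
SENSITIVE_SUFFIXES = {".csv", ".xlsx", ".db", ".sqlite"}

def is_approved(path: Path) -> bool:
    """True if the file sits under an approved data repository."""
    return any(path.is_relative_to(repo) for repo in APPROVED_REPOS)

def find_shadow_data(paths):
    """Yield sensitive-looking files located outside approved repositories."""
    for path in paths:
        if path.suffix.lower() in SENSITIVE_SUFFIXES and not is_approved(path):
            yield path

# In practice you would feed this a real filesystem or cloud-storage walk.
candidates = [
    Path("/data/warehouse/customers.csv"),   # approved location
    Path("/home/alice/exports/leads.xlsx"),  # potential shadow data
]
flagged = list(find_shadow_data(candidates))
```

A sweep like this is only a starting point: the findings feed back into the data officer's inventory and the access review covered by the questions above.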
There are many useful frameworks covering data and AI governance you can turn to for initial guidance, including the US National Institute of Standards and Technology’s (NIST) Artificial Intelligence Risk Management Framework, ISO/IEC 42001, the European Union’s AI Act and the UK government’s AI regulatory principles.
Secure system prompts and critical components of AI models
Recent cyber incidents, such as the DeepSeek data exposure, show that securely managing system prompts can help prevent unauthorized access. Your business needs to implement strong access controls, ensuring only authorized personnel can reach system prompts and critical model components. Use multi-factor authentication (MFA) and role-based access control (RBAC) to minimize the risks.
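The RBAC part of this can be sketched as a simple role-to-permission mapping. The roles, permissions, and users below are illustrative assumptions; in production these checks would live in your identity provider, with MFA enforced at sign-in as a separate layer.

```python
# Minimal RBAC sketch: access to system prompts is granted only if the
# user's role carries the requested permission.

ROLE_PERMISSIONS = {
    "ml-engineer": {"read_prompt", "edit_prompt"},
    "analyst": {"read_prompt"},
    "intern": set(),  # no prompt access by default
}

USER_ROLES = {
    "alice": "ml-engineer",
    "bob": "analyst",
}

def can_access(user: str, permission: str) -> bool:
    """Deny by default: unknown users and unknown roles get no access."""
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())
```

The design choice that matters here is deny-by-default: a user or role missing from the mapping gets nothing, rather than falling through to an implicit grant.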
By encrypting all sensitive data, including system prompts, you can add a further layer of security, making it more difficult for attackers to exploit vulnerabilities.
Commit to continuous monitoring and rapid response
Regularly auditing your AI systems and reviewing their access logs for unusual activity puts you in a good position to identify and address security issues early.
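An access-log review can be sketched with two simple rules: repeated failed attempts and activity outside business hours. The log format, sample records, and thresholds below are illustrative assumptions; real monitoring would run continuously against your actual log pipeline.

```python
# Minimal sketch: flag users with repeated failed access attempts or
# off-hours activity in an AI system's access log.
from collections import Counter

# (user, hour of day, outcome) records, as might be parsed from an access log
log = [
    ("alice", 10, "success"),
    ("bob", 3, "success"),    # off-hours access
    ("eve", 11, "failure"),
    ("eve", 11, "failure"),
    ("eve", 12, "failure"),
]

FAILURE_THRESHOLD = 3          # repeated failures suggest credential probing
BUSINESS_HOURS = range(8, 19)  # 08:00-18:59, an assumed working window

failures = Counter(user for user, _, outcome in log if outcome == "failure")
alerts = {user for user, count in failures.items() if count >= FAILURE_THRESHOLD}
alerts |= {user for user, hour, _ in log if hour not in BUSINESS_HOURS}
```

Flags like these are prompts for human review, not verdicts: off-hours access may be legitimate, but it should be explainable.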
