Article | Managing Risk

Agentic AI: Trends, risks and how your business can respond

By Sam Haslam | February 20, 2026

What new risks emerge when AI systems can independently plan, reason and take actions to achieve complex goals with minimal human oversight?

Agentic AI systems can now act with limited human involvement, triggering workflows, interacting with data and completing tasks across platforms. This brings speed, but it also raises material risks if your teams adopt tools without approval or oversight.

Below, we look at some key trends in agentic AI adoption and the actions your business can take to get ahead of the risks they can create, without hampering innovation.

How can businesses stay ahead of shadow agentic AI risks?

Shadow agentic tools are those your teams are using inside the business without approval, oversight or the right controls. In other words, it's when colleagues quietly start using autonomous or semi-autonomous AI assistants that IT or risk teams haven’t reviewed.

These tools might pull data from shared drives, trigger actions across systems, or send information automatically, which can create privacy, cybersecurity, operational and compliance risks.

Your business needs clear routes to approve, monitor and contain these systems so they operate within defined boundaries, including an AI system risk assessment before go-live. Use the assessment to check data access and read/write permissions, establish audit logging and implement human-in-the-loop controls.
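As a rough illustration of those controls, the sketch below shows how an approval gate might combine a permission check, an audit log and a human sign-off requirement for write actions. This is a minimal example under assumed names (the agent identity, the permission map and the resource paths are all hypothetical), not a reference implementation.

```python
import logging

# Hypothetical permission map for a named agent identity.
# In practice this would come from your access-management system.
AGENT_PERMISSIONS = {
    "finance-report-assistant": {
        "read": {"shared/finance"},
        "write": {"shared/finance"},
    },
}

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit_log = logging.getLogger("agent_audit")


def authorise_action(agent_id: str, action: str, resource: str,
                     human_approved: bool = False) -> bool:
    """Gate a single agent action: check its permissions, record an
    audit entry, and require explicit human approval for any write."""
    perms = AGENT_PERMISSIONS.get(agent_id, {"read": set(), "write": set()})
    scope = resource.rsplit("/", 1)[0]          # e.g. "shared/finance"
    allowed = scope in perms.get(action, set())
    if action == "write":
        # Human-in-the-loop control: writes always need a sign-off,
        # even when the agent's permissions would otherwise allow them.
        allowed = allowed and human_approved
    audit_log.info("agent=%s action=%s resource=%s allowed=%s",
                   agent_id, action, resource, allowed)
    return allowed
```

With this pattern, an unapproved agent is denied by default (it has no entry in the permission map), and an approved agent can read within its scope but cannot write anything until a human explicitly signs off.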

How this agentic AI risk could play out: A finance analyst installs an unapproved assistant to speed up reporting. It merges files from shared drives, pulls data from multiple systems and sends the resulting report automatically, but the figures are wrong and include outdated assumptions. You now face a data breach and an internal clean-up. A basic approval gate and access checks would have blocked the send and flagged the data mismatch.

How can businesses stay ahead of agentic AI people risks?

AI adoption is likely to be uneven across your organisation, and the differences will be significant. Agentic systems are now able to automate material parts of some knowledge work tasks, not just speed them up. As a result, employees who effectively integrate these tools into their workflow can achieve substantially higher output than peers who do not. These uneven adoption patterns can widen productivity gaps, create cultural tension and raise concerns about fairness, capability and future role design.

Leaders should begin treating this as a strategic workforce issue: understanding which tasks are likely to be partially automated, where human judgment remains essential, and how roles may evolve over the next 12–24 months.

Run targeted AI training and workshops that focus on safe, practical use cases in each function, and be clear where AI will change tasks in the near term and where human judgment remains essential.

How this agentic AI risk could play out: In your customer service team, high users of AI tools cut handling time. Peers who avoid the tools feel left behind and worry about performance ratings. A focused programme that teaches prompt patterns, quality checks and escalation rules narrows the gap and calms anxiety.

Another emerging AI risk to consider: Wearable AI technologies

Wearables, such as AI-enabled glasses, can record and interpret what colleagues see and hear. Existing policies may not cover device behaviour, new data capture or physical‑world safety. You’ll also need to update acceptable‑use, privacy and conduct rules so colleagues know where devices are allowed, how recordings are handled and when extra permission is required.

How this emerging AI risk could play out: An employee visits a supplier site wearing AI-enabled glasses. The device captures whiteboards and side conversations. Even without intent, the recording triggers a data handling incident. Policy clarity and on-device restrictions would have prevented it.

What AI risk mitigations should your business prioritise now?

AI capabilities are evolving faster than many operating models. Focus on three priorities to keep your business resilient.

  1. Identify and analyse your specific AI risk exposures by sector, geography and business model.
  2. Modernise risk, control and governance processes so AI systems cannot bypass safeguards.
  3. Build disciplined horizon scanning and scenario planning so you can act early as technology changes, opening up new exposures as well as opportunities.

To get ahead of your changing AI risks, get in touch with our AI Risk Advisory specialists.
