Generative AI’s New Competitive Landscape
The Generative AI (GenAI) space has advanced significantly on the back of investments and product releases from major technology companies, which have strengthened their competitive edge. However, startups, open-source communities, and small language models (SLMs) have responded quickly and are reshaping the competitive landscape.
SLMs’ competitive advantages stem primarily from their ability to provide localized solutions that reduce reliance on cloud infrastructure. They are smaller, can be integrated far more quickly, and allow for focused expertise rather than broad, general-purpose solutions. SLMs typically range from 1 million to 10 billion parameters,[1] whereas large language models (LLMs) can range from several hundred billion to trillions of parameters. This shift is just one of many forces shaping the future of Generative AI and LLMs. For risk managers in all industries looking to utilize AI, understanding this dynamic is key to navigating both emerging threats and opportunities in the AI-driven market.
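To make the scale gap concrete, the parameter ranges above translate directly into hardware requirements. The sketch below is our own back-of-envelope illustration, not from the source: the 3-billion and 500-billion parameter figures are hypothetical points inside the ranges quoted above, and fp16 (2 bytes per parameter) is an assumed storage precision. Real serving footprints also include activations, KV cache, and runtime overhead.

```python
# Rough memory estimate for model weights alone, assuming fp16
# precision (2 bytes per parameter). Illustrative figures only.

BYTES_PER_PARAM_FP16 = 2

def weight_memory_gb(num_params: float) -> float:
    """Approximate gigabytes needed just to hold the weights in fp16."""
    return num_params * BYTES_PER_PARAM_FP16 / 1e9

# Hypothetical sizes within the ranges cited in the text:
# SLMs span ~1M-10B parameters; LLMs reach hundreds of billions or more.
for label, params in [("SLM, 3B params", 3e9),
                      ("LLM, 500B params", 500e9)]:
    print(f"{label}: ~{weight_memory_gb(params):,.0f} GB of weights")
```

A 3-billion-parameter SLM fits in roughly 6 GB, within reach of a single consumer GPU or even on-device deployment, while a 500-billion-parameter LLM needs on the order of 1,000 GB spread across a cluster of accelerators. This is the arithmetic behind SLMs’ localized, lower-cost appeal.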
To better understand the forces influencing the AI market, the WTW Research Network (WRN) partnered with the University of Pennsylvania’s Wharton School and its Mack Institute’s Collaborative Innovation Program (CIP) to examine this evolving landscape. Building on our previous work with the CIP and its Executive MBA students, Green Algorithms – AI and Sustainability, the WRN has sought to further examine the LLM competitive landscape, including new disruptions and opportunities for optimization and efficiency. This piece looks at GenAI’s impact on risk management frameworks. Part 2 will explore LLM Effectiveness at Scale, and Part 3 rounds out the series with a look at The Future of Hardware Computing, examining the implications for the market as the industry moves from the training to the inference phase.
A leaked Google memo in 2023 sounded the alarm: “we aren’t positioned to win this arms race… open-source [is] lapping us. Things we consider ‘major open problems’ are solved and in people’s hands today.”[2] In other words, smaller open-source projects and agile startups have dismantled many of the long-standing advantages that large AI labs had.
These advantages include access to high-end computing infrastructure, proprietary data, and top-tier research talent. Innovation that used to take months now happens in a matter of days. One recent industry analysis found that the time gap for open-source models to catch up with frontier models has shrunk to less than 24 hours,[3] signaling to well-funded incumbents that open-source models can now rival or surpass their own breakthroughs.
Meta’s release of Llama (an open-source LLM whose most recent release is Llama 4[4]) spurred a flood of community-driven offshoots and compressed the innovation cycle dramatically.[5] Meanwhile, startups like Mistral AI, Anthropic (maker of Claude), and Hugging Face provide open models or platforms that anyone can adapt and customize. Even industry leaders are exploring smaller models: Microsoft’s Phi-2 SLM (just a few billion parameters) claims to “outperform… larger models on math-related reasoning”,[6] and IBM’s new Granite SLMs are 3x–23x cheaper to run than “frontier” LLMs while matching their performance on key tasks.[7] This trend signals a shift toward efficiency and targeted performance rather than brute-force scaling. It also supports sustainability in AI, as targeted algorithms reduce energy needs and carbon emissions (see Solving the AI energy dilemma for the WRN’s research on how efficient AI models affect carbon footprint).
Across industries, startups, open-source consortia, and in-house model development are offering tailored, industry-specific and lower-cost alternatives to diversify from larger incumbent firms.
Media: Open-source voice and video models are enabling startups to bypass expensive content pipelines. AI-powered dubbing tools use generative speech technology to translate speech across languages while preserving tone and emotional nuance. Combined with synthetic avatars and voice cloning, these tools let smaller studios and independent creators reach global audiences more quickly, without the overhead of full-scale dubbing operations. This has reduced dependence on large studio-grade tools.[8]
Key risks to both groups include:
Generative AI’s competitive landscape is evolving. For risk managers across sectors, this calls for dual awareness: incumbents should be mindful of the speed of disruption, while challengers should consider building risk maturity as they scale. For companies, weighing the potential risks and challenges to their AI strategy creates an opportunity to turn uncertainty into advantage.