Around this shift, three narratives are beginning to form: AI as a growth engine for investors; AI as a source of anxiety and opportunity for employees; and AI as a growing governance and ethics concern for regulators and the public.
Inside many large organisations, positioning around the investor story is clear, shaped by pressure from markets and analysts. But the bigger picture remains hazy. Many corporate affairs teams are caught in the operational weeds of AI adoption, and those who have stepped back to consider the wider trust implications are understandably nervous about putting their heads above the parapet, wary of first-mover disadvantage. That communications gap is where a new trust-related reputation risk is building, and where reputational upside waits for those willing to move early and carefully.
From the perspective of ‘the markets’, AI is a core driver of future value. AI-exposed stocks are outpacing broader indexes, and AI is now embedded in share prices, earnings calls, and CEO commentary.
Set against this story of promise is a counter-story of friction. Adoption is high, but evidence of strategic or financial value is far less clear. Recent studies suggest that few companies yet report any meaningful return on AI investment, and fewer still can point to a clear financial gain.
This is the "AI Paradox" in practice. Many companies have bought enterprise licences, hired AI leads and launched pilots. But pilots that work in controlled environments often stall when they meet real workflows and legacy systems. When the rubber hits the road, the journey is far from smooth.
One major underlying issue is that AI is often treated as if it were a software update – buy the tools, plug them in, watch transformation happen. In reality, effective adoption requires a fundamental "rewiring" of the way a business operates.
This creates a split reality: from the boardroom, investors hear a story of rapid efficiency and productivity gains; on the ground, leaders are learning that returns will come only once workflows and skills are rewired.
While most AI-related external communication targets capital markets, companies are careful, even cautious, about saying more in public settings because they fear moving first in a field that remains fluid.
This silence won’t remain neutral for long. If companies don’t begin to outline their AI positioning beyond investors, others will write that story instead. We are already starting to see this play out in two specific areas where the "investor story" collides with reality:
The danger is clear. A line that reads well for an analyst can raise red flags for other stakeholders. The gap between these audiences is where the trust deficit grows.
Corporate affairs already sits at the intersection of commercial, social, and political expectations. AI now belongs firmly in that space. This doesn’t mean corporate affairs leads should take ownership of AI delivery or technical choices. It means owning the coherence of the overarching corporate positioning around AI. To close the trust gap, three narratives need to line up as part of this positioning:
The goal here is not to deploy a detailed playbook in public, but instead to ask a sharper set of questions about who owns the AI trust story.
This is not a checklist, but a set of prompts to test how ready your organisation is for the scrutiny that is coming.
AI is no longer just a technical initiative. It is now part of how companies are valued and judged.
Most large organisations are still focused on adoption - tools, pilots, policies. That work matters, but it is not the whole story. The gap between the "investor promise" and the "stakeholder reality" is where trust risks are building. Those who begin to work through these contradictions now will be better placed when scrutiny from investors, employees, and regulators hardens into specific demands.