AI, trust and the emerging reputation gap in corporate adoption

7 Jan 2026
Read: 5 min

Tom Cook, director of Digital and AI at Lansons, examines the growing trust gap around AI and what it means for corporate affairs.

Tom Cook
Director, Digital and AI

AI has moved quickly from an IT-adjacent curiosity to a key factor in how companies are valued.

Around this shift, three narratives are beginning to form: AI as a growth engine for investors; AI as a source of both anxiety and opportunity for employees; and AI as a growing governance and ethics concern for regulators and the public.

Inside many large organisations, positioning around the investor story is clear, shaped by pressure from markets and analysts. But the bigger picture remains hazy. Many corporate affairs teams are caught in the operational weeds of AI adoption, and those who have taken a step back to consider the wider trust implications are understandably nervous about putting their heads above the parapet, wary of first-mover disadvantage. That communications gap is where a new trust-related reputation risk is building, and where reputational upside waits for those willing to move early and carefully.

The AI paradox in corporate life

From the perspective of ‘the markets’, AI is a core driver of future value. AI-exposed stocks are outpacing broader indices, and AI features prominently in share prices, earnings calls and CEO commentary.

Set against this story of promise is a counter story of friction. Adoption is high, but evidence of strategic or financial value is far less clear. Recent studies suggest few companies yet report any meaningful return on AI investment, and fewer still can show a clear financial gain.

This is the "AI Paradox" in practice. Many companies have bought enterprise licences, hired AI leads and launched pilots. But pilots that work in controlled environments often stall when they meet real workflows and legacy systems. When the rubber hits the road, the journey is far from smooth.

One major underlying issue is that AI is often treated as if it were a software update – buy the tools, plug them in, watch transformation happen. In reality, effective adoption requires a fundamental "rewiring" of the way a business operates.

This creates a split reality: from the boardroom, investors hear a story of rapid efficiency and productivity gains; on the ground, leaders are learning returns will only come once workflows and skills are rewired.


The Trust Gap: Why Silence is Dangerous

While most AI-related external communication targets capital markets, companies are cautious about saying more in public settings because they fear moving first in a field that remains fluid.

This silence won’t remain neutral for long. If companies don’t begin to outline their AI positioning beyond investors, others will write that story instead. We are already starting to see this play out in two specific areas where the "investor story" collides with reality:

  • The Workforce Flashpoint. Investors reward stories of efficiency and automation. But the same language lands differently for an employee unsure about their future prospects. Two media angles are forming here – "automation and displacement" vs. "skills and upskilling". The former dominates the headlines, with job cuts frequently and tightly linked to AI investment. The latter is largely absent from a corporate perspective, opening companies to a backlash from unions and policymakers who view AI through the lens of inequality and worker protection.
  • The Governance Trap. Simultaneously, regulators are shifting from observation to action. Public support for oversight is building, and media coverage of ethical lapses – from bias to "AI washing" – is unforgiving.

The danger is clear. A line that reads well for an analyst can raise red flags for other stakeholders. The gap between these audiences is where the trust deficit grows.


Corporate Affairs as the Integration Function

Corporate affairs already sits at the intersection of commercial, social, and political expectations. AI now belongs firmly in that space. This doesn’t mean corporate affairs leads taking ownership of AI delivery or technical choices. It means owning the coherence of overarching corporate positioning around AI. To close the trust gap, three narratives need to line up as part of this positioning:

  • The Investor Story. Moving beyond hype to specific proofs of value.
  • The Workforce Story. Balancing efficiency claims with a concrete account of how people are supported through change.
  • The Governance Story. Moving beyond "principles" to show how controls on data and bias actually work in practice.

The goal here is not to deploy a detailed playbook in public, but instead to ask a sharper set of questions about who owns the AI trust story.


Questions for Leadership

This is not a checklist, but a set of prompts to test how ready your organisation is for the scrutiny that is coming.

  • Narrative vs. Reality. How closely does your public AI story match what is actually happening in the business today?
  • The "Overheard" Test. How would the AI wording in your annual report read to an anxious employee, a union lead, or a regulator?
  • Jobs and Skills. What story is your workforce hearing about AI and jobs right now – and does that story come from you, or from headlines and rumour?
  • Governance in Plain Sight. If you had to show, not just describe, how you use AI responsibly, what concrete examples could you put on the table today?
  • Ownership. Who in your organisation currently owns the "AI trust" narrative, and is corporate affairs in that conversation early enough?


Conclusion

AI is no longer just a technical initiative. It is now part of how companies are valued and judged.

Most large organisations are still focused on adoption – tools, pilots, policies. That work matters, but it is not the whole story. The gap between the "investor promise" and the "stakeholder reality" is where trust risks are building. Those who begin to work through these contradictions now will be better placed when scrutiny from investors, employees, and regulators hardens into specific demands.
