
AI Archetypes Are The Missing Link Between AI Investment And Real Return


14.01.2026

Organisations have never invested more heavily in AI, yet fewer than 1% believe they have reached AI maturity. At Morson, we see this moment clearly: the gap is not a technology problem; it is a people problem. Across sectors, leaders are discovering the same uncomfortable truth: AI tools are being deployed faster than humans can adapt to them, and the resulting friction is quietly eroding trust, performance, and ROI.

We call this a cognitive industrial revolution, and it demands a fundamentally different approach to AI adoption, one that starts with humans, not systems.

The hidden cost of getting AI adoption wrong

AI programmes rarely fail because the technology doesn’t work.

They fail because:

  • Employees don’t trust it
  • Leaders underestimate anxiety and resistance
  • Training is bolted on too late
  • Culture is treated as an afterthought

The data is stark:

  • 70% of digital transformations fail due to a lack of employee training
  • Prioritising training delivers a 30% improvement in adoption rates

Without understanding how different people perceive AI, organisations risk:

  • Shadow AI usage
  • Polarisation between adopters and resisters
  • Technostress and burnout
  • Underperformance of AI-assisted processes
  • Wasted capital investment

This is where AI Archetypes become critical.

Introducing AI Archetypes: A simple map of complex reactions

A useful way to understand these reactions comes from the concept of AI Archetypes, originally articulated by Reid Hoffman. Rather than treating the workforce as a single audience, AI Archetypes recognise four broad identities that coexist in most organisations:

Zoomers

Highly enthusiastic early adopters. They move fast, experiment freely, and see AI primarily as a competitive advantage. They are confident, sometimes overconfident, and expect others to keep up.

Bloomers

Cautiously optimistic. They recognise genuine risks but believe good governance, skills, and design can unlock broad benefit. They are comfortable with iteration and learning in public.

Gloomers

Resigned and pessimistic. They see AI as inevitable but expect job loss, inequality, and surveillance to outweigh the upside. Their concern is less about whether AI happens and more about who pays the price.

Doomers

Existentially concerned. They frame AI as a fundamental threat not just to jobs, but to humanity itself. They often favour strong limits, moratoriums, or outright prevention.

What Happens When Leaders Ignore AI Perceptions

When organisations deploy AI without addressing these archetypes explicitly, three predictable failure modes emerge:

1. Shadow AI and Fragmented Adoption

Zoomers move ahead anyway, experimenting outside policy boundaries. Gloomers disengage or quietly resist. The result is uncontrolled tool usage and uneven capability.

2. Trust Erosion and Poorer AI Performance

When people don’t trust AI systems, they stop feeding them high-quality inputs. Bias increases. Accuracy drops. Value declines.

3. The Productivity–Wellbeing Gap

AI may deliver efficiency gains on paper while increasing anxiety, cognitive load, and burnout. The organisation becomes faster but not healthier.

Reframing AI Policy as a People Policy

AI Archetypes don’t just describe attitudes; they predict risk, adoption behaviour, and ROI leakage.

We are at a critical intersection of people, technology, and policy. Organisations that lead on perception mapping, AI archetype awareness, and human-centred “rules of the road” will not just manage AI risk; they will actively design a future of work where humans and AI agents do their best work together.

Perhaps the most powerful insight from AI Archetypes is this: different people need different support to arrive at the same destination.

  • Zoomers need safe sandboxes, places to experiment without breaching policy or creating risk. They are your innovators.
  • Bloomers need opportunities to co-design workflows, shape training, and lead AI literacy efforts. They are your bridge-builders.
  • Gloomers and Doomers need dialogue, reassurance, and meaningful involvement in risk and governance forums. They are your stress-testers.

When people see themselves reflected in the language you use, the protections you prioritise, and the roles you invite them into, trust increases, adoption follows, and value compounds.

A strong AI policy should therefore feel like a people policy that happens to involve technology, not a technology policy that occasionally mentions people. Human-centred AI policy rests on three non-negotiables:

  • Transparency: People must always know when AI is involved in decisions or processes that affect them.
  • Human Oversight: Consequential decisions should never be left solely to an AI agent. There must always be an accountable human.
  • Right to Contest: Employees need clear routes to question and appeal AI-influenced outcomes, protecting dignity, agency, and trust.

Treat these archetypes as one audience, and adoption fragments. Recognise them properly, and something powerful happens: resistance turns into insight, and fear becomes signal.

The Morson AI Archetypes Consultancy: how we can help

Our Decoding AI & Humans consultancy is a structured, end-to-end people strategy for AI adoption, designed to protect investment, reduce risk, and accelerate value.

Stage 1: Discovery: See the Reality, Not the Assumptions

We begin by mapping every human–AI interface across your organisation.

This includes:

  • Skills readiness
  • Workflow impact
  • Cultural risk
  • Perception mapping using AI Archetypes

At the heart of this phase is The Morson Measure, our proprietary diagnostic, which evaluates:

  • Cultural readiness for AI
  • Engagement and trust
  • Risk hotspots that threaten adoption

This gives leaders hard data on soft issues and a baseline to track progress over time.

Stage 2: Action: From Insight to Behaviour Change

This phase turns insight into momentum. We support organisations with:

  • Human-centred AI policy creation
  • Ethical AI and governance frameworks
  • Training and AI literacy programmes
  • Leadership communication strategies
  • Change management aligned to archetype realities

Crucially, this phase addresses technostress, the psychological and physiological strain caused by rapid AI integration, before it erodes wellbeing and performance.

Stage 3: Embed: Making AI Adoption Stick

AI success is not a launch moment; it is a cultural outcome. We embed sustainable adoption through:

  • Ongoing measurement via The Morson Measure
  • Archetype tracking across functions
  • Engagement 2.0: building internal “fan culture” for AI
  • Continuous improvement and governance refinement

Our support doesn’t end with delivery. Clients gain access to proprietary research, advisory support, and evidence-led insight that strengthens both AI outcomes and the employer value proposition.

Why This Matters Now

AI regulation is tightening, employee scrutiny is increasing, and litigation risk around human harm is real. AI Archetypes give leaders a clear, practical roadmap for navigating the human reality of AI, not in theory but in lived organisational experience. This is how you:

  • Unlock sustainable AI value
  • Protect capital investment
  • Reduce adoption risk
  • Build trust at scale

Organisations that fail to actively design the human experience of AI will spend the next decade firefighting resistance, ethics concerns, and unrealised value. Those that get it right will do something far more powerful: they will turn AI from a tool into a trusted collaborator.
