01.04.2026
Board-level AI risk is no longer just about models, vendors or data; it is about whether your workforce can live with AI without burning out, disengaging or taking you to court. Executive teams cannot afford to treat the AI/human interface as a “soft” cultural concern.
The picture painted in Raconteur’s piece on “AI‑related job carnage” will be painfully familiar to many leaders: heightened anxiety, drop‑off in trust, quiet resistance, and a growing sense that AI is something being done to people, not with them. That cocktail is not just a cultural problem; it is a live operational and legal risk. If leaders ignore this ‘technostress’ – the psychological and organisational strain of working alongside AI – it will translate directly into lost productivity, failed AI investment and increased exposure to grievances, tribunals and regulatory challenge.
AI fear is now a balance-sheet risk
Behind the headlines about job loss sits something more immediate: a growing layer of technostress and uncertainty as workers try to adapt to AI‑enabled tools without clear guidance, training or guardrails. Research shows technostress is associated with anxiety, reduced performance, absenteeism and a sense of loss of control – all of which quietly hit productivity and safety long before restructures are announced.
At the same time, AI is beginning to appear directly in employment disputes. Tribunals have already considered AI‑driven facial recognition systems that led to indirect race discrimination findings under the Equality Act 2010, and legal advisers are warning that reliance on opaque AI outputs in dismissal or performance decisions can undermine the fairness of process. In parallel, regulators and commentators are flagging AI‑related psychosocial harm and surveillance concerns as part of employers’ duties to protect workers’ health and safety.
In this context, workforce sentiment and behavioural readiness now directly shape AI ROI, capital investment risk, and even your standing as an employer of choice. Understanding the human experience of AI is not a nice‑to‑have; it is a critical input for sustainable transformation and board‑level assurance.
Three converging threats boards must see
For boards, the human side of AI adoption now concentrates risk in three areas:
- Value erosion through technostress and disengagement
When AI is rolled out faster than people can adapt, workers experience techno‑insecurity (fear of job loss), techno‑overload (too many tools, too fast) and techno‑uncertainty (constant change). These patterns correlate with lower performance, higher error rates and turnover, turning ambitious AI investments into stranded assets.
- Direct exposure to AI‑linked litigation
AI‑assisted decisions in recruitment, performance management, allocation of shifts or disciplinary processes can create discrimination or unfair dismissal risk if they introduce bias, lack transparency or erode meaningful human oversight. Boards that cannot explain how AI influences people decisions will struggle to defend those decisions under equality, employment and data protection law.
- Reputational and employer‑of‑choice damage
Workers who feel surveilled, sidelined or misled about AI’s impact on their jobs are more likely to disengage, organise, litigate or leave – particularly in high‑skill, high‑scarcity segments. In a market where AI capability is commoditised but trusted human capability is scarce, that is a direct threat to long‑term competitiveness.
None of these sit neatly within a single function. They cut across strategy, risk, operations and governance and require board-level visibility.
Moving beyond averages: understanding workforce response
One of the more persistent blind spots in AI adoption is the reliance on averaged sentiment: headline figures that obscure how differently AI is experienced across a workforce.
In practice, response to AI tends to fragment into distinct behavioural patterns. Some groups lean in quickly, others adapt cautiously, while some resist or disengage entirely. These are not fringe dynamics; they shape how AI is actually used day to day.
Morson’s work in this space – particularly through AI Archetype Mapping – reflects a broader shift away from treating the workforce as a single audience. By segmenting how different groups engage with AI, organisations can start to see where risk and opportunity genuinely sit: where adoption will accelerate, where it will stall, and where unintended behaviours such as shadow AI are most likely to emerge.
This is less about categorisation for its own sake, and more about replacing assumptions with evidence.
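To make the segmentation idea concrete, here is a minimal sketch of how behavioural groupings could be derived in principle: clustering anonymised survey responses and profiling each cluster. The survey questions, feature names and group labels below are invented for illustration only – this is not a description of Morson's AI Archetype Mapping methodology.

```python
# Illustrative only: behavioural segmentation via k-means clustering on
# hypothetical AI-attitude survey scores. All features and labels are
# invented; this is not Morson's AI Archetype Mapping method.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Hypothetical survey: each row is one employee, each column a 1-5 Likert score.
# Columns: enthusiasm for AI tools, perceived job insecurity, self-rated AI literacy.
responses = rng.integers(1, 6, size=(500, 3)).astype(float)

# Standardise so no single question dominates the distance metric.
scaled = StandardScaler().fit_transform(responses)

# Fit a small number of clusters; in practice the cluster count would be
# chosen with diagnostics such as silhouette scores, not fixed in advance.
model = KMeans(n_clusters=3, n_init=10, random_state=42).fit(scaled)

# Profile each cluster by its mean raw scores so it can be given a working
# label (e.g. "early adopters", "cautious adapters", "disengaged").
for cluster_id in range(model.n_clusters):
    profile = responses[model.labels_ == cluster_id].mean(axis=0)
    print(f"Cluster {cluster_id}: enthusiasm={profile[0]:.2f}, "
          f"insecurity={profile[1]:.2f}, literacy={profile[2]:.2f}")
```

The design point is the profiling step: a segment only becomes useful once it can be described in behavioural terms the business recognises, rather than as an unlabelled statistical cluster.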
Technostress is a design signal, not an inevitability
There is a tendency to treat technostress as the price of progress, something to be managed after the fact. That framing is increasingly hard to defend. Emerging evidence suggests technostress is not random; it clusters around specific conditions: unclear expectations, perceived surveillance, low AI literacy and weak governance. In other words, it often reflects how AI has been introduced, not just that it has been introduced.
This reframes the issue. Instead of asking how to mitigate negative impacts, boards should be asking what those impacts are signalling about design, communication and oversight. Approaches such as Morson’s Risk‑to‑Readiness thinking point in this direction, treating workforce response as something that can be measured, interpreted and acted on, rather than absorbed as background noise.
AI risk is becoming operational, not theoretical
It is easy to position AI risk as something emerging, just over the horizon of regulation. But many of the underlying issues are already visible in today’s operating environment.
- Decision opacity is colliding with expectations of fairness and explainability.
- Workforce monitoring is intersecting with wellbeing and safety obligations.
- Unofficial use of AI tools is creating new forms of data and IP exposure.
These are not edge cases; they are early signals of how AI is embedding into everyday organisational risk. The common thread is not the technology itself, but the gap between capability and control. Morson’s AI Workforce Strategy & Human‑Centred Adoption work sits within this context, not as a technology solution, but as a way of making those gaps visible and actionable before they crystallise into more material issues.
What boards should be asking now
Given the trajectory described in the Raconteur article, boards should be asking:
- Do we have visibility of workforce sentiment, technostress and behavioural readiness for AI by role, function and location – or are we relying on anecdotes? (A simple sketch of this kind of breakdown follows below.)
- Where, specifically, does AI influence hiring, performance, scheduling, safety‑critical decisions or exits, and how robust is our human oversight?
- Can we evidence to regulators, investors and courts that we have considered the psychosocial and equality implications of AI in our risk management?
- How is AI adoption affecting our status as an employer of choice in scarce talent markets?
If the honest answer is “we don’t know”, AI has become a board‑level blind spot.
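On the first of those questions, here is a minimal sketch – using entirely hypothetical data and column names – of why a per-function breakdown of technostress subscales (techno‑insecurity, techno‑overload and techno‑uncertainty, as described earlier) gives a board more to work with than a single workforce-wide average.

```python
# Illustrative only: aggregating hypothetical technostress survey subscales
# by function, so the board sees a breakdown rather than one averaged figure.
# The functions, column names and scores are invented for this sketch.
import pandas as pd

survey = pd.DataFrame({
    "function":    ["Engineering", "Engineering", "Finance", "Finance", "Ops", "Ops"],
    "insecurity":  [2.1, 3.4, 4.2, 3.9, 2.8, 3.1],   # 1-5 Likert subscale means
    "overload":    [3.8, 4.1, 2.5, 2.9, 3.3, 3.6],
    "uncertainty": [3.0, 3.5, 3.8, 4.0, 2.7, 2.9],
})

# Mean subscale score per function; the workforce-wide mean hides the spread.
by_function = survey.groupby("function").mean(numeric_only=True)
overall = survey[["insecurity", "overload", "uncertainty"]].mean()

print(by_function.round(2))
print("\nWorkforce-wide average (what a headline figure would report):")
print(overall.round(2))
```

Even in this toy example, Finance shows elevated insecurity while Engineering shows elevated overload: two different problems, invisible in the averaged figure, each calling for a different intervention.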
From blind spot to board assurance
The organisations that navigate this well will not necessarily be those with the most advanced models, but those with the clearest view of how AI is actually experienced across their workforce.
That is where our approach, combining workforce risk measurement, behavioural segmentation and human‑centred policy design, is starting to gain traction. Not as an overlay to AI strategy, but as a way of grounding it in operational reality.
The shift here is subtle but important: from asking “Are we using AI?” to “Can we explain, evidence and sustain how we are using AI?”
AI will not fail because the models aren’t powerful enough. It will fail where organisations move faster than their workforce can adapt, and where boards fail to treat workforce sentiment, technostress and behavioural readiness as core levers of AI ROI and risk. The opportunity now is to bring those levers squarely into the boardroom.
If you would like board‑grade visibility of how your workforce is really experiencing AI – including technostress, readiness and litigation exposure – start with an AI Workforce Risk‑to‑Readiness assessment. We’ll provide a concise, evidence‑based view your board can use to challenge assumptions, target investment and demonstrate responsible AI governance.
Our AI offer is led by Morson’s Head of Client Innovation, Luciana Rousseau, a postgraduate researcher whose work sits at the intersection of behavioural research, the human–AI interface, and the ethics surrounding that relationship. Contact Luciana.Rousseau@morson.com to scope a foundational visibility report tailored to your sectors, roles and risk profile.