11.02.2026
As we step further into an AI-augmented world, ethics has become central to adoption. Most businesses have invested in AI technology over the past few years, so there’s no doubt it’s here to stay, but ethics and bias are often overlooked or subject to shortcuts, driven by a fear of being left behind by the pace of change. A good example is the phenomenon of shadow hiring, where teams bypass traditional legal, HR and talent acquisition checkpoints to rapidly build out AI capabilities and keep up with the perceived pace of the market, creating potential ethical and compliance issues.
A government report estimates that effective AI adoption could add up to £400 billion to the UK economy by 2030 through innovation and productivity gains. Yet it also identifies significant barriers, including poor employer understanding of AI skills, basic digital literacy gaps in sectors such as construction, and untrained, unrestricted AI use.
The stakes couldn’t be higher. The way leaders navigate this evolving relationship will shape the future of work itself. The demand for responsible AI and strong regulation is growing, forcing companies to make critical choices about how to deploy and govern these technologies.
One truth remains clear: when it comes to high-stakes decisions, human oversight is essential. Based on Dr. Sarah Bankins’ research:
“People recognised both the advantages and limitations of AI in these decision-making contexts. While AI was seen as useful, there was a strong preference for keeping humans in a dominant role, ensuring that decisions were not left solely to an algorithm.”
AI ethics in leadership
Leadership transparency and authenticity are key to building trust during a potentially prolonged period of technology transition. The communications strategies leaders deliver will have an instrumental impact on adoption success and the human experience. When leaders exhibit agency and optimism towards AI, employees are more likely to embrace it: they are 1.5x more likely to use AI and nearly 3x more likely to show agency and optimism themselves. Leading companies are setting AI principles aligned with core values such as privacy and workplace impact. Clear messaging is key: employees need to see AI as a tool for collaboration, not replacement. When AI supports rather than sidelines workers, it enhances job fulfilment and workplace experience.
But it’s not just about getting employees on board with the technology. Ethics in AI goes beyond surface-level compliance and corporate communications along the lines of “we’re optimistic about how we use AI”. For some companies, the compliance checkpoints stop at the following:
It’s gone through the legal channels
Some companies have banned the use of certain AI platforms such as ChatGPT altogether, citing the risk of sharing proprietary data and insights with a public platform. Others have greenlit its use from a legal standpoint.
But do the legal minds possess the necessary knowledge of AI to assess its usage accurately? And who has ultimate sign-off – a legal mind, a technical mind, or both?
We run an AI ethics committee with constant feedback
This may sound good, and certainly should be part of ongoing improvement and measurement. But who is on this committee, what knowledge do they have about AI, and do they have oversight of broad company usage?
It can’t be biased or unethical – we tell the algorithm not to be
Again, with all the will in the world and as advanced as the technology may be, simply telling the AI to play fair isn’t necessarily going to cut it. Are employees trained to spot its potential pitfalls?
The ‘human in the loop’ leadership stance: steps for more ethical AI usage
Ethical AI usage doesn’t stop at a few regulatory box-ticking exercises. It’s a longer-term project that requires analysing your business, identifying the key risk areas and developing accountability structures.
Plan for when AI gets it wrong and train teams
AI is already shaping how decisions get made. The smarter move is planning for the moments it gets things wrong. No model is flawless. Bias creeps in. Data drifts. Context gets missed. Businesses that treat AI as infallible set themselves up for risk, not progress. The answer is not slowing adoption. It is building fail-safes around it.
Start with clear accountability. Every AI-driven decision should have a named human owner. Someone who understands the model’s limits, challenges its outputs, and knows when to override it. Human-in-the-loop processes are not a brake on innovation. They are what keep it on the road.
Next, plan for failure before it happens. Run scenario testing where AI outputs are wrong, incomplete, or misleading. What happens if an automated system rejects strong candidates, flags the wrong risks, or amplifies weak data? If there is no documented response, the system is not ready for scale.
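To make this concrete, here is a minimal, illustrative sketch (in Python) of what a human-in-the-loop fail-safe can look like in practice: the model’s recommendation is never actioned automatically when confidence is low or the outcome is adverse, and every decision carries a named owner and a recorded reason. All names, thresholds and the stubbed model are hypothetical, not a reference to any specific system.

```python
# Illustrative sketch only: the model stub, thresholds and field names
# are hypothetical placeholders, not a real screening system.
from dataclasses import dataclass


@dataclass
class Decision:
    outcome: str   # "advance", "reject", or "needs_human_review"
    score: float   # model confidence, 0.0 to 1.0
    owner: str     # named human accountable for the decision
    reason: str


def ai_screen(candidate: dict) -> tuple[str, float]:
    """Stand-in for an AI screening model; returns a canned recommendation."""
    if candidate.get("years_experience", 0) >= 3:
        return "advance", 0.55
    return "reject", 0.62


def screen_with_oversight(candidate: dict, owner: str, min_confidence: float = 0.75) -> Decision:
    """Human-in-the-loop wrapper: low-confidence or adverse outputs are
    routed to a named owner instead of being actioned automatically."""
    recommendation, score = ai_screen(candidate)

    # Fail-safe 1: never auto-action a low-confidence output.
    if score < min_confidence:
        return Decision("needs_human_review", score, owner,
                        "Model confidence below documented threshold")

    # Fail-safe 2: adverse outcomes always require human sign-off.
    if recommendation == "reject":
        return Decision("needs_human_review", score, owner,
                        "Adverse recommendation requires human sign-off")

    return Decision(recommendation, score, owner, "Auto-approved within policy")


if __name__ == "__main__":
    # Scenario test: a strong candidate the stub model would wrongly reject.
    strong_candidate = {"name": "A. Example", "years_experience": 2, "relevant_projects": 6}
    decision = screen_with_oversight(strong_candidate, owner="Hiring Manager, Team X")
    print(decision)
    # The documented response: the decision is parked for review with a named
    # owner and a recorded reason, rather than silently actioned.
    assert decision.outcome == "needs_human_review"
```

The point of the sketch is the shape, not the code: a named owner attached to every decision, explicit thresholds written down in advance, and a documented route for what happens when the model is wrong.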
Training teams matters just as much as tuning models. People need to understand how AI reaches conclusions, not just how to click the button. That means practical training on data quality, bias awareness, prompt discipline, and decision review. Teams that understand AI are far better at spotting when it is guessing rather than knowing.
Finally, treat AI capability as a workforce challenge, not a software one. The organisations that win are those blending technical specialists with commercially sharp operators who can translate insight into action and stop problems early.
Awareness of negative employee impact
Most people can identify someone in their life who relies on AI perhaps a little too much, be it for writing emails or even generating talking points in meetings. However, as with most quick fixes, the short-term benefit risks being overshadowed by the medium- and long-term problems it can cause, most notably the rise of ‘cognitive sloths’.
Cognitive sloths are individuals who use AI to avoid effortful thinking altogether, even when a situation clearly demands care, reflection or analysis. The instinct to default to the easy, habitual or superficial response, or the path of least resistance, becomes problematic when dealing with more complex problems and issues.
AI adoption isn’t consistent across individuals and generations, either. AI Archetypes describe the four dominant ways employees respond to workplace AI: Zoomers, Bloomers, Gloomers, and Doomers. From the highly enthusiastic early adopters (Zoomers) to the resigned and pessimistic Doomers, understanding how each archetype will adapt is crucial to building an ethical environment, since different risks apply to different people.
Making managers accountable
Ethical AI use cannot sit with a single team or be parked as a compliance exercise. It has to be owned by managers across the business because AI now influences everyday decisions, not just technical ones.
Managers decide how tools are used in real situations. They approve processes, set targets, and judge outcomes. If AI is shaping hiring shortlists, performance metrics, customer interactions, or risk assessments, managers are directly responsible for the consequences. Delegating ethics to data scientists alone creates a gap between how systems are designed and how they are actually applied.
Shared responsibility also reduces risk. Ethical failures rarely come from one bad line of code. They come from pressure, shortcuts, and unclear accountability. When managers understand bias, transparency, data privacy, and model limitations, they are better equipped to spot issues early and challenge results that do not pass the common-sense test. That protects customers, employees, and the organisation’s reputation.
There is also a leadership signal at play. People take cues from their managers. If leaders treat AI outputs as unquestionable, teams will follow suit. If managers encourage scrutiny, context, and human judgement, ethical behaviour becomes part of day-to-day decision-making rather than an abstract policy.
AI policy as a people policy first
AI policy works best when it starts with people, not platforms. Too many organisations treat AI governance as a technical exercise: a set of rules written by IT or systems teams, focused on tools, architectures, and risk controls. Necessary, yes. But insufficient. AI does not just change systems; it changes how work gets done, how decisions are made, and how people are managed, trusted, and held accountable. That makes AI policy a human policy first.
The real questions are not about models or infrastructure. They are about behaviour, judgement, ownership, and impact. Who is accountable when an AI-supported decision goes wrong? How do teams challenge outputs they do not agree with? What skills do leaders need to manage people working alongside intelligent systems? Where does human judgement sit, and where must it always prevail?
These are organisational questions. Cultural ones. Leadership ones. They sit naturally with HR, legal, risk, and the business, not just IT.
Technology should be referenced, not centred. AI policy should set clear principles for fairness, transparency, data use, capability building, and decision rights. It should define how people are trained, supported, and protected as AI reshapes roles and workflows. Systems teams then translate those principles into tools, controls, and architecture.
Get this the right way round and AI becomes an enabler of better work, not a source of confusion or fear. Think sharper. Put humans first. Technology will follow.
Luciana Rousseau leads the development of human-centred strategies that connect behavioural research with organisational transformation. With deep expertise in the psychology of work, Luciana helps leaders understand the motivations, behaviours, and cultural dynamics that shape performance. Get in touch with her at Luciana.rousseau@morson.com
Let’s talk about how we can help you shape smarter, more inclusive ways to attract and retain specialist talent. Because at the sharp end, there’s no time to stand still.