Our Platform

Expert-led content

Hundreds of expert-presented, on-demand video modules

Learning analytics

Keep track of learning progress with our comprehensive data

Interactive learning

Engage with our video hotspots and knowledge check-ins

Testing and certifications

Gain CPD / CPE credits and professional certification

Managed learning

Build, scale and manage your organisation’s learning

Integrations

Connect Data Unlocked to your current platform

Featured Content

Implementing AI in your Organisation

In this video, Elizabeth explains how organisations can successfully adopt AI and data science by fostering a data-driven culture and strategically implementing AI projects.

Blockchain and Smart Contracts

In the first video of this video series, James explains the concept of blockchain along with its benefits.

Ready to get started?


The trust advantage: Credible leadership in the age of AI

5 mins to read

Henry White
Co-founder and CEO of xUnlocked

How to build the data capability, culture, and confidence to turn AI risk into strategic opportunity.


In 2025, lawyers in two separate cases found themselves on the wrong side of the UK courts. They had used AI to present numerous case-law citations that were either completely fictitious or contained made-up passages.

In a warning issued to legal professionals, Dame Victoria Sharp, President of the King’s Bench Division, cautioned: “The responses may make confident assertions that are simply untrue. They may cite sources that do not exist. They may purport to quote passages from a genuine source that do not appear in that source.”

For an industry built on precision and precedent, these cases cut deep, even shaking confidence in the justice system. But they also serve as a wider signal to every sector now experimenting with AI. If professionals trained to interrogate evidence can be duped by AI, we are all at risk.

How can we ensure we are building a future that remains rooted in truth, capability and trust?

Building the capabilities for the future of AI

We are already living in an age when misinformation moves faster than fact. BBC Verify recently had to correct Grok, X’s AI chatbot, after it falsely claimed aerial footage of the “No Kings” anti-Trump protest was from 2017. The falsehood went viral before the correction could catch up. And despite large language models (LLMs) becoming more sophisticated, their “hallucinations” appear to be increasing, not disappearing, further challenging our shared grasp of truth.

AI is built to please us, to provide answers. But that very design creates risk: the confident wrong answer, the plausible fabrication, the unchallenged bias.

An AI-enabled world we can still trust depends not on technology, but on people. We must equip individuals and teams across our businesses with the judgment to use AI tools ethically and intelligently. Building skills in AI ethics and data trust is now central to protecting your business’s reputation, building resilience, and executing purposeful change, rooted in reality.

Only then can you be confident that AI will enhance and transform your business for the better.

1. Trust as a competitive advantage

Once, trust was a soft concept. Today, it determines customer loyalty, investor confidence, and even recruitment success. It has become a strategic objective in its own right.

Take the recent example of Capita, which was fined £14 million by the UK Information Commissioner’s Office after a data breach exposed the personal information of 6.6 million people. Beyond the penalty, the company faced public scrutiny and client concern, a reminder that data governance failures instantly translate into reputational damage.

AI further heightens these data risks because many organisations are still in the early stages of adopting AI. According to Deloitte’s second edition of Governance of AI: A critical imperative for today’s boards, more boards are getting up to speed on AI, with most wanting the pace to accelerate. But 66% of the study’s 700 senior respondents across 56 countries also said their boards still have “limited to no knowledge or experience” with AI.

In the capabilities gap, dangers proliferate.

But this also presents companies with an opportunity: to demonstrate ethical excellence. Just as financial literacy builds financial resilience, data literacy builds reputational resilience — turning risk management into strategic advantage.

Take IBM, for example, an early mover in recognising ethical considerations in AI, and swiftly integrating these principles into their product range and consulting services. Or Salesforce, which was recently named one of the World’s Most Ethical Companies by the Ethisphere Institute, a global authority on corporate ethics, for its strong governance, values-driven culture, and leadership in responsible technology.

These companies are turning rapid change into an opportunity for differentiation, making data responsibility the defining skill set of the decade.

2. Beyond compliance: Building a culture of responsibility

In 2024, the UK’s Institute of Business Ethics urged every organisation to appoint an AI ethics lead or committee, warning that without these, firms “risk breaching privacy rules and damaging their reputations.” Yet it’s not compliance frameworks alone that create credibility; it’s the expertise and insight of human beings.

The cases against Microsoft and Google, brought by Manchester’s Barings Law on behalf of 15,000 claimants, allege that both companies used personal data — voices, messages, app activity — to train AI models without explicit consent. Regardless of the legal outcome, the perception of unethical practice has already cost trust.

A culture of responsibility means every employee understands not only what is permitted, but why it matters. When teams are trained in ethical data use — privacy, consent, transparency — trust becomes self-reinforcing. People take ownership of the standard, rather than merely feeling obliged to follow the rules they often barely grasp.

3. AI risk = human risk

Every algorithm is a reflection of the people behind it.

Bias in data is not a technical flaw; it’s a human inheritance. When AI replicates bias, it amplifies it at scale. The Clearview AI case, in which the UK Upper Tribunal confirmed that scraping facial images from the web without consent violates GDPR, underscores how the misuse of data can cross from innovation into intrusion overnight.

Simply put, AI ethics cannot be outsourced to the machine. Human oversight is the trust mechanism, not the friction point.

In the workplace, that means designing AI governance structures that keep people in the loop, training them to question outputs, validate sources, and recognise when automation conceals bias.

The outcome isn’t slower decision-making. It’s smarter, safer, and more defensible decision-making, the kind that protects both brand and bottom line, and opens up genuine paths to innovation.

4. Democratising data ethics

Ethical data use should not live solely in the data-science or compliance teams. In a hyper-connected world, everyone is a data practitioner to some degree.

A recent Microsoft survey found that 71% of UK employees admitted to using unauthorised AI tools at work, with more than half using AI every week to enhance productivity. This ‘shadow AI’ risks not only inconsistent standards but also unintentional leaks of sensitive information.

Training can turn that risk into resilience. When every employee understands their role in data ethics, risk management becomes second nature, rather than an afterthought.

Building data capability across the workforce creates consistency, confidence, and accountability. And in doing so, it transforms data ethics from a niche specialism into a shared language of trust.

5. Leadership through transparency

Whether explaining an AI model’s outputs, clarifying data-consent practices, or communicating openly about limitations, transparency strengthens brand integrity. Organisations that “show their workings” will lead the market in trust.

Firms including Salesforce, Alphabet, IBM and Microsoft are investing in explainable AI (XAI): models and processes that lay out the reasoning behind an AI system’s decisions and predictions, moving away from opaque “black box” approaches. This is vital for sectors such as law, finance and healthcare, but it also builds confidence and external trust more widely, turning openness into a source of strength.

Leaders across businesses must follow this example, upgrading their own understanding and capabilities, not only to inform their own decisions around AI implementation, but so they can also communicate the use of tools clearly, setting the benchmark for responsible innovation and trusted AI output.

Turning ethics into edge

AI is transforming industries. But so too is the public’s expectation of accountability. The question for executives is no longer whether AI ethics matters — it’s how to make it a source of advantage.

The organisations that will win in this environment are those that:

  • Make trust measurable. Integrate data-ethics KPIs into strategic dashboards alongside financial performance.
  • Build cultures, not just controls. Equip both leaders and employees through data and AI-learning programmes that make ethics habitual.
  • Design for human oversight. Keep people empowered to question, validate, and correct AI outputs.
  • Democratise capability. Extend ethical data literacy beyond data teams to every workforce role.
  • Lead with transparency. Communicate openly about AI use, governance and monitoring in your business.

When done right, ethical AI practice doesn’t just protect reputation; it amplifies potential.

The art of the possible

As AI accelerates, so will scrutiny. But businesses that invest in their people today will find themselves not merely compliant tomorrow, but credible, confident, and competitive.

Because the future belongs to the organisations that understand that ethical AI is about enabling opportunity. And it starts with the people who use it.

About the author

Henry White

Henry, CEO and Co-founder of xUnlocked, leads the EdTech platform Sustainability Unlocked, used by global corporations to upskill their employees in sustainability fundamentals, building a sustainable culture and helping them meet their net zero targets.
