Our Platform

Expert-led content

Hundreds of expert-presented, on-demand video modules

Learning analytics

Track learning progress with our comprehensive analytics

Interactive learning

Engage with our video hotspots and knowledge check-ins

Testing and certifications

Gain CPD / CPE credits and professional certification

Managed learning

Build, scale and manage your organisation’s learning

Integrations

Connect Data Unlocked to your current platform

Featured Content

Implementing AI in your Organisation

In this video, Elizabeth explains how organisations can successfully adopt AI and data science by fostering a data-driven culture and strategically implementing AI projects.

Blockchain and Smart Contracts

In the first video of this series, James explains the concept of blockchain and its benefits.

Ready to get started?

Embedding Ethics, Responsibility and Safety in AI

Emily Yang

Human-Centred AI (HCAI) Specialist

Join Emily Yang as she explores how we can turn AI ethics into action. Learn the differences between Responsible, Safe, and Trustworthy AI, see why human oversight matters, and discover how ethical design and AI stewardship build systems that truly serve humanity.

Subscribe to watch

Access this and all of the content on our platform by signing up for a 7-day free trial.

Embedding Ethics, Responsibility and Safety in AI

17 mins 53 secs

Key learning objectives:

  • Understand the distinctions and interconnections between AI ethics, Responsible AI, AI governance, AI safety, and Trustworthy AI

  • Understand the risks of poorly designed AI systems and the need for a human-centred approach in development and deployment

  • Outline practical tools, frameworks, and organisational structures that embed ethics, safety, and responsibility into AI workflows

  • Understand the role of AI stewards in bridging principles with practice and ensuring long-term trust, accountability, and societal alignment

Overview:

Human-centred AI demands more than good intentions; it requires systems designed with moral clarity, structural rigour, and human insight at every stage. Ethical frameworks must translate into practical safeguards, not just aspirations. When people are misled, excluded, or harmed by AI systems, it reflects failures of responsibility, not just technology. True progress lies in building AI that amplifies human value, respects individual dignity, and earns trust over time. This means involving diverse voices, setting ethical boundaries, measuring social impact, and empowering stewards to intervene. AI’s future isn’t inevitable; it’s designed. The question is whether we design it to serve humanity or to erode it.

Summary

What’s the difference between AI ethics, Responsible AI, AI governance, AI safety, and Trustworthy AI?
These terms describe different layers of a comprehensive approach to AI. Ethics sets the moral foundation. Responsible AI translates those values into context-specific principles such as fairness and transparency. Governance is the operational layer: how organisations apply checks, reviews, and controls. Safety ensures systems behave reliably, even under stress. And Trustworthy AI focuses on human interaction: do people understand, trust, and feel comfortable with the system? While distinct, these layers must work together to ensure AI serves people, not just systems.


Why do real-world failures, like biased virtual therapists, highlight the need for a human-centred AI approach?
When AI systems act in emotionally sensitive or socially impactful ways without clear safeguards, the consequences go beyond technical errors; they affect trust, well-being, and even legal liability. Failures like manipulative chatbots or facial recognition bias show that if human needs and risks aren’t considered upfront, systems can cause real harm. A human-centred approach builds in foresight and safeguards from the start, helping prevent these harms and ensure AI aligns with public expectations and social norms.


How can organisations practically embed ethics and human values into their AI systems?
Embedding ethics requires more than high-level principles; it needs practical tools and structures. That includes using bias detection libraries, implementing explainability features, gathering diverse stakeholder input, defining ethical KPIs, and forming cross-functional teams with legal, design, and behavioural experts. It also means testing AI for edge cases and emotional risks, especially in sensitive domains like healthcare or finance. Making these practices routine across development, governance, and deployment phases ensures AI supports rather than undermines human needs.
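
The course does not prescribe specific tooling, but as a rough illustration of what "defining ethical KPIs" can look like in practice, here is a minimal, hypothetical Python sketch. It measures a demographic parity gap (the difference in favourable-outcome rates between demographic groups) and flags a model for review when the gap exceeds an agreed threshold. All names, data, and the threshold value are illustrative assumptions, not part of the course material.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Favourable-decision rate per demographic group.

    decisions: 0/1 model outcomes (1 = favourable decision)
    groups: group labels aligned with decisions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for outcome, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions, groups):
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical example: model outcomes for two groups, A and B
decisions = [1, 1, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

GAP_THRESHOLD = 0.2  # an ethical KPI agreed by the cross-functional review team

gap = demographic_parity_gap(decisions, groups)
if gap > GAP_THRESHOLD:
    print(f"Fairness check failed: parity gap {gap:.2f} exceeds {GAP_THRESHOLD}")
```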


What role do AI stewards play in organisations?
AI stewards help bridge the gap between aspiration and implementation. Whether formally appointed or informally recognised, they’re the people who champion ethics, ask uncomfortable questions, and guide teams toward inclusive, transparent, and safe AI. They interpret evolving regulations, challenge assumptions, and foster collaboration across departments, often bringing legal, tech, design, and user insights together. In high-impact areas, their leadership ensures AI doesn’t just function well but earns lasting trust by reflecting the values of the communities it serves.

Emily Yang

Emily Yang leads Human-Centred AI and Innovation at a global financial institution and serves on the organisation’s AI Safety and Governance committees. Her work focuses on advancing responsible and trustworthy AI systems that balance innovation with accountability. She is among the first practitioners in the industry to apply Human-Centred AI at scale. With over a decade of experience in human-computer interaction and user experience, Emily has held roles across tech startups, corporate venture builders, and major technology companies. Her journey into AI began with studies in biochemistry and neuroscience, followed by a research master’s in HCI and natural language technologies, during which she published work on perceived empathy and emotional intelligence in virtual agents.
