Our Platform

Expert-led content

Hundreds of expert-presented, on-demand video modules

Learning analytics

Keep track of learning progress with our comprehensive data

Interactive learning

Engage with our video hotspots and knowledge check-ins

Testing and certifications

Gain CPD / CPE credits and professional certification

Managed learning

Build, scale and manage your organisation’s learning

Integrations

Connect Data Unlocked to your current platform

Featured Content

Implementing AI in your Organisation

In this video, Elizabeth explains how organisations can successfully adopt AI and data science by fostering a data-driven culture and strategically implementing AI projects.

Blockchain and Smart Contracts

In the first video of this series, James explains the concept of blockchain along with its benefits.

Ready to get started?

Our Human Value in the Age of AI

Emily Yang

Human-Centred AI (HCAI) Specialist

Discover how human-AI collaboration creates better outcomes than either alone. Join Emily Yang and learn how inclusive design, social impact metrics, and human oversight ensure AI strengthens human judgment, empathy, and trust across every stage of development.

Subscribe to watch

Access this and all of the content on our platform by signing up for a 7-day free trial.

Our Human Value in the Age of AI

16 mins 59 secs

Key learning objectives:

  • Understand how human-AI collaboration outperforms either humans or machines alone

  • Understand the risks of excluding diverse stakeholders in AI development

  • Outline methods for measuring the social impact of AI systems

  • Outline practical principles to design AI that enhances human agency

Overview:

Human value in the age of AI is defined not by what machines can replace, but by how collaboration elevates performance. When people guide and contextualise AI, outcomes surpass what either can achieve alone. Success depends on embedding human oversight, designing systems that support judgment and empathy, and ensuring diverse voices shape development. Measuring social impact alongside technical performance prevents harm and builds trust. Practical approaches, like human-in-the-loop design, inclusive co-creation, and transparency mechanisms, help ensure AI enhances rather than undermines human roles. The future of AI hinges on processes that prioritise meaningful human contribution at every stage.


Summary
How does human-AI collaboration outperform either alone?
When humans bring strategic oversight and contextual judgment, and AI handles complex data analysis or repetitive tasks, the result is often superior to what either could achieve independently. This was shown in Kasparov’s “centaur chess” and echoed in real-world settings like medical diagnostics and hiring. The most effective outcomes arise when humans guide AI, interpret its results, and intervene when nuance, ethics, or empathy are required, turning AI into a multiplier of human capability rather than a substitute.

Why is stakeholder inclusion essential in AI design?
AI systems designed without input from diverse users and communities risk embedding structural bias and causing real-world harm. Involving a broad range of stakeholders, especially those most affected by deployment, uncovers blind spots and helps build fairer, more accountable systems. Historical failures, such as facial recognition software misidentifying people with darker skin tones, highlight the dangers of excluding marginalised groups. Inclusive design practices improve AI’s performance, legitimacy, and public trust by making its benefits accessible and meaningful to all.

What should organisations measure beyond AI accuracy?
Accuracy alone fails to capture whether an AI system is fair, transparent, or socially responsible. Organisations should evaluate metrics like error rates across user groups, the frequency of human intervention, and whether users can understand and challenge AI decisions. Tools such as bias dashboards and algorithmic impact assessments help identify disparities and guide responsible iteration. By tracking these indicators, organisations can reduce harm, ensure legal compliance, and demonstrate a commitment to ethical AI use.

How can teams design AI to amplify human value?
Designing for human-AI collaboration means embedding human-in-the-loop decision points, giving users control, and ensuring transparency in how outputs are generated. Teams should map where human judgment is most valuable and ensure AI augments rather than overrides it. Practical strategies include co-design with end-users, piloting tools in controlled environments, and maintaining continuous feedback loops. A culture of psychological safety and cross-disciplinary collaboration ensures AI supports human goals while remaining accountable and inclusive in real-world use.


Emily Yang

Emily Yang leads Human-Centred AI and Innovation at a global financial institution and serves on the organisation’s AI Safety and Governance committees. Her work focuses on advancing responsible and trustworthy AI systems that balance innovation with accountability. She is among the first practitioners in the industry to apply Human-Centred AI at scale. With over a decade of experience in human-computer interaction and user experience, Emily has held roles across tech startups, corporate venture builders, and major technology companies. Her journey into AI began with studies in biochemistry and neuroscience, followed by a research master’s in HCI and natural language technologies, during which she published work on perceived empathy and emotional intelligence in virtual agents.
