Our Platform

Expert-led content

Hundreds of expert-presented, on-demand video modules

Learning analytics

Keep track of learning progress with our comprehensive data

Interactive learning

Engage with our video hotspots and knowledge check-ins

Testing and certifications

Gain CPD / CPE credits and professional certification

Managed learning

Build, scale and manage your organisation’s learning

Integrations

Connect Data Unlocked to your current platform

Featured Content

Implementing AI in your Organisation

In this video, Elizabeth explains how organisations can successfully adopt AI and data science by fostering a data-driven culture and strategically implementing AI projects.

Blockchain and Smart Contracts

In the first video of this series, James explains the concept of blockchain along with its benefits.

Ready to get started?

HCAI as a Strategic Framework

Emily Yang

Human-Centred AI (HCAI) Specialist

Discover why technically accurate AI can still fail without human context. Join Emily Yang and learn how Human-Centred AI aligns technology with strategy, governance, and ethics by embedding oversight, fairness, and accountability into every stage of AI design.

Subscribe to watch

Access this and all of the content on our platform by signing up for a 7-day free trial.

HCAI as a Strategic Framework

14 mins 41 secs

Key learning objectives:

  • Understand why technically accurate AI can still fail without human-centred design

  • Understand how HCAI creates strategic alignment across departments and leadership

  • Outline how HCAI principles apply to governance, procurement, and implementation

  • Outline practical techniques to embed human oversight and feedback in AI systems

Overview:

Human-Centred AI (HCAI) ensures that AI systems support, not replace, human judgement, trust, and accountability. When AI is deployed without understanding user context or ethical implications, it can damage reputations and outcomes, even if technically accurate. This approach embeds purpose, oversight, and stakeholder input into every stage of AI design and implementation. It aligns AI with business strategy, enhances governance structures, and sets procurement expectations that elevate industry standards. HCAI demands ongoing monitoring, cross-functional collaboration, and clear human oversight to ensure AI serves real needs. Done well, it unlocks innovation while safeguarding fairness, transparency, and long-term value across the enterprise.

Summary
Why do technically sound AI systems still fail in the real world?
Because technical accuracy alone is not enough. When AI systems overlook human context, fairness, and explainability, they risk alienating users, damaging trust, and triggering reputational harm. These failures stem from a lack of transparency, accountability, and human oversight. Even if bias isn’t intentional, the inability to explain or justify outputs erodes public and regulatory confidence. HCAI addresses this by embedding human-centred principles throughout AI design and decision-making, ensuring systems are not just accurate but also appropriate, ethical, and understandable.


How can HCAI help align AI development with organisational strategy?
HCAI encourages cross-functional collaboration that links AI projects to specific business outcomes, risk considerations, and ethical standards. Rather than running isolated AI pilots, organisations use HCAI frameworks to connect data scientists, ethics teams, operations, and product leaders under shared goals. Structures like AI Centres of Excellence ensure strategic alignment by coordinating governance, knowledge sharing, and value creation. This approach improves efficiency, prevents duplication, and ensures AI contributes to long-term organisational goals while minimising unintended consequences.


What does HCAI look like in practice within governance, procurement, and implementation?
In governance, HCAI embeds ethical review into standard workflows via impact assessments, ethics committees, and diverse stakeholder input. In procurement, it informs vendor selection by requiring standards like explainability, bias testing, and compliance with ethical frameworks (e.g. EU AI Act). In implementation, HCAI ensures representative data, stakeholder involvement, and clear human-AI task division. Teams use practical tools such as model cards, bias audits, and transparency features. Human oversight remains essential, especially in high-stakes settings, so AI serves as an advisor, not a decision-maker.
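The idea that AI should serve "as an advisor, not a decision-maker" can be made concrete with a routing rule: high-stakes cases, or cases where the model is unsure, always go to a human reviewer. The sketch below is a minimal illustration of that pattern, not the course's own implementation; the `Recommendation` structure, field names, and confidence threshold are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An AI system's output for a single case (hypothetical structure)."""
    case_id: str
    decision: str
    confidence: float   # model's self-reported confidence, 0.0 to 1.0
    high_stakes: bool   # flagged by policy, e.g. credit or hiring decisions

def route(rec: Recommendation, confidence_floor: float = 0.9) -> str:
    """Decide who acts on a recommendation: the AI advises, but a human
    decides whenever the stakes are high or the model is unsure."""
    if rec.high_stakes or rec.confidence < confidence_floor:
        return "human_review"
    return "auto_approve"

# A high-stakes case is always escalated, regardless of model confidence.
print(route(Recommendation("c-101", "approve", 0.97, high_stakes=True)))   # human_review
print(route(Recommendation("c-102", "approve", 0.97, high_stakes=False)))  # auto_approve
```

The key design choice is that the high-stakes flag overrides confidence entirely, so no statistical score can bypass human oversight where policy requires it.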


How can organisations embed HCAI principles into daily practice?
Start by aligning every AI initiative with a clear purpose tied to human outcomes. Educate and empower teams through training in ethics, data literacy, and interdisciplinary design. Build governance structures that include ethical review as a non-negotiable step. Design workflows with humans “in the group and in the loop,” ensuring people have meaningful control. Finally, treat deployment as the beginning, not the end, of the lifecycle. Actively monitor systems, solicit user feedback, and iterate based on real-world impact. This culture of continuous improvement is what sustains responsible, effective AI at scale.
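Treating deployment as the beginning of the lifecycle implies actively collecting and aggregating user feedback after launch. As a minimal sketch of that monitoring step (the event structure and outcome labels are assumptions, not part of the course material), one could tally how often humans override the AI:

```python
from collections import Counter

def summarise_feedback(events):
    """Aggregate post-deployment feedback events into outcome rates,
    so teams can iterate on real-world impact, not launch metrics alone."""
    tally = Counter(e["outcome"] for e in events)
    total = sum(tally.values())
    return {outcome: count / total for outcome, count in tally.items()}

events = [
    {"case_id": "c-1", "outcome": "accepted"},
    {"case_id": "c-2", "outcome": "overridden"},  # a human corrected the AI
    {"case_id": "c-3", "outcome": "accepted"},
    {"case_id": "c-4", "outcome": "accepted"},
]
print(summarise_feedback(events))  # {'accepted': 0.75, 'overridden': 0.25}
```

A rising override rate is exactly the kind of real-world signal that should trigger review and iteration of the underlying system.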

Emily Yang

Emily Yang leads Human-Centred AI and Innovation at a global financial institution and serves on the organisation’s AI Safety and Governance committees. Her work focuses on advancing responsible and trustworthy AI systems that balance innovation with accountability. She is among the first practitioners in the industry to apply Human-Centred AI at scale. With over a decade of experience in human-computer interaction and user experience, Emily has held roles across tech startups, corporate venture builders, and major technology companies. Her journey into AI began with studies in biochemistry and neuroscience, followed by a research master’s in HCI and natural language technologies, during which she published work on perceived empathy and emotional intelligence in virtual agents.
