Our Platform

Expert-led content

Hundreds of expert-presented, on-demand video modules

Learning analytics

Track learning progress with our comprehensive analytics

Interactive learning

Engage with our video hotspots and knowledge check-ins

Testing and certifications

Gain CPD / CPE credits and professional certification

Managed learning

Build, scale and manage your organisation’s learning

Integrations

Connect Data Unlocked to your current platform

Featured Content

Implementing AI in your Organisation

In this video, Elizabeth explains how organisations can successfully adopt AI and data science by fostering a data-driven culture and strategically implementing AI projects.

Blockchain and Smart Contracts

In the first video of this series, James explains the concept of blockchain and its benefits.

Ready to get started?

Roles for a HCAI Future

Emily Yang

Human-Centred AI (HCAI) Specialist

In this video, Emily Yang explores how Human-Centred AI is reshaping the workforce. Learn why AI literacy, collaboration, and emotional readiness are key to responsible adoption, and how new roles, training, and culture can help humans and machines thrive together.

Subscribe to watch

Access this and all of the content on our platform by signing up for a 7-day free trial.

Roles for a HCAI Future

12 mins 46 secs

Key learning objectives:

  • Understand AI literacy in the context of Human-Centred AI

  • Understand why cross-functional AI fluency is critical for responsible and inclusive AI adoption

  • Outline key roles and skill sets that support the development and governance of HCAI systems

  • Outline targeted training and hiring practices to reinforce human-first AI use

Overview:

Human-Centred AI depends not just on ethical design, but on the people who shape, apply, and oversee it. As AI reshapes the workforce, literacy must be tailored to the distinct roles of users, operators, and governance leads. Organisations must invest in interdisciplinary teams, cultural readiness, and emotional resilience to navigate the shift. Success requires clear role definitions, inclusive collaboration, and structures that promote responsible innovation. While some roles disappear or evolve, new ones emerge that prioritise trust, judgement, and accountability. Rather than displacing humans, AI redefines what meaningful contribution looks like, and the future belongs to those who adapt with purpose.

Summary
Why do technically sound AI systems still fail in the real world?
Because technical accuracy alone is not enough. When AI systems overlook human context, fairness, and explainability, they risk alienating users, damaging trust, and triggering reputational harm. These failures stem from a lack of transparency, accountability, and human oversight. Even if bias isn’t intentional, the inability to explain or justify outputs erodes public and regulatory confidence. HCAI addresses this by embedding human-centred principles throughout AI design and decision-making, ensuring systems are not just accurate but also appropriate, ethical, and understandable.


How can HCAI help align AI development with organisational strategy?
HCAI encourages cross-functional collaboration that links AI projects to specific business outcomes, risk considerations, and ethical standards. Rather than running isolated AI pilots, organisations use HCAI frameworks to connect data scientists, ethics teams, operations, and product leaders under shared goals. Structures like AI Centres of Excellence ensure strategic alignment by coordinating governance, knowledge sharing, and value creation. This approach improves efficiency, prevents duplication, and ensures AI contributes to long-term organisational goals while minimising unintended consequences.


What does HCAI look like in practice within governance, procurement, and implementation?
In governance, HCAI embeds ethical review into standard workflows via impact assessments, ethics committees, and diverse stakeholder input. In procurement, it informs vendor selection by requiring standards like explainability, bias testing, and compliance with ethical frameworks (e.g. EU AI Act). In implementation, HCAI ensures representative data, stakeholder involvement, and clear human-AI task division. Teams use practical tools such as model cards, bias audits, and transparency features. Human oversight remains essential, especially in high-stakes settings, so AI serves as an advisor, not a decision-maker.


How can organisations embed HCAI principles into daily practice?
Start by aligning every AI initiative with a clear purpose tied to human outcomes. Educate and empower teams through training in ethics, data literacy, and interdisciplinary design. Build governance structures that include ethical review as a non-negotiable step. Design workflows with humans “in the group and in the loop,” ensuring people have meaningful control. Finally, treat deployment as the beginning, not the end, of the lifecycle. Actively monitor systems, solicit user feedback, and iterate based on real-world impact. This culture of continuous improvement is what sustains responsible, effective AI at scale.

Emily Yang

Emily Yang leads Human-Centred AI and Innovation at a global financial institution and serves on the organisation’s AI Safety and Governance committees. Her work focuses on advancing responsible and trustworthy AI systems that balance innovation with accountability. She is among the first practitioners in the industry to apply Human-Centred AI at scale. With over a decade of experience in human-computer interaction and user experience, Emily has held roles across tech startups, corporate venture builders, and major technology companies. Her journey into AI began with studies in biochemistry and neuroscience, followed by a research master’s in HCI and natural language technologies, during which she published work on perceived empathy and emotional intelligence in virtual agents.
