Roles for an HCAI Future
Emily Yang
Human-Centred AI (HCAI) Specialist
In this video, Emily Yang explores how Human-Centred AI is reshaping the workforce. Learn why AI literacy, collaboration, and emotional readiness are key to responsible adoption, and how new roles, training, and culture can help humans and machines thrive together.
Roles for an HCAI Future
12 mins 46 secs
Key learning objectives:
Understand AI literacy in the context of Human-Centred AI
Understand why cross-functional AI fluency is critical for responsible and inclusive AI adoption
Outline key roles and skill sets that support the development and governance of HCAI systems
Outline targeted training and hiring practices to reinforce human-first AI use
Overview:
Why does HCAI matter?
Because technical accuracy alone is not enough. When AI systems overlook human context, fairness, and explainability, they risk alienating users, damaging trust, and triggering reputational harm. These failures stem from a lack of transparency, accountability, and human oversight. Even if bias isn’t intentional, the inability to explain or justify outputs erodes public and regulatory confidence. HCAI addresses this by embedding human-centred principles throughout AI design and decision-making, ensuring systems are not just accurate but also appropriate, ethical, and understandable.
How can HCAI help align AI development with organisational strategy?
HCAI encourages cross-functional collaboration that links AI projects to specific business outcomes, risk considerations, and ethical standards. Rather than running isolated AI pilots, organisations use HCAI frameworks to connect data scientists, ethics teams, operations, and product leaders under shared goals. Structures like AI Centres of Excellence ensure strategic alignment by coordinating governance, knowledge sharing, and value creation. This approach improves efficiency, prevents duplication, and ensures AI contributes to long-term organisational goals while minimising unintended consequences.
What does HCAI look like in practice within governance, procurement, and implementation?
In governance, HCAI embeds ethical review into standard workflows via impact assessments, ethics committees, and diverse stakeholder input. In procurement, it informs vendor selection by requiring standards like explainability, bias testing, and compliance with ethical frameworks (e.g. EU AI Act). In implementation, HCAI ensures representative data, stakeholder involvement, and clear human-AI task division. Teams use practical tools such as model cards, bias audits, and transparency features. Human oversight remains essential, especially in high-stakes settings, so AI serves as an advisor, not a decision-maker.
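One of the practical tools mentioned above, a bias audit, can be made concrete with a small sketch. This is purely illustrative and not from the video: the function names, the toy data, and the use of demographic parity as the fairness measure are all assumptions; real audits use dedicated tooling and multiple metrics.

```python
# Illustrative bias-audit sketch: compare approval rates across groups
# and report the largest gap (demographic parity difference).
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, approved) pairs.
    Returns the approval rate per group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def demographic_parity_gap(outcomes):
    """Largest difference in approval rates between any two groups;
    a large gap is a signal for human review, not an automatic verdict."""
    rates = selection_rates(outcomes)
    return max(rates.values()) - min(rates.values())

# Toy data: group A approved 2 of 3 times, group B 1 of 3 times.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
gap = demographic_parity_gap(decisions)  # 2/3 - 1/3 = 1/3
```

In practice such a check would run as part of a governance workflow, with thresholds and follow-up actions set by the ethics review process rather than hard-coded.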
How can organisations embed HCAI principles into daily practice?
Start by aligning every AI initiative with a clear purpose tied to human outcomes. Educate and empower teams through training in ethics, data literacy, and interdisciplinary design. Build governance structures that include ethical review as a non-negotiable step. Design workflows with humans “in the group and in the loop,” ensuring people have meaningful control. Finally, treat deployment as the beginning, not the end, of the lifecycle. Actively monitor systems, solicit user feedback, and iterate based on real-world impact. This culture of continuous improvement is what sustains responsible, effective AI at scale.
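The "humans in the loop" idea above can be sketched as a simple routing rule. This is a minimal illustration under stated assumptions: the function name, the confidence threshold, and the record shape are invented for this example, and any real deployment would log, monitor, and revisit such thresholds as part of the continuous-improvement loop described above.

```python
# Illustrative human-in-the-loop routing: the AI acts as an advisor,
# and low-confidence (or high-stakes) cases escalate to a person.
def route_decision(prediction, confidence, threshold=0.9):
    """Return an advisory record rather than a final decision.
    Above the confidence threshold the AI suggestion is applied;
    below it, the case is flagged for meaningful human review."""
    if confidence >= threshold:
        return {"action": prediction, "decided_by": "ai",
                "needs_review": False}
    return {"action": None, "decided_by": "human",
            "needs_review": True, "ai_suggestion": prediction}

auto = route_decision("approve", 0.95)    # applied automatically
manual = route_decision("approve", 0.60)  # escalated to a reviewer
```

The design choice worth noting is that the low-confidence branch still carries the AI's suggestion forward, so the human reviewer gets the model's input without being bound by it.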