How HR teams can play a key part in the roll out of AI

Key areas where HR teams can add value and contribute to a successful roll out of new AI technology within an organization.

An increasing number of employers are adopting AI technology in the workplace. While the benefits of AI are well-publicised, organizations must also understand and mitigate associated legal and commercial risks across various business areas. This requires a team of experts from different fields. Within this multi-disciplinary AI taskforce, HR teams play a crucial role. But what exactly is the role of HR in the roll out of AI and why is it so important?

HR teams are uniquely positioned to manage and mitigate employment law and HR risks related to AI use. Where AI tools are involved in making employment decisions (for example, recruitment, pay, redundancy selection), or being used by employees to carry out key tasks, those potential risks can be high. These include: discrimination and bias; privacy and data protection issues; information and consultation obligations; the risk of industrial action; and more.

The legal and HR landscape for AI use is also evolving, and HR teams must support their organizations to achieve continuing compliance. The EU’s AI Act is being phased in over 2025/26 and, within the UK, regulators are expanding their compliance activities, as illustrated by the ICO’s recent AI tools in recruitment audit outcomes report.

The UK government has also pledged to consult on how surveillance technologies are implemented in the workplace, including the role of trade unions and worker representatives.

HR teams will also play a central role in fostering trust, confidence, and understanding among the workforce regarding new AI technology. Although AI is an exciting prospect for many workers, it is a growing area of sensitivity for others, including some trade unions, who are concerned about its impact on jobs and the future of work. Transparency and the right workforce messaging will be key to any organization moving successfully into the next phase of its AI journey.

Below, we explore key areas where HR teams can add value and contribute to a successful roll out of new AI technology within an organization.

Where can HR teams play a key part in the roll out of AI?

Policies: GenAI policy; update other HR policies re AI usage, for example the disciplinary procedure policy; update data privacy policy

Assessing and mitigating risks: equality/data protection impact assessments on new tools; spotting HR risks and liaising with employment law specialists; ensuring HR risks are flagged to the business

Employee relations: consider workforce messaging and communications strategy for new tools; information and consultation obligations; stakeholder engagement

AI strategy: AI use in HR processes, for example recruitment, performance management, restructuring (and more); redesigning roles and organizational structure – job replacement v job creation due to AI; alignment with values, for example wellbeing, culture, equal opportunities

Ongoing governance: implementing a governance framework (what are the HR aspects/HR voice in this?); how employee complaints are raised and responded to; continuing review of risks from an HR perspective

Learning and development: AI literacy – upskilling and reskilling (of the wider business and the HR team); risk training – consider different levels of risk training depending on role; is learning and development aligned to your organization’s AI strategy?
  • Policies – if your organization is permitting the use of GenAI tools, does it have a GenAI policy outlining guidelines for employee use? Having clear guidelines can help to reduce associated risks and confirm any consequences of employee misuse, including the potential for disciplinary action. Are other HR policies up to date in reflecting AI usage, for example the disciplinary, information technology and data privacy policies?
  • Assessing and mitigating risks – assessing and testing new AI tools, for example through equality impact assessments and/or data protection impact assessments, is key to identifying risk both at the procurement stage and on an ongoing basis. HR teams are uniquely placed to identify, understand and manage HR and employment law risks relating to AI use.
  • Employee relations – as new AI tools are launched, workforce messaging and communications strategies will be key to transparency and building workforce trust and confidence in AI; ensuring workers understand how AI is being used and why. In some circumstances, there may also be information and consultation obligations to comply with at law prior to launch.
  • AI strategy – the World Economic Forum Future of Jobs Report 2025 predicts that, by 2030, AI and other technologies will create 170 million new jobs globally, while displacing 92 million. HR will have a key role to play in any potential restructuring of the workforce as AI use transforms job roles over time, as well as in the development of HR use cases. Further, how do new AI tools align with other people strategies within your organization, including culture, wellbeing, and diversity and inclusion?
  • Governance – the roll out of AI requires a multi-disciplinary taskforce across departments including IT, data protection and more. Does your HR team have a voice in your organization’s multi-disciplinary AI taskforce? Separately, how do employees raise complaints about AI tools and how are those complaints managed?
  • Training and development – as AI use increases, job roles will evolve and demand new skillsets. Upskilling and reskilling the workforce will be critical to future-proofing your organization. How is learning and development aligned to AI strategy, including risk training for both the HR team and the wider workforce? For employers within the scope of the EU AI Act (ie those with an establishment or location in the EU, or outside the EU where AI system output is used in the EU), the Act requires, from February 2025, that staff and others operating AI systems on their behalf have a sufficient level of AI literacy. This includes an understanding of the technical aspects, ethical considerations and practical applications of AI systems. Find out more in our AI Literacy e-module.

What should HR teams do next?

Although each organization will be at a different place in its own AI journey, with varying levels of HR involvement to date, below are three steps that all HR teams can consider taking now:

  1. Review and update policies – including a GenAI policy that protects your organization against legal and commercial risks, particularly HR and employment law risks.
  2. Learning and development – if you haven’t done so already, start by upskilling the HR team – do they understand the key HR and employment law risks and how these interact with the wider legal and commercial risks when using AI?
  3. Explain HR’s key role in a multi-disciplinary AI taskforce – if responsibility for the roll out of AI currently sits with IT or another area of the business, or HR does not have a place on any existing multi-disciplinary AI taskforce, seek out ways to be involved and to understand the next steps in the development of AI within your organization. Explain the value HR can add to the roll out of AI, sharing some of the points above.

Hannah Mahon is a partner in the Employment, Labor and Pensions group. Hannah C. Wilkins is a partner and head of the International Technology sub-sector.