What are the people and business implications of the EU Artificial Intelligence (AI) Act 2024? There is real added value in regulation that aims to “promote the uptake of human-centric and trustworthy artificial intelligence while ensuring a high level of protection of health, safety and fundamental rights as enshrined in the EU Charter of Fundamental Rights”. It is therefore important that national legislation delivers on the commitment to be human-centred, trustworthy and protective of human rights, while keeping human decision-making at its heart.

It’s increasingly clear that AI can provide great opportunities to drive outcomes such as improved productivity and innovation, which are important to every organisation and to the Irish economy as a whole. Used responsibly, it can also help to create better jobs that utilise people’s skills more effectively, support their wellbeing and engagement, give people opportunities for meaningful work and support greater inclusion, all of which in turn can enable positive outcomes around productivity, innovation and social inclusion.

But while AI development is moving fast, uptake, rationale and results from deployment across sectors and organisations are patchy. Many businesses are unsure how to proceed, or when to jump onto a fast-moving train. The market is only expected to grow, with some estimates predicting that the generative AI market could reach $1.3 trillion by 2032, based on compound annual growth rate (CAGR) expectations of 40% or higher.
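The compound-growth arithmetic behind such projections is easy to check. The sketch below assumes a baseline market of roughly $40 billion in 2022 and a 42% CAGR; both figures are assumptions for illustration, not taken from this article:

```python
# Illustrative CAGR projection for the generative AI market.
# Assumptions (for illustration only): a ~$40bn market in 2022
# compounding at 42% a year through to 2032.
base_usd_bn = 40   # assumed 2022 market size, in $bn
cagr = 0.42        # assumed compound annual growth rate
years = 10         # 2022 to 2032

projected = base_usd_bn * (1 + cagr) ** years
print(f"Projected 2032 market: ${projected:,.0f}bn")  # ~ $1,333bn, i.e. roughly $1.3tn
```

At a 40% CAGR the same baseline compounds to roughly $1.16 trillion, which is why growth-rate expectations of “40% or higher” sit behind the $1.3 trillion figure.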

There is widespread optimism about AI’s potential to support growth and other beneficial outcomes at work but, as yet, this is not matched with knowledge or confidence about best practice. Understanding the risks, opportunities and choices through the process of procurement, development, adoption, adaptation and ongoing monitoring is what will drive the best outcomes for firms and people alike. The CIPD’s exploratory roundtables with leaders in AI, HR and business have confirmed this assessment and highlighted the need for practical tools and guidance.

The EU Artificial Intelligence (AI) Act 2024 is the world’s first comprehensive legal framework on AI. The Act aims to foster trustworthy AI adoption in Europe and beyond by ensuring that AI systems don’t violate fundamental rights, safety or ethical principles.

The Act aims to ensure that individuals and organisations can trust what AI has to offer in Europe. While many AI systems pose no or limited risk and can contribute to solving many societal challenges, some create risks that must be addressed to avoid undesirable outcomes.

Assessing different levels of risk

The rules and risk-based approach in the Act will impact the people profession, as much of the data the profession manages includes personal details and may be classed as high-risk. There are four levels of risk, namely minimal, limited, high and unacceptable. The rules aim to:

  • address risks specifically created by AI applications 
  • prohibit AI practices that pose unacceptable risks 
  • determine a list of high-risk applications 
  • set clear requirements for AI systems for high-risk applications 
  • define specific obligations for deployers and providers of high-risk AI applications 
  • require a conformity assessment before a given AI system is put into service or placed on the market 
  • put enforcement in place after a given AI system is placed on the market 
  • establish a governance structure at a national level.

Understanding what these rules mean and their implications in practice will require significant investment in education and informational campaigns.

Unacceptable risk

There are eight harmful uses of AI that contravene EU values because they violate fundamental rights. These AI uses are already prohibited and include:

  • subliminal techniques likely to cause a person, or another, significant harm 
  • exploiting vulnerabilities due to age, disability, social or economic situation 
  • social scoring leading to disproportionate detrimental or unfavourable treatment 
  • profiling individuals for prediction of criminal activity 
  • untargeted scraping of facial images 
  • inferring emotions in work or education 
  • biometric categorisation of race, religion, sexual orientation 
  • real-time remote biometric identification for law enforcement purposes. 

These prohibitions apply from February 2025. The European Commission has published guidelines on prohibited AI practices as defined by the Act. 

High-risk

AI systems identified as high-risk include AI technology used in:

  • educational or vocational training that may determine access to education and the professional course of someone’s life (eg scoring of exams) 
  • safety components of products (eg AI application in robot-assisted surgery) 
  • employment, management of workers and access to self-employment (eg CV-sorting software for recruitment procedures) 
  • migration and asylum (eg automated examination of visa applications) 
  • administration of justice and democratic processes (eg the implications for disciplinary procedures that could result in a person losing their job or having their pay reduced).

The rules around biometric identification systems will need considerable clarity as many organisations and individuals use such tools on a daily basis.

Limited and minimal risk 

The Act introduces specific disclosure obligations in limited-risk cases where transparency on AI use is necessary to preserve trust. For instance, when using AI systems such as chatbots, people should be made aware that they are interacting with a machine so they can make an informed decision. Further, certain AI-generated content should be clearly and visibly labelled.

The Act does not introduce rules for AI that is deemed minimal or no risk. The vast majority of AI systems currently used in the EU fall into this category, which includes applications such as AI-enabled video games and spam filters.

Managing and enforcing the Act 

The Act entered into force on 1 August 2024 and will be fully applicable two years later, on 2 August 2026, with the following exceptions: 

  • the governance rules and the obligations for general-purpose AI models become applicable on 2 August 2025 
  • the rules for high-risk AI systems embedded in regulated products have an extended transition period, until 2 August 2027.

Meanwhile, prohibitions and AI literacy obligations apply from 2 February 2025, meaning that:

  • HR professionals will be required under Article 4 to ensure all staff involved in the use of AI systems have a sufficient level of AI literacy 
  • HR professionals will need to equip staff with the right skills, knowledge and understanding of the system(s) provided 
  • HR/L&D professionals will be required to implement accessible training programmes, workshops and communication strategies 
  • non-compliance can result in fines, litigation, reputational damage and loss of employee and/or customer trust. 

The Irish Government announced an initial list of eight public bodies as competent authorities responsible for implementing and enforcing the Act within their respective sectors. Each EU Member State must also identify national public authorities that supervise or enforce obligations under EU law protecting fundamental rights, including the right to non-discrimination, in relation to certain high-risk uses of AI. In Ireland, a number of bodies have been assigned both of these responsibilities.

Penalties

The maximum penalty for the infringement of the rules for AI systems set out in the Act is €35 million or 7% of the total worldwide annual turnover of the preceding financial year (whichever is higher). Ireland must legislate for penalties for infringements of the Act by 2 August 2025. 
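The “whichever is higher” rule can be made concrete with a short sketch; the turnover figures below are hypothetical examples, not taken from this article:

```python
# Maximum fine for infringing the Act's prohibited-practice rules:
# the higher of a fixed cap (EUR 35m) and 7% of total worldwide
# annual turnover for the preceding financial year.
def max_fine(turnover_eur: float, cap_eur: float = 35e6, pct: float = 0.07) -> float:
    """Return the higher of the fixed cap and pct of turnover."""
    return max(cap_eur, pct * turnover_eur)

print(max_fine(100e6))  # turnover EUR 100m: 7% is EUR 7m, so the EUR 35m cap applies
print(max_fine(2e9))    # turnover EUR 2bn: 7% is EUR 140m, which exceeds the cap
```

For smaller organisations the fixed cap is the binding figure; only once worldwide turnover passes €500 million does the 7% calculation take over.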

AI literacy 

To help build AI literacy in Ireland, the Department of Enterprise and Employment has made an introduction to AI course freely available to increase awareness of AI, help people learn about the EU AI Act and support adoption of AI in organisations.  

How should new rules be implemented? 

How AI systems are monitored and defined will be key to the success of any implementation. In our discussions with CIPD members, concerns were raised about the effort needed to manage high-risk systems in the workplace, conduct testing and model evaluations, and meet cybersecurity requirements, and about whether the required mitigation measures could prove too overwhelming and expensive to use.

Our consultations highlighted how the framing of legislation and guidance on the new rules would be critical, recommending that implementation of the Act should:

  • ensure legislation is accessible by keeping it as simple and straightforward as possible 
  • build human oversight into critical decisions 
  • ensure the elimination of bias is central to all operating systems, especially within decisions that affect people 
  • build in the approach that defines 'human-centric' and includes a focus on good work, employee protections and employee sustainability, not just societal benefits 
  • recognise the dynamic nature of AI, its early stage of maturity, and account for this in legislation 
  • be aware that the mitigations needed to bring high-risk systems down to low risk could become arduous, which may be a blocker to AI use
  • understand the obligations around AI literacy and training. 

Promoting human-led AI 

In 2024 the Irish Government refreshed its National AI Strategy: AI - Here for Good programme, which recognises the need to adopt a human-centric approach to the application of AI.

Generally, references to human-centred AI relate to how AI can benefit society (eg by improving healthcare outcomes), but they often avoid examining what is happening in workplaces where AI is being implemented, how it links to good work and employee sustainability, and what good practice looks like. This needs further attention and research at both national and international level.

The CIPD strongly believes that this is a critical time and the EU AI Act must be used to lay down ethical and trustworthy approaches to AI use. The people profession should be closely involved in working within and across organisations in addressing these issues. It should work directly on job design, operating models and organisation development strategies, considering the skills implications and how to address them.

We have engaged with several organisations to understand the different capabilities needed and to develop initial content. We have also engaged with communities that can influence and reach a wide range of stakeholders.

We have identified a pressing need for practical guidance and tools, specific to the workplace, to support ethical and responsible use and implementation of AI from a business perspective.  

There is a need for responsible pilots to deepen and share learning, and provide bridges from the best multi-disciplinary research into an accessible form, empowering HR professionals and business leaders to apply AI principles in human-centred, context-sensitive ways.  

The gaps in relation to adopting a people-centric approach to AI are significant and the protections in the workplace need full consideration in Ireland’s legislative approach. Further research will add value in harnessing the opportunity of technology to benefit job design, employee welfare, inclusion, as well as performance. So too will investment in education and skills be paramount for successful AI adoption.

About the author

Mary Connaughton, Strategic Engagement Director

Mary leads the growth, development and contribution of the people profession in Ireland. She pushes forward our agenda of people-centric decisions, wellbeing, inclusion and flexible working through research, policy and member engagement. 

Mary has a wealth of HR experience, supporting individuals and companies on the strategic people agenda, HR practice and organisation development. Previously she headed up HR Development at employers’ group Ibec, consulted widely across the public and private sector and held organisation development roles in the financial and consulting sectors.

Mary is on the Boards of the Public Appointments Service and the Retirement Planning Council and represents the people profession in Ireland at the European Association of People Management.
