Discover how the CIPD and the Institute for the Future of Work (IFOW) drew insights from eight diverse case studies on the friction between AI and workforce stability. Learn why strategic pauses are necessary and why safeguarding your organisation’s future expertise is essential.
Drawing on eight diverse case studies, the CIPD and IFOW explore the friction between rapid automation and workforce stability. The case studies were conducted as part of an action research project led by the CIPD’s research partner, the Institute for the Future of Work, and supported by the Innovate UK BridgeAI programme. From FinanceCo to HealthTrust (pseudonyms are used throughout to protect confidentiality), the evidence suggests that we must look beyond standard governance that focuses only on data security and human accountability. Learn why strategic pauses are necessary and why protecting the early to mid-career talent pipeline is essential for delivering a sustained return on investment.
The conversation around artificial intelligence (AI) has shifted from futuristic speculation to a high-stakes boardroom reality. However, as organisations race to deploy new tools, a critical truth is emerging: the success of an AI rollout is rarely determined by the software itself, but by the talent ecosystem it sits within. Wharton’s Accountable Acceleration research report confirms this, noting that while usage is now mainstream, organisational readiness is what truly sets the pace. Crucially, the report highlights that human capital factors – talent, training and trusted guardrails – directly affect the speed and efficacy of a rollout and its ultimate return on investment (ROI).
To capture true ROI, you must rebalance the relationship between people and technology so that AI productivity gains benefit both society and the economy. This requires the courage to take periodic strategic pauses, because AI, like other disruptive technologies, deconstructs roles and shifts power dynamics. We believe this creates a moral and economic obligation for senior people professionals to protect early and mid-career talent pipelines.
The journey towards adapting to AI is rarely a straight line. Across eight organisations, we found that the rollout of AI tools has moved from being a technical experiment to driving a fundamental shift in how we work. While the industries ranged from construction to professional services, the patterns of success and failure remained remarkably consistent.
A common thread is that speed does not equal maturity. FinanceCo and LargeCo both pursued rapid deployments to drive efficiency and protect profit margins. LargeCo discovered that moving too fast without employee buy-in creates a legacy of mistrust – especially where past rollouts have led to redundancy consultations – which can undermine even the most advanced tools. In contrast, LegalCo took a ‘verify, verify, verify’ approach, making human oversight a non-negotiable rule. This didn’t slow them down. Instead, it gave employees the confidence to experiment safely, preserving the high standards expected in the legal profession.
The most profound turnarounds were seen when organisations shifted from top-down mandates to co-creative processes. MemberOrg initially relied on AI use guidelines that placed the full burden of the risk on employees, leading to compliance anxiety and uneven uptake. By pivoting to a co-design model involving half their workforce, they replaced a ‘panic response’ with collective ownership. Similarly, InnovationCo risked strategy drift as the technology provider steered its AI project. Through a strategic pause, they laid the groundwork to align their AI strategy with their people strategy.
The case study findings suggest that your role has shifted from being a supporter of technology change to an architect of it. To lead effectively, senior people professionals should focus on five strategic pillars:
Our research confirms that success doesn’t always require a brand-new approach. Instead, it requires applying established organisational capabilities with fresh intent. Whether as a targeted change initiative or an ongoing evolution in agility, these were commonly suggested initiatives to support employees:
By using these established methods, you provide a sense of stability in an otherwise disruptive period. This proves that a strategic pause isn’t about stopping – it’s about ensuring the right support is in place to move forward safely.
OECD and UNESCO AI principles remind us that inclusive growth is a fundamental pillar of responsible AI. In our research, we found that employees are increasingly focused on how AI will affect their legal and professional rights. At HolidayCo, employees explicitly stated that AI use must uphold their employment rights and protect their terms and conditions. Similarly, at FinanceCo, software developers pushed to uphold their right to 10% L&D time (a clause already in their employment contracts) to ensure they have the space to adapt to new tools.
The people team must ensure that AI benefits are shared rather than used to erode these protections. By safeguarding the economic security and professional development time of employees, you build the trust necessary for a successful transition. Our action research with InnovationCo highlighted the value of proactive union engagement. The working group realised the importance of bringing trade union representatives into the conversation well before formal consultation begins. By establishing joint working groups in neglected areas like job design, human control over decisions, and procurement, you can move towards a more transparent, less conflict-prone relationship. This allows you to identify new challenges where there is a shared interest between the organisation and its workforce.
Research from Cornell University warns of ‘asymptomatic AI harms’, where long-term losses in employee autonomy and judgement are masked by short-term productivity gains. MemberOrg identified a specific human tax – vigilance fatigue – where the constant burden of checking AI accuracy increases monotony and reduces productivity. Furthermore, as seen at FinanceCo and LegalCo, a mentorship gap can emerge if AI disrupts the ability of early-career employees to learn foundational skills. You must ensure employees retain enough learning-by-doing time to protect their professional expertise and prevent their intuition from rusting.
The group at highest risk is the mid-career professional. Like the radiologists cited in the Cornell research, many employees are experiencing a crisis of identity as their expertise feels increasingly sidelined. Supporting mid-career employees to adapt helps prevent a systemic economic shock. This demographic typically carries the highest level of financial commitment, from mortgages to the ‘sandwich’ responsibilities of childcare and eldercare. A sudden loss of mid-career stability wouldn’t just impact individuals – it would hollow out the tax base and diminish the consumer spending that other industries rely on to survive.
AI doesn’t just replace jobs; it deconstructs them. MemberOrg pivoted to a task-by-task assessment to identify where AI genuinely adds value. Without proper organisation and job design, efficiency gains may be lost to unsustainable productivity expectations. As one MemberOrg employee warned, the danger is ‘compressing time frames’ to the point of burnout, rather than using saved time for meaningful engagement with colleagues and clients.
To turn AI from a technical experiment into sustained value, senior people professionals must manage the talent ecosystem rather than just the tools. The following action plan provides a framework for you to become a key architect of the transition, balancing operational pace with workforce stability. These timelines are a guide: the actual pace and actions will depend on your organisation’s size, the range and complexity of your AI tools, and your existing digital maturity.
The first step is to move beyond reactive guidelines. Use this period to align your people and technology strategies with organisational goals.
Shift the focus to mid-career professionals, starting with roles most impacted by AI. This should happen during the early scoping phase to address fears about job security long before any formal consultation is required.
As a senior people professional, you are the guardian of your organisation’s future expertise.