This case study looks at the rollout of an AI writing assistant at a large organisation. It highlights how failing to involve employees and preserve foundational skills creates a trust gap that undermines investment
Supported by the Innovate UK BridgeAI programme, this case study took place as part of an action research project carried out by CIPD’s research partner, the Institute for the Future of Work (IFOW). The project sought to foster a shared understanding of how to use AI effectively and responsibly. This observational case study describes how a large organisation tried to introduce an AI writing assistant. Participants in this case study did not complete an action research cycle.
This case study focused on a large organisation, referred to here as LargeCo, which employed thousands of people in corporate functions and frontline operations. LargeCo outsourced some HR activities to an offshore third-party provider.
Among the extensive range of AI tools in use at LargeCo, a generative AI writing assistant was chosen as the focus of this research. The AI writing assistant was procured from a technology provider and rolled out within one of LargeCo’s teams in early 2024.
To support this research, LargeCo formed a cross-functional working group featuring senior leaders and colleagues from enterprise technology, HR, people, and sales functions. The goal of this group was to engage in action research, encouraging people to think critically about the implications of AI on people, their jobs, and how they work together. This effort was initially led by an AI people lead, whose remit was to build people’s ‘capacities and skills’ and embed the ‘responsible dimension’ when people decide which tasks to automate.
However, the project faced challenges with internal buy-in. When the AI people lead left partway through the research, IFOW could not establish meaningful contact with the chief of staff who had taken over their remit. As such, this case study pivoted from an action research study to an observational case study.
LargeCo operated in a business-to-business (B2B) environment with a heavy focus on risk mitigation through cybersecurity and data privacy. AI implementation at LargeCo was fast-paced – IFOW identified at least 10 tools already in use or under consideration, mostly generative AI. While LargeCo was conducting an internal audit to gain full oversight of these tools, AI agents were being deployed to automate some administrative tasks.
The drive behind this speed was the belief that AI can enable growth without increasing overheads. As one senior technology leader said: ‘The way we improve our profit margin is by growing our top-level growth, but without increasing the operating cost… we can double the size of our business without doubling the size of the team’. Beyond mere efficiency, the working group noted that LargeCo aimed to become the industry leader in ‘secure’ and ‘trustworthy’ AI systems.
To manage this, LargeCo developed an AI policy centring on privacy, transparency, fairness and ethical oversight. LargeCo also created a risk-based framework for AI implementation to align with the EU AI Act. Under this Act, applications such as emotion recognition in the workplace are strictly prohibited.
Approvals for specific AI use cases went through a governance and control board, primarily tasked with corporate risk identification and management. This board ensured that AI deployment and use complied with the organisation’s responsible AI policy. The board included leaders from legal, digital and data privacy, with HR and L&D represented on the board in an advisory capacity. While senior leaders within individual teams could decide to implement low-risk AI, any high-risk AI use cases were flagged for the board’s decision and tracked through a dedicated risk register.
The cross-functional board was designed to prevent AI from being siloed and to ensure its impact was ‘visible and understood’. While the board received praise, some working group members noted that discussions were skewed towards technical risks at the expense of ‘people impacts’. Although L&D efforts were underway to upskill the workforce, there was a clear need for a more proactive approach to job redesign and strategic workforce planning.
This gap in planning was often discovered too late. As one working group member explained: ‘…we’ve had a couple (of automations) where we’ve gone live and it has led to some redundancy consultations… how do we … get ahead of the inevitable conversations … because what we learned from the first couple of experiences was that it was too late in the journey that we started to talk to people about, actually, what does this mean for their job.’
Furthermore, internal data revealed a lack of trust in the responsible use of AI. Employees felt excluded from the process, with only a small percentage aware of the organisation’s AI strategy. This lack of transparency extended to decision-making, which involved no form of employee participation, leaving AI implementation facing a growing trust gap that threatened its long-term success.
The AI writing assistant was introduced to reduce drafting time, improve quality and mitigate the risks of shadow AI. As the head of one of LargeCo’s core teams noted: ‘if you want to use AI, you can, but please use this one … because we’ve got all the kind of checks and balances in place. It’s a secure environment. It draws from our own content.’
As part of the rollout, the technology provider trained one of LargeCo’s teams and periodically surveyed team members’ views of the AI writing assistant. The survey focused on comfort and confidence with the tool, but the questions were framed positively and offered little space to reflect on negative impacts. For example, employees were asked whether they agreed with statements like ‘[It] makes my work more enjoyable’.
An additional optional survey, designed by IFOW, was deployed by LargeCo to understand the team’s views of the writing assistant. However, LargeCo removed questions on team members’ current experiences of work – such as level of mastery, autonomy and job demands – as these were deemed not relevant. This reflected a broader lack of attention to how the AI writing assistant reshaped job characteristics. Furthermore, LargeCo was concerned that a longer survey that duplicated questions in their existing employee surveys would lower the response rate.
Nonetheless, the survey designed by IFOW surfaced a range of views, which underscored the importance of preserving foundational skills.
While the AI writing assistant wasn’t intended to reduce headcount, it was intended to reduce the need to hire ‘specialist technical knowledge’. A working group member expected an impact on junior roles ‘because actually (it) can do some of the work done by junior resource’.
Although a small group of users saw time savings, the overall impact was more limited than leaders had claimed: leaders had suggested that essential documents could be generated with AI in half a day. By the end of the research, however, the AI writing assistant faced being dropped due to low return on investment and lukewarm uptake. This was partly because a superior alternative, embedded in LargeCo’s information architecture, was being rolled out. Reflecting on the experience, a working group member noted: ‘What this whole process ... has taught me is that you cannot force people to use a tool that they are not getting value out of’.