Explore this illustrative example of how a people team might use the AI Risk Management standard.
Situation
The people team would like to replace its in-house helpdesk with an AI people assistant, a chatbot that answers employee questions and points to sources for further advice. The IT team decided to refer to the AI Risk Management standard for guidelines on risk assessment and involved the people and legal teams in the process.
Concerns
The IT, legal and people teams created a risk register to record the different types of risk. For example, what questions could employees ask the assistant, and what's the risk of the assistant giving wrong answers? What's the risk of an employee receiving an answer they don't understand, and if so, how do they escalate their question to a human?
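As a rough sketch of what such a register might look like in practice, entries could be captured in a simple structure like the one below. The field names, scoring scale and example risks are illustrative assumptions, not taken from the standard:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row in the risk register (illustrative fields only)."""
    risk_id: str      # e.g. "HR-AI-001"
    description: str  # the risk in plain language
    likelihood: int   # 1 (rare) to 5 (almost certain) -- assumed scale
    impact: int       # 1 (minor) to 5 (severe) -- assumed scale
    mitigation: str   # agreed action, e.g. "escalate to a human"
    owner: str        # team accountable for the risk

    @property
    def score(self) -> int:
        """Simple likelihood x impact score used to prioritise risks."""
        return self.likelihood * self.impact

# Example entries mirroring the concerns above
register = [
    RiskEntry("HR-AI-001",
              "Assistant gives a wrong answer to a policy question",
              3, 4, "Restrict scope; escalate complex questions", "People team"),
    RiskEntry("HR-AI-002",
              "Employee doesn't understand the answer and has no route to a human",
              3, 3, "Provide a clear 'ask a person' escalation option", "IT team"),
]

# Review the highest-scoring risks first
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"{entry.risk_id}: score {entry.score} - {entry.description}")
```

Sorting by a likelihood-times-impact score is one common way to decide which risks to address first.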
Following the risk assessment, it was decided that the assistant was not yet providing the people team with robust answers to all employee questions, so this risk was flagged in the risk register. The assistant sometimes used words (eg ‘should’ instead of ‘must’) that completely changed the legal and policy position of a response. While automated safeguards can mitigate this common problem with assistants that use large language models (LLMs), the wording errors created significant risks that couldn’t easily be defended.
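One such safeguard, sketched below under assumed requirements, is a post-generation check that compares the obligation wording in the assistant's answer with the source policy text, flagging answers where a 'must' has been weakened to a 'should'. The term list and function names are hypothetical:

```python
import re

# Obligation terms ordered from strongest to weakest (illustrative list)
OBLIGATION_TERMS = ["must", "shall", "should", "may"]

def obligation_level(text: str) -> int:
    """Return the rank of the strongest obligation term found (lower = stronger),
    or len(OBLIGATION_TERMS) if none is present."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    for rank, term in enumerate(OBLIGATION_TERMS):
        if term in words:
            return rank
    return len(OBLIGATION_TERMS)

def weakens_obligation(policy_text: str, answer_text: str) -> bool:
    """Flag answers whose strongest obligation term is weaker than the policy's,
    eg the policy says 'must' but the answer says 'should'."""
    return obligation_level(answer_text) > obligation_level(policy_text)

policy = "Employees must submit a fit note after seven days of sickness absence."
answer = "You should submit a fit note after seven days off sick."
if weakens_obligation(policy, answer):
    print("Flag for human review: answer weakens the policy's obligation wording.")
```

A check like this reduces, but does not remove, the risk: it catches only one class of wording error, which is why the residual risk still had to be recorded and managed.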
Solutions
To mitigate the risk of giving wrong answers, the assistant was restricted to answering simpler questions, such as providing a leave balance or referencing a section in a policy. Complex questions were still escalated to the people team: the assistant would not try to answer questions that needed human judgment and nuanced interpretation of employees’ terms and conditions, the organisation’s policies or the law.
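One way such a restriction might be implemented is a routing layer in front of the model that answers only an allow-list of simple intents and escalates everything else to the people team. The intents and keyword triggers below are hypothetical; a production system would more likely use a trained classifier:

```python
# Intents the assistant is allowed to handle, with simple keyword triggers
# (hypothetical allow-list for illustration only)
SIMPLE_INTENTS = {
    "leave_balance": ["leave balance", "holiday balance", "days left"],
    "policy_lookup": ["which policy", "where is the policy", "policy section"],
}

def route(question: str) -> str:
    """Answer only allow-listed simple intents; escalate everything else."""
    q = question.lower()
    for intent, triggers in SIMPLE_INTENTS.items():
        if any(trigger in q for trigger in triggers):
            return f"handled_by_assistant:{intent}"
    # Complex or unrecognised questions go to the people team
    return "escalated_to_people_team"

print(route("What's my leave balance this year?"))      # handled_by_assistant:leave_balance
print(route("Can I be dismissed while on sick leave?")) # escalated_to_people_team
```

Defaulting to escalation means an unrecognised question fails safe: a human answers it rather than the model guessing.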
Having evaluated the trade-offs, the people team decided that the assistant wouldn’t replace the helpdesk, but it would free up some of the team’s time for other tasks. A side benefit is that employees can access the assistant at all times instead of trawling through policies and procedures to find an answer. Over time the organisation will continue to train the assistant to answer more employee questions and improve its answers, whilst maintaining an acceptable risk profile for the solution.
The value of this exercise was to find a solution that met the organisation’s needs without exposing it to costly risk. In this illustrative example, the organisation decided to roll out a more basic AI people assistant instead of taking the risk and finding problems during testing or live use. A different organisation might decide not to implement an AI people assistant at all and instead focus on simplifying its policies and procedures for employees.
An organisation selling AI solutions might have felt it necessary to make improvements and implement a more sophisticated assistant in its own people team to demonstrate the value to its customers. In other words, the outcome of a risk assessment would vary depending on the organisation’s circumstances.