Nigel Cassidy (NC): Can we trust machines to hire the best people? And what to do when your job seekers sprinkle their applications with generous layers of AI fairy dust? I'm Nigel Cassidy and this is the CIPD Podcast.
Now, AI can scan a CV faster than you can say quirky skill sets. No wonder a third of organisations are using AI to help choose new staff and bring them on board. There's a real fear of missing out on this recruitment revolution; it really is an AI feeding frenzy. Yet some are learning to their cost that if you're not scrupulous about what candidate data you ask the machines to look for, well, AI may only repeat or amplify poor or biased past hiring patterns. You might even have to answer charges of discrimination. Many worry too that applicants are freely using AI to game the system and might just try to fake their way into jobs.
So, where does all this leave human judgment in hiring? How do you mitigate the risks and reap the undoubted benefits that AI can bring? Well, with us first, an experienced chief people officer and HR director, one so exercised by getting AI to work better in recruitment that she created a platform to do just that. Natalie Sheils, founder of Talenaut, says hiring needs to be fast and automated, but above all, still human-centred. Hi, Natalie.
Natalie Sheils (NS): Hello, Nigel.
NC: We've another chief people officer, one who now specialises in helping organisations to use AI and technology to transform how they work. It's Katie Obi from One Advanced.
Katie Obi (KO): Hello, Nigel.
NC: And our final contributor says smarter recruitment is the key to organisational profitability. She's Talent Strategy Director for the consultancy Omni. So, hopefully she knows just where the AI should be in the mix. It's Katie Noble. Hello.
Katie Noble (KN): Hello, Nigel.
NC: So, AI: the good, the bad and the ugly. Shall we start by hearing from each of you? I wondered, what's the worst example you've come across of things going wrong using AI? Or, if you prefer, the most impressive use of it?
KN: There are so many examples across the news of AI being deployed badly, or attracting bad press. What's really interesting to think about right now from an HR perspective is the lawsuit that's happening in the US with Workday. I think this is probably one of the most significant actions yet in terms of its implications for HR and for technology. It's a case about discrimination, particularly focused on race, age and disability. So, we're all watching with bated breath, I suppose, to see what's going to happen with that. Certainly from my perspective, that's what we're doing.
NS: Yeah. I mean, that taste of what Workday is going through, I've certainly had it myself. In my past experience, especially at Mosaic, we were very keen on AI quite early on.
One of the more memorable examples of the growing pains of using AI: we quickly connected our HR systems, including our onboarding data and our most recent hires, to an AI screening tool. Fantastic, because it was very quick at picking up the skills we were looking for. The downside was that we had an exceptional referral rate from our developers. As you know, people tend to refer people they know: they've worked together in past organisations, or they went to the same universities. So, unbeknownst to us, our platform started favouring people with those particular backgrounds. We didn't feed it that information; the prompt was very simple, look for these particular skills. But it went a few steps further and started prioritising people from a similar background. And most of our developers happened to be male as well.
So male candidates started being favoured too. When we audited the results, we noticed that a disproportionate number of the candidates being highly matched were from a specific school, or of a specific gender. Thankfully, we did something about it; otherwise that could also have ended up in a lawsuit like this one.
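A rough sketch, in Python, of the kind of shortlist audit Natalie describes: compare the rate at which a screening tool marks each group a high match, and flag any group falling below four-fifths of the best group's rate (the common "four-fifths rule" heuristic). The data, field names and threshold are illustrative assumptions, not taken from any real tool.

```python
# Sketch of a shortlist audit: compare "high match" rates across groups
# and flag any group whose rate falls below four-fifths (80%) of the
# best-performing group's rate. All data below is illustrative.
from collections import defaultdict

def audit_shortlist(candidates, group_key="gender"):
    totals = defaultdict(int)   # applicants seen per group
    matches = defaultdict(int)  # "high match" verdicts per group
    for c in candidates:
        totals[c[group_key]] += 1
        if c["high_match"]:
            matches[c[group_key]] += 1
    rates = {g: matches[g] / totals[g] for g in totals}
    best = max(rates.values())
    # An impact ratio under 0.8 is the usual signal to investigate.
    return {g: {"rate": round(r, 2), "flagged": r < 0.8 * best}
            for g, r in rates.items()}

# Illustrative pool after an AI screening pass.
pool = ([{"gender": "male", "high_match": True}] * 40
        + [{"gender": "male", "high_match": False}] * 60
        + [{"gender": "female", "high_match": True}] * 15
        + [{"gender": "female", "high_match": False}] * 85)
print(audit_shortlist(pool))
# {'male': {'rate': 0.4, 'flagged': False}, 'female': {'rate': 0.15, 'flagged': True}}
```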
NC: That example's already got me thinking. Katie Obi?
KO: Yeah, a couple of things spring to mind. Those are great examples. One of them is quite similar, but a little bit adjacent: during COVID, algorithms were applied to predict A-level results for students who weren't able to sit their A-levels. And for some not too dissimilar reasons to the ones Natalie just outlined, the predictions that came out were actually very biased, because they were based on historical data and trends. That significantly and disproportionately impacted students from disadvantaged backgrounds: non-selective schools may have had lower A-level scores in the past, in the data used to train the models, which meant really top students had their scores brought down, versus the data available for selective schools, for instance, where grades in general tended to be higher.
So, that's one example. The other I can think of is from the early days of ChatGPT, when a couple of lawyers decided to use it as a search engine to look up case documentation. It essentially hallucinated a case that provided case law to back up their claims. When that was rejected by the judge, they went back to ChatGPT and asked it to retrieve the court documentation itself, which ChatGPT equally hallucinated, generating all of this content, because it's great at generating things. They submitted that too, and the lawyers were eventually fined as a result. So, really understanding the technology you're interacting with, and applying critical thinking to validate it, is super important. You can very quickly get yourself into some not insignificant challenges and trouble if you're not really thinking about what you're doing.
NC: The AI obviously didn't know that when you're in a hole, you should stop digging. We'll probably come back to that really interesting subject of how you keep watch on what the AI is finding and keep training it; let's do that in a minute. Before we get there, Katie Noble, should we just start with where we are? Could you remind us of the major recruitment processes that AI is now widely doing?
KN: Yeah, absolutely. And what's perfect to bring in at this point is the biannual resourcing and talent planning survey that the CIPD conducts alongside Omni, a report across the whole resourcing and talent space. One of the key areas we've included over the last couple of reports is the use of AI and machine learning. What was great to see from this report was that there has been a definite increase in the use of AI and machine learning technology in recruitment processes. However, it remains far from common. It was still great to see that increase, but it's certainly not at the level we would hope. The results show that around four-fifths, so around 80%, of SMEs and public sector organisations, and an even higher proportion of non-profits, are not using any kind of AI at all.
NC: Is it the smaller the organisation, the less likely?
KN: Yeah, small and medium-sized organisations, and public sector organisations, seem to have a far more stringent view that no AI is to be employed at all, and they sometimes put that onus onto the candidates as well. There is simply less adoption happening in these spaces.
So, although overall there is an increase, it's certainly not to the level you would hope. If we break down the processes and think about the different elements of resourcing, where we have seen some of that increase is in things like using AI to write job descriptions and adverts. I think that's now fairly common when organisations talk about the use of AI within the resourcing process, and actually it's where we see it working really well: they're using AI to create attractive, great job descriptions and adverts that appeal to more people. Of those that have adopted AI as part of their recruitment processes, 66% say it has improved hiring efficiency, and 62% say it has increased the availability of useful information for resource planning.
There's also an EDI angle there: we're seeing people use AI to remove biased language, for example. Another area of increased use is chatbots responding to candidates' questions. So, rather than having a frequently asked questions section, they're using live chatbots to answer questions, which feels really interactive.
The other two areas are around the onboarding process, where we're seeing more AI used to engage with candidates, and the increased use of AI to shortlist candidates. I've left shortlisting until last because it's the area that comes under the most scrutiny. As it should: where you have any kind of automation or technology making a decision, and this is why Workday is where it is, people want to know the governance and the framework behind how that decision was made.
NC: And people worry that it's a black box, so you can't see how it got to its results. OK, well, let's pick up on that with Natalie Sheils, particularly this use of AI for skills assessments and for screening candidates. Is that the area, you've already hinted at it in your example, where we're starting to see the most problems? What would you say are the big drawbacks people need to look out for?
NS: Yeah, 100%. For some reason, recruiting has been seen as low-hanging fruit, the easiest place to blanket-apply AI, and I think that's the problem. Certainly when it comes to how you screen, and it begins even at the sourcing stage, right? Because the same filters that most recruiting technologies use to filter candidates are the same criteria they use to source candidates. So already, if you have bias, and there are at least four different kinds of bias, but if you have even one of them within your AI, you start to run into problems in how you're screening those candidates.
The main thing is that when it comes to resume screening, a lot of these technologies are filtering on keywords and formatting to find the candidates that fit best. The plus side is hundreds of applicants, sometimes thousands, filtered very quickly. The downside is you've missed a whole lot of other signals that you probably haven't built into your process, because you were looking for particular words and particular formatting, and quite frankly, maybe you didn't train your AI to look for the right things. And that's where the problem is. But I'll add another component, which we'll probably pick up a bit later: where it goes wrong. There's a lot that can go wrong, of course, in how we train our AI. The most important thing, I hope, is that people come out of this podcast understanding the four categories of bias in AI. Once they understand those, they can think about how to mitigate them in practice.
So, there are four types of bias, and I'm going to simplify them as much as I can. First, you've got algorithmic bias. For example, let's say an AI model started favouring particular names because it happened to notice that a lot of your developers come from Eastern Europe and are called Andre. The second is sample, or representation, bias. This happens when your AI training data is not diverse enough, or under-represents specific populations overall. That's the likelihood in many organisations, unless you've got a large sample size. So an AI system trained primarily on male candidates' data, because of the people you happen to have hired, may start to show prejudice against female candidates.
The third is predictive bias. This happens when your AI system consistently overestimates or underestimates a particular group's future performance. An example would be an AI tool that consistently ranks candidates from one school, say Harvard, higher or lower than candidates from other institutions, despite their having similar backgrounds and levels of experience.
And the last one is measurement bias. This occurs when the dataset you've trained your AI model on leads it to make inaccurate or unfair conclusions when working with real data. An example: say you train your AI system on inaccurate data about your company's top performers, because some of your hiring managers weren't very strict about how they gave feedback, and employees, for whatever reason, hate performance reviews. Now it won't accurately identify the right traits in candidates; it will just take that flawed information and run with it. And because AI loves to hallucinate, those flaws become a compounding problem.
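A minimal sketch of checking for Natalie's second category, representation bias: compare each group's share of the screening tool's training data against its share of the applicant pool. The groups, counts and the 0.5 threshold below are illustrative assumptions.

```python
# Sketch of a representation-bias check: flag groups whose share of the
# training data is under half their share of the applicant pool.
# Groups, counts and the threshold are illustrative assumptions.

def representation_gaps(training_counts, applicant_counts, min_ratio=0.5):
    train_total = sum(training_counts.values())
    pool_total = sum(applicant_counts.values())
    report = {}
    for group, pool_n in applicant_counts.items():
        pool_share = pool_n / pool_total
        train_share = training_counts.get(group, 0) / train_total
        report[group] = {
            "pool_share": round(pool_share, 2),
            "train_share": round(train_share, 2),
            "under_represented": train_share < min_ratio * pool_share,
        }
    return report

# e.g. past hires used as training data vs. this year's applicants
print(representation_gaps(training_counts={"male": 180, "female": 20},
                          applicant_counts={"male": 600, "female": 400}))
# female: pool_share 0.4, train_share 0.1 -> under_represented True
```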
My point in mentioning the biases that tend to creep in when you're screening resumes is that resumes are still what a lot of companies lean on when evaluating candidates. My question, or challenge, for us to consider is this: a resume, or a cover letter, is all about keywords. Does it still make sense to evaluate people on how eloquently they put their experience? Or do we flip the funnel around and start evaluating for value, starting from evidence of capability?
NC: A lot to take in there. We've had a really good list of what goes wrong with AI, and we ended with a big attack on the CV, which I'm sure other people might agree with. Katie Noble, you wanted to come in on that?
KN: Exactly as you just said, Nigel: Natalie, you covered so many different things there. Certainly from Omni's perspective, this is where we probably spend most of our time when we think about the impact of AI. The efficiencies and advantages AI can bring are absolutely in that assessment space, along with education about all of those biases, exactly as you say. But what's really needed is transparency around all of it. If I understand, one, that I'm being assessed by some form of AI, what does that look like, and what has happened beforehand to make sure it is bias-free? Because exactly as you say: bias in, bias out, and that's going to keep happening.
But then also, the assessment space is ever evolving, and absolutely, we're trying to talk to organisations about moving beyond the CV, just as we're encouraging ourselves to push into this space. There's so much literature out there, and according to LinkedIn, AI literacy is the third most in-demand skill, so we know this is only going to increase. If we expect that of ourselves and of our organisations, we have to expect that our candidates are doing it as well.
NC: OK, so let's try to be practical now. Katie Obi, how do we bring all this back? Organisations have talent strategies; well, they ought to have them. They ought to know the people they need. Somehow the AI isn't delivering: it's focusing on the wrong data or it's misleading us. How do we start bringing it back? How do you get the AI to do what you want it to?
KO: Yeah, and maybe one point to make as part of that is that AI isn't always solely used to narrow down and select. In many ways, it shouldn't be used to select, as we've talked about. AI can also be used to widen access to talent pools that recruiters aren't used to reaching, and that referrals don't get you access to.
So, looking at adjacent skills is sometimes really important, to surface different profiles from those that would normally come through your recruiting funnel. We shouldn't think of AI as just a tool to down-select more efficiently, because that's where a lot of the problems come in. We should think of it as helping us access a wider, more diverse talent pool, if we use it in the right way.
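A minimal sketch of that adjacent-skills idea: rather than filtering on exact keywords, an adjacency map lets related skills count towards a match, so different profiles surface. The map and function names are hypothetical; real systems infer adjacency from large career datasets.

```python
# Sketch of adjacent-skill matching: a required skill counts as covered
# when the candidate lists it or an adjacent skill, so profiles an
# exact-keyword filter would reject can still surface.

ADJACENT = {  # hypothetical skill-adjacency map
    "data analysis": {"sql", "excel modelling", "bi reporting"},
    "python": {"r", "data engineering"},
}

def coverage(candidate_skills, required, adjacency=ADJACENT):
    """Fraction of required skills covered exactly or via an adjacent skill."""
    skills = {s.lower() for s in candidate_skills}
    covered = sum(1 for req in required
                  if req in skills or skills & adjacency.get(req, set()))
    return covered / len(required)

# A candidate with no "python" keyword still surfaces via adjacency.
print(coverage(["SQL", "R", "stakeholder management"],
               ["data analysis", "python"]))  # -> 1.0
```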
I think the biggest thing for me is that there's a bit of an approach of "the answer is AI, what was the question?", and that's really unhelpful. What's most important is: what challenge are you trying to solve? If you're really clear on that, you can then make sure you're selecting the right tool, make sure it's being trained on the right data, make sure you're asking the right questions of whoever's developing it so that the decisions are explainable, and make sure it's being used for what it was actually built, tested and trained for. The right voices need to be in the room, testing and making sure there aren't unintended outcomes as a result. So, be critically clear about what the business is trying to solve. What outcomes really matter? What are the potential risks? And then make sure a cross-functional group with diverse opinions and perspectives is involved in testing it and bringing it to life.
NC: Makes sense. Natalie Sheils, you mentioned to me beforehand that HR must be mindful of being able to defend its actions in the light of future as well as existing legislation. This does imply you have some ethical and legal concerns, maybe about what we're doing with the data, or what?
NS: Oh, it involves a few things. Under the EU AI Act, AI within recruiting has already been classified as high risk. Europe is quite a way ahead of, let's say, the US, and I'd probably put us in that bracket as well. And it's for a lot of reasons, all related to governance. Bias is one element of it, right? You also want compliance, transparency and accountability.
And so, when you're thinking about how you utilise and select your AI systems, I agree 100% with Katie. I would add that once you've understood what we want as outcomes as an organisation, and I'll bring it back to a practical example: we're hiring a product manager. Why are we hiring a product manager? What are the skills we're looking for? Is it because we want more efficiency? Is it because we're short-handed?
A lot of organisations are going to be looking at what else could be done by AI versus a human, right? What are the skills we want to see, and what does that look like in practice when it comes to the work? When you've gone through that process, after all the steps Katie mentioned around governance and selecting the tool, there are transparent AI tools out there, and we can talk about what that looks like in practice and what markers to look for; I think that'll be helpful. But once you've understood, OK, this is where we want to use AI: we've written a policy on where we will use AI; we're very clear about which decisions the AI makes in which parts of recruiting; and we're very candid with our candidates about where we use AI, where they can and cannot use it, and how we use AI to evaluate them.
So, there's transparency along the way. Now, when it comes to HR recruiters, right now it feels like the brunt of the work falls on your AI technology tools. I expect that's going to change very soon. As we think about the decisions we're responsible for as organisations, I think it's going to turn into a hybrid approach. It's going to be: all right, did we train the AI model? And when we did, did we make sure there are feedback loops built into the systems we're using? The whole problem arises if we bypass decision-making and say, hey, we're handing it over to AI. No, it's a tool. It's not God, right? Humans still need to go in, look at all the shortlisted candidates, and look at the reasons the AI selected those particular candidates against the criteria you requested of it. And by the way, before you even press run, you should have already checked all that criteria in the first place.
Now, the beauty of doing that in practice with your system and your tools is that you can look in real time at the decisions the AI is making and the people it's shortlisting, and quickly recognise where its decisions are biased or discriminatory, or where you need to change the way you're prompting it. Hiring managers can weigh in on that too. So, not only are you checking; you're auditing in real time, you're retraining your AI models in real time, and you're starting to build a pipeline for your AI in terms of how it recognises the information and the prompts you're giving it. And the systems, which will become intelligent enough as time goes on, will check in with you on the areas where you've asked for check-ins.
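A minimal sketch of that human-in-the-loop pattern: log each AI shortlisting decision with its stated reasons, let a recruiter accept or override it, and collect the overrides as feedback for re-prompting or retraining. All structure and field names here are hypothetical.

```python
# Sketch of a human-in-the-loop review loop over AI shortlisting
# decisions. Overrides are kept as feedback for the retraining loop.
from dataclasses import dataclass, field

@dataclass
class Decision:
    candidate_id: str
    shortlisted: bool
    reasons: list  # criteria the AI says it applied

@dataclass
class ReviewLog:
    overrides: list = field(default_factory=list)

    def review(self, decision: Decision, reviewer_agrees: bool, note: str = ""):
        if not reviewer_agrees:
            self.overrides.append((decision, note))  # feeds the feedback loop
        return reviewer_agrees

log = ReviewLog()
d = Decision("c-102", shortlisted=False,
             reasons=["missing keyword: Kubernetes"])
log.review(d, reviewer_agrees=False,
           note="4 yrs container orchestration; keyword filter too narrow")

# Periodically, overrides drive the feedback loop: adjust the prompt,
# widen the criteria, or retrain the model.
for decision, note in log.overrides:
    print(decision.candidate_id, "->", note)
```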
NC: OK, so good ideas for best practice there. Katie Noble, can we kind of relate that to your own findings and experience, how you get this human judgment involved in everything that the AI is doing for you? And indeed, maybe also how you manage senior management who just want the AI to save them money quickly?
KN: Yeah, a great question. The points Natalie was making were really great, and really considered, because one of the challenges we have right now is that the key to all of this being successful is that transparency, that constant review of where discrimination creeps in, just as you would when creating any kind of assessment process. But I think there's almost a reluctance, a hesitation, a nervousness about sharing this information. And that's when everyone becomes nervous.
So, if we think about the recruitment process: we were saying before that hiring managers and HR are nervous of candidates using AI to enhance their skills and their applications. But candidates are also really nervous about a decision being made by something that isn't human, without any access to how that decision was made. So, there are so many things to consider there.
What's really interesting, going back, if you don't mind, to the ethical and moral implications, is that there's something else we need to consider. Because again, Katie, you said yourself that sometimes the attitude is: AI is the solution, let's just do it. It's a bit like when we think about EDI: let's just do EDI through recruitment. No, there's actually so much more to making change, to making cultural change, to being really effective and making whatever this is do what we need it to do.
And what we're finding now is that because there's this nervousness about making any kind of policy or governance around the use of AI, people go their own way. There was a finding recently, I think it was the LinkedIn and Microsoft Work Trend report, that 78% of AI users in organisations are using their own tools. So they're not being governed, or given a directive to use it within certain constraints; they're using their own tools, and in doing so they're potentially making decisions that go against the organisation's framework. There's so little governance around it that there's a huge risk of people off doing their own thing, because they don't have the guidance they're looking for. I'm not saying that should just be an HR decision; exactly as we've all said, this is much bigger than that. But it is something we need to consider, because whether we're willing to accept it or not, our employees, our people, our colleagues are using AI. Are they doing it in the way we want? The answer, probably, is no.
NC: OK, well, I'm glad you brought up that question of candidates using AI. I mean, Katie Obi, we can hardly blame them if the employers are doing it. There are two sides to it: candidates can use AI to show the best of themselves, to help them prepare for an interview; but the concern is that they might blag their way into a job, misrepresent themselves and not be spotted. So, can you talk about the fault lines, the areas employers might be concerned about? Do they have reason to be concerned? And could you seriously ban people from using AI? I don't see how that would work, but I've heard stories of employers trying it.
KO: Yeah, it's really interesting. I agree with a lot of what Katie said earlier, which is if you don't actually find out the right way to facilitate it, people will do their own thing. And it becomes shadow AI, which is AI that's happening, but you have no control over it whatsoever. So, much better to actually have guidance, have policy, to be able to help determine that. And then you have more control around what's happening with your data, who your data gets released to, etc.
From a candidate standpoint, I think we should all assume that candidates coming through will be using it. They're using it to enhance their CVs and improve their job applications, and they're probably using it to prepare for interviews as well. So, they're using it to try and improve their chances of getting selected. They might be using it to apply to jobs in bulk, too; we're seeing more and more of that. And to prepare for interviews, it's almost like an interview coach, and I personally think that's a great thing for people to do. It's no different from going for a coffee with someone and asking, will you do a mock interview with me? It's just at people's fingertips now, and more people have access to it.
Where I think employers should be clearer is on when it can't be used. 38% of British students and graduates admitted to using AI, and one of the things they admitted to was using it for assessment tests: verbal reasoning tests, general tests, personality tests. I personally think employers should be clear about when it should and shouldn't be used, and that's probably a good case for when it shouldn't.
I think one of the challenges is that this is a technological advantage people are using. But for years and years there have been examples of someone having their cousin do an aptitude test for them, or do the interview, and then suddenly someone you've never met before turns up on the job. These things have been happening; it's just happening in a different way at the moment.
NC: Can you use it in a live test?
KO: Well, I think we might see a trend back to more in-person interviews to help validate some of these things. I've personally experienced it being done in a live interview, which was really interesting. The telling sign for me was a ten-second pause after each question, and then a perfectly scripted answer at the end of it. And I couldn't connect the answer to the other conversations we'd had. So for me, the best way to navigate it, in addition to having clear guidance set out so expectations are clear, is to make sure our managers are really skilled at interviewing properly.
It comes down to core human skills, human management skills. You have to understand how to ask questions that really ascertain the skills of the individual. You have to think critically, and you have to know that some of this might be happening, so ask yourself what you would need to ask, look for, or inquire deeper into to really understand whether the candidate has that skill or not.
NC: Natalie Sheils, have you come across that? Have you any tips, any ideas for filters, to ensure that candidates use AI to help them, not to misrepresent themselves?
NS: Yeah, certainly. I mean, I've seen lots of things recently, including one candidate having an avatar attend an interview for them. Very obvious, because when they looked to the side, the face changed. But the main thing is this: there are different steps to the candidate journey. There's the "I'm applying for the job" stage, where we all have these thousands of resumes coming in and we're filtering them. And then there's the "I'm being interviewed in real time to assess my skills" stage. So you've got to think about your HR technology for each of those. Now, the thing with AI is that it's ubiquitous and it's invisible. Unless you've built an entire system to detect whether someone has written something with AI, you're going to have a hard time. And even then, they could still run it through tools like Anthropic's and change it so it doesn't even look like AI, simply by changing a few things: adding the "I"s and "we"s, taking out hyphens and so on. So it's very, very difficult to judge it that way.
The three main questions employers tend to ask, or be concerned about, at least at the applicant stage, are these. What if this person used ChatGPT to write their cover letter? I would argue: what if they used Grammarly? What if they used a career coach? Exactly as Katie said. What if they used AI to pass the skills test? Katie already talked about people asking their cousins to do those assessments for them. And, the crucial one: are we hiring the candidate, or are we hiring their chatbot? You can now get a chatbot as a browser extension, by the way, that, unless the technology stops it, will fill assessments in for people. So the real question is how we should be thinking about assessing people. The most important part, yes, is how we interview for the right skills. But in the same way that AI is making things easier for candidates, and easier for us to filter, it's also presenting us with amazing opportunities to create evaluations that are intelligent, dynamic, and changing in real time based on the person responding to them.
So, that's something we can use AI for as well, with the good old-fashioned situation, task, action, result, alternative result structure built in. And maybe I'd even add a tip: instead of asking candidates to upload a resume and cover letter, make it free-form, right? Because when something is free-form in a particular technology, you can't copy-paste into it, and you have free-form answers to questions built on that STAR method. That's actually very hard for AI to bypass; most people don't know that. And you can have your AI system or recruiting technology intelligently make the questions harder in real time. It can record these sequences as they happen with the candidate, and you can see the summaries.
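A minimal sketch of that adaptive, free-form STAR questioning: free-text answers are checked for situation, task, action and result components, and the next question gets harder when the candidate does well. The question bank and the crude keyword scorer are illustrative stand-ins; a real system would score answers with a properly validated model.

```python
# Sketch of adaptive STAR questioning with a crude, illustrative scorer.

QUESTIONS = {  # hypothetical question bank, by difficulty
    1: "Describe a time you shipped work under a tight deadline.",
    2: "Describe a time a release failed. What did you do next?",
    3: "Describe a trade-off you made and later reversed, and why.",
}

STAR_CUES = {
    "situation": ("when", "while", "during"),
    "task": ("needed to", "had to", "goal"),
    "action": ("i built", "i decided", "i wrote", "i led"),
    "result": ("as a result", "reduced", "improved", "shipped"),
}

def star_score(answer: str) -> int:
    """Count how many STAR components the answer appears to touch."""
    text = answer.lower()
    return sum(any(cue in text for cue in cues)
               for cues in STAR_CUES.values())

def next_level(level: int, answer: str) -> int:
    # Step up on a strong answer (3+ components), down on a weak one.
    return min(3, level + 1) if star_score(answer) >= 3 else max(1, level - 1)

level = next_level(1, "During a migration we needed to keep uptime, so I "
                      "decided to shadow-write to both stores; as a result "
                      "we cut errors.")
print(QUESTIONS[level])  # the harder, level-2 question
```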
So, that's another way you can use AI to evaluate. We just have to look differently at how we evaluate; I said it before, we've got to flip the funnel around. That way, you get higher-quality candidates coming in front of you. And probably, as Katie said, we're going to see more face-to-face interviews, until somebody creates some kind of technology that stops people adding those extensions.
NC: Wow. Well, Natalie has brought things almost to a close with some excellent tips there. So perhaps I could go to our two Katies, just to get a sense of your ideas for doing this better. Katie Noble?
KN: Just to pick up on the points that both Katie and Natalie made there: lots of these risks already existed, they just took different forms. We would often work with public sector organisations, in particular, where candidates had to answer specific competency-based questions, and studies showed that candidates with family members working there were more likely to pass, because they were almost like the referrals. So, these risks were always there.
I think the key to this, and Natalie touched on it, is the skill of your hiring managers and your interviewers to really probe and assess, and the understanding that interviewing effectively is a skill that needs to be developed. There was always the risk of people simply being great at interviews, able to give great answers, and the mitigation was always to work with your hiring managers, your decision-makers, to enhance their skills. We have to recognise that AI is here to stay, here to develop, here to help. There are massive gains to be made; we just need to start thinking about how we can capture them effectively and efficiently.
KO: Yeah, I think for me, there is a really big difference between using AI to misrepresent your skills and capabilities and having great AI skills and being AI literate, which is a really important skill of the future.
So, we want people coming into our workforce who know how to use these tools and are using them to make themselves more effective. We shouldn't go down a route of discouraging that, because it's a really important skill for the future, for meeting our business objectives and achieving what we want to achieve. The big thing for me is: are they doing it in a way that misrepresents their capabilities? Each organisation has to work out where its line is, but I would say don't draw the line so tightly that you end up with lots of people who have no ability to use these tools, because they're going to need them in the future.
And then the biggest thing for me is that the most important skill of the future is going to be critical thinking, because that really helps you understand the difference between what you can do as an organisation and what you should do, which feeds into how you select and use AI. It also feeds into evaluating what someone is presenting to you versus how you assess the reality of it, which goes to our interviewing and assessing elements too. And I think this is going to be a hard skill for organisations to keep building, because you tend to develop critical thinking through layers of experience built up over time, and AI is going to take out some of those entry-level, foundational tasks. Can you really build the right critical thinking skills if you haven't been through that process? And a study came out, which I find a little alarming, saying that people who use AI a lot are less likely to rely on their critical thinking skills, even when they have them.
So, I think that whole critical thinking piece is going to be key to navigating the future, and one that organisations and individuals really need to work out how they're going to keep building and embedding, making sure everyone realises how important those skills are and what an impact they have.
NC: Wow. Well, my head is spinning. I'm tempted to say you wouldn't get all the information we've just heard in the last half hour or so from AI. And I think you've made it very, very clear that you need to be constantly refining what the AI is looking for, what its purpose is, and what it may tell you about the candidates, and indeed that it can help you look a bit further for them than wherever you're looking at the moment. Many thanks to our trio: Katie Obi from One Advanced, Katie Noble of Omni and Natalie Sheils of Talenaut. The CIPD website offers lots of advice on AI in selection and recruitment, and on the impact of using algorithms on candidates' perception of fairness; I spotted that one. Next time, we hope to be diving into business culture. We'll be at the CIPD Festival of Work. So, business culture: who sets it? Is it always from the top? And how do you do that? But until then, from me, Nigel Cassidy, and all the CIPD Podcast team, it's goodbye.