Written by Ollie Gooding

Why AI will never replace human judgement and expertise


With AI usage rising significantly, we’ve been exploring how and when it should be used in social research. Last week we discussed how a granular understanding of the processes that lead to quality outputs is a necessary but not sufficient condition for repeatably producing work of value. This week, I would argue that the final step to achieving repeatable quality is, and always will be, the application of human expertise, experience and judgement to evaluate that quality.

The first part of this argument is easier to make.

We have yet to see reliable, fully automated evaluation of quality for the complex outputs of social research. There’s no set of instructions for assessing narrative clarity, contextual awareness, appropriate tone, or whether an insight answers the client’s real question. These things combine in ways that resist reduction to a checklist. That means that, right now, a skilled person using AI may get to quality faster, but an unskilled person with AI, even with infinite time, can’t match that, because they can’t benchmark quality. Continuing to invest in the human skills of producing and evaluating quality isn’t just sensible; it’s essential.

But I’d go further. I don’t think this need for human expertise in producing and evaluating quality will ever disappear, because it will always matter how quality is achieved, who it adds value for, and through what process.

A foray into the not-too-distant future

Let’s consider the counterargument for a moment: “it doesn’t matter how you produce quality if you can produce it repeatedly and reliably”. Surely this argument is dangerously short-sighted.

Say this were true, and AI could already repeatably produce all your complex outputs at a high quality. In the short term, is that still true when what constitutes a quality output evolves, and new ideas and processes come to the fore? If you don’t understand the mechanism, how can you adapt it when the benchmarks for quality or value change?

But even if we get to a stage where AI can perfectly imitate the processes and evaluation that lead to quality, and adapt to all changing conditions, you’ve got to ask: why? To what end? And for whom? The processes we go through as humans to produce and define value and quality are part of the conversation about what we value as a society. Value and quality do not exist outside of this conversation – they are continually being constructed. When we talk about value, we need to recognise that we really mean human value.

And fundamentally, that shouldn’t and can’t be outsourced. When we work on producing something valuable for society, we get to create and mould the idea of value through our daily struggle to produce it. But we only get to choose what value is if we’re actively engaged in all stages of creating it. Stakeholders on all sides urgently need to grasp this, because without leadership, human value and economic value will become further decoupled.

Let's get concrete

But we needn’t be so grandiose. Right now, and for the foreseeable future, having human beings engaged in the conversation and becoming experts in what creates value (in this case, what constitutes a quality output) leads to value for all stakeholders. Be that the client receiving a report reviewed by a subject matter expert, the junior report drafter who learned how to better articulate their point through feedback, or the expert who had the satisfaction of another happy customer. Fundamentally, we’re still humans creating value for other humans, and we need to be wary about who benefits wherever we see this changing.

So, it matters how we achieve quality. It matters that we understand the processes involved to get there. And it matters that we understand how to evaluate quality, and that through this work we’re part of the conversation that constructs what we value as a society.

This is what we mean at IFF by “being human first”. Not a resistance to AI, but a commitment to staying connected to why we do the work in the first place. The right path for adopting AI isn’t the one that makes our work easiest – it’s the one that gives us more time and energy to develop the judgement, expertise and intuition to create human value for the people we serve.