As AI rapidly expands what’s possible, many organisations are asking the same questions: how much should be automated, what should remain a team-led task, and how do we protect quality while embracing efficiency?
In social research, these questions matter deeply. Insight isn’t just about speed or scale – it’s about judgment, context, and understanding people. Used well, AI can strengthen that work. Used carelessly, it can undermine trust and quality.
That’s why we’ve developed an AI Accreditation scheme at IFF to help our team answer exactly these kinds of questions. Our starting point is simple: in line with our values, we are human first. We use AI deliberately to create time and headspace for our researchers to do what they do best – apply critical thinking, bring deep contextual understanding, and craft compelling stories that drive impact for our clients. AI is a powerful tool in our pursuit of operational excellence, but it doesn’t replace human judgment, accountability, or expertise.
A human-centred test for automation
So how do we decide what should be automated, and what shouldn’t? Rather than starting with technology, we start with our people. We ask a set of questions that help us assess when AI adds value – and when it risks eroding quality, learning, or responsibility:
- Does this task give me ownership or purpose? If the work is satisfying or builds pride in craft, we protect it.
- Can I keep getting better at it? If it grows skills or judgment, automation should support – not supplant – learning.
- Can I think creatively or critically here? If yes, our team keeps the lead.
- Does the task rely on values, context or discretion? If it needs interpretation, domain knowledge, or ethical nuance, a member of our team takes the lead.
- Will using AI free me to do higher‑priority work? If automation makes space for higher‑value activities, that’s a win.
- Will using AI maintain or improve the insight per hour? Quality stays the benchmark, not just speed.
- What happens if it goes wrong? If the stakes are high, the bar for automation is higher, and our team remains accountable.
Our AI task hierarchy
By working through these questions together, we’ve developed an AI task hierarchy that helps our researchers use AI confidently and responsibly – enhancing efficiency without compromising insight or experience:
- No-brainers: Tasks with low developmental value and low risk, where quality is easy to check. Think admin, formatting, transcription, templated briefs, and Excel nudges. If AI can do it well, let it.
- Use with discretion: Tasks where AI can accelerate the work but shouldn’t author the thinking. Examples: stress‑testing structure, critiquing writing clarity, surfacing patterns to explore. We use AI as a critical friend – our team sets the direction, does the reasoning, and validates the outputs.
- Don’t use it: Tasks that take us closest to the data and the thinking: writing, analysis, interpretation, narrative, and recommendations. AI may support these activities (for example through retrieval or summarisation), but it doesn’t substitute for our judgment, context, and voice.
As AI continues to evolve, some tasks may move up this scale. But others shouldn’t – and may never. Social research is fundamentally about understanding human preferences and behaviour. We’ll always champion what only humans bring: curiosity, empathy, and judgment.
Our north star remains clear: being human first. By applying this lens consistently, we believe we can deliver better outcomes for clients and a better experience for researchers – while being transparent about when, where, and how AI helps.