
How to avoid concreting cowpats (and other AI BS)



When AI is touted as the solution to every policy question, we need to learn to ask the right questions to sift the real from the illusory.

Back in 2018 – when ChatGPT was a mere glint in Sam Altman’s eye – I chaired an event asking, “What role should artificial intelligence play in government?”. Our panellists each gave a definition of AI. The first said it was using machines and large quantities of data to understand the present and predict the future. The second, applied statistics. The third? “AI is magic” – with all its connotations, from the impressive to the illusory.

AI continues to conjure headlines, with tech companies and ministers alike promising ever bigger rabbits out of hats – and hoping we’re too spellbound to question them. So, how can you distil the substance from the snake oil, and distinguish the good magic from the dark arts?

Whether it’s a public sector leader promising to revolutionise the state, or a company selling you a new tool, here are some questions you shouldn’t be afraid to ask.

What ‘AI’ are you talking about?

Post-ChatGPT, ‘AI’ often means ‘generative AI’, which produces text or media from prompts. But there are many other technologies that fall under the umbrella term ‘AI’ – machine learning, natural language processing, robotic process automation, neural networks – some of them used for years, for everything from analysis to admin. More recently, ‘agentic AI’ has entered the chat (autonomous agents that carry out tasks and make decisions with limited human interaction).

‘AI’ is often defined by what it resembles rather than what it does: “the capability of computational systems to perform tasks typically associated with human intelligence” (Wikipedia). Being less generous, it’s a hype term – as soon as the ‘AI’ works, it’s either called something else or not called anything at all.

If someone’s selling you AI, as a tool or policy solution, ask them to explain what form of AI it is. Ask them to explain how it works. Get granular.

What problem are you trying to solve? Will AI solve it?

This isn’t the first time technology has been sold as solving all our problems; yet, despite previous waves of digital transformation, problems remain. You can’t point at a problem, wave the ‘AI’ wand and make it disappear.

If you add ‘digital’ to a broken thing, as the saying goes, you just get a digital broken thing. (More likely, a very expensive, digital broken thing.) If your processes aren’t currently working, adding AI risks embedding and exacerbating, not eradicating, your problems. You may get ‘efficient inefficiencies’ – you’ll do the wrong things more quickly. Or, to use a more vivid image, you risk concreting cowpats.

What’s the actual problem in your team, with your service, or with your policy? The many AI tools people are piloting across government may help. But if your problem is poor processes, a lack of money, a lack of people, a lack of time, people working in silos, misaligned incentives… those are the problems you need to solve first.

Does it actually work?

Many things we call ‘AI’ are still nascent technologies. We don’t know how they will fit into what humans do and how humans work – you need to think about the wider ‘sociotechnical’ system, not just the shiny new tech. Ask for evidence and case studies we can learn from.

We have examples of real harm where things haven’t worked. The Australian Robodebt scandal and Dutch childcare benefits scandal (the clue is the common suffix) ruined countless lives through the faulty use of algorithms. In the UK, the exam algorithm fiasco and Post Office Horizon remind us of the consequences of blindly trusting what a computer says.

Less dramatically, a Department for Business and Trade evaluation of Microsoft Copilot found some benefits (including some time saving) but no increase in productivity: some tasks took longer or were only carried out because the civil servant had access to Copilot.

What data is it using – and what are the limitations?

AI systems rely on data, whether statistics for analysis, personal data for agentic AI, or text and media for large language models. So, what data is the ‘AI’ using? Where does it come from? What are its limitations and biases? What’s missing?

Too often we put blind faith in data, but quality varies. (See the ONS’s travails with surveys and economic stats.) Every dataset has limitations and is the result of human assumptions and decisions. Think about who or what isn’t represented – and who may be over-represented. This is (one reason) why facial recognition and other technologies hit errors in ‘recognising’ people who aren’t white, or why recruitment software often discriminates against women and people from ethnic minorities. Forcing the real world into neat categories for us to compare and analyse inevitably strips the data of context.

And then there’s the copyright debate – have the people whose work has been scraped had any say in, or recompense for, it?

Who is accountable and responsible for it?

How transparent is the use of AI? Can we see what data the system is using and what parameters have been set? How can people challenge a decision?

What guidance has been followed, what ethical frameworks applied? The UK government has plenty to help, from the algorithmic transparency recording standard to the AI Playbook to AI ethics to a human-centred guide to the service manual (to say nothing of the wealth of resources from external organisations like the Ada Lovelace Institute).

Is it even legal? The Child Poverty Action Group questioned whether Universal Credit digital systems were compatible with the law.

And go up a level: consider Tony Benn’s five questions to the powerful: “What power have you got? Where did you get it from? In whose interests do you exercise it? To whom are you accountable? And how can we get rid of you?”. It’s easy to see questions about AI as technical or technocratic. But they’re not: the assumptions, companies and economics behind these systems (and our reliance on them) shape our policies, public services and politics. The digital is political.

What do the people affected think?

Have they even been asked?

A recent report from the Tony Blair Institute – not known for its reticence on AI – backed up other studies by highlighting the lack of public trust as a “serious problem” in adopting AI.

Public participation should go beyond merely building trust in rolling out a technology too often seen as inevitable. It should help shape the use of technology in a way that actually benefits people, and bring different perspectives on how – indeed, whether – to use it. Listening to people will build better products and avoid costly failures, whether it’s workers (rightly) warning that tech can’t replace them and they’ll just have to be hired back at greater expense, or representatives from deaf organisations challenging the shortcomings of a Sign Language AI system and saving the public sector money.

There will be some powerful public benefit use cases in the public sector, but there are real risks. Don’t be afraid to ask basic questions (it’s your duty to ask!). Don’t feel bamboozled by a lack of technical knowledge. The trick is to peek behind the curtain and watch out for sleight of hand. It’s always humans behind the magic. 
