The pandemic created a problem for the UK education system: how to move students forward without being able to run their typical exams and tests. In response, exam regulators designed an algorithm to assign predicted grades instead. It worked about as well as you’d expect (poorly) and drew heavy criticism over equity and bias, spurring headlines like:

“UK Government faces Elitism Row after Algorithm Downgrades Final-Year Pupils”

“Controversial exams algorithm to set 97% of GCSE results”

There’s a lot of merit to the criticisms of this algorithm, and many others, but we think there’s an even more fundamental problem with these headlines: they talk about ‘algorithms’ like they’re people. An algorithm can’t “downgrade” anyone; a person can use an algorithm as a tool in deciding whom to downgrade. An algorithm can’t set exam results; a person can design an algorithm that sets results, and another person can choose to use it.

An algorithm is a tool, not a sentient being. You would no more say “a hammer built this birdhouse” than you would say “this gun murdered someone”. Someone hammered. Someone shot the gun. Maybe an algorithm is more like a machine than a simple tool, but the same issue applies. Which makes more sense to you: “a bulldozer demolished a city block to make way for condos” or “this company bulldozed a city block to make way for condos”?

As algorithms are used more and more, we’d understand them better if we talked about them as actions, not actors. We should all start saying that problems were ‘algorithmed’ or ‘algorithmatized’ or something! If a hammer hammers and a bulldozer bulldozes, then an algorithm algorithms.

This isn’t a trivial semantic problem. By thinking and talking about algorithms and other data tools as active processes, we get closer to the reality of the situation. First, by saying “he used an algorithm” we put the responsibility where it belongs: on the people designing and using algorithms. How effective and how fair an algorithm is depends on them. Second, it removes an inhumane buffer between decision-makers and the people affected by their decisions. If we always talk about algorithms as tools of varying quality with real, fallible designers, we can neutralize the excuse “Hey, sorry, but it’s what the algorithm decided, my hands are tied…”.

We All Count doesn’t fault the headline writers above for this common problem. We’ve fallen into that language trap many times ourselves. From now on, we encourage you to talk about an ‘algorithm’ as a process, not a decision-maker, because it is, after all, just an inanimate tool used by people.