The invention of law is a huge milestone in the progress of civilisation. Without it, we would have chaos; people would do as they saw fit, and justice would be arbitrary. Law allows us to have a set of rules we all live by, and a set of procedures for when those rules are broken. For most of us, it allows us to live our lives in peace.
Who makes the law? Experts do – parliament and the courts create laws, the police and courts enforce them, and judges test and interpret them. Most of us don’t get involved. But if we do, we can, with enough effort, see how the law is working. If we’re caught up in it, we at least have recourse to processes under the law, designed to ensure justice and fairness.
This is not an article on the law, though, and I don’t profess to be a lawyer. I’m a software engineer. Software has many of the same characteristics as law. It’s based on rules; describe a set of rules for how you want your machine or algorithm to behave, and it’ll follow those rules. And sometimes we software engineers don’t know all the consequences of how our programs will behave; they can have bugs, and unexpected outcomes.
We’re all increasingly subject to software rules. Buy anything online, and we have to follow the purchasing steps laid out for us. We can’t skip those steps; the software compels us to follow them. Our alternative is not to buy. If we don’t like the rules, it’s tough; there’s no-one to complain to, and the software doesn’t care if we think it’s excessive to set up a lengthy account and type in endless details just to buy a few pounds’ worth of goods.
Algorithms control an increasing proportion of our lives. If we apply for benefits, or credit, or a visa, our data will be put through an algorithm to decide the outcome. That algorithm is like the shopping experience; we cannot deviate from the set path because the software will not allow us to. And, unlike the law, there’s no transparency. How the outcome is computed is known only to the software engineers who created it, and sometimes not even by them. There are no software courts to which we can appeal if we think the outcome is unjust. If our credit, benefits or visa are turned down, often it’s just tough – we can’t see how the software came to the conclusion that it did.
This effect is compounded by the reverence we give to the outcome of computer processes. We feel inherently that they’re accurate and unbiased, and tend to trust them, partly because they give no indication of how they arrive at their outcomes. Who hasn’t programmed the sat nav for a journey they don’t know themselves, and followed it despite a sense of disquiet that it might not be taking the right route? We’ve probably read stories of people who followed sat navs when it was quite obvious that the route was wrong – lorries going under bridges plainly too low for them, people driving their cars into rivers. And yet if our own sat nav takes us on that route, we often trust it, since there’s no alternative; we don’t know the route ourselves. It takes quite a lot of evidence to override our natural trust that the sat nav knows what it’s doing.
This effect is only going to increase during our lifetimes. As more and more aspects of our lives go online, the rules that govern those aspects will increasingly be set by software, not by the law. Law will always be there, but it will become increasingly irrelevant. How much do we pay for that airline seat, or can we get that mortgage when we want it? Will our car let us drive when we want to, or will it decide we’re not safe to drive and refuse?
Perhaps the good news is that better outcomes than a software tyranny are within reach. Humans are good at spotting injustices and wrong outcomes. Algorithms are very good indeed at sifting data and spotting patterns. If we allow algorithms to make the first choices, and then back those choices up with human judgement on the outcomes, we can get the benefits of both worlds – the huge processing power of computers, and the intelligence and judgement of humanity. A world in which we allow computers to search for signs of cancer, and in which humans then check the results, is a good world – we maximise the chances both of spotting cancers while we can still act and of avoiding over-diagnosis. And we can press for transparency in how software makes the decisions that it does. It will still be the preserve of experts, like the law, but it can be made subject to the same kind of scrutiny.
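For readers who write software, the “algorithm proposes, human decides” arrangement described above is sometimes called a human-in-the-loop pipeline. Here is a minimal sketch of the idea; the scoring function, the threshold, and the reviewer are all hypothetical placeholders, not a real screening system.

```python
# A minimal human-in-the-loop sketch: an algorithm makes the first pass
# over many cases, and a human reviews only what the algorithm flags.
# The score, threshold, and reviewer below are hypothetical placeholders.

def algorithm_score(case):
    # Stand-in for a real model: here, just a numeric field of the case.
    return case["risk"]

def triage(cases, threshold=0.5):
    """Split cases into those flagged for human review and those cleared."""
    flagged = [c for c in cases if algorithm_score(c) >= threshold]
    cleared = [c for c in cases if algorithm_score(c) < threshold]
    return flagged, cleared

def human_review(flagged, reviewer):
    """A human makes the final call on every flagged case."""
    return [c for c in flagged if reviewer(c)]

cases = [
    {"id": 1, "risk": 0.9},
    {"id": 2, "risk": 0.2},
    {"id": 3, "risk": 0.7},
]
flagged, cleared = triage(cases)
# Hypothetical reviewer: confirms only the highest-risk case.
confirmed = human_review(flagged, lambda c: c["risk"] > 0.8)
```

The design point is that the algorithm narrows the field but never issues the final verdict; every adverse outcome passes through a human, which is exactly the safeguard the cancer-screening example relies on.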