Exam season had drawn to a close, and I was having drinks with a few students. Of course, ChatGPT came up, and they had learned a lesson. Their maths exam was open book, meaning they were also allowed to use the internet. Naturally, they all used ChatGPT. While it gave a very convincing-sounding solution, the solution was totally wrong. They will all get zero marks for that question.
Then I switched on the radio: a call-in show was asking listeners how to deal with the looming Artificial Intelligence (AI) ‘apocalypse’. Later, a cleric on Thought for the Day worried about robots taking over the world.
On the one hand, we have a half-baked solution which the students trusted until it clashed with reality. On the other, we have a wild projection into the future which sounds more like 90s science fiction than reality. One is the promise of an all-knowing assistant with 100% reliability; the other is a 100% evil AI that will eradicate humanity.
‘Tech bros’ love ‘over-promising’
Superficially, these look like completely opposing fantasies, but they have one thing in common: AI tech entrepreneurs (known in the industry as ‘tech bros’) love ‘over-promising’. In AI this can be traced back to 2014, when the AI start-up DeepMind claimed that computers were now “superhuman” because a demo could play Atari video games better than a human.
Google subsequently bought DeepMind, which produced more ‘superhuman’ demos that were happily reported in the press. That led to tech bros claiming that consultants would soon not be needed, as their AI counterparts would be better. For example, a ‘superhuman’ AI was claimed to diagnose x-ray scans better than any GP. A year later, a research article showing that the algorithm had merely learned trivial aspects of the x-rays, such as the time of day or which x-ray machine was used, was quietly swept under the carpet.
This over-promising, populist doom-mongering borrowed from sci-fi at first looks counter-productive to the tech bros’ goal of selling their algorithms. However, it needs to be seen in the context of two things. One is libertarian ideology, in particular Effective Altruism. In a nutshell, its case is that it is not worth being altruistic to all humans, only to those who further humanity. Not surprisingly, tech entrepreneurs see themselves as belonging to the group that needs to be preserved for future generations. Such Effective Altruism was brilliantly satirised in the film Don’t Look Up: they all fly off into space before the comet hits the Earth.
Dark reason behind doom-mongering
The other reason why doom-mongering benefits some is that it is a fictional account, created by the tech industry, about the remote future, and so distracts from current issues: dangerous algorithms that amplify disinformation, facial recognition that can lead to the arrest of innocent people, or gig-economy delivery drivers at the mercy of crude AI, who can be sacked for deviating from the AI-prescribed route, even if only to avoid roadworks. By pushing policy makers to think about science fiction rather than today’s science fact, some in the AI industry hope to dissuade governments from legislating on current, very real, dangers.
Given the rapid development of AI, both the UK and the EU are developing new legislation. What is positive is that both have identified an important principle: safety, security and robustness. However, in the UK this is guidance only and not legally binding, while in the EU it will be law. So UK businesses can potentially still claim that their algorithms are just perfect, while businesses in the EU will need to prove it. Fortunately, neither the UK nor the EU has been scared into kneejerk reactions such as legislating against extreme science fiction scenarios; there is, needless to say, no need to legislate against Hollywood doom stories.