Credit: Steven Miller

Artificial intelligence (AI) is everywhere. It is in the news, it is in one’s phone, it is in the zeitgeist. Much like the past promise of the “Internet of Things” (remember that?), it will soon be an integral part of everything from one’s fridge-freezer to one’s car and, ultimately, every aspect of one’s life, whether you agree to it or not.

AI has begun to infiltrate businesses, laboratories and the educational process, replacing traditional methods of communication, scientific analysis and knowledge acquisition in a way which, we are promised, will revolutionise the world and benefit everyone: a promise which has secured unprecedented amounts of government funding on the back of unrelenting hyperbole from the silicon Tech Bros of the world.

There is no denying the positive potential of the technology (in its many, many guises). It represents an opportunity to supercharge development in areas such as science, healthcare, education, transportation and, apparently, everything else. Generative AI programmes, such as ChatGPT, will soon make way for agentic AI programmes, which will operate at levels well above what we currently understand, able to handle multiple complex tasks and “think for themselves”. Of course, I use that term very loosely, as all flavours of AI, much like humans, have to be trained because, in order to function, they must learn. However, unlike humans, animals and Jimi Hendrix, they do not experience. Data is finite and corruptible. Experience is nuanced and life-changing. Both can contain their own biases but, while data is undeniably foundational, it is experience which teaches understanding.

The AIs of the Tech Bros are currently built as Large Language Models (LLMs), trained on vast swathes of data which have often been scraped from sources across the internet and, in the majority of cases, without permission. The internet can hardly be classed as a sanctified source of accuracy and truth. Yet, to further enhance AI and improve its functionality, we are told that these systems must be fed ever greater amounts of data from the world’s single largest data source, consuming country-sized levels of energy as they do so.

Did we really learn nothing from the billions thrown and lost during the dot-com boom of the 1990s? Or reflect on the levels of energy burned moving the world to all things digital and creating a fully online, 24-hour-a-day existence which we can no longer function without? How about the social media boom of the 2000s, with its ever-worsening negative impacts on the very fabric of society? Nope, apparently not. Yet the result of all of these things is the very digital encyclopaedia which is used to teach AIs how to be “better than us”. The worst example of this is, unsurprisingly, Elon Musk’s Grok AI, which uses the digital swamp of his X platform to teach Grok the ways of the world. What could possibly go wrong there?

The implications and repercussions of the ubiquitous adoption of AI raise the sort of questions that AI developers would fear to ask their own creations to solve: how to resolve the climate impact of hectares of data centres powering AI technology; what to do about mass unemployment worldwide and the erasure of huge amounts of tax revenue; how to combat ever-worsening problems such as violence against women, toxic masculinity, racism and prejudice, all fuelled by digital educators trained on corrupted and biased data sources. And that is just the tip of a very large AI iceberg. Factor in developments such as recursive self-improvement (AI writing its own code and doing as it pleases) and the transition from computer- and cloud-based AI to physical, robot-based AI (facilitating the replacement of blue-collar workers in the same way white-collar workers are being replaced; AI has no collar), and then there is the very real issue of handing over military strategy and control to AI systems in the race for AI supremacy and the earth-based resources required to maintain it.

I’m continually drawn to the fable of The Sorcerer’s Apprentice. Originally a poem by Johann Wolfgang von Goethe, it is more commonly recognised today as a part of Disney’s classic film Fantasia. It is a tale about magic, impatience and the consequences of overreaching, and it may be the perfect allegory for AI. If you are not familiar with the tale, Mickey Mouse is an apprentice to a wise sorcerer who tasks him with a number of chores. When the sorcerer is away, Mickey seizes the opportunity to wear his master’s magical hat and enchants a broom to carry water for him, freeing him from such a tedious task. As the broom works, Mickey dreams of wielding great power, imagining himself commanding stars and oceans, but his spell soon spirals out of control when the broom won’t stop, flooding the room. In an attempt to make it stop, Mickey chops the broom to pieces with an axe, but each splinter transforms into another broom, intent on carrying out the task the original was assigned. In the end, the sorcerer returns and restores order, and Mickey hands the magic hat back to his master, having learned a humbling lesson.

For me, AI is the broom and we are all Mickey Mouse. AI will do the work and we will sit back in awe as it does so, until everything starts to go wrong and the interconnected systems which govern our existence are at the full mercy of the poorly trained whims of our AI broomstick. Only then will we realise that there is no sorcerer coming to fix the problem, because the broom has become the sorcerer. And where Mickey Mouse chose to chop up the broom in a doomed attempt to stop the mess he had created, we will find that the AI has already cloned itself, propagated everywhere and is more than happy to live in the mess we created. Eventually, in an act of matriphagy, those AI brooms will come and consume us.

We are smart enough to recognise the danger of where we are heading. The cliff edge may well have been passed, but we still have time to define how far away the ground is. We need to act now to implement the legislation and controls required to keep AI as a tool and not allow it to become a master, because what happens when it decides that we, the humans with the magic hat, are the problem?

This is the first in a series of such articles looking at different aspects of AI. Watch this space.