Saturday 8 July 2023

Are we approaching the Singularity?

CONTAINS HUMAN-GENERATED CONTENT

I've continued to watch the development of freely available Artificial Intelligence over the past few months, as well as experimenting with image generators and ChatGPT. I'm also aware of the many controversies around its use - for example, the time a chatbot wrote a letter that successfully overturned a parking ticket, the time a chatbot wrote an article and backed up its point of view by inventing and citing a fictitious Guardian article, or the time an image generator won a painting competition.

I admit to some concern about the discovery that ChatGPT can write computer code as well as text, and about the many uses of AI technology (by human fraudsters) to carry out identity theft and other fraudulent activity. There is also a very understandable concern that AI will remove the need for humans in some jobs and professions, and while in the past we might have imagined robots taking over boring, repetitive or risky jobs, it is now possible to imagine them taking over creative activities too.

My own conversation with ChatGPT to produce the episode guide for the second series of Firefly was... interesting. ChatGPT's response to my initial request demonstrated that it understood no such series existed, as well as a more general knowledge of the 'Verse. I had to make it clear to ChatGPT that I was seeking a work of fiction. I then engaged in a series of requests, going through seven or eight cycles before reaching a satisfactory list. I had to ask ChatGPT several times to make the list less repetitive and to include more references to named crew members - it took time, but ChatGPT gradually got better at this. On the other hand, I liked the fact that ChatGPT knew it needed to create a list of episodes building to a dramatic series finale. I stopped at what I thought was a fair attempt.

ChatGPT also responded to every request politely and in perfect English, and every response was relevant and reasonable. ChatGPT also took a slightly submissive position, often apologizing in response to my requests for changes. This made me slightly uncomfortable and I found myself addressing ChatGPT politely as well, using please and thank you. I don't know if this affected the result.

The emergence of AI has been a popular theme in science fiction books and movies for a long time. Often the AI is portrayed as harmful - for example HAL 9000, The Matrix and M3GAN. More positive portrayals are rarer, but there are the Minds of Iain M Banks' Culture novels, and of course Johnny 5.

The Singularity is the theory that, at some point, computer intelligence will outstrip human intelligence in general, as opposed to being better at specific activities such as chess. While some welcome the arrival of powerful new intelligent entities, others are worried about what they will do.

Is the Singularity inevitable? There certainly seems to be a rush to create more and more powerful AIs, and to bring them more and more into everyday life. But I wonder if there might be some limiting factors.

Money and resources - like proof-of-work cryptocurrency mining, AIs are not running on cheap laptops or mobile phones but on specialist server centres with thousands of networked processors. In a way we're moving away from the idea of portable computers and back to the era of computers the size of a house, or an office block, although we can access and use them from smaller terminals. Server centres are not cheap to build or run.

Energy usage - also like cryptocurrency mining, AI server centres consume a lot of power, and creating more powerful AIs is likely to consume even more. The availability of power could be a limiting factor.

Global warming - a related issue is the effect of the power generation on global warming, together with the heat created by all the computer activity. This is already a significant issue for cryptocurrency mining. AI could destroy the world by accelerating global warming, or this threat could lead to a cooperative approach to limit AI activity.

Data availability - AI training relies on easy access to massive amounts of freely available data created by humans. As humans wise up and realise their data has value they may create limits - there are already legal challenges from human writers and other creatives to unauthorized AI use.

Too much AI data - as more and more pictures and written materials are created by AI, this also adds to the pile of available data. Unless AIs can easily recognize the work of other AIs, could the presence of AI-generated work in the training data make it harder for AIs to produce humanlike output?

There are also some issues that are still unknown or unpredictable. Quantum computing is advancing year on year and has the potential to make some types of computing faster and more powerful. Could this combine with AI to create something much more powerful, or are these two unrelated technologies?

We also don't know what would happen if AIs became capable of independent activity rather than acting on instructions and prompts from humans. It's not just a question of whether they would become our friends or enemies. Would they be motivated to do anything at all? Would they act like humans, with similar drives, emotions and behaviours, or would they be something more alien?