iMarc

thoughts on tech • by Marc Wickens

Cutting Through the AI Hype

An imaginary prog-rock album cover whose theme is AI

There is so much hyperbole about Large Language Models (LLMs) in the media right now that I’m finding it overwhelming, and I am someone who works in the field of AI! From claims that AI will put people out of jobs to claims that it will take over and enslave the human race, it’s difficult to know where to start. Some say AI should be regulated now; others are happy to let the “free market”1 take its course. It’s not easy to navigate, especially when many of the people with strong opinions have their own agendas. This post is really an attempt to briefly answer many of the questions I’ve asked myself over the past few weeks.

What’s Changed and Why Now?

What’s changed is that in recent years the neural networks that power many of the previous generation of speech-to-text and language-classification models have become a lot better. Not because they evolved by themselves, but because engineers and mathematicians made them better. In the case of text generation, this means GPT-3.5 and GPT-4 are uncannily good at predicting the next set of words for a given prompt. Text-to-image models like Midjourney and Stable Diffusion can now predict a set of pixels that resembles a photo-realistic image from a text prompt. In the case of OpenAI and ChatGPT, there has been some genuinely mind-blowing innovation. I’m more sceptical of Microsoft, who after decades of being seen as a laggard behind Google and Apple in every industry they enter, seem a little too keen to plug OpenAI into all of their products. Still, there is a big PR push from both companies, and Microsoft is a big investor in OpenAI. What’s new, therefore, is a breakthrough in the mathematical models that link training samples to predictions. Given the amount of money invested, it’s now time to monetise, and to secure more investment and/or research funding.
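To make “predicting the next set of words” concrete, here is a deliberately toy sketch in Python. A hand-made probability table stands in for the billions of learned weights in a real LLM, and the probabilities are entirely invented for illustration; the point is only that generation is repeated next-token prediction, nothing more mystical.

```python
# Toy sketch of next-token prediction. The table below is invented for
# illustration; a real LLM learns something like it from training data.
next_token_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "sofa": 0.3},
}

def generate(prompt, steps):
    """Greedily append the most probable next token at each step."""
    tokens = prompt.split()
    for _ in range(steps):
        context = tuple(tokens[-2:])      # last two tokens as context
        candidates = next_token_probs.get(context)
        if not candidates:
            break                         # nothing "learned" for this context
        tokens.append(max(candidates, key=candidates.get))
    return " ".join(tokens)

print(generate("the cat", 4))  # → "the cat sat on the mat"
```

A real model conditions on thousands of tokens of context rather than two, and samples from its distribution rather than always taking the most likely word, but the loop is essentially this one.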

Will It Turn Against Humans and Take Over the World?

A common refrain among many sceptics of AI is to draw an analogy between humans and the other animals on Earth. We have used our intelligence to dominate all other species and decimate the planet. Wouldn’t a more intelligent being do the same to humans? It’s a convincing argument, but I’m not sure that the likes of Stable Diffusion and ChatGPT or their successors should worry us. In fact, I think the way we see these statistical models today will be akin to how some people in the 1950s saw the “electronic brains” that we now call computers. Mysterious and magical. Frightening. There are two reasons I am not worried.

Firstly, I cannot see how the ability to predict the next word in a sentence, or the pixels that make a convincing image, equates to intelligence in the human sense, or anything beyond human intelligence. Even if we assume that the breakthroughs in language prediction will also be possible in other areas of life (politics, art, engineering), it’s still simply providing predictions from a given input. What I can see, and think we’ve already witnessed to a certain extent, is how AI might disrupt our society. The way in which the algorithms that run social media have damaged our democracy, by convincing vast swathes of the population on both the left and the right to believe in nonsense science and conspiracy theories, is an obvious example. But there is no drive, agency or consciousness behind the AI.

The second reason is that history, and even our current political climate, tells us that intelligence is not something we should necessarily fear. Take a few examples of individuals who have caused damage to humankind in the past: Trump, Putin, Hitler, Pinochet. I could go on. None of them are known for their raw intelligence. Charisma and ruthlessness, perhaps. They manage to co-opt other intelligent people to work on their behalf. The most intelligent human beings, from Einstein, Galileo and Lovelace to Lennon and McCartney, are not the ones we need to worry about. I’m far more concerned about humans with average intelligence, a Napoleon complex and access to nuclear weapons than I am about super-intelligent AI.

Will We All Be Out of Jobs?

Unfortunately, I am not as optimistic on this one. I can easily see jobs like copywriting and graphic design being disrupted at the junior level. We’ve already seen an NYT bestseller with AI-generated artwork on its cover. We’re all told not to judge a book by its cover, but let’s face it, we all do. In this case the AI-generated image was listed in a stock image library alongside human-generated images, so I doubt the choice was conscious. It was also heavily modified by a human. You have to ask: if the author cared so little about the cover of their book that they were happy for a stock image to be used, rather than commissioning an artist themselves, is there really any loss? They didn’t want to pay much for the image in the first place. But somewhere, someone would have been paid something to make that image, and this time that didn’t happen. A junior designer would need to work on small jobs like this in order to gain the experience and skills to move up in their profession. How will they do that now? The problem I see is that with AI taking the low-end, basic work away from these professions, how does a human work their way up?

On the other hand, the pocket calculator allowed mathematicians, accountants and engineers to focus on bigger problems and have the grunt work done for them. Anyone gifted at maths can still find a well-paid job, despite the fact we’ve had powerful calculators in our pockets for 50 years. Perhaps the same will be true of writing and graphic artistry. The bottom line is, these large models can only generate what is akin to a statistical average of what is already on the Internet. If it were received wisdom that the world was flat, GPT-4 would reliably tell us that the world was flat. GPT-4 has no mechanism to generate anything else, and nor will its successors.

What about other jobs? Programming is often cited as being at risk from GPT, because it can generate code. Yet I have not seen it generate anything I couldn’t have found by searching on Google and opening the first StackOverflow result. When I asked it to write an application to convert files from one format to another, it told me it didn’t know enough about their implementation, despite both formats being publicly documented on the Internet. Impressive that it “knew” what it didn’t “know” (I wish more developers were like that!). The jury is out, but I am not as worried about software developers all losing their jobs, because I know just how difficult software development can be, even for humans with 30+ years of experience. Implementing a variation on a well-known algorithm in the abstract is one thing, but integrating it into existing business domains, data structures, user interfaces and architectures is something else entirely.

In Summary

I find it hard to get excited about the likes of GPT and Midjourney because I find them emblematic of a tech industry that has lost its way. If someone had suggested 20 years ago that we should build a mathematical model that takes crawled web pages as input and autocompletes text and images as output, and that it would likely spread misinformation and put journalism and other important professions at risk, we would have collectively responded with a resounding “Nope.” While I am intrigued that we may have stumbled across a deeper mathematical theory of language, if only by chance, I am not yet excited by the potential utility of such models. Beyond that, I am disappointed that in a world where many people still struggle to feed themselves, that is heading for climate disaster, and where populist leaders are coming to power, technology – something that was always a cause for optimism when I was growing up in the 90s/2000s – looks likely to make things worse, not better.

Notes

  1. The free market, of course, doesn’t exist, and is generally a euphemism for ‘what I want the rules to be in order to suit me’.
