Curiosity, as usual, got the better of me. Everyone was talking about ChatGPT: the new text generator powered by an artificial neural network. I knew roughly what to expect—I had watched, warily but with interest, as GPT-3 (a more powerful but less user-friendly version) sparked breathless chatter across the tech world in 2020. Could the hype be true? Had AI really crossed the threshold from ‘text generator’ to author?
ChatGPT answers queries in natural language, so you can ask it to write an essay on the Enlightenment, an explanation of how bicycles were invented, or a poem about the sea. It responds in eerily human-like syntax—a significant development in artificial language models. I resisted the urge to try it out. I was both disdainful of the claims that ChatGPT was genuinely poetic and creative, and fearful of what I might find.
AI-generated ‘art’ has bothered me for a while. Enthusiasts boast that it will ‘democratise art’ by generating pictures or stories at the touch of a few keystrokes, seemingly with blithe disregard for the labour, skill and passion involved when human artists and authors create works. However, the first and understandable worry that it will put artists out of a job is not the only issue. What’s at stake is much greater: the social and cultural signals that human beings send through art.
A deluge of AI-generated songs on music platforms doesn’t just make it harder for musicians to reach an audience, it also robs listeners of genuine emotional connection with the song. AI-generated images that furrow the brows of art historians aren’t just a neat trick, they dilute the signals that paintings send about specific times and places in human history. And, of course, every time someone interacts with ChatGPT, or any other text or image generator, they’re teaching it to be a more effective mimic. If art is an act of social communication, we’re currently at risk of churning out a whole stream of nonsense.
Literature seemed like the hardest field for AI to crack. Natural language generation is notoriously difficult for machines, especially tasks requiring longer-term ‘memory’, like keeping the thread of what was previously said in the conversation or developing a consistent theme across several paragraphs.
That changed in 2020 with the release of GPT-3, the most powerful language generator ever created—a huge artificial neural network, trained on about 500 billion ‘tokens’ of data collected from years of web trawling. Given a prompt, it can produce news articles and answer questions. It was also widely reported that GPT-3 could tell jokes and write poetry, things we tend to associate with a strong proficiency in language.
But things exploded with the release of ChatGPT in 2022. A more specialised version of GPT-3, ChatGPT is primed for conversational use. This model was released to the public to test its performance and is free to use (for now). Suddenly, the natural language capacities of this neural network were apparent to anyone who cared to interact with it. On LinkedIn, I saw dozens of posts describing how AI will help us write emails and structure meetings more efficiently, all with the ‘cheeky’ disclaimer at the end: ‘PS… this post was written entirely by AI!’ Others took to Twitter to showcase essays or poems ChatGPT had written.
Nothing stands still in the world of AI. Last month, GPT-4 was released. Available only via subscription, it is reported to produce better and more accurate responses than ChatGPT. Other models are crowding in too, with Google and Microsoft trying to capitalise on the AI buzz with mixed success. The rapid pace of development makes it difficult, but crucial, to step back and consider the longer-term implications.
Proponents of AI-generated work often seek to position AI as nothing more than a tool or an aid, albeit a powerful one. Of course, the reality is that the neural networks being developed now have the capacity to replace human expression altogether. Hundreds of ChatGPT-written books are already flooding Amazon. In future, a publisher may ask an AI model to generate ‘10 manuscripts in the style of [insert bestselling author here]’, and then choose the best one.
But even where people are ostensibly making use of AI to help them write, the AI-as-tool mentality is dangerous. We tend to play down the importance of tools and play up our own involvement. Many of the ChatGPT-written posts I saw on LinkedIn or Twitter were accompanied by the poster insisting something along the lines of ‘the ideas are all mine, AI just helped me put it into words’.
Do we really have such a low opinion of writers? ‘Putting things into words’ is difficult, as anyone who has ever tried it knows. And in the words of Margaret Atwood: ‘A word after a word after a word is power.’ Choosing words isn’t an annoying task that can be outsourced so that you can write your novel—it is writing the novel. The same idea can be expressed through various arrangements of words, some more elegant than others.
The implications of this mindset are twofold. The first is that the act of writing comes to be seen as quasi-mechanical drudgery. Apply the same principle more broadly and a widespread devaluation of human creativity follows. The second implication is that if we don’t do the slow and difficult work of putting our thoughts into words often enough, we will diminish our ability to do so at all. AI is already finishing our sentences for us, through ‘auto-complete’ functions in text messages and email. This tends to erase personal idiosyncrasies, smoothing out our individual voices into one bland and homogenous style (it also perpetuates hierarchies of ‘correct’ language use). An AI-assisted ‘colour-by-numbers’ tool for writing a novel will let us feel like we’re being creative, while simultaneously robbing us of the ability to truly express ourselves.
Another common argument in defence of AI-generated art is that ‘creativity’ is nothing more than a remix of everything we’ve heard or read, so there is nothing particularly special about it. True, there is no ‘pure’ new idea, untainted by other socio-cultural products. But the kind of ‘remixing’ we do is rich in sensory and emotional layers. Certain works inspire us because they resonate. Often, they move us to intense emotion.
Creativity, then, is collaborative: we draw on the intensity of feeling or aesthetic appreciation produced by a work and use that to fuel our own attempts to spark a similar result in others. We build better and higher by virtue of seeing what succeeds in the art of others. There is a kind of cultural magic in the way deeply personal, idiosyncratic art can resonate with a wide audience.
An artificial neural network does none of this emotional work. If it churns out a piece that moves us, that is entirely accidental. Its ‘remixing’ is truly random: it feeds parasitically on works made with passion and intent and creates bloodless imitations of them. It would be a shame if we were so dazzled by this stunt that we forgot to marvel at our own minds.
When I finally decided to try ChatGPT for myself, I opened, a little cynically, by asking, ‘What is it like to live on Anarres?’ The communitarian, anarchic society of Ursula K Le Guin’s 1974 novel The Dispossessed couldn’t be further from the confluence of late-capitalist excess and rule-governed training that has given rise to sophisticated models like ChatGPT. The model acquitted itself well, giving a detailed and accurate description of the fictional planet. Further exchanges established that ChatGPT was excellent at picking up on things I had mentioned in previous queries. This was an impressive technical feat, given the memory difficulties faced by other large language models.
But what about poetry? So many were waxing lyrical about ChatGPT’s ability to write poems that I had to try. I gave it the first verse of Sylvia Plath’s ‘Morning Song’ and asked it to write four lines in her style. It generated this:
The mirror reflects my weary face,
A mask of sorrow I cannot replace.
My heart a cage, my thoughts a storm,
Trapped in my mind, my fate is worn.
It went on to tell me, rather earnestly, that Plath’s poems often touch on themes of melancholy and mental health, and make use of metaphor to do so. There was something touching and perhaps a little pitiable in its eagerness to please.
I wonder whether people tend to be more polite in their interactions with ChatGPT or treat it with contempt. (Roko’s Basilisk has some people wanting to hedge their bets.) For my part, I felt compelled to write ‘thank you for your time’ before I logged off for the night. I was a little reassured to see that the poetry it produced was terrible. (Nothing like those indelible first words of ‘Morning Song’: Love set you going like a fat gold watch. / The midwife slapped your footsoles, and your bald cry / Took its place among the elements.)
But perhaps I hadn’t given it enough of a chance. The overall trend, despite my small experiment, is clear: AI-produced works are getting better, fast. A 2021 study showed that people had only 50% accuracy in distinguishing AI-generated poems from human ones. More than that: asked which poem they preferred, regardless of who (or what) wrote it, only 65% of participants liked the work of professional human poets better. There will soon come a point where AI works are, at least superficially, indistinguishable from those made by humans.
It may not be immediately obvious why it matters that AI works can pass for human-authored ones. After all, meaning can still exist for a work where an artist is unknown or anonymous. But even an anonymous author is assumed to be part of a shared human experience.
Art opens vital channels to empathy, allowing us to feel what we have not lived. It would create a deep fissure in our common humanity, and room for pervasive cynicism to seep in, if we allowed that channel to be opened by a network that has no capacity to feel. And as for meaning, any critical discussion of AI-generated works is necessarily shallow. AI can act as a mirror, reflecting existing patterns or biases in the data back to us.
But if we want to probe the meaning of the work, we’ll find ourselves speaking to a wall. A 2021 paper jointly authored by computational linguistics expert Emily Bender and Timnit Gebru, a computer scientist and leading AI ethics researcher, warns of the trap of being taken in by the seemingly meaningful text generated by language models. The authors point out that ‘coherence is in the eye of the beholder’. When we communicate with other humans, we work to make sense of what they are saying on the assumption that the speaker is conveying meaning and intent. When we encounter a sophisticated language model, like GPT-3, we do the same: filling in the gaps to establish meaning and building a partial model of the ‘person’ behind the words, even when no such person exists. The trouble is that text generated by language models does not have communicative intent. GPT-3 isn’t writing poetry; it is generating phrases that align sufficiently with the pattern it has identified for ‘poetry’.
Artificial neural networks, capable of churning out a constant stream of works, will produce large amounts of ‘low-creativity’ material (a distinction made by law professor Daniel J Gervais in his paper ‘The Machine as Author’). In the visual arts world, several online art platforms have already banned AI-generated images outright because they are too easy to produce, crowding out human-made artworks by sheer force of volume. Writers may react similarly when AI-written scripts and novels start to compete for attention on streaming platforms and bookshelves. After all, it’s not hard to see how entertainment corporations will take advantage of algorithmic ‘content producers’ that never get writer’s block or miss a deadline.
It is tempting to be nihilistic. To shrug and accept that in a few years’ time, humans could stop creating, drugged by an endless stream of irresistibly tailored ‘content’. What a sad way to trail off the long story of human history.
But art is more resilient than that.
The end of culture has been predicted forever, and yet it keeps surviving and reinventing itself. People are still making art, lots of it, and just as there are new challenges, there are new opportunities. AI-generated works are amplifying the underlying problems of a consumerist culture, highlighting the fact that our incentives are skewed towards producing large volumes of work, irrespective of quality.
Faced with the loss of social and cultural meaning, people may discover a new-found appreciation for the imagination and passion that drives ‘high-creativity’ work. If anything, we should see these developments as a call to action. Read a poem aloud and find the rhythm in it. Lose yourself in a book. Find a painting you love and inspect the brushstrokes. And if any of this sparks inspiration, claim your birthright as a living, feeling human being: pick up your pen or your brush or your laptop, and make some art before the machines beat us to it.