What AI Can’t Write

If you’re a creative writer who’s been following recent developments in AI and related technologies while chewing your fingernails down to bloody nubs, I have great news: the robots are not after your job.

At least, not exactly. They can write content, sometimes riddled with factual inaccuracies (two recent ChatGPT sessions about a writing colleague of mine gave him two separate birth years and credited him with two novels he’d never written). They can write pretty credible passages in a given style if fed enough information, and they can write in a way that sounds like someone (you) gave them a template to follow. AI can also write content that might work to promote a B2B business, and articles about as accurate as a Wikipedia page written before 2021.

AI can also write a blog post. Or can it? 

[Image: a middle-aged femme-presenting person wearing a cardboard-and-foil robot costume.]

I asked ChatGPT to write me a 1,000-word blog post about whether or not AI could write creatively. After it prattled on for a while about what “creativity” is and whether AI could theoretically be creative, out popped this paragraph:

“The truth lies somewhere in the middle. While AI can produce works that are original and impressive, it is unlikely that it can truly replicate the depth and complexity of human creativity. This is because creativity is not just about producing something new, but about the underlying thought processes and emotional experiences that lead to the creation of something truly original.”

(Which actually just means that this was the general sentiment of the text ChatGPT’s language model was trained on.)

All art, as hippy-dippy as it feels to me to write it, is at its core about feeling and connection. You can’t make your three-act play cause someone to feel catharsis if you’ve never felt it yourself. You can’t write a novel about heartbreak if you haven’t felt, or at the very least seen others feel, similarly distraught. And you can’t make someone connect to your painting of your pet goat, Sigmund, if you don’t love Sigmund enough to paint his portrait in the first place. Feeling and connection are to creativity what umami is to the palate: we can taste salt and sugar without it, but add it in and we experience a richness we couldn’t have gotten any other way.

AI can also be seriously wrong (as in, errors of fact), like Google Bard erroneously claiming in a public demo that the James Webb Space Telescope “took the very first pictures of a planet outside of our own solar system.” That gaffe (the first image of an exoplanet was actually taken by the European Southern Observatory’s Very Large Telescope) cost Google roughly $100 billion in market value as its stock plummeted.

AI, clearly, can make mistakes. Big ones. But what it can’t do is defend them, wallow in them, make them into song lyrics, get into a fight because of them, or use them as a reason to drown its sorrows at the neighbourhood pub. Chatbots and other forms of AI are dependent on the data their human creators have fed them, and humans don’t yet fully understand the messy idiosyncrasies of being human themselves (think of how much we still don’t know about how human brains process memory, for example). So how could a human train a chatbot on a set of data that doesn’t (as yet) exist? We cannot accurately map the human feelings of falling in love, stubbing one’s toe, almost drowning, or being provoked to seething rage. As of this writing (March 2023), we therefore can’t train an AI to map them into its neural network either, and it can’t show signs of fully understanding these experiences.

You can also tell ChatGPT what style to write in, but it does poorly at coming up with metaphors of its own. ChatGPT and similar systems are trained on large samples of existing language, so their metaphor output is rife with clichés. I asked ChatGPT to write seven original metaphors for falling asleep, and here’s what it came up with:

  1. Drifting off to dreamland like a boat on a calm sea.
  2. Falling asleep is like diving into a soft, fluffy pillow.
  3. The sensation of drifting off to sleep is like being wrapped in a warm, cozy blanket.
  4. Falling asleep is like a gentle, slow-motion freefall into a peaceful slumber.
  5. Sleep is like a soft, warm cloud that gently envelopes and lulls you to rest.
  6. Slipping into slumber is like slowly sinking into a feather bed.
  7. Falling asleep is like a slow, steady fade into a serene and restful state of mind.

These examples are neutered of all personality, to be sure, but each one also feels deeply unoriginal, even if we’ve never seen this exact combination of words before. You can’t (as of yet) train a bot to generate fresh and exciting language, so you can’t give its output the same level of personality as something created by a human.
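(If you’d like to rerun the experiment yourself, here’s a minimal sketch using the OpenAI Python package as it existed in March 2023. The model name and temperature value are illustrative assumptions on my part, not the exact settings behind the ChatGPT interface.)

```python
# A minimal sketch of the metaphor experiment, assuming the pre-1.0
# "openai" Python package (current as of March 2023) and an API key
# stored in the OPENAI_API_KEY environment variable.
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind ChatGPT at the time
    messages=[
        {
            "role": "user",
            "content": "Write 7 original metaphors for falling asleep.",
        }
    ],
    temperature=1.0,  # higher values nudge the model toward more varied output
)

# Print the model's numbered list of metaphors.
print(response["choices"][0]["message"]["content"])
```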

Which brings me to the other day, when I was watching an episode of the TV show Arthur with my two-year-old niece: the one where Mr. Ratburn, the third-grade teacher, is getting married and the class hatches a plot to make him fall in love with the local librarian. They go so far as to write the (female) librarian a love letter “from” Mr. Ratburn. Aside from the obvious flaw of Mr. Ratburn being in love with a man, the plan falls apart right here. You can tell a group of eight-year-old anthropomorphic animals exactly what Mr. Ratburn looks and sounds like, but when all is said and done they still have all the limitations of third graders. When they try to write a love letter to a librarian, it comes across as disingenuous, particularly since they’ve misspelled the word “library” multiple times.

AI has all the limitations of AI. Poets, novelists, screenwriters, and essayists (and especially that original: you!) just don’t.
