
Six reasons why writers should never say never to AI

It’s become a fashionable rallying cry for writers to say “I will never use AI”. While there are many good reasons to worry about artificial intelligence, I think writers should never say never to AI.

As a science fiction writer who finds non-human intelligence fascinating, I’m intrigued by the hyperbole and achievements of AI research in the 2020s. For decades, AI seemed destined to loop endlessly around the peak and trough of the Gartner Hype Cycle until generative AI kicked it towards the slope of enlightenment.

Now it’s worth noting that Gartner still placed generative AI at the Peak of Inflated Expectations in its mid-2023 report. There’s no denying that it’s already a productive technology, though, and I’d say it’s likely to skip over the trough of disillusionment that snares so many innovations. As analysts, Gartner are right to be cautious, but I’ve rarely seen a technology at the same stage that can deliver so much while having so many flaws.

A graph showing the Gartner Hype Cycle for Artificial Intelligence, 2023
The Gartner Hype Cycle for AI in 2023 suggests that generative AI still has a long way to go before it’s more than a novelty.

1. AI holds up a mirror to human intelligence

Generative AI will certainly hit some roadblocks over the next few years, many of them important legal and regulatory hurdles as much as tech challenges. No-one knows how the courts and governments will rule on the use of copyrighted material to train AIs, let alone how people can use them.

And some of the flaws in generative AI have surprised even the science fiction community, which has got used to predicting everything years in advance.

AIs that hallucinate, making up convincing fictions even when they aren’t asked for them, are something I’d never seen in fiction before they appeared in reality. Yet court cases have already been lost because a lawyer chose to use AI for their research and it made up cases to suit the question.

Generative AI illustrates a fundamental difference between knowing something and being able to talk about it with authority, known as the Johnson Paradox.

Isaac Asimov, Arthur C. Clarke and many other writers worked hard to understand how their AIs might deceive us. Their assumption was that AIs like 2001’s HAL and Asimov’s robots would be built on vast stores of knowledge.

Instead, generative AIs like ChatGPT use large language models that predict the most likely response to a question. They draw on billions of answers to similar questions, but they never check their answers. It illustrates a fundamental difference between knowing something and being able to talk about it with authority, known as the Johnson Paradox (or the Trump Paradox for those in the USA).

There’s emerging research on how generative AIs derive facts from their training data. Not only could this help to make these systems more reliable, it might also help us to understand how human minds extract knowledge from our social interactions.

Clearly, AI has a thing or two to teach us, but that’s another post.

Most writers who say they will never use AI are talking about using it to create entire works from a simple prompt. AI content has already overwhelmed some publishers, and there’s a real risk of AI-generated spam drowning out the products of human creative craft.

Even so, that’s only one end of the spectrum of uses for AI as a writer. There are other good reasons why a writer should never say never to using AI.

2. It’s hard to say never to AI when it’s already ubiquitous

Despite its fledgling status, AI is already everywhere in technology whether you want it or not. Most writers use some form of spell and grammar check, often with cloud-based tools like Grammarly and Pro Writing Aid. These now offer advanced AI-based writing assistance, but they also have AI hard-wired into the basics.

So if you say “I’ll never use AI”, how sure can you be that it won’t be there, regardless? I write in Scrivener, which is refreshingly old school and offline, but uses the built-in Apple spell-check. How long until this is AI-assisted?

For writers using other platforms, as a rule of thumb I’d say that the more cloudy a platform gets, the more likely it is to have AI somewhere in the gubbins.

An example of bad writing advice given by the app Pro Writing Aid
Like any AI, Pro Writing Aid can offer bad advice to unwary users.

Dropbox has now stuffed AI into its storage system, to help you summarise large archives of documents (unless you live in Canada, the EEA or the UK). Can you trust it, and who’s responsible for the answers? No-one knows unless you read a hundred pages of small print. I can see the value of this tool — which is currently optional — for a lot of people, but I worry that the optional could become the default.

Even if you write by quill on vellum, there will come a day when you hand your creation to someone else for editing, proofing and typesetting. As time goes on, it’s almost certain that someone in that chain will use AI tools, even if they don’t think of them that way. You may never know.

3. You’re missing out on very useful AI writing tools

And those tools might be very useful. As self-published writers, we face a constant challenge of choosing what to pay for and what to do ourselves. It’s the non-writing tasks that we usually pay for, and this is where AI might be too useful to discount. Even if you want to support human editors and artists, there are always more jobs than money.

Book layout, back cover blurb, cover design, marketing graphics and audio books are all essential parts of the publishing pipeline. I draw the line at AI cover design because I want a human artist to create my cover. However, while audio books are becoming essential, they’re also a luxury to create. A human voice actor is certainly better, but if an AI is good enough, that will have to do. As for interior layouts and back cover blurb, I’ll welcome AI assistance.

Another tedious writerly task that AI can make easier is churning out social media. Some people love creating all those chirpy posts, others find it a chore. I’m somewhere in between so I’ve experimented with AI posts. The tone of the content felt so alien to me that it made my skin crawl.

Editing comes in so many varieties that AI can’t do them all, even if we wanted to hand it over. When AI can do development editing and beta reading, that might be the time to let it do the writing too.

The more technical forms of editing are already in its remit. The advanced AI tools in Grammarly and PWA include line-editing, copy-editing and proof-reading. Anything that requires a fresh pair of eyes and a meticulous process is ideal for an AI built on LLMs. Just as with a human editor, you choose what to do with their recommendations.

4. It’s reassuring to discover the limits of AI

Sense-checking and even sensitivity editing feel like essentially human skills, but AI has its uses here too. Tools like Yoast use it to look for inclusive language, though they can’t yet understand what you’re doing at the scale of plot and character.

In my experiments with AI, I’ve noticed that it can reveal when something won’t make sense to another reader. That’s because if you feed it a passage that’s confused, the AI will deliver a different story to the one you started with. The old computing adage of “garbage in equals garbage out” holds true; you just have to admit that it’s your own garbage.

A screengrab of Google Gemini being prompted to create a synopsis of the horror novel Blood River, by Alexander Lane
I asked Google Gemini to create a synopsis for my novel, Blood River. It read a few chapters before it gave up. YMMV.

Some jobs that writers hate are still beyond the power of AI, such as the dreaded synopsis. Most literary agents ask for these summaries, which condense your 90,000 words into a single page.

The synopsis looks like an ideal task for an LLM, but long documents require a lot of processing. Even the paid tiers of some AIs will baulk at summarising an entire novel. I’m also wary about uploading my work to most AIs because they ingest it into their training data sets.
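To put some rough numbers on why novels are a problem, here’s a quick back-of-the-envelope sketch in Python. The tokens-per-word ratio and the context-window sizes are illustrative assumptions of mine, not any vendor’s published figures, but they show how easily a full manuscript can blow past a model’s input limit.

```python
# Back-of-the-envelope only: the tokens-per-word ratio and the context-window
# sizes below are illustrative assumptions, not any vendor's published figures.

TOKENS_PER_WORD = 1.3  # rough average for English prose

CONTEXT_WINDOWS = {
    "small model (~8k tokens)": 8_000,
    "mid-range model (~32k tokens)": 32_000,
    "long-context model (~128k tokens)": 128_000,
}

def check_fit(word_count: int) -> None:
    tokens = int(word_count * TOKENS_PER_WORD)
    print(f"A {word_count:,}-word manuscript is roughly {tokens:,} tokens:")
    for name, window in CONTEXT_WINDOWS.items():
        verdict = "fits" if tokens < window else "does NOT fit"
        print(f"  {verdict} in a {name}")

check_fit(90_000)  # the 'single page from 90,000 words' example above
```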

Fortunately, Google Gemini will process documents from your Google Workspace without ingesting them. I asked it to create synopses for several manuscripts, from 10,000 to 100,000 words, including my novel Blood River.

Unfortunately, it was pretty obvious that Gemini only looked at the first few thousand words. In some cases it appeared to lie about the story being unfinished. (I’ll do a full post about this soon.)

Now, despite my earlier warnings about hallucinations, AI is also very good at research. An LLM can return a digest of information from its training data that would take a long time to create by reading every source. That warning is still crucial: AIs have no fact-checking skills. Never trust their reports without checking the facts yourself.

5. Denying AI could make it harder to win the important battles

The use of generative AI has rapidly become another balkanised culture war between those who are for it and those who are against. There’s no room for nuance, and the debate becomes an ideological deadlock.

The AI developers cast themselves as heroes of technological progress and artists as Luddites. The artists play the role of heroic champions for creative graft and caricature the developers as high-tech pirates. Meanwhile, imitative grifters make real money from artists’ creative graft, lawyers collect fees and the artists don’t see a penny.

Some battles are more important than others, and we should concentrate on trying to win these.

The number one priority should be protecting copyright and rewarding creators whose work has been used to train AIs. The next priority is to prevent generative AIs from plagiarising the style of creative works or passing themselves off as an artist.

In both cases, the best hope for creatives is not refusing to let their work into AI training sets, or trying to forbid AIs completely from copying individual styles. Instead, we should encourage licensing schemes like those used by the music industry, which compensate artists who are sampled in commercial music. Coupled with mandatory watermarking for AI-generated content, it might help to level the playing field.

It’s a far from perfect solution, but it might be achievable.

6. And to understand what we do if creators lose the fight for our rights

After all, there’s no guarantee that new laws will favour art over industry. Even if the proposals above came through, writers who refuse to acknowledge AI would still be a bunch of King Canutes.

I’m not just making a bad pun; the current state of content piracy shows us what will happen. Naive users don’t understand that it’s morally wrong and malicious users don’t care. Rogue platforms in shady jurisdictions operate with impunity to serve both groups.

A lot of consumers won’t care as long as the content is cheap and plentiful. Artists are a small community and the most visible figures rarely look like they’re struggling to make ends meet.

Faced with this situation, many writers will adopt AI as a tool, either in private or in public. Very likely, it’s already happening. Amazon only operates an honour system because AI content is too hard to detect.

There may be some cachet in labelling your content as “100% human”, but I doubt it will be enough to sustain more than a few people at the top of the chain. And who knows, maybe producing full-length novels and other works with AI will prove more expensive than just letting writers get on with it. Those of us who enjoy creating might become more like editors, collaborating with AI tools to create better stories and do it faster, but everyone will find their level.

Come on, tell me I’m wrong, or at least ask an AI to do it for you.

3 replies on “Six reasons why writers should never say never to AI”


If you’re going to use AI for writing or research, you need to know two things. The first is that AI doesn’t know anything.

There are other important things to know about AI. It’s trained on content taken without permission of the creators. It contributes to rising greenhouse emissions and power consumption from technology companies.

But those are ethical concerns that writers who choose to use AI will have to put aside for their consciences to deal with. AI’s lack of knowledge is a practical issue that could put you at risk if you publish work that has been created using AI. I’m not talking about moral risks here, but tangible legal and financial dangers.

If you know anything about AI, you’ll roll your eyes and wonder why these things have to be pointed out in 2024. The answer is that AI companies don’t like talking about them, and people who like to take shortcuts don’t look for reasons to avoid them.

AI companies are a lot like populist politicians. They want to sound authoritative, even when they’re spouting complete bullshit. When OpenAI launched ChatGPT-4 Omni, it boasted about its speed and its charming personality, not about its reliability. And then it had to withdraw its flagship voice model because it had imitated Scarlett Johansson without her consent.

This two-line story tells us everything we need to know about the ethics of tech entrepreneurs.

You cannot say that you weren’t warned.

AI doesn’t know anything. It’s a language machine

ChatGPT, Google Gemini, Claude and their ilk are all built on a type of computing called the Large Language Model. These LLMs analyse written statements and create patterns of similarity which tell them how to mimic human language [1].

In the simplest terms: word A goes next to word B more often than it goes next to word C.

They do it on a vast scale, creating hugely complex patterns from letters, syllables, words, sentences, essays, short stories, news reports, plays, books, speeches, web pages, online forums and so on.
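If you want to see the word-association idea at its absolute smallest, here’s a toy sketch in Python. It bears no resemblance to the scale, sub-word tokens or neural networks of a real LLM, but the “which word usually comes next” principle is the same:

```python
from collections import Counter, defaultdict

# A toy version of "word A goes next to word B more often than word C".
# Real LLMs work on sub-word tokens with billions of parameters; this is
# just the word-association idea at the smallest possible scale.

corpus = (
    "the cat sat on the mat . the cat sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows every other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word: str) -> str:
    """Return the word that most often follows `word` in this tiny corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # 'cat' - it follows 'the' most often here
print(most_likely_next("sat"))  # 'on'
```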

I don’t know what intelligence is, artificial or organic. Many people argue that language is an essential part of it, but most people would also agree that intelligence also needs a way to store, sort and retrieve information. Knowledge, for want of a better word.

Large language models cannot do this, which will come as a surprise to a lot of people who use AI tools. There are no facts inside ChatGPT, Claude or Gemini, no encyclopedia for them to check, only associations.

LLMs can build associations with very high probability, so if you ask ChatGPT who wrote 1984, it will tell you “George Orwell” with high confidence, because the correlation between 1984 and George Orwell is very high. If you ask who wrote Hamlet, it will tell you William Shakespeare, but it should tell you that there is some dispute about the authorship of Shakespeare’s plays [2].

But it won’t, unless you ask a very specific question, because companies like OpenAI build their tools to look like they know things. What ChatGPT and the others don’t do, what they cannot do, is read Wikipedia or another source (preferably lots of them) to verify the correlation.

It’s already been fed Wikipedia and broken it down into probabilities, but it’s also been fed a lot of other information. It might have analysed a short story that says Shakespeare was a time-travelling alien. Unless that information is tagged as a creative work, ChatGPT cannot tell fact from fiction [3].

ChatGPT’s response here indicates many of the sources on which it has been trained.

AIs don’t know why they lie. Or when they lie.

It gets worse, again because companies like OpenAI build their tools to look like they know things. When their statistical models don’t provide an answer, they use them to create answers that look and sound real.

This happens so often that it’s become known as a “hallucination” [4].

You can have fun with them, as the author Charles Stross did when he asked Google Bard (now Gemini) for five facts about Charles Stross [5]. Then five more, and so on. The early statements were accurate, then it began to create spurious associations that might make sense for a left-leaning science fiction author. With each round, the accuracy declined and the hallucinations took precedence [6].

Or you can be fined by a judge, as some lawyers were in New York last year. They asked ChatGPT to help them research a case and it helpfully provided summaries and links to judgements in other, similar cases. There was just one problem: the cases didn’t exist. The links went nowhere, but the lawyers didn’t check. The judge was not impressed and they not only lost the case, but it was America, so the client probably sued as well.

I’ve had numerous occasions when Google Gemini has lied about the reasons it failed to complete a task. Why? AI companies build their tools to look like they know things.

And it will kill people. Amazon is already facing a deluge of AI-generated books about subjects like mushroom foraging which contain inaccurate information about poisonous fungi.

If AI doesn’t know anything and it tells lies, what good is it?

AI can be a great way to start a research project, but it cannot be the whole project. Follow the links and check the facts you’re given. Above all, remember that the AI is nothing more than a clever parrot, repeating sentences that it doesn’t understand.

Remember also that these AIs are Large Language Models. They are very good at placing words next to each other in useful ways, particularly if you instruct them to mimic a particular style. Of course, there are legal and moral questions if you ask an AI to write something entirely on its own and intend to pass it off as your own work. There are similar questions if you ask it to imitate a particular writer, particularly one who’s working today.

But if you’ve written something and you want an AI to rewrite it in another style, then it becomes a tool to enhance your writing. You might want to sound more formal, archaic or to bring in the flavour of a dialect. AIs like ChatGPT and Gemini can do that, and they do it very well. I’ll look at this in a future post.

Bing Copilot with DALL-E 3 created this image of “an AI vomiting words”. The AI added an audience of cheering children.

What’s the second thing I need to know before using AI?

For the second thing you need to know if you’re using AI, come back next week.

Just for fun, I asked ChatGPT-4O to “write a 500-word blog post in plain English about the limitations and risks of using LLM AIs as a knowledge-based research tool. Include in layman’s terms an explanation of how LLMs work and the occurrence of AI hallucinations.”

What do you think?

AI images of HAL-9000 vomiting and a robot vomiting words onto cheering children created by Bing Copilot using DALL-E 3. What a future we live in.

1. This might be how humans learn language, but that’s another discussion.
2. I don’t have any skin in this argument. It’s just the first example that came to mind.
3. The answer also reveals some of the works which have been absorbed by ChatGPT.
4. Hallucinations might be the first sign of machine creativity, but that’s very controversial.
5. I can’t do this for myself because I share my name with a very talented American tech innovator. He’s justifiably more famous than I am.
6. I have now created a link which adds statistical veracity to those claims.


Most writers hate writing a synopsis, so it seems like the perfect challenge for an AI. What I discovered reveals a lot about the current state and limitations of generative AIs like ChatGPT and Google Gemini.

The current wave of interest in AI is fascinating because it inspires so much hype on both sides. I don’t buy into the utopian fantasies of grifter techbros and Dollar-blinded CEOs. Nor am I convinced that generative AI will be the death of creative arts as a career choice.

An AI toolbox of delights

I have no interest in asking AI to write entire works from a prompt, even if it can. At the same time, I see the potential for generative AI to be used as a creative tool. It could help writers overcome our individual shortcomings or streamline time-consuming tasks that are adjacent to our central goal of creative writing.

Unfortunately, this is complicated by the highly unethical way in which this technology has been developed. So far, it’s involved content piracy on a massive scale. I only hope that new laws and regulations will correct this (though not self-regulation, which is mostly pointless).

But I don’t want writing to become a hobby. I seriously worry about the cognitive and empathic dissonance displayed by AI champions who say things like: “Worried about your career being taken by AI? Don’t worry, you can have a new job for lower pay, labelling data for AI so that it can do your job better.” It’s a bit like saying: “Hey, you know that insatiable monster we just made that ate your life? How about we pay you less to feed the monster instead?”

If only the marketing guff was this honest about the current capability of AIs.

Writing a synopsis with Google Gemini

When you submit your novel to an agent or publisher, they’ll want to read a synopsis before the full manuscript. To produce this, you have to shrink your finely-crafted 100,000 words into a summary of less than five hundred.

It’s a harrowing and time-consuming act of narrative compression. You’ll abandon beloved minor characters and subplots as you focus on the main characters, narrative, setting and themes. You battle to hold onto the unique flavour, tone and narrative voice that might win you a deal.

But hang on, did I say “compression”? Isn’t that something that computers are really good at? It seems like the perfect job for a large language model AI like ChatGPT or Google Gemini.

The document was just over 50,000 words, so Gemini’s only about 88% inaccurate.

ChatGPT vs Google Gemini: best choice for your synopsis

ChatGPT is the most well-known LLM on the generative AI scene, but it has a major flaw. Everything you give it is ingested into its body of training material (remember the thing full of pirated works?).

Google Gemini, in contrast, will examine documents taken from your Google Workspace without ingesting them. After all, I don’t want to give an AI more free training materials. Gemini also enables you to give feedback, and to iterate or query the response with additional prompts.

I ran these experiments on four of my own texts: Blood River, my first published horror novel; the first draft of Blood Point, my current horror WIP; the current draft of the first In Machina sci-fi novel; and the first draft of The Stuffing of Nightmares, a short story featuring the murderous plushie pals, Bongo & Sandy.

Blood River and Blood Point have similar narrative styles based on found footage diaries, and are just over 50,000 words long. In Machina #1 is around 101,000 words long but it employs both first and third-person narratives. Bongo and Sandy’s story is a simple third-person narrative of about 10,000 words.

I gave Google Gemini a simple request: “The document in my Google Workspace titled “X” is a novel. Please create a 250-word synopsis of this story.”
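If you’d rather script this than use the chat interface, here’s a minimal sketch of the same request made through Google’s google-generativeai Python SDK. The model name, file name and API-key handling are my assumptions, it pastes the manuscript text straight into the prompt instead of pointing at a Workspace document, and the API’s data-use terms differ from the consumer product, so treat it as a starting point rather than a recipe.

```python
# A minimal sketch using the google-generativeai Python SDK rather than the
# Gemini chat interface. The model name, file name and API-key handling are
# assumptions; check the API's data-use terms before uploading anything you
# care about.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")          # hypothetical key
model = genai.GenerativeModel("gemini-1.5-pro")  # assumed model name

# Paste the manuscript text straight into the prompt instead of pointing
# at a Google Workspace document.
manuscript = open("blood_river.txt", encoding="utf-8").read()

prompt = (
    "The following document is a novel. "
    "Please create a 250-word synopsis of this story.\n\n" + manuscript
)

response = model.generate_content(prompt)
print(response.text)
```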

In Machina #1 is over 100,000 words long, so Gemini was more than 99% incorrect here.

Google Gemini lies to cover its mistakes

In the tradition of narrative flashbacks, this is the point where I reveal that I started my experiment with a more complex prompt. I asked for a synopsis that identified the major characters, themes, settings, plot points and resolutions of each story.

In every case, Gemini did a good job of summarising the texts and breaking down the stories…all the way to around 4,500 words. Beyond there, it sort of made things up.

According to Gemini, Blood River and Blood Point are “unfinished”. Gemini even identified a point where it thinks the Blood River story ends, at just over 4,000 words. The summaries began to suggest how the stories might develop instead of how they actually unfold. In both cases, neither the supernatural elements nor the antagonists were accurately described.

The summary of In Machina #1 also petered out at about 5,000 words, with the protagonists never meeting each other or their nemesis. Gemini identified the genre, characters, settings and some of the themes, though it was unable to describe their development.

When I asked Gemini how much it had read, it told me that it had read all of Blood River’s 56,700 words. For Blood Point, it said: “I read the entirety of the document…which consists of 5,939 words.” That’s about 12% of the entire document. With In Machina #1, Gemini told me: “I read 100% of the document, or 1,013 words, to create the synopsis.” That’s about 1% of the entire novel.

I pressed Gemini to say why it fell short, and it initially claimed it had full access to the document. A short conversation later, it agreed that this wasn’t the problem and we could try again. This attempt repeatedly failed due to an unspecified problem with my internet connection.

My internet connection is fine.

Shucks, I’m just a l’il ol’ language model

The exception is Bongo and Sandy’s story. It’s a tale of plushie-on-plushie violence, shocking sour jelly sweet addiction and foul mouths on the lead characters that you wouldn’t believe. I thought that the shortest story would yield the most complete synopsis, but Gemini refused to complete the task.

Time and again I watched several paragraphs of synopsis appear, only to be replaced by variations on this message: “I can’t assist you with that, as I’m only a language model and don’t have the capacity to understand and respond.”

Just when I’d given up, Gemini delivered a truncated and inaccurate synopsis. It again petered out at about the 5,000-word mark, although Gemini claimed to have read the whole 10,000.

I began to think that Gemini didn’t even know its own capabilities.

What can we learn from Google Gemini’s synopsis fails?

Asking Gemini about its failures feels like talking to a politician: the question it answers is rarely the one that you asked.

My feeling is that it’s a matter of scale. It’s one thing for Gemini to summarise a few thousand words. It’s significantly harder to summarise ten thousand words, let alone 100,000. I don’t know whether the task scales in a linear fashion or if it becomes exponentially more difficult as the narrative gets longer. I know it’s a challenge for my human brain — and I wrote these stories.

The problem might lie in the availability of computing power for each user, especially when Google, OpenAI et al are currently in the “free sample” phase of getting users hooked on their products. Even if they have enough computing power, the electrical power being hoovered up by AI-hosting data centres has become a major issue worldwide. Maybe they just can’t afford to indulge my experiment.

You would be wrong to think that this was a breakthrough.

AI’s limits are a win for creatives

While it would be great to have an AI take on time-consuming tasks like writing a synopsis, its failure is a good sign for creative industries as a whole. Some jobs, it seems, are still too big for today’s inefficient AIs.

Sure, an AI could create a synopsis chapter-by-chapter and sew it together into something coherent. In fact, that’s exactly what OpenAI did in a 2021 experiment with GPT-3 using classic novels. At the time, the GPT-3 summaries received a rating of 4/7 or less from 85% of people who had read those books. That’s not a great score, and though it’s certainly improved, they admit that this is one of the hardest tasks for these types of AI.
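For the curious, the chapter-by-chapter approach looks roughly like this. It’s only a structural sketch of the idea: call_llm is a hypothetical stand-in (here it just truncates so the code runs without an API key), and the chunk sizes and thresholds are arbitrary choices of mine, not anything OpenAI published.

```python
# A structural sketch of chapter-by-chapter summarising. `call_llm` is a
# hypothetical stand-in for whatever model you use; here it just truncates
# so the recursion runs end-to-end without an API key.

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call - replace with your API of choice."""
    return prompt[:300]  # truncation is obviously not real summarisation

def chunk(text: str, max_words: int = 2000) -> list[str]:
    """Split the text into chapter-sized pieces of at most max_words words."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def summarise(text: str, target_words: int = 500) -> str:
    """Summarise each chunk, then summarise the stitched-together summaries."""
    if len(text.split()) <= 2000:  # small enough for a single pass
        return call_llm(f"Summarise this in {target_words} words:\n{text}")
    partials = [summarise(piece, target_words) for piece in chunk(text)]
    return summarise("\n".join(partials), target_words)
```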

At least one commercial outfit has now launched a tool dedicated to summarising long-form text. So far, the summaries look little better than those Gemini gave me.

Hopefully, this is good news for editors as well as for writers. AI-enhanced tools might be competent at spelling, grammar and improving short text. They can change your style but they can’t yet learn and follow your individual sense of language over the length of a novel.

As for book-length tasks, like development editing or writing a synopsis, those jobs are likely to remain safe for years to come.


I took an unexpected Spring break from blogging at the beginning of April, heading down to the Cork coast.

There’s no grand reason for this hiatus, just a combination of factors that took me away from the keyboard. The long Easter weekend coincided with a friend’s request for help with urgent deadlines on their own creative project.

When that had passed, I had the opportunity for a last-minute break and took off to the windy coast west of Cork for a few days of family time. We even saw the sun for long enough to enjoy a day on the beach, though our canine pal Layla was the only one brave enough to get her paws wet.

As I’m an often haphazard blogger, my last post of March snowballed into another look at AI and creative writing. This time, it’s a more hands-on look at using Google Gemini to produce the synopsis of a novel. Even that simple task is currently sprawling into something that feels like more than one post.

April’s now returned to Ireland’s native palette of grey and green. To keep us all going, I’ll leave you with Layla on a sunny beach.
