AI, Creative writing, Writing tools

Six reasons why writers should never say never to AI

It’s become a fashionable rallying cry for writers to say “I will never use AI”. While there are many good reasons to worry about artificial intelligence, I think writers should never say never to AI.

As a science fiction writer who finds non-human intelligence fascinating, I’m intrigued by the hyperbole and achievements of AI research in the 2020s. For decades, AI seemed destined to loop endlessly around the peak and trough of the Gartner Hype Cycle until generative AI kicked it towards the slope of enlightenment.

Now it’s worth noting that Gartner still placed generative AI at the Peak of Inflated Expectations in its mid-2023 report. There’s no denying that it’s already a productive technology, though, and I’d say it’s likely to skip over the Trough of Disillusionment that snares so many innovations. As analysts, Gartner are right to be cautious, but I’ve rarely seen a technology at the same stage that can deliver so much while having so many flaws.

A graph showing the Gartner Hype Cycle for Artificial Intelligence, 2023
The Gartner Hype Cycle for AI in 2023 suggests that generative AI still has a long way to go before it’s more than a novelty.

1. AI holds up a mirror to human intelligence

Generative AI will certainly hit some roadblocks over the next few years, many of them important legal and regulatory hurdles as much as tech challenges. No-one knows how the courts and governments will rule on the use of copyrighted material to train AIs, let alone how people can use them.

And some of the flaws in generative AI have surprised even the science fiction community, which has grown used to predicting everything years in advance.

AIs that hallucinate convincing fictions, even when they aren’t asked for them, are something I’d never seen in fiction before they appeared in reality. Yet court cases have already been lost because a lawyer used AI for their research and it invented cases to suit the question.

Generative AI illustrates a fundamental difference between knowing something and being able to talk about it with authority, known as the Johnson Paradox.

Isaac Asimov, Arthur C. Clarke and many other writers worked hard to understand how their AIs might deceive us. Their assumption was that AIs like 2001’s HAL and Asimov’s robots would be based on vast stores of knowledge.

Instead, generative AIs like ChatGPT use large language models that predict the most likely response to a question. They draw on billions of answers to similar questions, but they never check their answers. It illustrates a fundamental difference between knowing something and being able to talk about it with authority, known as the Johnson Paradox (or the Trump Paradox for those in the USA).
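To see what “predicting the most likely response” means in miniature, here’s a toy sketch in Python. It builds a bigram model from a ten-word corpus (my invention, purely for illustration) and always picks the commonest continuation — real LLMs do the same job at an incomparably larger scale, and like this toy, nothing in the process ever checks whether the answer is true.

```python
# Toy bigram "language model": record which word follows which,
# then always emit the most common continuation. Plausible, not verified.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def next_word(word):
    # Pick the statistically likeliest follower -- there is no
    # fact-checking step, only frequency.
    options = follows.get(word, [])
    return max(set(options), key=options.count) if options else None

print(next_word("the"))  # "cat" -- the likeliest continuation, true or not
```

The model confidently answers “cat” because that’s what usually came next, which is exactly the gap between sounding authoritative and knowing something.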

There’s emerging research on how generative AIs derive facts from their training data. Not only could this help to make these systems more reliable, it might also help us to understand how human minds extract knowledge from our social interactions.

Clearly, AI has a thing or two to teach us, but that’s another post.

Most writers who say they will never use AI are talking about using it for creating entire works from a simple prompt. AI content has already overwhelmed some publishers, and there’s a real risk of AI-generated spam overwhelming the product of human creative craft.

Even so, that’s only one end of the spectrum of uses for AI as a writer. There are other good reasons why a writer should never say never to using AI.

2. It’s hard to say never to AI when it’s already ubiquitous

Despite its fledgling status, AI is already everywhere in technology whether you want it or not. Most writers use some form of spell and grammar check, often with cloud-based tools like Grammarly and Pro Writing Aid. These now offer advanced AI-based writing assistance, but they also have AI hard-wired into the basics.

So if you say “I’ll never use AI”, how sure can you be that it won’t be there, regardless? I write in Scrivener, which is refreshingly old school and offline, but uses the built-in Apple spell-check. How long until this is AI-assisted?

For writers using other platforms, as a rule of thumb I would say that the cloudier a platform gets, the more likely it is to have AI somewhere in the gubbins.

An example of bad writing advice given by the app Pro Writing Aid
Like any AI, Pro Writing Aid can offer bad advice to unwary users.

Dropbox has now stuffed AI into its storage system, to help you summarise large archives of documents (unless you live in Canada, the EEA or the UK). Can you trust it and who’s responsible for the answers? No-one knows unless you read a hundred pages of small print. I can see the value of this tool — which is currently optional — for a lot of people, but I worry that the optional could become the default.

Even if you write by quill on vellum, there will come a day when you hand your creation to someone else for editing, proofing and typesetting. As time goes on, it’s almost certain that someone in that chain will use AI tools, even if they don’t think of them that way. You may never know.

3. You’re missing out on very useful AI writing tools

And those tools might be very useful. Self-published writers face a constant challenge of choosing what to pay for and what to do yourself. It’s the non-writing tasks that we usually pay for, and this is where AI might be too useful to discount. Even if you want to support human editors and artists, there are always more jobs than money.

Book layout, back cover blurb, cover design, marketing graphics and audio books are all essential parts of the publishing pipeline. I draw the line at AI cover design because I want a human artist to create my cover. However, while audio books are becoming essential, they’re also a luxury to create. A human voice actor is certainly better, but if an AI is good enough, that will have to do. As for interior layouts and back cover blurb, I’ll welcome AI assistance.

Another tedious writerly task that AI can make easier is churning out social media. Some people love creating all those chirpy posts; others find it a chore. I’m somewhere in between, so I’ve experimented with AI posts. The tone of the content felt so alien to me that it made my skin crawl.

Editing comes in so many varieties that AI can’t do them all, even if we wanted to hand it over. When AI can do development editing and beta reading, that might be the time to let it do the writing too.

The more technical forms of editing are already in its remit. The advanced AI tools in Grammarly and Pro Writing Aid include line-editing, copy-editing and proof-reading. Anything that requires a fresh pair of eyes and a meticulous process is ideal for an LLM-based AI. Just as with a human editor, you choose what to do with their recommendations.

4. It’s reassuring to discover the limits of AI

Sense-checking and even sensitivity editing feel like essentially human skills, but AI has its uses here too. Tools like Yoast use it to look for inclusive language, though they can’t yet understand what you’re doing at the scale of plot and character.

In my experiments with AI, I’ve noticed that it can reveal when something won’t make sense to another reader. That’s because if you feed it a passage that’s confused, the AI will deliver a different story to the one you started with. The old computing adage of “garbage in, garbage out” holds true; you just have to admit that it’s your own garbage.

A screengrab of Google Gemini being prompted to create a synopsis of the horror novel Blood River, by Alexander Lane
I asked Google Gemini to create a synopsis for my novel, Blood River. It read a few chapters before it gave up. YMMV.

Some jobs that writers hate are still beyond the power of AI, such as the dreaded synopsis. Most literary agents ask for these summaries, which condense your 90,000 words into a single page.

The synopsis looks like an ideal task for an LLM, but long documents require a lot of processing. Even the paid tiers of some AIs will baulk at summarising an entire novel. I’m also wary of uploading my work to most AIs, because many ingest uploads into their training data.
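A quick back-of-envelope calculation shows why a whole novel is heavy going for an LLM. The figure of roughly 1.3 tokens per English word is a common rule of thumb, not an exact ratio for any particular model, and the context-window sizes below are illustrative examples rather than any vendor’s current limits.

```python
# Rough estimate: why full-length novels strain LLM context windows.
# ~1.3 tokens per English word is a rule of thumb, not a model spec.
TOKENS_PER_WORD = 1.3

def estimated_tokens(word_count):
    return int(word_count * TOKENS_PER_WORD)

novel = estimated_tokens(90_000)  # a typical full-length novel
print(novel)                      # ~117,000 tokens

# A summariser with a smaller window may silently read only the
# opening chapters and invent the rest.
for window in (8_192, 32_768, 128_000):
    print(window, "fits" if novel <= window else "does not fit")
```

At around 117,000 tokens, a 90,000-word manuscript overflows all but the largest context windows, which is consistent with a summariser that only appears to have read the first few chapters.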

Fortunately, Google Gemini will process documents from your Google Workspace without ingesting them. I asked it to create synopses for several manuscripts, from 10,000 to 100,000 words, including my novel Blood River.

Unfortunately, it was pretty obvious that Gemini only looked at the first few thousand words. In some cases it appeared to lie about the story being unfinished. (I’ll do a full post about this soon.)

Now, despite my earlier warnings about hallucinations, AI is also very good at research. An LLM can return a digest of information from its training data that would take a long time to compile by reading every source. That warning is still crucial, though: AIs have no fact-checking skills. Never trust their reports without checking the facts yourself.

5. Denying AI could make it harder to win the important battles

The use of generative AI has rapidly become another balkanised culture war between those who are for it and those who are against. There’s no room for nuance, and the debate becomes an ideological deadlock.

The AI developers cast themselves as heroes of technological progress and artists as Luddites. The artists play the role of heroic champions for creative graft and caricature the developers as high-tech pirates. Meanwhile, imitative grifters make real money from artists’ creative graft, lawyers collect fees and the artists don’t see a penny.

Some battles are more important than others, and we should concentrate on trying to win these.

The number one priority should be protecting copyright and rewarding creators whose work has been used to train AIs. The next priority is to prevent generative AIs from plagiarising the style of creative works or passing themselves off as an artist.

In both cases, the best hope for creatives is not to refuse AI training sets access to their work or to forbid AIs entirely from copying individual styles. Instead, we should encourage licensing schemes like those used by the music industry, which compensate artists whose work is sampled in commercial music. Coupled with mandatory watermarking for AI-generated content, this might help to level the playing field.

It’s a far from perfect solution, but it might be achievable.

6. And to understand what we do if creators lose the fight for our rights

After all, there’s no guarantee that new laws will favour art over industry. Even if the proposals above came through, writers who refuse to acknowledge AI would still be a bunch of King Canutes.

I’m not just making a bad pun; the current state of content piracy shows us what will happen. Naive users don’t understand that it’s morally wrong and malicious users don’t care. Rogue platforms in shady jurisdictions operate with impunity to serve both groups.

A lot of consumers won’t care as long as the content is cheap and plentiful. Artists are a small community and the most visible figures rarely look like they’re struggling to make ends meet.

Faced with this situation, many writers will adopt AI as a tool, either in private or in public. Very likely, it’s already happening. Amazon only operates an honour system, because AI content is too hard to detect.

There may be some cachet in labelling your content as “100% human”, but I doubt it will be enough to sustain all but a few people at the top of the chain. And who knows, maybe producing full-length novels and other works with AI will prove more expensive than just letting writers get on with it. Those of us who enjoy creating might become more like editors, collaborating with AI tools to create better stories and do it faster, but everyone will find their level.

Come on, tell me I’m wrong, or at least ask an AI to do it for you.
