
You do not own your AI-generated writing

If you want to publish or submit AI-generated writing, you need to know this: you don’t own that work.

No one knows who owns that work.

You had an idea and created the prompt that led to the work, but in the world of intellectual property, ideas are worth nothing. Expression is everything.

That’s why there are lots of stories about young men picking up (laser) swords and going off to rescue (space) princesses from evil (cyborg) lords.

You cannot copyright an idea. You can only copyright the expression of an idea. When it comes to generative AI, the idea is your prompt but the expression comes from the AI. You didn’t make it, so you don’t own it.

I’m not being an arse. That’s just how it works.

A ChatGPT story about young men picking up laser swords and going off to rescue space princesses from evil cyborg lords.
George Lucas doesn’t own this story, but neither do I (although ChatGPT did call the villain Darthon, so…)

If I don’t own my AI-generated writing, can I sell it?

If someone will buy something, then you can sell it. Let the buyer beware, and all that.

The problem is that if you don’t own it, I can copy it and sell it as well. As can anyone else. I can even use your title, since book titles can’t be protected by copyright. You might be able to trademark the title, but that’s a different set of laws with its own complications.

But it looks like Amazon and other sales platforms are very cautiously moving against AI-generated content. Authors uploading to Kindle Direct Publishing (KDP) have to declare the use of AI-generated content. Even so, it’s an honour system that anyone can abuse.

Generative AI is a legal no man’s land

You might argue that your prompt was extremely detailed, more than enough to deserve the protection of copyright. You’ll have to make the argument in court. Possibly in lots of courts all over the world.

There is no established law about the ownership of AI-generated works. Law comes in two flavours: statute law passed by elected bodies and case law created by courts as they work out the nuance of the statutes.

Right now, no country has passed statute law dealing specifically with the ownership of AI-generated works. The courts will have to decide how existing copyright law applies.

Case law is only really useful when it becomes a precedent. That requires a case to wind its way up through the appeal courts, which can take years.

If you try to establish ownership of your AI-generated masterwork in court, what arguments could you expect?

Copyright decisions are incredibly subjective and divisive. They rely on courts interpreting terms like “expression”, “substantial” and “transformative” to judge how involved an artist has been in the creation of a work. It’s even more subjective when the work has been derived from another source, such as an AI.

One argument where case law exists is digital photography. Photographers spend a lot of time setting up their shot, selecting lenses, choosing their stops and shutter speeds. Yet in the digital world they also take tens or hundreds of photos and choose the best shots to become their published works¹. Is that skill or serendipity?

Panels from the AI-generated sci-fi graphic novel Zarya of the Dawn by Kris Kashtanova and Midjourney
Kris Kashtanova wrote Zarya of the Dawn. Midjourney drew it. But was Zendaya consulted?

How can I turn AI-generated writing or art into my own work?

The photography argument gives us a clue: AI-generated work is only a starting point.

Digital photographers take their shots and feed them through Lightroom or Photoshop. They crop, rotate, adjust the balance of light and colour, blur out unwanted objects and enhance the subjects. The final work is often dramatically different to their original photo.

In the USA, artist Kris Kashtanova is attempting to push the envelope by registering copyright in an AI-generated graphic novel. Kashtanova used the image generator Midjourney to create each panel of the story, wrote the story and arranged the panels. The US Copyright Office ultimately granted limited protection for the text and arrangement, but not for the individual images. A flurry of similar applications led to guidance that, in some cases, “a work containing AI-generated material will also contain sufficient human authorship to support a copyright claim”.

Copyright law is often deliberately vague in this way, because it has to apply across many forms of work, in circumstances that no-one could have imagined when they were writing it.

The last major update of UK copyright law was in 2003 and the USA is stuck with a 1998 law. These are rules for a world without smartphones, social media or self-publishing, let alone AI. Only the EU has anything like digital-ready copyright regulation, and even that’s not AI-ready.

The UK says OK, Computer

It’s likely to be a few years before there are new copyright laws that specifically deal with AI-generated content, but the UK is one of the few countries that already has a possible loophole. UK copyright exists in “computer-generated works”, made by a computer where there is no human creator. The author of these works is defined as “the person by whom the arrangements necessary for the creation of the work are undertaken”.

It’s not clear whether this could be applied to someone writing an AI prompt, but the UK’s Intellectual Property Office offers this guidance for work created with AI assistance: “If the work expresses original human creativity it will benefit from copyright protection like a work created using any other tool.”

The IPO gives the example of AI inside a digital camera which assists with light levels and focusing. This sounds a lot like the AI in Grammarly, but it could include ChatGPT rewriting your work to a specific goal or pushing an image through multiple iterations in Midjourney.

If I don’t own this AI-generated art, who does?

A lot of artists are very angry that no one in the world of AI bothered to ask if it was OK to train their new toys on copyrighted works. The AI wonks point to the doctrine of Fair Use and claim it gives them carte blanche.

It will be up to legislators or the courts to decide who’s right and what happens next. In the UK, the IPO is currently consulting on potential changes to copyright law that could see the computer-generated works exception extended to AI or removed entirely. It could also grant or deny new privileges for AI companies to use copyrighted material for commercial gain. If you publish in the UK, there’s still time to have your say.

The US Copyright Office consulted extensively in 2023 and has promised a report in 2024 that will recommend changes to the legislation, as well as new guidance for the existing rules.

With cases making their way through courts on both sides of the Atlantic, new guidance and better laws can’t come soon enough, but a double-bill of elections makes it even less certain what will happen. I live in Ireland and publish in the UK, EU and USA, so I’d like to know.

The risk for anyone making extensive commercial use of AI is that they could end up owing licence fees to the artists whose work was used to train those machines. The more derivative your work, the greater that risk. Or creative artists might lose any protection from being gobbled up by ChatGPT.

If I can’t own AI-generated writing, what can I do with it?

For now, the best policy is to avoid using AI generation for anything you want to call your own work. That’s a pragmatic step, not a moral or ethical judgement of your choices².

If you intend to use generative AI, use it for things like ideation, inspiration and prototyping. If you have a creative block, mash ideas together to see what tumbles out. When you see something interesting, your own creative work can get started.

At the other end of the process, AI language tools like ChatGPT can be effective editorial tools. Asking them to summarise your chapter is like getting an instant critical read that reveals where you’re not as clear as you’d like to be.

In spite of the ethical issues with their training, I firmly believe that generative AI tools have a place as part of the writer’s toolbox. I also believe that there is a difference between using AI as a creative tool and using it to bypass creative effort.

And it’s a powerful research tool, so long as you’re careful.

  1. Calm down, lens hounds, I know there’s a lot more to it than this in practice. ↩︎
  2. We all know by now that generative AIs like ChatGPT and Midjourney have been trained without permission on the copyrighted works of real artists. Using them is an ethical choice. ↩︎

If you’re going to use AI for writing or research, you need to know two things. The first is that AI doesn’t know anything.

There are other important things to know about AI. It’s trained on content taken without permission of the creators. It contributes to rising greenhouse emissions and power consumption from technology companies.

But those are ethical concerns that writers who choose to use AI will have to put aside for their consciences to deal with. AI’s lack of knowledge is a practical issue that could put you at risk if you publish work that has been created using AI. I’m not talking about moral risks here, but tangible legal and financial dangers.

If you know anything about AI, you’ll roll your eyes and wonder why these things have to be pointed out in 2024. The answer is that AI companies don’t like talking about them, and people who like to take shortcuts don’t look for reasons to avoid them.

AI companies are a lot like populist politicians. They want to sound authoritative, even when they’re spouting complete bullshit. When OpenAI launched GPT-4o (“Omni”), it boasted about its speed and its charming personality, not about its reliability. And then it had to withdraw its flagship voice model because it had imitated Scarlett Johansson without her consent.

This two-line story tells us everything we need to know about the ethics of tech entrepreneurs.

You cannot say that you weren’t warned.

AI doesn’t know anything. It’s a language machine

ChatGPT, Google Gemini, Claude and their ilk are all a type of software called a Large Language Model. These LLMs analyse written statements and create patterns of similarity which tell them how to mimic human language¹.

In the simplest terms: word A goes next to word B more often than it goes next to word C.

LLMs do it on a vast scale, creating hugely complex patterns from letters, syllables, words, sentences, essays, short stories, news reports, plays, books, speeches, web pages, online forums and so on.
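To make that concrete, here is a toy sketch in Python. It is nothing like a real LLM, which learns statistical patterns over billions of tokens with a neural network, but it shows the core trick: count which words follow which, then generate text from those counts. The training sentence is invented purely for illustration.

from collections import Counter, defaultdict
import random

# A toy "language model": count which word follows which in a scrap of training text.
training_text = "the cat sat on the mat and the cat slept on the sofa"
words = training_text.split()

follow_counts = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def most_likely_next(word):
    """Return the word seen most often after `word` in the training text."""
    return follow_counts[word].most_common(1)[0][0]

def babble(start, length=8):
    """Generate text by repeatedly sampling a plausible next word."""
    output = [start]
    for _ in range(length):
        candidates = follow_counts[output[-1]]
        if not candidates:
            break
        # Sample in proportion to how often each word followed the previous one.
        choices, weights = zip(*candidates.items())
        output.append(random.choices(choices, weights=weights)[0])
    return " ".join(output)

print(most_likely_next("the"))  # prints "cat", the word seen most often after "the"
print(babble("the"))            # fluent-looking word salad with no facts behind it

There is no dictionary of facts anywhere in that code, only counts of which words keep company with which. Scale it up by a few billion and you have the essence of the problem.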

I don’t know what intelligence is, artificial or organic. Many people argue that language is an essential part of it, but most people would also agree that intelligence also needs a way to store, sort and retrieve information. Knowledge, for want of a better word.

Large language models cannot do this, which may come as a surprise to a lot of people who use AI tools. There are no facts inside ChatGPT, Claude or Gemini, no encyclopedia for them to check, only associations.

LLMs can build associations with very high probability, so if you ask ChatGPT who wrote 1984, it will tell you “George Orwell” with high confidence, because the correlation between 1984 and George Orwell is very high. If you ask who wrote Hamlet, it will tell you William Shakespeare, but it should tell you that there is some dispute about the authorship of Shakespeare’s plays².

But it won’t, unless you ask a very specific question, because companies like OpenAI build their tools to look like they know things. What ChatGPT and the others don’t do, what they cannot do, is read Wikipedia or another source (preferably lots of them) to verify the correlation.

It’s already been fed Wikipedia and broken it down into probabilities, but it’s also been fed a lot of other information. It might have analysed a short story that says Shakespeare was a time-travelling alien. Unless that information is tagged as a creative work, ChatGPT cannot tell fact from fiction³.

ChatGPT’s response here indicates many of the sources on which it has been trained.

AIs don’t know why they lie. Or when they lie.

It gets worse, again because companies like OpenAI build their tools to look like they know things. When their statistical models don’t provide an answer, they use them to create answers that look and sound real.

This happens so often that it’s become known as a “hallucination”⁴.

You can have fun with them, as the author Charles Stross did when he asked Google Bard (now Gemini) for five facts about Charles Stross⁵. Then five more, and so on. The early statements were accurate, then it began to create spurious associations that might make sense for a left-leaning science fiction author. With each round, the accuracy declined and the hallucinations took precedence⁶.

Or you can be fined by a judge, as happened to lawyers in New York last year. They asked ChatGPT to help them research a case and it helpfully provided summaries and links to judgements in other, similar cases. There was just one problem: the cases didn’t exist. The links went nowhere, but the lawyers didn’t check. The judge was not impressed: they not only lost the case, but this being America, the client probably sued as well.

I’ve had numerous occasions when Google Gemini has lied about the reasons it failed to complete a task. Why? AI companies like OpenAI build their tools to look like they know things.

And it will kill people. Amazon is already facing a deluge of AI-generated books about subjects like mushroom foraging, which contain inaccurate information about poisonous fungi.

If AI doesn’t know anything and it tells lies, what good is it?

AI can be a great way to start a research project, but it cannot be the whole project. Follow the links and check the facts you’re given. Above all, remember that the AI is nothing more than a clever parrot, repeating sentences that it doesn’t understand.
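If the AI does hand you links or citations, checking that they at least resolve takes minutes. Below is a minimal sketch using Python’s requests library; the URLs are placeholders. Bear in mind that a working link still doesn’t prove the page says what the AI claims it says. That part remains your job.

import requests

# URLs an AI assistant has offered as sources: placeholders for illustration.
cited_urls = [
    "https://example.com/some-cited-judgement",
    "https://en.wikipedia.org/wiki/Nineteen_Eighty-Four",
]

for url in cited_urls:
    try:
        # A HEAD request is enough to see whether the page exists at all.
        # (Some servers reject HEAD; fall back to GET if you see odd results.)
        response = requests.head(url, allow_redirects=True, timeout=10)
        status = "OK" if response.status_code < 400 else f"broken ({response.status_code})"
    except requests.RequestException as error:
        status = f"unreachable ({error.__class__.__name__})"
    print(f"{url}: {status}")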

Remember, too, that these AIs are Large Language Models. They are very good at placing words next to each other in useful ways, particularly if you instruct them to mimic a particular style. Of course, there are legal and moral questions if you ask an AI to write something entirely on its own and you intend to pass it off as your own. There are legal and ethical questions if you ask it to imitate a particular writer, particularly one who’s working today.

But if you’ve written something and you want an AI to rewrite it in another style, then it becomes a tool to enhance your writing. You might want to sound more formal or archaic, or to bring in the flavour of a dialect. AIs like ChatGPT and Gemini can do that, and they do it very well. I’ll look at this in a future post.
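As a small taste in the meantime, here is a minimal sketch of that workflow using the OpenAI Python library. The model name, the draft sentence and the rewriting instructions are all placeholder assumptions of mine, not a recommendation; the point is simply that your draft goes in and the AI only restyles it.

from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY environment variable

# A draft sentence of your own, invented here purely for illustration.
draft = "It was raining when she reached the harbour, and the boats were gone."

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model would do
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite the user's text in a formal, slightly archaic register. "
                "Keep the meaning and the events exactly as written; add nothing new."
            ),
        },
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)

The draft is yours and the instruction limits the model to restyling it, which keeps the AI in the role of a tool rather than the author.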

Bing Copilot with DALL-E 3 created this image of “an AI vomiting words”. The AI added an audience of cheering children.

What’s the second thing I need to know before using AI?

For the second thing you need to know if you’re using AI, find out here.

Just for fun, I asked ChatGPT-4o to “write a 500-word blog post in plain English about the limitations and risks of using LLM AIs as a knowledge-based research tool. Include in layman’s terms an explanation of how LLMs work and the occurrence of AI hallucinations.”

What do you think?

AI images of HAL-9000 vomiting and a robot vomiting words onto cheering children, created by Bing Copilot using DALL-E 3. What a future we live in.

  1. This might be how humans learn language, but that’s another discussion. ↩︎
  2. I don’t have any skin in this argument. It’s just the first example that came to mind. ↩︎
  3. The answer also reveals some of the works which have been absorbed by ChatGPT. ↩︎
  4. Hallucinations might be the first sign of machine creativity, but that’s very controversial. ↩︎
  5. I can’t do this for myself because I share my name with a very talented American tech innovator. He’s justifiably more famous than I am. ↩︎
  6. I have now created a link which adds statistical veracity to those claims. ↩︎
