OpenAI introduced its AI chatbot, ChatGPT, on November 30, 2022. It gained a million users in under a week and 100 million within two months. In February 2023, OpenAI announced an upcoming paid version. ChatGPT works by using large language models and deep learning to predict likely next words and arrange them in a logical order.
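To make “predict text” concrete, here’s a deliberately tiny sketch of the core idea. This toy bigram model (my illustration, not anything from OpenAI) only counts which word follows which in one sample sentence; ChatGPT’s deep neural networks are vastly more sophisticated, but the underlying move, picking a statistically likely next word, is the same.

```python
from collections import Counter, defaultdict
import random

# Toy training text; real models learn from billions of words.
corpus = "the cat sat on the mat and the cat ran off".split()

# Count which word tends to follow which (a "bigram" model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Pick a likely next word, weighted by how often it followed `word`."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(predict_next("the"))  # usually "cat", the most frequent continuation
```

Chaining such predictions word after word yields fluent-sounding text with no understanding behind it.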
Arranging words in a logical order is the aim, at least. AI can produce coherent text, but at this stage, it’s often what James Vincent at The Verge called “plausible bullshit”: it sounds good but is wrong or nonsensical.
It can also exacerbate existing biases, sometimes blatantly. In December 2022, UC Berkeley professor Steven Piantadosi demonstrated that ChatGPT could produce functions equating white men with good scientists. OpenAI quickly patched this behavior, but it’s still concerning.
The biases can also be much subtler. I’ve written about ableist language and know that many words and expressions have racist, antisemitic, and ableist origins that most people aren’t aware of. I think it’s good that bias issues get addressed quickly and that users can rate answers; AI’s capabilities are being refined as more people use it. However, I’m skeptical that it will ever parse implicit bias or subtext the way people can. Writing is much more than syntax and grammar.
One weekend in December 2022, Ammaar Reshi used ChatGPT to create a kids’ picture book, and many writers and visual artists expressed concerns that AI imitated their work without their consent. Developers have sued Microsoft, GitHub, and OpenAI for training the Copilot coding assistant on their code without permission. This suit could set a precedent, especially if writers and visual artists follow. But unless plagiarism consists of long blocks of verbatim text, it can be hard to prove.
Intellectual property laws are a huge concern here, but the consent issues go much further than that. People’s words, names, and images can be used without their consent in deepfake photos or in AI-written erotica and violent texts. AI writing bots are often called generative, but they’re better described as derivative: they can’t generate new ideas.
Even before ChatGPT, online bots would often crawl my articles and copy or paraphrase them. Instead of linking to the original article or crediting me or Book Riot by name, they sometimes reposted my entire article as “anonymous” or under someone else’s name.
People often describe AI in ways that downplay the human costs. Take, for example, how OpenAI paid Kenyan workers less than $2 per hour to label and edit triggering content. Many of those workers were distressed, like the moderators who were paid to flag explicit content on Facebook (now Meta). AI always has human costs and often exacerbates inequality, even when those costs are hidden.
Before ChatGPT was released, I read an article saying AI “hallucinated” an experience of incarceration. No, it didn’t. It plagiarized one or more of the countless people who’ve written about their experiences of being incarcerated.
AI can’t think or hallucinate. I don’t like this use of the term “hallucinated” because it either implies that AI is sentient or turns a medical symptom into a metaphor for machine errors. The popular “hallucinating AI” framing combines an ableist metaphor with erasure of people’s contributions and experiences. There are better ways to describe software producing inaccurate content.
Yes, I’m a skeptic on the future of AI in writing. I believe in human creativity in all its forms: invention, engineering, and every kind of art. STEM and the arts are both creative. If automation privileges tech over the humanities and costs artists jobs, it unfairly pits STEM and the arts against each other.
However, I think it’s possible to use technology and writing together productively. How might AI change the future of book publishing? It’s too early to say, because AI is constantly being fine-tuned. Some authors use ChatGPT to help with aspects of writing they find challenging, like titles, or to keep up the fast turnaround that Kindle self-publishing demands. But drawing ethical lines around its use can be difficult and subjective. In this interview, Jennifer Lepp says she’d never use AI to imitate another living author’s unique style.
In some ways, AI isn’t so different from tools I already use, like the grammar checker in Word or other apps, except that those tools aren’t scraping other writers’ work from the internet. Some writers say AI helps them grasp grammar and idiom in a second language. I’m still unsure where the ethical lines are, though. Are people who can afford to hire visual artists, proofreaders, or copy editors already opting to use AI instead?
Despite my concerns, AI has the potential for limitless creative mashups. By focusing on public domain works, writers may be able to use AI in ways that are both fun and ethical. If you want to rewrite a Dickens novel as a Shakespearean play in iambic pentameter, that could be a creative project with AI assistance; it might even be too time-consuming or niche to attempt without it.
Future copyright laws will need to contend with the issues AI raises. Instead of scraping the web, AI could draw from a database of opted-in work. Writers and visual artists could opt into or out of these databases when signing contracts, and they might receive royalties whenever generated text overlaps with their work beyond a certain percentage. I don’t know how feasible this idea is, and it doesn’t solve the other ethical issues.
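As a thought experiment, here’s what the plumbing of that royalty idea might look like. Everything in this sketch is hypothetical: the registry, the ten percent threshold, and the five-word phrase matching are illustrative stand-ins, and a real detector would need far more robust matching than shared word runs.

```python
# Hypothetical sketch only: "registry", OVERLAP_THRESHOLD, and the
# n-gram comparison are illustrative stand-ins, not a real system.

def ngrams(text: str, n: int = 5) -> set:
    """All runs of n consecutive words in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_share(generated: str, source: str, n: int = 5) -> float:
    """Fraction of the generated text's n-grams that also appear in source."""
    gen, src = ngrams(generated, n), ngrams(source, n)
    return len(gen & src) / len(gen) if gen else 0.0

OVERLAP_THRESHOLD = 0.10  # e.g., owe royalties above 10% overlap

def royalty_flags(generated: str, registry: dict) -> list:
    """Names of opted-in authors whose work overlaps past the threshold."""
    return [author for author, work in registry.items()
            if overlap_share(generated, work) >= OVERLAP_THRESHOLD]
```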
If it’s used in specific ways, AI might open up creative options instead of eliminating writing jobs. Developing a unique voice, skills, and interests is an important part of being a writer, and AI cannot replicate writers’ diverse expertise and experiences. With safeguards, it may someday help with certain projects: for example, imitating the recognizable style of an author from centuries ago.
Find out why BR editor Kelly considers censorship a bigger threat than AI. In 2021, I examined the popular expression “great artists steal.”