In this AI age, I still write my own articles
July 3, 2024

How can you know if this text was really written by a human? I know, because I am writing it right now. But, of course, that previous sentence could have been written by artificial intelligence. It seems we now have to live with this uncertainty of who or what the author of something actually is.
Writing as an end in itself
The difference between man and machine has been a recurring theme since at least the time of the Industrial Revolution. But in this age when robots are programmed to speak of themselves in the first person, it has gained a special urgency. Are human beings replaceable? In some areas, certainly. But how far are we willing to go? British-Australian author Alan Baxter has a decided opinion: "In a world where people are still cleaning toilets and working in mines, I can't believe we've got the robots making our art and stories. I thought robots were supposed to do the sh*tty jobs to allow more people to pursue their passions."
When asked why they write, most writers say it's their passion. It's the process of writing itself that is enjoyable: finding the right words and relating to the world. As literary translator Claudia Hamm explained to DW, "The act itself is the purpose. If we [writers] didn't want to write, then we could live much less precariously."
'Text-generative AI is a stolen car'
So is the solution just to let those who like it do their own writing and leave the rest to AI? Not at all, says Hamm, who edited "Automatensprache" ("Machine language"), a recently published volume of essays, poems and interviews dealing comprehensively with various aspects of artificial text generation. One hotly debated topic is copyright. The language-generating systems, known as LLMs (large language models), only function because they have been fed — free of charge — with millions of existing texts written by humans. Numerous bestselling authors have already filed lawsuits.
Claudia Hamm puts it like this: "Text-generative AI is a stolen car. You can sit in it and drive. You can also drive it to Paris and have fun. But it remains a stolen car."
Machine language versus human language
Hamm goes on to say that to her, machine language isn't a real language at all, since there is no "I" who speaks, and therefore no intention. "AI has no communicative intent. When we use language, we're trying to find expression as human beings, an expression of a very specific inner world," she says. But the machine has no inner or outer world. So, according to Hamm, it is unable to create poetry and can only put together unusual combinations of words. "A machine can't make a statement about itself," she says. "It can't put itself in relation to the world."
There are also problems when it comes to accuracy and the truth, particularly with the phenomenon of "AI hallucinations": information more or less "invented" by text-generating AI. Or, as writer Nina George describes it in her contribution to the book "Automatensprache": "Inaccurate assertions and event falsifications that make text vomit a more unreliable source of information than Putin, BILD [a German tabloid newspaper -ed.] and Wikipedia combined — as if an uptight, know-it-all uncle were blubbering drunkenly to himself in a pointless stream of drivel."
A simulated counterpart
The problem, says Claudia Hamm, is that LLMs are designed to make humans and machines indistinguishable. Users should get the feeling they are talking to an intelligent counterpart. The fact that AI functions as a substitute for a human counterpart is the big difference from past technological revolutions. "A steamship has never denied that it was a thing," says Hamm.
Beyond that, the question of whose reality is reflected by AI remains insufficiently explored. The words published online that serve as training data for LLMs like ChatGPT are overwhelmingly written by white people, especially men and the wealthy. Accordingly, the output generated from them does not reflect diversity.
Publishing and AI
Ultimately, the publishing industry isn't interested in a blanket condemnation of artificial intelligence. It can certainly be useful. Beate Muschler, vice president of digital development at Penguin Random House, says it's now commonplace to use AI for inspiration. In an interview with DW, she says, "We don't publish any AI-generated content. But that doesn't mean we're an AI-free space. The approach is to look at our production processes and define areas where AI tools can sensibly be used without running into copyright problems."
Muschler says, for instance, that employees are allowed to use ChatGPT to come up with new ideas. But the content that comes out of those ideas — such as book covers — has to be created by a human. She adds that it's the same for authors: Using AI for inspiration is fine, but the final text has to be the writer's own work. "Our contracts clearly stipulate that the author promises that they created the work on their own," says Muschler.
AI and climate change
The situation is similar in schools and universities. Students are supposed to write their own work, engage with the material and go through the process themselves — otherwise, say critics of AI, no learning takes place and the world would eventually become stultified. And that is certainly not a good prospect.
And so I have got a bit further here. I have researched, reflected, written; I have grappled with myself and the world — and have even protected the environment, because AI uses a great deal of electricity. That's a topic that also deserves attention in this age of climate change.
This article was originally written in German.