Also available to read in Afrikaans: click here.
Man versus machine was at the heart of a series of high-profile US court cases in June 2025. Tech companies are being challenged for their use of massive amounts of data scraped from every possible online source to train their generative artificial intelligence (AI) models. The charge is that this data is scraped with neither consent nor compensation in order to create models that unfairly outcompete the original sources in the market.
For creators working in words, images and sound, this is harmful. Not only is the fruit of their labour being used without permission or payment, but the result is also turned against them. First they lose licensing fees, then sales. AI models can compose music; create photos, paintings and videos; and write stories and articles. These products are of a quality that is still, but increasingly only barely, distinguishable from human work. Moreover, they are significantly cheaper and faster to produce.
Illustrators, photographers and videographers are already losing commissions to AI. So, too, are reporters, copywriters, jingle writers in advertising, and virtually anyone in a creative industry role. They are on the front line, where the question “But is it art?” isn’t necessarily asked.
If academic painters, auteur filmmakers, artistic photographers, composers of more complex works, literary writers and poets, and graphic novelists should feel even slightly less anxious, it would be a false comfort. Even though the distance between their sphere – defined by creativity, originality and complexity – and that in which AI has already succeeded still seems vast, the message must be: the machine is coming for you.
It is a valid argument that AI is currently generic and derivative – a statistical assembly of the most likely next letter, word, sentence, paragraph, passage and ultimately book. Machines are still on this side of a qualitative divide between writing a light fantasy and creating a truly original literary work.
That, at least, is the human perspective. For the machine, however, it is a matter of time. The development of AI technology, which we must recognise is still in its infancy, follows an exponential upward curve, the end of which is barely imaginable. I will return to this point later.
In the meantime, it is useful to examine what is already possible. Music is one flashpoint. It is estimated that hundreds of millions of AI-created songs already exist in the market. New songs are added at a rate of tens of thousands per day, and bots “listen” to them in massive numbers, boosting revenue.
At least one band is almost certainly AI-generated in its entirety. Its name, The Velvet Sundown, is cunningly close to that of the famous 1960s band The Velvet Underground. It has group photos, a full biography, and two complete albums released in consecutive weeks, with a third on the way – but there are no live interviews. Every song sounds familiar yet lifeless. The group’s growing following on Spotify has already surpassed the half-million mark. Apple Music hosts them, too. Quality doesn’t matter to the masses. On X, The Velvet Sundown vehemently denies all allegations of artificiality.
As regards books, one of the defendants in the aforementioned lawsuits, Anthropic, states that books are hands down the number one training material for its chatbot, Claude. The reason is clear: among all text sources, books are the most deeply researched and coherently composed. One of Anthropic’s representatives says: “Books are particularly valuable for LLMs (large language models) precisely because humans have taken such care to produce them. If you want to teach a computer the nuances of many different topics – and how to write clearly – there is no substitute for books.”
The scale of AI-generated books is as staggering as that of music. One estimate puts the number of such titles already in the hundreds of thousands, particularly in the US, China and Japan. Science fiction accounts for more than a third of this production, followed by mystery, romance, fantasy and thrillers.
There are degrees of human input and intervention. At one end, the machine receives only a single prompt, like: “Write a novel about a journey.” The quality of the final product is directly proportional to the quantity and quality of human guidance. Most of these books are published online on one of several popular platforms. A few outliers make it to print.
There is far from universal consensus about the ownership and copyright of AI-generated work. In the UK, copyright is limited in scope and duration. The draft policy being developed by the EU requires transparency and human authorship (which will need to be defined). The Chinese appear to be more generous with copyright protection.
To court
A long line of lawsuits is pending in which original creators are suing tech companies. Three such cases, which followed in rapid succession, are particularly instructive.
Judgment in the first was handed down in June. In 2024, three writers filed a class-action lawsuit in California against Anthropic, a company founded by former OpenAI members with the stated aim of creating an ethical AI that would benefit everyone. In contradiction of that ethical claim, the writers argued, the content of their books, along with that of seven million others, had been stolen from pirate websites and then used as training data without consent.
The federal judge found that the theft of the books was likely a copyright infringement. A further hearing is apparently forthcoming. Statutory damages start at $750 per infringed work, which highlights what is at stake for tech companies.
The ruling on a related issue may be even more important, and here the tech company won. At some point, it stopped scraping pirated copies and began purchasing books by the millions. These were disassembled and digitally scanned to create a digital library. From there, the content was reproduced repeatedly to train various AI models. The judge accepted Anthropic’s argument, which humanised the AI model. It is no different from when people read books and then use that information to write their own text, Anthropic said. The operative term is “transformative”, ie, the degree to which the original material is altered in the secondary content.
But while, according to the court, Anthropic’s text output was not traceable back to specific works – ie, sufficiently transformative – other tech companies are more blatant. Meta, for instance, has been accused of having its AI memorise copyrighted books rather than integrate them, then regurgitate large chunks verbatim or in a substantially similar style.
Two days after the verdict in the Anthropic case, a federal court in California delivered judgment in a suit in which writers had sued Meta for unlawfully copying their work for AI training. Unlike in the previous case, this judge ruled that the “fair use” doctrine shielded the company from the copyright infringement claims.
Fair use is not a tightly delineated principle, and its application shifts with time, place and the people judging it. Importantly, it is not an established right but a defence. In the US, four factors are weighed together in determining whether copying without permission is justifiable: the nature and purpose of the copying (Is it transformative? Educational?), the nature of the original (Is it factual? Published?), the amount copied, and the effect on the market. In South Africa, there is the additional question of whether the source has been acknowledged, though restrictions are generally less strictly enforced.
Of these considerations, the court in the Anthropic case found the fourth one the most consequential: the degree to which the scraped product competes with the original, and whether it is enough to devalue it in the market.
The finding in Meta’s favour was procedural, not principled. The judge stated that Meta almost certainly diminished the value of the writers’ work, but that the plaintiffs failed to offer sufficient proof. Quantifying this type of loss is difficult, but the judge’s remarks are instructive: “These products are expected to generate billions for the companies that are developing them. If using copyrighted works to train the models is as necessary as companies say, they will figure out a way to compensate copyright holders for it.”
Meta initially sought permission, but then abandoned the attempt and thereafter simply scraped the content of millions of books from three well-known pirate sites.
Meanwhile, Microsoft has also been sued in New York by a group of writers claiming that the company used 200 000 stolen books to train its AI model, Megatron. The writers further allege that the model was specifically trained to mimic the syntax, voice and themes of the stolen books. They seek damages of up to $150 000 per misused book, 200 times the per-work figure mentioned in the Anthropic case. Expect a fierce legal battle from Microsoft.
Similar cases are piling up. Reuters has successfully sued Ross Intelligence, as the judge rejected its fair use defence. The New York Times is suing Microsoft over the illegal use of its database. The Wall Street Journal and the New York Post have filed a similar case against Perplexity. Several lawsuits are underway in the music world. Getty Images is acting against Stability AI. Disney and NBC Universal are taking on Midjourney. And there are more.
The tech companies complain that copyright holders are obstructing AI development because the copyrighted material is essential to training their models. One wonders whether the irony is lost on them that this argument actually makes the copyright holders’ case.
Adapt or die
Blessed is the person who can distinguish between battles that can be won and those that cannot. This distinction is not always possible, but amid all the uncertainty and unanswered questions, there is one indisputable fact: AI is not going away. In fact, it will become ever more powerful and omnipresent. No individual creator or class-action lawsuit will stop it. The flood of machine-generated products we’re seeing today is going to multiply exponentially, and the products themselves will only get better.
We are certainly at an inflection point in history. If you want to name it, it is the transition from the Anthropocene (in which humans dominate the geological, meteorological and biospheric character of Earth) to the so-called Novacene (the era of hyperintelligence). It is a time in which we must ask anew what art, in essence, is, as Nini Bennett does in a discussion of the first Afrikaans AI poetry. More on that later.
We must ask what art means to us as humans, and where it is heading; but even before these fundamental questions, we face the practical one: what is an artist to do?
One answer is already in motion: a legal battle against copyright infringement and against the commercial use of scraped content to the detriment of its creators. In a just world, every copyright holder would eventually be compensated for the use of their work. But the water pressure behind the crumbling dam wall is immense. It may be too great for law and reason to withstand. Even if a universal system is negotiated in which writers are remunerated through micropayments, as musicians are on Spotify, you can bet that the tech boss will still be infinitely better off than you, the writer.
Another answer to the practical question lies in the ringing words at the start of this section: adapt or die. That phrase also marks an inflection point in South African history: these were PW Botha’s words, from the era of his infamous “Rubicon” speech of 1985, a year in which apartheid tumbled into a violent downward spiral. The same words were one of the first comments on a recent Facebook post I made about AI and copyright. They came from the artist Deon Maas, who, as it happens, currently has an exhibition in Wellington. As he then rightly added: it’s just logical.
AI as we know it today from chatbot answers and from its visual and musical outputs presents a challenge that is powerful in speed and quantity, but not so much in quality. This is because it is derivative and secondary, generic and superficial, broad but not deep. Your response, therefore, must be to deepen the authenticity of your work, to invest it more intensely with your lived experience, to keep honing your craft, to keep innovating, to keep asking more questions. It’s a race, and you must stay ahead.
You can also use the very technology in question to achieve precisely that. Use the breath on your neck as a tailwind to propel you forward. Nothing AI has created in fiction so far has shot the lights out. A short piece of metafiction generated by an OpenAI model, proudly shared by Sam Altman on X, drew attention a few months ago. The prompt to the AI was: “Please write a metafictional literary short story about AI and grief.” The result – a story about a machine and the idea of loss – is clever and sorrowful, and one runs the risk of being moved by it, even knowing that its cleverness and sorrow are winnowed from the threshing floor of a thousand times a thousand times a thousand stories made by humans.
The comments below that X post are telling. They are almost all negative, though one suspects some critics hold the machine to expectations that they might more leniently apply to human writers. The general tone is that the machine’s story tends toward cliché, lacks precision in word choice and relies on easy sentiment.
The AI poetry mentioned earlier is actually AI-assisted poetry: a volume assembled by Imke van Heerden and beautifully titled Silwerwit in die soontoe (“Silvery white in that way”, 2023). The collection, with its enigmatic, lyrical verses, was created by a limited language model that Van Heerden and her husband, the computer scientist Anil Bas, developed. Using a novel by her father, Etienne van Heerden, Die biblioteek aan die einde van die wêreld (2019; English edition A library to flee, 2022), as source text, the model was instructed to make poetry from it. The collection was well received.
The selection of popular AI music I have personally heard can be criticised in the same way as the short story Sam Altman shared: it’s simply boring. But who can judge the whole of it? There’s simply too much. The overwhelming quantity suggests that the quality is anything but overwhelming.
More substantial work is being done in serious contemporary and classical music. AI models have, for example, been used to complete Beethoven’s 10th symphony and Schubert’s 8th, both of which were left unfinished. Both are underwhelming. The Beethoven was done with composer Walter Werzowa. Huawei had its model listen to Schubert on a phone and then, still on the phone, generate melodies for a third and fourth movement in the style of the first two. Composer Lucas Cantor orchestrated it. The result is lovely, but sounds more like incidental music for The sound of music than Schubert with his depth, humanity, inventiveness, existential angst and inimitable beauty.
Worse, in my view, is music entirely composed and orchestrated by machines. A leading outfit is the Iamus computing group, which takes credit for the first fully machine-composed and orchestrated symphonic works. One of these, “Adsum” (2013), was recorded by the London Symphony Orchestra (LSO). I’m not qualified to comment on music theory, but the work evokes a sense of disorientation and godforsaken rationality devoid of any human emotion – future music fit for the bleakest dystopia. I can say nothing more charitable about their other works. The short symphonic fantasy “Genesis”, composed and orchestrated by the AIVA system and also recorded by the LSO, is more pleasant and less alienating, but ultimately just as fatally average and dull.
There’s much more for the reader to explore, including Robert Laidlow, Holly Herndon, Björk, Ethan Toavs and Cornelius Cardew.
The visual arts face challenges of their own in the face of advancing AI, but particularly intriguing work is being done by video artists employing AI. Artists like Almagul Menlibayeva, Denis Semionov, Ben Garney, Dustin Hollywood and Pakko De La Torre-Rocha are all worth exploring. Zach Lieberman creates visual poetry by translating living facial expressions and human movement into visuals with the help of AI. Quayola does something similar by transforming live classical music into enchanting visuals. His work is exhibited worldwide, including previously at the Venice Biennale.
Most interesting to me is the work of the young Turk Refik Anadol, who powerfully translates datafied cultural archives into abstract visual spectacles.
For the average freelance illustrator, the professional without artistic ambition, the news is not good. Just like with data entry clerks, customer service agents, legal assistants, low-level translators, basic copywriters, junior financial analysts, some instructors, tech support staff, market research assistants, low-level journalists and others, AI will very likely be doing your job within the next two years. Will machines ever be able to produce illustrations equal to those of Norman Rockwell or JC Leyendecker? Perhaps not – but certainly excellent imitations, good enough that most people won’t care.
And writers? The takeover has already begun at the one end. The tide is coming in. The outcome will depend at least as much on the capabilities of the machines as on how people respond.
The future
I’m not qualified to speak about the technology itself, but Eric Schmidt, the brilliant former Google CEO, insists that artificial intelligence is underhyped, not overhyped. According to him, by 2026, 90% of all programming will be done by AI. Within the next three to five years, he says, we’ll reach artificial general intelligence – the point from which machines will be able to create and reason independently of human input.
Schmidt describes this coming time as one in which, in a single AI model, you’ll be able to access the best of, say, all architects if you want to build a house. Ditto for a multitude of other fields. Within ten years, Schmidt says, superintelligence will be a reality: a computer that is, to put it in layman’s terms, smarter than all the people in the world combined.
Schmidt, incidentally, co-authored two books with Henry Kissinger, the latest of which is titled Genesis: Artificial intelligence, hope and the human spirit. Kissinger’s famed Harvard thesis dealt with Immanuel Kant and our relationship to reality, and it was his scepticism about what Google was doing to this relationship that sparked his contact and eventual friendship with Schmidt.
Geoffrey Hinton, the Nobel laureate known as the godfather of AI, says that machines may already have a form of consciousness, and he warns that they could take over.
What does AI itself say? I asked ChatGPT about the differences between large language models (LLMs) like those we currently know and artificial general intelligence (AGI). The key point is that AGI will have weaned itself from the LLMs currently being contested in court. AGI will be able to reason independently and generate original knowledge. To understand what AGI is, it helps to know how it differs from the AI we have today.
AI learns from enormous data sets scraped from the internet and elsewhere. It’s a statistical pattern-maker that picks the most probable next word. It cannot think or understand, and even less can it demonstrate curiosity or pursue ideas of its own volition. Nor can it – most crucially – create something new. It derives its power from scale, from the programme architecture that enables it to work, and from exposure to human-created content.
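For the technically curious reader, here is a minimal sketch of that “most probable next word” mechanism. It is purely illustrative and assumes nothing about any real system: the context, the candidate words and their probabilities below are invented, whereas an actual LLM derives such distributions over tens of thousands of possible tokens from billions of learned parameters rather than from a hand-written table.

```python
import random

# Toy stand-in for a language model: a hand-written table mapping a context
# to probabilities for candidate next words. The numbers are invented purely
# to illustrate the mechanism described in the text.
next_word_probs = {
    "the cat sat on the": {"mat": 0.62, "sofa": 0.21, "roof": 0.12, "moon": 0.05},
}

def greedy_next_word(context: str) -> str:
    """Pick the single most probable continuation (greedy decoding)."""
    probs = next_word_probs[context]
    return max(probs, key=probs.get)

def sampled_next_word(context: str) -> str:
    """Sample a continuation in proportion to its probability."""
    probs = next_word_probs[context]
    words, weights = zip(*probs.items())
    return random.choices(list(words), weights=list(weights), k=1)[0]

print(greedy_next_word("the cat sat on the"))   # always "mat"
print(sampled_next_word("the cat sat on the"))  # usually "mat", occasionally something else
```

Everything such a system writes, however fluent, is this selection repeated one word at a time; it is the scale and the learned weights, not the mechanism itself, that make the output read like prose.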
AGI is something else. It can learn any task that humans can do. It can reason, adapt and autonomously pursue its own goals. It can create new knowledge, not just rearrange existing data. LLMs cannot cross this threshold because they are bound to what they’ve seen. They cannot independently verify or investigate. They lack the embodiment through which sensory data can enter the system – an ability many say is essential for anything you’d call AGI. They also lack internal motivation to learn.
Thus spake ChatGPT, which further states that the path to AGI likely runs through an intermediate hybrid form. This form will be able to set its own research goals, interact with the physical world through robots and sensors, and possess long-term memory, future planning capability and symbolic reasoning. It will learn autonomously and will possess a simulated form of curiosity that drives its own investigations.
Back to the human yours truly: the above sounds to me close enough to full AGI to say that if it quacks, walks and looks like a duck, it probably is a duck – a duck which, according to Schmidt, will be waddling up to our front door by 2035. Or is it Raka, the feared creature in NP van Wyk Louw’s enduring poetic cycle – he who supposedly cannot think – that is already within the kraal?
Reflection in twilight: Gods and idols
In his lecture on literature and technology at a Stellenbosch University colloquium in 2002, Etienne van Heerden gave an early overview of web-theoretical views on what was then still a relatively new phenomenon in our lives. His views aligned with writers who spoke of the supra-individual nature of the internet, the concealment within all its openness, the distance among its users that is simultaneously an immediacy (terms for which you could substitute transcendence and immanence), as well as the ideology of collective sovereignty.
It should therefore come as no surprise that some theorists described the web – this disembodied entity – as a quasi-religious, even sacred space (among them Sherry Turkle and John Perry Barlow). Others (such as Kevin Kelly and Ray Kurzweil) set out on the path to theorising post-humanism, a path that ran via the internet. More radically still, the Oxford philosopher Nick Bostrom, who also has a background in physics and computer science, gave currency – through his Future of Humanity Institute (founded in 2005) – to the notion that our entire world, with us in it, may be a computer simulation. Such an idea cannot be detached from the language of religious symbolism.
Since the beginning, we humans have ascribed divinity to things we do not understand and especially cannot control – things that, conversely, seem to want to control us. Short of equating the internet and AI, it’s safe to say that the internet is the mother of artificial intelligence. Not just mother, but feeder as well. I’m not saying it was a virgin birth, but the child certainly has a stronger claim than the mother to that all-too-human response, practised since the earliest times: the building of altars.
The German philosopher Peter Sloterdijk had a bon mot about this association of technology with the divine. In an interview with the Danish weekly Weekendavisen (17/11/2017), he said:
Modern man does not want to obey a higher power, but to be a higher power, and the inventors of artificial intelligence are taking over the free position as a creator-god. They have already succeeded. California futurology is based on the assumption that a large part of what we refer to as subjects and souls are in fact technical mechanisms that can be technically simulated.
Clearly, we are again where Prometheus was. We want to play with fire. We want to be God. Or to make ourselves a god. We have already told ourselves all these stories in mythology. And that mythology makes no bones about the abject consequences of our divine aspirations.
Friedrich Nietzsche would have had something to say about this. In Twilight of the idols (1889), he aimed to strike down the dominance of reason – an inheritance from Socrates and Plato – with his hammer. The reduction of the human to data, as AI does to us, is the ultimate possibility of reason. It is rationality compressed into mathematics.
At the same time, Nietzsche would have been wary of the god-talk, especially the slave morality he found so offensive in Christians. Where is the passion, the madness, the rebellion, the master morality that constitutes the essence of the human condition, he would ask. He would be the last to bow before electric gods who may mimic Apollo, but never Dionysus. Never from the deepest circuits of a supercomputer, where electrons await their turn to slip through a transistor according to a programme, could a dancing star ever be born.
“One must still have chaos within oneself,” Nietzsche says in Thus spake Zarathustra, “to be able to give birth to a dancing star.” Nietzsche, in his grave, has a point. Ever-explosive creativity – more than any legal battle in court – is the best, and ultimately the only, answer to the challenge the machines pose to us.
Deeper into the future lies perhaps another possibility: fusion. That is not as far-fetched as it might sound. In his indispensable book Straw dogs (2002), British philosopher John Gray foresaw that humanity may end up in some form of hybrid existence in which our brains have been merged with computer technology. With the help of people like Elon Musk and his company Neuralink, computers are already, and literally, beginning to crawl into our brains.
What mathematics tells physicists about the quantum world has led some to speculate that the cosmos as a whole is a conscious entity, and that consciousness is something we humans experience of it, not something we bring to it. One of the leading theoretical physicists, the Brit Roger Penrose, does not go that far in his writings on consciousness, the brain and quantum mechanics. Consciousness, Penrose says – and not uncontroversially – is generated when quantum states collapse under the influence of gravity within the microstructures of our nerve cells. This collapse is objective and real, not merely a matter of observation, as in the well-known double-slit experiment with light. The cosmos, for Penrose, is not conscious, but consciousness is fundamentally tied to the gravitational geometry of the cosmos, in an orchestrated process. What happens in the wet, warm, noisy brain is not algorithmic computation. Such computation, says Penrose, is all that even the most advanced quantum computer could ever do. For this reason, it will never be able to generate consciousness.
For now, until we know whether Penrose is right or wrong, the divine remains located with us, the human; that is, the human in whom both Apollo and Dionysus are at work – the cool rationality and detached beauty on one side, and the drunken swirl of chaotic creativity on the other. If the machines are to read all of our books, we must hope that they read their Nietzsche well.
Without putting too Jungian a point on it, one has to ask: is the scraping of human content by machines essentially any different from each of us tapping into the deep unconscious, where a countless array of impressions gathered over a lifetime lies submerged, along with all the knowledge we’ve acquired, our experiences and the input of others?
Perhaps it is a Bosporus where, in Orhan Pamuk’s vision, not only our own artefacts but those of an entire city (read: world) have sunk to the bottom over the course of its history – a place where we see our own things along with those of others through the murky water, and reimagine them into something new. Can AI reproduce anything other than what it has gleaned from the collective output of our species? And, similarly, one might ask a human: does anything truly original emerge from you that hasn’t first been deposited into your subconscious?
Postmodern thought long ago gave short shrift to the idea of originality – well before David Lodge wrote in his 1988 novel Nice work that one could no longer say the words “I love you” meaningfully, their meaning having been diluted by a thousand prior users. Yet people keep saying it and keep meaning it, and some even feel that they are the first ever really to have said it. That’s what love does to us.
And if it is true that books teach us empathy with others, let us hope that the machines read us at our best, and that they will remember this when they no longer merely read, but have begun following their own minds. In the meantime, we humans know what we should do.
*
This is a translation made by ChatGPT from the original Afrikaans article, edited by the author.
See also:
Press release: Africa’s first AI opera autoplay redefines performance art
Deus ex machina: Animation, artists and solving the generative AI problem