It is a tumultuous time for creators. There is no denying that. ChatGPT is only approaching its first birthday, and arguably, it has already changed the world more than any other technology since the Internet.
And because of the Internet, the impact of generative AI hits that much harder. It leaves many people in the creative economy affected by these innovations: journalists, writers, artists, authors and really anybody with an artistic streak who has sought to make their work public. People like me.
Against that backdrop, I have seen and read several statements come out recently, the most recent of which is entitled “For an Innovation and Creator Friendly AI Act”. You can find it here.
It was published on 23 November, and 12 groups representing over 500,000 writers, artists, journalists, musicians, and other creatives signed it. As a supporter of collective action, I am really pleased to see this.
In the statement, the organisations urge European policymakers to focus on transparency in the upcoming EU AI Act, specifically regarding the training data used for large language models. The statement takes a somewhat ambivalent position on whether damage has already been done, or could be done, by those models and the AI tools built on them. That is a missed opportunity, I think.
There is certainly room to say unequivocally that damage has been done and that forms of compensation for copyright holders should be discussed. OpenAI, Microsoft, Google and the like have deep enough pockets to pay their dues. We should not be shy in asking for them.
Then the statement moves into other areas that are more problematic. One is the labelling of content produced with generative AI tools. Without wanting to hurt the feelings of these well-meaning artists, I have to be honest: this is wishful thinking.
We have to acknowledge that creativity comes from many places. Certifying creative products as “100 per cent human” versus “AI-supported” or “fully AI-generated” (whatever that means, considering the initial prompts or inputs are ALWAYS human-led) is simply not doable.
Clearly, the intent of these writers is to lobby European policymakers as they approach the completion of the negotiations on this EU AI Act. I am European but have not lived in Europe for about 15 years, and I am not a pro-EU ideologue. To me, imperialism is imperialism, no matter its colour or nationality.
That being said, there is certainly an opportunity here. Just as I strongly believe in workers’ right to unionise to protect their livelihoods and their human rights, there is a lot that can be done through collective action like this to ensure that creatives are not disproportionately harmed by this new technology.
I have also been very vocal about the fact that many industries within the creator economy are simply outdated and ready for disruption, whether from AI or elsewhere. I will stick here with publishing because it is the one that I have worked in for the longest time, and it is impossible not to see resistance to change and the desire to avoid rocking the boat of the status quo as a fundamental driver here.
That I am 100 per cent not supportive of.
It is very clear to me that most channels for the commercialisation of creativity today do not favour creators beyond a narrow niche defined by geography, gender and culture (i.e. mostly white dudes like me). Traditional publishing, where pathetic royalties and outdated business models kill any chance for the average writer to make a living from their writing, does not help creators.
That is also the case with self-publishing. As it currently stands, self-publishing equals Amazon, and Amazon is a monopolistic, abusive superpower with way too much lobbying and regulatory influence to give a damn about really supporting the millions of creators, and in particular writers, who use platforms like Amazon Kindle.
So, creators have been getting ripped off for the longest time, and now AI comes along, threatening to unleash even more chaos on creators.
I understand the fear, I really do, but I also have a deep belief in the duty of creators to explore and experiment as a core component of their identity as creators.
Is it possible that there is a schism brewing within the creator community? Yes, it is possible. Some creators I speak to, shier than I am in making their positions on AI known, have already talked to me about “AI positive” and “AI negative” creators and organisations. So, it is possible that statements like these will further formalise a divide.
But I also believe that it is too early for that. In fact, I am almost certain that a lot of the fear expressed in these statements arises from the very simple fact that most creators do not know how to use this technology. As someone who has spent the last three years exploring generative AI, I can attest that it is a complex technology to master, even though it is deceptively simple to approach because of the chatbot format.
AI has helped me kill and bury writer’s block. It has helped me visualise fantastical locations and events for my speculative fiction that I would never have imagined before. It has helped me edit, draft and iterate on novels that had been stuck on the back-burner for years because of family and professional commitments. These are just a few examples, and they are a topic for other conversations, ones that I am thrilled to have with those willing.
What is relevant to today’s topic is the kind of position expressed here.
This is a group of well-meaning writers speaking out against “Writoids” — meaning AI systems that attempt to imitate human writing. They argue these systems violate copyright laws by training on copyrighted content without permission. They view Writoids as a threat to human creativity and livelihoods.
They quote Isaac Asimov’s Three Laws of Robotics in their manifesto, questioning whether copyright infringement qualifies as “harm” under the first law.
On this, I cannot agree. It is not the technology that infringes copyright but the businesses building the technology on illegal data collection practices. I do not think anyone who understands what is happening in the field of artificial intelligence would agree that these large language models, and the tools built on top of them, are simply regurgitating copyrighted content, at least as far as text is concerned.
There were certainly justified concerns around image generation: in its earlier days, it was easy to spot watermarks and signatures from copyright holders appearing as fragments of the generated images. Compensation should be obtained for those violations. But the technology has already moved past that issue, even though better safeguards for the styles and techniques of living artists must still be explored. This is a great example.
The Writoid manifesto also criticises the technology for lacking “true” intelligence or creativity, arguing that the intelligence is very much just a facade (I’d love to hear their definitive definitions of either).
This is quite naïve. As many people in the AI industry note, the current AI is the dumbest that AI will ever be. It is getting smarter literally week by week, so basing any argument on the current limitations of this tech misses the point.
We should certainly ensure that these systems are and will be built without plain old-school robbery, something that capitalism is fantastic at, from slavery to colonialism and so forth. However, we should not confuse capitalist dynamics with technology itself. Steam engines and electricity caused plenty of social changes, but ultimately, it was the exploitation that surrounded those technologies that caused societal harm, such as displacements, pollution, child labour and more. Certainly not the technology itself.
Technology is and will remain a net positive for society.
Ultimately, there is no turning back. These technologies are here to stay, and the percentage of humanity that will come to rely on AI for many things, including their creative needs (of both creation and creative consumption), is only going to skyrocket from here. So, burying our heads in the sand and blaming the technology for the exploitation that surrounds it is not the way forward.
I believe the way forward is to ensure there are proper safeguards for copyright-holding creatives. That should be achieved through collective action to protect the livelihoods of authors and to make sure proper compensation is given for any exploitative appropriation of copyrighted material that has already taken place.
So, let’s have fairness and compensation where it is due, but let’s also have better, more informed opinions on the other side based on a deeper understanding of this incredibly fast-moving technology.
To me, those should be the two guiding principles going forward for all creators, writers and artists worldwide.
This article originally appeared in the newsletter Code Red for Writers.