The Intersection of AI, Literature, and Ethics
Not everybody has warmed up to the involvement of Artificial Intelligence (AI) in the creative realm. Not so surprising, is it?
Intense debates about authorship and AI’s impact on society have raged over the past few months, sparked by everything from the unauthorised use of books to train AI models to Biden’s recent Executive Order on AI.
The use of AI in professional writing raises ethical questions about responsible technology use and the balance between automation and human creativity. How much is too much?
Assistance, not replacement
Let’s look at AI-assisted writing as a tool. Writing is, after all, a combination of the writer's worldview, storytelling passion, and personal experiences. The problem is that many people who wish to share their stories with the world find it difficult to put their thoughts into words or to organise them into something compelling. Even with brilliant ideas, they struggle to articulate their experiences and insights, and that struggle saps the motivation to share, or even finish, their work. That’s where AI tools come in handy.
From brainstorming ideas to fine-tuning the final result, AI-powered tools can be used at any stage of the writing process. The user maintains autonomy; it all comes down to how the tool is used rather than what the tool itself does.
Some folks might just find the blank page staring back at them intimidating and need a little push, while others might prefer a first draft they can personalise and edit. Still, one thing is clear: you can keep the essence of your thoughts intact while using AI to bring them to life. There are also cases where people want AI to write a piece from beginning to end, feeding it only ideas. This is similar to hiring a ghostwriter, only less expensive. Not everyone can afford a ghostwriter, but AI assistance is becoming more and more accessible.
A matter of accessibility
Before we rush to criticise AI-assisted writing, it’s worth keeping an open mind. To some people, writing just doesn’t come naturally. What about neurodivergent individuals who struggle with the traditional writing process? Given how far the technology has come, everyone should be able to share their stories and experiences.
Think about non-fiction writing: it requires thorough research, breaking complex material into manageable chunks, and weaving anecdotes and insights into coherent reading material. For those without the gift of the quill, the entire process can prove overwhelming and stressful.
If someone lacks confidence and motivation from the start, they are unlikely to complete their work. Now think of all the brilliant ideas that have never seen the light of day simply because someone lacked the means to share them with the world!
Authenticity & AI-generated content
Yes, AI is fascinating, but there are ethical considerations to be aware of. When large amounts of AI-generated content are churned out without human intervention, that is certainly a matter of concern. This is why technologies that enable AI assistance in inherently creative tasks, such as writing, need to be built around a simple fact: the more a person engages with the process, the more human the output. After all, it wouldn’t really make sense if the written work lacked the author's unique perspective and individual touch, would it?
The most beneficial part of an AI-human writing collaboration is that it gives structure to the stories we already have and shapes our big ideas so that thoughts can flow freely and find expression in words. In fact, co-creation makes writing not only less stressful but also more enjoyable, letting writers focus on nurturing and refining their ideas rather than getting bogged down in the details of implementation.
With a renewed focus on authenticity in the age of AI-generated content, many tech companies are building their own watermarking and provenance systems: Google’s SynthID, which embeds an imperceptible watermark directly into AI-generated content, and Adobe’s “icon of transparency”, which is added to the metadata of images, videos, and PDFs to disclose the origin and ownership of the data. Any discussion of watermarking systems for identifying AI-generated content, however, has to recognise that the issue is not black and white. There are shades of grey to consider: to what extent did the user shape the content that was generated? Was the user actively engaged in refining it after generation, or was it published without proper review?
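To make those shades of grey concrete, here is a minimal, purely hypothetical sketch in Python. It is not SynthID, Content Credentials, or any real provenance format; the field names and values are assumptions chosen for illustration. The idea is simply that a provenance record could capture degrees of human involvement rather than a single AI-or-human flag.

```python
import hashlib
import json
from dataclasses import dataclass, asdict


@dataclass
class ProvenanceRecord:
    """Hypothetical provenance manifest for a piece of writing.

    Illustrative only: it records *how much* a human shaped the text
    instead of a binary AI/human label.
    """
    content_sha256: str           # fingerprint of the published text
    ai_assisted: bool             # was an AI tool involved at all?
    human_edit_ratio: float       # share of AI-drafted text later revised (0.0-1.0)
    reviewed_before_publish: bool


def build_record(text: str, ai_assisted: bool,
                 human_edit_ratio: float,
                 reviewed_before_publish: bool) -> ProvenanceRecord:
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return ProvenanceRecord(digest, ai_assisted,
                            human_edit_ratio, reviewed_before_publish)


if __name__ == "__main__":
    draft = "An AI-drafted paragraph, heavily rewritten by its author."
    record = build_record(draft, ai_assisted=True,
                          human_edit_ratio=0.8,
                          reviewed_before_publish=True)
    # The record would travel with the text as metadata (e.g. a sidecar file).
    print(json.dumps(asdict(record), indent=2))
```

Even a toy record like this makes the grey areas visible: a manifest that says “AI-assisted, mostly rewritten, reviewed before publishing” tells a very different story from one that says “AI-generated, published untouched”.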
Ultimately, rather than being cast as a hero or a villain, AI should be viewed as a silent co-creator that provides the support needed to bring ideas to life. The goal is to make writing more accessible and to tackle the mental block people hit when deciding how to write something, rather than what to write.