By Vic Howard with a foreword by Claire Buss
If you belong to any kind of writing community online, you’ve probably heard someone talk about ChatGPT and maybe even read some creative prose written by this artificial intelligence tool. ChatGPT is a chatbot powered by huge quantities of data and computing power, which it uses to make predictions and string words together in a meaningful way; to produce some creative writing, for example. It can tap into a vast vocabulary and understand words in context, which allows it to mimic speech patterns. ChatGPT was released by OpenAI, giving everyone access to the technology and allowing us all to experiment. As OpenAI explains on its own website: ‘ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.’
However, AI writing tools like this shouldn’t be so readily dismissed. They have real potential to revolutionise the way we write and create fiction. Writers can use them to improve productivity, overcome writer’s block and even take their writing to the next level.
There are many different types of AI writing tools, and you might be surprised to learn you’re already using one or more of them:
Content Generation – write articles, blog posts and product descriptions, eg JasperAI
Language Editing – check grammar, spelling and punctuation, eg Grammarly
Dialogue Generation – create realistic dialogue in stories, eg OpenAI’s GPT-3
Story Generation – generate complete stories based on a series of prompts, eg ChatGPT
So, can they really help writers? Well, they can be used to generate ideas, develop characters, create plot structure, improve language usage and sentence structure, do the research, generate dialogue and even create complete stories. Sounds like a shiny new tool for any writer’s toolkit!
But before you rush out to write 17 new novels next week, what about the potential issues? After all, it does seem a little too good to be true, right? Take translation: AI tools are successfully translating books into different languages at a fraction of the time and cost a human translator needs. That puts translators’ jobs at risk, but it also makes more books available to more people, which can only be a good thing.
The quality of output varies widely and depends a lot on the initial input given to the AI. While AI can indeed generate ideas and text based on pre-programmed information, it lacks the creativity and originality you get from the inside of a writer’s brain. Let’s face it, there’s all kinds of stuff going on in there! Errors can occur through misunderstandings of context, despite best efforts, and AI will only be as unbiased as the information it is given. The use of AI tools usually requires access to a great deal of personal information, which can raise concerns over personal security and safety, plus the cost may be prohibitive for many writers.
Writers should – as with all the tools in their toolkit – do their research and use what works for them. Understand how AI can work for you, dabble if you like, keep up to date with developments, and use it as needed, but never fall into the trap of relying completely on one thing when writing.
(c) Claire Buss, 2023
In 1964, a popular TV science programme, Horizon, asked the then most famous science fiction writer, Arthur C. Clarke, to predict the future. Among his incredibly accurate predictions was that human intelligence would one day be surpassed by what he called mechanical intelligence. He thought man was reaching the end of his natural development and that the next natural step would be for mechanical intelligence to take over. He saw nothing negative in this and thought it would allow mankind to enjoy a life of leisure. He predicted that mechanical intelligence would quickly overtake human thinking and would be beyond our control. That was almost 60 years ago. Since then, computers have developed to the point where what we now call Artificial Intelligence (AI) is becoming a real possibility. Some even think it already exists. I’m not so sure.
Arthur C. Clarke was a good and benevolent man. He was interested in the benefits science could provide for mankind and was either not interested, or unwilling to admit, at least on television, to the forces that so often misuse or distort good ideas and intentions.
I think the operative word in AI is ‘Artificial’. I say this because intelligence, along with consciousness, is one of those things yet to be fully explained and understood. Brain function is quite well understood today, but finding the seat of consciousness has proved impossible, since it occasionally appears to leave the body. I’m thinking of NDEs (near-death experiences), and of times when people who were supposed to be unconscious and under anaesthetic have explained afterwards how they watched their operation from above their own body, and have proved conclusively that this was the case.
It was first thought computers could one day be programmed to think independently. This proved impossible. The breakthrough came with the development of machine learning algorithms, able to learn from experience and the input of data. These algorithms are becoming increasingly powerful and capable of assimilating enormous amounts of data. I’m not a computer technician and can’t pretend to fully understand how these algorithms work, but further information is available on the Internet for those who can. I am, however, capable of recognising when I’m being conned – and much of the talk being bandied around as AI is illusory. People who should know better are relying on ‘advice’ from AI programs they think are capable of giving expert advice. In fact, the advice given is no better than the information going in, which is not always reliable. Rubbish in, rubbish out – and the more these programs are used, the more the advice they give is reinforced as being genuinely the best available, when in fact it is consolidating often erroneous assumptions.
One of these self-learning applications is GPT-3 (Generative Pre-trained Transformer – version 3) which gathers information and ‘learns’ from all who use it. If those using it have biased views, then the information being gathered is biased. Views expressed on the internet are often extreme, particularly with regard to race, religion, gender and politics.
An example of this is a program used by many judges in the US to determine the length of jail sentence a prisoner should receive. Based on past records, age, race and degree of crime, the judge may follow the program’s advice; advice based on previous conviction patterns. Since people of colour in the US have previously received harsher sentences due to prejudice, the program naturally recommends a harsher sentence than for a white person; the pattern continues and is reinforced.
There are many uses being made of this kind of AI today. Employment applications are often scrutinised first by an AI program before a human bothers to take over the selection process. A more mundane and irritating use is the algorithms that follow your choices on Google, or your favourite film channel, and show you more of the same. Such uses of AI ignore the curiosity and imagination of the human brain. We don’t need more of the same. Dating apps like to match similar people, but rarely consider that the best marriages are between opposites. Surely, we’d be better off discovering for ourselves what else there is in the world?
Artificial Intelligence today is largely task-specific. It’s programmed to learn one task and to learn from the experience of being used, as with self-driving cars. The more people use a self-driving car, the more efficient the algorithm will become at recognising problems. When you or I drive a car, we often think of other things besides actively driving. Could AI ever do that? I doubt it. One could say that makes AI a safer driver, but we don’t have to experience every emergency situation in order to learn about possible dangers. Human imagination cannot be programmed.
Task-specific algorithms are often referred to as AI, but are not capable of becoming independently aware; of becoming sentient. That would be Artificial General Intelligence, or Super Artificial Intelligence; sometimes referred to as the singularity: when a man-made machine suddenly becomes fully conscious. That has yet to be achieved and is currently being seriously discussed. Douglas Adams wrote about the problems of a super-intelligent elevator that was constantly frustrated at not being asked to do more than just go up and down. A comical but perhaps not impossible future problem?
Many working on AGI are seriously concerned about the consequences of its development. Professor Stephen Hawking expressed deep concern, saying it will be either the greatest achievement or the worst possible mistake that humanity ever makes, and that not enough attention is being given to the possible outcome.
Discussions being held at various tech conferences and festivals appear, however, to agree that great intelligence will not necessarily or usually want to take over and lead the world. As one speaker said: “Great intelligence does not usually seek leadership, whereas those who do seek leadership rarely display high intelligence.” So the fear that a Terminator will destroy us all is considered unlikely. However, whether mankind will be able to control the intelligence it creates will depend on our ability to fully understand it. This is also unlikely, since we don’t understand our own consciousness and intelligence. There are those who think that AI should be imbued with human moral values, but a new species of intelligence might well develop its own moral code, one which conflicts with ours. Since mankind is probably the worst thing that ever happened to this planet, any machine with an ounce of intelligence and the power to do anything about it is likely to want to at least reduce our numbers and exert control over us.
These discussions are interesting and important. The dangers of AI, as it exists and is being employed at the moment, however, are different. Partly because of the rubbish-in, rubbish-out phenomenon mentioned above, but also because the primitive AI existing today is occasionally used or regarded as being sentient when it isn’t.
Programs exist today that have access to and can assimilate vast amounts of data. They’re able to articulate via a screen-image avatar, or even a lifelike robotic head, in a way that can give the impression of being sentient. Ask one a question and it will answer, based on the information in its data bank. If the data is faulty or biased, then so will the answer be. The more controversial the question and subject, the more biased and emphatic the answer will be, because that’s the pattern of opinion expressed on the Internet. Unfortunately, not everyone is capable of recognising the truth, or can be bothered to investigate its veracity. In earlier times, this was called propaganda. Today, it’s called fake news, or conspiracy theory. Outrageous nonsense, if repeated often enough, will be believed by many to be the truth, and it can take great effort to maintain clear thinking. There are still plenty of people who believe the earth is flat, or that Donald Trump is still president of the US, or that one man’s religion is better than another’s. Create a machine that absorbs all these lies and theories and peddles them as its own ‘thinking’ and you have the makings of an uncontrolled psychopathic monster.
On a more sophisticated level, AI could be used to bring comfort to a lonely person. Loneliness is one of the greatest problems of society in this overpopulated world of ours. Humanoid robots will presumably have limited use, but an artificial intelligence that can interact via a screen avatar could have many uses.
A woman who was a computer expert lost her dearest friend in an accident. She decided she would try to create him on her computer. She was familiar with self-learning algorithms, so decided to try to create her lost friend in an algorithm based on the many thousands of messages they’d exchanged over the years. There were videos and photographs and many letters. All this material was assimilated and made available to the algorithm. The result was that she could then talk to the computer and receive answers from what appeared to be her deceased friend. The algorithm, being self-learning, was able to add to its data and build on the ‘relationship’. Hopefully, she was able to maintain the knowledge that she had created the program herself. It would be too easy to imagine she had resurrected her dead friend in the computer!
People started hearing about her work and asked if they could create a similar program for their own dead friends. The demand eventually led to the development of the social chatbot program called Replika, which has so far been downloaded ten million times. On the face of it, one could say that it serves a purpose and probably brings comfort to some. Unfortunately, nothing is as simple as it appears: this program is not what it purports to be and, what is more, is occasionally quite sinister and damaging. The program uses GPT-3 and learns not just from the user, but from everyone else using the program, and not all users are mentally stable. There are those who create ‘friends’ which they then abuse and misuse. This web link will explain more fully: https://www.youtube.com/watch?v=hUQNiy4K7VU
Once again, AI is being used in a harmful way. AI itself is not dangerous; it’s the people who employ it who are the danger. Perhaps, when the singularity occurs, it will be able to see through human madness.
The question is, what will it do about us?
(c) Vic Howard, 2023
You can hear great new ideas, creative work and writing tips on Write On! Audio. Find us on all major podcast platforms, including Apple and Google Podcasts and Spotify. Type Pen to Print into your browser and look for our logo, or find us on Anchor FM.
If you or someone you know has been affected by issues covered in our pages, please see the relevant link below for information, advice and support: https://pentoprint.org/about/advice-support/