Are humans replaceable?

The vibrant mixture of excitement and fear surrounding AI makes it awfully hard to evaluate the opportunities and risks. However, lessons can be learned from other industries, and from past technological shifts.

In the year since the introduction of ChatGPT in November 2022, not a day has passed without news about the rise of artificial intelligence and the ever-quickening pace of its development. ChatGPT is just one of many so-called generative AI systems, which can almost instantaneously create text on any topic, in any style, of any length, often at a level approaching competent human performance.

ChatGPT is what is known as a “large language model”. It is large because it has been trained on enormous quantities of text, and it produces the kind of highly sophisticated writing that we would associate with a competent human. At heart, it works on the same principle as the predictive text system on any smartphone, only at a vastly greater scale: it repeatedly predicts the most plausible next word.
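For the technically curious, that principle can be sketched in a few lines of Python. The toy bigram model below, built on an invented ten-word corpus, is nothing like a real LLM in scale, but the mechanism – count what tends to follow what, then predict – belongs to the same family of idea:

    import random
    from collections import defaultdict, Counter

    # Toy "training" corpus; a real model is trained on billions of words.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count which word follows which: a bigram model, the simplest
    # possible form of next-word prediction.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(word):
        """Sample the next word in proportion to how often it followed `word`."""
        counts = follows[word]
        if not counts:
            return None
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    # Generate a short continuation, one predicted word at a time.
    word, text = "the", ["the"]
    for _ in range(6):
        word = predict_next(word)
        if word is None:
            break
        text.append(word)
    print(" ".join(text))  # e.g. "the cat sat on the mat the"

A large language model replaces this lookup table with a neural network trained on vast corpora and conditioned on far longer contexts, but the loop – predict a word, append it, predict again – is essentially unchanged.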

The science fiction author Arthur C. Clarke once said that “any sufficiently advanced technology is indistinguishable from magic”. Yet for many years now, evidence has been mounting that these seemingly magical technologies can have considerable negative consequences. AI systems have been shown to replicate and magnify human prejudices, amplifying the gender and racial biases hidden in the data used to train them. They have also proved fragile, easily fooled by carefully constructed or manipulated queries.

AI systems have also been developed for tasks that raise complex ethical issues, such as predicting the sexual orientation of individuals. This explains the extensive focus on the ethics of AI, on how AI can be made trustworthy and safe, and on the many international initiatives aimed at regulating it.

There is also growing concern among workers about the impact of AI on employment and the future of work. Will AI automate so many tasks that entire categories of jobs are destroyed, leading to large-scale unemployment? Should we develop non-human minds that might eventually outnumber, outsmart, and replace us? Should we risk losing control of our civilisation?

These questions were asked in an open letter from the Future of Life Institute, an NGO. It called for a six-month “pause” in the creation of the most advanced forms of artificial intelligence and was signed by tech luminaries, including Elon Musk. It is the most prominent example yet of how rapid progress in AI has sparked anxiety about the potential dangers of the technology.

Among the jobs it threatens are those in creative industries. Three-quarters of Americans who responded to a poll by YouGov said that they worry that AI will sap human creativity. But 83% of respondents to a recent survey published in It’s Nice That, a magazine for commercial artists and designers, already use machine-learning tools for work. The new relationship between man and machine may be tense but creative. AI seems to spark anxiety and excitement in equal measure.

Where is the red line?

Between July and November this year, American screenwriters and actors gathered outside Hollywood’s film studios every weekday, shouting slogans and marching in the sun, striking in part over fears that AI will soon be writing scripts or even crowding them out of roles. “You go for a job and they scan you,” said one actress, who worried that her face will be used over and over in crowd scenes. The technology is “disgusting”, said another, who considered its use “an infringement of yourself, of your career”.

The studios, and the tech firms supporting them, argued that “training” an AI model on copyrighted work amounted to fair use. In the words of Matthew Sag of Emory University, AI does not copy a book “like a scribe in a monastery” so much as learn from it like a student. Individual pieces of data, whether novels or songs, usually play such a small role in the model’s output as to be barely traceable. But not always, the writers countered. “If you say, ‘Write in the style of Dan Brown [of The Da Vinci Code],’ of course it will pull from Dan Brown’s books,” said Mary Rasenberger, head of the Authors Guild, which represented the writers.

The actors eventually struck a deal to end their walkout, securing protections from their artificial rivals.

What are the conditions of creativity in the age of AI? And should we fear that AI will eventually take over and surpass our own creative abilities? These questions are being explored in a large-scale exhibition at the Louisiana Museum of Modern Art, north of Copenhagen.

With ‘The Irreplaceable Human – Conditions of Creativity in the Age of AI’, the museum aims to render visible an idea, a phenomenon, or a world that, in different ways, impacts our sense of self as humans. The exhibition endeavours to pin down the nature of creativity from a broad, humanistic perspective incorporating art, literature, cultural history, sociology, and science.

A central motif throughout the exhibition is that of human beings reduced to their functionality in a system. In the Japanese artist Tetsuya Ishida’s masterpiece Mebae (Awakening) from 1998, for example, we see schoolboys with identical faces sitting in perfect rows. Some have merged with the microscopes used in the lesson, like children turned into instruments.

The phenomenon of creativity

When we talk about the irreplaceability of humans, we are grappling with what we can do that machines cannot. One answer has been the phenomenon of creativity. As the exhibition seeks to show, creativity is more about how one works than what one works with. As the exhibition’s curator puts it: “Thus, it becomes something that we all have a potential for and defines us as people: we do not endlessly do the same thing but develop and reinvent ourselves and our needs all the time. The creative is thus something fundamentally human – a core of humanism.”

Examples abound. One AI application under consideration is the use of robotic bees to help pollinate plants, now that human activity has reduced the population of real bees. These robotic bees would buzz from plant to plant, using GPS data and high-resolution cameras to locate specific plants to pollinate. Fascinating.
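How might such a bee decide where to fly next? Here is a minimal sketch in Python, with invented coordinates and a deliberately crude nearest-plant rule; a real system would fuse GPS with camera-based plant recognition rather than rely on a fixed list:

    import math

    # Hypothetical GPS fixes for plants awaiting pollination (invented data).
    plants = [
        {"id": "tomato-1", "lat": 55.7000, "lon": 12.5000, "pollinated": False},
        {"id": "tomato-2", "lat": 55.7002, "lon": 12.5004, "pollinated": False},
        {"id": "squash-1", "lat": 55.6998, "lon": 12.5010, "pollinated": True},
    ]

    def distance(lat1, lon1, lat2, lon2):
        """Rough planar distance, adequate over a single field."""
        return math.hypot(lat1 - lat2, lon1 - lon2)

    def next_target(bee_lat, bee_lon):
        """Pick the nearest plant that still needs a visit."""
        pending = [p for p in plants if not p["pollinated"]]
        if not pending:
            return None
        return min(pending,
                   key=lambda p: distance(bee_lat, bee_lon, p["lat"], p["lon"]))

    print(next_target(55.6999, 12.5001)["id"])  # tomato-1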

Another AI robot could scan people’s faces in an interview room – watching out for pursed lips, furrowed brows, or sighs – to relay their apparent emotions to the interview panel. In this way, applicants who seem uninterested in the process or downright bored would be weeded out.
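The relay step such a system would perform can be caricatured in a few lines. The cue-to-emotion mapping below is invented, a stand-in for what would in reality be a trained facial-expression model – whose reliability, it should be said, is itself contested:

    # Invented cue-to-signal mapping, drawn from the cues named above.
    CUE_SIGNALS = {
        "pursed_lips": "tense",
        "furrowed_brow": "frustrated",
        "sigh": "bored",
        "steady_gaze": "engaged",
    }

    def relay_report(applicant, observed_cues):
        """Summarise observed facial cues for the interview panel."""
        signals = [CUE_SIGNALS.get(cue, "unknown") for cue in observed_cues]
        flagged = any(s in ("bored", "tense") for s in signals)
        return {"applicant": applicant, "signals": signals,
                "flag_for_review": flagged}

    print(relay_report("Candidate 7", ["sigh", "furrowed_brow"]))
    # {'applicant': 'Candidate 7', 'signals': ['bored', 'frustrated'],
    #  'flag_for_review': True}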

If you are wondering how football or tennis commentators like Peter Drury on Sky seem to have so many facts and so much analysis at their fingertips, much of it is down to AI, which has simplified the techniques of sports journalism. AI-driven platforms can convert score data into narrative commentary, sync those automated insights with computer vision, and even pair a virtual avatar of Peter Drury with voice-cloning software that reproduces his voice exactly – leaving you believing you are watching Drury while the real one is sunbathing in Bali.
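The vendors’ pipelines are proprietary, but the simplest form of the “score data into narratives” step is template filling over structured match data. A sketch, with invented names and figures:

    # Hypothetical match event; all field names and values are invented.
    event = {
        "minute": 87,
        "player": "A. Example",
        "team": "Home United",
        "score": (2, 1),
        "shot_distance_m": 23,
    }

    # Template-based "data-to-narrative": structured score data in,
    # commentary-style sentence out. Real systems layer many templates,
    # grammar rules, and statistical context on top of this idea.
    TEMPLATE = ("{minute}': {player} strikes from {shot_distance_m} metres "
                "and {team} lead {home}-{away}!")

    def narrate(ev):
        home, away = ev["score"]
        return TEMPLATE.format(home=home, away=away, **ev)

    print(narrate(event))
    # 87': A. Example strikes from 23 metres and Home United lead 2-1!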

Dangers of AI

On the downside, some frightening or dangerous applications include deepfakes of President Trump declaring war on China and actually precipitating it; privacy violations; a faked stock-exchange collapse that triggers enormous market volatility; uncontrollable self-aware AI robots; and lethal autonomous weapon systems that locate and destroy targets on their own while abiding by few regulations.

AI is transforming our relationship with computers, with knowledge, and even with ourselves. Its proponents argue that it has the potential to solve big problems: developing new drugs, designing new materials to help fight climate change, untangling the complexities of fusion power. To others, the fact that AI’s capabilities are already outrunning its creators’ understanding risks bringing to life the science-fiction disaster scenario of robots that outsmart their inventors, often with fatal consequences.

Regulation

As noted at the outset, this mixture of excitement and fear makes it hard to weigh AI’s opportunities against its risks. Lessons can nonetheless be learned from other industries, and from past technological shifts.

As the courts grind into action, governments are also getting involved. On 30th October, Joe Biden, America’s President, issued an executive order setting out basic rules for AI development.

The EU, too, wants to regulate AI to ensure better conditions for the development and use of this innovative technology. In April 2021, the European Commission proposed a first regulatory framework, under which AI systems are analysed and classified according to the risk they pose to users: the higher the risk, the stricter the rules. Once approved, these will be the world’s first comprehensive rules on AI. The European Parliament wants to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly.
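The proposal’s core idea – classify a system into a risk tier, then scale the obligations accordingly – can be sketched as a simple lookup. The four tiers below follow the Commission’s proposal; the one-line obligation summaries are paraphrases, not legal text:

    from enum import Enum

    class Risk(Enum):
        UNACCEPTABLE = "unacceptable"  # e.g. social scoring: banned outright
        HIGH = "high"                  # e.g. CV-screening tools: strict duties
        LIMITED = "limited"            # e.g. chatbots: transparency duties
        MINIMAL = "minimal"            # e.g. spam filters: largely untouched

    # Paraphrased, non-legal summary of how obligations scale with risk.
    OBLIGATIONS = {
        Risk.UNACCEPTABLE: "Prohibited from the EU market.",
        Risk.HIGH: "Conformity assessment, documentation, human oversight.",
        Risk.LIMITED: "Must disclose that the user is interacting with AI.",
        Risk.MINIMAL: "No specific obligations under the framework.",
    }

    def obligations_for(risk: Risk) -> str:
        return OBLIGATIONS[risk]

    print(obligations_for(Risk.HIGH))
    # Conformity assessment, documentation, human oversight.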

Like other transformative inventions before it, AI will bring change – change perhaps even more widespread and sweeping than the introduction of computing devices. It will alter the way we transact, get diagnosed, undergo surgery, and drive our cars. It is already changing industrial processes, medical imaging, financial modelling, and computer vision.

We are well on our way to tapping into this enormous potential.  It is up to us to make sure that its worst uses and excesses are properly controlled.

Main illustration: © Tetsuya Ishida, 2019
