Computers control our future. Will they have a conscience?

Robots cannot feel but might pretend that they do.

I am a technological determinist who believes humans do not control machines; they control us. In the age of artificial intelligence, we should revisit that viewpoint to glimpse what awaits us.

During the heyday of the internet in the 1990s, developers created algorithms to glue us to screens. They downplayed the impact on our psyches. After all, we could always turn off our computers.

Then those same advocates panicked over the Y2K bug. They implored us to shut down computers as the calendar rolled from 31st December 1999 to 1st January 2000. To save memory, programmers had stored years as two digits rather than four. The fear was that computers would interpret “00” as 1900 rather than 2000, setting us back a century.
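The two-digit shortcut can be sketched in a few lines of Python. This is an illustration of the general flaw, not code from any actual legacy system:

```python
def expand_year(two_digit: str) -> int:
    """Naive pre-Y2K expansion: assume every stored year belongs to the 1900s."""
    return 1900 + int(two_digit)

def years_elapsed(start: str, end: str) -> int:
    """Elapsed time between two stored two-digit years."""
    return expand_year(end) - expand_year(start)

# Crossing the millennium, "00" expands to 1900, not 2000:
print(expand_year("99"))          # 1999
print(expand_year("00"))          # 1900
print(years_elapsed("99", "00"))  # -99, not 1
```

Any calculation built on that expansion, such as an age, an interest period, or an expiry date, would suddenly run 100 years backwards.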

Iowa State University prophesied that few problems would occur. “Most faculty and staff won’t need to visit the workplace to check for Y2K problems on New Year’s Eve, New Year’s Day or even Sunday, 2nd January. If potential Y2K problems don’t pose serious risks to your department operations, you probably can wait until the next workday to check your office and equipment.”

Y2K was a sign that computers were controlling us. Rather than celebrate the new millennium, we fretted about losing data.

Now we fret about losing jobs to robots.

Already artificial intelligence (AI) is driving our cars, recognising our faces, decoding our fingerprints, developing public policy, making parole decisions, informing marketing, controlling supply chains, and improving performance in travel, insurance, medicine, agriculture, retail, automotive assembly, and aerospace and defence.

To be sure, AI promises myriad social benefits. It has enhanced medical science in disease diagnosis, drug treatment and clinical trials. It automates production lines, eliminating repetitive tasks and safety risks. It predicts severe weather conditions.

But there are significant risks. Chatbots hallucinate, presenting false information as fact. AI algorithms can be biased, trained on exabytes of data laden with stereotypes that elevate men over women in job placement or deny loans to people of colour.

My concerns are futuristic.

There are four AI types:

  • Reactive Machines, which apply set rules to inputs such as purchase history and customer preferences. You get Netflix recommendations based on past viewing.
  • Limited Memory Machines, imitating the way our brains function as they process data. You get self-driving cars that scan traffic lights, signs, curves, potholes, and road closures.
  • Theory of Mind Machines, still under development, with the potential to decipher and then act on human thoughts and emotions.
  • Self-Aware Machines, theorised to possess a sense of self, or consciousness.
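The first type is the easiest to pin down. A reactive machine is, at bottom, a fixed mapping from input to output with no memory of past interactions. A toy sketch in Python (the genres and titles are invented for illustration; this is not how Netflix actually works):

```python
# Fixed rules mapping a viewing history to recommendations.
# A reactive machine consults only the input it is handed;
# nothing it "learns" persists between calls.
RULES = {
    "sci-fi": ["Blade Runner", "Arrival"],
    "comedy": ["Airplane!", "The Office"],
}

def recommend(watch_history: list[str]) -> list[str]:
    """Return titles matched by rule; unknown genres yield nothing."""
    recs = []
    for genre in watch_history:
        recs.extend(RULES.get(genre, []))
    return recs

print(recommend(["sci-fi"]))   # ['Blade Runner', 'Arrival']
print(recommend(["mystery"]))  # []
```

Each later type on the list relaxes one of this sketch's constraints: limited-memory machines retain recent data, and the two speculative types would go beyond lookup altogether.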

Philosophers, scientists and engineers are debating when machines will become conscious and whether we will know when they do.

A 2023 report in Nature notes that a failure to recognise machine consciousness, should it arise, “has important moral implications”. The article cites several neuroscience-based theories defining biological consciousness. But there is no consensus.

AI knows how to fool humans by mimicking likely responses. If trained on Descartes’s maxim, “I think, therefore I am”, a machine might make that philosophical claim. Or it might simply define consciousness as the ability to identify and locate itself in physical space, like a pin on a Google map.

There is no there there in machines. They may have neural networks inspired by human brains, but no inkling of how that brain evolved over millennia to ensure biological and moral survival. As the New York Times surmises, “It’s hard to see how these things could be coded into a machine.”

Humans evolved via mirror neurons. We feel what others experience. We empathise with strangers undergoing trauma and even mourn them in the safety of our living rooms while viewing traumatic news.

We also possess a conscience associated with “gut instinct”. Johns Hopkins Medicine calls this “our second brain”.

In other words, we not only perceive our environment, we feel it. The operative word is “feel”. Robots cannot feel but might pretend that they do.

An article titled ‘Will AI have a conscience?’ notes that AI is being used “to care for the elderly, teach our children, and perform many other tasks that require moral human judgement”. Human brains developed that judgment via “a reward and punishment system” that ensures the survival of our genes. That is why parents protect offspring at personal expense. Some people even risk their lives to save complete strangers in distress.

Theory of Mind development cannot compensate for that because, in sum, machines cannot feel.

Nevertheless, Popular Mechanics reports a “stunning” Theory of Mind achievement involving a neural network with intuitive skills of a 9-year-old. It hopes that machines develop “empathy and morality”, which could be “a big boon for things like self-driving cars when needing to decide whether to put a driver in danger to save the life of a child crossing the street”.

Good luck with that. What would an adolescent do if driving a car with a child in the lane? Also, factor in vehicle owners paying for AI insurance that protects their lives, not those of pedestrians.

Psychiatry theorises that people without consciences are psychopaths and people who feign consciences are narcissists. The former do not care about empathy; the latter care only about what other people think.

That is the future of machines and, perhaps, us too, in the absence of regulation and oversight.


Michael Bugeja is the author of “Living Media Ethics” (Routledge/Taylor & Francis) and “Interpersonal Divide in the Age of the Machine” (Oxford Univ. Press). He is a regular contributor to Iowa Capital Dispatch, where this article was first published. Views expressed here are his own.
