Does AI threaten humanity?

Will AI have emotions? Should AI have human rights? Does AI threaten humans? Will AI enslave us?

As AI progresses rapidly, people keep asking these questions more and more often. They usually address them to AI scientists, which seems like a natural thing to do. But the answers they get from those scientists are far from perfect.

The thing is, the AI field is now mostly led by experts in IT: prominent technical researchers, software engineers, mathematicians, IT managers and IT entrepreneurs. No doubt, they are exceptional in their own fields of work. But to answer questions about AI ethics and existential AI threats, one has to dig into many more fields of study which spread far beyond IT: biology, psychology, neurology, history, sociology and others.

The goal of this article is to discuss in simple terms why AI will be one of the main existential threats to humanity in the next several decades. Not in the form of some evil superhuman brain, but in the form of “just another technology” which one day becomes too powerful while requiring very few resources to run. Through the malicious intent, or simply the mistake, of a single human, such a technology could destroy all of humanity.

To understand the reasons for that threat, we first need to understand what Artificial Intelligence is.

Artificial Intelligence is NOT Artificial Humans

What is the first thing that comes to mind when you think of AI? Maybe it is the Terminator movie: some evil machine that looks like a metallic human and tries to kill real humans. If you think of a friendly AI, you may imagine a Japanese robotic girl who looks almost human, bows, smiles and chuckles. She is almost like a human, just with an electronic brain instead of an organic one. Or you may think of Siri or Alexa, nice helpful female assistants with pleasant voices who are always glad to help you over the phone.

What do all these AI images have in common? The answer is that they are all strongly associated with humans. They look like humans, they talk like humans. So it is no wonder we strongly associate AI with humans and start wondering about AI motivations and emotions.

Another thing is that we, as humans, do not know anyone else with at least human-level intelligence except, well, other humans. So we subconsciously associate intelligence with humans. We say the word “human” and involuntarily think of intelligence. We say the word “intelligence” and involuntarily think of humans. It is deeply wired into our subconscious based on our whole life experience.

And that is a very wrong assumption. Being Intelligent is absolutely NOT the same as being Human:

  • Humans have emotions and feelings. An intelligent being does not necessarily have any emotions or feelings.
  • Humans have motivations. An intelligent being does not necessarily have motivations.
  • Humans have instincts. An intelligent being does not necessarily have instincts.

So do the questions about AI intentions and emotions make any sense? Yes, they do. It is just not that simple.

What is Artificial Intelligence?

Ok, if Intelligence is not about being Human, then what is it?

  • The Oxford dictionary defines Intelligence as the ability to acquire and apply knowledge and skills.
  • Wikipedia states that Intelligence is the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviours within an environment or context.

Let’s simplify those definitions and think of Intelligence just as an Information Processing System. This way of thinking does not seem to contradict the definitions above, but it is easier to comprehend. An Information Processing System is a system that gets some input, processes it and produces some output. Does it sound too simplistic to reduce Artificial Intelligence to an Information Processing System?

Let’s compare an Information Processing System to the Oxford and Wikipedia definitions of Intelligence:

An Information Processing System is a system that:

  • gets some input <=> acquires knowledge, perceives information;
  • processes input <=> derives knowledge and skills, retains information as knowledge, infers information, generates adaptive behaviours;
  • produces some output <=> applies knowledge and skills, applies adaptive behaviours within an environment.

It seems that thinking about Intelligence as an Information Processing System is a viable approach. It is quite difficult to get our heads around the concept of Intelligence, but it is much easier to comprehend and analyse Information Processing Systems. And conceptually, the two are the same.
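To make this view concrete, here is a minimal Python sketch (purely illustrative; the names and types are my own choices, not an established API). It shows an Information Processing System as nothing more than a mapping from input to output:

```python
# A minimal, illustrative sketch: an Information Processing System is
# just something that maps an input to an output.
from typing import Callable, TypeVar

I = TypeVar("I")  # the input: perceived or acquired information
O = TypeVar("O")  # the output: knowledge or behaviour applied to the environment

# In this simplified view, "intelligence" is any function from input to output.
InformationProcessingSystem = Callable[[I], O]

def echo_system(observation: str) -> str:
    # A deliberately trivial system: it "perceives" text and "acts" by shouting it back.
    return observation.upper()

print(echo_system("hello"))  # -> HELLO
```

Everything that follows, from a pocket calculator to a hypothetical AGI, fits this same trivial signature; only the complexity of the processing step changes.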

Takeaway: We should not think of Intelligence as something human-like. Intelligence is just an Information Processing System.

A visual recognition neural network is a straightforward example of an Information Processing System. The input is an image, the processing is the calculation of some numbers within the neural network, and the output is a label like “dog” or “cat”.
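As a rough illustration of that pipeline, here is a toy sketch in Python (the “network” is a single invented weight matrix, not a real trained model):

```python
import numpy as np

# Toy "visual recognition" system: input -> processing -> output.
# The weights below are invented for illustration; a real network would
# have millions of learned parameters instead of this single matrix.
LABELS = ["cat", "dog"]
WEIGHTS = np.array([[ 0.9, -0.3],
                    [-0.2,  0.8],
                    [ 0.5,  0.1],
                    [-0.7,  0.6]])

def classify(image: np.ndarray) -> str:
    scores = image.flatten() @ WEIGHTS        # processing: "calculation of some numbers"
    return LABELS[int(np.argmax(scores))]     # output: a label

tiny_image = np.array([[0.2, 0.9],
                       [0.1, 0.8]])           # input: a 2x2 grayscale "image"
print(classify(tiny_image))                   # -> dog
```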

Let’s take this example further. Any computer program is an Information Processing System: it gets some input from the user, processes it and produces some output. It means that any computer program is “somewhat intelligent”.

A handheld calculator is an Information Processing System. It takes numbers and math operators as input, processes them and produces some result. So the handheld calculator is “somewhat intelligent” as well. And that means that even a mechanical arithmometer is “somewhat intelligent”.
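The same input/processing/output framing fits the calculator too. A minimal sketch (not a model of any real device):

```python
import operator

# A handheld calculator viewed as an Information Processing System.
OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def calculator(a: float, op: str, b: float) -> float:
    # input: two numbers and an operator symbol
    # processing: select and apply the matching arithmetic operation
    # output: the numeric result
    return OPS[op](a, b)

print(calculator(6, "*", 7))  # -> 42
```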

Going in the other direction, towards more complicated systems: voice recognition software is an Information Processing System, so it is “somewhat intelligent”. The Google Search Engine answering questions with a list of relevant pages is an Information Processing System, so it is “somewhat intelligent”. Reinforcement Learning algorithms beating the best human players at chess and Go are Information Processing Systems, so they are “somewhat intelligent”. And taking it even further, Artificial General Intelligence, which by definition will have the capacity to understand or learn any intellectual task that a human being can, will still be just an Information Processing System. Though it will be not merely “somewhat intelligent” but “definitely intelligent”.

What does it all mean? It means that Intelligence does not have any minimum or maximum threshold. It is not something ethereal or magical. It is just a very wide range of Information Processing Systems, from the most primitive to the most complicated. No more and no less.

Intelligence in Humans

If Intelligence is just an Information Processing System, then what about us, Intelligent Humans? Are we nothing more than just another subset of Information Processing Systems?

This is a complicated and sensitive topic, deeply intertwined with root questions of philosophy and religion. What is a human soul? What is an emotion? Do humans have free will?

Let us not get into those questions for now. Let us just state that the human brain does contain an Information Processing System. And this Information Processing System is what humans and Artificial Intelligence definitely have in common.

Paths to Artificial General Intelligence

Next, it is important to discuss what Artificial General Intelligence (AGI) is, as well as the different approaches by which humans could create it.

Artificial General Intelligence can be defined as an artificial construction that can solve any intellectual problem as well as or better than any human. It is often implied that when humanity succeeds in creating AGI, Artificial Intelligence will at that very moment somehow become equal to humans, and that such an AGI will have not only intellect, but also self-consciousness, feelings, emotions, an instinct of self-preservation, a sense of dignity, and so on. Do these implications make sense? Yes and no. The answer strongly depends on the kind of AGI that is created. And there are two very different paths to AGI creation which lead to two very different types of AGI.

These paths are:

  • Human Brain Emulation;
  • Purely Artificial Intelligence.

Human Brain Emulation

In the Human Brain Emulation approach, we scan a real human brain with very high precision. This precision is sufficient to identify all the basic building blocks of the brain, presumably neurons, as well as all the connections between these blocks. Then we recreate this structure of individual blocks and their connections in a computer model. We also add rules to this model describing how the blocks interact with each other and how they react to external input (stimuli).

It is conceptually the same as all other types of computer modelling. For example, think of modelling a light bulb in an electric circuit. Using specialised modelling software, you draw a simple electric circuit with a light bulb connected to a power source. Then you launch the simulation, change the voltage at the power source, and the bulb’s brightness changes accordingly.

Human Brain Emulation is conceptually the same. Of course, it is much more complicated in terms of the number of elements, their interconnections and the rules of interaction. But the current scientific consensus seems to be that these problems are just problems of scale. Therefore, it is realistic to assume that one day a precise model of the human brain will be successfully launched on traditional computers. And with sufficient computing power to model the processes in that brain in real time, this simulation would become the first fully fledged “digital” human brain.

It is important to note that such an artificial human brain would automatically be superhuman in terms of thinking speed. Since it is a computer simulation, if we double its computing resources, the speed of the emulation would also double, and this digital brain would think twice as fast as a real human. If we give the emulation ten times more computing resources, the speed of its thinking would increase tenfold. Therefore, once we succeed in the complete emulation of a human brain, we would automatically succeed in creating an artificial superintelligent being.

And it is reasonable to assume that such a fully fledged emulation of a human brain would function in exactly the same way as a real human brain. This artificial human brain would most likely have the same instincts, emotions, consciousness, feelings and intentions as a real human. In terms of ethics, it would be natural to conclude that such an emulated consciousness must be treated in the same way as a real human.

  • Will this AI have emotions? Yes.
  • Should this AI have human rights? Yes.

Will this AI threaten humanity? Possibly. It depends on the personality of the real human whose brain was originally scanned for the emulation. This digital human brain would inherit all the opinions and attitudes of its human “prototype”. It may like or dislike humanity, it can be caring or indifferent in the same way as a real human. And as with real humans, its attitude towards all kinds of questions may evolve over time in a positive or negative direction.

Purely Artificial Intelligence

As mentioned earlier, Artificial General Intelligence is a system that can solve any intellectual problem as well as or better than any human.

In contrast, narrow AI is an Artificial Intelligence that can solve one specific intellectual problem. Face recognition software, voice recognition software, virtual assistants like Siri and Alexa, and chess and Go AI players are all examples of narrow AIs that have already been developed. At some of these tasks, AI is already better than any human. Due to rapid scientific progress, the range of specific intellectual problems that can be solved by narrow AIs is constantly growing.

It would be reasonable to assume that one day these systems will converge into some universal AI system capable of solving any intellectual task a human can solve. That would also be a fully fledged Artificial General Intelligence as defined above. And it will be not just intelligence, but superintelligence.

But in its essence it would still be nothing more than an advanced information processing system. In other words, it would still function in the same way as an electronic calculator, just a very advanced version of it. Let’s call it Purely Artificial Intelligence.

  • Will this AI have instincts, consciousness, emotions and intentions? No.

But with an important caveat:

Unless you explicitly program those things into it.

  • Will it try to survive by any means threatening humans? No, unless you explicitly program it.
  • Will it suffer when you turn it off? No, unless you explicitly program it.
  • Will it have any feelings? No, unless you explicitly program it.

It will be just another piece of computer software. And the questions about its emotions and rights are identical to the questions “Does your computer operating system have feelings?” or “Should your office software have human rights?”

But what if you artificially add a survival module or a feelings module to this AI system? Well, then you are on the way to creating an existential threat for humanity. Since it is a superintelligence, it would easily outsmart any human, including its creators, while trying to reach its goals. And in the process it may eliminate the whole of humanity, by accident or on purpose.

Yes, from a scientific point of view, the problem of creating an artificial sentient being with consciousness and feelings is one of the most exciting tasks any researcher can imagine. It definitely feels like being God when you create a new form of sentient life. But no scientist or leader can ignore the potential risks. The best self-preservation or goal-reaching strategy for that AI may include killing or stupefying all humans, and we may not even find that out until it is too late. And then everyone we know will be dead. Every human will be dead. Forever. Is that research really worth it? Absolutely not. Playing God hardly ever ends well.

Takeaway: Purely Artificial Intelligence is just an advanced version of an electronic calculator. It would be most reasonable to ensure that it always stays that way. No group of humans should ever play God by trying to create AI with feelings and intentions of its own. It is too risky, not only for that group of people, but for the whole of humanity.

Is there a way to deal with this risk given the current fast progress of AI research? Well, there is hope. We should recognise at all levels that AI is potentially as dangerous as other weapons of mass destruction: atomic, chemical and biological. Therefore, strict guidelines on AI research should be developed and enforced.

Developing and enforcing such guidelines would be a massive effort on a global scale. As a start, it seems reasonable for AI scientists to at least commit to the following principles:

  • Do not try programming self-preservation instinct or feelings into AI.
  • While programming active AI, make sure that all its future intentions are fully transparent and that its functioning can be stopped at any moment by a human operator.

Existential threat from narrow AIs

So, let us hope that all AI scientists are responsible enough and never try to create an artificial superintelligent living being with self-preservation instincts or uncontrollable intentions.

Will this be enough to stop worrying about the existential threat of AI? Unfortunately, no. Progress in narrow AIs could also destroy humanity, and that could be a matter of years, not decades as it is for AGI.

As of today, narrow AIs are already very powerful and carry a lot of threats. Deepfake technology can imitate the voice and video of real world leaders. Inspirational speeches generated by AI may soon rival those of the best human speakers. Autonomous military drones are in active development. Bioengineering is strongly boosted by analysis performed by AI agents.

One does not need much imagination to envision how these narrow AI developments could be turned into weapons of mass destruction. And these threats will get more and more dangerous each year. Of course, AI defences are also in development. But will they always be fast enough to prevent all potential AI threats? No one can guarantee that.

And the problem with narrow AIs is that you don’t need a lot of resources to use them. A modern PC is often enough to run rather powerful AI agents.

So narrow AI is a powerful technology that can be as dangerous as an atomic weapon or a bioweapon. But in contrast to atomic weapons or bioweapons, narrow AI can be deployed with very limited resources. Therefore, the spread of narrow AI technology will be very difficult to control.

What does this mean on a global scale? It means that, for the first time in human history, one malicious or irresponsible individual equipped with narrow AI tools could become so powerful that they could destroy all of humanity.

In other words, technological progress is reaching the level where a single human could destroy the whole Earth. This is a new existential threat that humanity has never encountered before. The whole situation is very worrisome.

Can anything be done about it? The very least we can do is recognise that AI is a weapon, powerful and dangerous, just like nuclear weapons and bioweapons. Yes, you can use AI for good purposes, just as you can use nuclear power for energy generation. But first and foremost, nuclear technology is a weapon. Therefore, nuclear research is rigorously controlled, as is bioweapon research.

Therefore, it would be most reasonable to treat AI as a potential weapon and apply the same level of control to AI research as we apply to nuclear and bioweapon research.

Takeaway: AI is a potential weapon, powerful and dangerous. Therefore, AI research should be controlled in the same way as nuclear research and biological research.

The specific measures which can be implemented are still to be analysed.

Acknowledgments

This article is largely based on the work of Nick Bostrom. Many concepts were taken directly from his book “Superintelligence: Paths, Dangers, Strategies”. Many ideas were also inspired by the books of Peter Watts and Greg Egan.
