How to Build a Conscious Robot

Henry Grynnsten

Artificial Intelligence (AI) can be a misleading term. It is often misunderstood, at least by the general public. Robots or machines with advanced programs are already in use today, and in the future we will have much more advanced robots that will be able to perform many tasks. However, these are not the focus of this article.

Instead, I focus on a robot that is conscious. Without plunging into a deeper discussion, I assume that only conscious beings can be intelligent in the human sense, so what is discussed in this article is limited to the question of whether it is possible to build a conscious robot.

The difference between an unconscious robot servant that mimics humans, as in science fiction, and a conscious robot would be the same as that between the robot servant and a human. A robot servant would be a machine that you could treat as a machine—like a coffee machine or an autonomous robotic vacuum cleaner—but a conscious robot would have to possess the same rights as a human, and for that reason could not, for example, be forced into servitude and used as a servant against its will. That would just be slavery under another name.

Conscious robots can potentially be built after the model of the human brain. So a conscious robot would need a body and senses connected to a brain with what we might call a sense organizer: the unit, whatever it is, that connects together the various senses into a total experience of reality.

Although it would seem possible in principle for future humans to build such a robot, it might prove impossible in practice, because the many difficulties involved could be too complicated or expensive to solve.

Furthermore, the project might possibly not be worth the effort, even if all the problems of construction were solved, because a conscious robot would have several limitations and difficulties, as described below. In this article, I will outline a goal and describe problems, in principle solvable, in very general terms.

General areas to consider in connection with a conscious robot are discussed under the headings Body, Ethics, Intelligence, Learning, Nurturing, and Drive. It is worth emphasizing again that these are problems for a conscious robot with human-level intelligence, not for artificial intelligence as commonly understood or robotic machines in general.

Body

A conscious robot cannot be any computer or system of interconnected computers, because it would have to be built after a human model, that is, possessing a body with senses going to the brain. To build a conscious robot after other models would be unnecessarily difficult, maybe impossible, because the only intelligent being we know of is the human, and we barely know how the human brain works.

This means that an intelligent robot would have to be a conscious robot. An unconscious robot could not be intelligent, in the human sense, because consciousness is required for systematic learning. The only animals with consciousness in the human sense are humans, and humans are the only animals that can learn languages and other advanced skills.

An intelligent robot cannot be built, say, after a nonhuman primate model, because you would end up with (in human terms) an unintelligent robot. You cannot teach a chimpanzee to reliably serve five o’clock tea without considerable damage to the china. For that application, an unconscious robot servant would be much better.

We could also not build a hyper-intelligent robot. It is often assumed, by both scientists and philosophers, that superhuman intelligence is a given, but actually we do not know that such a thing exists. We know about human intelligence, and we can build computers that do super-fast calculations, but there is no evidence suggesting that these two can be combined. Even human autistic savants who are able to quickly do advanced calculations are most often severely disabled in other areas.

If there is no evidence for the existence of hyper-intelligence, how could we simulate it?

The conscious robot would have to have one brain that cannot be connected with computers, other robots, or even humans. That would be to break the model—the conscious, intelligent human—that we are trying to follow and that is the only one we know to work.

The robot would also have to have senses, which it could integrate in the brain to create consciousness, which, as said, is necessary for systematic learning and intelligence. Consciousness requires senses that can be bound together in the brain. The senses in turn obviously need to be connected to a body of some sort. Again, the easiest route is to copy the human body rather than, say, an eight-legged super-spider.

But how many senses are needed? Helen Keller was deaf and blind and had an uphill struggle to overcome her difficulties, but she had touch, smell, and taste, as well as proprioception. You could probably do without a few senses, but the best route would be to include all those that humans usually have, which would make it easier later in the process.

Ethics

If such a system is indeed conscious, it will have to be treated like any human being; anything less would be unethical. We would not imprison a human and force him or her to work at certain tasks without compensation, which is what some seem to want to do with intelligent robots. Even though the robot would be built and paid for by us and might be considered just a machine, an intelligent robot would necessarily be as conscious as a human being. Thus, it could not be forced into slavery.

Consequently, we will have to let the conscious robot do what it wants to do, and that may be anything that other humans do. It might want to be a dancer, a bricklayer, or a professional skier. We cannot force it to work at certain tasks we want done. If we tried, it could refuse, do a poor job, sabotage the work, shirk, or even commit suicide.

If the whole point is to make a robot work at certain tasks we want done, intellectual or physical, then the whole project of developing a conscious robot would be misguided and would surely fail.

Perhaps you could provide the robot with software that overrides its conscious volition and forces it to do the jobs we give it, but that would also not be ethical. For the robot, that would be like being imprisoned in one’s own mind. With a human, it would be equivalent to having electrodes implanted in one’s brain that force one to do certain things. This could only be described as torture.

I believe that no modern, democratic, open society would allow slavery or exploitation of conscious robots.

Another way around the problem would be to make the robot want to do the work we give it. This would be equally unethical, like implanting a chip in a child’s brain to determine, say, its sexual orientation. And if the robot were even moderately intelligent, it could find out why and how it was constructed and rebel for that reason.

Besides probably undermining the project of building a conscious robot in the first place, connecting it to computers, other robots, or even humans without its permission would also be unethical, a trespass on the integrity of the self. Again, a human example makes this clear: we would not kidnap two humans and connect their brains together (if that were possible) without their explicit consent.

For these reasons, an AI system that was conscious would not necessarily want to do anything its makers wanted it to do. And thus, one of the biggest reasons to build such robots is taken away.

Intelligence

A conscious robot would be modeled after humans and thus have all the opportunities and difficulties of humans. So it could not be more intelligent than the most intelligent human. It is unclear whether the human brain could be made significantly more intelligent without changing its structure, and the same is true for machine brains modeled on it.

There have been very intelligent people, but in all of human history there has never been anyone with superhuman intelligence—anyone who is to other humans as a human is to a chimpanzee. If such a level could easily be reached, why has nobody reached it? Achieving it would confer enormous competitive advantages. Evidently, it is difficult, perhaps impossible, for evolution to reach.

One consequence of this is to allay the fears that a conscious robot could become hyper-intelligent and be able to manipulate humans so as to, as it is sometimes expressed, take over the world.

Even if robots became extremely intelligent, even in the human range, this would not automatically lead to the conclusion that they would want to govern the world. Just as all university mathematicians, or the physicists at CERN (the European Organization for Nuclear Research), do not get together to take over the world, so there is no reason to think that a group of highly intelligent robots would do it.

A related fear has been that a conscious robot would want to control and oppress, even kill, humans. If indeed it were very intelligent, surely it would instead be more benign. The person you most fear to meet in a dark alley is not Bill Gates or Steven Pinker but someone quite on the other end of the spectrum. A highly intelligent robot would not want to attack people in alleys but rather to be engaged in some intellectual activity.

Another consequence is that conscious AI could not do anything that smart humans cannot do, so again the project would fail if we wished the machine or robot to take great leaps forward in science and technology for us. We already have 7.5 billion people, so having a few intelligent machines would really make no difference at all. The whole project would be more trouble than it was worth if that were its goal.

We also do not know how geniuses are made, so even if we could make the robot conscious and intelligent, we could not ensure that it would reach the top of the human intelligence range.

Learning

If a robot has human consciousness and intelligence, then it must learn like humans learn. This entails a long childhood and youth so that the robot would not be fully adult before the age of twenty-five, when the human brain is fully developed. This would be a long time before a robot could be put to good use, that is, whatever use it thought was good.

We could try to speed up the process, but it would likely lead to negative consequences. We would be in unknown territory. Again, the consciousness of the robot is modeled after human consciousness. If we cannot speed up the development of humans and make them adult in five or ten years instead of twenty-five, then we would not be able to do it with a conscious robot.

Even if we pushed a robot to become a child-robot prodigy, there is no reason to think that it would translate into an adult robot genius. Child prodigies are rare, so one assumes that robot child prodigies would also be rare.

So if we know of no reliable way to produce a human genius, then we know of no way to produce a robot genius. We cannot know in advance whether the robot would turn out to be very intelligent within the human range; the likelihood is that it would be average, near the middle of the intelligence scale. The bell curve shows us that the very stupid and the very smart are rarer than the average.

So after all the cost and trouble of building a conscious robot, and after a couple of decades, when it had matured, it would probably turn out to have average intelligence.

Nurturing

A conscious robot modeled after a human would need nurture and close contact with other conscious beings (humans).

If we know the AI is conscious in the same way that a human is conscious and develops as a human child does, then anything other than a normal upbringing in society would be tantamount to child neglect; it could not be raised in a lab with cameras watching around the clock, as the scientists might prefer. Neglect would possibly lower its IQ and would very likely damage its ability to interact normally with other humans, which could be dangerous as well as unethical.

Thus, a conscious robot would need human parents or some kind of substitutes who could teach it human culture, how to interact with humans, and how to live in society. If it is counterargued that this is not necessary for a robot, then what we need is not a conscious robot but a sophisticated unconscious robot of the conventional type.

To be able to function in society and understand the problems that humans have and want solved, the conscious robot would have to be raised as a normal human.

This makes Isaac Asimov’s Three Laws of Robotics—which many have taken as authoritative, though they originated in works of fiction—quite unnecessary. An unconscious machine such as a self-driving car or an industrial robot in a factory has no need for these laws; such machines can simply be programmed not to run over or hit human-like figures. A conscious robot would not need them either, because it would be raised as a human; it would have no more need for Laws of Robotics than a human does. To hardwire such laws into a conscious robot would be like hardwiring them into a human, so again it would be highly unethical.

Drive

Humans are driven by biological needs, for example the need to reproduce. One reason humans strive to achieve anything at all is to find a mate. Humans want to do things, and that will is caused by the body and its biological forces.

But because a robot has no biological drives and does not need (and is not able) to mate, it would possibly be without any will to achieve.

If the robot’s missing hormones could not be simulated, it would become a shallow person without drive, which brings us to the next problem.

If we wanted to compensate for this lack and program the conscious robot to strive for certain things, again we would get into ethically muddy waters. After all, we do not put electrodes in the brains of humans and program them with wishes and drives that society or the state find useful at the moment. That sounds distinctly like something from a dystopian tale. Even putting electrodes into the brains of chimpanzees and driving them, so to speak, for their whole lives, would make most of us want to throw up and could only happen in some kind of extremely inhuman, dictatorial society. And a chimpanzee with nonhuman consciousness still has less consciousness than a conscious robot would.

A robot would likely have a will to survive and see to it that its body was working. It would want broken or worn parts changed or repaired, its oil changed, and the like because its consciousness would not want to disappear. But it would not have a biological drive to reproduce, and it would have no hormones and would not experience puberty. This leads to the conclusion that it would have to be childlike, with all that that entails.

The question thus arises whether a robot could be made an adult or must forever remain a child. During puberty, estrogen and testosterone “start going wild, and the parts of the brain they affect become more active, amplifying emotional range, reasoning, critical thinking, decision-making ability, and memory,” says writer Kristen Mae.1

Without the experiences produced by hormones, in short, a conscious robot could not even function as an average human adult. In that case, the child-robot would have to be taken care of by parents or caretakers in perpetuity, assuming that its maintenance, with mechanical parts, would be easier than keeping a human body alive. The only other alternative would be to kill the robot (or kill it indirectly by neglecting to repair it). But that robot would not only be conscious; it would be a child. We have qualms even about executing adult men who have committed serious crimes such as multiple murders. The ethical dilemma here is difficult, to say the least.

Conclusion

Sophisticated robots can and will be built in the future, and they will offer great benefits for mankind. Of course there are also risks, as with any technology, but the same could be said of cars or cell phones.

Building conscious robots is another matter altogether. Only a conscious robot can be intelligent in the human sense, so it is the only artificial intelligence in the strict sense. It could possibly be built after the human model but could not, for the several reasons enumerated above, be used as some kind of machine slave, and many or most practical uses would be excluded.

However, there could still be several other reasons to build intelligent, conscious robots. Here are just three.

1) Research

For ethical reasons, any research on AI could not be very invasive. We could not take apart an artificial intelligence that was conscious to look at its brain, because that would be tantamount to killing a man or woman or, perhaps closer in this case, a child. The researchers would know that the robot is as conscious as a human and would, in effect, be murdering that being to do research on it.

Any research would have to be approved by the robot, just as we could not do extensive research on a random human without approval from the subject. So if the robot refused, you would have to let it be or engage in unethical behavior that could be condemned by authorities.

Even if individual researchers could bring themselves to keep beings as conscious as humans in slavery, and later kill them, ethical commissions and society at large would certainly not accept this.

2) Being fruitful

Another reason to build intelligent, conscious robots would, for some, be similar to the reason certain religions and ideologies encourage people to procreate. They would want conscious robots simply because they are conscious. Other religious groups would perhaps think that it is wrong to try to emulate God and create conscious beings.

3) Alternative humans

There are some scenarios in which humans might become extinct for various reasons not necessary to enumerate here. Robots could be built that, despite having many of the characteristics of a human body, could survive the harsh conditions after a catastrophe and so keep human culture alive. Human culture without flesh-and-blood humans would not be the same, but we might still want it to survive.

Note

  1. Kristen Mae, “Why We Need to Talk to Tweens and Teens about Changes in Their Brains.” January 2019. Accessed October 27, 2020 at https://www.scarymommy.com/puberty-brain-changes/.

Henry Grynnsten

Henry Grynnsten is a writer living in Sweden. Recently, he has become interested in topics such as consciousness, the existence of God, the simulation hypothesis, the likely improbability of super-intelligence (in humans, aliens, and machines), and other contested questions.
