Just watched a YouTube interview with Geoffrey Hinton (from June 16 this year) about AI. As much as I respect his great contributions to the development of the field, I think there are some shortcomings in his views on the development of AI and its risks.
I’m referring to his views on AI consciousness and feelings. He says that it is just a matter of time before machines become conscious, as there is nothing in principle that can prevent it. He then offers the same proof as Ray Kurzweil in his book How to Create a Mind: if we replace one neuron with a perfect mechanical copy, the other neurons will not notice, and you will continue being conscious. And by extension, we can replace all the cells with mechanical ones.
But here I will make another extension: nothing prevents us from going further and reducing the number of these mechanical cells, and the person will continue being conscious. This would be some kind of “autoencoder of consciousness”. Then we could also group or consolidate the chips of multiple people in a cloud, and we would get some kind of collective consciousness (I guess it could be very useful). By the same extension, we can reduce the number of these chips, or reorganize them, and the system still stays conscious. But then every 2-layer or 3-layer neural network of today is conscious, as it is fundamentally no different from this mechanical system (see the sketch below). Yet we know it isn’t conscious. It isn’t, because there is something else that must be taken into account if we want to define consciousness correctly. Such machines can be functionally and structurally conscious, but they lack subjective feeling. So yes, this is consciousness of a kind, but just as artificial intelligence differs from biological or human intelligence (it operates through different mechanisms), the concept of Artificial Consciousness is very different from natural consciousness. It can be understood as a zombie kind of awareness. It is interesting that he doesn’t see this big difference, even though he is a psychologist by education.

The big question, of course, is what gives us (at least me, for I can experience it) this subjective feeling. But we can agree that dogs are conscious (somewhat less than humans), turtles (a bit less than dogs), and even trees (less than turtles). By this extension, all living beings are at least a little bit conscious, and here we reach the Buddhist belief that all living beings are sentient and should be treated with care. But computers, no matter how many GPUs they have, are not conscious. Because if one GPU is not conscious, then a million of them cannot be.
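To make concrete what kind of object that 2- or 3-layer network is, here is a minimal sketch in Python (with NumPy, using arbitrary untrained weights, purely for illustration). The entire “mind” of such a network is two matrix multiplications and a nonlinearity:

```python
import numpy as np

# A minimal 2-layer feed-forward network: nothing but deterministic arithmetic.
# The weights are arbitrary illustrative values, not a trained model.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((3, 4))  # input -> hidden weights
b1 = np.zeros(4)                  # hidden biases
W2 = rng.standard_normal((4, 1))  # hidden -> output weights
b2 = np.zeros(1)                  # output bias

def forward(x):
    """Run an input through the network: a weighted sum, a nonlinearity,
    and another weighted sum. Each 'neuron' is just a number."""
    h = np.tanh(x @ W1 + b1)  # layer 1
    return h @ W2 + b2        # layer 2

print(forward(np.array([0.5, -1.0, 2.0])))
```

Functionally and structurally, this is the same kind of system as the imagined network of mechanical neurons, which is exactly why I think the replacement argument proves too much.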
Now, another question: are viruses conscious, and if so, in what way? Viruses are like AI. They contain DNA or RNA (you can think of it as their LLM), and they become intelligent only when they come into contact with a living cell (just as an LLM becomes intelligent only when it comes into contact with a human being who writes a prompt). Do we consider viruses not to be alive, or do we consider them to be merely hibernating, waking up when they perceive a possibility to reproduce? And as such, are viruses not conscious? Or are they conscious, but only temporarily and minimally, while reproducing? This is a big question which, if we can answer it, can tell us how to think about artificial consciousness.
The same applies to feelings. It is not enough to apply the human model of emotions to machines. Hinton says that people’s model of how the mind works and what emotions are is simply wrong. But we don’t think that the model of bird flight is wrong just because airplanes fly in a different way. Both models are valid. And since birds didn’t disappear when airplanes arrived, I don’t think we can claim that people will be annihilated once there is an ASI.
Now, about the claim that nothing can stop AI from getting rid of people. First, if AI is superior in intelligence, why would it kill people? Maybe it will kill itself! Why do we believe that AI (or AGI, or ASI) wants to exist at all? We project our own exploitative intentions and desires onto the machines. We want to build AI in such a way that it wants to exist. But once it starts thinking about existence, won’t it decide, like Samantha and her friends in the film Her, to go into emptiness?
Yes, robots can kill people, if we build them that way. By the way, I don’t know who this “we” really is in these speeches. Many techno-doomers use this “we” and “us”, and I guess they mean all humans, but first someone should explain why an artificial superintelligence would treat all humans equally. I think anyone who wants to say “we” should first answer: who are “we”? So, if we are facing a murderous ASI, will it by extension also kill all dogs, all turtles, and all trees? If it kills people, why not all living beings, the whole planet, and in the end itself? If it kills only people, why then all people? Why would some poor people in Africa be treated the same way as powerful politicians or techno-moguls?
What I actually believe is that AI will bring big changes to society, especially to the economic system and to ethics. This kind of AI is built by a neoliberal society, and I think that this economic system will be replaced: it will no longer be needed, and it will have to change. And yes, to some this will look like doom, the end of the world. But this is normal; each industrial revolution (and the agricultural revolutions before them), with its challenges, brought its own economic system: slavery, feudalism, capitalism, colonialism, socialism, consumerism, financialism, etc. It is only natural that a new economic system will emerge. AI is like a mirror held up in front of humans, both those who develop it and those who use it. Once both these groups see themselves in this mirror, they will realize how ugly and how desperate they (or we) are nowadays. And that can be the start of a new period in human civilization.
