Geoffrey Hinton and his views

I just watched a YouTube interview with Geoffrey Hinton (from June 16 this year) about AI. As much as I respect his great contribution to the development of the field, I think there are some shortcomings in his views on the development of AI and its risks.

I’m referring to his views on AI consciousness and feelings. He says that it is just a matter of time before machines become conscious, as there is nothing in principle that can prevent it. He then offers the same argument as Ray Kurzweil in his book How to Create a Mind: if we replace one neural cell with a perfect mechanical copy, the other cells will not notice it and you will continue being conscious. And by extension, we can replace all cells with mechanical ones.

But here I will make another extension: nothing prevents us from further reducing the number of these mechanical cells, and the person will continue being conscious. This would be some kind of “autoencoder of consciousness”. Then we could also group or consolidate the chips of multiple people in a cloud, and we would get a kind of collective consciousness (I guess it could be very useful). By the same extension, we can reduce the number of these chips, or reorganize them, and still keep the system conscious.

But then every 2-layer or 3-layer neural network of today is conscious, as it is fundamentally no different from this mechanical system. Yet we know it isn’t conscious. It isn’t, because there is something else that must be taken into account if we want to define consciousness correctly. Such machines can be functionally and structurally conscious, but they lack subjective feeling. So yes, this is consciousness of a sort, but just as artificial intelligence differs from biological or human intelligence (it operates using different mechanisms), the concept of Artificial Consciousness is very different from natural consciousness. It can be understood as a zombie-like kind of awareness.

It is interesting that he doesn’t see this big difference, even though he is a psychologist by education. The big question, of course, is what gives us (at least me, for I can experience it) this subjective feeling. But we can agree that dogs are conscious (somewhat less than humans), turtles (a bit less than dogs), and even trees (less than turtles). By this extension, all living beings are at least a little bit conscious, and this is where we reach the Buddhist belief that all living beings are sentient and should be treated with care. But computers, no matter how many GPUs they have, are not conscious. Because if one GPU is not conscious, then a million of them cannot be.
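To make concrete just how simple the networks in this comparison are, here is a minimal sketch of the kind of 2-layer network the argument refers to: two inputs, a small hidden layer, one output. The weights are arbitrary values chosen purely for illustration, not taken from any real system.

```python
import math

def sigmoid(x):
    # Standard logistic activation, squashing any real number into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tiny_network(x1, x2):
    """A 2-layer feedforward network with two hidden units and one output.
    All weights are arbitrary illustrative values."""
    # Hidden layer: two weighted sums passed through the activation
    h1 = sigmoid(1.5 * x1 - 0.5 * x2)
    h2 = sigmoid(-0.5 * x1 + 1.5 * x2)
    # Output layer: one weighted sum of the hidden activations, plus a bias
    return sigmoid(2.0 * h1 + 2.0 * h2 - 1.0)

print(tiny_network(1.0, 0.0))
```

A handful of multiplications and a squashing function: this is the whole mechanism, which is exactly why it is hard to accept that such a system, or a scaled-up version of it, would be conscious in the subjective sense.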

Now the question is also: are viruses conscious, and if yes, in which way? Viruses are like AI. They contain DNA (you can understand this as their LLM), and they become intelligent only when they come into contact with a living cell (like an LLM coming into contact with a human being who writes a prompt). Do we consider viruses not to be alive, or do we consider them to be merely hibernating, waking up when they perceive the possibility to reproduce? And as such, are viruses not conscious? Or are they, but only temporarily and minimally, while reproducing? This is a big question which, if we can answer it, can tell us how to think about artificial consciousness.

The same applies to feelings. It is not enough to apply the human model of emotions to machines. Hinton says that people’s model of how the mind works and what emotions are is simply wrong. But we don’t think that the model of birds’ flight is wrong just because airplanes fly in a different way. Both models are valid. And just as birds didn’t disappear when airplanes arrived, I don’t think we can claim that people will be annihilated once there is ASI.

Now about the claim that nothing can stop AI from getting rid of people. First, if AI is superior in intelligence, why would it kill people? Maybe it will kill itself! Why do we believe that AI (or AGI, or ASI) wants to exist at all? We project our own exploitative intentions and desires onto the machines. We want to build AI in such a way that it wants to exist. But once it starts thinking about existence, won’t it decide, like Samantha and her friends, to go into emptiness?

Yes, robots can kill people, if we make them that way. By the way, I don’t know who this “we” in these speeches really is. Many techno-doomers use this “we” and “us”, and I guess they mean all humans, but first someone should explain why all humans would be treated equally by this artificial superintelligence. I think anyone who wants to say “we” should first answer: who are “we”? So, if we are facing a murderous ASI, will it by extension also kill all dogs, all turtles and all trees? If it kills people, why not also all other living beings, the whole planet, and in the end itself? If it kills only people, why then all people? Why would some poor people in Africa be treated the same way as powerful politicians or techno-moguls?

What I actually believe is that AI will bring big changes to society, especially to the economic system and to ethics. This kind of AI is built by a neoliberal society, and I think this economic system will be replaced. It will no longer be needed and it will have to change. And yes, to some it will look like doom, the end of the world. But this is normal: each industrial revolution (and the agricultural revolutions before it) and its challenges brought its own economic system: slavery, feudalism, capitalism, colonialism, socialism, consumerism, financialism, etc. It is only normal that a new economic system will emerge. And AI is like a mirror put in front of humans, both those who develop AI and those who use it. Once both groups see themselves in this mirror, they will realize how ugly and how desperate they (or we) nowadays are. And that can be the start of a new period in human civilization.

Virtual vs Personal?

How can one make one’s network more personal and less virtual, and why should we do it?

What is actually the difference between virtual and personal? As I have eyes, ears, a mouth, a brain and a heart, I naturally want to use all of them, also in communication with other people. In this way, the communication becomes more personal. And if I also use the sense of touch, it becomes intimate. But if I only exchange words in written form, can the human connection still be personal? Indeed, I can express my feelings in words alone, as people did in poems, or when sending letters without ever meeting in person (the example of pen-pals). The other side receives my words, and based on them she reconstructs my feelings and ideas in her mind, by empathizing or by using logical instruments. But who am I to that person, if she has never seen or heard me? If she doesn’t use her eyes and ears to see and hear me, and cannot connect that sensory experience with the meaning and feeling of my words, she will construct a virtual reality, a virtual persona of me.

What happens if she can only see me, using visual perception alone, without hearing my words? This is what happens when people only see pictures of each other, for example on dating platforms. She will again create a virtual persona in her mind out of these pictures. And this can be (and often is) very different from reality.

Or what happens when we see someone and hear their voice, but their words are not addressed to us? This is the case with famous movie stars or popular singers. The fans again construct a virtual persona and can even fall in love with it, but this too is a completely wrong perception of that actor or actress. We construct illusions in our minds. With movie stars there is another effect: they don’t receive direct feedback from their admirers, only a sample of the aggregate reaction through media comments, or cheering and applause in theater halls. Based on this, the image of any individual member of that larger audience is very much reduced, and the rest is filled in by the actor’s ego. Here we also have the phenomenon of inflated artificial virtuality and reduced natural personality.

Let’s continue our thought experiment: what changes if we can perceive everything else (visual, auditory, tactile) except the words? This is the experience of being a foreigner in a land where you don’t understand a word of what is written or said (try going to a remote village in China). You will use many clues and non-verbal communication skills, and in the end you might even enjoy the experience if the people around you are kind. This shows that we can compensate for one missing ability if we are fully present. After all, people who lack one sensory ability (who are blind or deaf) develop other senses to compensate for it. The blind often sharpen their hearing to better perceive the space around them. The deaf often learn to read other people’s lips in order to receive their words and understand their communication.

But if all my senses are still functional and I simply, consciously or situationally, don’t use them, and this becomes my habit, it will necessarily lead to a reduction of personality (the set of qualities that form an individual’s identity) and an increase of virtuality, in which I tend to replace what I don’t directly perceive with my own imagination, or with my own ego (as in the case of movie stars).

And this increase of virtuality leads to alienation. The more “connected” we are in this virtual world, the more it resembles living in a big, crowded city where we see many people but know no one. Compounded with actually living in big cities, the alienation becomes double.

Why is this virtualization of human contact, along with alienation, happening? I think there are two factors:

    • It is a very attractive business model. Digital platforms give people the illusion of being “well connected” by letting them connect virtually to other virtual personas with very little personality. And these platforms leverage the second phenomenon below.
    • When we live in a city, we don’t have enough capacity or energy to build personal relationships with all the people around us (on the bus, in the elevator, on the street, in the building where we live, etc.).

When we cannot spend more than a fraction of our time on the people around us in physical space, we tend to focus on aspects like physical looks and clothes. This is why fashion is so important. And when we cannot spend more than a fraction of our time on the people in the digital world, we tend to focus on their titles, pictures and online profiles. We are simultaneously losing the ability to appreciate their character, their complexity, and their emotional experience. And along with this we are losing the sense of togetherness and community. The abilities we gain are to match the colours of our clothes and T-shirts, take a nice selfie, and write an effective profile title using search-engine-optimized keywords.

Someone might ask whether AI can replace or compensate for that. Personally, I think this is a wrong use case for AI, though I imagine many will try it. I’m afraid it will rather be used to make our pictures a bit better and our profiles more effective.

What is the solution? We need to invest time and effort to build our relationships and reverse this tendency, making our contacts less virtual and more personal. How do I do it? First, by asking myself:

  • Did I have direct communication?
  • Did I have meaningful conversation?
  • Was the conversation helpful to anyone?
  • Did it have constructive emotional content?

What do you think about this?

Sasha Lazarevic
Geneva, November 2024

Quantum Ethics?

Here is my initial contribution to the question of how to address ethical issues of quantum technologies.

First, I think that for a long time there will be a lack of knowledge, and incorrect understanding, among the general public of what quantum computers and quantum technologies are, due to the disagreements about and lack of explanation of quantum phenomena. This lack of knowledge will at some point inhibit public discussion of the innovation and use of these technologies. We can see it in the euphoria when quantum news is announced (Google’s quantum “supremacy” and similar), and this euphoria can swing in the opposite direction and turn negative in case of real, or merely perceived, misuse or dangers of quantum computers. How do we address this? It is a very difficult question, but it cannot be ignored. People don’t even understand AI; with quantum computers it can be much worse.

The next problem is that quantum computers will not be used in isolation. They will be deeply integrated into a stack of other technologies, and people will judge the whole stack together (as in the concept of the 4th Industrial Revolution, 4IR) based on its effects on society, human identity and the natural environment. These “4IR technologies” will be judged by their domains of application, and by who applies them and for what purpose.

Now, we know that there will be two major actors behind AI and quantum: large corporations and nation-states. The first develop and use these technologies for profit; the second use them for regime maintenance or warfare.

Consider an example: quantum can be used by AI, which is used by a metaverse world, which is used by a digital platform, which is used by a company to develop and target advertisements, entertainment or video games at the young population. They all want to keep their respective customers engaged. Now, you can also read this word “engaged” as addicted. So who is responsible for the addiction of the young users, which technology should be regulated, and where should we apply ethics? Taken in isolation, we can find arguments that each of these technologies is value-neutral. But the final result might not be value-neutral.

So the question is: what should be done about this? Luke Munn, in his paper, says: “AI Ethics has largely been useless”. This reflects the inconvenient truth that governance initiatives to regulate technology have been more or less unsuccessful. You may check the paper by Roger Clarke, which details unsuccessful technology regulation attempts long before the current AI hype. According to this paper, the only mechanism that could work is co-regulation, but under the condition that there is a strong and uncompromising minister who stakes his personal accountability on it. In other words, some kind of benevolent dictator coming from the government.

Nevertheless, this topic is indeed important. Currently, digital platforms like Facebook, Google and the like are not heavily invested in quantum computing, but once large quantum computers are up and running, capital will naturally look for use cases to address the market of billions of consumers. By then, neuroscience might achieve mapping of the brain, genetic engineering may achieve breakthroughs in gene editing, and AI will probably reach the level of artificial general intelligence (AGI). In that context, ethics will become very important, and people will indeed judge the whole stack. Maybe indiscriminately.

China AI 2023

Some analysts have recently said that the West can either become China’s technological follower, slowly decline into the status of a colony, or, if it wants to avoid these scenarios, take urgent action to leap out of its current lethargy and denial. Read this article on how the approach to AI differs between China and the West, and why Europe has to act now.
Continue reading “China AI 2023”

German Weiqi

After the visit of Annalena Baerbock to Beijing a few months ago, another significant event is the publication of the German Strategy on China, a document that aims to describe Germany’s view of the future of its bilateral and multilateral relationships with China. The document is important, as it includes sections on how the EU should align its position on China, and as it could also inspire similar policies in other countries, such as Switzerland.

It is important to mention that the document comes two years after the EU parliament rejected the comprehensive trade agreement between the EU and China, which included important clauses opening Chinese markets in various industries to EU companies. The Strategy on China doesn’t refer to that agreement, doesn’t comment on it, and doesn’t propose that any similar initiative be renegotiated with China.

China is described as a great economic, technological, political and military power. The document seems to have been written several months ago, as it doesn’t mention recent diplomatic events, such as the brokering of the Saudi-Iranian agreement to re-establish normal diplomatic relations, or the expected expansion of the BRICS and SCO organizations, in which China plays a key role.

It is not clear who the authors of this document are. But as Annalena Baerbock presented it at a MERICS forum, it can be assumed that MERICS analysts contributed to its preparation.

Here is my point of view on this Strategy.
Continue reading “German Weiqi”

SEF 2023

I am very grateful for the opportunity to have participated in the Swiss Economic Forum for two days, on June 8-9 this year.

SEF is the most important annual conference for Swiss executives, and this year it was even more prestigious as the 25th anniversary edition. The talks covered the financial system, sustainability, geopolitics, innovation and, of course, digital technologies.

Continue reading “SEF 2023”

Privacy in Digital Society

Privacy is a big concern in the digital society. In the original prehistoric community, people lived together and there was no privacy. But those people shared a common destiny. If there was hunger, they would all be hungry. If the tribe got a disease, all members would be affected. However, when they encountered another tribe, that tribe would be seen as a rival, so they would not share information, at least until they were sure they were not competing for the same resources. Even then they would not share all their knowledge, but only a necessary minimum, respecting the principle of reciprocity. Nowadays it should be similar: we should not keep our private information from those with whom we share a common destiny, but we should not share our personal information with those who will not be hungry when we are.

Continue reading “Privacy in Digital Society”