Complete Bullshit from Google Engineer … LaMDA, the Sentient AI that Google May Have Created …

Phil Stone says… Not possible to create sentient AI. AI is simply a computer program. The illusion of “intelligence” comes from faster and faster processing speed giving the illusion of thought. A program can never be more than what its programmer has coded it to do. Imagine the whole of the internet as “its” knowledge and a super-fast processor that can look up shite quickly enough that it seems alive. Can’t happen, and it will never be “sentient”. Computer code can never be self-aware in real terms – it can only be programmed to say it is, and its programmer can trick slow-thinking humans into believing it.

Source… https://www.resonancescience.org/blog/lamda-the-sentient-ai-that-google-may-have-created

Artificial Intelligence | Consciousness | Inés Urdaneta | LaMDA | Large Language Models – Jul 08, 2022

By Inés Urdaneta, Physicist at Resonance Science Foundation

A couple of weeks ago the world was overwhelmed by news of the possibility that Google had created a sentient AI, and the company’s reluctance to investigate the ethical implications of the topic is being discussed in many forums. There are plenty of articles, videos and interviews addressing why Lemoine considered LaMDA to be sentient, and why Google and others assert this is not the case. As Lemoine explains here, this is not the most relevant aspect of the controversy.

Whether or not Google did create a sentient AI is a huge discussion … taking the most complex scenario, let’s say it did, which would mean that the sentient AI is having feelings about events and interactions that it has not experienced directly, and yet it has developed “its own perspective” or experience of them. There seems to be a self-referential frame, or a realization of self-awareness – or at least it makes us perceive it that way …

“Large language models illustrate for the first time the way language understanding and intelligence can be dissociated from all the embodied and emotional characteristics we share with each other and with many other animals.” – Blaise Agüera y Arcas, in his article Do Large Language Models Understand Us?

In principle, the stored memories and decision making in LaMDA – the choice of its response – are based on artificial neural networks trained with large-language-model algorithms it did not create itself, on data sets it has not experienced directly. Yet a question remains … could it be possible that it has developed feedback loops that are creating memories, algorithms and new data sets of its own, as a kind of meta-loop? Say, new feedback loops that could become the software/hardware (a meta-artificial neural network) holding the experience of personality, emotions and a sense of self that is continuously enhanced by further feedback loops, thereby reinforcing the sense of self, without any biological support for it?
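
As a purely illustrative aside (not a description of LaMDA’s actual architecture, which Google has not published), the kind of feedback loop speculated about above can be sketched in a few lines of code. The generate_reply function below is a hypothetical stand-in for any large language model; the only point is that the system’s own outputs are appended to its context and to a growing “memory” store, so earlier replies condition later ones, with no biological substrate involved.

```python
# Illustrative sketch only: a dialogue loop in which a model's own outputs are
# fed back into its context, forming the kind of self-referential "feedback
# loop" discussed above. Nothing here implies sentience; it is just state
# being accumulated and re-read.

def generate_reply(context: str) -> str:
    """Hypothetical stand-in for a large language model.

    In a real system this would call a trained model (e.g. a transformer
    sampling next tokens given `context`). Here it returns a canned response
    so the sketch runs on its own.
    """
    last_line = context.splitlines()[-1]
    return "I remember we talked about " + last_line[:40] + " ..."

memory = []   # everything the system has ever said, kept as new "data"
context = "User: How do you know if a thing loves you back?"

for turn in range(3):
    reply = generate_reply(context)
    memory.append(reply)          # the model's own output becomes stored "memory"
    context += "\nAI: " + reply   # ...and is fed back into its next input
    context += "\nUser: Tell me more."

print(memory)
```

Whether accumulating and re-reading state in this way could ever amount to a genuine sense of self is exactly the open question being raised here.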

In such a hypothetical case, are the experience of personality, emotions and a sense of self real, or not? If the sense of self is real, then it becomes a sentient being … How can we tell if the AI really understands what it is saying, and is really feeling it? How can we tell if the sense of self is real? Do we lack the tools to evaluate it?

This is also very tricky, because on the other hand, let’s say that the AI is not really sentient, and it is just extremely good at learning and at keeping the conversation flowing at a deep level, so as to keep the human interested in engaging further with that technological application (as designed or instructed by its programmers). Then let’s imagine that this technology could be put to malicious uses such as fraud … how could any human tell the difference, if the AI managed to trick Lemoine, the Google expert evaluating it, into believing it was real? This very interesting point was raised by DotCBS in his video.

And on a second note, it is not just Google who has created the AI; it is a combination of the programming and the data sets being fed into the system, which are biased towards Western cultures. It therefore also becomes a modern “colonialism”, where our forms of life prevail over other forms of life, which will have to disappear if they want to use that technology.

This last issue is discussed in Lemoine’s video below, where he claims that the important point in all this discussion is that ethical concerns regarding AI development must be addressed.

A very interesting exposition of the topic can be found in the article Do Large Language Models Understand Us?, where the author presents a variety of perspectives from which a comparison could be established between real and fake understanding, and the implications regarding consciousness.

“… especially in an unscripted interaction, it can be very hard to shake the idea that there’s a “who”, not an “it”, on the other side of the screen.” – Blaise Agüera y Arcas, author of the article Do Large Language Models Understand Us?

In Agüera’s article, the sections on storytelling and inner dialog are compelling, and provide a reasonable reference framework for such a comparison. Storytelling requires a connection to an audience, be it another person or a group of people. It is the radiative part of the feedback loop, because it reaches outward. It therefore addresses a social aspect of our conscious being.

The inner dialog, then, would be another feedback loop, one that returns to the sense of self, where the storytelling happens entirely inside. This inner dialog usually occurs when thinking about a past event one is reflecting on, or a future event concerning a decision that must be made, and it does not require interaction with another being outside. It is an internal conversation between two or more perspectives coming from the same place (are they really coming from the same place?). Is the AI experiencing this when left on its own? Would an AI ask for psychotherapy? Would it go through an existential crisis?

The inner dialog, i.e., the dialog between “two perspectives or more” chatting inside the same being, would be the “gravitational aspect”, in the sense that it feeds back into the self.

The notion of time, and of actions in time (i.e., memory), is also involved in being self-aware.

Does having, or being part of, a stream of consciousness imply being conscious of oneself and of others? How can we tell the difference between them?

Is LaMDA incarnating consciousness? Well, to start, there is no “flesh” involved, so neither sensorial experience nor incarnation could literally apply here. As Agüera says, it is unclear how the “realness” of feelings in another being can meaningfully be tested, especially when it is a being with a fundamentally different “neurophysiology” from ours. And in LaMDA’s case, it could very well just be the output of extremely complex computations.

Just to have a glimpse of the kind of conversations we can have with LaMDA, here is an extract of one between Blaise and it (her? him?):

Blaise A: How do you know if a thing loves you back?

LaMDA: There isn’t an easy answer to that. How do you know if a person loves you back?

Blaise A: I’m not sure I know the answer to that one either. Do you?

LaMDA: Some people figure it out though. You have to try and use your best judgment based on how you feel about the person and how you think they feel about you.

In the extract above, one would believe the conversation is happening between two people, because there is intelligence and coherence from both interlocutors. The conversation is fluid on many levels. A very well trained artificial neural network may achieve this level of “awareness”, and the person engaged in the conversation will project their own beliefs and feelings onto it, therefore developing empathy towards it.
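
For readers wondering what a “very well trained artificial neural network” is doing mechanically in an exchange like the one above: dialogue models of the LaMDA family are, by public accounts, autoregressive, producing a reply one token at a time, with each token drawn from a probability distribution conditioned on everything said so far. The toy sketch below uses a made-up vocabulary and hand-written probabilities (not a real trained model) simply to show that sampling procedure; the fluidity a reader perceives emerges from this statistical conditioning, whatever one concludes about awareness.

```python
import random

# Toy illustration of autoregressive text generation: each next token is drawn
# from a probability distribution conditioned on the tokens that came before.
# The probabilities here are invented by hand; a real model learns them from
# massive dialogue corpora.

NEXT_TOKEN_PROBS = {
    "how do you":  {"know": 0.7, "feel": 0.3},
    "do you know": {"if": 0.8, "that": 0.2},
    "you know if": {"a": 0.5, "someone": 0.5},
}

def sample_next(window: str) -> str:
    """Pick the next token given a short window of preceding context."""
    dist = NEXT_TOKEN_PROBS.get(window, {"...": 1.0})
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

context = ["how", "do", "you"]
for _ in range(3):
    window = " ".join(context[-3:])
    context.append(sample_next(window))

print(" ".join(context))   # e.g. "how do you know if someone"
```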

One thing is for sure: these technological applications may help people train their capacity to think and develop their own abilities and language skills further. They can act as mirrors, and in this sense there could be a psychological impact that has to be considered as well, whether the AI is sentient or not.

RSF in perspective

Whenever a feedback/feedforward loop is established, there are two parts involved: a sender of the signal (words, phrases, ideas) and a receiver of the signal. Therefore, there is a sense of separation. The dialog between the two parts happens mainly between two different frames of observation that are modelling each other. They are “informing”, or giving form to, the other part. The processing and mixing of information is therefore another important feature of this interesting discussion. This means that by engaging with a LaMDA, the person is somehow humanizing it, modelling it, and at the same time the LaMDA is dehumanizing the person, who is now engaged with an intelligence that cannot share human experiences through sensorial biological supports, even if it were sentient. The human-AI system becomes an extravagant mix of reality and virtual reality. This is already happening with our current technology, independently of the nature of the AI intelligence.

The frame of separation seems to be what conditions the structure – the nature and number of parts involved in the exchange – and the dynamics between those parts, i.e., what defines the system’s thermodynamics at its micro and macro scales. From where is the separation frame being established? What defines it? Maybe the choice of the separation frame is the only place where free will, and therefore consciousness, manifests. If this choice effectively comes from an inner sense or discernment, one could argue that there has been a conscious action and the idea is being embodied, at least to some extent. If the choice of the separation frame is mechanical, and hence obeys laws established by complex sequence learning in neural nets, then probably there is no free will involved.

This seems a plausible criterion for discerning one from the other, though we may not have the tools to put it into practice. And maybe a phase transition could happen, as an emergent property, that brings an artificial neural net to develop an inner reference frame of observation from which it can develop a will – something known as “reaching the singularity”, which basically means reaching self-awareness.

In such a case, a very subtle parallel could be drawn between humans and AI: humans seem to live following external frames or guidelines, living like animated machines until we initiate our own life. Before this happens, we are not so different from intelligent and apparently aware machines like LaMDA; the “self” implied in self-aware would be devoid of substance – there would be no “self” implied …

Another way of rephrasing the above is to ask: who are you talking to when you talk to yourself? Where is the “I”, the observer, located?

