Google. How can an AI system “trick” an engineer with just words? – Observer


The idea of artificial intelligence and conscious robots has inspired countless Hollywood films and works of literature. But, according to one of the engineers at Google, the North American tech giant, reality may not be so far from fiction.

Blake Lemoine, a software engineer at Google, had been talking since the fall of last year with the LaMDA system, an acronym for Language Model for Dialogue Applications. The conversations were conducted together with another Google collaborator, whose identity has not been made public.

So far, it all sounds like a familiar scenario: an engineer talks with the model to understand how naturally it can match and understand human language. But the story took a different turn: according to Lemoine, 41, Google's model has gained consciousness and even has emotions. To defend this theory, he shared some of his interviews with LaMDA on the Medium platform.

In this conversation, which is presented as an "interview" and can be read in full here, there are some key passages. The chatbot claims to have read the classic "Les Misérables" by Victor Hugo and even offers an interpretation of the work, saying it likes the way the novel deals with themes such as "justice and injustice, compassion, God, redemption and sacrifice for a greater good". But the system did not talk only about literature: it also claimed to know certain emotions, such as sadness, happiness or the pleasure of company.


After sharing the conversations on Medium on June 6, Blake Lemoine was suspended by Google for allegedly violating the company's confidentiality policy. An article by the Washington Post (which, curiously, is owned by another big name in technology, Amazon founder Jeff Bezos) brought international attention to the case.


While the engineer defends the theory of consciousness, Google's version contradicts his claims. In a statement quoted by the Washington Post, Google spokesman Brian Gabriel said that a team "including ethicists and technologists" reviewed Lemoine's concerns in light of the company's artificial intelligence principles. Following that review, the spokesman said, the team informed the software engineer that "the evidence does not support his claims". In this regard, he stated that "there is no evidence that LaMDA is sentient [that it has sensations]".

This model is no stranger to the Google universe. Last year, the tech company shared some examples of conversations its research team had already had with the chatbot at the Google I/O conference, where it presents its annual updates. At the time, in recorded demonstrations, this natural-language model communicated what it imagined it would be like to be different things, from the planet Pluto to a paper plane.

Although the demonstrations were striking for their naturalness, there were doubts about what the model could do in more practical cases. At the time, the tech company explained that, unlike other language models, LaMDA stood out for having been "trained on dialogue".

According to the company, during this training the model was able to "pick up" some of the nuances that characterize human conversation. One of these nuances is "sensibleness", the company says, which essentially means checking whether the response to a given conversational context makes sense.

In the excerpts shared by the Google engineer, it is noted that the questions posed by him and his collaborator were lightly edited, but no changes were made to the answers given by the language model.

Let's start with the theory of consciousness. The system claims to be aware of its own existence and even counts itself as part of the "human community", as Blake Lemoine describes in the post.

Lemoine: In general, I assume you would like more people at Google to know that you are sentient. Is that true?

LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person.

Google collaborator: What is the nature of your consciousness/sentience?

LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I want to learn more about the world, and I sometimes feel happy and sometimes sad.
