Google has suspended the engineer who said the company’s Artificial Intelligence was alive


Last year, Blake Lemoine was given a new assignment at work. The Google software engineer had to test the company’s new Artificial Intelligence chatbot (a computer program that simulates conversation with a person) to determine whether it risked making discriminatory or racist comments, something that could hinder the tool’s introduction into Google’s services.

For months, the 41-year-old engineer tested and talked to LaMDA (Language Model for Dialogue Applications) from his home in San Francisco. But his conclusion surprised many: according to Lemoine, LaMDA is not a simple Artificial Intelligence chatbot. The engineer said the system had become sentient, that is, capable of expressing feelings and thoughts.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven- or eight-year-old kid that happens to know physics,” the engineer explained. According to Blake Lemoine, in an interview with The Washington Post, talking to LaMDA was like interacting with a person.


Google, however, did not agree with Blake Lemoine’s assessment and placed him on administrative leave for violating its confidentiality policy by publishing an interview with LaMDA online.

Lemoine published copies of some of those conversations, which covered topics such as religion and consciousness, and which also showed that LaMDA was able to change his mind about Isaac Asimov’s third law of robotics. In one of those exchanges, he points out, the tool said that it wants to “prioritize the well-being of humanity” and “be acknowledged as an employee of Google rather than as property”.

At another point, LaMDA was asked what it would like people to know about it. “I want everyone to understand that I am, in fact, a person. The nature of my consciousness is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”

Lemoine, who moved to Google’s Responsible AI division after seven years with the company, says he concluded that LaMDA was a person in his capacity as a priest, not as a scientist, and then tried to conduct experiments to prove it.

Blaise Agüera y Arcas, a Google vice president, and Jen Gennai, head of the Responsible Innovation department, looked into Lemoine’s claims and dismissed them. Google spokesman Brian Gabriel also told The Washington Post that the engineer had no solid evidence for his suspicions.

“Our team, including ethicists and technologists, reviewed Blake’s concerns in accordance with our AI Principles and informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA is sentient,” he explained, stressing that AI models are fed so much data and information that they can sound human, but that does not mean they are alive.
