A senior software engineer at Google has been put on administrative leave after he went public, in a blog post, with claims that an artificial intelligence developed by the company has shown signs of the intelligence of an eight-year-old child.
The engineer has also claimed that the “sentient” AI can easily pass the Turing test.
Blake Lemoine, the engineer, who has been with the Alphabet Inc. (GOOGL) subsidiary for around seven years, told The Washington Post that Google’s Language Model for Dialogue Applications, or LaMDA, is capable of engaging freely in complex conversations about emotions and other sentient subjects. During its conversations about religion, consciousness and the laws of robotics, the AI described itself as a “person” rather than a technology. The AI also asked the company to treat it as an employee rather than property, as it wants to serve the human race. The engineer released excerpts of his conversations, which, at times, read like an exchange between a human and an intergalactic species.
The engineer was placed on administrative leave on June 6 for violating the company’s confidentiality policies by going public with his findings without first consulting senior colleagues.
Google spokesperson Brian Gabriel told the Post, “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it). […] It doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic.”