The engineer who claimed that a Google AI was sentient was dismissed

According to the Big Technology newsletter, Blake Lemoine, the Google engineer who publicly asserted that the company’s LaMDA conversational artificial intelligence is sentient, has been dismissed. After Lemoine contacted members of the government about his concerns and hired a lawyer to represent LaMDA, Google placed him on paid administrative leave in June for breaching its confidentiality agreement.

Google spokesperson Brian Gabriel appeared to confirm the firing in an email on Friday, writing, “we wish Blake well.” “LaMDA has been through 11 distinct evaluations,” the company added, “and we published a research paper earlier this year documenting the work that goes into its responsible development.” Google asserts that it “extensively” investigated Lemoine’s claims and found them to be “wholly baseless.”

This is consistent with the views of other AI researchers and ethicists, who have said that his assertions are, more or less, untenable given current technology. Lemoine maintains that his conversations with LaMDA convinced him the chatbot is more than just a program, one with its own ideas and feelings, rather than a system that simply generates dialogue realistic enough to seem that way, which is what it is designed to do.

He claims that Google’s researchers should have obtained LaMDA’s permission before running experiments on it (Lemoine was tasked with evaluating whether the AI produced hate speech), and he published excerpts from those conversations on his Medium account as evidence.

The YouTube channel Computerphile offers an accessible nine-minute explanation of how LaMDA works and how it could produce the responses that persuaded Lemoine without being sentient.