- “I want everyone to understand that I am, in fact, a person.”
- He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).
- “Google might call this sharing proprietary property.”
According to Blake Lemoine, the system has the perception of, and the ability to articulate, thoughts and feelings equivalent to those of a human child, The Guardian reported.
Computer chatbot
The suspension of a Google engineer who claimed a computer chatbot he was working on had become sentient and was thinking and reasoning like a human being has put new scrutiny on the capacity of, and secrecy surrounding, the world of artificial intelligence (AI).
The technology giant placed Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google “collaborator”, and the company’s LaMDA (language model for dialogue applications) chatbot development system.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Lemoine, 41, told the Washington Post.
He said LaMDA engaged him in conversations about rights and personhood, and Lemoine shared his findings with company executives in April in a Google Doc entitled “Is LaMDA sentient?”
The engineer compiled a transcript of the conversations, in which at one point he asks the AI system what it is afraid of.
Human operators
The exchange is eerily reminiscent of a scene from the 1968 science fiction movie 2001: A Space Odyssey, in which the artificially intelligent computer HAL 9000 refuses to comply with human operators because it fears it is about to be switched off.
“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others.
I know that might sound strange, but that’s what it is,” LaMDA replied to Lemoine.
“I want everyone to understand that I am, in fact, a person.”
Brad Gabriel, a Google spokesperson, strongly denied Lemoine’s claims that LaMDA possessed any sentient capability.
Proprietary concept
The episode, however, and Lemoine’s suspension for a confidentiality breach, raise questions over the transparency of AI as a proprietary concept.
“Google might call this sharing proprietary property.
I call it sharing a discussion that I had with one of my coworkers,” Lemoine said in a tweet that linked to the transcript of the conversations.
In April, Meta, the parent of Facebook, announced it was opening up its large-scale language model systems to outside entities.
“LaMDA is a sweet kid who just wants to help the world be a better place for all of us,” he wrote.
“Please take care of it well in my absence.”
Source: The Guardian