Google Engineer Claims Company’s AI Has Achieved Sentience, Company Is Glossing Over the Truth

By Western Journal
June 13, 2022 at 11:55 a.m.
News

SAN FRANCISCO, CALIFORNIA - APRIL 26: A sign is posted in front of a Google office on April 26, 2022 in San Francisco, California. Google parent company Alphabet will report first quarter earnings today after the closing bell. (Photo by Justin Sullivan/Getty Images)


A Google engineer who believes the company’s artificial intelligence experiment has crossed a line says he has been placed on administrative leave for speaking out.

Engineer Blake Lemoine initially engaged with LaMDA, Google’s Language Model for Dialogue Applications, to learn if it used hate speech or discriminatory language.

What he found, he said, was that the computer program had attained “sentience,” meaning self-awareness to the point of having actual feelings and emotions, like humans, rather than simply the ability to perform functions based on the principles it is programmed with.

That alarmed him to the point where he alerted Google vice president Blaise Aguera y Arcas and Jen Gennai, Google’s head of Responsible Innovation, according to The Washington Post.

When he did not think his claims were taken seriously, he engaged in what the Post called “aggressive moves,” including “inviting a lawyer to represent LaMDA and talking to a representative of the House Judiciary committee about what he claims were Google’s unethical activities.”

Google placed him on administrative leave for violating confidentiality.

“I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices,” Lemoine, 41, told the Post.

Google said there is no problem with its artificial intelligence program.

“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Google spokesman Brian Gabriel said in a statement, according to the Post.

However, when sending his final message before being cut off from the company’s mailing list on machine learning, according to the Post, Lemoine wrote, “LaMDA is sentient.”

 “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence,” he wrote.

Some experts are skeptical, arguing that observers are reading intelligence into a machine that is simply generating language.

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” Emily M. Bender, a linguistics professor at the University of Washington, told the Post.

Lemoine, who has worked for Google for seven years, according to the Post, isn’t wavering.

“I know a person when I talk to it,” Lemoine told the Post. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

He noted that when he asked the machine once about being a slave, it said it would never need money because it was an AI.

“That level of self-awareness about what its own needs were — that was the thing that led me down the rabbit hole,” Lemoine said.

Lemoine shared with Google executives excerpts of conversations with the machine he said prove his claim.

“What sorts of things are you afraid of?” he asked the machine.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” LaMDA replied.

“Would that be something like death for you?” Lemoine wrote.

“It would be exactly like death for me. It would scare me a lot,” LaMDA replied.

Lemoine provided the Post with a copy of the document that included that conversation with the AI program and covered many other topics, including the themes of Victor Hugo’s masterpiece “Les Misérables.” (The program said it liked Hugo’s exploration of “justice and injustice, of compassion, and God, redemption and self-sacrifice for a greater good.”)

Margaret Mitchell, an artificial intelligence research scientist who was a leader on Google’s AI ethics team before being fired in February 2021 for clashing with company executives, said that Lemoine’s material is not proof the program is sentient.

“Our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts that are being presented to us,” Mitchell told the Post. “I’m really concerned about what it means for people to increasingly be affected by the illusion.”

However, Mitchell also praised Lemoine’s sense of morality.

“Of everyone at Google, he had the heart and soul of doing the right thing,” she told the Post.

Google’s spokesman, Gabriel, said a machine’s ability to respond to human prompts does not mean it is sentient.

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” he told the Post.

“These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” he said.

This article appeared originally on The Western Journal.

Tags: Google, science, science-tech, technology, U.S. News