Facebook’s AI Bugs Out in Ugly Way, Labels Black Men ‘Primates’

by Western Journal
September 5, 2021 at 10:49 pm
in Commentary

FILE PHOTO: A 3D-printed Facebook logo is seen placed on a keyboard in this illustration taken March 25, 2020. (Dado Ruvic/Reuters)


Artificial intelligence has become such an integral part of our online experience that we barely think about it.

It gives marketers the ability to gather data about an individual’s activities, purchases, opinions and interests. That information is then used to predict what products and services will appeal to him or her.

This technology has come a long way, but it is far from perfect.

The Daily Mail released a video on Facebook last June that included clips of black men clashing with white civilians and police officers.

Facebook users who recently watched the video were alarmed when an automatic prompt asked them if they would like to “keep seeing videos about Primates,” according to The New York Times.

The outlet reported that there had been no references to monkeys in the video and that Facebook was at a loss as to why such a prompt would appear.

The company immediately disabled the “artificial intelligence-powered feature” responsible for the prompt.

“As we have said, while we have made improvements to our AI, we know it’s not perfect, and we have more progress to make. We apologize to anyone who may have seen these offensive recommendations,” Facebook spokeswoman Dani Lever said.

The company said the error was “unacceptable” and that it is conducting an investigation to “prevent this from happening again.”

This incident is not the first time a Big Tech company has been called out for faulty AI.

The Times cited a similar hiccup involving Google Photos in 2015. Several images of black people were labeled as “gorillas.” The company issued an apology and said it would fix the problem.

Two years later, Wired determined that all Google had done to address the issue was to censor the words “gorilla,” “chimp,” “chimpanzee” and “monkey” from searches.

According to the Times, AI is especially suspect in the area of facial recognition technology.

In 2018, the outlet detailed a study on facial recognition conducted by a researcher at the MIT Media Lab. The project found that “when the person in the photo is a white man, the software is right 99 percent of the time.

“But the darker the skin, the more errors arise — up to nearly 35 percent for images of darker skinned women.”

“These disparate results … show how some of the biases in the real world can seep into artificial intelligence, the computer systems that inform facial recognition.”

Are real-world racial “biases” somehow seeping into AI? Or is it just a case of the system having more difficulty “seeing” darker images? I think we know the answer.

Regardless, it is a little concerning that Facebook, the master of the universe and the gatekeeper of what the public can and cannot see, uses AI that apparently can’t tell the difference between a black person and an ape.

This article appeared originally on The Western Journal.

Tags: Big Tech, Facebook, social media, technology, U.S. News
