
Police departments are turning to AI-powered facial recognition technology – leading to a rise in WRONGFUL ARRESTS – NaturalNews.com

18/01/2025

  • Police departments are increasingly turning to AI-powered facial recognition technology to identify and arrest suspects.
  • Many departments are making these arrests without doing additional basic police work to verify evidence, leading to a rise in wrongful arrests.
  • There have been at least eight documented cases of wrongful arrests made due to facial recognition errors.
  • Many police departments are not transparent about their use of facial recognition tech, strongly suggesting that the true number of AI-driven wrongful arrests is much higher.
  • AI facial recognition algorithms struggle with low-quality images, poor lighting and unusual angles, and exhibit significant biases, particularly against women and people of color with darker skin tones.

More and more police departments are turning to artificial intelligence-powered facial recognition software – leading to a rise in false arrests.

An investigation found that police departments nationwide are relying on facial recognition software to identify and arrest suspects, often without corroborating evidence. Shockingly, most departments are not required to disclose or document their use of this technology.

Among 23 departments with available records, 15 across 12 states made arrests based solely on AI matches, frequently violating their own internal policies that mandate additional evidence before taking such drastic action.

From the information gathered, there have been eight documented cases of wrongful arrests in the U.S. based on AI facial recognition matches, two of which were previously unreported. All charges against the individuals were eventually dismissed. The investigation notes that basic police work, such as verifying alibis and comparing physical evidence, could easily have prevented these wrongful arrests.

The report further warns that data on wrongful arrests from facial recognition technology is very sparse and the true scale of the problem remains unknown, as most departments lack transparency and refuse to disclose their use of AI facial recognition tech.

Algorithms used by AI facial recognition tech failing to detect actual criminals

Facial recognition software uses algorithms to analyze and compare facial features in photographs or video footage. The technology maps unique characteristics – such as the distance between the eyes, the shape of the jawline and the contours of the nose – to create a digital “faceprint.” This faceprint is then compared against a database of images to find potential matches.
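
To make the matching step above concrete, here is a minimal Python sketch of nearest-neighbor faceprint lookup. Everything in it is an assumption for illustration: the 128-dimensional random vectors stand in for real model embeddings, the gallery records are invented, and the 0.6 distance threshold is an arbitrary tuning value, not one taken from any actual product.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "faceprints": fixed-length embedding vectors such as a
# face-recognition model might produce (real systems use roughly
# 128-512 dimensions). These random vectors are stand-ins.
gallery = {
    "record_1042": rng.random(128),
    "record_2077": rng.random(128),
}

def best_match(probe, database, threshold=0.6):
    """Return (name, distance) of the closest faceprint, or None.

    Euclidean distance over embeddings; the 0.6 threshold is an
    assumed tuning knob, not a value from any specific vendor.
    """
    name, dist = min(
        ((n, float(np.linalg.norm(probe - emb))) for n, emb in database.items()),
        key=lambda pair: pair[1],
    )
    return (name, dist) if dist <= threshold else None

probe = rng.random(128)  # e.g. an embedding extracted from CCTV footage
print(best_match(probe, gallery))
```

Note what the sketch makes plain: the system always has a closest record, so if the actual perpetrator is not enrolled in the database, the "best match" is necessarily someone else. Tightening the threshold reduces such false matches but increases misses, which is the trade-off behind the errors described next.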

While this might sound like a foolproof system, the reality is far more complicated. Facial recognition algorithms are far from perfect. They can struggle with low-quality images, poor lighting or unusual angles.

More troublingly, these systems often exhibit racial and gender biases. Studies have shown that facial recognition software is significantly more likely to misidentify women and people of color, particularly those with darker skin tones.
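
Such studies typically quantify bias as a per-group false match rate: the share of different-person image pairs that the system wrongly declares a match. The sketch below shows that arithmetic on invented data; the group labels and outcomes are hypothetical, though large audits such as NIST's face recognition vendor tests compute essentially this statistic over millions of pairs.

```python
# Hypothetical evaluation log: (demographic_group, same_person, system_matched).
# All entries are invented for illustration only.
trials = [
    ("group_a", False, False), ("group_a", False, False), ("group_a", False, True),
    ("group_b", False, True),  ("group_b", False, True),  ("group_b", False, False),
    ("group_a", True,  True),  ("group_b", True,  True),
]

def false_match_rate(trials, group):
    """Share of different-person pairs the system wrongly declared a match."""
    impostor = [matched for g, same, matched in trials if g == group and not same]
    return sum(impostor) / len(impostor)

for group in ("group_a", "group_b"):
    print(group, false_match_rate(trials, group))
```

A persistent gap between groups' false match rates, as in this toy data, is the signature of the demographic bias the article describes.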

This inherent bias has had devastating consequences. Take the case of Randal Reid, who spent a week in jail for a crime committed in a state he had never visited.

Or Porcha Woodruff, a visibly pregnant woman who was arrested for carjacking despite being physically incapable of committing such an act.

Robert Williams, the first documented victim of a wrongful arrest due to facial recognition, was accused of stealing thousands of dollars' worth of watches, even though he was driving home at the time of the alleged crime. (Related: SURVEILLANCE? U.S. expands biometric technology in airports.)

At the heart of this issue lies a deeply human problem: confirmation bias. When facial recognition software identifies a suspect, law enforcement officers are often quick to accept the result as definitive, ignoring contradictory evidence.

As Mitha Nandagopalan, a staff attorney with the Innocence Project, explains, “When police showed up to Porcha’s house, she was visibly eight months pregnant, yet there was nothing in the victim’s description of the suspect that mentioned pregnancy. The circumstances described would be very difficult for someone near the end of a pregnancy to carry out, and yet they went forward with the arrest.”

Watch this video showing how facial recognition technology is being used in airports.

This video is from the Marjory Wildcraft channel on Brighteon.com.

More related stories:

Air Canada rolls out facial recognition technology at boarding gates for domestic flights.

Malfunctioning facial recognition technology may put innocent individuals at risk.

British gov’t to equip cops with facial recognition technology as part of $69M plan to combat retail crime.

Fashion company creating clothing line that shields people from AI facial recognition technology.

Spain turning into a police state: Spanish police to use automated facial recognition technology soon.

Sources include:

ZeroHedge.com

Foundation.Mozilla.org

Brighteon.com


