AI technology is a big problem. It’s taking people’s jobs, it’s making it easier to spread misinformation online, and it’s handing scammers new opportunities. And now? Racially biased AI facial recognition software is effectively accusing innocent people of crimes they didn’t commit.
Here’s what’s going on.
At a glance:
- Facial recognition software has been used in over 1,000 criminal investigations, often without informing the defendants.
- Clearview AI has provided facial recognition technology to over 2,200 law enforcement agencies, sparking privacy concerns.
- Many innocent individuals, especially people of color, have been falsely identified, raising issues about fairness and transparency.
A controversial facial recognition tool, Clearview AI, has been used in more than 1,000 criminal investigations across 15 states without authorities disclosing its use, according to a Washington Post investigation. Hundreds of U.S. citizens have been arrested after being connected to crimes through this software, yet many were never told how they were identified.
https://www.youtube.com/watch?v=-JkBM8n8ixI
Clearview AI, which has built an extensive database by scraping photos from social media platforms like Facebook and Instagram, has provided access to more than 2,200 law enforcement agencies, including ICE and the DEA. While the company’s database offers significant advantages in identifying suspects, concerns have mounted over the technology’s accuracy, especially when it comes to identifying people of color. Studies have shown that facial recognition software tends to be less effective at correctly identifying minorities, leading to several false arrests.
In some cases, police have actively concealed the use of this technology. Officers have been known to use vague phrases like “through investigative means” to obscure the fact that they relied on facial recognition to make an arrest. In Evansville, Indiana, for example, police used facial recognition to identify a man, but their reports credited only his long hair and tattoos and never mentioned the software. Similarly, in Pflugerville, Texas, a suspect accused of stealing $12,500 worth of merchandise was said to have been identified through “investigative databases,” with facial recognition’s role never disclosed.
Clearview AI’s facial recognition system works by comparing images, often from surveillance footage, to a massive database of publicly available images. However, there are no universal standards for determining a match, leading to inconsistencies and errors. Critics argue that the lack of transparency, combined with the system’s flaws, raises serious concerns about civil rights violations and due process.
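To make that threshold problem concrete, here’s a minimal toy sketch in Python. To be clear, this is not Clearview’s actual system: the “embeddings” are random vectors standing in for the output of a real face-embedding model, and every number is an arbitrary, made-up value. It only illustrates how “a match” reduces to “a similarity score above some cutoff,” and how the same scores produce very different suspect lists depending on where that cutoff is set.

```python
# Toy sketch of threshold-based face matching (NOT Clearview's pipeline).
# The vectors below are random stand-ins for real face embeddings, and
# all thresholds are arbitrary toy values chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for embeddings a real model would compute from photos.
probe = rng.normal(size=128)               # face cropped from surveillance footage
database = rng.normal(size=(10_000, 128))  # scraped reference photos

# Plant one genuine lookalike: the probe image plus some noise.
database[1234] = probe + 0.5 * rng.normal(size=128)

# Cosine similarity of the probe against every database entry.
db_norms = np.linalg.norm(database, axis=1)
scores = database @ probe / (db_norms * np.linalg.norm(probe))

# The core problem: "a match" is just "score above some cutoff", and
# there is no universal standard for where that cutoff sits. The same
# scores yield different suspect lists at different thresholds.
for threshold in (0.35, 0.25, 0.20):
    candidates = np.flatnonzero(scores >= threshold)
    print(f"threshold={threshold:.2f} -> {len(candidates)} candidate 'match(es)'")
```

Run it and the strict cutoff surfaces only the planted lookalike, while the looser ones sweep in dozens of random strangers who merely score high by chance. That is the failure mode behind the false arrests described above: without an agreed standard for what counts as a match, the same evidence can point at one person or at many.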
Despite opposition from lawmakers and privacy advocates, Clearview AI continues to expand its contracts with police departments across the country. The technology has even faced international criticism, with several European countries and Canadian provinces requesting that Clearview take down photos obtained without users’ consent.
As facial recognition technology becomes more widespread, the controversy surrounding its use, accuracy, and the lack of transparency will likely continue to grow. With no federal regulations in place, the debate over facial recognition technology’s role in law enforcement will remain a significant issue in the years to come.
Should we just go back to regular, good old-fashioned policing?