Three years after shutting down its facial recognition software on Facebook over privacy concerns and regulatory pressure, Meta has announced plans to test the technology again. This time, the company plans to use it to fight "celeb bait" scams, which typically place famous personalities' images in fake advertisements to lure people into fraudulent investment schemes.
Meta will enroll around 50,000 public figures in a trial in which their Facebook profile pictures will be compared with images used in suspected scam ads. If the system detects a match and determines the ad to be a scam, Meta will block the advertisement. Celebrities included in the trial will be notified and can opt out if they prefer not to participate.
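Meta has not published technical details of the system, but the workflow it describes amounts to a face-similarity check gated by a scam classification. The sketch below is a minimal, hypothetical illustration of that logic, not Meta's implementation: the embedding inputs, the cosine-similarity measure, the `MATCH_THRESHOLD` value, and the function names are all assumptions made for the example.

```python
import numpy as np

MATCH_THRESHOLD = 0.8  # hypothetical similarity cutoff, not a Meta-published value


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def review_suspected_ad(profile_embedding: np.ndarray,
                        ad_face_embedding: np.ndarray,
                        ad_classified_as_scam: bool) -> str:
    """Decide the action for one suspected ad: block it only when the face in
    the ad matches the enrolled public figure AND the ad is judged a scam."""
    try:
        similarity = cosine_similarity(profile_embedding, ad_face_embedding)
        if similarity >= MATCH_THRESHOLD and ad_classified_as_scam:
            return "block_ad"
        return "no_action"
    finally:
        # Mirrors Meta's stated policy that face data from the comparison is
        # deleted immediately, whatever the outcome (simulated here by
        # dropping the local references).
        del profile_embedding, ad_face_embedding


# Example run with random 128-dimensional vectors standing in for real
# face embeddings extracted from a profile picture and a suspected ad.
rng = np.random.default_rng(0)
print(review_suspected_ad(rng.normal(size=128), rng.normal(size=128), True))
```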
The company aims to roll out the trial globally by December, with some exceptions. Meta cannot run it in jurisdictions where it lacks regulatory approval, including Britain, the European Union, South Korea, and the U.S. states of Texas and Illinois.
Monika Bickert, Meta's vice president of content policy, explained that the focus is on protecting public figures whose likenesses have been misused in these scams. "The idea here is to provide as much protection as possible for them. They can opt out if they wish, but we want to make this protection easy and available," Bickert said during a media briefing.
This new test highlights the balancing act Meta is attempting: using a potentially controversial technology to address growing concerns about scams, while contending with its own record on user data privacy. Social media companies have faced years of criticism over how they handle personal information, and the trial puts Meta back in that spotlight.
In 2021, Meta ended its facial recognition program and deleted the face scan data of a billion users, citing "growing societal concerns." More recently, in August, Meta agreed to pay $1.4 billion to settle a Texas lawsuit accusing it of illegally collecting biometric data in the state. At the same time, Meta faces legal challenges for not doing enough to stop scams that use images of celebrities, often created with artificial intelligence, to trick users into parting with their money.
For the new trial, Meta emphasized that any face data generated by the system will be deleted immediately, whether or not a scam is detected. The company says the tool has gone through its internal "robust privacy and risk review process" and has been discussed with regulators, policymakers, and privacy experts.
In addition to tackling celeb bait scams, Meta is exploring other uses for facial recognition. One potential application is to help regular users of Facebook and Instagram regain access to their accounts if they’ve been hacked or locked out due to forgotten passwords. This feature is also set for testing.