Facebook Messenger Adds Safety Alerts—Even in Encrypted Chats

For the last year, governments around the world have pressured Facebook to abandon its plans for end-to-end encryption across its apps, arguing that the feature provides cover for criminals and above all child predators. Today Facebook is rolling out new abuse-detection and alert tools in Messenger that may help address that criticism—without weakening its protections.

Facebook today announced new features for Messenger that will alert you when messages appear to come from financial scammers or potential child abusers, displaying warnings in the Messenger app that provide tips and suggest you block the offenders. The feature, which Facebook started rolling out on Android in March and is now bringing to iOS, uses machine learning analysis of communications across Facebook Messenger's billion-plus users to identify shady behaviors. But crucially, Facebook says that the detection will occur only based on metadata—not analysis of the content of messages—so that it doesn't undermine the end-to-end encryption that Messenger offers in its Secret Conversations feature. Facebook has said it will eventually roll out that end-to-end encryption to all Messenger chats by default.
"We’re introducing safety notices in Messenger that will pop up in a chat and provide tips to help people spot suspicious activity and take action to block or ignore someone when something doesn’t seem right," reads a blog post from Jay Sullivan, Facebook's director of product management for Messenger privacy and safety. "As we move to end-to-end encryption, we are investing in privacy-preserving tools like this to keep people safe without accessing message content."
Facebook hasn't revealed many details about how its machine-learning abuse detection tricks will work. But a Facebook spokesperson tells WIRED the detection mechanisms are based on metadata alone: who is talking to whom, when they send messages, with what frequency, and other attributes of the relevant accounts—essentially everything other than the content of communications, which Facebook's servers can't access when those messages are encrypted. "We can get pretty good signals that we can develop through machine learning models, which will obviously improve over time," a Facebook spokesperson told WIRED in a phone call. They declined to share more details in part because the company says it doesn't want to inadvertently help bad actors circumvent its safeguards.

The company's blog post offers the example of an adult sending messages or friend requests to a large number of minors as one case where its behavioral detection mechanisms can spot a likely abuser. In other cases, Facebook says, it will weigh a lack of connections between two people's social graphs—a sign that they don't know each other—or consider previous instances where users reported or blocked someone as a clue that they're up to something shady.
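To make the idea concrete, the metadata-only signals described above can be sketched as a toy scoring function. None of the field names, thresholds, or weights below come from Facebook; they are illustrative assumptions, and Facebook's real system uses machine learning models rather than hand-set rules. The point is simply that every input is metadata: nothing here reads message content.

```python
from dataclasses import dataclass

@dataclass
class AccountMetadata:
    """Hypothetical metadata about a message sender (illustrative only)."""
    account_age_days: int        # how long the account has existed
    prior_reports: int           # times other users reported or blocked it
    minors_contacted: int        # distinct minor accounts messaged recently
    mutual_friends: int          # overlap with the recipient's social graph
    messages_sent_last_day: int  # sending volume, not content

def risk_score(meta: AccountMetadata) -> float:
    """Combine metadata signals into a 0..1 risk score (toy weights)."""
    score = 0.0
    if meta.minors_contacted > 10:       # adult mass-messaging minors
        score += 0.4
    if meta.mutual_friends == 0:         # no social-graph connection
        score += 0.2
    if meta.prior_reports > 0:           # previously reported or blocked
        score += 0.2
    if meta.account_age_days < 7 and meta.messages_sent_last_day > 100:
        score += 0.2                     # brand-new, high-volume account
    return min(score, 1.0)

def should_warn(meta: AccountMetadata, threshold: float = 0.5) -> bool:
    """Decide whether to show a safety notice in the chat."""
    return risk_score(meta) >= threshold
```

A brand-new account with no mutual friends that has been reported and is mass-messaging minors trips every rule and would trigger a warning, while a long-standing account with shared friends scores zero—all without the server ever decrypting a message.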

One screenshot from Facebook, for instance, shows an alert that asks if a message recipient knows a potential scammer. If they say no, the alert suggests blocking the sender, and offers tips about never sending money to a stranger. In another example, the app detects that someone is using a name and profile photo to impersonate the recipient's friend. An alert then shows the impersonator's and real friend's profiles side by side, suggesting that the user block the fraudster.