Facebook said the gunman sent the threatening direct messages minutes before the massacre
Could Facebook have known about ominous direct-message threats made by a gunman who Texas authorities say massacred 19 children and two teachers at an elementary school? Could it have warned the authorities?
Texas Gov. Greg Abbott revealed the online messages sent minutes before the Wednesday attack, although he called them posts, which are typically distributed to a wide audience. Facebook stepped in to note that the gunman sent one-to-one direct messages, not public posts, and that they weren’t discovered until “after the terrible tragedy.”
The latest mass shootings in the US by active social-media users may bring more pressure on social media companies to heighten their scrutiny of online communications, even though conservative politicians — Abbott among them — are also pushing social platforms to relax their restrictions on some speech.
Facebook parent company Meta has said it monitors people’s private messages for some kinds of harmful content, such as links to malware or images of child sexual exploitation. But copied images can be detected using unique identifiers — a kind of digital signature — which makes them relatively easy for computer systems to flag. Trying to interpret a string of threatening words — which can resemble a joke, satire or song lyrics — is a far more difficult task for artificial intelligence systems.
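As a rough sketch of how that kind of signature matching works, the hypothetical Python snippet below compares a hash of an uploaded image against a catalogue of known-bad hashes. Real systems use perceptual hashes so that slightly altered copies still match, but the flagging logic is similar.

```python
import hashlib

def image_signature(image_bytes: bytes) -> str:
    """Compute a unique identifier (here, a SHA-256 digest) for an image."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_harmful(image_bytes: bytes, known_bad_hashes: set[str]) -> bool:
    """Flag the image if its signature matches one already catalogued."""
    return image_signature(image_bytes) in known_bad_hashes

# Hypothetical catalogue of signatures for previously flagged images.
catalogue = {image_signature(b"previously flagged image data")}
print(is_known_harmful(b"previously flagged image data", catalogue))  # True
print(is_known_harmful(b"a brand-new photo", catalogue))              # False
```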
Facebook could, for instance, flag certain phrases such as “going to kill” or “going to shoot,” but without context — something AI in general has a lot of trouble with — there would be too many false positives for the company to analyze. So Facebook and other platforms rely on user reports to catch threats, harassment and other violations of the law or their own policies. As the latest shootings show, those reports often come too late, if at all.
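A hypothetical keyword filter of the kind described above shows why context matters: without it, an everyday figure of speech trips the same rule as a genuine threat.

```python
THREAT_PHRASES = ("going to kill", "going to shoot")

def naive_threat_flag(message: str) -> bool:
    """Flag any message containing a listed phrase, with no sense of context."""
    text = message.lower()
    return any(phrase in text for phrase in THREAT_PHRASES)

# A genuine threat and an everyday figure of speech both trip the filter.
print(naive_threat_flag("I am going to shoot everyone at the school"))  # True
print(naive_threat_flag("I'm going to kill it at karaoke tonight"))     # True, a false positive
```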
Even this kind of monitoring could soon be obsolete, since Meta plans to roll out end-to-end encryption on its Facebook and Instagram messaging systems next year. Such encryption means that no one other than the sender and the recipient — not even Meta — can decipher people’s messages. WhatsApp, also owned by Meta, already has such encryption.
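In broad terms, end-to-end encryption means messages are scrambled with keys that only the two endpoints hold, so a relaying server sees nothing but ciphertext. The sketch below, using the open-source PyNaCl library, is a simplified illustration of that idea, not the protocol Meta or WhatsApp actually uses.

```python
from nacl.public import PrivateKey, Box  # pip install pynacl

# Each endpoint generates its own key pair; private keys never leave the device.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts with their private key and the recipient's public key.
ciphertext = Box(sender_key, recipient_key.public_key).encrypt(b"see you at 5")

# A server relaying this message sees only ciphertext and cannot read it.
# Only the recipient, holding the matching private key, can recover the text.
plaintext = Box(recipient_key, sender_key.public_key).decrypt(ciphertext)
print(plaintext)  # b'see you at 5'
```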
A recent Meta-commissioned report emphasized the privacy benefits of such encryption but also noted some risks, including the possibility that users could abuse it to sexually exploit children, facilitate human trafficking and spread hate speech.
Apple has long had end-to-end encryption on its messaging system. That has brought the iPhone maker into conflict with the Justice Department over messaging privacy. After the deadly shooting of three U.S. sailors at a Navy installation in December 2019, the Justice Department insisted that investigators needed access to data from two locked and encrypted iPhones that belonged to the alleged gunman, a Saudi aviation student.
Security experts say investigators could get that access if Apple were to engineer a “backdoor” into its encryption: a secret key that would let law enforcement decipher messages sent by alleged criminals, given a court order.
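One way such an arrangement is often described is key escrow, in which every message is also encrypted to a master key held by the vendor or the government. The sketch below, again a simplified illustration rather than any real proposal, shows why that master key becomes a single point of failure.

```python
from nacl.public import PrivateKey, SealedBox  # pip install pynacl

# The recipient's key pair, plus a hypothetical escrow ("backdoor") key pair
# whose private half could be produced under a court order.
recipient_key = PrivateKey.generate()
escrow_key = PrivateKey.generate()

message = b"meet at the usual place"

# The sender encrypts one copy for the recipient and one under the escrow key.
for_recipient = SealedBox(recipient_key.public_key).encrypt(message)
for_escrow = SealedBox(escrow_key.public_key).encrypt(message)

# Whoever holds the escrow private key can read every message protected this way,
# which is why its theft or leak would expose all users at once.
print(SealedBox(escrow_key).decrypt(for_escrow))  # b'meet at the usual place'
```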
But the same experts warned that such backdoors into encryption systems make them inherently insecure. Just knowing that a backdoor exists is enough to focus the world’s spies and criminals on discovering the mathematical keys that could unlock it. And when they do, everyone’s information is essentially vulnerable to anyone with the secret key.