“It seemed this time like they were more ready for this,” he said. “Like someone just couldn’t wait to do it.”
Facebook officials told the Washington Post that the suspect’s real account was removed and they were working to shut down the impersonating accounts.
The tech giant, which has come under fire for its response to disinformation and questions about users’ data privacy, said this week that it disabled more than 500 million fake accounts in the first three months of 2018.
According to Christopher Bouzy, whose site Bot Sentinel tracks more than 12,000 automated Twitter accounts often used to spread disinformation, four of the top 10 phrases tweeted by bot or troll accounts in the 24 hours after the shooting were related to Santa Fe — which he called “significant activity,” the Post reports.
Hoaxes, conspiracy theories and fake news reports have spread like wildfire in our digital age, often blossoming on message boards like 4chan or platforms like Reddit before being picked up by far-right news sites.
Misinformation can also reach the mainstream, as it did after the shooting at Stoneman Douglas High School, when a video labeling a survivor a “crisis actor” shot to the top of YouTube’s “Trending” list, forcing the student to respond and prompting YouTube to apologize.
Facebook has 10,000 human moderators monitoring the site, plans to hire many more in the coming year, and uses artificial intelligence to remove certain types of banned or fake content.
Mark Zuckerberg’s firm recently announced a partnership with the Atlantic Council, a think tank that has received money from a wide range of foreign corporations and governments, including Saudi Arabia and Turkey, to battle disinformation.
YouTube appears to have avoided some of the mistakes it made after the Parkland rampage, but as of Sunday morning, seven videos claimed without evidence that the shooting was a “false flag” operation.