Nearly a year after Facebook and Google launched offensives against fake news, they are still inadvertently promoting it, often at the worst possible moments.
In the immediate aftermath of the Las Vegas shooting, Facebook's "Crisis Response" page for the attack featured a false article that misidentified the gunman and claimed he was a "far left loon".
Google promoted a similarly erroneous item from the anonymous message board 4chan in its "Top Stories" results.
YouTube even featured a conspiracy video prominently in searches for news on the shooting.
None of these stories was true. The companies quickly purged the offending links and tweaked their algorithms to favour more authoritative sources, but their work is incomplete.
The question remains: why do these highly automated services keep failing to separate truth from fiction? One big factor: most online services' ranking systems tend to emphasize posts that engage an audience, which is exactly what much fake news is designed to do.
Facebook has already taken a number of steps since December: it now features fact-checks by outside organizations, puts warning labels on disputed stories, and has de-emphasized false stories in people's news feeds. That is still not enough, and it remains to be seen how Facebook, Google and YouTube will tackle the fake news that continues to mislead users on their platforms.