On Thursday, Facebook launched its fake news fact-checking programme in South Africa.
France-based news agency AFP and Africa Check, an independent fact-checking organisation based at the journalism department of the University of the Witwatersrand in Johannesburg, will receive potential fake news stories for review from Facebook.
The launch comes as Facebook rolls fact-checking out across the world – and just before elections in South Africa, due in 2019.
Africa Check released a post on Thursday confirming its involvement with Facebook, including stern criticism of the company itself, saying it has failed in its responsibility as a publisher for many years. It has also been reported that Facebook will pay Africa Check for its services.
So, how does it work if you suspect a Facebook post of delivering fake news? It's easy: click the three little dots in the top right-hand corner of the post, select 'Give feedback on this post', then choose 'False news' from the list displayed.
This will alert Facebook, and the possibly fake news story will be sent to a trusted third-party fact-checker for review.
It's not just up to human intervention though. Facebook's algorithms will also flag potential articles for review and, combined with reports from humans, will learn to better identify and flag fake news in the future.
There is a global network of Facebook fact-checkers: many are localised to a specific country, but it also includes widely known myth-busters such as Snopes.com and large news agencies such as AFP. According to Facebook, the network currently covers 17 countries.
According to Africa Check's Anim Van Wyk, fact-checkers review the flagged material and rate it according to a range of options. They can classify articles as true, false, or a mixture of the two. They also have the ability to mark the content as "not eligible" for fact-checking if it is "satire or clear opinion".
According to Facebook policy, only articles, pictures, or videos for which the primary claim is false will be flagged.
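The rating options and the "primary claim" rule described above can be sketched as a small classification model. This is purely illustrative: the labels mirror the categories Van Wyk mentions, but the names and the `is_demotable` helper are assumptions, not Facebook's actual internal API.

```python
from enum import Enum


class Rating(Enum):
    """Hypothetical rating scale modelled on the options described above."""
    TRUE = "true"
    FALSE = "false"
    MIXTURE = "mixture"
    NOT_ELIGIBLE = "not eligible"  # satire or clear opinion


def is_demotable(rating: Rating) -> bool:
    """Per the stated policy, only content whose primary claim is
    rated false gets flagged for reduced distribution."""
    return rating is Rating.FALSE
```

Under this reading, satire, opinion, and mixed-truth content all escape demotion; only an outright false primary claim triggers the downgrade described below.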
When fake news is confirmed, Facebook does not remove it, but rather downgrades its distribution so that fewer people see it. Apparently, this "strikes the best balance between freedom of expression and not promoting falsehoods".
What it does do, though, is display additional information from the fact-checkers when a fake news story shows up in a feed, and if you try sharing a flagged article, Facebook warns you that it's fake. Conveniently, anyone who shared the offending article before it was flagged will also receive a notification saying as much.
Taking it a step further, repeat offenders will be throttled, losing not just audience but potential income too. Facebook says, "If a Facebook Page or website repeatedly shares misinformation, we’ll reduce the overall distribution of the Page or website, not just individual false articles. We’ll also cut off their ability to make money or advertise on our services."
It's not just about calling out fake news distributors though. If a publisher has been flagged, it can invoke a process to dispute any finding against it. If it has indeed published something that turns out to be fake, it can either issue a correction or request a review and, if the flag turns out to be a legitimate error, the "strike" can be removed and the publisher's feed will return to normal.
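The strike-and-dispute cycle described above can be sketched as a simple state model: false articles add strikes, repeat offenders lose page-level distribution and monetisation, and a successful dispute or correction removes a strike. Every threshold and name here is an assumption for illustration; Facebook has not published the actual rules.

```python
class Page:
    """Minimal sketch of the strike-based policy described above.
    The threshold value is assumed; Facebook's real cutoff is not public."""

    REPEAT_OFFENDER_THRESHOLD = 2

    def __init__(self, name: str) -> None:
        self.name = name
        self.strikes = 0

    def record_false_article(self) -> None:
        """A fact-checker rates one of this page's articles false."""
        self.strikes += 1

    def resolve_dispute(self) -> None:
        """A correction or successful review removes one strike."""
        if self.strikes > 0:
            self.strikes -= 1

    @property
    def distribution_reduced(self) -> bool:
        """Page-level throttling kicks in for repeat offenders."""
        return self.strikes >= self.REPEAT_OFFENDER_THRESHOLD

    @property
    def can_monetise(self) -> bool:
        """Monetisation and advertising are cut off alongside reach."""
        return not self.distribution_reduced
```

In this sketch a single false article demotes only that article, not the page; once the page crosses the threshold, both reach and income are cut until enough strikes are resolved.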
According to Van Wyk, Africa Check will start with fake news that has real-world consequences, saying it will "focus on bogus health cures, false crime rumours and things like pyramid schemes – the kind of content that can lead to poor decisions and physical harm". Facebook's third-party fact-checkers will also proactively deal with posts on Facebook.