
This file photo taken on December 28, 2016 shows Facebook logos in Vertou, France. AFP PHOTO / LOIC VENANCE
Have you seen some "tips to spot fake news" on your Facebook news feed recently? Over the past year, the social media company has been scrutinized for influencing the US presidential election by spreading fake news and propaganda.
Obviously, the ability to spread completely made-up stories about politicians trafficking child sex slaves and imaginary terrorist attacks with impunity is bad for democracy and society.
Something had to be done. Enter Facebook's new, depressingly incompetent strategy for tackling fake news. The strategy has three frustratingly ill-considered parts.
The first part of the plan is to build new products to curb the spread of fake news stories. Facebook says it’s trying “to make it easier to report a false news story” and find signs of fake news such as “if reading an article makes people significantly less likely to share it.”
It will then send the story to independent fact checkers. If fake, the story “will get flagged as disputed and there will be a link to a corresponding article explaining why.”
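To make that behavioural signal concrete, here is a minimal sketch of the "read it, then didn't share it" idea in Python. The field names, threshold and minimum sample size are illustrative assumptions on my part, not Facebook's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class StoryStats:
    impressions: int           # people who saw the headline in their feed
    shares_from_feed: int      # shared it without clicking through
    reads: int                 # clicked through to the article
    shares_after_reading: int  # shared it after clicking through

def flag_for_fact_check(s: StoryStats,
                        drop_threshold: float = 0.5,
                        min_reads: int = 100) -> bool:
    """Flag a story for independent fact checkers if reading it makes people
    markedly less likely to share it than the headline alone did."""
    if s.impressions == 0 or s.reads < min_reads:
        return False  # not enough behavioural data to judge
    headline_share_rate = s.shares_from_feed / s.impressions
    post_read_share_rate = s.shares_after_reading / s.reads
    return post_read_share_rate < headline_share_rate * drop_threshold

# A story widely shared from its headline that almost nobody shares
# after actually reading it gets flagged.
print(flag_for_fact_check(StoryStats(impressions=10_000, shares_from_feed=800,
                                     reads=2_000, shares_after_reading=30)))
```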
This sounds pretty good, but it won't work. If non-experts could reliably tell the difference between real news and fake news (which is doubtful), there would be no fake news problem to begin with.
What’s more, Facebook says: “We cannot become arbiters of truth ourselves — it’s not feasible given our scale, and it’s not our role.” Nonsense.
Facebook is like a megaphone. Normally, if someone says something horrible into the megaphone, it's not the megaphone company's fault. But Facebook is a very special kind of megaphone that listens first and then changes the volume.

A demonstrator shouts slogans through a megaphone in front of the entrance to Lisbon's IMF bureau, January 30, 2013. REUTERS/Jose Manuel Ribeiro
The company's algorithms largely determine both the content and order of your news feed. So if Facebook's algorithms spread neo-Nazi hate speech far and wide, yes, it is the company's fault.
Worse yet, even if Facebook accurately labels fake news as contested, it will still affect public discourse through "availability cascades."
Each time you see the same message repeated from apparently different sources, the message seems more believable and reasonable. Bold lies are extremely powerful because repeatedly fact-checking them can actually make people remember them as true.
These effects are exceptionally robust; they cannot be fixed with weak interventions such as public service announcements. Which brings us to the second part of Facebook's strategy: "helping people make more informed decisions when they encounter false news."
Facebook is releasing public service announcements and funding the “news integrity initiative” to help “people make informed judgments about the news they read and share online.”
This – also – doesn’t work.
A vast body of research in cognitive psychology concerns correcting systematic errors in reasoning such as failing to perceive propaganda and bias. We have known since the 1980s that simply warning people about their biased perceptions doesn’t work.

Facebook founder and CEO Mark Zuckerberg speaks on stage during the annual Facebook F8 developers conference in San Jose, California, April 18, 2017. REUTERS/Stephen Lam
Similarly, funding a “news integrity” project sounds great until you realize the company is really talking about “critical thinking skills.”
Improving critical thinking skills is a key aim of primary, secondary and tertiary education. If four years of university barely improves these skills in students, what will this initiative do? Make some YouTube videos? A fake news FAQ?
Funding a few research projects and "meetings with industry experts" doesn't stand a chance of changing anything.
The third prong of this non-strategy is cracking down on spammers and fake accounts, and making it harder for them to buy advertisements. While this is a good idea, it's based on the false premise that most fake news comes from shady con artists rather than major news outlets.
You see, "fake news" is Orwellian newspeak: carefully crafted to mean a totally fabricated story from a fringe outlet masquerading as news for financial or political gain. But these stories are the most suspicious and therefore the least worrisome. Bias and lies from public figures, official reports and mainstream news are far more insidious.
And what about astrology, homeopathy, psychics, anti-vaccination messages, climate change denial, intelligent design, miracles and all the rest of the irrational nonsense bandied about online? What about the vast array of deceptive marketing and stealth advertising that is core to Facebook's business model?
As of this writing, Facebook doesn't even have an option to report misleading advertisements. Facebook's strategy is vacuous, evanescent lip service: a public relations exercise that makes no substantive attempt to address a serious problem.
But the problem is not unassailable. The key to reducing inaccurate perceptions is to redesign technologies to encourage more accurate perception. Facebook can do this by developing a propaganda filter: something like a spam filter for lies.
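To show what a spam filter for lies might literally look like, here is a minimal sketch built on a bag-of-words classifier, the same family of techniques behind early spam filters. The two-article training corpus, the labels and the library choice (scikit-learn) are my own illustrative assumptions; a real filter would need far richer signals than word counts.

```python
# A toy "propaganda filter": a text classifier trained on stories labelled by
# independent fact checkers, exactly like a spam filter trained on junk mail.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

articles = [
    "Senator secretly runs child trafficking ring from pizza shop basement",      # flagged
    "City council approves budget after public consultation, minutes published",  # not flagged
]
labels = [1, 0]  # 1 = flagged by fact checkers, 0 = not flagged

propaganda_filter = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), stop_words="english"),
    MultinomialNB(),
)
propaganda_filter.fit(articles, labels)

# The filter outputs a score, not a verdict, so whoever reuses it decides
# how (or whether) to act on a high-scoring story.
score = propaganda_filter.predict_proba(
    ["Senator runs secret ring, shocking new claims"]
)[0][1]
print(f"propaganda score: {score:.2f}")
```

The point is not that word counts alone can catch lies; it is that scoring and filtering content at scale is a problem the industry already knows how to attack.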
Facebook may object to becoming an "arbiter of truth." But coming from a company that censors historic photos and comedians calling for social justice, this sounds disingenuous.

The front page of Norway's largest newspaper by circulation, Aftenposten, at a news stand in Oslo, Norway, September 9, 2016. Editor-in-chief and CEO Espen Egil Hansen had written an open letter to Facebook founder and CEO Mark Zuckerberg, accusing him of threatening freedom of speech and abusing power after Facebook deleted Nick Ut's iconic Vietnam War photo of a young girl running from napalm bombs. NTB Scanpix/Cornelius Poppe via REUTERS
Nonetheless, Facebook has a point. To avoid accusations of bias, it should not create the propaganda filter itself. It should simply fund researchers in artificial intelligence, software engineering, journalism and design to develop an open-source propaganda filter that anyone can use.
Why should Facebook pay? Because it profits from spreading propaganda, that's why.
Sure, people will try to game the filter, but it will still work. Spam is frequently riddled with typos, grammatical errors and circumlocution, not only because it's often written by non-native English speakers but also because the weird writing is necessary to bypass spam filters.
If the propaganda filter has a similar effect, weird writing will make the fake news that slips through more obvious. Better yet, an effective propaganda filter would actively encourage journalistic best practices such as citing primary sources.
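By way of illustration, here is a rough sketch of the kinds of surface features such a filter could reward or penalise: links to primary sources on the one hand, and the shouting and circumlocution typical of filter-evading junk on the other. The heuristics and thresholds are entirely my own placeholders.

```python
import re

def propaganda_features(text: str) -> dict:
    """Crude, illustrative feature extractor: rewards citing sources,
    penalises 'weird writing' (shouting, overlong sentences)."""
    words = re.findall(r"[A-Za-z']+", text)
    links = re.findall(r"https?://\S+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    avg_sentence_len = len(words) / max(len(sentences), 1)
    return {
        "cited_sources": len(links),               # more is better
        "avg_sentence_length": avg_sentence_len,   # circumlocution proxy
        "shouted_words": sum(w.isupper() and len(w) > 3 for w in words),
    }

print(propaganda_features(
    "SHOCKING truth THEY don't want you to know! No sources, just trust us."
))
```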
Developing such a tool won't be easy. It could take years and several million dollars to refine. But Facebook made over US$8 billion last quarter, so Mark Zuckerberg can surely afford it.