Although Macedonian fake news merchants still generate thousands of Facebook likes and shares, the platform's recent measures against bad actors have drastically reduced their reach. The next step in the arms race takes the battle to Twitter.
An investigation by Lead Stories and Nieuwscheckers into fake news websites run by several Macedonians revealed a sprawling network of over 70 sites, some of them launched as far back as 2016. Numbers were not available for all sites, but data from BuzzSumo showed they received over 7.1 million engagements (mainly on Facebook and Twitter, measured on January 10, 2019) across a total of 7,226 articles tracked by the service. The full list of sites and engagement numbers can be viewed here.
(For comparison: according to BuzzSumo, fact-checking website Snopes.com ran about 5,687 articles in the past year with a total of about 8.8 million engagements.)
Most engagement for the network happened on Facebook, but an interesting trend became visible once we started breaking down the numbers. For each site in the network we looked up the registration date in WHOIS data so we could order the sites chronologically. Then we used BuzzSumo to find the average number of engagements per post for each site (in total and split out by social network).
Early in the life of the network the operators launched about one new site per month. As you can see in this graph, that pace increased drastically after a while, with peaks of five or six sites registered in a single month.
Because of this, the time axis in the first three graphs is somewhat stretched and distorted. In the graph below we instead averaged the per-site average engagement per post across all sites registered in a given month, which gives a realistic time axis (at the cost of slightly smoothing the engagement figures).
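The averaging step described above can be sketched in a few lines of code. This is an illustrative reconstruction, not the code actually used in the investigation; the site names and engagement figures below are made up for demonstration.

```python
# Sketch of the analysis: group sites by WHOIS registration month and take
# the average of each site's per-post engagement average (via BuzzSumo).
# All data below is hypothetical, purely to illustrate the calculation.
from collections import defaultdict
from statistics import mean

# (site, registration month from WHOIS, average engagements per post)
sites = [
    ("example-a.com", "2016-03", 1800.0),
    ("example-b.com", "2016-04", 2100.0),
    ("example-c.com", "2018-05", 150.0),
    ("example-d.com", "2018-05", 90.0),
    ("example-e.com", "2018-06", 60.0),
]

# Collect the per-site averages under their registration month.
by_month = defaultdict(list)
for site, month, avg_per_post in sites:
    by_month[month].append(avg_per_post)

# One data point per month: the average of the per-site averages.
# This yields the realistic time axis used in the final graph.
monthly = {month: mean(vals) for month, vals in sorted(by_month.items())}
for month, value in monthly.items():
    print(month, round(value, 1))
```

Note that months with several new sites (like the hypothetical "2018-05" above) collapse into a single point, which is the mild distortion of the engagement figures mentioned in the text.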
Firstly, Facebook's measures against fake news appear to have clearly hurt this network. There is a distinct drop in average engagement starting from April, and despite the occasional 'lucky' post that still managed to get some traction, most articles no longer have much impact. Sometime in the summer of 2018 Facebook also began more strictly reducing the reach of posts copied from articles marked as false by third-party fact checkers, which probably hurt this network even more, as the operators were often prone to copy-pasting from dubious sources.
Secondly, from the graph showing the number of new domain registrations per month it also appears the operators of the network were having to register more and more sites to stay ahead of blocking and filtering measures.
Thirdly, there is a marked rise in Twitter engagement around this time. Much of this is undoubtedly due to the increased use of fake Twitter accounts to spam links to articles on network sites. Even so, Twitter engagement never seems to have risen to the level of the Facebook engagement of the network's "glory days" of 2016.
Asked for comment, a spokesperson for Twitter told us the higher engagement numbers didn't necessarily mean more people on the platform got to see the links that were being spread: there was a chance the accounts spamming them were being identified and challenged by Twitter's filtering systems and subsequently hidden from view in search results and user timelines.
However, we discovered many of the accounts by using Twitter's public search function, which strongly implies they avoided detection. We did notice the accounts seemed to be very careful about tweeting out only a few links, sometimes three, sometimes four, in some cases twelve over a period of some days, sometimes mixing in a few legitimate sites. After seeding their links the accounts usually stopped posting (but remained findable via Twitter's public search function).
To us it looked like the account operators were feeling out the limits of the filtering systems and that they needed to put in a lot of effort to avoid tripping any alarms. They had to create a lot of accounts and couldn't build up big ones with large audiences for long term use. So they used lots of small ones for very short periods instead.
The network managed to publish two stories in 2018 that made it into BuzzFeed's top 50 list of the biggest hoaxes for that year:
- Muslim Figure: "We Must Have Pork-Free Menus Or We Will Leave U.S." How Would You Respond This? - published on vtamedia.com on April 5, 2018
- Pedophile's Decapitated Corpse Found On Judge's Doorstep After Bail Hearing - published on cvikasdrv.com on September 1, 2018
Technically a third story should have made that list, because it received more than enough engagement:
- Florida: Largest food stamp fraud bust in history, $20M, Muslim store owners arrested - published on opreminfo.com on June 4, 2018
We compiled a list of the most-engaged-with article from each site in the network and then sorted it by engagement numbers. Here is the top ten, which gives a good impression of the type of content published by the network (links go to archived copies):
Comments by Facebook, Google, and Twitter
Although the problem of Macedonian fake news sites and their money-making model has been known since 2016, this business continues to exist thanks to the facilities offered by Facebook, Google, and, to a lesser extent, by other platforms. To be sure, the companies have taken counter-measures but these did not catch this network, even though it carried out its activities very much in the open. We asked the tech companies for comment.
None of the three wanted to discuss the specifics of the case or the companies' countermeasures on the record. A Google spokesperson said, 'It is not allowed to use AdSense on websites with harmful, misleading or inappropriate content. We constantly do checks and immediately take action when we notice infractions. But it can happen that our systems overlook something. That is why we encourage people to report websites that break the rules.'
A Twitter spokesperson referred us to the page listing Twitter's approach to bots and misinformation which states (in part):
We're working hard to detect spammy behaviors at source, such as the mass distribution of Tweets or attempts to manipulate trending topics. We also reduce the visibility of potentially spammy Tweets or accounts while we investigate whether a policy violation has occurred. When we do detect duplicative, or suspicious activity, we suspend accounts. We also frequently take action against applications that abuse the public API to automate activity on Twitter, stopping potentially manipulative bots at the source.
It's worth noting that in order to respond to this challenge efficiently and to ensure people cannot circumvent these safeguards, we're unable to share the details of these internal signals in our public API. While this means research conducted by third parties about the impact of bots on Twitter is often inaccurate and methodologically flawed, we must protect the future effectiveness of our work.
We submitted two fake profiles belonging to the network ('Ema Brown' and 'Lisa Sanders') to Facebook, as a sample of what we were dealing with. 'In this instance,' a spokesperson said, 'we have investigated the profiles shared with us and removed them for violating our policies on authenticity and misrepresentation.'
Regarding the fight against abuse of the kind perpetrated by the Kumanovo network, the company merely stated: 'We're encouraged by three separate, recent pieces of research that indicate that misinformation on Facebook has declined and that efforts by Facebook following the 2016 election to limit the spread of misinformation may have had a meaningful impact. These results are encouraging and commensurate with our own data, but we know that this is a highly adversarial space and we have more work to do.'
Banner image credit: Future Atlas on Flickr.