Why Humans Are the Main Reason for Fake News

Even as bots start to take over the internet, humans remain the likeliest culprits to spread fake news

Earlier this month, the Pew Research Center released a remarkable report showing that, in the summer of 2017, two-thirds of the links tweeted to popular websites came from automated accounts. In other words, from last July to September, Twitter bots were responsible for 66 percent of tweets linking readers to a page on a popular website.

When we narrow in on tweets linking to the most popular news sites, as opposed to the most popular websites overall, the rate stays the same. For comparison, bots were responsible for 76 percent of links to sports content and 90 percent of links to adult content on Twitter. The rate actually skyrockets, from 66 percent to a whopping 89 percent, when we look at bot-created tweets linking to popular news aggregation sites, the ones that gather commentary on current events from around the web.

Sounds terrifying, right?

At a time when fake news about migrants and politicians has the potential to influence elections, the idea that most of what we’re seeing in certain social spaces does not even come from humans is rightfully unnerving. And yet the report also pointed toward some surprising facts.

As a deeper look into the study shows, the vast majority of the automated accounts were not the partisan trolls or foreign-government fake accounts that most people have come to associate with bots. Netflix, CNN, and the Metropolitan Museum of Art, for example, push out content via an automated rather than a manual process, and nobody finds that illegitimate or worrisome.

After you take away the most popular bots linking back to genuine organizations, most of what’s shared is not particularly interesting — the lesser-known bots link to sales, various illicit movie streaming services, and ads.

In conducting its research, the Pew Research Center found that the bots posting links to popular sites were fairly innocuous. Among the 57 percent of links that had some political content, there was no clear partisan tilt: 41 percent of the bot-posted links pointed to primarily liberal sites, while 44 percent pointed to primarily conservative sites. The difference is not, according to the researchers, statistically significant, nor a sign that bots are pushing a particular liberal or conservative agenda.

This means that, even with automated accounts generating most of the links we see shared on some social media platforms and in the comment sections of websites, fake news remains a human problem. The fake news worth worrying about is neither bot-created nor bot-pushed; it is either disinformation intentionally designed and pushed to disrupt fact-finding and truth-telling, or misinformation powerfully advanced as true by the viral reach of uninformed or poorly informed people with social accounts.

The uncomfortable reality is that the fake news that is truly dangerous and truly believable (for example, the posts that falsely identified the Las Vegas shooter, or the conspiracy theories behind the death of DNC staffer Seth Rich) usually comes from ordinary people.

It’s worth repeating that, apart from the bad actors behind the worst of these misinformation campaigns, humans are also responsible for the spread of fake news at a more unwitting level. It’s not just that the truly worrisome forms of fake news are created by human beings; they are also spread by them.

Another study, this one from researchers at the Massachusetts Institute of Technology, found that while the average true news story rarely reached more than 1,000 people, the top 1 percent of false stories reached between 1,000 and 100,000 people on Twitter. As the researchers found:

Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.

In 2017, the main sources of fake news on the internet were as mundane as inaccurate Facebook and Twitter posts written by ordinary users. But why would fake news spread more than real news? According to Soroush Vosoughi, Deb Roy, and Sinan Aral:

We found that false news was more novel than true news, which suggests that people were more likely to share novel information.

While provocateurs often spread politically motivated fake news in pursuit of specific political or social ends (for example, Alex Jones of Infowars claiming that the man who filmed the Charlottesville rally was a CIA agent), the reality of the spread of fake news is less nefarious. On many occasions, regular people unknowingly contribute to fake news by sharing it without first bothering to verify its accuracy or the source’s authenticity.

Naturally, false information tends to be intrinsically more dramatic and interesting than reality. The world is uncompromisingly complex, which means that reports offering simple binaries, such as “[Person X] is evil” or “[Innocent victim] attacked by [ideological opponent’s] malicious supporters,” are less likely to be true, though they are more appealing to a person whose prior beliefs they confirm.

There’s a reason the MIT researchers found that false information was 70 percent more likely to be retweeted than the truth, and then, naturally, to be disseminated to other sites and accounts.

It bears repeating that, in the process described above, robots are not primarily to blame for the dissemination of fake news. In fact, fake news was far more likely to spread once ordinary humans had seen it, believed it, and begun to share it. When the authors accounted for bot-based dissemination, they found that bots spread true and false news alike. Humans, on the other hand, spread false information faster than real news.

It looks, sadly, like the cause of fake news is often neither the imagined cigar-smoking foreign hacking director behind a mahogany desk, nor the bots that increasingly populate our online spaces, but ourselves: ordinary human beings whose eyes light up more than they should when we encounter information that confirms our ideological priors.

What can we do to combat this problem?

At a time when fake news influences even the highest levels of government, there are steps one can take to be a responsible news consumer and sharer. One of the casualties of our ubiquitous social engagement has been a basic civic duty: our responsibility to do due diligence before we put in front of another person something that may be flat-out wrong or misleading.

  • Before you share a post, check the source. Is it posted by a genuine publication? Or, for social media posts: does it link to a proper site?
  • Check the link, the author, and the date the article was published. False stories often start as incomplete posts published on sites that have only one active page or lack descriptions. These are the most obvious forms of fake news, but they are not the most common.
  • If possible, do your own fact-checking. We often have neither the time nor the expertise to fact-check much of what we read, but doing your own fact-checking doesn’t mean performing your own analysis. At a basic level, it can mean checking how a rival source characterizes the same issue, or how an ideologically different media outlet frames the same problem. Unfortunately, this has become extremely important: over the last year, even high-profile journalists have inadvertently (and sometimes purposefully) republished falsehoods based on incomplete claims and scoops that started out as rumors.
  • When you read a report, ask “Who made the claim?” Check that each fact is attributed to a source capable of justifying it. In other words, a blogger’s opinion is not an adequate source of confirmation for a factual claim about what was discussed at a closed-door national security meeting.
  • Perhaps above all, read beyond the headline. Headlines are a particularly problematic source of misleading sharing behavior: the headline is typically the most attention-grabbing part of a post, and it is written that way intentionally. Yet it is the one part of an article least helpful for passing an accurate report to others in your online social space. You’re better off quoting a passage from inside the article and sharing that instead, or characterizing the article’s entire message. It’s harder work, sure, but it will tend to paint a more accurate picture.

We can become a more responsible online community of news observers. But it will take, at the very least, a strong commitment to the truth to make us willing.