Misinformation is false information that people share because they believe it is true. That is not the case with “fake” news. Fake news is intended to mislead, and the source is usually an outlet with a predetermined goal, for example, to topple a government or to create propaganda for an unwanted cause. Many examples are probably running through your mind right now.
The biggest source of misinformation is social media. News spreads fast, and you are more likely to share (mis)information from a trusted colleague or friend without confirming its integrity. If news breaks that there is an active shooter in some mall in some state, most people will “re-tweet” or share the news without confirming that the event is actually taking place. The news does not seem worth verifying because “why would someone lie about that?” — and that’s the dilemma.
Targeted disinformation campaigns are a multi-channel strategy that uses different tools to reach an intended goal. In the 2016 US election, these targeted campaigns were multidimensional and shaped US politics in an unprecedented way. Word spreads fast on social media, and identifying the US audience as vulnerable to manipulation worked well for the adversary.
Today, Twitter’s political ad policy is a small step in the fight against disinformation. Last month, Jack Dorsey announced that the social network will no longer permit political ads to run on the Twitter platform. As Twitter’s own research shows, regular users share more disinformation than bots do. What Dorsey means, more specifically, is that a political candidate’s message should not be bought and spread, but earned: “But removing politicians’ ads is just a small step toward fighting the broader problem of disinformation.” Hopefully this small step will lead the way for Facebook, which seems to keep getting tangled up in this mess.