The Hidden Cybersecurity Risk in Social Media Disinformation
By Aaron Barr, CTO, PiiQ
Disinformation has been an issue since long before social media existed – but the rise of social media has amplified the problem to new levels. Political campaigns have always involved a certain amount of disinformation, but we’re now seeing it spread to issues like public health, as with the COVID-19 misinformation that spread like wildfire over the past year. It can be tempting to lay the blame entirely on social media platforms, but the reality is that this is a problem no one can afford to ignore. Everyone – from the big social media platforms to corporations to individuals – has a role to play in combating it.
In fact, some believe the problem has become significant enough to warrant being declared a national security concern and are urging the current administration to form a task force focused on disinformation. Let’s examine the current scourge of misinformation, where responsibility lies in stopping it and what organizations can do to protect themselves.
Understanding misinformation and disinformation
Disinformation isn’t new – it’s been around as long as humans have. The founding fathers apparently discussed the idea of fake news, even if they didn’t call it that. President John Adams commented in the late 1700s that the press was the source of more news error “in the last ten years than in a hundred years before 1798.” It’s no surprise that libel laws originated in England as early as the 17th century to address false and defaming claims made in written form.
A classic example is Thomas Jefferson’s behavior as a politician. He’s said to have run disinformation campaigns in local newspapers against his political rivals – writing stories under a pen name about a rival’s infidelities, for instance, in an attempt to discredit him. Today, disinformation is a key tool in the information warfare toolbox. Parties with an agenda to promote look to sow discord, confusion and misinformation about particular issues as an asymmetric warfare capability.
The role of social media in perpetuating disinformation
Thanks to social media, disinformation now has a much bigger ripple effect than ever before – and it has the potential to create real consequences for both individuals and organizations, as well as society at large. Social media tends to create siloes at best and echo chambers at worst. Not only do individuals rely increasingly on social media as their primary source of information, but they also tend to stay focused on the groups and sources that are similar to them or that are most likely to reinforce what they already think or believe.
This human tendency enables misinformation or just plain falsities to run rampant. A 2018 study published in Science by MIT Sloan researchers found that falsehoods are “70% more likely to be retweeted on Twitter than the truth and reach their first 1,500 people six times faster.” We’ve certainly seen this dynamic in the past few years as political beliefs have become more polarized. What’s more, disinformation poses real reputational risk to companies.
Social media platforms must take responsibility, but so do individuals
Social media platforms must be held accountable, and we are seeing some efforts to hold their feet to the fire – as evidenced by the numerous appearances of the CEOs of some of the biggest social media companies before Congress.
The platforms themselves also have efforts underway to improve their ability to identify disinformation campaigns and to remove the profiles and accounts involved in them.
There have also been bipartisan calls for changes to Section 230, a provision of the Communications Decency Act widely seen as shielding many of these platforms from accountability.
Individuals, for their part, can commit to doing some research before they share posts. There’s so much information out there that everybody falls for misinformation once in a while. That’s why Twitter now sometimes asks users if they’d like to read an article before retweeting it – because most of the time, we quickly ingest what we see and respond without pausing to look more deeply into a news piece before spreading it.
A good rule of thumb: if you don’t have the time to look into something first, don’t re-share it. One way to evaluate information is to check whether it comes from a reliable, credible news source or regulatory agency. Stay as close to the original source of information as possible rather than relying on those who spread it and layer on their personal perspective or opinion.
Organizations can also play a role by having a strong social media use policy in place for employees that informs them about adversarial individuals and potential threats that can come through social media use. The policy should help them understand what makes them and the organization vulnerable to reputational damage and how disinformation can be used against them. Employees should receive training so that they fully understand the policy and their responsibility in helping protect the organization and its brand.
“Fake news” has always existed and likely always will. It’s not possible to screen every social post or news story, but it is possible to create policies at the national and corporate levels that lay a foundation for social sharing and protect organizations from reputational harm. Organizations need to clearly convey their policies and train employees to adhere to them. In the age of social media, risk management has become everyone’s responsibility.
About the author:
Aaron Barr is the chief technology officer of PiiQ Media and a recognized expert in information operations and exploitation, social engineering, open source intelligence, and digital covert operations. He has 25+ years of experience supporting cybersecurity and U.S. intelligence organizations, with emphasis in cyber offense and defense. Previously, he worked for Northrop Grumman, serving in roles including program manager, technical director for the intelligence and cyber security business unit, and as a lead engineer for the company’s cyber security integration group. In his career, he’s also led technical operations programs for three separate U.S. intelligence agencies.
The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.