Published on January 28, 2022

Misinformation has become commonplace in online discourse. Unlike disinformation, where falsehoods are deliberately spread with the intent to mislead, misinformation is false information shared by people who believe it to be true. Unintentionally sharing false information is more common than intentionally spreading it, which makes it much harder to combat.

The speed and volume of data shared through ever-advancing technologies continue to widen the potential impact of misinformation in news coverage. Whether the topic is climate change, US elections, the Omicron variant, or COVID-19 vaccine updates, misinformation continues to spread like wildfire across the internet.

With the advent of social media, online self-expression and social connection took center stage. As platforms like Facebook and Twitter have gained popularity, however, they have changed how users share, recommend, and comment on content that resides outside those platforms. This change has gained importance over the last decade because news services increasingly rely on social media for distribution and engagement.

The ease and speed of sharing that legitimate news sources have enjoyed, however, have increasingly been exploited for nefarious purposes in recent years. In this article, let's take a look at how misinformation has exploded across digital platforms and the ways technology can help address this major issue.

Digital strategies that led to misinformation

Reporting news is no longer the exclusive domain of professional journalists; the public is now involved in the news production process. While one can argue that this has a democratizing effect on news production, it also brings setbacks: content can now reach mass audiences without the editorial vetting of a professional newsroom.

With the rise of digital personalization and Big Data over the last decade, platforms have made customized messaging possible, ensuring content resonates with specific demographics. Studies show that one-third of the global population is on social media. While not all of these users seek out news, social media recommendation engines work to keep users engaged on the platform for as long as possible. Fact-checking published content does not top these platforms' list of obligations, and it can even work against their engagement goals. So, when a piece of misleading content proves to fetch more engagement, the algorithms don't hesitate to push it to other users, especially those who have previously engaged with similar content.
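To make that dynamic concrete, here is a minimal sketch, in Python, of an engagement-first ranking loop. Every name and the scoring formula are illustrative assumptions, not any platform's actual system; the point is simply that nothing in the score rewards accuracy.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    topics: set              # e.g., {"vaccines", "elections"}
    engagement_rate: float   # hypothetical clicks + shares per impression

@dataclass
class User:
    user_id: str
    engaged_topics: set = field(default_factory=set)  # topics the user engaged with before

def rank_feed(user: User, candidates: list[Post]) -> list[Post]:
    """Rank posts purely by predicted engagement.

    Note what is absent: there is no factuality term in the score, so a
    misleading post with a high engagement rate outranks an accurate one.
    """
    def score(post: Post) -> float:
        affinity = len(post.topics & user.engaged_topics)  # similarity to past engagement
        return post.engagement_rate * (1 + affinity)
    return sorted(candidates, key=score, reverse=True)

user = User("u1", engaged_topics={"vaccines"})
feed = rank_feed(user, [
    Post("accurate_report", {"vaccines"}, engagement_rate=0.02),
    Post("misleading_blurb", {"vaccines"}, engagement_rate=0.09),
])
print([p.post_id for p in feed])  # ['misleading_blurb', 'accurate_report']
```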

These social media algorithms certainly equip content creators and publishers with the tools they need to find their ideal audience, but that is just one side of the story. Here are a few factors that continue to be responsible for misinformation outbreaks:

  1. Content-shaping algorithms: Big Data and machine learning ensure that consumers get, as a social media platform might put it, “appropriately personalized content.” Sometimes this means serving someone increasingly engaging content of questionable factuality.

  2. Micro-targeting: Pushing content to a hyper-specific demographic based on region, interests, or other personal information provided on the platform.

  3. Bot-generated content: Automated accounts can produce content far more quickly than the average writer, sometimes generating articles that are hard to distinguish from posts made by people, which creates the false impression that there’s organic buzz around a topic or story.

  4. Content moderation algorithms: These algorithms detect content that breaks a platform’s terms of service and remove it without any human involvement (a minimal sketch of this pattern follows this list).
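As a rough illustration of that fourth factor, the sketch below shows the basic shape of an automated moderation pass: a classifier score and a hard threshold, with no human in the loop. The classifier here is a toy placeholder, not any platform's real model.

```python
def classify_violation(text: str) -> float:
    """Placeholder for a trained policy classifier; returns the
    probability that the text violates the terms of service."""
    banned_phrases = ("miracle cure", "the election was stolen")  # toy stand-in
    return 1.0 if any(p in text.lower() for p in banned_phrases) else 0.1

def moderate(posts: list[str], threshold: float = 0.9) -> list[str]:
    """Keep only posts scoring below the removal threshold -- no human review.

    Anything under the threshold stays up, which is one reason borderline
    misinformation routinely survives an automated pass like this.
    """
    return [p for p in posts if classify_violation(p) < threshold]

feed = ["Try this miracle cure today!", "Local election results certified."]
print(moderate(feed))  # ['Local election results certified.']
```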

While we may assume or even demand that social platforms track down misinformation, there is little chance that Facebook, Twitter, or YouTube will completely wipe it out. One reason is that doing so would require full-time policing of all content; given that the nature of these platforms is to let people publish immediately and freely, it’s inevitable that misinformation will make its way onto them, even if only for a short time before being removed.

Let’s take a look at a few misinformation outbreaks that have forced digital platforms to respond. 

COVID-19 false cures and hoax claims

As the pandemic continues to spread across the globe, misinformation about the disease itself and vaccines for it has seeped into almost every citizen’s social media feed. Meta revealed that it had to remove around 18 million pieces of content globally from Facebook and Instagram for violating its misinformation policy. YouTube, likewise, has taken down one million videos containing dangerous COVID-19 misinformation, such as false cures or claims that the disease is a hoax, since February 2020. Under the policies these platforms have rolled out, content violates the vaccine policy when it contradicts the consensus of health authorities or the World Health Organization on the vaccines.

“WhatsApp killings” in India

A study by the BBC in India showed that WhatsApp has been used extensively to circulate misinformation, and this misinformation has been responsible for violence and mob lynchings in the country. In a spate of violence in early 2018, rumors about child kidnappers were forwarded from person to person and group to group, fueling mass hysteria, mainly in rural towns and villages across the country, and resulting in the killing of nearly 18 people. Because the platform is protected with end-to-end encryption, it is not easy to measure the extent of misinformation shared on it.

Groups on WhatsApp can have up to 256 people, and messages with violent intent are said to have been forwarded to groups of at least 100 people. Following those events, WhatsApp released an update that restricted the forwarding of messages to five recipients at a time, adding that it hoped this measure would curb the frequency of forwarded messages. The company also removed the “quick forward” button next to messages containing pictures or videos.
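The forwarding restriction is simple to picture in code. Below is a minimal, hypothetical sketch of a client-side limit like the one WhatsApp described; the names and structure are assumptions for illustration, not WhatsApp's implementation.

```python
MAX_FORWARD_CHATS = 5  # the per-action limit WhatsApp introduced in 2018

def forward_message(message_id: str, target_chats: list[str]) -> list[str]:
    """Reject a forward that targets more chats than the limit allows.

    Purely illustrative: the real client enforces this in its UI, and
    none of these names come from WhatsApp's actual code.
    """
    if len(target_chats) > MAX_FORWARD_CHATS:
        raise ValueError(
            f"A message can be forwarded to at most "
            f"{MAX_FORWARD_CHATS} chats at a time."
        )
    return target_chats  # a real client would dispatch the sends here
```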

Russia’s interference in the 2016 US elections

According to the US Senate Select Committee on Intelligence report, Russia’s Internet Research Agency (IRA) used digital personalization techniques to interfere in the 2016 US presidential election. The campaign targeted African-Americans with misinformation designed to stoke outrage against other social groups, prompt participation in protests, or even convince individuals not to vote at all.

The Senate’s probe into Russian interference in the 2016 US elections was one of the highest-profile congressional inquiries; it concluded its three-year investigation in August 2020. Far from being a strategy aimed at the mass public, the IRA’s information operation relied on digital personalization. And while this extensive investigation proved Russian interference, there are many other instances in which tracking down the source of misinformation has proved intractable.

The possible cure for misinformation is here, but will it work?

Rising concern about misinformation has gone hand in hand with the measures platforms have taken to keep it in check. National governments around the world have also attempted legislative interventions or applied pressure on the major platforms to curb misinformation.

Combating technology-driven misinformation outbreaks with advanced technology does not seem like a bad idea; as they say, we must fight fire with fire. Grover, an AI model built by a group of researchers at the University of Washington, was created to detect fake news generated by AI. It has shown 92% accuracy in detecting whether news pieces were written by humans or machines, and it helped readers detect fake text with up to 72% accuracy. Other tools have been created to identify machine-generated text, but these technologies have yet to demonstrate how effective they can be against misinformation spread by bad actors and the digitally illiterate.
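Grover itself isn’t distributed as a drop-in library, but the task it performs, classifying a passage as human- or machine-written, can be sketched with an off-the-shelf detector. The sketch below assumes the Hugging Face transformers library and OpenAI’s publicly released GPT-2 output detector as a stand-in for Grover; treat its scores as indicative, not authoritative.

```python
# pip install transformers torch
from transformers import pipeline

# A public machine-text detector, used here as a stand-in for Grover.
# For this checkpoint, "Real" means likely human-written and "Fake"
# means likely machine-generated.
detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

claim = ("Scientists confirmed today that a common household spice "
         "cures every known strain of the virus.")
result = detector(claim)[0]
print(f"{result['label']} ({result['score']:.0%} confidence)")
```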

However, as Mike Devito, a graduate researcher at Northwestern University, says,

“These are not technological problems; they are human problems that technology has simply helped scale, yet we keep attempting purely technological solutions. We can’t machine-learn our way out of this disaster, which is a perfect storm of poor civics knowledge and poor information literacy.”

Even with the gatekeepers of the major platforms trying their best, the more durable fix is a digital space in which users take more social responsibility for how they interact with the content readily available to them. That would improve the quality of the information exchanged and thereby serve the primary purpose of digital platforms.

Naveena Srinivas

Enterprise Analyst, ManageEngine

Naveena Srinivas is an Enterprise Analyst at ManageEngine. With her evolving understanding of the technology world, she focuses her exploration on cybersecurity and data privacy. She also believes people can strike a delicate balance between the evolution of technology and humanity.

Naveena aims to analyze the other side of the established narrative on trending technologies. As a part of her role, she keeps herself updated with the latest happenings in the IT industry. She has also co-authored a fiction novella and contributed to multiple anthologies.

With an engineering degree and experience in both B2B and B2C startups, she has gained knowledge in the fields of healthcare, academia, and marketing.
