
Open Access

Peer-reviewed

Research Article

A systematic review on fake news research through the lens of news creation and consumption: Research efforts, challenges, and future directions

Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

Affiliation School of Intelligence Computing, Hanyang University, Seoul, Republic of Korea

Roles Conceptualization, Formal analysis, Investigation, Methodology, Supervision, Writing – original draft, Writing – review & editing

Affiliation College of Information Sciences and Technology, Pennsylvania State University, State College, PA, United States of America

Roles Funding acquisition, Methodology, Project administration, Supervision, Writing – original draft, Writing – review & editing

Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

* E-mail: [email protected]


  • Bogoan Kim, 
  • Aiping Xiong, 
  • Dongwon Lee, 
  • Kyungsik Han


  • Published: December 9, 2021
  • https://doi.org/10.1371/journal.pone.0260080

28 Dec 2023: The PLOS One Staff (2023) Correction: A systematic review on fake news research through the lens of news creation and consumption: Research efforts, challenges, and future directions. PLOS ONE 18(12): e0296554. https://doi.org/10.1371/journal.pone.0296554


Although fake news creation and consumption are mutually related and each can turn into the other, our review indicates that a significant amount of research has primarily focused on news creation. To mitigate this research gap, we present a comprehensive survey of fake news research, conducted in the fields of computer and social sciences, through the lens of news creation and consumption with internal and external factors.

We collect 2,277 fake news-related articles by searching six primary publishers (ACM, IEEE, arXiv, APA, ELSEVIER, and Wiley) from July to September 2020. These articles are screened according to specific inclusion criteria (see Fig 1). Eligible articles are categorized, and temporal trends of fake news research are examined.

As a way to acquire more comprehensive understandings of fake news and identify effective countermeasures, our review suggests (1) developing a computational model that considers the characteristics of news consumption environments leveraging insights from social science, (2) understanding the diversity of news consumers through mental models, and (3) increasing consumers’ awareness of the characteristics and impacts of fake news through the support of transparent information access and education.

We discuss the importance and direction of supporting one’s “digital media literacy” in various news generation and consumption environments through the convergence of computational and social science research.

Citation: Kim B, Xiong A, Lee D, Han K (2021) A systematic review on fake news research through the lens of news creation and consumption: Research efforts, challenges, and future directions. PLoS ONE 16(12): e0260080. https://doi.org/10.1371/journal.pone.0260080

Editor: Luigi Lavorgna, Università degli Studi della Campania Luigi Vanvitelli, ITALY

Received: March 24, 2021; Accepted: November 2, 2021; Published: December 9, 2021

Copyright: © 2021 Kim et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the manuscript.

Funding: This research was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (2019-0-01584, 2020-0-01373).

Competing interests: The authors have declared that no competing interests exist.

1 Introduction

The spread of fake news not only deceives the public but also harms society, politics, the economy, and culture. For instance, Buzzfeed (https://www.buzzfeed.com/) compared and analyzed engagement (e.g., likes, comments, share activities) with the 20 real news and 20 fake news articles that spread the most on Facebook during the last three months of the 2016 US Presidential Election. According to the results, engagement with fake news (8.7 million) was higher than that with mainstream news (7.3 million), and 17 of the 20 fake news stories favored the eventual winner of the election [1]. Pakistan’s Ministry of Defence posted a tweet fiercely condemning Israel after coming to believe that Israel had threatened Pakistan with nuclear weapons, a threat later found to be false [2]. Recently, the spread of the absurd rumor in the UK that COVID-19 propagates through 5G base stations upset many people and resulted in a base station being set on fire [3].

The fake news phenomenon has been rapidly evolving with the emergence of social media [4, 5]. Fake news can be shared by friends, followers, or even strangers within only a few seconds. Repeating this process can lead the public to form a wrong collective intelligence [6], which can further develop into diverse social problems (e.g., setting a base station on fire because of rumors). In addition, some people believe and propagate fake news due to their personal norms, regardless of the factuality of the content [7]. Research in social science has suggested that cognitive bias (e.g., confirmation bias, the bandwagon effect, and choice-supportive bias) [8] is one of the most pivotal factors behind irrational decisions in both the creation and consumption of fake news [9, 10]. Cognitive bias greatly contributes to the formation and reinforcement of the echo chamber [11], in which news consumers share and consume information only in the direction of strengthening their existing beliefs [12].

Research using computational techniques (e.g., machine or deep learning) has been actively conducted for the past decade to investigate the current state of fake news and detect it effectively [13]. In particular, research on text-based feature selection and the development of detection models has been extensive [14–17]. Research has also been active in the collection of fake news datasets [18, 19] and in fact-checking methodologies for model development [20–22]. Recently, Deepfake, which can manipulate images or videos through deep learning technology, has been used to create fake news images or videos, significantly increasing social concern [23], and a growing body of research seeks ways to mitigate such concerns [24–26]. In addition, research on system development (e.g., games that raise awareness of the negative aspects of fake news) has aimed to educate the public and keep them from falling into echo chambers, misunderstanding, poor decision-making, blind belief, and the propagation of fake news [27–29].

While the creation and consumption of fake news are clearly different behaviors, the characteristics of the online environment (e.g., information can easily be created, shared, and consumed by anyone, at any time, from anywhere) have begun to blur the boundary between fake news creators and consumers. Depending on the situation, people can quickly change their roles from fake news consumers to creators, or vice versa (with or without intention). Furthermore, news creation and consumption are the most fundamental aspects of the relationship between news and people. However, a significant amount of fake news research has focused on news creation, while considerably less attention has been paid to news consumption (see Figs 1 & 2). This suggests that we must consider fake news comprehensively, across both consumption and creation.

https://doi.org/10.1371/journal.pone.0260080.g001


The papers, published by IEEE, ACM, ELSEVIER, arXiv, Wiley, and APA from 2010 to 2020, are classified by publisher, main category, sub-category, and evaluation method (left to right).

https://doi.org/10.1371/journal.pone.0260080.g002

In this paper, we look into fake news research through the lens of news creation and consumption (Fig 3). Our survey results offer different yet salient insights on fake news research compared with other survey papers (e.g., [13, 30, 31]), which primarily focus on fake news creation. The main contributions of our survey are as follows:

  • We investigate trends in fake news research from 2010 to 2020 and confirm a need for applying a comprehensive perspective to the fake news phenomenon.
  • We present fake news research through the lens of news creation and consumption with external and internal factors.
  • We examine key findings with a mental model approach, which highlights individuals’ differences in the understanding, expectations, or consumption of information.
  • We summarize our review and discuss complementary roles of computer and social sciences and potential future directions for fake news research.


We investigate fake news research trends (Section 2) and examine fake news creation and consumption through the lenses of external and internal factors. We also investigate research efforts to mitigate the external factors of fake news creation and consumption: (a) indicates fake news creation (Section 3), and (b) indicates fake news consumption (Section 4). “Possible moves” indicates that news consumers may create or propagate fake news without being aware of any negative impact.

https://doi.org/10.1371/journal.pone.0260080.g003

2 Fake news definition and trends

There is still no socially agreed-upon definition of fake news that encompasses false news and the various types of disinformation (e.g., satire, fabricated content) [30]. The definition continues to change over time and may vary depending on the research focus. Some research has defined fake news as false news based on the intention and factuality of the information [4, 15, 32–36]. For example, Allcott and Gentzkow [4] defined fake news as “news articles that are intentionally and verifiably false and could mislead readers.” Other studies have defined it as “a news article or message published and propagated through media, carrying false information regardless of the means and motives behind it” [13, 37–43]. Under this definition, fake news refers to false information that causes an individual to be deceived or to doubt the truth, and fake news serves its purpose only if it actually deceives or confuses consumers. Zhou and Zafarani [31] proposed a broad definition (“Fake news is false news.”) that encompasses false online content and a narrow definition (“Fake news is intentionally and verifiably false news published by a news outlet.”). The narrow definition is valid from the fake news creation perspective. However, given that fake news creators and consumers are now interchangeable (e.g., news consumers also act as gatekeepers for fake news propagation), it has become important to understand and investigate fake news from the consumption perspective as well. Thus, in this paper, we use the broad definition of fake news.

Our motivation for considering both news creation and consumption in fake news research was based on a trend analysis. We collected 2,277 fake news-related papers using four keywords (i.e., fake news, false information, misinformation, rumor) to identify longitudinal trends in fake news research from 2010 to 2020. The data collection was conducted from July to September 2020. The criterion for inclusion was whether any of these keywords appeared in the title or abstract. To reflect diverse research backgrounds/domains, we considered six primary publishers (ACM, IEEE, arXiv, APA, ELSEVIER, and Wiley). The number of papers collected from each publisher is as follows: 852 IEEE (37%), 639 ACM (28%), 463 ELSEVIER (20%), 142 arXiv (7%), 141 Wiley (6%), 40 APA (2%). We excluded 59 papers that lacked an abstract and used 2,218 papers for the analysis. We then randomly chose 200 papers, and two coders conducted manual inspection and categorization. Inter-coder reliability was verified with Cohen’s Kappa. The scores for each main/sub-category were higher than 0.72 (min: 0.72, max: 0.95, avg: 0.85), indicating that inter-coder reliability lies between “substantial” and “almost perfect” [44]. Through the coding procedure, we excluded non-English studies (n = 12) and reports on study protocols only (n = 6), leaving 182 papers in the synthesis. The PRISMA flow chart depicts the number of articles identified, included, and excluded (see Fig 1).
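The agreement statistic used above is straightforward to compute directly. The following is a minimal sketch in Python; the coder labels are hypothetical and not the study's actual coding data:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders labeling the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both coders labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each coder's marginal label distribution.
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels from two coders over ten papers.
coder1 = ["creation"] * 6 + ["consumption"] * 4
coder2 = ["creation"] * 5 + ["consumption"] * 5
print(round(cohens_kappa(coder1, coder2), 2))
```

In this hypothetical example the coders disagree on one of ten papers, yielding a kappa of 0.8, which falls in the "substantial" to "almost perfect" band [44] reported for the study's categories.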

The papers were categorized into two main categories: (1) creation (studies aiming to detect fake news or mitigate its spread) and (2) consumption (studies reporting the social impacts of fake news on individuals or societies and how to appropriately handle fake news). Each main category was then divided into sub-categories. Fig 4 shows the frequency of the literature by year and the overall trend of fake news research. The consumption perspective of fake news still has not received sufficient attention compared with the creation perspective (Fig 4(a)). Fake news studies have exploded since the 2016 US Presidential Election, and the upward trend continues. In the creation category, the majority of papers (135 out of 158; 85%) concerned false information (e.g., fake news, rumor, clickbait, spam) detection models (Fig 4(b)). In the consumption category, much of the research pertains to data-driven fake news trend analysis (18 out of 42; 43%) or fake content consumption behavior (16 out of 42; 38%), including studies on media literacy education and echo chamber awareness (Fig 4(c)).


We collected 2,277 fake news-related papers and randomly chose and categorized 200 papers. Each marker indicates the number of fake news studies per type published in a given year. Fig 4(a) shows the research trend of news creation and consumption (main category). Fig 4(b) and 4(c) show trends of the sub-categories of news creation and consumption. In Fig 4(b), “Miscellaneous” includes studies on stance/propaganda detection and a survey paper. In Fig 4(c), “Data-driven fake news trend analysis” mainly covers studies reporting the influence of fake news spread around specific political/social events (e.g., fake news in the 2016 US Presidential Election, rumors on Weibo after the 2015 Tianjin explosions). “Conspiracy theory” refers to an unverified rumor that was passed on to the public.

https://doi.org/10.1371/journal.pone.0260080.g004

3 Fake news creation

Fake news is no longer merely propaganda spread by inflammatory politicians; it is also made for financial benefit or personal enjoyment [45]. With the development of social media platforms, people often create completely false information for reasons beyond satire. Further, there is a vicious cycle in which this false information is abused by politicians and agitators.

Fake news creators indiscriminately produce fake news that exploits the behavioral and psychological characteristics of today’s news consumers [46]. For instance, the sleeper effect [47] refers to a phenomenon in which the persuasiveness of a message increases over time, even though its source has low credibility. In other words, after a long period of time, memories of the source fade and only the content tends to be remembered, regardless of the source’s reliability. Through this process, less reliable information becomes more persuasive over time. Fake news creators have effectively created and propagated fake news by targeting the public’s preference for consuming news through peripheral processing routes [35, 48].

Peripheral routes come from the elaboration likelihood model (ELM) [49], one of the representative psychological theories of persuasive-message processing. According to the ELM, the processing of a persuasive message can follow either the central or the peripheral route, depending on the level of involvement. If the message recipient puts a great deal of cognitive effort into processing, the central route is chosen. If processing of the message is limited due to personal characteristics or distractions, the peripheral route is chosen, and a decision is made based on secondary cues (e.g., the speaker, comments) rather than the logic or strength of the argument.

Wang et al. [50] demonstrated that most of the links shared or mentioned on social media have never even been clicked. This implies that many people perceive and process information in only a fragmentary way, relying on news headlines and on who shared the news rather than considering the logical flow of the news content.

In this section, we closely examine the external and internal factors affecting fake news creation, as well as the research efforts to mitigate their negative effects from the fake news creation perspective.

3.1 External factors: Fake news creation facilitators

We identified two external factors that facilitate fake news creation and propagation: (1) the unification of news creation, consumption, and distribution, and (2) the misuse of AI technology (see Fig 5).


We identify two external factors—the unification of news creation, consumption, and distribution, and the misuse of AI technology—that facilitate fake news creation.

https://doi.org/10.1371/journal.pone.0260080.g005

3.1.1 The unification of news creation, consumption, and distribution.

The public’s perception of news and the major media of news consumption have gradually changed. The public no longer passively consumes news exclusively through traditional news organizations with specific formats (e.g., the inverted pyramid style, verified sources), nor views news simply as a medium for information acquisition. The public’s active news consumption began in earnest with the advent of citizen journalism, in which citizens themselves engage in journalistic behavior [51], and became commonplace with the emergence of social media. As a result, the public came to prefer interactive media, where new information can be acquired, opinions can be offered, and the news can be discussed with other consumers. This environment has motivated the public to create content about their beliefs and deliver it to many people as “news.” For example, a recent police crackdown video posted on social media quickly spread around the world, influencing protesters and civic movements, and was only later reported by the mainstream media [52].

The boundaries between professional journalists and amateurs, and between news consumers and creators, are disappearing. This has led to a potential increase in deceptive communication, making news consumers suspicious and prone to misinterpreting reality. Online platforms (e.g., YouTube, Facebook) that allow users to freely produce and distribute content have grown significantly. As a result, fake news content can be used to attract secondary income (e.g., advertising fees from multinational enterprises), which accelerates fake news creation and propagation. An environment in which the public can consume only news that suits their preferences and personal cognitive biases has made it much easier for fake news creators to achieve their specific purposes (e.g., supporting a certain political party or a favored candidate).

3.1.2 The misuse of AI technology.

The development of AI technology has made it easier to build and use tools for creating fake news, and many studies over the past decade have confirmed the impact of these technologies—(1) social bots, (2) trolls, and (3) fake media—on social networks and democracy.

3.1.2.1 Social bots. Shao et al. [53] analyzed the pattern of fake news spread and confirmed that social bots play a significant role in fake news propagation, with automated social bot accounts heavily involved in the initial stage of spreading. In general, it is difficult for the public to determine whether such accounts are people or bots. In addition, social bots are not illegal, and many companies legally purchase them as part of their marketing; thus, it is not easy to curb the use of social bots systematically.

3.1.2.2 Trolls. The term “trolls” refers to people who deliberately cause conflict or division by posting inflammatory, provocative, or unrelated content in online communities. They aim to provoke people’s feelings or beliefs and to hinder mature discussion. For example, the Russian troll army has been active on social media, advancing its political agenda and causing social turmoil in the US [54]. Zannettou et al. [55] confirmed how effectively the Russian troll army spread fake news URLs on Twitter and how significantly it led other Twitter users to believe misleading information.

3.1.2.3 Fake media. It is now possible to manipulate or reproduce content in 2D or even 3D through AI technology. In particular, the advent of fake news using Deepfake technology (combining various images with an original video to generate a different video) has raised a major social concern that could not have been imagined before. Due to the popularity of image and video sharing on social media, such media have become a dominant form of news consumption, and Deepfake technology itself is becoming more advanced and is being applied to images and videos in a variety of domains. For example, a video clip of former US President Barack Obama criticizing Donald Trump, created by the US online media company BuzzFeed to highlight the influence and danger of Deepfakes, caused substantial social confusion [56].

3.2 Internal factors: Fake news creation purposes

We identified three main purposes for fake news creation—(1) ideological purposes, (2) monetary purposes, and (3) fear/panic reduction.

3.2.1 Ideological purpose.

Fake news has been created and propagated for political purposes by individuals or groups, either to benefit the parties or candidates they support or to undermine those on the opposing side. Fake news with such political purposes has been shown to negatively influence people and society. For instance, Russia created fake Facebook accounts that fueled political disputes and heightened polarization, affecting the 2016 US Presidential Election [57]. As polarization has intensified, there has also been a trend in the US of “unfriending” people who have different political tendencies [58]. This has led the public to decide whether to trust news regardless of its factuality and has worsened in-group biases. During the Brexit campaign in the UK, many selective news articles were exposed on Facebook, and social bots and trolls were confirmed to have been involved in shaping public opinion [59, 60].

3.2.2 Monetary purpose.

Financial benefit is another strong motivation for many fake news creators [34, 61]. Fake news websites usually reach the public through social media and profit from posted advertisements. The majority of fake websites focus on earning advertising revenue by spreading fake news that attracts readers’ attention, rather than on political goals. For example, during the 2016 US Presidential Election, young Macedonians in their teens and twenties used content from some extremely right-leaning US blogs to mass-produce fake news, earning huge advertising revenues [62]. This is also why fake news creators use provocative titles, such as clickbait headlines, to induce clicks and attempt to produce as many fake news articles as possible.

3.2.3 Fear and panic reduction.

In general, when epidemics spread around the world, rumors and absurd, false medical tips spread rapidly on social media. When verified information is lacking, people feel greatly anxious and afraid and easily believe such tips, regardless of whether they are true [63, 64]. The term infodemic, which first appeared during the 2003 SARS epidemic, describes this phenomenon [65]. Regarding COVID-19, health authorities have announced that preventing the creation and propagation of fake news about the virus is as important as curbing the contagiousness of COVID-19 itself [66, 67]. The spread of fake news in the absence of verified information has become more common around health-related social issues (e.g., infectious diseases), natural disasters, etc. For example, people with disorders affecting cognition (e.g., neurodegenerative disorders) tend to easily believe unverified medical news [68–70]. Robledo and Jankovic [68] confirmed that many fake or exaggerated medical articles mislead people with Parkinson’s disease by giving false hope. As another example, when wildfires broke out in Australia in 2019, a rumor that climate activists had set the fires to raise awareness of climate change quickly spread as fake news [71]. As a result, people became suspicious and tended to believe that the causes of climate change (e.g., global warming) may not be related to humans, despite scientific evidence and research data.

3.3 Fake news detection and prevention

The main purpose of fake news creation is to confuse or deceive people regardless of topic, social atmosphere, or timing. Because of this purpose, fake news tends to have similar frames and structural patterns, and many studies have attempted to mitigate its spread based on these identifiable patterns. In particular, research on developing computational models that detect fake information (text/images/videos) based on machine or deep learning techniques has been actively conducted, as summarized in Table 1. Other modeling studies address the credibility of weblogs [84, 85], communication quality [88], susceptibility level [90], and political stance [86, 87]. The table is intended to characterize the scope and direction of research on fake information detection (e.g., the features employed in each model), not to present an exhaustive list.


https://doi.org/10.1371/journal.pone.0260080.t001

3.3.1 Fake text information detection.

Research has considered many text-based features, including structural information (e.g., website URLs and headlines in all capital letters or with exclamations) and linguistic information (e.g., grammar, spelling, and punctuation errors). The sentiments of news articles, the frequency of the words used, information about the users who left comments on the articles, and social network information among users (connected through commenting, replying, liking, or following) were also used as key features for model development. These text-based models have been developed not only for fake news articles but also for other types of fake information, such as clickbait, fake reviews, spam, and spammers. Many of the models performed binary classification, distinguishing fake from non-fake articles, with accuracies ranging from 86% to 93%. Mainstream news articles were used to build most models, and some studies used articles on social media, such as Twitter [15, 17]. Some studies developed fake news detection models by extracting features from images, as well as text, in news articles [16, 17, 75].
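The structural cues named above (all-caps headline words, exclamation marks, clickbait phrasing) can be made concrete with a small feature extractor. This is an illustrative sketch only, not a model from the surveyed papers; the function name, feature names, and the tiny clickbait phrase list are our own assumptions:

```python
import re

def headline_features(headline: str) -> dict:
    """Extract simple structural cues of the kind used in text-based detectors."""
    words = headline.split()
    # Count fully uppercase alphabetic words (punctuation-attached words are
    # skipped for simplicity).
    caps = [w for w in words if w.isalpha() and w.isupper() and len(w) > 1]
    return {
        "num_words": len(words),
        "all_caps_ratio": len(caps) / max(len(words), 1),
        "num_exclamations": headline.count("!"),
        "num_question_marks": headline.count("?"),
        # A hypothetical, deliberately tiny clickbait phrase list.
        "has_clickbait_phrase": bool(
            re.search(r"\b(you won't believe|shocking|goes viral)\b",
                      headline.lower())
        ),
    }

f = headline_features("SHOCKING: You won't believe what happened next!!!")
print(f["num_exclamations"], f["has_clickbait_phrase"])
```

In the surveyed detection models, vectors of such features (combined with linguistic, sentiment, and social-network features) typically feed a binary classifier that separates fake from non-fake articles.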

3.3.2 Fake visual media detection.

The generative adversarial network (GAN) is an unsupervised learning method that estimates the probability distribution of original data and allows an artificial neural network to produce similar distributions [109]. With the advancement of GANs, it has become possible to replace faces in images with those of others. However, photos of famous celebrities have been misused (e.g., distorted into pornographic videos), raising concerns about the possible misuse of such technology [110] (e.g., creating rumors about a certain political candidate). To mitigate this, research has been conducted to develop detection models for fake images. Most studies developed binary classification models (fake image or not), and the accuracy of fake image detection models was high, ranging from 81% to 97%. However, challenges remain. Unlike fake news detection models, which use fact-checking websites or mainstream news for data verification or ground truth, fake image detection models were developed using the same or slightly modified image datasets (e.g., CelebA [97], FFHQ [99]), which calls for the collection and preparation of a large amount of highly diverse data.

4 Fake news consumption

4.1 External factors: Fake news consumption circumstances

The implicit social contract between civil society and the media has gradually disintegrated in modern society, and accordingly, citizens’ trust in the media began to decline [ 111 ]. In addition, the growing number of digital media platforms has changed people’s news consumption environment. This change has increased the diversity of news content and the autonomy of information creation and sharing. At the same time, however, it blurred the line between traditional mainstream media news and fake news in the Internet environment, contributing to polarization.

Here, we identified three external factors that have forced the public to encounter fake news: (1) the decline of trust in the mainstream media, (2) a high-choice media environment, and (3) the use of social media as a news platform .

4.1.1 Fall of mainstream media trust.

Misinformation and unverified or biased reports have gradually undermined the credibility of the mainstream media. According to the 2019 American mass media trust survey conducted by Gallup, only 13% of Americans said they trusted traditional mainstream media: newspapers or TV news [ 112 ]. The decline in traditional media trust is not only a problem for the US, but also a common concern in Europe and Asia [ 113 – 115 ].

4.1.2 High-choice media environment.

Over the past decade, news consumption channels have diversified radically, and the mainstream has shifted from broadcast and print media to mobile and social media. Despite the diversity of channels, personalized preferences and repetitive patterns have led people to be exposed to limited information and to consume such information ever more [116]. This selective news consumption attitude has reinforced the polarization of the public across many media environments [117]. In addition, the commercialization of digital platforms has created an environment in which cognitive bias can easily be strengthened. In other words, recommendation-driven digital platforms conveniently serve similar content continuously once a given type of content is consumed. As a result, users may easily fall into an echo chamber because they access only recommended content. A survey of 1,000 YouTube videos found that more than two-thirds of the videos contained content in favor of a particular candidate [118].

News consumption on social media does not simply mean the delivery of messages from creators to consumers. The multi-directionality of social media has blurred the boundaries between information creators and consumers. In other words, users already interact with one another in various ways, and when a new interaction type emerges and is supported by the platform, users will display further new kinds of interaction, which in turn influence how news information is consumed.

4.1.3 Use of social media as a news platform.

Here we focus on the most widely used social media platforms—YouTube, Facebook, and Twitter—each of which has characteristics that encourage limited news consumption.

First, YouTube is the most unidirectional of the social media platforms. Many YouTube creators tend to convey arguments in a strong, definitive tone through their videos, and these content characteristics lead viewers to judge the objectivity of the information via non-verbal elements (e.g., speaker, thumbnail, title, comments) rather than facts. Furthermore, many comments support the content of the video, which may increase the chances of viewers accepting somewhat biased information. In addition, YouTube’s video recommendation algorithm continuously exposes users who watch certain news to other news containing the same or similar information. This isolated pattern of content consumption could undermine viewers’ media literacy and is likely to create a screening effect that narrows what they see and hear.

Second, Facebook reveals little of a news article’s details because the platform ostensibly shows only the title, the number of likes, and the comments on a post. Often, users have to click on the article and follow the URL to read it. This structure and consumption-oriented design present obstacles that prevent users from checking the details of posts. As a result, users have become likely to make limited and biased judgments, perceiving content through provocative headlines and comments.

Third, the most salient feature of Twitter is anonymity, because Twitter lets users adopt their own pseudonyms [ 119 ]. Twitter also limits the number of characters per post, and compared to other platforms, users can produce and spread indiscriminate information anonymously, with no way to know who is behind an account [ 120 , 121 ]. By contrast, many accounts on Facebook operate under real names and generally share information with friends or followers. Anonymous information creators are thus not held accountable for the information they spread.

4.2 Internal factors: Cognitive mechanism

Due to the characteristics of the Internet and social media, people are accustomed to consuming information quickly, for example by reading only news headlines and glancing at the photos in news articles. This type of news consumption practice could lead people to evaluate news information mostly on the basis of their beliefs or values. It can make it easier for people to fall into an echo chamber and can cause further social confusion. We identified two internal factors affecting fake news consumption: (1) cognitive biases and (2) personal traits (see Fig 6 ).


https://doi.org/10.1371/journal.pone.0260080.g006

4.2.1 Cognitive biases.

Cognitive bias is an observer effect broadly recognized in cognitive science that includes basic statistical and memory errors [ 8 ]. However, this bias may vary depending on which factors most strongly affect individual judgments and choices. We identified five cognitive biases that affect fake news consumption: confirmation bias, in-group bias, choice-supportive bias, cognitive dissonance, and the primacy effect.

Confirmation bias refers to the human tendency to seek out information in line with personal thoughts or beliefs and to ignore information that goes against such beliefs. It stems from the human desire to be reaffirmed rather than to accept denials of one’s opinion or hypothesis. If the process of confirmation bias is repeated, a progressively more solid belief is formed, and the belief remains unchanged even after encountering logical and objective counterexamples. Evaluating information with an objective attitude is essential to properly investigating any social phenomenon, but confirmation bias significantly hinders this. Kunda [ 122 ] discussed experiments that investigated cognitive processes as a function of accuracy goals and directional goals. Her analysis demonstrated that people use different cognitive processes to achieve the two different goals. For those who pursue accuracy goals (reaching a “right conclusion”), information is used as a tool to determine whether they are right [ 123 ]; for those with directional goals (reaching a desirable conclusion), information is used as a tool to justify their claims. Thus, biased information processing is more frequently observed in people with directional goals [ 124 ].

People with directional goals desire to reach the conclusion they want. The more we emphasize the seriousness and omnipresence of fake news, the less able people with directional goals are to identify it. Moreover, their confirmation bias on social media could result in an echo chamber, triggering a differentiation of public opinion in the media. Media platform algorithms further strengthen this tendency toward biased information consumption (e.g., the filter bubble).

In-group bias is a phenomenon in which an individual favors the group that he or she belongs to. In-group bias has two causes [ 125 ]. One is a categorization process, which exaggerates the similarities among members within one category (the in-group) and the differences with others (the out-groups). Consequently, positive reactions toward the in-group and negative reactions (e.g., hostility) toward the out-group are both increased. The other is self-respect based on social identity theory: to evaluate the in-group positively, a member tends to perceive other group members as similar to himself or herself.

In-group bias has a significant impact on fake news consumption because of radical changes in the media environment [ 126 ]. The public recognizes issues and forms groups around them through social media. The emotions and intentions of such online groups can easily transfer to or develop into offline activities, such as demonstrations and rallies. Information exchange within such groups proceeds similarly to confirmation bias: if confirmation bias means clinging to one’s own beliefs, in-group bias equates the beliefs of one’s group with one’s own.

Choice-supportive bias refers to an individual’s tendency to justify his or her decision by retrospectively highlighting evidence that he or she did not actually consider when making it [ 127 ]. For instance, people sometimes have no particular purpose when they purchase a certain brand of product or service, or support a particular politician or political party, yet they emphasize that their choices at the time were right and inevitable. They also tend to focus more on positive aspects than on negative effects or consequences in order to justify their choice. However, these positive aspects can be distorted because they are mainly drawn from memory. Thus, choice-supportive bias can be regarded as a cognitive error caused by memory distortion.

Choice-supportive bias typically serves self-justification, which usually occurs in the context of external factors (e.g., maintaining social status or relationships) [ 7 ]. For example, if people express a certain political opinion within a social group, they may seek information with which to justify that opinion and minimize its flaws. In this process, people may accept fake news as a supporting source for their opinions.

Cognitive dissonance is based on the notion that psychological tension occurs when an individual holds two inconsistent perceptions [ 128 ]. Humans have a desire to identify and resolve the psychological tension that arises when cognitive dissonance is established. Regarding fake news consumption, people easily accept fake news if it aligns with their beliefs or faith. However, if such news works against their beliefs or faith, people define even real news as fake and consume biased information in order to avoid cognitive dissonance. This dynamic is quite similar to confirmation bias. Selective exposure to biased information intensifies its extent and impact on social media. In these circumstances, an individual’s cognitive state is likely to be formed by information from unclear sources, which can be seen as a negative state of perception; information consumers then selectively consume only information that harmonizes with these negative perceptions.

The primacy effect means that information presented earlier has a stronger effect on memory and decision-making than information presented later [ 129 ]. “Interference theory” [ 130 ] is often cited as a theoretical basis for the primacy effect: the impression formed by information presented earlier influences subsequent judgments and the formation of later impressions.

The significance of the primacy effect for fake news consumption is that it can be a starting point for biased cognitive processes. If an individual first encounters an issue in fake news and does not go through a critical thinking process about that information, he or she may form false attitudes toward the issue [ 131 , 132 ]. Fake news is a complex combination of fact and fiction, making it difficult for information consumers to judge correctly whether the news is right or wrong. These cognitive biases induce the selective collection of information that feels more valid to news consumers, rather than information that is actually valid.

4.2.2 Personal traits.

We identified two aspects of personal characteristics or traits that can influence one’s news consumption behaviors: susceptibility and personality.

4.2.2.1 Susceptibility. The most prominent feature of social media is that consumers can also be creators, so the boundaries between the creators and consumers of information become unclear. New media literacy (i.e., the ability to critically and suitably consume messages in a variety of digital media channels, such as social media) can have a significant impact on the degree of consumption and dissemination of fake news [ 133 , 134 ]. In other words, the higher one’s new media literacy, the more likely one is to take a critical standpoint toward fake news. The susceptibility level is also related to one’s selective news consumption behaviors. Bessi et al. [ 35 ] studied misinformation on Facebook and found that users who frequently interact with alternative media tend to interact with intentionally false claims more often.

Personality refers to an individual’s traits or behavioral style. Many scholars agree that personality can be largely divided into five categories (the Big Five)—extraversion, agreeableness, neuroticism, openness, and conscientiousness [ 135 , 136 ]—and have used them to understand the relationship between personality and news consumption.

Extraversion is related to active information use. Previous studies have confirmed that extraverts tend to use social media mainly to acquire information [ 137 ] and to better determine the factuality of news on social media [ 138 ]. Furthermore, people with high agreeableness, which refers to how friendly, warm, and tactful a person is, tend to trust real news more than fake news [ 138 ]. Neuroticism is a broad personality trait dimension representing the degree to which a person experiences the world as distressing, threatening, and unsafe. People with high neuroticism often show negative emotions or information sharing behavior [ 139 ], and neuroticism is positively related to fake news consumption [ 138 ]. Openness refers to the degree to which one enjoys new experiences. High openness is associated with high curiosity and engagement in learning [ 140 ], which enhances critical thinking ability and decreases the negative effects of fake news consumption [ 138 , 141 ]. Conscientiousness refers to a person’s work ethic, orderliness, and thoroughness [ 142 ]. People with high conscientiousness tend to regard social media use as a distraction from their tasks [ 143 – 145 ].

4.3 Fake news awareness and prevention

4.3.1 Decision-making support tools.

News on social media does not go through a verification process because of the high degree of freedom to create, share, and access information. One study predicted that by 2022, citizens in advanced countries would encounter more fake information than real information [ 146 ]. This indicates that the potential personal and social damage from fake news may increase. Paradoxically, many countries that suffer from fake news problems strongly guarantee freedom of expression under their constitutions; thus, it would be very difficult to block all possible production and distribution of fake news through laws and regulations. In this respect, what is needed is not only technical efforts to detect and prevent the production and dissemination of fake news but also social efforts to make news consumers aware of the characteristics of online fake information.

Inoculation theory holds that human attitudes and beliefs can develop psychological resistance through proper advance exposure to counter-arguments. To build the ability to strongly resist an argument, it is necessary to be exposed first to a weak version of that argument and to refute it. Doris-Down et al. [ 147 ] asked people from different political backgrounds to communicate directly through mobile apps and investigated whether this alleviated their echo-chamber tendencies. The participants did change: for example, they realized that they had much in common with people of conflicting political backgrounds and that what they had thought of as differences were actually trivial. Karduni et al. [ 148 ] provided comprehensive information (e.g., connections among news accounts and a summary of location entities) to study participants through a visual analytic system they developed and examined how the participants accepted fake news. Another study examined how people determine the veracity of news by building a system similar to social media and analyzing study participants’ eye movements while they read fake news articles [ 28 ].

Some research has applied the inoculation theory to gamification. A “Bad News” game was designed to proactively warn people and expose them to a certain amount of false information through interactions with the gamified system [ 29 , 149 ]. The results confirmed the high effectiveness of inoculation through the game and highlighted the need to educate people about how to respond appropriately to misinformation through computer systems and games [ 29 ].

4.3.2 Fake information propagation analysis.

Fake information tends to show a certain pattern in terms of consumption and propagation, and many studies have attempted to identify the propagation patterns of fake information (e.g., the count of unique users, the depth of a network) [ 150 – 153 ].
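Propagation patterns of this kind are typically summarized with simple cascade statistics. The sketch below is our own illustration, not taken from the cited studies; the pair-list input format and the function name are assumptions. It computes two of the metrics mentioned above: the count of unique users and the depth of a reshare cascade.

```python
from collections import defaultdict

def cascade_metrics(shares):
    """Compute simple propagation metrics for a share cascade.

    `shares` is a list of (user, parent_user) pairs, where parent_user is
    the account the user reshared from (None marks the original post).
    """
    unique_users = {user for user, _ in shares}
    children = defaultdict(list)
    roots = []
    for user, parent in shares:
        if parent is None:
            roots.append(user)
        else:
            children[parent].append(user)
    # Traverse the reshare tree from the root posts, tracking depth.
    frontier = [(root, 0) for root in roots]
    max_depth = 0
    while frontier:
        user, depth = frontier.pop()
        max_depth = max(max_depth, depth)
        frontier.extend((child, depth + 1) for child in children[user])
    return {"unique_users": len(unique_users), "max_depth": max_depth}

# Toy cascade: "a" posts, "b" and "c" reshare it, "d" reshares "b", "e" reshares "d".
shares = [("a", None), ("b", "a"), ("c", "a"), ("d", "b"), ("e", "d")]
print(cascade_metrics(shares))  # {'unique_users': 5, 'max_depth': 3}
```

In real studies these metrics are computed over millions of cascades; the structure of the computation, however, is this simple tree traversal.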

4.3.2.1 Psychological characteristics. The theoretical foundation of research on the diffusion patterns of fake news lies in psychology [ 154 , 155 ], because psychological theories explain why and how people react to fake news. For instance, a news consumer who comes across fake news will first have doubts, judge the news against his or her background knowledge, and want to clarify the sources in the news. This process ends when sufficient evidence has been collected, and the news consumer ends up accepting, ignoring, or remaining suspicious of the news. The psychological elements that can be identified in this process are doubt, denial, conjecture, and skepticism [ 156 ].

4.3.2.2 Temporal characteristics. Fake news exhibits different propagation patterns from real news. The propagation of real news tends to decrease slowly over time after a single peak in the public’s interest, whereas fake news has no fixed timing for peak consumption, and multiple peaks appear in many cases [ 157 ]. Tambuscio et al. [ 151 ] showed that the pattern of rumor spread resembles an existing epidemic model [ 158 ]. Their empirical observations confirmed that the same fake news reappears periodically and infects news consumers. For example, rumors carrying the malicious political message that “Obama is a Muslim” were still being spread a decade later [ 159 ]. This pattern of proliferation and consumption suggests that fake news may be consumed for a certain purpose.
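The epidemic analogy can be made concrete with a minimal mean-field sketch, loosely inspired by hoax-spreading models such as the one above; the state names and rate parameters here are illustrative assumptions, not the published model. The key ingredient is a nonzero forgetting rate, which returns fact-checked users to the susceptible pool and thus allows belief to flare up again rather than decay after a single peak.

```python
def hoax_epidemic(steps=50, beta=0.3, gamma=0.1, forget=0.05):
    """Mean-field sketch of an SIS-style hoax model (parameters illustrative).

    Susceptible users become believers at rate beta, believers become
    fact-checkers at rate gamma, and fact-checkers forget at rate `forget`,
    becoming susceptible again -- which permits recurring peaks of belief.
    Returns the fraction of believers at each step.
    """
    S, B, F = 0.99, 0.01, 0.0  # susceptible, believer, fact-checker fractions
    history = []
    for _ in range(steps):
        new_B = beta * S * B   # contagion: believers spread the hoax
        new_F = gamma * B      # verification: believers get fact-checked
        relapse = forget * F   # forgetting: checked users become susceptible
        S = S - new_B + relapse
        B = B + new_B - new_F
        F = F + new_F - relapse
        history.append(round(B, 4))
    return history

beliefs = hoax_epidemic()
print(max(beliefs))  # peak fraction of believers over the run
```

Because S + B + F is conserved at each step, the fractions stay in [0, 1]; varying `forget` shifts the model between a single-peak decay and the recurring peaks observed for fake news.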

5 A mental-model approach

We have examined news consumers’ susceptibility to fake news due to internal and external factors, including personal traits, cognitive biases, and contexts. Beyond an investigation at the factor level, we seek to understand people’s susceptibility to misinformation by considering their internal representations and external environments holistically [ 5 ]. Specifically, we propose to comprehend people’s mental models of fake news. In this section, we first briefly introduce mental models and discuss their connection to misinformation. Then, we discuss the potential contributions of a mental-model approach to the field of misinformation.

5.1 Mental models

A mental model is an internal representation or simulation that people carry in their minds of how the world works [ 160 , 161 ]. Typically, mental models are constructed in people’s working memory, where information from long-term memory and from the environment is combined [ 162 ]. Individuals thus represent complex phenomena at some level of abstraction, based on their own experiences and understanding of the contexts. People rely on mental models to understand and predict their interactions with environments, artifacts, and computing systems, as well as with other individuals [ 163 , 164 ]. Generally, individuals’ ability to represent continually changing environments is limited and unique. Thus, mental models tend to be functional and dynamic but not necessarily accurate or complete [ 163 , 165 ]. Mental models also differ between groups, in particular between experts and novices [ 164 , 166 ].

5.2 Mental models and misinformation

Mental models have been proposed to understand human behaviors in spatial navigation [ 167 ], learning [ 168 , 169 ], deductive reasoning [ 170 ], mental representations of real or imagined situations [ 171 ], risk communication [ 172 ], and usable cybersecurity and privacy [ 166 , 173 , 174 ]. People use mental models to facilitate their comprehension, judgment, and actions, and mental models can be the basis of individual behaviors. In particular, the connection between a mental-model approach and misinformation has been revealed in risk communication regarding vaccines [ 175 , 176 ]. For example, Downs et al. [ 176 ] interviewed 30 parents from three US cities to understand their mental models of vaccination for their children aged 18 to 23 months. The results revealed two mental models of vaccination: (1) health oriented : parents who focused on health-oriented topics trusted anecdotal communication more than statistical arguments; and (2) risk oriented : parents with some knowledge of vaccine mechanisms trusted communication with statistical arguments more than anecdotal information. The authors also found that many parents, even those favorable to vaccination, could be confused by the ongoing debate, suggesting a degree of incompleteness in their mental models.

5.3 Potential contributions of a mental-model approach

Recognizing and dealing with the plurality of news consumers’ perceptions, cognition, and actions is currently considered a key aspect of misinformation research. Thus, a mental-model approach could significantly improve our understanding of people’s susceptibility to misinformation, as well as inform the development of mechanisms to mitigate misinformation.

One possible direction is to investigate demographic differences in the context of mental models. As more Americans have adopted social media, social media users have become more representative of the population. Usage by older adults has increased in recent years, from about 12% in 2012 to about 35% in 2016 ( https://www.pewresearch.org/internet/fact-sheet/social-media/ ). Guess et al. (2019) analyzed participants’ profiles and their sharing activity on Facebook during the 2016 US Presidential campaign and revealed a strong age effect. Controlling for the effects of ideology and education, their results showed that Facebook users over 65 years old shared nearly seven times as many articles from fake news domains as those between 18 and 29 years old, and about 2.3 times as many as those aged 45 to 65.

Besides older adults, college students have been shown to be more susceptible to misinformation [ 177 ]. We can identify which mental models a particular age group ascribes to and compare the incompleteness or incorrectness of those mental models by age. Conversely, such a comparison might inform the design of general mechanisms to mitigate misinformation independent of the concrete mental models possessed by different types of users.

Users’ actions and decisions are directed by their mental models. We can also explore news consumers’ mental models and discover unanticipated and potentially risky human-system interactions, which will inform the development and design of user interactions and education endeavors to mitigate misinformation.

A mental-model approach supplies an important, and as yet unconsidered, dimension to fake news research. To date, research on people’s susceptibility to fake news on social media has lagged behind computational research on fake news. Scholars have not considered news consumers’ susceptibility across the spectrum of their internal representations and external environments. An investigation from the mental-model perspective is a step toward addressing this need.

6 Discussion and future work

In this section, we highlight the importance of balancing research efforts on fake news creation and consumption and discuss potential future directions of fake news research.

6.1 Leveraging insights of social science in model development

Fake news detection models have achieved great performance. The feature groups used in these models are diverse, including linguistic, visual, sentiment, topic, user, and network features, and many models combine multiple groups to increase performance. Using datasets of different sizes and characteristics, research has demonstrated the effectiveness of the models through comparative analysis. However, much research has relied on features that are easily quantifiable, and many of them lack a clear justification or rationale for being used in modeling. For example, what is the relationship between the use of question marks (?), exclamation marks (!), or quotation marks (“…”) and fake news? What does it mean that a longer description relates to news trustworthiness? There are also many important aspects that could serve as additional features for modeling but have not yet been quantified. For example, journalistic style is an important characteristic that determines a level of information credibility [ 156 ], but it is challenging to quantify accurately and reliably. There are many intentions (e.g., ideological standpoint, financial gain, panic creation) that authors may implicitly or explicitly display in a post, but measuring them is not easy or straightforward. Social science research can play a role here by providing valid research methodologies for measuring such subjective perceptions or notions, considering their various types and characteristics depending on the context or environment. Some research efforts in this direction include quantifying salient factors of people’s decision-making identified in social science research and demonstrating the effectiveness of using those factors to improve model performance and interpret model results [ 70 ]. Yet more research that applies socio-technical aspects to model development and application is needed to better study the complex characteristics of fake news.
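As a concrete illustration of the “easily quantifiable” features discussed above, a few surface-level stylistic counts might be extracted as follows. The feature set is our own hedged example, not a prescription from any particular model; whether any of these counts actually signals fake news is precisely the empirical question that needs social-science grounding.

```python
import re

def surface_features(text):
    """Extract easily quantifiable stylistic features of the kind many
    detection models use. Their link to veracity is an empirical question,
    not a given."""
    words = text.split()
    return {
        "n_question": text.count("?"),
        "n_exclaim": text.count("!"),
        # Count straight and curly double quotation marks.
        "n_quotes": len(re.findall(r'["\u201c\u201d]', text)),
        "n_words": len(words),
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "caps_ratio": sum(c.isupper() for c in text) / max(len(text), 1),
    }

print(surface_features('SHOCKING! "Experts" can\'t believe it?'))
```

Features like these would then be fed, alongside user and network features, into a downstream classifier; the point of the discussion above is that each entry in such a dictionary deserves an explicit rationale.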

6.1.1 Future direction.

Insights from social science may help develop transparent and applicable fake news detection models. Such socio-technical models may allow news consumers to better understand fake news detection results and their application, as well as to take more appropriate actions to control the fake news phenomenon.

6.2 Lack of research on fake news consumption

Regarding fake news consumption, we confirmed that only a few studies involve the development of web- or mobile-based systems to help consumers become aware of the possible dangers of fake news. Those studies [ 28 , 29 , 147 , 148 ] tried to demonstrate the feasibility of self-awareness systems through user studies. However, due to the limited number of study participants (min: 11, max: 60) and their lack of demographic diversity (e.g., only college students of one school or the psychology research pool at the authors’ institution), the generalizability and applicability of these systems are still questionable. On the other hand, research that develops fake news detection models or network analyses to identify the patterns of fake news propagation has been relatively active. These results can be used to identify people (or entities) who intentionally create malicious fake content; however, it is still challenging to restrain people who originally showed no signs of sharing or creating fake information but who later manipulate real news into fake news or disseminate fake news out of malicious intent or cognitive bias.

In other words, although fake news detection models have shown promising performance, their influence may be exerted only in limited cases. This is because fake news detection models rely heavily on data labeled as fake by fact-checking institutions or sites. If someone manipulates news that has not been covered by fact-checking, the format or characteristics of the manipulated news may differ from the conventional features identified and managed in the detection model, and such differences may not be captured. Therefore, to prevent the fake news phenomenon more effectively, research needs to consider changes in news consumption.

6.2.1 Future direction.

It may be desirable to help people recognize that their news consumption behaviors (e.g., liking, commenting, sharing) can have a significant ripple effect. Developing a system that tracks people’s news consumption and creation activities, measures the similarities and differences between those activities, and presents the resulting behaviors or patterns back to people would be helpful.
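One simple building block for such a system could be a similarity measure between a user’s consumption and creation activities. The sketch below is a toy illustration of our own: the topic-label input format and the use of cosine similarity over topic counts are assumptions, not a design from the surveyed work.

```python
from collections import Counter
from math import sqrt

def topic_similarity(consumed, created):
    """Cosine similarity between the topics a user consumes and the topics
    they create or share -- a toy proxy for how closely one's creation
    activity echoes one's consumption activity."""
    a, b = Counter(consumed), Counter(created)
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# A user who mostly consumes politics and also posts about politics.
sim = topic_similarity(["politics", "politics", "health"], ["politics", "sports"])
print(round(sim, 3))  # 0.632
```

A dashboard built on a measure like this could show users how narrowly their sharing tracks their feed, which is one concrete way to surface the ripple effect described above.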

6.3 Limited coverage of fact-checking websites and regulatory approach

Some well-known fact-checking websites (e.g., snopes.com, politifact.com) cover news shared mostly on the Internet and label the authenticity or deficiencies of the content (e.g., miscaptioned, legend, misattributed). However, these fact-checking websites offer limited coverage in that they serve only those who are willing to check the veracity of certain news articles. Social media platforms have been making continuous efforts to mitigate the spread of fake news. For example, Facebook exposes content that fact-checkers have assessed as false relatively less often in news feeds or shows warning indicators [ 178 ]. Instagram has also changed how warning labels are displayed when users attempt to view content that has been assessed as false [ 179 ]. However, this type of interface could lead news consumers to rely on algorithmic decision-making rather than self-judgment, because these ostensible regulations (e.g., warning labels) tend to lack transparency about the decision. As we explained previously, this is related to filter bubbles. Therefore, it is important to provide a clearer and more transparent communicative interface for news consumers to access and understand the information underlying algorithmic results.

6.3.1 Future direction.

It is necessary to create a news consumption environment that offers wider coverage of fake news and more transparent information about algorithmic decisions on news credibility. This will help news consumers preemptively avoid fake news and contribute more to preventing fake news propagation. Consumers could also make more appropriate and accurate decisions based on their understanding of the news.

6.4 New media literacy

With the diversification of news channels, we can easily consume news. However, we are also in a media environment that asks us to self-critically verify news content (e.g., whether the news title reads like clickbait, whether the news title and content are related), which in reality is hard to do. Moreover, on social media, news consumers can be news creators or reproducers, and during this process news information can be changed to fit a consumer’s beliefs or interests. The problem is that people may not know how to verify news content or may not be aware that the information could be distorted or biased. As the news consumption environment changes rapidly and faces the modern media deluge, media literacy education grows more important. Media literacy refers to the ability to decipher media content and, in a broad sense, to understand the principles of media operation and media content sensibly and critically, and in turn to utilize and creatively reproduce content. Being a “lazy thinker” makes one more susceptible to fake news than having a “partisan bias” [ 32 ]. As “screen time” (i.e., time spent looking at smartphone, computer, or television screens) has become more common, people are consuming only stimulating (e.g., sensually pleasurable and exciting) information [ 180 ]. This could gradually lower one’s capacity for critical, reasonable thinking, leading to wrong judgments and actions. In France, as the fake news problem became more serious, great efforts were made to hold a “European Media Literacy Week” in schools [ 181 ]. The US is also making legislative efforts to add media literacy to the general education curriculum [ 182 ]. However, the acquisition of new media literacy through education may be limited to people in school (e.g., young students) and would be challenging to expand to wider populations. Thus, there is also a need for supplementary tools and research efforts to support more people in critically interpreting and appropriately consuming news.

In addition, more critical social attention is needed because visual content (e.g., images, videos), which has traditionally been accepted as fact, can be maliciously manipulated while still looking very natural. People increasingly prefer watching YouTube videos for news consumption over reading news articles. Visual content makes it relatively easy for news consumers to trust the content compared to text-based information and makes it easier to obtain information simply by playing a video. Since visual content will become an even more dominant medium in future news consumption, educating and inoculating news consumers about the potential threats of fake information in such media will be important. More attention and research are needed on technology that supports awareness of fake visual content.

6.4.1 Future direction.

Research in both computer science and social science should find ways (e.g., developing game-based education systems or curricula) to help news consumers become aware of their news consumption practices and maintain sound news consumption behaviors.

7 Conclusion

We presented a comprehensive summary of fake news research through the lenses of news creation and consumption. Our trend analysis indicated steady growth in fake news research, with far more attention devoted to news creation than to news consumption. By examining internal and external factors, we unpacked the characteristics of fake news creation and consumption and presented the use of people's mental models to better understand susceptibility to misinformation. Based on this review, we suggested four future directions for fake news research: (1) socio-technical model development using insights from social science, (2) in-depth understanding of news consumption behaviors, (3) preemptive decision-making and action support, and (4) educational, new media literacy support. These directions aim to reduce the gaps between news creation and consumption and between computer science and social science research, and to support healthy news environments.

Supporting information

S1 Checklist.

https://doi.org/10.1371/journal.pone.0260080.s001

  • 2. Goldman R. Reading fake news, Pakistani minister directs nuclear threat at Israel. The New York Times . 2016;24.
  • 6. Lévy P, Bononno R. Collective intelligence: Mankind’s emerging world in cyberspace. Perseus Books; 1997.
  • 11. Jamieson KH, Cappella JN. Echo chamber: Rush Limbaugh and the conservative media establishment. Oxford University Press; 2008.
  • 14. Shu K, Cui L, Wang S, Lee D, Liu H. defend: Explainable fake news detection. In: In Proc. of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD); 2019. p. 395–405.
  • 15. Ruchansky N, Seo S, Liu Y. Csi: A hybrid deep model for fake news detection. In: In Proc. of the 2017 ACM on Conference on Information and Knowledge Management (CIKM); 2017. p. 797–806.
  • 16. Cui L, Wang S, Lee D. Same: sentiment-aware multi-modal embedding for detecting fake news. In: In Proc. of the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM); 2019. p. 41–48.
  • 17. Wang Y, Ma F, Jin Z, Yuan Y, Xun G, Jha K, et al. Eann: Event adversarial neural networks for multi-modal fake news detection. In: In Proc. of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data mining (KDD); 2018. p. 849–857.
  • 18. Nørregaard J, Horne BD, Adalı S. Nela-gt-2018: A large multi-labelled news for the study of misinformation in news articles. In: In Proc. of the International AAAI Conference on Web and Social Media (ICWSM). vol. 13; 2019. p. 630–638.
  • 20. Nguyen AT, Kharosekar A, Krishnan S, Krishnan S, Tate E, Wallace BC, et al. Believe it or not: Designing a human-ai partnership for mixed-initiative fact-checking. In: In Proc. of the 31st Annual ACM Symposium on User Interface Software and Technology (UIST); 2018. p. 189–199.
  • 23. Brandon J. Terrifying high-tech porn: creepy 'deepfake' videos are on the rise. Fox News . 2018;20.
  • 24. Nguyen TT, Nguyen CM, Nguyen DT, Nguyen DT, Nahavandi S. Deep Learning for Deepfakes Creation and Detection. arXiv . 2019;1.
  • 25. Rossler A, Cozzolino D, Verdoliva L, Riess C, Thies J, Nießner M. Faceforensics++: Learning to detect manipulated facial images. In: IEEE International Conference on Computer Vision (ICCV); 2019. p. 1–11.
  • 26. Nirkin Y, Keller Y, Hassner T. Fsgan: Subject agnostic face swapping and reenactment. In: In Proc. of the IEEE International Conference on Computer Vision (ICCV); 2019. p. 7184–7193.
  • 28. Simko J, Hanakova M, Racsko P, Tomlein M, Moro R, Bielikova M. Fake news reading on social media: an eye-tracking study. In: In Proc. of the 30th ACM Conference on Hypertext and Social Media (HT); 2019. p. 221–230.
  • 35. Horne B, Adali S. This just in: Fake news packs a lot in title, uses simpler, repetitive content in text body, more similar to satire than real news. In: In Proc. of the 11th International AAAI Conference on Web and Social Media (ICWSM); 2017. p. 759–766.
  • 36. Golbeck J, Mauriello M, Auxier B, Bhanushali KH, Bonk C, Bouzaghrane MA, et al. Fake news vs satire: A dataset and analysis. In: In Proc. of the 10th ACM Conference on Web Science (WebSci); 2018. p. 17–21.
  • 37. Mustafaraj E, Metaxas PT. The fake news spreading plague: was it preventable? In: In Proc. of the 9th ACM Conference on Web Science (WebSci); 2017. p. 235–239.
  • 40. Jin Z, Cao J, Zhang Y, Luo J. News verification by exploiting conflicting social viewpoints in microblogs. In: In Proc. of the 13th AAAI Conference on Artificial Intelligence (AAAI); 2016. p. 2972–2978.
  • 41. Rubin VL, Conroy N, Chen Y, Cornwell S. Fake news or truth? using satirical cues to detect potentially misleading news. In: In Proc. of the Second Workshop on Computational Approaches to Deception Detection ; 2016. p. 7–17.
  • 45. Kahneman D, Tversky A. Prospect theory: An analysis of decision under risk. In: Handbook of the fundamentals of financial decision making: Part I. World Scientific; 2013. p. 99–127.
  • 46. Hanitzsch T, Wahl-Jorgensen K. Journalism studies: Developments, challenges, and future directions. The Handbook of Journalism Studies . 2020; p. 3–20.
  • 48. Osatuyi B, Hughes J. A tale of two internet news platforms-real vs. fake: An elaboration likelihood model perspective. In: In Proc. of the 51st Hawaii International Conference on System Sciences (HICSS); 2018. p. 3986–3994.
  • 49. Cacioppo JT, Petty RE. The elaboration likelihood model of persuasion. ACR North American Advances. 1984; p. 673–675.
  • 50. Wang LX, Ramachandran A, Chaintreau A. Measuring click and share dynamics on social media: a reproducible and validated approach. In Proc of the 10th International AAAI Conference on Web and Social Media (ICWSM). 2016; p. 108–113.
  • 51. Bowman S, Willis C. How audiences are shaping the future of news and information. We Media . 2003; p. 1–66.
  • 52. Hill E, Tiefenthäler A, Triebert C, Jordan D, Willis H, Stein R. 8 Minutes and 46 Seconds: How George Floyd Was Killed in Police Custody; 2020. Available from: https://www.nytimes.com/2020/06/18/us/george-floyd-timing.html .
  • 54. Carroll O. St Petersburg ‘troll farm’ had 90 dedicated staff working to influence US election campaign; 2017.
  • 55. Zannettou S, Caulfield T, Setzer W, Sirivianos M, Stringhini G, Blackburn J. Who let the trolls out? towards understanding state-sponsored trolls. In: Proc. of the 10th ACM Conference on Web Science (WebSci); 2019. p. 353–362.
  • 56. Vincent J. Watch Jordan Peele use AI to make Barack Obama deliver a PSA about fake news. The Verge . 2018;17.
  • 58. Linder M. Block. Mute. Unfriend. Tensions rise on Facebook after election results. Chicago Tribune . 2016;9.
  • 60. Howard PN, Kollanyi B. Bots, #StrongerIn, and #Brexit: computational propaganda during the UK-EU referendum. arXiv . 2016; p. arXiv–1606.
  • 61. Kasra M, Shen C, O’Brien JF. Seeing is believing: how people fail to identify fake images on the Web. In Proc of the 2018 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI). 2018; p. 1–6.
  • 62. Kirby EJ. The city getting rich from fake news. BBC News . 2016;5.
  • 63. Hu Z, Yang Z, Li Q, Zhang A, Huang Y. Infodemiological study on COVID-19 epidemic and COVID-19 infodemic. Preprints . 2020; p. 2020020380.
  • 71. Knaus C. Disinformation and lies are spreading faster than Australia’s bushfires. The Guardian . 2020;11.
  • 72. Karimi H, Roy P, Saba-Sadiya S, Tang J. Multi-source multi-class fake news detection. In: In Proc. of the 27th International Conference on Computational Linguistics ; 2018. p. 1546–1557.
  • 73. Wang WY. “Liar, liar pants on fire”: A new benchmark dataset for fake news detection. arXiv . 2017; p. arXiv–1705.
  • 74. Pérez-Rosas V, Kleinberg B, Lefevre A, Mihalcea R. Automatic Detection of Fake News. arXiv . 2017; p. arXiv–1708.
  • 75. Yang Y, Zheng L, Zhang J, Cui Q, Li Z, Yu PS. TI-CNN: Convolutional Neural Networks for Fake News Detection. arXiv . 2018; p. arXiv–1806.
  • 76. Kumar V, Khattar D, Gairola S, Kumar Lal Y, Varma V. Identifying clickbait: A multi-strategy approach using neural networks. In: In Proc. of the 41st International ACM SIGIR Conference on Research & Development in Information Retrieval (SIGIR); 2018. p. 1225–1228.
  • 77. Yoon S, Park K, Shin J, Lim H, Won S, Cha M, et al. Detecting incongruity between news headline and body text via a deep hierarchical encoder. In: Proc. of the AAAI Conference on Artificial Intelligence. vol. 33; 2019. p. 791–800.
  • 78. Lu Y, Zhang L, Xiao Y, Li Y. Simultaneously detecting fake reviews and review spammers using factor graph model. In: In Proc. of the 5th Annual ACM Web Science Conference (WebSci); 2013. p. 225–233.
  • 79. Mukherjee A, Venkataraman V, Liu B, Glance N. What yelp fake review filter might be doing? In: In Proc. of The International AAAI Conference on Weblogs and Social Media (ICWSM); 2013. p. 409–418.
  • 80. Benevenuto F, Magno G, Rodrigues T, Almeida V. Detecting spammers on twitter. In: In Proc. of the 8th Annual Collaboration , Electronic messaging , Anti-Abuse and Spam Conference (CEAS). vol. 6; 2010. p. 12.
  • 81. Lee K, Caverlee J, Webb S. Uncovering social spammers: social honeypots+ machine learning. In: In Proc. of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR); 2010. p. 435–442.
  • 82. Li FH, Huang M, Yang Y, Zhu X. Learning to identify review spam. In: In Proc. of the 22nd International Joint Conference on Artificial Intelligence (IJCAI); 2011. p. 2488–2493.
  • 83. Wang J, Wen R, Wu C, Huang Y, Xion J. Fdgars: Fraudster detection via graph convolutional networks in online app review system. In: In Proc. of The 2019 World Wide Web Conference (WWW); 2019. p. 310–316.
  • 84. Castillo C, Mendoza M, Poblete B. Information credibility on twitter. In: In Proc. of the 20th International Conference on World Wide Web (WWW); 2011. p. 675–684.
  • 85. Jo Y, Kim M, Han K. How Do Humans Assess the Credibility on Web Blogs: Qualifying and Verifying Human Factors with Machine Learning. In: In Proc. of the 2019 CHI Conference on Human Factors in Computing Systems (CHI); 2019. p. 1–12.
  • 86. Che X, Metaxa-Kakavouli D, Hancock JT. Fake News in the News: An Analysis of Partisan Coverage of the Fake News Phenomenon. In: In Proc. of the 21st ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW); 2018. p. 289–292.
  • 87. Potthast M, Kiesel J, Reinartz K, Bevendorff J, Stein B. A Stylometric Inquiry into Hyperpartisan and Fake News. arXiv . 2017; p. arXiv–1702.
  • 89. Popat K, Mukherjee S, Strötgen J, Weikum G. Credibility assessment of textual claims on the web. In: In Proc. of the 25th ACM International on Conference on Information and Knowledge Management (CIKM); 2016. p. 2173–2178.
  • 90. Shen TJ, Cowell R, Gupta A, Le T, Yadav A, Lee D. How gullible are you? Predicting susceptibility to fake news. In: In Proc. of the 10th ACM Conference on Web Science (WebSci); 2019. p. 287–288.
  • 91. Gupta A, Lamba H, Kumaraguru P, Joshi A. Faking sandy: characterizing and identifying fake images on twitter during hurricane sandy. In: In Proc. of the 22nd International Conference on World Wide Web ; 2013. p. 729–736.
  • 92. He P, Li H, Wang H. Detection of fake images via the ensemble of deep representations from multi color spaces. In: In Proc. of the 26th IEEE International Conference on Image Processing (ICIP). IEEE; 2019. p. 2299–2303.
  • 93. Sun Y, Chen Y, Wang X, Tang X. Deep learning face representation by joint identification-verification. Advances in Neural Information Processing Systems . 2014; p. 1–9.
  • 94. Huh M, Liu A, Owens A, Efros AA. Fighting fake news: Image splice detection via learned self-consistency. In: In Proc. of the European Conference on Computer Vision (ECCV); 2018. p. 101–117.
  • 95. Dang H, Liu F, Stehouwer J, Liu X, Jain AK. On the detection of digital face manipulation. In: In Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2020. p. 5781–5790.
  • 96. Tariq S, Lee S, Kim H, Shin Y, Woo SS. Detecting both machine and human created fake face images in the wild. In Proc of the 2nd International Workshop on Multimedia Privacy and Security (MPS). 2018; p. 81–87.
  • 97. Liu Z, Luo P, Wang X, Tang X. Deep learning face attributes in the wild. In: In Proc. of the IEEE International Conference on Computer Vision (ICCV); 2015. p. 3730–3738.
  • 98. Wang R, Ma L, Juefei-Xu F, Xie X, Wang J, Liu Y. Fakespotter: A simple baseline for spotting ai-synthesized fake faces. arXiv . 2019; p. arXiv–1909.
  • 99. Karras T, Laine S, Aila T. A style-based generator architecture for generative adversarial networks. In: In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2019. p. 4401–4410.
  • 100. Yang X, Li Y, Qi H, Lyu S. Exposing GAN-synthesized faces using landmark locations. In Proc of the ACM Workshop on Information Hiding and Multimedia Security (IH&MMSec). 2019; p. 113–118.
  • 101. Zhang X, Karaman S, Chang SF. Detecting and simulating artifacts in gan fake images. In Proc of the 2019 IEEE International Workshop on Information Forensics and Security (WIFS). 2019; p. 1–6.
  • 102. Amerini I, Galteri L, Caldelli R, Del Bimbo A. Deepfake video detection through optical flow based cnn. In Proc of the IEEE International Conference on Computer Vision Workshops (ICCV). 2019; p. 1205–1207.
  • 103. Li Y, Lyu S. Exposing deepfake videos by detecting face warping artifacts. arXiv . 2018; p. 46–52.
  • 104. Korshunov P, Marcel S. Deepfakes: a new threat to face recognition? assessment and detection. arXiv . 2018; p. arXiv–1812.
  • 105. Jeon H, Bang Y, Woo SS. Faketalkerdetect: Effective and practical realistic neural talking head detection with a highly unbalanced dataset. In Proc of the IEEE International Conference on Computer Vision Workshops (ICCV). 2019; p. 1285–1287.
  • 106. Chung JS, Nagrani A, Zisserman A. Voxceleb2: Deep speaker recognition. arXiv . 2018; p. arXiv–1806.
  • 107. Songsri-in K, Zafeiriou S. Complement face forensic detection and localization with faciallandmarks. arXiv . 2019; p. arXiv–1910.
  • 108. Ma S, Cui L, Dai D, Wei F, Sun X. Livebot: Generating live video comments based on visual and textual contexts. In Proc of the AAAI Conference on Artificial Intelligence (AAAI). 2019; p. 6810–6817.
  • 109. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial nets. Advances in Neural Information Processing Systems . 2014; p. arXiv–1406.
  • 110. Metz R. The number of deepfake videos online is spiking. Most are porn; 2019. Available from: https://cnn.it/3xPJRT2 .
  • 111. Strömbäck J. In search of a standard: Four models of democracy and their normative implications for journalism. Journalism Studies . 2005; p. 331–345.
  • 112. Brenan M. Americans’ Trust in Mass Media Edges Down to 41%; 2019. Available from: https://bit.ly/3ejl6ql .
  • 114. Ladd JM. Why Americans hate the news media and how it matters. Princeton University Press; 2012.
  • 116. Weisberg J. Bubble trouble: Is web personalization turning us into solipsistic twits; 2011. Available from: https://bit.ly/3xOGFqD .
  • 117. Pariser E. The filter bubble: How the new personalized web is changing what we read and how we think. Penguin; 2011.
  • 118. Lewis P, McCormick E. How an ex-YouTube insider investigated its secret algorithm. The Guardian . 2018;2.
  • 120. Kavanaugh AL, Yang S, Li LT, Sheetz SD, Fox EA, et al. Microblogging in crisis situations: Mass protests in Iran, Tunisia, Egypt; 2011.
  • 121. Mustafaraj E, Metaxas PT, Finn S, Monroy-Hernández A. Hiding in Plain Sight: A Tale of Trust and Mistrust inside a Community of Citizen Reporters. In Proc of the 6th International AAAI Conference on Weblogs and Social Media (ICWSM) . 2012; p. 250–257.
  • 125. Tajfel H. Human groups and social categories: Studies in social psychology. Cup Archive ; 1981.
  • 127. Correia V, Festinger L. Biased argumentation and critical thinking. Rhetoric and Cognition: Theoretical Perspectives and Persuasive Strategies . 2014; p. 89–110.
  • 128. Festinger L. A theory of cognitive dissonance. Stanford University Press; 1957.
  • 136. John OP, Srivastava S, et al. The Big Five trait taxonomy: History, measurement, and theoretical perspectives. Handbook of Personality: theory and research . 1999; p. 102–138.
  • 138. Shu K, Wang S, Liu H. Understanding user profiles on social media for fake news detection. In: 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR). IEEE; 2018. p. 430–435.
  • 142. Costa PT, McCrae RR. The NEO personality inventory. Psychological Assessment Resources; 1985.
  • 146. Panetta K. Gartner top strategic predictions for 2018 and beyond; 2017. Available from: https://gtnr.it/33kuljQ .
  • 147. Doris-Down A, Versee H, Gilbert E. Political blend: an application designed to bring people together based on political differences. In Proc of the 6th International Conference on Communities and Technologies (C&T). 2013; p. 120–130.
  • 148. Karduni A, Wesslen R, Santhanam S, Cho I, Volkova S, Arendt D, et al. Can You Verifi This? Studying Uncertainty and Decision-Making About Misinformation Using Visual Analytics. In Proc of the 12th International AAAI Conference on Web and Social Media (ICWSM). 2018;12(1).
  • 149. Basol M, Roozenbeek J, van der Linden S. Good news about bad news: gamified inoculation boosts confidence and cognitive immunity against fake news. Journal of Cognition . 2020;3(1).
  • 151. Tambuscio M, Ruffo G, Flammini A, Menczer F. Fact-checking effect on viral hoaxes: A model of misinformation spread in social networks. In Proc of the 24th International Conference on World Wide Web (WWW). 2015; p. 977–982.
  • 152. Friggeri A, Adamic L, Eckles D, Cheng J. Rumor cascades. In Proc of the 8th International AAAI Conference on Weblogs and Social Media (ICWSM) . 2014;8.
  • 153. Lerman K, Ghosh R. Information contagion: An empirical study of the spread of news on digg and twitter social networks. arXiv . 2010; p. arXiv–1003.
  • 155. Cantril H. The invasion from Mars: A study in the psychology of panic. Transaction Publishers; 1952.
  • 158. Bailey NT, et al. The mathematical theory of infectious diseases and its applications. Charles Griffin & Company Ltd; 1975.
  • 159. Pew Forum on Religion & Public Life. Growing Number of Americans Say Obama Is a Muslim; 2010.
  • 160. Craik KJW. The nature of explanation. Cambridge University Press; 1943.
  • 161. Johnson-Laird PN. Mental models: Towards a cognitive science of language, inference, and consciousness. 6. Harvard University Press; 1983.
  • 162. Johnson-Laird PN, Girotto V, Legrenzi P. Mental models: a gentle guide for outsiders. Sistemi Intelligenti . 1998;9(68).
  • 164. Rouse WB, Morris NM. On looking into the black box: Prospects and limits in the search for mental models. Psychological Bulletin . 1986;100(3).
  • 166. Wash R, Rader E. Influencing mental models of security: a research agenda. In Proc of the 2011 New Security Paradigms Workshop (NSPW). 2011; p. 57–66.
  • 167. Tversky B. Cognitive maps, cognitive collages, and spatial mental models. In Proc of European conference on spatial information theory (COSIT). 1993; p. 14–24.
  • 169. Mayer RE, Mathias A, Wetzell K. Fostering understanding of multimedia messages through pre-training: Evidence for a two-stage theory of mental model construction. Journal of Experimental Psychology: Applied . 2002;8(3).
  • 172. Morgan MG, Fischhoff B, Bostrom A, Atman CJ, et al. Risk communication: A mental models approach. Cambridge University Press; 2002.
  • 174. Kang R, Dabbish L, Fruchter N, Kiesler S. “My Data Just Goes Everywhere:” User mental models of the internet and implications for privacy and security. In Proc of 11th Symposium On Usable Privacy and Security . 2015; p. 39–52.
  • 178. Facebook Journalism Project. Facebook’s Approach to Fact-Checking: How It Works; 2020. https://bit.ly/34QgOlj .
  • 179. Sardarizadeh S. Instagram fact-check: Can a new flagging tool stop fake news?; 2019. Available from: https://bbc.in/33fg5ZR .
  • 180. Greenfield S. Mind change: How digital technologies are leaving their mark on our brains. Random House Incorporated ; 2015.
  • 181. European Commission. European Media Literacy Week; 2020. https://bit.ly/36H9MR3 .
  • 182. Media Literacy Now. U.S. media literacy policy report 2020; 2020. https://bit.ly/33LkLqQ .


Computer Science > Computers and Society

Title: Studying Fake News Spreading, Polarisation Dynamics, and Manipulation by Bots: A Tale of Networks and Language

Abstract: With the explosive growth of online social media, the ancient problem of information disorders interfering with news diffusion has surfaced with a renewed intensity threatening our democracies, public health, and news outlets' credibility. Therefore, thousands of scientific papers have been published in a relatively short period, making researchers of different disciplines struggle with an information overload problem. The aim of this survey is threefold: (1) we present the results of a network-based analysis of the existing multidisciplinary literature to support the search for relevant trends and central publications; (2) we describe the main results and necessary background to attack the problem under a computational perspective; (3) we review selected contributions using network science as a unifying framework and computational linguistics as the tool to make sense of the shared content. Although scholars working on computational linguistics and networks traditionally belong to different scientific communities, we expect that those interested in the area of fake news should be aware of crucial aspects of both disciplines.


A comprehensive review on automatic detection of fake news on social media

  • Published: 26 October 2023


  • Manish Kumar Singh (ORCID: orcid.org/0000-0002-4216-7920),
  • Jawed Ahmed,
  • Mohammad Afshar Alam,
  • Kamlesh Kumar Raghuvanshi &
  • Sachin Kumar


Social media sites are now quite popular among internet users for sharing news and opinions. This has become possible due to inexpensive Internet access, the easy availability of digital devices, and the no-cost policy for creating user accounts on social media platforms. People are fascinated by social media sites because they can easily connect with others to share their interests, news, and opinions. According to studies, someone who lacks credibility is more likely to spread false information to achieve goals of any kind, be it influencing political opinions, earning attention, or making money. The automatic detection of fake news on social media has thus emerged as a highly anticipated research area in recent years. This paper offers a comprehensive review of the automatic detection of fake news on social media platforms. It details the key machine/deep learning models and techniques proposed (or developed) from 2011 to 2022, along with the performance metrics of each. The paper discusses (a) the key challenges faced during the development of an effective and efficient fake news detection system, (b) popular datasets for carrying out fake news detection research, and (c) the major research gaps and future research directions in the area of automatic fake news detection on social media.
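At the baseline end of the machine-learning techniques such reviews cover sit simple bag-of-words text classifiers. The following sketch trains a tiny Naive Bayes classifier with Laplace smoothing; the headlines and labels are invented for this example, not drawn from any surveyed dataset.

```python
import math
from collections import Counter, defaultdict

# Invented toy training data: (headline, label) pairs.
train = [
    ("shocking miracle cure doctors hate", "fake"),
    ("you won't believe this one trick", "fake"),
    ("senate passes budget bill after vote", "real"),
    ("central bank holds interest rates steady", "real"),
]

def fit(examples):
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    vocab = {w for c in word_counts.values() for w in c}
    return word_counts, label_counts, vocab

def predict(text, word_counts, label_counts, vocab):
    total = sum(label_counts.values())
    best, best_lp = None, -math.inf
    for label, n in label_counts.items():
        lp = math.log(n / total)                              # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            lp += math.log((word_counts[label][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = fit(train)
print(predict("miracle trick doctors hate", *model))  # -> fake
```

Deep learning models in the surveyed literature replace the word counts with learned embeddings and sequence encoders, but the supervised classification framing is the same.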



Data Availability

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

  • https://www.scopus.com
  • https://www.kaggle.com/general/232077
  • https://paperswithcode.com/dataset/liar
  • https://www.kaggle.com/datasets/mdepak/fakenewsnet
  • https://github.com/thiagovas/bs-detector-dataset/
  • https://www.kaggle.com/general/35739
  • https://github.com/BuzzFeedNews/
  • https://github.com/ww-rm/weibo-rmdt
  • https://www.kaggle.com/datasets/usharengaraju/pheme-dataset
  • https://github.com/compsocial/CREDBANK-data
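For instance, the LIAR dataset linked above is distributed as tab-separated text whose first three columns are a statement ID, a six-way truthfulness label, and the claim itself (verify the column layout against the dataset's own README). A minimal loader sketch, using a single representative row inlined as a string:

```python
import csv
import io

# One representative row in LIAR's format: ID, label, statement.
sample_tsv = (
    "2635.json\tfalse\tSays the Annies List political group supports "
    "third-trimester abortions on demand.\n"
)

def load_liar(fileobj):
    # Parse a LIAR-style TSV into dicts keyed by the first three columns.
    return [
        {"id": row[0], "label": row[1], "statement": row[2]}
        for row in csv.reader(fileobj, delimiter="\t")
    ]

records = load_liar(io.StringIO(sample_tsv))
print(records[0]["label"])  # -> false
```

In practice one would pass an open file handle for `train.tsv` instead of the in-memory string used here.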


Mridha MF, Keya AJ, Hamid MA, Monowar MM, Rahman MS (2021) A comprehensive review on fake news detection with deep learning. IEEE Access 9:156151–156170

Bani-Hani A, Adedugbe O, Benkhelifa E, Majdalawieh M (2022) Fandet semantic model: an owl ontology for context-based fake news detection on social media. Combating fake news with computational intelligence techniques, pp 91–125

Rubin VL, Conroy NJ, Chen Y (2015) Towards news verification: deception detection methods for news discourse. In: Hawaii international conference on system sciences, pp 5–8

Ferreira W, Vlachos A (2016) Emergent: a novel data-set for stance classification. In: Proceedings of the 2016 conference of the north American chapter of the association for computational linguistics: human language technologies. ACL

Magnini B, Zanoli R, Dagan I, Eichler K, Neumann G, Noh T-G, Pado S, Stern A, Levy O (2014) The excitement open platform for textual inferences. In: Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations, pp 43–48

Jin Z, Cao J, Jiang Y-G, Zhang Y (2014) News credibility evaluation on microblog with a hierarchical propagation model. In: 2014 IEEE international conference on data mining. IEEE, pp 230–239

Janze C, Risius M (2017) Automatic detection of fake news on social media platforms

Wang WY (2017) liar liar pants on fire: a new benchmark dataset for fake news detection. In: Proceedings of the 55th annual meeting of the association for computational linguistics. Association for Computational Linguistics, (vol 2: Short Papers), pp 422–426. https://doi.org/10.18653/v1/P17-2067 . https://aclanthology.org/P17-2067

Buntain C, Golbeck J (2017) Automatically identifying fake news in popular twitter threads. In: 2017 IEEE international conference on smart cloud (SmartCloud). IEEE, pp 208–215

Castillo C, Mendoza M, Poblete B (2013) Predicting information credibility in time-sensitive social media. Internet Res

Kotteti CMM, Dong X, Li N, Qian L (2018) Fake news detection enhancement with data imputation. In: 2018 IEEE 16th intl conf on dependable autonomic and secure computing 16th intl conf on pervasive intelligence and computing 4th intl conf on big data intelligence and computing and cyber science and technology congress (DASC/PiCom/DataCom/CyberSciTech). IEEE, pp 187–192

Pan JZ, Pavlova S, Li C, Li N, Li Y, Liu J (2018) Content based fake news detection using knowledge graphs. In: International semantic web conference. Springer, pp 669–683

Della Vedova ML, Tacchini E, Moret S, Ballarin G, DiPierro M, de Alfaro L (2018) Automatic online fake news detection combining content and social signals. In: 2018 22nd conference of open innovations association (FRUCT). IEEE, pp 272–279

Atodiresei C-S, Tănăselea A, Iftene A (2018) Identifying fake news and fake users on twitter. Procedia Comput Sci 126:451–461

Wang Y, Ma F, Jin Z, Yuan Y, Xun G, Jha K, Su L, Gao J (2018) Eann: event adversarial neural networks for multi-modal fake news detection. In: Proceedings of the 24th Acm Sigkdd international conference on knowledge discovery & data mining, pp 849–857

Ajao O, Bhowmik D, Zargari S (2018) Fake news identification on twitter with hybrid cnn and rnn models. In: Proceedings of the 9th international conference on social media and society, pp 226–230

Karimi H, Roy P, Saba-Sadiya S, Tang J (2018) Multi-source multi-class fake news detection. In: Proceedings of the 27th international conference on computational linguistics, pp 1546–1557

Wu L, Liu H (2018) Tracing fake-news footprints: characterizing social media messages by how they propagate. In: Proceedings of the eleventh ACM international conference on web search and data mining, pp 637–645

Gupta S, Thirukovalluru R, Sinha M, Mannarswamy S (2018) Cimtdetect: a community infused matrix-tensor coupled factorization based method for fake news detection. In: 2018 IEEE/ACM international conference on advances in social networks analysis and mining (ASONAM). IEEE, pp 278–281

Dong M, Yao L, Wang X, Benatallah B, Sheng QZ, Huang H (2018) Dual: A deep unified attention model with latent relation representations for fake news detection. In: International conference on web information systems engineering. Springer, pp 199–209

Guacho GB, Abdali S, Shah N, Papalexakis EE (2018) Semi-supervised content-based detection of misinformation via tensor embeddings. In: 2018 IEEE/ACM international conference on advances in social networks analysis and mining (ASONAM). IEEE, pp 322–325

Helmstetter S, Paulheim H (2018) Weakly supervised learning for fake news detection on twitter. In: 2018 IEEE/ACM international conference on advances in social networks analysis and mining (ASONAM). IEEE, pp 274–277

Liu Y, Wu Y-F (2018) Early detection of fake news on social media through propagation path classification with recurrent and convolutional networks. In: Proceedings of the AAAI conference on artificial intelligence, vol 32

Shu K, Zhou X, Wang S, Zafarani R, Liu H (2019) The role of user profiles for fake news detection. In: Proceedings of the 2019 IEEE/ACM international conference on advances in social networks analysis and mining, pp 436–439

Reis JC, Correia A, Murai F, Veloso A, Benevenuto F (2019) Supervised learning for fake news detection. IEEE Intell Syst 34(2):76–81

Olivieri A, Shabani S, Sokhn M, Cudré-Mauroux P (2019) Creating task-generic features for fake news detection. In: Proceedings of the 52nd Hawaii international conference on system sciences

Bharadwaj P, Shao Z (2019) Fake news detection with semantic features and text mining. Int J Nat Lang Comput 8

Rasool T, Butt WH, Shaukat A, Akram MU (2019) Multi-label fake news detection using multi-layered supervised learning. In: Proceedings of the 2019 11th international conference on computer and automation engineering, pp 73–77

Traylor T, Straub J, Snell N et al (2019) Classifying fake news articles using natural language processing to identify in-article attribution as a supervised learning estimator. In: 2019 IEEE 13th international conference on semantic computing (ICSC). IEEE, pp 445–449

Shu K, Mahudeswaran D, Liu H (2019) Fakenewstracker: a tool for fake news collection detection and visualization. Comput Math Organ Theory 25(1):60–71

Yang S, Shu K, Wang S, Gu R, Wu F, Liu H (2019) Unsupervised fake news detection on social media: a generative approach. In: Proceedings of the AAAI conference on artificial intelligence, vol 33, pp 5644–5651

Kesarwani A, Chauhan SS, Nair AR (2020) Fake news detection on social media using k-nearest neighbor classifier. In: 2020 International conference on advances in computing and communication engineering (ICACCE). IEEE, pp 1–4

Shu K, Mahudeswaran D, Wang S, Liu H (2020) Hierarchical propagation networks for fake news detection: investigation and exploitation. In: Proceedings of the international AAAI conference on web and social media, vol 14, pp 626–637

Zhou X, Jain A, Phoha VV, Zafarani R (2020) Fake news early detection: a theory-driven model. Digit Threat Res Pract 1(2):1–25

Lu Y-J, Li C-T (2020) Gcan: graph-aware co-attention networks for explainable fake news detection on social media. In: Proceedings of the 58th annual meeting of the association for computational linguistics, pp 505–514. https://doi.org/10.18653/v1/2020.acl-main.48

Ni S, Li J, Kao H-Y (2021) Mvan: multi-view attention networks for fake news detection on social media. IEEE Access 9:106907–106917

Sahoo SR, Gupta BB (2021) Multiple features based approach for automatic fake news detection on social networks using deep learning. Appl Soft Comput 100:106983

Kaliyar RK, Goswami A, Narang P (2021) Fakebert: fake news detection in social media with a bert-based deep learning approach. Multimed Tools Appl 80(8):11765–11788

Dou Y, Shu K, Xia C, Yu PS, Sun L (2021) User preference-aware fake news detection. In: Proceedings of the 44th international ACM SIGIR conference on research and development in information retrieval, pp 2051–2055

Verma PK, Agrawal P, Amorim I, Prodan R (2021) Welfake: word embedding over linguistic features for fake news detection. IEEE Trans Comput Soc Syst 8(4):881–893

Ozbay FA, Alatas B (2021) Adaptive salp swarm optimization algorithms with inertia weights for novel fake news detection model in online social media. Multimed Tools Appl 80(26–27):34333–34357

Nasir JA, Khan OS, Varlamis I (2021) Fake news detection: a hybrid cnn-rnn based deep learning approach. Int J Inf Manag Data Insights 1(1):100007

Iwendi C, Mohan S, Ibeke E, Ahmadian A, Ciano T et al (2022) Covid-19 fake news sentiment analysis. Comput Electr Eng 101:107967

Seddari N, Derhab A, Belaoued M, Halboob W, Al-Muhtadi J, Bouras A (2022) A hybrid linguistic and knowledge-based analysis approach for fake news detection on social media. IEEE Access

Min E, Rong Y, Bian Y, Xu T, Zhao P, Huang J, Ananiadou S (2022) Divide-and-conquer: post-user interaction network for fake news detection on social media. In: Proceedings of the acm web conference 2022, pp 1148–1158

Galli A, Masciari E, Moscato V, Sperlí G (2022) A comprehensive benchmark for fake news detection. J Intell Inf Syst 59(1):237–261

Sadeghi F, Bidgoly AJ, Amirkhani H (2022) Fake news detection on social media using a natural language inference approach. Multimed Tools Appl 81(23):33801–33821

Chi H, Liao B (2022) A quantitative argumentation-based automated explainable decision system for fake news detection on social media. Knowl-Based Syst 242:108378

Choudhury D, Acharjee T (2022) A novel approach to fake news detection in social networks using genetic algorithm applying machine learning classifiers. Multimed Tools Appl 1–17

Shao C, Hui P-M, Wang L, Jiang X, Flammini A, Menczer F, Ciampaglia GL (2018) Anatomy of an online misinformation network. Plos One 13(4):0196087

Pérez-Rosas V, Kleinberg B, Lefevre A, Mihalcea R (2018) Automatic detection of fake news. In: Proceedings of the 27th international conference on computational linguistics. Association for Computational Linguistics, pp 3391–3401. https://aclanthology.org/C18-1287

Singh MK, Ahmed J, Raghuvanshi KK, Alam MA (2023) Bharatfakenewskosh: a data repository for fake news research in India. In: International conference on smart computing and communication. Springer, pp 277–288

Al-Rubaie A, Al-Jadir L, Al-Hashimi M (2019) Detecting arabic fake news using machine learning techniques. In: Proceedings of the 2019 3rd international conference on computer science and artificial intelligence (CSAI), pp 201–206

Beauvais C (2022) Fake news: why do we believe it. Joint Bone Spine 89(4):105371

Djenouri Y, Belhadi A, Srivastava G, Lin JC-W (2023) Advanced pattern-mining system for fake news analysis. IEEE Trans Comput Soc Syst

Raponi S, Khalifa Z, Oligeri G, Di Pietro R (2022) Fake news propagation: a review of epidemic models datasets and insights. ACM Trans Web 16(3):1–34

Ahmed U, Lin JC-W, Diaz VG (2022) Automatically temporal labeled data generation using positional lexicon expansion for focus time estimation of news articles. ACM Trans Asian Low-Resour Lang Inf Process

Ahmed U, Lin JC-W, Srivastava G, Yun U (2023) Semi-supervised lexicon-aware embedding for news article time estimation. ACM Trans Asian Low-Resour Lang Inf Process

Raza S, Ding C (2022) Fake news detection based on news content and social contexts: a transformer-based approach. Int J Data Sci Anal 13(4):335–362

Download references

The authors received no specific funding for this work.

Author information

Authors and Affiliations

Department of Computer Science Engineering, School of Engineering Science & Technology, Jamia Hamdard University, New Delhi, 110062, India

Manish Kumar Singh, Jawed Ahmed & Mohammad Afshar Alam

Department of Computer Science, Ramanujan College, University of Delhi, Kalkaji, New Delhi, 110019, Delhi, India

Kamlesh Kumar Raghuvanshi

Cluster Innovation Center, University of Delhi, GC Narang Road, New Delhi, 110007, Delhi, India

Sachin Kumar


Corresponding author

Correspondence to Manish Kumar Singh .

Ethics declarations

Conflicts of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Singh, M.K., Ahmed, J., Alam, M.A. et al. A comprehensive review on automatic detection of fake news on social media. Multimed Tools Appl (2023). https://doi.org/10.1007/s11042-023-17377-4


Received: 08 February 2023

Revised: 27 June 2023

Accepted: 01 October 2023

Published: 26 October 2023

DOI: https://doi.org/10.1007/s11042-023-17377-4


  • Social media
  • Detection methods
  • Machine learning

Fake News Detection on Social Media: A Systematic Survey

Expert Commentary

Fake news and the spread of misinformation: A research roundup

This collection of research offers insights into the impacts of fake news and other forms of misinformation, including fake Twitter images, and how people use the internet to spread rumors and misinformation.



This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License .

by Denise-Marie Ordway, The Journalist's Resource September 1, 2017


It’s too soon to say whether Google’s and Facebook’s attempts to clamp down on fake news will have a significant impact. But fabricated stories posing as serious journalism are not likely to go away as they have become a means for some writers to make money and potentially influence public opinion. Even as Americans recognize that fake news causes confusion about current issues and events, they continue to circulate it. A December 2016 survey by the Pew Research Center suggests that 23 percent of U.S. adults have shared fake news, knowingly or unknowingly, with friends and others.

“Fake news” is a term that can mean different things, depending on the context. News satire is often called fake news, as are parodies such as the “Saturday Night Live” mock newscast Weekend Update. Much of the fake news that flooded the internet during the 2016 election season consisted of written pieces and recorded segments promoting false information or perpetuating conspiracy theories. Some news organizations published reports spotlighting examples of hoaxes, fake news and misinformation on Election Day 2016.

The news media has written a lot about fake news and other forms of misinformation, but scholars are still trying to understand it — for example, how it travels and why some people believe it and even seek it out. Below, Journalist’s Resource has pulled together academic studies to help newsrooms better understand the problem and its impacts. Two other resources that may be helpful are the Poynter Institute’s tips on debunking fake news stories and the First Draft Partner Network, a global collaboration of newsrooms, social media platforms and fact-checking organizations that was launched in September 2016 to battle fake news. In mid-2018, JR’s managing editor, Denise-Marie Ordway, wrote an article for Harvard Business Review explaining what researchers know to date about the amount of misinformation people consume, why they believe it and the best ways to fight it.

—————————

“The Science of Fake News” Lazer, David M. J.; et al. Science, March 2018. DOI: 10.1126/science.aao2998.

Summary: “The rise of fake news highlights the erosion of long-standing institutional bulwarks against misinformation in the internet age. Concern over the problem is global. However, much remains unknown regarding the vulnerabilities of individuals, institutions, and society to manipulations by malicious actors. A new system of safeguards is needed. Below, we discuss extant social and computer science research regarding belief in fake news and the mechanisms by which it spreads. Fake news has a long history, but we focus on unanswered scientific questions raised by the proliferation of its most recent, politically oriented incarnation. Beyond selected references in the text, suggested further reading can be found in the supplementary materials.”

“Who Falls for Fake News? The Roles of Bullshit Receptivity, Overclaiming, Familiarity, and Analytical Thinking” Pennycook, Gordon; Rand, David G. May 2018. Available at SSRN. DOI: 10.2139/ssrn.3023545.

Abstract: “Inaccurate beliefs pose a threat to democracy and fake news represents a particularly egregious and direct avenue by which inaccurate beliefs have been propagated via social media. Here we present three studies (MTurk, N = 1,606) investigating the cognitive psychological profile of individuals who fall prey to fake news. We find consistent evidence that the tendency to ascribe profundity to randomly generated sentences — pseudo-profound bullshit receptivity — correlates positively with perceptions of fake news accuracy, and negatively with the ability to differentiate between fake and real news (media truth discernment). Relatedly, individuals who overclaim regarding their level of knowledge (i.e. who produce bullshit) also perceive fake news as more accurate. Conversely, the tendency to ascribe profundity to prototypically profound (non-bullshit) quotations is not associated with media truth discernment; and both profundity measures are positively correlated with willingness to share both fake and real news on social media. We also replicate prior results regarding analytic thinking — which correlates negatively with perceived accuracy of fake news and positively with media truth discernment — and shed further light on this relationship by showing that it is not moderated by the presence versus absence of information about the news headline’s source (which has no effect on perceived accuracy), or by prior familiarity with the news headlines (which correlates positively with perceived accuracy of fake and real news). Our results suggest that belief in fake news has similar cognitive properties to other forms of bullshit receptivity, and reinforce the important role that analytic thinking plays in the recognition of misinformation.”

“Social Media and Fake News in the 2016 Election” Allcott, Hunt; Gentzkow, Matthew. Working paper for the National Bureau of Economic Research, No. 23089, 2017.

Abstract: “We present new evidence on the role of false stories circulated on social media prior to the 2016 U.S. presidential election. Drawing on audience data, archives of fact-checking websites, and results from a new online survey, we find: (i) social media was an important but not dominant source of news in the run-up to the election, with 14 percent of Americans calling social media their “most important” source of election news; (ii) of the known false news stories that appeared in the three months before the election, those favoring Trump were shared a total of 30 million times on Facebook, while those favoring Clinton were shared eight million times; (iii) the average American saw and remembered 0.92 pro-Trump fake news stories and 0.23 pro-Clinton fake news stories, with just over half of those who recalled seeing fake news stories believing them; (iv) for fake news to have changed the outcome of the election, a single fake article would need to have had the same persuasive effect as 36 television campaign ads.”

“Debunking: A Meta-Analysis of the Psychological Efficacy of Messages Countering Misinformation” Chan, Man-pui Sally; Jones, Christopher R.; Jamieson, Kathleen Hall; Albarracín, Dolores. Psychological Science, September 2017. DOI: 10.1177/0956797617714579.

Abstract: “This meta-analysis investigated the factors underlying effective messages to counter attitudes and beliefs based on misinformation. Because misinformation can lead to poor decisions about consequential matters and is persistent and difficult to correct, debunking it is an important scientific and public-policy goal. This meta-analysis (k = 52, N = 6,878) revealed large effects for presenting misinformation (ds = 2.41–3.08), debunking (ds = 1.14–1.33), and the persistence of misinformation in the face of debunking (ds = 0.75–1.06). Persistence was stronger and the debunking effect was weaker when audiences generated reasons in support of the initial misinformation. A detailed debunking message correlated positively with the debunking effect. Surprisingly, however, a detailed debunking message also correlated positively with the misinformation-persistence effect.”
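The effect sizes reported above (the d values) are Cohen’s d, a standardized mean difference: the gap between two group means divided by their pooled standard deviation. As a quick illustrative sketch — using made-up ratings, not data from the study — this is how such a value is computed:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: difference of means over the pooled (Bessel-corrected) SD."""
    n1, n2 = len(group_a), len(group_b)
    m1, m2 = sum(group_a) / n1, sum(group_b) / n2
    # Sample variances with the n-1 correction
    v1 = sum((x - m1) ** 2 for x in group_a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group_b) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical 7-point belief ratings for two conditions (illustrative only)
misinformation_only = [2, 4, 6]
after_debunking = [1, 2, 3]
print(round(cohens_d(misinformation_only, after_debunking), 2))  # ≈ 1.26
```

By the usual rule of thumb, d around 0.8 is already a “large” effect, which puts the values of 1–3 reported in this meta-analysis into perspective.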

“Displacing Misinformation about Events: An Experimental Test of Causal Corrections” Nyhan, Brendan; Reifler, Jason. Journal of Experimental Political Science, 2015. doi: 10.1017/XPS.2014.22.

Abstract: “Misinformation can be very difficult to correct and may have lasting effects even after it is discredited. One reason for this persistence is the manner in which people make causal inferences based on available information about a given event or outcome. As a result, false information may continue to influence beliefs and attitudes even after being debunked if it is not replaced by an alternate causal explanation. We test this hypothesis using an experimental paradigm adapted from the psychology literature on the continued influence effect and find that a causal explanation for an unexplained event is significantly more effective than a denial even when the denial is backed by unusually strong evidence. This result has significant implications for how to most effectively counter misinformation about controversial political events and outcomes.”

“Rumors and Health Care Reform: Experiments in Political Misinformation” Berinsky, Adam J. British Journal of Political Science, 2015. doi: 10.1017/S0007123415000186.

Abstract: “This article explores belief in political rumors surrounding the health care reforms enacted by Congress in 2010. Refuting rumors with statements from unlikely sources can, under certain circumstances, increase the willingness of citizens to reject rumors regardless of their own political predilections. Such source credibility effects, while well known in the political persuasion literature, have not been applied to the study of rumor. Though source credibility appears to be an effective tool for debunking political rumors, risks remain. Drawing upon research from psychology on ‘fluency’ — the ease of information recall — this article argues that rumors acquire power through familiarity. Attempting to quash rumors through direct refutation may facilitate their diffusion by increasing fluency. The empirical results find that merely repeating a rumor increases its power.”

“Rumors and Factitious Informational Blends: The Role of the Web in Speculative Politics” Rojecki, Andrew; Meraz, Sharon. New Media & Society, 2016. doi: 10.1177/1461444814535724.

Abstract: “The World Wide Web has changed the dynamics of information transmission and agenda-setting. Facts mingle with half-truths and untruths to create factitious informational blends (FIBs) that drive speculative politics. We specify an information environment that mirrors and contributes to a polarized political system and develop a methodology that measures the interaction of the two. We do so by examining the evolution of two comparable claims during the 2004 presidential campaign in three streams of data: (1) web pages, (2) Google searches, and (3) media coverage. We find that the web is not sufficient alone for spreading misinformation, but it leads the agenda for traditional media. We find no evidence for equality of influence in network actors.”

“Analyzing How People Orient to and Spread Rumors in Social Media by Looking at Conversational Threads” Zubiaga, Arkaitz; et al. PLOS ONE, 2016. doi: 10.1371/journal.pone.0150989.

Abstract: “As breaking news unfolds people increasingly rely on social media to stay abreast of the latest updates. The use of social media in such situations comes with the caveat that new information being released piecemeal may encourage rumors, many of which remain unverified long after their point of release. Little is known, however, about the dynamics of the life cycle of a social media rumor. In this paper we present a methodology that has enabled us to collect, identify and annotate a dataset of 330 rumor threads (4,842 tweets) associated with 9 newsworthy events. We analyze this dataset to understand how users spread, support, or deny rumors that are later proven true or false, by distinguishing two levels of status in a rumor life cycle i.e., before and after its veracity status is resolved. The identification of rumors associated with each event, as well as the tweet that resolved each rumor as true or false, was performed by journalist members of the research team who tracked the events in real time. Our study shows that rumors that are ultimately proven true tend to be resolved faster than those that turn out to be false. Whilst one can readily see users denying rumors once they have been debunked, users appear to be less capable of distinguishing true from false rumors when their veracity remains in question. In fact, we show that the prevalent tendency for users is to support every unverified rumor. We also analyze the role of different types of users, finding that highly reputable users such as news organizations endeavor to post well-grounded statements, which appear to be certain and accompanied by evidence. Nevertheless, these often prove to be unverified pieces of information that give rise to false rumors. Our study reinforces the need for developing robust machine learning techniques that can provide assistance in real time for assessing the veracity of rumors. The findings of our study provide useful insights for achieving this aim.”

“Miley, CNN and The Onion” Berkowitz, Dan; Schwartz, David Asa. Journalism Practice, 2016. doi: 10.1080/17512786.2015.1006933.

Abstract: “Following a twerk-heavy performance by Miley Cyrus on the Video Music Awards program, CNN featured the story on the top of its website. The Onion — a fake-news organization — then ran a satirical column purporting to be by CNN’s Web editor explaining this decision. Through textual analysis, this paper demonstrates how a Fifth Estate comprised of bloggers, columnists and fake news organizations worked to relocate mainstream journalism back to within its professional boundaries.”

“Emotions, Partisanship, and Misperceptions: How Anger and Anxiety Moderate the Effect of Partisan Bias on Susceptibility to Political Misinformation” Weeks, Brian E. Journal of Communication, 2015. doi: 10.1111/jcom.12164.

Abstract: “Citizens are frequently misinformed about political issues and candidates but the circumstances under which inaccurate beliefs emerge are not fully understood. This experimental study demonstrates that the independent experience of two emotions, anger and anxiety, in part determines whether citizens consider misinformation in a partisan or open-minded fashion. Anger encourages partisan, motivated evaluation of uncorrected misinformation that results in beliefs consistent with the supported political party, while anxiety at times promotes initial beliefs based less on partisanship and more on the information environment. However, exposure to corrections improves belief accuracy, regardless of emotion or partisanship. The results indicate that the unique experience of anger and anxiety can affect the accuracy of political beliefs by strengthening or attenuating the influence of partisanship.”

“Deception Detection for News: Three Types of Fakes” Rubin, Victoria L.; Chen, Yimin; Conroy, Niall J. Proceedings of the Association for Information Science and Technology, 2015, Vol. 52. doi: 10.1002/pra2.2015.145052010083.

Abstract: “A fake news detection system aims to assist users in detecting and filtering out varieties of potentially deceptive news. The prediction of the chances that a particular news item is intentionally deceptive is based on the analysis of previously seen truthful and deceptive news. A scarcity of deceptive news, available as corpora for predictive modeling, is a major stumbling block in this field of natural language processing (NLP) and deception detection. This paper discusses three types of fake news, each in contrast to genuine serious reporting, and weighs their pros and cons as a corpus for text analytics and predictive modeling. Filtering, vetting, and verifying online information continues to be essential in library and information science (LIS), as the lines between traditional news and online information are blurring.”

“When Fake News Becomes Real: Combined Exposure to Multiple News Sources and Political Attitudes of Inefficacy, Alienation, and Cynicism” Balmas, Meital. Communication Research , 2014, Vol. 41. doi: 10.1177/0093650212453600.

Abstract: “This research assesses possible associations between viewing fake news (i.e., political satire) and attitudes of inefficacy, alienation, and cynicism toward political candidates. Using survey data collected during the 2006 Israeli election campaign, the study provides evidence for an indirect positive effect of fake news viewing in fostering the feelings of inefficacy, alienation, and cynicism, through the mediator variable of perceived realism of fake news. Within this process, hard news viewing serves as a moderator of the association between viewing fake news and their perceived realism. It was also demonstrated that perceived realism of fake news is stronger among individuals with high exposure to fake news and low exposure to hard news than among those with high exposure to both fake and hard news. Overall, this study contributes to the scientific knowledge regarding the influence of the interaction between various types of media use on political effects.”

“Faking Sandy: Characterizing and Identifying Fake Images on Twitter During Hurricane Sandy” Gupta, Aditi; Lamba, Hemank; Kumaraguru, Ponnurangam; Joshi, Anupam. Proceedings of the 22nd International Conference on World Wide Web , 2013. doi: 10.1145/2487788.2488033.

Abstract: “In today’s world, online social media plays a vital role during real world events, especially crisis events. There are both positive and negative effects of social media coverage of events. It can be used by authorities for effective disaster management or by malicious entities to spread rumors and fake news. The aim of this paper is to highlight the role of Twitter during Hurricane Sandy (2012) to spread fake images about the disaster. We identified 10,350 unique tweets containing fake images that were circulated on Twitter during Hurricane Sandy. We performed a characterization analysis, to understand the temporal, social reputation and influence patterns for the spread of fake images. Eighty-six percent of tweets spreading the fake images were retweets, hence very few were original tweets. Our results showed that the top 30 users out of 10,215 users (0.3 percent) resulted in 90 percent of the retweets of fake images; also network links such as follower relationships of Twitter, contributed very little (only 11 percent) to the spread of these fake photos URLs. Next, we used classification models, to distinguish fake images from real images of Hurricane Sandy. Best results were obtained from Decision Tree classifier, we got 97 percent accuracy in predicting fake images from real. Also, tweet-based features were very effective in distinguishing fake images tweets from real, while the performance of user-based features was very poor. Our results showed that automated techniques can be used in identifying real images from fake images posted on Twitter.”
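The study above reports that a decision tree over tweet-based features best separated tweets carrying fake images from those carrying real ones. A minimal, hedged sketch of that kind of classifier follows; the feature set and training rows are invented for illustration and are not the paper's actual features or data:

```python
# Hedged sketch of a tweet-feature decision tree, in the spirit of the
# classifier the study describes. All rows and features are invented.
from sklearn.tree import DecisionTreeClassifier

# Each row: [tweet length, word count, contains URL, is retweet,
#            exclamation marks, mention count]
X_train = [
    [140, 24, 1, 1, 3, 0],   # fake-image tweets: mostly retweets, sensational
    [138, 22, 1, 1, 4, 1],
    [120, 20, 1, 1, 2, 0],
    [80, 14, 0, 0, 0, 1],    # real-image tweets: original posts, calmer tone
    [95, 16, 1, 0, 0, 2],
    [70, 12, 0, 0, 1, 0],
]
y_train = ["fake", "fake", "fake", "real", "real", "real"]

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X_train, y_train)

print(clf.predict([[135, 23, 1, 1, 3, 0]]))
```

A tree like this is attractive for the reason the paper implies: tweet-level features (length, retweet status, punctuation) are cheap to compute at scale, whereas the user-based features the authors tested performed poorly.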

“The Impact of Real News about ‘Fake News’: Intertextual Processes and Political Satire” Brewer, Paul R.; Young, Dannagal Goldthwaite; Morreale, Michelle. International Journal of Public Opinion Research , 2013. doi: 10.1093/ijpor/edt015.

Abstract: “This study builds on research about political humor, press meta-coverage, and intertextuality to examine the effects of news coverage about political satire on audience members. The analysis uses experimental data to test whether news coverage of Stephen Colbert’s Super PAC influenced knowledge and opinion regarding Citizens United, as well as political trust and internal political efficacy. It also tests whether such effects depended on previous exposure to The Colbert Report (Colbert’s satirical television show) and traditional news. Results indicate that exposure to news coverage of satire can influence knowledge, opinion, and political trust. Additionally, regular satire viewers may experience stronger effects on opinion, as well as increased internal efficacy, when consuming news coverage about issues previously highlighted in satire programming.”

“With Facebook, Blogs, and Fake News, Teens Reject Journalistic ‘Objectivity’” Marchi, Regina. Journal of Communication Inquiry , 2012. doi: 10.1177/0196859912458700.

Abstract: “This article examines the news behaviors and attitudes of teenagers, an understudied demographic in the research on youth and news media. Based on interviews with 61 racially diverse high school students, it discusses how adolescents become informed about current events and why they prefer certain news formats to others. The results reveal changing ways news information is being accessed, new attitudes about what it means to be informed, and a youth preference for opinionated rather than objective news. This does not indicate that young people disregard the basic ideals of professional journalism but, rather, that they desire more authentic renderings of them.”

Keywords: alt-right, credibility, truth discovery, post-truth era, fact checking, news sharing, news literacy, misinformation, disinformation

About The Author

Denise-Marie Ordway

    social media would suggest that the 38 million shares of fake news in our database translates into 760 million instances of a user clicking through and reading a fake news story, or about three stories read per American adult. A list of fake news websites, on which just over half of articles