Rochester News


Study of headlines shows media bias is growing

Luke Auburn


Researchers used machine learning to uncover media bias in publications across the political spectrum.

News stories about domestic politics and social issues are becoming increasingly polarized along ideological lines, according to a study of 1.8 million news headlines from major US news outlets from 2014 to 2022. A team from the University of Rochester led by Jiebo Luo, a professor of computer science and the Albert Arendt Hopeman Professor of Engineering, used machine learning to analyze the headlines and presented its findings about growing media bias at the MEDIATE workshop of the International AAAI Conference on Web and Social Media.

The researchers said that while there is broad consensus that news media outlets adopt ideological perspectives in their articles, previous studies dissecting the differences among outlets were limited in scope and used small sample sizes. Machine-learning techniques allowed the researchers to study a vast sample of headlines over an eight-year period across nine representative media outlets: the New York Times, Bloomberg, CNN, NBC, the Wall Street Journal, the Christian Science Monitor, the Federalist, Reason, and the Washington Times.

The study used a technique called multiple correspondence analysis to measure the fine-grained thematic discrepancies among headlines. The researchers grouped the stories into four categories—domestic politics, economic issues, social issues, and foreign affairs—and analyzed how left, right, and central media outlets differed in the language they used in their headlines.
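The study applied multiple correspondence analysis (MCA) to headline data. As a rough illustration of the idea, the sketch below runs the simpler single-table correspondence analysis on an invented outlet-by-term contingency table; MCA generalizes this by applying the same SVD machinery to an indicator matrix of several categorical variables. The outlets and counts are hypothetical, not the study's data:

```python
import numpy as np

def correspondence_analysis(table):
    """Simple correspondence analysis of a contingency table.

    Returns row coordinates (one point per outlet) whose distances
    approximate chi-square distances between row profiles, plus the
    singular values (axis importances).
    """
    P = table / table.sum()                # correspondence matrix
    r = P.sum(axis=1)                      # row masses (outlets)
    c = P.sum(axis=0)                      # column masses (terms)
    # Standardized residuals: deviation from the independence model
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    # Principal row coordinates
    rows = (U * sv) / np.sqrt(r)[:, None]
    return rows, sv

# Hypothetical headline term counts: 3 outlets x 2 framing terms
counts = np.array([[40.0, 10.0],   # right-leaning outlet
                   [12.0, 38.0],   # left-leaning outlet
                   [25.0, 25.0]])  # centrist outlet
coords, sv = correspondence_analysis(counts)
# The first axis separates outlets by their relative term usage;
# outlets with opposite framings land on opposite sides of zero.
```

The left- and right-leaning rows deviate from independence in opposite directions, so their first-axis coordinates have opposite signs, while the centrist outlet sits near the origin.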

The team observed that US media outlets across the political spectrum were consistent and similar in covering economic issues. While they found discrepancies in reporting foreign affairs, they attributed that to diversity in individual journalistic styles. For example, the authors say the Wall Street Journal and Bloomberg primarily concentrate on the economic and financial implications of geopolitical tensions, resulting in differing perspectives compared to other media outlets. But headlines in the domestic politics and social issues categories showed important differences.

Abortion law or abortion rights?

“We observed a lot of subtle differences in the words they choose when they cover the same high-level topics,” says Hanjia Lyu, a computer science PhD student who was the lead author of the study. “For example, when covering abortion issues, Reason tends to use the term ‘abortion law,’ while CNN underscores its ideological position by using the term ‘abortion rights.’ On a higher level they are both talking about abortion issues, but you can feel the subtle difference in the words that they choose.”
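One common way to quantify such word-choice differences is a smoothed log-odds ratio of a term's usage rate in one outlet versus another. This is a standard technique in computational text analysis, not necessarily the one the study used, and the counts below are hypothetical:

```python
from math import log

def framing_log_odds(count_a, total_a, count_b, total_b, alpha=0.5):
    """Smoothed log-odds ratio of a term's rate in outlet A vs outlet B.

    Positive values mean the term is relatively more characteristic of
    outlet A; alpha is a small pseudo-count that avoids log(0).
    """
    odds_a = (count_a + alpha) / (total_a - count_a + alpha)
    odds_b = (count_b + alpha) / (total_b - count_b + alpha)
    return log(odds_a) - log(odds_b)

# Hypothetical counts per 10,000 headlines for two outlets:
# outlet A favors "abortion law", outlet B favors "abortion rights".
score_law = framing_log_odds(90, 10_000, 30, 10_000)      # positive
score_rights = framing_log_odds(20, 10_000, 80, 10_000)   # negative
```

Terms whose scores diverge sharply between outlets are candidates for the subtle framing differences the researchers describe.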

The research team hopes to dig deeper to better understand how and why media outlets use different words to cover the same kind of topics. They say understanding these discrepancies, and when they may indicate media bias, is important for both media outlets and readers alike.

Says Luo: “For consumers, it’s useful to know this information because the echo chamber effect is very strong and people are used to only listening to things they like to hear. Showing the divergence and the increased partisanship may make them aware that they need to be more conscious consumers of news.”

Other coauthors from Luo’s research group include Jinsheng Pan, Weihong Qi, and Zichen Wang. Funding for the study was provided by Rochester’s Goergen Institute for Data Science.




Open Access | Peer-reviewed | Research Article

How do we raise media bias awareness effectively? Effects of visualizations to communicate bias

  • Timo Spinde
  • Christin Jeggle
  • Magdalena Haupt
  • Wolfgang Gaissmaier
  • Helge Giese

* E-mail: [email protected]

Affiliations: Department of Computer and Information Science, University of Konstanz, Konstanz, Germany; School of Electrical, Information and Media Engineering, University of Wuppertal, Wuppertal, Germany; Department of Psychology, University of Konstanz, Konstanz, Germany

  • Published: April 13, 2022 (PLOS ONE)
  • https://doi.org/10.1371/journal.pone.0266204


Media bias has a substantial impact on individual and collective perception of news. Effective communication that may counteract its potential negative effects still needs to be developed. In this article, we analyze how to facilitate the detection of media bias with visual and textual aids in the form of (a) a forewarning message, (b) text annotations, and (c) political classifiers. In an online experiment, we randomized 985 participants to receive a biased liberal or conservative news article with any combination of the three aids. We then assessed their subjective perception of media bias in this article, attitude change, and political ideology. Both the forewarning message and the annotations increased media bias awareness, whereas the political classification showed no effect. Incongruence between an article’s political position and individual political orientation also increased media bias awareness. The visual aids did not mitigate this effect. Likewise, attitudes remained unaltered.

Citation: Spinde T, Jeggle C, Haupt M, Gaissmaier W, Giese H (2022) How do we raise media bias awareness effectively? Effects of visualizations to communicate bias. PLoS ONE 17(4): e0266204. https://doi.org/10.1371/journal.pone.0266204

Editor: Rogis Baker, Universiti Pertahanan Nasional Malaysia, MALAYSIA

Received: December 14, 2021; Accepted: March 16, 2022; Published: April 13, 2022

Copyright: © 2022 Spinde et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: Data are available at https://osf.io/e95dh/ .

Funding: This work was supported by the German Research Foundation [DFG] ( https://www.dfg.de/ ) under Grant 441541975 and the German Research Foundation Centre of Excellence 2117 "Centre for the Advanced Study of Collective Behaviour" (ID: 422037984). It was also supported by the Hanns-Seidel Foundation ( https://www.hss.de/ ) and the German Academic Exchange Service (DAAD) ( https://www.daad.de/de/ ). None of the funders played any role in the study design or any publication-related decisions.

Competing interests: The authors have declared that no competing interests exist.

Introduction

The Internet age has a significant impact on today’s news communication: It allows individuals to access news and information from an ever-increasing variety of sources, at any time, on any subject. Regardless of journalistic standards, media outlets with a wide reach have the power to affect public opinion and shape collective decision-making processes [ 1 ]. However, it is well known that the wording and selection of news in media coverage are often biased and provide limited viewpoints [ 2 ], commonly referred to as media bias. According to Domke and colleagues [ 3 ], media bias is a structural, often wilful, defect in news coverage that potentially influences public opinion. Labeling named entities with terms that are ambiguous in the concepts they allude to (e.g., "illegal immigrants" versus "illegal aliens" [ 4 ]) or combining concepts beyond their initial contexts into figurative speech that carries a positive or negative association ("a wave of immigrants flooded the country") can induce bias. Still, the conceptualization of media bias is complex since biased and balanced reporting cannot be distinguished incisively [ 5 ]. Many definitions exist, and media bias in general has been researched from various angles, such as psychology [ 6 ], computer science [ 7 ], linguistics [ 8 ], economics [ 9 ], and political science [ 10 ]. Therefore, we believe advancement in media bias communication is relevant for multiple scientific areas.

Previous research shows the effects of media bias on individual and public perception of news events [ 6 ]. Since the media are citizens’ primary source of political information [ 11 ], associated bias may affect the political beliefs of the audience and party preferences [ 12 ], and even alter voting behavior [ 13 ]. Moreover, exposure to biased information can lead to negative societal outcomes, including group polarization, intolerance of dissent, and political segregation [ 14 ]. It can also affect collective decision-making [ 15 ]. The implications of selective exposure theory intensify the severity of biased news coverage: Researchers observed long ago that people prefer to consume information that fits their worldview and avoid information that challenges these beliefs [ 16 ]. By selecting only confirmatory information, one’s own opinion is reaffirmed, and there is no need to re-evaluate existing stances [ 17 ]. In this way, the unpleasant feeling of cognitive dissonance is avoided [ 18 ]. Isolation in one’s own filter bubble or echo chamber confirms internal biases and might lead to a general decrease in the diversity of news consumption [ 14 ]. This decrease is further exacerbated by recent technological developments such as the personalized overview features of news aggregators [ 19 ]. How partisans select and perceive political news is thus an important question in political communication research [ 20 ]. Therefore, this study tests ways to increase awareness of media bias (which might mitigate its negative impact) and to reduce partisan evaluation of the media through transparent bias communication.

Media bias communication

Media bias occurs in various forms, for example, in whether or how a topic is reported (D’Alessio & Allen, 2000), and may not always be easy to identify. As a result, news consumers often engage with distorted media without being aware of it and exhibit a lack of media bias awareness [ 21 ]. To address this issue, revealing the existence and nature of media bias can be an essential route to attaining media bias awareness and promoting informed and reflective news consumption [ 19 ]. For instance, visualizations may help to raise media bias awareness and lead to a more balanced news intake by warning people of potential biases [ 22 ], highlighting individual instances of bias [ 19 ], or facilitating the comparison of contents [ 2 , 23 ].

Although knowledge of how to communicate media bias effectively is crucial, visualizations and enhanced perception of media bias have played only a minor role in existing research, and several approaches have not yet been investigated. Therefore, this paper tests how effectively different strategies promote media bias awareness and may thereby also help identify common barriers to informed media consumption. We selected three major methods from related work [ 19 , 22 ] on the topic to investigate in one combined study: forewarning messages, text annotations, and political classifications. Theoretical foundations of bias messages and visualizations are still scarce, and suitable strategies have not been extensively tested in either visualization theory or bias theory.

Forewarning message.

According to socio-psychological inoculation theory [ 24 ], it is possible to pre-emptively confer psychological resistance against persuasion attempts by exposing people to a message of a warning character. The approach is similar to immunizing against a virus by administering a weakened dose of it: A so-called inoculation message is expected to protect people from a persuasive attack by exposing them to weakened forms of the persuasion attempt. Due to the perceived threat of the forewarning inoculation message, people tend to strengthen their own position and are thus more resistant to the influence of imminent persuasion attacks [ 25 ]. Therefore, one strategy to help people detect bias is to prepare them ahead of media consumption for the possibility that media bias may occur, thereby "forewarning" them against biased language influences. Such warnings are widely established in persuasion research and have been shown to be effective in different applied contexts [ 26 ]. Furthermore, such warnings seem to help not only to protect attitudes against influences but also to determine the quality of a piece of information [ 27 – 29 ] and communicate the information accordingly [ 30 ]. For biased language, this may work specifically by focusing the reader’s attention on a universal motive to evaluate the accuracy of information while relying on the individual’s capacity to detect the bias when encountered [ 30 ] (Bolsen & Druckman, 2015).

Annotations.

Rather than informing people in advance about bias occurrence, a further approach is to inform them during reading, thereby increasing their awareness of biased language and providing direct help in detecting it in an article. Recently, there has been a lot of research on media bias in information science, but it is mainly concerned with bias identification and detection [ 31 – 34 ]. Whereas some research on visualizing media bias in news articles to aid detection is promising (here: flagging fake news as debunked [ 35 ]), other studies did not find such effects, potentially due to technical issues in accurately annotating single articles [ 19 ]. Still, annotations offer a good prospect of enabling higher media bias awareness and more balanced news consumption. We show our annotation visualization in Fig 1.


Fig 1. Example of the bias annotation "subjective term". The boxed annotation appeared when moving the cursor or finger over the highlighted text section.

https://doi.org/10.1371/journal.pone.0266204.g001
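The hover behavior described in the caption can be approximated with plain HTML tooltips. The sketch below is a minimal illustration, not the authors' implementation: it wraps phrases from a hypothetical bias lexicon in `<span>` elements whose `title` attribute browsers display on hover:

```python
import html

# Hypothetical bias lexicon: phrase -> explanation shown on hover.
# Both entries are invented for illustration.
ANNOTATIONS = {
    "flooded the country": ("Figurative speech: combines concepts beyond "
                            "their initial context with a negative association."),
    "illegal aliens": "Subjective term: language skewed by feeling or opinion.",
}

def annotate(text, annotations=ANNOTATIONS):
    """Wrap flagged phrases in <span> tags whose title attribute
    carries the bias explanation (rendered as a tooltip on hover)."""
    out = html.escape(text)
    for phrase, note in annotations.items():
        marked = ('<span class="bias" title="{}">{}</span>'
                  .format(html.escape(note, quote=True), html.escape(phrase)))
        out = out.replace(html.escape(phrase), marked)
    return out

snippet = annotate("A wave of immigrants flooded the country.")
```

A production system would also need CSS for the highlighting and care with overlapping phrases; this sketch only shows the markup idea behind the boxed annotations in Fig 1.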

Political classification.

Another attempt to raise media bias awareness is a political classification of biased material after readers have dealt with it. An and colleagues [ 36 ] proposed an ideological left-right map on which media sources are politically classified. The authors suggest that showing a source’s political leaning helps readers question their attitudes and even promotes browsing for news articles with multiple viewpoints. Likewise, several other studies indicate that feedback on the political orientation of an article or a source may lead to more media bias awareness and more balanced news consumption [ 19 ]. Additionally, exposing users to multiple diverse viewpoints on controversial topics encourages the development of more balanced viewpoints [ 23 ]. A study by Munson and colleagues (2013) further suggests that a feedback element indicating whether the user’s browsing history reflects biased news consumption leads to modestly more balanced news consumption. Based on these findings, we test whether the sole representation of a source’s leaning helps raise bias awareness among users, on the condition that the article is classified as politically skewed. We show our political classification bar in Fig 2.


Fig 2. Example of an article classified as politically left-oriented.

https://doi.org/10.1371/journal.pone.0266204.g002

Partisan media bias awareness

Attempts to raise media bias awareness may be further complicated by the fact that the detection of media bias and the evaluation of news seem to depend on the political ideology of the beholder [ 37 – 41 ]. This partisan effect is not limited to neutral reporting: Individuals are assumed to perceive biased content that corresponds to their opinion as less biased [ 38 ] and biased content that contradicts their viewpoints as more biased [ 41 ].

These findings suggest that incongruence between the reader’s position and the news article’s position may increase media bias perception of the article, whereas congruence may decrease it. Thus, partisan media consumers may engage in motivated reasoning to overcome cognitive dissonance experienced when encountering media bias in any news article generally in line with their viewpoints [ 42 ]. According to Festinger [ 18 ], cognitive dissonance is generated when a person has two cognitive elements that are inconsistent with each other. This inconsistency is assumed to produce a feeling of mental discomfort. People who experience dissonance are motivated to reduce the inconsistency because they want to avoid or reduce this negative emotion.

Furthermore, Festinger notes that exposure to messages inconsistent with one’s beliefs can create cognitive dissonance, leading people to avoid them in order to reduce negative emotions. In line with this notion, raising media bias awareness could increase experienced cognitive dissonance and thereby lead to even more partisan ratings of bias. Another explanation of the phenomenon of partisan bias ratings is varying norms, dependent on one’s political identity, about what content is considered appropriate in media coverage [ 43 ]. Other researchers focus on inattention to the quality of news and the motive to support only truthful news [ 44 ]. Both approaches lead us to expect the opposite result for the partisanship of media bias ratings under increased media bias awareness as created by our proposed visualizations: Partisanship of ratings should decrease rather than increase as people are reminded of more general norms and accuracy motives [ 27 ].

Study aims and hypotheses

This project aims to contribute to a deeper understanding of effective media bias communication. To this end, we create a set of bias visualizations revealing bias in different ways and test their effectiveness at raising awareness in an online experiment. Following the respective literature elaborated above for each technique, we expect enhanced media bias awareness from all visualizations:

  • H1a: A forewarning message prior to news articles increases media bias awareness in presented articles.
  • H1b: Annotations in news articles increase media bias awareness in presented articles.
  • H1c: A political classification of news articles increases media bias awareness in presented articles.

Another goal of this study is to better understand the role of the reader’s political orientation in media bias awareness. In line with the findings on partisan media bias perception (hostile media effect; Vallone et al., 1985), we adopt the following hypothesis:

  • H2: Presented material will be rated less biased if consistent with individual political orientation.

Furthermore, we assume, following the attentional and normative explanation of partisanship in ratings rather than cognitive dissonance theory, the following effect:

  • H3: Bias visualizations will mitigate the effects of partisan bias ratings.

Participants

A total of 1002 participants from the US were recruited online via Prolific in August of 2020. A final sample of N = 985 was included in the analysis (51% female; age: M = 32.67, SD = 11.95). The excluded participants did not fully complete the study or indicated in a seriousness check that their data should not be trusted. The target sample size was determined using a power analysis so that small effects (f = 0.10) could be found with a power of .80 [ 45 ]. The online study was scheduled to last approximately 10 minutes, for which participants received £1.10 as payment.
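The reported target of roughly 1,000 participants is consistent with a standard power calculation. As a rough check under a normal approximation (the authors presumably used noncentral-F software; this simplification is ours), a 1-df main-effect test of a small effect f = 0.10 at two-sided α = .05 with power .80 needs on the order of 800 participants:

```python
from math import ceil

# Standard normal quantiles for alpha = .05 (two-sided) and power = .80
Z_ALPHA = 1.959964   # z at 0.975
Z_BETA = 0.841621    # z at 0.80

def total_n_for_effect(f, z_alpha=Z_ALPHA, z_beta=Z_BETA):
    """Approximate total N for detecting a main effect of size Cohen's f
    with a 1-df F-test, via the normal approximation.

    For two equal groups, Cohen's d = 2f, and the per-group size is
    2 * ((z_alpha + z_beta) / d) ** 2.
    """
    d = 2 * f
    n_per_group = 2 * ((z_alpha + z_beta) / d) ** 2
    return 2 * ceil(n_per_group)

n_required = total_n_for_effect(0.10)   # small effect, roughly 780-790
```

The resulting requirement of just under 800 participants fits the final sample of N = 985 with room to spare for exclusions and the additional design cells.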

Design and procedure

The experiment was conducted online in Qualtrics ( https://www.qualtrics.com ). It operated with fully informed consent, adhered to the Declaration of Helsinki, and was conducted in compliance with relevant laws and institutional guidelines, including those of the University of Konstanz ethics board. All participants confirmed their consent in written form and were informed in detail about the study, its aim, data processing, anonymization, and other background information.

After collecting informed consent and demographic information, we conducted an initial attitude assessment that asked for participants’ general perception of the presented topic on three dimensions and its personal relevance. Next, participants read one randomly selected biased news article (either liberal or conservative), randomly supplemented by any combination of the visual aids (forewarning message, annotations, political classification). Thus, the study had a 2 (forewarning message: yes/no) × 2 (annotations: yes/no) × 2 (political classification: yes/no) between-subjects design. The article also varied between participants in both article position (liberal/conservative) and article topic (gun law/abortion) to assess the robustness and generalizability of the results. Finally, attitudes towards the topic were reassessed, followed by a seriousness check.

Study material

Visual aids.

Forewarning message. The forewarning message consisted of a short warning and was displayed directly before the news article. It read: "Beware of biased news coverage. Read consciously. Don’t be fooled. The term ’media bias’ refers to, in part, non-neutral tonality and word choice in the news. Media bias can consciously and unconsciously result in a narrow and one-sided point of view. How a topic or issue is covered in the news can decisively impact public debates and affect our collective decision making." In addition, an example of one-sided language was shown, and readers were encouraged to consume news consciously.

Annotations. Annotations were directly integrated into the news texts. Biased words or sentences were highlighted [ 46 ], and by hovering over the marked sections, a short explanation of the respective type of bias appeared. For example, when moving the cursor over a very one-sided term, the following annotation would be displayed: "Subjective term: Language that is skewed by feeling, opinion or taste." Annotations were based on ratings by six members of our research group, where phrases had to be nominated by at least three raters. The final annotations can be found in the supplementary preregistration repository accompanying this article at https://osf.io/e95dh/?view_only=d2fb5dc2d64741e393b30b9ee6cc7dc1 . We followed the guidelines applied in existing research to teach annotators about bias and reach higher-quality annotations [ 47 ]. In future work, we will further increase the number of raters, as we address in the discussion.

Political classification. A political classification in the form of a spectrum from left to right indicated the source’s political ideology. It was displayed immediately after the presented article and was based on the rating of the website AllSides.

We used four biased news articles that varied in topic and political position. Each participant was assigned one article. The two topics covered were gun law and the debate on abortion, with either a liberal or conservative article position. The topics were selected because we considered them controversial issues in the United States that most people are presumably familiar with. To ensure that articles were biased, they were taken from sources deemed extreme according to the AllSides classification. Conservative texts were taken from Breitbart.com; liberal articles were from Huffpost.com and Washingtonpost.com. We also conducted a manipulation check to determine whether participants perceived political article positions in line with our assumptions: Just after reading the article, participants were asked to classify its political stance on a visual analogue scale (–5 = very conservative to 5 = very liberal). To ensure comparability, articles were shortened to approximately the same length, and the respective sources were not indicated. All article texts used are listed together with their annotations in the supplementary preregistration repository accompanying this article (linked above).

Media bias awareness.

Five semantic differentials assessed media bias awareness on fairness, partialness, acceptableness, trustworthiness, and persuasiveness [ 48 – 50 ] on visual analogue scales (" I think the presented news article was… "). Media bias awareness was established by averaging the five items and recoded to range from -5 = low bias awareness to 5 = high bias awareness ( α = .88).
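The scale construction described above, averaging the five items and checking internal consistency with Cronbach's α, can be sketched as follows. The data here are simulated for illustration, so the resulting α will differ from the reported .88:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_participants, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
# Simulated ratings: 5 correlated semantic-differential items on a
# -5..5 scale (a shared latent score plus item-specific noise)
latent = rng.normal(0, 2, size=(200, 1))
items = np.clip(latent + rng.normal(0, 1, size=(200, 5)), -5, 5)

bias_awareness = items.mean(axis=1)   # scale score per participant
alpha = cronbach_alpha(items)         # internal consistency
```

Because the simulated items share a strong latent component, α comes out high, mirroring the good internal consistency of the real scale.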

Political orientation.

The variable political orientation was measured on a visual analogue scale ranging from –5 = very conservative to 5 = very liberal, introduced with the question "Do you consider yourself to be liberal, conservative, or somewhere in between?" adopted from Spinde and colleagues [ 19 , 51 ]. Likewise, we assessed the perceived stance of the read article on the same scale, introduced with the statement "I think the presented news article was…".

Attitudes towards article topic.

Attitudes were assessed before and after the article presentation by a three-item semantic differential scale (wrong–right, unacceptable–acceptable, bad–good) evaluating the two topics ("Generally, laws restricting abortion/the use of guns are…"; α = .99). The three items were averaged per topic to yield a score ranging from –5 = very conservative attitude to 5 = very liberal attitude. In addition, we assessed topic involvement with one item before the article presentation ("To me personally, laws restricting the use of guns/abortions are… irrelevant–relevant") on a scale from –5 to 5.

Statistical analysis

To test the effects of the visual aids on media bias perception, we used ANOVAs with effect-coded factors in a 2 (forewarning message: yes/no) × 2 (annotations: yes/no) × 2 (political classification: yes/no) × 2 (article position: liberal/conservative) × 2 (article topic: gun law/abortion) between-subjects design. For analyses testing political ideology effects, this was generalized to a GLM with standardized political orientation as an additional interacting variable, followed by a simple effects analysis. The same model was applied to the second attitude rating, with the first attitude rating and topic involvement as covariates, to assess attitude change. This project and the analyses were preregistered at https://osf.io/e95dh/?view_only=d2fb5dc2d64741e393b30b9ee6cc7dc1 . All study materials, code, and data are available there.
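An effect-coded ANOVA of this kind can be expressed as an ordinary regression: each binary factor is coded +1/−1, and each main-effect coefficient then corresponds to a 1-df F test (F = t²). The sketch below uses simulated data, not the study's, with small forewarning and annotation effects and no classification effect, as a minimal illustration of the analysis idea:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400

# Effect coding (+1/-1) for three binary factors, as in the 2x2x2 design
forewarn = rng.choice([-1.0, 1.0], size=n)
annotations = rng.choice([-1.0, 1.0], size=n)
classification = rng.choice([-1.0, 1.0], size=n)

# Simulated bias-awareness outcome (coefficients are illustrative only)
y = 0.3 * forewarn + 0.5 * annotations + rng.normal(0, 2, size=n)

# Design matrix: intercept plus the three effect-coded main effects
X = np.column_stack([np.ones(n), forewarn, annotations, classification])
beta, rss, rank, _ = np.linalg.lstsq(X, y, rcond=None)

# F statistic for each effect-coded factor (1 numerator df): F = t^2
df_resid = n - X.shape[1]
sigma2 = rss[0] / df_resid
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
f_stats = (beta / se) ** 2
```

With effect coding, the intercept estimates the grand mean and each coefficient estimates half the difference between a factor's two levels, which is what makes the main effects directly interpretable in an unbalanced-cell design.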

Manipulation check and other effects on perceived political stance of the article

Overall, the positions of the political articles were perceived as designed (article position: F(1, 953) = 528.67, p < .001, ηp² = .357): Articles assigned a liberal position were perceived as more liberal (M = 1.60, SD = 2.70), whereas conservative articles were rated as more conservative (M = –1.98, SD = 2.26). This difference between the conservative and the liberal article was more pronounced when a forewarning message (F(1, 953) = 7.33, p = .007, ηp² = .008), annotations (F(1, 953) = 3.96, p = .047, ηp² = .004), or the political classification was present (F(1, 953) = 9.12, p = .003, ηp² = .009; see Fig 3). The combination of forewarning and classification further increased the difference (F(1, 953) = 5.28, p = .022, ηp² = .006).

thumbnail

Fig 3. Across all conditions, liberal articles were perceived to be more liberal and conservative articles more conservative. The interventions increased the differences between the two ratings. Dots represent means, and lines are standard deviations.

https://doi.org/10.1371/journal.pone.0266204.g003

Effects of visual aids on media bias perceptions

Testing the effects of the visual aids on media bias perceptions in general, we found that both the forewarning message (F(1, 953) = 8.29, p = .004, ηp² = .009) and the annotations (F(1, 953) = 24.00, p < .001, ηp² = .025) increased perceived bias, which we show in Fig 4. However, we found no effect of the political classification (F(1, 953) = 2.56, p = .110, ηp² = .003) and no systematic higher-order interaction involving any of the manipulations (p ≥ .085, ηp² ≤ .003). Moreover, there were differences in media bias perceptions of the specific articles (topic × article position: F(1, 953) = 24.44, p < .001, ηp² = .025). The two main effects found were by and large robust when tested per item of the media bias perception scale (forewarning had no significant effect on partialness and persuasiveness) or in a MANOVA (forewarning: F(5, 949) = 5.22, p < .001, ηp² = .027; annotation: F(5, 949) = 6.25, p < .001, ηp² = .032).

thumbnail

Fig 4. The forewarning message, as well as the annotations, increased media bias awareness. Dots represent means, and lines are standard deviations.

https://doi.org/10.1371/journal.pone.0266204.g004

Partisan media bias ratings

When considering self-indicated political orientation and its fit to the article position, we found that media bias was perceived less in articles consistent with the reader’s political orientation (F(1, 921) = 113.37, p < .001, ηp² = .110): Liberal readers rated conservative articles as more biased than conservative readers did (β = 0.32; p < .001; 95% CI [0.25; 0.38]). Conversely, liberal articles were rated as less biased by liberals (β = –0.20; p < .001; 95% CI [–0.27; –0.13]), indicating partisan bias ratings on both sides of the political aisle, which we show in Fig 5.

thumbnail

Fig 5. Bias awareness increased when the article was not aligned with the reader’s political position. Shaded areas show 95% confidence intervals of the regression estimate.

https://doi.org/10.1371/journal.pone.0266204.g005

This partisan rating of articles was unaffected by the forewarning (F(1, 921) = 1.52, p = .218, ηp² = .002), the annotations (F(1, 921) = 0.26, p = .612, ηp² < .001), and the political classification (F(1, 921) = 2.72, p = .099, ηp² = .003). Yet, with increasing liberalness of the reader, the combination of forewarning and annotation was slightly less effective for the detection of bias (F(1, 921) = 4.19, p = .041, ηp² = .005). Furthermore, there were some topic-related differences irrelevant to the current hypotheses: higher bias was perceived for the gun law articles (topic: F(1, 921) = 11.32, p < .001, ηp² = .012), and specifically for the liberal one (topic × article position: F(1, 921) = 23.86, p < .001, ηp² = .025), with an uninterpretable minor higher-order interaction (forewarning × annotation × classification × political orientation × topic: F(1, 921) = 4.10, p = .043, ηp² = .004).

Effects on attitudes

By and large, attitudes on the topics were not affected by the experiment: While attitudes after reading the article were in line with prior attitudes (F(1, 919) = 2415.42, p < .001, ηp² = .724) and individual political orientation (F(1, 919) = 34.54, p < .001, ηp² = .036), neither article position (F(1, 919) = 2.63, p = .105, ηp² = .003) nor any of the visual aids had any general impact (p ≥ .084, ηp² ≤ .003). Likewise, none of the aids interacted with the factor article position (p ≥ .298, ηp² ≤ .001). There were only some additional minor topic-specific significant effects of the annotation combined with the forewarning (F(1, 919) = 4.77, p = .029, ηp² = .005) and an increased liberalness of attitude with higher topic involvement (F(1, 919) = 4.31, p = .038, ηp² = .005), which we disclose but deem irrelevant to our hypotheses and research questions.

Discussion

In this study, we tested different techniques for communicating media bias. Our experiment revealed that presenting a forewarning message and text annotations enhanced awareness of biased reporting, while a political classification did not. All three methods (forewarning, annotation, political classification) affected the political ideology rating of the presented article. Furthermore, we found evidence for partisan bias ratings: participants rated articles that agreed with their general orientation as less biased than articles from the other side of the political spectrum. The positive effect of the forewarning message on media bias ratings, albeit small, is in line with other findings of successful appeals to and reminders of accuracy motives [30]. It also accords with the notion that reflecting on media bias requires some effort [44, 52], so motivating people to engage in this process can help them detect bias.

Regarding the effects of in-text annotations, our finding differs from a previous study of a similar design [19], which did not identify the effect, likely due to a lack of statistical power and less optimal annotations. While news consumers may generally identify outright false or fake news [53], detecting subtle biases can profit from such aids. This indicates that bias detection is far from ideal, particularly in more ambiguous cases. As the in-text annotation and forewarning message effects were independent of each other, participants apparently did not profit from combining the two aids beyond their individual effects.

The political classification, on the other hand, only improved the detection of the political alignment of the text (which both other methods also achieved) but did not help in detecting biased language. Consequently, the detection of biased language, and of media bias itself, does not appear to be directly tied to recognizing an article’s political affiliation.

Our study also replicates findings that the detection of media bias and fake news is affected by individual convictions [30, 40, 42]: participants detected media bias more readily when there was an incongruence between their own political ideology and that of the article. Such a connection may be particularly pronounced for detecting more subtle media biases, compared to successfully identifying outright fake news, for which a reversed effect has been found in some instances (Pennycook & Rand, 2019).

In addition, the interventions were ineffective at reducing such partisan effects. Similarly, attitudes remained relatively stable and were not affected by any of the visual aids. Making biased language more visible and reminding people of potential biases apparently could not help them overcome their ideology when rating an article that is merely biased rather than demonstrably fake. Likewise, the forewarning message successfully altered the motivation to look for biased language but did not decrease the effects of political identity on the rating: while able to detect the political affiliation of an article, participants seemed incapable of separating the article’s stance from its biased use of language, even when prompted to do so. In the same vein, effects were not more pronounced when the political classification was further visualized. This may indicate that the stance is detected even without help (after all, while the manipulations sharpened the distinction between liberal and conservative articles, the article’s position was reliably identified even without any supporting material) and that partisan ratings are not a deliberate derogatory act. Furthermore, the problem of partisan bias ratings did not grow as the manipulations raised media bias awareness, as cognitive dissonance theory might have predicted.

For future work, we will improve the representativeness of the surveyed sample, which limits far-reaching generalizations at this point. Additionally, we will increase generalizability by employing articles that are politically neutral or exhibit comparatively low bias. Both forewarning and annotations increased bias ratings in this study, but it remains unclear whether they would also help readers identify low-bias articles and, accordingly, lead to lower ratings for them. Improving the quality of our annotations by including more annotators is a further step. We will also investigate how combinations of the visualizations and strategies work together and conduct expert interviews to determine which applications would be of interest in an applied scenario. Still, the current study shows that two of our interventions raised attention to biased language in media, giving a first insight into the as-yet sparsely tested field of presenting media bias to news consumers.

Furthermore, translating these experimental interventions into applications that news consumers can use in the field remains a great challenge. While forewarning messages could be implemented quite simply in the context of other media, for instance as a disclaimer (see [30]), we hope that automated sentence-level classifiers will prove an effective tool for creating instant annotation aids, for example as browser add-ons. Even though recent studies show promising accuracy improvements for such classifiers [31, 32], much research still needs to be devoted to finding stable and reliable markers of biased language. Future work could also employ these strategies as teaching tools to train users to identify bias without visual aids. This could offer a framework for a large-scale study in which additional variables measuring previous news consumption habits are employed.

In the context of our digitalized world, where news and information of differing quality are available everywhere, our results provide important insights for media bias research. In the present study, we showed that forewarning messages and annotations increased media bias awareness among readers of selected news articles. We also replicated the well-known hostile media effect: people are more aware of bias in articles from the opposing side of the political spectrum. However, our experiment revealed that the visualizations could not reduce this effect; partisan ratings remained largely unaffected. In sum, digital tools that uncover and visualize media bias may help mitigate the negative effects of media bias in the future.

  • 8. Marta Recasens, Cristian Danescu-Niculescu-Mizil, and Dan Jurafsky. 2013. Linguistic Models for Analyzing and Detecting Biased Language. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics, Sofia, Bulgaria, 1650–1659. Retrieved June 13, 2020 from https://www.aclweb.org/anthology/P13-1162
  • 11. Norris P. 2000. A virtuous circle: Political communications in postindustrial societies. Cambridge University Press. Retrieved from https://doi.org/10.1017/CBO9780511609343
  • 15. Timo Spinde. 2021. An Interdisciplinary Approach for the Automated Detection and Visualization of Media Bias in News Articles. In 2021 IEEE International Conference on Data Mining Workshops (ICDMW) . https://doi.org/10.1109/ICDMW53433.2021.00144
  • 16. Lazarsfeld P. F., Berelson B., and Gaudet H. 1944. The people’s choice . Columbia University Press. Retrieved from https://doi.org/10.1007/978-3-531-90400-9_62
  • 18. Festinger L. 1957. A theory of cognitive dissonance . Stanford University Press.
  • 19. Timo Spinde, Felix Hamborg, Karsten Donnay, Angelica Becerra, and Bela Gipp. 2020. Enabling News Consumers to View and Understand Biased News Coverage: A Study on the Perception and Visualization of Media Bias. In Proceedings of the ACM/IEEE Joint Conference on Digital Libraries in 2020 , ACM, Virtual Event China, 389–392. https://doi.org/10.1145/3383583.3398619
  • 21. Filipe Ribeiro, Lucas Henrique, Fabricio Benevenuto, Abhijnan Chakraborty, Juhi Kulshrestha, Mahmoudreza Babaei, et al. 2018. Media bias monitor: Quantifying biases of social media news outlets at large-scale. In Proceedings of the International AAAI Conference on Web and Social Media .
  • 23. Souneil Park, Seungwoo Kang, Sangyoung Chung, and Junehwa Song. 2009. NewsCube: delivering multiple aspects of news to mitigate media bias. In Proceedings of the 27th international conference on Human factors in computing systems—CHI 09 , ACM Press, Boston, MA, USA, 443. https://doi.org/10.1145/1518701.1518772
  • 31. Wei-Fan Chen, Khalid Al Khatib, Henning Wachsmuth, and Benno Stein. 2020. Analyzing Political Bias and Unfairness in News Articles at Different Levels of Granularity. In Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science , Association for Computational Linguistics, Online, 149–154. https://doi.org/10.18653/v1/2020.nlpcss-1.16
  • 32. Christoph Hube and Besnik Fetahu. 2019. Neural Based Statement Classification for Biased Language. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining , ACM, Melbourne VIC Australia, 195–203. https://doi.org/10.1145/3289600.3291018
  • 33. Timo Spinde, Felix Hamborg, and Bela Gipp. 2020. Media Bias in German News Articles: A Combined Approach. In Proceedings of the 8th International Workshop on News Recommendation and Analytics (INRA 2020), Virtual event. https://doi.org/10.1007/978-3-030-65965-3_41
  • 34. Timo Spinde, Lada Rudnitckaia, Felix Hamborg, and Bela Gipp. 2021. Identification of Biased Terms in News Articles by Comparison of Outlet-specific Word Embeddings. In Proceedings of the 16th International Conference (iConference 2021).
  • 36. J. An, M. Cha, K. Gummadi, J. Crowcroft, and D. Quercia. 2012. Visualizing media bias through Twitter. In Sixth International AAAI Conference on Weblogs and Social Media .
  • 46. Timo Spinde, Kanishka Sinha, Norman Meuschke, and Bela Gipp. 2021. TASSY—A Text Annotation Survey System. In Proceedings of the ACM/IEEE Joint Conference on Digital Libraries (JCDL) .
  • 47. Timo Spinde, Manuel Plank, Jan-David Krieger, Terry Ruas, Bela Gipp, and Akiko Aizawa. 2021. Neural Media Bias Detection Using Distant Supervision With BABE—Bias Annotations By Experts. In Findings of the Association for Computational Linguistics: EMNLP 2021, Association for Computational Linguistics, Punta Cana, Dominican Republic, 1166–1177. https://doi.org/10.18653/v1/2021.findings-emnlp.101
  • 51. Timo Spinde, Christina Kreuter, Wolfgang Gaissmaier, Felix Hamborg, Bela Gipp, and Helge Giese. 2021. Do You Think It’s Biased? How To Ask For The Perception Of Media Bias. In Proceedings of the ACM/IEEE Joint Conference on Digital Libraries (JCDL) .


Should you trust media bias charts?

These controversial charts claim to show the political lean and credibility of news organizations. Here’s what you need to know about them.


Impartial journalism is an impossible ideal. That is, at least, according to Julie Mastrine.

“Unbiased news doesn’t exist. Everyone has a bias: everyday people and journalists. And that’s OK,” Mastrine said. But it’s not OK for news organizations to hide those biases, she said.

“We can be manipulated into (a biased outlet’s) point of view and not able to evaluate it critically and objectively and understand where it’s coming from,” said Mastrine, marketing director for AllSides , a media literacy company focused on “freeing people from filter bubbles.”

That’s why she created a media bias chart.

As readers hurl claims of hidden bias towards outlets on all parts of the political spectrum, bias charts have emerged as a tool to reveal pernicious partiality.

Charts that use transparent methodologies to score political bias — particularly the AllSides chart and another from news literacy company Ad Fontes Media — are increasing in popularity and spreading across the internet. According to CrowdTangle, a social media monitoring platform, the homepages for these two sites and the pages for their charts have been shared tens of thousands of times.

But just because something is widely shared doesn’t mean it’s accurate. Are media bias charts reliable?

Why do media bias charts exist?

Traditional journalism values a focus on news reporting that is fair and impartial, guided by principles like truth, verification and accuracy. But those standards are not observed across the board in the “news” content that people consume.

Tim Groeling, a communications professor at the University of California Los Angeles, said some consumers take too much of the “news” they encounter as impartial.

When people are influenced by undisclosed political bias in the news they consume, “that’s pretty bad for democratic politics, pretty bad for our country to have people be consistently misinformed and think they’re informed,” Groeling said.

If undisclosed bias threatens to mislead some news consumers, it also pushes others away, he said.

“When you have bias that’s not acknowledged, but is present, that’s really damaging to trust,” he said.

Kelly McBride, an expert on journalism ethics and standards, NPR’s public editor and the chair of the Craig Newmark Center for Ethics and Leadership at Poynter, agrees.

“If a news consumer doesn’t see their particular bias in a story accounted for — not necessarily validated, but at least accounted for in a story — they are going to assume that the reporter or the publication is biased,” McBride said.

The growing public confusion about whether or not news outlets harbor a political bias, disclosed or not, is fueling demand for resources to sort fact from otherwise — resources like these media bias charts.

Bias and social media

Mastrine said the threat of undisclosed biases grows as social media algorithms create filter bubbles to feed users ideologically consistent content.

Could rating bias help? Mastrine and Vanessa Otero, founder of the Ad Fontes media bias chart, think so.

“It’ll actually make it easier for people to identify different perspectives and make sure they’re reading across the spectrum so that they get a balanced understanding of current events,” Mastrine said.

Otero said bias ratings could also be helpful to advertisers.

“There’s this whole ecosystem of online junk news, of polarizing misinformation, these clickbaity sites that are sucking up a lot of ad revenue. And that’s not to the benefit of anybody,” Otero said. “It’s not to the benefit of the advertisers. It’s not to the benefit of society. It’s just to the benefit of some folks who want to take advantage of people’s worst inclinations online.”

Reliable media bias ratings could allow advertisers to disinvest in fringe sites.

Groeling, the UCLA professor, said he could see major social media and search platforms using bias ratings to alter the algorithms that determine what content users see. Changes could elevate neutral content or foster broader news consumption.

But he fears the platforms’ sweeping power, especially after Facebook and Twitter censored a New York Post article purporting to show data from a laptop belonging to Hunter Biden, the son of President-elect Joe Biden. Groeling said social media platforms failed to clearly communicate how and why they stopped and slowed the spread of the article.

“(Social media platforms are) searching for some sort of arbiter of truth and news … but it’s actually really difficult to do that and not be a frightening totalitarian,” he said.

Is less more?

The Ad Fontes chart and the AllSides chart are each easy to understand: progressive publishers on one side, conservative ones on the other.

“It’s just more visible, more shareable. We think more people can see the ratings this way and kind of begin to understand them and really start to think, ‘Oh, you know, journalism is supposed to be objective and balanced,’” Mastrine said. AllSides has rated media bias since 2012. Mastrine first put them into chart form in early 2019.

Otero recognizes that accessibility comes at a price.

“Some nuance has to go away when it’s a graphic,” she said. “If you always keep it to, ‘people can only understand if they have a very deep conversation,’ then some people are just never going to get there. So it is a tool to help people have a shortcut.”

But perceiving the chart as distilled truth could give consumers an undue trust in outlets, McBride said.

“Overreliance on a chart like this is going to probably give some consumers a false level of faith,” she said. “I can think of a massive journalistic failure for just about every organization on this chart. And they didn’t all come clean about it.”

The necessity of getting people to look at the chart poses another challenge. Groeling thinks disinterest among consumers could hurt the charts’ usefulness.

“Asking people to go to this chart, asking them to take effort to understand and do that comparison, I worry would not actually be something people would do. Because most people don’t care enough about news,” he said. He would rather see a plugin that detects bias in users’ overall news consumption and offers them differing viewpoints.

McBride questioned whether bias should be the focus of the charts at all. Other factors — accountability, reliability and resources — would offer better insight into what sources of news are best, she said.

“Bias is only one thing that you need to pay attention to when you consume news. What you also want to pay attention to is the quality of the actual reporting and writing and the editing,” she said. It wouldn’t make sense to rate local news sources for bias, she added, because they are responsive to individual communities with different political ideologies.

The charts are only as good as their methodologies. Both McBride and Groeling shared praise for the stated methods for rating bias of AllSides and Ad Fontes , which can be found on their websites. Neither Ad Fontes nor AllSides explicitly rates editorial standards.

The AllSides Chart


(Courtesy: AllSides)

The AllSides chart focuses solely on political bias. It places sources in one of five boxes — “Left,” “Lean Left,” “Center,” “Lean Right” and “Right.” Mastrine said that while the boxes allow the chart to be easily understood, they also don’t allow sources to be rated on a gradient.

“Our five-point scale is inherently limited in the sense that we have to put somebody in a category when, in reality, it’s kind of a spectrum. They might fall in between two of the ratings,” Mastrine said.

That also makes the chart particularly easy to understand, she said.

AllSides has rated more than 800 sources in eight years, focusing on online content only. Ratings are derived from a mix of review methods.

In the blind bias survey, which Mastrine called “one of (AllSides’) most robust bias rating methodologies,” readers from the public rate articles for political bias. Two AllSides staffers with different political biases pull articles from the news sites that are being reviewed. AllSides locates these unpaid readers through its newsletter, website, social media accounts and other marketing tools. The readers, who self-report their political bias after taking a bias rating test provided by the company, see only the article’s text and are not told which outlet published the piece. The data is then normalized to more closely reflect the composition of America across political groupings.
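AllSides does not publish the exact normalization formula in this article, but the idea of reweighting survey ratings so that each political group counts in proportion to its share of the population can be sketched as follows. All numbers here are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical blind-survey ratings, one per respondent, on a
# five-point scale (-2 = Left ... +2 = Right), grouped by each
# respondent's self-reported political leaning.
ratings = {
    "left":   [-1, -1, 0],
    "center": [0, 1],
    "right":  [1, 1, 2],
}

# Hypothetical population shares per leaning; the real proportions
# would come from polling data, not from the survey sample itself.
population_share = {"left": 0.3, "center": 0.4, "right": 0.3}

def normalized_rating(ratings, population_share):
    """Weight each group's mean rating by that group's population share."""
    total = 0.0
    for group, vals in ratings.items():
        group_mean = sum(vals) / len(vals)
        total += population_share[group] * group_mean
    return total

print(round(normalized_rating(ratings, population_share), 2))  # 0.4
```

The point of the reweighting is that an over-sampled group (here, raters on either wing) cannot drag the aggregate rating away from what a representative sample would produce.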

AllSides also uses “editorial reviews,” where staff members look directly at a source to contribute to ratings.

“That allows us to actually look at the homepage with the branding, with the photos and all that and kind of get a feel for what the bias is, taking all that into account,” Mastrine said.

She added that an equal number of staffers who lean left, right and center conduct each review together. The personal biases of AllSides’ staffers appear on their bio pages . Mastrine leans right.

She clarified that among the 20-person staff, many are part time, 14% are people of color, 38% are lean left or left, 29% are center, and 18% are lean right or right. Half of the staffers are male, half are female.

When a news outlet receives a blind bias survey and an editorial review, both are taken into account. Mastrine said the two methods aren’t weighted together “in any mathematical way,” but said they typically hold roughly equal weight. Sometimes, she added, the editorial review carries more weight.

AllSides also uses “independent research,” which Mastrine described as the “lowest level of bias verification.” She said it consists of staffers reviewing and reporting on a source to make a preliminary bias assessment. Sometimes third-party analyses — including academic research and surveys — are incorporated into ratings, too.

AllSides highlights the specific methodologies used to judge each source on its website and states its confidence in the ratings based on the methods used. In a separate white paper , the company details the process used for its August 2020 blind bias survey.

AllSides sometimes gives separate ratings to different sections of the same source. For example, it rates The New York Times’ opinion section “Left” and its news section “Lean Left.” AllSides also incorporates reader feedback into its system. People can mark that they agree or disagree with AllSides’ rating of a source. When a significant number of people disagree, AllSides often revisits a source to vet it once again, Mastrine said.

The AllSides chart generally gets good reviews, she said, and most people mark that they agree with the ratings. Still, she sees one misconception among the people that encounter it: They think center means better. Mastrine disagrees.

“The center outlets might be omitting certain stories that are important to people. They might not even be accurate,” she said. “We tell people to read across the spectrum.”

To make that easier, AllSides offers a curated “ balanced news feed ,” featuring articles from across the political spectrum, on its website.

AllSides makes money through paid memberships, one-time donations, media literacy training and online advertisements. It plans to become a public benefit corporation by the end of the year, she added, meaning it will operate both for profit and for a stated public mission.

The Ad Fontes chart


(Courtesy: Ad Fontes)

The Ad Fontes chart rates both reliability and political bias. It scores news sources — around 270 now, and an expected 300 in December — using bias and reliability as coordinates on its chart.

The outlets appear on a spectrum, with seven markers showing a range from “Most Extreme Left” to “Most Extreme Right” along the bias axis, and eight markers showing a range from “Original Fact Reporting” to “Contains Inaccurate/Fabricated Info” along the reliability axis.

The chart is a departure from its first version, which founder Vanessa Otero, a patent attorney, put together by herself as a hobby after seeing Facebook friends fight over the legitimacy of sources during the 2016 election. Otero said that when she saw how popular her chart was, she decided to make bias ratings her full-time job and founded Ad Fontes (Latin for “to the source”) in 2018.

“There were so many thousands of people reaching out to me on the internet about this,” she said. “Teachers were using it in their classrooms as a tool for teaching media literacy. Publishers wanted to publish it in textbooks.”

About 30 paid analysts rate articles for Ad Fontes. Listed on the company’s website , they represent a range of experience — current and former journalists, educators, librarians and similar professionals. The company recruits analysts through its email list and references and vets them through a traditional application process. Hired analysts are then trained by Otero and other Ad Fontes staff.

To start review sessions, a group of coordinators composed of senior analysts and the company’s nine staffers pulls articles from the sites being reviewed. They look for articles listed as most popular or displayed most prominently.


Part of the Ad Fontes analyst political bias test. The test asks analysts to rank their political bias on 18 different policy issues.

Ad Fontes administers an internal political bias test to analysts, asking them to rank their left-to-right position on about 20 policy positions. That information allows the company to attempt to create ideological balance by including one centrist, one left-leaning and one right-leaning analyst on each review panel. The panels review at least three articles for each source, but they may review as many as 30 for particularly prominent outlets, like The Washington Post, Otero said. More on their methodology, including how they choose which articles to review to create a bias rating, can be found here on the Ad Fontes website.

When they review the articles, the analysts see them as they appear online, “because that’s how people encounter all content. No one encounters content blind,” Otero said. The review process recently changed so that paired analysts discuss their ratings over video chat, where they are pushed to be more specific as they form ratings, Otero said.

Individual scores for an article’s accuracy, the use of fact or opinion, and the appropriateness of its headline and image combine to create a reliability score. The bias score is determined by the article’s degree of advocacy for a left-to-right political position, topic selection and omission, and use of language.

To create an overall bias and reliability score for an outlet, the individual scores for each reviewed article are averaged, with added importance given to more popular articles. That average determines where sources show up on the chart.
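That popularity-weighted averaging can be sketched in a few lines. The scores and weights below are invented for illustration (the bias axis runs roughly -42 to +42 and the reliability axis roughly 0 to 64 on the Ad Fontes chart); the company's actual weighting scheme is described in its white paper:

```python
# Hypothetical per-article scores for one outlet:
# (bias, reliability, popularity_weight). A higher weight means the
# article was more widely read, so it counts more toward the outlet score.
articles = [
    (-8.0, 44.0, 3.0),
    (-4.0, 50.0, 1.0),
    (-12.0, 40.0, 2.0),
]

def outlet_score(articles):
    """Popularity-weighted average of per-article bias and reliability."""
    total_w = sum(w for _, _, w in articles)
    bias = sum(b * w for b, _, w in articles) / total_w
    reliability = sum(r * w for _, r, w in articles) / total_w
    return bias, reliability

bias, reliability = outlet_score(articles)
print(round(bias, 1), round(reliability, 1))  # -8.7 43.7
```

The pair of averages is then the outlet's coordinate on the chart: bias on the horizontal axis, reliability on the vertical.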

Ad Fontes details its ratings process in a white paper from August 2019.

While the company mostly reviews prominent legacy news sources and other popular news sites, Otero hopes to add more podcasts and video content to the chart in coming iterations. The chart already rates video news channel “ The Young Turks ” (which claims to be the most popular online news show with 250 million views per month and 5 million subscribers on YouTube ), and Otero mentioned she next wants to examine videos from Prager University (which claims 4 billion lifetime views for its content, has 2.84 million subscribers on YouTube and 1.4 million followers on Instagram ). Ad Fontes is working with ad agency Oxford Road and dental care company Quip to create ratings for the top 50 news and politics podcasts on Apple Podcasts, Otero said.

“It’s not strictly traditional news sources, because so much of the information that people use to make decisions in their lives is not exactly news,” Otero said.

She was shocked when academic textbook publishers first wanted to use her chart. Now she wants it to become a household tool.

“As we add more news sources on to it, as we add more data, I envision this becoming a standard framework for evaluating news on at least these two dimensions of reliability and bias,” she said.

She sees complaints about it from both ends of the political spectrum as proof that it works.

“A lot of people love it and a lot of people hate it,” Otero said. “A lot of people on the left will call us neoliberal shills, and then a bunch of people that are on the right are like, ‘Oh, you guys are a bunch of leftists yourselves.’”

The project has grown to include tools for teaching media literacy to school kids and an interactive version of the chart that displays each rated article. Otero’s company operates as a public benefit corporation with a stated public benefit mission: “to make news consumers smarter and news media better.” She didn’t want Ad Fontes to rely on donations.

“If we want to grow with a problem, we have to be a sustainable business. Otherwise, we’re just going to make a small difference in a corner of the problem,” she said.

Ad Fontes makes money by responding to specific research requests from advertisers, academics and other parties that want certain outlets to be reviewed. The company also receives non-deductible donations and operates on WeFunder , a grassroots crowdfunding investment site, to bring in investors. So far, Ad Fontes has raised $163,940 with 276 investors through the site.

Should you use the charts?

Media bias charts with transparent, rigorous methodologies can offer insight into sources’ biases. That insight can help you understand what perspectives sources bring as they share the news. That insight also might help you understand what perspectives you might be missing as a news consumer.

But use them with caution. Political bias isn’t the only thing news consumers should look out for. Reliability is critical, too, and the accuracy and editorial standards of organizations play an important role in sharing informative, useful news.

Media bias charts are a media literacy tool. They offer well-researched appraisals on the bias of certain sources. But to best inform yourself, you need a full toolbox. Check out Poynter’s MediaWise project for more media literacy tools.

This article was originally published on Dec. 14, 2020. 

More about media bias charts

  • A media bias chart update puts The New York Times in a peculiar position
  • Letter to the editor: What Poynter’s critique misses about the Media Bias Chart


We are too obsessed with alleged bias and objectivity, which so often is in the biased eye of the beholder. The main standard of good journalism should be verifiable factual accuracy.

Hoping to see a follow-up article about whether we can trust fact checker report card charts created by collecting a fact checker’s subjective ratings.

As a writer for Wonkette, I won’t claim to be objective, but we do like to point out that our rating at Ad Fontes – both farthest to the left and the least reliable, is absurd. Apparently we can’t be trusted at all because we do satirical commentary instead of straight news.

When we’ve attempted to point out to Ms. Otero that we adhere to high standards when it comes to factuality, but we also make jokes, she has replied that satire is inherently untrustworthy and biased, particularly since we sometimes use dirty words.

That seems to us a remarkably biased definition of bias.

Start your day informed and inspired.

Get the Poynter newsletter that's right for you.

Media Bias 101: What Journalists Really Think -- and What the Public Thinks About Them

Media Bias 101 summarizes decades of survey research showing how journalists vote, what journalists think, what the public thinks about the media, and what journalists say about media bias. The following links take you to dozens of different surveys, with key findings and illustrative charts. (Most recent update: May 2014)

A printer-friendly, fully formatted 48-page version of the report (updated January 2014) is available in PDF format (1.8 MB).

Part One: What Journalists Think

Surveys over the past 30 years have consistently found that journalists — especially those at the highest ranks of their profession — are much more liberal than the rest of America. They are more likely to vote liberal, more likely to describe themselves as liberal, and more likely to agree with the liberal position on policy matters than members of the general public.

  • Early Polls of Journalists, 1962-1985 (added January 2014)
  • Exhibit 1-1: The Media Elite
  • Exhibit 1-2: Major Newspaper Reporters (updated January 2014)
  • Exhibit 1-3: The American Journalist
  • Exhibit 1-4: U.S. Newspaper Journalists
  • Exhibit 1-5: Survey of Business Reporters
  • Exhibit 1-6: Journalists - Who Are They, Really?
  • Exhibit 1-7: White House Reporters
  • Exhibit 1-8: The Media Elite Revisited (updated January 2014)
  • Exhibit 1-9: Washington Bureau Chiefs and Correspondents
  • Exhibit 1-10: Newspaper Journalists of the 1990s
  • Exhibit 1-11: Newspaper Editors
  • Exhibit 1-12: The People and the Press: Whose Views Shape the News?
  • Exhibit 1-13: How Journalists See Journalists in 2004
  • Exhibit 1-14: Campaign Journalists (2004)
  • Exhibit 1-15: TV and Newspaper Journalists
  • Exhibit 1-16: Journalists' Ethics and Attitudes, 2005
  • Exhibit 1-17: The News Media and the War, 2005
  • Exhibit 1-18: Slate Magazine Pre-Election Staff Survey (updated January 2014)
  • Exhibit 1-19: Indiana University Polls of Journalists (added May 2014)

Part Two: How the Public Views the Media

A wide variety of public opinion polls have documented the fact that most Americans now see the media as politically biased, inaccurate, intrusive, and a tool of powerful interests. By a nearly three-to-one margin, those who see political bias believe the media bend their stories to favor liberals.

  • Exhibit 2-1: The People and The Press, 1997
  • Exhibit 2-2: What the People Want from the Press
  • Exhibit 2-3: ASNE Journalism Credibility Project, 1998
  • Exhibit 2-4: The People and The Press, 2000
  • Exhibit 2-5: Gallup Polls on Media Bias (updated January 2014)
  • Exhibit 2-6: The People and The Press, 2003
  • Exhibit 2-7: Bias in the 2004 Presidential Campaign
  • Exhibit 2-8: Missouri School of Journalism 2004
  • Exhibit 2-9: American Journalism Review, 2005
  • Exhibit 2-10: CBS's "State of the Media," 2006
  • Exhibit 2-11: Institute for Politics, Democracy and the Internet/Zogby Survey
  • Exhibit 2-12: Coverage of the War in Iraq, 2007
  • Exhibit 2-13: Rasmussen Reports on Media Bias, 2007
  • Exhibit 2-14: Harvard's "National Leadership Index" Survey (2007)
  • Exhibit 2-15: Sacred Heart University Polling Institute (2007)
  • Exhibit 2-16: Public Reaction to Media Coverage of the 2008 Primaries
  • Exhibit 2-17: Rasmussen Reports on Campaign 2008 Bias
  • Exhibit 2-18: Public Overwhelmingly Saw Favoritism For Obama
  • Exhibit 2-19: Pew Study Finds Media Credibility Plummets
  • Exhibit 2-20: Confidence In Media Hits New Low
  • Exhibit 2-21: Trust and Satisfaction with the National Media (2009)
  • Exhibit 2-22: News Media Both Too Liberal and Too Powerful (2009)
  • Exhibit 2-23: 2010 Surveys Find Two-Thirds of Public Is “Angry” at the Media
  • Exhibit 2-24: Gallup Finds Media Distrusted, Public’s Confidence Low (2011)
  • Exhibit 2-25: Pew Finds Record Low Respect for News Media (2011)
  • Exhibit 2-26: Record High 67% See Political Bias in News Media
  • Exhibit 2-27: In Campaign 2012, Voters Saw Media Favoring Obama (added January 2014)
  • Exhibit 2-28: Seeing Liberal Bias in the News (2013) (added January 2014)

Part Three: What Journalists Say about Media Bias

Over the years, the Media Research Center has catalogued the views of journalists on the subject of bias. In spite of overwhelming evidence to the contrary, many journalists still refuse to acknowledge that most of the establishment media tilts to the left. Even so, a number of journalists have admitted that the majority of their brethren approach the news from a liberal angle.

  • Journalists Denying Liberal Bias (updated May 2014)
  • More Journalists Denying Liberal Bias
  • Still More Journalists Denying Liberal Bias
  • Journalists Admitting Liberal Bias (updated May 2014)
  • More Journalists Admitting Liberal Bias

Media Bias Analysis

  • Open Access
  • First Online: 06 October 2022



  • Felix Hamborg


This chapter provides the first interdisciplinary literature review on media bias analysis, thereby contrasting manual and automated analysis approaches. Decade-long research in political science and other social sciences has resulted in comprehensive models to describe media bias and effective methods to analyze it. In contrast, in computer science, computational linguistics, and related fields, media bias is a relatively young research topic. Despite many approaches being technically very advanced, we find that the automated approaches could often yield more substantial results by using knowledge from social science research on the topic.


2.1 Introduction

The Internet has increased the degree of self-determination in how people gather knowledge, shape their own views, and engage with topics of societal relevance [ 249 ]. Unrestricted access to unbiased information is crucial for forming a well-balanced understanding of current events. For many individuals, news articles are the primary source of such information. News articles thus play a central role in shaping personal and public opinion. Furthermore, news consumers rate news articles as having the highest quality and trustworthiness compared to other media formats, such as TV or radio broadcasts, or, more recently, social media [ 61 , 249 , 365 ]. However, media coverage often exhibits an internal bias, reflected in news articles and commonly referred to as media bias . Factors influencing this bias can include the ownership or sources of income of the media outlet or a specific political or ideological stance of the outlet and its audience [ 363 ].

The literature identifies numerous ways in which media coverage can manifest bias. For instance, journalists select events, sources, and from these sources the information they want to publish in a news article. This initial selection process introduces bias to the resulting news story. Journalists can also affect the reader’s perception of a topic through word choice, e.g., if the author uses a word with a positive or a negative connotation to refer to an entity [ 116 ], or by varying the credibility ascribed to the source [ 14 , 99 , 266 ]. Finally, the placement and size of an article within a newspaper or on a website determine how much attention the article will receive [ 37 ].
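Word-choice bias of this kind can be approximated computationally. The sketch below is purely illustrative (the lexicon entries, their scores, and the headlines are invented for this example, not taken from the chapter): it scores a headline by summing the connotation values of lexicon phrases it contains.

```python
# Hypothetical connotation lexicon: positive values for favorable phrasings,
# negative values for unfavorable ones. Real studies use validated resources.
CONNOTATION = {
    "coalition forces": 1, "liberators": 1, "freedom fighters": 1,
    "invading forces": -1, "militants": -1, "regime": -1,
}

def word_choice_score(headline: str) -> int:
    """Sum the connotation values of all lexicon phrases found in the headline."""
    text = headline.lower()
    return sum(value for phrase, value in CONNOTATION.items() if phrase in text)

print(word_choice_score("Coalition forces secure the capital"))            # 1
print(word_choice_score("Invading forces seize the capital from the regime"))  # -2
```

Aggregating such scores over many headlines from one outlet would give a crude signal of systematic slant in word choice, which is the intuition behind several lexicon-based automated approaches.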

The impact of media bias, especially when implemented intentionally (see the review of bias definitions in Sect. 2.2.1 ), on shaping public opinion has been studied by numerous scholars [ 24 ]. Historically, major outlets exerted a strong influence on public opinion, e.g., in elections [ 219 , 237 , 259 ], or the social acceptance of tobacco consumption [ 9 , 362 ]. The influence of media corporations has increased significantly in the past decades. In Germany, for example, only five corporations control more than half of the media [ 189 ], and in the USA, only six corporations control 90% [ 40 , 318 ]. This naturally increases the risk of media coverage being intentionally biased [ 82 , 342 ]. Also on social media , which typically reflects a broader range of opinions, people may still be subject to media bias [ 10 , 15 , 111 ], despite social media being characterized by more direct and frequent interaction between users, and hence presumably more exposure to different perspectives. Some argue that social media users are more likely to actively or passively isolate themselves in a “filter bubble” or “echo chamber” [ 352 ], i.e., only be surrounded by news and opinions close to their own. However, this isolation is not necessarily as absolute as often assumed, e.g., Barberá et al. [ 17 ] found noticeable isolation for political issues but not for others, such as reporting on accidents and disasters. Recent technological developments are another reason for topical isolation of social media consumers, which might lead to a general decrease in the diversity of news consumption. For instance, Facebook, the world’s largest social network with more than three billion users [ 85 ], introduced Trending Topics in 2014, a news overview feature. There, users can discover current events by exclusively relying on Facebook. 
However, the consumption of news from only a single distributor amplifies the previously mentioned level of influence further: only a single company controls what is shown to news consumers.

The automated identification of media bias and the analysis of news articles in general have recently gained attention in computer science. Popular examples are news aggregators, such as Google News , which give news readers a quick overview of a broad news landscape. Yet, established systems currently provide no support for showing the different perspectives contained in articles reporting on the same news event. Thus, most news aggregators ultimately tend to facilitate media bias [ 39 , 375 ]. Recent research efforts aim to fill this gap and reduce the effects of such biases. However, the approaches suffer from practical limitations, such as being fine-tuned to only one news category or relying heavily on user input [ 252 , 253 , 276 ]. As we show in this chapter, an important reason why the technically superior computer science methods perform comparably poorly at automatically identifying instances of media bias is that such approaches currently tend not to make full use of the knowledge and expertise on this topic from the social sciences.

This chapter is motivated by the question of how computer science approaches can contribute to identifying media bias and mitigating the negative bias effects by ultimately making available a more balanced coverage of events and societal issues to news consumers. We address this question by comparing and contrasting established research on the topic of media bias in the social sciences with technical approaches from computer science. This comparative review thus also serves as a guide for computer scientists to better benefit from already more established media bias research in the social sciences. Similarly, social scientists seeking to apply current automated approaches to their own media bias research will also benefit from this review.

The remainder of this chapter is structured as follows. In Sect. 2.2 , we introduce the term media bias, highlight the effects of slanted news coverage, provide an understanding of how bias arises during the production of news, and introduce the most important approaches from the social sciences to analyze media bias. Then, each of the subsections in Sect. 2.3 focuses on a specific form of media bias, describes studies from the social sciences that analyze this form, and discusses methods from computer science that have been used or could be used to identify the specified form of bias automatically. In Sect. 2.4 , we discuss the reliability and generalizability of the manual approaches from the social sciences and point out key issues to be considered when evaluating interdisciplinary research on media bias. Section 2.5 summarizes the key findings of our literature review. Section 2.6 demonstrates the key findings and research gap using a practical example. Lastly, Sect. 2.7 summarizes the findings of the chapter in the context of this thesis.

2.2 Media Bias

This section gives an overview of definitions of media bias as used in social science research on the topic or as employed by automated approaches (Sect. 2.2.1 ). Afterward, we describe the effects of biased news coverage (Sect. 2.2.2 ), develop a conceptual understanding of how media bias arises in the process of news production (Sect. 2.2.3 ), and briefly introduce the most important approaches from the social sciences to analyze bias in the media (Sect. 2.2.4 ).

2.2.1 Definitions

The study of biased news coverage has a long tradition in the social sciences, going back at least to the 1950s [ 253 ]. In the classical definition of Williams, media bias must both be intentional, i.e., reflect a conscious act or choice, and be sustained, i.e., represent a systematic tendency rather than an isolated incident [ 382 ]. This definition sets the media bias that we consider apart from other sources of unintentional bias in news coverage. Sources of unintentional bias include the influence of news values [ 141 ] throughout the production of news [ 276 ] and later the news consumption by readers with different backgrounds [ 266 ]. Examples of news values include the geographic vicinity of a newsworthy event to the location of the news outlet and its consumers or the effects of the general visibility or societal relevance of a specific topic [ 229 ].

Many other definitions of media bias and its specific forms exist, each depending on the particular context and research questions studied. Mullainathan and Shleifer define two high-level types of media bias concerned with the intention of news outlets when writing articles: ideology and spin [ 327 ]. Ideological bias is present if an outlet biases articles to promote a specific opinion on a topic. Spin bias is present if the outlet attempts to create a memorable story. Another definition of media bias that is commonly used distinguishes between three types: coverage , gatekeeping , and statement (cf. [ 64 ]). Coverage bias is concerned with the visibility of topics or entities, such as a person or country, in media coverage. Gatekeeping bias, also called selection bias or agenda bias, relates to which stories media outlets select or reject for reporting. Statement bias, also called presentation bias, is concerned with how articles choose to report on concepts. For example, in the US elections, a well-observed bias arises from editorial slant [ 75 ], in which the editorial position on a given presidential candidate affects the quantity and tone of a newspaper’s coverage. Further forms and definitions of media bias can be found in the discussion by D’Alessio and Allen [ 64 ].

Even more definitions of media bias are found when considering research on automated bias analysis. Automated approaches tackle media bias, for example, as “subtle differences” [ 210 ], “differences of coverage” [ 278 ], “diverse opinions” [ 251 ], or “topic diversity” [ 252 ]. In sum, these definitions are rather superficial and vague, especially when compared to social science research.

In this literature review, we follow the traditional definition by Williams mentioned previously [ 382 ], since it closely resembles how bias is analyzed in the social sciences. To allow for an extensive overview of the media bias literature, we also include studies that are not strictly concerned with intentional biases. Because social science research on media bias and our thesis pursue different objectives, we later provide a task-specific definition of media bias that we use in the methodology chapters of our thesis (Chap. 3 ). Specifically, classical research on media bias in the social sciences investigates bias as systematic tendencies or patterns in news coverage over extended time frames, e.g., to measure the influence of (biased) coverage on society or policy decisions. In contrast, our research question concerns biases in current coverage, e.g., to inform news consumers about such biases. Thus, to enable timely bias communication to news consumers, we explicitly allow for biases that may or may not show tendencies over larger time frames.

2.2.2 Effects of Biased News Consumption

Media bias has a strong impact on both individual and public perception of news events and thus impacts political decisions [ 24 , 69 , 97 , 100 , 159 , 166 , 399 ]. Despite the rise of social media, news articles published by well-established media outlets remain the primary source of information on current events (cf. [ 61 , 249 , 365 ]). Thus, if the reporting of a news outlet is biased, readers are prone to adopting similarly biased views. Today, the effects of biased coverage are amplified by social media, in which readers tend to “follow” only the news that conforms with their established views and beliefs [ 92 , 117 , 250 , 254 , 351 ]. On social media, news readers encounter an “echo chamber,” where their internal biases are only reinforced. Furthermore, most news readers only consult a small subset of available news outlets [ 261 , 262 ], as a result of information overload, language barriers, or their specific interests or habits.

Nearly all news consumers are affected by media bias [ 72 , 190 , 194 , 237 , 259 ], which may, for example, influence voters and, in turn, influence election outcomes [ 71 , 75 , 196 , 237 , 259 ]. Another effect of media bias is the polarization of public opinion [ 352 ], which complicates agreement on contentious topics. These negative effects have led some researchers to believe that media bias challenges the pillars of our democracy [ 166 , 399 ]: if media outlets influence public opinion, is the observed public opinion really the “true” public opinion? For instance, a 2003 survey showed significant differences in the presentation of information across US television channels [ 190 ]. Fox News viewers were the most misinformed about the Iraq War: over 40% of viewers believed that weapons of mass destruction had actually been found in Iraq, the claim the US government had used to justify the war.

According to social science research, the three key ways in which media bias affects the perception of news are priming , agenda setting , and framing [ 75 , 314 ]. Priming theory states that how news consumers tend to evaluate a topic is influenced by their (prior) perception of the specific issues that were portrayed in news on that topic. Agenda setting refers to the ability of news publishers to influence which topics are considered relevant by selectively reporting on topics of their choosing. News consumers’ evaluation of topics is furthermore based on the perspectives portrayed in news articles, which are also known as frames [ 79 ]. Journalists use framing to present a topic from their perspective to “promote a particular interpretation” [ 80 ].

We illustrate the effect of framing using an example provided by Kahneman and Tversky [ 166 ]: Assume a scenario in which a population of 600 people is endangered by an outbreak of a virus. In a first survey, Kahneman and Tversky asked participants which option they would choose:

(A) 200 people will be saved.

(B) There is a one-third chance that 600 people will be saved and a two-thirds chance that no one will be saved.

In the first survey, 72% of the participants chose option A, and 28% chose option B. Afterward, a second survey was conducted that objectively represented the exact same choices, but here the options were framed in terms of likely deaths rather than lives saved.

(C) 400 people will die.

(D) There is a one-third chance that no one will die and a two-thirds chance that 600 people will die.

In this case, the preference of participants was reversed: 22% of the participants chose option C, and 78% chose option D. The results of the survey thus demonstrated that framing alone, that is, the way in which information is presented, can draw attention to either the negative or the positive aspects of an issue [ 166 ].
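The two framings are objectively identical, which a quick numeric check makes explicit. This is an illustrative sketch (not from the chapter), using the one-third and two-thirds probabilities of the original study: every option leaves the same expected number of survivors.

```python
# Expected number of survivors for each option in the framing example:
# 600 people are at risk of the outbreak.
POPULATION = 600

# "Lives saved" framing
saved_a = 200                                   # certain option
saved_b = (1 / 3) * 600 + (2 / 3) * 0           # risky option

# "Deaths" framing, converted back into survivors
saved_c = POPULATION - 400                      # certain option
saved_d = (1 / 3) * (POPULATION - 0) + (2 / 3) * (POPULATION - 600)  # risky option

print(saved_a, saved_b, saved_c, saved_d)  # 200 200.0 200 200.0
```

Since both certain options and both risky options have identical expected values, any reversal in participants' preferences is attributable to the presentation alone.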

In summary, the effects of media bias are manifold and especially dangerous when individuals are unaware of the occurrence of bias. The recent concentration of the majority of mass media in the hands of a few corporations amplifies the potential impact of media bias of individual news outlets even further.

2.2.3 Understanding Media Bias

Understanding not only the various forms of media bias but also at which stage in the news production process they can arise [ 276 ] is beneficial for devising methods and systems that help reduce the impact of media bias on readers. We focus on a specific conceptualization of the news production process, depicted in Fig. 2.1 , which models how media outlets turn events into news stories and how readers then consume the stories (cf. [ 14 , 69 , 146 , 147 , 276 , 277 ]). The stages in the process map to the forms of bias described by Baker, Graham, and Kaminsky [ 14 ]. Since each stage of the process is distinctively defined, we find this conceptualization of the news production process and the included bias forms to be the most comprehensive model of media bias for the purpose of devising future research in computer science. In the following paragraphs, we illustrate the different forms of media bias within the news production and consumption process. In Sect. 2.3 , we discuss each form in more detail. Note that while the process focuses on news articles, most of our discussion in Sect. 2.3 can be adapted to other media types, such as social media, blogs, or transcripts of newscasts.

Fig. 2.1: The news production process is a model explaining how forms of bias emerge during the process of turning an event (happening in reality ) into a news item (which is then perceived by news consumers ). The orange part at the top represents internal and external factors that influence the production of a news item and its slants. The green parts at the bottom represent bias forms that can emerge during the three phases of the news production process. The “consumer context” label (far right) additionally shows factors influencing the perception of the described news event that are not related to media bias. Adapted from [ 276 ]

Various parties can directly or indirectly, intentionally or structurally influence the news production process (refer to the motives underlying media bias shown in the orange rectangle in Fig. 2.1 ). News producers have their own political and ideological views [59]. These views extend through all levels of a news company, e.g., news outlets and their journalists typically have a slant toward a certain political direction [ 117 ]. Journalists might also introduce bias in a story if the change is supportive of their career [ 19 ]. In addition to these internal forces, external factors may also influence the news production cycle. News stories are often tailored for a current target audience of the news outlet [ 98 , 117 , 220 ], e.g., because readers switch to other news outlets if their current news source too often contradicts their own beliefs and views [ 92 , 98 , 250 , 254 , 351 ]. News producers may tailor news stories for their advertisers and owners , e.g., they might not report on a negative event involving one of their main advertisers or partnered companies [ 69 , 103 , 220 ]. Similarly, producers may bias news in favor of governments since they rely on them as a source of information [ 25 , 65 , 146 ].

In addition to these external factors, business reasons can also affect the resulting news story, e.g., investigative journalism is more expensive than copy-editing prepared press releases. Ultimately, most news producers are profit-oriented companies that may not claim the provision of bias-free information to their news consumers as their main goal [ 281 ]; in fact, news consumers expect commentators to take positions on important issues and filter important from unimportant information (cf. [ 31 , 81 ]).

All these factors influence the news production process at various stages (gray). In the first stage, gathering , journalists select facts from all the news events that happened. This stage starts with the selection of events, also named story selection. Naturally, not all events are relevant to a news outlet’s target audience, and sensational stories might yield more sales [ 117 ]. Next, journalists need to select sources , e.g., press releases, other news articles, or studies, to be used when writing an article. Ultimately, the journalists must decide which information from the sources to include in and which to exclude from the article to be written. This step is called commission or omission and likewise affects which perspective is taken on the event.

In the next phase, writing , journalists may use different writing styles to bias news. For instance, two forms defined in the production process are labeling (e.g., a person is labeled positively, “independent politician,” whereas for the other party, no label or a negative label is used) and word choice (e.g., how the article refers to an entity, such as “coalition forces” vs. “invading forces”).

The last stage, editing , is concerned with the presentation style of the story. This includes, for instance, the placement of the story and the size allocation (e.g., a large cover story receives more attention than a brief comment on page 3), the picture selection (e.g., usage of emotional pictures or their size influences attention and perception of an event), and the picture explanation (i.e., placing the picture in context using a caption).

Lastly, spin bias is a form of media bias that represents the overall bias of a news article. An article’s spin is essentially a combination of all previously mentioned forms of bias and other minor forms (see Sect. 2.3.8 ).

Summary of the News Production Process

In summary, the resulting news story has potentially been subject to various sources of media bias at different stages of its genesis before it is finally consumed by the reader. The consumer context , in turn, affects how readers actually perceive the described information (cf. [ 16 , 348 ]). The perception of any event differs depending on the readers’ background knowledge , their preexisting attitude toward the described event (sometimes called hostile media perception ) [ 367 ], their social status (how readers are affected by the event), their country (e.g., news reporting negatively on a reader’s country might lead readers to reject the reported topic), and a range of other factors. Note, however, that “consumer context” is not a form of media bias and is thus excluded from analysis in the remainder of this chapter.

Other Bias Models

Other models exist of how media bias arises, but their components can effectively be mapped to the news production and consumption process detailed previously. For instance, Entman defines a communication process that essentially mirrors all the same steps discussed in Fig. 2.1 : (1) Communicators make intentional or unintentional decisions about the content of a text. (2) The text inherently contains different forms of media bias. (3) Receivers, i.e., news readers, draw conclusions based on the information and style presented in the text (which, however, may or may not reflect the text’s perspective). (4) Receivers of a social group are additionally subject to culture , also known as a common set of perspectives [ 79 ].

Table 2.1 gives an overview of the previously described forms of media bias, where the “medium” column shows the medium that is the source of the specific form of bias and the column “target object” shows the items within the target medium that are affected.

2.2.4 Approaches in the Social Sciences to Analyze Media Bias

Researchers from the social sciences primarily conduct so-called content analyses to identify and quantify media bias in news coverage [ 64 ] or to, more generally, study patterns in communication. First, we briefly describe the concept and workflow of content analysis. Next, we describe the concept of frame analysis , which is a specialized form of content analysis commonly used to study the presence of frames in news coverage [ 368 ]. Lastly, we introduce meta-analysis , in which researchers combine the findings from other studies and analyze general patterns across these studies [ 155 ].

2.2.4.1 Content Analysis

Content analysis quantifies media bias by identifying and characterizing its instances within news texts. In a content analysis, researchers first define one or more analysis questions or hypotheses. Researchers then gather the relevant news data, and coders (also called annotators) systematically read the news texts, annotating parts of the texts that indicate instances of media bias relevant to the analysis being performed. Afterward, the researchers use the annotated findings to accept or reject their hypotheses [ 228 , 267 ].

In a deductive content analysis, researchers devise a codebook before coders read and annotate the texts [ 68 , 227 ]. The codebook contains definitions, detailed rules, and examples of what should be annotated and in which way. Sometimes, researchers reuse existing codebooks, e.g., Papacharissi and de Fatima Oliveira [ 274 ] used annotation definitions from a previous study by Cappella and Jamieson [ 44 ] to create their codebook, and then they performed a deductive content analysis comparing news coverage on terrorism in the USA and the UK.

In an inductive content analysis, coders read the texts without specified instructions on how to code the text, only knowing the research question [ 117 ]. Since statistically sound conclusions can only be derived from the results of deductive content analyses [ 260 ], researchers conduct inductive content analyses mainly in early phases of their research, e.g., to verify the line of research or to find patterns in the data and devise a codebook [ 260 , 368 ].

Usually, creating and refining the codebook is a time-intensive process, during which multiple analyses or tests using different iterations of a codebook are performed. A common criterion that must be satisfied before the final deductive analysis can be conducted is to achieve a sufficiently high inter-coder reliability (ICR) or inter-rater reliability (IRR) [ 195 ]. ICR, also called inter-coder agreement, inter-annotator reliability, or inter-annotator agreement, represents how often individual coders annotate the same parts of the documents with the same codes from the codebook. IRR represents this kind of agreement as well, but in a labeling task, e.g., with a fixed set of labels to choose from, rather than (also) annotating a phrase in a text. In some cases, these terms and tasks may overlap. In the remainder of this thesis, we will generally use the term ICR for annotation tasks where phrases have to be selected (and labeled), such as in a content analysis. We will use the term IRR for labeling tasks, e.g., where only one or more labels have to be selected but the phrase is given, such as in sentiment classification.
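As a concrete illustration of an IRR measure for a labeling task (this sketch and its example labels are our own, not from the chapter), Cohen's kappa compares the observed agreement of two raters with the agreement expected by chance given each rater's label distribution:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning one label per item."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal label distribution.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((freq_a[lbl] / n) * (freq_b[lbl] / n) for lbl in labels)
    return (p_o - p_e) / (1 - p_e)

# Invented example: two coders label six headlines by sentiment.
a = ["pos", "pos", "neg", "neu", "neg", "pos"]
b = ["pos", "neg", "neg", "neu", "neg", "pos"]
print(round(cohens_kappa(a, b), 3))  # 0.739
```

Raw percent agreement here is 5/6, but kappa discounts the agreement the two coders would reach by labeling at random with the same marginal frequencies, which is why reliability studies prefer it over simple agreement.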

Social scientists distinguish between two types of content analyses: quantitative and qualitative [ 366 ]. A qualitative analysis seeks to find “all” instances of media bias, including subtle instances that require human interpretation of the text. In a quantitative analysis, researchers in the social sciences determine the frequency of specific words or phrases (usually as specified in a codebook). Additionally, researchers may subsume specific sets of words to represent so-called salient topics, roughly resembling frames (cf. [ 63 ]). Quantitative content analyses may also measure other, non-textual features of news articles, such as the number of articles published by a news outlet on a certain event or the size and placement of a story in a printed newspaper. These measurements are also called volumetric measurements [ 64 ].

Thus far, the majority of studies on media bias performed in the social sciences conduct qualitative content analyses because the findings tend to be more comprehensive. Quantitative analyses can be performed faster and can be partially automated, but are more likely to miss subtle forms of bias [ 316 ]. We discuss both qualitative and quantitative analyses for the individual bias forms in Sect. 2.3 .

Content analysis software , generally also called computer-assisted qualitative data analysis software (CAQDAS) , supports analysts when performing content analyses [ 215 ]. Most tools support the manual annotation of findings for the analyzed news data or for other types of reports, such as police reports [ 267 ]. To reduce the large amount of text that needs to be reviewed, the software helps users find relevant text passages, e.g., by finding documents or text segments containing the words specified in the codebook or from a keyword list [ 336 ], so that coders must review fewer texts manually. In addition, most software helps users find patterns in the documents, e.g., by analyzing the frequencies of terms, topics, or word co-occurrences [ 215 ].

2.2.4.2 Frame Analysis

Frame analysis (also called framing analysis) investigates how readers perceive the information in a news article [ 79 ]. This is done by broadly asking two questions: (1) What information is conveyed in the article? (2) How is that information conveyed? Both questions together define a frame . As described in Sect. 2.2.2 , a frame is a selection of and emphasis on specific parts of an event.

To empirically determine the frames in news articles or other news texts, frame analysis is typically concerned with one or more of four dimensions [ 271 ]: syntactical, script, thematic, and rhetorical. The syntactical dimension includes patterns in the arrangement of words and, more broadly, information, e.g., descending order of salience in a story. The script dimension refers to characteristics similar to those of a story, i.e., a news article may have an introduction, climax, and end. The thematic dimension refers to which information is mentioned in a news text, e.g., which “facts,” events, or sources are mentioned or quoted to strengthen the text’s argument. Lastly, the rhetorical dimension entails how such information is presented, e.g., the word choice. Using these dimensions, researchers can systematically analyze and quantify the viewpoints of news texts.

Not all frame analyses focus on the text of news articles. For instance, DellaVigna and Kaplan [ 71 ] analyzed the gradual adoption of the Fox News cable channel between 1996 and 2000 to show that Fox News had a “significant impact” [ 71 ] on the presidential elections. Essentially, the study analyzed whether a district had already adopted the Fox News channel and what the election result was. The results revealed that the Republican Party had an increased vote share in those towns that had adopted Fox News.

2.2.4.3 Meta-Analysis

In a meta-analysis , researchers combine the results of multiple studies to derive further findings from them [ 155 ]. For example, in the analysis of event selection bias, a common question is which factors influence whether media organizations will choose to report on an event or not. McCarthy, McPhail, and Smith [ 229 ] performed a meta-analysis of the results of prior work suggesting that the main factors for media to report on a demonstration are the demonstration size and the previous media attention on the demonstration’s topic.

2.2.5 Summary

News coverage has a strong impact on public opinion, i.e., what people think about ( agenda setting ), the context in which news is perceived ( priming ), or how topics are communicated ( framing ). Researchers from the social sciences have extensively studied such forms of media bias, i.e., the intentional, non-objective coverage of news events. The research has resulted in a broad literature on different forms and possible sources of media bias and their impact on (political) communication or opinion formation. In tandem, various well-established research methodologies, such as content analysis, frame analysis, and meta-analysis, have emerged in the social sciences.

The three forms of analysis discussed in Sect. 2.2.4 require significant manual effort and expertise [ 276 ], since those analyses require human interpretation of the texts and cannot be fully automated. For example, a quantitative content analysis might (semi-)automatically count words that have previously been manually defined in a codebook, but it would be unable to read for “meaning between the lines,” which is why such methods continue to be considered less comprehensive than a qualitative analysis. However, recent methodological progress in natural language processing (NLP) in computer science promises to help alleviate many of these concerns.

In the remainder of this chapter, we discuss the different forms of media bias defined by the news production and consumption process. The process we have laid out in detail previously is in our view the most suitable conceptual framework to map analysis workflows from the social sciences to computer science and thus helps us to discuss where and how computer scientists can make unique contributions to the study of media bias.

2.3 Manual and Automated Approaches to Identify Media Bias

This section is structured into nine subsections discussing all of the forms of media bias depicted in Table 2.1 . In each subsection, we first introduce each form of bias and then provide an overview of the studies and techniques from the social sciences used to analyze that particular form. Subsequently, we describe methods and systems that have been proposed by computer science researchers to identify or analyze that specific form of media bias. Since media bias analysis is a rather young topic in computer science, often no or only a few methods have been specifically designed for a given form of media bias; in such cases, we describe the methods that could best be used to study it. Each subsection concludes with a summary of the main findings highlighting where and how computer science research can make a unique contribution to the study of media bias.

2.3.1 Event Selection

From the countless events happening each day, only a small fraction makes it into the news. Event selection is a necessary task, yet it is also the first step at which news coverage can become biased. The analysis of this form of media bias requires both an event-specific and a long-term observation of multiple news outlets. The main questions guiding such an analysis are whether an outlet’s coverage shows topical patterns, i.e., whether some topics are reported more or less in one outlet as compared to another, and which factors influence whether an outlet reports on an event or not.

To analyze event selection bias, at least two datasets are required. The first dataset consists of news articles from one or more outlets; the second is used as a ground truth or baseline, which ideally contains “all” events relevant to the analysis question. For the baseline dataset, researchers from the social sciences typically rely on sources that are considered to be the most objective, such as police reports [ 119 ]. After linking events across the datasets, a comparison enables researchers to deduce factors that influence whether a specific news outlet reports on a given event. For instance, several studies compare demonstrations mentioned in police reports with news coverage on those demonstrations [ 228 , 229 , 267 ]. During the manual content analyses, the researchers extracted the type of event, i.e., whether it was a rally, march, or protest, the issue the demonstration was about, and the number of participants. Two studies found that the number of participants and the issue of the event, e.g., protests against the legislative body [ 267 ], had a high impact on the frequency of news coverage [ 119 ].

Meta-analyses have also been used to analyze event selection bias, mainly by summarizing findings from other studies. For instance, D’Alessio and Allen found that the main factors influencing media reporting on a demonstration are the demonstration size and the previous media attention on the demonstration’s topic [ 64 ].

To our knowledge, only a few automated approaches have been proposed that specifically aim to analyze event selection bias. Unlike studies in the social sciences, none of them compares news coverage with a baseline that is considered objective; instead, they compare the coverage of multiple outlets or other online news sources [ 34 , 307 ]. In the following, we first describe these approaches in more detail, and then we describe current methods and systems that could support the analysis of this form of bias.

Bourgeois, Rappaz, and Aberer [ 34 ] span a matrix over news sources and events extracted from GDELT [ 201 ], where the value of each cell in the matrix describes whether the source (row) reported on the event (column) [ 215 ]. They use matrix factorization (MF) to extract “latent factors” that influence whether a source reports on an event. The main factors found were the affiliation, ownership, and geographic proximity of two sources. Saez-Trumper, Castillo, and Lalmas [ 307 ] analyze relations between news sources and events. By analyzing the overlap between news sources’ content, they find, for example, that news agencies, such as AP, publish the most non-exclusive content—i.e., if news agencies report on an event, other news sources will likely also report on it—and that news agencies are more likely to report on international events than other sources. Media type was also a relevant event selection factor. For example, magazine-type media, such as The Economist , are more likely to publish on events with high prominence, i.e., events that receive a lot of attention in the media.
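The core idea of such a factorization can be sketched with a truncated SVD as a simple stand-in for the MF technique used by the authors; the source-event matrix below is invented toy data, not actual GDELT data:

```python
import numpy as np

# Toy binary matrix: rows = news sources, columns = events,
# 1 = the source reported on the event (invented data).
R = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 0, 1, 1, 0],
    [0, 0, 1, 1, 1],
], dtype=float)

# Truncated SVD: approximate R by the product of two low-rank factor matrices.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
W = U[:, :k] * s[:k]   # latent factor loadings per source
H = Vt[:k, :]          # latent factor loadings per event

# Sources whose rows in W are close share latent reporting behavior,
# which the study linked to affiliation, ownership, and proximity.
print(np.round(W @ H, 1))
```

Inspecting which sources load on the same latent factors then hints at shared reporting behavior, analogous to the factors the study extracted.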

Similar to the manual analyses performed in the social sciences, automated approaches need to (1) find or use articles relevant to the question being analyzed (we describe relevant techniques later in this subsection; see the paragraphs on news aggregation), (2) link articles to baseline data or other articles, and (3) compute statistics on the linked data.

In task (2), we have to distinguish whether one wants to compare articles to a baseline, i.e., across different types of media, or to other news articles. Linking events from different media, e.g., news articles and tweets on the same events, has recently gained attention in computer science [ 307 , 361 ]. However, to our knowledge, there are currently no generic methods to extract the required information from police reports or other non-media databases, since the information that needs to be extracted depends on the particular question studied, and the information structure and format differ greatly between these documents, e.g., police reports from different countries or states usually do not share common formats (cf. [ 206 , 231 ]).

To link news articles reporting on the same event, various techniques can be used. Event detection extracts events from text documents. Since news articles are usually concerned with events, event detection is commonly used in news-related analyses. For instance, in order to group related articles, i.e., those reporting on the same event [ 164 ], one needs to first find events described in these articles. Topic modeling extracts semantic concepts, or topics, from a set of text documents where topics are typically extracted as lists of weighted terms. A commonly employed implementation is latent Dirichlet allocation (LDA) [ 30 ], which is, for instance, used in the Europe Media Monitor (EMM) news aggregator [ 26 ].

Related articles can also be grouped with the help of document clustering methods, such as affinity propagation [ 91 ] or hierarchical agglomerative clustering (HAC) [ 226 ]. HAC, for example, computes pairwise document similarity on text features using measures such as the cosine distance on TF-IDF vectors [ 308 ] or word embeddings [ 197 ]. This way, HAC creates a hierarchy of the most similar documents and document groups [ 222 ]. HAC has been used successfully in several research projects [ 232 , 276 ]. Other methods to group related articles exploit news-specific characteristics, such as the five journalistic W questions (5Ws). The 5Ws describe the main event of a news article, i.e., who did what, when, where, and why. A few works additionally extract the how question [ 321 ], i.e., how something happened or was done (5W1H extraction or question answering). Journalists usually answer the 5W questions within the first few sentences of a news article [ 52 ]. Once phrases answering the 5W question are extracted, articles can be grouped by comparing their 5W phrases. We propose a method for 5W1H extraction in Chap. 4 .
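A minimal, stdlib-only sketch of grouping headlines via HAC over TF-IDF cosine similarity might look as follows (single-link merging; the headlines and the similarity threshold are invented for illustration):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vector (as a dict) for each tokenized document."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    return [{t: tf * math.log(n / df[t]) for t, tf in Counter(doc).items()}
            for doc in docs]

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def hac(docs, threshold=0.15):
    """Single-link agglomerative clustering; stop merging below the threshold."""
    vecs = tfidf_vectors(docs)
    clusters = [[i] for i in range(len(docs))]
    while len(clusters) > 1:
        best, pair = -1.0, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                sim = max(cosine(vecs[a], vecs[b])
                          for a in clusters[i] for b in clusters[j])
                if sim > best:
                    best, pair = sim, (i, j)
        if best < threshold:
            break
        i, j = pair
        clusters[i] += clusters.pop(j)  # j > i, so i stays valid
    return clusters

headlines = [
    "election results spark protests downtown".split(),
    "protests erupt after election results".split(),
    "central bank raises interest rates".split(),
]
print(hac(headlines))  # → [[0, 1], [2]]
```

Production systems replace the quadratic pairwise search with more efficient variants and use richer features (e.g., word embeddings), but the grouping principle is the same.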

News aggregation Footnote 1 is one of the most popular approaches to enable users to get an overview of the large amount of news published nowadays. Established news aggregators, such as Google News and Yahoo News, show related articles by different outlets reporting on the same event. Hence, the approach is suitable to reveal instances of bias by source selection, e.g., if one outlet does not report on an important event. News aggregators rely on methods from computer science, particularly methods from natural language processing (NLP). The analysis pipeline of most news aggregators aims to find the most important news topics and present them in a compressed form to users. The analysis pipeline typically involves the following tasks [ 84 , 128 ]:

Data gathering , i.e., crawling articles from news websites.

Article extraction from website data, which is typically HTML or RSS.

Grouping , i.e., finding and grouping related articles reporting on the same topic or event.

Summarization of related articles.

Visualization , e.g., presenting the most important topics to users.

For the first two tasks, data gathering and article extraction, established and reliable methods exist, e.g., in the form of web crawling frameworks [ 246 ]. Articles can be extracted with naive approaches, such as website-specific wrappers [ 270 ], or more generic methods based on content heuristics [ 185 ]. Combined approaches perform both crawling and extracting and offer other functionality tailored to news analysis. In Sect. 3.5 , we propose news-please , a web crawler and extractor for news articles, which extracts information from all news articles on a website, given only the root URL of the news outlet to be crawled.

The objective of grouping is to identify topics and group articles on the same topic, e.g., using LDA or other topic modeling techniques, as described previously. Articles are then summarized using methods such as simple TF-IDF-based scores or complex approaches considering redundancy and order of appearance [ 294 ]. By performing the five tasks of the news aggregation pipeline in an automated fashion, news aggregators can cope with the large amount of information produced by news outlets every day.

However, no established news aggregator reveals the event selection bias of news outlets to its users. By incorporating this functionality for short-term or event-oriented analysis of event selection bias, news aggregators could show which publishers did not publish an article on a selected event. For long-term or trend-oriented analysis, news aggregators could visualize a news outlet’s coverage frequency of specific topics, e.g., to show whether the issues of a specific politician or party, or an oil company’s accident, are promoted or demoted.

In addition to traditional news aggregators, which show topics and related topics in a list, recent news aggregators use different analysis approaches and visualizations to promote differences in news coverage caused by biased event selection. Matrix-based news aggregation (MNA) is an approach we devised earlier that follows the analysis workflow of established news aggregators while organizing and visualizing articles into rows and columns of a two-dimensional matrix [ 128 , 129 ]. The exemplary matrix depicted in Fig. 2.2 reveals what is primarily stated by media in one country (rows) about another country (columns). For instance, the cell of the publisher country Russia and the mentioned country Ukraine, denoted with RU-UA, contains all articles that have been published in Russia and mention Ukraine. Each cell shows the title of the most representative article, determined through a TF-IDF-based summarization score among all cell articles [ 128 ]. Users either select rows and columns from a list of given configurations for common use cases, e.g., to analyze only major Western countries, or define their own rows and columns from which the matrix shall be generated.

Fig. 2.2: News overview to enable comparative news analysis in matrix-based news aggregation. The color of each cell refers to its main topic. Source [ 135 ]

To analyze event selection bias, users can use MNA to explore main topics in different countries as in Fig. 2.2 or span the matrix over publishers and topics in a country.
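The selection of each cell's most representative article can be approximated as follows. The scoring below, a simple sum of TF-IDF term weights over a headline, is an assumed stand-in for the summarization score used in MNA, and the cell's headlines are invented:

```python
import math
from collections import Counter

def tfidf_score(doc_tokens, all_docs):
    """Sum of TF-IDF weights of a document's terms (a simple summary score)."""
    n = len(all_docs)
    df = Counter(t for d in all_docs for t in set(d))
    tf = Counter(doc_tokens)
    return sum(count * math.log(n / df[t]) for t, count in tf.items())

# Hypothetical cell (e.g., RU-UA): headlines of all articles in that cell.
cell_articles = [
    "talks over gas transit stall again".split(),
    "gas transit talks stall amid sanctions dispute".split(),
    "weather delays flights".split(),
]

# Show the title of the article with the highest score as the cell's representative.
best = max(cell_articles, key=lambda d: tfidf_score(d, cell_articles))
print(" ".join(best))  # → gas transit talks stall amid sanctions dispute
```

Computing this score per cell keeps the matrix readable while still surfacing the cell's dominant topic.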

Research in the social sciences concerned with bias by event selection requires significant effort due to the time-consuming manual linking of events from news articles to a second “baseline” dataset. Many established studies use event data from a source that is considered “objective,” for example, police reports (cf. [ 6 , 231 , 267 ]). However, the automated extraction of relevant information from such non-news sources requires the development and maintenance of specialized tools for each of the sources. Reasons for the increased extraction effort include the diversity or unavailability of such sources, e.g., police reports are structured differently in different countries or may not be published at all. Linking events from different sources in an automated fashion poses another challenge because of the different ways in which the same event may be described by each of the sources. This places a limit on the possible contributions of automated approaches for comparison across sources or articles.

In our view, the automated analysis of events within news articles, however, is a very promising line of inquiry for computer science research. Sophisticated tools can already gather and extract relevant data from online news sources. Methods to link events in news articles are already available or are the subject of active research [ 26 , 30 , 164 , 222 , 232 , 276 , 308 ]. In Sect. 4.2 , we propose a method that extracts phrases describing journalistic properties of an article’s main event, i.e., who did what, when, where, why, and how. Of course, news articles must originate from a carefully selected set of news publishers, which represent not only mainstream media but also alternative and independent publishers, such as Wikinews. Footnote 2 Finally, revealing differences in the selection of top news stories between publishers, or even the mass media of different countries, has shown promising results [ 128 ] and could eventually be integrated into regular news consumption via news aggregators, demonstrating the potential for computer science approaches to make a unique contribution to the study of event selection.

2.3.2 Source Selection

Journalists must decide on the trustworthiness of information sources and the actuality of information for a selected event. While source selection is a necessary task to avoid information overload, it may lead to biased coverage, e.g., if journalists mainly consult sources supporting one perspective when writing the article. The choice of sources used by a journalist or an outlet as a whole can reveal patterns of media bias. However, journalistic writing standards do not require journalists to list sources [ 371 ], which makes the identification of original sources difficult or even impossible. One can only find hints in an article, such as the use of quotes, references to studies, phrases such as “according to [name of other news outlet]” [ 116 ], or the dateline, which indicates whether and from which press agency the article was copy-edited. One can also analyze whether the content and the argumentation structure match those of an earlier article [ 68 ].

The effects of source selection bias are similar to the effects of commission and omission (Sect. 2.3.3 ), because using only sources supporting one side of the event when writing an article (source selection) is similar to omitting all information supporting the other side (omission). Because many studies in the social sciences are concerned with the effects of media bias, e.g., [ 24 , 69 , 72 , 98 , 100 , 159 , 166 , 190 , 194 , 237 , 259 , 399 ], and the effects of these three bias forms are similar, bias by source selection and bias by commission and omission are often analyzed together.

Few analyses in the social sciences aim to find the selected sources to derive insights on the source selection bias of an article or an outlet. However, there are notable exceptions, for example, one study counts how often news outlets and politicians cite phrases originating in think tanks and other political organizations. The researchers had previously assigned the organizations to a political spectrum [ 117 ]. The frequencies of specific phrases used in articles, such as “We are initiating this boycott, because we believe that it is racist to fly the Confederate Flag on the state capitol” [ 117 ], which originated in the civil rights organization NAACP, are then aggregated to estimate the bias of news outlets. In another study of media content, Papacharissi and Oliveira annotate indications of source selection in news articles, such as whether an article refers to a study conducted by the government or independent scientists [ 274 ]. One of their key findings is that UK news outlets often referred to other news articles, whereas US news outlets did that less often but referred to governments, opinions, and analyses.

On social media , people can be subject to their own source selection bias, as discussed in Sect. 2.1 . For instance, on Facebook, people tend to be friends with like-minded people, e.g., those who share similar beliefs or political orientations [ 15 ]. People who use social media platforms as their primary news source are subject to selection bias not only by the operating company [ 82 , 85 ] but also by their friends [ 15 ].

To our knowledge, there are currently no approaches in computer science that aim to specifically identify bias by source selection. One exception is NewsDeps, an exploratory approach for determining the content dependencies between news articles [ 139 ]. Our approach employs simple methods from plagiarism detection (PD) described afterward to identify which parts of a news article stem from previously published news articles.

However, several automated techniques are well suited to address this form of bias. Plagiarism detection (PD) is a field in computer science with the broad aim of identifying instances of unauthorized information reuse in documents. Methods from PD may be used to identify the potential sources of information for a given article beyond identifying actual “news plagiarism” (cf. [ 179 ]). While there are some approaches focused on detecting instances of plagiarism in news, e.g., using simple text-matching methods to find 1:1 duplicates [ 309 ], research on news plagiarism is not as active as research on academic plagiarism. This is most likely a consequence of the fact that authorized copy-editing is a fundamental component in the production of news. Another relevant field that we describe in this section is semantic textual similarity (STS), which measures the semantic equivalence of two (usually short) texts [ 5 ].

The vast majority of plagiarism detection techniques analyze text [ 89 , 235 ] and thus could also be adapted and subsequently applied to news texts. Current methods can reliably detect copy-and-paste plagiarism, the most common form of plagiarism [ 89 , 405 ]. Ranking methods use, for instance, TF-IDF and other information retrieval techniques to estimate the relevance of other documents as plagiarism candidates [ 149 ]. Fingerprinting methods generate hashes of phrases or documents; documents with similar hashes indicate plagiarism candidates [ 149 , 324 ]. Hybrid approaches assess documents’ similarity using diverse features [ 236 ].
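To make the fingerprinting idea concrete, the sketch below hashes word k-shingles and keeps a deterministic subset as the fingerprint ("0 mod p" selection, a simpler relative of winnowing). The article snippets are invented; real systems tune k and the selection scheme:

```python
import hashlib

def fingerprint(text, k=5, keep_every=2):
    """Hash all k-word shingles; keep a deterministic subset as the fingerprint."""
    words = text.lower().split()
    shingles = (" ".join(words[i:i + k]) for i in range(len(words) - k + 1))
    hashes = {int(hashlib.md5(s.encode()).hexdigest(), 16) for s in shingles}
    # "0 mod p" selection: keep only hashes divisible by keep_every.
    return {h for h in hashes if h % keep_every == 0}

def overlap(fp_a, fp_b):
    """Jaccard similarity of two fingerprints; high values flag reuse candidates."""
    if not fp_a or not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

original = "the prime minister announced a new climate policy on monday morning in parliament"
copy_edited = "the prime minister announced a new climate policy on monday according to officials"
unrelated = "local team wins championship after dramatic overtime victory in the final game"

print(overlap(fingerprint(original), fingerprint(copy_edited)))
print(overlap(fingerprint(original), fingerprint(unrelated)))
```

Because only a fraction of the hashes is stored, fingerprints stay compact enough to compare a new article against a large archive of candidate sources.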

Today’s plagiarism detection methods already provide most of the functionality to identify the potential sources of news articles. Copy-edited articles are often shortened or slightly modified and, in some cases, are a 1:1 duplicate of a press agency release. These types of slight modifications can be reliably detected with ranking or fingerprinting methods (cf. [ 235 , 309 ]). Current methods still struggle with heavily paraphrased texts [ 235 ], but research is extending to other, non-textual data types, such as analyzing links [ 107 ], an approach that can be used for the analysis of online news texts as well. Another text-independent approach to plagiarism detection is citation-based plagiarism detection, which achieves good results by comparing the patterns of citations between two scientific documents [ 105 ]. Due to their text independence, these algorithms also allow a cross-lingual detection of information reuse [ 105 ]. News articles typically do not contain citations, but the patterns of quotes, hyperlinks, or other named entities can also be used as a suitable marker to measure the semantic similarity of news articles (cf. [ 107 , 117 , 203 ]). Some articles also contain explicit referral phrases, such as “according to The New York Times .” The dateline of an article can also state whether and from where an article was copy-edited [ 140 ]. Text search and rule-based methods can be used to identify referral phrases and to extract the resources being referenced. In our view, future research should focus on identifying the span of information that was taken from the referred resource (see also Sect. 2.3.3 ).

Semantic textual similarity (STS) methods measure the semantic equivalence of two (usually short) texts [ 5 ]. STS methods use basic measures, such as n-gram overlap, WordNet node-to-node distance, and syntax features, e.g., compare whether the predicate is the same in two sentences [ 312 ]. More recent methods combine various techniques and use deep learning networks, achieving a Pearson correlation of their STS results to human coders of 0.78 [ 306 ]. Recently, these methods have also focused on cross-lingual STS [ 5 ] and use, for example, machine translation before employing regular mono-lingual STS methods [ 36 ]. Machine translation has proven useful also for other cross-lingual tasks, such as event analysis [ 368 ].
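A basic n-gram overlap baseline of this kind can be sketched as follows; the sentence pairs are invented, and real STS systems add lexical resources, syntax features, and learned components on top:

```python
def ngrams(tokens, n):
    """Set of n-grams (as tuples) of a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def sts_score(a, b, max_n=3):
    """Average n-gram Jaccard overlap for n = 1..max_n (a simple STS baseline)."""
    ta, tb = a.lower().split(), b.lower().split()
    scores = []
    for n in range(1, max_n + 1):
        ga, gb = ngrams(ta, n), ngrams(tb, n)
        if ga and gb:
            scores.append(len(ga & gb) / len(ga | gb))
    return sum(scores) / len(scores) if scores else 0.0

print(sts_score("the senate passed the budget bill",
                "the budget bill passed the senate"))   # high overlap
print(sts_score("the senate passed the budget bill",
                "rain is expected over the weekend"))   # low overlap
```

Higher-order n-grams penalize word-order differences, which is why the reordered sentence scores below 1.0 despite identical vocabulary.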

Graph analysis is concerned with the analysis of relations between nodes in a graph. The relation between news articles can be used to construct a dependency graph. Spitz and Gertz analyzed how information propagates in online news coverage using hyperlinks linking to other websites [ 333 ]. They identified four types of hyperlinks: navigational (menu structure to navigate the website), advertisement , references (links within the article pointing to semantically related sites), and internal links (further articles published by the same news outlet). They only used reference links to build a network, since the other link types contain too many unrelated sites (internal) or irrelevant information (advertisement and navigational). One finding by Spitz and Gertz is that networks of news articles can be analyzed with methods of citation network analysis. Another method extracts quotes attributed to individuals in news articles to follow how information propagates over time in a news landscape [ 203 ]. One finding is that quotes undergo variation over time but remain recognizable with automated methods [ 203 ].

In our view, computer science research could therefore provide promising solutions to long-standing technical problems in the systematic study of source selection by combining methods from PD and graph analysis. If two articles are strongly similar, the later published article will most likely contain reused information from the earlier published article. This is a typical case in news coverage, e.g., many news outlets copy-edit articles from press agencies or other major news outlets [ 358 ]. Using PD, such as fingerprinting and pattern-based analysis as previously described, to measure the likelihood of information reuse between all possible pairs of articles in a set of related articles implicitly constructs a directed dependency graph. The nodes represent single articles, the directed edges represent the flow of information reuse, and the weights of the edges represent the degree of information reuse. The graph can be analyzed with the help of methods from graph analysis, e.g., to estimate the importance or slant of news outlets or to identify clusters of articles or outlets that frame an event in a similar manner (cf. [ 333 ]). For instance, the more news outlets reuse information from a specific news outlet, the higher we can rate that outlet’s importance. The detection of semantic (near-)duplicates would also help lower the number of articles that researchers from the social sciences need to manually investigate to analyze other forms of media bias in content analyses.
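The graph construction described above can be illustrated with plain dictionaries; the article names, pairwise reuse scores, and the edge threshold are all invented for illustration:

```python
# Hypothetical reuse likelihoods from a PD method:
# reuse[(a, b)] = estimated degree to which article b reuses content of article a.
reuse = {
    ("agency_wire", "outlet_a"): 0.9,
    ("agency_wire", "outlet_b"): 0.7,
    ("outlet_a", "outlet_c"): 0.4,
    ("outlet_b", "outlet_c"): 0.1,  # below threshold, dropped
}

THRESHOLD = 0.3

def dependency_graph(reuse, threshold):
    """Directed graph source -> reuser, keeping only edges above a reuse threshold."""
    graph = {}
    for (src, dst), w in reuse.items():
        if w >= threshold:
            graph.setdefault(src, {})[dst] = w
    return graph

def importance(graph):
    """Rate a node higher the more (and the more strongly) others reuse its content."""
    return {src: sum(edges.values()) for src, edges in graph.items()}

g = dependency_graph(reuse, THRESHOLD)
print(importance(g))
```

On this toy graph, the press-agency node scores highest because both outlets reuse its content, matching the intuition that heavily reused sources are the important ones.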

In summary, the analysis of bias by source selection is challenging, since the sources of information are mostly not documented in news articles. Hence, both in the social sciences and in computer science research, only a few studies have analyzed this form of bias. Notable exceptions are the studies discussed previously that analyzed phrases originating from think tanks and quoted by politicians and news outlets. Methods from computer science can in principle provide the required techniques for the (semi-)automated analysis of this form of bias and thus make a very valuable contribution. The methods, most importantly those from plagiarism detection research, could be (and partially already have been [ 309 ]) adapted and extended from academic plagiarism detection and other domains, where reliable methods already exist.

2.3.3 Commission and Omission of Information

Analyses of bias by commission and omission compare the information contained in a news article with that in other news articles or sources, such as police reports and other official reports. The “implementation” and effects of commission and omission overlap with those of source selection, i.e., when information supporting or opposing a perspective is either included or left out of an article. Analyses in the social sciences aim to determine which frames the information included in such articles supports. For instance, frame analyses typically compare the frequencies of frame-attributing phrases in a set of news articles [ 98 , 120 ]. More generally, content analysis compares which facts are presented in news articles and other sources [ 326 ]. In the following, we describe exemplary studies of each of the two forms.

A frame analysis by Gentzkow and Shapiro quantified phrases that may sway readers to one or the other side of a political issue [ 98 ]. For this analysis, the researchers first examined which phrases were used significantly more often by politicians of one party over another and vice versa. Afterward, they counted the occurrence of phrases in news outlets to estimate the outlet’s bias toward one side of the political spectrum. The results of the study showed that news producers have economic motives to bias their coverage toward the ideological views of their readers. Similarly, another method, briefly mentioned in Sect. 2.3.2 , counts how often US congressmen use the phrases coined by think tanks, which the researchers previously associated with political parties [ 117 ]. One finding is that Fox News coverage was significantly slanted toward the US Republican Party.

A content analysis conducted by Smith et al. [ 326 ] investigated whether the aims of protesters corresponded to the way in which news outlets reported on demonstrations. One of their key hypotheses was that news outlets will tend to favor the positions of the government over the positions of protesters. In the analysis, Smith et al. extracted relevant textual information from news articles, transcripts of TV broadcasts, and police reports. They then asked analysts to annotate the data and could statistically confirm the previously mentioned hypothesis.

Bias by commission and omission has not specifically been addressed by automated approaches despite the existence of various methods that we consider beneficial for the analysis of both forms of bias in a (semi-)automated manner. Researchers from the social sciences are already using text search to find relevant documents and phrases within documents [ 336 ]. However, search terms need to be constructed manually, and the final analysis still requires a human interpretation of the text to answer coding tasks, such as “assess the spin of the coverage of the event” [ 326 ]. Another challenge is that content analyses comparing news articles with other sources require the development of scrapers and information extractors tailored specifically to these sources. To our knowledge, there are no established or publicly available generic extractors for commonly used sources such as police reports.

An approach that partially addresses commission and omission of information is aspect-level browsing as implemented in the news aggregator NewsCube [ 276 ]. Park et al. [ 276 ] define an “aspect” as the semantic proposition of a news topic. The aspect-level browsing enables users to view different perspectives on political topics. The approach follows the news aggregation workflow described in Sect. 2.3.1 , but with a novel grouping phase: NewsCube extracts aspects from each article using keywords and syntactic rules and weighs these aspects according to their position in the article (motivated by the inverted pyramid concept: the earlier the information appears in the article, the more important it is [ 52 ]). Afterward, NewsCube performs HAC to group related articles. The visualization is similar to the topic list shown in established news aggregators, but additionally shows different aspects of a selected topic. A user study found that users of NewsCube became aware of the different perspectives and subsequently read more articles containing perspective-attributing aspects. However, the approach cannot reliably assess the diversity of the aspects. NewsCube shows all aspects, even though many of them are similar, which decreases the efficiency of using the visualization to get an overview of the different perspectives in news coverage. Word and phrase embeddings might be used to recognize the similarity of aspects (cf. [ 197 , 319 ]). The visualization also does not highlight which information is subject to commission and omission bias, i.e., what information is contained in one article and left out in another article.
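
The grouping phase described above can be sketched in a few lines of Python. This is a minimal illustration, not NewsCube’s actual implementation: aspects are approximated by capitalized keywords weighted by their position in the article (inverted pyramid), and grouping uses single-linkage agglomerative clustering over a weighted Jaccard similarity; the keyword heuristic and the similarity threshold are illustrative assumptions.

```python
from itertools import combinations

def extract_aspects(sentences):
    """Crude aspect extraction: capitalized keywords, weighted by position
    (inverted pyramid: the earlier a sentence, the higher the weight)."""
    aspects = {}
    n = len(sentences)
    for i, sentence in enumerate(sentences):
        weight = (n - i) / n
        for token in sentence.split():
            word = token.strip(".,")
            if word[:1].isupper():
                aspects[word] = max(aspects.get(word, 0.0), weight)
    return aspects

def similarity(a, b):
    """Weighted Jaccard similarity between two aspect dictionaries."""
    keys = set(a) | set(b)
    num = sum(min(a.get(k, 0.0), b.get(k, 0.0)) for k in keys)
    den = sum(max(a.get(k, 0.0), b.get(k, 0.0)) for k in keys)
    return num / den if den else 0.0

def cluster(aspect_dicts, threshold=0.2):
    """Single-linkage agglomerative clustering: merge two clusters while
    any cross-cluster article pair is similar enough."""
    clusters = [[i] for i in range(len(aspect_dicts))]
    merged = True
    while merged:
        merged = False
        for x, y in combinations(range(len(clusters)), 2):
            if any(similarity(aspect_dicts[i], aspect_dicts[j]) >= threshold
                   for i in clusters[x] for j in clusters[y]):
                clusters[x] += clusters[y]
                del clusters[y]
                merged = True
                break
    return clusters
```

Tuning the similarity threshold trades off cluster granularity against the risk of merging unrelated topics, which relates directly to the aspect-redundancy problem noted above.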

Methods from plagiarism detection (see Sect. 2.3.2 ) open a promising research direction for the automated detection of commission and omission of information in news. More than 80% of related news articles add no new information and only reuse information contained in previously published articles [ 358 ]. Comparing the presented facts of one article with the facts presented in previously published articles would help identify commission and omission of information. Methods from PD can detect and visualize which segments of a text may have been taken from other texts [ 105 ]. The relatedness of bias by source selection and bias by commission and omission suggests that an analysis workflow may ideally integrate methods from PD to address both issues (also see Sect. 2.3.2 ).
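
As a rough sketch of how PD-style text reuse detection could flag commission and omission, the following toy example compares word n-gram fingerprints of a new article against a previously published one. The 3-gram size and 0.5 overlap threshold are illustrative assumptions; real PD systems use more robust fingerprinting and alignment.

```python
def ngrams(text, n=3):
    """Word n-gram fingerprint of a text (punctuation-stripped, lowercased)."""
    tokens = [t.strip(".,").lower() for t in text.split()]
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def reuse_report(new_article, prior_article, n=3, threshold=0.5):
    """Label each sentence of new_article as 'reused' if most of its n-grams
    already occur in prior_article, else as 'added' (new information)."""
    prior = ngrams(prior_article, n)
    report = []
    for sentence in new_article.split(". "):
        grams = ngrams(sentence, n)
        if grams:
            overlap = len(grams & prior) / len(grams)
            report.append((sentence, "reused" if overlap > threshold else "added"))
    return report
```

Running the same comparison in the opposite direction would surface information present in the prior article but omitted from the new one.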

Centering resonance analysis (CRA) aims to find how influential terms are within a text by constructing a graph with each node representing a term that is contained in the noun phrases (NP) of a given text [ 60 ]. Two nodes are connected if their terms are in the same NP or boundary terms of two adjacent NPs. The idea of the approach is that the more edges a node has, the more influential its term is to the text’s meaning. To compare two documents, methods from graph analysis can be used to analyze both CRA graphs (Sect. 2.3.2 gives a brief introduction to methods from graph analysis). Researchers from the social sciences have successfully employed CRA to extract influential words from articles and then manually compare the information contained in the articles [ 274 ]. Recent advancements toward computational extraction and representation of the “meaning” of words and phrases, especially word embeddings [ 197 ], may serve as another way to (semi-)automatically compare the contents of multiple news articles.
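
The CRA graph construction just described can be sketched as follows, assuming the noun phrases have already been extracted (in practice, a chunker would provide them): terms within one NP are linked, the boundary terms of adjacent NPs are linked, and a term’s influence is its node degree.

```python
from collections import defaultdict

def cra_influence(noun_phrases):
    """CRA-style graph: terms in the same NP are connected, as are the
    boundary terms of adjacent NPs; influence = node degree."""
    edges = defaultdict(set)
    for np in noun_phrases:
        for a in np:
            for b in np:
                if a != b:
                    edges[a].add(b)
    for prev, nxt in zip(noun_phrases, noun_phrases[1:]):
        if prev[-1] != nxt[0]:
            edges[prev[-1]].add(nxt[0])
            edges[nxt[0]].add(prev[-1])
    return {term: len(neighbors) for term, neighbors in edges.items()}
```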

To conclude, studies in the social sciences researching bias by commission and omission have always compared the analyzed articles with other news articles and/or non-media sources, such as police reports. No approaches from computer science research specifically aim to identify this form of bias. However, automated methods, specifically PD, CRA, graph analysis, and more recently also word embeddings, are promising candidates to address this form of bias, opening new avenues for unique contributions of well-established computer science methodology in this area. CRA, for instance, has already been employed by researchers from the social sciences to compare the information contained in two articles.

2.3.4 Word Choice and Labeling

When referring to a semantic concept, such as an entity, a geographic position, or an activity, authors can label the concept and choose from various words to refer to it (cf. [ 86 ]). Instances of bias by labeling and word choice frame the referred concept differently, e.g., simply positively or negatively, or they highlight a specific perspective, e.g., economic or cultural (see Sect. 2.2.2 for a background on framing). Examples include “immigrant” versus “economic migrant,” and “Robert and John got in a fight” versus “Robert attacked John.” The effects of this form of bias range from the concept level, e.g., a specific politician is shown to be incompetent, to the article level, e.g., the overall tone of the article features emotional or factual words [ 263 , 274 ].

Content analyses and framing analyses are used in the social sciences to identify bias by labeling and word choice within news articles. As with the approaches discussed in previous sections, the manual coding task is once again time-consuming, since annotating news articles requires careful human interpretation. The analyses are typically either topic-oriented or person-oriented. For instance, Papacharissi and Oliveira used CRA to extract influential words (see Sect. 2.3.3 ). They investigated labeling and word choice in the coverage of different news outlets on topics related to terrorism [ 274 ]. They found that The New York Times used a more dramatic tone, e.g., its articles dehumanized terrorists by not ascribing any motive to terrorist attacks and used metaphors such as “David and Goliath” [ 274 ]. The Washington Post used a less dramatic tone, and both the Financial Times and The Guardian focused their news articles on factual reporting. Another study analyzed whether articles portrayed Bill Clinton, the US president at that time, positively, neutrally, or negatively [ 263 ].

The automated analysis of labeling and word choice in news texts is challenging due to limitations of current NLP methods [ 128 ], which cannot reliably interpret the frame induced by labeling and word choice because the frame depends on the context of the words in the text [ 266 ]. Few automated methods from computer science have been proposed to identify bias induced by labeling and word choice. Grefenstette et al. devised a system that investigates the frequency of affective words close to words defined by the user, for example, names of politicians [ 116 ]. They find that the automatically derived polarity scores of named entities are in line with the publicly assumed slant of the analyzed news outlets, e.g., George Bush, the Republican US president at that time, was mentioned more positively in the conservative The Washington Times than in other news outlets.
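
The frequency-based idea of Grefenstette et al. could be sketched as follows. The tiny affect lexicon, window size, and example sentences are illustrative assumptions, not the authors’ actual resources.

```python
# Toy affect lexicon; a real analysis would use a validated dictionary.
POSITIVE = {"praised", "successful", "strong", "popular"}
NEGATIVE = {"criticized", "failed", "weak", "unpopular"}

def entity_polarity(sentences, entity, window=5):
    """Sum positive minus negative words within `window` tokens of each
    mention of the entity, across all sentences."""
    score = 0
    for sentence in sentences:
        tokens = [t.strip(".,").lower() for t in sentence.split()]
        for i, token in enumerate(tokens):
            if token == entity.lower():
                nearby = tokens[max(0, i - window): i + window + 1]
                score += sum(w in POSITIVE for w in nearby)
                score -= sum(w in NEGATIVE for w in nearby)
    return score
```

Aggregating such scores per outlet and per entity yields the outlet-level polarity comparison described above.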

The most closely related field is sentiment analysis , which aims to extract an author’s attitude toward a semantic concept mentioned in the text [ 272 ]. Current sentiment analysis methods reliably extract the unambiguously stated sentiment [ 272 ]. For example, those methods reliably identify whether customers used “positive,” such as “good” and “durable,” or “negative” words, such as “poor quality,” to review a product [ 272 ]. However, the highly context-dependent, hence more ambiguous sentiment in news coverage described previously in this section remains challenging to detect reliably [ 266 ]. Recently, researchers proposed approaches using affect analysis , e.g., using more dimensions than polarity in sentiment analysis to extract and represent emotions induced by a text, and crowdsourcing , e.g., systems that ask users to rate and annotate phrases that induce bias by labeling and word choice [ 277 ]. We describe these fields in the following paragraphs.

While sentiment analysis presents one promising technique for automating the identification of bias by word choice and labeling, the performance of current sentiment classification on news texts is poor (cf. [ 167 , 266 ]) or even “useless” [ 335 ]. Two reasons why sentiment analysis performs poorly on news texts [ 266 ] are (1) the lack of large-scale gold standard datasets and (2) the high context dependency or implicitness of sentiment-inducing phrases. Large annotated datasets are required to train current sentiment classifiers [ 400 ]. More traditional classifiers use manually [ 153 ] or semi-automatically [ 13 , 110 , 335 ] created dictionaries of positive and negative words to score a sentence’s sentiment. However, to our knowledge, no sentiment dictionary exists that is specifically designed for news texts, and generic dictionaries tend to perform poorly on such texts (cf. [ 16 , 167 , 266 ]). Godbole, Srinivasaiah, and Skiena [ 110 ] used WordNet to automatically expand a small, manually created seed dictionary into a larger dictionary. They used the semantic relations of WordNet to expand the manually added words to closely related words. An evaluation showed that the resulting dictionary achieved similar quality in sentiment analysis to solely manually created dictionaries. However, the entity-related sentiment classification using this dictionary, tested on news websites and blogs, was not compared against a ground truth, such as an annotated news dataset. Most importantly, dictionary-based approaches are not sufficient for news texts, since the sentiment of a phrase depends on its context; for example, in economics, a “low market price” may be good for consumers but bad for producers.
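
The dictionary-expansion idea can be illustrated with a toy synonym map standing in for WordNet’s semantic relations; the words, relations, and expansion depth below are assumptions for illustration only.

```python
# Toy synonym map standing in for WordNet's semantic relations.
SYNONYMS = {
    "good": ["solid", "durable"],
    "solid": ["robust"],
    "bad": ["poor", "shoddy"],
}

def expand(seed, depth=2):
    """Grow a seed sentiment dictionary by following synonym links
    up to `depth` hops, as in semi-automatic lexicon construction."""
    words = set(seed)
    for _ in range(depth):
        words |= {s for w in words for s in SYNONYMS.get(w, [])}
    return words

def score(text, positive, negative):
    """Dictionary-based polarity: positive minus negative token hits."""
    tokens = text.lower().split()
    return sum(t in positive for t in tokens) - sum(t in negative for t in tokens)
```

The context-dependency problem noted above is visible even here: the expanded lexicon scores every occurrence of a word identically, regardless of who benefits from, say, a “low market price.”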

To avoid the difficulties of interpreting news texts, researchers have proposed approaches to perform sentiment analysis specifically on quotes [ 16 ] or on the comments of readers [ 278 ]. The motivation for analyzing only the sentiment contained in quotes or comments is that phrases stated by someone are far more likely to contain an explicit statement of sentiment or opinion-conveying words. While the analysis of quotes achieved poor results [ 16 ], readers’ comments appeared to contain more explicitly stated opinions, and regular sentiment analysis methods perform better: a classifier that used the extracted sentiments from the readers’ comments achieved a precision of 0.8 [ 278 ].

Overall, the performance of sentiment analysis on news texts is still rather poor. This is attributable to the fact that, thus far, little research has focused on sentiment analysis of news texts compared to the large number of publications targeting its prime use case: product reviews. Currently, no public annotated news dataset for sentiment analysis exists, although such a dataset is a crucial requirement for driving forward successful, collaborative research on this topic.

A final challenge when applying sentiment analysis to news articles is that the one-dimensional positive-negative scale used by all mature sentiment analysis methods may fall short of representing the complexity of news articles. Some researchers have suggested investigating emotions or affects, e.g., induced by headlines [ 341 ] or by entire news articles [ 116 ], where analyzing the full text seems to yield better results. Affect analysis aims to find the emotions that a text induces on the contained concepts, e.g., entities or activities, by comparing relevant words from the text, e.g., those near the investigated concept, with affect dictionaries [ 344 ]. Bhowmick [ 28 ] devised an approach that automatically estimates which emotions a news text induces in its readers using features such as tokens, polarity, and the semantic representation of tokens. An ML-based approach by Mishne classifies blog posts into emotion classes using features such as n-grams and semantic orientation to determine the mood of the author when writing the text [ 243 ].

Semantics derived using word embeddings may be used to determine whether words in an article carry a slant, since the most common word embedding models themselves contain biases, particularly gender bias and racial discrimination [ 32 , 42 ]. Bolukbasi et al. describe a method to debias word embeddings [ 156 ]; the dimensions removed or changed by this process contain potentially biased words and may hence also be used to find biased words in news texts.
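
The underlying idea, projecting word vectors onto a gender direction, can be sketched with tiny hand-made vectors; real embeddings such as word2vec have hundreds of dimensions, and the values below are purely illustrative.

```python
import math

# Tiny hand-made vectors for illustration; real embeddings are much larger.
VEC = {
    "he":       [1.0, 0.2, 0.1],
    "she":      [-1.0, 0.2, 0.1],
    "nurse":    [-0.6, 0.8, 0.3],
    "engineer": [0.7, 0.9, 0.2],
    "table":    [0.0, 0.5, 0.9],
}

def gender_bias(word):
    """Project a word vector onto the he-she direction; a value far from
    zero indicates a gendered slant in the embedding space."""
    direction = [a - b for a, b in zip(VEC["he"], VEC["she"])]
    norm = math.sqrt(sum(d * d for d in direction))
    unit = [d / norm for d in direction]
    return sum(w * u for w, u in zip(VEC[word], unit))
```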

Besides fully automated approaches to identify bias by labeling and word choice, semi-automated approaches incorporate users’ feedback. For instance, NewsCube 2.0 employs crowdsourcing to estimate the bias of articles reporting on a topic. The system allows users to collaboratively annotate bias by labeling and word choice in news articles [ 277 ]. Afterward, NewsCube 2.0 presents contrastive perspectives on the topic to users. In their user study, Park et al. [ 277 ] find that NewsCube 2.0 helps participants collectively organize news articles according to their slant of bias. Section 2.3.8 describes AllSides, a news aggregator that also employs crowdsourcing, though not to identify bias by labeling and word choice but to identify spin bias, i.e., the overall slant of an article.

The forms of bias by labeling and word choice have been studied extensively in the social sciences using frame analyses and content analyses. However, to date, not much research on both forms has been conducted in computer science. Yet, the previously presented techniques from computer science, such as sentiment analysis and affect analysis, are already capable of achieving reliable results in other domains. Besides, crowdsourcing has already successfully been used to identify instances of such bias.

2.3.5 Placement and Size Allocation

The placement and size allocation of a story indicate the value a news outlet assigns to that story [ 14 , 64 ]. Long-term analyses reveal patterns of bias, e.g., the favoring of specific topics or the avoidance of others. Furthermore, the findings of such an analysis should be combined with a frame analysis to give comprehensive insights into the bias of a news outlet, e.g., a news outlet might report disproportionately often on one topic while its articles are otherwise well-balanced and objective [ 75 ].

The first manual studies on the placement and size of news articles in the social sciences were conducted as early as the 1960s. Researchers measured the size and the number of columns of articles in newspapers, or the broadcast length in minutes dedicated to a specific topic, to investigate whether there were any differences in US presidential election coverage [ 337 , 338 , 339 , 340 ]. These early studies, as well as a more recent study conducted in 2000 [ 34 ], found no significant differences in article size between the news outlets analyzed. Fewer studies have focused on the placement of an article, but those that did found that article placement does not reveal patterns of bias for specific news outlets [ 339 , 340 ]. Related factors that have also been considered are the sizes of headlines and pictures (see also Sect. 2.3.6 for more information on the analysis of pictures), which likewise showed no significant patterns of bias [ 339 , 340 ].

Bias by article placement and size has not been revisited recently, even though the rise of online news and social media may have introduced significant changes. Traditional printed news articles are a permanent medium, in the sense that once they are printed, their content cannot (easily) be altered, especially not across all issues ever printed. Online news websites, however, are often updated. For example, if a news story is still developing, the news article may be updated every few minutes (cf. [ 59 ]). Such updates also affect the placement and allotted size of previews of articles on the main page and on other navigational pages. To our knowledge, no study has yet systematically analyzed the changes in the size and position of online news articles over time.

Fully automated methods are able to measure the placement and size allocation of news articles because both forms can be determined by volumetric measurements (see Sect. 2.2.4 ). Printed newspapers must be digitized first, e.g., using optical character recognition (OCR) and document segmentation techniques [ 160 , 248 ]. Measuring a digitized or online article’s placement and size is a trivial task. Due to the Internet’s inherent structure of linked websites, online news even allows for more advanced and fully automated measurements of news article importance, such as PageRank [ 269 ], which could also be applied within the pages of the publishing news outlet. Most popular news datasets, such as RCV1 [ 205 ], are text-based and do not contain information on the size and placement of a news article. Future research, however, should especially take into consideration the fast pace of online news production as described previously.
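
As an illustration of how link structure could rank articles within an outlet’s site, here is a minimal PageRank over a toy intra-site link graph; the damping factor and iteration count follow common defaults, and the page names are hypothetical.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal PageRank over a dict mapping each page to its outgoing
    links; dangling pages spread their rank evenly over all pages."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page in pages:
            targets = links.get(page, [])
            if targets:
                for t in targets:
                    new[t] += damping * rank[page] / len(targets)
            else:
                for t in pages:
                    new[t] += damping * rank[page] / n
        rank = new
    return rank

# Hypothetical intra-site link graph of a news outlet.
links = {"front_page": ["story_a", "story_b"], "story_a": ["story_b"]}
```

A story linked from both the front page and other stories accumulates the highest rank, which could serve as a proxy for the importance the outlet assigns to it.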

While measuring size and placement automatically is a straightforward task in computer science, only few specialized systems currently exist that can measure these forms of news bias. Saez-Trumper, Castillo, and Lalmas [ 307 ] devised a system that measures the importance ascribed to a news story by an outlet by counting the total number of words of news articles reporting on the story. To measure the importance ascribed to the story by the outlet’s readers, the system counts the number of tweets linking to these news articles. One finding is that both factors are slightly correlated. NewsCube’s visualization is designed to provide equal size and avoid unfair placement of news articles to “not skew users’ visual attention” [ 276 ]. Even though the authors ascribe this issue high importance in their visualization, they do not analyze placement and size in the underlying articles.
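
The correlation between outlet-side coverage and reader-side attention reported by Saez-Trumper, Castillo, and Lalmas can be reproduced in spirit with a plain Pearson correlation; the per-story word and tweet counts below are hypothetical.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-story totals: words published vs. tweets linking to them.
word_counts = [1200, 800, 3000, 450, 1600]
tweet_counts = [340, 150, 800, 90, 420]
```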

Research in the social sciences and in computer science benefits from the increasing accessibility of online news, which allows effective automated analysis of bias that takes article placement and size into consideration. Measuring the placement and size of articles is a trivial and scalable task that can be performed on any number of articles without requiring high manual effort. However, most recent studies in the social sciences have not considered including bias by placement and size in their analyses. The same is true for systems in computer science, which should similarly include the placement and size of articles as an additional dimension of media bias. With the conclusions drawn from the analysis of traditional, printed articles still in need of verification for online media, computer science approaches can make a truly unique contribution here.

2.3.6 Picture Selection

Pictures contained in news articles can influence how readers perceive a reported topic [ 304 ]. In particular, readers who wish to get an overview of current events are likely to browse many articles and thus view only each article’s headline and image. The effects of picture selection even go so far as to influence readers’ voting preferences in elections [ 304 ]. Reporters or news agencies sometimes (purposefully) show pictures out of context [ 83 ], e.g., a popular picture in 2015 showed an aggressive refugee with an alleged ISIS flag fighting against police officers. It later turned out that the picture was taken in 2012, before the rise of ISIS, and that the flag was not related to ISIS [ 70 ]; hence, the media had falsely linked the refugee with the terrorist organization.

Researchers from the social sciences have analyzed pictures used in news articles for over 50 years [ 173 ], approximately as long as media bias itself has been studied. Basic studies count the number of pictures and their size to measure the degree of importance ascribed by the news outlet to a particular topic (see also Sect. 2.3.5 for information on bias by size). In this section, we describe the techniques studies use to analyze the semantics of selected images. To our knowledge, all bias-related studies in the social sciences are concerned with political topics. Analyses of picture selection are either person-oriented or topic-oriented .

Person-oriented analyses ask analysts to rate the articles’ pictures showing specific politicians. Typical rating dimensions are [ 169 , 371 ]:

Expression , e.g., smiling vs. frowning

Activity , e.g., shaking hands vs. sitting

Interaction , e.g., cheering crowd vs. alone

Background , e.g., the country’s flags vs. not identifiable

Camera angle , e.g., eye-level shots vs. shots from above

Body posture , e.g., upright vs. bowed torso

Findings are mixed: a study from 1998 found no significant differences in the selected pictures between the news outlets analyzed, e.g., whether a news outlet’s selected pictures favored a specific politician [ 371 ]. Another study, from 1988, found that The Washington Post did not exhibit significant picture selection bias but that The Washington Times selected images that were more likely favorable toward Republicans [ 169 ]. A study of German TV broadcasts in 1976 found that one candidate for the German chancellorship, Helmut Schmidt, was significantly more often shown in favorable shots, including better camera angles and reactions of citizens, than the other main candidate, Helmut Kohl [ 171 ].

Topic-oriented analyses do not investigate bias toward persons but toward certain topics. For instance, a recent study on Belgian news coverage analyzed the presence of two frames [ 369 ]: asylum seekers in Belgium are (1) victims that need protection or (2) intruders that disturb Belgian culture and society. Articles supporting the first frame typically chose pictures depicting refugee families with young children in distress or expressing fear. Articles supporting the second frame chose pictures depicting large groups of mostly male asylum seekers. The study found that the victim frame was predominantly adopted in Belgian news coverage and particularly in the French-speaking part of Belgium. The study also revealed a temporal pattern: during Christmas time, the victim frame was even more predominant.

To our knowledge, there are currently no systems or approaches from computer science that analyze media bias through image selection. However, methods in computer vision can measure many of the previously described dimensions. This is especially true since the recent rise of deep learning, where current methods achieve unprecedented classification performance [ 370 ]. Automated methods can identify faces in images, recognize emotions, categorize objects shown in pictures, and even generate captions for a picture. Research has advanced so far in these applications that several companies, such as Facebook, Microsoft, and Google, are using such automated methods in production, e.g., in autonomous cars, or are offering them as a paid service.

In the broad context of bias through image selection, Segalin et al. [ 317 ] trained a convolutional neural network (CNN) on the Psycho-Flickr dataset to estimate the personality traits of the pictures’ authors. To evaluate the classification performance of the system, they compared the CNN’s classifications with self-assessments by picture authors and also with attributed assessments by participants of a study. The results of their evaluation suggest that CNNs are suitable to derive such characteristics that are not even visible in the analyzed pictures.

Picture selection is an important factor in the perception of news. Basic research from psychology has shown that image selection can slant coverage in one direction, although past studies in the social sciences on bias by picture selection concluded that there were no significant differences in picture selection. Advances in image processing research and the increasing accessibility of online news provide completely new avenues to study the potential effects of picture selection. Computer science approaches can primarily contribute here by enabling the automated analysis of images on a much bigger scale, allowing researchers to reopen important questions on the effect of picture selection in news coverage and beyond.

2.3.7 Picture Explanation

Captions below images and referrals to the images in the main text provide images with the needed textual context. Images and their captions should be analyzed jointly because text can change a picture’s meaning and vice versa [ 172 , 173 ]. For instance, during Hurricane Katrina in 2005, two similar pictures published in US media showed survivors wading away with food from a grocery store. The only difference was that one picture showed a black man, who “looted” the store, while the other picture depicted a white couple, who “found” food in the store [ 328 ].

Researchers from the social sciences typically perform two types of analyses concerned with bias from image captions: jointly analyzing image and caption, or analyzing only the caption while ignoring the image. Only a few studies analyze captions and images jointly. For instance, a comparison of images and captions from The Washington Post and The Washington Times found that the captions were not significantly biased [ 169 ]. A frame analysis of the refugee topic in Belgian news coverage also took image captions into consideration. However, the authors focused on the overall impression of the analyzed articles rather than examining any potential bias specifically present in the picture captions [ 369 ].

The vast majority of studies analyze captions without placing them in context with their pictures. Studies and techniques concerned with the text of a caption (but not the picture) are described in the previous sections, especially in the sections for bias by commission and omission (see Sect. 2.3.3 ) and labeling and word choice (see Sect. 2.3.4 ). We found that most studies in the social sciences either analyze image captions as a component of the main text or analyze images but disregard their captions entirely [ 339 , 340 , 371 ]. Likewise, relevant methods from computer science are effectively the same as those concerned with bias by commission and omission (see Sect. 2.3.3 ) and labeling and word choice (see Sect. 2.3.4 ). For the other type of studies, i.e., jointly analyzing images and captions, relevant methods are discussed in Sect. 2.3.6 , i.e., computer vision to analyze the contents of pictures, and additionally in Sections 2.3.3 and 2.3.4 , e.g., sentiment analysis to find biased words in captions.

To our knowledge, no study has examined picture referrals contained in the article’s main text. This is most likely due to the infrequency of picture referrals.

The few analyses of captions suggest that bias by picture explanation is not very common. However, more fundamental studies show the impact of captions on the perception of images and note rather subtle differences in word choice. While many studies have analyzed captions as part of the regular text, e.g., analyzing bias by labeling and word choice, research currently lacks specialized analyses that examine captions in conjunction with their images.

2.3.8 Spin: The Vagueness of Media Bias

Bias by spin is closely related to all other forms of media bias and is also the vaguest form. Spin is concerned with the context of presented information. Journalists create the spin of an article on all textual levels, e.g., by supporting a quote with an explanation (phrase level), by highlighting certain parts of the event (paragraph level), or even by concluding the article with a statement that frames all previously presented information differently (article level). The order in which facts are presented to the reader influences what is perceived (e.g., some readers might only read the headline and lead paragraph) and how readers rate the importance of reported information [ 52 ]. Not only the text of an article but all other elements, including pictures, captions, and the presentation of the information, contribute to an article’s overall spin.

In the social sciences, the two primarily used methods to analyze the spin of articles are frame analysis and, more generally, content analysis. For instance, one finding of the terrorism analysis conducted by Papacharissi and Oliveira (see Sect. 2.3.2 ) was that The New York Times often personified the events in its articles, e.g., by focusing on the persons involved in an event and by using dramatic language [ 274 ].

Some practices in journalism can be seen as countermeasures to mitigate media bias. Press reviews summarize an event by referring to the main statements found in articles by other news outlets. This does not necessarily reveal media bias, because any perspective can be supported by source selection, e.g., if only “reputable” outlets are used. However, press reviews typically broaden a reader’s understanding of an event and might be a starting point for further research. Another practice that supports the mitigation of media bias is opposing commentaries in newspapers, where two authors subjectively elaborate their perspectives on the same topic. Readers see both perspectives and can make their own decisions regarding the topic.

Social media has given rise to new collaborative approaches to media bias detection. Reddit is a social news aggregator, where users post links or texts regarding current events or other topics and rate or comment on posts by other users. Through the comments on a post, a discussion can emerge that is often controversial and contains the various perspectives of commenters on the topic. Reddit also has a “media bias” thread where contributors share examples of biased articles. Wikinews is a collaborative news producer, where volunteers author and edit articles. Wikinews aims to provide “reliable, unbiased and relevant news […] from a neutral point of view.” However, two main issues are as follows: first, the mixed quality of the news items, because many authors may participate in producing them, and second, the low number of articles, i.e., only major events are covered in the English version and other languages have even fewer articles. Thus, Wikinews currently cannot be used as a primary, fully reliable news source. Some approaches employ crowdsourcing to visualize different opinions or statements on politicians or news topics, for example, the German news outlet Spiegel Online frequently asks readers to define their position regarding two pairs of contrary statements that span a two-dimensional map [ 331 ]. Below the map, the news outlet lists excerpts from other outlets that support or contradict the map’s statements.

The automated analysis of spin bias using methods from computer science is perhaps the most challenging of all, because spin’s manifestation is the vaguest among the forms of bias discussed. Spin refers to the overall perception of an article. Bias by spin is not, however, merely the sum of all other forms; it also includes further factors, such as the order in which information is presented in a news article, the article’s tone, and the emphasis placed on certain facts. The methods we describe in the following are partially also relevant for other forms of bias. For instance, the measurement of an article’s degree of personification in the study on terrorism in news coverage [ 274 ] is supported by the computation of CRA [ 52 ]. What is not automated is the annotation of entities and their association with an issue. Named entity extraction [ 255 , 391 ] could be used to partially automate these previously manual tasks.

Other approaches analyze news readers’ input, such as readers’ comments, to identify differences in news coverage. The rationale of these approaches is that readers’ input contains explicitly stated opinions and sentiment on certain topics, which are usually missing from the news article itself. Explicitly stated opinions can be reliably extracted with the help of NLP methods, such as sentiment analysis. For instance, one method analyzes readers’ comments to categorize related articles [ 1 ]. The method measures the similarity of two articles by comparing their reader comments, focusing in each comment on the mentioned entities, the expressed sentiment, and the country of the comment’s author. Another method counts and analyzes the Twitter followers of news outlets to estimate the political orientation of an outlet’s audience [ 111 ]. A seed group of Twitter accounts is manually rated according to their political orientation, e.g., conservative or liberal. This group is automatically expanded using those accounts’ followers. The method then estimates the political orientation of a news outlet’s audience by averaging the political orientation of the outlet’s followers in the expanded group of categorized accounts (cf. [ 98 , 117 , 220 ]).
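The averaging step of the follower-based approach can be sketched as follows; the account names, the numeric orientation scores, and the function name are hypothetical illustrations, not the actual implementation of [ 111 ].

```python
def estimate_outlet_orientation(outlet_followers, orientation_scores):
    """Average the known orientation scores (-1 = liberal, +1 =
    conservative) of an outlet's followers; followers missing from
    the expanded, categorized seed group are ignored."""
    scored = [orientation_scores[f] for f in outlet_followers
              if f in orientation_scores]
    if not scored:
        return None  # no rated followers, no estimate
    return sum(scored) / len(scored)

# Hypothetical expanded seed group and follower list
scores = {"@alice": -1.0, "@bob": 1.0, "@carol": -0.5, "@dave": 0.5}
followers = ["@alice", "@bob", "@carol", "@unrated"]
print(estimate_outlet_orientation(followers, scores))
```

In this invented example, the outlet’s audience leans slightly liberal because two of the three rated followers have negative scores.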

The news aggregator AllSides [ 8 ] shows users the most contrastive articles on a topic, e.g., left-leaning and right-leaning articles on the political spectrum. The system asks users to rate the spin of news outlets, e.g., after reading articles published by these outlets. To estimate the spin of an outlet, AllSides combines the feedback of users with expert knowledge provided by its staff. NewsCube 2.0 lets (expert) users collaboratively define and rate frames in related articles [ 277 ]. The frames are in turn presented to other users, e.g., a contrast view shows the most contrasting frames of one event. Users can then incrementally improve the quality of coding by refining existing frames.

Another method for news spin identification categorizes news articles on contentious news topics into two (opposing) groups by analyzing quotes and nearby entities [ 275 ]. The rationale of the approach is that articles portraying a similar perspective on a topic share more quotes, which may support the given perspective, than articles with different perspectives. The method extracts weighted triples representing who criticizes whom, where the weight depends on the importance of the triple, e.g., estimated by its position within the article (the earlier, the more important). The method measures the similarity of two articles by comparing their triples.
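The final comparison step might look like the following sketch, which computes a cosine-style similarity over weighted (critic, target) pairs; the triples, weights, and candidate names below are hypothetical, not data from [ 275 ].

```python
def triple_similarity(triples_a, triples_b):
    """Cosine-style similarity over weighted (critic, target) pairs,
    where earlier mentions received higher weights at extraction time."""
    keys = set(triples_a) | set(triples_b)
    dot = sum(triples_a.get(k, 0.0) * triples_b.get(k, 0.0) for k in keys)
    norm_a = sum(w * w for w in triples_a.values()) ** 0.5
    norm_b = sum(w * w for w in triples_b.values()) ** 0.5
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical "who criticizes whom" triples from two articles
a = {("Christie", "Rubio"): 1.0, ("Bush", "Trump"): 0.6}
b = {("Christie", "Rubio"): 0.8, ("Trump", "Bush"): 0.4}
print(round(triple_similarity(a, b), 2))
```

Articles sharing the same heavily weighted criticism pairs score close to 1, while articles with disjoint triples score 0.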

Other methods analyze frequencies and co-occurrences of terms to find frames in related articles and assign each article to one of the frames. For instance, one method clusters articles by measuring the similarity of two documents using the co-occurrences of the two documents’ most frequent terms [ 241 ]. The results of this rather simple method are then used for a manually conducted frame analysis. Hiérarchie uses recursive topic modeling to find topics and subtopics in tweets posted by users on a specific issue [ 325 ]. A radial treemap visualizes the extracted topics and subtopics. In the presented case study, users find and explore different theories on the disappearance of flight MH-370 discussed in tweets.
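The first, frequency-based measure [ 241 ] can be approximated by overlapping each document’s most frequent terms; the following is a simplified, hypothetical sketch using Jaccard overlap of top-term sets rather than the original co-occurrence measure.

```python
from collections import Counter

def top_terms(text, k=5):
    # crude whitespace tokenization; real systems would remove
    # stopwords and apply stemming or lemmatization
    return {t for t, _ in Counter(text.lower().split()).most_common(k)}

def frame_similarity(doc_a, doc_b, k=5):
    """Jaccard overlap of the two documents' most frequent terms."""
    a, b = top_terms(doc_a, k), top_terms(doc_b, k)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

print(frame_similarity("trump booed at debate",
                       "trump strong at debate"))  # -> 0.6
```

Clusters formed from such pairwise similarities would then be handed to a manually conducted frame analysis, as described above.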

Lastly, manually annotated information related to media bias, e.g., the overall spin of articles rated by users of AllSides or articles annotated by social scientists during frame analysis, can in our view serve as a basis when creating training datasets for machine learning . Other data that exploits the wisdom of the crowd might be incorporated as well, e.g., analyzing the Reddit media bias thread. However, one should carefully review the information for its characteristics and inherent biases, especially if crowdsourced.

In our view, the existence of the very concept of spin bias allows drawing two conclusions. First, media bias is a complex model of skewed news coverage with overlapping and partially contradicting definitions. While many instances of media bias fit into one of the other, more precisely defined forms of media bias in the news production and consumption process (see Sect. 2.2.3 ), some instances of bias do not. Likewise, such instances of bias may fit into other models from the social sciences that are concerned with differences in news coverage, such as the bias forms of coverage, gatekeeping, and statement (Sect. 2.2.3 briefly discusses other models of media bias), while other instances would not fit into such models. Second, we found that most of the approaches from computer science for identifying, or suitable for identifying, spin bias omit the research that has been conducted in the social sciences. Computer science approaches currently still address media bias as vaguely defined differences in news coverage and therefore stand to profit from prior research in the social sciences. In turn, there are few scalable approaches to the analysis of media bias in the social sciences, which significantly hampers progress in the field. We therefore see a strong prospect for collaborative research on automated approaches to the analysis of media bias across both disciplines.

2.3.9 Summary

Most automated approaches focus on analyzing vaguely defined “biases.” These biases can be technically significant but may often not represent meaningful slants of the news. In contrast, in social science research, media bias emerges from observing systematic tendencies of specific bias forms or means. For example, the news production process that we use in our literature review defines nine bias forms.

One reason for the previously mentioned lack of conclusive or meaningful results is that almost no automated approach aims to specifically find such individual bias forms. At the same time, however, we found that suitable automated techniques are available to aid in the analysis of the individual bias forms.

2.4 Reliability, Generalizability, and Evaluation

This section discusses how automated approaches for analyzing media bias should be evaluated. To that end, we first describe how social scientists measure the reliability and generalizability of studies on media bias.

The reliability and generalizability of manual annotation in the social sciences provide the benchmark for any automated approach. Best practices in social science research involve both the careful development and iterative refinement of underlying codebooks and the formal validation of inter-coder reliability. For example, as discussed in Sect. 2.2.4 , a smaller, careful inductive manual annotation aids in constructing the codebook. The main deductive analysis is then performed by a larger pool of coders, where the results of individual coders and their agreement on the assignment of codes can be systematically compared. Standard measures of inter-coder reliability, e.g., the widely used Krippendorff’s alpha [ 144 ], provide estimates of the reliability and robustness of the coding. Whether coding rules, and with these the quality of annotations, can be generalized beyond a specific case is usually not routinely analyzed because, given the significant effort required for manual annotation, the scope of such studies is usually limited to a specific question or context. Note, however, that the usual setup of a small inductive analysis, conducted on a subset of the data, implies that a codebook generated in this way can generalize to a larger corpus.
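For illustration, Krippendorff’s alpha for nominal data can be computed from a coincidence matrix as in the following minimal sketch; it assumes complete annotations (every coder rated every unit) and omits the missing-data handling and alternative distance metrics of full implementations.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """units: one list of values per unit, one value per coder.
    Assumes complete data (every coder rated every unit)."""
    o = Counter()   # coincidence matrix over ordered value pairs
    n = 0           # total number of pairable values
    for values in units:
        m = len(values)
        if m < 2:
            continue
        n += m
        for pair in permutations(values, 2):
            o[pair] += 1.0 / (m - 1)
    totals = Counter()                       # category marginals n_c
    for (c, _), w in o.items():
        totals[c] += w
    d_o = sum(w for (c, k), w in o.items() if c != k) / n
    d_e = sum(totals[c] * totals[k]
              for c, k in permutations(totals, 2)) / (n * (n - 1))
    if d_e == 0:  # only one category used: no disagreement possible
        return 1.0
    return 1.0 - d_o / d_e

# Two coders in perfect agreement on two units yield alpha = 1.0
print(krippendorff_alpha_nominal([["a", "a"], ["b", "b"]]))
```

Values near 1 indicate reliable coding; values near 0 indicate agreement no better than chance.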

Computer science approaches for the automated analysis of media bias stand to profit greatly from a broad adoption of their methods by researchers across a wider set of disciplines. The impact and usefulness of automated approaches for substantial cross-disciplinary analyses, however, hinge critically on two central questions. First, compared to manual methodologies, how reliable are automated approaches? Specifically, broad adoption of automated approaches in social science applications is only likely if the automated approaches identify at least close to the same instances of bias as manual annotation would.

Depending on which more or less subtle form of bias is analyzed, the results gained through manual annotation may represent a more or less difficult benchmark to beat. Especially in complex cases, manual annotation of individual items may systematically capture subtle instances relevant to the analysis question better than automated approaches. Note that, for example, no public annotated news dataset for sentiment analysis currently exists (see Sect. 3.4 ). The situation is similar for most of the applications reviewed in this chapter, i.e., there is currently a dearth of standard benchmark datasets. Meaningful validation would thus require, as a first step, the careful (and time-intensive) development of such datasets across a range of relevant contexts.

One way to counter the present lack of evaluation datasets is to not rely solely on manual content analysis for annotation. For simple annotation tasks, such as rating the subjective slant of a news picture, crowdsourcing can be a suitable alternative to content analysis. This procedure requires less effort than conducting a full content analysis, including creating a codebook and refining it until the inter-coder reliability (ICR) is sufficiently high (cf. [ 152 ]). One can also use other available data. For example, Recasens, Danescu-Niculescu-Mizil, and Jurafsky [ 297 ] use bias-related revisions from the Wikipedia edit history to retrieve presumably biased single-word phrases. The political slant classification of news articles and outlets crowdsourced by users of web services such as AllSides (see Sect. 2.3.8 ) may serve as another comparison baseline. As stated in Sect. 2.3.8 , before employing crowdsourced information, one should carefully review its characteristics and quality.

Another way to evaluate the performance of bias identification methods is to manually analyze the automatically extracted instances of media bias, e.g., through crowdsourcing or (typically fewer) specialized coders. However, evaluating the results of an automated approach this way decreases the comparability between approaches, since each approach must then again be evaluated manually in the same way. Generating annotated benchmark datasets, on the other hand, requires greater initial effort, but the results can then be reused to evaluate and compare multiple approaches. Footnote 7

The second central question is how well automated approaches generalize to the study of similar forms of bias in contexts other than those for which they were initially developed. This question pertains to the external validity of the developed approaches, i.e., is their performance dependent on a specific empirical or topical context? Out-of-sample performance could be tested against benchmark datasets not used for the initial evaluation; however, as emphasized before, such datasets have yet to be developed. Hence, systematically testing the performance of approaches across many contexts is likely infeasible for the near future simply because the cost of generating benchmark datasets is too high. Ultimately, it would be best practice for benchmark studies to establish more generally whether specific characteristics of news are related to the performance of the automated approaches developed.

2.5 Key Findings

News coverage strongly influences public opinion. While slanted news coverage is not harmful per se, systematically biased news coverage can negatively impact the public. Recent trends, such as social bots that automatically write news posts or the centralization of media outlet ownership, have the potential to further amplify the negative effects of biased news coverage. News consumers should be able to view different perspectives of the same news topic [ 252 ]. Unrestricted access to unbiased information is crucial for citizens to form their own views and make informed decisions [ 135 , 250 ], e.g., during elections. Since media bias has been, and continues to be, structurally inherent in news coverage [ 146 , 147 , 276 ], the detection and analysis of media bias is a topic of high societal and policy relevance.

Researchers from the social sciences have studied media bias over the past decades, resulting in a comprehensive set of methodologies, such as content analysis and frame analysis, as well as models to describe media bias. One of these models, the news production process , describes how journalists turn events into news articles. The process defines nine forms of media bias that can occur during the three phases of news production: In the first phase, “gathering of information,” the bias forms are (1) event selection, (2) source selection, and (3) commission and omission of information. In the second phase, “writing,” the bias forms are (4) labeling and word choice. In the third phase, “editing,” the bias forms are (5) story placement, (6) size allocation, (7) picture selection, and (8) picture explanation. Lastly, bias by (9) spin is a form of media bias that represents the overall bias of a news article and essentially combines the other forms of bias, including minor forms not defined specifically by the news production and consumption process.

For each of the forms of media bias, we discussed exemplary approaches applied in the social sciences and described the automated methods from computer science that have been used, or could best be used, to address the particular form of bias. We summarize the findings of our review of the status quo as follows:

Only a few approaches in computer science address the analysis of media bias. The majority of these approaches analyze media bias from the perspective of regular news consumers and neglect both the approaches and the models that have already been developed in the social sciences. In many cases, the underlying models of media bias are too simplistic, and their results, when compared to models and results of research in the social sciences, do not provide additional insights.

The majority of content analyses in the social sciences do not employ state-of-the-art methods for automated text analysis. As a result, the manual content analysis approaches conducted by social scientists require exacting and very time-consuming effort, as well as significant expertise and experience. This severely limits the scope of what social scientists can study and has significantly hampered progress in the field.

Thus, there is, in our view, much potential for interdisciplinary research on media bias among computer scientists and social scientists. Automated approaches are available for each of the nine forms of media bias that we discussed. On the one hand, methodologies and models of media bias from the social sciences can help make automated approaches more effective. On the other hand, the development of automated methods to identify instances of specific forms of media bias can make content analysis in the social sciences more efficient by automating more tasks.

Media bias analysis is a rather young research topic within computer science, particularly when compared with the social sciences, where the first studies on media bias were published more than 70 years ago [ 172 , 377 ]. Our first finding (F1) is that most of the reviewed computer science approaches treat media bias vaguely and view it only as “differences of [news] coverage” [ 278 ], “diverse opinions” [ 251 ], or “topic diversity” [ 252 ]. The majority of the current approaches neglect the state of the art developed in the social sciences. They do not make use of models describing different forms of media bias or how biased news coverage emerges in the news production and consumption process [ 14 , 276 ] (Sect. 2.2.3 ). Also, approaches in computer science do not employ methods to analyze the specific forms of bias, such as content analysis [ 64 ] and frame analysis [ 368 ] (Sect. 2.2.4 ). Consequently, many approaches in computer science are limited in their capability for identifying instances of media bias. For instance, matrix-based news aggregation (MNA) organizes articles and topics in a matrix to facilitate showing differences in international news topics, but the approach can neither determine whether there are actual differences, nor can MNA enforce finding differences [ 129 ]. Likewise, Hiérarchie finds subtopics in news posts that may or may not refer to differences caused by media bias [ 325 ]. To overcome the limitations in identifying bias, some approaches, such as NewsCube 2.0 [ 277 ] and AllSides (Sect. 2.3.8 ), outsource the task of identifying media bias to users, e.g., by asking users to manually rate the slant of news articles.

Content analysis and frame analysis both require significant manual effort and expertise (F2). Especially time-intensive are the tasks of systematic screening and subsequent annotation of texts. Such tasks can currently only be performed by human coders [ 64 , 368 ]. In our view, the execution of these tasks cannot currently be improved significantly by employing automated text analysis methods due to the lack of mature methods capable of identifying specific instances of media bias, which follows from F1. This limitation, however, may be revised once interdisciplinary research has resulted in more advanced automated methods. Other tasks, such as data gathering or searching for relevant documents and phrases, are already supported by basic (semi-)automated methods and tools, such as content analysis software [ 215 ]. However, the full potential of the state of the art in computer science is clearly not yet being exploited. The employed techniques, e.g., keyword-based text matching to find relevant documents [ 336 ] or frequency-based extraction of representative terms to find patterns [ 215 ], are rather simple compared to state-of-the-art methods for text analysis. Few of the reviewed tools used by researchers in the social sciences employ methods proven effective in natural language processing, such as resolution of coreferences or synonyms, or finding related articles using an event-based search approach.

In our view, combining the expertise of the social sciences and computer science opens up valuable opportunities for interdisciplinary research (F3). Reliable models of media bias and manual approaches for the detection of media bias can be combined with methods for automated data analysis, in particular with text analysis and natural language processing approaches. NewsCube [ 276 ], for instance, extracts so-called aspects from news articles, which correspond to the frames defined by social scientists [ 159 ]. Users of NewsCube became more aware of the different perspectives contained in news coverage on specific topics than users of Google News. In this chapter, we showed that promising automated methods from computer science are available for all forms of media bias as defined by the news production and consumption process (see Sect. 2.3 ). For instance, studies concerned with bias by source selection or the commission and omission of information investigate how information is reused in news coverage [ 98 , 117 , 120 ]. Similar to these studies, methods from plagiarism detection aim to identify instances of information reuse in a set of documents, and these methods yield reliable results for plagiarism with sufficient textual similarity [ 89 , 179 ]. Finally, recent advancements in text analysis, particularly word embeddings [ 197 ] and deep learning [ 198 ], open a promising area of research on media bias. Thus far, few studies use word embeddings and deep learning to analyze media bias in news coverage. However, these techniques have proven very successful in various related problems (cf. [ 5 , 191 , 306 , 311 ]), which lets us anticipate that the majority of the textual bias forms could be addressed effectively with such approaches.
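As a simple illustration of the plagiarism-detection perspective on information reuse, the following sketch measures what share of one article’s word n-grams reappears in another (a containment measure); the sentences are invented examples, and real plagiarism detectors use far more robust fingerprinting.

```python
def ngram_set(text, n=3):
    # break a text into overlapping word n-grams
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def reuse_score(doc_a, doc_b, n=3):
    """Share of doc_a's word n-grams that also occur in doc_b,
    i.e., a containment measure of textual reuse."""
    a, b = ngram_set(doc_a, n), ngram_set(doc_b, n)
    if not a:
        return 0.0
    return len(a & b) / len(a)

print(reuse_score("the senator was booed by the audience",
                  "reports say the senator was booed loudly"))
```

A high score suggests that one article reuses wording from the other, e.g., when both copy from the same press agency release.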

We believe that interdisciplinary research on media bias can result in three main benefits. First, automated approaches for analyzing media bias will become more effective and more broadly applicable, since they build on the substantial, theoretical expertise that already exists in the social sciences. Second, content analyses in the social sciences will become more efficient, since more tasks can be automated or supported by automated methods from computer science. Finally, we argue that news consumers will benefit from improved automated methods for identifying media bias, since the methods can be used by news aggregators to detect and visualize the occurrence of potential media bias in real time.

2.6 Practical View on the Research Gap: A Real-World Example

This section practically demonstrates the implications of the literature review’s findings using a real-world example of news coverage and consumption.

Suppose you are reading the news. When viewing the coverage of an event, e.g., in your favorite news aggregator, or a single article reporting on the event, e.g., on the website of your favorite news outlet, you may wonder whether there are other perspectives on the event. What information are you missing because it is not mentioned in the articles you viewed or read? Mapping these questions to the terminology introduced earlier, the objective in this scenario is to efficiently and effectively get an overview of all the major perspectives present in the media. Efficiency is vital since newsreaders typically have only limited time to inform themselves on current events. While this example entails only one event, newsreaders are interested in multiple events, further limiting the time available for a single event. Effectiveness refers to understanding distinct and meaningful perspectives that help determine whether one already has a comprehensive overview of the coverage or if and which articles may offer alternative interpretations or additional information.

Table 2.2 shows headlines of news articles reporting on the Republican Party debate during the US presidential primaries in New Hampshire hosted by ABC News on February 6, 2016. We selected the articles using the following criteria: they had to primarily report on the event and be published by a popular online US news outlet Footnote 8 on the day of the event or the day after. This way, we retrieved more than 30 articles. Afterward, we conducted an inductive frame analysis (Sect. 2.2.4.2 ) to get a comprehensive overview of the content and perspectives present in the event coverage. For the sake of simplicity in this example, we selected eight articles that represented all major perspectives with only minor differences between the articles. In daily news consumption, the eight articles could, for example, be the results of an online search for coverage of the event or be shown in a news aggregator or another news application. Note that our pre-selection of articles already represents an unrealistic advantage with respect to the example’s objective compared to regular news consumption, because the article set is small while still fully representing the coverage’s substantial frames.

Interactive experiment

Look at the headlines in Table 2.2 . The headlines are taken from news articles that report on a debate during the 2016 presidential primaries. Estimate how many major perspectives there are in the event coverage on the debate. Think of a perspective as a distinct viewpoint on the debate that is the most prominent viewpoint common to one or more articles.

Next, decide for each article which perspective it has on the event.

You can try to increase the “accuracy” of your results by looking at further information, such as the articles’ outlets, their political orientation (Table 2.2 ), or the articles’ full text (Appendix A.1). Please write down your results for each article and compare them with those presented in the following.

Manual Frame Analysis

The previously mentioned frame analysis yielded three frames, Footnote 9 which are shown in the last column (“Frame”) for each article (“ID”) in Table 2.3 . Frame F1 occurs in a single article (ID 2, with political orientation center), which is the only article that was continuously updated during the event to contain up-to-date information. In contrast to the other frames and articles, F1 consists primarily of quotes by the candidates, mostly about themselves. The frame thus portrays most candidates as they portrayed themselves in the debate, i.e., positively. There is not much commentary or assessment by journalists in this frame.

Common to much coverage on the event, and thus also common to the two remaining frames, is the prominence of three candidates. Chris Christie is portrayed as rather strong, and Marco Rubio as weak, being a target of verbal attacks by Christie and the other candidates. Also common to most articles reporting on the debate is that they prominently or often report on Donald Trump. At the time of the event, he generally received particular media interest, e.g., because he had boycotted the previous debate. As such, Trump is also frequently mentioned in the remaining articles of the set and serves as a distinguishing factor for the two remaining frames. Articles of frame F2 portray Trump rather negatively. Articles of F2 mention, for example, that Trump was “booed” by the audience (0, left), that Trump was accused “of taking advantage of an elderly woman” (3, center), and that “Trump was hit hard by Bush” (6, right). In contrast, articles of frame F3 portray Trump primarily positively, e.g., that “he seemed to do well enough to possibly win” (4, center), that “he was unwaveringly in charge” (7, right), that “Trump was measured and thoughtful” (7, right), and that “it is easy to see the Trump train getting on a roll” (1, left).

We use the results of the manual frame analysis as the ground truth since the technique represents one of the standards in social science research on media bias.

Means for Bias-Sensitive News Consumption

In addition to frame analysis, we tested three means to identify the articles’ perspectives. These means represent practices suitable for daily news consumption as well as automated techniques. Table 2.3 shows the perspectives assigned to individual articles by the approaches. The column “Headline” represents a means applied by many news consumers due to its high efficiency, i.e., determining the content of an article by its headline. Specifically, the column contains the author’s results of the previous interactive experiment, where H1 represents a perspective Footnote 10 that portrays Rubio negatively. Using as much information as available in the headlines, we identified two sub-perspectives of H1, in which additionally Christie and Bush are portrayed positively (H1a) or Trump is portrayed positively (H1b). H2 represents an “anti-Trump” perspective, and in perspective H3, all candidates and especially Rubio are portrayed negatively. Following the previous person-centered perspective categorization, two headlines (articles 1 and 2) could not be assigned to a meaningful perspective. Footnote 11 When comparing these headline-implied perspectives with the frames in the right column, which were deduced by carefully analyzing the articles’ full content, the lack of overall coherence between the two directly indicates that headlines do not allow for reliable estimation of an article’s slant.

Using the political orientation of the articles’ outlets to determine the articles’ potential slant is another means [ 8 ] for bias identification (column “Political”). Employing the left-right dichotomy is fast and often also effective when analyzing political discourse, even more so in polarized media landscapes such as in the USA [ 395 ]. However, the lack of coherence between the perspectives implied by the outlets’ political orientation and the frames shown in Table 2.3 highlights that this approach is superficial and its results are inconclusive. While employing the political orientation can increase the visibility of slants, it cannot reliably identify an article’s slant. In the example, there are major differences even across articles that have the same perspective according to this means.

The clustering approach (column “Clustering”), albeit simply using affinity propagation [ 91 ] on word embeddings, Footnote 12 is the only approach that determines the previously mentioned difference of article 2, the only article with frame F1, compared to all others. Otherwise, however, the technique yields inconclusive results, e.g., a large group of articles (C2) that entails articles from across the political spectrum and covers both remaining frames. The results of this simple approach are representative of automated approaches for bias identification, which analyze bias, for example, as vaguely defined “topic diversity” [ 252 ] or “differences in coverage” [ 278 ], as shown in the literature review. Other technical means may even amplify newsreaders’ own biases, e.g., Google News, Facebook, and other news aggregators or channels learn from users’ preferences and show primarily those news items that are to the users’ liking or interest. Footnote 13
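The clustering step can be sketched roughly as follows; as an assumption-laden stand-in, scikit-learn’s affinity propagation is applied to TF-IDF vectors of invented headlines rather than to the spaCy embeddings and actual articles used for the analysis above.

```python
from sklearn.cluster import AffinityPropagation
from sklearn.feature_extraction.text import TfidfVectorizer

# Shortened, invented headlines standing in for the articles of Table 2.2
headlines = [
    "Rubio stumbles under attacks from Christie in debate",
    "Christie hammers Rubio over repeated talking points",
    "Trump booed by debate audience in New Hampshire",
    "Trump measured and in charge during GOP debate",
]

# TF-IDF vectors instead of word embeddings; densified for clustering
X = TfidfVectorizer().fit_transform(headlines).toarray()
labels = AffinityPropagation(random_state=0).fit_predict(X)
for label, headline in zip(labels, headlines):
    print(label, headline)
```

As in the example above, such surface-level clusters may or may not align with the articles’ actual frames, which is precisely the limitation the chapter highlights.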

Of course, the generalizability of this simple example is limited by various factors. For example, the inductive frame analysis was conducted only by one person, likely increasing the degree of subjectivity. In frame analyses, researchers in the social sciences typically rely on the annotations of multiple persons. At least during test phases, the annotations are compared and discussed to avoid subjectivity or achieve a known level of subjectivity that is coherent across the annotations (Sect. 2.2.4 ).

However, the example also highlights two key findings of our literature review. Whether automated or manual, current means are either unreliable, suffering from superficial methodology and results, or reliable but requiring high manual effort. There is no coherence between the perspectives determined by the three fast approaches and the results of the frame analysis. There is not even coherence when comparing any pair of the fast methods.

If you participated in the interactive experiment, your findings might differ from those shown in Table 2.3 , depending on which information you analyzed. Examining further information beyond the headlines alone may have yielded a more comprehensive understanding of the news coverage but came at an additional investment of time and effort. This effort increases further in regular news consumption since newsreaders first have to find relevant articles on an event. Ultimately, critical assessment of the news takes too much time to be applied during regular news consumption. However, as automated approaches are unreliable, such manual practices currently present the only reliable means to analyze media bias.

It is this gap that the thesis at hand aims to address.

2.7 Summary of the Chapter

This chapter reviewed the issue of media bias and gave an interdisciplinary overview on the topic, particularly on methods and tools used to analyze media bias. The comparison of prior work in computer science, political science, and related disciplines revealed differences. Media bias has been studied extensively in the social sciences, whereas it is a relatively young research subject in computer science and other disciplines concerned with devising automated approaches. Consequently, while many automated methods offer effortless, scalable analysis, they yield inconclusive or less substantial results than methods used in the social sciences. Conversely, social science methods are practice-proven and effective but require much effort because researchers have to conduct them manually.

The chapter showed that the work conducted in each of these disciplines could benefit from incorporating knowledge and methods established in the others. Thus, while this thesis focuses on computer science methodology, our general research principle is to make use of social science expertise wherever possible and feasible. Chapter 3 discusses how we can effectively address our research question in the context of the state of the art in computer science and the social sciences.

The paragraphs about news aggregation have been partially adapted from [ 129 ].

https://en.wikinews.org/wiki/ .

In Sect. 3.5 , we propose a system for crawling and extracting news articles.

https://www.reddit.com/ .

https://www.reddit.com/r/MediaBias/ .

The SemEval series [ 5 ] is a representative example from computer science in which comprehensive evaluation datasets are created with high initial effort, afterward allowing a quantitative comparison of the performance of multiple approaches.

An outlet was defined as being “popular” if it was contained in the list of “top outlets” shown on https://www.allsides.com/media-bias/media-bias-ratings .

Frame analyses are task-specific, and the resulting frames may depend on the data and analysis question at hand. Due to the articles’ focus on persons involved in the debate, we centered our framing categories on these persons.

We use the term “perspective” to highlight that this classification resulted from applying a practice or technique. In contrast to a frame, a perspective may, however, not fully or meaningfully represent an article’s content and framing.

However, in another categorization scheme, the headlines could be interpreted as a perspective giving an overview of the event.

The embeddings were derived using the largest model “en_core_web_lg” of the natural language processing toolkit spaCy (v3.0). Source: https://spacy.io/usage/v3 .
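By default, spaCy derives a document's embedding as the average of its token vectors, obtained in practice via `nlp(text).vector`. The following plain-Python sketch illustrates that averaging scheme together with cosine similarity; the two-dimensional vectors are toy values for illustration, not real embeddings:

```python
import math

def doc_vector(token_vectors):
    """Average the token vectors to obtain a document vector,
    mirroring spaCy's default Doc.vector behavior."""
    dim = len(token_vectors[0])
    n = len(token_vectors)
    return [sum(vec[i] for vec in token_vectors) / n for i in range(dim)]

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy token vectors for two short headlines (illustrative values only).
headline_a = doc_vector([[1.0, 0.0], [0.0, 1.0]])  # -> [0.5, 0.5]
headline_b = doc_vector([[1.0, 1.0]])              # -> [1.0, 1.0]
similarity = cosine(headline_a, headline_b)        # -> 1.0 (same direction)
```

With real 300-dimensional en_core_web_lg vectors, the same cosine computation underlies spaCy's `Doc.similarity` method.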

A typical example highlighting the filter bubble issue occurred when compiling the set of articles used in this example. Google News and Google Search presented the author with articles from only two political orientations, even when using the browser’s privacy mode. This could only be overcome by using search engines that did not adapt search results to their users, such as DuckDuckGo.

Sofiane Abbar et al. “Real-time recommendation of diverse related articles”. In: Proceedings of the 22nd international conference on World Wide Web . ACM. 2013, pp. 1–12. doi : 10.1145/2488388.2488390. url : https://doi.org/10.1145/2488388.2488390 .

Eneko Agirre et al. “SemEval-2016 Task 1: Semantic Textual Similarity, Monolingual and Cross-Lingual Evaluation”. In: Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016) . Stroudsburg, PA, USA: Association for Computational Linguistics, 2016, pp. 497–511. isbn : 978-1-941643-95-2. doi : https://doi.org/10.18653/v1/S16-1081 . url : http://aclweb.org/anthology/S16-1081 .

Phyllis F. Agran, Dawn N. Castillo, and Dianne G. Winn. “Limitations of data compiled from police reports on pediatric pedestrian and bicycle motor vehicle events”. In: Accident Analysis and Prevention 22.4 (1990), pp. 361–370. issn : 00014575. doi : https://doi.org/10.1016/0001-4575(90)90051-L .

AllSides.com. AllSides - balanced news. 2021. url : https://www.allsides.com/unbiased-balanced-news (visited on 02/24/2021).

Amanda Amos and Margaretha Haglund. “From social taboo to “torch of freedom”: the marketing of cigarettes to women”. In: Tobacco control 9.1 (2000), pp. 3–8. doi : 10.1136/tc.9.1.3. url : https://doi.org/10.1136/tc.9.1.3 .

Jisun An et al. “Visualizing media bias through Twitter”. In: Proc. ICWSM SocMedNews Workshop. 2012. url : https://www.aaai.org/ocs/index.php/ICWSM/ICWSM12/paper/view/4775 .

Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. “SentiWordNet 3.0: An Enhanced Lexical Resource for Sentiment Analysis and Opinion Mining.” In: Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10) . Vol. 10. Valletta, Malta: European Language Resources Association (ELRA), 2010, pp. 2200–2204. url : https://www.aclweb.org/anthology/L10-1531/ .

Brent H Baker, Tim Graham, and Steve Kaminsky. How to identify, expose & correct liberal media bias . Alexandria, VA: Media Research Center, 1994. isbn : 978-0962734823.

Eytan Bakshy, Solomon Messing, and Lada A Adamic. “Exposure to ideologically diverse news and opinion on Facebook”. In: Science 348.6239 (2015), pp. 1130–1132. doi : https://doi.org/10.1126/science.aaa1160 . url : https://science.sciencemag.org/content/348/6239/1130 .

Alexandra Balahur et al. “Sentiment analysis in the news”. In: Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10) . Valletta, Malta: European Language Resources Association (ELRA), 2010. url : https://arxiv.org/abs/1309.6202 .

Pablo Barberá et al. “Tweeting From Left to Right”. In: Psychological Science 26.10 (Oct. 2015), pp. 1531–1542. issn : 0956-7976. doi : https://doi.org/10.1177/0956797615594620 . url : http://journals.sagepub.com/doi/10.1177/0956797615594620 .

David P Baron. “Persistent media bias”. In: Journal of Public Economics 90.1 (2006), pp. 1–36. doi : https://doi.org/10.1016/j.jpubeco.2004.10.006 .

Dan Bernhardt, Stefan Krasa, and Mattias Polborn. “Political polarization and the electoral effects of media bias”. In: Journal of Public Economics 92.5-6 (June 2008), pp. 1092–1104. issn : 00472727. doi : https://doi.org/10.1016/j.jpubeco.2008.01.006 . url : https://linkinghub.elsevier.com/retrieve/pii/S0047272708000236 .

Timothy Besley and Andrea Prat. “Handcuffs for the Grabbing Hand? Media Capture and Government Accountability”. In: American Economic Review 96.3 (May 2006), pp. 720–736. issn : 0002-8282. doi : https://doi.org/10.1257/aer.96.3.720 . url : https://pubs.aeaweb.org/doi/10.1257/aer.96.3.720 .

Clive Best et al. Europe Media Monitor - System Description . Tech. rep. December. 2005, pp. 1–57. url : https://publications.europa.eu/flexpaper/common/view.jsp?doc=c0d6bb93-7ec4-496f-b857-b7fe9bc33d19.en.PDF.pdf&format=pdf&page=10 .

Plaban Kumar Bhowmick. “Reader Perspective Emotion Analysis in Text through Ensemble based Multi-Label Classification Framework”. In: Computer and Information Science 2.4 (Oct. 2009), pp. 64–74. issn : 1913-8997. doi : 10.5539/cis.v2n4p64. url : http://www.ccsenet.org/journal/index.php/cis/article/view/3872 .

David M. Blei. “Probabilistic topic models”. In: Communications of the ACM 55.4 (2012), pp. 77–84. doi : https://doi.org/10.1145/2133806.2133826 .

Pablo J. Boczkowski. “The Processes of Adopting Multimedia and Interactivity in Three Online Newsrooms”. In: Journal of Communication 54.2 (June 2004), pp. 197–213. issn : 0021-9916. doi : https://doi.org/10.1093/joc/54.2.197 . url : http://joc.oupjournals.org/cgi/doi/10.1093/joc/54.2.197 .

Tolga Bolukbasi et al. “Man is to computer programmer as woman is to homemaker? Debiasing word embeddings”. In: Advances in Neural Information Processing Systems . 2016, pp. 4349–4357. url : https://arxiv.org/abs/1607.06520 .

Dylan Bourgeois, Jérémie Rappaz, and Karl Aberer. “Selection Bias in News Coverage: Learning it, Fighting it”. In: Companion of the The Web Conference 2018 on The Web Conference 2018 - WWW ’18 . 2018. isbn : 9781450356404. doi : https://doi.org/10.1145/3184558.3188724 .

Tomáš Brychcín and Lukáš Svoboda. “UWB at SemEval-2016 Task 1: Semantic Textual Similarity using Lexical, Syntactic, and Semantic Information”. In: Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016) . 2016. isbn : 9781941643952. doi : https://doi.org/10.18653/v1/S16-1089 .

Hans Jürgen Bucher and Peter Schumacher. “The relevance of attention for selecting news content. An eye-tracking study on attention patterns in the reception of print and online media”. In: Communications 31.3 (2006), pp. 347–368. issn : 03412059. doi : https://doi.org/10.1515/COMMUN.2006.022 .

C Bui. “How online gatekeepers guard our view: News portals’ inclusion and ranking of media and events”. In: Global Media Journal 9.16 (2010), pp. 1–41. url : https://www.globalmediajournal.com/peer-reviewed/howonline-gatekeepers-guard-our-viewnews-portals-inclusion-and-rankingof-media-and-events-35232.html .

Business Insider. These 6 Corporations Control 90% Of The Media In America . 2014. url : http://www.businessinsider.com/these-6-corporationscontrol-90-of-the-media-in-america-2012-6 (visited on 01/13/2021).

Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. “Semantics derived automatically from language corpora contain human-like biases”. In: Science (2017). issn : 10959203. doi : https://doi.org/10.1126/science.aal4230 . arXiv: 1608.07187.

Joseph N. Cappella and Kathleen Hall Jamieson. Spiral of cynicism: The press and the public good . Oxford University Press on Demand, 1997.

Darrell Christian et al. The Associated Press Stylebook and Briefing on Media Law . Basic Books, 2019. isbn : 978-1541699892.

Nicole S. Cohen. “At Work in the Digital Newsroom”. In: Digital Journalism 7.5 (May 2019), pp. 571–591. issn : 2167-0811. doi : https://doi.org/10.1080/21670811.2017.1419821 . url : https://www.tandfonline.com/doi/full/10.1080/21670811.2017.1419821 .

Steven R Corman et al. “Studying Complex Discursive Systems.” In: Human communication research 28.2 (2002), pp. 157–206.

Jackie Crossman. Aussies Turn To Social Media For News Despite Not Trusting It As Much . Nov. 2014. url : https://www.bandt.com.au/aussies-turnsocial-media-news-despite-trusting-much/ (visited on 12/11/2020).

Christian S. Czymara and Marijn van Klingeren. “New perspective? Comparing frame occurrence in online and traditional news media reporting on Europe’s “Migration Crisis””. In: Communications (Apr. 2021), pp. 1–27. issn : 1613-4087. doi : https://doi.org/10.1515/commun-2019-0188 . url : https://www.degruyter.com/document/doi/10.1515/commun-2019-0188/html .

Dave D’Alessio and Mike Allen. “Media Bias in Presidential Elections: A Meta-Analysis”. In: Journal of Communication 50.4 (Dec. 2000), pp. 133–156. doi : https://doi.org/10.1111/j.1460-2466.2000.tb02866.x . url : http://doi.wiley.com/10.1111/j.1460-2466.2000.tb02866.x .

Paul D’Angelo and Jim A Kuypers. Doing news framing analysis: Empirical and theoretical perspectives . Routledge, 2010.

Murray S. Davis and Erving Goffman. “Frame Analysis: An Essay on the Organization of Experience.” In: Contemporary Sociology 4.6 (Nov. 1975), p. 599. issn : 00943061. doi : https://doi.org/10.2307/2064021 . url : http://www.jstor.org/stable/2064021?origin=crossref .

Claes H De Vreese. “News framing: Theory and typology”. In: Information design journal and document design 13.1 (2005), pp. 51–62.

Lizzie Dearden. The fake refugee images that are being used to distort public opinion on asylum seekers . Sept. 2015. url : http://www.independent.co.uk/news/world/europe/the-fake-refugee-images-that-are-being-usedto-distort-public-opinion-on-asylum-seekers-10503703.html (visited on 02/18/2020).

Stefano DellaVigna and Ethan Kaplan. The Fox News Effect: Media Bias and Voting. Tech. rep. 3. Cambridge, MA: National Bureau of Economic Research, Apr. 2006, pp. 1187–1234. doi : https://doi.org/10.3386/w12169 . url : http://www.nber.org/papers/w12169.pdf .

P. M. DeMarzo, Dimitri Vayanos, and Jeffrey Zwiebel. “Persuasion Bias, Social Influence, and Unidimensional Opinions”. In: The Quarterly Journal of Economics 118.3 (Aug. 2003), pp. 909-968. issn : 0033-5533. doi : 10.1162/00335530360698469. url : https://doi.org/10.1162/00335530360698469 .

James N Druckman and Michael Parkin. “The impact of media bias: How editorial slant affects voters”. In: Journal of Politics 67.4 (2005), pp. 1030–1049.

Robert M. Entman. “Framing: Toward Clarification of a Fractured Paradigm”. In: Journal of Communication 43.4 (Dec. 1993), pp. 51–58. issn : 0021-9916. doi : https://doi.org/10.1111/j.1460-2466.1993.tb01304.x . url : https://academic.oup.com/joc/article/43/4/51-58/4160153 .

Robert M. Entman. “Framing Bias: Media in the Distribution of Power”. In: Journal of Communication 57.1 (Mar. 2007), pp. 163–173. issn : 00219916. doi : https://doi.org/10.1111/j.1460-2466.2006.00336.x . url : https://academic.oup.com/joc/article/57/1/163-173/4102665 .

Frank Esser. “Editorial Structures and Work Principles in British and German Newsrooms”. In: European Journal of Communication 13.3 (Sept. 1998), pp. 375–405. issn : 0267-3231. doi : https://doi.org/10.1177/0267323198013003004 . arXiv: 0803973233. url : http://journals.sagepub.com/doi/10.1177/0267323198013003004 .

Frank Esser, Carsten Reinemann, and David Fan. “Spin Doctors in the United States, Great Britain, and Germany: Metacommunication about Media Manipulation”. In: The Harvard International Journal of Press/Politics 6.1 (2001), pp. 16–45.

James Estrin. The Real Story About the Wrong Photos in #BringBackOurGirls . May 2014. url : http://lens.blogs.nytimes.com/2014/05/08/thereal-story-about-the-wrong-photos-in-bringbackourgirls/ (visited on 02/18/2020).

David Kirk Evans, Judith L. Klavans, and Kathleen R. McKeown. “Columbia Newsblaster”. In: Demonstration Papers at HLT-NAACL 2004 on XX - HLT-NAACL ’04 . Morristown, NJ, USA: Association for Computational Linguistics, 2004, pp. 1–4. doi : https://doi.org/10.3115/1614025.1614026 . url : http://portal.acm.org/citation.cfm?doid=1614025.1614026 .

Facebook. Company Info. 2021. url : http://web.archive.org/web/20210210223947/https://about.fb.com/company-info/ (visited on 02/12/2021).

Lukas Feick, Karsten Donnay, and Katherine T. McCabe. “The Subconscious Effect of Subtle Media Bias on Perceptions of Terrorism”. In: American Politics Research 49.3 (May 2021), pp. 313–318. issn : 1532-673X. doi : https://doi.org/10.1177/1532673X20972105 . url : http://journals.sagepub.com/doi/10.1177/1532673X20972105 .

Tomáš Foltýnek, Norman Meuschke, and Bela Gipp. “Academic Plagiarism Detection”. In: ACM Computing Surveys 52.6 (Jan. 2020), pp. 1–42. issn : 0360-0300. doi : https://doi.org/10.1145/3345317 . url : https://dl.acm.org/doi/10.1145/3345317 .

Brendan Frey and Delbert Dueck. “Clustering by Passing Messages Between Data Points”. In: Science 315.5814 (Feb. 2007), pp. 972–976. doi : https://doi.org/10.1126/science.1136800 .

Dieter Frey. “Recent research on selective exposure to information”. In: Advances in experimental social psychology 19 (1986), pp. 41–80. url : https://doi.org/10.1016/S0065-2601(08)60212-9 .

Matthew Gentzkow, Edward Glaeser, and Claudia Goldin. The Rise of the Fourth Estate: How Newspapers Became Informative and Why It Mattered. Tech. rep. Cambridge, MA: National Bureau of Economic Research, Sept. 2004, pp. 187–230. doi : https://doi.org/10.3386/w10791 . url : http://www.nber.org/papers/w10791.pdf .

Matthew Gentzkow and Jesse M Shapiro. “What drives media slant? Evidence from US daily newspapers”. In: Econometrica 78.1 (2010), pp. 35–71. url : https://web.stanford.edu/~gentzkow/research/biasmeas.pdf .

Matthew Gentzkow and Jesse M. Shapiro. “Media Bias and Reputation”. In: Journal of Political Economy 114.2 (Apr. 2006), pp. 280–316. issn : 0022-3808. doi : 10.1086/499414. url : https://doi.org/10.1086/499414 .

Alan S Gerber, Dean Karlan, and Daniel Bergan. “Does the media matter? A field experiment measuring the effect of newspapers on voting behavior and political opinions”. In: American Economic Journal: Applied Economics 1.2 (2009), pp. 35–52.

Martin Gilens and Craig Hertzman. “Corporate ownership and news bias: Newspaper coverage of the 1996 Telecommunications Act”. In: The Journal of Politics 62.02 (2000), pp. 369–386.

Bela Gipp. Citation-based Plagiarism Detection. Wiesbaden: Springer FachmedienWiesbaden, 2014. isbn : 978-3-658-06393-1. doi : https://doi.org/10.1007/978-3-658-06394-8 . url : http://link.springer.com/10.1007/978-3-658-06394-8 .

Bela Gipp, Adriana Taylor, and Jöran Beel. “Link Proximity Analysis - Clustering Websites by Examining Link Proximity”. In: 2010, pp. 449–452. doi : https://doi.org/10.1007/978-3-642-15464-5_54 . url : http://link.springer.com/10.1007/978-3-642-15464-5_54 .

Namrata Godbole, Manja Srinivasaiah, and Steven Skiena. “Large-Scale Sentiment Analysis for News and Blogs”. In: Proceedings of the International Conference on Weblogs and Social Media (ICWSM) 7 (2007), pp. 219–222.

Jennifer Golbeck and Derek Hansen. “Computing political preference among twitter followers”. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems . ACM. 2011, pp. 1105–1108.

Gregory Grefenstette et al. “Coupling Niche Browsers and Affect Analysis for an Opinion Mining Application”. In: Coupling Approaches, Coupling Media and Coupling Languages for Information Retrieval . Vaucluse, France: Le Centre de Hautes Etudes Internationales D’Informatique Documentaire, 2004, pp. 186–194. url : https://dl.acm.org/doi/abs/10.5555/2816272.2816290 .

Tim Groseclose and Jeffrey Milyo. “A Measure of Media Bias”. In: The Quarterly Journal of Economics 120.4 (Nov. 2005), pp. 1191–1237. issn : 0033-5533. doi : https://doi.org/10.1162/003355305775097542 . url : http://dx.doi.org/10.1162/003355305775097542 .

Jeff Gruenewald, Jesenia Pizarro, and Steven M. Chermak. “Race, gender, and the newsworthiness of homicide incidents”. In: Journal of Criminal Justice 37.3 (May 2009), pp. 262–272. issn: 00472352. doi: https://doi.org/10.1016/j.jcrimjus.2009.04.006 . url : https://linkinghub.elsevier.com/retrieve/pii/S0047235209000440 .

Joachim W H Haes. “September 11 in Germany and the United States: Reporting, reception, and interpretation”. In: Crisis Communications: Lessons from September 11 (2003), pp. 125–132.

Felix Hamborg, Norman Meuschke, and Bela Gipp. “Bias-aware news analysis using matrix-based news aggregation”. In: International Journal on Digital Libraries 21.2 (June 2020), pp. 129–147. issn : 1432–5012. doi : https://doi.org/10.1007/s00799-018-0239-9 . url : http://link.springer.com/10.1007/s00799-018-0239-9 .

Felix Hamborg, Norman Meuschke, and Bela Gipp. “Matrix-Based News Aggregation: Exploring Different News Perspectives”. In: 2017 ACM/IEEE Joint Conference on Digital Libraries (JCDL) . IEEE, June 2017, pp. 1–10. isbn : 978-1-5386-3861-3. doi : https://doi.org/10.1109/JCDL.2017.7991561 . url : http://ieeexplore.ieee.org/document/7991561/ .

Felix Hamborg et al. “Identification and Analysis of Media Bias in News Articles”. In: 15th International Symposium of Information Science (ISI 2017) . Berlin, Germany: Verlag Werner Hülsbusch, 2017, pp. 224–236. isbn : 978-3-86488-117-6.

Felix Hamborg et al. “NewsDeps: Visualizing the Origin of Information in News Articles”. In: Wahrheit und Fake im postfaktisch-digitalen Zeitalter . Ed. by Peter Klimczak and Thomas Zoglauer. Springer Vieweg, 2021, pp. 151–166. isbn : 978-3-658-32957-0. doi : https://doi.org/10.1007/978-3-658-32957-0 .

Mark Hanna. “Keywords in News and Journalism Studies”. In: Journalism Studies 15.1 (Jan. 2014), pp. 118–119. issn : 1461-670X. doi : https://doi.org/10.1080/1461670X.2012.712759 . url : http://www.tandfonline.com/doi/abs/10.1080/1461670X.2012.712759 .

Tony Harcup and Deirdre O’Neill. “What is news? Galtung and Ruge revisited”. In: Journalism studies 2.2 (2001), pp. 261–280. url : https://www.tandfonline.com/doi/10.1080/14616700118449 .

Andrew F. Hayes and Klaus Krippendorff. “Answering the Call for a Standard Reliability Measure for Coding Data”. In: Communication Methods and Measures 1.1 (Apr. 2007), pp. 77–89. issn : 1931-2458. doi : https://doi.org/10.1080/19312450709336664 . url : http://www.tandfonline.com/doi/abs/10.1080/19312450709336664 .

Edward S Herman. “The propaganda model: A retrospective”. In: Journalism Studies 1.1 (2000), pp. 101–112. doi : https://doi.org/10.1080/146167000361195 .

Edward S Herman and Noam Chomsky. Manufacturing consent: The political economy of the mass media . Random House, 2010.

Timothy C Hoad and Justin Zobel. “Methods for identifying versioned and plagiarized documents”. In: Journal of the American society for information science and technology 54.3 (2003), pp. 203–215.

George Hripcsak. “Agreement, the F-Measure, and Reliability in Information Retrieval”. In: Journal of the American Medical Informatics Association 12.3 (Jan. 2005), pp. 296–298. issn : 1067-5027. doi : https://doi.org/10.1197/jamia.M1733 . url : https://academic.oup.com/jamia/article-lookup/doi/10.1197/jamia.M1733 .

Minqing Hu and Bing Liu. “Mining and summarizing customer reviews”. In: Proceedings of the 2004 ACM SIGKDD international conference on Knowledge discovery and data mining - KDD ’04 . New York, New York, USA: ACM Press, 2004, p. 168. doi : https://doi.org/10.1145/1014052.1014073 . url : http://portal.acm.org/citation.cfm?doid=1014052.1014073 .

John Edward Hunter, Frank L Schmidt, and Gregg B Jackson. Meta-analysis: Cumulating research findings across studies . Vol. 4. Sage Publications, Inc, 1982.

Shanto Iyengar. Is anyone responsible? How television frames political issues . University of Chicago Press, 1994.

Anil K Jain and Sushil Bhattacharjee. “Text segmentation using Gabor filters for automatic document processing”. In: Machine Vision and Applications 5.3 (1992), pp. 169–184. url : https://link.springer.com/article/10.1007/BF02626996 .

Silvia Julinda, Christoph Boden, and Alan Akbik. Extracting a Repository of Events and Event References from News Clusters . Dublin, Ireland, Aug. 2014. doi : https://doi.org/10.3115/v1/W14-4503 . url : https://www.aclweb.org/anthology/W14-4503 .

Daniel Kahneman and Amos Tversky. “Choices, values, and frames.” In: American Psychologist 39.4 (1984), pp. 341–350. issn : 0003-066X. doi : https://doi.org/10.1037/0003-066X.39.4.341 . url : http://content.apa.org/journals/amp/39/4/341 .

Mesut Kaya, Guven Fidan, and Ismail H Toroslu. “Sentiment Analysis of Turkish Political News”. In: 2012 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology . IEEE Computer Society. IEEE, Dec. 2012, pp. 174–180. isbn : 978-1-4673-6057-9. doi : https://doi.org/10.1109/WI-IAT.2012.115 . url : http://ieeexplore.ieee.org/document/6511881/ .

Keith Kenney and Chris Simpson. “Was coverage of the 1988 presidential race by Washington’s two major dailies biased?” In: Journalism & Mass Communication Quarterly 70.2 (1993), pp. 345–355. doi : https://doi.org/10.1177/107769909307000210 .

Hans Mathias Kepplinger. “Visual biases in television campaign coverage”. In: Communication Research 9.3 (1982), pp. 432–446. doi : https://doi.org/10.1177/009365082009003005 .

Jean S Kerrick. “News pictures, captions and the point of resolution”. In: Journalism & Mass Communication Quarterly 36.2 (1959), pp. 183–188. doi : https://doi.org/10.1177/107769905903600207 .

Jean S. Kerrick. “The Influence of Captions on Picture Interpretation”. In: Journalism Quarterly 32.2 (June 1955), pp. 177–182. issn : 0022-5533. doi : https://doi.org/10.1177/107769905503200205 . url : http://journals.sagepub.com/doi/10.1177/107769905503200205 .

JongWook Kim, K Selçuk Candan, and Junichi Tatemura. “Efficient overlap and content reuse detection in blogs and online news articles”. In: Proceedings of the 18th international conference on World wide web - WWW ’09 . 0735014. New York, New York, USA: ACM Press, 2009, p. 81. isbn : 9781605584874. doi : https://doi.org/10.1145/1526709.1526721 . url : http://portal.acm.org/citation.cfm?doid=1526709.1526721 .

Christian Kohlschütter, Peter Fankhauser, and Wolfgang Nejdl. “Boilerplate detection using shallow text features”. In: Proceedings of the third ACM international conference on Web search and data mining - WSDM ’10 . New York, New York, USA: ACM Press, 2010, p. 441. isbn : 9781605588896. doi: https://doi.org/10.1145/1718487.1718542 . url : http://portal.acm.org/citation.cfm?doid=1718487.1718542 .

Wolfgang Kreißig. Medienvielfaltsmonitor 2020-I: Anteile der Medienangebote und Medienkonzerne am Meinungsmarkt der Medien in Deutschland . Tech. rep. Munich, Germany: Bayerische Landeszentrale für neue Medien (BLM), 2020. url : https://www.blm.de/files/pdf2/medienvielfaltsmonitor-2020-1.pdf .

Steven Kull, Clay Ramsay, and Evan Lewis. “Misperceptions, the media, and the Iraq war”. In: Political Science Quarterly 118.4 (2003), pp. 569–598. url : https://onlinelibrary.wiley.com/doi/10.1002/j.1538-165X.2003.tb00406.x .

Ankit Kumar et al. “Ask Me Anything: Dynamic Memory Networks for Natural Language Processing”. In: arXiv (2015). issn : 1938-7228. doi : https://doi.org/10.1017/CBO9781107415324.004 . arXiv: 1506.07285v1.

George Lakoff. “Women, fire, and dangerous things”. In: What categories reveal about the mind (1987).

J Richard Landis and Gary G Koch. “The Measurement of Observer Agreement for Categorical Data”. In: Biometrics 33.1 (Mar. 1977), p. 159. issn : 0006341X. doi : https://doi.org/10.2307/2529310 . url : https://www.jstor.org/stable/2529310?origin=crossref .

Valentino Larcinese, Riccardo Puglisi, and James M Snyder. “Partisan bias in economic news: Evidence on the agenda-setting behavior of US newspapers”. In: Journal of Public Economics 95.9 (2011), pp. 1178–1189.

Quoc V. Le and Tomas Mikolov. “Distributed Representations of Sentences and Documents”. In: International Conference on Machine Learning - ICML 2014 32 (May 2014). arXiv: 1405.4053. url : http://arxiv.org/abs/1405.4053 .

Yann Lecun, Yoshua Bengio, and Geoffrey Hinton. “Deep learning”. In: Nature 521.7553 (2015), pp. 436–444. issn : 14764687. doi : https://doi.org/10.1038/nature14539 . arXiv: 1312.6184v5.

Kalev Leetaru and Philip A Schrodt. “GDELT: Global Data on Events, Location and Tone, 1979-2012”. In: Annual Meeting of the International Studies Association (2013), pp. 1–51. url : http://data.gdeltproject.org/documentation/ISA.2013.GDELT.pdf .

Jure Leskovec, Lars Backstrom, and Jon Kleinberg. “Meme-tracking and the dynamics of the news cycle”. In: Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining . ACM. 2009, pp. 497–506. doi : https://doi.org/10.1145/1557019.1557077 .

David D Lewis et al. “RCV1: A New Benchmark Collection for Text Categorization Research”. In: The Journal of Machine Learning Research 5 (2004), pp. 361–397. url : https://dl.acm.org/doi/10.5555/1005332.1005345 .

LexisNexis. LexisNexis Police Reports . 2020. url : http://web.archive.org/web/20200405053436/https://policereports.lexisnexis.com/search/search (visited on 02/12/2020).

Sora Lim, Adam Jatowt, and Masatoshi Yoshikawa. “Towards Bias Inducing Word Detection by Linguistic Cue Analysis in News Articles”. In: DEIM Forum 2018 . 2018, pp. 1–6. url : https://db-event.jpn.org/deim2018/data/papers/275.pdf .

Will Lowe. “Software for content analysis-A Review”. In: Cambridge: Weatherhead Center for International Affairs and the Harvard Identity Project (2002).

Luca Luceri, Silvia Giordano, and Emilio Ferrara. “Detecting Troll Behavior via Inverse Reinforcement Learning: A Case Study of Russian Trolls in the 2016 US Election”. In: Proceedings of the Fourteenth International AAAI Conference on Web and Social Media (ICWSM 2020) . Association for the Advancement of Artificial Intelligence, 2020, pp. 417–427. arXiv: 2001.10570. url : http://arxiv.org/abs/2001.10570 .

Brent MacGregor. Live, direct, and biased? making television news in the satellite age . Arnold, 1997.

Oded Maimon and Lior Rokach. “Introduction to knowledge discovery and data mining”. In: Data mining and knowledge discovery handbook. Springer, 2009, pp. 1–15.

Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. Introduction to Information Retrieval . Cambridge: Cambridge University Press, 2008. isbn : 9780511809071. doi : https://doi.org/10.1017/CBO9780511809071 . url : http://ebooks.cambridge.org/ref/id/CBO9780511809071 .

Jörg Matthes. “What’s in a Frame? A Content Analysis of Media Framing Studies in the World’s Leading Communication Journals, 1990-2005”. In: Journalism & Mass Communication Quarterly 86.2 (June 2009), pp. 349–367. issn : 1077-6990. doi : https://doi.org/10.1177/107769900908600206 . url : http://journals.sagepub.com/doi/10.1177/107769900908600206 .

John McCarthy et al. “Assessing stability in the patterns of selection bias in newspaper coverage of protest during the transition from communism in Belarus”. In: Mobilization: An International Quarterly 13.2 (2008), pp. 127–146.

John D McCarthy, Clark McPhail, and Jackie Smith. “Images of Protest: Dimensions of Selection Bias in Media Coverage of Washington Demonstrations, 1982 and 1991”. In: American Sociological Review 61.3 (June 1996), p. 478. issn : 00031224. doi : https://doi.org/10.2307/2096360 . url : http://www.jstor.org/stable/2096360?origin=crossref .

Margaret J McGregor et al. “Why don’t more women report sexual assault to the police?” In: Canadian Medical Association Journal 162.5 (2000), pp. 659–660.

Kathleen R. McKeown et al. “Tracking and summarizing news on a daily basis with Columbia’s Newsblaster”. In: Proceedings of the second international conference on Human Language Technology Research . 2002, pp. 280–285.

Norman Meuschke and Bela Gipp. “State-of-the-art in detecting academic plagiarism”. In: International Journal for Educational Integrity 9.1 (June 2013), p. 50. issn : 1833-2595. doi : https://doi.org/10.21913/IJEI.v9i1.847 . url : https://ojs.unisa.edu.au/index.php/IJEI/article/view/847 .

Norman Meuschke et al. “HyPlag”. In: The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval . NewYork, NY, USA: ACM, June 2018, pp. 1321–1324. isbn : 9781450356572. doi : https://doi.org/10.1145/3209978.3210177 . url : https://dl.acm.org/doi/10.1145/3209978.3210177 .

Joshua Meyrowitz. No sense of place: The impact of electronic media on social behavior . Oxford University Press, 1986.

M. Mark Miller. “Frame Mapping and Analysis of News Coverage of Contentious Issues”. In: Social Science Computer Review 15.4 (Dec. 1997), pp. 367–378. issn : 0894-4393. doi : https://doi.org/10.1177/089443939701500403 . url : http://journals.sagepub.com/doi/10.1177/089443939701500403 .

Gilad Mishne. “Experiments with mood classification in blog posts”. In: Proceedings of ACM SIGIR 2005 Workshop on Stylistic Analysis of Text for Information Access (2005).

Ryan Mitchell. Web scraping with Python: collecting data from the modern web . O’Reilly Media, Inc., 2015.

Shunji Mori, Hirobumi Nishida, and Hiromitsu Yamada. Optical character recognition . John Wiley & Sons, Inc., 1999.

Karen Mossberger, Caroline J Tolbert, and Ramona S McNeal. Digital citizenship: The Internet, society, and participation . MIT Press, 2007. isbn : 9780262134859. url : https://mitpress.mit.edu/books/digital-citizenship .

Sendhil Mullainathan and Andrei Shleifer. “The market for news”. In: American Economic Review (2005), pp. 1031–1053.

Sean A Munson and Paul Resnick. “Presenting diverse political opinions”. In: Proceedings of the 28th international conference on Human factors in computing systems - CHI ’10 . New York, New York, USA: ACM Press, 2010, p. 1457. isbn : 9781605589299. doi : https://doi.org/10.1145/1753326.1753543 . url : http://portal.acm.org/citation.cfm?doid=1753326.1753543 .

Sean A Munson, Daniel Xiaodan Zhou, and Paul Resnick. “Sidelines: An Algorithm for Increasing Diversity in News and Opinion Aggregators.” In: ICWSM . 2009.

Sean A. Munson, Stephanie Y. Lee, and Paul Resnick. “Encouraging reading of diverse political viewpoints with a browser widget”. In: Proceedings of the 7th International Conference on Weblogs and Social Media, ICWSM 2013 . 2013.

Diana C Mutz. “Facilitating communication across lines of political difference: The role of mass media”. In: American Political Science Association . Vol. 95. 01. Cambridge Univ Press. 2001, pp. 97–114.

David Nadeau and Satoshi Sekine. “A survey of named entity recognition and classification”. In: Lingvisticae Investigationes 30.1 (Aug. 2007), pp. 3–26. issn : 0378-4169. doi : https://doi.org/10.1075/li.30.1.03nad . url : http://www.jbeplatform.com/content/journals/10.1075/li.30.1.03nad .

Joseph Napolitan. The election game and how to win it . Doubleday, 1972.

Kimberly A Neuendorf. The content analysis guidebook . Sage Publications, 2016. isbn : 9781412979474.

Nic Newman, David A L Levy, and Rasmus Kleis Nielsen. Reuters Institute Digital News Report 2015 . Reuters Institute for the Study of Journalism, 2015. isbn : 978-1907384134.

Nic Newman et al. Reuters Institute Digital News Report 2020 . Reuters Institute for the Study of Journalism, 2020.

David Niven. Tilt? The search for media bias . Praeger, 2002. isbn : 978–0275975777.

Daniela Oelke, Benno Geißelmann, and Daniel A Keim. “Visual Analysis of Explicit Opinion and News Bias in German Soccer Articles”. In: Euro- Vis Workshop on Visual Analytics. Vienna , Austria, 2012. doi : 10.2312/PE/EuroVAST/EuroVA12/049-053. url : https://doi.org/10.2312/PE/EuroVAST/EuroVA12/049-053 .

Pamela E. Oliver and Gregory M. Maney. “Political Processes and Local Newspaper Coverage of Protest Events: From Selection Bias to Triadic Interactions”. In: American Journal of Sociology 106.2 (Sept. 2000), pp. 463–505. issn : 0002-9602. doi : https://doi.org/10.1086/316964 . url : http://www.journals.uchicago.edu/doi/10.1086/316964 .

Lawrence Page et al. The PageRank citation ranking: bringing order to the web . Tech. rep. 1999.

Georgios Paliouras et al. “PNS: A Personalized News Aggregator on the Web”. In: Intelligent interactive systems in knowledge-based environments . Ed. by George A. Tsihrintzis and Maria Virvou. Berlin, Germany: Springer, 2008, pp. 175–197. isbn : 978-3-540-77471-6. doi : https://doi.org/10.1007/978-3-540-77471-6_10 . url : http://link.springer.com/10.1007/978-3-540-77471-6_10 .

Zhongdang Pan and GeraldKosicki. “Framing analysis: An approach to news discourse”. In: Political Communication 10.1 (1993), pp. 55–75. issn : 1058-4609. doi : https://doi.org/10.1080/10584609.1993.9962963 . url : http://www.tandfonline.com/doi/abs/10.1080/10584609.1993.9962963 .

Bo Pang and Lillian Lee. “Opinion mining and sentiment analysis”. In: Foundations and trends in information retrieval 2.1-2 (2008), pp. 1–135. doi: https://doi.org/10.1561/1500000011 .

Zizi Papacharissi and Maria de Fatima Oliveira. “News Frames Terrorism: A Comparative Analysis of Frames Employed in Terrorism Coverage in U.S. and U.K. Newspapers”. In: The International Journal of Press/Politics 13.1 (Jan. 2008), pp. 52–74. issn : 1940-1612. doi : https://doi.org/10.1177/1940161207312676 . url: http://journals.sagepub.com/doi/10.1177/1940161207312676 .

Souneil Park, KyungSoon Lee, and Junehwa Song. “Contrasting Opposing Views of News Articles on Contentious Issues”. In: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies . Portland, Oregon, USA: Association for Computational Linguistics, 2011, pp. 340–349. url : https://www.aclweb.org/anthology/P11-1035 .

Souneil Park et al. “NewsCube”. In: Proceedings of the 27th international conference on Human factors in computing systems - CHI 09 . New York, New York, USA: ACM Press, 2009, p. 443. isbn : 9781605582467. doi : https://doi.org/10.1145/1518701.1518772 . url : http://dl.acm.org/citation.cfm?doid=1518701.1518772 .

Souneil Park et al. “NewsCube 2.0: An Exploratory Design of a Social News Website for Media Bias Mitigation”. In: Workshop on Social Recommender Systems . 2011.

Souneil Park et al. “The politics of comments”. In: Proceedings of the ACM 2011 conference on Computer supported cooperative work - CSCW ’11 . ACM. New York, New York, USA: ACM Press, 2011, p. 113. isbn : 9781450305563. doi : https://doi.org/10.1145/1958824.1958842 . url : http://portal.acm.org/citation.cfm?doid=1958824.1958842 .

Richard Paul and Linda Elder. The Thinker’s Guide for Conscientious Citizens on how to Detect Media Bias & Propaganda in National and World News . Foundation Critical Thinking, 2004.

Dragomir R Radev et al. “Centroid-based summarization of multiple documents”. In: Information Processing & Management 40.6 (2004), pp. 919–938.

Marta Recasens, Cristian Danescu-Niculescu-Mizil, and Dan Jurafsky. “Linguistic Models for Analyzing and Detecting Biased Language”. In: Proceedings of the 51st Annual Meeting on Association for Computational Linguistics . Sofia, BG: Association for Computational Linguistics, 2013, pp. 1650–1659. isbn : 9781937284503. url : https://www.aclweb.org/anthology/P13-1162.pdf .

Shawn W Rosenberg et al. “The image and the vote: The effect of candidate presentation on voter preference”. In: American Journal of Political Science 30.1 (1986), pp. 108–127. doi : https://doi.org/10.2307/2111296 .

Barbara Rychalska et al. “Samsung Poland NLP Team at SemEval-2016 Task 1: Necessity for diversity; combining recursive autoencoders,WordNet and ensemble methods to measure semantic similarity.” In: Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016) . Stroudsburg, PA, USA: Association for Computational Linguistics, 2016, pp. 602–608. isbn : 9781941643952. doi : https://doi.org/10.18653/v1/S16-1091 . url : http://aclweb.org/anthology/S16-1091 .

Diego Saez-Trumper, Carlos Castillo, and Mounia Lalmas. “Social media news communities”. In: Proceedings of the 22nd ACM international conference on Conference on information & knowledge management - CIKM ’13 . New York, New York, USA: ACM Press, 2013, pp. 1679–1684. isbn : 9781450322638. doi : https://doi.org/10.1145/2505515.2505623 . url : http://dl.acm.org/citation.cfm?doid=2505515.2505623 .

Gerard Salton and Christopher Buckley. “Term-weighting approaches in automatic text retrieval”. In: Information processing and management 24.5 (1988), pp. 513–523. doi : https://doi.org/10.1016/0306-4573(88)90021-0 .

Mark Sanderson. “Duplicate detection in the Reuters collection”. In: ”Technical Report (TR-1997-5) of the Department of Computing Science at the University of Glasgow G12 8QQ, UK” (1997).

Cicero Nogueira dos Santos and Maira Gatti. “Deep Convolutional Neural Networks for Sentiment Analysis of Short Texts”. In: Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers . 2014, pp. 69–78. url : https://www.aclweb.org/anthology/C14-1008 .

Frane Šariæ et al. “Takelab: Systems for Measuring Semantic Text Similarity”. In: Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth InternationalWorkshop on Semantic Evaluation . Association for Computational Linguistics, 2012, pp. 441–448. url : https://www.aclweb.org/anthology/S12-1060 .

Dietram A Scheufele. “Agenda-setting, priming, and framing revisited: Another look at cognitive effects of political communication”. In: Mass Communication & Society 3.2-3 (2000), pp. 297–316. doi : 10.1207/S15327825MCS0323_07. url : https://doi.org/10.1207/S15327825MCS0323_07 .

Margrit Schreier. Qualitative content analysis in practice. SAGE Publications, 2012, pp. 1–280. isbn : 9781849205931.

Crisitina Segalin et al. “The Pictures We Like Are Our Image: Continuous Mapping of Favorite Pictures into Self-Assessed and Attributed Personality Traits”. In: IEEE Transactions on Affective Computing 8.2 (Apr. 2017), pp. 268–285. issn : 1949-3045. doi : https://doi.org/10.1109/TAFFC.2016.2516994 . url : http://ieeexplore.ieee.org/document/7378902/ .

Anup Shah. Media Conglomerates, Mergers, Concentration of Ownership. 2009. url : https://www.globalissues.org/article/159/media-conglomeratesmergers-concentration-of-ownership (visited on 02/19/2021).

Walid Shalaby, Wlodek Zadrozny, and Hongxia Jin. “Beyond word embeddings: learning entity and concept representations from large scale knowledge bases”. In: Information Retrieval Journal (2018), pp. 1–18. doi : s10791-018-9340-3. url : https://doi.org/10.1007/s10791-018-9340-3 .

Smriti Sharma et al. “News Event Extraction Using 5W1H Approach & Its Analysis”. In: International Journal of Scientific & Engineering Research 4.5 (2013), pp. 2064–2068. url : https://www..ser.org/onlineResearchPaperViewer.aspx?News-Event-Extraction-Using-5W1HApproach-Its-Analysis.pdf.

Narayanan Shivakumar and Hector Garcia-Molina. “SCAM: A Copy Detection Mechanism for Digital Documents”. In: In Proceedings of the Second Annual Conference on the Theory and Practice of Digital Libraries . 1995. url : http://ilpubs.stanford.edu:8090/95/ .

Alison Smith, Timothy Hawes, and Meredith Myers. “Hiérarchie: Interactive Visualization for Hierarchical Topic Models”. In: Proceedings of the Workshop on Interactive Language Learning, Visualization, and Interfaces . Association for Computational Linguistics, 2014, pp. 71–78. isbn : 9781941643150. doi : https://doi.org/10.3115/v1/W14-3111 .

Jackie Smith et al. “From Protest to Agenda Building: Description Bias in Media Coverage of Protest Events in Washington, D.C.” In: Social Forces 79.4 (2001), pp. 1397–1423. url : https://www.jstor.org/stable/2675477 .

Norman Solomon. “Media Bias”. In: New Political Science 24.2 (June 2002), pp. 293–297. issn : 0739-3148. doi : https://doi.org/10.1080/073931402200145252 . url : http://www.tandfonline.com/doi/abs/10.1080/073931402200145252 .

Samuel R Sommers et al. “Race and media coverage of Hurricane Katrina: Analysis, implications, and future research questions”. In: Analyses of Social Issues and Public Policy 6.1 (2006), pp. 39–55. doi : https://doi.org/10.1111/j.1530-2415.2006.00103.x .

Spiegel Online. Übertreibt Horst Seehofer seine Attacken? Das sagen die Medien. 2016. url : http://www.spiegel.de/politik/deutschland/uebertreibthorst-seehofer-seine-attacken-das-sagen-die-medien-a-1076867.html (visited on 02/15/2021).

Andreas Spitz and Michael Gertz. “Breaking theNews: Extracting the Sparse CitationNetwork Backbone of OnlineNews Articles”. In: Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2015 . ACM. 2015, pp. 274–279. doi : https://doi.org/10.1145/2808797.2809380 .

Ralf Steinberger et al. “Large-scale news entity sentiment analysis”. In: RANLP 2017 - Recent Advances in Natural Language Processing Meet Deep Learning . Incoma Ltd. Shoumen, Bulgaria, Nov. 2017, pp. 707–715. isbn: 9789544520496. doi : https://doi.org/10.26615/978-954-452-049-6_091 . url : http://www.acl-bg.org/proceedings/2017/RANLP%202017/pdf/RANLP091.pdf .

Steve Stemler. “An overview of content analysis”. In: Practical assessment, research & evaluation 7.17 (2001), pp. 137–146.

Guido H Stempel. “The prestige press meets the third-party challenge”. In: Journalism & Mass Communication Quarterly 46.4 (1969), pp. 699–706. doi : https://doi.org/10.1177/107769906904600402 .

Guido H Stempel and John W Windhauser. “The prestige press revisited: coverage of the 1980 presidential campaign”. In: Journalism and Mass Communication Quarterly 61.1 (1984), p. 49. doi : https://doi.org/10.1177/107769908406100107 .

James Glen Stovall. “Coverage of 1984 presidential campaign”. In: Journalism and Mass Communication Quarterly 65.2 (1988), p. 443. doi : https://doi.org/10.1177/107769908806500227 .

James Glen Stovall. “The third-party challenge of 1980: News coverage of the presidential candidates”. In: Journalism and Mass Communication Quarterly 62.2 (1985), p. 266. doi : https://doi.org/10.1177/107769908506200206 .

Carlo Strapparava and Rada Mihalcea. “Semeval-2007 task 14: Affective text”. In: Proceedings of the 4th InternationalWorkshop on Semantic Evaluations . Association for Computational Linguistics. Prague, Czech Republic, 2007, pp. 70–74. url : https://www.aclweb.org/anthology/S07-1013/ .

Joseph D Straubhaar. Media Now: Communication Media in Information Age. Thomson Learning, 2000.

Pero Subasic and Alison Huettner. “Affect analysis of text using fuzzy semantic typing”. In: IEEE Transactions on Fuzzy Systems 9.4 (2001), pp. 483–496. issn : 10636706. doi : https://doi.org/10.1109/91.940962 . url : http://ieeexplore.ieee.org/document/940962/ .

S Shyam Sundar. “Exploring receivers’ criteria for perception of print and online news”. In: Journalism & Mass Communication Quarterly 76.2 (1999), pp. 373–386. doi : https://doi.org/10.1177/107769909907600213 .

Cass R Sunstein. Echo Chambers: Bush v. Gore, Impeachment, and Beyond . Princeton University Press Princeton, 2001.

Cass R Sunstein. “The law of group polarization”. In: Journal of political philosophy 10.2 (2002), pp. 175–195. url : https://papers.ssrn.com/sol3/papers.cfm?abstract_id=199668 .

The Media Insight Project. The Personal News Cycle: How Americans Get Their News. Tech. rep. 2014. url : https://www.americanpressinstitute.org/publications/reports/survey-research/personal-news-cycle/ .

Manos Tsagkias, Maarten De Rijke, and Wouter Weerkamp. “Linking online news and social media”. In: Proceedings of the fourth ACM international conference on Web search and data mining . ACM. 2011, pp. 565–574. doi : https://doi.org/10.1145/1935826.1935906 .

Larry Tye. The father of spin: Edward L. Bernays and the birth of public relations . Macmillan, 2002.

University of Michigan. News Bias Explored - The art of reading the news. 2014. url : http://umich.edu/~newsbias/ (visited on 02/01/2021).

Christine D Urban. Examining Our Credibility: Perspectives of the Public and the Press . Asne Foundation, 1999, pp. 1–108.

Mojtaba Vaismoradi, Hannele Turunen, and Terese Bondas. “Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study”. In: Nursing & Health Sciences 15.3 (Sept. 2013), pp. 398–405. issn : 14410745. doi : https://doi.org/10.1111/nhs.12048 . url : http://doi.wiley.com/10.1111/nhs.12048 .

Robert P Vallone, Lee Ross, and Mark R Lepper. “The hostile media phenomenon: biased perception and perceptions of media bias in coverage of the Beirut massacre.” In: Journal of personality and social psychology 49.3 (1985), p. 577. doi : https://doi.org/10.1037//0022-3514.49.3.577 .

Baldwin Van Gorp. “Strategies to take subjectivity out of framing analysis”. In: Doing news framing analysis: Empirical and theoretical perspectives (2010), pp. 100–125. url : https://www.taylorfrancis.com/chapters/edit/10.4324/9780203864463-11/strategies-take-subjectivity-framing-analysisbaldwin-van-gorp .

Baldwin Van Gorp. “Where is the frame? Victims and intruders in theBelgian press coverage of the asylum issue”. In: European Journal of Communication 20.4 (2005), pp. 484–507. doi : https://doi.org/10.1177/0267323105058253 .

Athanasios Voulodimos et al. “Deep Learning for Computer Vision: A Brief Review”. In: Computational Intelligence and Neuroscience 2018 (2018), pp. 1–13. issn : 1687-5265. doi : https://doi.org/10.1155/2018/7068349 . url : https://www.hindawi.com/journals/cin/2018/7068349/ .

Paul Waldman and James Devitt. “Newspaper Photographs and the 1996 Presidential Election: The Question of Bias”. In: Journal of Mass Communication 75.2 (1998), pp. 302–311. issn : 10776990. doi : https://doi.org/10.1177/107769909807500206 .

Wayne Wanta, Guy Golan, and Cheolhan Lee. “Agenda setting and international news: Media influence on public perceptions of foreign nations”. In: Journalism & Mass Communication Quarterly 81.2 (2004), pp. 364–377. doi : https://doi.org/10.1177/107769900408100209 .

David Manning White. “The ”Gate Keeper”: A Case Study in the Selection of News”. In: Journalism Bulletin 27.4 (1950), pp. 383–390. issn : 0197-2448. doi : https://doi.org/10.1177/107769905002700403 . url : http://journals.sagepub.com/doi/10.1177/107769905002700403 .

Alden Williams. “Unbiased Study of Television News Bias”. In: Journal of Communication 25.4 (Dec. 1975), pp. 190–199. issn : 0021-9916. doi : https://doi.org/10.1111/j.1460-2466.1975.tb00656.x . url : https://academic.oup.com/joc/article/25/4/190-199/4553978 .

Vikas Yadav and Steven Bethard. “A Survey on Recent Advances in Named Entity Recognition from Deep Learning models”. In: Proceedings of the 27th International Conference on Computational Linguistics. Santa Fe, New Mexico, USA: Association for Computational Linguistics , 2018, pp. 2145–2158. url : https://www.aclweb.org/anthology/C18-1182 .

JungHwan Yang et al. “Why Are “Others” So Polarized? Perceived Political Polarization and Media Use in 10 Countries”. In: Journal of Computer-Mediated Communication 21.5 (Sept. 2016), pp. 349–367. issn : 10836101. doi: https://doi.org/10.1111/jcc4.12166 . url : https://academic.oup.com/jcmc/article/21/5/349-367/4161799 .

John Zaller. The nature and origins of mass opinion . Cambridge university press, 1992. doi : https://doi.org/10.1017/CBO9780511818691 .

Biqing Zeng et al. “LCF: A Local Context Focus Mechanism for Aspect-Based Sentiment Classification”. In: Applied Sciences 9.16 (Aug. 2019), pp. 1–22. issn : 2076-3417. doi : https://doi.org/10.3390/app9163389 . url : https://www.mdpi.com/2076-3417/9/16/3389 .

Sven Meyer Zu Eissen and Benno Stein. “Intrinsic plagiarism detection”. In: European Conference on Information Retrieval. Springer . 2006, pp. 565–569. url : https://link.springer.com/chapter/10.1007/11735106_66 .

Download references

Author information

Authors and affiliations

Department of Computer Science, Humboldt University of Berlin, Berlin, Germany

Felix Hamborg


Rights and permissions

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.


Copyright information

© 2023 The Author(s)

About this chapter

Hamborg, F. (2023). Media Bias Analysis. In: Revealing Media Bias in News Articles. Springer, Cham. https://doi.org/10.1007/978-3-031-17693-7_2


DOI: https://doi.org/10.1007/978-3-031-17693-7_2

Published: 06 October 2022

Publisher Name: Springer, Cham

Print ISBN: 978-3-031-17692-0

Online ISBN: 978-3-031-17693-7

eBook Packages: Computer Science, Computer Science (R0)



Mass Media: An Undergraduate Research Guide : Media Bias

  • Newspaper Source Plus Newspaper Source Plus includes 1,520 full-text newspapers, providing more than 28 million full-text articles.
  • Newspaper Research Guide This guide describes sources for current and historical newspapers available in print, electronically, and on microfilm through the UW-Madison Libraries. These sources are categorized by pages: Current, Historical, Local/Madison, Wisconsin, US, Alternative/Ethnic, and International.

Organizations

  • Center for Media and Democracy's PR Watch Madison, WI-based nonprofit organization that focuses on "investigating and exposing the undue influence of corporations and front groups on public policy, including PR campaigns, lobbying, and electioneering"
  • CAMERA The Committee for Accuracy in Middle East Reporting in America describes itself as "a media-monitoring, research and membership organization devoted to promoting accurate and balanced coverage of Israel and the Middle East"
  • Fairness & Accuracy in Reporting (FAIR) "FAIR, the national media watch group, has been offering well-documented criticism of media bias and censorship since 1986"
  • Media Research Center Conservative watch group with a "commitment to neutralizing left-wing bias in the news media and popular culture"

About Media Bias

This guide focuses on bias in mass media coverage of news and current events. It includes concerns of sensationalism, allegations of media bias, and criticism of media's increasingly profit-motivated ethics. It also includes examples of various types of sources coming from particular partisan viewpoints.

Try searching these terms using the resources linked on this page: media bias, sensational* AND (news or media), bias AND media coverage, (liberal or conservative) AND bias, [insert topic] AND media bias, media manipulation, misrepresent* AND media
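
The truncation (`*`) and boolean (AND/OR) patterns above can be expanded into concrete query strings for any topic. A minimal Python sketch, where the template list and the `build_queries` helper are illustrative conveniences, not part of any database's API:

```python
# Suggested search patterns from the guide, written as templates.
# Only one template carries a {topic} placeholder; the rest are fixed.
SEARCH_TEMPLATES = [
    "media bias",
    "sensational* AND (news OR media)",
    "bias AND media coverage",
    "(liberal OR conservative) AND bias",
    "{topic} AND media bias",
    "media manipulation",
    "misrepresent* AND media",
]

def build_queries(topic: str) -> list[str]:
    # str.format() substitutes the placeholder and leaves
    # templates without braces unchanged.
    return [t.format(topic=topic) for t in SEARCH_TEMPLATES]

queries = build_queries("election integrity")
print(queries[4])  # → election integrity AND media bias
```

The truncation star and parenthesized OR groups are passed through verbatim, since the library database, not Python, interprets them.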

Overview Resources - Background Information

  • International Encyclopedia of Media Studies This encyclopedia covers the broad field of “media studies,” encompassing print journalism, radio, film, TV, photography, computing, mobile phones, and digital media.
  • Opposing Viewpoints Resource Center (OVRC) provides viewpoint articles, topic overviews, statistics, primary documents, links to websites, and full-text magazine and newspaper articles related to controversial social issues.
  • FactCheck.org A nonpartisan, nonprofit "consumer advocate" for voters that aims to reduce the level of deception and confusion in U.S. politics by monitoring the factual accuracy of what is said by major U.S. political players in the form of TV ads, debates, speeches, interviews and news releases.


Articles - Scholarly and Popular

  • Academic Search Includes scholarly and popular articles on many topics.
  • Communication & Mass Media Complete Includes articles on communication and media topics.
  • ProQuest One Business (formerly ABI Inform) covers a wide range of business topics including accounting, finance, management, marketing and real estate.
  • Project Muse Disciplines covered include art, anthropology, literature, film, theatre, history, ethnic and cultural studies, music, philosophy, religion, psychology, sociology and women's studies.
  • JSTOR: The Scholarly Journal Archive A full-text journal database providing access to articles on many different topics.

Statistics and Data

  • Data Citation Index The Data Citation Index provides a single point of access to quality research data from repositories across disciplines and around the world. Through linked content and summary information, this data is displayed within the broader context of the scholarly research, enabling users to gain perspective that is lost when data sets or repositories are viewed in isolation.
  • Last Updated: Apr 24, 2024 3:01 PM
  • URL: https://researchguides.library.wisc.edu/massmediaURG

INSTITUTE FOR POLICY RESEARCH

Fake news, big lies: How did we get here and where are we going?

IPR experts explain how mis- and disinformation affect our lives and offer ideas for how to counter it


“I think that misinformation/disinformation about candidates is bad, but misinformation/disinformation that destroys trust in our political process, institutions, and the integrity of democracy itself is a thousand times worse.”

Erik Nisbet Communication and policy scholar and IPR associate


“Are we going to be a nation that lives not by the light of the truth but in the shadow of lies?” President Joe Biden asked the country on the first anniversary of the January 6 insurrection at the U.S. Capitol.

But distinguishing truth from lies can be a difficult task when every day Americans read and hear false “facts”—misinformation—and deliberately misleading information created to cause harm—disinformation.

IPR faculty experts have generated a noteworthy body of research across different disciplines that explores what drives people to believe in untruths—and how the U.S. may be especially susceptible to disinformation. They also examine how misinformation and disinformation have affected the media, our politics, and even our health.

Why It’s Easy to Believe Misinformation and Disinformation

How Misinformation and Disinformation Flourish in U.S. Media

Declining Trust in News

What About Social Media?

Misinformation, Disinformation, and Polarization

‘Fake News’ in Presidential Elections

Informational Distrust in the COVID-19 Era

Resisting the ‘Shadow of Lies’

Although propaganda meant to persuade via argument, rumor, misunderstanding, and falsehood goes back to at least ancient Greece, today misinformation and disinformation are at the center of debate and research. Scholars have identified “information disorder syndrome”: the creation or sharing of false information out of error (misinformation) or to mislead or cause harm (disinformation or mal-information).

Why do people believe in misinformation and disinformation? Psychologist and IPR associate David Rapp, who studies how people learn through reading, finds that memory is key.

In experiments, he finds that when people read incorrect information, even about trivial subjects they already know, they often become confused and remember the inaccuracies. Subsequently, they answer questions using the incorrect statements.

“You can build memories for the things you’ve read that can then get resuscitated or recalled later in your decision making,” he said, especially when people are not carefully considering what they read.

Repeating false information over and over again—such as the claim that the 2020 election was fraudulent—can lead to building memories for the information. And repeated information is often easy to retrieve, which can lead to problems, Rapp explained.

“If you can easily retrieve something, you tend to think it’s more true than if it’s something that’s hard to think of,” he said.

The more familiar people are with information they remember, including lies, the more likely they are to believe it is true, adds communication and policy scholar and IPR associate Erik Nisbet. In research with IPR associate research professor Olga Kamenchuk, he notes that people might believe misinformation or disinformation they recall even when they do not recall whether the source was credible.

Moreover, people are more likely to believe the content they read or listen to that reflects the same emotions—anger, sadness, or anxiety—that they presently feel.

“Certain emotional states might make you more open to misinformation,” Nisbet said.

Breeding familiarity through repetition and seeing one’s emotional state mirrored in content are examples of a “mental shortcut,” according to Nisbet, and together they make people more likely to accept false information as true.

Examples of media bias charts that map newspapers, cable news, and other media sources on a political spectrum are easy to find. Can understanding bias in news sources help clarify why people fall prey to misinformation and disinformation?

Stephanie Edgerly, media scholar and IPR associate, suggests that a better place to start is with people’s individual biases, rather than those of news sources. In examining how people make sense of news sources, she points to the audience’s understanding of whether the source was news or entertainment—its genre—and its political orientation.

But how people perceive political orientation varies widely. Some see the media world as conservative vs. liberal with no middle ground. Others position news outlets in surprising places, such as the very conservative woman Edgerly interviewed who centered only Fox News between right- and left-wing media.

“We need to be really careful about how we talk about media,” Edgerly said. “This either/or way of making sense of media is too reductive—it’s simplistic.”

She is also concerned that accusations of biased reporting—or worse—can backfire and lead people to lose trust in all sources.

“We’re in a moment where we give a lot of attention to what the negative sources, low quality, disinformation-prone sources, are doing,” Edgerly noted. “I see this as creating a narrative where people think: ‘There’s a lot of bad sources out there, I don’t know how to find good sources, and, therefore, I’m just not going to trust any of it.’”

For decades, the U.S. media market was known as apolitical, objective, and neutral, Edgerly points out, but that is no longer seen as true.

If people do not trust news sources, and there’s no general acceptance of where to find unbiased information, then misinformation and disinformation will likely continue to flourish, she says. In such a news environment, even fact-checking breaks down as a tool to change beliefs.

Media, technology, and society researcher and IPR associate Pablo Boczkowski explains that trust in news institutions, as well as political and social institutions, is declining in the U.S. as the country becomes more fractured.

In his research, Boczkowski shows that people view news reporting today as biased and polarized, and they are especially distrustful of news circulated via social media. They are also more concerned about the effects untrustworthy sources could have on others than on themselves.

He points out that an increase in the supply of misinformation does not necessarily imply an increase in the take up of misinformation. 

“Most of the conversation—both academic and in news and policy circles—about issues of misinformation and disinformation focuses on the supply side: How much there is, and known distribution issues, how rapidly it propagates,” he said. He questions the implicit assumption that if there is more misinformation and disinformation, they must have proportionally more impact on the audience.

“I know that is not necessarily the case when I look at our research outside of the United States, at least,” he continued.

Social media such as Facebook and Twitter are often blamed as top disseminators of misleading and fabricated information. In public opinion surveys like this one on healthcare workers, respondents point directly at social media channels as spreaders of false information. While some IPR researchers hold social media channels responsible for misleading people, others note these outlets are easy targets of blame.

Boczkowski questions our “post-2016 fixation on the dystopic consequences of information technology,” pointing out that misinformation and disinformation are “as old as humanity itself.”

Nisbet offered, “I honestly believe that our focus on social media is a bit of a canard.”

“It’s easier to talk about regulating social media and dealing with social media as a problem than what I believe are the underlying political, economic, social, and cultural drivers of this ‘information disorder,’” he continued. “Social media might be a symptom or maybe amplifies like when you have a comorbidity—but it’s not the cause of our problems.”

IPR political scientist James Druckman, who studies the origins of partisanship and the role of persuasion in politics, sees “a mutually reinforcing relationship” between disinformation and polarization.

He describes those holding more polarized opinions as also being more susceptible to considering information as biased, and therefore, more susceptible to partisan bias.

“That information may reinforce their polarized tendencies,” he explained.

“Yet what is less appreciated but equally concerning is false polarization where people have misinformation about the other side and that misperception fuels their own polarization,” he added. “They believe the other side is much more different and threatening than they actually are, and that breeds polarization with social and political consequences.”

In Nature Human Behaviour , Druckman and his co-authors note partisan media and social media’s contribution to partisan animosity, but they highlight other social and political causes as well.

“I would be hesitant to place all the blame for political ills on misinformation,” Druckman cautions. “There are equally, if not more crucial, social and institutional factors at play, such as demographic shifts and political institutions that were set up in ways that did not anticipate some of these shifts.”

Did misinformation and disinformation play a role in the 2016 and 2020 U.S. presidential elections?

As Nisbet notes, we know a good deal about false and misleading information and why people believe it. What we do not fully understand, according to his research , is the impact it has on people’s attitudes and behavior.

It may seem that “fake news” and social media conspiracy theories grew in size and importance. However, as Druckman points out, since we could not measure misinformation very well in the past, we do not really know its full impact on opinion.

“It remains unclear just how much misinformation is out there—most systematic studies suggest less than many think—and if it has changed, given we could not measure it as easily before,” he said.

During the 2016 campaign, candidates were the focus of misinformation and disinformation, Nisbet explains, much of it on social media and mainly about Democratic candidate Hillary Clinton. Those attacks ended after the election.

The press and social media were a bit savvier about curbing the spread of misinformation during the 2020 campaign, Nisbet says. But after the election, a “deluge of misinformation” followed when Facebook eased up on its precautions.

“It was not about Biden. It was about the election results and electoral processes and the integrity of the election,” Nisbet explained. “So the timing and the nature of the misinformation/disinformation was very different in 2016 versus 2020.”

He sees the possible long-term effects of the spread of false information about election integrity as a huge concern.

“I think that misinformation/disinformation about candidates is bad,” Nisbet said. “But misinformation/disinformation that destroys trust in our political process, institutions, and the integrity of democracy itself is a thousand times worse.”

As we enter the third year of the COVID-19 pandemic, some IPR experts have turned to tracking how misinformation and disinformation affect people’s health and survival.

Much of Druckman’s recent research draws on the regular surveys collected and analyzed since April 2020 by the COVID States Project, a consortium of Northwestern, Northeastern, Harvard, and Rutgers universities that he co-leads. In July 2021, the project reported that people who relied on Facebook for news about COVID had substantially lower vaccination rates than the overall U.S. population and were more likely to believe falsehoods, such as that vaccines alter DNA or contain tracking microchips.

A November 2021 survey found that nearly three-quarters (72%) of healthcare workers believe that misinformation has negatively influenced people’s decisions to seek care for, or get vaccinated against, COVID-19.

Nisbet is also studying the effects of online COVID-19 information on health decisions in research supported by the National Science Foundation.

“One of the main effects of exposure, or at least endorsement, of COVID misinformation is reducing public trust in scientists and medical experts,” he said. “The more you believe false or misleading information about COVID the more you’re likely to be distrustful of public health or scientific experts.”

Communication studies researcher and IPR associate Ellen Wartella investigates how Twitter users promoted vaccine misinformation before the pandemic; she connects that activity with declining vaccination rates for diseases such as measles and tetanus and with growing distrust of science and public health.

She sees a similar pattern of misinformation about the COVID vaccine.

“It’s absolutely the case that social media has been the main conveyor and mechanism by which anti-vaxxers can spread their message,” she said.

Rapp contributed to the “COVID-19 Vaccine Communication Handbook & Wiki,” an international collaboration created to improve vaccine communication and fight misinformation. To combat misinformation about the COVID vaccine, he suggests trying to find common ground with people to begin to persuade them.

“It’s going to take a concerted effort among many constituents,” he said.

Perhaps the biggest question overhanging the research is, how can we combat misinformation?

Druckman notes that a host of techniques have been developed, such as literacy courses.

Fact-checking is a very limited tool, as Edgerly and Nisbet observe, because it depends on the audience trusting the source of the checking.

Nisbet suggests what he calls “prebunking,” an “inoculation” against misinformation ahead of its distribution.  For example, news organizations could have done more to publicize prior to the 2020 election that vote tallies would change overnight as mail-in ballots were added to the totals.

Rapp and Edgerly recommend scientists and journalists be more transparent about what they do.

“The general public largely doesn’t understand what journalists do, but they can recognize the power and importance of good journalism,” Edgerly said. She would like to see “a little bit of reminding the public about what journalism is supposed to do so it’s not tied into narratives about fake news and partisan bickering.”

Rapp encourages more “lateral reading” of different sources on the same subject—a technique endorsed in many classroom settings. He also suggests that academics, doctors, and politicians quit only speaking in jargon and in a top-down way about issues if we want to bridge the partisan divides exacerbated by misinformation and disinformation.

For Boczkowski, trust in institutions, including the media, is the fundamental issue. To restore trust, he says we must improve our institutions to work fairly for all groups, not just the privileged ones.

“Instead of spending so much time on [disinformation], we should spend all the energy we spend on that looking at what can we do to make our society more equitable, more just, more inclusive, to emphasize those that have been disenfranchised,” he said. “Otherwise, it’s like having a strep infection and thinking you’re going to cure it with Tylenol!”

Pablo Boczkowski is Hamad Bin Khalifa Al-Thani Professor in Communication Studies. James Druckman is Payson S. Wild Professor of Political Science and IPR associate director. Stephanie Edgerly is associate professor and director of research in the Medill School. Erik Nisbet is the Owen L. Coon Endowed Professor of Policy Analysis & Communication. David Rapp is professor of psychology. Ellen Wartella is Sheikh Hamad bin Khalifa Al-Thani Professor of Communication. All are IPR faculty members.

Photo credit: iStock

Published: January 26, 2022.




PLOS ONE

How do we raise media bias awareness effectively? Effects of visualizations to communicate bias

Timo Spinde

1 Department of Computer and Information Science, University of Konstanz, Konstanz, Germany

2 School of Electrical, Information and Media Engineering, University of Wuppertal, Wuppertal, Germany

Christin Jeggle

3 Department of Psychology, University of Konstanz, Konstanz, Germany

Magdalena Haupt

Wolfgang Gaissmaier

Helge Giese

Associated data

Data are available at https://osf.io/e95dh/ .

Media bias has a substantial impact on individual and collective perception of news. Effective communication that may counteract its potential negative effects still needs to be developed. In this article, we analyze how to facilitate the detection of media bias with visual and textual aids in the form of (a) a forewarning message, (b) text annotations, and (c) political classifiers. In an online experiment, we randomly assigned 985 participants to receive a biased liberal or conservative news article in any combination of the three aids, and assessed their subjective perception of media bias in the article, attitude change, and political ideology. Both the forewarning message and the annotations increased media bias awareness, whereas the political classification showed no effect. Incongruence between an article’s political position and individual political orientation also increased media bias awareness. Visual aids did not mitigate this effect. Likewise, attitudes remained unaltered.
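The assignment scheme the abstract describes can be sketched in code. This is a hypothetical illustration of the factorial design, not the authors' actual materials: each participant is randomly assigned an article slant (liberal or conservative) plus an on/off setting for each of the three aids, yielding 2 × 2 × 2 × 2 = 16 possible conditions. All names and labels below are invented for illustration.

```python
import itertools
import random

# Illustrative condition factors (labels are assumptions, not the study's code).
ARTICLES = ["liberal", "conservative"]
AIDS = ["forewarning", "annotations", "classifier"]

def assign_conditions(n_participants, seed=0):
    """Randomly assign each participant an article slant and an on/off
    setting for each of the three aids (2 x 2 x 2 x 2 = 16 cells)."""
    rng = random.Random(seed)
    # Enumerate all 16 cells of the factorial design.
    cells = [
        {"article": article, **{aid: on for aid, on in zip(AIDS, flags)}}
        for article in ARTICLES
        for flags in itertools.product([False, True], repeat=len(AIDS))
    ]
    # Simple random assignment of each participant to one cell.
    return [rng.choice(cells) for _ in range(n_participants)]

assignments = assign_conditions(985)
```

A real experiment would typically use blocked or balanced randomization to equalize cell sizes; simple random choice is used here only to keep the sketch short.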

Introduction

The Internet age has a significant impact on today’s news communication: It allows individuals to access news and information from an ever-increasing variety of sources, at any time, on any subject. Regardless of journalistic standards, media outlets with a wide reach have the power to affect public opinion and shape collective decision-making processes [ 1 ]. However, it is well known that the wording and selection of news in media coverage are often biased and provide limited viewpoints [ 2 ], commonly referred to as media bias . According to Domke and colleagues [ 3 ], media bias is a structural, often willful defect in news coverage that potentially influences public opinion. Labeling named entities with terms that are ambiguous in the concepts they allude to (e.g., “illegal immigrants” and “illegal aliens” [ 4 ]) or combining concepts beyond their initial contexts into figurative speech that carries a positive or negative association (e.g., “a wave of immigrants flooded the country”) can induce bias. Still, the conceptualization of media bias is complex since biased and balanced reporting cannot be distinguished incisively [ 5 ]. Many definitions exist, and media bias, in general, has been researched from various angles, such as psychology [ 6 ], computer science [ 7 ], linguistics [ 8 ], economics [ 9 ], and political science [ 10 ]. Therefore, we believe advancement in media bias communication is relevant for multiple scientific areas.
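As a toy illustration of the bias-by-word-choice idea above, one could flag headline terms drawn from a small hand-made lexicon of loaded language. The lexicon and matching rule below are invented for illustration only; real bias detectors rely on expert annotations and trained models rather than word lists.

```python
# Hypothetical mini-lexicon of loaded terms (invented for this sketch).
LOADED_TERMS = {"illegal aliens", "flooded", "deluge", "radical", "regime"}

def flag_loaded_terms(headline):
    """Return the loaded terms found in a headline, using naive
    case-insensitive substring matching."""
    text = headline.lower()
    return sorted(term for term in LOADED_TERMS if term in text)

flag_loaded_terms("A wave of immigrants flooded the country")
# → ["flooded"]
```

Substring matching is deliberately crude here; it ignores context, which is exactly why the literature moves to annotated datasets and learned classifiers.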

Previous research shows the effects of media bias on individual and public perception of news events [ 6 ]. Since the media are citizens’ primary source of political information [ 11 ], associated bias may affect the political beliefs of the audience, party preferences [ 12 ], and even alter voting behavior [ 13 ]. Moreover, exposure to biased information can lead to negative societal outcomes, including group polarization, intolerance of dissent, and political segregation [ 14 ]. It can also affect collective decision-making [ 15 ]. The implications of selective exposure theory intensify the severity of biased news coverage: Researchers observed long ago that people prefer to consume information that fits their worldview and avoid information that challenges these beliefs [ 16 ]. By selecting only confirmatory information, one’s own opinion is reaffirmed, and there is no need to re-evaluate existing stances [ 17 ]. In this way, the unpleasant feeling of cognitive dissonance is avoided [ 18 ]. Isolation in one’s own filter bubble or echo chamber confirms internal biases and might lead to a general decrease in the diversity of news consumption [ 14 ]. This decrease is further exacerbated by recent technological developments like personalized overview features of, e.g., news aggregators [ 19 ]. How partisans select and perceive political news is thus an important question in political communication research [ 20 ]. This study therefore tests ways to increase awareness of media bias through transparent bias communication, which might mitigate both its negative impact and partisan evaluations of the media.

Media bias communication

Media bias occurs in various forms, for example, in whether or how a topic is reported (D’Alessio & Allen, 2000), and may not always be easy to identify. As a result, news consumers often engage with distorted media without being aware of it and exhibit a lack of media bias awareness [ 21 ]. To address this issue, revealing the existence and nature of media bias can be an essential route to attain media bias awareness and promote informed and reflective news consumption [ 19 ]. For instance, visualizations may help to raise media bias awareness and lead to a more balanced news intake by warning people of potential biases [ 22 ], highlighting individual instances of bias [ 19 ], or facilitating the comparison of contents [ 2 , 23 ].

Although knowledge of how to communicate media bias effectively is crucial, visualizations and enhanced perception of media bias have played only a minor role in existing research, and several approaches have not yet been investigated. Therefore, this paper tests how effectively different strategies promote media bias awareness and may thereby also help uncover common barriers to informed media consumption. We selected three major methods from related work [ 19 , 22 ] on the topic to investigate in one combined study: forewarning messages, text annotations, and political classifications. Theoretical foundations for bias messages and visualizations remain scarce, and suitable strategies in this domain have not been extensively tested in either visualization theory or bias theory.

Forewarning message

According to socio-psychological inoculation theory [ 24 ], it is possible to pre-emptively confer psychological resistance against persuasion attempts by exposing people to a warning message. The process is similar to immunizing against a virus by administering a weakened dose: a so-called inoculation message is expected to protect people from a persuasive attack by exposing them to weakened forms of the persuasion attempt. Due to the perceived threat of the forewarning inoculation message, people tend to strengthen their own position and are thus more resistant to the influence of imminent persuasion attacks [ 25 ]. Therefore, one strategy to help people detect bias is to prepare them, ahead of media consumption, for the possibility that media bias may occur, thereby "forewarning" them against the influence of biased language. Such warnings are well established in persuasion research and have been shown to be effective in various applied contexts [ 26 ]. Furthermore, they seem to help not only to protect attitudes against influence but also to assess the quality of a piece of information [ 27 – 29 ] and communicate the information accordingly [ 30 ]. For biased language, this may work specifically by focusing the reader’s attention on a universal motive to evaluate the accuracy of information, while relying on the individual’s capacity to detect the bias when encountered [ 30 ; Bolsen & Druckman, 2015].

Annotations

Rather than informing people in advance about the occurrence of bias, a further approach is to inform them during reading, thereby increasing their awareness of biased language and providing direct help to detect it in an article. Recently, there has been substantial research on media bias in information science, but it is mainly concerned with identification and detection [ 31 – 34 ]. Whereas some research on the effects of in-article bias visualizations is promising (here: flagging fake news as debunked [ 35 ]), other studies did not find such effects, potentially due to technical issues in accurately annotating single articles [ 19 ]. Still, annotations offer a good prospect for enabling higher media bias awareness and more balanced news consumption. We show our annotation visualization in Fig 1 .

[Fig 1] Example of the bias annotation "subjective term". The boxed annotation appeared when moving the cursor/finger over the highlighted text section.

Political classification

Another attempt to raise media bias awareness is a political classification of biased material after readers have engaged with it. An and colleagues [ 36 ] proposed an ideological left–right map on which media sources are politically classified. The authors suggest that showing a source’s political leaning helps readers question their attitudes and even encourages browsing for news articles with multiple viewpoints. Likewise, several other studies indicate that feedback on the political orientation of an article or a source may lead to more media bias awareness and more balanced news consumption [ 19 ]. Additionally, exposing users to multiple diverse viewpoints on controversial topics encourages the development of more balanced viewpoints [ 23 ]. A study by Munson and colleagues (2013) further suggests that a feedback element indicating whether the user’s browsing history reflects biased news consumption modestly shifts users toward more balanced news consumption. Based on these findings, we test whether the mere representation of a source’s leaning helps raise bias awareness among users, provided the article is classified as politically skewed. We show our political classification bar in Fig 2 .

[Fig 2] Example of an article classification as being politically left-oriented.

Partisan media bias awareness

Attempts to raise media bias awareness may be further complicated by the fact that the detection of media bias and the evaluation of news appear to depend on the political ideology of the beholder [ 37 – 41 ]. This partisan effect is not limited to neutral reporting: individuals are assumed to perceive biased content that corresponds to their opinion as less biased [ 38 ] and biased content that contradicts their viewpoints as more biased [ 41 ].

These findings suggest that incongruence between the reader’s position and the news article’s position may increase media bias perception of the article, whereas congruence may decrease it. Thus, partisan media consumers may engage in motivated reasoning to overcome cognitive dissonance experienced when encountering media bias in any news article generally in line with their viewpoints [ 42 ]. According to Festinger [ 18 ], cognitive dissonance is generated when a person has two cognitive elements that are inconsistent with each other. This inconsistency is assumed to produce a feeling of mental discomfort. People who experience dissonance are motivated to reduce the inconsistency because they want to avoid or reduce this negative emotion.

Furthermore, Festinger notes that exposure to messages inconsistent with one’s beliefs can create cognitive dissonance, leading people to avoid or reduce the resulting negative emotions. In line with this notion, raising media bias awareness could increase experienced cognitive dissonance and thereby lead to even more partisan ratings of bias. Another explanation of partisan bias ratings is that norms about what content is considered appropriate in media coverage vary with one’s political identity [ 43 ]. Other researchers focus on inattention to the quality of news and the motive to support only truthful news [ 44 ]. Both of these approaches lead us to expect the opposite result for the partisanship of media bias ratings under the increased media bias awareness created by our proposed visualizations: partisanship of ratings should decrease rather than increase as people are reminded of more general norms and accuracy motives [ 27 ].

Study aims and hypotheses

This project aims to contribute to a deeper understanding of effective media bias communication. To this end, we create a set of bias visualizations that reveal bias in different ways and test their effectiveness at raising awareness in an online experiment. Following the literature elaborated above for each technique, we would expect enhanced media bias awareness from all visualizations:

  • H1a: A forewarning message prior to news articles increases media bias awareness in presented articles.
  • H1b: Annotations in news articles increase media bias awareness in presented articles.
  • H1c: A political classification of news articles increases media bias awareness in presented articles.

Another goal of this study is to better understand the role of the reader’s political orientation in media bias awareness. In line with findings on partisan media bias perception (hostile media effect; Vallone et al., 1985), we adopt the following hypothesis:

  • H2: Presented material will be rated as less biased if it is consistent with the reader’s political orientation.

Furthermore, following the attentional and normative explanations of partisanship in ratings rather than cognitive dissonance theory, we assume the following effect:

  • H3: Bias visualizations will mitigate the effects of partisan bias ratings.

Participants

A total of 1002 participants from the US were recruited online via Prolific in August 2020. A final sample of N = 985 was included in the analysis (51% female; age: M = 32.67, SD = 11.95). The excluded participants either did not fully complete the study or indicated in a seriousness check that their data should not be trusted. The target sample size was determined using a power analysis such that small effects ( f = 0.10) could be detected with a power of .80 [ 45 ]. The online study was scheduled to last approximately 10 minutes, for which participants received £1.10 as payment.
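The reported a priori power analysis ( f = 0.10, power = .80) can be approximated with the standard library. The sketch below is a rough two-group normal approximation assuming α = .05 (Cohen's d = 2f for two groups); it is not the authors' actual computation, which accounted for the full factorial design:

```python
from math import ceil
from statistics import NormalDist

def required_n_two_groups(f: float, power: float, alpha: float = 0.05) -> int:
    """Approximate total N to detect Cohen's f in a two-group comparison.

    Uses the normal approximation; for two groups, Cohen's d = 2f.
    """
    d = 2 * f
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)           # desired power
    n_per_group = 2 * ((z_alpha + z_beta) / d) ** 2
    return 2 * ceil(n_per_group)

# Small effect f = 0.10 at 80% power
print(required_n_two_groups(0.10, 0.80))  # 786
```

The study's target of roughly 1000 participants comfortably exceeds this rough two-group lower bound.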

Design and procedure

The experiment was conducted online in Qualtrics ( https://www.qualtrics.com ). It operated with fully informed consent, adheres to the Declaration of Helsinki, and was conducted in compliance with relevant laws and institutional guidelines, including those of the University of Konstanz ethics board. All participants confirmed their consent in written form and were informed in detail about the study, its aim, data processing, anonymization, and other background information.

After collecting informed consent and demographic information, we conducted an initial attitude assessment that asked participants for their general perception of the presented topic on three dimensions as well as its personal relevance. Next, participants read one randomly selected biased news article (either liberal or conservative), randomly supplemented by any combination of the visual aids (forewarning message, annotations, political classification). Thus, the study had a 2 × 2 × 2 forewarning message (yes/no) × annotations (yes/no) × political classification (yes/no) between-subjects design. The article also varied between participants in both article position (liberal/conservative) and article topic (gun law/abortion) to assess the partialness and generalizability of the results. Finally, attitudes towards the topic were reassessed, followed by a seriousness check.
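Crossing the three visual-aid factors with article position and topic yields 32 experimental cells, from which each participant is drawn at random. A minimal illustrative sketch of that assignment (the study used Qualtrics' own randomizer; names here are hypothetical):

```python
import itertools
import random

# 2x2x2 visual aids x 2 article positions x 2 topics = 32 cells
FACTORS = {
    "forewarning": (True, False),
    "annotations": (True, False),
    "classification": (True, False),
    "article_position": ("liberal", "conservative"),
    "article_topic": ("gun law", "abortion"),
}

# Enumerate every cell of the fully crossed design
CELLS = [dict(zip(FACTORS, combo))
         for combo in itertools.product(*FACTORS.values())]

def assign_condition(rng: random.Random) -> dict:
    """Randomly assign a participant to one of the 32 cells."""
    return rng.choice(CELLS)

print(len(CELLS))  # 32
```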

Study material

Visual aids.

Forewarning message. The forewarning message consisted of a short warning and was displayed directly before the news article. It read: "Beware of biased news coverage. Read consciously. Don’t be fooled. The term ‘media bias’ refers to, in part, non-neutral tonality and word choice in the news. Media bias can consciously and unconsciously result in a narrow and one-sided point of view. How a topic or issue is covered in the news can decisively impact public debates and affect our collective decision making." In addition, an example of one-sided language was shown, and readers were encouraged to consume news consciously.

Annotations. Annotations were directly integrated into the news texts. Biased words or sentences were highlighted [ 46 ], and by hovering over the marked sections, a short explanation of the respective type of bias appeared. For example, when moving the cursor over a very one-sided term, the following annotation would be displayed: "Subjective term: Language that is skewed by feeling, opinion or taste." Annotations were based on the ratings of six members of our research group; phrases had to be nominated by at least three raters. The final annotations can be found in the supplementary preregistration repository accompanying this article at https://osf.io/e95dh/?view_only=d2fb5dc2d64741e393b30b9ee6cc7dc1 (non-anonymous link is made accessible in case of acceptance). We followed the guidelines applied in existing research to teach annotators about bias and reach higher-quality annotations [ 47 ]. In future work, we will further increase the number of raters, as we address in the discussion.
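The nomination rule described above (a phrase is annotated only if at least three of the six raters flag it) amounts to a simple tally over raters' selections. A minimal sketch with hypothetical phrases, not the authors' actual code:

```python
from collections import Counter

def select_annotations(nominations, min_raters=3):
    """Keep phrases nominated by at least `min_raters` raters.

    `nominations` is a list of sets, one set of flagged phrases per rater.
    """
    counts = Counter(phrase for rater in nominations for phrase in rater)
    return {phrase for phrase, n in counts.items() if n >= min_raters}

# Hypothetical flags from six raters
raters = [
    {"disastrous", "radical"},
    {"disastrous", "so-called"},
    {"disastrous", "radical"},
    {"radical"},
    set(),
    {"so-called"},
]
print(select_annotations(raters))  # only phrases flagged by >= 3 raters
```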

Political classification. A political classification in the form of a spectrum from left to right indicated the source’s political ideology. It was displayed immediately after the presented article and was based on the rating from the website AllSides.

We used four biased news articles that varied in topic and political position. Each participant was assigned one article. The two topics covered were gun law and the debate on abortion, each with either a liberal or a conservative article position. The topics were selected because we considered them controversial issues in the United States that most people are presumably familiar with. To ensure that the articles were biased, they were taken from sources deemed extreme according to the AllSides classification. Conservative texts were taken from Breitbart.com ; liberal articles were from Huffpost.com and Washingtonpost.com . We also conducted a manipulation check to determine whether participants perceived the political article positions in line with our assumptions: immediately after reading the article, participants were asked to classify its political stance on a visual analogue scale (–5 = very liberal to 5 = very conservative). To ensure comparability, articles were shortened to approximately the same length, and the respective sources were not indicated. All article texts used are listed together with their annotations in the supplementary preregistration repository accompanying this article (see the link above).

Media bias awareness

Five semantic differentials assessed media bias awareness in terms of fairness, partialness, acceptableness, trustworthiness, and persuasiveness [ 48 – 50 ] on visual analogue scales ("I think the presented news article was…"). Media bias awareness was computed by averaging the five items and recoding the result to range from –5 = low bias awareness to 5 = high bias awareness (α = .88).
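The internal consistency reported for the five-item scale (α = .88) is Cronbach's alpha, which can be computed directly from item scores. A stdlib sketch using toy data (not the study's data):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item score lists (one list per item).

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(items)
    item_vars = sum(pvariance(scores) for scores in items)
    totals = [sum(resp) for resp in zip(*items)]  # per-respondent sum score
    return k / (k - 1) * (1 - item_vars / pvariance(totals))

# Toy example: three perfectly consistent items -> alpha of 1.0
items = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
print(cronbach_alpha(items))
```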

Political orientation

The variable political orientation was measured on a visual analogue scale ranging from –5 = very conservative to 5 = very liberal, introduced with the question "Do you consider yourself to be liberal, conservative, or somewhere in between?" adopted from Spinde and colleagues [ 19 , 51 ]. Likewise, we assessed the perceived stance of the article on the same scale, introduced with the statement "I think the presented news article was…".

Attitudes towards article topic

Attitudes were assessed before and after the article presentation by a three-item semantic differential scale (wrong–right, unacceptable–acceptable, bad–good) evaluating the two topics ("Generally, laws restricting abortion/the use of guns are…"; α = .99). The three items were averaged per topic to yield a score ranging from –5 = very conservative attitude to 5 = very liberal attitude. In addition, we assessed topic involvement with one item before the article presentation ("To me personally, laws restricting the use of guns/abortions are… irrelevant–relevant") on a scale from –5 to 5.

Statistical analysis

To test the effects of the visual aids on media bias perception, we used ANOVAs with effect-coded factors in a 2 (forewarning message: yes/no) × 2 (annotations: yes/no) × 2 (political classification: yes/no) × 2 (article position: liberal/conservative) × 2 (article topic: gun law/abortion) between-subjects design. For analyses testing political ideology effects, this was generalized to a GLM with standardized political orientation as an additional interacting variable, followed by a simple effects analysis. The same model was applied to the second attitude rating, with the first attitude rating and topic involvement as covariates for attitude change. This project and the analyses were preregistered at https://osf.io/e95dh/?view_only=d2fb5dc2d64741e393b30b9ee6cc7dc1 (non-anonymous link is made accessible in case of acceptance). All study materials, code, and data are available there.
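Effect coding, as used in the ANOVAs above, maps each two-level factor to ±1 so that main effects are estimated as deviations from the grand mean, and interaction columns are elementwise products of the main-effect columns. A minimal sketch of the coding itself (not the full GLM):

```python
def effect_code(levels, positive):
    """Map a two-level factor to +1/-1 (effect coding)."""
    return [1 if lv == positive else -1 for lv in levels]

# Four example participants crossing two of the factors
forewarning = effect_code(["yes", "no", "yes", "no"], positive="yes")
annotations = effect_code(["yes", "yes", "no", "no"], positive="yes")

# Interaction column = elementwise product of the main-effect columns
interaction = [f * a for f, a in zip(forewarning, annotations)]
print(interaction)  # [1, -1, -1, 1]
```

With ±1 coding, each column sums to zero over a balanced design, which is what makes the coefficients interpretable as deviations from the grand mean.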

Manipulation check and other effects on perceived political stance of the article

Overall, the positions of the political articles were perceived as designed (article position: F(1, 953) = 528.67, p < .001, ηp² = .357): articles assigned a liberal position were perceived as more liberal (M = 1.60, SD = 2.70), whereas conservative articles were rated as more conservative (M = –1.98, SD = 2.26). This difference between the conservative and the liberal article was more pronounced when a forewarning message (F(1, 953) = 7.33, p = .007, ηp² = .008), annotations (F(1, 953) = 3.96, p = .047, ηp² = .004), or the political classification was present (F(1, 953) = 9.12, p = .003, ηp² = .009; see Fig 3). The combination of forewarning and classification further increased the difference (F(1, 953) = 5.28, p = .022, ηp² = .006).
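The reported effect sizes can be reconstructed from the F statistics: partial eta squared is ηp² = F·df1 / (F·df1 + df2). Plugging in the article-position effect above reproduces the reported ηp² ≈ .357:

```python
def partial_eta_squared(f_stat: float, df1: int, df2: int) -> float:
    """Partial eta squared from an F statistic and its degrees of freedom."""
    return f_stat * df1 / (f_stat * df1 + df2)

# Manipulation check reported above: F(1, 953) = 528.67
print(round(partial_eta_squared(528.67, 1, 953), 3))  # 0.357
```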

[Fig 3] Across all conditions, liberal articles were perceived to be more liberal and conservative articles more conservative. The interventions increased the differences between the two ratings. Dots represent means, and lines are standard deviations.

Effects of visual aids on media bias perceptions

Testing the effects of the visual aids on media bias perceptions in general, we found that both the forewarning message (F(1, 953) = 8.29, p = .004, ηp² = .009) and the annotations (F(1, 953) = 24.00, p < .001, ηp² = .025) increased perceived bias, as shown in Fig 4. However, we found no effect of the political classification (F(1, 953) = 2.56, p = .110, ηp² = .003) and no systematic higher-order interaction involving any of the manipulations (p ≥ .085, ηp² ≤ .003). Moreover, there were differences in media bias perceptions of the specific articles (topic × article position: F(1, 953) = 24.44, p < .001, ηp² = .025). The two main effects were by and large robust when tested per item of the media bias perception scale (forewarning had no significant effect on partialness and persuasiveness) or in a MANOVA (forewarning: F(5, 949) = 5.22, p < .001, ηp² = .027; annotation: F(5, 949) = 6.25, p < .001, ηp² = .032).

[Fig 4] The forewarning message, as well as the annotations, increased media bias awareness. Dots represent means, and lines are standard deviations.

Partisan media bias ratings

When considering self-indicated political orientation and its fit with the article position, we found that media bias was perceived less in articles consistent with the reader’s political orientation (F(1, 921) = 113.37, p < .001, ηp² = .110): liberal readers rated conservative articles as more biased than conservative readers did (β = 0.32, p < .001, 95% CI [0.25, 0.38]). Conversely, liberal articles were rated as less biased by liberals (β = –0.20, p < .001, 95% CI [–0.27, –0.13]), indicating partisan bias ratings on both sides of the political aisle, as shown in Fig 5.

[Fig 5] Bias awareness increases when the article is not aligned with the reader’s political position. Shades show 95% confidence intervals of the regression estimation.

This partisan rating of articles was unaffected by the forewarning (F(1, 921) = 1.52, p = .218, ηp² = .002), the annotations (F(1, 921) = 0.26, p = .612, ηp² < .001), and the political classification (F(1, 921) = 2.72, p = .100, ηp² = .003). Yet, with increasing liberalness of the reader, the combination of forewarning and annotation was slightly less effective for the detection of bias (F(1, 921) = 4.19, p = .041, ηp² = .005). Furthermore, there were some topic-related differences irrelevant to the current hypotheses: higher bias was perceived for the gun law articles (topic: F(1, 921) = 11.32, p < .001, ηp² = .012), and specifically for the liberal one (topic × article position: F(1, 921) = 23.86, p < .001, ηp² = .025), with an uninterpretable minor higher-order interaction (forewarning × annotation × classification × political orientation × topic: F(1, 921) = 4.10, p = .043, ηp² = .004).

Effects on attitudes

By and large, attitudes on the topics were not affected by the experiment: while attitudes after reading the article were in line with prior attitudes (F(1, 919) = 2415.42, p < .001, ηp² = .724) and individual political orientation (F(1, 919) = 34.54, p < .001, ηp² = .036), neither the article position (F(1, 919) = 2.63, p = .105, ηp² = .003) nor any of the visual aids had a general impact (p ≥ .084, ηp² ≤ .003). Likewise, none of the aids interacted with the factor article position (p ≥ .298, ηp² ≤ .001). There were only some additional minor topic-specific significant effects of the annotation combined with the forewarning (F(1, 919) = 4.77, p = .029, ηp² = .005) and an increased liberalness of attitude with higher topic involvement (F(1, 919) = 4.31, p = .038, ηp² = .005), which we disclose but deem irrelevant to our hypotheses and research questions.

In this study, we tested different techniques to communicate media bias. Our experiment revealed that presenting a forewarning message and text annotations enhanced awareness of biased reporting, while a political classification did not. All three methods (forewarning, annotation, political classification) impacted the political ideology rating of the presented article. Furthermore, we found evidence for partisan bias ratings: Participants rated articles that agreed with their general orientation to be less biased than articles from the other side of the political spectrum. The positive effect of the forewarning message on media bias ratings, albeit small, is in line with a few other findings of successful appeals to and reminders of accuracy motives [ 30 ]. In addition, it accords with the notion that reflecting on media bias involves some efforts [ 44 , 52 ], so motivating people to engage in this process can help detect bias.

Regarding the effects of in-text annotations, our finding differs from a previous study of similar design [ 19 ], which did not identify the effect, possibly due to a lack of power and less optimal annotations. While news consumers may generally identify outright false or fake news [ 53 ], detecting subtle biases can profit from such aids. This indicates that bias detection is far from ideal, particularly in more ambiguous cases. As the effects of in-text annotations and the forewarning message were independent of each other, participants apparently do not profit from a combination of the aids.

The political classification, on the other hand, only improved the detection of the political alignment of the text (which was also achieved by the two other methods) but did not help in detecting biased language. Consequently, the detection of biased language and of media bias itself does not appear to be directly related to an article’s political affiliation.

Our study also replicates findings that the detection of media bias and fake news is affected by individual convictions [ 30 , 40 , 42 ]: we found that participants could detect media bias more readily when there was an incongruence between the participant’s and the article’s political ideology. Such a connection may be particularly true for detecting more subtle media biases and holding an article in high regard, compared to successfully identifying outright fake news, for which a reversed effect has been found in some instances (Pennycook & Rand, 2019).

In addition, the interventions were ineffective at lowering such partisan effects. Similarly, attitudes remained relatively stable and were not affected by any of the visual aids. Making biased language more visible and reminding people of potential biases apparently could not help them overcome their ideology when rating the acceptance of an article for which there is no clear indication that the presented information is fake rather than merely biased. Likewise, the forewarning message successfully altered the motivation to look for biased language but did not decrease the effects of political identity on the rating: while able to detect the political affiliation of an article, participants seemed incapable of separating the stance of the article from its biased use of language, even when prompted to do so. In the same vein, effects were not more pronounced when the political classification was additionally visualized, potentially pointing to the notion that the stance is detected even without help (after all, while the manipulations increased the distinction between liberal and conservative articles, the article’s position was reliably identified even without any supporting material) and that partisan ratings are not a deliberate derogatory act. Furthermore, the problem of partisan bias ratings also did not increase with the increased media bias awareness produced by the manipulations, as might have been expected from cognitive dissonance theory.

For future work, we will improve the representativeness of the surveyed sample, which limits far-reaching generalizations at this point. Additionally, we will increase generalizability by employing articles that are politically neutral or exhibit comparatively low bias. Both forewarning and annotations increased bias ratings in this study, but it is unclear whether they would also aid in identifying low-bias articles, leading to lower ratings. Improving the quality of our annotations by including more annotators is an additional step towards strengthening potential findings. We will also investigate how combinations of the visualizations and strategies work together and conduct expert interviews to determine which applications would be of interest in an applied scenario. Still, the current study shows that two of our interventions raised attention to biased language in the media, giving a first insight into the as yet sparsely tested field of presenting media bias to news consumers.

Furthermore, there is a great challenge in translating these experimental interventions into applications used by news consumers in the field. While forewarning messages could be implemented quite simply in the context of other media, for instance as a disclaimer (see [ 30 ]), we hope that automated classifiers at the sentence level will prove to be an effective tool for creating instant annotation aids, for example as browser add-ons. Even though recent studies show promising accuracy improvements for such classifiers [ 31 , 32 ], we note that much research still needs to be devoted to finding stable and reliable markers of biased language. Future work also has great potential in considering these strategies as teaching tools to train users to identify bias without visual aids. This could offer a framework for a large-scale study in which additional variables measuring previous news consumption habits could be employed.

In the context of our digitalized world, where news and information of differing quality are available everywhere, our results provide important insights for media bias research. In the present study, we showed that forewarning messages and annotations increased media bias awareness among readers of selected news articles. We could also replicate the well-known hostile media bias, whereby people are more aware of bias in articles from the opposing side of the political spectrum. However, our experiment revealed that the visualizations could not reduce this effect; partisan ratings remained unaffected. In sum, digital tools uncovering and visualizing media bias may help mitigate the negative effects of media bias in the future.

Funding Statement

This work was supported by the German Research Foundation (DFG, https://www.dfg.de/ ) under Grant 441541975 and by the DFG Centre of Excellence 2117 "Centre for the Advanced Study of Collective Behaviour" (ID: 422037984). It was also supported by the Hanns-Seidel Foundation ( https://www.hss.de/ ) and the German Academic Exchange Service (DAAD, https://www.daad.de/de/ ). None of the funders played any role in the study design or any publication-related decisions.

Data Availability

All study materials, code, and data are available in the preregistration repository linked above.


Misinformation, Disinformation, and Bias


Bias in News Media

Five types and five forms of media bias, where to find reliable news sources.


News organizations and professional journalists operate within a code of ethics to try to ensure their reporting is as fair and reliable as possible. This entails fact-checking, vetting sources, and doing their best to present the facts. However, a journalist or publication can do all these things and still show a biased perspective in which stories they choose to cover and how they choose to cover them. In fact, many people will choose one news source over another because of that bias!

Rather than avoiding biased news media altogether or simply taking that bias at face value, ask some questions about what you are reading and who wrote it: 

  • Who published the article? Does the organization have a reputation for having a particular slant or bias in their reporting?
  • Who wrote it? What are the author's credentials and expertise on the subject?
  • Have they written about the topic (or similar topics) before?
  • Do they use evidence to support their claims and arguments?
  • What sources and evidence are they using?

These questions can be answered by searching online beyond the article to learn more about the author and the news organization, and by corroborating the article's claims with articles from other news sources on the same topic.

Understanding Bias from the News Literacy Project identifies five types and five forms of bias found in news media (also available as a PDF).

Research about Media Bias

  • Media Bias / Fact Check Over 900 media sources organized into different bias classes
  • All Sides Media Bias Ratings Ratings of the bias of online media outlets, based on multi-partisan, scientific analysis.
  • How to Spot 16 Types of Media Bias An online guide (also available as a PDF) on the 16 types of media bias, created by AllSides, a multipartisan group that rates the bias of online media sources.
  • Interactive Media Bias Chart This interactive chart allows you to look at media sources mapped across two dimensions--reliability and bias.
  • News and Current Events If you are struggling to find reliable news sources (or wish to find more), the EMU Library has a list already compiled for you! Sources range from newspaper websites to web broadcasts for both major and Michigan-centered news outlets. It also provides resources for exploring particular viewpoints if you are interested in investigating various sides of a particular issue or event.


  • Last Updated: Apr 25, 2024 11:53 AM
  • URL: https://guides.emich.edu/misinformation



AllSides Media Bias Chart

The AllSides Media Bias Chart™ helps you to easily identify different perspectives and political leanings in the news so you can get the full picture and think for yourself.

Knowing the political bias of media outlets allows you to consume a balanced news diet and avoid manipulation, misinformation, and fake news. Everyone is biased, but hidden media bias misleads and divides us. The AllSides Media Bias Chart™ is based on our full and growing list of over 1,400 media bias ratings . These ratings inform our balanced newsfeed .

The AllSides Media Bias Chart™ is more comprehensive in its methodology than any other media bias chart on the Web. While other media bias charts show you the subjective opinion of just one or a few people, our ratings are based on multipartisan, scientific analysis, including expert panels and surveys of thousands of everyday Americans.


This chart does not rate accuracy or credibility. A publication can be accurate, yet biased. Learn why AllSides doesn't rate accuracy.

Unless otherwise noted, these bias ratings are based on online written content , not TV, radio, or broadcast content.

Here's how the AllSides Media Bias Chart™ differs from other media bias charts:

  • Data is gathered from many people across the political spectrum — not just one biased individual or a very small, elite group. We have a patent on rating bias and use multiple methodologies, not an algorithm. Our methods are: Blind Bias Surveys of Americans, Editorial Reviews by a multipartisan team of panelists who look for common types of media bias, independent reviews, and third-party data.
  • Our research spans years — we started rating media bias back in 2012.
  • We give separate bias ratings for the news and opinion sections for some media outlets, giving you a more precise understanding.
  • Transparent methodology: we tell you how we arrived at the bias rating for each outlet. Search for any media outlet here.
  • We consider and review data and research conducted by third parties , like universities and other groups.
  • Your opinion matters: we take into account hundreds of thousands of community votes on our ratings. Votes don't determine our ratings, but are valuable feedback that may prompt more research. We know that a mixed group of experts and non-experts will provide a more accurate result, so we solicit and consider opinions of average people.
  • We don't rate accuracy — just bias. Our ratings help readers to understand that certain facts may be missing if they read only outlets from one side of the political spectrum.

Americans are more polarized than ever — if you’re like us, you see it in the news and on your social media feeds every day. Bias is natural, but hidden bias and fake news mislead and divide us. That’s why AllSides has rated the media bias of over 1,400 sources and put them into a media bias chart. The AllSides Media Bias Chart™ shows the political bias of some of the most-read sources in America.

The outlets featured on the AllSides Media Bias Chart™ have varying degrees of influence. Read about whether conservative or liberal media outlets are more widely read .

Frequently Asked Questions about the AllSides Media Bias Chart

  • Why does the bias of a media outlet matter?
  • How does AllSides calculate media bias?
  • How did AllSides decide which media outlets to include on the chart?
  • What do the bias ratings mean?
  • Does a Center rating mean neutral, unbiased, and better?
  • Why are some media outlets on the chart twice?
  • Does AllSides rate which outlets are most factual or accurate?
  • Where can I see past versions of the chart?
  • Where can I learn more?
  • I disagree with your media bias ratings. Where can I give you feedback?

News media, social media, and search engines have become so biased, politicized, and personalized that we are often stuck inside filter bubbles , where we’re only exposed to information and ideas we already agree with. When bias is hidden and we see only facts, information, and opinions that confirm our existing beliefs , a number of negative things happen: 1) we become extremely polarized as a nation as we misunderstand or hate "the other side," believing they are extreme, hateful, or evil; 2) we become more likely to be manipulated into thinking, voting, or behaving a certain way; 3) we become limited in our ability to understand others, problem-solve, and compromise; 4) we become unable to find the truth.

It feels good to hear from people who think just like us, and media outlets have an incentive to be partisan — it helps them to earn ad revenue, especially if they use sensationalism and clickbait . But when we stay inside a filter bubble, we may miss important ideas and perspectives. The mission of AllSides is to free people from filter bubbles so they can better understand the world — and each other. Making media bias transparent helps us to easily identify different perspectives and expose ourselves to a variety of information so we can avoid being manipulated by partisan bias and fake news. This improves our country long-term, helping us to understand one another, solve problems, know the truth, and make better decisions.

Media bias has contributed to Americans becoming more politically polarized .

At AllSides, we reduce the one-sided information flow by providing balanced news  from both liberal and conservative news sources, and over 1,400 media bias ratings . Our tools help you to better understand diverse perspectives and reduce harmful, hateful polarization in America. By making media bias transparent and consuming a balanced news diet, we can arm ourselves with a broader view — and find the truth for ourselves.


Our media bias ratings are based on multi-partisan, scientific analysis. Our methodologies include Blind Bias Surveys of Americans, Editorial Reviews by a panel of experts trained to spot bias, independent reviews, third-party data, and community feedback. Visit our Media Bias Rating Methodology page to learn more.

We consider multiple factors including how much traffic the source has according to Pew Research Center and Similarweb , and how many searches for the bias of that outlet land on AllSides.

We also include outlets that represent outlier perspectives. For example, Jacobin magazine is included because it represents socialist thought, while Reason magazine is included because it represents libertarian thought.

These are subjective judgments made by AllSides and people across the country. Here is our rough approximation of what the media bias ratings mean:

Left - Lean Left - Center - Lean Right - Right

Center doesn't mean better! A Center media bias rating does not mean the source is neutral, unbiased, or reasonable, just as Left and Right do not necessarily mean the source is extreme, wrong, or unreasonable. A Center bias rating simply means the source or writer rated does not predictably publish content that tilts toward either end of the political spectrum — conservative or liberal. A media outlet with a Center rating may omit important perspectives, or run individual articles that display bias, while not displaying a predictable bias. Center outlets can be difficult to determine, and there is rarely a perfect Center outlet: some of our outlets rated Center can be better thought of as Center-Left or Center-Right, something we clarify on individual source pages.

While it may be easy to think that we should only consume media from Center outlets, AllSides believes reading in the Center is not the answer. By reading only Center outlets, we may still encounter bias and omission of important issues and perspectives. For this reason, it is important to consume a balanced news diet across the political spectrum, and to read horizontally across the bias chart. Learn more about what an AllSides Media Bias Rating™ of Center means here.

We sometimes provide separate media bias ratings for a source’s news content and its opinion content. This is because some outlets, such as the Wall Street Journal and The New York Times , have a notable difference in bias between their news and opinion sections.

For example, on this chart you will see The New York Times Opinion is rated as a Left media bias, while the New York Times news is rated Lean Left .

When rating an opinion page, AllSides takes into account the outlet's editorial board and its individual opinion page writers. The editorial board’s bias is weighted, and affects the final bias rating by about 60%.

For example, the New York Times has a range of individual Opinion page writers, who have a range of biases. We rate the bias of commentators individually as much as possible. Yet The New York Times Editorial Board has a clear Left media bias. We take into account both the overall biases of the individual writers and the Editorial Board to arrive at a final bias rating of Left for the New York Times opinion section .
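The blend described above can be sketched as a simple weighted average. This is a hypothetical illustration, not AllSides' actual formula: the numeric bias scale and the exact 60/40 split are assumptions based only on the "about 60%" figure mentioned here.

```python
# Hypothetical sketch of a weighted opinion-section bias score.
# Assumed scale: Left = -2, Lean Left = -1, Center = 0, Lean Right = +1, Right = +2.
BOARD_WEIGHT = 0.6  # the editorial board's bias is weighted at about 60%

def opinion_section_bias(board_bias: float, writer_biases: list) -> float:
    """Blend the editorial board's bias with the average of individual writers' biases."""
    writers_avg = sum(writer_biases) / len(writer_biases)
    return BOARD_WEIGHT * board_bias + (1 - BOARD_WEIGHT) * writers_avg

# Example: a board rated Left (-2) and writers spanning a range of biases
score = opinion_section_bias(-2.0, [-2, -1, -1, 0, 1])
```

Under these assumed inputs, the board's Left rating dominates and the section lands closer to Left than the writers' average alone would suggest.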

See how we provide individual bias ratings for New York Times opinion page writers here .

AllSides does not rate outlets based on accuracy or factual claims — this is a bias chart, not a credibility chart. It speaks to perspective only.

We don't rate accuracy because we don't assume we know the truth on all things. The left and right often strongly disagree on what is truth and what is fiction. Read more about why AllSides doesn't rate accuracy.

We disagree with the idea that the more left or right an outlet is, the less credibility it has. There’s nothing wrong with having bias or an opinion, but hidden bias misleads and divides us. Just because an outlet is credible doesn’t mean it isn’t biased ; likewise, just because an outlet is biased doesn’t mean it isn’t credible . 

Learn more about past versions of the chart on our blog:

  • Version 9.2
  • Version 9.1
  • Version 7.2
  • Version 7.1
  • Version 5.1
  • Version 1.1

Visit the AllSides Media Bias Ratings™ page and search for any outlet for a full summation of our research and how we arrived at the rating.

Visit our company FAQ for more information about AllSides.

You can vote on whether or not you agree with media bias ratings , contact us , or sign up to participate in our next Blind Bias Survey .

Media Bias/Fact Check


Media Research Center (MRC) – Bias and Credibility

These media sources are moderately to strongly biased toward conservative causes through story selection and/or political affiliation. They may utilize strong loaded words (wording that attempts to influence an audience by appealing to emotion or stereotypes), publish misleading reports, and omit information that may damage conservative causes. Some sources in this category may be untrustworthy. See all Right Bias sources.

  • Overall, we rate Media Research Center strongly right-biased based on advocacy for a conservative agenda and Mixed for factual reporting due to the promotion of propaganda, pseudoscience, and a poor fact-check record by their primary sources.

Detailed Report

  • Bias Rating: RIGHT
  • Factual Reporting: MIXED
  • Country: USA
  • MBFC’s Country Freedom Rank: MOSTLY FREE
  • Media Type: Organization/Foundation
  • Traffic/Popularity: Minimal Traffic
  • MBFC Credibility Rating: MEDIUM CREDIBILITY

The Media Research Center (MRC) is a politically conservative content analysis organization based in Reston, Virginia, founded in 1987 by activist L. Brent Bozell III. Its mission is to “prove—through sound scientific research—that liberal bias in the media exists and undermines traditional American values.” According to their about page , “MRC’s sole mission is to expose and neutralize the propaganda arm of the Left: the national news media. This makes the MRC’s work unique within the conservative movement.”


Funded by / Ownership

The Media Research Center is a 501(c)(3) nonprofit organization. MRC is funded through donations, with some large donors including the Bradley , Scaife , Olin , Castle Rock , Carthage , and JM foundations. It also receives funding from ExxonMobil due to its skepticism on climate change . MRC also owns the Questionable news source CNS News and the factually Mixed right-biased Newsbusters . The Heritage Foundation states that Rebekah Mercer serves on the board of the Media Research Center. Rebekah Mercer is the daughter of right-wing philanthropist Robert Mercer, who heavily funded the Trump campaign.

Analysis / Bias

In review, MRC does not produce original content but links to other websites such as MRCTV, Newsbusters, and CNS News. These sources frequently utilize strong loaded emotional language such as this: The Embarrassing Questions NBC’s Moderators NEVER Asked . Articles are typically properly sourced; however, they use the Questionable CNS News as a primary source. CNS News has failed numerous fact-checks by IFCN fact-checkers.

The primary purpose of MRC is to expose liberal bias in the media. They often claim there is a conspiracy in the media to promote liberalism while suppressing conservatism. Journalist Brian Montopoli of Columbia Journalism Review  in 2005 labeled MRC “just one part of a wider movement by the far right to demonize corporate media” rather than “make the media better.” Essentially, MRC is a propaganda outlet for the Republican Party.

When it comes to science, MRC advocates denial of human-influenced climate change, as well as creationism .

Failed Fact Checks

  • A factual search reveals that both Newsbusters and CNS News have failed numerous fact checks.
  • Says there are fewer than 2,000 confirmed COVID-19 hospitalizations in the U.S . – False
  • “We have been cooling down for the past 4000 years”; the Earth has cooled since the ‘medieval warming’, “It’s all about when you start the measurements” – Inaccurate

Overall, we rate Media Research Center strongly right-biased based on advocacy for a conservative agenda and Mixed for factual reporting due to the promotion of propaganda, pseudoscience, and a poor fact-check record by their primary sources. (7/19/2016) Updated (D. Van Zandt 05/17/2024)

Source: https://www.mrc.org/


Last Updated on May 17, 2024 by Media Bias Fact Check



University of Washington Information School


MSIM team uses AI to battle bias in hiring

Flipping through resumes can be a tedious task. Even as the hiring process has digitized, combing through hundreds if not thousands of resumes often is a manual chore that takes hours. Worse, this initial sorting can introduce bias into hiring.

A team of graduate students from the University of Washington Information School has developed a possible solution. The team created a program that uses artificial intelligence to extract key points from job descriptions and rank resumes based on those requirements.

The goal is to match the ideal candidate with the desired experience and skills while eliminating unintentional or even explicit bias. Recruiters still have a chance to add or reduce the weight of any skill in this initial sorting.

“We're all of diverse backgrounds in America,” said Jin Lee, one of the students. “We felt that we could put our skills to use and actually make a meaningful impact for people — even ourselves — by reducing bias in hiring.”

Other team members are Shinjini Guha, Douglas S. Lew Tan and Jay Kuo. They are all pursuing Master of Science in Information Management degrees. This is their Capstone project, the final, culminating project for many iSchool students. 

They have been working on the project with Seattle startup Included, which aims to build software that embeds diversity, equity and inclusion metrics into its analytics platform. 

Businesses have generally underinvested in human resources departments, said Chandan Golla, Included’s co-founder and chief product officer. Artificial intelligence along with companies such as Included can change that dynamic.

“This is a problem that every company has, whether you are a company of five people or 500,000 people,” Golla said. “Any job you post out there, you do get hundreds of applications. And most of the time the hiring team is small.”

The students started working on the project after connecting with Included at the iSchool’s Capstone Night in October. Part of the task was to narrow the project’s scope to something that would be manageable but meaningful.

Lee took on the role of project manager, Guha led product development, Kuo focused on data science, and Lew Tan worked on user experience research. For the project, the team spoke to two international recruiters, two recruiting agencies and two recruiters at other companies.

“For me, it was a fun process to practice the research methods that I’ve learned and use them in a real-world situation,” Lew Tan said.

The program combs through the job description to match traits with the job applicants, scores each resume and ranks them. The program also allows recruiters to weigh criteria differently. 

“If a master’s degree matters more than years of experience, the recruiter can assign more weight for master’s degree rather than years of experience,” Kuo said. “So, the recruiters can actually play around with all the features to make sure the recruiter gets the best ranking.”

Or as Guha put it: “We’re allowing the recruiters themselves, based on their conversation with the hiring managers and teams, to set importance to different features. … They decide what they call highly qualified.”
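A recruiter-weighted scoring scheme like the one Kuo and Guha describe can be sketched as follows. This is a hypothetical illustration, not the Capstone team's actual code; the feature names, weights, and match scores are invented.

```python
# Hypothetical sketch of recruiter-weighted resume ranking.
# Each resume has per-requirement match scores in 0..1; recruiters set the weights.

def score_resume(features: dict, weights: dict) -> float:
    """Weighted sum of how well a resume matches each job requirement."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

def rank_resumes(resumes: dict, weights: dict) -> list:
    """Return resume IDs sorted best match first."""
    return sorted(resumes, key=lambda rid: score_resume(resumes[rid], weights), reverse=True)

# A recruiter who values a master's degree more than years of experience
weights = {"masters_degree": 0.6, "years_experience": 0.3, "aws": 0.1}
resumes = {
    "A": {"masters_degree": 1.0, "years_experience": 0.4, "aws": 0.0},
    "B": {"masters_degree": 0.0, "years_experience": 1.0, "aws": 1.0},
}
ranking = rank_resumes(resumes, weights)  # "A" outranks "B" under these weights
```

Shifting the weight from the degree to years of experience would flip the ranking, which is the "play around with all the features" behavior Kuo describes.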

One of the challenges that recruiters face is that job applicants can use ambiguous wording on their resumes. For instance, someone who worked for Amazon Web Services could note that on their resume by writing AWS, a common abbreviation for the company, which may stymie a resume-screening program looking for exact matches. 

The team used open-source Mistral AI to be able to interpret unclear passages in resumes, although that’s still a work in progress. Still, Golla said he’s been impressed with the level of work from the iSchool students.

“We are building out the prototypes and the plan is to integrate that into mainstream experiences,” said Golla, who noted this is his company’s first time working with the iSchool and that Included sponsored three teams. 

Guha said she found the experience a great learning opportunity, as she used unfamiliar tools. “Working with a startup is great because you get a lot of freedom to experiment and there’s no restriction on what we can and can’t do,” Guha said. 

Lee said he was proud of what they accomplished, seeing where they started and witnessing the results.

“Part of doing the Capstone project is we should learn how to do our respective roles,” Lee said. “So, I learned a lot of project management skills, especially because I want to go into project management where I work with talented people like Shinjini, Jay and Douglas.”

Pictured at top: From left, Capstone team members Jay Kuo, Jin Lee, Douglas S. Lew Tan and Shinjini Guha.


Stanford Graduate School of Education


70 years after Brown v. Board of Education, new research shows rise in school segregation


As the nation prepares to mark the 70th anniversary of the landmark U.S. Supreme Court ruling in Brown v. Board of Education , a new report from researchers at Stanford and USC shows that racial and economic segregation among schools has grown steadily in large school districts over the past three decades — an increase that appears to be driven in part by policies favoring school choice over integration.

Analyzing data from U.S. public schools going back to 1967, the researchers found that segregation between white and Black students has increased by 64 percent since 1988 in the 100 largest districts, and segregation by economic status has increased by about 50 percent since 1991.

The report also provides new evidence about the forces driving recent trends in school segregation, showing that the expansion of charter schools has played a major role.  

The findings were released on May 6 with the launch of the Segregation Explorer , a new interactive website from the Educational Opportunity Project at Stanford University. The website provides searchable data on racial and economic school segregation in U.S. states, counties, metropolitan areas, and school districts from 1991 to 2022. 

“School segregation levels are not at pre- Brown levels, but they are high and have been rising steadily since the late 1980s,” said Sean Reardon , the Professor of Poverty and Inequality in Education at Stanford Graduate School of Education and faculty director of the Educational Opportunity Project. “In most large districts, school segregation has increased while residential segregation and racial economic inequality have declined, and our findings indicate that policy choices – not demographic changes – are driving the increase.” 

“There’s a tendency to attribute segregation in schools to segregation in neighborhoods,” said Ann Owens , a professor of sociology and public policy at USC. “But we’re finding that the story is more complicated than that.”

Assessing the rise

In the Brown v. Board decision issued on May 17, 1954, the U.S. Supreme Court ruled that racially segregated public schools violated the Equal Protection Clause of the Fourteenth Amendment and established that “separate but equal” schools were not only inherently unequal but unconstitutional. The ruling paved the way for future decisions that led to rapid school desegregation in many school districts in the late 1960s and early 1970s.

Though segregation in most school districts is much lower than it was 60 years ago, the researchers found that over the past three decades, both racial and economic segregation in large districts increased. Much of the increase in economic segregation since 1991, measured by segregation between students eligible and ineligible for free lunch, occurred in the last 15 years.

White-Hispanic and white-Asian segregation, while lower on average than white-Black segregation, have both more than doubled in large school districts since the 1980s. 

Racial-economic segregation – specifically the difference in the proportion of free-lunch-eligible students between the average white and Black or Hispanic student’s schools – has increased by 70 percent since 1991. 
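The measure described above — the gap in free-lunch-eligible classmates between the average white student's school and the average Black or Hispanic student's school — can be illustrated with a toy calculation. The school data below is invented for illustration; the study's actual indices come from the Segregation Explorer data.

```python
# Toy illustration of a racial-economic exposure gap.
# For each group, compute the enrollment-weighted average share of
# free/reduced-lunch (FRL) students at the schools its members attend.

def avg_exposure(schools: list, group: str, measure: str) -> float:
    """Average value of `measure` experienced by students in `group`."""
    total = sum(s[group] for s in schools)
    return sum(s[group] * s[measure] for s in schools) / total

# Two invented schools: enrollment counts by group and each school's FRL share
schools = [
    {"white": 400, "black": 50,  "frl_share": 0.20},
    {"white": 100, "black": 300, "frl_share": 0.70},
]

gap = (avg_exposure(schools, "black", "frl_share")
       - avg_exposure(schools, "white", "frl_share"))
```

A positive gap means the average Black student attends a higher-poverty school than the average white student; the report finds this gap has grown 70 percent since 1991.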

School segregation is strongly associated with achievement gaps between racial and ethnic groups, especially the rate at which achievement gaps widen during school, the researchers said.  

“Segregation appears to shape educational outcomes because it concentrates Black and Hispanic students in higher-poverty schools, which results in unequal learning opportunities,” said Reardon, who is also a senior fellow at the Stanford Institute for Economic Policy Research and a faculty affiliate of the Stanford Accelerator for Learning . 

Policies shaping recent trends 

The recent rise in school segregation appears to be the direct result of educational policy and legal decisions, the researchers said. 

Both residential segregation and racial disparities in income declined between 1990 and 2020 in most large school districts. “Had nothing else changed, that trend would have led to lower school segregation,” said Owens. 

But since 1991, roughly two-thirds of districts that were under court-ordered desegregation have been released from court oversight. Meanwhile, since 1998, the charter sector – a form of expanded school choice – has grown.

Expanding school choice could influence segregation levels in different ways: If families sought schools that were more diverse than the ones available in their neighborhood, it could reduce segregation. But the researchers found that in districts where the charter sector expanded most rapidly in the 2000s and 2010s, segregation grew the most. 

The researchers’ analysis also quantified the extent to which the release from court orders accounted for the rise in school segregation. They found that, together, the release from court oversight and the expansion of choice accounted entirely for the rise in school segregation from 2000 to 2019.

The researchers noted enrollment policies that school districts can implement to mitigate segregation, such as voluntary integration programs, socioeconomic-based student assignment policies, and school choice policies that affirmatively promote integration. 

“School segregation levels are high, troubling, and rising in large districts,” said Reardon. “These findings should sound an alarm for educators and policymakers.”

Additional collaborators on the project include Demetra Kalogrides, Thalia Tom, and Heewon Jang. This research, including the development of the Segregation Explorer data and website, was supported by the Russell Sage Foundation, the Robert Wood Johnson Foundation, and the Bill and Melinda Gates Foundation.   



More Americans want the journalists they get news from to share their politics than any other personal trait

Members of conservative media outlets film coverage of a Washington, D.C., policy summit on July 26, 2022. (Kent Nishimura/Los Angeles Times via Getty Images)

Most Americans say it is not important that the news they get comes from journalists who share their political views, age, gender or other traits. But people are more likely to say it is important for journalists to share their politics than any other characteristic we asked about. And certain demographic groups place more value than others on the personal traits of their journalists.

A 2023 Pew Research Center survey asked Americans how important it is for the journalists they get news from to have six personal characteristics that are similar to their own.

Pew Research Center conducted this analysis as part of a broader look at Americans’ views of the news media. The data for this analysis comes from a survey of 10,701 U.S. adults from March 13 to 19, 2023.

Everyone who completed the survey is a member of the Center’s American Trends Panel (ATP), an online survey panel that is recruited through national, random sampling of residential addresses. This way nearly all U.S. adults have a chance of selection. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other categories. Read more about the  ATP’s methodology .
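The weighting step can be illustrated with one-variable post-stratification. This is a deliberately minimal sketch: Pew's actual ATP weighting adjusts across many variables at once, and the sample counts and population shares below are invented.

```python
# Minimal sketch of survey weighting to match a population benchmark
# (one-variable post-stratification). Numbers are invented for illustration.

def poststratify(sample_counts: dict, population_shares: dict) -> dict:
    """Weight each group so the weighted sample matches population shares."""
    n = sum(sample_counts.values())
    return {g: (population_shares[g] * n) / sample_counts[g] for g in sample_counts}

# Suppose the sample over-represents college graduates relative to the population
weights = poststratify({"college": 600, "no_college": 400},
                       {"college": 0.35, "no_college": 0.65})
# Down-weights college respondents (< 1) and up-weights the rest (> 1),
# so weighted totals match the population's 35/65 split.
```

After weighting, each group's weighted count equals its population share of the sample size, which is what "weighted to be representative" means in practice.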

Here are the  questions used for this analysis , along with responses, and the survey  methodology .

Pew Research Center is a subsidiary of The Pew Charitable Trusts, its primary funder. This is the latest analysis in Pew Research Center’s ongoing investigation of the state of news, information and journalism in the digital age, a research program funded by The Pew Charitable Trusts, with generous support from the John S. and James L. Knight Foundation.

A bar chart showing that more Americans find it important for journalists to share their politics than any other personal trait.

About four-in-ten Americans say it is at least somewhat important that they get news from journalists who share their political views (39%). That is nearly double the share who say the same about getting news from journalists who share their religious views (22%) or who talk or sound like them (20%).

Smaller shares say they want to get news from journalists who are similar to them in age (15%), share their race or ethnicity (14%), or share their gender (10%).

On several of these questions, opinions vary based on respondents’ political views, age and other personal traits.

Political views

A bar chart showing that liberal Democrats, conservative Republicans most likely to want news from journalists who share their politics.

Similar shares of Republicans and Democrats say it is at least somewhat important for the news they get to come from journalists who share their political views. Four-in-ten Republicans and GOP-leaning independents say this, compared with 41% of Democrats and Democratic leaners.

When combining party and ideology, people who place themselves at either end of the political spectrum are more likely than those toward the center to say journalists’ politics are important.

  • Roughly half each of conservative Republicans (47%) and liberal Democrats (50%) say it is important that the news they get comes from journalists who share their politics.
  • Smaller shares of liberal and moderate Republicans (29%) and conservative and moderate Democrats (33%) hold this view. 

A bar chart showing that younger people more likely to want news from journalists of a similar age.

Younger adults are more likely than older Americans to say they want news from journalists who are around the same age as them. Among U.S. adults ages 18 to 29, 23% say this is at least somewhat important, compared with one-in-ten of those ages 65 and older.

Younger adults are also more likely than older adults to say it’s important that the journalists they get news from are the same gender as them; still, large majorities say this is not important. Some 16% of those ages 18 to 29 say it is at least somewhat important for journalists to share their gender, versus 6% of those 65 and older.

Race or ethnicity

Black Americans are more likely than other racial or ethnic groups to say it is important they get news from journalists who share several of their characteristics – particularly their race or ethnicity.

About four-in-ten Black Americans (41%) say it is at least somewhat important that the news they get comes from journalists who share their race or ethnicity. A quarter of Hispanic Americans, 20% of Asian Americans and just 5% of White Americans say the same.

For more information on how Black Americans answered these questions, read our report on Black Americans and news .

A bar chart showing that White evangelicals, Black Protestants are most likely to want news from journalists with shared religious views.

Overall, Americans who identify with a religion are more likely than those who are religiously unaffiliated to find it at least somewhat important to get news from journalists who share their religious views (26% vs. 15%).

Among Christians, Protestants are more likely than Catholics to say it is important for journalists to share their religious views (30% vs. 21%). But there are also differences among Protestants and among Catholics:

  • White evangelical Protestants (37%) and Black Protestants (33%) are about twice as likely as White nonevangelical Protestants (16%) to say this.
  • About three-in-ten Hispanic Catholics say this (28%), compared with 17% of White Catholics.

Among Jewish Americans, just 10% say it’s at least somewhat important to get news from journalists who share their religious views.
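The shares reported above ("at least somewhat important") come from collapsing a multi-point importance scale into a single yes/no measure. The sketch below shows how that collapsing works; the response labels and the toy sample are illustrative assumptions, not actual survey data or Pew's published methodology.

```python
# Hedged sketch: collapsing an importance scale into an
# "at least somewhat important" share, as reported in the text.
# Response labels and counts below are illustrative only.

IMPORTANT = {"extremely important", "very important", "somewhat important"}

def share_at_least_somewhat(responses):
    """Return the fraction of responses rated at least 'somewhat important'."""
    if not responses:
        return 0.0
    hits = sum(1 for r in responses if r in IMPORTANT)
    return hits / len(responses)

# Illustrative toy sample of 100 hypothetical respondents:
sample = (
    ["extremely important"] * 5
    + ["somewhat important"] * 18
    + ["not too important"] * 37
    + ["not at all important"] * 40
)
print(round(share_at_least_somewhat(sample), 2))  # 0.23 for this toy sample
```

Real survey estimates would additionally apply respondent weights before computing the share; this unweighted version only illustrates the category collapsing.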

Note: Here are the questions used for this analysis, along with responses, and the survey methodology.


Emily Tomasik is a research assistant focusing on news and information research at Pew Research Center.


ABOUT PEW RESEARCH CENTER  Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research. Pew Research Center does not take policy positions. It is a subsidiary of  The Pew Charitable Trusts .

Copyright 2024 Pew Research Center
