
Types of Variables in Research & Statistics | Examples

Published on September 19, 2022 by Rebecca Bevans. Revised on June 21, 2023.

In statistical research, a variable is defined as an attribute of an object of study. Choosing which variables to measure is central to good experimental design.

If you want to test whether some plant species are more salt-tolerant than others, some key variables you might measure include the amount of salt you add to the water, the species of plants being studied, and variables related to plant health like growth and wilting.

You need to know which types of variables you are working with in order to choose appropriate statistical tests and interpret the results of your study.

You can usually identify the type of variable by asking two questions:

  • What type of data does the variable contain?
  • What part of the experiment does the variable represent?

Table of contents

  • Types of data: quantitative vs categorical variables
  • Parts of the experiment: independent vs dependent variables
  • Other common types of variables
  • Other interesting articles
  • Frequently asked questions about variables

Types of data: quantitative vs categorical variables

Data is a specific measurement of a variable – it is the value you record in your data sheet. Data is generally divided into two categories:

  • Quantitative data represents amounts
  • Categorical data represents groupings

A variable that contains quantitative data is a quantitative variable; a variable that contains categorical data is a categorical variable. Each of these types of variables can be broken down into further types.

Quantitative variables

When you collect quantitative data, the numbers you record represent real amounts that can be added, subtracted, divided, etc. There are two types of quantitative variables: discrete and continuous.

Categorical variables

Categorical variables represent groupings of some kind. They are sometimes recorded as numbers, but the numbers represent categories rather than actual amounts of things.

There are three types of categorical variables: binary, nominal, and ordinal variables.

Note that sometimes a variable can work as more than one type! An ordinal variable can also be used as a quantitative variable if the scale is numeric and doesn’t need to be kept as discrete integers. For example, star ratings on product reviews are ordinal (1 to 5 stars), but the average star rating is quantitative.
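The star-rating example can be made concrete with a short sketch (the ratings below are hypothetical):

```python
from statistics import mean, median

# Hypothetical star ratings (ordinal: 1 to 5 stars) for one product
ratings = [5, 4, 4, 3, 5, 2, 4]

# Treated as ordinal, the ordering supports a median category...
median_rating = median(ratings)           # 4

# ...treated as quantitative, the numeric scale also supports an average.
average_rating = round(mean(ratings), 2)  # 3.86
```

The median only uses the ordering of the categories, while the mean treats the stars as real amounts – which is exactly the dual role described above.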

Example data sheet

To keep track of your salt-tolerance experiment, you make a data sheet where you record information about the variables in the experiment, like salt addition and plant health.

To gather information about plant responses over time, you can fill out the same data sheet every few days until the end of the experiment. This example sheet is color-coded according to the type of variable: nominal, continuous, ordinal, and binary.

Example data sheet showing types of variables in a plant salt tolerance experiment


Parts of the experiment: independent vs dependent variables

Experiments are usually designed to find out what effect one variable has on another – in our example, the effect of salt addition on plant growth.

You manipulate the independent variable (the one you think might be the cause ) and then measure the dependent variable (the one you think might be the effect ) to find out what this effect might be.

You will probably also have variables that you hold constant ( control variables ) in order to focus on your experimental treatment.

In this experiment, we have one independent and three dependent variables.

The other variables in the sheet can’t be classified as independent or dependent, but they do contain data that you will need in order to interpret your dependent and independent variables.

Example of a data sheet showing dependent and independent variables for a plant salt tolerance experiment.

What about correlational research?

When you do correlational research, the terms “dependent” and “independent” don’t apply, because you are not trying to establish a cause-and-effect relationship (causation).

However, there might be cases where one variable clearly precedes the other (for example, rainfall leads to mud, rather than the other way around). In these cases you may call the preceding variable (i.e., the rainfall) the predictor variable and the following variable (i.e., the mud) the outcome variable.

Once you have defined your independent and dependent variables and determined whether they are categorical or quantitative, you will be able to choose the correct statistical test.

But there are many other ways of describing variables that help with interpreting your results. Some useful types of variables are listed below.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Likert scale

Research bias

  • Implicit bias
  • Framing effect
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic

Frequently asked questions about variables

You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause, while a dependent variable is the effect.

In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth:

  • The independent variable is the amount of nutrients added to the crop field.
  • The dependent variable is the biomass of the crops at harvest time.

Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design .

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.
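Confounding is easiest to see with numbers. Here is a minimal pure-Python sketch (simulated, entirely hypothetical data) in which a confounder z drives both x and y, so x and y appear correlated even though neither affects the other; holding z roughly constant removes most of the apparent relationship:

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)

# A confounder z influences both x and y; x has no direct effect on y.
z = [random.gauss(0, 1) for _ in range(2000)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

naive_r = pearson(x, y)  # inflated by the confounder (about 0.8 here)

# A crude way to "control" for z: keep only cases where z is nearly
# constant, so its influence is held fixed.
pairs = [(xi, yi) for xi, yi, zi in zip(x, y, z) if abs(zi) < 0.2]
xs, ys = zip(*pairs)
controlled_r = pearson(xs, ys)  # much closer to zero
```

This stratification trick is only an illustration; real studies control for confounders through design (randomization, matching) or analysis (e.g., multiple regression).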

Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).

Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).

You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .

Discrete and continuous variables are two types of quantitative variables:

  • Discrete variables represent counts (e.g. the number of objects in a collection).
  • Continuous variables represent measurable amounts (e.g. water volume or weight).
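Tying these distinctions together, here is a small sketch of how the columns of a data sheet like the salt-tolerance example map onto variable types (the column names and values are made up):

```python
# Hypothetical rows from a data sheet like the salt-tolerance example
observations = [
    {"species": "A", "salt_g_per_l": 0.0, "height_cm": 12.4, "leaf_count": 8, "wilting": "none"},
    {"species": "B", "salt_g_per_l": 2.5, "height_cm": 9.1,  "leaf_count": 6, "wilting": "mild"},
]

# Assumed type of each column
variable_types = {
    "species":      "nominal (categorical)",      # unordered groups
    "salt_g_per_l": "continuous (quantitative)",  # measurable amount
    "height_cm":    "continuous (quantitative)",  # measurable amount
    "leaf_count":   "discrete (quantitative)",    # a count
    "wilting":      "ordinal (categorical)",      # ordered: none < mild < severe
}
```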

Cite this Scribbr article

Bevans, R. (2023, June 21). Types of Variables in Research & Statistics | Examples. Scribbr. Retrieved April 9, 2024, from https://www.scribbr.com/methodology/types-of-variables/


Statistics and data presentation: Understanding variables

  • Charlesworth Author Services
  • 21 January, 2021
  • Academic Writing Skills

All science is about understanding variability in different characteristics, and most characteristics vary; hence we call the characteristics that we are studying ‘variables’. When we work in a quantitative area, we make measurements. The scale of measurement is very important because one criterion for selecting the appropriate statistical technique is the scale of measurement used to measure whatever it is we are studying.

There are different statistical techniques to use with each kind of measurement.

✓       Nominal Scale is the lowest level of measurement. Sometimes this is referred to as qualitative data – not to be confused with qualitative research. This scale uses numbers to describe names of discrete categories. One determines for each case whether they have or do not have the attribute in question.

✓       Ordinal Scale is used to rank people in order (e.g. least politically active to most politically active). This is the lowest level of quantitative data and involves the process of assignment of numbers to cases in terms of how much of the attribute is possessed by each subject.

✓       Continuous data can assume different values within a range. An Interval Scale is one where the number assigned represents the amount of the attribute possessed; most statistical procedures can be used with interval data. A Ratio Scale, which adds a true zero point, is considered the highest level of measurement, because all statistical tools can be used on ratio data.
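The four scales and the operations each one supports can be sketched in a few lines (all example values are hypothetical):

```python
from collections import Counter

nominal = ["red", "blue", "red"]  # names of categories: only equality/counting
ordinal = [1, 3, 2]               # ranks: ordering is meaningful
interval = [20.0, 25.0, 30.0]     # e.g. temperature in Celsius: differences meaningful, no true zero
ratio = [1.2, 2.4, 4.8]           # e.g. weight in kg: true zero, so ratios are meaningful

mode_value = Counter(nominal).most_common(1)[0][0]  # nominal: frequencies and the mode
highest_rank = max(ordinal)                         # ordinal: comparisons make sense
difference = interval[1] - interval[0]              # interval: 25 C is 5 degrees warmer than 20 C
ratio_value = ratio[1] / ratio[0]                   # ratio: 2.4 kg is twice 1.2 kg
```

Note that the interval example supports subtraction but not division: 25 C is not "1.25 times as hot" as 20 C, which is exactly why ratio data sits above interval data in the hierarchy.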

When you read an article, you need to figure out what all the variables in the study are. Then, for each variable, you need to identify three things: the scale of measurement; the possible score range; and the meaning of a high score and a low score. Variables take on different functions in a study, and we have to be able to tease these functions out. When you are conducting research, you have to recognize the different variables at play in your study so you can account for them during your analyses. Variables can take on different functions within the same study, so don’t classify them at the start; researchers decide on a classification of variables in each analysis. Let’s take a look at the different classifications of variables.

Classification of variables

•         Dependent Variable: the outcome variable of interest, observed to see whether it is influenced by a manipulated variable. In other words, it is a characteristic that is dependent on, or thought to be influenced by, an independent variable. It is sometimes called the outcome or response variable.

•         Independent Variable: in experimental research, the researcher can manipulate one variable and measure the effect of that manipulation on another variable. The variable that is manipulated is called an independent variable. In other words, it is a characteristic that affects, or is thought to influence, an outcome or dependent variable, or an antecedent condition. Independent variables are sometimes called factors, treatments, predictors, or manipulated variables.

In an ideal scenario, the only feature that varies between an intervention group and a control group would be the intervention itself, so that any difference in the outcome variable of interest could be attributed to it. However, this is generally not the case, and we often have confounding or extraneous variables that play a part. When we design our research studies, we need to pay attention to and account for these variables as well.

•       Control Variable: any variable that is held constant in a research study by observing only one of its instances or levels. Control variables are not necessarily of central interest, but are things that a researcher cannot change or remove from participants; they might be known to exert some influence on the dependent variable. We can’t study everything, so a researcher may be interested, for example, in how parental education (and some other variable) is related to reading ability in younger children. They happen to know through previous research that gender is related to reading, so, for the purposes of the study, they choose to study only girls. Thus, gender is the control variable and is “held constant”.

•         Mediator (Intervening) Variable: a hypothetical variable that explains the relationship between the independent and dependent variable but is not observed directly in the research study; rather, it is inferred from that relationship. This is an important concept to understand because most theory is based on notions of intervening variables and on understanding how or why such effects occur. These variables might be clearly identified before doing a study, i.e. measured and analyzed within it. Often, though, mediating variables surface as researchers interpret findings and emerge as suggestions for future research.

•         Moderator Variable: a variable or characteristic that moderates, or changes, the direction and/or strength of the relationship between two other variables; it tells us when, or under what conditions, a relationship holds, and what influences its strength. For example, if a researcher were looking at the relationship between socioeconomic status and AIDS prevention, age might be a moderator variable such that the relationship is stronger for older kids than for younger kids.

Understanding the distinction between mediators and moderators is not always easy. Basically, in a mediation model the independent variable cannot influence the dependent variable directly and does so by means of another variable – the mediator. As a simple example, older people tend to be better drivers than young people. So, age is a predictor of good driving. However, when we think about why this is the case, we see that older people typically make wiser decisions and so wisdom could be seen as the mediating variable.

There are a number of tests that can be used within your statistical software program to test for mediating and moderating effects. Moderated regression is an example. A moderator analysis is used to determine whether the relationship between two variables depends on (is moderated by) the value of a third variable. You can find online tutorials to explore how this is conducted for the statistical package you are using. Regression can also be used to test for a mediating effect.
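As a toy illustration of the moderation pattern (with made-up numbers, and simple per-group slopes rather than a formal moderated regression), fitting the x-to-y slope separately in each level of the moderator shows the relationship changing strength:

```python
def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

x = [1, 2, 3, 4, 5]          # predictor, same in both groups
older_y = [2, 4, 6, 8, 10]   # strong relationship in the "older" group
younger_y = [2, 2, 3, 3, 4]  # weaker relationship in the "younger" group

older_slope = slope(x, older_y)      # 2.0
younger_slope = slope(x, younger_y)  # 0.5
```

A formal moderated regression would instead fit one model with an x-by-group interaction term and test whether that term differs from zero; the clearly different subgroup slopes here are the pattern such a test detects.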


Radiopaedia.org

Variation in fetal presentation


There can be many variations in fetal presentation, which is determined by which part of the fetus is projecting towards the internal cervical os. These include:

  • cephalic presentation: fetal head presenting towards the internal cervical os; considered normal and occurs in the vast majority of births (~97%). This can have many variations, which include:
      • left occipito-anterior (LOA)
      • left occipito-posterior (LOP)
      • left occipito-transverse (LOT)
      • right occipito-anterior (ROA)
      • right occipito-posterior (ROP)
      • right occipito-transverse (ROT)
      • straight occipito-anterior
      • straight occipito-posterior
  • breech presentation: fetal rump presenting towards the internal cervical os; this has three main types:
      • frank breech presentation (50-70% of all breech presentations): hips flexed, knees extended (pike position)
      • complete breech presentation (5-10%): hips flexed, knees flexed (cannonball position)
      • footling or incomplete breech presentation (10-30%): one or both hips extended, foot presenting
      • other, e.g. one leg flexed and one leg extended
  • shoulder presentation
  • cord presentation: umbilical cord presenting towards the internal cervical os




Statistics LibreTexts

2: Graphical Representations of Data


In this chapter, you will study numerical and graphical ways to describe and display your data. This area of statistics is called "Descriptive Statistics." You will learn how to calculate, and even more importantly, how to interpret these measurements and graphs.

  • 2.1: Introduction In this chapter, you will study numerical and graphical ways to describe and display your data. This area of statistics is called "Descriptive Statistics." You will learn how to calculate, and even more importantly, how to interpret these measurements and graphs. In this chapter, we will briefly look at stem-and-leaf plots, line graphs, and bar graphs, as well as frequency polygons, and time series graphs. Our emphasis will be on histograms and box plots.
  • 2.2: Stem-and-Leaf Graphs (Stemplots), Line Graphs, and Bar Graphs A stem-and-leaf plot is a way to plot data and look at the distribution, where all data values within a class are visible. The advantage of a stem-and-leaf plot is that all values are listed, unlike a histogram, which gives classes of data values. A line graph is often used to represent a set of data values in which a quantity varies with time. These graphs are useful for finding trends. A bar graph is a chart that uses either horizontal or vertical bars to show comparisons among categories.
  • 2.3: Histograms, Frequency Polygons, and Time Series Graphs A histogram is a graphic version of a frequency distribution. The graph consists of bars of equal width drawn adjacent to each other. The horizontal scale represents classes of quantitative data values and the vertical scale represents frequencies. The heights of the bars correspond to frequency values. Histograms are typically used for large, continuous, quantitative data sets. A frequency polygon can also be used when graphing large data sets with data points that repeat.
  • 2.4: Using Excel to Create Graphs Using technology to create graphs will make the graphs faster to create, more precise, and give the ability to use larger amounts of data. This section focuses on using Excel to create graphs.
  • 2.5: Graphs that Deceive It's common to see graphs displayed in a misleading manner in social media and other instances. This could be done purposefully to make a point, or it could be accidental. Either way, it's important to recognize these instances to ensure you are not misled.
  • 2.E: Graphical Representations of Data (Exercises) These are homework exercises to accompany the Textmap created for "Introductory Statistics" by OpenStax.
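As a taste of the first plot type in the list above, a text stem-and-leaf display can be built in a few lines of Python (the scores below are made up for illustration):

```python
from collections import defaultdict

def stem_and_leaf(data):
    """Text stem-and-leaf display: stems are the tens digits, leaves the units."""
    groups = defaultdict(list)
    for value in sorted(data):
        stem, leaf = divmod(value, 10)
        groups[stem].append(leaf)
    return "\n".join(
        f"{stem} | " + " ".join(str(leaf) for leaf in groups[stem])
        for stem in sorted(groups)
    )

# Hypothetical exam scores
scores = [62, 63, 64, 64, 70, 72, 75, 75, 78, 81, 83, 83, 85, 87, 90, 92, 93]
print(stem_and_leaf(scores))
# 6 | 2 3 4 4
# 7 | 0 2 5 5 8
# 8 | 1 3 3 5 7
# 9 | 0 2 3
```

Because every leaf is listed, the full distribution is visible at a glance, which is exactly the advantage over a histogram noted in section 2.2.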

Contributors and Attributions

Barbara Illowsky and Susan Dean (De Anza College) with many other contributing authors. Content produced by OpenStax College is licensed under a Creative Commons Attribution License 4.0 license. Download for free at http://cnx.org/contents/[email protected] .

Comparing explicit and implicit ensemble perception: 3 stimulus variables and 3 presentation modes

  • Open access
  • Published: 11 October 2023
  • Volume 86, pages 482–502 (2024)


  • Noam Khayat,
  • Marina Pavlovskaya &
  • Shaul Hochstein


Visual scenes are too complex for one to immediately perceive all their details. As suggested by Gestalt psychologists, grouping similar scene elements and perceiving their summary statistics provides one shortcut for evaluating scene gist. Perceiving ensemble statistics overcomes processing, attention, and memory limits, facilitating higher-order scene understanding. Ensemble perception spans simple/complex dimensions (circle size, face emotion), including various statistics (mean, range), and inherently spans space and/or time, when sets are presented scattered across the visual scene, and/or sequentially in rapid series. Furthermore, ensemble perception occurs explicitly, when observers are asked to judge set mean, and also automatically/implicitly, when observers are engaged in an orthogonal task. We now study relationships among these ensemble-perception phenomena, testing explicit and implicit ensemble perception; for sets varying in circle size, line orientation, or disc brightness; and with spatial, temporal or spatio-temporal presentation. Following ensemble set presentation, observers were asked if a test image, or which of two test images, had been present in the set. Confirming previous results, responses reflected implicit mean perception, depending on test image distance from the mean, and on its being within or outside ensemble range. Subsequent experiments asked the same observers to explicitly judge whether test images were larger, more clockwise, or brighter than the set mean, or which of two test images was closer to the mean. Comparing implicit and explicit mean perception, we find that explicit ensemble averaging is more precise than implicit mean perception—for each ensemble variable and presentation mode. Implications are discussed regarding possible separate mechanisms for explicit versus implicit ensemble perception.


Introduction

Gestalt psychologists suggested that similar scene elements are grouped (Koffka, 1935 ; Wagemans et al., 2012 ; Wertheimer, 1923/ 1938 ) so that perception of the group’s spatial arrangement and its summary statistics provide a shortcut toward evaluating the gist of complex scenes (Ariely, 2001 ; Cohen et al., 2016 ; Hochstein & Ahissar, 2002 ). In fact, the idea of set representation was long known, including the phenomenon of central tendency or regression to the mean. For example, Hollingworth ( 1910 ) noted that magnitude estimates tend to gravitate towards a value equal to the mean of the set. It has been suggested that perceiving ensembles rather than individuals expands processing, attention, and memory limits (Alvarez, 2011 ; Cohen et al., 2016 ; Utochkin, 2016 ).

Numerous studies since the turn of the millennium have found that we rapidly perceive set mean values for multiple object features, including size (Ariely, 2001 ; Bauer, 2015 ; Chong & Treisman, 2003 ; Corbett & Oriet, 2011 ), orientation (Dakin & Watt, 1997 ; Parkes et al., 2001 ), brightness (Bauer, 2009 ; Chetverikov et al., 2017 ; Takano & Kimura, 2020 ), color (Olkkonen et al., 2014 ; Webster et al., 2014 ), position (Alvarez & Oliva, 2008 ; Lew & Vul, 2015 ), and face identity, gender, emotional expression, eye-gaze, or general lifelikeness (de Fockert & Wolfenstein, 2009 ; Haberman & Whitney, 2007 , 2009 ; Sweeny & Whitney, 2014 ; Yamanashi Leib et al., 2014 ). Perceived statistics also include set feature variance or range (Dakin & Watt, 1997 ; Haberman & Whitney, 2012 ; Pollard, 1984 ), in the visual and auditory domains (McDermott et al., 2013 ; Schweickert et al., 2014 ) and separate statistics for separable sets of elements (Chong & Treisman, 2003 ; Haberman & Whitney, 2012 ). While set statistics may affect perception, memory, and/or decision-making, we follow all the above, calling the phenomenon “ensemble perception.” For ensemble perception reviews, see Haberman and Whitney ( 2012 ), Bauer ( 2015 ), Cohen et al. ( 2016 ), and Corbett et al. ( 2023 ).

Nearly all of the above experiments tested explicit perception of the ensemble mean—that is, participants were asked to evaluate the mean and perform a task related to this mean. Explicit perception is deliberate and conscious, cognitively demanding with top-down attention (Cohen et al., 2016 ; Hochstein & Ahissar, 2002 ; Reber et al., 1999 ). On the other hand, Khayat and Hochstein ( 2018 , 2019 ; Hochstein, 2020 ; Khayat et al., 2021 ) studied implicit perception and memory of set statistics (see also Hansmann-Roth et al., 2021 ). Implicit perception is automatic and nonconscious, believed to involve bottom-up sensory integration (Cohen et al., 2016 ; Hochstein & Ahissar, 2002 ; Reber et al., 1999 ). Khayat, Fusi, and Hochstein ( 2021 ) presented a rapid serial visual presentation (RSVP) sequence of images differing by low-level properties (circles of different size, lines of different orientation, discs of different brightness; see Fig. 1 , Top, a), and tested only memory of membership in the sequence of test images or items. The mean of the set—mean size circle, mean orientation line, or mean brightness disc—was sometimes included in the set sequence, and sometimes absent. After showing the set RSVP, they presented two images, side by side, simultaneously, one SEEN in the sequence and one not present, a NEW image. They tested observer perception and memory by asking participants to choose which test image had been SEEN in the sequence. They did not inform observers that one test element could be the sequence mean, whether the SEEN test image (i.e., a RSVP sequence member) or the NEW foil image (i.e., not a sequence member). Also, they did not inform them that sometimes the NEW test image was outside the sequence range. They purposely did not mention in their instructions the words “mean” and “range,” in order to test if observers automatically perceive set mean and choose test images that match, or are closer to the mean. 
They also asked if observers would automatically perceive set property range and easily reject foils outside the sequence range. These test-stimulus contingencies, called “trial subtypes,” are shown in Table 1 , and demonstrated in Fig. 2 , using the terms: “in” and “out”—test elements within and outside the range of the variable sequence property; “mean”—element with property equal to sequence mean. Baseline performance is for the subtype where neither test item equals the mean, and they are, on average, equidistant from the mean. Note that “performance” accuracy is measured by choice of the SEEN test image and not by choice of the test image closer to the mean, even though we are interested in the effect of mean perception on this choice. Thus, choice of the NEW test image, when it equals the set mean, is deemed incorrect (leading to poor performance) in terms of memory of the set, and at the same time reflects (misleading) perception of the mean.
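The trial-subtype logic summarized in Table 1 can be sketched as a small helper (hypothetical code, not the authors' implementation; the numeric values stand in for any of the three stimulus properties):

```python
def classify_trial(sequence, seen, new):
    """Classify a 2-test-image trial by the test items' relation to set mean/range."""
    m = sum(sequence) / len(sequence)
    lo, hi = min(sequence), max(sequence)
    if not lo <= new <= hi:
        return "NEW out of range"        # expecting easy rejection of NEW
    if seen == m:
        return "SEEN = mean"             # expecting correct choice of SEEN
    if new == m:
        return "NEW = mean"              # expecting incorrect choice of NEW
    d_seen, d_new = abs(seen - m), abs(new - m)
    if d_seen == d_new:
        return "equidistant (baseline)"  # expecting ~50% chance performance
    return "SEEN closer" if d_seen < d_new else "NEW closer"

# e.g., circle sizes with mean 5: SEEN at distance 1, NEW at distance 2
print(classify_trial([2, 4, 6, 8], seen=6, new=3))  # SEEN closer
```

Implicit mean perception predicts better-than-baseline accuracy on "SEEN closer" trials and worse on "NEW closer" and "NEW = mean" trials, which is the pattern the study measures.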

figure 1

Top: Implicit and explicit ensemble perception tests. Rapid Serial Visual Presentation (RSVP) of a sequence of images differing in A circle size, B line orientation, or C disc brightness is followed by presentation of two test images ( a ). To test implicit ensemble perception, observers are asked which image was present in the sequence. Their responding according to the shorter distance from the sequence mean indicates implicit set mean perception. Explicit mean perception is tested by directly asking which of the two test images is closer to the mean. An alternative testing method ( b ) presents only a single test image and asks if it was present in the sequence (implicit mean perception) or if it is larger, more clockwise, or brighter than the set mean (explicit mean perception). Bottom: illustration of the other presentation modes—spatial (e.g., circle size) and spatiotemporal (e.g., line orientation). These were followed by the same 2 or 1 test image(s) as in ( a ). See text

figure 2

Examples of different implicit-2-test-images trial subtypes. In each case, the 8 sequence members are indicated, as well as their mean (M) and the two test images (SEEN and NEW). From the top: SEEN test image = sequence mean, expecting observers to correctly choose it; NEW = mean, expecting incorrect choice of the NEW image; neither = mean, SEEN and NEW equidistant from mean, expecting ~50% chance performance; SEEN closer, better than 50%; NEW closer, less than 50%; NEW out of range, expecting easy rejection of NEW and choice of SEEN, whether = mean or not. Note that performance accuracy is measured by choice of the SEEN test image and not by choice of the test image closer to the mean, though our central interest is the effect of mean perception on this choice

Khayat and Hochstein (2018) found that when participants chose which of two test stimuli had been present in the preceding RSVP sequence, they tended to select the test image that was closer to the set mean property, even when it was never presented, suggesting that average size, orientation, and brightness are automatically and implicitly encoded (see Maule et al., 2014, regarding mean color). Note that the variables tested—size, orientation, and brightness—have different representations in visual cortex (Gardner et al., 2005; Konkle & Oliva, 2012; Shapley et al., 2003). These findings confirmed earlier results by Corbett and Oriet (2011), who used an attentional blink paradigm and found implicit perception of an RSVP sequence mean. Similar characteristics were also found when testing memory of objects belonging to a particular category, presented in an RSVP sequence, with observers perceiving the objects’ category prototype (similar to the set mean) and the category itself (similar to the range; Khayat et al., 2021; Khayat & Hochstein, 2019).

In the current study, we test both implicit and explicit ensemble perception, comparing their precision in the same participants. Furthermore, while most studies presented ensemble stimuli simultaneously, and only a few presented them serially (e.g., Corbett & Oriet, 2011; Khayat & Hochstein, 2018), we now test three presentation modes (temporal, spatial, and spatio-temporal), allowing direct comparison of results reflecting integration mechanisms over time and/or space. Thus, a central goal of the present study is comparison of ensemble perception across different features, different presentation modes, and explicit/implicit processing.

The widespread parallels at multiple levels of cortical representation suggest that they reflect basic brain-processing principles. Here we also seek to determine the relationships among the mechanisms underlying these different tasks. Is there a single “averaging” mechanism that performs mean perception for sets differing in various features and/or spread over space or time, whether observers perform an averaging task or implicitly perceive the mean while engaged in an unrelated task? Or are there separate cerebral mechanisms for some or all of these different tasks?

Previous studies investigating the relationships among ensemble perception of different features have yielded mixed results. Comparing performance for two low-level features (length and orientation of lines) produced both significant (Kacin et al., 2021) and nonsignificant (Yörük & Boduroglu, 2020) individual-differences correlations. Tests of high-level object ensemble perception (planes, birds, cars) found significant correlations (Chang & Gauthier, 2021). On the other hand, Haberman et al. (2015) compared several low- and high-level stimulus ensemble representations and found no significant correlations. Taken together, these studies suggest there is no “domain-general” mechanism, though there may be common mechanisms for features at similar levels. Note that even if the same computation is used for different features, it may be performed by repeated, local mechanisms, one for each.

Besides the two-test-image paradigm described above and in Fig. 1, Top, a, in the current study we also use an alternative testing paradigm, with a single test image, as shown in Fig. 1, Top, b. For the implicit ensemble test, participants are asked if the test image was a member of the set; for the explicit test, they are asked if it is greater than the set mean (larger size, more clockwise orientation, or lighter brightness). As in the 2-test-image paradigm, the single test image could be a sequence member (SEEN) or not included in the sequence (NEW); could equal the mean (if SEEN, better implicit membership-task performance; if NEW, worse implicit membership-task performance); or, if NEW, could be outside the sequence range (best implicit membership-task performance by easy rejection, and best explicit performance by easy comparison to the perceived mean). The importance of using these two testing paradigms is that the 2-test-image paradigm allows a more direct comparison between two tests of implicit mean perception, while the single-test-image paradigm allows a more direct comparison of explicit and implicit mean perception.

Participants

Ninety-six master workers were recruited from the Amazon Mechanical Turk (MTurk) platform, a crowdsourcing platform enabling recruitment of online participants for uploaded tasks. Each observer participated in 6 experimental sessions, 3 testing implicit ensemble perception, followed by 3 testing explicit mean perception. In each case, the 3 sessions differed in presentation mode, one each with temporal, spatio-temporal, or spatial presentation, in this order. Each session had 3 blocks of trials testing circle size, line orientation, and disc brightness, respectively, in this order. All 96 participants completed the full 6 experimental sessions (55 with the 2-test-image paradigm; 41 with the 1-test-image paradigm).

Stimuli

All stimuli were created using Python 3.7, and the experiment was designed using JavaScript and uploaded to the online MTurk platform. Stimuli in the different blocks of each experimental session were circles of different sizes, bars of different orientations, or discs of different brightness. Each set contained 8 images, presented in random order and/or position. Stimulus distributions were as follows: the full range of each stimulus property was divided into 30 equidistant arbitrary units; on each trial, the set range was limited to 8–21 units, and the difference between adjacent-valued stimuli in a trial was below 5 units.

In circle-size blocks, ensembles consisted of hollow circles with different diameters. Each arbitrary unit of size represents an incremental radius of 6 pixels (5 for spatial presentation), so the full range of sizes was 1–30 units, or 6–180 pixels (5–150 for spatial presentation). In line-orientation blocks, each unit represents 6°, and the full range of orientations was 6–180°. In disc-brightness blocks, each unit represents 2% of maximal screen brightness, and the full range of brightness was 21–79% of maximum screen brightness (RGB [255, 255, 255]). Disc diameter was 250 pixels, with a 5-pixel black border.
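The unit-to-stimulus mappings just described can be sketched in a few lines (a minimal sketch; the helper names are ours, not the study’s own stimulus code):

```python
# Arbitrary-unit -> stimulus-value mappings, following the conversions in the text.

def circle_radius_px(unit: int, spatial: bool = False) -> int:
    """Radius in pixels: 6 px per unit (5 px per unit for spatial presentation)."""
    return unit * (5 if spatial else 6)

def line_orientation_deg(unit: int) -> int:
    """Orientation in degrees: 6 degrees per unit, so units 1-30 span 6-180."""
    return unit * 6

def disc_brightness_pct(unit: int) -> int:
    """Brightness in % of maximal screen brightness: 21% at unit 1, +2% per unit."""
    return 21 + (unit - 1) * 2

print(circle_radius_px(30), line_orientation_deg(30), disc_brightness_pct(30))  # 180 180 79
```

Note that the brightness scale starts at 21% rather than 0%, so 30 units of 2% each end at 79% of maximum, matching the stated 21–79% range.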

There were 100 trials per block (300 trials per session). In the 2-test image sessions, the 100 trials included 40 baseline trials and 20 trials each for the other trial subtypes (SEEN = mean, NEW = mean, NEW = out). In the 1-test image sessions, the 100 trials included 20 trials each for 5 trial subtypes (test image: SEEN = mean, NEW = mean, SEEN ≠ mean, NEW ≠ mean, NEW out of range).

Experimental design

We used the implicit ensemble averaging paradigm devised by Khayat and Hochstein (2018) and demonstrated in Fig. 1. Trial design was similar in all experiments; the different conditions were determined by the stimulus feature and the presentation mode. Participants were instructed to sit 57 cm from the computer screen. Each trial began with a fixation cross appearing in the center of the screen for 500 ms; then, following an observer press of the space bar, a set of 8 stimuli was presented in one of three modes: temporal presentation : a serial sequence with 100 ms/stimulus and 100-ms interstimulus intervals (ISIs), followed by a masking stimulus to limit within-trial recency effects; spatial presentation : all 8 stimuli presented simultaneously for 500 ms, positioned randomly within a 4-wide × 3-high lattice; spatio-temporal presentation : a serial sequence of 100 ms/stimulus and 100-ms ISIs, with stimuli in different positions (like those of the spatial presentation), in random order.

After presentation of the set stimuli, we had 4 test paradigms, in a 2 × 2 design: implicit vs. explicit and 2 test images vs. 1 test image. For the implicit-2-test-images paradigm, a two-alternative forced-choice (2-AFC) membership task was tested: two test images were presented side by side, and participants were instructed to indicate which one was a member of the sequence by pressing the keyboard's left or right arrow (Fig. 1 a). Response time was unlimited, but we discarded responses with times longer than 3 s. There was always one test image that was present in the set, i.e., the "SEEN" image (the correct response), and another that was not—that is, the "NEW" image (the incorrect response). SEEN and NEW were pseudorandomly located on the left or right side of the display. For the implicit-1-test-image paradigm, a single test image was presented in the middle of the screen; the image was randomly either SEEN or NEW, and participants were asked to judge if it had been presented in the set sequence. In either case, we expect participants to have little chance of remembering which stimuli were in the set and which were not. Instead, as we have found previously, they base their decisions on the distance of the test stimuli from the set mean. It is in this way that these trials test implicit perception and memory of the set mean.

Only following 3 experimental sessions with implicit tests, either all with two or all with one test image, did we begin sessions with explicit mean tests. For the explicit-2-test images paradigm, participants were asked which of the two test images is closer to the set mean. For the explicit-1-test image paradigm, participants were asked if the test image was larger than the set mean circle size, more clockwise than the set mean line orientation, or brighter than the set mean disc brightness. Thus, in these sessions, we explicitly mentioned (for the first time) the notion of mean, and asked participants to assess the mean and use it for deciding on their responses.

The order of the Results section is as follows: First we present results of participants for whom we used the two test images paradigm, for implicit and then explicit tests, followed by their comparison. Next, we present results of participants for whom we used the one test image paradigm, again, for implicit and then explicit tests, followed by their comparison.

Data analysis and statistical tests

The basic method of estimating the different implicit biases was comparison of membership-task accuracy across trial conditions, assuming dependence on implicit mean perception. As already established using the implicit-2-test-image paradigm (Khayat & Hochstein, 2018), we measured accuracy of determining test-image membership in the set for 4 different trial subtypes, as described in Fig. 2 and Table 1. Trial subtypes were pseudorandomly mixed in each session, and participants were not aware of this division. To assess the gradual effect of test-image distance from the mean, we measured test-image membership performance as a function of the parameter Δ, which represents the difference between the two test images’ distances from the mean (Fig. 2). Positive Δ corresponds to trials where the SEEN image is closer to the mean (increasing accuracy), and negative Δ to trials where the NEW image is closer (where we expect more frequent choice of the NEW test image, lowering accuracy below 50%). This measure is more informative and detailed than the rough division into trial subtypes, in which the test images are either exactly equal to the mean or not, as it incorporates the distances of both test images from the mean. This paradigm was found to provide robust effects of the trial mean and range on performance (Khayat & Hochstein, 2018, 2019; Khayat et al., 2021). The analysis of membership-task performance versus Δ was also done separately for trials where both test images are within the trial range, to dissociate the mean effect from the robust range effect—that is, rejection of test images outside the set range.

Data were analyzed using MATLAB 2020b, SPSS 28.0 and Excel. Trials with RT below 200 ms or above 3 s were excluded from the analysis.

The dependence of membership-trial results on implicit mean perception was assessed by fitting a Gaussian curve to the data, following the equation \(y=a* {e}^{-\frac{{\left(x-c\right)}^{2}}{2{\upsigma }^{2}}}\) , where y = fraction reporting “member” and x = distance of the (chosen) test image from the mean, with Gaussian parameters of height (a), width (σ), and center (c).

The gradual explicit mean effect, as a function of the distances of the test images from the mean, was fit to the sigmoid function \(y=\mathit{min}+\frac{(max-min)}{1+{e}^{\left(-slope*\left(x-c\right)\right)}}\) , where y = fraction reporting larger than, or closer to, the mean and x = distance of the 1 test image from the mean, or difference of distances of the 2 test images from the mean, with sigmoid parameters of minimum ( min ), maximum ( max ), slope ( slope ), and center ( c ).
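As a sketch of how such fits can be computed, the two functions above can be fit with SciPy’s curve_fit. The data here are synthetic stand-ins; the study’s actual analysis used MATLAB and SPSS, so this is illustrative only:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, c, sigma):
    # y = a * exp(-(x - c)^2 / (2 * sigma^2)); height a, center c, width sigma
    return a * np.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def sigmoid(x, y_min, y_max, slope, c):
    # y = min + (max - min) / (1 + exp(-slope * (x - c)))
    return y_min + (y_max - y_min) / (1 + np.exp(-slope * (x - c)))

x = np.arange(-15, 16)  # distance (or difference of distances) from the mean, in units
rng = np.random.default_rng(0)

# Synthetic "fraction responding member" data, fit by the Gaussian
y_member = gaussian(x, 0.9, 0.0, 5.0) + rng.normal(0.0, 0.02, x.size)
(a, c, sigma), _ = curve_fit(gaussian, x, y_member, p0=[1.0, 0.0, 4.0])

# Synthetic "fraction choosing the closer/SEEN image" data, fit by the sigmoid
y_seen = sigmoid(x, 0.1, 0.95, 0.3, 0.0) + rng.normal(0.0, 0.02, x.size)
(y_min, y_max, slope, c2), _ = curve_fit(sigmoid, x, y_seen, p0=[0.0, 1.0, 0.2, 0.0])

print(f"Gaussian width sigma ~ {sigma:.1f}, sigmoid slope ~ {slope:.2f}")
```

The fitted σ and slope are the quantities reported in Tables 2 and 3, respectively: σ indexes the precision of the mean representation, and the midpoint slope indexes how sharply choices follow the distance from the mean.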

Experiment 1: 2-test-images implicit test paradigm

We found consistent dependence on trial subtype, i.e., whether the SEEN or the NEW test image was the set mean, neither was the set mean (baseline trials), or the NEW image was outside the set range (see Fig. 2 and Table 1). As demonstrated in Fig. 3, participants tended to choose the SEEN or the NEW image when it was the set mean (the mean effect), even though the NEW image was never among the set images (red bars are above the blue baseline, reflecting choice of the SEEN image when it equals the mean; orange bars are below baseline, reflecting infrequent choice of the SEEN image when the NEW image equals the mean). They were also better at choosing the SEEN image when the NEW image was outside the set range and could be rejected. These results held for all test variables (circle size, line orientation, disc brightness) and all presentation modes (temporal, spatio-temporal, spatial). A two-way repeated-measures ANOVA for subtype and presentation mode, with fraction choosing SEEN as the dependent variable, showed significant effects of subtype, F(3, 162) = 326, p < .001, and presentation, F(2, 108) = 3.3, p = .038, as well as a significant interaction between them, F(6, 324) = 19.6, p < .001, reflecting a smaller subtype dependence for spatial presentation. A two-way repeated-measures ANOVA for subtype and stimulus variable showed a significant effect of subtype, F(3, 162) = 337, p < .001, but a nonsignificant effect of stimulus variable, F(2, 108) = 0.795, p = .45, with, nevertheless, a significant interaction between them, F(6, 324) = 8.1, p < .001, due to a slightly reduced subtype dependence for brightness. The implicit effect of trial statistics (i.e., mean and range), assessed by comparing the different pairs of trial subtypes (SEEN = mean or NEW = mean vs. baseline, SEEN = mean vs. NEW = mean, and NEW = out vs. baseline), was highly significant with large effect sizes for all presentation and stimulus-variable blocks; for all comparisons, p < .001 and effect size Cohen’s d > 0.7 (in 7/36 cases, d > 0.5), except for the case of spatial presentation, circle size, where p < .02 (d = 0.35) for SEEN = mean vs. NEW = mean or baseline, and p = .28 (d = 0.14) for NEW = mean vs. baseline. The generally significant effect was also found on a participant-by-participant basis, despite performance scatter, as shown in Figs. 4 and 5. For example, 53 out of 55 participants showed more accurate performance (greater chance of choosing the SEEN image) for SEEN = mean than for NEW = mean, when averaging results across features and presentation modes, as shown in Fig. 5, right.

figure 3

Experiment 1 — implicit, 2-test image paradigm—Membership task performance as a function of trial subtype for three testing variables (columns: circle size, line orientation and disc brightness) and for three presentation modes (rows: temporal, spatio-temporal, and spatial). In every case, accuracy (proportion reporting that the SEEN image was present in the set) was greater for trials where SEEN = mean than those where NEW = mean, with the baseline subtype (neither = mean) between them, close to 50% chance performance. Best membership task performance was for NEW outside the sequence range and easily rejected. Error bars are standard error of the mean ( SEM ). (Color figure online)

figure 4

Experiment 1 — implicit, 2-test image paradigm—Performance for individual participants as a function of trial subtype for 3 presentation modes, averaging over test variables (top), for 3 test variables, averaging over presentation modes (middle), and averaging over all cases (bottom). Despite considerable scatter among participants, average membership task performance is clearly and significantly dependent on trial subtype. Each circle corresponds to a single participant’s performance; horizontal lines correspond to the average performance over participants; error bars are SEM . (Color figure online)

figure 5

Experiment 1 — implicit, 2-test image paradigm—Performance for individual participants as a function of which test image was closer to the mean, SEEN or NEW (excluding data for NEW out of set range). Performance, the fraction choosing the SEEN image, was superior for almost all participants, in all conditions, when the SEEN image was closer to the mean, despite considerable scatter among participants. Each circle, and the line connecting performance for the two conditions, corresponds to a single participant’s performance

We now look at the absolute distances from baseline performance for SEEN = mean and NEW = mean, both in the bar graphs of Fig. 3 and the scatter plots of Fig. 4. Implicit membership-task performance (reporting that the SEEN test image was present) is about 0.5 for baseline, better than baseline (~0.6) for SEEN = mean, and worse than baseline (~0.4) for NEW = mean. Note, however, that the absolute differences from baseline (|0.6 − 0.5| and |0.4 − 0.5|) are nearly equal and opposite. For all stimulus variables and presentation modes, the difference between these two absolute deviations of task accuracy (fraction selecting the SEEN image) from baseline, for trials where the SEEN image versus the NEW image equals the mean, was not significant (two-tailed t test, p = .28–.98). This is what would be expected if participants basically lack knowledge of image membership and respond only on the basis of which test image is equal to the mean.

To include intermediate data in judging implicit mean perception—that is, not just the cases where the SEEN or NEW test image equals the set mean—we introduce a new parameter, Δ. For each pair of test images, we measure the absolute distance of each test image from the mean of the set. We then take the difference between these distances—the absolute distance of the NEW image from the mean less the absolute distance of the SEEN image from the mean—and call this difference Δ (see examples in Fig. 2). We then plot the fraction selecting the SEEN image as a function of Δ. As shown in Fig. 6, the result is a sigmoidal curve crossing Δ = 0 (SEEN and NEW test images equidistant from the mean) near accuracy = 0.5, that is, chance performance. This, too, is true for all variables (size, orientation, and brightness) and presentation modes (temporal, spatio-temporal, spatial). Sigmoid curves (black) in Fig. 6 are best fits to the function \(y=\mathit{min}+\frac{(max-min)}{1+{e}^{\left(-slope*\left(x-c\right)\right)}}\) , with parameter ranges: min = 0 to 0.4; max = 0.64 to 1.0; c = −1.3 to 5.9; slope = 0.14 to 0.36/unit. The slopes for these data are presented in Table 3.
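The computation of Δ just described amounts to a single line (a minimal sketch; the variable names are ours):

```python
def delta(seen: float, new: float, set_mean: float) -> float:
    """Difference of test-image distances from the set mean:
    delta = |NEW - mean| - |SEEN - mean|.
    Positive when the SEEN image is closer to the mean (accuracy expected
    above chance); negative when the NEW image is closer (below chance).
    """
    return abs(new - set_mean) - abs(seen - set_mean)

# Examples in arbitrary units, with set mean = 15:
print(delta(seen=14, new=20, set_mean=15))  # 4: SEEN closer to the mean
print(delta(seen=10, new=16, set_mean=15))  # -4: NEW closer to the mean
print(delta(seen=15, new=11, set_mean=15))  # 4: SEEN equals the mean
```

Plotting fraction-SEEN-chosen against this signed quantity yields the sigmoid of Fig. 6, with Δ = 0 corresponding to equidistant test images.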

figure 6

Experiment 1 — implicit, 2-test image paradigm—Membership task performance as a function of parameter Δ for three test variables and three presentation modes. Graphs show data and best-fit sigmoid function, including data for NEW out of set range. Δ is the difference between absolute distances of NEW and SEEN images from the mean, with SEEN closer to mean on the right side of each graph, NEW closer on the left. Choice of test image closer to the mean reflects implicit ensemble perception. Red, orange, blue and gray data points reflect average performance for trials where, respectively, SEEN = mean, NEW = mean, neither = mean (baseline), and NEW is outside the ensemble range. Green curves are integral of data from Fig. 7 . (Color figure online)

Another important aspect of ensemble mean perception is the degree of precision of the percept: How precise, or how broad, is the representation of the set mean? This aspect of ensemble perception has only rarely been dealt with previously (e.g., Hansmann-Roth et al., 2021). To measure precision, we plot the fraction of participant responses of test-image presence in the set as a function of the distance of the test image from the mean. Since this was a 2-AFC test, participants needed to choose one of the two test images, which they did by judging which was closer to the mean (as shown already in Fig. 6). If ensemble perception were simply a matter of “equal or not equal to the mean,” then responses should drop to 50% whenever the chosen test image is not equal to the mean. Figure 7 demonstrates that this is not the case. We plot the rate of choosing a test image (whether SEEN or NEW) as a function of its distance from the mean. There is a gradual, Gaussian-like decay from the peak, at the point where the test image equals the mean. The width (standard deviation) of the best-fit Gaussian curve is a measure of the precision of the representation of the set mean. Table 2 presents σ, the Gaussian curve standard deviation ( SD ), for the averages over variable and/or over presentation mode.

figure 7

Experiment 1 — implicit, 2-test image paradigm—Fraction responding “member of set” as a function of distance of chosen test image, whether SEEN or NEW, from set mean. Data for 3 variables and 3 presentation modes, and averages over variables, presentations, and both. Each graph shows data and best-fit Gaussian function, including data of trials with NEW out of set range. Choice of the test image closer to the mean reflects implicit ensemble perception. Framed and nonframed circles correspond respectively to fraction of selecting the NEW or the SEEN image. Blue curves are derivative of black curves of Fig. 6 , where x -axis is not distance from mean but difference of distances from mean (Δ). (Color figure online)

There is, of course, a mathematical connection between Gaussian and sigmoid curves: the Gaussian is the derivative of the sigmoid, and the sigmoid is the integral of the Gaussian (in both cases appropriately normalized). If the sigmoid curves in Fig. 6 (performance as a function of Δ, the difference of distances from the mean) reflect the same ensemble-mean-perception mechanism as the Gaussian curves of Fig. 7 (choice of test image as a function of distance from the mean), then the curves derived from each should match the other. This is indeed the case, as follows: the green sigmoid curves in Fig. 6 are the integrals of the corresponding (black) data Gaussian curves in Fig. 7, and the blue Gaussian curves in Fig. 7 are the derivatives of the corresponding (black) data sigmoid curves of Fig. 6. There is a close resemblance in all cases. Table 2 compares the Gaussian curve standard deviations for these data, and Table 3 compares the sigmoid slopes.
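This derivative/integral relationship is easy to verify numerically (a generic NumPy sketch, independent of the study’s actual data):

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
sigma = 5.0

# Unit-height Gaussian centered at 0 (cf. the implicit-perception curves)
gauss = np.exp(-x**2 / (2.0 * sigma**2))

# Cumulative integral, normalized to run from 0 to 1: a sigmoid-shaped curve
sig = np.cumsum(gauss) * dx
sig /= sig[-1]

# Differentiating the sigmoid (and renormalizing) recovers the Gaussian shape
recovered = np.gradient(sig, dx)
recovered /= recovered.max()

print(np.allclose(recovered, gauss, atol=1e-3))  # the two curves coincide
```

The normalization steps matter: the raw integral of a Gaussian runs from 0 to its total area, so both curves must be rescaled before comparison, just as the paper’s Figs. 6, 7, 9, and 10 use normalized data.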

Experiment 1: 2-test-images explicit test paradigm

Following the 3 sessions testing implicit ensemble perception, we next tested explicit ensemble perception. Participants were asked, for the first time, to evaluate the mean of the set of images and then judge which of the two test images was closer to the set mean in terms of size, orientation, or brightness. We expect participants to be accurate when the difference in distance from the mean between the two test images is large, and less accurate when the distances are similar. Indeed, results follow a sigmoid curve, as shown in Fig. 8. Note that here the choice is between the test image that is closer to the mean and the one that is further from it. Table 5 (right values) presents the slopes of these best-fit curves at midpoint.

figure 8

Results for Experiment 1 — Explicit, 2-test image paradigm. Sigmoid curves of mean estimation accuracy performance as function of the relative distance of the test images (target and distractor, closer and further) from the trial mean (i.e., Δmean). A gradual increase in task accuracy is seen as a function of the difference of the test image distances from the mean, for all stimulus variables and presentation modes

A one-way repeated-measures ANOVA (46 participants) with slope as the dependent variable and the 3 stimulus variables (size, orientation, brightness) as the independent variable (averaged across presentation modes) showed no significant difference, F(2, 90) = 0.39, p = .6. A one-way repeated-measures ANOVA (44 participants) with the 3 presentation modes (temporal, spatial, spatio-temporal) as the independent variable (averaged across stimulus variables) also showed no significant effect, F(2, 86) = 0.79, p = .45.

Experiment 1: 2-test-images, comparing implicit and explicit perception

To compare implicit and explicit ensemble perception, we plot in Fig. 9 the normalized sigmoid curves of implicit perception (black) from Fig. 6 and the normalized explicit sigmoid curves (blue) from Fig. 8. Comparing these curves, and in particular the slopes at the center (c), we see that the sigmoid curves for explicit perception (blue) are significantly steeper than those for implicit membership-task performance (black) (within-subject data, averaged across presentation and stimulus types: t test, p < .001; effect size, Cohen’s d = 0.97). The slopes at midpoint for these curves are compared in Table 5, left versus right values for implicit versus explicit data, respectively. Note the large discrepancies between the explicit and implicit values, reflecting the steeper slopes and more precise ensemble perception for explicit tests.

figure 9

Experiment 1 — 2-test images paradigm—Comparing implicit and explicit ensemble perception. Normalized data for each set variable and presentation mode, and their averages. Implicit test data (black) from Fig. 6 : participants asked which test image was a member of the previously presented set; Δ is the difference between distances of NEW and SEEN images from the mean. Explicit test data (blue): participants asked to explicitly estimate set mean and judge which of 2 test images is closer to the set mean; normalized data from Fig. 8 , showing normalized fraction of responding to the closer test image as a function of variable Δ, the difference in distances of the test images from the mean. The explicit (blue) sigmoid has a sharper slope, compared to that of the implicit (black) sigmoid. (Color figure online)

A similar comparison can be made for the Gaussian curves for implicit perception from Fig. 7 , and the Gaussian curves that can be derived by taking the derivatives of the explicit sigmoid curves of Fig. 9 , as shown in Fig. 10 (implicit: black; derived from explicit: blue). Again, the explicit perception curves are narrower than the implicit perception curves, suggesting that explicit ensemble perception is more precise than implicit perception. The best-fit Gaussian curve widths are compared in Table 4 , left and right values for implicit versus explicit data, respectively. Note the large discrepancies between the explicit and implicit values, reflecting the narrower curves and more precise ensemble perception for explicit tests (Table 5 ).

figure 10

Experiment 1 — comparing implicit and explicit ensemble perception, 2-test images paradigm. Normalized data for each set variable and presentation mode, and their averages. Implicit test data (black) from graphs of Fig. 7: fraction of choosing the test image as a function of its distance from the mean. Explicit test data (blue): Gaussian curve derived by taking the derivative of the (blue) sigmoid curve of Fig. 9 as a function of the difference, Δ, of the test-image distances from the mean. Note the narrower explicit (blue) Gaussian curve, compared to the width of the implicit (black) Gaussian. (Color figure online)

Experiment 2: 1-test-image implicit test paradigm

We move now to the second experiment, in which the implicit and explicit tests were performed with a single test image; here the implicit-to-explicit comparison is more direct. A different group of participants (n = 41) was tested. For the first 3 implicit sessions, participants were asked to judge if the test image had been included in the set (see Methods and Fig. 1, Top, b; we present the results of the subsequent 3 explicit sessions below). As in Experiment 1, for the implicit sessions, we assume that it is difficult for participants to judge set membership, given our brief presentation and the random spacing of set members within the set range. Thus, as found above and previously (Khayat & Hochstein, 2018, 2019; and Experiment 1), participants judge membership by test-image proximity to the set mean. We find a trial-by-trial Gaussian dependence of membership report on test-image distance from the set mean, as demonstrated in Fig. 11, for the different variables (size, orientation, brightness) and different presentation modes (temporal, spatio-temporal, spatial), and their averages.

figure 11

Experiment 2 — implicit, 1-test image paradigm—Fraction responding “member of set” as a function of distance of the single test image from the set mean. Columns: data for 3 variables: circle size, line orientation, disc brightness, and their average; Rows: data for 3 presentation modes: temporal, spatio-temporal, spatial, and their average. Graphs show data and best-fit Gaussian function (black), including data for test images included in (red) or excluded from (orange) the set, or out of set range (gray). Attributing membership on the basis of test-image proximity to the set mean reflects implicit ensemble perception. Blue curves are derivatives of corresponding sigmoid curves (of the explicit task; Fig. 12). (Color figure online)

Figure 11 includes data for cases when the test image was included in the set (SEEN image; red symbols) and when it was not (NEW; orange). The finding that there is no difference between these cases reflects participants’ lack of knowledge concerning individual set images ( t test, p > .2). Data are also shown for cases when the test image was outside the range of the set (gray), where the very low probability of responding “set member” indicates that participants perceive the set range and reject outsiders. Table 6 shows σ (standard deviation, SD ) of the Gaussian curves of Fig. 11.

Results for each participant separately, per presentation mode and stimulus variable, are quite noisy, and it was not possible to fit Gaussian curves in all cases. We therefore averaged results over all presentation modes or over all stimulus variables for computing ANOVAs, where possible. A one-way repeated-measures ANOVA (21 participants) with σ as the dependent variable and the 3 stimulus variables (size, orientation, brightness) as the independent variable (averaged across presentation modes) showed no significant difference, F(2, 40) = 2.66, p = .082. A one-way repeated-measures ANOVA (24 participants) with the 3 presentation modes (temporal, spatial, spatio-temporal) as the independent variable (averaged across stimulus variables) showed a significant effect, F(2, 46) = 6.75, p < .01. Post hoc Type 2 t tests showed significant differences for spatial versus either of the other presentation modes ( p < .05), and a nonsignificant difference between temporal and spatio-temporal presentations.

Experiment 2: 1-test image-explicit test paradigm

As we did for Experiment 1, in the second part of Experiment 2 with one test image, following the 3 sessions testing implicit ensemble perception, we now tested explicit ensemble perception. Participants were directly asked to evaluate the mean of the set of images, and then to judge if the presented test image was greater than the set mean—that is, if the test circle was larger than the mean size of the set, if the test line orientation was more clockwise than the set mean orientation, or if the test disc was brighter than the set mean brightness. We expect participants to be accurate when the test image is much greater (larger, more clockwise, brighter, leading to 100% positive responses) or much less (smaller, more counterclockwise, or less bright, leading to 0% positive responses). When the test image equals or is close to the mean, responses should be close to 50% chance (or reflect response biases), and intermediate test cases should follow a sigmoidal curve. This was indeed the case, as displayed in Fig. 12, showing results for the different presentation modes and perceptual variables. There is no difference between data for test images included in (red symbols) or excluded from (orange) the set (t test, p > .4). Test images beyond the ensemble range (gray) show close to perfect performance: close to zero if much smaller, and close to 1 if much larger than the range. Table 7 presents values of the parameters of the best-fit sigmoid curves for the displays of Fig. 12.
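Sigmoid curves like those in Fig. 12 can be fit in several ways; as a hedged sketch (assuming a logistic form and a simple least-squares grid search, rather than whatever fitting routine the authors actually used), the mean and slope parameters can be recovered from fraction-"greater" data as follows:

```python
import math

def logistic(x, mu, s):
    # Logistic psychometric function: 0.5 at x = mu, slope set by s
    return 1.0 / (1.0 + math.exp(-(x - mu) / s))

def fit_sigmoid(xs, ps, mus, slopes):
    """Least-squares grid search over candidate (mu, s) pairs."""
    best = None
    for mu in mus:
        for s in slopes:
            err = sum((logistic(x, mu, s) - p) ** 2 for x, p in zip(xs, ps))
            if best is None or err < best[0]:
                best = (err, mu, s)
    return best[1], best[2]
```

A quick self-check: data generated from a known curve should be recovered exactly when the true parameters lie on the search grid; with real, noisy response fractions the fit instead returns the nearest grid point minimizing squared error.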

figure 12

Experiment 2— explicit, 1-test image paradigm—Participants were explicitly asked to estimate the set mean and compare it to the test image. Fraction responding “greater” (larger, more clockwise, brighter) as a function of test image size/orientation/brightness compared to set mean. Columns: data for 3 variables. Rows: data for 3 presentation modes. Graphs show data and best-fit sigmoid (black), including data for test images included in (red) or excluded from (orange) the set, or out of set range (gray). Green curves are integrals of corresponding Gaussian curves of the implicit task (Fig. 11). (Color figure online)

A one-way repeated-measures ANOVA (25 participants) with slope as the dependent variable and the 3 stimulus variables, size, orientation, and brightness, as independent variable (averaged across presentation modes) showed no significant difference, F(2, 48) = 0.053, p = .9. A one-way repeated-measures ANOVA (26 participants) with the 3 presentation modes, temporal, spatial, and spatio-temporal, as independent variable (averaged across stimulus variables) also showed no significant effect, F(2, 50) = 2.66, p = .08. Thus, we conclude that there is little difference, if any, between performance of the tasks for different stimulus variables or for different modes of presentation.

Experiment 2: 1-test image —Comparing implicit and explicit perception

We now compare the results for implicit and explicit mean perception. Do they depend on the same neural computation, leading to identical performance, or is performance different, opening the possibility that they depend on separate mechanisms? Having found that implicit membership-test mean perception follows a Gaussian dependence on the distance of the test image from the set mean (Fig. 11), and that explicit mean perception follows a sigmoidal dependence on the same distance (Fig. 12), we directly compare the results. This is the same comparison method that we used in Figs. 6 and 7, but there we compared two, albeit different, implicit tests, finding no difference between them, whereas here we test implicit and explicit ensemble perception and ask if these, too, are identical (as we did in Figs. 9 and 10).

We use the same natural connection between Gaussian and sigmoidal curves. We compute the integral of the Gaussian best-fit (black) curve of each graph of Fig. 11 and plot the results as the green curves in Fig. 12. Similarly, we compute the derivative of the sigmoid best-fit (black) curves in the graphs of Fig. 12 and plot the results as the blue curves in the graphs of Fig. 11. In all cases, the green sigmoid curves in Fig. 12, derived from the implicit data, have shallower slopes than the black curves, which are the best fit to the explicit data (within-subject data, averaged across presentation modes and stimulus types: t test, p < .001; effect size, Cohen’s d = 1.45). Similarly, in all cases, the blue Gaussian curves of Fig. 11, derived from the explicit data, are narrower (smaller standard deviation) than the black curves, which are the best fit to the implicit data. Tables 6 and 7 show the values of the parameters of these derived curves and compare them with those of the directly measured curves.
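The Gaussian–sigmoid connection used here is simply the calculus identity between a density and its cumulative: integrating a Gaussian yields a sigmoid-shaped cumulative curve, and differentiating that cumulative curve recovers the Gaussian. A small numerical check of this identity (illustrative only; the curve parameters here are arbitrary, not the paper's fitted values):

```python
import math

def gauss_pdf(x, mu, sigma):
    # Gaussian density
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def gauss_cdf(x, mu, sigma):
    # Cumulative Gaussian (the sigmoid-shaped integral), via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def num_deriv(f, x, h=1e-5):
    # Central-difference derivative; applied to the cumulative curve it
    # recovers the Gaussian, mirroring how the blue curves of Fig. 11 are
    # derived from the sigmoids of Fig. 12
    return (f(x + h) - f(x - h)) / (2 * h)
```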

We conclude that explicit mean perception is more precise than implicit mean perception, in that it results in a sharper dependence on distance from the mean, seen in both the steeper sigmoid and narrower Gaussian curves. See below ( Discussion and Fig. 14 ) where this result is summarized, comparing data averaged over all presentation modes and variables.
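The two signatures of precision are not independent: the slope of a cumulative Gaussian at its mean equals the peak of the corresponding density, 1/(σ√(2π)), so a narrower Gaussian necessarily implies a steeper sigmoid. A quick numerical illustration (using normalized curves with arbitrary σ values, not the fitted parameters of Tables 6 and 7):

```python
import math

def gauss_cdf(x, mu, sigma):
    # Cumulative Gaussian via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def slope_at_mean(sigma, h=1e-6):
    # Central-difference slope of the cumulative curve at x = mu (mu = 0 here);
    # analytically this equals 1 / (sigma * sqrt(2 * pi))
    return (gauss_cdf(h, 0.0, sigma) - gauss_cdf(-h, 0.0, sigma)) / (2 * h)
```

For example, halving σ from 2.0 to 0.5 makes the slope at the mean about four times steeper, which is the sense in which the narrower explicit Gaussians and steeper explicit sigmoids express the same gain in precision.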

An objection to this conclusion may arise from the following consideration. We were careful in our experiments to first test implicit ensemble perception, and only following these 3 sessions to test explicit perception. This was done to avoid participants consciously knowing that our implicit tests involve mean computation, as we explicitly told them in advance of the explicit ensemble perception tests. The potential objection derives from the possibility that considerable perceptual learning was the cause of the better performance found for the explicit tests than for the implicit tests. Indeed, we have previously reported perceptual learning of ensemble perception, though there participants performed many more than 3 sessions (Hochstein & Pavlovskaya, 2020; see Hochstein et al., 2018). To rule out this potential confound, we tested a new set of naïve participants, chosen for not having had experience with any previous ensemble perception test, using the same 1-test image-explicit test paradigm. Due to the difficulty of recruiting naïve participants, we tested only the temporal presentation mode, but all three test variables. Results for the 7 naïve participants are shown in Fig. 13 (orange symbols), for the 3 variables together. The explicit perception curves for participants tested after 3 implicit performance sessions (red) and for naïve participants tested without any prior experience (orange) are nearly identical. The curves derived from implicit performance (green) are significantly shallower.

figure 13

Results for Experiment 2— implicit and explicit, 1-test image paradigm, temporal presentation, comparing explicit and implicit ensemble perception, and introducing results for naïve participants who performed the explicit tests without prior experience with the implicit tests. Plot shows performance dependence on distance of the test image from the set mean, for the 3 variables together. The 3 curves are the best fits for the explicit test, as in Fig. 12 (red), the explicit test of the naïve participants (orange) and the sigmoid curves (of Fig. 12 ) derived from the implicit data of Fig. 11 (green). The orange and red explicit curves are very similar; the green implicit curve is significantly shallower, confirming that the difference between explicit and implicit ensemble performance does not derive from prior experience with the implicit tests. (Color figure online)

Discussion and conclusions

Most studies in the field of ensemble perception designed experiments with an explicit averaging task, i.e., asking participants to assess the ensemble mean. Such designs typically ask observers to adjust a test probe to reproduce the ensemble mean (e.g., Haberman et al., 2015), report on which side, or in which direction, on the feature scale a test item is located with respect to the mean (e.g., Haberman & Whitney, 2009), or compare two sets and report which set mean is more extreme (e.g., larger/smaller, clockwise/counterclockwise, happier/sadder) on the feature scale (e.g., Chong & Treisman, 2003). Using these tasks, participants may spread their attention across the display and try to perceive a global summary percept. In contrast to this goal-driven process, implicit ensemble perception tasks are quite different, as participants do not recruit attentional resources for the global statistical properties. The effects of implicit ensemble perception are measured indirectly by their influence on some orthogonal task, such as membership tasks (e.g., Khayat & Hochstein, 2018) or visual search tasks (e.g., Chetverikov et al., 2016). Different processes may be used for explicit versus implicit task types, and it seems that at least some processing mechanisms, such as top-down attentional strategies, may be unique to explicit ensemble perception.

The methodology of the current study was designed to use comparable stimulus distributions and parameters (e.g., Δ) to test not only explicit and implicit mechanisms but also their use for perceiving ensembles of different features, and integration over space and time. We also employed two distinct experiments with different participants and different tasks (i.e., 1-test and 2-test image) to assess the consistency of this comparison.

We demonstrated both implicit and explicit ensemble perception for temporal, spatial and spatio-temporal presentation modes, and for ensembles with variables of circle size, line orientation, and disc brightness. In addition, we used 2 testing methodologies, with a single test image or by 2-alternative forced choice between two test images. The importance of this broad study lies first in demonstrating the ubiquitous nature of ensemble perception. Even when asked to judge whether a test image—or which of 2 test images—was present in the previously presented set of stimuli, participants always show a preference to respond according to the proximity of the test image(s) to the mean of the ensemble. With 2 test images, they more frequently choose the image closer to the mean, irrespective of whether that image was present in the set (Figs. 3, 4, 5, 6 and 7), and with 1 test image, the frequency of reporting presence in the set depends on the proximity of the test image to the set mean (Fig. 11). In both cases, choice is a Gaussian function of the distance from the mean, as demonstrated in Figs. 7 and 11, respectively. Furthermore, when presented with a 2-AFC test asking which test image was present in the set, participants choose the image that was closer to the mean, with a sigmoid dependence on the difference in distances of the test images from the mean, as shown in Fig. 6. Though tested in very different ways, these dependences of implicit perception of the ensemble mean on distance(s) from the mean are similar, so that the integral of the Gaussian (of Fig. 7) matches the sigmoid (of Fig. 6), and the derivative of the sigmoid (of Fig. 6) matches the Gaussian (of Fig. 7). Similarly, the red and orange data points in Figs. 11 and 12, reflecting choice of images that were present in or absent from the set, lie along the same Gaussian and sigmoid curves.

Importantly, this type of curve superposition was not found when comparing performance of these implicit ensemble perception tests with direct explicit perception tests. Only following the implicit tests were participants informed that they would now be tested on perception of the ensemble mean. With 2 test images, they were asked to judge which was closer to the mean, and with 1 test image, they were asked to judge if it was greater than the mean—that is, larger, more clockwise, or brighter than the set mean circle, line or disc. With 2 test images, explicit ensemble perception is reflected in a sigmoid dependence of choosing an image as closer to the mean on the difference between the distances of the two test images from the mean (Figs. 8 and 9). Note that this is dependence on the same parameter Δ, the difference in distances of the two test images from the mean, as used for the implicit 2-image ensemble perception test (Fig. 6), though here we test explicit choice of the image that is closer to the mean, rather than implicit use of Δ to choose images closer to the mean. With 1 test image, there is also a sigmoid dependence on the distance from the mean (Fig. 12): When the test image is much larger, more clockwise, or brighter than the mean, participants nearly always report “greater than the mean,” and when much smaller, more counterclockwise, or dimmer, they almost never report “greater,” with a sigmoid dependence between these extremes (Fig. 12). The important result is that when comparing these sigmoid curves with the implicit results, the two are not equivalent. Instead, explicit perception has a steeper sigmoid, and narrower Gaussian, as demonstrated in Figs. 11 and 12, for 1 test image, and Figs. 9 and 10, for 2 test images.
Figure 14 summarizes the results of these tests and comparisons, showing equivalence of different implicit tests (left column) and the lack of equivalence when comparing explicit and implicit tests of ensemble perception (central and right columns, for 1 and 2 test images, respectively).

figure 14

Summary of comparisons of results for different tests and measures, comparing Top : direct-result implicit Gaussian (purple) with derivative of either implicit (dashed purple) or explicit (orange) sigmoid; Bottom : direct-result implicit (purple) or explicit (orange) sigmoid curves, with curves derived by integral of implicit Gaussian (dashed purple). Averaged data over presentation modes and set variables, all normalized for comparison. Left: Experiment 1: 2-test image paradigm—Comparing different tests of implicit ensemble perception. Top: Gaussian curves from Fig. 7 ; Bottom: sigmoid curves from Fig. 6 . Note good coincidence of original and derived curves. Center (Experiment 2: 1-test image paradigm) and Right (Experiment 1: 2-test image paradigm) : Comparing implicit and explicit ensemble perception. Top: Implicit test: Gaussian curves (purple), fraction responding “member of set” as function of test image distance from set mean. Explicit test: Gaussian curves (orange) derived from sigmoid curves; from Figs. 10  and  11 ; Z -score normalization was done for all data. Bottom: Center: Experiment 2: 1-test image paradigm—Explicit test: sigmoid curve (orange), fraction responding “greater” (larger, more clockwise, brighter) as function of test image size/orientation/brightness compared to sequence mean; participants asked to compare test image to explicitly estimated set mean. Implicit test: sigmoid curve (purple) derived from Gaussian curve of implicit test; from Fig. 12 . Right: Experiment 1: 2-test image paradigm—Explicit test: sigmoid curve (orange), fraction explicitly choosing test image closer to mean as function of difference of test images’ distances from sequence mean. Implicit test: sigmoid curve (purple) of implicit test; from Fig.  9 . Note lack of coincidence of original and derived curves. Narrower Gaussians and steeper sigmoid slopes for explicit data indicate explicit ensemble perception is more precise. (Color figure online)

One of the goals of this broad comparative study was to seek evidence concerning the relationships among the mechanisms underlying different tasks. Is there a single “averaging” mechanism that performs mean perception for ensembles differing in various features, and/or spread over space or time, whether observers perform an averaging task explicitly or implicitly perceive the mean when engaged in an unrelated task, or are there separate cerebral mechanisms for some or all of these different tasks? Taking the above results together, a possible conclusion would be that the 3 stimulus variables and 3 presentation modes all use the same underlying mechanism(s) for computing the mean, since the results are so similar for all 9 tests (and for the 2 testing methodologies). The slight differences for spatial presentation might suggest a different mechanism for this mode. In contrast to these similarities, the significant difference in results for explicit and implicit ensemble perception might suggest that different mechanisms underlie these phenomena.

Interestingly, another recent discrimination was reported between explicit and implicit ensemble perception. Hansmann-Roth et al. ( 2021 ) report that conscious awareness appears to have access only to basic summary statistics (e.g., mean and variance), but the entire feature distribution has only implicit effects on behavior.

Nevertheless, we hesitate to conclude that this difference in precision necessarily reflects different underlying mechanisms. It is possible, as well, that the same mechanism is responsible for both implicit and explicit ensemble perception, but that this mechanism is used more efficiently, or depends on more reliable information, when attention is paid to the stimuli and their mean explicitly. Ultimately, resolving this issue of one or more mechanisms may depend on analysis of individual differences in these tests. We are now performing just such an analysis.

Alvarez, G. A. (2011). Representing multiple objects as an ensemble enhances visual cognition. Trends in Cognitive Science, 15 , 122–131.

Alvarez, G. A., & Oliva, A. (2008). The representation of simple ensemble visual features outside the focus of attention. Psychological Science, 19 , 392–398.

Ariely, D. (2001). Seeing sets: Representation by statistical properties. Psychological Science, 12 , 157–162.

Bauer, B. (2009). Does Stevens’s power law for brightness extend to perceptual brightness averaging? Psychological Record, 59 , 171–185.

Bauer, B. (2015). A selective summary of visual averaging research and issues up to 2000. Journal of Vision, 15 (14), 1–15.

Chang, T.-Y., & Gauthier, I. (2021). Domain-general ability underlies complex object ensemble processing. Journal of Experimental Psychology: General, 151 (4), 966–972.

Chetverikov, A., Campana, G., & Kristjánsson, Á. (2016). Building ensemble representations: How the shape of preceding distractor distributions affects visual search. Cognition, 153 , 196–210.

Chetverikov, A., Campana, G., & Kristjánsson, Á. (2017). Representing color ensembles. Psychological Science, 28 (10), 1510–1517. https://doi.org/10.1177/0956797617713787

Chong, S. C., & Treisman, A. (2003). Representation of statistical properties. Vision Research, 43 , 393–404.

Cohen, M. A., Dennett, D. C., & Kanwisher, N. (2016). What is the bandwidth of perceptual experience? Trends in Cognitive Science, 20 , 324–335.

Corbett, J. E., & Oriet, C. (2011). The whole is indeed more than the sum of its parts: Perceptual averaging in the absence of individual item representation. Acta Psychologica, 138 , 289–301.

Corbett, J. E., Utochkin, I., & Hochstein, S. (2023). The pervasiveness of ensemble perception: Not just your average review (Elements in Perception series). Cambridge University Press. https://doi.org/10.1017/9781009222716

Dakin, S. C., & Watt, R. (1997). The computation of orientation statistics from visual texture. Vision Research, 37 , 3181–3192.

de Fockert, J., & Wolfenstein, C. (2009). Rapid extraction of mean identity from sets of faces. Quarterly Journal of Experimental Psychology, 62 , 1716–1722.

Gardner, J. L., Sun, P., Waggoner, R. A., Ueno, K., Tanaka, K., & Cheng, K. (2005). Contrast adaptation and representation in human early visual cortex. Neuron, 47 (4), 607–620.

Haberman, J., & Whitney, D. (2007). Rapid extraction of mean emotion and gender from sets of faces. Current Biology, 17 , 751–753.

Haberman, J., & Whitney, D. (2009). Seeing the mean: Ensemble coding for sets of faces. Journal of Experimental Psychology: Human Perception and Performance, 35 (3), 718–734. https://doi.org/10.1037/a0013899

Haberman, J., & Whitney, D. (2012). Ensemble perception: Summarizing the scene and broadening the limits of visual processing. In J. Wolfe & L. Robertson (Eds.), From perception to consciousness: Searching with Anne Treisman (pp. 339–349). Oxford University Press.

Haberman, J., Brady, T. F., & Alvarez, G. A. (2015). Individual differences in ensemble perception reveal multiple, independent levels of ensemble representation. Journal of Experimental Psychology: General, 144 (2), 432.

Hansmann-Roth, S., Kristjánsson, Á., Whitney, D., & Chetverikov, A. (2021). Dissociating implicit and explicit ensemble representations reveals the limits of visual perception and the richness of behavior. Scientific Reports, 11 , 3889.

Hochstein, S. (2020). The gist of Anne Treisman’s revolution. Attention, Perception, & Psychophysics, 82 (1), 24–30.

Hochstein, S., & Ahissar, M. (2002). View from the top: Hierarchies and reverse hierarchies in the visual system. Neuron, 36 , 791–804.

Hochstein, S., & Pavlovskaya, M. (2020). Perceptual learning of ensemble and outlier perception. Journal of Vision , 20 (8):13, 1–17.

Hochstein, S., Pavlovskaya, M., Bonneh, Y. S., & Soroker, N. (2018) Comparing set summary statistics and outlier pop out in vision. Journal of Vision 18(13):12, 1–13. https://doi.org/10.1167/18.13.12

Hollingworth, H. L. (1910). The central tendency of judgment. The Journal of Philosophy, Psychology and Scientific Methods, 7 (17), 461–469.

Kacin, M., Gauthier, I., & Cha, O. (2021). Ensemble coding of average length and average orientation are correlated. Vision Research, 187 , 94–181.

Khayat, N., & Hochstein, S. (2018). Perceiving set mean and range: Automaticity and precision. Journal of Vision, 18 (9), 23.

Khayat, N., & Hochstein, S. (2019). Relating categorization to set summary statistics perception. Attention, Perception, & Psychophysics, 81 , 2850–2872.

Khayat, N., Fusi, S., & Hochstein, S. (2021). Perceiving ensemble statistics of novel image sets. Attention, Perception, & Psychophysics, 83 , 1312–1328.

Koffka, K. (1935). The principles of Gestalt psychology . Routledge.

Konkle, T., & Oliva, A. (2012). A real-world size organization of object responses in occipitotemporal cortex. Neuron, 74 , 1114–1124.

Lew, T. F., & Vul, E. (2015). Ensemble clustering in visual working memory biases location memories and reduces the Weber noise of relative positions. Journal of Vision, 15 , 10. https://doi.org/10.1167/15.4.10

Maule, J., Witzel, C., & Franklin, A. (2014). Getting the gist of multiple hues: Metric and categorical effects on ensemble perception of hue. Journal of the Optical Society of America A, 31 (4), A93–A102. https://doi.org/10.1364/JOSAA.31.000A93

McDermott, J. H., Schemitsch, M., & Simoncelli, E. P. (2013). Summary statistics in auditory perception. Nature Neuroscience, 16 , 493–498.

Olkkonen, M., McCarthy, P. F., & Allred, S. R. (2014). The central tendency bias in color perception: Effects of internal and external noise. Journal of Vision, 14 , 5. https://doi.org/10.1167/14.11.5

Parkes, L., Lund, J., Angelucci, A., Solomon, J. A., & Morgan, M. (2001). Compulsory averaging of crowded orientation signals in human vision. Nature Neuroscience, 4 , 739–744.

Pollard, P. (1984). Intuitive judgments of proportions, means, and variances. Current Psychology: Research and Reviews, 3 , 5–18.

Reber, A. S., Allen, R., & Reber, P. J. (1999). Implicit versus explicit learning. In R. J. Sternberg (Ed.), The nature of cognition (pp. 475–513). MIT Press.

Schweickert, R., Han, H. J., Yamaguchi, M., & Fortin, C. (2014). Estimating averages from distributions of tone durations. Attention, Perception, & Psychophysics, 76 , 605–620.

Shapley, R., Hawken, M., & Ringach, D. L. (2003). Dynamics of orientation selectivity in the primary visual cortex and the importance of cortical inhibition. Neuron, 38 (5), 689–699.

Sweeny, T. D., & Whitney, D. (2014). Perceiving crowd attention: Ensemble perception of a crowd’s gaze. Psychological Science , 25 (10), 1903–1913. https://journals.sagepub.com/toc/pssa/25/10

Takano, Y., & Kimura, E. (2020). Task-driven and flexible mean judgment for heterogeneous luminance ensembles. Attention, Perception, & Psychophysics, 82 (2), 877–890. https://doi.org/10.3758/s13414-019-01862-w

Utochkin, I. S. (2016). Visual enumeration of spatially overlapping subsets. The Russian Journal of Cognitive Science, 3 , 4–20.

Wagemans, J., Elder, J. H., Kubovy, M., Palmer, S. E., Peterson, M. A., Singh, M., & von der Heydt, R. (2012). A century of Gestalt psychology in visual perception: I. Perceptual grouping and figure–ground organization. Psychological Bulletin, 138 , 1172–1217.

Webster, J., Kay, P., & Webster, M. A. (2014). Perceiving the average hue of color arrays. Journal of the Optical Society of America, 31 (4), A283–A292.

Wertheimer, M. (1938). Laws of organization in perceptual forms. In W. Ellis (Trans.), A source book of Gestalt psychology (pp. 71–88). Routledge & Kegan Paul. (Original work published 1923).

Yamanashi Leib, A., Fisher, J., Liu, Y., Robertson, L., & Whitney, D. (2014). Ensemble crowd perception: A viewpoint-invariant mechanism to represent average crowd identity. Journal of Vision, 14 , 26. https://doi.org/10.1167/14.8.26

Yörük, H., & Boduroglu, A. (2020). Feature-specificity in visual statistical summary processing. Attention, Perception, & Psychophysics, 82 (2), 852–864.

Acknowledgements

Thanks to Jennifer Corbett, Igor Utochkin, and Merav Ahissar for insightful comments on earlier drafts of this paper. We thank Yuri Maximov for providing assistance with programming, analysis and participant communication. This study was supported by a grant from the Israel Science Foundation (ISF).

We dedicate this paper to the memory of Mrs. Lily Safra, a great benefactor of brain research.

The methodology for this study was approved by the Human Research Ethics committee of the Hebrew University.

Following publication, datasets and materials generated and/or analyzed during this study will be made available at www.shaulhochstein.com or from the corresponding author on reasonable request.

Author information

Authors and affiliations.

ELSC Safra Center for Brain Research and Life Sciences Institute, Hebrew University, Jerusalem, 91904, Israel

Noam Khayat, Marina Pavlovskaya & Shaul Hochstein

Corresponding author

Correspondence to Shaul Hochstein .

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Khayat, N., Pavlovskaya, M. & Hochstein, S. Comparing explicit and implicit ensemble perception: 3 stimulus variables and 3 presentation modes. Atten Percept Psychophys 86 , 482–502 (2024). https://doi.org/10.3758/s13414-023-02784-4

Accepted : 31 August 2023

Published : 11 October 2023

Issue Date : February 2024

DOI : https://doi.org/10.3758/s13414-023-02784-4


  • Dual-task performance
  • Visual perception
  • Visual awareness
  • Find a journal
  • Publish with us
  • Track your research
  • Incorta Community
  • Dashboards & Analytics Knowledgebase
  • 3 Ways to Customize Your Dashboards with Presentat...
  • Subscribe to RSS Feed
  • Mark as New
  • Mark as Read
  • Printer Friendly Page
  • Report Inappropriate Content

3 Ways to Customize Your Dashboards with Presentation Variables

LukeG

  • Article History

on ‎05-19-2022 11:42 AM

LukeG_0-1652968865640.png

  • Best Practices

Best Practices

Just here to browse knowledge? This might help!

LukeG

We've detected unusual activity from your computer network

To continue, please click the box below to let us know you're not a robot.

Why did this happen?

Please make sure your browser supports JavaScript and cookies and that you are not blocking them from loading. For more information you can review our Terms of Service and Cookie Policy .

For inquiries related to this message please contact our support team and provide the reference ID below.

Warner Bros. Pictures at CinemaCon 2024: Everything Announced and Revealed

Joker: folie à deux, furiosa: a mad max saga, beetlejuice beetlejuice, and much more..

Adam Bankhurst Avatar

CinemaCon 2024 has officially kicked off and many of the big movie studios are in Las Vegas ready to show off what the future holds for each of them. We here at IGN are in attendance and will be breaking down all the big news from the biggest presentations.

We must sadly share, however, that not everything is released to the public right away after a presentation, do we will do our best to describe as much as we can so you can learn more about your favorite upcoming films!

Warner Bros. Pictures' presentation was the first we attended and was highlighted by Joker: Folie à Deux, Furiosa: A Mad Max Saga, and Beetlejuice Beetlejuice. We also got a new look at Kevin Costner's Horizon and M. Night Shyamalan's Trap.

Check out all the big news from Warner Bros. Pictures' CinemaCon panel below and be sure to stay tuned for more coverage as the week continues. And be sure to let us know what your favorite reveal was at CinemaCon!

Joker: Folie à Deux First Trailer Unites Joaquin Phoenix's Arthur Fleck With Lady Gaga's Harley Quinn

As we mentioned, not everything shown at CinemaCon is released to the public. Luckily for DC fans, the first trailer for Joker: Folie à Deux was and it is already taking the internet by storm.

In the footage, we see parts of Joaquin Phoenix's Arthur Fleck and Lady Gaga's Harley Quinn relationship. What is perhaps most striking is how it appears to switch from what could be described as their romantic delusions to a grimmer reality. It also looks to confirm a big change in Harley Quinn's story in that she will now be a patient at Arkham Asylum rather than a psychiatrist.

Director Todd Phillips took the stage to discuss the film and confirmed that while the sequel is not a full musical, music will be an "essential element." It also apparently won't "veer" too much from the original film and that Arthur Fleck has always had "music in him."

Joker: Folie à Deux will hit theaters on October 4, 2024.

Furiosa: A Mad Max Saga Footage Shows Anya Taylor-Joy on a Mission

The above trailer is not from CinemaCon.

Those in attendance at Warner Bros. Pictures' CinemaCon panel were treated to an extended sneak peek at Furiosa: A Mad Max Saga. The few moments on display showcased the early life story of Anya Taylor-Joy's Furiosa and her arduous journey to avenge her mom and lost childhood. We also saw some of this story in the most recent trailer for Furiosa.

Director George Miller also stopped by CinemaCon and said Furiosa will take place over a span of 16 to 18 years of backstory, and Taylor-Joy shared that this is "the story of one woman's commitment to impossible hope."

Furiosa: A Mad Max Saga rides into theaters on May 24, 2024.

Beetlejuice Beetlejuice Is Almost Back After 36 Years of Waiting

Beetlejuice Beetlejuice took center stage at Warner Bros.' presentation, where we were shown new footage of the sequel alongside previous clips from the trailer. We get good looks at Michael Keaton's Beetlejuice, the Deetzes - Winona Ryder's Lydia, Catherine O'Hara's Delia, and Jenna Ortega's Astrid - and even Willem Dafoe's character. Lydia also seemingly confirms the film will deal with the dead and the living trying to co-exist.

Keaton has seen the film twice now and says it is "really f***** good," that Ortega is "just perfect" in the movie, and that she got what they were going for right away.

Beetlejuice Beetlejuice opens in theaters on September 6, 2024.

Mickey 17 Trailer Shows the Many Lives of Robert Pattinson's Mickey

The first trailer for Mickey 17 debuted at Warner Bros.' panel and revealed how Robert Pattinson's Mickey is an "expendable" asset who can be reprinted whenever he dies. The film is based on Edward Ashton's novel Mickey 7, but director Bong Joon-ho of Parasite fame changed the title to Mickey 17 because he kills Pattinson's character 10 more times than the book did.

The footage featured Pattinson acting against himself multiple times in this futuristic sci-fi world, and Bong knew Pattinson could play all these different versions of Mickey because he has a "crazy thing in his eyes." We also got to see Mark Ruffalo's dictator character, his wife, played by Toni Collette, Mickey's girlfriend, played by Naomi Ackie, and Steven Yeun, who will be playing Mickey's "strange buddy."

Mickey 17, which really is the story of a "simple man who ends up saving the world," will be released in theaters on January 31, 2025.

Horizon: An American Saga Gets a Breathtaking First Look

Footage from Horizon: An American Saga, the two-part Western epic that stars Kevin Costner and is directed, produced, and co-written by him, was revealed at CinemaCon. What was shown was a breathtaking sizzle reel of sorts from the films, which tell a story set during the Civil War-era expansion and settlement of the American West.

Costner told the audience that he first tried to make these films back in 1988 and again in 2012, and he's thrilled to finally get them across the finish line. However, his full plan for Horizon involves four movies that tell more of the story.

He also discussed how this film will explore the "promise" of America that was earned by people who claimed it for their own by being tough and resilient. However, that came at the expense of those already here. He also wants music to be an important focus for this epic and he even went to Scotland to get 92 musicians to work on the score.

Horizon: An American Saga Part 1 is set to arrive in theaters on June 28, 2024, and the second part will be released on August 16, 2024.

M. Night Shyamalan's Trap Looks to Send Audiences to a Concert Gone Wrong

M. Night Shyamalan's next film is called Trap and he told us that his daughter Saleka, who is a musician, helped him form the idea for the project. As for what the movie is about, Trap looks to tell a story of an immersive experience like a concert that turns into a thriller.

When the concert begins and the singer Lady Raven (played by Saleka!) comes on stage, something terrible happens and you come to find out this has all been a trap to capture a wanted serial killer who is played by Josh Hartnett.

Trap will be released in theaters on August 9, 2024.


Adam Bankhurst is a writer for IGN. You can follow him on X/Twitter @AdamBankhurst and on TikTok.



NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-.


Cerebral Venous Thrombosis

Prasanna Tadi ; Babak Behgam ; Seth Baruffi .


Last Update: June 12, 2023 .

  • Continuing Education Activity

Cerebral venous thrombosis (CVT), which includes thrombosis of the cerebral veins and the dural sinuses, is a rare disorder that can lead to significant morbidity and mortality. Cerebral venous thrombosis can present with variable signs and symptoms that include a headache, benign intracranial hypertension, subarachnoid hemorrhage, focal neurological deficit, seizures, unexplained altered sensorium, and meningoencephalitis. This activity reviews the cause of cerebral venous thrombosis and highlights the role of the interprofessional team in its management.

  • Review the cause of cerebral venous thrombosis.
  • Describe the presentation of cerebral venous thrombosis.
  • Summarize the treatment of cerebral venous thrombosis.
  • Outline the importance of improving care coordination among interprofessional team members to improve outcomes for patients affected by cerebral venous thrombosis.
  • Introduction

Cerebral venous thrombosis (CVT), which includes thrombosis of the cerebral veins and the dural sinuses, is a rare disorder that can lead to significant morbidity and mortality. Cerebral venous thrombosis can present with variable signs and symptoms that include a headache, benign intracranial hypertension, subarachnoid hemorrhage, focal neurological deficit, seizures, unexplained altered sensorium, and meningoencephalitis. [1] [2]

The diversity of risk factors and variable presentation present challenges in diagnosing cerebral vein thrombosis. Delay in diagnosis is common, as the median delay from symptom onset to hospital admission is four days and from symptom onset to diagnosis is seven days. Thus, having a high index of suspicion for this disorder is crucial to ensure timely diagnosis and treatment. [3] [4]

Many risk factors contribute to the development of cerebral venous thrombosis. At least one risk factor is identified in more than 85% of patients with cerebral venous thrombosis, and multiple risk factors are found in more than 50%. [5] In general, cerebral venous thrombosis is common in any condition that leads to a prothrombotic state, including pregnancy, the post-partum state, or oral contraceptive use. In the International Study on Cerebral Vein and Dural Sinus Thrombosis (ISCVT), genetic or acquired thrombophilia was present in 34% of patients with cerebral venous thrombosis. Inherited thrombophilias include protein C and protein S deficiencies, antithrombin deficiency, factor V Leiden mutation, the prothrombin gene mutation G20210A, and hyperhomocysteinemia. [6] [7]

Acquired thrombophilia should be suspected in patients with a history of nephrotic syndrome (due to loss of antithrombin) or antiphospholipid antibodies. Additional causes and risk factors associated with cerebral venous thrombosis include chronic inflammatory disease states such as systemic lupus erythematosus, inflammatory bowel disease, malignancy, and vasculitides such as Wegener's granulomatosis. Local infections such as otitis and mastoiditis, which can lead to thrombosis of the adjacent sigmoid and transverse sinuses, have also been implicated in developing cerebral venous thrombosis. Cerebral venous thrombosis may also be seen in a patient with a head injury, after certain neurosurgical procedures, direct injury to the sinuses or jugular veins, such as jugular vein catheterization, and even after a lumbar puncture. [8] [9]

  • Epidemiology

Cerebral venous thrombosis is a rare disorder with an annual incidence estimated at three to four cases per million. The frequency of peripartum and post-partum cerebral venous thrombosis is about 12 cases per 100,000 deliveries, only slightly lower than that of peripartum and post-partum arterial stroke. Cerebral venous thrombosis occurs three times more frequently in women than in men. This is thought to be due to gender-specific risk factors such as oral contraceptive use and, less frequently, pregnancy, puerperium, and hormone replacement therapy. More recently, there has been a significant female predominance among young adults, with the majority of cases (70% to 80%) occurring in women of childbearing age, but not among children or elderly persons.

  • Pathophysiology

There are two pathophysiologic mechanisms thought to contribute to the clinical manifestations of cerebral venous thrombosis. First, thrombosis of the cerebral veins leads to increased venous and capillary pressure, which leads to a decrease in cerebral perfusion. Decreased cerebral perfusion results in ischemic injury, manifested by cytotoxic edema, which damages the energy-dependent cellular membrane pumps and leads to intracellular swelling. Disrupting the blood-brain barrier leads to vasogenic edema and leakage into the interstitial space. The increased pressure in the venous system can lead to an intraparenchymal hemorrhage.

The second pathophysiologic mechanism resulting in cerebral venous thrombosis is obstruction of the cerebral sinuses, particularly when the thrombus does not resolve. Normally, the cerebrospinal fluid found in the cerebral ventricles is transported through the subarachnoid space to the arachnoid granulations and absorbed into the venous sinuses. Thrombosis of the venous sinuses results in impaired cerebrospinal fluid absorption, ultimately leading to increased intracranial pressure. Increased intracranial pressure leads to cytotoxic and vasogenic edema and may result in parenchymal hemorrhage. 

  • History and Physical

Given the variable presentation and low annual incidence of cerebral venous thrombosis, physicians should maintain a high index of suspicion for it. Signs and symptoms may be acute, subacute, or chronic, with the most common symptom being a headache. A subacute pattern of clinical presentation was observed in almost 60% of cases, compared to acute (<48 hours in 37%) and chronic (>30 days in 7%). [5] A headache is present in up to 90% of patients. [5]

Headaches may be generalized or diffuse and tend to mimic migraines but may increase in severity slowly over days and weeks and are not relieved with sleep. In some instances, the headache may be thunderclap in nature, starting suddenly and maximal in intensity at onset, thereby mimicking the presentation of subarachnoid hemorrhage. A headache is often worsened with Valsalva or coughing, indicative of increased intracranial pressure. Papilledema and visual symptoms, such as diplopia caused by a sixth cranial nerve palsy when the intracranial pressure is too high, may accompany a headache. The funduscopic examination will reveal papilledema, which, depending on the severity, can cause visual impairment and even permanent blindness if left untreated. However, an isolated headache without any other focal neurologic deficits or papilledema has been reported in up to a fourth of patients with cerebral venous thrombosis and further complicates the diagnostic picture.

Focal neurologic signs are common and are seen in up to 44% of patients. Motor weakness, including hemiparesis, is the most common focal finding. However, unlike arterial thrombosis in the setting of cerebrovascular accidents, localization to one vascular territory is often absent. Hemispheric symptoms, such as aphasia and hemiparesis, are a characteristic but rare finding. Seizures are seen in about 40% of patients with cerebral venous thrombosis, the most common of which are focal seizures. Focal seizures account for 50% of those who experience a seizure in the setting of cerebral venous thrombosis but have the potential to generalize to a status epilepticus. Thus, cerebral venous thrombosis should be considered in any patient who presents with a headache and some combination of either focal neurologic deficit or new-onset seizures. Thrombosis of the straight sinus, or in severe cases of venous infarction with hemorrhagic transformation, can lead to compression of the diencephalon and brainstem, resulting in coma or death due to cerebral herniation. [10]

Diagnosis of cerebral venous thrombosis is clinical and confirmed with neuroimaging. Given its varied presentation and myriad of symptoms, one must have a high index of suspicion to identify and diagnose this rare and potentially life-threatening condition correctly. It should be suspected in young and middle-aged patients, especially those with cerebral venous thrombosis risk factors, such as postpartum women, those with genetic or acquired thrombophilia, and patients with focal neurological findings. It should also be suspected in the following: 

  • Those under the age of 50
  • Those presenting with atypical headaches or undergoing multiple repeat evaluations for an unrelenting headache
  • Those with a focal neurological deficit
  • Those with stroke-like symptoms, especially in the absence of vascular risk factors that would predispose to cerebrovascular accidents (e.g., carotid atherosclerosis)
  • Those with seizures (focal, generalized, or status epilepticus)
  • Those with intracranial hypertension or evidence of papilledema on funduscopic exam
  • Patients with CT evidence of hemorrhagic infarcts, particularly in the setting of multiple infarcts not confined to a single vascular territory

Some important clinical clues to the diagnosis include slow progression, bilateral involvement, and concurrent seizures. [5]

Laboratory evaluation should include a complete blood count, coagulation panel, and chemistry panel, as well as inflammatory markers such as the sedimentation rate and C-reactive protein to evaluate for proinflammatory states. A screening test that could effectively rule out the diagnosis without subjecting patients to unnecessary neuroimaging would be ideal and helpful to clinical practice. The D-dimer assay has been evaluated in this regard; unfortunately, it has been shown to have an unacceptable false-negative rate of up to 26% in one study. This low sensitivity of the D-dimer assay contrasts with its utility in ruling out deep venous thrombosis, which may be due to the lower thrombotic burden of cerebral venous thrombosis compared to DVT. [11] [12]

Based on recent American Heart Association/American Stroke Association guidelines, a negative D-dimer does not effectively rule out cerebral venous thrombosis and should not preclude neuroimaging if there is clinical suspicion for cerebral venous thrombosis. [13] [14] However, adding D-dimer (≥500 μg/L) to the clinical CVT score (comprising variables such as seizure, known thrombophilia, oral contraceptive use, duration of symptoms for >6 days, worst headache ever, and focal neurologic deficits) has been shown to improve its predictive value. [15]

Neuroimaging

  • Non-contrast computed tomography (CT):   The speed and accessibility with which this test can be obtained make it the first test that should be obtained in any patient presenting with an atypical headache, focal neurologic deficit, seizures, altered mental status, or coma. A direct sign of cerebral venous thrombosis is the cord sign , a curvilinear hyperdensity within a cortical vein in the presence of thrombosis that can be seen for up to two weeks following thrombus formation. Other direct signs include a hyperdensity with a triangular shape in the superior sagittal sinus, also known as the dense triangle sign . Intraparenchymal hemorrhages or infarcts may be seen on non-contrast head CT and may cross vascular boundaries. In a multicentric study, brain infarction was observed in 36.4%, hemorrhagic transformation in 17.3%, and intraparenchymal hemorrhage in 3.8% of the cohort. [5] Hyperdensity within a cortical vein or dural sinus on plain CT is observed in only one-third of cases. [5]
  • CT Venography (CTV):   While MRI has better sensitivity and specificity than computed tomography, confirmatory venography is required to exclude cerebral venous thrombosis. Evidence from newer helical CT scanners suggests that CT venography is superior to MR venography in identifying the cerebral veins, and that the two methods are equivalent in the identification and diagnosis of cerebral venous thrombosis. Because CT venography can be rapidly performed after a non-contrast head CT while the patient is still in the scanner, it is a viable option in the emergency setting when access to MRI imaging and venography may otherwise be limited or unavailable. Contrast-enhanced computed tomography illustrates the  empty delta sign , representing contrast enhancement flowing around the comparatively hypodense region of the thrombosed superior sagittal sinus.
  • Magnetic resonance imaging (MRI) and magnetic resonance venography (MRV) are considered the gold standard in diagnosing cerebral venous thrombosis, as they have a higher sensitivity than computed tomography. MRI is superior to CT when evaluating for parenchymal edema resulting from cerebral venous thrombosis. MRI findings depend on the age of the thrombus, as signal intensities change as it evolves; thus, MRI interpretation requires a detailed understanding of these radiographic changes. An acutely formed thrombus (0 to 7 days) is harder to detect, but by week 2, abnormalities are easier to detect, with both T1- and T2-weighted images showing a hyperintense signal. The combination of an abnormal signal in a venous sinus and the absence of flow on MRV confirms the diagnosis of cerebral venous thrombosis. Two-dimensional lumen-based time-of-flight (TOF) imaging showing the absence of a flow void in the dural sinus is the most sensitive imaging modality. Multiscale entropy (MSE) of hemoglobin products within the thrombus is of high diagnostic value. [16] The presence of DWI abnormality within the involved veins or sinus indicates low chances of recanalization. The differentials include arachnoid granulations and fenestrations. [17]
  • Cerebral angiography:  If the diagnosis is still in question after using MRI and MRV, then intra-arterial angiography is indicated. Angiography allows for superior visualization of the cerebral veins and helps identify anatomical variants of normal venous anatomy that mimic cerebral venous thrombosis. It is useful in rare cases of isolated cortical vein thrombosis without sinus thrombosis and may show indirect signs such as dilated and tortuous "corkscrew" collateral veins, evidence that there may be thrombosis further downstream of the sinuses.

Superior sagittal sinus is most frequently involved, followed by transverse sinus. [5]

  • Treatment / Management

Management initially focuses on identifying and addressing life-threatening complications of cerebral venous thrombosis, including increased intracranial pressure (ICP), seizures, and coma. If a patient seizes and has a lesion such as a hemorrhage or infarction on neuroimaging, specific anticonvulsant therapy and seizure prophylaxis should be initiated. If a seizure does not occur, seizure prophylaxis is not indicated. In the case of increased ICP, the head of the bed should be elevated, and dexamethasone and mannitol should be administered promptly to reduce the pressure. This is followed by admission to the intensive care unit or stroke unit for close ICP monitoring, with a neurosurgical consultation if the patient decompensates and requires surgical decompression. Next, attention should shift to specific therapy, including anticoagulation and, in certain cases, catheter-directed fibrinolysis and surgical thrombectomy.

Anticoagulation

Anticoagulation has been a controversial topic due to the potential for hemorrhagic transformation of cerebral infarcts before administering anticoagulation. The goals of anticoagulation are to prevent thrombus propagation, help recanalize the lumen of occluded cerebral veins, and prevent the complications of deep venous thrombosis and pulmonary embolism in patients who already have a thrombus burden and are predisposed to forming additional thrombi. The results of two randomized controlled trials comparing anticoagulation with placebo, although not statistically significant, showed that anticoagulation produced a favorable outcome more often than in controls. They also showed that anticoagulation was safe and not contraindicated, even in patients with cerebral hemorrhage.

Based on these randomized controlled trials and other observational studies, anticoagulation is recommended as a safe and effective treatment of cerebral venous thrombosis and should be initiated immediately upon diagnosis. Anticoagulation with intravenous unfractionated heparin or subcutaneously administered low-molecular-weight heparin is recommended as a bridge to oral anticoagulation with a vitamin K antagonist. There are no outcome differences between unfractionated heparin (UFH) and low-molecular-weight heparin (LMWH). The European Stroke Organization (ESO) guidelines advocate unfractionated heparin in patients with renal insufficiency or a likelihood of requiring emergent reversal. [5]

The target of treatment is an international normalized ratio of 2.0 to 3.0, continued for 3 to 6 months in patients with provoked cerebral venous thrombosis and 6 to 12 months in patients with unprovoked cerebral venous thrombosis. [5] Indefinite anticoagulation should be considered in patients with recurrent cerebral venous thrombosis, those who develop deep vein thrombosis and pulmonary embolism in addition to cerebral venous thrombosis, or those with first-time cerebral venous thrombosis in the setting of severe thrombophilia.

Thrombolysis

Although most patients see clinical improvement with anticoagulation therapy, a small subset do not and continue to deteriorate despite anticoagulation. In these cases, where the prognosis is poor, systemic and catheter-directed thrombolysis is indicated in patients with large and extensive cerebral venous thrombi who clinically deteriorate despite treatment with anticoagulation. As is the case whenever fibrinolytics are used, there is an increased risk of intracranial hemorrhage. A systematic review conducted in 2003, which looked at 72 studies and 169 patients with cerebral venous thrombosis, suggested a possible clinical benefit of fibrinolytics in patients with a severe presentation. Intracranial hemorrhage occurred in 17% of patients treated with fibrinolytics and was associated with clinical deterioration in 5% of cases. Overall, endovascular thrombolytics should be used at centers with staff experienced in interventional radiology and should be reserved for patients who are clinically deteriorating despite treatment with anticoagulation. A systematic review has shown local thrombolysis to be beneficial only in patients with severe CVT, whereas the results for mechanical thrombectomy are anecdotal. [5]

Surgical Intervention

Surgical thrombectomy is reserved for cases of severe neurological deterioration despite maximal medical therapy. In the case of large venous infarcts and hemorrhages causing a mass effect with risk of herniation, decompressive surgery has been thought to improve clinical outcomes, especially if done early, although this is level C evidence. Decompressive surgery is life-saving, with favorable outcomes observed in more than 50% of patients, with a mortality rate of approximately 20%. [5]

Supportive Care

It is important to elucidate the underlying contributory factors of cerebral venous thrombosis and devise a treatment strategy to correct them. Women on hormonal contraceptive therapy should seek non-estrogen-based methods of contraception such as levonorgestrel and copper intrauterine devices or progestin-only pills. Further testing to identify the etiology of all acquired and reversible thrombophilic states should be conducted and, when possible, corrected. In addition to clinical follow-up, the American Heart Association and American Stroke Association recommend follow-up imaging 3 to 6 months after diagnosis to assess for recanalization.

The risk of ICH following anticoagulation therapy ranged from zero to 5.4%. A systematic review has shown that the overall mortality and dependency rates were 9.4% and 9.7%, respectively. [18] [19]

The European Stroke Organization guideline for the diagnosis and treatment of cerebral venous thrombosis (2017) grades the quality of evidence and strength of each of its recommendations. (The guideline's summary table is not reproduced here.)

  • Differential Diagnosis
  • Abducens nerve palsy
  • Blood dyscrasias
  • Cavernous sinus syndrome
  • Head injury
  • Intracranial abscess
  • Neurosarcoidosis
  • Pediatric status epilepticus
  • Pseudotumor cerebri
  • Staphylococcal meningitis
  • Subdural empyema
  • Enhancing Healthcare Team Outcomes

The diagnosis and management of cerebral venous thrombosis are challenging and best managed by an interprofessional team that includes a neurologist, neurosurgeon, radiologist, hematologist, anesthesiologist, ICU nurses, and intensivist. Other members of the interprofessional team include nursing staff, mid-level practitioners (NPs and PAs), and pharmacists. Management is initially focused on identifying and addressing life-threatening complications of cerebral venous thrombosis, including increased intracranial pressure (ICP), seizures, and coma. Next, attention should be shifted to specific therapy, including anticoagulation and, in certain cases, catheter-directed fibrinolysis and surgical thrombectomy. The prognosis of these patients is guarded. Even those who survive are often left with permanent neurological deficits. [20] [21]


Illustration of cerebral venous thrombosis. Superior sagittal sinus, cortical veins, inferior sagittal sinus, Vein of Galen, internal cerebral veins, straight sinus, transverse sinus, sigmoid, jugular veins. Contributed by Chelsea Rowe

Disclosure: Prasanna Tadi declares no relevant financial relationships with ineligible companies.

Disclosure: Babak Behgam declares no relevant financial relationships with ineligible companies.

Disclosure: Seth Baruffi declares no relevant financial relationships with ineligible companies.

This book is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) ( http://creativecommons.org/licenses/by-nc-nd/4.0/ ), which permits others to distribute the work, provided that the article is not altered or used commercially. You are not required to obtain permission to distribute this article, provided that you credit the author and journal.

  • Cite this Page Tadi P, Behgam B, Baruffi S. Cerebral Venous Thrombosis. [Updated 2023 Jun 12]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-.



Decades after their service, "Rosie the Riveters" to be honored with Congressional Gold Medal

By Michelle Miller , Kerry Breen

April 6, 2024 / 8:51 AM EDT / CBS News

This week, a long-overdue Congressional Gold Medal will be presented to the women who worked in factories during World War II and inspired " Rosie the Riveter ." 

The youngest workers who will be honored are in their 80s. Some are a century old. Of the millions of women who performed exceptional service during the war, just dozens have survived long enough to see their work recognized with one of the nation's highest honors. 

One of those women is Susan King, who at the age of 99 is still wielding a rivet gun like she did when building war planes in Baltimore's Eastern Aircraft Factory. King was 18 when she first started at the factory. She was one of 20 million workers who were credentialed as defense workers and hired to fill the jobs men left behind once they were drafted into war. 


"In my mind, I was not a factory worker," King said. "I was doing something so I wouldn't have to be a maid." 

The can-do women were soon immortalized in an iconic image of a woman in a jumpsuit and red-spotted bandana, and all the women workers became known as "Rosie the Riveters." But after the war, as veterans received parades and medals, the Rosies were ignored. Many of them lost their jobs. It took decades for their service to be appreciated. 

Gregory Cooke, a historian and the son of a Rosie, said that he believes most of the lack of appreciation is "because they're women." 

"I don't think White women have ever gotten their just due as Rosies for the work they did on World War II, and then we go into Black women," said Cooke, who produced and directed "Invisible Warriors," a soon-to-be-released documentary shining light on the forgotten Rosies. "Mrs. King is the only Black woman I've met, who understood her role and significance as a Rosie. Most of these women have gone to their graves, including my mother, not understanding their historic significance." 


King has spent her life educating the generations that followed about what her life looked like. That collective memory is also being preserved at the Glenn L. Martin Aviation Museum in Maryland and at Rosie the Riveter National Historical Park in Richmond, California, which sits on the shoreline where battleships were once built. Jeanne Gibson and Marian Sousa both worked at that site.

Sousa said the war work was a family effort: Her two sisters, Phyllis and Marge, were welders and her mother Mildred was a spray painter. "It gave me a backbone," Sousa said. "There was a lot of men who still were holding back on this. They didn't want women out of the kitchen." 

Her sister, Phyllis Gould, was one of the loudest voices pushing to have the Rosies recognized. In 2014, she was among several Rosies invited to the White House after writing a letter to then-Vice President Joe Biden pushing for the observance of a National Rosie the Riveter Day. Gould also helped design the Congressional Gold Medal that will be issued. But Gould won't be in Washington, D.C. this week. She passed away in 2021, at the age of 99.


About 30 Rosies will be honored on Wednesday. King will be among them.

"I guess I've lived long enough to be Black and important in America," said King. "And that's the way I put it. If I were not near a hundred years old, if I were not Black, if I had not done these, I would never been gone to Washington." 


Michelle Miller

Michelle Miller is a co-host of "CBS Saturday Morning." Her work regularly appears on "CBS Mornings," "CBS Sunday Morning" and the "CBS Evening News." She also files reports for "48 Hours" and anchors Discovery's "48 Hours on ID" and "Hard Evidence."
