The Nature of Scientific Knowledge: What is it and why should we trust it?

by Anthony Carpi, Ph.D., Anne E. Egger, Ph.D.


Did you know that it was not Magellan, Columbus, or even Copernicus who first proposed that the world was round? Rather, 2,000 years before these Europeans, Greek philosophers referred to the Earth as a sphere. An accumulation of evidence over the centuries confirmed that the Earth was round long before explorers sailed around the world.

Science consists of a body of knowledge and the process by which that knowledge is developed.

The core of the process of science is generating testable explanations, and the methods and approaches to generating knowledge are shared publicly so that they can be evaluated by the community of scientists.

Scientists build on the work of others to create scientific knowledge.

Scientific knowledge is subject to revision and refinement as new data, or new ways to interpret existing data, are found.

It seems preposterous to us today that people once thought that the Earth was flat. Who could have possibly thought of our planet as a giant disk with the stars and heavens above, and boulders, tree roots, and other things below? But this was the dominant view of Earth in much of the world before the 2nd century BCE, though the details differed from culture to culture. And it was not explorers who sailed around the world that finally laid the idea to rest, but an accumulation of evidence long before this.

Figure 1: Representation of Eratosthenes' studies demonstrating the curvature of Earth and the geometry used to calculate the circumference of the planet.

Greek philosophers referred to a spherical Earth as early as the 6th century BCE. They observed that the moon appeared to be a sphere and therefore inferred that Earth might also be spherical. Two hundred years later, in the 4th century BCE, the Greek philosopher Aristotle observed that the shadow of the Earth on the Moon during a lunar eclipse is always curved, thus providing some of the first evidence that Earth is spherical. In the 3rd century BCE, the mathematician Eratosthenes observed that at noon on the summer solstice in the ancient Egyptian city of Syene, the sun was directly overhead: objects there cast no shadow. Eratosthenes was from Alexandria, Egypt, some 500 miles to the north, and he knew that a tall tower cast a shadow in that city at the same time on the summer solstice. Using these observations and measurements of shadow length and distance, he inferred that the surface of the Earth is curved, and he calculated a remarkably accurate estimate of the circumference of the planet (Figure 1). Some years later, the Greek geographer Strabo added to this evidence when he observed that sailors saw distant objects move downward on the horizon and disappear as they sailed away from them. He proposed that this was because Earth was curved and those sailors were not simply moving further away from the objects but also curving around the planet as they sailed.
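
The geometry behind Eratosthenes' estimate can be sketched using the numbers traditionally attributed to him; the roughly 7.2° shadow angle measured at Alexandria is not given in the text above and is used here only for illustration:

\[
\theta \approx 7.2^\circ = \frac{360^\circ}{50}, \qquad C \approx 50 \times d_{\text{Syene to Alexandria}} \approx 50 \times 500\ \text{miles} \approx 25{,}000\ \text{miles},
\]

a figure within a few percent of the modern value of about 24,900 miles for Earth's circumference.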

Figure 2: Earthrise taken on December 24, 1968, from the Apollo 8 mission.

Aristotle, Eratosthenes, and Strabo didn't call themselves scientists, yet they were using the process of science by making observations and providing explanations for those observations. Thus, we knew that Earth was a sphere long before Ferdinand Magellan's men sailed all the way around it in 1522 or before Apollo 8 astronauts sent back pictures of Earth from space in 1968 (Figure 2), documenting its spherical shape. In fact, those astronauts had to be absolutely confident that the Earth was a rotating sphere, orbiting the Sun, or they would never have been able to get into orbit. It is the nature of science and scientific knowledge that gave them that confidence, and understanding the difference between scientific knowledge and other types of knowledge is critical to understanding science itself.

  • What is science?

Science consists of two things: a body of knowledge and the process by which that knowledge is produced. This second component of science provides us with a way of thinking and knowing about the world. Commonly, we only see the "body of knowledge" component of science. We are presented with scientific concepts in statement form – Earth is round, electrons are negatively charged, our genetic code is contained in our DNA, the universe is 13.78 billion years old – with little background about the process that led to that knowledge and why we can trust it. But there are a number of things that distinguish the scientific process and give us confidence in the knowledge produced through it.

So then, what is the scientific process? The scientific process is a way of building knowledge and making predictions about the world in such a way that they are testable. The question of whether Earth is flat or round could be put to the test: it could be studied through multiple lines of research, and the evidence evaluated to determine whether it supported a round or flat planet. Different scientific disciplines typically use different methods and approaches to investigate the natural world, but testing lies at the core of scientific inquiry for all scientists.

As scientists analyze and interpret their data (see our Data Analysis and Interpretation module), they generate hypotheses, theories, or laws (see our Theories, Hypotheses, and Laws module), which help explain their results and place them in context of the larger body of scientific knowledge. These different kinds of explanations are tested by scientists through additional experiments, observations, modeling, and theoretical studies. Thus, the body of scientific knowledge builds on previous ideas and is constantly growing. It is deliberately shared with colleagues through the process of peer review (see our Peer Review module), where scientists comment on each other's work, and then through publication in the scientific literature (see our Utilizing the Scientific Literature module), where it can be evaluated and integrated into the body of scientific knowledge by the larger community. And this is not the end: One of the hallmarks of scientific knowledge is that it is subject to change, as new data are collected and reinterpretations of existing data are made. Major theories, which are supported by multiple lines of evidence, are rarely completely changed, but new data and tested explanations add nuance and detail.

A scientific way of thinking is something that anyone can use, at any time, whether or not they are in the process of developing new knowledge and explanations. Thinking scientifically involves asking questions that can be answered analytically by collecting data or creating a model and then testing one's ideas. A scientific way of thinking inherently includes creativity in approaching explanations while staying within the confines of the data. Thinking scientifically does not mean rejecting your culture and background, but recognizing the role that they play in your way of thinking. While testable explanations are a critical component of thinking scientifically, there are other valid ways of thinking about the world around us that do not always yield testable explanations. These different ways of thinking are complementary – not in competition – as they address different aspects of the human experience.

It's easy to be confident in the scientific process and our knowledge when we can provide irrefutable evidence, as we were able to do by orbiting the Earth in a spaceship and taking pictures of an obviously round planet. But most scientific investigations do not lead to results that are so easily supported, and yet we still rely on and trust the knowledge produced through the process of science. Why do we trust it? Because it works. Science has a long history of creating knowledge that is useful and that gives us more insight into our surroundings. Take one of the statements above: The universe is 13.78 billion years old. Why should we have confidence in this statement?

  • The age of the universe

How old is the universe? How can we possibly know the age of something that was created not simply before human history, but before our planet came into being? This is a difficult question to address scientifically, so much so that through the early 20th century many scientists assumed that the universe was infinite and eternal, existing for all of time.

  • Machines and entropy

The first indication that the universe may not have existed for all of time came from an unlikely source: the study of engines. In the 1820s, Sadi Carnot was a young officer on leave from the French military. While taking classes at various institutions in Paris, he became interested in industrial problems, and was surprised to see that no scientific studies had been undertaken on the steam engine, a relatively new invention at the time and a poorly understood one. Carnot believed that engines could be better understood – a characteristic common to scientists is that they work to better understand things – and so he studied the transfer of energy in engines. He recognized that no engine could be 100% efficient because some energy is always lost from the system as heat (Figure 3). Carnot published his ideas in a book titled Reflections on the Motive Power of Fire and on Machines Fitted to Develop that Power, which presented a mathematical description of the amount of work that could be generated by an engine, called the Carnot cycle (Carnot, 1824).
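
In modern textbook notation (which Carnot himself did not use), the limit he identified can be sketched for an ideal engine running between a hot reservoir at temperature T_h and a cold reservoir at T_c (both in kelvin):

\[
\eta_{\text{max}} = \frac{W}{Q_h} = 1 - \frac{T_c}{T_h} < 1,
\]

so some heat, \( Q_c = Q_h - W \), is always rejected to the surroundings, which is why no real engine can be 100% efficient.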

Figure 3: An infrared image of a running engine showing the temperature of various parts of the engine. Higher temperatures (red and yellow portions of the image) indicate greater heat loss. The loss of heat represents a loss of efficiency in the engine, and a contribution to the increasing entropy of the universe.

Carnot's work didn't receive much attention during his lifetime, and he died of cholera in 1832, when he was only 36 years old. But others began to realize the importance of his work and built upon it. One of those scientists was Rudolf Clausius, a German physicist who showed that Carnot's principle was not limited to engines, but in fact applied to all systems in which there was a transfer of energy. Clausius' application of an explanation for one phenomenon to many others is also characteristic of science, which assumes that processes are universal.

In 1850, Clausius published a paper in which he developed the second law of thermodynamics, which states that heat always flows spontaneously from a high-temperature state (for example, a system that is hot) to a low-temperature state (one that is cold) (Clausius, 1850). In later work, Clausius coined the term entropy to describe the energy lost from a system when it is transferred, and as an acknowledgement of the pioneering work of Sadi Carnot in providing the foundation for his discoveries, Clausius used the symbol S to refer to the entropy of a system.
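
In the modern formulation (a later restatement, not Clausius' original wording), the entropy change of a system that absorbs heat reversibly at temperature T, and the second law itself, are usually written as

\[
\Delta S = \int \frac{\delta Q_{\text{rev}}}{T}, \qquad \Delta S_{\text{universe}} = \Delta S_{\text{system}} + \Delta S_{\text{surroundings}} \geq 0,
\]

with equality holding only for idealized, perfectly reversible processes; every real transfer of energy increases the total entropy.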

But how do engines and entropy relate to the age of the universe? In 1865, Clausius published another paper that restated the Second Law of Thermodynamics as "the entropy of the universe tends to a maximum." If the universe were infinite and had existed for all time, the Second Law of Thermodynamics implies that all of the energy within the universe would have been lost to entropy by now. In other words, the stars themselves would have burned out long ago, dissipating their heat into surrounding space. The fact that there are still active stars must mean that the universe has existed for a finite amount of time, and was created at some specific point in time. Perhaps the time that had elapsed since that point could be determined?

  • Redshift and the Doppler effect

At about the same time, an Austrian physicist by the name of Christian Doppler was studying astronomy and mathematics. Doppler knew that light behaved like a wave, and so began to think about how the movement of stars might affect the light emitted from those stars. In a paper published in 1842, Doppler proposed that the observed frequency of a wave would depend on the relative speed of the wave's source in relation to the observer, a phenomenon he called a "frequency shift" (Doppler, 1842). He made an analogy to a ship at sail on the ocean, describing how the ship would encounter waves on the surface of the water at a faster rate (and thus higher frequency) if it were sailing into the waves than if it were traveling in the same direction as the waves.

You might be familiar with the frequency shift, which we now call the Doppler Effect in his honor, if you have ever listened to the sound of traffic while standing on the side of the road. The familiar high-to-low pitch change is an example of the effect – the actual frequency of the waves emitted is not changing, but the speed of the passing vehicle affects how quickly those waves reach you. Doppler proposed that we would see the same effect on any stars that were moving: Their color would shift towards the red end of the spectrum if they were moving away from Earth (called a redshift) and towards the blue end of the spectrum if they were moving closer (called a blueshift) (see Figure 4). He expected to be able to see this shift in binary stars, or pairs of stars that orbit around each other. Eventually, Doppler's 1842 paper, entitled "On the coloured light of the double stars and certain other stars of the heavens," would change the very way we look at the universe. However, at the time, telescopes were not sensitive enough to confirm the shift he proposed.
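
In modern notation (not Doppler's own), the shift is quantified as a fractional change in wavelength, and for speeds much smaller than the speed of light it reduces to a simple proportionality, which is what later allowed measured shifts to be converted into velocities:

\[
z = \frac{\lambda_{\text{observed}} - \lambda_{\text{emitted}}}{\lambda_{\text{emitted}}}, \qquad z \approx \frac{v}{c} \quad (v \ll c),
\]

where a positive z (a redshift) indicates a source moving away from the observer and a negative z (a blueshift) indicates one moving closer.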

Figure 4: A representation of how the perceived spectrum of light emitted from a galaxy is affected by its motion.

Doppler's ideas became part of the scientific literature and by that means became known to other scientists. By the early 1900s, technology finally caught up with Doppler and more powerful telescopes could be used to test his ideas. In September of 1901, an American named Vesto Slipher had just completed his undergraduate degree in mechanics and astronomy at Indiana University. He got a job as a temporary assistant at the Lowell Observatory in Flagstaff, Arizona, while continuing his graduate work at Indiana. Shortly after his arrival, the observatory obtained a three-prism spectrograph, and Slipher's job was to mount it to the 24-inch telescope at the observatory and learn to use it to study the rotation of the planets in the solar system. After a few months of problems and trouble-shooting, Slipher was able to take spectrograms of Mars, Jupiter, and Saturn. But Slipher's personal research interests were much farther away than the planets of the solar system. Like Doppler, he was interested in studying the spectra of binary stars, and he began to do so in his spare time at the observatory.

Over the next decade, Slipher completed a Master's degree and a PhD at Indiana University, while continuing his work at Lowell Observatory measuring the spectra and Doppler shifts of stars. In particular, Slipher focused his attention on stars within spiral nebulae (Figure 5), expecting to find that the shift seen in the spectra of the stars would indicate that the galaxies those stars belonged to were rotating. Indeed, he is credited with demonstrating that galaxies rotate, and he was able to determine the velocities at which they do so. But in August 1914, having studied 15 different nebulae, he announced a curious discovery at a meeting of the American Astronomical Society:

In the great majority of cases the nebulae are receding; the largest velocities are all positive...The striking preponderance of the positive sign indicates a general fleeing from us or the Milky Way.

Slipher had found that most galaxies showed a redshift in their spectrum, indicating that they were all moving away from us in space, or receding (Slipher, 1915). By measuring the magnitude of the redshift, he was able to determine the recessional velocity, or the speed at which objects were "fleeing." Slipher had made an interpretation from his observations that put a new perspective on the universe, and in response, he received a standing ovation for his presentation.

Figure 5: The Andromeda galaxy, one of the spiral nebulae studied by Vesto Slipher, as seen in infrared light by NASA's Wide-field Infrared Survey Explorer.

Slipher continued his work with redshift and galaxies and published another paper in 1917, having now examined 25 nebulae and seen a redshift in 21 of them. Georges Lemaître, a Belgian physicist and astronomer, built on Slipher's work while completing his PhD at the Massachusetts Institute of Technology. He extended Slipher's measurements to the entire universe, and calculated mathematically that the universe must be expanding in order to explain Slipher's observations. He published his ideas in a 1927 paper called "A homogeneous Universe of constant mass and growing radius accounting for the radial velocity of extragalactic nebulae" (Lemaître, 1927), but his paper met with widespread criticism from the scientific community. The English astronomer Fred Hoyle ridiculed the idea, later coining the term "Big Bang" as a disparaging nickname for it. And none other than Albert Einstein criticized Lemaître, writing to him "Your math is correct, but your physics is abominable" (Deprit, 1984).

Einstein's criticism had a personal and cultural component, two things we often overlook in terms of their influence on science. Several years earlier, Einstein had published his general theory of relativity (Einstein, 1916). In formulating the theory, Einstein had encountered one significant problem: General relativity predicted that the universe had to be either contracting or expanding – it did not allow for a static universe. But a contracting or expanding universe could not be eternal, while a static, non-moving universe could, and the prevailing cultural belief at the time was that the universe was eternal. Einstein was strongly influenced by his cultural surroundings. As a result, he invented a "fudge factor," which he called the cosmological constant, that would allow the theory of general relativity to be consistent with a static universe. But science is not a democracy or plutocracy; it is neither the most common nor the most popular conclusion that becomes accepted, but rather the conclusion that stands up to the test of evidence over time. Einstein's cosmological constant was being challenged by new evidence.

  • The expanding universe

In 1929, an American astronomer working at the Mt. Wilson Observatory in southern California made an important contribution to the discussion of the nature of the universe. Edwin Hubble had been at Mt. Wilson for 10 years, measuring the distances to galaxies, among other things. In the 1920s, he was working with Milton Humason, a high school dropout and assistant at the observatory. Hubble and Humason plotted the distances they had calculated for 46 different galaxies against Slipher's recession velocities and found a linear relationship (see Figure 6) (Hubble, 1929).

Figure 6: The original Hubble diagram. The relative velocity of galaxies (in km/sec) is plotted against distance to that galaxy (in parsecs; a parsec is 3.26 light years). The slope of the line drawn through the points gives the rate of expansion of the universe (the Hubble Constant). (Originally Figure 1, from "A Relation Between Distance and Radial Velocity Among Extra-Galactic Nebulae," Proceedings of the National Academy of Sciences, Volume 15, Issue 3, 1929: p. 172. © Huntington Library, San Marino, CA.)

In other words, their graph showed that more distant galaxies were receding faster than closer ones, confirming the idea that the universe was indeed expanding. This relationship, now referred to as Hubble's Law, allowed them to calculate the rate of expansion as a function of distance from the slope of the line in the graph. This rate term is now referred to as the Hubble constant. Hubble's initial value for the expansion rate was 500 km/sec/Megaparsec, or about 153 km/sec per million light-years.

Knowing the rate at which the universe is expanding, one can calculate the age of the universe by, in essence, "tracing back" the most distant objects in the universe to their point of origin. Using this initial value for the expansion rate and the measured distances to the galaxies, Hubble and Humason calculated the age of the universe to be approximately 2 billion years. Unfortunately, the calculation was inconsistent with lines of evidence from other investigations. By the time Hubble made his discovery, geologists had used radioactive dating techniques to calculate the age of Earth at about 3 billion years (Rutherford, 1929) – or older than the universe itself! Hubble had followed the process of science, so what was the problem?
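
The "tracing back" can be sketched as a rough calculation. Assuming a constant expansion rate (a simplification; the full estimate depends on how the expansion has changed over time), Hubble's Law gives

\[
v = H_0 d \quad\Rightarrow\quad t \approx \frac{d}{v} = \frac{1}{H_0}.
\]

With H_0 = 500 km/sec/Megaparsec and 1 Megaparsec ≈ 3.09 × 10^19 km, this yields t ≈ 6 × 10^16 seconds, or roughly 2 billion years; the same back-of-the-envelope estimate with the modern value of about 72 km/sec/Megaparsec gives roughly 13 to 14 billion years.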

Even laws and constants are subject to revision in science. It soon became clear that there was a problem in the way that Hubble had calculated his constant. In the 1940s, a German astronomer named Walter Baade took advantage of the blackouts that were ordered in response to potential attacks during World War II and used the Mt. Wilson Observatory in southern California to look at several objects that Hubble had interpreted as single stars. With darker surrounding skies, Baade realized that these objects were, in fact, groups of stars, and each was fainter, and thus more distant, than Hubble had calculated. Baade doubled the distance to these objects, and in turn halved the Hubble constant and doubled the age of the universe. In 1953, the American astronomer Allan Sandage, who had studied under Baade, looked in more detail at the brightness of stars and how that varied with distance. Sandage further revised the constant, and his estimate of 75 km/sec/Megaparsec is close to our modern-day estimate of the Hubble constant of 72 km/sec/Megaparsec, which places the age of the universe at 12 to 14 billion years old.

The new estimates developed by Baade and Sandage did not negate what Hubble had done (it is still called the Hubble constant, after all), but they revised it based on new knowledge. The lasting knowledge of science is rarely the work of an individual, as building on the work of others is a critical component of the process of science. Hubble's findings would have been limited to some interesting data on the distances to various stars had they not also built on, and incorporated, the work of Slipher. Similarly, Baade and Sandage's contributions were no less significant because they "simply" refined Hubble's earlier work.

Since the 1950s, other means of calculating the age of the universe have been developed. For example, there are now methods for dating the age of the stars, and the oldest stars date to approximately 13.2 billion years ago (Frebel et al., 2007). The Wilkinson Microwave Anisotropy Probe and, more recently, the Planck satellite have collected data on the cosmic microwave background radiation (Figure 7). Using these data in conjunction with Einstein's theory of general relativity, scientists have calculated the age of the universe at 13.78 billion years (Aghanim et al. 2020). The convergence of multiple lines of evidence on a single explanation is what creates the solid foundation of scientific knowledge.

Figure 7: Visual representation of the cosmic microwave background radiation, and the temperature differences indicated by that radiation, as collected by the Wilkinson Microwave Anisotropy Probe.

  • Why should we trust science?

Why should we believe what scientists say about the age of the universe? We have no written records of its creation, and no one has been able to "step outside" of the system, as astronauts did when they took pictures of Earth from space, to measure its age. Yet the nature of the scientific process allows us to accurately state the age of the observable universe. These estimates were developed by multiple researchers and tested through multiple research methods. They have been presented to the scientific community through publications and public presentations. And they have been confirmed and verified by many different studies. New studies, or new research methods, may be developed that cause us to refine our estimate of the age of the universe upward or downward. This is how the process of science works; it is subject to change as more information and new technologies become available. But it is not tenuous – our age estimate may be refined, but the idea of an expanding universe is unlikely to be overturned. As evidence builds to support an idea, our confidence in that idea builds.

Upon seeing Hubble's work, even Albert Einstein changed his opinion of a static universe and called his insertion of the cosmological constant the "biggest blunder" of his professional career. Hubble's discovery actually confirmed Einstein's theory of general relativity, which predicts that the universe must be expanding or contracting. Einstein refused to accept this idea because of his cultural biases. His work had not predicted a static universe, but he assumed this must be the case given what he had grown up believing. When confronted with the data, he recognized that his earlier beliefs were flawed, and came to accept the findings of the science behind the idea. This is a hallmark of science: While an individual's beliefs may be biased by personal experience, the scientific enterprise works to collect data to allow for a more objective conclusion to be identified. Incorrect ideas may be upheld for some amount of time, but eventually the preponderance of evidence leads us to correct them. Once used as a term of disparagement, the "Big Bang" theory is now the leading explanation for the origin of the universe as we know it.

There are other questions we can ask about the origin of the universe, not all of which can be answered by science. Scientists can address when and how the universe began, for example, but cannot determine why it began. That type of question must be explored through philosophy, religion, and other ways of thinking. The questions that scientists ask must be testable. Scientists have provided answers to testable questions that have helped us calculate the age of the universe, like how distant certain stars are and how fast they are receding from us. Whether or not we can get a definitive answer, we can be confident in the process by which the explanations were developed, allowing us to rely on the knowledge that is produced through the process of science. Someday we may find evidence to help us understand why the universe was created, but for the time being science will limit itself to investigating the last 13.78 billion or so years of phenomena.

Benefits of science

  • Scientific findings frequently benefit society through technological and other innovations.
  • Technological innovations may lead to new scientific breakthroughs.
  • Some scientists are motivated by potential applications of their research.

The process of science is a way of building knowledge about the universe — constructing new ideas that illuminate the world around us. Those ideas are inherently tentative, but as they cycle through the process of science again and again and are tested and retested in different ways, we become increasingly confident in them. Furthermore, through this same iterative process, ideas are modified, expanded, and combined into more powerful explanations. For example, a few observations about inheritance patterns in garden peas can — over many years and through the work of many different scientists — be built into the broad understanding of genetics offered by science today. So although the process of science is iterative, ideas do not churn through it repetitively. Instead, the cycle actively serves to construct and integrate scientific knowledge.

And that knowledge is useful for all sorts of things: designing bridges, slowing climate change, and prompting frequent hand washing during flu season. Scientific knowledge allows us to develop new technologies , solve practical problems, and make informed decisions — both individually and collectively. Because its products are so useful, the process of science is intertwined with those applications:

  • New scientific knowledge may lead to new applications. For example, the discovery of the structure of DNA was a fundamental breakthrough in biology. It formed the underpinnings of research that would ultimately lead to a wide variety of practical applications, including DNA fingerprinting, genetically engineered crops, and tests for genetic diseases.
  • New technological advances may lead to new scientific discoveries. For example, developing DNA copying and sequencing technologies has led to important breakthroughs in many areas of biology, especially in the reconstruction of the evolutionary relationships among organisms.
  • Potential applications may motivate scientific investigations. For example, the possibility of engineering microorganisms to cheaply produce drugs for diseases like malaria motivates many researchers in the field to continue their studies of microbe genetics.

The process of science and you

This flowchart represents the process of formal science, but in fact, many aspects of this process are relevant to everyone and can be used in your everyday life. Sure, some elements of the process really only apply to formal science (e.g., publication, feedback from the scientific community), but others are widely applicable to everyday situations (e.g., asking questions, gathering evidence, solving practical problems). Understanding the process of science can help anyone develop a scientific outlook on life.


Why not scientism?

Science is not the only form of knowledge but it is the best, being the most successful epistemic enterprise in history.

by Moti Mizrahi

‘Philosophy is dead,’ Stephen Hawking once declared, because it ‘has not kept up with modern developments in science, particularly physics.’ It is scientists, not philosophers, who are now ‘the bearers of the torch of discovery in our quest for knowledge’. The response from some philosophers was to accuse Hawking of ‘scientism’. The charge of ‘scientism’ is meant to convey disapproval of anyone who values scientific disciplines, such as physics, over non-scientific disciplines, such as philosophy. The philosopher Tom Sorell writes that scientism is ‘a matter of putting too high a value on science in comparison with other branches of learning or culture’. But what’s wrong with putting a higher value on science compared with other academic disciplines? What is so bad about scientism? If physics is in fact a better torch in the quest for knowledge than philosophy, as Hawking claimed, then perhaps it should be valued over philosophy and other non-scientific fields of enquiry.

Before we can address these questions, however, we need to get our definitions straight. For, much like other philosophical -isms, ‘scientism’ means different things to different philosophers. Now, the question of whether science is the only way of knowing about reality, or at least better than non-scientific ways of knowing, is an epistemological question. Construed as an epistemological thesis, then, scientism can be broadly understood as either the view that scientific knowledge is the only form of knowledge we have, or the view that scientific knowledge is the best form of knowledge we have. But scientism comes in other varieties as well, including methodological and metaphysical ones. As a methodological thesis, scientism is either the view that scientific methods are the only ways of knowing about reality we have, or the view that scientific methods are the best ways of knowing about reality we have. And, construed as a metaphysical thesis, scientism is either the view that science is our only guide to what exists, or the view that science is our best guide to what exists.

Without a clear understanding of the aforementioned varieties of scientism, philosophical parties to the scientism debate are at risk of merely talking past each other. That is, some defenders of scientism might be arguing for weaker varieties of scientism, in terms of scientific knowledge or methods being the best ones, while their opponents interpret them as arguing for stronger varieties of scientism, in terms of scientific knowledge or methods being the only ones. My own position, for example, is a weak variety of scientism. In my paper ‘What’s So Bad about Scientism?’ (2017), I defend scientism as an epistemological thesis, which I call ‘Weak Scientism’. This is the view that scientific knowledge is the best form of knowledge we have (as opposed to ‘Strong Scientism’, which is the view that scientific knowledge is the only knowledge we have).

According to Weak Scientism, while non-scientific disciplines such as philosophy do produce knowledge, scientific disciplines such as physics produce knowledge that is superior – both quantitatively and qualitatively – to non-scientific knowledge. It is important to note that ‘knowledge’ does not refer to justified true belief (or any other analysis of knowledge, for that matter). Rather, ‘knowledge’ means disciplinary knowledge or the research produced by practitioners in an academic field of enquiry. All academic disciplines are in the business of producing knowledge (or research) in this sense. The knowledge of each academic discipline is what we find in the academic publications of the practitioners of an academic discipline. Proponents of Strong Scientism would deny that non-scientific disciplines produce ‘real knowledge’, as Richard Williams puts it in the introduction to the anthology Scientism: The New Orthodoxy (2014), whereas proponents of Weak Scientism would grant that non-scientific disciplines produce knowledge but argue that scientific knowledge is better than non-scientific knowledge along several dimensions.

Now, whether any of these epistemological, methodological and metaphysical theses – weak (‘best’) or strong (‘only’) – is true should be a matter of debate, not definition by fiat. Each claim needs to be put forward, examined, criticised and debated. It can’t be what some have unfortunately sought to do, which is to make scientism a misguided view by definition. Take the psychologist Steve Taylor, who wrote in 2019:

One of the characteristics of dogmatic belief systems is that their adherents accept assumptions as proven facts. This is certainly true of scientism. For example, it is a fact that consciousness exists, and that it is associated with neurological activity. But the assumption that consciousness is produced by neurological activity is questionable.

Here, Taylor asserts that scientism is a dogmatic belief system. But why is that? None of the epistemological, methodological and metaphysical theses mentioned above is dogmatic. These theses can be questioned, of course. However, if merely being questionable were sufficient to make a belief dogmatic, then many if not most of our beliefs would be dogmatic. To see why, consider my (and, in all likelihood, your) belief that there is an external world, a world that is there independently of our minds. Our belief in the existence of an external world is notoriously difficult to prove, as any epistemologist will tell you, but that doesn’t mean that our belief in an external world is nothing more than mere (religious) dogma.

Likewise, in her book Defending Science – Within Reason (2003), Susan Haack asserts that, by definition, scientism is ‘an exaggerated kind of deference towards science, an excessive readiness to accept as authoritative any claim made by the sciences, and to dismiss any kind of criticism of science or its practitioners as anti-scientific prejudice’. But this runs into the same problem as above. None of the epistemological, methodological and metaphysical theses mentioned above are exaggerated or excessive. To have an exaggerated deference toward something is misguided, and to have an excessive readiness to accept as authoritative any claims made by some source is foolhardy. After all, that’s just what the words ‘exaggerated’ and ‘excessive’ imply.


To assert that scientism is merely a dogmatic belief, as Taylor does, or an inherently misguided attitude, as Haack does, is to weaponise it, not to argue against it. There is an important history here. In the mid-20th century, theologians and religious scholars weaponised scientism in an attempt to defend their academic territory from what they perceived as a threat of scientific encroachment. In their book Roadblocks to Faith (1954), James Pike and John McGill Krumm distinguished science from scientism and claimed that the latter is ‘a threat to the humanities no less than to religion’. Around the same time, in his paper ‘The Preacher Talks to the Man of Science’ (1954), H Richard Rasmusson characterised scientism as ‘a cult that has made a religion out of science’. These religious scholars weaponised scientism out of a concern that science is encroaching on areas of enquiry that presuppose the existence of the very things whose existence they take science to be questioning or denying, such as God, the supernatural, and the like. That is why, according to Ian Barbour, ‘it is [considered] scientism when Richard Dawkins says that the presence of chance in evolution shows that this is a purposeless universe,’ for this claim is taken to be questioning the belief in a providential God.

Some philosophers are now playing a similar game, that is, using scientism as a weapon in the fight against scientists who are critical of academic philosophy. Philosophers who level the charge of ‘scientism’ typically identify prominent scientists, such as Hawking and Neil deGrasse Tyson, as exhibiting this kind of misguided attitude toward science; the philosopher Ian James Kidd called them mere ‘cheerleaders for science’. The problem with thinking of scientism as these philosophers do – as exaggerated deference toward science – is that it is a persuasive conception of scientism. To assert that scientism is ‘putting too high a value on science’ or ‘an exaggerated kind of deference towards science’ is to express disapproval of what could, after all, be a reasonable view to hold.

In argumentation studies, definitions that are intended to transfer emotive force, such as feelings of approval or disapproval, are known as persuasive definitions. To say that scientism just is ‘putting too high a value on science’ is akin to saying that abortion is murder – the definition is overloaded with emotional force. Just as pro-choice advocates would object to saying that abortion is murder, since it expresses disapproval of abortion, advocates of scientism would object to saying that scientism is ‘putting too high a value on science’ since it expresses disapproval of scientism. Instead of condemning scientism by definition, as Sorell and Haack do, opponents of scientism need to show precisely what is wrong with it.

In their introduction to the anthology Scientism: Prospects and Problems (2018), René van Woudenberg, Rik Peels and Jeroen de Ridder agree that scientism should not be weaponised when they write: ‘no one will accept this notion of “scientism” as an adequate characterisation of their own views, as no one will think that their deference to science is exaggerated, or their readiness to accept claims made by the sciences is excessive.’ In fact, in the paper ‘Six Signs of Scientism’ (2012), Haack herself observes that, before it was weaponised by those who sought to defend religion and philosophy from science trespassing on their territories, ‘the word “scientism” was neutral.’

Unlike pejorative conceptions of scientism, Weak Scientism – the view I defend – is a neutral framing according to which scientific disciplines, when compared with non-scientific disciplines, such as philosophy, are better along several dimensions. Briefly, the argument runs as follows. One thing can be said to be better than another thing either quantitatively or qualitatively. Scientific knowledge is quantitatively better than non-scientific knowledge because scientific disciplines produce more knowledge, and the knowledge they produce has more impact than the knowledge produced by non-scientific disciplines. This claim is supported by data on the research output (that is, number of publications) and research impact (that is, number of citations) of scientific and non-scientific academic disciplines. These data show that scientific disciplines produce more publications, and those publications get cited more than the publications of non-scientific disciplines.

In their paper ‘Humanities: The Outlier of Research Assessment’ (2020), Güleda Doğan and Zehra Taşkın use publication and citation data in 255 subjects on the Web of Science from 1980 to 2020 to find that the distribution of publications is such that ‘81 per cent were published in three main pure sciences categories: natural sciences (33 per cent), medical sciences (27 per cent), and engineering and technology (21 per cent)’, and that ‘[t]he total number of humanities publications was almost similar to a relatively small pure science area’, namely, agricultural sciences. As far as the distribution of citations is concerned, the ‘[h]umanities had only 0.52 per cent of whole citations in the dataset, while natural sciences had 44 per cent, medical sciences had 30 per cent, engineering and technology had 17 per cent, social sciences had 6 per cent, and agriculture had 1.5 per cent.’ When compared with the natural sciences, engineering and technology, medical and health sciences, and social sciences, the humanities have the lowest values of research output (measured by publication counts), with the exception of agricultural sciences, and research impact (measured by citation counts), without exception. While most scientific publications are cited, only 16 per cent of publications in the humanities are cited. Within the humanities, philosophy, ethics and religion have the highest percentages of uncited publications.

Scientific knowledge can be said to be qualitatively better than non-scientific knowledge because scientific knowledge is explanatorily, predictively and instrumentally more successful than non-scientific knowledge. This is the sort of success that philosophers of science talk about, as they often do, when they say that science is successful. Consider, for example, Albert Einstein's theory of relativity. The theory is explanatorily successful insofar as it provides a comprehensive explanation for phenomena that would otherwise seem mysterious, such as gravity, planetary orbits, black holes, electromagnetism, and more. The theory is instrumentally successful insofar as it allows us to intervene in nature as when we use GPS to navigate our world and gravitational lensing to look for new worlds. The theory is predictively successful insofar as it makes novel predictions that are borne out by observation or experimentation, such as the perihelion precession of Mercury, the deflection of light by massive objects, the gravitational redshifting of light, the relativistic delay of light (also known as the Shapiro effect), gravitational waves, and more. One would be hard pressed to find a non-scientific theory that is as explanatorily, instrumentally and predictively successful as the theory of relativity.

This argument for Weak Scientism is not meant to be the final word on the question of scientism. There may be other arguments for and against the varieties of scientism mentioned above, which is exactly what we should want. What we do not want is for scientism to be weaponised. Unfortunately, the ‘scientism’ charge is already being used in the war against science as it is fought on the internet and social media. Anti-vaxxers use it to create mass doubt and disbelief regarding any claim about the COVID-19 vaccines made by public health officials and organisations, such as the World Health Organization. Climate change deniers use it to sow seeds of doubt about the scientific consensus on the anthropogenic climate crisis.

For example, when the tennis player Novak Djokovic posted on Twitter that he would not be going to the 2022 US Open because he is not vaccinated for COVID-19, the actor Rob Schneider quoted his Tweet and wrote: ‘Because science… And by science, of course I mean the religion of scientism, which is the opposite of science.’ Since then, Schneider's Tweet was deleted, and for good reason. That is because Schneider had used ‘scientism’ as a weapon against the recommendation of public health officials to get vaccinated for COVID-19. His use of ‘scientism’ is meant to imply that the United States Tennis Association has no good reasons to require tennis players to get vaccinated for COVID-19 before they can compete in the US Open. It is meant to suggest that the requirement to get vaccinated for COVID-19 is mere dogma that is not supported by scientific evidence.


Similarly, the law professor John O McGinnis uses ‘scientism’ as a weapon of science denial in his essay ‘Blinded by Scientism’ (2020), when he writes:

The mantra of ‘follow the science’ [which he labels ‘scientism’] is not unique to the politics of the virus [namely, the coronavirus SARS-CoV-2]. Politicians offer a similar justification for policies on climate change. Just as the science about COVID-19 ‘justifies’ lockdowns, the science of climate change is used to support spending and regulatory policy that will deliver zero net emissions.

This is another example of the use of ‘scientism’ as an anti-science weapon. McGinnis’s use of ‘scientism’ is meant to raise doubts about the scientific consensus over anthropogenic climate change. It is meant to imply that policymakers and lawmakers have no good reasons to enact any policies or legislate any laws that are informed by the science of climate change.

For these reasons, academic philosophers who weaponise ‘scientism’ are playing a dangerous game. In their valiant attempt to defend academic philosophy from the criticism of some celebrity scientists, such as Hawking, they may be providing ammunition to science deniers. Not only philosophers but also scientists seem to occasionally fall into the trap of weaponising scientism, thereby enabling science deniers to use ‘scientism’ as an anti-science weapon of doubt and disbelief. In the article ‘What Is Scientism, and Why Is it a Mistake?’ (2021), the professor of astrophysics Adam Frank quotes the Google definition of ‘scientism’ as ‘excessive belief in the power of scientific knowledge and techniques’ with approval, and then goes on to say that scientism

is a mistake […] because it is confused about what it’s defending. Without doubt, science is unique, powerful, and wonderful. It should be celebrated, and it needs to be protected. Scientism, on the other hand, is just metaphysics, and there are lots and lots of metaphysical beliefs.

Of course, there are many metaphysical beliefs. There are also many scientific beliefs, just as there are many religious, perceptual, testimonial and other kinds of beliefs as well. The mere fact that there are many beliefs of a certain type doesn’t necessarily mean that some beliefs of that type cannot be said to be better than other beliefs of the same type. The question is whether belief in the power of science to produce knowledge (or some other epistemic good) is justified, warranted or reasonable.

Now, philosophers who weaponise ‘scientism’ tend to find scientism threatening to non-scientific academic disciplines. Again, Haack is a case in point. In her paper ‘The Real Question: Can Philosophy Be Saved?’ (2017), she claims that ‘the rising tide of scientistic philosophy […] spells shipwreck for philosophy itself.’ However, there is a continuum between a dogmatic acceptance of science, or ‘science worship’, which is often mistakenly referred to as ‘scientism’, and a dogmatic rejection of science, or ‘science denial’. If a dogmatic acceptance of science is an epistemic threat, as academic philosophers who weaponise ‘scientism’ tend to claim, then a dogmatic rejection of science is an epistemic threat, too. In fact, a dogmatic rejection of science is a bigger epistemic threat than a dogmatic acceptance of science. Why? Because science is the most successful epistemic enterprise human beings have ever had, as almost all philosophers of science agree.


As Anjan Chakravartty puts it in his entry on scientific realism for the Stanford Encyclopedia of Philosophy, it is a ‘widely accepted premise that our best [scientific] theories are extraordinarily successful: they facilitate empirical predictions, retrodictions, and explanations of the subject matters of scientific investigation, often marked by astounding accuracy and intricate causal manipulations of the relevant phenomena.’ In other words, the flip side of dogmatic ‘science worship’ is dogmatic ‘science denial’. Surely, both are misguided. But the latter is a much riskier mistake to make than the former.

Rather than conceive of scientism in ways that could be weaponised, then, we should think about it along the lines I have proposed above. Epistemological scientism is the view that scientific knowledge is superior to non-scientific knowledge either because scientific knowledge is the only form of knowledge we have, and so non-scientific knowledge is not really knowledge at all, or because scientific knowledge is better than non-scientific knowledge. Unlike pejorative conceptions of scientism, these neutral conceptions cannot be weaponised, and thus cannot become anti-science weapons of doubt and disbelief in the hands of anti-vaxxers, climate-change deniers, and others who harbour anti-science sentiments. This would allow us to keep the following question open and up for debate: what sort of attitude or stance should we have toward science? As far as this question is concerned, the term ‘scientism’ is a useful term, and it would be a shame to let this live and important debate get derailed by pejorative conceptions of scientism that do nothing but provide ammunition to science deniers.

scientific knowledge essay

History of ideas

Reimagining balance

In the Middle Ages, a new sense of balance fundamentally altered our understanding of nature and society

A marble bust of Thucydides is shown on a page from an old book. The opposite page is blank.

What would Thucydides say?

In constantly reaching for past parallels to explain our peculiar times we miss the real lessons of the master historian

Mark Fisher

A man and a woman in formal evening dress but with giant fish heads covering their faces are pictured beneath a bridge on the foreshore of a river

The environment

Emergency action

Could civil disobedience be morally obligatory in a society on a collision course with climate catastrophe?

Rupert Read

scientific knowledge essay

Metaphysics

The enchanted vision

Love is much more than a mere emotion or moral ideal. It imbues the world itself and we should learn to move with its power

Mark Vernon

scientific knowledge essay

What is ‘lived experience’?

The term is ubiquitous and double-edged. It is both a key source of authentic knowledge and a danger to true solidarity

Patrick J Casey

scientific knowledge essay

Thinkers and theories

Philosophy is an art

For Margaret Macdonald, philosophical theories are akin to stories, meant to enlarge certain aspects of human life


Scientific Collaboration and Collective Knowledge: New Essays


Descartes once argued that, with sufficient effort and skill, a single scientist could uncover fundamental truths about our world. Contemporary science proves the limits of this claim. From sequencing the human genome to predicting the effects of climate change, some current scientific research requires the collaboration of hundreds (if not thousands) of scientists with various specializations. Additionally, the majority of published scientific research is now coauthored, including more than 80% of articles in the natural sciences. Small collaborative teams have become the norm in science. This is the first volume to address critical philosophical questions about how collective scientific research could be organized differently and how it should be organized. For example, should scientists be required to share knowledge with competing research teams? How can universities and grant-giving institutions promote successful collaborations? When hundreds of researchers contribute to a discovery, how should credit be assigned—and can minorities expect a fair share? When collaborative work contains significant errors or fraudulent data, who deserves blame? In this collection of essays, leading philosophers of science address these critical questions, among others. Their work extends current philosophical research on the social structure of science and contributes to the growing, interdisciplinary field of social epistemology. The volume’s strength lies in the diversity of its authors’ methodologies. Employing detailed case studies of scientific practice, mathematical models of scientific communities, and rigorous conceptual analysis, contributors to this volume study scientific groups of all kinds, including small labs, peer-review boards, and large international collaborations like those in climate science and particle physics.


Scientific Objectivity

Scientific objectivity is a property of various aspects of science. It expresses the idea that scientific claims, methods, results—and scientists themselves—are not, or should not be, influenced by particular perspectives, value judgments, community bias or personal interests, to name a few relevant factors. Objectivity is often considered to be an ideal for scientific inquiry, a good reason for valuing scientific knowledge, and the basis of the authority of science in society.

Many central debates in the philosophy of science have, in one way or another, to do with objectivity: confirmation and the problem of induction; theory choice and scientific change; realism; scientific explanation; experimentation; measurement and quantification; statistical evidence; reproducibility; evidence-based science; feminism and values in science. Understanding the role of objectivity in science is therefore integral to a full appreciation of these debates. As this article testifies, the reverse is true too: it is impossible to fully appreciate the notion of scientific objectivity without touching upon many of these debates.

The ideal of objectivity has been criticized repeatedly in the philosophy of science, with critics questioning both its desirability and its attainability. This article focuses on the questions of how scientific objectivity should be defined, whether the ideal of objectivity is desirable, and to what extent scientists can achieve it.

1. Introduction
2. Objectivity as Faithfulness to Facts
   2.1 The View from Nowhere
   2.2 Theory-Ladenness and Incommensurability
   2.3 Underdetermination, Values, and the Experimenters’ Regress
3. Objectivity as Absence of Normative Commitments and the Value-Free Ideal
   3.1 Epistemic and Contextual Values
   3.2 Acceptance of Scientific Hypotheses and Value Neutrality
   3.3 Science, Policy and the Value-Free Ideal
   4.1 Measurement and Quantification
   4.2.1 Bayesian Inference
   4.2.2 Frequentist Inference
   4.3 Feyerabend: The Tyranny of the Rational Method
   5.1 Reproducibility and the Meta-Analytic Perspective
   5.2 Feminist and Standpoint Epistemology
   6.1 Max Weber and Objectivity in the Social Sciences
   6.2 Contemporary Rational Choice Theory
   6.3 Evidence-Based Medicine and Social Policy
7. The Unity and Disunity of Scientific Objectivity
8. Conclusions
Other Internet Resources
Related Entries

1. Introduction

Objectivity is a value. To call a thing objective implies that it has a certain importance to us and that we approve of it. Objectivity comes in degrees. Claims, methods, results, and scientists can be more or less objective, and, other things being equal, the more objective, the better. Using the term “objective” to describe something often carries a special rhetorical force with it. The admiration of science among the general public and the authority science enjoys in public life stems to a large extent from the view that science is objective or at least more objective than other modes of inquiry. Understanding scientific objectivity is therefore central to understanding the nature of science and the role it plays in society.

If what is so great about science is its objectivity, then objectivity should be worth defending. The close examinations of scientific practice that philosophers of science have undertaken in the past fifty years have shown, however, that several conceptions of the ideal of objectivity are either questionable or unattainable. The prospects for a science providing a non-perspectival “view from nowhere” or for proceeding in a way uninformed by human goals and values are fairly slim, for example.

This article discusses several proposals to characterize the idea and ideal of objectivity in such a way that it is both strong enough to be valuable, and weak enough to be attainable and workable in practice. We begin with a natural conception of objectivity: faithfulness to facts. We motivate the intuitive appeal of this conception, discuss its relation to scientific method, and review arguments challenging both its attainability and its desirability. We then move on to a second conception of objectivity as absence of normative commitments and value-freedom, and once more we contrast arguments in favor of such a conception with the challenges it faces. A third conception of objectivity, which we discuss at length, is the idea of absence of personal bias.

Finally, there is the idea that objectivity is anchored in scientific communities and their practices. After discussing three case studies from economics, social science, and medicine, we address the conceptual unity of scientific objectivity: Do the various conceptions have a common valid core, such as promoting trust in science or minimizing relevant epistemic risks? Or are they rival and only loosely related accounts? We close with some conjectures about which aspects of objectivity remain defensible and desirable in the light of the difficulties we have encountered.

2. Objectivity as Faithfulness to Facts

The basic idea of this first conception of objectivity is that scientific claims are objective in so far as they faithfully describe facts about the world. The philosophical rationale underlying this conception of objectivity is the view that there are facts “out there” in the world and that it is the task of scientists to discover, analyze, and systematize these facts. “Objective” then becomes a success word: if a claim is objective, it correctly describes some aspect of the world.

In this view, science is objective to the degree that it succeeds at discovering and generalizing facts, abstracting from the perspective of the individual scientist. Although few philosophers have fully endorsed such a conception of scientific objectivity, the idea figures recurrently in the work of prominent twentieth-century philosophers of science such as Carnap, Hempel, Popper, and Reichenbach.

Humans experience the world from a perspective. The contents of an individual’s experiences vary greatly with his perspective, which is affected by his personal situation, and the details of his perceptual apparatus, language and culture. While the experiences vary, there seems to be something that remains constant. The appearance of a tree will change as one approaches it but—according to common sense and most philosophers—the tree itself doesn’t. A room may feel hot or cold for different persons, but its temperature is independent of their experiences. The object in front of me does not disappear just because the lights are turned off.

These examples motivate a distinction between qualities that vary with one’s perspective, and qualities that remain constant through changes of perspective. The latter are the objective qualities. Thomas Nagel explains that we arrive at the idea of objective qualities in three steps (Nagel 1986: 14). The first step is to realize (or postulate) that our perceptions are caused by the actions of things around us, through their effects on our bodies. The second step is to realize (or postulate) that since the same qualities that cause perceptions in us also have effects on other things and can exist without causing any perceptions at all, their true nature must be detachable from their perspectival appearance and need not resemble it. The final step is to form a conception of that “true nature” independently of any perspective. Nagel calls that conception the “view from nowhere”, Bernard Williams the “absolute conception” (Williams 1985 [2011]). It represents the world as it is, unmediated by human minds and other “distortions”.

This absolute conception lies at the basis of scientific realism (for a detailed discussion, see the entry on scientific realism ) and it is attractive in so far as it provides a basis for arbitrating between conflicting viewpoints (e.g., two different observations). Moreover, the absolute conception provides a simple and unified account of the world. Theories of trees will be very hard to come by if they use predicates such as “height as seen by an observer” and a hodgepodge if their predicates track the habits of ordinary language users rather than the properties of the world. To the extent, then, that science aims to provide explanations for natural phenomena, casting them in terms of the absolute conception would help to realize this aim. A scientific account cast in the language of the absolute conception may not only be able to explain why a tree is as tall as it is but also why we see it in one way when viewed from one standpoint and in a different way when viewed from another. As Williams (1985 [2011: 139]) puts it,

[the absolute conception] nonvacuously explain[s] how it itself, and the various perspectival views of the world, are possible.

A third reason to find the view from nowhere attractive is that if the world came in structures as characterized by it and we did have access to it, we could use our knowledge of it to ground predictions (which, to the extent that our theories do track the absolute structures, will be borne out). A fourth and related reason is that attempts to manipulate and control phenomena can similarly be grounded in our knowledge of these structures. To attain any of the four purposes—settling disagreements, explaining the world, predicting phenomena, and manipulation and control—the absolute conception is at best sufficient but not necessary. We can, for instance, settle disagreements by imposing the rule that the person with higher social rank or greater experience is always right. We can explain the world and our image of it by means of theories that do not represent absolute structures and properties, and there is no need to get things (absolutely) right in order to predict successfully. Nevertheless, there is something appealing in the idea that factual disagreements can be settled by the very facts themselves, and that explanations and predictions can be grounded in what’s really there rather than in a distorted image of it.

No matter how desirable, our ability to use scientific claims to represent facts about the world depends on whether these claims can unambiguously be established on the basis of evidence, and of evidence alone. Alas, the relation between evidence and scientific hypothesis is not straightforward. Subsections 2.2 and 2.3 will look at two challenges to the idea that even the best scientific method will yield claims that describe an aperspectival view from nowhere. Section 5.2 will deal with socially motivated criticisms of the view from nowhere.

According to a popular picture, all scientific theories are false and imperfect. Yet, as we add true and eliminate false beliefs, our best scientific theories become more truthlike (e.g., Popper 1963, 1972). If this picture is correct, then scientific knowledge grows by gradually approaching the truth and it will become more objective over time, that is, more faithful to facts. However, scientific theories often change, and sometimes several theories compete for the place of the best scientific account of the world.

It is inherent in the above picture of scientific objectivity that observations can, at least in principle, decide between competing theories. If they did not, the conception of objectivity as faithfulness would be pointless to have, as we would not be in a position to verify it. This position has been adopted by Karl R. Popper, Rudolf Carnap and other leading figures in (broadly) empiricist philosophy of science. Many philosophers have argued that the relation between observation and theory is far more complex and that influences can actually run both ways (e.g., Duhem 1906 [1954]; Wittgenstein 1953 [2001]). The most lasting criticism, however, was delivered by Thomas S. Kuhn (1962 [1970]) in his book “The Structure of Scientific Revolutions”.

Kuhn’s analysis is built on the assumption that scientists always view research problems through the lens of a paradigm, defined by a set of relevant problems, axioms, methodological presuppositions, techniques, and so forth. Kuhn provided several historical examples in favor of this claim. Scientific progress—and the practice of normal, everyday science—happens within a paradigm that guides the individual scientists’ puzzle-solving work and that sets the community standards.

Can observations undermine such a paradigm, and speak for a different one? Here, Kuhn famously stresses that observations are “theory-laden” (cf. also Hanson 1958): they depend on a body of theoretical assumptions through which they are perceived and conceptualized. This hypothesis has two important aspects.

First, the meaning of observational concepts is influenced by theoretical assumptions and presuppositions. For example, the concepts “mass” and “length” have different meanings in Newtonian and relativistic mechanics; so does the concept “temperature” in thermodynamics and statistical mechanics (cf. Feyerabend 1962). In other words, Kuhn denies that there is a theory-independent observation language. The “faithfulness to reality” of an observation report is always mediated by a theoretical überbau, disabling the role of observation reports as an impartial, merely fact-dependent arbiter between different theories.

Second, not only the observational concepts, but also the perception of a scientist depends on the paradigm she is working in.

Practicing in different worlds, the two groups of scientists [who work in different paradigms, J.R./J.S.] see different things when they look from the same point in the same direction. (Kuhn 1962 [1970: 150])

That is, our own sense data are shaped and structured by a theoretical framework, and may be fundamentally distinct from the sense data of scientists working in another one. Where a Ptolemaic astronomer like Tycho Brahe sees a sun setting behind the horizon, a Copernican astronomer like Johannes Kepler sees the horizon moving up to a stationary sun. If this picture is correct, then it is hard to assess which theory or paradigm is more faithful to the facts, that is, more objective.

The thesis of the theory-ladenness of observation has also been extended to the incommensurability of different paradigms or scientific theories, problematized independently by Thomas S. Kuhn (1962 [1970]) and Paul Feyerabend (1962). Literally, this concept means “having no measure in common”, and it figures prominently in arguments against a linear and standpoint-independent picture of scientific progress. For instance, the Special Theory of Relativity appears to be more faithful to the facts and therefore more objective than Newtonian mechanics because it reduces, for low speeds, to the latter, and it accounts for some additional facts that are not predicted correctly by Newtonian mechanics. This picture is undermined, however, by two central aspects of incommensurability. First, not only do the observational concepts in both theories differ, but the principles for specifying their meaning may be inconsistent with each other (Feyerabend 1975: 269–270). Second, scientific research methods and standards of evaluation change with the theories or paradigms. Not all puzzles that could be tackled in the old paradigm will be solved by the new one—this is the phenomenon of “Kuhn loss”.

According to Feyerabend, a meaningful use of objectivity presupposes that we perceive and describe the world from a specific perspective, e.g., when we try to verify the referential claims of a scientific theory. Only within a particular scientific worldview can the concept of objectivity be applied meaningfully. That is, scientific method cannot free itself from the particular scientific theory to which it is applied; the door to standpoint-independence is locked. As Feyerabend puts it:

our epistemic activities may have a decisive influence even upon the most solid piece of cosmological furniture—they make gods disappear and replace them by heaps of atoms in empty space. (1978: 70)

Kuhn’s and Feyerabend’s theses about the theory-ladenness of observation, and their implications for the objectivity of scientific inquiry, have been much debated since, and have often been misunderstood in a social constructivist sense. Kuhn therefore later returned to the topic of scientific objectivity, of which he gives his own characterization in terms of the shared cognitive values of a scientific community. We discuss Kuhn’s later view in section 3.1. For a more thorough coverage, see the entries on theory and observation in science, the incommensurability of scientific theories, and Thomas S. Kuhn.

Scientific theories are tested by comparing their implications with the results of observations and experiments. Unfortunately, neither positive results (when the theory’s predictions are borne out in the data) nor negative results (when they are not) allow unambiguous inferences about the theory. A positive result can obtain even though the theory is false, due to some alternative that makes the same predictions. Finding suspect Jones’ fingerprints on the murder weapon is consistent with his innocence because he might have used it as a kitchen knife. A negative result might be due not to the falsehood of the theory under test but to the failure of one or more auxiliary assumptions needed to derive a prediction from the theory. Testing, let us say, the implications of Newton’s laws for movements in our planetary system against observations requires assumptions about the number of planets, the sun’s and the planets’ masses, the extent to which the earth’s atmosphere refracts light beams, how telescopes affect the results and so on. Any of these may be false, explaining an inconsistency. The locus classicus for these observations is Pierre Duhem’s The Aim and Structure of Physical Theory (Duhem 1906 [1954]). Duhem concluded that there was no “crucial experiment”, an experiment that conclusively decides between two alternative theories, in physics (1906 [1954: 188ff.]), and that physicists had to employ their expert judgment or what Duhem called “good sense” to determine what an experimental result means for the truth or falsehood of a theory (1906 [1954: 216ff.]).

In other words, there is a gap between the evidence and the theory supported by it. It is important to note that the alleged gap is more profound than the gap between the premisses of any inductive argument and its conclusion, say, the gap between “All hitherto observed ravens have been black” and “All ravens are black”. The latter gap could be bridged by an agreed upon rule of inductive reasoning. Alas, all attempts to find an analogous rule for theory choice have failed (e.g., Norton 2003). Various philosophers, historians, and sociologists of science have responded that theory appraisal is “a complex form of value judgment” (McMullin 1982: 701; see also Kuhn 1977; Hesse 1980; Bloor 1982).

In section 3.1 below we will discuss the nature of the value judgments in more detail. For now the important lesson is that if these philosophers, historians, and sociologists are correct, the “faithfulness to facts” ideal is untenable. As the scientific image of the world is a joint product of the facts and scientists’ value judgments, that image cannot be said to be aperspectival. Science does not eschew the human perspective. There are of course ways to escape this conclusion. If, as John Norton (2003; ms.—see Other Internet Resources) has argued, it is material facts that power and justify inductive inferences, and not value judgments, we can avoid the negative conclusion regarding the view from nowhere. Unsurprisingly, Norton is also critical of the idea that evidence generally underdetermines theory (Norton 2008). However, there are good reasons to mistrust Norton’s optimism regarding the eliminability of values and other non-factual elements in inductive inferences (Reiss 2020).

There is another, closely related concern. Most of the earlier critics of “objective” verification or falsification focused on the relation between evidence and scientific theories. There is a sense in which the claim that this relation is problematic is not so surprising. Scientific theories contain highly abstract claims that describe states of affairs far removed from the immediacy of sense experience. This is for a good reason: sense experience is necessarily perspectival, so to the extent to which scientific theories are to track the absolute conception, they must describe a world different from that of sense experience. But surely, one might think, the evidence itself is objective. So even if we do have reasons to doubt that abstract theories faithfully represent the world, we should stand on firmer grounds when it comes to the evidence against which we test abstract theories.

Theories are seldom tested against brute observations, however. Simple generalizations such as “all swans are white” are directly learned from observations (say, of the color of swans) but they do not represent the view from nowhere (for one thing, the view from nowhere doesn’t have colors). Genuine scientific theories are tested against experimental facts or phenomena, which are themselves unobservable to the unaided senses. Experimental facts or phenomena are instead established using intricate procedures of measurement and experimentation.

We therefore need to ask whether the results of scientific measurements and experiments can be aperspectival. In an important debate in the 1980s and 1990s, some commentators answered that question with a resounding “no”, which was then rebutted by others. The debate concerns the so-called “experimenter’s regress” (Collins 1985). Collins, a prominent sociologist of science, claims that in order to know whether an experimental result is correct, one first needs to know whether the apparatus producing the result is reliable. But one doesn’t know whether the apparatus is reliable unless one knows that it produces correct results in the first place, and so on, ad infinitum. Collins’ main case concerns attempts to detect gravitational waves, which were the subject of considerable controversy among physicists in the 1970s.

Collins argues that the circle is eventually broken not by the “facts” themselves but rather by factors having to do with the scientist’s career, the social and cognitive interests of his community, and the expected fruitfulness for future work. It is important to note that in Collins’s view these factors do not necessarily make scientific results arbitrary. But what he does argue is that the experimental results do not represent the world according to the absolute conception. Rather, they are produced jointly by the world, scientific apparatuses, and the psychological and sociological factors mentioned above. The facts and phenomena of science are therefore necessarily perspectival.

In a series of contributions, Allan Franklin, a physicist-turned-philosopher of science, has tried to show that while there are indeed no algorithmic procedures for establishing experimental facts, disagreements can nevertheless be settled by reasoned judgment on the basis of bona fide epistemological criteria such as experimental checks and calibration, elimination of possible sources of error, using apparatuses based on well-corroborated theory and so on (Franklin 1994, 1997). Collins responds that “reasonableness” is a social category that is not drawn from physics (Collins 1994).

The main issue for us in this debate is whether there are any reasons to believe that experimental results provide an aperspectival view on the world. According to Collins, experimental results are co-determined by the facts as well as social and psychological factors. According to Franklin, whatever else influences experimental results other than facts is not arbitrary but instead based on reasoned judgment. What he has not shown is that reasoned judgment guarantees that experimental results reflect the facts alone and are therefore aperspectival in any interesting sense. Another important challenge for the aperspectival account comes from feminist epistemology and other accounts that stress the importance of the construction of scientific knowledge through epistemic communities. These accounts are reviewed in section 5 .

3. Objectivity as Absence of Normative Commitments and the Value-Free Ideal

In the previous section we have presented arguments against the view of objectivity as faithfulness to facts and an impersonal “view from nowhere”. An alternative view is that science is objective to the extent that it is value-free . Why would we identify objectivity with value-freedom or regard the latter as a prerequisite for the former? Part of the answer is empiricism. If science is in the business of producing empirical knowledge, and if differences about value judgments cannot be settled by empirical means, values should have no place in science. In the following we will try to make this intuition more precise.

Before addressing what we will call the “value-free ideal”, it will be helpful to distinguish four stages at which values may affect science. They are: (i) the choice of a scientific research problem; (ii) the gathering of evidence in relation to the problem; (iii) the acceptance of a scientific hypothesis or theory as an adequate answer to the problem on the basis of the evidence; (iv) the proliferation and application of scientific research results (Weber 1917 [1949]).

Most philosophers of science would agree that the role of values in science is contentious only with respect to dimensions (ii) and (iii): the gathering of evidence and the acceptance of scientific theories. It is almost universally accepted that the choice of a research problem is often influenced by interests of individual scientists, funding parties, and society as a whole. This influence may make science more shallow and slow down its long-run progress, but it has benefits, too: scientists will focus on providing solutions to those intellectual problems that are considered urgent by society and they may actually improve people’s lives. Similarly, the proliferation and application of scientific research results is evidently affected by the personal values of journal editors and end users, and little can be done about this. The real debate is about whether or not the “core” of scientific reasoning—the gathering of evidence and the assessment and acceptance of scientific theories—is, and should be, value-free.

We have introduced the problem of the underdetermination of theory by evidence above. The problem does not stop, however, at values being required for filling the gap between theory and evidence. A further complication is that these values can conflict with each other. Consider the classical problem of fitting a mathematical function to a data set. The researcher often has the choice between using a complex function, which makes the relationship between the variables less simple but fits the data more accurately, or postulating a simpler relationship that is less accurate. Simplicity and accuracy are both important cognitive values, and trading them off requires a careful value judgment. However, philosophers of science tend to regard value-ladenness in this sense as benign. Cognitive values (sometimes also called “epistemic” or “constitutive” values) such as predictive accuracy, scope, unification, explanatory power, simplicity and coherence with other accepted theories are taken to be indicative of the truth of a theory and therefore provide reasons for preferring one theory over another (McMullin 1982, 2009; Laudan 1984; Steel 2010). Kuhn (1977) even claims that cognitive values define the shared commitments of science, that is, the standards of theory assessment that characterize the scientific approach as a whole. Note that not every philosopher entertains the same list of cognitive values: subjective differences in ranking and applying cognitive values do not vanish, a point Kuhn made emphatically.
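To make the simplicity/accuracy trade-off concrete, here is a minimal illustrative sketch in Python. Everything in it is an assumption introduced for this example rather than anything drawn from the literature discussed here: the underlying sine relationship, the noise level, and the polynomial degrees are arbitrary choices.

    # Illustrative sketch of the simplicity/accuracy trade-off in curve fitting.
    # The data-generating function, noise level, and degrees are assumptions
    # made up for this example.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 30)
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)  # noisy "observations"

    for degree in (1, 3, 9):
        coeffs = np.polyfit(x, y, degree)                # higher degree = less simple model
        mse = np.mean((y - np.polyval(coeffs, x)) ** 2)  # accuracy on the observed data
        print(f"degree {degree}: {degree + 1} parameters, in-sample error {mse:.4f}")

The in-sample error cannot increase as the degree grows, so accuracy on the observed data alone never tells the researcher when to stop adding parameters; deciding how much simplicity to give up for a closer fit is exactly the kind of value judgment described above. Statisticians often operationalize the trade-off with held-out data or information criteria, but those tools themselves encode a particular weighting of the two values.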

In most views, the objectivity and authority of science are not threatened by cognitive values, but only by non-cognitive or contextual values. Contextual values are moral, personal, social, political and cultural values such as pleasure, justice and equality, conservation of the natural environment and diversity. The most notorious cases of improper uses of such values involve travesties of scientific reasoning, where the intrusion of contextual values led to an intolerant and oppressive scientific agenda with devastating epistemic and social consequences. In the Third Reich, a large part of contemporary physics, such as the theory of relativity, was condemned because its inventors were Jewish; in the Soviet Union, biologist Nikolai Vavilov was sentenced to death (and died in prison) because his theories of genetic inheritance did not match Marxist-Leninist ideology. Both states tried to foster a science that was motivated by political convictions (“Deutsche Physik” in Nazi Germany, Lysenko’s Lamarckian theory of inheritance and denial of genetics), leading to disastrous epistemic and institutional effects.

Less spectacular, but arguably more frequent, are cases where research is biased toward the interests of the sponsors, such as tobacco companies, food manufacturers and large pharmaceutical firms (e.g., Resnik 2007; Reiss 2010). This preference bias, defined by Wilholt (2009) as the infringement of conventional standards of the research community with the aim of arriving at a particular result, is clearly epistemically harmful. Especially for sensitive high-stakes issues such as the approval of medical drugs or the consequences of anthropogenic global warming, it seems desirable that research scientists assess theories without being influenced by such considerations. This is the core idea of the

Value-Free Ideal (VFI): Scientists should strive to minimize the influence of contextual values on scientific reasoning, e.g., in gathering evidence and assessing/accepting scientific theories.

According to the VFI, scientific objectivity is characterized by absence of contextual values and by exclusive commitment to cognitive values in stages (ii) and (iii) of the scientific process. See Dorato (2004: 53–54), Ruphy (2006: 190) or Biddle (2013: 125) for alternative formulations.

For value-freedom to be a reasonable ideal, it must not be a goal beyond reach; it must be attainable at least to some degree. This claim is expressed by the

Value-Neutrality Thesis (VNT): Scientists can—at least in principle—gather evidence and assess/accept theories without making contextual value judgments.

Unlike the VFI, the VNT is not normative: its subject is whether the judgments that scientists make are, or could possibly be, free of contextual values. Similarly, Hugh Lacey (1999) distinguishes three principal components or aspects of value-free science: impartiality, neutrality and autonomy. Impartiality means that theories are solely accepted or appraised in virtue of their contribution to the cognitive values of science, such as truth, accuracy or explanatory power. This excludes the influence of contextual values, as stated above. Neutrality means that scientific theories make no value statements about the world: they are concerned with what there is, not with what there should be. Finally, scientific autonomy means that the scientific agenda is shaped by the desire to increase scientific knowledge, and that contextual values have no place in scientific method.

These three interpretations of value-free science can be combined with each other, or used individually. All of them, however, are subject to criticisms that we examine below. Denying the VNT, or the attainability of Lacey’s three criteria for value-free science, poses a challenge for scientific objectivity: one can either conclude that the ideal of objectivity should be rejected, or develop a conception of objectivity that differs from the VFI.

Lacey’s characterization of value-free science and the VNT were once mainstream positions in philosophy of science. Their widespread acceptance was closely connected to Reichenbach’s famous distinction between context of discovery and context of justification. Reichenbach first made this distinction with respect to the epistemology of mathematics:

the objective relation from the given entities to the solution, and the subjective way of finding it, are clearly separated for problems of a deductive character […] we must learn to make the same distinction for the problem of the inductive relation from facts to theories. (Reichenbach 1938: 36–37)

The standard interpretation of this statement marks contextual values, which may have contributed to the discovery of a theory, as irrelevant for justifying the acceptance of a theory, and for assessing how evidence bears on theory—the relation that is crucial for the objectivity of science. Contextual values are restricted to a matter of individual psychology that may influence the discovery, development and proliferation of a scientific theory, but not its epistemic status.

This distinction played a crucial role in post-World War II philosophy of science. It presupposes, however, a clear-cut distinction between cognitive values on the one hand and contextual values on the other. While this may be prima facie plausible for disciplines such as physics, there is an abundance of contextual values in the social sciences, for instance, in the conceptualization and measurement of a nation’s wealth, or in different ways to measure the inflation rate (cf. Dupré 2007; Reiss 2008). More generally, three major lines of criticism can be identified.

First, Helen Longino (1996) has argued that traditional cognitive values such as consistency, simplicity, breadth of scope and fruitfulness are not purely cognitive or epistemic after all, and that their use imports political and social values into contexts of scientific judgment. According to her, the use of cognitive values in scientific judgments is not always, not even normally, politically neutral. She proposes to juxtapose these values with feminist values such as novelty, ontological heterogeneity, mutuality of interaction, applicability to human needs and diffusion of power, and argues that the use of the traditional value instead of its alternative (e.g., simplicity instead of ontological heterogeneity) can lead to biases and adverse research results. Longino’s argument here is different from the one discussed in section 3.1 . It casts the very distinction between cognitive and contextual values into doubt.

The second argument against the possibility of value-free science is semantic and attacks the neutrality of scientific theories: fact and value are frequently entangled because of the use of so-called “thick” ethical concepts in science (Putnam 2002)—i.e., ethical concepts that have mixed descriptive and normative content. For example, a description such as “dangerous technology” involves a value judgment about the technology and the risks it implies, but it also has a descriptive content: it is uncertain and hard to predict whether using that technology will really trigger those risks. If the use of such terms, where facts and values are inextricably entangled, is inevitable in scientific reasoning, it is impossible to describe hypotheses and results in a value-free manner, undermining the value-neutrality thesis.

Indeed, John Dupré has argued that thick ethical terms are ineliminable from science, at least certain parts of it (Dupré 2007). Dupré’s point is essentially that scientific hypotheses and results concern us because they are relevant to human interests, and thus they will necessarily be couched in a language that uses thick ethical terms. While it will often be possible to translate ethically thick descriptions into neutral ones, the translation cannot be made without losses, and these losses obtain precisely because human interests are involved (see section 6.2 for a case study from social science). According to Dupré, then, many scientific statements are value-free only because their truth or falsity does not matter to us:

Whether electrons have a positive or a negative charge and whether there is a black hole in the middle of our galaxy are questions of absolutely no immediate importance to us. The only human interests they touch (and these they may indeed touch deeply) are cognitive ones, and so the only values that they implicate are cognitive values. (2007: 31)

A third challenge to the VNT, and perhaps the most influential one, was first raised by Richard Rudner in his article “The Scientist Qua Scientist Makes Value Judgments” (Rudner 1953). Rudner disputes the core of the VNT and the context of discovery/justification distinction: the idea that the acceptance of a scientific theory can in principle be value-free. First, Rudner argues that

no analysis of what constitutes the method of science would be satisfactory unless it comprised some assertion to the effect that the scientist as scientist accepts or rejects hypotheses. (1953: 2)

This assumption stems from industrial quality control and other application-oriented research. In such contexts, it is often necessary to accept or to reject a hypothesis (e.g., the efficacy of a drug) in order to make effective decisions.

Second, he notes that no scientific hypothesis is ever confirmed beyond reasonable doubt—some probability of error always remains. When we accept or reject a hypothesis, there is always a chance that our decision is mistaken. Hence, our decision is also “a function of the importance, in the typically ethical sense, of making a mistake in accepting or rejecting a hypothesis” (1953: 2): we are balancing the seriousness of two possible errors (erroneous acceptance/rejection of the hypothesis) against each other. This corresponds to type I and type II errors in statistical inference.

The decision to accept or reject a hypothesis involves a value judgment (at least implicitly) because scientists have to judge which of the consequences of an erroneous decision they deem more palatable: (1) some individuals die of the side effects of a drug erroneously judged to be safe; or (2) other individuals die of a condition because they did not have access to a treatment that was erroneously judged to be unsafe. Hence, ethical judgments and contextual values necessarily enter the scientist’s core activity of accepting and rejecting hypotheses, and the VNT stands refuted. Closely related arguments can be found in Churchman (1948) and Braithwaite (1953). Hempel (1965: 91–92) gives a modified account of Rudner’s argument by distinguishing between judgments of confirmation, which are free of contextual values, and judgments of acceptance. Since even strongly confirming evidence cannot fully prove a universal scientific law, we have to live with a residual “inductive risk” in inferring that law. Contextual values influence scientific methods by determining the acceptable amount of inductive risk (see also Douglas 2000).
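Rudner’s point can be put in a schematic decision-theoretic form. Suppose, purely for illustration, that erroneously accepting \(H\) carries a cost \(c_A\) and erroneously rejecting it a cost \(c_R\). Minimizing expected cost then recommends:

\[ \text{accept } H \quad \text{if and only if} \quad p(H \mid E) \ge \frac{c_A}{c_A + c_R}. \]

The evidential threshold for acceptance thus depends directly on how the two kinds of error are valued; the costs \(c_A\) and \(c_R\) are not determined by the evidence itself.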

But how general are Rudner’s objections? Apparently, his result holds true of applied science, but not necessarily of fundamental research. For the latter domain, two major lines of rebuttals have been proposed. First, Richard Jeffrey (1956) notes that lawlike hypotheses in theoretical science (e.g., the gravitational law in Newtonian mechanics) are characterized by their general scope and not confined to a particular application. Obviously, a scientist cannot fine-tune her decisions to their possible consequences in a wide variety of different contexts. So she should just refrain from the essentially pragmatic decision to accept or reject hypotheses. By restricting scientific reasoning to gathering and interpreting evidence, possibly supplemented by assessing the probability of a hypothesis, Jeffrey tries to save the VNT in fundamental scientific research, and the objectivity of scientific reasoning.

Second, Isaac Levi (1960) observes that scientists commit themselves to certain standards of inference when they become a member of the profession. This may, for example, lead to the statistical rejection of a hypothesis when the observed significance level is smaller than 5%. These community standards may eliminate any room for contextual ethical judgment on the part of the scientist: they determine when she should accept a hypothesis as established. Value judgments may be implicit in how a scientific community sets standards of inference (compare section 5.1), but not in the daily work of an individual scientist (cf. Wilholt 2013).

Both defenses of the VNT focus on the impact of values in theory choice, either by denying that scientists actually choose theories (Jeffrey), or by referring to community standards and restricting the VNT to the individual scientist (Levi). Douglas (2000: 563–565) points out, however, that the “acceptance” of scientific theories is only one of several places for values to enter scientific reasoning, albeit an especially prominent and explicit one. Many decisions in the process of scientific inquiry may conceal implicit value judgments: the design of an experiment, the methodology for conducting it, the characterization of the data, the choice of a statistical method for processing and analyzing data, the interpretation of the findings, etc. None of these methodological decisions could be made without consideration of the possible consequences that could occur. Douglas gives, as a case study, a series of experiments where carcinogenic effects of dioxin exposure on rats were probed. Contextual values such as safety and risk aversion affected the conducted research at various stages: first, in the classification of pathological samples as benign or cancerous (over which a lot of expert disagreement occurred), and second, in the extrapolation from the high-dose experimental conditions to the more realistic low-dose conditions. In both cases, the choice of a conservative classification or model had to be weighed against the adverse consequences for society that could result from underestimating the risks (see also Biddle 2013).

These diagnoses cast a gloomy light on attempts to divide scientific labor between gathering evidence and determining the degree of confirmation (value-free) on the one hand and accepting scientific theories (value-laden) on the other. The entire process of conceptualizing, gathering and interpreting evidence is so entangled with contextual values that no neat division, as Jeffrey envisions, will work outside the narrow realm of statistical inference—and even there, doubts may be raised (see section 4.2).

Philip Kitcher (2011a: 31–40; see also Kitcher 2011b) gives an alternative argument, based on his idea of “significant truths”. There are simply too many truths that are of no interest whatsoever, such as the total number of offside positions in a low-level football competition. Science, then, doesn’t aim at truth simpliciter but rather at something narrower: truth worth pursuing from the point of view of our cognitive, practical and social goals. Any truth that is worth pursuing in this sense is what he calls a “significant truth”. Clearly, it is value judgments that help us decide whether or not any given truth is significant.

Kitcher goes on to observe that the process of scientific investigation cannot neatly be divided into a stage in which the research question is chosen, one in which the evidence is gathered and one in which a judgment about the question is made on the basis of the evidence. Rather, the sequence is multiply iterated, and at each stage, the researcher has to decide whether previous results warrant pursuit of the current line of research, or whether she should switch to another avenue. Such choices are laden with contextual values.

Values in science also interact, according to Kitcher, in a non-trivial way. Assume we endorse predictive accuracy as an important goal of science. However, there may not be a convincing strategy to reach this goal in some domain of science, for instance because that domain is characterized by strong non-linear dependencies. In this case, predictive accuracy might have to yield to achieving other values, such as consistency with theories in neighboring domains. Conversely, changing social goals lead to re-evaluations of scientific knowledge and research methods.

Science, then, cannot be value-free because no scientist ever works exclusively in the supposedly value-free zone of assessing and accepting hypotheses. Evidence is gathered and hypotheses are assessed and accepted in the light of their potential for application and fruitful research avenues. Both cognitive and contextual value judgments guide these choices and are themselves influenced by their results.

The discussion so far has focused on the VNT, that is, on the practical attainability of the VFI, but little has been said about whether a value-free science is desirable in the first place. This subsection discusses this topic with special attention to informing and advising public policy from a scientific perspective. While the VFI, and many arguments for and against it, can be applied to science as a whole, the interface of science and public policy is the place where the intrusion of values into science is especially salient, and where it is surrounded by the greatest controversy. In the 2009 “Climategate” affair, leaked emails from climate scientists raised suspicions that they were pursuing a particular socio-political agenda that affected their research in an improper way. Later inquiries and reports absolved them from charges of misconduct, but the suspicions alone did much to damage the authority of science in the public arena.

Indeed, many debates at the interface of science and public policy are characterized by disagreements on propositions that combine a factual basis with specific goals and values. Take, for instance, the view that growing transgenic crops carries too much risk in terms of biosecurity, or that global warming should be addressed by phasing out fossil fuels immediately. The critical question in such debates is whether there are theses \(T\) such that one side in the debate endorses \(T\), the other side rejects it, the evidence is shared, and both sides have good reasons for their respective positions.

According to the VFI, scientists should uncover an epistemic, value-free basis for resolving such disagreements and restrict the dissent to the realm of value judgments. Even if the VNT should turn out to be untenable, and a strict separation to be impossible, the VFI may have an important function for guiding scientific research and for minimizing the impact of values on an objective science. In the philosophy of science, one camp of scholars defends the VFI as a necessary antidote to individual and institutional interests, such as Hugh Lacey (1999, 2002), Ernan McMullin (1982) and Sandra Mitchell (2004), while others adopt a critical attitude, such as Helen Longino (1990, 1996), Philip Kitcher (2011a) and Heather Douglas (2009). The criticisms we discuss mainly concern the desirability or the conceptual (un)clarity of the VFI.

First, it has been argued that the VFI is not desirable at all. Feminist philosophers (e.g., Harding 1991; Okruhlik 1994; Lloyd 2005) have argued that science often carries heavily androcentric values, for instance in biological theories about sex, gender and rape. The charge against these values is not so much that they are contextual rather than cognitive, but that they are unjustified. Moreover, if scientists did follow the VFI rigidly, policy-makers would pay even less attention to them, with a detrimental effect on the decisions they take (Cranor 1993). Given these shortcomings, the VFI has to be rethought if it is supposed to play a useful role for guiding scientific research and leading to better policy decisions. Section 4.3 and section 5.2 elaborate on this line of criticism in the context of scientific community practices, and a science in the service of society.

Second, the autonomy of science often fails in practice due to the presence of external stakeholders, such as funding agencies and industry lobbies. To save the epistemic authority of science, Douglas (2009: 7–8) proposes to detach it from its autonomy by reformulating the VFI and distinguishing between direct and indirect roles of values in science. Contextual values may legitimately affect the assessment of evidence by indicating the appropriate standard of evidence, the representation of complex processes, the severity of consequences of a decision, the interpretation of noisy datasets, and so on (see also Winsberg 2012). This concerns, above all, policy-related disciplines such as climate science or economics that routinely perform scientific risk analyses for real-world problems (cf. also Shrader-Frechette 1991). Values should, however, not act as “reasons in themselves”, that is, as evidence or defeaters for evidence (the direct role, which is illegitimate); their legitimate, indirect role is restricted to “helping to decide what should count as a sufficient reason for a choice”. This prohibition on values replacing or dismissing scientific evidence is called detached objectivity by Douglas, but it is complemented by various other aspects that relate to a reflective balancing of various perspectives and the procedural, social aspects of science (2009: ch. 6).

That said, Douglas’ proposal is not very concrete when it comes to implementation, e.g., regarding the way diverse values should be balanced. Compromising in the middle cannot be the solution (Weber 1917 [1949]). First, no standpoint is, just in virtue of being in the middle, evidentially supported vis-à-vis more extreme positions. Second, these middle positions are also, from a practical point of view, the least functional when it comes to advising policy-makers.

Moreover, the distinction between direct and indirect roles of values in science may not be sufficiently clear-cut to police the legitimate use of values in science, and to draw the necessary borderlines. Assume that a scientist considers, for whatever reason, the consequences of erroneously accepting hypothesis \(H\) undesirable. Therefore he uses a statistical model whose results are likely to favor ¬\(H\) over \(H\). Is this a matter of reasonable conservativeness? Or doesn’t it amount to reasoning to a foregone conclusion, and to treating values as evidence (cf. Elliott 2011: 320–321)?

The most recent literature on values and evidence in science presents us with a broad spectrum of opinions. Steele (2012) and Winsberg (2012) agree that probabilistic assessments of uncertainty involve contextual value judgments. While Steele defends this point by analyzing the role of scientists as policy advisors, Winsberg points to the influence of contextual values in the selection and representation of physical processes in climate modeling. Betz (2013) argues, by contrast, that scientists can largely avoid making contextual value judgments if they carefully express the uncertainty involved with their evidential judgments, e.g., by using a scale ranging from purely qualitative evidence (such as expert judgment) to precise probabilistic assessments. The issue of value judgments at earlier stages of inquiry is not addressed by this proposal; however, disentangling evidential judgments and judgments involving contextual values at the stage of theory assessment may be a good thing in itself.

Thus, should we or should we not be worried about values in scientific reasoning? While the interplay of values and evidential considerations need not be pernicious, it is unclear why it adds to the success or the authority of science. How are we going to ensure that the permissive attitude towards values in setting evidential standards etc. is not abused? In the absence of a general theory about which contextual values are beneficial and which are pernicious, the VFI may still serve as a first-order approximation to a sound, transparent and objective science.

4. Objectivity as Freedom from Personal Biases

This section deals with scientific objectivity as a form of intersubjectivity—as freedom from personal biases. According to this view, science is objective to the extent that personal biases are absent from scientific reasoning, or that they can be eliminated in a social process. Perhaps all science is necessarily perspectival. Perhaps we cannot sensibly draw scientific inferences without a host of background assumptions, which may include assumptions about values. Perhaps all scientists are biased in some way. But objective scientific results do not, or so the argument goes, depend on researchers’ personal preferences or experiences—they are the result of a process where individual biases are gradually filtered out and replaced by agreed upon evidence. That, among other things, is what distinguishes science from the arts and other human activities, and scientific knowledge from a fact-independent social construction (e.g., Haack 2003).

Paradigmatic ways to achieve objectivity in this sense are measurement and quantification. What has been measured and quantified has been verified relative to a standard. The truth, say, that the Eiffel Tower is 324 meters tall is relative to a standard unit and conventions about how to use certain instruments, so it is neither aperspectival nor free from assumptions, but it is independent of the person making the measurement.

We will begin with a discussion of objectivity, so conceived, in measurement, discuss the ideal of “mechanical objectivity” and then investigate to what extent freedom from personal biases can be implemented in statistical evidence and inductive inference—arguably the core of scientific reasoning, especially in quantitatively oriented sciences. Finally, we discuss Feyerabend’s radical criticism of a rational scientific method that can be mechanically applied, and his defense of the epistemic and social benefits of personal “bias” and idiosyncrasy.

Measurement is often thought to epitomize scientific objectivity, most famously captured in Lord Kelvin’s dictum

when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be. (Kelvin 1883: 73)

Measurement can certainly achieve some independence of perspective. Yesterday’s weather in Durham UK may have been “really hot” to the average North Eastern Brit and “very cold” to the average Mexican, but they’ll both accept that it was 21°C. Clearly, however, measurement does not result in a “view from nowhere”, nor are typical measurement results free from presuppositions. Measurement instruments interact with the environment, and so results will always be a product of both the properties of the environment we aim to measure as well as the properties of the instrument. Instruments, thus, provide a perspectival view on the world (cf. Giere 2006).

Moreover, making sense of measurement results requires interpretation. Consider temperature measurement. Thermometers function by relating an unobservable quantity, temperature, to an observable quantity, expansion (or length) of a fluid or gas in a glass tube; that is, thermometers measure temperature by assuming that length is a function of temperature: length = \(f\)(temperature). The function \(f\) is not known a priori, and it cannot be tested either (because it could in principle only be tested using a veridical thermometer, and the veridicality of the thermometer is just what is at stake here). Making a specific assumption, for instance that \(f\) is linear, solves that problem by fiat. But this “solution” does not take us very far because different thermometric substances (e.g., mercury, air or water) yield different results for the points intermediate between the two fixed points 0°C and 100°C, and so they can’t all expand linearly.
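To see how much work the linearity assumption does, consider a schematic reconstruction (not Chang’s own notation): a thermometer calibrated at the freezing and boiling points of water, with column lengths \(L_0\) and \(L_{100}\) at those fixed points, converts a reading \(L\) into a temperature by linear interpolation,

\[ T = 100\,^{\circ}\mathrm{C} \times \frac{L - L_{0}}{L_{100} - L_{0}}. \]

Plugging the readings of different thermometric substances into this same formula assigns different temperatures to one and the same intermediate state, which is exactly the difficulty described above.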

According to Hasok Chang’s account of early thermometry (Chang 2004), the problem was eventually solved by using a “principle of minimalist overdetermination”, the goal of which was to find a reliable thermometer while making as few substantial assumptions (e.g., about the form for \(f\)) as possible. It was argued that if a thermometer was to be reliable, different tokens of the same thermometer type should agree with each other, and the results of air thermometers agreed the most. “Minimal” doesn’t mean zero, however, and indeed this procedure makes an important presupposition (in this case a metaphysical assumption about the one-valuedness of a physical quantity). Moreover, the procedure yielded at best a reliable instrument, not necessarily one that was best at tracking the uniquely real temperature (if there is such a thing).

What Chang argues about early thermometry is true of measurements more generally: they are always made against a backdrop of metaphysical presuppositions, theoretical expectations and other kinds of belief. Whether or not any given procedure is regarded as adequate depends to a large extent on the purposes pursued by the individual scientist or group of scientists making the measurements. Especially in the social sciences, this often means that measurement procedures are laden with normative assumptions, i.e., values.

Julian Reiss (2008, 2013) has argued that economic indicators such as consumer price inflation, gross domestic product and the unemployment rate are value-laden in this sense. Consumer-price indices, for instance, assume that if a consumer prefers a bundle \(x\) over an alternative \(y\), then \(x\) is better for her than \(y\), which is as ethically charged as it is controversial. National income measures assume that nations that exchange a larger share of goods and services on markets are richer than nations where the same goods and services are provided by the government or within households, which too is ethically charged and controversial.
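For concreteness, one common construction (a fixed-basket or Laspeyres-type index, offered here only as an illustration) compares the cost of a base-period bundle of goods \(q_{i,0}\) at current prices \(p_{i,t}\) with its cost at base-period prices \(p_{i,0}\):

\[ P_t = \frac{\sum_i p_{i,t}\, q_{i,0}}{\sum_i p_{i,0}\, q_{i,0}}. \]

Which goods enter the bundle, how quality changes are treated, and whether non-market provision is counted at all are choices the formula itself does not settle; this is where the normative assumptions just mentioned enter.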

While not free of assumptions and values, the goal of many measurement procedures remains to reduce the influence of personal biases and idiosyncrasies. The Nixon administration, famously, indexed social security payments to the consumer-price index in order to eliminate the dependence of social security recipients on the whims of party politics: to make increases automatic instead of a result of political negotiations (Nixon 1969). Lorraine Daston and Peter Galison refer to this as mechanical objectivity. They write:

Finally, we come to the full-fledged establishment of mechanical objectivity as the ideal of scientific representation. What we find is that the image, as standard bearer of objectivity, is tied to a relentless search to replace individual volition and discretion in depiction by the invariable routines of mechanical reproduction. (Daston and Galison 1992: 98)

Mechanical objectivity reduces the importance of human contributions to scientific results to a minimum, and therefore enables science to proceed on a large scale where bonds of trust between individuals can no longer hold (Daston 1992). Trust in mechanical procedures thus replaces trust in individual scientists.

In his book Trust in Numbers, Theodore Porter pursues this line of thought in great detail. In particular, on the basis of case studies involving British actuaries in the mid-nineteenth century, of French state engineers throughout the nineteenth century, and of the US Army Corps of Engineers from 1920 to 1960, he argues for two causal claims. First, measurement instruments and quantitative procedures originate in commercial and administrative needs and affect the ways in which the natural and social sciences are practiced, not the other way around. The mushrooming of instruments such as chemical balances, barometers and chronometers was largely a result of social pressures and the demands of democratic societies. Administering large territories or controlling diverse people and processes is not always possible on the basis of personal trust and thus “objective procedures” (which do not require trust in persons) took the place of “subjective judgments” (which do). Second, he argues that quantification is a technology of distrust and weakness, and not of strength. It is weak administrators who do not have the social status, political support or professional solidarity to defend their experts’ judgments. They therefore subject decisions to public scrutiny, which means that they must be made in a publicly accessible form.

This is the situation in which scientists who work in areas where the science/policy boundary is fluid find themselves:

The National Academy of Sciences has accepted the principle that scientists should declare their conflicts of interest and financial holdings before offering policy advice, or even information to the government. And while police inspections of notebooks remain exceptional, the personal and financial interests of scientists and engineers are often considered material, especially in legal and regulatory contexts. Strategies of impersonality must be understood partly as defenses against such suspicions […]. Objectivity means knowledge that does not depend too much on the particular individuals who author it. (Porter 1995: 229)

Measurement and quantification help to reduce the influence of personal biases and idiosyncrasies and they reduce the need to trust the scientist or government official, but often at a cost. Standardizing scientific procedures becomes difficult when their subject matters are not homogeneous, and few domains outside fundamental physics are. Attempts to quantify procedures for treatment and policy decisions, familiar from evidence-based practices, are currently being transferred to a variety of fields such as medicine, nursing, psychology, education and social policy. However, they often lack responsiveness to the peculiarities of their subjects and the local conditions to which they are applied (see also section 5.3).

Moreover, the measurement and quantification of characteristics of scientific interest is only half of the story. We also want to describe relations between the quantities and make inferences using statistical analysis. Statistics thus helps to quantify further aspects of scientific work. We will now examine whether or not statistical analysis can proceed in a way free from personal biases and idiosyncrasies—for more detail, see the entry on philosophy of statistics .

4.2 Statistical Evidence

The appraisal of scientific evidence is traditionally regarded as a domain of scientific reasoning where the ideal of scientific objectivity has strong normative force, and where it is also well-entrenched in scientific practice. Episodes such as Galilei’s observations of the moons of Jupiter, Lavoisier’s calcination experiments, and Eddington’s observation of the 1919 eclipse are found in all philosophy of science textbooks because they exemplify how evidence can be persuasive and compelling to scientists with different backgrounds. The crucial question is therefore: can we identify an “objective” concept of scientific evidence that is independent of the personal biases of the experimenter and interpreter?

Inferential statistics—the field that investigates the validity of inferences from data to theory—tries to answer this question. It is extremely influential in modern science, pervading experimental research as well as the assessment and acceptance of our most fundamental theories. For instance, a statistical argument helped to establish the recent discovery of the Higgs Boson. We now compare the main theories of statistical evidence with respect to the objectivity of the claims they produce. They mainly differ with respect to the role of an explicitly subjective interpretation of probability.

Bayesian inference quantifies scientific evidence by means of probabilities that are interpreted as a scientist’s subjective degrees of belief. The Bayesian thus leaves behind Carnap’s (1950) idea that probability is determined by a logical relation between sentences. For example, the prior degree of belief in hypothesis \(H\), written \(p(H)\), can in principle take any value in the interval \([0,1]\). Simultaneously held degrees of belief in different hypotheses are, however, constrained by the laws of probability. After learning evidence E, the degree of belief in \(H\) is changed from its prior probability \(p(H)\) to the conditional degree of belief \(p(H \mid E)\), commonly called the posterior probability of \(H\). Both quantities can be related to each other by means of Bayes’ Theorem .
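In its simplest form, and writing \(\neg H\) for the negation of \(H\), the theorem states

\[ p(H \mid E) = \frac{p(E \mid H)\, p(H)}{p(E \mid H)\, p(H) + p(E \mid \neg H)\, p(\neg H)}, \]

so that the posterior probability depends both on the prior \(p(H)\) and on how expected the evidence is under \(H\) and under its negation.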

These days, the Bayesian approach is extremely influential in philosophy and rapidly gaining ground across all scientific disciplines. For quantifying evidence for a hypothesis, Bayesian statisticians almost uniformly use the Bayes factor, that is, the ratio of posterior to prior odds in favor of a hypothesis. The Bayes factor in favor of hypothesis \(H\) against its negation \(\neg H\) in the light of evidence \(E\) can be written as

\[ BF(H, E) = \frac{p(H \mid E)}{p(\neg H \mid E)} \Big/ \frac{p(H)}{p(\neg H)} = \frac{p(E \mid H)}{p(E \mid \neg H)}, \]

or in other words, as the likelihood ratio between \(H\) and \(\neg H\). The Bayes factor reduces to the likelihoodist conception of evidence (Royall 1997) for the case of two competing point hypotheses. For further discussion of Bayesian measures of evidence, see Good (1950), Sprenger and Hartmann (2019: ch. 1) and the entry on confirmation and evidential support.

Unsurprisingly, the idea to measure scientific evidence in terms of subjective probability has met resistance. For example, the statistician Ronald A. Fisher (1935: 6–7) argued that measuring psychological tendencies cannot be relevant for scientific inquiry and cannot sustain claims to objectivity. Indeed, how should scientific objectivity square with subjective degrees of belief? Bayesians have responded to this challenge in various ways:

Howson (2000) and Howson and Urbach (2006) consider the objection misplaced. In the same way that deductive logic does not judge the correctness of the premises but just advises you what to infer from them, Bayesian inductive logic provides rational rules for representing uncertainty and making inductive inferences. Choosing the premises (e.g., the prior distributions) “objectively” falls outside the scope of Bayesian analysis.

Convergence or merging-of-opinion theorems guarantee that under certain circumstances, agents with very different initial attitudes who observe the same evidence will obtain similar posterior degrees of belief in the long run. However, they are asymptotic results without direct implications for inference with real-life datasets (see also Earman 1992: ch. 6). In such cases, the choice of the prior matters, and it may be beset with idiosyncratic bias and manifest social values.

Adopting a more modest stance, Sprenger (2018) accepts that Bayesian inference does not achieve the goal of objectivity in the sense of intersubjective agreement (concordant objectivity), or being free of personal values, bias and subjective judgment. However, he argues that competing schools of inference such as frequentist inference face this problem to the same degree, perhaps even worse. Moreover, some features of Bayesian inference (e.g., the transparency about prior assumptions) fit recent, socially oriented conceptions of objectivity that we discuss in section 5 .

A radical Bayesian solution to the problem of personal bias is to adopt a principle that severely constrains an agent’s rational degrees of belief, such as the Principle of Maximum Entropy (MaxEnt—Jaynes 1968; Williamson 2010). According to MaxEnt, degrees of belief must be probabilistic and in sync with empirical constraints, but conditional on these constraints, they must be equivocal, that is, as middling as possible. This latter constraint amounts to maximizing the entropy of the probability distribution in question. The MaxEnt approach eliminates various sources of subjective bias at the expense of narrowing down the range of rational degrees of belief. An alternative objective Bayesian solution consists in so-called “objective priors”: prior probabilities that do not represent an agent’s factual attitudes, but are determined by principles of symmetry, mathematical convenience or maximizing the influence of the data on the posterior (e.g., Jeffreys 1939 [1980]; Bernardo 2012).
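Schematically, for a degree-of-belief distribution \(p = (p_1, \ldots, p_n)\) over finitely many mutually exclusive hypotheses, MaxEnt recommends the distribution that maximizes the Shannon entropy

\[ H(p) = -\sum_{i=1}^{n} p_i \log p_i \]

subject to the empirical constraints at hand; in the absence of any constraints, this yields the uniform distribution, the most “middling” assignment of degrees of belief.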

Thus, Bayesian inference, which analyzes statistical evidence from the vantage point of rational belief, provides only a partial answer to securing scientific objectivity from personal idiosyncrasy.

The frequentist conception of evidence is based on the idea of the statistical test of a hypothesis. Under the influence of the statisticians Jerzy Neyman and Egon Pearson, tests were often regarded as rational decision procedures that minimize the relative frequency of wrong decisions in a hypothetical series of repetitions of a test (hence the name “frequentism”). Rudner’s argument in section 3.2 pointed out the limits of this conception of hypothesis tests: the choice of thresholds for acceptance and rejection (i.e., the acceptable type I and II error rates) may reflect contextual value judgments and personal bias. Moreover, the losses associated with erroneously accepting or rejecting that hypothesis depend on the context of application, which may be unknown to the experimenter.

Alternatively, scientists can restrict themselves to a purely evidential interpretation of hypothesis tests and leave decisions to policy-makers and regulatory agencies. The statistician and biologist R.A. Fisher (1935, 1956) proposed what later became the orthodox quantification of evidence in frequentist statistics. Suppose a “null” or default hypothesis \(H_0\) denotes that an intervention has zero effect. If the observed data are “extreme” under \(H_0\)—i.e., if it was highly likely to observe a result that agrees better with \(H_0\) than the data actually obtained—the data provide evidence against the null hypothesis and for the efficacy of the intervention. The epistemological rationale is connected to the idea of severe testing (Mayo 1996): if the intervention were ineffective, we would, in all likelihood, have found data that agree better with the null hypothesis. The strength of evidence against \(H_0\) is measured by the \(p\)-value: the lower it is, the more strongly the evidence \(E\) speaks against the null hypothesis \(H_0\).
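In formal terms (a standard textbook rendering rather than Fisher’s own notation), if \(t_{\mathrm{obs}}\) is the observed value of a test statistic \(T\) whose large values indicate disagreement with the null hypothesis, the \(p\)-value is

\[ p = \Pr(T \ge t_{\mathrm{obs}} \mid H_0), \]

that is, the probability, computed under \(H_0\), of obtaining a result at least as extreme as the one actually observed.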

Unlike Bayes factors, this concept of statistical evidence does not depend on personal degrees of belief. However, this does not necessarily mean that \(p\)-values are more objective. First, \(p\)-values are usually classified as “non-significant” (\(p > .05\)), “significant” (\(p < .05\)), “highly significant”, and so on. Not only are these thresholds and labels largely arbitrary, they also promote publication bias: non-significant findings are often classified as “failed studies” (i.e., the efficacy of the intervention could not be shown), rarely published and end up in the proverbial “file drawer”. Much valuable research is suppressed. Conversely, significant findings may often occur when the null hypothesis is actually true, especially when researchers have been “hunting for significance”. In fact, researchers have an incentive to keep their \(p\)-values low: the stronger the evidence, the more convincing the narrative, the greater the impact—and the higher the chance for a good publication and career-relevant rewards. Moving the goalpost by “p-hacking” outcomes—for example by eliminating outliers, selective reporting or restricting the analysis to a subgroup—evidently biases the research results and compromises the objectivity of experimental research.

In particular, such questionable research practices (QRP) increase the type I error rate, which measures the rate at which false hypotheses are accepted, substantially over its nominal 5% level and contribute to publication bias (Bakker et al. 2012). Ioannidis (2005) concludes that “most published research findings are false”—they are the combined result of a low base rate of effective causal interventions, the file drawer effect and the widespread presence of questionable research practices. The frequentist logic of hypothesis testing aggravates the problem because it provides a framework where all these biases can easily enter (Ziliak and McCloskey 2008; Sprenger 2016). These radical conclusions are also confirmed by empirical findings: in many disciplines researchers fail to replicate findings by other scientific teams. See section 5.1 for more detail.
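A simple calculation illustrates how such practices inflate the error rate. If a researcher performs \(k\) independent tests of true null hypotheses at the 5% level (independence being an idealization), the probability of obtaining at least one “significant” result by chance alone is

\[ 1 - (1 - 0.05)^{k}, \]

which is already about 0.40 for \(k = 10\) and about 0.64 for \(k = 20\), far above the nominal 5% level.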

Summing up our findings, neither of the two major frameworks of statistical inference manages to eliminate all sources of personal bias and idiosyncrasy. The Bayesian considers subjective assumptions to be an irreducible part of scientific reasoning and sees no harm in making them explicit. The frequentist conception of evidence based on \(p\)-values avoids these explicitly subjective elements, but at the price of a misleading impression of objectivity and frequent abuse in practice. A defense of frequentist inference should, in our opinion, stress that the relatively rigid rules for interpreting statistical evidence facilitate communication and assessment of research results in the scientific community—something that is harder to achieve for a Bayesian. We now turn from specific methods for stating and interpreting evidence to a radical criticism of the idea that there is a rational scientific method.

In his writings of the 1970s, Paul Feyerabend launched a profound attack on the rationality and objectivity of scientific method. His position is exceptional in the philosophical literature since traditionally, the threat to objective and successful science is located in contextual rather than epistemic values. Feyerabend turns this view upside down: it is the “tyranny” of rational method, and the emphasis on epistemic rather than contextual values, that prevents us from having a science in the service of society. Moreover, he welcomes a diversity of personal, even idiosyncratic, perspectives, thus denying the idea that freedom from personal “bias” is epistemically and socially beneficial.

The starting point of Feyerabend’s criticism of rational method is the thesis that strict epistemic rules such as those expressed by the VFI only suppress an open exchange of ideas, extinguish scientific creativity and prevent a free and truly democratic science. In his classic “Against Method” (1975: chs. 8–13), Feyerabend elaborates on this criticism by examining a famous episode in the history of science. When the Catholic Church objected to Galilean mechanics, it had the better arguments by the standards of seventeenth-century science. Its conservative position was scientifically backed: Galilei’s telescopes were unreliable for celestial observations, and many well-established phenomena (no fixed star parallax, invariance of laws of motion) could not yet be explained in the heliocentric system. With hindsight, Galilei managed to achieve groundbreaking scientific progress just because he deliberately violated rules of scientific reasoning. Hence Feyerabend’s dictum “Anything goes”: no methodology whatsoever is able to capture the creative and often irrational ways by which science deepens our understanding of the world. Good scientific reasoning cannot be captured by rational method, contrary to what Carnap, Hempel and Popper postulated.

The drawbacks of an objective, value-free and method-bound view on science and scientific method are not only epistemic. Such a view narrows down our perspective and makes us less free, open-minded, creative, and ultimately, less human in our thinking (Feyerabend 1975: 154). It is therefore neither possible nor desirable to have an objective, value-free science (cf. Feyerabend 1978: 78–79). As a consequence, Feyerabend sees traditional forms of inquiry about our world (e.g., Chinese medicine) on a par with their Western competitors. He denounces appeals to “objective” standards as rhetorical tools for bolstering the epistemic authority of a small intellectual elite (=Western scientists), and as barely disguised statements of preference for one’s own worldview:

there is hardly any difference between the members of a “primitive” tribe who defend their laws because they are the laws of the gods […] and a rationalist who appeals to “objective” standards, except that the former know what they are doing while the latter does not. (1978: 82)

In particular, when discussing other traditions, we often project our own worldview and value judgments into them instead of making an impartial comparison (1978: 80–83). There is no purely rational justification for dismissing other perspectives in favor of the Western scientific worldview—the insistence on our Western approach may be as justified as insisting on absolute space and time after the Theory of Relativity.

The Galilei example also illustrates that personal perspective and idiosyncratic “bias” need not be bad for science. Feyerabend argues further that scientific research is accountable to society and should be kept in check by democratic institutions, and laymen in particular. Their particular perspectives can help to determine the funding agenda and to set ethical standards for scientific inquiry, but can also be useful for traditionally value-free tasks such as choosing an appropriate research method and assessing scientific evidence. Feyerabend’s writings on this issue were much influenced by witnessing the Civil Rights Movement in the U.S. and the increasing emancipation of minorities, such as Blacks, Asians and Hispanics.

All this is not meant to say that truth loses its function as a normative concept, nor that all scientific claims are equally acceptable. Rather, Feyerabend advocates an epistemic pluralism that accepts diverse approaches to acquiring knowledge. Rather than defending a narrow and misleading ideal of objectivity, science should respect the diversity of values and traditions that drive our inquiries about the world (1978: 106–107). This would put science back into the role it had during the scientific revolution or the Enlightenment: as a liberating force that fought intellectual and political oppression by the sovereign, the nobility or the clergy. Objections to this view are discussed at the end of section 5.2 .

5. Objectivity as a Feature of Scientific Communities and Their Practices

This section addresses various accounts that regard scientific objectivity essentially as a function of social practices in science and the social organization of the scientific community. All these accounts reject the characterization of scientific objectivity as a function of correspondence between theories and the world, as a feature of individual reasoning practices, or as pertaining to individual studies and experiments (see also Douglas 2011). Instead, they evaluate the objectivity of a collective of studies, as well as the methods and community practices that structure and guide scientific research. More precisely, they adopt a meta-analytic perspective for assessing the reliability of scientific results (section 5.1), and they construct objectivity from a feminist perspective: as an open interchange of mutual criticism, or as being anchored in the “situatedness” of our scientific practices and the knowledge we gain ( section 5.2 ).

The collectivist perspective is especially useful when an entire discipline enters a stage of crisis: its members become convinced that a significant proportion of findings are not trustworthy. A contemporary example of such a situation is the replication crisis, which was briefly mentioned in the previous section and concerns the reproducibility of scientific knowledge claims in a variety of different fields (most prominently: psychology, biology, medicine). Large-scale replication projects have noticed that many findings that were considered an integral part of scientific knowledge failed to replicate in settings that were designed to mimic the original experiment as closely as possible (e.g., Open Science Collaboration 2015). Successful attempts at replicating an experimental result have long been argued to provide evidence of freedom from particular kinds of artefacts and thus the trustworthiness of the result. Compare the entry on experiment in physics. Likewise, failure to replicate indicates that either the original finding, the result of the replication attempt, or both, are biased—though see John Norton’s (ms., ch. 3—see Other Internet Resources) arguments that the evidential value of (failed) replications crucially depends on researchers’ material background assumptions.

When replication failures in a discipline are particularly significant, one may conclude that the published literature lacks objectivity—at a minimum the discipline fails to inspire trust that its findings are more than artefacts of the researchers’ efforts. Conversely, when observed effects can be replicated in follow-up experiments, a kind of objectivity is reached that goes beyond the ideas of freedom from personal bias, mechanical objectivity, and subject-independent measurement, discussed in section 4.1 .

Freese and Peterson (2018) call this idea statistical objectivity. It is grounded in the view that even the most scrupulous and diligent researchers cannot achieve full objectivity all by themselves. The term “objectivity” instead applies to a collection or population of studies, with meta-analysis (a formal method for aggregating the results of a range of studies) as the “apex of objectivity” (Freese and Peterson 2018: 304; see also Stegenga 2011, 2018). In particular, aggregating studies from different researchers may provide evidence of systematic bias and questionable research practices (QRP) in the published literature. This diagnostic function of meta-analysis for detecting violations of objectivity is enhanced by statistical techniques such as the funnel plot and the \(p\)-curve (Simonsohn et al. 2014).
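As an illustration of the formal side of meta-analysis (a textbook sketch, not Freese and Peterson’s own presentation), the simplest fixed-effect model pools the effect estimates \(\hat{\theta}_i\) of the individual studies by weighting them with the inverse of their variances \(\sigma_i^2\):

\[ \hat{\theta} = \frac{\sum_i w_i\, \hat{\theta}_i}{\sum_i w_i}, \qquad w_i = \frac{1}{\sigma_i^{2}}. \]

Systematic deviations of small, imprecise studies from the pooled estimate, made visible in a funnel plot, are one diagnostic of publication bias in the body of studies.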

Apart from this epistemic dimension, research on statistical objectivity also has an activist dimension: methodologists urge researchers to make publicly available essential parts of their research before the data analysis starts, and to make their methods and data sources more transparent. For example, it is conjectured that the replicability (and thus objectivity) of science will increase by making all data available online, by preregistering experiments, and by using the registered reports model for journal articles (i.e., the journal decides on publication before data collection on the basis of the significance of the proposed research as well as the experimental design). The idea is that transparency about the data set and the experimental design will make it easier to stage a replication of an experiment and to assess its methodological quality. Moreover, publicly committing to a data analysis plan beforehand will lower the rate of QRPs and of attempts to accommodate hypotheses to the data rather than making proper predictions.

All in all, statistical objectivity moves the discussion of objectivity to the level of populations of studies. There, it takes up and modifies several conceptions of objectivity that we have seen before: most prominently, freedom from subjective bias, which is replaced by freedom from collective bias and pernicious conventions, and the subject-independent measurement of a physical quantity, which is replaced by the reproducibility of effects.

Traditional notions of objectivity as faithfulness to facts or freedom from contextual values have also been challenged from a feminist perspective. These critiques can be grouped into three major research programs: feminist epistemology, feminist standpoint theory and feminist postmodernism (Crasnow 2013). The program of feminist epistemology explores the impact of sex and gender on the production of scientific knowledge. More precisely, feminist epistemology highlights the epistemic risks resulting from the systematic exclusion of women from the ranks of scientists, and the neglect of women as objects of study. Prominent case studies are the neglect of female orgasm in biology, testing medical drugs on male participants only, focusing on male specimens when studying the social behavior of primates, and explaining human mating patterns by means of imaginary neolithic societies (e.g., Hrdy 1977; Lloyd 1993, 2005). See also the entry on feminist philosophy of biology.

Often but not always, feminist epistemologists go beyond pointing out what they regard as androcentric bias and reject the value-free ideal altogether—with an eye on the social and moral responsibility of scientific inquiry. They try to show that a value-laden science can also meet important criteria for being epistemically reliable and objective (e.g., Anderson 2004; Kourany 2010). A classical representative of such efforts is Longino’s (1990) contextual empiricism. She reinforces Popper’s insistence that “the objectivity of scientific statements lies in the fact that they can be inter-subjectively tested” (1934 [2002]: 22), but unlike Popper, she conceives scientific knowledge essentially as a social product. Thus, our conception of scientific objectivity must directly engage with the social process that generates knowledge. Longino assigns a crucial function to social systems of criticism in securing the epistemic success of science. Specifically, she develops an epistemology which regards a method of inquiry as “objective to the degree that it permits transformative criticism” (Longino 1990: 76). For an epistemic community to achieve transformative criticism, there must be:

avenues for criticism: criticism is an essential part of scientific institutions (e.g., peer review);

shared standards: the community must share a set of cognitive values for assessing theories (more on this in section 3.1);

uptake of criticism: criticism must be able to transform scientific practice in the long run;

equality of intellectual authority: intellectual authority must be shared equally among qualified practitioners.

Longino’s contextual empiricism can be understood as a development of John Stuart Mill’s view that beliefs should never be suppressed, independently of whether they are true or false. Even the most implausible beliefs might be true, and even if they are false, they might contain a grain of truth which is worth preserving or helps to better articulate true beliefs (Mill 1859 [2003: 72]). The underlying intuition is supported by recent empirical research on the epistemic benefits of a diversity of opinions and perspectives (Page 2007). By stressing the social nature of scientific knowledge, and the importance of criticism (e.g., with respect to potential androcentric bias and inclusive practice), Longino’s account fits into the broader project of feminist epistemology.

Standpoint theory undertakes a more radical attack on traditional scientific objectivity. This view develops Marxist ideas to the effect that epistemic position is related to, and a product of, social position. Feminist standpoint theory builds on these ideas but focuses on gender, racial and other social relations. Feminist standpoint theorists and proponents of “situated knowledge” such as Donna Haraway (1988), Sandra Harding (1991, 2015a, 2015b) and Alison Wylie (2003) deny the internal coherence of a view from nowhere: all human knowledge is at base human knowledge and therefore necessarily perspectival. But they argue more than that. Not only is perspectivality the human condition, it is also a good thing to have. This is because perspectives, especially the perspectives of underprivileged classes and groups in society, come along with epistemic benefits. These ideas are controversial but they draw attention to the possibility that attempts to rid science of perspectives might not only be futile but also costly: they prevent scientists from having the epistemic benefits certain standpoints afford and from developing knowledge for marginalized groups in society. The perspectival stance can also explain why criteria for objectivity often vary with context: the relative importance of epistemic virtues is a matter of goals and interests—in other words, standpoint.

By endorsing a perspectival stance, feminist standpoint theory rejects classical elements of scientific objectivity such as neutrality and impartiality (see section 3.1 above). This is a notable difference from feminist epistemology, which is in principle (though not always in practice) compatible with traditional views of objectivity. Feminist standpoint theory is also a political project. For example, Harding (1991, 1993) demands that scientists, their communities and their practices—in other words, the ways through which knowledge is gained—be investigated as rigorously as the object of knowledge itself. This idea, which she refers to as “strong objectivity”, replaces the “weak” conception of objectivity in the empiricist tradition: value-freedom, impartiality, rigorous adherence to methods of testing and inference. Like Feyerabend, Harding integrates a transformation of epistemic standards in science into a broader political project of rendering science more democratic and inclusive. On the other hand, she is exposed to similar objections (see also Haack 2003). Isn’t it grossly exaggerated to identify class, race and gender as important factors in the construction of physical theories? Doesn’t the feminist approach—like social constructivist approaches—lose sight of the particular epistemic qualities of science? Should non-scientists really have as much authority as trained scientists? To whom does the condition of equally shared intellectual authority apply? Nor is it clear—especially in times of fake news and filter bubbles—whether it is always a good idea to subject scientific results to democratic approval. There is no guarantee (arguably there are few good reasons to believe) that democratized or standpoint-based science leads to more reliable theories, or better decisions for society as a whole.

6. Issues in the Special Sciences

So far everything we discussed was meant to apply across all or at least most of the sciences. In this section we will look at a number of specific issues that arise in the social sciences, in economics, and in evidence-based medicine.

There is a long tradition in the philosophy of social science maintaining that there is a gulf in terms of both goals as well as methods between the natural and the social sciences. This tradition, associated with thinkers such as the neo-Kantians Heinrich Rickert and Wilhelm Windelband, the hermeneuticist Wilhelm Dilthey, the sociologist-economist Max Weber, and the twentieth-century hermeneuticists Hans-Georg Gadamer and Michael Oakeshott, holds that unlike the natural sciences whose aim it is to establish natural laws and which proceed by experimentation and causal analysis, the social sciences seek understanding (“ Verstehen ”) of social phenomena, the interpretive examination of the meanings individuals attribute to their actions (Weber 1904 [1949]; Weber 1917 [1949]; Dilthey 1910 [1986]; Windelband 1915; Rickert 1929; Oakeshott 1933; Gadamer 1960 [1989]). See also the entries on hermeneutics and Max Weber .

Understood this way, social science lacks objectivity in more than one sense. One of the more important debates concerning objectivity in the social sciences concerns the role value judgments play and, importantly, whether value-laden research entails claims about the desirability of actions. Max Weber held that the social sciences are necessarily value laden. However, they can achieve some degree of objectivity by keeping out the social researcher’s views about whether agents’ goals are commendable. In a similar vein, contemporary economics can be said to be value laden because it predicts and explains social phenomena on the basis of agents’ preferences. Nevertheless, economists are adamant that they are not in the business of telling people what they ought to value. Modern economics is thus said to be objective in the Weberian sense of “absence of researchers’ values”—a conception that we discussed in detail in section 3.

In his widely cited essay “‘Objectivity’ in Social Science and Social Policy” (Weber 1904 [1949]), Weber argued that the idea of an aperspectival social science was meaningless:

There is no absolutely objective scientific analysis of […] “social phenomena” independent of special and “one-sided” viewpoints according to which expressly or tacitly, consciously or unconsciously they are selected, analyzed and organized for expository purposes. (1904 [1949: 72])

All knowledge of cultural reality, as may be seen, is always knowledge from particular points of view. (1904 [1949: 81])

The reason for this is twofold. First, social reality is too complex to admit of full description and explanation. So we have to select. But, perhaps in contrast to the natural sciences, we cannot just select those aspects of the phenomena that fall under universal natural laws and treat everything else as “unintegrated residues” (1904 [1949: 73]). This is because, second, in the social sciences we want to understand social phenomena in their individuality, that is, in their unique configurations that have significance for us.

Values solve a selection problem. They tell us what research questions we ought to address because they inform us about the cultural importance of social phenomena:

Only a small portion of existing concrete reality is colored by our value-conditioned interest and it alone is significant to us. It is significant because it reveals relationships which are important to us due to their connection with our values. (1904 [1949: 76])

It is important to note that Weber did not think that social and natural science were different in kind, as Dilthey and others did. Social science, too, examines the causes of phenomena of interest, and natural science, too, often seeks to explain natural phenomena in their individual constellations. The role of causal laws is different in the two fields, however. Whereas establishing a causal law is often an end in itself in the natural sciences, in the social sciences laws play an attenuated and accompanying role as mere means to explain cultural phenomena in their uniqueness.

Nevertheless, for Weber social science remains objective in at least two ways. First, once research questions of interest have been settled, answers about the causes of culturally significant phenomena do not depend on the idiosyncrasies of an individual researcher:

But it obviously does not follow from this that research in the cultural sciences can only have results which are “subjective” in the sense that they are valid for one person and not for others. […] For scientific truth is precisely what is valid for all who seek the truth. (Weber 1904 [1949: 84], emphasis original)

The claims of social science can therefore be objective in our third sense ( see section 4 ). Moreover, by determining that a given phenomenon is “culturally significant” a researcher reflects on whether or not a practice is “meaningful” or “important”, and not whether or not it is commendable: “Prostitution is a cultural phenomenon just as much as religion or money” (1904 [1949: 81]). An important implication of this view came to the fore in the so-called “ Werturteilsstreit ” (quarrel concerning value judgments) of the early 1900s. In this debate, Weber maintained against the “socialists of the lectern” around Gustav Schmoller the position that social scientists qua scientists should not be directly involved in policy debates because it was not the aim of science to examine the appropriateness of ends. Given a policy goal, a social scientist could make recommendations about effective strategies to reach the goal; but social science was to be value-free in the sense of not taking a stance on the desirability of the goals themselves. This leads us to our conception of objectivity as freedom from value judgments.

Contemporary mainstream economists hold a view concerning objectivity that mirrors Max Weber’s (see above). On the one hand, it is clear that value judgments are at the heart of economic theorizing. “Preferences” are a key concept of rational choice theory, the main theory in contemporary mainstream economics. Preferences are evaluations. If an individual prefers \(A\) to \(B\), she values \(A\) higher than \(B\) (Hausman 2012). Thus, to the extent that economists predict and explain market behavior in terms of rational choice theory, they predict and explain market behavior in a way laden with value judgments.

However, economists are not themselves supposed to take a stance about whether or not whatever individuals value is also “objectively” good in a stronger sense:

[…] that an agent is rational from [rational choice theory]’s point of view does not mean that the course of action she will choose is objectively optimal. Desires do not have to align with any objective measure of “goodness”: I may want to risk swimming in a crocodile-infested lake; I may desire to smoke or drink even though I know it harms me. Optimality is determined by the agent’s desires, not the converse. (Paternotte 2011: 307–8)

In a similar vein, Gul and Pesendorfer write:

However, standard economics has no therapeutic ambition, i.e., it does not try to evaluate or improve the individual’s objectives. Economics cannot distinguish between choices that maximize happiness, choices that reflect a sense of duty, or choices that are the response to some impulse. Moreover, standard economics takes no position on the question of which of those objectives the agent should pursue. (Gul and Pesendorfer 2008: 8)

According to the standard view, all that rational choice theory demands is that people’s preferences are (internally) consistent; it has no business in telling people what they ought to prefer, or in assessing whether their preferences are consistent with external norms or values. Economics is thus value-laden, but laden with the values of the agents whose behavior it seeks to predict and explain and not with the values of those who seek to predict and explain this behavior.
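To make the internal-consistency requirement concrete, here is a minimal, purely illustrative sketch in Python. The function name, the toy options, and the toy preference set are hypothetical inventions for this example; rational choice theorists state the requirement formally (asymmetry and transitivity of strict preference), not as code.

```python
from itertools import permutations

def is_internally_consistent(strict_prefs, options):
    """Check whether a strict preference relation is asymmetric and transitive.

    strict_prefs: a set of (a, b) pairs meaning "a is strictly preferred to b".
    options: the alternatives the agent chooses among.
    On the standard reading sketched above, this kind of internal coherence is
    all that rational choice theory asks for; it says nothing about whether the
    preferred options are good by any external standard.
    """
    # Asymmetry: never both "a preferred to b" and "b preferred to a".
    for (a, b) in strict_prefs:
        if (b, a) in strict_prefs:
            return False
    # Transitivity: if a is preferred to b and b to c, a must be preferred to c.
    for a, b, c in permutations(options, 3):
        if (a, b) in strict_prefs and (b, c) in strict_prefs:
            if (a, c) not in strict_prefs:
                return False
    return True

# A risky or "unhealthy" preference ordering can still be perfectly consistent.
options = {"smoke", "abstain", "exercise"}
prefs = {("smoke", "abstain"), ("abstain", "exercise"), ("smoke", "exercise")}
print(is_internally_consistent(prefs, options))  # True
```

The point of the toy example mirrors Paternotte’s: nothing in the check refers to whether smoking is objectively good, only to whether the agent’s rankings cohere with one another.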

Whether or not social science, and economics in particular, can be objective in this—Weber’s and the contemporary economists’—sense is controversial. On the one hand, there are some reasons to believe that rational choice theory (which is at work not only in economics but also in political science and other social sciences) cannot be applied to empirical phenomena without referring to external norms or values (Sen 1993; Reiss 2013).

On the other hand, it is not clear that economists and other social scientists qua social scientists shouldn’t participate in a debate about social goals. For one thing, trying to do welfare analysis in the standard Weberian way tends to obscure rather than to eliminate normative commitments (Putnam and Walsh 2007). Obscuring value judgments can be detrimental to the social scientist as policy adviser because it will hamper rather than promote trust in social science. For another, economists are in a prime position to contribute to ethical debates, for a variety of reasons, and should therefore take this responsibility seriously (Atkinson 2001).

The same demands that called for “mechanical objectivity” in the natural sciences and for quantification in the social and policy sciences in the nineteenth and mid-twentieth centuries are responsible for a recent movement in biomedical research which, even more recently, has swept into contemporary social science and policy. Early proponents of so-called “evidence-based medicine” made plain their ambition to downplay the “human element” in medicine:

Evidence-based medicine de-emphasizes intuition, unsystematic clinical experience, and pathophysiological rationale as sufficient grounds for clinical decision making and stresses the examination of evidence from clinical research. (Guyatt et al. 1992: 2420)

To call the new movement “evidence-based” is, strictly speaking, a misnomer, as intuition, clinical experience and pathophysiological rationale can certainly constitute evidence. But proponents of evidence-based practices have a much narrower concept of evidence in mind: analyses of the results of randomized controlled trials (RCTs). This movement is now very strong in biomedical research, development economics and a number of areas of social science, notably psychology, education and social policy, especially in the English-speaking world.

The goal is to replace subjective (biased, error-prone, idiosyncratic) judgments by mechanically objective methods. But, as in other areas, attempting to mechanize inquiry can lead to reduced accuracy and utility of the results.

Causal relations in the social and biomedical sciences hold on account of highly complex arrangements of factors and conditions. Whether, for instance, a substance is toxic depends on details of the metabolic system of the population ingesting it, and whether an educational policy is effective depends on the constellation of factors that affect the students’ learning progress. If an RCT has been conducted successfully, the conclusion about the effectiveness of the treatment (or toxicity of a substance) under test is certain for the particular arrangement of factors and conditions of the trial (Cartwright 2007). But unlike the RCT itself, many of whose aspects can be (relatively) mechanically implemented, applying the result to a new setting (recommending a treatment to a patient, for instance) always involves subjective judgments of the kind proponents of evidence-based practices seek to avoid—such as judgments about the similarity of the test population to the target or policy population.

On the other hand, RCTs can be regarded as a “debiasing procedure” because they prevent researchers from allocating treatments to patients according to their personal interests, for example in such a way that the healthiest (or smartest or…) subjects get the researcher’s favorite therapy. While unbalanced allocations can certainly happen by chance, randomization still provides some warrant that the allocation was not done on purpose with a view to promoting somebody’s interests. A priori, the experimental procedure is thus more impartial with respect to the interests at stake. It has thus been argued that RCTs in medicine, while no guarantor of the best outcomes, were adopted by the U.S. Food and Drug Administration (FDA) to different degrees during the 1960s and 1970s in order to regain public trust in its decisions about treatments, which it had lost due to the thalidomide and other scandals (Teira and Reiss 2013; Teira 2010). It is important to note, however, that randomization is at best effective with respect to one kind of bias, viz. selection bias. Other important epistemic concerns are not addressed by the procedure but should not be ignored (Worrall 2002).
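The debiasing point can be illustrated with a minimal, hypothetical sketch of random allocation in Python. The function and variable names are invented for this illustration and do not describe any particular trial protocol; as the paragraph above stresses, a procedure like this addresses selection bias only.

```python
import random

def randomize_allocation(participants, seed=None):
    """Split participants into 'treatment' and 'control' by a random shuffle.

    Because the split ignores every property of the participants, neither the
    researcher's interests nor the subjects' health can steer the healthiest
    (or smartest, or...) subjects into the favored arm. This guards against
    selection bias only; questions of external validity, measurement error,
    and so on are left untouched.
    """
    rng = random.Random(seed)   # a fixed seed makes the allocation auditable
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

# Hypothetical example with ten anonymous subjects.
groups = randomize_allocation([f"subject_{i}" for i in range(10)], seed=42)
print(groups["treatment"])
print(groups["control"])
```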

In sections 2–5, we have encountered various concepts of scientific objectivity and their limitations. This prompts the question of how unified (or disunified) scientific objectivity is as a concept: Is there something substantive shared by all of these analyses? Or is objectivity, as Heather Douglas (2004) puts it, an “irreducibly complex” concept?

Douglas defends pluralism about scientific objectivity and distinguishes three areas of application of the concept: (1) interaction of humans with the world, (2) individual reasoning processes, (3) social processes in science. Within each area, there are various distinct senses which are again irreducible to each other and do not have a common core meaning. This does not mean that the senses are unrelated; they share a complex web of relationships and can also support each other—for example, eliminating values from reasoning may help to achieve procedural objectivity. For Douglas, reducing objectivity to a single core meaning would be a simplification without benefits; instead of a complex web of relations between different senses of objectivity we would obtain an impoverished concept out of touch with scientific practice. Similar arguments and pluralist accounts can be found in Megill (1994), Janack (2002) and Padovani et al. (2015)—see also Axtell (2016).

It has been argued, however, that pluralist approaches give up too quickly on the idea that the different senses of objectivity share one or several important common elements. As we have seen in sections 4.1 and 5.1, scientific objectivity and trust in science are closely connected. Scientific objectivity is desirable because to the extent that science is objective we have reason to trust scientists, their results and recommendations (cf. Fine 1998: 18). Thus, perhaps what is unifying among the different senses of objectivity is that each sense describes a feature of scientific practice that is able to inspire trust in science.

Building on this idea, Inkeri Koskinen has recently argued that it is in fact not trust but reliance that we are after (Koskinen forthcoming). Trust is something that can be betrayed, but only individuals can betray trust, whereas objectivity pertains to institutions, practices, results, etc. We call scientific institutions, practices, results, etc. objective to the extent that we have reasons to rely on them. The analysis does not stop here, however. There is a distinct view about objectivity that is behind Daston and Galison’s historical epistemology of the concept and has been defended by Ian Hacking: that objectivity is not a (positive) virtue but rather the absence of this or that vice (Hacking 2015: 26). Speaking of objectivity in imaging, for instance, Daston and Galison write that the goal is to

let the specimen appear without that distortion characteristic of the observer’s personal tastes, commitments, or ambitions. (Daston and Galison 2007: 121)

Koskinen picks up this idea of objectivity as absence of vice and argues that it is specifically the averting of epistemic risks for which the term is reserved. Epistemic risks comprise “any risk of epistemic error that arises anywhere during knowledge practices” (Biddle and Kukla 2017: 218), such as the risk of having mistaken beliefs, the risk of errors in reasoning and risks related to operationalization, concept formation, and model choice. Koskinen argues that only those epistemic risks that relate to failings of scientists as human beings are relevant to objectivity (Koskinen forthcoming: 13):

For instance, when the results of an experiment are incorrect because of malfunctioning equipment, we do not worry about objectivity—we just say that the results should not be taken into account. [...] So it is only when the epistemic risk is related to our own failings, and is hard to avert, that we start talking about objectivity. Illusions, subjectivity, idiosyncrasies, and collective biases are important epistemic risks arising from our imperfections as epistemic agents.

Koskinen understands her account as a response to Hacking’s (2015) criticism that we should stop talking about objectivity altogether. According to Hacking, “objectivity” is an “elevator” or second-level word, similar to “true” or “real”—“Instead of saying that the cat is on the mat, we move up one story and say that it is true that the cat is on the mat” (2015: 20). He recommends sticking to ground-level questions and worrying about whether specific sources of error have been controlled. (A similar proposal to eliminate the labels “objective” and “subjective” in statistical inference has been advanced by Gelman and Hennig (2017).) In focussing on averting specific epistemic risks, Koskinen’s account does precisely that. Koskinen argues that a unified account of objectivity as averting epistemic risks takes into account Hacking’s negative stance and explains at the same time important features of the concept—for example, why objectivity does not imply certainty and why it varies with context.

The strong point of this account is that none of the threats to any particular analysis puts scientific objectivity at risk. We can (and in fact, we do) rely on scientific practices that represent the world from a perspective and in which non-epistemic values affect outcomes and decisions. What is left open by Koskinen’s account is the normative question of what a scientist who cares about her experiments and inferences being objective should actually do. That is, the philosophical ideas we have reviewed in this section stay mainly on the descriptive level and do not give concrete guidelines for working scientists. Connecting the abstract philosophical analysis to day-to-day work in science remains an open problem.

So is scientific objectivity desirable? Is it attainable? That, as we have seen, depends crucially on how the term is understood. We have looked in detail at four different conceptions of scientific objectivity: faithfulness to facts, value-freedom, freedom from personal biases, and features of community practices. In each case, there are at least some reasons to believe that either science cannot deliver full objectivity in this sense, or that it would not be a good thing to try to do so, or both. Does this mean we should give up the idea of objectivity in science?

We have shown that it is hard to define scientific objectivity in terms of a view from nowhere, value freedom, or freedom from personal bias. It is a lot harder to say anything positive about the matter. Perhaps it is related to a thorough critical attitude concerning claims and findings, as Popper thought. Perhaps it is the fact that many voices are heard, equally respected and subjected to accepted standards, as Longino defends. Perhaps it is something else altogether, or a combination of several factors discussed in this article.

However, one should not (as yet) throw out the baby with the bathwater. Like those who defend a particular explication of scientific objectivity, the critics struggle to explain what makes science objective, trustworthy and special. For instance, our discussion of the value-free ideal (VFI) revealed that alternatives to the VFI are at least as problematic as the VFI itself, and that the VFI may, with all its inadequacies, still be a useful heuristic for fostering scientific integrity and objectivity. Similarly, although entirely “unbiased” scientific procedures may be impossible, there are many mechanisms scientists can adopt for protecting their reasoning against undesirable forms of bias, e.g., choosing an appropriate method of statistical inference, being transparent about different stages of the research process and avoiding certain questionable research practices.

Whatever it is, it should come as no surprise that finding a positive characterization of what makes science objective is hard. If we knew an answer, we would have done no less than solve the problem of induction (because we would know what procedures or forms of organization are responsible for the success of science). Work on this problem is an ongoing project, and so is the quest for understanding scientific objectivity.

  • Anderson, Elizabeth, 2004, “Uses of Value Judgments in Science: A General Argument, with Lessons from a Case Study of Feminist Research on Divorce”, Hypatia , 19(1): 1–24. doi:10.1111/j.1527-2001.2004.tb01266.x
  • Atkinson, Anthony B., 2001, “The Strange Disappearance of Welfare Economics”, Kyklos , 54(2‐3): 193–206. doi:10.1111/1467-6435.00148
  • Axtell, Guy, 2016, Objectivity , Cambridge: Polity Press.
  • Bakker, Marjan, Annette van Dijk, and Jelte M. Wicherts, 2012, “The Rules of the Game Called Psychological Science”, Perspectives on Psychological Science , 7(6): 543–554. doi:10.1177/1745691612459060
  • Bernardo, J.M., 2012, “Integrated Objective Bayesian Estimation and Hypothesis Testing”, in Bayesian Statistics 9: Proceedings of the Ninth Valencia Meeting , J.M. Bernardo et al. (eds.), Oxford: Oxford University Press, 1–68.
  • Betz, Gregor, 2013, “In Defence of the Value Free Ideal”, European Journal for Philosophy of Science , 3(2): 207–220. doi:10.1007/s13194-012-0062-x
  • Biddle, Justin B., 2013, “State of the Field: Transient Underdetermination and Values in Science”, Studies in History and Philosophy of Science Part A , 44(1): 124–133. doi:10.1016/j.shpsa.2012.09.003
  • Biddle, Justin B. and Rebecca Kukla, 2017, “The Geography of Epistemic Risk”, in Exploring Inductive Risk: Case Studies of Values in Science , Kevin C. Elliott and Ted Richards (eds.), New York: Oxford University Press, 215–238.
  • Bloor, David, 1982, “Durkheim and Mauss Revisited: Classification and the Sociology of Knowledge”, Studies in History and Philosophy of Science Part A , 13(4): 267–297. doi:10.1016/0039-3681(82)90012-7
  • Braithwaite, R. B., 1953, Scientific Explanation , Cambridge: Cambridge University Press.
  • Carnap, Rudolf, 1950 [1962], Logical Foundations of Probability , second edition, Chicago: University of Chicago Press.
  • Cartwright, Nancy, 2007, “Are RCTs the Gold Standard?”, BioSocieties , 2(1): 11–20. doi:10.1017/S1745855207005029
  • Chang, Hasok, 2004, Inventing Temperature: Measurement and Scientific Progress , Oxford: Oxford University Press. doi:10.1093/0195171276.001.0001
  • Churchman, C. West, 1948, Theory of Experimental Inference , New York: Macmillan.
  • Collins, H. M., 1985, Changing Order: Replication and Induction in Scientific Practice , Chicago, IL: University of Chicago Press.
  • –––, 1994, “A Strong Confirmation of the Experimenters’ Regress”, Studies in History and Philosophy of Science Part A , 25(3): 493–503. doi:10.1016/0039-3681(94)90063-9
  • Cranor, Carl F., 1993, Regulating Toxic Substances: A Philosophy of Science and the Law , New York: Oxford University Press.
  • Crasnow, Sharon, 2013, “Feminist Philosophy of Science: Values and Objectivity: Feminist Philosophy of Science”, Philosophy Compass , 8(4): 413–423. doi:10.1111/phc3.12023
  • Daston, Lorraine, 1992, “Objectivity and the Escape from Perspective”, Social Studies of Science , 22(4): 597–618. doi:10.1177/030631292022004002
  • Daston, Lorraine and Peter Galison, 1992, “The Image of Objectivity”, Representations , 40(special issue: Seeing Science): 81–128. doi:10.2307/2928741
  • –––, 2007, Objectivity , Cambridge, MA: MIT Press.
  • Dilthey, Wilhelm, 1910 [1981], Der Aufbau der geschichtlichen Welt in den Geisteswissenschaften , Frankfurt am Main: Suhrkamp.
  • Dorato, Mauro, 2004, “Epistemic and Nonepistemic Values in Science”, in Machamer and Wolters 2004: 52–77.
  • Douglas, Heather E., 2000, “Inductive Risk and Values in Science”, Philosophy of Science , 67(4): 559–579. doi:10.1086/392855
  • –––, 2004, “The Irreducible Complexity of Objectivity”, Synthese , 138(3): 453–473. doi:10.1023/B:SYNT.0000016451.18182.91
  • –––, 2009, Science, Policy, and the Value-Free Ideal , Pittsburgh, PA: University of Pittsburgh Press.
  • –––, 2011, “Facts, Values, and Objectivity”, in Jarvie and Zamora Bonilla 2011: 513–529.
  • Duhem, Pierre Maurice Marie, 1906 [1954], La théorie physique. Son objet et sa structure , Paris: Chevalier et Riviere; translated by Philip P. Wiener, The Aim and Structure of Physical Theory , Princeton, NJ: Princeton University Press, 1954.
  • Dupré, John, 2007, “Fact and Value”, in Kincaid, Dupré, and Wylie 2007: 24–71.
  • Earman, John, 1992, Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory , Cambridge/MA: The MIT Press. 
  • Elliott, Kevin C., 2011, “Direct and Indirect Roles for Values in Science”, Philosophy of Science , 78(2): 303–324. doi:10.1086/659222
  • Feyerabend, Paul K., 1962, “Explanation, Reduction and Empiricism”, in H. Feigl and G. Maxwell (ed.), Scientific Explanation, Space, and Time , (Minnesota Studies in the Philosophy of Science, 3), Minneapolis, MN: University of Minnesota Press, pp. 28–97.
  • –––, 1975, Against Method , London: Verso.
  • –––, 1978, Science in a Free Society , London: New Left Books.
  • Fine, Arthur, 1998, “The Viewpoint of No-One in Particular”, Proceedings and Addresses of the American Philosophical Association , 72(2): 7. doi:10.2307/3130879
  • Fisher, Ronald Aylmer, 1935, The Design of Experiments , Edinburgh: Oliver and Boyd.
  • –––, 1956, Statistical Methods and Scientific Inference , New York: Hafner.
  • Franklin, Allan, 1994, “How to Avoid the Experimenters’ Regress”, Studies in History and Philosophy of Science Part A , 25(3): 463–491. doi:10.1016/0039-3681(94)90062-0
  • –––, 1997, “Calibration”, Perspectives on Science , 5(1): 31–80.
  • Freese, Jeremy and David Peterson, 2018, “The Emergence of Statistical Objectivity: Changing Ideas of Epistemic Vice and Virtue in Science”, Sociological Theory , 36(3): 289–313. doi:10.1177/0735275118794987
  • Gadamer, Hans-Georg, 1960 [1989], Wahrheit und Methode , Tübingen : Mohr. Translated as Truth and Method , 2 nd edition, Joel Weinsheimer and Donald G. Marshall (trans), New York, NY: Crossroad, 1989.
  • Gelman, Andrew and Christian Hennig, 2017, “Beyond Subjective and Objective in Statistics”, Journal of the Royal Statistical Society: Series A (Statistics in Society) , 180(4): 967–1033. doi:10.1111/rssa.12276
  • Giere, Ronald N., 2006, Scientific Perspectivism , Chicago, IL: University of Chicago Press.
  • Good, Irving John, 1950, Probability and the Weighing of Evidence , London: Charles Griffin.
  • Gul, Faruk and Wolfgang Pesendorfer, 2008, “The Case for Mindless Economics”, in The Foundations of Positive and Normative Economics: a Handbook , Andrew Caplin and Andrew Schotter (eds), New York, NY: Oxford University Press, pp. 3–39.
  • Guyatt, Gordon, John Cairns, David Churchill, Deborah Cook, Brian Haynes, Jack Hirsh, Jan Irvine, Mark Levine, Mitchell Levine, Jim Nishikawa, et al., 1992, “Evidence-Based Medicine: A New Approach to Teaching the Practice of Medicine”, JAMA: The Journal of the American Medical Association , 268(17): 2420–2425. doi:10.1001/jama.1992.03490170092032
  • Haack, Susan, 2003, Defending Science—Within Reason: Between Scientism and Cynicism , Amherst, NY: Prometheus Books.
  • Hacking, Ian, 1965, Logic of Statistical Inference , Cambridge: Cambridge University Press. doi:10.1017/CBO9781316534960
  • –––, 2015, “Let’s Not Talk About Objectivity”, in Padovani, Richardson, and Tsou 2015: 19–33. doi:10.1007/978-3-319-14349-1_2
  • Hanson, Norwood Russell, 1958, Patterns of Discovery: An Inquiry into the Conceptual Foundations of Science , Cambridge: Cambridge University Press.
  • Haraway, Donna, 1988, “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective”, Feminist Studies , 14(3): 575–599. doi:10.2307/3178066
  • Harding, Sandra, 1991, Whose Science? Whose Knowledge? Thinking from Women’s Lives , Ithaca, NY: Cornell University Press.
  • –––, 1993, “Rethinking Standpoint Epistemology: What is Strong Objectivity?”, in Feminist Epistemologies , Linda Alcoff and Elizabeth Potter (ed.), New York, NY: Routledge, 49–82.
  • –––, 2015a, Objectivity and Diversity: Another Logic of Scientific Research , Chicago: University of Chicago Press.
  • –––, 2015b, “After Mr. Nowhere: What Kind of Proper Self for a Scientist?”, Feminist Philosophy Quarterly , 1(1): 1–22. doi:10.5206/fpq/2015.1.2
  • Hausman, Daniel M., 2012, Preference, Value, Choice, and Welfare , New York: Cambridge University Press. doi:10.1017/CBO9781139058537
  • Hempel, Carl G., 1965, Aspects of Scientific Explanation , New York: The Free Press.
  • Hesse, Mary B., 1980, Revolutions and Reconstructions in the Philosophy of Science , Bloomington, IN: University of Indiana Press.
  • Howson, Colin, 2000, Hume’s Problem: Induction and the Justification of Belief , Oxford: Oxford University Press.
  • Howson, Colin and Peter Urbach, 1993, Scientific Reasoning: The Bayesian Approach , second edition, La Salle, IL: Open Court.
  • Hrdy, Sarah Blaffer, 1977, The Langurs of Abu: Female and Male Strategies of Reproduction , Cambridge, MA: Harvard University Press.
  • Ioannidis, John P. A., 2005, “Why Most Published Research Findings Are False”, PLoS Medicine , 2(8): e124. doi:10.1371/journal.pmed.0020124
  • Janack, Marianne, 2002, “Dilemmas of Objectivity”, Social Epistemology , 16(3): 267–281. doi:10.1080/0269172022000025624
  • Jarvie, Ian C. and Jesús P. Zamora Bonilla (eds.), 2011, The SAGE Handbook of the Philosophy of Social Sciences , London: SAGE.
  • Jaynes, Edwin T., 1968, “Prior Probabilities”, IEEE Transactions on Systems Science and Cybernetics , 4(3): 227–241. doi:10.1109/TSSC.1968.300117
  • Jeffrey, Richard C., 1956, “Valuation and Acceptance of Scientific Hypotheses”, Philosophy of Science , 23(3): 237–246. doi:10.1086/287489
  • Jeffreys, Harold, 1939 [1980], Theory of Probability , third edition, Oxford: Oxford University Press.
  • Kelvin, Lord (William Thomson), 1883, “Electrical Units of Measurement”, Lecture to the Institution of Civil Engineers on 3 May 1883, reprinted in 1889, Popular Lectures and Addresses , Vol. I, London: MacMillan and Co., p. 73.
  • Kincaid, Harold, John Dupré, and Alison Wylie (eds.), 2007, Value-Free Science?: Ideals and Illusions , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780195308969.001.0001
  • Kitcher, Philip, 2011a, Science in a Democratic Society , Amherst, NY: Prometheus Books.
  • –––, 2011b, The Ethical Project , Cambridge, MA: Harvard University Press.
  • Koskinen, Inkeri, forthcoming, “Defending a Risk Account of Scientific Objectivity”, The British Journal for the Philosophy of Science , first online: 3 August 2018. doi:10.1093/bjps/axy053
  • Kourany, Janet A., 2010, Philosophy of Science after Feminism , Oxford: Oxford University Press.
  • Kuhn, Thomas S., 1962 [1970], The Structure of Scientific Revolutions , second edition, Chicago: University of Chicago Press.
  • –––, 1977, “Objectivity, Value Judgment, and Theory Choice”, in his The Essential Tension. Selected Studies in Scientific Tradition and Change , Chicago: University of Chicago Press: 320–39.
  • Lacey, Hugh, 1999, Is Science Value-Free? Values and Scientific Understanding , London: Routledge.
  • –––, 2002, “The Ways in Which the Sciences Are and Are Not Value Free”, in In the Scope of Logic, Methodology and Philosophy of Science: Volume Two of the 11th International Congress of Logic, Methodology and Philosophy of Science, Cracow, August 1999 , Peter Gärdenfors, Jan Woleński, and Katarzyna Kijania-Placek (eds.), Dordrecht: Springer Netherlands, 519–532. doi:10.1007/978-94-017-0475-5_9
  • Laudan, Larry, 1984, Science and Values: An Essay on the Aims of Science and Their Role in Scientific Debate , Berkeley/Los Angeles: University of California Press.
  • Levi, Isaac, 1960, “Must the Scientist Make Value Judgments?”, The Journal of Philosophy , 57(11): 345–357. doi:10.2307/2023504
  • Lloyd, Elisabeth A., 1993, “Pre-Theoretical Assumptions in Evolutionary Explanations of Female Sexuality”, Philosophical Studies , 69(2–3): 139–153. doi:10.1007/BF00990080
  • –––, 2005, The Case of the Female Orgasm: Bias in the Science of Evolution , Cambridge, MA: Harvard University Press.
  • Longino, Helen E., 1990, Science as Social Knowledge: Values and Objectivity in Scientific Inquiry , Princeton, NJ: Princeton University Press.
  • –––, 1996, “Cognitive and Non-Cognitive Values in Science: Rethinking the Dichotomy”, in Feminism, Science, and the Philosophy of Science , Lynn Hankinson Nelson and Jack Nelson (eds.), Dordrecht: Springer Netherlands, 39–58. doi:10.1007/978-94-009-1742-2_3
  • Machamer, Peter and Gereon Wolters (eds.), 2004, Science, Values and Objectivity , Pittsburgh: Pittsburgh University Press.
  • Mayo, Deborah G., 1996, Error and the Growth of Experimental Knowledge , Chicago & London: The University of Chicago Press.
  • McMullin, Ernan, 1982, “Values in Science”, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1982 , 3–28.
  • –––, 2009, “The Virtues of a Good Theory”, in The Routledge Companion to Philosophy of Science , Martin Curd and Stathis Psillos (eds), London: Routledge.
  • Megill, Allan, 1994, “Introduction: Four Senses of Objectivity”, in Rethinking Objectivity , Allan Megill (ed.), Durham, NC: Duke University Press, 1–20.
  • Mill, John Stuart, 1859 [2003], On Liberty , New Haven and London: Yale University Press.
  • Mitchell, Sandra D., 2004, “The Prescribed and Proscribed Values in Science Policy”, in Machamer and Wolters 2004: 245–255.
  • Nagel, Thomas, 1986, The View From Nowhere , New York, NY: Oxford University Press.
  • Nixon, Richard, 1969, “Special Message to the Congress on Social Security”, 25 September 1969. [ Nixon 1969 available online ]
  • Norton, John D., 2003, “A Material Theory of Induction”, Philosophy of Science , 70(4): 647–670. doi:10.1086/378858
  • –––, 2008, “Must Evidence Underdetermine Theory?”, in The Challenge of the Social and the Pressure of Practice , Martin Carrier, Don Howard and Janet Kourany (eds), Pittsburgh, PA: Pittsburgh University Press: 17–44.
  • Oakeshott, Michael, 1933, Experience and Its Modes , Cambridge: Cambridge University Press.
  • Okruhlik, Kathleen, 1994, “Gender and the Biological Sciences”, Canadian Journal of Philosophy Supplementary Volume , 20: 21–42. doi:10.1080/00455091.1994.10717393
  • Open Science Collaboration, 2015, “Estimating the Reproducibility of Psychological Science”, Science , 349(6251): aac4716. doi:10.1126/science.aac4716
  • Padovani, Flavia, Alan Richardson, and Jonathan Y. Tsou (eds.), 2015, Objectivity in Science: New Perspectives from Science and Technology Studies , (Boston Studies in the Philosophy and History of Science 310), Cham: Springer International Publishing. doi:10.1007/978-3-319-14349-1
  • Page, Scott E., 2007, The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies , Princeton, NJ: Princeton University Press.
  • Paternotte, Cédric, 2011, “Rational Choice Theory”, in The SAGE Handbook of The Philosophy of Social Sciences , Jarvie and Zamora Bonilla 2011: 307–321.
  • Popper, Karl. R., 1934 [2002], Logik der Forschung , Vienna: Julius Springer. Translated as Logic of Scientific Discovery , London: Routledge.
  • –––, 1963, Conjectures and Refutations: The Growth of Scientific Knowledge , New York: Harper.
  • –––, 1972, Objective Knowledge: An Evolutionary Approach , Oxford: Oxford University Press.
  • Porter, Theodore M., 1995, Trust in Numbers: The Pursuit of Objectivity in Science and Public Life , Princeton, NJ, Princeton University Press.
  • Putnam, Hilary, 2002, The Collapse of the Fact/Value Dichotomy and Other Essays , Cambridge, MA: Harvard University Press.
  • Putnam, Hilary and Vivian Walsh, 2007, “A Response to Dasgupta”, Economics and Philosophy , 23(3): 359–364. doi:10.1017/S026626710700154X
  • Reichenbach, Hans, 1938, “On Probability and Induction”, Philosophy of Science , 5(1): 21–45. doi:10.1086/286483
  • Reiss, Julian, 2008, Error in Economics: The Methodology of Evidence-Based Economics , London: Routledge.
  • –––, 2010, “In Favour of a Millian Proposal to Reform Biomedical Research”, Synthese , 177(3): 427–447. doi:10.1007/s11229-010-9790-7
  • –––, 2013, Philosophy of Economics: A Contemporary Introduction , New York, NY: Routledge.
  • –––, 2020, “What Are the Drivers of Induction? Towards a Material Theory+”, Studies in History and Philosophy of Science Part A 83: 8–16.
  • Resnik, David B., 2007, The Price of Truth: How Money Affects the Norms of Science , Oxford: Oxford University Press.
  • Rickert, Heinrich, 1929, Die Grenzen der naturwissenschaftlichen Begriffsbildung. Eine logische Einleitung in die historischen Wissenschaften , 6th edition, Tübingen: Mohr Siebeck. First edition published in 1902.
  • Royall, Richard, 1997, Scientific Evidence: A Likelihood Paradigm , London: Chapman & Hall.
  • Rudner, Richard, 1953, “The Scientist qua Scientist Makes Value Judgments”, Philosophy of Science , 20(1): 1–6. doi:10.1086/287231
  • Ruphy, Stéphanie, 2006, “‘Empiricism All the Way down’: A Defense of the Value-Neutrality of Science in Response to Helen Longino’s Contextual Empiricism”, Perspectives on Science , 14(2): 189–214. doi:10.1162/posc.2006.14.2.189
  • Sen, Amartya, 1993, “Internal Consistency of Choice”, Econometrica , 61(3): 495–521.
  • Shrader-Frechette, K. S., 1991, Risk and Rationality , Berkeley/Los Angeles: University of California Press.
  • Simonsohn, Uri, Leif D. Nelson, and Joseph P. Simmons, 2014, “P-Curve: A Key to the File-Drawer.”, Journal of Experimental Psychology: General , 143(2): 534–547. doi:10.1037/a0033242
  • Sprenger, Jan, 2016, “Bayesianism vs. Frequentism in Statistical Inference”, in Oxford Handbook on Philosophy of Probability , Alan Hájek and Christopher Hitchcock (eds), Oxford: Oxford University Press.
  • –––, 2018, “The Objectivity of Subjective Bayesianism”, European Journal for Philosophy of Science , 8(3): 539–558. doi:10.1007/s13194-018-0200-1
  • Sprenger, Jan and Stephan Hartmann, 2019, Bayesian Philosophy of Science , Oxford: Oxford University Press. doi:10.1093/oso/9780199672110.001.0001
  • Steel, Daniel, 2010, “Epistemic Values and the Argument from Inductive Risk”, Philosophy of Science , 77(1): 14–34. doi:10.1086/650206
  • Steele, Katie, 2012, “The Scientist qua Policy Advisor Makes Value Judgments”, Philosophy of Science, 79(5): 893–904. doi:10.1086/667842
  • Stegenga, Jacob, 2011, “Is Meta-Analysis the Platinum Standard of Evidence?”, Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences , 42(4): 497–507. doi:10.1016/j.shpsc.2011.07.003
  • –––, 2018, Medical Nihilism , Oxford: Oxford University Press. doi:10.1093/oso/9780198747048.001.0001
  • Teira, David, 2010, “Frequentist versus Bayesian Clinical Trials”, in Philosophy of Medicine , Fred Gifford (ed.), (Handbook of the Philosophy of Science 16), Amsterdam: Elsevier, 255–297. doi:10.1016/B978-0-444-51787-6.50010-6
  • Teira, David and Julian Reiss, 2013, “Causality, Impartiality and Evidence-Based Policy”, in Mechanism and Causality in Biology and Economics , Hsiang-Ke Chao, Szu-Ting Chen, and Roberta L. Millstein (eds.), (History, Philosophy and Theory of the Life Sciences 3), Dordrecht: Springer Netherlands, 207–224. doi:10.1007/978-94-007-2454-9_11
  • Weber, Max, 1904 [1949], “Die ‘Objektivität’ sozialwissenschaftlicher und sozialpolitischer Erkenntnis”, Archiv für Sozialwissenschaft und Sozialpolitik , 19(1): 22–87. Translated as “‘Objectivity’ in Social Science and Social Policy”, in Weber 1949: 50–112.
  • –––, 1917 [1949], “Der Sinn der ‘Wertfreiheit’ der soziologischen und ökonomischen Wissenschaften”. Reprinted in Gesammelte Aufsätze zur Wissenschaftslehre , Tübingen: UTB, 1988, 451–502. Translated as “The Meaning of ‘Ethical Neutrality’ in Sociology and Economics” in Weber 1949: 1–49.
  • –––, 1949, The Methodology of the Social Sciences , Edward A. Shils and Henry A. Finch (trans/eds), New York, NY: Free Press.
  • Wilholt, Torsten, 2009, “Bias and Values in Scientific Research”, Studies in History and Philosophy of Science Part A , 40(1): 92–101. doi:10.1016/j.shpsa.2008.12.005
  • –––, 2013, “Epistemic Trust in Science”, The British Journal for the Philosophy of Science , 64(2): 233–253. doi:10.1093/bjps/axs007
  • Williams, Bernard, 1985 [2011], Ethics and the Limits of Philosophy , Cambridge, MA: Harvard University Press. Reprinted London and New York, NY: Routledge, 2011.
  • Williamson, Jon, 2010, In Defence of Objective Bayesianism , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199228003.001.0001
  • Windelband, Wilhelm, 1915, Präludien. Aufsätze und Reden zur Philosophie und ihrer Geschichte , fifth edition, Tübingen: Mohr Siebeck.
  • Winsberg, Eric, 2012, “Values and Uncertainties in the Predictions of Global Climate Models”, Kennedy Institute of Ethics Journal , 22(2): 111–137. doi:10.1353/ken.2012.0008
  • Wittgenstein, Ludwig, 1953 [2001], Philosophical Investigations , G. Anscombe (trans.), London: Blackwell.
  • Worrall, John, 2002, “ What Evidence in Evidence‐Based Medicine?”, Philosophy of Science , 69(S3): S316–S330. doi:10.1086/341855
  • Wylie, Alison, 2003, “Why Standpoint Matters”, in Science and Other Cultures: Issues in Philosophies of Science and Technology , Robert Figueroa and Sandra Harding (eds), New York, NY and London: Routledge, pp. 26–48.
  • Ziliak, Stephen Thomas and Deirdre N. McCloskey, 2008, The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice and Lives , Ann Arbor, MI: University of Michigan Press.



The Importance of Scientific Skills and Knowledge: An Expository Essay


Literacy is one of the key aspects of career and personal development for almost every individual, and it helps determine the degree of success individuals achieve. Literacy can be defined as the ability to read, write, understand, or use different types of information. The term has been expanded to refer to a range of knowledge and skills related to science and mathematics, among other subjects. This reflects the changes that have taken place over the last few decades, not only in technology but also in the school curriculum.

Technology has gone through deep changes, and this has resulted in the use of key science concepts across many occupations. Science has been recognized as one of the most important subjects and has become an academic requirement for recruitment into many occupations. Literacy levels have increased significantly compared with two decades ago, and many people are now able to participate in society and to understand key issues affecting it.

Technology and science touch all facets of life, from how people work and communicate to how they shop and pay bills. They have become an aspect of life without which it is difficult to function in society. This paper looks at the importance of scientific skills and knowledge to the individual, society, and the nation at large. It also looks at how the subject has developed from a traditional to a modern perspective and at the changes expected to occur in the future.

Theoretical arguments in the literature on science and innovation suggest that the skills scientists acquire during the research process may be an important input into other types of activities; nevertheless, it is the results of research studies that have received by far the most attention in innovation studies.

The results show that scientific skills are indeed rated as more important than propositional knowledge, both for categories of knowledge classified as specific and for non-specific knowledge categories. This indicates that methodological knowledge carries a higher potential for the creation of economic value in areas other than basic research (Terry 2006).

The world has become complex, and individuals have to acquire some level of proficiency in scientific knowledge, as well as in reading, in order to comprehend and take part in economic and social life. The level of literacy in a society helps determine a country's economic performance. The world has also gone through substantial changes from what it was a generation ago.

Moreover, innovation in technology has altered the way people work, global competition has risen, and the labour market has shifted from being agriculture-based to being service-based. These changes, among many others, have resulted in an increased demand for skills in almost all sectors of the economy (Terry 2006). Scientific skills and knowledge have become a determining factor in how well a country performs in the global economy.

Countries with a large pool of scientific skills are better placed to deal with the challenges that come with globalization. Similarly, countries whose populations have strong literacy skills and knowledge are better placed to tackle the social challenges they are confronted with. It is argued that a populace with strong scientific skills is in a better position to meet the multifaceted challenges of governance in a diverse society. Such a population is also better able to take care of its health problems and other related issues.

Scientific literacy can be defined as the scientific knowledge possessed by individuals and the application of that knowledge to identify questions, acquire new knowledge, explain scientific phenomena, and draw convincing conclusions on issues related to science. It can also be defined as an understanding of the features of science, knowledge of how science and technology shape our material environment, and a willingness to engage with science-related issues.

This can be summarized in a single term: scientific knowledge. It keeps changing with time, depending on attitudes and skills. Scientific skills need to be updated as technology changes so that people can solve problems and make decisions in a rapidly changing world. Students need to become lifelong learners and maintain an awareness of what goes on in the world.

Although the attainment of specific knowledge is vital in school learning, the relevance of that knowledge in adult life depends critically on the individual's attainment of broader concepts and skills. In reading, the capacities to develop interpretations of written material and to reflect on the content and qualities of a text are essential skills.

In the sciences, being able to reason quantitatively and to represent relationships or dependencies is more significant than the ability to answer familiar textbook questions when it comes to deploying scientific skills in everyday life. In science, having specific knowledge, such as the names of particular plants and animals, is of less value than an understanding of broad concepts and topics such as energy consumption, biodiversity, and human health when thinking about the scientific issues under debate in the adult community.

In the present information age, fuelled by technology, students need not only to understand the concepts and processes of science but also to know how to apply the scientific skills acquired in class in order to become effective members of the rapidly changing world of the 21st century. To be scientifically literate, a student has to possess a set of scientific skills that combines knowledge of science concepts and facts with the ability to use language to communicate about them.

It is therefore the responsibility of teachers to ensure that their students are able to internalize scientific habits, for instance the ability to separate opinion from fact. If students are to become influential adults of the 21st century, capable of making informed decisions and taking effective action, then they must be able and willing to absorb scientific habits into their patterns of thought so that such habits remain part of their thinking even after leaving school.

The possession of scientific skills and knowledge has become an important asset for the present generation, and it is likely to become even more important in the near future. Educators therefore have an opportunity to merge the teaching of science and language literacy in order to strengthen students' skills.

Studies have revealed that children's language can be developed through science, and that increased knowledge of language is positively related to the development of scientific ideas. Moreover, researchers have found that students learn science better when they are able to read and write about their thinking; through this process they connect new ideas and relationships with prior knowledge.

This integration of science and language may also provide feedback to the writer and encourage personal involvement. Science and language have therefore become inseparable, both in the learning of scientific facts and in the application of scientific skills and knowledge. These reciprocal skills give teachers and students unique leverage: by merging science and language in the classroom, teachers can help students learn both subjects more effectively.

Research has also made clear that students learn better when they experience something practically, by doing it, rather than only reading about it in class or in a textbook. When students act like scientists, they use language to recognize, organize, and internalize scientific concepts and principles.

Science is a practical subject that is understood most clearly through experiments. Experiments provide literacy opportunities for science students that help them enrich the context and effectively expand their personal structures of science knowledge while improving their language skills. Research has shown that genuine learning takes place only when students engage with information and processes deeply enough to weave that content into their personal views and understandings of how the world works.

The use of experiments and practical applications gives equal weight to knowledge and skills, scientific facts and processes. It emphasizes concepts over rote formulas, and learning science in a personal and social context rather than through abstractions (Ward et al., 2008).

Teachers must rely on students’ language skills if they are to succeed in taking students beyond the formulaic aspects of science. By embedding an inquiry within both the context of students’ lives and strong science content, and then sequencing investigations as part of a larger curricular design, teachers can reach their instructional goals for science and English at the same time.

The introduction of science into the curriculum faced many challenges; nevertheless, it is one of the success stories of the National Curriculum. The biggest surprise for many teachers, especially primary school teachers, was the inclusion of science as a core subject. Since its inception, the amount and quality of science work going on in primary classrooms have improved considerably.

A valuable aspect of the development of science has been the increasing interest in, and understanding of, the nature of children’s learning in science. The framework for analyzing the learning of skills and processes, knowledge and understanding, and attitudes, articulated by early science curriculum developers, remains appropriate in the current climate. However, research carried out in the 1990s provided new insights into the relationship between these dimensions (Meadows, 2004).

Much of this research is aimed at addressing the question of how young children learn to behave scientifically and how their scientific ideas about the world around them develop. Research projects have shown that learners actively construct ideas for themselves and that the existing ideas students bring to the classroom have a significant influence on the development of new ideas.

They place emphasis on the inextricable links between the process skills students use and the concepts they develop. Scientific activity involves exploratory work, which ideally leads to questions that can be investigated systematically. Crucially, these approaches take children’s existing ideas seriously, using them as the basis for deciding upon appropriate teacher interventions aimed at supporting learners in using their skills and the processes of science to test their own or other people’s ideas.

Before the introduction of science as a core subject in the school curriculum, good practice in science was evident in some classrooms and schools. However, with the introduction of science as a core subject, teachers have become competent at identifying opportunities for science experiences for their students within the classroom and beyond. They are now able to plan appropriate science activities within an integrated curriculum as well as focusing at times on specific science topics (Meadows, 2004).

The debate concerning the relationship between process and content, and between skills, knowledge, and understanding, has been ongoing for many years. Government intervention in the curriculum has led to the science education community becoming a much more unified voice seeking to defend the importance of process and skills in the curriculum. It has not been an easy battle, and not one without setbacks; however, the profession has largely prevailed.

In the past century, the content of science curricula was dominated by the desire to provide the foundations for the professional training of a small number of scientists and engineers.

However, with the growing role of science and technology in the 21st century, the objectives of personal fulfilment, employment, and full participation in society require all people, not only those aspiring to scientific careers, to be scientifically and technologically literate.

Scientific literacy has become important for understanding the environment, the economy, and other issues affecting modern society, all of which are directly related to technological and scientific advances.

In the past, science was studied only by people aspiring to be scientists or engineers, but with the changes brought about by technology, it has become a subject studied by people of all ages. Whatever career one aspires to specialize in, a basic knowledge of scientific facts and principles is crucial for becoming an effective member of society.

Moreover, the performance of a country’s best students in scientific subjects may have implications for the part that country plays in the advanced-technology sector of the future and for its global competitiveness. Conversely, deficiencies in scientific literacy can have negative consequences for the labour market, earning prospects, and participation in society.

As a result, educators and policy-makers have attached great importance to the study of scientific subjects in the school curriculum. Most science subjects, such as chemistry, biology, and mathematics, have become compulsory for students in many schools, unlike before, when students were allowed to choose for themselves and it did not matter whether they took any science subject at all.

Addressing the increasing demand for scientific skills requires excellence throughout the curriculum, and it is important to monitor how well countries provide young adults with fundamental skills in this area.

The role of science subjects, and consequently the role of science educators, in the curriculum has undergone evident changes within the last few years. The past two decades have seen an ongoing debate on the future of science subjects in the school curriculum. The argument that the sciences can no longer be retained in the traditional manner within the curriculum holds true.

As the curriculum evolves, conventional courses should give way to a new system of basic science instruction, recognizing that transformation of the classical ideology of basic science teaching is crucial if its subjects are to sustain their position within the new curriculum. Indeed, the concept of transformation brings with it the contemplation of identity within a new school environment (Ward et al., 2008).

With the dissolution of input-based curricula that nurtured the traditional format of basic teaching, the challenge now presented is that of teaching an old subject in a new world. Educators therefore have a growing responsibility to produce well-educated, competent scientists who are able to function professionally and effectively within the new environment.

They should be able to translate scientific discoveries into practical applications and to use electronic information technologies. A basic science course is defined by its objective of providing the fundamental scientific theories and concepts necessary for application in later years. Traditionally, such subjects included anatomy, physiology, biochemistry, and pathology. The current teaching model includes genetics, cell and molecular biology, nutrition, and energy metabolism.

Basic science subjects that have traditionally been pure content are now being used to incorporate the new trend toward holistic education. Holistic education aims to encourage intellectual, social, creative, and emotional development during the learning process. One of the major factors shaping the style of teaching is the shift toward a more active learning environment.

In this environment, students learn to restructure new information and their prior knowledge into new knowledge (Terry, 2006). Changes in approaches to science teaching are paralleled by advances in technology. The development of more powerful computers, video cards, simulation technology, and high-speed internet connections has allowed educators to bring high-quality imaging, interactive training modules, and learning experiments into the classroom.

Science is a core subject in the curriculum. It is one of the subjects that has acquired great importance, not only in the development of individual lives but also in the development of society at large.

Traditionally, science was pursued only by students who aspired to scientific careers such as medicine and engineering. With technological advancement this has changed, and every student is now required to take at least one science subject. Scientific skills and knowledge have been found to be important in career development as well as in the development of a nation.

A country whose students perform well in science subjects is better placed to deal with the challenges that are likely to present themselves in the near future. Basic scientific knowledge and skills have also been found to be important in dealing with issues related to health and matters of public policy. Many organizations therefore require basic scientific skills in their recruitment processes.

Science has gone through substantial changes in the curriculum and is still evolving. Before the introduction of science as a core subject, teachers relied entirely on the textbook, but this has changed. They have now realized that science requires not only an understanding of basic facts and concepts but also the ability to apply them in real-life situations.

Experiments have now become an integral part of the teaching of science, and teachers rely on them to help students understand the subject better. Changes in technology are likely to bring about changes in teaching approaches, and it is therefore up to policy-makers and educators to implement such changes as they occur.

Meadows, J., 2004. Science and ICT in the Primary School: A Creative Approach to Big Ideas. London: Fulton.

Terry, J., 2006. Thinking Skills Science. London: Hopscotch Educational.

Ward, H. et al., 2008. Teaching Science in the Primary Classroom. London: SAGE Publications Ltd.


Scientific Knowledge vs. Knowledge of Science

Public Understanding and Science in Society

  • Open access
  • Published: 09 September 2022
  • Volume 32, pages 1795–1812 (2023)


Anjan Chakravartty


How is knowledge pertaining to science best transferred to the public in order to bolster support for science-based policy and governance, thereby serving the common good? Herein lies a well-recognized challenge: widespread public support arguably requires a widespread understanding of science itself, but this is naturally undermined by the inherent complexities of the sciences, and by disparities in teaching and popular reporting. A common reaction to this has been to champion educational reform to produce broader scientific literacy, but prevailing conceptions of this, I argue, are misconceived. I consider an account of “knowledge transfer”—a practice whereby science is “transferred” between different contexts of use—to illuminate why some transfers are successful and others are not, and thus, why conventional appeals to scientific literacy are bound to be ineffective in producing public understanding that serves societal wellbeing. As an alternative, principal focus, what is required is a form of philosophical literacy regarding science, amounting to a particular understanding of the claim that “Whatever natural science may be for the specialist, for educational purposes it is knowledge of the conditions of human action” (Dewey, 1916 , p. 128).


1 Bringing Science to Bear in Pursuit of the Common Good

Many would agree that in addition to the important intellectual functions science may serve, satisfying desires for knowledge and explanation regarding the natural and social worlds in which we live, one crucial function is to contribute toward improving the welfare of those who inhabit these worlds, thereby serving the good of individuals, groups, and societies. Appreciating that interests can diverge and must be negotiated, let me refer to this simply as “the common good.” In a democratic society, in order to bring our best science to bear successfully (or at least as effectively as we can) in making such contributions, widespread support of this particular role for science seems essential, since widespread support is generally (though not always, of course) a major determinant of public policy and governance by elected representatives. Indeed, public support is clearly a non-negligible factor in shaping policy and governance in some non-democratic societies as well. Many would further agree that the most obvious route to widespread support of this kind is widespread understanding: the greater the extent to which society as a whole understands our best science, the greater the likelihood of consequential public support for science-based policy and governance. Finally, it is a truism that public understandings of science are a function of science education, whatever forms this may take. I will assume all of this agreement as a starting point in what follows. Footnote 1

Assuming all of these things, however, leaves much to be clarified and disputed about what sort or sorts of scientific education would, in fact, serve the goal of enhancing public support for the uptake of science in confronting the many challenges societies face. For one thing, there are many different challenges to scientific understanding, and having a clear picture of them may suggest the appropriateness of different antidotes in different cases. For another, wherever scientific education may be an appropriate antidote, it remains to be agreed what this education should comprise, exactly. My aim in this essay is to argue that common conceptions of the public understanding of science, corresponding to common conceptions of scientific literacy—while laudable in their own right—are not well suited to the task of enhancing public support. I will argue for a different conception: one that emphasizes a particular understanding of what science is and can deliver, as an instrumentally successful, problem-solving endeavor, that is shared by all and otherwise conflicting accounts of the nature of science. I will conclude with some incipient thoughts on what insight this may offer concerning the question of which remedies would best address various sources of science skepticism.

In Section  2 , I will briefly review the main challenges to improving levels of public understanding, and how this is naturally connected to prospects for widespread public support for science-based policy and governance. The most frequently advocated proposal for elevating public understanding, namely, changing educational priorities in such a way as to improve levels of scientific literacy, is considered in Section  3 . Here I will argue that, as they are generally conceived, the two most prominent versions of this proposal in recent decades—focusing on improvements in teaching the content of scientific theories and models, including the skills and concepts required to engage with them (“scientific knowledge”), and focusing instead on the practical realities of how science works (knowledge of scientific practice or, as it is often labeled, “the nature of science”)—are likely to be ineffective. Taking inspiration from recent discussions of the idea of knowledge transfer in the philosophy of science, concerning ways in which knowledge is often extracted from one scientific domain and made to function in another, I will then explore conditions under which transfers are successful, in Section  4 . This discussion clarifies the negative conclusion of the previous section, and sets the stage for a positive proposal, the beginnings of which are sketched in Section  5 : emphasizing an understanding of the sciences according to which, in a perpetually evolving and corrigible way, they incorporate our most effective strategies for grappling with concrete problems.

As a final word of introduction, it is worth flagging that at every stage, I will attempt to convey the entirely constructive aspirations of this discussion, which ultimately amounts to an argument for educating people in such a way as to entrench a kind of philosophical literacy with respect to science. This prescription is hardly incompatible with the inherent value of learning how to understand scientific theories and models, or the value of learning about how science works in practice. On the contrary, it may be viewed as building on these influential conceptions of scientific literacy in what I take to be a crucial way, by adding and strongly emphasizing a very specific, philosophical understanding of what, precisely, science is, and what it is for. This understanding focuses on the capacity of the sciences, as our preeminent set of practices for complex problem solving, to allow us to do things successfully—yes, to help us fathom how subatomic particles interact and how galaxies form, but also to create vaccines that save lives, to produce more nutritious foods that enhance our wellbeing, and to make machines that allow us to see and communicate with loved ones the world over. This is by far the most important ingredient in any conception of literacy relating to the sciences that stands a realistic hope of bolstering support for the use of our best science in acting for the common good, or so I will contend.

2 Challenges to Deploying Science: Public Understanding

For present purposes, let us distinguish between what I will call intrinsic and extrinsic challenges to public understandings of science. The former derive from features of science itself, including both scientific practices and the outputs of scientific investigation, such as theories and models, that inevitably problematize the widespread understanding of science beyond certain specialist communities of scientists, students, and other experts. The latter derive from strictly extra-scientific interventions in both scientific work and the reception of the outputs of this work by non-experts. By undermining public understanding in various ways, both intrinsic and extrinsic challenges function as powerful impediments to widespread support for science-based policy and governance. While I will focus primarily on intrinsic challenges here, the morals of this discussion are nonetheless relevant to confronting extrinsic challenges as well. Thus, let me clarify this distinction briefly, which may be useful in subsequent sections for thinking about how to combat these challenges across the board.

The notion of intrinsic challenges to public understanding is familiar. As in any specialist endeavor, in order to engage with subject matters in depth and with the requisite precision, the sciences employ technical terms and concepts, often elaborated by means of highly sophisticated mathematical, statistical, computational, and other tools of description and analysis, none of which can be reasonably expected to make much if any sense at all to anyone lacking the immersive training that mastering these languages and tools requires for understanding. Indeed, as in many branches of study, degrees of sub-specialization within the sciences have rendered many areas of research effectively inaccessible even to other scientists working within the same broad disciplines. It is hardly surprising, then, that the intrinsic complexities of specific phenomena of interest, tools of investigation and description, and resulting theories and models are, for the most part, opaque to non-experts, from which stems a series of challenges to public understanding—intrinsic challenges. The severity of these challenges is evidenced, for example, in the highly variable nature of science reporting (in newspapers, magazines, online platforms, etc.), where a lack of attention to or misunderstandings of the intrinsic complexities of science are not uncommon. Footnote 2

Perhaps less obvious, but posing a no less formidable difficulty, impediments to public understanding stemming from extrinsic interventions have attracted significant attention in recent years in response to cases in which powerful individuals, social and political organizations, and corporations have attempted to subvert scientific knowledge that would otherwise compromise the pursuit of their own social, political, and economic interests. A growing body of history, philosophy, and sociology of science has documented cases in which science has been corrupted at the source—for example, by the funding of specific research programs by investors interested in generating certain results—or undermined by misrepresentations of scientific work, publicized to serve ideological interests whose pursuit might be otherwise derailed by our best science. Examples abound, from funding effects induced by tobacco and pharmaceutical companies, to campaigns aimed at misrepresenting climate science by the fossil fuel industry, to misinformation about the risks of potential side effects disseminated by anti-vaccine movements. Footnote 3 Similarly, advocates of pseudoscience—claims or systems of belief or practice that masquerade as exemplifying the methodological rigor of science (astrology, homeopathy, parapsychology, etc.)—may disrupt public understandings of genuine science, even if that is not their intention.

Having made the distinction between intrinsic and extrinsic challenges to public understanding, it is easy to see why suggestions for wrestling with the latter so often take the form of calls to wrestle with the former. No doubt, pointing to surprising “coincidences” in which research funded by interested parties produces results that are beneficial to those parties, or cases in which lobbyists for certain ideological positions just happen to come equipped with “their own results” in support of their causes, may be sufficient to raise suspicion (if not thoroughgoing skepticism) about such claims. But in many cases, in order to make criticisms of extrinsic interventions stick, it is helpful to demonstrate their epistemic failings in more detailed ways, and this amounts to demonstrating how misrepresentations of science, and pseudoscience, fall short of the standards of genuine science, which requires overcoming intrinsic challenges to public understanding. Furthermore, raising suspicions about extrinsic intervention is only half the battle. It is not, all by itself, sufficient to raise confidence in the alternative , namely, our best science, as something that is worthy of support and inclusion in discussions of policy and governance instead. Either way, it is clear that much depends on overcoming intrinsic challenges to public understanding, and this is where I turn next.

3 Enhancing Scientific Literacy: Good News and Bad News

Understanding in this context requires degrees of comprehension or mastery that cannot be had without an education (whether formal or informal and to what extents are questions I will leave open here). On this much, educators agree, but what should be the content of this education? I submit that in recent decades, two answers to this question have been especially prominent. In more abstract terms, one may describe both of them as advocating for greater scientific literacy. More concretely, however, the first conceives of literacy in a relatively narrow way, and the second in a substantially broader way. These two approaches to scientific understanding—scientific literacy construed narrowly and broadly—correspond to what I earlier called “scientific knowledge” and “knowledge of science,” respectively, and there can be no question that both of these forms of knowledge are valuable and inherently desirable. Nonetheless, I will argue next that, at least so far as the task of improving the public understanding of science is concerned, neither has been conceived in a way that is likely to lead to success. Let us take each in turn.

Scientific literacy narrowly construed is concerned with scientific knowledge: the descriptive content of our best theories and models and the skills and concepts required to understand this content. Norris and Philips (2009, p. 271) describe this as the “fundamental sense” of scientific literacy—emphasizing the ability to read and write science itself—and contend that if a scientific education does not provide this, it “is not likely to achieve the good for citizens and society that we all desire” (p. 282). Footnote 4 Here we have a direct connection drawn between what is undoubtedly a well-entrenched, dictionary-definition-type rendering of “literacy,” in terms of the competent execution of reading and writing, and positive consequences for public understanding. And as these authors and many others have rightly noted, understanding the descriptive content of science and possessing the sorts of skills and concepts that may facilitate this, such as the ability to understand the relevance of data and analysis, degrees of confidence in reported findings, and various devices in terms of which these things are expressed, such as graphs and diagrams, would certainly amount to a level of understanding that would surely favor the use of at least some of our best science in acting for the good of society. So far as it goes, this is the good news about scientific literacy narrowly construed.

The bad news, however, is that as a means to the end of widespread understanding, achieving sufficiently highly distributed levels of this sort of literacy is utopian. To think that we could achieve widespread scientific understanding this way is to gloss over the complexities of modern science and the standards of education required to achieve this sort of proficiency. I have already mentioned a number of intrinsic challenges, including the use of technical concepts and advanced mathematics, and the fact that these things do not simply comprise a manageable suite of tools that can be learned once and then applied across the sciences, but are rather, often, highly idiosyncratic to the very specific subdisciplines in which they are used. Add to this scientific techniques of abstraction, in which causally relevant parameters used to investigate target phenomena are parsed in different ways for different purposes, and the routine use of idealizations, in which aspects of these targets are represented in ways that scientists know them not to be, for reasons of mathematical or computational tractability, and the ubiquity of approximations, and implicit understandings within fields regarding how well established or conjectural current theorizing may be—all of which, again, is highly contextual across different subdomains of physics, chemistry, biology, and the social sciences, let alone science simpliciter .

None of this is to suggest that scientific literacy narrowly construed is not worth promoting, on the contrary. That said, in practice, it is highly unrealistic to imagine that in connection with our best, cutting-edge investigations into the most pressing issues of our day—genetic modification, climate change, artificial intelligence, etc.—this sort of literacy is something that could be inculcated in a majority of citizens. It requires a specialized education over significant durations of time and, as noted above, not even scientists are capable of this sort of literacy across the vast breadth of the sciences. Indeed, the impracticability of achieving widespread levels of scientific literacy narrowly construed is much more severe than even this suggests, because (of course) the sciences do not stand still. Whatever non-specialists acquire today will inevitably be modified and replaced over time as technical concepts, forms of data collection and analysis, and so on evolve. Scientific literacy narrowly construed is a good thing, but it will never serve as the primary basis of a widespread public understanding.

Let us turn, then, to a second prominent conception of scientific literacy, which has likewise come to the fore in recent decades. What I have called scientific literacy broadly construed is less concerned with (reading and writing) scientific knowledge per se, and more concerned with a knowledge of science, that is, of science conceived more generally as a practice or set of practices of investigation and knowledge generation. This, too, is undoubtedly a good thing: the extraordinary cultural significance and impact of forms of scientific inquiry in modern times is difficult to overstate, and the greater the extent to which members of societies, which unavoidably partake in and are affected by the sciences, are educated with respect to these nuances, the better. As I will now suggest, however, scientific literacy broadly construed—at least as it has been conceived in recent times—has little potential to serve as the primary basis of a public understanding of science that supports its inclusion in decision making for the common good.

First, let us clarify what a knowledge of science or “the nature of science” is, more precisely. Identifying it with scientific practices (as opposed to the descriptive content of theories and models) is a start, but a great deal hangs on what is included under this heading. For example, does it include techniques of investigation—the use of instruments and other technologies in observation, detection, measurement, and data collection? How about techniques of analysis—the use of mathematics, statistics, computations such as computer simulations, and other procedures involved in moving from data to conclusions? The methods of science are surely part of its nature, but as is suggested by their partial overlap with items whose understanding would be required to achieve scientific literacy narrowly construed, including scientific methods in a conception of literacy broadly construed is likewise a nonstarter for enhancing public understanding. All of the same concerns apply: scientific methods are intricate and complex; their mastery requires substantial training over significant periods of time; they vary remarkably across scientific disciplines and subdisciplines; and they are apt to evolve and change over time as the science develops. As such, for reasons already covered, the methods of science cannot plausibly be considered an effective cornerstone of efforts to achieve widespread public understanding.

Perhaps appreciating this, some appeals to scientific literacy broadly construed take a different tack. They appeal, in various ways, to features of the sciences that are the primary focus of the scholarly field of history and philosophy of science (HPS, which I will take here to include closely integrated disciplines including the sociology of science and ethnographic, anthropological, and related modes of research commonly employed in the field of Science Studies more generally). Footnote 5 These appeals have in common the contention that incorporating HPS into science curricula would be a good thing, but as a whole, I submit, it is difficult to discern in them a positive case for thinking that this is so. Often, the driving, underlying intimation appears to be the suggestion that insights from HPS would instill the view that the sciences are epistemically virtuous in certain ways: rational; objective; reliable producers of knowledge; and so on. This is belied, however, by the very morals that this literature commonly identifies as emanating from scholarship in HPS. Let me illustrate this no doubt provocative claim with just a few examples.

Ennis ( 1979 ) advocates integrating a number of “results” stemming from research in HPS with science education. These include the idea that scientific claims are often subject to “unmentioned qualifications” (pp. 151–152), that they are tentative and subject to revision (p. 152), and that they are sometimes vague and imprecise (p. 156). All of this is correct, of course, but absent a deeper embedding of these facts into a systematic or more substantial account of the epistemology of science—something that scholars in HPS routinely attempt to provide, but that authors concerned with scientific literacy appear (rightly) to agree would be beyond the capacities of general science curricula as such—it is difficult to see how one might characterize these observations as indicative of the epistemic virtues of science, in a way that might then bolster widespread support for science-based policy and governance. Taken at face value, these observations might well seem more likely to encourage skepticism than to inspire confidence in scientific claims.

Similarly, in an extensive review of the literature, McComas et al. ( 1998 , pp. 512–513) contend that while there is significant disagreement within HPS about the nature of science (“what science is and how it works”), there is, in fact, a consensus view regarding aspects that are “most important for a scientifically literate society,” thus constituting “fundamental issues in the nature of science relevant to science education.” From their list of fourteen bullet points comprising this consensus, however, there is very little that might support the idea that science is something that should inform decision making in the public domain. Some of the points are epistemically neutral (e.g., “Scientists are creative”; “Science is part of social and cultural traditions”), and others might easily engender skepticism about passing references to “experimental evidence” and “rational arguments” (e.g., “Observations are theory-laden”; “Scientific ideas are affected by their social and historical milieu”). Once again, the point here is not that any of this is incorrect, nor is it that there can be no mitigation of what might otherwise appear to be worrying features of science regarding the likelihood of it being trustworthy. Rather, it is that giving an account of scientific knowledge that achieves this more detailed understanding would require a much fuller and more subtle engagement with scholarship in HPS than is practicable in a general science education.

Consider, for example, the notion that scientific thinking is, in various ways that have been elaborated through case studies in HPS, responsive to social and historical influences. Leaving aside for the moment the fact that the ways and extents to which this is the case, and the epistemic consequences, are disputed within the field, the basic idea is widely accepted. That said, the fact that social and historical context may affect the formulation and development of scientific hypotheses, theories, and models does not by itself suggest anything about the nature of science that should bolster confidence in the prospects of science-based policy in pursuit of the common good. Lacking expert knowledge that may allow one to determine whether such policy is, in fact, desirable—whether the influence of a given social or historical context is a good thing or bad—a general, skeptical attitude hardly seems unreasonable. The same may be said of other items of bullet point consensus, such as the notion that observations are theory laden; that is, that the way scientists describe the data of observation and experimental detection is influenced by the very hypotheses or theories these data are meant to test. Questions about the ways and extents to which this occurs, and whether it is, in fact, a good thing or bad, cannot be answered except through an application of much more fulsome expertise than a general science education can instill.

Indeed, the bad news for scientific literacy broadly construed, at least insofar as one might imagine it to be a means to the end of greater public understanding, is quite a bit worse than this last consideration suggests. It is not merely the case that abstracting certain facts about scientific practice from the nuanced understandings of them elaborated in the scholarly discipline of HPS cannot serve to facilitate public understanding, due to the impracticability of de-abstracting. It is furthermore that there is, as noted above, deep disagreement within HPS about how these facts should be understood, concretely. Does science produce knowledge? If so, what sort? Do social dimensions of science help or hinder this production? How so and with what consequences? There are longstanding and highly articulated debates here, and no settled consensus regarding how best to think about the epistemic status of our best science. This exposes the fragility of McComas et al.’s claim ( 1998 , p. 512) that “the issues included in the following table [of consensus bullet points] are complex, but we are making recommendations for K-12 students and their teachers – not future philosophers of science.” The putative consensus underwriting scientific literacy broadly construed is a sham, built on a foundation of conflicting views in HPS regarding how these issues should be understood. Footnote 6 This is far from a promising basis for an education with which to facilitate public understanding in the service of the common good.

4 Conditions Underpinning Successful Knowledge Transfer

Even if scientific literacy narrowly and/or broadly construed were to end up featuring as aspects of a compelling account of scientific literacy simpliciter , for reasons we have just considered, something more—or something else—is required if we are to enhance public understanding. Thus, we arrive at the question of what this something else may be. In order to tackle this question systematically, let me first step back to consider certain conditions that seem essential to realizing the goal of improving levels of public understanding, and why in the absence of these conditions, success in this endeavor is not something we should expect. To this end, let us take a moment to reflect on the more general phenomenon of attempting to extract knowledge from one domain, and then employ it, effectively, in another. In doing this, we stand to gain clearer insight into why attempts to implement the conceptions of scientific literacy discussed above are unlikely to succeed in realizing the aim of greater public understanding, and thereby, the aim of greater support for science-based policy and governance. At the same time, we may lay the groundwork for an approach to scientific understanding that fares better.

In recent years, growing interest in the philosophy of science has targeted the idea of what is now commonly referred to as “knowledge transfer,” especially in the context of scientific modeling. In a nutshell, the phenomenon of interest is that of how knowledge sometimes “travels” from a context in which it originates into a different one altogether. The sciences are replete with examples of how theorizing or modeling developed in one domain of research sometimes ends up finding its way into others, where it is likewise employed highly effectively. Models developed in game theory, for instance—the mathematical study of interactions between the choices of agents in decision making, with obvious applications to target phenomena in the social sciences—were subsequently applied with striking success in evolutionary biology. Models developed in the domain of physics have been adopted in the domain of economics, and so on. Footnote 7 Abstracting from the historical richness of these cases, the basic idea is to consider the extraction of some descriptive content that functions well in one domain, and its subsequent adoption in a separate domain in which it also functions well, facilitating the achievement of whatever goals it may be employed to serve in each.

In scientific cases, though the subject matters at issue in the relevant domains, namely, the target systems under investigation in those domains, are different, there is nonetheless something in the descriptions of them that is shared. That is to say that there is some analogy or similarity between them that underlies the success of the transfer. Often, this is a formal or structural similarity: some set of relations between the relevant parameters in a mathematical, computational, causal, or other description. Transfers succeed, when they do, because even though the semantics of a given structure—its meaning in application to its subject matter—may vary between different contexts of use, it is nonetheless successfully interpretable in each. As an illustration of this, consider the Lotka-Volterra model, essentially a pair of coupled, non-linear differential equations, used in the context of ecology to describe fluctuations in populations of predator and prey organisms. While these equations are interpreted in the context of ecology this way, the very same model can also be used to describe economic fluctuations in employment rates and the share of labor in national income. In other words, it can be interpreted in the very different context of economics so as to serve a very different purpose. Footnote 8
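To make the example concrete, here is a standard textbook statement of the Lotka-Volterra equations; the notation is the conventional one and is not drawn from the article under discussion:

$$\frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y,$$

where $x$ and $y$ are the prey and predator populations and $\alpha$, $\beta$, $\gamma$, and $\delta$ are positive rate constants. The economic reading mentioned above reinterprets the same pair of equations with quantities such as the employment rate and labor's share of income standing in for the two populations: the mathematical structure is untouched, and only its semantic embedding changes.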

Here is the upshot for present purposes: what allows for the transfer of knowledge from one domain into another is the possibility of interpreting it successfully in those disparate contexts. The Lotka-Volterra model, taken as a mathematical description, can be successfully embedded in both ecological and economic contexts of interpretation because both of these domains feature conceptual and linguistic resources that allow the model to be successfully interpreted. Appropriate semantic embeddings are a necessary condition for the success of the knowledge transfer. Now, with this much in hand, let us see if it is possible to generalize or extend this notion of knowledge transfer to the focus of our present discussion—the public understanding of science. Here, the interest is not in transferring knowledge between contexts of scientific practice, but rather in transfers between scientific contexts, on the one hand, and broader, public, or societal contexts, on the other. As I will now suggest, while these two scenarios are in one sense tantalizingly analogous, prospects for replicating the success of knowledge transfer in the former are seriously undermined in the latter by a telling disanalogy.

First, the analogy. In both science-to-science cases and science-to-society cases, any attempt to transfer knowledge will involve some putatively shared content between the contexts comprising each end of the transfer process. This is the basis, after all, of the attempt. Focusing now on attempts to transfer knowledge between the scientific domains in which it is produced, and the public domain in which we hope to assimilate it, this content may be thought of in terms of putative facts—say, regarding the likely consequences of sustained anthropogenic contributions (at current levels) to climate change, or the efficacy and risk profile of vaccines developed in response to a pandemic. In other words, the content to be transferred takes the form of assertions regarding the epistemic upshot of theories and models: descriptive claims; explanatory claims; predictions; and retrodictions. But here, in all but the most simple cases, a disanalogy between attempts to perform science-to-science and science-to-society transfers is highly consequential. Transfers succeed in the former case because the domains at issue satisfy the necessary condition specified above: they each embed the relevant content in an interpretive context that has the capacity to produce an understanding of it that functions successfully. Footnote 9 In the latter case, however, for reasons intimated in the previous section (which I will now spell out), given currently prominent conceptions of scientific literacy, this condition is bound to be unsatisfied.

Why are attempts to produce more widespread public understanding of science by means of scientific literacy narrowly and broadly construed destined to fail? One may think of this in terms of two classes of challenges. The first arises within (or with respect to) the scientific domains from which knowledge transfer is intended to originate, and concerns the determination of what, exactly, should be transferred. Let us call these translational challenges , since they concern the difficult task faced by scientific experts of re-describing their own knowledge in ways that are accessible to non-experts. Conversely, on the receiving end of aspirations for science-to-society knowledge transfer, another class of challenges likewise arises in attempts to embed scientific knowledge in a very different semantic context of conceptual and linguistic resources. These interpretational challenges concern the difficult task faced by non-experts of understanding the relevant science. I suspect that it is now apparent why scientific literacy narrowly construed and scientific literacy broadly construed are both vulnerable to translational and interpretational challenges. The discussion of Section  3 furnishes a catalogue of reasons for thinking this, but let me drive the point home with a concrete example.

Experts in specific domains of scientific inquiry—scientists themselves, historians and philosophers of science, science journalists, etc.—typically possess a mastery of the semantic contexts in which that science unfolds, allowing them to understand the content of theories and models expressed in their original form. (Levels of expertise vary, of course; let us assume a level corresponding, at a minimum, to this sort of mastery. Footnote 10 ) Now, consider the task of an expert attempting to communicate this content to non-experts in the public sphere. In order to engage with non-experts who, by definition, lack mastery, a translation that does justice to the content of the science is required, but how is this to be achieved? On the one hand, expressing this content in a way that conforms too strictly to the complexities and qualifications of scientific work runs headlong into interpretational challenges; such descriptions are unlikely to produce understanding in an audience that is incapable of embedding it in their own semantic context. On the other hand, straying too far from these nuances—in other words, employing simplified descriptions—produces claims that are often, strictly speaking, false. Even with the best will in the world, it is often impossible to overcome the translational challenges of simplifying sufficiently to generate non-expert understanding, while simultaneously communicating sufficient detail to avoid caricaturing the science.

A familiar example may serve to clarify this sort of interplay between translational and interpretational challenges. Secondary education in many parts of the world teaches that the sciences employ an especially effective procedure for investigating their subject matters—“the scientific method.” This method is epistemically privileged: in inquiring into the natures of things of scientific interest, it functions as something like a procedural guide or recipe for generating truths. In reality, though, the idea of “a method” is an abstraction from some very specific forms of scientific inquiry, namely, those found in experimental disciplines. There are in fact many different forms of scientific investigation, and only some of them can be made to fit this particular mold. As it turns out, then, there is no one method. Upon examining the amazing variety of practices falling under the heading of "the sciences", from mapping the stages of stellar evolution, to exploring the ranges of animal behavior, to modeling quantum gravity, we find that there is no recipe amounting to a common procedure (barring desperate characterizations rendered largely uninformative by generalizing in the extreme: “science is empirical”; “science relies on evidence”; etc.). Footnote 11

Translating the remarkable scope of scientific methods into the perhaps inspiring but nonetheless grossly caricatured notion of “the scientific method” fails in part because, as an indicator of what is required in order to be genuinely scientific—a defining feature of science, as it were—it is false. One consequence of this is that it opens the door to what I earlier described as extrinsic intervention by agents who seek to undermine scientific knowledge, by allowing them to raise doubts about branches of science that may not be well described by the caricature. Owing to the difficulties that undermine prospects for enhancing levels of scientific literacy narrowly and broadly construed, noted above, neither represents a promising antidote to such skepticism. The range and complexity of scientific methods, and the fact that much like theories and models, they too evolve over time (consider, for example, the dramatic methodological impact of the advent of computer simulations and, more recently, machine learning), returns us once again to prior concerns about the futility of attempting to promote widespread understanding through training in skills of reading and writing, or through an appreciation of the lessons of HPS. In the public domain, lacking the semantic context required to grasp the probative force of most scientific methods let alone all of them, interpretational challenges abound.

5 Philosophical Literacy Concerning the Nature of Science

In his reflections on education, Dewey ( 1916 , p. 126) makes a passing reference to what I have described here as the contextuality of understanding in cases of aspirational knowledge transfer:

Even the circle, square, etc., of geometry exhibit a difference from the squares and circles of familiar acquaintance, and the further one proceeds in mathematical science the greater the remoteness from the everyday empirical thing. In one case, as in the other, the meaning, or intellectual content, is what the element accomplishes in the system of which it is a member.

And yet, as he goes on to suggest, the concerns of everyday life and the interactions with the world these concerns provoke and inspire in us are not as disconnected from scientific thinking as this quotation may suggest. Scientific conceptions of the subject matters of science and related spheres of scientific activity are generally about things of concern to life beyond the sciences; and thus, the former are connected to the latter (Dewey, 1948 , pp. 197–206). In this lies a glimpse of what I take to be a way forward in thinking about the public understanding of science, in a way that stands a better chance (than some we have considered) of realizing the goal of a more widespread appreciation for scientific knowledge—one that recognizes the importance and, indeed, the necessity of including our best science in acting in ways that promote our own welfare and that of society. The project of articulating this proposal in detail is one that exceeds my capacities here, but in closing, let me take some initial and I hope productive steps toward describing how we might envision it.

Earlier, I problematized the idea that scientific literacy broadly construed, as it is typically conceived, may serve as a means by which to enhance levels of public understanding. This conception of literacy, recall, is one that emphasizes an imagined, underlying consensus regarding the nature of science revealed by an examination of the history and philosophy of science. The difficulty with this, I argued, is that the elements of this supposed consensus, taken together, may naturally lead to substantial ambivalence or even skepticism about the epistemic status of scientific theories and models. Furthermore, while possibilities for a deeper analysis capable of resolving this ambivalence or skepticism are thoroughly discussed in the field of HPS, it is precisely this level of engagement and understanding that is (rightly) not conceived by proponents of scientific literacy broadly construed as a practicable component of non-expert education. Worst of all, in HPS, there is in fact extensive disagreement about how precisely the elements of this imagined consensus should be understood, and about the consequences they have for the status of scientific knowledge. In all of this, however, as I will now suggest in a more constructive vein, the difficulty with the “nature of science” approach is ultimately one of execution. It turns out that there is something to the idea of consensus here after all, but in order for it to do the work for which it is intended, it will have to be conceived in a very different way.

To elaborate this, let us start with a maximally general, epistemological question about the sciences: what is the upshot of our best science, in terms of knowledge? The answer to this question is hugely contested by philosophers of science. Scientific realists of various kinds take theories and models to describe correctly (or to some impressive degrees) aspects of a mind-independent world, but in different and conflicting ways; some antirealists assert similar-sounding claims regarding descriptions that meet certain, specified standards of success, but reject the idea that these descriptions and the things they describe are, in any intelligible way, independent of human ways of conceiving and knowing about them; others restrict the scope of what is known to that which is detectable, or to that which is detectable using human sensory modalities alone; and so on. Collectively, these positions reflect numerous disagreements about how best to understand the epistemic upshot of the sciences. And while there is no question that all of the proponents of these views would contend that our best science yields knowledge conceived one way or other, they differ enormously regarding what this knowledge is knowledge of , exactly. Footnote 12

In the cut and thrust of now highly elaborated disputes between advocates of different epistemologies of science, however—and almost invisible as a shared, background assumption, regarded in this domain as something of a banality underwriting further, more detailed thinking about science—is a matter of genuine consensus. All scholars of the sciences, whatever their contrasting accounts of the nature of scientific knowledge may be, endorse the view that the sciences are instrumentally successful: they embody the very best techniques we have managed to establish, and continue to develop, for making predictions, for manipulating things and their properties, for intervening in events and processes, for changing states of affairs, and for applying all of the descriptions, explanations, instruments, and technologies we have fashioned in the course of scientific practice to tackle the problems and puzzles that confront us, and those we set for ourselves. Indeed, whatever else they may achieve, as described in more rarefied epistemological terms and disputed by experts, the sciences are astonishingly instrumentally successful. They incorporate practices that are specifically designed to be, and are selected as, our most successful strategies for delivering empirical success. In these terms, the sciences are supremely fixated on what works. Footnote 13

The instrumental success of science, in all of these respects, is its one truly consensus feature. Viewed in this light, it is revealed not as a banal, background assumption of more interesting debates, but as a stunning achievement of human ingenuity and culture. Scholars differ in how they explain this success (in all of the ways noted above, for example, in terms of realist and antirealist descriptions of scientific knowledge), and these differences are, no doubt, philosophically interesting and worthy of debate. It is a consequential mistake, however, to allow this to obscure the more fundamental, underlying agreement. It is this agreement that should be at the heart of a promising, public understanding of science, and the focus of a general science education. There is an important story to be told here, I submit, regarding how, independently of the many differences in expert diagnoses of the nature of scientific knowledge, all of them recognize the same capacities of the sciences for acting in the world—and thus, by extension, for promoting the common good. There is a shared and powerful conception here of what the sciences are, and what they can achieve, functioning collectively in the manner of an extraordinary machine for generating our best hopes for responding to challenges inherent in our natural and social environments.

It is in the nature of a community of specialists to focus on matters about which they disagree, and then to imagine that these issues exhaust all of what is interesting or important about their specialism. In thinking about the public understanding of science, though, and about what conception of scientific literacy might support the practicable achievement of a widespread understanding that favors a central role for science in acting to serve the good of society, it is now past time to focus on the easily overlooked question of what we mean, or should mean, by “the success of science”—to articulate more clearly and fully the shared part of our many diverse understandings of science, on which our conflicting interpretations of the epistemic upshot of the sciences depend. While I regard the preceding as supplying ample motivation for this positive project, I cannot do it full justice here. That said, it is worth noting that when, hopefully, an increasing number of educators and scholars are ready to engage with this project in earnest, we will have the benefit of a head start furnished by earlier, embryonic articulations of it in the recent history of philosophy.

Consider, for example, the broad sweep of logical empiricism, associated with the birth of the philosophy of science as a self-aware subdiscipline of philosophy in the early twentieth century. Many of its core commitments have since been rejected, but for present purposes, there is something of substantial value to be recovered from its original motivations. The logical empiricists were keen to establish the sciences as the exemplary means by which to produce knowledge of the world, not (primarily) for its own sake, but because they took properly scientific knowledge to be the best possible means by which to facilitate social and moral progress. At the same time, when the American Pragmatists promoted the idea that concepts such as truth and knowledge must be understood as having a pragmatic dimension—that what it is for a claim to be meaningful must be understood, in part, on the basis of what we can do with it—they were articulating a view of how the sciences are tied to practical consequences in human experience. Footnote 14 In more rarefied debates about the nature of scientific knowledge, empiricism and pragmatism are often identified with the antirealist side of the ledger; in different ways, they resist traditional realist understandings of science as furnishing knowledge of a mind-independent world. Once again, however, this simply distracts from what is key to a potentially potent conception of the public understanding of science. Everyone, whether or not they think that the sciences are merely instrumentally effective, thinks that they are instrumentally effective.

The notion that it is part of the essence of the modern sciences to serve as a preeminent collection of instruments with which to face our most pressing challenges is compatible with further elaborations of their aims and achievements, but it is the instrumental piece that is crucial for the public understanding of science—and more specifically, for the sort of widespread public understanding that would support the pursuit of science-based policy and governance. As it requires no specialist knowledge or background to comprehend, this understanding is immune to the debilitating effects of translational and interpretational challenges that inevitably undermine current attempts to transfer scientific knowledge into the public domain. Of course, this does not preclude supplementation by scientific literacy both narrowly and broadly construed, to whatever extent this may be possible. Ultimately, however, what is crucial is something much simpler, more easily absorbed, and longer lasting: a more basic understanding of the functional role of science in society, as our most effective means to desirable ends for people and the planet. This is a cultural fact about science, about its intended place in the broad sphere of human endeavor and its staggering success therein, as opposed to a scientific fact per se, and its articulation and mastery thus counts as a form of philosophical literacy, however modest, regarding the nature of science.

6 Conclusions

This proposal, to shift the focus of a general science education from scientific literacy narrowly or broadly construed to a core notion of instrumental success, is both conciliatory and revolutionary. It is conciliatory in that, as noted above, it is not incompatible with current hopes of promoting literacy, whether narrowly in terms of skills and concepts required to engage with scientific work, or more broadly in terms of historical, philosophical, sociological, and other approaches to understanding the sciences that inform scholarship in HPS. Surely, the more students learn about all of this, the better. The proposed shift in focus is revolutionary, however, in suggesting that whatever a general science education may confer along the lines of scientific literacy narrowly and broadly construed, this should be viewed as a means to the end of instilling the more crucial idea that the sciences represent our best hopes for making positive change. For all of the reasons rehearsed above, teaching science itself and attention to scientific literacy simply cannot fulfill the aspiration of enhancing widespread support for science-based policy and governance. What this teaching and attention can do, however, is provide compelling evidence—a proof of concept, as it were—for the more foundational idea that the sciences embody our most potent strategies for instrumental success. This is precisely the knowledge we can and must transfer from the realm of the sciences into the public domain.

One might expect a number of beneficial consequences to follow from a widespread understanding of this simple idea. The entrenchment of it would promote a more extensive recognition, for example, of the fact that a preponderance of scientific consensus is generally our best bet for rational decision making in the present, even when that consensus is partial and apt for revision. It would promote the superior credentials of science over pseudoscience. Perhaps most importantly, given the tight connection between ambitions for instrumental success and the specific problems targeted by those ambitions, it would help us to think more transparently about which problems and whose problems are addressed by science, thus laying bare the deeply value-laden nature of scientific work and facilitating more explicit considerations of what we as a society want science to achieve, and for whom. Footnote 15 This would promote a welcome scrutiny and critique of extant values in science, thus contributing to the process of making it better, and more trenchant rejections of extrinsic interventions that seek to undermine the sciences to the detriment of the common good. It suggests a role not only for teachers, in reframing their approach to general science education, but also a role for experts in the sciences and humanities, in making the communication of this message the paramount objective of a broader scientific education for society as a whole.

Granted, these are lofty ambitions for an articulation of a basic form of philosophical literacy concerning the instrumental success of science, and the preceding discussion is hardly a panacea for all of the challenges associated with the uptake of scientific knowledge in the public domain. What people believe is a function not only of information made available to them, but of so many things in addition—their background beliefs and cognitive predispositions (conscious and unconscious), their social and institutional relationships, and much more besides. A clear understanding of what would constitute a genuinely efficacious public understanding of science is just one piece of this puzzle. It is, nevertheless, an essential piece, and one whose contours I hope the preceding has helped to illuminate.

Data Availability

Not applicable.

Code Availability

Not applicable.

Notes

1. For summaries of work concerning the aims of scientific education, see Smith & Siegel (2004, 2016). For studies of the many consequential relationships between scientific literacy, responsible citizenship, and democracy, see Dewey (1916), Miller (1998), Longbottom and Butler (1999), Kolstø (2001), Holbrook and Rannikmae (2009), Kitcher (2011), Ratcliffe and Grace (2003), Reiss and White (2014), and Sadler and Zeidler (2009). For literature surveys of frameworks for analyzing public understanding (including scientific literacy) and enhancing this understanding (e.g., by means of science education) to facilitate science-based policy, see Bauer et al. (2007) and Kappel and Holmen (2019), respectively. For some recent empirical data on correlations between science education and public understanding, see Kennedy and Hoffman (2019) and Besley and Hill (2020).

2. See Norris and Phillips (2003), for instance, for a summary of the failings of much science journalism to communicate scientific information correctly.

3. For just a few recent studies, see Krimsky (2004), Brown (2008), and Oreskes and Conway (2010).

4. Norris and Phillips (2009), pp. 271–273, contrast this “fundamental sense,” which they advocate, with a “derived sense,” which concerns “the substantive content of science.” It is difficult to imagine exemplifying the former without also exemplifying a good deal of what falls under the latter, but in any case, as I will argue below, both are problematic for present purposes. Feinstein et al. (2013), p. 314, offer partially overlapping advice for “cultivating competent [scientific] outsiders.” For a sweeping review of historical and conceptual approaches to scientific literacy, see Laugksch (2000).

5. Hence the literature in recent decades aiming to introduce educators to this realm of scholarship, in hopes of facilitating the cause of scientific literacy. See, for example, Martin (1985/1972), Ennis (1979), McComas et al. (1998), and Matthews (2015).

6. I will return to this point in more detail in Section 5.

7. For a recent collection of case studies from across the breadth of the sciences (and further references), see Herfeld and Lisciandra (eds.) (2019).

8. See Humphreys (2019), pp. 15–16, for more detail.

9. Morgan (2014) outlines a number of generic strategies that scientists in different domains use to achieve this, given the conceptual and linguistic richness of their respective contexts of work.

10. This excludes many to whom we might otherwise hope to attribute expertise. For instance, in a sobering study, Norris and Phillips (1994) document the failure of top, senior secondary school science students to interpret correctly the contents of (even) popular science reporting, systematically overestimating expressed degrees of certainty and failing to grasp basic forms of expression, such as causal claims (pp. 959–961).

11. Cf. McComas et al.’s (1998, p. 513) third consensus bullet point: “There is no one way to do science (therefore, there is no universal step-by-step scientific method).” For a sample of the now vast philosophical scholarship on this point, see Crombie (1994), Hacking (1993), and Kwa (2011). See also Windschitl et al. (2008).

12. For extensive surveys of different forms of scientific realism and antirealism, with references to contemporary discussions and historical antecedents, see Chakravartty (2017/2011), Liston (2016), and Rowbottom (2019). Though I will not defend the assertion here, it seems clear that these positions reflect disagreements not merely among philosophers, but also, often, between scientists, especially in more speculative and cutting-edge domains of science.

13. It should be clear from this characterization of instrumentally successful science that it applies across the ostensible distinction between “pure” and “applied” science—a distinction that is, in any case, often difficult to maintain in practice given the intimate connections between, and the interwoven nature of, the pure and the applied.

14. The relationship between Dewey’s “instrumentalist conception of science,” for instance, and his views on science education, is explored in Waddington and Feinstein (2016). I should note that the inspiration I take from Dewey in this paper stems from aspects of the former that are strictly independent of (though connectable to, of course) his advocacy of social, activity-based, experiential learning in schools, in line with theories of experiential and progressive childhood education.

15. In recent decades, these themes have been especially prominent in feminist philosophy of science. For some influential contributions to this burgeoning research, see Harding (1991), Longino (1990), and Kourany (2010).

Bauer, M. W., Allum, N., & Miller, S. (2007). What can we learn from 25 years of PUS survey research? Liberating and expanding the agenda. Public Understanding of Science, 16 , 79–95.

Besley, J. C., & Hill, D. (2020). Science and technology: public attitudes, knowledge, and interest. National Science Foundation: https://ncses.nsf.gov/pubs/nsb20207/executive-summary

Brown, J. R. (2008). The community of science®. In M. Carrier, D. Howard, & J. Kourany (Eds.), The challenge of the social and the pressure of practice: science and values revisited (pp. 189–216). University of Pittsburgh Press.

Chakravartty, A. (2017/2011). Scientific Realism. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy . http://plato.stanford.edu/entries/scientific-realism/

Crombie, A. C. (1994).  Styles of scientific thinking in the European tradition: the history of argument and explanation especially in the mathematical and biomedical science and arts (3 vols.). Duckworth.

Dewey, J. (1916). Democracy and education: an introduction to the philosophy of education . Unabridged Classic Reprint.

Dewey, J. (1948). Common sense and science: their respective frames of reference. Journal of Philosophy, 45 , 197–207.

Ennis, R. H. (1979). Research in philosophy of science bearing on science education. In P. D. Asquith & H. E. Kyburg Jr. (eds.), Current Research in Philosophy of Science: Proceedings of the P.S.A. Critical Research Problems Conference , (pp. 138–170). Philosophy of Science Association.

Feinstein, N. W., Allen, S. S., & Jenkins, E. (2013). Outside the pipeline: re-imagining science education for non-scientists. Science, 340(6130), 314–317.

Hacking, I. (1993). Style for historians and philosophers. Studies in History and Philosophy of Science, 23 , 1–20.

Harding, S. (1991). Whose science? Whose knowledge?: thinking from women’s lives . Cornell University Press.

Herfeld, C., & Lisciandra, C. (Eds.). (2019).  Knowledge Transfer and its Contexts , Special Issue of Studies in History and Philosophy of Science, 77, 1-140.

Holbrook, J., & Rannikmae, M. (2009). The meaning of scientific literacy. International Journal of Environmental and Science Education, 4 , 275–288.

Humphreys, P. (2019). Knowledge transfer across scientific disciplines. Studies in History and Philosophy of Science, 77 , 112–119.

Kappel, K., & Holmen, S. J. (2019). Why science communication, and does it work? A taxonomy of science communication aims and a survey of the empirical evidence. Frontiers in Communication , 55 . https://doi.org/10.3389/fcomm.2019.00055

Kennedy, B., & Hoffman, M. (2019). What Americans know about science . PEW Research Center.

Kitcher, P. (2011). Science in a democratic society . Prometheus.

Kolstø, S. D. (2001). Scientific literacy for citizenship: tools for dealing with the science dimension of controversial socioscientific issues. Science Education, 85 , 291–310.

Kourany, J. A. (2010). Philosophy of science after feminism . Oxford University Press.

Krimsky, S. (2004). Science in the private interest: has the lure of profits corrupted biomedical research? Rowman & Littlefield.

Kwa, C. (2011).  Styles of knowing: a new history of science from ancient times to the present (trans. D. McKay). University of Pittsburgh Press.

Laugksch, R. C. (2000). Scientific literacy: a conceptual overview. Science Education, 84 , 71–94.

Liston, M. N. (2016). Scientific realism and antirealism. In J. Fieser & B. Dowden (eds.), Internet Encyclopedia of Philosophy . https://iep.utm.edu/sci-real/

Longbottom, J. E., & Butler, P. H. (1999). Why teach science? Setting rational goals for science education. Science Education, 83 , 473–492.

Longino, H. E. (1990). Science as social knowledge: values and objectivity in scientific inquiry . Princeton University Press.

Martin, M. (1985/1972). Concepts of science education: a philosophical analysis . University Press of America.

Matthews, M. R. (2015).  Science teaching: the contribution of history and philosophy of science , 2nd edition. Routledge.

McComas, W. F., Almazroa, H., & Clough, M. P. (1998). The nature of science in science education: an introduction. Science & Education, 7 , 511–532.

Miller, J. D. (1998). The measurement of civic scientific literacy. Public Understanding of Science, 7 , 203–223.

Morgan, M. S. (2014). Resituating knowledge: generic strategies and case studies. Philosophy of Science, 81 , 1012–1024.

Norris, S. P., & Phillips, L. M. (1994). Interpreting pragmatic meaning when reading popular reports of science. Journal of Research in Science Teaching, 31 , 947–967.

Norris, S. P., & Phillips, L. M. (2003). The public understanding of scientific information: communicating, interpreting, and applying the science of learning. Education Canada, 43 , 24–27.

Norris, S. P., & Phillips, L. M. (2009). Scientific literacy. In D. R. Olson & N. Torrance (Eds.), The Cambridge handbook of literacy (pp. 271–285). Cambridge University Press.

Oreskes, N., & Conway, E. M. (2010). Merchants of doubt: how a handful of scientists obscured the truth on issues from tobacco smoke to global warming . Bloomsbury.

Ratcliffe, M., & Grace, M. (2003). Science education for citizenship: teaching socio-scientific issues . Open University Press.

Reiss, M. J., & White, J. (2014). An aims-based curriculum illustrated by the teaching of science in schools. Curriculum Journal, 25 , 76–89.

Rowbottom, D. P. (2019). Scientific realism: what it is, the contemporary debate, and new directions. Synthese, 196 , 451–484.

Sadler, T. D., & Zeidler, D. L. (2009). Scientific literacy, PISA, and socioscientific discourse: assessment for progressive aims of science education. Journal of Research in Science Teaching, 46 , 909–921.

Smith, M. U., & Siegel, H. (2004). Knowing, believing, and understanding: what goals for science education? Science & Education, 13 , 553–582.

Smith, M. U., & Siegel, H. (2016). On the relationship between belief and acceptance of evolution as goals of evolution education. Science & Education, 25 , 473–496.

Waddington, D. I., & Feinstein, N. W. (2016). Beyond the search for truth: Dewey’s humble and humanistic vision of science education. Educational Theory, 66 , 111–126.

Windschitl, M., Thompson, J., & Braaten, M. (2008). Beyond the scientific method: model-based inquiry as a new paradigm of preference for school science investigations. Science Education, 92 , 941–967.

Acknowledgements

For discussions of various aspects of this essay and helpful suggestions, I am grateful to Catherine Elgin, Blaine Fowers, Aleksandra Hernandez, Raja Rosenhagen, Harvey Siegel, Denis Walsh, and audiences at the Biennial Conference of the European Philosophy of Science Association, the Central Division meeting of the American Philosophical Association, the Second Congress of the Russian Society for History and Philosophy of Science, Ashoka University, the Principia International Symposium, and the Dubrovnik Philosophy of Science Conference.

Author information

Authors and Affiliations

Department of Philosophy, University of Miami, 1252 Memorial Drive, Ashe Building, Coral Gables, FL, 33124, USA

Anjan Chakravartty

Corresponding Author

Correspondence to Anjan Chakravartty .

Ethics declarations

Ethics Approval

Not applicable. 

Consent to Participate

Not applicable.

Consent for Publication

Yes, if this means you have my consent to publish in the journal if accepted; otherwise, not applicable. 

Conflict of Interest

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Chakravartty, A. Scientific Knowledge vs. Knowledge of Science. Sci & Educ 32 , 1795–1812 (2023). https://doi.org/10.1007/s11191-022-00376-6

Accepted : 02 August 2022

Published : 09 September 2022

Issue Date : December 2023

DOI : https://doi.org/10.1007/s11191-022-00376-6

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

Keywords

  • Public understanding of science
  • Common good
  • Scientific literacy
  • Nature of science
  • Success of science
  • Scientific instrumentalism

Nazi Unethical Experiments

This essay is about the unethical medical experiments conducted by the Nazis during World War II, particularly focusing on the infamous activities of Dr. Josef Mengele at Auschwitz. It highlights the inhumane treatment of prisoners, the violations of ethical principles such as informed consent, and the lasting impact of these experiments on medical ethics and research practices. Through examining this dark chapter in history, the essay underscores the importance of upholding principles of human dignity and autonomy in medical research and practice to prevent such atrocities from recurring.

The atrocities committed during the reign of the Third Reich have left an indelible mark on history, particularly in the realm of medical experimentation. The Nazi regime, under the guise of advancing scientific knowledge, conducted a series of unethical and inhumane experiments on prisoners in concentration camps. These experiments, conducted without consent and often resulting in immense suffering and death, serve as a chilling reminder of the depths of human depravity.

One of the most infamous figures associated with these experiments is Dr. Josef Mengele, often referred to as the “Angel of Death.” Mengele conducted a range of experiments at Auschwitz concentration camp, including studies on twins, genetic disorders, and the effects of various drugs and diseases. His disregard for the humanity of his subjects knew no bounds, as he subjected them to unimaginable pain and suffering in the name of pseudoscientific research.

The experiments conducted by Mengele and his cohorts violated numerous ethical principles, including the principle of informed consent, which holds that individuals must be fully informed of the risks and benefits of participation in research and must give their voluntary consent. In the context of the Nazi experiments, prisoners were coerced or deceived into participating, and their consent was neither voluntary nor informed. Furthermore, the experiments often lacked scientific rigor and were driven more by ideology than by genuine scientific inquiry.

The legacy of Nazi medical experiments extends far beyond the atrocities committed during World War II. These experiments have had lasting effects on medical ethics and research practices, serving as a cautionary tale of the dangers of unchecked scientific curiosity and the importance of upholding the principles of respect for human dignity and autonomy. By studying and reflecting on the dark history of Nazi medical experimentation, we can ensure that such atrocities are never repeated and that the lessons learned from this chapter of history continue to inform and guide ethical medical practice in the future.


