www.science.org/doi/10.1126/sciadv.abo0038

Knowledge overconfidence is associated with anti-consensus views on controversial scientific issues

N. Light, P. M. Fernbach, N. Rabb, M. V. Geana, S. A. Sloman | June 30, 2022
DOI: 10.1126/sciadv.abo0038

Abstract

Public attitudes that are in opposition to scientific consensus can be disastrous and include rejection of vaccines and opposition to climate change mitigation policies. Five studies examine the interrelationships between opposition to expert consensus on controversial scientific issues, how much people actually know about these issues, and how much they think they know. Across seven critical issues that enjoy substantial scientific consensus, as well as attitudes toward COVID-19 vaccines and mitigation measures like mask wearing and social distancing, results indicate that those with the highest levels of opposition have the lowest levels of objective knowledge but the highest levels of subjective knowledge. Implications for scientists, policymakers, and science communicators are discussed.

INTRODUCTION

Uncertainty is inherent to science. A constant striving toward a better understanding of the world requires a willingness to amend or abandon previous truths, and disagreements among scientists abound. Sometimes, however, evidence is so consistent, overwhelming, or clear that a scientific consensus forms. Despite consensus by scientific communities on a handful of critical issues, many in the public maintain anti-consensus views. For example, there are sizable gaps in agreement between scientists and laypeople on whether genetically modified (GM) foods are safe to eat, climate change is due to human activity, humans have evolved over time, more nuclear power is necessary, and childhood vaccines should be mandatory (1). The coronavirus disease 2019 (COVID-19) pandemic also continues, fueled in part by contagion among the unvaccinated (2), while social movements against vaccination policies are emerging worldwide. The consequences of these anti-consensus views are dire, including property destruction, malnutrition, disease, financial hardship, and death (3–6).

Opposition to the scientific consensus has often been attributed to nonexperts’ lack of knowledge, an idea referred to as the “deficit model” (7, 8). According to this view, people lack specific scientific knowledge, allowing attitudes from lay theories, rumors, or uninformed peers to predominate. If only people knew the facts, the deficit model posits, then they would be able to arrive at beliefs more consistent with the science. Proponents of the deficit model attempt to change attitudes through educational interventions and cite survey evidence that typically finds a moderate relation between science literacy and pro-consensus views (9–11). However, education-based interventions to bring the public in line with the scientific consensus have shown little efficacy, casting doubt on the value of the deficit model (12–14). This has led to a broadening of psychological theories that emphasize factors beyond individual knowledge. One such theory, “cultural cognition,” posits that people’s beliefs are shaped more by their cultural values or affiliations, which lead them to selectively take in and interpret information in a way that conforms to their worldviews (15–17). Evidence in support of the cultural cognition model is compelling, but other findings suggest that knowledge is still relevant. Higher levels of education, science literacy, and numeracy have been found to be associated with more polarization between groups on controversial and scientific topics (18–21). Some have suggested that better reasoning ability makes it easier for individuals to deduce their way to the conclusions they already value [(19) but see (22)]. Others have found that scientific knowledge and ideology contribute separately to attitudes (23, 24).

Recently, evidence has emerged, suggesting a potentially important revision to models of the relationship between knowledge and anti-science attitudes: Those with the most extreme anti-consensus views may be the least likely to apprehend the gaps in their knowledge. In a series of studies on opposition to GM foods, Fernbach et al. (25) found that individuals most opposed were the least knowledgeable about science and genetics but rated their understanding of the technology the highest in the sample. A similar pattern emerged for gene therapy, although not for climate change denial. Related findings have been reported for opponents of vaccination claiming to know more than doctors about autism (26) and for anti-establishment voters in a Dutch referendum reporting knowing more about the issues than they really do (27). Those with the most strongly held anti-consensus views may be not only the least knowledgeable but also the most overconfident about how much they know (28, 29).

These findings suggest that knowledge may be related to pro-science attitudes but that subjective knowledge—individuals’ assessments of their own knowledge—may track anti-science attitudes. This is a concern if high subjective knowledge is an impediment to individuals’ openness to new information (30). Mismatches between what individuals actually know (“objective knowledge”) and subjective knowledge are not uncommon (31). People tend to be bad at evaluating how much they know, thinking they understand even simple objects much better than they actually do (32). This is why self-reported understanding decreases after people try to generate mechanistic explanations, and why novices are poorer judges of their talents than experts (33, 34). Here, we explore such knowledge miscalibration as it relates to degree of disagreement with scientific consensus, finding that increasing opposition to the consensus is associated with higher levels of knowledge confidence for several scientific issues but lower levels of actual knowledge. These relationships are correlational, and they should not be interpreted as support for any one theory or model of anti-scientific attitudes. Attitudes like these are most likely driven by a complex interaction of factors, including objective and self-perceived knowledge, as well as community influences. We speculate on some of these mechanisms in the general discussion.

The current research makes four primary contributions. First, we test the generality of the relation between extremity of anti-consensus beliefs and scientific knowledge overconfidence (the difference between subjective and objective knowledge). Although related effects have been demonstrated across a handful of contexts and with different operationalizations of the constructs, there has been no test with a unitary methodology across a range of issues. In studies 1 to 3, we examine seven controversial issues on which there is a substantial scientific consensus: climate change, GM foods, vaccination, nuclear power, homeopathic medicine, evolution, and the Big Bang theory. In studies 4 and 5, we examine attitudes concerning COVID-19. Second, we provide evidence that subjective knowledge of science is meaningfully associated with behavior. When the uninformed claim they understand an issue, it is not just cheap talk, and they are not imagining a set of “alternative facts.” We show that they are willing to bet on their ability to perform well on a test of their knowledge (study 3).

Third, if the effect does not generalize to all issues, do the data give any indication why? In discussing why GM foods showed the pattern but climate change did not, Fernbach et al. (25) suggested that a potentially important difference between the issues is degree of political polarization, with climate change attitudes much more polarized by political affiliation than attitudes on GM foods. Political polarization refers to the degree to which people from different ideological groups (e.g., conservatives versus liberals) differ in their positions on an issue. When an issue is highly polarized, there may be less room for individual knowledge to influence attitudes because they are instead driven more by community influence. In studies 1 and 2, we test whether the predicted effects are attenuated for issues that are more politically polarized. Likewise, because several issues that we examine have come into conflict with religious thinking, and because religion can itself be a polarizing factor for attitudes and beliefs (21), we also test for an attenuation for issues more associated with religiosity.

Last, given the life-altering nature of the COVID-19 pandemic, do these relationships shed light on the psychology of those opposed to expert recommendations and policies aimed at reducing the infection rate? The COVID-19 pandemic, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), is the largest spread of a respiratory disease that the world has seen in over 100 years. Although the knowledge gained and shared by the scientific community about the virus gradually increased, public health professionals prescribed traditional, time-tested, and general epidemiological measures to try to mitigate its spread. Thus, while a scientific consensus on the specifics of SARS-CoV-2 transmission emerged slowly, consensus on how to mitigate viral contagion was well established even at the beginning of the pandemic. Nonetheless, there are notable gaps between scientists’ recommendations and the public’s willingness to act in accordance with them (35–37). Here, we examine the relations among objective knowledge, subjective knowledge, and opposition to COVID-mitigating behaviors and policies in two studies, one focused on openness to being vaccinated (study 4) and the other focused on attitudes toward mitigation behaviors such as mask wearing and social distancing (study 5).

RESULTS

Studies 1 and 2: Anti-consensus views across seven scientific issues

The purpose of studies 1 and 2 was to test the generalizability of relations between participants’ opposition to scientific consensus and their objective and subjective knowledge, both within and across seven scientific issues, in a large preregistered study (combined N = 3249). These issues are of current societal interest, and for each, either scientific groups have issued official statements of consensus or surveys of scientists and reviews of research have demonstrated de facto consensus: the safety of GM foods, the validity of anthropogenic climate change, the benefits of vaccination outweighing its risks, the validity of evolution as an explanation of human origins, the validity of the Big Bang theory as an explanation for the origin of the universe, the lack of efficacy of homeopathic medicine, and the importance of nuclear power as an energy source (see Table 1). Each participant was randomly assigned to answer questions about just one of these seven issues.

Issue: Consensus (references)

Climate change: Most of the warming of Earth’s average global temperature over the second half of the 20th century has been caused by human activities. (5, 52)

GM foods: Consuming foods with ingredients derived from GM crops is no riskier than consuming foods modified by conventional plant improvement techniques. (53, 54)

Nuclear power: Nuclear power is necessary and should be expanded to mitigate climate change. (1, 55)

Vaccination: The benefits of vaccinations outweigh the risks, and vaccination has zero link to autism. (1, 56, 57)

Homeopathic medicine: There is no reliable evidence that homeopathic medicine is an effective treatment for any health condition. (58, 59)

The Big Bang: The universe began approximately 14 billion years ago in a hot and dense state and has expanded and cooled since then. (60)

Evolution: Humans and other living things have evolved over time. (61, 62)

COVID-19: Measures such as social distancing and wearing a mask successfully reduce the spread of COVID-19. (63, 64)

Table 1. Scientific issues and consensuses.

Studies 1 to 3 examine respondents’ attitudes toward seven issues on which scientific consensus has been established. Studies 4 and 5 examine attitudes on COVID vaccination and activities or policies that mitigate the spread of the virus. The consensuses for these issues (and associated citations) are included.

To measure participants’ general and issue-specific objective knowledge, we developed a scale of 34 true-false science questions, containing subscales for each of the seven scientific issues. This allowed us to test the generalizability of the effects both within and across issues. While previous studies have assessed differences in science knowledge between those who oppose versus accept the consensus, we focus on the degree of anti-consensus opposition. These studies, therefore, are restricted to participants who do not report complete agreement with the scientific consensus.

Studies 1 and 2 measured the same variables and showed similar results, so we aggregated and analyzed data from the two studies together (see the Supplementary Materials for additional analyses). The main regression models separately tested the zero-order association of opposition to the consensus with the following measures:

1) Objective knowledge (the full set of 34 items)

2) Objective knowledge (each issue’s five-item subscale)

3) Subjective knowledge

4) A within-subject knowledge difference score constructed by subtracting each participant’s z-scored subjective knowledge score from their z-scored objective knowledge score (a computational sketch of this measure follows the list)
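To make measure 4 concrete, here is a minimal sketch of the difference-score computation in Python, assuming a hypothetical per-participant table with columns named objective and subjective (names ours, not the authors’):

    import pandas as pd
    from scipy import stats

    # Hypothetical per-participant data: summed objective knowledge
    # (34 true-false items coded -3 to 3) and 1-7 subjective knowledge.
    df = pd.DataFrame({
        "objective": [28, 22, 15, 9, 30],
        "subjective": [4, 5, 6, 7, 3],
    })

    # z-score each measure across the sample, then take the within-subject
    # difference: objective z minus subjective z. More negative values mean
    # self-assessed knowledge outstrips actual knowledge -- the paper's
    # proxy for knowledge overconfidence.
    df["difference"] = stats.zscore(df["objective"]) - stats.zscore(df["subjective"])
    print(df)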

Figure 1 shows the main pattern of results: As opposition to the scientific consensus increases, objective knowledge decreases but subjective knowledge increases (see Table 2 for corresponding regressions). As a result, more opposition is also associated with larger (negative) magnitudes of the knowledge difference score (a proxy for knowledge overconfidence), constructed with either the general or issue-specific objective knowledge measures. These results demonstrate that the most extreme opponents believe their knowledge ranks among the highest, but it is actually among the lowest.

Fig. 1. Overall across-issue model predictions of relationships between opposition and objective knowledge, subjective knowledge, and the knowledge difference score, with 95% confidence interval bands.

Higher levels of opposition to a scientific consensus are associated with lower levels of actual scientific knowledge, higher self-assessments of knowledge, and more knowledge overconfidence (operationalized here as the increasing negative magnitude of each respondent’s knowledge difference score). ***P < 0.001.

Zero-order effect of opposition on each (knowledge) dependent variable:

Model (1), objective knowledge (full set): β = −2.84***, dfs = 2130.6
Model (2), objective knowledge (subscales): β = −0.53***, dfs = 2126.8
Model (3), subjective knowledge: β = 0.15***, dfs = 2126.8
Model (4), difference score (using full set of objective knowledge questions): β = −0.11***, dfs = 1862.1
Model (5), difference score (using objective knowledge subscales): β = −0.09***, dfs = 1996.2

Table 2. Overall across-issue model output.

The coefficients and degrees of freedom reported here represent zero-order relationships between opposition to scientific consensus and the five (knowledge) dependent variables in linear mixed models pooling data across all scientific issues in studies 1 and 2. ***P < 0.001; degrees of freedom (dfs) were estimated using Satterthwaite’s method.
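To reproduce the structure of these models, the sketch below fits a random-intercept linear mixed model in Python with statsmodels, using simulated stand-in data (all column names are ours). Note that statsmodels does not provide Satterthwaite degrees of freedom (the estimator the authors report, standard in R’s lmerTest), so this mirrors the model specification rather than the exact inference:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated stand-in for the pooled data: one row per participant with
    # opposition (1-7 attitude item), objective (full-scale knowledge
    # score), and issue (the randomly assigned scientific topic).
    rng = np.random.default_rng(0)
    n = 300
    df = pd.DataFrame({
        "issue": rng.choice(["climate", "gm_foods", "vaccination", "nuclear"], size=n),
        "opposition": rng.integers(1, 8, size=n),
    })
    df["objective"] = 60 - 3 * df["opposition"] + rng.normal(0, 10, size=n)

    # Random intercepts by issue absorb between-topic differences while the
    # fixed effect of opposition is estimated across issues, as in Model (1).
    result = smf.mixedlm("objective ~ opposition", data=df, groups=df["issue"]).fit()
    print(result.summary())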

Next, because across-issue models could potentially obscure differences in associations at the issue level, we tested the same relationships for each issue separately. Regression predictions by issue are shown in Fig. 2. The figure shows results using the overall objective knowledge scale, but results are similar for the issue-specific subscales (see the Supplementary Materials). The relation between opposition and objective knowledge is negative and significant for all issues except climate change [βopposition = 0.66, t(240) = 0.67, P = 0.50]. The relation between opposition and subjective knowledge is positive for all issues but is not statistically significant for climate change, Big Bang, or evolution (P = 0.13, 0.94, and 0.55, respectively). The knowledge difference score analysis replicated the across-issue results (more opposition associated with larger differences) for all issues except climate change.

Fig. 2. The relationship between opposition and subjective and objective knowledge for each of the seven scientific issues, with 95% confidence bands.

In general, opposition is positively associated with subjective knowledge and negatively associated with objective knowledge, but not for all issues.

Because we were interested in the degree to which the polarization of an issue could alter these relationships, we then calculated political polarization and religiosity scores for each of the seven scientific issues (see Materials and Methods). For more politically polarized issues, the relation between opposition and objective knowledge is less negative than for less polarized issues [βinteraction = 6.26, t(2128.2) = 3.65, P < 0.001], and the relation between opposition and subjective knowledge is less positive [βinteraction = −0.48, t(2125.5) = −4.25, P < 0.001]. Higher levels of issue religiosity, however, attenuated only the relation between opposition and subjective knowledge [βinteraction = −0.61, t(2124.8) = −4.48, P < 0.001]. These findings should be interpreted with caution because scientific issue and polarization scores are perfectly correlated, and the possibility exists that other unmeasured factors represent the true causes of differences between issues. Overall, the positive association between opposition to the scientific consensus and knowledge overconfidence generally holds. However, these relations appear to be weaker for more polarized issues, particularly climate change.

Study 3: Incentivizing genuine assessments of knowledge

A limitation of studies 1 and 2 is that participants with different levels of opposition to the consensus may interpret the measure of subjective knowledge differently. For instance, opponents may claim that they understand an issue but acknowledge that their understanding does not reflect the same facts as the scientific community’s. This could explain the disconnect between their subjective knowledge rating and their ability to answer questions based on accepted scientific facts. The goal of study 3 was thus to remove ambiguity in how the subjective knowledge measure could be interpreted across participants. To accomplish this, we designed a measure of knowledge confidence that incentivized participants to report their genuine beliefs. Participants were given the opportunity to earn a bonus payment by betting on their ability to score above average on the objective knowledge questions associated with their assigned scientific issue or taking a smaller guaranteed payout. In this paradigm, betting indicates greater knowledge confidence (38). We predicted that those with greater opposition to the consensus would earn less due to knowledge overconfidence and that the other effects documented in studies 1 and 2 would be replicated. Another feature of study 3 was that participants fully in line with the consensus were not filtered out of the survey; we analyzed the data both with and without them included in the dataset.

Figure 3 shows the key results. As opposition to the consensus increased, participants were more likely to bet but less likely to score above average on the objective knowledge questions, confirming our predictions. As a consequence, more extreme opponents earned less. Regression analysis revealed a $0.03 reduction in overall pay with each one-unit increase in opposition [t(1169) = −8.47, P < 0.001]. We also replicated the effect that more opposition to the consensus is associated with higher subjective knowledge [βopposition = 1.81, t(1171) = 7.18, P < 0.001] and lower objective knowledge [both overall science literacy and the subscales; overall science literacy model βopposition = −1.36, t(1111.6) = −16.28, P < 0.001; subscales model βopposition = −0.19, t(1171) = −10.38, P < 0.001]. Last, participants who chose to bet were significantly more opposed than nonbettors [βbet = 0.24, t(1168.7) = 2.09, P = 0.04], and betting was significantly correlated with subjective knowledge [correlation coefficient (r) = 0.28, P < 0.001], as we would expect if they are related measures. All effects were also significant when excluding people fully in line with the consensus (see the Supplementary Materials for analysis). Excluding them weakens the association of opposition with objective knowledge, as those fully in line with the consensus scored highly on the objective knowledge questions. However, doing so strengthens the association of opposition with subjective knowledge, as the subjective knowledge distribution is J-shaped (see the Supplementary Materials for visualizations). Similar to more extreme opponents, those fully in line with the scientific consensus rated subjective knowledge higher than moderate opposers (but lower than extreme opponents). However, whereas the confidence of those in agreement with the established science is substantiated by their actual knowledge, the confidence of extremists appears to be misplaced.

Fig. 3. Percentages of participants who bet on their knowledge, scored above average on objective knowledge, and their payout, as a function of opposition, with SE bars.

Higher levels of opposition to the scientific consensus were associated with more betting, lower likelihoods of scoring above average on objective knowledge, and earning less in the incentivized task.

Study 4: Attitudes toward a potential COVID-19 vaccine

The COVID-19 pandemic has caused widespread economic damage, sickness, and death (39, 40). Survey responses in the United States have consistently revealed a stubborn minority of the population opposed to getting a vaccine against novel coronavirus infection (36, 41). In study 4, which was conducted in the summer of 2020 (before COVID-19 vaccines were available and before the emergence of more contagious variants), we examine whether the relationships between anti-consensus attitudes and knowledge generalize to U.S. participants’ views on receiving a COVID-19 vaccine. Participants in study 4 answered a battery of general science and issue-specific true-false questions (objective knowledge) and reported their willingness to receive a potential COVID-19 vaccination (opposition) and their self-assessed knowledge of how a COVID-19 vaccine would work (subjective knowledge).

Study 4’s findings replicated the main pattern of results from studies 1 to 3. As opposition to getting a COVID-19 vaccine increases, both general and COVID-specific objective knowledge decrease, and subjective knowledge of how a COVID-19 vaccine would work increases [general objective knowledge model βopposition = −0.96, t(314) = −2.30, P = 0.02; virus subscale model βopposition = −0.36, t(314) = −2.53, P = 0.01; subjective knowledge model βopposition = 0.13, t(314) = 2.90, P = 0.004]. As a result, more opposition to the vaccine is associated with larger (negative) magnitudes of the knowledge difference score [general difference score model βopposition = −0.15, t(314) = −3.77, P < 0.001; virus-specific difference score model βopposition = −0.15, t(314) = −3.88, P < 0.001]. In short, lower willingness to receive a potential COVID-19 vaccine was associated with lower objective knowledge about science and COVID-19 but higher levels of subjective knowledge about how the vaccine would work.

Study 5: Attitudes toward COVID-19 mitigation policies and preventive behaviors

In study 5, we examine support for COVID-19 mitigation policies and self-reported compliance with preventive behaviors recommended by health experts. Data reported here are part of a larger survey on attitudes, behaviors, and information sources about COVID-19, conducted in the fall of 2020 by three researchers who were, at the time, independent of those working on studies 1, 2, and 4.

Study 5 included two different sets of measures of participants’ opposition to the consensus: one measuring how opposed they were to COVID-mitigating policies and one measuring their reported noncompliance with COVID-preventing behaviors. Consistent with the previous studies, as opposition to policies consistent with the scientific consensus increases, objective knowledge decreases [βopposition = −0.55, t(692) = −17.56, P < 0.001] and subjective knowledge increases [βopposition = 0.14, t(692) = 3.62, P < 0.001]. Opposition was also associated with the knowledge difference score [βopposition = −0.51, t(692) = −15.74, P < 0.001]. An identical pattern emerged for noncompliance with preventive behaviors [objective knowledge βnoncompliance = −0.45, t(692) = −13.12, P < 0.001; subjective knowledge βnoncompliance = 0.11, t(692) = 2.8, P = 0.005; knowledge difference score βnoncompliance = −0.41, t(692) = −11.79, P < 0.001].

Study 5 also included a new variable: how much participants think scientists know about COVID-19. To validate the main finding, we split the sample into those who rated their own knowledge higher than scientists’ knowledge (28% of the sample) and those who did not. This dichotomous variable was also highly predictive of responses: Those who rated their own knowledge higher than scientists’ were more opposed to virus mitigation policies [M = 3.66 versus M = 2.66, t(692) = −12, P < 0.001, d = 1.01] and more noncompliant with recommended COVID-mitigating behaviors [M = 3.05 versus M = 2.39, t(692) = −9.08, P < 0.001, d = 0.72] while scoring lower on the objective knowledge measure [M = 0.57 versus M = 0.67, t(692) = 7.74, P < 0.001, d = 0.65]. For robustness, we replicated these patterns in identical models controlling for political identity and in models using a subset scale of the objective knowledge questions that conservatives were not more likely to answer incorrectly. All effects remained significant. Together, these results speak against the possibility that the relation between policy attitudes and objective knowledge on COVID is completely explained by political ideology (see the Supplementary Materials for all political analyses).
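For illustration, the dichotomous-split comparison above amounts to an independent-samples t test plus Cohen’s d; a sketch with made-up numbers (not the study data):

    import numpy as np
    from scipy import stats

    # Hypothetical policy-opposition ratings: respondents who rated their
    # own knowledge above scientists' versus everyone else.
    above = np.array([3.9, 3.2, 4.1, 3.5, 3.8])
    rest = np.array([2.4, 2.9, 2.6, 2.7, 2.5])

    # Independent-samples t test (pooled variance).
    t, p = stats.ttest_ind(above, rest)

    # Cohen's d from the pooled standard deviation.
    n1, n2 = len(above), len(rest)
    pooled_sd = np.sqrt(((n1 - 1) * above.var(ddof=1) +
                         (n2 - 1) * rest.var(ddof=1)) / (n1 + n2 - 2))
    d = (above.mean() - rest.mean()) / pooled_sd
    print(f"t = {t:.2f}, p = {p:.4f}, d = {d:.2f}")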

DISCUSSION

Results from five studies show that the people who disagree most with the scientific consensus know less about the relevant issues, but they think they know more. These results suggest that this phenomenon is fairly general, although the relationships were weaker for some more polarized issues, particularly climate change. It is important to note that we document larger mismatches between subjective and objective knowledge among participants who are more opposed to the scientific consensus. Thus, although broadly consistent with the Dunning-Kruger effect and other research on knowledge miscalibration, our findings represent a pattern of relationships that goes beyond overconfidence among the least knowledgeable. However, the data are correlational, and the normal caveats apply.

A strength of these studies is the consistency of the main result across the overall models in studies 1 to 3 and the specific (but different) instantiations of anti-consensus attitudes about COVID-19 in studies 4 and 5. An additional strength is that study 5 is a conceptual replication of study 4 (and studies 1 to 3 more generally) using different measures and operationalizations of the main constructs, conducted by an initially independent group of researchers (with each group unaware of the research of the other during study development and data collection). Data for the final two studies were also collected approximately 2 months apart, in July and September 2020, respectively. These two collection periods reflect the dynamic nature of the COVID-19 pandemic in the United States, with cases in July trending upward and cases in September flat or trending downward. The consistency of our effects across these 2 months suggests that the pattern of results is fairly robust.

One possible interpretation of these relationships is that the people who appear to be overconfident in their knowledge and extreme in their opposition to the consensus are actually reporting their sense of understanding of a set of incorrect alternative facts, not those of the scientific community. After all, nonscientific explanations and theories tend to be much simpler and less mechanistic than scientific ones. As a result, participants could be reporting higher levels of understanding for what are, in fact, simpler interpretations. However, we believe that several elements of this research speak against this interpretation fully explaining the results. First, the battery of objective knowledge questions is sufficiently broad, simple, and removed (at first glance) from the corresponding scientific issues. For example, not knowing that “the skin is the largest organ in the human body” does not suggest that a participant holds alternative views about how the human body works; it suggests a lack of real knowledge about the body. Nor, we believe, does such a question cue participants to the fact that it is related to vaccination. Second, participants tested using the betting paradigm of study 3 who indicated high subjective knowledge were explicitly indicating that they think they know what scientists know: Their subjective knowledge was assessed in terms of “the agreed-upon knowledge of…scientists.” Thus, the pattern of relationships does not appear to be driven completely by participants’ perceived knowledge of incorrect alternative facts, although this may be part of the story.

Of course, this research also has limitations. The data analyzed here cannot directly speak to why some more polarized issues show weaker associations between different knowledge types and attitudes. The relation between opposition and objective knowledge may cancel out at the high end of the distribution (21, 42), but the case for subjective knowledge is less clear, and there are many potential factors. It is possible, for example, that higher levels of media attention, or even how easy or difficult it is to imagine the harms associated with each scientific issue, could shift how (or whether) people make assessments of their own knowledge. More research is needed before strong conclusions can be drawn on this point.

It is also important to point out that consensus views can emerge around matters of fact (e.g., “Earth is warming”) and around policies that are not purely about facts but rather require cost-benefit analysis informed by facts (e.g., “vaccine benefits outweigh risks”). In this research, we consider both but acknowledge the distinction. We similarly recognize that, of the seven scientific issues studied here (excluding COVID-19), nuclear power has the weakest consensus among scientists. While the consensuses surrounding most of the other issues relate more directly to scientific facts, that of nuclear power (and to some extent vaccination) rests more on a cost-benefit analysis. The majority of American Association for the Advancement of Science (AAAS) scientists (65%) believe that more nuclear power plants should be built, and the Intergovernmental Panel on Climate Change has announced that a sharp increase in nuclear energy production is needed to curb global warming and meet the climate goals outlined in the 2015 Paris Agreement. Last, note that the samples surveyed in this research tended to be slightly more scientifically literate than the average U.S. respondent. To rule out the possibility that the main pattern of relationships was driven solely by respondents’ education levels, we reanalyzed the data controlling for several demographic variables, including education. Doing so did not meaningfully change any of the reported relationships (see the Supplementary Materials for analyses).

The findings from these five studies have several important implications for science communicators and policymakers. Given that the most extreme opponents of the scientific consensus tend to be those who are most overconfident in their knowledge, fact-based educational interventions are less likely to be effective for this audience. For instance, the Ad Council conducted one of the largest public education campaigns in history in an effort to convince people to get the COVID-19 vaccine (43). If individuals who hold strong antivaccine beliefs already think that they know all there is to know about vaccination and COVID-19, then the campaign is unlikely to persuade them.

Instead of interventions focused on objective knowledge alone, these findings suggest that focusing on changing individuals’ perceptions of their own knowledge may be a helpful first step. The challenge then becomes finding appropriate ways to convince anti-consensus individuals that they are not as knowledgeable as they think they are. One option may be to encourage people to try to explain the mechanisms underlying the complex scientific phenomena at issue. This has been shown to reduce subjective knowledge (33, 44) and increase deference to experts (45). Another way to potentially make feelings of ignorance more salient to people is to give them reference points. People feel uncertain about choices they understand less well when considering options together, but not when evaluating them separately (38). This finding suggests that people may be led to realize that they know less about vaccination, for example, than about mechanisms they are more familiar with (from their careers or hobbies, say), if the two are presented in parallel.

Another strategy for bringing opponents in line with the scientific consensus is to ignore individual knowledge and focus instead on experts or perceived experts, gaining the allyship of agents of change. A survey on coronavirus transmission found that the major reason people in Japan report wearing masks is not to mitigate risk or to act altruistically but to conform to a social norm (46), and studies in the United States have found that perceptions of the extent to which one’s social circle engages in preventive behaviors are strongly related to one’s own behaviors (47, 48). People tend to do what they think their community expects them to do (49). If policymakers and science communicators can win over influential thought leaders from the political, religious, or cultural groups with which people holding anti-consensus beliefs identify, then those thought leaders may be able to alter their followers’ views. As these novel ideas are adopted by the community, they can create momentum that prompts change in the long run (50). At a minimum, these agents of change can be brought to the decision-making table, giving them some ownership of outcomes or discouraging them from actively working against consensus goals.

Conforming to the consensus is not always recommended. Plato and Galileo both refused to conform, and this helped them to drive society to higher levels of philosophical and scientific understanding, respectively. However, if opposition to the consensus is driven by an illusion of understanding and if that opposition leads to actions that are dangerous to those who do not share in the illusion, then it is incumbent on society to try to change minds in favor of the scientific consensus.

MATERIALS AND METHODS

Study 1 and 2 methods

Methods, predictions, and analysis plans for studies 1 and 2 were preregistered on AsPredicted.org before data collection. The two studies were nearly identical but with two differences. First, study 1 participants were recruited from Amazon Mechanical Turk via CloudResearch, whereas study 2 participants were recruited from Prolific Academic. Second, the study 1 sample was a convenience sample of U.S.-based participants, whereas study 2’s was a U.S. nationally representative sample based on age, gender, and ethnicity. What follows in this section describes both studies.

Participants (N = 1754 in study 1; N = 1495 in study 2) were randomly assigned to one of seven scientific issue conditions: climate change, GM foods, nuclear power, vaccination, evolution, the Big Bang, and homeopathic medicine. They then answered a one-item attitude measure of opposition to the scientific consensus for their assigned issue [“opposition”; adapted from Fernbach et al. (25); see the Supplementary Materials for wording]. Any participants who indicated complete agreement with the scientific consensus were funneled into an unrelated study after answering demographic questions and did not complete this one. This left final sample sizes of 1137 for study 1 and 996 for study 2.

Immediately after answering the opposition question, all study 2 participants were asked, “What is your political ideology?” (seven-point scale, “very liberal” to “very conservative”) and “How important is religion in your life?” (five-point scale, “not important at all” to “very important”). These measures were recorded to construct religiosity and political polarization scores for each issue, which we discuss in our analysis of the combined data from studies 1 and 2. Participants were then asked how well they understood their assigned issue on a one-to-seven scale (“subjective knowledge”) adapted from Fernbach et al. (25) and based on one developed by Rozenblit and Keil (33). They then answered 34 randomly ordered true-false science questions that we compiled from the National Science Foundation’s Science and Engineering Indicators survey, the AAAS Benchmarks for Science Literacy, and recent work on public understanding of science, or that we developed based on information found on governmental websites such as those of NASA, the U.S. Environmental Protection Agency, and the National Institutes of Health (see the Supplementary Materials for all items and sources). For each of these 34 questions, participants recorded their answers on a seven-point scale ranging from “definitely true” to “definitely false.” Responses were coded from −3 to 3 reflecting degree of correctness and summed for each participant (“objective knowledge”). For robustness, we created binarized versions of both this general objective knowledge scale and each subscale by treating scores of 1 to 3 as correct and scores of 0 to −3 as incorrect (see the Supplementary Materials for results using these binarized measures). We also divided this measure into issue-specific objective knowledge subscales of five questions each (one medical/biological subscale was used for both vaccination and homeopathic medicine; all other issues had their own unique subscales). Last, participants provided demographic information (age, income, gender, and education). They were paid, debriefed, and exited the survey.
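A minimal sketch of this scoring procedure, using simulated responses (the actual items and answer keys are in the Supplementary Materials):

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)

    # Simulated 7-point answers for 5 participants x 34 items, already
    # sign-aligned with the answer key so that 3 = confidently correct and
    # -3 = confidently incorrect.
    responses = pd.DataFrame(rng.integers(-3, 4, size=(5, 34)),
                             columns=[f"q{i + 1}" for i in range(34)])

    # Graded objective knowledge: sum of the -3..3 codes per participant.
    objective = responses.sum(axis=1)

    # Binarized robustness version: 1 to 3 counts as correct, 0 to -3 as
    # incorrect; the score is then the number of correct items.
    objective_binary = (responses >= 1).sum(axis=1)

    # An issue-specific subscale is the same sum over its five items
    # (the column choice here is illustrative only).
    subscale = responses[["q1", "q2", "q3", "q4", "q5"]].sum(axis=1)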

Using the U.S. nationally representative data from study 2, we calculated the correlation of opposition with both political ideology (with higher values indicating more conservatism) and religiosity within each scientific issue condition. We then took the absolute values of these correlations as the issue-specific political polarization and religiosity scores for our preregistered polarization interaction models. Thus, higher numbers indicate more polarization of an issue, regardless of whether conservative/liberal or religious/nonreligious participants are more likely to oppose the consensus. To test whether political polarization and religiosity scores moderate the reported relationships, we ran separate regression models for each of our two main dependent variables (objective and subjective knowledge), with opposition, issue-specific political polarization scores, and a political polarization–by–opposition interaction term as predictors. We then ran the same two interaction models again but swapped out political polarization for issue-specific religiosity scores.
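The sketch below shows one way to compute such issue-level polarization scores and run the moderation test; the data are simulated, the column names are ours, and a plain OLS interaction model stands in for the authors’ mixed models:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated stand-in for study 2: per-participant opposition, ideology
    # (1-7, higher = more conservative), and objective knowledge, by issue.
    rng = np.random.default_rng(0)
    n = 400
    df = pd.DataFrame({
        "issue": rng.choice(["climate", "gm_foods", "vaccination", "nuclear"], size=n),
        "opposition": rng.integers(1, 8, size=n),
        "ideology": rng.integers(1, 8, size=n),
    })
    df["objective"] = 60 - 3 * df["opposition"] + rng.normal(0, 10, size=n)

    # Issue-level political polarization: the absolute within-issue
    # correlation between opposition and political ideology.
    polarization = (df.groupby("issue")
                      .apply(lambda g: g["opposition"].corr(g["ideology"]))
                      .abs()
                      .rename("polarization")
                      .reset_index())
    df = df.merge(polarization, on="issue")

    # Moderation test: a positive interaction means the (negative)
    # opposition slope is attenuated for more polarized issues.
    fit = smf.ols("objective ~ opposition * polarization", data=df).fit()
    print(fit.params["opposition:polarization"])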

Study 3 methods

Participants were 1173 residents of the United States recruited through Amazon Mechanical Turk. The base pay was $0.85, with an opportunity to earn up to an additional $0.50 bonus. The procedure was the same as in studies 1 and 2, with four changes. First, we restricted the study to four issues: GM foods, vaccination, nuclear power, and homeopathic medicine. Second, after answering the subjective knowledge question, participants were given the opportunity to bet on their ability to score above average on the science literacy questions associated with their assigned issue, and they were told that the questions were designed using “factual information from top scientists” at well-known scientific organizations (see the Supplementary Materials for instructions). If they chose to bet and scored higher than the mean on their issue-specific knowledge subscale, they received a $0.50 bonus. If they chose not to bet, they received an automatic $0.25 bonus. Third, rather than a seven-point scale to measure objective knowledge, we used a trinary scale (“true,” “false,” and “I do not know”) and coded wrong and “I do not know” answers as incorrect, as is customary in science literacy research. Last, we did not filter out participants fully in line with the consensus, and we analyzed the data both with and without them included in the dataset.
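The incentive scheme reduces to a simple payout rule; the following is our encoding of the rule described above (a sketch, not the authors’ code):

    # Payout rule in study 3: betting pays a $0.50 bonus only if the
    # participant scores strictly above the mean on their issue-specific
    # knowledge subscale; declining the bet pays a guaranteed $0.25.
    def bonus(bet: bool, subscale_score: float, issue_mean: float) -> float:
        if not bet:
            return 0.25
        return 0.50 if subscale_score > issue_mean else 0.00

    # Betting is worthwhile in expectation only if the participant believes
    # their chance of beating the mean exceeds 50% (0.50 * p > 0.25 requires
    # p > 0.5), which is why choosing to bet indexes knowledge confidence.
    print(bonus(bet=True, subscale_score=4.0, issue_mean=3.2))   # 0.5
    print(bonus(bet=False, subscale_score=2.0, issue_mean=3.2))  # 0.25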

Study 4 methods

We recruited a U.S. nationally representative sample of 501 online participants from Prolific Academic (final N = 316 after seven attention check failures and 178 exclusions based on complete agreement with the scientific consensus) in July 2020. Participants first answered a COVID-19 vaccination willingness question, which read, “COVID-19 is an illness caused by a coronavirus called SARS-CoV-2 that can spread from person to person. If a COVID-19 vaccine were available to you today, would you get the vaccine?” (seven-point scale, “definitely get the vaccine” to “definitely not get the vaccine”). After this attitude question, participants answered the subjective knowledge question, which was worded, “Using the scale you just learned about, how would you rate your understanding of how a COVID-19 vaccine would work?” (seven-point scale, “vague understanding” to “thorough understanding”). The study asked how a vaccine would work (as opposed to how it does work) because, at the time of the study, no vaccine was publicly available in the United States. Participants then answered 23 true-false science literacy questions, including six COVID-specific items in place of the subscale items from studies 1 to 3 (e.g., “True or false? COVID-19 is a variant of the flu.”). The remaining 17 were identical to those from the previous studies. We developed the six COVID-specific items based on facts from official U.S. and international COVID-19 informational websites (see the Supplementary Materials), and participants indicated their answers on a seven-point definitely true to definitely false scale. As with the objective knowledge variables in studies 1 and 2, participants were given scores of −3 to 3 for each true-false item based on degree of correctness, with scores across all items summed within each participant. Last, participants answered demographic questions before completing the survey and receiving payment.

Study 5 methods

A strategic sample was recruited by distributing the survey link through paid Facebook and Instagram ads and by making the survey available to a student research pool at a U.S. research university. The social media ads reached 13,077 users, proportionally distributed across the United States according to population density, and targeted adults 18 to 65+. The student research pool consisted of students 18 to 35 years old who received course credit for their participation. Data collection generated a sample of 695 participants, 452 from social media and 243 from the student subject pool. First, participants answered questions about their exposure to COVID-19, as well as knowledge of deaths among family, friends, communities, and workplaces. Those who had not been diagnosed with COVID-19 were then asked about their perceived risk of contracting it and answered a battery of questions about their perceived knowledge of COVID-19 and preventive measures. They were then asked to complete two instruments, one assessing their COVID-19 knowledge and one assessing their knowledge about its transmission. Following the knowledge questions, participants were asked about their support for mitigation policy measures and their trust in politicians and scientists. The next section recorded their own practices related to COVID-19 prevention and the motivational factors driving these practices. A further section addressed frequency of consumption of, and trust in, sources of information about COVID-19, followed by one addressing fear, worries, and coping. The survey finished by asking participants a series of demographic questions.

We collapsed across 13 policy support questions (α = 0.92) and six preventive behavior questions (α = 0.85) to generate separate measures of opposition to COVID-19 mitigation policies and noncompliance with preventive behaviors, respectively. Policy support questions addressed both major policy decisions that had already been taken during the pandemic, such as “closing K-12 schools and universities” or “imposing severe restrictions to people coming to the United States from overseas,” as well as proposed policy measures to be implemented if the number of cases in the United States were to increase, such as “state-wide mandate requiring people entering from other states with higher infection rates to quarantine for 10 days” or “state-wide mandate requiring people to wear masks all the time when in public.” All policy support items were generated from topics that have received extensive media coverage and were measured on a five-point scale (“strongly against” to “strongly support”). Preventive behavior items were adapted from a previous study on mitigation behaviors (42) and were consistent with the most current recommendations by the World Health Organization and the U.S. Centers for Disease Control and Prevention. A five-point scale (“almost all the time,” “fairly often,” “sometimes,” “not very often,” and “almost never”) was used to estimate compliance with preventive behaviors. Subjective knowledge was measured with one question, “How would you rate your knowledge about COVID-19?” on a sliding scale from 1 = “very poor knowledge” to 10 = “very good knowledge,” with the midpoint labeled “average knowledge.” Perceptions of scientists’ knowledge were measured with one question, “How would you rate (in general) scientists’ knowledge about COVID-19?” using the same scale as above. The objective knowledge measure was created by collapsing across 26 COVID-19 knowledge questions adapted from Rothmund et al. (51) or created by the authors based on the current consensus on transmission mechanisms (see the Supplementary Materials).
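Because both composites are reported with Cronbach’s α, a compact sketch of that reliability computation follows (simulated responses; the real item data would be needed to reproduce the reported α values of 0.92 and 0.85):

    import numpy as np

    # Cronbach's alpha for an item matrix (rows = respondents, columns =
    # scale items): k/(k-1) * (1 - sum of item variances / variance of the
    # summed total score).
    def cronbach_alpha(items: np.ndarray) -> float:
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    # Illustration with simulated 1-5 responses to 13 policy items; random
    # (uncorrelated) data yields alpha near 0, unlike correlated real items.
    rng = np.random.default_rng(1)
    policy_items = rng.integers(1, 6, size=(200, 13)).astype(float)
    print(cronbach_alpha(policy_items))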

Acknowledgments

Funding: Studies 1 to 4 were funded in part by a grant from Humility and Conviction in Public Life, a project of the University of Connecticut sponsored by the John Templeton Foundation. Data collection for study 5 was funded in part by the Center for Excellence in Health Communication to Underserved Populations (CEHCUP) at the University of Kansas School of Journalism and Mass Communications.

Author contributions: Conceptualization: N.L. and P.M.F. Methodology: N.L., P.M.F., N.R., M.V.G., and S.A.S. Data analysis: N.L., N.R., and M.V.G. Visualizations: N.L. Writing—original draft: N.L., P.M.F., and N.R. Writing—review and editing: N.L., P.M.F., N.R., M.V.G., and S.A.S.

Competing interests: The authors declare that they have no competing interests.

Data and materials availability: All data and corresponding syntax for analysis are available via OSF at https://bit.ly/3lH2u6T. All measures and summary analyses needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Human subjects: All research with human subjects was carried out in accordance with institutional, national, and international guidelines and approved by the IRB at University of Colorado Boulder (studies 1 to 4) or University of Kansas (study 5).

Supplementary Materials

This PDF file includes:

Figs. S1 to S10

Tables S1 to S12

REFERENCES AND NOTES

1. C. Funk, L. Rainie, Public and scientists’ views on science and society (Pew Research Center, 2015), pp. 1–18.
3. N. E. Borlaug, Ending world hunger. The promise of biotechnology and the threat of antiscience zealotry. Plant Physiol. 124, 487–490 (2000).
5. IPCC, “Climate Change 2014: Synthesis Report” (IPCC, 2014).
6. M. K. Patel, L. Dumolard, Y. Nedelec, S. V. Sodha, C. Steulet, M. Gacic-Dobo, K. Kretsinger, J. McFarland, P. A. Rota, J. L. Goodson, Progress toward regional measles elimination - worldwide, 2000–2018 (Centers for Disease Control and Prevention, 2019).
7. W. F. Bodmer, The Public Understanding of Science (The Royal Society, 1985), pp. 1–43.
8. A. G. Gross, The roles of rhetoric in the public understanding of science. Public Underst. Sci. 3, 3–23 (1994).
9. N. Allum, P. Sturgis, D. Tabourazi, I. Brunton-Smith, Science knowledge and attitudes across cultures: A meta-analysis. Public Underst. Sci. 17, 35–54 (2008).
10. M. A. Ranney, D. Clark, Climate change conceptual change: Scientific information can transform attitudes. Top. Cogn. Sci. 8, 49–75 (2016).
11. P. Sturgis, N. Allum, Science in society: Re-evaluating the deficit model of public attitudes. Public Underst. Sci. 13, 55–74 (2004).
12. M. C. Nisbet, C. Mooney, Science and society. Framing science. Science 316, 56 (2007).
13. M. J. Simis, H. Madden, M. A. Cacciatore, S. K. Yeo, The lure of rationality: Why does the deficit model persist in science communication? Public Underst. Sci. 25, 400–414 (2016).
14. B. Suldovsky, In science communication, why does the idea of the public deficit always return? Exploring key influences. Public Underst. Sci. 25, 415–426 (2016).
15. D. M. Kahan, H. Jenkins-Smith, D. Braman, Cultural cognition of scientific consensus. J. Risk Res. 14, 147–174 (2011).
16. S. Lewandowsky, G. E. Gignac, K. Oberauer, The role of conspiracist ideation and worldviews in predicting rejection of science. PLOS ONE 8, e75637 (2013).
17. A. M. McCright, R. E. Dunlap, The politicization of climate change and polarization in the American public’s views of global warming, 2001–2010. Sociol. Q. 52, 155–194 (2011).
18. D. M. Kahan, E. Peters, M. Wittlin, P. Slovic, L. L. Ouellette, D. Braman, G. Mandel, The polarizing impact of science literacy and numeracy on perceived climate change risks. Nat. Clim. Change 2, 732–735 (2012).
19. D. M. Kahan, E. Peters, E. C. Dawson, P. Slovic, Motivated numeracy and enlightened self-government. Behav. Public Policy 1, 54–86 (2017).
20. L. C. Hamilton, J. Hartter, M. Lemcke-Stampone, D. W. Moore, T. G. Safford, Tracking public beliefs about anthropogenic climate change. PLOS ONE 10, e0138208 (2015).
21. C. Drummond, B. Fischhoff, Individuals with greater science literacy and education have more polarized beliefs on controversial science topics. Proc. Natl. Acad. Sci. U.S.A. 114, 9587–9592 (2017).
22. E. Persson, D. Andersson, L. Koppel, D. Västfjäll, G. Tinghög, A preregistered replication of motivated numeracy. Cognition 214, 104768 (2021).
23. C. Tobler, V. H. M. Visschers, M. Siegrist, Consumers’ knowledge about climate change. Clim. Change 114, 189–209 (2012).
24. J. Shi, V. H. M. Visschers, M. Siegrist, Public perception of climate change: The importance of knowledge and cultural worldviews. Risk Anal. 35, 2183–2201 (2015).
25. P. M. Fernbach, N. Light, S. E. Scott, Y. Inbar, P. Rozin, Extreme opponents of genetically modified foods know the least but think they know the most. Nat. Hum. Behav. 3, 251–256 (2019).
26. M. Motta, T. Callaghan, S. Sylvester, Knowing less but presuming more: Dunning-Kruger effects and the endorsement of anti-vaccine policy attitudes. Soc. Sci. Med. 211, 274–281 (2018).
27. J.-W. van Prooijen, A. P. M. Krouwel, Overclaiming knowledge predicts anti-establishment voting. Soc. Psychol. Personal. Sci. 11, 356–363 (2020).
28. F. Francisco, S. Lackner, J. Gonçalves-Sá, A little knowledge is a dangerous thing: Excess confidence explains negative attitudes towards science. arXiv:1903.11193 (2021).
29. N. Rabb, P. M. Fernbach, S. A. Sloman, Individual representation in a community of knowledge. Trends Cogn. Sci. 23, 891–902 (2019).
30. S. L. Wood, J. G. Lynch, Prior knowledge and complacency in new product learning. J. Consum. Res. 29, 416–426 (2002).
31. J. W. Alba, J. W. Hutchinson, Knowledge calibration: What consumers know and what they think they know. J. Consum. Res. 27, 123–156 (2000).
32. S. A. Sloman, P. M. Fernbach, The Knowledge Illusion: Why We Never Think Alone (Riverhead Books, 2017).
33. L. Rozenblit, F. Keil, The misunderstood limits of folk science: An illusion of explanatory depth. Cognit. Sci. 26, 521–562 (2002).
34. J. Kruger, D. Dunning, Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. J. Pers. Soc. Psychol. 77, 1121–1134 (1999).
35. M. É. Czeisler, M. A. Tynan, M. E. Howard, S. Honeycutt, E. B. Fulmer, D. P. Kidder, R. Robbins, L. K. Barger, E. R. Facer-Childs, G. Baldwin, S. M. W. Rajaratnam, C. A. Czeisler, Public attitudes, behaviors, and beliefs related to COVID-19, stay-at-home orders, nonessential business closures, and public health guidance — United States, New York City, and Los Angeles, May 5–12, 2020. MMWR Morb. Mortal. Wkly. Rep. 69, 751–758 (2020).
38. C. R. Fox, A. Tversky, Ambiguity aversion and comparative ignorance. Q. J. Econ. 110, 585–603 (1995).
42. D. M. Kahan, Climate-science communication and the measurement problem. Polit. Psychol. 36, 1–43 (2015).
44. P. M. Fernbach, T. Rogers, C. R. Fox, S. A. Sloman, Political extremism is supported by an illusion of understanding. Psychol. Sci. 24, 939–946 (2013).
45. E. A. Meyers, M. H. Turpin, M. Białek, J. A. Fugelsang, D. J. Koehler, Inducing feelings of ignorance makes people more receptive to expert (economist) opinion. Judgm. Decis. Mak. 15, 909–925 (2020).
46. K. Nakayachi, T. Ozaki, Y. Shibata, R. Yokoi, Why do Japanese people use masks against COVID-19, even though masks are unlikely to offer protection from infection? Front. Psychol. 11, 1918 (2020).
47. M. Goldberg, A. Gustafson, E. Maibach, S. van der Linden, M. T. Ballew, P. Bergquist, J. Kotcher, J. R. Marlon, S. Rosenthal, A. Leiserowitz, Social norms motivate COVID-19 preventive behaviors (2020); https://psyarxiv.com/9whp4/.
48. B. Tunçgenç, M. El Zein, J. Sulik, M. Newson, Y. Zhao, G. Dezecache, O. Deroy, Social influence matters: We follow pandemic guidelines most when our close circle does. Br. J. Psychol. 112, 763–780 (2021).
49. R. B. Cialdini, Influence: The Psychology of Persuasion (Collins, 2006).
51. T. Rothmund, F. Farkhari, F. Azevedo, C.-T. Ziemer, Scientific trust, risk assessment, and conspiracy beliefs about COVID-19: Four patterns of consensus and disagreement between scientific experts and the German public. PsyArXiv 10.31234/osf.io/p36w9 (2020).
52. W. R. L. Anderegg, J. W. Prall, J. Harold, S. H. Schneider, Expert credibility in climate change. Proc. Natl. Acad. Sci. U.S.A. 107, 12107–12109 (2010).
55. IPCC, “Global warming of 1.5°C” (IPCC, 2018), pp. 1–26.
56. F. DeStefano, C. S. Price, E. S. Weintraub, Increasing exposure to antibody-stimulating proteins and polysaccharides in vaccines is not associated with risk of autism. J. Pediatr. 163, 561–567 (2013).
57. D. Gust, D. Weber, E. Weintraub, A. Kennedy, F. Soud, A. Burns, Physicians who do and do not recommend children get all vaccinations. J. Health Commun. 13, 573–582 (2008).
58. National Health and Medical Research Council, “NHMRC Information Paper: Evidence on the effectiveness of homeopathy for treating health conditions” (Australian Government–National Health and Medical Research Council, 2015).
61. AAAS, “Statement on the teaching of evolution” (AAAS, 2006).
62. L. Rainie, C. Funk, Elaborating on the views of AAAS scientists, issue by issue (Pew Research Center, 2015).
63. L. Matrajt, T. Leung, Evaluating the effectiveness of social distancing interventions to delay or flatten the epidemic curve of coronavirus disease. Emerg. Infect. Dis. 26, 1740–1748 (2020).