Fear and Loathing in the American Electorate: Part 7
This is Part 7 of an eight-part exploration of the 2020 American National Election Study, focusing on the motivations and sources of information driving the American electorate, especially the Republican/conservative voters who cast their votes for Donald Trump in 2020. Here are quick links to the other seven parts.
- Part 1: The most important question facing American democracy today
- Part 2: Demographic predictors of 2020 vote
- Part 3: Media consumption — Where do voters get their information and what difference does it make?
- Part 4: Seven “deplorable” beliefs that predict 2020 Presidential vote
- Part 5: The role of conspiracies and misinformation in the 2020 election
- Part 6: Partisan animosity
- Part 8: Republicans have crossed the Rubicon — Conclusions and implications
We have identified a large array of variables, ranging across demographic circumstances, media-consumption habits, attitudes toward others, beliefs about what is true and not true, and partisan feelings and animosities, that all seem to be pretty robust predictors of how people voted for President in 2020. But they are also robust predictors of each other.
Structural equation modeling has been called part of a “quiet methodological revolution” in statistics because it shifts attention away from significance testing of individual effects to the evaluation of whole statistical models.¹ In SEM, the analyst specifies a comprehensive model of causal effects among a system of variables and then tests the degree to which that model fits the available data. The purpose of SEM is to test a theory by specifying a model that generates predictions from that theory. If the model successfully fits the data (criteria for assessing fit will be discussed below) we can say the theory has been supported. Different models can be compared in terms of their degree of fit with the data. In this way, the analyst can find the most plausible model for explaining the empirical relationships observed. Based on that model, reasonable conclusions can be drawn about how the whole system operates and recommendations can be made about how to influence or change the system to achieve different results.
Since the articles in this series are aimed at people interested in politics, not statistics, I will avoid further explanation of the statistical modeling techniques deployed here. Readers who would like to learn more about SEM and path analysis can consult a number of handy references.² Readers who would like to look at any of the R code I’ve used in this series of articles can contact me directly.
Variables and hypotheses
Now is a good time to recall the question with which we began this report:
Why do voters in GOP-led states continue to elect Republicans, when those officials have failed, often over decades, to provide the level or quality of public services enjoyed by citizens in Democratic-led states?
To construct a more comprehensive answer to this question, we have examined five clusters of potential reasons for this “irrational” behavior:
- Demographics: your vote is motivated by your current position in life.
- Media consumption: your vote is motivated by what you watch, read, and hear.
- “Deplorable” beliefs and values: your vote is motivated by your deepest prejudices and grievances.
- Conspiracy thinking and acceptance of misinformation: your vote is motivated by false information fed to you by media and leaders.
- Partisan animosity: your vote is motivated by your animosity toward the other Party and its supporters.
In terms of hypotheses, here is what we have learned so far.
Gender, race, age, education, and local identity all impacted Presidential vote in 2020, but the effects were fairly modest and tended to be displaced by belief and attitude variables. We can see this relatively modest impact of demographics on political choices in 2020 by looking at a weighted correlation matrix of demographics vs. political affiliations and selected belief variables in the ANES surveys.
While our partisan animosity variable, cbias_scaled, correlates strongly with party affiliation and liberal-conservative identity, and somewhat less strongly with two variables that measure beliefs rather than circumstances (local identity and religion), its correlations with the classic demographic categories are much weaker — knowing a person’s age, gender, education, or income level provides very little insight into how they voted. It appears that conflicting beliefs and attitudes cut across these categories in many ways, making them unreliable indicators of political intent or action in the 2020 Presidential election cycle.
Putting aside demographics, we are left with four clusters of variables: media exposure, deplorable beliefs, conspiracy thinking, and partisan polarization. How might these best be fit together to explain voting behavior and candidate bias? A weighted correlation matrix is a good place to start. Three of the four scale variables are highly correlated with votefor and cbias. While the media bias variable mbias is less influential, its correlations with the other variables are all statistically significant at p < .001.
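The correlation matrices in this series are all weighted, meaning each respondent counts in proportion to their ANES survey weight so that the sample mirrors the population. The analysis for this series was done in R, but the arithmetic behind a weighted correlation is simple enough to sketch in a few lines of Python. The variable names and toy data below are purely illustrative, not drawn from the ANES file.

```python
# Weighted Pearson correlation: each respondent contributes in
# proportion to their survey weight w_i. With equal weights this
# reduces to the ordinary Pearson r.
# Variable names and data here are illustrative, not ANES values.

def wmean(x, w):
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

def wcov(x, y, w):
    mx, my = wmean(x, w), wmean(y, w)
    return sum(wi * (xi - mx) * (yi - my)
               for wi, xi, yi in zip(w, x, y)) / sum(w)

def wcorr(x, y, w):
    return wcov(x, y, w) / (wcov(x, x, w) ** 0.5 * wcov(y, y, w) ** 0.5)

# Toy data with equal weights (so this equals the ordinary Pearson r).
age     = [25, 40, 55, 70, 35]
cbias   = [0.1, 0.4, 0.5, 0.9, 0.2]
weights = [1.0, 1.0, 1.0, 1.0, 1.0]
print(round(wcorr(age, cbias, weights), 3))
```

Building the full matrix is then just a matter of applying wcorr to every pair of columns, with the same weight vector throughout.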
Path models of the 2020 Presidential election
Fitting path models is more an art than a science. It tends to be an exercise in balancing fit and complexity. What tends to be most useful theoretically is the simplest model that fits the data adequately. Adding more complexity to the model can always improve its fit, up to the point when it simply becomes a perfect description of the data. We begin with a relatively simple model — we’ll call it Model 1. Here is Model 1 in path-diagram form:
Here is how this model can be interpreted:
- We treat deplorable beliefs and values as exogenous, which means they are not caused by any of the other variables in the model. Rather, we model them as deeply held values that emerge from and are reinforced by years of interactions with family, friends, church, and community.
- Deplorable beliefs have three direct impacts: (1) they drive media bias, as people seek out media that validates and celebrates their beliefs; (2) they encourage conspiracy thinking because conspiracies are often necessary ingredients for (falsely) validating deplorable beliefs; (3) they impact partisanship, as each Party makes clear which values and conspiracies it supports and welcomes among its members.
- Media bias also directly fuels conspiracy thinking in this model, on top of effects of deplorable values alone.
- Partisanship, in turn, is modeled as a function of deplorable values and conspiracy thinking. The model assumes that both deplorables and conspiracists are drawn to the Republican Party because it defends and promotes their values and false beliefs.
- Candidate bias, finally, is modeled as a direct effect of partisanship alone. Media bias, conspiracy thinking, and deplorable beliefs are all modeled as indirect sources of candidate bias, operating exclusively via their impact on partisanship.
- Candidate bias is the sole source of Presidential vote in this model.
The numbers on the lines in Model 1 are standardized parameters that represent the relative size of the relationship depicted by each arrow. Statistically, they are partial correlation coefficients measuring the degree to which one variable’s values are statistically “explained” by another variable’s values, after the values of all other variables in the model are taken into account (aka held constant). Because these parameters are standardized, they can be compared to each other as measures of relative magnitude of effect. The path modeling software also calculates standard errors, confidence intervals, and p-values for each parameter. Given the large number of cases in our sample (for this model, weighted n = 4,690) it is not surprising that all the parameters in this model are highly significant at p < .001.
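To make the idea of a standardized, “held constant” coefficient concrete, consider the simplest non-trivial case: one outcome with two correlated predictors. When all variables are standardized, the path coefficients have a closed-form solution in terms of the zero-order correlations. The correlations in this sketch are hypothetical numbers for illustration, not values from the ANES data.

```python
# Standardized coefficients for a two-predictor path y <- x1 + x2,
# computed from zero-order correlations. With standardized variables
# the normal equations reduce to the classic closed form below:
# each beta is the predictor's correlation with y, net of the part
# already carried by the other (correlated) predictor.
# The input correlations are hypothetical, not from the ANES data.

def std_betas(r_y1, r_y2, r_12):
    denom = 1 - r_12 ** 2
    b1 = (r_y1 - r_y2 * r_12) / denom
    b2 = (r_y2 - r_y1 * r_12) / denom
    return b1, b2

b1, b2 = std_betas(r_y1=0.60, r_y2=0.50, r_12=0.40)
print(round(b1, 3), round(b2, 3))
```

Note how each beta is smaller than its raw correlation with y: that shrinkage is exactly the “after all other variables are taken into account” adjustment described above.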
Many metrics have been devised to test the overall fit of SEM models. There is no single test that everyone agrees is the best, so SEM and path models are usually measured against several test statistics. I briefly describe the five most commonly used metrics, along with their agreed-upon thresholds for identifying a good fit, in an endnote.³ Each of these overall fit metrics emphasizes a slightly different aspect of a model’s fit. Since the metrics do not always agree and their suggested cutoff points are somewhat arbitrary, standard practice in the academic literature is to report all four or five metrics. If a majority fall within or very close to their “acceptable” range, the model is generally considered a good fit to the underlying data.
Although Model 1 is graphically easy to read and the coefficients are all statistically significant, its overall fit is poor based on two of our selected fit statistics (CFI=0.926 [good], TLI=0.862 [bad], RMSEA=0.207 [bad], SRMR=0.083 [borderline]). Poor TLI and RMSEA fits usually indicate that additional significant relationships need to be accounted for by the model.
To improve the fit of Model 1, two additional relationships must be specified. First, we need to add an arrow from deplorable to cbias_scaled, meaning that additional variance in candidate bias can be accounted for by deplorable beliefs after the effects of partisanship and conspiracy thinking are taken into account. Second, we need to add an arrow from partisan to mbias, meaning that partisanship also has a direct effect on media bias beyond its already significant link to conspiracist thinking. This model, Model 2, has a closer fit with the underlying data, meeting or closely approaching the threshold level for all four fit statistics (CFI=0.992, TLI=0.975, RMSEA=0.088 [borderline], SRMR=0.018).
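The chi-square-based fit indices reported for these models can all be computed from two quantities: the chi-square and degrees of freedom of the fitted model, and those of the baseline (null) model. Here is a sketch of the standard formulas in Python; the chi-square inputs are hypothetical, chosen only to show where “good” and “bad” values come from (the weighted n of 4,690 is the one figure taken from the text).

```python
import math

# Standard formulas for three chi-square-based fit indices.
# chi2_m/df_m describe the fitted model; chi2_b/df_b describe the
# baseline (null) model. Inputs below are illustrative values,
# not the article's actual model chi-squares.

def cfi(chi2_m, df_m, chi2_b, df_b):
    num = max(chi2_m - df_m, 0.0)
    den = max(chi2_b - df_b, chi2_m - df_m, 0.0)
    return 1.0 - num / den

def tli(chi2_m, df_m, chi2_b, df_b):
    # Like CFI, but the df ratios penalize model complexity.
    return (chi2_b / df_b - chi2_m / df_m) / (chi2_b / df_b - 1.0)

def rmsea(chi2_m, df_m, n):
    return math.sqrt(max(chi2_m - df_m, 0.0) / (df_m * (n - 1)))

chi2_m, df_m = 120.0, 3      # hypothetical fitted-model chi-square
chi2_b, df_b = 4000.0, 10    # hypothetical baseline-model chi-square
n = 4690                     # weighted sample size reported in the text

print(round(cfi(chi2_m, df_m, chi2_b, df_b), 3))
print(round(tli(chi2_m, df_m, chi2_b, df_b), 3))
print(round(rmsea(chi2_m, df_m, n), 3))
```

The example also shows why the indices can disagree: with these inputs CFI clears its 0.90 threshold while TLI and RMSEA sit at or past their cutoffs, much like Model 1 above.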
Results of this path model-building exercise can be summarized as follows (all standardized parameters are statistically significant at p < .001):
- Deplorable remains an exogenous variable in Model 2. It has a large effect on partisan (.75) and conspiracist (.49), along with moderate effects on mbias (.22) and cbias_scaled (.17).
- Partisan also has a moderate effect on conspiracist (.22) and a significant impact on media bias (.38).
- Media bias appears to be equally driven by deplorable beliefs (.22) and strength of partisan identity (.38).
- Candidate bias is strongly influenced by partisanship (.58) and moderately influenced by conspiracist thinking (.22) and deplorable beliefs (.17).
- Candidate bias, in turn, is modeled as the sole direct source of Presidential vote. This path produces the largest standardized parameter in the model (.81).
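The coefficients listed above also imply indirect and total effects: the effect carried along any path is the product of the standardized coefficients on that path, and the total effect of one variable on another is the sum over all such paths. The sketch below performs that path tracing using only the Model 2 paths and values quoted above, treated as exact (i.e., ignoring rounding in the reported parameters).

```python
# Total effect of one variable on another in a path model:
# sum over all directed paths of the product of standardized
# coefficients along the path. Edges are the Model 2 parameters
# quoted above, treated as exact despite rounding.

edges = {
    "deplorable":   {"partisan": 0.75, "conspiracist": 0.49,
                     "mbias": 0.22, "cbias": 0.17},
    "partisan":     {"conspiracist": 0.22, "mbias": 0.38, "cbias": 0.58},
    "conspiracist": {"cbias": 0.22},
    "cbias":        {"vote": 0.81},
}

def total_effect(src, dst):
    if src == dst:
        return 1.0
    # Recurse over every outgoing edge; dead ends contribute 0.
    return sum(w * total_effect(mid, dst)
               for mid, w in edges.get(src, {}).items())

print(round(total_effect("deplorable", "cbias"), 3))  # direct + indirect
print(round(total_effect("deplorable", "vote"), 3))
```

Run on these numbers, the total effect of deplorable beliefs on candidate bias (roughly .75, combining the direct path with the routes through partisanship and conspiracism) dwarfs the direct path of .17 alone, which is the quantitative core of the summary that follows.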
Together, deplorable beliefs, partisan identity, belief in conspiracies and misinformation, and a biased media consumption diet, all reinforce each other to produce the climate of fear and loathing that characterizes the dominant conservative political mindset in America today.
If this is the state of political thinking in the American public today, where can the country go from here? What levers of change can be pulled, if any, to break out of this self-reinforcing cycle of lies, anger, and relentless animosity?
Continue to Part 8: Republicans have crossed the Rubicon — Conclusions and implications
- Kline, Rex B. Principles and Practice of Structural Equation Modeling. Guilford Publications, 2015, p. 17; Rodgers, J. L. (2016). “The epistemology of mathematical and statistical modeling: a quiet methodological revolution,” Annual Meeting of the Society of Multivariate Experimental Psychology, Oct. 2005.
- In addition to Rex Kline’s classic referenced above, I found a number of online resources that were extremely helpful, including web pages and YouTube videos authored by Patrick Sturgis, Michael Hallquist, UCLA Statistics, David Caughlin, Sacha Epskamp, and many others.
- Here are the five most commonly reported overall test statistics for SEM and path models:
- Chi-square p-value. Ideally this should be non-significant, indicating a model that does not deviate significantly from its underlying source data. However, the chi-square statistic is extremely sensitive to sample size and is only a reasonable measure of fit when sample sizes are between 75 and 200. For the larger samples in the ANES dataset, this value is always highly significant, so it is not a good measure of fit for this dataset.
- Comparative Fit Index (CFI). Also based on the chi-square test, this metric compares the model’s fit to the worst possible model (aka the null model). A value greater than 0.90 is considered a good fit.
- Tucker Lewis Index (TLI). This metric is similar to CFI but adds a penalty for model complexity. It is based on the average size of the correlations in the data. A value of 0.95 or greater is considered a good fit.
- Root Mean Square Error of Approximation (RMSEA). This metric measures model fit as a function of chi-square value, degrees of freedom, and sample size. It is most informative for models based on large samples. A value less than 0.08 is considered a good fit.
- Standardized Root Mean Square Residual (SRMR). This metric is defined as the standardized difference between the observed correlations in the data and the predicted correlations in the model. A value less than 0.08 is considered a good fit.