Partisans Process Policy-Based and Identity-Based Messages using Dissociable Neural Systems

Cerebral Cortex (forthcoming) with Nir Jacoby, Jacob Pearl, Alexandra Paul, Emily B. Falk, Emile G. Bruneau, and Kevin N. Ochsner

Political partisanship is often conceived as a lens through which people view politics. Behavioral research has distinguished two types of “partisan lenses,” policy-based and identity-based, that may influence people’s perception of political events. Little is known, however, about the mechanisms through which partisan discourse that appeals to policy beliefs or targets partisan identities operates within individuals. We addressed this question by collecting neuroimaging data while participants watched videos of speakers expressing partisan views. A “partisan lens effect” was identified as the difference in neural synchrony between each participant’s brain response and that of their partisan ingroup vs. outgroup. When participants processed policy-based messaging, a partisan lens effect was observed in brain regions associated with socio-political reasoning and affective responding. When they processed negative identity-based attacks, a partisan lens effect was observed in brain regions associated with mentalizing and affective responding. These data suggest that the processing of political discourse appealing to different forms of partisanship is supported by related but distinguishable neural, and therefore psychological, mechanisms, which may have implications for how we characterize partisanship and ameliorate its deleterious impacts.
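For readers who want a concrete picture of an ingroup-versus-outgroup synchrony contrast of the kind described above, here is a minimal illustrative sketch. The array shapes, the leave-one-out averaging, and the use of Pearson correlation are assumptions made for illustration; this is not the published analysis pipeline.

```python
# Hypothetical illustration of an ingroup-vs-outgroup neural synchrony contrast.
# Assumes `data` has shape (n_subjects, n_timepoints), holding one brain
# region's time series per participant, and `group` labels each participant's
# party. The leave-one-out averaging scheme is an assumption for illustration.
import numpy as np

def partisan_lens_effect(data, group):
    data = np.asarray(data, dtype=float)
    group = np.asarray(group)
    effects = np.empty(len(data))
    for i, ts in enumerate(data):
        ingroup = (group == group[i])
        ingroup[i] = False                      # leave the participant out
        outgroup = (group != group[i])
        r_in = np.corrcoef(ts, data[ingroup].mean(axis=0))[0, 1]
        r_out = np.corrcoef(ts, data[outgroup].mean(axis=0))[0, 1]
        effects[i] = r_in - r_out               # ingroup minus outgroup synchrony
    return effects

# Toy usage: 6 participants, 200 timepoints, two parties
rng = np.random.default_rng(0)
toy = rng.standard_normal((6, 200))
labels = np.array(["D", "D", "D", "R", "R", "R"])
print(partisan_lens_effect(toy, labels))
```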

How rational inference about authority debunking can curtail, sustain or spread belief polarization

PNAS Nexus (forthcoming) with Setayesh Radkani and Rebecca Saxe

In polarized societies, divided subgroups of people hold different perspectives on a range of topics. Aiming to reduce polarization, authorities may use debunking to lend support to one perspective over another. Debunking by authorities gives all observers shared information, which could reduce disagreement. In practice, however, debunking may have no effect or could even contribute to further polarization of beliefs. We developed a cognitively inspired model of observers’ rational inferences from an authority’s debunking. After observing each debunking attempt, simulated observers simultaneously update their beliefs about the perspective underlying the debunked claims and about the authority’s motives, using an intuitive causal model of the authority’s decision-making process. We systematically varied the observers’ prior beliefs and uncertainty. Simulations generated a range of outcomes, from belief convergence (less common) to persistent divergence (more common). In many simulations, observers who initially held shared beliefs about the authority later acquired polarized beliefs about the authority’s biases and commitment to truth. These polarized beliefs constrained the authority’s influence on new topics, making it possible for belief polarization to spread. We discuss the implications of the model with respect to beliefs about elections.
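The following toy sketch illustrates the kind of joint inference the abstract describes: an observer holds a joint belief over whether a debunked claim is true and whether the authority is committed to the truth or biased, and updates that belief by Bayes’ rule after each debunking attempt. The hypothesis labels and likelihood numbers are invented for illustration and do not reproduce the paper’s causal model of the authority’s decisions.

```python
# Toy joint Bayesian update over (claim truth, authority motive).
# All probabilities below are assumptions for illustration only.
import itertools

claim_states = ["claim_true", "claim_false"]
authority_states = ["truth_seeking", "biased"]

# Prior: independent, moderately uncertain beliefs over the joint states
prior = {(c, a): 0.5 * 0.5 for c, a in itertools.product(claim_states, authority_states)}

# Assumed probability that the authority debunks the claim in each joint state
likelihood_debunk = {
    ("claim_true",  "truth_seeking"): 0.1,   # rarely debunks true claims
    ("claim_false", "truth_seeking"): 0.9,   # usually debunks false claims
    ("claim_true",  "biased"):        0.7,   # debunks disliked claims regardless of truth
    ("claim_false", "biased"):        0.7,
}

def update(belief, debunked=True):
    """One Bayesian update after observing whether the authority debunked."""
    posterior = {}
    for state, p in belief.items():
        like = likelihood_debunk[state] if debunked else 1 - likelihood_debunk[state]
        posterior[state] = p * like
    z = sum(posterior.values())
    return {s: p / z for s, p in posterior.items()}

belief = prior
for _ in range(3):                      # three successive debunking attempts
    belief = update(belief, debunked=True)
for state, p in belief.items():
    print(state, round(p, 3))
```

Depending on the priors chosen, repeated debunking can shift belief toward the claim being false, toward the authority being biased, or both, which is the basic dynamic behind the divergent outcomes the abstract reports.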