Olsen, Asmus Leth (2017). Compared to What? How Social and Historical Reference Points Affect Citizens’ Performance Evaluations. Journal of Public Administration Research and Theory (early view).

ABSTRACT: The question of what is “good” or “poor” performance is difficult to answer without applying a reference point: a standard for comparison. Citizens’ evaluation of performance information will therefore tend to be guided by reference points. We test how reference points alter citizens’ evaluation of organizational performance. Specifically, drawing on Herbert Simon, we test how citizens use historical (internal) and social (external) reference points when making relative comparisons: How important is performance relative to past performance? And how important is performance relative to the performance of other organizations? Two experiments are embedded within a large nationally representative sample of citizens (n=3,443). The experiments assign historical and social reference points for performance data on education and unemployment to citizens. We find that citizens’ performance evaluation is fundamentally a relative process. Interestingly, we show that social reference points are almost twice as important to citizens’ evaluations as historical reference points. We find mixed evidence of a negativity bias in citizens’ relative evaluations. The strong social reference point effects have implications for studying citizens’ response to performance and how managers can frame and manipulate external performance data.

Olsen, Asmus Leth (2017). Human Interest or Hard Numbers? Experiments on Citizens' Selection, Exposure, and Recall of Performance Information. Public Administration Review 77 (3): 408–420.

ABSTRACT: The abundance of quantitative performance information has motivated multiple studies about how citizens make sense of “hard” data. However, research in psychology emphasizes that vivid episodic information (e.g., case stories) often leaves a greater mark on citizens than pallid statistics. We test this contradiction with multiple experiments embedded in a large nationally representative sample of citizens (n=1013). The results stress three striking differences between quantitative and episodic data: Citizens have strong preferences for statistical data if asked to evaluate an organization. However, episodic information has a stronger impact on citizens’ actual evaluations and is more emotionally engaging than statistics. Finally, if asked to immediately recall recent performance information about public services, citizens report more nuanced and elaborate information about personalized stories and experiences than about statistics and numbers. Overall, the results raise questions about the ability of hard performance data to dominate and crowd out episodic performance information.

Olsen, Asmus Leth (2015). Negative Performance Information Causes Asymmetrical Evaluations and Elicits Strong Responsibility Attributions. 111th Annual Meeting of the American Political Science Association, San Francisco, September 2015. Revise and resubmit at Journal of Public Administration Research and Theory.

ABSTRACT: The negativity bias challenges our understanding of how both citizens and managers respond to performance information. It posits that negative information has a stronger impact than positive information of the same magnitude. However, studies of the bias in public administration have found mixed results. We conduct the first set of experiments with the sole purpose of estimating the negativity bias in citizens’ response to performance information. Moreover, we extend the analysis beyond evaluations and outline a theory and test of how negative performance information also elicits stronger responsibility attributions among citizens: a search for the underlying cause of poor performance. We investigate the negativity bias with three experiments embedded in two large nationally representative samples of citizens (n=1,559 and n=1,013). Within a conservative equivalence framing framework, we find very strong evidence of a large negativity bias. A direct replication confirms this conclusion. In addition, we find that citizens are much more likely to spontaneously engage in attributions about responsibility after being exposed to negative performance data. The findings have implications for our understanding of both citizens’ and managers’ responses to performance information and provide a strong empirical connection between negativity bias and blame avoidance.

Olsen, Asmus Leth (2015). Citizen (Dis)satisfaction: An Experimental Equivalence Framing Study. Public Administration Review, 75 (3), 469–478.

ABSTRACT: This article introduces the importance of equivalence framing for understanding how satisfaction measures affect citizens’ evaluation of public services. Does a 90 percent satisfaction rate have a different effect than a logically equivalent 10 percent dissatisfaction rate? Two experiments were conducted on citizens’ evaluations of hospital services in a large, nationally representative sample of Danish citizens. Both experiments found that exposing citizens to a patient dissatisfaction measure led to more negative views of public service than exposing them to a logically equivalent satisfaction metric. There is some support for part of the shift in evaluations being caused by a negativity bias: dissatisfaction has a larger negative impact than satisfaction has a positive impact. Both professional experience at a hospital and prior exposure to satisfaction rates reduced the negative response to dissatisfaction rates. The results call for further study of equivalence framing of performance information.

Olsen, Asmus Leth (2016). The Numerical Psychology of Performance Information – Implications for Citizens, Managers, and Policy Makers. Public Performance & Management Review 39(1), 100–115.

ABSTRACT: Performance information attaches numbers to the inputs, outputs, and outcomes of public services. Numbers are what separate performance information from other sources of information about public sector performance. In cognitive and social psychology, there are vast amounts of research on the profound effects of numbers on human attitudes and behavior. However, these insights are largely unexplored by scholars of performance information. The article introduces the importance of numerical psychology for the study of performance information. It is pointed out how numerical research both challenges existing beliefs about performance information and allows for the formulation of new hypotheses. These insights are relevant to all levels of study, including citizens, managers, and policy makers.

Hansen, Kasper Møller, Olsen, Asmus Leth and Bech, M. (2015). Cross-National Yardstick Comparisons: A Choice Experiment on a Forgotten Voter Heuristic. Political Behavior, 37(4), 768–791.

ABSTRACT: Comparing performance between countries is both a theoretically and intuitively useful yardstick for voters. Cross-national comparisons provide voters with heuristics that are less cognitively demanding, less ambiguous, and less uncertain than solely national, absolute performance measurements. We test this proposition using a unique choice experiment embedded in the 2011 Danish National Election Study. This design allows us to contrast cross-national comparisons with more traditional national sociotropic and egotropic concerns. The findings suggest that voters are strongly influenced by cross-national performance comparisons—even when accounting for classic national sociotropic and egotropic items. Specifically, voters respond strongly to how the prospective wealth of Denmark evolves relative to neighboring Sweden. Interestingly, voters are more negative in their response to cross-national losses than they are positive in their response to cross-national gains—indicating a negativity bias in voters’ preferences.

Olsen, Asmus Leth (2015). The Appropriate Response to Performance Information by Citizens: A Large-scale Experiment with Local Politicians. Working paper.

ABSTRACT: How do politicians view the external response from citizens to performance information? We answer this question by testing how politicians expect citizens to respond to relative performance, social and historical performance comparisons, good and bad performance, and the assignment of responsibility for performance. This constitutes the first study of how politicians view the external role of performance information. We draw on two unique survey vignette experiments conducted with a large sample of Danish local councilors (n=814). The results show that councilors expect citizens to care a great deal about both relative social and historical performance. Interestingly, the results also show very divergent asymmetries in councilors’ expectations of how citizens respond to relatively good and relatively bad performance. Specifically, councilors expect a negativity bias if blame is assigned to others and a positivity bias if praise is assigned to the council.

Olsen, Asmus Leth (2015). Naming Bad Performance: Does Naming and Shaming Work? Midwest Political Science Association Meetings, 2012 and 2013. R&R.

ABSTRACT: If poor performance is disclosed, poorly performing organizations are likely to face reputational damage and therefore improve their performance. This simple hypothesis is tested with performance information on gender equality among Danish public organizations. The initiative aimed at measuring intra-organizational gender policies and actual gender representation at different organizational levels. A regression discontinuity design is proposed for estimating the causal effect of the disclosed performance on subsequent improvements. The analysis finds no or small effects of performance disclosure on subsequent improvements. Counter to expectations, some results even indicate that feedback about poor performance leads to further deterioration in performance. The results imply that performance information in low-salience policy areas can be ignored by the organizations under scrutiny. The effect of disclosing performance information is thus likely to be reliant on media coverage and public dissemination. Disclosing performance information is by itself no quick fix for improving organizational performance.