
Associate Professor of Economics
University of Pennsylvania

Research Fellow in Industrial Organization
Centre for Economic Policy Research (CEPR)

Member, Inequality: Measurement, Interpretation, and Policy Network
Human Capital & Economic Opportunity (HCEO) Global Working Group

Contact Information
Phone: (908) 432-7889
Email: abohren@gmail.com

I study various topics in microeconomics with a focus on information, learning, and discrimination. My research explores questions related to social learning, learning with model misspecification, biases in belief formation, discrimination, and moral hazard. My work on discrimination and belief biases has both theoretical and empirical components, and builds on my theoretical work on misspecified learning.

Model Misspecification and Biased Beliefs

Beliefs are central to many economic decisions. This component of my research agenda develops methods to model and measure cognitive biases and incorrect models of the informational environment. It also provides empirical support for how cognitive mechanisms (e.g., limited attention, memory, processing capacity) impact belief formation and generate biases.

  • Informational Herding with Model Misspecification, Journal of Economic Theory, May 2016, 163:222-247.
  • [abstract] [publication] [working paper]
    Consider a social learning setting where agents misunderstand the correlation between other agents' actions---they cannot distinguish between new and redundant information. When agents sufficiently overestimate the amount of new information, beliefs about the state become entrenched and incorrect learning occurs with positive probability; when agents sufficiently overestimate the amount of redundant information, beliefs fail to converge and learning is cyclical.
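To illustrate the entrenchment mechanism, here is a minimal sketch (a hypothetical simulation, not the paper's model; the binary-state setup, signal precision, and tie-breaking rule are assumptions made for illustration) of sequential learning in which agents neglect redundancy, treating every predecessor's action as an independent private signal:

```python
import random

def simulate(n_agents=50, q=0.7, seed=0):
    """Sequential social learning with redundancy neglect: each agent
    treats every predecessor's action as an independent private signal
    of precision q, double-counting information that is in fact redundant."""
    rng = random.Random(seed)
    theta = 1                                   # true state
    actions = []
    for _ in range(n_agents):
        # private signal matches the state with probability q
        private = theta if rng.random() < q else 1 - theta
        # misspecified public belief: count all past actions as fresh signals
        net = sum(1 if a == 1 else -1 for a in actions)
        net += 1 if private == 1 else -1
        actions.append(1 if net > 0 else 0 if net < 0 else private)
    return actions

# share of runs in which beliefs become entrenched on the wrong action
wrong = sum(simulate(seed=s)[-1] != 1 for s in range(500)) / 500
```

Once a few early actions agree, the double-counted "public signal" overwhelms any private signal, so beliefs lock in---and in a positive fraction of runs they lock in on the wrong action, mirroring the incorrect-learning result.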

  • Learning with Heterogeneous Misspecified Models: Characterization and Robustness, with Daniel N. Hauser, Econometrica, November 2021, 89: 3025-3077.
  • [abstract] [publication] [working paper]
    This paper provides a general characterization of how model misspecification affects long-run learning. A simple criterion characterizes learning outcomes---depending on the misspecified model, learning may be correct, incorrect, or beliefs may not converge, and agents may asymptotically disagree despite observing the same sequence of information. It also establishes that the correctly specified model is robust---agents with approximately correct models almost surely learn the true state. The misspecified model of correlation in Bohren (2016) is one form of misspecification captured by our framework. In Bohren Imas Rosenberg (2019), we apply this framework to a labor market setting and show how the dynamic patterns of discrimination between two groups of workers can identify whether the discrimination stems from accurate or misspecified beliefs about worker ability.

  • Optimal Lending Contracts with Retrospective and Prospective Bias, with Daniel N. Hauser, AEA Papers & Proceedings, May 2023, 113: 665-72.
  • [abstract] [publication] [working paper] [online appendix]
    This paper considers an entrepreneur who borrows to invest in a project and learns about project quality with a misspecified model. Using the insight that a misspecified model can be decomposed into two key classes of distortions---prospective biases and retrospective biases (Bohren Hauser (2024))---we explore how each type of bias impacts the structure of optimal lending contracts.

  • Misspecified Models in Learning and Games, with Daniel N. Hauser, Forthcoming at Annual Review of Economics.
  • [abstract] [working paper]
    In this review, we present an overview of the literature on model misspecification in economics and discuss how it can be used to study the impact of information biases. We focus on misspecified learning in active and social learning settings and misspecified strategic interaction. In closing, we discuss applications of the framework as well as shortcomings and potential avenues for future research.

  • The Behavioral Foundations of Model Misspecification, with Daniel N. Hauser, Revision Requested at Econometrica.
  • [abstract] [working paper]
    This paper links two approaches to biased interpretations of information: non-Bayesian updating and model misspecification. We show that misspecified models can be decomposed into an updating rule and belief forecast, derive necessary and sufficient conditions for an updating rule and belief forecast to have a misspecified model representation, and show that the representation is unique. Finally, we explore two ways to select belief forecasts---introspection-proof and naive consistent. This exercise highlights the belief restrictions implicit in the misspecified model approach and shows how to identify a misspecified model from belief data.

  • Over- and Underreaction to Information, with Cuimin Ba and Alex Imas, Revision Requested at Quarterly Journal of Economics.
  • [abstract] [working paper]
    This paper explores how features of the learning environment interact with cognitive constraints to determine whether agents under- or overreact to information. We develop a model of belief updating when agents are subject to limited attention and cognitive noise, and use it to generate predictions about how agents react to information. Our model predicts underreaction when the state space is simple, signals are precise, and the prior is flat or diffuse; it predicts overreaction when the state space is complex, signals are noisy, and the prior is concentrated. A series of experiments provide direct support for these theoretical predictions.
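As a reduced-form illustration of under- and overreaction (this is the standard Grether-style updating rule from the empirical belief-updating literature, not the attention-and-cognitive-noise model developed in the paper), posterior odds can be written as prior odds times the likelihood ratio raised to a responsiveness parameter:

```python
def distorted_posterior(prior, likelihood_ratio, gamma):
    """Reduced-form updating: posterior odds = prior odds * LR**gamma.
    gamma < 1 dampens the signal (underreaction), gamma > 1 amplifies it
    (overreaction), and gamma = 1 recovers Bayes' rule."""
    odds = (prior / (1 - prior)) * likelihood_ratio ** gamma
    return odds / (1 + odds)

bayes = distorted_posterior(0.5, 4.0, 1.0)   # Bayesian benchmark: 0.8
under = distorted_posterior(0.5, 4.0, 0.5)   # underreaction: 2/3
over = distorted_posterior(0.5, 4.0, 2.0)    # overreaction: 16/17
```

In this reduced form, the paper's comparative statics correspond to how the effective responsiveness parameter varies with features of the environment such as state-space complexity, signal precision, and the concentration of the prior.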

  • A Cognitive Foundation for Perceiving Uncertainty, with Josh Hascher, Alex Imas, Michael Ungeheuer and Martin Weber.
  • [abstract] [working paper]
    We propose a framework where perceptions of uncertainty are driven by the interaction between cognitive constraints and whether information is presented sequentially or simultaneously. Limited attention leads to the overweighting of unlikely but salient events---the dominant force when learning from simultaneous information---whereas imperfect recall leads to the underweighting of such events---the dominant force when learning sequentially. A series of studies show that, when learning from simultaneous information, people are overoptimistic about assets that mostly underperform but sporadically exhibit large outperformance. However, they overwhelmingly select more consistently outperforming assets when observing the same information sequentially.

  • Posteriors as Signals in Misspecified Learning Models, with Daniel N. Hauser (draft coming soon).
  • [abstract]
    The Bayesian learning literature often normalizes a signal to be the induced posterior distribution over the state space. We provide a foundation for such a normalization when agents have a misspecified model of the state-signal distributions.

  • Misinterpreting Social Outcomes and Information Campaigns, with Daniel N. Hauser (work in progress).
  • [abstract]
    Given the different inefficiencies that arise when agents are misspecified in Bohren Hauser (2021), it is natural to ask what types of policies will improve decision-making. In this paper, we explore how information campaigns can counteract inefficient choices in a learning setting with social perception bias, where agents have a misspecified model of others' preferences. We characterize how the type and level of social perception bias affects the optimal information policy, and show that key features of this policy depend crucially on the form of misspecification.

Discrimination

This work focuses on understanding how discrimination evolves across time and interacts across markets, and on identifying how discrimination stemming from inaccurate (misspecified) beliefs manifests.

  • The Dynamics of Discrimination: Theory and Evidence, with Alex Imas and Michael Rosenberg, American Economic Review, October 2019, 109: 3395-3436 (lead article).
  • Exeter Prize 2020

    [abstract] [publication] [working paper]
    We model the dynamics of discrimination and show how its evolution can identify the underlying source. We test these theoretical predictions in a field experiment on a large online platform where users post content that is evaluated by other users on the platform. We assign posts to accounts that exogenously vary by gender and evaluation histories. With no prior evaluations, women face significant discrimination. However, following a sequence of positive evaluations, the direction of discrimination reverses: women's posts are favored over men's. Interpreting these results through the lens of our model, this dynamic reversal implies discrimination driven by inaccurate beliefs. Our model builds on the framework for social learning with model misspecification developed in Bohren Hauser (2021).

  • The Language of Discrimination: Using Experimental versus Observational Data, with Alex Imas and Michael Rosenberg, AEA Papers & Proceedings, May 2018, 108:169-174.
  • [abstract] [publication] [working paper]
    Discrimination can also occur along dimensions that are harder to quantify, such as the language used when engaging with and evaluating members of a targeted group. Using textual data from the field experiment in Bohren Imas Rosenberg (2019), we document a significant difference in the language used to respond to questions posed by women versus men. This highlights the importance of considering language as an additional means of discrimination.

  • Inaccurate Statistical Discrimination: An Identification Problem, with Kareem Haggag, Alex Imas and Devin Pope, Forthcoming at Review of Economics and Statistics.
  • [abstract] [publication] [working paper] [literature survey papers] [qualtrics survey]
    We argue that in many situations, individuals may have inaccurate beliefs about the relevant characteristics of different groups. This possibility creates an identification problem when isolating the source of discrimination. When not accounted for, we show both theoretically and experimentally that such inaccurate statistical discrimination will be misclassified as taste-based. We then examine two alternative methodologies for differentiating between taste-based, accurate statistical, and inaccurate statistical discrimination---varying the amount of information presented to evaluators and eliciting evaluators' beliefs---and propose a possible intervention: when presented with accurate information, inaccurate statistical discrimination decreases.

  • Systemic Discrimination: Theory and Measurement, with Peter Hull and Alex Imas, Conditionally Accepted at Quarterly Journal of Economics.
  • [abstract] [working paper]
    This paper develops new tools for modeling and measuring direct and systemic forms of discrimination. We show how systemic discrimination arises from direct discrimination in other decisions, which then generates disparities in the signaling technology or skill accumulation. Importantly, standard tools for measuring direct discrimination such as audit and correspondence studies cannot detect systemic discrimination. We propose two ways to measure such systemic discrimination: by decomposing total discrimination into direct and systemic components, and via a new experimental design that we refer to as an iterated audit. Finally, we illustrate these tools empirically and document sizeable systemic discrimination. These findings illustrate how discrimination in one domain can drive persistent disparities through systemic channels even when the direct discrimination is eliminated.

Moral Hazard and Information Aggregation

This work explores how committees can be used to aggregate dispersed information and how the persistence of past actions or peer-monitoring can be used to overcome moral hazard. Applications include designing rating systems on online platforms and providing incentives in online labor markets.

  • Should Straw Polls be Banned? with S. Nageeb Ali, Games and Economic Behavior, November 2019, 118:284-294.
  • [abstract] [publication] [working paper]
    We consider a setting in which a Principal appoints a committee of partially informed experts to choose a policy. The experts' preferences are aligned with each other but conflict with hers. We study whether she gains from banning committee members from communicating or "deliberating" before voting. Our main result is that if the committee plays its preferred equilibrium and the Principal must use a threshold voting rule, then she does not gain from banning deliberation. We show using examples how she can gain if she can choose the equilibrium played by the committee, or use a non-anonymous or non-monotone social choice rule.

  • Persistence in a Dynamic Moral Hazard Game, Theoretical Economics, January 2024, 19:449-498.
  • [abstract] [publication] [working paper] [supplemental appendix]
    This paper studies how the persistence of past choices can be used to create incentives. A large player, such as a firm, interacts with a sequence of short-run players, such as customers. The long-run player faces moral hazard and her past actions are imperfectly observed---they are distorted by a Brownian motion. Persistence refers to the impact that actions have on a payoff-relevant state variable, e.g. product quality depends on current and past investment choices. I characterize actions and payoffs in Markov Perfect Equilibria (MPE) for a fixed discount rate. Finally, I derive sufficient conditions for an MPE to be the unique PPE. Persistence creates effective intertemporal incentives to overcome moral hazard in settings where traditional channels fail. Several applications illustrate how the structure of persistence impacts the strength of these incentives.

  • Peer Monitoring with Partial Commitment, with Troy Kravitz.
  • [abstract] [working paper]
    A firm employs workers to obtain costly unverifiable information---for example, categorizing the content of images. Monitoring takes the form of hiring multiple workers to complete the same task and comparing reported output across workers. The optimal contract under limited liability exhibits three key features: (i) the monitoring technology depends crucially on the commitment power of the firm---virtual monitoring, or monitoring with arbitrarily small probability, is optimal when the firm can commit to truthfully reveal messages from other workers, while monitoring with strictly positive probability is optimal when the firm can hide messages (partial commitment); (ii) bundling multiple tasks reduces worker rents and monitoring inefficiencies; and (iii) the optimal contract is approximately efficient under full but not partial commitment. We conclude with an application to crowdsourcing platforms, and characterize the optimal contract for tasks found on these platforms.

The Econometrics of Randomized Experiments

Experimental policy trials that explicitly consider interference between individuals are an increasingly useful lens to study spillover and network effects. Empirical researchers who seek to experimentally investigate these effects face novel design choices that do not exist in settings with no interference.

  • Optimal Design of Experiments in the Presence of Interference, with Sarah Baird, Craig McIntosh and Berk Ozler, Review of Economics & Statistics, December 2018, 100:844-860.
  • [abstract] [publication] [working paper] [replication files]
    We formalize the optimal design of experiments when there is interference between units, i.e. an individual's outcome depends on the outcomes of others in her group. We focus on randomized saturation designs, two-stage experiments that first randomize treatment saturation of a group, and then randomize individual treatment assignment. We map the potential outcomes framework with partial interference to a regression model with clustered errors, calculate standard errors of randomized saturation designs, and derive analytical insights about the optimal design. We show that the power to detect average treatment effects declines precisely with the ability to identify novel treatment and spillover effects. Bohren Staples Baird McIntosh Ozler (2016) provides software for researchers to use our standard error calculations and optimal design results.

  • Power Calculation Software for Randomized Saturation Experiments, Aislinn Bohren, Patrick Staples, Sarah Baird, Craig McIntosh and Berk Ozler, Version 1.0, 2016.
  • [details] [link to code]
    To complement our analytical results in Baird Bohren McIntosh Ozler (2018), we developed software to assist researchers in designing randomized saturation experiments. Our software allows users to calculate the standard errors of estimators for different randomized saturation designs or calculate the optimal randomized saturation design for a given researcher objective.

    Available in R, Python, Matlab, and as a Graphical User Interface (GUI)
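To illustrate the two-stage assignment procedure that randomized saturation designs use, here is a minimal sketch (the function name, cluster structure, and saturation set are hypothetical; this is an illustration of the design, not the published software's API) that first draws each cluster's saturation and then randomizes individuals within the cluster:

```python
import random

def randomized_saturation(clusters, saturations, seed=0):
    """Two-stage randomized saturation design. `clusters` maps a cluster
    id to a list of individual ids; `saturations` lists the possible
    treatment shares. Stage 1 draws a saturation for each cluster;
    stage 2 randomizes individuals within the cluster at that rate."""
    rng = random.Random(seed)
    assignment = {}
    for cid, members in clusters.items():
        pi = rng.choice(saturations)                 # stage 1: cluster share
        treated = set(rng.sample(members, round(pi * len(members))))  # stage 2
        for i in members:
            assignment[i] = {"cluster": cid, "saturation": pi,
                             "treated": i in treated}
    return assignment

# hypothetical example: 6 clusters of 10 individuals each
clusters = {c: [f"{c}-{i}" for i in range(10)] for c in range(6)}
out = randomized_saturation(clusters, [0.0, 0.5, 1.0])
```

Varying the saturation across clusters is what allows the design to separate treatment effects from spillover effects: untreated individuals in partially treated clusters can be compared to individuals in pure-control (zero-saturation) clusters.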