Social Learning with Model Misspecification: A Framework and a Characterization

Abstract: We explore how model misspecification affects learning in a sequential social learning setting. Individuals learn from diverse sources, including private signals, public signals, and the actions of others. Misspecified agents have incorrect beliefs about the signal distribution, about how other agents draw inferences, and/or about other agents’ preferences. Our main result is a simple criterion that characterizes long-run learning outcomes: whether learning is correct, incorrect, or cyclical (beliefs fail to converge), and whether agents asymptotically disagree despite observing the same sequence of information. This criterion is straightforward to derive from the primitives of the misspecification, and it can also be used to establish that the correctly specified model is robust: agents with approximately correct models almost surely learn the true state. We close by demonstrating how our framework can be used to analyze misspecified social learning in a variety of settings, such as level-k reasoning, overweighting or underweighting of new information, confirmation bias, and other forms of signal misperception. We illustrate how our tools can be used to determine the set of asymptotic outcomes and to provide insight into how conceptually robust these outcomes are to different modeling choices.
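
To make the flavor of such a criterion concrete, here is a minimal illustrative sketch, not the paper's formal model: agents share a public log-likelihood ratio over a binary state, observe conditionally i.i.d. binary private signals whose true accuracy is q, but update as if the accuracy were q_hat. All names and parameters (simulate, drift, q, q_hat) are hypothetical choices for this example. Under these assumptions, the sign of the expected perceived log-likelihood update under the true signal distribution determines the long-run outcome: positive drift yields correct learning, negative drift yields confident convergence to the wrong state.

```python
import numpy as np


def simulate(theta, q, q_hat, n_agents=10_000, seed=0):
    """Sequential belief updating with a misspecified signal model.

    theta : true state, 0 or 1
    q     : true probability that a private signal matches the state
    q_hat : accuracy the (misspecified) agents believe signals have

    Each agent inherits the public log-likelihood ratio from its
    predecessors, observes one private signal, and updates using q_hat.
    Returns the path of the public log-likelihood ratio.
    """
    rng = np.random.default_rng(seed)
    llr = 0.0  # public log-likelihood ratio, log P(theta=1) / P(theta=0)
    path = np.empty(n_agents)
    step = np.log(q_hat / (1.0 - q_hat))  # perceived informativeness of a signal
    for t in range(n_agents):
        # Signal drawn from the TRUE distribution ...
        s = rng.random() < (q if theta == 1 else 1.0 - q)
        # ... but interpreted with the PERCEIVED accuracy q_hat.
        llr += step if s else -step
        path[t] = llr
    return path


def drift(q, q_hat):
    """Expected per-agent change in the public log-likelihood ratio
    when the true state is theta = 1. Its sign determines whether
    beliefs drift toward the truth (positive) or away from it (negative)."""
    step = np.log(q_hat / (1.0 - q_hat))
    return q * step + (1.0 - q) * (-step)  # = (2q - 1) * step
```

Under these (hypothetical) parameters, drift(0.7, 0.6) ≈ 0.16 > 0: agents with an approximately correct model still learn the true state, echoing the abstract's robustness claim. By contrast, drift(0.7, 0.45) ≈ -0.08 < 0: agents who believe signals are weakly contrarian converge to the wrong state. Richer misspecifications (asymmetric perceived signal distributions, wrong beliefs about others' inference) would change the drift expression but not the logic of reading the outcome off its sign.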