That mistakes are made is clear. What is meant by that is not. Measuring whatever might be meant and scientifically studying it is therefore even more challenging.
These lectures introduce an interdisciplinary science of mistakes to cut the Gordian knot. The key building blocks are model constructs drawn from the economic tradition, methods of measurement drawn from the psychometric tradition, and analytic methods drawn from economic theory.
Sample Chapter(s)
Lecture 1: Overview
https://doi.org/10.1142/9789811262395_fmatter
https://doi.org/10.1142/9789811262395_0001
This introductory lecture has four goals:
https://doi.org/10.1142/9789811262395_0002
The goals of this lecture are:
Delivering all of this material in a single lecture is excessive. So, the real goal of this lecture is to permit students to review the material ex post and accomplish the above goals by themselves. This comment applies to all of the lectures that follow.
https://doi.org/10.1142/9789811262395_0003
The goals of this lecture are:
https://doi.org/10.1142/9789811262395_0004
In this lecture, I follow Caplin and Dean (2015) and Caplin et al. (2023c) not only to characterize CIRs but also to show precisely how to recover all rationalizing cost functions for any such utility function. All results are organized around matrices introduced at the end of the previous lecture. Before plunging into the general results, I open by restating the key objects and providing a number of illustrative examples that point the way forward…
https://doi.org/10.1142/9789811262395_0005
https://doi.org/10.1142/9789811262395_0006
In terms of how to do applied research, the central idea in these lectures is that rich data on decisions made in a fixed learning environment under very different incentives to learn allow correspondingly rich inference about utility, learning, and the costs of learning. In this lecture, I take this point to its limit and consider a researcher who is able to vary incentives widely and otherwise to design the decision-making environment. In addition to laying out some particularly insightful variations in the learning environment and decision problem, I present an experiment designed around the main recovery result. The attentive reader will note that the results in this section are illustrative rather than definitive. There are many ways to design experimental choice environments that generate insightful SDSC, and many of these have, in fact, been pioneered in psychometrics, which routinely generates such data. I focus on the few cases that have been formally studied in detail. The result is partial rather than complete recovery. It is clear that one can push much further along this road, and I close the lecture by pointing out a few promising directions forward…
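To fix ideas about the data underlying this approach, the sketch below shows one minimal way to organize SDSC as choice frequencies conditional on the payoff-relevant state; the perceptual task, labels, and counts are hypothetical illustrations rather than data from the lectures.

```python
import numpy as np

# Hypothetical state-dependent stochastic choice (SDSC) data:
# rows are payoff-relevant states, columns are chosen actions,
# entries are observed choice counts in a fixed learning environment.
states = ["red majority", "blue majority"]
actions = ["say red", "say blue"]
counts = np.array([
    [86, 14],   # choices when the true state is "red majority"
    [22, 78],   # choices when the true state is "blue majority"
])

# Conditional choice probabilities P(action | state): the basic object
# from which utility, learning, and attention costs are inferred.
p_action_given_state = counts / counts.sum(axis=1, keepdims=True)

# Accuracy by state: 1 - P(mistake | state).
accuracy = np.diag(p_action_given_state)
for state, acc in zip(states, accuracy):
    print(f"P(correct | {state}) = {acc:.2f}")
```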
https://doi.org/10.1142/9789811262395_0007
As with the material on recovering rationalizing learning in Lecture 5, this lecture takes as its starting point the wonderful approach and results of Blackwell (1953) on ranking experiments by their information content. Given that it was not central to Lecture 5, I did not note there the third equivalent definition of one experiment being more informative than another, which is that it offers at least as high a maximum expected utility regardless of the particulars of the utility function. While Blackwell’s characterization is definitive, it reveals that many experiments cannot be unequivocally ranked. This has given rise to a sub-literature whose goal is to find methods of comparing experiments that are less demanding than Blackwell’s…
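For reference, here is a standard finite-state rendering of the two conditions at issue, in notation of my own rather than the lectures’: the garbling condition and the “at least as valuable in every decision problem” condition that Blackwell showed to be equivalent.

```latex
% Blackwell (1953), finite-state sketch; notation illustrative, not the lectures'.
% Experiments are stochastic matrices: P(s \mid \omega) and Q(t \mid \omega).
% The following are equivalent ("P is more informative than Q"):
%
% (i)  Garbling: Q = P M for some stochastic matrix M.
% (ii) Weakly higher value in every decision problem: for every prior \mu,
%      finite action set A, and utility u : A \times \Omega \to \mathbb{R},
\[
  \max_{\sigma :\, S_P \to A} \ \sum_{\omega} \mu(\omega) \sum_{s} P(s \mid \omega)\,
      u\bigl(\sigma(s), \omega\bigr)
  \;\ge\;
  \max_{\tau :\, S_Q \to A} \ \sum_{\omega} \mu(\omega) \sum_{t} Q(t \mid \omega)\,
      u\bigl(\tau(t), \omega\bigr).
\]
```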
https://doi.org/10.1142/9789811262395_0008
In this lecture, I introduce and analyze behavioral patterns associated with a large and important family of attention cost functions. Costs based on Shannon entropy, introduced into economics by Sims, opened the door. I introduce this cost function and indicate some of the many reasons why generalizations are being explored. In a nutshell, the model has properties that are often contradicted in data. With that in mind, I introduce some seemingly natural generalizations that share a key property of the Shannon model, posterior separability: costs are additively separable across the posteriors that a strategy induces. These generalizations are now well studied because they accommodate most of the empirical failings observed for the Shannon model…
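To make the sense of additivity concrete, the sketch below gives a minimal numerical illustration of the posterior-separable template and its Shannon special case; the prior, posteriors, and unit attention cost are assumed for illustration and are not drawn from the lectures.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (in nats)."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def posterior_separable_cost(prior, posteriors, weights, T):
    """Posterior-separable cost: E[T(posterior)] - T(prior),
    for a convex potential T on the belief simplex."""
    expected = sum(w * T(g) for w, g in zip(weights, posteriors))
    return expected - T(prior)

# Shannon (mutual-information) cost with unit cost kappa = 1:
# T(gamma) = -H(gamma), so the cost equals H(prior) - E[H(posterior)],
# i.e., the mutual information between states and signals.
shannon_T = lambda g: -entropy(g)

# Illustrative two-state example: a 50/50 prior and two equally likely
# signals that move beliefs to (0.8, 0.2) and (0.2, 0.8).  The weighted
# posteriors average back to the prior (Bayes plausibility).
prior = np.array([0.5, 0.5])
posteriors = [np.array([0.8, 0.2]), np.array([0.2, 0.8])]
weights = [0.5, 0.5]

cost = posterior_separable_cost(prior, posteriors, weights, shannon_T)
print(f"Shannon attention cost = {cost:.4f} nats")
```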
https://doi.org/10.1142/9789811262395_0009
https://doi.org/10.1142/9789811262395_0010
Caplin et al. (2019) focus on the fact that many options are unchosen. In this lecture, I analyze this in a few special cases and introduce the ILR hyperplanes, a general-purpose analytic structure for mapping from prior beliefs in a given decision problem to the structure of the consideration set and optimal unconditional choice probabilities. There is much more to be done with these hyperplanes, both computationally and in terms of economic analysis, and a few openings and pointers are provided. I open with a very standard model of type I and type II errors that illustrates the value of understanding the conditions under which inattentive choice is optimal. I then present two examples from CDL, in both of which the key issue is to identify the optimal consideration set. The ILR hyperplanes are then introduced and discussed. The lecture closes by pointing toward dynamics, in particular to the work of Miao and Xing (2020), which is posterior based and applies to the broad class of uniformly posterior-separable cost functions.
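As a complement to that opening example, here is a minimal brute-force sketch, not taken from the lectures, of a binary quality-control problem with a Shannon attention cost: flagging a good item is a type I error, passing a defective one is a type II error, and the code checks numerically whether acting on the prior alone (inattentive choice) is optimal for the assumed payoffs, prior, and unit cost.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (in nats)."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

# Hypothetical binary problem: state 0 = "good item", state 1 = "defective".
# Payoffs, prior, and unit attention cost are illustrative assumptions.
#                 pass   flag
U = np.array([[ 1.0,  -0.2],    # state: good   (flagging here is a type I error)
              [-2.0,   0.5]])   # state: defective (passing here is a type II error)
prior = np.array([0.9, 0.1])    # defects are rare
kappa = 0.1                     # marginal cost of attention (Shannon)

def net_value(q):
    """Net value of a two-signal strategy with q[i] = P("flag" signal | state i)."""
    p_signal = np.array([1 - q, q]).T        # rows: states; columns: signals
    joint = prior[:, None] * p_signal        # P(state, signal)
    marg = joint.sum(axis=0)                 # P(signal)
    payoff, info = 0.0, entropy(prior)
    for s in range(2):
        if marg[s] <= 0:
            continue
        post = joint[:, s] / marg[s]         # Bayesian posterior after signal s
        payoff += marg[s] * np.max(post @ U) # act optimally on that posterior
        info -= marg[s] * entropy(post)      # mutual information accumulates
    return payoff - kappa * info

inattentive = np.max(prior @ U)              # best action using the prior alone
grid = np.linspace(0.0, 1.0, 101)
best = max(net_value(np.array([q0, q1])) for q0 in grid for q1 in grid)
print(f"inattentive payoff = {inattentive:.4f}, best attentive net value = {best:.4f}")
print("inattentive choice optimal here?", best <= inattentive + 1e-9)
```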
https://doi.org/10.1142/9789811262395_0011
This lecture opens with two simple models of how rational inattention plays out in market settings. The first makes one simple point: in models with free entry, equilibrium conditions can be used to pin down beliefs about market composition. For that purpose, it is particularly useful to use the posterior-based formulation of optimal strategies, which shows that many features of the solution survive variation in prior beliefs, aiding equilibrium analysis. The second example is dynamic and makes a distinct point: past market outcomes reveal information that causes priors to be updated. Social learning from market outcomes may, in some cases, be less onerous than private learning. This points in the direction of dynamics. Again, knowing how to solve the model for all priors is of particular value…
https://doi.org/10.1142/9789811262395_0012
This lecture applies the methods of Parts 1 and 2 to model, better understand, and better apply machine learning methods in decision-making. All of the work is conducted jointly with Daniel Martin and Philip Marx, and much is taken directly from Caplin et al. (2022b). Algorithms are the most important “decision makers” in the modern world and will only become more so. They predict whether a driving route has a low expected travel time, an eye scan shows physical damage, a manufactured product has a defect, a house for sale is a likely match, internet activity is a security threat, an email is spam, and so on. Virtually all industries, jobs, and consumer experiences have been impacted in some way by the rapid rise in automation brought about by this technology. Economically important applications of machine learning include what ads to serve, what content to show, what coupons to provide, facial recognition, translation, voice assist, credit scoring and loan decisions, medical decisions, product recommendations, driving routes, spam filters, fraud detection, and so on…
https://doi.org/10.1142/9789811262395_0013
In this lecture, I outline applications of the ideas and methods of the book to teaching and testing protocols for humans rather than for machines. I pick up where the last lecture left off, with the use of proper scoring rules both to better understand and to influence learning. Both de Finetti (1965) and Savage (1971) proposed the use of just such scoring rules in multiple-choice tests. To date, this approach has found traction only within decision analysis, where it appears to have proven its worth (Bickel, 2010). I outline some of the powerful arguments in favor of broader experimentation and implementation…
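To illustrate what such a rule looks like, the sketch below implements the quadratic (Brier-type) scoring rule, one standard proper scoring rule of the kind at issue, and checks numerically that truthful reporting of beliefs beats two common distortions; the exam beliefs and numbers are hypothetical.

```python
import numpy as np

def quadratic_score(report, outcome_index):
    """Quadratic (Brier-type) scoring rule on a multiple-choice question:
    score = 1 - sum_i (report_i - 1{i == correct})^2.
    The rule is 'proper': reporting one's true belief maximizes expected score."""
    target = np.zeros_like(report)
    target[outcome_index] = 1.0
    return 1.0 - np.sum((report - target) ** 2)

def expected_score(report, belief):
    """Expected score when the correct answer is drawn from `belief`."""
    return sum(b * quadratic_score(report, i) for i, b in enumerate(belief))

# A student's actual belief over four answer options (hypothetical numbers).
belief = np.array([0.6, 0.25, 0.1, 0.05])

# Compare truthful reporting with two distortions: all-in on the modal
# answer, and a fully hedged uniform report.
candidates = {
    "truthful report": belief,
    "all on the favorite": np.array([1.0, 0.0, 0.0, 0.0]),
    "uniform hedge": np.full(4, 0.25),
}
for name, report in candidates.items():
    print(f"{name:>22}: expected score = {expected_score(report, belief):.3f}")
```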
https://doi.org/10.1142/9789811262395_0014
We don’t stop making mistakes when we leave school. We just stop measuring them. The few cases alluded to above, such as sporting calls, are exceptional in this respect because ideal SDSC is available. While there is value in finding other settings in which this is true, at least to a first approximation, there is also value in developing entirely different research strategies for bringing the science of mistakes to the field. My focus in this lecture is on the development of cognitive instruments and their deployment in the field. Many mistakes can be seen as reflecting shortfalls in the cognitive skills required, for example, to make a correct sporting call or to identify the appropriate medical procedure and implement it effectively. This suggests a psychometric approach: develop cognitive skill measures that predict particular mistakes and implement them in field settings. If one identifies differences in skill using cognitive instruments and finds that these differences are strongly correlated with clearly related bad outcomes in the field, this strengthens the plausibility of a causal channel running from cognitive skill to behavioral mistakes…
https://doi.org/10.1142/9789811262395_0015
Decision-making skills matter in all phases of life. While being a manager of a fixed group of workers is a particularly salient example of this, there is no realm of behavior in which such skills are irrelevant. In this lecture, I focus on a particularly important set of decisions that are known to have a massive impact on lifetime income: decisions on whether and how to search for jobs, including when to quit, what to do in the face of an impending layoff, when to take time out of the labor force and retool, and when to retire. These transitions are important events that appear to have a large effect on income dynamics over the life cycle. When, why, and how such transitions are made is much studied in administrative data. However, the facts alone are not rich enough to determine whether or not there were significant gaps in knowledge and/or illusions that rationalized erroneous decisions. Decision quality is as much about what was not known as about what was, and as much about actions that were not taken as about those that were. In this lecture, I outline research in which measurements are enriched to distinguish between those who make successful labor market transitions and those who do not, and thereby to gain insight into the role that decision-making skills play in lifetime income…
https://doi.org/10.1142/9789811262395_0016
Traditional policy tools operate through prices (e.g., taxes) or quantities (e.g., quotas). The theories guiding policy design are based on an essentially error-free understanding of these instruments. Yet, in a world of attentional constraints, thinking in these traditional terms is far too narrow. In fact, the rational inattention revolution was launched, in part, to capture the cognitive constraints relevant to monetary policy design, in place of the perfect-information fiction in which prices and wages instantaneously adjust to changes in policy…
https://doi.org/10.1142/9789811262395_bmatter
"This book offers the first systematic exposition of an exciting new line of research. It shows not only how rational choice theory can be generalized to allow for random errors in a flexible way, but how the costs of precision, and hence the structure of the error that should be expected in any given decision problem, can be backed out from behavioral data. It thus shows that one can study decision theory in the rigorous spirit of revealed preference theory — refusing to posit internal structure that can't be identified from behavioral data — without this requiring one to assume that whatever people choose on any occasion must be what they want. The resulting reformulation of choice theory has deep implications for both positive and normative economic analyses. The book sketches a number of tantalizing applications of its framework, as opening salvos in what promises to be a game-changing campaign."
Andrew Caplin is Silver Professor of Economics at New York University. He is a cognitive economist whose research covers such diverse topics as how to reduce legal and medical errors and how best to understand lifecycle patterns of earnings, spending, and investing. The common feature is the central importance of reducing mistakes. He is a leader of the Sloan-NOMIS Program on the Cognitive Foundations of Economic Behavior and of the Behavioral Macroeconomics research group at the National Bureau of Economic Research, and a member of the Center for Economic Behavior and Inequality at the University of Copenhagen. He has been working on modeling and measuring mistakes for some 15 years.