Nudging Leads to Consumers In Colorado To Shop But Not Switch ACA Marketplace Plans

Nudging Leads to Consumers In Colorado To Shop But Not Switch ACA Marketplace Plans (Health Affairs, 2017; open access version here; joint with Jon Kingsdale, Timothy Layton, and Adam Sacarny)

The Affordable Care Act (ACA) dramatically expanded the use of regulated marketplaces in health insurance, but consumers often fail to shop for plans during open enrollment periods. Typically these consumers are automatically reenrolled in their old plans, which potentially exposes them to unexpected increases in their insurance premiums and cost sharing. We conducted a randomized intervention to encourage enrollees in an ACA Marketplace to shop for plans. We tested the effect of letters and e-mails with personalized information about the savings on insurance premiums that they could realize from switching plans and the effect of generic communications that simply emphasized the possibility of saving. The personalized and generic messages both increased shopping on the Marketplace’s website by 23 percent, but neither type of message had a significant effect on plan switching. These findings show that simple “nudges” with even generic information can promote shopping in health insurance marketplaces, but whether they can lead to switching remains an open question.

Design Issues in Economics Lab Experiments: Randomization

I’ve seen a lot of experimental economics papers as a coeditor of the Journal of Public Economics and a frequent reviewer for many journals. There are some recurring design and analysis decisions that lead authors astray. I’ll discuss a series of them. The first is Not Randomizing Treatment. It’s more common than you might think!

Not randomizing treatment. Randomly assigning participants to treatment is one of the key benefits of lab-based economics experiments. When we want to test the effect of a treatment, we want treatment to be orthogonal to everything else. It’s pretty clear how to do this with participant-level randomization—a random number generator assigns each participant to a treatment.
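As a minimal sketch of participant-level assignment (the function name, IDs, and seed below are illustrative, not taken from any actual replication code):

```python
import random

def assign_participants(participant_ids, treatments=("Basic", "Enhanced"), seed=42):
    """Assign each participant to a treatment arm using only a random draw.

    Illustrative sketch: real designs may block on session or covariates,
    but assignment should never depend on arrival time or experimenter choice.
    """
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    return {pid: rng.choice(treatments) for pid in participant_ids}

assignments = assign_participants([f"P{i:03d}" for i in range(20)])
```

With a fixed seed the assignment is reproducible, which makes it easy to audit that treatment status was decided by the draw alone.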

Things often go awry when we move to group-level randomization. For instance, you want to run a public goods game under one set of rules (“Basic Rules”), and then see how contributions change when it is run under another set of rules (“Enhanced Rules”).

Ideally, for each session, you would randomly assign half of the participants who show up to play Basic Rules (and interact with other participants in the Basic Rules condition), and half to play Enhanced Rules (and interact with other participants in the Enhanced Rules condition). This is great, you’ve actually achieved participant-level randomization.

But it’s logistically complicated to have different rules going on simultaneously (and perhaps the lab cannot handle enough people). So instead, you do session-level randomization. You run a “large-enough” number of sessions. You create a random order of sessions, so you might run Basic-Enhanced-Basic-Basic-Enhanced etc. If the number of sessions is large enough, you cluster standard errors at the session level and proceed.
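A random session order of this kind can be generated up front. A hypothetical sketch, assuming sessions split evenly across arms:

```python
import random

def random_session_order(n_sessions, treatments=("Basic", "Enhanced"), seed=7):
    """Return a balanced, randomly ordered schedule of session-level treatments.

    Sketch assuming an even split across arms; under this design, standard
    errors are later clustered at the session level.
    """
    if n_sessions % len(treatments) != 0:
        raise ValueError("n_sessions must split evenly across treatments")
    schedule = list(treatments) * (n_sessions // len(treatments))
    rng = random.Random(seed)
    rng.shuffle(schedule)  # randomize the order, not just the counts
    return schedule

order = random_session_order(8)
```

Committing to the full schedule before the first session is what rules out the "run Basic first, then decide" failure mode described next.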

But running different sessions with different rules in a random order is complicated. Plus, you might get an idea about what rules to run after running a few sessions. So what you do is run a few Basic Rules sessions, and then run a few Enhanced Rules sessions. This is where randomization has failed. Your subject population could be changing over time (perhaps early subjects are more eager, or have lower value of time). Or news events could change beliefs and preferences. The list of potential stories can be long; some can be ruled out, others cannot. But because your session order is not random, you are not guaranteed to have your treatment be orthogonal to everything else. As a result, you’ve missed out on the benefit of randomized experiments, and it’s unclear what to conclude from comparing your treatments.

 

Memory and Procrastination

I have two papers examining limited memory. Most recently:

On the Interaction of Memory and Procrastination: Implications for Reminders

Abstract: I examine the interaction between present-bias and limited memory. Individuals in the model must choose when and whether to complete a task, but may forget or procrastinate. Present-bias expands the effect of memory: it induces delay and limits take-up of reminders. Cheap reminder technology can bound the cost of limited memory for time-consistent individuals but not for present-biased individuals, who procrastinate on setting up reminders. Moreover, while improving memory increases welfare for time-consistent individuals, it may harm present-biased individuals because limited memory can function as a commitment device. Thus, present-biased individuals may be better off with reminders that are unanticipated. Finally, I show how to optimally time the delivery of reminders to present-biased individuals.

Forthcoming, Journal of the European Economic Association. Latest version here, with results on empirical estimation. Older version: NBER Working Paper 20381

This paper built on my previous work on memory, showing that people are overconfident about the probability they will remember:

Forgetting We Forget: Overconfidence and Memory

Abstract:  Do individuals have unbiased beliefs, or are they over- or underconfident? Overconfident individuals may fail to prepare optimally for the future, and economists who infer preferences from behavior under the assumption of unbiased beliefs will make mistaken inferences. This paper documents overconfidence in a new domain, prospective memory, using an experimental design that is more robust to potential confounds than previous research. Subjects chose between smaller automatic payments and larger payments they had to remember to claim at a six-month delay. In a large sample of college and MBA students at two different universities, subjects make choices that imply a forecast of a 76% claim rate, but only 53% of subjects actually claimed the payment.

Published in 2011 in the Journal of the European Economic Association; ungated working paper available at SSRN.

Press Coverage:

 

An individual mandate, or a tax? How policy is articulated matters.

Under the Affordable Care Act, people must buy health insurance or pay a financial penalty. Framing that policy as a mandate to buy health insurance, rather than as a tax on not purchasing it, can matter.

In Ericson and Kessler (JEBO 2016), we describe the results of a year-long experiment in which a series of participants reported their probability of purchasing health insurance either under a mandate or a financially equivalent tax.

In late 2011 and early 2012, articulating the policy as a mandate, rather than as a financially equivalent tax, increased the reported probability of insurance purchase by 10.6 percentage points, an effect comparable to a $1,000 decrease in annual premiums. However, controversy over the Affordable Care Act's insurance mandate provision changed the political discourse during 2012. We document the rise of this controversy. After the controversy, the mandate framing was no more effective than the tax framing.

For more, see:

The Size of the LGBT Population and the Magnitude of Anti-Gay Sentiment are Substantially Underestimated

Measuring sexual orientation, behavior, and related opinions is difficult because responses are biased towards socially acceptable answers. We test whether measurements are biased even when responses are private and anonymous and use our results to identify sexuality-related norms and how they vary. We run an experiment on 2,516 U.S. participants. Participants were randomly assigned to either a “best practices method” that was computer-based and provides privacy and anonymity, or to a “veiled elicitation method” that further conceals individual responses. Answers in the veiled method preclude inference about any particular individual, but can be used to accurately estimate statistics about the population.

Comparing the two methods shows sexuality-related questions receive biased responses even under current best practices, and, for many questions, the bias is substantial. The veiled method increased self-reports of non-heterosexual identity by 65% (p<0.05) and same-sex sexual experiences by 59% (p<0.01). The veiled method also increased the rates of anti-gay sentiment. Respondents were 67% more likely to express disapproval of an openly gay manager at work (p<0.01) and 71% more likely to say it is okay to discriminate against lesbian, gay, or bisexual individuals (p<0.01). The results show non-heterosexuality and anti-gay sentiment are substantially underestimated in existing surveys, and the privacy afforded by current best practices is not always sufficient to eliminate bias. Finally, our results identify two social norms: it is perceived as socially undesirable both to be open about being gay, and to be unaccepting of gay individuals.
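The paper's veiled elicitation method is its own design. Purely to illustrate the general principle stated above — individual answers can be made uninformative while population rates stay identifiable — here is a sketch of a classic forced-response randomized-response scheme. This is not the method used in the study, and all names and parameters are illustrative:

```python
import random

def veiled_responses(true_answers, p_truth=0.5, seed=1):
    """Forced-response veiling: with probability p_truth the subject answers
    truthfully; otherwise a coin flip dictates a forced yes/no. Any single
    'yes' is therefore uninformative about that individual."""
    rng = random.Random(seed)
    out = []
    for answer in true_answers:
        if rng.random() < p_truth:
            out.append(answer)             # truthful report
        else:
            out.append(rng.random() < 0.5)  # forced random report
    return out

def estimate_rate(responses, p_truth=0.5):
    """Recover the population 'yes' rate pi from veiled responses:
    P(yes) = p_truth * pi + (1 - p_truth) * 0.5, solved for pi."""
    p_yes = sum(responses) / len(responses)
    return (p_yes - (1 - p_truth) * 0.5) / p_truth

true_answers = [i < 3000 for i in range(10000)]  # simulated 30% base rate
est = estimate_rate(veiled_responses(true_answers))
```

The estimator inverts the known mixing probability, so the population rate is recovered even though no individual response can be decoded.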

Paper available below:

Press Coverage:

How Product Standardization Affects Choice: Evidence from the Massachusetts Health Insurance Exchange

Product Standardization on the Mass. HIX

Standardization of complex products is touted as improving consumer decisions and intensifying price competition, but evidence on standardization is limited. We examine a natural experiment: the standardization of health insurance plans on the Massachusetts Health Insurance Exchange.

Pre-standardization, firms had wide latitude to design plans. A regulatory change then required firms to standardize the cost-sharing parameters of plans and offer seven defined options; plans remained differentiated on network, brand, and price. Standardization led consumers on the HIX to choose more generous health insurance plans and led to substantial shifts in brands’ market shares.

We decompose the sources of this shift into three effects: price, product availability, and valuation. A discrete choice model shows that standardization changed the weights consumers attach to plan attributes (a valuation effect), increasing the salience of plan tier. The availability effect explains the bulk of the brand shifts. Standardization increased consumer welfare in our models, but firms captured some of the surplus by reoptimizing premiums. We use hypothetical choice experiments to replicate the effect of standardization and conduct alternative counterfactuals.
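The valuation effect can be illustrated with a toy conditional-logit calculation; the attributes, weights, and numbers below are purely hypothetical and are not estimates from the paper:

```python
import math

def choice_shares(plans, weights):
    """Logit choice shares: utility is a weighted sum of plan attributes,
    and shares follow the softmax of utilities."""
    utils = [sum(w * plan[k] for k, w in weights.items()) for plan in plans]
    m = max(utils)  # subtract the max for numerical stability
    exps = [math.exp(u - m) for u in utils]
    total = sum(exps)
    return [e / total for e in exps]

plans = [
    {"premium": -300, "tier": 2},  # hypothetical generous (high-tier) plan
    {"premium": -250, "tier": 1},  # hypothetical cheaper, lower-tier plan
]
pre = choice_shares(plans, {"premium": 0.01, "tier": 0.2})   # low tier salience
post = choice_shares(plans, {"premium": 0.01, "tier": 0.8})  # higher tier salience
```

Raising the weight on tier shifts predicted share toward the more generous plan, the qualitative pattern the valuation effect describes.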

Link to Full Working Paper: How Product Standardization Affects Choice: Evidence from the Massachusetts Health Insurance Exchange