Nudging Leads Consumers In Colorado To Shop But Not Switch ACA Marketplace Plans

Nudging Leads Consumers In Colorado To Shop But Not Switch ACA Marketplace Plans (Health Affairs, 2017; open access version here; joint with Jon Kingsdale, Timothy Layton, and Adam Sacarny)

The Affordable Care Act (ACA) dramatically expanded the use of regulated marketplaces in health insurance, but consumers often fail to shop for plans during open enrollment periods. Typically these consumers are automatically reenrolled in their old plans, which potentially exposes them to unexpected increases in their insurance premiums and cost sharing. We conducted a randomized intervention to encourage enrollees in an ACA Marketplace to shop for plans. We tested the effect of letters and e-mails with personalized information about the savings on insurance premiums that they could realize from switching plans and the effect of generic communications that simply emphasized the possibility of saving. The personalized and generic messages both increased shopping on the Marketplace’s website by 23 percent, but neither type of message had a significant effect on plan switching. These findings show that simple “nudges” with even generic information can promote shopping in health insurance marketplaces, but whether they can lead to switching remains an open question.

Design Issues in Economics Lab Experiments: Randomization

I’ve seen a lot of experimental economics papers as a coeditor of the Journal of Public Economics and a frequent reviewer for many journals. There are some recurring design and analysis decisions that lead authors astray. I’ll discuss a series of them. The first is Not Randomizing Treatment. It’s more common than you might think!

Not randomizing treatment. Randomly assigning participants to treatment is one of the key benefits of lab-based economics experiments. When we want to test the effect of a treatment, we want treatment to be orthogonal to everything else. It’s pretty clear how to do this with participant-level randomization—a random number generator assigns each participant to a treatment.

Things often go awry when we move to group-level randomization. For instance, you want to run a public goods game under one set of rules (“Basic Rules”), and then see how contributions change when it is run under another set of rules (“Enhanced Rules”).

Ideally, for each session, you would randomly assign half of the participants who show up to play Basic Rules (and interact with other participants in the Basic Rules condition), and half to play Enhanced Rules (and interact with other participants in the Enhanced Rules condition). This is great, you’ve actually achieved participant-level randomization.
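As a minimal sketch of this within-session split (the function name is hypothetical; it uses only Python's standard library):

```python
import random

def randomize_within_session(participants, seed=None):
    """Randomly split one session's attendees half/half between
    the Basic Rules and Enhanced Rules conditions."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"Basic": shuffled[:half], "Enhanced": shuffled[half:]}
```

Because each participant's condition is determined by the random shuffle rather than by which session they attended, treatment is orthogonal to session-level factors by construction.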

But it’s logistically complicated to have different rules going on simultaneously (and perhaps the lab cannot handle enough people). So instead, you do session-level randomization. You run a “large-enough” number of sessions. You create a random order of sessions, so you might run Basic-Enhanced-Basic-Basic-Enhanced etc. If the number of sessions is large enough, you cluster standard errors at the session level and proceed.
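Generating such a random session order is simple; here is a sketch (function name hypothetical), where the full ordering is drawn once, before any session is run:

```python
import random

def randomize_session_order(n_basic, n_enhanced, seed=None):
    """Return the condition for every session, in random order,
    e.g. ['Basic', 'Enhanced', 'Basic', 'Basic', 'Enhanced'].
    The entire order is fixed up front, before data collection."""
    rng = random.Random(seed)
    order = ["Basic"] * n_basic + ["Enhanced"] * n_enhanced
    rng.shuffle(order)
    return order
```

Committing to the full order in advance is the point: it prevents the experimenter from (even unintentionally) choosing which condition to run based on what has happened so far.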

But running different sessions with different rules in a random order is complicated. Plus, you might get an idea about what rules to run after running a few sessions. So what you do is run a few Basic Rules sessions, and then run a few Enhanced Rules sessions. This is where randomization has failed. Your subject population could be changing over time (perhaps early subjects are more eager, or have lower value of time). Or news events could change beliefs and preferences. The list of potential stories can be long; some can be ruled out, others cannot. But because your session order is not random, you are not guaranteed to have your treatment be orthogonal to everything else. As a result, you’ve missed out on the benefit of randomized experiments, and it’s unclear what to conclude from comparing your treatments.

 

Inferring Risk Perceptions and Preferences using Choice from Insurance Menus: Theory and Evidence

New working paper:

Inferring Risk Perceptions and Preferences using Choice from Insurance Menus: Theory and Evidence (joint with Philipp Kircher, Johannes Spinnewijn, and Amanda Starc)

Demand for insurance can be driven by high risk aversion or high risk. We show how to separately identify risk preferences and risk types using only choices from menus of insurance plans. Our revealed preference approach does not rely on rational expectations, nor does it require access to claims data. We show what can be learned non-parametrically from variation in insurance plans, offered separately to random cross-sections or offered as part of the same menu to one cross-section. We prove that our approach allows for full identification in the textbook model with binary risks and extend our results to continuous risks. We illustrate our approach using the Massachusetts Health Insurance Exchange, where choices provide informative bounds on the type distributions, especially for risks, but do not allow us to reject homogeneity in preferences.

Measuring Consumer Valuation of Limited Provider Networks

Published: American Economic Review, Papers and Proceedings, 2015.

Longer version: NBER Working Paper 20812. (Joint with Amanda Starc)

WTP for Network Breadth

We measure provider coverage networks for plans on the Massachusetts health insurance exchange using two measures: consumer surplus from a hospital demand system and the fraction of population hospital admissions that would be covered by the network. The two measures are highly correlated and show a wide range of networks available to consumers. We then estimate consumer willingness-to-pay for network breadth, which varies by age. 60-year-olds value the broadest network approximately $1200-1400/year more than the narrowest network, while 30-year-olds value it about half as much. Consumers place additional value on star hospitals, and there is significant geographic heterogeneity in the value of network breadth.


Memory and Procrastination

I have two papers examining limited memory. Most recently:

On the Interaction of Memory and Procrastination: Implications for Reminders

Abstract: I examine the interaction between present-bias and limited memory. Individuals in the model must choose when and whether to complete a task, but may forget or procrastinate. Present-bias expands the effect of memory: it induces delay and limits take-up of reminders. Cheap reminder technology can bound the cost of limited memory for time-consistent individuals but not for present-biased individuals, who procrastinate on setting up reminders. Moreover, while improving memory increases welfare for time-consistent individuals, it may harm present-biased individuals because limited memory can function as a commitment device. Thus, present-biased individuals may be better off with reminders that are unanticipated. Finally, I show how to optimally time the delivery of reminders to present-biased individuals.

Forthcoming, Journal of the European Economic Association. Latest version here, with results on empirical estimation. Older version: NBER Working Paper 20381

This paper built on my previous work on memory, showing that people are overconfident about the probability they will remember:

Forgetting We Forget: Overconfidence and Memory

Abstract: Do individuals have unbiased beliefs, or are they over- or underconfident? Overconfident individuals may fail to prepare optimally for the future, and economists who infer preferences from behavior under the assumption of unbiased beliefs will make mistaken inferences. This paper documents overconfidence in a new domain, prospective memory, using an experimental design that is more robust to potential confounds than previous research. Subjects chose between smaller automatic payments and larger payments they had to remember to claim at a six-month delay. In a large sample of college and MBA students at two different universities, subjects make choices that imply a forecast of a 76% claim rate, but only 53% of subjects actually claimed the payment.

Published 2011 in the Journal of the European Economic Association; Ungated working paper available at SSRN.

Press Coverage:

 
