
Incorporating uncertainty


Why make uncertainty explicit?

From cole_haus:

GiveWell produces cost-effectiveness models of its top charities. These models take as inputs many uncertain parameters. Instead of representing those uncertain parameters with point estimates—as the cost-effectiveness analysis spreadsheet does—we can (should) represent them with probability distributions. Feeding probability distributions into the models allows us to output explicit probability distributions on the cost-effectiveness of each charity.
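A minimal sketch of this idea in Python (the parameter names and numbers are illustrative placeholders, not GiveWell's actual inputs):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # number of Monte Carlo draws

# Each uncertain input is a distribution rather than a point estimate
cost_per_net = rng.lognormal(mean=np.log(5.0), sigma=0.2, size=N)   # dollars per net
deaths_averted_per_1000_nets = rng.lognormal(np.log(2.0), 0.5, N)   # effect size

# Propagating the draws through the model yields a *distribution*
# over cost-effectiveness, not a single number
cost_per_death_averted = cost_per_net * 1000 / deaths_averted_per_1000_nets

print(np.percentile(cost_per_death_averted, [5, 50, 95]))  # e.g., a 90% interval
```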

A common approach: consider 'best' and 'worst' case scenarios for each parameter, and ask 'what if everything goes best?' and 'what if everything goes worst?' to get lower and upper bounds on the result.

But this approach is not optimal, because:

Either case ('all best' or 'all worst') is extremely unlikely, and it becomes more unlikely the more uncertain inputs there are. (At least this is true if the random/uncertain quantities are independent; uncertainties can also be correlated.) With ten independent parameters, the chance that every one lands at, say, its 5th-percentile 'worst' value is 0.05^10, roughly 1 in 10 trillion. Consider: what are the chances of winning the lottery ten times in a row? Of getting hit by lightning ten times in a year?

The details of the uncertainty matter, and may matter to outcomes: some 'uncertainties' are far more uncertain than others, or have more meaningfully 'long-tailed' distributions. Furthermore, if the uncertain inputs are positively correlated with one another, the outcome has much more variance (see the sketch below). There are reasonable ways of explicitly measuring and calibrating the uncertainty over each parameter, and making this explicit is helpful.
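To see how correlation inflates outcome variance, here is a small sketch, assuming two standard-normal inputs that feed additively into a result:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# Same marginal uncertainty in both inputs...
independent = rng.multivariate_normal([0, 0], [[1.0, 0.0], [0.0, 1.0]], size=N)
correlated  = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=N)

# ...but the variance of the summed outcome is much larger when they co-move:
print(independent.sum(axis=1).var())  # ~2.0  (Var = 1 + 1)
print(correlated.sum(axis=1).var())   # ~3.6  (Var = 1 + 1 + 2*0.8)
```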

cole_haus again:

From a subjective Bayesian point of view, the best way to represent our state of knowledge on the input parameters is with a probability distribution over the values the parameter could take. For example, I could say that a negative value for increasing consumption seems very improbable to me but that a wide range of positive values seem about equally plausible. Once we specify a probability distribution, we can feed these distributions into the model and, in principle, we'll end up with a probability distribution over our results. This probability distribution on the results helps us understand the uncertainty contained in our estimates and how literally we should take them.
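Once we have that output distribution, we can report summaries a point estimate cannot give us. A sketch with a made-up results distribution (the numbers and the $5,000 threshold are illustrative, not a real benchmark):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical model output: dollars per death averted (placeholder numbers)
results = rng.lognormal(np.log(3500), 0.6, 100_000)

print(f"median: ${np.median(results):,.0f}")
print(f"90% interval: ${np.percentile(results, 5):,.0f} to ${np.percentile(results, 95):,.0f}")
print(f"P(beats a $5,000 bar): {(results < 5000).mean():.0%}")  # illustrative threshold
```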

What types of uncertainty...?

From a table for Malaria Consortium's Seasonal Malaria Chemoprevention program (rephrased):

| Input | Type of uncertainty | Meaning/importance |
| --- | --- | --- |
| Number of LLINs distributed per person | Operational | Affects total cost per unit of effect |
| Deaths averted per protected child | Causal | How effective the core activity is |
| Lifespan of an LLIN | Empirical | Years of benefit per distribution |
| Internal validity adjustment | Methodological | How much to trust the underlying studies |
| Percent mortality in AMF areas vs. trial areas | Empirical/historical | Affects the size of the problem |
| Value of averting a child death | Moral | Determines how outcomes translate into value |
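In code, each row of this table can become its own explicit distribution. A sketch with placeholder shapes and numbers (not GiveWell's calibrated values):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000

params = {
    "llins_per_person":         rng.normal(1.8, 0.2, N),               # operational
    "deaths_averted_per_child": rng.lognormal(np.log(0.004), 0.5, N),  # causal
    "llin_lifespan_years":      rng.uniform(1.5, 3.0, N),              # empirical
    "internal_validity_adj":    rng.beta(8, 2, N),                     # methodological
}

# Each input now carries an explicit 90% interval instead of a point estimate
for name, draws in params.items():
    lo, mid, hi = np.percentile(draws, [5, 50, 95])
    print(f"{name}: {mid:.3g} (90% interval {lo:.3g} to {hi:.3g})")
```

One might treat the moral row differently, e.g., exposing the value of averting a child death as a user-adjustable setting rather than a distribution; this connects to the user-input and sensitivity-check material elsewhere in this resource.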
