Simple quant design issues

How many observations, how to assign treatments, etc.

Resources

Todo: Integrate further easy tools and guides, including those from Jamie Elsey

"Even a few observations can be informative"

Drawing from Lakens' excellent resource:

You are comparing a new message against an old (control) message.

Suppose you are a ‘believer’: your prior (shown in light grey in the figure) is that ‘this new message nearly always performs better than the control treatment’.

Suppose you then observe only 20 cases, and the new message performs better only half the time (10 of 20). Your posterior moves to the black line at the top of the figure: you now put very little probability on the new message performing much better than the control.
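
A minimal sketch of this update in R. The exact ‘believer’ prior isn't spelled out above, so a Beta(9, 1) prior, which puts most of its mass near ‘the new message nearly always wins’, stands in here purely for illustration:

```r
# Assumed, illustrative 'believer' prior (not Lakens' exact numbers):
# Beta(9, 1) puts most probability mass near 'the new message nearly always wins'.
aprior <- 9
bprior <- 1

n <- 20   # people who compared the two messages
x <- 10   # times the new message won -- only half the time

# Conjugate Beta-binomial update
aposterior <- aprior + x
bposterior <- bprior + n - x

# Probability the new message wins 'much' more often than the control
# (here, more than 80% of the time), before vs. after seeing the data:
1 - pbeta(0.8, aprior, bprior)          # prior: roughly 0.87
1 - pbeta(0.8, aposterior, bposterior)  # posterior: only a few percent
```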

Now suppose instead you have the ‘baby prior’, and think each of the following ten statements is equally likely:

  • less than 10% of people rate the new message better than the control

  • 10-20% of people rate the new message better than the control

  • …

  • … 50-60% of people rate the new message better than the control

  • …

  • 90-100% of people rate the new message better than the control

You run the test on 20 people, and 15 of them prefer the new message.

Now you update substantially. From some calculations (starting from Lakens' code; in R, 1 - pbeta(0.65, aposterior, bposterior)), you put about an 80% posterior probability on the new message being preferred by at least 65% of the population, and only about a 1.5% probability on the control actually being better.
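
A short R sketch of this calculation, treating the ‘baby prior’ as a flat Beta(1, 1) prior (a continuous stand-in for the ten equal bins above):

```r
# Flat 'baby prior': every share of people preferring the new message is equally likely
aprior <- 1
bprior <- 1

n <- 20   # people tested
x <- 15   # of whom 15 prefer the new message

aposterior <- aprior + x        # 16
bposterior <- bprior + n - x    # 6

# Posterior probability that at least 65% of the population prefers the new message
1 - pbeta(0.65, aposterior, bposterior)   # roughly 0.80

# Posterior probability that the control is actually preferred (share below 50%)
pbeta(0.50, aposterior, bposterior)       # roughly 0.015
```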

So if I really ‘am as uncertain as described in the example above’ about which of the two messages is better (and by how much)...

... then even 20 randomly selected people assessing both messages can be very informative. How often does this ‘strong information gain’ happen? Under the ‘baby prior’, you would see a result at least this informative, in one direction or the other, about half the time.
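
One way to make the ‘about half the time’ claim concrete (my reading of ‘at least this informative’: a result at least as lopsided as 15 of 20, for either message) is to look at the prior-predictive distribution of the number of wins under the flat baby prior:

```r
# Prior-predictive (beta-binomial) probability of x wins out of n under a Beta(a, b) prior
predictive <- function(x, n, a, b) {
  choose(n, x) * beta(x + a, n - x + b) / beta(a, b)
}

n <- 20
x <- 0:n
pred <- predictive(x, n, a = 1, b = 1)  # under the flat prior this is uniform: 1/21 each

# Probability of a result at least as lopsided as 15/20, in either direction
sum(pred[x >= 15 | x <= 5])             # about 0.57 -- roughly half the time
```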

Linked resources:

  • 15 (Ex-ante) Power calculations for (Experimental) study design | Statistics, econometrics, experiment and survey methods, data science: Notes
  • Chapter 4 Bayesian statistics | Improving Your Statistical Inferences