Geographic segmentation/blocked randomization

Discussion of blocking/randomizing treatments by post/zip code or other region, allowing us to more accurately tie treatments to ultimate outcomes


Last updated 2 years ago


Measurement needs are varied and come with a variety of limitations, e.g., data availability, ad targeting restrictions, wide-ranging measurement objectives, budget availability, time constraints, etc.

Kerman et al, 2017

Why 'Geo experiments'

In many contexts, the route to a meaningful outcome (e.g., a GWWC pledge) is a long one, and attribution is difficult. An individual may have first been influenced by (1) a YouTube ad seen while watching a video on her AppleTV, then (2) by a friend's post on Facebook, and finally moved to act (3) after a conversation at a bar and (4) a visit to the GWWC web site on her phone.

The same individual may or may not be trackable across these touchpoints through 'cookies' and 'pixels', but such tracking is already limited and imprecise, and new privacy legislation is making it harder.

"Geographic targeting" of individual treatments/trials/initiatives/ads may help us better track outcomes, attribute them, and draw inferences about 'what works'. E.g., we might run a 'lift test':

  1. select a balanced random set of US zip codes to receive a particular repeated YouTube ad promoting GWWC (the "treated group");

  2. compare the rate of GWWC visits, email sign-ups, pledges, and donations over the next 6 months from these zip codes relative to all other zip codes (possibly discarding, or drawing additional inference from, zip codes adjacent to the treated group).

We could also run multi-armed tests (of several types of ads or other treatments), with a similar setup as above.
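The lift test sketched above can be illustrated in a few lines of Python. This is a minimal sketch with entirely hypothetical data (the zip codes, pledge counts, and effect size are invented for illustration); a real analysis would also need standard errors, e.g., via regression or permutation tests.

```python
import random
import statistics

random.seed(42)

# Hypothetical universe of zip codes (placeholder IDs, not real zip codes).
zip_codes = [f"{i:05d}" for i in range(200)]

# Randomly assign half the zip codes to receive the YouTube ad ("treated").
shuffled = random.sample(zip_codes, k=len(zip_codes))
treated = set(shuffled[: len(shuffled) // 2])

# Suppose we then observe pledges per zip code over 6 months; here we
# simulate them, with a hypothetical true lift of 1.5 pledges per treated zip.
pledges = {z: random.gauss(10, 2) + (1.5 if z in treated else 0) for z in zip_codes}

# Lift estimate: mean outcome in treated zips minus mean in control zips.
treated_mean = statistics.mean(pledges[z] for z in treated)
control_mean = statistics.mean(pledges[z] for z in zip_codes if z not in treated)
lift = treated_mean - control_mean
print(f"Estimated lift: {lift:.2f} pledges per zip code")
```

A multi-armed version would assign zip codes to several treatment arms and compare each arm's mean to the control mean in the same way.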

There are a few well-known and well-researched approaches; quoting Kerman et al. (2017):

Geo experiments (Vaver and Koehler, 2011, 2012) meet a large range of measurement needs. They use non-overlapping geographic regions, or simply “geos,” that are randomly, or systematically, assigned to a control or treatment condition. Each region realizes its assigned treatment condition through the use of geo-targeted advertising. These experiments can be used to analyze the impact of advertising on any metric that can be collected at the geo level. Geo experiments are also privacy-friendly since the outcomes of interest are aggregated across each geographic region in the experiment. No individual user-level information is required for the “pure” geo experiments, although hybrid geo + user experiments have been developed as well (Ye et al., 2016). Matched market tests (see e.g., Gordon et al., 2016) are another specific form of geo experiments. They are widely used by marketing service providers to measure the impact of online advertising on offline sales. In these tests, geos are carefully selected and paired. This matching process is used instead of a randomized assignment of geos to treatment and control. Although these tests do not offer the protection of a randomization experiment against hidden biases, they are convenient and relatively inexpensive, since the testing typically uses a small subset of available geos. These tests often use time series data at the store level. Another matching step at the store level is used to generate a lift estimate and confidence interval.

Where and how can we geographically block treatments?

| Context/location | Geographic blocking? (How) |
| --- | --- |
| YouTube ads | |
| Facebook ads | |
| USA | zip codes |
| Australia | |

What if we can only apply the treatment to one, or a few, of many groups?

We may still be able to draw valuable inferences, under specified conditions, through 'difference in difference', 'event study', and 'time-based' approaches. We consider these in the next section: Difference in difference/'Time-based methods'
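The core difference-in-difference calculation is simple enough to show here. This is a minimal sketch with hypothetical aggregate outcomes for one treated region and a control group; it assumes the 'parallel trends' condition, i.e., that absent treatment both would have changed by the same amount.

```python
# Hypothetical aggregate outcomes (e.g., pledges) before and after the campaign.
pre = {"treated": 100.0, "control": 90.0}
post = {"treated": 130.0, "control": 105.0}

treated_change = post["treated"] - pre["treated"]   # 30.0
control_change = post["control"] - pre["control"]   # 15.0

# DiD estimate: the treated region's change net of the common time trend.
did = treated_change - control_change
print(f"DiD estimate: {did:.1f}")  # DiD estimate: 15.0
```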
