Pre-Giving Tuesday email A/B test

Context: Donation 'upsell' to existing pledgers

Question: Are effectiveness-minded (EA-adjacent) donors and pledgers more motivated to donate by

  1. "A": (non-quantitative) presentation of impact and effectiveness (as in standard OftW pitch)

  2. "B": Emotional appeals and 'identified victim' images

Further information on the experiment and outcomes can be found in the in-depth replicable analysis, organized in a dynamic document here

General idea, main 'hypothesis'

Are effectiveness-minded (EA-adjacent) donors and pledgers more motivated to donate by

  1. "A": (non-quantitative) presentation of impact and effectiveness (as in standard OftW pitch)

  2. "B": Emotional appeals and 'identified victim' images

In the context of One for The World's (OFTW) 'giving season upselling campaign', potentially generalizable to other contexts.

Academic framing: "Does the Identifiable Victims Effect (see e.g., the meta-analysis by Lee and Feeley, 2016) also motivate the most analytical and committed donors?"

Background and context

One for The World's (OFTW) 'giving season upselling campaign'

A total of 10 emails were sent over the course of November in preparation for GivingTuesday

Point of contact (at organization running trial)

Implementation and management: Chloe Cudaback, Jack Lewars

Timing of trial

: November 10, 18, and 23, 2021 (dates may be delayed for feasibility)

Digital location where project 'lives' (planning, material, data)

The present GitBook, the Google Doc linked below, the preregistration (OSF), and the GitHub/git repo

Environment/context for trial

Emails ... to existing OftW pledgers (asking for additional donations in Giving Season)

All 10 emails had the same CTA: make an additional $100 donation for the giving season/GivingTuesday on top of the recipient's recurring monthly pledge donation.

Participant universe and sample size

A series of three campaign emails will be sent out by OftW to their regular email lists, to roughly 4000 participants, as described.

Key treatment(s)

:

  • A list of ~4500 contacts (activated pledgers) was split into two treatment groups.

  • Treatment Group A received emails focused on the contact's impact.

  • Treatment Group B received emails focused on individual stories of beneficiaries.

See the preregistration for treatment specifics.

Treatment assignment procedure

See the preregistration, section 'How many ... conditions'.
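
For illustration only, a minimal R sketch (R chosen since the analysis lives in R Markdown) of a 50/50 random split of the contact list into the two arms. The seed, the column name, and the even split are assumptions for the sketch, not the procedure actually used; the real assignment procedure is documented in the preregistration.

```r
# Illustrative randomization sketch (not the actual assignment procedure).
# Assumptions: a contact list of ~4500 activated pledgers and an even split.
set.seed(2021)
contacts <- data.frame(id = seq_len(4500))
contacts$treatment <- sample(rep(c("A_impact", "B_emotion"), length.out = nrow(contacts)))
table(contacts$treatment)  # 2250 contacts per arm
```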

Outcome data

Targeting: Donation incidence and amount in the relevant 'giving season' and over the next year, specifically described in the prereg under 'key dependent variable'.

Data storage/form:

  • MailChimp data (Chloe is sharing this),

  • Reports on donations (Kennan is gathering this)
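
As a sketch of how these two sources might be combined for the analysis, the R snippet below assumes illustrative file and column names (e.g., an email column shared by both exports); the real field names will depend on the MailChimp export and the donation reports.

```r
# Illustrative merge of the two data sources; file and column names are assumptions.
mailchimp <- read.csv("mailchimp_export.csv")   # assumed columns: email, treatment, opened, clicked
donations <- read.csv("donation_report.csv")    # assumed columns: email, amount, date

merged <- merge(mailchimp, donations, by = "email", all.x = TRUE)
merged$donated <- !is.na(merged$amount)         # donation incidence per contact
```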

Optional/suggested additions

Planned analysis methods: preregistration link here

Cost of running trial/promotion: Time costs only (as far as I know)

Proposed/implemented design (language)

(Link)

Pre-registration work

Pre-registered on OSF in 'AsPredicted' format, content incorporated here

Preliminary results

Overview:

The Emotion treatment leads to significantly fewer people opening emails, but more people clicking on the in-email donation link (relative to the standard Impact information treatment). However, we are statistically underpowered to detect a difference in actual donations. More evidence is needed.

Chloe: those emails that appealed to emotional storytelling performed better (higher in-email click rate) than those that were impact-focused.

DR, update: I confirm that this is indeed the case, and this is statistically significant in further analysis.

Evidence on donations

(preliminary; we are awaiting further donations in the giving season) ...

This is 'hard-coded' below. I intend to replace this with a link to, or embed of, a dynamic document (R Markdown). The quantitative analysis itself, stripped of any context and connection to OftW, is hosted HERE




Treatment 1 (Impact): We record

  • 1405 unique emails listed as opening a 'control' treatment email

  • 29 members clicking on the donation link in an email at least once (2.1% of openers)

  • 15 members making some one-time donation in this period (about 1.1% of openers, 0.075% of total)

  • 8 unique emails donating (likely) through the link (0.57% of openers / 0.04% of total)

Treatment 2 (Emotional storytelling):

  • 1190 unique emails listed as opening an email (a significantly lower 'open rate', assuming equal shares of members were sent each set of treatment emails)

  • 56 members clicking on the donation link in an email at least once (4.7% of openers)

  • 11 members making some one-time donation in this period (about 0.9% of openers, about 0.055% of total)

  • 9 unique emails donating (likely) through the link (0.76% of openers / 0.045% of total)

Note: We may wish to treat the 'email send' as the denominator, as the differing subject lines seem to have led to different numbers of opens
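
For transparency, a minimal R sketch that hard-codes the counts above and compares incidence between the two arms; the actual analysis lives in the dynamic document linked above, and the choice of denominator (openers vs. email sends, per the note) is left open here.

```r
# Counts as reported above (hard-coded; the dynamic R Markdown analysis supersedes this).
opens  <- c(impact = 1405, emotion = 1190)  # unique openers per arm
clicks <- c(impact = 29,   emotion = 56)    # clicked the in-email donation link at least once
donors <- c(impact = 15,   emotion = 11)    # made some one-time donation in this period

# Rates per unique opener.
round(100 * clicks / opens, 2)
round(100 * donors / opens, 2)

# Two-proportion test of click incidence among openers (Impact vs Emotion).
prop.test(x = clicks, n = opens)

# The same comparison for donations is far less informative, given how rare they are.
prop.test(x = donors, n = opens)

# To use the 'email send' as the denominator instead (per the note above),
# substitute the per-arm send counts from MailChimp for 'opens'.
```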


'Initial impressions of preliminary outcomes'

  • The conversion rates are rather low (0.5%) ... but maybe high enough to justify sending these emails? I'm not sure.

  • While people are more likely to open at least one Impact email, they are more likely to click to donate at least once if assigned the Emotion email

  • But we can't say much about actual donations.

  • Given the low conversion rates we don't have too much power to rule out 'proportionally large' differences in conversion rates (or average amounts raised) between treatments ...

The figure above seems like a good summary of the 'results so far' on 'what we can infer about relative incidence rates', presuming I understand the situation correctly. On the Y-axis I plot how likely we would be to see a difference in donation incidence 'as small or smaller in magnitude' than the one we observe in the data, if the 'true difference in incidence rates' were of the magnitude given on the X-axis. (A generic sketch of this calculation follows the list below.)


  • Our data is consistent with 'no difference' (of course) ... but it's also consistent with 'a fairly large difference in incidence'

  • E.g., even if one treatment truly led to 'twice as many donations as the other', we would still have a 33% chance or so of seeing a difference as small as the one we see

  • We can reasonably 'rule out' differences of maybe 2.5x or greater

  • Main point: given the rareness of donations in this context, our sample size doesn't let us make very strong conclusions in either direction about donations
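
To make the figure's calculation concrete, here is a generic R sketch of the simulation it describes: for a hypothesized pair of 'true' incidence rates, how often would we observe a difference at least as small in magnitude as the one in the data? The function and its arguments are placeholders; the actual inputs (which donation measure, which denominator, which sample sizes) are those used in the analysis hosted in the repo.

```r
# Generic sketch of the figure's Y-axis quantity: the probability of observing a
# difference in incidence no larger in magnitude than 'obs_diff', if the true
# per-arm rates were 'rate_a' and 'rate_b'. All inputs are placeholders.
pr_diff_as_small <- function(rate_a, rate_b, n_a, n_b, obs_diff, n_sim = 10000) {
  p_hat_a <- rbinom(n_sim, n_a, rate_a) / n_a   # simulated Impact-arm incidence
  p_hat_b <- rbinom(n_sim, n_b, rate_b) / n_b   # simulated Emotion-arm incidence
  mean(abs(p_hat_b - p_hat_a) <= abs(obs_diff))
}

# Example use (symbolic): with p the Impact-arm incidence and d the observed
# difference, pr_diff_as_small(p, 2 * p, n_a, n_b, d) gives the figure's value
# at X = 'the true incidence is twice as high in one arm as in the other'.
```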

Last updated