How to get data from Facebook ad trials
1. Go to the reporting suite in Meta Ads Manager
2. Specify some filters:
This gets us the screen below.
3. Specify the date range.
4. Export simple results for campaigns
Click 'Reports' in the upper right.
We can 'create a custom report', which saves this for later tweaking, or merely 'export table data'. I will do the latter for now:
Note: I chose CSV and did not include summary rows, to avoid confusion later.
Now I import this data into R (I usually use code but let's do it the interactive way for illustration)...
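If you prefer code to the interactive importer, reading the export is straightforward. A minimal sketch (shown here in Python/pandas, though R's `read.csv` works the same way; the filename and the exact column label are assumptions based on the export described above):

```python
import io
import pandas as pd

def load_ads_export(path_or_buffer):
    """Read an Ads Manager CSV export, dropping any summary row
    (summary rows have a blank 'Campaign name')."""
    df = pd.read_csv(path_or_buffer)
    return df[df["Campaign name"].notna()]

# In practice: load_ads_export("my-ads-export.csv")  # hypothetical filename
# Small inline demo, with a summary row (blank campaign name) at the end:
demo = io.StringIO("Campaign name,Results\nCampaign A,10\nCampaign B,20\n,30\n")
print(load_ads_export(demo))
```

Dropping the blank-name row up front avoids the summary-row confusion noted below.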
It seems that the option 'include summary row' was probably not wanted here; that row, with a blank 'campaign name', could cause confusion.
The export seems to have removed the "bid strategy" column, and added 'reporting starts' and 'reporting ends' columns from the filter. Otherwise, everything seems the same as in the Ads Manager view, although some labels have changed.
We see three tabs:
- Campaigns
- Ad sets for 1 campaign
- Ads for 1 campaign
Campaigns
Here we have 7 campaigns, each with separate budgets, and start and end dates (although these mainly overlap).
It looks like some campaigns were set up for direct comparison, perhaps as an "A/B" test, with the exact same budgets and end dates, and similar names:
Ad sets
Here, there are 52 total 'ad sets' across all campaigns.
I'm going to export this as a csv too, in case it's useful.
Ads
There are also 52 "ads"; it seems in this case, one per ad set:
The information in the 'ads' table seems the same as in the 'ad sets' table, other than a link to preview the ad content itself (which I don't seem to have access to at the moment).
Here the "Effective Giving Guide Lead Generation campaign … ran late November 2021 - January 2022". (Be careful in specifying the dates; the interface is weird.)
After specifying these dates, more information comes up in the basic columns:
For a trial to yield insight, we need to be able to track and measure meaningful outcomes, and connect these to the particular 'arm' of the trial the person saw ... (if they saw any arm at all)
In this section we discuss how to see the results of your promotions and trials, and how to access data sets of these results that you can analyze.
Facebook's Ads Manager and Google Analytics often report results that seem to have discrepancies. Below we describe one particular case and possible explanations.
- Facebook: we have 50k+ unique impressions and 1335 clicks.
- Google Analytics records only 455 page views and 403 users.
- Only about 20 users did any sort of engagement, like scroll or click (if we read it correctly).
JS: main reasons [DR: slightly edited]
1. "Do they click the ad and shut down before the page comes up?" Yes: users may close the page before the redirect fully loads. Facebook will be as generous as possible with its click reporting.
2. If a user clicks on the FB ad twice within 30 minutes, Google Analytics would record that as only a single user and a single session.
3. If a user has JavaScript disabled or doesn’t accept cookies, then Google Analytics doesn’t track.
Leticia at Facebook: these can be mistaken clicks, which is common. You need a pixel to address this, and can change the reported metric to 'landing page view'.
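To get a sense of the size of the discrepancy described above, a quick computation using the figures quoted in this case:

```python
fb_clicks = 1335    # clicks reported by Facebook
ga_pageviews = 455  # page views recorded by Google Analytics
ga_users = 403      # users recorded by Google Analytics

# Share of Facebook-reported clicks that show up as GA page views
tracked_share = ga_pageviews / fb_clicks
print(f"{tracked_share:.1%} of reported clicks appear as page views")
# → 34.1% of reported clicks appear as page views
# i.e., roughly two-thirds of Facebook 'clicks' never register in Analytics,
# consistent with the explanations (abandoned redirects, blocked cookies) above
```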
You may want to see or export crosstabs of one outcome, user feature, or design feature, by another. Sometimes you just want to see these quickly, but this might also be a way to extract the 'raw data' you wish to analyze elsewhere.
Start new pivot table
From within Ads Manager, under 'ads reporting' (interface updated 3 Aug 2022):
1. Click "Create Report" → "Pivot table"
2. As before, make sure you've selected the right date range, and (redo) any relevant filters
Here I add a filter: 'campaign name' contains 'general', because I'm specifically trying to pull down some information on 'which video people saw' in this group (which needs a special setting to access, as noted below).
3. "Customize pivot table" – "Breakdowns": the things you want this to disaggregate across (sums and averages within groups), e.g.:
- the 'campaigns', the 'ad names'
- timing, demographics
Drill down to "Custom breakdowns", "Dynamic Creative Asset", to get it broken down by the text linked to the ads:
However, some breakdowns are not compatible with other breakdowns (maybe for privacy reasons?). For example, if I tick 'Gender' I cannot also break down by 'Image, video, and slideshow', at least in the present case (perhaps because it would narrow down to too few observations?).
4. "Customize pivot table" – "Metrics"
Select the things you want reported, and deselect things that are uninteresting or irrelevant to this case (like 'website purchases'), or numbers that can easily be computed on your own.
Normally I'd suggest leaving out the redundant 'Cost per Result', but it's probably good to keep at least one sanity check on the data.
Other stuff like 'video play time' could sometimes be very relevant, but I'll leave it out for now
5. (Optional) Conditional formatting
This could also be helpful if you are using the Ads Manager tools in situ, but it obviously has no value for downloaded data.
6. Save report for later use, share
If you think the report is useful in-situ, you can also share a link
7. Export the data
As in #extracting-simple-results... (or consider direct import into R using tools like the rfacebookstat package).
Add section: How to set up GA
Some key 'flows and tips'
'Home'
'behavior', 'site content', 'all pages'
remember to set date range!
Acquisition, all traffic, channels: here 'social' (probably) tells you who came from Facebook etc.
Acquisition, all traffic, Source/medium drills down into this
DR: I'm not sure how to get 'all the data', but I have been able to get data on, e.g.,
a set of outcomes,
over a set period of time (a particular month and the same month in the previous year),
broken down by another feature (by city)
Then search and select your desired ‘metrics’ (outcomes) of interest. “Users” and “sessions” seem pretty important, for example.
Next you can break this down by another group such as “city”. You can put in 'filters' too, if you like, but so far I don't see how to filter on outcomes, only on the dimensions or groups.
I don't know an easy way to tell it to “get all the rows on this at once.” but if you scroll to the bottom you can set it to show the maximum of 5000 rows.
Next, scroll up to the top and select export. I chose to export it as an Excel spreadsheet, as this imports nicely into R and other statistical/data programs.
We were able to do this in two goes, but for larger datasets this would be really annoying. I imagine there is some better way of doing this, perhaps using an API interface for Google Analytics to just pull all of this down.
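The "two goes" stitching can be automated with a standard paging loop. A minimal sketch of the pattern (the `fetch_page` function is hypothetical, standing in for whatever returns one page of rows, whether a scripted export or an API call):

```python
def fetch_all_rows(fetch_page, page_size=5000):
    """Collect every row by paging until a short (or empty) page appears.
    `fetch_page(offset, limit)` is a hypothetical stand-in for whatever
    returns one page of up to `limit` rows starting at `offset`."""
    rows, offset = [], 0
    while True:
        page = fetch_page(offset, page_size)
        rows.extend(page)
        if len(page) < page_size:  # a short page signals the last one
            return rows
        offset += page_size

# Demo with a fake 7-row data source and pages of 3
data = list(range(7))
fake_fetch = lambda offset, limit: data[offset:offset + limit]
print(fetch_all_rows(fake_fetch, page_size=3))  # [0, 1, 2, 3, 4, 5, 6]
```

With GA's 5000-row display cap, the same loop just runs once per 5000-row chunk.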
A partial workaround is to apply a 'filter' to discard rows you don't need: click 'advanced' at the top and…
Understanding how this tool works to test different versions of pages, with the GWWC Pledge page trial as a first context.
Mapping the key non-obvious features of running and analyzing these A/B trials using the Google Analytics/Optimize system.
Reporting and considering this in the context of the GWWC Pledge page (options trial)
Clicking on a particular 'experience' in the 'container' (if you have been granted read-and-analyze permission) will open the useful 'Optimize Report' (which Google explains here).
The overall start/end dates and 'sessions' are given first. What are "sessions"? The short answer: 'sessions' are the number of 'continuously active' periods of individual users, so an individual user may have multiple sessions (see #sessions-vs.-users below). Here, there have been 7992 such 'sessions' over 81 days.
I am not sure where we can learn 'how many users there were'.
("View full chart" can give you a day-by-day breakdown of the number of sessions.)
The next section compares 'sessions' and 'conversions' by treatment, and does a Bayesian analysis. This seems the most useful part:
Above, the 'Separate block' (SB) seems to be the best performing treatment. Google calculates a 2.69% conversion rate for this (here, presumably the rate of people checking 'any' of the follow-on boxes).
Regarding the analysis: Google Optimize "uses Bayesian inference to generate its reports... [and] chooses its priors to be quite uninformed." The exact priors are not specified (we should try to clarify this).
But if we take this seriously, we might say something like ...
if our initial priors gave each treatment an equal chance of having the highest conversion rate ('being best'), and assumed a [?beta] distributed conversion rate for each, centered at the overall mean conversion rate ...
then, ex-post, our posterior should be that the SB treatment has an 80% chance of being best, our 'Original' has a 17% chance of being the best ...
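Google does not publish its exact priors, but a 'chance of being best' calculation of this kind can be sketched by simulation: give each arm a weak Beta prior, update it with that arm's sessions and conversions, and count how often each arm's sampled rate is highest. The per-arm counts below are hypothetical stand-ins, not the trial's actual data:

```python
import random

random.seed(0)

# Hypothetical (sessions, conversions) per arm -- NOT the trial's actual data
arms = {"Original": (2600, 60), "Variant A": (2700, 65), "Variant B": (2700, 80)}

def sample_rate(sessions, conversions):
    """Draw a conversion rate from a Beta(1,1)-prior posterior for one arm."""
    return random.betavariate(1 + conversions, 1 + sessions - conversions)

draws = 20000
best_counts = {name: 0 for name in arms}
for _ in range(draws):
    sampled = {name: sample_rate(s, c) for name, (s, c) in arms.items()}
    best_counts[max(sampled, key=sampled.get)] += 1

for name, n in best_counts.items():
    print(f"P({name} is best) ≈ {n / draws:.2f}")
```

The output is the 'probability to be best' column Optimize reports, up to its (unpublished) choice of prior.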
Google also gives interval estimates for the conversion rates for each treatment, with boxplots and (95%) credible-interval statistics:
The grey bar for the baseline is mirrored in all rows. The 95% CI for the 'improvement over the baseline' is given on the right. But this is a rather wide interval. More informatively, if we hover over the image, we are given more useful breakdowns:
Although this does not exactly tell us the 50% interval 'improvement over the baseline' (this would need a separate computation), we can approximately infer this.
But fortunately it is reported in data we can download; see below "Download (top right)".
From that data, we get:
Our 'posterior' probability thus implies (assuming symmetry, I think) that we should put (considering relative changes in the conversion rate, not percentage points):
a 2.5% chance of SB having an 18% (or more) lower rate of conversion than 'Original'
a 22.5% chance on SB being between 18% worse and 4% better
a 25% chance of being 4-20% better
a 25% chance of being 20-36% better
a 22.5% chance of being 36-76% better
a 2.5% chance of being more than 76% better
We can also combine intervals, to make statements like ...
a 50% chance of being 4-36% better
a 50% chance of being at least 20% better
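These combined statements just add up the probability mass between the reported percentiles. A quick check of that arithmetic (my own computation, using the SB 'modeled improvement' percentiles quoted above):

```python
# SB 'modeled improvement' percentiles as reported by Optimize (quoted above)
quantiles = {2.5: -18, 25: 4, 50: 20, 75: 36, 97.5: 76}
levels = {value: pct for pct, value in quantiles.items()}

def prob_between(lo_value, hi_value):
    """Probability mass between two reported improvement values,
    read off the percentile levels they sit at."""
    return (levels[hi_value] - levels[lo_value]) / 100

print(prob_between(4, 36))    # 0.5  -> 'a 50% chance of being 4-36% better'
print(prob_between(-18, 76))  # 0.95 -> the central 95% interval
```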
We report on this further, for this particular case, under #basic-results-outcomes
There is some repetition (can we 'mirror blocks'?)
Above, even though the treatment has been assigned randomly (presumably a close-to-exact 1/3, 1/3, 1/3 split), the number of 'sessions' differs between the treatments ('variants').
Why? As far as I (DR) understand,
while each individual user (at least if they are on the same machine and browsing with cookies allowed) is given the same treatment variant each time...
the same users may 'end' a session (by leaving or being inactive for 30+ minutes), and return later, getting the same treatment but tallying another 'session'. This suggests that users in the "Separate Block" (SB) treatment are returning the most (but also see 'entrances' below).
The final section gives the day-to-day breakdown of the performance of each treatment, presumably along with confidence intervals. This seems relevant for 'learning and improving while doing' but possibly less relevant for our overall comparison of the pages/treatments.
The 'Analytics data' gives us sessions and conversions by day and by treatment.
(Where no session occurs in a day for a treatment, it is coded as blank).
... this gives some other information, mainly having to do with the user experience.
"Unique page views" represent "the number of sessions during which that page was viewed one or more times." ... Recall "sessions" are periods of continuous activity.
"Entrances" seem potentially very important. According to Google:
Sessions are incremented with the first hit of a session, whereas entrances are incremented with the first pageview hit of a session.
In the present context, this suggests that the 'Separate block' page is inspiring users to come back more often, and to spend more time on average.
As noted, essentially: 'Sessions' are the number of 'continuously active' periods of an individual user
Analytics measures both sessions and users in your account. Sessions represent the number of individual sessions initiated by all the users to your site. If a user is inactive on your site for 30 minutes or more, any future activity is attributed to a new session. Users that leave your site and return within 30 minutes are counted as part of the original session.
The initial session by a user during any given date range is considered to be an additional session and an additional user. Any future sessions from the same user during the selected time period are counted as additional sessions, but not as additional users.
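The 30-minute rule above can be made concrete: a new session starts whenever the gap since the user's previous hit is 30 minutes or more. A minimal sketch (my own illustration of the rule, not Google's implementation):

```python
def count_sessions(hit_times_minutes, timeout=30):
    """Count sessions from a sorted list of one user's hit timestamps
    (in minutes). A gap of `timeout` minutes or more starts a new session."""
    if not hit_times_minutes:
        return 0
    sessions = 1
    for prev, cur in zip(hit_times_minutes, hit_times_minutes[1:]):
        if cur - prev >= timeout:  # inactive 30+ minutes -> new session
            sessions += 1
    return sessions

# One user: hits at minutes 0, 5, and 50 (the 45-minute gap starts session 2)
print(count_sessions([0, 5, 50]))  # 2
```

This is also why one user can contribute several sessions, as in the per-treatment session counts discussed above.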
To test content in more depth than an A/B trial permits
Better control over 'who is participating' and how much attention they are paying
Things more towards 'basic background research'
Closer to a 'representative sample'
: Created specifically for academic research. Our impression is that this is among the highest-quality panels, although there is some controversy over this.
CloudResearch: CR approved Mturk
CloudResearch: Prime Panels
Positly: https://www.positly.com/
Qualtrics (panel)
Lucid
Dyndata
I added a few features I thought might be interesting or useful: Was anyone drawn in to pledge? When did each campaign start/end (double-check)? How many unique link clicks?
After logging in and selecting 'all domains'...
Select 'customization', 'custom reports', 'new custom report'
| Variant | 2.5th Percentile Modeled Improvement | 25th Percentile Modeled Improvement | Modeled Improvement | 75th Percentile Modeled Improvement | 97.5th Percentile Modeled Improvement |
|---|---|---|---|---|---|
| Original | 0% | 0% | 0% | 0% | 0% |
| Pledge Before Try Giving | -50% | -33% | -23% | -11% | 18% |
| Separate Block For Other Pledges | -18% | 4% | 20% | 36% | 76% |