library(redditadsR)
library(dplyr)
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#>     filter, lag
#> The following objects are masked from 'package:base':
#>
#>     intersect, setdiff, setequal, union
library(ggplot2)
The goal here is to outline, in a couple of paragraphs and a few lines
of code, some simple ways in which we can use the Windsor.ai API and the R
package redditadsR to gain insights into marketing campaign
performance in Reddit Ads. The nice thing about Windsor.ai is that you
can have all of your marketing channels aggregated in a single place
and then access all the data at once using this package. In this case,
however, the package is focused on getting data from Reddit Ads
campaigns. Of course, once the data is in R you can do much
more than the examples below, and work on analysis, predictions or
dashboards.
After we create an account at Windsor.ai and obtain an
API key, collecting our data from Windsor into R is as easy as:
my_redditads_data <- fetch_redditads(api_key = "your api key",
                                     date_from = Sys.Date() - 100,
                                     date_to = Sys.Date(),
                                     fields = c("campaign", "clicks",
                                                "spend", "impressions", "date"))
This code will collect data for the last 100 days. Let's take a look at the data we just downloaded to get a better idea of the structure and the type of information included.
str(my_redditads_data)
#> 'data.frame': 14 obs. of 5 variables:
#> $ campaign : chr "retageting APAC" "retargeting UK&CO" "retageting APAC" "retargeting UK&CO" ...
#> $ clicks : num 4 0 5 7 0 0 4 2 3 0 ...
#> $ spend : num 2.57 2.48 2.39 2.54 0.94 0.71 2.59 2.12 2.43 0.13 ...
#> $ impressions: num 806 693 819 689 299 190 682 688 822 135 ...
#> $ date : chr "2022-09-28" "2022-09-28" "2022-09-29" "2022-09-29" ...
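Since str() shows that the date column comes back as character, it can be convenient to convert it to Date and compute a few per-campaign metrics before plotting. The following is a minimal sketch assuming the column names shown above; the derived metric names (ctr, cpc) are just illustrative:

# convert the date column from character to Date for time-based plots
my_redditads_data$date <- as.Date(my_redditads_data$date)

# aggregate per campaign and derive click-through rate and cost per click
my_redditads_data %>%
  group_by(campaign) %>%
  summarise(clicks = sum(clicks),
            spend = sum(spend),
            impressions = sum(impressions)) %>%
  mutate(ctr = clicks / impressions,
         cpc = spend / clicks)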
Now we can analyze our Reddit Ads data. For instance, let's compare the two campaigns we have to see which one performed better over the last 100 days.
ggplot(my_redditads_data, aes(y = clicks, fill = campaign)) + geom_boxplot()
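Besides the per-campaign distribution, a quick look at the daily trend can also be informative. A small sketch with the same columns, assuming the date column has been converted to Date as above:

# daily clicks over time, one line per campaign
ggplot(my_redditads_data, aes(x = date, y = clicks, colour = campaign)) +
  geom_line() +
  labs(x = "Date", y = "Clicks")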
It looks like the APAC campaign is performing better than UK&CO in terms of clicks. Now let's see whether this difference is statistically significant by fitting a generalized linear model, since our response variable is the number of clicks, a count that can be modelled with a Poisson distribution.
lmod <- glm(clicks ~ campaign, data = my_redditads_data, family = "poisson")
summary(lmod)
#>
#> Call:
#> glm(formula = clicks ~ campaign, family = "poisson", data = my_redditads_data)
#>
#> Deviance Residuals:
#> Min 1Q Median 3Q Max
#> -2.3905 -1.6036 -0.7599 0.6372 3.5065
#>
#> Coefficients:
#> Estimate Std. Error z value Pr(>|z|)
#> (Intercept) 1.0498 0.2236 4.695 2.67e-06 ***
#> campaignretargeting UK&CO -0.7985 0.4014 -1.989 0.0467 *
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> (Dispersion parameter for poisson family taken to be 1)
#>
#> Null deviance: 43.735 on 13 degrees of freedom
#> Residual deviance: 39.456 on 12 degrees of freedom
#> AIC: 66.147
#>
#> Number of Fisher Scoring iterations: 6
We can see that the difference between campaigns is statistically significant and that the UK&CO campaign has an estimated log mean number of clicks about 0.80 lower than the APAC campaign, i.e. roughly exp(-0.80) ≈ 0.45 times as many expected clicks.
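Because the Poisson model works on the log scale, the coefficient is easier to read after exponentiating it into a rate ratio. A small sketch using base R, with Wald confidence intervals via confint.default():

# exponentiate the coefficients to get rate ratios with Wald confidence intervals
exp(cbind(rate_ratio = coef(lmod), confint.default(lmod)))

The rate ratio in the UK&CO row should come out at roughly 0.45, i.e. that campaign gets about 45% of the expected clicks of the APAC campaign.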