Webinar recap: Data Dump: Overcoming the Challenges of Podcast Advertising Measurement
We packed a ton of information into our webinar last week, and we definitely recommend listening to the recording, but if you’re looking for the highlights, you’ve come to the right place. With a focus on tools and measurement, Cameron and Krystina - well, mostly Krystina - discussed the challenges and merits of the most popular methods for planning and tracking podcast media.
If you have some questions of your own, or would like to join the conversation, Cameron and Krystina will be hosting a room in Clubhouse on March 16th!
Transcript has been heavily edited for brevity and clarity.
Speakers:
Cameron Hendrix, CEO and Co-Founder of Magellan AI
Krystina Rubino, Head of Offline Marketing at Right Side Up
KR: The reason we're actually here today is that there are 1.75 million podcasts, but very few of them are actually monetized.
CH: Since 2017, we've sampled more than 30,000 different podcasts. If you look at every podcast we've sampled, we've detected ads on just under half of those. For this conversation, we looked at just the last 11,000 or so podcasts that we sampled most frequently in 2020. And what we found is that about 75% of those were monetized, meaning that we picked up ads, either in the form of one podcast advertising on another podcast, or an advertiser actually placing a spot. And it's really pretty amazing, because we see the same 6,000-8,000 podcasts coming up over and over again with most advertisers in the space. So as much as there are 1.75 million different podcasts out there, the universe that advertisers seem to be operating in is a much smaller slice of that.
KR: And I can actually back that up with our performance indexing tool, an internal tool we've developed at Right Side Up to not only aggregate performance across clients, but also index it against each other, because everybody's got different CPA goals, customer acquisition costs, etc. We've purchased over 1,700 shows across 170 different networks and direct partners, for 50 different advertisers. Think about that: 1,700 shows out of 1.75 million podcasts. The reason that we're actually here is that nobody knows what to buy unless you have access to data that tells you what to buy, or platforms like Magellan AI.
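Editor's note: here's a minimal sketch of what that kind of performance indexing might look like. The actual Right Side Up tool is internal, so the data, field names, and indexing formula below are illustrative assumptions, not the real methodology.

```python
# Hypothetical example: index each show's CPA against that advertiser's own
# CPA goal, so shows become comparable across advertisers with different goals.
from statistics import mean

campaigns = [
    {"show": "Show A", "advertiser": "Brand 1", "cpa": 45.0, "cpa_goal": 60.0},
    {"show": "Show A", "advertiser": "Brand 2", "cpa": 130.0, "cpa_goal": 100.0},
    {"show": "Show B", "advertiser": "Brand 1", "cpa": 80.0, "cpa_goal": 60.0},
]

# Index = goal / actual CPA: above 1.0 beats that advertiser's goal, below misses it.
for c in campaigns:
    c["index"] = c["cpa_goal"] / c["cpa"]

# Aggregate by show so performance lands on a single comparable scale.
by_show: dict[str, list[float]] = {}
for c in campaigns:
    by_show.setdefault(c["show"], []).append(c["index"])

for show, indexes in sorted(by_show.items()):
    print(show, round(mean(indexes), 2))  # Show A 1.05, Show B 0.75
```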
CH: When you get down to it, it's just 6,000-8,000 podcasts that are really coming up over and over again. We also looked into ad serving technology as we were thinking about understanding the market, and found that in 2020, just under half of the podcasts we sampled were using dynamic insertion. If you look at the last 90 days, though, that's actually up closer to 60%.
KR: Using dynamic insertion doesn't necessarily mean that they're selling their impressions or selling their downloads on an impression basis. Dynamic insertion has become this multifaceted term in our industry: it doesn't just describe the way we knit ads into episodes, regardless of how they're sold; we've also started using it as parlance for impression-based media.
CH: We put together The Podscape because we saw so many companies popping up, not just publishers and content creators, but a lot more ad tech companies, and it's really hard to figure out how everyone fits together. The way we try to fit into the podcast advertising landscape is by taking all the terrible parts of Krystina's job and writing software to do those things. That's how we think about media planning and really supporting media planners. How do Magellan AI and other tools fit into your process today?
KR: The planning process isn't that dissimilar from any other channel where you're buying relationally. We use Magellan AI for the same kind of planning that we've been doing in the channel for a long time, which is basically cohort-based analysis. Regardless of how complicated this Podscape is, sadly, the process I'm about to give you is basically the best that we as an industry have come up with so far. We look at cohorts of advertisers who might be targeting a similar consumer, look at shows that have been proposed by different partners, and ask: what does their monetization history look like? In Magellan AI, you can pull up renewals; if you see a show that doesn't have a ton of advertisers renewing, you can directionally say maybe that one's not so great. It's all of these inputs that you're using subjectively.
CH: It's definitely a bit of a challenge to figure out where all the bodies are buried. We're taking rate card data and our estimates for how large shows are, and it's really helpful because then you can contextualize: is an advertiser spending a million dollars in the space? $10,000? $100,000?
I would like to think we're relatively accurate, but the reality is, we're just never going to know when you're getting a two-for-one ad deal. That's really between the advertiser and the podcast publisher. We're really here to provide a little bit of transparency into how we think the ad market is trading, but the reality is, it's only going to be directional.
So should we talk about campaign measurement and some of the tools that are out there?
KR: I have bought a lot of survey-based methodology. It's not that I don't think the attribution tools are helpful directionally or interesting, or that I don't think the technology has promise.
My problem is that we are racing towards adopting technology, like pixel-based attribution, that is becoming obsolete in other industries. IDFA is setting up Apple and Facebook for a hell of a fight, and in podcast advertising, we're sitting here going, "pixels are the future." It doesn't make sense. I want us to be able to use the current technology available, and device graph matching, to get at something that feels a little better than survey-based methodology.
And I know we actually had a partner ask ahead of this webinar whether we require any attribution as part of our media buys. The answer is no; we recommend advertisers pull in the technology based on the merits of the technology, and the same goes for publishers. Media spend from an advertiser shouldn't be telling you how to run your business, because I think we have to evaluate technology on its own merits and not make some of the same mistakes that we've made in digital. We need to give advertisers and publishers control from the outset. Otherwise, we're gonna wind up repeating the sins of the past. And I'm really trying hard not to do that, because eventually I hated buying banners, and I love podcast advertising.
CH: What about brand lift studies? Is that something that you've used?
KR: I have spent so much money on brand lift studies in my career, and they are always interesting directionally. But if you're trying to use a brand lift study to optimize a performance-based advertising campaign, you're gonna be there for a while. They rarely hit statistical significance on a placement basis. I think they're really helpful for awareness, sentiment, and consideration monitoring, like when you're doing hearts-and-minds campaigns and you're affecting behaviors on a different scale. That's a little different.
CH: Got it. So we've talked about pixel-based attribution and some of the challenges there, like scaling it to an entire campaign, and about brand lift studies having their place. But what other ways do you recommend marketers, especially those coming from the Facebook world, think about measurement for their campaigns?
KR: By far the best practice is still survey-based methodology. I know, it's not the answer that anybody wants. It's not the answer I want. A lot of the survey data (actually, all of it) is first-party advertiser data, and we tend to lean most heavily on first-party data when we're making media decisions.
But survey-based attribution actually lets you level-set across all of your channels, so that you're not adding one plus one plus one plus one and getting 17 when it comes to looking at the total sum of your attribution.
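Editor's note: a minimal sketch of that level-setting idea, with hypothetical numbers. The point is that platform-reported conversions can sum to far more than your actual orders, while survey shares are forced to sum to 100%.

```python
from collections import Counter

# "How did you hear about us?" answers from a post-purchase survey.
survey_responses = ["podcast", "facebook", "podcast", "search",
                    "facebook", "podcast", "search", "facebook"]

# Conversions each platform's own attribution claims credit for.
platform_reported = {"podcast": 400, "facebook": 900, "search": 500}
actual_conversions = 1000  # real orders in the same window

# Platform claims sum to 1,800 "conversions" from only 1,000 orders:
print(sum(platform_reported.values()))  # 1800

# Survey shares sum to 1.0 by construction, so allocating actual orders
# by share level-sets the channels against each other.
counts = Counter(survey_responses)
shares = {ch: n / sum(counts.values()) for ch, n in counts.items()}
leveled = {ch: round(s * actual_conversions) for ch, s in shares.items()}
print(leveled)  # {'podcast': 375, 'facebook': 375, 'search': 250}
```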
CH: One of the things that I always think about is lowering the barrier to entry for advertisers coming into the space. What are you thinking about in that context?
KR: I actually don't think the barriers to entry are that high.
Generally, there is no minimum that you need to test in this channel. Anyone telling you "you have to spend a quarter of a million dollars to find out a true negative or positive" is just not right. We generally recommend an initial starting budget between $50,000 and $150,000 over seven to 10 weeks, roughly $5,000 to $20,000 a week. There are outliers in either direction; we do launch quarter-million-dollar tests all the time, but that's because, in proportion to the rest of that advertiser's media mix, you need to spend $300,000 to get a blip, to feel it, and to get the actual true positive or negative you need to move forward.
CH: We see a ton of advertisers coming into podcasts right after they've gotten to the point where they're like, "I don't want to be completely reliant on Facebook and Instagram for the rest of my company's existence." Podcasting is a great place to go to test alternate channels before TV; TV has to make sense first. And that's why you hear an advertiser in podcasts, and then 6-12 months later, you see them doing an ad campaign on TV. Pretty amazing.
Audience Question: Do you find you can use DAI successfully for performance marketing? Or do you only leverage it for awareness?
KR: I don't only use it for awareness. When we think about DAI, it will work for performance, but you have to flight it differently than you flight baked-in ads. If we're talking about broad-based programmatic media and run-of-site and run-of-show and all of that stuff: no, that does not back out consistently for advertisers.
And what I'm going to tell you, which nobody who sells inventory on this webinar is going to like, is that programmatic podcast ad technology was developed to monetize a glut of inventory on the interwebs. We do not have a glut of inventory in podcasting; it's a finite inventory supply. Also, the solutions that are available are not truly programmatic. The closest we have right now is a demand-side platform that lets you buy across exchanges in theory, or basically one exchange that they run. But there's no transparency.
As an advertiser, I can't find out where my ads ran. I have horror stories from the early days of digital where I can confirm that yes, my ads did run in adult content. I don't want us to use it as a one-size-fits-all tool, especially not until we know where the heck our ads are going and what we're actually paying. Because the CPMs right now are between $20 and $40 for behaviorally targeted inventory. There's no way we're spending that, and there's no way it's backing out.
KR: [answering audience question] I've used post-purchase pop-ups to measure channel effectiveness, especially for podcasts. I am the biggest fan of a post-purchase survey: put it after your most difficult funnel activity, and that is where you will get the real media insights you need to prioritize your media spend, not just in podcasts but for all-channel effectiveness, to your point.
Audience Question: In regards to top shows at Apple, do you consider it a win if you purchase a new show at 100,000 downloads an episode and it goes to 150k an episode?
KR: Absolutely. But that is exactly the kind of room that we have right now in CPMs, because the second that stops being possible for advertisers, y'all are getting $15 CPMs on the regular. This is the chicken and egg: we're projecting downloads, and we're doing it conservatively. The advertiser, because they're buying on a fixed-cost spot basis, might actually get some additional impressions. But on the flip side, the publisher doesn't have to go back and reconcile and redeliver if the show doesn't deliver, so it's definitely challenging.
I understand that on the publisher end there's this desire to make sure, from an inventory yield perspective, that you're making the most that you can. But you may be making the most that you can by leaving a small amount of inventory unsold and getting premium CPMs from advertisers like us. You have to look at it on a revenue-per-user basis. And there's actually some really good stuff published on yield management on the digital side of the industry that should help partners figure that out.
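Editor's note: a quick sketch of the spot-pricing math behind that exchange, using a hypothetical negotiated $25 CPM. On a fixed-cost spot priced off projected downloads, over-delivery lowers the advertiser's effective CPM at no extra cost.

```python
projected_downloads = 100_000
actual_downloads = 150_000
rate_cpm = 25.0  # hypothetical negotiated CPM, dollars per 1,000 downloads

spot_cost = rate_cpm * projected_downloads / 1000      # $2,500, fixed upfront
effective_cpm = spot_cost / (actual_downloads / 1000)  # what you really paid
print(f"${effective_cpm:.2f} effective CPM")           # $16.67 effective CPM
```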
Audience Question: What's the minimum level of return on ad spend you'd consider a success for a brand's initial test in podcasts?
KR: Like any channel, we can't guarantee ROI with a first test. There are going to be risks associated with testing any new channel, including Facebook. But in terms of return on ad spend, we generally find that the industry average for renewing from a first test is somewhere between 20 and 30%. Our renewal rates at Right Side Up are actually significantly higher, somewhere in the 40 to 50% range. I would say you can probably expect a return on ad spend of 0.5 or less (for example, $25,000 in attributable revenue on a $50,000 test) on a first test that ages out without any optimization to media spend. I have seen initial tests pay back at two or three to one, but I want to caution that it's gonna be somewhere in the middle.
Audience Question: How do you determine if a show is going to be a total flop versus having some long-term scalability? Are there past-campaign indicators you look at?
KR: I have a really good link that I can share. It's a blog post that we did last year called Minding the Ramp; it's on Right Side Up's website. There are markers after a show's first integration, second, third, etc.; if we see a show performing at a certain threshold, we can actually know whether or not it'll pay back for the first test, even before the campaign ends. When we think about post-campaign measurement, you should be looking at performance somewhere between three and six weeks out from your last spot. I've seen shows pay back two months after we launched our last integration: we decided not to renew because it wasn't backing out, and then two months later, we'll go to renew the show, now that it is backing out, and it won't be available. So you have to make sure that when you're looking, your performance window is set appropriately.
And frankly, that's another challenge with pixel-based attribution, because there are no standard windows for attribution the way that there are in digital.
KR: [answering audience question] I like a 70/20/10 split whenever we can do it for testing. So 70% of your test budget should be in really solid, core tests; 20% should be more aggressive tests; and then 10% is, you know, we all have fun words for it: fun money, the F-it bucket, basically your crazy things. Like when I was in house, I spent 10% of my budget on sponsoring an audio fiction podcast in 2018. It did not work, but it was great, and it wound up having other performance and awareness implications.
KR: [answering audience question] Yeah, every three to four weeks on a single show; I like to say one ad every three weeks as kind of a default. But I would say we tend to go between one by two, one by three, or one by four; one by four means more of a monthly cadence. That's more useful for the bigger shows where you're doing a lot of media spend that has to back out on the back end. When in doubt, go every three weeks. If there are other considerations, like if it's an awareness campaign and I have to do something in Q4 and get everything in market by Black Friday, I'm going to go hard and go one by two. I may not be as efficient as I normally would, but if I have seasonality and other tailwinds behind me, it should carry me through.
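Editor's note: a small illustration of those cadences for a weekly show, reading "one by N" as one ad every N episodes (so every N weeks for a weekly show). The helper below is hypothetical, not a Right Side Up tool.

```python
from datetime import date, timedelta

def flight_dates(start: date, cadence_n: int, num_spots: int) -> list[date]:
    # One ad every `cadence_n` weekly episodes, i.e. every `cadence_n` weeks.
    return [start + timedelta(weeks=cadence_n * i) for i in range(num_spots)]

# Default "one by three" cadence: spots in weeks 0, 3, 6, 9.
print(flight_dates(date(2021, 3, 1), cadence_n=3, num_spots=4))
# Aggressive pre-Black Friday "one by two": spots in weeks 0, 2, 4, 6.
print(flight_dates(date(2021, 10, 4), cadence_n=2, num_spots=4))
```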
We hope you learned something new to apply to your podcast advertising campaigns and we look forward to seeing you in Clubhouse!
Ready to check out how Magellan AI could fit into your podcast media planning? Request your custom demo.