Attribution is the ability to track the performance of each marketing effort and determine which of them are profitable and effective and which are not.

In today’s digital world, attribution is discussed mainly in terms of marketing channels – if a Facebook ad led to an app download, a marketer can see the conversion through Facebook’s platform (i.e. the download was attributed to the Facebook ad). In the pre-digital marketing world, it was much harder to attribute a conversion to a marketing channel: marketers couldn’t tell whether a billboard or a TV ad led a customer into a store to make a purchase; they were limited to seeing that their campaigns led to an increase in revenue.

Marketing Attribution in a Non-Digital Environment

Before the full adoption of the internet, tracking the performance of ad campaigns was inherently limited. While there were a variety of methods companies used to gain insights into their consumers’ behaviors and preferences, most if not all were ineffective and overpriced.

Companies used coupons, offering discounts to consumers who presented a piece of paper entitling them to one. Tracking large volumes of coupons, sometimes across multiple branches (in the case of big chain stores), without digital means is nearly impossible (or extremely expensive). Imagine Walmart or Target trying to track coupon usage across their hundreds of branches in the US alone.

In some cases, consumers would volunteer information about what made them purchase (through feedback surveys or casual conversation) – again, a method that provides only partial information and doesn’t scale.

For example, think back on a post-purchase Give Us Feedback survey, where consumers relay their experience in the store and the service they received. Consumers who chose to fill it out tended to be those who already had a positive experience, which means the feedback is gathered with a positive bias, rendering the results unreliable.

On top of that, the paper answers have to be read and analyzed in order to draw conclusions. Doing so takes time and money; marketers are left with partial, unreliable insights (only some consumers, mostly those who had a positive experience, filled it out), and the conclusions are drawn long after the fact (depending on the size of the company, gathering the data can take months).

The Role of Marketing Agencies

The inability to track and gain insights from different marketing efforts led advertisers to hire marketing agencies. The agencies, in turn, would spend tremendous budgets on branding campaigns that were essentially impossible to measure.

The idea behind a branding campaign is to get widespread exposure for the product, company, or brand while delivering a key message, like Nike’s famous Just Do It. Not only did everyone know Nike as a sports brand, everyone knew this key message. Branding campaigns are effective for creating familiarity with a brand, building a sense of trust, and conveying a key message by which consumers recognize the brand.

The problem with branding campaigns, beyond the substantial budgets they require to be effective, is that after all the time, effort, and resources put into them, it’s nearly impossible to track their performance – to attribute success to the TV ad or the billboard and gain insights into which parts of the campaign worked and how.

The Era of Digital Attribution

The Core of Attribution - Tracking Your Campaigns

Since digital marketing came into our world, there have been many attempts to track attribution before the industry settled on today’s common practices. For example, there were attempts to connect physical actions with digital tracking, as in the case of QR codes, where consumers had to scan a code to get discounts, but these attempts never achieved widespread adoption by consumers.

Today, attribution is accessible and available through mobile devices, and it’s compatible with how users behave (i.e. users spend hours on their phones reading, playing, buying, downloading, etc.), so budgets are being overwhelmingly shifted to mobile.

Through mobile phones, without any physical involvement, it’s significantly easier to track an ad campaign’s performance down to extremely granular levels – there’s a distinction between ad impressions and ad clicks; for a video ad, there’s an element of duration (whether a user watched the full video or chose to skip it); and the tracking continues down the funnel (whether there was a download, a purchase, or high user retention – depending on the goal of the campaign).

The ability to track in such detail, at a relatively low cost, enabled more products to enter the market, run ad campaigns, and track their performance – and rapidly made mobile the largest marketing channel.

The Problem of Fraud

Mobile Fraud and Its Impact on Ad Campaigns

Having the ability to track all digital campaigns should lead to budgets being spent more effectively – advertisers can see which campaigns work and expand their spend on them, and which don’t and stop them, all in a significantly shorter time (compared to non-digital attribution) while still running campaigns at scale.

Unfortunately, like any transaction that happens online, ad campaigns attract the attention of fraudsters, who, without tracking and enforcement, can wreak havoc and completely distort the data gathered – and thus the insights drawn from it – impacting the decision-making process.

Running campaigns without fraud prevention also means risking spending (at least partially) on fraudulent activity instead of actual users, hindering campaign performance and rendering the outcomes non-representative.
“To know your Enemy, you must become your Enemy” – Sun Tzu

Knowing Your Enemy

The Importance of Active Familiarity with Fraud Methods

In order to successfully run mobile ad campaigns and avoid wasting time and money on fraud, attribution platforms evolved and started incorporating fraud detection tools. You can learn more about the tools provided by different platforms in our list of vendors. With the available fraud detection tools, and all the knowledge gathered, fraud is gradually being reduced in the mobile marketing industry, but since we’re not yet at the point of eliminating it, advertisers still need to be cautious about whom they partner with to run their ads.

Everyone involved, including stakeholders, should familiarize themselves with the possibility of fraud and the breadth of the phenomenon.

In order to make an educated choice prior to partnering with a marketing company, UA managers should check the company’s existing partners (and their feedback), its experience, its platform, and its service.

When running ad campaigns, UA managers (or anyone else managing the campaigns) should familiarize themselves with how to detect fraudulent behavior.

Stand Out From the Crowd

How to Run Successful Mobile Ad Campaigns in an Overflowing App Market

In the digital realm, it’s easier to create and market a product (think of the possible reach of a digital campaign vs. running the same campaign using billboards). Since the barrier to entry is much lower, there’s an overflow of products and competition is fiercer than ever.

Advertisers need to emphasize what’s unique about their product and capitalize on it. Performance data gathered by an attribution platform can help market the product’s strengths and reach users by highlighting its most-liked features.

In conclusion, attribution is key to making informed decisions in regards to ad spend and budget allocation. The availability of attribution tracking in the mobile industry and the scale it provides allows for more products to enter the market and for smarter, more precise ad campaigns.

Marketers can also use the attribution platform’s event data to see around which events (actions or milestones users take inside the app) users are most engaged, and develop creatives around the corresponding feature, as it seems to resonate with the app’s user base (for an e-commerce app, the best-selling item; for a game, a popular stage; and so on).

For example, marketers can test the effect of different creatives on the users they acquire. If they ran campaigns with several different creatives (as one should), all of which brought in a significant number of users, but the users exposed to one variation performed better (i.e. had higher retention rates or converted better down the funnel), the marketers managing the campaign can spot that and use the best-performing creative in other campaigns.

The issue of view-through attribution (VTA) is a controversial one amongst advertisers. The debate is about the importance and perceived contribution of a view (i.e. an impression) in an otherwise seemingly organic conversion.

Some advertisers will claim that attributing a conversion that happened hours later to an ad view – even a small banner ad with no user engagement or definitive proof it was noticed – is in fact crediting an organic conversion to the banner ad’s buyer.

Since this is indeed problematic, we see it fit to break the issue down and show that there are ways to measure the influence of an ad view on a conversion and limit VTA in a way that minimizes the advertiser’s risk while still rewarding buyers whose impressions had a substantial impact.

How to measure users’ intent to convert

The customary VTA window is 24 hours for all ad types (be it a video ad, a banner, a native ad, or a playable ad), even though different ad formats have proven to influence users differently. One way to validate users’ intent to convert is to set different VTA windows for different types of ads (the more engaging the ad, the longer the window), making sure the attribution reflects the effect the impression might actually have had on the user.

For example, playable ads and video ads, which are considered more immersive, can get a 24-hour window, while banner ads, considered significantly less immersive, can get a 1-hour window.
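
To make the idea concrete, here’s a minimal sketch (in Swift) of how per-format VTA windows might be encoded. The formats and the native-ad window value are illustrative assumptions, not an industry standard:

```swift
import Foundation

// Hypothetical ad formats and per-format view-through windows.
// The 24h and 1h values mirror the examples above; the native-ad
// window is an assumed middle ground, not a standard.
enum AdFormat {
    case playable, video, native, banner
}

func vtaWindow(for format: AdFormat) -> TimeInterval {
    switch format {
    case .playable, .video: return 24 * 60 * 60 // immersive: 24 hours
    case .native:           return 6 * 60 * 60  // assumed: 6 hours
    case .banner:           return 1 * 60 * 60  // low engagement: 1 hour
    }
}

// An impression is eligible for view-through attribution only if the
// conversion happened inside the window for that ad's format.
func isEligibleForVTA(format: AdFormat,
                      impressionTime: Date,
                      conversionTime: Date) -> Bool {
    let gap = conversionTime.timeIntervalSince(impressionTime)
    return gap >= 0 && gap <= vtaWindow(for: format)
}
```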

In addition, advertisers can take another precaution and validate users’ intent by checking, post-conversion, whether the converted users had previous (expired) attribution windows. If they had previous views, clicks, or other ad engagements, then potential intent was rightfully targeted and the ad view can be attributed to the media buyer.

The Default Lookback Windows

  • If you want to learn more about the 24 hours view-through window and the 7 days lookback window you can read about it in our Types of Attribution entry.

SRNs vs. DSPs

In the current marketing landscape, there’s a clear distinction between SRNs (self-reporting networks) and DSPs (which report through an attribution platform). SRNs have a set, non-negotiable default view-through window that all advertisers must accept as is, whereas DSPs depend on the advertiser’s choice to agree to a given VTA window.

In some cases, advertisers will exercise that liberty and refuse to set a VTA window with a DSP they’re working with. This results in an uneven playing field where DSPs and SRNs show the same ads in the same placements, but the SRN ad has a VTA window and the DSP ad doesn’t, preserving the sometimes false image of better SRN performance (i.e. using a 24-hour window for all types of ads might actually mean more organic conversions attributed to the SRN).

A rewarded video impression that is shown via Facebook’s SDK in their Audience Network is no different from a rewarded video impression shown on a different SDK via OpenRTB through a DSP, thus their VTA windows should be equal. Since advertisers might still feel reluctant to give DSPs the same VTA windows, we suggest the middle ground mentioned above, where the type of ad influences the length of the window.

The importance of customized VTA windows

Differentiating the length of VTA windows helps preserve the quality of the view, instead of letting buyers use the window to their advantage. In a situation where all view-through windows are equal (no matter the ad type), media buyers can strategically buy banner and native ads, which are much cheaper, and use them to “tag” far more users than they would by buying interstitial ads.

These users have now had an impression of the ad – a cheap impression on the buyer’s side – and are covered by a view-through window for the next 24 hours. This mass-tagging technique increases the buyer’s chances of being attributed a conversion by exploiting the window.

Tagging more users means a higher chance of catching users who would’ve otherwise converted organically. It’s easy to argue that an impression from a 320×50 banner ad served 14 hours prior to a conversion contributed little to it, if anything at all.

Verifying impressions in video ads

As an additional means of verification for video ads, advertisers can agree with their DSP on a point in the video at which the impression is triggered – e.g. the midpoint, the end, or when the end card is rendered when using the VAST protocol. For a playable ad, it can be a specific action that triggers the impression, ensuring the user was exposed to most of the content or engaged with the ad, and therefore that it had a significant impact on the user.
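
As a sketch of what such an agreement might look like in code, here are the standard VAST linear tracking events; the choice of which event fires the billable impression is the hypothetical part, agreed between advertiser and DSP:

```swift
// Standard VAST linear tracking events (a subset).
enum VASTEvent: String {
    case start, firstQuartile, midpoint, thirdQuartile, complete
}

// Hypothetical agreement between advertiser and DSP: the billable
// impression only fires once the user reaches the video's midpoint.
let impressionTrigger: VASTEvent = .midpoint

func handle(_ event: VASTEvent, fireImpressionPixel: () -> Void) {
    if event == impressionTrigger {
        fireImpressionPixel() // recorded only at the agreed point
    }
}
```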

When marketers have a better understanding of the metrics involved and employ these tools to verify how the DSP they’re partnering with uses the window, there’s no real reason not to provide it with a VTA window.

In order to discuss the different types of attribution in-depth, we need to first address the importance of attribution modeling.

Attribution Modeling

Advertisers have different goals for their users – be it installs, retention, or deposits – and they use different marketing channels to reach these KPIs. In order to track each marketing channel’s performance against those KPIs, they use an attribution platform.

Attribution modeling determines how an advertiser attributes an event to a marketing channel. For example, three different marketing channels present User A with the same ad, one after the other, and then the user installs the app – should all three channels be credited equally? Should only the last marketing channel be credited? Should the ad with the biggest impact be credited? It depends on the attribution model the advertiser chose.

Currently, the most commonly used attribution model is Last Touch Attribution, but it’s important to understand the alternative models. Though they may be perceived as more complex, you might find that they offer a more sensible solution.

Last Touch Attribution

The Last Touch Attribution model is the most common one, likely because of its simplicity and the basic logic behind it. As the name implies, in a Last Touch Attribution model, the marketing channel responsible for the last touch (i.e. the click that resulted in an install) is the one credited with the conversion.

It’s simple – a user clicked the ad and then installed the app, so this was the ad that led to the conversion. It’s easy to go with this model because it’s so straightforward.
In the example above, all three networks (A, B, and C) showed the user an ad that resulted in a click, but since the install happened after the click on Network C’s ad, the install was attributed to Network C.
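
A minimal sketch of that logic, with hypothetical types (a real platform would also handle lookback windows, fraud filtering, and tie-breaking):

```swift
import Foundation

struct Touchpoint {
    let network: String
    let time: Date
}

// Last Touch Attribution: credit the most recent touchpoint that
// occurred before the install.
func lastTouchWinner(_ touchpoints: [Touchpoint], installTime: Date) -> String? {
    touchpoints
        .filter { $0.time <= installTime }
        .max { $0.time < $1.time }?
        .network
}

// In the example above, Networks A, B, and C all have clicks, but
// Network C's click is the latest, so it wins the install.
```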

Other attribution models

Alternative attribution models include First Click Attribution and Multi-touch Attribution (which comes in variants such as U-shaped, or position-based, attribution and W-shaped attribution). The concept of Multi-touch Attribution is to credit an install to all the marketing channels that had an impact on the user on the way to the conversion, weighted for various reasons relating to the ad formats used and the way the marketer perceives each channel.
As the example above shows, there were two touchpoints in this instance. The first was a video ad (the user watched the full video and then clicked the ad); the second was a small banner ad, after which the user installed the app. Under last-touch attribution, Network B would’ve been credited with the install, though it may be argued that the video ad had an equal or greater impact on the user, since it offers a more engaging experience.

In multi-touch, on the other hand, all touchpoints (or some, according to the exact definition the advertiser set up) are credited for the install, as they all contributed to it. How the credit is split depends on the chosen model: it could be divided evenly between all touchpoints, or by the type of ad – based on the understanding that a video ad offers a more immersive experience than a banner ad – or the last touch could get the highest percentage, and so on.
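
Here’s a small sketch of one possible weighting scheme; the weights themselves are illustrative assumptions, as each advertiser defines its own split:

```swift
// Hypothetical multi-touch weighting: credit is split by ad format,
// with the more immersive format receiving a larger share.
struct WeightedTouchpoint {
    let network: String
    let weight: Double // e.g. 0.7 for a completed video, 0.3 for a banner
}

func creditSplit(_ touchpoints: [WeightedTouchpoint]) -> [String: Double] {
    let total = touchpoints.reduce(0) { $0 + $1.weight }
    guard total > 0 else { return [:] }
    var credit: [String: Double] = [:]
    for tp in touchpoints {
        credit[tp.network, default: 0] += tp.weight / total
    }
    return credit
}

// For the video-then-banner example above:
// creditSplit([WeightedTouchpoint(network: "A", weight: 0.7),
//              WeightedTouchpoint(network: "B", weight: 0.3)])
// -> ["A": 0.7, "B": 0.3]
```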

Lookback Window

All attribution models are valid within a set time frame – that is the Lookback Window.

A lookback window is the period of time during which a conversion may be attributed to a certain marketing channel (or multiple channels, in the case of multi-touch attribution). Meaning, if a user clicked an ad from Network A and a couple of days later installed the app, the install will be attributed to Network A only if it happened within the set lookback window.

There are a couple of industry standards and, of course, a couple of exceptions, when it comes to lookback windows.

7 Days Standard For Clicks

Most networks and UA partners work within the standard 7-day window for clicks: if the gap between a click and an install is smaller than 7 days, the network is attributed the install; if the gap is bigger than 7 days, it isn’t.

7 days lookback window

In the example above, three networks are involved in the user’s funnel, all of them set with an equal 7-day lookback window. Network B “won” the install since Network A’s click expired (more than 7 days passed between the click and the install) and Network C only had an impression (the user viewed the ad). Network B was still within the lookback window and held the last click, so it was attributed the install.

This window can be changed by advertisers to better fit their needs.
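
A minimal sketch of the eligibility check, assuming the 7-day default described above:

```swift
import Foundation

// A click can win an install only while its lookback window is open
// (7 days is the common default for clicks, as described above).
func clickIsWithinLookback(clickTime: Date,
                           installTime: Date,
                           windowDays: Double = 7) -> Bool {
    let gap = installTime.timeIntervalSince(clickTime)
    return gap >= 0 && gap <= windowDays * 24 * 60 * 60
}
```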

24 Hours for View-through Attribution

The standard lookback window for view-through attribution is 24 hours. A view-through conversion occurs when a user sees an ad (i.e. there’s an impression) and converts within 24 hours of viewing it. The install will be attributed to the network that showed the ad, unless there’s also an active click lookback window, in which case the click wins – a click is always considered more impactful than a view-through.

24 Hours Fingerprinting Attribution

In most cases, in order to attribute a conversion, the attribution platform needs a device ID or some other unique identifier (such as Apple’s IDFA or Google’s Advertising ID). In cases where a device ID isn’t available (e.g. when an Apple user enables Limited Ad Tracking), a conversion will be attributed based on fingerprinting.

Fingerprinting uses parameters such as the device name, type, OS version, IP address, and more in order to identify a device and match its touchpoint to a conversion. Since all of these parameters are subject to change (unlike a device ID), a fingerprinting attribution window stays open for only 24 hours, to reduce the margin of error.

Fingerprinting is based on statistical probability and is by no means bulletproof, but within a 24-hour window, taking into account all of the publicly available variables, it’s highly accurate.
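
A heavily simplified sketch of the idea; real implementations weigh many more signals and compute a match probability rather than requiring exact equality:

```swift
import Foundation

// Toy device fingerprint built from mutable, publicly visible
// parameters. Because any of these can change, a match is only
// trusted inside a short window.
struct Fingerprint: Hashable {
    let deviceModel: String
    let osVersion: String
    let ipAddress: String
}

func probableMatch(clickPrint: Fingerprint, clickTime: Date,
                   installPrint: Fingerprint, installTime: Date) -> Bool {
    let window: TimeInterval = 24 * 60 * 60 // 24h cap to limit drift
    let gap = installTime.timeIntervalSince(clickTime)
    return clickPrint == installPrint && gap >= 0 && gap <= window
}
```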

The Exceptions: Facebook, Google, and Twitter

Facebook, Google, and Twitter are three major players in the digital marketing industry, and as such, they have the liberty to set their own terms when it comes to attribution.

There are two key points in which they act differently than the rest:
  • They are all self-reporting networks. Instead of using an attribution platform to track their performance, they report it to the attribution platform (so advertisers can still access all of their performance data in one platform).
  • They enforce their own attribution windows – Facebook with 28 days, Google with 30, and Twitter is more flexible, offering pre-set windows of 1, 7, 14, 30, 60, or 90 days. Advertisers using these channels accept these windows, and the self-reported data, as is.
These definitions allow the self-reporting networks to attribute conversions to themselves even if they don’t follow the attribution model defined for other channels (like last-click).

What are MMPs?

An MMP (mobile measurement platform) is a platform on which advertisers can track the performance of their different marketing channels. The platform provides data and insights that help advertisers better understand where they should spend their marketing budgets.

At their most basic, MMPs track impressions, clicks, installs, and events, though today some of them offer much more.

Why do I need an MMP?

While advertisers can track the source of each download in Apple’s App Analytics and Google’s Play Console, attribution platforms provide additional detail, such as the medium of each individual source (i.e. paid, organic, other) and further breakdowns within each source – campaign name, ad group, ad name, keyword, and so on – which makes it easier to get cohorted reporting.

Beyond granular reporting, these platforms offer complementary tools. These tools evolved from advertisers’ needs and from the mobile advertising industry’s shift towards a transparent, data-based, fraud-free marketing funnel.

Anti-fraud Tools

Mobile ad fraud has been a central issue in the mobile marketing industry from its beginning. As the industry grew and changed, the war against fraud became a pivotal part of it. Mobile measurement platforms started emphasizing the issue of fraud, offering tools and preventive measures against it, with the promise that advertisers’ budgets would be spent on genuine actions taken by actual users and that paid installs would be attributed to the proper marketing channels.

Audience Segmentation Tools

Some MMPs offer marketers the option to segment their audiences (similar to the segmentation Google and Facebook’s Pixel offer) and draw actionable insights from the data.

Cost Tracking

MMPs provide a single platform to track every marketing channel’s spend and revenue, allowing marketers to easily estimate channel performance in terms of ROI/ROAS.

Who?

Choosing a Mobile Measurement Platform

There are plenty of MMPs for an advertiser to choose from. Some provide their services for free, such as Kochava, Tenjin, and Firebase; some provide paid services, such as Tune, AppMetrica, Adbrix, Wisetracker, Branch, Party Track, and MyTracker; and the two standouts, AppsFlyer and Adjust, offer extended services and serve some of the biggest advertisers in the mobile apps industry.

Adjust

Founded in 2012
# of employees: 200+

Services: Measurement, Fraud Prevention Suite, Unbotify

Adjust is a mobile measurement company that helps app marketers track all of their marketing channels and aggregate conversion data for in-depth analysis. It provides metrics data on an hourly basis, revealing the most granular trends in user downloads.

Pricing: Adjust offers three packages (with different features available): Basic, Business, and Custom, charging per non-organic install.

Unique Benefits

Advanced Fraud Prevention: The “Fraud Prevention Suite” stops fraudulent installs in real-time. Adjust also offers click validations and takes additional preventive measures to decrease fraudulent activity.

CAAF: The “Coalition Against Ad Fraud” is a body, founded by Adjust, dedicated to leading an anti-fraud movement. CAAF members pledge to tackle ad fraud and collaborate to develop technological solutions aiming to eliminate mobile ad fraud by creating new industry standards and tools to fight it, and educating the industry as a whole.

No Data Limits: Adjust offers marketers the ability to look back on their data from day 1 of using the platform.

Custom Attribution: Marketers can set customized attribution windows, suited for each campaign’s specifications.

Automation: The “Control Center” tool enables advertisers to view all of the campaign data, cross-network, in a single view, and easily draw actionable insights.

AppsFlyer

Founded in 2011
# of employees: 400+

Services: People-based Attribution, Marketing Analytics Data, OneLink’s Deep linking capabilities, Audiences, and Protect360.

AppsFlyer is a mobile measurement platform that prides itself on having a customer-centric approach, with an emphasis on accurate data and maintaining its partners’ privacy.

Pricing: AppsFlyer offers flexible packages, based on the number of conversions and additional features.

Unique Benefits

Holistic Attribution: Marketers get granular reporting on every app install (the growth channel, media source, creative, or campaign that drove it), rich in-app events, cost data, and ad revenue analytics.

Comprehensive Fraud Protection: Incorporating post-attribution fraud reporting, together with active prevention at every level of the conversion flow, helps uncover the least detectable fraudulent activity.

Audiences for Engagement and Remarketing: Marketers can segment, manage, and amplify their audiences from one centralized dashboard.

Branch

Founded in 2014
# of employees: 200+

Services: Journeys, Universal Ads, Cross-Platform Personas, Deepviews

Branch provides the leading mobile linking platform, with solutions that unify user experience and measurement across different devices, platforms, and channels.

Pricing: Branch’s pricing is based on monthly active users. They also offer fixed pricing for startups and flexible pricing for enterprises, including a scale-as-you-grow option.

Unique Benefits

People-Based Attribution: Branch connects touchpoints from different platforms with the converting platform, creating a full customer journey.

Comprehensive Support: The platform includes support for all digital marketing channels including paid ads, email, web, social, referrals, and more.

Deep Linked User Experiences: Branch’s deep-linking solutions provide marketers with flexible deep links that allow for bespoke onboarding experiences across different devices, rather than just conversion rate tracking.

Kochava

Founded in 2011
# of employees: 100+

Services: SmartLink™, Deep Linking, People-Based Attribution, and IdentityLink™

Kochava enables people-based marketers to establish and enrich user identities, segment and activate audiences, and measure and optimize their campaigns across all connected devices.

Pricing: Kochava offers a free version (the Free App Analytics), or paid options such as Standard and Enterprise, with flexible rates depending on monthly active users or conversions.

Unique Benefits

Unified Audience Platform: Kochava provides marketing attribution and analytics tools to research, create, measure, and optimize ad campaigns from start to finish in a single dashboard.

The Kochava Collective: A data marketplace, containing 1st party data (and access to 3rd party data) from all of Kochava’s marketers, that enables marketers to build new audiences and enrich existing audiences.

IdentityLink™: Kochava’s cross-device attribution allows marketers to follow a user’s journey across platforms, from PC to mobile to gaming consoles and other connected devices.

Singular

Founded in 2014
# of employees: 100+

Services: Marketing ETL, Performance Analytics, Unified Marketing Data, and Fraud Prevention

Singular offers comprehensive solutions for mobile attribution, marketing analytics, cost aggregation, fraud prevention, ad monetization, and a marketing ETL to push data directly into your internal database.

Pricing: Singular offers pricing packages based on ad spend (unlike other MMPs’ non-organic-install model) and includes fixed pricing for features.

Unique Benefits

Smart Data Governance: The platform automatically unifies disparate data sets, enabling mobile marketers to get an in-depth view of true ROI. Cost aggregation helps view the entire ad spend in one place.

Custom Fraud Prevention Solution: Singular has a customizable fraud prevention solution. Marketers can personalize their fraud prevention strategy to fit their business needs.

Cross-Channel Creative Reporting: Marketers can group and evaluate creative sets (based on built-in image recognition capabilities) and identify the creative themes that resonate with their customers.

Tenjin

Founded in 2014
# of employees: 20+

Services: Free Attribution, Ad Revenue Tracking, and Cost Aggregation

Tenjin is a free mobile measurement platform. Tenjin’s platform consolidates user-level marketing data from more than 300 industry-leading ad networks and acquisition sources. Tenjin knows the mobile gaming space, especially when it comes to ad-supported gaming.

Pricing: Tenjin’s “pay-as-you-grow” model begins with a Starter tier (fewer than 2,000 paid conversions per month) that includes free dashboard access; there’s also an all-inclusive plan.

Unique Benefits

Hands-On Growth Training: Tenjin offers in-person training resources and walks its clients through real-life use cases for app growth, ensuring clients fully understand how to use the tools at their disposal.

Data Warehousing and Automation Tools: Tenjin maintains a data warehousing platform – called DataVault – that stores raw data for custom analysis. DataVault can be used to analyze metrics like lifetime value and to power workflow automation, bidding, and more.

Free Pricepoint: Tenjin is one of the only MMPs to offer an intuitive, free ROI dashboard that gathers cost data, attribution metrics, and ad revenue data all in one place, allowing users to easily make data-driven decisions and save time when assessing user acquisition. It also makes for a much easier and lower-cost evaluation process.

What is user acquisition?

User acquisition, or UA for short, is the process by which mobile apps (or other products) obtain new users through marketing. In the context of mobile apps, user acquisition usually refers to a strategy built to achieve a significant volume of app installs to increase your user base, or to target high-value potential users to increase your revenue.

User acquisition is at the core of every app (and product) since, at its most basic, it signifies growth. Simply put, the more users the app has, the bigger the potential revenue and success.

Why is User Acquisition Important?

Any product, service, store, or brand, needs user acquisition. User acquisition is a way to guarantee the product maintains an active user base. The easiest way to explain it is by example. It’s not a perfect analogy, but let’s compare a new app and a new physical store.

When a new physical store opens, consumers who walk down the street and see it walk in. Some of them make purchases; most of them won’t. Some will come back and buy again; most, unfortunately, just won’t. So far, so good – more or less – as the store got some initial organic traction.

Now, after its opening, if the store doesn’t invest any effort in advertising itself, it’ll be hard to bring in new customers. Over time, the initial boost of customers will die down, and without actively bringing in new customers, it’ll be hard to keep the store profitable relying solely on random passersby (or the few returning customers).

In reality, some stores are able to maintain a revenue stream based on their premium and central location, and the fact that there is a lot of ‘organic traffic’ around, which is exactly why such locations are extremely expensive and rare.

In order to not only stay profitable but actually grow, this new store will have to invest in advertising and reach new customers. The more customers, the greater the revenue potential (i.e. the more people who enter the store, the better the chances someone will buy something – and of course, the more relevant your new customers are, the better the odds).

When it comes to mobile apps, the challenge is even bigger. Though users can enter an app store, search, and install, this rarely happens, and no developer can or should count on organic installs (there are no actual “passersby on the street”).

In combination with the fact that most apps are free to download and use, user acquisition plays a much bigger role. Most app developers simply cannot rely on organic traffic and this free-to-use model requires them to scale their apps quickly, which necessitates user acquisition.

Another problem, which makes constant scaling necessary, is apps’ high churn rate. While retargeting and remarketing campaigns are also important, having a strong UA strategy, aimed at targeting high-quality users in the first place, is essential.

Though organic installs are negligible, store ranking is still important. Having a high number of users who downloaded and installed the app helps the app’s overall ranking in the app store. Because install volume was used in the past to manipulate store ranking, its influence on the ranking has been reduced, but it still plays a role and carries some significance.

Considering all of the above, without a clear and concise UA strategy, it’d be hard to find and convert new users. Luckily, today, most UA mobile campaigns are data-driven and are focused not only on scale but also on reaching engaged and active users.

The Different Types of User Acquisition

There are three central types of UA: Paid, Owned, and Organic.

Paid Media Marketing

Paid UA is one of the most commonly known UA methods, and probably the first one to come to mind when thinking of UA. Paid marketing refers to all media channels through which apps can run ads.

Media channels can be social networks such as Facebook, Twitter, and LinkedIn, or alternative channels such as ad networks and DSPs that run in-app ads and enable marketers to diversify outside of social networks. In these channels (both social and the alternatives), a marketer can use a wide variety of ad formats – native, banner, and interstitial ads, which can be dynamic, playable, video, and more – depending on the product, branding, and available budget.

Another form of paid media marketing is influencer marketing – a collaboration formed between a brand and an online personality, in the belief (usually backed by data) that this personality has an audience relevant to the brand and can promote the app and introduce it to new users.

With the rise of social networks and the success of individuals in these platforms, influencer marketing has immensely increased in popularity in the last couple of years.

All of these marketing methods require an investment of marketing dollars in other media channels in order to promote the app.

Owned Media Marketing

Owned media marketing, on the other hand, means using existing marketing assets to obtain users. Owned media can be anything from a mailing list to SMS, QR codes, and more.

UA campaigns that utilize owned media can be a newsletter campaign inviting web users to migrate to the app, or an incentivized invite code encouraging users to invite their friends in exchange for in-app currency.

Owned media marketing enables marketers to scale their app using their self-owned assets and users. It usually still requires a budget, though typically a much lower one, and it allows only a limited amount of growth (it’s not as scalable as a paid media marketing campaign).

Organic Media Marketing

When it comes to mobile apps, organic marketing means app store optimization (ASO). Similar to SEO (search engine optimization) on the web, ASO serves to organically promote the app in the app store.

This means adjusting and optimizing everything from the title to the description, the keywords, and the category. As mentioned above, since the competition is fierce, scaling from organic installs is not really feasible, so organic media marketing can’t serve as a standalone UA strategy.

With that being said, ASO is still essential since this optimization is taken into account by the algorithm when ranking apps in the app store. Having an optimized page in the app store is crucial, as it still serves as a gateway to conversions. An optimized page shows credibility and helps gain new users’ trust in their final stop before an install.

Making the Most Out of User Acquisition

When planning a UA strategy, marketers and app developers need to consider the basics. There are some crucial decisions to make, to help build the best strategy for your app.

Choosing an MMP

Choosing a mobile measurement platform will help you track the performance of your campaigns and the quality of the users your app is acquiring. The different MMPs provide additional services and serve apps of different sizes.

Choosing a DSP

Choosing a demand-side platform that fits your app’s needs will help you set up the campaigns you want and help in maximizing your app’s growth potential.

If you’re unsure as to why you need an MMP, you can read our entry about why attribution matters.

Programmatic Campaigns

In this day and age, UA campaigns are shifting towards programmatic, ML-based campaigns, while traditional ad networks are becoming obsolete. To get a better understanding of the programmatic UA world, we recommend reading our intro to programmatic UA and the basics of real-time bidding.

After gaining a better understanding of the two, you can dive deeper into the data with our data activation entry and key KPIs, to make sure you’ve set the right KPIs for your campaigns.

And lastly, once your user acquisition campaigns are up and running, you’ll want to set up a retargeting campaign to regain users who have turned inactive.

Retargeting in Mobile Marketing

Retargeting, also known as remarketing or re-engagement, refers to a marketing strategy in which advertising efforts are aimed at existing inactive users – users who installed and opened the app but have since gone inactive.

There’s a tendency to differentiate between retargeting audiences and the terms used to describe these campaigns. Some use re-engagement to mean targeting users who have never been significantly active in the app but installed it and still have it on their device. A re-engagement campaign focuses on introducing the app, detailing its advantages, and promoting its outstanding features.

Retargeting, on the other hand, refers to campaigns targeting the most lucrative users and encouraging them to spend more. In e-commerce apps, that means offering them coupons and presenting items they’ve shown interest in (back in stock, on discount, or still available); in gaming, it means offering in-game currency, experience points, and other incentives.

The Advantages of Retargeting

The Harsh Reality of Retention Rates

It can be argued that growth (i.e. user acquisition) is more important than retargeting, and marketers may wonder why they should spend any more of their marketing budget on already existing users. The biggest argument for retargeting lies in the following statistic:

“(Retention) rates remain low, reaching only 5.5% and 6.8% on day 30, for non-organic and organic respectively” – Jillian Gogel, AppsFlyer

This means that most mobile apps lose over 90% of their users within 30 days of the install. Hence, the value of retargeting lies in bringing back existing users rather than spending money on bringing in new ones and losing 90% of them within 30 days.

Why Run a Retargeting Campaign

1. User acquisition costs are rising – as competition grows and targeting capabilities improve, CPI and CPA costs increase. Re-engaging users who have already shown intent and interest, on the other hand, is simpler and more cost-effective.

2. Audience segmentation – since you have the ability to create audience segments, you can easily define different audiences and customize campaigns for them. For instance, you can target recently inactive heavy spenders, encouraging them to come back and spend more with ads that specifically highlight new content, or create a campaign targeting users who reached an advanced stage of the payment/IAP funnel but did not complete a purchase, showing them the benefits of making their initial purchase.

3. Performance – according to data gathered by AppsFlyer, apps that ran retargeting campaigns saw a 63% revenue uplift, which solidifies the “money on the table” argument for why you should consider running retargeting campaigns in most cases.

Improve Your Overall Performance

“This is a marathon, not a sprint”

With the immense drop in retention by D30, marketers who want to sustain an active and engaged audience have to consider retargeting. There is a virtually infinite number of users (it truly feels that way in times of growth), so the potential to bring new users into an app is huge, but if your monetization model is based, even partially, on in-app purchases, retargeting recently inactive existing users is a way to improve your overall performance metrics.

Maximizing UA efforts means bringing new users into the app – increasing both ad revenue and IAP revenue – and is a worthwhile effort overall, but combining retargeting with UA campaigns means investing in your app for the long haul. Focusing campaigns on users who are willing to spend money in the app (or who progressed far in the IAP funnel) makes it very likely they’ll do so again, and these users (even if they represent only 5% of your overall users) are vital to the app.

Retargeting for Gaming Apps

Retargeting is known to work best for apps with a transaction base (shopping, food delivery, travel, and subscription-based apps), because the campaign reaches out to active users and encourages them to take a revenue-generating action.

When it comes to gaming apps, many marketers still abstain from running retargeting campaigns since gaming is not, in its most basic definition, transaction-based. Nonetheless, depending on the game and its monetization model, retargeting can still be a useful tool for increasing revenue and improving overall performance.

Games whose monetization model is based on both ad revenue and in-app purchases, or solely on in-app purchases (such as most social casino or hardcore RPG games), should run retargeting campaigns, treating events that lead to a purchase as in-app transactions. This might sound obvious, but in practice many publishers and developers are not engaging in this activity.

Keep in mind that a transaction in a gaming app is different from a purchase in, say, a food delivery or shopping app – the product is in-game – and your segmentation should be adjusted accordingly.

When Should You Start Retargeting?

It used to be common to think that retargeting efforts should begin only once user acquisition efforts have been exhausted, but today more and more marketers understand that the two should be combined and run simultaneously.

When it comes to maximizing marketing efforts, a holistic, end-to-end approach that takes the complete user journey into account has proven the most lucrative in the long run (improving the company’s bottom-line KPIs). To achieve that, one must find the right partner – a DSP that can run UA and retargeting campaigns simultaneously and maximize your mobile marketing capabilities.

This does not mean retargeting users who churned early (you can’t turn users from inactive into purchasers without significant changes in the app – evidently they didn’t like what they saw, at least not enough). Rather, combining aggregated campaign data with overall performance benchmarks for the specific vertical, and customizing an overall acquisition plan, can truly affect the bottom line.

Combining UA and retargeting campaigns, targeting correctly segmented users (as mentioned beforehand – recently inactive, with proven potential to bring in revenue), and setting realistic KPIs can be a game-changer, especially in apps where retargeting was not previously considered.

Incrementality and A/B Testing

Deciding to run a retargeting campaign means deciding to invest money in existing users – whether organic or non-organic, users you already spent resources acquiring in the first place. For that reason, when running retargeting campaigns you have to make sure you’re spending effectively, and not just on retaining users who would have been retained anyway.

“Measuring retargeting is best done by looking at the incremental effect. There are so many things that might affect your users’ experience beside the retargeting ad – where they were acquired (maybe even organic), their stage within the app, their overall app experience – so make sure you isolate the retargeting effect” – Jillian Gogel, AppsFlyer

Incrementality, at its most basic, is the ability to measure whether paid marketing efforts generate any actual change in performance. In order to measure the incremental effect of a retargeting campaign, you must compare it to an identical group (a ‘control group’) toward which you make no paid marketing efforts. This means that, to measure the incremental effect, you first have to create two identical user groups (groups that perform similarly) and define one as the remarketing group and the other as the control group.

Then you run the campaign on the remarketing group and measure both groups’ performance. The difference in performance is the incremental effect created by the retargeting efforts. If the retargeted group yields better results than the control group (after factoring in the cost of the campaign), then the retargeting campaign is working, and continuing it should result in a revenue increase.

On the other hand, if the control group outperforms or is on par with the remarketing group, the campaign isn’t working, and the cause of these results should be tracked down and dealt with. The reasons vary – anything from the creatives used in the campaign, to the incentive offered to users, to the costs of the campaign, or anything in between.
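
A minimal sketch of the lift arithmetic, with illustrative numbers; a real measurement would also test for statistical significance:

```swift
// Incremental lift per user of the retargeted group over the control
// group, net of the campaign's cost. All figures are illustrative.
func incrementalLift(treatmentRevenuePerUser: Double,
                     controlRevenuePerUser: Double,
                     campaignCostPerUser: Double) -> Double {
    (treatmentRevenuePerUser - controlRevenuePerUser) - campaignCostPerUser
}

// Example: treated users generate $1.40, control users $1.00, and the
// campaign costs $0.25 per user -> a net lift of $0.15 per user, so
// continuing the campaign should increase revenue.
let lift = incrementalLift(treatmentRevenuePerUser: 1.40,
                           controlRevenuePerUser: 1.00,
                           campaignCostPerUser: 0.25)
```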

In conclusion, when running retargeting for your app, you should always test the incremental effect. When done properly, a retargeting campaign has a huge potential to increase revenue and improve overall performance.

Changes Following the Removal of the IDFA

Apple’s release of iOS 14, scheduled for sometime this fall (2020), is an update that will significantly change the mobile marketing industry. Aiming to protect the privacy of its users, Apple will now require publishers to show users an opt-in dialog in order to gain access to their IDFA (Apple’s identifier for advertisers), which used to be accessible by default (i.e. all devices had the IDFA exposed unless users actively chose to disable it in their device settings).

This change means that advertisers, DSPs, and attribution platforms now have to adapt to a reality without a user-level identifier, where performance can only be measured at an aggregated level instead of with user-level granular data.

Mobile marketing industry members should approach this change as if it applies to all iOS users and find solutions that work for them. That said, users can still opt in and make their IDFA accessible, which means the old methods still have some merit, depending on the actual percentage of users who choose to consent.

For every new app they install, users will be presented with a screen allowing them to opt in to or out of sharing their IDFA. This means a single device can have different settings for different apps (something that wasn’t possible prior to the iOS 14 update).
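
For publishers, the opt-in prompt itself is a single AppTrackingTransparency call; the sketch below assumes an NSUserTrackingUsageDescription string has been added to the app’s Info.plist:

```swift
import AppTrackingTransparency
import AdSupport

// Shows the iOS 14 tracking prompt. The IDFA is only readable if the
// user taps "Allow"; otherwise it comes back zeroed out.
@available(iOS 14, *)
func requestIDFAConsent() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("IDFA available: \(idfa.uuidString)")
        default:
            print("Opted out: fall back to SKAdNetwork / aggregated data")
        }
    }
}
```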

Internal Data for Publishers

If you currently rely on the IDFA for user segmentation, reports, and other user-level analysis in your internal data warehouse, you should move to the IDFV (identifier for vendors). The IDFV will remain viable for publishers to keep track of in-app activity.

If you make the move to the IDFV, make sure to update your data collection processes to use it as the new primary key for user segmentation in your systems (such as in postback templates, API pulls, etc.).
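
Reading the IDFV is a one-liner on iOS; the caveats in the comments are the part to plan around:

```swift
import UIKit

// The IDFV is shared across apps from the same vendor on a device and
// does not require ATT consent, making it viable as an internal key.
func currentIDFV() -> String? {
    // May be nil right after a device restart (before first unlock),
    // and it resets if all of the vendor's apps are uninstalled, so
    // treat it as a soft identifier rather than a permanent one.
    UIDevice.current.identifierForVendor?.uuidString
}
```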

Attribution in iOS 14

Attribution will change significantly. Attribution platforms, on their end, can use tools such as fingerprinting and probabilistic matching, based on reports from the ad networks’ side (which are in turn based on SKAdNetwork reports).

All attribution reports will be delayed, since Apple won’t send real-time postback data. Reports will be received 24 to 48 hours after the app is opened or a conversion value is reached. Apple’s delayed reporting is another means of preventing attempts to relate these reports to in-app activity and use them to identify users (i.e. another privacy measure).

Fingerprinting and probabilistic matching are not new to attribution platforms; they have been in use to track users who enabled LAT (Limited Ad Tracking). LAT users used to account for a substantial minority of iOS users, and there were still efforts to track their activity (without compromising their identity).

SKAdNetwork

SKAdNetwork is the new attribution methodology on iOS 14 for DSPs, attribution platforms, and advertisers alike. It provides no user-level data (guaranteeing users’ privacy), and its funnel differs from the old one, which included the IDFA and relied mostly on MMP SDKs. Whereas IDFA postbacks were sent through the attribution platform, with SKAdNetwork the information is reported through the device.

SKAdNetwork install attribution will support click-through attribution for in-app mobile ads only. The information provided will include:
  • Publisher ID – allows for transparency as to the source of the click.
  • Campaign ID – provides added values such as the creative used, ad placement, and other possible identifiers, at the buyer’s discretion, and limited to 100 IDs.
  • A first time or a returning user indicator – signifies whether the user just downloaded the app for the first time or re-downloaded the app after uninstalling it in the past.
  • A personalized conversion value – an unsigned 6-bit value whose meaning is determined by the app or the ad network; the default value is 0. It can be used for in-app event tracking – for instance, it can represent a stage the user reached in a game, or the duration of the user’s session (see the sketch after this list).
SKAdNetwork won’t include the following:
  • Real-time data – postbacks will be delayed by 24 to 48 hours.
  • Advanced attribution reports – omitting user-level data means no deep linking, and no LTV or ROI reporting.
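
On the advertised app’s side, SKAdNetwork reporting boils down to two StoreKit calls (as of iOS 14). The mapping of an in-app milestone onto the 6-bit value below is hypothetical, since each app defines its own scheme:

```swift
import StoreKit

// Call once, as early as possible, so installs can be attributed.
@available(iOS 14, *)
func registerForSKAdNetwork() {
    SKAdNetwork.registerAppForAdNetwork()
}

// Hypothetical mapping of an in-app milestone onto the 6-bit (0-63)
// conversion value; every app defines its own scheme.
@available(iOS 14, *)
func reportMilestone(levelReached: Int) {
    let conversionValue = min(max(levelReached, 0), 63)
    SKAdNetwork.updateConversionValue(conversionValue)
}
```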

Retargeting in iOS 14

With the information known to us at this moment, running a mobile retargeting campaign targeting users who opted out of sharing their IDFA will be impossible. Retargeting relies entirely on the ability to identify users’ devices and their previous actions in the app.

With limited event reporting and only campaign-level data, retargeting, deep linking, and tracking user-level performance are, unfortunately, no longer viable options.

The exception to this rule will be publisher-level cross-promotion and retargeting campaigns. A publisher with multiple apps can employ the IDFV as its primary key to track its users, making it possible to target those users inside the publisher’s own apps. In-app user data is still available through the IDFV, so running retargeting campaigns between the same publisher’s apps, to re-engage dormant users who are active in those other apps, is still an option.

It’s a limited option – depending on the number of users, their activity in those other apps, and the scale such a campaign can offer – but it’s worth noting that it will exist.

Fraud in iOS 14

Apple uses cryptographic verification to validate attribution. This verification is claimed to be unforgeable and should enable install verification without compromising user privacy.

Whether or not these claims hold, fraudsters, like life, find a way. Since this is an entirely new form of attribution, with no unique identifiers, the issue of fraud remains open. Without transparency, it seems that attack vectors that had only recently been closed off have reopened, and abusing the data with tools such as click flooding is still viable.

The question of fraud has been raised and discussed in relation to iOS 14, and the only conclusion, for now, is to be cautious and aware of the possibility of it resurfacing, given the persistent nature of fraudsters.

Mobile Marketing and Programmatic Targeting in iOS 14

Contextual Targeting

Contextual targeting was a key targeting factor prior to iOS 14, and with the changes in iOS 14, it’s going to take an even more prominent role in UA campaigns. Contextual targeting can be done in different ways, but the basic idea is to target users based on the contexts they usually engage with. For example, if a user is currently playing a match-3 game, they’d be targeted with a similar game from the puzzle genre.

The context is deduced through different technologies, which affect its accuracy. In our case, we use an ML algorithm called Word2Vec. The idea behind it stems from the understanding that the user’s current location (the app the request comes from) is pretty much the only indication of the user’s interests (i.e. the app the user is currently active in is an app they enjoy using), so it should be dissected and analyzed to its maximal capacity. In essence, W2V compares the App Store description of the app in which the ad would be shown with that of the promoted app, to measure their contextual distance from one another.
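
As a rough sketch of the “contextual distance” step: given two description embeddings (e.g. averaged Word2Vec vectors, assumed here to come from a trained model), the comparison is typically a cosine similarity:

```swift
import Foundation

// Cosine similarity between two embedding vectors, e.g. averaged
// Word2Vec vectors of two apps' store descriptions. The embeddings
// themselves are assumed to come from a trained model.
func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
    precondition(a.count == b.count, "vectors must be the same length")
    let dot = zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
    let normA = sqrt(a.reduce(0) { $0 + $1 * $1 })
    let normB = sqrt(b.reduce(0) { $0 + $1 * $1 })
    guard normA > 0, normB > 0 else { return 0 }
    // Close to 1: the source app and the promoted app are contextually
    // near each other, making the placement a good candidate.
    return dot / (normA * normB)
}
```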

Frequency Capping

One of the most prevalent nuances in advertising is finding the balance between showing an ad enough times to drive users to engagement (in the case of performance) or to be remembered (in the case of branding), but not showing it so much that it becomes a nuisance. In marketing this is referred to as “The Rule of 7,” which, of course, went through an adaptation for mobile advertising and its overwhelming amount of available content.

The basis stays the same – enough, but not too much. Setting a frequency cap (the number of times a user sees an ad within a set time frame) is not an issue when targeting at the user level and relying on a unique identifier. At the aggregated, campaign level, however, it’s harder to set, measure, and enforce.

In an effort to increase conversions without imposing your ad on the same users too frequently, frequency capping is essential, even when it must be done somewhat blindly. Instead of using user-level data, users are grouped under known identifiers such as their location, device model, and usage times. The group is then set with a carefully calculated frequency cap that takes its estimated size into account.
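
A minimal sketch of what a grouped (rather than user-level) frequency cap might look like; the cohort key and the cap are illustrative assumptions:

```swift
import Foundation

// Without a user-level ID, impressions are counted per coarse cohort.
// The cohort key and the cap below are illustrative assumptions.
struct CohortKey: Hashable {
    let country: String
    let deviceModel: String
    let hourBucket: Int // hour of day, 0-23
}

final class GroupFrequencyCap {
    private var counts: [CohortKey: Int] = [:]
    private let maxImpressions: Int

    init(maxImpressions: Int) {
        self.maxImpressions = maxImpressions
    }

    // Returns true (and records the impression) while the cohort is
    // still under its cap; false once the cap is reached.
    func tryServe(to cohort: CohortKey) -> Bool {
        guard counts[cohort, default: 0] < maxImpressions else { return false }
        counts[cohort, default: 0] += 1
        return true
    }
}
```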

Closing Thoughts

The removal of the IDFA from iOS 14 is a big change that’s shaking up the mobile growth ecosystem. Still, the industry, as it usually does, will evolve to adapt to these challenges. Mobile marketers need to seek out trusted and vetted partners who are quick to adapt to changes and work with them through this change.

Preparing for the Release of iOS 14.5

The release of iOS 14.5 is fast approaching. To help advertisers transition successfully, we’ve created this checklist, combining some of the basics along with some less obvious and very noteworthy changes.

We believe the combination of the two is a recipe for success. If you’ve already been through the basics – just skip to the relevant sections.

The Checklist:

  • Update your MMP’s SDK
  • Confirm DSPs’ integration with your MMP
  • Set up a conversion value
  • Understand your MMP’s solution
  • Understand your DSP’s solution
  • The bottom line
In our previous post, User Acquisition and Retargeting in iOS 14, we covered the absolute basics:
  • The expected changes
  • What internal data will be available for publishers
  • Attribution using SKAdnetwork
  • Retargeting post-iOS 14.5
  • Fraud in iOS 14
  • Important steps DSPs can take when running campaigns without IDFA (such as contextual targeting and frequency capping).
Unlike our previous post, which is a great resource for anyone looking for a big-picture overview of the upcoming changes, this post is a deep dive: what changes, what preparations should be made, what to expect in your first SKAN UA campaigns, and more.

Update your MMP’s SDK

In preparation for iOS 14.5, MMPs, along with all other relevant parties, have been working to conform to Apple's new requirements. In order for your MMP to continue to deliver your performance data, publishers must have its latest SDK installed and verify that it's iOS 14.5 compatible. As you're most likely wearing both hats (publisher and advertiser), it's crucial that you update your MMP's SDK.

“Verify All Your Ad Networks and DSPs Have Been Integrated With Your Attribution Platform”

Yes, we are aware of how obvious this sounds, but if we're being honest, you'd be surprised at how many haven't been fully integrated yet. Check in with your MMP to make sure that once iOS 14.5 is out, your campaigns, from all sources, can continue to run.

There’s no announced release date for iOS 14.5. It’s better to come prepared than be caught off guard the day of the release.

Understand the Value of Your New Users

The rise in significance of the conversion value vs. user-level data

The biggest challenge is understanding the value of these new, anonymized users, assuming most will not opt in through ATT to share their IDFA. The key to determining value in the post-iOS 14 era is setting up a conversion value.

Understanding SKAdnetwork (AKA SKAN or SKAD) and How to Set Up a Conversion Value

SKAdNetwork is an Apple-developed, privacy-compliant attribution framework. SKAN helps advertisers measure their app marketing efforts while maintaining user privacy. Since it works without the IDFA or any other advertising ID, it doesn't require users' consent (i.e., users can opt out of ATT and you'll still get postbacks with conversion values for them).

The best way to understand SKAdNetwork postbacks is to compare them to the way postbacks currently work.

Current postbacks (with IDFA):

  • The user opens App A, sees an ad for App B and installs it.
  • The MMP's SDK identifies the new user according to their IDFA.
  • The MMP attributes the install and sends the postback to the relevant DSP that had the last ad engagement with the user (and has an active attribution window).
  • This postback includes the IDFA and additional data about the user.
  • An event postback is sent through the MMP to the DSP in real-time, and in many cases with the inclusion of how much revenue was generated in that single purchase.
Since these postbacks include the IDFA, MMPs are able to attribute installs and events on a 1-to-1 level and DSPs can use this granular data to train ML models, bid on lookalikes, etc.

SKAdnetwork Postbacks (without IDFA):

SKAN postbacks introduce a few changes.
  • Conversion Values: since there are no identifiers, users' activity is measured through a set of numerical values, each representing a key event for the advertised app, with the definitions left to the discretion of the app developer.
    • Conversion Model: the conversion model is the way user activity is encoded into the conversion values. For example, a model can be event-based, it can incorporate a time dimension, it can focus on retention or on purchases, etc.

  • The 24-hour timer: after a user installs the advertised app, a 24-hour ‘event’ timer is started. This timer can reset if another key event occurs (an event set as a numerical value in the conversion value section) within this window, or expire if it doesn’t.

  • The Random Delays: after the 24-hour timer expires, the conversion value is locked in, and a random delay starts before the relevant DSP is sent a postback (the random delay seems designed to dissuade marketers from trying to connect postbacks to impression data and reconstruct some semblance of user-level data). As far as we can tell now, it can be up to 24 hours. After the delay, the postback is sent to the DSP.

  • The Single Anonymized Postback: there's only one postback per app per user (events occurring after the final 24-hour timer expires are not reported through a postback). This postback contains the install and the user's latest conversion value. The sketch below contrasts this payload with today's IDFA postback.
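
To make the contrast concrete, here's a hedged sketch of roughly what each payload carries; the field names are illustrative, not any specific MMP's schema or Apple's exact format.

```python
# A hedged, illustrative contrast of the two postback payloads.
idfa_postback = {
    "idfa": "6D92078A-8246-4BA4-AE5B-76104861E7DC",  # 1-to-1 user identifier
    "event": "purchase",
    "revenue_usd": 9.99,
    "app_id": "app-b",
    "sent": "in real time",
}

skan_postback = {
    "app_id": "app-b",
    "conversion_value": 3,   # 0-63; its meaning is set by the developer
    "sent": "once, after the final timer expires plus a random delay",
    # no user identifier, no event names, no revenue figures
}
```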

Attribution Postbacks Post iOS 14.5

Now, let’s go back to our previous example of how attribution technically works, but post iOS 14.5.
  • App B has set up a conversion model that helps predict users’ performance: 0 = install, 1 = level 3, 2 = level 10, 3 = deposit.
  • The user opens App A, sees an ad for App B and installs it. According to App B’s conversion model, the current conversion value is set to 0 (i.e, the app has been installed) and the 24-hour timer starts.
  • Within 4 hours of the install, the user reaches level 3 (i.e, conversion value 1) which triggers the 24-hour timer reset.
  • 22 hours pass and the user has been making their way up the levels, finally reaching level 10. The conversion value changes to 2, and the 24-hour timer resets again.
  • An hour later, the user makes their first deposit. The conversion value changes to 3 and another 24-hour timer begins.
  • There are no other events happening in these 24 hours and the conversion value gets locked at 3. Now, the random delay begins.
  • At some unknown time (up to 24 hours after the last timer expired), a single anonymized postback carrying the conversion value 3 is sent from the user's iOS device to the DSP.
This conversion model is based on events, and this example is an ideal scenario, in which all of the conversion model's events happen within their allocated time frames.
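
The timer mechanics can be captured in a few lines. The sketch below simulates both the ideal scenario above and the alternate scenario described next, under the example's hypothetical conversion model; it's a simplified model of the behavior, not Apple's implementation.

```python
# A minimal simulation of the timer mechanics described above, under the
# example's hypothetical conversion model (0 = install, 1 = level 3,
# 2 = level 10, 3 = deposit). Times are hours since install.
import random

TIMER_HOURS = 24

def final_postback(events):
    """events: list of (hours_since_install, conversion_value) tuples."""
    value, deadline = 0, TIMER_HOURS          # install starts the first timer
    for at_hour, new_value in sorted(events):
        if at_hour > deadline:
            break                             # timer already expired: locked
        value = max(value, new_value)         # values only ratchet upwards
        deadline = at_hour + TIMER_HOURS      # each key event resets the timer
    send_hour = deadline + random.uniform(0, 24)   # plus the random delay
    return value, round(send_hour, 1)

# Ideal scenario: level 3 at 4h, level 10 at 26h, deposit at 27h -> value 3
print(final_postback([(4, 1), (26, 2), (27, 3)]))
# Alternate scenario: level 10 arrives 2 hours too late -> locked at 1
print(final_postback([(4, 1), (30, 2), (30.5, 3)]))
```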

Attribution Postbacks Post iOS 14.5 - The Alternate Scenario

Getting a postback with the maximum conversion value makes it easier to predict the value of this user. So let’s look at an alternate scenario that started the same, but didn’t go exactly according to plan.
  • This scenario also starts with the user installing the app.
  • Within 4 hours, the user reaches level 3, so the conversion value is set to 1 and the 24-hour timer resets.
  • This time, 24 hours pass but the user doesn't reach level 10. This means the conversion value has been locked in at 1, but the postback hasn't been sent yet (due to the random delay).
  • Another 2 hours pass and the user does reach level 10, but now, unfortunately, it won’t affect the conversion value and it won’t be sent as another postback.
  • The user makes a deposit immediately after reaching level 10. Again, since the conversion had already been locked, this event won’t change the postback value.
  • The single anonymized postback carrying the conversion value 1 will get sent from the user’s iOS device to the DSP.
  • This outcome won't necessarily represent the user's predicted value, and gaining actionable insights from this postback is likely to pose a challenge.
These two possible scenarios exemplify the challenges of setting up a conversion value and how using an events-only conversion model might not be sufficient.

We've heard discussions and have seen attempts to incorporate time dimensions into the conversion model. We generally think this might be a good idea for some apps, depending on the specific time dimension (day of the week may not be indicative for most apps, but other measurable time dimensions, such as retention day, may be insightful). Either way, we encourage advertisers to explore all possible setups before committing to a conversion model.
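
As one hedged illustration of mixing events with a time dimension: the SKAdNetwork conversion value is a 6-bit number (0-63), so it can be split into bit fields. The layout below is an illustrative assumption, not a standard or recommended scheme.

```python
# A hedged sketch of packing both an event level and a time dimension
# into the 6-bit (0-63) conversion value. The 3+3 bit split is an
# illustrative assumption, not a standard scheme.
def encode_conversion_value(highest_event: int, retention_day: int) -> int:
    """highest_event: 0-7 (3 bits); retention_day bucket: 0-7 (3 bits)."""
    assert 0 <= highest_event <= 7 and 0 <= retention_day <= 7
    return (retention_day << 3) | highest_event

def decode_conversion_value(value: int) -> tuple:
    return (value >> 3, value & 0b111)   # (retention_day, highest_event)

cv = encode_conversion_value(highest_event=3, retention_day=2)  # deposit, day 2
print(cv, decode_conversion_value(cv))   # 19 (2, 3)
```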

Understand Your MMP’s Solution and Its Limitations

There's no denying that this change will greatly impact everyone. MMPs, aiming to continue to deliver performance data and help advertisers make data-based decisions, face some pretty big challenges.

Our recommendation is to take the time to understand the upcoming changes and the available solutions, to guarantee an easier transition. Keep in mind that there's no perfect solution that will maintain user-level data. As long as users haven't opted in through ATT, user-level data won't be available, and performance will be evaluated based on the aforementioned conversion value.

AppsFlyer

AppsFlyer chose to focus on four core areas: their SDK, SKAdNetwork, Web to App Campaigns, and AppsFlyer Privacy-Centric Attribution.

Their bottom line, as we understand it, is to provide advertisers with predictive analytics. They aim to use the conversion value in a comparative manner that provides advertisers with long-term predictions.

Adjust

Adjust’s focus is on simplifying the setup of the SKAdNetwork conversion value, along with some additional tools to help developers make sense of performance data post-ATT.

Their initial intent seems to be to provide support for their advertisers as they set their conversion value and follow the data changes.

Tenjin

Tenjin, unlike other MMPs, promotes a change in the current payment models MMPs offer. As a free MMP to begin with, their claim is that MMPs should stop charging for attributed installs, as validating the value of those installs will be harder, if possible at all. This standout stance is worth mentioning.

Like other MMPs, they too offer support, an updated SDK, and help setting up SKAN conversion values.

Singular, Branch, Kochava, and Others

Most MMPs, at this point, offer iOS 14.5 support in the form of a SKAdNetwork-compatible SDK, ATT guidance, and SKAdNetwork conversion value setup. While this may seem like pretty basic support, considering how much is still unknown, it makes sense to first offer these essentials and, after further experience with the new post-iOS 14.5 ecosystem, make the necessary adjustments and develop relevant solutions for advertisers' new needs and requirements.

Understand Your DSP and Your Ad Network Solutions

Another part of the industry that's going to be significantly impacted is ad networks and DSPs. Since they're inherently different, their solutions will differ as well.

Ad networks and DSPs are usually lumped together as the alternative to the duopoly, though they differ significantly from one another. There are different types of DSPs (managed, self-service, or in-house), and they use different technologies (such as programmatic buying or ML-based targeting).

Though they differ, DSPs and ad networks alike will be impacted by the move from IDFA to SKAN. Much like the MMPs, each will offer its own solution, but all of them are dealing with the same bottom line.

The Effect of SKAdNetwork on UA Prices

Their way of dealing with these changes will very much depend on their existing technology. For example, a DSP that utilizes machine-learning models for targeting may still use this technology, along with the data still available in the bid request, to target relevant users and adjust bids according to engagement predictions, but it would lose its ability to recognize more complex and specific patterns.

As DSPs lose their ability to accurately predict user value (and MMPs struggle to measure it), the focus will shift to adjusting campaigns according to the data at hand. This means lowering bids and testing how indicative the conversion value can actually be.

It's safe to assume prices will decrease – the CPMs of the ad inventory bought and, as a result, most likely the average CPI as well. With the loss of predictability, DSPs and other performance-based players will have to lower their bids, which will affect the entire RTB ecosystem. Our estimation is that everyone will try to maintain a significant, representative scale while learning the meaning of the delivered conversion values.

After some data is gathered, advertisers can deduce the relationship between the conversion value and actual user value (how does one translate it into ROAS and LTV effectively?). If it works as an accurate indicator, advertisers can start estimating the value delivered by DSPs accordingly, or consider using different conversion values if the current ones don't seem indicative enough.

Once there’s a better understanding of the change and its effects, scale can be gradually increased.

Tides of Change

The only thing we truly know is that there is still a lot that can only be learned through trial and error. We can't offer a vote of confidence; we can't say there won't be hurdles along the way, mostly because no one truly can.

We do know that advertisers should think long and hard about their conversion model, since it's the key indicator for their bottom line. They should consult with their colleagues about setting their conversion value in a way that makes their acquisition reliably measurable and effective. This setup is not one-size-fits-all, and it will differ between apps, as we've indicated earlier.

The chosen conversion model and its resulting values should be as helpful as possible in drawing revenue-related insights. These results should then be measured against actual revenue and tested to see whether the chosen conversion model manages to indicate and predict performance. If it doesn't, that calls for a change of conversion model.

We don't actually know how many users will decide to opt in through ATT, or what that will mean. Will the decision to opt in indicate anything about the value of those users?

So no IDFA, and yes, there are some problems. It’s not the end of the world, but it’s definitely a challenge and a change. There are a lot of unknowns and there will be a learning curve followed by gradual adjustments to the effects of these changes.

We hope we’ve helped clarify some of the key changes, some things that you should take note of, and all of the preparations you should be making.

What is Programmatic User Acquisition?

Programmatic UA (also referred to as programmatic media buying or programmatic advertising) is a technology-based method used to run user acquisition campaigns. The use of the word ‘programmatic’ refers to the automation of the process, i.e, running campaigns automatically according to a set of predefined rules, or algorithms.

'Programmatic' is a general term describing the automated process, unlike more specific terms such as ML-based UA. We'll expand on the difference between the two as we dive deeper into programmatic UA.

Depending on the type of automation and the scope of the data, advertisers running programmatic UA can define different audience metrics (from basic demographics to user behavior patterns) and use their technological capabilities to reach this audience, improve their UA campaigns, and meet their app growth KPIs.

The automation that programmatic media buying allows enables advertisers to run and manage campaigns on a scale that humans simply cannot, at least not effectively. From the breadth of the audience to user-level targeting to the speed at which impressions are bought, programmatic opened up possibilities that never existed before. At its simplest, "a set of predefined rules" can be pictured as in the sketch below.
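
This toy sketch shows a purely rule-based buyer deciding whether to bid on an incoming request; the rule values and request fields are illustrative, not a real bid-request schema.

```python
# A toy sketch of "predefined rules" programmatic buying: a static rule
# set decides whether, and how much, to bid on an incoming request.
RULES = {
    "countries": {"US", "CA", "GB"},
    "min_os_version": 14,
    "genres": {"puzzle", "casual"},
    "max_bid_cpm": 5.0,
}

def decide_bid(request: dict) -> float:
    """Return a CPM bid, or 0.0 to skip the auction."""
    if request["country"] not in RULES["countries"]:
        return 0.0
    if request["os_version"] < RULES["min_os_version"]:
        return 0.0
    if request["app_genre"] not in RULES["genres"]:
        return 0.0
    return RULES["max_bid_cpm"]

print(decide_bid({"country": "US", "os_version": 15, "app_genre": "puzzle"}))
```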

From Ad Networks to Programmatic DSPs

Ad networks were a central component for advertisers before programmatic advertising became prominent. These companies aggregated ad space supply from publishers, classified it into categories, and resold it to advertisers. This process provided advertisers with somewhat of a targeted audience, and publishers with monetization opportunities.

In order to prevent advertisers and publishers from cutting out the middleman (i.e., the network), the process was anonymized – advertisers didn't know where exactly their ads were shown.

With the growth and advancement of programmatic UA, ad networks have become obsolete and are gradually being replaced by companies with a strong programmatic infrastructure.

Programmatic vs. RTB (Real-Time Bidding)

Programmatic Media-Buying in the RTB Ecosystem

RTB is a media-buying ecosystem that can only be used for programmatic campaigns (i.e, RTB cannot be used in non-programmatic campaigns, but it can be used with all types of programmatic campaigns).

Real-time bidding is an online auction marketplace for buying and selling ad impressions in real time. Each auction takes mere milliseconds and completes before the ad is displayed in the app.

RTB programmatic buying is beneficial for both publishers and advertisers. Marketers achieve major efficiencies by showing their ads to the right audiences and reducing wasted impressions, while publishers enhance the value of their ad space and improve their direct sales strategy and pricing.

RTB accounts for 90% of all programmatic buying (which is why it's so widely covered and frequently discussed). The other 10% consists of alternative methods, the best known of which is the Private Marketplace (PMP).

The RTB Ecosystem:

In the RTB ecosystem, there are a few key players: publishers, advertisers, SSPs, and DSPs.
  • Publishers: Publishers are the ones who are offering ad space in order to monetize their app.
  • SSP (supply-side platform): An SSP is the one providing the platform in which publishers can sell their ad space.
  • DSP (demand-side platform): A DSP is the one managing the side of media-buying, connecting available ad space (on the publisher side) to the campaign (on the advertiser side).
  • Advertiser: Advertisers are the ones looking for available ad space that would serve their campaign.

The process starts with the publishers, who offer up ad placements in their mobile apps. This ad space is sent as a bid request into the RTB ecosystem through the SSP, which facilitates the auction. The different DSPs bid on that placement in the auction process (which we explain in detail in the Basics of RTB), and the winning bidder gets to display its ad (i.e., the advertiser's ad).

Using Machine-Learning (ML) Algorithms for User-Level Targeting

How ML-Based Campaigns Differ From Programmatic Campaigns

Programmatic media buying enables advertisers to use AI (artificial intelligence) technologies, such as machine-learning algorithms, that add the ability not only to predict, but also to adapt to changes in the data (by recognizing trends) and adjust the campaign's settings and targeting accordingly.

While programmatic media buying can be as simple as a set of predefined rules, ML algorithms differ by offering more dynamic and complex targeting capabilities.

User-Level Targeting Using Machine-Learning Algorithms

The algorithm’s ability to adapt to changes in the data is best explained by example. We’ll use two possible campaigns for an RPG mobile game to demonstrate.

The first campaign is a programmatic UA campaign for the RPG game. The campaign is set to target who you know to be your audience: Males, 20+, who have shown interest in other RPG mobile games.

This campaign will probably do well for a while, since it is the target audience, but it will plateau over time, since it only considers the very generic parameters of the bidding decision-making process – placement, device, OS version, ISP, time of day, and a few others.

One recurring scenario is when a dominant, well-performing placement loses its popularity and its user count starts dwindling – the campaign's performance takes a huge dip, as many who run campaigns through self-service platforms have experienced.

The second campaign is an ML-based campaign. Initially, this campaign will be set up to target the same demographic as the programmatic campaign. The campaign will run for an exploration period where the algorithm will learn the data and recognize trends.

After the exploration period, the algorithm will start targeting according to what it has learned. For example, it might target two groups: users who'd spent at least 3 hours in any RPG game, and users who generally play more on weekends. The algorithm will then test the different options (who is more inclined to install – the 3-hour gamers or the weekend gamers?) and change its targeting based on the results. As the campaign continues to run and the data changes, the algorithm will adapt to users' behaviors.
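
As a toy illustration of this test-and-shift behavior, here's an epsilon-greedy sketch. The group names and install rates are invented for the example, and the feedback loop is simulated; in production it would come from attribution data.

```python
# A toy epsilon-greedy sketch: keep exploring occasionally, but shift
# most impressions to whichever audience group performs better.
import random

groups = {"rpg_3h_gamers": 0.030, "weekend_gamers": 0.018}  # true install rates
stats = {g: {"shown": 0, "installs": 0} for g in groups}

def pick_group(epsilon: float = 0.1) -> str:
    if random.random() < epsilon or all(s["shown"] == 0 for s in stats.values()):
        return random.choice(list(groups))          # keep exploring
    return max(stats, key=lambda g: stats[g]["installs"] / max(stats[g]["shown"], 1))

for _ in range(10_000):
    g = pick_group()
    stats[g]["shown"] += 1
    stats[g]["installs"] += random.random() < groups[g]   # simulated install

print(stats)   # most impressions should flow to the better-performing group
```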

The success of the campaign does not rely solely on the algorithms, but also on the features set up for the campaign (someone needs to "tell" the algorithms that looking at RPG session duration, or testing day-of-week usage, is crucial), as we've previously covered in our Data Activation in Mobile UA entry.

While programmatic media-buying, without the use of ML algorithms, can result in successful mobile UA campaigns, a well-set and efficiently managed ML-based campaign is more of a long-term solution. With the ability to adapt to changes in the data, a UA campaign can continue to scale and yield results in the long run.

Choosing a DSP (Demand Side Platform)

The mobile app marketing industry is evolving towards automation, with companies increasing their spending on programmatic media buying and shifting away from establishing and managing direct advertiser/publisher relationships, which are difficult to set up and sustain. While ad networks still exist, there's a shift towards programmatic DSPs and ML-based DSPs.

The shift to programmatic and ML-based media buying led to the development of different solutions and services for marketing managers looking for growth opportunities. The three main options are an In-house DSP, a Self-Service DSP, and a Managed DSP; here we'll elaborate on the differences and the advantages each platform offers.

The In-House DSP

An In-house DSP refers to a self-developed DSP, a platform uniquely built for the company. There are many advantages to an In-house DSP: full control over the campaigns, full cost transparency, no data sharing (all of the aggregated data regarding the users stays within the company), and so on.

An In-house DSP is a highly rare solution, since most companies simply can't afford it. Building a DSP requires investing resources in infrastructure and development, and recruiting teams to develop and maintain it (data scientists, product managers, and an operational team). It's essentially building a whole new product within a company; those who can afford it are mostly corporate-sized companies (and even they would probably not opt to create a whole new product). For the rest, it's simply unrealistic. Most companies are faced with a choice between the two alternative solutions: a Self-Service DSP and a Managed DSP.

"Marketers and agencies want and need the benefits of domain-specific AI capabilities. However, they also recognize that to build the capabilities themselves it would involve a commitment to a multi-year project, dedicated resources and investment on an ongoing basis across multiple platforms with no guarantee of a successful outcome" – Matt Nash, MD @ Scibids

The Self-service DSP

A Self-Service DSP is a DSP service built and managed by another company, providing marketers access to a platform in which they can manage their own campaigns. This allows control over the campaigns, but usually comes with three distinct disadvantages:
  • Limited targeting options – i.e., it's not feasible or possible to target at the user level.
  • Data sharing with a third party – having to share your suppression list with the DSP providing the service.
  • The inability to draw actionable insights – whether a campaign performs well or poorly, the insights are limited to the data provided (location, device, gender, or age), whereas in a managed DSP the insights can be significantly more valuable, as detailed in the managed DSP section.
Under the Self-Service DSP umbrella, there's another type of DSP called Bidder as a Service. Bidder as a Service again enables marketers to manage their own campaigns but, in this case, the service offers some form of customization through the use of machine learning (i.e., not just programmatic): an algorithm that can be configured, using existing features, to target users better and deliver better results.

Using Bidder as a Service to its full theoretical potential requires having a data scientist on your team, working alongside a marketing/campaign manager. Even with a data scientist working on the campaign, using machine learning at its fullest, most granular level (generating custom user attributes for the models) is still limited and incurs significant additional fees. The targeting it enables, though, is closer to actual user-level targeting.

The Managed DSP

A Managed DSP is a programmatic DSP where the company has built its own proprietary bidder and offers marketers a full suite of services – from setting up the campaigns to managing them, optimizing them, and meeting the KPIs. A Managed DSP, in some cases, will also be ML-based.

A managed DSP has a couple of stand-out advantages. The first one is that it requires almost no involvement from the marketing managers – the DSP sets up the campaign and manages it fully, while the marketing manager only has to track and monitor the campaign.

The second advantage is the effectiveness derived from a machine-learning model fine-tuned for your acquisition requirements – a campaign that is legitimately ML-based can target at the user level, detect and draw conclusions from behavioral data, and predict accordingly (depending on the set KPIs, it can predict which users are most likely to install, deposit, or have the highest retention), enabling it to sift through users rather than make decisions at a shallower level, like placements or device models.

Some of the disadvantages are as follows:
  • Lack of control – since someone else is managing the campaign, the marketer has less direct control over it. This concern is often raised by marketers, but it can be easily resolved by setting mutual expectations, providing access to a detailed, real-time tracking platform, and keeping an open communication line.
  • Data-sharing with a third party – Having to share your suppression list with the DSP providing the service.
  • ML-based campaigns require time – to successfully run an ML-based campaign, the machine first has to learn. To reach scale and positive ROI in the long run, marketers will have to muster some patience for the campaign's initial low-scale, higher-cost stages.
The way to build trust on both sides is transparency – the more the DSP shares about how campaigns are managed and what considerations go into bidding, and the more accessible the campaign details are, the easier it is for the advertiser to see that they are in good, capable hands.

Diving Deeper

In order to make an informed decision between the two options (self-service and managed), we're going to get into the details of the differences – from targeting capabilities to payment models and long-term results.

Targeting Capabilities

In a self-service DSP, targeting options are limited. Marketers can target according to real-time data that is accessible at the bid level (such as ad placement, device type, time of day, location, etc.).

In a bidder as a service, on the other hand, marketers can take their targeting a step further, creating their own algorithm that uses existing model features (i.e., attributes the bidder service has defined as important for targeting) and reach user-level targeting (within the limitations of those features). While this improves targeting, it also increases costs, since every added capability in a bidder as a service carries an additional fee. Improving your targeting, then, might not be cost-effective.

A managed DSP can provide the most customized targeting, with models specific to an app and features made specifically to match users’ behavioral data. Managed DSPs also apply data enrichment and use aggregated data to improve their targeting, reaching, and possibly even exceeding, KPIs over time.

The best way to exemplify the advantage of customized features is by sharing the case of a well-known weather app. Our campaign for the app started with an exploratory stage in which we targeted new devices in the US (a targeting feature that isn't available in a self-service DSP). After a short period, we noticed a shift in the results – retention rates rose the longer the campaign ran. We looked into the model's decision making to explain these exceptional results, and saw that the model, which changed based on real-time data, had started targeting states with bad weather (storm alerts, flood warnings, etc.) because those were the states where users used the app most frequently. True to its name, the machine learned who to target, and the model adjusted according to the results, yielding even better ones.

Running the same campaign in a self-service DSP would be much more challenging. Since the new-devices feature doesn't exist, new devices can't be specifically targeted, which means the exploratory stage would have much wider initial targeting. The results would then have to be downloaded and analyzed after the fact, and it would be up to the marketing manager to deduce why some states performed better when they did. Even reaching this improbable conclusion would mean manually customizing bids according to the changing weather – missing out on the benefits of real-time data and automation, and wasting human resources trying to keep up with the constant change.

ROAS

At the end of the day, marketers look at their ROAS. Payment models in the different DSPs may be identical (CPM being the most common) but the risk involved in running these campaigns differs greatly.

A self-service DSP, with its limited targeting options, might give off the greatest feeling of control but, in reality, carries the greatest risk (marketers spend on impressions without probability insights). A bidder as a service might provide better results, but the added costs lower the ROAS. A managed DSP, with the capability to target based on probability and run campaigns based on performance, is the lowest-risk model and should, in theory, show better results from the start.

Creatives

Creatives today can be customized to match users – what they prefer to see and engage with, based on data from past interactions. This technology appears across marketing platforms, social platforms included. The difference lies in the ability to customize the ad at the user level, which is reserved for those able to target at the user level.

Managed DSPs, if they provide such a service, will have the necessary data and the technological capability to generate a user-level customized ad. Self-service DSPs, on the other hand, while still capable of customization, will be limited to data provided through bid requests (device, location, local time, placements) with some possible data enrichment.

Making a Choice

Every app is unique – it has its own KPIs and can afford a different budget. Before committing to a certain platform, do your research and test the waters. If you have the budget, we'd recommend trying out different platforms in parallel, testing their targeting capabilities, their transparency, the insights you can gain, and their potential for scale and profitability. You'll probably end up using a couple of different platforms, each with its own advantages.

We recently had the opportunity to take part in an article by Adjust, titled How to Choose the Right Mobile DSP for Growth, and contributed our insights.

Bid Shading – Making the Most of First-Price Auctions

Bid shading, a practice used by programmatic bidders to eliminate the risk of overbidding in first-price auctions, is gaining traction in the programmatic industry. Before getting to know bid shading better, let's take a brief look at the history of auction types in programmatic advertising.

Industrial Shift to First-Price Auctions

In RTB (Real-Time Bidding), multiple DSPs (demand-side platforms) compete to display an ad to a specific user on the publisher’s inventory (i.e., ad space within an app or website). As essentially all major SSPs (supply-side platforms) operate with the CPM model, their direct interest is to sell the inventory at a higher price to maximize the publisher’s ad revenue (and, obviously, their share of it).

For over a decade, all SSPs operated with the second-price auction model. In a second-price auction, the winning bidder would pay the price of the second-highest bid plus $0.01.

This all began to change when SSPs started moving to the first-price auction model, where the highest bidder pays the full price of their bid. Google was the first to make the change to its entire inventory, and all exchanges followed suit within a couple of years.

While publishers and SSPs are the direct beneficiaries of first-price auctions, buyers (advertisers and DSPs) are forced to be more cost-aware and not "overbid" the market by too much.

These days, the vast majority of SSPs are running on a first-price auction model. This forces buyers to face the dilemma of how high they should bid to win the auction without overpaying above its market value. The industry shift made DSPs adjust to the current situation and develop solutions to make sure they pay exactly as much as needed to win the impression.

What Is Bid Shading?

Bid shading is a practice used by programmatic bidders to predict the market price of auctions based on historical data and bid accordingly. By correcting your bid to be just high enough to win the auction, bid shading helps you avoid significant overpaying.

Having access to the historical record of what amount was spent in the past for the same bid—taking into consideration the app, placement, ad format, historical user data (for Android campaigns), and at what price the previous bids were lost—the algorithm on the DSP side can decide how much to bid on the impression.
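A minimal sketch of that idea follows, assuming a history of win/loss outcomes for a comparable (app, placement, format) slice; the midpoint estimate and 5% safety margin are illustrative choices, not a production algorithm.

```python
# A hedged sketch of bid shading: estimate the market-clearing price
# from past win/loss outcomes for similar auctions, then bid just above
# it, capped at the impression's full value to the buyer.
def shaded_bid(history, max_bid):
    """history: list of (bid_price, won) outcomes for similar auctions."""
    wins = [price for price, won in history if won]
    losses = [price for price, won in history if not won]
    if not wins:
        return max_bid                     # no wins yet: bid full value
    floor = max(losses, default=0.0)       # highest price that still lost
    estimate = (floor + min(wins)) / 2     # clearing price lies in between
    return min(max_bid, estimate * 1.05)   # small safety margin, capped

history = [(2.00, False), (2.50, False), (3.20, True), (4.00, True)]
print(round(shaded_bid(history, max_bid=5.0), 2))   # ~2.99 instead of 5.00
```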

The Essence of Bid Shading

To understand the essence of bid shading, imagine you need to take a bus that circulates once an hour.

Sometimes the bus arrives on time, but it can frequently arrive a few minutes earlier or later. If you miss the bus, you will have to wait an extra hour for the next one. At the same time, you don’t want to be at the bus stop too early and waste precious time.

Now, you have historical data that allows you to estimate how much earlier the bus can arrive, on average. Based on this historical data (that you painfully gathered showing up to the office an hour late), the natural thing to do is to arrive a couple of minutes earlier, just on time to catch the bus if it arrives before its scheduled time. And, worst case scenario, you’ll end up waiting a few more minutes if it’s running late.

Internally, we call this bus-shading. And no, it’s not weird that we obsess over industry terms and let it leak into our commute considerations—you don’t have to be so judgemental.

Bid shading works in the same way: The algorithm is adjusted to win the auction and helps advertisers avoid overpaying for the impression.

As such, DSPs can identify which users are likely to fit the "perfect" user criteria and bid higher on them, while bidding lower on users who are still relevant but bring lower LTV.

Who Benefits from Bid Shading?

Among DSPs, only a few of the most sophisticated players can actually claim that they’ve implemented bid shading in their toolkit. These are the players who have gathered sufficient data throughout years of extensive work and research, armed with machine learning capabilities and having access to major data points needed to build high-probability prediction models.

The Definitive Guide to Mobile Ads’ Sizes and Creatives’ Formats in 2023

In this article, we will review everything you need to know about mobile creative formats and their specs, performance, and best practices.

It might seem that ads come in all shapes and sizes—well, that’s not exactly the case with mobile programmatic. There are standard formats in programmatic that will help you cover most of the available inventory types. Exceptions might include private marketplaces’ inventory with specific requirements.

Mobile ads’ placements

Before we dive into the formats, here is a table of the standard ad sizes and the creative formats each placement supports (the rewarded column combines a video with an end-card/playable; users must watch the entire video to gain an in-game reward).

| Size / Placement | Image (jpeg/png/gif) | Video | MRAID: Dynamic / Interactive | Rewarded |
|---|---|---|---|---|
| 320x50 – Small Banner | ✔️ | | ✔️ | |
| 300x250 – Rectangular Banner | ✔️ | | ✔️ | |
| 728x90 – Tablet Small Banner | ✔️ | | | |
| 1200x628 – Native | ✔️ | ✔️ | | |
| 16:9 / 9:16 ratio – Native video (resized assets may be reused for other placements with the same ratio) | | ✔️ | | |
| Full screen (Interstitial) – Landscape/Portrait | ✔️ | ✔️ | ✔️ | ✔️ |

Mobile ads’ formats

Mobile ad banner

Banner ads are small rectangular ads that appear at the top or bottom of a mobile screen.
While banners hold the tiniest screen “real estate” (and usually the cheapest inventory), this ad format can have a great impact when utilized properly.

Best practices in using banner ads:

  • Make sure to include eye-catching visuals
  • Keep it brief
  • Include a button and/or a CTA

Pros:

  • Low cost compared to other formats
  • Ability to reach a large audience due to the low cost
  • Non-interruptive

Cons:

  • Least ad space, meaning it might go unnoticed due to banner blindness
  • Lower CTR compared to other formats

Most common mobile banner sizes:

  • 320×50 – Small banner
  • 300×250 – Medium rectangular banner
  • 728×90 – Tablet small banner

Mobile video ad

Video is a format that captivates users’ interest and enables advertisers to showcase their brand to the fullest extent.

Best practices in using mobile video:

  • Make sure to add an end-card
  • Engage users during the first 3-4 seconds, as most platforms allow users to close the video after 5 seconds
  • Keep it short, preferably only up to 15 seconds

Pros:

  • Engaging and immersive advertising experiences
  • Maximum branding
  • More room for creativity

Cons:

  • Costly distribution
  • Might be expensive to produce
  • Might be intrusive if not executed properly

Most common mobile video ads sizes:

Video ads come in various sizes and shapes, depending on the exact placement. However, all videos share a 16:9 / 9:16 aspect ratio. Keep in mind that most ad platforms can automatically resize creatives to fit the placement, so there is no need to overstress about providing all the possible sizes. Instead, focus on providing a landscape or portrait HD-resolution video (1280×720 / 720×1280) and an end-card of the same size.

MRAID (Mobile Rich Media Ad Interface Definitions) ads

MRAID is a standard API developed by the IAB for building interactive mobile ads using HTML5 and JavaScript.

MRAID provides a comprehensive set of guidelines to address interoperability problems between publishers’ mobile apps, ad servers, and various rich media platforms. These guidelines cover various ad interactions, such as expanding, collapsing, resizing, and closing, as well as other features such as screen size, video playback, image saving, and more.

These ads can vary to be as simple as animated banners or as complicated as resembling the gameplay mechanics of an MMORPG.

Playable ads

Playable ads are interactive ad units that mimic a short gameplay or native app experience. The goal is to let the user “get a taste” of the app before they install it. In our practice, playable ads are one of the most efficient ways to promote an app, as they eliminate the app discovery phase and allow users to familiarize themselves with an app before downloading it.

Playable ads sizes:

The standard development sizes for interstitial or full-screen playable ads are 1280×720 / 1920×1080 (landscape) and 720×1280 / 1080×1920 (portrait). However, it's crucial to test the rendering on multiple screen sizes to ensure a smooth user experience.

Pros:

  • High performance due to being an extremely prominent format
  • Interactive, allowing users to get a glimpse of the app
  • Engaging format that ensures users who download the app are actually interested
  • Very precise measurement (every interaction counts)
  • Testing for new demographics and social segments

Cons:

  • Expensive inventory since playables take up the entire app screen (interstitial)
  • Difficult to effectively create and utilize
  • Can lead to poor user experience if there’s poor execution and SDK adaptation

Mobile dynamic ads

A dynamic ad is a format generated in real time based on aggregated behavioral data and previous interactions with an app. This allows for a more personalized and relevant advertising experience. Dynamic ads can be used across multiple placements, stretching from the smallest possible size to full screen (interstitial).

The main advantage of dynamic ads is their ability to precisely target and engage the audience, leading to higher conversion rates and more efficient spending.

Dynamic ads specs:

Dynamic ads can fit any inventory size. The standard assets required to generate dynamic ads include the following:
  • Title – up to 25 characters
  • Description – up to 100 characters
  • CTA – up to 15 characters
Depending on the ad specifications, your adtech provider might require extra assets and/or extra integration to ensure that the right content is displayed to the right audience.
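
To illustrate how these assets come together, here's a toy sketch of assembling a dynamic ad from a user's last interaction. The product data, template copy, and field names are illustrative, not any provider's actual spec.

```python
# A toy sketch of generating a dynamic ad from the standard assets
# listed above, personalized by the user's last viewed product.
def build_dynamic_ad(last_viewed: dict) -> dict:
    title = f"Still eyeing the {last_viewed['name']}?"
    description = (f"The {last_viewed['name']} is waiting for you. "
                   "Complete the look today.")
    return {
        "title": title[:25],               # title capped at 25 characters
        "description": description[:100],  # description capped at 100
        "cta": "Shop Now",                 # CTA capped at 15 characters
        "image_url": last_viewed["image_url"],
    }

print(build_dynamic_ad({
    "name": "aviator sunglasses",
    "image_url": "https://cdn.example.com/aviator.png",
}))
```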

Pros:

  • Personalized based on previous experience – imagine a user checked out sunglasses on your app or website. Based on this interaction and further user actions, an advertiser can either push the item sale or upsell more items from other categories to finalize the outfit (like shoes).
  • Simple maintenance once setup is complete.
  • Scalability to allow the setup to be expanded.

Cons:

  • Deeper level of integration required
  • Not suitable for every product

Native ads

Native ads are designed to match the content of the media source utilized by the user. This format provides the most non-intrusive user experience while promoting the app.

Native ads specs:

Because native ads are generated in real-time, they require extra assets:
  • Title – up to 25 characters
  • Description – up to 100 characters
  • CTA – up to 15 characters
  • Image (1200×628)

Native ads sizes:

  • Image – 1200×628
  • Video – 1280×720 (or similar ratio)

Rewarded ads

Rewarded ads are typically interstitial (full-screen) video ads with an image or playable serving as an end-card. Users earn in-app rewards (lives, in-app currency, premium content, etc.) for watching the ads until the end or interacting with them.
Users can decide whether they want to opt in and watch the ad, which leaves them with an overall positive experience.

Pros:

  • Increased user engagement, as users can’t skip the ad
  • Positive user experience when the right ads are shown to the right users
  • No payment required from the advertiser unless the user watches the ad until the end
  • Full-screen placement ensures full user attention

Cons:

  • A relatively expensive inventory
  • Correct targeting is crucial

What’s next?

Delivering the right creatives is undoubtedly just as important as targeting the right audience at the right time. Testing creatives is a trial-and-error path. To yield the best results, the programmatic DSP responsible for delivery should take several steps into consideration:

1) Choose the right combination of formats. It’s relatively rare for a user to download the app after one ad view. Following the basic AIDA (attention – interest – desire – action) model, it typically requires a few impressions to convince the user. That’s why it’s important to consider which ads users will initially be exposed to and which ads will help them progress in the marketing funnel.

2) Consult on the right performance metrics. Programmatic buying works exclusively with CPM (cost per 1,000 impressions), even if the final payment model is different. It's important to track the right metrics and have access to granular insights to identify the best-performing creatives based on their actual performance.

3) Don’t stop testing. Check out our article on creative testing to see which approach works best for your app.

What is Real-Time Bidding (RTB)?

RTB stands for Real-Time Bidding and refers to the automated process of buying and selling ads. RTB, as the name implies, runs in real-time, on a CPM basis, in the form of an auction. This process is facilitated by an SSP (a Supply Side Platform).

The auction process, at its most basic, consists of a publisher (the supplier of the inventory), an SSP (the facilitator of the auction), a DSP (the media buyer), and an advertiser (who wants their ad to be shown).

At any second in the RTB ecosystem, there are countless bid requests on which DSPs are bidding. The process, from the moment a single ad request is sent until the bidding ends with a winner, takes somewhere between 100 and 300 milliseconds.

From the user's side, a bid request is the moment in the app just before the ad appears. In many cases, before the ad even appears to the user, the SSP has already run the auction, a media buyer has won the bid, and the ad has been pre-cached, ready to be displayed at the right moment.

The process is automated – it can be rule-based programmatic or even ML-based – which allows advertisers to use their marketing budgets efficiently (bidding higher on users most likely to convert, lower on users less likely to convert, and skipping the bid on unfit users).

To learn more, read our intro to programmatic mobile user acquisition

The Difference Between RTB and Programmatic Media Buying

Since RTB uses automated media buying, it is, by definition, programmatic – but that does not mean all programmatic media buying is done via RTB. RTB applies programmatic buying to bidding in an open market; the real-time auction is what sets RTB apart from other forms of programmatic.

Programmatic, on the other hand, can also refer to buying directly from a publisher's inventory. The term solely refers to the fact that the inventory is bought automatically.

The Real-Time Bidding Process

First and Second Price Auctions

There are different ways advertisers can acquire inventory in the open-market part of the RTB ecosystem. The two central auction types are first-price and second-price auctions.

In a first-price auction, the process is simple: the highest bidder wins the impression for the exact price they bid. In a second-price auction, the highest bidder still wins, but the price is set by the second-highest bid (plus $0.01). Meaning, if the highest bid was $10 and the second-highest bid was $7, the winner pays $7.01. Both rules are sketched below.
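
The arithmetic of the two auction types is simple enough to capture in a few lines; the bids below are illustrative CPMs, and the sketch assumes at least two bidders.

```python
# A minimal sketch of the two auction-pricing rules described above.
def first_price(bids):
    winner = max(bids, key=bids.get)
    return winner, bids[winner]                 # winner pays their own bid

def second_price(bids):
    winner = max(bids, key=bids.get)
    runner_up = sorted(bids.values(), reverse=True)[1]
    return winner, round(runner_up + 0.01, 2)   # runner-up's bid + $0.01

bids = {"dsp_a": 10.00, "dsp_b": 7.00, "dsp_c": 4.50}
print(first_price(bids))    # ('dsp_a', 10.0)
print(second_price(bids))   # ('dsp_a', 7.01)
```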

First-price auctions lead to more predictable results. They allow buyers to bid according to the user's perceived value, in line with the campaign's KPIs (i.e., the probability to install, deposit, etc.). Taking the user's predicted value (for example, LTV or ROAS) into account and adjusting the bid accordingly may raise the bid, but it improves the bottom line by acquiring users who are more likely to return the investment.

The key difference between first- and second-price auctions is control over the price. In a first-price auction, media buyers know exactly how much they'll spend on a winning bid; in a second-price auction, they don't.

Second-price auctions offer lower spend on impressions, with the risk of losing some bids as a result of the modified price. The complexity and unpredictability of second-price auctions are currently leading the industry to shift towards first-price auctions.

“Now, as real-time bidding has become more prevalent and buyers have become increasingly sophisticated, it makes sense for the ecosystem to transition to a first price model” – Casie Attardi Jordan, Mopub

Bid Shading – Making the Most of First-Price Auctions

Bid shading is a practice used by programmatic bidders to eliminate the risk of overbidding in the realm of first-price auctions.

Bid shading is an algorithm that predicts the price of bids based on historical data. By allowing you to pay just enough to win the bid, bid shading helps you avoid significant overpaying.

Having access to the historical record of what amount was spent in the past for the same bid—and taking into consideration app, placement, ad format, historical user data (for Android campaigns), and at what price the previous bids were lost—the algorithm on the DSP side can decide how much to bid on the impression.

Read this article to learn more about the essence of bid shading.

Location, Location, Location: In-app Ad Placements

The Differences Between Banner, Native, and Interstitial Ads

In-app ads offer an engaging user experience by using videos, playable ads, dynamic ads, and more. The three main types of in-app ads are interstitial, native, and banner ads.

These three types of ads differ in many aspects. Interstitial is often referred to as the most engaging, since it's a full-screen placement and supports video and playable ads, which are considered the most effective formats.

“Due to their size, interstitial ads can be much more visually compelling than other static ad formats. It’s also possible to use these full-screen ads to share engaging content like videos and store locators.” Samuel Harries, Content Manager @ Adjust

These ad types also differ in size and price:
  • Banner ads, being the smallest, are considered the least intrusive due to their size. This also makes them less effective and less competitive, which results in them being the cheapest ad format.
  • Native ads are somewhere in the middle (in size, price, and effect), and include text-based elements that allow testing extensive amounts of creatives by changing each element independently, creating hundreds of variations.
  • Interstitial ads are the biggest (being full-screen ads). As the most effective type of ad, the competition over interstitial placements drives their price up, making them the most expensive ad placements.

The Bid Request

Each bid request includes a certain amount of data that helps DSPs estimate the user’s potential and decide how high of a bid to make, if at all.

There is basic data that exists in almost all bid requests; then there are data enrichment tools that allow DSPs to receive additional information about the device; and, last but not least, there's behavioral data that helps DSPs predict future behavior and make accurate predictions on a bid request.

Usually, the more data available in the request, the more competition there is for it, leading to higher bids, as DSPs have more data to base their decisions on.

The Basics - Bid Request Data

Bid request data usually includes the following:
  • The device data, which includes the UA string, model name, etc.
  • The user identifier (IDFA/GAID), unless the user is a LAT user (or an iOS 14 user who opted out).
  • Location data, which can be as vague as the IP or a city, or as detailed as a ZIP code or even GPS coordinates.
  • The ad placement, which indicates the app name and the in-app placement of the ad.
  • The OS version, which may sound insignificant but can be very telling. There are a lot of conclusions to be drawn about users who stay up to date with the latest OS version versus those who haven't updated in a while (especially when combined with the rest of the data). A simplified sketch of such a request follows below.
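
The sketch below shows these fields in an OpenRTB-flavored shape. The field names follow OpenRTB conventions, but this is a trimmed illustration with invented values, not a complete or exact bid request.

```python
# A simplified, OpenRTB-flavored sketch of the bid request data listed
# above. Values are illustrative.
bid_request = {
    "id": "auction-123",
    "app": {"name": "Example Weather App", "bundle": "com.example.weather"},
    "imp": [{"tagid": "home_screen_banner", "banner": {"w": 320, "h": 50}}],
    "device": {
        "ua": "Mozilla/5.0 (iPhone; CPU iPhone OS 14_5 like Mac OS X)",
        "model": "iPhone12,1",
        "os": "iOS",
        "osv": "14.5",
        "ifa": "",   # empty for LAT users / iOS 14 users who opted out
        "geo": {"city": "Austin", "zip": "78701", "lat": 30.27, "lon": -97.74},
    },
}
```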

Data Enrichment - Maximizing the Data in the Bid Request

With the help of data enrichment tools, media buyers can get additional hardware-related details in the bid stream and improve their predictive capabilities. Information such as battery life, PPI (Pixels Per Inch), device memory, and others, can be indicators of the users’ correlation with the advertiser’s app.

For example, users who install high-fidelity 3D games (like action and FPS games) usually have higher-PPI devices, leading to the conclusion that users with high-PPI devices are more likely to install this type of game. Another example is the user's current battery level – users with a low battery are less likely to install any new app, since they want to preserve what battery they have left for their regular usage.

These examples are simplified and do not stand alone. These data enrichments add to the rest of the data when deciding whether or not to bid and, if so, how much to bid on any given ad request.

We've been working with 51Degrees to enrich our device-related data; you can read more about it, and how it has improved our campaigns, in our case study.

Enrichment From the Supply Side

SSPs can add extensions to the ad requests that provide additional data points. Fyber is one of the SSPs that puts a lot of effort into maximizing the enrichment of their requests with an abundance of useful attributes.

By providing DSPs with data enrichment, adding details such as battery level, available disk space, impression depth, and session duration, Fyber helps DSPs improve their predictions, which, in turn, enables DSPs to bid higher on the likeliest users to convert.

This results in higher bids and more wins, which, in turn, increases the profits for all participating sides – the publisher, the DSP, the SSP, and the advertiser, who ends up with better user acquisition campaigns.

“We see great value in enriching the bid stream with unique, SDK-based, data parameters. It helps DSPs bid more effectively by providing real-time access to crucial targeting parameters and puts them on a level playing field with ad networks that have their own SDK.

Building close collaborations between DSPs and SSPs help ensure that everything from targeting to creative delivery via our SDK is optimal, creating a win-win scenario where advertisers enjoy better ROAS and publishers see better ad monetization”

Behavioral Data - Finding Your Ideal Users

Behavioral data is a key factor when running programmatic user acquisition campaigns. This data hinges on the IDFA/GAID to predict future user behavior based on past usage patterns. Usage patterns can be anything from the genres and types of apps users use, to how long and when they use them, and much more.

These patterns help form predicted behaviors, such as the types of apps users would be most interested in, when they're likelier to install new apps, and even how much time they could potentially spend in the targeted app.

Being this indicative of users' future actions, behavioral data plays a central role in the decision making behind bidding.

Bid request data (basic, enriched, and predicted combined) can be used in many different ways (i.e, each DSP delivers different campaign results while having access to the same data). The differences usually stem from the way the data is being implemented, as we’ve covered in our Data Activation in Mobile UA entry.

What is Data Sharing in Mobile User Acquisition?

Data sharing in mobile UA means providing your UA partner access to essential UA-related data, in the form of an audience list made up of existing users. Sharing this data fulfills two central goals: it helps your UA partner target lookalikes, which, in turn, improves your campaign results, and it serves as a suppression list (formerly known as a blacklist) to prevent targeting existing users.

By using this data, your campaigns are focused on acquiring new and relevant users, instead of wasting your budget on a long exploration phase or on targeting existing and active users, or ones who already used your app and churned.

Sharing an audience list with your UA partner is relevant for partners who run data-driven UA campaigns. If the campaigns run by your partner are not, at the very least, programmatic, then the shared data will only be used as a suppression list.

A Holistic Approach to Mobile UA

Using an Audience List to Target Lookalikes

While it may sound invasive, data sharing has a huge impact on mobile marketing campaigns when they are run through a DSP partner that employs machine learning. Sharing an audience list means that from the early stages of the campaign, your DSP partner has the information required to target lookalikes (given that the partner has the technical capability to do so).

Using existing audience data can help your partner create a profile of a potential new user. Depending on the UA partner’s targeting abilities and its existing data, this data can help recognize behavioral patterns from which to build an initial targeting model.

Some of the patterns that can be detected and targeted are device types, OS versions, session depths, time of day, and so on. Recognizing common behaviors among existing users allows for the creation of a targeting model (programmatic or fully based on machine learning) which, in turn, learns based on its performance.

For example, let’s say the app at hand is a hyper-casual gaming app. You provide your UA partner with data about your existing users (i.e an audience list). Your partner uses this data to detect patterns and sees a specific time-of-day activity, a repeating average session length, and that these engaged users all are highly active in the hyper-casual genre. They might also recognize a more engaged gender or age group. These patterns can be picked up by the machine learning model and guide bidding decisions, to ensure only users with a higher probability to install and engage with the app are bid on.

Optimizing Your UA Campaign - From Targeting Lookalikes to Reaching KPIs

Using the audience list, your partner can start the campaign by targeting only relevant users. Now, the algorithm can learn and improve based on the performance of the initial lookalike targeting, according to the set KPIs, and optimize towards users with similar behavioral patterns, targeting them exclusively. This process can be thought of as a refinement of the model: the model is already set on targeting high-intent users, tracking their performance, and optimizing towards the set KPIs.

To put it simply, thanks to the shared data, the campaign starts by targeting lookalikes; then, as it learns and tracks data trends and the goal KPIs, a targeting model is established, and its targeting keeps improving over time as the model learns from its past mistakes (hence the learning in Machine Learning). Where earlier we mentioned device type and interest in the genre, the model may now find even more distinctive behavioral patterns common to the most ideal users (those users may be high-retention users, depositors, or something else, depending on the set KPIs).

Read more on data activation in our Data Activation in Mobile UA entry.

This is a win-win-win situation: for the advertiser, who sees results faster; for the DSP, which can model and scale faster; and for the users, who are shown ads relevant to their interests. Having said that, first-party data is not a matter to be taken lightly, and should only be shared with a trusted partner.

The Different Types of Audience Lists

There are two important distinctions when it comes to sharing audience lists – the type of list (dynamic or static) and content segmentation.
  • Static lists – As the name suggests, these lists are a single delivery (i.e., they’re shared one time). They consist of a list of all existing users up to a certain point in time. If you’re running other UA campaigns at the same time, either on social networks or with other UA partners, the static nature of the list may cause problems, depending on the existing pool of users and the scale of the campaign. A static list may turn irrelevant if the app is new and the user pool is relatively small, or when multiple campaigns are running in parallel on different platforms. In general, a static list is not an ideal choice, but it’s still better than not sharing a list at all.
  • Dynamic lists – Dynamic lists update automatically as new users install the app, regardless of where the install originated. This is useful (and highly recommended) when running multiple UA campaigns on different platforms, and helps avoid targeting newly acquired users. The different attribution platforms can help create and maintain these lists (and even segment them) by offering a very simple way to share ongoing, dynamic audience lists with your various UA partners. This not only benefits the performance of immediate UA campaigns by ensuring newly acquired users are not targeted, it also allows your partners to gain a truly holistic overview of your user base, which in turn can significantly impact performance in the long run.
  • Custom/segmented audience lists – These lists, which can be either static or dynamic, are segmented by user behavior, expressed in in-app events – users who reached D30 retention, inactive users, depositors, inactive depositors, and so on. A UA partner can use these segmentations to recognize data patterns faster and target accordingly. If the campaign KPI is D30 retention, for example, your partner can look at a D30 retention segment, recognize data patterns specific to these users (such as device type, OS version, session depth, app genres, active hours, etc.), and target lookalikes. The segmentation helps your partner find patterns distinctive to the segment and improve their targeting. If the campaign is ML-based and the list is dynamic and segmented, then as data patterns shift and change, so will the model (if something in the patterns of D30 retention users changes, so will, consequently, the targeting). If these segmentations don’t exist but your partner is still running an ML-based campaign, these data patterns will eventually be surmised, but it may take significantly longer to collect enough samples.

Is an Audience List the Same Thing as a Suppression List?

A suppression list is a term mostly used in the context of email marketing, where it’s commonly used to avoid targeting existing users. In the context of data-driven mobile UA, they’re used similarly and help focus the spending budget on acquiring new users. We avoid the term suppression list since data sharing (or audience lists) has a bigger contribution and function in mobile UA campaigns than suppression lists do.
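To make the mechanics concrete, here’s a minimal sketch (in Python, with hypothetical field and list names) of how a shared audience list can double as a suppression list at bid time:

    # A shared audience list, keyed by device ID (IDFA/GAID).
    # In practice this would be a regularly refreshed (ideally dynamic)
    # list delivered via your MMP; all names here are illustrative.
    suppression_list = {"idfa-111", "gaid-222", "idfa-333"}

    def should_bid(bid_request: dict) -> bool:
        """Bid only on potential new users; skip existing ones."""
        device_id = bid_request.get("device_id")
        if device_id is None:  # LAT / missing ID: handle separately
            return False
        return device_id not in suppression_list

    requests = [{"device_id": "idfa-111"}, {"device_id": "idfa-999"}]
    print([should_bid(r) for r in requests])  # [False, True]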

Customizing Ad Serving According to the Audience Segment

The Effect of Creative Selection

Once the data is shared and the audience is segmented, we can delve into creative selection. Customizing the creatives and adapting them to relevant audience segments can significantly improve the campaign’s performance.

For example, a playable ad or an introductory video ad can work well when introducing the app to a completely new user. When retargeting users, on the other hand, there’s no point in introducing them to the app; it’s better to expose them to newly available content, or content they haven’t interacted with yet.

When it comes to retargeting campaigns, there are almost endless options for customization. Ads can be customized down to the stage users reached in the app or the specific product they were about to purchase.

For example, consider a user who read the product description but didn’t add the product to the cart, vs. a user who added it to the cart but didn’t complete the purchase. The first user can get an ad with a notification that the product is still available, whereas, since the second user was further down the funnel, their ad would be an incentivized one, offering a discount for completing the purchase.

The ability to customize at such a granular level can lead to better KPIs and more engaged users. The options are limitless, and as such, should be used in moderation (don’t scare off your users by serving them 5 ads a day, each one detailing exactly where they abandoned the app).

What is Data Activation?

Data activation, at its broadest, means using data by taking actions based on its implications. In other words, generating value and insights, recognizing behaviors and trends in the data, and using them to improve performance, whatever the task may be.

In our field, we refer to data activation as the use of aggregated data for purposes of user acquisition, which translates into the ability to extrapolate from data effectively. Since user acquisition is at the forefront of every app (and any business, for that matter), and the competition is constantly growing, the challenge of acquiring new users grows with it.

As the competition for acquiring new users increases, so does the cost associated with it. It’s much more expensive to acquire new users than it is to retain existing ones. Overcoming the challenge of acquisition costs offers great rewards, such as scale, increased revenues, and eventual company growth.

Why is Data Activation Vital for Your User Acquisition?

These days there’s a near-infinite amount of data. Running successful UA is rooted in the way you use yours.

When bidding on OpenRTB inventory, in a single bid request, you can get data on the type and model of the user’s device, OS version, location, local time, and much more. Using data enrichment and combining aggregated data makes your options virtually limitless. The central question, and how you differentiate yourself from the competition, is in the way you use, or more accurately, look at that data.

All marketing efforts, long before the digital and mobile apps world, focused on understanding customers and their needs. Though the technology has changed, the ideas remain the same: understanding trends, patterns, and users’ behaviors is key to running successful UA campaigns.

This behavior can be understood by focusing your analysis on an existing pool of users to predict future behaviors and recognize potential new users. In digital marketing, these potential new users are often referred to as lookalikes.

It’s All About the Features

Depending on the technologies in use (with the year being 2020, we assume the technology is very much in existence and in use), you have a lot of data. You’ve recognized patterns and behaviors within that data, and now it’s time to act upon them and generate predictions, or make decisions that will help acquire new users effectively.

The question is how to derive specific insights from these immense quantities of data and implement them. There’s a tendency to forget there are users behind the data and commit solely to the numbers at hand, recognizing trends and following them without rationalizing them, which, in the long run, creates problems.

When machine-learning algorithms are in use, the way to derive meaning from the mass is by creating features that dictate how the algorithms use the data at hand. These features bridge between the data and the users behind it. A feature can be as simple as the time of day or the day of the week, or as complex as the current session length or the contextual relationship between the user and the promoted application.

Time of day illustration

To explain this point, we’ll use time-of-day data and show how you can create different features from the same data. You can create a time-of-day feature that refers to this information on an hourly basis (i.e., 24 units a day) and let the algorithm do its thing (target users by their most active hour units, for example). This would be an example of a feature in a machine-learning algorithm that follows the data blindly, without accounting for actual user behavior.

On the other hand, you can look at the data and layer in known human behaviors that correlate with it. For example, according to the data, users are mostly inactive during the night, at their most active in the morning (when they’re commuting to work), somewhat active during the workday, and active again in the evening.

Instead of creating 24 single-hour units, you can create, for example, 4 units, each representing a different level of activity during the day, and use that grouping when targeting. Feeding a model with this kind of feature makes a lot more sense, in the long run, than blindly following the most basic grouping of the data (i.e., the 24 single-hour units).
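As a rough sketch of the two alternatives (the bucket boundaries below are assumptions for the example, not universal truths):

    def hour_unit(hour: int) -> int:
        """The 'blind' feature: 24 single-hour units."""
        return hour  # 0..23

    def activity_unit(hour: int) -> str:
        """A behavior-informed grouping into 4 activity levels.
        Boundaries are illustrative, not universal."""
        if 0 <= hour < 6:
            return "night_inactive"
        if 6 <= hour < 10:
            return "morning_commute_peak"
        if 10 <= hour < 18:
            return "workday_moderate"
        return "evening_active"

    print(hour_unit(8), activity_unit(8))  # 8 morning_commute_peak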

Commuting at rush hour illustration

Features can become more intricate and interesting. For example, a feature can combine the time-of-day unit with movement (i.e., a location change within a certain time of day), on the assumption that if it’s rush hour and users are in movement, they’re probably commuting to work and at their most active, at least for the morning portion of the day.

This means that the feature not only follows the data and targets accordingly, it also incorporates human behaviors and can make nuanced decisions (i.e., targeting all users at rush hour is not as accurate as targeting all users at rush hour who are also in movement). The better a feature combines data and behavior, the easier it is to understand user behavior and actually predict future behaviors.
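Continuing the sketch, the combined feature might look something like this (location-change detection is abstracted into a boolean for the example):

    def commute_feature(hour: int, location_changed: bool) -> bool:
        """True only for users in a rush-hour unit AND in movement.
        The rush-hour boundaries are illustrative assumptions."""
        morning_rush = 6 <= hour < 10
        return morning_rush and location_changed

    # A stationary user at 8am is not treated as a commuter:
    print(commute_feature(8, location_changed=True))   # True
    print(commute_feature(8, location_changed=False))  # False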

To continue the time-of-day line: building a time-of-day feature on data collected on weekdays, and then running it on weekends, would produce very different outcomes.

While midweek users were most active in the morning unit, when they were commuting to work, on the weekend they might be busy, asleep, with their family, or running errands during those same hours.

Had the feature included a location change (or a day-of-the-week grouping), the results would have been better, and lookalikes would play their part. Being too specific, though, also has its downfalls. The problem with creating overly specific features is that they become limiting, by overfitting the model.

The Problem With Overfitting

Overfitting is what happens when a model learns the details and noise in the training data to the extent that it negatively affects the model’s performance on new data. Instead of correctly generalizing from the data and following trends, it follows random fluctuations and noise and applies what it learned to new data.

The process of building these features is challenging and requires expertise, both in connecting the data points to the business problem at hand and in ensuring the model is representative and not overfitted. When done right, the rewards outweigh the labor by a significant margin.

Blackbox Approaches

The term Blackbox, in the context of machine-learning algorithms, refers to the type of models that are uninterpretable by humans (such as neural networks).

At its core, a Blackbox model works similarly to ML models that can be interpreted (like logistic regression), but it is exponentially harder to understand: its iterative, and sometimes recursive, nature learns complex feature relationships, and estimating the importance of each feature and its relation to the other features is essentially impossible.

The way to evaluate a Blackbox model is by its ability to predict – checking whether it actually makes the predictions it was designed to make, in your production setting.

Interpretable models and their decisions, on the other hand, can be analyzed relatively easily to decide whether they are making decisions that seem correct for the business problem at hand, and then explained to decision-makers. They may rely on non-interpretable attributes, but they can still be understood through their original attributes.

In reality, Blackbox models like neural networks (originally designed by computer scientists together with neuroscientists to mimic the way the human brain works) can definitely produce more accurate models than simpler, interpretable ones, but that isn’t always the case.

Sometimes, the simpler models outperform the complex ones due to the fact that the data scientist has more control over what happens behind the scenes and can make little patches and tweaks to fine-tune the model to the problem at hand.

When planning to tackle a business problem with machine learning, you’ll first need to establish whether it’s important to you to be able to interpret the decisions guided by the model, and, of course, consider your production environment’s limitations, as the more advanced Blackbox models require stronger computational resources for inference.

Predictive Performance Metrics

Using machine learning models, advertisers can predict the eventual lifetime value of each user from their first day of using the app, and in many cases with incredible accuracy. Implementing this data can help steer campaign budgets towards the most successful marketing channels early on, allowing the UA team to effectively scan the entire ecosystem for any source that can meet goals and expedite growth.

Data activation, be it in mobile campaigns or traditional marketing campaigns, is based on current and past data to predict future outcomes. This idea relates to predictive KPIs. We’ve discussed the importance of long-term predictive KPIs such as LTV in our Cornerstone KPIs for Mobile UA entry.

Tracking Mobile UA Campaigns - Pre-Install and Post-Install Metrics

When tracking the performance of in-app mobile marketing campaigns, there are endless dimensions and a lot of metrics to consider. Before you start looking at the data, it’s better to gain an understanding of which data you want to be looking at, and what should be considered when analyzing the data.

Metrics can be divided into different categories; the ones we’ll be referring to in this article are initial and cost-related KPIs, with a distinction between pre-install and post-install KPIs. The second category is value-based metrics, which we’ve covered in The Cornerstone KPIs for Mobile UA and which include retention rate, ARPU, ROAS, and LTV.

Why Pre-install Metrics Matter

Pre-install metrics help gain early knowledge of the performance of the campaign. While an install may not be the goal for most marketers (KPIs will usually focus on ROAS, whether it’s dependent on retention or deposits), it’s still a significant performance indicator.

While most marketers focus heavily on click-through and conversion rates, we argue that there are other, more reliable early indicators that can be used to predict the performance of the campaign, such as IPM (installs per mille).

Before full transparency became the standard for marketers, granular data wasn’t as accessible as it is today. App marketers had to rely on conversion rates and other pre-install metrics to estimate the quality of their partners. Based on that data, marketers chose their partners and allocated the highest spend to those shown to bring the most engaged users, on the promise of growth potential and quality.

These days, when granular data is available and there’s more transparency, CR (conversion rate) is not as meaningful as it once was, but both pre- and post-install data are considered at every stage of a user acquisition campaign. CPI (cost per install) and the metrics around it aren’t the one-size-fits-all solution they used to be, but they should still be optimized towards, within reason, without compromising performance down the funnel.

Considering Ad Spend - Scalability and Relative Success

Ad spend is not a measure of performance in and of itself but it still plays a role when you weigh performance. A campaign with more spend but lesser results might still be better than a campaign with less volume and better relative results.

To exemplify it, we’ll use two fictional campaigns:
  • Campaign A had a budget of $100 and delivered 30% D7 ROAS.
  • Campaign B had a budget of $1000 and delivered 20% D7 ROAS.
When you look at ROAS alone, campaign A delivered better results than campaign B, but if you look at revenue, campaign A’s revenue was $30, whereas campaign B’s revenue was $200.

Campaign B delivered higher revenue by utilizing a bigger spending budget. Since it’s not a given that campaign A is scalable, campaign B is the better-performing one, in this instance.

If campaign A can deliver the same quality of results with 10x the budget, it’d be the better-performing campaign.

The takeaway here is that the quality of the campaign is related to its scale and should be measured against it.
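The same comparison as a quick calculation (using the figures from the fictional campaigns above):

    campaigns = {"A": {"spend": 100, "d7_roas": 0.30},
                 "B": {"spend": 1000, "d7_roas": 0.20}}

    for name, c in campaigns.items():
        revenue = c["spend"] * c["d7_roas"]  # ROAS = revenue / spend
        print(f"Campaign {name}: D7 revenue = ${revenue:.0f}")
    # Campaign A: D7 revenue = $30
    # Campaign B: D7 revenue = $200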

CPM (Cost Per Mille), dCPM (Dynamic Cost Per Mille) and eCPM (Effective Cost Per Mille)

In programmatic media buying, where advertisers pay for impressions, the price is normally set in CPM (Cost Per Mille, where “Mille” is the Latin word for thousand). CPM is a common metric used in advertising to denote the cost of 1,000 impressions or views of an advertisement. It measures how much an advertiser pays for a thousand impressions of their ad.

For example, if an advertiser pays $500 for 100,000 impressions, the CPM would be calculated as follows:

CPM = ($500 / 100,000) × 1,000 = $5

So, in this case, the cost per thousand impressions (CPM) is $5.

CPM is commonly used in digital advertising, where advertisers pay for the number of times their ad is displayed to users, regardless of whether they click on it. It provides advertisers with a standardized way to compare the cost of reaching their target audience across different websites or advertising platforms.

In the world of programmatic, it’s also not uncommon to hear dCPM and eCPM. dCPM (Dynamic Cost Per Mille) and eCPM (Effective Cost Per Mille) are both metrics used to measure the cost of 1,000 impressions. However, they represent different aspects of the advertising performance.

dCPM (Dynamic Cost Per Mille)

dCPM is a bidding strategy used in programmatic advertising. It refers to the maximum amount an advertiser is willing to pay for 1,000 impressions, and this amount can change dynamically based on the likelihood of conversion or other specified goals.

The bidding is adjusted in real time based on the predicted value of each impression, aiming to maximize the ROAS.

eCPM (Effective Cost Per Mille)

eCPM is a metric that represents the estimated revenue for a publisher (app developer) per 1,000 impressions. It is calculated by dividing total earnings by the number of impressions and then multiplying by 1,000.

eCPM provides an effective way to compare the revenue generated from different advertising channels or platforms, regardless of the pricing model (e.g., CPC, CPM, or CPA).

In summary, dCPM is related to the bidding strategy in programmatic advertising, where the cost per mille can vary dynamically based on certain factors. On the other hand, eCPM is a metric used to measure the effective revenue for the publisher generated for 1,000 impressions, providing a standard comparison across different advertising channels or platforms.
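Both formulas reduce to a couple of one-liners. The CPM figures below reuse the $5 example above, while the publisher earnings are assumed for illustration:

    def cpm(cost: float, impressions: int) -> float:
        """Cost per 1,000 impressions, from the advertiser's side."""
        return cost / impressions * 1000

    def ecpm(earnings: float, impressions: int) -> float:
        """Effective revenue per 1,000 impressions, from the publisher's
        side; the pricing model (CPC/CPM/CPA) doesn't change the formula."""
        return earnings / impressions * 1000

    print(cpm(500, 100_000))  # 5.0 - the $5 CPM from the example above
    print(ecpm(320, 80_000))  # 4.0 - assumed publisher figures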

CTR and CTI (Click-Through Rate and Click to Install)

The click-through rate is a favorable marketing KPI. It signifies a stage in the funnel and proves to advertisers that the creative and targeting work, at least to some extent – the ad’s been clicked.

With that being said, in some cases, CTR is given greater weight than it should be. In-app ads work great when they reach their target audience, when they’re well made, and when they’re served at the most opportune time, but CTR is not as reliable as other metrics.

In-app ads, especially interstitial ads, may be mistakenly clicked more often than others. Simply put, since interstitial ads take up the whole screen, and the close button appears only a few seconds into the ad, some clicks are simply accidental and don’t signify users’ actual intent.

If this is an issue, it results in an inflated CTR, which can be confirmed by a decrease in CTI alongside it (it means people are clicking but have no intention to install, which means the targeting is off).

With more granular data available, KPI goals are geared towards performance. While CTR can surface issues relating to creatives and targeting, a metric such as IPM may be more indicative of performance.

IPM (Install Per Mille)

IPM may not have CTR’s reputation, but when it comes to performance-based in-app campaigns, it’s one of the best early indicators of the campaign’s targeting effectiveness.

IPM measures how many users, out of 1000 impressions, installed the app. The higher the IPM, the better. High IPM indicates that your campaign is reaching the right audience and that the creatives are working.

Unlike CTR, IPM is a better metric for estimating users’ intent, since an app install is far less likely to be accidental than an ad click. With that being said, it does not stand alone. It’s a great early indicator, but a high IPM should be looked at alongside retention rates and other post-install metrics, to ensure quality is not being negatively affected.
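IPM itself is a one-line calculation (the figures here are assumed for illustration); the caveat above is about how you read it, not how you compute it:

    def ipm(installs: int, impressions: int) -> float:
        """Installs per 1,000 impressions."""
        return installs / impressions * 1000

    # Assumed figures: 120 installs out of 40,000 impressions.
    print(ipm(120, 40_000))  # 3.0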

Post-install Metrics

Post-install metrics, as we’ve previously mentioned, can be divided into cost-related metrics and value-based metrics. Cost-related metrics, though they’re more down the funnel than pre-install metrics, still serve as early indicators and help estimate the quality of the campaign and the users’ potential.

Post-install metrics are usually measured and tracked through an MMP. They may also be measured and tracked internally, but most developers prefer to rely on 3rd parties rather than build the substantial infrastructure themselves.

Install to Target Action Rate

The install-to-target-action rate is the ratio between users who installed and those of them who completed a predefined valuable event. This event can range from completing a level in a game to making a purchase, depending on the relevant app’s main KPIs.

If your aim is to acquire engaged users for a gaming app, your event can be set as completing level 10. On the other hand, if your aim is to have users subscribe to your app, then that may be the predefined event, and so on.

This metric is a strong indicator of value in performance and can provide further insight into other metrics, such as IPM. Through this rate, marketers can verify that the users installing the app are users with the potential to return their investment in the future (an early indication of what their ROAS might be), assuming the predefined event is set in a way that signifies users’ future interactions with the app.

Without diving too deeply into the numbers, setting a target event should be done in a way that underlines the bottom-line goals for the app. For example, a day-1 deposit for free-to-play gaming apps is usually a good indicator of subsequent deposits and high ROAS, and as such can be set as an install-to-action goal and serve as an early indication of the effectiveness of the campaign.

Calculating New User Costs - CPI, CPA, CAC

The cost of a new user can be measured in different ways, depending on the definition of a new user. If you define a new user as a user who installed, then CPI will be your metric, but as the industry shifts towards down-funnel, performance-based campaigns, setting CPA KPIs, and thereby lowering the risk involved in running UA campaigns at scale, is becoming the standard.

Let’s go over the differences between these 3 metrics:
  • CPI (Cost Per Install) is simply calculated by taking the spend and dividing it by the number of installs.
  • CPA (Cost Per Action \ Acquisition) is calculated by dividing the spend by the number of actions.
  • CAC (Customer Acquisition Cost) is calculated by dividing, once again, the spend by the number of actions. Those actions may be identical to the actions defined for the CPA metric, or they may differ.
These metrics can be aggregated by creatives, campaigns, sources, vendors, and more depending on your purpose of comparison. If you want, for example, to compare the performance of a certain creative, you should measure it against other creatives, preferably from the same vendor and identical spend.
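In code, the three definitions differ only in which count goes in the denominator (the figures below are illustrative):

    def cost_per(spend: float, events: int) -> float:
        """Shared formula behind CPI, CPA, and CAC: spend / event count."""
        return spend / events

    spend = 5_000.0
    installs, actions, acquisitions = 2_500, 500, 400

    print(f"CPI: ${cost_per(spend, installs):.2f}")      # $2.00
    print(f"CPA: ${cost_per(spend, actions):.2f}")       # $10.00
    print(f"CAC: ${cost_per(spend, acquisitions):.2f}")  # $12.50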

The Difference Between CPA and CAC

CPA can be defined as any action in the app, from completing level 1 to registering, subscribing, making a purchase, and so on. The estimated cost for this action will vary accordingly. For example, an action defined as completing level 1 on a monetization-based game will probably cost significantly less than an action defined as a paid monthly subscription for a service, since it’s relatively less competitive.

CPA can serve marketers as the final goal or as a step along the way. If we continue along with the examples above, the gaming app can set its CPA by a level 1 completion but, in order to calculate its CAC, they’ll look at users who reached level 4 on D1, because it signifies the point from which users start using the app regularly.

For the subscription-based app, they may set both CPA and CAC as the same goal – subscribing to the app – or they may separate CPA, set as registration, from CAC, set as subscription, knowing that registration for the free version means users are likely to convert to a paid subscription.

K-Factor - Estimating Viral Growth

The term K Factor originated in the world of medicine, where it was used to describe the spread of a virus (i.e., how infectious it is). In the mobile marketing world, the context is much more positive, since virality is a positive quality for an app.

K factor, in our context, is used to calculate the ratio between paid users and the organic users who installed the app as a result of paid campaigns. These organic users may have installed from a known referral, such as a user’s invitation code, or from an unknown source, by simply opening the app store and downloading the app. There are various common ways of calculating it; depending on how conservative your approach to analysis is, you’ll decide whether to measure only organic users who can be directly associated with UA, or also ones that are probabilistically associated with it.

To make things simple, think about starting a UA campaign in a new country (where you have no existing users, or a negligible number of them). Your goal is 1,000 paid installs, and you start running your paid UA campaign while also measuring organic installs. By the end of the campaign, you’re left with 1,000 paid installs and 100 organic installs (1,100 overall). This means that your paid UA efforts yielded 100 organic installs, and that your K factor in that country is 1.1.
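The calculation from the example, using the total-to-paid convention described above:

    def k_factor(paid_installs: int, organic_installs: int) -> float:
        """Ratio of total installs to paid installs, per the example
        above (1,000 paid + 100 organic -> 1.1)."""
        return (paid_installs + organic_installs) / paid_installs

    print(k_factor(1_000, 100))  # 1.1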

The Importance of K Factor

Measuring K Factor helps you:
  • Gain a better understanding of the effect of your inorganic installs on organic installs (if it’s close to 1, there’s room for improvement).
  • Gain insights into how effective or ineffective your in-app sharing capabilities are. Are users utilizing them? How can it be improved?
  • Compare your virality between different countries – where it’s working, where it’s not, and what can be improved.
  • Get an overall accurate picture of your paid UA efforts. Organic users are usually the best-performing users, and if your paid UA increases the number of these users, it should be taken into account when estimating campaign and acquisition costs.
As with any data, there’s an abundance of metrics that can be used for measuring and tracking UA campaigns. Marketers should research which metrics are the most valuable for their campaigns and set different key metrics for the campaigns’ different ‘lifetime’ stages (early indicators for the start of a campaign, and value-based metrics for a more advanced campaign, for example).

The goal of a marketing manager should be to effectively choose the most informative KPIs for their use case and use them to deduce where their marketing dollars should go in a very short amount of time.

What’s Next?

For further reading, we recommend these related topics:
  • Cornerstone KPIs for mobile UA
  • Attribution manipulation – click spamming and click hijacking
  • Data activation in mobile UA
  • The basics of RTB
  • User acquisition and retargeting in iOS14

The Cornerstone KPIs for Mobile UA

Valuable KPIs for running mobile ad campaigns

KPI stands for Key Performance Indicator. KPIs have a measurable value that helps companies understand the effectiveness of their efforts in achieving their key business objectives. In mobile marketing, KPIs are metrics that are used to assess the performance of a marketing campaign.

In this entry, we will focus on mobile marketing KPIs and which indicators are the most valuable for both established apps as well as growing apps and why they are crucial for success.

The importance of KPIs

Setting the correct KPIs is essential to improving app performance and mobile ad campaigns. Examining retention, ROAS, or mid-funnel events (like game levels) can help you understand where further improvement can come from – whether there’s an isolated issue that warrants changes to the in-app user experience, or a needed change to the way the campaign is managed.

The most basic KPIs are the campaign indicators: impressions, clicks, installs, CTR, and cost. Then there are the in-app indicators: retention, monetization, active users, and events; and the relative and predicted KPIs: LTV and ROAS.

Each app uses different KPIs to measure the success of marketing campaigns, according to its goals. Different apps that rely on different business models may set different KPIs – an app that relies on ad revenue will most likely set retention rate as the most important KPI, while an app that relies on IAP might focus more on ARPU.

Retention Rate

One of the most commonly used KPIs is the retention rate. Retention rate is calculated as the ratio of users who opened the app at several points in time: install day (day 0), day 1, day 7, and day 30. This metric is used by all apps, regardless of their monetization model; it’s easy to calculate, and it provides useful data (such as general app performance and correlation with ROI).
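A minimal sketch of the calculation, assuming a per-user log of the days the app was opened (day 0 being install day):

    # Hypothetical activity log: user -> set of days the app was opened.
    activity = {
        "u1": {0, 1, 7, 30},
        "u2": {0, 1},
        "u3": {0},
    }

    def retention(day: int) -> float:
        """Share of the install cohort that opened the app on `day`."""
        cohort = len(activity)
        retained = sum(1 for days in activity.values() if day in days)
        return retained / cohort

    for d in (1, 7, 30):
        print(f"D{d} retention: {retention(d):.0%}")
    # D1 retention: 67%; D7 and D30: 33%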

Retention is an extremely useful KPI in the increasingly competitive environment of app development – it indicates users’ loyalty to the app (higher retention means users keep choosing to come back to the app instead of its competitors). With around 70% of users abandoning apps on the first day of use, retaining users is challenging and critical, and therefore retention should always be measured.

Retention rates are important but do not stand alone. There are some caveats to it.

The Limitations of Retention Rates

  • While it doesn’t take long to gain retention rates (D1 retention is also meaningful), this KPI itself does not stand on its own.
  • The retention rate does not account for the length of the session (which may indicate the quality of the user or the quality of the app).
  • The retention rate does not factor in the cost to acquire or retain a user. It can be used to derive the cost of a “retained” user, but retention alone is not enough to draw conclusions about costs.
  • Following the previous points – high retention rates don’t necessarily generate value for the business. Having a lot of users continuously using the app but not making any in-app purchases is nice to have, but not always sustainable as a business model.

Predictive KPIs - How to realize your acquisition potential

ARPU (Average Revenue Per User)

The average revenue per user (ARPU; its daily variant is ARPDAU, average revenue per daily active user) is another useful KPI for mobile marketing. ARPU can help predict the maximum amount that should be paid to acquire a new user, and raise a red flag when a campaign is predicted to be unprofitable.

ROAS

ROAS (Return on ad spend) is probably the most commonly used KPI for predicting profitability. ROAS is considered more useful than retention rate and ARPU since it incorporates the core inputs of profit: revenue generated, as a percentage of cost. Another advantage of ROAS is that it’s easy to calculate (unlike LTV).

Specifically, week 0 ROAS is as common a KPI as D7 retention, since it takes only one week, still captures data from the different days of the week (thus factoring in the effect different days have on the performance of an ad campaign), and provides predictability, i.e., actionable insights.

LTV (Life Time Value)

LTV is at the top of the KPI list and is known for being the most useful KPI for quickly assessing whether a campaign will turn a profit down the line. LTV encapsulates both user retention and monetization, and once it’s established – along with CPI or cost per paying user – predicting profitability is fast.

The reason it’s not as common and widely used as the #1 KPI might be related to some of its disadvantages:
  • It requires the app to be established and stable. Even then, it takes more time and effort to calculate and maintain (requiring dedicated analysts or data scientists) compared to other KPIs.
  • There are innumerable ways to calculate LTV and no guarantees as to which way is the most accurate, as with any prediction, it’s an ongoing process with plenty of trial and error.
  • Once you have an LTV model, you must maintain it – making sure it’s well trained but not overtrained, i.e., that your predictions reflect actual results from a realistic group of users, and not the skewed results of your most loyal and lucrative users.

Choosing your key KPIs

The move from D7 ROAS to LTV

When choosing which KPIs should determine your course of action and lead your UA campaigns forward, every marketer needs to consider the state of their app (an established, stabilized app; an established app that recently went through a significant change; or an app in growth that still requires a lot of ‘figuring out’) and choose accordingly.

If your app is still in its early stages, and you’re still changing and developing as you grow, focus on retention and ROAS. Connect them to meaningful events in the user journey and use them as potential indicators, not a foolproof source of truth for the future.

If your app is established and stable, i.e has an established user journey – make use of predictive KPIs, become a soothsayer, know what to expect. This is the time to start using LTV and gain insights into the value of your users in the long run.

ROAS, as previously mentioned, is a useful KPI. It’s not too hard to calculate, it’s predictive (to a degree), it incorporates both cost and revenue, and it can be pretty quick to provide conclusions. LTV is quite a bit more complex and that’s why some app developers may abstain from using it.

Why LTV is worth the effort:

Since mobile app marketing is on the rise and data is becoming more granular, companies need to use the tools at their disposal and choose KPIs that are as relevant as possible to the company’s bottom line.

LTV is not easy to determine (there’s no single correct way to do it) and will likely require adjustments as the app undergoes major changes, but when it’s set correctly, it can measure gross revenue and performance over time.

It’s easy to invest time in the KPIs that are right in front of you, like downloads, CTR, impressions, and so on, but if your users aren’t converting and aren’t spending money in-app (or buying a subscription, depending on the monetization model), these interactions have no long-term value. LTV provides an overview of your users and helps determine what realistic CPAs and CPIs look like.

Creative Testing – Multi-Armed Bandit vs. A/B Testing

Creative testing is the way marketers compare the performance of different creatives in a campaign in order to evaluate which creatives yield better results. We will cover two methods: A/B testing, which is a commonly used and widely known method, and the multi-armed bandit, which is less commonly known and used, usually due to its relative complexity.

These two methods offer very different approaches to creative testing, each with its own advantages and disadvantages. The aim here is to help marketers understand these differences and bring attention to some of the limitations of A/B testing and help them make a more informed decision.

A/B testing

An A/B test (also known as an A/B/n test or a Split Test) is designed to try out different creatives in a win/lose scenario, resulting in a “Champion” (i.e., a single winning creative). These tests are executed by running different creatives for a set number of users and a set amount of time. This initial stage of the campaign is referred to as the exploration stage. Then, the winning creative, i.e., the one yielding better results (be it clicks, conversions, or any other set goal), is the one chosen to run for the rest of the campaign. This second stage of the campaign is the exploit portion.

This type of test is rather simple to understand and set up. It’s useful for initial testing, and when there’s a need to choose a single champion. This may be useful to test out different creatives with starkly different styles, or when there are just a few creatives (2-3 options).

The Explore-Exploit Dilemma

Part of the reason that A/B testing is easy to understand and execute is that it offers a clear distinction between the exploration period and the exploitation period of the campaign. There’s a clear separation between the two stages.

The exploration period is the time in which all of the creatives are tested against one another on the same audience, and the exploitation period is when the winning creative runs exclusively. The name exploitation refers to taking advantage of the gathered data from the exploration period.

Other than its relative simplicity, A/B testing offers control during the exploration period since advertisers can set in advance the audience, as well as the running time, which means that they can limit the budget spent “unoptimized”.

The Caveats of A/B Testing

The simplicity of the test is inviting but there are some disadvantages that should be taken into consideration.
  • Limited Testing – since the aim is to keep the exploration as short as possible (so as not to waste budget on less effective creatives), the audience on which the creatives are tested is significantly smaller than the overall audience. Marketers will want to choose the smallest possible audience to run these different ads on before settling on a creative, which means their sample might not be truly representative of the wider audience.
  • Higher Overall Costs – even if marketers test their ads on the smallest possible audience, budget is still spent equally on creatives that don’t work as well as the winning creative. They may be 1% worse than the winning creative, or they may be 50% worse – in both cases, the spend on these creatives will be equal during the testing stage.
  • May Be Arbitrary – there are no set best practices or simple correct settings for an A/B test. It depends on the size of the overall audience, the budget, the creatives, and how much they differ. The results may not be as conclusive as you’d like them to be. You may get a very small margin separating the winning creative from the other tested creatives, which may be because the audience you tested on was too small, or because the creatives were equally good and there was no distinctive winner – a true “champion” creative.
While our focus here is on creative testing, A/B testing can be used in many different ways. When explaining their split-test tool, AppsFlyer mentions testing different media sources, ad placements, and more.

Adjust, on the other hand, offers running an in-app A/B test, as well as a marketing campaign test, and adds a few important best practices.

The Multi-Armed Bandit (MAB)

The name and concept of the multi-armed bandit are derived from slot machines. These machines were originally called one-armed bandits, since their mechanic is that gamblers pull the lever, AKA the arm, and, in return, the machine ‘robs’ them.

The multi-armed bandit offers an alternative and more intricate way to test creatives. The MAB is a group of algorithms, based on the idea of the one-armed bandit. These algorithms offer different solutions for the theoretical problem a gambler faces in the casino – “what is the least expensive and fastest way to test all of the slot machines at the casino, assuming they vary in performance”.

The concept is “testing as you go”, as opposed to running an A/B test with a clear start and endpoint. In MAB, the test is ongoing, operating continuously throughout the campaign. In mobile marketing that would mean that the tested ads run in parallel, while the weight (i.e. percentage of the budget) given to each ad is decided by its performance.

Unlike A/B testing, in the different multi-armed bandit algorithms, the explore and exploit stages are intertwined.

This solution uses an algorithm to constantly adjust the spend on each creative to match its performance. This means that the best-performing creative gets more budget, the worst-performing one still runs, just with a smaller budget, and anything in between is still used, based on its performance. There’s a range, and the budget is relative.

Since there’s no distinction between the exploration and exploitation periods in MAB (the two stages happen simultaneously), MAB produces faster results and is widely considered more efficient than A/B testing.

Though MAB models differ, generally speaking they start to “move” traffic towards the winning variations as soon as they detect the slightest difference. This means that, unlike an A/B test, in MAB there’s no waiting for the end of the experiment, which means that MAB algorithms not only work faster, they also reach results at a lower cost, as long as the creatives actually perform differently from one another.

A/B Testing vs. MAB

  • Trends in data – MAB results may change over time. One creative may become more popular than it initially was. It may be impacted by varying factors, but this will be reflected in the data, since the test runs continuously. A/B testing, on the other hand, only reflects the results of the limited time during which it ran.
  • Adding creatives – since different creatives get different percentages of the budget, it’s easy to add new creatives to the mix and test them alongside the champion. If they “shine”, they gradually get more and more volume.
  • Earning while learning – by combining the explore and exploit stages, MAB offers lower costs, since optimization starts while data is still being collected. The response is immediate and optimization kicks in faster.
  • Automation – bandits are a very effective way to automate creative-selection optimization with machine learning, especially when considering different user features, since correct A/B tests are much more complicated in that situation. The MAB algorithm can be activated at very granular levels and select the best creative for specific subsections of the targeted audience, while other subsections have different champions.

When Should You Use A/B Testing in Mobile Campaigns?

When comparing the two methods, it sounds like MAB is the way to go, and a way to save money and get quicker results. Nonetheless, there’s a reason for A/B testing’s popularity.
  • If your app is new, you’ve yet to settle on a creative line, and you want to test out a couple of very distinctive and different creatives, A/B testing is a way to get a conclusive yes/no answer (and if the margin is small, maybe you should run another test with different creatives).
  • If your creatives are limited, and not endless. If you have 2-3 variations and not 10, it’s a simple way to decide between the versions without wasting your budget on multiple variations.
  • The majority of acquisition channels don’t allow advertisers to conduct MAB-based optimizations, only A/B testing. This leaves marketers with a choice between A/B testing or no testing at all. In this scenario, A/B testing is better than nothing.

The Different Types of MAB Algorithms

Since it’s a complex topic, the aim here is to give a simplified version of the most relevant MAB models for marketing campaigns in general, and creative testing specifically.

Epsilon Greedy

Epsilon greedy, as the name indicates, is a MAB model that gives the largest part of the budget to the champion. In Epsilon Greedy, the person running the campaign decides on an epsilon (i.e., a percentage of the budget dedicated to the challengers), and the rest of the budget is allocated to the champion.

Let’s say the epsilon is set at 20%. This means that 80% of the campaign’s budget goes to running the champion, and the 20% is equally divided among the rest of the creatives. This means that epsilon greedy somewhat resembles an A/B test, but if performance changes during the campaign, a challenger may become the new champion.
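A minimal sketch of the split described above (20% epsilon, challengers sharing it equally):

    def epsilon_greedy_split(creatives, champion, epsilon=0.2):
        """Budget share per creative: champion gets 1 - epsilon,
        challengers split epsilon equally."""
        challengers = [c for c in creatives if c != champion]
        split = {champion: 1 - epsilon}
        for c in challengers:
            split[c] = epsilon / len(challengers)
        return split

    print(epsilon_greedy_split(["A", "B", "C"], champion="A"))
    # {'A': 0.8, 'B': 0.1, 'C': 0.1}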

Thompson Sampling

Thompson sampling, on the other hand, still utilizes the epsilon, but in its case, the challengers get budget in relation to their performance. Let’s say, again, that the epsilon is set at 20%. This means that 80% of the budget goes to the champion, while the 20% is divided among the challengers based on their relative performance (some creatives will get 8% of the budget, some might get just 1%, and as performance shifts, so will the budget).
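A sketch of the allocation described here. Note that this is the performance-weighted budget split the entry describes; textbook Thompson sampling instead samples from each creative’s posterior distribution, so treat this as an illustration of the idea rather than the canonical algorithm:

    def weighted_split(performance, champion, epsilon=0.2):
        """Champion gets 1 - epsilon; the epsilon is divided among
        challengers in proportion to their observed performance
        (e.g., IPM). Figures are illustrative."""
        challengers = {c: p for c, p in performance.items() if c != champion}
        total = sum(challengers.values())
        split = {champion: 1 - epsilon}
        for c, p in challengers.items():
            split[c] = epsilon * p / total
        return split

    perf = {"A": 5.0, "B": 2.0, "C": 0.5}  # e.g., IPM per creative
    print(weighted_split(perf, champion="A"))
    # {'A': 0.8, 'B': 0.16, 'C': 0.04}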

The Upper Confidence Bound (UCB)

UCB may be described as the opposite of Thompson Sampling. UCB favors the least-explored creative, to test whether its low performance is related to the limited budget it’s been getting. When testing creatives it’s the least likely to be used, but when testing, for example, new inventory pockets with higher bids, it might yield interesting results.

If there are bid floors you’ve been avoiding because they’re too high, they might be worth exploring, since these pockets of inventory might yield higher-LTV users.

Contextual Bandits

Contextual bandits, as the name implies, are MAB algorithms that group around context. In the case of creative testing, contextual bandits may be grouped around gender, location, or other contextual relations and set a different champion and challengers for each group. This means that instead of a general competition between all of the creatives and the audience, the audience is segmented, and each segment gets its own champion.

Attribution Manipulation – Click Spamming and Click Hijacking

When it comes to fraud in mobile marketing, we think it’s important to understand which vulnerabilities each fraud technique abuses, how it works, what its consequences are, and how it can be detected.

On the subject of attribution manipulation, we’ll focus on two types of fraud that both revolve around abusing the way attribution technically works (i.e., faking attributed installs by taking over organic or other paid installs).

Click Spamming in Mobile Marketing

How Does Click Spamming Work?

Click spamming is a technique that abuses the last-touch attribution model in order to “steal the credit” for an install. The most commonly used attribution model is ‘last touch’, in which the UA partner credited with the last click on an ad preceding the install is attributed with the install, assuming the attribution window is still open. In short, the last click on an ad before the install gets credited for the install.
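A simplified model of last-touch resolution (the 7-day window and the field names are assumptions for the example; real windows vary by MMP and app). It also shows why a spammed click fired just before an organic install “wins” the credit:

    from datetime import datetime, timedelta

    CLICK_WINDOW = timedelta(days=7)  # assumed window; varies in practice

    def attribute_install(install_time, clicks):
        """Return the partner behind the latest click that both precedes
        the install and falls inside the attribution window."""
        eligible = [c for c in clicks
                    if c["time"] <= install_time
                    and install_time - c["time"] <= CLICK_WINDOW]
        if not eligible:
            return "organic"
        return max(eligible, key=lambda c: c["time"])["partner"]

    install = datetime(2020, 6, 10, 12, 0)
    clicks = [
        {"partner": "network_a", "time": datetime(2020, 6, 9, 9, 0)},
        {"partner": "network_b", "time": datetime(2020, 6, 10, 11, 55)},
    ]
    print(attribute_install(install, clicks))  # network_b wins last touch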

Click spamming happens at the time of an ad impression. It can be an in-app banner ad or a pixel “ad” on a webpage. Once there’s an impression, the ones conducting the fraud trigger a series of clicks (20 or even 30 clicks), as if there were multiple ads being displayed to a single user and all of them were clicked. This process is called ad stacking.

In the next stage of the process, the app store opens. Since the ad wasn’t originally clicked by the user, this happens in the background without the user’s knowledge, and all of these clicks are registered for future attribution purposes, but there’s no install, yet.

Click spamming works on probability, and targets clicks at apps that are likely to be installed organically by large groups of users (apps that have a buzz around them, new services, apps that new devices tend to install, and more).

What happens eventually is that users install after the ad was ‘clicked’, and the spammer gets attributed with the install. It’s guesswork, but imagine the probability of running 20 to 30 different clicks (for popular apps) per single impression. You’re bound to ‘win’ some installs.

The Collateral Damage of Click Spamming

Click spamming hurts almost all sides involved in the ad serving process – the advertiser, legitimate UA partners (ad networks and DSPs), and the users.

The Advertiser:

If your app is the targeted app (i.e., you’re its advertiser), it means you end up paying for your organic installs. One of the most infamous cases in which a company claimed it had been paying for organic installs is Uber, in the case of Uber vs. Fetch.

Fetch was Uber’s ad agency, which means they hired and handled the ad networks that ran Uber’s paid ad campaigns. Uber claimed that Fetch hired ad networks that used click spamming, resulting in Uber paying both Fetch and the ad networks for installs that were attributed to the ad networks but in reality originated as organic installs.

Fetch, on the other hand, claimed they work tirelessly to minimize fraudulent activity and that Uber’s invoices went unpaid for months.

While there seems to be some additional tension between these two companies, click spamming is a well-known issue in the industry, one any UA and Growth manager should be aware of.

According to a report by AppsFlyer, $20.3B in ad spend was exposed to app install fraud in the first half of 2019, with 22.6% of non-organic app installs globally identified as fraudulent.

A Negative Effect on Store Ranking:

An app’s store ranking is decided by multiple factors that differ between the stores. One of the deciding factors is the ratio of store visits to installs; if the conversion rate drops over time (i.e., a lot of users open the store page but only a few download the app), the app’s ranking will decrease. If your app is subject to click spamming, this means that even your store ranking will be negatively affected.

The UA Partner (Ad Networks and DSPs):

The second party hurt by this technique is trusted UA partners. A partner that ran an honest, fraud-free campaign might be losing its view-through attribution window (an impression that preceded the install) to click spammers.

View-through attribution windows are significantly shorter, so that an impression is only credited when it can reasonably be considered to have led to an install. If a click occurs while a view attribution window is still open, the click “wins” the attribution (even if the impression occurred closer to the install). This means that even if the impression from the trusted ad network led to the install, the click (which occurred without the user’s knowledge or intention) will be attributed with the install.

The Users:

For some users, data plans are billed by usage (instead of a monthly package). On devices subject to click spamming, the increase in data usage is costly, since opening the app store for 20 clicks per ad consumes a fair amount of data. For people whose data is a limited resource, click spamming can lead to a lot of frustration, as it can’t be easily ascertained that it even occurred.

How to Recognize Click Spamming

There are a couple of KPIs that should be tracked and monitored to rule out click spamming.

In click spamming, the conversion rate (click to install) will be low, and in many cases extremely low (0.01%-0.05%), since there will be a whole lot of clicks but a relatively minuscule number of installs. In past cases where we helped our partnered advertisers detect click spamming, we saw examples like ~2,000,000 clicks and ~300 installs (0.015%), and the conversion rates were sometimes even lower.
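A crude monitoring rule along these lines (the threshold is an assumption based on the range above; tune it against your own campaign history):

    def looks_like_click_spam(clicks: int, installs: int,
                              cti_floor: float = 0.0005) -> bool:
        """Flag sources whose click-to-install rate is suspiciously low.
        0.05% is used as an illustrative floor, per the range above."""
        if clicks == 0:
            return False
        return installs / clicks < cti_floor

    # The example from the text: ~2,000,000 clicks, ~300 installs (0.015%).
    print(looks_like_click_spam(2_000_000, 300))  # True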

The CTIT (click-to-install time) will be longer. There are CTIT benchmarks to compare against, but our recommendation is to take them with a grain of salt. Benchmarks vary between apps, and the best thing to do is to compare against other campaigns for your own app, if you can.

And lastly, in click spamming, as soon as a new click-spamming ‘source’ is added to your buying, you’ll see a decline in your organic installs, because, at the bottom line, those are the installs that click spamming targets and hurts the most.

Click Spamming in a LAT (Limited Ad Tracking) Environment

A LAT environment makes fraudsters’ lives easier, since attribution tracking relies on fingerprinting instead of device IDs, rendering it probabilistic as opposed to deterministic. If click spamming is executed on a device with LAT enabled, anyone on the same IP address who organically installed the app will end up attributing the fraudster with an additional install.

The lack of a device ID in a LAT environment makes it harder to track the connection between a click and an install. With a lot of devices on the same IP address and an already popular app being tracked, the app might have simply been installed organically by a different device than the one with the open click attribution window.

Without click spamming, fingerprinting is estimated to be 98% accurate when the time between the click to the install is 10 min or less, but as time passes the likelihood that the click and install actually originated from the same device, decreases.

“As the time between click and conversion increases, fingerprint attribution becomes increasingly inaccurate. After three hours, accuracy drops by 85%. At the 24-hour mark, it drops to 50%; and beyond that, fingerprint attribution is more likely wrong than right.”

Grant Simmons, VP @ Kochava Foundry, “Your Attribution May Be More Wrong Than Right”.

Keeping in mind the inaccuracy of fingerprinting (which grows with CTIT) and adding click spamming on top, UA managers need to be especially careful: monitor your LAT sources, always treat inflated click volumes as suspect, and maintain at least a basic understanding of how those sources generate their clicks.

As an advertiser, a good precaution is to limit your fingerprinting attribution window to decrease the chances of click spamming in a LAT environment. Granted, this fraud is hard to detect, but if your UA partner runs both LAT and non-LAT in-app campaigns and can't provide you with device IDs in either case, that should raise your suspicion and lead to further questioning, as there are very few (if any) legitimate reasons not to use deterministic attribution where it's available.
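As a rough illustration of that precaution, the sketch below accepts a probabilistic (fingerprint) match only when the CTIT falls inside a deliberately short window, motivated by the accuracy decay quoted above. The 30-minute cap and the matching signals are assumptions, not any specific vendor's logic:

```python
# A sketch of capping probabilistic (fingerprint) attribution by CTIT.
# The window value is an assumption; tune it to your risk tolerance.

FINGERPRINT_WINDOW_MIN = 30   # e.g., only trust matches under 30 minutes

def accept_fingerprint_match(same_ip: bool, same_user_agent: bool,
                             ctit_min: float) -> bool:
    # Fingerprinting matches on coarse signals such as IP and user agent,
    # so a "match" is probabilistic to begin with.
    if not (same_ip and same_user_agent):
        return False
    # The longer the CTIT, the more likely the click and install came
    # from different devices behind the same IP; reject stale clicks.
    return ctit_min <= FINGERPRINT_WINDOW_MIN

print(accept_fingerprint_match(True, True, 8.0))    # True: short CTIT
print(accept_fingerprint_match(True, True, 240.0))  # False: too stale to trust
```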

Click Hijacking (AKA Click Injection) in Mobile Marketing

Click hijacking differs from click spamming significantly. It can only occur in-app, the hijacker has to have its code running natively on the device (either through an app or an SDK), and it's much more targeted in nature. While click spamming is about sending massive amounts of clicks for different apps hoping to catch a few installs, click hijacking is very specific: it hijacks installs that are already underway.

Once the infected user taps the app store button to install an app (whether they got there organically or through a paid click), the fraudster's code recognizes it, checks whether the fraudster has access to a campaign promoting the app that's about to be installed, and fires a fake click, resulting in the install being attributed to the fraudster.

The vulnerability exploited in click hijacking occurs much earlier than in click spamming and comes from the core: the app stores themselves. When a new app is submitted to an app store, it must pass a review process before it's approved and added to the store.

So far, some fraudsters have managed to slip past this barrier on both platforms. In an ideal world, more thorough testing would be done at this stage, and all apps containing malicious software would be detected and banned.

The Rube Goldberg Effect of Click Hijacking

One by one, several different aspects of the advertiser's UA efforts are affected. First, click hijacking hurts the app developer in a very clear and deliberate way: as with click spamming, organic installs are written off as paid installs, which means the advertiser is paying for something they already own. But there's more to it. It completely obstructs setting the app's marketing KPIs. The fraudster's results might look good on paper, but they don't represent actual performance, and they simultaneously hurt other partners' performance.

If other UA partners are targeting relevant users for your app but are not successfully converting them (since those installs are being hijacked), their performance reports are skewed as well. Their campaigns appear to underperform, their CAC (customer acquisition cost) is higher, and they'll try to optimize and improve while running on false, manipulated data.

This leaves advertisers confused. The fraudulent campaign will thrive while the non-fraudulent ones underperform (since their installs are being hijacked). If the underperforming campaigns are paused, the fraudulent campaign will see a decline as well (since its performance is directly reliant on the work of other campaigns), but even that might not clear the air and reveal the hijackers.

Whether or not it's done knowingly, the 'host' app (the one that enables the click hijacking) gains from this fraud. Since many installs appear to be generated by the ad placements inside it, and since hijackers usually trigger their malicious code on only a fraction of impressions (say, 1 in 10 or 1 in 15), the app can still be labeled by legitimate bidders as a prime ad placement for user acquisition, which means it earns higher CPM bids for its inventory.

How to Detect Click Hijacking?

In click hijacking, your CR will appear high (since these users are prequalified), while the CTIT will be low (since the hijacker fires its click right before the install occurs). If your attribution platform offers some form of user-journey tracking, you'll notice the fraudulent company constantly popping up between other UA partners' clicks and the resulting installs.
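The sketch below turns that signature into a rough screening check: an implausibly high CR, a pile-up of near-instant CTITs, and one partner repeatedly grabbing the last touch in journeys that other partners drove. The field names and thresholds are illustrative assumptions:

```python
from collections import Counter

def hijack_signals(clicks: int, installs: int, ctit_seconds: list[float],
                   journeys: list[list[str]]) -> list[str]:
    """journeys: ordered partner touchpoints per install, last touch wins."""
    flags = []
    cr = installs / clicks if clicks else 0.0
    if cr > 0.30:                                    # illustrative cutoff
        flags.append(f"implausibly high CR ({cr:.0%})")
    short = sum(1 for s in ctit_seconds if s < 30)
    if ctit_seconds and short / len(ctit_seconds) > 0.5:
        flags.append("majority of installs within seconds of the click")
    # Count who 'wins' journeys in which another partner drove the click.
    thieves = Counter(
        j[-1] for j in journeys if len(j) > 1 and j[-1] != j[-2]
    )
    for partner, n in thieves.items():
        if n > 0.5 * len(journeys):
            flags.append(f"{partner} keeps appearing just before the install")
    return flags

print(hijack_signals(
    clicks=1_000, installs=400,
    ctit_seconds=[5, 8, 12, 300],
    journeys=[["network_a", "sdk_x"], ["network_b", "sdk_x"], ["sdk_x"]],
))
```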

A famous case of a malicious SDK involves the Chinese company Mintegral, which allegedly disguised its SDK as a tool to help app developers and advertisers monetize their apps 'effectively' with ads. It was discovered that the SDK allegedly contained malicious code that worked to steal potential revenue from other ad networks.

Mintegral made its alleged fraudulent activity harder to detect: instead of hijacking the attribution of every click, it would only hijack roughly one out of ten, making the traffic look like ordinary ad network activity.

Another case, made famous back in 2018, named two Chinese companies: Cheetah Mobile and Kika Tech. In an article published by BuzzFeed, it was claimed that these companies used their SDKs across different wildly popular apps they owned in order to perform click hijacking.

Their activity was discovered by Kochava’s attribution platform.

Checking Your Ad Placements

If you have cause to suspect click spamming or hijacking from a new UA partner, take a closer look at your campaigns' ad placements. Both of these attribution manipulations tend to use 'hardcoded placements'.

If your data shows a few well-performing placements repeating over long periods of time, they might be hardcoded and fraudulent. In reality, new ad placements are created daily as new apps are released, so you should expect a healthy, dynamic rotation among the placements that are prominent for your in-app UA campaigns. In some click spamming cases, the ad placement will even point to an app that doesn't run ads at all, so double-check for that as well.
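One way to operationalize this check, sketched below with pandas, is to flag placements that sit among your top performers every single month. The column names are assumptions about your reporting export, not a standard schema:

```python
import pandas as pd

# Flag 'hardcoded' placement candidates: placements that rank in the
# top N by installs in every month of the report. Assumed columns:
# 'month', 'placement', 'installs'.

def static_top_placements(report: pd.DataFrame, top_n: int = 5) -> set[str]:
    monthly_top = (
        report.sort_values("installs", ascending=False)
              .groupby("month")["placement"]
              .apply(lambda s: set(s.head(top_n)))
    )
    if monthly_top.empty:
        return set()
    # Placements present in the top N in every month deserve scrutiny.
    return set.intersection(*monthly_top)
```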

The Ramifications of Raising Capital Under Falsely Calculated CAC

Monitoring your campaigns under falsely calculated CAC is a problem in and of itself, but the issue is much bigger when it involves investors: people outside your organization operating under false assumptions regarding the app's growth potential.

When your app's paid acquisition data includes many organic users falsely claimed as acquired ones, your costs seem lower than they actually are, and funds raised under those assumptions won't pay themselves back over time, because the product's real growth potential isn't what it seems to be.
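A toy example with hypothetical figures shows how large the distortion can be:

```python
# A toy illustration of how claimed organic installs distort CAC.
# All figures are hypothetical.

spend = 100_000              # monthly UA spend in dollars
paid_installs_real = 2_000   # installs genuinely driven by ads
organics_claimed = 3_000     # organic installs fraudulently claimed as paid

reported_cac = spend / (paid_installs_real + organics_claimed)  # $20
true_cac = spend / paid_installs_real                           # $50

print(f"Reported CAC: ${reported_cac:.0f}")  # looks fundable
print(f"True CAC:     ${true_cac:.0f}")      # what scaling will actually cost
```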

In practice, this means you've raised capital, your app grew, your team grew, and your acquisition investments grew as well, but in the end the expected results aren't delivered and the investment is deemed a failure.

In the worst-case scenario, this results in new hires being laid off and teams shrinking back down, losing not only the funding but also the trust of the investors, along with negative public coverage and more.

An app with real potential can still fall victim to mobile fraud: even though it had everything it needed to become a success, mishandling UA in the app's early stages can hurt it irreparably.

How to Avoid Click Spamming, Click Hijacking, and Other Fraud Techniques

  • Due Diligence – ask your UA partners the hard questions: how they target, how their SDK works, what their ad units look like, and get familiar with their process. In this day and age, transparency is a basic expectation, and DSPs should be able to make this information accessible to the advertiser. Not getting definite, clear answers should raise a flag.
  • Adjust Your KPIs – make sure your organization is setting the right KPIs for the UA campaign. Some organizations still focus on volume instead of quality (e.g., install volume vs. active and engaged users, depositors, etc.). By focusing solely on volume and not testing the quality of the users, you essentially invite fraudulent activity to your app and hurt its overall performance (lots of new users who open the app once and never return means lower retention rates).
  • Monitoring and Tracking – keeping track of your user base, shifts in the percentage of organic vs. paid users, shifts in overall app performance, etc., can serve as a great indicator of malicious activity. In both click spamming and click hijacking you'll see a decline in organic installs, though the hit to organic installs will usually be more pronounced with click spamming.

Choosing Your Partners

Since these are attribution-related manipulations, there are two key partners who can help you avoid, prevent, track, and recognize fraudulent activity.

Different attribution platforms offer different tools to track fraud and flag abnormalities in the data. Taking the time to choose the right attribution platform for your app can help with early detection of fraud.

Choose a trusted and reliable DSP. Our glossary entry on choosing a DSP covers the different types of DSPs and the mobile growth opportunities they offer. Adjust has also written extensively about choosing a DSP.

As we've mentioned before, as the industry moves further from traditional ad networks toward programmatic, ML-based campaigns, transparency is a basic expectation from a UA partner.