Analytics, Marketing

blog header image with postbacks for ad whales written as the title and signs that say where, what, why, who, when and how

Before we jump into the topic of postbacks for ad whales, let's first understand what postbacks are and why they are so important for any marketer at a mobile app company. Let's say you have a dating app called TrueMatch, and after some organic growth you have recently partnered with a few marketing partners – mostly ad-networks who specialize in bringing installs. Let's call one of them Tap4Buck. Tap4Buck places ads to promote TrueMatch on different websites and apps. Users click on them and get to your app's store landing page. Some of them decide to install your app, and a smaller percentage even converts to payers. Since Tap4Buck wants to give you the best results possible, they want to know which clicks ended up converting to installs and which ones converted to purchasing users. The problem is that the app store landing page breaks the flow of information, so Tap4Buck can't continue to track the user once they have installed the app. Postbacks solve this issue. If you are using an attribution provider (you should – it's a must-have these days), you can easily configure it to send postbacks to Tap4Buck and help them optimize your campaign for you.

What are ad whales and what are postbacks for ad whales

Now, let's imagine that TrueMatch makes 50% of its revenue from advertising. This means that sending postbacks only for users who made purchases tells Tap4Buck half the story. What about users who generate a lot of revenue from ad based monetization? Ad whales are users who generated at least $0.70 in ad revenue. This is the minimum amount of revenue a payer can generate ($1 purchase minus the 30% cut taken by Apple/Google), so the $0.70 threshold means that a conversion to ad whale yields the same amount of money as a conversion to payer. Postbacks for ad whales mean that your attribution provider sends Tap4Buck an event every time a user that came through Tap4Buck has generated at least $0.70 in ad revenue and converted into an ad whale. This typically happens with 2%-5% of users in games that are tuned towards ad based monetization, but obviously varies from one game to another.
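As a quick illustration, here is the threshold logic in Python. This is only a sketch: the $0.70 cutoff is the one defined above, but the function and data shapes are illustrative, not any platform's actual API:

```python
# Sketch: flagging "ad whales" from per-user ad revenue.
# The $0.70 threshold matches the definition above: a $1 IAP minus the 30% store cut.
AD_WHALE_THRESHOLD = 0.70

def is_ad_whale(ad_revenue_per_impression):
    """Return True once a user's cumulative ad revenue reaches the threshold."""
    return sum(ad_revenue_per_impression) >= AD_WHALE_THRESHOLD

# Example: a user who generated 40 impressions at ~$0.02 each ($0.80 total)
print(is_ad_whale([0.02] * 40))  # True
```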

Who should care about postbacks for ad whales?

Companies who have any type of paid marketing activity would benefit from sending postbacks in general. The ones that also have an ad revenue component amounting to at least 15% of their total revenue should be sending postbacks for ad whales. Ad whale postbacks also benefit the partners on both sides. For the marketing partner that sent the traffic to your app, better postbacks mean more effective campaigns and happier customers. For the monetization partners, better postbacks mean that the app will get more ad whales as a result of the optimization and therefore their revenue volume will increase.

When – 2017 is the year of change

If you have been following the industry trends you already know that ad revenue is becoming the dominant way to monetize apps. It's already as big as in-app purchases and is projected to grow faster over the next 4 years. In mobile games alone, App Annie projects that in-app advertising will generate over fifty billion dollars ($50B) in revenue for the companies placing these ads in their apps. eMarketer projects that total mobile ad spend worldwide will reach $195B. As ad based monetization becomes this important, companies are looking for tools to optimize it, and postbacks are a big part of how the mobile marketing space operates.

Where – not all geographic areas are created equally

Most of the media buying today is concentrated in a few countries where people are willing to spend money on in-app items. These countries are often referred to as Tier 1 countries and are also where most of the postbacks are being fired today. At the same time, postbacks for ad whales bring a new opportunity to the table. There are other countries with large populations where people can't afford to buy in-app items. These countries offer low rates for user acquisition due to lack of demand. Setting up postbacks for ad whales allows app publishers to find opportunities to acquire users in these countries with positive ROI. This means that as postbacks for ad whales become more popular throughout 2017, we will see a shift in the geographical areas where postbacks are fired.

Why track conversion to ad whales and report it as postbacks?

There are 3 main reasons to track ad whale conversion and report it as postbacks:

Business goals alignment – many apps with a big share of ad revenue today make up a game-progress goal such as "100 sessions completed" or "10 levels". These goals are defined as events, and companies track conversion to these goals and report postbacks to the ad-networks. However, these goals are not aligned with the business of the company. Conversion to payers and to ad whales is a far better goal and will bring better results in the long term.

ROAS is not enough – measuring and optimizing the return on ad spend is the best theoretical approach. However, in real-world situations it relies on predictive models that are often hard to implement. Media buyers often require a more day-to-day metric to optimize against. This is why most UA campaigns track conversion to payers as one of the leading KPIs. Similarly, in apps that monetize mainly with ads, the easiest goal for media buyers to optimize against is conversion to ad whales.

Postbacks allow manual as well as automatic optimizations – reporting conversion to ad whales as a postback to the traffic source gives them an optimization goal that is aligned with your business. In turn, it impacts which users you will be getting from this traffic source. In some channels, such as search and social media, there is a lot of algorithmic optimization taking place. These algorithms need a goal to optimize against, so having them optimize for ad whales is the best approach for an ad supported app. Similarly, other channels involve a manual optimization process of eliminating bad sub-sources such as sites or segments – these manual optimizations also require a goal, and reporting ad whale conversion as postbacks provides one.

How to set up ad whale conversion as postbacks

There are 3 components for setting up ad whale postbacks in your app:

#1 – Tracing back ad revenue per user – in order to detect ad whales and report them, you will need a way to measure ad revenue for each user separately. Your monetization partners typically report ad revenue per country and average eCPM, but not the ad revenue of specific users. The most accurate way to measure ad revenue today is SOOMLA TRACEBACK. It is the only platform that can identify the ad whales for you.

#2 – Connecting the data pipelines – your attribution platform is the one in charge of sending postbacks to your marketing partners. Once you have SOOMLA integrated in your app you can configure it to send the right postbacks to your attribution platform with just a few clicks.

#3 – Setting up postbacks in your attribution platform – this step is slightly different depending on the attribution partner. However, they all have a partner configuration screen where you can set up the ad-whale conversion from phase #2 as the trigger for the postback.
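To make the flow concrete, here is a hypothetical sketch of the trigger behind step #3: fire an "ad whale" event exactly once, when a user's cumulative ad revenue crosses $0.70. The event name, payload fields and `send` callback are all illustrative – each attribution platform defines its own postback format:

```python
# Hypothetical postback trigger. Fields and event names are illustrative only.
AD_WHALE_THRESHOLD = 0.70

def maybe_fire_ad_whale_postback(user, new_revenue, send):
    before = user["ad_revenue"]
    user["ad_revenue"] = before + new_revenue
    # Fire exactly once, on the crossing of the threshold.
    if before < AD_WHALE_THRESHOLD <= user["ad_revenue"]:
        send({"event": "ad_whale_conversion",
              "user_id": user["id"],
              "ad_revenue": round(user["ad_revenue"], 2)})

fired = []
user = {"id": "u1", "ad_revenue": 0.65}
maybe_fire_ad_whale_postback(user, 0.10, fired.append)  # crosses $0.70 -> fires
maybe_fire_ad_whale_postback(user, 0.10, fired.append)  # already a whale -> silent
print(len(fired))  # 1
```

In a real integration, `send` would be the call that forwards the event to your attribution platform, which in turn relays the postback to the ad-network.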

 

Analytics, Marketing

Reality can prove very different than the statistics that represent it

There is a simple idea at the core of most mobile marketing campaigns these days – if you spend $x on some marketing activity and receive $y in return, you want y to be greater than x. This is often referred to as ROAS or campaign ROI. We have trained mobile marketers to break down their activities into small units – ad groups, ad sets, ad creatives, audiences – and find the ones that show positive ROAS. Doubling down on the positive ROAS units while shutting down the negative ROAS units is the leading campaign optimization strategy today.

Here is the problem – it only works under certain conditions.

There is a famous saying popularized by Mark Twain – "There are lies, damned lies and statistics". It warns people about using statistics the wrong way. One such way is using statistics when small numbers are involved. Another way in which statistics can deceive is called multiplicity, or multiple comparisons. Let's see how both come into play when calculating returns.

Beware of the small numbers

Most companies base their ROAS calculations only on revenues from In-App Purchases. This is a result of 2 things:

  • Up until recently, ad based monetization and ad spend were mutually exclusive
  • Until SOOMLA TRACEBACK there was no way to attribute ad monetization

The problem with In-App Purchase revenue is that it's highly concentrated. Studies have shown that purchasers are less than 2% of users, and among those 2%, the top 10% generate half of the revenue. Let's say that you spent $5,000 to acquire 1,000 users and you are trying to figure out the return. Most likely you have 20 purchasers, including 2 whale users who generated $1,500 each (in line with the studies above). Now, suppose you had 2 ad-groups in that campaign and you are trying to figure out which one was better. Here are the options:

  • Group A had both whales
  • Group A had one whale and B had one whale
  • Group B had both whales

Since we are talking about 2 users here, the scenario that actually happened is essentially random. Even if one ad-group is better than the other, it is still very likely for the worse group to outperform the better one when only 2 users can flip the outcome completely. The danger here is that our UA teams would double down on the ad-group that yielded the 2 whales without understanding that it's not actually better than the other. If we look at sample sizes, n=1,000 is normally considered a good sample size. Had the monetization been less concentrated, a sample size of 1,000 would have been enough to make decisions. However, for the purpose of acquiring whales, the actual sample size here is n=2. We should aim for at least n=500 before we start making media buying decisions. The problem, of course, is that attracting 500 whales could be a very expensive test – more than $100,000 based on the numbers in the example above.
On the other hand, companies who monetize with ads enjoy the fact that more users participate in generating revenue, and can make decisions based on smaller sample sizes and smaller test budgets.
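A quick simulation makes the point. If the two ad-groups are exactly equally good and the 2 whales land at random, one group still "wins" decisively about half the time. The numbers here are illustrative, not real campaign data:

```python
# Simulation of the small-numbers trap: 2 whales randomly split between two
# equally good ad-groups. How often do both whales land in the same group,
# making it look like a clear "winner" purely by chance?
import random

random.seed(7)
trials = 10_000
one_sided = 0
for _ in range(trials):
    # Each whale independently lands in group A or B with equal probability.
    whales_in_a = sum(random.random() < 0.5 for _ in range(2))
    if whales_in_a in (0, 2):  # both whales in the same group
        one_sided += 1

print(round(one_sided / trials, 2))  # ~0.5: half the time one group "wins" big
```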

Multiplicity – the bias of multiple shots

Another bias we normally see in mobile marketing is multiplicity. The easiest way to explain this is with the game of basketball. Let's imagine you are shooting from the 3-point line and have a 50% chance to score. What happens if you try twice? The chance of scoring at least once becomes 75%. With 3 shots, it's 87.5%, and so forth. The more times you try, the better your chances to score. This is what happens when you try too hard to find positive ROAS in a campaign that has a lot of parameters. You compare ad-groups – that's 1 shot; you compare ad creatives – that's a 2nd shot; you compare audiences – that's a 3rd shot; and so forth. The more you slice and dice looking for a segment with positive ROAS, the more likely you are to find a false positive.
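The basketball math generalizes directly: with a false-positive chance of p per comparison, k independent comparisons give a 1 − (1 − p)^k chance of at least one false positive:

```python
# Multiple-comparisons math from the basketball analogy:
# P(at least one "hit" in k independent tries) = 1 - (1 - p)^k
def p_at_least_one(p, k):
    return 1 - (1 - p) ** k

print(p_at_least_one(0.5, 2))              # 0.75   (two 3-point shots)
print(p_at_least_one(0.5, 3))              # 0.875  (three shots)
print(round(p_at_least_one(0.05, 10), 2))  # 0.4: ten ROAS slices at 5% each
```

Even a modest 5% false-positive rate per slice gives a 40% chance of a spurious "winning" segment after ten slices.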

Analytics, App Monetization

Ad revenue concentration hero image with chart and text

We are happy to report some interesting data points we recently looked at. The goal was to understand how concentrated ad revenue really is. Everybody already knows that the 80/20 rule applies in IAP – at least 80% of the revenue is driven by the top 20% of purchasers. There is plenty of research showing how concentrated IAP revenue is. Ad revenue, however, is still a mystery for most publishers, and very few companies actually have the data on how concentrated it is. If you take the naive approach and assume all users contribute revenue based on the average eCPM, you might expect the ad revenue concentration chart to be flat. The reality, however, is very different.

Comparing Concentration in Ad Revenue vs IAP Revenue

In the image below you can see a comparison of the revenue concentration between ad based monetization and IAP based monetization. These charts are based on data from 28 days of activity in a Match-3 game where most of the monetization comes from interstitials. The revenue model behind the ad monetization is CPC in this case.

On the left side, the IAP revenue is highly concentrated and 80% of the revenue is generated by the top 20% of the users. The top user generated more than $300 in revenues for the app.

On the right side, we see that the ad revenue is also highly concentrated. The top 20% are contributing more than 50% of the revenue here and the top user generates $2.5 while there are users who only contribute a few cents.
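For readers who want to reproduce this kind of analysis on their own data, the top-20% share shown in the charts boils down to a few lines of code. The revenue numbers below are made up for illustration, not the game's actual data:

```python
# Share of total revenue contributed by the top `fraction` of users.
def top_share(revenues, fraction=0.2):
    ordered = sorted(revenues, reverse=True)
    top_n = max(1, int(len(ordered) * fraction))
    return sum(ordered[:top_n]) / sum(ordered)

# Illustrative per-user ad revenue for 10 users:
ad_revenue = [2.5, 1.1, 0.9, 0.6, 0.4, 0.3, 0.2, 0.1, 0.05, 0.05]
print(round(top_share(ad_revenue), 2))  # 0.58: top 2 users drive 58% of revenue
```

A perfectly flat distribution would return exactly 0.2; the further above 0.2 the result, the more concentrated the revenue.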

Comparison between concentration of ad revenue and IAP revenue

Ad Revenue Concentration with Reward Ads

One of the hot trends of 2015 and 2016 was the adoption of rewarded video ads by many game publishers. We wanted to look at the ad revenue concentration in rewarded video as well. The chart below does exactly that.

The data here is from a single day, so it is naturally more concentrated than data aggregated over an entire month. The game here is a mid-core action game, and the monetization is done with both rewarded videos and an offer wall.

The ad revenue concentration is much higher in this data set. The top 20% of the users are contributing 90% of the revenue and the top user is contributing more than $15.

Ad revenue concentration in rewarded video ads and offer walls

Who are the users contributing high amounts of ad revenue

Once you realize that ad revenue is concentrated almost as much as IAP revenue, your next question is likely to be: "who are these users?". At a high level, these are typically the users who download many apps, as indicated by a Comscore report highlighted in this article. But you can go a lot further than that. Using SOOMLA Traceback you can profile these "ad whales" and target them in marketing activities.

 

Analytics, App Monetization

Reward Abusers written in blue text next to a trophy and Heavy App Downloaders written in green text next to an app icon with two axes

As the market adopts TRACEBACK technology we are learning new things about how users interact with ads. This allows us to classify users into types. Let’s think about these two types of users who are highly relevant to rewarded video based monetization.

Reward Abusers – these are users who watch the videos to get the rewards but are not contributing any revenue, neither in IAP nor in ad revenue.

Heavy App Downloaders – these are users who download and try multiple apps each month. Typically, these are the users who end up generating the most amount of video ad revenue for your app.

How these segments impact your business

Let's say you are buying traffic from a new source. You probably ask yourself how many installs you received, but you should also ask the million dollar question – "what type of users am I getting?"

Why?

Consider 2 possible sources:

  • Incent Campaign – this campaign gives users an incentive in another app in return for downloading your app. By nature these users are after the rewards, so this source might be heavy in Reward Abusers.
  • FB Campaign – now consider a campaign targeting lookalikes of your existing Heavy App Downloaders. This campaign is likely to bring more Heavy App Downloaders. You can learn more about this specific technique here.

How can you segment your users

If you are already convinced that telling the Reward Abusers from the Heavy App Downloaders can impact your business, your next question should be how to spot them. Let's think about which features are similar and which ones differ between them.

  • App Engagement – both user types have high app engagement
  • Video Ad Engagement – Reward Abusers watch as many videos as Heavy App Downloaders
  • Post Impression Performance – this is the feature that sets them apart: Reward Abusers will only watch the videos, while Heavy App Downloaders will also click and install the apps presented to them

                               Reward Abusers    Heavy App Downloaders
App Engagement                 High              High
Video Ad Engagement            High              High
Post Impression Performance    Low               High

So understanding what the user does after they watch the video ad is the key here. Today, there are two solutions in the market:

  • Developing in-house – this requires your engineering team to figure out specific ways to track post impression events with each ad-network, and to keep updating the code every time there is an update to the ad-network SDKs.
  • SOOMLA TRACEBACK – our platform does all the work for you. It requires a simple integration, but once implemented you will be able to segment your users reliably, track ROAS and do many other mind blowing optimizations to your ad revenue. CLICK TO LEARN MORE
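If you do build tracking in-house, the table above reduces to a simple rule: engagement is high for both segments, and post impression performance is what separates them. A minimal sketch, with hypothetical field names and thresholds:

```python
# Hypothetical segmentation rule based on the comparison table above.
# Field names and the engagement threshold are illustrative.
def classify(user):
    engaged = user["videos_watched"] >= 10
    if not engaged:
        return "other"
    # Reward Abusers watch the videos but never act on the ads they see.
    if user["ad_clicks"] == 0 and user["ad_installs"] == 0:
        return "reward_abuser"
    return "heavy_app_downloader"

print(classify({"videos_watched": 25, "ad_clicks": 0, "ad_installs": 0}))  # reward_abuser
print(classify({"videos_watched": 30, "ad_clicks": 4, "ad_installs": 2}))  # heavy_app_downloader
```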

Analytics, App Monetization

Podcast – The role of analytics and data in ad based monetization

This is a recent interview I gave at Cranberry Radio. I'm talking about the following topics:

  • The role of unbiased measurement companies on the advertiser side and the publisher side
  • Insights that SOOMLA has seen by measuring ad revenue at an unprecedented level of granularity
  • Insights from our latest study – Rewarded Video Ads Retention Impact in Match 3 Games

You can listen to the podcast here.

Analytics, Marketing

4 top mobile a/b testing tools header image

This is a guest post by Natalia Yakavenka from SplitMetrics

Ask any mobile marketer what the best way to optimize conversion rates for your app page is, and you'll most likely get A/B testing as a response. While A/B testing is still most often associated with the web, the concept of A/B testing for mobile app pages is not new. The very first solutions growth hackers used were custom coded landing pages, but such an approach requires time and effort. App page conversion optimization only became popular when self-service platforms like SplitMetrics and Storemaven emerged. These platforms brought a completely new level of A/B testing for mobile pages, as they provided insights on top of showing the winning variation. Later on, the introduction of Google Play Experiments in 2015 brought A/B testing of app landing pages into the "must have" category for app marketers. Since then, plenty of new solutions have emerged, but we recommend sticking to the 4 most popular tools presented here.

Google play store allows experiments - a limited way to do mobile a/b tests

Google Play Experiments

When it comes to selecting the best A/B testing tool, the most common question is why go elsewhere when you have the free Google Play Experiments. Indeed, it allows mobile publishers to run free experiments on their app pages, but it comes with significant limitations. The most serious ones are that you can't test unpublished apps and you'll never find out exactly what worked, due to the lack of on-page analytics. Still, Google Play is the perfect solution for those who are not familiar with paid traffic and user acquisition, as it doesn't require driving traffic to the experiment from ad sources. The other three tools require sending traffic to their experiments and are usually for more advanced marketers.

Distinctive features: absolutely free + requires no additional traffic

Split metrics logo - this mobile a/b testing tool offers many advanced ASO features

SplitMetrics

Founded in 2014, SplitMetrics was among the first to provide every marketer with an easy-to-use, unlimited and flexible A/B testing tool. In addition to regular icon/screenshot testing, it offers pre-launch experiments for unpublished apps as well as Search, Category and App Store Search Ads testing. Unlike the Google Play service, it offers a multi-armed bandit approach which helps reach significant results fast. But it's not as ideal as it seems – you have to pay for it. Though the price is very reasonable and there is a 30-day trial, you will need to pay a monthly subscription fee.

Distinctive feature:  pre-launch experiments and App Store Search Ads testing

StoreMaven is one of the pioneers of the mobile a/b testing and ASO space

Storemaven

StoreMaven provides easy-to-use A/B testing for the entire app store landing page experience. One of their advantages is offering benchmarks and best practices based on their broad client base in each of the app store categories. On top of that, StoreMaven clients benefit from their money saving algorithm, StoreIQ, which helps conclude tests with fewer samples and lower costs by leveraging historical data to quickly determine the winning creatives. StoreMaven provides a fully dedicated Account Manager to make sure clients make the most of their testing budgets. This tool is also paid, offered as a monthly subscription.

Distinctive feature: Professional Services

One of the features offered by the tune platform is A/B testing to improve your ASO

TUNE's A/B Testing

Tune offers many services for app marketers. They are mostly known for their attribution service – measuring paid app installs. However, they also offer an A/B testing and optimization tool for the app landing page. Launched in spring 2016, it already provides solid functionality, offering the basic functions of testing different types of assets and showing all measurements and stats. While Tune's offering is more complete compared to the other testing tools, its biggest limitation is that it doesn't work with other attribution providers. The tool is also limited with regard to non-US regions and only supports a small list of regions. In terms of pricing, Tune's A/B testing tool is not available as a standalone product, so customers have to buy it as part of a suite of services.

Distinctive feature: works very well with other Tune capabilities

A/B testing can be easy with the right tools and is recommended for any app marketer as part of a data-centric growth strategy. Feel free to also try our quiz — test yourself to see how data-driven your game is.

Analytics, App Monetization

Header image showing how complex it is to a/b test your ad based app monetization

A/B testing has been an integral part of the marketer's toolbox for a good reason – it takes a great deal of the guesswork out of marketing. In online and mobile companies it has also become a popular tool for product managers: every time a new version is released, why not A/B test it against the existing version and make sure nothing got broken? In mobile app monetization, however, this tool is not available.

Why ad based app monetization is so hard to A/B test

The core requirement for A/B testing is being able to split your users into two groups, give each group a different experience and measure the performance of each one so you can compare them later. There are a number of tools that can facilitate the split for you, including Google Staged Rollout. If you are measuring IAP monetization, it's easy enough to associate purchases with the users who made them and then sum the revenue in group A and group B. In ad monetization, however, it's nearly impossible to associate ad revenue with individual users – the ad partners mostly don't report revenue at this level of granularity.
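For the split itself, a common stateless approach is to hash the user id into a bucket, so assignment is deterministic and roughly 50/50 without storing any state. A sketch, with an illustrative experiment name:

```python
# Deterministic A/B assignment: hash (experiment, user_id) into a bucket.
# The same user always lands in the same group for a given experiment.
import hashlib

def assign_group(user_id, experiment="ad_layout_v2"):
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

groups = [assign_group(f"user{i}") for i in range(1000)]
print(groups.count("A"))  # roughly 500
```

Salting the hash with the experiment name keeps the split independent across experiments, so the same users don't always land together.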

Method 1 – interval testing

One alternative that companies have been using is interval testing. In this method, the app publisher has one version of the app already published and rolls out a version with the new feature to all devices. To make sure all users receive the new version, publishers will normally use a force update mechanism that gives the user no choice. The impact of the new feature is measured by comparing results over two different time intervals. For example, week 1 might have contained version 1 and week 2 version 2, so a publisher can compare the two versions by comparing results across the two date ranges.

Pros

  • Very simple to implement – no engineering effort

Cons

  • Highly inaccurate and subject to seasonality
  • The force update method has a negative impact on retention

Method 2 – using placements or different app keys

This is a pretty clever workaround for the problem. Most ad providers have a concept of placements. In some cases they are called zones or areas, but all three serve the same purpose – they let you identify different areas in your app where ads are shown, for reporting and optimization purposes. The way to use this for A/B testing is to create a zone A and a zone B, then report zone B for users that received the new feature while reporting zone A for the control group. If you are already using the zones feature for its original purpose, you might already have zones 1, 2, 3, 4 and 5, so you would create 1a, 1b, 2a, 2b, and so on.

Of course, if you are using multiple ad-networks you would need to repeat this set up for every ad-network and after the test period aggregate the results back to conclude your A/B test.

A variation of this method is to create a new app in your ad-network configuration screen. This means you will have 2 app keys and can implement one app key in group A and the other app key in group B.
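Either variant boils down to a mapping from (placement, test group) to the id you report to the ad-network. A minimal sketch with made-up zone names:

```python
# Sketch of Method 2: choose the placement/zone id by test group, so the
# ad-network's own reporting splits the revenue by group. Names are illustrative.
ZONES = {
    ("main_menu", "A"): "main_menu_a",
    ("main_menu", "B"): "main_menu_b",
    ("level_end", "A"): "level_end_a",
    ("level_end", "B"): "level_end_b",
}

def placement_for(base_zone, group):
    return ZONES[(base_zone, group)]

print(placement_for("level_end", "B"))  # level_end_b
```

The downside the text describes follows directly: this mapping has to be replicated in every ad-network's dashboard, and the per-zone reports merged back by hand after the test.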

Pros

  • More accurate compared to other methods

Cons

  • Implementing even a single test takes a lot of effort and requires engineering work
  • It will be hard to foster a culture of testing and being data driven

Method 3 – counting Impressions

This method requires some engineering effort to set up – every time an impression is served, the publisher reports an event to their own servers. In addition, the publisher sets up a daily routine that queries the reporting API of each ad-network and extracts the eCPM per country. This information is then merged into the publisher's database, so that for every user the impression count for each ad-network is multiplied by the daily average eCPM of that ad-network in that country. The result is a (highly inaccurate estimate of the) ad revenue of that user on that day. Once you have this system in place, you can implement A/B tests, split users into testing groups and then get the average revenue per user in each group.
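In code, the estimation step of this method looks roughly like this. The network names and eCPM values are made up; note that the same average eCPM is applied to every impression, which is exactly where the inaccuracy comes from:

```python
# Method 3 sketch: estimated revenue per user =
#   impressions x daily average eCPM / 1000, per ad-network and country.
# Values are illustrative; eCPM is dollars per 1000 impressions.
ecpm = {("networkX", "US"): 8.0, ("networkX", "BR"): 1.5}

def estimated_revenue(impression_counts, country):
    return sum(count * ecpm[(network, country)] / 1000
               for network, count in impression_counts.items())

# A US user who saw 40 impressions from networkX today:
print(estimated_revenue({"networkX": 40}, "US"))  # 0.32
```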

Pros

  • After the initial set up there is no engineering effort per test

Cons

  • Setting this system up is complex and requires a big engineering effort
  • Highly inaccurate – it uses average eCPM while eCPM variance is very high
  • Can lead to wrong decisions

Method 4 – leveraging true eCPM

This method leverages multiple data sources to triangulate the eCPM of every single impression. It requires significant engineering effort or a 3rd party tool like SOOMLA TRACEBACK. Once the data integration with the company database is completed, publishers can implement A/B tests and get the results directly in their own BI, or view them through the dashboard of the 3rd party tool. Implementing A/B tests becomes easy, and a testing and optimization culture can be established.

Pros

  • The most accurate method
  • Low effort for testing allows for establishing a testing culture
  • Improvement in revenue can be in millions of dollars

Cons

  • The 3rd party tool can be expensive but there is usually very quick ROI

 

Analytics, App Monetization, Resource

ltv model

In previous blog posts I shared 6 different LTV calculators and received a lot of feedback about the LTV models. It turns out game publishers found them super useful for calculating the LTV of their games. It was great to hear the positive feedback, which also led to a lot of conversations about how people calculate their LTV. Here are some of the learnings I can share.

A specific LTV model is always better than a generic one

Our generic LTV calculators can't be nearly as accurate as the ones you can build in-house. If you have the money to hire a data scientist, or at least contract one to build a formula for you after you have gathered some data, you will end up with a more accurate model. The reason is simple: in predictive modeling, the more signals you have, the more accurate the model will be. All our calculators use retention and ARPDAU because they need to be widely applicable. However, there are many more signals you can feed a specific model: tutorial completion, level progress, soft currency engagement, challenges completed, and so on. Factoring in such signals would give you a better prediction model. Our generic calculators' main purpose is to get you started, give you a framework to think about LTV prediction and help you do some basic modeling if you are on a budget.

Simplified spreadsheet modeling

Our original spreadsheet model took in 31 data points. However, after talking with readers I learned that most of you only track 4 retention data points and 1 ARPDAU point. This is why I created a version that is simpler on the input side. Another piece of feedback I received is that you want more outputs: Day 60, Day 90, Day 180 and Day 365 LTV. Here is the new calculator based on all that feedback.

Inputs:

  • Day1 retention
  • Day7 retention
  • Day14 retention
  • Day30 retention
  • ARPDAU

Outputs:

  • Day60 LTV
  • Day90 LTV
  • Day180 LTV
  • Day365 LTV

Method:

This spreadsheet is the same one from the retention modeling we presented in this post but with a few tweaks.
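For readers who prefer code to spreadsheets, here is the standard retention-curve approach the spreadsheet is based on: fit a power curve r(t) = a·t^(−b) to the four retention points, then LTV(D) ≈ ARPDAU × (1 + Σ r(t) for t = 1..D). The spreadsheet's exact tweaks differ, and the retention numbers below are illustrative:

```python
# Retention-curve LTV sketch: fit r(t) = a * t^-b in log-log space,
# then sum the curve out to the target day and multiply by ARPDAU.
import math

def fit_power(points):
    # Least-squares fit of log r = log a - b * log t
    xs = [math.log(t) for t, _ in points]
    ys = [math.log(r) for _, r in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return math.exp(my - slope * mx), -slope  # a, b

def ltv(retention_points, arpdau, day):
    a, b = fit_power(retention_points)
    # The "1 +" accounts for day 0, when 100% of the cohort is active.
    return arpdau * (1 + sum(a * t ** -b for t in range(1, day + 1)))

# Illustrative inputs: D1/D7/D14/D30 retention and a $0.05 ARPDAU.
points = [(1, 0.40), (7, 0.20), (14, 0.15), (30, 0.10)]
for d in (60, 90, 180, 365):
    print(d, round(ltv(points, 0.05, d), 2))
```

A power curve is only one common choice of retention model; a data scientist with more signals (per the section above) would fit something richer.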

The actual spreadsheet

 

If you want to measure the ads LTV in addition to IAP LTV you should check out SOOMLA Traceback – Ad LTV as a Service.

Learn More

Analytics, Marketing

Kongregate's recent blog post suggests that you can double your traffic by tracing your ad revenue

I recently came across a fantastic post by Jeff Gurian. For those of you who don't know Jeff, he is the Director of Marketing at Kongregate. In his post he brings up a super important point – you can double your traffic by tracing your Ad LTV, or "counting the ads" in the language of the article.

Doubling your traffic only takes a 25% increase in LTV

According to Kongregate’s experience with user acquisition, Jeff explains, the correlation between how much traffic you can get and the bids you place is not linear but rather a power function. “There is always a tipping point where your traffic will increase exponentially relative to the increase in your bid.” says Jeff.

The chart in the post does a good job in explaining this point:

chart illustrating the power curve of the impression volume you can get at different bid levels

Image from original article at Kongregate developer blog

In this example, acquiring traffic with bids of $12.5 as opposed to $10 will get you twice the amount of traffic. In other words, a bid increase of 25% translates to a volume increase of 100%.
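If volume behaves as a power function of bid, volume ∝ bid^k, then this example pins down the exponent: doubling volume from a 25% bid increase implies k = ln 2 / ln 1.25 ≈ 3.1:

```python
# Back out the power-curve exponent implied by the example above:
# a 25% bid increase doubling volume means 1.25^k = 2.
import math

k = math.log(2) / math.log(1.25)
print(round(k, 1))  # 3.1

# Predicted volume multiplier for a given bid increase, under that exponent:
def volume_multiplier(bid_increase_pct, k=k):
    return (1 + bid_increase_pct / 100) ** k

print(round(volume_multiplier(25), 2))  # 2.0
```

Kongregate's curve is empirical, of course; the constant-exponent power law here is just a simplification of the chart's shape.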

Tracing Ad LTV allows more room in your CPI bids

Not all games have ads, but the ones that have added in-game advertising are seeing between 10% and 80% of their revenue coming from ads. 25% is a typical scenario in many games and is also close to the ratio reported by public companies such as Glu and Zynga. The example given in the article (see image below) shows that tracing Ad LTV can modify your ARPU/LTV analysis by 25%-30%. As we know, a higher LTV means we can afford to pay a higher CPI, which leads to twice as much traffic per the explanation above.

Illustration of LTV and ARPU calculations with and without tracing-back the ad revenue

Image from original article at Kongregate developer blog

Let SOOMLA do the work and get you the accurate Ad LTV

Many companies skip Ad LTV since the process for calculating it is often complicated, time consuming and in many cases not accurate enough. The problem is that none of the above matters if you are miscounting your Ad LTV: counting impressions can lead to significant errors in LTV calculations, which means your ROI analysis can be off and end up losing money for the company.

Fortunately, SOOMLA has developed a solution that automates the Ad LTV calculation with much greater accuracy, so now you can enjoy the benefits of Traceback and double your traffic without worrying about accuracy or extra development effort.

To save valuable resources and ensure you are getting the Ad LTV correct for every cohort you need a specialized system like SOOMLA TRACEBACK. The platform traces the ad revenue and sends it to your attribution partner or in-house BI.

Learn More

 

 

