Tech Resources


Top 10 Parse Alternatives

Contrary to popular belief among mobile game developers, the recent Parse announcement that it’s shutting its services down is not the end of the world.

Wait, what?

Yes, you heard me. A few days ago, Parse announced that it is retiring, sending out ripples of disbelief and discontent across the development world. No need to panic, though, keep reading.

First of all, Parse will have a year-long wind-down period – the final shutdown is scheduled for January 28, 2017, so you have plenty of time. Second of all, the company released a database migration tool (you can find it here), as well as an open-source Parse Server, which lets you run most of the Parse API from your own Node.js server.
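
To give a feel for how little glue is involved, here is a minimal sketch of mounting the open-source Parse Server inside an Express app, roughly following the parse-server README; the app ID, master key and database URI are placeholders for your own values.

var express = require('express');
var ParseServer = require('parse-server').ParseServer;

// placeholder credentials -- reuse the appId/masterKey from your existing Parse app
var api = new ParseServer({
  databaseURI: 'mongodb://localhost:27017/mygame', // the MongoDB you migrated your data into
  appId: 'YOUR_APP_ID',
  masterKey: 'YOUR_MASTER_KEY',
  serverURL: 'http://localhost:1337/parse'
});

var app = express();
app.use('/parse', api); // the Parse API is now served under /parse
app.listen(1337, function () {
  console.log('parse-server running on port 1337');
});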

Third of all, we’ve compiled a list of the best Parse alternatives for your mobile game.

Why should you care?

Parse is a mobile backend as a service (MBaaS). The model has grown to become an essential part of (almost) any game, even though it is a fairly new product category – one that’s been around for roughly five years. Services vary to some degree from company to company, but the basics are the same: every MBaaS offers a cloud storage solution, push notifications, file sharing and social integrations (Facebook, Twitter), as well as messaging and communication options. In today’s world of mobile and (quite often) social gaming, you can see why these features are essential to a mobile game’s success. An MBaaS removes the burden of building in-app purchase ownership tracking, player progression storage or in-game communications, to name a few, and allows the developer to focus on more pressing matters like art, game design, innovation and monetization.

This is why we can’t have nice things

Parse was an important figure in the chain – developers loved it for its wealth of features, good documentation and quality customer support. And after Facebook acquired it back in 2013 for $85 million, game developers were certain the company had a bright future ahead, and flocked to use its service.

Now, panic and fear have crept into the hearts and souls of mobile developers everywhere, as they raise their hands in despair and wonder why bad things always happen to good people </drama>.

But seriously, don’t worry. While the Parse announcement spawned a lot of lists with alternatives, those mostly revolve around general apps, with little to no focus on gaming. And with gaming being a specific industry in its own right, we feel a specific list is needed. We’ve got you covered. Below you will find the top 10 Parse alternatives for your game backend (listed in no particular order).

GameSparks

GameSparks, which launched in 2013 and now has over 72 million players on its platform, is a solid mobile backend as a service option, and one of the more popular ones. It is flexible and has a good feature set: analytics, a management dashboard, leaderboards, and real-time and turn-based multiplayer. Its MAU-based (Monthly Active Users) pricing can be confusing, leading people to think it’s too expensive when, in fact, it offers quite a competitive price. GameSparks isn’t a prescriptive service: it provides a highly flexible, configurable and extensible platform that allows developers to build and manage their own projects.

PlayFab

PlayFab launched in September 2014, though behind the veil they’d been in business for 3 years as the in-house backend for Uber Entertainment. Some will say it is the most complete backend platform, especially after it partnered with Photon, the multiplayer cloud service. With 20M players on their system and a top game holding 1 million DAU (confirmed with their team), PlayFab is no stranger to scale. Features include player accounts, virtual goods management, in-game messaging, and game data storage. Another unique PlayFab aspect is their recently launched marketplace, which makes it easy to integrate with key 3rd-party services beyond Photon, such as attribution tracking, advanced analytics, community tools, and more.

Heroic Labs

The key selling point of Heroic Labs is an API that allows game developers to easily integrate multiplayer and social elements without needing a server backend. It focuses on, and optimizes for, massive, high-volume games. Heroic Labs also has a code sample with SOOMLA in our knowledge base.

Gamedonia

Gamedonia is another complete backend solution for mobile games. The cloud platform for game developers does not require a server and offers many social and real-time elements, such as PvP (player versus player) modules, in-game chat and social sharing. Gamedonia was founded in 2012 and, besides offering mobile support, also works in the browser.

Kii

Kii is another developer sweetheart and a Unity partner, which makes its community support quite strong. Its key selling point is a burst limit of 150 API calls per second, which matters for games with spiky traffic. On the other hand, it does not allow anonymous users. Other features include server extensions, push notifications, leaderboards and achievements. It supports iOS, Android and Windows 8.

Kinvey

Kinvey is one of the pioneers in the MBaaS game, which by default makes it a strong contender for the best service out there. Compared to Parse, I’d say the two are quite similar in features: it offers cloud storage and push notifications. There’s also an easy way to integrate Facebook Open Graph for all those apps without websites. However, like Parse, it’s a general-purpose MBaaS for all mobile apps, not just games.

brainCloud

brainCloud might make your brain hurt with all the features it offers. It calls itself a “backend in a box”: a ready-made, cloud-based backend designed for game developers, allowing them to jumpstart their game creation with various pre-built features. These include Cloud Data (user and global statistics, shared data and custom files) and Multiplayer (with support for turn-by-turn and one-way offline, clash-style multiplayer), as well as Achievements, Leaderboards and Monetization features.

Gamua Flox

Flox is a scalable and lightweight cloud backend for mobile games built by Gamua. It runs on all mobile devices supported by Adobe AIR, and also allows offline play. Players can be authenticated through Google+, Facebook, email or the iOS Game Center API. It comes with rich documentation and responsive customer support. If you’re developing with AIR, or specifically the Starling framework, this is the backend for you.

App42

App42 is another popular solution. It has many features, including all the usual ones like leaderboards, cloud storage and social sharing. It used to be cheaper than Parse (now it definitely is), while offering the same burst limit. A great solution for any mobile game developer.

Photon

Photon is a cross-platform multiplayer game backend – a service tailored especially for game developers. It allows you to easily add multiplayer to your games and run them in the Global Photon Cloud. You can also host your own Photon servers, if that kind of hybrid is your thing. It is a good choice for game developers of all sizes, from indies to AAA studios.

 


Basic pricing plans

Company       Free Tier?   Minimum Price
GameSparks    Yes          $0.02/player (MAU – applies once a game reaches 10,000 users)
PlayFab       Yes          Free (Support and Enterprise tiers are paid)
Heroic Labs   Yes          $69/month or $0.02 per MAU
Gamedonia     Yes          €89/month
Kii           Yes          $1,200/month
Kinvey        No           $2,000/month
brainCloud    Yes          $30/month
Gamua Flox    No           $29/month
App42         Yes          $99/month
Photon        Yes          $95/month

Being a data company is never easy, especially once you reach larger scale. Choosing an appropriate data store is one of the most important engineering and financial decisions you’ll have to make. We decided to go with MongoDB to store parts of our aggregated data because we love MongoDB’s aggregation framework and the flexibility of its data modeling. We find it to be a good solution for dynamic, fast-growing companies like us. When our Mongo deployment started to get “fat,” so did our problems with it. This is exactly where we decided we needed a change – or did we? MongoDB 3.0 set us straight.

MongoDB 3.0 has been available for download since March 2015, and since its release it has seen several updates with performance improvements and bug fixes. Here is a selection of topics from what’s new in MongoDB 3.0, and why you should consider upgrading.

What’s new in MongoDB 3.0?

Major Changes

Pluggable Storage Engine

MongoDB 3.0 introduces a pluggable storage engine API that allows third parties to develop storage engines for MongoDB. The vision: “One data model, one API, one set of operational concerns. Under the hood – many options for every use case under the sun.” It is even possible to use different storage engines for different members of a single deployment’s replica set, making it easy to evaluate a migration to a new engine. Mixing storage engines within the same replica set allows you, for example, to direct operational data (which requires low latency and high throughput) to replica set members running an in-memory storage engine, while exposing the same data to analytical processes in a Hadoop cluster through a member configured with an HDFS storage engine, which executes interactive or batch operations rather than real-time queries.
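
For example, a quick way to see which engine a given replica set member is running is the mongo shell’s serverStatus() output (a minimal sketch):

// run against each member you are evaluating
db.serverStatus().storageEngine
// -> { "name" : "wiredTiger", ... }  or  { "name" : "mmapv1", ... }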

So the platform is now open for new players to come in and integrate / implement new specialized storage engines to address a variety of requirements from different architectures and/or applications.

Facebook’s Parse, for example, reported in April this year that they are now running the RocksDB storage engine on MongoDB 3.0 in production. Another already-supported 3rd-party engine is Percona’s TokuMX. Other storage engines in development include an HDFS storage engine and a FusionIO engine that bypasses the filesystem.

We are very curious to see what kinds of new engines will rise in the future.

WiredTiger

MongoDB 3.0 introduces support for the new WiredTiger storage engine out of the box, joining the classic MMAPv1 storage engine that was available in previous versions.

The new WiredTiger engine supports all previously available MongoDB features and introduces new ones like document-level locking and data and index compression. WiredTiger performs more work per CPU core than alternative engines. To minimize on-disk overhead and I/O, WiredTiger uses compact file formats in addition to optional compression.

According to MongoDB, for write-intensive applications the new engine gives users up to a 10x increase in write performance, with an 80 percent reduction in storage utilization, helping to lower storage costs, achieve greater hardware utilization, improve performance predictability and minimize query latency. Migrating to the WiredTiger storage engine, MongoDB says, will deliver the most noticeable performance gains on highly write-intensive applications, such as the analytics platform we operate at SOOMLA.
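
As a small example of what this looks like in practice, WiredTiger’s block compressor can be tuned per collection at creation time; the “events” collection name below is just an illustration.

// snappy is WiredTiger's default block compressor; zlib compresses harder at the cost of CPU
db.createCollection("events", {
  storageEngine: {
    wiredTiger: { configString: "block_compressor=zlib" }
  }
})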

With document-level locking, a lock is taken on a single document (instead of on a whole collection or database) while a write is being made to it; a concurrent operation on the same document is queued and waits until the previous operation completes. This brings far better CPU utilization and scales vertically much better with threading. WiredTiger uses various algorithms to minimize contention between threads.

Upgrades to the WiredTiger storage engine will be non-disruptive for existing deployments and can be performed with zero downtime.

The WiredTiger product and team were acquired by MongoDB in December 2014, and the team continues development on the next version of the WiredTiger engine.

MMAPv1 Improvements

In 3.0, MMAPv1 adds support for collection-level locking. Previously, the storage engine had a database-level write lock, so each database only allowed one writer at a time. The new version of the engine improves concurrency, giving almost linear performance scaling with a growing number of concurrent inserts.

Record allocation behavior has been improved to better handle large document sizes. The new allocation strategy called ‘power of 2 sized allocation’ can efficiently reuse freed records to reduce fragmentation, increasing the probability that an insert will fit into the free space created by an earlier deletion or document relocation.

Introduction of Ops Manager

Ops Manager is a new management and monitoring tool introduced with MongoDB 3.0 that incorporates best practices to help keep managed databases healthy and optimized. It converts complex manual tasks into automated procedures that are easy to execute through API calls or a dashboard.

Ops Manager helps operations teams with deploying a new cluster or upgrading an existing one, backing it up and recovering from backup, and dynamically resizing capacity by adding shards and replica set members – all with zero downtime. Some of these tasks used to require hundreds of manual steps; now they can be done in a single step.

Ops Manager can monitor more than a hundred key database and system health metrics, including operation counters, memory and CPU utilization, replication status, open connections, queues and node status.

Replica Sets

Increased Number of Replica Set Members

In MongoDB 3.0, replica sets can have up to 50 members, up from 12.

Replica Set Step Down Behavior Changes

Before stepping down, the step-down procedure attempts to terminate long-running operations that might block the primary from stepping down, such as an index build or a map-reduce job.

To help prevent rollbacks, the procedure waits for an electable secondary to catch up to the state of the primary before stepping down. Previously, a primary would only wait for a secondary to catch up to within 10 seconds of the primary.
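
For illustration, the step-down behavior can be exercised from the mongo shell; the timeout values below are arbitrary examples.

// ask the primary to step down for 60 seconds, giving an electable secondary
// up to 15 seconds to catch up first (the catch-up argument is new in 3.0)
rs.stepDown(60, 15)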

Other Replica Set Operational Changes

Initial replica sync builds indexes more efficiently and applies oplog entries in batches using threads.

Sharded Clusters

Read preference behavior is now more predictable. Instances no longer pin connections to members of replica sets when performing read operations; instead, mongos re-evaluates read preferences for every operation.

Improved visibility of balancer operations. sh.status() includes information about the state of the balancer.

Security Improvements

MongoDB adds a new SCRAM-SHA-1 challenge-response user authentication mechanism. It is an improvement over the previously used MONGODB-CR, providing a tunable work factor, per-user random salts rather than server-wide salts, stronger hashing (SHA-1 rather than MD5), and authentication of the server to the client as well as the client to the server.
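
As a quick sketch (user name, password and database are placeholders), users created on a fresh 3.0 deployment get SCRAM-SHA-1 credentials by default:

use analytics
db.createUser({
  user: "dashboard",
  pwd: "use-a-long-random-password",
  roles: [ { role: "readWrite", db: "analytics" } ]
})
// a client session then authenticates with the same credentials:
db.auth("dashboard", "use-a-long-random-password")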

Improvements

New Query Introspection System

An improved explain introspection system provides better output and finer-grained introspection. The query plan can now be calculated and returned without actually running the query, which previously wasn’t possible.
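
For example (the collection name and filter are made up), the verbosity mode is passed straight to explain():

db.users.find({ country: "US" }).explain("queryPlanner")    // plan only, the query is not executed
db.users.find({ country: "US" }).explain("executionStats")  // runs the query and reports runtime stats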

Enhanced Logging

MongoDB now categorizes some log messages under specific components or operations and provides the ability to set the verbosity level for these components.

MongoDB Tools Enhancements

Key MongoDB tools mongoimport, mongoexport, mongodump, mongorestore, mongostat, mongotop and mongooplog have been rewritten as multi-threaded processes, allowing faster operation and smaller binaries.

The mongorestore tool can now accept BSON data input from standard input in addition to reading BSON data from file.

How do we feel about it?

The new features introduced in MongoDB 3.0 look promising and should make it a better-performing solution with substantially lower operating costs for our deployment. We chose to go ahead and upgrade. What about you?

Sources:

https://docs.mongodb.org/manual/release-notes/3.0/

https://www.mongodb.com/press/wired-tiger

http://www.mongodb.com/presentations/webinar-an-introduction-to-mongodb-3-0

https://www.mongodb.com/blog/post/whats-new-mongodb-30-part-1-95-reduction-operational-overhead-and-security-enhancements

https://www.mongodb.com/blog/post/whats-new-mongodb-30-part-2-richer-query-language-enhanced-tools-and-global-multi-data

https://www.mongodb.com/blog/post/whats-new-mongodb-30-part-3-performance-efficiency-gains-new-storage-architecture

https://www.mongodb.com/presentations/webinar-an-introduction-to-mongodb-3-0

http://s3.amazonaws.com/info-mongodb-com/MongoDB-Whats-New-3.0.pdf

http://www.wiredtiger.com/

http://blog.parse.com/announcements/mongodb-rocksdb-parse/

http://www.zdnet.com/article/mongodb-3-0-gets-ready-to-roll-with-wiredtiger-engine-onboard/

http://www.tomsitpro.com/articles/mongodb-big-data-document-store,1-2485.html

http://www.acmebenchmarking.com/2015/01/diving-deeper-into-mongodb-28.html


SOOMLA Retention Reports

SOOMLA introduces retention reports – the best way to understand the behavior of your users in terms of how many stayed in your game and how many left it. There are many services out there that let you investigate user retention; SOOMLA’s retention reports come with 3 retention types, each of which shows your users’ behavior from a different angle and gives you information so you can react accordingly and prevent churn.

Investigating your new users

The first type of report is Regular Retention. Here you can see how many first-time (new) users visited your game on each day, and how many of them returned afterwards.

Regular_Retention

The two leftmost columns show how many users started playing your game and on which day they came in. Every other column “i” shows how many of those users visited your game on the i-th day after the starting day.

Rolling retention of first time users

The next retention report is Rolling Retention, which was first introduced by Flurry. Rolling Retention shows you how many users are still “in your game.” For example, if a user started playing on day 0 and came back only on day 3, Rolling Retention will also count that user on days 1 and 2, as if the user had played throughout all of those days. While this is a more “optimistic” analysis of user behavior, it treats users as equals even if they didn’t come back each and every day. Rolling Retention is particularly interesting to look at when coupled with other marketing activities run by your studio. For example, a push notification campaign is likely to bring users back and to boost this metric. This metric has been much debated, which is why we offer multiple retention types for developers looking to optimize their retention.

Rolling Retention

As in regular retention, in Rolling Retention the two leftmost columns show the number of users who started to play on those days. Every other column “i” shows the number of users who visited your game on day “i” or later, according to the date range you’re looking at.
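
To make the difference between Regular and Rolling Retention concrete, here is a tiny sketch in plain JavaScript with made-up session data; the user names and day offsets are purely illustrative.

// `sessions` maps a user id to the day offsets (0 = install day) on which the user opened the game
var sessions = {
  alice: [0, 1, 3],   // came back on day 1 and day 3
  bob:   [0, 3],      // skipped days 1-2, returned on day 3
  carol: [0]          // never came back
};

function regularRetention(sessions, day) {
  // counted only if the user actually showed up on that exact day
  return Object.keys(sessions).filter(function (u) {
    return sessions[u].indexOf(day) !== -1;
  }).length;
}

function rollingRetention(sessions, day) {
  // counted if the user showed up on that day *or any later day*
  return Object.keys(sessions).filter(function (u) {
    return Math.max.apply(null, sessions[u]) >= day;
  }).length;
}

console.log(regularRetention(sessions, 1)); // 1 (only alice)
console.log(rollingRetention(sessions, 1)); // 2 (alice and bob -- bob returns on day 3)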

Take a deeper look into returning users

The last type of retention you can find on the Grow dashboard is Return Retention. This kind of retention is different from the others because it doesn’t treat each date as a new “day-0” cohort. Instead, it accounts for all active users on a certain date and shows you their return rate from that day on. In that sense, Return Retention captures a snapshot of users from multiple cohorts and observes their future retention. It also lets you identify weak and strong days of gameplay over time.

Return_Retention

The leftmost column represents the total number of users who visited your game on that exact day, while the other columns tell you how many of them came back i days later.

Retention is the cornerstone metric every studio should be tracking.  The importance of this metric can be emphasized by two observations:

  1. Retention is the foundation for calculating user lifetime value, which feeds into revenue projection and ROI calculations for a game. Knowing LTV is also necessary in order to conduct ROI-positive user acquisition.
  2. Retention expresses your users’ delight from your product.  It’s the ultimate metric for understanding if your game is truly entertaining to the extent of keeping users coming back for more.

We encourage studios to explore retention metrics in the Grow dashboard and to understand their users’ behavior.


This post is an update of an older post we had about how to set up shop on your mac: http://blog.soom.la/2014/04/setting-up-shop.html

Here we’ll describe the basic setup every one of the engineers here at SOOMLA uses. This installation includes the essentials every full-stack engineer needs.

Go through these stages to install the full stack

XCode installation

Using the OS X App Store, install the latest version of Xcode, and run it for the first time.

Command Line Tools

Next, paste the following command in a new Terminal window:

xcode-select --install

Make your terminal work for you (dotfiles)

Everyone in SOOMLA works with the terminal. It’s the fastest and most efficient way to work as an engineer that needs to install things and run a lot of commands all the time.

dotfiles is a great project that incorporates many amazing people’s terminal configurations. We especially like mathiasbynens’s stuff. It’s a great starting point for the perfect personal terminal configuration.

Homebrew setup

Paste the following code in a Terminal window:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
  • Update via the brew update command
  • Verify successful installation via the brew doctor command

Install different packages

Now we’ll install various programs using brew. For each program, enter brew install xxx, replacing xxx with the package name each time.
You can verify the installation by running xxx --version afterwards, where xxx is the name of the installed package.

The programs we’ll install using brew are git, openssl, redis and mongodb. After installing, the version checks should look roughly like this:

$ git version
git version 2.3.0
$ openssl version
OpenSSL 0.9.8zd 8 Jan 2015
$ redis-cli --version
redis-cli 2.8.19
$ mongod --version
db version v2.6.7

Downloading JDK 1.6

Download and install JDK 1.6 for OS X here

Installing the Android SDK

SDK
  • Enter brew install android-sdk into Terminal
  • Run the android command from Terminal and follow the instructions to continue installing
NDK

Enter brew install android-ndk into Terminal to install.

Setting up your IDEs

These are the IDEs we work with at SOOMLA

Rubymine

Download and install here

Android Studio

Download and install here

Text Editor

There’s a great debate about which text editor is right for you. I guess it’s everyone’s own choice. This is how we rank the text editors (from best to pretty good):

  1. Atom
  2. Sublime
  3. Textmate 2
  4. Brackets – especially for web development.

Additional utilities you will need

Robomongo – MongoDB Management

Download and install here

Skitch – Image Annotation

Download and install here

StackEdit – Markdown Editor

Get it here

Meld – Use it for git diffs

Follow instructions here

Gitx – Use it to see your git history

Download and install here

Sequel Pro – MySQL client

Download and install here

Keep – For notes

You should really try it here

Git-ify your command line

Go over this great post by @gurdotan: http://rubyglazed.tumblr.com/post/15772234418/git-ify-your-command-line
Your command line should be your main tool when using git. Get ready to git at the speed of light.

Online Services Sign-Up

Every team member should be signed up to these services

  • Github
  • Trello

Done

Now you should be up and running, the SOOMLA way!


Over the last few months, we’ve been conducting interviews for various positions and it’s been difficult. Exceptional engineers are hard to come by. So between mobile game reviews and other technical posts, I wanted to take a moment and point out some do’s and don’ts for all you smart candidates out there:

Don’t Oversell Yourself!

Sell yourself – of course. But be careful: overselling yourself will lead to people not assessing your character and expertise correctly, which isn’t good. This is bad not only for the hiring company, but also for you, the candidate. A wrong perception of who you are may cause you to:

  • … be assigned to a position that doesn’t fit your knowledge.
  • … work in a company that doesn’t fit your character.
  • … be assigned a position that you don’t want.
  • … not get the job by seeming arrogant or sometimes over qualified.

What does overselling yourself look like?

I’m not saying that candidates are lying. If I had a feeling a candidate was lying to me then, of course, I wouldn’t want to work with him. I’m talking about candidates who create an image of someone they aren’t just to get the position. When you create a wrong image, you:

  • … say you know how to do something that you actually don’t.
  • … say you know how to do something that you’re actually just 20% experienced in.
  • … will try to put on a show and act like somebody you’re not. (If you’re not a loving & caring person, don’t try to be)
  • … will try to answer questions you weren’t asked, thinking it will impress your interviewer.
  • … will try to correct your interviewer and be wrong about it. (Oh god that’s a turn off 🙂 )

How Not to Oversell Yourself

The evident result of overselling yourself is that you will very quickly be back on the market – and that’s if the people in the company you joined really care about you. Smart managers know when a new candidate isn’t right and can recognize when they’ve hired the wrong person, so the best thing is for both sides to say goodbye nicely. It becomes even harder for managers to do this when they know they took the candidate away from a previous workplace, but the smart thing to do is still to part ways.

In order not to oversell yourself in an interview, think about how you can help your interviewer. The person sitting across from you, asking questions and assessing your abilities, is not there to fail you – he’s there to hire you. Give him the right reasons to hire you, not just any reasons. Ask your interviewer: “What do you need?” and see if you fit that description. If the description is not clear enough, ask him to rephrase it and make sure you have a clear understanding of the requirements and that they apply to you.

Another smart thing to do in an interview is to stay humble. Be patient, stay calm and answer what you’re asked. Don’t try to show how smart you are – if you’re smart, it will show.

So stay focused, answer every question you’re asked and make sure you’re the right person for the job. The goal is not to win the interview, but to win the position that fits you best.

Good Luck!!!
🙂


This is the second of a 2 part post about how we improved query performance on our analytics dashboard by over 7000x. All just by moving some of our data from MySQL to Redis. Part 1 was a technical explanation of the setup, while part 2 shows the benchmarks we saw when comparing fetching data from both systems.


We use Redis a lot. It is fast, stable, effective and awesome! This time, we found Redis useful in solving a painful problem: counting unique users for multiple different filters.

We recently found a new-to-us feature in Redis: HyperLogLog. HyperLogLog is a probabilistic data structure that makes estimating the number of distinct objects in a set very fast (actually, more like blazing fast), but with a minor standard error (you can read more about it here). The moment we read about HyperLogLog, we knew there was something to it. And now that Redis has made it so easy to use, our testing started almost immediately.
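
Here is a rough sketch of the idea using the node_redis client; the key naming scheme (one HyperLogLog per day and country) is just an illustration of how unique users can be bucketed per filter.

var redis = require('redis');
var client = redis.createClient();

// record activity: a HyperLogLog stays at roughly 12KB per key no matter how many users it has seen
client.pfadd('unique:users:2015-06-01:US', 'user:42');
client.pfadd('unique:users:2015-06-02:US', 'user:42', 'user:77');

// estimated uniques for a single day+country filter
client.pfcount('unique:users:2015-06-01:US', function (err, count) {
  console.log('unique US users on June 1st ~', count);
});

// combine several daily keys to estimate uniques over a date range
client.pfmerge('tmp:unique:US:june1-2',
               'unique:users:2015-06-01:US',
               'unique:users:2015-06-02:US', function (err) {
  client.pfcount('tmp:unique:US:june1-2', function (err2, count) {
    console.log('unique US users June 1-2 ~', count);
  });
});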

We Want Real-Time Data

Until now, we kept all data about unique users in MySQL. The data was saved in different variations, ready for filtering (country, day, …). As time went by, our queries became slower and slower. It was a pretty grim situation when all our different MySQL optimizations showed us there was no real solution there. We were advised to take many different approaches using Redshift, Hadoop or ElasticSearch, but we didn’t want our data presented to our users with any delay. We wanted complete, real-time data in our dashboard, updated instantly by our background workers.

Redis to The Rescue

Once we had Redis running and migrated the MySQL data in, the results were astonishing. We had been tweaking MySQL to try to make distinct counting faster for a couple of months, and the results were mediocre at best (not MySQL’s fault – we were counting cardinality in 10-million-plus-row tables), but Redis was FAST. Speed wasn’t the only thing we had to benchmark, though: we weren’t sure how well the 0.8% error deviation Redis promises for HyperLogLog would hold up when we ran queries on our data.

MySQL is Underperforming

To get us started, here is a benchmark of some of the many different ways we tried tweaking MySQL specifically for COUNT DISTINCT:

mysql-performance

We tried different query and index structures. The conclusions we drew from the process:

  • SELECT COUNT(*) FROM (SELECT * GROUP BY id) seemed to consistently work better than SELECT COUNT(DISTINCT id).
  • MySQLWorkbench is awesome.
  • With 10M rows and getting larger every day, MySQL just wasn’t the tool for counting the cardinality of our user-data.

Revelation of Goodness

Once we migrated all of our MySQL data into Redis keys, we saw Redis zip past MySQL in the blink of an eye.

mysql-redis

There’s no mistake in that graph. We tried to chart the performance times of both MySQL and Redis on the same graph, but you probably can’t see Redis’s values there. Here’s a close-up of the Redis performance times.

redis-performance

Amazing!

The Fly in The Ointment

It can’t all be this good. HyperLogLog only gives an estimate, so it was time to compare the estimates to the actual MySQL counts. For most queries, the difference was much smaller than the 0.8% error deviation (the smallest was 0.03%), but after benchmarking many different queries, we also had two that reached errors of 1.1% and 1.7%.

redis-difference

In the end, these error deviations were acceptable for some of our use cases. We’re still saving exact counts outside of Redis … Just in case.

HyperLogLog is a very powerful tool for counting unique entities. You should definitely use it if you’re willing to accept its minor standard error.


This is the first out of a 2 part post about how we improved query performance on our analytics dashboard by over 7000x. All just by moving some of our data from MySQL to Redis. Part 1 will be a technical explanation of the setup, while part 2 will show the benchmarks we saw when comparing fetching data from both systems.


 

GROW is SOOMLA’s new user intelligence service, presented through a brand new Analytics Dashboard. We wanted to provide mobile gaming studios with various ways to investigate their games using informative and important data metrics. The problem was that queries were slow and the user experience was bad. Most of the slowness stemmed from the fact that we used MySQL to calculate unique users across multiple different filters, which was a bad choice for real-time uniqueness calculations. We tried to figure out ways to improve that until we stumbled upon a new method for calculating unique users with Redis.

We were already using Redis at this point, but only for internal purposes and not for serving data to our web app, so we decided to set up separate servers that would only serve data for the dashboard. We looked at different options (clustering, coming in Redis 3, which has officially been released since this post was written; Redis Sentinel, which needs at least 3 different servers), but decided that for our usage a simple master-slave duo would be enough. Failover is taken care of semi-manually instead of with the overhead of sentinels: once a crash is identified, we run a script that promotes the slave to master and switches the IP on all servers that connect to Redis (we based some of our approach on great advice from the awesome @jondot).

When looking to set up a few Redis servers, we saw 2 major options:

  • Setting up our own machines and running Redis off them
  • Using a cloud Redis service such as Amazon Elasticache/Azure Cache/redislabs

After considering pricing and our specific needs, we decided to manage our own machines, which came to a third of the price of the managed cloud services.

Here is our process of setting up 2 Redis machines as Master/Slave

We start with a fresh instance on Amazon EC2, using Ubuntu 14.04


EC2 was just our choice for testing purposes… you can absolutely select your preferred cloud service provider.


Setup

SSH in and add all necessary keys to ~/.ssh/authorized_keys

Install the latest version of Redis via Chris Lea’s PPA:

sudo add-apt-repository ppa:chris-lea/redis-server
sudo apt-get update
sudo apt-get install redis-server

redis-server should be running now

Run redis-benchmark -q -n 100000 -c 50 -P 12 to make sure everything is running ok

Config

Open /etc/redis/redis.conf and change the following settings:

  • tcp-keepalive 60

  • comment out bind
    This makes the machine accessible from anywhere on the web

  • requirepass choose an extremely secure password
    The extreme speed of Redis is a double-edged sword when it comes to password protection.
    An attacker may be able to try as many as 150,000 passwords per second, so make that password strong.

  • maxmemory-policy noeviction
    For our needs, no key can ever be deleted

  • appendonly yes
    We will be using both AOF and RDB backups

  • appendfilename redis-[prod/stg]-s[1/2]-ao.aof

If configuring slave

This part is just for our slave

  • slaveof ip port
  • masterauth master password

Save and exit.

Restart redis with sudo service redis-server restart.

You should now be able to connect to redis via

redis-cli -h 127.0.0.1 -p [your port]
AUTH [your password]

Machine Config

Install Git:

sudo apt-get update
sudo apt-get install git

Install Node.JS + NPM:
We will migrate our data to Redis with some node scripts

sudo apt-get install nodejs
sudo apt-get install npm
sudo ln -s /usr/bin/nodejs /usr/sbin/node

Setting up semi-auto failover

We will set up failover so that the Redis server IP is an environment variable. On failure we will receive a notification and run a script (sketched below, after this list) that will:

  • switch the environment variable to be the IP of the slave machine
  • send the slave a SLAVEOF NO ONE command
  • update the master’s configuration to be slave of the original slave machine
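
Here is a rough sketch of such a script in Node.js using node_redis; the host names, port and password handling are placeholders specific to your own setup, and the “switch the environment variable” step is only described in a comment.

var redis = require('redis');

var OLD_MASTER = 'redis-master.internal'; // placeholder: the machine that just crashed
var NEW_MASTER = 'redis-slave.internal';  // placeholder: the slave we are promoting
var PASSWORD   = process.env.REDIS_PASSWORD; // assumed to hold the requirepass value

// 1. promote the slave: SLAVEOF NO ONE makes it stop replicating and accept writes
var slave = redis.createClient(6379, NEW_MASTER, { auth_pass: PASSWORD });
slave.slaveof('NO', 'ONE', function (err) {
  if (err) throw err;
  console.log(NEW_MASTER + ' is now master');

  // 2. point the app servers at the new master -- in our setup that means rewriting the
  //    REDIS_SERVER value in /etc/profile.d/*.sh on each server and restarting the services
  //    (left out of this sketch).

  // 3. when the old machine is back, re-attach it as a slave of the new master
  var old = redis.createClient(6379, OLD_MASTER, { auth_pass: PASSWORD });
  old.slaveof(NEW_MASTER, '6379', function (err2) {
    if (err2) throw err2;
    console.log(OLD_MASTER + ' is now replicating from ' + NEW_MASTER);
    old.quit();
    slave.quit();
  });
});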

After that we can take our time and figure out why the master server crashed.

To set up the environment variable

Create a new file in /etc/profile.d with a .sh extension; the file content should be

export REDIS_SERVER=

To make the change take effect immediately, run

source .sh

And make sure everything is set by running

printenv REDIS_SERVER

Now, in your server environment configuration, set the Redis server url to be

server: process.env.REDIS_SERVER

Now your server should successfully connect to Redis via the environment variable.
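
For example, with node_redis the connection could look like the sketch below; the port and the REDIS_PASSWORD variable are assumptions – match them to your own config.

var redis = require('redis');
var client = redis.createClient(6379, process.env.REDIS_SERVER, {
  auth_pass: process.env.REDIS_PASSWORD // the requirepass value from redis.conf (assumed env var)
});
client.on('ready', function () {
  console.log('connected to Redis at ' + process.env.REDIS_SERVER);
});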

If you’re running your server as a service

In this case the daemon service will not recognize your environment variables, therefore you should inject the .sh file into your daemon script in /etc/init.d/yourservice

The injection should look like this:

source /etc/profile.d/.sh

inserted before the start/stop/restart functions.

That’s it for setting up our server. Stay tuned for the 2nd part: how using Redis HyperLogLog made our queries 7000x faster.


If you are an indie game developer, you might have had the idea of submitting your game for review to different game review sites. At the end of the post you can find a list of 150 sites that post reviews of iOS games. First, let’s discuss some of the merits of pursuing that path.

List of Game Review Sites

How many people read review sites

While it’s hard to get a reliable data point on this for the gaming industry, you can find some data about the movie industry and use it as a general guideline. In a survey conducted by Amazon, only 2-3% of respondents said they took professional film critics seriously when deciding which movie to watch. If we use this stat as a guideline, you might be asking yourself if it’s worth the trouble.

Paid games vs. Free 2 play games

There is a big difference between getting a user to download a free game vs. a paid game. Getting a user to buy something requires a bit more effort, so review sites come in handy, especially if your game lacks a brand or recognizable IP.

The ripple effect

Another positive outcome of getting reviewed is that the review might influence decisions made by different gatekeepers. You are more likely to get reviews from other publications. In addition, many games that get good reviews end up getting featured by Apple, and finally, you have a better chance of striking a publishing deal.

Where to get reviewed

As promised earlier, here is the list – List on Maniac Dev site

In addition, you can also submit a request to get reviewed on the SOOMLA blog

 


Plugins are coming! The first one is Vungle. The great video ads monetization service for mobile apps is now available to SOOMLA developers. With the new soomla-vungle plugin you can easily use Vungle to show ads on Android, iOS and Unity3D.

Give rewards for watching videos

The most important feature of this plugin is the ability to give rewards for watching a Vungle video. Rewards are a new concept we recently introduced after looking at many games and figuring out how rewards are given to users. There’s even a VirtualItemReward that you can use to give your users a certain amount of a certain VirtualItem when they finish watching a video. You can also decide not to use rewards and just use Vungle to present a video at any given time.

Easy to use and (of course) free!

To use the new plugin, just go to the new Github repo at http://github.com/soomla/soomla-vungle and click on the folder for your selected platform. Follow the Getting Started guide carefully and you’re golden!
Don’t hesitate to ask anything about this plugin on SOOMLA’s new answers website at http://answers.soom.la. You can also just use it to say how much you enjoyed using this new plugin 🙂

I’m not sure if you guys see what’s going on with the SOOMLA open-source framework, but you should know it keeps expanding every moment. We’re adding features and fixing bugs like crazy, but the new feature (or major improvement, if you like) that I’ll tell you about in this post is one of our most important ones so far.

Improving your game development environment

Community members keep asking us why unity3d-store isn’t available in the editor. This was one of the most wanted improvements for the Unity3d flavor of SOOMLA Store, and now it’s here. We knew how important it is to you, and we always felt the SOOMLA library wasn’t complete without it. I’ll explain why …

Working in the editor is beneficial in so many ways. You can now test your games with the SOOMLA Store economy mechanisms as if you were running your game on a device. All balances and other information will be saved in local storage on your computer and will remain available to you across multiple test runs of your game.

Your game with SOOMLA everywhere!

Easily testing your game is not the only benefit of working with SOOMLA in the editor. If you want to publish your game to the web or as a desktop game and you don’t need in-app purchases, you can now do that with the amazing features that SOOMLA offers. Everything from economy management to level design (unity3d-levelup) can now be used in the editor and built for multiple platforms.

Another option this major change opens up is the integration of more billing services with SOOMLA Store. Anyone from the community who wants to add a billing service for desktop or web games is encouraged to do so. If you feel like contributing today, just start doing it, or contact us for assistance at support@soom.la. If you don’t feel like it today, that’s fine – you can do it tomorrow 🙂

Cocos2dx users, don’t feel left behind. We’re bringing the work-in-editor spirit to you soon. You will also be able to build and test your games in a desktop emulator and release with SOOMLA inside to multiple environments.
