
‘Go Make It Rain’ App and the key technique behind it

May 26th, 2013 by Tanbir


About a week ago our team quietly released Go Make It Rain, an app for iOS and Android. This post is a reflection on the key technical decisions (of which there were many) made in order to rain those digital stacks like a boss.

The app began entirely on the Java side, targeting Android (our core development strength at the time). After throwing together some OpenGL we produced an app that looked like this:


It was a dirty (pun intended) prototype, but the reactions we got from others were surprisingly positive. What started as a joke to make a “bill-flinging simulator” became something we strongly considered developing on a more serious level.

The decision was made to kick it up a notch. The following goals were set:

  • Make an app on both iOS and Android
  • Create a true “Augmented Reality” environment, and use the phone’s camera
  • Add true “physics”
  • Add a mechanic to gamify it (this was eventually cut)
  • Make the app a social experience

Cross Platform

One of the biggest hurdles was deciding how to make it cross platform. We decided to write the majority of the app in C++ in order to have a shared code base between the Android and iOS versions. None of us were game engine experts or experienced with mobile at that low of a level.

So wtf do we do now? One of our programmers was working on a game engine in Visual C++ for his thesis project (GitHub link), which revolved around a real-time visibility culling technique (only rendering stuff in a player's field of view in a complex video game environment). We decided to fork that code base and attempt to get it compiling with the Android NDK. After some considerable effort, and a few head bangs against a keyboard or two, we got it up and running on Android. Objective zero-point-five complete, booya.

The goal was to have front end UI, sensors, and input handled natively on each platform, in Java and Objective-C respectively.


To make this work on Android, we used the NDK's JNI interface. This allows you to link against a native library and pass parameters back and forth between the Java front end and the C++ backend. This saved a significant amount of time because we could use standard Android for the menus, user input, in-app purchasing, server interaction for photo uploads, and more, all in Java. Programming in C++ on Android comes with a lot of headaches though. Memory and lifecycle management are like fighting a yeti without a lightsaber. It can be done, but there's gonna be a few bruises and/or missing fingers. Debugging in the NDK environment is terrible as well. Hitting breakpoints, monitoring different threads, analyzing native crash signals… about as painless as a casual walk on Hoth in the middle of the night.


iOS was a new platform for us. A couple friends (thanks Max, Martin, and Malinda) donated an old MacBook and a couple old iOS devices. That was all we needed to get rolling. iOS was much simpler to tie in with the C++ backend. Given how close Obj-C is to C++, integrating the engine was no more difficult than adding an Obj-C file and modifying a few compiler settings. The best way to integrate Obj-C and C++ is to use a ".mm" file, which lets you mix both languages in the same file. Sensor input and touchscreen input were handled by CMMotionManager and UIGestureRecognizer, respectively. Requesting sensor updates is straightforward:

m_motionManager = [[CMMotionManager alloc] init];
m_motionManager.deviceMotionUpdateInterval = 0.033; // ~30 Hz
[m_motionManager startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXArbitraryZVertical];

The iOS development experience really made us question our years of Android loyalty. Even with a half-working 2006 MacBook and a janky old iPhone, we see the iPhone version of Go Make It Rain as our flagship version. Check it out on your 4S or 5, smooth as butta!

Augmented Reality

This is where $hit got real. Originally the Android app had a fixed orientation and a photo as the background. We wanted to be able to throw out the money in a real world environment. What if you could spin in your office chair making it rain? That would mean the money stays fixed in location so once you do a full turn it will still be waiting for you in the same spot. Hellz yeah.

First things first: how do we get the phone's orientation? We use two sensors, the magnetometer for heading and the accelerometer for gravity. It turns out getting this information isn't very hard, it just takes a slight bang of the head against a keyboard.

For Android:

float[] rotationMatrix = new float[9];
SensorManager.getRotationMatrix(rotationMatrix, null, gravity, geomag);

float[] angleMat = new float[9];
SensorManager.remapCoordinateSystem(rotationMatrix, SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, angleMat);

Sweet so now we have everything we need to know to orient the camera in relation to the bills.

For video input in OpenGL we built on Qualcomm's Vuforia API. Originally we were interested in its marker-tracking capabilities, and still might use them in the future, but for now Vuforia serves as an easy way to interface with the camera in an OpenGL environment.


The first Android prototype used Bezier curves to calculate a path from the bill stack, out into the world. A series of control points would be generated based on the velocity and angle of the bill swipe. It would then find a path like this! Damn that was cool. It quickly became apparent, however, that this method would turn into a clusterf#@k for more complex movement, a.k.a. it just wouldn’t be scalable.

The final implementation ended up being pretty simple using gravity and air resistance, acting on a specific bill’s orientation matrix we captured when the bill was flung.

bill->m_velocity += GRAVITY * seconds;

if (bill->m_liftSpeed > 0.0f)
    bill->m_velocity += bill->m_liftVector * bill->m_liftSpeed * seconds;

bill->m_liftSpeed += bill->m_liftAcceleration * seconds;

We know which way is up in relation to the bill from the sensor data; no matter at what orientation you hold the phone, the bill always falls directly towards Earth’s liquid hot mag-ma core. We also know what angle of elevation you are holding the phone. If you hold the phone up above you, the bills fan out in a cone around you and fall quickly. If the phone is more level then the bills shoot out with some lift, and then continue down the path of self-immolation (fire graphics not included).

Another hurdle was computing bill orientation along the path on the fly. There can be an infinite number of normals to a path in 3D. After doing some research online, it was decided the age-old language of calculus had to be spoken. The orientation along the path (the tangent) is just the derivative of the positions, and the path normal is the derivative of that orientation. Since the orientation vector was computed by subtracting the previous position from the current position, the path normal was computed by subtracting the previous orientation vector from the current one. It looks something like this:

m_billDirection = m_pos - m_posPrev;
m_billNormal = m_billDirection - m_billDirectionPrev;

And then Quaternions (yes, Quaternion. "Hey kids, Quaternion is out sick today because he's busy being smothered in beautiful women with that name" anyway…) were computed from all of this to smoothly interpolate the bill orientations in 3D as they flew through the air. The bill orientation Quaternion was sampled at a regular rate, and the orientation was interpolated from the previously sampled Quaternion toward the newly sampled one. This way, if there were any sharp changes in path direction, the bills never abruptly snapped to a different orientation, which would look like money with Tourette's (no offense to those who have the syndrome, just trying to paint a politically incorrect picture).


We use a skeletal animation system for the bills, so they have a bunch of different animation types that smoothly transition into each other. This way a bill smoothly goes from lying in your hand to flying through the air, then plays one of the many random falling animations. The Assimp library was very useful for taking the Collada files that our artist, Mikkel Sandberg, exported from the modeling program and converting them to a nice, easy-to-load format for the engine.

Each bill has its own animation that is updated on the CPU independently of other bills; the GPU then does the skinning in the vertex shader. There is a common skeleton and mesh for all bills, and each bill just applies a set of animations that blend smoothly into each other. When we added Reddit upvote arrows, our artist only had to make a new mesh and paint the vertex weights to adhere to the original bill skeleton. It's the same concept as in other games with many different character skins: they all share a common mesh and skeleton that is animated, and artists just create a wide variety of characters.



Multiplayer

One way we thought we could gamify Go Make It Rain was to add multiplayer. The idea was to have one user be the "Rain Maker," whose goal would be to make it rain on the other players, a.k.a. "Rain Catchers." Rain Catchers would compete to catch the falling bills. Caught bills would be added to a player's bank/vault and could in turn be used to make it rain on others.

This posed a few technical challenges:

  1. How do we make a game multiplayer and cross platform without building out a hardcore backend?
  2. How do we combine the Rain Makers’ and the Rain Catchers’ individual augmented worlds into one, where it appears that the same bills thrown are the same bills being caught (shared 3D space)?

How do we make a game multiplayer and cross platform without building out a hardcore backend?

We wanted a World of Warcraft-like experience where anyone could join and leave a game seamlessly. The key difference was that our game world did not have orcs and mages, YET. It was an augmented version of your surroundings, so it only existed when and where people were using the app.

To achieve this, we pursued a cross-platform P2P SDK made by Qualcomm called AllJoyn. AllJoyn is really cool because it allows phones to connect to each other using whatever means necessary. One person could be making it rain while catchers joined seamlessly over wifi or bluetooth, and because it's P2P we didn't really need a server-side backend. Boom.

We set it up so that when a user started the game, they connected to the existing session, or hosted one themselves if no one else was there. If the host decided to leave, the session was passed on to the phone with the lowest unique ID. This worked pretty well, except that devices sometimes detected each other slowly and we would end up with two sessions at the same time. Always handing the session to the lowest ID solved that too: all users eventually converge on the same host, even after initial detection problems.

How do we combine the Rain Makers’ and the Rain Catchers’ individual augmented worlds into one, where it appears that the same bills thrown are the same bills being caught (shared 3D space)?

In order for multiplayer to make sense, the player making it rain should be facing the other players so they can catch the falling bills; otherwise there should be no bills to catch. With the phone's compass we can detect the direction each player is facing, and combined with GPS we can tell whether players are facing each other and where the bills should fall. If GPS were extremely accurate indoors, this would work in most cases, with exceptions like the following:


Here both players would appear to be facing each other according to their phones' compasses, but the check would likely fail due to an inaccurate GPS reading.

AllJoyn was not robust enough for the use cases we thought were most important. For example, we wanted people to be able to start a game when the only connection they had was 4G. We needed people down at the bar to be able to make it rain on each other. If connecting wasn't easy and fast, no one would bother, even if there were hoes shakin' booty on the dance floor. We couldn't get the 4G connectivity reliable enough, even though in theory it seemed like a sweet idea: a remote server would match all devices on 4G that could see a shared wifi signal (even if that signal was locked). AllJoyn is actively being worked on, and we have no doubt it will become robust enough for most cases.

In the end, this component was removed from the initial release. We had some awesome tech working, and it was fun to play around with, but we didn’t have the programming bandwidth to get it polished to the point where it was ready for the public to get its dirty paws on.

The Future

Here are a few things we have on the table for future improvements. We would LOVE to hear your feedback on these, and any other ideas you might have!

  1. Record Video
    • Imagine if you could record a short video of your friend (insert shenanigans here) while you made it rain on them? A lot of people are asking for this.
  2. Slow Down Time
    • People have been saying it’s too difficult to time photos with the speed the bills move.
    • It’s pretty badass when you pause the game with money floating around. It stays locked in position as you move the camera around.
    • Pro Tip: The Android version will freeze time when you hold the photo button down and move the phone around.
  3. Prevent Bill Clipping
    • As bills fly through the air, they intersect each other and their geometry clips.
    • Use a simple constraint system that pushes bills apart gradually. Think of magnets repelling each other or planets doing reverse gravity on each other.
  4. Finish the Gamification
    • What if you could actually catch the money someone was throwing at you in a shared 3D space? Think about that for a second.
  5. Photo-Sharing Portal
    • What if everyone could share their photos to a single site where the community could vote on the best pictures?
We desperately want to do all of these, but we want to see a little more traction before we invest the time to build them.


Creating Go Make It Rain was a great experience for us. If you're looking into strategies for making cross platform apps, a shared native component is certainly a good option. Did it save us time over making two separate native apps? Probably a little. It definitely made it easier to dial things in on both platforms simultaneously.

If you feel so inclined, check out the app and leave us a review (it's free; here's the iOS version: link. And Android: link). Feel free to follow us as well; we enjoy sharing our half-baked, witty posts. Facebook, Twitter, you know the deal. In the future we hope to write more about other aspects of the project, including the considerable visual and feature design changes.


(link to the Reddit discussion)
