Once a product guy, always a product guy! I just noticed some neat A/B tests that made me wonder if Uber is running an experiment to boost revenue from its super users in India.
To the uninitiated, A/B testing is a form of experiment in which two variants of a feature, called A and B, are shown to statistically similar groups of users to figure out which one is more desirable to the targeted audience. It is often used for testing web and email marketing campaigns and, increasingly, mobile apps. Here is a simple example: for an identical design, changing the word “Buy” to “Try” more than doubled the conversion.
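The comparison behind such a test can be sketched in a few lines. The numbers below are invented to mirror the “more than doubled” claim (they are not from any real campaign), and the significance check is a standard two-proportion z-test, which is one common way, not necessarily the one used in that example:

```python
import math

def conversion_rate(conversions: int, visitors: int) -> float:
    """Fraction of visitors who converted."""
    return conversions / visitors

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z-statistic for the difference between two conversion rates,
    using a pooled estimate of the common proportion."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical traffic split: "Buy" converts 40 of 2000 visitors,
# "Try" converts 90 of 2000.
rate_a = conversion_rate(40, 2000)
rate_b = conversion_rate(90, 2000)
z = two_proportion_z(40, 2000, 90, 2000)
print(f"Buy: {rate_a:.1%}, Try: {rate_b:.1%}, z = {z:.2f}")
```

With these made-up counts the “Try” variant converts more than twice as well, and the z-statistic clears the usual 1.96 threshold, so the difference would not be dismissed as noise.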
Before I narrate Uber’s test, let me describe what type of Uber user I am. After moving from the San Francisco Bay Area to Delhi, India, in 2015, I decided not to buy a car. The economics were simple: a mid-grade car with everything included (lease/payments, expensive fuel in India, maintenance etc.) costs anywhere from Rs 40,000–60,000 per month ($600–900). My monthly Uber spend is rarely over Rs 10,000 ($160), and the convenience of not driving in insane Indian traffic is a big bonus. While $160 may not sound like much in the USA, in India it probably puts me among the top 1–5% of Uber riders.
Last Thursday, while booking my Uber, I noticed a new feature. Instead of showing me the price and asking me to confirm it before booking the car, I got the following prompt.
I intuitively knew this was an A/B test; in fact, I’ve been wondering for a while what kind of monetisation tests Uber would run as it battles for supremacy in the Indian market, given that its fares are too low to make the business profitable.
Without the benefit of being an Uber insider, here are my best guesses for key features of this test:
To a mobile B-to-C product person like me, the test looked very well designed. Notice the text: “your total fare would be updated in the trip feed once you connect with the nearest driver”.
1. Since you have to take an extra step to check the fare in the trip feed, it’s likely some riders would not do so. Call this group “highly insensitive to price”.
2. Riders who skipped viewing the fare while booking but did check the price later can be called “somewhat sensitive to price”.
3. Riders who clicked “Cancel” on the prompt and checked the price before booking are “sensitive to price”.
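The three segments above reduce to two observable actions per rider: did they tap “Continue” on the prompt, and did they later open the trip feed to check the fare? A minimal sketch of that classification (my own labels and function name, not anything from Uber):

```python
def price_sensitivity(clicked_continue: bool, checked_fare_later: bool) -> str:
    """Classify a rider by how they responded to the fare prompt.

    clicked_continue   -- rider accepted the prompt without seeing the fare
    checked_fare_later -- rider later opened the trip feed to see the fare
    """
    if not clicked_continue:
        # Cancelled the prompt to see the price before booking.
        return "sensitive to price"
    if checked_fare_later:
        # Booked blind, but went back to check what they paid.
        return "somewhat sensitive to price"
    # Booked blind and never looked at the fare at all.
    return "highly insensitive to price"

print(price_sensitivity(clicked_continue=True, checked_fare_later=False))
```

Two booleans give four combinations, but cancelling the prompt dominates: once a rider cancels to see the fare up front, whether they re-check it later doesn’t change the segment.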
Though I have limited data (only one user, me, and fewer than 10 rides), the early indicators are that there is indeed some difference in pricing. Following is the average price I paid on trips where I was “sensitive” to price (i.e., I clicked “Cancel” on the prompt) versus “highly insensitive” to price (i.e., I clicked “Continue” and did not check the price in the trip feed).
Notice that the price I paid when I was highly insensitive to pricing was about 23% higher. Some of the difference is likely due to factors such as traffic delays, which, because I travel short distances during office hours, tend to contribute a 10–15% fluctuation in fares. To remove such biases, I averaged trips in both directions (to and from work) and at different times (morning and evening). But given the small dataset, I expect this comparison to carry some bias.
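The arithmetic behind that comparison is just two averages and a percentage gap. The fares below are invented to reproduce a roughly 23% uplift (the post does not publish the actual trip-by-trip numbers), with each list meant to cover both directions and both times of day:

```python
# Hypothetical fares in Rs for the same home-office route,
# morning and evening, both directions. NOT real data.
fares_insensitive = [310, 300, 320, 300]  # tapped "Continue", never checked fare
fares_sensitive = [250, 240, 265, 245]    # tapped "Cancel", checked fare first

def average(fares: list[float]) -> float:
    return sum(fares) / len(fares)

avg_insensitive = average(fares_insensitive)
avg_sensitive = average(fares_sensitive)
uplift = (avg_insensitive - avg_sensitive) / avg_sensitive
print(f"insensitive avg Rs {avg_insensitive:.0f}, "
      f"sensitive avg Rs {avg_sensitive:.0f}, uplift {uplift:.0%}")
```

With fewer than ten rides per bucket, a single outlier fare (one bad traffic day) can swing this percentage by more than the 10–15% noise band mentioned above, which is why the result should be read as a hint, not a finding.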
Uber has multiple challenges in India, the top two being brutal competition with local player Ola and a cash burn that shows no signs of receding. At the same time, just like in any other market, it knows it has a “core” group of users who are so hooked on the Uber experience that they would not mind paying more to keep it.
The test is potentially a way to discover core #Uber users and see how much extra they will pay.
For me, at least a few reasons:
It’s fun being on the sidelines and watching the taxi ride industry play itself out in India. As Uber (I rarely use Ola) rolls out more tests, I will keep you in the loop.
For all product managers reading this, let’s have some more fun. Please comment on: