We’ve been watching you (and we have graphs)

Lots of teams run tests with Sauce, and we've been collecting data on them. Who doesn't like data diving? Not us. We love it. So, as with any data dive, we've started to notice patterns in the way people run their tests. Here, OkCupid-style, are all the things we know about you just from how many minutes of testing you use and when. The graphs you'll see show minutes used per day for each day of an example customer's tenure at Sauce, starting with day 0 as the first day they ran a test with us. We've broken them down by archetype. This is all actual customer data being shown to the world for the first time, but the names have been removed to protect the innocent (which is us (from lawsuits)).


The Addict

These are scatter plots that increase over time. They represent on-demand use by everyone in a company, growing as the service gets adopted by more people in the company or as more tests get added to the build. The average minutes per day seen in these ramp-ups covers a huge range, from 25 per day to 5,000. Larger companies tend to fall into this category, as their dev infrastructure slowly switches over to using us. The ones that run many tests usually have high degrees of parallelism, from 20 to 50 tests at a time. They sometimes start out never parallelizing and then one day start running all their tests in parallel, and when they do, their usage tends to increase more sharply in the immediately following days. They tend to be companies that aren't software-as-a-service, like a travel agency, a sports site, a hospital finder, or a business intelligence consulting agency.
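
For the curious, that jump to parallel runs is usually a small change on the customer's side. Here's a minimal sketch of what running tests in parallel against Sauce can look like, using the Selenium 2-era Python bindings; the USERNAME/ACCESS_KEY credentials, the worker count, and the toy page check are illustrative placeholders, not any particular customer's setup:

```python
from concurrent.futures import ThreadPoolExecutor

from selenium import webdriver

# Placeholder endpoint; real credentials go in USERNAME/ACCESS_KEY.
SAUCE_URL = "http://USERNAME:ACCESS_KEY@ondemand.saucelabs.com:80/wd/hub"

def run_test(name):
    # Each worker drives its own remote browser session on Sauce.
    driver = webdriver.Remote(
        command_executor=SAUCE_URL,
        desired_capabilities={"browserName": "firefox", "name": name},
    )
    try:
        driver.get("http://example.com/")
        assert "Example" in driver.title  # stand-in for a real test
    finally:
        driver.quit()  # always end the session so minutes stop accruing

# 20 sessions at a time -- the low end of the parallelism we see here.
with ThreadPoolExecutor(max_workers=20) as pool:
    list(pool.map(run_test, ["test-%d" % i for i in range(100)]))
```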


The Agile Shop

Yes, I said "agile," and I realize it cost me 20 hipster points. These are the folks who, after a brief warmup period, start using us in random amounts within a somewhat fixed range. They tend to be on the high-usage side, averaging 1,000 to 3,000 minutes per day, and with high parallelism, topping out between 60 and 100 tests in parallel. They increase their use slowly over time, but it's hard to tell with all the noise and the high volume. They're companies that enter the game with a lot of their own in-house Selenium tests, which they switch over to running on Sauce; usually a software-as-a-service model, like an online gaming site or an e-commerce shop. This category also includes companies that use us as a platform and sell a service that leverages ours.
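
That switch-over is often just a matter of repointing an existing test at our endpoint. A hedged before/after sketch in Python (again with placeholder credentials and the Selenium 2-era desired_capabilities API); the rest of the test code stays the same:

```python
from selenium import webdriver

# Before: the in-house test drives a local browser.
driver = webdriver.Firefox()

# After: the same test, pointed at Sauce instead.
# USERNAME/ACCESS_KEY are placeholders.
driver = webdriver.Remote(
    command_executor="http://USERNAME:ACCESS_KEY@ondemand.saucelabs.com:80/wd/hub",
    desired_capabilities={"browserName": "firefox"},
)
```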


The Daily Builder

These are characterized by flat lines of dots that indicate the same number of minutes being used each day. The number of minutes jiggles as latency affects the total number of tests, and on some days, possibly when a build fails and has to be restarted, the line jumps up. The line sometimes changes height when the company decides to run more or fewer tests in its daily build, or to change which browsers the tests run in. The total number of minutes used in a daily build is usually not very much, between 80 and 200, with varied parallelism. Some don't parallelize at all, while some get up to 25 or 30 tests in parallel. They tend to be very plugged-in companies in the tech community, like social media startups or very famous tech companies who won't let us mention them by name. In this example, the customer ran one test 140 days before they really adopted us, and I removed it from the graph as an outlier; that's why the day axis starts at 140 instead of at 0. This company is on the extreme low end of usage for a daily builder, but is the best example of a very flat line.


The Zombie

These are users who signed up and paid for a subscription but never ran any tests. They pay us anyway. Maybe they plan to adopt us soon! There are not many of these, and they don't have much in common. Their graphs are boring, so we plan to send them all a nice note saying we're canceling their subscriptions and they can re-enter their credit card when they're ready to provide more entertaining graphs.


The Abandoner

These are users who ran a small number of minutes for a little while, with no parallelism, and then left. If you are one of these, you are a scarce resource and we want to hear from you! Please tell us why you left. We want to fix it. For this graph, I've added a day of 0 minutes where today would be, to make the graph wider. All the other graphs end at today.


The Contractor

These guys run lots of tests, with huge gaps between testing cycles. Note the vertical gaps in the graph around days 30, 120, 180, and 300. Some of them are companies that are obviously consultancies, and some aren't. We think they're using us to test the webpages they're building for someone else for the duration of a contract. There aren't enough of them to generalize about the number of tests they run or how parallel they get. They're usually design agencies.


The Boomerang

These are guys who used us for a little while, left for many days, and then came back in force. They were probably people with some weird dev infrastructure that made us take real work to integrate. They might have done an exploratory sprint to see how we worked without integrating us into anything, then had to drop us for a while before they could invest the time to integrate. If you are one of these, we're sorry we were hard to adopt. Please tell us what the difficulty was so we can smooth it out!


The Test Czar

These are characterized by dots that seem to form dotted lines, not scatter plots. Unlike the daily-build companies, these have sloped lines, not flat ones. Our best guess is that they also have a daily build, but have some person or group who manually curates which tests run in it, with the job of keeping build times down. This would explain why the build seems to spike up and then hold flat for a while before linearly decreasing over a few days as they prune or tune tests. They top out at around 200 minutes per day, and they don't run many tests in parallel. There are very few of these, and their verticals aren't similar: a fashion company, a major university, and so on. These appear to be a fluke of internal management decisions.

Written by the Sauce Labs Team
