Mobile Testing Summit: Rise of the Machines
The great debates: New York vs Silicon Valley. Emacs vs Vim. Philz vs Blue Bottle. The list goes on. Within the software development world, there's an even greater debate about the proper way to do testing: manual vs automated.
Manual testing advocates argue that automated tests are too hard to write, too hard to maintain, and in general, not worth the cost. Automated testing advocates fire back that manual testing is tedious, time-consuming, bureaucratic, and error-prone.
As the creator of Selenium, you can probably guess which side I'm on. However, I'm a little nervous. Something happened that has given manual testers renewed faith that their way is the right way: mobile.
Until a few years ago, to test software applications, you only really needed to worry about the software part. It was safe to assume the software was running inside a big boring beige box, or more recently, a sexy thin laptop. It didn't matter whether the software ran in the cloud, on a desktop, or in a browser - if you tested it here, you knew it worked there, too.
But with mobile, assumptions are dangerous. Mobile devices are far more complicated, packed with sensors: GPS, multiple cameras, accelerometers, and more. Apps can be location- and context-specific (e.g. unlocking a Zipcar with your iPhone). How will software developers manage this matrix of complexity? For now, the answer is "with humans", and it's a crushing blow for Team Robot.
I've spent a long time wondering: What is it about humans that makes them better at mobile testing? It's three things, really: brains, fingers, and eyes. With a brain, a finger, and an eyeball, you can pretty much test anything, anywhere, on any device. Together, they are the universal testing API.
Without brains, fingers, and eyes, the robots are constantly playing catch-up with the humans, always at a disadvantage. That is, unless the machines can learn to adapt.
For the past year, I've been trying to understand why manual testing wins in mobile, and to apply those lessons to build better automation tools. With robotic fingers, cameras, and smart computer vision algorithms, the robots finally have a fighting chance against the humans.
Of course, that's just my opinion. There are many smart people thinking about how test automation tools can and should adapt to mobile. They've started to attack the problem and ship their solutions, but they're all working in isolation. This problem requires collaboration and communication: how is everyone adapting their tools and processes to these new challenges?
The mobile software development world needs a public forum to discuss mobile test automation. So today, we at Sauce Labs are pleased to announce the Mobile Testing Summit. On November 1, in San Francisco, the world's leading open source mobile test tool developers will come together for one day. We'll share our work, and discuss the challenges unique to mobile test automation. If you are as passionate about mobile test automation as we are, we look forward to seeing you there.
Manual testing, you're officially on notice. To paraphrase Neo: I don't know the future, I didn't come here to tell you how this is going to end. I came here to tell you how it's going to begin. On November 1, we're going to show a world where anything is possible. Where we go from there is a choice I leave to you.