Guest Post: What Automated Testing Really Provides
One of the more fun memes on the internet is 'correlation does not imply causation'. One of my favourite examples of this is 'stop global warming: become a pirate'. Application development, and society at large, are full of these [fun] problems.
So what have we incorrectly correlated in the browser automation niche? The number of browser-based scripts and a decrease in test cycle time.
First off, the decision to be 'done' testing is actually quite arbitrary. And though the inputs into that decision are often quite uniform, occasionally someone or something will throw a monkey wrench into the mix and people make a guess and take a risk. Because the stopping point is a judgment call rather than a function of how many scripts exist, there is no real correlation between the number of scripts and test cycle time.
What people often mean by that 'decrease in test cycle time' is that they are achieving a particular level of 'quality' in less time. I had one customer whose testing took 10 days, and they wanted it down to three with the 'same quality'. The problem is that as your application gets larger and more complex, those 'same quality' goal posts move wider and wider apart. And writing more and more automation isn't going to slow the growth of that gap, let alone shrink it.
This drives us to the fundamental purpose of all this automation we write and of testing in general. And that is to provide information.
Automation that achieves this goal tells the consumers of the information [the results] the status of the application or build: these things appear to work, these things appear not to. In a Continuous Delivery-esque workflow, this means manual testers know they can concentrate on the new, changed, or unautomated parts of the application and largely ignore the others. This is really where those savings come from.
This also means that you need to approach how you decide what to automate differently. Again, there is no correlation between the volume of script execution and the value of the information provided. I'll usually appreciate 20 different scripts that each broadly provide a different tidbit of information over 20 scripts that narrowly check field value permutations on a form.
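To make the contrast concrete, here is a minimal sketch in Python. The `validate_username` function and the feature names in the smoke suite are hypothetical stand-ins, not anything from a real test suite; the point is only the shape of the two approaches, narrow permutation checks of one field versus a handful of checks that each touch a different part of the application.

```python
# Hypothetical stand-in for one form field's validation logic.
def validate_username(name: str) -> bool:
    return name.isalnum() and 3 <= len(name) <= 12

# Narrow approach: many executions, all probing permutations of the
# same field -- lots of script volume, one tidbit of information.
narrow_results = [
    validate_username(name)
    for name in ["ab", "abc", "a" * 12, "a" * 13, "user!", "user1"]
]

# Broad approach: fewer checks, but each one reports on a different
# area of the application, so each result is a distinct piece of
# information for the consumers of the results.
def smoke_suite() -> dict[str, bool]:
    return {
        "signup_accepts_valid_username": validate_username("newuser"),
        "search_reachable": True,    # placeholder: would drive the search page
        "checkout_reachable": True,  # placeholder: would drive the checkout flow
    }
```

If the narrow suite goes red, you learn one thing about one field; if any check in the broad suite goes red, a manual tester knows which whole area of the build deserves their attention.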
So how do we rephrase the original statement to remove the causation/correlation problem and increase its truthiness? It's a little wordy, but how about this?
The number of browser-based scripts can decrease the time manual testers waste exploring bad builds or re-checking things, by providing a base set of information, and thus decrease test cycle time.