Functional testing is a critical step to ensure high-quality systems are delivered to customers. Functional testing’s focus on business value versus technical correctness ensures the system is solving fundamental customer problems. Unfortunately, it’s often viewed as overly time consuming and painful. By its very nature functional testing is a more complex task than most developer-level testing such as unit, integration, or system/API testing.
Automating functional testing can bring extraordinary value to a project. Moving to parallel execution takes that value a step further by enabling teams to continually test the most critical, high-value aspects of their software.
Functional testing ensures the system solves customers’ needs as laid out in various conversations and documents. Functional testing, also called acceptance testing, stands out from other types of testing in that it’s focused more on the business problems and less on the technical aspects.
At its root, all testing is about effective communication. Functional testing is especially so, since it checks the system’s form and fit against customer and user needs. Moreover, functional testing’s basis lies more in the business domain than the technical domain, which makes effective communication even more critical.
Early clarity of expectations is critical, which means functional testing needs to start as early as possible—effectively at product/system ideation if at all possible. Getting an early jump on functional testing ensures clear expectations and also enables adjusting system development to best fit the customers’ and users’ true needs.
As mentioned previously, good functional testing starts nearly at the inception phase of the project.
First and foremost, the team will need clear acceptance criteria for system features. For the majority of scenarios, these need to be expressed as user tasks as opposed to deep technical statements. Think less of “Order summary pages shall return in less than two seconds under expected system load” and more of “Orders with out of stock items will create a summary notification message in the inventory management message queue. That summary message will include the item’s stock number and link to customer orders which weren’t able to be filled.”
Functional testing will require an environment that’s as close to production as possible: integration with full data, external systems, etc. Testing should obviously be done along the way as the system’s being developed, but the final functional testing will need access to all applicable dependencies. “As close to production as possible” doesn’t mean full hardware, though. Keep in mind that functional testing (generally!) isn’t the same as performance testing. Ergo, teams should be more concerned about integration versus hardware.
Good data for your functional testing is critical, and is yet another reason to start your functional testing early in your project. Your system may be relying on upstream systems to create or transform data for you. Determining data in complex environments is a time-consuming, highly iterative process. You don’t want to start that a few days before you’re planning to hit your functional testing hard!
Most teams will look to automate at least the highest-value, highest-risk functional test scenarios. This means you’ll need to have your team ready to work with things like Selenium, API frameworks, deployment toolsets, data management tools, etc. Do you have the right infrastructure in place, not just to run your system, but to run tests against that system? If you’re running parallel tests you’ll need some form of grid or swarm to run your test agents in. You’ll need all the associated management infrastructure to handle execution of those tests as well, not to mention reporting requirements.
Unfortunately, test automation has been a badly misunderstood domain. Too often it’s seen as a panacea for cutting the time needed for regression and release testing. Worse, too many organizations don’t understand that automated functional tests are a software engineering effort requiring many of the same skills and disciplines as used to develop the system software. Successful automation means planning for it from the start, and understanding how to treat your automated tests.
Functional automation should never be approached from the mindset of “We’ll automate all our manual tests!” That’s a poor use of time and returns very little practical value.
Instead, good functional automation focuses on high-risk, high-value business use cases. Somewhat obviously, automation should only focus on scenarios it makes sense to repeat. Checking successful integration of a third-party control on a web page generally doesn’t make sense to automate—a quick bit of exploratory testing can confirm the control is properly wired up, receiving and displaying data properly, etc. Automating that test makes little sense as the third-party control isn’t under the team’s development, nor is anything about its binding on the page changing.
It’s important to give test code the same care and attention as your production code—because it is production code.
Creating well-crafted, maintainable automated test scripts is critical to any project’s long-term success. This means you’ll need at least a few people on your teams who understand software craftsmanship principles and can help guide and mentor others writing automation. Your team will need to live and breathe concepts like Page Object Pattern, abstraction, Don’t Repeat Yourself (DRY), and other craftsmanship principles.
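For example, a minimal Page Object sketch in Java with Selenium WebDriver might look like the following; the LoginPage class, its locators, and the login workflow are illustrative assumptions rather than code from a real system.

```java
// A minimal Page Object sketch using Selenium WebDriver.
// The page name and element locators are illustrative assumptions.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {
    private final WebDriver driver;
    private final By usernameField = By.id("username");
    private final By passwordField = By.id("password");
    private final By loginButton = By.id("login");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Encapsulates the login workflow so tests never touch locators directly (DRY).
    public void loginAs(String username, String password) {
        driver.findElement(usernameField).sendKeys(username);
        driver.findElement(passwordField).sendKeys(password);
        driver.findElement(loginButton).click();
    }
}
```

When a locator or workflow changes, only the page object needs updating; every test that uses it stays untouched.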
Once you’ve cleared up which tests to automate, take time to clearly lay out how you’ll write those tests. Good test construction will help keep those tests accurate, maintainable, and high-value through the life of the project.
Good tests exercise one function; they don’t conflate multiple features or workflows. That one specific function/workflow may be a complex test with a number of checks against multiple items; however, you’re still evaluating the outcome of one specific action.
For example, let’s look at a payroll system that computes an employee’s weekly pay based on their hourly rate and number of hours worked. (For simplicity’s sake we’ll leave off complexities of taxes, benefits, withholdings, etc.) Overtime needs to be taken into account, as do business rules for hours and rate. Invalid inputs need to be checked as well. A table of test values and expected outcomes might look like this:
Standard Time

| Hours | Rate | Expected |
|-------|------|----------|
| 0     | 10   | 0        |
| 1     | 10   | 10       |
| 40    | 10   | 400      |

Overtime

| Hours | Rate | Expected |
|-------|------|----------|
| 41    | 10   | 415      |
| 80    | 10   | 1000     |

Business Rule Limits

| Hours | Rate | Expected |
|-------|------|----------|
| 81    | 10   | ERROR. Can’t work more than 80 hours per week. Entry is flagged for review. Mail sent to supervisor. |
| 1     | 501  | ERROR. Max hourly rate is 500. Entry is flagged for review. Mail sent to supervisor. |

Invalid Inputs

| Hours | Rate | Expected |
|-------|------|----------|
| -1    | 10   | ERROR. User is prompted at UI to enter valid value. Don’t allow submission. |
| 1     | -1   | ERROR. User is prompted at UI to enter valid value. Don’t allow submission. |
One could easily write a single test to check all these inputs; however, that test would end up being too complex and make it harder to understand what’s going on when something fails.
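Instead, each group of inputs can drive its own small, focused test. Here’s a sketch using JUnit 5 parameterized tests; the PayrollCalculator class is a hypothetical stand-in for the real system, implemented inline only so the example is self-contained.

```java
// A sketch of keeping each test focused on one behavior using JUnit 5
// parameterized tests. PayrollCalculator is a stand-in for the system under test.
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class WeeklyPayTests {

    // Hypothetical stand-in for the real system: straight time up to 40 hours,
    // time-and-a-half beyond that.
    static class PayrollCalculator {
        static int computeWeeklyPay(int hours, int rate) {
            int regularHours = Math.min(hours, 40);
            int overtimeHours = Math.max(hours - 40, 0);
            return regularHours * rate + (overtimeHours * rate * 3) / 2;
        }
    }

    // Standard time only: one behavior, several data points.
    @ParameterizedTest
    @CsvSource({"0, 10, 0", "1, 10, 10", "40, 10, 400"})
    void computesStandardTimePay(int hours, int rate, int expected) {
        assertEquals(expected, PayrollCalculator.computeWeeklyPay(hours, rate));
    }

    // Overtime lives in its own test so a failure points directly at the overtime rule.
    @ParameterizedTest
    @CsvSource({"41, 10, 415", "80, 10, 1000"})
    void computesOvertimePay(int hours, int rate, int expected) {
        assertEquals(expected, PayrollCalculator.computeWeeklyPay(hours, rate));
    }
}
```

The business-rule and invalid-input scenarios would get their own tests in the same style, each named for the single behavior it checks.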
It’s easy to see the attraction of sharing state (data, environment, etc.) between tests: Set things up once, then easily run a bunch of tests using that same data. Unfortunately, history has shown this is actually a horrible practice.
Sharing state between tests regularly injects subtle bugs due to timing, especially when tests are run in parallel. One test may be editing values in a data set that another executing test relies on—and the second test fails for unexpected, unclear reasons. More obvious issues regularly pop up when one test deletes another test’s prerequisites…
Note that baseline data sets are a separate issue from this. Creating something like a parts catalog to use as background data makes perfect sense. A test that’s adding a new part to the catalog should randomly create data for that new part versus re-using a template.
It’s far better to set up a customized framework that handles data setup and provisioning. That way it’s simple to create a test that quickly and clearly builds complex environments specific to that test. For example, a test that checks placing an order for car parts could use setup steps looking something like the following pseudo-code.
```
// Test setup using a customized framework/API
CustomerId = Framework.CreateRandomTestCustomer();
FirstPartId = Framework.CreateRandomTestPart();
SecondPartId = Framework.CreateRandomTestPart();
```
This style of setup gets your prerequisites created in the system and leaves you data you can use in the UI, such as logging on as the test customer and adding parts to an order by their IDs.
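What such a framework helper actually does depends entirely on your system. The sketch below assumes the system exposes simple REST endpoints for creating test data; the base URL, paths, and payloads are purely illustrative.

```java
// A hedged sketch of data-setup helpers that create unique, per-test data
// through the system's own APIs. The endpoints and payloads are assumptions.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.UUID;

public final class Framework {
    private static final HttpClient http = HttpClient.newHttpClient();
    // Assumed base URL; point this at whatever environment hosts the system under test.
    private static final String BASE_URL = "http://localhost:8080/api";

    private Framework() {}

    // Each test gets its own randomly named customer, so no test depends on shared data.
    public static String createRandomTestCustomer() throws Exception {
        String customerId = "test-customer-" + UUID.randomUUID();
        post("/customers", "{\"id\":\"" + customerId + "\"}");
        return customerId;
    }

    public static String createRandomTestPart() throws Exception {
        String partId = "test-part-" + UUID.randomUUID();
        post("/parts", "{\"id\":\"" + partId + "\"}");
        return partId;
    }

    private static void post(String path, String jsonBody) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(BASE_URL + path))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(jsonBody))
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() >= 300) {
            throw new IllegalStateException("Setup call failed: " + response.statusCode());
        }
    }
}
```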
Teams starting out with parallel execution have a number of important things to consider:
- Toolsets and Platforms: What functional test tools will you be using? Selenium WebDriver? A commercial tool? A combination of all the above? What languages will you need support for?
- Test Infrastructure: You’ll need to run agents which execute your tests (see the sketch after this list). Where will you host them? How many do you need? Will you want to dynamically change the number of agents between test suite runs? Can you use cloud-based services outside your organizational firewalls?
- Cross-Browser Testing: What browsers do you need to run your tests against? On which platforms (Windows, macOS, Linux, etc.)?
- Mobile and Other Devices: Do you need to test on non-desktop/laptop devices? Which ones? Which OS versions on those devices? Where will those devices be stored or hosted?
- Test Runners: How will you execute your test runs? What will manage scheduling or trigger runs? Are you using some form of continuous integration environment such as Jenkins, Travis, or Team Foundation Server?
- Reporting: What level of granularity and detail do you need for each test pass? What do you need for trend reporting?
- Data Archiving: How long will you need to maintain data for each pass? Will you want long-term storage of data for all of your runs? Just certain ones such as release or certification?
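On the infrastructure side, here’s a minimal sketch of pointing tests at a Selenium Grid hub so each parallel test gets its own remote browser session; the hub URL is an assumption for your environment.

```java
// A minimal sketch of starting a remote browser session against a Selenium Grid hub.
// The hub address is an assumption; substitute your own grid or cloud provider URL.
import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridSession {
    public static WebDriver startChromeSession() throws Exception {
        ChromeOptions options = new ChromeOptions();
        // Each parallel test asks the grid for its own browser instance,
        // keeping sessions isolated across agents.
        return new RemoteWebDriver(new URL("http://grid-hub.internal:4444/wd/hub"), options);
    }
}
```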
As with any complex technical challenge, a team’s best odds of success with parallel functional testing come from getting as clear a picture of their needs as possible, then starting with small experiments in order to learn how to be successful. Teams need to remain focused on writing good, solid automated functional tests that cover the clients’ business needs. Those teams need to pay the same craftsman-like attention to their test code as they do their system code. As the project grows, teams need to continually learn from and improve their overall approach to automated functional testing, especially in a parallel environment.
Parallel functional testing can bring tremendous value to teams trying to deliver great software. It’s absolutely worth the investment of learning to do it well!