Posted June 9, 2014

Guest Post: A Dialectical Theory of Software Quality, or Why You Need Independent QA


Product quality, and in particular software quality, can be an elusive characteristic of a product. It may not be easy to define, but in a sense it is the opposite of the famous definition of pornography: you may not recognize it when it's there, but you know it when it's not. I propose that anything in a software product, or for that matter any other product, that induces unnecessary aggravation in the user detracts from the quality of that product.

For those unfamiliar with the term "dialectical" or its noun form, "dialectics": very roughly, it describes an approach to looking at things that sees them as dualities. The concept of "night," for example, is more meaningful when coupled with the concept of "day," and "good" has more meaning when paired with "evil." Creative and constructive processes can be thought of as dialectical: there is a tension between opposing imperatives, and the result of the process can be seen as the resolution of that tension.

As applied to the discipline of software engineering, one dialectic is that between the imperatives of developers and architects and those of users. In the development process, the imperatives of independent QA engineers are those of users, and are theoretically opposite to those of developers. Developers are absorbed in the technical intricacies of getting from point A to point B. They work to some set of explicit or implicit functionality items that make up the product requirements, and their concern is how to implement those requirements as easily as possible. They work from the inside out, and are intimate with the details of how the functionality is implemented. Independent QA, on the other hand, works from the same set of defined or implicit requirements but, in theory, does not care about the details of the implementation. QA engineers are intimately concerned with every aspect of how the product is used. By exercising the product, they find the points of aggravation to which the developers may be completely oblivious. To the extent that their findings are heeded, quality (defined as, among other things, the absence of aggravation) is enhanced.

In a sense, any piece of software that is run by someone other than the person who wrote it is being tested. The question is not whether the software will be tested, but by whom, how thoroughly, and under what circumstances. Any shortcuts, data formats, dependencies, or other elements that a developer relied on to get the code running, but that are not present outside the development environment, can cause problems when someone else runs that code.

There are many types of software testing. One fundamental division is between so-called white-box testing and black-box testing. White-box testing is carried out with knowledge of the internals of the software; black-box testing exercises the software's functionality without regard to how it is implemented. Complete testing should include both. The emphasis in the text that follows is on black-box testing and the user experience, where the dialectical view of QA has the most relevance.
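To make the distinction concrete, here is a minimal sketch in Python. The function, class, and test names are invented for illustration: the white-box tests import an internal helper and probe its edge cases directly, while the black-box test drives the same behavior only through the interface a user of the component would see.

```python
import unittest

# Hypothetical internal helper -- a white-box tester knows it exists
# and targets its edge cases directly.
def parse_discount_code(code):
    """Return the percentage encoded in a code like 'SAVE15', else 0."""
    return int(code[4:]) if code.startswith("SAVE") and code[4:].isdigit() else 0

# Hypothetical user-facing component -- a black-box tester sees only this.
class Cart:
    def __init__(self, total):
        self.total = total

    def apply_discount(self, code):
        self.total -= self.total * parse_discount_code(code) // 100

class WhiteBoxTests(unittest.TestCase):
    """Written with knowledge of the implementation."""
    def test_valid_code(self):
        self.assertEqual(parse_discount_code("SAVE15"), 15)

    def test_malformed_code(self):
        self.assertEqual(parse_discount_code("FIFTEEN"), 0)

class BlackBoxTests(unittest.TestCase):
    """Written purely against observable behavior."""
    def test_discount_changes_what_the_user_sees(self):
        cart = Cart(total=200)
        cart.apply_discount("SAVE15")
        self.assertEqual(cart.total, 170)

if __name__ == "__main__":
    unittest.main()
```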

Bugs and other manifestations of poor quality cost money. A classical analysis holds that the cost of fixing a bug increases geometrically the later in the development cycle it is found. Having your customer base be your principal test bed can prove expensive. Another possible source of expense is support for workarounds for bugs that are never fixed. I can give a personal example. Some time ago I purchased an inexpensive hardware peripheral that came with a configuration software package. The package had a bug that, on the surface, is very minor, but it kept me from configuring the product correctly, and it took two calls to the vendor's support team to resolve the problem. Given the low price of the peripheral, one may wonder whether the profit from the sale was wiped out. If many people call with the same question, how does that affect earnings? How much does a product that is difficult to use, buggy, or otherwise of poor quality increase the cost of selling it? Repeat sales cost less to generate than new sales, and to the extent that poor quality hurts repeat sales, it drives the cost of sales up.

The scope of independent QA need not be limited to bug hunting. Test-driven development can be done at both the highest level and the unit level. QA can make an important contribution in the earliest phases of product specification by writing scenario documents in response to a simple features list, before any detailed design is done. For example, in response to a single feature item such as "login", a creative QA engineer may specify tests such as "attempt login with an invalid user name, attempt login with an incorrect password, begin login and then cancel, attempt login while a login is already in progress, attempt login multiple times with invalid passwords", and so on. Another engineer, seeing this list, may well think of further tests to try. The developer writing the login functionality can see from the list which cases they need to account for early in their coding. When something is available to test, the QA engineer executes the tests specified in the scenario document. Scenarios that turn out to be irrelevant because of the way the login functionality is implemented can be dropped; other tests and scenarios that the tester thinks of or encounters during testing can be added. Ambiguities encountered in this testing can be brought to the attention of development for resolution early on.
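Once such a scenario list exists, parts of it can later be captured as automated checks. The following is a minimal sketch in Python using pytest and Selenium; the URL, element IDs, and expected messages are invented for illustration, and the real values would come from the application under test.

```python
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

LOGIN_URL = "https://app.example.test/login"  # hypothetical application URL

# Rows drawn from the scenario document: unknown user, wrong password,
# missing credentials. Each row is (username, password, expected message).
LOGIN_SCENARIOS = [
    ("no_such_user", "anything", "Unknown user name"),
    ("valid_user", "wrong_password", "Incorrect password"),
    ("", "", "User name is required"),
]

@pytest.fixture
def driver():
    d = webdriver.Chrome()   # assumes a local chromedriver is available
    yield d
    d.quit()

@pytest.mark.parametrize("username,password,expected", LOGIN_SCENARIOS)
def test_login_is_rejected_with_a_clear_message(driver, username, password, expected):
    driver.get(LOGIN_URL)
    driver.find_element(By.ID, "username").send_keys(username)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "login").click()
    # The user should see an understandable explanation, not a stack trace.
    assert expected in driver.page_source
```

Scenarios that do not automate sensibly, such as cancelling a login midway, simply stay in the document as steps for hands-on exploratory testing.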

As more and more software is Web-based, runs in Web browsers, and is available to non-technical users, usability issues become more important. How often have you visited a Web site and been unable to do what you wanted, or managed it only with great difficulty? There are all too many feature-rich Web sites based on a usage model known only to the designer. The simplest of actions, such as logging out, may become difficult simply because the link for it sits in some obscure spot in a tiny font. A vigilant QA engineer given the task of testing such a page may well notice this inconvenience and report it. A common user scenario such as placing an order and then cancelling it may leave the user unsure whether the order has actually been cancelled. The developer may not have thought of this scenario at all, or, if they did, thought only in terms of a transaction that either went to completion or was rolled back. A consideration that is trivial to the developer, however, may cause grave consternation to the end user. A transaction that did not complete for some catastrophic reason, such as a connection being dropped unexpectedly, could well leave the end user wondering about the state of their order. The independent QA engineer may identify the need for a customer to be able to log back into the site and view their pending orders.

Current trends in software development such as Agile, as well as the move to continuous integration and deployment, do not negate the need for an independent QA function. Indeed, continually changing an application's UI, functionality, or operating assumptions may prove unnerving to users. Assumptions of convenience, such as the idea that the user community will understand a new UI design because they are already familiar with some arbitrary user model supporting it, can easily creep in under an environment of constant change carried out by people who do not question those assumptions. Independent QA is still needed to define and execute the user scenarios made possible by product changes, as well as old scenarios whose execution steps may change with the UI. Automated unit testing, programmatic API testing, and automated UI tests created by development-oriented engineers cannot simulate the dilemmas of a user who is new to the product or confused by arbitrary UI changes. A highly visible example is the failure of Windows 8 to gain widespread acceptance and the huge market for third-party software to bring back the Start menu familiar to experienced Windows users. Nor was the smartphone-style UI, based on a platform with more inherent limitations than the existing Windows desktop, a big hit with them.

The work of independent QA engineers can, among other things, serve as an "entry point" for tests that are later added to an automated suite. A sequence of steps, first executed by a human doing ad-hoc or exploratory testing, that causes an operation to fail inelegantly can become a test program or script added to the suite that runs in every continuous integration cycle.
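As a sketch of that hand-off, suppose an exploratory tester finds that cancelling the same order twice produces an unhelpful server error. The service URL, endpoints, and status codes below are invented for illustration; the point is only that the exact steps the human stumbled on become a repeatable script in the regression suite.

```python
import pytest
import requests

BASE_URL = "https://api.example.test"  # hypothetical service under test

@pytest.mark.regression  # custom marker so the CI job can select these tests
def test_cancelling_an_order_twice_fails_cleanly():
    # Step 1: place an order, just as the exploratory tester did by hand.
    order = requests.post(f"{BASE_URL}/orders", json={"item": "widget"}).json()

    # Step 2: cancel it once; this should succeed.
    first = requests.post(f"{BASE_URL}/orders/{order['id']}/cancel")
    assert first.status_code == 200

    # Step 3: cancel it again. The exploratory finding was an unhelpful
    # server error here; the product should instead answer with a clear
    # conflict response the user can understand.
    second = requests.post(f"{BASE_URL}/orders/{order['id']}/cancel")
    assert second.status_code == 409
```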

None of these considerations invalidates the value of testing based on knowledge of the internals of a product. Unit testing, white-box testing, and anything else one can think of to exercise the application may uncover bugs or usage issues. White-box testing may quickly uncover change-introduced bugs that black-box testing would find only with a great deal of time and effort, or not at all. In this context, automated tests kicked off as part of a continuous integration cycle are an extension of an existing white-box regression suite, not a replacement for hands-on, exploratory, black-box QA. You might say that white-box testing is the dialectical negation of black-box QA: it verifies that the individual pieces work, where independent, black-box QA verifies that the product works for the user. The two approaches complement each other, and both are necessary for a more complete assessment of product quality.

By Paul Karsh for Sauce Labs
