Functional testing is based on what the application under test is supposed to accomplish; it is driven by the requirements and specifications that define it. Testing the Amazon shopping application to make sure I can search for and purchase a product is functional testing. Its counterpart, non-functional testing, is instead concerned with how well the application works and how well it performs. When thinking of functional and non-functional testing in this way, at least for me, it’s easier to understand where the different types of testing techniques fall. Unit testing, for example, is a prime example of functional testing in that it literally tests “functions” of code: chunks of reusable code that perform one action. More broadly, functional testing includes a range of testing techniques where clearly defined requirements are used to verify that the application works as designed. Functional tests occur at the atomic level (unit tests) and throughout the development lifecycle.
Unit testing - This refers to the testing of an individual component of a particular application. These can be automated, ensuring that core functionality is not compromised as the codebase evolves.
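As a minimal sketch of what an automated unit test looks like, here is a hypothetical `apply_discount` function (the names and pricing rules are invented for illustration) exercised with Python’s built-in `unittest` framework:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical unit under test: reduce price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Each test pins down one behavior of one function, so when the codebase evolves, a failing test points directly at the broken unit.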
Acceptance testing - Following the development of a particular feature, acceptance testing should be performed as a sign-off indicating that the newly developed feature satisfies the provided requirements. When working in an agile environment, this type of testing can be performed at the conclusion of a sprint where a feature is completed. A common practice is to include this as a task within each user story to ensure that it isn’t overlooked. Acceptance testing helps to eradicate any misunderstandings surrounding feature functionality by providing a checkpoint at which functionality is verified by those writing the requirements.
Integration testing - Integration testing refers to testing portions of an application where components must work together to deliver a piece of functionality. This type of testing can be automated, ensuring that components within the codebase continue to work together without issue.
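A small sketch of the idea, using two invented components (an `InventoryService` and a `Cart` that depends on it); the integration test verifies that the pair behaves correctly together, not each class in isolation:

```python
import unittest

class InventoryService:
    """Hypothetical component tracking stock levels."""
    def __init__(self, stock):
        self._stock = dict(stock)

    def reserve(self, sku, qty):
        if self._stock.get(sku, 0) < qty:
            raise ValueError(f"insufficient stock for {sku}")
        self._stock[sku] -= qty

class Cart:
    """Hypothetical component that depends on InventoryService."""
    def __init__(self, inventory):
        self._inventory = inventory
        self.items = []

    def add(self, sku, qty):
        self._inventory.reserve(sku, qty)  # the two components interact here
        self.items.append((sku, qty))

class CartInventoryIntegrationTest(unittest.TestCase):
    def test_adding_an_item_reserves_stock(self):
        inventory = InventoryService({"book-123": 2})
        cart = Cart(inventory)
        cart.add("book-123", 2)
        self.assertEqual(cart.items, [("book-123", 2)])
        # A third unit cannot be reserved: both components agree on stock.
        with self.assertRaises(ValueError):
            cart.add("book-123", 1)

if __name__ == "__main__":
    unittest.main(exit=False)
```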
Regression testing - Executing existing test suites against an application after the codebase has changed helps identify bugs and defects inadvertently introduced to the system. This is an excellent example of where automation should be used. Regression testing provides a quick return on investment and is critical in a CI/CD environment, ensuring code stability.
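One common way to automate this is to pin known-good outputs in the test suite itself; the sketch below assumes a hypothetical `shipping_cost` function and a hand-captured table of expected values, so any later change that silently alters the results fails the suite:

```python
import unittest

def shipping_cost(weight_kg):
    """Hypothetical function whose existing behavior we want to protect."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1:
        return 4.99
    return 4.99 + (weight_kg - 1) * 1.50

class ShippingRegressionTest(unittest.TestCase):
    # Known-good outputs captured from the current release; if a code
    # change alters any of them, the regression suite fails immediately.
    GOLDEN = {0.5: 4.99, 1.0: 4.99, 2.0: 6.49, 10.0: 18.49}

    def test_pinned_outputs_are_unchanged(self):
        for weight, expected in self.GOLDEN.items():
            with self.subTest(weight=weight):
                self.assertAlmostEqual(shipping_cost(weight), expected)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Running a suite like this on every commit is exactly the kind of cheap, repeatable check a CI/CD pipeline is built for.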
Non-functional testing is focused on how well (or how badly) an application works: its behavior. Performance, localization (region-specific behavior, not just translated language), user experience (UX), security, scalability, and compatibility all fall into the category of non-functional testing. For example, you may want to test an application’s behavior under regional conditions, including language and culture. To accomplish this successfully, you’ll need to test all the language translations, any changed images, documentation and support, the EULA and associated regulatory legalese, and so on. How well the application performs, how consistent its behavior is, and how good the user experience is are all distilled into non-functional testing.
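One localization check that automates well is verifying that every locale has a translation for every string in the reference catalog. The sketch below uses invented, hard-coded catalogs; a real suite would load them from the application’s resource files:

```python
# Hypothetical translation catalogs keyed by locale; real catalogs would
# be loaded from the application's resource files.
CATALOGS = {
    "en": {"checkout": "Checkout", "cart": "Cart", "search": "Search"},
    "de": {"checkout": "Zur Kasse", "cart": "Warenkorb", "search": "Suche"},
    "fr": {"checkout": "Paiement", "cart": "Panier", "search": "Rechercher"},
}

def missing_translations(catalogs, reference="en"):
    """Return {locale: missing_keys} for every locale lacking a reference key."""
    expected = set(catalogs[reference])
    gaps = {}
    for locale, entries in catalogs.items():
        missing = expected - set(entries)
        if missing:
            gaps[locale] = sorted(missing)
    return gaps

# The localization test: fail if any locale is missing any string.
assert missing_translations(CATALOGS) == {}, "untranslated strings found"
```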
Performance testing - This is a rather large bucket that includes all things performance-related. Load, stress, longevity, torture, capacity, and scalability testing all fall into this category. Automating these tests is the only way to go, as they are impractical and error-prone when run manually.
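As a minimal sketch of an automated load test, the snippet below fires concurrent calls at a stand-in `handle_request` function (here just a 5 ms sleep; in practice this would be a real request) and gates on a latency percentile:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Stand-in for the operation under load; replace with a real call."""
    time.sleep(0.005)  # simulate 5 ms of work

def load_test(n_requests=200, concurrency=20):
    """Fire n_requests with the given concurrency and collect latencies."""
    def timed_call(_):
        start = time.perf_counter()
        handle_request()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed_call, range(n_requests)))

    return {
        "median_s": statistics.median(latencies),
        "p95_s": latencies[int(0.95 * len(latencies)) - 1],
    }

results = load_test()
# A performance gate: fail the build if the 95th percentile regresses.
assert results["p95_s"] < 0.5, f"p95 latency too high: {results['p95_s']:.3f}s"
```

The 0.5-second threshold is an arbitrary example; real suites pin gates to agreed service-level targets.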
Security testing - Security testing is of paramount importance, especially with government entities, financial institutions, and the healthcare industry. By and large, security has been and will continue to be of vital importance in the digital age, and therefore thorough testing is warranted. Penetration, failover, compliance, vulnerability, disaster recovery, root cause analysis (RCA), detection, and remediation all fall into this category. Depending on the industry and clientele, security testing can be required, along with transparent reporting of the results.
Compatibility testing - This is important to understand, as it’s not the same thing as interoperability testing (how two distinct applications interact with each other). Compatibility testing deals with how an application behaves from one environment to another. A prime example is cross-browser and cross-platform testing, where you verify that your application works and behaves the same from one browser to another, and likewise across operating systems. Automating these tests can quickly reveal where an application degrades or fails when its environment changes.
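The core pattern is running the same check across a matrix of environments and asserting identical behavior. The sketch below fakes this with a hypothetical `render_price` function branching per platform; a real compatibility suite would dispatch the same test to actual browsers or operating systems:

```python
import unittest

def render_price(amount, platform):
    """Hypothetical routine with platform-specific code paths inside."""
    # Imagine different branches per platform; the contract is that
    # every environment produces the same rendering.
    if platform in ("windows", "macos", "linux"):
        return f"${amount:.2f}"
    raise ValueError(f"unsupported platform: {platform}")

class CrossPlatformCompatibilityTest(unittest.TestCase):
    PLATFORMS = ["windows", "macos", "linux"]

    def test_output_identical_on_every_platform(self):
        outputs = {p: render_price(19.5, p) for p in self.PLATFORMS}
        # Every environment must agree on a single reference rendering.
        self.assertEqual(set(outputs.values()), {"$19.50"})

if __name__ == "__main__":
    unittest.main(exit=False)
```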
User experience (UX) testing - This is becoming more and more popular in today’s application development lifecycle. To beat the competition, developers must design applications that provide the best possible user experience, and this means analyzing how users interact with the application and how efficient the workflow is. Is the layout intuitive, or do users find it frustrating and difficult to navigate? Are users happy with their experience, or do they find the application prohibitive in any way? These questions and more are answered with UX testing and help to deliver digital confidence.
Given these various test techniques, it’s obvious that test automation is the key to delivering on them. The sheer volume, repetition, and risk involved make manual testing alone impractical. Add to that the devices, browsers, operating systems, VMs, regional presence, etc. that would be required to satisfy things like performance, compatibility, and UX testing. Luckily, there are several vendors out there that provide exactly this kind of testing infrastructure.