

Posted May 23, 2019

5 Tips to Automatically Test Every Time a Build is Submitted by a Developer


Back in the day, before the advent of automated testing, manual testers needed to painstakingly review code at every step of the process. As you can imagine, this slowed everything way down, and it got in the way of quickly and efficiently producing great, high-quality software. Today, DevOps teams are all about continuous testing and delivery: dev and QA teams work in tandem to find and fix issues, gather any necessary feedback from stakeholders, and push the code live to the customer.

This type of approach to software development and testing will save you a huge amount of time, energy, and resources. But it does come with some up-front time investment and learning. To help you get started, here are the top five tips to help you implement continuous automated testing every time a build is submitted by a developer.

1. Ensure the quality of your code

First and foremost, you need to ensure that your source code is free from stylistic errors before committing a new build. Developers should verify code quality and avoid the common (and uncommon) mistakes made while coding an application. There are several static analysis tools available for most common languages, including C#, Java, Python, JavaScript, CSS, and HTML, and they'll help your team of testers eradicate errors and improve the overall quality of each build commit. This code-checking process helps accelerate development and reduces the overall project cost by finding errors early on.

You can also use tools that detect unnecessary tests that are still being actively developed and maintained, since these waste valuable engineering resources. Such tools minimize the amount of test development by identifying code areas that are executed in production but not covered by regression tests.
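As a rough illustration of the idea, production coverage data can be compared against regression-test coverage as a simple set difference. The function names and code paths below are hypothetical; real tools derive this data from instrumentation:

```python
def find_coverage_gaps(production_paths, regression_paths):
    """Return code paths exercised in production but never by regression tests."""
    return sorted(set(production_paths) - set(regression_paths))

def find_stale_tests(regression_paths, production_paths):
    """Return code paths only regression tests touch -- candidates for pruning."""
    return sorted(set(regression_paths) - set(production_paths))

prod = {"checkout.pay", "checkout.cart", "search.query"}
tests = {"checkout.cart", "search.query", "legacy.export"}

print(find_coverage_gaps(prod, tests))  # paths with no regression coverage
print(find_stale_tests(tests, prod))    # tests covering code unused in production
```

The second function is the one that frees up engineering resources: tests exercising code that production never touches are the first candidates for retirement.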

To help guarantee the quality of your code, your developers can take these steps:

  1. First of all, write and compile the code

  2. Analyze it for syntax and lexical errors with the proper tools, whether open source or commercial (such as Pylint, JSLint, Sonar, and SeaLights)

  3. Review and fix the bugs identified by the tool

  4. Integrate modules after fixing the bugs

  5. Re-analyze the fixed module code with the tool
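The analysis step above can be as simple as a pre-commit syntax gate. This minimal sketch uses Python's built-in compile() in place of a full linter like Pylint, just to show where the check sits in the flow:

```python
def passes_syntax_check(source: str, filename: str = "<commit>") -> bool:
    """Compile the source without executing it; report the first syntax error."""
    try:
        compile(source, filename, "exec")
        return True
    except SyntaxError as err:
        print(f"{filename}:{err.lineno}: {err.msg}")
        return False

print(passes_syntax_check("total = sum([1, 2, 3])"))  # True
print(passes_syntax_check("def broken(:\n    pass"))  # False
```

A real gate would run the linter over every changed file and block the commit on any failure; most teams wire this up as a pre-commit hook or an early CI job.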

2. Test on the developer’s end

Unit testing is a key step for developers before committing code, to ensure that the written logic fulfills all business requirements. Developers should write unit test cases that cover every aspect of the feature. The purpose of writing test cases for all functions and methods is to identify and fix errors quickly. Test cases should be independent, so that enhancements or changes in the requirements don't break unrelated unit tests. After the unit and integration test cases pass, the developer submits the feature branch.

Writing tests at the development level has the added benefit of uncovering major code design issues. If a developer finds that application code is difficult to test, that's a clue to revisit the logic against the requirements. The process also makes everyday development tasks easily repeatable, which reduces overall build costs and reveals defects earlier in the cycle.

There are many unit test frameworks available to help your development team write the test cases, such as Pytest and TestNG.
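For instance, a Pytest-style unit test suite for a hypothetical discount function might look like the sketch below. Each case is independent, so a change to one requirement only touches one test:

```python
def apply_discount(price: float, percent: float) -> float:
    """Return the price after a percentage discount (hypothetical business logic)."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Pytest discovers functions named test_* and runs each one in isolation,
# so every business requirement gets its own independent test case.
def test_applies_discount():
    assert apply_discount(200.0, 25) == 150.0

def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(99.99, 0) == 99.99

def test_rejects_invalid_percentage():
    try:
        apply_discount(100.0, 150)
        assert False, "expected ValueError for an out-of-range discount"
    except ValueError:
        pass
```

Running `pytest` in the project directory collects and runs these automatically; in real Pytest code the last case would usually use `pytest.raises`, but try/except keeps this sketch dependency-free.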

3. Automatically build, deploy, and test

After the development team's code is committed, the next step is test orchestration. A continuous integration (CI) server should be integrated so that each commit triggers a job that runs the API tests, if any are required. Once that succeeds, the next suite of smoke/sanity UI tests should be executed against the committed branch. Infrastructure as Code should be used here, so that the test machines can be provisioned to match the test requirements. The tests should also run with the same configuration as the dev environment, which means the QA, staging, and dev environments should all be identical. Containerization can help here by avoiding environment-specific issues.

A successful run of the smoke/sanity test cases triggers the next downstream job, which executes the complete end-to-end regression suite, including the features developed in the feature branch. If a feature passes the automated tests, it moves to the next step: the build is deployed to the final environment, and artifacts are delivered to a binary repository. If issues remain, they should be logged in the tracking tool (e.g., Jira) and the build should be rejected.
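The gating logic described above can be sketched as a simple orchestration script. The stage names and commands here are placeholders; in practice a CI server runs each stage as a separate job and promotes the build only when the previous stage is green:

```python
import subprocess

# Ordered pipeline stages; each command is a placeholder for a real test suite.
STAGES = [
    ("api tests", ["pytest", "tests/api", "-q"]),
    ("smoke/sanity ui", ["pytest", "tests/smoke", "-q"]),
    ("full regression", ["pytest", "tests/regression", "-q"]),
]

def run_pipeline(runner=subprocess.call) -> bool:
    """Run the stages in order; reject the build on the first failure."""
    for name, command in STAGES:
        print(f"running stage: {name}")
        if runner(command) != 0:
            print(f"stage '{name}' failed: rejecting build and filing a ticket")
            return False
    print("all stages passed: deploy the build and publish artifacts")
    return True

# Demo with a stub runner so the sketch runs without a real test suite present.
run_pipeline(runner=lambda command: 0)
```

Injecting the `runner` makes the gate itself testable, which matters once the pipeline script is code your team has to maintain too.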

Here, your team can write scripts to automate the deployment tools; PowerShell or shell scripting works well for deployment and reporting, and various reporting tools integrate with automation code to support continuous delivery. Deployment automation also benefits product owners by enabling users to give quick feedback on new releases. Feedback on unhelpful or hard-to-use features helps your dev team refocus and avoid devoting more effort to functional areas that are unlikely to produce a good ROI.


4. Simplify reporting to stakeholders

Your stakeholders should be informed at every step, starting with each unit test execution. Test results should be shared in whatever way the stakeholders prefer, such as a chat/messenger client or email. Be sure to provide detailed automated test results, along with the pass/fail percentage; this helps them determine whether the build is good to go for production. All of these process steps can be communicated through the notification channels of your choosing.

You might also consider system-level notification channels, which can be used to send notifications about all of the CI and infrastructure processes you've implemented. Some popular messaging clients include Slack, HipChat, and Google Hangouts, alongside plain email.
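As an example, a Slack-style webhook notification can be assembled from a test run's summary. The payload shape below matches Slack's incoming-webhook format, but the suite name and webhook URL are placeholders:

```python
import json

def build_notification(suite: str, passed: int, failed: int) -> dict:
    """Summarize a test run as a chat message payload (Slack-style webhook)."""
    total = passed + failed
    pass_rate = round(100 * passed / total, 1) if total else 0.0
    verdict = ("good to go for production" if failed == 0
               else "build needs attention")
    return {"text": f"{suite}: {passed}/{total} passed ({pass_rate}%) -- {verdict}"}

payload = build_notification("nightly regression", passed=48, failed=2)
print(json.dumps(payload))
# To actually send it, POST the payload to your team's incoming-webhook URL,
# e.g. requests.post(WEBHOOK_URL, json=payload)  # WEBHOOK_URL is hypothetical
```

Keeping the payload builder separate from the HTTP call makes the notification logic itself unit-testable, in line with tip 2.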

5. Monitor the complete process

Monitoring should be available at all times to check build health and the provisioned machines, so that the team can take appropriate action if any issue arises. Continuous monitoring plays an important role in detecting system errors and network issues before they hurt business productivity.

Monitoring can be performed at various levels. For example, you can monitor for vulnerabilities introduced into an application's top-level code by insecure coding practices. Continuous integration servers can communicate with chat servers to alert teams about failed builds and other deployment issues. Application logs are a primary source for monitoring, and application uptime and performance can be tracked to help gauge issues. Continuous monitoring tools you might consider include Nagios, New Relic, and Splunk.
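A minimal health-check loop illustrates the idea. The probes and alert routing here are stand-ins for what Nagios-style checks and a chat integration would do:

```python
def evaluate_health(checks, alert=print) -> bool:
    """Run named health checks; alert on each failure and report overall status."""
    healthy = True
    for name, probe in checks.items():
        try:
            ok = probe()
        except Exception as err:
            ok = False
            alert(f"check '{name}' raised: {err}")
        if not ok:
            healthy = False
            alert(f"ALERT: '{name}' is unhealthy -- page the on-call team")
    return healthy

# Placeholder probes; real ones would hit endpoints or parse application logs.
checks = {
    "web frontend": lambda: True,
    "build server": lambda: True,
}
print("all systems healthy:", evaluate_health(checks))
```

A scheduler (cron, or the monitoring tool itself) would run this on an interval; the injectable `alert` callable is where the chat or paging integration plugs in.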

Releasing or updating a software product is exciting, and often we want it done yesterday, but take the time to go through these five tips for setting up a successful automated testing system. It will save you a lot of time on the back end, so you don't have to keep reviewing, testing, and fixing errors that could have been prevented if a process had been put in place. If you struggle with how to execute these tips, or with which tools would best fit your automated testing process, you may want to reach out to a software testing company that specializes in creating automated tests.

Software testing companies can help you select the automation tool best suited to your product and requirements, develop a maintainable and reusable automation framework, build robust automated tests and environments, and much more. Whether you're working with another company or setting up continuous automated testing yourself, we strongly encourage you to take the time to set up a process that ensures a smooth and successful product launch.

Brandon Getty is the lead copy writer for QASource. He has been writing about quality assurance testing for QASource for four years now.
