Continuous Testing in Practice: Test Intelligently by Tuning CI Pipeline Code

As companies seek to deploy software faster, they must test intelligently by tuning Continuous Integration (CI) pipelines. That requires rethinking how we automatically test our applications, since traditional automated test methodologies are not always fast enough to keep pace with the demands of modern software delivery.

To achieve highly efficient automated testing, continuous testing strategies need to focus on testing less, and testing smarter, by only testing what has changed. This leads to a smoother automated testing process that reduces waiting time and delivers faster feedback from testing and integration to developers.

In this article, I explain how I have been fine-tuning my CI pipeline in order to achieve a higher level of efficiency in my automated tests.

Moving to Declarative Pipeline Code

I know a lot of companies are still using Jenkins. The UI isn't the greatest, and many teams are still building CI pipelines non-declaratively by wiring together freestyle jobs in the front end. Jenkins 2 introduced the declarative pipeline, which lets you specify the pipeline orchestration as code in a Jenkinsfile that lives inside the application repository, with shared libraries available for reusable pipeline logic.

Why should you use a declarative pipeline?

Because the entire job configuration is stored as code, is version controlled, and is easier to maintain. It's faster and more stable for Jenkins to read and cache. It is easier for humans to read and understand the entire pipeline compared to non-declarative pipelines that daisy-chain multiple freestyle jobs together. It is also lintable: you can now validate a declarative Jenkinsfile from the command line, either with a Jenkins CLI command or by making an HTTP POST request with the proper parameters. (I recommend checking out the Jenkins documentation for the Command-line Pipeline Linter.)

I could go on forever regarding declarative pipelines and their many advantages. Start moving toward pipeline as code and stop building non-declarative pipelines! What I love about declarative pipelines is the control they give us to tune our pipelines for smarter quality validation.
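If you have never seen one, here is a minimal declarative Jenkinsfile sketch; the stage names and steps are placeholders for illustration, not our actual pipeline:

```groovy
// Jenkinsfile -- minimal declarative pipeline sketch
pipeline {
  agent any
  stages {
    stage('Build') {
      steps {
        sh 'npm install'        // placeholder build step
      }
    }
    stage('Test') {
      steps {
        sh 'npm test'           // placeholder test step
      }
    }
    stage('Deploy') {
      when { branch 'master' }  // only deploy from master
      steps {
        sh './deploy.sh'        // placeholder deploy script
      }
    }
  }
}
```

Because this file lives in the repository, every change to the pipeline itself goes through the same pull request review as application code.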

Tuning the Pipeline Code for Intelligent Testing

It starts by creating a modularized pipeline as code for build, testing, and deployment. I am going to talk about how we are tuning our pipeline code for intelligent testing by creating the following Jenkinsfiles: cron, queue, profile, and, in the future, target and canary.

Yes, we want to test less, but it's still important to run the entire smoke or regression suite once in a while. We have created a cron Jenkinsfile pipeline that executes nightly or weekly.

The queue Jenkinsfile is where the tuning magic happens before communicating with the profile Jenkinsfile to execute. To get started, the following pseudocode locates all the environment profiles from macOS/chrome to iphone5/safari by reading the testing configuration file (nightwatch.json).

def nightwatch = readJSON file: 'nightwatch.json'
def nightwatch_profiles = nightwatch.test_settings.keySet().findAll { profile ->
  !profile.startsWith('default') && !profile.startsWith('local')
}

echo "candidate profiles: ${nightwatch_profiles}"
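For context, the filtering above assumes a nightwatch.json whose test_settings keys name the environment profiles, roughly like the following (the profile names and capabilities here are illustrative):

```json
{
  "test_settings": {
    "default":        { "launch_url": "http://localhost" },
    "local.chrome":   { "desiredCapabilities": { "browserName": "chrome" } },
    "macos.chrome":   { "desiredCapabilities": { "platform": "macOS 10.13", "browserName": "chrome" } },
    "iphone5.safari": { "desiredCapabilities": { "platform": "iOS", "browserName": "safari" } }
  }
}
```

The `default` and `local.*` profiles are developer conveniences, which is why they are excluded from the candidate list.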

(The console output lists the candidate profiles.)

Let’s start tuning by adding pipeline logic that checks what changed between the Git branch and master. This allows us to wrap logic around what will be tested within a pull request by running a `git diff`.

def tests_changed = sh(script: 'git diff --name-only origin/master..HEAD tests/ | wc -l', returnStdout: true).trim() as Integer

Check for changed test scripts. We are targeting the changes and only validating that the changed test scripts work with the system.

if (tests_changed > 0) {
  tests = sh(script: 'git diff -z --name-only origin/master..HEAD tests/', 
     returnStdout: true).trim().split("\0")
  echo "changes made to tests, only queuing changes:\n${tests.join(' ')}"
} else {

Check for changed configuration, for instance `nightwatch.json`. If changes were made to environment profiles, queue only the changed profiles; if no profiles changed, run a random environment profile against the changed tests. Applying the same principle as above, we build logic into our pipeline that targets only the changes, so we test less rather than testing everything.

def nightwatch_changed = sh(script: 'git diff --name-only origin/master..HEAD nightwatch.json | wc -l', returnStdout: true).trim() as Integer
if (nightwatch_changed > 0) {
  def original_nightwatch = readJSON(text: sh(script: 'git show origin/master:my-project/nightwatch.json', returnStdout: true))
  profiles = nightwatch_profiles.findAll { k ->
    original_nightwatch.test_settings[k] != nightwatch.test_settings[k]
  }
  echo "changes made to profiles, only queuing changes:\n${profiles.join(' ')}"
} else {
  def random = new Random()
  profiles = [nightwatch_profiles.toList()[random.nextInt(nightwatch_profiles.size())]]
  echo "no changes to profiles, running a random profile: ${profiles.join(' ')}"

Was the pipeline triggered by a nightly task, or by merging branch changes to the master branch? The objective is to provide faster feedback by focusing only on the changes from pull requests. So far, I have only talked about tuning the pipeline for test script or testing framework configuration changes. On the master branch, it's still important to run all tests against all targeted profiles on a nightly basis until we implement the target feature and canary pipelines (which I will discuss more in the next section).

} else if (env.NIGHTLY_TASK == 'smoke') {
  base_url = ''
  profiles = nightwatch_profiles
  echo 'on master branch and is nightly test, running all tests against all profiles'
} else {
  echo 'on master branch and is not nightly test, doing nothing'
}

The profile Jenkinsfile is downstream pipeline code that depends on the queue pipeline to pass in which test scripts and environments will be executed.
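On the queue side, the handoff can be done with the standard `build` step, one downstream run per selected profile. This is a sketch; the downstream job name and parameter wiring are illustrative:

```groovy
// queue Jenkinsfile -- trigger the downstream profile pipeline
// once per selected environment profile (names are illustrative)
profiles.each { profile ->
  build job: 'profile', parameters: [
    string(name: 'BASE_URL', value: base_url),
    string(name: 'NIGHTWATCH_PROFILE', value: profile),
    string(name: 'NIGHTWATCH_TESTS', value: tests.join(' '))
  ], wait: false   // fire and forget so profiles run concurrently
}
```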

parameters {
  string name: 'BASE_URL', defaultValue: '', description: 'Environment to run nightwatch against'
  string name: 'NIGHTWATCH_PROFILE', defaultValue: '', description: 'Name of the nightwatch profile to run'
  string name: 'NIGHTWATCH_TESTS', defaultValue: '', description: 'Nightwatch tests to run'
}

Fetch secrets from Vault rather than hard-coding secrets in your pipeline code.

    [path: 'secret/myapp/sauce', keys: [username:  
 ]) {
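The truncated snippet above pulls Sauce Labs credentials at runtime. With the HashiCorp Vault plugin's `withVault` step, the equivalent would look roughly like this; the secret path, key names, and environment variable names are assumptions for illustration:

```groovy
// sketch: fetch Sauce Labs credentials from Vault at runtime
// (secret path and key names are illustrative)
withVault(vaultSecrets: [[
  path: 'secret/myapp/sauce',
  secretValues: [
    [envVar: 'SAUCE_USERNAME',   vaultKey: 'username'],
    [envVar: 'SAUCE_ACCESS_KEY', vaultKey: 'access_key']
  ]
]]) {
  // $SAUCE_USERNAME and $SAUCE_ACCESS_KEY are only set inside this block
  sh 'npm test'
}
```

The secrets are masked in the console log and never committed to the repository.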

Install project dependencies, and execute test scripts with JUnit reporting.

env.SAUCE_READY_FILE = 'sc-ready-' + currentBuild.displayName.replaceAll('/', '_')
sh script: [
  'cd smoke-tests',
  'ulimit -n 24000',
  'sc -f $SAUCE_READY_FILE &',
  'for t in {0..12}; do echo $t; if [ -f $SAUCE_READY_FILE ]; then break; fi; sleep 10; done',
  'if [ ! -f $SAUCE_READY_FILE ]; then exit 1; fi',
  'npm install',
  "npm test -- --env ${params.NIGHTWATCH_PROFILE} ${params.NIGHTWATCH_TESTS}"
].join('\n')
junit testResults: "${NIGHTWATCH_PROFILE}-test-results.xml",
      allowEmptyResults: true

The cron Jenkinsfile is simple pipeline code that uses cron to trigger the nightly task.

Then, if anything fails, the stack trace is intercepted and sent to Slack.
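A sketch of what that cron Jenkinsfile could look like, assuming the Slack Notification plugin's `slackSend` step; the schedule, channel, downstream job name, and NIGHTLY_TASK wiring are illustrative:

```groovy
// cron Jenkinsfile -- nightly trigger sketch
pipeline {
  agent any
  triggers {
    cron('H 2 * * *')          // nightly; H spreads load across the hour
  }
  environment {
    NIGHTLY_TASK = 'smoke'     // tells the queue pipeline to run everything
  }
  stages {
    stage('Queue nightly smoke') {
      steps {
        build job: 'queue', wait: true   // hypothetical queue job name
      }
    }
  }
  post {
    failure {
      slackSend channel: '#ci-alerts',
                color: 'danger',
                message: "Nightly smoke failed: ${env.BUILD_URL}"
    }
  }
}
```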

What’s Next?

We’ll next want to finish tuning our CI pipeline to test less and remove the nightly task that executes all the tests. We need to shift our focus to targeting front-end component changes at the pull request level. The concept focuses on tagging component changes to help us develop pipeline logic that decides which test scripts need to be executed against the front-end component change. (Sorry, no working example as of right now.) I'm always sharing my stories, so keep an eye out for a future blog post or conference talk from me on this topic.

I am currently working on a blog post about canary releases, focusing on shift-right testing and monitoring in a production environment.


The sky's the limit for test intelligence. We need to keep evaluating our continuous testing strategy to identify how to move faster. To build on the speed factor, when developing your path to production as pipeline code, use the Jenkinsfile parallel option to fire off all testing types in parallel. Don't let testing be the roadblock to faster deployments. I can't wait to implement targeted testing against feature code along with canary releases, a technique that will reduce risk and build confidence in releases.
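In a declarative pipeline, fanning out testing types is just nested stages inside a `parallel` block; the stage names and npm scripts below are placeholders:

```groovy
// sketch: fire off all testing types in parallel inside one stage
stage('Test') {
  parallel {
    stage('Unit')          { steps { sh 'npm run test:unit' } }
    stage('Integration')   { steps { sh 'npm run test:integration' } }
    stage('Accessibility') { steps { sh 'npm run test:a11y' } }
  }
}
```

The whole stage finishes when the slowest branch finishes, so total pipeline time drops to the longest test type instead of the sum of all of them.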

Greg Sypolt (@gregsypolt) is Director of Quality Engineering at Gannett | USA Today Network, a Fixate IO Contributor, and co-founder of Quality Element. He is responsible for test automation solutions, test coverage (from unit to end-to-end), and continuous integration across all Gannett | USA Today Network products, and has helped change the testing approach from manual to automated testing across several products at Gannett | USA Today Network. To determine improvements and testing gaps, he conducted a face-to-face interview survey process to understand all product development and deployment processes, testing strategies, tooling, and interactive in-house training programs.
