Planning Quality Architecture for 2020
I was inspired by Denali Lumma (@denalilumma) when she delivered a glimpse of the future in her talk about 2020 testing at the Selenium 2015 conference. The session was an excellent introduction that compared scenarios from the minority of elite testing organizations with those of more typical development teams: the elite companies consider infrastructure FIRST, while the majority thinks about infrastructure LAST. It got my wheels turning about the future of software development. I don't have all the answers right now, but I want to be part of the movement to plan and build architecture with quality in mind. A few words come to mind when thinking about quality architecture: automation, scalability, recoverability, and analytics.
Build a culture
When building a culture, avoid too much control. You want a culture that embraces freedom, responsibility, and accountability. Why is building a culture like this important? It allows passionate employees to innovate and find big-time solutions. You can't plan for innovation; it happens naturally. Give passionate employees an inch, and they'll take a mile. The future team culture needs to push the envelope and step outside its comfort zone.
This is slowly happening across the software development industry. The team makeup is being reshaped by removing specialized task silos (code, tests, continuous integration) and bridging the gaps between developers, QA, and DevOps, allowing them to move quickly and build quality up front.

The team needs to share tasks and responsibilities, but what does that mean? By broadening the team's skill set and talent, everyone on the team can share specialized tasks and own quality. Here is an example of a team's primary focus and who has shared responsibilities for every sprint:

[table id=8 /]

The key is that everyone needs to embrace the new culture: one where QA and DevOps team members are embedded with developers and share responsibilities.
Continue to focus on automation strategies
To improve the efficiency and reliability of a development project, the future needs minimal human involvement for all committed code. Teams will want to ship as soon as the code is ready, and no later. The objective of automation is to simplify as much of the infrastructure as possible with code that generates trustworthy reporting, allowing teams to ship features and bug fixes with confidence. The standard for every company must be: build, test, and deploy automatically, with infrastructure that can recover when things go wrong. The future of automation strategies should focus on testing both pre-production and production environments. Remove the FEAR, and inject some chaos into your production infrastructure. Evaluate any failures that occur and find solutions to prevent them the next time.
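The "inject some chaos" idea can be sketched in a few lines. This is a minimal, illustrative Python sketch of a chaos experiment, not any particular tool: the fleet, the kill function, and the health check are hypothetical stand-ins for your real inventory, termination API, and monitoring probe.

```python
import random

def run_chaos_experiment(instances, kill, health_check, rng=random):
    """Terminate one random instance, then verify the service survives.

    `instances`, `kill`, and `health_check` are hypothetical stand-ins
    for a real fleet inventory, termination API, and monitoring probe.
    """
    victim = rng.choice(list(instances))
    kill(victim)              # inject the failure
    healthy = health_check()  # did the rest of the fleet absorb it?
    return victim, healthy

# Toy in-memory "fleet": the service is healthy while any instance is up.
fleet = {"web-1", "web-2", "web-3"}
victim, ok = run_chaos_experiment(
    fleet,
    kill=fleet.discard,
    health_check=lambda: len(fleet) > 0,
)
```

The point of the exercise is the follow-up: if `ok` ever comes back false, you have found a single point of failure before your customers did.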
Everything needs to be SCALABLE
The year 2020 seems like a lifetime away for technology. I have learned one thing since Test Automation and DevOps entered the scene and took over the world's software development: you'd better be ready to evolve and scale up quickly when change occurs. How do we prepare? Scale comes in many forms. (It doesn't always mean cloud infrastructure.) Here is a list of ideas that come to mind when we need to be scalable without affecting quality:
- Onboarding new employees
- Cross-team training
- Deploying a process or policy change
- Adopting cutting-edge technologies as they emerge
- Application redesigns
- Environments (machine-as-code, cloud-as-code)
Everything needs to be scalable. Are you prepared to evolve and scale when change occurs?
Repeatable and recoverable
The future of cloud computing is here to stay (for a while, at least). Moving quickly and reliably requires infrastructure-as-code. You should build identical environments for development, pre-production, and production. There are many technologies in this area, such as configuration management and containerization tools. Puppet and Chef are among the most popular configuration management tools; they let you keep all your servers configured from a central place, and identical. Cloud computing services will become the NORM for many reasons: they allow flexibility, disaster recovery, automated software updates, the ability to work from anywhere, security, and many other benefits. If you haven't moved to cloud computing yet, it is only a matter of time before the benefits become substantial enough to move your business into the cloud. The best defense against failures is cloud computing combined with configuration management tools.
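The core idea behind tools like Puppet and Chef is convergence: you declare the desired state once, and every run applies only the difference between where a server is and where it should be, so re-running is always safe. Here is a loose Python sketch of that idea under simplifying assumptions (state as a flat dictionary); it is not how either tool is actually implemented.

```python
def converge(current, desired):
    """Apply only the changes needed to reach the desired state.

    A loose sketch of configuration-management convergence: `current`
    and `desired` are hypothetical maps of setting names to values.
    Returns the diff that was applied; an empty dict means the server
    was already converged.
    """
    changes = {k: v for k, v in desired.items() if current.get(k) != v}
    current.update(changes)  # apply the diff in place
    return changes

server = {"ntp": "off", "ssh_port": 22}
desired = {"ntp": "on", "ssh_port": 22, "firewall": "enabled"}

first_run = converge(server, desired)   # applies the two drifted settings
second_run = converge(server, desired)  # idempotent: nothing left to do
```

That idempotence is what makes identical dev, pre-prod, and production environments practical: running the same declaration against all three converges each one, no matter what state it started in.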
We need more ANALYTICS
Lastly, the future needs to focus on analytics. Analytics let us evaluate and recalibrate to improve processes, testing, applications, infrastructure, and more, with real-time analytics alerting the team the moment things go wrong.
- Build a culture that embraces freedom and responsibility
- Automation will continue to be part of the future
- Tools and processes power how changes move from developers to production
- Computers will be waiting for humans — humans won’t be waiting on computers
- Real-time analytics
Greg Sypolt (@gregsypolt) is a senior engineer at Gannett and co-founder of Quality Element. He is a passionate automation engineer seeking to optimize software development quality, while coaching team members on how to write great automation scripts and helping the testing community become better testers. Greg has spent most of his career working on software quality — concentrating on web browsers, APIs, and mobile. For the past five years, he has focused on the creation and deployment of automated test strategies, frameworks, tools and platforms.