INSIGHTS
4 min read
Published on 06/22/2022
Last updated on 05/03/2024
Building Quality into Agile
In the age-old waterfall model, discrete amounts of time and resources were allocated for quality testing after a product was stabilized and the majority of the development process was completed. As we adopt agile methodologies, the software development life cycle (SDLC) can make it challenging to fit quality testing and its expectations into the overall process. Additionally, today’s continuous improvement methodology—where development, DevOps and QA collaborate closely—calls for clearer “lines of demarcation” between the deliverables, expectations and outcomes of each role. Oftentimes, however, those lines are still blurred, which makes it critical to put a quality process in place that ensures efficiency, accountability and, ultimately, the delivery of a superior product to market.
Broadly speaking, we can categorize each role as follows:
- Development - provides individual and integrated components
- DevOps - provides all the infrastructure needed for integrating various components, deploying builds and initiating test suites on top
- QA - provides the various test suites, their execution and the resulting reports
Two “golden” environments support this process:
- goldenDev – an environment for active development (both dev and QA): integrating components, fixing all bugs and running test suites
- goldenQA – a more stable environment where extensive testing is done; only blocker issues are fixed here
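A minimal sketch of this split, assuming illustrative names and bug-severity labels (none of these identifiers come from the article): each environment accepts in-place fixes only for certain severities, so goldenQA stays stable while goldenDev absorbs all churn.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Environment:
    name: str
    purpose: str
    fixable_severities: frozenset  # bug severities that may be fixed in this env

# goldenDev: active development — all bugs are fixed here
GOLDEN_DEV = Environment(
    name="goldenDev",
    purpose="active development: integrate components, run all test suites",
    fixable_severities=frozenset({"blocker", "critical", "major", "minor"}),
)

# goldenQA: stable, extensive testing — only blockers are fixed here
GOLDEN_QA = Environment(
    name="goldenQA",
    purpose="stable environment for extensive testing",
    fixable_severities=frozenset({"blocker"}),
)

def can_fix_here(env: Environment, severity: str) -> bool:
    """Return True if a bug of the given severity may be fixed in env."""
    return severity in env.fixable_severities
```

Under this policy, a minor bug found in goldenQA is fixed in goldenDev and promoted forward, rather than patched in place.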
The quality process is built around the following test suites:
- Unit tests: usually defined by dev and triggered as part of each deployment
- Build Acceptance tests (BAT): defined by QA; includes basic sanity tests and is triggered after deployment to qualify a specific build
- Functional & System tests (FAST): defined by QA and triggered after BAT. Runs all functional and end-to-end tests (new features and regression).
- Performance & Stress tests (PAST): defined by QA and triggered after FAST. Runs benchmarking tests to establish performance and stress baselines for the system.
- Longevity tests (runs only on goldenQA): defined by QA. Runs usage scenarios over extended periods of time to gauge the stability of the overall product.
- Code Coverage: the codebase can be instrumented to measure the coverage achieved by all test suites. The suites can then be continuously enhanced to close the gaps identified and increase coverage.
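The suite ordering described above can be sketched as a gated pipeline: unit tests run on deployment, BAT qualifies the build, then FAST and PAST follow, with any failing suite blocking everything downstream. This is an illustrative sketch with stubbed runners, not code from the article.

```python
from typing import Callable, List, Tuple

# (suite name, runner returning pass/fail) — runners are stand-ins for real suites
Stage = Tuple[str, Callable[[], bool]]

def run_pipeline(stages: List[Stage]) -> List[str]:
    """Run suites in order; stop at the first failing gate.

    Returns the names of the suites that passed.
    """
    passed = []
    for name, run in stages:
        if not run():
            break  # a failing suite blocks everything downstream
        passed.append(name)
    return passed

# Example with stubbed runners: BAT fails, so FAST and PAST never run.
stages = [
    ("unit", lambda: True),
    ("BAT", lambda: False),
    ("FAST", lambda: True),
    ("PAST", lambda: True),
]
print(run_pipeline(stages))  # → ['unit']
```

The same ordering is what a CI system enforces when each suite is a stage that depends on the previous one.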
Release Exit Criteria
- All planned test execution should be completed and meet the following criteria:
- 100% pass rate for unit and BAT tests
- 100% pass rate for regression tests (FAST)
- New feature run to plan (RTP) — 100% executed (FAST)
- New feature pass to plan (PTP) — 80–90% passed (FAST), with no critical or blocker bugs in open state
- Performance & stress testing (PAST) should be completed — with acceptable metrics (agreed against the previous baseline) and no degradation
- Longevity tests should be completed — system uptime of 7–10 days with no downtime or degradation
- Automation achieved
- Identified high priority tests should be automated
- Code coverage — 60–80% achieved
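The exit criteria above can be encoded as a simple release gate. This is a hedged sketch: the thresholds mirror the article (using the lower bound of each stated band), but the metric names and structure are illustrative, not part of the original process.

```python
from dataclasses import dataclass

@dataclass
class ReleaseMetrics:
    unit_bat_pass_rate: float    # unit + BAT pass rate (fraction)
    regression_pass_rate: float  # FAST regression pass rate
    feature_rtp: float           # new-feature run-to-plan (executed)
    feature_ptp: float           # new-feature pass-to-plan
    open_blockers: int           # open critical/blocker bugs
    uptime_days: float           # longevity-test uptime
    code_coverage: float         # overall coverage fraction

def meets_exit_criteria(m: ReleaseMetrics) -> bool:
    """Evaluate the release gate against the exit criteria."""
    return (
        m.unit_bat_pass_rate == 1.0      # 100% unit and BAT
        and m.regression_pass_rate == 1.0  # 100% regression (FAST)
        and m.feature_rtp == 1.0           # RTP 100% executed
        and m.feature_ptp >= 0.8           # PTP 80–90% band; lower bound
        and m.open_blockers == 0           # no open critical/blocker bugs
        and m.uptime_days >= 7             # 7–10 days of longevity uptime
        and m.code_coverage >= 0.6         # 60–80% coverage band; lower bound
    )
```

A release candidate that misses any single criterion — say, one open blocker — fails the gate as a whole.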