There are many myths about performance testing. A common one: if the application is slow, just engage the performance testing team at the last minute and they will resolve the bottlenecks. That assumption is simply wrong.
I keep seeing performance defects that are quite expensive to fix in the later phases of the SDLC, yet business sponsors are still to understand the need for Performance Testing (PT) and Performance Engineering (PE) and to allocate the right budget to them. This blog is aimed at business sponsors, project teams and development project managers, to ensure PT and PE teams are engaged and deployed at the appropriate time.
Let’s discuss myths vs facts in each test phase.
Test Requirement Phase:
Myth: Project team can decide if Performance testing is required or not as they are the application owners.
Fact: Involve performance engineers and architects during the non-functional requirement assessment to establish the risk and the need for performance testing.
Myth: When there are no Performance Testing NFRs, the development team or Project Managers can define them.
Fact: The business team defines the NFRs in concurrence with Solution Architects, who design the infrastructure and application to meet them. If the application is already in production, the Performance Testing team can help derive the NFRs by analyzing production logs or using production monitoring tools.
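As a hedged sketch of that last point – deriving response-time NFRs from production logs – the snippet below parses a few hypothetical access-log lines (the format, with the duration in milliseconds as the last field, is an assumption for illustration) and computes the median and 90th-percentile response times that could seed an NFR discussion:

```python
# Illustrative sketch: deriving response-time NFRs from an access log.
# Assumed log format (not from any real system): each line ends with
# the request duration in milliseconds.
import statistics

sample_lines = [
    "2024-01-15T10:03:21 GET /checkout 200 412",
    "2024-01-15T10:03:22 GET /checkout 200 389",
    "2024-01-15T10:03:25 GET /checkout 200 1250",
    "2024-01-15T10:03:27 GET /checkout 200 460",
]

# Extract the duration field and sort for percentile lookup
durations_ms = sorted(int(line.split()[-1]) for line in sample_lines)

def percentile(values, pct):
    """Nearest-rank percentile of a pre-sorted list."""
    index = max(0, int(round(pct / 100 * len(values))) - 1)
    return values[index]

print("median (ms):", statistics.median(durations_ms))
print("90th percentile (ms):", percentile(durations_ms, 90))
```

In practice a monitoring tool would report these percentiles directly; the point is that NFRs come from observed production behaviour, not from guesswork.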
Myth: Non-functional requirements (NFRs) refer to application response time only.
Fact: Non-functional requirements cover Key Performance Indicators such as Reliability, Capacity, Availability, Security, Scalability, Stability, Accessibility and Usability – not just application response time.
Test Planning Phase:-
Myth: Performance testing can be completed in 2 to 4 weeks.
Fact: Performance testing duration cannot be fixed at 'x' weeks; it depends on the performance test objectives and the complexity of the application. Applications with complex architectures can take months to performance test.
Myth: Performance Testing can be done ONLY towards the end of the testing life cycle.
Fact: Baseline the application's performance upfront, then run incremental tests to check whether performance is improving or deteriorating. For complex engagements, early performance testing can run in parallel with development. Performance defects are quite expensive to fix at the end of the SDLC and can even force a change in the technical design.
Myth: Performance testing can be engaged once SIT is in progress.
Fact: Performance test script development can start while SIT is in progress; however, it is recommended that test executions commence only after SIT completion – or at least 80% SIT completion with no Severity 1 or 2 defects open.
Test Development Phase:-
Myth: Performance Testing is about learning and using a load testing tool.
Fact: Performance testing goes well beyond the tool: derive the workload mix and design realistic scenarios, devise a suitable end-to-end PT approach, define clear objectives for each test type, analyze performance from both software and hardware perspectives, and identify performance bottlenecks and provide recommendations.
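To make "derive the workload mix" concrete, here is a minimal sketch of turning a business workload mix into per-transaction load targets using Little's Law (concurrent users = throughput × (response time + think time)). All the numbers – peak rate, mix percentages, think time – are made-up assumptions for illustration:

```python
# Illustrative workload-mix calculation; every figure below is an
# assumed example value, not a recommendation.
peak_tph = 18000  # assumed peak transactions per hour, from the NFRs

# Assumed share of total load per business transaction
workload_mix = {"search": 0.50, "view_item": 0.30, "checkout": 0.20}

think_time_s = 10.0   # assumed user think time per iteration
avg_response_s = 2.0  # assumed average transaction response time

total_tps = peak_tph / 3600.0  # convert hourly rate to TPS
for txn, share in workload_mix.items():
    tps = total_tps * share
    # Little's Law: users needed to sustain this arrival rate
    users = tps * (avg_response_s + think_time_s)
    print(f"{txn}: {tps:.2f} TPS, ~{users:.0f} virtual users")
```

A calculation like this feeds directly into scenario design in whatever load tool is used, which is why tool knowledge alone is not enough.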
Myth: Performance testing only measures whether response time meets the defined SLA.
Fact: Performance testing objectives can include the assessment of other NFRs such as Scalability and Availability; response time is just one of the critical NFRs.
Myth: Test cases used by functional team can be leveraged for Performance Testing.
Fact: Performance testing covers only critical transactions – the most frequently used, complex business functions and system-intensive operations – so functional test cases cannot simply be reused.
Test Execution Phase:-
Myth: Only developers can tune application performance.
Fact: Performance Architects provide recommendations and tuning suggestions, but implementing those recommendations lies with the development/project team.
Myth: Performance issues can be fixed by simply plugging in extra hardware.
Fact: Performance issues can lie in application code, improper server configuration, third-party libraries, infrastructure, etc.
Myth: Two successful baseline tests are sufficient to determine overall application performance.
Fact: The number of tests and the test types are determined by the performance objectives; the Performance Testing team will always recommend the applicable test types.
Performance Testing vs Performance Engineering:-
We need to understand the difference between the activities performed by the two tracks – PT vs PE.
Performance Testing:-
- Validating the application’s performance against the application/deployment architecture and infrastructure, based on the non-functional requirements
- Identifying performance bottlenecks in every tier of the application
Performance Engineering:-
- NFR definition – breaking down Non-functional requirements into Business, Application and System level NFRs
- Architecture and design definition and review for Performance – e.g. Performance modeling of expected behavior, Single Point of Failure Analysis.
- Capacity planning and strategy.
- Code profiling and performance profiling as part of development
- Performance tuning recommendations during Performance Testing
- Production monitoring, RCA, Capacity planning.
Performance Engineering Myths:-
Finally, let’s look at the myths vs facts in the PE track.
Myth: Performance tuning can be done at the end of Performance Testing.
Fact: Performance tuning should be part of the SDLC so that performance issues are identified and resolved early – e.g. performance architecture and design reviews, early performance testing, and code profiling.
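As a small example of code profiling during development, the sketch below uses Python's built-in cProfile on a deliberately inefficient function (the function and data are invented for illustration); a profiler run like this is how an O(n·m) hotspot gets spotted before it ever reaches a load test:

```python
# Minimal code-profiling sketch using Python's standard cProfile/pstats.
# slow_lookup is a made-up example of a typical hotspot.
import cProfile
import io
import pstats

def slow_lookup(items, targets):
    # O(n*m) membership test against a list -- a set would be O(m)
    return [t for t in targets if t in items]

items = list(range(5000))
targets = list(range(0, 10000, 2))

profiler = cProfile.Profile()
profiler.enable()
result = slow_lookup(items, targets)
profiler.disable()

# Capture the top entries sorted by cumulative time
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
summary = next(l for l in stream.getvalue().splitlines() if "function calls" in l)
print(summary.strip())
```

The fix suggested by such a profile (here, converting `items` to a set) is exactly the kind of cheap, early tuning that is far more expensive to discover at the end of the SDLC.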
Myth: Capacity planning is done by the environment team.
Fact: Capacity planning is a Performance Engineering activity that identifies and projects the hardware capacity and software configuration required to meet the anticipated load.
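A first-cut capacity projection can be as simple as the sketch below; every figure in it (peak load, measured per-node throughput, utilisation headroom) is an assumed example value, since real capacity planning is driven by measured numbers from performance tests and production monitoring:

```python
# Hedged capacity-planning sketch -- all inputs are illustrative assumptions.
import math

peak_tps = 220.0            # anticipated peak load, from the NFRs
per_node_tps = 60.0         # sustainable throughput measured on one app node
target_utilisation = 0.70   # run nodes at <=70% to leave failure headroom

# Nodes needed so that peak load stays within the utilisation target
nodes_needed = math.ceil(peak_tps / (per_node_tps * target_utilisation))
print(f"app nodes required: {nodes_needed}")
```

Projections like this are why capacity planning belongs with the PE track: the per-node throughput figure only exists because someone ran and analysed a performance test.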