In the first blog of the Orthogonal Array Testing series, we went through the concept of the Orthogonal Array (OA) and covered the following topics:
- What is Orthogonal Array
- Example of Orthogonal Array
- Benefits and Limitation of OA
In this post, we will go through one of the projects where we used Orthogonal Arrays and see how the technique helped us reduce overall testing time and increase test coverage.
Blog II: Orthogonal Array Testing – Case Study for a Major Supermarket Retailer
The purpose of this case study is to explain the analysis and implementation of a tool aimed at reducing the time and effort required to create test cases. We used the tool to generate test cases for two key modules in our project, namely “Promotional Sensitivity Calibration” and “Forecast”.
Promotional Sensitivity Calibration is the process of calculating the sales lift for a product after a reduction in its price. The sales lift is calculated from pre- and post-promotional sales history covering exactly one year from the effective date.
Forecast is the prediction of sales for a set of products in the upcoming weeks based on certain conditions. The prediction is done at the program level, where products are associated with a program. A forecast is run every week to ensure that actual sales, if not exactly matching the prediction, are at least close to it. The “conditions” are derived from years of research on the trends at which the products have been selling.
The tool used for test case generation is based on the Orthogonal Array technique. It is a multi-dimensional array tool that takes factors and levels as input and generates test cases with the maximum possible coverage of all levels in each factor. This optimizes test scenarios that involve several factors with many possible combinations.
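The OA tool used in the project is proprietary, but the core idea can be sketched in a few lines of Python. The greedy loop below is a simplified stand-in for true orthogonal-array construction: it repeatedly picks the combination that covers the most still-uncovered two-factor level pairs until every pair appears in at least one test case. All factor and level names here are placeholders, not the project's actual data.

```python
from itertools import combinations, product

def pairwise_cases(factors):
    """Greedy pairwise test-case selection.

    `factors` maps factor name -> list of levels. Returns a list of
    dicts (one per test case) such that every pair of levels from any
    two factors appears in at least one case.
    """
    names = list(factors)
    index_pairs = list(combinations(range(len(names)), 2))

    def level_pairs(combo):
        # The two-factor level pairs exercised by one full combination.
        return {(i, combo[i], j, combo[j]) for i, j in index_pairs}

    candidates = list(product(*factors.values()))
    uncovered = set().union(*(level_pairs(c) for c in candidates))
    cases = []
    while uncovered:
        # Pick the exhaustive combination covering the most new pairs.
        best = max(candidates, key=lambda c: len(level_pairs(c) & uncovered))
        cases.append(dict(zip(names, best)))
        uncovered -= level_pairs(best)
    return cases
```

A real OA tool constructs a balanced array rather than searching greedily, but the effect is comparable: with factors of 13, 5 and 4 levels (260 exhaustive combinations), this sketch produces a suite close to the 65-case lower bound while still covering every two-factor pair.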
Why is this study needed?
The aim of this study is to come up with the list of the most important flows needed to achieve maximum coverage during the entire testing cycle. Done manually, this would require a tediously long analysis for which neither the time nor the resources are usually available. At the end of the case study, statistical data is provided on how much effort was saved and how much test case efficiency improved.
Abbreviations used
- OA : Orthogonal Array
- PSC : Promotional Sensitivity Calibration
- FCST : Forecast
- TC : Test Case
- RCA : Root Cause Analysis
- CRUD : Create Read Update Delete
- ST : System Testing
- UAT : User Acceptance Testing
Quantifying the Problem
In the previous phase of our project, we received feedback from the customer asking us to reduce the defect slippage ratio in UAT and thereby improve test case efficiency in the ST phase. We therefore triggered an RCA on the customer feedback, with the following outcome:
- Defect Slippage Ratio is higher than industry standards
- Test Coverage in ST Phase is not adequate.
Though we had a healthy TC count, it was not enough to unearth all the high-priority defects that predominantly occur in the functional part of the application.
We decided to follow a systematic approach to close the action items from the RCA. The primary goal was to improve test coverage; on analysis, we found it feasible to implement the Orthogonal Array tool in our project. We also ensured that the TC count generated after OA implementation suited our test execution plan across all test cycles.
Data Collection & Analysis
The analysis for generating PSC test cases involves two major tasks. Before getting to them, here is a brief introduction to the characteristics and capabilities of the OA tool.
The OA tool works on module-specific input that we provide in the form of factors and levels, where a single factor can have several levels. For the tool to produce optimized output, the implementation needs at least 3 factors with 2 levels each. The tool is scalable to a large extent and can process as many as 9999 factors.
We have sectioned the tasks for implementing OA in any project.
Task 1: Deciding the Factors & Levels
This task involves deriving the factors and levels needed by OA to generate TCs. The following Excel sheet contains the various factors and levels for the Promotional Sensitivity module.
Original Factor Level Test Case Excel
Notice the color code used between factors: it signifies the dependency between levels of separate factors. For example, the first 3 levels of factor 1 can only move onto levels 1, 2 or 3 of factor 2. Similarly, there is a dependency in the first level of factors 3, 4 and 5. (The flow of PSC scenarios is derived from business logic and client requirements.)
Task 2: Identify composite factors and eliminate dependencies
The next task is to remove the dependencies between factors. This exercise makes all factors and levels independent of each other, so that all implausible scenarios are eliminated.
Optimizing the factors involved combining one or more factors into a single one, known as a composite factor. Composite factors are synonymous with the originals, except that each of their levels is completely independent of the levels of any other factor. After removing the dependencies, we were down to 3 (composite) factors. The following Excel sheet shows the set of composite factors and levels.
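One way to fold a dependency between two factors into a composite factor is to enumerate only the allowed level pairs; each surviving pair becomes one level of the composite factor, which is then independent of everything else. The factor names, level names and dependency rules below are invented purely for illustration, not taken from the project sheets.

```python
from itertools import product

# Hypothetical dependency (the kind the color code marks in the sheet):
# some "Promotion Type" levels may only combine with certain
# "Sales Type" levels. All names and rules here are assumptions.
promotion_type = ["Circular", "Temporary Price Cut", "Sale"]
sales_type = ["Regular", "Non-regular", "Combined"]
allowed = {
    ("Circular", "Regular"), ("Circular", "Non-regular"),
    ("Temporary Price Cut", "Regular"), ("Sale", "Combined"),
}

# Keep only valid pairs: each pair is one level of the composite factor.
composite_factor = [pair for pair in product(promotion_type, sales_type)
                    if pair in allowed]
print(composite_factor)
```

The composite factor's levels can now be fed to the OA tool like any ordinary factor, since no implausible combination can be produced from them.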
Composite Factor Levels Test Case Excel
Implementation of OA
The implementation of the OA tool involves 3 tasks:
- Generation of TCs
- Addition of TCs for missed test coverage
- Removal of TCs based on priority
Task 1: OA Test Cases Generation
- Based on the analysis of the factors and levels for PSC, the OA tool is given the names of the 3 factors.
- Next, the factors are assigned 13, 5 and 4 levels respectively, with corresponding names for each level for better understanding.
- Finally, the list of TCs is generated for the PSC flow. The report on the total number of plausible combinations for “exhaustive” test case coverage yielded a whopping 260 test cases. A simple check on this count is obtained by multiplying the numbers of levels (e.g. 13 × 5 × 4 = 260), as seen below:
OA Generation Test Case Excel File
| Combination | Possible Combinations | % Coverage |
| --- | --- | --- |
| Promotion Type → Sales Type | 65 | 100 |
| Promotion Type → Calculate Promo Acceleration Factor | 52 | 100 |
| Sales Type → Calculate Promo Acceleration Factor | 20 | 100 |
| Promotion Type → Sales Type → Calculate Promo Acceleration Factor | 260 | 25 |
The above report only lists the possible combinations, of which only part will find a place in the final list. This is where the crux of the OA tool implementation comes into play: the tool generates the list of TCs that gives maximum coverage of all factors and levels among all 260 possible scenarios. The final report contained 65 test cases. The % coverage in the table above indicates the coverage of each combination with respect to those 65 TCs only. It is clearly evident from the table that the final combination needs more coverage, which is what we address next (Tasks 2 and 3 apply whenever % coverage is below 100% for any combination).
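The counts in the report can be verified with a few lines of arithmetic: the pairwise combination counts are products of two level counts, the exhaustive count is the product of all three, and 65 generated cases can cover at most 65 of the 260 three-way combinations, i.e. 25%.

```python
# Level counts per factor, as given in the case study.
levels = {"Promotion Type": 13, "Sales Type": 5,
          "Calculate Promo Acceleration Factor": 4}

exhaustive = 13 * 5 * 4
print(exhaustive)                    # 260 exhaustive combinations
print(13 * 5, 13 * 4, 5 * 4)        # 65 52 20 pairwise combination counts
print(round(65 / exhaustive * 100))  # 25 -> % three-way coverage of 65 cases
```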
Task 2: Test Cases Addition for Maximum Coverage:
The implementation of the OA tool succeeds only if maximum coverage is given to all scenarios. As mentioned earlier, the combination containing all 3 factors does not have complete coverage. This is where human intervention is needed to fill in the missed coverage: based on domain expertise and prior knowledge of the module, we add the scenarios that the tool does not cover. The following is the list of TCs after addition:
TestCases After Addition Excel File
Task 3: Remove lesser priority Test Cases
Though OA generates scenarios by combining all the factors, lower-priority TCs need to be removed in this step, considering business criticality and the probability and impact of each scenario occurring in production. The following is the list of TCs that were not considered for the final list:
TestCases After Deletion Excel File
Hence, after a thorough analysis of the test cases, higher-priority scenarios were added and lower-priority TCs were removed. The final count came to 44, as seen below:
| Category | Number of test cases |
| --- | --- |
| Test cases obtained using OA | 27 |
| Manually added test cases | 17 |
| Total | 44 |
The final list of TCs generated is:
Final Test Cases Sheet Excel File
The question that remains is: on what basis were test cases added and removed? Based on business requirements, we had to assign priorities to particular levels, because it was known that only certain scenarios would predominantly occur during real-time usage of the application. For example, the “Sale (Site Marketing)” promotion type takes precedence over both “Temporary Price Cut” and “Circular”. Hence we gave more priority to such scenarios, while keeping in mind that the other scenarios could not be ignored altogether.
The following tables describe the priority of the various scenarios and the coverage given to each of them:
| Sales Type | Priority (0-2) | Individual Priority (0-3) | Test Cases Covered (Out of 44) | (%) |
| --- | --- | --- | --- | --- |
| Regular – Original Sales Qty | 0 | 0 | 27 | 61.36 |
| Non-regular – Multiple promotions – Circular | 1 | 2 | 5 | 11.36 |
| Non-regular – Multiple promotions – Highest disc. | 2 | | 4 | 9.09 |
| Combined – Multiple promotions – Circular | 2 | 3 | 3 | 6.82 |
| Combined – Multiple promotions – Highest disc. | | 3 | 5 | 11.36 |
| Calculate Promo Acceleration Factor | Priority (0-3) | Test Cases Covered (Out of 44) | (%) |
| --- | --- | --- | --- |
| Sales Qty 1st Promo week Sales Units < 0 → PAF = 1 | 2 | 8 | 18.18 |
| Total Average Previous Sales < 0 → PAF = 1 | 2 | 8 | 18.18 |
| Total Average Previous weeks Sales > 0 or Total Sales Units > 0 → Direct value is found from MKDN table | 0 | 15 | 34.09 |
| Total Average Previous weeks Sales > 0 or Total Sales Units > 0 → Derive value from MAX of acc. factor and DISC% | 0 | 13 | 29.55 |
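The "(%)" columns in the tables above are simply each row's covered-TC count divided by the final suite size of 44, rounded to two decimals. A quick check (row labels shortened for readability):

```python
# Verify the percentage column: covered count / final suite size of 44.
suite_size = 44
rows = {"Regular - Original Sales Qty": 27,
        "PAF = 1 branches (each)": 8,
        "Direct value from MKDN table": 15,
        "MAX of acc. factor and DISC%": 13}

for label, count in rows.items():
    print(label, round(100 * count / suite_size, 2))
```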
Our primary goal was to ensure maximum coverage while keeping in mind that our execution plan could not be stretched; all higher-priority scenarios had to be given greater coverage.
Conclusion
The implementation of the OA tool has resulted in the efficient generation of test cases, ensuring that the utmost coverage is given to all scenarios.
The main areas of improvement were TC efficiency, defect slippage ratio, defect rejection ratio, test coverage and effort savings. The following metrics were used:
- Test case efficiency: the percentage of TCs that are capable of finding defects in the application.
- Defect slippage ratio: the percentage of defects found in UAT that were missed in ST.
- Defect rejection ratio: the percentage of defects rejected by the development team during ST.
- Test coverage: the percentage of customer requirements covered by the TCs.
- Effort savings: the effort saved in generating TCs, comparing before and after OA implementation, measured in person-days (PDs).
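These metrics are straightforward ratios. The snippet below shows the formulas with hypothetical counts; the counts themselves are invented solely for illustration, only the calculations are the point.

```python
def ratio(part, whole):
    """Express part/whole as a percentage, rounded to one decimal."""
    return round(100 * part / whole, 1)

# Hypothetical counts (assumed for illustration, not project data):
executed_tcs, defect_finding_tcs = 200, 192
total_defects, uat_defects = 100, 3
raised_defects, rejected_defects = 100, 4

print(ratio(defect_finding_tcs, executed_tcs))  # 96.0 -> test case efficiency
print(ratio(uat_defects, total_defects))        # 3.0  -> defect slippage ratio
print(ratio(rejected_defects, raised_defects))  # 4.0  -> defect rejection ratio
print(round(100 * (7 - 3) / 7))                 # 57   -> % effort saved, 7 PDs down to 3 PDs
```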
| S.No | Metric | Acceptable Norms (%) | Prior to OA Implementation | Post OA Implementation |
| --- | --- | --- | --- | --- |
| 1 | Test Case Efficiency | 95% | 85% | 96% |
| 2 | Defect Slippage Ratio | 5% | 15% | 3% |
| 3 | Defect Rejection Ratio | 5% | 6% | 4% |
| 4 | Test Coverage | 100% | 80% | 100% |
| 5 | Effort Savings (only for writing test cases pertaining to the functionalities discussed here) | 20% | 7 PDs | 3 PDs (58% effort saved) |
The OA tool has also helped us make proper estimates for performing testing in the most productive way. The complete success of an OA tool implementation lies in combining the statistical capability of the tool with the domain knowledge of the implementer.