THE NUMBERS THAT COUNT IN END-TO-END AUTOMATED TESTING
Have you ever wondered how much can be achieved by automating user acceptance testing? Here I present data from a case study based on one of our clients.
The system under test (SUT) is a web application in the financial sector.
At the moment, for this project, we are maintaining and extending a collection of automated test suites that run on our CUTE architecture. Each test case implements a possible usage scenario, chosen on the basis of a risk assessment that is routinely carried out together with the client.
These data reflect the last few weeks, which are representative of the current status of the test automation process.
- Each night these tests run for a total of 30 hours of test time (they run in parallel execution channels, so the wall-clock duration is much shorter).
- Each night they involve 2 out of a set of 5 possible user platforms: Windows 10 with Chrome, Firefox, IE11, Edge, or Safari. Each platform is used several times each week.
- Tests are run on 4 different test environments, where the SUT differs either because of its code base or because different test data are used. One environment is a pre-production one where automated smoke tests are used.
- Each night, at least 50 different test reports are generated automatically.
- In total there are more than 230 test cases that “touch” the SUT in at least 1,750 unique touch points. These are either control points, where the test case drives the SUT, or test oracles, where the test case checks some property of the SUT.
- During each test execution, for each touch point being exercised, a screenshot of the SUT’s GUI is automatically produced and included in the test execution report. In total, more than 24,000 screenshots are collected per night.
- Test cases have been developed following the “Specification by example” approach, and they form the basis of living documentation of the SUT’s logic. They can be automatically extracted to feed a test book of 334 pages. Each test case is linked to the requirements it covers, and the test book includes those requirements as well.
- The test harness that supports these abstract test cases is written in Java and comprises a total of 469,000 lines of source code. Most of it is updated automatically thanks to the model-driven approach that we are following.
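To make the distinction between control points and test oracles concrete, here is a minimal, self-contained sketch in Java. All names here (`TouchPointDemo`, `FakeSut`) are hypothetical and do not come from the actual CUTE harness; the fake SUT stands in for the real web application.

```java
public class TouchPointDemo {

    // A stand-in for the web application's GUI, reduced to a single field.
    static class FakeSut {
        private String amountField = "";
        void enterAmount(String amount) { this.amountField = amount; }
        String displayedAmount() { return this.amountField; }
    }

    public static void main(String[] args) {
        FakeSut sut = new FakeSut();

        // Control point: the test case drives the SUT.
        sut.enterAmount("100.00");

        // Test oracle: the test case checks a property of the SUT.
        if (!sut.displayedAmount().equals("100.00")) {
            throw new AssertionError("oracle failed: amount not displayed");
        }
        System.out.println("1 control point and 1 oracle exercised");
    }
}
```

In the real harness, each such control point or oracle would also trigger the screenshot capture described above.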
This state of the testing process was reached after working with this client for the last 24 months. Since we follow a risk-driven agile development approach, these numbers keep increasing month by month:
- Each month, total nightly test time grows by at least 2 hours.
- Each month, about 100-120 new touch points are added to the existing ones.
- Each month, 10-20 new test cases are added.
This development rate of testware is counterbalanced by a relatively lightweight involvement of the client’s staff. In the last several months, the client’s effort has been essentially one person-day per month for risk assessments, plus about half an hour per day to inspect test reports; hence a total of roughly 3 person-days per month. No changes are needed on the SUT, which we treat entirely as a black box.
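As a back-of-the-envelope check of the 3 person-days figure, assuming reports are inspected every calendar day and a person-day of 7.5 hours (both assumptions of this sketch, not stated above):

```java
public class ClientEffort {
    public static void main(String[] args) {
        double riskAssessmentDays = 1.0;          // one person-day per month
        double inspectionHoursPerDay = 0.5;       // half an hour per day
        double daysPerMonth = 30;                 // assumed: daily inspection
        double hoursPerPersonDay = 7.5;           // assumed working-day length

        double inspectionDays = inspectionHoursPerDay * daysPerMonth / hoursPerPersonDay;
        double total = riskAssessmentDays + inspectionDays;
        System.out.println(total + " person-days per month"); // 3.0 person-days per month
    }
}
```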
If we were to compare this volume of testing to its manual equivalent (the so-called “Estimated Manual Test Effort”), the client would have to sustain this kind of effort:
- 3 full-time equivalents (FTEs) to execute the tests, each night;
- 6 FTEs to write the detailed test reports, each night;
- at least 0.5 FTE to update the living documentation embedded in the test book, once a month.
One of the metrics that the client monitors is the number of hot-fixes created after each release (releases happen at least once per quarter). The number of hot-fixes per release was 5-7 when the testing process began, and has been down to 0-1 for the last 5 releases: at least an 80% decrease.
Our conclusion is that, despite several myths to the contrary, automating user acceptance tests can be done in a sustainable way.
In a future post I will present data regarding the specific model-driven approach that we are adopting.