What are the reasons why sustainable testing is not achieved?
In my experience, based on discussions and interviews with several development teams and managers, I have found four major barriers.
First and foremost, a poor definition of the testing process. Often there is no explicit testing policy (for example, on why to do unit testing), no explicit strategy outlining how policies are to be implemented, no definition of testing-related roles, and no test plan. As a result, testing is often left to the initiative of a few intrepid individuals, with no coordination, leading to divergent behavior. There is no economic assessment, no monitoring of the testing process, no specific goals, and no plan to achieve them. During emergencies, whatever testing process was being carried out is suspended. There are also myths about testing that are difficult to dismantle. One myth is that testing delays the release of the product: true, but only when testing is badly managed. In fact, a properly defined testing process shortens the cycle time because it supports the team. Another myth is that testing is not useful, based on the assumption that there will be just a few bugs, with minor effects. Finagle’s law shows that this is not the case. A similar myth is that the client is not willing to pay extra for the testing effort, because it has no visible effects for the client. In fact, testing activities should not be separated from development activities: testing should always be considered an integral part of developing software.
Second, the lack of an appropriate software testing automation solution. Test automation is a necessary ingredient of an agile software development recipe. It might have to cover different testing levels (such as unit testing, system testing, acceptance testing, smoke testing), it might be governed by a multi-pronged policy (such as one requiring the development of an automated test whenever a new bug is reported, as well as one prescribing Test-Driven Development), and it will rely on different testing frameworks.
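As a small, hypothetical illustration of the bug-driven policy mentioned above (the function name and bug number are invented for the example): when a bug is reported, the fix ships together with an automated regression test that reproduces the originally reported failure, so the bug cannot silently return.

```python
# Hypothetical example of the "a test for every new bug" policy.
# Bug report #123: parse_price("1,50") crashed because of the
# comma decimal separator. The fix and its regression test travel together.

def parse_price(text: str) -> float:
    """Parse a price string, accepting ',' as a decimal separator."""
    return float(text.replace(",", "."))

def test_bug_123_comma_decimal_separator():
    # Reproduces the originally reported failure case.
    assert parse_price("1,50") == 1.5

def test_plain_decimal_still_works():
    # Guards against the fix breaking the previously working input.
    assert parse_price("2.75") == 2.75

if __name__ == "__main__":
    test_bug_123_comma_decimal_separator()
    test_plain_decimal_still_works()
```

With a test runner such as pytest in place, tests like these are picked up automatically, and the suite becomes a growing record of every bug the team has ever fixed.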
Stumbling blocks include poor testability of the existing code base (in fact, several test automation patterns can be applied to tackle the problem, such as the “humble object” pattern). A poor implementation of a test automation approach might lead management to perceive it as an activity that does not provide value, as the development of a second software system that has to be maintained alongside the primary one, or as yet another sink for development time. Sometimes, poor quality of the test code itself leads to heavy maintenance of that code base, which becomes difficult to extend, to change, to refactor, and to verify. Another important blocking factor is management giving developers little time to learn how to apply tools and known practices.
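To make the “humble object” idea concrete, here is a minimal sketch (class names and the discount rule are invented for illustration): the hard-to-test behavior, such as driving a real display, is squeezed into a thin “humble” shell, while the decision logic moves into a plain object that can be unit-tested without any UI in the loop.

```python
# Minimal sketch of the Humble Object pattern (names are illustrative).

class DiscountCalculator:
    """Testable logic: no I/O, no framework dependencies."""

    def discount_for(self, total: float) -> float:
        # 10% discount on orders of 100 or more (example rule).
        return round(total * 0.10, 2) if total >= 100 else 0.0

class CheckoutScreen:
    """Humble shell: delegates every decision, only does display work."""

    def __init__(self, calculator: DiscountCalculator):
        self.calculator = calculator

    def show_discount(self, total: float) -> None:
        print(f"Your discount: {self.calculator.discount_for(total):.2f}")

# The logic is now unit-testable without rendering anything:
assert DiscountCalculator().discount_for(150.0) == 15.0
assert DiscountCalculator().discount_for(50.0) == 0.0
```

The shell stays so trivial that it barely needs testing, which is exactly the point: the untestable surface shrinks instead of dragging the whole feature into it.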
The third major barrier is a sub-optimal testing workflow. People waste time designing poor test cases that provide little value (whereas knowledge of a few basic test design techniques and a risk-driven approach to prioritization could greatly improve the situation); time might be wasted manually executing test cases that could be automated; bugs escape the lab because effective techniques such as exploratory testing are not employed; and time is wasted creating test reports that provide little value. Time is also wasted because of a wall between programmers and testers, and between analysts and testers: testers might erroneously complain that without proper specifications they cannot do testing, while programmers may carelessly deliver low-quality software artifacts because they know somebody else will eventually catch the problems.
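As a hypothetical example of one such basic test design technique, boundary value analysis: for a rule like “free shipping from 50.00 upward” (a made-up rule for illustration), the valuable test cases sit at and around the boundary, not at arbitrary values picked by intuition.

```python
# Boundary value analysis sketch (the shipping rule is invented).

def free_shipping(order_total: float) -> bool:
    """Orders of 50.00 or more ship for free."""
    return order_total >= 50.0

# Boundary-focused cases: just below, exactly at, and just above
# the threshold. Three well-chosen cases beat a dozen random ones.
cases = [(49.99, False), (50.00, True), (50.01, True)]
for total, expected in cases:
    assert free_shipping(total) == expected
```

A handful of deliberately chosen cases like these typically finds more off-by-one defects than a much larger set of values chosen at random.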
Finally, the fourth source of problems is poor management. Some managers struggle to understand that testing requires skilled people. The necessary skills include the ability to manage the process, to design test cases, to carry out exploratory testing, and to automate tests, and these skills are as technical and as difficult as those required of a typical developer. Hence, a good tester should be as valued as a good developer. And nurtured, I’d hasten to say. In striving to meet a budget, managers often assume that by reducing the testing effort, or the quality of the testing process, they will still get the same kind of product and the same kind of development process. As is usually the case when cutting corners, this is not true. Cheap testing is not economically viable testing.
I’d love to hear what you have to say. Do you agree with these points? Have you experienced other stumbling blocks? Have you found smart ways to overcome some of those problems?