In today’s highly competitive IT business, companies face intense pressure to be as effective and efficient as possible in developing and delivering successful software solutions.
If you don’t find strategies to reduce the cost of software development, your competitors will, allowing them to undercut your prices, to offer to develop and deliver products faster, and ultimately to steal business from you. Often in the past, testing was an afterthought; now it is increasingly seen as the essential activity in software development and delivery. However, poor or ineffective testing can be just as bad as no testing and may cost significant time, effort, and money, but ultimately fail to improve software quality, with the result that your customers are the ones who find and report the defects in your software! If testing is the right thing to do, how can you ensure that you are doing testing right?
If you ask managers involved in producing software whether they follow industry best practices in their development and testing activities, almost all of them will confidently assure you that they do. The reality is often far less clear; even where a large formal process documenting best development and testing practice has been introduced into an organization, it is very likely that different members of the team will apply their own testing techniques, employ a variety of different documentation (such as their own copies of test plans and test scripts), and use different approaches for assessing and reporting testing progress on different projects. Even the language is likely to be different, with staff using a variety of terms for the same thing, as well as using the same terms for different things! Just how much time, effort, and money does this testing chaos cost your organization?
Can you estimate just how much risk a project carries in terms of late delivery, with poor testing resulting in the release of poor-quality software? To put this in perspective, the U.S. National Institute of Standards and Technology recently reported that, for every $1 million spent on software implementations, businesses typically incur more than $210,000 (roughly a fifth to a quarter of the overall budget) of additional costs caused by problems associated with the impact of post-implementation faults.
The most common reason that companies put up with this situation is that they take a short-term view of the projects they run; it seems much better to just get on with it and “make progress” than to take a more enlightened, but longer-term, view and actually address and fix the problems.
Many organizations are now adopting some form of formal test process as the solution to these problems. In this context, a process provides a means of documenting and delivering industry best practice in software development and testing to all of the staff in the organization. The process defines who should do what and when, with standard roles and responsibilities for project staff, and guidance on the correct way of completing their tasks. The process also provides standard reusable templates for things like test plans, test scripts, and testing summary reports, and may even address issues of process improvement. Although there have been numerous attempts to produce an “industry standard” software testing process (e.g., the Software Process Engineering Metamodel), many practitioners and organizations express concerns about the complexity of such processes. Typical objections include:
_ “The process is too big” – there is just too much information involved and it takes too long to roll out, adopt, and maintain.
_ “That’s not the way we do things here” – every organization is different and there is no one-size-fits-all process.
_ “The process is too prescriptive” – a formal process stifles the creativity and intuition of bright and imaginative developers and testers.
_ “The process is too expensive” – if we are trying to reduce the cost of software development, why would we spend lots of money on somebody else’s best practices?
Interestingly, even where individuals and organizations say they have no process, this is unlikely to be true – testers may invent it on the fly each morning when they start work, but each tester will follow some consistent approach to how they perform their testing. It is possible for this “approach” to be successful if you are one of those talented super testers or you work in an organization that only hires “miracle QA” staff. The rest of us need to rely on documented best practices to provide guidance on the who, the what, and the when of testing, and to provide reusable templates for the things we create, use, or deliver as part of our testing activities.
So, here is the challenge: how is it possible to produce good-quality software, on time and to budget, without forcing a large, unwieldy, and complex process on the developers and testers, but still providing them with sufficient guidance and best practices to enable them to be effective and efficient at their jobs? To restate this question, what is the minimum subset of industry best practice that can be used while still delivering quality software?