At 06:31 PM 07/06/2002 -0700, Joseph Ashwood wrote:
> First, closed source testing, beginning in the late Alpha testing stage, is generally done without any assistance from source code, by _anyone_; this significantly hampers the testing.
> This has led to observed situations where QA engineers sign off on products that don't even function, let alone have close to 0 bugs.
I don't know where *you* develop software, but anywhere I've been or seen, QA engineers signing off on non-functional products would lead either to serious re-education for the engineers or to an internal understanding that the company values ship dates above functionality (not that that's unknown in the industry, and admittedly closed-source development shops are more likely to have business models that emphasize shipping, but Darwin fixes them.)

One of the purposes of having QA people work without source is that they're then actually testing the documented functions of the product, rather than testing whatever the code looks like it can do. Except for products that are themselves code designed to be integrated into other programs, where the actual code matters, that modularity is critical: you test the interfaces, not the innards. It's still an infinite problem, but it's much less infinite, and it lets you hit what you need. If the interfaces don't work, you're supposed to figure that out, though late-alpha code is sometimes known to be missing major pieces -- you usually work around that by writing lots of test drivers. You're also supposed to find out whether there are missing pieces that somehow escaped the design phase.

White-box testing lets you go beyond that, to find the subtle nasty bugs that escaped unit testing and developer code reviews, and to get a better view of the holes in the system that a malicious attacker could break through. It's important for security, because there are often things you don't find during black-box testing.
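To make the interfaces-not-innards point concrete, here's a minimal sketch of the kind of test driver I mean, in Python. The record-store module and its put/get/delete contract are hypothetical stand-ins for whatever documented interface the real product exposes, and the fake class exists only so the sketch runs on its own; the point is that the tests exercise documented promises, not internals.

import unittest

# Hypothetical documented interface: put(key, value), get(key), delete(key),
# where get() on a missing key raises KeyError.  The tests below exercise
# only those documented promises -- black-box, interface-level testing.


class FakeRecordStore:
    """Stand-in for the unit under test so this sketch is self-contained.
    In real use you'd import the product's actual module instead."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data[key]  # raises KeyError if missing, as documented

    def delete(self, key):
        del self._data[key]


class TestRecordStoreInterface(unittest.TestCase):
    def setUp(self):
        self.store = FakeRecordStore()

    def test_put_then_get_returns_value(self):
        self.store.put("alice", 42)
        self.assertEqual(self.store.get("alice"), 42)

    def test_get_missing_key_raises(self):
        # The documentation says missing keys raise KeyError; test the promise.
        self.assertRaises(KeyError, self.store.get, "nobody")

    def test_delete_removes_key(self):
        self.store.put("bob", 1)
        self.store.delete("bob")
        self.assertRaises(KeyError, self.store.get, "bob")


if __name__ == "__main__":
    unittest.main()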
> The software engineers, in turn, believe that because the code was signed off, it must be bug-free. This is a rather substantial problem.
A much more serious problem is coders and testers believing that the design documentation they're working from, if any, reflects what the system is supposed to do and what it's not supposed to do.
> To address this problem, one must actually correct the number of testers to account for the ones who are effectively doing nothing.
In commercial code with internal testers, you're not supposed to need this. Obviously, for community-tested code, duplication in coverage is common, and one challenge for the open-source business is finding ways to improve test coverage in volunteer testing by adding some kind of coordination.
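For what "some kind of coordination" might look like at its very simplest, here's a hypothetical sketch in Python: a registry mapping documented features to the volunteers who've reported testing them, so a project can see both the duplicated effort and the areas nobody has touched. The feature names and report format are invented for illustration, not taken from any real project.

from collections import defaultdict

# Hypothetical coordination sketch: track which documented features each
# volunteer tester has reported covering, so duplicated effort and
# untested areas are both visible.

DOCUMENTED_FEATURES = {"login", "key generation", "key export", "revocation"}


def summarize(reports):
    """reports: iterable of (tester, feature) pairs sent in by volunteers."""
    coverage = defaultdict(set)
    for tester, feature in reports:
        coverage[feature].add(tester)
    untested = DOCUMENTED_FEATURES - set(coverage)
    duplicated = {f: sorted(t) for f, t in coverage.items() if len(t) > 1}
    return untested, duplicated


if __name__ == "__main__":
    reports = [
        ("alice", "login"),
        ("bob", "login"),
        ("carol", "key generation"),
    ]
    untested, duplicated = summarize(reports)
    print("Nobody has tested:", sorted(untested))
    print("Duplicated effort:", duplicated)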