Software engineers don’t develop products in a vacuum. We rely on high-level languages, frameworks, and SDKs to get the job done. Even those who create drivers, operating systems, or virtual machines in assembler are confined to the instruction set supported by the target processor. Every engineer in our industry stands on the shoulders of giants.
How can we be sure our tools will always behave as expected? In truth, it’s rarely feasible to be completely sure they all will. Intel once shipped a processor that divided incorrectly on only 0.0000000114% of its input space. Operating systems and applications seemed to run just fine on it, but a math professor eventually traced strange results back to a bug in the processor itself. Have you tested your processor’s entire instruction set on all possible inputs and confirmed that every result is correct? What about doing the same for the JIT compiler, IL, .NET Framework, and the .NET high-level languages and their compilers? I’d like to see you try--really, I would.
Few engineers would ever take that challenge seriously, and yet we solve real problems with these tools every day. Even though we can’t confirm their correctness, we need enough confidence in our tools to justify building our products on them. Here are some factors that improve my confidence in a tool:
Low-level details are cleanly hidden by a definitive interface specification.
All versions have backward-compatible interface specifications.
Real-world experience with the tool is consistent with its specification.
Past releases include bug fixes.
Many other engineers depend on it for similar use cases.
Uncertainties about the tool are addressed quickly through the producer’s support system or sites like Stack Overflow.
These factors tell me two important things: the producer respects the tool’s abstraction layer, and other engineers appreciate the value of that abstraction. Without reliable, practical abstractions, engineers don’t have a firm foundation to build on.
After building enough confidence in a tool to develop production applications on top of it, we’re still not done assessing the tool. As long as we use that tool or anything that depends on it, we should monitor for inconsistencies with the tool’s specification.
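To make that monitoring concrete, here is a minimal sketch of a “specification pinning” test (Python is used here for brevity; the function names are illustrative, not anything from the text above). Each check encodes a fact promised by a dependency’s documented interface, so an upgrade that breaks the contract fails in our build rather than in production:

```python
import json

# "Specification pinning": each check asserts a fact from a dependency's
# documented interface.  If a new release of the dependency breaks one of
# these guarantees, the build tells us before our product does.

def json_round_trips_basic_types():
    # json.dumps/json.loads are documented to round-trip dicts, strings, ints.
    original = {"name": "café", "count": 3}
    return json.loads(json.dumps(original)) == original

def json_coerces_int_keys_to_strings():
    # Documented behavior: non-string dict keys are coerced to strings.
    return json.dumps({1: "one"}) == '{"1": "one"}'

assert json_round_trips_basic_types()
assert json_coerces_int_keys_to_strings()
```

Tests like these are cheap to run on every build, and when one fails, it points directly at the inconsistency between the tool and its specification.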
Defects and unexpected behaviors sometimes surface in the tools we use in our implementations. It’s not fun, but it happens. How can we minimize their negative economic impact? Studies discussed in Steve McConnell’s Code Complete found that fixing a defect after product release costs 10 to 100 times as much as fixing it when it was first introduced. It’s unclear whether those studies distinguish between defects originating in internal modules and those in third-party modules. What is clear is this: handling defects sooner rather than later in the product life-cycle reduces overall expense.
Testing helps us identify defects, determine their scope, communicate clearly about the problem, and confirm whether code changes resolve it. Every now and then, debates spring up in the blogosphere over how much utility testing actually gives us. If your goal is to develop a profitable product, the utility of your engineering practices needs to be evaluated against that goal. Do what you can to reduce the overall cost of development (including maintenance) and maximize market penetration. The nature of your product and target market should determine whether it’s best to test by manual trial and error, formal verification, or something in between.
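One example of that middle ground is a randomized property check: sample the input space and verify guarantees the specification makes, trading exhaustiveness for economy. A minimal sketch in Python (the function and its parameters are illustrative; a real project might reach for a property-based testing library such as Hypothesis):

```python
import random
from collections import Counter

def check_sorted_properties(trials=1000, seed=42):
    """Randomly sample inputs and check properties sorted() is documented to hold."""
    rng = random.Random(seed)  # fixed seed keeps failures reproducible
    for _ in range(trials):
        data = [rng.randint(-10**6, 10**6) for _ in range(rng.randint(0, 50))]
        result = sorted(data)
        # Property 1: the output is in non-decreasing order.
        assert all(a <= b for a, b in zip(result, result[1:]))
        # Property 2: the output is a permutation of the input.
        assert Counter(result) == Counter(data)
    return trials

check_sorted_properties()
```

This sits between manual spot checks and formal proof: cheap enough to run on every build, yet it exercises far more of the input space than a handful of hand-picked examples ever would.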