Software Paradoxes: The Cost of Trying Too Hard
Software development is full of paradoxes, the classic being Fred Brooks's observation that adding more programmers to a late project tends to make it later and the product worse, primarily because the cost of communication between team members grows quadratically with team size.
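The quadratic growth is easy to see in numbers. A minimal sketch (the function name `channels` is just illustrative): with n team members, there are n·(n−1)/2 pairwise communication channels, so doubling the team roughly quadruples the coordination overhead.

```python
def channels(n: int) -> int:
    """Number of pairwise communication channels in a team of n people."""
    return n * (n - 1) // 2

# Each doubling of the team roughly quadruples the channel count.
for n in (2, 5, 10, 20):
    print(f"{n:>2} people -> {channels(n):>3} channels")
```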
Raymond Chen likes to point out that some of the lessons taught in school can be counterproductive in the real world. At PDC last year he presented "What Every Developer Should Know," which I mentioned in my post on algorithmic complexity. Two of his recent posts discuss the pitfalls of two common data structures/algorithms, with examples from the Windows operating system:
I am actually fond of splay trees because of their simplicity (compared to AVL and red-black trees) and their amortized performance guarantees. The disadvantages he points out, such as inconsistent search times, can be mitigated fairly easily.
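To show what I mean by simplicity, here is a minimal sketch of a splay tree in Python (a standard recursive formulation; the names `splay`, `insert`, and `search` are my own). There are no balance factors or colors to maintain: every access just rotates the touched key to the root, which is what yields the amortized O(log n) bound.

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def rotate_right(x):
    y = x.left
    x.left, y.right = y.right, x
    return y

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    return y

def splay(root, key):
    """Bring the node with `key` (or the last node on the search path)
    to the root via zig / zig-zig / zig-zag rotations."""
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        if key < root.left.key:                       # zig-zig
            root.left.left = splay(root.left.left, key)
            root = rotate_right(root)
        elif key > root.left.key:                     # zig-zag
            root.left.right = splay(root.left.right, key)
            if root.left.right is not None:
                root.left = rotate_left(root.left)
        return root if root.left is None else rotate_right(root)
    else:
        if root.right is None:
            return root
        if key > root.right.key:                      # zig-zig
            root.right.right = splay(root.right.right, key)
            root = rotate_left(root)
        elif key < root.right.key:                    # zig-zag
            root.right.left = splay(root.right.left, key)
            if root.right.left is not None:
                root.right = rotate_right(root.right)
        return root if root.right is None else rotate_left(root)

def insert(root, key):
    if root is None:
        return Node(key)
    root = splay(root, key)
    if root.key == key:
        return root
    node = Node(key)
    if key < root.key:
        node.right, node.left, root.left = root, root.left, None
    else:
        node.left, node.right, root.right = root, root.right, None
    return node

def search(root, key):
    """Returns the new root and whether the key was found.
    A successful search leaves the key at the root."""
    root = splay(root, key)
    return root, root is not None and root.key == key
```

Recently accessed keys sit near the top, so repeated or clustered lookups are fast; the "inconsistent search time" complaint is the flip side of that same self-adjustment.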
But I will agree that, despite their prominence in data structures and algorithms books, advanced string-searching algorithms such as Knuth-Morris-Pratt and Boyer-Moore are rarely more advantageous than naive search in most applications. Search strings are usually short, and the code for the naive algorithm is much simpler and more maintainable.
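The naive algorithm really is just a handful of lines; a sketch (the function name `naive_find` is illustrative):

```python
def naive_find(haystack: str, needle: str) -> int:
    """Return the index of the first occurrence of needle, or -1.

    Worst case O(len(haystack) * len(needle)), but for short needles
    on typical inputs it is effectively linear, with no preprocessing
    tables to build or get wrong.
    """
    n, m = len(haystack), len(needle)
    if m == 0:
        return 0
    for i in range(n - m + 1):
        # Compare needle against the window starting at i.
        if haystack[i:i + m] == needle:
            return i
    return -1
```

Compare that to KMP, which needs a failure-function table, or Boyer-Moore, which needs bad-character and good-suffix tables: the constant factors and the maintenance burden usually dwarf any asymptotic win when the needle is a few characters long.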