Of good design and testing
Most of my beliefs about software design come from a few experiences I’ve learned from and hope not to repeat. These experiences aren’t always of my own making; some come from inheriting existing systems. You should learn from your own mistakes, obviously, but learning from the example of others is also of immeasurable utility, and such examples are all around us. Two of these examples, which I wish to discuss here, are a design and test suite for managing documents between Mongo collections as a ‘transaction’ (even though transactional databases were available), and a large suite of unit and regression tests in a financial system that mixed presentation with the data model. From these specific experiences I’ll describe a few of the key things that I’ve learned.
First, good design obviates testing. If you use the right tool for the job and use a transactional database where “eventually consistent” doesn’t meet the use case, then you won’t need unit and regression testing around such an anti-feature. Ignoring the work and guarantees provided by those who build databases as a full-time job, and who are typically much better at it, is a major red flag for me these days. If you must use a non-transactional database, then at the very least refactor your application to rely on whatever atomic features that system does provide. Forcing a square peg into a round hole leads to complexity, bugs, and extreme difficulty in extending features.
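To make the contrast concrete, here is a minimal sketch of what a transactional engine gives you for free. The original story involved MongoDB collections; I’m using Python’s stdlib sqlite3 as a stand-in, and the tables and `archive_document` function are purely illustrative, not from the original system.

```python
import sqlite3

# Hypothetical schema: a document lives in "inbox" and must be moved to
# "archive" as a single logical operation.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inbox (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("CREATE TABLE archive (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO inbox VALUES (1, 'hello')")
conn.commit()

def archive_document(conn, doc_id):
    """Move a row between tables atomically: both writes commit or neither does."""
    with conn:  # opens a transaction; commits on success, rolls back on exception
        row = conn.execute(
            "SELECT id, body FROM inbox WHERE id = ?", (doc_id,)
        ).fetchone()
        conn.execute("INSERT INTO archive VALUES (?, ?)", row)
        conn.execute("DELETE FROM inbox WHERE id = ?", (doc_id,))

archive_document(conn, 1)
```

With the database enforcing atomicity, there is no partially-moved document to defend against, and so no suite of tests simulating crash-between-writes scenarios. Hand-rolling the same guarantee across two non-transactional collections is exactly the square peg the paragraph above describes.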
In the same vein of “good design” taking priority, an app mixing presentation, model, and control leads to a Rube Goldberg Machine of a design. Having extensive unit and regression testing in such an application leads to a false sense of security and accomplishment, and brings to mind the quote from Tommy Boy:
Ted: But why do they put a guarantee on the box then?
Tommy: Because they know all they solda ya was a guaranteed piece of shit. That's all it is. Hey, if you want me to take a dump in a box and mark it guaranteed, I will. I got spare time.
Having supported and worked on such an app, I can say that deep structural design problems become baked into the tests and are very difficult to unwind. In many cases this ended with building a parallel portion of the application, one that did follow proper software design principles, and supplanting the old functionality. The reason for the mixing of layers in this legacy app, and the tight coupling to the testing, was that the tests were written alongside an alpha version of the application, with little or no opportunity to scrap them (or the design) once written.
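The layering problem is easiest to see in miniature. Below is a hypothetical sketch (the function names and the interest formula are my own invention, not from the financial system in question) of the separation that the legacy app lacked: the model function is pure and trivially testable, while the presentation function only formats what the model already computed.

```python
def compute_interest(principal: float, rate: float, years: int) -> float:
    """Pure model logic: no formatting, no I/O, easy to test in isolation."""
    return principal * (1 + rate) ** years - principal

def render_statement(principal: float, interest: float) -> str:
    """Presentation only: formats values the model already computed."""
    return f"Principal: ${principal:,.2f}  Interest earned: ${interest:,.2f}"

interest = compute_interest(1000.0, 0.05, 2)
statement = render_statement(1000.0, interest)
```

When the two concerns are fused into one function, every test of the arithmetic must also assert on the formatted string, so a cosmetic change to the presentation breaks the whole suite — which is exactly how design problems get baked into tests.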
This leads to a second maxim: Test Driven Development belongs only in apps that have a solid, established design. In green-field work, TDD should not be applied until an alpha and a beta have been established and the fundamental architecture decided for that use case. The alternative is that it will bake in design assumptions that will not be easy to change when a new understanding is reached and a structural component of the application must change. Build the foundation first with no testing, then add testing to a beta version of the application. Project management should explicitly add a stage to the timeline for this, and engineers should not offer it as optional. After a beta release, TDD should be the standard where feasible.
Design of an application is more important than the testing of it. That isn’t a contradiction of what I just said: testing is a very important part of any application released for customer use, and no application should ship without it. Both of those things being true, if I’m given a very short deadline on which to deliver a feature or an application, I will spend more time on design than on testing. Good design is used by the customer, the developer, and any support staff. The testing framework (no matter what anyone tells you) is used only by the developer.
I’ve worked with multiple well-designed applications that had very little testing and were quite successful. That judgment is based on the low frequency of bugs and on the ease and low risk of adding new features or fixing those rare bugs. It can easily be argued that though there wasn’t extensive testing, there was an appropriate amount for those applications.
The applications that had extensive testing and very poor design can easily be described as failures, since they didn’t meet the customers’ needs and couldn’t be extended or fixed easily. In those cases, the testing either didn’t matter or was a hindrance to fixing them.
To summarize: concentrate on design, then make sure it is tested, and if you’re writing a lot of code to add a feature that you could get by choosing the right architectural component, then scrap that code and add the component to your stack.