Experimenting: Touch Point Coverage

Areas where tests prevent refactoring pinpoint where you can improve your code quality. Useful for teams that rely mostly on E2E tests and want an easy path toward unit testing.
How many times have you been told to "Test behaviors, not implementation details"? Of course, nods all around. Until you realise no one can agree on what that means. This idea keeps popping up in my conversations with engineering teams. Improving your test design requires rules that are pedantic yet simple to follow at the same time. The closest name I've come up with so far (and I'm terrible at naming these things, sorry!) is Touch Point Coverage.

Touch Point Coverage

The number of public interfaces touched in a test suite per line of coverage. There are some caveats, of course, when measuring too naïvely; I'll get to those in a moment. TPC is less of a metric and more of an intuition about how tightly the testing surface area couples to the underlying class or function structures. A lower percentage is better, unless it is zero.
Okay, that's the basics. I want your feedback, so hit me!

Caveats

As always, it's easy to poke holes in rules that are too simplistic, so here are a few examples with context on where this could be used and where it doesn't make sense. What did you find? Did I miss anything obvious?

What about end-to-end tests?

E2E tests would have a very low percentage of "public interface" touched, but they are too wide. End-to-end testing's actual deterministic coverage is likely zero. It smokes out possible issues, but because E2E tests can fail for multiple reasons, they do not provide adequate coverage for the unit under test. In addition to E2E and smoke tests, you still want tests that are faster and more deterministic, focused on the failure of only one behavior at a time, exercised through a shared interface.

What about testless workflows?

Event Modeling comes to mind. Its up-front design produces public interfaces and desired behaviors that focus on provability and correctness. The TPC of such a system would be close to 100% when tested against each component's behavior specs (Given-When-Then), even when not done in a TDD fashion or with zero abstractions. The effort required to repeatedly do small design up front diligently and keep the models up to date would be worth it. For example, Adam Dymitruk's business is focused on event modeling and event sourcing as its core process and famously does no TDD or technical shenanigans, relying purely on the correctness of the model. I think that's okay, but the small design up front with a detailed event model is a huge factor in this, along with the experience of keeping the data flow behavior-focused. I'd need to talk to Adam to get his insights on this.

EDIT: Perhaps in an event model the "public interface" can be focused on the views and projections rather than actual coding structures (classes, functions) to maintain the efficacy of the metric.
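Both the E2E caveat and the Given-When-Then specs above point at the same shape of test: one behavior, asserted through one public interface. Here is a minimal sketch of such a behavior spec, using a hypothetical `Cart` interface (the class and its methods are invented purely for illustration):

```python
# A minimal Given-When-Then style behavior test. Cart is a hypothetical
# public interface; the test never reaches into its internal fields.


class Cart:
    """Toy implementation so the example runs on its own."""

    def __init__(self) -> None:
        self._items: list[tuple[str, float]] = []

    def add_item(self, name: str, price: float) -> None:
        self._items.append((name, price))

    def total(self) -> float:
        return sum(price for _, price in self._items)


def test_total_reflects_added_items() -> None:
    # Given an empty cart
    cart = Cart()
    # When two items are added through the public interface
    cart.add_item("book", 12.50)
    cart.add_item("pen", 2.50)
    # Then the total reflects exactly that one behavior
    assert cart.total() == 15.00


if __name__ == "__main__":
    test_total_reflects_added_items()
    print("behavior spec passed")
```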
Conclusion

Thoughts? I help engineering leads cut through the noise to get teams on track to greatness.