The Fairway Technologies Blog

The CTO's Quick Guide to Test-Driven Development

December 19, 2018

You have known about Test-driven Development (TDD) for years. You’re aware that modern application frameworks were designed with testability as a primary concern, and that unit testing and continuous integration have become industry standards. You have a hunch that TDD will benefit your teams, yet you haven’t embraced the practice! You are out of excuses, and you may even feel behind the curve. No worries—let’s get you going with TDD.

Let’s review the basics. What is TDD? TDD is an Extreme Programming (XP) concept that encourages simple design and inspires confidence by following the "Red, Green, Refactor" mantra. This is achieved by writing a failing test around a new feature, writing enough code to make the test pass, and then refactoring the code to acceptable standards.

Personally speaking, TDD has had a huge impact on my coding career. I started TDD about eight years ago after a couple of false starts. It took about two weeks to find a rhythm. Now, I can’t do without it. In my work as a Director of Software Engineering, I strongly recommend having TDD in place on every project. What’s funny is that even though the name emphasizes tests, for me, it is not about testing. TDD generates superior software, which brings us to a question: why is that?

#1: Design

There’s a reason some folks refer to TDD as Test-driven Design. TDD drives simple, clear design. Building tests at the beginning of a project makes your team stop and think about each feature, and the interaction between features, before you start. This process naturally drives decoupling. Creating tests ahead of time helps ensure that every piece of the puzzle has a single responsibility and that each piece works independently. If you follow the TDD guidelines and principles, the applications you build will be reusable and maintainable—exactly what we are all aiming for in the first place.

There are countless best practices to keep in mind when you’re designing a testable codebase: the Single Responsibility Principle (SRP), high cohesion and low coupling, Don't Repeat Yourself (DRY), separation of concerns, the Law of Demeter, and the Dependency Inversion Principle, just to name a few. But rather than overwhelm you with lists of design fundamentals, I’ll offer basic advice that is sure to get you off on the right foot. You simply cannot test your system unless you have control over its dependencies, so code against interfaces and embrace dependency injection. If you are disciplined and keep to these rules, your application design is much more likely to be sleek, efficient, and extensible.

Tip: Favor constructor or setter injection over the service locator pattern and use a factory pattern to isolate object creation throughout your codebase. 
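To make the tip concrete, here is a minimal sketch of constructor injection in Python. The `EmailSender` interface, `RegistrationService`, and fake are hypothetical names invented for illustration, not part of any particular framework:

```python
from abc import ABC, abstractmethod

class EmailSender(ABC):
    """Abstraction the service depends on; all names here are hypothetical."""
    @abstractmethod
    def send(self, to: str, body: str) -> None: ...

class RegistrationService:
    # Constructor injection: the dependency is handed in, never created here.
    def __init__(self, email_sender: EmailSender) -> None:
        self._email_sender = email_sender

    def register(self, email: str) -> None:
        self._email_sender.send(email, "Welcome!")

class FakeEmailSender(EmailSender):
    # A test double that satisfies the same interface and records calls.
    def __init__(self) -> None:
        self.sent = []

    def send(self, to: str, body: str) -> None:
        self.sent.append((to, body))

fake = FakeEmailSender()
RegistrationService(fake).register("ada@example.com")
print(fake.sent)  # [('ada@example.com', 'Welcome!')]
```

Because the service never constructs its own collaborator, a test can hand it a fake and inspect exactly what happened, with no SMTP server in sight.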


#2: Documentation

The next benefit of TDD is documentation. Though some developers have an aversion to reading documentation, they are inclined to read code. That’s where unit tests shine as they offer up-to-date living documentation, which can be used by developers to gain an understanding of the API. Keep in mind, this means that tests need to be maintained just like other forms of documentation. If you maintain your tests, you already have documentation embedded into your design.

It is possible these same tests could be used by stakeholders to vet requirements and track delivered functionality. This is a bit Pollyannaish and is NOT going to happen straight out of the gate. But just FYI, there is an extension of TDD known as Acceptance Test-driven Development (ATDD) or Behavior-driven Development (BDD). As you first get going, I recommend keeping your TDD unit tests for developer eyes only, but maybe you'll make the leap to ATDD/BDD at some point.

Tip: Still, take a look at BDD's Gherkin syntax and its Given-When-Then naming.
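For a taste of that naming style, here is a small sketch: a unit test whose Given-When-Then name reads like a requirement. The `withdraw` function and its rules are invented for illustration:

```python
import unittest

def withdraw(balance: int, amount: int) -> int:
    # Hypothetical feature under test.
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

class WithdrawalTests(unittest.TestCase):
    # The Given-When-Then name reads like a requirement in plain English.
    def test_given_insufficient_funds_when_withdrawing_then_error_raised(self):
        with self.assertRaises(ValueError):
            withdraw(balance=10, amount=50)

result = unittest.TestResult()
WithdrawalTests("test_given_insufficient_funds_when_withdrawing_then_error_raised").run(result)
print(result.wasSuccessful())  # True
```

A stakeholder who never reads code can still skim a list of names like this and recognize the requirements they asked for.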

#3: Facilitate Change

One of my favorite byproducts of TDD is the ability to code without fear. Instead of wondering if you have broken something, just make your code change and re-run tests to make sure nothing broke. Now I often cringe when words like just or simple are used in software-talk, but it is just that simple. When you have a solid design and the safety net of tests, you’re set up for some relatively risk-free refactoring time.

There are empirical studies (such as the one led by Nagappan et al.) that show that TDD greatly reduces the chances of bugs, so there’s even more reason to code without fear. But even if you introduce a bug or you can’t get your test to pass after making a change, it’s cool; just roll back your code and try again. I’m creeping into the next section, Focus & Productivity, but to ensure that TDD works effectively when you make changes, source control is a must, as you will start to favor rollbacks over debugging.

Tip: Continuous integration is highly recommended to provide constant feedback on your build status. Take note: it is difficult to scale to a large number of tests (they run too long) without a build server.


#4: Focus & Productivity

TDD can also increase focus, which boosts the productivity of your team. Since TDD fosters confidence and a strong understanding of the code and domain, engineers can really home in on the problem they are solving. After all, we now code without fear, and it’s easy to grok a clear, simple design that’s documented. Bugs are found earlier, and developers spend less time with the debugger. Finally, TDD helps define done (code satisfies tests, which satisfy requirements) and success (no failing tests). These things help keep coders focused on effectively implementing a requirement and nothing more. TDD gets us coding with a purpose!

Tip: Three principles to help keep coders focused, productive and coding with a purpose: "Keep it simple, stupid" (KISS), "You ain't gonna need it" (YAGNI) and "Fake it till you make it."

Tip: Consider pair programming. It will keep developers even more on track and following TDD practices.

#5: Simplify Integration

Once you have unit tested each piece of your well-designed program, it becomes easier to test the sum of its parts with integration tests. Simply put, your application will naturally have more integration points (seams), yet because each unit is already verified, your integration testing requirements will be reduced. But don’t be fooled. You still need to integration test!

Also, coding can begin before dependencies are in place. Your team can mock/fake that third-party web service which isn't exposed yet and start coding as if it were already there. This really boosts productivity (maybe that should have been mentioned sooner).
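The standard library's `unittest.mock` makes this easy to sketch. Below, a hypothetical exchange-rate web service that hasn't been built yet is faked so work on `PriceConverter` can start today; every name here is invented for illustration:

```python
from unittest.mock import Mock

class PriceConverter:
    # Depends on a rate service that does not exist yet.
    def __init__(self, rate_client):
        self._rate_client = rate_client

    def to_eur(self, usd: float) -> float:
        rate = self._rate_client.get_rate("USD", "EUR")
        return round(usd * rate, 2)

# Fake the unbuilt dependency and code against it as if it were live.
rate_client = Mock()
rate_client.get_rate.return_value = 0.9

converter = PriceConverter(rate_client)
print(converter.to_eur(100.0))  # 90.0
```

When the real service ships, only the object handed to the constructor changes; `PriceConverter` and its tests stay put.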

Now that we’ve covered some of the benefits and best practices of TDD…

Let's Dig into Red, Green, Refactor!

This principle is essential to TDD and XP, so it’s worth going over in detail. This is the meat and potatoes of TDD—the mantra and set of directions that should be at the front of your mind. At the beginning, it can seem overwhelming to change your entire team’s way of doing things. But I assure you that it will be time and money well spent. So, let’s begin.

Red – Stop, think, and write a test.

In the first step, the “Red” step, stop and think: choose the next feature to implement and get your head around its requirements. You are creating a test that defines the new feature or enhancement, and you will be testing with purpose. Think about why you are testing something and at what level it needs to be tested. Take care, because a misunderstood requirement will lead to a test that verifies and documents the wrong thing!

Before you act, think through your design. Keep it simple and clear, and stay focused on the interface and behavior rather than the implementation. Create your test knowing it will be referenced by other developers to gain an understanding of the application. It's living documentation and sample code. Write your test as if the code already exists, and then create just enough production code so you can compile it. Merely produce a stub that can be asserted against. I have a confession—I often write the code stub before the test. That's just how I roll.

Now, run the test. The test will fail because your feature has not been implemented yet. This might seem like a waste of time. After all, you know the test is going to fail. Well, running the test validates that the test harness is working and you're calling the correct code. We're just checking that the code fails for the right reasons. But I say play it by ear. It's a valid practice, but those issues tend to surface anyway even if you skip the test run and move into "Green."

Tip: Don’t get caught up testing your implementation (e.g. asserting that an internal mocked method was called). Instead, focus on inputs, outputs, and behavior (e.g. when null is provided, a specific exception is thrown). This approach will keep you focused on requirements and will leave your tests less brittle when it is time to refactor.

Green – Write enough production code to make the test pass.

In the “Green” step, write just enough production code to make the test pass. Write new business code only when an automated test fails; this is the first of Kent Beck's two simple rules of TDD. Some suggest you begin by hardcoding the expected result just to verify that the test correctly detects success. I don't do this. I think it's a waste of time, even though there have been cases where I chased my tail because my assertion was plain wrong while my code was correct. During this second step, stay focused: write your code to pass the test. Nothing more. Nothing less. That is to say, don't speculate! The "You ain't gonna need it" (YAGNI) principle is often invoked to veto unnecessary work, and if new functionality is still needed, then another test is needed. Just make this one test pass and continue. How do you know when you’re done? If you've written the code so that the test passes as intended, you are finished. The test is the objective definition of "done."
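As a sketch of the "Green" step, the hypothetical `parse_age` feature below receives just enough code to satisfy its tests, and the passing suite becomes the objective definition of done:

```python
import unittest

def parse_age(value):
    # Green: just enough production code to satisfy the tests, nothing more.
    if value is None:
        raise ValueError("age is required")
    return int(value)

class ParseAgeTests(unittest.TestCase):
    def test_none_input_raises_value_error(self):
        with self.assertRaises(ValueError):
            parse_age(None)

    def test_numeric_string_returns_int(self):
        self.assertEqual(parse_age("42"), 42)

suite = unittest.TestLoader().loadTestsFromTestCase(ParseAgeTests)
result = unittest.TestResult()
suite.run(result)
print(result.wasSuccessful())  # True
```

Notice what is missing: no range checks, no whitespace handling, no speculative options. If those behaviors are ever required, each one starts life as a new failing test.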

Tip: When the test passes, you might want to run all tests up to this point to build confidence that everything else is still working. And then move on to the fun part...

Refactor – Change the code to remove duplication in your project and improve the design while ensuring that all tests still pass.

This last step in TDD tightens up your design. When you refactor, pay particular attention to your test structure. Tests must be quickly understood, easily maintained, and well organized. I like to keep my unit tests in separate projects with folders and namespaces that map to those of the target assemblies. Test classes are named after the feature or the class under test (CUT) and are always appended with "Tests." For example, UserRegistrationTests or RegistrationServiceTests. Since requirements drive tests, test names will ideally describe feature behavior. But that's a BDD concept and something to strive for. It's perfectly fine to just reference the method under test and its behavior, e.g. MethodName_StateUnderTest_ExpectedBehavior. If you're clever, your production code will be self-documenting, and your methods and features will share the same ubiquitous language. But test naming is never easy. Heck, naming isn’t easy. It's best to come up with an agreed-upon convention and try to follow it.
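As an illustration of those conventions adapted to Python's snake_case, the names below (all invented) let the test list read like a small behavior spec:

```python
import unittest

class RegistrationServiceTests(unittest.TestCase):
    # Names follow MethodName_StateUnderTest_ExpectedBehavior, adapted to
    # snake_case; the class is named after the CUT plus "Tests".
    def test_register_null_email_raises_value_error(self):
        pass  # body omitted; the name alone documents the behavior

    def test_register_duplicate_email_returns_false(self):
        pass  # body omitted

names = unittest.TestLoader().getTestCaseNames(RegistrationServiceTests)
print(names)
```

Printed as a flat list, the names tell a new developer what `register` does with bad input before they read a line of production code.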

The second of Kent Beck’s two simple rules to TDD is: eliminate any duplication that you find in both the production code and tests. Ask yourself these questions:

  • Can test code be shared?
  • Should a test base class or methods be introduced?
  • Are fake objects reusable? If so, use the Object Builder or Object Mother patterns to isolate this logic.
  • If the constructor of your CUT were to change, would it be easy to address in your tests? One trick I use for this is isolating the instantiation of my CUT to reduce the time required on future constructor refactorings.
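Here is one way to sketch that last trick with the standard library's unittest and Mock; the `RegistrationService` and its dependencies are hypothetical:

```python
import unittest
from unittest.mock import Mock

class RegistrationService:
    # Hypothetical class under test (CUT).
    def __init__(self, repository, email_sender):
        self._repository = repository
        self._email_sender = email_sender

    def register(self, email):
        self._repository.save(email)
        self._email_sender.send(email, "Welcome!")

class RegistrationServiceTests(unittest.TestCase):
    def _create_sut(self, repository=None, email_sender=None):
        # The only place the CUT is constructed; a future constructor
        # change is a one-line fix here rather than in every test.
        return RegistrationService(repository or Mock(), email_sender or Mock())

    def test_register_saves_the_email(self):
        repository = Mock()
        sut = self._create_sut(repository=repository)
        sut.register("ada@example.com")
        repository.save.assert_called_with("ada@example.com")

result = unittest.TestResult()
unittest.TestLoader().loadTestsFromTestCase(RegistrationServiceTests).run(result)
print(result.wasSuccessful())  # True
```

Each test asks only for the dependencies it cares about; everything else defaults to a throwaway mock, which keeps the tests short and resilient to constructor churn.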

Remember: make design and code changes to improve the overall solution. Right now, your tests pass in the easiest, though maybe not the most elegant, way. There is always room for improvement! After each refactoring, rerun all the tests to ensure that they still pass and continue to code without fear. By re-running the test cases, you can be confident that your refactoring is not damaging any existing functionality.

Now, repeat. Starting with another new test, the cycle repeats to push the functionality forward. Each cycle should be very short; a typical hour should contain many Red/Green/Refactor cycles. Keep each step small and easy to revert, and favor reverting over excessive debugging.


Is it a good test? Ask a bunch of questions.

How do you know if you’ve written a good test? Answer these questions:

  1. Does the test clearly reveal its intention?
  2. Can another developer look at the test and understand what is expected of the production code? Go ahead. Add a few comments if necessary.
  3. Is the test code organized? I recommend using the Arrange Act Assert pattern (or AAA syntax) where one would Arrange all necessary preconditions and inputs, Act on the object or method under test, and Assert that the expected results have occurred.
  4. Does the test have a limited scope? If the test fails, is it obvious where to look for the problem? Use few Assert calls (ideally just one) so that the offending code is obvious. It's important to only test one thing in a single test.
  5. Do your tests run fast? If the tests are slow, they will not be run often or will be abandoned altogether.
  6. Are dependencies such as databases, file systems, networks, etc. faked or mocked? These dependencies will ordinarily be abstracted away by using interfaces. "No Worries" Tip: Don't get hung up on the difference between mocks and stubs (and there are even more types). It hardly matters in practice. Just focus on managing dependencies.
  7. Does your test cross application boundaries? Only execute the code under test. Let's say you are testing a controller action which calls into the service layer. By mocking the service, the test is completely based on the controller logic. If the service code was executed, boundaries would be crossed, and we'd have an integration test.
  8. Will your test run and pass in isolation? Can your test run in any order without special setup or breakdown? Will your test pass on any machine?
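Several of these questions come together in one small sketch: an AAA-structured test that mocks the service layer so the controller is tested in isolation. The `GreetingController` and its service are invented for illustration:

```python
import unittest
from unittest.mock import Mock

class GreetingController:
    # Hypothetical controller that delegates to a service layer.
    def __init__(self, service):
        self._service = service

    def greet(self, user_id):
        return f"Hello, {self._service.get_name(user_id)}!"

class GreetingControllerTests(unittest.TestCase):
    def test_greet_formats_the_users_name(self):
        # Arrange: mock the service so no application boundary is crossed.
        service = Mock()
        service.get_name.return_value = "Ada"
        controller = GreetingController(service)
        # Act: exercise only the code under test.
        greeting = controller.greet(user_id=7)
        # Assert: one focused assertion makes failures easy to localize.
        self.assertEqual(greeting, "Hello, Ada!")

result = unittest.TestResult()
unittest.TestLoader().loadTestsFromTestCase(GreetingControllerTests).run(result)
print(result.wasSuccessful())  # True
```

The test reveals its intention, stays within the controller, asserts one thing, runs fast, and needs no special setup, checking off most of the list above in a dozen lines.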


In Conclusion 

There is a lot that goes into TDD. It can feel like a laundry list of rules and guidelines. However, if you and your teams take the time to learn the principles and put them into action, it will seem completely natural in no time, and your teams will be writing software that is robust, maintainable, reusable, and meets the needs of your business and users. And as the humble Kent Beck said: while you might never be a great programmer, you will be a “good programmer with great habits.”