TDD Medium Gear and Mutation Testing

Medium Gear

I continued with the second day of the TDD course that I won. This involved working on code katas that helped illustrate how medium gear works. Medium gear is where you start to work more directly on the equivalence partitions of a problem. This is in contrast to low gear, where you start by writing lots of little tests to build confidence in and understanding of the problem. In medium gear you end up with fewer tests than in low gear.

Low Gear - Kotlin and JUnit 5 - FizzBuzz Example Tests

@Test
fun parse_GivenAnInputOf3_ReturnsFizz() {
    val result = fizzBuzz.parse(3)

    assertEquals("Fizz", result)
}

@Test
fun parse_GivenAnInputOf4_Returns4() {
    val result = fizzBuzz.parse(4)

    assertEquals("4", result)
}

@Test
fun parse_GivenAnInputOf5_ReturnsBuzz() {
    val result = fizzBuzz.parse(5)

    assertEquals("Buzz", result)
}

@Test
fun parse_GivenAnInputOf6_ReturnsFizz() {
    val result = fizzBuzz.parse(6)

    assertEquals("Fizz", result)
}

@Test
fun parse_GivenAnInputOf15_ReturnsFizzBuzz() {
    val result = fizzBuzz.parse(15)

    assertEquals("FizzBuzz", result)
}

@Test
fun parse_GivenAnInputOf30_ReturnsFizzBuzz() {
    val result = fizzBuzz.parse(30)

    assertEquals("FizzBuzz", result)
}

Medium Gear - Kotlin and JUnit 5 - FizzBuzz Example Tests

@ParameterizedTest
@ValueSource(ints = [3, 6, 9, 12])
fun parse_ReturnsFizzForMultiplesOf3(number: Int) {
    val result = fizzBuzz.parse(number)

    assertEquals("Fizz", result)
}

@ParameterizedTest
@ValueSource(ints = [5, 10, 20, 25])
fun parse_ReturnsBuzzForMultiplesOf5(number: Int) {
    val result = fizzBuzz.parse(number)

    assertEquals("Buzz", result)
}

@ParameterizedTest
@ValueSource(ints = [15, 30, 45, 60])
fun parse_ReturnsFizzBuzzForMultiplesOf15(number: Int) {
    val result = fizzBuzz.parse(number)

    assertEquals("FizzBuzz", result)
}
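
For reference, here is a minimal parse implementation that would satisfy both suites. This is my own sketch rather than the course solution; the class and function names simply mirror the tests above.

class FizzBuzz {
    // Check 15 first: multiples of both 3 and 5 must not
    // fall through to the plain Fizz or Buzz branches.
    fun parse(number: Int): String = when {
        number % 15 == 0 -> "FizzBuzz"
        number % 3 == 0 -> "Fizz"
        number % 5 == 0 -> "Buzz"
        else -> number.toString()
    }
}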

Mutation Testing

An interesting concept that was introduced to me today is mutation testing. You do this after you have written the tests for a class and want to check how resilient and reliable they are. It is also a good guide to whether your test suite is complete, and to whether there are useless production code blocks that are unreachable (these can often simply be deleted). Mutation testing, sketched with a hand-rolled example after this list, is where you:

  • Delete or comment out an arbitrary part of your production code
  • Run your tests
  • If your tests:
    • Do not break, the production code in question is either useless and can be deleted, or you are missing some key tests
    • Do break, you clearly have tests targeting that code block
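
For example, against the hypothetical FizzBuzz sketch above, you might comment out the multiples-of-15 branch and re-run the suite:

// Manual mutation: the FizzBuzz branch has been commented out,
// so parse(15) now falls through to the Fizz branch.
fun parse(number: Int): String = when {
    // number % 15 == 0 -> "FizzBuzz"
    number % 3 == 0 -> "Fizz"
    number % 5 == 0 -> "Buzz"
    else -> number.toString()
}

With the medium gear suite above, parse_ReturnsFizzBuzzForMultiplesOf15 fails and the mutation is caught. If nothing failed, that branch would be flagged as either dead code or a gap in the tests.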

This type of testing is also useful in indicating whether the name of a test makes sense. When a test fails under this approach and its name is not aligned with the code you deleted, that is an indication you will likely need to rename it.

This technique is like Chaos Monkey but for your tests. There are Java libraries that help with this; PITest is one example. These frameworks use the term mutant for an instance of your code that the framework has partially modified. The framework then runs your tests against each mutant: if any test fails, that mutant is considered killed, otherwise it survived. The mutants that survived are listed at the end of the run. These surviving mutants highlight issues with your tests which, when addressed, harden your tests.
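
If you want to try PITest from Gradle, the snippet below is a minimal sketch using the community info.solidsoft.pitest plugin; the version numbers and the com.example.fizzbuzz package are placeholder assumptions, so check the plugin documentation for current values.

// build.gradle.kts - a sketch, not a verified configuration
plugins {
    id("info.solidsoft.pitest") version "1.15.0" // assumed plugin version
}

pitest {
    // Production classes to mutate (placeholder package)
    targetClasses.set(setOf("com.example.fizzbuzz.*"))
    // Required for PIT to drive JUnit 5 tests (assumed version)
    junit5PluginVersion.set("1.2.1")
}

Running ./gradlew pitest then produces a report listing the killed and surviving mutants.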

Experimental/Exploratory Testing

For one of the katas we did there was a rule we had to follow: you may not use the debugger or println statements during the exercise. This meant any debugging had to be done via tests. The idea was to get used to using tests as a debugging tool, and to demonstrate how much faster it can be to run simple trial-and-error experiments against production code using tests. For instance, say you want to test some functionality on your website, but testing it manually involves logging in and navigating to a certain page. Instead, you can set up a test that targets the functionality directly and so avoid the arduous process of logging in and navigating to it. This in turn speeds up the feedback cycle for the issues you are investigating.
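
A throwaway exploratory test might look like the following; the expected value is deliberately a guess against the hypothetical FizzBuzz sketch above, and the assertion failure, rather than a debugger, tells you what the code actually does.

@Test
fun explore_WhatDoesParseDoWithZero() {
    // A guess, not a spec: if it is wrong, the failure message
    // reports the real value, replacing a println or debugger session.
    assertEquals("FizzBuzz", fizzBuzz.parse(0))
}

Once the question is answered, a test like this is either deleted or renamed and promoted into the real suite.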

Refactor, Test, Refactor

One rule I completely forgot to follow was running the tests after each refactor. I had a suite of passing tests, then did a tonne of refactoring, and all my tests broke. I had to undo most of my changes and re-apply them one by one, this time running the tests after each change, until I found which refactor was breaking them. This reminded me of the very important rule of running the tests after each change so that any errors I introduce are picked up immediately.