Monday, February 23, 2015

Five Thoughts on Software Testing

I felt like sharing some thoughts on testing, not necessarily related to any particular incident, just some things that were on my mind.

  • The easiest time to add the tests is now.

    It always seems to be the case that the time to write the tests is sometime later.

    • "I'm pretty much done; I just have to write some tests."
    • "If I write a lot of tests now, I'll just end up changing them all later."
    • "We're still in the design phase; how can we write any tests now?"

    I'm here to tell you that you can, and should, write those tests now. The main reason is that the sooner you write tests, the sooner you can start running those tests, and thus the sooner you will start benefiting from your tests.

    And I often write my tests when I'm still in the design phase; let me explain what I mean by that. While I'm noodling along, thinking about the problem at hand, toying with different ways to solve it, starting to sketch out the framework of the code, I keep a pad of paper and a pen at hand.

    Each time I think of an interesting situation that the code will have to handle, I have trained myself to immediately make a note of that on my "tests" pad.

    And as I'm coding, as I'm writing flow-of-control code, like if statements, while loops, etc., I note down additional details, so that I'm keeping track of things that I'll need to test (both sides of an if, variables that will affect my loop counters, error conditions I'll want to provoke in my testing).

    And, lastly, because I'm thinking about the tests while I design and write the code, I'm also remembering to build in testability, making sure that I have adequate scaffolding around the code to enable me to stimulate the test conditions of interest.
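
    To make this concrete, here is a minimal sketch of what those pad notes can turn into, written with pytest; the parse_port function and its rules are invented purely for illustration, not taken from any real project:

        import pytest

        def parse_port(text):
            """Parse a TCP port number, accepting only 1..65535."""
            if not text.strip():                # noted on the pad: what about blank input?
                raise ValueError("empty port")
            value = int(text)                   # noted: non-numeric input raises here, too
            if value < 1 or value > 65535:      # noted: test both sides of this if
                raise ValueError("port out of range: %d" % value)
            return value

        def test_accepts_typical_port():
            assert parse_port("8080") == 8080

        def test_accepts_boundary_values():
            assert parse_port("1") == 1
            assert parse_port("65535") == 65535

        def test_rejects_out_of_range():
            with pytest.raises(ValueError):
                parse_port("0")
            with pytest.raises(ValueError):
                parse_port("65536")

        def test_rejects_blank_input():
            with pytest.raises(ValueError):
                parse_port("   ")

    Each test corresponds to a line on the pad, and because the tests take shape alongside the code, the code ends up small enough and decoupled enough to be testable in the first place.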

  • Tests have to be designed, implemented, and maintained.

    Tests aren't just written once. (Well, good ones aren't.) Rather, tests are written, run, revised, augmented, adjusted, over and over and over, almost as frequently as the code itself is modified.

    All those habits that you've built around your coding, like

    • document what you're doing, choose good variable names, favor understandable code whenever possible
    • modularize your work, don't repeat yourself, design for change

    All of those same considerations apply to your tests.

    Don't be afraid to ask for a design review for your tests.

    And when you see a test with a problem, take the time to refactor it and improve it.
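
    For instance, when several tests repeat the same setup, pulling it into one shared fixture is the same "don't repeat yourself" refactoring you would apply to production code. The sketch below uses pytest and an in-memory SQLite database purely as a stand-in for whatever your tests actually construct:

        import sqlite3

        import pytest

        # The shared setup lives in one place, so a schema change only has to be
        # fixed here, not in every test that used to build its own copy.
        @pytest.fixture
        def seeded_db():
            conn = sqlite3.connect(":memory:")
            conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
            conn.executemany("INSERT INTO users (name) VALUES (?)",
                             [("alice",), ("bob",)])
            yield conn
            conn.close()

        def test_lookup_by_name(seeded_db):
            row = seeded_db.execute(
                "SELECT id FROM users WHERE name = ?", ("alice",)).fetchone()
            assert row is not None

        def test_user_count(seeded_db):
            (count,) = seeded_db.execute("SELECT COUNT(*) FROM users").fetchone()
            assert count == 2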

  • Tests make your other tools more valuable.

    Having a rich and complete set of tests brings all sorts of indirect payback.

    You can, of course, run your tests on dozens of different platforms, under dozens of different configurations.

    But there are much more interesting ways to run your tests.

    • Run your tests with a code-coverage tool, to see what parts of your codebase are not being exercised (a sketch follows below).
    • Run your tests with analyzers like Valgrind or the Windows Application Verifier.

    Dynamic analysis tools like Valgrind are incredibly powerful, but they become even more powerful when you have an extensive set of tests. You can start to think of each test that you write as actually enabling multiple tests: your test itself, your test on different platforms and configurations, your test under Valgrind's leak detector, your test under Valgrind's buffer overrun detector, etc.
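
    As a sketch of the first point, here is one way to drive a test suite under coverage.py from a small Python script; it assumes the coverage and pytest packages are installed, and "mypackage" and "tests/" are placeholder names, not anything from a real project:

        import coverage
        import pytest

        # Measure which lines of the code under test the suite actually exercises.
        cov = coverage.Coverage(source=["mypackage"])
        cov.start()

        exit_code = pytest.main(["-q", "tests/"])    # run the whole test directory

        cov.stop()
        cov.save()
        cov.report(show_missing=True)    # lines no test touched are listed here

        raise SystemExit(exit_code)

    Many people run the equivalent from the command line instead (coverage run -m pytest), but either way the point stands: the coverage report is only as revealing as the test suite behind it.
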
  • Keep historical records of your test runs.

    As you're setting up your CI system to execute your test bed, arrange to keep historical records of those runs.

    At a minimum, try to record which tests failed, on what platforms and configurations, on what dates.

    A better historical record will preserve the output from the failed tests.

    A still better historical record will record at least some information about the successful tests, too. Most test execution software (e.g., JUnit) can produce a simple output report which lists the tests that were run, how long each test took, and whether or not it succeeded. These reports, whether HTML or plain text, are generally not large (and even if they are, they compress really well), so you can easily keep the records of many thousands of test runs.
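
    As a sketch of what keeping those records can look like, here is a small Python script that reads a JUnit-style XML report and appends one row per test to a growing CSV history file; the report path and platform name are placeholders:

        import csv
        import datetime
        import xml.etree.ElementTree as ET

        def record_results(report_path, history_path, platform):
            """Append one row per test case from a JUnit-style XML report."""
            timestamp = datetime.datetime.now().isoformat()
            rows = []
            # JUnit XML nests <testcase> elements inside <testsuite>; a <failure>
            # or <error> child marks a test that did not succeed.
            for case in ET.parse(report_path).iter("testcase"):
                failed = (case.find("failure") is not None or
                          case.find("error") is not None)
                rows.append([timestamp, platform,
                             "%s.%s" % (case.get("classname"), case.get("name")),
                             case.get("time"),
                             "FAIL" if failed else "PASS"])
            with open(history_path, "a", newline="") as fh:
                csv.writer(fh).writerows(rows)

        if __name__ == "__main__":
            record_results("reports/TEST-results.xml", "test-history.csv",
                           platform="linux-x86_64")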

    Over time, you'll discover all sorts of uses for the historical records of your test runs:

    • Looking for patterns in tests that only fail intermittently
    • Detecting precisely when a regression was introduced, so you can tie that back to specific changes in the SCM database and quickly find and repair the cause
    • Watching for performance regressions by tracking the performance of certain tests designed to reveal performance behaviors
    • Monitoring the overall growth of your test base, and relating that to the overall growth of your code base

  • You still need professional testers.

    All too often, I see people treat automated regression testing as an "either-or" choice versus having professional test engineers on staff.

    You need both.

    The real tradeoff is this: by investing in automated regression testing, by having your developers cultivate the habit and discipline of always writing and running thorough basic functional and regression tests, you free up your professional test engineers to do the really hard stuff:

    • Identifying, isolating, and reproducing those extremely tough bug reports from the field.
    • Building custom test harnesses to enable cross-platform interoperability testing, upgrade and downgrade testing, multi-machine distributed system testing, fault injection testing, etc.
    • Exploratory testing
    • Usability testing

    All of those activities that never seem to get enough time are within your reach, if you just build a firm foundation that frees you to reach for the wonderful.

Oh, and by the way: Leadership is not the problem!
