Tigraine

Daniel Hoelbling-Inzko talks about programming

Practices you don’t throw away when your deadline approaches

(image by deVos)

So, I’m a bit under pressure right now. I don’t blog much because I’m bound to an immutable release date of January 7th. If I don’t get the software out by January 7th, the customer will have to wait another year before he can make the switch from his (pain in the ass) 15-year-old MS-DOS software.

So you see, I’m not just a little stressed; I’m looking at a Christmas holiday I’ll probably spend locked in some room, finishing the software day and night.
As that deadline approaches, I have noticed some things in my current development that suddenly no longer seem feasible due to time constraints.

I know my business layer needs a major refactoring (a day or so) to keep my domain logic “clean” of clutter. But I can’t spend a day refactoring something that has to ship next week (I want to have the app user-tested by the 7th). Still, I can think before I code.
One thing I see in legacy code all the time is that people stop thinking once they approach deadlines and just mindlessly copy/paste stuff to make it work somehow.

Here’s one principle you should NEVER forget:

DRY – don’t repeat yourself

Not repeating yourself through simple copy and paste will at least spare you from rewriting the whole thing once a change has to be made. I can retrofit extensive tests and good separation of concerns, but I am doomed once I have to change the same piece of code throughout the whole application.

If I’m so stressed that I have to write crappy code, it’s still my duty to fix it some time later, and that’s not possible if one business rule (it may be as simple as amount/numberOfRates) is implemented in 20 different places (see the sketch below)!
That leads me to the next point: fixing stuff later requires at least some automated tests (nothing is worse than writing fixes that have more bugs in them).
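Here’s that sketch, a purely hypothetical example (not code from my actual project) of pulling the amount/numberOfRates rule into a single place:

public static class RateCalculator
{
    // The business rule lives in exactly one place.
    // When the rule changes, this is the only method to touch.
    public static decimal MonthlyRate(decimal amount, int numberOfRates)
    {
        return amount / numberOfRates;
    }
}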

Testing isn’t difficult, and it saves you more time than it takes. You may think you’re faster if you just run the app and see if it works, but an app doesn’t start up as fast as a unit test runs, and once you’ve had to run it several times to get something working, you would have saved time by writing a unit test first. And although my tests look ugly and may be incomplete, I have verified that the most important calculations and interactions work (and I can rerun those tests later to verify that everything still works when I try to clean up the mess).
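Such a test doesn’t have to be fancy. Sticking with the hypothetical RateCalculator from above and using NUnit (which the posts below use as well), it can be as small as this:

[Test]
public void MonthlyRateDividesAmountByNumberOfRates()
{
    // 1200 split into 12 rates should be 100 per rate.
    Assert.AreEqual(100m, RateCalculator.MonthlyRate(1200m, 12));
}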

So, by applying (even incomplete) testing to my app and not violating DRY, I retain the ability to easily extend and change my application in the future. Once I’m done I can run NCover and find out what code my tests didn’t cover.

Cheap Bugtracking systems

I don’t like documentation.
Whenever I feel that I need to add comments to a piece of code to make it understandable, I usually rework the code until its intent and workings are clear just from looking at it.
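A hypothetical example of what I mean: instead of commenting a condition, extract it into a method whose name carries the intent:

// Instead of: if (amount > 0 && numberOfRates > 0) // valid contract?
// the condition now reads as what it means:
private static bool IsValidRateContract(decimal amount, int numberOfRates)
{
    return amount > 0 && numberOfRates > 0;
}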

Still, documentation will catch up with you once you are dealing with defects and planned features. Nobody can keep all that in their head, and it’s important to keep track of what your customers expect you to do or fix.

And while big projects demand some “real” process around this issue, when you’re doing small projects you usually don’t want to break a butterfly upon a wheel.

One easy solution is to simply write a Unit Test that exposes the bug.
Whether you fix it right away or just check the failing test into your source tree is totally up to you, but you’ll always see that red light reminding you of the bug.

While you’re at it, I strongly suggest always writing a Unit Test that exposes a bug before fixing it. You increase your code coverage by fixing bugs, and you protect yourself from having the bug resurface with a later change.
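As a purely hypothetical illustration (reusing the RateCalculator sketch from above), such a test just pins down the behaviour the defect violates; it stays red until the fix is in and guards that spot from then on:

[Test]
public void MonthlyRate_SingleRateReturnsFullAmount()
{
    // Reported defect: contracts with a single rate paid out 0.
    Assert.AreEqual(1200m, RateCalculator.MonthlyRate(1200m, 1));
}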

Still, you should also look into getting a “real” bugtracking database like FogBugz (which has a free two-person license), or install your own like Trac or BugNET.

Filed under programmierung, testing

Unit testing with mocks – Rhino Mocks basics (Part 2)

In Part 1 of this series I showed you how to create a very simple mock object by hand. In this post I will show you how to use RhinoMocks to create the mock and verify the expectations. This post is intended as basic advice and won’t cover any advanced RhinoMocks topics, just the basic setup/replay/verify steps.

When working with almost any mock framework there are three steps: setup, recording expectations, and verifying that the expectations were met.
That means you first tell the mock object what calls to expect, then you let the method under test do its magic, and afterwards you let the mock verify that all expected calls were made.
You can get very precise about what to expect and how to expect it, but that will be covered in the next part of this series.

The hand-made mock from Part 1 translates to a test like this when using RhinoMocks:

[Test]
public void ServiceWatcherNotifiesUser()
{
    var repository = new MockRepository();
    var notifier = repository.StrictMock<IErrorNotifier>();

    notifier.NotifyOfServiceDown();

    repository.ReplayAll();

    var watcher = new HttpServiceWatcher(notifier);
    watcher.ObserveService();

    repository.VerifyAll();
}

The main things here:
We start the repository and then request an IErrorNotifier mock from it.

var repository = new MockRepository();
var notifier = repository.StrictMock<IErrorNotifier>();

The mock object (notifier) is in record mode now; the calls we make to it aren’t actually executed but are recorded as expectations to verify afterwards.
So if we want NotifyOfServiceDown to be called once, we simply call it while in record mode:

notifier.NotifyOfServiceDown();

After having set up all expectations in record mode, we tell the MockRepository to go to replay mode:

repository.ReplayAll();

The mock object still doesn’t do anything, but it now expects the calls we set up in record mode. If the watcher calls methods that weren’t recorded, the mock will throw exceptions at us.

Now we construct the object under test:

var watcher = new HttpServiceWatcher(notifier);

And note that we pass the mock object instead of an actual implementation of IErrorNotifier.
Now we call the method under test just as we would normally:

watcher.ObserveService();

That leaves us with only one step: we tell the repository to verify all mocks that were created, and it will throw exceptions if mocks didn’t get called or got called too often.

repository.VerifyAll();

Although this is just a very basic example of how to use RhinoMocks, you can already see the benefits. You could write the HttpServiceWatcher class without having to write any concrete IErrorNotifier implementation, and concentrate on the HttpServiceWatcher instead of worrying about how the underlying service is going to work.

In the next part I’ll cover some fancier things you can do with RhinoMocks, like returning values and verifying that passed parameters meet certain criteria.

The source code is available through my SVN repository:

svn checkout https://office.pixelpoint.at:8443/svn/tigraine/UnitTesting/trunk UnitTesting --username guest

Note that all dependencies are also in the SVN repository, so you don’t need to get RhinoMocks or NUnit yourself.

Unit testing with mocks (Part 1)

I regret not having blogged on TDD and designing for testability before; that makes it difficult to talk about mocking, which is a rather advanced topic requiring at least some knowledge of polymorphism and OO design.
So this post is the first in a series on testing with mock objects that will hopefully help you with your testing.

Testing is one of the most important things in software development.
We’re all human, and we all make mistakes. And even if we discover these mistakes during development and testing, it’s almost certain that we’ll have to come back at a later point to change something, possibly breaking what already worked.
Doing a full QA cycle during initial development may look reasonable to most of us, but doing it every time you change a tiny bit in the application will certainly get you some angry mails from management.
Having good unit tests gives you a safety net for future development. By simply running the tests you can verify that things that already worked still work properly. And that’s what is important in software development.

The whole idea of unit testing is to test as little as possible while still verifying that the method under test behaves as specified and expected.
Keep this in mind, because it is important when testing classes that depend on services or other classes.

Say you have an HttpServiceWatcher, a service that runs somewhere and watches whether an HTTP service is up. You should test the HttpServiceWatcher class itself, not the associated notifier classes that the watcher calls when it wants to notify you.
But how do you verify that the HttpServiceWatcher really worked and called the notifier as a result?

Let’s start with the Notifier interface:

public interface IErrorNotifier
{
    void NotifyOfServiceDown();
}

Let’s assume we have implemented an EmailNotifier class. If the HttpServiceWatcher looks like this, we’re in testing-nightmare land:

public class HttpServiceWatcher
{
    public void ObserveService()
    {
        IErrorNotifier notifier = new EmailNotifier();
        notifier.NotifyOfServiceDown();
    }
}

The HttpServiceWatcher news up its notifier service, so every time we want to change the notifier, we’d have to change the ServiceWatcher and risk breaking something. Also, we can’t test the ServiceWatcher in isolation, because it will always call an EmailNotifier that we can’t easily fake.

So, the correct move would be to use Inversion of Control (IoC) to inject the service into the watcher class:

public class HttpServiceWatcher
{
    private IErrorNotifier notifier;

    public HttpServiceWatcher(IErrorNotifier notifier)
    {
        this.notifier = notifier;
    }

    public void ObserveService()
    {
        notifier.NotifyOfServiceDown();
    }
}

Now the HttpServiceWatcher class doesn’t directly depend on any concrete implementation of IErrorNotifier; the calling code takes care of creating the concrete classes, so changes to notifiers don’t propagate into the HttpServiceWatcher.
This also makes it very easy to fake the notifier. We could either create a fake test class that implements IErrorNotifier, or we could use a mocking framework.
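For example, the wiring then happens where the watcher gets created (a minimal sketch, using the EmailNotifier class assumed above):

// The concrete notifier is chosen here, not inside HttpServiceWatcher.
IErrorNotifier notifier = new EmailNotifier();
var watcher = new HttpServiceWatcher(notifier);
watcher.ObserveService();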

Manual mocking could look like this:

public class NotifierMock : IErrorNotifier
{
    public int notifyOfServiceDownCallCount = 0;

    public void NotifyOfServiceDown()
    {
        notifyOfServiceDownCallCount++;
    }
}

The test could then look like this:

[Test]
public void ServiceWatcherNotifiesUser_Custom_Mock()
{
    var notifier = new NotifierMock();

    var watcher = new HttpServiceWatcher(notifier);
    watcher.ObserveService();

    Assert.AreEqual(1, notifier.notifyOfServiceDownCallCount);
}

And that’s fine. It works, we verify that the watcher actually calls the notifier service, and all is well.
It just gets tricky once you have more tests: you’ll have to create many mock objects by hand, and each one introduces another chance of breaking other tests.

In the next post in this series I will try to illustrate how to do the same thing with RhinoMocks and how it makes testing very easy.

Download the source code from my SVN Repository by doing a:

svn checkout https://office.pixelpoint.at:8443/svn/tigraine/UnitTesting/trunk UnitTesting --username guest

