Tigraine

Daniel Hoelbling-Inzko talks about programming

Golang hidden gems: testing.T.Log

One thing I love about Go is its build chain and overall ease of use. Some things take time to get used to, but the lightning-fast builds and the convention-based testing Go offers are addictive right from the start.

Today I found another hidden gem I think is just genius: testing.T.Log(). OK, I admit it's not the sexiest method to get excited about - but bear with me for a moment. Imagine the following code:

func TestSomething(t *testing.T) {
  t.Log("Hello World")
}

What's the output? If you expected Hello World, you are mistaken. The output is exactly nothing :)

testing.T.Log() only prints something if a testing.T.Error or testing.T.Fatal occurred (or if you run go test -v). Brilliant! Nothing is more annoying than chatty test suites where your actual problem is buried in 2-3 megabytes of meaningless debug statements - and this solves the problem really elegantly. You can log as much debug info as you want, and it will only surface if the test actually failed.

Filed under golang, go, testing

Enable code coverage reports in create-react-app projects

create-react-app is a nice and easy way to bootstrap a new React.js project with sane defaults and most of the tedious configuration (Webpack, Babel, etc.) already taken care of.

One thing I was missing from the generated configs though is how to output code coverage. Turns out it's rather simple - locate your package.json and add the following entry under "scripts":

  {
    "scripts": {
      "coverage": "node scripts/test.js --env=jsdom --coverage"
    }
  }

This way you can run yarn coverage or npm run coverage and get a nicely formatted output with your coverage data. You can read more about the Jest CLI options in the docs.

Filed under reactjs, testing, tools, javascript

Naming tests is more important than what they do

Why are we writing tests? There are numerous reasons, but to me the primary one is that I can go into a codebase even after I have forgotten everything about it and make changes without fear of breaking 20 things at once.

One of the major antipatterns I see regularly is the dreaded testMethodWorks() testcase:

@Test
public void testCreateUser() throws Exception {
  User user = userService.createUser("foo", "bar");
  assertNotNull(user);
  assertEquals(user.getUsername(), "foo");
  assertEquals(user.getPassword(), "bar");

  User invalidUser = userService.createUser("bla", "");
  assertNull(user);

  User someOtherTest = userService
  .....
  ..(goes on for another 20 cases)...
}

The example is somewhat contrived, but you get the idea. A testcase that checks 30 different (marginally related) things and will potentially fail for even more reasons. Of course that one testcase validates that the createUser() method works - and especially when a lot of setup is involved in your testcase it's convenient to just use the stuff that's already there.

But by doing so you are sacrificing a major benefit of tests: Readability through naming. If every testcase is simply named after the method it's testing, you end up with a completely useless test class that has exactly the same informative value as the class under test. Why would I bother reading the test if I could just look at the code that's doing stuff? It's probably shorter than the test case!

Imagine you come into a new codebase, and whenever something breaks you first have to read through the test code, looking at each JUnit stack trace to figure out which assertion blew up - just so you can work out what the test was actually doing and why its failure is a bad thing. Yikes.

Now I won't advocate the "one assertion per test" mantra - that's going overboard and usually leads to unmaintainable tests. But at the very least group your tests not by method but by use case. If a test fails it should be for one reason and that reason damn well ought to be in the test name. Not because nobody likes to read code - but because the first thing each testrunner will report is the name of the test that failed.

It's much easier to figure out what is going on if you get a

testCreateUserWithoutAdminCredentialsReturns403ForbiddenStatusCode()

failure rather than a simple testCreateUser().

Seriously - I didn't even have to explain my use case to you, but if this test blows up you will immediately know it's an ACL issue, and that it's manifesting itself by not returning a 403 status code. If there was a second testcase called testCreateUserWithoutAdminCredentialsDoesNotInsertUserIntoDatabase, you also wouldn't poke around all the corners of my repository wondering why some assertThat(repository.getAll().size(), equals(0)); has one record too many - you'd just ignore that failure, since it's clearly an ACL issue, not a database issue. Splitting things into multiple testcases also gives us the added benefit of predictable state: a test that did not correctly clean up some shared resource (in-memory db etc.) will not create a false positive in line 100 of your testMethodWorks() case, but should be contained by your transactional testrunner or your setup/teardown methods.

So I propose three simple things that should always be in the test name - regardless of how the test is written or what you are testing:

  • Method under test (createUser)
  • Context the test was run (WithValidAdminCredentials)
  • Expected outcome of the test (ReturnsUserAsJson)

And you end up with createUserWithValidAdminCredentialsReturnsUserAsJson, and alongside it you'd naturally get a second testcase called createUserWithValidAdminCredentialsInsertsUserIntoDatabase.

Keep that in mind and you'll make life much easier for yourself when you have to update something in the codebase a few months down the road - once you have forgotten everything that's going through your head right now :)

Filed under code, style, testing

Good ideas worth spreading: SystemDateTime abstractions

I brought this example up a lot on this blog, but while looking at the code of Mark Nijhof yesterday I noticed a rather nice solution to my ongoing problem of abstracting away System.DateTime.Now calls for testing purposes.

As stated before: Don’t make your tests depend on external factors like the current time or date. Until now I had a solution that solved the problem rather nicely: a static factory that returns an instance of your DateProvider.
Why a global factory? Simple: making IDateProvider a mandatory dependency of all your objects and services would quite simply clutter up your design. IDateProvider is by no means a really important dependency, and modeling it the same way as, say, IImportantBusinessRule would not only force you to think about that DateProvider in every test you run against your object, but also reduce the readability of your constructors dramatically.

What I didn’t think about when writing my IDateProvider abstraction almost a year ago was that with .NET 3.5 and lambdas, passing around a function is essentially the same as using a strategy class, but with a lot less ceremony. And so thought Mark Nijhof when he wrote Fohjin (a very nice CQRS example you really should check out on GitHub).

public static class SystemDateTime
{
    public static Func<DateTime> Now = () => DateTime.Now;
    public static void Reset()
    {
        Now = () => DateTime.Now;
    }
}

So simple yet so elegant. In your tests you hardly have to think about this stuff, but if there is a test that depends on the date you can just go ahead and set it like this:

[Fact]
public void ctor_SetsDateAddedTo_CurrentDate()
{
    SystemDateTime.Now = () => DateTime.MaxValue;
    var orderLine = new OrderLine(TestData.Product, 1);

    Assert.Equal(DateTime.MaxValue, orderLine.DateAdded);
}

It’s just a small touch, but it saves you 2 classes and still solves the problem nicely.

Beggars can’t be choosers: Dependency injection through global factories

Whenever you listen to testability talks you usually take away one universal truth:

Global state is bad, singletons are essentially global state.

So, if you want to have it done right, use dependency injection and don’t let your code depend on global state.

But: Sometimes it’s just not possible. My current project for example does not use dependency injection. Why? I didn’t know better at the time and used ActiveRecord with all its static design. And besides, I’m just lazy and have no intention of diving into the Castle documentation to find out how to teach ActiveRecord to use an IoC container when creating entity objects.

And if you have no control over your constructor, your options for dependency injection are limited to two things:

Public fields (aka optional dependencies) and Global factories.

Public fields

While in theory this is a pretty decent method that lets you swap out parts, it falls short once you have multiple classes that need the same service:

public class Entity
{
    public IDateProvider DateProvider { get; set; }

    public Entity()
    {
        DateProvider = new DateProviderImpl();
    }
}

Since the default implementation is hardcoded into every consumer, you end up with a big pile of DRY violations that will one day bite you when you try to refactor DateProviderImpl’s constructor.

Global factories

Now, the words global and testability don’t go well together, but in this case it’s OK: you battle the DRY violation while still keeping your service interchangeable for testing.

public class Entity
{
    public Entity()
    {
        var now = DateProviderFactory.Provider.Now;
    }
}

public class DateProviderFactory
{
    private static IDateProvider _provider;

    public static void SetProvider(IDateProvider provider)
    {
        _provider = provider;
    }

    public static IDateProvider Provider
    {
        get
        {
            if (_provider == null)
                _provider = new DateProviderImpl();
            return _provider;
        }
    }
}

Now obviously you should NEVER call SetProvider inside your production code. It’s a pure testability helper, so if you start messing with it, expect some really hard-to-debug errors to pop up.

But as long as you don’t mess that up, you can write tests like this one:

public class TestFixture
{
    [Fact]
    public void DoesSomethingWhenGivenDate()
    {
        var mock = new MockedDateProvider();
        DateProviderFactory.SetProvider(mock);
        var entity = new Entity();
        //.....
    }
}

I know it’s not perfect, but nobody expected it to be. The best solution to the problem is obviously a clean separation of object construction and business logic, and the proven way to achieve that is dependency injection through a container like Windsor or StructureMap. Yet often you have to work in old codebases where you just need to get the job done, and then it’s nice to know your way around the limitations.

Oh, and btw: the example above was deliberately chosen to be something as simple as an abstraction of DateTime.Now. As said before, never depend on moving parts in your tests.

Filed under net, programmierung, testing

Don’t forget the Refactor in Red-Green-Refactor

When first learning about TDD all sources I read focused pretty much on one thing: writing the tests.

Few sources really talk about the full TDD workflow:

  • Write a failing test
  • Make it pass
  • Refactor the code
  • Start over

Arguably the hardest part of doing TDD is steps 1 and 2, while possibly the most important one is step 3!

Some people tend to stress the fact that automated tests can be a safeguard against breaking existing code. And although I like this aspect, I believe that if your code is structured well and was built with the open-closed principle in mind, chances are you’ll never touch old code in the process of implementing new features.

But during refactoring you play to TDD’s strengths. You don’t write new stuff; you focus on the stuff that’s already there and already works. You search for ways to improve what’s there without changing its behavior. And very often this final refactoring step is the thing that really brings value to the process: it not only exercises the tests you just wrote, it also facilitates future change, letting you produce cleaner code than you would without refactoring.

Think of refactoring as fortifying the wall you’ll be building the next floor upon. Without it you may be fine for now, but three floors up you’ll have a lot of work on your hands before you can continue building.

Getting things right the first time is incredibly hard. On my last project it took some quality pair programming to come up with something great, and 80% of that time was spent refactoring a raw idea from an 80% solution into a 100% solution. So don’t spend too much time chasing the ideal of writing a 100% solution the first time - rather get it 80% right and then don’t stop improving it until you are at 100%!

Filed under programmierung, testing

.NET Unit testing tools

After posting my tools list today I got asked why I didn’t list any testing frameworks. Obviously I love testing, so why no testing tools like Gallio, NUnit or TestDriven.NET?
The answer is rather simple: Resharper runs my tests for me.

By default Resharper can run NUnit tests; if you install MbUnit it can run those too, and if you copy the Resharper support library from the xUnit contrib project into your Resharper/Plugins directory it can also run xUnit.


Also, almost all open source frameworks out there include their test runner in their code tree, so you don’t need to worry about what exotic test frameworks are out there, you’ll be provided with the appropriate runners.

On the testing framework side I recently (~4 months ago) switched over to xUnit, as its syntax felt much better than that of NUnit or MbUnit.
I am also currently looking into using a BDD testing framework like MSpec.

Filed under net, testing, tools

Troublesome testing

This may be the very first time I blog about a bug in the CLR, but it’s annoying nonetheless.

Apparently a bug in the CLR’s System.Reflection.Emit prevents Rhino.Mocks from working when generic constraints are applied to a method.

void Add<T, TType>()
    where T : class
    where TType : T;

As long as the TType : T constraint is present, all tests fail with a System.BadImageFormatException.
Now, the bug is known and it looks like it can’t really be helped on the framework side. But I didn’t want to drop this constraint from my production code just to make the class testable.

So I went back to the dark ages and actually wrote a mock class by hand that counts calls to methods and returns preset values.

Overall, the mock is a mess. There are 5-6 fields counting all sorts of different stuff, just for a simple interface with two methods.

Thank god there are tools like Rhino.Mocks that keep me from writing code like that (I really can’t praise Ayende enough for Rhino.Mocks).

Filed under net, testing

Tests should last forever

There is one excellent point Roy Osherove made while reviewing the tests in NerdDinner.com that I wanted to share with you.

Don’t write tests that change over time!

Often I have initialized objects for tests like this:

[Fact]
public void Test()
{
    DateTime date = DateTime.Now;
    ...
}

Until recently I thought that was cool - after all, the tests passed every time I ran them.

But what if the passing tests are just a coincidence?
What if I am testing a financial application that will only accept orders between 8am and 6pm on weekdays, and the order-date gets initialized to DateTime.Now in my test?


Given the normal working days in most countries, that code would run just fine for most developers. But when some notorious late-worker like me comes in, the tests start failing for no apparent reason.

Time or place of execution should not have an impact on a test. Given the same code and the same test, the result should always be the same.

So, whenever you initialize a value from your current execution context (time being the prime example), you create the possibility that the test will break in unexpected ways over time.

So if you really need to fill some DateTime with a value for testing, use a constant (like DateTime.MinValue/MaxValue). That way, whenever you re-run the test, all inputs are the same as they were when you wrote it.

Filed under programmierung, testing

ASP.NET MVC: Hide the HttpContext services with Windsor and a custom ControllerFactory

Microsoft designed ASP.NET MVC to be a very “clean” and testable framework for creating web applications. And they failed really badly in one place: HttpContext!

The fact that the ASP.NET MVC Contrib project has a whole project dedicated to mocking out the whole HttpContext for testing simply illustrates one point: It’s broken, period.
There is this one gigantic god hash table with 5 other hash tables hanging off it that knows everything about the incoming request. And although it’s possible to fake the whole thing with RhinoMocks (as the MVC Contrib guys do), it’s still a pretty stupid idea to have all those concerns in one class called “context” (and accessible to the controller code).
So, although the HttpContextBase is already an abstraction of the real context, I wanted to extract those things into specialized service classes that I have full control over (and that could then be used for even more specialized classes that handle data retrieval, thus making “magic strings” go away when dealing with requests and sessions).

I set out to create a request service class that follows a very simple Interface:

public interface IRequestService
{
    string GetRequestField(string fieldName);
}

The actual class is just a Facade for the HttpRequestBase class that gets injected into the constructor.

Problem here: I would have to new up this IRequestService in my controller, and that’s something I didn’t want to do. Object graph construction shouldn’t be in the controller at all, and so I want to inject IRequestService instances into the controller. And that can’t be done without control over the ControllerFactory.

The IControllerFactory interface is rather simple, and it’s the perfect place to leverage the power of an IoC framework to construct the controller objects.
So I simply pass the object creation off to Windsor in the CreateController method:

public class ControllerFactory : IControllerFactory
{
    private WindsorContainer container = new WindsorContainer(
                                        new XmlInterpreter(new ConfigResource("castle")));

    public IController CreateController(RequestContext requestContext, string controllerName)
    {
        return (IController)container.Resolve(controllerName);
    }

    public void ReleaseController(IController controller)
    {
        var disposable = controller as IDisposable;
        if (disposable != null)
            disposable.Dispose();
        container.Release(controller);
    }
}

What then took ages for me to figure out was how to instruct Windsor to use the current HttpContext.Request object. Turns out I was searching in the wrong place: that functionality lives in the MicroKernel, not in the Windsor container.

public IController CreateController(RequestContext requestContext, string controllerName)
{
    container.Kernel.AddComponentInstance<HttpRequestBase>(typeof (HttpRequestBase),
                                                           requestContext.HttpContext.Request);
    return (IController) container.Resolve(controllerName);
}

The AddComponentInstance method allows you to pass in a concrete instance that should be used when resolving a service. This way, when Windsor constructs the RequestServiceFacade class that takes a HttpRequestBase as a dependency, it will simply inject the specified instance instead of trying to construct the HttpRequestBase itself (which doesn’t work ;)).

This now allows me to easily swap out request implementations by just changing the Windsor configuration.

Filed under net, testing
