Tigraine

Daniel Hoelbling-Inzko talks about programming

Divide two integers in .NET

Ok, this is really simple, but it just cost me 15 minutes because I ignored a compiler warning:

double result = myInt1/myInt2;

The compiler will warn that there is a possible loss of fraction, but I ignored it because I couldn't see anything wrong with the code at all.

Whatever values I calculated, the result was always an integer. Turns out, dividing two integers produces an integer, not a double (so I got weird results like 31/10 = 3).

To avoid this, you need to promote at least one of the operands to a double so a floating-point division occurs:

double result = (double)myInt1 / myInt2;
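Here is a small runnable sketch contrasting the two behaviors (the concrete values are mine, picked to match the 31/10 example):

```csharp
using System;

class Program
{
    static void Main()
    {
        int myInt1 = 31;
        int myInt2 = 10;

        // Integer division happens first; only afterwards is the result widened to double.
        double truncated = myInt1 / myInt2;        // 3.0

        // Promoting one operand forces a floating-point division.
        double correct = (double)myInt1 / myInt2;  // 3.1

        Console.WriteLine(truncated); // 3
        Console.WriteLine(correct);
    }
}
```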

Hope this helps, took me far too long to figure out.

Filed under net, programmierung

Tests should last forever

There is one excellent point Roy Osherove made while reviewing the tests in NerdDinner.com that I wanted to share with you.

Don’t write tests that change over time!

Often I have initialized objects for tests like this:

[Fact]
public void Test()
{
    DateTime date = DateTime.Now;
    ...
}

Until recently I thought that was fine; after all, the tests passed every time I ran them.

But what if the passing tests are just a coincidence?
What if I am testing a financial application that will only accept orders between 8am and 6pm on weekdays, and the order-date gets initialized to DateTime.Now in my test?


Given the normal working days in most countries, that code would run just fine for most developers. But when some notorious late-worker like me comes in, the tests start failing for no apparent reason.

Time or place of execution should not have any impact on a test. Given the same code and the same test, the result should always be the same.

So, whenever you initialize a value to something from your current execution context (time being the prime example here), you create the possibility that the test will break in unexpected ways over time.

So if you really want to fill some DateTime with a value for testing, use a constant (like DateTime.MinValue or DateTime.MaxValue). That way, whenever you re-run the test, all inputs are the same as they were when you wrote it.
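To illustrate with the financial-application example from above (the Order class, its business rule, and the concrete date are made up for demonstration; the [Fact] attribute is xUnit, as used elsewhere in this post):

```csharp
using System;
using Xunit;

public class Order
{
    public DateTime OrderDate { get; set; }

    // Hypothetical business rule: orders are only accepted
    // between 8am and 6pm on weekdays.
    public bool IsAcceptable()
    {
        return OrderDate.DayOfWeek != DayOfWeek.Saturday
            && OrderDate.DayOfWeek != DayOfWeek.Sunday
            && OrderDate.Hour >= 8 && OrderDate.Hour < 18;
    }
}

public class OrderTests
{
    [Fact]
    public void Order_PlacedOnWeekdayMorning_IsAccepted()
    {
        // A fixed, known date instead of DateTime.Now:
        // March 2, 2009 was a Monday, 9am is within business hours.
        var order = new Order { OrderDate = new DateTime(2009, 3, 2, 9, 0, 0) };

        Assert.True(order.IsAcceptable());
    }
}
```

No matter when or where this test runs, its inputs never change.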

Filed under programmierung, testing

Why the secrecy?

Although I am still actively working for Pixelpoint, running strong on finishing my projects, I also went back to Klagenfurt University to finally finish my computer science bachelor degree.

The only problem is that (since I am now caught up somewhere between work and university) I can't fit any lectures into my schedule. I can barely attend the courses that require attendance, so lecture slides are a rather vital part of my studying right now, and not being able to access them is a real showstopper.

What really bothers me is that a university professor who is teaching computer science doesn't get the concept of an open exchange of information and knowledge!

All educational material for courses by the syssec group is accessible through an external website that is password protected, and the username/password combination is only given out during lectures.

Why?

We are talking about slides for a university lecture, by Austrian law a public event that anyone can attend (yes! every person who wishes to can simply sit in there and listen). Anyone who wants to can simply walk into the ÖH office and buy a printed copy of exactly those slides for less than 1€. Heck, everyone can get to those slides anyway, so why are we keeping people out?

I’ve seen them; there are no state secrets in there. To be honest, I don’t even like these slides very much (they were meant to be presented by a professor, mind you). Still, they are a valuable source of information that I believe shouldn’t be kept from anyone.

On the other hand, there is O. Univ.-Prof. Dipl.-Ing. Dr. Laszlo Böszörmenyi, who manages to have ALL of his course material publicly (and freely) available to anyone. So I doubt there is any legal reasoning behind locking away the course material, and I demand that the password protection on the lecture slides be lifted. A university ought to be a place where knowledge is shared, so why stop at the boundary of a small institution like the University of Klagenfurt?

Filed under personal

Static members in generic classes

I have been using generics quite heavily lately for writing decorators to Repository classes that do logging and caching on top of the repository (I’ll talk about that another time).

Since I implemented an asynchronous cache-clear method, I immediately ran into trouble with shared resources like the DB connection, so I figured the whole problem could be solved with a simple lock around the cache fill.

public class Cache<T>
{
    private static readonly object locker = new object();
    public IList<T> GetAll()
    {
        lock(locker)
        {
            //Query the DB etc.. 
            return null;
        }
    }
}

Maybe you already see the problem, but I for sure didn’t. And so I was quite amazed when I discovered that the locking problem didn’t go away just like that.

Turns out, every constructed generic type has its own static members. So Cache<string> has a different locker object than Cache<long> would have. Here’s the test to show this:

public class Test<T>
{
    public static long calls;
}

public class Tester
{
    [Fact]
    public void Calls_DifferentGenerics_DontShareInstance()
    {
        Test<string>.calls = 10;
        Test<long>.calls = 20;

        Assert.Equal(10, Test<string>.calls);
        Assert.Equal(20, Test<long>.calls);
        Assert.Equal(0, Test<int>.calls);
    }
}

Since all of my Cache objects are singletons (enforced through Windsor), there is little point in locking there.

I solved this by moving the static lock object into a non-generic class, but I have to say this could have turned into a rather hard-to-reproduce bug.
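Here is roughly what that fix looks like (the CacheLock name is mine). Since the holder class is non-generic, there is exactly one locker object for all Cache<T> instantiations:

```csharp
using System.Collections.Generic;

// Non-generic holder: exactly one locker object for the whole process,
// shared by Cache<string>, Cache<long>, and every other Cache<T>.
internal static class CacheLock
{
    public static readonly object Locker = new object();
}

public class Cache<T>
{
    public IList<T> GetAll()
    {
        lock (CacheLock.Locker)
        {
            //Query the DB etc..
            return null;
        }
    }
}
```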

Filed under net, programmierung

A short look at the Big>Days 2009 Demo

I couldn’t help and look at the Big>Days2009 Rent-A-Worker code that Max Knor recently put up on his website.

I really didn’t spend too much time looking at the whole code; most notably, it’s not the complete codebase (the WPF desktop client and the Silverlight client are missing from the repository).

Usually when confronted with a new codebase I immediately look at the tests to see what the code is about (especially since I don’t want to build the database on this machine).

Finding the tests is easy; the solution is rather well structured and split into multiple projects to separate concerns.
Unfortunately, that’s the only good thing about the tests: there are only two classes with tests, both of which I find tragically funny:

[TestMethod]
public void CustomersGetAll()
{
    TestContext.WriteLine("Retrieving customers...");

    Cust.Customer[] Customers = CustomerMgmt.GetCustomers(string.Empty);
    TestContext.WriteLine("Called successfully!");
    foreach (var c in Customers)
    {
        TestContext.WriteLine("Getting details for {0}...", c.CustomerID);
        Cust.Customer cd = Customer.GetCustomer(c.CustomerID);
        TestContext.WriteLine("Details retrieved: {0} {1}!", cd.Name, cd.MembershipID);

        TestContext.WriteLine("Getting by membership ID...");
        int cdId = Customer.GetCustomerForUser(
            cd.MembershipID == null ? Guid.Empty : cd.MembershipID.Value);
        TestContext.WriteLine("Customer ID by Membership retrieved: {0}!", cdId);

        TestContext.WriteLine("-----");
    }
}

This should be a test for the Proxy class, but there are no asserts in there. If you test something, at least verify that what you did worked. Not getting an exception from your code isn’t really a test at all (just wait for the guy who swallows all exceptions with try/catch!).

Same thing goes for tests like this one:

[TestMethod()]
public void DeleteResourceTest()
{
    ResourceDataAccess target = new ResourceDataAccess();
}

or half done tests like this one:

[TestMethod()]
public void GetResourcesTest()
{
    ResourceDataAccess target = new ResourceDataAccess(); // TODO: Initialize to an appropriate value
    IEnumerable<RentResource> expected = null; // TODO: Initialize to an appropriate value
    IEnumerable<RentResource> actual;
    actual = target.GetResources();
    Assert.AreEqual(expected, actual);
    Assert.Inconclusive("Verify the correctness of this test method.");
}

From what I can judge (and I’m surely in no position to do that, since my latest code was quite untested too), there isn’t one test across the two test projects that actually verifies something: they either end in Assert.Inconclusive calls or have no asserts at all. So I wonder why someone bothered creating those projects in the first place.

Also, most code in there uses static Factory classes that I would abandon in favor of dependency injection to facilitate testing.


It’s rather painful to see production code like this:

public static ICustomerAccess GetCustomerAccess()
{
    if (UseDummy)
    {
        return new DataAccessDummy();
    }
    else
    {
        return new CustomerDataAccess();
    }
}

(You could spare yourself some pain by having two implementations of the Factory class instead of branching on the Dummy flag in every method.)

Now, this is harsh, I know. Most other code I looked at in there is quite nice: the DataAccessLayer separation is well done, the strict DTO declaration is really cool, and the project structure is a pleasant sight (although I keep missing projects :)). That makes it all the more painful to hit on the tests and the factory.

But I’m a test and dependency-injection nut, so what matters most to me is what I’ll pick on first. It takes time to come up with good code, and with some refactoring this codebase can really shine (it’s well done after all).

Filed under net, programmierung

Big>Days 2009 Code

I already picked on the Big>Days 2009 source today, and since it’s open and out there, why not just write a little patch.

Without going too far into the code, I thought a perfect spot to start would be the DataAccessFactory, which should return test dummies during tests while serving real objects otherwise.

What I didn’t like was the fairly repetitive code checking whether we are in dummy mode and returning the appropriate object:

public static ICustomerAccess GetCustomerAccess()
{
    if (UseDummy)
        return new DataAccessDummy();
    return new CustomerDataAccess();
}

This gets repeated for every factory method and is a perfect example of a DRY violation.

I figured that since we don’t want to touch calling code (and UseDummy was never set anywhere), the best way to go would be to simply create an interface for the factory:

public interface IDataAccessFactory
{
    ICustomerAccess GetCustomerAccess();
    ILocationAccess GetLocationAccess();
    IMachineTypeAccess GetMachineTypeAccess();
    IRentalServiceAccess GetRentalServiceAccess();
    IResourceAccess GetResourceAccess();
}

I then created two implementations of this, one for the TestDummy and one for the real thing (called RealDataAccessFactory), so I don’t need to check the UseDummy field any more.

Now the actual static factory can instantiate the real object by default and have a method on it to set another IDataAccessFactory implementation during runtime:

public static class DataAccessFactory
{
    private static IDataAccessFactory dataAccessFactory = new RealDataAccessFactory();

    public static IDataAccessFactory Implementation
    {
        get { return dataAccessFactory; }
        set { dataAccessFactory = value; }
    }

    public static ICustomerAccess GetCustomerAccess()
    {
        return dataAccessFactory.GetCustomerAccess();
    }

    public static ILocationAccess GetLocationAccess()
    {
        return dataAccessFactory.GetLocationAccess();
    }

    public static IMachineTypeAccess GetMachineTypeAccess()
    {
        return dataAccessFactory.GetMachineTypeAccess();
    }

    public static IRentalServiceAccess GetRentalServiceAccess()
    {
        return dataAccessFactory.GetRentalServiceAccess();
    }

    public static IResourceAccess GetResourceAccess()
    {
        return dataAccessFactory.GetResourceAccess();
    }
}

In production nothing changes, but in a unit-test scenario I can pass a fake DataAccessFactory into the static factory and swap the whole implementation (enabling me to use Rhino.Mocks or whatever mocking framework I like instead of writing test dummies myself).

This way we can even have the TestDummy class living inside the test assembly instead of littering the production assembly.

A test now may look like this:

[Fact]
public void Module_CallsFactoryForILocationAccess()
{
    DataAccessFactory.Implementation = MockRepository.GenerateMock<IDataAccessFactory>();

    var module = new Module();
    module.DoSomething(); //This should call the Factory to retrieve an ILocationAccess

    DataAccessFactory.Implementation.AssertWasCalled(p => p.GetLocationAccess());
}

As you can see, we now have complete control over the factory during testing, without affecting the rest of the code in any way.
Another side effect of this is that neither the static DataAccessFactory nor the actual DataAccessFactory implementation needs to change if we make changes to the DummyFactory.

Filed under net, programmierung

The joy of pair programming

Software design is hard. Not so much because it’s so hard to come up with, but because it takes a very long time to really hit the sweet spot where you really feel it’s good.

Doing this on your own is almost impossible, because you constantly have to switch off your personality and challenge the assumptions you just made when writing something.
I ask myself all the time “Is this module really right here? Should I break this up into smaller modules?” .. And frankly, I am the wrong person to answer that question since I made the mistake in the first place.
So I challenge myself all the time into the mindset of another person (be it a user, tester etc..) and try to forget what I was just thinking for a minute to decide if I’m still on track or not.

It’s like driving alone in a car through unknown terrain, you have to stop all the time and pull out the map to see if you’re still driving in the right direction or not.

And honestly, it sucks big time. Stopping is always bad. Dropping out of your “zone” and doing a complete context switch hurts the flow and I feel mentally exhausted very fast, and nothing seems to get done in the long run.

So, at my current project I decided that I can’t go on on my own. I tried multiple times to get to a good design through lots and lots of whiteboard, spiking and experimenting. But I never quite nailed it, I always felt like 20% away from the real thing, but with a burned bridge in front of me.

So, when I called my employer last Sunday I asked one thing “Do I get some budget to bring in a second pair of eyes to work on this particular problem?”. And the awesome answer was “Just do it and spare me the details.”.

Next day I called Harald Logar and he agreed to stop by and go through the code with me for a day. I gave him a very brief heads up on what I was working on (mind you, he’s a complete outsider to the project) and what problem I’m trying to solve.

When he came in next day I explained the vision, and showed him some tests I prepared before that should demonstrate the “desired” behavior of the system. After that little introduction, we were already implementing like crazy.

It was amazing! Although I was doing most of the typing, Harald was constantly there to challenge my assumptions, answer my questions and throw in his own ideas when necessary.

But what was really a game changer for me that day was that we were not only good, we were faster than I had ever been in the past. We rewrote the complete data access logic of a rather complex system in less than 2 hours (complex meaning dynamic proxies, caching and some non-trivial retrieval stuff).

We then spent the rest of the day optimizing the system (performance is very critical for the project) and I think we both learned a great deal about the inner workings of .NET collections and how they work performance wise. (We also implemented a very cool cache solution that clears the cache on a background worker thread to avoid downtime)

So, needless to say I’d do this again any time. Harald was a joy to work with, and I think by the end of the day we were both very proud of what we accomplished.

Filed under net, programmierung, job

Found the missing Linq operators: MoreLinq

I complained before that operators like IEnumerable<T>.Each() are somewhat missing from the default implementation of Linq, but I never gave it much thought since then.

Until today I read a blog post from Jon Skeet (Mr. Stackoverflow himself) about how hard it is to name methods in his MoreLinq effort.

So, curious me I immediately peeked at the source and found some really cool stuff in there that I may very well use in the future.
There are not only missing things in there like list concatenation and generator methods for sequences (great when setting up mock expectations), but also cool things I never thought of before that might come in handy, like a Consume method that triggers Linq execution immediately without consuming memory (you could do that with .ToList(), but that would allocate an IList<T> in memory).
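I haven’t checked MoreLinq’s actual implementation, but the idea behind such a Consume method is roughly this sketch, together with a demonstration of why you’d want it:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class EnumerableExtensions
{
    // Enumerates the sequence purely for its side effects,
    // without building up a collection of the results.
    public static void Consume<T>(this IEnumerable<T> source)
    {
        foreach (var item in source)
        {
            // intentionally empty: we only force the enumeration
        }
    }
}

class Program
{
    static void Main()
    {
        int executed = 0;
        var query = Enumerable.Range(1, 5).Select(i => { executed++; return i * 2; });

        Console.WriteLine(executed); // 0 - nothing ran yet, Linq is lazy
        query.Consume();
        Console.WriteLine(executed); // 5 - the projection ran for every element
    }
}
```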

So, don’t miss out on the fun in MoreLinq. I’m sure there is something cool in there for everyone.
And also, don’t forget to suggest good method names to Jon Skeet if you come up with one :)

Filed under net, programmierung

Rant: BIOS update procedure fail

I recently helped a friend of mine assemble a new kick-ass gaming PC. Once the whole system was put together, he called me the next day stating that the system felt incredibly slow and crashed whenever he started up a 3D game.

I rushed to help, and after some tinkering we found out that the CPU (a Core 2 E8500, 2x3.16 GHz) was operating at almost 80°C. By contrast, my Core2 E6600 (2x2.4 GHz) reaches 36°C under full load.
Since the CPU cooler was spinning, I am quite convinced it has something to do with a faulty mainboard or (rather unlikely) a faulty CPU.
My initial thought was to try updating the BIOS in case the problem would go away.

So I went to the ASUS download site, selected the appropriate mainboard model (P5Q) and downloaded a zip file containing the newest BIOS. Imagine my excitement when I was presented with a .ROM file inside that zip. Pretty cool, huh?
So I went on to download the Afudos BIOS update tool V2.36 that should install the .ROM BIOS.
Started it: sorry, doesn’t work on Windows. (WTF?)

So, following the instructions provided by ASUS I’m supposed to:

Please insert a clean, unformatted disk into A:\ drive and boot the system into DOS mode. In DOS mode, please type in C:\> FORMAT A: /S or click on “My Computer” icon under Windows O/S, right click on drive A:\ and choose “Format”. By using the procedure above, you can create a boot disk without AUTOEXEC.BAT and CONFIG.SYS files.

Drive A:\ ? DOS? Autoexec.bat? Config.sys?

ASUS: Are you out of your mind?

When I needed to flash my Dell XPS M1330 Bios with a new version, I didn’t even have to leave my browser to do so. Some weird ActiveX thingy just started and updated my BIOS revision while I was casually checking my email. And ASUS is really telling me this is the way to go if I want to update my brand new socket 775 P45?
C’mon, that’s so 1994 – not even funny any more.

Filed under personal

Do you really know what LinQ does?

LinQ is by far the most empowering language technology I’ve seen in years, and it has really helped me in many cases get to a more functional style of programming, enabling clearer syntax and better overall code.

But it also has its pitfalls.
Since LinQ attaches a .Count() method to any enumerable, why would you still use IList for read-only collections? It’s so damn easy to simply write code like this:

IEnumerable<string> strings = new[] {"hello", "world"};
Console.WriteLine("Number of Strings: {0}", strings.Count());


What would be the benefit of using an IList<string> or ICollection<string> like this?

IList<string> strings = new[] {"hello", "world"};
Console.WriteLine("Number of Strings: {0}", strings.Count());

IEnumerable<T>.Count() would be an O(n) operation if it followed the IEnumerable semantics and enumerated all items, since the only way to determine the length of a plain sequence is to iterate over every item.
And indeed, the implementation (I looked into the Count() method with Reflector) does exactly that:

int num = 0;
using (IEnumerator<TSource> enumerator = source.GetEnumerator())
{
    while (enumerator.MoveNext())
    {
        num++;
    }
}
return num;

But since that would always guarantee O(n) execution time and would slow most applications to a crawl (it’s so easy to use .Count() everywhere), Microsoft implemented a little shortcut right before the above code:

ICollection<TSource> is2 = source as ICollection<TSource>;
if (is2 != null)
{
    return is2.Count;
}

So, they are breaking the Liskov Substitution Principle on purpose to speed up the execution time of .Count().


That’s why calling .Count() doesn’t hurt much as long as you are calling it on an IEnumerable that’s also an ICollection: all you’re doing is a cast and a field read.

That’s also why in my testing IEnumerable.Count() wasn’t much slower than IList.Count; the only difference slowing down IEnumerable was the typecast (I’m too lazy to generate some data on a non-ICollection IEnumerable with a real O(n) execution time).


Just keep in mind that once you are iterating over a “real” IEnumerable with no collection underneath, you should avoid calling .Count() too often, since it’s not just a cast and a field read but an iteration over all elements.
Also keep in mind that the extension methods on IEnumerable can always cost you an O(n) operation, so use them wisely (especially when you don’t control the source of your IEnumerable<T>, you could get passed anything).
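To make the difference tangible, here’s a small sketch (the generator method is mine): an array is an ICollection<T>, so Count() takes the shortcut, while an iterator method forces a full enumeration.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static int yields;

    // A "real" IEnumerable: no collection underneath,
    // every item is produced lazily by the iterator.
    static IEnumerable<int> Generate(int n)
    {
        for (int i = 0; i < n; i++)
        {
            yields++;
            yield return i;
        }
    }

    static void Main()
    {
        IEnumerable<int> array = new[] { 1, 2, 3 };
        Console.WriteLine(array.Count()); // 3 - O(1), ICollection<T> shortcut

        Console.WriteLine(Generate(1000).Count()); // 1000 - O(n), full enumeration
        Console.WriteLine(yields);                 // 1000 - every element was produced
    }
}
```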

Filed under net, programmierung
