Daniel Hoelbling-Inzko talks about programming

NDC 2009 Videos online

The Norwegian Developer Conference has been over for quite some time now, and looking at their speaker line-up it’s quite clear that I would have loved to be there.

Good for me that they videotaped all talks and decided to share them with the general public.

So, if you are interested in seeing Ayende, Michael Feathers, Scott Hanselman, Jeremy D. Miller, Phil Haack or Udi Dahan doing their talks, you can either stream them online, or you can go ahead and download a 30GB torrent with all of their talks.

The videos are online on the official conference page, or if you prefer a per-speaker listing:
Mark Nijhof has a list of all NDC videos for your streaming pleasure and Rune Grothaug has the torrent.

(Please keep seeding the torrent for a bit after your download has finished)

Filed under net, internet

Win7 shortcoming: Notetaking widget


Oh, I love Win7. I said so after installing the first beta for the first time and I’ve been running my new favorite OS ever since.

But I have one problem with the way Microsoft used their built-in applications to showcase new functionality like jump-lists etc: They broke the Notes widget from the Windows sidebar.

You know, that little widget thingy that allowed you to write down 4 short lines somewhere on your screen.

In Windows Vista it was visible whenever your sidebar was visible (not a perfect solution either), but at least it was visible occasionally.

Windows 7 changed this: Notes is now its own little application that has to run and that takes away space from my task-bar. And: when I use Aero-Peek to look at my desktop widgets, the Notes app turns transparent too.

So, it became nearly useless, since I want my notes to be a gentle, occasionally visible reminder of things I need to keep track of. Nothing I’d start a program to look at, no – something that is just there sometimes. And the new Win7 implementation falls seriously short of that goal. (And I very rarely see my desktop except via Win+D or Aero-Peek – both of which also remove the notes from the screen.)

In fact: I wonder why anyone would sacrifice some valuable taskbar real-estate for such a useless application.

Filed under personal

How MonoRail selects its best ActionMethod candidate: CalculateParamPoints

James Curran pointed me at an interesting flaw in my implementation of the DefaultValueAttribute for MonoRail I blogged about some weeks ago. This tipped me off to actually read the MonoRail code to find out how exactly MonoRail selects which overload of an ActionMethod to call.

MonoRail’s approach is as simple as it is brilliant, and reading the code that does this is a very pleasant experience. It took about 5 minutes to figure out the following:

If there are multiple public methods in a SmartDispatcherController that match the request’s action, MonoRail calculates a score of parameter points for each overload, picks the “heaviest” and executes it.
How that score is calculated is quite simple: every matched parameter gets 10 points, every unmatched one 0.

But there’s more detail to this:

For every regular parameter (types not decorated with an IParameterBinder attribute) where the parameter name could be matched to a request parameter’s key, MonoRail assumes a weight of 10.

In detail this means: Given the following ActionMethod with two parameters:

public void Test(string category, int page)

MonoRail will assign 10 points if the key “category” could be found in the server’s request object (Request["category"]) and another 10 if a parameter key called “page” is also present.

So the following call /Test.rails?category=beer&page=1 would account for 20 parameter points, whereas omitting page would result in only 10 points. MonoRail will then pick the method with the highest score of matched parameter points and call it with those parameters.

Now, obviously the following would lead to an ambiguity:


public void Test(string category, int page)
public void Test(string category)

Category is present in both cases and page is unmatched, so both methods get 10 points and no useful distinction can be made. This is where MonoRail awards a bonus of 5 points to any method whose parameters could all be matched, giving Test(string) 15 points and Test(string, int) only 10, leading to the right match.
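The selection rule can be sketched in a few lines of C#. This is not MonoRail’s actual code, just a minimal re-implementation of the scoring described above (the names are mine):

```csharp
using System.Collections.Generic;
using System.Linq;

static class ParamPoints
{
    // 10 points per parameter whose name appears among the request keys,
    // plus a 5 point bonus when every parameter could be matched.
    public static int Calculate(string[] parameterNames, HashSet<string> requestKeys)
    {
        int matched = parameterNames.Count(requestKeys.Contains);
        int points = matched * 10;
        if (matched == parameterNames.Length)
            points += 5;
        return points;
    }
}
```

With a request containing only category, Test(string category) scores 15 while Test(string category, int page) scores 10, so the single-parameter overload wins.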

Now, in the case of a parameter that is decorated with an IParameterBinder attribute (like ARFetch, DataBind etc.), calculating those parameter points is delegated to the attribute class, which then returns a score following its own logic (e.g. if one attribute collects data from multiple request parameters it could return more than 10).

Let’s look at a sample implementation of CalculateParamPoints of the ARFetchAttribute:

public virtual int CalculateParamPoints(IEngineContext context, IController controller, IControllerContext controllerContext, ParameterInfo parameterInfo)
{
    String paramName = RequestParameterName ?? parameterInfo.Name;

    return context.Request.Params.Get(paramName) != null ? 10 : 0;
}

As you can see, ARFetch follows the usual MonoRail behavior and will return 10 in case its parameter name could be matched, or 0 otherwise.

Still, all this doesn’t negate the fact that you could end up with ambiguities between action methods. In case several methods receive the same number of parameter points, MonoRail will simply call the first.

Oh, and did I mention that ASP.NET MVC can overload only on a per-HTTP-verb basis? (And that’s a quite finite number of exactly 5.)

Filed under net, castle, programmierung

Translation taken too far

I’ve said in the past that I believe you have to know English to be a programmer, and sometimes I get a painful reminder about how right I was.

After installing git on my laptop today I found out that GitGui has been set to German instead of English. Look at this wonderful commit dialog:

Isn’t that awesome? I speak German and can’t tell you what they could possibly mean with words like Abzeichnen (to sign sth.) and Eintragen (to chart sth.?), since they don’t even remotely translate to words like commit. They don’t use the words all the others use, thus alienating everyone and separating German GitGui users from the rest of the world.

Disaster. I think I’ll have to point this out on the GitGui mailing list. It just doesn’t make sense this way.

Filed under personal

I’m in on the Castle blog aggregator

Well, I already told you about the Castle blog aggregator as a source for keeping up to date with the Castle project. Turns out the idea was taken even further and the feed has now been integrated into the www.castleproject.org under Community/Blogs.

What I never anticipated was that I’d one day get this email from Mauricio Scheffer:

Hi Daniel, would you be interested in being included in the Castle blog aggregator?
Your posts tagged as "Castle" would be automatically included.


My immediate answer was Yes! I’m very happy to be able to contribute to Castle in any way possible, and seeing people consider my posts interesting is very rewarding to me. Thanks Mauricio!

Filed under personal, site-news

Introducing IronLess.Net – your duct tape solution to LessCss in ASP.NET

Some time ago while writing the CSS for the ImagineClub website I found out the hard way that there are two ways of developing XHTML sites: clean presentation/markup separation, or mixing the two to achieve CSS reusability.

What I mean by mixing is markup code like the following:

<div class="floated thick-border highlight">

I have problems with the above, since I am clearly mixing presentation with data. I want my XHTML to transport structured data that gets styled through CSS. Separation of concerns teaches us that we should rarely have to touch the markup if we want to change appearance, and we should not have to touch the markup if we change the data we are presenting.
So the above code clearly blurs the line somewhat, and while still being somewhat semantic markup, it’s also intermingled with presentation concerns, which I don’t like at all.

There is a reason code like the above exists: CSS is endlessly verbose and leads to tons and tons of code duplication if applied only to DOM structures and IDs, so naturally webdesigners have started mixing classes in markup to avoid some of the duplication while still leveraging the power of CSS.
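To illustrate what LessCss (introduced below) brings to the table: variables and mixins let the reuse live in the stylesheet instead of in markup class lists. A made-up sketch (all names are mine, not from the ImagineClub site):

```less
// Hypothetical .less file: reuse happens here, not in the XHTML
@highlight-color: #ffd;

.thick-border {
  border: 4px solid #333;
}

#sidebar {
  .thick-border;                  // mixin: pulls in the border rules
  background-color: @highlight-color;
  float: left;
}
```

The markup then only needs `<div id="sidebar">`, keeping the presentation concerns out of the class attribute.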

During a chat with Kristof about this particular issue the conclusion we reached was that my way of doing it was theoretically better, but only if backed by some sort of server-side framework that would enhance CSS to avoid duplication and verbosity that comes with my approach.

And looking over the Microsoft fence, somewhere in those fluffy green lands inhabited by Ruby people, I found the answer to my problems: LessCss!

But I don’t do ruby development so I filed it away under “cool but unreachable”, until I came across this tweet some two weeks ago:


I was immediately sold on the idea and contacted Erik about contributing to the project. Turns out he’s a really nice guy, working hard on writing a parser to read LessCss fully in managed code through the use of ANTLR.

But being the simple guy I am one of the first questions I had for Erik was: “Why don’t we just wrap the original Less project inside the DLR and run it from there?”.

Well, at that time Erik had no real answer for that, and neither did I, so I decided to give it a try, while Erik had some very valid reasons to continue working on a full C# implementation.

And now this is the post to tell you of my pyrrhic victory:

First: I did it. It’s here and it can be used: IronLess.Net.

Disclaimer: It’s a pain in the ass to use.

Installing IronLess.Net

When I write a library I want it to be one thing: self-contained. I don’t want to mess with your local IronRuby installation or with your current gem setup on the machine. So IronLess comes with a full catalog of IronRuby/Ruby class libraries, all packed into a 0.9MB 7zip file. This file contains 2,228 files in 482 folders, together forming the complete IronRuby environment needed to run the original LessCss.

I did some (simple) magic with NAnt to alleviate that pain, so if you check out the code you’ll just have to run build.bat and NAnt will compile IronLess and also extract the IronRuby libraries to your build folder, making it completely self-contained. You’ll end up with the following folder structure, which you just need to move (copying will take forever) to your /bin folder:


Once that’s done you only have to add the following HttpHandler to your ASP.NET web.config and add some initialization code to your Global.asax.cs to be all set.


<add verb="*" path="*.less" type="IronLess.Wrapper.IronLessHandler, IronLess.Wrapper" validate="false"/>


protected void Application_Start()

That should suffice to redirect all requests for .less files to the IronLessHandler, which will compile .less to .css using LessCss.


There are a thousand reasons to use this, but I’ve another thousand why you shouldn’t:

  1. Startup is painfully slow: initializing the LessCss ruby script takes >20 seconds. So every application start now takes 20 seconds, since we call the RubyEngine initializer in Application_Start (which kicks off the init of the LessCss script, which itself makes IronRuby parse all the imported libraries, resulting in a 20-second library load). That alone makes it completely unbearable, since every debug run in Visual Studio now takes 20 seconds to load.
  2. LessCss through the DLR can’t read Windows line endings. You have to open up your .less file in an editor like Notepad++ and convert it to UNIX-style endings (CRLF -> LF). Not pretty, and even less practical.
  3. Error handling / debugging is impossible. I didn’t dare modify the LessCss.rb script, so errors are written to a command line that you aren’t seeing. If your .less file has errors you’ll get no usable hints on why it failed to load.
  4. Compilation of .less to .css takes between 50 and 200 ms inside a running web app. Running the IronLess.Compiler takes about 30 seconds. Both figures are way too slow to be actually usable; going with the native Ruby gem from the command line would be much faster.

So, why bother?
Actually, that’s the question I asked myself halfway through writing IronLess. Since it’s so painful to deploy and start up, I don’t see any real use for it at this point. If someone has the skills to make the DLR run LessCss faster than light by flipping some magic bit, please go ahead, fork my repository on github and tell the world about it.

Also, I find the installation process to be just too painful. C’mon, 2,228 files in the /bin directory just to write CSS isn’t cutting it for me. What I want is one simple dll I reference from /lib and I’m set.

Going further

I’m going to help Erik get Less.Net out the door as quickly as possible, in the hopes of bringing something much needed to the ASP.NET world while avoiding all the troubles with external dependencies you get when trying to call into the ruby world from .NET code.
Also it’s a nice excuse for me to dig into ANTLR.

So finally: if you decide to use this, you are entering a world of hurt. Either you can improve IronLess to a point where it gets usable (I can’t), or you wait for Less.Net.

Filed under net, programmierung, projects

Storing binary data in NHibernate / ActiveRecord

I believe the simplest way to store binary data is to just put it in the database. Whenever I’ve agreed to throw data onto a disk instead, I’ve had issues with deployment, administration or disaster recovery.

Simply put: Once you have a dependency from your database to your file system, you no longer have the luxury of only thinking about recovering the database. You now need to keep two pieces of your system “safe”, both requiring a completely different toolset than the other.

Besides the obvious second point of headache for backup/recovery, you also bring yourself into a world of hurt for deployment / maintenance scenarios.
Filesystem access rights can be a huge pain in the ass, and having to set them right (and keep them that way) is usually a time-bomb waiting to go off.

So, storing your binary data in the db solves many problems, but some new ones arise. Mostly implementation details, but I’d like to show you some things to keep in mind when writing binary data to db.

NHibernate supports no lazy loading of instance fields

While with conventional ADO.NET I’d just put the binary data in a column inside the table it belongs to, NHibernate requires you to do things differently. If you map your data like that, NHibernate will fetch the field whenever you read objects from that table, meaning that you’ll be querying large binary data fields for no reason, causing your application’s performance to degrade significantly over time.


What you want is to have NHibernate fetch that field only if it is accessed (lazy load it), and that’s not possible for fields inside a class, but it is possible for references. So your database schema should look like this:


And your mapping will look similar to this (I’ll use ActiveRecord for easier understanding):

[ActiveRecord]
public class Invoice : ActiveRecordBase<Invoice>
{
    [PrimaryKey]
    public int Id { get; set; }

    [BelongsTo(Lazy = FetchWhen.OnInvoke, Cascade = CascadeEnum.SaveUpdate)]
    public BinaryData ScannedInvoice { get; set; }
}

[ActiveRecord]
public class BinaryData : ActiveRecordBase<BinaryData>
{
    [PrimaryKey]
    public int Id { get; set; }

    [Property(ColumnType = "BinaryBlob", SqlType = "IMAGE", NotNull = true)]
    public byte[] Data { get; set; }
}

Now, whenever your Invoice is saved/inserted, NHibernate will also check whether the BinaryData has to be updated/inserted, while only loading the binary field if you actually access the Invoice.ScannedInvoice property.
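In practice this means the blob table is only hit when the property is touched. A hypothetical usage sketch (assuming Castle ActiveRecord’s static Find; the exact call may differ):

```csharp
// Issues a SELECT against the Invoice table only -
// ScannedInvoice is just an uninitialized lazy proxy at this point
Invoice invoice = Invoice.Find(42);

// Touching the proxy triggers the second SELECT that actually
// pulls the bytes from the BinaryData table
byte[] scan = invoice.ScannedInvoice.Data;
```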

The fairy tale of binary blob fields

One of the main advantages students get from being members of imagineClub is that they get access to uploaded course materials through the website. Naturally, the new site has to support file upload and download somehow, and yesterday I started implementation of that feature.

In theory this sounds really simple, especially since the file upload in MonoRail is so trivial I figured it wouldn’t be a problem to implement.

One major thing to consider when designing a file upload feature is the question: Save to disk or save to database? Let’s look at the two options:

Save to disk:

Pro: Very easy
Con: Requires metadata to be kept in the database. Could go out of sync with the db. Requires backup. Requires special permissions.

Save to db:

Pro: Zero setup. Data all in one place, backup hugely simplified. Enforces data integrity
Con: Non-trivial implementation.

Now, I naturally went with the db option. Deployment is hugely simplified if you don’t need to look at file permissions, and most hosters have databases backed up anyway. So if things go south, the only thing I need to recover the site is the database file.

Some searching revealed that binary data could be mapped to the database through AR quite easily:

[Property(ColumnType = "BinaryBlob", SqlType = "varbinary(MAX)")]
public byte[] BinaryData { get; set; }

The problem with that is that it crashed ALL of my database-dependent unit tests:

------------ System.Data.SQLite.SQLiteException : SQLite error
near "MAX": syntax error

Apparently SQLite can’t figure out that MAX thing and will crash. Since it would accept a numeric value instead, I looked at the SQL Server 2008 documentation for varbinary to find out what MAX would be. Turns out it’s exactly 2147483647 (2^31-1), so my natural reaction was to change the SqlType to varbinary(2147483647) instead of MAX. Now SQLite can interpret it and all tests run great again, but creating the schema on SQL Server isn’t possible any more due to the following (odd) error:

The size (2147483647) given to the column 'BinaryData' exceeds the maximum allowed for any data type (8000).

So, what we just saw is a leaky abstraction inside the ORM. But NHibernate never claimed to abstract the DB completely away from me, so we won’t hold that against it. NHibernate explicitly supports these scenarios: in a real NHibernate setup it’s just a matter of having two different mapping files, one mapping to the appropriate SQLite datatype and the other mapping to the Sql2008 datatype varbinary(MAX).
But I’m not using NHibernate here, I’m using ActiveRecord, which handles mapping through attributes on the data classes, and I’ve no intention of using #ifdef statements anywhere around my code.
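For reference, the two-mapping-file route I’m passing on could look roughly like this in hbm.xml (file and column names are made up for illustration):

```xml
<!-- BinaryData.sqlite.hbm.xml : shipped with the test build -->
<property name="Data" type="BinaryBlob">
  <column name="Data" sql-type="BLOB" not-null="true" />
</property>

<!-- BinaryData.mssql.hbm.xml : shipped with the production build -->
<property name="Data" type="BinaryBlob">
  <column name="Data" sql-type="VARBINARY(MAX)" not-null="true" />
</property>
```

Which file gets embedded would then be a build/deployment decision rather than an #ifdef in the code.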

The problem here is mainly that whenever you are trying to use two different RDBMS at once you are limiting yourself to the least common denominator, and you have to deal with that.

I won’t be able to use advanced Sql2008 features, and I also won’t be able to use anything fancy inside SQLite either.

The least common denominator in this case is the datatype IMAGE, something Microsoft discourages in its documentation:


This puts me in a delicate position, since the imagineClub website is hosted on a server I don’t control. I could just wake up one morning and see the iC website down because the hosting company decided to upgrade all users to 2010 (or whatever version number the next SQL Server will have).

And I know, usually providers send out warnings for stuff like this, but I doubt that, through all the structural changes at imagineClub lately, they even know where to send those warnings.

So, long story short: use IMAGE over varbinary(MAX) if you plan on doing in-memory SQLite testing; just keep in mind that your app may break when you upgrade to a newer version of SQL Server.

Update: Looks like Krzysztof Kozmic had the same issues and found a quite clever solution for that. I’m not totally clear on how to do this with ActiveRecord, but it’s a very pragmatic approach to a problem that seems to not have a perfect solution anyway.

ImagineClub Website: File upload/download works

Oh it’s been quiet for a very very long time around here imagineClub-wise. And I’m afraid to say that progress has been rather slow.

Well, today I finally got around to implementing file downloads/uploads and the file listing in the sidebar section. In all its glory it was nothing more than 10 lines of C# code and tons and tons of XHTML and CSS that I’d rather not go into.

Here are some screenshots of the current version:

 image image

It’s really rough around the edges and I will probably need to bring in a markdown editor to allow formatting when posting. I plan on stealing Ken Egozi’s Windows Live Writer integration for the news section, but when users upload files to the site they need to have some way of formatting their text.

Also, I’ll look into Less.NET to manage my CSS, because it is becoming very, very verbose at the moment and restructuring it is just painful. This is mainly due to the fact that I want to keep the markup as ignorant of presentation as possible and as expressive as possible. All forms are built with accessibility in mind and pass the WebAIM tests.

Next up on my list is a search/list option for uploaded files. Maybe improve the file organization a bit more, and then start writing the ELMS integration to allow logged-in users access to the MSDN-AA. Once the ELMS thing is done I’d dare to launch the site.

I need time-scheduled energy-plans in Windows

I run my main PC almost 24/7 because I use it to host all sorts of data on it (TV Shows, Music etc) that I then stream to my Laptop, TV or XBox to consume somewhere in the house. So the PC is constantly running and I don’t like wasting energy.

I’ve configured my PC to go to hibernation after 45 minutes of idle time, and Windows is smart enough to detect when files are open over the network and will not hibernate while the data is actively in use.
Where it gets problematic is when I am doing something else (like eating) and then want to watch something off my PC. While I was eating, the PC went into hibernation, and I need to go up one floor just to hit the spacebar and head back down to the TV.

The obvious solution would be to simply shut off hibernation and let the PC run the whole day. But that is bad, since I like watching stuff on my laptop in bed. So when I turn off the laptop I want confidence that the PC will hibernate shortly after, without me having to get out of bed at that time.

So, what I want is a setting that allows me to tell Windows to run energy profile 1 during the day where hibernation is completely off, while running energy profile 2 after 10pm so I don’t need to worry about shutting down the PC.

I guess I could write a service to do exactly that, but I believe it would be a great feature to see out of the box in Windows 8 (if they decide to follow some sane naming rules this time)
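Short of a full service, a pair of scheduled tasks calling powercfg could approximate this today. A rough sketch (the GUIDs are the well-known defaults for the High performance and Balanced plans and may differ on your machine):

```batch
:: List the power plans and their GUIDs installed on this machine
powercfg -list

:: 08:00 - switch to High performance (configure it with hibernation off)
schtasks /create /tn "DayPowerPlan" /sc daily /st 08:00 /tr "powercfg -setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"

:: 22:00 - switch back to Balanced (with the 45 minute hibernation timeout)
schtasks /create /tn "NightPowerPlan" /sc daily /st 22:00 /tr "powercfg -setactive 381b4222-f694-41f0-9685-ff5bb260df2e"
```

Clunky, but it gets the "profile 1 by day, profile 2 by night" behavior without any custom code.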

The much better solution anyway would be to get a NAS to store my stuff in my server closet, so my PC could sleep through 70% of the day while I can still access all my media. Unfortunately most consumer-level NAS solutions are still too expensive or just don’t perform well enough.

PS: One nice thing on the green-computing side of things that comes with Win7 is that the “Maximum Power, minimum saving” option is hidden from the menu by default. You have to click “More energy-plans” to see it.
I guess it doesn’t matter much, since most users savvy enough to use that dialog probably know what they want anyway, but it sends the right message.

Filed under personal