Tigraine

Daniel Hoelbling-Inzko talks about programming

Configuring Kong health-checks in Kubernetes

The first rule of cloud computing should be: Always have a health check!

Why? Well - without them your cluster will not know if the application is actually up, still starting/terminating, or anywhere in between. As long as there are livenessProbes and readinessProbes, Kubernetes can make sure no traffic gets routed to your app before it is really ready. And even more important: it will restart and reschedule services once your health checks start going sideways.
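
For reference, this is roughly what such probes look like in a pod spec - a minimal sketch, with the path and port standing in for whatever your app actually exposes:

livenessProbe:
  httpGet:
    path: /healthz      # whatever health endpoint your app provides
    port: 8080
  initialDelaySeconds: 10
  timeoutSeconds: 5
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  timeoutSeconds: 5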

But here is another insight into health checks: Do performance testing on them.

During the last couple of days I've had Kubernetes kill and restart perfectly healthy Kong API gateway pods because apparently the /status route in Kong does some pretty expensive queries on the backend. Kong apparently thinks it's cool to do a SELECT COUNT(*) on most of its tables to tell you how many consumers it has registered, how many oauth_tokens there are etc. - all totally irrelevant information for a health check. But it's still the only endpoint I could hit that actually terminates on Kong itself (anything else gets proxied upstream, so Kong would also get killed whenever an upstream service has a problem). And /status sounded like a reasonable endpoint for health-checking.

Now with Postgres that kind of query would not be a terrible problem (still not good), but with Cassandra it's pretty catastrophic, since Cassandra is not really meant to do aggregation queries without a partition key. Looking at the code reveals the problem - and so once there was some moderate pressure, the slow queries would time out, Kubernetes would think the Kong pod was dead (although it was still serving requests) and kill it. Yay!

So the solution here was to move away from httpGet liveness & readiness probes to exec probes. Exec probes are one of my favorite Kubernetes features - instead of making network calls to check if something is up, the kubelet just does the equivalent of a docker exec and decides, based on the exit code of the executed program, whether the pod is healthy or not.

And coincidentally Kong comes with a commandline utility called kong health that does exactly what it's named for - and is lightning fast with no database involved :).

Here is the relevant yaml configuration:

readinessProbe:
  exec:
    command:
      - kong
      - health
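
The livenessProbe can use the exact same command - here's a sketch, with delay and timeout values you'd want to tune to your own cluster rather than copy verbatim:

livenessProbe:
  exec:
    command:
      - kong
      - health
  initialDelaySeconds: 30
  timeoutSeconds: 5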
Filed under kubernetes, devops

Enable code coverage reports in create-react-app projects

create-react-app is a nice and easy way to bootstrap a new React.js project with sane defaults and most of the tedious Webpack and Babel configuration already taken care of.

One thing I was missing from the generated configs though is how to output code coverage. Turns out it's rather simple - locate your package.json and add the following line to the scripts section:

  "scripts": {
    "coverage": "node scripts/test.js --env=jsdom --coverage"
  }

This way you can run yarn coverage or npm run coverage and get a nicely formatted output with your coverage data. You can read more about the Jest CLI options in the docs.
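
If you also want to control which files count toward those numbers, Jest's collectCoverageFrom option should do the trick - as far as I can tell create-react-app lets you override it through a jest section in package.json (the globs below are just an example):

  "jest": {
    "collectCoverageFrom": [
      "src/**/*.{js,jsx}",
      "!src/index.js"
    ]
  }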

Filed under reactjs, testing, tools, javascript

Naming tests is more important than what they do

Why are we writing tests? There are numerous reasons, but to me the primary one is that I can go into a codebase even after I have forgotten everything about it and make changes without fear of breaking 20 things at once.

One of the major antipatterns I see regularly is the dreaded testMethodWorks() testcase:

@Test
public void testCreateUser() throws Exception {
  User user = userService.createUser("foo", "bar");
  assertNotNull(user);
  assertEquals("foo", user.getUsername());
  assertEquals("bar", user.getPassword());

  User invalidUser = userService.createUser("bla", "");
  assertNull(invalidUser);

  User someOtherTest = userService
  .....
  ..(goes on for another 20 cases)...
}

The example is somewhat contrived, but you get the idea. A testcase that checks 30 different (marginally related) things and will potentially fail for even more reasons. Of course that one testcase validates that the createUser() method works - and especially when a lot of setup is involved in your testcase it's convenient to just use the stuff that's already there.

But by doing so you are sacrificing a major benefit of tests: Readability through naming. If every testcase is simply named after the method it's testing, you end up with a completely useless test class that has exactly the same informative value as the class under test. Why would I bother reading the test if I could just look at the code that's doing stuff? It's probably shorter than the test case!

Imagine you come into a new codebase and whenever something breaks you first have to read through the test code, looking at each JUnit stack trace to figure out which assertion blew up - just so you can work out what the test was actually doing and why its failure is a bad thing. Yikes.

Now I won't advocate the "one assertion per test" mantra - that's going overboard and usually leads to unmaintainable tests. But at the very least group your tests not by method but by use case. If a test fails it should be for one reason and that reason damn well ought to be in the test name. Not because nobody likes to read code - but because the first thing each testrunner will report is the name of the test that failed.

It's much easier to figure out what is going on if you get a

testCreateUserWithoutAdminCredentialsReturns403ForbiddenStatusCode()

failure rather than a simple testCreateUser().

Seriously - I didn't even have to explain what my use case was, but if this test blows up you will immediately know it's an ACL issue and that it's manifesting itself by not returning a 403 status code. If there was a second testcase called testCreateUserWithoutAdminCredentialsDoesNotInsertUserIntoDatabase, you also won't have to dig through all the corners of my repository to find out why some assertThat(repository.getAll().size(), equals(0)); found one record too many - you'll simply ignore that failure, since it's clearly an ACL issue and not a database problem. By splitting things into multiple testcases we also get the added benefit of predictable state: a test that did not correctly clean up some shared resource (in-memory DB etc.) will not create a false positive in line 100 of your testMethodWorks() case, but should be contained by your transactional test runner or your setup/teardown methods.

So I propose three simple things that should always be in the testname - regardless of how the test is written or what you are testing:

  • Method under test (createUser)
  • Context the test was run (WithValidAdminCredentials)
  • Expected outcome of the test (ReturnsUserAsJson)

And you end up with createUserWithValidAdminCredentialsReturnsUserAsJson, and alongside it you'd naturally get a second testcase called createUserWithValidAdminCredentialsInsertsUserIntoDatabase.
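
To make that concrete, here is a sketch of the ACL examples from above - loginAs, userApi, Response and repository are placeholders for whatever your project actually provides:

@Test
public void createUserWithoutAdminCredentialsReturns403ForbiddenStatusCode() throws Exception {
  loginAs(regularUser); // hypothetical helper - however your ACL setup works

  Response response = userApi.createUser("foo", "bar");

  assertEquals(403, response.getStatus());
}

@Test
public void createUserWithoutAdminCredentialsDoesNotInsertUserIntoDatabase() throws Exception {
  loginAs(regularUser);

  userApi.createUser("foo", "bar");

  assertEquals(0, repository.getAll().size());
}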

Keep that in mind and you'll make life much easier for yourself when you have to update something in the codebase a few months down the road - once you have forgotten everything that is going through your head right now :)

Filed under code, style, testing

Git isn't magic

One thing I have been seeing over the years is that Git has become too commonplace - too ubiquitous for people to care how it actually works or why it works the way it does. It's just this magical thing that keeps your source code safe - and as long as push and pull work, they just don't care why or how. Until git pull messes up your history, or you accidentally merge into the wrong branch, or it's just a case of something went wrong™ - and then they hope their Git client can bail them out.

So here is a very short guide on why Git is the way it is and why you should not be afraid of anything when it comes to working with git.

The first thing to understand is that Git stores all of your committed files in a blob storage living inside your .git folder. Each and every file you ever committed gets sliced into small objects and placed in the .git folder. This is important because Git does not think in deltas or changes like other SCM systems - it thinks in snapshots. Every commit stores the whole file in the blob storage. To save space, git internally maps your file to multiple smaller chunks and has a blob object reference those chunks (everything in git is addressed by a hash). So whenever you change something in that file, git will not save the whole file a second time, but rather create a new blob object that references all the old chunks alongside the new one you just changed.

Now that we know Git has a storage that can return any file ever committed to your repository, we can talk about the commit and the tree. Files by themselves are not very useful - we usually have a lot of them in a repository - so there is an object above the blobs that describes which versions of each blob belong together in a directory tree. Aptly named tree, it is just a file that is more or less a mapping from file names to the underlying blob hashes. This is also important because if we change one file in a huge directory tree, a new version of that tree just swaps one of these references for a new blob object (leaving all the other ones pointing to the old objects) - and that new blob object will possibly reference old chunks alongside the few changes we made to the file.
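
You can poke at these objects yourself: git cat-file prints any object from the store, and a tree really is just a list of names pointing at blob (and sub-tree) hashes. The hashes and file names below are placeholders - yours will differ:

$ git cat-file -p HEAD^{tree}
100644 blob a45fb3c3a8a9e1f9d78e5d21a2cd54a7e2b1c001    README.md
100644 blob 9b1c6e3d4b2f1a0c8d7e6f5a4b3c2d1e0f9a8b70    LICENSE
040000 tree 7d1f2e3c4b5a6978695a4b3c2d1e0f9a8b7c6d5e    src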

Why is this important? It also explains why git is so lightning fast. For every checkout operation, git just looks at the tree object, goes into its blob storage and writes out each file under the name it is referenced by in the tree. Compare that to delta-based SCMs, where you had to take the last snapshot and then replay every change since that snapshot to get to the current version. For git, checkout is a constant-time operation that takes roughly the same amount of time no matter which commit you check out.

The last concept to understand is the commit. A commit is basically a very small file that contains a few important pieces of information: the SHA1 hash of the tree it represents, the name of the author, the committer and the commit message. The git commit hash is exactly what the name implies: a sha1sum over this commit file. Thus a commit hash uniquely represents a commit, which unambiguously and cryptographically securely references a tree, which (using the same SHA1 concept) represents all files. If I tell you to check out version d0ce065 of the project, there is no doubt about what you are getting: either your commit's sha1 matches the commit file's contents or it doesn't. There is no way to corrupt that unit of your source code. Everything attached to a commit is securely identified by that one commit (since blobs, trees and chunks are all in turn identified by their SHA1 sums).

So we end up with a diagram like this (graphic by Scott Chacon, from the Pro Git book at https://git-scm.com/book):

[Diagram: a commit pointing to its tree, which points to the blobs for each file]

What is missing from this picture is the history. Where are the other commits? This is no SCM yet. Well, I left out one very important detail in my earlier description: the commit file also contains the sha1 of its parent commit. Since everything about a git commit is a sha1 hash, one commit also cryptographically secures the whole history before it, making it impossible to remove a commit or sneak one in without every later hash changing. Again, this is best described with a diagram:

[Diagram: a chain of commits, each pointing back to its parent]

So this also explains why in every git presentation ever there were arrows between the commits that always point backwards. Git history is a directed acyclic graph. It also means that commits only know about their parents - never about their children. So if you did something you are not proud of, just check out the parent commit and continue working from there - your unwanted child commit will linger in the git object storage for some time, but eventually git will forget about it and it's gone.
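
Again, git cat-file lets you verify all of this: a commit object is really nothing more than a tree hash, the parent hash(es), author, committer and message. The hashes, names and dates below are placeholders:

$ git cat-file -p HEAD
tree 7d1f2e3c4b5a6978695a4b3c2d1e0f9a8b7c6d5e
parent 715db85a1b2c3d4e5f60718293a4b5c6d7e8f901
author Some Author <author@example.com> 1489572353 +0100
committer Some Author <author@example.com> 1489572353 +0100

Fix the thing that was broken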

No branches?

If you think about all of the above for a second, you will realize that branches exist purely for human comprehension. They are not needed in git. Everything git (and you) needs to know or reason about is captured in a single commit sha1 hash. The git data structure has no concept of what master or develop means - it can only reason about 9b09404973ca743d0f5e11367034250dff219637.

Branches are just cosmetics so that we stupid humans don't have to remember sha1sums when we work with git. A branch is nothing more than a file inside .git/refs/heads that contains exactly the sha1sum of the commit the branch is currently pointing to. Branches are pointers - they serve no (real) purpose and can be deleted at no cost without anything happening to the commit tree. Try this:

$ git clone https://github.com/dotless/dotless.git
$ cd dotless/.git/refs/heads/
$ cat master
=> 9b09404973ca743d0f5e11367034250dff219637 <= This is the commit sha1.

Go ahead and do a git checkout 9b09404973ca743d0f5e11367034250dff219637 and you will get exactly the same working directory as if you had done git checkout master. This is so important because most of the time when stuff goes wrong with git people don't realize that having a commit on a branch where it does not belong is no big deal - just delete the branch and re-create it at the point where it actually should be.

Coincidentally, remote branches are just the same thing: pointers. Locally you can merge with your origin/master exactly because when you do a git pull or git fetch, git updates the files inside .git/refs/remotes/origin to the sha1sums that are on the server. Those merges are purely local operations, since all the changes from the server are already living inside your .git directory.
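
The same cat trick from above works for remote-tracking branches as well - though depending on your git version and whether the refs have been packed, they may live in .git/packed-refs instead of individual files:

$ cat .git/refs/remotes/origin/master
9b09404973ca743d0f5e11367034250dff219637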

So if you ever accidentally deleted a branch and forgot the commit sha1 it was pointing at - that branch is not lost. It's still right there in your .git folder, you just have no clue how to ask git for it. That's where git reflog comes in: run it and you will see all the recent pointer/branch changes you made, along with their sha1sums, ready for you to check out and recover.

$ git reflog
858b6db HEAD@{0}: merge laedit/Adding-support-for-'unit'-function: Merge made by recursive.
715db85 HEAD@{1}: checkout: moving from laedit-Adding-support-for-unit-function to master
715db85 HEAD@{2}: checkout: moving from master to laedit-Adding-support-for-unit-function
715db85 HEAD@{3}: merge nevett: Fast-forward
2a933ed HEAD@{4}: checkout: moving from nevett to master
715db85 HEAD@{5}: pull https://github.com/Nevett/dotless.git fix-mixin-important-recursive: Fast-forward
2a933ed HEAD@{6}: checkout: moving from master to nevett
2a933ed HEAD@{7}: checkout: moving from d1d2a822561fcd2c52a54b6bb8799000e4efecf9 to master
d1d2a82 HEAD@{8}: checkout: moving from MarkOSIndustries-master to d1d2a822561fcd2c52a54b6bb8799000e4efecf9

As you can see, each operation I did (even when I did not commit anything) is noted here and I can find the missing sha1sum to recover.
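
Recovering then simply means pointing a new branch at the sha1 you found in the reflog - for example, to resurrect whatever was sitting at 715db85:

$ git branch recovered 715db85
$ git checkout recovered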

Filed under git

Setting Source Code Pro as default font in xterm

While setting up my Linux machine I decided that I hate the default font provided by Ubuntu and vastly prefer Source Code Pro by Adobe for my terminal. The font was also too small by default, so I googled a bit and found the following solution, which worked for me with some minor changes.

First of all you have to download the font from Adobe and install it to your ~/.fonts directory and rebuild the font cache.

At the time of writing this script should do this all for you:

wget https://github.com/adobe-fonts/source-code-pro/archive/2.030R-ro/1.050R-it.zip -O scp.zip
unzip scp.zip
cd source-code-pro*
mkdir -p ~/.fonts
cp TTF/*.ttf ~/.fonts/
fc-cache -vf

Afterwards you have to create a ~/.Xresources file in your home directory that contains the following lines:

XTerm*faceName: Source Code Pro,Source Code Pro Semibold
XTerm*faceSize: 12

Now run the X server resource database utility xrdb and merge the settings into your current X config. All new xterm windows will now use the new font.

xrdb -merge ~/.Xresources
Filed under linux, xterm, ubuntu

Windows 10 Error Message 0xc000000e after installing dual boot system

I recently decided to install Ubuntu as a second boot option on my main PC at home. Since I have been doing a lot of Unix development lately and really hate working on my small MacBook, installing Ubuntu on the big machine seemed like the logical alternative.

My old setup was a 240GB SSD that held my Windows 10 install, and I got a new 500GB Samsung SSD to replace it as my main OS disk. Using my old but proven dd copy method I moved Windows from the 240GB SSD to the more spacious 500GB one, which freed up the 240GB SSD for a fresh Ubuntu install.

The copy worked fine and Windows ran flawlessly on the new 500GB SSD - until I also installed the 240GB SSD and set up Ubuntu on it. Ubuntu would load fine, but when I tried to boot into Windows it failed with the 0xc000000e error, showing me the Windows 10 boot repair options.

The weird thing was that I could boot into Windows perfectly fine once I unplugged the 240GB Ubuntu SSD, but as long as that SSD was in the system I could not get the Windows bootloader to start Windows - even when I changed the boot settings in my BIOS to bypass grub and go straight to the Windows bootloader.

With the kind help of the people on the askubuntu forums I finally found the solution: the Windows BCD apparently does not like it when you change the order of its hard drives. So half the bootloader was loading from UEFI, but it could not hand off to the real bootloader on my 500GB SSD - it expected that disk to be the first one in the system, but it had become the second disk, since the 240GB SSD was now on SATA1 and the 500GB disk on SATA2.

After switching the cables around everything worked fine: I could boot into Windows using the Windows bootloader in my UEFI BIOS, or start up Grub and chainload into the Windows bootloader from there without issue.

Filed under ubuntu, windows, windows10, boot

Disk shows up in BIOS but not in Windows 10 (AHCI Hotplug)

So today I installed a new drive bay that turns a 5.25" slot in my computer into a dock for 3.5" SATA drives. The installation went fine, I enabled Hot-Plug in my UEFI BIOS and everything looked OK - until I put my hard drive into the bay and Windows would not recognize it at all.

I went back into the BIOS, this time with the 3.5" drive installed in the bay, and saw that the drive showed up in the BIOS monitor just fine. Just to make sure, I re-checked the power connections and finally plugged the drive directly into the SATA port to rule out the drive bay being at fault. Still, Windows 10 would not recognize the drive.

After looking around on the Internet I found this very interesting forum thread on TomsHardware that suggested running the Windows 10 Memory Diagnostic. Well - no clue why it worked - but it did.

After running the diagnostic on minimal settings with only one pass, Windows booted again and the drive was there - it even shows up in the "Safely Remove Hardware" section, which I need for hot-swapping the drive.

Filed under hardware, windows, bios, ahci

Making a whole libgdx scene2d group touchable

When placing a Group with child controls inside a libgdx Stage, it is apparently impossible for the whole group to get hit-detection (click and touch events) - only the children inside the container will fire touch and click events.

This is something I was struggling with quite some time until I looked at the libgdx source code and found this in Actor.java:

public Actor hit (float x, float y, boolean touchable) {
    if (touchable && this.touchable != Touchable.enabled) return null;
    return x >= 0 && x < width && y >= 0 && y < height ? this : null;
}

That's how hit-detection is done in libgdx, and it's also the reason why no click events inside the group's bounds were detected. Not because of the code above - that one works fine - but because, if you look at the code for scene2d.ui.Group, you'll notice it overrides the hit-detection to delegate all calls to its children, never checking itself.

So the simple solution was to subclass Group and, instead of calling super.hit, bring back the hit-detection from Actor.
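
Something along these lines - a sketch of the idea rather than a drop-in class (I'm going through the public getters here instead of Actor's internal fields):

import com.badlogic.gdx.scenes.scene2d.Actor;
import com.badlogic.gdx.scenes.scene2d.Group;
import com.badlogic.gdx.scenes.scene2d.Touchable;

public class TouchableGroup extends Group {
    @Override
    public Actor hit (float x, float y, boolean touchable) {
        // Same check Actor does - report the group itself as hit instead of delegating to the children
        if (touchable && getTouchable() != Touchable.enabled) return null;
        return x >= 0 && x < getWidth() && y >= 0 && y < getHeight() ? this : null;
    }
}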

Hope this helps.

Filed under libgdx, java, inputs

Usability done right - Apple TV 4th Gen

My old Apple TV remote died some years ago and I had another useless remote from last century programmed to control it. It was ugly, to say the least, and I really got used to having HDMI-CEC after using the Chromecast for some time. So when I heard that the new Apple TV supports HDMI-CEC (meaning the Apple TV can control the TV and the TV can control the Apple TV), I decided to buy it.

I got it yesterday and set it up - and I have to say: wow, that process was genius. Instead of having to enter all your information (network settings, Apple ID and password etc.) you just take your iPhone, hold it next to the Apple TV, and after a brief moment the iPhone asks if you really want to set up the Apple TV with your iPhone's credentials - and that's it. It configures the WiFi, your Apple ID and so on straight from your iPhone.

The process for the Chromecast was also good, but this is just way better.

Another thing I noticed right away: the perceived streaming quality on the Apple TV is way better. Instead of starting a Netflix stream instantly like the Chromecast does, it buffers a moment longer, but you don't start out with a 240p stream - you at least get 720p from the start. Especially at peak times when ISPs are overloaded, it seemed to take forever on my Chromecast to wind its way up from 240p to 720p and then 1080p, meaning I would sometimes watch for almost 5 minutes until the image was bearable.

Filed under appletv, usability

Project Thesis - Introduction

I think I mentioned a time or two that I have been studying computer science at the University of Klagenfurt for quite some time now. I have officially taken leave from work to get my thesis done, and today marks the first day of honest work on the project. To start things off I decided to write about what the project is all about, and I will keep a diary of things I learn while developing my thesis.

What's this about?

My thesis will consist of an Android game that provides a framework for researchers to test games with a purpose without having to implement the game part over and over again. The initial idea is to create a game similar to Clicker Heroes - a simple yet addictive time-killer that is played in stages and can be played forever with little effort from the player. Because it is played in levels, I am planning to inject a mini-game into the (already mini) game that lets the user earn extra points, providing something like a boss level. This boss level is where the science comes in.

During the boss level I am planning to use the players to verify the results of a deep-learning computer vision algorithm on an arbitrary image database. This is also where I want to enable others to just take the working game and plug in their own game with a purpose to test a theory, without having to build their own full-fledged game. Granted, you still have to implement your mini-game, but you don't need to make it fun - just reward your users for playing it with money and points towards the actual base game. (Similar to how some games display ads between levels to make money, I am planning on annoying the user for science!)


So how is this going to happen? The game will be open source and available on GitHub shortly, and will obviously be written in Java (yay - haven't done any serious Java in years). To speed things along I am planning to implement it using libGDX so I get cross-platform support without too much work. Beyond that I haven't decided yet how far to take this. There will be a sample game-with-a-purpose implementation I will be testing, probably using Caffe from the Berkeley Vision and Learning Center to generate a model I can then verify using the players.

The project will be open source and reside on GitHub for everyone to follow, licensed under the Apache 2 license. Please let me know if you have any questions or want to follow the progress - you can reach me on Twitter at https://twitter.com/tigraine

Filed under thesis, projects, oss
