Tigraine

Daniel Hoelbling-Inzko talks about programming

Debugging Go IOWait Hang: Sometimes it's really not your code

If something looks like a bug in the language runtime, the standard library or the operating system, I tend to approach it with caution: it's usually a bug in my code that I'm just not seeing.

But sometimes it's not me - it really is the toolchain. I spent a solid week debugging a Go program until I found out that cross-compiling from OSX to Linux triggers a stdlib bug that manifests itself as the whole application hanging in IOWait loops given enough concurrency.

Obviously the whole thing was really frustrating because:

  • The bug only happened on production servers (obviously - anything else would not be fun).
  • It could only be reproduced on a large dataset of 300 million items (so every test run took quite a while).
  • I had to test whether it works without concurrency (which took 2 days - and yes, it did).

But the important finding from this exercise was that you can print the full stack trace of every running goroutine, along with its status, for a running (or hanging) program! You just have to send the process a SIGABRT signal (kill -ABRT <pid>). This is similar to what you see when a panic occurs and was massively helpful in hunting down this bug. Kudos to the Go team for that.

An example for this:

package main

func main() {
  for {}
}

The program will obviously hang in a busy loop, but if you send it SIGABRT you'll get something similar to this printed to stderr:

SIGABRT: abort
PC=0x1056d70 m=0 sigcode=0

goroutine 1 [running]:
main.main()
        /Users/tigraine/projects/test/main.go:4 fp=0xc00003c788 sp=0xc00003c780 pc=0x1056d70
runtime.main()
        /usr/local/Cellar/go/1.14.1/libexec/src/runtime/proc.go:203 +0x212 fp=0xc00003c7e0 sp=0xc00003c788 pc=0x102b3f2
runtime.goexit()
        /usr/local/Cellar/go/1.14.1/libexec/src/runtime/asm_amd64.s:1373 +0x1 fp=0xc00003c7e8 sp=0xc00003c7e0 pc=0x10528f1
...

Filed under golang, go, debugging

Upping Apache PoolingHttpClientConnectionManager pool limits

Imagine configuring an HTTP connection pool and setting setMaxTotal to 50. A reasonable assumption would be that the HttpClient will henceforth open up to 50 concurrent connections upstream.

Well, not in Java-land - here you'll get exactly 2 outgoing connections, apparently regardless of what you set as the maximum total.

Turns out there is a second setting on the PoolingHttpClientConnectionManager called maxPerRoute, which controls how many connections you can open to the same host/url combination. Since in our current setup we mostly query one endpoint over and over again, the maxTotal setting is pretty much irrelevant and the limiting factor is maxPerRoute.

Thankfully there is setDefaultMaxPerRoute, which can be tweaked, and you can also specify individual limits per upstream route with setMaxPerRoute.

The final code in question is:

PoolingHttpClientConnectionManager poolingConnectionManager = new PoolingHttpClientConnectionManager();
poolingConnectionManager.setMaxTotal(MAX_TOTAL_CONNECTIONS);
poolingConnectionManager.setDefaultMaxPerRoute(MAX_TOTAL_CONNECTIONS);

To debug the issue of slow-responding upstreams I also wrote a little Go webserver called blackhole that does exactly what the name implies: it accepts any HTTP connection and swallows it for 100 seconds. This makes it easy to test your code against slow-responding HTTP servers (like when they are under duress or the system becomes unresponsive).
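
A minimal sketch of what such a blackhole server can look like (the port and the fact that it eventually answers with a 200 are assumptions for illustration, not necessarily what the actual tool does):

package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	// Swallow every request: keep the connection open for 100 seconds
	// before answering, simulating a slow or unresponsive upstream.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(100 * time.Second)
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}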

Filed under java, apache

Convert a millisecond precision unix timestamp to Time in go

It's no real secret that I love the programming language Go. So I was really delighted to see that Go does all the right things with its time package: it handles time zones etc. correctly by default, as opposed to bolting them on after the fact like most other languages.

But for some unknown reason it's just way too complex to convert a millisecond resolution Unix timestamp to time.Time. The built-in time.Unix() function only supports second and nanosecond precision.

This means that you either have to multiply the millis to nanoseconds or split them into seconds and nanoseconds. So obviously my naive implementation was:

time.Unix(0, timestamp * int64(1000000))

But that code looked ugly to me - especially if you have to do this a few times around the codebase - so I wrote a function.

But for some reason I also decided to benchmark my function, as I am working on a performance-critical piece of code right now. And it turns out that the simple multiplication to turn millis into nanos is 2x slower than splitting the millis into seconds and turning the remainder into nanos.

time.Unix(ms/int64(millisInSecond), (ms%int64(millisInSecond))*int64(nsInSecond))
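
Wrapped into a helper function, that looks roughly like this (a sketch with the constants spelled out; the names are illustrative and may differ from the published library):

package timemillis

import "time"

const (
	millisInSecond = 1000    // milliseconds per second
	nsInSecond     = 1000000 // nanoseconds per millisecond (1e6)
)

// FromUnixMilli converts a millisecond-precision Unix timestamp into a time.Time.
func FromUnixMilli(ms int64) time.Time {
	return time.Unix(ms/int64(millisInSecond), (ms%int64(millisInSecond))*int64(nsInSecond))
}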

Benchmark:

goos: darwin
goarch: amd64
pkg: github.com/tigraine/go-timemillis
BenchmarkMult-8         2000000000               0.50 ns/op
BenchmarkDiv-8          2000000000               0.25 ns/op
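
Benchmark functions along these lines produce those numbers (a sketch; the actual benchmark code in the repo may differ):

package timemillis

import (
	"testing"
	"time"
)

// sink keeps the compiler from optimizing the conversions away.
var sink time.Time

func BenchmarkMult(b *testing.B) {
	ms := int64(1514764800000) // an arbitrary millisecond timestamp
	for i := 0; i < b.N; i++ {
		sink = time.Unix(0, ms*int64(1000000))
	}
}

func BenchmarkDiv(b *testing.B) {
	ms := int64(1514764800000)
	for i := 0; i < b.N; i++ {
		sink = time.Unix(ms/1000, (ms%1000)*int64(1000000))
	}
}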

So I packaged my findings into a library which is now available on GitHub: go-timemilli

Filed under golang, time

Golang hidden gems: testing.T.Log

One thing I love about Go is its build chain and overall ease of use. Some things take time to get used to, but the lightning-fast builds and the convention-based testing Go offers are addictive right from the start.

Today I found another hidden gem I think is just genius: testing.T.Log(). OK, I admit it's not the sexiest method to get excited about - but bear with me for a moment. Imagine the following code:

func TestSomething(t *testing.T) {
  t.Log("Hello World")
}

What's the output? If you expected Hello World you are mistaken. The output is exactly nothing :)

testing.T.Log() only prints something if the test fails (through testing.T.Error or testing.T.Fatal) or if you run go test with the -v flag. Brilliant! Nothing is more annoying than chatty test suites where your actual problem is buried in 2-3 megabytes of meaningless debug statements - and this solves the problem really elegantly. You can log as much debug info as you want and it will only surface if the test actually fails.
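
To actually see the log output, make the test fail (a minimal sketch):

func TestSomethingBroken(t *testing.T) {
  t.Log("Hello World")            // now shows up in the output
  t.Error("something went wrong") // because this makes the test fail
}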

Filed under golang, go, testing

Golang: int is not a type

Today I ran into a very interesting compiler error in my Go program: int is not a type - even though the same lines of code had compiled just fine a few minutes earlier.

It took me 20 minutes to figure it out. I was already halfway through a forum post on the golangbridge forums, trying to put together a minimal example, when I noticed the problem: I had a typo in my func init() and had inadvertently called it func int()!

The Go compiler didn't even complain about me shadowing one of its built-in types.
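
A minimal repro of that typo (it deliberately does not compile):

package main

// This was meant to be func init() - the typo declares a package-level
// identifier named int, shadowing the built-in type for this package.
func int() {}

func main() {
  var x int = 5 // compile error: int is not a type
  _ = x
}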

Filed under golang

Generating synthetic CPU load on Linux

While working on some alerting and metric collection for our infrastructure at Bitmovin, I wanted to test whether the alerts I had configured would actually trigger when a server experiences high CPU load.

I came across this beautiful Stack Overflow answer that did exactly what I needed:

seq 3 | xargs -P0 -n1 md5sum /dev/zero

This command will saturate 3 cores with 100% user load until you cancel it with CTRL+C: seq 3 prints three lines, xargs -P0 -n1 spawns one md5sum /dev/zero process per line, all in parallel, and since /dev/zero never ends each process keeps one core pinned. Change the 3 to saturate a different number of cores.
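
If you ever need the same thing without shell tools, a tiny Go program that spins one busy loop per core does the trick (a sketch):

package main

import "runtime"

func main() {
  // Spin one busy-looping goroutine per CPU core to generate ~100% user load
  // until the process is killed (e.g. with CTRL+C).
  for i := 0; i < runtime.NumCPU(); i++ {
    go func() {
      for {
      }
    }()
  }
  select {} // block the main goroutine forever
}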

Filed under linux, devops, server

Compiling vim8 with python support on Ubuntu

Today I took a day off from work, and as always when I try some new stuff, I ended up spending 2 hours on my Vim configuration before actually getting anything done. Today's two hours were spent on getting Vim compiled with python3 support.

First off: do use Vim8 - it's awesome - and do compile it from source. It's rather simple and saves you from outdated packages on Ubuntu :).

Now my issue today was that I tried enabling python2 and python3 support at the same time. For no apparent reason the following configuration always resulted in a vim binary that thought it had python support - but didn't:

./configure --with-features=huge \
            --enable-multibyte \
            --enable-rubyinterp=yes \
            --enable-pythoninterp=yes \
            --with-python-config-dir=/usr/lib/python2.7/config-x86_64-linux-gnu \
            --enable-python3interp=yes \
            --with-python3-config-dir=/usr/lib/python3.5/config-3.5m-x86_64-linux-gnu \
            --enable-perlinterp=yes \
            --enable-luainterp=yes \
            --enable-cscope --prefix=/usr \
            --enable-fail-if-missing

Running vim --version reported +python/dyn and +python3/dyn, so I thought: cool, it's working - until I started vim and was greeted by:

Sorry, this command is disabled, the Python library could not be loaded.

To make things more interesting, :echo has('python') returned 0 too - although Vim was built with python support (and --enable-fail-if-missing is supposed to fail the build if python can't be linked).

So after poking around for a bit and not getting anywhere I decided to just remove the python3 support from the configure line and voilà - python is statically linked and working. Yay!

./configure --with-features=huge \
            --enable-multibyte \
            --enable-rubyinterp=yes \
            --enable-pythoninterp=yes \
            --with-python-config-dir=/usr/lib/python2.7/config-x86_64-linux-gnu \
            --enable-perlinterp=yes \
            --enable-luainterp=yes \
            --enable-cscope --prefix=/usr \
            --enable-fail-if-missing

Filed under vim, python, tools

Configuring Kong health-checks in Kubernetes

The first rule of cloud computing should be: Always have a health check!

Why? Well - without them your cluster will not know whether the application is actually up, still starting/terminating, or anywhere in between. As long as there are livenessProbes and readinessProbes, Kubernetes can make sure no traffic gets routed to your app before it is really ready. And even more importantly: it will restart and reschedule services once your health checks start going sideways.

But here is another insight into health checks: Do performance testing on them.

During the last couple of days I've had Kubernetes kill and restart perfectly healthy Kong API gateway pods because the /status route in Kong does some pretty expensive queries on the backend. Kong apparently thinks it's cool to do a SELECT COUNT(*) on most of its tables to tell you how many consumers it has registered, how many oauth_tokens there are, etc. - all totally irrelevant information for a health check. But it's still the only endpoint I was able to hit that actually terminates on Kong itself (anything else would also kill Kong whenever the upstream service has a problem), and /status sounded like a reasonable endpoint for health-checking.

Now with Postgres that kind of query would not really be a terrible problem (still not good), but for Cassandra it's pretty catastrophic, since Cassandra is not really meant to do aggregation queries without a partition key. Looking at the code reveals the problem - and so once there was some moderate pressure, the slow queries would time out, Kubernetes would think the Kong pod was dead (although it was still serving requests), and it would kill it. Yay!

So the solution here was to move away from httpGet liveness & readiness probes to exec probes. Exec probes are one of my favorite Kubernetes features - instead of doing network calls to check if something is up, Kubernetes will just do a docker exec and determine, based on the return code of the executed program, whether the pod is healthy or not.

And coincidentally Kong comes with a command-line utility called kong health that does exactly what its name suggests - and it's lightning fast, with no database involved :).

Here is the relevant yaml configuration:

readinessProbe:
  exec:
    command:
      - kong
      - health
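
The livenessProbe can use the exact same exec command (a sketch, assuming you want both probes backed by kong health):

livenessProbe:
  exec:
    command:
      - kong
      - health
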
Filed under kubernetes, devops

Enable code coverage reports in create-react-app projects

create-react-app is a nice and easy way to bootstrap a new React.js project with sane defaults and most of the tedious Webpack/Babel configuration already taken care of.

One thing I was missing from the generated configs though was how to output code coverage. Turns out it's rather simple - locate your package.json and add the following line under scripts:

  "scripts": {
    "coverage": "node scripts/test.js --env=jsdom --coverage"
  }

This way you can run yarn coverage or npm run coverage and get nicely formatted output with your coverage data. You can read more about the Jest CLI options in the docs.

Filed under reactjs, testing, tools, javascript

Naming tests is more important than what they do

Why are we writing tests? There are numerous reasons, but to me the primary one is that I can go into a codebase even after I have forgotten everything about it and make changes without fear of breaking 20 things at once.

One of the major antipatterns I see regularly is the dreaded testMethodWorks() testcase:

@Test
public void testCreateUser() throws Exception {
  User user = userService.createUser("foo", "bar");
  assertNotNull(user);
  assertEquals(user.getUsername(), "foo");
  assertEquals(user.getPassword(), "bar");

  User invalidUser = userService.createUser("bla", "");
  assertNull(invalidUser);

  User someOtherTest = userService
  .....
  ..(goes on for another 20 cases)...
}

The example is somewhat contrived, but you get the idea: a testcase that checks 30 different (marginally related) things and will potentially fail for even more reasons. Of course that one testcase validates that the createUser() method works - and especially when a lot of setup is involved, it's convenient to just reuse the stuff that's already there.

But by doing so you are sacrificing a major benefit of tests: Readability through naming. If every testcase is simply named after the method it's testing, you end up with a completely useless test class that has exactly the same informative value as the class under test. Why would I bother reading the test if I could just look at the code that's doing stuff? It's probably shorter than the test case!

Imagine you come into a new codebase and whenever something breaks you first have to read through the test code, looking at each JUnit stack trace to figure out which assertion blew up - just so you can work out what the test was actually doing and why its failure is a bad thing. Yikes.

Now I won't advocate the "one assertion per test" mantra - that's going overboard and usually leads to unmaintainable tests. But at the very least group your tests not by method but by use case. If a test fails it should be for one reason and that reason damn well ought to be in the test name. Not because nobody likes to read code - but because the first thing each testrunner will report is the name of the test that failed.

It's much easier to figure out what is going on if you get a

testCreateUserWithoutAdminCredentialsReturns403ForbiddenStatusCode()

failure rather than a simple testCreateUser().

Seriously - I didn't even have to explain to you what my use case was - but if this test blows up you will immediately know it's an ACL issue and that it manifests itself by not returning a 403 status code. If there was a second testcase called testCreateUserWithoutAdminCredentialsDoesNotInsertUserIntoDatabase, you also wouldn't go digging through all the different corners of my repository to figure out why some assertThat(repository.getAll().size(), equals(0)); found one record too many - you'd just ignore that failure, since it's clearly an ACL issue and not a database-related thing. By splitting things into multiple testcases we also get the added benefit of predictable state: a test that did not correctly clean up some shared resource (in-memory db etc.) will not create a false positive in line 100 of your testMethodWorks() case, but should be contained by your transactional test runner or your setup/teardown methods.

So I propose three simple things that should always be in the test name - regardless of how the test is written or what you are testing:

  • Method under test (createUser)
  • Context the test was run (WithValidAdminCredentials)
  • Expected outcome of the test (ReturnsUserAsJson)

And you end up with createUserWithValidAdminCredentialsReturnsUserAsJson, and alongside it you'd naturally get a second testcase called createUserWithValidAdminCredentialsInsertsUserIntoDatabase.

Keep that in mind and you'll make life much easier for yourself when you have to update something in your codebase a few months down the road - once you have forgotten everything that is going through your head right now :)

Filed under code, style, testing
