Tigraine

Daniel Hoelbling-Inzko talks about programming

Disk shows up in BIOS but not in Windows 10 (AHCI Hotplug)

So today I installed a new drive bay that turns a 5.25" slot on my computer into a dock for 3.5" SATA drives. The installation worked fine, I enabled Hot-Plug in my UEFI BIOS and everything looked OK - until I put my hard drive into the bay and Windows would not recognize it at all.

I went back into the BIOS, this time with the 3.5" drive installed in the bay, and saw that the drive was showing up there just fine. Just to make sure, I re-checked the power connections, and finally I plugged the drive directly into the SATA port to rule out the drive bay as the culprit. Still Windows 10 would not recognize the drive.

After looking around on the Internet I found this very interesting forum thread on TomsHardware that suggested running the Windows 10 Memory Diagnostic. Well - no clue why it worked - but it did.
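For reference, you can kick off that diagnostic yourself with the built-in scheduler - a minimal sketch, assuming a stock Windows 10 install (it asks to reboot and then runs the memory test before Windows starts):

    mdsched.exe

You can launch it from the Run dialog (Win+R) or a command prompt.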

After running the diagnostic on minimal settings with only one pass, Windows booted again, the drive was there, and it even shows up in the "Safely Remove Hardware" section, which I need for hot-swapping the drive.

Filed under hardware, windows, bios, ahci

Making a whole libgdx scene2d group touchable

When placing a Group with child controls inside a libgdx Stage, it is apparently impossible for the whole group to get hit detection (click and touch events); instead, only the children inside the container will fire touch and click events.

This is something I struggled with for quite some time until I looked at the libgdx source code and found this in Actor.java:

public Actor hit (float x, float y, boolean touchable) {
    // Respect the touchable flag: disabled actors never report hits
    if (touchable && this.touchable != Touchable.enabled) return null;
    // Report a hit if (x, y) falls within the actor's own bounds
    return x >= 0 && x < width && y >= 0 && y < height ? this : null;
}

That's how hit detection is done in libgdx, and that's also the reason why no click events inside the group's bounds were detected. Not so much because of the code above - that one works fine - but if you look at the code for scene2d.ui.Group you notice that it overrides the hit detection to delegate all calls to its children, never checking the group itself.

So the simple solution was to subclass Group and, instead of calling super.hit, bring back the hit detection from Actor - see the sketch below.
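Here is a minimal sketch of that subclass - TouchableGroup is a name I made up, and it uses Actor's public getters since the width/height fields aren't accessible outside the package:

import com.badlogic.gdx.scenes.scene2d.Actor;
import com.badlogic.gdx.scenes.scene2d.Group;
import com.badlogic.gdx.scenes.scene2d.Touchable;

// A Group that reports itself as hit anywhere inside its bounds,
// instead of delegating hit detection to its children.
public class TouchableGroup extends Group {
    @Override
    public Actor hit (float x, float y, boolean touchable) {
        // Same bounds check as Actor.hit, minus Group's child delegation
        if (touchable && getTouchable() != Touchable.enabled) return null;
        return x >= 0 && x < getWidth() && y >= 0 && y < getHeight() ? this : null;
    }
}

Keep in mind that with this version the group swallows the events, so the child actors will no longer fire their own click events.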

Hope this helps.

Filed under libgdx, java, inputs

Usability done right - Apple TV 4th Gen

My old Apple TV remote died some years ago, and I had another useless remote from last century programmed to control it. It was ugly to say the least, and I really got used to having HDMI-CEC after using the Chromecast for some time. So when I heard that the new Apple TV supports HDMI-CEC (meaning the Apple TV can control the TV and the TV can control the Apple TV), I decided to buy it.

So I got it yesterday and set it up - and I have to say: wow, that process was genius. Instead of having to enter all your information (network settings, Apple ID and password, etc.) you just hold your iPhone next to the Apple TV, and after a brief moment the iPhone asks if you really want to set up the Apple TV with your iPhone's credentials - and that's it. It configures the WiFi, your Apple ID and everything else straight from your iPhone.

The process for the Chromecast was also good, but this is just way better.

Another thing I noticed right away: the perceived streaming quality on the Apple TV is way better. Instead of starting a Netflix stream instantly like the Chromecast does, it buffers for a second longer, but you don't start out with a 240p stream - you get at least 720p right away. Especially at peak times when ISPs are overloaded, my Chromecast seemed to take forever to wind its way up from 240p to 720p and then 1080p, meaning I would sometimes watch for almost 5 minutes until the image was bearable.

Filed under appletv, usability

Project Thesis - Introduction

I think I mentioned a time or two that I have been studying computer science at the University of Klagenfurt for quite some time now. I officially took leave from work to get my thesis done, and today marks the first day of honest work on the project. To start things off I decided to write about what the project is all about, and I will keep a diary of things I learn while developing my thesis.

What's this about?

My thesis will consist of an Android game that provides a framework for researchers to test games with a purpose without having to implement the game part over and over again. The initial idea is to create a game similar to Clicker Heroes: a simple yet addictive time-killer that is played in stages and can be played forever with little effort from the players. Because it is played in levels, I am planning to inject a mini-game into the (already mini) game that lets the user earn extra points, providing something like a boss level. This boss level is where the science comes in.

During the boss level I am planning to use the players to verify the results of a deep-learning computer vision algorithm on an arbitrary image database. This is also where I want to enable others to just take the working game and plug in their own game with a purpose to test a theory, without having to build their own full-fledged game. Granted, you still have to implement your mini-game, but you don't need to make it fun - just reward your users for playing it with money and points towards the actual base game. (Similar to how some games display ads in between levels to make money, I am planning on annoying the user for science!)


So how is this going to happen? The game will be open-source and available on GitHub shortly, and will obviously be written in Java (yay - haven't done any serious Java in years). To speed things along I am planning to implement it using libGdx, so I get cross-platform support without too much work. Beyond that I haven't decided yet how far to take this. There will be a sample game-with-a-purpose implementation I will be testing, probably using Caffe from the Berkeley Vision and Learning Center to generate a model I can then verify using the players.

The project will be open-source and reside on GitHub for everyone to follow, licensed under the Apache 2 license. Please let me know if you have any questions or want to follow the progress - you can reach me on Twitter at https://twitter.com/tigraine

Filed under thesis, projects, oss

Compiling ViM on OSX with Ruby support

I just wanted to quickly point you to this link in case you need ViM compiled in a way that allows you to run the awesome Command-T plugin: http://arjanvandergaag.nl/blog/compiling-vim-with-ruby-support.html

Without Command-T I couldn't use ViM, it's just too convenient.

Filed under vim, ruby, osx

Random PC freezes and reboots under load

I just switched out some components in my computer to get more speed out of my photography retouching workflow. Since I really hate waiting for Lightroom, I opted for a really beefy i7-5820K with a pretty oversized motherboard and plenty of RAM. I kept my trusty GeForce GTX 760 Ti and all of my peripherals - but the 6 (12 logical) cores really ought to speed up my workflow.

So I ordered my stuff and assembled it - I even upgraded my case from a midi tower to a full tower to get better airflow and cooling for my hard drives. I spent way too much time tidying up cables and making the whole build really solid. Everything worked flawlessly at first - until the next day, when I was scheduled for a meeting with a client at 4pm and wanted to transfer some images to the iPad to show during the meeting.

So I spun up Lightroom, selected some of my favourite images from last week's wedding and hit Export like I had done a thousand times before. But this time the computer just rebooted after a few seconds.

OK, so I was royally screwed. I retried immediately and the computer froze once more. Looking into the Event Log I found a critical Kernel-Power 41 error without any useful description whatsoever.

At that point I had to quit and get to the meeting - without the images. When I came back and searched a bit online, I discovered that Kernel-Power 41 can mean a multitude of things, but mostly that the system did not shut down cleanly, and that a Power Supply Unit issue is the most likely cause.

So I did the math on my components and found that the rig should peak somewhere around 520 Watts - with a slightly undersized 550W PSU installed (your PSU should have around 30% headroom over peak consumption to absorb spikes without issues, and 520W × 1.3 ≈ 680W, so the 550W unit was clearly too small). So I frantically searched for a new PSU and got myself a 750W be quiet! unit. After another hour of routing all the cables and making sure everything was neat and tidy inside my case, I turned on the computer, ran a Lightroom export - and the system crashed once again :(.

This time I was sure the PSU was fine (and probably had been fine all along), so I looked at the graphics card and the RAM for the culprit. I re-installed everything once more to make sure all the leads were connected correctly, and the system still froze.

Next up I decided to reset my UEFI BIOS to factory defaults and try again - to no avail. Until I decided to disable the Intel XMP RAM overclocking that was being applied, and voilà - the system is stable under extreme loads! So apparently G.Skill has screwed up somewhere with their Ripjaws 4 3000MHz kits, and the XMP profile configured by my BIOS was causing the crashes.

Turns out: enabling the XMP profile for my RAM did not disable Turbo-Boost. It only overclocked the system to 3.6GHz, but whenever Windows decided the system could use some more horsepower, it instructed the CPU to go into Turbo-Boost, which overclocked the already-overclocked system by another 30%. Needless to say, that was outside the safe range for the RAM, and the system crashed. I noticed this for the first time after disabling the XMP profile and monitoring my RAM under load - it was already running at 3GHz without any overclock settings in the BIOS.

And: having your system randomly freeze and reboot at various steps of your Windows updates also sucks - I am now stuck with a failing Windows .NET Framework update and can't upgrade to Windows 10 :(

Filed under windows, pc

Formatting Date and Times in Rails

When looking at formatting Times and Dates in a certain format, one quickly arrives at strftime for help. The only problem is that strftime will not take into account the translations configured through the Rails I18n gem. So a German representation of 10. Oct 2015 will not render correctly, since October is abbreviated in German as Okt.

A solution would be to define the format pattern through the I18n translations inside the YAML files in config/locales/ - but often you have formats that are used just this once, so externalizing them is not really a good solution.

Looking at the I18n gem, the solution is quite easy: I18n.localize(<date>, format: '%d. %b %Y') yields the appropriate result while passing in the format inline. So just replacing the date.strftime(<pattern>) call with I18n.localize(date, format: <pattern>) is the way to go.

You can also pass in the locale you want it formatted in using the locale: :de parameter to localize.
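A quick sketch of the difference (assuming the de locale ships the usual abbreviated month names in config/locales/de.yml):

date = Date.new(2015, 10, 10)

date.strftime('%d. %b %Y')
# => "10. Oct 2015" - ignores I18n, always English

I18n.localize(date, format: '%d. %b %Y', locale: :de)
# => "10. Okt 2015" - %b resolves through the I18n translations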

Hope this helps.

Filed under rails3, rails, i18n

Ruby Time.strftime pads %e with whitespace

I just ran into this quick issue, so I thought I'd share it.

When trying to create a date format for Rails' I18n.l you are at the mercy of Time.strftime, and my format was supposed to look like this: "%e. %b %Y" - the quite common 1. Jul 2015. According to the excellent http://www.foragoodstrftime.com/, %e should give me the non-zero-padded day of the month - but my tests still failed because of a leading whitespace on days < 10: %e pads with a blank instead of a zero.

Looking at the reference I noticed that strftime allows flags for each directive that modify how padding and other things work. The solution to getting rid of the whitespace was to change the format to this: %-e. %b %Y.

Here is the list of flags from the documentation:

-  don't pad a numerical output
_  use spaces for padding
0  use zeros for padding
^  upcase the result string
#  change case
:  use colons for %z

The flags just go after the % and before the format directive. A quick check of both variants:
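t = Time.new(2015, 7, 1)

t.strftime("%e. %b %Y")
# => " 1. Jul 2015" - blank-padded, note the leading space

t.strftime("%-e. %b %Y")
# => "1. Jul 2015" - the "-" flag disables the padding

Hope this helps.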

Filed under ruby, i18n, rails

Razer Black Widow Ultimate Review

I have been putting this review off for a very, very long time since purchasing the Razer Black Widow Ultimate (in fact it's been almost 3 years since I got mine), but since friends keep asking about the keyboard, I thought I could save myself a few keystrokes here.

So, short and sweet: Is it any good?

YES

I don't know how many times this has already been said (see Jeff Atwood for example), but keyboards matter. And having a great keyboard is one of the most important things to me personally.

So after 6 worn-out Microsoft Natural 4000 keyboards and some intermediate Razer and Logitech keyboards, I decided to bite the bullet and jump on that new "mechanical" keyboard wagon to test it out, and got the Razer Black Widow Ultimate. And god, this thing changed my life!

When you first type on it (or any mechanical keyboard for that matter), there is this "HOLY CRAP" moment when you remember how typing felt back on those IBM keyboards in your youth. The keys travel perfectly uniformly, with exactly the right amount of pressure and a satisfying click at the end. Let's just say the typing is sublime. It's just plain better than conventional keyboards - period.

Now that we have established that you need a mechanical keyboard: do you need the Razer Black Widow?

Yes, no and maybe. I love Razer products, I swear by my Razer mouse, and their keyboards have always served me well before. So I would say the Razer Black Widow is a well-built, solid and great-looking keyboard you want to buy. But: don't buy it for its gaming features. Buy it for the looks, the build quality and the switches.

Why not for the gaming features? Because gaming keyboards are a lie - gaming keyboards are the equivalent of 3D TVs, just a marketing gimmick to extract money from you. You don't want an extra row of macro buttons, because you don't need an extra row of macro buttons. That's like putting a second handle on a door - everything you need out of a keyboard is already there: on or near the WASD keys. No game on this earth expects its players to have a macro-recording super-duper keyboard, so all games are designed to work well with a standard keyboard. I have yet to find a game where I could not remap the keys in the interface, or had to perform keyboard input so weird that I had to use these extra keys - EVER.

The second lie with gaming keyboards is their anti-ghosting technology. Again: you ain't gonna need it. Yes, the keyboard may accept more than 4 inputs at the same time, but I have never felt that this was a problem with other keyboards that lacked it. The times when you played multiplayer games by having 2 people use the same keyboard are gone, and for everything else you will never hit any limits, even with a 10€ keyboard.

The third lie is the ultra-fast 1ms response time. Who are we kidding? There is no noticeable keyboard delay on regular keyboards, so any improvement on already unnoticeable lag is just snake oil. But heck, it sure sounds like that's the only thing holding you back in multiplayer games.

Now that we have established that I love my Razer Black Widow but think all the gaming features they market it with are crap, I also have to express my frustration with the Ultimate version of the keyboard.

When I bought it, you could get the Razer Black Widow for around 80€ and the Black Widow Ultimate for 120€. I went for the Ultimate edition because it has backlight illumination, and I liked that. It also has an additional USB port and an audio/mic pass-through. In theory this means you could connect your headset to the keyboard, avoiding problems with cable length etc. The reality is just frustrating: brainless monkeys designed this feature! They put the ports on the right side of the keyboard - right where my mousepad starts! What on earth were they thinking? I am supposed to have cables and USB sticks on my mousepad? Like there is no fucking space anywhere around the keyboard! Actually, there is exactly the same amount of space unoccupied on the left side of the keyboard, and the whole back of the keyboard is empty. I have seen other keyboards solve this way better - some even had grooves on the bottom to route your headset cable below the keyboard so it isn't in your way. And Razer designed theirs so the whole point of the cables is to be in your way.

So in closing: you want this keyboard - it's great. Just make sure you really, really want to pay 40€ extra for the illumination, because the rest of the "Ultimate" package is just crap.

Filed under hardware, tools, review

The story of NginX, Facebook and ipv6

I just released my professional photography website to the public and was quite content with the setup. The site is running Wordpress on php5-fpm proxied through NginX. I optimized the hell out of the site using w3-total-cache, and the NginX + php5-fpm setup delivers superb performance.

Only Google and Facebook were giving me a hard time with site verifications and other checks to see if tracking codes were correctly embedded. After digging a bit I noticed that the Facebook linter was only seeing "Welcome to nginx!", the default site set up on the server.

So I started taking apart my NginX configuration and testing different things, and even though I could access the site correctly using Chrome, on other computers it would sometimes still show the default page. I was puzzled, to say the least. Chrome also makes it exceptionally hard to debug these problems by being too smart: I had deliberately set up the site to only be available without www and was planning on configuring the 301 redirect, but somehow forgot to - turns out Google did it all by itself and never told me about it. So there I was thinking the site 301-redirects, while people with certain browsers ended up seeing the NginX default page.

Once I realized that curl on my server was also only returning the default page, it started to dawn on me. I had set up an AAAA record by default, and NginX was listening for IPv6 traffic, but the photography host was not configured to listen for it. So any requests that came in over IPv6 were hitting the default_server, not the actual host. Once I added the listen [::]:80; ## listen for ipv6 line to my NginX host configuration (see the sketch below), everything started to work as expected and Facebook started to see the page.
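Roughly what the relevant part of the vhost looks like after the fix (server name and paths are placeholders, not my actual config):

server {
    listen 80;        # IPv4
    listen [::]:80;   # IPv6 - without this line, v6 requests fall through
                      # to the default_server and its "Welcome to nginx!" page
    server_name example.com;

    # ... the usual php5-fpm proxy setup for Wordpress goes here
}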

So, lesson learned: Facebook tries IPv6 if possible, and if your server has an IPv6 DNS record but is not configured correctly, users will still see your site (because browsers are smart), but crawlers may miss it. So always check v6 connectivity when launching a new site.

Filed under server, nginx, facebook, ipv6, network
