Ventures of an ex-indie game developer

The drawback of gut feelings

Here is a pretty long post on why gut feeling, even when it comes from prominent people, is sometimes totally off.

Generally I encourage extensive use of gut feeling, but I mostly try to back up my own standpoint with evidence. Since I spend my days in a highly theoretical environment (at my day job, the department is 20 % PhDs, 20 % engineering physicists, 7 % ex-scientists; the rest mainly have some kind of master's degree, and then there's me with none), my co-workers and I are prone to vivid discussions about how things and people really are, function, and should be. No real surprise there.

Yesterday I committed a major change to our source code repository. The commit was a test to see if I could find the root cause of some elusive crashes we have had lately. The change was made mainly with the built-in search-and-replace function of Visual Studio (which, I know, is generally not a great coding practice). But the results were great: the crashes vanished!

So how should we proceed from here? We had a meeting (including the mandatory theoretical discussion) where people mainly considered three different paths forward:

  1. Use my previous commit as the platform: go through all the changes looking for problems. The pragmatic approach – use a working version as the basis.
  2. Revert the commit, and then apply the changes in small, manual steps instead, interface by interface.
  3. Revert the commit, and then apply the changes in small, manual steps, back and forth to “zoom in” in an attempt to “binary search” for “the bug” (roughly as sketched below).
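To make the third option concrete, here is a minimal sketch in Python of what such a bisection boils down to. It is an illustration only, under stated assumptions: apply_fix() and tests_pass() are hypothetical stand-ins for “re-apply the manual edits to this subset of files” and “run the full automatic test suite”, and the loop quietly assumes there is a single culprit.

    def bisect_fix(changed_files, apply_fix, tests_pass):
        """Narrow down which single change makes the crashes go away.

        Assumes exactly one change is responsible for the fix.
        """
        candidates = list(changed_files)
        while len(candidates) > 1:
            half = candidates[: len(candidates) // 2]
            apply_fix(half)           # re-apply the fix to one half only
            if tests_pass():          # one full 13-hour test run per step
                candidates = half     # the "fixing" change is in this half
            else:
                candidates = candidates[len(candidates) // 2 :]
        return candidates[0]

Every pass through that loop costs one complete test run, and the answer is only meaningful if exactly one change is responsible for the crashes – which is precisely my first objection below.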

I was in favor of the first idea. Test is king. The third path was seriously discussed for quite some time, even though it was based entirely on a gut feeling from some of the (more influential) people. They argued that finding “the bug” could be of great benefit to us in the future.

The above gut feeling tells people several things, all of which are bad in my opinion:

  1. “The bug”!!! What if more than one bug is causing our crashes?
  2. Binary search? Using svn commit/revert on large chunks of code is medieval – not to mention that it steals valuable development time from others who are trying to push new code into the same parts of the codebase, parts that might get committed and reverted many times over.
  3. Running the automatic tests (which detect whether a bug is fixed or not) takes 13 hours. 13! A binary search for, say, three bugs, with manual changes across 1500 files and a 13-hour delay between each test run, might become tedious (see the numbers after this list). Or is it just me?
  4. What, exactly, might one learn from pinpointing flaws that so much time and effort has already gone into finding? Especially since we know which class of bugs they are (my search-and-replace fix caught them)? On a policy level we can already say “don't do it like this” – is there anything else to learn from finding a few individual bugs? Have you ever gained any higher wisdom from fixing a crash bug when you already knew which class it belonged to?
  5. Reading and correcting code manually sucks! What happened to using tests to verify results (and trusting them)? Has some law of nature changed? Because last time I checked, automatic tests were proven to be more effective – proven as in evidence-based.
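A back-of-the-envelope calculation on point 3, just to put numbers on it. The 13 hours and 1500 files are from the situation above; the three bugs are the same hypothetical as in the list, and the rest is plain arithmetic:

    import math

    changed_files = 1500       # files touched by the search-and-replace commit
    hours_per_run = 13         # one full automatic test run
    bugs = 3                   # say the crashes have three separate causes

    # A binary search over the changes needs roughly log2(N) test runs per bug.
    runs_per_bug = math.ceil(math.log2(changed_files))
    total_hours = bugs * runs_per_bug * hours_per_run

    print(runs_per_bug)                # 11 test runs per bug
    print(total_hours)                 # 429 hours in total
    print(round(total_hours / 24, 1))  # roughly 17.9 days of wall-clock testing

Call it close to three weeks of waiting on test runs alone, before counting the manual editing in between.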

When your gut feeling tells you to do something that will cost the company ten times as much with no apparent gain, always fall back on logic instead.

About the author

Gothenburg, Sweden