
The multiplicative slowdown effect of testing

Rails/Angular developer, Entrepreneur & YC alum. Previously built and sold Clickpass.com. Now O(only)TO at Brojure.com and part-time partner at Entrepreneur First.

I recently read an article by Tom Blomfield which touched on something I’ve felt for a while: that tests can slow you down. Badly.

It took me about eight months to write the first version of Pingpanel, which was not a disastrously long dev time but still far too slow.

Part of what slowed it down was testing. Everything was tested. And I mean Everything. I had unit tests, integration tests and deployment tests and I spoke to anyone and everyone who would listen about how great they were and how solid my code was.

It was solid: rock solid. If something went wrong I knew about it. But like the Segway it was rock solidly the wrong product. I was running more tests than there were user interactions on the site. Not more test-runs; more tests. I had about 500 tests and the site had about 15 questions on it. Bad ratio!

The multiplicative slow-down of test writing

  1. You have to write tests (maybe 30-50% of dev time)
  2. You have to wait for the tests to run (a chunk of time)
  3. You attempt to get things like Spork to speed your tests up (0.3 x a chunk of time)
  4. You have to finish reading all the tabs you opened from Hacker News while your tests were running (many chunks of time)
  5. You start feeling subconsciously disheartened because your velocity has dropped and you know you’re spending too much time reading articles about suspiciously alpha founders
  6. You hear yourself saying things like “probably best to delay Facebook integration, I’ve no idea how we’ll test it”
  7. You’re so bored with your dev cycle you goto (3).
  8. You fear writing new code because of the tests it’ll break and subconsciously switch to refactoring instead
  9. Green dots mean more to you than user interactions
  10. <shame>You realise you’re developing features slower than a big company. </shame>

Even with all of this overhead I still surged on like a skinny white Jonah Lomu.

What brought me to my knees was the realisation that Pingpanel was not a destination site but a white-label service. Even then it was only my futile attempts to test emails coming anywhere other than example.com that finally put me on the ground.

What was the effect of dropping testing?

It sped up my development and it didn’t have much effect on the reliability of the site. Sounds trite but it was true. Occasionally things broke but never badly and not in a way that couldn’t be quickly fixed.

The client site doesn’t have a vast amount of traffic but it certainly gets enough to flush out any issues. There are about 500 visitors and about 50-100 questions and answers each day.

The main effect of dropping the tests is that the site is really good. Little things like adding auto-login from emails, which massively increased question-answering rate, were knocked out in a few days rather than the weeks they’d have taken before.

The site is much, much better today than it would have been had I continued testing. Not only that but I’m several times more enthused about it (a fact not lost on investors or clients).

Why was dropping tests the right thing to do?

TDD/BDD implicitly assumes that the cost of code breakage is higher than the cost of missing or incorrect code. This is not true for most consumer startups where the major risk is building the wrong thing or the right thing in a less-than-usable fashion. If it breaks then in many cases you can just send your users an email or phone them up and say sorry.

Bear in mind that I’m creating social software, not Stripe. If I was doing payments or control code for nuclear power stations that cost ratio would be completely different. The point is that for me, while losing a database would be bad, faulty functionality is no biggie. An unusable site though is a complete killer.

Will I carry on without testing?

No. There are some things that are calling out for tests and I’m just about to bring new team members on board, which will inevitably increase the demand for tests.

However, in order to qualify, any potential test will have to pass one or more of the following finger-in-the-air checks:

  1. The consequence of the broken test should be bad enough that I’d get up in the middle of the night to fix it
  2. The test (including its maintenance) promises to speed up future releases. There are a few places where I’m now wasting unnecessary time doing manual integration testing which could be easily automated
  3. Unit tests should be restricted to code that actually does complex stuff that could easily be broken during refactoring (e.g. making sure that authorisation rules are never accidentally ignored). I won’t be unit testing 4-line controller actions.
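To make check 3 concrete, this is roughly the only kind of unit test I still think earns its keep: pinning down authorisation logic so a refactor can’t quietly ignore it. It’s a minimal RSpec sketch, not Pingpanel code — the QuestionPolicy class and its rules are hypothetical stand-ins.

```ruby
# spec/policies/question_policy_spec.rb
# Hypothetical policy object: plain Ruby, no framework magic.
# This is the sort of "complex stuff" a refactor could silently break.
class QuestionPolicy
  def initialize(user, question)
    @user, @question = user, question
  end

  # Only the question's author or an admin may delete it.
  def destroy?
    @user.admin? || @question.author_id == @user.id
  end
end

RSpec.describe QuestionPolicy do
  let(:owner)    { double("User", id: 1, admin?: false) }
  let(:stranger) { double("User", id: 2, admin?: false) }
  let(:admin)    { double("User", id: 3, admin?: true)  }
  let(:question) { double("Question", author_id: 1) }

  it "lets the author delete their own question" do
    expect(described_class.new(owner, question).destroy?).to be true
  end

  it "stops anyone else deleting it" do
    expect(described_class.new(stranger, question).destroy?).to be false
  end

  it "lets an admin delete anything" do
    expect(described_class.new(admin, question).destroy?).to be true
  end
end
```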

I’m also going to restrict tests to high-cost, high-probability interactions. For example, if I’m testing a signup flow I’ll test that you can sign up, not what happens when you submit an invalid email. I’m interested in verifying that the motorway is open, not the hard shoulder.
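For what it’s worth, here’s the shape of a motorway-only test — a hedged Capybara sketch assuming a vanilla Rails signup form; the path, field labels and welcome copy are made up, not Pingpanel’s real ones.

```ruby
# spec/features/signup_spec.rb
require "rails_helper"

# Happy path only: can a new visitor actually sign up?
# Labels and paths here are illustrative, not the real app's.
RSpec.feature "Signing up" do
  scenario "a visitor creates an account" do
    visit "/signup"

    fill_in "Email",    with: "alice@example.com"
    fill_in "Password", with: "correct horse battery staple"
    click_button "Sign up"

    # The motorway is open. No assertions about invalid emails,
    # duplicate accounts or any other hard-shoulder cases.
    expect(page).to have_content("Welcome")
  end
end
```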

Should you write tests?

Only you can answer that. For what it’s worth here is my advice:

1. Don’t write tests for things where the cost of failure is lower than the cost of writing and maintaining the test. I had tests to make sure that nav elements were highlighted, which was just daft. Don’t underestimate the cost of maintaining and running tests either: even if a test is quick to write, running and maintaining it will take a lot of cumulative time.

2. Try not to write tests for imaginary stuff: the dumb future-developer who accidentally future-deletes the line of code you just wrote or the low-cost edge case user-screwup. Both these things will lead you down the rabbit hole. Keep it to stuff you actually care about and assume you won’t hire dummies.

3. Complement your tests with monitoring. It’s not immediate, but each day I get a Mixpanel email containing all the critical metrics from the previous day. If there aren’t any questions asked on the site, any answers given or any thanks offered, it’s probably worth me double-checking what’s going on. Google Analytics alerts can give you even earlier warnings than this. You could also try giving users your phone number and letting them ring you directly. You’ll soon find out if and when you mess up.
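If you want the same kind of safety net without a third-party tool, the daily pulse only needs a few lines. This is a rough sketch, assuming ActiveRecord models called Question and Answer and an AdminMailer — hypothetical names, not Pingpanel’s actual code.

```ruby
# lib/tasks/daily_pulse.rake
# Hypothetical daily sanity check: did anyone use the site yesterday?
namespace :metrics do
  desc "Email yesterday's vital signs; shout if they flatline"
  task daily_pulse: :environment do
    since     = 24.hours.ago
    questions = Question.where("created_at > ?", since).count
    answers   = Answer.where("created_at > ?", since).count

    if questions.zero? && answers.zero?
      # Silence is the alarm: either nobody cares or something is broken.
      AdminMailer.alert("No questions or answers in the last 24h").deliver_now
    else
      AdminMailer.alert("Questions: #{questions}, Answers: #{answers}").deliver_now
    end
  end
end
```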

I don’t know what the right balance is but what’s reassuring is that no other developers seem to either. The answer is undoubtedly that it’s whatever works for you. Bear in mind though that, like funding, too much testing can kill you just as surely as too little.

