Fetcher moved to GitHub

Posted by Luke Francl
on Sunday, February 15

A quick FYI for those who have been using the Fetcher plugin that we wrote (and use on FanChatter Events)...

I have moved the Fetcher plugin repository to GitHub. You can get it at git://github.com/look/fetcher.git

Happy forking!

(And a shameless plug for Mike Mondragon’s and my book: if you need more details about how to make your app speak email, look no further than Receiving Email with Ruby!)

Rescuing autotest from a conflicting plugin

Posted by Jon
on Saturday, February 14

For the longest time, I wasn’t able to run autotest on one of my projects. That was OK; I was intrigued by autotest, but had never really committed to it. The problem: whenever I would try to run autotest, I’d get the following error:


loading autotest/rails_rspec
Autotest style autotest/rails_rspec doesn't seem to exist. Aborting.

I’m running Shoulda, not RSpec, so I had no idea why this was happening. I tried installing (and uninstalling) RSpec in various configurations, to no avail. Nothing worked.

Then I started a new project. Autotest worked just fine on it. After a few days, I got used to autotest, and a few days later, I came to really like it. It helps me get into a TDD “flow” – all tests pass; write failing tests; write code; all tests pass.

So when I came back to my previous project where autotest didn’t work, I decided to dig deeper. Eventually I found a plugin that was causing the problem: acts-as-taggable-on. The plugin was written to allow autotesting, as explained in a blog post. Supposedly, this runs as a separate autotest instance from your app’s main one, but it wasn’t working that way for me.
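
For context, autotest builds the testing “style” it loads by combining whatever its discovery hooks return, so “rails” plus “rspec” becomes autotest/rails_rspec – exactly the style named in the error above. I haven’t kept the offending file around, but a discover.rb along the lines of this rough sketch would produce that behavior:

# lib/discover.rb: a rough reconstruction, not the plugin's exact code.
# Each discovery block registers a testing style name with autotest.
# Combined with the "rails" style discovered from the app itself, this
# makes autotest try to load autotest/rails_rspec, even in a Shoulda project.
Autotest.add_discovery { "rspec" }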

The fix? Delete lib/discover.rb from the acts-as-taggable-on plugin. That’s it – autotest works now.

In the end, maybe I could have solved the problem by getting RSpec configured properly, but just installing the gem locally didn’t do the trick for me, and I don’t want to add any code to my app to support autotesting of a plugin that I never want to test.

So should plugins even ship with test code? Yes, they should. Not for normal use; I never run plugin tests, assuming instead that the plugin is tested by its author. But if an open source plugin ships without tests, it’s that much harder for other developers to fork, fix, and improve it. Really, though, that’s about the only reason for plugin/gem tests, and they should never interfere with the application’s own tests.

Maybe there's more than one way to develop good software?

Posted by Luke Francl
on Sunday, February 08

I’ve been following the recent blow up on the topic of test driven development with interest (perhaps a vested interest).

Basically what happened is Joel Spolsky said requiring 100% test coverage was doctrinaire, and maybe mangled some testing gurus’ ideas a little bit. That made them mad, and there was a big harrumph about testing on blogs, Hacker News, and elsewhere.

Other people jumped into the discussion. Jay Fields offered a very well-thought-out post with a key point: if it hurts, you’re doing it wrong. Paul Buchheit said “unit tests are 20% useful engineering, and 80% fad”, and Bill Moorier, the VP of Justin.TV, posted a followup discussing his dislike of unit tests.

Some people are trying to rip the test skeptics a new one. Giles Bowkett says it’s proof that the mainstream will never catch up, because they’re so totally clueless.

I find this all bemusing.

Here’s the thing.

Joel Spolsky created the original version of FogBugz, a pretty sweet piece of software. Paul Buchheit created GMail, a product that millions of people use and love every day. Justin.TV is also a very innovative product that’s popularized a new form of expression. Justin.TV is so “mainstream” that a guy committed suicide on it.

The testing advocates have a similarly impressive list of great software.

Maybe there’s more than one way to develop good software?

I tried to make this point explicitly in my Testing is Overrated talk: one of the reasons why test-driven design works is that it makes you think deeply about the problem you’re trying to solve. But it’s not the only way to think deeply about a problem! And it’s certainly not a guarantee for building successful software.

Pivotal Tracker > bug trackers

Posted by Jon
on Monday, February 02

I’ve used just about every defect tracking system there is. That includes Trac, FogBugz, Lighthouse, a Rails-based Trac clone whose name I forget, spreadsheets, note cards, and even Bugzilla. I haven’t tried Mingle, and I only used Jira for a few days, but I’ve got most of the other bases covered. Last year at Tumblon, we settled on FogBugz as less painful than the other options. It’s a pretty good defect tracking system, with most of the right features, and it’s reasonably easy to use.

A month ago, I tried Pivotal Tracker for a new project. It blew everything else away.

I think there are two reasons for this.

First, it’s a story tracker, not a defect tracker. And when I’m building software, I need to track stories more than defects. Of course, Pivotal Tracker handles both, and so do most bug trackers. But bug trackers usually seem natural when I’m using them to track bugs, and unnatural when I’m trying to map out new development. Pivotal Tracker feels natural for both.

Second, it prevents sabotage. This is the real key. Software development projects are hard to get right. It’s really easy to unintentionally sabotage a software development project, and everyone on the team can do it. That’s why there’s a huge publishing and training industry around project management, and why most of us are interested in the software development process. (Things like Agile, XP, and Scrum are probably interesting to you and me, while CMM and CMMI are interesting to other people.)

Basically, Pivotal Tracker enforces a lightweight agile process, and makes this painless. If you use Tracker properly, you’ll write relatively atomic stories, estimate their difficulty, prioritize them, step each one through a simple workflow from new to accept/reject, track weekly or bi-weekly iterations, and see where you’ve come from and where you’re going. You can do this with most bug trackers, but most of them are missing one important thing.

Constrained velocity.

Pivotal Tracker requires that every feature get an estimate on a simple scale – 0-3 is the default. These are relative velocity points, not hours, and they give you an idea of how much you can accomplish in an iteration. After your first iteration, Tracker uses the average velocity of the previous few weeks as the predictor of your velocity for the current iteration. So if you did 17, 15, and 20 points over the last three weeks, Tracker thinks you can do 17 points next week. Chances are, this is a decent guess.
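
To make the arithmetic concrete, here’s a back-of-the-envelope sketch of the projection (my own illustration, not Tracker’s actual code): it’s just an average of the recent iterations.

# Rough sketch of Tracker-style velocity projection (my own illustration,
# not Pivotal's implementation). The next iteration's capacity is the
# average of the points completed in the last few iterations.
def projected_velocity(recent_points)
  recent_points.inject(0) { |sum, points| sum + points } / recent_points.size
end

projected_velocity([17, 15, 20])  # => 17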

The Current Iteration and the Backlog are a continuous list in Tracker, and the line between them is based on velocity. If you only have 12 points of work on an iteration, you can’t add stories to the next iteration – you still have capacity on the current iteration, so the stories will show up there. If you have 17 points scheduled for this week, and you try to add 2 more, something has to give. And this is the key.
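
Put another way, the boundary falls wherever the running total of story points crosses the projected velocity. Here’s my own sketch of the idea (not Tracker’s code):

# Split a prioritized list of story estimates into the current iteration
# and the backlog, based on projected velocity. My own illustration of
# the idea, not Pivotal's implementation.
def split_iteration(story_points, velocity)
  total = 0
  current = story_points.take_while { |points| (total += points) <= velocity }
  [current, story_points.drop(current.size)]
end

split_iteration([5, 4, 3, 3, 2, 1], 17)  # => [[5, 4, 3, 3, 2], [1]]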

By managing velocity in this way:

  • The product owner can’t shove an extra 5 points of work on an iteration just because he wants it to get done, at least not without recognizing that it will take more resources (i.e. more programmers).
  • The product owner can’t make every new feature the top priority just because it is new, at least not without clearly seeing that other features will be delayed.
  • Developers are forced to estimate everything. You can’t start/finish/deliver a story until it has been estimated. As long as estimates are optional, they won’t be done consistently.
  • Everyone has a decent idea of how much can be done over time. The velocity estimates aren’t perfect, of course, and that’s just fine. They don’t have to be perfect – they just have to be Good Enough. (For what it’s worth, our velocity over the last 4 weeks was 20, 15, 16, 17 – good enough that we can expect to do around 15-20 points/week.)

There are also some smart touches. For example, you can set the team strength for a given iteration – if half your team is going to be pulled off onto another project next week, or out of town at a conference, set the iteration strength to 50%. Tracker will halve the expected velocity. Or if you have another developer you can pull onto the project, increase the strength to 125%.
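
As a sketch of that idea (again my own illustration, not how Tracker is implemented), team strength is simply a multiplier on the projected velocity:

# Scale a projected velocity by team strength (1.0 = full team).
# My own illustration of the idea, not Pivotal's implementation.
def adjusted_velocity(projected_points, strength)
  (projected_points * strength).round
end

adjusted_velocity(17, 0.5)   # => 9, half the team is unavailable
adjusted_velocity(17, 1.25)  # => 21, an extra developer joins the project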

Of course, Tracker isn’t perfect. It isn’t particularly client (Product Owner) friendly; I’d love to see a dumbed-down interface that is less intimidating to non-technical clients, but still has all of the features that they need (prioritization, Accept/Reject). And I’m not exactly sure where the QA role fits into the process – there is no “Verified by QA” state, so QA either needs to usurp the Accept/Reject role, or needs to be responsible for Delivery (deployment).

I’m also not sure how well it will work on large/long projects. After 4 weeks of work, we’ve added a total of 250 stories to the system, and 100 are still active (unfinished). It’s working fine now. But if we had 6 months of work mapped out in the Icebox, it might be a little hard to find things.

But Pivotal is actively working on improving Tracker, and even if they weren’t, it’s already better than most bug trackers.