My weekend project: When's your next "fun" birthday?

Posted by Luke Francl
on Monday, March 01

When’s the next time your birthday is going to be on a Friday or Saturday, so you can go out and have fun?

That’s what my little weekend project will tell you.

This site came about because my wife and I figured out that the next time her birthday falls on a Friday, she’ll be 42 years old! Yikes.

This was a fun site to build because I got to play with some JavaScript libraries that I don’t often use, like date.js, mustache.js, and TypeWatch. I also made use of some cool CSS3 features like @font-face with the font Tuffy Bold from Kernest. For the CSS, I used the 1KB CSS Grid based on Geoffrey Grosenbach’s suggestion.

Using acts_as_archive instead of soft delete

Posted by Luke Francl
on Friday, February 26

For the application I am working on right now, the ability to restore content that has been deleted is one of the requirements. A lot of people would just go ahead and add acts_as_paranoid or is_paranoid and be done with it, but I've had trouble with that approach before.

I've been reading a lot about the trouble with "soft deletes" (flagging a record as deleted instead of deleting it). Using a plugin that monkey patches ActiveRecord can go a long way towards fixing these problems, but it's a leaky abstraction and will bite you in the ass in unexpected ways. For example, all your uniqueness validations (and indexes) become much more complicated.
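To make that concrete, here is a minimal sketch of the usual workaround, scoping the uniqueness check to the deleted_at flag (the User model and email column are hypothetical):

class User < ActiveRecord::Base
  # Hypothetical sketch: with a soft-delete flag, a plain
  # validates_uniqueness_of :email would also match "deleted" rows, so the
  # check (and the matching unique index) has to be scoped to deleted_at.
  validates_uniqueness_of :email, :scope => :deleted_at
end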

That's why Jeffrey Chupp decided to kill is_paranoid and Rick Olson doesn't use acts_as_paranoid any more.

There are other problems too. If you delete a lot of records, and you keep them in the same table, your table can get quite large, and all your queries slow down. At this point you have to use partitioning or partial indexes to get acceptable performance.
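As an example of the latter, here is a sketch of a PostgreSQL partial index added from a migration (the table and column names are made up) so queries over live rows don't have to scan the soft-deleted ones:

# Hypothetical sketch: a partial index over non-deleted rows, via raw SQL.
class AddLivePostsIndex < ActiveRecord::Migration
  def self.up
    execute "CREATE INDEX index_posts_on_created_at_live ON posts (created_at) WHERE deleted_at IS NULL"
  end

  def self.down
    execute "DROP INDEX index_posts_on_created_at_live"
  end
end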

Alternatives to soft delete

In my reading, I found two alternatives to soft delete to be compelling.

The first was the suggestion to properly model your domain. Why do you want to delete a record? What does that mean? Udi Dahan puts it this way:

Orders aren’t deleted – they’re cancelled. There may also be fees incurred if the order is canceled too late.

Employees aren’t deleted – they’re fired (or possibly retired). A compensation package often needs to be handled.

Jobs aren’t deleted – they’re filled (or their requisition is revoked).

Keeping that in mind, what if the task at hand really is to delete the record? The other idea that I liked was to archive the records in another table.

The first Rails plugin I came across that implemented this was acts_as_soft_deletable which, besides being misnamed, doesn't appear to be actively maintained. The author even disavows the plugin somewhat for Rails 2.3:

Before using this with a new Rails 2.3 app, you may want to consider using the new default_scope feature (or named_scopes) with a deleted_at flag.

Then I found acts_as_archive which is more recently maintained and used in production for a major Rails website.

There was only one problem -- acts_as_archive didn't support PostgreSQL. Fortunately, that was easy enough to fix.

Restoring deleted records with acts_as_archive

acts_as_archive has the ability to restore a deleted record, but only that record, not associated records.

I was troubled by this at first, but after thinking about it I came to the conclusion that restoring a network of objects is an application-dependent problem. Here's one way to achieve it.

Imagine you have a model like this, with Posts having many Comments and Votes.

Post model

A Post can be deleted, and when it is, it should take the Comments and Votes with it:

class Post < ActiveRecord::Base
  acts_as_archive

  # Destroying a post also destroys (and therefore archives) its votes and comments
  has_many :votes, :dependent => :destroy
  has_many :comments, :dependent => :destroy
end

(Assume Comment and Vote also have acts_as_archive.)

Now, I can restore a Post with its associated Votes and Comments like this:

def self.restore(id)
  transaction do
    Post.restore_all(["id = ?", id])
    post = Post.find(id) # the post is back, so this no longer raises RecordNotFound

    Vote.restore_all(Vote::Archive.all(:conditions => ["post_id = ?", id]).map(&:id))
    Comment.restore_all(Comment::Archive.all(:conditions => ["post_id = ?", id]).map(&:id))
  end
end

In my real code, I've broken apart the two pieces of this into a class method restore and an instance method post_restore which the freshly restored object uses to find its associated records and restore them. post_restore also takes care of post-restore tasks like putting the object back in the Solr index.
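Roughly, that split looks like this (a sketch only, not the production code; the re-indexing step is left as a comment because it depends on your search setup):

class Post < ActiveRecord::Base
  acts_as_archive

  # Class method: restore the post row itself, then let the instance
  # restore its own associated records.
  def self.restore(id)
    transaction do
      restore_all(["id = ?", id])
      find(id).post_restore
    end
  end

  # Instance method: bring back associated records and handle post-restore
  # housekeeping, e.g. putting the record back into the Solr index.
  def post_restore
    Vote.restore_all(Vote::Archive.all(:conditions => ["post_id = ?", id]).map(&:id))
    Comment.restore_all(Comment::Archive.all(:conditions => ["post_id = ?", id]).map(&:id))
  end
end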

This all works great. But now let's say Comments can be deleted individually, and we want to restore them.

Here the logic is a little different, because a Comment can't be restored unless its parent Post still exists (unless it's being restored by the Post, as above).

I take care of this logic in the administrative controller, by only showing child objects that it's valid to restore, and my foreign key constraints prevent anyone from getting around that.

I really wanted to delete that!

Sometimes you don't want to archive a deleted object. For example, in the application I'm working on, votes are canceled by re-voting. I don't want to save those votes -- there's no point, and it can even cause problems with restoring. Imagine having several archived votes from a user for a Post, and then deleting and restoring that Post. The restoration will try to bring back all the votes. Again, I catch this with a uniqueness constraint, but I don't want it to happen in the first place.

Fortunately acts_as_archive has me covered.

To destroy a record without archiving it, you can use destroy!. Likewise for deleting, there is delete_all!.
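In the re-voting case above, that might look something like this (the cancel method and the dynamic finder are hypothetical):

class Vote < ActiveRecord::Base
  acts_as_archive
  belongs_to :post

  # Hypothetical sketch: a cancelled vote is removed without being
  # archived, so restoring its Post later won't resurrect it.
  def self.cancel(post, user)
    vote = find_by_post_id_and_user_id(post.id, user.id)
    vote.destroy! if vote # destroy! skips the archive table
  end
end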

Bundler and I are breaking up

Posted by Luke Francl
on Thursday, February 18

Bundler may be the future, but after way too many hours of trying to get my app working with Rails 2.3.5, bundler 0.9.x, and Heroku I have decided to throw in the towel and switch back to Heroku’s gem manifest system.

I had Bundler 0.8 working very nicely but for whatever reason I couldn’t get the gems to play nice with each other in the new version. I had the app working locally and the tests passing, but on Heroku the app wouldn’t boot. This could have something to do with Heroku running Bundler 0.9.5 while I was running 0.9.7 locally. Whatever the reason, I’ve decided to take a break from bundler and wait until its development stabilizes a bit—at least on Heroku.

If you’re in the same boat, you can use this script to convert your Gemfile back to a .gems file and config.gem statements.
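The idea behind such a script is simple enough. Here is a rough sketch (not the original script) that only handles plain gem "name", "version" lines and ignores groups, :git, and :path options:

# Hypothetical sketch: turn simple Gemfile entries into a Heroku .gems
# manifest and print the matching config.gem statements.
gems = File.readlines("Gemfile").map do |line|
  line =~ /^\s*gem\s+["']([^"']+)["'](?:\s*,\s*["']([^"']+)["'])?/ ? [$1, $2] : nil
end.compact

File.open(".gems", "w") do |manifest|
  gems.each do |name, version|
    manifest.puts(version ? "#{name} --version #{version}" : name)
  end
end

gems.each do |name, version|
  puts(version ? "config.gem '#{name}', :version => '#{version}'" : "config.gem '#{name}'")
end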

Rake task for deploying to Heroku

Posted by Luke Francl
on Friday, February 12

Deploying to Heroku is pretty easy, but I’ve often found myself needing to do additional tasks after pushing to Heroku’s git repository. For example, if you have new migrations, you have to run them after pushing; and after migrating, you have to restart the app server.

So here is a Rake task to automate that. It uses Heroku’s client library to find the git remotes you need to push to. Use it like this:

rake deploy # deploys to your default app for this directory

rake deploy APP=some-other-app # deploy to another app (e.g., a staging server)
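Here is a hedged sketch of a similar task. Unlike the original, it does not use Heroku's client library to discover remotes; it just assumes a git remote named heroku (or one named after APP) and shells out to the heroku command-line tool:

# Hypothetical sketch: push, migrate, restart.
desc "Deploy to Heroku: git push, migrate, restart"
task :deploy do
  app      = ENV["APP"]
  remote   = app || "heroku"
  app_flag = app ? " --app #{app}" : ""

  sh "git push #{remote} master"
  sh "heroku rake db:migrate#{app_flag}"
  sh "heroku restart#{app_flag}"
end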

Fixing raw HTML error pages from Facebooker

Posted by Luke Francl
on Tuesday, February 02

I am using Facebooker for Facebook Connect with Rails 2.3.5 with the rails_xss plugin, which escapes HTML by default unless you use raw.

I recently started seeing exception pages come back as a wall of escaped, raw HTML instead of the usual error page.

The top of the HTML contains a <fb:fbml> tag, which led me to suspect Facebooker. A quick git bisect confirmed it. But why is it happening?

I spent some time looking through the Facebooker source code and located the suspicious-sounding facebooker_pretty_errors.rb file. Sure enough, that file renders a template for errors that look good on the Facebook Canvas (assuming you’re not using rails_xss anyway…).

Fortunately, it is easy to turn this off by setting this in your facebooker.yml file:

development:
  pretty_errors: false

Now it’s back to normal, and I can read my exceptions again.

Fixing the Heroku "Too many authentication failures for git" problem

Posted by Luke Francl
on Sunday, January 31

Getting an error like this when you push to Heroku?

electricsheep:herokuapp look$ git push heroku master
Received disconnect from 75.101.163.44: 2: Too many authentication failures for git
fatal: The remote end hung up unexpectedly

If you like to create a separate ssh key for each server you use, you're at risk of hitting this.

The reason is that unless you specify which key to use for a host, ssh-agent sends each key in turn until one works. However, some servers configure sshd to reject connections after too many attempted logins. For example, Dreamhost does this (see Dealing with SSH’s key spam problem for details). This is especially annoying if you weren’t even planning to use key-based authentication (as is the case on Heroku).

You can fix this by setting IdentitiesOnly yes in your ~/.ssh/config file. You can do this on a host-by-host basis.

Host foobar.dreamhost.com
        IdentitiesOnly yes

Heroku is a bit harder to handle this way because, as far as I know, it doesn’t have a single IP address or domain you can write a Host entry for.

As a workaround, clear your identities:

ssh-add -D

(Thanks to my friend McClain for his help with the ssh-add command.)

Editing Migrations

Posted by Luke Francl
on Wednesday, January 20

I have a confession to make: when I’m starting out a new project, especially if it’s a small team, I like to edit my migrations.

At the beginning of a project there are always a ton of changes in how models are defined and how they relate to one another. I find it so much easier to edit migrations and keep these initial declarations compact than to write new migrations for every piddling change.

The downside is an increased communication burden: people need to know to run rake db:migrate:reset when migrations change. And, of course, once you’ve got real data in production, you can’t do this.

But at the beginning of a project, I like to edit my migrations.

Y Combinator Interview Advice

Posted by Luke Francl
on Monday, November 09

Paul Graham emailed YC co-founders to share their interview stories for those who were asked to interview for the W10 batch. Here’s my take.

First off, congratulations! You’re probably wondering what to do next, depending on the outcome of the interviews. I’m not going to tell you not to be nervous, because that won’t help. But keep perspective – YC’s not the end-all of the startup world. If you’re dedicated, you can make your company happen (startups did exist before YC, believe it or not). I know one team that got rejected, but decided to move to Silicon Valley anyway. They got funded by a major VC before most of the companies in the Summer 2009 batch!

That said, Y Combinator is an…intense experience and you should try your hardest to get in.

What to expect

I can’t say what to expect better than Paul, so be sure to read Y Combinator’s advice. A number of YC alumni have also shared their stories – check the bottom of that page. I’ll especially call out Michael Young’s experience because he didn’t get accepted…then joined a team that did, so he’s seen both sides.

The interview setup itself is intimidating: you and your co-founders are sitting face-to-face with the Y Combinator partners (Paul, Jessica, Trevor, and Robert) across a ridiculously narrow table. In our interview, that tension was quickly broken as everyone crowded around to see our demo. Paul thinks big, so if he likes your idea, be prepared for him to rattle off about 2 years worth of work for you to do – new features, new markets, a different direction, etc.

However, it’s not all about the idea. Some teams get roughed up in the interview and are surprised to be accepted. The Y Combinator partners are looking for teams to fund. Lots of YC startups end up doing a totally different thing before Demo Day.

After the interview, be prepared for some of the longest hours of your life as you wait for the email (rejected) or phone call (accepted).

Getting Ready

“What are you going to do?” This is the number one question to have an answer for. One sentence. Two tops. I made my co-founder repeat our answer ad nauseam for practice. You know how hard it was to boil your idea down into the 1 minute video for your application. Now it’s time to distill it even further. This is your first elevator pitch. Also be ready to talk about your competitors and how you’re going to make money.

Your demo. You are going to show your demo. You’ve got some time: polish it up! Fix rough edges, improve the UI, add the cool new feature you’ve been thinking about, test the critical paths. Could you sign up a paying customer before the interview? That’s impressive.

I was in charge of giving the demo. I decided to do it as a series of browser tabs showing different features, because then I wouldn’t have to worry about the internet connection or anything breaking. I practiced it relentlessly and got it down to about a 2 minute spiel. When I actually showed the YC partners, I got interrupted and had to explain things here and there but I knew the material and was able to carry on.

Mock interviews. We set up some mock interviews with entrepreneurs to practice. We wanted people who had run successful startups to test us and see where the weaknesses in our company and idea were.

Scott Wheeler of Direct Edge writes about something similar they did:

We brainstormed a big list of questions that I can’t find anymore that we thought might come up and talked through answers to all of them. We came up with a list of points that we wanted to be sure to mention and even practiced transitions from other topics to those.

All of that, however, turned out to be useless.

Our mock interviews turned out nothing like the real one. But I disagree with Scott. It wasn’t useless, because it made us more prepared.

If you’re prepared, you’ll be more relaxed, and can focus on presenting your idea.

Talk to alumni. You probably know some YC alumni. Email them and try to set up a phone call about what to expect. I think you’ll find most will be happy to give you some time to ask questions.

Good luck

That’s it for now. Maybe someday I’ll tell the full story of our YC interview (which includes nearly missing it due to a Murphy-worthy series of screwups) but for now I wanted to get the solid advice out of the way.

Not everyone is going to get in, but if you focus on “What are you going to do?” and getting your demo down cold, you can maximize your odds.

Let a human test your app, not (just) unit tests

Posted by Jon
on Thursday, October 29

I’m a big believer in unit testing. We unit test our Rails apps extensively, and we’ve done so for years. On some projects, we do both unit testing and integration testing using Cucumber. I preach unit testing to everyone I can. I’d probably turn down a project if the client wouldn’t let us write tests (though this has never come up, and I don’t think it would be a hard sell).

But for a long time, that’s all I did on my projects. Our clients and users would find the bugs that got past the developers. They were, in effect, our QA testers. (I think a lot of small/agile teams are the same way; based on my experience, I’d be surprised if more than 20% of Rails projects were comprehensively tested by a human.)

This is not right. A good QA tester is worth the surprisingly modest expense.

If I unit test, do I really need to hire a QA tester?

Keep on writing unit tests. But unit tests and human testing are two completely different things. They both aim to increase code quality and decrease bugs, but they do this in different ways.

Developer (unit) testing has three benefits. It:

  • Makes refactoring possible. Don’t even try to refactor a large app without a test suite.
  • Speeds up development. I know there are some haters who deny this, but they’ve either never really given unit testing a chance, or their experience has been 180° different than mine.
  • Eliminates some bugs. Not all, but some.

Human testing has related, but somewhat different, benefits. It:

  • Eliminates other bugs. Unit tests are great for certain categories of bugs, but not for others. When a human walks through an application with the express purpose of making things break, they’re going to find things that developer-written unit tests won’t find.
  • Acts as a “practice run”. Before letting a client, boss, or user see a change, let a QA tester see it. You’d be surprised how many 500 errors and IE incompatibilities you can avoid.
  • Gives you confidence before you deploy. After working with good QA testers, I can’t imagine deploying an app to production without having a QA tester walk through it.
  • Saves you time. If you don’t have a QA role on your project, your developers will be de facto testers. They probably won’t do a good job at this, since they’ll be hoping things succeed (rather than making them fail). And their time is probably more expensive than a good tester’s time.

How to use a QA tester in an agile project

Agile testers should do four things.

First, they should verify or reject each story that is completed. Every time a developer indicates that a feature or bug is completed, whether you use a story tracker or index cards, a QA tester should verify this. Don’t deploy to production until the tester gives it a thumbs-up.

Second, they should do exploratory testing after every deploy. A few minutes clicking around in production can sniff out a lot of potential errors.

Third, they should test edge cases. What happens if a user types in a username that is 300 characters long? What if they try to delete an item that is still processing? What if they upload a PDF file as an avatar? Testers are great at this sort of thing.

Fourth, they should test integrations. Unit tests can’t (and shouldn’t) test multi-step processes. Integration testing tools like Cucumber are OK, but don’t catch everything. Identify the main multi-step processes on your site, and have a human verify them every time they change.

Expect a tester to increase your development costs by 5%-10%. We find that 1 hour of testing for every 6 hours of developer time is a reasonable estimate. Our testers cost about 40% less than our developers. So on a typical invoice, testing services are about 10% of development services.

Bill separately for testing. Don’t just roll it into your developer rate. Clients are more likely to object to a 10% increase in your main hourly rate than a separate, lower testing line item.

Finding a good tester

There are two main ways to find a tester.

First, you can train one. Tech-savvy folks who aren’t programmers are a good option. They understand enough to fit in with your development process, but are happy testing and not coding. If you find the right person, they can be testing in no time, and won’t cost a ton of money.

Second, find one that understands agile development. There are plenty of professional testers out there, but most of them do waterfall testing: spend 3 weeks writing test cases, get a release from developers, and spend 3 weeks testing. I can say, without hyperbole, that this is how exactly 0% of Rails development projects work. Look for the small number of testers who actually have experience with iterative development, flexible scope, and rapid turnaround. You can sometimes find these people at agile events (conferences or user groups). Otherwise, ask other developers. I found one via referral, and I’ve since referred him to others. This second category will probably be more expensive than the first, but if you want to ship the best code you can, go this route. Just make sure you avoid a Zompire Dracularius.

Building a Video Delivery Network in 48 hours

Posted by Jon
on Friday, August 28

Last weekend, I participated in my first Rails Rumble. Rails Rumble is a 48-hour app building contest. We started from scratch Friday evening – you can have concepts and notes on paper, but no code or digital UI assets – and stopped Sunday evening, after 48 hours. You can use open-source code and public web services, and we made liberal use of both.

Our team consisted of myself and three of the Sevenwire crew: @fowlduck, @brandonarbini, and @steveheffernan. That’s two developers (Nate and myself), one developer/UI combo (Brandon), and one UI guy (Steve). All in all, a really good mix for the app. We’re also the team behind two video encoding services: Zencoder and FlixCloud.

Check out our app (and the 21 other great finalists) and vote at http://r09.railsrumble.com/entries. Voting ends this weekend, so do it soon.

The App

Our project was ZenVDN, a video distribution network. In other words, a place to upload video that you want to publish, e.g., via your blog or website. Upload one or more videos, and they’re transcoded into web and mobile formats, and sent to a Content Delivery Network for distribution.

After that, you’re given a page to manage each video, with HTML embed code to plug the video directly into your blog or website. You can also link directly to the videos, if you want to use your own player. And finally, each video has a public page on the ZenVDN site if you want to share the video directly.

So it’s a complete start-to-finish video publishing platform. Let’s say you’re Ryan Bates of RailsCasts. You can compress, upload, and host your own video files manually, or you can use a service like ZenVDN to do that for you. (I emailed Ryan about this, by the way, and he prefers the manual route. ;)

Another way to look at it: a better YouTube for video publishers. YouTube and its peers were designed for wide-scale video sharing, not for video producers and content owners. If you don’t mind YouTube’s quality and watermark, and you don’t mind your video being shared publicly on YouTube, ZenVDN probably isn’t for you. But if you want better quality and to own distribution of your videos, check us out.

What’s cool? A multi-file uploader with progress; direct uploads to our CDN, for speed and scalability; video watermarking; video thumbnails; wide input video support; a Flash Player integrated into the embed code; and detailed statistics (by video, by date, by format).

What’s missing? Again, it’s a working end-to-end product, but we’d like to do a lot more. Examples: Ogg support (for HTML 5), an RSS feed for videos, more public/sales information, and better privacy controls.

And, of course, paid subscriptions. We hoped to get e-commerce done during the Rumble, probably using Spreedly, but we ran out of time. Maybe in a 72 hour Rumble. In the meantime, our Free level limits the number of videos you can upload, and the amount of video you can stream. A paid level would increase these limits and let you use your own watermark (instead of the ZenVDN watermark on free accounts).

All in all, we’re really happy with where we ended up. I’m proud to say that Obie briefly questioned whether we could build the whole thing in a weekend. That’s praise.

The experience

The Rumble was way more fun than I expected. I had just worked a hard week, and a part of me was dreading the prospect of a long weekend of work. But it was actually a blast.

Why? Development flow, I think. Development flow is a really fun experience. It’s probably why most of us are developers, after all. We like to build things; we like to solve problems; and we like to work effectively. I’ll sometimes go days, or even weeks, without experiencing concentrated development flow like I did during the Rumble. (Stupid meetings.) So the Rumble was a really great experience.

Our team really clicked. We had a great mix of skills: across the four of us, we had one designer, two front-end coders, and three back-end coders. Besides the initial design concepts, every task could have been handled by more than one person, so tasks rarely sat in the queue for long.

We tried really hard to avoid rushing at the end. We stopped development with 3 hours to go, and two of us started testing, while two others recorded the screencast for the homepage. But it didn’t work out quite so smoothly. The screencast wasn’t done until about T-30 minutes, and we were checking in fixes and refinements until about 6:45. Then a minor Git snafu, and panic ensued. Our final submission came down to the wire.

Finally: sleep and breaks. Call me weak, but I like to sleep. I got 8 hours/night during the Rumble, which definitely improved my experience (and the quality of my code). We ate lunch at our desks, but took a 90-minute dinner break on Saturday, and stopped several times for a game of darts.

Lessons learned

1. Blitzes can be fun and effective. I’m inclined to try a Rumble-like iteration every few months, to avoid project monotony, and to ship stuff quickly when necessary. I did ~3-4 days of work during the Rumble, so I figure a 3 day Rumble plus 2 days of vacation evens out to about a week of work.

2. Focus is essential. If I’d had three 30-minute meetings during the Rumble, my contribution would have been cut in half. A good reminder of the maker’s schedule.

3. Don’t rush at the end. We left three hours for testing and padding. We should have left six.

4. Prioritize well. If we had tackled e-commerce on Day 2 (as I almost did), we wouldn’t have finished our core product. Build the Minimum Viable Product first, and then move on to concentric circles of improvement.

5. Small projects can work. I have a bias against small projects; 3 month gigs feel so much more comfortable to me as a consultant than 3 week gigs. But done properly, shorter projects can work fine. We did ~$15,000 worth of work over the course of the weekend. No reason that experience couldn’t translate into a client project.

Next steps

So what’s next for ZenVDN? We’d really like to get a few video publishers using it. (Talk to me if you want to be a beta customer.)

And we want to monetize the site, of course.

We think it complements our suite of video-related products well – Zencoder is the core software; FlixCloud makes it an easy web service; and ZenVDN brings video publishing one step closer to the producers.

We have some other ideas for ZenVDN. But if you have an interest in online video, or are a publisher/producer yourself, we’d love to talk more!

ActiveRecord referential integrity is broken. Let's fix it!

Posted by Jon
on Tuesday, August 18

ActiveRecord supports cascading deletes to preserve referential integrity:

class User < ActiveRecord::Base
  has_many :posts, :dependent => :destroy
end

But you really only want cascading deletes about half the time. The other half, you want to actually restrict deletion of a record with dependencies. ActiveRecord doesn’t support this.

Think of an e-commerce system where a user has many orders. Once an order has gone through, you shouldn’t be able to delete the user who placed the order. You need a record of the order and the user who placed it.

Or, even more obviously, think of a lookup table. An Order might have several of these dependencies: OrderStatus, Currency, DiscountLevel, etc. In all of these cases, you want ON DELETE restrict, not ON DELETE cascade. But Rails doesn’t support this. That’s dumb.

If you agree, head on over to the Rails UserVoice site and make your opinion known! There is a ticket for this already. Vote it up if you think Rails should implement this.

The solution to the problem is really pretty simple. ActiveRecord just needs something like this:

class User < ActiveRecord::Base
  has_many :posts, :dependent => :restrict
end

In this case, if you try to destroy a user that has one or more posts, Rails should complain. You’ve told the app: “Don’t let me delete users who have posts!” The easiest way to do this is to have Rails throw an exception, and have your controller capture the exception and print a flash message. Other approaches could work too.
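Until something like that exists in Rails, one hedged approximation is a plain before_destroy callback that raises when dependent records exist (model names from the example above):

class User < ActiveRecord::Base
  has_many :posts

  # Hypothetical sketch: refuse to destroy a user who still has posts,
  # roughly what :dependent => :restrict would do.
  before_destroy :ensure_no_posts

  private

  def ensure_no_posts
    raise ActiveRecord::ActiveRecordError, "Cannot delete a user who has posts" unless posts.empty?
  end
end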

So why is this important?

1. It’s common. Every project should maintain referential integrity in some way, and :dependent => :destroy isn’t always appropriate. Who wants to do a cascading delete from roles to users, or manufacturers to products, or order_statuses to orders? I don’t think I’ve ever worked on a project where cascading deletes were always appropriate. Any lookup table, at minimum, needs this feature. (I personally prefer to maintain referential integrity with foreign keys, as in the migration sketch after this list, but even still, I’d love to have an application-level check first, which would be easier to rescue. And some projects don’t use foreign keys.)

2. It fits with the Rails philosophy. Rails says “Let your application handle referential integrity, not the database”. But without :dependent => :restrict, one of the most important pieces of referential integrity is missing.

3. It’s easy. 9 lines of code to add this to has_many. Check out this gist: http://gist.github.com/170059.
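As point 1 mentions, you can also enforce this at the database level. Here is a sketch of a Rails 2.x migration doing it with raw SQL (PostgreSQL syntax; the table and constraint names are made up):

# Hypothetical sketch: a foreign key with ON DELETE RESTRICT, added via
# execute since these migrations have no built-in foreign key support.
class AddUserForeignKeyToOrders < ActiveRecord::Migration
  def self.up
    execute "ALTER TABLE orders ADD CONSTRAINT fk_orders_user_id " \
            "FOREIGN KEY (user_id) REFERENCES users (id) ON DELETE RESTRICT"
  end

  def self.down
    execute "ALTER TABLE orders DROP CONSTRAINT fk_orders_user_id"
  end
end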

Someone wrote a plugin for this, but it has the distinct disadvantage of not working anymore. This should really be a core feature anyway, at least as long as :dependent => :destroy is a core feature.

The UserVoice suggestion for this is at http://rails.uservoice.com/pages/10012-rails/suggestions/103508-support-dependent-restrict-and-dependent-nullify.

Weird Gem Error

Posted by Luke Francl
on Monday, August 10

Talk about a hard problem to diagnose!

I canceled the installation of Rack 1.0 halfway through because I realized I was running the wrong command (I didn’t use sudo like I wanted to).

After that, I couldn’t load rack at all, even though I could see it in my gems directory and I could load other gems there. I got a LoadError, like this:

irb(main):001:0> require 'rubygems'
=> true
irb(main):002:0> require 'rack'
LoadError: no such file to load -- rack
from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
from (irb):2

I tried downgrading Rails to the version that used Rack 0.9.1 and then I got an error saying Rails couldn’t activate Rack 0.9.1 because 1.0.0 was already active!

Finally figured it out—there was a gemspec for rack-1.0.0 in my ~/.gems directory, but no corresponding gem in the lib directory. Ugh!

Rails 2.3.3 upgrade notes: rack, mocha, and _ids

Posted by Jon
on Wednesday, July 29

I upgraded two apps to Rails 2.3.3 today. It’s a minor release, and there’s not much to report. But I did run into three minor problems.

Mocha

Mocha 0.9.5 started throwing an exception:

NameError: uninitialized constant Mocha::Mockery::ImpersonatingAnyInstanceName

A quick update to Mocha 0.9.7 cleared this up.

Array parameters in tests

In functional tests with Test::Unit, passing an array to a parameter stopped working. Previously, I had something like this:


post :create, :user => {:role_ids => [1,2,3]}

This would post the following parameters:


"role_ids"=>["1", "2", "3"]

But after the 2.3.3 update, I started seeing an error:

NoMethodError: undefined method `each' for 1:Fixnum

I’m not sure why this stopped working. (Anyone know?) Changing the integers to strings clears up the error:


post :create, :user => {:role_ids => ["1","2","3"]}

Or


post :create, :user => {:role_ids => [1.to_s,2.to_s,3.to_s]}

Rack

Rack apparently no longer comes bundled with Rails. Or at least deployment failed on cap deploy: RubyGem version error: rack(0.4.0 not ~> 1.0.0).

The solution was simple: install (or vendor) Rack 1.0.0.


config.gem 'rack', :version => '>= 1.0.0'

validates_length_of byte counting gotcha

Posted by Luke Francl
on Sunday, July 19

Watch out for validates_length_of if you need to make sure a string is a certain number of bytes long. For example, SMS messages can be no longer than 160 bytes in length. I recently got bit by this because some unicode “curly” quotes slipped into a reply message, but they weren’t detected by the validation.

Here’s the problem.

Consider this string:

str = "€"

It is 3 bytes long (in Ruby 1.8, String#size counts bytes):

str.size => 3

However, ActiveRecord’s validates_length_of records this as only one character, because it uses str.split(//).size to measure the size.

If you NEED to be certain that a string is less than a certain number of bytes, you’ll need to override the default behavior of validates_length_of.

Fortunately, you can supply your own tokenizer, which makes this easy. The tokenizer is called, and size is called on its return value to find out how many tokens there are. Since String responds to size, which returns the number of bytes, you can simply return the attribute value itself from the tokenizer, like this:

validates_length_of :message, :maximum => 160, :tokenizer => lambda { |str| str }

Will this still work in Ruby 1.9? I’m not sure. I now have a test case which will warn me if it doesn’t…
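A sketch of that kind of test might look like this (the Message model, message attribute, and 160-byte limit are from the example above; everything else is made up):

# Hypothetical sketch: fails if the validation stops counting bytes, for
# example on a Ruby where String#size returns characters instead of bytes.
class MessageByteLengthTest < ActiveSupport::TestCase
  def test_multibyte_messages_are_validated_by_byte_length
    message = Message.new(:message => "€" * 54) # 54 characters, 162 bytes in UTF-8
    assert !message.valid?
    assert message.errors.on(:message)
  end
end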

FutureRuby!

Posted by Luke Francl
on Wednesday, July 08

Jon and I will be at FutureRuby this weekend (actually, I will be there, and Jon will be speaking).

Say “hi” if you see us. I’m flying into Toronto Thursday for FAILcamp.