Web Roulette 1.0.1 is on the App Store

Apple approved the 1.0.1 update to Web Roulette earlier this week. It fixes a potential crash when submitting links back to us and addresses a UI glitch on the “about” screen.

One interesting statistic from the first couple of weeks of sales is that iPad users make up the large majority of daily users. It’s hard (impossible?) to say whether this is because there is less competition in the iPad part of the App Store or because people prefer to use a web-centric app like Web Roulette on an iPad.

Announcing In Season 1.0

I am pleased to announce the immediate availability of In Season 1.0 for the iPhone and iPod touch. In Season is a produce shopping guide, inspired by a couple of recent books that address the lack of flavor in most produce found in American markets.

Here in America, we’ve become rather used to the idea that fresh fruits and vegetables are available whenever we want them. What we don’t always realize is that this convenience comes at the cost of flavor and price. Plants grow according to a schedule, and while we can force things somewhat (hothouse tomatoes, for example), if you want peaches in the dead of winter, they aren’t coming from the northern hemisphere.

Shipping produce from South America to an American market is a long trip, though, so the food has to be bred to survive the journey, and someone (the consumer) has to cover the cost of that travel.

There are disadvantages even to strawberries grown and sold in season. Strawberries are so fragile that growers have had to breed them exclusively for shipment, resulting in a berry with only a pale shadow of true strawberry flavor.

I created In Season to help my own family, and hopefully others, with this problem. Food tastes better when it’s grown according to its natural schedule, and even more so if you can find a local farmer supplying produce to your market. In-season produce takes less effort to grow and supply is higher, so you pay less. Locally grown produce also means less fuel is burned bringing that food to market, which brings prices down further and reduces your carbon footprint at the same time.

The economics of the App Store being what they are, version 1.0 is a toe in the water. If it is well received and I can justify further development, I have some great ideas to make it an extremely useful and educational app.

If you are interested in the books I mentioned above, they are terrific reads: How to Pick a Peach: The Search for Flavor from Farm to Table, by Russ Parsons, and Animal, Vegetable, Miracle: A Year of Food Life, by Barbara Kingsolver.

Announcing Pat Counter for iPhone

This evening I finally received an email from Apple informing me that our first application for the iPhone and iPod touch is ready for sale. I’m pleased to announce Pat Counter, a simple way to keep track of a running count without having to hold something in your hand or look at a screen to tap a button.

It’s a simple application, but it was an interesting experience developing and shipping for a new platform. Rands was right about 1.0: it’s amazing how much time and work it takes to nail down all the little details, even in something as simple as Pat Counter. Valley start-ups really do go out of business because it can be so hard. (Aside: Rands’s book Managing Humans is quite good.)

I wasn’t able to find any data points out there about lead times, but for me it was five days between submission of the binary to Apple and approval for sale. That includes a weekend, but it seems like apps are still approved then, just at a slower rate.

Finally, a word of advice: make sure your binary is right the first time. I goofed my initial build and it took seven days before Apple told me about it, by which time I’d already noticed it myself. Replacing the binary was effectively the same as the initial submission; I went to the back of the line. Read the instructions for a distribution build closely and follow them exactly.

Update: It’s now available on the App Store with a direct link, but isn’t in the search results.

Update 2 (Sep. 15): It now shows up in search results, a mere five days after approval.

sql_logging Updated for Edge Rails

I updated the sql_logging plug-in today to work on Edge Rails. The PostgreSQL adapter on Edge tries to load a different database driver, and loading that driver explicitly when the application isn’t using it causes a crash when the server starts. Today’s change reworks things: the plug-in now asks ActiveRecord which adapter is in use and loads only the extensions for that adapter.
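In sketch form, the check looks something like this (the require’d file names are illustrative, not the plug-in’s actual layout):

adapter = ActiveRecord::Base.configurations[RAILS_ENV]['adapter']

case adapter
when 'mysql'
  require 'sql_logging/mysql_extensions'      # hypothetical path
when 'postgresql'
  require 'sql_logging/postgresql_extensions' # hypothetical path
end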

Another change makes PostgreSQL query statistics work on Rails 2.0, where the adapter prefers to call the execute method instead of query.
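For the curious, hooking both entry points looks roughly like this. It’s a sketch using the alias_method_chain idiom of the day, not the plug-in’s literal code, and record_query below is a stand-in for the real statistics bookkeeping:

ActiveRecord::ConnectionAdapters::PostgreSQLAdapter.class_eval do
  # Stand-in for the plug-in's per-query bookkeeping.
  def record_query(sql, name)
    RAILS_DEFAULT_LOGGER.debug("stat: #{name || 'unnamed'}")
  end

  def execute_with_stats(sql, name = nil)
    record_query(sql, name)
    execute_without_stats(sql, name)
  end
  alias_method_chain :execute, :stats

  def query_with_stats(sql, name = nil)
    record_query(sql, name)
    query_without_stats(sql, name)
  end
  alias_method_chain :query, :stats
end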

Things should still work on Rails 1.2, but I’m moving applications to 2.0 whenever possible, so I’ll rely on a bug report if something is broken.

Finally, a little tip: statistics aren’t kept for queries that lack a name or that look like Rails asking for schema information. If you do a significant amount of query work through the raw connection (select_all or select_one), make sure you give the query a name so it can be tracked properly and show up in the top 10 list.
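For example (the table and query name here are made up):

# Unnamed: no statistics are kept for this query.
rows = ActiveRecord::Base.connection.select_all("SELECT * FROM merges")

# Named: tracked properly and eligible for the top 10 list.
rows = ActiveRecord::Base.connection.select_all(
  "SELECT * FROM merges", "Merge: raw search")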

Get the plug-in from svn at http://svn.lightyearsoftware.com/svn/plugins/sql_logging.

The sql_logging Debugging Plug-In

One of my current projects involves significantly improving the search function in a Rails application, a thorny problem that led me to come up with the method to quiet down the Rails logger in my SQL log silencer. It turns out that wasn’t enough. I have expanded quite a bit on that idea by folding in some useful debugging aids from other developers, making them more useful, and adding a SQL query “hit list” after each request completes. I’m releasing it as the sql_logging plug-in.

Installation is the usual (on one line):

$ script/plugin install \
http://svn.lightyearsoftware.com/svn/plugins/sql_logging

I have set up a forum for discussion and support of the plug-in.

So, what does this give you?

Out of the box, you get these new things:

  • A count of the number of rows returned and a rough count of bytes for each SELECT query
  • A summary at the end of the request of the total number of queries, bytes returned and SQL executions
  • Each SQL execution includes a cleaned-up backtrace showing exactly which part of your code triggered the query
  • A summary at the end of the request showing a “top 10” list of queries

The last one is particularly useful. Queries are grouped by their backtrace, then sorted by number of executions, rows returned, or bytes returned. This provides an excellent way to target your performance optimization work.

When sorting by executions, you very quickly see where the same code triggers a query over and over, and you can spend your optimization time reducing this by adding the model to an earlier query (via :include) or by getting more rows at a time with batched loads (WHERE id IN (...)). Here’s a sample:

Completed in 13.01586 (0 reqs/sec) | Rendering: 2.69928 (20%) | DB: 8.68216 (66%), 154 selects, 1728.1 KB, 166 execs
Top 10 SQL executions:
  Executed 95 times, returning 180 rows (230.5 KB):
    app/models/merge_keyword.rb:34:in `get_merges_by_keyword'
    app/models/merge_keyword.rb:30:in `each'
    app/models/merge_keyword.rb:30:in `get_merges_by_keyword'
    app/models/merge.rb:107:in `search'
    app/models/merge.rb:105:in `each'
    app/models/merge.rb:105:in `search'
    app/controllers/search_controller.rb:98:in `results'
  Executed 22 times, returning 30 rows (1.3 KB):
    app/models/merge.rb:80:in `attribute'
    app/views/merges/_result_line.rhtml:2:in `_run_rhtml_47app47views47merges47_result_line46rhtml'
    app/views/search/results.rhtml:172:in `_run_rhtml_47app47views47search47results46rhtml'
    app/views/search/results.rhtml:170:in `each'
    app/views/search/results.rhtml:170:in `each_with_index'
    app/views/search/results.rhtml:170:in `_run_rhtml_47app47views47search47results46rhtml'
  Executed 20 times, returning 30 rows (1.2 KB):
    app/models/merge.rb:80:in `attribute'
    app/helpers/application_helper.rb:40:in `merge_detail_link'
    app/views/merges/_result_line.rhtml:26:in `_run_rhtml_47app47views47merges47_result_line46rhtml'
    app/views/search/results.rhtml:172:in `_run_rhtml_47app47views47search47results46rhtml'
    app/views/search/results.rhtml:170:in `each'
    app/views/search/results.rhtml:170:in `each_with_index'
    app/views/search/results.rhtml:170:in `_run_rhtml_47app47views47search47results46rhtml'

By sorting by rows or bytes returned, you may be able to determine that you need to let the database do more of the filtering work. It is inefficient to have the database return a pile of rows to your application, only for a bunch of them to be discarded.
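To make those remedies concrete, here’s a sketch with made-up models along the lines of the sample above; the first two address repeated executions, the last pushes filtering into the database:

# Fix 1: eager-load the association so one query replaces N one-row
# queries (via :include).
merges = Merge.find(:all, :conditions => ['score > ?', 10],
                          :include    => :keywords)

# Fix 2: batch up loads yourself with an IN clause instead of fetching
# one record per loop iteration.
ids = merges.map(&:id)
keywords = MergeKeyword.find(:all, :conditions => ['merge_id IN (?)', ids])

# Fix 3: filter in SQL rather than discarding rows in Ruby.
good = Merge.find(:all, :conditions => ['score > ?', 10])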

The silencer from my previous post is in there, too, along with a new silencer that quiets only the backtrace for the duration of the block:

silence_sql do
  # queries here produce no SQL logging at all
end

silence_sql_backtrace do
  # queries here are logged, but without backtraces
end

I must give a big hat tip to Adam Doppelt and Nathaniel Talbott. Adam’s SQL logging enhancements formed the basis for the logging enhancements here; I made them a little more generic so they work with both MySQL and PostgreSQL, and added tracking of executions for statements that don’t return data. Nathaniel’s QueryTrace plug-in has been around for a while, and I originally tried to make this new plug-in simply work with it, but in the end decided to fold the functionality in so the “top 10” list didn’t duplicate the backtrace-cleaning code. I also added a regexp to the cleaner so it excludes the plug-in’s own code from the backtrace.

There are a few more goodies in the plug-in, so look at the README for more details.

Update: I fixed a few bugs and added a feature. The plug-in now ignores queries whose name ends with “ Column”. These are internal Rails queries, and their inclusion skewed the results.

The new feature tracks the execution time of SQL and lets you sort the top 10 list by either the total time for all instances of a query or its median execution time. Total time is now the default sorting method, as this is probably the best way to attack SQL optimization: whatever takes the biggest percentage of your time in the database is a good place to start.
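Switching the sort is a one-line configuration change. The setting name below is only illustrative; the README documents the real one:

# Hypothetical setting name; see the plug-in's README for the real one.
SqlLogging::Statistics.top_sql_queries_sort = :median_time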

Update 2: The plug-in works on Rails 2.0.2 now.

Collecting Statistics from PostgreSQL in Rails

The PostgreSQL database includes a statistics collection facility that can give you information about your database, tables, indexes and sequences. I just posted a new Rails plug-in that makes it very easy to gather this information and display it in a Rails application.

pgsql_stats screenshot

All of the counters described in the PostgreSQL manual are represented in the models in the plug-in. To name a few:

  • Number of scans over the table or index
  • Cache hit/miss counters (and cache hit rate, by a simple computation)
  • On-disk size

In the above screenshot (taken very soon after the server was started), it’s easy to see that the cron_runs table is by far the largest in the database, followed by its primary key index. Of the entities that have been touched, a large percentage of requests are being satisfied by the buffer cache. You can’t see it in that image, but I’ve defined ranges that turn the green bars red if the cache hit rate falls below 75%.
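That “simple computation” for the hit rate comes straight from PostgreSQL’s statistics views, so you can sanity-check the plug-in’s numbers without its models. A sketch using the raw connection:

# Buffer cache hit rate per table, from pg_statio_user_tables.
rows = ActiveRecord::Base.connection.select_all(<<-SQL, 'Cache Hit Rates')
  SELECT relname, heap_blks_hit, heap_blks_read
  FROM pg_statio_user_tables
SQL

rows.each do |row|
  hits  = row['heap_blks_hit'].to_i
  reads = row['heap_blks_read'].to_i
  total = hits + reads
  next if total.zero?  # table hasn't been touched yet
  printf "%s: %.1f%%\n", row['relname'], 100.0 * hits / total
end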

I’ve set up a Google Group forum for further discussion. Some additional information is available in the README, and the plug-in can be installed like any other:

$ script/plugin install \
http://svn.lightyearsoftware.com/svn/plugins/pgsql_stats

All on one line, of course.

Update Sep. 10, 2007: There is now a usage example on the Google Group that shows how to get the results shown in the screenshot.

Update Jul. 23, 2008: Part of the fallout of Google disabling my account appears to be that the group I set up for discussion and support disappeared, too. I have moved discussion and support to my own forums: pgsql_stats forum.

Introducing ZingLists

Last week, we finally launched our first product: ZingLists. It’s a community-oriented site for making and sharing lists. What kind of lists?

  • Simple, itemized lists for things like what to put in a home emergency/disaster kit
  • To-do lists where tasks have a due date and even a schedule for repeating the task
  • Fun, lighthearted lists of your favorite CDs, movies, places to go…

ZingLists is not the first real web site I’ve built, but it is my first serious project using Ruby on Rails, and I have to say it has been a very enjoyable experience. I can think of only one case where I wanted to do a little more than Rails offered out of the box and there wasn’t already a hook somewhere. (More about that later.)

The site is quite young at the moment, so public content is a little thin, but it will get better over time. Feature-wise, the site is very useful: I have run my personal to-do lists on it for a couple of months and have been using it for more general list keeping as well.

There is almost always room for improvement, though, so if anyone has any feedback, feel free to leave a comment here or drop us a note at the support page.