One change I never made to key prefix compression is to skip it when it wouldn't save at least N bytes off the key.
In short, key prefix compression itself takes up only a single byte on the page, but the memory allocation needed to instantiate the key if it's ever accessed in a search, and of course the CPU cost of doing so, make me think we shouldn't bother prefix compressing keys unless we get at least N bytes back.
The problem is, I haven't the slightest idea how to select N.
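To make the idea concrete, here's a minimal sketch of the check I have in mind at page-write time; the names (build_key_cell, min_prefix, and so on) are hypothetical, not the actual reconciliation code:

```c
/*
 * Hypothetical sketch: decide whether to prefix-compress a key while
 * writing it to a page.  "prev" is the previous key on the page, "key"
 * is the key being written, and "min_prefix" is the N under discussion.
 */
#include <stddef.h>
#include <stdint.h>

static size_t
common_prefix(const uint8_t *a, size_t alen, const uint8_t *b, size_t blen)
{
	size_t i, max;

	max = alen < blen ? alen : blen;
	for (i = 0; i < max && a[i] == b[i]; ++i)
		;
	return (i);
}

static void
build_key_cell(const uint8_t *prev, size_t prev_len,
    const uint8_t *key, size_t key_len, size_t min_prefix)
{
	size_t prefix;

	prefix = common_prefix(prev, prev_len, key, key_len);

	/*
	 * Encoding the prefix length costs one byte, so removing "prefix"
	 * bytes from the key only nets (prefix - 1) bytes on disk.  Skip
	 * prefix compression entirely unless at least "min_prefix" bytes
	 * would come off the key.
	 */
	if (prefix < min_prefix)
		prefix = 0;

	/* ... write the cell, storing only key[prefix..key_len) ... */
	(void)key;
	(void)key_len;
	(void)prefix;
}
```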
@michaelcahill:
I vote for making it configurable, with a default of 4.
The issue with the parameter is that the cost/benefit of prefix compression is determined by the access pattern, which we can't know when the data is inserted. In particular, prefix compression costs almost nothing for pure in-order sequential scans and the benefit is large (more records per page read), but for random lookups the cost is large and the benefit is small (we're only ever getting one record per page read, regardless of saving some space).
In choosing 4 as the default, the first point is that the setting must be strictly greater than 1, because it costs a byte to encode the prefix length. If we chose N=2, we would be optimizing for sequential scans, but in the workloads we've seen so far from customers, random reads have been far more common – I can't think of one case where sequential scans have been a bottleneck.
The in-memory cost for each prefix-compressed key is 8 + key length (a WT_IKEY structure), not including malloc overhead, so I'm inclined to make the default setting only do prefix compression when the on-disk saving is meaningful. Users who know their access pattern is purely sequential can dial it down to save up to 3 bytes per record.
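For a sense of the numbers, here's a toy calculation of that trade-off; the struct below is a simplified stand-in for the real WT_IKEY layout, and the key/prefix lengths are made-up examples:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for the 8-byte WT_IKEY header; key bytes follow it. */
struct ikey_header {
	uint32_t size;        /* key length */
	uint32_t cell_offset; /* where the on-page cell lives */
};

int
main(void)
{
	uint32_t key_len = 20; /* example key length */
	uint32_t prefix = 4;   /* example: bytes removed by prefix compression */

	/* On disk: removing "prefix" bytes costs one byte to encode. */
	uint32_t disk_saving = prefix - 1;

	/*
	 * In memory: if the key is ever accessed in a search it must be
	 * instantiated, costing the 8-byte header plus the full key length
	 * (malloc overhead not included).
	 */
	size_t mem_cost = sizeof(struct ikey_header) + key_len;

	printf("disk saving %" PRIu32 " bytes, instantiation cost %zu bytes\n",
	    disk_saving, mem_cost);
	return (0);
}
```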