13. April 2008, 07:29, by Silvan Mühlemann

Pitfalls with MySQL and auto_increment

There are a number of pitfalls one should be aware of when working with AUTO_INCREMENT fields in MySQL. Last week, we fell into every one of them:

We have the table photos which contains all 15 million pictures on tilllate.com. This table is MyISAM and has id INT NOT NULL AUTO_INCREMENT as the primary key. The position of the auto increment counter was at 112’606’834.

We did tests on a new component and by accident inserted a row with the id 4’294’967’295. This means the auto increment counter was internally bumped to 4’294’967’295.

Since 4’294’967’295 is the very end of the UNSIGNED INT range (the (5) in INT(5) is only a display width and does not limit the range), no more inserts on this table were possible. So we wanted to reset the counter back to the old value of 112’606’834. We issued the following statement:

ALTER TABLE f_bilder AUTO_INCREMENT = 112606834;

Two pitfalls

First pitfall: The query ran for five minutes and locked this important table. People visiting tilllate.com were not able to upload photos during this time, and a huge backlog of locked queries built up. MySQL rebuilt the whole table and its indexes (just as if you had issued a REPAIR TABLE statement). To me, this behavior is incomprehensible. Why does the index have to be rebuilt when we just want one internal state variable changed?

Second pitfall: The auto increment counter was still at 4’294’967’295, so our expensive ALTER TABLE statement had accomplished nothing. Why’s that? The MySQL manual tells you more:

if the value is less than the current maximum value in the column, no error occurs and the current sequence value is not changed.
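In our case that meant the reset was silently ignored. A sketch of how to see this without triggering another rebuild (same table and numbers as above; the information_schema query assumes MySQL 5.0 or later):

```sql
-- Silently ignored: 112606834 is below the current maximum id in the table.
ALTER TABLE f_bilder AUTO_INCREMENT = 112606834;

-- Inspect the counter without touching the table data:
SELECT AUTO_INCREMENT
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'f_bilder';
-- Still reports 4294967295.
```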

Climbing out of the pit

So we first had to remove the rows containing the too-high value before changing the auto increment counter. This meant waiting once more while MySQL rebuilt the index.
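The recovery can be sketched like this (a sketch, not the exact queries we ran; the write lock guards against a concurrent insert re-creating a high id between the DELETE and the ALTER):

```sql
-- Hold a write lock so nobody inserts while we clean up:
LOCK TABLES f_bilder WRITE;

-- Remove the accidental row(s) at the top of the range:
DELETE FROM f_bilder WHERE id = 4294967295;

-- Now the reset actually takes effect
-- (but still rebuilds the whole MyISAM table):
ALTER TABLE f_bilder AUTO_INCREMENT = 112606834;

UNLOCK TABLES;
```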

In InnoDB there are tricks to avoid this behavior. But the best solution is to avoid auto increment in the first place and to derive your keys from a hash of the row’s content. This is more expensive, but you avoid key collisions. And in a distributed environment with objects being created in different places, you won’t need any communication to ensure the uniqueness of the keys. But that’s a whole different topic… I’ll leave that to the SOA specialists. :-)
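A minimal sketch of such a content-derived key (table and column names are made up for illustration; MD5 is used for brevity, pick your hash to taste):

```sql
CREATE TABLE photo (
  id       BINARY(16)   NOT NULL PRIMARY KEY,  -- 128-bit hash instead of AUTO_INCREMENT
  user_id  INT UNSIGNED NOT NULL,
  filename VARCHAR(255) NOT NULL
);

-- Derive the key from the row's content: identical content collides on purpose,
-- and no central counter has to be coordinated between servers.
INSERT INTO photo (id, user_id, filename)
VALUES (UNHEX(MD5(CONCAT(42, '/', 'party.jpg'))), 42, 'party.jpg');
```

Note the caveat raised in the comments below: with InnoDB’s clustered index, such random keys turn simple appends into random writes.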

Filed under: Database

4 Comments

  1. Just for the information of our readers: deleting the too-high values and running the ALTER TABLE should be done while holding a lock on the table, so you don’t get into trouble again.

    A lot of databases do not even have auto increment (PostgreSQL, for example); they use sequences for that. You can build this feature with a trigger in MySQL, as Peter describes here.

    Comment by Leo Büttiker — 13. April 2008 @ 12:01

  2. Well the real SNAFU here was that all of this was done on your production database, wasn’t it?

    Comment by patrice — 13. April 2008 @ 17:20

  3. @patrice: well…erhmm…
    yes!

    Comment by Leo Büttiker — 20. April 2008 @ 19:16

  4. Creating the primary key out of a hash might not be such a good idea with InnoDB as the data is inserted in primary key order. With a non-numeric key that would mean there are no more simple appends, every insert would be a random write…

    Comment by Nils — 11. September 2008 @ 09:36


© 2017 tilllate Schweiz AG - Powered by WordPress