
April 24, 2009

Running a Realtime Stats Service on MySQL (my slides at Percona Performance Conference)

Today at the Percona Performance Conference, I gave my presentation on the optimizations and tweaks I developed for running Pathtraq, one of Japan's largest web stats services. Thank you to everyone who listened; I hope you enjoyed my talk. And thank you to the people at Percona. I have uploaded my slides to Slideshare, so for more information, please refer to them.

Continue reading "Running a Realtime Stats Service on MySQL (my slides at Percona Performance Conference)" »

April 23, 2009

Q4M Presentation Slides at MySQL Conference

Today at MySQL Conference & Expo 2009, I gave a presentation introducing and explaining Q4M. Thank you to everyone who came to listen.

The presentation slides I used can be found on Slideshare.

Continue reading "Q4M Presentation Slides at MySQL Conference" »

April 22, 2009

Q4M (and Pathtraq) at MySQL Conference & Expo 2009

At MySQL Conference & Expo 2009, I will be giving a presentation on Q4M tomorrow (Apr. 22) from 11:55am. If you are interested in using a simple, fast message queue as part of your system, please come to the session.

Details: Using Q4M: A Message Queue Storage Engine for MySQL

The next day (Apr. 23), I will be giving another presentation at the Percona Performance Conference, held at the same location. In it, I will describe the techniques (mainly MySQL UDFs) used to squeeze maximum performance out of MySQL in Pathtraq, one of the largest web access stats services in Japan.

Details: Running a Realtime Stats Service on MySQL (from 6:10pm)

I am looking forward to seeing you in the sessions.

April 16, 2009

Q4M 0.8.5 released

Q4M 0.8.5 is now downloadable from q4m.31tools.com. Prebuilt binaries for MySQL 5.1.33 running on linux (i386 or x86_64) and Mac OS X 10.5 (x86) are available as well.

There are no bugfixes in this release. The only change from version 0.8.4 is that the Boost header files necessary for building Q4M are now bundled, so the build process no longer requires a separate installation of the Boost C++ libraries.

February 09, 2009

Q4M 0.8.4 released (with prebuilt binaries for MySQL 5.1.31)

Today I have uploaded Q4M 0.8.4 to q4m.31tools.com. Prebuilt binaries for MySQL 5.1.31 running on linux (i386 or x86_64) and Mac OS X (x86) are available as well.

The release fixes a crash on linux (i386) systems that occurred when a table became larger than 2GB. There are no changes for other platforms.

February 05, 2009

Using O_DIRECT on Mac OS X

Is there any reason for not supporting innodb_flush_method=O_DIRECT on Mac OS X? Today I wrote a tiny patch to enable O_DIRECT on MySQL running on Mac OS X, and it seems to work fine. (mysql-5.1.30-osx-o_direct.patch)

Although I do not think it is a good idea to use Mac OS X as a database server (since its I/O system calls are slow compared to other OSes, at least until Snow Leopard is released), having O_DIRECT support is better than none in some cases.
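For the record, Mac OS X has no O_DIRECT flag for open(2); the closest equivalent is the F_NOCACHE fcntl, which is presumably what a patch like this maps the setting to. With the patch applied, enabling it would be the usual my.cnf setting:

[mysqld]
innodb_flush_method=O_DIRECT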

December 12, 2008

Using Top N Sort on MySQL

One of the best practices when using MySQL is to avoid filesort. However, there are cases where it is inevitable (e.g. ordering the result of a fulltext search by modification date), and although in most cases only the top N rows of the sorted resultset are needed, MySQL does not implement top N sort.

After wondering for a couple of months whether I should hack the MySQL core to implement top-N-sort, today I decided to write a UDF that performs top N sort and see how well it performs. And here it is: top-n-sort.c.

First, the benchmark. When performing ORDER BY ... LIMIT on an unindexed column of a 100k-row table, top N sort is more than two times faster than the internal sort algorithm.

mysql> SELECT id,priority FROM testsort ORDER BY priority LIMIT 10;
(snip)
10 rows in set (0.11 sec)

mysql> SELECT topn_get(v) AS id,topn_get_value(v) AS priority FROM int_seq WHERE v<(SELECT topn_set(10,id,priority) FROM testsort);
(snip)
10 rows in set (0.05 sec)

The top-n-sort UDF consists of three functions: topn_set, topn_get, and topn_get_value.

The topn_set function takes three arguments: the number of top rows to preserve, the id of the row, and the sort value. It is an aggregate function that returns the number of rows that should be returned.

The topn_get function returns the n-th id of the sort result, while the topn_get_value function returns the n-th value.

So by combining the results of the UDFs with a table (int_seq) that holds a sequence of integers starting from zero, it is possible to perform a top N sort within a single query, as shown above. And it is fast.
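For reference, a minimal sketch of such an int_seq table (the exact schema used in the benchmark is my assumption):

CREATE TABLE int_seq (v INT NOT NULL PRIMARY KEY);
INSERT INTO int_seq (v) VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);
-- extend the sequence to cover the largest N you intend to request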

The next question is whether I should add top N sort to MySQL myself, now that I have shown there would be a performance benefit. For myself, the UDF version seems satisfactory, although the syntax is a bit complicated.

October 29, 2008

Benchmarking SSD for MySQL

Today I bought an Intel X25-M to test its performance and to consider whether we could replace an HDD used in the slave database of Pathtraq with a solid state disk. After connecting the drive to a test server, I ran a synthetic benchmark to check its performance for 16KB random access with the O_DIRECT flag set, which is pretty similar to the access pattern we see in our daily InnoDB use.

16KB Random Access

                              HDD (Maxtor 7L250S0)   SSD (Intel X25-M)   Ratio
Read (MB/sec)                         2.64                 38.0          1440%
Write (MB/sec)                       12.0                  46.4           385%
Write (MB/sec, hdparm -W 0)           1.91                 11.2           585%

Looks very promising. I will post further results as I continue my benchmarks.

PS. For the test, I wrote randombench.cc (requires this wrapper to run), and ran it with options -b 16 -c 1 -f 102400 -l 1000.

PS2. I also tried opening the disk as a raw device with -c 4 to see if performance increases under higher concurrency (due to NCQ, etc.), but there were no improvements, at least on our system. (Unfortunately we did not have XFS installed.)

October 23, 2008

Q4M prebuilt binaries for MySQL 5.1.28-rc

I have uploaded prebuilt binaries of Q4M for MySQL 5.1.28-rc to http://q4m.31tools.com/dist/. For installation instructions, please refer to http://q4m.31tools.com/install.php.

September 03, 2008

Q4M becomes part of FreeBSD Ports Collection

Thanks to Akinori MUSHA, Q4M has become part of the FreeBSD Ports Collection.

If you are using FreeBSD, Q4M can be installed by following the steps below.

# cd /usr/ports/databases/mysql-q4m
# make install
# echo 'mysql_enable="YES"' >> /etc/rc.conf
# /usr/local/etc/rc.d/mysql-server start
# mysql -u root -f mysql < work/q4m-0.8.3/support-files/install.sql

Running either cvsup or portsnap might be necessary to bring the installed ports collection up to date. Since the port depends on mysql51-server, you should run make deinstall first if an older version of mysql is already installed via the ports collection. If you want to test the installation, type:

# chmod 755 work/q4m-0.8.3/support-files/q4m-forward
# make test

The chmod seems to be necessary because the ports collection's install script for non-executable files drops the execution bits. Since there is no particular reason for keeping q4m-forward in the support-files directory, I plan to move it somewhere else in the next release of Q4M.

September 02, 2008

Release of Q4M 0.8.3 and support for FreeBSD

Q4M (Queue for MySQL) version 0.8.3 has been released with the following changes.

  • fix race condition error that might lead to deadlock on shutdown
  • support for FreeBSD
With support for FreeBSD added, prebuilt binaries are provided for the platforms below.
  • linux (i386)
  • linux (x86_64)
  • freebsd-6 (i386) (should work on FreeBSD 7 as well with compat6x package installed)
  • Mac OS X 10.4 (x86)

August 29, 2008

Q4M 0.8.1 released (including prebuilt binaries)

Today I have released Q4M 0.8.1, a minor bugfix release of 0.8. The changes are:

  • fix file descriptor leak on DROP TABLE (regression in 0.8)
  • adjust Makefile to fix build error under certain environments
But the biggest improvement might be the release of prebuilt binaries. Q4M 0.8.1 comes with executables for linux i386, linux x86_64, and Mac OS X (x86). Since I have set up an automatic build-and-test environment, I hope I can continue to release binary versions from now on. For more information, please refer to the install page of Q4M.

PS. Prebuilt binaries for linux platforms have been updated to version 0.8.2, since they had interoperability issues (crashing with SIGFPE on dlopen). Source code and Mac OS X binary tarballs will not be released for 0.8.2. For technical details of the problem, see FPE in ld-linux-x86-64.so loading custome dso in Apache - Unix Linux Forum. Thanks to hirose31-san for reporting the problem and for helping solve the issue.

August 13, 2008

Q4M adoption by Mixi, and the release of version 0.8

Last week, Mixi - Japan's largest social network service provider - launched an experimental microblogging service called Echo, and according to their developers' blog entry, they are using Q4M to level their write loads. Thank you to the developers of Mixi Echo for using Q4M; I hope Echo will go well and soon become a first-class service.

Meanwhile, several bugs were found in Q4M (arising in rare cases or complex usage scenarios), so I have just uploaded version 0.8, available from the Q4M homepage.

This release fixes the following bugs.

  • block div-by-zero exceptions in conditional subscription
  • do not crash when a nonexistent table (or a non-Q4M table) is specified as an argument of queue_wait
  • fix memory corruption that used to occur in certain cases when multiple table-conditions were passed to queue_wait
  • return correct data when a single table is passed multiple times to queue_wait function causing a wait

From this release, the plugin_version field of information_schema.plugins table will show the correct version number of Q4M.

mysql> select plugin_version from information_schema.plugins where plugin_name='queue'; 
+----------------+
| plugin_version |
+----------------+
| 0.8            | 
+----------------+
1 row in set (0.01 sec)
It is also possible to view the status of Q4M.
mysql> show engine queue status\G
*************************** 1. row ***************************
  Type: QUEUE
  Name: 
Status: 
I/O calls
------------------------------------
sys_read                           71
sys_write                       58292
sys_sync                        43271
read_cachehit                       0

Writer thread
------------------------------------
append                          10095
remove                          34422

Conditional subscription
------------------------------------
evaluation                      31809
compile                         26543
compile_cachehit                26531

High-level stats
------------------------------------
rows_written                    47002
rows_removed                    46469
queue_wait                      44931
queue_end                        4110
queue_abort                         3
queue_rowid                      4085
queue_set_srcid                    89

1 row in set (0.00 sec)

July 01, 2008

Q4M 0.7 released

Version 0.7 of Q4M (a pluggable message queue storage engine for MySQL) has been released with the following changes.

  1. Faster SELECT COUNT(*)

    Q4M now caches the number of rows in a table, so SELECT COUNT(*) queries can be issued frequently to monitor queue usage.

  2. Dropped binlog capability flags

    Q4M tables were incorrectly marked as binlog-capable in previous releases.

  3. Added examples/crawler

    Q4M now includes an example web spider implementation. According to a test using a preliminary version (details in Japanese), the implementation was about two times faster than a crawler based on POE. It is also easier to monitor, since ordinary SQL can be used to examine and/or adjust the request queues.

June 13, 2008

Performance of MySQL UDFs vs. Native Functions

Hearing from Brian that UDFs might be slower than native functions in MySQL, I did a small benchmark.

mysql> select benchmark(1e9,abs(1));
+-----------------------+
| benchmark(1e9,abs(1)) |
+-----------------------+
|                     0 | 
+-----------------------+
1 row in set (27.15 sec)

mysql> select benchmark(1e9,udf_abs(1));
+---------------------------+
| benchmark(1e9,udf_abs(1)) |
+---------------------------+
|                         0 | 
+---------------------------+
1 row in set (43.04 sec)

The numbers were taken on my MacBook (Core 2 Duo @ 2GHz, OS X 10.4.11) running the official binary version of MySQL 5.1.25-rc for 32-bit arch. So the overhead of UDFs compared to native functions seems to be about 30 clocks per call.
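For those who have not written UDFs: a UDF lives in a shared library and is registered with CREATE FUNCTION. A sketch of how a function like udf_abs would be loaded (the library name is my assumption):

CREATE FUNCTION udf_abs RETURNS REAL SONAME 'udf_abs.so';
SELECT udf_abs(-1);  -- should return 1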

So the question is whether it would matter in an actual application. I created a 100k-row heap table and performed a sequential scan, calling either abs() or udf_abs() for each row.

$ mysqlslap -S /tmp/mysql-dev.sock -u root -i 1000 -q 'select abs(v) as v1 from test.heap_t having v1=-1'
Benchmark
        Average number of seconds to run all queries: 0.014 seconds
        Minimum number of seconds to run all queries: 0.014 seconds
        Maximum number of seconds to run all queries: 0.018 seconds
        Number of clients running queries: 1
        Average number of queries per client: 1

$ mysqlslap -S /tmp/mysql-dev.sock -u root -i 1000 -q 'select udf_abs(v) as v1 from test.heap_t having v1=-1'
Benchmark
        Average number of seconds to run all queries: 0.015 seconds
        Minimum number of seconds to run all queries: 0.015 seconds
        Maximum number of seconds to run all queries: 0.019 seconds
        Number of clients running queries: 1
        Average number of queries per client: 1

There does seem to be some difference, and it might be worth remembering. But IMHO, in general, it would not be a problem, since most UDFs perform much more complex operations than a simple abs calculation, and since accessing a single row would in most cases be much heavier than reading one from a four-byte fixed-width heap table.

Once I get to my office, I would like to run the same benchmark on a linux server running the 64-bit version of MySQL.

Below are the benchmarks on an Opteron 2218 running CentOS 5.1 (x86_64). The overhead of calling a UDF still exists: about 10 clocks per call. However, when running a sequential scan, the UDF version performed faster than the native version. I am not sure why this happens (I tried multiple times and got the same result), but it might be due to memory access patterns and the behaviour of the prefetchers within the CPU.

mysql> select benchmark(1e9,abs(1));
+-----------------------+
| benchmark(1e9,abs(1)) |
+-----------------------+
|                     0 | 
+-----------------------+
1 row in set (20.69 sec)

mysql> select benchmark(1e9,udf_abs(1));
+---------------------------+
| benchmark(1e9,udf_abs(1)) |
+---------------------------+
|                         0 | 
+---------------------------+
1 row in set (30.65 sec)
$ /usr/local/mysql51/bin/mysqlslap -S /tmp/mysql51.sock -u root -i 1000 -q 'select abs(v) as v1 from test.heap_t having v1=-1'
Benchmark
        Average number of seconds to run all queries: 0.014 seconds
        Minimum number of seconds to run all queries: 0.011 seconds
        Maximum number of seconds to run all queries: 0.018 seconds
        Number of clients running queries: 1
        Average number of queries per client: 1

$ /usr/local/mysql51/bin/mysqlslap -S /tmp/mysql51.sock -u root -i 1000 -q 'select udf_abs(v) as v1 from test.heap_t having v1=-1'
Benchmark
        Average number of seconds to run all queries: 0.011 seconds
        Minimum number of seconds to run all queries: 0.011 seconds
        Maximum number of seconds to run all queries: 0.013 seconds
        Number of clients running queries: 1
        Average number of queries per client: 1

Continue reading "Performance of MySQL UDFs vs. Native Functions" »

June 12, 2008

Optimizing MySQL Performance Using Direct Access to Storage Engines (faster timelines)

First, let's look at the numbers. The table below lists the speed of building a timeline like Twitter's, all using the pull model.

Building Timelines on MySQL

                            timelines / sec.
SQL                                     56.7
Stored Procedure                         136
UDF using Direct Access                1,710

As I explained in my previous post (Implementing Timeline in Web Services - Paradigms and Techniques), it is difficult (if not impossible) to write an optimal SQL query that builds timelines on the fly. Yesterday I asked on the MySQL Internals mailing list whether it is possible to write code that directly accesses the storage engine (in my case InnoDB) for the highest performance, and Brian gave me a quick response (thank you) that there is a MySQL branch that supports writing stored procedures in external languages. So I looked into it, but it seemed to me directed more towards flexibility than performance. After wondering for a while, I came up with the idea of calling storage engine APIs from a UDF, tried it, and it worked!

The code can be found here (/lang/sql/mysql_timeline - CodeRepos::Share - Trac). It is only about 120 lines long, with a general helper library (a C++ template) of about the same size. And although it uses a better-tuned version of the algorithm described in my previous post, the core code is as small as follows. IMHO, it is easier to understand than the stored procedure version.

Continue reading "Optimizing MySQL Performance Using Direct Access to Storage Engines (faster timelines)" »

June 09, 2008

Implementing Timeline in Web Services - Paradigms and Techniques

This is a loose translation of the Japanese version.
Further optimized version of the pull model can be found here.

It has been quite a while since Twitter became one of the hottest web services. However, a time-based listing of your friends' updates is not unique to Twitter; it is a common pattern found in many social web services. Social network services, for example, provide time-based listings of your friends' diaries, and social bookmark services list your friends' bookmarks.

What Twitter brought to attention is that implementing a friends timeline in an optimal way is not an easy task. The fav.or.it Blog » Fixing Twitter seems to be a good article covering the problem; however, there is still more to consider, especially on how to create a well-performing basis for handling friends timelines that can be scaled out.

In this blog article, I will describe two methods of implementing such a timeline - push and pull - from their basic design to tuning techniques. SQL (that would work fine with MySQL) is used throughout, but the principles stay the same whatever storage (relational or specially designed) is being used.
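To make the pull model concrete before diving in, the naive per-request query would look something like the following (the schema is hypothetical):

-- friendships(user_id, friend_id), entries(id, user_id, body, posted_at)
SELECT e.*
  FROM entries e
 INNER JOIN friendships f ON f.friend_id = e.user_id
 WHERE f.user_id = 123
 ORDER BY e.posted_at DESC
 LIMIT 20;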

Continue reading "Implementing Timeline in Web Services - Paradigms and Techniques" »

June 05, 2008

Memo: Binary Logging of MySQL from the viewpoint of a storage engine

  • two formats: statement-based and row-based
    • can be mixed
    • 5.1 supports both
  • statement-based logs record UPDATE, INSERT, and DELETE queries
  • row-based logs store internal buffers passed to `handler' class
  • storage engines may declare HA_HAS_OWN_BINLOGGING and write to binlog directly
    • however, it becomes impossible to log multitable updates
    • what happens if the storage engine supports transaction?
  • handling of auto_increment
    • when using statement-based logs, lock for auto_increment value should be held until a query completes
    • when using row-based logs, an auto_increment column can be updated and stored to the log row by row by directly updating ``uchar record[]''

For myself, since Q4M has a hidden rowid, it seems that declaring HA_HAS_OWN_BINLOGGING is the way to go.

June 02, 2008

Q4M - 0.6 release and benchmarks

Today I have uploaded Q4M (a Queue for MySQL) 0.6, which is basically a performance improvement over previous releases. Instead of using preads and a small user-level cache, Q4M (in its default configuration) now uses mmap for reads, with a reader/writer lock to improve concurrency.

I also noticed that it would be possible to consume a queued row in one SQL statement.

SELECT * FROM queue_table WHERE queue_wait('queue_table');

This statement actually does the same thing as,

if (SELECT queue_wait('queue_table') == 1) {
  SELECT * FROM queue_table;
}

But since the former style requires only one SQL statement (compared to the two statements of the latter), it has much less overhead.
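And to complete the consumption cycle, here is a sketch of how a consumer would acknowledge the row it received, assuming queue_end() (one of the functions visible in the engine status counters shown above) removes the row being owned:

SELECT * FROM queue_table WHERE queue_wait('queue_table');
-- ... process the row ...
SELECT queue_end();  -- assumed to remove the consumed row from the queue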

And combining these optimizations, the consumption speed of Q4M has nearly doubled from the previous post (or tripled from 0.5.1), to over 57,000 rows per second.

[kazuho@dev32 q4m-0.6]$ ./configure --with-mysql=/home/kazuho/dev/mysql/51/64-bin-src --prefix=/home/kazuho/dev/mysql/51/64-bin --with-sync=no --with-delete=msync
(snip)
[kazuho@dev32 q4m-0.6]$ USE_C_CLIENT=1 MESSAGES=1000000 CONCURRENCY=10 DBI='dbi:mysql:test;mysql_socket=/tmp/mysql51.sock;user=root' t/05-multireader.t 
1..4
ok 1 - check number of messages
ok 2 - min value of received message
ok 3 - max value of received message
ok 4 - should have no rows in table


Multireader benchmark result:
    Number of messages: 1000000
    Number of readers:  10
    Elapsed:            17.261 seconds
    Throughput:         57934.239 mess./sec.

Continue reading "Q4M - 0.6 release and benchmarks" »

May 27, 2008

Slides on Q4M

Today I had a chance to explain Q4M in detail, and here are the slides I used.

It covers what a message queue (generally) is, the internals of Q4M, how it should be used as a pluggable storage engine of MySQL, and a couple of usage scenarios. I hope you will enjoy reading it.

May 21, 2008

Maximum Performance of MySQL and Q4M

I usually blog my rough ideas on one of my Japanese blogs (id:kazuhooku's memos). When I wrote down my thoughts on how to further optimize Q4M, Nishida-san asked me, "how fast is the raw performance without client overhead?" Although that seems a bit difficult to answer directly, it is easy to measure the performance of the MySQL core and the storage engine interface, and by deducting that overhead, the raw performance of the I/O operations in Q4M can be estimated. All the benchmarks were taken on linux 2.6.18 running on two Opteron 2218s.

So first, I measured the raw performance of the MySQL core on my testbed using mysqlslap: 115k queries per second.

$ perl -e 'print "select 1;\n" for 1..10000' > /tmp/select10k.sql && /usr/local/mysql51/bin/mysqlslap --query=/tmp/select10k.sql --socket=/tmp/mysql51.sock --iterations=1 --concurrency=40
Benchmark
        Average number of seconds to run all queries: 3.470 seconds
        Minimum number of seconds to run all queries: 3.470 seconds
        Maximum number of seconds to run all queries: 3.470 seconds
        Number of clients running queries: 40
        Average number of queries per client: 10000

And the throughput of single-row selects against the Q4M storage engine was 76k queries per second.

$ perl -e 'print "select * from test.q4m_t limit 1;\n" for 1..10000' > /tmp/select10k.sql && /usr/local/mysql51/bin/mysqlslap --query=/tmp/select10k.sql --socket=/tmp/mysql51.sock --iterations=1 --concurrency=40
Benchmark
        Average number of seconds to run all queries: 5.282 seconds
        Minimum number of seconds to run all queries: 5.282 seconds
        Maximum number of seconds to run all queries: 5.282 seconds
        Number of clients running queries: 40
        Average number of queries per client: 10000

And finally, the queue consumption speed of Q4M (configure options: --with-mt-pwrite --with-sync=no) was 28k messages per second; with the --with-sync flag set to fsync, it was 20k messages per second. Considering that consuming a single row requires two queries (one for retrieving a row and one for removing it), the numbers seem quite good to me, although further optimization would be possible.

$ MESSAGES=200000 CONCURRENCY=40 DBI='dbi:mysql:test;mysql_socket=/tmp/mysql51.sock' t/05-multireader.t 
1..4
ok 1
ok 2
ok 3
ok 4


Multireader benchmark result:
    Number of messages: 200000
    Number of readers:  40
    Elapsed:            7.040 seconds
    Throughput:         28410.198 mess./sec.

And regarding the question about the raw performance of Q4M, the answer would be that consuming a single row takes about 30 microseconds in the Q4M core with fsync enabled, and about 15 microseconds if only pwrites are issued.
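The deduction presumably goes like this, taking reciprocals of the throughput numbers above (all values rounded):

    MySQL core:         1 / 115k queries/sec.  =  ~8.7 us per query
    2 queries per row:  2 x 8.7 us             = ~17.4 us per consumed row
    --with-sync=no:     1 / 28.4k mess./sec.   = ~35.2 us; minus 17.4 us leaves ~15-18 us in Q4M
    fsync:              1 / 20k mess./sec.     = ~50.0 us; minus 17.4 us leaves ~30-33 us in Q4M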

May 20, 2008

Q4M version 0.5.1 released

I have just released version 0.5.1 of Q4M, a message queue that acts as a pluggable storage engine of MySQL. In this release, I have fixed two bugs that might block table compaction from occurring, or cause an empty result set to be returned when data exists. Thanks to Brian for pointing them out.

Q4M Homepage

PS. If you have installation problems, using the svn version might help; the installation problems in 0.5.1 (a link error on linux/x86_64 and an installation directory problem with the binary distribution of mysql) have been fixed there.

Slides from YAPC::Asia 2008 on MySQL Tuning in Pathtraq

Last Friday I had a chance to give a talk at YAPC::Asia 2008 on the internals of Pathtraq, one of the largest web access stats services in Japan.

The talk covered the techniques we use for compressing data in MySQL tables, our cache architecture, and a MySQL-based message queue (Q4M) that we developed and use.

The slides of the talk are available at http://www.slideshare.net/kazuho/yapcasia-2008-tokyo-pathtraq-building-a-computationcentric-web-service, so please have a look.

April 16, 2008

Embedded SQL in Perl

Today I wrote Filter::SQL, a tiny module that enables embedding SQL in perl.

NAME
    Filter::SQL - embedded SQL for perl

SYNOPSIS
      use Filter::SQL;

      Filter::SQL->dbh(DBI->connect('dbi:...')) or die DBI->errstr;

      EXEC CREATE TABLE t (v int not null);;

      $v = 12345;
      INSERT INTO t (v) VALUES ($v);;
  
      foreach my $row (SELECT * FROM t;) {
          print "v: $row[0]\n";
      }

      if (SELECT ROW COUNT(*) FROM t; == 1) {
          print "1 row in table\n";
      }

After writing the module, people told me about similar modules like SQL::Preproc. Compared to SQL::Preproc, Filter::SQL does not conform to Embedded-SQL, but may be easier to use for ordinary perl programmers. There are also modules like DBIx::Perlish that seem to tackle the same problem from a slightly different direction. Anyway, TMTOWTDI: we have the right to choose what we like, and I hope Filter::SQL will be a good choice for some of us.

March 19, 2008

Q4M version 0.3 released

Today I have released Q4M version 0.3. Changes from the previous version are as follows.

Support for message relaying

Messages submitted to Q4M tables can be forwarded to another Q4M table on a different MySQL server. The transfer is reliable: there will be no message loss or duplication even on a server or network failure. It is also possible to build a message broker on top of the API used for relaying.

Prioritized subscription to multiple tables

Clients can now subscribe to more than one table at once. The subscription priority can be specified, so it is possible to create a priority queue using multiple tables, as sketched below.
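A sketch of such a prioritized subscription, assuming queue_wait() takes the tables as separate arguments in priority order with an optional trailing timeout in seconds, and returns the 1-based index of the table from which a row was received (0 on timeout):

SELECT queue_wait('queue_high', 'queue_low', 60);
-- 1: owning a row of queue_high, 2: owning a row of queue_low, 0: timed out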

For more information, please refer to the Q4M homepage.

January 15, 2008

Q4M

For several years I have been wondering if there was an RDBMS that serves as a message queue as well1. At the end of last year, noticing that MySQL 5.1 with support for pluggable storage engines was entering its RC stage, and that (from a little googling) nobody was creating such a storage engine, I decided to write my own.

Q4M (Queue for MySQL) is a message queue that works as a pluggable storage engine of MySQL 5.1, designed to be robust, fast, and flexible. Development started in late December 2007, and although it is very primitive, it operates quite swiftly.
q4m.31tools.com

Yes, it is primitive and might have stability problems (I do not recommend using Q4M in a production environment), but if you are interested, please give it a try.

1: At least Oracle seems to, but I never had a chance to use it.

October 10, 2007

Swifty 0.06

This is a minor bugfix release from Swifty 0.05 and Cache::Swifty 0.05.

Bugs fixed are as follows:

  • fix build problem on i386 systems
  • only set the do_refresh flag if it is not already set
  • handle EINTR when calling flock
  • add destructor (perl only)

September 27, 2007

KeyedMutex - a mutex for web services

Yesterday, I wrote:

Normally, a cache entry has a single lifetime. The problem is that if a cache entry is read frequently and it takes time to update the entry, a situation known as thundering herd can occur on expiration; i.e. many cache consumers detect the expiration and send the same update request to the backend, causing a performance decline. There are two solutions to the problem:
  • use a proxy that combines identical update requests as a single request
  • initiate an update request prior to expiration, when lifetime being left gets below a certain threshold

Kazuho at Work: Swifty 0.05 and the Thundering Herd

As a complement to the second approach, which I took in Swifty 0.05, I wrote a tiny server that enforces exclusive access to databases. It is not actually a proxy; it works much like a mutex object shared between processes, with a slightly different interface to fit the realities of web services. Here comes the sample code.

Continue reading "KeyedMutex - a mutex for web services" »