On Mon, Jun 27, 2011 at 8:24 AM, Michael Widenius
<monty@askmonty.org> wrote:
<cut>
MARK> Can there be a variant that doesn't do #2? My workload doesn't use
MARK> MyISAM
MARK> and I don't know if #2 is done because of MyISAM. Calling
MARK> close_cached_tables seems like a heavy way to force LOCK TABLEs to be
MARK> unlocked. Any long running queries will cause #2 to block.
>>
>> The reason for 2 is to ensure that all table info is written to
>> disk so that if you do a snapshot or copy of tables, you will get
>> things in a consistent state.
>>
>> This is mostly for MyISAM and other non-transactional tables, but it
>> will also speed things up for InnoDB tables and allow you to copy
>> XtraDB tables from one server to another (if you are using
>> file-per-table tablespaces) without having to take down the server.
>>
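(For context, the consistent-state guarantee described above is what the
usual snapshot pattern relies on. A minimal sketch of that pattern, where
the snapshot step itself is whatever your storage layer provides and the
lvcreate line is only a placeholder:

    -- session 1: block writes and flush table state to disk
    FLUSH TABLES WITH READ LOCK;

    -- while the lock is held, take the snapshot or copy the table
    -- files from a shell; placeholder command:
    --   lvcreate --snapshot --name mysql-snap --size 10G /dev/vg0/mysql

    -- session 1: release the global read lock
    UNLOCK TABLES;
)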
MARK> We are not using this for backup. This doesn't make it safe to copy
MARK> InnoDB/XtraDB tables, as the background IO threads can still do writes
MARK> (flush dirty pages, merge insert buffer pages, purge deleted rows).
This is how InnoDB/XtraDB works right now. However, I assume it should
not be hard to fix XtraDB to flush things completely for tables that
are closed and flushed and not in use by any transaction.
(This is what Aria does today.)
>>
>> It's possible to do a 'FLUSH TABLES FAST WITH READ LOCK' version that
>> would only flush the header of MyISAM tables, which would probably
>> help you, as long as you don't plan to copy any tables to any other
>> server.
>>
>>
MARK> We want a fast version of this to support other admin activities. In this
MARK> case we don't care that it doesn't make it safe to back up MyISAM. Is it
MARK> possible to provide that option, and is it interesting for MariaDB to have
MARK> such an option?
What is relatively easy to do is add the above FAST keyword to
FLUSH TABLES. For MyISAM tables it would do as I describe above; for
other table types it would basically be a no-op, which would solve your
problem while still being useful for others.
I would expect this to be something like 8-16 hours of work. Do you want
me to do a worklog for this?
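If the FAST keyword goes in as discussed, usage would presumably look
like the sketch below. The syntax is only the one proposed in this
thread and is not implemented yet; the behavior (flush MyISAM headers,
no-op for other engines) is as described above.

    -- proposed syntax from this thread; not implemented yet
    FLUSH TABLES FAST WITH READ LOCK;

    -- ... run the admin activity that needs writes stopped ...

    -- release the lock when done
    UNLOCK TABLES;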
Assuming this won't block waiting for running queries to finish, yes, let's proceed with a worklog.