Re: [Maria-developers] Architecture review of MWL#116 "Efficient group commit for binary log"
by Sergei Golubchik 08 Sep '10
Hi, Kristian!
Let's start from the WL description:
> While currently there is no server guarantee to get same commit order
> in engines and binlog (except for the InnoDB prepare_commit_mutex
> hack), there are several reasons why this could be desirable:
...
> - Other backup methods could have similar needs, eg. XtraBackup or
> `mysqldump --single-transaction`, to have consistent commit order
> between binlog and storage engines without having to do FLUSH
> TABLES WITH READ LOCK or similar expensive blocking operation.
> (other backup methods, like LVM snapshot, don't need consistent
> commit order, as they can restore out-of-order commits during crash
> recovery using XA).
I doubt mysqldump --single-transaction cares about physical commit
ordering in logs.
> - If we have consistent commit order, we can think about optimising
> commit to need only one fsync (for binlog); lost commits in storage
> engines can then be recovered from the binlog at crash recovery by
> re-playing against the engine from a particular point in the
> binlog.
Why do you need consistent commit order for that?
> - With consistent commit order, we can get better semantics for START
> TRANSACTION WITH CONSISTENT SNAPSHOT with multi-engine transactions
> (and we could even get it to return also a matching binlog
> position). Currently, this "CONSISTENT SNAPSHOT" can be
> inconsistent among multiple storage engines.
Okay, although it's a corner case.
It would be more important to have consistent reads working across
multiple storage engines without explicit START TRANSACTION WITH
CONSISTENT SNAPSHOT. But it's doable too.
> - In InnoDB, the performance in the presence of hotspots can be
> improved if we can release row locks early in the commit phase, but
> this requires that we release them in the same order as commits in
> the binlog to ensure consistency between master and slaves.
Conceptually InnoDB can release the locks only after the commit was made
persistent.
So, I think it's the same as "optimising commit to need only one
fsync" - if we only sync the binlog, InnoDB can release its locks right
after binlog sync. And the question is the same, why do you need
consistent commit order for that? (I suppose, the answer will be the
same too :)
> - There were some discussions around Galera [1] synchronous
> replication and global transaction ID that it needed consistent
> commit order among participating engines.
> - I believe there could be other applications for guaranteed
> consistent commit order, and that the architecture described in
> this worklog can implement such guarantee with reasonable overhead.
Okay, this much I agree with - there could be applications (Galera,
maybe) that need this consistent commit order. But because it's not
needed in many (if not most) cases, it'd be good to be able to switch
it off if that helps performance.
================ High-Level Specification ===================
...
> This group_log_xid() method also is more efficient, as it avoids some
> inter-thread synchronisation. Since group_log_xid() is serialised, we
> can run it together with all the commit_ordered() method calls and
> need only a single sequential code section. With the log_xid()
> methods, we would need first a sequential part for the
> prepare_ordered() calls, then a parallel part with log_xid() calls (to
> not lose group commit ability for log_xid()), then again a sequential
> part for the commit_ordered() method calls.
How could that work? If you run log_xid() in parallel, then you don't
know in what order xid's are logged - which means you don't know the
commit order in the binlog. And if you don't know it, you cannot have
the same order in the engines.
> The extra synchronisation is needed, as each commit_ordered() call
> will have to wait for log_xid() in one thread (if log_xid() fails then
> commit_ordered() should not be called), and also wait for
> commit_ordered() to finish in all threads handling earlier commits. In
> effect we will need to bounce the execution from one thread to the
> other among all participants in the group commit.
I failed to understand the last paragraph :(
> As a consequence of the group_log_xid() optimisation, handlers must be
> aware that the commit_ordered() call can happen in another thread than
> the one running commit() (so thread local storage is not available).
> This should not be a big issue as the THD is available for storing any
> needed information.
I do not see how this is a consequence of the optimization.
> Since group_log_xid() runs for multiple transactions in a single
> thread, it can not do error reporting (my_error()) as that relies on
> thread local storage. Instead it sets an error code in THD::xid_error,
> and if there is an error then later another method will be called (in
> correct thread context) to actually report the error:
>
> int xid_delayed_error(THD *thd)
>
> The three new methods prepare_ordered(), group_log_xid(), and
> commit_ordered() are optional (as is xid_delayed_error). A storage
> engine or transaction coordinator is free to not implement them if
> they are not needed. In this case there will be no order guarantee for
> the corresponding stage of group commit for that engine. For example,
> InnoDB needs no ordering of the prepare phase, so can omit
> implementing prepare_ordered(); TC_LOG_MMAP needs no ordering at all,
> so does not need to implement any of them.
Hm. handlerton::prepare_ordered() and handlerton::commit_ordered() are
truly optional, but TC_LOG::group_log_xid() is a replacement for
TC_LOG::log_xid().
I wonder if it would be cleaner to remove TC_LOG::log_xid() (from
TC_LOG_MMAP too) completely and not keep the redundant functionality.
> -----------------------------------------------------------------------
> Some possible alternative for this worklog:
>
> - Alternatively, we could eliminate log_xid() and require that all
> transaction coordinators implement group_log_xid() instead, again for
> some moderate simplification.
indeed, that's what I mean
> - At the moment there is no plugin actually using prepare_ordered(),
> so, it could be removed from the design. But it fits in well, is
> efficient to implement, and could be useful later (eg. for the
> requested feature of releasing locks early in InnoDB).
Every time a new API is implemented there is no plugin actually
using it :)
> -----------------------------------------------------------------------
> Some possible follow-up projects after this is implemented:
>
> - Add statistics about how efficient group commit is
> (#fsyncs/#commits in each engine and binlog).
This is so simple that it can be done in this WL.
> - Implement an XtraDB prepare_ordered() methods that can release row
> locks early (Mark Callaghan from Facebook advocates this, but need to
> determine exactly how to do this safely).
>
> - Implement a new crash recovery algorithm that uses the consistent
> commit ordering to need only fsync() for the binlog. At crash
> recovery, any missing transactions in an engine is replayed from the
> correct point in the binlog (this point must be stored
> transactionally inside the engine, as XtraDB already does today).
I'd say it's an important optimization - can you turn it into a WL?
It deserves more than a passing mention at the end of another WL.
Perhaps other follow-up projects could also be turned into WLs.
> - Implement that START TRANSACTION WITH CONSISTENT SNAPSHOT 1) really
> gets a consistent snapshot, with same set of committed and not
> committed transactions in all engines, 2) returns a corresponding
> consistent binlog position. This should be easy by piggybacking on
> the synchronisation implemented for ha_commit_trans().
>
> - Use this in XtraBackup to get consistent binlog position without
> having to block all updates with FLUSH TABLES WITH READ LOCK.
>
================ Low-Level Design ===================
>
> 1. Changes for ha_commit_trans()
>
> The gut of the code for commit is in the function ha_commit_trans()
> (and in commit_one_phase() which is called from it). This must be
> extended to use the new prepare_ordered(), group_log_xid(), and
> commit_ordered() calls.
>
> 1.1 Atomic queue of committing transactions
>
> To keep the right commit order among participants, we put transactions
> into a queue. The operations on the queue are non-locking:
>
> - Insert THD at the head of the queue, and return old queue.
>
> THD *enqueue_atomic(THD *thd)
>
> - Fetch (and delete) the whole queue.
>
> THD *atomic_grab_reverse_queue()
>
> These are simple to implement with atomic compare-and-set. Note that
> there is no ABA problem [2], as we do not delete individual elements
> from the queue, we grab the whole queue and replace it with NULL.
>
> A transaction enters the queue when it does prepare_ordered(). This
> way, the scheduling order for prepare_ordered() calls is what
> determines the sequence in the queue and effectively the commit order.
How can that work? If you insert into a queue without a lock, you
cannot know what order of commits you end up with. So, you cannot have
the same order as you've got for prepare_ordered. Besides, if
prepare_ordered is, precisely, *ordered*, it must be done under a lock.
I see two possibilities for doing prepare_ordered and enqueue:
1.
pthread_mutex_lock(queue_lock);
prepare_ordered(thd);
enqueue(thd);
pthread_mutex_unlock(queue_lock);
in this case prepare_ordered calls are strictly serialized by a mutex.
There is no need for non-locking enqueue_atomic, because the same mutex
must be used to define the queue ordering.
2.
enqueue_atomic();
... some time later ...
for (thd=queue_head; thd; thd= thd->next_in_queue)
prepare_ordered(thd);
in this case the insert into the queue is non-locking, so the order of
elements in the queue is undefined. Later on, you iterate the queue and
call prepare_ordered in queue order. In this implementation all
prepare_ordered and commit_ordered calls will likely be made from one
thread, not from the connection thread of the transaction.
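For concreteness, here is a minimal C sketch of approach 1, plus the
grab-and-reverse step the leader would do later. THD, prepare_ordered()
and the field names are simplified stand-ins, not the actual MariaDB
code:

#include <pthread.h>
#include <stddef.h>

typedef struct THD_ THD;
struct THD_ {
  THD *next_commit_ordered;        /* intrusive queue link */
  /* ... the rest of the connection state ... */
};

static pthread_mutex_t LOCK_prepare_ordered= PTHREAD_MUTEX_INITIALIZER;
static THD *commit_queue;          /* head == most recently enqueued THD */

static void prepare_ordered(THD *thd) { (void) thd; /* engine hook, stub */ }

/* Returns non-zero if thd became the group leader (the queue was empty). */
int enqueue_for_group_commit(THD *thd)
{
  int is_leader;
  pthread_mutex_lock(&LOCK_prepare_ordered);
  prepare_ordered(thd);            /* strictly serialised by the mutex */
  thd->next_commit_ordered= commit_queue;
  is_leader= (commit_queue == NULL);
  commit_queue= thd;
  pthread_mutex_unlock(&LOCK_prepare_ordered);
  return is_leader;
}

/* The leader grabs the whole queue under the same mutex and reverses it,
   so it can iterate in oldest-first (= prepare_ordered) order outside
   the lock, in parallel with new transactions being enqueued. */
THD *grab_reverse_queue(void)
{
  THD *q, *rev= NULL;
  pthread_mutex_lock(&LOCK_prepare_ordered);
  q= commit_queue;
  commit_queue= NULL;
  pthread_mutex_unlock(&LOCK_prepare_ordered);
  while (q)
  {
    THD *next= q->next_commit_ordered;
    q->next_commit_ordered= rev;
    rev= q;
    q= next;
  }
  return rev;
}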
==========
Note the detail: for START TRANSACTION WITH CONSISTENT SNAPSHOT and
inter-engine consistency to work, all connection threads must use the
same commit order across all engines (similar to deadlock avoidance that
MySQL uses - where all threads lock tables in the same order). A simple
way to do it would be to change trans_register_ha() and instead of adding
participating engines to a list, mark them in an array:
bool engines[MAX_HA];
trans_register_ha() {
...
engines[hton->slot]=1;
...
}
and in ha_prepare_ordered() you call hton->prepare_ordered()
for all engines that have a non-zero value in the engines[] array.
To make the above pseudo-code simpler, I've used an array of bool,
but it would be better to have an array of void* pointers and let
the engine store any transaction-specific data there, just like it
can use thd->ha_data[] and savepoint memory.
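A rough sketch of that registration idea, with a void* slot per engine
instead of a plain bool so the engine can keep per-transaction data
there; the struct layouts and names are simplified stand-ins, not the
server's real handlerton/transaction structures:

#include <stddef.h>

#define MAX_HA 64

typedef struct handlerton_ handlerton;
struct handlerton_ {
  unsigned int slot;                       /* index assigned at hton init */
  void (*prepare_ordered)(handlerton *hton, void *trx_data);
};

typedef struct {
  void *engines[MAX_HA];                   /* NULL = not participating */
} trans_participants;

void trans_register_ha(trans_participants *trans, handlerton *hton,
                       void *trx_data)
{
  /* ... existing registration work ... */
  trans->engines[hton->slot]= trx_data;    /* mark hton as a participant */
}

void ha_prepare_ordered(trans_participants *trans, handlerton **hton_by_slot)
{
  for (unsigned int i= 0; i < MAX_HA; i++)
  {
    handlerton *hton= hton_by_slot[i];
    if (trans->engines[i] && hton && hton->prepare_ordered)
      hton->prepare_ordered(hton, trans->engines[i]);
  }
}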
> The queue is grabbed by the code doing group_log_xid() and
> commit_ordered() calls. The queue is passed directly to
> group_log_xid(), and afterwards iterated to do individual
> commit_ordered() calls.
>
> Using a lock-free queue allows prepare_ordered() (for one transaction)
> to run in parallel with commit_ordered (in another transaction),
> increasing potential parallelism.
Nope, with approach 1 (with a mutex) as above, you can have this too.
Simply:
... on commit ...
pthread_mutex_lock(queue_lock);
commit_queue= queue;
queue=NULL;
pthread_mutex_unlock(queue_lock);
... now you can iterate the commit_queue in parallel ...
... with preparing other transactions ...
actually you'll have to do it not on commit but before group_log_xid().
> The queue is simply a linked list of THD objects, linked through a
> THD::next_commit_ordered field. Since we add at the head of the queue,
> the list is actually in reverse order, so must be reversed when we
> grab and delete it.
Better to have a list of transaction objects, if possible - as
eventually we'll need to decouple st_transaction and THD to be able to
put transactions on hold and continue them in another thread, as
needed for external XA.
We aren't doing it now, but if possible, it'd be better to avoid creating
new dependencies between THD and st_transaction.
> The reason that enqueue_atomic() returns the old queue is so that we
> can check if an insert goes to the head of the queue. The thread at
> the head of the queue will do the sequential part of group commit for
> everyone.
>
> 1.2 Locks
>
> 1.2.1 Global LOCK_prepare_ordered
>
> This lock is taken to serialise calls to prepare_ordered(). Note that
> effectively, the commit order is decided by the order in which threads
> obtain this lock.
Okay, in this case it's approach 1: the enqueue must happen under the
same lock, and there's no need for a lock-free implementation.
> 1.2.2 Global LOCK_group_commit and COND_group_commit
>
> This lock is used to protect the serial part of group commit. It is
> taken around the code where we grab the queue, call group_log_xid() on
> the queue, and call commit_ordered() on each element of the queue, to
> make sure they happen serialised and in consistent order. It also
> protects the variable group_commit_queue_busy, which is used when not
> using group_log_xid() to delay running over a new queue until the
> first queue is completely done.
>
> 1.2.3 Global LOCK_commit_ordered
>
> This lock is taken around calls to commit_ordered(), to ensure they
> happen serialised.
If you have only one thread iterating the queue and calling
commit_ordered(), the calls will always be serialized. Why do you need
a mutex?
> 1.2.4 Per-thread thd->LOCK_commit_ordered and thd->COND_commit_ordered
>
> This lock protects the thd->group_commit_ready variable, as well as
> the condition variable used to wake up threads after log_xid() and
> commit_ordered() finishes.
>
> 1.2.5 Global LOCK_group_commit_queue
>
> This is only used on platforms with no native compare-and-set
> operations, to make the queue operations atomic.
>
> 1.3 Commit algorithm.
>
> This is the basic algorithm, simplified by
>
> - omitting some error handling
>
> - omitting looping over all handlers when invoking handler methods
>
> - omitting some possible optimisations when not all calls needed (see
> next section).
>
> - Omitting the case where no group_log_xid() is used, see below.
>
> ---- BEGIN ALGORITHM ----
> ht->prepare()
>
> // Call prepare_ordered() and enqueue in correct commit order
> lock(LOCK_prepare_ordered)
> ht->prepare_ordered()
> old_queue= enqueue_atomic(thd)
So, indeed, you call enqueue_atomic() under a mutex. Why do you want a
lock-free queue implementation?
> thd->group_commit_ready= FALSE
> is_group_commit_leader= (old_queue == NULL)
> unlock(LOCK_prepare_ordered)
>
> if (is_group_commit_leader)
>
> // The first in queue handles group commit for everyone
>
> lock(LOCK_group_commit)
> // Wait while queue is busy, see below for when this occurs
> while (group_commit_queue_busy)
> cond_wait(COND_group_commit)
>
> // Grab and reverse the queue to get correct order of transactions
> queue= atomic_grab_reverse_queue()
>
> // This call will set individual error codes in thd->xid_error
> // It also sets the cookie for unlog() in thd->xid_cookie
> group_log_xid(queue)
>
> lock(LOCK_commit_ordered)
> for (other IN queue)
> if (!other->xid_error)
> ht->commit_ordered()
> unlock(LOCK_commit_ordered)
commit_ordered() is called from one thread, and it's protected
by LOCK_group_commit. There's no need for LOCK_commit_ordered.
> unlock(LOCK_group_commit)
>
> // Now we are done, so wake up all the others.
> for (other IN TAIL(queue))
> lock(other->LOCK_commit_ordered)
> other->group_commit_ready= TRUE
> cond_signal(other->COND_commit_ordered)
> unlock(other->LOCK_commit_ordered)
Why do you want one condition per thread?
You can use one condition per queue, such as:
else
// If not the leader, just wait until leader did the work for us.
lock(old_queue->LOCK_commit_ordered)
while (!old_queue->group_commit_ready)
cond_wait(old_queue->LOCK_commit_ordered, old_queue->COND_commit_ordered)
unlock(old_queue->LOCK_commit_ordered)
this way you'll need just one broadcast to wake up everybody.
> else
> // If not the leader, just wait until leader did the work for us.
> lock(thd->LOCK_commit_ordered)
> while (!thd->group_commit_ready)
> cond_wait(thd->LOCK_commit_ordered, thd->COND_commit_ordered)
> unlock(other->LOCK_commit_ordered)
This doesn't work, I'm afraid. The leader can complete its job and
signal all threads in a queue before some of the queue threads have
locked their thd->LOCK_commit_ordered mutex. These threads will miss the
wakeup signal.
To close all loopholes, I think, you can use my suggestion above - with
one broadcast and waiting on old_queue->COND_commit_ordered - and every
thread, including the leader, must lock the leader->LOCK_commit_ordered
before unlocking LOCK_prepare_ordered. That is, it'll be
// Call prepare_ordered() and enqueue in correct commit order
lock(LOCK_prepare_ordered)
ht->prepare_ordered()
old_queue= queue_head
enqueue(thd)
leader= queue_head
lock(leader->LOCK_commit_ordered)
unlock(LOCK_prepare_ordered)
if (leader != thd)
while (!leader->group_commit_done)
cond_wait(leader->LOCK_commit_ordered, leader->COND_commit_ordered)
else
leader->group_commit_done= false;
.... the rest of your group commit code
broadcast(leader->COND_commit_ordered);
unlock(leader->LOCK_commit_ordered);
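A self-contained sketch of just the wait/wakeup part of that
suggestion - followers sleep on the leader's condition variable and
done-flag, and one broadcast wakes the whole group. The interaction
with LOCK_prepare_ordered is left out here, and the names are
illustrative, not the real server identifiers:

#include <pthread.h>
#include <stdbool.h>

typedef struct {
  pthread_mutex_t LOCK_commit_ordered;
  pthread_cond_t  COND_commit_ordered;
  bool            group_commit_done;
} group_leader;

/* Follower: wait until the leader has run group_log_xid() and
   commit_ordered() for every transaction in the queue. */
void wait_for_group_commit(group_leader *leader)
{
  pthread_mutex_lock(&leader->LOCK_commit_ordered);
  while (!leader->group_commit_done)
    pthread_cond_wait(&leader->COND_commit_ordered,
                      &leader->LOCK_commit_ordered);
  pthread_mutex_unlock(&leader->LOCK_commit_ordered);
}

/* Leader: do the serialised group commit work, then wake every follower
   with a single broadcast. */
void finish_group_commit(group_leader *leader)
{
  /* ... group_log_xid() on the queue, commit_ordered() per transaction ... */
  pthread_mutex_lock(&leader->LOCK_commit_ordered);
  leader->group_commit_done= true;
  pthread_cond_broadcast(&leader->COND_commit_ordered);
  pthread_mutex_unlock(&leader->LOCK_commit_ordered);
}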
> // Finally do any error reporting now that we're back in own thread.
> if (thd->xid_error)
> xid_delayed_error(thd)
> else
> ht->commit(thd)
> unlog(thd->xid_cookie, thd->xid)
> ---- END ALGORITHM ----
...
I've skipped the algorithm that uses old log_xid()
...
> 2. Binlog code changes (log.cc)
>
> The bulk of the work needed for the binary log is to extend the code
> to allow group commit to the log. Unlike InnoDB/XtraDB, there is no
> existing support inside the binlog code for group commit.
>
> The existing code runs most of the write + fsync to the binary log
> under the global LOCK_log mutex, preventing any group commit.
>
> To enable group commit, this code must be split into two parts:
>
> - one part that runs per transaction, re-writing the embedded event
> positions for the correct offset, and writing this into the in-memory
> log cache.
>
> - another part that writes a set of transactions to the disk, and
> runs fsync().
>
> Then in group_log_xid(), we can run the first part in a loop over all
> the transactions in the passed-in queue, and run the second part only
> once.
>
> The binlog code also has other code paths that write into the binlog,
> eg. non-transactional statements. These have to be adapted also to
> work with the new code.
>
> In order to get some group commit facility for these also, we change
> that part of the code in a similar way to ha_commit_trans. We keep
> another, binlog-internal queue of such non-transactional binlog
> writes, and such writes queue up here before sleeping on the LOCK_log
> mutex. Once a thread obtains the LOCK_log, it loops over the queue for
> the fast part, and does the slow part once, then finally wakes up the
> others in the queue.
I wouldn't bother implementing any grouping for non-transactional
engines. They aren't crash safe, so practically users of these engines
don't need or use --sync-binlog anyway.
> I think the group commit is mostly useful in transactional workloads
> anyway (non-transactional engines will lose data anyway in case of
> crash, so why fsync() after each transaction?)
Yes, exactly
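For illustration, a simplified sketch of the two-part split described
in section 2 above: a cheap per-transaction step that copies each
transaction's log cache, and one write+fsync for the whole group. The
helpers and types here are hypothetical stand-ins, not the real log.cc
functions:

typedef struct THD_ THD;
struct THD_ {
  THD *next_commit_ordered;
  /* ... per-transaction binlog cache, XID, etc. ... */
};

/* Hypothetical helpers standing in for the real binlog routines. */
static void binlog_write_transaction_cache(THD *thd)
{ (void) thd; /* fix up embedded event positions, append the cache */ }
static void binlog_flush_and_sync(void)
{ /* one write() of the accumulated group, then one fsync() */ }

/* group_log_xid(): run the per-transaction part for every queued
   transaction, then pay for the disk write and fsync only once. */
void group_log_xid(THD *queue)
{
  THD *thd;
  for (thd= queue; thd; thd= thd->next_commit_ordered)
    binlog_write_transaction_cache(thd);
  binlog_flush_and_sync();
}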
> 3. XtraDB changes (ha_innodb.cc)
>
> The changes needed in XtraDB are comparatively simple, as XtraDB
> already implements group commit, it just needs to be enabled with the
> new commit_ordered() call.
>
> The existing commit() method already is logically in two parts. The
> first part runs under the prepare_commit_mutex() and must be run in
> same order as binlog commit. This part needs to be moved to
> commit_ordered(). The second part runs after releasing
> prepare_commit_mutex and does transaction log write+fsync; it can
> remain.
>
> Then the prepare_commit_mutex is removed (and the
> enable_unsafe_group_commit XtraDB option to disable it).
>
> There are two asserts that check that the thread running the first
> part of XtraDB commit is the same as the thread running the other
> operations for the transaction. These have to be removed (as
> commit_ordered() can run in a different thread). Also an error
> reporting with sql_print_error() has to be delayed until commit()
> time.
>
> 4. Proof-of-concept implementation
>
> There is a proof-of-concept implementation of this architecture, in
> the form of a quilt patch series [3].
>
> A quick benchmark was done, with sync_binlog=1 and
> innodb_flush_log_at_trx_commit=1. 64 parallel threads doing single-row
> transactions against one table.
>
> Without the patch, we get only 25 queries per second.
>
> With the patch, we get 650 queries per second.
Please do RQG transactional tests (Philip can help with that)
to verify that ACID wasn't compromised.
> 6. Alternative implementations
>
> - The binlog code maintains its own extra atomic transaction queue to
> handle non-transactional commits in a good way together with
> transactional (with respect to group commit). Alternatively, we could
> ignore this issue and just give up on group commit for
> non-transactional statements, for some code simplifications.
Right, that's what I suggested above.
Seems like whatever I write, you have the same later in the WL :)
> - It would probably be a good idea to implement
> TC_LOG_MMAP::group_log_xid() (should not be hard).
Agree (I wrote that above, too :)
Regards,
Sergei

[Maria-developers] WL#140: SKIP LOCKED
07 Sep '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: SKIP LOCKED
CREATION DATE..: Tue, 07 Sep 2010, 19:52
SUPERVISOR.....:
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Server-RawIdeaBin
TASK ID........: 140 (http://askmonty.org/worklog/?tid=140)
VERSION........: WorkLog-4.0
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
DESCRIPTION:
There are times when parallel thread scalability can be improved with SELECT
.... FOR UPDATE SKIP LOCKED. This feature, which is part of Oracle's RDBMS
implementation, prevents all the threads from being held up by a single thread
that has locked a row.
A perfect example is a queue with multiple consumers, where the queue has the
following columns: ID int autoincrement, State varchar(10), Work varchar(255).
A consumer of the queue would run the following pseudocode to grab a queue
entry, reserve it, do the work, and delete the entry after the work completes:
begin
select ID, Work from Queue order by id limit 1 for update skip locked
<do work>
delete from Queue where ID=<ID from select>
commit
Without SKIP LOCKED the pseudocode has to be:
begin
select ID, State, Work from Queue where State='Waiting' order by id limit 1 for update
update Queue set State='Working' where ID=<ID from select>
commit
# This commit is needed to release any other consumers waiting for me to release the lock
begin
# This could take quite a while
<do work>
delete from Queue where ID=<ID from select>
commit
Without SKIP LOCKED, only one consumer thread at a time can be running from
the initial select to the first commit. This limits the scalability of the
queue. With SKIP LOCKED none of the consumer threads will be waiting for a lock
to be released by another consumer thread.
Anker
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)

[Maria-developers] WL#139 New (by Sergei): UNIQUE CONSTRAINT for BLOB
by worklog-noreply@askmonty.org 07 Sep '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: UNIQUE CONSTRAINT for BLOB
CREATION DATE..: Tue, 07 Sep 2010, 19:50
SUPERVISOR.....:
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Server-RawIdeaBin
TASK ID........: 139 (https://askmonty.org/worklog/?tid=139)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
DESCRIPTION:
The feature from http://forge.mysql.com/worklog/task.php?id=882 that somehow got
stuck for years and was never implemented.
Add support for unique constraints of arbitrary length - not limited by the max
key length of the engine.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)

Re: [Maria-developers] [Commits] Rev 2816: Fixed second part of the LP BUG#615760 - incorrect parameters of the temporary heap table index (unique & nulls are equal). in file:///home/bell/maria/bzr/work-maria-5.3-lb615760/
by Sergei Golubchik 07 Sep '10
Hi, Sanja!
On Sep 07, sanja(a)askmonty.org wrote:
> At file:///home/bell/maria/bzr/work-maria-5.3-lb615760/
>
> ------------------------------------------------------------
> revno: 2816
> revision-id: sanja(a)askmonty.org-20100907061732-yyuwtfg0ucyqiu8j
> parent: sanja(a)askmonty.org-20100906123424-2sco3ghittvidm6l
> committer: sanja(a)askmonty.org
> branch nick: work-maria-5.3-lb615760
> timestamp: Tue 2010-09-07 09:17:32 +0300
> message:
> Fixed second part of the LP BUG#615760 - incorrect parameters of the
> temporary heap table index (unique & nulls are equal).
> === modified file 'sql/sql_expression_cache.cc'
> --- a/sql/sql_expression_cache.cc 2010-09-06 12:34:24 +0000
> +++ b/sql/sql_expression_cache.cc 2010-09-07 06:17:32 +0000
I cannot review that change, I don't know the sql_expression_cache code :(
Anyway, you asked me to look at the "incorrect heap table index flags"
changes only.
> === modified file 'sql/table.cc'
> --- a/sql/table.cc 2010-07-10 10:37:30 +0000
> +++ b/sql/table.cc 2010-09-07 06:17:32 +0000
> @@ -5191,7 +5191,7 @@
> keyinfo->usable_key_parts= keyinfo->key_parts = key_parts;
> keyinfo->key_length=0;
> keyinfo->algorithm= HA_KEY_ALG_UNDEF;
> - keyinfo->flags= HA_GENERATED_KEY;
> + keyinfo->flags= HA_GENERATED_KEY | HA_NOSAME;
> sprintf(buf, "key%i", key);
> if (!(keyinfo->name= strdup_root(&mem_root, buf)))
> return TRUE;
> @@ -5230,6 +5230,7 @@
> {
> key_part_info->store_length+= HA_KEY_NULL_LENGTH;
> keyinfo->key_length+= HA_KEY_NULL_LENGTH;
> + keyinfo->flags|= HA_NULL_ARE_EQUAL; // def. that NULL == NULL
> }
> if ((*reg_field)->type() == MYSQL_TYPE_BLOB ||
> (*reg_field)->real_type() == MYSQL_TYPE_VARCHAR)
Difficult to say.
Sure, temporary tables should have keyinfo->flags=HA_NOSAME and
keyinfo->flags|= HA_NULL_ARE_EQUAL. You can see that in
create_tmp_table(). HA_GENERATED_KEY has no effect, so it doesn't matter
if you specify it or not.
But I don't know what the requirements are for temporary tables in your
subquery cache code.
Unrelated question - why did you duplicate the creation of temporary
tables? I understand that you may've needed something slightly different
from create_tmp_table(), but then you should've changed
create_tmp_table() to use your new code, TABLE::add_tmp_key() etc.
By the way, if you had done that (reused the code, not duplicated it)
this bug would've been impossible - many tests in the test suite would
have failed at once if all temporary tables had non-unique indexes.
Regards,
Sergei

[Maria-developers] Rev 2815: Fixed LP BUG#615760: Check on double cache assignment added into the trensformation methods. in file:///home/bell/maria/bzr/work-maria-5.3-lb615760/
by sanja@askmonty.org 05 Sep '10
At file:///home/bell/maria/bzr/work-maria-5.3-lb615760/
------------------------------------------------------------
revno: 2815
revision-id: sanja(a)askmonty.org-20100905193612-1fm5qj6ga4525k5l
parent: sanja(a)askmonty.org-20100901144241-0b04wm4z328tmwtq
committer: sanja(a)askmonty.org
branch nick: work-maria-5.3-lb615760
timestamp: Sun 2010-09-05 22:36:12 +0300
message:
Fixed LP BUG#615760: Check on double cache assignment added into the trensformation methods.
Cache parameters print added in EXPLAIN EXTENDED output.
=== modified file 'mysql-test/r/compare.result'
--- a/mysql-test/r/compare.result 2010-08-31 13:16:10 +0000
+++ b/mysql-test/r/compare.result 2010-09-05 19:36:12 +0000
@@ -88,7 +88,7 @@
Warnings:
Note 1276 Field or reference 'test.t2.a' of SELECT #2 was resolved in SELECT #1
Note 1276 Field or reference 'test.t2.a' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`a` AS `a`,<expr_cache>((select count(0) from `test`.`t1` where ((`test`.`t1`.`b` = `test`.`t2`.`a`) and (concat(`test`.`t1`.`b`,`test`.`t1`.`c`) = concat('0',`test`.`t2`.`a`,'01'))))) AS `x` from `test`.`t2` order by `test`.`t2`.`a`
+Note 1003 select `test`.`t2`.`a` AS `a`,<expr_cache><`test`.`t2`.`a`>((select count(0) from `test`.`t1` where ((`test`.`t1`.`b` = `test`.`t2`.`a`) and (concat(`test`.`t1`.`b`,`test`.`t1`.`c`) = concat('0',`test`.`t2`.`a`,'01'))))) AS `x` from `test`.`t2` order by `test`.`t2`.`a`
DROP TABLE t1,t2;
CREATE TABLE t1 (a TIMESTAMP);
INSERT INTO t1 VALUES (NOW()),(NOW()),(NOW());
=== modified file 'mysql-test/r/explain.result'
--- a/mysql-test/r/explain.result 2010-08-31 13:16:10 +0000
+++ b/mysql-test/r/explain.result 2010-09-05 19:36:12 +0000
@@ -224,7 +224,7 @@
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 2 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.t1.c' of SELECT #2 was resolved in SELECT #1
-Note 1003 select <expr_cache>((select 1 from `test`.`t2` where (`test`.`t2`.`d` = NULL))) AS `(SELECT 1 FROM t2 WHERE d = c)` from `test`.`t1`
+Note 1003 select <expr_cache><NULL>((select 1 from `test`.`t2` where (`test`.`t2`.`d` = NULL))) AS `(SELECT 1 FROM t2 WHERE d = c)` from `test`.`t1`
DROP TABLE t1, t2;
#
# Bug #48419: another explain crash..
=== modified file 'mysql-test/r/group_by.result'
--- a/mysql-test/r/group_by.result 2010-08-31 13:16:10 +0000
+++ b/mysql-test/r/group_by.result 2010-09-05 19:36:12 +0000
@@ -1742,7 +1742,7 @@
2 DEPENDENT SUBQUERY NULL NULL NULL NULL NULL NULL NULL NULL No tables used
Warnings:
Note 1276 Field or reference 'test.t1.a' of SELECT #2 was resolved in SELECT #1
-Note 1003 select <expr_cache>((select `test`.`t1`.`a`)) AS `aa`,count(distinct `test`.`t1`.`b`) AS `COUNT(DISTINCT b)` from `test`.`t1` group by (<expr_cache>((select `test`.`t1`.`a`)) + 0)
+Note 1003 select <expr_cache><`test`.`t1`.`a`>((select `test`.`t1`.`a`)) AS `aa`,count(distinct `test`.`t1`.`b`) AS `COUNT(DISTINCT b)` from `test`.`t1` group by (<expr_cache><`test`.`t1`.`a`>((select `test`.`t1`.`a`)) + 0)
EXPLAIN EXTENDED
SELECT (SELECT t1.a) aa, COUNT(DISTINCT b) FROM t1 GROUP BY -aa;
id select_type table type possible_keys key key_len ref rows filtered Extra
@@ -1750,7 +1750,7 @@
2 DEPENDENT SUBQUERY NULL NULL NULL NULL NULL NULL NULL NULL No tables used
Warnings:
Note 1276 Field or reference 'test.t1.a' of SELECT #2 was resolved in SELECT #1
-Note 1003 select <expr_cache>((select `test`.`t1`.`a`)) AS `aa`,count(distinct `test`.`t1`.`b`) AS `COUNT(DISTINCT b)` from `test`.`t1` group by -(<expr_cache>((select `test`.`t1`.`a`)))
+Note 1003 select <expr_cache><`test`.`t1`.`a`>((select `test`.`t1`.`a`)) AS `aa`,count(distinct `test`.`t1`.`b`) AS `COUNT(DISTINCT b)` from `test`.`t1` group by -(<expr_cache><`test`.`t1`.`a`>((select `test`.`t1`.`a`)))
# should return only one record
SELECT (SELECT tt.a FROM t1 tt LIMIT 1) aa, COUNT(DISTINCT b) FROM t1
GROUP BY aa;
=== modified file 'mysql-test/r/subselect.result'
--- a/mysql-test/r/subselect.result 2010-08-31 13:16:10 +0000
+++ b/mysql-test/r/subselect.result 2010-09-05 19:36:12 +0000
@@ -52,7 +52,7 @@
Warnings:
Note 1276 Field or reference 'b.a' of SELECT #3 was resolved in SELECT #1
Note 1276 Field or reference 'b.a' of SELECT #3 was resolved in SELECT #1
-Note 1003 select 1 AS `1` from (select 1 AS `a`) `b` having (<expr_cache>((select '1')) = 1)
+Note 1003 select 1 AS `1` from (select 1 AS `a`) `b` having (<expr_cache><'1'>((select '1')) = 1)
SELECT 1 FROM (SELECT 1 as a) as b HAVING (SELECT a)=1;
1
1
@@ -316,7 +316,7 @@
Warnings:
Note 1276 Field or reference 'test.t2.a' of SELECT #2 was resolved in SELECT #1
Note 1276 Field or reference 'test.t2.a' of SELECT #3 was resolved in SELECT #1
-Note 1003 select <expr_cache>((select '2' from `test`.`t1` where ('2' = `test`.`t2`.`a`) union select `test`.`t5`.`a` from `test`.`t5` where (`test`.`t5`.`a` = `test`.`t2`.`a`))) AS `(select a from t1 where t1.a=t2.a union select a from t5 where t5.a=t2.a)`,`test`.`t2`.`a` AS `a` from `test`.`t2`
+Note 1003 select <expr_cache><`test`.`t2`.`a`>((select '2' from `test`.`t1` where ('2' = `test`.`t2`.`a`) union select `test`.`t5`.`a` from `test`.`t5` where (`test`.`t5`.`a` = `test`.`t2`.`a`))) AS `(select a from t1 where t1.a=t2.a union select a from t5 where t5.a=t2.a)`,`test`.`t2`.`a` AS `a` from `test`.`t2`
select (select a from t1 where t1.a=t2.a union all select a from t5 where t5.a=t2.a), a from t2;
ERROR 21000: Subquery returns more than 1 row
create table t6 (patient_uq int, clinic_uq int, index i1 (clinic_uq));
@@ -334,7 +334,7 @@
2 DEPENDENT SUBQUERY t7 eq_ref PRIMARY PRIMARY 4 test.t6.clinic_uq 1 100.00 Using index
Warnings:
Note 1276 Field or reference 'test.t6.clinic_uq' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t6`.`patient_uq` AS `patient_uq`,`test`.`t6`.`clinic_uq` AS `clinic_uq` from `test`.`t6` where <expr_cache>(exists(select 1 from `test`.`t7` where (`test`.`t7`.`uq` = `test`.`t6`.`clinic_uq`)))
+Note 1003 select `test`.`t6`.`patient_uq` AS `patient_uq`,`test`.`t6`.`clinic_uq` AS `clinic_uq` from `test`.`t6` where <expr_cache><`test`.`t6`.`clinic_uq`>(exists(select 1 from `test`.`t7` where (`test`.`t7`.`uq` = `test`.`t6`.`clinic_uq`)))
select * from t1 where a= (select a from t2,t4 where t2.b=t4.b);
ERROR 23000: Column 'a' in field list is ambiguous
drop table t1,t2,t3;
@@ -745,7 +745,7 @@
3 DEPENDENT UNION NULL NULL NULL NULL NULL NULL NULL NULL No tables used
NULL UNION RESULT <union2,3> ALL NULL NULL NULL NULL NULL NULL
Warnings:
-Note 1003 select `test`.`t2`.`id` AS `id` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`id`,<exists>(select 1 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(1)) union select 3 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(3)))))
+Note 1003 select `test`.`t2`.`id` AS `id` from `test`.`t2` where <expr_cache><`test`.`t2`.`id`>(<in_optimizer>(`test`.`t2`.`id`,<exists>(select 1 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(1)) union select 3 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(3)))))
SELECT * FROM t2 WHERE id IN (SELECT 5 UNION SELECT 3);
id
SELECT * FROM t2 WHERE id IN (SELECT 5 UNION SELECT 2);
@@ -893,7 +893,7 @@
1 PRIMARY t1 index NULL PRIMARY 4 NULL 4 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery a a 5 func 2 100.00 Using index
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`a`) in t2 on a checking NULL having <is_not_null_test>(`test`.`t2`.`a`))))) AS `t1.a in (select t2.a from t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache><`test`.`t1`.`a`>(<in_optimizer>(`test`.`t1`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`a`) in t2 on a checking NULL having <is_not_null_test>(`test`.`t2`.`a`))))) AS `t1.a in (select t2.a from t2)` from `test`.`t1`
CREATE TABLE t3 (a int(11) default '0');
INSERT INTO t3 VALUES (1),(2),(3);
SELECT t1.a, t1.a in (select t2.a from t2,t3 where t3.a=t2.a) FROM t1;
@@ -908,7 +908,7 @@
2 DEPENDENT SUBQUERY t2 ref_or_null a a 5 func 2 100.00 Using index
2 DEPENDENT SUBQUERY t3 ALL NULL NULL NULL NULL 3 100.00 Using where; Using join buffer
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t2`.`a`) and ((<cache>(`test`.`t1`.`a`) = `test`.`t2`.`a`) or isnull(`test`.`t2`.`a`))) having <is_not_null_test>(`test`.`t2`.`a`)))) AS `t1.a in (select t2.a from t2,t3 where t3.a=t2.a)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache><`test`.`t1`.`a`>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t2`.`a`) and ((<cache>(`test`.`t1`.`a`) = `test`.`t2`.`a`) or isnull(`test`.`t2`.`a`))) having <is_not_null_test>(`test`.`t2`.`a`)))) AS `t1.a in (select t2.a from t2,t3 where t3.a=t2.a)` from `test`.`t1`
drop table t1,t2,t3;
create table t1 (a float);
select 10.5 IN (SELECT * from t1 LIMIT 1);
@@ -1469,25 +1469,25 @@
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache><`test`.`t1`.`s1`>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 = ANY (SELECT s1 FROM t2) from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))) AS `s1 = ANY (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,<expr_cache><`test`.`t1`.`s1`>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))) AS `s1 = ANY (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 <> ALL (SELECT s1 FROM t2) from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 <> ALL (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache><`test`.`t1`.`s1`>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 <> ALL (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2') from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Using where; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL where (`test`.`t2`.`s1` < 'a2') having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2')` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache><`test`.`t1`.`s1`>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL where (`test`.`t2`.`s1` < 'a2') having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2')` from `test`.`t1`
drop table t1,t2;
create table t2 (a int, b int);
create table t3 (a int);
@@ -1742,14 +1742,14 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 12 100.00 Using where
2 DEPENDENT SUBQUERY t1 unique_subquery PRIMARY PRIMARY 4 func 1 100.00 Using index; Using where
Warnings:
-Note 1003 select `test`.`t1`.`id` AS `id`,`test`.`t1`.`text` AS `text` from `test`.`t1` where (not(<expr_cache>(<in_optimizer>(`test`.`t1`.`id`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`id`) in t1 on PRIMARY where ((`test`.`t1`.`id` < 8) and (<cache>(`test`.`t1`.`id`) = `test`.`t1`.`id`))))))))
+Note 1003 select `test`.`t1`.`id` AS `id`,`test`.`t1`.`text` AS `text` from `test`.`t1` where (not(<expr_cache><`test`.`t1`.`id`>(<in_optimizer>(`test`.`t1`.`id`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`id`) in t1 on PRIMARY where ((`test`.`t1`.`id` < 8) and (<cache>(`test`.`t1`.`id`) = `test`.`t1`.`id`))))))))
explain extended select * from t1 as tt where not exists (select id from t1 where id < 8 and (id = tt.id or id is null) having id is not null);
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY tt ALL NULL NULL NULL NULL 12 100.00 Using where
2 DEPENDENT SUBQUERY t1 eq_ref PRIMARY PRIMARY 4 test.tt.id 1 100.00 Using where; Using index
Warnings:
Note 1276 Field or reference 'test.tt.id' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`tt`.`id` AS `id`,`test`.`tt`.`text` AS `text` from `test`.`t1` `tt` where (not(<expr_cache>(exists(select `test`.`t1`.`id` from `test`.`t1` where ((`test`.`t1`.`id` < 8) and (`test`.`t1`.`id` = `test`.`tt`.`id`)) having (`test`.`t1`.`id` is not null)))))
+Note 1003 select `test`.`tt`.`id` AS `id`,`test`.`tt`.`text` AS `text` from `test`.`t1` `tt` where (not(<expr_cache><`test`.`tt`.`id`>(exists(select `test`.`t1`.`id` from `test`.`t1` where ((`test`.`t1`.`id` < 8) and (`test`.`t1`.`id` = `test`.`tt`.`id`)) having (`test`.`t1`.`id` is not null)))))
insert into t1 (id, text) values (1000, 'text1000'), (1001, 'text1001');
create table t2 (id int not null, text varchar(20) not null default '', primary key (id));
insert into t2 (id, text) values (1, 'text1'), (2, 'text2'), (3, 'text3'), (4, 'text4'), (5, 'text5'), (6, 'text6'), (7, 'text7'), (8, 'text8'), (9, 'text9'), (10, 'text10'), (11, 'text1'), (12, 'text2'), (13, 'text3'), (14, 'text4'), (15, 'text5'), (16, 'text6'), (17, 'text7'), (18, 'text8'), (19, 'text9'), (20, 'text10'),(21, 'text1'), (22, 'text2'), (23, 'text3'), (24, 'text4'), (25, 'text5'), (26, 'text6'), (27, 'text7'), (28, 'text8'), (29, 'text9'), (30, 'text10'), (31, 'text1'), (32, 'text2'), (33, 'text3'), (34, 'text4'), (35, 'text5'), (36, 'text6'), (37, 'text7'), (38, 'text8'), (39, 'text9'), (40, 'text10'), (41, 'text1'), (42, 'text2'), (43, 'text3'), (44, 'text4'), (45, 'text5'), (46, 'text6'), (47, 'text7'), (48, 'text8'), (49, 'text9'), (50, 'text10');
@@ -2285,7 +2285,7 @@
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.up.a' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`up`.`a` AS `a`,`test`.`up`.`b` AS `b` from `test`.`t1` `up` where <expr_cache>(exists(select 1 from `test`.`t1` where (`test`.`t1`.`a` = `test`.`up`.`a`)))
+Note 1003 select `test`.`up`.`a` AS `a`,`test`.`up`.`b` AS `b` from `test`.`t1` `up` where <expr_cache><`test`.`up`.`a`>(exists(select 1 from `test`.`t1` where (`test`.`t1`.`a` = `test`.`up`.`a`)))
drop table t1;
CREATE TABLE t1 (t1_a int);
INSERT INTO t1 VALUES (1);
@@ -2826,7 +2826,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = '0') and trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`)))) having (trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache><`test`.`t1`.`two`,`test`.`t1`.`one`>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = '0') and trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`)))) having (trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
explain extended SELECT one,two from t1 where ROW(one,two) IN (SELECT one,two FROM t2 WHERE flag = 'N');
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
@@ -2838,7 +2838,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = '0') group by `test`.`t2`.`one`,`test`.`t2`.`two` having (trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`))) and trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache><`test`.`t1`.`two`,`test`.`t1`.`one`>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = '0') group by `test`.`t2`.`one`,`test`.`t2`.`two` having (trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`))) and trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
DROP TABLE t1,t2;
CREATE TABLE t1 (a char(5), b char(5));
INSERT INTO t1 VALUES (NULL,'aaa'), ('aaa','aaa');
@@ -4325,13 +4325,13 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00 Using where
2 SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using temporary; Using filesort
Warnings:
-Note 1003 select 1 AS `1` from `test`.`t1` where <expr_cache>(<in_optimizer>(1,1 in ( <materialize> (select 1 from `test`.`t1` group by `test`.`t1`.`a` ), <primary_index_lookup>(1 in <temporary table> on distinct_key where ((1 = `materialized subselect`.`1`))))))
+Note 1003 select 1 AS `1` from `test`.`t1` where <expr_cache><1>(<in_optimizer>(1,1 in ( <materialize> (select 1 from `test`.`t1` group by `test`.`t1`.`a` ), <primary_index_lookup>(1 in <temporary table> on distinct_key where ((1 = `materialized subselect`.`1`))))))
EXPLAIN EXTENDED SELECT 1 FROM t1 WHERE 1 IN (SELECT 1 FROM t1 WHERE a > 3 GROUP BY a);
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00 Using where
2 SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select 1 AS `1` from `test`.`t1` where <expr_cache>(<in_optimizer>(1,1 in ( <materialize> (select 1 from `test`.`t1` where (`test`.`t1`.`a` > 3) group by `test`.`t1`.`a` ), <primary_index_lookup>(1 in <temporary table> on distinct_key where ((1 = `materialized subselect`.`1`))))))
+Note 1003 select 1 AS `1` from `test`.`t1` where <expr_cache><1>(<in_optimizer>(1,1 in ( <materialize> (select 1 from `test`.`t1` where (`test`.`t1`.`a` > 3) group by `test`.`t1`.`a` ), <primary_index_lookup>(1 in <temporary table> on distinct_key where ((1 = `materialized subselect`.`1`))))))
DROP TABLE t1;
#
# Bug#45061: Incorrectly market field caused wrong result.
=== modified file 'mysql-test/r/subselect3.result'
--- a/mysql-test/r/subselect3.result 2010-08-31 13:16:10 +0000
+++ b/mysql-test/r/subselect3.result 2010-09-05 19:36:12 +0000
@@ -30,7 +30,7 @@
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 6 100.00 Using where; Using temporary; Using filesort
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref`,<expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t2`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`))))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref`,<expr_cache><`test`.`t2`.`a`,`test`.`t2`.`oref`>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t2`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`))))))) AS `Z` from `test`.`t2`
explain extended
select a, oref from t2
where a in (select max(ie) from t1 where oref=t2.oref group by grp);
@@ -39,7 +39,7 @@
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 6 100.00 Using where; Using temporary; Using filesort
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) group by `test`.`t1`.`grp` having (<cache>(`test`.`t2`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`))))))
+Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref` from `test`.`t2` where <expr_cache><`test`.`t2`.`a`,`test`.`t2`.`oref`>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) group by `test`.`t1`.`grp` having (<cache>(`test`.`t2`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`))))))
select a, oref, a in (
select max(ie) from t1 where oref=t2.oref group by grp union
select max(ie) from t1 where oref=t2.oref group by grp
@@ -70,7 +70,7 @@
1 PRIMARY t3 ALL NULL NULL NULL NULL 2 100.00
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 6 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select <expr_cache>(<in_optimizer>(`test`.`t3`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = 4) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t3`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`))))))) AS `a in (select max(ie) from t1 where oref=4 group by grp)` from `test`.`t3`
+Note 1003 select <expr_cache><`test`.`t3`.`a`>(<in_optimizer>(`test`.`t3`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = 4) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t3`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`))))))) AS `a in (select max(ie) from t1 where oref=4 group by grp)` from `test`.`t3`
set @@optimizer_switch=@save_optimizer_switch;
drop table t1, t2, t3;
create table t1 (a int, oref int, key(a));
@@ -95,7 +95,7 @@
2 DEPENDENT SUBQUERY t1 index_subquery a a 5 func 2 100.00 Using where; Full scan on NULL key
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a checking NULL where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) having trigcond(<is_not_null_test>(`test`.`t1`.`a`)))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,<expr_cache><`test`.`t2`.`a`,`test`.`t2`.`oref`>(<in_optimizer>(`test`.`t2`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a checking NULL where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) having trigcond(<is_not_null_test>(`test`.`t1`.`a`)))))) AS `Z` from `test`.`t2`
flush status;
select oref, a from t2 where a in (select a from t1 where oref=t2.oref);
oref a
@@ -162,7 +162,7 @@
2 DEPENDENT SUBQUERY t2 ref a a 5 test.t1.b 1 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.t3.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t3`.`a` AS `a`,`test`.`t3`.`oref` AS `oref`,<expr_cache>(<in_optimizer>(`test`.`t3`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`a` = `test`.`t1`.`b`) and (`test`.`t2`.`b` = `test`.`t3`.`oref`) and trigcond(((<cache>(`test`.`t3`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`)))) having trigcond(<is_not_null_test>(`test`.`t1`.`a`))))) AS `Z` from `test`.`t3`
+Note 1003 select `test`.`t3`.`a` AS `a`,`test`.`t3`.`oref` AS `oref`,<expr_cache><`test`.`t3`.`a`,`test`.`t3`.`oref`>(<in_optimizer>(`test`.`t3`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`a` = `test`.`t1`.`b`) and (`test`.`t2`.`b` = `test`.`t3`.`oref`) and trigcond(((<cache>(`test`.`t3`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`)))) having trigcond(<is_not_null_test>(`test`.`t1`.`a`))))) AS `Z` from `test`.`t3`
drop table t1, t2, t3;
create table t1 (a int NOT NULL, b int NOT NULL, key(a));
insert into t1 values
@@ -190,7 +190,7 @@
2 DEPENDENT SUBQUERY t2 ref a a 4 test.t1.b 1 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.t3.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t3`.`a` AS `a`,`test`.`t3`.`oref` AS `oref`,<expr_cache>(<in_optimizer>(`test`.`t3`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`a` = `test`.`t1`.`b`) and (`test`.`t2`.`b` = `test`.`t3`.`oref`) and trigcond((<cache>(`test`.`t3`.`a`) = `test`.`t1`.`a`)))))) AS `Z` from `test`.`t3`
+Note 1003 select `test`.`t3`.`a` AS `a`,`test`.`t3`.`oref` AS `oref`,<expr_cache><`test`.`t3`.`a`,`test`.`t3`.`oref`>(<in_optimizer>(`test`.`t3`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`a` = `test`.`t1`.`b`) and (`test`.`t2`.`b` = `test`.`t3`.`oref`) and trigcond((<cache>(`test`.`t3`.`a`) = `test`.`t1`.`a`)))))) AS `Z` from `test`.`t3`
drop table t1,t2,t3;
create table t1 (oref int, grp int);
insert into t1 (oref, grp) values
@@ -214,7 +214,7 @@
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using temporary; Using filesort
Warnings:
Note 1276 Field or reference 't2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref`,<expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select count(0) from `test`.`t1` group by `test`.`t1`.`grp` having ((`test`.`t1`.`grp` = `test`.`t2`.`oref`) and trigcond((<cache>(`test`.`t2`.`a`) = <ref_null_helper>(count(0)))))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref`,<expr_cache><`test`.`t2`.`a`,`test`.`t2`.`oref`>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select count(0) from `test`.`t1` group by `test`.`t1`.`grp` having ((`test`.`t1`.`grp` = `test`.`t2`.`oref`) and trigcond((<cache>(`test`.`t2`.`a`) = <ref_null_helper>(count(0)))))))) AS `Z` from `test`.`t2`
drop table t1, t2;
create table t1 (a int, b int, primary key (a));
insert into t1 values (1,1), (3,1),(100,1);
@@ -246,7 +246,7 @@
2 DEPENDENT SUBQUERY t1 index_subquery a a 5 func 2 100.00 Using where; Full scan on NULL key
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,`test`.`t2`.`oref` AS `oref`,<expr_cache>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a checking NULL where ((`test`.`t1`.`c` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`b`) or isnull(`test`.`t1`.`b`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`a`)) and trigcond(<is_not_null_test>(`test`.`t1`.`b`))))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,`test`.`t2`.`oref` AS `oref`,<expr_cache><`test`.`t2`.`b`,`test`.`t2`.`a`,`test`.`t2`.`oref`>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a checking NULL where ((`test`.`t1`.`c` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`b`) or isnull(`test`.`t1`.`b`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`a`)) and trigcond(<is_not_null_test>(`test`.`t1`.`b`))))))) AS `Z` from `test`.`t2`
select a,b, oref, (a,b) in (select a,b from t1 where c=t2.oref) Z from t2;
a b oref Z
NULL 1 100 0
@@ -263,7 +263,7 @@
2 DEPENDENT SUBQUERY t4 ALL NULL NULL NULL NULL 100 100.00 Using where; Using join buffer
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,`test`.`t2`.`oref` AS `oref`,<expr_cache>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(select `test`.`t1`.`a`,`test`.`t1`.`b` from `test`.`t1` join `test`.`t4` where ((`test`.`t1`.`c` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`b`) or isnull(`test`.`t1`.`b`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`a`)) and trigcond(<is_not_null_test>(`test`.`t1`.`b`)))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,`test`.`t2`.`oref` AS `oref`,<expr_cache><`test`.`t2`.`b`,`test`.`t2`.`a`,`test`.`t2`.`oref`>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(select `test`.`t1`.`a`,`test`.`t1`.`b` from `test`.`t1` join `test`.`t4` where ((`test`.`t1`.`c` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`b`) or isnull(`test`.`t1`.`b`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`a`)) and trigcond(<is_not_null_test>(`test`.`t1`.`b`)))))) AS `Z` from `test`.`t2`
select a,b, oref,
(a,b) in (select a,b from t1,t4 where c=t2.oref) Z
from t2;
@@ -308,7 +308,7 @@
2 DEPENDENT SUBQUERY t1 index_subquery idx idx 5 func 4 100.00 Using where; Full scan on NULL key
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,<expr_cache>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on idx checking NULL where ((`test`.`t1`.`oref` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`ie1`) or isnull(`test`.`t1`.`ie1`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`ie2`) or isnull(`test`.`t1`.`ie2`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`ie1`)) and trigcond(<is_not_null_test>(`test`.`t1`.`ie2`))))))) AS `Z` from `test`.`t2` where ((`test`.`t2`.`b` = 10) and (`test`.`t2`.`a` = 10))
+Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,<expr_cache><`test`.`t2`.`b`,`test`.`t2`.`a`,`test`.`t2`.`oref`>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on idx checking NULL where ((`test`.`t1`.`oref` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`ie1`) or isnull(`test`.`t1`.`ie1`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`ie2`) or isnull(`test`.`t1`.`ie2`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`ie1`)) and trigcond(<is_not_null_test>(`test`.`t1`.`ie2`))))))) AS `Z` from `test`.`t2` where ((`test`.`t2`.`b` = 10) and (`test`.`t2`.`a` = 10))
drop table t1, t2;
create table t1 (oref char(4), grp int, ie int);
insert into t1 (oref, grp, ie) values
@@ -578,7 +578,7 @@
2 DEPENDENT SUBQUERY t1 index_subquery idx idx 5 func 4 100.00 Using where; Full scan on NULL key
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,<expr_cache>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on idx checking NULL where ((`test`.`t1`.`oref` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`ie1`) or isnull(`test`.`t1`.`ie1`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`ie2`) or isnull(`test`.`t1`.`ie2`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`ie1`)) and trigcond(<is_not_null_test>(`test`.`t1`.`ie2`))))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,<expr_cache><`test`.`t2`.`b`,`test`.`t2`.`a`,`test`.`t2`.`oref`>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on idx checking NULL where ((`test`.`t1`.`oref` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`ie1`) or isnull(`test`.`t1`.`ie1`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`ie2`) or isnull(`test`.`t1`.`ie2`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`ie1`)) and trigcond(<is_not_null_test>(`test`.`t1`.`ie2`))))))) AS `Z` from `test`.`t2`
drop table t1,t2;
create table t1 (oref char(4), grp int, ie int primary key);
insert into t1 (oref, grp, ie) values
@@ -710,7 +710,7 @@
1 PRIMARY t2 eq_ref PRIMARY PRIMARY 4 test.t1.a 1 100.00 Using index
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`b` = `test`.`t1`.`a`) and (not(<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t1` where ((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`)) having <is_not_null_test>(`test`.`t1`.`a`)))))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`b` = `test`.`t1`.`a`) and (not(<expr_cache><`test`.`t1`.`a`>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t1` where ((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`)) having <is_not_null_test>(`test`.`t1`.`a`)))))))
SELECT a FROM t1, t2 WHERE a=b AND (b NOT IN (SELECT a FROM t1));
a
SELECT a FROM t1, t2 WHERE a=b AND (b NOT IN (SELECT a FROM t1 WHERE a > 4));
@@ -881,7 +881,7 @@
Note 1276 Field or reference 'test.t1.a' of SELECT #3 was resolved in SELECT #2
Note 1276 Field or reference 'test.t1.c' of SELECT #3 was resolved in SELECT #2
Error 1054 Unknown column 'c' in 'field list'
-Note 1003 select `c` AS `c` from (select <expr_cache>((select count(`test`.`t1`.`a`) from (select count(`test`.`t1`.`b`) AS `COUNT(b)` from `test`.`t1`) `x` group by `t1`.`c`)) AS `(SELECT COUNT(a) FROM
+Note 1003 select `c` AS `c` from (select <expr_cache><count(`test`.`t1`.`a`),`test`.`t1`.`c`>((select count(`test`.`t1`.`a`) from (select count(`test`.`t1`.`b`) AS `COUNT(b)` from `test`.`t1`) `x` group by `t1`.`c`)) AS `(SELECT COUNT(a) FROM
(SELECT COUNT(b) FROM t1) AS x GROUP BY c
)` from `test`.`t1` group by `test`.`t1`.`b`) `y`
DROP TABLE t1;
=== modified file 'mysql-test/r/subselect3_jcl6.result'
--- a/mysql-test/r/subselect3_jcl6.result 2010-08-31 13:16:10 +0000
+++ b/mysql-test/r/subselect3_jcl6.result 2010-09-05 19:36:12 +0000
@@ -34,7 +34,7 @@
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 6 100.00 Using where; Using temporary; Using filesort
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref`,<expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t2`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`))))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref`,<expr_cache><`test`.`t2`.`a`,`test`.`t2`.`oref`>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t2`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`))))))) AS `Z` from `test`.`t2`
explain extended
select a, oref from t2
where a in (select max(ie) from t1 where oref=t2.oref group by grp);
@@ -43,7 +43,7 @@
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 6 100.00 Using where; Using temporary; Using filesort
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) group by `test`.`t1`.`grp` having (<cache>(`test`.`t2`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`))))))
+Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref` from `test`.`t2` where <expr_cache><`test`.`t2`.`a`,`test`.`t2`.`oref`>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) group by `test`.`t1`.`grp` having (<cache>(`test`.`t2`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`))))))
select a, oref, a in (
select max(ie) from t1 where oref=t2.oref group by grp union
select max(ie) from t1 where oref=t2.oref group by grp
@@ -74,7 +74,7 @@
1 PRIMARY t3 ALL NULL NULL NULL NULL 2 100.00
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 6 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select <expr_cache>(<in_optimizer>(`test`.`t3`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = 4) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t3`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`))))))) AS `a in (select max(ie) from t1 where oref=4 group by grp)` from `test`.`t3`
+Note 1003 select <expr_cache><`test`.`t3`.`a`>(<in_optimizer>(`test`.`t3`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = 4) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t3`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`))))))) AS `a in (select max(ie) from t1 where oref=4 group by grp)` from `test`.`t3`
set @@optimizer_switch=@save_optimizer_switch;
drop table t1, t2, t3;
create table t1 (a int, oref int, key(a));
@@ -99,7 +99,7 @@
2 DEPENDENT SUBQUERY t1 index_subquery a a 5 func 2 100.00 Using where; Full scan on NULL key
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a checking NULL where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) having trigcond(<is_not_null_test>(`test`.`t1`.`a`)))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,<expr_cache><`test`.`t2`.`a`,`test`.`t2`.`oref`>(<in_optimizer>(`test`.`t2`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a checking NULL where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) having trigcond(<is_not_null_test>(`test`.`t1`.`a`)))))) AS `Z` from `test`.`t2`
flush status;
select oref, a from t2 where a in (select a from t1 where oref=t2.oref);
oref a
@@ -166,7 +166,7 @@
2 DEPENDENT SUBQUERY t2 ref a a 5 test.t1.b 1 100.00 Using where; Using join buffer
Warnings:
Note 1276 Field or reference 'test.t3.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t3`.`a` AS `a`,`test`.`t3`.`oref` AS `oref`,<expr_cache>(<in_optimizer>(`test`.`t3`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`a` = `test`.`t1`.`b`) and (`test`.`t2`.`b` = `test`.`t3`.`oref`) and trigcond(((<cache>(`test`.`t3`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`)))) having trigcond(<is_not_null_test>(`test`.`t1`.`a`))))) AS `Z` from `test`.`t3`
+Note 1003 select `test`.`t3`.`a` AS `a`,`test`.`t3`.`oref` AS `oref`,<expr_cache><`test`.`t3`.`a`,`test`.`t3`.`oref`>(<in_optimizer>(`test`.`t3`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`a` = `test`.`t1`.`b`) and (`test`.`t2`.`b` = `test`.`t3`.`oref`) and trigcond(((<cache>(`test`.`t3`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`)))) having trigcond(<is_not_null_test>(`test`.`t1`.`a`))))) AS `Z` from `test`.`t3`
drop table t1, t2, t3;
create table t1 (a int NOT NULL, b int NOT NULL, key(a));
insert into t1 values
@@ -194,7 +194,7 @@
2 DEPENDENT SUBQUERY t2 ref a a 4 test.t1.b 1 100.00 Using where; Using join buffer
Warnings:
Note 1276 Field or reference 'test.t3.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t3`.`a` AS `a`,`test`.`t3`.`oref` AS `oref`,<expr_cache>(<in_optimizer>(`test`.`t3`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`a` = `test`.`t1`.`b`) and (`test`.`t2`.`b` = `test`.`t3`.`oref`) and trigcond((<cache>(`test`.`t3`.`a`) = `test`.`t1`.`a`)))))) AS `Z` from `test`.`t3`
+Note 1003 select `test`.`t3`.`a` AS `a`,`test`.`t3`.`oref` AS `oref`,<expr_cache><`test`.`t3`.`a`,`test`.`t3`.`oref`>(<in_optimizer>(`test`.`t3`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`a` = `test`.`t1`.`b`) and (`test`.`t2`.`b` = `test`.`t3`.`oref`) and trigcond((<cache>(`test`.`t3`.`a`) = `test`.`t1`.`a`)))))) AS `Z` from `test`.`t3`
drop table t1,t2,t3;
create table t1 (oref int, grp int);
insert into t1 (oref, grp) values
@@ -218,7 +218,7 @@
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using temporary; Using filesort
Warnings:
Note 1276 Field or reference 't2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref`,<expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select count(0) from `test`.`t1` group by `test`.`t1`.`grp` having ((`test`.`t1`.`grp` = `test`.`t2`.`oref`) and trigcond((<cache>(`test`.`t2`.`a`) = <ref_null_helper>(count(0)))))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref`,<expr_cache><`test`.`t2`.`a`,`test`.`t2`.`oref`>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select count(0) from `test`.`t1` group by `test`.`t1`.`grp` having ((`test`.`t1`.`grp` = `test`.`t2`.`oref`) and trigcond((<cache>(`test`.`t2`.`a`) = <ref_null_helper>(count(0)))))))) AS `Z` from `test`.`t2`
drop table t1, t2;
create table t1 (a int, b int, primary key (a));
insert into t1 values (1,1), (3,1),(100,1);
@@ -250,7 +250,7 @@
2 DEPENDENT SUBQUERY t1 index_subquery a a 5 func 2 100.00 Using where; Full scan on NULL key
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,`test`.`t2`.`oref` AS `oref`,<expr_cache>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a checking NULL where ((`test`.`t1`.`c` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`b`) or isnull(`test`.`t1`.`b`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`a`)) and trigcond(<is_not_null_test>(`test`.`t1`.`b`))))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,`test`.`t2`.`oref` AS `oref`,<expr_cache><`test`.`t2`.`b`,`test`.`t2`.`a`,`test`.`t2`.`oref`>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a checking NULL where ((`test`.`t1`.`c` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`b`) or isnull(`test`.`t1`.`b`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`a`)) and trigcond(<is_not_null_test>(`test`.`t1`.`b`))))))) AS `Z` from `test`.`t2`
select a,b, oref, (a,b) in (select a,b from t1 where c=t2.oref) Z from t2;
a b oref Z
NULL 1 100 0
@@ -267,7 +267,7 @@
2 DEPENDENT SUBQUERY t4 ALL NULL NULL NULL NULL 100 100.00 Using where; Using join buffer
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,`test`.`t2`.`oref` AS `oref`,<expr_cache>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(select `test`.`t1`.`a`,`test`.`t1`.`b` from `test`.`t1` join `test`.`t4` where ((`test`.`t1`.`c` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`b`) or isnull(`test`.`t1`.`b`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`a`)) and trigcond(<is_not_null_test>(`test`.`t1`.`b`)))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,`test`.`t2`.`oref` AS `oref`,<expr_cache><`test`.`t2`.`b`,`test`.`t2`.`a`,`test`.`t2`.`oref`>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(select `test`.`t1`.`a`,`test`.`t1`.`b` from `test`.`t1` join `test`.`t4` where ((`test`.`t1`.`c` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`b`) or isnull(`test`.`t1`.`b`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`a`)) and trigcond(<is_not_null_test>(`test`.`t1`.`b`)))))) AS `Z` from `test`.`t2`
select a,b, oref,
(a,b) in (select a,b from t1,t4 where c=t2.oref) Z
from t2;
@@ -312,7 +312,7 @@
2 DEPENDENT SUBQUERY t1 index_subquery idx idx 5 func 4 100.00 Using where; Full scan on NULL key
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,<expr_cache>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on idx checking NULL where ((`test`.`t1`.`oref` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`ie1`) or isnull(`test`.`t1`.`ie1`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`ie2`) or isnull(`test`.`t1`.`ie2`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`ie1`)) and trigcond(<is_not_null_test>(`test`.`t1`.`ie2`))))))) AS `Z` from `test`.`t2` where ((`test`.`t2`.`b` = 10) and (`test`.`t2`.`a` = 10))
+Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,<expr_cache><`test`.`t2`.`b`,`test`.`t2`.`a`,`test`.`t2`.`oref`>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on idx checking NULL where ((`test`.`t1`.`oref` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`ie1`) or isnull(`test`.`t1`.`ie1`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`ie2`) or isnull(`test`.`t1`.`ie2`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`ie1`)) and trigcond(<is_not_null_test>(`test`.`t1`.`ie2`))))))) AS `Z` from `test`.`t2` where ((`test`.`t2`.`b` = 10) and (`test`.`t2`.`a` = 10))
drop table t1, t2;
create table t1 (oref char(4), grp int, ie int);
insert into t1 (oref, grp, ie) values
@@ -582,7 +582,7 @@
2 DEPENDENT SUBQUERY t1 index_subquery idx idx 5 func 4 100.00 Using where; Full scan on NULL key
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,<expr_cache>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on idx checking NULL where ((`test`.`t1`.`oref` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`ie1`) or isnull(`test`.`t1`.`ie1`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`ie2`) or isnull(`test`.`t1`.`ie2`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`ie1`)) and trigcond(<is_not_null_test>(`test`.`t1`.`ie2`))))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,<expr_cache><`test`.`t2`.`b`,`test`.`t2`.`a`,`test`.`t2`.`oref`>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on idx checking NULL where ((`test`.`t1`.`oref` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`ie1`) or isnull(`test`.`t1`.`ie1`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`ie2`) or isnull(`test`.`t1`.`ie2`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`ie1`)) and trigcond(<is_not_null_test>(`test`.`t1`.`ie2`))))))) AS `Z` from `test`.`t2`
drop table t1,t2;
create table t1 (oref char(4), grp int, ie int primary key);
insert into t1 (oref, grp, ie) values
@@ -714,7 +714,7 @@
1 PRIMARY t2 eq_ref PRIMARY PRIMARY 4 test.t1.a 1 100.00 Using index
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`b` = `test`.`t1`.`a`) and (not(<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t1` where ((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`)) having <is_not_null_test>(`test`.`t1`.`a`)))))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`b` = `test`.`t1`.`a`) and (not(<expr_cache><`test`.`t1`.`a`>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t1` where ((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`)) having <is_not_null_test>(`test`.`t1`.`a`)))))))
SELECT a FROM t1, t2 WHERE a=b AND (b NOT IN (SELECT a FROM t1));
a
SELECT a FROM t1, t2 WHERE a=b AND (b NOT IN (SELECT a FROM t1 WHERE a > 4));
@@ -885,7 +885,7 @@
Note 1276 Field or reference 'test.t1.a' of SELECT #3 was resolved in SELECT #2
Note 1276 Field or reference 'test.t1.c' of SELECT #3 was resolved in SELECT #2
Error 1054 Unknown column 'c' in 'field list'
-Note 1003 select `c` AS `c` from (select <expr_cache>((select count(`test`.`t1`.`a`) from (select count(`test`.`t1`.`b`) AS `COUNT(b)` from `test`.`t1`) `x` group by `t1`.`c`)) AS `(SELECT COUNT(a) FROM
+Note 1003 select `c` AS `c` from (select <expr_cache><count(`test`.`t1`.`a`),`test`.`t1`.`c`>((select count(`test`.`t1`.`a`) from (select count(`test`.`t1`.`b`) AS `COUNT(b)` from `test`.`t1`) `x` group by `t1`.`c`)) AS `(SELECT COUNT(a) FROM
(SELECT COUNT(b) FROM t1) AS x GROUP BY c
)` from `test`.`t1` group by `test`.`t1`.`b`) `y`
DROP TABLE t1;
=== modified file 'mysql-test/r/subselect4.result'
--- a/mysql-test/r/subselect4.result 2010-08-31 13:16:10 +0000
+++ b/mysql-test/r/subselect4.result 2010-09-05 19:36:12 +0000
@@ -80,7 +80,7 @@
2 DEPENDENT SUBQUERY NULL NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
Warnings:
Note 1276 Field or reference 'test.t1.c' of SELECT #2 was resolved in SELECT #1
-Note 1003 select <expr_cache>((select 1 from `test`.`t2` where 0)) AS `RESULT` from `test`.`t1`
+Note 1003 select <expr_cache><NULL>((select 1 from `test`.`t2` where 0)) AS `RESULT` from `test`.`t1`
first equivalent variant
SELECT (SELECT 1 FROM t2 WHERE d = IFNULL(c,NULL)) AS RESULT FROM t1 GROUP BY c ;
RESULT
@@ -91,7 +91,7 @@
2 DEPENDENT SUBQUERY NULL NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
Warnings:
Note 1276 Field or reference 'test.t1.c' of SELECT #2 was resolved in SELECT #1
-Note 1003 select <expr_cache>((select 1 from `test`.`t2` where 0)) AS `RESULT` from `test`.`t1` group by NULL
+Note 1003 select <expr_cache><NULL>((select 1 from `test`.`t2` where 0)) AS `RESULT` from `test`.`t1` group by NULL
second equivalent variant
SELECT (SELECT 1 FROM t2 WHERE d = c) AS RESULT FROM t1 GROUP BY c ;
RESULT
@@ -102,7 +102,7 @@
2 DEPENDENT SUBQUERY NULL NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
Warnings:
Note 1276 Field or reference 'test.t1.c' of SELECT #2 was resolved in SELECT #1
-Note 1003 select <expr_cache>((select 1 from `test`.`t2` where 0)) AS `RESULT` from `test`.`t1` group by NULL
+Note 1003 select <expr_cache><NULL>((select 1 from `test`.`t2` where 0)) AS `RESULT` from `test`.`t1` group by NULL
DROP TABLE t1,t2;
#
# BUG#45928 "Differing query results depending on MRR and
=== modified file 'mysql-test/r/subselect_cache.result'
--- a/mysql-test/r/subselect_cache.result 2010-08-09 10:00:58 +0000
+++ b/mysql-test/r/subselect_cache.result 2010-09-05 19:36:12 +0000
@@ -3183,3 +3183,18 @@
NULL
drop table t1,t2,t3;
set @@optimizer_switch= default;
+# LP BUG#615760 (double transformation)
+create table t1 (a int);
+insert into t1 values (1),(2);
+create table t2 (b int);
+insert into t2 values (1),(2);
+set optimizer_switch='default,semijoin=off,materialization=off,subquery_cache=on';
+explain extended
+select * from t1 where a in (select b from t2);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00 Using where
+2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 2 100.00 Using where
+Warnings:
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` where <expr_cache><`test`.`t1`.`a`>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` where (<cache>(`test`.`t1`.`a`) = `test`.`t2`.`b`))))
+drop table t1,t2;
+set @@optimizer_switch= default;
=== modified file 'mysql-test/r/subselect_mat.result'
--- a/mysql-test/r/subselect_mat.result 2010-08-31 13:16:10 +0000
+++ b/mysql-test/r/subselect_mat.result 2010-09-05 19:36:12 +0000
@@ -41,7 +41,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2 ALL NULL NULL NULL NULL 5 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache>(<in_optimizer>(`test`.`t1`.`a1`,`test`.`t1`.`a1` in ( <materialize> (select `test`.`t2`.`b1` from `test`.`t2` where (`test`.`t2`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`))))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache><`test`.`t1`.`a1`>(<in_optimizer>(`test`.`t1`.`a1`,`test`.`t1`.`a1` in ( <materialize> (select `test`.`t2`.`b1` from `test`.`t2` where (`test`.`t2`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`))))))
select * from t1 where a1 in (select b1 from t2 where b1 > '0');
a1 a2
1 - 01 2 - 01
@@ -52,7 +52,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2 ALL NULL NULL NULL NULL 5 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache>(<in_optimizer>(`test`.`t1`.`a1`,`test`.`t1`.`a1` in ( <materialize> (select `test`.`t2`.`b1` from `test`.`t2` where (`test`.`t2`.`b1` > '0') group by `test`.`t2`.`b1` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`))))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache><`test`.`t1`.`a1`>(<in_optimizer>(`test`.`t1`.`a1`,`test`.`t1`.`a1` in ( <materialize> (select `test`.`t2`.`b1` from `test`.`t2` where (`test`.`t2`.`b1` > '0') group by `test`.`t2`.`b1` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`))))))
select * from t1 where a1 in (select b1 from t2 where b1 > '0' group by b1);
a1 a2
1 - 01 2 - 01
@@ -63,7 +63,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2 ALL NULL NULL NULL NULL 5 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where (`test`.`t2`.`b1` > '0') group by `test`.`t2`.`b1`,`test`.`t2`.`b2` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`b2`))))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache><`test`.`t1`.`a2`,`test`.`t1`.`a1`>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where (`test`.`t2`.`b1` > '0') group by `test`.`t2`.`b1`,`test`.`t2`.`b2` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`b2`))))))
select * from t1 where (a1, a2) in (select b1, b2 from t2 where b1 > '0' group by b1, b2);
a1 a2
1 - 01 2 - 01
@@ -74,7 +74,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2 ALL NULL NULL NULL NULL 5 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,min(`test`.`t2`.`b2`) from `test`.`t2` where (`test`.`t2`.`b1` > '0') group by `test`.`t2`.`b1` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`min(b2)`))))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache><`test`.`t1`.`a2`,`test`.`t1`.`a1`>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,min(`test`.`t2`.`b2`) from `test`.`t2` where (`test`.`t2`.`b1` > '0') group by `test`.`t2`.`b1` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`min(b2)`))))))
select * from t1 where (a1, a2) in (select b1, min(b2) from t2 where b1 > '0' group by b1);
a1 a2
1 - 01 2 - 01
@@ -85,7 +85,7 @@
1 PRIMARY t1i index NULL it1i3 18 NULL 3 100.00 Using where; Using index
2 SUBQUERY t2i index it2i1,it2i3 it2i1 9 NULL 5 100.00 Using where; Using index
Warnings:
-Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <expr_cache>(<in_optimizer>(`test`.`t1i`.`a1`,`test`.`t1i`.`a1` in ( <materialize> (select `test`.`t2i`.`b1` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`))))))
+Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <expr_cache><`test`.`t1i`.`a1`>(<in_optimizer>(`test`.`t1i`.`a1`,`test`.`t1i`.`a1` in ( <materialize> (select `test`.`t2i`.`b1` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`))))))
select * from t1i where a1 in (select b1 from t2i where b1 > '0');
a1 a2
1 - 01 2 - 01
@@ -96,7 +96,7 @@
1 PRIMARY t1i index NULL it1i3 18 NULL 3 100.00 Using where; Using index
2 SUBQUERY t2i range it2i1,it2i3 it2i1 9 NULL 3 100.00 Using where; Using index for group-by
Warnings:
-Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <expr_cache>(<in_optimizer>(`test`.`t1i`.`a1`,`test`.`t1i`.`a1` in ( <materialize> (select `test`.`t2i`.`b1` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') group by `test`.`t2i`.`b1` ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`))))))
+Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <expr_cache><`test`.`t1i`.`a1`>(<in_optimizer>(`test`.`t1i`.`a1`,`test`.`t1i`.`a1` in ( <materialize> (select `test`.`t2i`.`b1` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') group by `test`.`t2i`.`b1` ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`))))))
select * from t1i where a1 in (select b1 from t2i where b1 > '0' group by b1);
a1 a2
1 - 01 2 - 01
@@ -107,7 +107,7 @@
1 PRIMARY t1i index NULL it1i3 18 NULL 3 100.00 Using where; Using index
2 SUBQUERY t2i index it2i1,it2i3 it2i3 18 NULL 5 100.00 Using where; Using index
Warnings:
-Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <expr_cache>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`b2`))))))
+Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <expr_cache><`test`.`t1i`.`a2`,`test`.`t1i`.`a1`>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`b2`))))))
select * from t1i where (a1, a2) in (select b1, b2 from t2i where b1 > '0');
a1 a2
1 - 01 2 - 01
@@ -118,7 +118,7 @@
1 PRIMARY t1i index NULL it1i3 18 NULL 3 100.00 Using where; Using index
2 SUBQUERY t2i range it2i1,it2i3 it2i3 18 NULL 3 100.00 Using where; Using index for group-by
Warnings:
-Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <expr_cache>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') group by `test`.`t2i`.`b1`,`test`.`t2i`.`b2` ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`b2`))))))
+Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <expr_cache><`test`.`t1i`.`a2`,`test`.`t1i`.`a1`>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') group by `test`.`t2i`.`b1`,`test`.`t2i`.`b2` ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`b2`))))))
select * from t1i where (a1, a2) in (select b1, b2 from t2i where b1 > '0' group by b1, b2);
a1 a2
1 - 01 2 - 01
@@ -129,7 +129,7 @@
1 PRIMARY t1i index NULL it1i3 18 NULL 3 100.00 Using where; Using index
2 SUBQUERY t2i range it2i1,it2i3 it2i3 18 NULL 3 100.00 Using where; Using index for group-by
Warnings:
-Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <expr_cache>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,min(`test`.`t2i`.`b2`) from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') group by `test`.`t2i`.`b1` ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`min(b2)`))))))
+Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <expr_cache><`test`.`t1i`.`a2`,`test`.`t1i`.`a1`>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,min(`test`.`t2i`.`b2`) from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') group by `test`.`t2i`.`b1` ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`min(b2)`))))))
select * from t1i where (a1, a2) in (select b1, min(b2) from t2i where b1 > '0' group by b1);
a1 a2
1 - 01 2 - 01
@@ -140,7 +140,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2i range NULL it2i3 9 NULL 3 100.00 Using index for group-by
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,max(`test`.`t2i`.`b2`) from `test`.`t2i` group by `test`.`t2i`.`b1` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`max(b2)`))))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache><`test`.`t1`.`a2`,`test`.`t1`.`a1`>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,max(`test`.`t2i`.`b2`) from `test`.`t2i` group by `test`.`t2i`.`b1` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`max(b2)`))))))
select * from t1 where (a1, a2) in (select b1, max(b2) from t2i group by b1);
a1 a2
1 - 01 2 - 01
@@ -169,7 +169,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2i range it2i1,it2i3 it2i3 18 NULL 3 100.00 Using where; Using index for group-by
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,min(`test`.`t2i`.`b2`) from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') group by `test`.`t2i`.`b1` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`min(b2)`))))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache><`test`.`t1`.`a2`,`test`.`t1`.`a1`>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,min(`test`.`t2i`.`b2`) from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') group by `test`.`t2i`.`b1` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`min(b2)`))))))
select * from t1 where (a1, a2) in (select b1, min(b2) from t2i where b1 > '0' group by b1);
a1 a2
1 - 01 2 - 01
@@ -209,7 +209,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2 ALL NULL NULL NULL NULL 5 100.00
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` order by `test`.`t2`.`b1`,`test`.`t2`.`b2` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`b2`))))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache><`test`.`t1`.`a2`,`test`.`t1`.`a1`>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` order by `test`.`t2`.`b1`,`test`.`t2`.`b2` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`b2`))))))
select * from t1 where (a1, a2) in (select b1, b2 from t2 order by b1, b2);
a1 a2
1 - 01 2 - 01
@@ -220,7 +220,7 @@
1 PRIMARY t1i index NULL it1i3 18 NULL 3 100.00 Using where; Using index
2 SUBQUERY t2i index NULL it2i3 18 NULL 5 100.00 Using index
Warnings:
-Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <expr_cache>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` order by `test`.`t2i`.`b1`,`test`.`t2i`.`b2` ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`b2`))))))
+Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <expr_cache><`test`.`t1i`.`a2`,`test`.`t1i`.`a1`>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` order by `test`.`t2i`.`b1`,`test`.`t2i`.`b2` ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`b2`))))))
select * from t1i where (a1, a2) in (select b1, b2 from t2i order by b1, b2);
a1 a2
1 - 01 2 - 01
@@ -275,7 +275,7 @@
4 SUBQUERY t2i index it2i2 it2i3 18 NULL 5 100.00 Using where; Using index
2 SUBQUERY t2 ALL NULL NULL NULL NULL 5 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where (`test`.`t2`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`b2`)))))) and <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t3`.`c1`,`test`.`t3`.`c2` from `test`.`t3` where <expr_cache>(<in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1`.`a2` = `materialized subselect`.`c2`)))))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<expr_cache><`test`.`t1`.`a2`,`test`.`t1`.`a1`>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where (`test`.`t2`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`b2`)))))) and <expr_cache><`test`.`t1`.`a2`,`test`.`t1`.`a1`>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t3`.`c1`,`test`.`t3`.`c2` from `test`.`t3` where <expr_cache><`test`.`t3`.`c2`,`test`.`t3`.`c1`>(<in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1`.`a2` = `materialized subselect`.`c2`)))))))
select * from t1
where (a1, a2) in (select b1, b2 from t2 where b1 > '0') and
(a1, a2) in (select c1, c2 from t3
@@ -294,7 +294,7 @@
4 SUBQUERY t2i index it2i2 it2i3 18 NULL 5 100.00 Using where; Using index
2 SUBQUERY t2i index it2i1,it2i3 it2i3 18 NULL 5 100.00 Using where; Using index
Warnings:
-Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where (<expr_cache>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`b2`)))))) and <expr_cache>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t3i`.`c1`,`test`.`t3i`.`c2` from `test`.`t3i` where <expr_cache>(<in_optimizer>((`test`.`t3i`.`c1`,`test`.`t3i`.`c2`),(`test`.`t3i`.`c1`,`test`.`t3i`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3i`.`c1` in <temporary table> on distinct_key where ((`test`.`t3i`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3i`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`c2`)))))))
+Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where (<expr_cache><`test`.`t1i`.`a2`,`test`.`t1i`.`a1`>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`b2`)))))) and <expr_cache><`test`.`t1i`.`a2`,`test`.`t1i`.`a1`>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t3i`.`c1`,`test`.`t3i`.`c2` from `test`.`t3i` where <expr_cache><`test`.`t3i`.`c2`,`test`.`t3i`.`c1`>(<in_optimizer>((`test`.`t3i`.`c1`,`test`.`t3i`.`c2`),(`test`.`t3i`.`c1`,`test`.`t3i`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3i`.`c1` in <temporary table> on distinct_key where ((`test`.`t3i`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3i`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`c2`)))))))
select * from t1i
where (a1, a2) in (select b1, b2 from t2i where b1 > '0') and
(a1, a2) in (select c1, c2 from t3i
@@ -317,7 +317,7 @@
4 SUBQUERY t3 ALL NULL NULL NULL NULL 4 100.00 Using where
3 SUBQUERY t3 ALL NULL NULL NULL NULL 4 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where (<expr_cache>(<in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3`.`c2` from `test`.`t3` where (`test`.`t3`.`c2` like '%02') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`)))))) or <expr_cache>(<in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3`.`c2` from `test`.`t3` where (`test`.`t3`.`c2` like '%03') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`))))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`b2`)))))) and <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t3`.`c1`,`test`.`t3`.`c2` from `test`.`t3` where <expr_cache>(<in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1`.`a2` = `materialized subselect`.`c2`)))))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<expr_cache><`test`.`t1`.`a2`,`test`.`t1`.`a1`>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where (<expr_cache><`test`.`t2`.`b2`>(<in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3`.`c2` from `test`.`t3` where (`test`.`t3`.`c2` like '%02') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`)))))) or <expr_cache><`test`.`t2`.`b2`>(<in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3`.`c2` from `test`.`t3` where (`test`.`t3`.`c2` like '%03') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`))))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`b2`)))))) and <expr_cache><`test`.`t1`.`a2`,`test`.`t1`.`a1`>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t3`.`c1`,`test`.`t3`.`c2` from `test`.`t3` where <expr_cache><`test`.`t3`.`c2`,`test`.`t3`.`c1`>(<in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1`.`a2` = `materialized subselect`.`c2`)))))))
select * from t1
where (a1, a2) in (select b1, b2 from t2
where b2 in (select c2 from t3 where c2 LIKE '%02') or
@@ -342,7 +342,7 @@
3 DEPENDENT SUBQUERY t3a ALL NULL NULL NULL NULL 4 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.t1.a1' of SELECT #3 was resolved in SELECT #1
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where ((<expr_cache>(<in_optimizer>(`test`.`t2`.`b2`,<exists>(select 1 from `test`.`t3` `t3a` where ((`test`.`t3a`.`c1` = `test`.`t1`.`a1`) and (<cache>(`test`.`t2`.`b2`) = `test`.`t3a`.`c2`))))) or <expr_cache>(<in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3b`.`c2` from `test`.`t3` `t3b` where (`test`.`t3b`.`c2` like '%03') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`))))))) and (<cache>(`test`.`t1`.`a1`) = `test`.`t2`.`b1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t2`.`b2`))))) and <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t3c`.`c1`,`test`.`t3c`.`c2` from `test`.`t3` `t3c` where <expr_cache>(<in_optimizer>((`test`.`t3c`.`c1`,`test`.`t3c`.`c2`),(`test`.`t3c`.`c1`,`test`.`t3c`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3c`.`c1` in <temporary table> on distinct_key where ((`test`.`t3c`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3c`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1`.`a2` = `materialized subselect`.`c2`)))))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<expr_cache><`test`.`t1`.`a2`,`test`.`t1`.`a1`,`test`.`t1`.`a1`>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where ((<expr_cache><`test`.`t2`.`b2`,`test`.`t1`.`a1`>(<in_optimizer>(`test`.`t2`.`b2`,<exists>(select 1 from `test`.`t3` `t3a` where ((`test`.`t3a`.`c1` = `test`.`t1`.`a1`) and (<cache>(`test`.`t2`.`b2`) = `test`.`t3a`.`c2`))))) or <expr_cache><`test`.`t2`.`b2`>(<in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3b`.`c2` from `test`.`t3` `t3b` where (`test`.`t3b`.`c2` like '%03') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`))))))) and (<cache>(`test`.`t1`.`a1`) = `test`.`t2`.`b1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t2`.`b2`))))) and <expr_cache><`test`.`t1`.`a2`,`test`.`t1`.`a1`>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t3c`.`c1`,`test`.`t3c`.`c2` from `test`.`t3` `t3c` where <expr_cache><`test`.`t3c`.`c2`,`test`.`t3c`.`c1`>(<in_optimizer>((`test`.`t3c`.`c1`,`test`.`t3c`.`c2`),(`test`.`t3c`.`c1`,`test`.`t3c`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3c`.`c1` in <temporary table> on distinct_key where ((`test`.`t3c`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3c`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1`.`a2` = `materialized subselect`.`c2`)))))))
select * from t1
where (a1, a2) in (select b1, b2 from t2
where b2 in (select c2 from t3 t3a where c1 = a1) or
@@ -378,7 +378,7 @@
8 SUBQUERY t2i index it2i1,it2i3 it2i3 18 NULL 5 100.00 Using where; Using index
NULL UNION RESULT <union1,7> ALL NULL NULL NULL NULL NULL NULL
Warnings:
-Note 1003 (select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where (<expr_cache>(<in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3`.`c2` from `test`.`t3` where (`test`.`t3`.`c2` like '%02') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`)))))) or <expr_cache>(<in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3`.`c2` from `test`.`t3` where (`test`.`t3`.`c2` like '%03') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`))))))) group by `test`.`t2`.`b1`,`test`.`t2`.`b2` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`b2`)))))) and <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t3`.`c1`,`test`.`t3`.`c2` from `test`.`t3` where <expr_cache>(<in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1`.`a2` = `materialized subselect`.`c2`)))))))) union (select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where (<expr_cache>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`b2`)))))) and <expr_cache>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t3i`.`c1`,`test`.`t3i`.`c2` from `test`.`t3i` where <expr_cache>(<in_optimizer>((`test`.`t3i`.`c1`,`test`.`t3i`.`c2`),(`test`.`t3i`.`c1`,`test`.`t3i`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3i`.`c1` in <temporary table> on distinct_key where ((`test`.`t3i`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3i`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`c2`))))))))
+Note 1003 (select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<expr_cache><`test`.`t1`.`a2`,`test`.`t1`.`a1`>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where (<expr_cache><`test`.`t2`.`b2`>(<in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3`.`c2` from `test`.`t3` where (`test`.`t3`.`c2` like '%02') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`)))))) or <expr_cache><`test`.`t2`.`b2`>(<in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3`.`c2` from `test`.`t3` where (`test`.`t3`.`c2` like '%03') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`))))))) group by `test`.`t2`.`b1`,`test`.`t2`.`b2` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`b2`)))))) and <expr_cache><`test`.`t1`.`a2`,`test`.`t1`.`a1`>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t3`.`c1`,`test`.`t3`.`c2` from `test`.`t3` where <expr_cache><`test`.`t3`.`c2`,`test`.`t3`.`c1`>(<in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1`.`a2` = `materialized subselect`.`c2`)))))))) union (select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where (<expr_cache><`test`.`t1i`.`a2`,`test`.`t1i`.`a1`>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`b2`)))))) and <expr_cache><`test`.`t1i`.`a2`,`test`.`t1i`.`a1`>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t3i`.`c1`,`test`.`t3i`.`c2` from `test`.`t3i` where <expr_cache><`test`.`t3i`.`c2`,`test`.`t3i`.`c1`>(<in_optimizer>((`test`.`t3i`.`c1`,`test`.`t3i`.`c2`),(`test`.`t3i`.`c1`,`test`.`t3i`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3i`.`c1` in <temporary table> on distinct_key where ((`test`.`t3i`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3i`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`c2`))))))))
(select * from t1
where (a1, a2) in (select b1, b2 from t2
where b2 in (select c2 from t3 where c2 LIKE '%02') or
@@ -407,7 +407,7 @@
3 DEPENDENT UNION t2 ALL NULL NULL NULL NULL 5 100.00 Using where
NULL UNION RESULT <union2,3> ALL NULL NULL NULL NULL NULL NULL
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select `test`.`t1`.`a1`,`test`.`t1`.`a2` from `test`.`t1` where ((`test`.`t1`.`a1` > '0') and (<cache>(`test`.`t1`.`a1`) = `test`.`t1`.`a1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t1`.`a2`)) union select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where ((`test`.`t2`.`b1` < '9') and (<cache>(`test`.`t1`.`a1`) = `test`.`t2`.`b1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t2`.`b2`))))) and <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t3`.`c1`,`test`.`t3`.`c2` from `test`.`t3` where <expr_cache>(<in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1`.`a2` = `materialized subselect`.`c2`)))))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<expr_cache><`test`.`t1`.`a2`,`test`.`t1`.`a1`>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select `test`.`t1`.`a1`,`test`.`t1`.`a2` from `test`.`t1` where ((`test`.`t1`.`a1` > '0') and (<cache>(`test`.`t1`.`a1`) = `test`.`t1`.`a1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t1`.`a2`)) union select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where ((`test`.`t2`.`b1` < '9') and (<cache>(`test`.`t1`.`a1`) = `test`.`t2`.`b1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t2`.`b2`))))) and <expr_cache><`test`.`t1`.`a2`,`test`.`t1`.`a1`>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t3`.`c1`,`test`.`t3`.`c2` from `test`.`t3` where <expr_cache><`test`.`t3`.`c2`,`test`.`t3`.`c1`>(<in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1`.`a2` = `materialized subselect`.`c2`)))))))
select * from t1
where (a1, a2) in (select * from t1 where a1 > '0' UNION select * from t2 where b1 < '9') and
(a1, a2) in (select c1, c2 from t3
@@ -430,7 +430,7 @@
3 DEPENDENT UNION t2 ALL NULL NULL NULL NULL 5 100.00 Using where
NULL UNION RESULT <union2,3> ALL NULL NULL NULL NULL NULL NULL
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2`,`test`.`t3`.`c1` AS `c1`,`test`.`t3`.`c2` AS `c2` from `test`.`t1` join `test`.`t3` where ((`test`.`t3`.`c1` = `test`.`t1`.`a1`) and <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select `test`.`t1`.`a1`,`test`.`t1`.`a2` from `test`.`t1` where ((`test`.`t1`.`a1` > '0') and (<cache>(`test`.`t1`.`a1`) = `test`.`t1`.`a1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t1`.`a2`)) union select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where ((`test`.`t2`.`b1` < '9') and (<cache>(`test`.`t1`.`a1`) = `test`.`t2`.`b1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t2`.`b2`))))) and <expr_cache>(<in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t3`.`c1`,`test`.`t3`.`c2` from `test`.`t3` where <expr_cache>(<in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`c1`) and (`test`.`t3`.`c2` = `materialized subselect`.`c2`)))))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2`,`test`.`t3`.`c1` AS `c1`,`test`.`t3`.`c2` AS `c2` from `test`.`t1` join `test`.`t3` where ((`test`.`t3`.`c1` = `test`.`t1`.`a1`) and <expr_cache><`test`.`t1`.`a2`,`test`.`t1`.`a1`>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select `test`.`t1`.`a1`,`test`.`t1`.`a2` from `test`.`t1` where ((`test`.`t1`.`a1` > '0') and (<cache>(`test`.`t1`.`a1`) = `test`.`t1`.`a1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t1`.`a2`)) union select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where ((`test`.`t2`.`b1` < '9') and (<cache>(`test`.`t1`.`a1`) = `test`.`t2`.`b1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t2`.`b2`))))) and <expr_cache><`test`.`t3`.`c2`,`test`.`t3`.`c1`>(<in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t3`.`c1`,`test`.`t3`.`c2` from `test`.`t3` where <expr_cache><`test`.`t3`.`c2`,`test`.`t3`.`c1`>(<in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`c1`) and (`test`.`t3`.`c2` = `materialized subselect`.`c2`)))))))
select * from t1, t3
where (a1, a2) in (select * from t1 where a1 > '0' UNION select * from t2 where b1 < '9') and
(c1, c2) in (select c1, c2 from t3
@@ -452,7 +452,7 @@
3 DEPENDENT UNION t2 ALL NULL NULL NULL NULL 5 100.00 Using where
NULL UNION RESULT <union2,3> ALL NULL NULL NULL NULL NULL NULL
Warnings:
-Note 1003 select `test`.`t3`.`c1` AS `c1`,`test`.`t3`.`c2` AS `c2` from `test`.`t3` where <expr_cache>(<in_optimizer>(`test`.`t3`.`c1`,<exists>(select 1 from `test`.`t1` where ((`test`.`t1`.`a1` > '0') and (<cache>(`test`.`t3`.`c1`) = `test`.`t1`.`a1`)) union select 1 from `test`.`t2` where ((`test`.`t2`.`b1` < '9') and (<cache>(`test`.`t3`.`c1`) = `test`.`t2`.`b1`)))))
+Note 1003 select `test`.`t3`.`c1` AS `c1`,`test`.`t3`.`c2` AS `c2` from `test`.`t3` where <expr_cache><`test`.`t3`.`c1`>(<in_optimizer>(`test`.`t3`.`c1`,<exists>(select 1 from `test`.`t1` where ((`test`.`t1`.`a1` > '0') and (<cache>(`test`.`t3`.`c1`) = `test`.`t1`.`a1`)) union select 1 from `test`.`t2` where ((`test`.`t2`.`b1` < '9') and (<cache>(`test`.`t3`.`c1`) = `test`.`t2`.`b1`)))))
select * from t3
where c1 in (select a1 from t1 where a1 > '0' UNION select b1 from t2 where b1 < '9');
c1 c2
@@ -476,14 +476,14 @@
Warnings:
Note 1276 Field or reference 'test.t1.a1' of SELECT #3 was resolved in SELECT #1
Note 1276 Field or reference 'test.t1.a2' of SELECT #6 was resolved in SELECT #1
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where ((<expr_cache>(<in_optimizer>(`test`.`t2`.`b2`,<exists>(select 1 from `test`.`t3` `t3a` where ((`test`.`t3a`.`c1` = `test`.`t1`.`a1`) and (<cache>(`test`.`t2`.`b2`) = `test`.`t3a`.`c2`))))) or <expr_cache>(<in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3b`.`c2` from `test`.`t3` `t3b` where (`test`.`t3b`.`c2` like '%03') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`))))))) and (<cache>(`test`.`t1`.`a1`) = `test`.`t2`.`b1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t2`.`b2`))))) and <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select `test`.`t3c`.`c1`,`test`.`t3c`.`c2` from `test`.`t3` `t3c` where (<expr_cache>(<in_optimizer>((`test`.`t3c`.`c1`,`test`.`t3c`.`c2`),<exists>(<index_lookup>(<cache>(`test`.`t3c`.`c1`) in t2i on it2i3 where (((`test`.`t2i`.`b2` > '0') or (`test`.`t2i`.`b2` = `test`.`t1`.`a2`)) and (<cache>(`test`.`t3c`.`c1`) = `test`.`t2i`.`b1`) and (<cache>(`test`.`t3c`.`c2`) = `test`.`t2i`.`b2`)))))) and (<cache>(`test`.`t1`.`a1`) = `test`.`t3c`.`c1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t3c`.`c2`))))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<expr_cache><`test`.`t1`.`a2`,`test`.`t1`.`a1`,`test`.`t1`.`a1`>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where ((<expr_cache><`test`.`t2`.`b2`,`test`.`t1`.`a1`>(<in_optimizer>(`test`.`t2`.`b2`,<exists>(select 1 from `test`.`t3` `t3a` where ((`test`.`t3a`.`c1` = `test`.`t1`.`a1`) and (<cache>(`test`.`t2`.`b2`) = `test`.`t3a`.`c2`))))) or <expr_cache><`test`.`t2`.`b2`>(<in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3b`.`c2` from `test`.`t3` `t3b` where (`test`.`t3b`.`c2` like '%03') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`))))))) and (<cache>(`test`.`t1`.`a1`) = `test`.`t2`.`b1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t2`.`b2`))))) and <expr_cache><`test`.`t1`.`a2`,`test`.`t1`.`a1`,`test`.`t1`.`a2`>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select `test`.`t3c`.`c1`,`test`.`t3c`.`c2` from `test`.`t3` `t3c` where (<expr_cache><`test`.`t3c`.`c2`,`test`.`t3c`.`c1`,`test`.`t1`.`a2`>(<in_optimizer>((`test`.`t3c`.`c1`,`test`.`t3c`.`c2`),<exists>(<index_lookup>(<cache>(`test`.`t3c`.`c1`) in t2i on it2i3 where (((`test`.`t2i`.`b2` > '0') or (`test`.`t2i`.`b2` = `test`.`t1`.`a2`)) and (<cache>(`test`.`t3c`.`c1`) = `test`.`t2i`.`b1`) and (<cache>(`test`.`t3c`.`c2`) = `test`.`t2i`.`b2`)))))) and (<cache>(`test`.`t1`.`a1`) = `test`.`t3c`.`c1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t3c`.`c2`))))))
explain extended
select * from t1 where (a1, a2) in (select '1 - 01', '2 - 01');
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY NULL NULL NULL NULL NULL NULL NULL NULL No tables used
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select '1 - 01','2 - 01' having ((<cache>(`test`.`t1`.`a1`) = '1 - 01') and (<cache>(`test`.`t1`.`a2`) = '2 - 01')))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache><`test`.`t1`.`a2`,`test`.`t1`.`a1`>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select '1 - 01','2 - 01' having ((<cache>(`test`.`t1`.`a1`) = '1 - 01') and (<cache>(`test`.`t1`.`a2`) = '2 - 01')))))
select * from t1 where (a1, a2) in (select '1 - 01', '2 - 01');
a1 a2
1 - 01 2 - 01
@@ -493,7 +493,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY NULL NULL NULL NULL NULL NULL NULL NULL No tables used
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select '1 - 01','2 - 01' having ((<cache>(`test`.`t1`.`a1`) = '1 - 01') and (<cache>(`test`.`t1`.`a2`) = '2 - 01')))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache><`test`.`t1`.`a2`,`test`.`t1`.`a1`>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select '1 - 01','2 - 01' having ((<cache>(`test`.`t1`.`a1`) = '1 - 01') and (<cache>(`test`.`t1`.`a2`) = '2 - 01')))))
select * from t1 where (a1, a2) in (select '1 - 01', '2 - 01' from dual);
a1 a2
1 - 01 2 - 01
@@ -525,7 +525,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 3 100.00 Using temporary; Using filesort
2 DEPENDENT SUBQUERY columns unique_subquery PRIMARY PRIMARY 4 func 1 100.00 Using index; Using where; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` group by <expr_cache>(<in_optimizer>(`test`.`t1`.`a1`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`a1`) in columns on PRIMARY where trigcond((<cache>(`test`.`t1`.`a1`) = `test`.`columns`.`col`))))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` group by <expr_cache><`test`.`t1`.`a1`>(<in_optimizer>(`test`.`t1`.`a1`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`a1`) in columns on PRIMARY where trigcond((<cache>(`test`.`t1`.`a1`) = `test`.`columns`.`col`))))))
select * from t1 group by (a1 in (select col from columns));
a1 a2
1 - 00 2 - 00
@@ -583,7 +583,7 @@
1 PRIMARY t1_16 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_16 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <expr_cache>(<in_optimizer>(`test`.`t1_16`.`a1`,<exists>(select 1 from `test`.`t2_16` where ((`test`.`t2_16`.`b1` > '0') and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`)))))
+Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <expr_cache><`test`.`t1_16`.`a1`>(<in_optimizer>(`test`.`t1_16`.`a1`,<exists>(select 1 from `test`.`t2_16` where ((`test`.`t2_16`.`b1` > '0') and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`)))))
select left(a1,7), left(a2,7)
from t1_16
where a1 in (select b1 from t2_16 where b1 > '0');
@@ -597,7 +597,7 @@
1 PRIMARY t1_16 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_16 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <expr_cache>(<in_optimizer>((`test`.`t1_16`.`a1`,`test`.`t1_16`.`a2`),<exists>(select `test`.`t2_16`.`b1`,`test`.`t2_16`.`b2` from `test`.`t2_16` where ((`test`.`t2_16`.`b1` > '0') and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`) and (<cache>(`test`.`t1_16`.`a2`) = `test`.`t2_16`.`b2`)))))
+Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <expr_cache><`test`.`t1_16`.`a2`,`test`.`t1_16`.`a1`>(<in_optimizer>((`test`.`t1_16`.`a1`,`test`.`t1_16`.`a2`),<exists>(select `test`.`t2_16`.`b1`,`test`.`t2_16`.`b2` from `test`.`t2_16` where ((`test`.`t2_16`.`b1` > '0') and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`) and (<cache>(`test`.`t1_16`.`a2`) = `test`.`t2_16`.`b2`)))))
select left(a1,7), left(a2,7)
from t1_16
where (a1,a2) in (select b1, b2 from t2_16 where b1 > '0');
@@ -611,7 +611,7 @@
1 PRIMARY t1_16 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2_16 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <expr_cache>(<in_optimizer>(`test`.`t1_16`.`a1`,`test`.`t1_16`.`a1` in ( <materialize> (select substr(`test`.`t2_16`.`b1`,1,16) from `test`.`t2_16` where (`test`.`t2_16`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1_16`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_16`.`a1` = `materialized subselect`.`substring(b1,1,16)`))))))
+Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <expr_cache><`test`.`t1_16`.`a1`>(<in_optimizer>(`test`.`t1_16`.`a1`,`test`.`t1_16`.`a1` in ( <materialize> (select substr(`test`.`t2_16`.`b1`,1,16) from `test`.`t2_16` where (`test`.`t2_16`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1_16`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_16`.`a1` = `materialized subselect`.`substring(b1,1,16)`))))))
select left(a1,7), left(a2,7)
from t1_16
where a1 in (select substring(b1,1,16) from t2_16 where b1 > '0');
@@ -625,7 +625,7 @@
1 PRIMARY t1_16 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_16 ALL NULL NULL NULL NULL 3 100.00 Using filesort
Warnings:
-Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <expr_cache>(<in_optimizer>(`test`.`t1_16`.`a1`,<exists>(select group_concat(`test`.`t2_16`.`b1` separator ',') from `test`.`t2_16` group by `test`.`t2_16`.`b2` having (<cache>(`test`.`t1_16`.`a1`) = <ref_null_helper>(group_concat(`test`.`t2_16`.`b1` separator ','))))))
+Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <expr_cache><`test`.`t1_16`.`a1`>(<in_optimizer>(`test`.`t1_16`.`a1`,<exists>(select group_concat(`test`.`t2_16`.`b1` separator ',') from `test`.`t2_16` group by `test`.`t2_16`.`b2` having (<cache>(`test`.`t1_16`.`a1`) = <ref_null_helper>(group_concat(`test`.`t2_16`.`b1` separator ','))))))
select left(a1,7), left(a2,7)
from t1_16
where a1 in (select group_concat(b1) from t2_16 group by b2);
@@ -640,7 +640,7 @@
1 PRIMARY t1_16 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2_16 ALL NULL NULL NULL NULL 3 100.00 Using filesort
Warnings:
-Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <expr_cache>(<in_optimizer>(`test`.`t1_16`.`a1`,`test`.`t1_16`.`a1` in ( <materialize> (select group_concat(`test`.`t2_16`.`b1` separator ',') from `test`.`t2_16` group by `test`.`t2_16`.`b2` ), <primary_index_lookup>(`test`.`t1_16`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_16`.`a1` = `materialized subselect`.`group_concat(b1)`))))))
+Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <expr_cache><`test`.`t1_16`.`a1`>(<in_optimizer>(`test`.`t1_16`.`a1`,`test`.`t1_16`.`a1` in ( <materialize> (select group_concat(`test`.`t2_16`.`b1` separator ',') from `test`.`t2_16` group by `test`.`t2_16`.`b2` ), <primary_index_lookup>(`test`.`t1_16`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_16`.`a1` = `materialized subselect`.`group_concat(b1)`))))))
select left(a1,7), left(a2,7)
from t1_16
where a1 in (select group_concat(b1) from t2_16 group by b2);
@@ -662,7 +662,7 @@
3 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 5 100.00 Using where; Using join buffer
4 SUBQUERY t3 ALL NULL NULL NULL NULL 4 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache>(<in_optimizer>(concat(`test`.`t1`.`a1`,'x'),<exists>(select 1 from `test`.`t1_16` where (<expr_cache>(<in_optimizer>((`test`.`t1_16`.`a1`,`test`.`t1_16`.`a2`),<exists>(select `test`.`t2_16`.`b1`,`test`.`t2_16`.`b2` from `test`.`t2_16` join `test`.`t2` where ((`test`.`t2`.`b2` = substr(`test`.`t2_16`.`b2`,1,6)) and <expr_cache>(<in_optimizer>(`test`.`t2`.`b1`,`test`.`t2`.`b1` in ( <materialize> (select `test`.`t3`.`c1` from `test`.`t3` where (`test`.`t3`.`c2` > '0') ), <primary_index_lookup>(`test`.`t2`.`b1` in <temporary table> on distinct_key where ((`test`.`t2`.`b1` = `materialized subselect`.`c1`)))))) and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`) and (<cache>(`test`.`t1_16`.`a2`) = `test`.`t2_16`.`b2`))))) and (<cache>(concat(`test`.`t1`.`a1`,'x')) = left(`test`.`t1_16`.`a1`,8))))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache><concat(`test`.`t1`.`a1`,'x')>(<in_optimizer>(concat(`test`.`t1`.`a1`,'x'),<exists>(select 1 from `test`.`t1_16` where (<expr_cache><`test`.`t1_16`.`a2`,`test`.`t1_16`.`a1`>(<in_optimizer>((`test`.`t1_16`.`a1`,`test`.`t1_16`.`a2`),<exists>(select `test`.`t2_16`.`b1`,`test`.`t2_16`.`b2` from `test`.`t2_16` join `test`.`t2` where ((`test`.`t2`.`b2` = substr(`test`.`t2_16`.`b2`,1,6)) and <expr_cache><`test`.`t2`.`b1`>(<in_optimizer>(`test`.`t2`.`b1`,`test`.`t2`.`b1` in ( <materialize> (select `test`.`t3`.`c1` from `test`.`t3` where (`test`.`t3`.`c2` > '0') ), <primary_index_lookup>(`test`.`t2`.`b1` in <temporary table> on distinct_key where ((`test`.`t2`.`b1` = `materialized subselect`.`c1`)))))) and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`) and (<cache>(`test`.`t1_16`.`a2`) = `test`.`t2_16`.`b2`))))) and (<cache>(concat(`test`.`t1`.`a1`,'x')) = left(`test`.`t1_16`.`a1`,8))))))
drop table t1_16, t2_16, t3_16;
set @blob_len = 512;
set @suffix_len = @blob_len - @prefix_len;
@@ -696,7 +696,7 @@
1 PRIMARY t1_512 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_512 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <expr_cache>(<in_optimizer>(`test`.`t1_512`.`a1`,<exists>(select 1 from `test`.`t2_512` where ((`test`.`t2_512`.`b1` > '0') and (<cache>(`test`.`t1_512`.`a1`) = `test`.`t2_512`.`b1`)))))
+Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <expr_cache><`test`.`t1_512`.`a1`>(<in_optimizer>(`test`.`t1_512`.`a1`,<exists>(select 1 from `test`.`t2_512` where ((`test`.`t2_512`.`b1` > '0') and (<cache>(`test`.`t1_512`.`a1`) = `test`.`t2_512`.`b1`)))))
select left(a1,7), left(a2,7)
from t1_512
where a1 in (select b1 from t2_512 where b1 > '0');
@@ -710,7 +710,7 @@
1 PRIMARY t1_512 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_512 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <expr_cache>(<in_optimizer>((`test`.`t1_512`.`a1`,`test`.`t1_512`.`a2`),<exists>(select `test`.`t2_512`.`b1`,`test`.`t2_512`.`b2` from `test`.`t2_512` where ((`test`.`t2_512`.`b1` > '0') and (<cache>(`test`.`t1_512`.`a1`) = `test`.`t2_512`.`b1`) and (<cache>(`test`.`t1_512`.`a2`) = `test`.`t2_512`.`b2`)))))
+Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <expr_cache><`test`.`t1_512`.`a2`,`test`.`t1_512`.`a1`>(<in_optimizer>((`test`.`t1_512`.`a1`,`test`.`t1_512`.`a2`),<exists>(select `test`.`t2_512`.`b1`,`test`.`t2_512`.`b2` from `test`.`t2_512` where ((`test`.`t2_512`.`b1` > '0') and (<cache>(`test`.`t1_512`.`a1`) = `test`.`t2_512`.`b1`) and (<cache>(`test`.`t1_512`.`a2`) = `test`.`t2_512`.`b2`)))))
select left(a1,7), left(a2,7)
from t1_512
where (a1,a2) in (select b1, b2 from t2_512 where b1 > '0');
@@ -724,7 +724,7 @@
1 PRIMARY t1_512 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2_512 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <expr_cache>(<in_optimizer>(`test`.`t1_512`.`a1`,`test`.`t1_512`.`a1` in ( <materialize> (select substr(`test`.`t2_512`.`b1`,1,512) from `test`.`t2_512` where (`test`.`t2_512`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1_512`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_512`.`a1` = `materialized subselect`.`substring(b1,1,512)`))))))
+Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <expr_cache><`test`.`t1_512`.`a1`>(<in_optimizer>(`test`.`t1_512`.`a1`,`test`.`t1_512`.`a1` in ( <materialize> (select substr(`test`.`t2_512`.`b1`,1,512) from `test`.`t2_512` where (`test`.`t2_512`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1_512`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_512`.`a1` = `materialized subselect`.`substring(b1,1,512)`))))))
select left(a1,7), left(a2,7)
from t1_512
where a1 in (select substring(b1,1,512) from t2_512 where b1 > '0');
@@ -738,7 +738,7 @@
1 PRIMARY t1_512 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2_512 ALL NULL NULL NULL NULL 3 100.00 Using filesort
Warnings:
-Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <expr_cache>(<in_optimizer>(`test`.`t1_512`.`a1`,`test`.`t1_512`.`a1` in ( <materialize> (select group_concat(`test`.`t2_512`.`b1` separator ',') from `test`.`t2_512` group by `test`.`t2_512`.`b2` ), <primary_index_lookup>(`test`.`t1_512`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_512`.`a1` = `materialized subselect`.`group_concat(b1)`))))))
+Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <expr_cache><`test`.`t1_512`.`a1`>(<in_optimizer>(`test`.`t1_512`.`a1`,`test`.`t1_512`.`a1` in ( <materialize> (select group_concat(`test`.`t2_512`.`b1` separator ',') from `test`.`t2_512` group by `test`.`t2_512`.`b2` ), <primary_index_lookup>(`test`.`t1_512`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_512`.`a1` = `materialized subselect`.`group_concat(b1)`))))))
select left(a1,7), left(a2,7)
from t1_512
where a1 in (select group_concat(b1) from t2_512 group by b2);
@@ -751,7 +751,7 @@
1 PRIMARY t1_512 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2_512 ALL NULL NULL NULL NULL 3 100.00 Using filesort
Warnings:
-Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <expr_cache>(<in_optimizer>(`test`.`t1_512`.`a1`,`test`.`t1_512`.`a1` in ( <materialize> (select group_concat(`test`.`t2_512`.`b1` separator ',') from `test`.`t2_512` group by `test`.`t2_512`.`b2` ), <primary_index_lookup>(`test`.`t1_512`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_512`.`a1` = `materialized subselect`.`group_concat(b1)`))))))
+Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <expr_cache><`test`.`t1_512`.`a1`>(<in_optimizer>(`test`.`t1_512`.`a1`,`test`.`t1_512`.`a1` in ( <materialize> (select group_concat(`test`.`t2_512`.`b1` separator ',') from `test`.`t2_512` group by `test`.`t2_512`.`b2` ), <primary_index_lookup>(`test`.`t1_512`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_512`.`a1` = `materialized subselect`.`group_concat(b1)`))))))
select left(a1,7), left(a2,7)
from t1_512
where a1 in (select group_concat(b1) from t2_512 group by b2);
@@ -789,7 +789,7 @@
1 PRIMARY t1_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <expr_cache>(<in_optimizer>(`test`.`t1_1024`.`a1`,<exists>(select 1 from `test`.`t2_1024` where ((`test`.`t2_1024`.`b1` > '0') and (<cache>(`test`.`t1_1024`.`a1`) = `test`.`t2_1024`.`b1`)))))
+Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <expr_cache><`test`.`t1_1024`.`a1`>(<in_optimizer>(`test`.`t1_1024`.`a1`,<exists>(select 1 from `test`.`t2_1024` where ((`test`.`t2_1024`.`b1` > '0') and (<cache>(`test`.`t1_1024`.`a1`) = `test`.`t2_1024`.`b1`)))))
select left(a1,7), left(a2,7)
from t1_1024
where a1 in (select b1 from t2_1024 where b1 > '0');
@@ -803,7 +803,7 @@
1 PRIMARY t1_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <expr_cache>(<in_optimizer>((`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a2`),<exists>(select `test`.`t2_1024`.`b1`,`test`.`t2_1024`.`b2` from `test`.`t2_1024` where ((`test`.`t2_1024`.`b1` > '0') and (<cache>(`test`.`t1_1024`.`a1`) = `test`.`t2_1024`.`b1`) and (<cache>(`test`.`t1_1024`.`a2`) = `test`.`t2_1024`.`b2`)))))
+Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <expr_cache><`test`.`t1_1024`.`a2`,`test`.`t1_1024`.`a1`>(<in_optimizer>((`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a2`),<exists>(select `test`.`t2_1024`.`b1`,`test`.`t2_1024`.`b2` from `test`.`t2_1024` where ((`test`.`t2_1024`.`b1` > '0') and (<cache>(`test`.`t1_1024`.`a1`) = `test`.`t2_1024`.`b1`) and (<cache>(`test`.`t1_1024`.`a2`) = `test`.`t2_1024`.`b2`)))))
select left(a1,7), left(a2,7)
from t1_1024
where (a1,a2) in (select b1, b2 from t2_1024 where b1 > '0');
@@ -817,7 +817,7 @@
1 PRIMARY t1_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <expr_cache>(<in_optimizer>(`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a1` in (select 1 from `test`.`t2_1024` where ((`test`.`t2_1024`.`b1` > '0') and (<cache>(`test`.`t1_1024`.`a1`) = substr(`test`.`t2_1024`.`b1`,1,1024))))))
+Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <expr_cache><`test`.`t1_1024`.`a1`>(<in_optimizer>(`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a1` in (select 1 from `test`.`t2_1024` where ((`test`.`t2_1024`.`b1` > '0') and (<cache>(`test`.`t1_1024`.`a1`) = substr(`test`.`t2_1024`.`b1`,1,1024))))))
select left(a1,7), left(a2,7)
from t1_1024
where a1 in (select substring(b1,1,1024) from t2_1024 where b1 > '0');
@@ -831,7 +831,7 @@
1 PRIMARY t1_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2_1024 ALL NULL NULL NULL NULL 3 100.00 Using filesort
Warnings:
-Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <expr_cache>(<in_optimizer>(`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a1` in ( <materialize> (select group_concat(`test`.`t2_1024`.`b1` separator ',') from `test`.`t2_1024` group by `test`.`t2_1024`.`b2` ), <primary_index_lookup>(`test`.`t1_1024`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_1024`.`a1` = `materialized subselect`.`group_concat(b1)`))))))
+Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <expr_cache><`test`.`t1_1024`.`a1`>(<in_optimizer>(`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a1` in ( <materialize> (select group_concat(`test`.`t2_1024`.`b1` separator ',') from `test`.`t2_1024` group by `test`.`t2_1024`.`b2` ), <primary_index_lookup>(`test`.`t1_1024`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_1024`.`a1` = `materialized subselect`.`group_concat(b1)`))))))
select left(a1,7), left(a2,7)
from t1_1024
where a1 in (select group_concat(b1) from t2_1024 group by b2);
@@ -844,7 +844,7 @@
1 PRIMARY t1_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2_1024 ALL NULL NULL NULL NULL 3 100.00 Using filesort
Warnings:
-Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <expr_cache>(<in_optimizer>(`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a1` in ( <materialize> (select group_concat(`test`.`t2_1024`.`b1` separator ',') from `test`.`t2_1024` group by `test`.`t2_1024`.`b2` ), <primary_index_lookup>(`test`.`t1_1024`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_1024`.`a1` = `materialized subselect`.`group_concat(b1)`))))))
+Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <expr_cache><`test`.`t1_1024`.`a1`>(<in_optimizer>(`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a1` in ( <materialize> (select group_concat(`test`.`t2_1024`.`b1` separator ',') from `test`.`t2_1024` group by `test`.`t2_1024`.`b2` ), <primary_index_lookup>(`test`.`t1_1024`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_1024`.`a1` = `materialized subselect`.`group_concat(b1)`))))))
select left(a1,7), left(a2,7)
from t1_1024
where a1 in (select group_concat(b1) from t2_1024 group by b2);
@@ -882,7 +882,7 @@
1 PRIMARY t1_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <expr_cache>(<in_optimizer>(`test`.`t1_1025`.`a1`,<exists>(select 1 from `test`.`t2_1025` where ((`test`.`t2_1025`.`b1` > '0') and (<cache>(`test`.`t1_1025`.`a1`) = `test`.`t2_1025`.`b1`)))))
+Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <expr_cache><`test`.`t1_1025`.`a1`>(<in_optimizer>(`test`.`t1_1025`.`a1`,<exists>(select 1 from `test`.`t2_1025` where ((`test`.`t2_1025`.`b1` > '0') and (<cache>(`test`.`t1_1025`.`a1`) = `test`.`t2_1025`.`b1`)))))
select left(a1,7), left(a2,7)
from t1_1025
where a1 in (select b1 from t2_1025 where b1 > '0');
@@ -896,7 +896,7 @@
1 PRIMARY t1_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <expr_cache>(<in_optimizer>((`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a2`),<exists>(select `test`.`t2_1025`.`b1`,`test`.`t2_1025`.`b2` from `test`.`t2_1025` where ((`test`.`t2_1025`.`b1` > '0') and (<cache>(`test`.`t1_1025`.`a1`) = `test`.`t2_1025`.`b1`) and (<cache>(`test`.`t1_1025`.`a2`) = `test`.`t2_1025`.`b2`)))))
+Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <expr_cache><`test`.`t1_1025`.`a2`,`test`.`t1_1025`.`a1`>(<in_optimizer>((`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a2`),<exists>(select `test`.`t2_1025`.`b1`,`test`.`t2_1025`.`b2` from `test`.`t2_1025` where ((`test`.`t2_1025`.`b1` > '0') and (<cache>(`test`.`t1_1025`.`a1`) = `test`.`t2_1025`.`b1`) and (<cache>(`test`.`t1_1025`.`a2`) = `test`.`t2_1025`.`b2`)))))
select left(a1,7), left(a2,7)
from t1_1025
where (a1,a2) in (select b1, b2 from t2_1025 where b1 > '0');
@@ -910,7 +910,7 @@
1 PRIMARY t1_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <expr_cache>(<in_optimizer>(`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a1` in (select 1 from `test`.`t2_1025` where ((`test`.`t2_1025`.`b1` > '0') and (<cache>(`test`.`t1_1025`.`a1`) = substr(`test`.`t2_1025`.`b1`,1,1025))))))
+Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <expr_cache><`test`.`t1_1025`.`a1`>(<in_optimizer>(`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a1` in (select 1 from `test`.`t2_1025` where ((`test`.`t2_1025`.`b1` > '0') and (<cache>(`test`.`t1_1025`.`a1`) = substr(`test`.`t2_1025`.`b1`,1,1025))))))
select left(a1,7), left(a2,7)
from t1_1025
where a1 in (select substring(b1,1,1025) from t2_1025 where b1 > '0');
@@ -924,7 +924,7 @@
1 PRIMARY t1_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2_1025 ALL NULL NULL NULL NULL 3 100.00 Using filesort
Warnings:
-Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <expr_cache>(<in_optimizer>(`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a1` in ( <materialize> (select group_concat(`test`.`t2_1025`.`b1` separator ',') from `test`.`t2_1025` group by `test`.`t2_1025`.`b2` ), <primary_index_lookup>(`test`.`t1_1025`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_1025`.`a1` = `materialized subselect`.`group_concat(b1)`))))))
+Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <expr_cache><`test`.`t1_1025`.`a1`>(<in_optimizer>(`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a1` in ( <materialize> (select group_concat(`test`.`t2_1025`.`b1` separator ',') from `test`.`t2_1025` group by `test`.`t2_1025`.`b2` ), <primary_index_lookup>(`test`.`t1_1025`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_1025`.`a1` = `materialized subselect`.`group_concat(b1)`))))))
select left(a1,7), left(a2,7)
from t1_1025
where a1 in (select group_concat(b1) from t2_1025 group by b2);
@@ -937,7 +937,7 @@
1 PRIMARY t1_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2_1025 ALL NULL NULL NULL NULL 3 100.00 Using filesort
Warnings:
-Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <expr_cache>(<in_optimizer>(`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a1` in ( <materialize> (select group_concat(`test`.`t2_1025`.`b1` separator ',') from `test`.`t2_1025` group by `test`.`t2_1025`.`b2` ), <primary_index_lookup>(`test`.`t1_1025`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_1025`.`a1` = `materialized subselect`.`group_concat(b1)`))))))
+Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <expr_cache><`test`.`t1_1025`.`a1`>(<in_optimizer>(`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a1` in ( <materialize> (select group_concat(`test`.`t2_1025`.`b1` separator ',') from `test`.`t2_1025` group by `test`.`t2_1025`.`b2` ), <primary_index_lookup>(`test`.`t1_1025`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_1025`.`a1` = `materialized subselect`.`group_concat(b1)`))))))
select left(a1,7), left(a2,7)
from t1_1025
where a1 in (select group_concat(b1) from t2_1025 group by b2);
@@ -959,7 +959,7 @@
1 PRIMARY t1bit ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2bit ALL NULL NULL NULL NULL 3 100.00
Warnings:
-Note 1003 select conv(`test`.`t1bit`.`a1`,10,2) AS `bin(a1)`,conv(`test`.`t1bit`.`a2`,10,2) AS `bin(a2)` from `test`.`t1bit` where <expr_cache>(<in_optimizer>((`test`.`t1bit`.`a1`,`test`.`t1bit`.`a2`),(`test`.`t1bit`.`a1`,`test`.`t1bit`.`a2`) in ( <materialize> (select `test`.`t2bit`.`b1`,`test`.`t2bit`.`b2` from `test`.`t2bit` ), <primary_index_lookup>(`test`.`t1bit`.`a1` in <temporary table> on distinct_key where ((`test`.`t1bit`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1bit`.`a2` = `materialized subselect`.`b2`))))))
+Note 1003 select conv(`test`.`t1bit`.`a1`,10,2) AS `bin(a1)`,conv(`test`.`t1bit`.`a2`,10,2) AS `bin(a2)` from `test`.`t1bit` where <expr_cache><`test`.`t1bit`.`a2`,`test`.`t1bit`.`a1`>(<in_optimizer>((`test`.`t1bit`.`a1`,`test`.`t1bit`.`a2`),(`test`.`t1bit`.`a1`,`test`.`t1bit`.`a2`) in ( <materialize> (select `test`.`t2bit`.`b1`,`test`.`t2bit`.`b2` from `test`.`t2bit` ), <primary_index_lookup>(`test`.`t1bit`.`a1` in <temporary table> on distinct_key where ((`test`.`t1bit`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1bit`.`a2` = `materialized subselect`.`b2`))))))
select bin(a1), bin(a2)
from t1bit
where (a1, a2) in (select b1, b2 from t2bit);
@@ -982,7 +982,7 @@
1 PRIMARY t1bb ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2bb ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select conv(`test`.`t1bb`.`a1`,10,2) AS `bin(a1)`,`test`.`t1bb`.`a2` AS `a2` from `test`.`t1bb` where <expr_cache>(<in_optimizer>((`test`.`t1bb`.`a1`,`test`.`t1bb`.`a2`),<exists>(select `test`.`t2bb`.`b1`,`test`.`t2bb`.`b2` from `test`.`t2bb` where ((<cache>(`test`.`t1bb`.`a1`) = `test`.`t2bb`.`b1`) and (<cache>(`test`.`t1bb`.`a2`) = `test`.`t2bb`.`b2`)))))
+Note 1003 select conv(`test`.`t1bb`.`a1`,10,2) AS `bin(a1)`,`test`.`t1bb`.`a2` AS `a2` from `test`.`t1bb` where <expr_cache><`test`.`t1bb`.`a2`,`test`.`t1bb`.`a1`>(<in_optimizer>((`test`.`t1bb`.`a1`,`test`.`t1bb`.`a2`),<exists>(select `test`.`t2bb`.`b1`,`test`.`t2bb`.`b2` from `test`.`t2bb` where ((<cache>(`test`.`t1bb`.`a1`) = `test`.`t2bb`.`b1`) and (<cache>(`test`.`t1bb`.`a2`) = `test`.`t2bb`.`b2`)))))
select bin(a1), a2
from t1bb
where (a1, a2) in (select b1, b2 from t2bb);
@@ -1030,7 +1030,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 7 100.00 Using where
2 SUBQUERY t2 ALL NULL NULL NULL NULL 6 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` where <expr_cache>(<in_optimizer>(`test`.`t1`.`a`,`test`.`t1`.`a` in ( <materialize> (select `test`.`t2`.`c` from `test`.`t2` where (`test`.`t2`.`d` >= 20) ), <primary_index_lookup>(`test`.`t1`.`a` in <temporary table> on distinct_key where ((`test`.`t1`.`a` = `materialized subselect`.`c`))))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` where <expr_cache><`test`.`t1`.`a`>(<in_optimizer>(`test`.`t1`.`a`,`test`.`t1`.`a` in ( <materialize> (select `test`.`t2`.`c` from `test`.`t2` where (`test`.`t2`.`d` >= 20) ), <primary_index_lookup>(`test`.`t1`.`a` in <temporary table> on distinct_key where ((`test`.`t1`.`a` = `materialized subselect`.`c`))))))
select a from t1 where a in (select c from t2 where d >= 20);
a
2
@@ -1044,7 +1044,7 @@
1 PRIMARY t1 index NULL it1a 4 NULL 7 100.00 Using where; Using index
2 SUBQUERY t2 ALL NULL NULL NULL NULL 6 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` where <expr_cache>(<in_optimizer>(`test`.`t1`.`a`,`test`.`t1`.`a` in ( <materialize> (select `test`.`t2`.`c` from `test`.`t2` where (`test`.`t2`.`d` >= 20) ), <primary_index_lookup>(`test`.`t1`.`a` in <temporary table> on distinct_key where ((`test`.`t1`.`a` = `materialized subselect`.`c`))))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` where <expr_cache><`test`.`t1`.`a`>(<in_optimizer>(`test`.`t1`.`a`,`test`.`t1`.`a` in ( <materialize> (select `test`.`t2`.`c` from `test`.`t2` where (`test`.`t2`.`d` >= 20) ), <primary_index_lookup>(`test`.`t1`.`a` in <temporary table> on distinct_key where ((`test`.`t1`.`a` = `materialized subselect`.`c`))))))
select a from t1 where a in (select c from t2 where d >= 20);
a
2
@@ -1058,7 +1058,7 @@
1 PRIMARY t1 index NULL it1a 4 NULL 7 100.00 Using where; Using index
2 SUBQUERY t2 ALL NULL NULL NULL NULL 7 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` where <expr_cache>(<in_optimizer>(`test`.`t1`.`a`,`test`.`t1`.`a` in ( <materialize> (select `test`.`t2`.`c` from `test`.`t2` where (`test`.`t2`.`d` >= 20) ), <primary_index_lookup>(`test`.`t1`.`a` in <temporary table> on distinct_key where ((`test`.`t1`.`a` = `materialized subselect`.`c`))))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` where <expr_cache><`test`.`t1`.`a`>(<in_optimizer>(`test`.`t1`.`a`,`test`.`t1`.`a` in ( <materialize> (select `test`.`t2`.`c` from `test`.`t2` where (`test`.`t2`.`d` >= 20) ), <primary_index_lookup>(`test`.`t1`.`a` in <temporary table> on distinct_key where ((`test`.`t1`.`a` = `materialized subselect`.`c`))))))
select a from t1 where a in (select c from t2 where d >= 20);
a
2
@@ -1071,7 +1071,7 @@
1 PRIMARY t1 index NULL it1a 4 NULL 7 100.00 Using index
2 SUBQUERY t2 ALL NULL NULL NULL NULL 7 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` group by `test`.`t1`.`a` having <expr_cache>(<in_optimizer>(`test`.`t1`.`a`,`test`.`t1`.`a` in ( <materialize> (select `test`.`t2`.`c` from `test`.`t2` where (`test`.`t2`.`d` >= 20) ), <primary_index_lookup>(`test`.`t1`.`a` in <temporary table> on distinct_key where ((`test`.`t1`.`a` = `materialized subselect`.`c`))))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` group by `test`.`t1`.`a` having <expr_cache><`test`.`t1`.`a`>(<in_optimizer>(`test`.`t1`.`a`,`test`.`t1`.`a` in ( <materialize> (select `test`.`t2`.`c` from `test`.`t2` where (`test`.`t2`.`d` >= 20) ), <primary_index_lookup>(`test`.`t1`.`a` in <temporary table> on distinct_key where ((`test`.`t1`.`a` = `materialized subselect`.`c`))))))
select a from t1 group by a having a in (select c from t2 where d >= 20);
a
2
@@ -1083,7 +1083,7 @@
1 PRIMARY t1 index NULL it1a 4 NULL 7 100.00 Using index
2 SUBQUERY t2 ALL NULL NULL NULL NULL 7 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` group by `test`.`t1`.`a` having <expr_cache>(<in_optimizer>(`test`.`t1`.`a`,`test`.`t1`.`a` in ( <materialize> (select `test`.`t2`.`c` from `test`.`t2` where (`test`.`t2`.`d` >= 20) ), <primary_index_lookup>(`test`.`t1`.`a` in <temporary table> on distinct_key where ((`test`.`t1`.`a` = `materialized subselect`.`c`))))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` group by `test`.`t1`.`a` having <expr_cache><`test`.`t1`.`a`>(<in_optimizer>(`test`.`t1`.`a`,`test`.`t1`.`a` in ( <materialize> (select `test`.`t2`.`c` from `test`.`t2` where (`test`.`t2`.`d` >= 20) ), <primary_index_lookup>(`test`.`t1`.`a` in <temporary table> on distinct_key where ((`test`.`t1`.`a` = `materialized subselect`.`c`))))))
select a from t1 group by a having a in (select c from t2 where d >= 20);
a
2
@@ -1097,7 +1097,7 @@
3 DEPENDENT SUBQUERY t3 ALL NULL NULL NULL NULL 4 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.t1.b' of SELECT #3 was resolved in SELECT #1
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` group by `test`.`t1`.`a` having <expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` where (<nop>(<expr_cache>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select `test`.`t3`.`e` from `test`.`t3` where (max(`test`.`t1`.`b`) = `test`.`t3`.`e`) having (<cache>(`test`.`t2`.`d`) >= <ref_null_helper>(`test`.`t3`.`e`)))))) and (<cache>(`test`.`t1`.`a`) = `test`.`t2`.`c`)))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` group by `test`.`t1`.`a` having <expr_cache><`test`.`t1`.`a`,max(`test`.`t1`.`b`)>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` where (<nop>(<expr_cache><`test`.`t2`.`d`,max(`test`.`t1`.`b`)>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select `test`.`t3`.`e` from `test`.`t3` where (max(`test`.`t1`.`b`) = `test`.`t3`.`e`) having (<cache>(`test`.`t2`.`d`) >= <ref_null_helper>(`test`.`t3`.`e`)))))) and (<cache>(`test`.`t1`.`a`) = `test`.`t2`.`c`)))))
select a from t1 group by a
having a in (select c from t2 where d >= some(select e from t3 where max(b)=e));
a
@@ -1112,7 +1112,7 @@
3 DEPENDENT SUBQUERY t3 ALL NULL NULL NULL NULL 4 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.t1.b' of SELECT #3 was resolved in SELECT #1
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` where <expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` where (<nop>(<expr_cache>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select 1 from `test`.`t3` where ((`test`.`t1`.`b` = `test`.`t3`.`e`) and (<cache>(`test`.`t2`.`d`) >= `test`.`t3`.`e`)))))) and (<cache>(`test`.`t1`.`a`) = `test`.`t2`.`c`)))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` where <expr_cache><`test`.`t1`.`a`,`test`.`t1`.`b`>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` where (<nop>(<expr_cache><`test`.`t2`.`d`,`test`.`t1`.`b`>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select 1 from `test`.`t3` where ((`test`.`t1`.`b` = `test`.`t3`.`e`) and (<cache>(`test`.`t2`.`d`) >= `test`.`t3`.`e`)))))) and (<cache>(`test`.`t1`.`a`) = `test`.`t2`.`c`)))))
select a from t1
where a in (select c from t2 where d >= some(select e from t3 where b=e));
a
=== modified file 'mysql-test/r/subselect_no_mat.result'
--- a/mysql-test/r/subselect_no_mat.result 2010-08-31 13:16:10 +0000
+++ b/mysql-test/r/subselect_no_mat.result 2010-09-05 19:36:12 +0000
@@ -56,7 +56,7 @@
Warnings:
Note 1276 Field or reference 'b.a' of SELECT #3 was resolved in SELECT #1
Note 1276 Field or reference 'b.a' of SELECT #3 was resolved in SELECT #1
-Note 1003 select 1 AS `1` from (select 1 AS `a`) `b` having (<expr_cache>((select '1')) = 1)
+Note 1003 select 1 AS `1` from (select 1 AS `a`) `b` having (<expr_cache><'1'>((select '1')) = 1)
SELECT 1 FROM (SELECT 1 as a) as b HAVING (SELECT a)=1;
1
1
@@ -320,7 +320,7 @@
Warnings:
Note 1276 Field or reference 'test.t2.a' of SELECT #2 was resolved in SELECT #1
Note 1276 Field or reference 'test.t2.a' of SELECT #3 was resolved in SELECT #1
-Note 1003 select <expr_cache>((select '2' from `test`.`t1` where ('2' = `test`.`t2`.`a`) union select `test`.`t5`.`a` from `test`.`t5` where (`test`.`t5`.`a` = `test`.`t2`.`a`))) AS `(select a from t1 where t1.a=t2.a union select a from t5 where t5.a=t2.a)`,`test`.`t2`.`a` AS `a` from `test`.`t2`
+Note 1003 select <expr_cache><`test`.`t2`.`a`>((select '2' from `test`.`t1` where ('2' = `test`.`t2`.`a`) union select `test`.`t5`.`a` from `test`.`t5` where (`test`.`t5`.`a` = `test`.`t2`.`a`))) AS `(select a from t1 where t1.a=t2.a union select a from t5 where t5.a=t2.a)`,`test`.`t2`.`a` AS `a` from `test`.`t2`
select (select a from t1 where t1.a=t2.a union all select a from t5 where t5.a=t2.a), a from t2;
ERROR 21000: Subquery returns more than 1 row
create table t6 (patient_uq int, clinic_uq int, index i1 (clinic_uq));
@@ -338,7 +338,7 @@
2 DEPENDENT SUBQUERY t7 eq_ref PRIMARY PRIMARY 4 test.t6.clinic_uq 1 100.00 Using index
Warnings:
Note 1276 Field or reference 'test.t6.clinic_uq' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t6`.`patient_uq` AS `patient_uq`,`test`.`t6`.`clinic_uq` AS `clinic_uq` from `test`.`t6` where <expr_cache>(exists(select 1 from `test`.`t7` where (`test`.`t7`.`uq` = `test`.`t6`.`clinic_uq`)))
+Note 1003 select `test`.`t6`.`patient_uq` AS `patient_uq`,`test`.`t6`.`clinic_uq` AS `clinic_uq` from `test`.`t6` where <expr_cache><`test`.`t6`.`clinic_uq`>(exists(select 1 from `test`.`t7` where (`test`.`t7`.`uq` = `test`.`t6`.`clinic_uq`)))
select * from t1 where a= (select a from t2,t4 where t2.b=t4.b);
ERROR 23000: Column 'a' in field list is ambiguous
drop table t1,t2,t3;
@@ -749,7 +749,7 @@
3 DEPENDENT UNION NULL NULL NULL NULL NULL NULL NULL NULL No tables used
NULL UNION RESULT <union2,3> ALL NULL NULL NULL NULL NULL NULL
Warnings:
-Note 1003 select `test`.`t2`.`id` AS `id` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`id`,<exists>(select 1 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(1)) union select 3 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(3)))))
+Note 1003 select `test`.`t2`.`id` AS `id` from `test`.`t2` where <expr_cache><`test`.`t2`.`id`>(<in_optimizer>(`test`.`t2`.`id`,<exists>(select 1 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(1)) union select 3 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(3)))))
SELECT * FROM t2 WHERE id IN (SELECT 5 UNION SELECT 3);
id
SELECT * FROM t2 WHERE id IN (SELECT 5 UNION SELECT 2);
@@ -897,7 +897,7 @@
1 PRIMARY t1 index NULL PRIMARY 4 NULL 4 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery a a 5 func 2 100.00 Using index
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`a`) in t2 on a checking NULL having <is_not_null_test>(`test`.`t2`.`a`))))) AS `t1.a in (select t2.a from t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache><`test`.`t1`.`a`>(<in_optimizer>(`test`.`t1`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`a`) in t2 on a checking NULL having <is_not_null_test>(`test`.`t2`.`a`))))) AS `t1.a in (select t2.a from t2)` from `test`.`t1`
CREATE TABLE t3 (a int(11) default '0');
INSERT INTO t3 VALUES (1),(2),(3);
SELECT t1.a, t1.a in (select t2.a from t2,t3 where t3.a=t2.a) FROM t1;
@@ -912,7 +912,7 @@
2 DEPENDENT SUBQUERY t2 ref_or_null a a 5 func 2 100.00 Using index
2 DEPENDENT SUBQUERY t3 ALL NULL NULL NULL NULL 3 100.00 Using where; Using join buffer
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t2`.`a`) and ((<cache>(`test`.`t1`.`a`) = `test`.`t2`.`a`) or isnull(`test`.`t2`.`a`))) having <is_not_null_test>(`test`.`t2`.`a`)))) AS `t1.a in (select t2.a from t2,t3 where t3.a=t2.a)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache><`test`.`t1`.`a`>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t2`.`a`) and ((<cache>(`test`.`t1`.`a`) = `test`.`t2`.`a`) or isnull(`test`.`t2`.`a`))) having <is_not_null_test>(`test`.`t2`.`a`)))) AS `t1.a in (select t2.a from t2,t3 where t3.a=t2.a)` from `test`.`t1`
drop table t1,t2,t3;
create table t1 (a float);
select 10.5 IN (SELECT * from t1 LIMIT 1);
@@ -1473,25 +1473,25 @@
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache><`test`.`t1`.`s1`>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 = ANY (SELECT s1 FROM t2) from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))) AS `s1 = ANY (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,<expr_cache><`test`.`t1`.`s1`>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))) AS `s1 = ANY (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 <> ALL (SELECT s1 FROM t2) from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 <> ALL (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache><`test`.`t1`.`s1`>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 <> ALL (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2') from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Using where; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL where (`test`.`t2`.`s1` < 'a2') having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2')` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache><`test`.`t1`.`s1`>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL where (`test`.`t2`.`s1` < 'a2') having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2')` from `test`.`t1`
drop table t1,t2;
create table t2 (a int, b int);
create table t3 (a int);
@@ -1746,14 +1746,14 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 12 100.00 Using where
2 DEPENDENT SUBQUERY t1 unique_subquery PRIMARY PRIMARY 4 func 1 100.00 Using index; Using where
Warnings:
-Note 1003 select `test`.`t1`.`id` AS `id`,`test`.`t1`.`text` AS `text` from `test`.`t1` where (not(<expr_cache>(<in_optimizer>(`test`.`t1`.`id`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`id`) in t1 on PRIMARY where ((`test`.`t1`.`id` < 8) and (<cache>(`test`.`t1`.`id`) = `test`.`t1`.`id`))))))))
+Note 1003 select `test`.`t1`.`id` AS `id`,`test`.`t1`.`text` AS `text` from `test`.`t1` where (not(<expr_cache><`test`.`t1`.`id`>(<in_optimizer>(`test`.`t1`.`id`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`id`) in t1 on PRIMARY where ((`test`.`t1`.`id` < 8) and (<cache>(`test`.`t1`.`id`) = `test`.`t1`.`id`))))))))
explain extended select * from t1 as tt where not exists (select id from t1 where id < 8 and (id = tt.id or id is null) having id is not null);
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY tt ALL NULL NULL NULL NULL 12 100.00 Using where
2 DEPENDENT SUBQUERY t1 eq_ref PRIMARY PRIMARY 4 test.tt.id 1 100.00 Using where; Using index
Warnings:
Note 1276 Field or reference 'test.tt.id' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`tt`.`id` AS `id`,`test`.`tt`.`text` AS `text` from `test`.`t1` `tt` where (not(<expr_cache>(exists(select `test`.`t1`.`id` from `test`.`t1` where ((`test`.`t1`.`id` < 8) and (`test`.`t1`.`id` = `test`.`tt`.`id`)) having (`test`.`t1`.`id` is not null)))))
+Note 1003 select `test`.`tt`.`id` AS `id`,`test`.`tt`.`text` AS `text` from `test`.`t1` `tt` where (not(<expr_cache><`test`.`tt`.`id`>(exists(select `test`.`t1`.`id` from `test`.`t1` where ((`test`.`t1`.`id` < 8) and (`test`.`t1`.`id` = `test`.`tt`.`id`)) having (`test`.`t1`.`id` is not null)))))
insert into t1 (id, text) values (1000, 'text1000'), (1001, 'text1001');
create table t2 (id int not null, text varchar(20) not null default '', primary key (id));
insert into t2 (id, text) values (1, 'text1'), (2, 'text2'), (3, 'text3'), (4, 'text4'), (5, 'text5'), (6, 'text6'), (7, 'text7'), (8, 'text8'), (9, 'text9'), (10, 'text10'), (11, 'text1'), (12, 'text2'), (13, 'text3'), (14, 'text4'), (15, 'text5'), (16, 'text6'), (17, 'text7'), (18, 'text8'), (19, 'text9'), (20, 'text10'),(21, 'text1'), (22, 'text2'), (23, 'text3'), (24, 'text4'), (25, 'text5'), (26, 'text6'), (27, 'text7'), (28, 'text8'), (29, 'text9'), (30, 'text10'), (31, 'text1'), (32, 'text2'), (33, 'text3'), (34, 'text4'), (35, 'text5'), (36, 'text6'), (37, 'text7'), (38, 'text8'), (39, 'text9'), (40, 'text10'), (41, 'text1'), (42, 'text2'), (43, 'text3'), (44, 'text4'), (45, 'text5'), (46, 'text6'), (47, 'text7'), (48, 'text8'), (49, 'text9'), (50, 'text10');
@@ -2289,7 +2289,7 @@
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.up.a' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`up`.`a` AS `a`,`test`.`up`.`b` AS `b` from `test`.`t1` `up` where <expr_cache>(exists(select 1 from `test`.`t1` where (`test`.`t1`.`a` = `test`.`up`.`a`)))
+Note 1003 select `test`.`up`.`a` AS `a`,`test`.`up`.`b` AS `b` from `test`.`t1` `up` where <expr_cache><`test`.`up`.`a`>(exists(select 1 from `test`.`t1` where (`test`.`t1`.`a` = `test`.`up`.`a`)))
drop table t1;
CREATE TABLE t1 (t1_a int);
INSERT INTO t1 VALUES (1);
@@ -2830,7 +2830,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = '0') and trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`)))) having (trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache><`test`.`t1`.`two`,`test`.`t1`.`one`>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = '0') and trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`)))) having (trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
explain extended SELECT one,two from t1 where ROW(one,two) IN (SELECT one,two FROM t2 WHERE flag = 'N');
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
@@ -2842,7 +2842,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = '0') group by `test`.`t2`.`one`,`test`.`t2`.`two` having (trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`))) and trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache><`test`.`t1`.`two`,`test`.`t1`.`one`>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = '0') group by `test`.`t2`.`one`,`test`.`t2`.`two` having (trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`))) and trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
DROP TABLE t1,t2;
CREATE TABLE t1 (a char(5), b char(5));
INSERT INTO t1 VALUES (NULL,'aaa'), ('aaa','aaa');
@@ -4329,7 +4329,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using temporary; Using filesort
Warnings:
-Note 1003 select 1 AS `1` from `test`.`t1` where <expr_cache>(<in_optimizer>(1,<exists>(select 1 from `test`.`t1` group by `test`.`t1`.`a` having 1)))
+Note 1003 select 1 AS `1` from `test`.`t1` where <expr_cache><1>(<in_optimizer>(1,<exists>(select 1 from `test`.`t1` group by `test`.`t1`.`a` having 1)))
EXPLAIN EXTENDED SELECT 1 FROM t1 WHERE 1 IN (SELECT 1 FROM t1 WHERE a > 3 GROUP BY a);
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY NULL NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
=== modified file 'mysql-test/r/subselect_no_opts.result'
--- a/mysql-test/r/subselect_no_opts.result 2010-08-31 13:16:10 +0000
+++ b/mysql-test/r/subselect_no_opts.result 2010-09-05 19:36:12 +0000
@@ -53,7 +53,7 @@
Warnings:
Note 1276 Field or reference 'b.a' of SELECT #3 was resolved in SELECT #1
Note 1276 Field or reference 'b.a' of SELECT #3 was resolved in SELECT #1
-Note 1003 select 1 AS `1` from (select 1 AS `a`) `b` having (<expr_cache>((select '1')) = 1)
+Note 1003 select 1 AS `1` from (select 1 AS `a`) `b` having (<expr_cache><'1'>((select '1')) = 1)
SELECT 1 FROM (SELECT 1 as a) as b HAVING (SELECT a)=1;
1
1
@@ -317,7 +317,7 @@
Warnings:
Note 1276 Field or reference 'test.t2.a' of SELECT #2 was resolved in SELECT #1
Note 1276 Field or reference 'test.t2.a' of SELECT #3 was resolved in SELECT #1
-Note 1003 select <expr_cache>((select '2' from `test`.`t1` where ('2' = `test`.`t2`.`a`) union select `test`.`t5`.`a` from `test`.`t5` where (`test`.`t5`.`a` = `test`.`t2`.`a`))) AS `(select a from t1 where t1.a=t2.a union select a from t5 where t5.a=t2.a)`,`test`.`t2`.`a` AS `a` from `test`.`t2`
+Note 1003 select <expr_cache><`test`.`t2`.`a`>((select '2' from `test`.`t1` where ('2' = `test`.`t2`.`a`) union select `test`.`t5`.`a` from `test`.`t5` where (`test`.`t5`.`a` = `test`.`t2`.`a`))) AS `(select a from t1 where t1.a=t2.a union select a from t5 where t5.a=t2.a)`,`test`.`t2`.`a` AS `a` from `test`.`t2`
select (select a from t1 where t1.a=t2.a union all select a from t5 where t5.a=t2.a), a from t2;
ERROR 21000: Subquery returns more than 1 row
create table t6 (patient_uq int, clinic_uq int, index i1 (clinic_uq));
@@ -335,7 +335,7 @@
2 DEPENDENT SUBQUERY t7 eq_ref PRIMARY PRIMARY 4 test.t6.clinic_uq 1 100.00 Using index
Warnings:
Note 1276 Field or reference 'test.t6.clinic_uq' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t6`.`patient_uq` AS `patient_uq`,`test`.`t6`.`clinic_uq` AS `clinic_uq` from `test`.`t6` where <expr_cache>(exists(select 1 from `test`.`t7` where (`test`.`t7`.`uq` = `test`.`t6`.`clinic_uq`)))
+Note 1003 select `test`.`t6`.`patient_uq` AS `patient_uq`,`test`.`t6`.`clinic_uq` AS `clinic_uq` from `test`.`t6` where <expr_cache><`test`.`t6`.`clinic_uq`>(exists(select 1 from `test`.`t7` where (`test`.`t7`.`uq` = `test`.`t6`.`clinic_uq`)))
select * from t1 where a= (select a from t2,t4 where t2.b=t4.b);
ERROR 23000: Column 'a' in field list is ambiguous
drop table t1,t2,t3;
@@ -746,7 +746,7 @@
3 DEPENDENT UNION NULL NULL NULL NULL NULL NULL NULL NULL No tables used
NULL UNION RESULT <union2,3> ALL NULL NULL NULL NULL NULL NULL
Warnings:
-Note 1003 select `test`.`t2`.`id` AS `id` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`id`,<exists>(select 1 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(1)) union select 3 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(3)))))
+Note 1003 select `test`.`t2`.`id` AS `id` from `test`.`t2` where <expr_cache><`test`.`t2`.`id`>(<in_optimizer>(`test`.`t2`.`id`,<exists>(select 1 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(1)) union select 3 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(3)))))
SELECT * FROM t2 WHERE id IN (SELECT 5 UNION SELECT 3);
id
SELECT * FROM t2 WHERE id IN (SELECT 5 UNION SELECT 2);
@@ -894,7 +894,7 @@
1 PRIMARY t1 index NULL PRIMARY 4 NULL 4 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery a a 5 func 2 100.00 Using index
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`a`) in t2 on a checking NULL having <is_not_null_test>(`test`.`t2`.`a`))))) AS `t1.a in (select t2.a from t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache><`test`.`t1`.`a`>(<in_optimizer>(`test`.`t1`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`a`) in t2 on a checking NULL having <is_not_null_test>(`test`.`t2`.`a`))))) AS `t1.a in (select t2.a from t2)` from `test`.`t1`
CREATE TABLE t3 (a int(11) default '0');
INSERT INTO t3 VALUES (1),(2),(3);
SELECT t1.a, t1.a in (select t2.a from t2,t3 where t3.a=t2.a) FROM t1;
@@ -909,7 +909,7 @@
2 DEPENDENT SUBQUERY t2 ref_or_null a a 5 func 2 100.00 Using index
2 DEPENDENT SUBQUERY t3 ALL NULL NULL NULL NULL 3 100.00 Using where; Using join buffer
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t2`.`a`) and ((<cache>(`test`.`t1`.`a`) = `test`.`t2`.`a`) or isnull(`test`.`t2`.`a`))) having <is_not_null_test>(`test`.`t2`.`a`)))) AS `t1.a in (select t2.a from t2,t3 where t3.a=t2.a)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache><`test`.`t1`.`a`>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t2`.`a`) and ((<cache>(`test`.`t1`.`a`) = `test`.`t2`.`a`) or isnull(`test`.`t2`.`a`))) having <is_not_null_test>(`test`.`t2`.`a`)))) AS `t1.a in (select t2.a from t2,t3 where t3.a=t2.a)` from `test`.`t1`
drop table t1,t2,t3;
create table t1 (a float);
select 10.5 IN (SELECT * from t1 LIMIT 1);
@@ -1299,7 +1299,7 @@
1 PRIMARY t2 index NULL PRIMARY 4 NULL 4 100.00 Using where; Using index
2 DEPENDENT SUBQUERY t1 unique_subquery PRIMARY PRIMARY 4 func 1 100.00 Using index
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(<primary_index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on PRIMARY))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache><`test`.`t2`.`a`>(<in_optimizer>(`test`.`t2`.`a`,<exists>(<primary_index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on PRIMARY))))
select * from t2 where t2.a in (select a from t1 where t1.b <> 30);
a
2
@@ -1309,7 +1309,7 @@
1 PRIMARY t2 index NULL PRIMARY 4 NULL 4 100.00 Using where; Using index
2 DEPENDENT SUBQUERY t1 unique_subquery PRIMARY PRIMARY 4 func 1 100.00 Using where
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(<primary_index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on PRIMARY where ((`test`.`t1`.`b` <> 30) and (<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`))))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache><`test`.`t2`.`a`>(<in_optimizer>(`test`.`t2`.`a`,<exists>(<primary_index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on PRIMARY where ((`test`.`t1`.`b` <> 30) and (<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`))))))
select * from t2 where t2.a in (select t1.a from t1,t3 where t1.b=t3.a);
a
2
@@ -1320,7 +1320,7 @@
2 DEPENDENT SUBQUERY t1 eq_ref PRIMARY PRIMARY 4 func 1 100.00
2 DEPENDENT SUBQUERY t3 eq_ref PRIMARY PRIMARY 4 test.t1.b 1 100.00 Using index
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t1`.`b`) and (<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`)))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache><`test`.`t2`.`a`>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t1`.`b`) and (<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`)))))
drop table t1, t2, t3;
create table t1 (a int, b int, index a (a,b));
create table t2 (a int, index a (a));
@@ -1342,7 +1342,7 @@
1 PRIMARY t2 index NULL a 5 NULL 4 100.00 Using where; Using index
2 DEPENDENT SUBQUERY t1 index_subquery a a 5 func 1001 100.00 Using index
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache><`test`.`t2`.`a`>(<in_optimizer>(`test`.`t2`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a))))
select * from t2 where t2.a in (select a from t1 where t1.b <> 30);
a
2
@@ -1352,7 +1352,7 @@
1 PRIMARY t2 index NULL a 5 NULL 4 100.00 Using where; Using index
2 DEPENDENT SUBQUERY t1 index_subquery a a 5 func 1001 100.00 Using index; Using where
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a where ((`test`.`t1`.`b` <> 30) and (<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`))))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache><`test`.`t2`.`a`>(<in_optimizer>(`test`.`t2`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a where ((`test`.`t1`.`b` <> 30) and (<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`))))))
select * from t2 where t2.a in (select t1.a from t1,t3 where t1.b=t3.a);
a
2
@@ -1363,7 +1363,7 @@
2 DEPENDENT SUBQUERY t1 ref a a 5 func 1001 100.00 Using index
2 DEPENDENT SUBQUERY t3 index a a 5 NULL 3 100.00 Using where; Using index; Using join buffer
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t1`.`b`) and (<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`)))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache><`test`.`t2`.`a`>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t1`.`b`) and (<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`)))))
insert into t1 values (3,31);
select * from t2 where t2.a in (select a from t1 where t1.b <> 30);
a
@@ -1379,7 +1379,7 @@
1 PRIMARY t2 index NULL a 5 NULL 4 100.00 Using where; Using index
2 DEPENDENT SUBQUERY t1 index_subquery a a 5 func 1001 100.00 Using index; Using where
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a where ((`test`.`t1`.`b` <> 30) and (<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`))))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache><`test`.`t2`.`a`>(<in_optimizer>(`test`.`t2`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a where ((`test`.`t1`.`b` <> 30) and (<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`))))))
drop table t0, t1, t2, t3;
create table t1 (a int, b int);
create table t2 (a int, b int);
@@ -1470,25 +1470,25 @@
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache><`test`.`t1`.`s1`>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 = ANY (SELECT s1 FROM t2) from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))) AS `s1 = ANY (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,<expr_cache><`test`.`t1`.`s1`>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))) AS `s1 = ANY (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 <> ALL (SELECT s1 FROM t2) from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 <> ALL (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache><`test`.`t1`.`s1`>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 <> ALL (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2') from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Using where; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL where (`test`.`t2`.`s1` < 'a2') having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2')` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache><`test`.`t1`.`s1`>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL where (`test`.`t2`.`s1` < 'a2') having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2')` from `test`.`t1`
drop table t1,t2;
create table t2 (a int, b int);
create table t3 (a int);
@@ -1743,14 +1743,14 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 12 100.00 Using where
2 DEPENDENT SUBQUERY t1 unique_subquery PRIMARY PRIMARY 4 func 1 100.00 Using index; Using where
Warnings:
-Note 1003 select `test`.`t1`.`id` AS `id`,`test`.`t1`.`text` AS `text` from `test`.`t1` where (not(<expr_cache>(<in_optimizer>(`test`.`t1`.`id`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`id`) in t1 on PRIMARY where ((`test`.`t1`.`id` < 8) and (<cache>(`test`.`t1`.`id`) = `test`.`t1`.`id`))))))))
+Note 1003 select `test`.`t1`.`id` AS `id`,`test`.`t1`.`text` AS `text` from `test`.`t1` where (not(<expr_cache><`test`.`t1`.`id`>(<in_optimizer>(`test`.`t1`.`id`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`id`) in t1 on PRIMARY where ((`test`.`t1`.`id` < 8) and (<cache>(`test`.`t1`.`id`) = `test`.`t1`.`id`))))))))
explain extended select * from t1 as tt where not exists (select id from t1 where id < 8 and (id = tt.id or id is null) having id is not null);
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY tt ALL NULL NULL NULL NULL 12 100.00 Using where
2 DEPENDENT SUBQUERY t1 eq_ref PRIMARY PRIMARY 4 test.tt.id 1 100.00 Using where; Using index
Warnings:
Note 1276 Field or reference 'test.tt.id' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`tt`.`id` AS `id`,`test`.`tt`.`text` AS `text` from `test`.`t1` `tt` where (not(<expr_cache>(exists(select `test`.`t1`.`id` from `test`.`t1` where ((`test`.`t1`.`id` < 8) and (`test`.`t1`.`id` = `test`.`tt`.`id`)) having (`test`.`t1`.`id` is not null)))))
+Note 1003 select `test`.`tt`.`id` AS `id`,`test`.`tt`.`text` AS `text` from `test`.`t1` `tt` where (not(<expr_cache><`test`.`tt`.`id`>(exists(select `test`.`t1`.`id` from `test`.`t1` where ((`test`.`t1`.`id` < 8) and (`test`.`t1`.`id` = `test`.`tt`.`id`)) having (`test`.`t1`.`id` is not null)))))
insert into t1 (id, text) values (1000, 'text1000'), (1001, 'text1001');
create table t2 (id int not null, text varchar(20) not null default '', primary key (id));
insert into t2 (id, text) values (1, 'text1'), (2, 'text2'), (3, 'text3'), (4, 'text4'), (5, 'text5'), (6, 'text6'), (7, 'text7'), (8, 'text8'), (9, 'text9'), (10, 'text10'), (11, 'text1'), (12, 'text2'), (13, 'text3'), (14, 'text4'), (15, 'text5'), (16, 'text6'), (17, 'text7'), (18, 'text8'), (19, 'text9'), (20, 'text10'),(21, 'text1'), (22, 'text2'), (23, 'text3'), (24, 'text4'), (25, 'text5'), (26, 'text6'), (27, 'text7'), (28, 'text8'), (29, 'text9'), (30, 'text10'), (31, 'text1'), (32, 'text2'), (33, 'text3'), (34, 'text4'), (35, 'text5'), (36, 'text6'), (37, 'text7'), (38, 'text8'), (39, 'text9'), (40, 'text10'), (41, 'text1'), (42, 'text2'), (43, 'text3'), (44, 'text4'), (45, 'text5'), (46, 'text6'), (47, 'text7'), (48, 'text8'), (49, 'text9'), (50, 'text10');
@@ -2286,7 +2286,7 @@
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.up.a' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`up`.`a` AS `a`,`test`.`up`.`b` AS `b` from `test`.`t1` `up` where <expr_cache>(exists(select 1 from `test`.`t1` where (`test`.`t1`.`a` = `test`.`up`.`a`)))
+Note 1003 select `test`.`up`.`a` AS `a`,`test`.`up`.`b` AS `b` from `test`.`t1` `up` where <expr_cache><`test`.`up`.`a`>(exists(select 1 from `test`.`t1` where (`test`.`t1`.`a` = `test`.`up`.`a`)))
drop table t1;
CREATE TABLE t1 (t1_a int);
INSERT INTO t1 VALUES (1);
@@ -2827,19 +2827,19 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = '0') and trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`)))) having (trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache><`test`.`t1`.`two`,`test`.`t1`.`one`>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = '0') and trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`)))) having (trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
explain extended SELECT one,two from t1 where ROW(one,two) IN (SELECT one,two FROM t2 WHERE flag = 'N');
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00 Using where
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two` from `test`.`t1` where <expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = 'N') and (<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) and (<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`)))))
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two` from `test`.`t1` where <expr_cache><`test`.`t1`.`two`,`test`.`t1`.`one`>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = 'N') and (<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) and (<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`)))))
explain extended SELECT one,two,ROW(one,two) IN (SELECT one,two FROM t2 WHERE flag = '0' group by one,two) as 'test' from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = '0') group by `test`.`t2`.`one`,`test`.`t2`.`two` having (trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`))) and trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache><`test`.`t1`.`two`,`test`.`t1`.`one`>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = '0') group by `test`.`t2`.`one`,`test`.`t2`.`two` having (trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`))) and trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
DROP TABLE t1,t2;
CREATE TABLE t1 (a char(5), b char(5));
INSERT INTO t1 VALUES (NULL,'aaa'), ('aaa','aaa');
@@ -4326,7 +4326,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using temporary; Using filesort
Warnings:
-Note 1003 select 1 AS `1` from `test`.`t1` where <expr_cache>(<in_optimizer>(1,<exists>(select 1 from `test`.`t1` group by `test`.`t1`.`a` having 1)))
+Note 1003 select 1 AS `1` from `test`.`t1` where <expr_cache><1>(<in_optimizer>(1,<exists>(select 1 from `test`.`t1` group by `test`.`t1`.`a` having 1)))
EXPLAIN EXTENDED SELECT 1 FROM t1 WHERE 1 IN (SELECT 1 FROM t1 WHERE a > 3 GROUP BY a);
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY NULL NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
=== modified file 'mysql-test/r/subselect_no_semijoin.result'
--- a/mysql-test/r/subselect_no_semijoin.result 2010-08-31 13:16:10 +0000
+++ b/mysql-test/r/subselect_no_semijoin.result 2010-09-05 19:36:12 +0000
@@ -53,7 +53,7 @@
Warnings:
Note 1276 Field or reference 'b.a' of SELECT #3 was resolved in SELECT #1
Note 1276 Field or reference 'b.a' of SELECT #3 was resolved in SELECT #1
-Note 1003 select 1 AS `1` from (select 1 AS `a`) `b` having (<expr_cache>((select '1')) = 1)
+Note 1003 select 1 AS `1` from (select 1 AS `a`) `b` having (<expr_cache><'1'>((select '1')) = 1)
SELECT 1 FROM (SELECT 1 as a) as b HAVING (SELECT a)=1;
1
1
@@ -317,7 +317,7 @@
Warnings:
Note 1276 Field or reference 'test.t2.a' of SELECT #2 was resolved in SELECT #1
Note 1276 Field or reference 'test.t2.a' of SELECT #3 was resolved in SELECT #1
-Note 1003 select <expr_cache>((select '2' from `test`.`t1` where ('2' = `test`.`t2`.`a`) union select `test`.`t5`.`a` from `test`.`t5` where (`test`.`t5`.`a` = `test`.`t2`.`a`))) AS `(select a from t1 where t1.a=t2.a union select a from t5 where t5.a=t2.a)`,`test`.`t2`.`a` AS `a` from `test`.`t2`
+Note 1003 select <expr_cache><`test`.`t2`.`a`>((select '2' from `test`.`t1` where ('2' = `test`.`t2`.`a`) union select `test`.`t5`.`a` from `test`.`t5` where (`test`.`t5`.`a` = `test`.`t2`.`a`))) AS `(select a from t1 where t1.a=t2.a union select a from t5 where t5.a=t2.a)`,`test`.`t2`.`a` AS `a` from `test`.`t2`
select (select a from t1 where t1.a=t2.a union all select a from t5 where t5.a=t2.a), a from t2;
ERROR 21000: Subquery returns more than 1 row
create table t6 (patient_uq int, clinic_uq int, index i1 (clinic_uq));
@@ -335,7 +335,7 @@
2 DEPENDENT SUBQUERY t7 eq_ref PRIMARY PRIMARY 4 test.t6.clinic_uq 1 100.00 Using index
Warnings:
Note 1276 Field or reference 'test.t6.clinic_uq' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t6`.`patient_uq` AS `patient_uq`,`test`.`t6`.`clinic_uq` AS `clinic_uq` from `test`.`t6` where <expr_cache>(exists(select 1 from `test`.`t7` where (`test`.`t7`.`uq` = `test`.`t6`.`clinic_uq`)))
+Note 1003 select `test`.`t6`.`patient_uq` AS `patient_uq`,`test`.`t6`.`clinic_uq` AS `clinic_uq` from `test`.`t6` where <expr_cache><`test`.`t6`.`clinic_uq`>(exists(select 1 from `test`.`t7` where (`test`.`t7`.`uq` = `test`.`t6`.`clinic_uq`)))
select * from t1 where a= (select a from t2,t4 where t2.b=t4.b);
ERROR 23000: Column 'a' in field list is ambiguous
drop table t1,t2,t3;
@@ -746,7 +746,7 @@
3 DEPENDENT UNION NULL NULL NULL NULL NULL NULL NULL NULL No tables used
NULL UNION RESULT <union2,3> ALL NULL NULL NULL NULL NULL NULL
Warnings:
-Note 1003 select `test`.`t2`.`id` AS `id` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`id`,<exists>(select 1 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(1)) union select 3 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(3)))))
+Note 1003 select `test`.`t2`.`id` AS `id` from `test`.`t2` where <expr_cache><`test`.`t2`.`id`>(<in_optimizer>(`test`.`t2`.`id`,<exists>(select 1 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(1)) union select 3 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(3)))))
SELECT * FROM t2 WHERE id IN (SELECT 5 UNION SELECT 3);
id
SELECT * FROM t2 WHERE id IN (SELECT 5 UNION SELECT 2);
@@ -894,7 +894,7 @@
1 PRIMARY t1 index NULL PRIMARY 4 NULL 4 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery a a 5 func 2 100.00 Using index
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`a`) in t2 on a checking NULL having <is_not_null_test>(`test`.`t2`.`a`))))) AS `t1.a in (select t2.a from t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache><`test`.`t1`.`a`>(<in_optimizer>(`test`.`t1`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`a`) in t2 on a checking NULL having <is_not_null_test>(`test`.`t2`.`a`))))) AS `t1.a in (select t2.a from t2)` from `test`.`t1`
CREATE TABLE t3 (a int(11) default '0');
INSERT INTO t3 VALUES (1),(2),(3);
SELECT t1.a, t1.a in (select t2.a from t2,t3 where t3.a=t2.a) FROM t1;
@@ -909,7 +909,7 @@
2 DEPENDENT SUBQUERY t2 ref_or_null a a 5 func 2 100.00 Using index
2 DEPENDENT SUBQUERY t3 ALL NULL NULL NULL NULL 3 100.00 Using where; Using join buffer
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t2`.`a`) and ((<cache>(`test`.`t1`.`a`) = `test`.`t2`.`a`) or isnull(`test`.`t2`.`a`))) having <is_not_null_test>(`test`.`t2`.`a`)))) AS `t1.a in (select t2.a from t2,t3 where t3.a=t2.a)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache><`test`.`t1`.`a`>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t2`.`a`) and ((<cache>(`test`.`t1`.`a`) = `test`.`t2`.`a`) or isnull(`test`.`t2`.`a`))) having <is_not_null_test>(`test`.`t2`.`a`)))) AS `t1.a in (select t2.a from t2,t3 where t3.a=t2.a)` from `test`.`t1`
drop table t1,t2,t3;
create table t1 (a float);
select 10.5 IN (SELECT * from t1 LIMIT 1);
@@ -1299,7 +1299,7 @@
1 PRIMARY t2 index NULL PRIMARY 4 NULL 4 100.00 Using where; Using index
2 SUBQUERY t1 index NULL PRIMARY 4 NULL 4 100.00 Using index
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`))))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache><`test`.`t2`.`a`>(<in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`))))))
select * from t2 where t2.a in (select a from t1 where t1.b <> 30);
a
2
@@ -1309,7 +1309,7 @@
1 PRIMARY t2 index NULL PRIMARY 4 NULL 4 100.00 Using where; Using index
2 SUBQUERY t1 ALL NULL NULL NULL NULL 4 100.00 Using where
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` where (`test`.`t1`.`b` <> 30) ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`))))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache><`test`.`t2`.`a`>(<in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` where (`test`.`t1`.`b` <> 30) ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`))))))
select * from t2 where t2.a in (select t1.a from t1,t3 where t1.b=t3.a);
a
2
@@ -1320,7 +1320,7 @@
2 SUBQUERY t3 index PRIMARY PRIMARY 4 NULL 3 100.00 Using index
2 SUBQUERY t1 ALL NULL NULL NULL NULL 4 100.00 Using where; Using join buffer
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` join `test`.`t3` where (`test`.`t1`.`b` = `test`.`t3`.`a`) ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`))))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache><`test`.`t2`.`a`>(<in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` join `test`.`t3` where (`test`.`t1`.`b` = `test`.`t3`.`a`) ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`))))))
drop table t1, t2, t3;
create table t1 (a int, b int, index a (a,b));
create table t2 (a int, index a (a));
@@ -1342,7 +1342,7 @@
1 PRIMARY t2 index NULL a 5 NULL 4 100.00 Using where; Using index
2 SUBQUERY t1 index NULL a 10 NULL 10004 100.00 Using index
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`))))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache><`test`.`t2`.`a`>(<in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`))))))
select * from t2 where t2.a in (select a from t1 where t1.b <> 30);
a
2
@@ -1352,7 +1352,7 @@
1 PRIMARY t2 index NULL a 5 NULL 4 100.00 Using where; Using index
2 SUBQUERY t1 index NULL a 10 NULL 10004 100.00 Using where; Using index
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` where (`test`.`t1`.`b` <> 30) ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`))))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache><`test`.`t2`.`a`>(<in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` where (`test`.`t1`.`b` <> 30) ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`))))))
select * from t2 where t2.a in (select t1.a from t1,t3 where t1.b=t3.a);
a
2
@@ -1363,7 +1363,7 @@
2 SUBQUERY t3 index a a 5 NULL 3 100.00 Using index
2 SUBQUERY t1 index NULL a 10 NULL 10004 100.00 Using where; Using index; Using join buffer
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` join `test`.`t3` where (`test`.`t1`.`b` = `test`.`t3`.`a`) ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`))))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache><`test`.`t2`.`a`>(<in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` join `test`.`t3` where (`test`.`t1`.`b` = `test`.`t3`.`a`) ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`))))))
insert into t1 values (3,31);
select * from t2 where t2.a in (select a from t1 where t1.b <> 30);
a
@@ -1379,7 +1379,7 @@
1 PRIMARY t2 index NULL a 5 NULL 4 100.00 Using where; Using index
2 SUBQUERY t1 index NULL a 10 NULL 10005 100.00 Using where; Using index
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` where (`test`.`t1`.`b` <> 30) ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`))))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache><`test`.`t2`.`a`>(<in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` where (`test`.`t1`.`b` <> 30) ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`))))))
drop table t0, t1, t2, t3;
create table t1 (a int, b int);
create table t2 (a int, b int);
@@ -1470,25 +1470,25 @@
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache><`test`.`t1`.`s1`>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 = ANY (SELECT s1 FROM t2) from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))) AS `s1 = ANY (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,<expr_cache><`test`.`t1`.`s1`>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))) AS `s1 = ANY (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 <> ALL (SELECT s1 FROM t2) from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 <> ALL (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache><`test`.`t1`.`s1`>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 <> ALL (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2') from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Using where; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL where (`test`.`t2`.`s1` < 'a2') having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2')` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache><`test`.`t1`.`s1`>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL where (`test`.`t2`.`s1` < 'a2') having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2')` from `test`.`t1`
drop table t1,t2;
create table t2 (a int, b int);
create table t3 (a int);
@@ -1743,14 +1743,14 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 12 100.00 Using where
2 DEPENDENT SUBQUERY t1 unique_subquery PRIMARY PRIMARY 4 func 1 100.00 Using index; Using where
Warnings:
-Note 1003 select `test`.`t1`.`id` AS `id`,`test`.`t1`.`text` AS `text` from `test`.`t1` where (not(<expr_cache>(<in_optimizer>(`test`.`t1`.`id`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`id`) in t1 on PRIMARY where ((`test`.`t1`.`id` < 8) and (<cache>(`test`.`t1`.`id`) = `test`.`t1`.`id`))))))))
+Note 1003 select `test`.`t1`.`id` AS `id`,`test`.`t1`.`text` AS `text` from `test`.`t1` where (not(<expr_cache><`test`.`t1`.`id`>(<in_optimizer>(`test`.`t1`.`id`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`id`) in t1 on PRIMARY where ((`test`.`t1`.`id` < 8) and (<cache>(`test`.`t1`.`id`) = `test`.`t1`.`id`))))))))
explain extended select * from t1 as tt where not exists (select id from t1 where id < 8 and (id = tt.id or id is null) having id is not null);
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY tt ALL NULL NULL NULL NULL 12 100.00 Using where
2 DEPENDENT SUBQUERY t1 eq_ref PRIMARY PRIMARY 4 test.tt.id 1 100.00 Using where; Using index
Warnings:
Note 1276 Field or reference 'test.tt.id' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`tt`.`id` AS `id`,`test`.`tt`.`text` AS `text` from `test`.`t1` `tt` where (not(<expr_cache>(exists(select `test`.`t1`.`id` from `test`.`t1` where ((`test`.`t1`.`id` < 8) and (`test`.`t1`.`id` = `test`.`tt`.`id`)) having (`test`.`t1`.`id` is not null)))))
+Note 1003 select `test`.`tt`.`id` AS `id`,`test`.`tt`.`text` AS `text` from `test`.`t1` `tt` where (not(<expr_cache><`test`.`tt`.`id`>(exists(select `test`.`t1`.`id` from `test`.`t1` where ((`test`.`t1`.`id` < 8) and (`test`.`t1`.`id` = `test`.`tt`.`id`)) having (`test`.`t1`.`id` is not null)))))
insert into t1 (id, text) values (1000, 'text1000'), (1001, 'text1001');
create table t2 (id int not null, text varchar(20) not null default '', primary key (id));
insert into t2 (id, text) values (1, 'text1'), (2, 'text2'), (3, 'text3'), (4, 'text4'), (5, 'text5'), (6, 'text6'), (7, 'text7'), (8, 'text8'), (9, 'text9'), (10, 'text10'), (11, 'text1'), (12, 'text2'), (13, 'text3'), (14, 'text4'), (15, 'text5'), (16, 'text6'), (17, 'text7'), (18, 'text8'), (19, 'text9'), (20, 'text10'),(21, 'text1'), (22, 'text2'), (23, 'text3'), (24, 'text4'), (25, 'text5'), (26, 'text6'), (27, 'text7'), (28, 'text8'), (29, 'text9'), (30, 'text10'), (31, 'text1'), (32, 'text2'), (33, 'text3'), (34, 'text4'), (35, 'text5'), (36, 'text6'), (37, 'text7'), (38, 'text8'), (39, 'text9'), (40, 'text10'), (41, 'text1'), (42, 'text2'), (43, 'text3'), (44, 'text4'), (45, 'text5'), (46, 'text6'), (47, 'text7'), (48, 'text8'), (49, 'text9'), (50, 'text10');
@@ -2286,7 +2286,7 @@
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.up.a' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`up`.`a` AS `a`,`test`.`up`.`b` AS `b` from `test`.`t1` `up` where <expr_cache>(exists(select 1 from `test`.`t1` where (`test`.`t1`.`a` = `test`.`up`.`a`)))
+Note 1003 select `test`.`up`.`a` AS `a`,`test`.`up`.`b` AS `b` from `test`.`t1` `up` where <expr_cache><`test`.`up`.`a`>(exists(select 1 from `test`.`t1` where (`test`.`t1`.`a` = `test`.`up`.`a`)))
drop table t1;
CREATE TABLE t1 (t1_a int);
INSERT INTO t1 VALUES (1);
@@ -2827,19 +2827,19 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = '0') and trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`)))) having (trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache><`test`.`t1`.`two`,`test`.`t1`.`one`>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = '0') and trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`)))) having (trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
explain extended SELECT one,two from t1 where ROW(one,two) IN (SELECT one,two FROM t2 WHERE flag = 'N');
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00 Using where
2 SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two` from `test`.`t1` where <expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),(`test`.`t1`.`one`,`test`.`t1`.`two`) in ( <materialize> (select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = 'N') ), <primary_index_lookup>(`test`.`t1`.`one` in <temporary table> on distinct_key where ((`test`.`t1`.`one` = `materialized subselect`.`one`) and (`test`.`t1`.`two` = `materialized subselect`.`two`))))))
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two` from `test`.`t1` where <expr_cache><`test`.`t1`.`two`,`test`.`t1`.`one`>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),(`test`.`t1`.`one`,`test`.`t1`.`two`) in ( <materialize> (select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = 'N') ), <primary_index_lookup>(`test`.`t1`.`one` in <temporary table> on distinct_key where ((`test`.`t1`.`one` = `materialized subselect`.`one`) and (`test`.`t1`.`two` = `materialized subselect`.`two`))))))
explain extended SELECT one,two,ROW(one,two) IN (SELECT one,two FROM t2 WHERE flag = '0' group by one,two) as 'test' from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = '0') group by `test`.`t2`.`one`,`test`.`t2`.`two` having (trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`))) and trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache><`test`.`t1`.`two`,`test`.`t1`.`one`>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = '0') group by `test`.`t2`.`one`,`test`.`t2`.`two` having (trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`))) and trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
DROP TABLE t1,t2;
CREATE TABLE t1 (a char(5), b char(5));
INSERT INTO t1 VALUES (NULL,'aaa'), ('aaa','aaa');
@@ -4326,13 +4326,13 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00 Using where
2 SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using temporary; Using filesort
Warnings:
-Note 1003 select 1 AS `1` from `test`.`t1` where <expr_cache>(<in_optimizer>(1,1 in ( <materialize> (select 1 from `test`.`t1` group by `test`.`t1`.`a` ), <primary_index_lookup>(1 in <temporary table> on distinct_key where ((1 = `materialized subselect`.`1`))))))
+Note 1003 select 1 AS `1` from `test`.`t1` where <expr_cache><1>(<in_optimizer>(1,1 in ( <materialize> (select 1 from `test`.`t1` group by `test`.`t1`.`a` ), <primary_index_lookup>(1 in <temporary table> on distinct_key where ((1 = `materialized subselect`.`1`))))))
EXPLAIN EXTENDED SELECT 1 FROM t1 WHERE 1 IN (SELECT 1 FROM t1 WHERE a > 3 GROUP BY a);
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00 Using where
2 SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select 1 AS `1` from `test`.`t1` where <expr_cache>(<in_optimizer>(1,1 in ( <materialize> (select 1 from `test`.`t1` where (`test`.`t1`.`a` > 3) group by `test`.`t1`.`a` ), <primary_index_lookup>(1 in <temporary table> on distinct_key where ((1 = `materialized subselect`.`1`))))))
+Note 1003 select 1 AS `1` from `test`.`t1` where <expr_cache><1>(<in_optimizer>(1,1 in ( <materialize> (select 1 from `test`.`t1` where (`test`.`t1`.`a` > 3) group by `test`.`t1`.`a` ), <primary_index_lookup>(1 in <temporary table> on distinct_key where ((1 = `materialized subselect`.`1`))))))
DROP TABLE t1;
#
# Bug#45061: Incorrectly market field caused wrong result.
=== modified file 'mysql-test/r/subselect_sj.result'
--- a/mysql-test/r/subselect_sj.result 2010-08-31 13:16:10 +0000
+++ b/mysql-test/r/subselect_sj.result 2010-09-05 19:36:12 +0000
@@ -772,11 +772,11 @@
3 DEPENDENT SUBQUERY t3 ALL NULL NULL NULL NULL 4 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.t1.b' of SELECT #3 was resolved in SELECT #1
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t1`.`a` = `test`.`t2`.`c`) and <nop>(<expr_cache>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select 1 from `test`.`t3` where ((`test`.`t1`.`b` = `test`.`t3`.`e`) and (<cache>(`test`.`t2`.`d`) >= `test`.`t3`.`e`)))))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t1`.`a` = `test`.`t2`.`c`) and <nop>(<expr_cache><`test`.`t2`.`d`,`test`.`t1`.`b`>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select 1 from `test`.`t3` where ((`test`.`t1`.`b` = `test`.`t3`.`e`) and (<cache>(`test`.`t2`.`d`) >= `test`.`t3`.`e`)))))))
show warnings;
Level Code Message
Note 1276 Field or reference 'test.t1.b' of SELECT #3 was resolved in SELECT #1
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t1`.`a` = `test`.`t2`.`c`) and <nop>(<expr_cache>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select 1 from `test`.`t3` where ((`test`.`t1`.`b` = `test`.`t3`.`e`) and (<cache>(`test`.`t2`.`d`) >= `test`.`t3`.`e`)))))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t1`.`a` = `test`.`t2`.`c`) and <nop>(<expr_cache><`test`.`t2`.`d`,`test`.`t1`.`b`>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select 1 from `test`.`t3` where ((`test`.`t1`.`b` = `test`.`t3`.`e`) and (<cache>(`test`.`t2`.`d`) >= `test`.`t3`.`e`)))))))
select a from t1
where a in (select c from t2 where d >= some(select e from t3 where b=e));
a
=== modified file 'mysql-test/r/subselect_sj_jcl6.result'
--- a/mysql-test/r/subselect_sj_jcl6.result 2010-08-31 13:16:10 +0000
+++ b/mysql-test/r/subselect_sj_jcl6.result 2010-09-05 19:36:12 +0000
@@ -776,11 +776,11 @@
3 DEPENDENT SUBQUERY t3 ALL NULL NULL NULL NULL 4 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.t1.b' of SELECT #3 was resolved in SELECT #1
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t1`.`a` = `test`.`t2`.`c`) and <nop>(<expr_cache>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select 1 from `test`.`t3` where ((`test`.`t1`.`b` = `test`.`t3`.`e`) and (<cache>(`test`.`t2`.`d`) >= `test`.`t3`.`e`)))))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t1`.`a` = `test`.`t2`.`c`) and <nop>(<expr_cache><`test`.`t2`.`d`,`test`.`t1`.`b`>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select 1 from `test`.`t3` where ((`test`.`t1`.`b` = `test`.`t3`.`e`) and (<cache>(`test`.`t2`.`d`) >= `test`.`t3`.`e`)))))))
show warnings;
Level Code Message
Note 1276 Field or reference 'test.t1.b' of SELECT #3 was resolved in SELECT #1
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t1`.`a` = `test`.`t2`.`c`) and <nop>(<expr_cache>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select 1 from `test`.`t3` where ((`test`.`t1`.`b` = `test`.`t3`.`e`) and (<cache>(`test`.`t2`.`d`) >= `test`.`t3`.`e`)))))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t1`.`a` = `test`.`t2`.`c`) and <nop>(<expr_cache><`test`.`t2`.`d`,`test`.`t1`.`b`>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select 1 from `test`.`t3` where ((`test`.`t1`.`b` = `test`.`t3`.`e`) and (<cache>(`test`.`t2`.`d`) >= `test`.`t3`.`e`)))))))
select a from t1
where a in (select c from t2 where d >= some(select e from t3 where b=e));
a
=== modified file 'mysql-test/suite/pbxt/r/subselect.result'
--- a/mysql-test/suite/pbxt/r/subselect.result 2010-09-01 14:42:41 +0000
+++ b/mysql-test/suite/pbxt/r/subselect.result 2010-09-05 19:36:12 +0000
@@ -50,7 +50,7 @@
Warnings:
Note 1276 Field or reference 'b.a' of SELECT #3 was resolved in SELECT #1
Note 1276 Field or reference 'b.a' of SELECT #3 was resolved in SELECT #1
-Note 1003 select 1 AS `1` from (select 1 AS `a`) `b` having (<expr_cache>((select '1')) = 1)
+Note 1003 select 1 AS `1` from (select 1 AS `a`) `b` having (<expr_cache><'1'>((select '1')) = 1)
SELECT 1 FROM (SELECT 1 as a) as b HAVING (SELECT a)=1;
1
1
@@ -314,7 +314,7 @@
Warnings:
Note 1276 Field or reference 'test.t2.a' of SELECT #2 was resolved in SELECT #1
Note 1276 Field or reference 'test.t2.a' of SELECT #3 was resolved in SELECT #1
-Note 1003 select <expr_cache>((select `test`.`t1`.`a` from `test`.`t1` where (`test`.`t1`.`a` = `test`.`t2`.`a`) union select `test`.`t5`.`a` from `test`.`t5` where (`test`.`t5`.`a` = `test`.`t2`.`a`))) AS `(select a from t1 where t1.a=t2.a union select a from t5 where t5.a=t2.a)`,`test`.`t2`.`a` AS `a` from `test`.`t2`
+Note 1003 select <expr_cache><`test`.`t2`.`a`>((select `test`.`t1`.`a` from `test`.`t1` where (`test`.`t1`.`a` = `test`.`t2`.`a`) union select `test`.`t5`.`a` from `test`.`t5` where (`test`.`t5`.`a` = `test`.`t2`.`a`))) AS `(select a from t1 where t1.a=t2.a union select a from t5 where t5.a=t2.a)`,`test`.`t2`.`a` AS `a` from `test`.`t2`
select (select a from t1 where t1.a=t2.a union all select a from t5 where t5.a=t2.a), a from t2;
ERROR 21000: Subquery returns more than 1 row
create table t6 (patient_uq int, clinic_uq int, index i1 (clinic_uq));
@@ -332,7 +332,7 @@
2 DEPENDENT SUBQUERY t7 eq_ref PRIMARY PRIMARY 4 test.t6.clinic_uq 1 100.00 Using index
Warnings:
Note 1276 Field or reference 'test.t6.clinic_uq' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t6`.`patient_uq` AS `patient_uq`,`test`.`t6`.`clinic_uq` AS `clinic_uq` from `test`.`t6` where <expr_cache>(exists(select 1 from `test`.`t7` where (`test`.`t7`.`uq` = `test`.`t6`.`clinic_uq`)))
+Note 1003 select `test`.`t6`.`patient_uq` AS `patient_uq`,`test`.`t6`.`clinic_uq` AS `clinic_uq` from `test`.`t6` where <expr_cache><`test`.`t6`.`clinic_uq`>(exists(select 1 from `test`.`t7` where (`test`.`t7`.`uq` = `test`.`t6`.`clinic_uq`)))
select * from t1 where a= (select a from t2,t4 where t2.b=t4.b);
ERROR 23000: Column 'a' in field list is ambiguous
drop table t1,t2,t3;
@@ -743,7 +743,7 @@
3 DEPENDENT UNION NULL NULL NULL NULL NULL NULL NULL NULL No tables used
NULL UNION RESULT <union2,3> ALL NULL NULL NULL NULL NULL NULL
Warnings:
-Note 1003 select `test`.`t2`.`id` AS `id` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`id`,<exists>(select 1 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(1)) union select 3 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(3)))))
+Note 1003 select `test`.`t2`.`id` AS `id` from `test`.`t2` where <expr_cache><`test`.`t2`.`id`>(<in_optimizer>(`test`.`t2`.`id`,<exists>(select 1 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(1)) union select 3 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(3)))))
SELECT * FROM t2 WHERE id IN (SELECT 5 UNION SELECT 3);
id
SELECT * FROM t2 WHERE id IN (SELECT 5 UNION SELECT 2);
@@ -893,7 +893,7 @@
1 PRIMARY t1 index NULL PRIMARY 4 NULL 4 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery a a 5 func 2 100.00 Using index
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`a`) in t2 on a checking NULL having <is_not_null_test>(`test`.`t2`.`a`))))) AS `t1.a in (select t2.a from t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache><`test`.`t1`.`a`>(<in_optimizer>(`test`.`t1`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`a`) in t2 on a checking NULL having <is_not_null_test>(`test`.`t2`.`a`))))) AS `t1.a in (select t2.a from t2)` from `test`.`t1`
CREATE TABLE t3 (a int(11) default '0');
INSERT INTO t3 VALUES (1),(2),(3);
SELECT t1.a, t1.a in (select t2.a from t2,t3 where t3.a=t2.a) FROM t1;
@@ -908,7 +908,7 @@
2 DEPENDENT SUBQUERY t2 ref_or_null a a 5 func 2 100.00 Using index
2 DEPENDENT SUBQUERY t3 ALL NULL NULL NULL NULL 3 100.00 Using where; Using join buffer
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t2`.`a`) and ((<cache>(`test`.`t1`.`a`) = `test`.`t2`.`a`) or isnull(`test`.`t2`.`a`))) having <is_not_null_test>(`test`.`t2`.`a`)))) AS `t1.a in (select t2.a from t2,t3 where t3.a=t2.a)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache><`test`.`t1`.`a`>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t2`.`a`) and ((<cache>(`test`.`t1`.`a`) = `test`.`t2`.`a`) or isnull(`test`.`t2`.`a`))) having <is_not_null_test>(`test`.`t2`.`a`)))) AS `t1.a in (select t2.a from t2,t3 where t3.a=t2.a)` from `test`.`t1`
drop table t1,t2,t3;
create table t1 (a float);
select 10.5 IN (SELECT * from t1 LIMIT 1);
@@ -1465,25 +1465,25 @@
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache><`test`.`t1`.`s1`>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 = ANY (SELECT s1 FROM t2) from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))) AS `s1 = ANY (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,<expr_cache><`test`.`t1`.`s1`>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))) AS `s1 = ANY (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 <> ALL (SELECT s1 FROM t2) from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 <> ALL (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache><`test`.`t1`.`s1`>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 <> ALL (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2') from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Using where; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL where (`test`.`t2`.`s1` < 'a2') having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2')` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache><`test`.`t1`.`s1`>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL where (`test`.`t2`.`s1` < 'a2') having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2')` from `test`.`t1`
drop table t1,t2;
create table t2 (a int, b int);
create table t3 (a int);
@@ -1738,14 +1738,14 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 12 100.00 Using where
2 DEPENDENT SUBQUERY t1 unique_subquery PRIMARY PRIMARY 4 func 1 100.00 Using index; Using where
Warnings:
-Note 1003 select `test`.`t1`.`id` AS `id`,`test`.`t1`.`text` AS `text` from `test`.`t1` where (not(<expr_cache>(<in_optimizer>(`test`.`t1`.`id`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`id`) in t1 on PRIMARY where ((`test`.`t1`.`id` < 8) and (<cache>(`test`.`t1`.`id`) = `test`.`t1`.`id`))))))))
+Note 1003 select `test`.`t1`.`id` AS `id`,`test`.`t1`.`text` AS `text` from `test`.`t1` where (not(<expr_cache><`test`.`t1`.`id`>(<in_optimizer>(`test`.`t1`.`id`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`id`) in t1 on PRIMARY where ((`test`.`t1`.`id` < 8) and (<cache>(`test`.`t1`.`id`) = `test`.`t1`.`id`))))))))
explain extended select * from t1 as tt where not exists (select id from t1 where id < 8 and (id = tt.id or id is null) having id is not null);
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY tt ALL NULL NULL NULL NULL 12 100.00 Using where
2 DEPENDENT SUBQUERY t1 eq_ref PRIMARY PRIMARY 4 test.tt.id 1 100.00 Using where; Using index
Warnings:
Note 1276 Field or reference 'test.tt.id' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`tt`.`id` AS `id`,`test`.`tt`.`text` AS `text` from `test`.`t1` `tt` where (not(<expr_cache>(exists(select `test`.`t1`.`id` from `test`.`t1` where ((`test`.`t1`.`id` < 8) and (`test`.`t1`.`id` = `test`.`tt`.`id`)) having (`test`.`t1`.`id` is not null)))))
+Note 1003 select `test`.`tt`.`id` AS `id`,`test`.`tt`.`text` AS `text` from `test`.`t1` `tt` where (not(<expr_cache><`test`.`tt`.`id`>(exists(select `test`.`t1`.`id` from `test`.`t1` where ((`test`.`t1`.`id` < 8) and (`test`.`t1`.`id` = `test`.`tt`.`id`)) having (`test`.`t1`.`id` is not null)))))
insert into t1 (id, text) values (1000, 'text1000'), (1001, 'text1001');
create table t2 (id int not null, text varchar(20) not null default '', primary key (id));
insert into t2 (id, text) values (1, 'text1'), (2, 'text2'), (3, 'text3'), (4, 'text4'), (5, 'text5'), (6, 'text6'), (7, 'text7'), (8, 'text8'), (9, 'text9'), (10, 'text10'), (11, 'text1'), (12, 'text2'), (13, 'text3'), (14, 'text4'), (15, 'text5'), (16, 'text6'), (17, 'text7'), (18, 'text8'), (19, 'text9'), (20, 'text10'),(21, 'text1'), (22, 'text2'), (23, 'text3'), (24, 'text4'), (25, 'text5'), (26, 'text6'), (27, 'text7'), (28, 'text8'), (29, 'text9'), (30, 'text10'), (31, 'text1'), (32, 'text2'), (33, 'text3'), (34, 'text4'), (35, 'text5'), (36, 'text6'), (37, 'text7'), (38, 'text8'), (39, 'text9'), (40, 'text10'), (41, 'text1'), (42, 'text2'), (43, 'text3'), (44, 'text4'), (45, 'text5'), (46, 'text6'), (47, 'text7'), (48, 'text8'), (49, 'text9'), (50, 'text10');
@@ -2282,7 +2282,7 @@
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.up.a' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`up`.`a` AS `a`,`test`.`up`.`b` AS `b` from `test`.`t1` `up` where <expr_cache>(exists(select 1 from `test`.`t1` where (`test`.`t1`.`a` = `test`.`up`.`a`)))
+Note 1003 select `test`.`up`.`a` AS `a`,`test`.`up`.`b` AS `b` from `test`.`t1` `up` where <expr_cache><`test`.`up`.`a`>(exists(select 1 from `test`.`t1` where (`test`.`t1`.`a` = `test`.`up`.`a`)))
drop table t1;
CREATE TABLE t1 (t1_a int);
INSERT INTO t1 VALUES (1);
@@ -2825,7 +2825,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = '0') and trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`)))) having (trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache><`test`.`t1`.`two`,`test`.`t1`.`one`>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = '0') and trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`)))) having (trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
explain extended SELECT one,two from t1 where ROW(one,two) IN (SELECT one,two FROM t2 WHERE flag = 'N');
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
@@ -2837,7 +2837,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = '0') group by `test`.`t2`.`one`,`test`.`t2`.`two` having (trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`))) and trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache><`test`.`t1`.`two`,`test`.`t1`.`one`>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = '0') group by `test`.`t2`.`one`,`test`.`t2`.`two` having (trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`))) and trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
DROP TABLE t1,t2;
set @@optimizer_switch=@save_optimizer_switch;
CREATE TABLE t1 (a char(5), b char(5));
@@ -4284,7 +4284,7 @@
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 2 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.t1.a' of SELECT #2 was resolved in SELECT #1
-Note 1003 select 2 AS `2` from `test`.`t1` where <expr_cache>(exists(select 1 from `test`.`t2` where (`test`.`t1`.`a` = `test`.`t2`.`a`)))
+Note 1003 select 2 AS `2` from `test`.`t1` where <expr_cache><`test`.`t1`.`a`>(exists(select 1 from `test`.`t2` where (`test`.`t1`.`a` = `test`.`t2`.`a`)))
EXPLAIN EXTENDED
SELECT 2 FROM t1 WHERE EXISTS ((SELECT 1 FROM t2 WHERE t1.a=t2.a) UNION
(SELECT 1 FROM t2 WHERE t1.a = t2.a));
=== modified file 'mysql-test/t/subselect_cache.test'
--- a/mysql-test/t/subselect_cache.test 2010-08-09 10:00:58 +0000
+++ b/mysql-test/t/subselect_cache.test 2010-09-05 19:36:12 +0000
@@ -1470,3 +1470,18 @@
drop table t1,t2,t3;
set @@optimizer_switch= default;
+
+#
+--echo # LP BUG#615760 (double transformation)
+#
+create table t1 (a int);
+insert into t1 values (1),(2);
+create table t2 (b int);
+insert into t2 values (1),(2);
+set optimizer_switch='default,semijoin=off,materialization=off,subquery_cache=on';
+
+explain extended
+select * from t1 where a in (select b from t2);
+
+drop table t1,t2;
+set @@optimizer_switch= default;
=== modified file 'sql/item.cc'
--- a/sql/item.cc 2010-07-29 11:13:48 +0000
+++ b/sql/item.cc 2010-09-05 19:36:12 +0000
@@ -567,10 +567,10 @@
!wrapper->fix_fields(thd, (Item**)&wrapper))
{
if (wrapper->set_cache(thd, depends_on))
- DBUG_RETURN(this);
+ DBUG_RETURN(NULL);
DBUG_RETURN(wrapper);
}
- DBUG_RETURN(this);
+ DBUG_RETURN(NULL);
}
@@ -6635,6 +6635,19 @@
}
+void Item_cache_wrapper::print(String *str, enum_query_type query_type)
+{
+ str->append(func_name());
+ if (expr_cache)
+ expr_cache->print(str, query_type);
+ else
+ str->append(STRING_WITH_LEN("<<DISABLED>>"));
+ str->append('(');
+ orig_item->print(str, query_type);
+ str->append(')');
+}
+
+
/**
Prepare the expression cache wrapper (do nothing)
=== modified file 'sql/item.h'
--- a/sql/item.h 2010-08-31 13:16:10 +0000
+++ b/sql/item.h 2010-09-05 19:36:12 +0000
@@ -2628,13 +2628,7 @@
/* Following methods make this item transparent as much as possible */
- virtual void print(String *str, enum_query_type query_type)
- {
- str->append(func_name());
- str->append('(');
- orig_item->print(str, query_type);
- str->append(')');
- }
+ virtual void print(String *str, enum_query_type query_type);
virtual const char *full_name() const { return orig_item->full_name(); }
virtual void make_field(Send_field *field) { orig_item->make_field(field); }
bool eq(const Item *item, bool binary_cmp) const
=== modified file 'sql/item_cmpfunc.cc'
--- a/sql/item_cmpfunc.cc 2010-07-16 07:33:01 +0000
+++ b/sql/item_cmpfunc.cc 2010-09-05 19:36:12 +0000
@@ -1774,6 +1774,9 @@
DBUG_ENTER("Item_in_optimizer::expr_cache_insert_transformer");
List<Item*> &depends_on= ((Item_subselect *)args[1])->depends_on;
+ if (expr_cache)
+ DBUG_RETURN(expr_cache);
+
/* Add left expression to the list of the parameters of the subquery */
if (args[0]->cols() == 1)
depends_on.push_front((Item**)args);
@@ -1785,8 +1788,11 @@
}
}
- if (args[1]->expr_cache_is_needed(thd))
- DBUG_RETURN(set_expr_cache(thd, depends_on));
+ if (args[1]->expr_cache_is_needed(thd) &&
+ (expr_cache= set_expr_cache(thd, depends_on)))
+ DBUG_RETURN(expr_cache);
+
+ depends_on.pop();
DBUG_RETURN(this);
}
@@ -1889,6 +1895,7 @@
Item_bool_func::cleanup();
if (!save_cache)
cache= 0;
+ expr_cache= 0;
DBUG_VOID_RETURN;
}
=== modified file 'sql/item_cmpfunc.h'
--- a/sql/item_cmpfunc.h 2010-07-10 10:37:30 +0000
+++ b/sql/item_cmpfunc.h 2010-09-05 19:36:12 +0000
@@ -242,6 +242,7 @@
{
protected:
Item_cache *cache;
+ Item *expr_cache;
bool save_cache;
/*
Stores the value of "NULL IN (SELECT ...)" for uncorrelated subqueries:
@@ -252,7 +253,7 @@
my_bool result_for_null_param;
public:
Item_in_optimizer(Item *a, Item_in_subselect *b):
- Item_bool_func(a, my_reinterpret_cast(Item *)(b)), cache(0),
+ Item_bool_func(a, my_reinterpret_cast(Item *)(b)), cache(0), expr_cache(0),
save_cache(0), result_for_null_param(UNKNOWN)
{}
bool fix_fields(THD *, Item **);
=== modified file 'sql/item_subselect.cc'
--- a/sql/item_subselect.cc 2010-08-30 08:07:16 +0000
+++ b/sql/item_subselect.cc 2010-09-05 19:36:12 +0000
@@ -34,11 +34,10 @@
Item_subselect::Item_subselect():
Item_result_field(), value_assigned(0), thd(0), substitution(0),
- engine(0), old_engine(0), used_tables_cache(0), have_to_be_excluded(0),
- const_item_cache(1),
- inside_first_fix_fields(0), done_first_fix_fields(FALSE),
- eliminated(FALSE),
- engine_changed(0), changed(0), is_correlated(FALSE)
+ expr_cache(0), engine(0), old_engine(0), used_tables_cache(0),
+ have_to_be_excluded(0), const_item_cache(1), inside_first_fix_fields(0),
+ done_first_fix_fields(FALSE), eliminated(FALSE), engine_changed(0),
+ changed(0), is_correlated(FALSE)
{
with_subselect= 1;
reset();
@@ -121,6 +120,7 @@
depends_on.empty();
reset();
value_assigned= 0;
+ expr_cache= 0;
DBUG_VOID_RETURN;
}
@@ -861,8 +861,12 @@
THD *thd= (THD*) thd_arg;
DBUG_ENTER("Item_singlerow_subselect::expr_cache_insert_transformer");
- if (expr_cache_is_needed(thd))
- DBUG_RETURN(set_expr_cache(thd, depends_on));
+ if (expr_cache)
+ DBUG_RETURN(expr_cache);
+
+ if (expr_cache_is_needed(thd) &&
+ (expr_cache= set_expr_cache(thd, depends_on)))
+ DBUG_RETURN(expr_cache);
DBUG_RETURN(this);
}
@@ -1083,8 +1087,12 @@
THD *thd= (THD*) thd_arg;
DBUG_ENTER("Item_exists_subselect::expr_cache_insert_transformer");
- if (substype() == EXISTS_SUBS && expr_cache_is_needed(thd))
- DBUG_RETURN(set_expr_cache(thd, depends_on));
+ if (expr_cache)
+ DBUG_RETURN(expr_cache);
+
+ if (substype() == EXISTS_SUBS && expr_cache_is_needed(thd) &&
+ (expr_cache= set_expr_cache(thd, depends_on)))
+ DBUG_RETURN(expr_cache);
DBUG_RETURN(this);
}
=== modified file 'sql/item_subselect.h'
--- a/sql/item_subselect.h 2010-07-10 10:37:30 +0000
+++ b/sql/item_subselect.h 2010-09-05 19:36:12 +0000
@@ -53,6 +53,7 @@
/* unit of subquery */
st_select_lex_unit *unit;
protected:
+ Item *expr_cache;
/* engine that perform execution of subselect (single select or union) */
subselect_engine *engine;
/* old engine if engine was changed */
@@ -214,7 +215,8 @@
Item_cache *value, **row;
public:
Item_singlerow_subselect(st_select_lex *select_lex);
- Item_singlerow_subselect() :Item_subselect(), value(0), row (0) {}
+ Item_singlerow_subselect() :Item_subselect(), value(0), row (0)
+ {}
void cleanup();
subs_type substype() { return SINGLEROW_SUBS; }
=== modified file 'sql/sql_expression_cache.cc'
--- a/sql/sql_expression_cache.cc 2010-08-09 10:00:58 +0000
+++ b/sql/sql_expression_cache.cc 2010-09-05 19:36:12 +0000
@@ -314,3 +314,21 @@
cache_table= NULL;
DBUG_RETURN(TRUE);
}
+
+
+void Expression_cache_tmptable::print(String *str, enum_query_type query_type)
+{
+ List_iterator<Item*> li(*list);
+ Item **item;
+ bool is_first= TRUE;
+
+ str->append('<');
+ while ((item= li++))
+ {
+ if (!is_first)
+ str->append(',');
+ (*item)->print(str, query_type);
+ is_first= FALSE;
+ }
+ str->append('>');
+}
=== modified file 'sql/sql_expression_cache.h'
--- a/sql/sql_expression_cache.h 2010-07-10 10:37:30 +0000
+++ b/sql/sql_expression_cache.h 2010-09-05 19:36:12 +0000
@@ -32,6 +32,11 @@
into the expression cache
*/
virtual my_bool put_value(Item *value)= 0;
+
+ /**
+ Print cache parameters
+ */
+ virtual void print(String *str, enum_query_type query_type)= 0;
};
struct st_table_ref;
@@ -51,6 +56,8 @@
virtual result check_value(Item **value);
virtual my_bool put_value(Item *value);
+ void print(String *str, enum_query_type query_type);
+
private:
void init();
bool make_equalities();
=== modified file 'sql/sql_select.cc'
--- a/sql/sql_select.cc 2010-07-23 08:25:00 +0000
+++ b/sql/sql_select.cc 2010-09-05 19:36:12 +0000
@@ -1517,63 +1517,6 @@
/**
- Transform given item with changing it in all_fields if it is needed.
-
- @param expr Reference on expression to transform
- @param transformer Transformer to apply to the expression
-
- @details
- The function transforms the expression expr and, if the top item of the
- expression has changed, the function looks for the item in
- JOIN::all_fields and replaces it with the result of transformation.
-*/
-
-void JOIN::transform_and_change_in_all_fields(Item** expr,
- Item_transformer transformer)
-{
- DBUG_ENTER("JOIN::transform_and_change_in_all_fields");
- Item *new_item= (*expr)->transform(transformer,
- (uchar*) thd);
- if (new_item != (*expr))
- {
- List_iterator<Item> li(all_fields);
- Item *it;
-
- /*
- Check if this item already has expression cache and if it has then use
- that cache instead of the cache we have just created
- */
- while ((it= li++))
- {
- if (((*expr) == it->get_cached_item()))
- {
- /*
- We have to forget about the created cache, but this situation is
- really rare.
- */
- new_item->cleanup();
- new_item= it;
- DBUG_PRINT("info", ("Other cache found"));
- break;
- }
- }
-
- li.rewind();
- while ((it= li++))
- {
- if (it == (*expr))
- {
- li.replace(new_item);
- DBUG_PRINT("info", ("Cache Added"));
- }
- }
- *expr= new_item;
- }
- DBUG_VOID_RETURN;
-}
-
-
-/**
Setup expression caches for subqueries that need them
@details
@@ -1633,7 +1576,7 @@
select_lex->expr_cache_may_be_used[IN_GROUP_BY] ||
select_lex->expr_cache_may_be_used[NO_MATTER])
{
- List_iterator<Item> li(fields_list);
+ List_iterator<Item> li(all_fields);
Item *item;
while ((item= li++))
{
@@ -1646,16 +1589,18 @@
}
for (ORDER *group= group_list; group ; group= group->next)
{
- transform_and_change_in_all_fields(group->item,
- &Item::expr_cache_insert_transformer);
+ *group->item=
+ (*group->item)->transform(&Item::expr_cache_insert_transformer,
+ (uchar*) thd);
}
}
if (select_lex->expr_cache_may_be_used[NO_MATTER])
{
for (ORDER *ord= order; ord; ord= ord->next)
{
- transform_and_change_in_all_fields(ord->item,
- &Item::expr_cache_insert_transformer);
+ *ord->item=
+ (*ord->item)->transform(&Item::expr_cache_insert_transformer,
+ (uchar*) thd);
}
}
DBUG_RETURN(FALSE);
=== modified file 'sql/sql_select.h'
--- a/sql/sql_select.h 2010-07-10 10:37:30 +0000
+++ b/sql/sql_select.h 2010-09-05 19:36:12 +0000
@@ -1742,8 +1742,6 @@
*/
bool implicit_grouping;
bool make_simple_join(JOIN *join, TABLE *tmp_table);
- void transform_and_change_in_all_fields(Item** item,
- Item_transformer transformer);
};
[Maria-developers] WL#137 New (by Sergei): nested transactions
by worklog-noreply@askmonty.org 05 Sep '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: nested transactions
CREATION DATE..: Sun, 05 Sep 2010, 14:12
SUPERVISOR.....:
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Server-RawIdeaBin
TASK ID........: 137 (https://askmonty.org/worklog/?tid=137)
VERSION........: Benchmarks-3.0
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 24 (hours remain)
ORIG. ESTIMATE.: 24
PROGRESS NOTES:
DESCRIPTION:
Add support for nested transactions.
Technically the implementation is trivial: START TRANSACTION
should check whether a transaction is already open and, if so, execute
SAVEPOINT instead. Similarly, ROLLBACK should execute "ROLLBACK TO SAVEPOINT"
when the transaction nesting level is greater than 1.
The problem is that this is a serious change in behavior for START
TRANSACTION/COMMIT/ROLLBACK.
Possible solutions: add an SQL mode or a server variable, e.g. SET
@@allow_nested_transactions=YES.
Or create special syntax, such as START NESTED TRANSACTION, ROLLBACK NESTED
and COMMIT NESTED. These statements would internally execute either
transaction or savepoint functions, as appropriate.
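As a rough sketch of the mapping described above, expressed with today's
SAVEPOINT statements (the table name t1 and the savepoint name sp_2 are only
illustrative):

  START TRANSACTION;           -- nesting level 1: a real transaction
  INSERT INTO t1 VALUES (1);
  SAVEPOINT sp_2;              -- what a nested START TRANSACTION would do
  INSERT INTO t1 VALUES (2);
  ROLLBACK TO SAVEPOINT sp_2;  -- what a nested ROLLBACK would do; row (1) survives
  COMMIT;                      -- commits the outer transaction; only row (1) remains

A nested COMMIT would presumably just release the savepoint and decrease the
nesting level, leaving the final outcome to the outermost COMMIT or ROLLBACK.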
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)
[Maria-developers] WL#136 New (by Knielsen): Cross-engine consistency for START TRANSACTION WITH CONSISTENT SNAPSHOT
by worklog-noreply@askmonty.org 03 Sep '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Cross-engine consistency for START TRANSACTION WITH CONSISTENT SNAPSHOT
CREATION DATE..: Fri, 03 Sep 2010, 14:15
SUPERVISOR.....: Sergei
IMPLEMENTOR....: Knielsen
COPIES TO......: Sergei
CATEGORY.......: Server-RawIdeaBin
TASK ID........: 136 (http://askmonty.org/worklog/?tid=136)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
DESCRIPTION:
With current MySQL/MariaDB, there is no imposed commit ordering. So even with
START TRANSACTION WITH CONSISTENT SNAPSHOT, you can end up with a snapshot
where in engine E, A is committed but not B; and in engine F, B is committed
but not A.
MWL#116 introduces consistent commit order, making this inconsistency
impossible. A and B will be consistently ordered (say A commits before B), and
it is not possible to see <B committed but not A> in any engine.
But there is still another inconsistency possible, where we get a snapshot
where both A and B are committed in one engine, but only A is committed in
another.
This worklog is about making START TRANSACTION WITH CONSISTENT SNAPSHOT always
create a snapshot that gives a consistent picture across engines of which
transactions are committed and which are not.
The change will work for storage engines which support MVCC, and which
implement the commit_ordered() handler method of MWL#116 (but not for other
engines).
To test this, the PBXT engine (or some other engine, but PBXT seems the
obvious candidate) will need to be made to implement commit_ordered().
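A minimal illustration of the intended guarantee (the table names t_innodb
and t_pbxt are made up; A and B are the two transactions from the scenario
above, committing in the order A then B as enforced by MWL#116):

  -- in a separate connection, after A and B have committed:
  START TRANSACTION WITH CONSISTENT SNAPSHOT;
  SELECT * FROM t_innodb;  -- suppose this snapshot already shows B's changes
  SELECT * FROM t_pbxt;    -- then it must also show both A's and B's changes;
                           -- seeing only A's changes here is exactly the
                           -- cross-engine inconsistency this worklog removes
  COMMIT;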
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)
[Maria-developers] Rev 2814: pbxt test suite fix (expression test added to EXPLAIN EXTENDED). in file:///home/bell/maria/bzr/work-maria-5.3-lb615752/
by sanja@askmonty.org 01 Sep '10
At file:///home/bell/maria/bzr/work-maria-5.3-lb615752/
------------------------------------------------------------
revno: 2814
revision-id: sanja(a)askmonty.org-20100901144241-0b04wm4z328tmwtq
parent: sanja(a)askmonty.org-20100831131610-scua714ci2eg5p53
committer: sanja(a)askmonty.org
branch nick: work-maria-5.3-lb615752
timestamp: Wed 2010-09-01 17:42:41 +0300
message:
pbxt test suite fix (expression test added to EXPLAIN EXTENDED).
=== modified file 'mysql-test/suite/pbxt/r/subselect.result'
--- a/mysql-test/suite/pbxt/r/subselect.result 2010-06-26 10:05:41 +0000
+++ b/mysql-test/suite/pbxt/r/subselect.result 2010-09-01 14:42:41 +0000
@@ -50,7 +50,7 @@
Warnings:
Note 1276 Field or reference 'b.a' of SELECT #3 was resolved in SELECT #1
Note 1276 Field or reference 'b.a' of SELECT #3 was resolved in SELECT #1
-Note 1003 select 1 AS `1` from (select 1 AS `a`) `b` having ((select '1') = 1)
+Note 1003 select 1 AS `1` from (select 1 AS `a`) `b` having (<expr_cache>((select '1')) = 1)
SELECT 1 FROM (SELECT 1 as a) as b HAVING (SELECT a)=1;
1
1
@@ -314,7 +314,7 @@
Warnings:
Note 1276 Field or reference 'test.t2.a' of SELECT #2 was resolved in SELECT #1
Note 1276 Field or reference 'test.t2.a' of SELECT #3 was resolved in SELECT #1
-Note 1003 select (select `test`.`t1`.`a` from `test`.`t1` where (`test`.`t1`.`a` = `test`.`t2`.`a`) union select `test`.`t5`.`a` from `test`.`t5` where (`test`.`t5`.`a` = `test`.`t2`.`a`)) AS `(select a from t1 where t1.a=t2.a union select a from t5 where t5.a=t2.a)`,`test`.`t2`.`a` AS `a` from `test`.`t2`
+Note 1003 select <expr_cache>((select `test`.`t1`.`a` from `test`.`t1` where (`test`.`t1`.`a` = `test`.`t2`.`a`) union select `test`.`t5`.`a` from `test`.`t5` where (`test`.`t5`.`a` = `test`.`t2`.`a`))) AS `(select a from t1 where t1.a=t2.a union select a from t5 where t5.a=t2.a)`,`test`.`t2`.`a` AS `a` from `test`.`t2`
select (select a from t1 where t1.a=t2.a union all select a from t5 where t5.a=t2.a), a from t2;
ERROR 21000: Subquery returns more than 1 row
create table t6 (patient_uq int, clinic_uq int, index i1 (clinic_uq));
@@ -332,7 +332,7 @@
2 DEPENDENT SUBQUERY t7 eq_ref PRIMARY PRIMARY 4 test.t6.clinic_uq 1 100.00 Using index
Warnings:
Note 1276 Field or reference 'test.t6.clinic_uq' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t6`.`patient_uq` AS `patient_uq`,`test`.`t6`.`clinic_uq` AS `clinic_uq` from `test`.`t6` where exists(select 1 from `test`.`t7` where (`test`.`t7`.`uq` = `test`.`t6`.`clinic_uq`))
+Note 1003 select `test`.`t6`.`patient_uq` AS `patient_uq`,`test`.`t6`.`clinic_uq` AS `clinic_uq` from `test`.`t6` where <expr_cache>(exists(select 1 from `test`.`t7` where (`test`.`t7`.`uq` = `test`.`t6`.`clinic_uq`)))
select * from t1 where a= (select a from t2,t4 where t2.b=t4.b);
ERROR 23000: Column 'a' in field list is ambiguous
drop table t1,t2,t3;
@@ -743,7 +743,7 @@
3 DEPENDENT UNION NULL NULL NULL NULL NULL NULL NULL NULL No tables used
NULL UNION RESULT <union2,3> ALL NULL NULL NULL NULL NULL NULL
Warnings:
-Note 1003 select `test`.`t2`.`id` AS `id` from `test`.`t2` where <in_optimizer>(`test`.`t2`.`id`,<exists>(select 1 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(1)) union select 3 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(3))))
+Note 1003 select `test`.`t2`.`id` AS `id` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`id`,<exists>(select 1 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(1)) union select 3 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(3)))))
SELECT * FROM t2 WHERE id IN (SELECT 5 UNION SELECT 3);
id
SELECT * FROM t2 WHERE id IN (SELECT 5 UNION SELECT 2);
@@ -893,7 +893,7 @@
1 PRIMARY t1 index NULL PRIMARY 4 NULL 4 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery a a 5 func 2 100.00 Using index
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a`,<in_optimizer>(`test`.`t1`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`a`) in t2 on a checking NULL having <is_not_null_test>(`test`.`t2`.`a`)))) AS `t1.a in (select t2.a from t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`a`) in t2 on a checking NULL having <is_not_null_test>(`test`.`t2`.`a`))))) AS `t1.a in (select t2.a from t2)` from `test`.`t1`
CREATE TABLE t3 (a int(11) default '0');
INSERT INTO t3 VALUES (1),(2),(3);
SELECT t1.a, t1.a in (select t2.a from t2,t3 where t3.a=t2.a) FROM t1;
@@ -908,7 +908,7 @@
2 DEPENDENT SUBQUERY t2 ref_or_null a a 5 func 2 100.00 Using index
2 DEPENDENT SUBQUERY t3 ALL NULL NULL NULL NULL 3 100.00 Using where; Using join buffer
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a`,<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t2`.`a`) and ((<cache>(`test`.`t1`.`a`) = `test`.`t2`.`a`) or isnull(`test`.`t2`.`a`))) having <is_not_null_test>(`test`.`t2`.`a`))) AS `t1.a in (select t2.a from t2,t3 where t3.a=t2.a)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t2`.`a`) and ((<cache>(`test`.`t1`.`a`) = `test`.`t2`.`a`) or isnull(`test`.`t2`.`a`))) having <is_not_null_test>(`test`.`t2`.`a`)))) AS `t1.a in (select t2.a from t2,t3 where t3.a=t2.a)` from `test`.`t1`
drop table t1,t2,t3;
create table t1 (a float);
select 10.5 IN (SELECT * from t1 LIMIT 1);
@@ -1465,25 +1465,25 @@
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`))))))) AS `s1 NOT IN (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 = ANY (SELECT s1 FROM t2) from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`))))) AS `s1 = ANY (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))) AS `s1 = ANY (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 <> ALL (SELECT s1 FROM t2) from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`))))))) AS `s1 <> ALL (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 <> ALL (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2') from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Using where; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL where (`test`.`t2`.`s1` < 'a2') having trigcond(<is_not_null_test>(`test`.`t2`.`s1`))))))) AS `s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2')` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL where (`test`.`t2`.`s1` < 'a2') having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2')` from `test`.`t1`
drop table t1,t2;
create table t2 (a int, b int);
create table t3 (a int);
@@ -1738,14 +1738,14 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 12 100.00 Using where
2 DEPENDENT SUBQUERY t1 unique_subquery PRIMARY PRIMARY 4 func 1 100.00 Using index; Using where
Warnings:
-Note 1003 select `test`.`t1`.`id` AS `id`,`test`.`t1`.`text` AS `text` from `test`.`t1` where (not(<in_optimizer>(`test`.`t1`.`id`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`id`) in t1 on PRIMARY where ((`test`.`t1`.`id` < 8) and (<cache>(`test`.`t1`.`id`) = `test`.`t1`.`id`)))))))
+Note 1003 select `test`.`t1`.`id` AS `id`,`test`.`t1`.`text` AS `text` from `test`.`t1` where (not(<expr_cache>(<in_optimizer>(`test`.`t1`.`id`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`id`) in t1 on PRIMARY where ((`test`.`t1`.`id` < 8) and (<cache>(`test`.`t1`.`id`) = `test`.`t1`.`id`))))))))
explain extended select * from t1 as tt where not exists (select id from t1 where id < 8 and (id = tt.id or id is null) having id is not null);
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY tt ALL NULL NULL NULL NULL 12 100.00 Using where
2 DEPENDENT SUBQUERY t1 eq_ref PRIMARY PRIMARY 4 test.tt.id 1 100.00 Using where; Using index
Warnings:
Note 1276 Field or reference 'test.tt.id' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`tt`.`id` AS `id`,`test`.`tt`.`text` AS `text` from `test`.`t1` `tt` where (not(exists(select `test`.`t1`.`id` from `test`.`t1` where ((`test`.`t1`.`id` < 8) and (`test`.`t1`.`id` = `test`.`tt`.`id`)) having (`test`.`t1`.`id` is not null))))
+Note 1003 select `test`.`tt`.`id` AS `id`,`test`.`tt`.`text` AS `text` from `test`.`t1` `tt` where (not(<expr_cache>(exists(select `test`.`t1`.`id` from `test`.`t1` where ((`test`.`t1`.`id` < 8) and (`test`.`t1`.`id` = `test`.`tt`.`id`)) having (`test`.`t1`.`id` is not null)))))
insert into t1 (id, text) values (1000, 'text1000'), (1001, 'text1001');
create table t2 (id int not null, text varchar(20) not null default '', primary key (id));
insert into t2 (id, text) values (1, 'text1'), (2, 'text2'), (3, 'text3'), (4, 'text4'), (5, 'text5'), (6, 'text6'), (7, 'text7'), (8, 'text8'), (9, 'text9'), (10, 'text10'), (11, 'text1'), (12, 'text2'), (13, 'text3'), (14, 'text4'), (15, 'text5'), (16, 'text6'), (17, 'text7'), (18, 'text8'), (19, 'text9'), (20, 'text10'),(21, 'text1'), (22, 'text2'), (23, 'text3'), (24, 'text4'), (25, 'text5'), (26, 'text6'), (27, 'text7'), (28, 'text8'), (29, 'text9'), (30, 'text10'), (31, 'text1'), (32, 'text2'), (33, 'text3'), (34, 'text4'), (35, 'text5'), (36, 'text6'), (37, 'text7'), (38, 'text8'), (39, 'text9'), (40, 'text10'), (41, 'text1'), (42, 'text2'), (43, 'text3'), (44, 'text4'), (45, 'text5'), (46, 'text6'), (47, 'text7'), (48, 'text8'), (49, 'text9'), (50, 'text10');
@@ -2282,7 +2282,7 @@
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.up.a' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`up`.`a` AS `a`,`test`.`up`.`b` AS `b` from `test`.`t1` `up` where exists(select 1 from `test`.`t1` where (`test`.`t1`.`a` = `test`.`up`.`a`))
+Note 1003 select `test`.`up`.`a` AS `a`,`test`.`up`.`b` AS `b` from `test`.`t1` `up` where <expr_cache>(exists(select 1 from `test`.`t1` where (`test`.`t1`.`a` = `test`.`up`.`a`)))
drop table t1;
CREATE TABLE t1 (t1_a int);
INSERT INTO t1 VALUES (1);
@@ -2825,7 +2825,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = '0') and trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`)))) having (trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`))))) AS `test` from `test`.`t1`
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = '0') and trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`)))) having (trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
explain extended SELECT one,two from t1 where ROW(one,two) IN (SELECT one,two FROM t2 WHERE flag = 'N');
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
@@ -2837,7 +2837,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = '0') group by `test`.`t2`.`one`,`test`.`t2`.`two` having (trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`))) and trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`))))) AS `test` from `test`.`t1`
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = '0') group by `test`.`t2`.`one`,`test`.`t2`.`two` having (trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`))) and trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
DROP TABLE t1,t2;
set @@optimizer_switch=@save_optimizer_switch;
CREATE TABLE t1 (a char(5), b char(5));
@@ -4284,7 +4284,7 @@
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 2 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.t1.a' of SELECT #2 was resolved in SELECT #1
-Note 1003 select 2 AS `2` from `test`.`t1` where exists(select 1 from `test`.`t2` where (`test`.`t1`.`a` = `test`.`t2`.`a`))
+Note 1003 select 2 AS `2` from `test`.`t1` where <expr_cache>(exists(select 1 from `test`.`t2` where (`test`.`t1`.`a` = `test`.`t2`.`a`)))
EXPLAIN EXTENDED
SELECT 2 FROM t1 WHERE EXISTS ((SELECT 1 FROM t2 WHERE t1.a=t2.a) UNION
(SELECT 1 FROM t2 WHERE t1.a = t2.a));
[Maria-developers] WL#135 New (by Dreas): Expose marked as crashed in table status
by worklog-noreply@askmonty.org 31 Aug '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Expose marked as crashed in table status
CREATION DATE..: Tue, 31 Aug 2010, 14:59
SUPERVISOR.....:
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Server-RawIdeaBin
TASK ID........: 135 (http://askmonty.org/worklog/?tid=135)
VERSION........: WorkLog-4.0
STATUS.........: Un-Assigned
PRIORITY.......: 10
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
DESCRIPTION:
We regularly run into tables that are marked as crashed (shown in syslog). If
MySQL is marking the table as crashed, why is there no way to get a list of tables
marked as such? That would make it much faster to check/repair such tables
automatically without the need to run a check on ALL tables.
From IRC:
(04:04:58 PM) montywi: myisamchk -d */*.MYI ; search for 'marked'
(04:05:09 PM) montywi: -d (--describe) is safe to do
(04:05:59 PM) montywi: best thing would probably be to add an option
'IF_CRASHED' to CHECK TABLE
(04:06:48 PM) montywi: in that case one could trivially use mysqlcheck to find out
crashed tables and then automatically, if so desired, run repair on them
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)

[Maria-developers] WL#133 New (by Knielsen): Event applier API
by worklog-noreply@askmonty.org 27 Aug '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Event applier API
CREATION DATE..: Fri, 27 Aug 2010, 13:17
SUPERVISOR.....: Sergei
IMPLEMENTOR....: Knielsen
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 133 (http://askmonty.org/worklog/?tid=133)
VERSION........: Server-9.x
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
DESCRIPTION:
A part of the replication project, MWL#107.
The event applier API does the opposite of the stacked event generator API,
MWL#120. It allows a plugin to apply (=execute) abstract events on a slave.
Just like generated events from MWL#120 are abstract without any fixed
materialised format, applied events are also abstract with no fixed
materialised format imposed by the API. All event data is supplied by the
plugin by calling "provider" methods in applier objects obtained from the
API. This allows the plugin to choose which information to supply. For
example, one plugin may supply row event values identified by column name,
while another may supply them by column position. Using a provider interface,
different plugins can supply different information, as long as the provided
information is sufficient for the applier API to execute the event in _some_
way.
The event applier API also needs to provide a facility for initialising a
thread (or multiple threads) to be able to execute events (setting up THD
etc).
Event appliers are stacked, just like event generators.
For example, row-based events are stacked on statement-based events. So a row
event applier can accept statement events in addition to row events (and
recursively also accept transaction events, as statement-based is stacked on
transaction-based).
On top of the detailed interfaces for event generation and application, one
can build simple materialised event formats, which are just byte strings
containing all necessary information.
On the generator side, this can be done as a filter that consumes row-based
events (say) and encapsulates them in some way (say Google protobuf
packets).
On the applier side, it can be done by implementing a simple event applier
class, that has a single provider method only, accepting a materialised event
packet.
The strength of the full abstraction below this is that it is possible to
implement arbitrary such materialised event formats in plugins, and possible
to implement event format plugins and event transport plugins independently of
each other.
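As a purely illustrative sketch (not part of the task definition, and with all class and method names invented here), an applier object with provider methods could look roughly like this in C++:

// Hypothetical sketch only; MWL#133 does not define these names.
// It illustrates the "provider" idea: the plugin pushes whatever event
// data it has through provider calls, and the applier executes the
// event from that data.
#include <string>

// One column value supplied by the plugin, either by name or by position.
struct column_value {
  std::string name;      // empty if the plugin only knows the position
  unsigned    position;  // ignored when name is set
  std::string value;     // textual value, for simplicity
};

// Abstract applier object that the (hypothetical) API would hand out.
class Row_event_applier {
public:
  virtual ~Row_event_applier() {}
  // Provider methods: the plugin calls whichever ones it can satisfy.
  virtual void provide_table(const std::string &db,
                             const std::string &table) = 0;
  virtual void provide_column(const column_value &col) = 0;
  // Execute the event from the information provided so far.
  virtual bool apply() = 0;
};

Since appliers stack, a row event applier like this could additionally accept statement events through further provider methods (for instance a hypothetical provide_statement_text()), in line with the stacking described above.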
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)

[Maria-developers] Rev 2897: Fix for LP bug#611379. in file:///home/bell/maria/bzr/work-maria-5.1-lb611379/
by sanja@askmonty.org 27 Aug '10
At file:///home/bell/maria/bzr/work-maria-5.1-lb611379/
------------------------------------------------------------
revno: 2897
revision-id: sanja(a)askmonty.org-20100827110519-b0gnykgc417kgi4q
parent: monty(a)mysql.com-20100809170542-ewa2awm6pcoi1ipy
committer: sanja(a)askmonty.org
branch nick: work-maria-5.1-lb611379
timestamp: Fri 2010-08-27 14:05:19 +0300
message:
Fix for LP bug#611379.
maybe_null/null_value flag set to TRUE for Item_sum_distinct.
=== modified file 'mysql-test/r/func_group.result'
--- a/mysql-test/r/func_group.result 2009-11-24 15:26:13 +0000
+++ b/mysql-test/r/func_group.result 2010-08-27 11:05:19 +0000
@@ -1713,4 +1713,16 @@
NULL NULL NULL NULL NULL
drop table t1;
#
+#test for LP Bug#611379
+#
+create table t1 (a int not null);
+insert into t1 values (3), (1), (2);
+select sum(distinct a) from t1 where a < 0;
+sum(distinct a)
+NULL
+select * from (select sum(distinct a) from t1 where a < 0) as t;
+sum(distinct a)
+NULL
+drop table t1;
+#
End of 5.1 tests
=== modified file 'mysql-test/t/func_group.test'
--- a/mysql-test/t/func_group.test 2009-11-24 15:26:13 +0000
+++ b/mysql-test/t/func_group.test 2010-08-27 11:05:19 +0000
@@ -1082,6 +1082,16 @@
from t1 a, t1 b;
select *, f1 = f2 from t1;
drop table t1;
+
+--echo #
+--echo #test for LP Bug#611379
+--echo #
+create table t1 (a int not null);
+insert into t1 values (3), (1), (2);
+select sum(distinct a) from t1 where a < 0;
+select * from (select sum(distinct a) from t1 where a < 0) as t;
+drop table t1;
+
--echo #
--echo End of 5.1 tests
=== modified file 'sql/item_sum.cc'
--- a/sql/item_sum.cc 2010-08-02 09:01:24 +0000
+++ b/sql/item_sum.cc 2010-08-27 11:05:19 +0000
@@ -587,13 +587,11 @@
return TRUE;
decimals=0;
- maybe_null=0;
for (uint i=0 ; i < arg_count ; i++)
{
if (args[i]->fix_fields(thd, args + i) || args[i]->check_cols(1))
return TRUE;
set_if_bigger(decimals, args[i]->decimals);
- maybe_null |= args[i]->maybe_null;
}
result_field=0;
max_length=float_length(decimals);
@@ -949,6 +947,7 @@
{
DBUG_ASSERT(args[0]->fixed);
+ maybe_null= TRUE;
table_field_type= args[0]->field_type();
/* Adjust tmp table type according to the chosen aggregation type */

25 Aug '10
Sergey,
1. I failed to find an LLD for this patch anywhere.
That's why I'll try to put together a description
of the abstract data structures used by DsMrr_impl.
2. The main abstract data structure here could be called
Flexible_LIFO_Buffer. I call it LIFO because it is filled in one
direction while it is always read in the opposite direction, starting
from the end of the buffer.
I call it Flexible because it can be extended when we write into it,
and it can be shrunk when we read from it.
3. It makes sense to introduce two sub-classes of this data structure:
Flexible_LIFO_Buffer_LR, that writes from left to right, and
Flexible_LIFO_Buffer_RL, that writes from right to left.
4. You need two buffers:
a buffer for keys [+ association]
a buffer for rowids [+ association]
If one is of the type Flexible_LIFO_Buffer_LR then the other must be
of the type Flexible_LIFO_Buffer_RL.
5. It looks like the buffer for rowids should be of the type
Flexible_FIFO_Buffer_LR/RL.
Yet because we always read from the buffer after sorting, we can
employ Flexible_LIFO_Buffer_LR/RL instead with a proper choice of
the comparison function used by the sort procedure.
6. I don't mind if you use the name DsMrr_Buffer[_LR/RL] instead of
Flexible_FIFO_Buffer_LR/RL.
Maybe it will be even better.
7. The class DsMrr_Buffer must contain a method that sorts the elements
in the buffer.
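To make points 2-7 concrete, here is a rough sketch of the intended behaviour (one shared cursor, filled in one direction, read from the end in the opposite direction). It is only an illustration under the assumptions above, not the actual DsMrr_impl code, and everything except the name Flexible_LIFO_Buffer is made up:

// Illustration only, not the actual DsMrr_impl code. Two such buffers
// can share one byte array: the key buffer filled left-to-right, the
// rowid buffer right-to-left, with a movable boundary between them.
#include <string.h>

class Flexible_LIFO_Buffer {
public:
  // direction: +1 writes left-to-right, -1 writes right-to-left
  // (a mnemonic enum would be better, as remark 1 below suggests)
  Flexible_LIFO_Buffer(unsigned char *start, unsigned char *end, int direction)
    : start_(start), end_(end), dir_(direction),
      pos_(direction > 0 ? start : end) {}

  bool write(const unsigned char *elem, size_t len) {
    if (space_left() < len) return false;            // buffer full
    if (dir_ > 0) { memcpy(pos_, elem, len); pos_ += len; }
    else          { pos_ -= len; memcpy(pos_, elem, len); }
    return true;
  }

  // Read the most recently written element and shrink the buffer (LIFO).
  bool read(unsigned char *elem, size_t len) {
    if (used() < len) return false;                  // buffer empty
    if (dir_ > 0) { pos_ -= len; memcpy(elem, pos_, len); }
    else          { memcpy(elem, pos_, len); pos_ += len; }
    return true;
  }

  size_t used() const       { return dir_ > 0 ? pos_ - start_ : end_ - pos_; }
  size_t space_left() const { return dir_ > 0 ? end_ - pos_ : pos_ - start_; }

private:
  unsigned char *start_, *end_;  // fixed bounds of the underlying array
  int dir_;
  unsigned char *pos_;           // single read/write cursor (remark 5)
};

Two such buffers over the same array, one constructed with direction +1 and the other with -1, give the key/rowid pair from point 4; shifting the boundary between them then only requires adjusting their start_/end_ bounds.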
Some other remarks:
1. Always use mnemonic constants, or constants from an enum type,
instead of literals like 1, -1 that you used for direction values.
2. Each member of the class you use must be specified somehow.
3. Do not overload the semantics of the field members of your classes.
E.g. key_tuple_length is a constant for any key buffer and should
never be changed.
4. Your buffers are actually containers of elements. So it's more
natural to have read/write methods for these elements.
5. It looks like it's enough to have only one cursor (let's call it
'pos') for your read/write operations.
6. I think that class DsMrr_impl does not have to know anything about
the structure/representation of the elements in a buffer.
7. The companion iterator class should iterate over interesting
association info.
Maybe it makes sense to have a counter of the remaining elements that
still have to be iterated over. The init method of the iterator class
should reset this counter.
8. DsMrr_impl should contain one method that shifts the boundary between
the key buffer and the rowid buffer. It should employ the methods of
the buffer classes to do this.
9. As you have only two buffers I don't mind if you have instead of two
similar classes Flexible_LIFO_Buffer_LR and Flexible_LIFO_Buffer_RL
two quite different classes DsMrr_Key_Buffer and DsMrr_Rowid_Buffer.
The first one will be of the type Flexible_LIFO_Buffer with the
iterator companion class in it. The second class may be of the
Flexible_FIFO_Buffer type.
However the generic functionality/interface still must be factored
out into the base class DsMrr_Buffer.
10. I think it makes sense to skip unnecessary calls of handler functions.
11. Test cases for the new functionality are scarce.
12. Please take care of the bug I discovered earlier, the report of which
I re-sent to you (see [Fwd: [Fwd: [Fwd: Another serious problem with
5.3-dsmrr-cpk]]])
Regards,
Igor.
-------- Original Message --------
Subject: DS-MRR improvements patch ready for review
Date: Fri, 20 Aug 2010 09:12:19 +0200
From: Sergey Petrunya <psergey(a)askmonty.org>
To: igor(a)askmonty.org
CC: maria-developers(a)lists.launchpad.net
Hello Igor,
Please find attached the combined patch of DS-MRR for clustered PKs and key
sorting.
The tree is in launchpad and buildbot also:
https://code.launchpad.net/~maria-captains/maria/5.3-dsmrr-cpk
and all observed buildbot failures in the tree are known to occur
without the
new code as well.
BR
Sergey
--
Sergey Petrunia, Software Developer
Monty Program AB, http://askmonty.org
Blog: http://s.petrunia.net/blog

[Maria-developers] WL#132 New (by Knielsen): Transaction coordinator plugin
by worklog-noreply@askmonty.org 24 Aug '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Transaction coordinator plugin
CREATION DATE..: Tue, 24 Aug 2010, 14:06
SUPERVISOR.....:
IMPLEMENTOR....: Knielsen
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 132 (http://askmonty.org/worklog/?tid=132)
VERSION........: Server-9.x
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
DESCRIPTION:
This worklog describes an API that allows a replication plugin to hook deep
into the commit mechanism of transactions in mysqld and implement its own
transaction coordinator (TC).
The existing binary log already implements a TC. This is used to make writes
to the binary log happen in 2-phase commit with transactional engines. It is
also used during crash recovery to get the list of prepared-but-not-committed
transactions from the binary log and ensure that those transactions are
committed (and any others rolled back), to avoid inconsistencies between the
contents of binary log and engines.
Thus the TC needs to be part of a replication API that allows the existing
binary log implementation to be written as a plugin, or a replacement
binary log to be implemented.
In addition, something like Galera synchronous replication needs more control
over the commit process. In particular, it needs to be able to control the
order in which transactions are committed. This worklog extends the TC
interface to also allow TC to control commit order. This part of the worklog
builds on the work in MWL#116, in particular on the prepare_ordered() and
commit_ordered() extensions to the storage engine API.
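For illustration only, the plugin-facing side could end up looking something like the sketch below. The names here are invented (only prepare_ordered()/commit_ordered() come from MWL#116), and the real MWL#132 API may differ:

// Hypothetical sketch of a TC plugin interface, based only on the
// description above; not the actual MWL#132 API.
#include <stddef.h>

struct transaction;   /* opaque handle for a transaction being committed */

class Transaction_coordinator {
public:
  virtual ~Transaction_coordinator() {}

  // Phase 1: record the transaction in the coordinator's log and decide
  // its position in the global commit order (the point where something
  // like MWL#116's prepare_ordered() hook would be driven).
  virtual bool log_and_order(transaction *trx,
                             unsigned long long *commit_id) = 0;

  // Phase 2: ask the engines to commit in the decided order, via the
  // commit_ordered() storage engine extension from MWL#116.
  virtual void commit_in_order(transaction *trx,
                               unsigned long long commit_id) = 0;

  // Crash recovery: return the prepared-but-not-committed transactions
  // recorded in the coordinator's log, so they can be committed or
  // rolled back consistently with the engines.
  virtual size_t recover(unsigned long long *xid_list, size_t max_xids) = 0;
};

Under this sketch the existing binlog-based TC would be one implementation of the interface, while something like Galera would provide another that enforces its own commit order.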
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)

24 Aug '10
Hello Igor,
Please find attached the combined patch of DS-MRR for clustered PKs and key
sorting.
The tree is in launchpad and buildbot also:
https://code.launchpad.net/~maria-captains/maria/5.3-dsmrr-cpk
and all observed buildbot failures in the tree are known to occur without the
new code as well.
BR
Sergey
--
Sergey Petrunia, Software Developer
Monty Program AB, http://askmonty.org
Blog: http://s.petrunia.net/blog
Dear Folks,
With 5 billion devices connected to the internet, and only a total of
4 billion ip(v4) addresses, available ipv4 addresses are becoming more
and more scarce. Latest estimates are that it will take less than a
year before IANA has no more ipv4 addresses to distribute to LIRs
(APNIC, RIPE, etc.), and some time after that it will be really, really
difficult for anybody to get an ipv4 address. Luckily there's a
solution for that in the form of ipv6. Ipv6 is very common in certain
parts of Asia (Japan) where students are only given a block of ipv6
addresses with a mere tunnel to access ipv4 hosts. In France one of
the biggest ISP's ( http://free.fr ) is giving every residential dsl
connection a block of ipv6 addresses, and in The Netherlands more and
more shared webhosters assign each individual website its own ipv6
address (obviously together with a shared ipv4 address) since Direct
Admin ( http://directadmin.com , a competitor of cpanel and plesk) fully
supports ipv6. All in all, the world is calling for ipv6 and products
that support it.
Whereas ipv4 is 32-bit, resulting in a total number of 2^32 (=4 billion)
ip's, ipv6 is 128-bit, giving 2^128 addresses. A number large enough
for me not to spell it out for you :D Though an ipv6 address has
several hexadecimal notations, it is still simply a 128-bit unsigned
integer. And as such, I'm sending this email requesting support for it.
My proposal would be to introduce two new datatypes, one for 128bit
integers, and one extending the 128bit integer type for ipv6
addresses. Meaning that the latter accept ipv6 addresses only, but
relies on the first one for storing it, and displaying it again as
ipv6 address. With ipv6 one will usually do many more range checks than
with ipv4, because with ipv6 residential homes simply get a /64 block
(2^(128-64) = a lot of addresses) instead of just one address.
Therefore a nice feature would be to allow queries like `SELECT *
FROM table WHERE columnName IN 2001:db8:85a3::8a2e:370:7334/62`, which
essentially would select all rows where columnName is in the same /64
block as 2001:db8:85a3::8a2e:370:7334.
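For what it's worth, such a block test is cheap on a 128-bit value: it is just a masked comparison of the leading prefix bits. A minimal, purely illustrative sketch, not tied to any existing MariaDB code:

// Illustration only: "same /N block" on 16-byte ipv6 addresses is a
// comparison of the first N bits.
#include <string.h>

struct ipv6_addr { unsigned char b[16]; };

bool same_block(const ipv6_addr &x, const ipv6_addr &y, unsigned prefix_len)
{
  unsigned full_bytes = prefix_len / 8;
  unsigned rest_bits  = prefix_len % 8;
  if (memcmp(x.b, y.b, full_bytes) != 0)
    return false;
  if (rest_bits == 0)
    return true;
  unsigned char mask = (unsigned char) (0xFF << (8 - rest_bits));
  return (x.b[full_bytes] & mask) == (y.b[full_bytes] & mask);
}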
I'm a simple PHP developer so I can't supply a patch, but I would be
really curious in what you think about the above, and hope someone is
able and willing to pick it up.
Regards,
Dolf Schimmel
-- Freeaqingme
P.s. I'm sorry if the first paragraph is stating the obvious only.
Some people know everything about ipv6, and some don't at all. Which
is why I figured an introduction wouldn't hurt.

[Maria-developers] WL#131 New (by Sergei): new regular expression library
by worklog-noreply@askmonty.org 23 Aug '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: new regular expression library
CREATION DATE..: Mon, 23 Aug 2010, 19:57
SUPERVISOR.....: Sergei
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Server-BackLog
TASK ID........: 131 (https://askmonty.org/worklog/?tid=131)
VERSION........: Benchmarks-3.0
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 48 (hours remain)
ORIG. ESTIMATE.: 48
PROGRESS NOTES:
DESCRIPTION:
We need to replace the old regex library that MariaDB uses at the moment
with a new one that works with multi-byte charsets and preferably has more
features (PCRE?).
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)

23 Aug '10
Hi all,
I'm sending this patch again, since I didn't get any responses to the
last two mails.
The patch is almost identical to the last version except that I have
removed some parts of it - those warnings were fixed by someone else
after I sent the last version of the patch.
Here is the patch description from the last mail:
This is a new version of the patch, and it removes all compiler warnings
on a 32 bit build using Visual Studio 2008.
This patch sets a compiler flag on the bison and flex generated files
that removes warnings on those. (That nothing uses the yyerrorlab label
is fine.)
The libmysqld.def file: The description line isn't supported in the more
recent compilers.
In fsp0fsp.c, there is an explicit cast to ullint. In this case, the
code does what's intended, and the compiler warning is one of the "are
you sure this is right" warnings. C4334 warns if you compute
1u << 10 and store the result in a 64-bit variable, because you could have
meant 1i64 << 10.
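For reference, a minimal example of the kind of code that triggers C4334 (illustrative only, not taken from the patch):

// Illustration only: storing a 32-bit shift result in a 64-bit variable
// is what draws warning C4334 on MSVC.
void c4334_example(unsigned n)
{
  unsigned long long bad  = 1u << n;                      // C4334 here
  unsigned long long good = (unsigned long long) 1 << n;  // explicit widening
                                                          // silences it, as the
                                                          // fsp0fsp.c cast does
  (void) bad;
  (void) good;
}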
Can I push this to 5.1?
Bo Thorsen.
Monty Program AB.
--
MariaDB: MySQL replacement
Community developed. Feature enhanced. Backward compatible.

23 Aug '10
Hi everyone,
I have updated the attached patch to apply to the 5.1 tree. Please see
the comments in my original mail below.
Any objections to apply this patch?
-------- Original Message --------
Subject: Separate builddir from src dir with cmake
Date: Mon, 12 Jul 2010 14:01:23 +0200
From: Bo Thorsen <bo(a)askmonty.org>
Organization: Monty Program AB
To: maria-developers(a)lists.launchpad.net
Hi people,
Our cmake scripts weren't able to build outside the source dir. This is
bad for 3 reasons:
- it's supposed to work
- cmake recommends building outside the src dir
- on Windows, cmake can't build 32 and 64 bit in the same solution, so
patches have to be manually copied to test compilation on the other one
The attached patch fixes all the issues. I ran a find on the source dir
before and after the build, and they were identical. I didn't
actually check that no existing files were modified, but bzr di claims
this isn't the case.
I didn't port the zip creation script, but that shouldn't be a problem.
src != blddir is more a development thing, and the buildbot slave
producing the zip file will just continue to build in the same dir. The
cpack installer builds just fine.
OK to push to 5.1?
Should I send this patch to Oracle as well to minimize the drift
between the cmake files?
Bo Thorsen.
Monty Program AB.
--
MariaDB: MySQL replacement
Community developed. Feature enhanced. Backward compatible.
I'm working on getting rid of the compiler warnings on 64 bit Windows.
There are literally thousands of places where we store a size_t in an
int, and the compiler warns about possible loss of data.
I'm trying to figure out how to handle this in the best way, and I need
a bit of advice on how to proceed.
First, is it acceptable to change all/most of these vars from int to
size_t? In places where there is a local variable that stores the
difference between two pointers, it's fine. But what about places like
struct z_stream_s in zlib.h?
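For context, a tiny example of the warnings in question (illustrative only, not from the tree; the exact warning numbers are as I recall them from MSVC):

// On 64-bit Windows, size_t and ptrdiff_t are 64 bits while int is 32,
// so both assignments below draw "conversion ..., possible loss of data".
#include <stddef.h>
#include <string.h>

void size_warning_example(const char *begin, const char *end)
{
  int name_len   = strlen(begin);   // size_t    -> int (C4267)
  int span       = end - begin;     // ptrdiff_t -> int (C4244)
  ptrdiff_t fine = end - begin;     // widening the local is the easy fix
  (void) name_len; (void) span; (void) fine;
}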
How do you guys prefer to see patches on this - one big patch, or
several smaller patches?
Bo Thorsen.
Monty Program AB.
--
MariaDB: MySQL replacement
Community developed. Feature enhanced. Backward compatible.

[Maria-developers] Rev 2897: Fix for LP bug#611379. in file:///home/bell/maria/bzr/work-maria-5.1-lb611379/
by sanja@askmonty.org 20 Aug '10
At file:///home/bell/maria/bzr/work-maria-5.1-lb611379/
------------------------------------------------------------
revno: 2897
revision-id: sanja(a)askmonty.org-20100812075534-wc2dngmpiqm9grra
parent: monty(a)mysql.com-20100809170542-ewa2awm6pcoi1ipy
committer: sanja(a)askmonty.org
branch nick: work-maria-5.1-lb611379
timestamp: Thu 2010-08-12 10:55:34 +0300
message:
Fix for LP bug#611379.
maybe_null/null_value flag set to TRUE for Item_sum_distinct.
=== modified file 'mysql-test/r/func_group.result'
--- a/mysql-test/r/func_group.result 2009-11-24 15:26:13 +0000
+++ b/mysql-test/r/func_group.result 2010-08-12 07:55:34 +0000
@@ -1713,4 +1713,26 @@
NULL NULL NULL NULL NULL
drop table t1;
#
+#test for LP Bug#611379
+#
+create table t1 (a int not null);
+insert into t1 values (1);
+create table t2 (a int not null primary key);
+insert into t2 values (10);
+explain select sum(distinct t1.a) from t1,t2 where t1.a=t2.a;
+id select_type table type possible_keys key key_len ref rows Extra
+1 SIMPLE NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
+explain select * from (select sum(distinct t1.a) from t1,t2 where t1.a=t2.a)
+as t;
+id select_type table type possible_keys key key_len ref rows Extra
+1 PRIMARY <derived2> system NULL NULL NULL NULL 1
+2 DERIVED NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
+select sum(distinct t1.a) from t1,t2 where t1.a=t2.a;
+sum(distinct t1.a)
+NULL
+select * from (select sum(distinct t1.a) from t1,t2 where t1.a=t2.a) as t;
+sum(distinct t1.a)
+NULL
+drop table t1,t2;
+#
End of 5.1 tests
=== modified file 'mysql-test/t/func_group.test'
--- a/mysql-test/t/func_group.test 2009-11-24 15:26:13 +0000
+++ b/mysql-test/t/func_group.test 2010-08-12 07:55:34 +0000
@@ -1082,6 +1082,21 @@
from t1 a, t1 b;
select *, f1 = f2 from t1;
drop table t1;
+
+--echo #
+--echo #test for LP Bug#611379
+--echo #
+create table t1 (a int not null);
+insert into t1 values (1);
+create table t2 (a int not null primary key);
+insert into t2 values (10);
+explain select sum(distinct t1.a) from t1,t2 where t1.a=t2.a;
+explain select * from (select sum(distinct t1.a) from t1,t2 where t1.a=t2.a)
+as t;
+select sum(distinct t1.a) from t1,t2 where t1.a=t2.a;
+select * from (select sum(distinct t1.a) from t1,t2 where t1.a=t2.a) as t;
+drop table t1,t2;
+
--echo #
--echo End of 5.1 tests
=== modified file 'sql/item_sum.cc'
--- a/sql/item_sum.cc 2010-08-02 09:01:24 +0000
+++ b/sql/item_sum.cc 2010-08-12 07:55:34 +0000
@@ -949,6 +949,7 @@
{
DBUG_ASSERT(args[0]->fixed);
+ null_value= maybe_null= TRUE;
table_field_type= args[0]->field_type();
/* Adjust tmp table type according to the chosen aggregation type */

[Maria-developers] WL#130 New (by Knielsen): Good defaults for ./configure (for end-user source builds)
by worklog-noreply@askmonty.org 19 Aug '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Good defaults for ./configure (for end-user source builds)
CREATION DATE..: Thu, 19 Aug 2010, 10:09
SUPERVISOR.....:
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Server-RawIdeaBin
TASK ID........: 130 (http://askmonty.org/worklog/?tid=130)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
DESCRIPTION:
I would really like it to be easy to build MariaDB from source.
Basically, a plain ./configure should just work in the normal case, and
produce something sensible that is close/identical to what a relase build
looks like.
Unfortunately, this is not currently so. There is a long tradition in MySQL
(and now MariaDB) that release builds are made with a long carefully crafted
list of ./configure options that are necessary to get everything included or
enabled.
One concrete example I found is that plain ./configure will build
XtraDB/InnoDB as plugins by default. This means that upon starting the server,
there will be no engine "innodb" available until one is installed with INSTALL
PLUGIN! I think this is not desirable default behaviour.
The task here is to go through the ourdelta build scripts [1], check all
the ./configure options we use in them and what the default behaviour of each
option is. Defaults that are not sensible for a plain ./configure build should
be changed.
References:
[1]
http://bazaar.launchpad.net/~maria-captains/ourdelta/ourdelta-montyprogram-…
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)
Hi Bo,
I noticed some test failures in Federated in our Windows Build, which made me
realise that we are building Federated[X] incorrectly:
http://buildbot.askmonty.org/buildbot/builders/kvm-zip-winxp-x86/builds/104…
build FEDERATED as static library
build FEDERATEDX as DLL
I believe this means that Federated, not FederatedX, will be used, which is
the opposite of what all the other platforms are doing, and hence seems
wrong. You can probably get Serg to help with how things should be set up to be
correct.
- Kristian.
Hi, I was wondering if anyone here had any opinions about this proposed fix for the 64 table join limit (originally by Sergei)
http://lists.mysql.com/internals/38013
With the increased use of schemaless and hybrid, partly schemaless application designs, this would be a big enhancement. In my own application, being able to make minor, typed, virtual column additions for different SaaS clients in virtual tables at runtime, without changing the physical schema or regenerating application binding layers, is pretty priceless. But currently we are limited to 64 virtual columns by this MySQL join limit.
All,
I have started (and over the course of the next week I will continue)
moving several pages from http://askmonty.org/wiki to
http://kb.askmonty.org
With the Knowledgebase now in beta it is time to move the content over.
The procedure I plan on using for this is as follows:
1. Copy the content of a page from the wiki into the Knowledgebase.
2. Add a link to the top of the wiki page notifying of the move and
giving a link to the new location of the page on the KB.
3. Add a redirect to the webserver config which automatically sends you
to the KB page if you click on a link to the wiki page.
I wanted to let you know about it now so that you are not surprised
when you click on a link in the wiki and end up in the Knowledgebase.
After most of the pages which are slated to be moved have been moved I
will start deleting the pages from the wiki. The pages won't actually
be gone (because of MediaWiki's undelete functionality) but having them
officially deleted brings closure to the page and prevents it from being
accidentally viewed and/or edited.
The pages slated to be moved include all of the ones in the "Manual:"
namespace and several of the ones in the "MariaDB:" section. An easy
way to think of it is that all documentation-type pages on the wiki are
going to move to the Knowledgebase.
Pages the developers use for planning releases (such as the 5.2 TODO
page) will not move. Pages developers use to document new features will
be moved when the developer indicates the page has become stable and
ready to be used as documentation. In the future, some features will
likely be documented first on the KB, but I'm sure there will always be
some which begin life on the wiki.
The redirects will stay in place indefinitely since keeping them
around is easy and painless.
The wiki will not go away, it is going to continue to be a site the
MariaDB devs can use for notes, plans, and so on. The look will also
change to better fit this mission. The corporate info, for example, has
already been moved to http://montyprogram.com so the navigation links
at the top of every page (About Us, Business, Partners, etc...) are
no-longer necessary.
Let me know if you have any questions or suggestions.
Thanks.
--
Daniel Bartholomew
Monty Program - http://askmonty.org

Re: [Maria-developers] [Commits] Rev 2898: generalization of mtr to support suite.pm extensions: in http://bazaar.launchpad.net/~maria-captains/maria/5.1/
by Kristian Nielsen 12 Aug '10
<serg(a)askmonty.org> writes:
> Hi, Kristian
>
> Would you like to look once again at the mtr patch ?
Sure!
> I've removed auto-creation of my.cnf options by mere mentioning, as you
> found that confusing. Instead I added explicit group OPT, so now sphinx
> my.cnf would do
>
> [searchd]
> port=@OPT.port
>
> [ENV]
> SEARCHD_PORT=@searchd.port
Ah, that looks much better (and should be useful if one needs to assign a port
to some option that is not named exactly "port").
> === added file 'mysql-test/README.suites'
> --- a/mysql-test/README.suites 1970-01-01 00:00:00 +0000
> +++ b/mysql-test/README.suites 2010-08-11 08:15:49 +0000
> @@ -0,0 +1,129 @@
> +These are the assorted notes that will be turned into a manual eventually.
Excellent! Will be very useful to future mtr hackers.
> === modified file 'mysql-test/lib/My/Config.pm'
> --- a/mysql-test/lib/My/Config.pm 2008-09-05 13:31:09 +0000
> +++ b/mysql-test/lib/My/Config.pm 2010-08-11 08:15:49 +0000
> +package My::Config::Group::ENV;
> +#
> +# Return value for an option in the group, fail if it does not exist
> +#
> +sub value {
> + my ($self, $option_name)= @_;
> + my $option= $self->option($option_name);
> +
> + if (! defined($option) and $ENV{$option_name}) {
This will prevent an option from being set from an environment variable with the
value "0". (Just a remark, I'm not sure whether that is a problem or not.)
It's a nice cleanup of mtr, and I'm positively surprised by how much cleanup you
achieved with so few code changes, thanks!
Ok to push.
- Kristian.

[Maria-developers] Rev 2809: Fix for LP BUG#615752 Subquery caching is not visible in EXPLAIN in file:///home/bell/maria/bzr/work-maria-5.3-nulls/
by sanja@askmonty.org 11 Aug '10
At file:///home/bell/maria/bzr/work-maria-5.3-nulls/
------------------------------------------------------------
revno: 2809
revision-id: sanja(a)askmonty.org-20100811175106-979ljtxs36udp44q
parent: sanja(a)askmonty.org-20100730041658-2naumadh26t93e3g
committer: sanja(a)askmonty.org
branch nick: work-maria-5.3-nulls
timestamp: Wed 2010-08-11 20:51:06 +0300
message:
Fix for LP BUG#615752 Subquery caching is not visible in EXPLAIN
Subquery cache added to EXPLAIN EXTENDED output.
=== modified file 'mysql-test/r/compare.result'
--- a/mysql-test/r/compare.result 2010-03-09 10:36:26 +0000
+++ b/mysql-test/r/compare.result 2010-08-11 17:51:06 +0000
@@ -88,7 +88,7 @@
Warnings:
Note 1276 Field or reference 'test.t2.a' of SELECT #2 was resolved in SELECT #1
Note 1276 Field or reference 'test.t2.a' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`a` AS `a`,(select count(0) from `test`.`t1` where ((`test`.`t1`.`b` = `test`.`t2`.`a`) and (concat(`test`.`t1`.`b`,`test`.`t1`.`c`) = concat('0',`test`.`t2`.`a`,'01')))) AS `x` from `test`.`t2` order by `test`.`t2`.`a`
+Note 1003 select `test`.`t2`.`a` AS `a`,<expr_cache>((select count(0) from `test`.`t1` where ((`test`.`t1`.`b` = `test`.`t2`.`a`) and (concat(`test`.`t1`.`b`,`test`.`t1`.`c`) = concat('0',`test`.`t2`.`a`,'01'))))) AS `x` from `test`.`t2` order by `test`.`t2`.`a`
DROP TABLE t1,t2;
CREATE TABLE t1 (a TIMESTAMP);
INSERT INTO t1 VALUES (NOW()),(NOW()),(NOW());
=== modified file 'mysql-test/r/explain.result'
--- a/mysql-test/r/explain.result 2010-06-26 10:05:41 +0000
+++ b/mysql-test/r/explain.result 2010-08-11 17:51:06 +0000
@@ -224,7 +224,7 @@
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 2 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.t1.c' of SELECT #2 was resolved in SELECT #1
-Note 1003 select (select 1 from `test`.`t2` where (`test`.`t2`.`d` = NULL)) AS `(SELECT 1 FROM t2 WHERE d = c)` from `test`.`t1`
+Note 1003 select <expr_cache>((select 1 from `test`.`t2` where (`test`.`t2`.`d` = NULL))) AS `(SELECT 1 FROM t2 WHERE d = c)` from `test`.`t1`
DROP TABLE t1, t2;
#
# Bug #48419: another explain crash..
=== modified file 'mysql-test/r/group_by.result'
--- a/mysql-test/r/group_by.result 2010-06-26 10:05:41 +0000
+++ b/mysql-test/r/group_by.result 2010-08-11 17:51:06 +0000
@@ -1742,7 +1742,7 @@
2 DEPENDENT SUBQUERY NULL NULL NULL NULL NULL NULL NULL NULL No tables used
Warnings:
Note 1276 Field or reference 'test.t1.a' of SELECT #2 was resolved in SELECT #1
-Note 1003 select (select `test`.`t1`.`a`) AS `aa`,count(distinct `test`.`t1`.`b`) AS `COUNT(DISTINCT b)` from `test`.`t1` group by ((select `test`.`t1`.`a`) + 0)
+Note 1003 select <expr_cache>((select `test`.`t1`.`a`)) AS `aa`,count(distinct `test`.`t1`.`b`) AS `COUNT(DISTINCT b)` from `test`.`t1` group by (<expr_cache>((select `test`.`t1`.`a`)) + 0)
EXPLAIN EXTENDED
SELECT (SELECT t1.a) aa, COUNT(DISTINCT b) FROM t1 GROUP BY -aa;
id select_type table type possible_keys key key_len ref rows filtered Extra
@@ -1750,7 +1750,7 @@
2 DEPENDENT SUBQUERY NULL NULL NULL NULL NULL NULL NULL NULL No tables used
Warnings:
Note 1276 Field or reference 'test.t1.a' of SELECT #2 was resolved in SELECT #1
-Note 1003 select (select `test`.`t1`.`a`) AS `aa`,count(distinct `test`.`t1`.`b`) AS `COUNT(DISTINCT b)` from `test`.`t1` group by -((select `test`.`t1`.`a`))
+Note 1003 select <expr_cache>((select `test`.`t1`.`a`)) AS `aa`,count(distinct `test`.`t1`.`b`) AS `COUNT(DISTINCT b)` from `test`.`t1` group by -(<expr_cache>((select `test`.`t1`.`a`)))
# should return only one record
SELECT (SELECT tt.a FROM t1 tt LIMIT 1) aa, COUNT(DISTINCT b) FROM t1
GROUP BY aa;
=== modified file 'mysql-test/r/subselect.result'
--- a/mysql-test/r/subselect.result 2010-06-26 19:11:45 +0000
+++ b/mysql-test/r/subselect.result 2010-08-11 17:51:06 +0000
@@ -52,7 +52,7 @@
Warnings:
Note 1276 Field or reference 'b.a' of SELECT #3 was resolved in SELECT #1
Note 1276 Field or reference 'b.a' of SELECT #3 was resolved in SELECT #1
-Note 1003 select 1 AS `1` from (select 1 AS `a`) `b` having ((select '1') = 1)
+Note 1003 select 1 AS `1` from (select 1 AS `a`) `b` having (<expr_cache>((select '1')) = 1)
SELECT 1 FROM (SELECT 1 as a) as b HAVING (SELECT a)=1;
1
1
@@ -316,7 +316,7 @@
Warnings:
Note 1276 Field or reference 'test.t2.a' of SELECT #2 was resolved in SELECT #1
Note 1276 Field or reference 'test.t2.a' of SELECT #3 was resolved in SELECT #1
-Note 1003 select (select '2' from `test`.`t1` where ('2' = `test`.`t2`.`a`) union select `test`.`t5`.`a` from `test`.`t5` where (`test`.`t5`.`a` = `test`.`t2`.`a`)) AS `(select a from t1 where t1.a=t2.a union select a from t5 where t5.a=t2.a)`,`test`.`t2`.`a` AS `a` from `test`.`t2`
+Note 1003 select <expr_cache>((select '2' from `test`.`t1` where ('2' = `test`.`t2`.`a`) union select `test`.`t5`.`a` from `test`.`t5` where (`test`.`t5`.`a` = `test`.`t2`.`a`))) AS `(select a from t1 where t1.a=t2.a union select a from t5 where t5.a=t2.a)`,`test`.`t2`.`a` AS `a` from `test`.`t2`
select (select a from t1 where t1.a=t2.a union all select a from t5 where t5.a=t2.a), a from t2;
ERROR 21000: Subquery returns more than 1 row
create table t6 (patient_uq int, clinic_uq int, index i1 (clinic_uq));
@@ -334,7 +334,7 @@
2 DEPENDENT SUBQUERY t7 eq_ref PRIMARY PRIMARY 4 test.t6.clinic_uq 1 100.00 Using index
Warnings:
Note 1276 Field or reference 'test.t6.clinic_uq' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t6`.`patient_uq` AS `patient_uq`,`test`.`t6`.`clinic_uq` AS `clinic_uq` from `test`.`t6` where exists(select 1 from `test`.`t7` where (`test`.`t7`.`uq` = `test`.`t6`.`clinic_uq`))
+Note 1003 select `test`.`t6`.`patient_uq` AS `patient_uq`,`test`.`t6`.`clinic_uq` AS `clinic_uq` from `test`.`t6` where <expr_cache>(exists(select 1 from `test`.`t7` where (`test`.`t7`.`uq` = `test`.`t6`.`clinic_uq`)))
select * from t1 where a= (select a from t2,t4 where t2.b=t4.b);
ERROR 23000: Column 'a' in field list is ambiguous
drop table t1,t2,t3;
@@ -745,7 +745,7 @@
3 DEPENDENT UNION NULL NULL NULL NULL NULL NULL NULL NULL No tables used
NULL UNION RESULT <union2,3> ALL NULL NULL NULL NULL NULL NULL
Warnings:
-Note 1003 select `test`.`t2`.`id` AS `id` from `test`.`t2` where <in_optimizer>(`test`.`t2`.`id`,<exists>(select 1 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(1)) union select 3 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(3))))
+Note 1003 select `test`.`t2`.`id` AS `id` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`id`,<exists>(select 1 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(1)) union select 3 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(3)))))
SELECT * FROM t2 WHERE id IN (SELECT 5 UNION SELECT 3);
id
SELECT * FROM t2 WHERE id IN (SELECT 5 UNION SELECT 2);
@@ -893,7 +893,7 @@
1 PRIMARY t1 index NULL PRIMARY 4 NULL 4 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery a a 5 func 2 100.00 Using index
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a`,<in_optimizer>(`test`.`t1`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`a`) in t2 on a checking NULL having <is_not_null_test>(`test`.`t2`.`a`)))) AS `t1.a in (select t2.a from t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`a`) in t2 on a checking NULL having <is_not_null_test>(`test`.`t2`.`a`))))) AS `t1.a in (select t2.a from t2)` from `test`.`t1`
CREATE TABLE t3 (a int(11) default '0');
INSERT INTO t3 VALUES (1),(2),(3);
SELECT t1.a, t1.a in (select t2.a from t2,t3 where t3.a=t2.a) FROM t1;
@@ -908,7 +908,7 @@
2 DEPENDENT SUBQUERY t2 ref_or_null a a 5 func 2 100.00 Using index
2 DEPENDENT SUBQUERY t3 ALL NULL NULL NULL NULL 3 100.00 Using where; Using join buffer
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a`,<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t2`.`a`) and ((<cache>(`test`.`t1`.`a`) = `test`.`t2`.`a`) or isnull(`test`.`t2`.`a`))) having <is_not_null_test>(`test`.`t2`.`a`))) AS `t1.a in (select t2.a from t2,t3 where t3.a=t2.a)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t2`.`a`) and ((<cache>(`test`.`t1`.`a`) = `test`.`t2`.`a`) or isnull(`test`.`t2`.`a`))) having <is_not_null_test>(`test`.`t2`.`a`)))) AS `t1.a in (select t2.a from t2,t3 where t3.a=t2.a)` from `test`.`t1`
drop table t1,t2,t3;
create table t1 (a float);
select 10.5 IN (SELECT * from t1 LIMIT 1);
@@ -1469,25 +1469,25 @@
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`))))))) AS `s1 NOT IN (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 = ANY (SELECT s1 FROM t2) from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`))))) AS `s1 = ANY (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))) AS `s1 = ANY (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 <> ALL (SELECT s1 FROM t2) from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`))))))) AS `s1 <> ALL (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 <> ALL (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2') from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Using where; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL where (`test`.`t2`.`s1` < 'a2') having trigcond(<is_not_null_test>(`test`.`t2`.`s1`))))))) AS `s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2')` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL where (`test`.`t2`.`s1` < 'a2') having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2')` from `test`.`t1`
drop table t1,t2;
create table t2 (a int, b int);
create table t3 (a int);
@@ -1742,14 +1742,14 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 12 100.00 Using where
2 DEPENDENT SUBQUERY t1 unique_subquery PRIMARY PRIMARY 4 func 1 100.00 Using index; Using where
Warnings:
-Note 1003 select `test`.`t1`.`id` AS `id`,`test`.`t1`.`text` AS `text` from `test`.`t1` where (not(<in_optimizer>(`test`.`t1`.`id`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`id`) in t1 on PRIMARY where ((`test`.`t1`.`id` < 8) and (<cache>(`test`.`t1`.`id`) = `test`.`t1`.`id`)))))))
+Note 1003 select `test`.`t1`.`id` AS `id`,`test`.`t1`.`text` AS `text` from `test`.`t1` where (not(<expr_cache>(<in_optimizer>(`test`.`t1`.`id`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`id`) in t1 on PRIMARY where ((`test`.`t1`.`id` < 8) and (<cache>(`test`.`t1`.`id`) = `test`.`t1`.`id`))))))))
explain extended select * from t1 as tt where not exists (select id from t1 where id < 8 and (id = tt.id or id is null) having id is not null);
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY tt ALL NULL NULL NULL NULL 12 100.00 Using where
2 DEPENDENT SUBQUERY t1 eq_ref PRIMARY PRIMARY 4 test.tt.id 1 100.00 Using where; Using index
Warnings:
Note 1276 Field or reference 'test.tt.id' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`tt`.`id` AS `id`,`test`.`tt`.`text` AS `text` from `test`.`t1` `tt` where (not(exists(select `test`.`t1`.`id` from `test`.`t1` where ((`test`.`t1`.`id` < 8) and (`test`.`t1`.`id` = `test`.`tt`.`id`)) having (`test`.`t1`.`id` is not null))))
+Note 1003 select `test`.`tt`.`id` AS `id`,`test`.`tt`.`text` AS `text` from `test`.`t1` `tt` where (not(<expr_cache>(exists(select `test`.`t1`.`id` from `test`.`t1` where ((`test`.`t1`.`id` < 8) and (`test`.`t1`.`id` = `test`.`tt`.`id`)) having (`test`.`t1`.`id` is not null)))))
insert into t1 (id, text) values (1000, 'text1000'), (1001, 'text1001');
create table t2 (id int not null, text varchar(20) not null default '', primary key (id));
insert into t2 (id, text) values (1, 'text1'), (2, 'text2'), (3, 'text3'), (4, 'text4'), (5, 'text5'), (6, 'text6'), (7, 'text7'), (8, 'text8'), (9, 'text9'), (10, 'text10'), (11, 'text1'), (12, 'text2'), (13, 'text3'), (14, 'text4'), (15, 'text5'), (16, 'text6'), (17, 'text7'), (18, 'text8'), (19, 'text9'), (20, 'text10'),(21, 'text1'), (22, 'text2'), (23, 'text3'), (24, 'text4'), (25, 'text5'), (26, 'text6'), (27, 'text7'), (28, 'text8'), (29, 'text9'), (30, 'text10'), (31, 'text1'), (32, 'text2'), (33, 'text3'), (34, 'text4'), (35, 'text5'), (36, 'text6'), (37, 'text7'), (38, 'text8'), (39, 'text9'), (40, 'text10'), (41, 'text1'), (42, 'text2'), (43, 'text3'), (44, 'text4'), (45, 'text5'), (46, 'text6'), (47, 'text7'), (48, 'text8'), (49, 'text9'), (50, 'text10');
@@ -2285,7 +2285,7 @@
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.up.a' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`up`.`a` AS `a`,`test`.`up`.`b` AS `b` from `test`.`t1` `up` where exists(select 1 from `test`.`t1` where (`test`.`t1`.`a` = `test`.`up`.`a`))
+Note 1003 select `test`.`up`.`a` AS `a`,`test`.`up`.`b` AS `b` from `test`.`t1` `up` where <expr_cache>(exists(select 1 from `test`.`t1` where (`test`.`t1`.`a` = `test`.`up`.`a`)))
drop table t1;
CREATE TABLE t1 (t1_a int);
INSERT INTO t1 VALUES (1);
@@ -2826,7 +2826,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = '0') and trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`)))) having (trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`))))) AS `test` from `test`.`t1`
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = '0') and trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`)))) having (trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
explain extended SELECT one,two from t1 where ROW(one,two) IN (SELECT one,two FROM t2 WHERE flag = 'N');
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
@@ -2838,7 +2838,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = '0') group by `test`.`t2`.`one`,`test`.`t2`.`two` having (trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`))) and trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`))))) AS `test` from `test`.`t1`
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = '0') group by `test`.`t2`.`one`,`test`.`t2`.`two` having (trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`))) and trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
DROP TABLE t1,t2;
CREATE TABLE t1 (a char(5), b char(5));
INSERT INTO t1 VALUES (NULL,'aaa'), ('aaa','aaa');
@@ -4325,13 +4325,13 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00 Using where
2 SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using temporary; Using filesort
Warnings:
-Note 1003 select 1 AS `1` from `test`.`t1` where <in_optimizer>(1,1 in ( <materialize> (select 1 from `test`.`t1` group by `test`.`t1`.`a` ), <primary_index_lookup>(1 in <temporary table> on distinct_key where ((1 = `materialized subselect`.`1`)))))
+Note 1003 select 1 AS `1` from `test`.`t1` where <expr_cache>(<in_optimizer>(1,1 in ( <materialize> (select 1 from `test`.`t1` group by `test`.`t1`.`a` ), <primary_index_lookup>(1 in <temporary table> on distinct_key where ((1 = `materialized subselect`.`1`))))))
EXPLAIN EXTENDED SELECT 1 FROM t1 WHERE 1 IN (SELECT 1 FROM t1 WHERE a > 3 GROUP BY a);
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00 Using where
2 SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select 1 AS `1` from `test`.`t1` where <in_optimizer>(1,1 in ( <materialize> (select 1 from `test`.`t1` where (`test`.`t1`.`a` > 3) group by `test`.`t1`.`a` ), <primary_index_lookup>(1 in <temporary table> on distinct_key where ((1 = `materialized subselect`.`1`)))))
+Note 1003 select 1 AS `1` from `test`.`t1` where <expr_cache>(<in_optimizer>(1,1 in ( <materialize> (select 1 from `test`.`t1` where (`test`.`t1`.`a` > 3) group by `test`.`t1`.`a` ), <primary_index_lookup>(1 in <temporary table> on distinct_key where ((1 = `materialized subselect`.`1`))))))
DROP TABLE t1;
#
# Bug#45061: Incorrectly market field caused wrong result.
=== modified file 'mysql-test/r/subselect3.result'
--- a/mysql-test/r/subselect3.result 2010-07-10 10:37:30 +0000
+++ b/mysql-test/r/subselect3.result 2010-08-11 17:51:06 +0000
@@ -30,7 +30,7 @@
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 6 100.00 Using where; Using temporary; Using filesort
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref`,<in_optimizer>(`test`.`t2`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t2`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`)))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref`,<expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t2`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`))))))) AS `Z` from `test`.`t2`
explain extended
select a, oref from t2
where a in (select max(ie) from t1 where oref=t2.oref group by grp);
@@ -39,7 +39,7 @@
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 6 100.00 Using where; Using temporary; Using filesort
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref` from `test`.`t2` where <in_optimizer>(`test`.`t2`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) group by `test`.`t1`.`grp` having (<cache>(`test`.`t2`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`)))))
+Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) group by `test`.`t1`.`grp` having (<cache>(`test`.`t2`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`))))))
select a, oref, a in (
select max(ie) from t1 where oref=t2.oref group by grp union
select max(ie) from t1 where oref=t2.oref group by grp
@@ -70,7 +70,7 @@
1 PRIMARY t3 ALL NULL NULL NULL NULL 2 100.00
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 6 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select <in_optimizer>(`test`.`t3`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = 4) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t3`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`)))))) AS `a in (select max(ie) from t1 where oref=4 group by grp)` from `test`.`t3`
+Note 1003 select <expr_cache>(<in_optimizer>(`test`.`t3`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = 4) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t3`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`))))))) AS `a in (select max(ie) from t1 where oref=4 group by grp)` from `test`.`t3`
set @@optimizer_switch=@save_optimizer_switch;
drop table t1, t2, t3;
create table t1 (a int, oref int, key(a));
@@ -95,7 +95,7 @@
2 DEPENDENT SUBQUERY t1 index_subquery a a 5 func 2 100.00 Using where; Full scan on NULL key
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,<in_optimizer>(`test`.`t2`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a checking NULL where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) having trigcond(<is_not_null_test>(`test`.`t1`.`a`))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a checking NULL where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) having trigcond(<is_not_null_test>(`test`.`t1`.`a`)))))) AS `Z` from `test`.`t2`
flush status;
select oref, a from t2 where a in (select a from t1 where oref=t2.oref);
oref a
@@ -162,7 +162,7 @@
2 DEPENDENT SUBQUERY t2 ref a a 5 test.t1.b 1 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.t3.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t3`.`a` AS `a`,`test`.`t3`.`oref` AS `oref`,<in_optimizer>(`test`.`t3`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`a` = `test`.`t1`.`b`) and (`test`.`t2`.`b` = `test`.`t3`.`oref`) and trigcond(((<cache>(`test`.`t3`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`)))) having trigcond(<is_not_null_test>(`test`.`t1`.`a`)))) AS `Z` from `test`.`t3`
+Note 1003 select `test`.`t3`.`a` AS `a`,`test`.`t3`.`oref` AS `oref`,<expr_cache>(<in_optimizer>(`test`.`t3`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`a` = `test`.`t1`.`b`) and (`test`.`t2`.`b` = `test`.`t3`.`oref`) and trigcond(((<cache>(`test`.`t3`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`)))) having trigcond(<is_not_null_test>(`test`.`t1`.`a`))))) AS `Z` from `test`.`t3`
drop table t1, t2, t3;
create table t1 (a int NOT NULL, b int NOT NULL, key(a));
insert into t1 values
@@ -190,7 +190,7 @@
2 DEPENDENT SUBQUERY t2 ref a a 4 test.t1.b 1 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.t3.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t3`.`a` AS `a`,`test`.`t3`.`oref` AS `oref`,<in_optimizer>(`test`.`t3`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`a` = `test`.`t1`.`b`) and (`test`.`t2`.`b` = `test`.`t3`.`oref`) and trigcond((<cache>(`test`.`t3`.`a`) = `test`.`t1`.`a`))))) AS `Z` from `test`.`t3`
+Note 1003 select `test`.`t3`.`a` AS `a`,`test`.`t3`.`oref` AS `oref`,<expr_cache>(<in_optimizer>(`test`.`t3`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`a` = `test`.`t1`.`b`) and (`test`.`t2`.`b` = `test`.`t3`.`oref`) and trigcond((<cache>(`test`.`t3`.`a`) = `test`.`t1`.`a`)))))) AS `Z` from `test`.`t3`
drop table t1,t2,t3;
create table t1 (oref int, grp int);
insert into t1 (oref, grp) values
@@ -214,7 +214,7 @@
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using temporary; Using filesort
Warnings:
Note 1276 Field or reference 't2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref`,<in_optimizer>(`test`.`t2`.`a`,<exists>(select count(0) from `test`.`t1` group by `test`.`t1`.`grp` having ((`test`.`t1`.`grp` = `test`.`t2`.`oref`) and trigcond((<cache>(`test`.`t2`.`a`) = <ref_null_helper>(count(0))))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref`,<expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select count(0) from `test`.`t1` group by `test`.`t1`.`grp` having ((`test`.`t1`.`grp` = `test`.`t2`.`oref`) and trigcond((<cache>(`test`.`t2`.`a`) = <ref_null_helper>(count(0)))))))) AS `Z` from `test`.`t2`
drop table t1, t2;
create table t1 (a int, b int, primary key (a));
insert into t1 values (1,1), (3,1),(100,1);
@@ -246,7 +246,7 @@
2 DEPENDENT SUBQUERY t1 index_subquery a a 5 func 2 100.00 Using where; Full scan on NULL key
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,`test`.`t2`.`oref` AS `oref`,<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a checking NULL where ((`test`.`t1`.`c` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`b`) or isnull(`test`.`t1`.`b`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`a`)) and trigcond(<is_not_null_test>(`test`.`t1`.`b`)))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,`test`.`t2`.`oref` AS `oref`,<expr_cache>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a checking NULL where ((`test`.`t1`.`c` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`b`) or isnull(`test`.`t1`.`b`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`a`)) and trigcond(<is_not_null_test>(`test`.`t1`.`b`))))))) AS `Z` from `test`.`t2`
select a,b, oref, (a,b) in (select a,b from t1 where c=t2.oref) Z from t2;
a b oref Z
NULL 1 100 0
@@ -263,7 +263,7 @@
2 DEPENDENT SUBQUERY t4 ALL NULL NULL NULL NULL 100 100.00 Using where; Using join buffer
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,`test`.`t2`.`oref` AS `oref`,<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(select `test`.`t1`.`a`,`test`.`t1`.`b` from `test`.`t1` join `test`.`t4` where ((`test`.`t1`.`c` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`b`) or isnull(`test`.`t1`.`b`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`a`)) and trigcond(<is_not_null_test>(`test`.`t1`.`b`))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,`test`.`t2`.`oref` AS `oref`,<expr_cache>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(select `test`.`t1`.`a`,`test`.`t1`.`b` from `test`.`t1` join `test`.`t4` where ((`test`.`t1`.`c` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`b`) or isnull(`test`.`t1`.`b`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`a`)) and trigcond(<is_not_null_test>(`test`.`t1`.`b`)))))) AS `Z` from `test`.`t2`
select a,b, oref,
(a,b) in (select a,b from t1,t4 where c=t2.oref) Z
from t2;
@@ -308,7 +308,7 @@
2 DEPENDENT SUBQUERY t1 index_subquery idx idx 5 func 4 100.00 Using where; Full scan on NULL key
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on idx checking NULL where ((`test`.`t1`.`oref` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`ie1`) or isnull(`test`.`t1`.`ie1`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`ie2`) or isnull(`test`.`t1`.`ie2`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`ie1`)) and trigcond(<is_not_null_test>(`test`.`t1`.`ie2`)))))) AS `Z` from `test`.`t2` where ((`test`.`t2`.`b` = 10) and (`test`.`t2`.`a` = 10))
+Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,<expr_cache>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on idx checking NULL where ((`test`.`t1`.`oref` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`ie1`) or isnull(`test`.`t1`.`ie1`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`ie2`) or isnull(`test`.`t1`.`ie2`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`ie1`)) and trigcond(<is_not_null_test>(`test`.`t1`.`ie2`))))))) AS `Z` from `test`.`t2` where ((`test`.`t2`.`b` = 10) and (`test`.`t2`.`a` = 10))
drop table t1, t2;
create table t1 (oref char(4), grp int, ie int);
insert into t1 (oref, grp, ie) values
@@ -578,7 +578,7 @@
2 DEPENDENT SUBQUERY t1 index_subquery idx idx 5 func 4 100.00 Using where; Full scan on NULL key
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on idx checking NULL where ((`test`.`t1`.`oref` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`ie1`) or isnull(`test`.`t1`.`ie1`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`ie2`) or isnull(`test`.`t1`.`ie2`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`ie1`)) and trigcond(<is_not_null_test>(`test`.`t1`.`ie2`)))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,<expr_cache>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on idx checking NULL where ((`test`.`t1`.`oref` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`ie1`) or isnull(`test`.`t1`.`ie1`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`ie2`) or isnull(`test`.`t1`.`ie2`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`ie1`)) and trigcond(<is_not_null_test>(`test`.`t1`.`ie2`))))))) AS `Z` from `test`.`t2`
drop table t1,t2;
create table t1 (oref char(4), grp int, ie int primary key);
insert into t1 (oref, grp, ie) values
@@ -710,7 +710,7 @@
1 PRIMARY t2 eq_ref PRIMARY PRIMARY 4 test.t1.a 1 100.00 Using index
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`b` = `test`.`t1`.`a`) and (not(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t1` where ((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`)) having <is_not_null_test>(`test`.`t1`.`a`))))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`b` = `test`.`t1`.`a`) and (not(<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t1` where ((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`)) having <is_not_null_test>(`test`.`t1`.`a`)))))))
SELECT a FROM t1, t2 WHERE a=b AND (b NOT IN (SELECT a FROM t1));
a
SELECT a FROM t1, t2 WHERE a=b AND (b NOT IN (SELECT a FROM t1 WHERE a > 4));
@@ -881,7 +881,7 @@
Note 1276 Field or reference 'test.t1.a' of SELECT #3 was resolved in SELECT #2
Note 1276 Field or reference 'test.t1.c' of SELECT #3 was resolved in SELECT #2
Error 1054 Unknown column 'c' in 'field list'
-Note 1003 select `c` AS `c` from (select (select count(`test`.`t1`.`a`) from (select count(`test`.`t1`.`b`) AS `COUNT(b)` from `test`.`t1`) `x` group by `t1`.`c`) AS `(SELECT COUNT(a) FROM
+Note 1003 select `c` AS `c` from (select <expr_cache>((select count(`test`.`t1`.`a`) from (select count(`test`.`t1`.`b`) AS `COUNT(b)` from `test`.`t1`) `x` group by `t1`.`c`)) AS `(SELECT COUNT(a) FROM
(SELECT COUNT(b) FROM t1) AS x GROUP BY c
)` from `test`.`t1` group by `test`.`t1`.`b`) `y`
DROP TABLE t1;
=== modified file 'mysql-test/r/subselect3_jcl6.result'
--- a/mysql-test/r/subselect3_jcl6.result 2010-07-10 10:37:30 +0000
+++ b/mysql-test/r/subselect3_jcl6.result 2010-08-11 17:51:06 +0000
@@ -34,7 +34,7 @@
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 6 100.00 Using where; Using temporary; Using filesort
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref`,<in_optimizer>(`test`.`t2`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t2`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`)))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref`,<expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t2`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`))))))) AS `Z` from `test`.`t2`
explain extended
select a, oref from t2
where a in (select max(ie) from t1 where oref=t2.oref group by grp);
@@ -43,7 +43,7 @@
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 6 100.00 Using where; Using temporary; Using filesort
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref` from `test`.`t2` where <in_optimizer>(`test`.`t2`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) group by `test`.`t1`.`grp` having (<cache>(`test`.`t2`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`)))))
+Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) group by `test`.`t1`.`grp` having (<cache>(`test`.`t2`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`))))))
select a, oref, a in (
select max(ie) from t1 where oref=t2.oref group by grp union
select max(ie) from t1 where oref=t2.oref group by grp
@@ -74,7 +74,7 @@
1 PRIMARY t3 ALL NULL NULL NULL NULL 2 100.00
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 6 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select <in_optimizer>(`test`.`t3`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = 4) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t3`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`)))))) AS `a in (select max(ie) from t1 where oref=4 group by grp)` from `test`.`t3`
+Note 1003 select <expr_cache>(<in_optimizer>(`test`.`t3`.`a`,<exists>(select max(`test`.`t1`.`ie`) from `test`.`t1` where (`test`.`t1`.`oref` = 4) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t3`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`))))))) AS `a in (select max(ie) from t1 where oref=4 group by grp)` from `test`.`t3`
set @@optimizer_switch=@save_optimizer_switch;
drop table t1, t2, t3;
create table t1 (a int, oref int, key(a));
@@ -99,7 +99,7 @@
2 DEPENDENT SUBQUERY t1 index_subquery a a 5 func 2 100.00 Using where; Full scan on NULL key
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,<in_optimizer>(`test`.`t2`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a checking NULL where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) having trigcond(<is_not_null_test>(`test`.`t1`.`a`))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a checking NULL where (`test`.`t1`.`oref` = `test`.`t2`.`oref`) having trigcond(<is_not_null_test>(`test`.`t1`.`a`)))))) AS `Z` from `test`.`t2`
flush status;
select oref, a from t2 where a in (select a from t1 where oref=t2.oref);
oref a
@@ -166,7 +166,7 @@
2 DEPENDENT SUBQUERY t2 ref a a 5 test.t1.b 1 100.00 Using where; Using join buffer
Warnings:
Note 1276 Field or reference 'test.t3.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t3`.`a` AS `a`,`test`.`t3`.`oref` AS `oref`,<in_optimizer>(`test`.`t3`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`a` = `test`.`t1`.`b`) and (`test`.`t2`.`b` = `test`.`t3`.`oref`) and trigcond(((<cache>(`test`.`t3`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`)))) having trigcond(<is_not_null_test>(`test`.`t1`.`a`)))) AS `Z` from `test`.`t3`
+Note 1003 select `test`.`t3`.`a` AS `a`,`test`.`t3`.`oref` AS `oref`,<expr_cache>(<in_optimizer>(`test`.`t3`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`a` = `test`.`t1`.`b`) and (`test`.`t2`.`b` = `test`.`t3`.`oref`) and trigcond(((<cache>(`test`.`t3`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`)))) having trigcond(<is_not_null_test>(`test`.`t1`.`a`))))) AS `Z` from `test`.`t3`
drop table t1, t2, t3;
create table t1 (a int NOT NULL, b int NOT NULL, key(a));
insert into t1 values
@@ -194,7 +194,7 @@
2 DEPENDENT SUBQUERY t2 ref a a 4 test.t1.b 1 100.00 Using where; Using join buffer
Warnings:
Note 1276 Field or reference 'test.t3.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t3`.`a` AS `a`,`test`.`t3`.`oref` AS `oref`,<in_optimizer>(`test`.`t3`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`a` = `test`.`t1`.`b`) and (`test`.`t2`.`b` = `test`.`t3`.`oref`) and trigcond((<cache>(`test`.`t3`.`a`) = `test`.`t1`.`a`))))) AS `Z` from `test`.`t3`
+Note 1003 select `test`.`t3`.`a` AS `a`,`test`.`t3`.`oref` AS `oref`,<expr_cache>(<in_optimizer>(`test`.`t3`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`a` = `test`.`t1`.`b`) and (`test`.`t2`.`b` = `test`.`t3`.`oref`) and trigcond((<cache>(`test`.`t3`.`a`) = `test`.`t1`.`a`)))))) AS `Z` from `test`.`t3`
drop table t1,t2,t3;
create table t1 (oref int, grp int);
insert into t1 (oref, grp) values
@@ -218,7 +218,7 @@
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using temporary; Using filesort
Warnings:
Note 1276 Field or reference 't2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref`,<in_optimizer>(`test`.`t2`.`a`,<exists>(select count(0) from `test`.`t1` group by `test`.`t1`.`grp` having ((`test`.`t1`.`grp` = `test`.`t2`.`oref`) and trigcond((<cache>(`test`.`t2`.`a`) = <ref_null_helper>(count(0))))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`oref` AS `oref`,<expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select count(0) from `test`.`t1` group by `test`.`t1`.`grp` having ((`test`.`t1`.`grp` = `test`.`t2`.`oref`) and trigcond((<cache>(`test`.`t2`.`a`) = <ref_null_helper>(count(0)))))))) AS `Z` from `test`.`t2`
drop table t1, t2;
create table t1 (a int, b int, primary key (a));
insert into t1 values (1,1), (3,1),(100,1);
@@ -250,7 +250,7 @@
2 DEPENDENT SUBQUERY t1 index_subquery a a 5 func 2 100.00 Using where; Full scan on NULL key
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,`test`.`t2`.`oref` AS `oref`,<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a checking NULL where ((`test`.`t1`.`c` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`b`) or isnull(`test`.`t1`.`b`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`a`)) and trigcond(<is_not_null_test>(`test`.`t1`.`b`)))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,`test`.`t2`.`oref` AS `oref`,<expr_cache>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a checking NULL where ((`test`.`t1`.`c` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`b`) or isnull(`test`.`t1`.`b`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`a`)) and trigcond(<is_not_null_test>(`test`.`t1`.`b`))))))) AS `Z` from `test`.`t2`
select a,b, oref, (a,b) in (select a,b from t1 where c=t2.oref) Z from t2;
a b oref Z
NULL 1 100 0
@@ -267,7 +267,7 @@
2 DEPENDENT SUBQUERY t4 ALL NULL NULL NULL NULL 100 100.00 Using where; Using join buffer
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,`test`.`t2`.`oref` AS `oref`,<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(select `test`.`t1`.`a`,`test`.`t1`.`b` from `test`.`t1` join `test`.`t4` where ((`test`.`t1`.`c` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`b`) or isnull(`test`.`t1`.`b`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`a`)) and trigcond(<is_not_null_test>(`test`.`t1`.`b`))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,`test`.`t2`.`oref` AS `oref`,<expr_cache>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(select `test`.`t1`.`a`,`test`.`t1`.`b` from `test`.`t1` join `test`.`t4` where ((`test`.`t1`.`c` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`b`) or isnull(`test`.`t1`.`b`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`a`)) and trigcond(<is_not_null_test>(`test`.`t1`.`b`)))))) AS `Z` from `test`.`t2`
select a,b, oref,
(a,b) in (select a,b from t1,t4 where c=t2.oref) Z
from t2;
@@ -312,7 +312,7 @@
2 DEPENDENT SUBQUERY t1 index_subquery idx idx 5 func 4 100.00 Using where; Full scan on NULL key
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on idx checking NULL where ((`test`.`t1`.`oref` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`ie1`) or isnull(`test`.`t1`.`ie1`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`ie2`) or isnull(`test`.`t1`.`ie2`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`ie1`)) and trigcond(<is_not_null_test>(`test`.`t1`.`ie2`)))))) AS `Z` from `test`.`t2` where ((`test`.`t2`.`b` = 10) and (`test`.`t2`.`a` = 10))
+Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,<expr_cache>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on idx checking NULL where ((`test`.`t1`.`oref` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`ie1`) or isnull(`test`.`t1`.`ie1`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`ie2`) or isnull(`test`.`t1`.`ie2`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`ie1`)) and trigcond(<is_not_null_test>(`test`.`t1`.`ie2`))))))) AS `Z` from `test`.`t2` where ((`test`.`t2`.`b` = 10) and (`test`.`t2`.`a` = 10))
drop table t1, t2;
create table t1 (oref char(4), grp int, ie int);
insert into t1 (oref, grp, ie) values
@@ -582,7 +582,7 @@
2 DEPENDENT SUBQUERY t1 index_subquery idx idx 5 func 4 100.00 Using where; Full scan on NULL key
Warnings:
Note 1276 Field or reference 'test.t2.oref' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on idx checking NULL where ((`test`.`t1`.`oref` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`ie1`) or isnull(`test`.`t1`.`ie1`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`ie2`) or isnull(`test`.`t1`.`ie2`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`ie1`)) and trigcond(<is_not_null_test>(`test`.`t1`.`ie2`)))))) AS `Z` from `test`.`t2`
+Note 1003 select `test`.`t2`.`oref` AS `oref`,`test`.`t2`.`a` AS `a`,`test`.`t2`.`b` AS `b`,<expr_cache>(<in_optimizer>((`test`.`t2`.`a`,`test`.`t2`.`b`),<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on idx checking NULL where ((`test`.`t1`.`oref` = `test`.`t2`.`oref`) and trigcond(((<cache>(`test`.`t2`.`a`) = `test`.`t1`.`ie1`) or isnull(`test`.`t1`.`ie1`))) and trigcond(((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`ie2`) or isnull(`test`.`t1`.`ie2`)))) having (trigcond(<is_not_null_test>(`test`.`t1`.`ie1`)) and trigcond(<is_not_null_test>(`test`.`t1`.`ie2`))))))) AS `Z` from `test`.`t2`
drop table t1,t2;
create table t1 (oref char(4), grp int, ie int primary key);
insert into t1 (oref, grp, ie) values
@@ -714,7 +714,7 @@
1 PRIMARY t2 eq_ref PRIMARY PRIMARY 4 test.t1.a 1 100.00 Using index
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`b` = `test`.`t1`.`a`) and (not(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t1` where ((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`)) having <is_not_null_test>(`test`.`t1`.`a`))))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` join `test`.`t2` where ((`test`.`t2`.`b` = `test`.`t1`.`a`) and (not(<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t1` where ((<cache>(`test`.`t2`.`b`) = `test`.`t1`.`a`) or isnull(`test`.`t1`.`a`)) having <is_not_null_test>(`test`.`t1`.`a`)))))))
SELECT a FROM t1, t2 WHERE a=b AND (b NOT IN (SELECT a FROM t1));
a
SELECT a FROM t1, t2 WHERE a=b AND (b NOT IN (SELECT a FROM t1 WHERE a > 4));
@@ -885,7 +885,7 @@
Note 1276 Field or reference 'test.t1.a' of SELECT #3 was resolved in SELECT #2
Note 1276 Field or reference 'test.t1.c' of SELECT #3 was resolved in SELECT #2
Error 1054 Unknown column 'c' in 'field list'
-Note 1003 select `c` AS `c` from (select (select count(`test`.`t1`.`a`) from (select count(`test`.`t1`.`b`) AS `COUNT(b)` from `test`.`t1`) `x` group by `t1`.`c`) AS `(SELECT COUNT(a) FROM
+Note 1003 select `c` AS `c` from (select <expr_cache>((select count(`test`.`t1`.`a`) from (select count(`test`.`t1`.`b`) AS `COUNT(b)` from `test`.`t1`) `x` group by `t1`.`c`)) AS `(SELECT COUNT(a) FROM
(SELECT COUNT(b) FROM t1) AS x GROUP BY c
)` from `test`.`t1` group by `test`.`t1`.`b`) `y`
DROP TABLE t1;
=== modified file 'mysql-test/r/subselect4.result'
--- a/mysql-test/r/subselect4.result 2010-06-26 10:05:41 +0000
+++ b/mysql-test/r/subselect4.result 2010-08-11 17:51:06 +0000
@@ -80,7 +80,7 @@
2 DEPENDENT SUBQUERY NULL NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
Warnings:
Note 1276 Field or reference 'test.t1.c' of SELECT #2 was resolved in SELECT #1
-Note 1003 select (select 1 from `test`.`t2` where 0) AS `RESULT` from `test`.`t1`
+Note 1003 select <expr_cache>((select 1 from `test`.`t2` where 0)) AS `RESULT` from `test`.`t1`
first equivalent variant
SELECT (SELECT 1 FROM t2 WHERE d = IFNULL(c,NULL)) AS RESULT FROM t1 GROUP BY c ;
RESULT
@@ -91,7 +91,7 @@
2 DEPENDENT SUBQUERY NULL NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
Warnings:
Note 1276 Field or reference 'test.t1.c' of SELECT #2 was resolved in SELECT #1
-Note 1003 select (select 1 from `test`.`t2` where 0) AS `RESULT` from `test`.`t1` group by NULL
+Note 1003 select <expr_cache>((select 1 from `test`.`t2` where 0)) AS `RESULT` from `test`.`t1` group by NULL
second equivalent variant
SELECT (SELECT 1 FROM t2 WHERE d = c) AS RESULT FROM t1 GROUP BY c ;
RESULT
@@ -102,7 +102,7 @@
2 DEPENDENT SUBQUERY NULL NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
Warnings:
Note 1276 Field or reference 'test.t1.c' of SELECT #2 was resolved in SELECT #1
-Note 1003 select (select 1 from `test`.`t2` where 0) AS `RESULT` from `test`.`t1` group by NULL
+Note 1003 select <expr_cache>((select 1 from `test`.`t2` where 0)) AS `RESULT` from `test`.`t1` group by NULL
DROP TABLE t1,t2;
#
# BUG#45928 "Differing query results depending on MRR and
=== modified file 'mysql-test/r/subselect_mat.result'
--- a/mysql-test/r/subselect_mat.result 2010-07-16 11:02:15 +0000
+++ b/mysql-test/r/subselect_mat.result 2010-08-11 17:51:06 +0000
@@ -41,7 +41,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2 ALL NULL NULL NULL NULL 5 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <in_optimizer>(`test`.`t1`.`a1`,`test`.`t1`.`a1` in ( <materialize> (select `test`.`t2`.`b1` from `test`.`t2` where (`test`.`t2`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`)))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache>(<in_optimizer>(`test`.`t1`.`a1`,`test`.`t1`.`a1` in ( <materialize> (select `test`.`t2`.`b1` from `test`.`t2` where (`test`.`t2`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`))))))
select * from t1 where a1 in (select b1 from t2 where b1 > '0');
a1 a2
1 - 01 2 - 01
@@ -52,7 +52,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2 ALL NULL NULL NULL NULL 5 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <in_optimizer>(`test`.`t1`.`a1`,`test`.`t1`.`a1` in ( <materialize> (select `test`.`t2`.`b1` from `test`.`t2` where (`test`.`t2`.`b1` > '0') group by `test`.`t2`.`b1` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`)))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache>(<in_optimizer>(`test`.`t1`.`a1`,`test`.`t1`.`a1` in ( <materialize> (select `test`.`t2`.`b1` from `test`.`t2` where (`test`.`t2`.`b1` > '0') group by `test`.`t2`.`b1` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`))))))
select * from t1 where a1 in (select b1 from t2 where b1 > '0' group by b1);
a1 a2
1 - 01 2 - 01
@@ -63,7 +63,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2 ALL NULL NULL NULL NULL 5 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where (`test`.`t2`.`b1` > '0') group by `test`.`t2`.`b1`,`test`.`t2`.`b2` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`b2`)))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where (`test`.`t2`.`b1` > '0') group by `test`.`t2`.`b1`,`test`.`t2`.`b2` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`b2`))))))
select * from t1 where (a1, a2) in (select b1, b2 from t2 where b1 > '0' group by b1, b2);
a1 a2
1 - 01 2 - 01
@@ -74,7 +74,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2 ALL NULL NULL NULL NULL 5 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,min(`test`.`t2`.`b2`) from `test`.`t2` where (`test`.`t2`.`b1` > '0') group by `test`.`t2`.`b1` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`min(b2)`)))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,min(`test`.`t2`.`b2`) from `test`.`t2` where (`test`.`t2`.`b1` > '0') group by `test`.`t2`.`b1` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`min(b2)`))))))
select * from t1 where (a1, a2) in (select b1, min(b2) from t2 where b1 > '0' group by b1);
a1 a2
1 - 01 2 - 01
@@ -85,7 +85,7 @@
1 PRIMARY t1i index NULL it1i3 18 NULL 3 100.00 Using where; Using index
2 SUBQUERY t2i index it2i1,it2i3 it2i1 9 NULL 5 100.00 Using where; Using index
Warnings:
-Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <in_optimizer>(`test`.`t1i`.`a1`,`test`.`t1i`.`a1` in ( <materialize> (select `test`.`t2i`.`b1` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`)))))
+Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <expr_cache>(<in_optimizer>(`test`.`t1i`.`a1`,`test`.`t1i`.`a1` in ( <materialize> (select `test`.`t2i`.`b1` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`))))))
select * from t1i where a1 in (select b1 from t2i where b1 > '0');
a1 a2
1 - 01 2 - 01
@@ -96,7 +96,7 @@
1 PRIMARY t1i index NULL it1i3 18 NULL 3 100.00 Using where; Using index
2 SUBQUERY t2i range it2i1,it2i3 it2i1 9 NULL 3 100.00 Using where; Using index for group-by
Warnings:
-Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <in_optimizer>(`test`.`t1i`.`a1`,`test`.`t1i`.`a1` in ( <materialize> (select `test`.`t2i`.`b1` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') group by `test`.`t2i`.`b1` ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`)))))
+Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <expr_cache>(<in_optimizer>(`test`.`t1i`.`a1`,`test`.`t1i`.`a1` in ( <materialize> (select `test`.`t2i`.`b1` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') group by `test`.`t2i`.`b1` ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`))))))
select * from t1i where a1 in (select b1 from t2i where b1 > '0' group by b1);
a1 a2
1 - 01 2 - 01
@@ -107,7 +107,7 @@
1 PRIMARY t1i index NULL it1i3 18 NULL 3 100.00 Using where; Using index
2 SUBQUERY t2i index it2i1,it2i3 it2i3 18 NULL 5 100.00 Using where; Using index
Warnings:
-Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`b2`)))))
+Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <expr_cache>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`b2`))))))
select * from t1i where (a1, a2) in (select b1, b2 from t2i where b1 > '0');
a1 a2
1 - 01 2 - 01
@@ -118,7 +118,7 @@
1 PRIMARY t1i index NULL it1i3 18 NULL 3 100.00 Using where; Using index
2 SUBQUERY t2i range it2i1,it2i3 it2i3 18 NULL 3 100.00 Using where; Using index for group-by
Warnings:
-Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') group by `test`.`t2i`.`b1`,`test`.`t2i`.`b2` ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`b2`)))))
+Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <expr_cache>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') group by `test`.`t2i`.`b1`,`test`.`t2i`.`b2` ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`b2`))))))
select * from t1i where (a1, a2) in (select b1, b2 from t2i where b1 > '0' group by b1, b2);
a1 a2
1 - 01 2 - 01
@@ -129,7 +129,7 @@
1 PRIMARY t1i index NULL it1i3 18 NULL 3 100.00 Using where; Using index
2 SUBQUERY t2i range it2i1,it2i3 it2i3 18 NULL 3 100.00 Using where; Using index for group-by
Warnings:
-Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,min(`test`.`t2i`.`b2`) from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') group by `test`.`t2i`.`b1` ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`min(b2)`)))))
+Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <expr_cache>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,min(`test`.`t2i`.`b2`) from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') group by `test`.`t2i`.`b1` ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`min(b2)`))))))
select * from t1i where (a1, a2) in (select b1, min(b2) from t2i where b1 > '0' group by b1);
a1 a2
1 - 01 2 - 01
@@ -140,7 +140,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2i range NULL it2i3 9 NULL 3 100.00 Using index for group-by
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,max(`test`.`t2i`.`b2`) from `test`.`t2i` group by `test`.`t2i`.`b1` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`max(b2)`)))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,max(`test`.`t2i`.`b2`) from `test`.`t2i` group by `test`.`t2i`.`b1` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`max(b2)`))))))
select * from t1 where (a1, a2) in (select b1, max(b2) from t2i group by b1);
a1 a2
1 - 01 2 - 01
@@ -169,7 +169,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2i range it2i1,it2i3 it2i3 18 NULL 3 100.00 Using where; Using index for group-by
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,min(`test`.`t2i`.`b2`) from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') group by `test`.`t2i`.`b1` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`min(b2)`)))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,min(`test`.`t2i`.`b2`) from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') group by `test`.`t2i`.`b1` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`min(b2)`))))))
select * from t1 where (a1, a2) in (select b1, min(b2) from t2i where b1 > '0' group by b1);
a1 a2
1 - 01 2 - 01
@@ -209,7 +209,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2 ALL NULL NULL NULL NULL 5 100.00
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` order by `test`.`t2`.`b1`,`test`.`t2`.`b2` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`b2`)))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` order by `test`.`t2`.`b1`,`test`.`t2`.`b2` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`b2`))))))
select * from t1 where (a1, a2) in (select b1, b2 from t2 order by b1, b2);
a1 a2
1 - 01 2 - 01
@@ -220,7 +220,7 @@
1 PRIMARY t1i index NULL it1i3 18 NULL 3 100.00 Using where; Using index
2 SUBQUERY t2i index NULL it2i3 18 NULL 5 100.00 Using index
Warnings:
-Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` order by `test`.`t2i`.`b1`,`test`.`t2i`.`b2` ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`b2`)))))
+Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where <expr_cache>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` order by `test`.`t2i`.`b1`,`test`.`t2i`.`b2` ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`b2`))))))
select * from t1i where (a1, a2) in (select b1, b2 from t2i order by b1, b2);
a1 a2
1 - 01 2 - 01
@@ -275,7 +275,7 @@
4 SUBQUERY t2i index it2i2 it2i3 18 NULL 5 100.00 Using where; Using index
2 SUBQUERY t2 ALL NULL NULL NULL NULL 5 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where (`test`.`t2`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`b2`))))) and <in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t3`.`c1`,`test`.`t3`.`c2` from `test`.`t3` where <in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3`.`c2` = `materialized subselect`.`b2`))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1`.`a2` = `materialized subselect`.`c2`))))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where (`test`.`t2`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`b2`)))))) and <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t3`.`c1`,`test`.`t3`.`c2` from `test`.`t3` where <expr_cache>(<in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1`.`a2` = `materialized subselect`.`c2`)))))))
select * from t1
where (a1, a2) in (select b1, b2 from t2 where b1 > '0') and
(a1, a2) in (select c1, c2 from t3
@@ -294,7 +294,7 @@
4 SUBQUERY t2i index it2i2 it2i3 18 NULL 5 100.00 Using where; Using index
2 SUBQUERY t2i index it2i1,it2i3 it2i3 18 NULL 5 100.00 Using where; Using index
Warnings:
-Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where (<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`b2`))))) and <in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t3i`.`c1`,`test`.`t3i`.`c2` from `test`.`t3i` where <in_optimizer>((`test`.`t3i`.`c1`,`test`.`t3i`.`c2`),(`test`.`t3i`.`c1`,`test`.`t3i`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3i`.`c1` in <temporary table> on distinct_key where ((`test`.`t3i`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3i`.`c2` = `materialized subselect`.`b2`))))) ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`c2`))))))
+Note 1003 select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where (<expr_cache>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`b2`)))))) and <expr_cache>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t3i`.`c1`,`test`.`t3i`.`c2` from `test`.`t3i` where <expr_cache>(<in_optimizer>((`test`.`t3i`.`c1`,`test`.`t3i`.`c2`),(`test`.`t3i`.`c1`,`test`.`t3i`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3i`.`c1` in <temporary table> on distinct_key where ((`test`.`t3i`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3i`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`c2`)))))))
select * from t1i
where (a1, a2) in (select b1, b2 from t2i where b1 > '0') and
(a1, a2) in (select c1, c2 from t3i
@@ -317,7 +317,7 @@
4 SUBQUERY t3 ALL NULL NULL NULL NULL 4 100.00 Using where
3 SUBQUERY t3 ALL NULL NULL NULL NULL 4 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where (<in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3`.`c2` from `test`.`t3` where (`test`.`t3`.`c2` like '%02') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`))))) or <in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3`.`c2` from `test`.`t3` where (`test`.`t3`.`c2` like '%03') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`)))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`b2`))))) and <in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t3`.`c1`,`test`.`t3`.`c2` from `test`.`t3` where <in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3`.`c2` = `materialized subselect`.`b2`))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1`.`a2` = `materialized subselect`.`c2`))))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where (<expr_cache>(<in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3`.`c2` from `test`.`t3` where (`test`.`t3`.`c2` like '%02') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`)))))) or <expr_cache>(<in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3`.`c2` from `test`.`t3` where (`test`.`t3`.`c2` like '%03') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`))))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`b2`)))))) and <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t3`.`c1`,`test`.`t3`.`c2` from `test`.`t3` where <expr_cache>(<in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1`.`a2` = `materialized subselect`.`c2`)))))))
select * from t1
where (a1, a2) in (select b1, b2 from t2
where b2 in (select c2 from t3 where c2 LIKE '%02') or
@@ -342,7 +342,7 @@
3 DEPENDENT SUBQUERY t3a ALL NULL NULL NULL NULL 4 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.t1.a1' of SELECT #3 was resolved in SELECT #1
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where ((<in_optimizer>(`test`.`t2`.`b2`,<exists>(select 1 from `test`.`t3` `t3a` where ((`test`.`t3a`.`c1` = `test`.`t1`.`a1`) and (<cache>(`test`.`t2`.`b2`) = `test`.`t3a`.`c2`)))) or <in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3b`.`c2` from `test`.`t3` `t3b` where (`test`.`t3b`.`c2` like '%03') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`)))))) and (<cache>(`test`.`t1`.`a1`) = `test`.`t2`.`b1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t2`.`b2`)))) and <in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t3c`.`c1`,`test`.`t3c`.`c2` from `test`.`t3` `t3c` where <in_optimizer>((`test`.`t3c`.`c1`,`test`.`t3c`.`c2`),(`test`.`t3c`.`c1`,`test`.`t3c`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3c`.`c1` in <temporary table> on distinct_key where ((`test`.`t3c`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3c`.`c2` = `materialized subselect`.`b2`))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1`.`a2` = `materialized subselect`.`c2`))))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where ((<expr_cache>(<in_optimizer>(`test`.`t2`.`b2`,<exists>(select 1 from `test`.`t3` `t3a` where ((`test`.`t3a`.`c1` = `test`.`t1`.`a1`) and (<cache>(`test`.`t2`.`b2`) = `test`.`t3a`.`c2`))))) or <expr_cache>(<in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3b`.`c2` from `test`.`t3` `t3b` where (`test`.`t3b`.`c2` like '%03') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`))))))) and (<cache>(`test`.`t1`.`a1`) = `test`.`t2`.`b1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t2`.`b2`))))) and <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t3c`.`c1`,`test`.`t3c`.`c2` from `test`.`t3` `t3c` where <expr_cache>(<in_optimizer>((`test`.`t3c`.`c1`,`test`.`t3c`.`c2`),(`test`.`t3c`.`c1`,`test`.`t3c`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3c`.`c1` in <temporary table> on distinct_key where ((`test`.`t3c`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3c`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1`.`a2` = `materialized subselect`.`c2`)))))))
select * from t1
where (a1, a2) in (select b1, b2 from t2
where b2 in (select c2 from t3 t3a where c1 = a1) or
@@ -378,7 +378,7 @@
8 SUBQUERY t2i index it2i1,it2i3 it2i3 18 NULL 5 100.00 Using where; Using index
NULL UNION RESULT <union1,7> ALL NULL NULL NULL NULL NULL NULL
Warnings:
-Note 1003 (select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where (<in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3`.`c2` from `test`.`t3` where (`test`.`t3`.`c2` like '%02') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`))))) or <in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3`.`c2` from `test`.`t3` where (`test`.`t3`.`c2` like '%03') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`)))))) group by `test`.`t2`.`b1`,`test`.`t2`.`b2` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`b2`))))) and <in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t3`.`c1`,`test`.`t3`.`c2` from `test`.`t3` where <in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3`.`c2` = `materialized subselect`.`b2`))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1`.`a2` = `materialized subselect`.`c2`))))))) union (select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where (<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`b2`))))) and <in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t3i`.`c1`,`test`.`t3i`.`c2` from `test`.`t3i` where <in_optimizer>((`test`.`t3i`.`c1`,`test`.`t3i`.`c2`),(`test`.`t3i`.`c1`,`test`.`t3i`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3i`.`c1` in <temporary table> on distinct_key where ((`test`.`t3i`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3i`.`c2` = `materialized subselect`.`b2`))))) ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`c2`)))))))
+Note 1003 (select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where (<expr_cache>(<in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3`.`c2` from `test`.`t3` where (`test`.`t3`.`c2` like '%02') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`)))))) or <expr_cache>(<in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3`.`c2` from `test`.`t3` where (`test`.`t3`.`c2` like '%03') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`))))))) group by `test`.`t2`.`b1`,`test`.`t2`.`b2` ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1`.`a2` = `materialized subselect`.`b2`)))))) and <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t3`.`c1`,`test`.`t3`.`c2` from `test`.`t3` where <expr_cache>(<in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1`.`a2` = `materialized subselect`.`c2`)))))))) union (select `test`.`t1i`.`a1` AS `a1`,`test`.`t1i`.`a2` AS `a2` from `test`.`t1i` where (<expr_cache>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`b2`)))))) and <expr_cache>(<in_optimizer>((`test`.`t1i`.`a1`,`test`.`t1i`.`a2`),(`test`.`t1i`.`a1`,`test`.`t1i`.`a2`) in ( <materialize> (select `test`.`t3i`.`c1`,`test`.`t3i`.`c2` from `test`.`t3i` where <expr_cache>(<in_optimizer>((`test`.`t3i`.`c1`,`test`.`t3i`.`c2`),(`test`.`t3i`.`c1`,`test`.`t3i`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3i`.`c1` in <temporary table> on distinct_key where ((`test`.`t3i`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3i`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t1i`.`a1` in <temporary table> on distinct_key where ((`test`.`t1i`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1i`.`a2` = `materialized subselect`.`c2`))))))))
(select * from t1
where (a1, a2) in (select b1, b2 from t2
where b2 in (select c2 from t3 where c2 LIKE '%02') or
@@ -407,7 +407,7 @@
3 DEPENDENT UNION t2 ALL NULL NULL NULL NULL 5 100.00 Using where
NULL UNION RESULT <union2,3> ALL NULL NULL NULL NULL NULL NULL
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select `test`.`t1`.`a1`,`test`.`t1`.`a2` from `test`.`t1` where ((`test`.`t1`.`a1` > '0') and (<cache>(`test`.`t1`.`a1`) = `test`.`t1`.`a1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t1`.`a2`)) union select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where ((`test`.`t2`.`b1` < '9') and (<cache>(`test`.`t1`.`a1`) = `test`.`t2`.`b1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t2`.`b2`)))) and <in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t3`.`c1`,`test`.`t3`.`c2` from `test`.`t3` where <in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3`.`c2` = `materialized subselect`.`b2`))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1`.`a2` = `materialized subselect`.`c2`))))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select `test`.`t1`.`a1`,`test`.`t1`.`a2` from `test`.`t1` where ((`test`.`t1`.`a1` > '0') and (<cache>(`test`.`t1`.`a1`) = `test`.`t1`.`a1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t1`.`a2`)) union select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where ((`test`.`t2`.`b1` < '9') and (<cache>(`test`.`t1`.`a1`) = `test`.`t2`.`b1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t2`.`b2`))))) and <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),(`test`.`t1`.`a1`,`test`.`t1`.`a2`) in ( <materialize> (select `test`.`t3`.`c1`,`test`.`t3`.`c2` from `test`.`t3` where <expr_cache>(<in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t1`.`a1` in <temporary table> on distinct_key where ((`test`.`t1`.`a1` = `materialized subselect`.`c1`) and (`test`.`t1`.`a2` = `materialized subselect`.`c2`)))))))
select * from t1
where (a1, a2) in (select * from t1 where a1 > '0' UNION select * from t2 where b1 < '9') and
(a1, a2) in (select c1, c2 from t3
@@ -430,7 +430,7 @@
3 DEPENDENT UNION t2 ALL NULL NULL NULL NULL 5 100.00 Using where
NULL UNION RESULT <union2,3> ALL NULL NULL NULL NULL NULL NULL
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2`,`test`.`t3`.`c1` AS `c1`,`test`.`t3`.`c2` AS `c2` from `test`.`t1` join `test`.`t3` where ((`test`.`t3`.`c1` = `test`.`t1`.`a1`) and <in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select `test`.`t1`.`a1`,`test`.`t1`.`a2` from `test`.`t1` where ((`test`.`t1`.`a1` > '0') and (<cache>(`test`.`t1`.`a1`) = `test`.`t1`.`a1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t1`.`a2`)) union select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where ((`test`.`t2`.`b1` < '9') and (<cache>(`test`.`t1`.`a1`) = `test`.`t2`.`b1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t2`.`b2`)))) and <in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t3`.`c1`,`test`.`t3`.`c2` from `test`.`t3` where <in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3`.`c2` = `materialized subselect`.`b2`))))) ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`c1`) and (`test`.`t3`.`c2` = `materialized subselect`.`c2`))))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2`,`test`.`t3`.`c1` AS `c1`,`test`.`t3`.`c2` AS `c2` from `test`.`t1` join `test`.`t3` where ((`test`.`t3`.`c1` = `test`.`t1`.`a1`) and <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select `test`.`t1`.`a1`,`test`.`t1`.`a2` from `test`.`t1` where ((`test`.`t1`.`a1` > '0') and (<cache>(`test`.`t1`.`a1`) = `test`.`t1`.`a1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t1`.`a2`)) union select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where ((`test`.`t2`.`b1` < '9') and (<cache>(`test`.`t1`.`a1`) = `test`.`t2`.`b1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t2`.`b2`))))) and <expr_cache>(<in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t3`.`c1`,`test`.`t3`.`c2` from `test`.`t3` where <expr_cache>(<in_optimizer>((`test`.`t3`.`c1`,`test`.`t3`.`c2`),(`test`.`t3`.`c1`,`test`.`t3`.`c2`) in ( <materialize> (select `test`.`t2i`.`b1`,`test`.`t2i`.`b2` from `test`.`t2i` where (`test`.`t2i`.`b2` > '0') ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`b1`) and (`test`.`t3`.`c2` = `materialized subselect`.`b2`)))))) ), <primary_index_lookup>(`test`.`t3`.`c1` in <temporary table> on distinct_key where ((`test`.`t3`.`c1` = `materialized subselect`.`c1`) and (`test`.`t3`.`c2` = `materialized subselect`.`c2`)))))))
select * from t1, t3
where (a1, a2) in (select * from t1 where a1 > '0' UNION select * from t2 where b1 < '9') and
(c1, c2) in (select c1, c2 from t3
@@ -452,7 +452,7 @@
3 DEPENDENT UNION t2 ALL NULL NULL NULL NULL 5 100.00 Using where
NULL UNION RESULT <union2,3> ALL NULL NULL NULL NULL NULL NULL
Warnings:
-Note 1003 select `test`.`t3`.`c1` AS `c1`,`test`.`t3`.`c2` AS `c2` from `test`.`t3` where <in_optimizer>(`test`.`t3`.`c1`,<exists>(select 1 from `test`.`t1` where ((`test`.`t1`.`a1` > '0') and (<cache>(`test`.`t3`.`c1`) = `test`.`t1`.`a1`)) union select 1 from `test`.`t2` where ((`test`.`t2`.`b1` < '9') and (<cache>(`test`.`t3`.`c1`) = `test`.`t2`.`b1`))))
+Note 1003 select `test`.`t3`.`c1` AS `c1`,`test`.`t3`.`c2` AS `c2` from `test`.`t3` where <expr_cache>(<in_optimizer>(`test`.`t3`.`c1`,<exists>(select 1 from `test`.`t1` where ((`test`.`t1`.`a1` > '0') and (<cache>(`test`.`t3`.`c1`) = `test`.`t1`.`a1`)) union select 1 from `test`.`t2` where ((`test`.`t2`.`b1` < '9') and (<cache>(`test`.`t3`.`c1`) = `test`.`t2`.`b1`)))))
select * from t3
where c1 in (select a1 from t1 where a1 > '0' UNION select b1 from t2 where b1 < '9');
c1 c2
@@ -476,14 +476,14 @@
Warnings:
Note 1276 Field or reference 'test.t1.a1' of SELECT #3 was resolved in SELECT #1
Note 1276 Field or reference 'test.t1.a2' of SELECT #6 was resolved in SELECT #1
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where ((<in_optimizer>(`test`.`t2`.`b2`,<exists>(select 1 from `test`.`t3` `t3a` where ((`test`.`t3a`.`c1` = `test`.`t1`.`a1`) and (<cache>(`test`.`t2`.`b2`) = `test`.`t3a`.`c2`)))) or <in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3b`.`c2` from `test`.`t3` `t3b` where (`test`.`t3b`.`c2` like '%03') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`)))))) and (<cache>(`test`.`t1`.`a1`) = `test`.`t2`.`b1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t2`.`b2`)))) and <in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select `test`.`t3c`.`c1`,`test`.`t3c`.`c2` from `test`.`t3` `t3c` where (<in_optimizer>((`test`.`t3c`.`c1`,`test`.`t3c`.`c2`),<exists>(<index_lookup>(<cache>(`test`.`t3c`.`c1`) in t2i on it2i3 where (((`test`.`t2i`.`b2` > '0') or (`test`.`t2i`.`b2` = `test`.`t1`.`a2`)) and (<cache>(`test`.`t3c`.`c1`) = `test`.`t2i`.`b1`) and (<cache>(`test`.`t3c`.`c2`) = `test`.`t2i`.`b2`))))) and (<cache>(`test`.`t1`.`a1`) = `test`.`t3c`.`c1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t3c`.`c2`)))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where (<expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select `test`.`t2`.`b1`,`test`.`t2`.`b2` from `test`.`t2` where ((<expr_cache>(<in_optimizer>(`test`.`t2`.`b2`,<exists>(select 1 from `test`.`t3` `t3a` where ((`test`.`t3a`.`c1` = `test`.`t1`.`a1`) and (<cache>(`test`.`t2`.`b2`) = `test`.`t3a`.`c2`))))) or <expr_cache>(<in_optimizer>(`test`.`t2`.`b2`,`test`.`t2`.`b2` in ( <materialize> (select `test`.`t3b`.`c2` from `test`.`t3` `t3b` where (`test`.`t3b`.`c2` like '%03') ), <primary_index_lookup>(`test`.`t2`.`b2` in <temporary table> on distinct_key where ((`test`.`t2`.`b2` = `materialized subselect`.`c2`))))))) and (<cache>(`test`.`t1`.`a1`) = `test`.`t2`.`b1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t2`.`b2`))))) and <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select `test`.`t3c`.`c1`,`test`.`t3c`.`c2` from `test`.`t3` `t3c` where (<expr_cache>(<in_optimizer>((`test`.`t3c`.`c1`,`test`.`t3c`.`c2`),<exists>(<index_lookup>(<cache>(`test`.`t3c`.`c1`) in t2i on it2i3 where (((`test`.`t2i`.`b2` > '0') or (`test`.`t2i`.`b2` = `test`.`t1`.`a2`)) and (<cache>(`test`.`t3c`.`c1`) = `test`.`t2i`.`b1`) and (<cache>(`test`.`t3c`.`c2`) = `test`.`t2i`.`b2`)))))) and (<cache>(`test`.`t1`.`a1`) = `test`.`t3c`.`c1`) and (<cache>(`test`.`t1`.`a2`) = `test`.`t3c`.`c2`))))))
explain extended
select * from t1 where (a1, a2) in (select '1 - 01', '2 - 01');
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY NULL NULL NULL NULL NULL NULL NULL NULL No tables used
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select '1 - 01','2 - 01' having ((<cache>(`test`.`t1`.`a1`) = '1 - 01') and (<cache>(`test`.`t1`.`a2`) = '2 - 01'))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select '1 - 01','2 - 01' having ((<cache>(`test`.`t1`.`a1`) = '1 - 01') and (<cache>(`test`.`t1`.`a2`) = '2 - 01')))))
select * from t1 where (a1, a2) in (select '1 - 01', '2 - 01');
a1 a2
1 - 01 2 - 01
@@ -493,7 +493,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY NULL NULL NULL NULL NULL NULL NULL NULL No tables used
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select '1 - 01','2 - 01' having ((<cache>(`test`.`t1`.`a1`) = '1 - 01') and (<cache>(`test`.`t1`.`a2`) = '2 - 01'))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache>(<in_optimizer>((`test`.`t1`.`a1`,`test`.`t1`.`a2`),<exists>(select '1 - 01','2 - 01' having ((<cache>(`test`.`t1`.`a1`) = '1 - 01') and (<cache>(`test`.`t1`.`a2`) = '2 - 01')))))
select * from t1 where (a1, a2) in (select '1 - 01', '2 - 01' from dual);
a1 a2
1 - 01 2 - 01
@@ -525,7 +525,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 3 100.00 Using temporary; Using filesort
2 DEPENDENT SUBQUERY columns unique_subquery PRIMARY PRIMARY 4 func 1 100.00 Using index; Using where; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` group by <in_optimizer>(`test`.`t1`.`a1`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`a1`) in columns on PRIMARY where trigcond((<cache>(`test`.`t1`.`a1`) = `test`.`columns`.`col`)))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` group by <expr_cache>(<in_optimizer>(`test`.`t1`.`a1`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`a1`) in columns on PRIMARY where trigcond((<cache>(`test`.`t1`.`a1`) = `test`.`columns`.`col`))))))
select * from t1 group by (a1 in (select col from columns));
a1 a2
1 - 00 2 - 00
@@ -583,7 +583,7 @@
1 PRIMARY t1_16 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_16 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <in_optimizer>(`test`.`t1_16`.`a1`,<exists>(select 1 from `test`.`t2_16` where ((`test`.`t2_16`.`b1` > '0') and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`))))
+Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <expr_cache>(<in_optimizer>(`test`.`t1_16`.`a1`,<exists>(select 1 from `test`.`t2_16` where ((`test`.`t2_16`.`b1` > '0') and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`)))))
select left(a1,7), left(a2,7)
from t1_16
where a1 in (select b1 from t2_16 where b1 > '0');
@@ -597,7 +597,7 @@
1 PRIMARY t1_16 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_16 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <in_optimizer>((`test`.`t1_16`.`a1`,`test`.`t1_16`.`a2`),<exists>(select `test`.`t2_16`.`b1`,`test`.`t2_16`.`b2` from `test`.`t2_16` where ((`test`.`t2_16`.`b1` > '0') and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`) and (<cache>(`test`.`t1_16`.`a2`) = `test`.`t2_16`.`b2`))))
+Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <expr_cache>(<in_optimizer>((`test`.`t1_16`.`a1`,`test`.`t1_16`.`a2`),<exists>(select `test`.`t2_16`.`b1`,`test`.`t2_16`.`b2` from `test`.`t2_16` where ((`test`.`t2_16`.`b1` > '0') and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`) and (<cache>(`test`.`t1_16`.`a2`) = `test`.`t2_16`.`b2`)))))
select left(a1,7), left(a2,7)
from t1_16
where (a1,a2) in (select b1, b2 from t2_16 where b1 > '0');
@@ -611,7 +611,7 @@
1 PRIMARY t1_16 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2_16 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <in_optimizer>(`test`.`t1_16`.`a1`,`test`.`t1_16`.`a1` in ( <materialize> (select substr(`test`.`t2_16`.`b1`,1,16) from `test`.`t2_16` where (`test`.`t2_16`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1_16`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_16`.`a1` = `materialized subselect`.`substring(b1,1,16)`)))))
+Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <expr_cache>(<in_optimizer>(`test`.`t1_16`.`a1`,`test`.`t1_16`.`a1` in ( <materialize> (select substr(`test`.`t2_16`.`b1`,1,16) from `test`.`t2_16` where (`test`.`t2_16`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1_16`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_16`.`a1` = `materialized subselect`.`substring(b1,1,16)`))))))
select left(a1,7), left(a2,7)
from t1_16
where a1 in (select substring(b1,1,16) from t2_16 where b1 > '0');
@@ -625,7 +625,7 @@
1 PRIMARY t1_16 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_16 ALL NULL NULL NULL NULL 3 100.00 Using filesort
Warnings:
-Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <in_optimizer>(`test`.`t1_16`.`a1`,<exists>(select group_concat(`test`.`t2_16`.`b1` separator ',') from `test`.`t2_16` group by `test`.`t2_16`.`b2` having (<cache>(`test`.`t1_16`.`a1`) = <ref_null_helper>(group_concat(`test`.`t2_16`.`b1` separator ',')))))
+Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <expr_cache>(<in_optimizer>(`test`.`t1_16`.`a1`,<exists>(select group_concat(`test`.`t2_16`.`b1` separator ',') from `test`.`t2_16` group by `test`.`t2_16`.`b2` having (<cache>(`test`.`t1_16`.`a1`) = <ref_null_helper>(group_concat(`test`.`t2_16`.`b1` separator ','))))))
select left(a1,7), left(a2,7)
from t1_16
where a1 in (select group_concat(b1) from t2_16 group by b2);
@@ -640,7 +640,7 @@
1 PRIMARY t1_16 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2_16 ALL NULL NULL NULL NULL 3 100.00 Using filesort
Warnings:
-Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <in_optimizer>(`test`.`t1_16`.`a1`,`test`.`t1_16`.`a1` in ( <materialize> (select group_concat(`test`.`t2_16`.`b1` separator ',') from `test`.`t2_16` group by `test`.`t2_16`.`b2` ), <primary_index_lookup>(`test`.`t1_16`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_16`.`a1` = `materialized subselect`.`group_concat(b1)`)))))
+Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <expr_cache>(<in_optimizer>(`test`.`t1_16`.`a1`,`test`.`t1_16`.`a1` in ( <materialize> (select group_concat(`test`.`t2_16`.`b1` separator ',') from `test`.`t2_16` group by `test`.`t2_16`.`b2` ), <primary_index_lookup>(`test`.`t1_16`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_16`.`a1` = `materialized subselect`.`group_concat(b1)`))))))
select left(a1,7), left(a2,7)
from t1_16
where a1 in (select group_concat(b1) from t2_16 group by b2);
@@ -662,7 +662,7 @@
3 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 5 100.00 Using where; Using join buffer
4 SUBQUERY t3 ALL NULL NULL NULL NULL 4 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <in_optimizer>(concat(`test`.`t1`.`a1`,'x'),<exists>(select 1 from `test`.`t1_16` where (<in_optimizer>((`test`.`t1_16`.`a1`,`test`.`t1_16`.`a2`),<exists>(select `test`.`t2_16`.`b1`,`test`.`t2_16`.`b2` from `test`.`t2_16` join `test`.`t2` where ((`test`.`t2`.`b2` = substr(`test`.`t2_16`.`b2`,1,6)) and <in_optimizer>(`test`.`t2`.`b1`,`test`.`t2`.`b1` in ( <materialize> (select `test`.`t3`.`c1` from `test`.`t3` where (`test`.`t3`.`c2` > '0') ), <primary_index_lookup>(`test`.`t2`.`b1` in <temporary table> on distinct_key where ((`test`.`t2`.`b1` = `materialized subselect`.`c1`))))) and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`) and (<cache>(`test`.`t1_16`.`a2`) = `test`.`t2_16`.`b2`)))) and (<cache>(concat(`test`.`t1`.`a1`,'x')) = left(`test`.`t1_16`.`a1`,8)))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <expr_cache>(<in_optimizer>(concat(`test`.`t1`.`a1`,'x'),<exists>(select 1 from `test`.`t1_16` where (<expr_cache>(<in_optimizer>((`test`.`t1_16`.`a1`,`test`.`t1_16`.`a2`),<exists>(select `test`.`t2_16`.`b1`,`test`.`t2_16`.`b2` from `test`.`t2_16` join `test`.`t2` where ((`test`.`t2`.`b2` = substr(`test`.`t2_16`.`b2`,1,6)) and <expr_cache>(<in_optimizer>(`test`.`t2`.`b1`,`test`.`t2`.`b1` in ( <materialize> (select `test`.`t3`.`c1` from `test`.`t3` where (`test`.`t3`.`c2` > '0') ), <primary_index_lookup>(`test`.`t2`.`b1` in <temporary table> on distinct_key where ((`test`.`t2`.`b1` = `materialized subselect`.`c1`)))))) and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`) and (<cache>(`test`.`t1_16`.`a2`) = `test`.`t2_16`.`b2`))))) and (<cache>(concat(`test`.`t1`.`a1`,'x')) = left(`test`.`t1_16`.`a1`,8))))))
drop table t1_16, t2_16, t3_16;
set @blob_len = 512;
set @suffix_len = @blob_len - @prefix_len;
@@ -696,7 +696,7 @@
1 PRIMARY t1_512 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_512 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <in_optimizer>(`test`.`t1_512`.`a1`,<exists>(select 1 from `test`.`t2_512` where ((`test`.`t2_512`.`b1` > '0') and (<cache>(`test`.`t1_512`.`a1`) = `test`.`t2_512`.`b1`))))
+Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <expr_cache>(<in_optimizer>(`test`.`t1_512`.`a1`,<exists>(select 1 from `test`.`t2_512` where ((`test`.`t2_512`.`b1` > '0') and (<cache>(`test`.`t1_512`.`a1`) = `test`.`t2_512`.`b1`)))))
select left(a1,7), left(a2,7)
from t1_512
where a1 in (select b1 from t2_512 where b1 > '0');
@@ -710,7 +710,7 @@
1 PRIMARY t1_512 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_512 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <in_optimizer>((`test`.`t1_512`.`a1`,`test`.`t1_512`.`a2`),<exists>(select `test`.`t2_512`.`b1`,`test`.`t2_512`.`b2` from `test`.`t2_512` where ((`test`.`t2_512`.`b1` > '0') and (<cache>(`test`.`t1_512`.`a1`) = `test`.`t2_512`.`b1`) and (<cache>(`test`.`t1_512`.`a2`) = `test`.`t2_512`.`b2`))))
+Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <expr_cache>(<in_optimizer>((`test`.`t1_512`.`a1`,`test`.`t1_512`.`a2`),<exists>(select `test`.`t2_512`.`b1`,`test`.`t2_512`.`b2` from `test`.`t2_512` where ((`test`.`t2_512`.`b1` > '0') and (<cache>(`test`.`t1_512`.`a1`) = `test`.`t2_512`.`b1`) and (<cache>(`test`.`t1_512`.`a2`) = `test`.`t2_512`.`b2`)))))
select left(a1,7), left(a2,7)
from t1_512
where (a1,a2) in (select b1, b2 from t2_512 where b1 > '0');
@@ -724,7 +724,7 @@
1 PRIMARY t1_512 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2_512 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <in_optimizer>(`test`.`t1_512`.`a1`,`test`.`t1_512`.`a1` in ( <materialize> (select substr(`test`.`t2_512`.`b1`,1,512) from `test`.`t2_512` where (`test`.`t2_512`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1_512`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_512`.`a1` = `materialized subselect`.`substring(b1,1,512)`)))))
+Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <expr_cache>(<in_optimizer>(`test`.`t1_512`.`a1`,`test`.`t1_512`.`a1` in ( <materialize> (select substr(`test`.`t2_512`.`b1`,1,512) from `test`.`t2_512` where (`test`.`t2_512`.`b1` > '0') ), <primary_index_lookup>(`test`.`t1_512`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_512`.`a1` = `materialized subselect`.`substring(b1,1,512)`))))))
select left(a1,7), left(a2,7)
from t1_512
where a1 in (select substring(b1,1,512) from t2_512 where b1 > '0');
@@ -738,7 +738,7 @@
1 PRIMARY t1_512 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2_512 ALL NULL NULL NULL NULL 3 100.00 Using filesort
Warnings:
-Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <in_optimizer>(`test`.`t1_512`.`a1`,`test`.`t1_512`.`a1` in ( <materialize> (select group_concat(`test`.`t2_512`.`b1` separator ',') from `test`.`t2_512` group by `test`.`t2_512`.`b2` ), <primary_index_lookup>(`test`.`t1_512`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_512`.`a1` = `materialized subselect`.`group_concat(b1)`)))))
+Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <expr_cache>(<in_optimizer>(`test`.`t1_512`.`a1`,`test`.`t1_512`.`a1` in ( <materialize> (select group_concat(`test`.`t2_512`.`b1` separator ',') from `test`.`t2_512` group by `test`.`t2_512`.`b2` ), <primary_index_lookup>(`test`.`t1_512`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_512`.`a1` = `materialized subselect`.`group_concat(b1)`))))))
select left(a1,7), left(a2,7)
from t1_512
where a1 in (select group_concat(b1) from t2_512 group by b2);
@@ -751,7 +751,7 @@
1 PRIMARY t1_512 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2_512 ALL NULL NULL NULL NULL 3 100.00 Using filesort
Warnings:
-Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <in_optimizer>(`test`.`t1_512`.`a1`,`test`.`t1_512`.`a1` in ( <materialize> (select group_concat(`test`.`t2_512`.`b1` separator ',') from `test`.`t2_512` group by `test`.`t2_512`.`b2` ), <primary_index_lookup>(`test`.`t1_512`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_512`.`a1` = `materialized subselect`.`group_concat(b1)`)))))
+Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <expr_cache>(<in_optimizer>(`test`.`t1_512`.`a1`,`test`.`t1_512`.`a1` in ( <materialize> (select group_concat(`test`.`t2_512`.`b1` separator ',') from `test`.`t2_512` group by `test`.`t2_512`.`b2` ), <primary_index_lookup>(`test`.`t1_512`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_512`.`a1` = `materialized subselect`.`group_concat(b1)`))))))
select left(a1,7), left(a2,7)
from t1_512
where a1 in (select group_concat(b1) from t2_512 group by b2);
@@ -789,7 +789,7 @@
1 PRIMARY t1_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <in_optimizer>(`test`.`t1_1024`.`a1`,<exists>(select 1 from `test`.`t2_1024` where ((`test`.`t2_1024`.`b1` > '0') and (<cache>(`test`.`t1_1024`.`a1`) = `test`.`t2_1024`.`b1`))))
+Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <expr_cache>(<in_optimizer>(`test`.`t1_1024`.`a1`,<exists>(select 1 from `test`.`t2_1024` where ((`test`.`t2_1024`.`b1` > '0') and (<cache>(`test`.`t1_1024`.`a1`) = `test`.`t2_1024`.`b1`)))))
select left(a1,7), left(a2,7)
from t1_1024
where a1 in (select b1 from t2_1024 where b1 > '0');
@@ -803,7 +803,7 @@
1 PRIMARY t1_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <in_optimizer>((`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a2`),<exists>(select `test`.`t2_1024`.`b1`,`test`.`t2_1024`.`b2` from `test`.`t2_1024` where ((`test`.`t2_1024`.`b1` > '0') and (<cache>(`test`.`t1_1024`.`a1`) = `test`.`t2_1024`.`b1`) and (<cache>(`test`.`t1_1024`.`a2`) = `test`.`t2_1024`.`b2`))))
+Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <expr_cache>(<in_optimizer>((`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a2`),<exists>(select `test`.`t2_1024`.`b1`,`test`.`t2_1024`.`b2` from `test`.`t2_1024` where ((`test`.`t2_1024`.`b1` > '0') and (<cache>(`test`.`t1_1024`.`a1`) = `test`.`t2_1024`.`b1`) and (<cache>(`test`.`t1_1024`.`a2`) = `test`.`t2_1024`.`b2`)))))
select left(a1,7), left(a2,7)
from t1_1024
where (a1,a2) in (select b1, b2 from t2_1024 where b1 > '0');
@@ -817,7 +817,7 @@
1 PRIMARY t1_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <in_optimizer>(`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a1` in (select 1 from `test`.`t2_1024` where ((`test`.`t2_1024`.`b1` > '0') and (<cache>(`test`.`t1_1024`.`a1`) = substr(`test`.`t2_1024`.`b1`,1,1024)))))
+Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <expr_cache>(<in_optimizer>(`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a1` in (select 1 from `test`.`t2_1024` where ((`test`.`t2_1024`.`b1` > '0') and (<cache>(`test`.`t1_1024`.`a1`) = substr(`test`.`t2_1024`.`b1`,1,1024))))))
select left(a1,7), left(a2,7)
from t1_1024
where a1 in (select substring(b1,1,1024) from t2_1024 where b1 > '0');
@@ -831,7 +831,7 @@
1 PRIMARY t1_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2_1024 ALL NULL NULL NULL NULL 3 100.00 Using filesort
Warnings:
-Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <in_optimizer>(`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a1` in ( <materialize> (select group_concat(`test`.`t2_1024`.`b1` separator ',') from `test`.`t2_1024` group by `test`.`t2_1024`.`b2` ), <primary_index_lookup>(`test`.`t1_1024`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_1024`.`a1` = `materialized subselect`.`group_concat(b1)`)))))
+Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <expr_cache>(<in_optimizer>(`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a1` in ( <materialize> (select group_concat(`test`.`t2_1024`.`b1` separator ',') from `test`.`t2_1024` group by `test`.`t2_1024`.`b2` ), <primary_index_lookup>(`test`.`t1_1024`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_1024`.`a1` = `materialized subselect`.`group_concat(b1)`))))))
select left(a1,7), left(a2,7)
from t1_1024
where a1 in (select group_concat(b1) from t2_1024 group by b2);
@@ -844,7 +844,7 @@
1 PRIMARY t1_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2_1024 ALL NULL NULL NULL NULL 3 100.00 Using filesort
Warnings:
-Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <in_optimizer>(`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a1` in ( <materialize> (select group_concat(`test`.`t2_1024`.`b1` separator ',') from `test`.`t2_1024` group by `test`.`t2_1024`.`b2` ), <primary_index_lookup>(`test`.`t1_1024`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_1024`.`a1` = `materialized subselect`.`group_concat(b1)`)))))
+Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <expr_cache>(<in_optimizer>(`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a1` in ( <materialize> (select group_concat(`test`.`t2_1024`.`b1` separator ',') from `test`.`t2_1024` group by `test`.`t2_1024`.`b2` ), <primary_index_lookup>(`test`.`t1_1024`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_1024`.`a1` = `materialized subselect`.`group_concat(b1)`))))))
select left(a1,7), left(a2,7)
from t1_1024
where a1 in (select group_concat(b1) from t2_1024 group by b2);
@@ -882,7 +882,7 @@
1 PRIMARY t1_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <in_optimizer>(`test`.`t1_1025`.`a1`,<exists>(select 1 from `test`.`t2_1025` where ((`test`.`t2_1025`.`b1` > '0') and (<cache>(`test`.`t1_1025`.`a1`) = `test`.`t2_1025`.`b1`))))
+Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <expr_cache>(<in_optimizer>(`test`.`t1_1025`.`a1`,<exists>(select 1 from `test`.`t2_1025` where ((`test`.`t2_1025`.`b1` > '0') and (<cache>(`test`.`t1_1025`.`a1`) = `test`.`t2_1025`.`b1`)))))
select left(a1,7), left(a2,7)
from t1_1025
where a1 in (select b1 from t2_1025 where b1 > '0');
@@ -896,7 +896,7 @@
1 PRIMARY t1_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <in_optimizer>((`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a2`),<exists>(select `test`.`t2_1025`.`b1`,`test`.`t2_1025`.`b2` from `test`.`t2_1025` where ((`test`.`t2_1025`.`b1` > '0') and (<cache>(`test`.`t1_1025`.`a1`) = `test`.`t2_1025`.`b1`) and (<cache>(`test`.`t1_1025`.`a2`) = `test`.`t2_1025`.`b2`))))
+Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <expr_cache>(<in_optimizer>((`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a2`),<exists>(select `test`.`t2_1025`.`b1`,`test`.`t2_1025`.`b2` from `test`.`t2_1025` where ((`test`.`t2_1025`.`b1` > '0') and (<cache>(`test`.`t1_1025`.`a1`) = `test`.`t2_1025`.`b1`) and (<cache>(`test`.`t1_1025`.`a2`) = `test`.`t2_1025`.`b2`)))))
select left(a1,7), left(a2,7)
from t1_1025
where (a1,a2) in (select b1, b2 from t2_1025 where b1 > '0');
@@ -910,7 +910,7 @@
1 PRIMARY t1_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <in_optimizer>(`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a1` in (select 1 from `test`.`t2_1025` where ((`test`.`t2_1025`.`b1` > '0') and (<cache>(`test`.`t1_1025`.`a1`) = substr(`test`.`t2_1025`.`b1`,1,1025)))))
+Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <expr_cache>(<in_optimizer>(`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a1` in (select 1 from `test`.`t2_1025` where ((`test`.`t2_1025`.`b1` > '0') and (<cache>(`test`.`t1_1025`.`a1`) = substr(`test`.`t2_1025`.`b1`,1,1025))))))
select left(a1,7), left(a2,7)
from t1_1025
where a1 in (select substring(b1,1,1025) from t2_1025 where b1 > '0');
@@ -924,7 +924,7 @@
1 PRIMARY t1_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2_1025 ALL NULL NULL NULL NULL 3 100.00 Using filesort
Warnings:
-Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <in_optimizer>(`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a1` in ( <materialize> (select group_concat(`test`.`t2_1025`.`b1` separator ',') from `test`.`t2_1025` group by `test`.`t2_1025`.`b2` ), <primary_index_lookup>(`test`.`t1_1025`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_1025`.`a1` = `materialized subselect`.`group_concat(b1)`)))))
+Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <expr_cache>(<in_optimizer>(`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a1` in ( <materialize> (select group_concat(`test`.`t2_1025`.`b1` separator ',') from `test`.`t2_1025` group by `test`.`t2_1025`.`b2` ), <primary_index_lookup>(`test`.`t1_1025`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_1025`.`a1` = `materialized subselect`.`group_concat(b1)`))))))
select left(a1,7), left(a2,7)
from t1_1025
where a1 in (select group_concat(b1) from t2_1025 group by b2);
@@ -937,7 +937,7 @@
1 PRIMARY t1_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2_1025 ALL NULL NULL NULL NULL 3 100.00 Using filesort
Warnings:
-Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <in_optimizer>(`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a1` in ( <materialize> (select group_concat(`test`.`t2_1025`.`b1` separator ',') from `test`.`t2_1025` group by `test`.`t2_1025`.`b2` ), <primary_index_lookup>(`test`.`t1_1025`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_1025`.`a1` = `materialized subselect`.`group_concat(b1)`)))))
+Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <expr_cache>(<in_optimizer>(`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a1` in ( <materialize> (select group_concat(`test`.`t2_1025`.`b1` separator ',') from `test`.`t2_1025` group by `test`.`t2_1025`.`b2` ), <primary_index_lookup>(`test`.`t1_1025`.`a1` in <temporary table> on distinct_key where ((`test`.`t1_1025`.`a1` = `materialized subselect`.`group_concat(b1)`))))))
select left(a1,7), left(a2,7)
from t1_1025
where a1 in (select group_concat(b1) from t2_1025 group by b2);
@@ -959,7 +959,7 @@
1 PRIMARY t1bit ALL NULL NULL NULL NULL 3 100.00 Using where
2 SUBQUERY t2bit ALL NULL NULL NULL NULL 3 100.00
Warnings:
-Note 1003 select conv(`test`.`t1bit`.`a1`,10,2) AS `bin(a1)`,conv(`test`.`t1bit`.`a2`,10,2) AS `bin(a2)` from `test`.`t1bit` where <in_optimizer>((`test`.`t1bit`.`a1`,`test`.`t1bit`.`a2`),(`test`.`t1bit`.`a1`,`test`.`t1bit`.`a2`) in ( <materialize> (select `test`.`t2bit`.`b1`,`test`.`t2bit`.`b2` from `test`.`t2bit` ), <primary_index_lookup>(`test`.`t1bit`.`a1` in <temporary table> on distinct_key where ((`test`.`t1bit`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1bit`.`a2` = `materialized subselect`.`b2`)))))
+Note 1003 select conv(`test`.`t1bit`.`a1`,10,2) AS `bin(a1)`,conv(`test`.`t1bit`.`a2`,10,2) AS `bin(a2)` from `test`.`t1bit` where <expr_cache>(<in_optimizer>((`test`.`t1bit`.`a1`,`test`.`t1bit`.`a2`),(`test`.`t1bit`.`a1`,`test`.`t1bit`.`a2`) in ( <materialize> (select `test`.`t2bit`.`b1`,`test`.`t2bit`.`b2` from `test`.`t2bit` ), <primary_index_lookup>(`test`.`t1bit`.`a1` in <temporary table> on distinct_key where ((`test`.`t1bit`.`a1` = `materialized subselect`.`b1`) and (`test`.`t1bit`.`a2` = `materialized subselect`.`b2`))))))
select bin(a1), bin(a2)
from t1bit
where (a1, a2) in (select b1, b2 from t2bit);
@@ -982,7 +982,7 @@
1 PRIMARY t1bb ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2bb ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select conv(`test`.`t1bb`.`a1`,10,2) AS `bin(a1)`,`test`.`t1bb`.`a2` AS `a2` from `test`.`t1bb` where <in_optimizer>((`test`.`t1bb`.`a1`,`test`.`t1bb`.`a2`),<exists>(select `test`.`t2bb`.`b1`,`test`.`t2bb`.`b2` from `test`.`t2bb` where ((<cache>(`test`.`t1bb`.`a1`) = `test`.`t2bb`.`b1`) and (<cache>(`test`.`t1bb`.`a2`) = `test`.`t2bb`.`b2`))))
+Note 1003 select conv(`test`.`t1bb`.`a1`,10,2) AS `bin(a1)`,`test`.`t1bb`.`a2` AS `a2` from `test`.`t1bb` where <expr_cache>(<in_optimizer>((`test`.`t1bb`.`a1`,`test`.`t1bb`.`a2`),<exists>(select `test`.`t2bb`.`b1`,`test`.`t2bb`.`b2` from `test`.`t2bb` where ((<cache>(`test`.`t1bb`.`a1`) = `test`.`t2bb`.`b1`) and (<cache>(`test`.`t1bb`.`a2`) = `test`.`t2bb`.`b2`)))))
select bin(a1), a2
from t1bb
where (a1, a2) in (select b1, b2 from t2bb);
@@ -1030,7 +1030,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 7 100.00 Using where
2 SUBQUERY t2 ALL NULL NULL NULL NULL 6 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` where <in_optimizer>(`test`.`t1`.`a`,`test`.`t1`.`a` in ( <materialize> (select `test`.`t2`.`c` from `test`.`t2` where (`test`.`t2`.`d` >= 20) ), <primary_index_lookup>(`test`.`t1`.`a` in <temporary table> on distinct_key where ((`test`.`t1`.`a` = `materialized subselect`.`c`)))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` where <expr_cache>(<in_optimizer>(`test`.`t1`.`a`,`test`.`t1`.`a` in ( <materialize> (select `test`.`t2`.`c` from `test`.`t2` where (`test`.`t2`.`d` >= 20) ), <primary_index_lookup>(`test`.`t1`.`a` in <temporary table> on distinct_key where ((`test`.`t1`.`a` = `materialized subselect`.`c`))))))
select a from t1 where a in (select c from t2 where d >= 20);
a
2
@@ -1044,7 +1044,7 @@
1 PRIMARY t1 index NULL it1a 4 NULL 7 100.00 Using where; Using index
2 SUBQUERY t2 ALL NULL NULL NULL NULL 6 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` where <in_optimizer>(`test`.`t1`.`a`,`test`.`t1`.`a` in ( <materialize> (select `test`.`t2`.`c` from `test`.`t2` where (`test`.`t2`.`d` >= 20) ), <primary_index_lookup>(`test`.`t1`.`a` in <temporary table> on distinct_key where ((`test`.`t1`.`a` = `materialized subselect`.`c`)))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` where <expr_cache>(<in_optimizer>(`test`.`t1`.`a`,`test`.`t1`.`a` in ( <materialize> (select `test`.`t2`.`c` from `test`.`t2` where (`test`.`t2`.`d` >= 20) ), <primary_index_lookup>(`test`.`t1`.`a` in <temporary table> on distinct_key where ((`test`.`t1`.`a` = `materialized subselect`.`c`))))))
select a from t1 where a in (select c from t2 where d >= 20);
a
2
@@ -1058,7 +1058,7 @@
1 PRIMARY t1 index NULL it1a 4 NULL 7 100.00 Using where; Using index
2 SUBQUERY t2 ALL NULL NULL NULL NULL 7 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` where <in_optimizer>(`test`.`t1`.`a`,`test`.`t1`.`a` in ( <materialize> (select `test`.`t2`.`c` from `test`.`t2` where (`test`.`t2`.`d` >= 20) ), <primary_index_lookup>(`test`.`t1`.`a` in <temporary table> on distinct_key where ((`test`.`t1`.`a` = `materialized subselect`.`c`)))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` where <expr_cache>(<in_optimizer>(`test`.`t1`.`a`,`test`.`t1`.`a` in ( <materialize> (select `test`.`t2`.`c` from `test`.`t2` where (`test`.`t2`.`d` >= 20) ), <primary_index_lookup>(`test`.`t1`.`a` in <temporary table> on distinct_key where ((`test`.`t1`.`a` = `materialized subselect`.`c`))))))
select a from t1 where a in (select c from t2 where d >= 20);
a
2
@@ -1071,7 +1071,7 @@
1 PRIMARY t1 index NULL it1a 4 NULL 7 100.00 Using index
2 SUBQUERY t2 ALL NULL NULL NULL NULL 7 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` group by `test`.`t1`.`a` having <in_optimizer>(`test`.`t1`.`a`,`test`.`t1`.`a` in ( <materialize> (select `test`.`t2`.`c` from `test`.`t2` where (`test`.`t2`.`d` >= 20) ), <primary_index_lookup>(`test`.`t1`.`a` in <temporary table> on distinct_key where ((`test`.`t1`.`a` = `materialized subselect`.`c`)))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` group by `test`.`t1`.`a` having <expr_cache>(<in_optimizer>(`test`.`t1`.`a`,`test`.`t1`.`a` in ( <materialize> (select `test`.`t2`.`c` from `test`.`t2` where (`test`.`t2`.`d` >= 20) ), <primary_index_lookup>(`test`.`t1`.`a` in <temporary table> on distinct_key where ((`test`.`t1`.`a` = `materialized subselect`.`c`))))))
select a from t1 group by a having a in (select c from t2 where d >= 20);
a
2
@@ -1083,7 +1083,7 @@
1 PRIMARY t1 index NULL it1a 4 NULL 7 100.00 Using index
2 SUBQUERY t2 ALL NULL NULL NULL NULL 7 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` group by `test`.`t1`.`a` having <in_optimizer>(`test`.`t1`.`a`,`test`.`t1`.`a` in ( <materialize> (select `test`.`t2`.`c` from `test`.`t2` where (`test`.`t2`.`d` >= 20) ), <primary_index_lookup>(`test`.`t1`.`a` in <temporary table> on distinct_key where ((`test`.`t1`.`a` = `materialized subselect`.`c`)))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` group by `test`.`t1`.`a` having <expr_cache>(<in_optimizer>(`test`.`t1`.`a`,`test`.`t1`.`a` in ( <materialize> (select `test`.`t2`.`c` from `test`.`t2` where (`test`.`t2`.`d` >= 20) ), <primary_index_lookup>(`test`.`t1`.`a` in <temporary table> on distinct_key where ((`test`.`t1`.`a` = `materialized subselect`.`c`))))))
select a from t1 group by a having a in (select c from t2 where d >= 20);
a
2
@@ -1097,7 +1097,7 @@
3 DEPENDENT SUBQUERY t3 ALL NULL NULL NULL NULL 4 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.t1.b' of SELECT #3 was resolved in SELECT #1
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` group by `test`.`t1`.`a` having <in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` where (<nop>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select `test`.`t3`.`e` from `test`.`t3` where (max(`test`.`t1`.`b`) = `test`.`t3`.`e`) having (<cache>(`test`.`t2`.`d`) >= <ref_null_helper>(`test`.`t3`.`e`))))) and (<cache>(`test`.`t1`.`a`) = `test`.`t2`.`c`))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` group by `test`.`t1`.`a` having <expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` where (<nop>(<expr_cache>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select `test`.`t3`.`e` from `test`.`t3` where (max(`test`.`t1`.`b`) = `test`.`t3`.`e`) having (<cache>(`test`.`t2`.`d`) >= <ref_null_helper>(`test`.`t3`.`e`)))))) and (<cache>(`test`.`t1`.`a`) = `test`.`t2`.`c`)))))
select a from t1 group by a
having a in (select c from t2 where d >= some(select e from t3 where max(b)=e));
a
@@ -1112,7 +1112,7 @@
3 DEPENDENT SUBQUERY t3 ALL NULL NULL NULL NULL 4 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.t1.b' of SELECT #3 was resolved in SELECT #1
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` where <in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` where (<nop>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select 1 from `test`.`t3` where ((`test`.`t1`.`b` = `test`.`t3`.`e`) and (<cache>(`test`.`t2`.`d`) >= `test`.`t3`.`e`))))) and (<cache>(`test`.`t1`.`a`) = `test`.`t2`.`c`))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` where <expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` where (<nop>(<expr_cache>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select 1 from `test`.`t3` where ((`test`.`t1`.`b` = `test`.`t3`.`e`) and (<cache>(`test`.`t2`.`d`) >= `test`.`t3`.`e`)))))) and (<cache>(`test`.`t1`.`a`) = `test`.`t2`.`c`)))))
select a from t1
where a in (select c from t2 where d >= some(select e from t3 where b=e));
a
=== modified file 'mysql-test/r/subselect_no_mat.result'
--- a/mysql-test/r/subselect_no_mat.result 2010-07-16 08:58:24 +0000
+++ b/mysql-test/r/subselect_no_mat.result 2010-08-11 17:51:06 +0000
@@ -56,7 +56,7 @@
Warnings:
Note 1276 Field or reference 'b.a' of SELECT #3 was resolved in SELECT #1
Note 1276 Field or reference 'b.a' of SELECT #3 was resolved in SELECT #1
-Note 1003 select 1 AS `1` from (select 1 AS `a`) `b` having ((select '1') = 1)
+Note 1003 select 1 AS `1` from (select 1 AS `a`) `b` having (<expr_cache>((select '1')) = 1)
SELECT 1 FROM (SELECT 1 as a) as b HAVING (SELECT a)=1;
1
1
@@ -320,7 +320,7 @@
Warnings:
Note 1276 Field or reference 'test.t2.a' of SELECT #2 was resolved in SELECT #1
Note 1276 Field or reference 'test.t2.a' of SELECT #3 was resolved in SELECT #1
-Note 1003 select (select '2' from `test`.`t1` where ('2' = `test`.`t2`.`a`) union select `test`.`t5`.`a` from `test`.`t5` where (`test`.`t5`.`a` = `test`.`t2`.`a`)) AS `(select a from t1 where t1.a=t2.a union select a from t5 where t5.a=t2.a)`,`test`.`t2`.`a` AS `a` from `test`.`t2`
+Note 1003 select <expr_cache>((select '2' from `test`.`t1` where ('2' = `test`.`t2`.`a`) union select `test`.`t5`.`a` from `test`.`t5` where (`test`.`t5`.`a` = `test`.`t2`.`a`))) AS `(select a from t1 where t1.a=t2.a union select a from t5 where t5.a=t2.a)`,`test`.`t2`.`a` AS `a` from `test`.`t2`
select (select a from t1 where t1.a=t2.a union all select a from t5 where t5.a=t2.a), a from t2;
ERROR 21000: Subquery returns more than 1 row
create table t6 (patient_uq int, clinic_uq int, index i1 (clinic_uq));
@@ -338,7 +338,7 @@
2 DEPENDENT SUBQUERY t7 eq_ref PRIMARY PRIMARY 4 test.t6.clinic_uq 1 100.00 Using index
Warnings:
Note 1276 Field or reference 'test.t6.clinic_uq' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t6`.`patient_uq` AS `patient_uq`,`test`.`t6`.`clinic_uq` AS `clinic_uq` from `test`.`t6` where exists(select 1 from `test`.`t7` where (`test`.`t7`.`uq` = `test`.`t6`.`clinic_uq`))
+Note 1003 select `test`.`t6`.`patient_uq` AS `patient_uq`,`test`.`t6`.`clinic_uq` AS `clinic_uq` from `test`.`t6` where <expr_cache>(exists(select 1 from `test`.`t7` where (`test`.`t7`.`uq` = `test`.`t6`.`clinic_uq`)))
select * from t1 where a= (select a from t2,t4 where t2.b=t4.b);
ERROR 23000: Column 'a' in field list is ambiguous
drop table t1,t2,t3;
@@ -749,7 +749,7 @@
3 DEPENDENT UNION NULL NULL NULL NULL NULL NULL NULL NULL No tables used
NULL UNION RESULT <union2,3> ALL NULL NULL NULL NULL NULL NULL
Warnings:
-Note 1003 select `test`.`t2`.`id` AS `id` from `test`.`t2` where <in_optimizer>(`test`.`t2`.`id`,<exists>(select 1 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(1)) union select 3 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(3))))
+Note 1003 select `test`.`t2`.`id` AS `id` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`id`,<exists>(select 1 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(1)) union select 3 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(3)))))
SELECT * FROM t2 WHERE id IN (SELECT 5 UNION SELECT 3);
id
SELECT * FROM t2 WHERE id IN (SELECT 5 UNION SELECT 2);
@@ -897,7 +897,7 @@
1 PRIMARY t1 index NULL PRIMARY 4 NULL 4 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery a a 5 func 2 100.00 Using index
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a`,<in_optimizer>(`test`.`t1`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`a`) in t2 on a checking NULL having <is_not_null_test>(`test`.`t2`.`a`)))) AS `t1.a in (select t2.a from t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`a`) in t2 on a checking NULL having <is_not_null_test>(`test`.`t2`.`a`))))) AS `t1.a in (select t2.a from t2)` from `test`.`t1`
CREATE TABLE t3 (a int(11) default '0');
INSERT INTO t3 VALUES (1),(2),(3);
SELECT t1.a, t1.a in (select t2.a from t2,t3 where t3.a=t2.a) FROM t1;
@@ -912,7 +912,7 @@
2 DEPENDENT SUBQUERY t2 ref_or_null a a 5 func 2 100.00 Using index
2 DEPENDENT SUBQUERY t3 ALL NULL NULL NULL NULL 3 100.00 Using where; Using join buffer
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a`,<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t2`.`a`) and ((<cache>(`test`.`t1`.`a`) = `test`.`t2`.`a`) or isnull(`test`.`t2`.`a`))) having <is_not_null_test>(`test`.`t2`.`a`))) AS `t1.a in (select t2.a from t2,t3 where t3.a=t2.a)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t2`.`a`) and ((<cache>(`test`.`t1`.`a`) = `test`.`t2`.`a`) or isnull(`test`.`t2`.`a`))) having <is_not_null_test>(`test`.`t2`.`a`)))) AS `t1.a in (select t2.a from t2,t3 where t3.a=t2.a)` from `test`.`t1`
drop table t1,t2,t3;
create table t1 (a float);
select 10.5 IN (SELECT * from t1 LIMIT 1);
@@ -1473,25 +1473,25 @@
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`))))))) AS `s1 NOT IN (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 = ANY (SELECT s1 FROM t2) from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`))))) AS `s1 = ANY (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))) AS `s1 = ANY (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 <> ALL (SELECT s1 FROM t2) from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`))))))) AS `s1 <> ALL (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 <> ALL (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2') from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Using where; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL where (`test`.`t2`.`s1` < 'a2') having trigcond(<is_not_null_test>(`test`.`t2`.`s1`))))))) AS `s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2')` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL where (`test`.`t2`.`s1` < 'a2') having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2')` from `test`.`t1`
drop table t1,t2;
create table t2 (a int, b int);
create table t3 (a int);
@@ -1746,14 +1746,14 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 12 100.00 Using where
2 DEPENDENT SUBQUERY t1 unique_subquery PRIMARY PRIMARY 4 func 1 100.00 Using index; Using where
Warnings:
-Note 1003 select `test`.`t1`.`id` AS `id`,`test`.`t1`.`text` AS `text` from `test`.`t1` where (not(<in_optimizer>(`test`.`t1`.`id`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`id`) in t1 on PRIMARY where ((`test`.`t1`.`id` < 8) and (<cache>(`test`.`t1`.`id`) = `test`.`t1`.`id`)))))))
+Note 1003 select `test`.`t1`.`id` AS `id`,`test`.`t1`.`text` AS `text` from `test`.`t1` where (not(<expr_cache>(<in_optimizer>(`test`.`t1`.`id`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`id`) in t1 on PRIMARY where ((`test`.`t1`.`id` < 8) and (<cache>(`test`.`t1`.`id`) = `test`.`t1`.`id`))))))))
explain extended select * from t1 as tt where not exists (select id from t1 where id < 8 and (id = tt.id or id is null) having id is not null);
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY tt ALL NULL NULL NULL NULL 12 100.00 Using where
2 DEPENDENT SUBQUERY t1 eq_ref PRIMARY PRIMARY 4 test.tt.id 1 100.00 Using where; Using index
Warnings:
Note 1276 Field or reference 'test.tt.id' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`tt`.`id` AS `id`,`test`.`tt`.`text` AS `text` from `test`.`t1` `tt` where (not(exists(select `test`.`t1`.`id` from `test`.`t1` where ((`test`.`t1`.`id` < 8) and (`test`.`t1`.`id` = `test`.`tt`.`id`)) having (`test`.`t1`.`id` is not null))))
+Note 1003 select `test`.`tt`.`id` AS `id`,`test`.`tt`.`text` AS `text` from `test`.`t1` `tt` where (not(<expr_cache>(exists(select `test`.`t1`.`id` from `test`.`t1` where ((`test`.`t1`.`id` < 8) and (`test`.`t1`.`id` = `test`.`tt`.`id`)) having (`test`.`t1`.`id` is not null)))))
insert into t1 (id, text) values (1000, 'text1000'), (1001, 'text1001');
create table t2 (id int not null, text varchar(20) not null default '', primary key (id));
insert into t2 (id, text) values (1, 'text1'), (2, 'text2'), (3, 'text3'), (4, 'text4'), (5, 'text5'), (6, 'text6'), (7, 'text7'), (8, 'text8'), (9, 'text9'), (10, 'text10'), (11, 'text1'), (12, 'text2'), (13, 'text3'), (14, 'text4'), (15, 'text5'), (16, 'text6'), (17, 'text7'), (18, 'text8'), (19, 'text9'), (20, 'text10'),(21, 'text1'), (22, 'text2'), (23, 'text3'), (24, 'text4'), (25, 'text5'), (26, 'text6'), (27, 'text7'), (28, 'text8'), (29, 'text9'), (30, 'text10'), (31, 'text1'), (32, 'text2'), (33, 'text3'), (34, 'text4'), (35, 'text5'), (36, 'text6'), (37, 'text7'), (38, 'text8'), (39, 'text9'), (40, 'text10'), (41, 'text1'), (42, 'text2'), (43, 'text3'), (44, 'text4'), (45, 'text5'), (46, 'text6'), (47, 'text7'), (48, 'text8'), (49, 'text9'), (50, 'text10');
@@ -2289,7 +2289,7 @@
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.up.a' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`up`.`a` AS `a`,`test`.`up`.`b` AS `b` from `test`.`t1` `up` where exists(select 1 from `test`.`t1` where (`test`.`t1`.`a` = `test`.`up`.`a`))
+Note 1003 select `test`.`up`.`a` AS `a`,`test`.`up`.`b` AS `b` from `test`.`t1` `up` where <expr_cache>(exists(select 1 from `test`.`t1` where (`test`.`t1`.`a` = `test`.`up`.`a`)))
drop table t1;
CREATE TABLE t1 (t1_a int);
INSERT INTO t1 VALUES (1);
@@ -2830,7 +2830,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = '0') and trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`)))) having (trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`))))) AS `test` from `test`.`t1`
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = '0') and trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`)))) having (trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
explain extended SELECT one,two from t1 where ROW(one,two) IN (SELECT one,two FROM t2 WHERE flag = 'N');
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
@@ -2842,7 +2842,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = '0') group by `test`.`t2`.`one`,`test`.`t2`.`two` having (trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`))) and trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`))))) AS `test` from `test`.`t1`
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = '0') group by `test`.`t2`.`one`,`test`.`t2`.`two` having (trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`))) and trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
DROP TABLE t1,t2;
CREATE TABLE t1 (a char(5), b char(5));
INSERT INTO t1 VALUES (NULL,'aaa'), ('aaa','aaa');
@@ -4329,7 +4329,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using temporary; Using filesort
Warnings:
-Note 1003 select 1 AS `1` from `test`.`t1` where <in_optimizer>(1,<exists>(select 1 from `test`.`t1` group by `test`.`t1`.`a` having 1))
+Note 1003 select 1 AS `1` from `test`.`t1` where <expr_cache>(<in_optimizer>(1,<exists>(select 1 from `test`.`t1` group by `test`.`t1`.`a` having 1)))
EXPLAIN EXTENDED SELECT 1 FROM t1 WHERE 1 IN (SELECT 1 FROM t1 WHERE a > 3 GROUP BY a);
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY NULL NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
=== modified file 'mysql-test/r/subselect_no_opts.result'
--- a/mysql-test/r/subselect_no_opts.result 2010-07-16 08:58:24 +0000
+++ b/mysql-test/r/subselect_no_opts.result 2010-08-11 17:51:06 +0000
@@ -53,7 +53,7 @@
Warnings:
Note 1276 Field or reference 'b.a' of SELECT #3 was resolved in SELECT #1
Note 1276 Field or reference 'b.a' of SELECT #3 was resolved in SELECT #1
-Note 1003 select 1 AS `1` from (select 1 AS `a`) `b` having ((select '1') = 1)
+Note 1003 select 1 AS `1` from (select 1 AS `a`) `b` having (<expr_cache>((select '1')) = 1)
SELECT 1 FROM (SELECT 1 as a) as b HAVING (SELECT a)=1;
1
1
@@ -317,7 +317,7 @@
Warnings:
Note 1276 Field or reference 'test.t2.a' of SELECT #2 was resolved in SELECT #1
Note 1276 Field or reference 'test.t2.a' of SELECT #3 was resolved in SELECT #1
-Note 1003 select (select '2' from `test`.`t1` where ('2' = `test`.`t2`.`a`) union select `test`.`t5`.`a` from `test`.`t5` where (`test`.`t5`.`a` = `test`.`t2`.`a`)) AS `(select a from t1 where t1.a=t2.a union select a from t5 where t5.a=t2.a)`,`test`.`t2`.`a` AS `a` from `test`.`t2`
+Note 1003 select <expr_cache>((select '2' from `test`.`t1` where ('2' = `test`.`t2`.`a`) union select `test`.`t5`.`a` from `test`.`t5` where (`test`.`t5`.`a` = `test`.`t2`.`a`))) AS `(select a from t1 where t1.a=t2.a union select a from t5 where t5.a=t2.a)`,`test`.`t2`.`a` AS `a` from `test`.`t2`
select (select a from t1 where t1.a=t2.a union all select a from t5 where t5.a=t2.a), a from t2;
ERROR 21000: Subquery returns more than 1 row
create table t6 (patient_uq int, clinic_uq int, index i1 (clinic_uq));
@@ -335,7 +335,7 @@
2 DEPENDENT SUBQUERY t7 eq_ref PRIMARY PRIMARY 4 test.t6.clinic_uq 1 100.00 Using index
Warnings:
Note 1276 Field or reference 'test.t6.clinic_uq' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t6`.`patient_uq` AS `patient_uq`,`test`.`t6`.`clinic_uq` AS `clinic_uq` from `test`.`t6` where exists(select 1 from `test`.`t7` where (`test`.`t7`.`uq` = `test`.`t6`.`clinic_uq`))
+Note 1003 select `test`.`t6`.`patient_uq` AS `patient_uq`,`test`.`t6`.`clinic_uq` AS `clinic_uq` from `test`.`t6` where <expr_cache>(exists(select 1 from `test`.`t7` where (`test`.`t7`.`uq` = `test`.`t6`.`clinic_uq`)))
select * from t1 where a= (select a from t2,t4 where t2.b=t4.b);
ERROR 23000: Column 'a' in field list is ambiguous
drop table t1,t2,t3;
@@ -746,7 +746,7 @@
3 DEPENDENT UNION NULL NULL NULL NULL NULL NULL NULL NULL No tables used
NULL UNION RESULT <union2,3> ALL NULL NULL NULL NULL NULL NULL
Warnings:
-Note 1003 select `test`.`t2`.`id` AS `id` from `test`.`t2` where <in_optimizer>(`test`.`t2`.`id`,<exists>(select 1 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(1)) union select 3 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(3))))
+Note 1003 select `test`.`t2`.`id` AS `id` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`id`,<exists>(select 1 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(1)) union select 3 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(3)))))
SELECT * FROM t2 WHERE id IN (SELECT 5 UNION SELECT 3);
id
SELECT * FROM t2 WHERE id IN (SELECT 5 UNION SELECT 2);
@@ -894,7 +894,7 @@
1 PRIMARY t1 index NULL PRIMARY 4 NULL 4 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery a a 5 func 2 100.00 Using index
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a`,<in_optimizer>(`test`.`t1`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`a`) in t2 on a checking NULL having <is_not_null_test>(`test`.`t2`.`a`)))) AS `t1.a in (select t2.a from t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`a`) in t2 on a checking NULL having <is_not_null_test>(`test`.`t2`.`a`))))) AS `t1.a in (select t2.a from t2)` from `test`.`t1`
CREATE TABLE t3 (a int(11) default '0');
INSERT INTO t3 VALUES (1),(2),(3);
SELECT t1.a, t1.a in (select t2.a from t2,t3 where t3.a=t2.a) FROM t1;
@@ -909,7 +909,7 @@
2 DEPENDENT SUBQUERY t2 ref_or_null a a 5 func 2 100.00 Using index
2 DEPENDENT SUBQUERY t3 ALL NULL NULL NULL NULL 3 100.00 Using where; Using join buffer
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a`,<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t2`.`a`) and ((<cache>(`test`.`t1`.`a`) = `test`.`t2`.`a`) or isnull(`test`.`t2`.`a`))) having <is_not_null_test>(`test`.`t2`.`a`))) AS `t1.a in (select t2.a from t2,t3 where t3.a=t2.a)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t2`.`a`) and ((<cache>(`test`.`t1`.`a`) = `test`.`t2`.`a`) or isnull(`test`.`t2`.`a`))) having <is_not_null_test>(`test`.`t2`.`a`)))) AS `t1.a in (select t2.a from t2,t3 where t3.a=t2.a)` from `test`.`t1`
drop table t1,t2,t3;
create table t1 (a float);
select 10.5 IN (SELECT * from t1 LIMIT 1);
@@ -1299,7 +1299,7 @@
1 PRIMARY t2 index NULL PRIMARY 4 NULL 4 100.00 Using where; Using index
2 DEPENDENT SUBQUERY t1 unique_subquery PRIMARY PRIMARY 4 func 1 100.00 Using index
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <in_optimizer>(`test`.`t2`.`a`,<exists>(<primary_index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on PRIMARY)))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(<primary_index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on PRIMARY))))
select * from t2 where t2.a in (select a from t1 where t1.b <> 30);
a
2
@@ -1309,7 +1309,7 @@
1 PRIMARY t2 index NULL PRIMARY 4 NULL 4 100.00 Using where; Using index
2 DEPENDENT SUBQUERY t1 unique_subquery PRIMARY PRIMARY 4 func 1 100.00 Using where
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <in_optimizer>(`test`.`t2`.`a`,<exists>(<primary_index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on PRIMARY where ((`test`.`t1`.`b` <> 30) and (<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`)))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(<primary_index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on PRIMARY where ((`test`.`t1`.`b` <> 30) and (<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`))))))
select * from t2 where t2.a in (select t1.a from t1,t3 where t1.b=t3.a);
a
2
@@ -1320,7 +1320,7 @@
2 DEPENDENT SUBQUERY t1 eq_ref PRIMARY PRIMARY 4 func 1 100.00
2 DEPENDENT SUBQUERY t3 eq_ref PRIMARY PRIMARY 4 test.t1.b 1 100.00 Using index
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <in_optimizer>(`test`.`t2`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t1`.`b`) and (<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t1`.`b`) and (<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`)))))
drop table t1, t2, t3;
create table t1 (a int, b int, index a (a,b));
create table t2 (a int, index a (a));
@@ -1342,7 +1342,7 @@
1 PRIMARY t2 index NULL a 5 NULL 4 100.00 Using where; Using index
2 DEPENDENT SUBQUERY t1 index_subquery a a 5 func 1001 100.00 Using index
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <in_optimizer>(`test`.`t2`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a)))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a))))
select * from t2 where t2.a in (select a from t1 where t1.b <> 30);
a
2
@@ -1352,7 +1352,7 @@
1 PRIMARY t2 index NULL a 5 NULL 4 100.00 Using where; Using index
2 DEPENDENT SUBQUERY t1 index_subquery a a 5 func 1001 100.00 Using index; Using where
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <in_optimizer>(`test`.`t2`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a where ((`test`.`t1`.`b` <> 30) and (<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`)))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a where ((`test`.`t1`.`b` <> 30) and (<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`))))))
select * from t2 where t2.a in (select t1.a from t1,t3 where t1.b=t3.a);
a
2
@@ -1363,7 +1363,7 @@
2 DEPENDENT SUBQUERY t1 ref a a 5 func 1001 100.00 Using index
2 DEPENDENT SUBQUERY t3 index a a 5 NULL 3 100.00 Using where; Using index; Using join buffer
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <in_optimizer>(`test`.`t2`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t1`.`b`) and (<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(select 1 from `test`.`t1` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t1`.`b`) and (<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`)))))
insert into t1 values (3,31);
select * from t2 where t2.a in (select a from t1 where t1.b <> 30);
a
@@ -1379,7 +1379,7 @@
1 PRIMARY t2 index NULL a 5 NULL 4 100.00 Using where; Using index
2 DEPENDENT SUBQUERY t1 index_subquery a a 5 func 1001 100.00 Using index; Using where
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <in_optimizer>(`test`.`t2`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a where ((`test`.`t1`.`b` <> 30) and (<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`)))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t2`.`a`) in t1 on a where ((`test`.`t1`.`b` <> 30) and (<cache>(`test`.`t2`.`a`) = `test`.`t1`.`a`))))))
drop table t0, t1, t2, t3;
create table t1 (a int, b int);
create table t2 (a int, b int);
@@ -1470,25 +1470,25 @@
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`))))))) AS `s1 NOT IN (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 = ANY (SELECT s1 FROM t2) from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`))))) AS `s1 = ANY (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))) AS `s1 = ANY (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 <> ALL (SELECT s1 FROM t2) from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`))))))) AS `s1 <> ALL (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 <> ALL (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2') from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Using where; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL where (`test`.`t2`.`s1` < 'a2') having trigcond(<is_not_null_test>(`test`.`t2`.`s1`))))))) AS `s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2')` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL where (`test`.`t2`.`s1` < 'a2') having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2')` from `test`.`t1`
drop table t1,t2;
create table t2 (a int, b int);
create table t3 (a int);
@@ -1743,14 +1743,14 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 12 100.00 Using where
2 DEPENDENT SUBQUERY t1 unique_subquery PRIMARY PRIMARY 4 func 1 100.00 Using index; Using where
Warnings:
-Note 1003 select `test`.`t1`.`id` AS `id`,`test`.`t1`.`text` AS `text` from `test`.`t1` where (not(<in_optimizer>(`test`.`t1`.`id`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`id`) in t1 on PRIMARY where ((`test`.`t1`.`id` < 8) and (<cache>(`test`.`t1`.`id`) = `test`.`t1`.`id`)))))))
+Note 1003 select `test`.`t1`.`id` AS `id`,`test`.`t1`.`text` AS `text` from `test`.`t1` where (not(<expr_cache>(<in_optimizer>(`test`.`t1`.`id`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`id`) in t1 on PRIMARY where ((`test`.`t1`.`id` < 8) and (<cache>(`test`.`t1`.`id`) = `test`.`t1`.`id`))))))))
explain extended select * from t1 as tt where not exists (select id from t1 where id < 8 and (id = tt.id or id is null) having id is not null);
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY tt ALL NULL NULL NULL NULL 12 100.00 Using where
2 DEPENDENT SUBQUERY t1 eq_ref PRIMARY PRIMARY 4 test.tt.id 1 100.00 Using where; Using index
Warnings:
Note 1276 Field or reference 'test.tt.id' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`tt`.`id` AS `id`,`test`.`tt`.`text` AS `text` from `test`.`t1` `tt` where (not(exists(select `test`.`t1`.`id` from `test`.`t1` where ((`test`.`t1`.`id` < 8) and (`test`.`t1`.`id` = `test`.`tt`.`id`)) having (`test`.`t1`.`id` is not null))))
+Note 1003 select `test`.`tt`.`id` AS `id`,`test`.`tt`.`text` AS `text` from `test`.`t1` `tt` where (not(<expr_cache>(exists(select `test`.`t1`.`id` from `test`.`t1` where ((`test`.`t1`.`id` < 8) and (`test`.`t1`.`id` = `test`.`tt`.`id`)) having (`test`.`t1`.`id` is not null)))))
insert into t1 (id, text) values (1000, 'text1000'), (1001, 'text1001');
create table t2 (id int not null, text varchar(20) not null default '', primary key (id));
insert into t2 (id, text) values (1, 'text1'), (2, 'text2'), (3, 'text3'), (4, 'text4'), (5, 'text5'), (6, 'text6'), (7, 'text7'), (8, 'text8'), (9, 'text9'), (10, 'text10'), (11, 'text1'), (12, 'text2'), (13, 'text3'), (14, 'text4'), (15, 'text5'), (16, 'text6'), (17, 'text7'), (18, 'text8'), (19, 'text9'), (20, 'text10'),(21, 'text1'), (22, 'text2'), (23, 'text3'), (24, 'text4'), (25, 'text5'), (26, 'text6'), (27, 'text7'), (28, 'text8'), (29, 'text9'), (30, 'text10'), (31, 'text1'), (32, 'text2'), (33, 'text3'), (34, 'text4'), (35, 'text5'), (36, 'text6'), (37, 'text7'), (38, 'text8'), (39, 'text9'), (40, 'text10'), (41, 'text1'), (42, 'text2'), (43, 'text3'), (44, 'text4'), (45, 'text5'), (46, 'text6'), (47, 'text7'), (48, 'text8'), (49, 'text9'), (50, 'text10');
@@ -2286,7 +2286,7 @@
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.up.a' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`up`.`a` AS `a`,`test`.`up`.`b` AS `b` from `test`.`t1` `up` where exists(select 1 from `test`.`t1` where (`test`.`t1`.`a` = `test`.`up`.`a`))
+Note 1003 select `test`.`up`.`a` AS `a`,`test`.`up`.`b` AS `b` from `test`.`t1` `up` where <expr_cache>(exists(select 1 from `test`.`t1` where (`test`.`t1`.`a` = `test`.`up`.`a`)))
drop table t1;
CREATE TABLE t1 (t1_a int);
INSERT INTO t1 VALUES (1);
@@ -2827,19 +2827,19 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = '0') and trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`)))) having (trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`))))) AS `test` from `test`.`t1`
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = '0') and trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`)))) having (trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
explain extended SELECT one,two from t1 where ROW(one,two) IN (SELECT one,two FROM t2 WHERE flag = 'N');
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00 Using where
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two` from `test`.`t1` where <in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = 'N') and (<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) and (<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`))))
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two` from `test`.`t1` where <expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = 'N') and (<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) and (<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`)))))
explain extended SELECT one,two,ROW(one,two) IN (SELECT one,two FROM t2 WHERE flag = '0' group by one,two) as 'test' from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = '0') group by `test`.`t2`.`one`,`test`.`t2`.`two` having (trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`))) and trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`))))) AS `test` from `test`.`t1`
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = '0') group by `test`.`t2`.`one`,`test`.`t2`.`two` having (trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`))) and trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
DROP TABLE t1,t2;
CREATE TABLE t1 (a char(5), b char(5));
INSERT INTO t1 VALUES (NULL,'aaa'), ('aaa','aaa');
@@ -4326,7 +4326,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using temporary; Using filesort
Warnings:
-Note 1003 select 1 AS `1` from `test`.`t1` where <in_optimizer>(1,<exists>(select 1 from `test`.`t1` group by `test`.`t1`.`a` having 1))
+Note 1003 select 1 AS `1` from `test`.`t1` where <expr_cache>(<in_optimizer>(1,<exists>(select 1 from `test`.`t1` group by `test`.`t1`.`a` having 1)))
EXPLAIN EXTENDED SELECT 1 FROM t1 WHERE 1 IN (SELECT 1 FROM t1 WHERE a > 3 GROUP BY a);
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY NULL NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
=== modified file 'mysql-test/r/subselect_no_semijoin.result'
--- a/mysql-test/r/subselect_no_semijoin.result 2010-07-16 08:58:24 +0000
+++ b/mysql-test/r/subselect_no_semijoin.result 2010-08-11 17:51:06 +0000
@@ -53,7 +53,7 @@
Warnings:
Note 1276 Field or reference 'b.a' of SELECT #3 was resolved in SELECT #1
Note 1276 Field or reference 'b.a' of SELECT #3 was resolved in SELECT #1
-Note 1003 select 1 AS `1` from (select 1 AS `a`) `b` having ((select '1') = 1)
+Note 1003 select 1 AS `1` from (select 1 AS `a`) `b` having (<expr_cache>((select '1')) = 1)
SELECT 1 FROM (SELECT 1 as a) as b HAVING (SELECT a)=1;
1
1
@@ -317,7 +317,7 @@
Warnings:
Note 1276 Field or reference 'test.t2.a' of SELECT #2 was resolved in SELECT #1
Note 1276 Field or reference 'test.t2.a' of SELECT #3 was resolved in SELECT #1
-Note 1003 select (select '2' from `test`.`t1` where ('2' = `test`.`t2`.`a`) union select `test`.`t5`.`a` from `test`.`t5` where (`test`.`t5`.`a` = `test`.`t2`.`a`)) AS `(select a from t1 where t1.a=t2.a union select a from t5 where t5.a=t2.a)`,`test`.`t2`.`a` AS `a` from `test`.`t2`
+Note 1003 select <expr_cache>((select '2' from `test`.`t1` where ('2' = `test`.`t2`.`a`) union select `test`.`t5`.`a` from `test`.`t5` where (`test`.`t5`.`a` = `test`.`t2`.`a`))) AS `(select a from t1 where t1.a=t2.a union select a from t5 where t5.a=t2.a)`,`test`.`t2`.`a` AS `a` from `test`.`t2`
select (select a from t1 where t1.a=t2.a union all select a from t5 where t5.a=t2.a), a from t2;
ERROR 21000: Subquery returns more than 1 row
create table t6 (patient_uq int, clinic_uq int, index i1 (clinic_uq));
@@ -335,7 +335,7 @@
2 DEPENDENT SUBQUERY t7 eq_ref PRIMARY PRIMARY 4 test.t6.clinic_uq 1 100.00 Using index
Warnings:
Note 1276 Field or reference 'test.t6.clinic_uq' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`t6`.`patient_uq` AS `patient_uq`,`test`.`t6`.`clinic_uq` AS `clinic_uq` from `test`.`t6` where exists(select 1 from `test`.`t7` where (`test`.`t7`.`uq` = `test`.`t6`.`clinic_uq`))
+Note 1003 select `test`.`t6`.`patient_uq` AS `patient_uq`,`test`.`t6`.`clinic_uq` AS `clinic_uq` from `test`.`t6` where <expr_cache>(exists(select 1 from `test`.`t7` where (`test`.`t7`.`uq` = `test`.`t6`.`clinic_uq`)))
select * from t1 where a= (select a from t2,t4 where t2.b=t4.b);
ERROR 23000: Column 'a' in field list is ambiguous
drop table t1,t2,t3;
@@ -746,7 +746,7 @@
3 DEPENDENT UNION NULL NULL NULL NULL NULL NULL NULL NULL No tables used
NULL UNION RESULT <union2,3> ALL NULL NULL NULL NULL NULL NULL
Warnings:
-Note 1003 select `test`.`t2`.`id` AS `id` from `test`.`t2` where <in_optimizer>(`test`.`t2`.`id`,<exists>(select 1 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(1)) union select 3 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(3))))
+Note 1003 select `test`.`t2`.`id` AS `id` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`id`,<exists>(select 1 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(1)) union select 3 having (<cache>(`test`.`t2`.`id`) = <ref_null_helper>(3)))))
SELECT * FROM t2 WHERE id IN (SELECT 5 UNION SELECT 3);
id
SELECT * FROM t2 WHERE id IN (SELECT 5 UNION SELECT 2);
@@ -894,7 +894,7 @@
1 PRIMARY t1 index NULL PRIMARY 4 NULL 4 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery a a 5 func 2 100.00 Using index
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a`,<in_optimizer>(`test`.`t1`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`a`) in t2 on a checking NULL having <is_not_null_test>(`test`.`t2`.`a`)))) AS `t1.a in (select t2.a from t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`a`) in t2 on a checking NULL having <is_not_null_test>(`test`.`t2`.`a`))))) AS `t1.a in (select t2.a from t2)` from `test`.`t1`
CREATE TABLE t3 (a int(11) default '0');
INSERT INTO t3 VALUES (1),(2),(3);
SELECT t1.a, t1.a in (select t2.a from t2,t3 where t3.a=t2.a) FROM t1;
@@ -909,7 +909,7 @@
2 DEPENDENT SUBQUERY t2 ref_or_null a a 5 func 2 100.00 Using index
2 DEPENDENT SUBQUERY t3 ALL NULL NULL NULL NULL 3 100.00 Using where; Using join buffer
Warnings:
-Note 1003 select `test`.`t1`.`a` AS `a`,<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t2`.`a`) and ((<cache>(`test`.`t1`.`a`) = `test`.`t2`.`a`) or isnull(`test`.`t2`.`a`))) having <is_not_null_test>(`test`.`t2`.`a`))) AS `t1.a in (select t2.a from t2,t3 where t3.a=t2.a)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`a` AS `a`,<expr_cache>(<in_optimizer>(`test`.`t1`.`a`,<exists>(select 1 from `test`.`t2` join `test`.`t3` where ((`test`.`t3`.`a` = `test`.`t2`.`a`) and ((<cache>(`test`.`t1`.`a`) = `test`.`t2`.`a`) or isnull(`test`.`t2`.`a`))) having <is_not_null_test>(`test`.`t2`.`a`)))) AS `t1.a in (select t2.a from t2,t3 where t3.a=t2.a)` from `test`.`t1`
drop table t1,t2,t3;
create table t1 (a float);
select 10.5 IN (SELECT * from t1 LIMIT 1);
@@ -1299,7 +1299,7 @@
1 PRIMARY t2 index NULL PRIMARY 4 NULL 4 100.00 Using where; Using index
2 SUBQUERY t1 index NULL PRIMARY 4 NULL 4 100.00 Using index
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`)))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`))))))
select * from t2 where t2.a in (select a from t1 where t1.b <> 30);
a
2
@@ -1309,7 +1309,7 @@
1 PRIMARY t2 index NULL PRIMARY 4 NULL 4 100.00 Using where; Using index
2 SUBQUERY t1 ALL NULL NULL NULL NULL 4 100.00 Using where
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` where (`test`.`t1`.`b` <> 30) ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`)))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` where (`test`.`t1`.`b` <> 30) ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`))))))
select * from t2 where t2.a in (select t1.a from t1,t3 where t1.b=t3.a);
a
2
@@ -1320,7 +1320,7 @@
2 SUBQUERY t3 index PRIMARY PRIMARY 4 NULL 3 100.00 Using index
2 SUBQUERY t1 ALL NULL NULL NULL NULL 4 100.00 Using where; Using join buffer
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` join `test`.`t3` where (`test`.`t1`.`b` = `test`.`t3`.`a`) ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`)))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` join `test`.`t3` where (`test`.`t1`.`b` = `test`.`t3`.`a`) ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`))))))
drop table t1, t2, t3;
create table t1 (a int, b int, index a (a,b));
create table t2 (a int, index a (a));
@@ -1342,7 +1342,7 @@
1 PRIMARY t2 index NULL a 5 NULL 4 100.00 Using where; Using index
2 SUBQUERY t1 index NULL a 10 NULL 10004 100.00 Using index
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`)))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`))))))
select * from t2 where t2.a in (select a from t1 where t1.b <> 30);
a
2
@@ -1352,7 +1352,7 @@
1 PRIMARY t2 index NULL a 5 NULL 4 100.00 Using where; Using index
2 SUBQUERY t1 index NULL a 10 NULL 10004 100.00 Using where; Using index
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` where (`test`.`t1`.`b` <> 30) ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`)))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` where (`test`.`t1`.`b` <> 30) ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`))))))
select * from t2 where t2.a in (select t1.a from t1,t3 where t1.b=t3.a);
a
2
@@ -1363,7 +1363,7 @@
2 SUBQUERY t3 index a a 5 NULL 3 100.00 Using index
2 SUBQUERY t1 index NULL a 10 NULL 10004 100.00 Using where; Using index; Using join buffer
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` join `test`.`t3` where (`test`.`t1`.`b` = `test`.`t3`.`a`) ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`)))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` join `test`.`t3` where (`test`.`t1`.`b` = `test`.`t3`.`a`) ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`))))))
insert into t1 values (3,31);
select * from t2 where t2.a in (select a from t1 where t1.b <> 30);
a
@@ -1379,7 +1379,7 @@
1 PRIMARY t2 index NULL a 5 NULL 4 100.00 Using where; Using index
2 SUBQUERY t1 index NULL a 10 NULL 10005 100.00 Using where; Using index
Warnings:
-Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` where (`test`.`t1`.`b` <> 30) ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`)))))
+Note 1003 select `test`.`t2`.`a` AS `a` from `test`.`t2` where <expr_cache>(<in_optimizer>(`test`.`t2`.`a`,`test`.`t2`.`a` in ( <materialize> (select `test`.`t1`.`a` from `test`.`t1` where (`test`.`t1`.`b` <> 30) ), <primary_index_lookup>(`test`.`t2`.`a` in <temporary table> on distinct_key where ((`test`.`t2`.`a` = `materialized subselect`.`a`))))))
drop table t0, t1, t2, t3;
create table t1 (a int, b int);
create table t2 (a int, b int);
@@ -1470,25 +1470,25 @@
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`))))))) AS `s1 NOT IN (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 = ANY (SELECT s1 FROM t2) from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`))))) AS `s1 = ANY (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))) AS `s1 = ANY (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 <> ALL (SELECT s1 FROM t2) from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`))))))) AS `s1 <> ALL (SELECT s1 FROM t2)` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 <> ALL (SELECT s1 FROM t2)` from `test`.`t1`
explain extended select s1, s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2') from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 index NULL s1 6 NULL 3 100.00 Using index
2 DEPENDENT SUBQUERY t2 index_subquery s1 s1 6 func 2 100.00 Using index; Using where; Full scan on NULL key
Warnings:
-Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL where (`test`.`t2`.`s1` < 'a2') having trigcond(<is_not_null_test>(`test`.`t2`.`s1`))))))) AS `s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2')` from `test`.`t1`
+Note 1003 select `test`.`t1`.`s1` AS `s1`,(not(<expr_cache>(<in_optimizer>(`test`.`t1`.`s1`,<exists>(<index_lookup>(<cache>(`test`.`t1`.`s1`) in t2 on s1 checking NULL where (`test`.`t2`.`s1` < 'a2') having trigcond(<is_not_null_test>(`test`.`t2`.`s1`)))))))) AS `s1 NOT IN (SELECT s1 FROM t2 WHERE s1 < 'a2')` from `test`.`t1`
drop table t1,t2;
create table t2 (a int, b int);
create table t3 (a int);
@@ -1743,14 +1743,14 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 12 100.00 Using where
2 DEPENDENT SUBQUERY t1 unique_subquery PRIMARY PRIMARY 4 func 1 100.00 Using index; Using where
Warnings:
-Note 1003 select `test`.`t1`.`id` AS `id`,`test`.`t1`.`text` AS `text` from `test`.`t1` where (not(<in_optimizer>(`test`.`t1`.`id`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`id`) in t1 on PRIMARY where ((`test`.`t1`.`id` < 8) and (<cache>(`test`.`t1`.`id`) = `test`.`t1`.`id`)))))))
+Note 1003 select `test`.`t1`.`id` AS `id`,`test`.`t1`.`text` AS `text` from `test`.`t1` where (not(<expr_cache>(<in_optimizer>(`test`.`t1`.`id`,<exists>(<primary_index_lookup>(<cache>(`test`.`t1`.`id`) in t1 on PRIMARY where ((`test`.`t1`.`id` < 8) and (<cache>(`test`.`t1`.`id`) = `test`.`t1`.`id`))))))))
explain extended select * from t1 as tt where not exists (select id from t1 where id < 8 and (id = tt.id or id is null) having id is not null);
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY tt ALL NULL NULL NULL NULL 12 100.00 Using where
2 DEPENDENT SUBQUERY t1 eq_ref PRIMARY PRIMARY 4 test.tt.id 1 100.00 Using where; Using index
Warnings:
Note 1276 Field or reference 'test.tt.id' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`tt`.`id` AS `id`,`test`.`tt`.`text` AS `text` from `test`.`t1` `tt` where (not(exists(select `test`.`t1`.`id` from `test`.`t1` where ((`test`.`t1`.`id` < 8) and (`test`.`t1`.`id` = `test`.`tt`.`id`)) having (`test`.`t1`.`id` is not null))))
+Note 1003 select `test`.`tt`.`id` AS `id`,`test`.`tt`.`text` AS `text` from `test`.`t1` `tt` where (not(<expr_cache>(exists(select `test`.`t1`.`id` from `test`.`t1` where ((`test`.`t1`.`id` < 8) and (`test`.`t1`.`id` = `test`.`tt`.`id`)) having (`test`.`t1`.`id` is not null)))))
insert into t1 (id, text) values (1000, 'text1000'), (1001, 'text1001');
create table t2 (id int not null, text varchar(20) not null default '', primary key (id));
insert into t2 (id, text) values (1, 'text1'), (2, 'text2'), (3, 'text3'), (4, 'text4'), (5, 'text5'), (6, 'text6'), (7, 'text7'), (8, 'text8'), (9, 'text9'), (10, 'text10'), (11, 'text1'), (12, 'text2'), (13, 'text3'), (14, 'text4'), (15, 'text5'), (16, 'text6'), (17, 'text7'), (18, 'text8'), (19, 'text9'), (20, 'text10'),(21, 'text1'), (22, 'text2'), (23, 'text3'), (24, 'text4'), (25, 'text5'), (26, 'text6'), (27, 'text7'), (28, 'text8'), (29, 'text9'), (30, 'text10'), (31, 'text1'), (32, 'text2'), (33, 'text3'), (34, 'text4'), (35, 'text5'), (36, 'text6'), (37, 'text7'), (38, 'text8'), (39, 'text9'), (40, 'text10'), (41, 'text1'), (42, 'text2'), (43, 'text3'), (44, 'text4'), (45, 'text5'), (46, 'text6'), (47, 'text7'), (48, 'text8'), (49, 'text9'), (50, 'text10');
@@ -2286,7 +2286,7 @@
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.up.a' of SELECT #2 was resolved in SELECT #1
-Note 1003 select `test`.`up`.`a` AS `a`,`test`.`up`.`b` AS `b` from `test`.`t1` `up` where exists(select 1 from `test`.`t1` where (`test`.`t1`.`a` = `test`.`up`.`a`))
+Note 1003 select `test`.`up`.`a` AS `a`,`test`.`up`.`b` AS `b` from `test`.`t1` `up` where <expr_cache>(exists(select 1 from `test`.`t1` where (`test`.`t1`.`a` = `test`.`up`.`a`)))
drop table t1;
CREATE TABLE t1 (t1_a int);
INSERT INTO t1 VALUES (1);
@@ -2827,19 +2827,19 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = '0') and trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`)))) having (trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`))))) AS `test` from `test`.`t1`
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where ((`test`.`t2`.`flag` = '0') and trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`)))) having (trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
explain extended SELECT one,two from t1 where ROW(one,two) IN (SELECT one,two FROM t2 WHERE flag = 'N');
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00 Using where
2 SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two` from `test`.`t1` where <in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),(`test`.`t1`.`one`,`test`.`t1`.`two`) in ( <materialize> (select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = 'N') ), <primary_index_lookup>(`test`.`t1`.`one` in <temporary table> on distinct_key where ((`test`.`t1`.`one` = `materialized subselect`.`one`) and (`test`.`t1`.`two` = `materialized subselect`.`two`)))))
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two` from `test`.`t1` where <expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),(`test`.`t1`.`one`,`test`.`t1`.`two`) in ( <materialize> (select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = 'N') ), <primary_index_lookup>(`test`.`t1`.`one` in <temporary table> on distinct_key where ((`test`.`t1`.`one` = `materialized subselect`.`one`) and (`test`.`t1`.`two` = `materialized subselect`.`two`))))))
explain extended SELECT one,two,ROW(one,two) IN (SELECT one,two FROM t2 WHERE flag = '0' group by one,two) as 'test' from t1;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 ALL NULL NULL NULL NULL 8 100.00
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 9 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = '0') group by `test`.`t2`.`one`,`test`.`t2`.`two` having (trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`))) and trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`))))) AS `test` from `test`.`t1`
+Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<expr_cache>(<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one`,`test`.`t2`.`two` from `test`.`t2` where (`test`.`t2`.`flag` = '0') group by `test`.`t2`.`one`,`test`.`t2`.`two` having (trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`))) and trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`)))))) AS `test` from `test`.`t1`
DROP TABLE t1,t2;
CREATE TABLE t1 (a char(5), b char(5));
INSERT INTO t1 VALUES (NULL,'aaa'), ('aaa','aaa');
@@ -4326,13 +4326,13 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00 Using where
2 SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using temporary; Using filesort
Warnings:
-Note 1003 select 1 AS `1` from `test`.`t1` where <in_optimizer>(1,1 in ( <materialize> (select 1 from `test`.`t1` group by `test`.`t1`.`a` ), <primary_index_lookup>(1 in <temporary table> on distinct_key where ((1 = `materialized subselect`.`1`)))))
+Note 1003 select 1 AS `1` from `test`.`t1` where <expr_cache>(<in_optimizer>(1,1 in ( <materialize> (select 1 from `test`.`t1` group by `test`.`t1`.`a` ), <primary_index_lookup>(1 in <temporary table> on distinct_key where ((1 = `materialized subselect`.`1`))))))
EXPLAIN EXTENDED SELECT 1 FROM t1 WHERE 1 IN (SELECT 1 FROM t1 WHERE a > 3 GROUP BY a);
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00 Using where
2 SUBQUERY t1 ALL NULL NULL NULL NULL 2 100.00 Using where; Using temporary; Using filesort
Warnings:
-Note 1003 select 1 AS `1` from `test`.`t1` where <in_optimizer>(1,1 in ( <materialize> (select 1 from `test`.`t1` where (`test`.`t1`.`a` > 3) group by `test`.`t1`.`a` ), <primary_index_lookup>(1 in <temporary table> on distinct_key where ((1 = `materialized subselect`.`1`)))))
+Note 1003 select 1 AS `1` from `test`.`t1` where <expr_cache>(<in_optimizer>(1,1 in ( <materialize> (select 1 from `test`.`t1` where (`test`.`t1`.`a` > 3) group by `test`.`t1`.`a` ), <primary_index_lookup>(1 in <temporary table> on distinct_key where ((1 = `materialized subselect`.`1`))))))
DROP TABLE t1;
#
# Bug#45061: Incorrectly market field caused wrong result.
=== modified file 'mysql-test/r/subselect_sj.result'
--- a/mysql-test/r/subselect_sj.result 2010-07-16 08:58:24 +0000
+++ b/mysql-test/r/subselect_sj.result 2010-08-11 17:51:06 +0000
@@ -772,11 +772,11 @@
3 DEPENDENT SUBQUERY t3 ALL NULL NULL NULL NULL 4 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.t1.b' of SELECT #3 was resolved in SELECT #1
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t1`.`a` = `test`.`t2`.`c`) and <nop>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select 1 from `test`.`t3` where ((`test`.`t1`.`b` = `test`.`t3`.`e`) and (<cache>(`test`.`t2`.`d`) >= `test`.`t3`.`e`))))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t1`.`a` = `test`.`t2`.`c`) and <nop>(<expr_cache>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select 1 from `test`.`t3` where ((`test`.`t1`.`b` = `test`.`t3`.`e`) and (<cache>(`test`.`t2`.`d`) >= `test`.`t3`.`e`)))))))
show warnings;
Level Code Message
Note 1276 Field or reference 'test.t1.b' of SELECT #3 was resolved in SELECT #1
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t1`.`a` = `test`.`t2`.`c`) and <nop>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select 1 from `test`.`t3` where ((`test`.`t1`.`b` = `test`.`t3`.`e`) and (<cache>(`test`.`t2`.`d`) >= `test`.`t3`.`e`))))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t1`.`a` = `test`.`t2`.`c`) and <nop>(<expr_cache>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select 1 from `test`.`t3` where ((`test`.`t1`.`b` = `test`.`t3`.`e`) and (<cache>(`test`.`t2`.`d`) >= `test`.`t3`.`e`)))))))
select a from t1
where a in (select c from t2 where d >= some(select e from t3 where b=e));
a
=== modified file 'mysql-test/r/subselect_sj_jcl6.result'
--- a/mysql-test/r/subselect_sj_jcl6.result 2010-07-16 08:58:24 +0000
+++ b/mysql-test/r/subselect_sj_jcl6.result 2010-08-11 17:51:06 +0000
@@ -776,11 +776,11 @@
3 DEPENDENT SUBQUERY t3 ALL NULL NULL NULL NULL 4 100.00 Using where
Warnings:
Note 1276 Field or reference 'test.t1.b' of SELECT #3 was resolved in SELECT #1
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t1`.`a` = `test`.`t2`.`c`) and <nop>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select 1 from `test`.`t3` where ((`test`.`t1`.`b` = `test`.`t3`.`e`) and (<cache>(`test`.`t2`.`d`) >= `test`.`t3`.`e`))))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t1`.`a` = `test`.`t2`.`c`) and <nop>(<expr_cache>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select 1 from `test`.`t3` where ((`test`.`t1`.`b` = `test`.`t3`.`e`) and (<cache>(`test`.`t2`.`d`) >= `test`.`t3`.`e`)))))))
show warnings;
Level Code Message
Note 1276 Field or reference 'test.t1.b' of SELECT #3 was resolved in SELECT #1
-Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t1`.`a` = `test`.`t2`.`c`) and <nop>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select 1 from `test`.`t3` where ((`test`.`t1`.`b` = `test`.`t3`.`e`) and (<cache>(`test`.`t2`.`d`) >= `test`.`t3`.`e`))))))
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t1`.`a` = `test`.`t2`.`c`) and <nop>(<expr_cache>(<in_optimizer>(`test`.`t2`.`d`,<exists>(select 1 from `test`.`t3` where ((`test`.`t1`.`b` = `test`.`t3`.`e`) and (<cache>(`test`.`t2`.`d`) >= `test`.`t3`.`e`)))))))
select a from t1
where a in (select c from t2 where d >= some(select e from t3 where b=e));
a
=== modified file 'sql/item.h'
--- a/sql/item.h 2010-07-29 11:13:48 +0000
+++ b/sql/item.h 2010-08-11 17:51:06 +0000
@@ -2630,8 +2630,10 @@
virtual void print(String *str, enum_query_type query_type)
{
- /* TODO: maybe print something for EXPLAIN EXTENDED */
- return orig_item->print(str, query_type);
+ str->append(func_name());
+ str->append('(');
+ orig_item->print(str, query_type);
+ str->append(')');
}
virtual const char *full_name() const { return orig_item->full_name(); }
virtual void make_field(Send_field *field) { orig_item->make_field(field); }
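For context, the EXPLAIN EXTENDED differences above come from exactly this kind of wrapper print(): the wrapper emits its own name and parenthesizes the item it wraps, which is why the notes now show <expr_cache>(<in_optimizer>(...)). A minimal, self-contained C++ sketch of the pattern; class and member names here are illustrative, not the server's actual ones:

#include <iostream>
#include <string>

/* Illustrative only: names mirror the diff above (func_name(), orig_item),
   but these are not the real server classes. */
struct Item
{
  virtual void print(std::string *str) const = 0;
  virtual ~Item() {}
};

struct Item_literal : Item
{
  std::string text;
  explicit Item_literal(std::string t) : text(t) {}
  void print(std::string *str) const { str->append(text); }
};

struct Item_expr_cache_wrapper : Item
{
  const Item *orig_item;
  explicit Item_expr_cache_wrapper(const Item *orig) : orig_item(orig) {}
  const char *func_name() const { return "<expr_cache>"; }
  /* Same shape as the patched print(): name, '(', wrapped item, ')'. */
  void print(std::string *str) const
  {
    str->append(func_name());
    str->append("(");
    orig_item->print(str);
    str->append(")");
  }
};

int main()
{
  Item_literal inner("<in_optimizer>(`t2`.`d`, ...)");
  Item_expr_cache_wrapper wrapper(&inner);
  std::string out;
  wrapper.print(&out);
  /* Prints: <expr_cache>(<in_optimizer>(`t2`.`d`, ...)) */
  std::cout << out << "\n";
  return 0;
}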
Re: [Maria-developers] [Commits] Rev 2898: generalization of mtr to support suite.pm extensions
by Sergei Golubchik 11 Aug '10
Hi, Kristian
Would you like to look once again at the mtr patch?
I've moved it to 5.1 and added one more feature
(SUITENAME_COMBINATIONS, see below) to be able to incorporate Monty's
changes for the innodb_plugin in a sane way.
I've removed auto-creation of my.cnf options by merely mentioning them, as you
found that confusing. Instead I added an explicit OPT group, so now the sphinx
my.cnf would do
[searchd]
port=@OPT.port
[ENV]
SEARCHD_PORT=@searchd.port
which is hopefully less confusing.
Also, I've added a README file with assorted notes about what I've
discovered when looking into mtr and what I've added. It's not an
exhaustive manual yet, though.
Regards,
Sergei
Re: [Maria-developers] [Commits] bzr commit into MariaDB 5.1, with Maria 1.5:maria branch (monty:2897)
by Kristian Nielsen 10 Aug '10
Michael Widenius <michael.widenius(a)gmail.com> writes:
> #At lp:maria based on revid:monty@mysql.com-20100809170542-ewa2awm6pcoi1ipy
>
> 2897 Michael Widenius 2010-08-09
> Ignore ENOLCK errno from FreeBSD (known problem in old FreeBSD releases)
> modified:
> mysys/my_sync.c
>
> === modified file 'mysys/my_sync.c'
> --- a/mysys/my_sync.c 2010-08-09 17:05:42 +0000
> +++ b/mysys/my_sync.c 2010-08-09 17:54:58 +0000
> @@ -68,6 +68,8 @@ int my_sync(File fd, myf my_flags)
> res= fdatasync(fd);
> #elif defined(HAVE_FSYNC)
> res= fsync(fd);
> + if (res == -1 and errno == ENOLCK)
^^^
Ehm, the operator is called `&&' in C ;-)
(causes build to break for macosx in Buildbot).
- Kristian.
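For reference, a corrected fragment — a minimal sketch only, assuming (per the commit message) that ENOLCK from fsync() should simply be ignored and treated as success; `and` is an alternative token in C++ (or in C with <iso646.h>), but plain C needs `&&`:

#include <errno.h>
#include <unistd.h>

/* Sketch, not the actual my_sync.c body: sync and ignore ENOLCK, which old
   FreeBSD releases are known to return spuriously.  Treating it as success
   is an assumption based on the commit message. */
int sync_ignoring_enolck(int fd)
{
  int res= fsync(fd);
  if (res == -1 && errno == ENOLCK)   /* &&, not `and`, in plain C */
    res= 0;
  return res;
}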
My ISP went down last night around midnight. I'm not sure if it affected
some of the builds. Sorry if it did. Please let me know if you need me
to do anything with:
adutko-centos5-amd64
adutko-ultrasparc3
mariadb-brs
I just checked each of them and the buildbot status page and they all seem
to be working normally.
-Adam
Hi!
Sanja, in Aria engine in 5.2 we have the code:
else
{
soft_sync_max= lsn;
soft_need_sync= 1;
}
This is likely wrong, as soft_sync_max is supposed to be a file number
and lsn is too big to fit into soft_sync_max (which is only a uint32).
Could you please take a look at this?
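The fix posted later in this archive uses the existing LSN_FILE_NO() macro on the lsn. A minimal sketch of the idea; the exact bit layout below (file number in the high 32 bits of a 64-bit LSN) is my assumption, not taken from the Aria sources:

#include <assert.h>
#include <stdint.h>

/* Sketch only: assumes an LSN packs (file_no, offset-in-file) into 64 bits,
   with the file number in the high half.  soft_sync_max must hold just the
   file number, so only the high 32 bits are stored. */
typedef uint64_t LSN;

static uint32_t lsn_file_no(LSN lsn) { return (uint32_t)(lsn >> 32); }

int main(void)
{
  LSN lsn= ((uint64_t)7 << 32) | 0x1234u;    /* file 7, offset 0x1234 */
  uint32_t soft_sync_max= lsn_file_no(lsn);  /* fits in uint32, unlike lsn */
  assert(soft_sync_max == 7);
  return 0;
}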
Note that I also assigned the following bug in 5.2 for you to take a
look at:
#611379 Equivalent queries with Impossible where return different results
Please also try to get this one fixed (as it's marked 5.1):
#473914 mysql_client_test fail on windows x64
Regards,
Monty
[Maria-developers] Rev 2897: Fix for LP bug#611379. in file:///home/bell/maria/bzr/work-maria-5.1-lb611379/
by sanja@askmonty.org 10 Aug '10
At file:///home/bell/maria/bzr/work-maria-5.1-lb611379/
------------------------------------------------------------
revno: 2897
revision-id: sanja(a)askmonty.org-20100810030810-q290n2tcc9uc4xg3
parent: monty(a)mysql.com-20100809170542-ewa2awm6pcoi1ipy
committer: sanja(a)askmonty.org
branch nick: work-maria-5.1-lb611379
timestamp: Tue 2010-08-10 06:08:10 +0300
message:
Fix for LP bug#611379.
maybe_null/null_value flag set to TRUE for Item_sum_distinct.
=== modified file 'mysql-test/r/func_group.result'
--- a/mysql-test/r/func_group.result 2009-11-24 15:26:13 +0000
+++ b/mysql-test/r/func_group.result 2010-08-10 03:08:10 +0000
@@ -1713,4 +1713,60 @@
NULL NULL NULL NULL NULL
drop table t1;
#
+#test for LP Bug#611379
+#
+CREATE TABLE `t1` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_key` int(11) DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=30 DEFAULT CHARSET=latin1;
+INSERT INTO `t1` VALUES (10,8,'v');
+INSERT INTO `t1` VALUES (11,9,'r');
+INSERT INTO `t1` VALUES (12,9,'a');
+INSERT INTO `t1` VALUES (13,186,'m');
+INSERT INTO `t1` VALUES (14,NULL,'y');
+INSERT INTO `t1` VALUES (15,2,'j');
+INSERT INTO `t1` VALUES (16,3,'d');
+INSERT INTO `t1` VALUES (17,0,'z');
+INSERT INTO `t1` VALUES (18,133,'e');
+INSERT INTO `t1` VALUES (19,1,'h');
+INSERT INTO `t1` VALUES (20,8,'b');
+INSERT INTO `t1` VALUES (21,5,'s');
+INSERT INTO `t1` VALUES (22,5,'e');
+INSERT INTO `t1` VALUES (23,8,'j');
+INSERT INTO `t1` VALUES (24,6,'e');
+INSERT INTO `t1` VALUES (25,51,'f');
+INSERT INTO `t1` VALUES (26,4,'v');
+INSERT INTO `t1` VALUES (27,7,'x');
+INSERT INTO `t1` VALUES (28,6,'m');
+INSERT INTO `t1` VALUES (29,4,'c');
+CREATE TABLE `t2` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_key` int(11) DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=11 DEFAULT CHARSET=latin1;
+INSERT INTO `t2` VALUES (10,8,NULL);
+CREATE TABLE `t3` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_key` int(11) DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=2 DEFAULT CHARSET=latin1;
+INSERT INTO `t3` VALUES (1,7,'f');
+SELECT SUM( DISTINCT table1 .`pk` ) FROM t3 table1 STRAIGHT_JOIN ( t2 table2 JOIN t1 ON table2 .`col_varchar_key` ) ON table2 .`pk` ;
+SUM( DISTINCT table1 .`pk` )
+NULL
+SELECT * FROM ( SELECT SUM( DISTINCT table1 .`pk` ) FROM t3 table1 STRAIGHT_JOIN ( t2 table2 JOIN t1 ON table2 .`col_varchar_key` ) ON table2 .`pk` ) AS t1;
+SUM( DISTINCT table1 .`pk` )
+NULL
+drop table t1,t2,t3;
+#
End of 5.1 tests
=== modified file 'mysql-test/t/func_group.test'
--- a/mysql-test/t/func_group.test 2009-11-24 15:26:13 +0000
+++ b/mysql-test/t/func_group.test 2010-08-10 03:08:10 +0000
@@ -1082,6 +1082,61 @@
from t1 a, t1 b;
select *, f1 = f2 from t1;
drop table t1;
+
+--echo #
+--echo #test for LP Bug#611379
+--echo #
+CREATE TABLE `t1` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=30 DEFAULT CHARSET=latin1;
+INSERT INTO `t1` VALUES (10,8,'v');
+INSERT INTO `t1` VALUES (11,9,'r');
+INSERT INTO `t1` VALUES (12,9,'a');
+INSERT INTO `t1` VALUES (13,186,'m');
+INSERT INTO `t1` VALUES (14,NULL,'y');
+INSERT INTO `t1` VALUES (15,2,'j');
+INSERT INTO `t1` VALUES (16,3,'d');
+INSERT INTO `t1` VALUES (17,0,'z');
+INSERT INTO `t1` VALUES (18,133,'e');
+INSERT INTO `t1` VALUES (19,1,'h');
+INSERT INTO `t1` VALUES (20,8,'b');
+INSERT INTO `t1` VALUES (21,5,'s');
+INSERT INTO `t1` VALUES (22,5,'e');
+INSERT INTO `t1` VALUES (23,8,'j');
+INSERT INTO `t1` VALUES (24,6,'e');
+INSERT INTO `t1` VALUES (25,51,'f');
+INSERT INTO `t1` VALUES (26,4,'v');
+INSERT INTO `t1` VALUES (27,7,'x');
+INSERT INTO `t1` VALUES (28,6,'m');
+INSERT INTO `t1` VALUES (29,4,'c');
+CREATE TABLE `t2` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=11 DEFAULT CHARSET=latin1;
+INSERT INTO `t2` VALUES (10,8,NULL);
+CREATE TABLE `t3` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=2 DEFAULT CHARSET=latin1;
+INSERT INTO `t3` VALUES (1,7,'f');
+
+SELECT SUM( DISTINCT table1 .`pk` ) FROM t3 table1 STRAIGHT_JOIN ( t2 table2 JOIN t1 ON table2 .`col_varchar_key` ) ON table2 .`pk` ;
+SELECT * FROM ( SELECT SUM( DISTINCT table1 .`pk` ) FROM t3 table1 STRAIGHT_JOIN ( t2 table2 JOIN t1 ON table2 .`col_varchar_key` ) ON table2 .`pk` ) AS t1;
+drop table t1,t2,t3;
+
--echo #
--echo End of 5.1 tests
=== modified file 'sql/item_sum.cc'
--- a/sql/item_sum.cc 2010-08-02 09:01:24 +0000
+++ b/sql/item_sum.cc 2010-08-10 03:08:10 +0000
@@ -949,6 +949,7 @@
{
DBUG_ASSERT(args[0]->fixed);
+ null_value= maybe_null= TRUE;
table_field_type= args[0]->field_type();
/* Adjust tmp table type according to the chosen aggregation type */
[Maria-developers] Rev 2897: Fix for LP bug#611379. in file:///home/bell/maria/bzr/work-maria-5.1-lb611379/
by sanja@askmonty.org 09 Aug '10
At file:///home/bell/maria/bzr/work-maria-5.1-lb611379/
------------------------------------------------------------
revno: 2897
revision-id: sanja(a)askmonty.org-20100809193502-gdriohzan0t6s727
parent: monty(a)mysql.com-20100809170542-ewa2awm6pcoi1ipy
committer: sanja(a)askmonty.org
branch nick: work-maria-5.1-lb611379
timestamp: Mon 2010-08-09 22:35:02 +0300
message:
Fix for LP bug#611379.
maybe_null/null_value flag set to TRUE for Item_sum_distinct.
=== modified file 'mysql-test/r/func_group.result'
--- a/mysql-test/r/func_group.result 2009-11-24 15:26:13 +0000
+++ b/mysql-test/r/func_group.result 2010-08-09 19:35:02 +0000
@@ -1713,4 +1713,60 @@
NULL NULL NULL NULL NULL
drop table t1;
#
+#test for LP Bug#611379
+#
+CREATE TABLE `t1` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_key` int(11) DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=30 DEFAULT CHARSET=latin1;
+INSERT INTO `t1` VALUES (10,8,'v');
+INSERT INTO `t1` VALUES (11,9,'r');
+INSERT INTO `t1` VALUES (12,9,'a');
+INSERT INTO `t1` VALUES (13,186,'m');
+INSERT INTO `t1` VALUES (14,NULL,'y');
+INSERT INTO `t1` VALUES (15,2,'j');
+INSERT INTO `t1` VALUES (16,3,'d');
+INSERT INTO `t1` VALUES (17,0,'z');
+INSERT INTO `t1` VALUES (18,133,'e');
+INSERT INTO `t1` VALUES (19,1,'h');
+INSERT INTO `t1` VALUES (20,8,'b');
+INSERT INTO `t1` VALUES (21,5,'s');
+INSERT INTO `t1` VALUES (22,5,'e');
+INSERT INTO `t1` VALUES (23,8,'j');
+INSERT INTO `t1` VALUES (24,6,'e');
+INSERT INTO `t1` VALUES (25,51,'f');
+INSERT INTO `t1` VALUES (26,4,'v');
+INSERT INTO `t1` VALUES (27,7,'x');
+INSERT INTO `t1` VALUES (28,6,'m');
+INSERT INTO `t1` VALUES (29,4,'c');
+CREATE TABLE `BB` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_key` int(11) DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=11 DEFAULT CHARSET=latin1;
+INSERT INTO `BB` VALUES (10,8,NULL);
+CREATE TABLE `B` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_key` int(11) DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=2 DEFAULT CHARSET=latin1;
+INSERT INTO `B` VALUES (1,7,'f');
+SELECT SUM( DISTINCT table1 .`pk` ) FROM B table1 STRAIGHT_JOIN ( BB table2 JOIN t1 ON table2 .`col_varchar_key` ) ON table2 .`pk` ;
+SUM( DISTINCT table1 .`pk` )
+NULL
+SELECT * FROM ( SELECT SUM( DISTINCT table1 .`pk` ) FROM B table1 STRAIGHT_JOIN ( BB table2 JOIN t1 ON table2 .`col_varchar_key` ) ON table2 .`pk` ) AS t1;
+SUM( DISTINCT table1 .`pk` )
+NULL
+drop table t1;
+#
End of 5.1 tests
=== modified file 'mysql-test/t/func_group.test'
--- a/mysql-test/t/func_group.test 2009-11-24 15:26:13 +0000
+++ b/mysql-test/t/func_group.test 2010-08-09 19:35:02 +0000
@@ -1082,6 +1082,61 @@
from t1 a, t1 b;
select *, f1 = f2 from t1;
drop table t1;
+
+--echo #
+--echo #test for LP Bug#611379
+--echo #
+CREATE TABLE `t1` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=30 DEFAULT CHARSET=latin1;
+INSERT INTO `t1` VALUES (10,8,'v');
+INSERT INTO `t1` VALUES (11,9,'r');
+INSERT INTO `t1` VALUES (12,9,'a');
+INSERT INTO `t1` VALUES (13,186,'m');
+INSERT INTO `t1` VALUES (14,NULL,'y');
+INSERT INTO `t1` VALUES (15,2,'j');
+INSERT INTO `t1` VALUES (16,3,'d');
+INSERT INTO `t1` VALUES (17,0,'z');
+INSERT INTO `t1` VALUES (18,133,'e');
+INSERT INTO `t1` VALUES (19,1,'h');
+INSERT INTO `t1` VALUES (20,8,'b');
+INSERT INTO `t1` VALUES (21,5,'s');
+INSERT INTO `t1` VALUES (22,5,'e');
+INSERT INTO `t1` VALUES (23,8,'j');
+INSERT INTO `t1` VALUES (24,6,'e');
+INSERT INTO `t1` VALUES (25,51,'f');
+INSERT INTO `t1` VALUES (26,4,'v');
+INSERT INTO `t1` VALUES (27,7,'x');
+INSERT INTO `t1` VALUES (28,6,'m');
+INSERT INTO `t1` VALUES (29,4,'c');
+CREATE TABLE `BB` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=11 DEFAULT CHARSET=latin1;
+INSERT INTO `BB` VALUES (10,8,NULL);
+CREATE TABLE `B` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=2 DEFAULT CHARSET=latin1;
+INSERT INTO `B` VALUES (1,7,'f');
+
+SELECT SUM( DISTINCT table1 .`pk` ) FROM B table1 STRAIGHT_JOIN ( BB table2 JOIN t1 ON table2 .`col_varchar_key` ) ON table2 .`pk` ;
+SELECT * FROM ( SELECT SUM( DISTINCT table1 .`pk` ) FROM B table1 STRAIGHT_JOIN ( BB table2 JOIN t1 ON table2 .`col_varchar_key` ) ON table2 .`pk` ) AS t1;
+drop table t1;
+
--echo #
--echo End of 5.1 tests
=== modified file 'sql/item_sum.cc'
--- a/sql/item_sum.cc 2010-08-02 09:01:24 +0000
+++ b/sql/item_sum.cc 2010-08-09 19:35:02 +0000
@@ -949,6 +949,7 @@
{
DBUG_ASSERT(args[0]->fixed);
+ null_value= maybe_null= TRUE;
table_field_type= args[0]->field_type();
/* Adjust tmp table type according to the chosen aggregation type */
[Maria-developers] Rev 2843: Fix of soft group commit (assigned LSN instead of file number). Found by Monty. in file:///home/bell/maria/bzr/work-maria-5.2-soft_sync/
by sanja@askmonty.org 09 Aug '10
At file:///home/bell/maria/bzr/work-maria-5.2-soft_sync/
------------------------------------------------------------
revno: 2843
revision-id: sanja(a)askmonty.org-20100809124203-7lz5zukr5q7wtdyz
parent: monty(a)askmonty.org-20100807150304-1qb1s7hvv0fh9wlq
committer: sanja(a)askmonty.org
branch nick: work-maria-5.2-soft_sync
timestamp: Mon 2010-08-09 15:42:03 +0300
message:
Fix of soft group commit (assigned LSN instead of file number). Found by Monty.
=== modified file 'storage/maria/ma_loghandler.c'
--- a/storage/maria/ma_loghandler.c 2010-05-06 09:59:10 +0000
+++ b/storage/maria/ma_loghandler.c 2010-08-09 12:42:03 +0000
@@ -8073,7 +8073,7 @@
}
else
{
- soft_sync_max= lsn;
+ soft_sync_max= LSN_FILE_NO(lsn);
soft_need_sync= 1;
}
[Maria-developers] Rev 2810: Fix for LP bug#611625: Removing NULL references from subquery parameter list added. in file:///home/bell/maria/bzr/work-maria-5.3-lb611625/
by sanja@askmonty.org 09 Aug '10
At file:///home/bell/maria/bzr/work-maria-5.3-lb611625/
------------------------------------------------------------
revno: 2810
revision-id: sanja(a)askmonty.org-20100809100058-qqmlj17yajig4gwj
parent: sanja(a)askmonty.org-20100805142348-wdtwgvj5tm2si5gt
committer: sanja(a)askmonty.org
branch nick: work-maria-5.3-lb611625
timestamp: Mon 2010-08-09 13:00:58 +0300
message:
Fix for LP bug#611625: Removing NULL references from subquery parameter list added.
Incorrect limitation on number of parameters removed.
=== modified file 'mysql-test/r/subselect_cache.result'
--- a/mysql-test/r/subselect_cache.result 2010-08-05 14:23:48 +0000
+++ b/mysql-test/r/subselect_cache.result 2010-08-09 10:00:58 +0000
@@ -2985,3 +2985,201 @@
1 NULL f
drop table t1,t2,t3,t4;
set @@optimizer_switch= default;
+#launchpad BUG#611625
+CREATE TABLE `t1` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`)
+) ENGINE=MyISAM AUTO_INCREMENT=21 DEFAULT CHARSET=latin1;
+INSERT INTO `t1` VALUES (1,NULL,'w');
+INSERT INTO `t1` VALUES (2,7,'m');
+INSERT INTO `t1` VALUES (3,9,'m');
+INSERT INTO `t1` VALUES (4,7,'k');
+INSERT INTO `t1` VALUES (5,4,'r');
+INSERT INTO `t1` VALUES (6,2,'t');
+INSERT INTO `t1` VALUES (7,6,'j');
+INSERT INTO `t1` VALUES (8,8,'u');
+INSERT INTO `t1` VALUES (9,NULL,'h');
+INSERT INTO `t1` VALUES (10,5,'o');
+INSERT INTO `t1` VALUES (11,NULL,NULL);
+INSERT INTO `t1` VALUES (12,6,'k');
+INSERT INTO `t1` VALUES (13,188,'e');
+INSERT INTO `t1` VALUES (14,2,'n');
+INSERT INTO `t1` VALUES (15,1,'t');
+INSERT INTO `t1` VALUES (16,1,'c');
+INSERT INTO `t1` VALUES (17,0,'m');
+INSERT INTO `t1` VALUES (18,9,'y');
+INSERT INTO `t1` VALUES (19,NULL,'f');
+INSERT INTO `t1` VALUES (20,4,'d');
+CREATE TABLE `t3` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`)
+) ENGINE=MyISAM AUTO_INCREMENT=101 DEFAULT CHARSET=latin1;
+INSERT INTO `t3` VALUES (1,6,'r');
+INSERT INTO `t3` VALUES (2,8,'c');
+INSERT INTO `t3` VALUES (3,6,'o');
+INSERT INTO `t3` VALUES (4,6,'c');
+INSERT INTO `t3` VALUES (5,3,'d');
+INSERT INTO `t3` VALUES (6,9,'v');
+INSERT INTO `t3` VALUES (7,2,'m');
+INSERT INTO `t3` VALUES (8,1,'j');
+INSERT INTO `t3` VALUES (9,8,'f');
+INSERT INTO `t3` VALUES (10,0,'n');
+INSERT INTO `t3` VALUES (11,9,'z');
+INSERT INTO `t3` VALUES (12,8,'h');
+INSERT INTO `t3` VALUES (13,NULL,'q');
+INSERT INTO `t3` VALUES (14,0,'w');
+INSERT INTO `t3` VALUES (15,5,'z');
+INSERT INTO `t3` VALUES (16,1,'j');
+INSERT INTO `t3` VALUES (17,1,'a');
+INSERT INTO `t3` VALUES (18,6,'m');
+INSERT INTO `t3` VALUES (19,6,'n');
+INSERT INTO `t3` VALUES (20,1,'e');
+INSERT INTO `t3` VALUES (21,8,'u');
+INSERT INTO `t3` VALUES (22,1,'s');
+INSERT INTO `t3` VALUES (23,0,'u');
+INSERT INTO `t3` VALUES (24,4,'r');
+INSERT INTO `t3` VALUES (25,9,'g');
+INSERT INTO `t3` VALUES (26,8,'o');
+INSERT INTO `t3` VALUES (27,5,'w');
+INSERT INTO `t3` VALUES (28,9,'b');
+INSERT INTO `t3` VALUES (29,5,NULL);
+INSERT INTO `t3` VALUES (30,NULL,'y');
+INSERT INTO `t3` VALUES (31,NULL,'y');
+INSERT INTO `t3` VALUES (32,105,'u');
+INSERT INTO `t3` VALUES (33,0,'p');
+INSERT INTO `t3` VALUES (34,3,'s');
+INSERT INTO `t3` VALUES (35,1,'e');
+INSERT INTO `t3` VALUES (36,75,'d');
+INSERT INTO `t3` VALUES (37,9,'d');
+INSERT INTO `t3` VALUES (38,7,'c');
+INSERT INTO `t3` VALUES (39,NULL,'b');
+INSERT INTO `t3` VALUES (40,NULL,'t');
+INSERT INTO `t3` VALUES (41,4,NULL);
+INSERT INTO `t3` VALUES (42,0,'y');
+INSERT INTO `t3` VALUES (43,204,'c');
+INSERT INTO `t3` VALUES (44,0,'d');
+INSERT INTO `t3` VALUES (45,9,'x');
+INSERT INTO `t3` VALUES (46,8,'p');
+INSERT INTO `t3` VALUES (47,7,'e');
+INSERT INTO `t3` VALUES (48,8,'g');
+INSERT INTO `t3` VALUES (49,NULL,'x');
+INSERT INTO `t3` VALUES (50,6,'s');
+INSERT INTO `t3` VALUES (51,5,'e');
+INSERT INTO `t3` VALUES (52,2,'l');
+INSERT INTO `t3` VALUES (53,3,'p');
+INSERT INTO `t3` VALUES (54,7,'h');
+INSERT INTO `t3` VALUES (55,NULL,'m');
+INSERT INTO `t3` VALUES (56,145,'n');
+INSERT INTO `t3` VALUES (57,0,'v');
+INSERT INTO `t3` VALUES (58,1,'b');
+INSERT INTO `t3` VALUES (59,7,'x');
+INSERT INTO `t3` VALUES (60,3,'r');
+INSERT INTO `t3` VALUES (61,NULL,'t');
+INSERT INTO `t3` VALUES (62,2,'w');
+INSERT INTO `t3` VALUES (63,2,'w');
+INSERT INTO `t3` VALUES (64,2,'k');
+INSERT INTO `t3` VALUES (65,8,'a');
+INSERT INTO `t3` VALUES (66,6,'t');
+INSERT INTO `t3` VALUES (67,1,'z');
+INSERT INTO `t3` VALUES (68,NULL,'e');
+INSERT INTO `t3` VALUES (69,1,'q');
+INSERT INTO `t3` VALUES (70,0,'e');
+INSERT INTO `t3` VALUES (71,4,'v');
+INSERT INTO `t3` VALUES (72,1,'d');
+INSERT INTO `t3` VALUES (73,1,'u');
+INSERT INTO `t3` VALUES (74,27,'o');
+INSERT INTO `t3` VALUES (75,4,'b');
+INSERT INTO `t3` VALUES (76,6,'c');
+INSERT INTO `t3` VALUES (77,2,'q');
+INSERT INTO `t3` VALUES (78,248,NULL);
+INSERT INTO `t3` VALUES (79,NULL,'h');
+INSERT INTO `t3` VALUES (80,9,'d');
+INSERT INTO `t3` VALUES (81,75,'w');
+INSERT INTO `t3` VALUES (82,2,'m');
+INSERT INTO `t3` VALUES (83,9,'i');
+INSERT INTO `t3` VALUES (84,4,'w');
+INSERT INTO `t3` VALUES (85,0,'f');
+INSERT INTO `t3` VALUES (86,0,'k');
+INSERT INTO `t3` VALUES (87,1,'v');
+INSERT INTO `t3` VALUES (88,119,'c');
+INSERT INTO `t3` VALUES (89,1,'y');
+INSERT INTO `t3` VALUES (90,7,'h');
+INSERT INTO `t3` VALUES (91,2,NULL);
+INSERT INTO `t3` VALUES (92,7,'t');
+INSERT INTO `t3` VALUES (93,2,'l');
+INSERT INTO `t3` VALUES (94,6,'a');
+INSERT INTO `t3` VALUES (95,4,'r');
+INSERT INTO `t3` VALUES (96,5,'s');
+INSERT INTO `t3` VALUES (97,7,'z');
+INSERT INTO `t3` VALUES (98,1,'j');
+INSERT INTO `t3` VALUES (99,7,'c');
+INSERT INTO `t3` VALUES (100,2,'f');
+CREATE TABLE `t2` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`)
+) ENGINE=MyISAM AUTO_INCREMENT=11 DEFAULT CHARSET=latin1;
+INSERT INTO `t2` VALUES (10,8,NULL);
+set optimizer_switch='subquery_cache=off';
+SELECT (
+SELECT `col_int_nokey`
+FROM t3
+WHERE table1 .`col_varchar_nokey` ) field13
+FROM t2 table1 JOIN t1 table2 ON table2 .`pk`
+ORDER BY field13;
+field13
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+set optimizer_switch='subquery_cache=on';
+SELECT
+(SELECT `col_int_nokey`
+ FROM t3
+WHERE table1 .`col_varchar_nokey` ) field13
+FROM t2 table1 JOIN t1 table2 ON table2 .`pk`
+ORDER BY field13;
+field13
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+drop table t1,t2,t3;
+set @@optimizer_switch= default;
=== modified file 'mysql-test/t/subselect_cache.test'
--- a/mysql-test/t/subselect_cache.test 2010-08-05 14:23:48 +0000
+++ b/mysql-test/t/subselect_cache.test 2010-08-09 10:00:58 +0000
@@ -1306,3 +1306,167 @@
drop table t1,t2,t3,t4;
set @@optimizer_switch= default;
+
+#
+--echo #launchpad BUG#611625
+#
+CREATE TABLE `t1` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`)
+) ENGINE=MyISAM AUTO_INCREMENT=21 DEFAULT CHARSET=latin1;
+INSERT INTO `t1` VALUES (1,NULL,'w');
+INSERT INTO `t1` VALUES (2,7,'m');
+INSERT INTO `t1` VALUES (3,9,'m');
+INSERT INTO `t1` VALUES (4,7,'k');
+INSERT INTO `t1` VALUES (5,4,'r');
+INSERT INTO `t1` VALUES (6,2,'t');
+INSERT INTO `t1` VALUES (7,6,'j');
+INSERT INTO `t1` VALUES (8,8,'u');
+INSERT INTO `t1` VALUES (9,NULL,'h');
+INSERT INTO `t1` VALUES (10,5,'o');
+INSERT INTO `t1` VALUES (11,NULL,NULL);
+INSERT INTO `t1` VALUES (12,6,'k');
+INSERT INTO `t1` VALUES (13,188,'e');
+INSERT INTO `t1` VALUES (14,2,'n');
+INSERT INTO `t1` VALUES (15,1,'t');
+INSERT INTO `t1` VALUES (16,1,'c');
+INSERT INTO `t1` VALUES (17,0,'m');
+INSERT INTO `t1` VALUES (18,9,'y');
+INSERT INTO `t1` VALUES (19,NULL,'f');
+INSERT INTO `t1` VALUES (20,4,'d');
+CREATE TABLE `t3` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`)
+) ENGINE=MyISAM AUTO_INCREMENT=101 DEFAULT CHARSET=latin1;
+INSERT INTO `t3` VALUES (1,6,'r');
+INSERT INTO `t3` VALUES (2,8,'c');
+INSERT INTO `t3` VALUES (3,6,'o');
+INSERT INTO `t3` VALUES (4,6,'c');
+INSERT INTO `t3` VALUES (5,3,'d');
+INSERT INTO `t3` VALUES (6,9,'v');
+INSERT INTO `t3` VALUES (7,2,'m');
+INSERT INTO `t3` VALUES (8,1,'j');
+INSERT INTO `t3` VALUES (9,8,'f');
+INSERT INTO `t3` VALUES (10,0,'n');
+INSERT INTO `t3` VALUES (11,9,'z');
+INSERT INTO `t3` VALUES (12,8,'h');
+INSERT INTO `t3` VALUES (13,NULL,'q');
+INSERT INTO `t3` VALUES (14,0,'w');
+INSERT INTO `t3` VALUES (15,5,'z');
+INSERT INTO `t3` VALUES (16,1,'j');
+INSERT INTO `t3` VALUES (17,1,'a');
+INSERT INTO `t3` VALUES (18,6,'m');
+INSERT INTO `t3` VALUES (19,6,'n');
+INSERT INTO `t3` VALUES (20,1,'e');
+INSERT INTO `t3` VALUES (21,8,'u');
+INSERT INTO `t3` VALUES (22,1,'s');
+INSERT INTO `t3` VALUES (23,0,'u');
+INSERT INTO `t3` VALUES (24,4,'r');
+INSERT INTO `t3` VALUES (25,9,'g');
+INSERT INTO `t3` VALUES (26,8,'o');
+INSERT INTO `t3` VALUES (27,5,'w');
+INSERT INTO `t3` VALUES (28,9,'b');
+INSERT INTO `t3` VALUES (29,5,NULL);
+INSERT INTO `t3` VALUES (30,NULL,'y');
+INSERT INTO `t3` VALUES (31,NULL,'y');
+INSERT INTO `t3` VALUES (32,105,'u');
+INSERT INTO `t3` VALUES (33,0,'p');
+INSERT INTO `t3` VALUES (34,3,'s');
+INSERT INTO `t3` VALUES (35,1,'e');
+INSERT INTO `t3` VALUES (36,75,'d');
+INSERT INTO `t3` VALUES (37,9,'d');
+INSERT INTO `t3` VALUES (38,7,'c');
+INSERT INTO `t3` VALUES (39,NULL,'b');
+INSERT INTO `t3` VALUES (40,NULL,'t');
+INSERT INTO `t3` VALUES (41,4,NULL);
+INSERT INTO `t3` VALUES (42,0,'y');
+INSERT INTO `t3` VALUES (43,204,'c');
+INSERT INTO `t3` VALUES (44,0,'d');
+INSERT INTO `t3` VALUES (45,9,'x');
+INSERT INTO `t3` VALUES (46,8,'p');
+INSERT INTO `t3` VALUES (47,7,'e');
+INSERT INTO `t3` VALUES (48,8,'g');
+INSERT INTO `t3` VALUES (49,NULL,'x');
+INSERT INTO `t3` VALUES (50,6,'s');
+INSERT INTO `t3` VALUES (51,5,'e');
+INSERT INTO `t3` VALUES (52,2,'l');
+INSERT INTO `t3` VALUES (53,3,'p');
+INSERT INTO `t3` VALUES (54,7,'h');
+INSERT INTO `t3` VALUES (55,NULL,'m');
+INSERT INTO `t3` VALUES (56,145,'n');
+INSERT INTO `t3` VALUES (57,0,'v');
+INSERT INTO `t3` VALUES (58,1,'b');
+INSERT INTO `t3` VALUES (59,7,'x');
+INSERT INTO `t3` VALUES (60,3,'r');
+INSERT INTO `t3` VALUES (61,NULL,'t');
+INSERT INTO `t3` VALUES (62,2,'w');
+INSERT INTO `t3` VALUES (63,2,'w');
+INSERT INTO `t3` VALUES (64,2,'k');
+INSERT INTO `t3` VALUES (65,8,'a');
+INSERT INTO `t3` VALUES (66,6,'t');
+INSERT INTO `t3` VALUES (67,1,'z');
+INSERT INTO `t3` VALUES (68,NULL,'e');
+INSERT INTO `t3` VALUES (69,1,'q');
+INSERT INTO `t3` VALUES (70,0,'e');
+INSERT INTO `t3` VALUES (71,4,'v');
+INSERT INTO `t3` VALUES (72,1,'d');
+INSERT INTO `t3` VALUES (73,1,'u');
+INSERT INTO `t3` VALUES (74,27,'o');
+INSERT INTO `t3` VALUES (75,4,'b');
+INSERT INTO `t3` VALUES (76,6,'c');
+INSERT INTO `t3` VALUES (77,2,'q');
+INSERT INTO `t3` VALUES (78,248,NULL);
+INSERT INTO `t3` VALUES (79,NULL,'h');
+INSERT INTO `t3` VALUES (80,9,'d');
+INSERT INTO `t3` VALUES (81,75,'w');
+INSERT INTO `t3` VALUES (82,2,'m');
+INSERT INTO `t3` VALUES (83,9,'i');
+INSERT INTO `t3` VALUES (84,4,'w');
+INSERT INTO `t3` VALUES (85,0,'f');
+INSERT INTO `t3` VALUES (86,0,'k');
+INSERT INTO `t3` VALUES (87,1,'v');
+INSERT INTO `t3` VALUES (88,119,'c');
+INSERT INTO `t3` VALUES (89,1,'y');
+INSERT INTO `t3` VALUES (90,7,'h');
+INSERT INTO `t3` VALUES (91,2,NULL);
+INSERT INTO `t3` VALUES (92,7,'t');
+INSERT INTO `t3` VALUES (93,2,'l');
+INSERT INTO `t3` VALUES (94,6,'a');
+INSERT INTO `t3` VALUES (95,4,'r');
+INSERT INTO `t3` VALUES (96,5,'s');
+INSERT INTO `t3` VALUES (97,7,'z');
+INSERT INTO `t3` VALUES (98,1,'j');
+INSERT INTO `t3` VALUES (99,7,'c');
+INSERT INTO `t3` VALUES (100,2,'f');
+CREATE TABLE `t2` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`)
+) ENGINE=MyISAM AUTO_INCREMENT=11 DEFAULT CHARSET=latin1;
+INSERT INTO `t2` VALUES (10,8,NULL);
+
+set optimizer_switch='subquery_cache=off';
+
+SELECT (
+SELECT `col_int_nokey`
+FROM t3
+WHERE table1 .`col_varchar_nokey` ) field13
+FROM t2 table1 JOIN t1 table2 ON table2 .`pk`
+ORDER BY field13;
+
+set optimizer_switch='subquery_cache=on';
+
+SELECT
+ (SELECT `col_int_nokey`
+ FROM t3
+ WHERE table1 .`col_varchar_nokey` ) field13
+FROM t2 table1 JOIN t1 table2 ON table2 .`pk`
+ORDER BY field13;
+
+drop table t1,t2,t3;
+set @@optimizer_switch= default;
=== modified file 'sql/sql_class.h'
--- a/sql/sql_class.h 2010-07-16 11:02:15 +0000
+++ b/sql/sql_class.h 2010-08-09 10:00:58 +0000
@@ -62,9 +62,9 @@
class Item_iterator_ref_list: public Item_iterator
{
- List_iterator_fast<Item*> list;
+ List_iterator<Item*> list;
public:
- Item_iterator_ref_list(List_iterator_fast<Item*> &arg_list):
+ Item_iterator_ref_list(List_iterator<Item*> &arg_list):
list(arg_list) {}
void open() { list.rewind(); }
Item *next() { return *(list++); }
=== modified file 'sql/sql_expression_cache.cc'
--- a/sql/sql_expression_cache.cc 2010-07-30 04:16:58 +0000
+++ b/sql/sql_expression_cache.cc 2010-08-09 10:00:58 +0000
@@ -96,22 +96,39 @@
void Expression_cache_tmptable::init()
{
- List_iterator_fast<Item*> li(*list);
+ List_iterator<Item*> li(*list);
Item_iterator_ref_list it(li);
Item **item;
uint field_counter;
DBUG_ENTER("Expression_cache_tmptable::init");
DBUG_ASSERT(!inited);
inited= TRUE;
-
- if (!(ULONGLONG_MAX >> (list->elements + 1)))
- {
- DBUG_PRINT("info", ("Too many dependencies"));
+ cache_table= NULL;
+
+ while ((item= li++))
+ {
+ DBUG_ASSERT(item);
+ if (*item)
+ {
+ DBUG_ASSERT((*item)->fixed);
+ items.push_back((*item));
+ }
+ else
+ {
+ /*
+ This is possible when optimizer already executed this subquery and
+ optimized out the condition predicate.
+ */
+ li.remove();
+ }
+ }
+
+ if (list->elements == 0)
+ {
+ DBUG_PRINT("info", ("All parameters were removed by optimizer."));
DBUG_VOID_RETURN;
}
- cache_table= NULL;
-
cache_table_param.init();
/* dependent items and result */
cache_table_param.field_count= list->elements + 1;
@@ -119,13 +136,6 @@
cache_table_param.skip_create_table= 1;
cache_table= NULL;
- while ((item= li++))
- {
- DBUG_ASSERT(item);
- DBUG_ASSERT(*item);
- DBUG_ASSERT((*item)->fixed);
- items.push_back((*item));
- }
items.push_front(val);
if (!(cache_table= create_tmp_table(table_thd, &cache_table_param,
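The shape of the change above, in a tiny self-contained sketch (not the server code): walk the parameter list, drop the NULL references the optimizer already eliminated, and only build the cache if anything is left; the old ULONGLONG_MAX shift check, which effectively capped the number of parameters, is simply gone:

#include <iostream>
#include <list>

int main()
{
  int a= 1, b= 2;
  /* Stand-ins for the Item** parameter references; two were optimized away. */
  std::list<int*> params= { &a, nullptr, &b, nullptr };

  /* Remove NULL references in place, like the li.remove() loop in the diff. */
  for (std::list<int*>::iterator it= params.begin(); it != params.end(); )
  {
    if (*it == nullptr)
      it= params.erase(it);
    else
      ++it;
  }

  if (params.empty())
  {
    std::cout << "All parameters were removed by the optimizer; skip the cache.\n";
    return 0;
  }
  std::cout << "Building the cache over " << params.size() << " parameter(s).\n";
  return 0;
}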
Yes, flexible indexing for the dynamic columns will be key for performance, for
example for date or numeric range scans, and perhaps multipart keys. The
single-table self-join method I'm currently using is indexed on both
id/fieldname/value and value/fieldname/id to help make searching and logical
joining between dynamic tables perform, with additional indexes using the
numeric and date equivalents for the string values where applicable. I agree
there are optimal use cases for both schemaless and schemafull, and I use a
hybrid model with both.
On Aug 7, 2010, at 8:27 AM, Michael Widenius <monty(a)askmonty.org> wrote:
Hi!
"Arjen" == Arjen Lentz <arjen(a)openquery.com> writes:
Arjen> Hi Dan
Arjen> On 05/08/2010, at 12:28 AM, Dan Meany wrote:
Hi, I was wondering if anyone here had any opinions about this
proposed fix for the 64 table join limit (originally by Sergei)
http://lists.mysql.com/internals/38013
With the increased use of schemaless and hybrid, partly schemaless
application designs, this would be a big enhancement. In my own
application, being able to make minor, typed, virtual column
additions for different SAAS clients in virtual tables at runtime
without changing the physical schema or regenerating application
binding layers is pretty priceless. But currently we are limited to
64 virtual columns by this mySQL join limit.
<cut>
Arjen> - if you really want to be schemaless, don't use an RDBMS. RDBMS are
Arjen> structured, for a reason. It helps in a multitude of ways. Sometimes
Arjen> for whatever reason you can decide that structure is not what you
Arjen> need, but then using an RDBMS (apart from the familiar convenient
Arjen> interface) makes no sense.
We have plans to add dynamic columns into MariaDB soon.
This will probably happen in 5.3, assuming there is some interest in
this feature.
With dynamic columns you can store 'any' number of columns in a blob.
In effect, each row in the database may have its own set of columns.
One will be able to trivially access and update data in the dynamic
columns and also add/drop columns inside the blob.
Example usage:
SELECT column_get(blob, 1, varchar(100)) from table_with_dynamic;
UPDATE table_with_dynamic SET blob=column_add(blob, 2, "hello") where id=1;
UPDATE table_with_dynamic SET blob=column_del(blob,4) where id=5;
Note that 'column_add()' will replace any old value stored under the given column_id.
Future ideas:
- Allow indexing with name instead of numbers. When this is done we can drop the
type as part of column_get()
- Allow indexing on dynamic fields.
You can find a full specification of this feature here:
http://askmonty.org/worklog/Server-BackLog/?tid=34
Regards,
Monty
_______________________________________________
Mailing list: https://launchpad.net/~maria-developers
Post to : maria-developers(a)lists.launchpad.net
Unsubscribe : https://launchpad.net/~maria-developers
More help : https://help.launchpad.net/ListHelp
Re: [Maria-developers] [Commits] bzr commit into MariaDB 5.1, with Maria 1.5:maria branch (monty:2877)
by Kristian Nielsen 07 Aug '10
Michael Widenius <monty(a)askmonty.org> writes:
> + &find_innodb_plugin;
> + if (&find_innodb_plugin)
Why use such weird syntax for calling a function?
Why not the normal find_innodb_plugin() ?
- Kristian.
Hi,
I've renamed everything user-visible (error messages, variables,
command-line utilities, etc).
Do you want me to rename just *everything*? File names, function names,
etc? I mean, all internal stuff. It wouldn't add much value, and would
complicate the merges. On the other hand, it'd be consistent.
Regards,
Sergei
Re: [Maria-developers] [Commits] Rev 2835: added a sphinxse test suite in http://bazaar.launchpad.net/~maria-captains/maria/5.2/
by Kristian Nielsen 06 Aug '10
<serg(a)askmonty.org> writes:
> === added file 'mysql-test/suite/sphinx/sphinx.test'
> --- a/mysql-test/suite/sphinx/sphinx.test 1970-01-01 00:00:00 +0000
> +++ b/mysql-test/suite/sphinx/sphinx.test 2010-08-06 13:11:37 +0000
> @@ -0,0 +1,30 @@
> +if (`SELECT "$HAVE_SPHINX" < "0000.0009.0009"`) {
> + skip Need Sphinx version 0.9.9 or later;
> +}
> +if (`SELECT "$HA_SPHINX_SO" = ""`) {
> + skip No SphinxSE plugin;
> +}
> +
> +eval INSTALL PLUGIN sphinx SONAME "$HA_SPHINX_SO";
Should there be an UNINSTALL PLUGIN at the end of the test case?
Otherwise I suppose following test cases could end up running with the plugin
loaded (or not, at random), potentially causing .result differences.
- Kristian.
Hi,
Kristian, I've done the changes - would you mind reviewing the patch?
Somehow bzr didn't send the email automatically - it could not connect to
the server, so I'll attach the output of bzr bundle to this email.
If you don't have time to review it (or don't want to :),
please remove yourself from the "reviewer" field in WL#127.
Thanks!
Regards,
Sergei
P.S. there are three patches attached - three bundles.
The first one fixes ndb. As ndb is hacked into mtr and is all over the
place, I could not change mtr without touching ndb-related pieces,
and I needed to make sure that I didn't break them. I found out that ndb
doesn't even build without my changes, so I fixed that. I think we
need to build ndb on some buildbot slaves, just to make sure we don't
break it. Or remove it completely - as it's clearly not used, and not
expected to be used; ndb users should use the telco trees.
The second contains the mtr changes.
The third adds the sphinx suite.
[Maria-developers] Rev 2809: The test files renamed to have uniform name. in file:///home/bell/maria/bzr/work-maria-5.3-rename/
by sanja@askmonty.org 05 Aug '10
At file:///home/bell/maria/bzr/work-maria-5.3-rename/
------------------------------------------------------------
revno: 2809
revision-id: sanja(a)askmonty.org-20100805142348-wdtwgvj5tm2si5gt
parent: sanja(a)askmonty.org-20100730041658-2naumadh26t93e3g
committer: sanja(a)askmonty.org
branch nick: work-maria-5.3-rename
timestamp: Thu 2010-08-05 17:23:48 +0300
message:
The test files renamed to have uniform name.
=== renamed file 'mysql-test/r/subquery_cache.result' => 'mysql-test/r/subselect_cache.result'
=== renamed file 'mysql-test/t/subquery_cache.test' => 'mysql-test/t/subselect_cache.test'
[Maria-developers] WL#129 New (by Sergei): Rename " Maria" to " Aria"
by worklog-noreply@askmonty.org 05 Aug '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Rename "Maria" to "Aria"
CREATION DATE..: Thu, 05 Aug 2010, 14:02
SUPERVISOR.....: Sergei
IMPLEMENTOR....: Sergei
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 129 (http://askmonty.org/worklog/?tid=129)
VERSION........: Server-5.2
STATUS.........: Assigned
PRIORITY.......: 90
WORKED HOURS...: 0
ESTIMATE.......: 16 (hours remain)
ORIG. ESTIMATE.: 16
PROGRESS NOTES:
DESCRIPTION:
The engine Maria was renamed to Aria. The code needs to be changed to reflect that.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)
[Maria-developers] WL#128 New (by Igor): Implement Block Nested Loop Hash Join
by worklog-noreply@askmonty.org 04 Aug '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Implement Block Nested Loop Hash Join
CREATION DATE..: Wed, 04 Aug 2010, 18:56
SUPERVISOR.....: Monty
IMPLEMENTOR....: Igor
COPIES TO......: Igor Mneptok Monty Psergey Sanja Sergei Timour
CATEGORY.......: Server-BackLog
TASK ID........: 128 (http://askmonty.org/worklog/?tid=128)
VERSION........: Server-5.3
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
DESCRIPTION:
The Block Nested Loop Hash (BNLH) Join Algorithm (also known as the Classic Hash
Join Algorithm) can be applied when the join condition contains conjunctive
equality predicates over some attributes of the joined tables.
The algorithm employs the same logical schema as the Block Nested Loop Join
Algorithm.
A portion of the rows of the left join operand that fits into a preallocated
join buffer is put into the buffer. A hash table over the equi-join attributes
of the rows in the buffer is built. Then the table of the second operand is
scanned, and for each row from this table the matching rows in the join buffer
are found using the hash table. These actions are repeated until no more rows
can be supplied by the first operand.
The algorithm requires as many scans of the second table as the number of times
the join buffer is refilled.
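A compact, self-contained C++ sketch of that cycle (illustrative only, not the server's join-buffer code): buffer a chunk of left-operand rows, build a hash table on the equi-join attribute, then scan the right operand once per refill and probe:

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

struct Row { int key; std::string payload; };

/* One scan of `right` per refill of the join buffer, as described above. */
static void block_hash_join(const std::vector<Row> &left,
                            const std::vector<Row> &right,
                            std::size_t buffer_rows)
{
  for (std::size_t start= 0; start < left.size(); start+= buffer_rows)
  {
    /* Build phase: hash the buffered portion of the left operand. */
    std::unordered_multimap<int, const Row*> hash;
    std::size_t end= std::min(start + buffer_rows, left.size());
    for (std::size_t i= start; i < end; i++)
      hash.emplace(left[i].key, &left[i]);

    /* Probe phase: scan the second operand and look up matches. */
    for (std::size_t j= 0; j < right.size(); j++)
    {
      auto range= hash.equal_range(right[j].key);
      for (auto it= range.first; it != range.second; ++it)
        std::cout << it->second->payload << " joins " << right[j].payload << "\n";
    }
  }
}

int main()
{
  std::vector<Row> t1= { {1, "l1"}, {2, "l2"}, {1, "l3"}, {3, "l4"} };
  std::vector<Row> t2= { {1, "r1"}, {3, "r2"} };
  block_hash_join(t1, t2, 2);   /* two refills => two scans of t2 */
  return 0;
}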
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)
[Maria-developers] Rev 2836: Check of maria engine presence added. in file:///home/bell/maria/bzr/work-maria-5.2-lb607147/
by sanja@askmonty.org 04 Aug '10
At file:///home/bell/maria/bzr/work-maria-5.2-lb607147/
------------------------------------------------------------
revno: 2836
revision-id: sanja(a)askmonty.org-20100804094351-8yyx0m06vi4pr9fj
parent: sanja(a)askmonty.org-20100803094925-fpuj52qvdkkw5994
committer: sanja(a)askmonty.org
branch nick: work-maria-5.2-lb607147
timestamp: Wed 2010-08-04 12:43:51 +0300
message:
Check of maria engine presence added.
Comment fixed.
=== modified file 'mysql-test/suite/vcol/t/vcol_handler_maria.test'
--- a/mysql-test/suite/vcol/t/vcol_handler_maria.test 2010-08-03 09:49:25 +0000
+++ b/mysql-test/suite/vcol/t/vcol_handler_maria.test 2010-08-04 09:43:51 +0000
@@ -14,8 +14,10 @@
# Change: #
################################################################################
+--source include/have_maria.inc
+
#
-# NOTE: PLEASE DO NOT ADD NOT MYISAM SPECIFIC TESTCASES HERE !
+# NOTE: PLEASE DO NOT ADD NOT MARIA SPECIFIC TESTCASES HERE !
# TESTCASES WHICH MUST BE APPLIED TO ALL STORAGE ENGINES MUST BE ADDED IN
# THE SOURCED FILES ONLY.
#
[Maria-developers] Rev 2835: Fix for launchpad bug #612894 in file:///home/bell/maria/bzr/work-maria-5.2-lb607147/
by sanja@askmonty.org 03 Aug '10
At file:///home/bell/maria/bzr/work-maria-5.2-lb607147/
------------------------------------------------------------
revno: 2835
revision-id: sanja(a)askmonty.org-20100803094925-fpuj52qvdkkw5994
parent: igor(a)askmonty.org-20100728190938-esx94q58hw3v5jue
committer: sanja(a)askmonty.org
branch nick: work-maria-5.2-lb607147
timestamp: Tue 2010-08-03 12:49:25 +0300
message:
Fix for launchpad bug #612894
Support of virtual columns added to maria engine.
=== added file 'mysql-test/suite/vcol/r/vcol_handler_maria.result'
--- a/mysql-test/suite/vcol/r/vcol_handler_maria.result 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/vcol/r/vcol_handler_maria.result 2010-08-03 09:49:25 +0000
@@ -0,0 +1,76 @@
+SET @@session.storage_engine = 'maria';
+create table t1 (a int,
+b int as (-a),
+c int as (-a) persistent,
+d char(1),
+index (a),
+index (c));
+insert into t1 (a,d) values (4,'a'), (2,'b'), (1,'c'), (3,'d');
+select * from t1;
+a b c d
+4 -4 -4 a
+2 -2 -2 b
+1 -1 -1 c
+3 -3 -3 d
+# HANDLER tbl_name OPEN
+handler t1 open;
+# HANDLER tbl_name READ non-vcol_index_name > (value1,value2,...)
+handler t1 read a > (2);
+a b c d
+3 -3 -3 d
+# HANDLER tbl_name READ non-vcol_index_name > (value1,value2,...) WHERE non-vcol_field=expr
+handler t1 read a > (2) where d='c';
+a b c d
+# HANDLER tbl_name READ vcol_index_name = (value1,value2,...)
+handler t1 read c = (-2);
+a b c d
+2 -2 -2 b
+# HANDLER tbl_name READ vcol_index_name = (value1,value2,...) WHERE non-vcol_field=expr
+handler t1 read c = (-2) where d='c';
+a b c d
+# HANDLER tbl_name READ non-vcol_index_name > (value1,value2,...) WHERE vcol_field=expr
+handler t1 read a > (2) where b=-3 && c=-3;
+a b c d
+3 -3 -3 d
+# HANDLER tbl_name READ vcol_index_name <= (value1,value2,...)
+handler t1 read c <= (-2);
+a b c d
+2 -2 -2 b
+# HANDLER tbl_name READ vcol_index_name > (value1,value2,...) WHERE vcol_field=expr
+handler t1 read c <= (-2) where b=-3;
+a b c d
+3 -3 -3 d
+# HANDLER tbl_name READ vcol_index_name FIRST
+handler t1 read c first;
+a b c d
+4 -4 -4 a
+# HANDLER tbl_name READ vcol_index_name NEXT
+handler t1 read c next;
+a b c d
+3 -3 -3 d
+# HANDLER tbl_name READ vcol_index_name PREV
+handler t1 read c prev;
+a b c d
+4 -4 -4 a
+# HANDLER tbl_name READ vcol_index_name LAST
+handler t1 read c last;
+a b c d
+1 -1 -1 c
+# HANDLER tbl_name READ FIRST where non-vcol=expr
+handler t1 read FIRST where a >= 2;
+a b c d
+4 -4 -4 a
+# HANDLER tbl_name READ FIRST where vcol=expr
+handler t1 read FIRST where b >= -2;
+a b c d
+2 -2 -2 b
+# HANDLER tbl_name READ NEXT where non-vcol=expr
+handler t1 read NEXT where d='c';
+a b c d
+1 -1 -1 c
+# HANDLER tbl_name READ NEXT where vcol=expr
+handler t1 read NEXT where b<=-4;
+a b c d
+# HANDLER tbl_name CLOSE
+handler t1 close;
+drop table t1;
=== added file 'mysql-test/suite/vcol/t/vcol_handler_maria.test'
--- a/mysql-test/suite/vcol/t/vcol_handler_maria.test 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/vcol/t/vcol_handler_maria.test 2010-08-03 09:49:25 +0000
@@ -0,0 +1,50 @@
+################################################################################
+# t/vcol_handler_maria.test #
+# #
+# Purpose: #
+# Testing HANDLER.
+# #
+# Maria branch #
+# #
+#------------------------------------------------------------------------------#
+# Original Author: Andrey Zhakov #
+# Original Date: 2008-09-04 #
+# Change Author: #
+# Change Date: #
+# Change: #
+################################################################################
+
+#
+# NOTE: PLEASE DO NOT ADD NOT MYISAM SPECIFIC TESTCASES HERE !
+# TESTCASES WHICH MUST BE APPLIED TO ALL STORAGE ENGINES MUST BE ADDED IN
+# THE SOURCED FILES ONLY.
+#
+
+#------------------------------------------------------------------------------#
+# General not engine specific settings and requirements
+--source suite/vcol/inc/vcol_init_vars.pre
+
+#------------------------------------------------------------------------------#
+# Cleanup
+--source suite/vcol/inc/vcol_cleanup.inc
+
+#------------------------------------------------------------------------------#
+# Engine specific settings and requirements
+
+##### Storage engine to be tested
+# Set the session storage engine
+eval SET @@session.storage_engine = 'maria';
+
+##### Workarounds for known open engine specific bugs
+# none
+
+#------------------------------------------------------------------------------#
+# Execute the tests to be applied to all storage engines
+--source suite/vcol/inc/vcol_handler.inc
+
+#------------------------------------------------------------------------------#
+# Execute storage engine specific tests
+
+#------------------------------------------------------------------------------#
+# Cleanup
+--source suite/vcol/inc/vcol_cleanup.inc
=== modified file 'storage/maria/ha_maria.cc'
--- a/storage/maria/ha_maria.cc 2010-07-25 15:09:21 +0000
+++ b/storage/maria/ha_maria.cc 2010-08-03 09:49:25 +0000
@@ -468,7 +468,7 @@
recinfo_pos= recinfo;
create_info->null_bytes= table_arg->s->null_bytes;
- while (recpos < (uint) share->reclength)
+ while (recpos < (uint) share->stored_rec_length)
{
Field **field, *found= 0;
minpos= share->reclength;
=== modified file 'storage/maria/ha_maria.h'
--- a/storage/maria/ha_maria.h 2010-07-23 20:37:21 +0000
+++ b/storage/maria/ha_maria.h 2010-08-03 09:49:25 +0000
@@ -148,6 +148,7 @@
int assign_to_keycache(THD * thd, HA_CHECK_OPT * check_opt);
int preload_keys(THD * thd, HA_CHECK_OPT * check_opt);
bool check_if_incompatible_data(HA_CREATE_INFO * info, uint table_changes);
+ bool check_if_supported_virtual_columns(void) { return TRUE;}
#ifdef HAVE_REPLICATION
int dump(THD * thd, int fd);
int net_read_dump(NET * net);
Hi all,
I hand-added Antony's fix for 571200 that I had started, and put it up at
https://code.launchpad.net/~capttofu/maria/bug_571200/+merge/31606 for
review. I'll be adding his other fixes for each bug as well.
Thanks Antony!
Patrick
[Maria-developers] Rev 2809: Fix for launchpad bug#611625: Removing NULL references from subquery parameter list added. in file:///home/bell/maria/bzr/work-maria-5.3-lb611625/
by sanja@askmonty.org 02 Aug '10
At file:///home/bell/maria/bzr/work-maria-5.3-lb611625/
------------------------------------------------------------
revno: 2809
revision-id: sanja(a)askmonty.org-20100802055612-se9olthiaazi5xju
parent: sanja(a)askmonty.org-20100730041658-2naumadh26t93e3g
committer: sanja(a)askmonty.org
branch nick: work-maria-5.3-lb611625
timestamp: Mon 2010-08-02 08:56:12 +0300
message:
Fix for launchpad bug#611625: Removing NULL references from subquery parameter list added.
Incorrect limitation on number of parameters removed.
=== modified file 'mysql-test/r/subquery_cache.result'
--- a/mysql-test/r/subquery_cache.result 2010-07-30 04:16:58 +0000
+++ b/mysql-test/r/subquery_cache.result 2010-08-02 05:56:12 +0000
@@ -2985,3 +2985,201 @@
1 NULL f
drop table t1,t2,t3,t4;
set @@optimizer_switch= default;
+#launchpad BUG#611625
+CREATE TABLE `t1` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`)
+) ENGINE=MyISAM AUTO_INCREMENT=21 DEFAULT CHARSET=latin1;
+INSERT INTO `t1` VALUES (1,NULL,'w');
+INSERT INTO `t1` VALUES (2,7,'m');
+INSERT INTO `t1` VALUES (3,9,'m');
+INSERT INTO `t1` VALUES (4,7,'k');
+INSERT INTO `t1` VALUES (5,4,'r');
+INSERT INTO `t1` VALUES (6,2,'t');
+INSERT INTO `t1` VALUES (7,6,'j');
+INSERT INTO `t1` VALUES (8,8,'u');
+INSERT INTO `t1` VALUES (9,NULL,'h');
+INSERT INTO `t1` VALUES (10,5,'o');
+INSERT INTO `t1` VALUES (11,NULL,NULL);
+INSERT INTO `t1` VALUES (12,6,'k');
+INSERT INTO `t1` VALUES (13,188,'e');
+INSERT INTO `t1` VALUES (14,2,'n');
+INSERT INTO `t1` VALUES (15,1,'t');
+INSERT INTO `t1` VALUES (16,1,'c');
+INSERT INTO `t1` VALUES (17,0,'m');
+INSERT INTO `t1` VALUES (18,9,'y');
+INSERT INTO `t1` VALUES (19,NULL,'f');
+INSERT INTO `t1` VALUES (20,4,'d');
+CREATE TABLE `t3` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`)
+) ENGINE=MyISAM AUTO_INCREMENT=101 DEFAULT CHARSET=latin1;
+INSERT INTO `t3` VALUES (1,6,'r');
+INSERT INTO `t3` VALUES (2,8,'c');
+INSERT INTO `t3` VALUES (3,6,'o');
+INSERT INTO `t3` VALUES (4,6,'c');
+INSERT INTO `t3` VALUES (5,3,'d');
+INSERT INTO `t3` VALUES (6,9,'v');
+INSERT INTO `t3` VALUES (7,2,'m');
+INSERT INTO `t3` VALUES (8,1,'j');
+INSERT INTO `t3` VALUES (9,8,'f');
+INSERT INTO `t3` VALUES (10,0,'n');
+INSERT INTO `t3` VALUES (11,9,'z');
+INSERT INTO `t3` VALUES (12,8,'h');
+INSERT INTO `t3` VALUES (13,NULL,'q');
+INSERT INTO `t3` VALUES (14,0,'w');
+INSERT INTO `t3` VALUES (15,5,'z');
+INSERT INTO `t3` VALUES (16,1,'j');
+INSERT INTO `t3` VALUES (17,1,'a');
+INSERT INTO `t3` VALUES (18,6,'m');
+INSERT INTO `t3` VALUES (19,6,'n');
+INSERT INTO `t3` VALUES (20,1,'e');
+INSERT INTO `t3` VALUES (21,8,'u');
+INSERT INTO `t3` VALUES (22,1,'s');
+INSERT INTO `t3` VALUES (23,0,'u');
+INSERT INTO `t3` VALUES (24,4,'r');
+INSERT INTO `t3` VALUES (25,9,'g');
+INSERT INTO `t3` VALUES (26,8,'o');
+INSERT INTO `t3` VALUES (27,5,'w');
+INSERT INTO `t3` VALUES (28,9,'b');
+INSERT INTO `t3` VALUES (29,5,NULL);
+INSERT INTO `t3` VALUES (30,NULL,'y');
+INSERT INTO `t3` VALUES (31,NULL,'y');
+INSERT INTO `t3` VALUES (32,105,'u');
+INSERT INTO `t3` VALUES (33,0,'p');
+INSERT INTO `t3` VALUES (34,3,'s');
+INSERT INTO `t3` VALUES (35,1,'e');
+INSERT INTO `t3` VALUES (36,75,'d');
+INSERT INTO `t3` VALUES (37,9,'d');
+INSERT INTO `t3` VALUES (38,7,'c');
+INSERT INTO `t3` VALUES (39,NULL,'b');
+INSERT INTO `t3` VALUES (40,NULL,'t');
+INSERT INTO `t3` VALUES (41,4,NULL);
+INSERT INTO `t3` VALUES (42,0,'y');
+INSERT INTO `t3` VALUES (43,204,'c');
+INSERT INTO `t3` VALUES (44,0,'d');
+INSERT INTO `t3` VALUES (45,9,'x');
+INSERT INTO `t3` VALUES (46,8,'p');
+INSERT INTO `t3` VALUES (47,7,'e');
+INSERT INTO `t3` VALUES (48,8,'g');
+INSERT INTO `t3` VALUES (49,NULL,'x');
+INSERT INTO `t3` VALUES (50,6,'s');
+INSERT INTO `t3` VALUES (51,5,'e');
+INSERT INTO `t3` VALUES (52,2,'l');
+INSERT INTO `t3` VALUES (53,3,'p');
+INSERT INTO `t3` VALUES (54,7,'h');
+INSERT INTO `t3` VALUES (55,NULL,'m');
+INSERT INTO `t3` VALUES (56,145,'n');
+INSERT INTO `t3` VALUES (57,0,'v');
+INSERT INTO `t3` VALUES (58,1,'b');
+INSERT INTO `t3` VALUES (59,7,'x');
+INSERT INTO `t3` VALUES (60,3,'r');
+INSERT INTO `t3` VALUES (61,NULL,'t');
+INSERT INTO `t3` VALUES (62,2,'w');
+INSERT INTO `t3` VALUES (63,2,'w');
+INSERT INTO `t3` VALUES (64,2,'k');
+INSERT INTO `t3` VALUES (65,8,'a');
+INSERT INTO `t3` VALUES (66,6,'t');
+INSERT INTO `t3` VALUES (67,1,'z');
+INSERT INTO `t3` VALUES (68,NULL,'e');
+INSERT INTO `t3` VALUES (69,1,'q');
+INSERT INTO `t3` VALUES (70,0,'e');
+INSERT INTO `t3` VALUES (71,4,'v');
+INSERT INTO `t3` VALUES (72,1,'d');
+INSERT INTO `t3` VALUES (73,1,'u');
+INSERT INTO `t3` VALUES (74,27,'o');
+INSERT INTO `t3` VALUES (75,4,'b');
+INSERT INTO `t3` VALUES (76,6,'c');
+INSERT INTO `t3` VALUES (77,2,'q');
+INSERT INTO `t3` VALUES (78,248,NULL);
+INSERT INTO `t3` VALUES (79,NULL,'h');
+INSERT INTO `t3` VALUES (80,9,'d');
+INSERT INTO `t3` VALUES (81,75,'w');
+INSERT INTO `t3` VALUES (82,2,'m');
+INSERT INTO `t3` VALUES (83,9,'i');
+INSERT INTO `t3` VALUES (84,4,'w');
+INSERT INTO `t3` VALUES (85,0,'f');
+INSERT INTO `t3` VALUES (86,0,'k');
+INSERT INTO `t3` VALUES (87,1,'v');
+INSERT INTO `t3` VALUES (88,119,'c');
+INSERT INTO `t3` VALUES (89,1,'y');
+INSERT INTO `t3` VALUES (90,7,'h');
+INSERT INTO `t3` VALUES (91,2,NULL);
+INSERT INTO `t3` VALUES (92,7,'t');
+INSERT INTO `t3` VALUES (93,2,'l');
+INSERT INTO `t3` VALUES (94,6,'a');
+INSERT INTO `t3` VALUES (95,4,'r');
+INSERT INTO `t3` VALUES (96,5,'s');
+INSERT INTO `t3` VALUES (97,7,'z');
+INSERT INTO `t3` VALUES (98,1,'j');
+INSERT INTO `t3` VALUES (99,7,'c');
+INSERT INTO `t3` VALUES (100,2,'f');
+CREATE TABLE `t2` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`)
+) ENGINE=MyISAM AUTO_INCREMENT=11 DEFAULT CHARSET=latin1;
+INSERT INTO `t2` VALUES (10,8,NULL);
+set optimizer_switch='subquery_cache=off';
+SELECT (
+SELECT `col_int_nokey`
+FROM t3
+WHERE table1 .`col_varchar_nokey` ) field13
+FROM t2 table1 JOIN t1 table2 ON table2 .`pk`
+ORDER BY field13;
+field13
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+set optimizer_switch='subquery_cache=on';
+SELECT
+(SELECT `col_int_nokey`
+ FROM t3
+WHERE table1 .`col_varchar_nokey` ) field13
+FROM t2 table1 JOIN t1 table2 ON table2 .`pk`
+ORDER BY field13;
+field13
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+NULL
+drop table t1,t2,t3;
+set @@optimizer_switch= default;
=== modified file 'mysql-test/t/subquery_cache.test'
--- a/mysql-test/t/subquery_cache.test 2010-07-30 04:16:58 +0000
+++ b/mysql-test/t/subquery_cache.test 2010-08-02 05:56:12 +0000
@@ -1306,3 +1306,167 @@
drop table t1,t2,t3,t4;
set @@optimizer_switch= default;
+
+#
+--echo #launchpad BUG#611625
+#
+CREATE TABLE `t1` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`)
+) ENGINE=MyISAM AUTO_INCREMENT=21 DEFAULT CHARSET=latin1;
+INSERT INTO `t1` VALUES (1,NULL,'w');
+INSERT INTO `t1` VALUES (2,7,'m');
+INSERT INTO `t1` VALUES (3,9,'m');
+INSERT INTO `t1` VALUES (4,7,'k');
+INSERT INTO `t1` VALUES (5,4,'r');
+INSERT INTO `t1` VALUES (6,2,'t');
+INSERT INTO `t1` VALUES (7,6,'j');
+INSERT INTO `t1` VALUES (8,8,'u');
+INSERT INTO `t1` VALUES (9,NULL,'h');
+INSERT INTO `t1` VALUES (10,5,'o');
+INSERT INTO `t1` VALUES (11,NULL,NULL);
+INSERT INTO `t1` VALUES (12,6,'k');
+INSERT INTO `t1` VALUES (13,188,'e');
+INSERT INTO `t1` VALUES (14,2,'n');
+INSERT INTO `t1` VALUES (15,1,'t');
+INSERT INTO `t1` VALUES (16,1,'c');
+INSERT INTO `t1` VALUES (17,0,'m');
+INSERT INTO `t1` VALUES (18,9,'y');
+INSERT INTO `t1` VALUES (19,NULL,'f');
+INSERT INTO `t1` VALUES (20,4,'d');
+CREATE TABLE `t3` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`)
+) ENGINE=MyISAM AUTO_INCREMENT=101 DEFAULT CHARSET=latin1;
+INSERT INTO `t3` VALUES (1,6,'r');
+INSERT INTO `t3` VALUES (2,8,'c');
+INSERT INTO `t3` VALUES (3,6,'o');
+INSERT INTO `t3` VALUES (4,6,'c');
+INSERT INTO `t3` VALUES (5,3,'d');
+INSERT INTO `t3` VALUES (6,9,'v');
+INSERT INTO `t3` VALUES (7,2,'m');
+INSERT INTO `t3` VALUES (8,1,'j');
+INSERT INTO `t3` VALUES (9,8,'f');
+INSERT INTO `t3` VALUES (10,0,'n');
+INSERT INTO `t3` VALUES (11,9,'z');
+INSERT INTO `t3` VALUES (12,8,'h');
+INSERT INTO `t3` VALUES (13,NULL,'q');
+INSERT INTO `t3` VALUES (14,0,'w');
+INSERT INTO `t3` VALUES (15,5,'z');
+INSERT INTO `t3` VALUES (16,1,'j');
+INSERT INTO `t3` VALUES (17,1,'a');
+INSERT INTO `t3` VALUES (18,6,'m');
+INSERT INTO `t3` VALUES (19,6,'n');
+INSERT INTO `t3` VALUES (20,1,'e');
+INSERT INTO `t3` VALUES (21,8,'u');
+INSERT INTO `t3` VALUES (22,1,'s');
+INSERT INTO `t3` VALUES (23,0,'u');
+INSERT INTO `t3` VALUES (24,4,'r');
+INSERT INTO `t3` VALUES (25,9,'g');
+INSERT INTO `t3` VALUES (26,8,'o');
+INSERT INTO `t3` VALUES (27,5,'w');
+INSERT INTO `t3` VALUES (28,9,'b');
+INSERT INTO `t3` VALUES (29,5,NULL);
+INSERT INTO `t3` VALUES (30,NULL,'y');
+INSERT INTO `t3` VALUES (31,NULL,'y');
+INSERT INTO `t3` VALUES (32,105,'u');
+INSERT INTO `t3` VALUES (33,0,'p');
+INSERT INTO `t3` VALUES (34,3,'s');
+INSERT INTO `t3` VALUES (35,1,'e');
+INSERT INTO `t3` VALUES (36,75,'d');
+INSERT INTO `t3` VALUES (37,9,'d');
+INSERT INTO `t3` VALUES (38,7,'c');
+INSERT INTO `t3` VALUES (39,NULL,'b');
+INSERT INTO `t3` VALUES (40,NULL,'t');
+INSERT INTO `t3` VALUES (41,4,NULL);
+INSERT INTO `t3` VALUES (42,0,'y');
+INSERT INTO `t3` VALUES (43,204,'c');
+INSERT INTO `t3` VALUES (44,0,'d');
+INSERT INTO `t3` VALUES (45,9,'x');
+INSERT INTO `t3` VALUES (46,8,'p');
+INSERT INTO `t3` VALUES (47,7,'e');
+INSERT INTO `t3` VALUES (48,8,'g');
+INSERT INTO `t3` VALUES (49,NULL,'x');
+INSERT INTO `t3` VALUES (50,6,'s');
+INSERT INTO `t3` VALUES (51,5,'e');
+INSERT INTO `t3` VALUES (52,2,'l');
+INSERT INTO `t3` VALUES (53,3,'p');
+INSERT INTO `t3` VALUES (54,7,'h');
+INSERT INTO `t3` VALUES (55,NULL,'m');
+INSERT INTO `t3` VALUES (56,145,'n');
+INSERT INTO `t3` VALUES (57,0,'v');
+INSERT INTO `t3` VALUES (58,1,'b');
+INSERT INTO `t3` VALUES (59,7,'x');
+INSERT INTO `t3` VALUES (60,3,'r');
+INSERT INTO `t3` VALUES (61,NULL,'t');
+INSERT INTO `t3` VALUES (62,2,'w');
+INSERT INTO `t3` VALUES (63,2,'w');
+INSERT INTO `t3` VALUES (64,2,'k');
+INSERT INTO `t3` VALUES (65,8,'a');
+INSERT INTO `t3` VALUES (66,6,'t');
+INSERT INTO `t3` VALUES (67,1,'z');
+INSERT INTO `t3` VALUES (68,NULL,'e');
+INSERT INTO `t3` VALUES (69,1,'q');
+INSERT INTO `t3` VALUES (70,0,'e');
+INSERT INTO `t3` VALUES (71,4,'v');
+INSERT INTO `t3` VALUES (72,1,'d');
+INSERT INTO `t3` VALUES (73,1,'u');
+INSERT INTO `t3` VALUES (74,27,'o');
+INSERT INTO `t3` VALUES (75,4,'b');
+INSERT INTO `t3` VALUES (76,6,'c');
+INSERT INTO `t3` VALUES (77,2,'q');
+INSERT INTO `t3` VALUES (78,248,NULL);
+INSERT INTO `t3` VALUES (79,NULL,'h');
+INSERT INTO `t3` VALUES (80,9,'d');
+INSERT INTO `t3` VALUES (81,75,'w');
+INSERT INTO `t3` VALUES (82,2,'m');
+INSERT INTO `t3` VALUES (83,9,'i');
+INSERT INTO `t3` VALUES (84,4,'w');
+INSERT INTO `t3` VALUES (85,0,'f');
+INSERT INTO `t3` VALUES (86,0,'k');
+INSERT INTO `t3` VALUES (87,1,'v');
+INSERT INTO `t3` VALUES (88,119,'c');
+INSERT INTO `t3` VALUES (89,1,'y');
+INSERT INTO `t3` VALUES (90,7,'h');
+INSERT INTO `t3` VALUES (91,2,NULL);
+INSERT INTO `t3` VALUES (92,7,'t');
+INSERT INTO `t3` VALUES (93,2,'l');
+INSERT INTO `t3` VALUES (94,6,'a');
+INSERT INTO `t3` VALUES (95,4,'r');
+INSERT INTO `t3` VALUES (96,5,'s');
+INSERT INTO `t3` VALUES (97,7,'z');
+INSERT INTO `t3` VALUES (98,1,'j');
+INSERT INTO `t3` VALUES (99,7,'c');
+INSERT INTO `t3` VALUES (100,2,'f');
+CREATE TABLE `t2` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`)
+) ENGINE=MyISAM AUTO_INCREMENT=11 DEFAULT CHARSET=latin1;
+INSERT INTO `t2` VALUES (10,8,NULL);
+
+set optimizer_switch='subquery_cache=off';
+
+SELECT (
+SELECT `col_int_nokey`
+FROM t3
+WHERE table1 .`col_varchar_nokey` ) field13
+FROM t2 table1 JOIN t1 table2 ON table2 .`pk`
+ORDER BY field13;
+
+set optimizer_switch='subquery_cache=on';
+
+SELECT
+ (SELECT `col_int_nokey`
+ FROM t3
+ WHERE table1 .`col_varchar_nokey` ) field13
+FROM t2 table1 JOIN t1 table2 ON table2 .`pk`
+ORDER BY field13;
+
+drop table t1,t2,t3;
+set @@optimizer_switch= default;
=== modified file 'sql/sql_class.h'
--- a/sql/sql_class.h 2010-07-16 11:02:15 +0000
+++ b/sql/sql_class.h 2010-08-02 05:56:12 +0000
@@ -62,9 +62,9 @@
class Item_iterator_ref_list: public Item_iterator
{
- List_iterator_fast<Item*> list;
+ List_iterator<Item*> list;
public:
- Item_iterator_ref_list(List_iterator_fast<Item*> &arg_list):
+ Item_iterator_ref_list(List_iterator<Item*> &arg_list):
list(arg_list) {}
void open() { list.rewind(); }
Item *next() { return *(list++); }
=== modified file 'sql/sql_expression_cache.cc'
--- a/sql/sql_expression_cache.cc 2010-07-30 04:16:58 +0000
+++ b/sql/sql_expression_cache.cc 2010-08-02 05:56:12 +0000
@@ -96,22 +96,39 @@
void Expression_cache_tmptable::init()
{
- List_iterator_fast<Item*> li(*list);
+ List_iterator<Item*> li(*list);
Item_iterator_ref_list it(li);
Item **item;
uint field_counter;
DBUG_ENTER("Expression_cache_tmptable::init");
DBUG_ASSERT(!inited);
inited= TRUE;
-
- if (!(ULONGLONG_MAX >> (list->elements + 1)))
- {
- DBUG_PRINT("info", ("Too many dependencies"));
+ cache_table= NULL;
+
+ while ((item= li++))
+ {
+ DBUG_ASSERT(item);
+ if (*item)
+ {
+ DBUG_ASSERT((*item)->fixed);
+ items.push_back((*item));
+ }
+ else
+ {
+      /*
+        This is possible when the optimizer has already executed this subquery
+        and optimized out a condition predicate. See launchpad bug#611625.
+      */
+ li.remove();
+ }
+ }
+
+ if (list->elements == 0)
+ {
+    DBUG_PRINT("info", ("All parameters were removed by the optimizer."));
DBUG_VOID_RETURN;
}
- cache_table= NULL;
-
cache_table_param.init();
/* dependent items and result */
cache_table_param.field_count= list->elements + 1;
@@ -119,13 +136,6 @@
cache_table_param.skip_create_table= 1;
cache_table= NULL;
- while ((item= li++))
- {
- DBUG_ASSERT(item);
- DBUG_ASSERT(*item);
- DBUG_ASSERT((*item)->fixed);
- items.push_back((*item));
- }
items.push_front(val);
if (!(cache_table= create_tmp_table(table_thd, &cache_table_param,
[Maria-developers] Rev 2808: Fix for launchpad bug#609043 in file:///home/bell/maria/bzr/work-maria-5.3-lb609043/
by sanja@askmonty.org 30 Jul '10
30 Jul '10
At file:///home/bell/maria/bzr/work-maria-5.3-lb609043/
------------------------------------------------------------
revno: 2808
revision-id: sanja(a)askmonty.org-20100730041658-2naumadh26t93e3g
parent: sanja(a)askmonty.org-20100729111348-jjp89wlvs3kg0fqq
committer: sanja(a)askmonty.org
branch nick: work-maria-5.3-lb609043
timestamp: Fri 2010-07-30 07:16:58 +0300
message:
Fix for launchpad bug#609043
Removed the indirect reference from the equalities used for the cache index
lookup. We should use a direct reference, because query optimization may
optimize out a condition predicate, and if the outer reference is the only
element of that predicate, the indirect reference becomes NULL.
We can resolve the indirect reference correctly in
Expression_cache_tmptable::make_equalities because it is called before the
cached subquery is optimized.
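
For readers less familiar with the expression-cache internals, the failure mode
can be illustrated with a tiny standalone program (plain C++, not server code;
every name in it is invented for illustration). The cache's Item_ref kept a
pointer to the optimizer's slot (an Item**); when the optimizer later removes
the only predicate that used the outer reference, that slot is cleared and the
indirect reference dereferences NULL. The value captured directly at
make_equalities() time keeps pointing at the resolved item, which is exactly
what the one-line change in the diff below switches to.

  // Standalone sketch, assuming nothing about the real classes: Item, slot,
  // indirect and direct below are invented names for illustration only.
  #include <cassert>
  #include <cstdio>

  struct Item { int value; };

  int main()
  {
    Item outer_ref = { 42 };   // the resolved outer reference
    Item *slot= &outer_ref;    // the slot the condition predicate points through

    Item **indirect= &slot;    // what Item_ref(cn, ref, "", "", FALSE) kept
    Item *direct= slot;        // what the fixed code keeps: the value of *ref

    // Later, the optimizer removes the only predicate using the reference
    // and clears the slot:
    slot= NULL;

    assert(*indirect == NULL);                        // indirect path is dead
    printf("direct item value: %d\n", direct->value); // direct path still valid
    return 0;
  }
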
=== modified file 'mysql-test/r/subquery_cache.result'
--- a/mysql-test/r/subquery_cache.result 2010-07-29 11:13:48 +0000
+++ b/mysql-test/r/subquery_cache.result 2010-07-30 04:16:58 +0000
@@ -2881,3 +2881,107 @@
field1 field2 field3 field4 field5 field6 field7 field8 field9 field10
drop table t1,t2,t3,t4,t5;
set @@optimizer_switch= default;
+#launchpad BUG#609043
+CREATE TABLE `t1` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_int_key` int(11) DEFAULT NULL,
+`col_date_key` date DEFAULT NULL,
+`col_date_nokey` date DEFAULT NULL,
+`col_time_key` time DEFAULT NULL,
+`col_time_nokey` time DEFAULT NULL,
+`col_datetime_key` datetime DEFAULT NULL,
+`col_datetime_nokey` datetime DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=21 DEFAULT CHARSET=latin1;
+INSERT INTO `t1` VALUES (19,NULL,6,'2004-08-20','2004-08-20','05:03:03','05:03:03','2007-04-19 00:19:53','2007-04-19 00:19:53','f','f');
+INSERT INTO `t1` VALUES (20,4,2,'1900-01-01','1900-01-01','18:38:59','18:38:59','1900-01-01 00:00:00','1900-01-01 00:00:00','d','d');
+CREATE TABLE `t2` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_int_key` int(11) DEFAULT NULL,
+`col_date_key` date DEFAULT NULL,
+`col_date_nokey` date DEFAULT NULL,
+`col_time_key` time DEFAULT NULL,
+`col_time_nokey` time DEFAULT NULL,
+`col_datetime_key` datetime DEFAULT NULL,
+`col_datetime_nokey` datetime DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_date_key` (`col_date_key`),
+KEY `col_time_key` (`col_time_key`),
+KEY `col_datetime_key` (`col_datetime_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=30 DEFAULT CHARSET=latin1;
+CREATE TABLE `t3` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_int_key` int(11) DEFAULT NULL,
+`col_date_key` date DEFAULT NULL,
+`col_date_nokey` date DEFAULT NULL,
+`col_time_key` time DEFAULT NULL,
+`col_time_nokey` time DEFAULT NULL,
+`col_datetime_key` datetime DEFAULT NULL,
+`col_datetime_nokey` datetime DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_date_key` (`col_date_key`),
+KEY `col_time_key` (`col_time_key`),
+KEY `col_datetime_key` (`col_datetime_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=2 DEFAULT CHARSET=latin1;
+CREATE TABLE `t4` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_int_key` int(11) DEFAULT NULL,
+`col_date_key` date DEFAULT NULL,
+`col_date_nokey` date DEFAULT NULL,
+`col_time_key` time DEFAULT NULL,
+`col_time_nokey` time DEFAULT NULL,
+`col_datetime_key` datetime DEFAULT NULL,
+`col_datetime_nokey` datetime DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_date_key` (`col_date_key`),
+KEY `col_time_key` (`col_time_key`),
+KEY `col_datetime_key` (`col_datetime_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=101 DEFAULT CHARSET=latin1;
+INSERT INTO `t4` VALUES (100,2,5,'2001-07-26','2001-07-26','11:49:25','11:49:25','2007-04-25 05:08:49','2007-04-25 05:08:49','f','f');
+SET @@optimizer_switch = 'subquery_cache=off';
+/* cache is off */ SELECT COUNT( DISTINCT table2 .`col_int_key` ) , (
+SELECT SUBQUERY2_t1 .`col_int_key`
+FROM t3 SUBQUERY2_t1 JOIN t2 ON SUBQUERY2_t1 .`col_int_key`
+WHERE table1 .`col_varchar_key` ) , table2 .`col_varchar_nokey` field10
+FROM t4 table1 JOIN ( t1 table2 STRAIGHT_JOIN t1 table3 ON table2 .`pk` ) ON table3 .`col_varchar_key` = table2 .`col_varchar_key`
+GROUP BY field10 ;
+COUNT( DISTINCT table2 .`col_int_key` ) (
+SELECT SUBQUERY2_t1 .`col_int_key`
+FROM t3 SUBQUERY2_t1 JOIN t2 ON SUBQUERY2_t1 .`col_int_key`
+WHERE table1 .`col_varchar_key` ) field10
+1 NULL d
+1 NULL f
+SET @@optimizer_switch = 'subquery_cache=on';
+/* cache is on */ SELECT COUNT( DISTINCT table2 .`col_int_key` ) , (
+SELECT SUBQUERY2_t1 .`col_int_key`
+FROM t3 SUBQUERY2_t1 JOIN t2 ON SUBQUERY2_t1 .`col_int_key`
+WHERE table1 .`col_varchar_key` ) , table2 .`col_varchar_nokey` field10
+FROM t4 table1 JOIN ( t1 table2 STRAIGHT_JOIN t1 table3 ON table2 .`pk` ) ON table3 .`col_varchar_key` = table2 .`col_varchar_key`
+GROUP BY field10 ;
+COUNT( DISTINCT table2 .`col_int_key` ) (
+SELECT SUBQUERY2_t1 .`col_int_key`
+FROM t3 SUBQUERY2_t1 JOIN t2 ON SUBQUERY2_t1 .`col_int_key`
+WHERE table1 .`col_varchar_key` ) field10
+1 NULL d
+1 NULL f
+drop table t1,t2,t3,t4;
+set @@optimizer_switch= default;
=== modified file 'mysql-test/t/subquery_cache.test'
--- a/mysql-test/t/subquery_cache.test 2010-07-29 11:13:48 +0000
+++ b/mysql-test/t/subquery_cache.test 2010-07-30 04:16:58 +0000
@@ -1202,3 +1202,107 @@
drop table t1,t2,t3,t4,t5;
set @@optimizer_switch= default;
+
+
+#
+--echo #launchpad BUG#609043
+#
+CREATE TABLE `t1` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_date_key` date DEFAULT NULL,
+ `col_date_nokey` date DEFAULT NULL,
+ `col_time_key` time DEFAULT NULL,
+ `col_time_nokey` time DEFAULT NULL,
+ `col_datetime_key` datetime DEFAULT NULL,
+ `col_datetime_nokey` datetime DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=21 DEFAULT CHARSET=latin1;
+INSERT INTO `t1` VALUES (19,NULL,6,'2004-08-20','2004-08-20','05:03:03','05:03:03','2007-04-19 00:19:53','2007-04-19 00:19:53','f','f');
+INSERT INTO `t1` VALUES (20,4,2,'1900-01-01','1900-01-01','18:38:59','18:38:59','1900-01-01 00:00:00','1900-01-01 00:00:00','d','d');
+
+CREATE TABLE `t2` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_date_key` date DEFAULT NULL,
+ `col_date_nokey` date DEFAULT NULL,
+ `col_time_key` time DEFAULT NULL,
+ `col_time_nokey` time DEFAULT NULL,
+ `col_datetime_key` datetime DEFAULT NULL,
+ `col_datetime_nokey` datetime DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_date_key` (`col_date_key`),
+ KEY `col_time_key` (`col_time_key`),
+ KEY `col_datetime_key` (`col_datetime_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=30 DEFAULT CHARSET=latin1;
+
+CREATE TABLE `t3` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_date_key` date DEFAULT NULL,
+ `col_date_nokey` date DEFAULT NULL,
+ `col_time_key` time DEFAULT NULL,
+ `col_time_nokey` time DEFAULT NULL,
+ `col_datetime_key` datetime DEFAULT NULL,
+ `col_datetime_nokey` datetime DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_date_key` (`col_date_key`),
+ KEY `col_time_key` (`col_time_key`),
+ KEY `col_datetime_key` (`col_datetime_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=2 DEFAULT CHARSET=latin1;
+
+CREATE TABLE `t4` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_date_key` date DEFAULT NULL,
+ `col_date_nokey` date DEFAULT NULL,
+ `col_time_key` time DEFAULT NULL,
+ `col_time_nokey` time DEFAULT NULL,
+ `col_datetime_key` datetime DEFAULT NULL,
+ `col_datetime_nokey` datetime DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_date_key` (`col_date_key`),
+ KEY `col_time_key` (`col_time_key`),
+ KEY `col_datetime_key` (`col_datetime_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=101 DEFAULT CHARSET=latin1;
+INSERT INTO `t4` VALUES (100,2,5,'2001-07-26','2001-07-26','11:49:25','11:49:25','2007-04-25 05:08:49','2007-04-25 05:08:49','f','f');
+
+SET @@optimizer_switch = 'subquery_cache=off';
+
+/* cache is off */ SELECT COUNT( DISTINCT table2 .`col_int_key` ) , (
+SELECT SUBQUERY2_t1 .`col_int_key`
+FROM t3 SUBQUERY2_t1 JOIN t2 ON SUBQUERY2_t1 .`col_int_key`
+WHERE table1 .`col_varchar_key` ) , table2 .`col_varchar_nokey` field10
+FROM t4 table1 JOIN ( t1 table2 STRAIGHT_JOIN t1 table3 ON table2 .`pk` ) ON table3 .`col_varchar_key` = table2 .`col_varchar_key`
+GROUP BY field10 ;
+
+SET @@optimizer_switch = 'subquery_cache=on';
+
+/* cache is on */ SELECT COUNT( DISTINCT table2 .`col_int_key` ) , (
+SELECT SUBQUERY2_t1 .`col_int_key`
+FROM t3 SUBQUERY2_t1 JOIN t2 ON SUBQUERY2_t1 .`col_int_key`
+WHERE table1 .`col_varchar_key` ) , table2 .`col_varchar_nokey` field10
+FROM t4 table1 JOIN ( t1 table2 STRAIGHT_JOIN t1 table3 ON table2 .`pk` ) ON table3 .`col_varchar_key` = table2 .`col_varchar_key`
+GROUP BY field10 ;
+
+drop table t1,t2,t3,t4;
+set @@optimizer_switch= default;
=== modified file 'sql/sql_expression_cache.cc'
--- a/sql/sql_expression_cache.cc 2010-07-10 10:37:30 +0000
+++ b/sql/sql_expression_cache.cc 2010-07-30 04:16:58 +0000
@@ -41,7 +41,6 @@
List<Item> args;
List_iterator_fast<Item*> li(*list);
Item **ref;
- Name_resolution_context *cn= NULL;
DBUG_ENTER("Expression_cache_tmptable::make_equalities");
for (uint i= 1 /* skip result field */; (ref= li++); i++)
@@ -58,14 +57,7 @@
fld->type() == MYSQL_TYPE_NEWDECIMAL ||
fld->type() == MYSQL_TYPE_DECIMAL)
{
- if (!cn)
- {
- // dummy resolution context
- cn= new Name_resolution_context();
- cn->init();
- }
- args.push_front(new Item_func_eq(new Item_ref(cn, ref, "", "", FALSE),
- new Item_field(fld)));
+ args.push_front(new Item_func_eq(*ref, new Item_field(fld)));
}
}
if (args.elements == 1)
[Maria-developers] Rev 2807: Fix for launchpad bug#609043 in file:///home/bell/maria/bzr/work-maria-5.3-lb609043/
by sanja@askmonty.org 29 Jul '10
29 Jul '10
At file:///home/bell/maria/bzr/work-maria-5.3-lb609043/
------------------------------------------------------------
revno: 2807
revision-id: sanja(a)askmonty.org-20100729164449-r66iqeuva2z0d8o8
parent: timour(a)askmonty.org-20100723082500-kwqzzvuv62nw412k
committer: sanja(a)askmonty.org
branch nick: work-maria-5.3-lb609043
timestamp: Thu 2010-07-29 19:44:49 +0300
message:
Fix for launchpad bug#609043
Removed the indirect reference from the equalities used for the cache index lookup.
We should use a direct reference, because query optimization can optimize out a condition, and if the outer reference is the only element of that condition, the indirect reference becomes NULL.
We can resolve the indirect reference correctly in Expression_cache_tmptable::make_equalities because it is called before optimization of the cached subquery.
=== modified file 'mysql-test/r/subquery_cache.result'
--- a/mysql-test/r/subquery_cache.result 2010-07-10 10:37:30 +0000
+++ b/mysql-test/r/subquery_cache.result 2010-07-29 16:44:49 +0000
@@ -1838,3 +1838,107 @@
Handler_read_rnd_next 27
drop table t0,t1,t2;
set optimizer_switch='default';
+# launchpad BUG#609043
+CREATE TABLE `t1` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_int_key` int(11) DEFAULT NULL,
+`col_date_key` date DEFAULT NULL,
+`col_date_nokey` date DEFAULT NULL,
+`col_time_key` time DEFAULT NULL,
+`col_time_nokey` time DEFAULT NULL,
+`col_datetime_key` datetime DEFAULT NULL,
+`col_datetime_nokey` datetime DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=21 DEFAULT CHARSET=latin1;
+INSERT INTO `t1` VALUES (19,NULL,6,'2004-08-20','2004-08-20','05:03:03','05:03:03','2007-04-19 00:19:53','2007-04-19 00:19:53','f','f');
+INSERT INTO `t1` VALUES (20,4,2,'1900-01-01','1900-01-01','18:38:59','18:38:59','1900-01-01 00:00:00','1900-01-01 00:00:00','d','d');
+CREATE TABLE `t2` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_int_key` int(11) DEFAULT NULL,
+`col_date_key` date DEFAULT NULL,
+`col_date_nokey` date DEFAULT NULL,
+`col_time_key` time DEFAULT NULL,
+`col_time_nokey` time DEFAULT NULL,
+`col_datetime_key` datetime DEFAULT NULL,
+`col_datetime_nokey` datetime DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_date_key` (`col_date_key`),
+KEY `col_time_key` (`col_time_key`),
+KEY `col_datetime_key` (`col_datetime_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=30 DEFAULT CHARSET=latin1;
+CREATE TABLE `t3` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_int_key` int(11) DEFAULT NULL,
+`col_date_key` date DEFAULT NULL,
+`col_date_nokey` date DEFAULT NULL,
+`col_time_key` time DEFAULT NULL,
+`col_time_nokey` time DEFAULT NULL,
+`col_datetime_key` datetime DEFAULT NULL,
+`col_datetime_nokey` datetime DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_date_key` (`col_date_key`),
+KEY `col_time_key` (`col_time_key`),
+KEY `col_datetime_key` (`col_datetime_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=2 DEFAULT CHARSET=latin1;
+CREATE TABLE `t4` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_int_key` int(11) DEFAULT NULL,
+`col_date_key` date DEFAULT NULL,
+`col_date_nokey` date DEFAULT NULL,
+`col_time_key` time DEFAULT NULL,
+`col_time_nokey` time DEFAULT NULL,
+`col_datetime_key` datetime DEFAULT NULL,
+`col_datetime_nokey` datetime DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_date_key` (`col_date_key`),
+KEY `col_time_key` (`col_time_key`),
+KEY `col_datetime_key` (`col_datetime_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=101 DEFAULT CHARSET=latin1;
+INSERT INTO `t4` VALUES (100,2,5,'2001-07-26','2001-07-26','11:49:25','11:49:25','2007-04-25 05:08:49','2007-04-25 05:08:49','f','f');
+SET @@optimizer_switch = 'subquery_cache=off';
+/* cache is off */ SELECT COUNT( DISTINCT table2 .`col_int_key` ) , (
+SELECT SUBQUERY2_t1 .`col_int_key`
+FROM t3 SUBQUERY2_t1 JOIN t2 ON SUBQUERY2_t1 .`col_int_key`
+WHERE table1 .`col_varchar_key` ) , table2 .`col_varchar_nokey` field10
+FROM t4 table1 JOIN ( t1 table2 STRAIGHT_JOIN t1 table3 ON table2 .`pk` ) ON table3 .`col_varchar_key` = table2 .`col_varchar_key`
+GROUP BY field10 ;
+COUNT( DISTINCT table2 .`col_int_key` ) (
+SELECT SUBQUERY2_t1 .`col_int_key`
+FROM t3 SUBQUERY2_t1 JOIN t2 ON SUBQUERY2_t1 .`col_int_key`
+WHERE table1 .`col_varchar_key` ) field10
+1 NULL d
+1 NULL f
+SET @@optimizer_switch = 'subquery_cache=on';
+/* cache is on */ SELECT COUNT( DISTINCT table2 .`col_int_key` ) , (
+SELECT SUBQUERY2_t1 .`col_int_key`
+FROM t3 SUBQUERY2_t1 JOIN t2 ON SUBQUERY2_t1 .`col_int_key`
+WHERE table1 .`col_varchar_key` ) , table2 .`col_varchar_nokey` field10
+FROM t4 table1 JOIN ( t1 table2 STRAIGHT_JOIN t1 table3 ON table2 .`pk` ) ON table3 .`col_varchar_key` = table2 .`col_varchar_key`
+GROUP BY field10 ;
+COUNT( DISTINCT table2 .`col_int_key` ) (
+SELECT SUBQUERY2_t1 .`col_int_key`
+FROM t3 SUBQUERY2_t1 JOIN t2 ON SUBQUERY2_t1 .`col_int_key`
+WHERE table1 .`col_varchar_key` ) field10
+1 NULL d
+1 NULL f
+drop table t1,t2,t3,t4;
+set @@optimizer_switch= default;
=== modified file 'mysql-test/t/subquery_cache.test'
--- a/mysql-test/t/subquery_cache.test 2010-07-10 10:37:30 +0000
+++ b/mysql-test/t/subquery_cache.test 2010-07-29 16:44:49 +0000
@@ -507,3 +507,107 @@
drop table t0,t1,t2;
set optimizer_switch='default';
+
+#
+--echo # launchpad BUG#609043
+#
+CREATE TABLE `t1` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_date_key` date DEFAULT NULL,
+ `col_date_nokey` date DEFAULT NULL,
+ `col_time_key` time DEFAULT NULL,
+ `col_time_nokey` time DEFAULT NULL,
+ `col_datetime_key` datetime DEFAULT NULL,
+ `col_datetime_nokey` datetime DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=21 DEFAULT CHARSET=latin1;
+INSERT INTO `t1` VALUES (19,NULL,6,'2004-08-20','2004-08-20','05:03:03','05:03:03','2007-04-19 00:19:53','2007-04-19 00:19:53','f','f');
+INSERT INTO `t1` VALUES (20,4,2,'1900-01-01','1900-01-01','18:38:59','18:38:59','1900-01-01 00:00:00','1900-01-01 00:00:00','d','d');
+
+CREATE TABLE `t2` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_date_key` date DEFAULT NULL,
+ `col_date_nokey` date DEFAULT NULL,
+ `col_time_key` time DEFAULT NULL,
+ `col_time_nokey` time DEFAULT NULL,
+ `col_datetime_key` datetime DEFAULT NULL,
+ `col_datetime_nokey` datetime DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_date_key` (`col_date_key`),
+ KEY `col_time_key` (`col_time_key`),
+ KEY `col_datetime_key` (`col_datetime_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=30 DEFAULT CHARSET=latin1;
+
+CREATE TABLE `t3` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_date_key` date DEFAULT NULL,
+ `col_date_nokey` date DEFAULT NULL,
+ `col_time_key` time DEFAULT NULL,
+ `col_time_nokey` time DEFAULT NULL,
+ `col_datetime_key` datetime DEFAULT NULL,
+ `col_datetime_nokey` datetime DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_date_key` (`col_date_key`),
+ KEY `col_time_key` (`col_time_key`),
+ KEY `col_datetime_key` (`col_datetime_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=2 DEFAULT CHARSET=latin1;
+
+CREATE TABLE `t4` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_date_key` date DEFAULT NULL,
+ `col_date_nokey` date DEFAULT NULL,
+ `col_time_key` time DEFAULT NULL,
+ `col_time_nokey` time DEFAULT NULL,
+ `col_datetime_key` datetime DEFAULT NULL,
+ `col_datetime_nokey` datetime DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_date_key` (`col_date_key`),
+ KEY `col_time_key` (`col_time_key`),
+ KEY `col_datetime_key` (`col_datetime_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=101 DEFAULT CHARSET=latin1;
+INSERT INTO `t4` VALUES (100,2,5,'2001-07-26','2001-07-26','11:49:25','11:49:25','2007-04-25 05:08:49','2007-04-25 05:08:49','f','f');
+
+SET @@optimizer_switch = 'subquery_cache=off';
+
+/* cache is off */ SELECT COUNT( DISTINCT table2 .`col_int_key` ) , (
+SELECT SUBQUERY2_t1 .`col_int_key`
+FROM t3 SUBQUERY2_t1 JOIN t2 ON SUBQUERY2_t1 .`col_int_key`
+WHERE table1 .`col_varchar_key` ) , table2 .`col_varchar_nokey` field10
+FROM t4 table1 JOIN ( t1 table2 STRAIGHT_JOIN t1 table3 ON table2 .`pk` ) ON table3 .`col_varchar_key` = table2 .`col_varchar_key`
+GROUP BY field10 ;
+
+SET @@optimizer_switch = 'subquery_cache=on';
+
+/* cache is on */ SELECT COUNT( DISTINCT table2 .`col_int_key` ) , (
+SELECT SUBQUERY2_t1 .`col_int_key`
+FROM t3 SUBQUERY2_t1 JOIN t2 ON SUBQUERY2_t1 .`col_int_key`
+WHERE table1 .`col_varchar_key` ) , table2 .`col_varchar_nokey` field10
+FROM t4 table1 JOIN ( t1 table2 STRAIGHT_JOIN t1 table3 ON table2 .`pk` ) ON table3 .`col_varchar_key` = table2 .`col_varchar_key`
+GROUP BY field10 ;
+
+drop table t1,t2,t3,t4;
+set @@optimizer_switch= default;
+
=== modified file 'sql/sql_expression_cache.cc'
--- a/sql/sql_expression_cache.cc 2010-07-10 10:37:30 +0000
+++ b/sql/sql_expression_cache.cc 2010-07-29 16:44:49 +0000
@@ -41,7 +41,6 @@
List<Item> args;
List_iterator_fast<Item*> li(*list);
Item **ref;
- Name_resolution_context *cn= NULL;
DBUG_ENTER("Expression_cache_tmptable::make_equalities");
for (uint i= 1 /* skip result field */; (ref= li++); i++)
@@ -58,13 +57,7 @@
fld->type() == MYSQL_TYPE_NEWDECIMAL ||
fld->type() == MYSQL_TYPE_DECIMAL)
{
- if (!cn)
- {
- // dummy resolution context
- cn= new Name_resolution_context();
- cn->init();
- }
- args.push_front(new Item_func_eq(new Item_ref(cn, ref, "", "", FALSE),
+ args.push_front(new Item_func_eq(*ref,
new Item_field(fld)));
}
}
[Maria-developers] Rev 2807: Bugfix for launchpad bug#608834 (608824, 609045, 609052). in file:///home/bell/maria/bzr/work-maria-5.3-lb608834/
by sanja@askmonty.org 29 Jul '10
29 Jul '10
At file:///home/bell/maria/bzr/work-maria-5.3-lb608834/
------------------------------------------------------------
revno: 2807
revision-id: sanja(a)askmonty.org-20100729111348-jjp89wlvs3kg0fqq
parent: timour(a)askmonty.org-20100723082500-kwqzzvuv62nw412k
committer: sanja(a)askmonty.org
branch nick: work-maria-5.3-lb608834
timestamp: Thu 2010-07-29 14:13:48 +0300
message:
Bugfix for launchpad bug#608834 (608824, 609045, 609052).
Added get_tmp_table_item() to the cache wrapper, because the wrapped expression can contain non-simple Items (Item_func, Item_field, Item_subquery).
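
The code change itself is not quoted in this message, so purely for
orientation, an override along these lines is what the description suggests;
the class name Item_cache_wrapper and the member orig_item are assumptions
about the 5.3 subquery-cache wrapper, not text taken from this commit:

  /*
    Hedged sketch only, not the committed patch. Assumes the cache wrapper
    class is Item_cache_wrapper and that it stores the wrapped expression in
    orig_item; get_tmp_table_item() is the Item virtual used when the select
    list is materialized into a temporary table.
  */
  Item *Item_cache_wrapper::get_tmp_table_item(THD *thd)
  {
    /* Delegate to the wrapped expression so that Item_func, Item_field and
       subquery Items are converted correctly for the temporary table instead
       of falling back to the default Item behaviour. */
    return orig_item->get_tmp_table_item(thd);
  }
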
=== modified file 'mysql-test/r/subquery_cache.result'
--- a/mysql-test/r/subquery_cache.result 2010-07-10 10:37:30 +0000
+++ b/mysql-test/r/subquery_cache.result 2010-07-29 11:13:48 +0000
@@ -1838,3 +1838,1046 @@
Handler_read_rnd_next 27
drop table t0,t1,t2;
set optimizer_switch='default';
+#launchpad BUG#608834
+CREATE TABLE `t2` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_int_key` int(11) DEFAULT NULL,
+`col_time_key` time DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_time_key` (`col_time_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=30 DEFAULT CHARSET=latin1;
+INSERT INTO `t2` VALUES (10,7,8,'01:27:35','v','v');
+INSERT INTO `t2` VALUES (11,1,9,'19:48:31','r','r');
+INSERT INTO `t2` VALUES (12,5,9,'00:00:00','a','a');
+INSERT INTO `t2` VALUES (13,3,186,'19:53:05','m','m');
+INSERT INTO `t2` VALUES (14,6,NULL,'19:18:56','y','y');
+INSERT INTO `t2` VALUES (15,92,2,'10:55:12','j','j');
+INSERT INTO `t2` VALUES (16,7,3,'00:25:00','d','d');
+INSERT INTO `t2` VALUES (17,NULL,0,'12:35:47','z','z');
+INSERT INTO `t2` VALUES (18,3,133,'19:53:03','e','e');
+INSERT INTO `t2` VALUES (19,5,1,'17:53:30','h','h');
+INSERT INTO `t2` VALUES (20,1,8,'11:35:49','b','b');
+INSERT INTO `t2` VALUES (21,2,5,NULL,'s','s');
+INSERT INTO `t2` VALUES (22,NULL,5,'06:01:40','e','e');
+INSERT INTO `t2` VALUES (23,1,8,'05:45:11','j','j');
+INSERT INTO `t2` VALUES (24,0,6,'00:00:00','e','e');
+INSERT INTO `t2` VALUES (25,210,51,'00:00:00','f','f');
+INSERT INTO `t2` VALUES (26,8,4,'06:11:01','v','v');
+INSERT INTO `t2` VALUES (27,7,7,'13:02:46','x','x');
+INSERT INTO `t2` VALUES (28,5,6,'21:44:25','m','m');
+INSERT INTO `t2` VALUES (29,NULL,4,'22:43:58','c','c');
+CREATE TABLE `t1` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_int_key` int(11) DEFAULT NULL,
+`col_time_key` time DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_time_key` (`col_time_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=21 DEFAULT CHARSET=latin1;
+INSERT INTO `t1` VALUES (1,NULL,2,'11:28:45','w','w');
+INSERT INTO `t1` VALUES (2,7,9,'20:25:14','m','m');
+INSERT INTO `t1` VALUES (3,9,3,'13:47:24','m','m');
+INSERT INTO `t1` VALUES (4,7,9,'19:24:11','k','k');
+INSERT INTO `t1` VALUES (5,4,NULL,'15:59:13','r','r');
+INSERT INTO `t1` VALUES (6,2,9,'00:00:00','t','t');
+INSERT INTO `t1` VALUES (7,6,3,'15:15:04','j','j');
+INSERT INTO `t1` VALUES (8,8,8,'11:32:06','u','u');
+INSERT INTO `t1` VALUES (9,NULL,8,'18:32:33','h','h');
+INSERT INTO `t1` VALUES (10,5,53,'15:19:25','o','o');
+INSERT INTO `t1` VALUES (11,NULL,0,'19:03:19',NULL,NULL);
+INSERT INTO `t1` VALUES (12,6,5,'00:39:46','k','k');
+INSERT INTO `t1` VALUES (13,188,166,NULL,'e','e');
+INSERT INTO `t1` VALUES (14,2,3,'00:00:00','n','n');
+INSERT INTO `t1` VALUES (15,1,0,'13:12:11','t','t');
+INSERT INTO `t1` VALUES (16,1,1,'04:56:48','c','c');
+INSERT INTO `t1` VALUES (17,0,9,'19:56:05','m','m');
+INSERT INTO `t1` VALUES (18,9,5,'19:35:19','y','y');
+INSERT INTO `t1` VALUES (19,NULL,6,'05:03:03','f','f');
+INSERT INTO `t1` VALUES (20,4,2,'18:38:59','d','d');
+set @@optimizer_switch='subquery_cache=off';
+/* cache is off */ SELECT (
+SELECT 4
+FROM DUAL ) AS field1 , SUM( DISTINCT table1 . `pk` ) AS field2 , (
+SELECT MAX( SUBQUERY2_t1 . `col_int_nokey` ) AS SUBQUERY2_field1
+FROM ( t1 AS SUBQUERY2_t1 INNER JOIN t1 AS SUBQUERY2_t2 ON (SUBQUERY2_t2 . `col_int_key` = SUBQUERY2_t1 . `pk` ) )
+WHERE SUBQUERY2_t2 . `col_varchar_nokey` <= table1 . `col_varchar_key` OR SUBQUERY2_t1 . `col_int_nokey` < table1 . `pk` ) AS field3 , table1 . `col_time_key` AS field4 , table1 . `col_int_key` AS field5 , CONCAT ( table2 . `col_varchar_nokey` , table1 . `col_varchar_key` ) AS field6
+FROM ( t1 AS table1 INNER JOIN ( ( t1 AS table2 LEFT JOIN t2 AS table3 ON (table3 . `col_varchar_key` = table2 . `col_varchar_key` ) ) ) ON (table3 . `col_varchar_key` = table2 . `col_varchar_nokey` ) )
+WHERE ( table2 . `col_varchar_nokey` NOT IN (
+SELECT 'd' UNION
+SELECT 'u' ) ) OR table3 . `col_varchar_nokey` <= table1 . `col_varchar_key`
+GROUP BY field1, field3, field4, field5, field6
+ORDER BY table1 . `col_int_key` , field1, field2, field3, field4, field5, field6
+;
+field1 field2 field3 field4 field5 field6
+4 5 9 15:59:13 NULL cr
+4 5 9 15:59:13 NULL dr
+4 5 9 15:59:13 NULL er
+4 5 9 15:59:13 NULL fr
+4 5 9 15:59:13 NULL hr
+4 5 9 15:59:13 NULL jr
+4 5 9 15:59:13 NULL mr
+4 5 9 15:59:13 NULL rr
+4 5 9 15:59:13 NULL yr
+4 11 9 19:03:19 0 NULL
+4 15 9 13:12:11 0 ct
+4 15 9 13:12:11 0 dt
+4 15 9 13:12:11 0 et
+4 15 9 13:12:11 0 ft
+4 15 9 13:12:11 0 ht
+4 15 9 13:12:11 0 jt
+4 15 9 13:12:11 0 mt
+4 15 9 13:12:11 0 rt
+4 15 9 13:12:11 0 yt
+4 16 9 04:56:48 1 cc
+4 16 9 04:56:48 1 ec
+4 16 9 04:56:48 1 fc
+4 16 9 04:56:48 1 hc
+4 16 9 04:56:48 1 jc
+4 16 9 04:56:48 1 mc
+4 16 9 04:56:48 1 rc
+4 16 9 04:56:48 1 yc
+4 1 9 11:28:45 2 cw
+4 1 9 11:28:45 2 dw
+4 1 9 11:28:45 2 ew
+4 1 9 11:28:45 2 fw
+4 1 9 11:28:45 2 hw
+4 1 9 11:28:45 2 jw
+4 1 9 11:28:45 2 mw
+4 1 9 11:28:45 2 rw
+4 1 9 11:28:45 2 yw
+4 20 9 18:38:59 2 cd
+4 20 9 18:38:59 2 dd
+4 20 9 18:38:59 2 ed
+4 20 9 18:38:59 2 fd
+4 20 9 18:38:59 2 hd
+4 20 9 18:38:59 2 jd
+4 20 9 18:38:59 2 md
+4 20 9 18:38:59 2 rd
+4 20 9 18:38:59 2 yd
+4 3 9 13:47:24 3 cm
+4 3 9 13:47:24 3 dm
+4 3 9 13:47:24 3 em
+4 3 9 13:47:24 3 fm
+4 3 9 13:47:24 3 hm
+4 3 9 13:47:24 3 jm
+4 3 9 13:47:24 3 mm
+4 3 9 13:47:24 3 rm
+4 3 9 13:47:24 3 ym
+4 7 9 15:15:04 3 cj
+4 7 9 15:15:04 3 dj
+4 7 9 15:15:04 3 ej
+4 7 9 15:15:04 3 fj
+4 7 9 15:15:04 3 hj
+4 7 9 15:15:04 3 jj
+4 7 9 15:15:04 3 mj
+4 7 9 15:15:04 3 rj
+4 7 9 15:15:04 3 yj
+4 14 9 00:00:00 3 cn
+4 14 9 00:00:00 3 dn
+4 14 9 00:00:00 3 en
+4 14 9 00:00:00 3 fn
+4 14 9 00:00:00 3 hn
+4 14 9 00:00:00 3 jn
+4 14 9 00:00:00 3 mn
+4 14 9 00:00:00 3 rn
+4 14 9 00:00:00 3 yn
+4 12 9 00:39:46 5 ck
+4 12 9 00:39:46 5 dk
+4 12 9 00:39:46 5 ek
+4 12 9 00:39:46 5 fk
+4 12 9 00:39:46 5 hk
+4 12 9 00:39:46 5 jk
+4 12 9 00:39:46 5 mk
+4 12 9 00:39:46 5 rk
+4 12 9 00:39:46 5 yk
+4 18 9 19:35:19 5 cy
+4 18 9 19:35:19 5 dy
+4 18 9 19:35:19 5 ey
+4 18 9 19:35:19 5 fy
+4 18 9 19:35:19 5 hy
+4 18 9 19:35:19 5 jy
+4 18 9 19:35:19 5 my
+4 18 9 19:35:19 5 ry
+4 18 9 19:35:19 5 yy
+4 19 9 05:03:03 6 cf
+4 19 9 05:03:03 6 df
+4 19 9 05:03:03 6 ef
+4 19 9 05:03:03 6 ff
+4 19 9 05:03:03 6 hf
+4 19 9 05:03:03 6 jf
+4 19 9 05:03:03 6 mf
+4 19 9 05:03:03 6 rf
+4 19 9 05:03:03 6 yf
+4 8 9 11:32:06 8 cu
+4 8 9 11:32:06 8 du
+4 8 9 11:32:06 8 eu
+4 8 9 11:32:06 8 fu
+4 8 9 11:32:06 8 hu
+4 8 9 11:32:06 8 ju
+4 8 9 11:32:06 8 mu
+4 8 9 11:32:06 8 ru
+4 8 9 11:32:06 8 yu
+4 9 8 18:32:33 8 ch
+4 9 8 18:32:33 8 dh
+4 9 8 18:32:33 8 eh
+4 9 8 18:32:33 8 fh
+4 9 8 18:32:33 8 hh
+4 9 8 18:32:33 8 jh
+4 9 8 18:32:33 8 mh
+4 9 8 18:32:33 8 rh
+4 9 8 18:32:33 8 yh
+4 2 9 20:25:14 9 cm
+4 2 9 20:25:14 9 dm
+4 2 9 20:25:14 9 em
+4 2 9 20:25:14 9 fm
+4 2 9 20:25:14 9 hm
+4 2 9 20:25:14 9 jm
+4 2 9 20:25:14 9 mm
+4 2 9 20:25:14 9 rm
+4 2 9 20:25:14 9 ym
+4 4 9 19:24:11 9 ck
+4 4 9 19:24:11 9 dk
+4 4 9 19:24:11 9 ek
+4 4 9 19:24:11 9 fk
+4 4 9 19:24:11 9 hk
+4 4 9 19:24:11 9 jk
+4 4 9 19:24:11 9 mk
+4 4 9 19:24:11 9 rk
+4 4 9 19:24:11 9 yk
+4 6 9 00:00:00 9 ct
+4 6 9 00:00:00 9 dt
+4 6 9 00:00:00 9 et
+4 6 9 00:00:00 9 ft
+4 6 9 00:00:00 9 ht
+4 6 9 00:00:00 9 jt
+4 6 9 00:00:00 9 mt
+4 6 9 00:00:00 9 rt
+4 6 9 00:00:00 9 yt
+4 17 9 19:56:05 9 cm
+4 17 9 19:56:05 9 dm
+4 17 9 19:56:05 9 em
+4 17 9 19:56:05 9 fm
+4 17 9 19:56:05 9 hm
+4 17 9 19:56:05 9 jm
+4 17 9 19:56:05 9 mm
+4 17 9 19:56:05 9 rm
+4 17 9 19:56:05 9 ym
+4 10 9 15:19:25 53 co
+4 10 9 15:19:25 53 do
+4 10 9 15:19:25 53 eo
+4 10 9 15:19:25 53 fo
+4 10 9 15:19:25 53 ho
+4 10 9 15:19:25 53 jo
+4 10 9 15:19:25 53 mo
+4 10 9 15:19:25 53 ro
+4 10 9 15:19:25 53 yo
+4 13 9 NULL 166 ce
+4 13 9 NULL 166 de
+4 13 9 NULL 166 ee
+4 13 9 NULL 166 fe
+4 13 9 NULL 166 he
+4 13 9 NULL 166 je
+4 13 9 NULL 166 me
+4 13 9 NULL 166 re
+4 13 9 NULL 166 ye
+set @@optimizer_switch='subquery_cache=on';
+/* cache is on */ SELECT (
+SELECT 4
+FROM DUAL ) AS field1 , SUM( DISTINCT table1 . `pk` ) AS field2 , (
+SELECT MAX( SUBQUERY2_t1 . `col_int_nokey` ) AS SUBQUERY2_field1
+FROM ( t1 AS SUBQUERY2_t1 INNER JOIN t1 AS SUBQUERY2_t2 ON (SUBQUERY2_t2 . `col_int_key` = SUBQUERY2_t1 . `pk` ) )
+WHERE SUBQUERY2_t2 . `col_varchar_nokey` <= table1 . `col_varchar_key` OR SUBQUERY2_t1 . `col_int_nokey` < table1 . `pk` ) AS field3 , table1 . `col_time_key` AS field4 , table1 . `col_int_key` AS field5 , CONCAT ( table2 . `col_varchar_nokey` , table1 . `col_varchar_key` ) AS field6
+FROM ( t1 AS table1 INNER JOIN ( ( t1 AS table2 LEFT JOIN t2 AS table3 ON (table3 . `col_varchar_key` = table2 . `col_varchar_key` ) ) ) ON (table3 . `col_varchar_key` = table2 . `col_varchar_nokey` ) )
+WHERE ( table2 . `col_varchar_nokey` NOT IN (
+SELECT 'd' UNION
+SELECT 'u' ) ) OR table3 . `col_varchar_nokey` <= table1 . `col_varchar_key`
+GROUP BY field1, field3, field4, field5, field6
+ORDER BY table1 . `col_int_key` , field1, field2, field3, field4, field5, field6
+;
+field1 field2 field3 field4 field5 field6
+4 5 9 15:59:13 NULL cr
+4 5 9 15:59:13 NULL dr
+4 5 9 15:59:13 NULL er
+4 5 9 15:59:13 NULL fr
+4 5 9 15:59:13 NULL hr
+4 5 9 15:59:13 NULL jr
+4 5 9 15:59:13 NULL mr
+4 5 9 15:59:13 NULL rr
+4 5 9 15:59:13 NULL yr
+4 11 9 19:03:19 0 NULL
+4 15 9 13:12:11 0 ct
+4 15 9 13:12:11 0 dt
+4 15 9 13:12:11 0 et
+4 15 9 13:12:11 0 ft
+4 15 9 13:12:11 0 ht
+4 15 9 13:12:11 0 jt
+4 15 9 13:12:11 0 mt
+4 15 9 13:12:11 0 rt
+4 15 9 13:12:11 0 yt
+4 16 9 04:56:48 1 cc
+4 16 9 04:56:48 1 ec
+4 16 9 04:56:48 1 fc
+4 16 9 04:56:48 1 hc
+4 16 9 04:56:48 1 jc
+4 16 9 04:56:48 1 mc
+4 16 9 04:56:48 1 rc
+4 16 9 04:56:48 1 yc
+4 1 9 11:28:45 2 cw
+4 1 9 11:28:45 2 dw
+4 1 9 11:28:45 2 ew
+4 1 9 11:28:45 2 fw
+4 1 9 11:28:45 2 hw
+4 1 9 11:28:45 2 jw
+4 1 9 11:28:45 2 mw
+4 1 9 11:28:45 2 rw
+4 1 9 11:28:45 2 yw
+4 20 9 18:38:59 2 cd
+4 20 9 18:38:59 2 dd
+4 20 9 18:38:59 2 ed
+4 20 9 18:38:59 2 fd
+4 20 9 18:38:59 2 hd
+4 20 9 18:38:59 2 jd
+4 20 9 18:38:59 2 md
+4 20 9 18:38:59 2 rd
+4 20 9 18:38:59 2 yd
+4 3 9 13:47:24 3 cm
+4 3 9 13:47:24 3 dm
+4 3 9 13:47:24 3 em
+4 3 9 13:47:24 3 fm
+4 3 9 13:47:24 3 hm
+4 3 9 13:47:24 3 jm
+4 3 9 13:47:24 3 mm
+4 3 9 13:47:24 3 rm
+4 3 9 13:47:24 3 ym
+4 7 9 15:15:04 3 cj
+4 7 9 15:15:04 3 dj
+4 7 9 15:15:04 3 ej
+4 7 9 15:15:04 3 fj
+4 7 9 15:15:04 3 hj
+4 7 9 15:15:04 3 jj
+4 7 9 15:15:04 3 mj
+4 7 9 15:15:04 3 rj
+4 7 9 15:15:04 3 yj
+4 14 9 00:00:00 3 cn
+4 14 9 00:00:00 3 dn
+4 14 9 00:00:00 3 en
+4 14 9 00:00:00 3 fn
+4 14 9 00:00:00 3 hn
+4 14 9 00:00:00 3 jn
+4 14 9 00:00:00 3 mn
+4 14 9 00:00:00 3 rn
+4 14 9 00:00:00 3 yn
+4 12 9 00:39:46 5 ck
+4 12 9 00:39:46 5 dk
+4 12 9 00:39:46 5 ek
+4 12 9 00:39:46 5 fk
+4 12 9 00:39:46 5 hk
+4 12 9 00:39:46 5 jk
+4 12 9 00:39:46 5 mk
+4 12 9 00:39:46 5 rk
+4 12 9 00:39:46 5 yk
+4 18 9 19:35:19 5 cy
+4 18 9 19:35:19 5 dy
+4 18 9 19:35:19 5 ey
+4 18 9 19:35:19 5 fy
+4 18 9 19:35:19 5 hy
+4 18 9 19:35:19 5 jy
+4 18 9 19:35:19 5 my
+4 18 9 19:35:19 5 ry
+4 18 9 19:35:19 5 yy
+4 19 9 05:03:03 6 cf
+4 19 9 05:03:03 6 df
+4 19 9 05:03:03 6 ef
+4 19 9 05:03:03 6 ff
+4 19 9 05:03:03 6 hf
+4 19 9 05:03:03 6 jf
+4 19 9 05:03:03 6 mf
+4 19 9 05:03:03 6 rf
+4 19 9 05:03:03 6 yf
+4 8 9 11:32:06 8 cu
+4 8 9 11:32:06 8 du
+4 8 9 11:32:06 8 eu
+4 8 9 11:32:06 8 fu
+4 8 9 11:32:06 8 hu
+4 8 9 11:32:06 8 ju
+4 8 9 11:32:06 8 mu
+4 8 9 11:32:06 8 ru
+4 8 9 11:32:06 8 yu
+4 9 8 18:32:33 8 ch
+4 9 8 18:32:33 8 dh
+4 9 8 18:32:33 8 eh
+4 9 8 18:32:33 8 fh
+4 9 8 18:32:33 8 hh
+4 9 8 18:32:33 8 jh
+4 9 8 18:32:33 8 mh
+4 9 8 18:32:33 8 rh
+4 9 8 18:32:33 8 yh
+4 2 9 20:25:14 9 cm
+4 2 9 20:25:14 9 dm
+4 2 9 20:25:14 9 em
+4 2 9 20:25:14 9 fm
+4 2 9 20:25:14 9 hm
+4 2 9 20:25:14 9 jm
+4 2 9 20:25:14 9 mm
+4 2 9 20:25:14 9 rm
+4 2 9 20:25:14 9 ym
+4 4 9 19:24:11 9 ck
+4 4 9 19:24:11 9 dk
+4 4 9 19:24:11 9 ek
+4 4 9 19:24:11 9 fk
+4 4 9 19:24:11 9 hk
+4 4 9 19:24:11 9 jk
+4 4 9 19:24:11 9 mk
+4 4 9 19:24:11 9 rk
+4 4 9 19:24:11 9 yk
+4 6 9 00:00:00 9 ct
+4 6 9 00:00:00 9 dt
+4 6 9 00:00:00 9 et
+4 6 9 00:00:00 9 ft
+4 6 9 00:00:00 9 ht
+4 6 9 00:00:00 9 jt
+4 6 9 00:00:00 9 mt
+4 6 9 00:00:00 9 rt
+4 6 9 00:00:00 9 yt
+4 17 9 19:56:05 9 cm
+4 17 9 19:56:05 9 dm
+4 17 9 19:56:05 9 em
+4 17 9 19:56:05 9 fm
+4 17 9 19:56:05 9 hm
+4 17 9 19:56:05 9 jm
+4 17 9 19:56:05 9 mm
+4 17 9 19:56:05 9 rm
+4 17 9 19:56:05 9 ym
+4 10 9 15:19:25 53 co
+4 10 9 15:19:25 53 do
+4 10 9 15:19:25 53 eo
+4 10 9 15:19:25 53 fo
+4 10 9 15:19:25 53 ho
+4 10 9 15:19:25 53 jo
+4 10 9 15:19:25 53 mo
+4 10 9 15:19:25 53 ro
+4 10 9 15:19:25 53 yo
+4 13 9 NULL 166 ce
+4 13 9 NULL 166 de
+4 13 9 NULL 166 ee
+4 13 9 NULL 166 fe
+4 13 9 NULL 166 he
+4 13 9 NULL 166 je
+4 13 9 NULL 166 me
+4 13 9 NULL 166 re
+4 13 9 NULL 166 ye
+drop table t1,t2;
+set @@optimizer_switch= default;
+#launchpad BUG#609045
+CREATE TABLE `t1` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_int_key` int(11) DEFAULT NULL,
+`col_date_key` date DEFAULT NULL,
+`col_date_nokey` date DEFAULT NULL,
+`col_time_key` time DEFAULT NULL,
+`col_time_nokey` time DEFAULT NULL,
+`col_datetime_key` datetime DEFAULT NULL,
+`col_datetime_nokey` datetime DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_date_key` (`col_date_key`),
+KEY `col_time_key` (`col_time_key`),
+KEY `col_datetime_key` (`col_datetime_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=21 DEFAULT CHARSET=latin1;
+INSERT INTO `t1` VALUES (1,NULL,2,NULL,NULL,'11:28:45','11:28:45','2004-10-11 18:13:16','2004-10-11 18:13:16','w','w');
+INSERT INTO `t1` VALUES (2,7,9,'2001-09-19','2001-09-19','20:25:14','20:25:14',NULL,NULL,'m','m');
+INSERT INTO `t1` VALUES (3,9,3,'2004-09-12','2004-09-12','13:47:24','13:47:24','1900-01-01 00:00:00','1900-01-01 00:00:00','m','m');
+INSERT INTO `t1` VALUES (4,7,9,NULL,NULL,'19:24:11','19:24:11','2009-07-25 00:00:00','2009-07-25 00:00:00','k','k');
+INSERT INTO `t1` VALUES (5,4,NULL,'2002-07-19','2002-07-19','15:59:13','15:59:13',NULL,NULL,'r','r');
+INSERT INTO `t1` VALUES (6,2,9,'2002-12-16','2002-12-16','00:00:00','00:00:00','2008-07-27 00:00:00','2008-07-27 00:00:00','t','t');
+INSERT INTO `t1` VALUES (7,6,3,'2006-02-08','2006-02-08','15:15:04','15:15:04','2002-11-13 16:37:31','2002-11-13 16:37:31','j','j');
+INSERT INTO `t1` VALUES (8,8,8,'2006-08-28','2006-08-28','11:32:06','11:32:06','1900-01-01 00:00:00','1900-01-01 00:00:00','u','u');
+INSERT INTO `t1` VALUES (9,NULL,8,'2001-04-14','2001-04-14','18:32:33','18:32:33','2003-12-10 00:00:00','2003-12-10 00:00:00','h','h');
+INSERT INTO `t1` VALUES (10,5,53,'2000-01-05','2000-01-05','15:19:25','15:19:25','2001-12-21 22:38:22','2001-12-21 22:38:22','o','o');
+INSERT INTO `t1` VALUES (11,NULL,0,'2003-12-06','2003-12-06','19:03:19','19:03:19','2008-12-13 23:16:44','2008-12-13 23:16:44',NULL,NULL);
+INSERT INTO `t1` VALUES (12,6,5,'1900-01-01','1900-01-01','00:39:46','00:39:46','2005-08-15 12:39:41','2005-08-15 12:39:41','k','k');
+INSERT INTO `t1` VALUES (13,188,166,'2002-11-27','2002-11-27',NULL,NULL,NULL,NULL,'e','e');
+INSERT INTO `t1` VALUES (14,2,3,NULL,NULL,'00:00:00','00:00:00','2006-09-11 12:06:14','2006-09-11 12:06:14','n','n');
+INSERT INTO `t1` VALUES (15,1,0,'2003-05-27','2003-05-27','13:12:11','13:12:11','2007-12-15 12:39:34','2007-12-15 12:39:34','t','t');
+INSERT INTO `t1` VALUES (16,1,1,'2005-05-03','2005-05-03','04:56:48','04:56:48','2005-08-09 00:00:00','2005-08-09 00:00:00','c','c');
+INSERT INTO `t1` VALUES (17,0,9,'2001-04-18','2001-04-18','19:56:05','19:56:05','2001-09-02 22:50:02','2001-09-02 22:50:02','m','m');
+INSERT INTO `t1` VALUES (18,9,5,'2005-12-27','2005-12-27','19:35:19','19:35:19','2005-12-16 22:58:11','2005-12-16 22:58:11','y','y');
+INSERT INTO `t1` VALUES (19,NULL,6,'2004-08-20','2004-08-20','05:03:03','05:03:03','2007-04-19 00:19:53','2007-04-19 00:19:53','f','f');
+INSERT INTO `t1` VALUES (20,4,2,'1900-01-01','1900-01-01','18:38:59','18:38:59','1900-01-01 00:00:00','1900-01-01 00:00:00','d','d');
+CREATE TABLE `t2` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_int_key` int(11) DEFAULT NULL,
+`col_date_key` date DEFAULT NULL,
+`col_date_nokey` date DEFAULT NULL,
+`col_time_key` time DEFAULT NULL,
+`col_time_nokey` time DEFAULT NULL,
+`col_datetime_key` datetime DEFAULT NULL,
+`col_datetime_nokey` datetime DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_date_key` (`col_date_key`),
+KEY `col_time_key` (`col_time_key`),
+KEY `col_datetime_key` (`col_datetime_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+);
+INSERT INTO `t2` VALUES (10,7,8,NULL,NULL,'01:27:35','01:27:35','2002-02-26 06:14:37','2002-02-26 06:14:37','v','v');
+INSERT INTO `t2` VALUES (11,1,9,'2006-06-14','2006-06-14','19:48:31','19:48:31','1900-01-01 00:00:00','1900-01-01 00:00:00','r','r');
+INSERT INTO `t2` VALUES (12,5,9,'2002-09-12','2002-09-12','00:00:00','00:00:00','2006-12-03 09:37:26','2006-12-03 09:37:26','a','a');
+INSERT INTO `t2` VALUES (13,3,186,'2005-02-15','2005-02-15','19:53:05','19:53:05','2008-05-26 12:27:10','2008-05-26 12:27:10','m','m');
+INSERT INTO `t2` VALUES (14,6,NULL,NULL,NULL,'19:18:56','19:18:56','2004-12-14 16:37:30','2004-12-14 16:37:30','y','y');
+INSERT INTO `t2` VALUES (15,92,2,'2008-11-04','2008-11-04','10:55:12','10:55:12','2003-02-11 21:19:41','2003-02-11 21:19:41','j','j');
+INSERT INTO `t2` VALUES (16,7,3,'2004-09-04','2004-09-04','00:25:00','00:25:00','2009-10-18 02:27:49','2009-10-18 02:27:49','d','d');
+INSERT INTO `t2` VALUES (17,NULL,0,'2006-06-05','2006-06-05','12:35:47','12:35:47','2000-09-26 07:45:57','2000-09-26 07:45:57','z','z');
+INSERT INTO `t2` VALUES (18,3,133,'1900-01-01','1900-01-01','19:53:03','19:53:03',NULL,NULL,'e','e');
+INSERT INTO `t2` VALUES (19,5,1,'1900-01-01','1900-01-01','17:53:30','17:53:30','2005-11-10 12:40:29','2005-11-10 12:40:29','h','h');
+INSERT INTO `t2` VALUES (20,1,8,'1900-01-01','1900-01-01','11:35:49','11:35:49','2009-04-25 00:00:00','2009-04-25 00:00:00','b','b');
+INSERT INTO `t2` VALUES (21,2,5,'2005-01-13','2005-01-13',NULL,NULL,'2002-11-27 00:00:00','2002-11-27 00:00:00','s','s');
+INSERT INTO `t2` VALUES (22,NULL,5,'2006-05-21','2006-05-21','06:01:40','06:01:40','2004-01-26 20:32:32','2004-01-26 20:32:32','e','e');
+INSERT INTO `t2` VALUES (23,1,8,'2003-09-08','2003-09-08','05:45:11','05:45:11','2007-10-26 11:41:40','2007-10-26 11:41:40','j','j');
+INSERT INTO `t2` VALUES (24,0,6,'2006-12-23','2006-12-23','00:00:00','00:00:00','2005-10-07 00:00:00','2005-10-07 00:00:00','e','e');
+INSERT INTO `t2` VALUES (25,210,51,'2006-10-15','2006-10-15','00:00:00','00:00:00','2000-07-15 05:00:34','2000-07-15 05:00:34','f','f');
+INSERT INTO `t2` VALUES (26,8,4,'2005-04-06','2005-04-06','06:11:01','06:11:01','2000-04-03 16:33:32','2000-04-03 16:33:32','v','v');
+INSERT INTO `t2` VALUES (27,7,7,'2008-04-07','2008-04-07','13:02:46','13:02:46',NULL,NULL,'x','x');
+INSERT INTO `t2` VALUES (28,5,6,'2006-10-10','2006-10-10','21:44:25','21:44:25','2001-04-25 01:26:12','2001-04-25 01:26:12','m','m');
+INSERT INTO `t2` VALUES (29,NULL,4,'1900-01-01','1900-01-01','22:43:58','22:43:58','2000-12-27 00:00:00','2000-12-27 00:00:00','c','c');
+CREATE TABLE `t3` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_int_key` int(11) DEFAULT NULL,
+`col_date_key` date DEFAULT NULL,
+`col_date_nokey` date DEFAULT NULL,
+`col_time_key` time DEFAULT NULL,
+`col_time_nokey` time DEFAULT NULL,
+`col_datetime_key` datetime DEFAULT NULL,
+`col_datetime_nokey` datetime DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_date_key` (`col_date_key`),
+KEY `col_time_key` (`col_time_key`),
+KEY `col_datetime_key` (`col_datetime_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+);
+INSERT INTO `t3` VALUES (1,1,7,'1900-01-01','1900-01-01','01:13:38','01:13:38','2005-02-05 00:00:00','2005-02-05 00:00:00','f','f');
+CREATE TABLE `t4` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_int_key` int(11) DEFAULT NULL,
+`col_date_key` date DEFAULT NULL,
+`col_date_nokey` date DEFAULT NULL,
+`col_time_key` time DEFAULT NULL,
+`col_time_nokey` time DEFAULT NULL,
+`col_datetime_key` datetime DEFAULT NULL,
+`col_datetime_nokey` datetime DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_date_key` (`col_date_key`),
+KEY `col_time_key` (`col_time_key`),
+KEY `col_datetime_key` (`col_datetime_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+);
+INSERT INTO `t4` VALUES (1,6,NULL,'2003-05-12','2003-05-12',NULL,NULL,'2000-09-12 00:00:00','2000-09-12 00:00:00','r','r');
+INSERT INTO `t4` VALUES (2,8,0,'2003-01-07','2003-01-07','14:34:45','14:34:45','2004-08-10 09:09:31','2004-08-10 09:09:31','c','c');
+INSERT INTO `t4` VALUES (3,6,0,NULL,NULL,'11:49:48','11:49:48','2005-03-21 04:31:40','2005-03-21 04:31:40','o','o');
+INSERT INTO `t4` VALUES (4,6,7,'2005-03-12','2005-03-12','18:12:55','18:12:55','2002-10-25 23:50:35','2002-10-25 23:50:35','c','c');
+INSERT INTO `t4` VALUES (5,3,8,'2000-08-02','2000-08-02','18:30:05','18:30:05','2001-04-01 21:14:04','2001-04-01 21:14:04','d','d');
+INSERT INTO `t4` VALUES (6,9,4,'1900-01-01','1900-01-01','14:19:30','14:19:30','2005-03-12 06:02:34','2005-03-12 06:02:34','v','v');
+INSERT INTO `t4` VALUES (7,2,6,'2006-07-06','2006-07-06','05:20:04','05:20:04','2001-05-06 14:49:12','2001-05-06 14:49:12','m','m');
+INSERT INTO `t4` VALUES (8,1,5,'2006-12-24','2006-12-24','20:29:31','20:29:31','2004-04-25 00:00:00','2004-04-25 00:00:00','j','j');
+INSERT INTO `t4` VALUES (9,8,NULL,'2004-11-16','2004-11-16','07:08:09','07:08:09','2001-03-22 18:38:43','2001-03-22 18:38:43','f','f');
+INSERT INTO `t4` VALUES (10,0,NULL,'2002-09-09','2002-09-09','14:49:14','14:49:14','2006-04-25 21:03:02','2006-04-25 21:03:02','n','n');
+INSERT INTO `t4` VALUES (11,9,8,NULL,NULL,'00:00:00','00:00:00','2009-09-07 18:40:43','2009-09-07 18:40:43','z','z');
+INSERT INTO `t4` VALUES (12,8,8,'2008-06-24','2008-06-24','09:58:06','09:58:06','2004-03-23 00:00:00','2004-03-23 00:00:00','h','h');
+INSERT INTO `t4` VALUES (13,NULL,8,'2001-04-21','2001-04-21',NULL,NULL,'2009-04-15 00:08:29','2009-04-15 00:08:29','q','q');
+INSERT INTO `t4` VALUES (14,0,1,'2003-11-22','2003-11-22','18:24:16','18:24:16','2000-04-21 00:00:00','2000-04-21 00:00:00','w','w');
+INSERT INTO `t4` VALUES (15,5,1,'2004-09-12','2004-09-12','17:39:57','17:39:57','2000-02-17 19:41:23','2000-02-17 19:41:23','z','z');
+INSERT INTO `t4` VALUES (16,1,5,'2006-06-20','2006-06-20','08:23:21','08:23:21','2003-09-20 07:38:14','2003-09-20 07:38:14','j','j');
+INSERT INTO `t4` VALUES (17,1,2,NULL,NULL,NULL,NULL,'2000-11-28 20:42:12','2000-11-28 20:42:12','a','a');
+INSERT INTO `t4` VALUES (18,6,7,'2001-11-25','2001-11-25','21:50:46','21:50:46','2005-06-12 11:13:17','2005-06-12 11:13:17','m','m');
+INSERT INTO `t4` VALUES (19,6,6,'2004-10-26','2004-10-26','12:33:17','12:33:17','1900-01-01 00:00:00','1900-01-01 00:00:00','n','n');
+INSERT INTO `t4` VALUES (20,1,4,'2005-01-19','2005-01-19','03:06:43','03:06:43','2006-02-09 20:41:06','2006-02-09 20:41:06','e','e');
+INSERT INTO `t4` VALUES (21,8,7,'2008-07-06','2008-07-06','03:46:14','03:46:14','2004-05-22 01:05:57','2004-05-22 01:05:57','u','u');
+INSERT INTO `t4` VALUES (22,1,0,'1900-01-01','1900-01-01','20:34:52','20:34:52','2004-03-04 13:46:31','2004-03-04 13:46:31','s','s');
+INSERT INTO `t4` VALUES (23,0,9,'1900-01-01','1900-01-01',NULL,NULL,'1900-01-01 00:00:00','1900-01-01 00:00:00','u','u');
+INSERT INTO `t4` VALUES (24,4,3,'2004-06-08','2004-06-08','10:41:20','10:41:20','2004-10-20 07:20:19','2004-10-20 07:20:19','r','r');
+INSERT INTO `t4` VALUES (25,9,5,'2007-02-20','2007-02-20','08:43:11','08:43:11','2006-04-17 00:00:00','2006-04-17 00:00:00','g','g');
+INSERT INTO `t4` VALUES (26,8,1,'2008-06-18','2008-06-18',NULL,NULL,'2000-10-27 00:00:00','2000-10-27 00:00:00','o','o');
+INSERT INTO `t4` VALUES (27,5,1,'2008-05-15','2008-05-15','10:17:51','10:17:51','2007-04-14 08:54:06','2007-04-14 08:54:06','w','w');
+INSERT INTO `t4` VALUES (28,9,5,'2005-10-06','2005-10-06','06:34:09','06:34:09','2008-04-12 17:03:52','2008-04-12 17:03:52','b','b');
+INSERT INTO `t4` VALUES (29,5,9,NULL,NULL,'21:22:47','21:22:47','2007-02-19 17:37:09','2007-02-19 17:37:09',NULL,NULL);
+INSERT INTO `t4` VALUES (30,NULL,2,'2006-10-12','2006-10-12','04:02:32','04:02:32','1900-01-01 00:00:00','1900-01-01 00:00:00','y','y');
+INSERT INTO `t4` VALUES (31,NULL,5,'2005-01-24','2005-01-24','02:33:14','02:33:14','2001-10-10 08:32:27','2001-10-10 08:32:27','y','y');
+INSERT INTO `t4` VALUES (32,105,248,'2009-06-27','2009-06-27','16:32:56','16:32:56',NULL,NULL,'u','u');
+INSERT INTO `t4` VALUES (33,0,0,NULL,NULL,'21:32:42','21:32:42','2001-12-16 05:31:53','2001-12-16 05:31:53','p','p');
+INSERT INTO `t4` VALUES (34,3,8,NULL,NULL,'23:04:47','23:04:47','2003-07-19 18:03:28','2003-07-19 18:03:28','s','s');
+INSERT INTO `t4` VALUES (35,1,1,'1900-01-01','1900-01-01','22:05:43','22:05:43','2001-03-27 11:44:10','2001-03-27 11:44:10','e','e');
+INSERT INTO `t4` VALUES (36,75,255,'2005-12-22','2005-12-22','02:05:45','02:05:45','2008-06-15 02:13:00','2008-06-15 02:13:00','d','d');
+INSERT INTO `t4` VALUES (37,9,9,'2005-05-03','2005-05-03','00:00:00','00:00:00','2009-03-14 21:29:56','2009-03-14 21:29:56','d','d');
+INSERT INTO `t4` VALUES (38,7,9,'2003-05-27','2003-05-27','18:09:07','18:09:07','2005-01-02 00:00:00','2005-01-02 00:00:00','c','c');
+INSERT INTO `t4` VALUES (39,NULL,3,'2006-05-25','2006-05-25','10:54:06','10:54:06','2007-07-16 04:44:07','2007-07-16 04:44:07','b','b');
+INSERT INTO `t4` VALUES (40,NULL,9,NULL,NULL,'23:15:50','23:15:50','2003-08-26 21:38:26','2003-08-26 21:38:26','t','t');
+INSERT INTO `t4` VALUES (41,4,6,'2009-01-04','2009-01-04','10:17:40','10:17:40','2004-04-19 04:18:47','2004-04-19 04:18:47',NULL,NULL);
+INSERT INTO `t4` VALUES (42,0,4,'2009-02-14','2009-02-14','03:37:09','03:37:09','2000-01-06 20:32:48','2000-01-06 20:32:48','y','y');
+INSERT INTO `t4` VALUES (43,204,60,'2003-01-16','2003-01-16','22:26:06','22:26:06','2006-06-23 13:27:17','2006-06-23 13:27:17','c','c');
+INSERT INTO `t4` VALUES (44,0,7,'1900-01-01','1900-01-01','17:10:38','17:10:38','2007-11-27 00:00:00','2007-11-27 00:00:00','d','d');
+INSERT INTO `t4` VALUES (45,9,1,'2007-06-26','2007-06-26','00:00:00','00:00:00','2002-04-03 12:06:51','2002-04-03 12:06:51','x','x');
+INSERT INTO `t4` VALUES (46,8,6,'2004-03-27','2004-03-27','17:08:49','17:08:49','2008-12-28 09:47:42','2008-12-28 09:47:42','p','p');
+INSERT INTO `t4` VALUES (47,7,4,NULL,NULL,'19:04:40','19:04:40','2002-04-04 10:07:54','2002-04-04 10:07:54','e','e');
+INSERT INTO `t4` VALUES (48,8,NULL,'2005-06-06','2005-06-06','20:53:28','20:53:28','2003-04-26 02:55:13','2003-04-26 02:55:13','g','g');
+INSERT INTO `t4` VALUES (49,NULL,8,'2003-03-02','2003-03-02','11:46:03','11:46:03',NULL,NULL,'x','x');
+INSERT INTO `t4` VALUES (50,6,0,'2004-05-13','2004-05-13',NULL,NULL,'2009-02-19 03:17:06','2009-02-19 03:17:06','s','s');
+INSERT INTO `t4` VALUES (51,5,8,'2005-09-13','2005-09-13','10:58:07','10:58:07','1900-01-01 00:00:00','1900-01-01 00:00:00','e','e');
+INSERT INTO `t4` VALUES (52,2,151,'2005-10-03','2005-10-03','00:00:00','00:00:00','2000-11-10 08:20:01','2000-11-10 08:20:01','l','l');
+INSERT INTO `t4` VALUES (53,3,7,'2005-10-14','2005-10-14','09:43:15','09:43:15','2008-02-10 00:00:00','2008-02-10 00:00:00','p','p');
+INSERT INTO `t4` VALUES (54,7,6,NULL,NULL,'21:40:32','21:40:32','1900-01-01 00:00:00','1900-01-01 00:00:00','h','h');
+INSERT INTO `t4` VALUES (55,NULL,NULL,'2005-09-16','2005-09-16','00:17:44','00:17:44',NULL,NULL,'m','m');
+INSERT INTO `t4` VALUES (56,145,23,'2005-03-10','2005-03-10','16:47:26','16:47:26','2001-02-05 02:01:50','2001-02-05 02:01:50','n','n');
+INSERT INTO `t4` VALUES (57,0,2,'2000-06-19','2000-06-19','00:00:00','00:00:00','2000-10-28 08:44:25','2000-10-28 08:44:25','v','v');
+INSERT INTO `t4` VALUES (58,1,4,'2002-11-03','2002-11-03','05:25:59','05:25:59','2005-03-20 10:53:59','2005-03-20 10:53:59','b','b');
+INSERT INTO `t4` VALUES (59,7,NULL,'2009-01-05','2009-01-05','00:00:00','00:00:00','2001-06-02 13:54:13','2001-06-02 13:54:13','x','x');
+INSERT INTO `t4` VALUES (60,3,NULL,'2003-05-22','2003-05-22','20:33:04','20:33:04','1900-01-01 00:00:00','1900-01-01 00:00:00','r','r');
+INSERT INTO `t4` VALUES (61,NULL,77,'2005-07-02','2005-07-02','00:46:12','00:46:12','2009-07-16 13:05:43','2009-07-16 13:05:43','t','t');
+INSERT INTO `t4` VALUES (62,2,NULL,'1900-01-01','1900-01-01','00:00:00','00:00:00','2009-03-26 23:16:20','2009-03-26 23:16:20','w','w');
+INSERT INTO `t4` VALUES (63,2,NULL,'2006-06-21','2006-06-21','02:13:59','02:13:59','2003-02-06 18:12:15','2003-02-06 18:12:15','w','w');
+INSERT INTO `t4` VALUES (64,2,7,NULL,NULL,'02:54:47','02:54:47','2006-06-05 03:22:51','2006-06-05 03:22:51','k','k');
+INSERT INTO `t4` VALUES (65,8,1,'2005-12-16','2005-12-16','18:13:59','18:13:59','2002-02-10 05:47:27','2002-02-10 05:47:27','a','a');
+INSERT INTO `t4` VALUES (66,6,9,'2004-11-05','2004-11-05','13:53:08','13:53:08','2001-08-01 08:50:52','2001-08-01 08:50:52','t','t');
+INSERT INTO `t4` VALUES (67,1,6,NULL,NULL,'22:21:30','22:21:30','1900-01-01 00:00:00','1900-01-01 00:00:00','z','z');
+INSERT INTO `t4` VALUES (68,NULL,2,'2004-09-14','2004-09-14','11:41:50','11:41:50',NULL,NULL,'e','e');
+INSERT INTO `t4` VALUES (69,1,3,'2002-04-06','2002-04-06','15:20:02','15:20:02','1900-01-01 00:00:00','1900-01-01 00:00:00','q','q');
+INSERT INTO `t4` VALUES (70,0,0,NULL,NULL,NULL,NULL,'2000-09-23 00:00:00','2000-09-23 00:00:00','e','e');
+INSERT INTO `t4` VALUES (71,4,NULL,'2002-11-13','2002-11-13',NULL,NULL,'2007-07-09 08:32:49','2007-07-09 08:32:49','v','v');
+INSERT INTO `t4` VALUES (72,1,6,'2006-05-27','2006-05-27','07:51:52','07:51:52','2000-01-05 00:00:00','2000-01-05 00:00:00','d','d');
+INSERT INTO `t4` VALUES (73,1,3,'2000-12-22','2000-12-22','00:00:00','00:00:00','2000-09-24 00:00:00','2000-09-24 00:00:00','u','u');
+INSERT INTO `t4` VALUES (74,27,195,'2004-02-21','2004-02-21',NULL,NULL,'2005-05-06 00:00:00','2005-05-06 00:00:00','o','o');
+INSERT INTO `t4` VALUES (75,4,5,'2009-05-15','2009-05-15',NULL,NULL,'2000-03-11 00:00:00','2000-03-11 00:00:00','b','b');
+INSERT INTO `t4` VALUES (76,6,2,'2008-12-12','2008-12-12','12:31:05','12:31:05','2001-09-02 16:17:35','2001-09-02 16:17:35','c','c');
+INSERT INTO `t4` VALUES (77,2,7,'2000-04-15','2000-04-15','00:00:00','00:00:00','2006-04-25 05:43:44','2006-04-25 05:43:44','q','q');
+INSERT INTO `t4` VALUES (78,248,25,NULL,NULL,'01:16:45','01:16:45','2009-10-25 22:04:02','2009-10-25 22:04:02',NULL,NULL);
+INSERT INTO `t4` VALUES (79,NULL,NULL,'2001-10-18','2001-10-18','20:38:54','20:38:54','2004-08-06 00:00:00','2004-08-06 00:00:00','h','h');
+INSERT INTO `t4` VALUES (80,9,0,'2008-05-25','2008-05-25','00:30:15','00:30:15','2001-11-27 05:07:57','2001-11-27 05:07:57','d','d');
+INSERT INTO `t4` VALUES (81,75,98,'2004-12-02','2004-12-02','23:46:36','23:46:36','2009-06-28 03:18:39','2009-06-28 03:18:39','w','w');
+INSERT INTO `t4` VALUES (82,2,6,'2002-02-15','2002-02-15','19:03:13','19:03:13','2000-03-12 00:00:00','2000-03-12 00:00:00','m','m');
+INSERT INTO `t4` VALUES (83,9,5,'2002-03-03','2002-03-03','10:54:27','10:54:27',NULL,NULL,'i','i');
+INSERT INTO `t4` VALUES (84,4,0,NULL,NULL,'00:25:47','00:25:47','2007-10-20 00:00:00','2007-10-20 00:00:00','w','w');
+INSERT INTO `t4` VALUES (85,0,3,'2003-01-26','2003-01-26','08:44:27','08:44:27','2009-09-27 00:00:00','2009-09-27 00:00:00','f','f');
+INSERT INTO `t4` VALUES (86,0,1,'2001-12-19','2001-12-19','08:15:38','08:15:38','2002-07-16 00:00:00','2002-07-16 00:00:00','k','k');
+INSERT INTO `t4` VALUES (87,1,1,'2001-08-07','2001-08-07','19:56:21','19:56:21','2005-02-20 00:00:00','2005-02-20 00:00:00','v','v');
+INSERT INTO `t4` VALUES (88,119,147,'2005-02-16','2005-02-16','00:00:00','00:00:00',NULL,NULL,'c','c');
+INSERT INTO `t4` VALUES (89,1,3,'2006-06-10','2006-06-10','20:50:52','20:50:52','2001-07-16 00:00:00','2001-07-16 00:00:00','y','y');
+INSERT INTO `t4` VALUES (90,7,3,NULL,NULL,'03:54:39','03:54:39','2009-05-20 21:04:12','2009-05-20 21:04:12','h','h');
+INSERT INTO `t4` VALUES (91,2,NULL,'2005-04-06','2005-04-06','23:58:17','23:58:17','2002-03-13 10:55:40','2002-03-13 10:55:40',NULL,NULL);
+INSERT INTO `t4` VALUES (92,7,2,'2003-04-27','2003-04-27','12:54:58','12:54:58','2005-07-12 00:00:00','2005-07-12 00:00:00','t','t');
+INSERT INTO `t4` VALUES (93,2,1,'2005-10-13','2005-10-13','04:02:43','04:02:43','2006-07-22 09:46:34','2006-07-22 09:46:34','l','l');
+INSERT INTO `t4` VALUES (94,6,8,'2003-10-02','2003-10-02','11:31:12','11:31:12','2001-09-01 00:00:00','2001-09-01 00:00:00','a','a');
+INSERT INTO `t4` VALUES (95,4,8,'2005-09-09','2005-09-09','20:20:04','20:20:04','2002-05-27 18:38:45','2002-05-27 18:38:45','r','r');
+INSERT INTO `t4` VALUES (96,5,8,NULL,NULL,'00:22:24','00:22:24',NULL,NULL,'s','s');
+INSERT INTO `t4` VALUES (97,7,0,'2006-02-15','2006-02-15','10:09:31','10:09:31',NULL,NULL,'z','z');
+INSERT INTO `t4` VALUES (98,1,1,'1900-01-01','1900-01-01',NULL,NULL,'2009-08-08 22:38:53','2009-08-08 22:38:53','j','j');
+INSERT INTO `t4` VALUES (99,7,8,'2003-12-24','2003-12-24','18:45:35','18:45:35',NULL,NULL,'c','c');
+INSERT INTO `t4` VALUES (100,2,5,'2001-07-26','2001-07-26','11:49:25','11:49:25','2007-04-25 05:08:49','2007-04-25 05:08:49','f','f');
+SET @@optimizer_switch='subquery_cache=off';
+/* cache is off */ SELECT COUNT( DISTINCT table2 .`col_int_key` ) , (
+SELECT SUBQUERY2_t1 .`col_int_key`
+FROM t3 SUBQUERY2_t1 JOIN t2 ON SUBQUERY2_t1 .`col_int_key`
+WHERE table1 .`col_varchar_key` ) , table2 .`col_varchar_nokey` field10
+FROM t4 table1 JOIN ( t1 table2 STRAIGHT_JOIN t1 table3 ON table2 .`pk` ) ON table3 .`col_varchar_key` = table2 .`col_varchar_key`
+GROUP BY field10 ;
+COUNT( DISTINCT table2 .`col_int_key` ) (
+SELECT SUBQUERY2_t1 .`col_int_key`
+FROM t3 SUBQUERY2_t1 JOIN t2 ON SUBQUERY2_t1 .`col_int_key`
+WHERE table1 .`col_varchar_key` ) field10
+1 NULL c
+1 NULL d
+1 NULL e
+1 NULL f
+1 NULL h
+1 NULL j
+2 NULL k
+2 NULL m
+1 NULL n
+1 NULL o
+0 NULL r
+2 NULL t
+1 NULL u
+1 NULL w
+1 NULL y
+SET @@optimizer_switch='subquery_cache=on';
+/* cache is on */ SELECT COUNT( DISTINCT table2 .`col_int_key` ) , (
+SELECT SUBQUERY2_t1 .`col_int_key`
+FROM t3 SUBQUERY2_t1 JOIN t2 ON SUBQUERY2_t1 .`col_int_key`
+WHERE table1 .`col_varchar_key` ) , table2 .`col_varchar_nokey` field10
+FROM t4 table1 JOIN ( t1 table2 STRAIGHT_JOIN t1 table3 ON table2 .`pk` ) ON table3 .`col_varchar_key` = table2 .`col_varchar_key`
+GROUP BY field10 ;
+COUNT( DISTINCT table2 .`col_int_key` ) (
+SELECT SUBQUERY2_t1 .`col_int_key`
+FROM t3 SUBQUERY2_t1 JOIN t2 ON SUBQUERY2_t1 .`col_int_key`
+WHERE table1 .`col_varchar_key` ) field10
+1 NULL c
+1 NULL d
+1 NULL e
+1 NULL f
+1 NULL h
+1 NULL j
+2 NULL k
+2 NULL m
+1 NULL n
+1 NULL o
+0 NULL r
+2 NULL t
+1 NULL u
+1 NULL w
+1 NULL y
+drop table t1,t2,t3,t4;
+set @@optimizer_switch= default;
+#launchpad BUG#609045
+CREATE TABLE `t2` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_int_key` int(11) DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=30 DEFAULT CHARSET=latin1;
+INSERT INTO `t2` VALUES (10,7,8,'v','v');
+INSERT INTO `t2` VALUES (11,1,9,'r','r');
+INSERT INTO `t2` VALUES (12,5,9,'a','a');
+INSERT INTO `t2` VALUES (13,3,186,'m','m');
+INSERT INTO `t2` VALUES (14,6,NULL,'y','y');
+INSERT INTO `t2` VALUES (15,92,2,'j','j');
+INSERT INTO `t2` VALUES (16,7,3,'d','d');
+INSERT INTO `t2` VALUES (17,NULL,0,'z','z');
+INSERT INTO `t2` VALUES (18,3,133,'e','e');
+INSERT INTO `t2` VALUES (19,5,1,'h','h');
+INSERT INTO `t2` VALUES (20,1,8,'b','b');
+INSERT INTO `t2` VALUES (21,2,5,'s','s');
+INSERT INTO `t2` VALUES (22,NULL,5,'e','e');
+INSERT INTO `t2` VALUES (23,1,8,'j','j');
+INSERT INTO `t2` VALUES (24,0,6,'e','e');
+INSERT INTO `t2` VALUES (25,210,51,'f','f');
+INSERT INTO `t2` VALUES (26,8,4,'v','v');
+INSERT INTO `t2` VALUES (27,7,7,'x','x');
+INSERT INTO `t2` VALUES (28,5,6,'m','m');
+INSERT INTO `t2` VALUES (29,NULL,4,'c','c');
+CREATE TABLE `t1` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_int_key` int(11) DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=21 DEFAULT CHARSET=latin1;
+INSERT INTO `t1` VALUES (1,NULL,2,'w','w');
+INSERT INTO `t1` VALUES (2,7,9,'m','m');
+INSERT INTO `t1` VALUES (3,9,3,'m','m');
+INSERT INTO `t1` VALUES (4,7,9,'k','k');
+INSERT INTO `t1` VALUES (5,4,NULL,'r','r');
+INSERT INTO `t1` VALUES (6,2,9,'t','t');
+INSERT INTO `t1` VALUES (7,6,3,'j','j');
+INSERT INTO `t1` VALUES (8,8,8,'u','u');
+INSERT INTO `t1` VALUES (9,NULL,8,'h','h');
+INSERT INTO `t1` VALUES (10,5,53,'o','o');
+INSERT INTO `t1` VALUES (11,NULL,0,NULL,NULL);
+INSERT INTO `t1` VALUES (12,6,5,'k','k');
+INSERT INTO `t1` VALUES (13,188,166,'e','e');
+INSERT INTO `t1` VALUES (14,2,3,'n','n');
+INSERT INTO `t1` VALUES (15,1,0,'t','t');
+INSERT INTO `t1` VALUES (16,1,1,'c','c');
+INSERT INTO `t1` VALUES (17,0,9,'m','m');
+INSERT INTO `t1` VALUES (18,9,5,'y','y');
+INSERT INTO `t1` VALUES (19,NULL,6,'f','f');
+INSERT INTO `t1` VALUES (20,4,2,'d','d');
+SET @@optimizer_switch = 'subquery_cache=off';
+/* cache is off */ SELECT SUM( DISTINCT table1 .`pk` ) , (
+SELECT MAX( `col_int_nokey` )
+FROM t1
+WHERE table1 .`pk` ) field3
+FROM t1 table1
+JOIN (
+t1 table2
+JOIN t2 table3
+ON table3 .`col_varchar_key` = table2 .`col_varchar_key`
+)
+ON table3 .`col_varchar_key` = table2 .`col_varchar_nokey`
+GROUP BY field3 ;
+SUM( DISTINCT table1 .`pk` ) field3
+210 188
+SET @@optimizer_switch = 'subquery_cache=on';
+/* cache is on */ SELECT SUM( DISTINCT table1 .`pk` ) , (
+SELECT MAX( `col_int_nokey` )
+FROM t1
+WHERE table1 .`pk` ) field3
+FROM t1 table1
+JOIN (
+t1 table2
+JOIN t2 table3
+ON table3 .`col_varchar_key` = table2 .`col_varchar_key`
+)
+ON table3 .`col_varchar_key` = table2 .`col_varchar_nokey`
+GROUP BY field3 ;
+SUM( DISTINCT table1 .`pk` ) field3
+210 188
+drop table t1,t2;
+set @@optimizer_switch= default;
+#launchpad BUG#609052
+CREATE TABLE `t2` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_int_key` int(11) DEFAULT NULL,
+`col_time_key` time DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_time_key` (`col_time_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=30 DEFAULT CHARSET=latin1;
+INSERT INTO `t2` VALUES (10,7,8,'01:27:35','v','v');
+INSERT INTO `t2` VALUES (11,1,9,'19:48:31','r','r');
+INSERT INTO `t2` VALUES (12,5,9,'00:00:00','a','a');
+INSERT INTO `t2` VALUES (13,3,186,'19:53:05','m','m');
+INSERT INTO `t2` VALUES (14,6,NULL,'19:18:56','y','y');
+INSERT INTO `t2` VALUES (15,92,2,'10:55:12','j','j');
+INSERT INTO `t2` VALUES (16,7,3,'00:25:00','d','d');
+INSERT INTO `t2` VALUES (17,NULL,0,'12:35:47','z','z');
+INSERT INTO `t2` VALUES (18,3,133,'19:53:03','e','e');
+INSERT INTO `t2` VALUES (19,5,1,'17:53:30','h','h');
+INSERT INTO `t2` VALUES (20,1,8,'11:35:49','b','b');
+INSERT INTO `t2` VALUES (21,2,5,NULL,'s','s');
+INSERT INTO `t2` VALUES (22,NULL,5,'06:01:40','e','e');
+INSERT INTO `t2` VALUES (23,1,8,'05:45:11','j','j');
+INSERT INTO `t2` VALUES (24,0,6,'00:00:00','e','e');
+INSERT INTO `t2` VALUES (25,210,51,'00:00:00','f','f');
+INSERT INTO `t2` VALUES (26,8,4,'06:11:01','v','v');
+INSERT INTO `t2` VALUES (27,7,7,'13:02:46','x','x');
+INSERT INTO `t2` VALUES (28,5,6,'21:44:25','m','m');
+INSERT INTO `t2` VALUES (29,NULL,4,'22:43:58','c','c');
+CREATE TABLE `t4` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_int_key` int(11) DEFAULT NULL,
+`col_time_key` time DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_time_key` (`col_time_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=101 DEFAULT CHARSET=latin1;
+INSERT INTO `t4` VALUES (1,6,NULL,NULL,'r','r');
+INSERT INTO `t4` VALUES (2,8,0,'14:34:45','c','c');
+INSERT INTO `t4` VALUES (3,6,0,'11:49:48','o','o');
+INSERT INTO `t4` VALUES (4,6,7,'18:12:55','c','c');
+INSERT INTO `t4` VALUES (5,3,8,'18:30:05','d','d');
+INSERT INTO `t4` VALUES (6,9,4,'14:19:30','v','v');
+INSERT INTO `t4` VALUES (7,2,6,'05:20:04','m','m');
+INSERT INTO `t4` VALUES (8,1,5,'20:29:31','j','j');
+INSERT INTO `t4` VALUES (9,8,NULL,'07:08:09','f','f');
+INSERT INTO `t4` VALUES (10,0,NULL,'14:49:14','n','n');
+INSERT INTO `t4` VALUES (11,9,8,'00:00:00','z','z');
+INSERT INTO `t4` VALUES (12,8,8,'09:58:06','h','h');
+INSERT INTO `t4` VALUES (13,NULL,8,NULL,'q','q');
+INSERT INTO `t4` VALUES (14,0,1,'18:24:16','w','w');
+INSERT INTO `t4` VALUES (15,5,1,'17:39:57','z','z');
+INSERT INTO `t4` VALUES (16,1,5,'08:23:21','j','j');
+INSERT INTO `t4` VALUES (17,1,2,NULL,'a','a');
+INSERT INTO `t4` VALUES (18,6,7,'21:50:46','m','m');
+INSERT INTO `t4` VALUES (19,6,6,'12:33:17','n','n');
+INSERT INTO `t4` VALUES (20,1,4,'03:06:43','e','e');
+INSERT INTO `t4` VALUES (21,8,7,'03:46:14','u','u');
+INSERT INTO `t4` VALUES (22,1,0,'20:34:52','s','s');
+INSERT INTO `t4` VALUES (23,0,9,NULL,'u','u');
+INSERT INTO `t4` VALUES (24,4,3,'10:41:20','r','r');
+INSERT INTO `t4` VALUES (25,9,5,'08:43:11','g','g');
+INSERT INTO `t4` VALUES (26,8,1,NULL,'o','o');
+INSERT INTO `t4` VALUES (27,5,1,'10:17:51','w','w');
+INSERT INTO `t4` VALUES (28,9,5,'06:34:09','b','b');
+INSERT INTO `t4` VALUES (29,5,9,'21:22:47',NULL,NULL);
+INSERT INTO `t4` VALUES (30,NULL,2,'04:02:32','y','y');
+INSERT INTO `t4` VALUES (31,NULL,5,'02:33:14','y','y');
+INSERT INTO `t4` VALUES (32,105,248,'16:32:56','u','u');
+INSERT INTO `t4` VALUES (33,0,0,'21:32:42','p','p');
+INSERT INTO `t4` VALUES (34,3,8,'23:04:47','s','s');
+INSERT INTO `t4` VALUES (35,1,1,'22:05:43','e','e');
+INSERT INTO `t4` VALUES (36,75,255,'02:05:45','d','d');
+INSERT INTO `t4` VALUES (37,9,9,'00:00:00','d','d');
+INSERT INTO `t4` VALUES (38,7,9,'18:09:07','c','c');
+INSERT INTO `t4` VALUES (39,NULL,3,'10:54:06','b','b');
+INSERT INTO `t4` VALUES (40,NULL,9,'23:15:50','t','t');
+INSERT INTO `t4` VALUES (41,4,6,'10:17:40',NULL,NULL);
+INSERT INTO `t4` VALUES (42,0,4,'03:37:09','y','y');
+INSERT INTO `t4` VALUES (43,204,60,'22:26:06','c','c');
+INSERT INTO `t4` VALUES (44,0,7,'17:10:38','d','d');
+INSERT INTO `t4` VALUES (45,9,1,'00:00:00','x','x');
+INSERT INTO `t4` VALUES (46,8,6,'17:08:49','p','p');
+INSERT INTO `t4` VALUES (47,7,4,'19:04:40','e','e');
+INSERT INTO `t4` VALUES (48,8,NULL,'20:53:28','g','g');
+INSERT INTO `t4` VALUES (49,NULL,8,'11:46:03','x','x');
+INSERT INTO `t4` VALUES (50,6,0,NULL,'s','s');
+INSERT INTO `t4` VALUES (51,5,8,'10:58:07','e','e');
+INSERT INTO `t4` VALUES (52,2,151,'00:00:00','l','l');
+INSERT INTO `t4` VALUES (53,3,7,'09:43:15','p','p');
+INSERT INTO `t4` VALUES (54,7,6,'21:40:32','h','h');
+INSERT INTO `t4` VALUES (55,NULL,NULL,'00:17:44','m','m');
+INSERT INTO `t4` VALUES (56,145,23,'16:47:26','n','n');
+INSERT INTO `t4` VALUES (57,0,2,'00:00:00','v','v');
+INSERT INTO `t4` VALUES (58,1,4,'05:25:59','b','b');
+INSERT INTO `t4` VALUES (59,7,NULL,'00:00:00','x','x');
+INSERT INTO `t4` VALUES (60,3,NULL,'20:33:04','r','r');
+INSERT INTO `t4` VALUES (61,NULL,77,'00:46:12','t','t');
+INSERT INTO `t4` VALUES (62,2,NULL,'00:00:00','w','w');
+INSERT INTO `t4` VALUES (63,2,NULL,'02:13:59','w','w');
+INSERT INTO `t4` VALUES (64,2,7,'02:54:47','k','k');
+INSERT INTO `t4` VALUES (65,8,1,'18:13:59','a','a');
+INSERT INTO `t4` VALUES (66,6,9,'13:53:08','t','t');
+INSERT INTO `t4` VALUES (67,1,6,'22:21:30','z','z');
+INSERT INTO `t4` VALUES (68,NULL,2,'11:41:50','e','e');
+INSERT INTO `t4` VALUES (69,1,3,'15:20:02','q','q');
+INSERT INTO `t4` VALUES (70,0,0,NULL,'e','e');
+INSERT INTO `t4` VALUES (71,4,NULL,NULL,'v','v');
+INSERT INTO `t4` VALUES (72,1,6,'07:51:52','d','d');
+INSERT INTO `t4` VALUES (73,1,3,'00:00:00','u','u');
+INSERT INTO `t4` VALUES (74,27,195,NULL,'o','o');
+INSERT INTO `t4` VALUES (75,4,5,NULL,'b','b');
+INSERT INTO `t4` VALUES (76,6,2,'12:31:05','c','c');
+INSERT INTO `t4` VALUES (77,2,7,'00:00:00','q','q');
+INSERT INTO `t4` VALUES (78,248,25,'01:16:45',NULL,NULL);
+INSERT INTO `t4` VALUES (79,NULL,NULL,'20:38:54','h','h');
+INSERT INTO `t4` VALUES (80,9,0,'00:30:15','d','d');
+INSERT INTO `t4` VALUES (81,75,98,'23:46:36','w','w');
+INSERT INTO `t4` VALUES (82,2,6,'19:03:13','m','m');
+INSERT INTO `t4` VALUES (83,9,5,'10:54:27','i','i');
+INSERT INTO `t4` VALUES (84,4,0,'00:25:47','w','w');
+INSERT INTO `t4` VALUES (85,0,3,'08:44:27','f','f');
+INSERT INTO `t4` VALUES (86,0,1,'08:15:38','k','k');
+INSERT INTO `t4` VALUES (87,1,1,'19:56:21','v','v');
+INSERT INTO `t4` VALUES (88,119,147,'00:00:00','c','c');
+INSERT INTO `t4` VALUES (89,1,3,'20:50:52','y','y');
+INSERT INTO `t4` VALUES (90,7,3,'03:54:39','h','h');
+INSERT INTO `t4` VALUES (91,2,NULL,'23:58:17',NULL,NULL);
+INSERT INTO `t4` VALUES (92,7,2,'12:54:58','t','t');
+INSERT INTO `t4` VALUES (93,2,1,'04:02:43','l','l');
+INSERT INTO `t4` VALUES (94,6,8,'11:31:12','a','a');
+INSERT INTO `t4` VALUES (95,4,8,'20:20:04','r','r');
+INSERT INTO `t4` VALUES (96,5,8,'00:22:24','s','s');
+INSERT INTO `t4` VALUES (97,7,0,'10:09:31','z','z');
+INSERT INTO `t4` VALUES (98,1,1,NULL,'j','j');
+INSERT INTO `t4` VALUES (99,7,8,'18:45:35','c','c');
+INSERT INTO `t4` VALUES (100,2,5,'11:49:25','f','f');
+CREATE TABLE `t1` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_int_key` int(11) DEFAULT NULL,
+`col_time_key` time DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_time_key` (`col_time_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=21 DEFAULT CHARSET=latin1;
+INSERT INTO `t1` VALUES (1,NULL,2,'11:28:45','w','w');
+INSERT INTO `t1` VALUES (2,7,9,'20:25:14','m','m');
+INSERT INTO `t1` VALUES (3,9,3,'13:47:24','m','m');
+INSERT INTO `t1` VALUES (4,7,9,'19:24:11','k','k');
+INSERT INTO `t1` VALUES (5,4,NULL,'15:59:13','r','r');
+INSERT INTO `t1` VALUES (6,2,9,'00:00:00','t','t');
+INSERT INTO `t1` VALUES (7,6,3,'15:15:04','j','j');
+INSERT INTO `t1` VALUES (8,8,8,'11:32:06','u','u');
+INSERT INTO `t1` VALUES (9,NULL,8,'18:32:33','h','h');
+INSERT INTO `t1` VALUES (10,5,53,'15:19:25','o','o');
+INSERT INTO `t1` VALUES (11,NULL,0,'19:03:19',NULL,NULL);
+INSERT INTO `t1` VALUES (12,6,5,'00:39:46','k','k');
+INSERT INTO `t1` VALUES (13,188,166,NULL,'e','e');
+INSERT INTO `t1` VALUES (14,2,3,'00:00:00','n','n');
+INSERT INTO `t1` VALUES (15,1,0,'13:12:11','t','t');
+INSERT INTO `t1` VALUES (16,1,1,'04:56:48','c','c');
+INSERT INTO `t1` VALUES (17,0,9,'19:56:05','m','m');
+INSERT INTO `t1` VALUES (18,9,5,'19:35:19','y','y');
+INSERT INTO `t1` VALUES (19,NULL,6,'05:03:03','f','f');
+INSERT INTO `t1` VALUES (20,4,2,'18:38:59','d','d');
+CREATE TABLE `t3` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_int_key` int(11) DEFAULT NULL,
+`col_time_key` time DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_time_key` (`col_time_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=11 DEFAULT CHARSET=latin1;
+INSERT INTO `t3` VALUES (10,8,8,'18:27:58',NULL,NULL);
+CREATE TABLE `t5` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`col_int_nokey` int(11) DEFAULT NULL,
+`col_int_key` int(11) DEFAULT NULL,
+`col_time_key` time DEFAULT NULL,
+`col_varchar_key` varchar(1) DEFAULT NULL,
+`col_varchar_nokey` varchar(1) DEFAULT NULL,
+PRIMARY KEY (`pk`),
+KEY `col_int_key` (`col_int_key`),
+KEY `col_time_key` (`col_time_key`),
+KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=2 DEFAULT CHARSET=latin1;
+INSERT INTO `t5` VALUES (1,1,7,'01:13:38','f','f');
+SET @@optimizer_switch='subquery_cache=off';
+/* cache is off */ SELECT SQL_SMALL_RESULT MAX( DISTINCT table1 . `col_varchar_key` ) AS field1 , MIN( table1 . `col_varchar_nokey` ) AS field2 , COUNT( table1 . `col_varchar_key` ) AS field3 , table2 . `col_time_key` AS field4 , COUNT( DISTINCT table2 . `col_int_key` ) AS field5 , (
+SELECT MAX( SUBQUERY1_t2 . `col_int_nokey` ) AS SUBQUERY1_field1
+FROM ( t3 AS SUBQUERY1_t1 INNER JOIN t1 AS SUBQUERY1_t2 ON (SUBQUERY1_t2 . `col_varchar_key` = SUBQUERY1_t1 . `col_varchar_nokey` ) )
+WHERE SUBQUERY1_t2 . `pk` < SUBQUERY1_t2 . `pk` ) AS field6 , COUNT( table1 . `col_varchar_nokey` ) AS field7 , COUNT( table2 . `pk` ) AS field8 , (
+SELECT MAX( SUBQUERY2_t1 . `col_int_key` ) AS SUBQUERY2_field1
+FROM ( t5 AS SUBQUERY2_t1 LEFT JOIN t2 AS SUBQUERY2_t2 ON (SUBQUERY2_t2 . `col_int_key` = SUBQUERY2_t1 . `col_int_key` ) )
+WHERE SUBQUERY2_t2 . `col_varchar_nokey` != table1 . `col_varchar_key` OR SUBQUERY2_t1 . `col_varchar_nokey` >= 'o' ) AS field9 , CONCAT ( table1 . `col_varchar_key` , table2 . `col_varchar_nokey` ) AS field10
+FROM ( t4 AS table1 LEFT JOIN ( ( t1 AS table2 STRAIGHT_JOIN t1 AS table3 ON (table3 . `col_int_nokey` = table2 . `pk` ) ) ) ON (table3 . `col_varchar_key` = table2 . `col_varchar_key` ) )
+WHERE ( EXISTS (
+SELECT SUBQUERY3_t1 . `pk` AS SUBQUERY3_field1
+FROM ( t4 AS SUBQUERY3_t1 INNER JOIN t4 AS SUBQUERY3_t2 ON (SUBQUERY3_t2 . `col_varchar_key` = SUBQUERY3_t1 . `col_varchar_key` ) )
+WHERE SUBQUERY3_t1 . `col_int_key` > table3 . `pk` AND SUBQUERY3_t1 . `pk` != table3 . `pk` ) ) AND ( table1 . `pk` > 116 AND table1 . `pk` < ( 116 + 175 ) OR table1 . `pk` IN (251) ) OR table1 . `col_int_nokey` = table1 . `col_int_nokey`
+GROUP BY field4, field6, field9, field10
+HAVING field10 = 'c'
+;
+field1 field2 field3 field4 field5 field6 field7 field8 field9 field10
+SET @@optimizer_switch='subquery_cache=on';
+/* cache is on */ SELECT SQL_SMALL_RESULT MAX( DISTINCT table1 . `col_varchar_key` ) AS field1 , MIN( table1 . `col_varchar_nokey` ) AS field2 , COUNT( table1 . `col_varchar_key` ) AS field3 , table2 . `col_time_key` AS field4 , COUNT( DISTINCT table2 . `col_int_key` ) AS field5 , (
+SELECT MAX( SUBQUERY1_t2 . `col_int_nokey` ) AS SUBQUERY1_field1
+FROM ( t3 AS SUBQUERY1_t1 INNER JOIN t1 AS SUBQUERY1_t2 ON (SUBQUERY1_t2 . `col_varchar_key` = SUBQUERY1_t1 . `col_varchar_nokey` ) )
+WHERE SUBQUERY1_t2 . `pk` < SUBQUERY1_t2 . `pk` ) AS field6 , COUNT( table1 . `col_varchar_nokey` ) AS field7 , COUNT( table2 . `pk` ) AS field8 , (
+SELECT MAX( SUBQUERY2_t1 . `col_int_key` ) AS SUBQUERY2_field1
+FROM ( t5 AS SUBQUERY2_t1 LEFT JOIN t2 AS SUBQUERY2_t2 ON (SUBQUERY2_t2 . `col_int_key` = SUBQUERY2_t1 . `col_int_key` ) )
+WHERE SUBQUERY2_t2 . `col_varchar_nokey` != table1 . `col_varchar_key` OR SUBQUERY2_t1 . `col_varchar_nokey` >= 'o' ) AS field9 , CONCAT ( table1 . `col_varchar_key` , table2 . `col_varchar_nokey` ) AS field10
+FROM ( t4 AS table1 LEFT JOIN ( ( t1 AS table2 STRAIGHT_JOIN t1 AS table3 ON (table3 . `col_int_nokey` = table2 . `pk` ) ) ) ON (table3 . `col_varchar_key` = table2 . `col_varchar_key` ) )
+WHERE ( EXISTS (
+SELECT SUBQUERY3_t1 . `pk` AS SUBQUERY3_field1
+FROM ( t4 AS SUBQUERY3_t1 INNER JOIN t4 AS SUBQUERY3_t2 ON (SUBQUERY3_t2 . `col_varchar_key` = SUBQUERY3_t1 . `col_varchar_key` ) )
+WHERE SUBQUERY3_t1 . `col_int_key` > table3 . `pk` AND SUBQUERY3_t1 . `pk` != table3 . `pk` ) ) AND ( table1 . `pk` > 116 AND table1 . `pk` < ( 116 + 175 ) OR table1 . `pk` IN (251) ) OR table1 . `col_int_nokey` = table1 . `col_int_nokey`
+GROUP BY field4, field6, field9, field10
+HAVING field10 = 'c'
+;
+field1 field2 field3 field4 field5 field6 field7 field8 field9 field10
+drop table t1,t2,t3,t4,t5;
+set @@optimizer_switch= default;
=== modified file 'mysql-test/t/subquery_cache.test'
--- a/mysql-test/t/subquery_cache.test 2010-07-10 10:37:30 +0000
+++ b/mysql-test/t/subquery_cache.test 2010-07-29 11:13:48 +0000
@@ -507,3 +507,698 @@
drop table t0,t1,t2;
set optimizer_switch='default';
+
+#
+--echo #launchpad BUG#608834
+#
+CREATE TABLE `t2` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_time_key` time DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_time_key` (`col_time_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=30 DEFAULT CHARSET=latin1;
+INSERT INTO `t2` VALUES (10,7,8,'01:27:35','v','v');
+INSERT INTO `t2` VALUES (11,1,9,'19:48:31','r','r');
+INSERT INTO `t2` VALUES (12,5,9,'00:00:00','a','a');
+INSERT INTO `t2` VALUES (13,3,186,'19:53:05','m','m');
+INSERT INTO `t2` VALUES (14,6,NULL,'19:18:56','y','y');
+INSERT INTO `t2` VALUES (15,92,2,'10:55:12','j','j');
+INSERT INTO `t2` VALUES (16,7,3,'00:25:00','d','d');
+INSERT INTO `t2` VALUES (17,NULL,0,'12:35:47','z','z');
+INSERT INTO `t2` VALUES (18,3,133,'19:53:03','e','e');
+INSERT INTO `t2` VALUES (19,5,1,'17:53:30','h','h');
+INSERT INTO `t2` VALUES (20,1,8,'11:35:49','b','b');
+INSERT INTO `t2` VALUES (21,2,5,NULL,'s','s');
+INSERT INTO `t2` VALUES (22,NULL,5,'06:01:40','e','e');
+INSERT INTO `t2` VALUES (23,1,8,'05:45:11','j','j');
+INSERT INTO `t2` VALUES (24,0,6,'00:00:00','e','e');
+INSERT INTO `t2` VALUES (25,210,51,'00:00:00','f','f');
+INSERT INTO `t2` VALUES (26,8,4,'06:11:01','v','v');
+INSERT INTO `t2` VALUES (27,7,7,'13:02:46','x','x');
+INSERT INTO `t2` VALUES (28,5,6,'21:44:25','m','m');
+INSERT INTO `t2` VALUES (29,NULL,4,'22:43:58','c','c');
+CREATE TABLE `t1` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_time_key` time DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_time_key` (`col_time_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=21 DEFAULT CHARSET=latin1;
+INSERT INTO `t1` VALUES (1,NULL,2,'11:28:45','w','w');
+INSERT INTO `t1` VALUES (2,7,9,'20:25:14','m','m');
+INSERT INTO `t1` VALUES (3,9,3,'13:47:24','m','m');
+INSERT INTO `t1` VALUES (4,7,9,'19:24:11','k','k');
+INSERT INTO `t1` VALUES (5,4,NULL,'15:59:13','r','r');
+INSERT INTO `t1` VALUES (6,2,9,'00:00:00','t','t');
+INSERT INTO `t1` VALUES (7,6,3,'15:15:04','j','j');
+INSERT INTO `t1` VALUES (8,8,8,'11:32:06','u','u');
+INSERT INTO `t1` VALUES (9,NULL,8,'18:32:33','h','h');
+INSERT INTO `t1` VALUES (10,5,53,'15:19:25','o','o');
+INSERT INTO `t1` VALUES (11,NULL,0,'19:03:19',NULL,NULL);
+INSERT INTO `t1` VALUES (12,6,5,'00:39:46','k','k');
+INSERT INTO `t1` VALUES (13,188,166,NULL,'e','e');
+INSERT INTO `t1` VALUES (14,2,3,'00:00:00','n','n');
+INSERT INTO `t1` VALUES (15,1,0,'13:12:11','t','t');
+INSERT INTO `t1` VALUES (16,1,1,'04:56:48','c','c');
+INSERT INTO `t1` VALUES (17,0,9,'19:56:05','m','m');
+INSERT INTO `t1` VALUES (18,9,5,'19:35:19','y','y');
+INSERT INTO `t1` VALUES (19,NULL,6,'05:03:03','f','f');
+INSERT INTO `t1` VALUES (20,4,2,'18:38:59','d','d');
+
+set @@optimizer_switch='subquery_cache=off';
+
+/* cache is off */ SELECT (
+SELECT 4
+FROM DUAL ) AS field1 , SUM( DISTINCT table1 . `pk` ) AS field2 , (
+SELECT MAX( SUBQUERY2_t1 . `col_int_nokey` ) AS SUBQUERY2_field1
+FROM ( t1 AS SUBQUERY2_t1 INNER JOIN t1 AS SUBQUERY2_t2 ON (SUBQUERY2_t2 . `col_int_key` = SUBQUERY2_t1 . `pk` ) )
+WHERE SUBQUERY2_t2 . `col_varchar_nokey` <= table1 . `col_varchar_key` OR SUBQUERY2_t1 . `col_int_nokey` < table1 . `pk` ) AS field3 , table1 . `col_time_key` AS field4 , table1 . `col_int_key` AS field5 , CONCAT ( table2 . `col_varchar_nokey` , table1 . `col_varchar_key` ) AS field6
+FROM ( t1 AS table1 INNER JOIN ( ( t1 AS table2 LEFT JOIN t2 AS table3 ON (table3 . `col_varchar_key` = table2 . `col_varchar_key` ) ) ) ON (table3 . `col_varchar_key` = table2 . `col_varchar_nokey` ) )
+WHERE ( table2 . `col_varchar_nokey` NOT IN (
+SELECT 'd' UNION
+SELECT 'u' ) ) OR table3 . `col_varchar_nokey` <= table1 . `col_varchar_key`
+GROUP BY field1, field3, field4, field5, field6
+ORDER BY table1 . `col_int_key` , field1, field2, field3, field4, field5, field6
+;
+
+set @@optimizer_switch='subquery_cache=on';
+
+/* cache is on */ SELECT (
+SELECT 4
+FROM DUAL ) AS field1 , SUM( DISTINCT table1 . `pk` ) AS field2 , (
+SELECT MAX( SUBQUERY2_t1 . `col_int_nokey` ) AS SUBQUERY2_field1
+FROM ( t1 AS SUBQUERY2_t1 INNER JOIN t1 AS SUBQUERY2_t2 ON (SUBQUERY2_t2 . `col_int_key` = SUBQUERY2_t1 . `pk` ) )
+WHERE SUBQUERY2_t2 . `col_varchar_nokey` <= table1 . `col_varchar_key` OR SUBQUERY2_t1 . `col_int_nokey` < table1 . `pk` ) AS field3 , table1 . `col_time_key` AS field4 , table1 . `col_int_key` AS field5 , CONCAT ( table2 . `col_varchar_nokey` , table1 . `col_varchar_key` ) AS field6
+FROM ( t1 AS table1 INNER JOIN ( ( t1 AS table2 LEFT JOIN t2 AS table3 ON (table3 . `col_varchar_key` = table2 . `col_varchar_key` ) ) ) ON (table3 . `col_varchar_key` = table2 . `col_varchar_nokey` ) )
+WHERE ( table2 . `col_varchar_nokey` NOT IN (
+SELECT 'd' UNION
+SELECT 'u' ) ) OR table3 . `col_varchar_nokey` <= table1 . `col_varchar_key`
+GROUP BY field1, field3, field4, field5, field6
+ORDER BY table1 . `col_int_key` , field1, field2, field3, field4, field5, field6
+;
+
+drop table t1,t2;
+set @@optimizer_switch= default;
+
+#
+--echo #launchpad BUG#609045
+#
+CREATE TABLE `t1` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_date_key` date DEFAULT NULL,
+ `col_date_nokey` date DEFAULT NULL,
+ `col_time_key` time DEFAULT NULL,
+ `col_time_nokey` time DEFAULT NULL,
+ `col_datetime_key` datetime DEFAULT NULL,
+ `col_datetime_nokey` datetime DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_date_key` (`col_date_key`),
+ KEY `col_time_key` (`col_time_key`),
+ KEY `col_datetime_key` (`col_datetime_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=21 DEFAULT CHARSET=latin1;
+
+INSERT INTO `t1` VALUES (1,NULL,2,NULL,NULL,'11:28:45','11:28:45','2004-10-11 18:13:16','2004-10-11 18:13:16','w','w');
+INSERT INTO `t1` VALUES (2,7,9,'2001-09-19','2001-09-19','20:25:14','20:25:14',NULL,NULL,'m','m');
+INSERT INTO `t1` VALUES (3,9,3,'2004-09-12','2004-09-12','13:47:24','13:47:24','1900-01-01 00:00:00','1900-01-01 00:00:00','m','m');
+INSERT INTO `t1` VALUES (4,7,9,NULL,NULL,'19:24:11','19:24:11','2009-07-25 00:00:00','2009-07-25 00:00:00','k','k');
+INSERT INTO `t1` VALUES (5,4,NULL,'2002-07-19','2002-07-19','15:59:13','15:59:13',NULL,NULL,'r','r');
+INSERT INTO `t1` VALUES (6,2,9,'2002-12-16','2002-12-16','00:00:00','00:00:00','2008-07-27 00:00:00','2008-07-27 00:00:00','t','t');
+INSERT INTO `t1` VALUES (7,6,3,'2006-02-08','2006-02-08','15:15:04','15:15:04','2002-11-13 16:37:31','2002-11-13 16:37:31','j','j');
+INSERT INTO `t1` VALUES (8,8,8,'2006-08-28','2006-08-28','11:32:06','11:32:06','1900-01-01 00:00:00','1900-01-01 00:00:00','u','u');
+INSERT INTO `t1` VALUES (9,NULL,8,'2001-04-14','2001-04-14','18:32:33','18:32:33','2003-12-10 00:00:00','2003-12-10 00:00:00','h','h');
+INSERT INTO `t1` VALUES (10,5,53,'2000-01-05','2000-01-05','15:19:25','15:19:25','2001-12-21 22:38:22','2001-12-21 22:38:22','o','o');
+INSERT INTO `t1` VALUES (11,NULL,0,'2003-12-06','2003-12-06','19:03:19','19:03:19','2008-12-13 23:16:44','2008-12-13 23:16:44',NULL,NULL);
+INSERT INTO `t1` VALUES (12,6,5,'1900-01-01','1900-01-01','00:39:46','00:39:46','2005-08-15 12:39:41','2005-08-15 12:39:41','k','k');
+INSERT INTO `t1` VALUES (13,188,166,'2002-11-27','2002-11-27',NULL,NULL,NULL,NULL,'e','e');
+INSERT INTO `t1` VALUES (14,2,3,NULL,NULL,'00:00:00','00:00:00','2006-09-11 12:06:14','2006-09-11 12:06:14','n','n');
+INSERT INTO `t1` VALUES (15,1,0,'2003-05-27','2003-05-27','13:12:11','13:12:11','2007-12-15 12:39:34','2007-12-15 12:39:34','t','t');
+INSERT INTO `t1` VALUES (16,1,1,'2005-05-03','2005-05-03','04:56:48','04:56:48','2005-08-09 00:00:00','2005-08-09 00:00:00','c','c');
+INSERT INTO `t1` VALUES (17,0,9,'2001-04-18','2001-04-18','19:56:05','19:56:05','2001-09-02 22:50:02','2001-09-02 22:50:02','m','m');
+INSERT INTO `t1` VALUES (18,9,5,'2005-12-27','2005-12-27','19:35:19','19:35:19','2005-12-16 22:58:11','2005-12-16 22:58:11','y','y');
+INSERT INTO `t1` VALUES (19,NULL,6,'2004-08-20','2004-08-20','05:03:03','05:03:03','2007-04-19 00:19:53','2007-04-19 00:19:53','f','f');
+INSERT INTO `t1` VALUES (20,4,2,'1900-01-01','1900-01-01','18:38:59','18:38:59','1900-01-01 00:00:00','1900-01-01 00:00:00','d','d');
+
+CREATE TABLE `t2` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_date_key` date DEFAULT NULL,
+ `col_date_nokey` date DEFAULT NULL,
+ `col_time_key` time DEFAULT NULL,
+ `col_time_nokey` time DEFAULT NULL,
+ `col_datetime_key` datetime DEFAULT NULL,
+ `col_datetime_nokey` datetime DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_date_key` (`col_date_key`),
+ KEY `col_time_key` (`col_time_key`),
+ KEY `col_datetime_key` (`col_datetime_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+);
+
+INSERT INTO `t2` VALUES (10,7,8,NULL,NULL,'01:27:35','01:27:35','2002-02-26 06:14:37','2002-02-26 06:14:37','v','v');
+INSERT INTO `t2` VALUES (11,1,9,'2006-06-14','2006-06-14','19:48:31','19:48:31','1900-01-01 00:00:00','1900-01-01 00:00:00','r','r');
+INSERT INTO `t2` VALUES (12,5,9,'2002-09-12','2002-09-12','00:00:00','00:00:00','2006-12-03 09:37:26','2006-12-03 09:37:26','a','a');
+INSERT INTO `t2` VALUES (13,3,186,'2005-02-15','2005-02-15','19:53:05','19:53:05','2008-05-26 12:27:10','2008-05-26 12:27:10','m','m');
+INSERT INTO `t2` VALUES (14,6,NULL,NULL,NULL,'19:18:56','19:18:56','2004-12-14 16:37:30','2004-12-14 16:37:30','y','y');
+INSERT INTO `t2` VALUES (15,92,2,'2008-11-04','2008-11-04','10:55:12','10:55:12','2003-02-11 21:19:41','2003-02-11 21:19:41','j','j');
+INSERT INTO `t2` VALUES (16,7,3,'2004-09-04','2004-09-04','00:25:00','00:25:00','2009-10-18 02:27:49','2009-10-18 02:27:49','d','d');
+INSERT INTO `t2` VALUES (17,NULL,0,'2006-06-05','2006-06-05','12:35:47','12:35:47','2000-09-26 07:45:57','2000-09-26 07:45:57','z','z');
+INSERT INTO `t2` VALUES (18,3,133,'1900-01-01','1900-01-01','19:53:03','19:53:03',NULL,NULL,'e','e');
+INSERT INTO `t2` VALUES (19,5,1,'1900-01-01','1900-01-01','17:53:30','17:53:30','2005-11-10 12:40:29','2005-11-10 12:40:29','h','h');
+INSERT INTO `t2` VALUES (20,1,8,'1900-01-01','1900-01-01','11:35:49','11:35:49','2009-04-25 00:00:00','2009-04-25 00:00:00','b','b');
+INSERT INTO `t2` VALUES (21,2,5,'2005-01-13','2005-01-13',NULL,NULL,'2002-11-27 00:00:00','2002-11-27 00:00:00','s','s');
+INSERT INTO `t2` VALUES (22,NULL,5,'2006-05-21','2006-05-21','06:01:40','06:01:40','2004-01-26 20:32:32','2004-01-26 20:32:32','e','e');
+INSERT INTO `t2` VALUES (23,1,8,'2003-09-08','2003-09-08','05:45:11','05:45:11','2007-10-26 11:41:40','2007-10-26 11:41:40','j','j');
+INSERT INTO `t2` VALUES (24,0,6,'2006-12-23','2006-12-23','00:00:00','00:00:00','2005-10-07 00:00:00','2005-10-07 00:00:00','e','e');
+INSERT INTO `t2` VALUES (25,210,51,'2006-10-15','2006-10-15','00:00:00','00:00:00','2000-07-15 05:00:34','2000-07-15 05:00:34','f','f');
+INSERT INTO `t2` VALUES (26,8,4,'2005-04-06','2005-04-06','06:11:01','06:11:01','2000-04-03 16:33:32','2000-04-03 16:33:32','v','v');
+INSERT INTO `t2` VALUES (27,7,7,'2008-04-07','2008-04-07','13:02:46','13:02:46',NULL,NULL,'x','x');
+INSERT INTO `t2` VALUES (28,5,6,'2006-10-10','2006-10-10','21:44:25','21:44:25','2001-04-25 01:26:12','2001-04-25 01:26:12','m','m');
+INSERT INTO `t2` VALUES (29,NULL,4,'1900-01-01','1900-01-01','22:43:58','22:43:58','2000-12-27 00:00:00','2000-12-27 00:00:00','c','c');
+
+CREATE TABLE `t3` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_date_key` date DEFAULT NULL,
+ `col_date_nokey` date DEFAULT NULL,
+ `col_time_key` time DEFAULT NULL,
+ `col_time_nokey` time DEFAULT NULL,
+ `col_datetime_key` datetime DEFAULT NULL,
+ `col_datetime_nokey` datetime DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_date_key` (`col_date_key`),
+ KEY `col_time_key` (`col_time_key`),
+ KEY `col_datetime_key` (`col_datetime_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+);
+
+INSERT INTO `t3` VALUES (1,1,7,'1900-01-01','1900-01-01','01:13:38','01:13:38','2005-02-05 00:00:00','2005-02-05 00:00:00','f','f');
+
+CREATE TABLE `t4` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_date_key` date DEFAULT NULL,
+ `col_date_nokey` date DEFAULT NULL,
+ `col_time_key` time DEFAULT NULL,
+ `col_time_nokey` time DEFAULT NULL,
+ `col_datetime_key` datetime DEFAULT NULL,
+ `col_datetime_nokey` datetime DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_date_key` (`col_date_key`),
+ KEY `col_time_key` (`col_time_key`),
+ KEY `col_datetime_key` (`col_datetime_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+);
+
+INSERT INTO `t4` VALUES (1,6,NULL,'2003-05-12','2003-05-12',NULL,NULL,'2000-09-12 00:00:00','2000-09-12 00:00:00','r','r');
+INSERT INTO `t4` VALUES (2,8,0,'2003-01-07','2003-01-07','14:34:45','14:34:45','2004-08-10 09:09:31','2004-08-10 09:09:31','c','c');
+INSERT INTO `t4` VALUES (3,6,0,NULL,NULL,'11:49:48','11:49:48','2005-03-21 04:31:40','2005-03-21 04:31:40','o','o');
+INSERT INTO `t4` VALUES (4,6,7,'2005-03-12','2005-03-12','18:12:55','18:12:55','2002-10-25 23:50:35','2002-10-25 23:50:35','c','c');
+INSERT INTO `t4` VALUES (5,3,8,'2000-08-02','2000-08-02','18:30:05','18:30:05','2001-04-01 21:14:04','2001-04-01 21:14:04','d','d');
+INSERT INTO `t4` VALUES (6,9,4,'1900-01-01','1900-01-01','14:19:30','14:19:30','2005-03-12 06:02:34','2005-03-12 06:02:34','v','v');
+INSERT INTO `t4` VALUES (7,2,6,'2006-07-06','2006-07-06','05:20:04','05:20:04','2001-05-06 14:49:12','2001-05-06 14:49:12','m','m');
+INSERT INTO `t4` VALUES (8,1,5,'2006-12-24','2006-12-24','20:29:31','20:29:31','2004-04-25 00:00:00','2004-04-25 00:00:00','j','j');
+INSERT INTO `t4` VALUES (9,8,NULL,'2004-11-16','2004-11-16','07:08:09','07:08:09','2001-03-22 18:38:43','2001-03-22 18:38:43','f','f');
+INSERT INTO `t4` VALUES (10,0,NULL,'2002-09-09','2002-09-09','14:49:14','14:49:14','2006-04-25 21:03:02','2006-04-25 21:03:02','n','n');
+INSERT INTO `t4` VALUES (11,9,8,NULL,NULL,'00:00:00','00:00:00','2009-09-07 18:40:43','2009-09-07 18:40:43','z','z');
+INSERT INTO `t4` VALUES (12,8,8,'2008-06-24','2008-06-24','09:58:06','09:58:06','2004-03-23 00:00:00','2004-03-23 00:00:00','h','h');
+INSERT INTO `t4` VALUES (13,NULL,8,'2001-04-21','2001-04-21',NULL,NULL,'2009-04-15 00:08:29','2009-04-15 00:08:29','q','q');
+INSERT INTO `t4` VALUES (14,0,1,'2003-11-22','2003-11-22','18:24:16','18:24:16','2000-04-21 00:00:00','2000-04-21 00:00:00','w','w');
+INSERT INTO `t4` VALUES (15,5,1,'2004-09-12','2004-09-12','17:39:57','17:39:57','2000-02-17 19:41:23','2000-02-17 19:41:23','z','z');
+INSERT INTO `t4` VALUES (16,1,5,'2006-06-20','2006-06-20','08:23:21','08:23:21','2003-09-20 07:38:14','2003-09-20 07:38:14','j','j');
+INSERT INTO `t4` VALUES (17,1,2,NULL,NULL,NULL,NULL,'2000-11-28 20:42:12','2000-11-28 20:42:12','a','a');
+INSERT INTO `t4` VALUES (18,6,7,'2001-11-25','2001-11-25','21:50:46','21:50:46','2005-06-12 11:13:17','2005-06-12 11:13:17','m','m');
+INSERT INTO `t4` VALUES (19,6,6,'2004-10-26','2004-10-26','12:33:17','12:33:17','1900-01-01 00:00:00','1900-01-01 00:00:00','n','n');
+INSERT INTO `t4` VALUES (20,1,4,'2005-01-19','2005-01-19','03:06:43','03:06:43','2006-02-09 20:41:06','2006-02-09 20:41:06','e','e');
+INSERT INTO `t4` VALUES (21,8,7,'2008-07-06','2008-07-06','03:46:14','03:46:14','2004-05-22 01:05:57','2004-05-22 01:05:57','u','u');
+INSERT INTO `t4` VALUES (22,1,0,'1900-01-01','1900-01-01','20:34:52','20:34:52','2004-03-04 13:46:31','2004-03-04 13:46:31','s','s');
+INSERT INTO `t4` VALUES (23,0,9,'1900-01-01','1900-01-01',NULL,NULL,'1900-01-01 00:00:00','1900-01-01 00:00:00','u','u');
+INSERT INTO `t4` VALUES (24,4,3,'2004-06-08','2004-06-08','10:41:20','10:41:20','2004-10-20 07:20:19','2004-10-20 07:20:19','r','r');
+INSERT INTO `t4` VALUES (25,9,5,'2007-02-20','2007-02-20','08:43:11','08:43:11','2006-04-17 00:00:00','2006-04-17 00:00:00','g','g');
+INSERT INTO `t4` VALUES (26,8,1,'2008-06-18','2008-06-18',NULL,NULL,'2000-10-27 00:00:00','2000-10-27 00:00:00','o','o');
+INSERT INTO `t4` VALUES (27,5,1,'2008-05-15','2008-05-15','10:17:51','10:17:51','2007-04-14 08:54:06','2007-04-14 08:54:06','w','w');
+INSERT INTO `t4` VALUES (28,9,5,'2005-10-06','2005-10-06','06:34:09','06:34:09','2008-04-12 17:03:52','2008-04-12 17:03:52','b','b');
+INSERT INTO `t4` VALUES (29,5,9,NULL,NULL,'21:22:47','21:22:47','2007-02-19 17:37:09','2007-02-19 17:37:09',NULL,NULL);
+INSERT INTO `t4` VALUES (30,NULL,2,'2006-10-12','2006-10-12','04:02:32','04:02:32','1900-01-01 00:00:00','1900-01-01 00:00:00','y','y');
+INSERT INTO `t4` VALUES (31,NULL,5,'2005-01-24','2005-01-24','02:33:14','02:33:14','2001-10-10 08:32:27','2001-10-10 08:32:27','y','y');
+INSERT INTO `t4` VALUES (32,105,248,'2009-06-27','2009-06-27','16:32:56','16:32:56',NULL,NULL,'u','u');
+INSERT INTO `t4` VALUES (33,0,0,NULL,NULL,'21:32:42','21:32:42','2001-12-16 05:31:53','2001-12-16 05:31:53','p','p');
+INSERT INTO `t4` VALUES (34,3,8,NULL,NULL,'23:04:47','23:04:47','2003-07-19 18:03:28','2003-07-19 18:03:28','s','s');
+INSERT INTO `t4` VALUES (35,1,1,'1900-01-01','1900-01-01','22:05:43','22:05:43','2001-03-27 11:44:10','2001-03-27 11:44:10','e','e');
+INSERT INTO `t4` VALUES (36,75,255,'2005-12-22','2005-12-22','02:05:45','02:05:45','2008-06-15 02:13:00','2008-06-15 02:13:00','d','d');
+INSERT INTO `t4` VALUES (37,9,9,'2005-05-03','2005-05-03','00:00:00','00:00:00','2009-03-14 21:29:56','2009-03-14 21:29:56','d','d');
+INSERT INTO `t4` VALUES (38,7,9,'2003-05-27','2003-05-27','18:09:07','18:09:07','2005-01-02 00:00:00','2005-01-02 00:00:00','c','c');
+INSERT INTO `t4` VALUES (39,NULL,3,'2006-05-25','2006-05-25','10:54:06','10:54:06','2007-07-16 04:44:07','2007-07-16 04:44:07','b','b');
+INSERT INTO `t4` VALUES (40,NULL,9,NULL,NULL,'23:15:50','23:15:50','2003-08-26 21:38:26','2003-08-26 21:38:26','t','t');
+INSERT INTO `t4` VALUES (41,4,6,'2009-01-04','2009-01-04','10:17:40','10:17:40','2004-04-19 04:18:47','2004-04-19 04:18:47',NULL,NULL);
+INSERT INTO `t4` VALUES (42,0,4,'2009-02-14','2009-02-14','03:37:09','03:37:09','2000-01-06 20:32:48','2000-01-06 20:32:48','y','y');
+INSERT INTO `t4` VALUES (43,204,60,'2003-01-16','2003-01-16','22:26:06','22:26:06','2006-06-23 13:27:17','2006-06-23 13:27:17','c','c');
+INSERT INTO `t4` VALUES (44,0,7,'1900-01-01','1900-01-01','17:10:38','17:10:38','2007-11-27 00:00:00','2007-11-27 00:00:00','d','d');
+INSERT INTO `t4` VALUES (45,9,1,'2007-06-26','2007-06-26','00:00:00','00:00:00','2002-04-03 12:06:51','2002-04-03 12:06:51','x','x');
+INSERT INTO `t4` VALUES (46,8,6,'2004-03-27','2004-03-27','17:08:49','17:08:49','2008-12-28 09:47:42','2008-12-28 09:47:42','p','p');
+INSERT INTO `t4` VALUES (47,7,4,NULL,NULL,'19:04:40','19:04:40','2002-04-04 10:07:54','2002-04-04 10:07:54','e','e');
+INSERT INTO `t4` VALUES (48,8,NULL,'2005-06-06','2005-06-06','20:53:28','20:53:28','2003-04-26 02:55:13','2003-04-26 02:55:13','g','g');
+INSERT INTO `t4` VALUES (49,NULL,8,'2003-03-02','2003-03-02','11:46:03','11:46:03',NULL,NULL,'x','x');
+INSERT INTO `t4` VALUES (50,6,0,'2004-05-13','2004-05-13',NULL,NULL,'2009-02-19 03:17:06','2009-02-19 03:17:06','s','s');
+INSERT INTO `t4` VALUES (51,5,8,'2005-09-13','2005-09-13','10:58:07','10:58:07','1900-01-01 00:00:00','1900-01-01 00:00:00','e','e');
+INSERT INTO `t4` VALUES (52,2,151,'2005-10-03','2005-10-03','00:00:00','00:00:00','2000-11-10 08:20:01','2000-11-10 08:20:01','l','l');
+INSERT INTO `t4` VALUES (53,3,7,'2005-10-14','2005-10-14','09:43:15','09:43:15','2008-02-10 00:00:00','2008-02-10 00:00:00','p','p');
+INSERT INTO `t4` VALUES (54,7,6,NULL,NULL,'21:40:32','21:40:32','1900-01-01 00:00:00','1900-01-01 00:00:00','h','h');
+INSERT INTO `t4` VALUES (55,NULL,NULL,'2005-09-16','2005-09-16','00:17:44','00:17:44',NULL,NULL,'m','m');
+INSERT INTO `t4` VALUES (56,145,23,'2005-03-10','2005-03-10','16:47:26','16:47:26','2001-02-05 02:01:50','2001-02-05 02:01:50','n','n');
+INSERT INTO `t4` VALUES (57,0,2,'2000-06-19','2000-06-19','00:00:00','00:00:00','2000-10-28 08:44:25','2000-10-28 08:44:25','v','v');
+INSERT INTO `t4` VALUES (58,1,4,'2002-11-03','2002-11-03','05:25:59','05:25:59','2005-03-20 10:53:59','2005-03-20 10:53:59','b','b');
+INSERT INTO `t4` VALUES (59,7,NULL,'2009-01-05','2009-01-05','00:00:00','00:00:00','2001-06-02 13:54:13','2001-06-02 13:54:13','x','x');
+INSERT INTO `t4` VALUES (60,3,NULL,'2003-05-22','2003-05-22','20:33:04','20:33:04','1900-01-01 00:00:00','1900-01-01 00:00:00','r','r');
+INSERT INTO `t4` VALUES (61,NULL,77,'2005-07-02','2005-07-02','00:46:12','00:46:12','2009-07-16 13:05:43','2009-07-16 13:05:43','t','t');
+INSERT INTO `t4` VALUES (62,2,NULL,'1900-01-01','1900-01-01','00:00:00','00:00:00','2009-03-26 23:16:20','2009-03-26 23:16:20','w','w');
+INSERT INTO `t4` VALUES (63,2,NULL,'2006-06-21','2006-06-21','02:13:59','02:13:59','2003-02-06 18:12:15','2003-02-06 18:12:15','w','w');
+INSERT INTO `t4` VALUES (64,2,7,NULL,NULL,'02:54:47','02:54:47','2006-06-05 03:22:51','2006-06-05 03:22:51','k','k');
+INSERT INTO `t4` VALUES (65,8,1,'2005-12-16','2005-12-16','18:13:59','18:13:59','2002-02-10 05:47:27','2002-02-10 05:47:27','a','a');
+INSERT INTO `t4` VALUES (66,6,9,'2004-11-05','2004-11-05','13:53:08','13:53:08','2001-08-01 08:50:52','2001-08-01 08:50:52','t','t');
+INSERT INTO `t4` VALUES (67,1,6,NULL,NULL,'22:21:30','22:21:30','1900-01-01 00:00:00','1900-01-01 00:00:00','z','z');
+INSERT INTO `t4` VALUES (68,NULL,2,'2004-09-14','2004-09-14','11:41:50','11:41:50',NULL,NULL,'e','e');
+INSERT INTO `t4` VALUES (69,1,3,'2002-04-06','2002-04-06','15:20:02','15:20:02','1900-01-01 00:00:00','1900-01-01 00:00:00','q','q');
+INSERT INTO `t4` VALUES (70,0,0,NULL,NULL,NULL,NULL,'2000-09-23 00:00:00','2000-09-23 00:00:00','e','e');
+INSERT INTO `t4` VALUES (71,4,NULL,'2002-11-13','2002-11-13',NULL,NULL,'2007-07-09 08:32:49','2007-07-09 08:32:49','v','v');
+INSERT INTO `t4` VALUES (72,1,6,'2006-05-27','2006-05-27','07:51:52','07:51:52','2000-01-05 00:00:00','2000-01-05 00:00:00','d','d');
+INSERT INTO `t4` VALUES (73,1,3,'2000-12-22','2000-12-22','00:00:00','00:00:00','2000-09-24 00:00:00','2000-09-24 00:00:00','u','u');
+INSERT INTO `t4` VALUES (74,27,195,'2004-02-21','2004-02-21',NULL,NULL,'2005-05-06 00:00:00','2005-05-06 00:00:00','o','o');
+INSERT INTO `t4` VALUES (75,4,5,'2009-05-15','2009-05-15',NULL,NULL,'2000-03-11 00:00:00','2000-03-11 00:00:00','b','b');
+INSERT INTO `t4` VALUES (76,6,2,'2008-12-12','2008-12-12','12:31:05','12:31:05','2001-09-02 16:17:35','2001-09-02 16:17:35','c','c');
+INSERT INTO `t4` VALUES (77,2,7,'2000-04-15','2000-04-15','00:00:00','00:00:00','2006-04-25 05:43:44','2006-04-25 05:43:44','q','q');
+INSERT INTO `t4` VALUES (78,248,25,NULL,NULL,'01:16:45','01:16:45','2009-10-25 22:04:02','2009-10-25 22:04:02',NULL,NULL);
+INSERT INTO `t4` VALUES (79,NULL,NULL,'2001-10-18','2001-10-18','20:38:54','20:38:54','2004-08-06 00:00:00','2004-08-06 00:00:00','h','h');
+INSERT INTO `t4` VALUES (80,9,0,'2008-05-25','2008-05-25','00:30:15','00:30:15','2001-11-27 05:07:57','2001-11-27 05:07:57','d','d');
+INSERT INTO `t4` VALUES (81,75,98,'2004-12-02','2004-12-02','23:46:36','23:46:36','2009-06-28 03:18:39','2009-06-28 03:18:39','w','w');
+INSERT INTO `t4` VALUES (82,2,6,'2002-02-15','2002-02-15','19:03:13','19:03:13','2000-03-12 00:00:00','2000-03-12 00:00:00','m','m');
+INSERT INTO `t4` VALUES (83,9,5,'2002-03-03','2002-03-03','10:54:27','10:54:27',NULL,NULL,'i','i');
+INSERT INTO `t4` VALUES (84,4,0,NULL,NULL,'00:25:47','00:25:47','2007-10-20 00:00:00','2007-10-20 00:00:00','w','w');
+INSERT INTO `t4` VALUES (85,0,3,'2003-01-26','2003-01-26','08:44:27','08:44:27','2009-09-27 00:00:00','2009-09-27 00:00:00','f','f');
+INSERT INTO `t4` VALUES (86,0,1,'2001-12-19','2001-12-19','08:15:38','08:15:38','2002-07-16 00:00:00','2002-07-16 00:00:00','k','k');
+INSERT INTO `t4` VALUES (87,1,1,'2001-08-07','2001-08-07','19:56:21','19:56:21','2005-02-20 00:00:00','2005-02-20 00:00:00','v','v');
+INSERT INTO `t4` VALUES (88,119,147,'2005-02-16','2005-02-16','00:00:00','00:00:00',NULL,NULL,'c','c');
+INSERT INTO `t4` VALUES (89,1,3,'2006-06-10','2006-06-10','20:50:52','20:50:52','2001-07-16 00:00:00','2001-07-16 00:00:00','y','y');
+INSERT INTO `t4` VALUES (90,7,3,NULL,NULL,'03:54:39','03:54:39','2009-05-20 21:04:12','2009-05-20 21:04:12','h','h');
+INSERT INTO `t4` VALUES (91,2,NULL,'2005-04-06','2005-04-06','23:58:17','23:58:17','2002-03-13 10:55:40','2002-03-13 10:55:40',NULL,NULL);
+INSERT INTO `t4` VALUES (92,7,2,'2003-04-27','2003-04-27','12:54:58','12:54:58','2005-07-12 00:00:00','2005-07-12 00:00:00','t','t');
+INSERT INTO `t4` VALUES (93,2,1,'2005-10-13','2005-10-13','04:02:43','04:02:43','2006-07-22 09:46:34','2006-07-22 09:46:34','l','l');
+INSERT INTO `t4` VALUES (94,6,8,'2003-10-02','2003-10-02','11:31:12','11:31:12','2001-09-01 00:00:00','2001-09-01 00:00:00','a','a');
+INSERT INTO `t4` VALUES (95,4,8,'2005-09-09','2005-09-09','20:20:04','20:20:04','2002-05-27 18:38:45','2002-05-27 18:38:45','r','r');
+INSERT INTO `t4` VALUES (96,5,8,NULL,NULL,'00:22:24','00:22:24',NULL,NULL,'s','s');
+INSERT INTO `t4` VALUES (97,7,0,'2006-02-15','2006-02-15','10:09:31','10:09:31',NULL,NULL,'z','z');
+INSERT INTO `t4` VALUES (98,1,1,'1900-01-01','1900-01-01',NULL,NULL,'2009-08-08 22:38:53','2009-08-08 22:38:53','j','j');
+INSERT INTO `t4` VALUES (99,7,8,'2003-12-24','2003-12-24','18:45:35','18:45:35',NULL,NULL,'c','c');
+INSERT INTO `t4` VALUES (100,2,5,'2001-07-26','2001-07-26','11:49:25','11:49:25','2007-04-25 05:08:49','2007-04-25 05:08:49','f','f');
+
+SET @@optimizer_switch='subquery_cache=off';
+
+/* cache is off */ SELECT COUNT( DISTINCT table2 .`col_int_key` ) , (
+SELECT SUBQUERY2_t1 .`col_int_key`
+FROM t3 SUBQUERY2_t1 JOIN t2 ON SUBQUERY2_t1 .`col_int_key`
+WHERE table1 .`col_varchar_key` ) , table2 .`col_varchar_nokey` field10
+FROM t4 table1 JOIN ( t1 table2 STRAIGHT_JOIN t1 table3 ON table2 .`pk` ) ON table3 .`col_varchar_key` = table2 .`col_varchar_key`
+GROUP BY field10 ;
+
+SET @@optimizer_switch='subquery_cache=on';
+
+/* cache is on */ SELECT COUNT( DISTINCT table2 .`col_int_key` ) , (
+SELECT SUBQUERY2_t1 .`col_int_key`
+FROM t3 SUBQUERY2_t1 JOIN t2 ON SUBQUERY2_t1 .`col_int_key`
+WHERE table1 .`col_varchar_key` ) , table2 .`col_varchar_nokey` field10
+FROM t4 table1 JOIN ( t1 table2 STRAIGHT_JOIN t1 table3 ON table2 .`pk` ) ON table3 .`col_varchar_key` = table2 .`col_varchar_key`
+GROUP BY field10 ;
+
+drop table t1,t2,t3,t4;
+set @@optimizer_switch= default;
+
+#
+--echo #launchpad BUG#609045
+#
+CREATE TABLE `t2` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=30 DEFAULT CHARSET=latin1;
+INSERT INTO `t2` VALUES (10,7,8,'v','v');
+INSERT INTO `t2` VALUES (11,1,9,'r','r');
+INSERT INTO `t2` VALUES (12,5,9,'a','a');
+INSERT INTO `t2` VALUES (13,3,186,'m','m');
+INSERT INTO `t2` VALUES (14,6,NULL,'y','y');
+INSERT INTO `t2` VALUES (15,92,2,'j','j');
+INSERT INTO `t2` VALUES (16,7,3,'d','d');
+INSERT INTO `t2` VALUES (17,NULL,0,'z','z');
+INSERT INTO `t2` VALUES (18,3,133,'e','e');
+INSERT INTO `t2` VALUES (19,5,1,'h','h');
+INSERT INTO `t2` VALUES (20,1,8,'b','b');
+INSERT INTO `t2` VALUES (21,2,5,'s','s');
+INSERT INTO `t2` VALUES (22,NULL,5,'e','e');
+INSERT INTO `t2` VALUES (23,1,8,'j','j');
+INSERT INTO `t2` VALUES (24,0,6,'e','e');
+INSERT INTO `t2` VALUES (25,210,51,'f','f');
+INSERT INTO `t2` VALUES (26,8,4,'v','v');
+INSERT INTO `t2` VALUES (27,7,7,'x','x');
+INSERT INTO `t2` VALUES (28,5,6,'m','m');
+INSERT INTO `t2` VALUES (29,NULL,4,'c','c');
+CREATE TABLE `t1` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=21 DEFAULT CHARSET=latin1;
+INSERT INTO `t1` VALUES (1,NULL,2,'w','w');
+INSERT INTO `t1` VALUES (2,7,9,'m','m');
+INSERT INTO `t1` VALUES (3,9,3,'m','m');
+INSERT INTO `t1` VALUES (4,7,9,'k','k');
+INSERT INTO `t1` VALUES (5,4,NULL,'r','r');
+INSERT INTO `t1` VALUES (6,2,9,'t','t');
+INSERT INTO `t1` VALUES (7,6,3,'j','j');
+INSERT INTO `t1` VALUES (8,8,8,'u','u');
+INSERT INTO `t1` VALUES (9,NULL,8,'h','h');
+INSERT INTO `t1` VALUES (10,5,53,'o','o');
+INSERT INTO `t1` VALUES (11,NULL,0,NULL,NULL);
+INSERT INTO `t1` VALUES (12,6,5,'k','k');
+INSERT INTO `t1` VALUES (13,188,166,'e','e');
+INSERT INTO `t1` VALUES (14,2,3,'n','n');
+INSERT INTO `t1` VALUES (15,1,0,'t','t');
+INSERT INTO `t1` VALUES (16,1,1,'c','c');
+INSERT INTO `t1` VALUES (17,0,9,'m','m');
+INSERT INTO `t1` VALUES (18,9,5,'y','y');
+INSERT INTO `t1` VALUES (19,NULL,6,'f','f');
+INSERT INTO `t1` VALUES (20,4,2,'d','d');
+
+SET @@optimizer_switch = 'subquery_cache=off';
+
+/* cache is off */ SELECT SUM( DISTINCT table1 .`pk` ) , (
+ SELECT MAX( `col_int_nokey` )
+ FROM t1
+ WHERE table1 .`pk` ) field3
+FROM t1 table1
+JOIN (
+ t1 table2
+ JOIN t2 table3
+ ON table3 .`col_varchar_key` = table2 .`col_varchar_key`
+)
+ON table3 .`col_varchar_key` = table2 .`col_varchar_nokey`
+GROUP BY field3 ;
+
+SET @@optimizer_switch = 'subquery_cache=on';
+
+/* cache is on */ SELECT SUM( DISTINCT table1 .`pk` ) , (
+ SELECT MAX( `col_int_nokey` )
+ FROM t1
+ WHERE table1 .`pk` ) field3
+FROM t1 table1
+JOIN (
+ t1 table2
+ JOIN t2 table3
+ ON table3 .`col_varchar_key` = table2 .`col_varchar_key`
+)
+ON table3 .`col_varchar_key` = table2 .`col_varchar_nokey`
+GROUP BY field3 ;
+
+drop table t1,t2;
+set @@optimizer_switch= default;
+
+#
+--echo #launchpad BUG#609052
+#
+CREATE TABLE `t2` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_time_key` time DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_time_key` (`col_time_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=30 DEFAULT CHARSET=latin1;
+INSERT INTO `t2` VALUES (10,7,8,'01:27:35','v','v');
+INSERT INTO `t2` VALUES (11,1,9,'19:48:31','r','r');
+INSERT INTO `t2` VALUES (12,5,9,'00:00:00','a','a');
+INSERT INTO `t2` VALUES (13,3,186,'19:53:05','m','m');
+INSERT INTO `t2` VALUES (14,6,NULL,'19:18:56','y','y');
+INSERT INTO `t2` VALUES (15,92,2,'10:55:12','j','j');
+INSERT INTO `t2` VALUES (16,7,3,'00:25:00','d','d');
+INSERT INTO `t2` VALUES (17,NULL,0,'12:35:47','z','z');
+INSERT INTO `t2` VALUES (18,3,133,'19:53:03','e','e');
+INSERT INTO `t2` VALUES (19,5,1,'17:53:30','h','h');
+INSERT INTO `t2` VALUES (20,1,8,'11:35:49','b','b');
+INSERT INTO `t2` VALUES (21,2,5,NULL,'s','s');
+INSERT INTO `t2` VALUES (22,NULL,5,'06:01:40','e','e');
+INSERT INTO `t2` VALUES (23,1,8,'05:45:11','j','j');
+INSERT INTO `t2` VALUES (24,0,6,'00:00:00','e','e');
+INSERT INTO `t2` VALUES (25,210,51,'00:00:00','f','f');
+INSERT INTO `t2` VALUES (26,8,4,'06:11:01','v','v');
+INSERT INTO `t2` VALUES (27,7,7,'13:02:46','x','x');
+INSERT INTO `t2` VALUES (28,5,6,'21:44:25','m','m');
+INSERT INTO `t2` VALUES (29,NULL,4,'22:43:58','c','c');
+CREATE TABLE `t4` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_time_key` time DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_time_key` (`col_time_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=101 DEFAULT CHARSET=latin1;
+INSERT INTO `t4` VALUES (1,6,NULL,NULL,'r','r');
+INSERT INTO `t4` VALUES (2,8,0,'14:34:45','c','c');
+INSERT INTO `t4` VALUES (3,6,0,'11:49:48','o','o');
+INSERT INTO `t4` VALUES (4,6,7,'18:12:55','c','c');
+INSERT INTO `t4` VALUES (5,3,8,'18:30:05','d','d');
+INSERT INTO `t4` VALUES (6,9,4,'14:19:30','v','v');
+INSERT INTO `t4` VALUES (7,2,6,'05:20:04','m','m');
+INSERT INTO `t4` VALUES (8,1,5,'20:29:31','j','j');
+INSERT INTO `t4` VALUES (9,8,NULL,'07:08:09','f','f');
+INSERT INTO `t4` VALUES (10,0,NULL,'14:49:14','n','n');
+INSERT INTO `t4` VALUES (11,9,8,'00:00:00','z','z');
+INSERT INTO `t4` VALUES (12,8,8,'09:58:06','h','h');
+INSERT INTO `t4` VALUES (13,NULL,8,NULL,'q','q');
+INSERT INTO `t4` VALUES (14,0,1,'18:24:16','w','w');
+INSERT INTO `t4` VALUES (15,5,1,'17:39:57','z','z');
+INSERT INTO `t4` VALUES (16,1,5,'08:23:21','j','j');
+INSERT INTO `t4` VALUES (17,1,2,NULL,'a','a');
+INSERT INTO `t4` VALUES (18,6,7,'21:50:46','m','m');
+INSERT INTO `t4` VALUES (19,6,6,'12:33:17','n','n');
+INSERT INTO `t4` VALUES (20,1,4,'03:06:43','e','e');
+INSERT INTO `t4` VALUES (21,8,7,'03:46:14','u','u');
+INSERT INTO `t4` VALUES (22,1,0,'20:34:52','s','s');
+INSERT INTO `t4` VALUES (23,0,9,NULL,'u','u');
+INSERT INTO `t4` VALUES (24,4,3,'10:41:20','r','r');
+INSERT INTO `t4` VALUES (25,9,5,'08:43:11','g','g');
+INSERT INTO `t4` VALUES (26,8,1,NULL,'o','o');
+INSERT INTO `t4` VALUES (27,5,1,'10:17:51','w','w');
+INSERT INTO `t4` VALUES (28,9,5,'06:34:09','b','b');
+INSERT INTO `t4` VALUES (29,5,9,'21:22:47',NULL,NULL);
+INSERT INTO `t4` VALUES (30,NULL,2,'04:02:32','y','y');
+INSERT INTO `t4` VALUES (31,NULL,5,'02:33:14','y','y');
+INSERT INTO `t4` VALUES (32,105,248,'16:32:56','u','u');
+INSERT INTO `t4` VALUES (33,0,0,'21:32:42','p','p');
+INSERT INTO `t4` VALUES (34,3,8,'23:04:47','s','s');
+INSERT INTO `t4` VALUES (35,1,1,'22:05:43','e','e');
+INSERT INTO `t4` VALUES (36,75,255,'02:05:45','d','d');
+INSERT INTO `t4` VALUES (37,9,9,'00:00:00','d','d');
+INSERT INTO `t4` VALUES (38,7,9,'18:09:07','c','c');
+INSERT INTO `t4` VALUES (39,NULL,3,'10:54:06','b','b');
+INSERT INTO `t4` VALUES (40,NULL,9,'23:15:50','t','t');
+INSERT INTO `t4` VALUES (41,4,6,'10:17:40',NULL,NULL);
+INSERT INTO `t4` VALUES (42,0,4,'03:37:09','y','y');
+INSERT INTO `t4` VALUES (43,204,60,'22:26:06','c','c');
+INSERT INTO `t4` VALUES (44,0,7,'17:10:38','d','d');
+INSERT INTO `t4` VALUES (45,9,1,'00:00:00','x','x');
+INSERT INTO `t4` VALUES (46,8,6,'17:08:49','p','p');
+INSERT INTO `t4` VALUES (47,7,4,'19:04:40','e','e');
+INSERT INTO `t4` VALUES (48,8,NULL,'20:53:28','g','g');
+INSERT INTO `t4` VALUES (49,NULL,8,'11:46:03','x','x');
+INSERT INTO `t4` VALUES (50,6,0,NULL,'s','s');
+INSERT INTO `t4` VALUES (51,5,8,'10:58:07','e','e');
+INSERT INTO `t4` VALUES (52,2,151,'00:00:00','l','l');
+INSERT INTO `t4` VALUES (53,3,7,'09:43:15','p','p');
+INSERT INTO `t4` VALUES (54,7,6,'21:40:32','h','h');
+INSERT INTO `t4` VALUES (55,NULL,NULL,'00:17:44','m','m');
+INSERT INTO `t4` VALUES (56,145,23,'16:47:26','n','n');
+INSERT INTO `t4` VALUES (57,0,2,'00:00:00','v','v');
+INSERT INTO `t4` VALUES (58,1,4,'05:25:59','b','b');
+INSERT INTO `t4` VALUES (59,7,NULL,'00:00:00','x','x');
+INSERT INTO `t4` VALUES (60,3,NULL,'20:33:04','r','r');
+INSERT INTO `t4` VALUES (61,NULL,77,'00:46:12','t','t');
+INSERT INTO `t4` VALUES (62,2,NULL,'00:00:00','w','w');
+INSERT INTO `t4` VALUES (63,2,NULL,'02:13:59','w','w');
+INSERT INTO `t4` VALUES (64,2,7,'02:54:47','k','k');
+INSERT INTO `t4` VALUES (65,8,1,'18:13:59','a','a');
+INSERT INTO `t4` VALUES (66,6,9,'13:53:08','t','t');
+INSERT INTO `t4` VALUES (67,1,6,'22:21:30','z','z');
+INSERT INTO `t4` VALUES (68,NULL,2,'11:41:50','e','e');
+INSERT INTO `t4` VALUES (69,1,3,'15:20:02','q','q');
+INSERT INTO `t4` VALUES (70,0,0,NULL,'e','e');
+INSERT INTO `t4` VALUES (71,4,NULL,NULL,'v','v');
+INSERT INTO `t4` VALUES (72,1,6,'07:51:52','d','d');
+INSERT INTO `t4` VALUES (73,1,3,'00:00:00','u','u');
+INSERT INTO `t4` VALUES (74,27,195,NULL,'o','o');
+INSERT INTO `t4` VALUES (75,4,5,NULL,'b','b');
+INSERT INTO `t4` VALUES (76,6,2,'12:31:05','c','c');
+INSERT INTO `t4` VALUES (77,2,7,'00:00:00','q','q');
+INSERT INTO `t4` VALUES (78,248,25,'01:16:45',NULL,NULL);
+INSERT INTO `t4` VALUES (79,NULL,NULL,'20:38:54','h','h');
+INSERT INTO `t4` VALUES (80,9,0,'00:30:15','d','d');
+INSERT INTO `t4` VALUES (81,75,98,'23:46:36','w','w');
+INSERT INTO `t4` VALUES (82,2,6,'19:03:13','m','m');
+INSERT INTO `t4` VALUES (83,9,5,'10:54:27','i','i');
+INSERT INTO `t4` VALUES (84,4,0,'00:25:47','w','w');
+INSERT INTO `t4` VALUES (85,0,3,'08:44:27','f','f');
+INSERT INTO `t4` VALUES (86,0,1,'08:15:38','k','k');
+INSERT INTO `t4` VALUES (87,1,1,'19:56:21','v','v');
+INSERT INTO `t4` VALUES (88,119,147,'00:00:00','c','c');
+INSERT INTO `t4` VALUES (89,1,3,'20:50:52','y','y');
+INSERT INTO `t4` VALUES (90,7,3,'03:54:39','h','h');
+INSERT INTO `t4` VALUES (91,2,NULL,'23:58:17',NULL,NULL);
+INSERT INTO `t4` VALUES (92,7,2,'12:54:58','t','t');
+INSERT INTO `t4` VALUES (93,2,1,'04:02:43','l','l');
+INSERT INTO `t4` VALUES (94,6,8,'11:31:12','a','a');
+INSERT INTO `t4` VALUES (95,4,8,'20:20:04','r','r');
+INSERT INTO `t4` VALUES (96,5,8,'00:22:24','s','s');
+INSERT INTO `t4` VALUES (97,7,0,'10:09:31','z','z');
+INSERT INTO `t4` VALUES (98,1,1,NULL,'j','j');
+INSERT INTO `t4` VALUES (99,7,8,'18:45:35','c','c');
+INSERT INTO `t4` VALUES (100,2,5,'11:49:25','f','f');
+CREATE TABLE `t1` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_time_key` time DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_time_key` (`col_time_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=21 DEFAULT CHARSET=latin1;
+INSERT INTO `t1` VALUES (1,NULL,2,'11:28:45','w','w');
+INSERT INTO `t1` VALUES (2,7,9,'20:25:14','m','m');
+INSERT INTO `t1` VALUES (3,9,3,'13:47:24','m','m');
+INSERT INTO `t1` VALUES (4,7,9,'19:24:11','k','k');
+INSERT INTO `t1` VALUES (5,4,NULL,'15:59:13','r','r');
+INSERT INTO `t1` VALUES (6,2,9,'00:00:00','t','t');
+INSERT INTO `t1` VALUES (7,6,3,'15:15:04','j','j');
+INSERT INTO `t1` VALUES (8,8,8,'11:32:06','u','u');
+INSERT INTO `t1` VALUES (9,NULL,8,'18:32:33','h','h');
+INSERT INTO `t1` VALUES (10,5,53,'15:19:25','o','o');
+INSERT INTO `t1` VALUES (11,NULL,0,'19:03:19',NULL,NULL);
+INSERT INTO `t1` VALUES (12,6,5,'00:39:46','k','k');
+INSERT INTO `t1` VALUES (13,188,166,NULL,'e','e');
+INSERT INTO `t1` VALUES (14,2,3,'00:00:00','n','n');
+INSERT INTO `t1` VALUES (15,1,0,'13:12:11','t','t');
+INSERT INTO `t1` VALUES (16,1,1,'04:56:48','c','c');
+INSERT INTO `t1` VALUES (17,0,9,'19:56:05','m','m');
+INSERT INTO `t1` VALUES (18,9,5,'19:35:19','y','y');
+INSERT INTO `t1` VALUES (19,NULL,6,'05:03:03','f','f');
+INSERT INTO `t1` VALUES (20,4,2,'18:38:59','d','d');
+CREATE TABLE `t3` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_time_key` time DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_time_key` (`col_time_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=11 DEFAULT CHARSET=latin1;
+INSERT INTO `t3` VALUES (10,8,8,'18:27:58',NULL,NULL);
+CREATE TABLE `t5` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `col_int_nokey` int(11) DEFAULT NULL,
+ `col_int_key` int(11) DEFAULT NULL,
+ `col_time_key` time DEFAULT NULL,
+ `col_varchar_key` varchar(1) DEFAULT NULL,
+ `col_varchar_nokey` varchar(1) DEFAULT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `col_int_key` (`col_int_key`),
+ KEY `col_time_key` (`col_time_key`),
+ KEY `col_varchar_key` (`col_varchar_key`,`col_int_key`)
+) ENGINE=MyISAM AUTO_INCREMENT=2 DEFAULT CHARSET=latin1;
+INSERT INTO `t5` VALUES (1,1,7,'01:13:38','f','f');
+
+
+SET @@optimizer_switch='subquery_cache=off';
+
+/* cache is off */ SELECT SQL_SMALL_RESULT MAX( DISTINCT table1 . `col_varchar_key` ) AS field1 , MIN( table1 . `col_varchar_nokey` ) AS field2 , COUNT( table1 . `col_varchar_key` ) AS field3 , table2 . `col_time_key` AS field4 , COUNT( DISTINCT table2 . `col_int_key` ) AS field5 , (
+SELECT MAX( SUBQUERY1_t2 . `col_int_nokey` ) AS SUBQUERY1_field1
+FROM ( t3 AS SUBQUERY1_t1 INNER JOIN t1 AS SUBQUERY1_t2 ON (SUBQUERY1_t2 . `col_varchar_key` = SUBQUERY1_t1 . `col_varchar_nokey` ) )
+WHERE SUBQUERY1_t2 . `pk` < SUBQUERY1_t2 . `pk` ) AS field6 , COUNT( table1 . `col_varchar_nokey` ) AS field7 , COUNT( table2 . `pk` ) AS field8 , (
+SELECT MAX( SUBQUERY2_t1 . `col_int_key` ) AS SUBQUERY2_field1
+FROM ( t5 AS SUBQUERY2_t1 LEFT JOIN t2 AS SUBQUERY2_t2 ON (SUBQUERY2_t2 . `col_int_key` = SUBQUERY2_t1 . `col_int_key` ) )
+WHERE SUBQUERY2_t2 . `col_varchar_nokey` != table1 . `col_varchar_key` OR SUBQUERY2_t1 . `col_varchar_nokey` >= 'o' ) AS field9 , CONCAT ( table1 . `col_varchar_key` , table2 . `col_varchar_nokey` ) AS field10
+FROM ( t4 AS table1 LEFT JOIN ( ( t1 AS table2 STRAIGHT_JOIN t1 AS table3 ON (table3 . `col_int_nokey` = table2 . `pk` ) ) ) ON (table3 . `col_varchar_key` = table2 . `col_varchar_key` ) )
+WHERE ( EXISTS (
+SELECT SUBQUERY3_t1 . `pk` AS SUBQUERY3_field1
+FROM ( t4 AS SUBQUERY3_t1 INNER JOIN t4 AS SUBQUERY3_t2 ON (SUBQUERY3_t2 . `col_varchar_key` = SUBQUERY3_t1 . `col_varchar_key` ) )
+WHERE SUBQUERY3_t1 . `col_int_key` > table3 . `pk` AND SUBQUERY3_t1 . `pk` != table3 . `pk` ) ) AND ( table1 . `pk` > 116 AND table1 . `pk` < ( 116 + 175 ) OR table1 . `pk` IN (251) ) OR table1 . `col_int_nokey` = table1 . `col_int_nokey`
+GROUP BY field4, field6, field9, field10
+HAVING field10 = 'c'
+;
+
+SET @@optimizer_switch='subquery_cache=on';
+
+/* cache is on */ SELECT SQL_SMALL_RESULT MAX( DISTINCT table1 . `col_varchar_key` ) AS field1 , MIN( table1 . `col_varchar_nokey` ) AS field2 , COUNT( table1 . `col_varchar_key` ) AS field3 , table2 . `col_time_key` AS field4 , COUNT( DISTINCT table2 . `col_int_key` ) AS field5 , (
+SELECT MAX( SUBQUERY1_t2 . `col_int_nokey` ) AS SUBQUERY1_field1
+FROM ( t3 AS SUBQUERY1_t1 INNER JOIN t1 AS SUBQUERY1_t2 ON (SUBQUERY1_t2 . `col_varchar_key` = SUBQUERY1_t1 . `col_varchar_nokey` ) )
+WHERE SUBQUERY1_t2 . `pk` < SUBQUERY1_t2 . `pk` ) AS field6 , COUNT( table1 . `col_varchar_nokey` ) AS field7 , COUNT( table2 . `pk` ) AS field8 , (
+SELECT MAX( SUBQUERY2_t1 . `col_int_key` ) AS SUBQUERY2_field1
+FROM ( t5 AS SUBQUERY2_t1 LEFT JOIN t2 AS SUBQUERY2_t2 ON (SUBQUERY2_t2 . `col_int_key` = SUBQUERY2_t1 . `col_int_key` ) )
+WHERE SUBQUERY2_t2 . `col_varchar_nokey` != table1 . `col_varchar_key` OR SUBQUERY2_t1 . `col_varchar_nokey` >= 'o' ) AS field9 , CONCAT ( table1 . `col_varchar_key` , table2 . `col_varchar_nokey` ) AS field10
+FROM ( t4 AS table1 LEFT JOIN ( ( t1 AS table2 STRAIGHT_JOIN t1 AS table3 ON (table3 . `col_int_nokey` = table2 . `pk` ) ) ) ON (table3 . `col_varchar_key` = table2 . `col_varchar_key` ) )
+WHERE ( EXISTS (
+SELECT SUBQUERY3_t1 . `pk` AS SUBQUERY3_field1
+FROM ( t4 AS SUBQUERY3_t1 INNER JOIN t4 AS SUBQUERY3_t2 ON (SUBQUERY3_t2 . `col_varchar_key` = SUBQUERY3_t1 . `col_varchar_key` ) )
+WHERE SUBQUERY3_t1 . `col_int_key` > table3 . `pk` AND SUBQUERY3_t1 . `pk` != table3 . `pk` ) ) AND ( table1 . `pk` > 116 AND table1 . `pk` < ( 116 + 175 ) OR table1 . `pk` IN (251) ) OR table1 . `col_int_nokey` = table1 . `col_int_nokey`
+GROUP BY field4, field6, field9, field10
+HAVING field10 = 'c'
+;
+
+drop table t1,t2,t3,t4,t5;
+set @@optimizer_switch= default;
=== modified file 'sql/item.cc'
--- a/sql/item.cc 2010-07-10 10:37:30 +0000
+++ b/sql/item.cc 2010-07-29 11:13:48 +0000
@@ -6966,6 +6966,14 @@
}
+Item* Item_cache_wrapper::get_tmp_table_item(THD *thd_arg)
+{
+ if (!orig_item->with_sum_func && !orig_item->const_item())
+ return new Item_field(result_field);
+ return copy_or_same(thd_arg);
+}
+
+
/**
Prepare referenced field then call usual Item_direct_ref::fix_fields .
=== modified file 'sql/item.h'
--- a/sql/item.h 2010-07-10 10:37:30 +0000
+++ b/sql/item.h 2010-07-29 11:13:48 +0000
@@ -2624,6 +2624,7 @@
{
save_val(result_field);
}
+ Item* get_tmp_table_item(THD *thd_arg);
/* Following methods make this item transparent as much as possible */
Re: [Maria-developers] [Commits] bzr commit into MariaDB 5.1, with Maria 1.5:maria branch (igor:2869) Bug#52005
by Sergey Petrunya 27 Jul '10
Hello Igor,
Ok to push. I'm sorry for the delay.
On Sun, Jul 25, 2010 at 10:50:03PM -0700, Igor Babaev wrote:
> #At lp:maria based on revid:monty@askmonty.org-20100615220051-2xp3g51fysxle1r1
>
> 2869 Igor Babaev 2010-07-25
> Fixed bug #52005.
> Corrected coding for Warshall's algorithm.
> modified:
> mysql-test/r/join_outer.result
> mysql-test/t/join_outer.test
> sql/sql_select.cc
>
> === modified file 'mysql-test/r/join_outer.result'
> --- a/mysql-test/r/join_outer.result 2010-03-19 06:21:37 +0000
> +++ b/mysql-test/r/join_outer.result 2010-07-26 05:49:51 +0000
> @@ -1308,4 +1308,63 @@ WHERE (COALESCE(t1.f1, t2.f1), f3) IN ((
> f1 f2 f3 f1 f2
> 1 NULL 3 NULL NULL
> DROP TABLE t1, t2;
> +#
> +# Bug#46091 STRAIGHT_JOIN + RIGHT JOIN returns different result
> +#
> +CREATE TABLE t1 (f1 INT NOT NULL);
> +INSERT INTO t1 VALUES (9),(0);
> +CREATE TABLE t2 (f1 INT NOT NULL);
> +INSERT INTO t2 VALUES
> +(5),(3),(0),(3),(1),(0),(1),(7),(1),(0),(0),(8),(4),(9),(0),(2),(0),(8),(5),(1);
> +SELECT STRAIGHT_JOIN COUNT(*) FROM t1 TA1
> +RIGHT JOIN t2 TA2 JOIN t2 TA3 ON TA2.f1 ON TA3.f1;
> +COUNT(*)
> +476
> +EXPLAIN SELECT STRAIGHT_JOIN COUNT(*) FROM t1 TA1
> +RIGHT JOIN t2 TA2 JOIN t2 TA3 ON TA2.f1 ON TA3.f1;
> +id select_type table type possible_keys key key_len ref rows Extra
> +1 SIMPLE TA2 ALL NULL NULL NULL NULL 20 Using where
> +1 SIMPLE TA3 ALL NULL NULL NULL NULL 20 Using join buffer
> +1 SIMPLE TA1 ALL NULL NULL NULL NULL 2
> +DROP TABLE t1, t2;
> +#
> +# Bug#48971 Segfault in add_found_match_trig_cond () at sql_select.cc:5990
> +#
> +CREATE TABLE t1(f1 INT, PRIMARY KEY (f1));
> +INSERT INTO t1 VALUES (1),(2);
> +EXPLAIN EXTENDED SELECT STRAIGHT_JOIN jt1.f1 FROM t1 AS jt1
> +LEFT JOIN t1 AS jt2
> +RIGHT JOIN t1 AS jt3
> +JOIN t1 AS jt4 ON 1
> +LEFT JOIN t1 AS jt5 ON 1
> +ON 1
> +RIGHT JOIN t1 AS jt6 ON jt6.f1
> +ON 1;
> +id select_type table type possible_keys key key_len ref rows filtered Extra
> +1 SIMPLE jt1 index NULL PRIMARY 4 NULL 2 100.00 Using index
> +1 SIMPLE jt6 index NULL PRIMARY 4 NULL 2 100.00 Using index
> +1 SIMPLE jt3 index NULL PRIMARY 4 NULL 2 100.00 Using index
> +1 SIMPLE jt4 index NULL PRIMARY 4 NULL 2 100.00 Using index
> +1 SIMPLE jt5 index NULL PRIMARY 4 NULL 2 100.00 Using index
> +1 SIMPLE jt2 index NULL PRIMARY 4 NULL 2 100.00 Using index
> +Warnings:
> +Note 1003 select straight_join `test`.`jt1`.`f1` AS `f1` from `test`.`t1` `jt1` left join (`test`.`t1` `jt6` left join (`test`.`t1` `jt3` join `test`.`t1` `jt4` left join `test`.`t1` `jt5` on(1) left join `test`.`t1` `jt2` on(1)) on((`test`.`jt6`.`f1` and 1))) on(1) where 1
> +EXPLAIN EXTENDED SELECT STRAIGHT_JOIN jt1.f1 FROM t1 AS jt1
> +RIGHT JOIN t1 AS jt2
> +RIGHT JOIN t1 AS jt3
> +JOIN t1 AS jt4 ON 1
> +LEFT JOIN t1 AS jt5 ON 1
> +ON 1
> +RIGHT JOIN t1 AS jt6 ON jt6.f1
> +ON 1;
> +id select_type table type possible_keys key key_len ref rows filtered Extra
> +1 SIMPLE jt6 index NULL PRIMARY 4 NULL 2 100.00 Using index
> +1 SIMPLE jt3 index NULL PRIMARY 4 NULL 2 100.00 Using index
> +1 SIMPLE jt4 index NULL PRIMARY 4 NULL 2 100.00 Using index
> +1 SIMPLE jt5 index NULL PRIMARY 4 NULL 2 100.00 Using index
> +1 SIMPLE jt2 index NULL PRIMARY 4 NULL 2 100.00 Using index
> +1 SIMPLE jt1 index NULL PRIMARY 4 NULL 2 100.00 Using index
> +Warnings:
> +Note 1003 select straight_join `test`.`jt1`.`f1` AS `f1` from `test`.`t1` `jt6` left join (`test`.`t1` `jt3` join `test`.`t1` `jt4` left join `test`.`t1` `jt5` on(1) left join `test`.`t1` `jt2` on(1)) on((`test`.`jt6`.`f1` and 1)) left join `test`.`t1` `jt1` on(1) where 1
> +DROP TABLE t1;
> End of 5.1 tests
>
> === modified file 'mysql-test/t/join_outer.test'
> --- a/mysql-test/t/join_outer.test 2010-03-19 06:21:37 +0000
> +++ b/mysql-test/t/join_outer.test 2010-07-26 05:49:51 +0000
> @@ -913,4 +913,48 @@ WHERE (COALESCE(t1.f1, t2.f1), f3) IN ((
>
> DROP TABLE t1, t2;
>
> +--echo #
> +--echo # Bug#46091 STRAIGHT_JOIN + RIGHT JOIN returns different result
> +--echo #
> +CREATE TABLE t1 (f1 INT NOT NULL);
> +INSERT INTO t1 VALUES (9),(0);
> +
> +CREATE TABLE t2 (f1 INT NOT NULL);
> +INSERT INTO t2 VALUES
> +(5),(3),(0),(3),(1),(0),(1),(7),(1),(0),(0),(8),(4),(9),(0),(2),(0),(8),(5),(1);
> +
> +SELECT STRAIGHT_JOIN COUNT(*) FROM t1 TA1
> +RIGHT JOIN t2 TA2 JOIN t2 TA3 ON TA2.f1 ON TA3.f1;
> +
> +EXPLAIN SELECT STRAIGHT_JOIN COUNT(*) FROM t1 TA1
> +RIGHT JOIN t2 TA2 JOIN t2 TA3 ON TA2.f1 ON TA3.f1;
> +
> +DROP TABLE t1, t2;
> +
> +--echo #
> +--echo # Bug#48971 Segfault in add_found_match_trig_cond () at sql_select.cc:5990
> +--echo #
> +CREATE TABLE t1(f1 INT, PRIMARY KEY (f1));
> +INSERT INTO t1 VALUES (1),(2);
> +
> +EXPLAIN EXTENDED SELECT STRAIGHT_JOIN jt1.f1 FROM t1 AS jt1
> + LEFT JOIN t1 AS jt2
> + RIGHT JOIN t1 AS jt3
> + JOIN t1 AS jt4 ON 1
> + LEFT JOIN t1 AS jt5 ON 1
> + ON 1
> + RIGHT JOIN t1 AS jt6 ON jt6.f1
> + ON 1;
> +
> +EXPLAIN EXTENDED SELECT STRAIGHT_JOIN jt1.f1 FROM t1 AS jt1
> + RIGHT JOIN t1 AS jt2
> + RIGHT JOIN t1 AS jt3
> + JOIN t1 AS jt4 ON 1
> + LEFT JOIN t1 AS jt5 ON 1
> + ON 1
> + RIGHT JOIN t1 AS jt6 ON jt6.f1
> + ON 1;
> +
> +DROP TABLE t1;
> +
> --echo End of 5.1 tests
>
> === modified file 'sql/sql_select.cc'
> --- a/sql/sql_select.cc 2010-05-26 18:55:40 +0000
> +++ b/sql/sql_select.cc 2010-07-26 05:49:51 +0000
> @@ -2717,15 +2717,29 @@ make_join_statistics(JOIN *join, TABLE_L
> as well as allow us to catch illegal cross references/
> Warshall's algorithm is used to build the transitive closure.
> As we use bitmaps to represent the relation the complexity
> - of the algorithm is O((number of tables)^2).
> + of the algorithm is O((number of tables)^2).
> +
> + The classic form of the Warshall's algorithm would look like:
> + for (i= 0; i < table_count; i++)
> + {
> + for (j= 0; j < table_count; j++)
> + {
> + for (k= 0; k < table_count; k++)
> + {
> + if (bitmap_is_set(stat[j], i) && bitmap_is_set(stat[i], k)
> + bitmap_set_bit(stat[j], k);
> + }
> + }
> + }
> */
> for (i= 0, s= stat ; i < table_count ; i++, s++)
> {
> - for (uint j= 0 ; j < table_count ; j++)
> + table= s->table;
> + JOIN_TAB *t;
> + for (uint j= 0, t= stat ; j < table_count ; j++, t++)
> {
> - table= stat[j].table;
> - if (s->dependent & table->map)
> - s->dependent |= table->reginfo.join_tab->dependent;
> + if (t->dependent & table->map)
> + t->dependent |= table->reginfo.join_tab->dependent;
> }
> if (outer_join & s->table->map)
> s->table->maybe_null= 1;
> @@ -8784,6 +8798,7 @@ simplify_joins(JOIN *join, List<TABLE_LI
> NESTED_JOIN *nested_join;
> TABLE_LIST *prev_table= 0;
> List_iterator<TABLE_LIST> li(*join_list);
> + bool straight_join= test(join->select_options & SELECT_STRAIGHT_JOIN);
> DBUG_ENTER("simplify_joins");
>
> /*
> @@ -8896,7 +8911,7 @@ simplify_joins(JOIN *join, List<TABLE_LI
> if (prev_table)
> {
> /* The order of tables is reverse: prev_table follows table */
> - if (prev_table->straight)
> + if (prev_table->straight || straight_join)
> prev_table->dep_tables|= used_tables;
> if (prev_table->on_expr)
> {
>
BR
Sergey
--
Sergey Petrunia, Software Developer
Monty Program AB, http://askmonty.org
Blog: http://s.petrunia.net/blog
[Maria-developers] WL#127 New (by Sergei): generalize mtr to support per-suite extensions
by worklog-noreply@askmonty.org 26 Jul '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: generalize mtr to support per-suite extensions
CREATION DATE..: Mon, 26 Jul 2010, 08:45
SUPERVISOR.....: Sergei
IMPLEMENTOR....: Sergei
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 127 (http://askmonty.org/worklog/?tid=127)
VERSION........: Server-5.2
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 16 (hours remain)
ORIG. ESTIMATE.: 16
PROGRESS NOTES:
DESCRIPTION:
To test the sphinxse we need to start the sphinx daemon and preload the
data. Obviously we don't want any sphinxse-specific code in mysql-test-run,
so we need a generic way for a suite to hook startup/shutdown code into the mtr.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)
[Maria-developers] bzr commit into file:///home/tsk/mprog/src/5.3/ branch (timour:2806)
by timour@askmonty.org 23 Jul '10
#At file:///home/tsk/mprog/src/5.3/ based on revid:timour@askmonty.org-20100716110215-toh8erf6p93d1n6i
2806 timour(a)askmonty.org 2010-07-23
Removed dead code that was made obsolete by the introduction of
check_join_cache_usage() by the change:
Revno: 2793
Revision Id: igor(a)askmonty.org-20091221022615-kx5ieiu0okmiupuc
Timestamp: Sun 2009-12-20 18:26:15 -0800
Backport into MariaDB-5.2 the following:
WL#2771 "Block Nested Loop Join and Batched Key Access Join"
modified:
sql/sql_select.cc
=== modified file 'sql/sql_select.cc'
--- a/sql/sql_select.cc 2010-07-15 13:59:10 +0000
+++ b/sql/sql_select.cc 2010-07-23 08:25:00 +0000
@@ -7560,7 +7560,6 @@ make_join_readinfo(JOIN *join, ulonglong
{
uint i;
bool statistics= test(!(join->select_options & SELECT_DESCRIBE));
- bool ordered_set= 0;
bool sorted= 1;
uint first_sjm_table= MAX_TABLES;
uint last_sjm_table= MAX_TABLES;
@@ -7580,21 +7579,6 @@ make_join_readinfo(JOIN *join, ulonglong
tab->read_record.file=table->file;
tab->read_record.unlock_row= rr_unlock_row;
tab->next_select=sub_select; /* normal select */
-
- /*
- Determine if the set is already ordered for ORDER BY, so it can
- disable join cache because it will change the ordering of the results.
- Code handles sort table that is at any location (not only first after
- the const tables) despite the fact that it's currently prohibited.
- We must disable join cache if the first non-const table alone is
- ordered. If there is a temp table the ordering is done as a last
- operation and doesn't prevent join cache usage.
- */
- if (!ordered_set && !join->need_tmp &&
- (table == join->sort_by_table ||
- (join->sort_by_table == (TABLE *) 1 && i != join->const_tables)))
- ordered_set= 1;
-
tab->sorted= sorted;
sorted= 0; // only first must be sorted
if (tab->loosescan_match_tab)
22 Jul '10
Hello,
First off, is this the right list to post questions about MariaDB's
source code? If it is not, I apologize, and can somebody please direct
me to the right alias? I could not find anything on MariaDB's website.
I have noticed the following behavior in MariaDB 5.1.47 with our
storage engine that is different from MySQL. I see many cases where
handler::start_bulk_insert is called before insertions, but
handler::end_bulk_insert is NOT called. In MySQL 5.1.46,
handler::end_bulk_insert is always called if there was a call to
handler::start_bulk_insert.
The commands run are:
MariaDB [test]> create table ttt(a int);
MariaDB [test]> insert into ttt values (1),(2),(3);
Query OK, 3 rows affected (0.01 sec)
Records: 3 Duplicates: 0 Warnings: 0
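For reference, this is the pairing our engine assumes around such a
statement. A minimal, hypothetical sketch (example_handler, its logging and
main() are made up for illustration; only the two method names match the
real handler API):

/*
  Stand-in class that logs the bulk-insert bracketing a storage engine
  typically relies on; not our actual handler code.
*/
#include <cstdio>

class example_handler
{
  bool bulk_insert_active;
public:
  example_handler() : bulk_insert_active(false) {}

  void start_bulk_insert(unsigned long long estimated_rows)
  {
    /* An engine would allocate/prepare its bulk-insert buffers here. */
    bulk_insert_active= true;
    std::printf("start_bulk_insert(%llu)\n", estimated_rows);
  }

  int end_bulk_insert()
  {
    /* An engine would flush and release those buffers here; if this call
       never arrives, the buffered rows are never flushed. */
    if (!bulk_insert_active)
      std::printf("end_bulk_insert() without matching start\n");
    bulk_insert_active= false;
    std::printf("end_bulk_insert()\n");
    return 0;
  }
};

int main()
{
  example_handler h;
  h.start_bulk_insert(3);  /* seen around INSERT INTO ttt VALUES (1),(2),(3) */
  h.end_bulk_insert();     /* the call that appears to be missing in 5.1.47 */
  return 0;
}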
Is there a reason for this? Does some flag need to be exposed for this
function to be called?
Thanks
-Zardosht
Re: [Maria-developers] [Fwd: [Commits] bzr commit into Mariadb 5.2, with Maria 2.0:maria/5.2 branch (igor:2821) Bug#604549]
by Sergey Petrunya 22 Jul '10
Hello Igor,
Please find the feedback below.
On Mon, Jul 12, 2010 at 07:08:34PM -0700, Igor Babaev wrote:
> Please review this patch for the 5.2 tree.
>
> Regards,
> Igor.
>
> -------- Original Message --------
> Subject: [Commits] bzr commit into Mariadb 5.2, with Maria 2.0:maria/5.2
> branch (igor:2821) Bug#604549
> Date: Mon, 12 Jul 2010 18:23:26 -0700 (PDT)
> From: Igor Babaev <igor(a)askmonty.org>
> Reply-To: maria-developers(a)lists.launchpad.net
> To: commits(a)mariadb.org
>
> #At lp:maria/5.2 based on
> revid:knielsen@knielsen-hq.org-20100709120309-xzhk02q8coq7m6tl
>
> 2821 Igor Babaev 2010-07-12
> Fixed bug #604549.
> There was no error thrown when creating a table with a virtual table
> computed by an expression returning a row.
> This caused a crash when inserting into the table.
>
> Removed periods at the end of the error messages for virtual columns.
> Adjusted output in test result files accordingly.
Periods at the end of error messages have been there the whole time. Why do
we suddenly decide to remove them now?
> === modified file 'mysql-test/r/plugin.result'
> --- a/mysql-test/r/plugin.result 2010-04-30 20:04:35 +0000
> +++ b/mysql-test/r/plugin.result 2010-07-13 01:23:07 +0000
> @@ -75,9 +75,9 @@ SET SQL_MODE='IGNORE_BAD_TABLE_OPTIONS';
> #illegal value fixed
> CREATE TABLE t1 (a int) ENGINE=example ULL=10000000000000000000
> one_or_two='ttt' YESNO=SSS;
> Warnings:
> -Warning 1651 Incorrect value '10000000000000000000' for option 'ULL'
> -Warning 1651 Incorrect value 'ttt' for option 'one_or_two'
> -Warning 1651 Incorrect value 'SSS' for option 'YESNO'
> +Warning 1652 Incorrect value '10000000000000000000' for option 'ULL'
> +Warning 1652 Incorrect value 'ttt' for option 'one_or_two'
> +Warning 1652 Incorrect value 'SSS' for option 'YESNO'
Why did the warning code change? Is this intentional?
> === modified file 'sql/share/errmsg.txt'
> --- a/sql/share/errmsg.txt 2010-06-01 19:52:20 +0000
> +++ b/sql/share/errmsg.txt 2010-07-13 01:23:07 +0000
> @@ -6211,28 +6211,31 @@ ER_VCOL_BASED_ON_VCOL
> eng "A computed column cannot be based on a computed column"
>
> ER_VIRTUAL_COLUMN_FUNCTION_IS_NOT_ALLOWED
> - eng "Function or expression is not allowed for column '%s'."
> + eng "Function or expression is not allowed for column '%s'"
>
> ER_DATA_CONVERSION_ERROR_FOR_VIRTUAL_COLUMN
> - eng "Generated value for computed column '%s' cannot be
> converted to type '%s'."
> + eng "Generated value for computed column '%s' cannot be
> converted to type '%s'"
>
> ER_PRIMARY_KEY_BASED_ON_VIRTUAL_COLUMN
> - eng "Primary key cannot be defined upon a computed column."
> + eng "Primary key cannot be defined upon a computed column"
>
> ER_KEY_BASED_ON_GENERATED_VIRTUAL_COLUMN
> - eng "Key/Index cannot be defined on a non-stored computed column."
> + eng "Key/Index cannot be defined on a non-stored computed column"
>
> ER_WRONG_FK_OPTION_FOR_VIRTUAL_COLUMN
> - eng "Cannot define foreign key with %s clause on a computed
> column."
> + eng "Cannot define foreign key with %s clause on a computed column"
>
> ER_WARNING_NON_DEFAULT_VALUE_FOR_VIRTUAL_COLUMN
> - eng "The value specified for computed column '%s' in table '%s'
> ignored."
> + eng "The value specified for computed column '%s' in table '%s'
> ignored"
>
> ER_UNSUPPORTED_ACTION_ON_VIRTUAL_COLUMN
> - eng "'%s' is not yet supported for computed columns."
> + eng "'%s' is not yet supported for computed columns"
>
> ER_CONST_EXPR_IN_VCOL
> - eng "Constant expression in computed column function is not
> allowed."
> + eng "Constant expression in computed column function is not
> allowed"
> +
> +ER_ROW_EXPR_FOR_VCOL
> + eng "Expression for computed column cannot return a row"
>
When one sees this pair of codes ER_CONST_EXPR_IN_VCOL and ER_ROW_EXPR_FOR_VCOL,
one can't help asking whether these are the only disallowed expressions,
and if not, whether we have error codes for vcol expressions with
- user variables
- subqueries
- SP calls
- etc, etc.
Do we handle such cases at all?
> ER_DEBUG_SYNC_TIMEOUT
> eng "debug sync point wait timed out"
>
> === modified file 'sql/table.cc'
> --- a/sql/table.cc 2010-06-05 14:53:36 +0000
> +++ b/sql/table.cc 2010-07-13 01:23:07 +0000
> @@ -1859,6 +1859,14 @@ bool fix_vcol_expr(THD *thd,
> goto end;
> }
> thd->where= save_where;
> +#if 0
> +#else
> + if (unlikely(func_expr->result_type() == ROW_RESULT))
> + {
> + my_error(ER_ROW_EXPR_FOR_VCOL, MYF(0));
> + goto end;
> + }
> +#endif
Please remove #if/#else.
> #ifdef PARANOID
> /*
> Walk through the Item tree checking if all items are valid
>
BR
Sergey
--
Sergey Petrunia, Software Developer
Monty Program AB, http://askmonty.org
Blog: http://s.petrunia.net/blog
Antony,
Hi there! How goes? I have a question pertaining to bug 571200. I have
hand-coded the fix from Federated as shown on
http://lists.mysql.com/commits/102419. Obviously, FederatedX has changed
enough that there's a fair amount of work I had to do (see my branch at
bzr+ssh://bazaar.launchpad.net/~capttofu/maria_bug_571200). Most of it I
think I have a good solution for. The one thing remaining is how to handle
anything that gets or sets the "result->data_cursor" value, as you'll see
in that patch. Since "result" is now a FEDERATEDX_IO_RESULT, it doesn't
have that structure member; although FEDERATEDX_IO_RESULT in the
federatedx_io_mysql class is really a MYSQL_RES, trying to access
result->data_cursor results in errors such as:
ha_federatedx.cc: In member function ‘int
ha_federatedx::read_next(uchar*, FEDERATEDX_IO_RESULT*)’:
ha_federatedx.cc:2888: error: invalid use of incomplete type ‘struct
st_federatedx_result’
ha_federatedx.h:127: error: forward declaration of ‘struct
st_federatedx_result’
So, the solution, I think, is to have virtual methods in the
federatedx_io class called "get_data_cursor_pos" and
"set_data_cursor_pos" that, within the federatedx_io_mysql class, access
result->data_cursor. For other driver classes it'll require some
thinking, but for now this would get the mysql driver working.
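A rough sketch of that idea (purely illustrative: the
get_data_cursor_pos()/set_data_cursor_pos() names and signatures are my
proposal, not existing FederatedX API, and it assumes, as described above,
that the mysql driver keeps a MYSQL_RES behind the opaque result pointer):

/*
  Illustrative sketch only -- proposed methods, not existing code.
  Assumes <mysql.h> from the client library.
*/
#include <mysql.h>

struct st_federatedx_result;                     /* opaque to generic code */
typedef struct st_federatedx_result FEDERATEDX_IO_RESULT;

class federatedx_io
{
public:
  virtual ~federatedx_io() {}
  /* Each driver exposes its own notion of a result-set cursor position. */
  virtual void *get_data_cursor_pos(FEDERATEDX_IO_RESULT *io_result)= 0;
  virtual void  set_data_cursor_pos(FEDERATEDX_IO_RESULT *io_result,
                                    void *pos)= 0;
};

class federatedx_io_mysql : public federatedx_io
{
public:
  void *get_data_cursor_pos(FEDERATEDX_IO_RESULT *io_result)
  {
    /* For the mysql driver the opaque pointer wraps a MYSQL_RES. */
    MYSQL_RES *res= (MYSQL_RES *) io_result;
    return res->data_cursor;
  }
  void set_data_cursor_pos(FEDERATEDX_IO_RESULT *io_result, void *pos)
  {
    MYSQL_RES *res= (MYSQL_RES *) io_result;
    res->data_cursor= (MYSQL_ROWS *) pos;
  }
};

Keeping the position as an opaque void* is deliberate: other drivers could
return an offset or a driver-specific token without changing the interface.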
What are your thoughts on this and what I have thus far?
Thanks, hope all is going well!
regards,
Patrick
Re: [Maria-developers] [Fwd: [Commits] bzr commit into Mariadb 5.2, with Maria 2.0:maria/5.2 branch (igor:2827) Bug#607177]
by Sergey Petrunya 21 Jul '10
Hello Igor,
Ok to push.
On Tue, Jul 20, 2010 at 10:01:30PM -0700, Igor Babaev wrote:
> Sergey,
>
> Please review this trivial patch for the 5.2 tree.
>
> Regards,
> Igor.
>
> -------- Original Message --------
> Subject: [Commits] bzr commit into Mariadb 5.2, with Maria 2.0:maria/5.2
> branch (igor:2827) Bug#607177
> Date: Tue, 20 Jul 2010 22:00:00 -0700 (PDT)
> From: Igor Babaev <igor(a)askmonty.org>
> Reply-To: maria-developers(a)lists.launchpad.net
> To: commits(a)mariadb.org
>
> #At lp:maria/5.2 based on
> revid:igor@askmonty.org-20100717195808-mvh782jvt6c32u2d
>
> 2827 Igor Babaev 2010-07-20
> Fixed bug #607177.
> Due to an invalid check for NULL of the second argument of the
> Item_func_round items performed in the code of
> Item_func_round::real_op
> the function ROUND sometimes could return wrong results.
> modified:
> mysql-test/suite/vcol/r/vcol_misc.result
> mysql-test/suite/vcol/t/vcol_misc.test
> sql/item_func.cc
>
> === modified file 'mysql-test/suite/vcol/r/vcol_misc.result'
> --- a/mysql-test/suite/vcol/r/vcol_misc.result 2010-07-17 19:58:08 +0000
> +++ b/mysql-test/suite/vcol/r/vcol_misc.result 2010-07-21 04:59:47 +0000
> @@ -87,3 +87,23 @@ a v
> 2002-02-15 00:00:00 0
> 2000-10-15 00:00:00 1
> DROP TABLE t1, t2;
> +CREATE TABLE t1 (p int, a double NOT NULL, v double AS (ROUND(a,p))
> VIRTUAL);
> +INSERT INTO t1 VALUES (0,1,0);
> +Warnings:
> +Warning 1645 The value specified for computed column 'v' in table 't1'
> ignored
> +INSERT INTO t1 VALUES (NULL,0,0);
> +Warnings:
> +Warning 1645 The value specified for computed column 'v' in table 't1'
> ignored
> +SELECT a, p, v, ROUND(a,p), ROUND(a,p+NULL) FROM t1;
> +a p v ROUND(a,p) ROUND(a,p+NULL)
> +1 0 1 1 NULL
> +0 NULL NULL NULL NULL
> +DROP TABLE t1;
> +CREATE TABLE t1 (p int, a double NOT NULL);
> +INSERT INTO t1(p,a) VALUES (0,1);
> +INSERT INTO t1(p,a) VALUES (NULL,0);
> +SELECT a, p, ROUND(a,p), ROUND(a,p+NULL) FROM t1;
> +a p ROUND(a,p) ROUND(a,p+NULL)
> +1 0 1 NULL
> +0 NULL NULL NULL
> +DROP TABLE t1;
>
> === modified file 'mysql-test/suite/vcol/t/vcol_misc.test'
> --- a/mysql-test/suite/vcol/t/vcol_misc.test 2010-07-17 19:58:08 +0000
> +++ b/mysql-test/suite/vcol/t/vcol_misc.test 2010-07-21 04:59:47 +0000
> @@ -87,3 +87,19 @@ INSERT INTO t2(a) VALUES ('2000-10-15');
> SELECT * FROM t2;
>
> DROP TABLE t1, t2;
> +
> +#
> +# Bug#607177: ROUND function in the expression for a virtual function
> +#
> +
> +CREATE TABLE t1 (p int, a double NOT NULL, v double AS (ROUND(a,p))
> VIRTUAL);
> +INSERT INTO t1 VALUES (0,1,0);
> +INSERT INTO t1 VALUES (NULL,0,0);
> +SELECT a, p, v, ROUND(a,p), ROUND(a,p+NULL) FROM t1;
> +DROP TABLE t1;
> +
> +CREATE TABLE t1 (p int, a double NOT NULL);
> +INSERT INTO t1(p,a) VALUES (0,1);
> +INSERT INTO t1(p,a) VALUES (NULL,0);
> +SELECT a, p, ROUND(a,p), ROUND(a,p+NULL) FROM t1;
> +DROP TABLE t1;
>
> === modified file 'sql/item_func.cc'
> --- a/sql/item_func.cc 2010-06-01 19:52:20 +0000
> +++ b/sql/item_func.cc 2010-07-21 04:59:47 +0000
> @@ -2040,10 +2040,12 @@ double Item_func_round::real_op()
> {
> double value= args[0]->val_real();
>
> - if (!(null_value= args[0]->null_value || args[1]->null_value))
> - return my_double_round(value, args[1]->val_int(),
> args[1]->unsigned_flag,
> - truncate);
> -
> + if (!(null_value= args[0]->null_value))
> + {
> + longlong dec= args[1]->val_int();
> + if (!(null_value= args[1]->null_value))
> + return my_double_round(value, dec, args[1]->unsigned_flag, truncate);
> + }
> return 0.0;
> }
>
>
> _______________________________________________
> commits mailing list
> commits(a)mariadb.org
> https://lists.askmonty.org/cgi-bin/mailman/listinfo/commits
--
BR
Sergey
--
Sergey Petrunia, Software Developer
Monty Program AB, http://askmonty.org
Blog: http://s.petrunia.net/blog
Re: [Maria-developers] [Fwd: [Commits] bzr commit into Mariadb 5.2, with Maria 2.0:maria/5.2 branch (igor:2827) Bug#607566]
by Sergey Petrunya 20 Jul '10
Hello Igor,
Ok to push.
On Mon, Jul 19, 2010 at 10:43:18PM -0700, Igor Babaev wrote:
> Sergey,
>
> Please review this patch for the 5.2 tree.
>
> Regards,
> Igor.
>
>
> -------- Original Message --------
> Subject: [Commits] bzr commit into Mariadb 5.2, with Maria 2.0:maria/5.2
> branch (igor:2827) Bug#607566
> Date: Mon, 19 Jul 2010 22:41:37 -0700 (PDT)
> From: Igor Babaev <igor(a)askmonty.org>
> Reply-To: maria-developers(a)lists.launchpad.net
> To: commits(a)mariadb.org
>
> #At lp:maria/5.2 based on
> revid:igor@askmonty.org-20100717195808-mvh782jvt6c32u2d
>
> 2827 Igor Babaev 2010-07-19
> Fixed bug #607566.
> For queries with order by clauses that employed filesort usage of
> virtual column references in select lists could trigger assertion
> failures. It happened because a wrong vcol_set bitmap was used for
> filesort. It turned out that filesort required its own vcol_set
> bitmap.
>
> Made management of the vcol_set bitmaps similar to the management
> of the read_set and write_set bitmaps.
> modified:
> mysql-test/suite/vcol/r/vcol_misc.result
> mysql-test/suite/vcol/t/vcol_misc.test
> sql/field.cc
> sql/filesort.cc
> sql/sql_insert.cc
> sql/sql_select.cc
> sql/table.cc
> sql/table.h
>
> === modified file 'mysql-test/suite/vcol/r/vcol_misc.result'
> --- a/mysql-test/suite/vcol/r/vcol_misc.result 2010-07-17 19:58:08 +0000
> +++ b/mysql-test/suite/vcol/r/vcol_misc.result 2010-07-20 05:41:24 +0000
> @@ -87,3 +87,13 @@ a v
> 2002-02-15 00:00:00 0
> 2000-10-15 00:00:00 1
> DROP TABLE t1, t2;
> +CREATE TABLE t1 (
> +a char(255), b char(255), c char(255), d char(255),
> +v char(255) AS (CONCAT(c,d) ) VIRTUAL
> +);
> +INSERT INTO t1(a,b,c,d) VALUES ('w','x','y','z'), ('W','X','Y','Z');
> +SELECT v FROM t1 ORDER BY CONCAT(a,b);
> +v
> +yz
> +YZ
> +DROP TABLE t1;
>
> === modified file 'mysql-test/suite/vcol/t/vcol_misc.test'
> --- a/mysql-test/suite/vcol/t/vcol_misc.test 2010-07-17 19:58:08 +0000
> +++ b/mysql-test/suite/vcol/t/vcol_misc.test 2010-07-20 05:41:24 +0000
> @@ -87,3 +87,18 @@ INSERT INTO t2(a) VALUES ('2000-10-15');
> SELECT * FROM t2;
>
> DROP TABLE t1, t2;
> +
> +#
> +# Bug#607566: Virtual column in the select list of SELECT with ORDER BY
> +#
> +
> +CREATE TABLE t1 (
> + a char(255), b char(255), c char(255), d char(255),
> + v char(255) AS (CONCAT(c,d) ) VIRTUAL
> +);
> +
> +INSERT INTO t1(a,b,c,d) VALUES ('w','x','y','z'), ('W','X','Y','Z');
> +
> +SELECT v FROM t1 ORDER BY CONCAT(a,b);
> +
> +DROP TABLE t1;
>
> === modified file 'sql/field.cc'
> --- a/sql/field.cc 2010-06-01 19:52:20 +0000
> +++ b/sql/field.cc 2010-07-20 05:41:24 +0000
> @@ -57,7 +57,7 @@ const char field_separator=',';
> ((ulong) ((LL(1) << min(arg, 4) * 8) - LL(1)))
>
> #define ASSERT_COLUMN_MARKED_FOR_READ DBUG_ASSERT(!table ||
> (!table->read_set || bitmap_is_set(table->read_set, field_index)))
> -#define ASSERT_COLUMN_MARKED_FOR_WRITE_OR_COMPUTED DBUG_ASSERT(!table
> || (!table->write_set || bitmap_is_set(table->write_set, field_index) ||
> bitmap_is_set(&table->vcol_set, field_index)))
> +#define ASSERT_COLUMN_MARKED_FOR_WRITE_OR_COMPUTED DBUG_ASSERT(!table
> || (!table->write_set || bitmap_is_set(table->write_set, field_index) ||
> bitmap_is_set(table->vcol_set, field_index)))
>
> /*
> Rules for merging different types of fields in UNION
>
> === modified file 'sql/filesort.cc'
> --- a/sql/filesort.cc 2010-07-17 19:58:08 +0000
> +++ b/sql/filesort.cc 2010-07-20 05:41:24 +0000
> @@ -515,7 +515,7 @@ static ha_rows find_all_keys(SORTPARAM *
> THD *thd= current_thd;
> volatile THD::killed_state *killed= &thd->killed;
> handler *file;
> - MY_BITMAP *save_read_set, *save_write_set;
> + MY_BITMAP *save_read_set, *save_write_set, *save_vcol_set;
> DBUG_ENTER("find_all_keys");
> DBUG_PRINT("info",("using: %s",
> (select ? select->quick ? "ranges" : "where":
> @@ -552,6 +552,7 @@ static ha_rows find_all_keys(SORTPARAM *
> /* Remember original bitmaps */
> save_read_set= sort_form->read_set;
> save_write_set= sort_form->write_set;
> + save_vcol_set= sort_form->vcol_set;
> /* Set up temporary column read map for columns used by sort */
> bitmap_clear_all(&sort_form->tmp_set);
> /* Temporary set for register_used_fields and
> register_field_in_read_map */
> @@ -560,7 +561,8 @@ static ha_rows find_all_keys(SORTPARAM *
> if (select && select->cond)
> select->cond->walk(&Item::register_field_in_read_map, 1,
> (uchar*) sort_form);
> - sort_form->column_bitmaps_set(&sort_form->tmp_set, &sort_form->tmp_set);
> + sort_form->column_bitmaps_set(&sort_form->tmp_set, &sort_form->tmp_set,
> + &sort_form->tmp_set);
>
> for (;;)
> {
> @@ -643,7 +645,7 @@ static ha_rows find_all_keys(SORTPARAM *
> DBUG_RETURN(HA_POS_ERROR);
>
> /* Signal we should use orignal column read and write maps */
> - sort_form->column_bitmaps_set(save_read_set, save_write_set);
> + sort_form->column_bitmaps_set(save_read_set, save_write_set,
> save_vcol_set);
>
> DBUG_PRINT("test",("error: %d indexpos: %d",error,indexpos));
> if (error != HA_ERR_END_OF_FILE)
>
> === modified file 'sql/sql_insert.cc'
> --- a/sql/sql_insert.cc 2010-07-15 23:51:05 +0000
> +++ b/sql/sql_insert.cc 2010-07-20 05:41:24 +0000
> @@ -2109,7 +2109,7 @@ TABLE *Delayed_insert::get_local_table(T
> copy= (TABLE*) client_thd->alloc(sizeof(*copy)+
> (share->fields+1)*sizeof(Field**)+
> share->reclength +
> - share->column_bitmap_size*2);
> + share->column_bitmap_size*3);
> if (!copy)
> goto error;
>
> @@ -2119,7 +2119,7 @@ TABLE *Delayed_insert::get_local_table(T
> /* Assign the pointers for the field pointers array and the record. */
> field= copy->field= (Field**) (copy + 1);
> bitmap= (uchar*) (field + share->fields + 1);
> - copy->record[0]= (bitmap + share->column_bitmap_size * 2);
> + copy->record[0]= (bitmap + share->column_bitmap_size*3);
> memcpy((char*) copy->record[0], (char*) table->record[0],
> share->reclength);
> /*
> Make a copy of all fields.
> @@ -2161,10 +2161,13 @@ TABLE *Delayed_insert::get_local_table(T
> copy->def_read_set.bitmap= (my_bitmap_map*) bitmap;
> copy->def_write_set.bitmap= ((my_bitmap_map*)
> (bitmap + share->column_bitmap_size));
> + copy->def_vcol_set.bitmap= ((my_bitmap_map*)
> + (bitmap + 2*share->column_bitmap_size));
> copy->tmp_set.bitmap= 0; // To catch errors
> - bzero((char*) bitmap, share->column_bitmap_size*2);
> + bzero((char*) bitmap, share->column_bitmap_size*3);
> copy->read_set= &copy->def_read_set;
> copy->write_set= &copy->def_write_set;
> + copy->vcol_set= &copy->def_vcol_set;
>
> DBUG_RETURN(copy);
>
>
> === modified file 'sql/sql_select.cc'
> --- a/sql/sql_select.cc 2010-07-17 19:58:08 +0000
> +++ b/sql/sql_select.cc 2010-07-20 05:41:24 +0000
> @@ -5513,7 +5513,7 @@ static void calc_used_field_length(THD *
> {
> uint null_fields,blobs,fields,rec_length;
> Field **f_ptr,*field;
> - MY_BITMAP *read_set= join_tab->table->read_set;;
> + MY_BITMAP *read_set= join_tab->table->read_set;
>
> null_fields= blobs= fields= rec_length=0;
> for (f_ptr=join_tab->table->field ; (field= *f_ptr) ; f_ptr++)
> @@ -9877,11 +9877,11 @@ void setup_tmp_table_column_bitmaps(TABL
> uint field_count= table->s->fields;
> bitmap_init(&table->def_read_set, (my_bitmap_map*) bitmaps, field_count,
> FALSE);
> - bitmap_init(&table->tmp_set,
> + bitmap_init(&table->def_vcol_set,
> (my_bitmap_map*) (bitmaps+ bitmap_buffer_size(field_count)),
> field_count, FALSE);
> - bitmap_init(&table->vcol_set,
> - (my_bitmap_map*) (bitmaps+
> 2+bitmap_buffer_size(field_count)),
> + bitmap_init(&table->tmp_set,
> + (my_bitmap_map*) (bitmaps+
> 2*bitmap_buffer_size(field_count)),
> field_count, FALSE);
>
> /* write_set and all_set are copies of read_set */
>
> === modified file 'sql/table.cc'
> --- a/sql/table.cc 2010-07-17 19:58:08 +0000
> +++ b/sql/table.cc 2010-07-20 05:41:24 +0000
> @@ -2343,9 +2343,9 @@ partititon_err:
> (my_bitmap_map*) bitmaps, share->fields, FALSE);
> bitmap_init(&outparam->def_write_set,
> (my_bitmap_map*) (bitmaps+bitmap_size), share->fields,
> FALSE);
> - bitmap_init(&outparam->tmp_set,
> + bitmap_init(&outparam->def_vcol_set,
> (my_bitmap_map*) (bitmaps+bitmap_size*2), share->fields,
> FALSE);
> - bitmap_init(&outparam->vcol_set,
> + bitmap_init(&outparam->tmp_set,
> (my_bitmap_map*) (bitmaps+bitmap_size*3), share->fields,
> FALSE);
> outparam->default_column_bitmaps();
>
> @@ -4809,10 +4809,10 @@ void st_table::clear_column_bitmaps()
> Reset column read/write usage. It's identical to:
> bitmap_clear_all(&table->def_read_set);
> bitmap_clear_all(&table->def_write_set);
> + bitmap_clear_all(&table->def_vcol_set);
> */
> - bzero((char*) def_read_set.bitmap, s->column_bitmap_size*2);
> - bzero((char*) def_read_set.bitmap, s->column_bitmap_size*4);
> - column_bitmaps_set(&def_read_set, &def_write_set);
> + bzero((char*) def_read_set.bitmap, s->column_bitmap_size*3);
> + column_bitmaps_set(&def_read_set, &def_write_set, &def_vcol_set);
> }
>
>
> @@ -5085,7 +5085,7 @@ bool st_table::mark_virtual_col(Field *f
> {
> bool res;
> DBUG_ASSERT(field->vcol_info);
> - if (!(res= bitmap_fast_test_and_set(&vcol_set, field->field_index)))
> + if (!(res= bitmap_fast_test_and_set(vcol_set, field->field_index)))
> {
> Item *vcol_item= field->vcol_info->expr_item;
> DBUG_ASSERT(vcol_item);
> @@ -5464,7 +5464,7 @@ int update_virtual_fields(THD *thd, TABL
> vfield= (*vfield_ptr);
> DBUG_ASSERT(vfield->vcol_info && vfield->vcol_info->expr_item);
> /* Only update those fields that are marked in the vcol_set bitmap */
> - if (bitmap_is_set(&table->vcol_set, vfield->field_index) &&
> + if (bitmap_is_set(table->vcol_set, vfield->field_index) &&
> (for_write || !vfield->stored_in_db))
> {
> /* Compute the actual value of the virtual fields */
>
> === modified file 'sql/table.h'
> --- a/sql/table.h 2010-07-17 19:58:08 +0000
> +++ b/sql/table.h 2010-07-20 05:41:24 +0000
> @@ -719,9 +719,8 @@ struct st_table {
> const char *alias; /* alias or table name */
> uchar *null_flags;
> my_bitmap_map *bitmap_init_value;
> - MY_BITMAP def_read_set, def_write_set, tmp_set; /* containers */
> - MY_BITMAP vcol_set; /* set of used virtual columns */
> - MY_BITMAP *read_set, *write_set; /* Active column sets */
> + MY_BITMAP def_read_set, def_write_set, def_vcol_set, tmp_set;
> + MY_BITMAP *read_set, *write_set, *vcol_set; /* Active column sets */
> /*
> The ID of the query that opened and is using this table. Has different
> meanings depending on the table type.
> @@ -904,12 +903,30 @@ struct st_table {
> if (file)
> file->column_bitmaps_signal();
> }
> + inline void column_bitmaps_set(MY_BITMAP *read_set_arg,
> + MY_BITMAP *write_set_arg,
> + MY_BITMAP *vcol_set_arg)
> + {
> + read_set= read_set_arg;
> + write_set= write_set_arg;
> + vcol_set= vcol_set_arg;
> + if (file)
> + file->column_bitmaps_signal();
> + }
> inline void column_bitmaps_set_no_signal(MY_BITMAP *read_set_arg,
> MY_BITMAP *write_set_arg)
> {
> read_set= read_set_arg;
> write_set= write_set_arg;
> }
> + inline void column_bitmaps_set_no_signal(MY_BITMAP *read_set_arg,
> + MY_BITMAP *write_set_arg,
> + MY_BITMAP *vcol_set_arg)
> + {
> + read_set= read_set_arg;
> + write_set= write_set_arg;
> + vcol_set= vcol_set_arg;
> + }
> inline void use_all_columns()
> {
> column_bitmaps_set(&s->all_set, &s->all_set);
> @@ -918,6 +935,7 @@ struct st_table {
> {
> read_set= &def_read_set;
> write_set= &def_write_set;
> + vcol_set= &def_vcol_set;
> }
> /* Is table open or should be treated as such by name-locking? */
> inline bool is_name_opened() { return db_stat || open_placeholder; }
>
> _______________________________________________
> commits mailing list
> commits(a)mariadb.org
> https://lists.askmonty.org/cgi-bin/mailman/listinfo/commits
--
BR
Sergey
--
Sergey Petrunia, Software Developer
Monty Program AB, http://askmonty.org
Blog: http://s.petrunia.net/blog
Welcome,
I am a programmer at Comarch (Poland) and I would like to know what we
(Comarch) should do to release our custom storage engine, called CLDB,
under the GPLv2 licence. Our custom storage engine uses the MySQL/MariaDB
custom storage engine API and some sources from the "sql" directory for
condition pushdown. CLDB is a column-oriented database, currently
single-threaded.
What documents should we have or fill in to obtain a GPL licence?
Should we share our code, and where?
Thank you for your answers.
-----------------------------------------------------------------------------
Mateusz Matan
IT Security R&D Department, C/C++ programmer
ComArch S.A., Al. Jana Pawła II 41d, 31-864 Kraków
tel: (+48 12) 684 8411
e-mail: Mateusz.Matan(a)comarch.pl
[Maria-developers] bzr commit into file:///home/tsk/mprog/src/5.3-mwl89/ branch (timour:2804)
by timour@askmonty.org 18 Jul '10
#At file:///home/tsk/mprog/src/5.3-mwl89/ based on revid:timour@askmonty.org-20100718114608-wiz9ji9z80pzjw2k
2804 timour(a)askmonty.org 2010-07-18
MWL#89: Cost-based choice between Materialization and IN->EXISTS transformation
Step2 in the separation of the creation of IN->EXISTS equi-join conditions from
their injection. The goal of this separation is to make it possible that the
IN->EXISTS conditions can be used for cost estimation without actually modifying
the subquery.
This patch separates row_value_in_to_exists_transformer() into two methods:
- create_row_value_in_to_exists_cond(), and
- inject_row_value_in_to_exists_cond()
The patch performs minimal refactoring of the code so that it is easier to solve
problems resulting from the separation. There is a lot to be simplified in this
code, but this will be done separately.
modified:
sql/item_subselect.cc
sql/item_subselect.h
=== modified file 'sql/item_subselect.cc'
--- a/sql/item_subselect.cc 2010-07-18 11:46:08 +0000
+++ b/sql/item_subselect.cc 2010-07-18 12:59:24 +0000
@@ -1524,16 +1524,16 @@ Item_subselect::trans_res
Item_in_subselect::single_value_in_to_exists_transformer(JOIN * join,
Comp_creator *func)
{
- Item *where_term;
- Item *having_term;
+ Item *where_item;
+ Item *having_item;
Item_subselect::trans_res res;
res= create_single_value_in_to_exists_cond(join, func,
- &where_term, &having_term);
+ &where_item, &having_item);
if (res != RES_OK)
return res;
res= inject_single_value_in_to_exists_cond(join, func,
- where_term, having_term);
+ where_item, having_item);
return res;
}
@@ -1541,8 +1541,8 @@ Item_in_subselect::single_value_in_to_ex
Item_subselect::trans_res
Item_in_subselect::create_single_value_in_to_exists_cond(JOIN * join,
Comp_creator *func,
- Item **where_term,
- Item **having_term)
+ Item **where_item,
+ Item **having_item)
{
SELECT_LEX *select_lex= join->select_lex;
DBUG_ENTER("Item_in_subselect::create_single_value_in_to_exists_cond");
@@ -1569,8 +1569,8 @@ Item_in_subselect::create_single_value_i
if (item->fix_fields(thd, 0))
DBUG_RETURN(RES_ERROR);
- *having_term= item;
- *where_term= NULL;
+ *having_item= item;
+ *where_item= NULL;
}
else
{
@@ -1595,7 +1595,7 @@ Item_in_subselect::create_single_value_i
if (having->fix_fields(thd, 0))
DBUG_RETURN(RES_ERROR);
- *having_term= having;
+ *having_item= having;
item= new Item_cond_or(item,
new Item_func_isnull(orig_item));
@@ -1613,7 +1613,7 @@ Item_in_subselect::create_single_value_i
if (item->fix_fields(thd, 0))
DBUG_RETURN(RES_ERROR);
- *where_term= item;
+ *where_item= item;
}
else
{
@@ -1640,13 +1640,13 @@ Item_in_subselect::create_single_value_i
if (new_having->fix_fields(thd, 0))
DBUG_RETURN(RES_ERROR);
- *having_term= new_having;
- *where_term= NULL;
+ *having_item= new_having;
+ *where_item= NULL;
}
else
{
- *having_term= NULL;
- *where_term= (Item*) select_lex->item_list.head();
+ *having_item= NULL;
+ *where_item= (Item*) select_lex->item_list.head();
}
}
}
@@ -1659,8 +1659,8 @@ Item_in_subselect::create_single_value_i
Item_subselect::trans_res
Item_in_subselect::inject_single_value_in_to_exists_cond(JOIN * join,
Comp_creator *func,
- Item *where_term,
- Item *having_term)
+ Item *where_item,
+ Item *having_item)
{
SELECT_LEX *select_lex= join->select_lex;
bool fix_res;
@@ -1675,9 +1675,9 @@ Item_in_subselect::inject_single_value_i
we can assign select_lex->having here, and pass 0 as last
argument (reference) to fix_fields()
*/
- select_lex->having= join->having= and_items(join->having, having_term);
- if (join->having == having_term)
- having_term->name= (char*)in_having_cond;
+ select_lex->having= join->having= and_items(join->having, having_item);
+ if (join->having == having_item)
+ having_item->name= (char*)in_having_cond;
select_lex->having_fix_field= 1;
/*
we do not check join->having->fixed, because Item_and (from and_items)
@@ -1707,8 +1707,8 @@ Item_in_subselect::inject_single_value_i
we can assign select_lex->having here, and pass 0 as last
argument (reference) to fix_fields()
*/
- having_term->name= (char*)in_having_cond;
- select_lex->having= join->having= having_term;
+ having_item->name= (char*)in_having_cond;
+ select_lex->having= join->having= having_item;
select_lex->having_fix_field= 1;
/*
we do not check join->having->fixed, because Item_and (from
@@ -1726,14 +1726,14 @@ Item_in_subselect::inject_single_value_i
single_value_transformer but there is no corresponding action in
row_value_transformer?
*/
- where_term->name= (char *)in_additional_cond;
+ where_item->name= (char *)in_additional_cond;
/*
AND can't be changed during fix_fields()
we can assign select_lex->having here, and pass 0 as last
argument (reference) to fix_fields()
*/
- select_lex->where= join->conds= and_items(join->conds, where_term);
+ select_lex->where= join->conds= and_items(join->conds, where_item);
select_lex->where->top_level_item();
/*
we do not check join->conds->fixed, because Item_and can't be fixed
@@ -1746,8 +1746,8 @@ Item_in_subselect::inject_single_value_i
{
if (select_lex->master_unit()->is_union())
{
- having_term->name= (char*)in_having_cond;
- select_lex->having= join->having= having_term;
+ having_item->name= (char*)in_having_cond;
+ select_lex->having= join->having= having_item;
select_lex->having_fix_field= 1;
/*
@@ -1765,11 +1765,11 @@ Item_in_subselect::inject_single_value_i
// it is single select without tables => possible optimization
// remove the dependence mark since the item is moved to upper
// select and is not outer anymore.
- where_term->walk(&Item::remove_dependence_processor, 0,
+ where_item->walk(&Item::remove_dependence_processor, 0,
(uchar *) select_lex->outer_select());
- where_term= func->create(left_expr, where_term);
+ where_item= func->create(left_expr, where_item);
// fix_field of item will be done in time of substituting
- substitution= where_term;
+ substitution= where_item;
have_to_be_excluded= 1;
if (thd->lex->describe)
{
@@ -1866,20 +1866,37 @@ Item_in_subselect::row_value_transformer
add the equi-join and the "is null" to WHERE
add the is_not_null_test to HAVING
*/
-
Item_subselect::trans_res
Item_in_subselect::row_value_in_to_exists_transformer(JOIN * join)
{
+ Item *where_item;
+ Item *having_item;
+ Item_subselect::trans_res res;
+
+ res= create_row_value_in_to_exists_cond(join, &where_item, &having_item);
+ if (res != RES_OK)
+ return res;
+ res= inject_row_value_in_to_exists_cond(join, where_item, having_item);
+ return res;
+}
+
+
+Item_subselect::trans_res
+Item_in_subselect::create_row_value_in_to_exists_cond(JOIN * join,
+ Item **where_item,
+ Item **having_item)
+{
SELECT_LEX *select_lex= join->select_lex;
- Item *having_item= 0;
uint cols_num= left_expr->cols();
bool is_having_used= (join->having || select_lex->with_sum_func ||
select_lex->group_list.first ||
!select_lex->table_list.elements);
- DBUG_ENTER("Item_in_subselect::row_value_in_to_exists_transformer");
+ DBUG_ENTER("Item_in_subselect::create_row_value_in_to_exists_cond");
+
+ *where_item= NULL;
+ *having_item= NULL;
- select_lex->uncacheable|= UNCACHEABLE_DEPENDENT;
if (is_having_used)
{
/*
@@ -1899,6 +1916,7 @@ Item_in_subselect::row_value_in_to_exist
for (uint i= 0; i < cols_num; i++)
{
DBUG_ASSERT((left_expr->fixed &&
+
select_lex->ref_pointer_array[i]->fixed) ||
(select_lex->ref_pointer_array[i]->type() == REF_ITEM &&
((Item_ref*)(select_lex->ref_pointer_array[i]))->ref_type() ==
@@ -1932,8 +1950,8 @@ Item_in_subselect::row_value_in_to_exist
if (!(col_item= new Item_func_trig_cond(col_item, get_cond_guard(i))))
DBUG_RETURN(RES_ERROR);
}
- having_item= and_items(having_item, col_item);
-
+ *having_item= and_items(*having_item, col_item);
+
Item *item_nnull_test=
new Item_is_not_null_test(this,
new Item_ref(&select_lex->context,
@@ -1950,8 +1968,8 @@ Item_in_subselect::row_value_in_to_exist
item_having_part2= and_items(item_having_part2, item_nnull_test);
item_having_part2->top_level_item();
}
- having_item= and_items(having_item, item_having_part2);
- having_item->top_level_item();
+ *having_item= and_items(*having_item, item_having_part2);
+ (*having_item)->top_level_item();
}
else
{
@@ -1972,7 +1990,6 @@ Item_in_subselect::row_value_in_to_exist
(l2 = v2) and
(l3 = v3)
*/
- Item *where_item= 0;
for (uint i= 0; i < cols_num; i++)
{
Item *item, *item_isnull;
@@ -2030,10 +2047,33 @@ Item_in_subselect::row_value_in_to_exist
new Item_func_trig_cond(having_col_item, get_cond_guard(i))))
DBUG_RETURN(RES_ERROR);
}
- having_item= and_items(having_item, having_col_item);
+ *having_item= and_items(*having_item, having_col_item);
}
- where_item= and_items(where_item, item);
+ *where_item= and_items(*where_item, item);
}
+ (*where_item)->fix_fields(thd, 0);
+ }
+
+ DBUG_RETURN(RES_OK);
+}
+
+
+Item_subselect::trans_res
+Item_in_subselect::inject_row_value_in_to_exists_cond(JOIN * join,
+ Item *where_item,
+ Item *having_item)
+{
+ SELECT_LEX *select_lex= join->select_lex;
+ bool is_having_used= (join->having || select_lex->with_sum_func ||
+ select_lex->group_list.first ||
+ !select_lex->table_list.elements);
+
+ DBUG_ENTER("Item_in_subselect::inject_row_value_in_to_exists_cond");
+
+ select_lex->uncacheable|= UNCACHEABLE_DEPENDENT;
+
+ if (!is_having_used)
+ {
/*
AND can't be changed during fix_fields()
we can assign select_lex->where here, and pass 0 as last
@@ -2041,9 +2081,10 @@ Item_in_subselect::row_value_in_to_exist
*/
select_lex->where= join->conds= and_items(join->conds, where_item);
select_lex->where->top_level_item();
- if (join->conds->fix_fields(thd, 0))
+ if (!join->conds->fixed && join->conds->fix_fields(thd, 0))
DBUG_RETURN(RES_ERROR);
}
+
if (having_item)
{
bool res;
@@ -2057,12 +2098,11 @@ Item_in_subselect::row_value_in_to_exist
argument (reference) to fix_fields()
*/
select_lex->having_fix_field= 1;
- res= join->having->fix_fields(thd, 0);
+ if (!join->having->fixed)
+ res= join->having->fix_fields(thd, 0);
select_lex->having_fix_field= 0;
if (res)
- {
DBUG_RETURN(RES_ERROR);
- }
}
DBUG_RETURN(RES_OK);
=== modified file 'sql/item_subselect.h'
--- a/sql/item_subselect.h 2010-07-18 11:46:08 +0000
+++ b/sql/item_subselect.h 2010-07-18 12:59:24 +0000
@@ -438,6 +438,13 @@ public:
Item *having_term);
trans_res row_value_in_to_exists_transformer(JOIN * join);
+ trans_res create_row_value_in_to_exists_cond(JOIN * join,
+ Item **where_term,
+ Item **having_term);
+ trans_res inject_row_value_in_to_exists_cond(JOIN * join,
+ Item *where_term,
+ Item *having_term);
+
virtual bool exec();
longlong val_int();
double val_real();
[Maria-developers] bzr commit into file:///home/tsk/mprog/src/5.3-mwl89/ branch (timour:2803)
by timour@askmonty.org 18 Jul '10
#At file:///home/tsk/mprog/src/5.3-mwl89/ based on revid:timour@askmonty.org-20100716121055-6pesx07gvsmivwm3
2803 timour(a)askmonty.org 2010-07-18
MWL#89: Cost-based choice between Materialization and IN->EXISTS transformation
Step 1 in separating the creation of the IN->EXISTS equi-join conditions from
their injection. The goal of this separation is to make it possible to use the
IN->EXISTS conditions for cost estimation without actually modifying the
subquery.
This patch separates single_value_in_to_exists_transformer() into two methods:
- create_single_value_in_to_exists_cond(), and
- inject_single_value_in_to_exists_cond()
The patch performs minimal refactoring of the code so that it is easier to solve
problems resulting from the separation. There is a lot to be simplified in this
code, but this will be done separately.
modified:
sql/item_subselect.cc
sql/item_subselect.h
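For illustration only, here is a rough sketch of how a caller could use the split
interface once the cost-based choice is in place. The helpers
estimate_in_exists_cost() and prefer_materialization(), as well as
choose_in_strategy() itself, are hypothetical placeholders for the future MWL#89
cost logic and are not part of this patch; only
create_single_value_in_to_exists_cond() and inject_single_value_in_to_exists_cond()
are introduced here.

Item_subselect::trans_res
Item_in_subselect::choose_in_strategy(JOIN *join, Comp_creator *func)
{
  Item *where_term;
  Item *having_term;
  Item_subselect::trans_res res;

  /* Phase 1: build the IN->EXISTS conditions without changing the subquery. */
  res= create_single_value_in_to_exists_cond(join, func,
                                             &where_term, &having_term);
  if (res != RES_OK)
    return res;

  /* The conditions can now be costed while the subquery is still intact.
     estimate_in_exists_cost() is a hypothetical stand-in for the cost model. */
  double in_exists_cost= estimate_in_exists_cost(join, where_term, having_term);

  /* If materialization looks cheaper, leave the subquery untouched. */
  if (prefer_materialization(join, in_exists_cost))
    return RES_OK;

  /* Phase 2: only now inject the conditions into WHERE/HAVING. */
  return inject_single_value_in_to_exists_cond(join, func,
                                               where_term, having_term);
}

The sketch only shows the intended call order: create the conditions, cost them,
and then (optionally) inject them.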
=== modified file 'sql/item_subselect.cc'
--- a/sql/item_subselect.cc 2010-07-16 12:10:55 +0000
+++ b/sql/item_subselect.cc 2010-07-18 11:46:08 +0000
@@ -1521,16 +1521,35 @@ Item_in_subselect::single_value_transfor
*/
Item_subselect::trans_res
-Item_in_subselect::single_value_in_to_exists_transformer(JOIN * join, Comp_creator *func)
+Item_in_subselect::single_value_in_to_exists_transformer(JOIN * join,
+ Comp_creator *func)
+{
+ Item *where_term;
+ Item *having_term;
+ Item_subselect::trans_res res;
+
+ res= create_single_value_in_to_exists_cond(join, func,
+ &where_term, &having_term);
+ if (res != RES_OK)
+ return res;
+ res= inject_single_value_in_to_exists_cond(join, func,
+ where_term, having_term);
+ return res;
+}
+
+
+Item_subselect::trans_res
+Item_in_subselect::create_single_value_in_to_exists_cond(JOIN * join,
+ Comp_creator *func,
+ Item **where_term,
+ Item **having_term)
{
SELECT_LEX *select_lex= join->select_lex;
- DBUG_ENTER("Item_in_subselect::single_value_in_to_exists_transformer");
+ DBUG_ENTER("Item_in_subselect::create_single_value_in_to_exists_cond");
- select_lex->uncacheable|= UNCACHEABLE_DEPENDENT;
if (join->having || select_lex->with_sum_func ||
select_lex->group_list.elements)
{
- bool tmp;
Item *item= func->create(expr,
new Item_ref_null_helper(&select_lex->context,
this,
@@ -1546,132 +1565,199 @@ Item_in_subselect::single_value_in_to_ex
*/
item= new Item_func_trig_cond(item, get_cond_guard(0));
}
-
+
+ if (item->fix_fields(thd, 0))
+ DBUG_RETURN(RES_ERROR);
+
+ *having_term= item;
+ *where_term= NULL;
+ }
+ else
+ {
+ Item *item= (Item*) select_lex->item_list.head();
+
+ if (select_lex->table_list.elements)
+ {
+ Item *having= item;
+ Item *orig_item= item;
+
+ item= func->create(expr, item);
+ if (!abort_on_null && orig_item->maybe_null)
+ {
+ having= new Item_is_not_null_test(this, having);
+ if (left_expr->maybe_null)
+ {
+ if (!(having= new Item_func_trig_cond(having,
+ get_cond_guard(0))))
+ DBUG_RETURN(RES_ERROR);
+ }
+
+ if (having->fix_fields(thd, 0))
+ DBUG_RETURN(RES_ERROR);
+
+ *having_term= having;
+
+ item= new Item_cond_or(item,
+ new Item_func_isnull(orig_item));
+ }
+ /*
+ If we may encounter NULL IN (SELECT ...) and care whether subquery
+ result is NULL or FALSE, wrap condition in a trig_cond.
+ */
+ if (!abort_on_null && left_expr->maybe_null)
+ {
+ if (!(item= new Item_func_trig_cond(item, get_cond_guard(0))))
+ DBUG_RETURN(RES_ERROR);
+ }
+
+ if (item->fix_fields(thd, 0))
+ DBUG_RETURN(RES_ERROR);
+
+ *where_term= item;
+ }
+ else
+ {
+ if (select_lex->master_unit()->is_union())
+ {
+ /*
+ comparison functions can't be changed during fix_fields()
+ we can assign select_lex->having here, and pass 0 as last
+ argument (reference) to fix_fields()
+ */
+ Item *new_having=
+ func->create(expr,
+ new Item_ref_null_helper(&select_lex->context, this,
+ select_lex->ref_pointer_array,
+ (char *)"<no matter>",
+ (char *)"<result>"));
+ if (!abort_on_null && left_expr->maybe_null)
+ {
+ if (!(new_having= new Item_func_trig_cond(new_having,
+ get_cond_guard(0))))
+ DBUG_RETURN(RES_ERROR);
+ }
+
+ if (new_having->fix_fields(thd, 0))
+ DBUG_RETURN(RES_ERROR);
+
+ *having_term= new_having;
+ *where_term= NULL;
+ }
+ else
+ {
+ *having_term= NULL;
+ *where_term= (Item*) select_lex->item_list.head();
+ }
+ }
+ }
+
+ DBUG_RETURN(RES_OK);
+}
+
+
+
+Item_subselect::trans_res
+Item_in_subselect::inject_single_value_in_to_exists_cond(JOIN * join,
+ Comp_creator *func,
+ Item *where_term,
+ Item *having_term)
+{
+ SELECT_LEX *select_lex= join->select_lex;
+ bool fix_res;
+ DBUG_ENTER("Item_in_subselect::single_value_in_to_exists_transformer");
+
+ select_lex->uncacheable|= UNCACHEABLE_DEPENDENT;
+ if (join->having || select_lex->with_sum_func ||
+ select_lex->group_list.elements)
+ {
/*
AND and comparison functions can't be changed during fix_fields()
we can assign select_lex->having here, and pass 0 as last
argument (reference) to fix_fields()
*/
- select_lex->having= join->having= and_items(join->having, item);
- if (join->having == item)
- item->name= (char*)in_having_cond;
+ select_lex->having= join->having= and_items(join->having, having_term);
+ if (join->having == having_term)
+ having_term->name= (char*)in_having_cond;
select_lex->having_fix_field= 1;
/*
we do not check join->having->fixed, because Item_and (from and_items)
or comparison function (from func->create) can't be fixed after creation
*/
- tmp= join->having->fix_fields(thd, 0);
+ if (!join->having->fixed)
+ fix_res= join->having->fix_fields(thd, 0);
select_lex->having_fix_field= 0;
- if (tmp)
+ if (fix_res)
DBUG_RETURN(RES_ERROR);
}
else
{
- Item *item= (Item*) select_lex->item_list.head();
-
if (select_lex->table_list.elements)
{
- bool tmp;
- Item *having= item, *orig_item= item;
+ Item *orig_item= (Item*) select_lex->item_list.head();
select_lex->item_list.empty();
select_lex->item_list.push_back(new Item_int("Not_used",
(longlong) 1,
MY_INT64_NUM_DECIMAL_DIGITS));
select_lex->ref_pointer_array[0]= select_lex->item_list.head();
- item= func->create(expr, item);
if (!abort_on_null && orig_item->maybe_null)
{
- having= new Item_is_not_null_test(this, having);
- if (left_expr->maybe_null)
- {
- if (!(having= new Item_func_trig_cond(having,
- get_cond_guard(0))))
- DBUG_RETURN(RES_ERROR);
- }
/*
Item_is_not_null_test can't be changed during fix_fields()
we can assign select_lex->having here, and pass 0 as last
argument (reference) to fix_fields()
*/
- having->name= (char*)in_having_cond;
- select_lex->having= join->having= having;
+ having_term->name= (char*)in_having_cond;
+ select_lex->having= join->having= having_term;
select_lex->having_fix_field= 1;
/*
we do not check join->having->fixed, because Item_and (from
and_items) or comparison function (from func->create) can't be
fixed after creation
*/
- tmp= join->having->fix_fields(thd, 0);
+ if (!join->having->fixed)
+ fix_res= join->having->fix_fields(thd, 0);
select_lex->having_fix_field= 0;
- if (tmp)
+ if (fix_res)
DBUG_RETURN(RES_ERROR);
- item= new Item_cond_or(item,
- new Item_func_isnull(orig_item));
- }
- /*
- If we may encounter NULL IN (SELECT ...) and care whether subquery
- result is NULL or FALSE, wrap condition in a trig_cond.
- */
- if (!abort_on_null && left_expr->maybe_null)
- {
- if (!(item= new Item_func_trig_cond(item, get_cond_guard(0))))
- DBUG_RETURN(RES_ERROR);
}
/*
TODO: figure out why the following is done here in
single_value_transformer but there is no corresponding action in
row_value_transformer?
*/
- item->name= (char *)in_additional_cond;
+ where_term->name= (char *)in_additional_cond;
/*
AND can't be changed during fix_fields()
we can assign select_lex->having here, and pass 0 as last
argument (reference) to fix_fields()
*/
- select_lex->where= join->conds= and_items(join->conds, item);
+ select_lex->where= join->conds= and_items(join->conds, where_term);
select_lex->where->top_level_item();
/*
we do not check join->conds->fixed, because Item_and can't be fixed
after creation
*/
- if (join->conds->fix_fields(thd, 0))
- DBUG_RETURN(RES_ERROR);
+ if (!join->conds->fixed && join->conds->fix_fields(thd, 0))
+ DBUG_RETURN(RES_ERROR);
}
else
{
- bool tmp;
if (select_lex->master_unit()->is_union())
{
- /*
- comparison functions can't be changed during fix_fields()
- we can assign select_lex->having here, and pass 0 as last
- argument (reference) to fix_fields()
- */
- Item *new_having=
- func->create(expr,
- new Item_ref_null_helper(&select_lex->context, this,
- select_lex->ref_pointer_array,
- (char *)"<no matter>",
- (char *)"<result>"));
- if (!abort_on_null && left_expr->maybe_null)
- {
- if (!(new_having= new Item_func_trig_cond(new_having,
- get_cond_guard(0))))
- DBUG_RETURN(RES_ERROR);
- }
- new_having->name= (char*)in_having_cond;
- select_lex->having= join->having= new_having;
+ having_term->name= (char*)in_having_cond;
+ select_lex->having= join->having= having_term;
select_lex->having_fix_field= 1;
/*
we do not check join->having->fixed, because comparison function
(from func->create) can't be fixed after creation
*/
- tmp= join->having->fix_fields(thd, 0);
+ if (!join->having->fixed)
+ fix_res= join->having->fix_fields(thd, 0);
select_lex->having_fix_field= 0;
- if (tmp)
+ if (fix_res)
DBUG_RETURN(RES_ERROR);
}
else
@@ -1679,11 +1765,11 @@ Item_in_subselect::single_value_in_to_ex
// it is single select without tables => possible optimization
// remove the dependence mark since the item is moved to upper
// select and is not outer anymore.
- item->walk(&Item::remove_dependence_processor, 0,
- (uchar *) select_lex->outer_select());
- item= func->create(left_expr, item);
+ where_term->walk(&Item::remove_dependence_processor, 0,
+ (uchar *) select_lex->outer_select());
+ where_term= func->create(left_expr, where_term);
// fix_field of item will be done in time of substituting
- substitution= item;
+ substitution= where_term;
have_to_be_excluded= 1;
if (thd->lex->describe)
{
=== modified file 'sql/item_subselect.h'
--- a/sql/item_subselect.h 2010-07-16 10:52:02 +0000
+++ b/sql/item_subselect.h 2010-07-18 11:46:08 +0000
@@ -425,8 +425,18 @@ public:
trans_res select_in_like_transformer(JOIN *join, Comp_creator *func);
trans_res single_value_transformer(JOIN *join, Comp_creator *func);
trans_res row_value_transformer(JOIN * join);
+
trans_res single_value_in_to_exists_transformer(JOIN * join,
Comp_creator *func);
+ trans_res create_single_value_in_to_exists_cond(JOIN * join,
+ Comp_creator *func,
+ Item **where_term,
+ Item **having_term);
+ trans_res inject_single_value_in_to_exists_cond(JOIN * join,
+ Comp_creator *func,
+ Item *where_term,
+ Item *having_term);
+
trans_res row_value_in_to_exists_transformer(JOIN * join);
virtual bool exec();
longlong val_int();
Re: [Maria-developers] [Fwd: [Commits] bzr commit into Mariadb 5.2, with Maria 2.0:maria/5.2 branch (igor:2823) Bug#604503]
by Sergey Petrunya 17 Jul '10
Hello Igor,
On Sat, Jul 17, 2010 at 12:42:49AM -0700, Igor Babaev wrote:
> === modified file 'sql/table.cc'
> --- a/sql/table.cc 2010-07-13 14:34:14 +0000
> +++ b/sql/table.cc 2010-07-17 07:37:48 +0000
> @@ -1930,8 +1930,6 @@ end:
> semantic analysis of the item by calling the the function
> fix_vcol_expr.
> Since the defining expression is part of the table definition the item
> for it is created in table->memroot within a separate Query_arena.
Please explicitly refer to TABLE::expr_arena in the above comment.
> - The free_list of this arena is saved in field->vcol_info.item_free_list
> - to be freed when the table defition is removed from the TABLE_SHARE
> cache.
>
> @note
> Before passing 'vcol_expr" to the parser the function embraces it in
...
> === modified file 'sql/table.h'
> --- a/sql/table.h 2010-06-03 09:28:54 +0000
> +++ b/sql/table.h 2010-07-17 07:37:48 +0000
> @@ -27,6 +27,7 @@ class st_select_lex;
> class partition_info;
> class COND_EQUAL;
> class Security_context;
> +class Query_arena;
>
> /*************************************************************************/
>
> @@ -869,6 +870,7 @@ struct st_table {
> MEM_ROOT mem_root;
> GRANT_INFO grant;
> FILESORT_INFO sort;
> + Query_arena *expr_arena;
Please add a comment saying what the new member is for.
> #ifdef WITH_PARTITION_STORAGE_ENGINE
> partition_info *part_info; /* Partition related information */
> bool no_partitions_used; /* If true, all partitions have been pruned
> away */
Ok to push after the above is addressed.
BR
Sergey
--
Sergey Petrunia, Software Developer
Monty Program AB, http://askmonty.org
Blog: http://s.petrunia.net/blog
Re: [Maria-developers] [Fwd: [Commits] bzr commit into Mariadb 5.2, with Maria 2.0:maria/5.2 branch (igor:2823) Bug#603186]
by Sergey Petrunya 16 Jul '10
Hello Igor,
On Thu, Jul 15, 2010 at 04:54:37PM -0700, Igor Babaev wrote:
> Please review this patch for the 5.2 tree.
>
> Regards,
> Igor.
Ok to push.
> -------- Original Message --------
> Subject: [Commits] bzr commit into Mariadb 5.2, with Maria 2.0:maria/5.2
> branch (igor:2823) Bug#603186
> Date: Thu, 15 Jul 2010 16:51:17 -0700 (PDT)
> From: Igor Babaev <igor(a)askmonty.org>
> Reply-To: maria-developers(a)lists.launchpad.net
> To: commits(a)mariadb.org
>
> #At lp:maria/5.2 based on
> revid:igor@askmonty.org-20100713174523-mjvsvvp6ow8dc81x
>
> 2823 Igor Babaev 2010-07-15
> Fixed bug #603186.
> There were two problems that caused wrong results reported with
> this bug.
> 1. In some cases stored(persistent) virtual columns were not marked
> in the write_set and in the vcol_set bitmaps.
> 2. If the list of fields in an insert command was empty then the
> values of
> the stored virtual columns were set to default.
>
> To fix the first problem the function
> st_table::mark_virtual_columns_for_write
> was modified. Now the function has a parameter that says whether
> the virtual
> columns are to be marked for insert or for update.
> To fix the second problem a special handling of empty insert lists is
> added in the function fill_record().
> modified:
> mysql-test/suite/vcol/r/vcol_misc.result
> mysql-test/suite/vcol/t/vcol_misc.test
> sql/sql_base.cc
> sql/sql_insert.cc
> sql/sql_lex.cc
> sql/sql_lex.h
> sql/sql_table.cc
> sql/table.cc
> sql/table.h
>
> === modified file 'mysql-test/suite/vcol/r/vcol_misc.result'
> --- a/mysql-test/suite/vcol/r/vcol_misc.result 2010-07-13 17:45:23 +0000
> +++ b/mysql-test/suite/vcol/r/vcol_misc.result 2010-07-15 23:51:05 +0000
> @@ -45,3 +45,20 @@ C
> 1
> 1
> DROP TABLE t1;
> +CREATE TABLE t1(a int, b int DEFAULT 0, v INT AS (b+10) PERSISTENT);
> +INSERT INTO t1(a) VALUES (1);
> +SELECT b, v FROM t1;
> +b v
> +0 10
> +DROP TABLE t1;
> +CREATE TABLE t1(a int DEFAULT 100, v int AS (a+1) PERSISTENT);
> +INSERT INTO t1 () VALUES ();
> +CREATE TABLE t2(a int DEFAULT 100 , v int AS (a+1));
> +INSERT INTO t2 () VALUES ();
> +SELECT a, v FROM t1;
> +a v
> +100 101
> +SELECT a, v FROM t2;
> +a v
> +100 101
> +DROP TABLE t1,t2;
>
> === modified file 'mysql-test/suite/vcol/t/vcol_misc.test'
> --- a/mysql-test/suite/vcol/t/vcol_misc.test 2010-07-13 17:45:23 +0000
> +++ b/mysql-test/suite/vcol/t/vcol_misc.test 2010-07-15 23:51:05 +0000
> @@ -43,5 +43,22 @@ SELECT 1 AS C FROM t1 ORDER BY v;
>
> DROP TABLE t1;
>
> +#
> +# Bug#603186: Insert for a table with stored vurtual columns
> +#
>
> +CREATE TABLE t1(a int, b int DEFAULT 0, v INT AS (b+10) PERSISTENT);
> +INSERT INTO t1(a) VALUES (1);
> +SELECT b, v FROM t1;
>
> +DROP TABLE t1;
> +
> +CREATE TABLE t1(a int DEFAULT 100, v int AS (a+1) PERSISTENT);
> +INSERT INTO t1 () VALUES ();
> +CREATE TABLE t2(a int DEFAULT 100 , v int AS (a+1));
> +INSERT INTO t2 () VALUES ();
> +
> +SELECT a, v FROM t1;
> +SELECT a, v FROM t2;
> +
> +DROP TABLE t1,t2;
>
> === modified file 'sql/sql_base.cc'
> --- a/sql/sql_base.cc 2010-06-01 19:52:20 +0000
> +++ b/sql/sql_base.cc 2010-07-15 23:51:05 +0000
> @@ -8204,6 +8204,8 @@ fill_record(THD * thd, List<Item> &field
> table->auto_increment_field_not_null= FALSE;
> f.rewind();
> }
> + else if (thd->lex->unit.insert_table_with_stored_vcol)
> + tbl_list.push_back(thd->lex->unit.insert_table_with_stored_vcol);
> while ((fld= f++))
> {
> if (!(field= fld->filed_for_view_update()))
>
> === modified file 'sql/sql_insert.cc'
> --- a/sql/sql_insert.cc 2010-06-01 19:52:20 +0000
> +++ b/sql/sql_insert.cc 2010-07-15 23:51:05 +0000
> @@ -273,7 +273,7 @@ static int check_insert_fields(THD *thd,
> }
> /* Mark virtual columns used in the insert statement */
> if (table->vfield)
> - table->mark_virtual_columns_for_write();
> + table->mark_virtual_columns_for_write(TRUE);
> // For the values we need select_priv
> #ifndef NO_EMBEDDED_ACCESS_CHECKS
> table->grant.want_privilege= (SELECT_ACL & ~table->grant.privilege);
> @@ -1267,7 +1267,6 @@ bool mysql_prepare_insert(THD *thd, TABL
> if (mysql_prepare_insert_check_table(thd, table_list, fields,
> select_insert))
> DBUG_RETURN(TRUE);
>
> -
> /* Prepare the fields in the statement. */
> if (values)
> {
> @@ -1320,6 +1319,18 @@ bool mysql_prepare_insert(THD *thd, TABL
> if (!table)
> table= table_list->table;
>
> + if (!fields.elements && table->vfield)
> + {
> + for (Field **vfield_ptr= table->vfield; *vfield_ptr; vfield_ptr++)
> + {
> + if ((*vfield_ptr)->stored_in_db)
> + {
> + thd->lex->unit.insert_table_with_stored_vcol= table;
> + break;
> + }
> + }
> + }
> +
> if (!select_insert)
> {
> Item *fake_conds= 0;
>
> === modified file 'sql/sql_lex.cc'
> --- a/sql/sql_lex.cc 2010-06-01 19:52:20 +0000
> +++ b/sql/sql_lex.cc 2010-07-15 23:51:05 +0000
> @@ -1590,6 +1590,7 @@ void st_select_lex_unit::init_query()
> item_list.empty();
> describe= 0;
> found_rows_for_union= 0;
> + insert_table_with_stored_vcol= 0;
> }
>
> void st_select_lex::init_query()
>
> === modified file 'sql/sql_lex.h'
> --- a/sql/sql_lex.h 2010-06-01 19:52:20 +0000
> +++ b/sql/sql_lex.h 2010-07-15 23:51:05 +0000
> @@ -532,6 +532,13 @@ public:
> bool describe; /* union exec() called for EXPLAIN */
> Procedure *last_procedure; /* Pointer to procedure, if such exists */
>
> + /*
> + Insert table with stored virtual columns.
> + This is used only in those rare cases
> + when the list of inserted values is empty.
> + */
> + TABLE *insert_table_with_stored_vcol;
> +
> void init_query();
> st_select_lex_unit* master_unit();
> st_select_lex* outer_select();
>
> === modified file 'sql/sql_table.cc'
> --- a/sql/sql_table.cc 2010-06-05 14:53:36 +0000
> +++ b/sql/sql_table.cc 2010-07-15 23:51:05 +0000
> @@ -7876,7 +7876,7 @@ copy_data_between_tables(TABLE *from,TAB
>
> /* Tell handler that we have values for all columns in the to table */
> to->use_all_columns();
> - to->mark_virtual_columns_for_write();
> + to->mark_virtual_columns_for_write(TRUE);
> init_read_record(&info, thd, from, (SQL_SELECT *) 0, 1, 1, FALSE);
> errpos= 4;
> if (ignore)
>
> === modified file 'sql/table.cc'
> --- a/sql/table.cc 2010-07-13 14:34:14 +0000
> +++ b/sql/table.cc 2010-07-15 23:51:05 +0000
> @@ -5024,7 +5024,7 @@ void st_table::mark_columns_needed_for_u
> }
> }
> /* Mark all virtual columns needed for update */
> - mark_virtual_columns_for_write();
> + mark_virtual_columns_for_write(FALSE);
> DBUG_VOID_RETURN;
> }
>
> @@ -5052,7 +5052,7 @@ void st_table::mark_columns_needed_for_i
> if (found_next_number_field)
> mark_auto_increment_column();
> /* Mark virtual columns for insert */
> - mark_virtual_columns_for_write();
> + mark_virtual_columns_for_write(TRUE);
> }
>
>
> @@ -5090,10 +5090,14 @@ bool st_table::mark_virtual_col(Field *f
>
> /*
> @brief Mark virtual columns for update/insert commands
> +
> + @param insert_fl <-> virtual columns are marked for insert command
>
> @details
> The function marks virtual columns used in a update/insert commands
> in the vcol_set bitmap.
> + For an insert command a virtual column is always marked in write_set if
> + it is a stored column.
> If a virtual column is from write_set it is always marked in vcol_set.
> If a stored virtual column is not from write_set but it is computed
> through columns from write_set it is also marked in vcol_set, and,
> @@ -5112,7 +5116,7 @@ bool st_table::mark_virtual_col(Field *f
> be added to read_set either.
> */
>
> -void st_table::mark_virtual_columns_for_write(void)
> +void st_table::mark_virtual_columns_for_write(bool insert_fl)
> {
> Field **vfield_ptr, *tmp_vfield;
> bool bitmap_updated= FALSE;
> @@ -5124,16 +5128,21 @@ void st_table::mark_virtual_columns_for_
> bitmap_updated= mark_virtual_col(tmp_vfield);
> else if (tmp_vfield->stored_in_db)
> {
> - MY_BITMAP *save_read_set;
> - Item *vcol_item= tmp_vfield->vcol_info->expr_item;
> - DBUG_ASSERT(vcol_item);
> - bitmap_clear_all(&tmp_set);
> - save_read_set= read_set;
> - read_set= &tmp_set;
> - vcol_item->walk(&Item::register_field_in_read_map, 1, (uchar *) 0);
> - read_set= save_read_set;
> - bitmap_intersect(&tmp_set, write_set);
> - if (!bitmap_is_clear_all(&tmp_set))
> + bool mark_fl= insert_fl;
> + if (!mark_fl)
> + {
> + MY_BITMAP *save_read_set;
> + Item *vcol_item= tmp_vfield->vcol_info->expr_item;
> + DBUG_ASSERT(vcol_item);
> + bitmap_clear_all(&tmp_set);
> + save_read_set= read_set;
> + read_set= &tmp_set;
> + vcol_item->walk(&Item::register_field_in_read_map, 1, (uchar *) 0);
> + read_set= save_read_set;
> + bitmap_intersect(&tmp_set, write_set);
> + mark_fl= !bitmap_is_clear_all(&tmp_set);
> + }
> + if (mark_fl)
> {
> bitmap_set_bit(write_set, tmp_vfield->field_index);
> mark_virtual_col(tmp_vfield);
>
> === modified file 'sql/table.h'
> --- a/sql/table.h 2010-06-03 09:28:54 +0000
> +++ b/sql/table.h 2010-07-15 23:51:05 +0000
> @@ -886,7 +886,7 @@ struct st_table {
> void mark_columns_needed_for_delete(void);
> void mark_columns_needed_for_insert(void);
> bool mark_virtual_col(Field *field);
> - void mark_virtual_columns_for_write(void);
> + void mark_virtual_columns_for_write(bool insert_fl);
> inline void column_bitmaps_set(MY_BITMAP *read_set_arg,
> MY_BITMAP *write_set_arg)
> {
>
> _______________________________________________
> commits mailing list
> commits(a)mariadb.org
> https://lists.askmonty.org/cgi-bin/mailman/listinfo/commits
--
BR
Sergey
--
Sergey Petrunia, Software Developer
Monty Program AB, http://askmonty.org
Blog: http://s.petrunia.net/blog
[Maria-developers] bzr commit into file:///home/tsk/mprog/src/5.3-mwl89/ branch (timour:2802)
by timour@askmonty.org 16 Jul '10
#At file:///home/tsk/mprog/src/5.3-mwl89/ based on revid:timour@askmonty.org-20100716105202-8narq4tzhka2n1a5
2802 timour(a)askmonty.org 2010-07-16 [merge]
Merge main 5.3 into 5.3-mwl89.
added:
mysql-test/r/optimizer_switch.result
mysql-test/t/optimizer_switch.test
modified:
.bzrignore
configure.in
include/queues.h
include/thr_alarm.h
mysql-test/r/index_merge_myisam.result
mysql-test/r/myisam_mrr.result
mysql-test/r/order_by.result
mysql-test/r/subselect_mat.result
mysql-test/r/subselect_no_mat.result
mysql-test/r/subselect_no_opts.result
mysql-test/r/subselect_no_semijoin.result
mysql-test/r/subselect_sj.result
mysql-test/r/subselect_sj_jcl6.result
mysql-test/t/index_merge_myisam.test
mysql-test/t/myisam_mrr.test
mysql-test/t/order_by.test
mysql-test/t/subselect_mat.test
mysql-test/t/subselect_no_mat.test
mysql-test/t/subselect_no_opts.test
mysql-test/t/subselect_no_semijoin.test
mysql-test/t/subselect_sj.test
mysys/queues.c
mysys/thr_alarm.c
sql/create_options.cc
sql/event_queue.cc
sql/filesort.cc
sql/ha_partition.cc
sql/ha_partition.h
sql/item_cmpfunc.cc
sql/item_subselect.cc
sql/mysqld.cc
sql/net_serv.cc
sql/opt_range.cc
sql/sql_class.cc
sql/sql_class.h
sql/sql_union.cc
sql/uniques.cc
storage/maria/ma_ft_boolean_search.c
storage/maria/ma_ft_nlq_search.c
storage/maria/ma_sort.c
storage/maria/maria_pack.c
storage/myisam/ft_boolean_search.c
storage/myisam/ft_nlq_search.c
storage/myisam/mi_test_all.sh
storage/myisam/myisampack.c
storage/myisam/sort.c
storage/myisammrg/myrg_queue.c
storage/myisammrg/myrg_rnext.c
storage/myisammrg/myrg_rnext_same.c
storage/myisammrg/myrg_rprev.c
=== modified file '.bzrignore'
--- a/.bzrignore 2010-06-26 10:05:41 +0000
+++ b/.bzrignore 2010-07-16 07:33:01 +0000
@@ -1940,3 +1940,4 @@ sql/client_plugin.c
*.dgcov
libmysqld/create_options.cc
storage/pbxt/bin/xtstat
+libmysqld/sql_expression_cache.cc
=== modified file 'configure.in'
--- a/configure.in 2010-06-26 19:55:33 +0000
+++ b/configure.in 2010-07-16 08:02:05 +0000
@@ -17,7 +17,7 @@ dnl When merging new MySQL releases, upd
dnl MySQL version number.
dnl
dnl Note: the following line must be parseable by win/configure.js:GetVersion()
-AC_INIT([MariaDB Server], [5.2.1-MariaDB-beta], [], [mysql])
+AC_INIT([MariaDB Server], [5.3.0-MariaDB-alpha], [], [mysql])
AC_CONFIG_SRCDIR([sql/mysqld.cc])
AC_CANONICAL_SYSTEM
=== modified file 'include/queues.h'
--- a/include/queues.h 2007-11-14 18:20:31 +0000
+++ b/include/queues.h 2010-07-16 07:33:01 +0000
@@ -1,23 +1,31 @@
-/* Copyright (C) 2000 MySQL AB
+/* Copyright (C) 2010 Monty Program Ab
+ All Rights reserved
- This program is free software; you can redistribute it and/or modify
- it under the terms of the GNU General Public License as published by
- the Free Software Foundation; version 2 of the License.
-
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are met:
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the following disclaimer
+ in the documentation and/or other materials provided with the
+ distribution.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
+ <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
+ USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+ OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
+ OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ SUCH DAMAGE.
+*/
/*
Code for generell handling of priority Queues.
Implemention of queues from "Algoritms in C" by Robert Sedgewick.
- Copyright Monty Program KB.
- By monty.
*/
#ifndef _queues_h
@@ -31,30 +39,34 @@ typedef struct st_queue {
void *first_cmp_arg;
uint elements;
uint max_elements;
- uint offset_to_key; /* compare is done on element+offset */
+ uint offset_to_key; /* compare is done on element+offset */
+ uint offset_to_queue_pos; /* If we want to store position in element */
+ uint auto_extent;
int max_at_top; /* Normally 1, set to -1 if queue_top gives max */
int (*compare)(void *, uchar *,uchar *);
- uint auto_extent;
} QUEUE;
+#define queue_first_element(queue) 1
+#define queue_last_element(queue) (queue)->elements
#define queue_top(queue) ((queue)->root[1])
-#define queue_element(queue,index) ((queue)->root[index+1])
+#define queue_element(queue,index) ((queue)->root[index])
#define queue_end(queue) ((queue)->root[(queue)->elements])
-#define queue_replaced(queue) _downheap(queue,1)
+#define queue_replace(queue, idx) _downheap(queue, idx, (queue)->root[idx])
+#define queue_replace_top(queue) _downheap(queue, 1, (queue)->root[1])
#define queue_set_cmp_arg(queue, set_arg) (queue)->first_cmp_arg= set_arg
#define queue_set_max_at_top(queue, set_arg) \
(queue)->max_at_top= set_arg ? -1 : 1
+#define queue_remove_top(queue_arg) queue_remove((queue_arg), queue_first_element(queue_arg))
typedef int (*queue_compare)(void *,uchar *, uchar *);
int init_queue(QUEUE *queue,uint max_elements,uint offset_to_key,
pbool max_at_top, queue_compare compare,
- void *first_cmp_arg);
-int init_queue_ex(QUEUE *queue,uint max_elements,uint offset_to_key,
- pbool max_at_top, queue_compare compare,
- void *first_cmp_arg, uint auto_extent);
+ void *first_cmp_arg, uint offset_to_queue_pos,
+ uint auto_extent);
int reinit_queue(QUEUE *queue,uint max_elements,uint offset_to_key,
pbool max_at_top, queue_compare compare,
- void *first_cmp_arg);
+ void *first_cmp_arg, uint offset_to_queue_pos,
+ uint auto_extent);
int resize_queue(QUEUE *queue, uint max_elements);
void delete_queue(QUEUE *queue);
void queue_insert(QUEUE *queue,uchar *element);
@@ -62,7 +74,7 @@ int queue_insert_safe(QUEUE *queue, ucha
uchar *queue_remove(QUEUE *queue,uint idx);
#define queue_remove_all(queue) { (queue)->elements= 0; }
#define queue_is_full(queue) (queue->elements == queue->max_elements)
-void _downheap(QUEUE *queue,uint idx);
+void _downheap(QUEUE *queue, uint idx, uchar *element);
void queue_fix(QUEUE *queue);
#define is_queue_inited(queue) ((queue)->root != 0)
=== modified file 'include/thr_alarm.h'
--- a/include/thr_alarm.h 2008-04-28 16:24:05 +0000
+++ b/include/thr_alarm.h 2010-07-16 07:33:01 +0000
@@ -34,7 +34,7 @@ extern "C" {
typedef struct st_alarm_info
{
- ulong next_alarm_time;
+ time_t next_alarm_time;
uint active_alarms;
uint max_used_alarms;
} ALARM_INFO;
@@ -78,10 +78,11 @@ typedef int thr_alarm_entry;
typedef thr_alarm_entry* thr_alarm_t;
typedef struct st_alarm {
- ulong expire_time;
+ time_t expire_time;
thr_alarm_entry alarmed; /* set when alarm is due */
pthread_t thread;
my_thread_id thread_id;
+ uint index_in_queue;
my_bool malloced;
} ALARM;
=== modified file 'mysql-test/r/index_merge_myisam.result'
--- a/mysql-test/r/index_merge_myisam.result 2010-07-10 10:37:30 +0000
+++ b/mysql-test/r/index_merge_myisam.result 2010-07-16 08:58:24 +0000
@@ -1413,66 +1413,6 @@ WHERE
`RUNID`= '' AND `SUBMITNR`= '' AND `ORDERNR`='' AND `PROGRAMM`='' AND
`TESTID`='' AND `UCCHECK`='';
drop table t1;
-#
-# Generic @@optimizer_switch tests (move those into a separate file if
-# we get another @@optimizer_switch user)
-#
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set optimizer_switch='index_merge=off,index_merge_union=off';
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set optimizer_switch='index_merge_union=on';
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=off,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set optimizer_switch='default,index_merge_sort_union=off';
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=off,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set optimizer_switch=4;
-ERROR 42000: Variable 'optimizer_switch' can't be set to the value of '4'
-set optimizer_switch=NULL;
-ERROR 42000: Variable 'optimizer_switch' can't be set to the value of 'NULL'
-set optimizer_switch='default,index_merge';
-ERROR 42000: Variable 'optimizer_switch' can't be set to the value of 'index_merge'
-set optimizer_switch='index_merge=index_merge';
-ERROR 42000: Variable 'optimizer_switch' can't be set to the value of 'index_merge=index_merge'
-set optimizer_switch='index_merge=on,but...';
-ERROR 42000: Variable 'optimizer_switch' can't be set to the value of 'but...'
-set optimizer_switch='index_merge=';
-ERROR 42000: Variable 'optimizer_switch' can't be set to the value of 'index_merge='
-set optimizer_switch='index_merge';
-ERROR 42000: Variable 'optimizer_switch' can't be set to the value of 'index_merge'
-set optimizer_switch='on';
-ERROR 42000: Variable 'optimizer_switch' can't be set to the value of 'on'
-set optimizer_switch='index_merge=on,index_merge=off';
-ERROR 42000: Variable 'optimizer_switch' can't be set to the value of 'index_merge=off'
-set optimizer_switch='index_merge_union=on,index_merge_union=default';
-ERROR 42000: Variable 'optimizer_switch' can't be set to the value of 'index_merge_union=default'
-set optimizer_switch='default,index_merge=on,index_merge=off,default';
-ERROR 42000: Variable 'optimizer_switch' can't be set to the value of 'index_merge=off,default'
-set optimizer_switch=default;
-set optimizer_switch='index_merge=off,index_merge_union=off,default';
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set optimizer_switch=default;
-select @@global.optimizer_switch;
-@@global.optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set @@global.optimizer_switch=default;
-select @@global.optimizer_switch;
-@@global.optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-#
-# Check index_merge's @@optimizer_switch flags
-#
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
create table t0 (a int);
insert into t0 values (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);
create table t1 (a int, b int, c int, filler char(100),
@@ -1580,7 +1520,4 @@ explain select * from t1 where a=10 and
id select_type table type possible_keys key key_len ref rows Extra
1 SIMPLE t1 index_merge a,b,c a,c 5,5 NULL 54 Using sort_union(a,c); Using where
set optimizer_switch=default;
-show variables like 'optimizer_switch';
-Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
drop table t0, t1;
=== modified file 'mysql-test/r/myisam_mrr.result'
--- a/mysql-test/r/myisam_mrr.result 2010-07-10 10:37:30 +0000
+++ b/mysql-test/r/myisam_mrr.result 2010-07-16 08:58:24 +0000
@@ -392,9 +392,9 @@ drop table t0, t1;
# Part of MWL#67: DS-MRR backport: add an @@optimizer_switch flag for
# index_condition pushdown:
# - engine_condition_pushdown does not affect ICP
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
+select @@optimizer_switch like '%index_condition_pushdown=on%';
+@@optimizer_switch like '%index_condition_pushdown=on%'
+1
create table t0 (a int);
insert into t0 values (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);
create table t1 (a int, b int, key(a));
=== added file 'mysql-test/r/optimizer_switch.result'
--- a/mysql-test/r/optimizer_switch.result 1970-01-01 00:00:00 +0000
+++ b/mysql-test/r/optimizer_switch.result 2010-07-16 08:58:24 +0000
@@ -0,0 +1,99 @@
+#
+# Generic @@optimizer_switch tests
+#
+#
+select @@optimizer_switch;
+@@optimizer_switch
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
+set optimizer_switch='index_merge=off,index_merge_union=off';
+select @@optimizer_switch;
+@@optimizer_switch
+index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
+set optimizer_switch='index_merge_union=on';
+select @@optimizer_switch;
+@@optimizer_switch
+index_merge=off,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
+set optimizer_switch='default,index_merge_sort_union=off';
+select @@optimizer_switch;
+@@optimizer_switch
+index_merge=on,index_merge_union=on,index_merge_sort_union=off,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
+set optimizer_switch=4;
+ERROR 42000: Variable 'optimizer_switch' can't be set to the value of '4'
+set optimizer_switch=NULL;
+ERROR 42000: Variable 'optimizer_switch' can't be set to the value of 'NULL'
+set optimizer_switch='default,index_merge';
+ERROR 42000: Variable 'optimizer_switch' can't be set to the value of 'index_merge'
+set optimizer_switch='index_merge=index_merge';
+ERROR 42000: Variable 'optimizer_switch' can't be set to the value of 'index_merge=index_merge'
+set optimizer_switch='index_merge=on,but...';
+ERROR 42000: Variable 'optimizer_switch' can't be set to the value of 'but...'
+set optimizer_switch='index_merge=';
+ERROR 42000: Variable 'optimizer_switch' can't be set to the value of 'index_merge='
+set optimizer_switch='index_merge';
+ERROR 42000: Variable 'optimizer_switch' can't be set to the value of 'index_merge'
+set optimizer_switch='on';
+ERROR 42000: Variable 'optimizer_switch' can't be set to the value of 'on'
+set optimizer_switch='index_merge=on,index_merge=off';
+ERROR 42000: Variable 'optimizer_switch' can't be set to the value of 'index_merge=off'
+set optimizer_switch='index_merge_union=on,index_merge_union=default';
+ERROR 42000: Variable 'optimizer_switch' can't be set to the value of 'index_merge_union=default'
+set optimizer_switch='default,index_merge=on,index_merge=off,default';
+ERROR 42000: Variable 'optimizer_switch' can't be set to the value of 'index_merge=off,default'
+set optimizer_switch=default;
+set optimizer_switch='index_merge=off,index_merge_union=off,default';
+select @@optimizer_switch;
+@@optimizer_switch
+index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
+set optimizer_switch=default;
+select @@global.optimizer_switch;
+@@global.optimizer_switch
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
+set @@global.optimizer_switch=default;
+select @@global.optimizer_switch;
+@@global.optimizer_switch
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
+#
+# Check index_merge's @@optimizer_switch flags
+#
+select @@optimizer_switch;
+@@optimizer_switch
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
+
+BUG#37120 optimizer_switch allowable values not according to specification
+
+select @@optimizer_switch;
+@@optimizer_switch
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
+set optimizer_switch='default,materialization=off';
+select @@optimizer_switch;
+@@optimizer_switch
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
+set optimizer_switch='default,semijoin=off';
+select @@optimizer_switch;
+@@optimizer_switch
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
+set optimizer_switch='default,loosescan=off';
+select @@optimizer_switch;
+@@optimizer_switch
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
+set optimizer_switch='default,semijoin=off,materialization=off';
+select @@optimizer_switch;
+@@optimizer_switch
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
+set optimizer_switch='default,materialization=off,semijoin=off';
+select @@optimizer_switch;
+@@optimizer_switch
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
+set optimizer_switch='default,semijoin=off,materialization=off,loosescan=off';
+select @@optimizer_switch;
+@@optimizer_switch
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
+set optimizer_switch='default,semijoin=off,loosescan=off';
+select @@optimizer_switch;
+@@optimizer_switch
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
+set optimizer_switch='default,materialization=off,loosescan=off';
+select @@optimizer_switch;
+@@optimizer_switch
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
+set optimizer_switch=default;
=== modified file 'mysql-test/r/order_by.result'
--- a/mysql-test/r/order_by.result 2010-03-20 12:01:47 +0000
+++ b/mysql-test/r/order_by.result 2010-07-15 14:07:01 +0000
@@ -607,9 +607,14 @@ FieldKey LongVal StringVal
1 0 2
1 1 3
1 2 1
-EXPLAIN SELECT * FROM t1 WHERE FieldKey > '2' ORDER BY LongVal;
+DS-MRR: use two IGNORE INDEX queries, otherwise we get cost races, because
+DS-MRR: records_in_range/read_time return the same numbers for all three indexes
+EXPLAIN SELECT * FROM t1 IGNORE INDEX (LongField, StringField) WHERE FieldKey > '2' ORDER BY LongVal;
id select_type table type possible_keys key key_len ref rows Extra
-1 SIMPLE t1 range FieldKey,LongField,StringField FieldKey 38 NULL 4 Using index condition; Using where; Using MRR; Using filesort
+1 SIMPLE t1 range FieldKey FieldKey 38 NULL 4 Using index condition; Using MRR; Using filesort
+EXPLAIN SELECT * FROM t1 IGNORE INDEX (FieldKey, LongField) WHERE FieldKey > '2' ORDER BY LongVal;
+id select_type table type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 range StringField StringField 38 NULL 4 Using where; Using filesort
SELECT * FROM t1 WHERE FieldKey > '2' ORDER BY LongVal;
FieldKey LongVal StringVal
3 1 2
=== modified file 'mysql-test/r/subselect_mat.result'
--- a/mysql-test/r/subselect_mat.result 2010-07-16 10:52:02 +0000
+++ b/mysql-test/r/subselect_mat.result 2010-07-16 12:10:55 +0000
@@ -1246,3 +1246,29 @@ i
4
set session optimizer_switch=@save_optimizer_switch;
drop table t1, t2, t3;
+create table t0 (a int);
+insert into t0 values (0),(1),(2);
+create table t1 (a int);
+insert into t1 values (0),(1),(2);
+explain select a, a in (select a from t1) from t0;
+id select_type table type possible_keys key key_len ref rows Extra
+1 PRIMARY t0 ALL NULL NULL NULL NULL 3
+2 SUBQUERY t1 ALL NULL NULL NULL NULL 3
+select a, a in (select a from t1) from t0;
+a a in (select a from t1)
+0 1
+1 1
+2 1
+prepare s from 'select a, a in (select a from t1) from t0';
+execute s;
+a a in (select a from t1)
+0 1
+1 1
+2 1
+update t1 set a=123;
+execute s;
+a a in (select a from t1)
+0 0
+1 0
+2 0
+drop table t0, t1;
=== modified file 'mysql-test/r/subselect_no_mat.result'
--- a/mysql-test/r/subselect_no_mat.result 2010-07-10 10:37:30 +0000
+++ b/mysql-test/r/subselect_no_mat.result 2010-07-16 08:58:24 +0000
@@ -1,6 +1,6 @@
-show variables like 'optimizer_switch';
-Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
+select @@optimizer_switch like '%materialization=on%';
+@@optimizer_switch like '%materialization=on%'
+1
set optimizer_switch='materialization=off';
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
set @save_optimizer_switch=@@optimizer_switch;
@@ -4925,6 +4925,6 @@ DROP TABLE t3;
DROP TABLE t2;
DROP TABLE t1;
set optimizer_switch=default;
-show variables like 'optimizer_switch';
-Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
+select @@optimizer_switch like '%materialization=on%';
+@@optimizer_switch like '%materialization=on%'
+1
=== modified file 'mysql-test/r/subselect_no_opts.result'
--- a/mysql-test/r/subselect_no_opts.result 2010-07-10 10:37:30 +0000
+++ b/mysql-test/r/subselect_no_opts.result 2010-07-16 08:58:24 +0000
@@ -1,6 +1,3 @@
-show variables like 'optimizer_switch';
-Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
set optimizer_switch='materialization=off,semijoin=off';
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
set @save_optimizer_switch=@@optimizer_switch;
@@ -4925,6 +4922,3 @@ DROP TABLE t3;
DROP TABLE t2;
DROP TABLE t1;
set optimizer_switch=default;
-show variables like 'optimizer_switch';
-Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
=== modified file 'mysql-test/r/subselect_no_semijoin.result'
--- a/mysql-test/r/subselect_no_semijoin.result 2010-07-10 10:37:30 +0000
+++ b/mysql-test/r/subselect_no_semijoin.result 2010-07-16 08:58:24 +0000
@@ -1,6 +1,3 @@
-show variables like 'optimizer_switch';
-Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
set optimizer_switch='semijoin=off';
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
set @save_optimizer_switch=@@optimizer_switch;
@@ -4925,6 +4922,3 @@ DROP TABLE t3;
DROP TABLE t2;
DROP TABLE t1;
set optimizer_switch=default;
-show variables like 'optimizer_switch';
-Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
=== modified file 'mysql-test/r/subselect_sj.result'
--- a/mysql-test/r/subselect_sj.result 2010-07-10 10:37:30 +0000
+++ b/mysql-test/r/subselect_sj.result 2010-07-16 08:58:24 +0000
@@ -197,45 +197,6 @@ id select_type table type possible_keys
1 PRIMARY t1 ALL NULL NULL NULL NULL 103 100.00 Using where; Using join buffer
Warnings:
Note 1003 select `test`.`t1`.`a` AS `a`,`test`.`t1`.`b` AS `b` from `test`.`t10` join `test`.`t1` where ((`test`.`t1`.`a` = `test`.`t10`.`pk`) and (`test`.`t10`.`pk` < 3))
-
-BUG#37120 optimizer_switch allowable values not according to specification
-
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set optimizer_switch='default,materialization=off';
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set optimizer_switch='default,semijoin=off';
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set optimizer_switch='default,loosescan=off';
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set optimizer_switch='default,semijoin=off,materialization=off';
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set optimizer_switch='default,materialization=off,semijoin=off';
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set optimizer_switch='default,semijoin=off,materialization=off,loosescan=off';
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set optimizer_switch='default,semijoin=off,loosescan=off';
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set optimizer_switch='default,materialization=off,loosescan=off';
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set optimizer_switch=default;
drop table t0, t1, t2;
drop table t10, t11, t12;
=== modified file 'mysql-test/r/subselect_sj_jcl6.result'
--- a/mysql-test/r/subselect_sj_jcl6.result 2010-07-10 10:37:30 +0000
+++ b/mysql-test/r/subselect_sj_jcl6.result 2010-07-16 08:58:24 +0000
@@ -201,45 +201,6 @@ id select_type table type possible_keys
1 PRIMARY t1 ALL NULL NULL NULL NULL 103 100.00 Using where; Using join buffer
Warnings:
Note 1003 select `test`.`t1`.`a` AS `a`,`test`.`t1`.`b` AS `b` from `test`.`t10` join `test`.`t1` where ((`test`.`t1`.`a` = `test`.`t10`.`pk`) and (`test`.`t10`.`pk` < 3))
-
-BUG#37120 optimizer_switch allowable values not according to specification
-
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set optimizer_switch='default,materialization=off';
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set optimizer_switch='default,semijoin=off';
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set optimizer_switch='default,loosescan=off';
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set optimizer_switch='default,semijoin=off,materialization=off';
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set optimizer_switch='default,materialization=off,semijoin=off';
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set optimizer_switch='default,semijoin=off,materialization=off,loosescan=off';
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set optimizer_switch='default,semijoin=off,loosescan=off';
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set optimizer_switch='default,materialization=off,loosescan=off';
-select @@optimizer_switch;
-@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on
-set optimizer_switch=default;
drop table t0, t1, t2;
drop table t10, t11, t12;
=== modified file 'mysql-test/t/index_merge_myisam.test'
--- a/mysql-test/t/index_merge_myisam.test 2009-08-24 19:10:48 +0000
+++ b/mysql-test/t/index_merge_myisam.test 2010-07-16 08:58:24 +0000
@@ -20,78 +20,6 @@ let $merge_table_support= 1;
--source include/index_merge_2sweeps.inc
--source include/index_merge_ror_cpk.inc
---echo #
---echo # Generic @@optimizer_switch tests (move those into a separate file if
---echo # we get another @@optimizer_switch user)
---echo #
-
---replace_regex /,table_elimination=on//
-select @@optimizer_switch;
-
-set optimizer_switch='index_merge=off,index_merge_union=off';
---replace_regex /,table_elimination=on//
-select @@optimizer_switch;
-
-set optimizer_switch='index_merge_union=on';
---replace_regex /,table_elimination=on//
-select @@optimizer_switch;
-
-set optimizer_switch='default,index_merge_sort_union=off';
---replace_regex /,table_elimination=on//
-select @@optimizer_switch;
-
---error ER_WRONG_VALUE_FOR_VAR
-set optimizer_switch=4;
-
---error ER_WRONG_VALUE_FOR_VAR
-set optimizer_switch=NULL;
-
---error ER_WRONG_VALUE_FOR_VAR
-set optimizer_switch='default,index_merge';
-
---error ER_WRONG_VALUE_FOR_VAR
-set optimizer_switch='index_merge=index_merge';
-
---error ER_WRONG_VALUE_FOR_VAR
-set optimizer_switch='index_merge=on,but...';
-
---error ER_WRONG_VALUE_FOR_VAR
-set optimizer_switch='index_merge=';
-
---error ER_WRONG_VALUE_FOR_VAR
-set optimizer_switch='index_merge';
-
---error ER_WRONG_VALUE_FOR_VAR
-set optimizer_switch='on';
-
---error ER_WRONG_VALUE_FOR_VAR
-set optimizer_switch='index_merge=on,index_merge=off';
-
---error ER_WRONG_VALUE_FOR_VAR
-set optimizer_switch='index_merge_union=on,index_merge_union=default';
-
---error ER_WRONG_VALUE_FOR_VAR
-set optimizer_switch='default,index_merge=on,index_merge=off,default';
-
-set optimizer_switch=default;
-set optimizer_switch='index_merge=off,index_merge_union=off,default';
---replace_regex /,table_elimination=on//
-select @@optimizer_switch;
-set optimizer_switch=default;
-
-# Check setting defaults for global vars
---replace_regex /,table_elimination=on//
-select @@global.optimizer_switch;
-set @@global.optimizer_switch=default;
---replace_regex /,table_elimination=on//
-select @@global.optimizer_switch;
-
---echo #
---echo # Check index_merge's @@optimizer_switch flags
---echo #
---replace_regex /,table_elimination.on//
-select @@optimizer_switch;
-
create table t0 (a int);
insert into t0 values (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);
create table t1 (a int, b int, c int, filler char(100),
@@ -190,8 +118,6 @@ set optimizer_switch='default,index_merg
explain select * from t1 where a=10 and b=10 or c=10;
set optimizer_switch=default;
---replace_regex /,table_elimination.on//
-show variables like 'optimizer_switch';
drop table t0, t1;
=== modified file 'mysql-test/t/myisam_mrr.test'
--- a/mysql-test/t/myisam_mrr.test 2009-12-22 14:43:00 +0000
+++ b/mysql-test/t/myisam_mrr.test 2010-07-16 08:58:24 +0000
@@ -103,8 +103,7 @@ drop table t0, t1;
# Check that optimizer_switch is present
---replace_regex /,table_elimination=o[nf]*//
-select @@optimizer_switch;
+select @@optimizer_switch like '%index_condition_pushdown=on%';
# Check if it affects ICP
create table t0 (a int);
=== added file 'mysql-test/t/optimizer_switch.test'
--- a/mysql-test/t/optimizer_switch.test 1970-01-01 00:00:00 +0000
+++ b/mysql-test/t/optimizer_switch.test 2010-07-16 08:58:24 +0000
@@ -0,0 +1,113 @@
+--echo #
+--echo # Generic @@optimizer_switch tests
+--echo #
+--echo #
+
+--replace_regex /,table_elimination=on//
+select @@optimizer_switch;
+
+set optimizer_switch='index_merge=off,index_merge_union=off';
+--replace_regex /,table_elimination=on//
+select @@optimizer_switch;
+
+set optimizer_switch='index_merge_union=on';
+--replace_regex /,table_elimination=on//
+select @@optimizer_switch;
+
+set optimizer_switch='default,index_merge_sort_union=off';
+--replace_regex /,table_elimination=on//
+select @@optimizer_switch;
+
+--error ER_WRONG_VALUE_FOR_VAR
+set optimizer_switch=4;
+
+--error ER_WRONG_VALUE_FOR_VAR
+set optimizer_switch=NULL;
+
+--error ER_WRONG_VALUE_FOR_VAR
+set optimizer_switch='default,index_merge';
+
+--error ER_WRONG_VALUE_FOR_VAR
+set optimizer_switch='index_merge=index_merge';
+
+--error ER_WRONG_VALUE_FOR_VAR
+set optimizer_switch='index_merge=on,but...';
+
+--error ER_WRONG_VALUE_FOR_VAR
+set optimizer_switch='index_merge=';
+
+--error ER_WRONG_VALUE_FOR_VAR
+set optimizer_switch='index_merge';
+
+--error ER_WRONG_VALUE_FOR_VAR
+set optimizer_switch='on';
+
+--error ER_WRONG_VALUE_FOR_VAR
+set optimizer_switch='index_merge=on,index_merge=off';
+
+--error ER_WRONG_VALUE_FOR_VAR
+set optimizer_switch='index_merge_union=on,index_merge_union=default';
+
+--error ER_WRONG_VALUE_FOR_VAR
+set optimizer_switch='default,index_merge=on,index_merge=off,default';
+
+set optimizer_switch=default;
+set optimizer_switch='index_merge=off,index_merge_union=off,default';
+--replace_regex /,table_elimination=on//
+select @@optimizer_switch;
+set optimizer_switch=default;
+
+# Check setting defaults for global vars
+--replace_regex /,table_elimination=on//
+select @@global.optimizer_switch;
+set @@global.optimizer_switch=default;
+--replace_regex /,table_elimination=on//
+select @@global.optimizer_switch;
+
+--echo #
+--echo # Check index_merge's @@optimizer_switch flags
+--echo #
+--replace_regex /,table_elimination.on//
+select @@optimizer_switch;
+
+--echo
+--echo BUG#37120 optimizer_switch allowable values not according to specification
+--echo
+
+--replace_regex /,table_elimination=on//
+select @@optimizer_switch;
+
+set optimizer_switch='default,materialization=off';
+--replace_regex /,table_elimination=on//
+select @@optimizer_switch;
+
+set optimizer_switch='default,semijoin=off';
+--replace_regex /,table_elimination=on//
+select @@optimizer_switch;
+
+set optimizer_switch='default,loosescan=off';
+--replace_regex /,table_elimination=on//
+select @@optimizer_switch;
+
+set optimizer_switch='default,semijoin=off,materialization=off';
+--replace_regex /,table_elimination=on//
+select @@optimizer_switch;
+
+set optimizer_switch='default,materialization=off,semijoin=off';
+--replace_regex /,table_elimination=on//
+select @@optimizer_switch;
+
+set optimizer_switch='default,semijoin=off,materialization=off,loosescan=off';
+--replace_regex /,table_elimination=on//
+select @@optimizer_switch;
+
+set optimizer_switch='default,semijoin=off,loosescan=off';
+--replace_regex /,table_elimination=on//
+select @@optimizer_switch;
+
+set optimizer_switch='default,materialization=off,loosescan=off';
+--replace_regex /,table_elimination=on//
+select @@optimizer_switch;
+set optimizer_switch=default;
+
+
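
The new file makes the composition rules explicit: the value is applied left to right, 'default' resets to the built-in mask, each 'flag=on|off' assignment flips one bit, and duplicates, bare flag names and malformed assignments are rejected. A minimal sketch of that left-to-right composition (illustrative only: the flag table, default mask and error handling below are made up and far simpler than the server's real set-variable parser, which also enforces the duplicate checks exercised above):

  #include <stdio.h>
  #include <string.h>

  /* Illustrative flag table and default mask, not the server's. */
  static const char *flag_names[]= { "index_merge", "loosescan", "semijoin" };
  enum { N_FLAGS= 3 };
  static const unsigned default_switch_mask= (1u << N_FLAGS) - 1;   /* all on */

  /* Returns ~0u on a syntax error (unknown flag, missing =on/=off). */
  static unsigned apply_switch_string(unsigned current, const char *value)
  {
    unsigned mask= current;
    char buf[256];
    strncpy(buf, value, sizeof(buf) - 1);
    buf[sizeof(buf) - 1]= 0;
    for (char *tok= strtok(buf, ","); tok; tok= strtok(NULL, ","))
    {
      if (!strcmp(tok, "default")) { mask= default_switch_mask; continue; }
      char *eq= strchr(tok, '=');
      if (!eq)
        return ~0u;                      /* bare flag name: rejected            */
      *eq= 0;
      int on;
      if (!strcmp(eq + 1, "on")) on= 1;
      else if (!strcmp(eq + 1, "off")) on= 0;
      else return ~0u;                   /* 'flag=' or 'flag=garbage': rejected */
      int i;
      for (i= 0; i < N_FLAGS && strcmp(tok, flag_names[i]); i++) ;
      if (i == N_FLAGS)
        return ~0u;                      /* unknown flag: rejected              */
      if (on) mask|= (1u << i); else mask&= ~(1u << i);
    }
    return mask;
  }

  int main()
  {
    unsigned sw= apply_switch_string(default_switch_mask, "default,semijoin=off");
    printf("semijoin=%s\n", (sw & (1u << 2)) ? "on" : "off");   /* prints: off */
    return 0;
  }
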
=== modified file 'mysql-test/t/order_by.test'
--- a/mysql-test/t/order_by.test 2010-03-04 08:03:07 +0000
+++ b/mysql-test/t/order_by.test 2010-07-15 14:07:01 +0000
@@ -402,7 +402,11 @@ CREATE TABLE t1 (
INSERT INTO t1 VALUES ('0',3,'0'),('0',2,'1'),('0',1,'2'),('1',2,'1'),('1',1,'3'), ('1',0,'2'),('2',3,'0'),('2',2,'1'),('2',1,'2'),('2',3,'0'),('2',2,'1'),('2',1,'2'),('3',2,'1'),('3',1,'2'),('3','3','3');
EXPLAIN SELECT * FROM t1 WHERE FieldKey = '1' ORDER BY LongVal;
SELECT * FROM t1 WHERE FieldKey = '1' ORDER BY LongVal;
-EXPLAIN SELECT * FROM t1 WHERE FieldKey > '2' ORDER BY LongVal;
+--echo DS-MRR: use two IGNORE INDEX queries, otherwise we get cost races, because
+--echo DS-MRR: records_in_range/read_time return the same numbers for all three indexes
+EXPLAIN SELECT * FROM t1 IGNORE INDEX (LongField, StringField) WHERE FieldKey > '2' ORDER BY LongVal;
+EXPLAIN SELECT * FROM t1 IGNORE INDEX (FieldKey, LongField) WHERE FieldKey > '2' ORDER BY LongVal;
+
SELECT * FROM t1 WHERE FieldKey > '2' ORDER BY LongVal;
EXPLAIN SELECT * FROM t1 WHERE FieldKey > '2' ORDER BY FieldKey, LongVal;
SELECT * FROM t1 WHERE FieldKey > '2' ORDER BY FieldKey, LongVal;
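
The comment above points at the underlying issue: with DS-MRR, records_in_range()/read_time() report identical estimates for all three candidate indexes, so which one EXPLAIN shows depends only on the order the candidates are examined in. A toy illustration of such a tie (not the optimizer's actual cost code; the struct and numbers are hypothetical):

  #include <stdio.h>

  struct candidate { const char *index_name; double cost; };   /* hypothetical */

  static const candidate *pick_cheapest(const candidate *c, int n)
  {
    const candidate *best= &c[0];
    for (int i= 1; i < n; i++)
      if (c[i].cost < best->cost)       /* strict '<': a tie keeps the earlier */
        best= &c[i];
    return best;
  }

  int main()
  {
    candidate c[]= { {"FieldKey", 3.0}, {"LongField", 3.0}, {"StringField", 3.0} };
    printf("chosen: %s\n", pick_cheapest(c, 3)->index_name);  /* first one wins */
    return 0;
  }
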
=== modified file 'mysql-test/t/subselect_mat.test'
--- a/mysql-test/t/subselect_mat.test 2010-03-13 20:04:52 +0000
+++ b/mysql-test/t/subselect_mat.test 2010-07-16 11:02:15 +0000
@@ -905,3 +905,19 @@ select * from t1 where t1.i in (select t
set session optimizer_switch=@save_optimizer_switch;
drop table t1, t2, t3;
+#
+# Test that the contents of the temp table of a materialized subquery are
+# cleaned up between PS re-executions.
+#
+
+create table t0 (a int);
+insert into t0 values (0),(1),(2);
+create table t1 (a int);
+insert into t1 values (0),(1),(2);
+explain select a, a in (select a from t1) from t0;
+select a, a in (select a from t1) from t0;
+prepare s from 'select a, a in (select a from t1) from t0';
+execute s;
+update t1 set a=123;
+execute s;
+drop table t0, t1;
=== modified file 'mysql-test/t/subselect_no_mat.test'
--- a/mysql-test/t/subselect_no_mat.test 2010-02-21 07:33:54 +0000
+++ b/mysql-test/t/subselect_no_mat.test 2010-07-16 08:58:24 +0000
@@ -1,13 +1,11 @@
#
# Run subselect.test without semi-join optimization (test materialize)
#
---replace_regex /,table_elimination=on//
-show variables like 'optimizer_switch';
+select @@optimizer_switch like '%materialization=on%';
set optimizer_switch='materialization=off';
--source t/subselect.test
set optimizer_switch=default;
---replace_regex /,table_elimination=on//
-show variables like 'optimizer_switch';
+select @@optimizer_switch like '%materialization=on%';
=== modified file 'mysql-test/t/subselect_no_opts.test'
--- a/mysql-test/t/subselect_no_opts.test 2010-02-21 07:33:54 +0000
+++ b/mysql-test/t/subselect_no_opts.test 2010-07-16 08:58:24 +0000
@@ -1,13 +1,9 @@
#
# Run subselect.test without semi-join optimization (test materialize)
#
---replace_regex /,table_elimination=on//
-show variables like 'optimizer_switch';
set optimizer_switch='materialization=off,semijoin=off';
--source t/subselect.test
set optimizer_switch=default;
---replace_regex /,table_elimination=on//
-show variables like 'optimizer_switch';
=== modified file 'mysql-test/t/subselect_no_semijoin.test'
--- a/mysql-test/t/subselect_no_semijoin.test 2010-02-21 07:33:54 +0000
+++ b/mysql-test/t/subselect_no_semijoin.test 2010-07-16 08:58:24 +0000
@@ -1,13 +1,8 @@
#
# Run subselect.test without semi-join optimization (test materialize)
#
---replace_regex /,table_elimination=on//
-show variables like 'optimizer_switch';
set optimizer_switch='semijoin=off';
--source t/subselect.test
set optimizer_switch=default;
---replace_regex /,table_elimination=on//
-show variables like 'optimizer_switch';
-
=== modified file 'mysql-test/t/subselect_sj.test'
--- a/mysql-test/t/subselect_sj.test 2010-03-15 06:32:54 +0000
+++ b/mysql-test/t/subselect_sj.test 2010-07-16 08:58:24 +0000
@@ -92,46 +92,6 @@ execute s1;
insert into t1 select (A.a + 10 * B.a),1 from t0 A, t0 B;
explain extended select * from t1 where a in (select pk from t10 where pk<3);
---echo
---echo BUG#37120 optimizer_switch allowable values not according to specification
---echo
-
---replace_regex /,table_elimination=on//
-select @@optimizer_switch;
-
-set optimizer_switch='default,materialization=off';
---replace_regex /,table_elimination=on//
-select @@optimizer_switch;
-
-set optimizer_switch='default,semijoin=off';
---replace_regex /,table_elimination=on//
-select @@optimizer_switch;
-
-set optimizer_switch='default,loosescan=off';
---replace_regex /,table_elimination=on//
-select @@optimizer_switch;
-
-set optimizer_switch='default,semijoin=off,materialization=off';
---replace_regex /,table_elimination=on//
-select @@optimizer_switch;
-
-set optimizer_switch='default,materialization=off,semijoin=off';
---replace_regex /,table_elimination=on//
-select @@optimizer_switch;
-
-set optimizer_switch='default,semijoin=off,materialization=off,loosescan=off';
---replace_regex /,table_elimination=on//
-select @@optimizer_switch;
-
-set optimizer_switch='default,semijoin=off,loosescan=off';
---replace_regex /,table_elimination=on//
-select @@optimizer_switch;
-
-set optimizer_switch='default,materialization=off,loosescan=off';
---replace_regex /,table_elimination=on//
-select @@optimizer_switch;
-set optimizer_switch=default;
-
drop table t0, t1, t2;
drop table t10, t11, t12;
=== modified file 'mysys/queues.c'
--- a/mysys/queues.c 2008-02-18 22:29:39 +0000
+++ b/mysys/queues.c 2010-07-16 07:33:01 +0000
@@ -1,25 +1,42 @@
-/* Copyright (C) 2000, 2005 MySQL AB
+/* Copyright (C) 2010 Monty Program Ab
+ All Rights reserved
- This program is free software; you can redistribute it and/or modify
- it under the terms of the GNU General Public License as published by
- the Free Software Foundation; version 2 of the License.
-
- This program is distributed in the hope that it will be useful,
- but WITHOUT ANY WARRANTY; without even the implied warranty of
- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- GNU General Public License for more details.
-
- You should have received a copy of the GNU General Public License
- along with this program; if not, write to the Free Software
- Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are met:
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the following disclaimer
+ in the documentation and/or other materials provided with the
+ distribution.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
+ FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
+ <COPYRIGHT HOLDER> BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
+ USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+ OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
+ OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ SUCH DAMAGE.
+*/
/*
+ This code originates from the Unireg project.
+
Code for generell handling of priority Queues.
Implemention of queues from "Algoritms in C" by Robert Sedgewick.
- An optimisation of _downheap suggested in Exercise 7.51 in "Data
- Structures & Algorithms in C++" by Mark Allen Weiss, Second Edition
- was implemented by Mikael Ronstrom 2005. Also the O(N) algorithm
- of queue_fix was implemented.
+
+  The queue can optionally store each element's position in the queue inside
+  the element itself. This makes it possible to find any element in O(1) time
+  and then remove it from the queue without scanning.
+
+ Optimisation of _downheap() and queue_fix() is inspired by code done
+ by Mikael Ronström, based on an optimisation of _downheap from
+ Exercise 7.51 in "Data Structures & Algorithms in C++" by Mark Allen
+ Weiss, Second Edition.
*/
#include "mysys_priv.h"
@@ -39,6 +56,10 @@
max_at_top Set to 1 if you want biggest element on top.
compare Compare function for elements, takes 3 arguments.
first_cmp_arg First argument to compare function
+  offset_to_queue_pos If <> 0, the byte offset in the element, plus 1, where
+                        the element's queue position is stored (for fast delete)
+  auto_extent           If the queue is full when an element is inserted,
+                        extend the queue by this many elements.
NOTES
Will allocate max_element pointers for queue array
@@ -50,74 +71,33 @@
int init_queue(QUEUE *queue, uint max_elements, uint offset_to_key,
pbool max_at_top, int (*compare) (void *, uchar *, uchar *),
- void *first_cmp_arg)
+ void *first_cmp_arg, uint offset_to_queue_pos,
+ uint auto_extent)
+
{
DBUG_ENTER("init_queue");
- if ((queue->root= (uchar **) my_malloc((max_elements+1)*sizeof(void*),
+ if ((queue->root= (uchar **) my_malloc((max_elements + 1) * sizeof(void*),
MYF(MY_WME))) == 0)
DBUG_RETURN(1);
- queue->elements=0;
- queue->compare=compare;
- queue->first_cmp_arg=first_cmp_arg;
- queue->max_elements=max_elements;
- queue->offset_to_key=offset_to_key;
+ queue->elements= 0;
+ queue->compare= compare;
+ queue->first_cmp_arg= first_cmp_arg;
+ queue->max_elements= max_elements;
+ queue->offset_to_key= offset_to_key;
+ queue->offset_to_queue_pos= offset_to_queue_pos;
+ queue->auto_extent= auto_extent;
queue_set_max_at_top(queue, max_at_top);
DBUG_RETURN(0);
}
-
-/*
- Init queue, uses init_queue internally for init work but also accepts
- auto_extent as parameter
-
- SYNOPSIS
- init_queue_ex()
- queue Queue to initialise
- max_elements Max elements that will be put in queue
- offset_to_key Offset to key in element stored in queue
- Used when sending pointers to compare function
- max_at_top Set to 1 if you want biggest element on top.
- compare Compare function for elements, takes 3 arguments.
- first_cmp_arg First argument to compare function
- auto_extent When the queue is full and there is insert operation
- extend the queue.
-
- NOTES
- Will allocate max_element pointers for queue array
-
- RETURN
- 0 ok
- 1 Could not allocate memory
-*/
-
-int init_queue_ex(QUEUE *queue, uint max_elements, uint offset_to_key,
- pbool max_at_top, int (*compare) (void *, uchar *, uchar *),
- void *first_cmp_arg, uint auto_extent)
-{
- int ret;
- DBUG_ENTER("init_queue_ex");
-
- if ((ret= init_queue(queue, max_elements, offset_to_key, max_at_top, compare,
- first_cmp_arg)))
- DBUG_RETURN(ret);
-
- queue->auto_extent= auto_extent;
- DBUG_RETURN(0);
-}
-
/*
Reinitialize queue for other usage
SYNOPSIS
reinit_queue()
queue Queue to initialise
- max_elements Max elements that will be put in queue
- offset_to_key Offset to key in element stored in queue
- Used when sending pointers to compare function
- max_at_top Set to 1 if you want biggest element on top.
- compare Compare function for elements, takes 3 arguments.
- first_cmp_arg First argument to compare function
+ For rest of arguments, see init_queue() above
NOTES
This will delete all elements from the queue. If you don't want this,
@@ -125,21 +105,23 @@ int init_queue_ex(QUEUE *queue, uint max
RETURN
0 ok
- EE_OUTOFMEMORY Wrong max_elements
+ 1 Wrong max_elements; Queue has old size
*/
int reinit_queue(QUEUE *queue, uint max_elements, uint offset_to_key,
pbool max_at_top, int (*compare) (void *, uchar *, uchar *),
- void *first_cmp_arg)
+ void *first_cmp_arg, uint offset_to_queue_pos,
+ uint auto_extent)
{
DBUG_ENTER("reinit_queue");
- queue->elements=0;
- queue->compare=compare;
- queue->first_cmp_arg=first_cmp_arg;
- queue->offset_to_key=offset_to_key;
+ queue->elements= 0;
+ queue->compare= compare;
+ queue->first_cmp_arg= first_cmp_arg;
+ queue->offset_to_key= offset_to_key;
+ queue->offset_to_queue_pos= offset_to_queue_pos;
+ queue->auto_extent= auto_extent;
queue_set_max_at_top(queue, max_at_top);
- resize_queue(queue, max_elements);
- DBUG_RETURN(0);
+ DBUG_RETURN(resize_queue(queue, max_elements));
}
@@ -167,8 +149,8 @@ int resize_queue(QUEUE *queue, uint max_
if (queue->max_elements == max_elements)
DBUG_RETURN(0);
if ((new_root= (uchar **) my_realloc((void *)queue->root,
- (max_elements+1)*sizeof(void*),
- MYF(MY_WME))) == 0)
+ (max_elements + 1)* sizeof(void*),
+ MYF(MY_WME))) == 0)
DBUG_RETURN(1);
set_if_smaller(queue->elements, max_elements);
queue->max_elements= max_elements;
@@ -197,39 +179,58 @@ void delete_queue(QUEUE *queue)
if (queue->root)
{
my_free((uchar*) queue->root,MYF(0));
- queue->root=0;
+ queue->root=0; /* Allow multiple calls */
}
DBUG_VOID_RETURN;
}
- /* Code for insert, search and delete of elements */
+/*
+ Insert element in queue
+
+ SYNOPSIS
+ queue_insert()
+ queue Queue to use
+ element Element to insert
+*/
void queue_insert(register QUEUE *queue, uchar *element)
{
reg2 uint idx, next;
+ uint offset_to_queue_pos= queue->offset_to_queue_pos;
DBUG_ASSERT(queue->elements < queue->max_elements);
- queue->root[0]= element;
+
idx= ++queue->elements;
/* max_at_top swaps the comparison if we want to order by desc */
- while ((queue->compare(queue->first_cmp_arg,
+ while (idx > 1 &&
+ (queue->compare(queue->first_cmp_arg,
element + queue->offset_to_key,
queue->root[(next= idx >> 1)] +
queue->offset_to_key) * queue->max_at_top) < 0)
{
queue->root[idx]= queue->root[next];
+ if (offset_to_queue_pos)
+ (*(uint*) (queue->root[idx] + offset_to_queue_pos-1))= idx;
idx= next;
}
queue->root[idx]= element;
+ if (offset_to_queue_pos)
+ (*(uint*) (element+ offset_to_queue_pos-1))= idx;
}
+
/*
- Does safe insert. If no more space left on the queue resize it.
- Return codes:
- 0 - OK
- 1 - Cannot allocate more memory
- 2 - auto_extend is 0, the operation would
-
+ Like queue_insert, but resize queue if queue is full
+
+ SYNOPSIS
+ queue_insert_safe()
+ queue Queue to use
+ element Element to insert
+
+ RETURN
+ 0 OK
+ 1 Cannot allocate more memory
+ 2 auto_extend is 0; No insertion done
*/
int queue_insert_safe(register QUEUE *queue, uchar *element)
@@ -239,7 +240,7 @@ int queue_insert_safe(register QUEUE *qu
{
if (!queue->auto_extent)
return 2;
- else if (resize_queue(queue, queue->max_elements + queue->auto_extent))
+ if (resize_queue(queue, queue->max_elements + queue->auto_extent))
return 1;
}
@@ -248,40 +249,48 @@ int queue_insert_safe(register QUEUE *qu
}
- /* Remove item from queue */
- /* Returns pointer to removed element */
+/*
+ Remove item from queue
+
+ SYNOPSIS
+ queue_remove()
+ queue Queue to use
+    idx                 Index of element to remove.
+                        First element in queue is 'queue_first_element(queue)'
+
+ RETURN
+ pointer to removed element
+*/
uchar *queue_remove(register QUEUE *queue, uint idx)
{
uchar *element;
- DBUG_ASSERT(idx < queue->max_elements);
- element= queue->root[++idx]; /* Intern index starts from 1 */
- queue->root[idx]= queue->root[queue->elements--];
- _downheap(queue, idx);
+ DBUG_ASSERT(idx >= 1 && idx <= queue->elements);
+ element= queue->root[idx];
+ _downheap(queue, idx, queue->root[queue->elements--]);
return element;
}
- /* Fix when element on top has been replaced */
-#ifndef queue_replaced
-void queue_replaced(QUEUE *queue)
-{
- _downheap(queue,1);
-}
-#endif
+/*
+ Add element to fixed position and update heap
-#ifndef OLD_VERSION
+ SYNOPSIS
+ _downheap()
+ queue Queue to use
+ idx Index of element to change
+ element Element to store at 'idx'
+*/
-void _downheap(register QUEUE *queue, uint idx)
+void _downheap(register QUEUE *queue, uint start_idx, uchar *element)
{
- uchar *element;
- uint elements,half_queue,offset_to_key, next_index;
+ uint elements,half_queue,offset_to_key, next_index, offset_to_queue_pos;
+ register uint idx= start_idx;
my_bool first= TRUE;
- uint start_idx= idx;
offset_to_key=queue->offset_to_key;
- element=queue->root[idx];
- half_queue=(elements=queue->elements) >> 1;
+ offset_to_queue_pos= queue->offset_to_queue_pos;
+ half_queue= (elements= queue->elements) >> 1;
while (idx <= half_queue)
{
@@ -298,393 +307,49 @@ void _downheap(register QUEUE *queue, ui
element+offset_to_key) * queue->max_at_top) >= 0)))
{
queue->root[idx]= element;
+ if (offset_to_queue_pos)
+ (*(uint*) (element + offset_to_queue_pos-1))= idx;
return;
}
- queue->root[idx]=queue->root[next_index];
- idx=next_index;
first= FALSE;
- }
-
- next_index= idx >> 1;
- while (next_index > start_idx)
- {
- if ((queue->compare(queue->first_cmp_arg,
- queue->root[next_index]+offset_to_key,
- element+offset_to_key) *
- queue->max_at_top) < 0)
- break;
- queue->root[idx]=queue->root[next_index];
+ queue->root[idx]= queue->root[next_index];
+ if (offset_to_queue_pos)
+ (*(uint*) (queue->root[idx] + offset_to_queue_pos-1))= idx;
idx=next_index;
- next_index= idx >> 1;
}
- queue->root[idx]=element;
-}
-#else
/*
- The old _downheap version is kept for comparisons with the benchmark
- suit or new benchmarks anyone wants to run for comparisons.
+ Insert the element into the right position. This is the same code
+ as we have in queue_insert()
*/
- /* Fix heap when index have changed */
-void _downheap(register QUEUE *queue, uint idx)
-{
- uchar *element;
- uint elements,half_queue,next_index,offset_to_key;
-
- offset_to_key=queue->offset_to_key;
- element=queue->root[idx];
- half_queue=(elements=queue->elements) >> 1;
-
- while (idx <= half_queue)
- {
- next_index=idx+idx;
- if (next_index < elements &&
- (queue->compare(queue->first_cmp_arg,
- queue->root[next_index]+offset_to_key,
- queue->root[next_index+1]+offset_to_key) *
- queue->max_at_top) > 0)
- next_index++;
- if ((queue->compare(queue->first_cmp_arg,
- queue->root[next_index]+offset_to_key,
- element+offset_to_key) * queue->max_at_top) >= 0)
- break;
- queue->root[idx]=queue->root[next_index];
- idx=next_index;
+ while ((next_index= (idx >> 1)) > start_idx &&
+ queue->compare(queue->first_cmp_arg,
+ element+offset_to_key,
+ queue->root[next_index]+offset_to_key)*
+ queue->max_at_top < 0)
+ {
+ queue->root[idx]= queue->root[next_index];
+ if (offset_to_queue_pos)
+ (*(uint*) (queue->root[idx] + offset_to_queue_pos-1))= idx;
+ idx= next_index;
}
- queue->root[idx]=element;
+ queue->root[idx]= element;
+ if (offset_to_queue_pos)
+ (*(uint*) (element + offset_to_queue_pos-1))= idx;
}
-#endif
-
/*
Fix heap when every element was changed.
+
+ SYNOPSIS
+ queue_fix()
+ queue Queue to use
*/
void queue_fix(QUEUE *queue)
{
uint i;
for (i= queue->elements >> 1; i > 0; i--)
- _downheap(queue, i);
-}
-
-#ifdef MAIN
- /*
- A test program for the priority queue implementation.
- It can also be used to benchmark changes of the implementation
- Build by doing the following in the directory mysys
- make test_priority_queue
- ./test_priority_queue
-
- Written by Mikael Ronström, 2005
- */
-
-static uint num_array[1025];
-static uint tot_no_parts= 0;
-static uint tot_no_loops= 0;
-static uint expected_part= 0;
-static uint expected_num= 0;
-static bool max_ind= 0;
-static bool fix_used= 0;
-static ulonglong start_time= 0;
-
-static bool is_divisible_by(uint num, uint divisor)
-{
- uint quotient= num / divisor;
- if (quotient * divisor == num)
- return TRUE;
- return FALSE;
-}
-
-void calculate_next()
-{
- uint part= expected_part, num= expected_num;
- uint no_parts= tot_no_parts;
- if (max_ind)
- {
- do
- {
- while (++part <= no_parts)
- {
- if (is_divisible_by(num, part) &&
- (num <= ((1 << 21) + part)))
- {
- expected_part= part;
- expected_num= num;
- return;
- }
- }
- part= 0;
- } while (--num);
- }
- else
- {
- do
- {
- while (--part > 0)
- {
- if (is_divisible_by(num, part))
- {
- expected_part= part;
- expected_num= num;
- return;
- }
- }
- part= no_parts + 1;
- } while (++num);
- }
-}
-
-void calculate_end_next(uint part)
-{
- uint no_parts= tot_no_parts, num;
- num_array[part]= 0;
- if (max_ind)
- {
- expected_num= 0;
- for (part= no_parts; part > 0 ; part--)
- {
- if (num_array[part])
- {
- num= num_array[part] & 0x3FFFFF;
- if (num >= expected_num)
- {
- expected_num= num;
- expected_part= part;
- }
- }
- }
- if (expected_num == 0)
- expected_part= 0;
- }
- else
- {
- expected_num= 0xFFFFFFFF;
- for (part= 1; part <= no_parts; part++)
- {
- if (num_array[part])
- {
- num= num_array[part] & 0x3FFFFF;
- if (num <= expected_num)
- {
- expected_num= num;
- expected_part= part;
- }
- }
- }
- if (expected_num == 0xFFFFFFFF)
- expected_part= 0;
- }
- return;
-}
-static int test_compare(void *null_arg, uchar *a, uchar *b)
-{
- uint a_num= (*(uint*)a) & 0x3FFFFF;
- uint b_num= (*(uint*)b) & 0x3FFFFF;
- uint a_part, b_part;
- if (a_num > b_num)
- return +1;
- if (a_num < b_num)
- return -1;
- a_part= (*(uint*)a) >> 22;
- b_part= (*(uint*)b) >> 22;
- if (a_part < b_part)
- return +1;
- if (a_part > b_part)
- return -1;
- return 0;
-}
-
-bool check_num(uint num_part)
-{
- uint part= num_part >> 22;
- uint num= num_part & 0x3FFFFF;
- if (part == expected_part)
- if (num == expected_num)
- return FALSE;
- printf("Expect part %u Expect num 0x%x got part %u num 0x%x max_ind %u fix_used %u \n",
- expected_part, expected_num, part, num, max_ind, fix_used);
- return TRUE;
-}
-
-
-void perform_insert(QUEUE *queue)
-{
- uint i= 1, no_parts= tot_no_parts;
- uint backward_start= 0;
-
- expected_part= 1;
- expected_num= 1;
-
- if (max_ind)
- backward_start= 1 << 21;
-
- do
- {
- uint num= (i + backward_start);
- if (max_ind)
- {
- while (!is_divisible_by(num, i))
- num--;
- if (max_ind && (num > expected_num ||
- (num == expected_num && i < expected_part)))
- {
- expected_num= num;
- expected_part= i;
- }
- }
- num_array[i]= num + (i << 22);
- if (fix_used)
- queue_element(queue, i-1)= (uchar*)&num_array[i];
- else
- queue_insert(queue, (uchar*)&num_array[i]);
- } while (++i <= no_parts);
- if (fix_used)
- {
- queue->elements= no_parts;
- queue_fix(queue);
- }
-}
-
-bool perform_ins_del(QUEUE *queue, bool max_ind)
-{
- uint i= 0, no_loops= tot_no_loops, j= tot_no_parts;
- do
- {
- uint num_part= *(uint*)queue_top(queue);
- uint part= num_part >> 22;
- if (check_num(num_part))
- return TRUE;
- if (j++ >= no_loops)
- {
- calculate_end_next(part);
- queue_remove(queue, (uint) 0);
- }
- else
- {
- calculate_next();
- if (max_ind)
- num_array[part]-= part;
- else
- num_array[part]+= part;
- queue_top(queue)= (uchar*)&num_array[part];
- queue_replaced(queue);
- }
- } while (++i < no_loops);
- return FALSE;
-}
-
-bool do_test(uint no_parts, uint l_max_ind, bool l_fix_used)
-{
- QUEUE queue;
- bool result;
- max_ind= l_max_ind;
- fix_used= l_fix_used;
- init_queue(&queue, no_parts, 0, max_ind, test_compare, NULL);
- tot_no_parts= no_parts;
- tot_no_loops= 1024;
- perform_insert(&queue);
- if ((result= perform_ins_del(&queue, max_ind)))
- delete_queue(&queue);
- if (result)
- {
- printf("Error\n");
- return TRUE;
- }
- return FALSE;
-}
-
-static void start_measurement()
-{
- start_time= my_getsystime();
-}
-
-static void stop_measurement()
-{
- ulonglong stop_time= my_getsystime();
- uint time_in_micros;
- stop_time-= start_time;
- stop_time/= 10; /* Convert to microseconds */
- time_in_micros= (uint)stop_time;
- printf("Time expired is %u microseconds \n", time_in_micros);
-}
-
-static void benchmark_test()
-{
- QUEUE queue_real;
- QUEUE *queue= &queue_real;
- uint i, add;
- fix_used= TRUE;
- max_ind= FALSE;
- tot_no_parts= 1024;
- init_queue(queue, tot_no_parts, 0, max_ind, test_compare, NULL);
- /*
- First benchmark whether queue_fix is faster than using queue_insert
- for sizes of 16 partitions.
- */
- for (tot_no_parts= 2, add=2; tot_no_parts < 128;
- tot_no_parts+= add, add++)
- {
- printf("Start benchmark queue_fix, tot_no_parts= %u \n", tot_no_parts);
- start_measurement();
- for (i= 0; i < 128; i++)
- {
- perform_insert(queue);
- queue_remove_all(queue);
- }
- stop_measurement();
-
- fix_used= FALSE;
- printf("Start benchmark queue_insert\n");
- start_measurement();
- for (i= 0; i < 128; i++)
- {
- perform_insert(queue);
- queue_remove_all(queue);
- }
- stop_measurement();
- }
- /*
- Now benchmark insertion and deletion of 16400 elements.
- Used in consecutive runs this shows whether the optimised _downheap
- is faster than the standard implementation.
- */
- printf("Start benchmarking _downheap \n");
- start_measurement();
- perform_insert(queue);
- for (i= 0; i < 65536; i++)
- {
- uint num, part;
- num= *(uint*)queue_top(queue);
- num+= 16;
- part= num >> 22;
- num_array[part]= num;
- queue_top(queue)= (uchar*)&num_array[part];
- queue_replaced(queue);
- }
- for (i= 0; i < 16; i++)
- queue_remove(queue, (uint) 0);
- queue_remove_all(queue);
- stop_measurement();
-}
-
-int main()
-{
- int i, add= 1;
- for (i= 1; i < 1024; i+=add, add++)
- {
- printf("Start test for priority queue of size %u\n", i);
- if (do_test(i, 0, 1))
- return -1;
- if (do_test(i, 1, 1))
- return -1;
- if (do_test(i, 0, 0))
- return -1;
- if (do_test(i, 1, 0))
- return -1;
- }
- benchmark_test();
- printf("OK\n");
- return 0;
+ _downheap(queue, i, queue_element(queue, i));
}
-#endif
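
This is the heart of the queues.c rewrite: init_queue_ex() is folded into init_queue(), and a queue can now write each element's current slot back into the element (offset_to_queue_pos), so a caller that kept a pointer to an element can delete it without a linear search; _downheap() and queue_remove() now take the element to (re)place explicitly. A simplified, self-contained model of the position write-back idea (illustrative only, not the mysys code; a typed struct stands in for the uchar* element plus byte offset):

  #include <assert.h>
  #include <stdio.h>

  struct Elem { int key; unsigned pos; };    /* pos == 0 means "not in heap"     */
  struct Heap { Elem *slot[64]; unsigned elements; };  /* slot[0] unused: 1-based */

  static void place(Heap *h, unsigned idx, Elem *e)
  { h->slot[idx]= e; e->pos= idx; }          /* write the position back          */

  static void sift_up(Heap *h, unsigned idx, Elem *e)
  {
    while (idx > 1 && e->key < h->slot[idx / 2]->key)
    {
      place(h, idx, h->slot[idx / 2]);
      idx/= 2;
    }
    place(h, idx, e);
  }

  static void sift_down(Heap *h, unsigned idx, Elem *e)
  {
    unsigned child;
    while ((child= 2 * idx) <= h->elements)
    {
      if (child < h->elements && h->slot[child + 1]->key < h->slot[child]->key)
        child++;
      if (e->key <= h->slot[child]->key)
        break;
      place(h, idx, h->slot[child]);
      idx= child;
    }
    place(h, idx, e);
  }

  static void heap_insert(Heap *h, Elem *e)
  { sift_up(h, ++h->elements, e); }

  static void heap_remove(Heap *h, Elem *e)  /* no search: e->pos is known       */
  {
    unsigned idx= e->pos;
    assert(idx >= 1 && idx <= h->elements);
    Elem *last= h->slot[h->elements--];
    e->pos= 0;
    if (last == e)
      return;
    /* The former last element may need to move either way from 'idx'. */
    if (idx > 1 && last->key < h->slot[idx / 2]->key)
      sift_up(h, idx, last);
    else
      sift_down(h, idx, last);
  }

  int main()
  {
    Heap h;
    h.elements= 0;
    Elem a= {5, 0}, b= {1, 0}, c= {9, 0};
    heap_insert(&h, &a); heap_insert(&h, &b); heap_insert(&h, &c);
    heap_remove(&h, &a);                     /* direct removal via a.pos         */
    printf("top=%d elements=%u\n", h.slot[1]->key, h.elements); /* top=1, 2 left */
    return 0;
  }
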
=== modified file 'mysys/thr_alarm.c'
--- a/mysys/thr_alarm.c 2008-10-10 15:28:41 +0000
+++ b/mysys/thr_alarm.c 2010-07-16 07:33:01 +0000
@@ -41,6 +41,19 @@ volatile my_bool alarm_thread_running= 0
time_t next_alarm_expire_time= ~ (time_t) 0;
static sig_handler process_alarm_part2(int sig);
+#ifdef DBUG_OFF
+#define reset_index_in_queue(alarm_data)
+#else
+#define reset_index_in_queue(alarm_data) alarm_data->index_in_queue= 0;
+#endif /* DBUG_OFF */
+
+#ifndef USE_ONE_SIGNAL_HAND
+#define one_signal_hand_sigmask(A,B,C) pthread_sigmask((A), (B), (C))
+#else
+#define one_signal_hand_sigmask(A,B,C)
+#endif
+
+
#if !defined(__WIN__)
static pthread_mutex_t LOCK_alarm;
@@ -72,8 +85,8 @@ void init_thr_alarm(uint max_alarms)
DBUG_ENTER("init_thr_alarm");
alarm_aborted=0;
next_alarm_expire_time= ~ (time_t) 0;
- init_queue(&alarm_queue,max_alarms+1,offsetof(ALARM,expire_time),0,
- compare_ulong,NullS);
+ init_queue(&alarm_queue, max_alarms+1, offsetof(ALARM,expire_time), 0,
+ compare_ulong, NullS, offsetof(ALARM, index_in_queue)+1, 0);
sigfillset(&full_signal_set); /* Neaded to block signals */
pthread_mutex_init(&LOCK_alarm,MY_MUTEX_INIT_FAST);
pthread_cond_init(&COND_alarm,NULL);
@@ -151,7 +164,7 @@ void resize_thr_alarm(uint max_alarms)
my_bool thr_alarm(thr_alarm_t *alrm, uint sec, ALARM *alarm_data)
{
- time_t now;
+ time_t now, next;
#ifndef USE_ONE_SIGNAL_HAND
sigset_t old_mask;
#endif
@@ -161,79 +174,68 @@ my_bool thr_alarm(thr_alarm_t *alrm, uin
DBUG_PRINT("enter",("thread: %s sec: %d",my_thread_name(),sec));
now= my_time(0);
-#ifndef USE_ONE_SIGNAL_HAND
- pthread_sigmask(SIG_BLOCK,&full_signal_set,&old_mask);
-#endif
+ if (!alarm_data)
+ {
+ if (!(alarm_data=(ALARM*) my_malloc(sizeof(ALARM),MYF(MY_WME))))
+ goto abort_no_unlock;
+ alarm_data->malloced= 1;
+ }
+ else
+ alarm_data->malloced= 0;
+ next= now + sec;
+ alarm_data->expire_time= next;
+ alarm_data->alarmed= 0;
+ alarm_data->thread= current_my_thread_var->pthread_self;
+ alarm_data->thread_id= current_my_thread_var->id;
+
+ one_signal_hand_sigmask(SIG_BLOCK,&full_signal_set,&old_mask);
pthread_mutex_lock(&LOCK_alarm); /* Lock from threads & alarms */
- if (alarm_aborted > 0)
+ if (unlikely(alarm_aborted))
{ /* No signal thread */
DBUG_PRINT("info", ("alarm aborted"));
- *alrm= 0; /* No alarm */
- pthread_mutex_unlock(&LOCK_alarm);
-#ifndef USE_ONE_SIGNAL_HAND
- pthread_sigmask(SIG_SETMASK,&old_mask,NULL);
-#endif
- DBUG_RETURN(1);
- }
- if (alarm_aborted < 0)
+ if (alarm_aborted > 0)
+ goto abort;
sec= 1; /* Abort mode */
-
+ }
if (alarm_queue.elements >= max_used_alarms)
{
if (alarm_queue.elements == alarm_queue.max_elements)
{
DBUG_PRINT("info", ("alarm queue full"));
fprintf(stderr,"Warning: thr_alarm queue is full\n");
- *alrm= 0; /* No alarm */
- pthread_mutex_unlock(&LOCK_alarm);
-#ifndef USE_ONE_SIGNAL_HAND
- pthread_sigmask(SIG_SETMASK,&old_mask,NULL);
-#endif
- DBUG_RETURN(1);
+ goto abort;
}
max_used_alarms=alarm_queue.elements+1;
}
- reschedule= (ulong) next_alarm_expire_time > (ulong) now + sec;
- if (!alarm_data)
- {
- if (!(alarm_data=(ALARM*) my_malloc(sizeof(ALARM),MYF(MY_WME))))
- {
- DBUG_PRINT("info", ("failed my_malloc()"));
- *alrm= 0; /* No alarm */
- pthread_mutex_unlock(&LOCK_alarm);
-#ifndef USE_ONE_SIGNAL_HAND
- pthread_sigmask(SIG_SETMASK,&old_mask,NULL);
-#endif
- DBUG_RETURN(1);
- }
- alarm_data->malloced=1;
- }
- else
- alarm_data->malloced=0;
- alarm_data->expire_time=now+sec;
- alarm_data->alarmed=0;
- alarm_data->thread= current_my_thread_var->pthread_self;
- alarm_data->thread_id= current_my_thread_var->id;
+ reschedule= (ulong) next_alarm_expire_time > (ulong) next;
queue_insert(&alarm_queue,(uchar*) alarm_data);
+ assert(alarm_data->index_in_queue > 0);
/* Reschedule alarm if the current one has more than sec left */
- if (reschedule)
+ if (unlikely(reschedule))
{
DBUG_PRINT("info", ("reschedule"));
if (pthread_equal(pthread_self(),alarm_thread))
{
alarm(sec); /* purecov: inspected */
- next_alarm_expire_time= now + sec;
+ next_alarm_expire_time= next;
}
else
reschedule_alarms(); /* Reschedule alarms */
}
pthread_mutex_unlock(&LOCK_alarm);
-#ifndef USE_ONE_SIGNAL_HAND
- pthread_sigmask(SIG_SETMASK,&old_mask,NULL);
-#endif
+ one_signal_hand_sigmask(SIG_SETMASK,&old_mask,NULL);
(*alrm)= &alarm_data->alarmed;
DBUG_RETURN(0);
+
+abort:
+ if (alarm_data->malloced)
+ my_free(alarm_data, MYF(0));
+ pthread_mutex_unlock(&LOCK_alarm);
+ one_signal_hand_sigmask(SIG_SETMASK,&old_mask,NULL);
+abort_no_unlock:
+ *alrm= 0; /* No alarm */
+ DBUG_RETURN(1);
}
@@ -247,41 +249,18 @@ void thr_end_alarm(thr_alarm_t *alarmed)
#ifndef USE_ONE_SIGNAL_HAND
sigset_t old_mask;
#endif
- uint i, found=0;
DBUG_ENTER("thr_end_alarm");
-#ifndef USE_ONE_SIGNAL_HAND
- pthread_sigmask(SIG_BLOCK,&full_signal_set,&old_mask);
-#endif
- pthread_mutex_lock(&LOCK_alarm);
-
+ one_signal_hand_sigmask(SIG_BLOCK,&full_signal_set,&old_mask);
alarm_data= (ALARM*) ((uchar*) *alarmed - offsetof(ALARM,alarmed));
- for (i=0 ; i < alarm_queue.elements ; i++)
- {
- if ((ALARM*) queue_element(&alarm_queue,i) == alarm_data)
- {
- queue_remove(&alarm_queue,i),MYF(0);
- if (alarm_data->malloced)
- my_free((uchar*) alarm_data,MYF(0));
- found++;
-#ifdef DBUG_OFF
- break;
-#endif
- }
- }
- DBUG_ASSERT(!*alarmed || found == 1);
- if (!found)
- {
- if (*alarmed)
- fprintf(stderr,"Warning: Didn't find alarm 0x%lx in queue of %d alarms\n",
- (long) *alarmed, alarm_queue.elements);
- DBUG_PRINT("warning",("Didn't find alarm 0x%lx in queue\n",
- (long) *alarmed));
- }
+ pthread_mutex_lock(&LOCK_alarm);
+ DBUG_ASSERT(alarm_data->index_in_queue != 0);
+ DBUG_ASSERT(queue_element(&alarm_queue, alarm_data->index_in_queue) ==
+ alarm_data);
+ queue_remove(&alarm_queue, alarm_data->index_in_queue);
pthread_mutex_unlock(&LOCK_alarm);
-#ifndef USE_ONE_SIGNAL_HAND
- pthread_sigmask(SIG_SETMASK,&old_mask,NULL);
-#endif
+ one_signal_hand_sigmask(SIG_SETMASK,&old_mask,NULL);
+ reset_index_in_queue(alarm_data);
DBUG_VOID_RETURN;
}
@@ -344,12 +323,13 @@ static sig_handler process_alarm_part2(i
#if defined(MAIN) && !defined(__bsdi__)
printf("process_alarm\n"); fflush(stdout);
#endif
- if (alarm_queue.elements)
+ if (likely(alarm_queue.elements))
{
- if (alarm_aborted)
+ if (unlikely(alarm_aborted))
{
uint i;
- for (i=0 ; i < alarm_queue.elements ;)
+ for (i= queue_first_element(&alarm_queue) ;
+ i <= queue_last_element(&alarm_queue) ;)
{
alarm_data=(ALARM*) queue_element(&alarm_queue,i);
alarm_data->alarmed=1; /* Info to thread */
@@ -360,6 +340,7 @@ static sig_handler process_alarm_part2(i
printf("Warning: pthread_kill couldn't find thread!!!\n");
#endif
queue_remove(&alarm_queue,i); /* No thread. Remove alarm */
+ reset_index_in_queue(alarm_data);
}
else
i++; /* Signal next thread */
@@ -371,8 +352,8 @@ static sig_handler process_alarm_part2(i
}
else
{
- ulong now=(ulong) my_time(0);
- ulong next=now+10-(now%10);
+ time_t now= my_time(0);
+ time_t next= now+10-(now%10);
while ((alarm_data=(ALARM*) queue_top(&alarm_queue))->expire_time <= now)
{
alarm_data->alarmed=1; /* Info to thread */
@@ -382,15 +363,16 @@ static sig_handler process_alarm_part2(i
{
#ifdef MAIN
printf("Warning: pthread_kill couldn't find thread!!!\n");
-#endif
- queue_remove(&alarm_queue,0); /* No thread. Remove alarm */
+#endif /* MAIN */
+ queue_remove_top(&alarm_queue); /* No thread. Remove alarm */
+ reset_index_in_queue(alarm_data);
if (!alarm_queue.elements)
break;
}
else
{
alarm_data->expire_time=next;
- queue_replaced(&alarm_queue);
+ queue_replace_top(&alarm_queue);
}
}
#ifndef USE_ALARM_THREAD
@@ -486,13 +468,15 @@ void thr_alarm_kill(my_thread_id thread_
if (alarm_aborted)
return;
pthread_mutex_lock(&LOCK_alarm);
- for (i=0 ; i < alarm_queue.elements ; i++)
+ for (i= queue_first_element(&alarm_queue) ;
+ i <= queue_last_element(&alarm_queue);
+ i++)
{
- if (((ALARM*) queue_element(&alarm_queue,i))->thread_id == thread_id)
+ ALARM *element= (ALARM*) queue_element(&alarm_queue,i);
+ if (element->thread_id == thread_id)
{
- ALARM *tmp=(ALARM*) queue_remove(&alarm_queue,i);
- tmp->expire_time=0;
- queue_insert(&alarm_queue,(uchar*) tmp);
+ element->expire_time= 0;
+ queue_replace(&alarm_queue, i);
reschedule_alarms();
break;
}
@@ -508,7 +492,7 @@ void thr_alarm_info(ALARM_INFO *info)
info->max_used_alarms= max_used_alarms;
if ((info->active_alarms= alarm_queue.elements))
{
- ulong now=(ulong) my_time(0);
+ time_t now= my_time(0);
long time_diff;
ALARM *alarm_data= (ALARM*) queue_top(&alarm_queue);
time_diff= (long) (alarm_data->expire_time - now);
@@ -556,7 +540,7 @@ static void *alarm_handler(void *arg __a
{
if (alarm_queue.elements)
{
- ulong sleep_time,now= my_time(0);
+ time_t sleep_time,now= my_time(0);
if (alarm_aborted)
sleep_time=now+1;
else
@@ -792,20 +776,6 @@ static void *test_thread(void *arg)
return 0;
}
-#ifdef USE_ONE_SIGNAL_HAND
-static sig_handler print_signal_warning(int sig)
-{
- printf("Warning: Got signal %d from thread %s\n",sig,my_thread_name());
- fflush(stdout);
-#ifdef DONT_REMEMBER_SIGNAL
- my_sigset(sig,print_signal_warning); /* int. thread system calls */
-#endif
- if (sig == SIGALRM)
- alarm(2); /* reschedule alarm */
-}
-#endif /* USE_ONE_SIGNAL_HAND */
-
-
static void *signal_hand(void *arg __attribute__((unused)))
{
sigset_t set;
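
thr_alarm.c is the first user of the new position tracking: init_thr_alarm() passes offsetof(ALARM, index_in_queue)+1 as offset_to_queue_pos, queue_insert() then stores each alarm's slot number in the ALARM itself, and thr_end_alarm() removes the alarm by that stored index instead of scanning the whole queue. A standalone sketch of the offset+1 convention (the struct below is hypothetical, but the pointer arithmetic is the same as in queue_insert()/_downheap()):

  #include <stddef.h>
  #include <stdio.h>

  /* Hypothetical element; the real code uses the mysys ALARM struct. */
  struct Alarm { long expire_time; unsigned index_in_queue; };

  /* What queue_insert()/_downheap() do whenever offset_to_queue_pos != 0. */
  static void store_queue_pos(void *element, unsigned offset_to_queue_pos,
                              unsigned idx)
  {
    if (offset_to_queue_pos)
      *(unsigned *) ((char *) element + offset_to_queue_pos - 1)= idx;
  }

  int main()
  {
    Alarm a= { 100, 0 };
    /* 0 means "don't track positions", hence the +1 in the convention. */
    unsigned offset_to_queue_pos=
      (unsigned) offsetof(Alarm, index_in_queue) + 1;

    store_queue_pos(&a, offset_to_queue_pos, 7);     /* element learns its slot */
    printf("index_in_queue=%u\n", a.index_in_queue);          /* prints 7       */

    /* thr_end_alarm() can now call queue_remove(&alarm_queue,
       alarm_data->index_in_queue) directly instead of scanning the queue. */
    return 0;
  }
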
=== modified file 'sql/create_options.cc'
--- a/sql/create_options.cc 2010-05-12 17:56:05 +0000
+++ b/sql/create_options.cc 2010-07-16 07:33:01 +0000
@@ -583,9 +583,9 @@ my_bool engine_table_options_frm_read(co
}
if (buff < buff_end)
- sql_print_warning("Table %`s was created in a later MariaDB version - "
+ sql_print_warning("Table '%s' was created in a later MariaDB version - "
"unknown table attributes were ignored",
- share->table_name);
+ share->table_name.str);
DBUG_RETURN(buff > buff_end);
}
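
The warning fix here is twofold: the %`s identifier-quoting extension is swapped for a plain quoted %s, and the argument becomes share->table_name.str rather than the whole name object, since passing a str/length pair where a char* is expected is the actual bug. A tiny sketch of the mistake and the fix (the struct is a hypothetical stand-in for the server's LEX_STRING):

  #include <stdio.h>
  #include <string.h>

  /* Hypothetical stand-in for a LEX_STRING-style str/length pair. */
  struct lex_string_like { char *str; size_t length; };

  int main()
  {
    char name[]= "t1";
    lex_string_like table_name= { name, strlen(name) };

    /* printf("Table '%s'\n", table_name);        wrong: whole struct passed    */
    printf("Table '%s'\n", table_name.str);    /* right: pass the char pointer  */
    return 0;
  }
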
=== modified file 'sql/event_queue.cc'
--- a/sql/event_queue.cc 2008-12-02 22:02:52 +0000
+++ b/sql/event_queue.cc 2010-07-16 07:33:01 +0000
@@ -136,9 +136,9 @@ Event_queue::init_queue(THD *thd)
LOCK_QUEUE_DATA();
- if (init_queue_ex(&queue, EVENT_QUEUE_INITIAL_SIZE , 0 /*offset*/,
- 0 /*max_on_top*/, event_queue_element_compare_q,
- NULL, EVENT_QUEUE_EXTENT))
+ if (::init_queue(&queue, EVENT_QUEUE_INITIAL_SIZE , 0 /*offset*/,
+ 0 /*max_on_top*/, event_queue_element_compare_q,
+ NullS, 0, EVENT_QUEUE_EXTENT))
{
sql_print_error("Event Scheduler: Can't initialize the execution queue");
goto err;
@@ -325,11 +325,13 @@ void
Event_queue::drop_matching_events(THD *thd, LEX_STRING pattern,
bool (*comparator)(LEX_STRING, Event_basic *))
{
- uint i= 0;
+ uint i;
DBUG_ENTER("Event_queue::drop_matching_events");
DBUG_PRINT("enter", ("pattern=%s", pattern.str));
- while (i < queue.elements)
+ for (i= queue_first_element(&queue) ;
+ i <= queue_last_element(&queue) ;
+ )
{
Event_queue_element *et= (Event_queue_element *) queue_element(&queue, i);
DBUG_PRINT("info", ("[%s.%s]?", et->dbname.str, et->name.str));
@@ -339,7 +341,8 @@ Event_queue::drop_matching_events(THD *t
The queue is ordered. If we remove an element, then all elements
after it will shift one position to the left, if we imagine it as
an array from left to the right. In this case we should not
- increment the counter and the (i < queue.elements) condition is ok.
+      increment the counter and the (i <= queue_last_element()) condition
+      is ok.
*/
queue_remove(&queue, i);
delete et;
@@ -403,7 +406,9 @@ Event_queue::find_n_remove_event(LEX_STR
uint i;
DBUG_ENTER("Event_queue::find_n_remove_event");
- for (i= 0; i < queue.elements; ++i)
+ for (i= queue_first_element(&queue);
+ i <= queue_last_element(&queue);
+ i++)
{
Event_queue_element *et= (Event_queue_element *) queue_element(&queue, i);
DBUG_PRINT("info", ("[%s.%s]==[%s.%s]?", db.str, name.str,
@@ -441,7 +446,9 @@ Event_queue::recalculate_activation_time
LOCK_QUEUE_DATA();
DBUG_PRINT("info", ("%u loaded events to be recalculated", queue.elements));
- for (i= 0; i < queue.elements; i++)
+ for (i= queue_first_element(&queue);
+ i <= queue_last_element(&queue);
+ i++)
{
((Event_queue_element*)queue_element(&queue, i))->compute_next_execution_time();
((Event_queue_element*)queue_element(&queue, i))->update_timing_fields(thd);
@@ -454,16 +461,19 @@ Event_queue::recalculate_activation_time
have removed all. The queue has been ordered in a way the disabled
events are at the end.
*/
- for (i= queue.elements; i > 0; i--)
+ for (i= queue_last_element(&queue);
+ (int) i >= (int) queue_first_element(&queue);
+ i--)
{
- Event_queue_element *element = (Event_queue_element*)queue_element(&queue, i - 1);
+ Event_queue_element *element=
+ (Event_queue_element*)queue_element(&queue, i);
if (element->status != Event_parse_data::DISABLED)
break;
/*
This won't cause queue re-order, because we remove
always the last element.
*/
- queue_remove(&queue, i - 1);
+ queue_remove(&queue, i);
delete element;
}
UNLOCK_QUEUE_DATA();
@@ -499,7 +509,9 @@ Event_queue::empty_queue()
sql_print_information("Event Scheduler: Purging the queue. %u events",
queue.elements);
/* empty the queue */
- for (i= 0; i < queue.elements; ++i)
+ for (i= queue_first_element(&queue);
+ i <= queue_last_element(&queue);
+ i++)
{
Event_queue_element *et= (Event_queue_element *) queue_element(&queue, i);
delete et;
@@ -525,7 +537,9 @@ Event_queue::dbug_dump_queue(time_t now)
uint i;
DBUG_ENTER("Event_queue::dbug_dump_queue");
DBUG_PRINT("info", ("Dumping queue . Elements=%u", queue.elements));
- for (i = 0; i < queue.elements; i++)
+ for (i= queue_first_element(&queue);
+ i <= queue_last_element(&queue);
+ i++)
{
et= ((Event_queue_element*)queue_element(&queue, i));
DBUG_PRINT("info", ("et: 0x%lx name: %s.%s", (long) et,
@@ -592,7 +606,7 @@ Event_queue::get_top_for_execution_if_ti
continue;
}
- top= ((Event_queue_element*) queue_element(&queue, 0));
+ top= (Event_queue_element*) queue_top(&queue);
thd->set_current_time(); /* Get current time */
@@ -634,10 +648,10 @@ Event_queue::get_top_for_execution_if_ti
top->dbname.str, top->name.str,
top->dropped? "Dropping.":"");
delete top;
- queue_remove(&queue, 0);
+ queue_remove_top(&queue);
}
else
- queue_replaced(&queue);
+ queue_replace_top(&queue);
dbug_dump_queue(thd->query_start());
break;
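
All the Event_queue loops switch to the queue's 1-based indexing via queue_first_element()/queue_last_element(), and drop_matching_events() keeps its old rule of not advancing the index after a removal, since the next candidate moves into the freed slot. A stripped-down model of that remove-while-iterating pattern (a plain array stands in for the QUEUE; only the indexing behaviour is modelled, not the heap itself):

  #include <stdio.h>

  static unsigned nelems= 5;
  static int slot[8]= {0, 10, 25, 30, 45, 50};     /* slot[0] unused: 1-based    */

  static void remove_at(unsigned idx)              /* crude stand-in for         */
  {                                                /* queue_remove(): only the   */
    for (unsigned j= idx; j < nelems; j++)         /* "another element now sits  */
      slot[j]= slot[j + 1];                        /* at idx" effect matters     */
    nelems--;
  }

  int main()
  {
    /* ~ for (i= queue_first_element(); i <= queue_last_element(); ) */
    for (unsigned i= 1; i <= nelems; )
    {
      if (slot[i] % 3 == 0)
        remove_at(i);          /* do not advance i: the next element moved here  */
      else
        i++;
    }
    for (unsigned i= 1; i <= nelems; i++)
      printf("%d ", slot[i]);                      /* prints: 10 25 50           */
    printf("\n");
    return 0;
  }
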
=== modified file 'sql/filesort.cc'
--- a/sql/filesort.cc 2010-06-01 19:52:20 +0000
+++ b/sql/filesort.cc 2010-07-16 07:33:01 +0000
@@ -1151,7 +1151,9 @@ uint read_to_buffer(IO_CACHE *fromfile,
void reuse_freed_buff(QUEUE *queue, BUFFPEK *reuse, uint key_length)
{
uchar *reuse_end= reuse->base + reuse->max_keys * key_length;
- for (uint i= 0; i < queue->elements; ++i)
+ for (uint i= queue_first_element(queue);
+ i <= queue_last_element(queue);
+ i++)
{
BUFFPEK *bp= (BUFFPEK *) queue_element(queue, i);
if (bp->base + bp->max_keys * key_length == reuse->base)
@@ -1240,7 +1242,7 @@ int merge_buffers(SORTPARAM *param, IO_C
first_cmp_arg= (void*) &sort_length;
}
if (init_queue(&queue, (uint) (Tb-Fb)+1, offsetof(BUFFPEK,key), 0,
- (queue_compare) cmp, first_cmp_arg))
+ (queue_compare) cmp, first_cmp_arg, 0, 0))
DBUG_RETURN(1); /* purecov: inspected */
for (buffpek= Fb ; buffpek <= Tb ; buffpek++)
{
@@ -1277,7 +1279,7 @@ int merge_buffers(SORTPARAM *param, IO_C
error= 0; /* purecov: inspected */
goto end; /* purecov: inspected */
}
- queue_replaced(&queue); // Top element has been used
+ queue_replace_top(&queue); // Top element has been used
}
else
cmp= 0; // Not unique
@@ -1325,14 +1327,14 @@ int merge_buffers(SORTPARAM *param, IO_C
if (!(error= (int) read_to_buffer(from_file,buffpek,
rec_length)))
{
- VOID(queue_remove(&queue,0));
+ VOID(queue_remove_top(&queue));
reuse_freed_buff(&queue, buffpek, rec_length);
break; /* One buffer have been removed */
}
else if (error == -1)
goto err; /* purecov: inspected */
}
- queue_replaced(&queue); /* Top element has been replaced */
+ queue_replace_top(&queue); /* Top element has been replaced */
}
}
buffpek= (BUFFPEK*) queue_top(&queue);
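
merge_buffers() only ever manipulates the top of the queue: after emitting the smallest key it either refills the same BUFFPEK and re-sinks it (queue_replace_top) or drops it when the buffer is exhausted (queue_remove_top). A compact standalone model of that merge loop over sorted integer runs (illustrative; plain arrays stand in for BUFFPEK/IO_CACHE, and the sink()/remove_top() helpers mirror, but are not, the mysys calls):

  #include <stdio.h>

  struct Run { const int *p, *end; };          /* stand-in for a BUFFPEK         */

  static Run *heap[8];                         /* 1-based min-heap of runs       */
  static unsigned nrun;

  static void sink(unsigned idx)               /* ~ _downheap() on this model    */
  {
    Run *e= heap[idx];
    while (2 * idx <= nrun)
    {
      unsigned child= 2 * idx;
      if (child < nrun && *heap[child + 1]->p < *heap[child]->p)
        child++;
      if (*e->p <= *heap[child]->p)
        break;
      heap[idx]= heap[child];
      idx= child;
    }
    heap[idx]= e;
  }

  static void remove_top(void)                 /* ~ queue_remove_top()           */
  {
    heap[1]= heap[nrun--];
    if (nrun)
      sink(1);
  }

  int main()
  {
    static const int r1[]= {1, 4, 9}, r2[]= {2, 3, 10}, r3[]= {5};
    Run runs[]= { {r1, r1 + 3}, {r2, r2 + 3}, {r3, r3 + 1} };

    nrun= 3;
    for (unsigned i= 0; i < nrun; i++)         /* one entry per sorted buffer    */
      heap[i + 1]= &runs[i];
    for (unsigned i= nrun / 2; i >= 1; i--)    /* ~ queue_fix()                  */
      sink(i);

    while (nrun)
    {
      Run *top= heap[1];                       /* ~ queue_top()                  */
      printf("%d ", *top->p);
      if (++top->p == top->end)
        remove_top();                          /* buffer exhausted               */
      else
        sink(1);                               /* ~ queue_replace_top()          */
    }
    printf("\n");                              /* prints: 1 2 3 4 5 9 10         */
    return 0;
  }
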
=== modified file 'sql/ha_partition.cc'
--- a/sql/ha_partition.cc 2010-06-05 14:53:36 +0000
+++ b/sql/ha_partition.cc 2010-07-16 07:33:01 +0000
@@ -2567,7 +2567,7 @@ int ha_partition::open(const char *name,
Initialize priority queue, initialized to reading forward.
*/
if ((error= init_queue(&m_queue, m_tot_parts, (uint) PARTITION_BYTES_IN_POS,
- 0, key_rec_cmp, (void*)this)))
+ 0, key_rec_cmp, (void*)this, 0, 0)))
goto err_handler;
/*
@@ -4622,7 +4622,7 @@ int ha_partition::handle_unordered_scan_
int ha_partition::handle_ordered_index_scan(uchar *buf, bool reverse_order)
{
uint i;
- uint j= 0;
+ uint j= queue_first_element(&m_queue);
bool found= FALSE;
DBUG_ENTER("ha_partition::handle_ordered_index_scan");
@@ -4716,7 +4716,7 @@ int ha_partition::handle_ordered_index_s
*/
queue_set_max_at_top(&m_queue, reverse_order);
queue_set_cmp_arg(&m_queue, (void*)m_curr_key_info);
- m_queue.elements= j;
+ m_queue.elements= j - queue_first_element(&m_queue);
queue_fix(&m_queue);
return_top_record(buf);
table->status= 0;
@@ -4787,7 +4787,7 @@ int ha_partition::handle_ordered_next(uc
if (error == HA_ERR_END_OF_FILE)
{
/* Return next buffered row */
- queue_remove(&m_queue, (uint) 0);
+ queue_remove_top(&m_queue);
if (m_queue.elements)
{
DBUG_PRINT("info", ("Record returned from partition %u (2)",
@@ -4799,7 +4799,7 @@ int ha_partition::handle_ordered_next(uc
}
DBUG_RETURN(error);
}
- queue_replaced(&m_queue);
+ queue_replace_top(&m_queue);
return_top_record(buf);
DBUG_PRINT("info", ("Record returned from partition %u", m_top_entry));
DBUG_RETURN(0);
@@ -4830,7 +4830,7 @@ int ha_partition::handle_ordered_prev(uc
{
if (error == HA_ERR_END_OF_FILE)
{
- queue_remove(&m_queue, (uint) 0);
+ queue_remove_top(&m_queue);
if (m_queue.elements)
{
return_top_record(buf);
@@ -4842,7 +4842,7 @@ int ha_partition::handle_ordered_prev(uc
}
DBUG_RETURN(error);
}
- queue_replaced(&m_queue);
+ queue_replace_top(&m_queue);
return_top_record(buf);
DBUG_PRINT("info", ("Record returned from partition %d", m_top_entry));
DBUG_RETURN(0);
=== modified file 'sql/ha_partition.h'
--- a/sql/ha_partition.h 2010-06-05 14:53:36 +0000
+++ b/sql/ha_partition.h 2010-07-16 07:33:01 +0000
@@ -1129,7 +1129,7 @@ public:
virtual handlerton *partition_ht() const
{
handlerton *h= m_file[0]->ht;
- for (int i=1; i < m_tot_parts; i++)
+ for (uint i=1; i < m_tot_parts; i++)
DBUG_ASSERT(h == m_file[i]->ht);
return h;
}
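
The one-character change in partition_ht() switches the loop variable to uint so it matches m_tot_parts and avoids a signed/unsigned comparison (-Wsign-compare); behaviour is unchanged for any realistic partition count. For illustration (hypothetical variable names):

  #include <stdio.h>

  int main()
  {
    unsigned m_tot_parts= 4;                    /* matches the member's type       */
    for (unsigned i= 1; i < m_tot_parts; i++)   /* was 'int i': mixed-sign compare */
      printf("checking partition %u\n", i);
    return 0;
  }
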
=== modified file 'sql/item_cmpfunc.cc'
--- a/sql/item_cmpfunc.cc 2010-07-10 10:37:30 +0000
+++ b/sql/item_cmpfunc.cc 2010-07-16 07:33:01 +0000
@@ -1778,10 +1778,12 @@ Item *Item_in_optimizer::expr_cache_inse
if (args[0]->cols() == 1)
depends_on.push_front((Item**)args);
else
- for (int i= 0; i < args[0]->cols(); i++)
+ {
+ for (uint i= 0; i < args[0]->cols(); i++)
{
depends_on.push_front(args[0]->addr(i));
}
+ }
if (args[1]->expr_cache_is_needed(thd))
DBUG_RETURN(set_expr_cache(thd, depends_on));
=== modified file 'sql/item_subselect.cc'
--- a/sql/item_subselect.cc 2010-07-16 10:52:02 +0000
+++ b/sql/item_subselect.cc 2010-07-16 12:10:55 +0000
@@ -4880,7 +4880,8 @@ subselect_rowid_merge_engine::init(MY_BI
merge_keys[i]->sort_keys();
if (init_queue(&pq, keys_count, 0, FALSE,
- subselect_rowid_merge_engine::cmp_keys_by_cur_rownum, NULL))
+ subselect_rowid_merge_engine::cmp_keys_by_cur_rownum, NULL,
+ 0, 0))
return TRUE;
return FALSE;
=== modified file 'sql/mysqld.cc'
--- a/sql/mysqld.cc 2010-07-10 10:37:30 +0000
+++ b/sql/mysqld.cc 2010-07-16 08:58:24 +0000
@@ -409,6 +409,12 @@ static const char *optimizer_switch_str=
"index_merge_sort_union=on,"
"index_merge_intersection=on,"
"index_condition_pushdown=on,"
+ "firstmatch=on,"
+ "loosescan=on,"
+ "materialization=on,"
+ "semijoin=on,"
+ "partial_match_rowid_merge=on,"
+ "partial_match_table_scan=on,"
"subquery_cache=on"
#ifndef DBUG_OFF
",table_elimination=on";
@@ -7227,7 +7233,9 @@ The minimum value for this variable is 4
{"optimizer_switch", OPT_OPTIMIZER_SWITCH,
"optimizer_switch=option=val[,option=val...], where option={index_merge, "
"index_merge_union, index_merge_sort_union, index_merge_intersection, "
- "index_condition_pushdown, subquery_cache"
+ "index_condition_pushdown, firstmatch, loosescan, materialization, "
+ "semijoin, partial_match_rowid_merge, partial_match_table_scan, "
+ "subquery_cache"
#ifndef DBUG_OFF
", table_elimination"
#endif
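
The default optimizer_switch value is one long compile-time string built from adjacent literals, so adding the new flags is just a matter of inserting more fragments, with the #ifndef DBUG_OFF tail deciding whether table_elimination is mentioned at all. A minimal illustration of the same construct (shortened flag list, not the server's full default):

  #include <stdio.h>

  /* Shortened, illustrative flag list -- not the server's full default. */
  static const char *optimizer_switch_str=
    "index_merge=on,"
    "firstmatch=on,"
    "subquery_cache=on"
  #ifndef DBUG_OFF
    ",table_elimination=on"
  #endif
    ;

  int main()
  {
    printf("%s\n", optimizer_switch_str);
    return 0;
  }
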
=== modified file 'sql/net_serv.cc'
--- a/sql/net_serv.cc 2010-05-26 18:55:40 +0000
+++ b/sql/net_serv.cc 2010-07-16 07:33:01 +0000
@@ -262,18 +262,20 @@ static int net_data_is_ready(my_socket s
#endif /* EMBEDDED_LIBRARY */
/**
- Remove unwanted characters from connection
- and check if disconnected.
+  Initialize NET handler for new reads:
- Read from socket until there is nothing more to read. Discard
- what is read.
-
- If there is anything when to read 'net_clear' is called this
- normally indicates an error in the protocol.
-
- When connection is properly closed (for TCP it means with
- a FIN packet), then select() considers a socket "ready to read",
- in the sense that there's EOF to read, but read() returns 0.
+ - Read from socket until there is nothing more to read. Discard
+ what is read.
+ - Initialize net for new net_read/net_write calls.
+
+  If there is anything to read when 'net_clear' is called, this
+  normally indicates an error in the protocol. Normally one should not
+  need to clear the communication buffer. If one compiles without
+  -DUSE_NET_CLEAR, one saves one read call per query.
+
+ When connection is properly closed (for TCP it means with
+ a FIN packet), then select() considers a socket "ready to read",
+ in the sense that there's EOF to read, but read() returns 0.
@param net NET handler
@param clear_buffer if <> 0, then clear all data from comm buff
@@ -281,20 +283,18 @@ static int net_data_is_ready(my_socket s
void net_clear(NET *net, my_bool clear_buffer __attribute__((unused)))
{
-#if !defined(EMBEDDED_LIBRARY) && defined(DBUG_OFF)
- size_t count;
- int ready;
-#endif
DBUG_ENTER("net_clear");
/*
- We don't do a clear in case of DBUG_OFF to catch bugs
- in the protocol handling
+    We don't do a clear in debug builds (when DBUG_OFF is not defined), to be
+    able to catch bugs in the protocol handling.
*/
-#if !defined(EMBEDDED_LIBRARY) && defined(DBUG_OFF)
+#if (!defined(EMBEDDED_LIBRARY) && defined(DBUG_OFF)) || defined(USE_NET_CLEAR)
if (clear_buffer)
{
+ size_t count;
+ int ready;
while ((ready= net_data_is_ready(net->vio->sd)) > 0)
{
/* The socket is ready */
=== modified file 'sql/opt_range.cc'
--- a/sql/opt_range.cc 2010-07-10 10:37:30 +0000
+++ b/sql/opt_range.cc 2010-07-16 07:33:01 +0000
@@ -1155,10 +1155,7 @@ QUICK_SELECT_I::QUICK_SELECT_I()
QUICK_RANGE_SELECT::QUICK_RANGE_SELECT(THD *thd, TABLE *table, uint key_nr,
bool no_alloc, MEM_ROOT *parent_alloc,
bool *create_error)
- :dont_free(0),doing_key_read(0),/*error(0),*/free_file(0),/*in_range(0),*/cur_range(NULL),last_range(0)
- //psergey3-merge: check whether we need doing_key_read and last_range
- // was:
- // :free_file(0),cur_range(NULL),last_range(0),dont_free(0)
+ :doing_key_read(0),/*error(0),*/free_file(0),/*in_range(0),*/cur_range(NULL),last_range(0),dont_free(0)
{
my_bitmap_map *bitmap;
DBUG_ENTER("QUICK_RANGE_SELECT::QUICK_RANGE_SELECT");
@@ -1594,7 +1591,7 @@ int QUICK_ROR_UNION_SELECT::init()
DBUG_ENTER("QUICK_ROR_UNION_SELECT::init");
if (init_queue(&queue, quick_selects.elements, 0,
FALSE , QUICK_ROR_UNION_SELECT::queue_cmp,
- (void*) this))
+ (void*) this, 0, 0))
{
bzero(&queue, sizeof(QUEUE));
DBUG_RETURN(1);
@@ -8293,12 +8290,12 @@ int QUICK_ROR_UNION_SELECT::get_next()
{
if (error != HA_ERR_END_OF_FILE)
DBUG_RETURN(error);
- queue_remove(&queue, 0);
+ queue_remove_top(&queue);
}
else
{
quick->save_last_pos();
- queue_replaced(&queue);
+ queue_replace_top(&queue);
}
if (!have_prev_rowid)
=== modified file 'sql/sql_class.cc'
--- a/sql/sql_class.cc 2010-07-16 10:52:02 +0000
+++ b/sql/sql_class.cc 2010-07-16 12:10:55 +0000
@@ -2994,14 +2994,28 @@ create_result_table(THD *thd_arg, List<I
if (!stat)
return TRUE;
- cleanup();
-
+ reset();
table->file->extra(HA_EXTRA_WRITE_CACHE);
table->file->extra(HA_EXTRA_IGNORE_DUP_KEY);
return FALSE;
}
+void select_materialize_with_stats::reset()
+{
+ memset(col_stat, 0, table->s->fields * sizeof(Column_statistics));
+ max_nulls_in_row= 0;
+ count_rows= 0;
+}
+
+
+void select_materialize_with_stats::cleanup()
+{
+ reset();
+ select_union::cleanup();
+}
+
+
/**
Override select_union::send_data to analyze each row for NULLs and to
update null_statistics before sending data to the client.
=== modified file 'sql/sql_class.h'
--- a/sql/sql_class.h 2010-07-16 10:52:02 +0000
+++ b/sql/sql_class.h 2010-07-16 12:10:55 +0000
@@ -2907,11 +2907,11 @@ public:
bool send_data(List<Item> &items);
bool send_eof();
bool flush();
- TMP_TABLE_PARAM *get_tmp_table_param() { return &tmp_table_param; }
-
+ void cleanup();
virtual bool create_result_table(THD *thd, List<Item> *column_types,
bool is_distinct, ulonglong options,
const char *alias, bool bit_fields_as_long);
+ TMP_TABLE_PARAM *get_tmp_table_param() { return &tmp_table_param; }
};
/* Base subselect interface class */
@@ -2971,6 +2971,9 @@ protected:
*/
ha_rows count_rows;
+protected:
+ void reset();
+
public:
select_materialize_with_stats() { tmp_table_param.init(); }
virtual bool create_result_table(THD *thd, List<Item> *column_types,
@@ -2978,12 +2981,7 @@ public:
const char *alias, bool bit_fields_as_long);
bool init_result_table(ulonglong select_options);
bool send_data(List<Item> &items);
- void cleanup()
- {
- memset(col_stat, 0, table->s->fields * sizeof(Column_statistics));
- max_nulls_in_row= 0;
- count_rows= 0;
- }
+ void cleanup();
ha_rows get_null_count_of_col(uint idx)
{
DBUG_ASSERT(idx < table->s->fields);
=== modified file 'sql/sql_union.cc'
--- a/sql/sql_union.cc 2010-07-10 10:37:30 +0000
+++ b/sql/sql_union.cc 2010-07-16 11:02:15 +0000
@@ -136,6 +136,22 @@ select_union::create_result_table(THD *t
}
+/**
+ Reset and empty the temporary table that stores the materialized query result.
+
+ @note The cleanup performed here is exactly the same as for the two temp
+ tables of JOIN - exec_tmp_table_[1 | 2].
+*/
+
+void select_union::cleanup()
+{
+ table->file->extra(HA_EXTRA_RESET_STATE);
+ table->file->ha_delete_all_rows();
+ free_io_cache(table);
+ filesort_free_buffers(table,0);
+}
+
+
/*
initialization procedures before fake_select_lex preparation()
=== modified file 'sql/uniques.cc'
--- a/sql/uniques.cc 2009-09-07 20:50:10 +0000
+++ b/sql/uniques.cc 2010-07-16 07:33:01 +0000
@@ -423,7 +423,7 @@ static bool merge_walk(uchar *merge_buff
if (end <= begin ||
merge_buffer_size < (ulong) (key_length * (end - begin + 1)) ||
init_queue(&queue, (uint) (end - begin), offsetof(BUFFPEK, key), 0,
- buffpek_compare, &compare_context))
+ buffpek_compare, &compare_context, 0, 0))
return 1;
/* we need space for one key when a piece of merge buffer is re-read */
merge_buffer_size-= key_length;
@@ -468,7 +468,7 @@ static bool merge_walk(uchar *merge_buff
*/
top->key+= key_length;
if (--top->mem_count)
- queue_replaced(&queue);
+ queue_replace_top(&queue);
else /* next piece should be read */
{
/* save old_key not to overwrite it in read_to_buffer */
@@ -478,14 +478,14 @@ static bool merge_walk(uchar *merge_buff
if (bytes_read == (uint) (-1))
goto end;
else if (bytes_read > 0) /* top->key, top->mem_count are reset */
- queue_replaced(&queue); /* in read_to_buffer */
+ queue_replace_top(&queue); /* in read_to_buffer */
else
{
/*
Tree for old 'top' element is empty: remove it from the queue and
give all its memory to the nearest tree.
*/
- queue_remove(&queue, 0);
+ queue_remove_top(&queue);
reuse_freed_buff(&queue, top, key_length);
}
}
=== modified file 'storage/maria/ma_ft_boolean_search.c'
--- a/storage/maria/ma_ft_boolean_search.c 2010-01-06 19:20:16 +0000
+++ b/storage/maria/ma_ft_boolean_search.c 2010-07-16 07:33:01 +0000
@@ -473,14 +473,15 @@ static void _ftb_init_index_search(FT_IN
int i;
FTB_WORD *ftbw;
- if ((ftb->state != READY && ftb->state !=INDEX_DONE) ||
- ftb->keynr == NO_SUCH_KEY)
+ if (ftb->state == UNINITIALIZED || ftb->keynr == NO_SUCH_KEY)
return;
ftb->state=INDEX_SEARCH;
- for (i=ftb->queue.elements; i; i--)
+ for (i= queue_last_element(&ftb->queue);
+ (int) i >= (int) queue_first_element(&ftb->queue);
+ i--)
{
- ftbw=(FTB_WORD *)(ftb->queue.root[i]);
+ ftbw=(FTB_WORD *)(queue_element(&ftb->queue, i));
if (ftbw->flags & FTB_FLAG_TRUNC)
{
@@ -585,7 +586,7 @@ FT_INFO * maria_ft_init_boolean_search(M
sizeof(void *))))
goto err;
reinit_queue(&ftb->queue, ftb->queue.max_elements, 0, 0,
- (int (*)(void*, uchar*, uchar*))FTB_WORD_cmp, 0);
+ (int (*)(void*, uchar*, uchar*))FTB_WORD_cmp, 0, 0, 0);
for (ftbw= ftb->last_word; ftbw; ftbw= ftbw->prev)
queue_insert(&ftb->queue, (uchar *)ftbw);
ftb->list=(FTB_WORD **)alloc_root(&ftb->mem_root,
@@ -828,7 +829,7 @@ int maria_ft_boolean_read_next(FT_INFO *
/* update queue */
_ft2_search(ftb, ftbw, 0);
- queue_replaced(& ftb->queue);
+ queue_replace_top(&ftb->queue);
}
ftbe=ftb->root;
=== modified file 'storage/maria/ma_ft_nlq_search.c'
--- a/storage/maria/ma_ft_nlq_search.c 2009-11-30 13:36:06 +0000
+++ b/storage/maria/ma_ft_nlq_search.c 2010-07-16 07:33:01 +0000
@@ -253,12 +253,12 @@ FT_INFO *maria_ft_init_nlq_search(MARIA_
{
QUEUE best;
init_queue(&best,ft_query_expansion_limit,0,0, (queue_compare) &FT_DOC_cmp,
- 0);
+ 0, 0, 0);
tree_walk(&aio.dtree, (tree_walk_action) &walk_and_push,
&best, left_root_right);
while (best.elements)
{
- my_off_t docid=((FT_DOC *)queue_remove(& best, 0))->dpos;
+ my_off_t docid= ((FT_DOC *)queue_remove_top(&best))->dpos;
if (!(*info->read_record)(info, record, docid))
{
info->update|= HA_STATE_AKTIV;
=== modified file 'storage/maria/ma_sort.c'
--- a/storage/maria/ma_sort.c 2009-11-29 23:08:56 +0000
+++ b/storage/maria/ma_sort.c 2010-07-16 07:33:01 +0000
@@ -933,7 +933,7 @@ merge_buffers(MARIA_SORT_PARAM *info, ui
if (init_queue(&queue,(uint) (Tb-Fb)+1,offsetof(BUFFPEK,key),0,
(int (*)(void*, uchar *,uchar*)) info->key_cmp,
- (void*) info))
+ (void*) info, 0, 0))
DBUG_RETURN(1); /* purecov: inspected */
for (buffpek= Fb ; buffpek <= Tb ; buffpek++)
@@ -982,7 +982,7 @@ merge_buffers(MARIA_SORT_PARAM *info, ui
uchar *base= buffpek->base;
uint max_keys=buffpek->max_keys;
- VOID(queue_remove(&queue,0));
+ VOID(queue_remove_top(&queue));
/* Put room used by buffer to use in other buffer */
for (refpek= (BUFFPEK**) &queue_top(&queue);
@@ -1007,7 +1007,7 @@ merge_buffers(MARIA_SORT_PARAM *info, ui
}
else if (error == -1)
goto err; /* purecov: inspected */
- queue_replaced(&queue); /* Top element has been replaced */
+ queue_replace_top(&queue); /* Top element has been replaced */
}
}
buffpek=(BUFFPEK*) queue_top(&queue);
=== modified file 'storage/maria/maria_pack.c'
--- a/storage/maria/maria_pack.c 2009-02-19 09:01:25 +0000
+++ b/storage/maria/maria_pack.c 2010-07-16 07:33:01 +0000
@@ -590,7 +590,7 @@ static int compress(PACK_MRG_INFO *mrg,c
Create a global priority queue in preparation for making
temporary Huffman trees.
*/
- if (init_queue(&queue,256,0,0,compare_huff_elements,0))
+ if (init_queue(&queue, 256, 0, 0, compare_huff_elements, 0, 0, 0))
goto err;
/*
@@ -1521,7 +1521,7 @@ static int make_huff_tree(HUFF_TREE *huf
if (queue.max_elements < found)
{
delete_queue(&queue);
- if (init_queue(&queue,found,0,0,compare_huff_elements,0))
+ if (init_queue(&queue,found, 0, 0, compare_huff_elements, 0, 0, 0))
return -1;
}
@@ -1625,8 +1625,7 @@ static int make_huff_tree(HUFF_TREE *huf
Make a priority queue from the queue. Construct its index so that we
have a partially ordered tree.
*/
- for (i=found/2 ; i > 0 ; i--)
- _downheap(&queue,i);
+ queue_fix(&queue);
/* The Huffman algorithm. */
bytes_packed=0; bits_packed=0;
@@ -1637,12 +1636,9 @@ static int make_huff_tree(HUFF_TREE *huf
Popping from a priority queue includes a re-ordering of the queue,
to get the next least incidence element to the top.
*/
- a=(HUFF_ELEMENT*) queue_remove(&queue,0);
- /*
- Copy the next least incidence element. The queue implementation
- reserves root[0] for temporary purposes. root[1] is the top.
- */
- b=(HUFF_ELEMENT*) queue.root[1];
+ a=(HUFF_ELEMENT*) queue_remove_top(&queue);
+ /* Copy the next least incidence element */
+ b=(HUFF_ELEMENT*) queue_top(&queue);
/* Get a new element from the element buffer. */
new_huff_el=huff_tree->element_buffer+found+i;
/* The new element gets the sum of the two least incidence elements. */
@@ -1664,8 +1660,8 @@ static int make_huff_tree(HUFF_TREE *huf
Replace the copied top element by the new element and re-order the
queue.
*/
- queue.root[1]=(uchar*) new_huff_el;
- queue_replaced(&queue);
+ queue_top(&queue)= (uchar*) new_huff_el;
+ queue_replace_top(&queue);
}
huff_tree->root=(HUFF_ELEMENT*) queue.root[1];
huff_tree->bytes_packed=bytes_packed+(bits_packed+7)/8;
@@ -1796,8 +1792,7 @@ static my_off_t calc_packed_length(HUFF_
Make a priority queue from the queue. Construct its index so that we
have a partially ordered tree.
*/
- for (i=(found+1)/2 ; i > 0 ; i--)
- _downheap(&queue,i);
+ queue_fix(&queue);
/* The Huffman algorithm. */
for (i=0 ; i < found-1 ; i++)
@@ -1811,12 +1806,9 @@ static my_off_t calc_packed_length(HUFF_
incidence). Popping from a priority queue includes a re-ordering
of the queue, to get the next least incidence element to the top.
*/
- a= (my_off_t*) queue_remove(&queue, 0);
- /*
- Copy the next least incidence element. The queue implementation
- reserves root[0] for temporary purposes. root[1] is the top.
- */
- b= (my_off_t*) queue.root[1];
+ a= (my_off_t*) queue_remove_top(&queue);
+ /* Copy the next least incidence element. */
+ b= (my_off_t*) queue_top(&queue);
/* Create a new element in a local (automatic) buffer. */
new_huff_el= element_buffer + i;
/* The new element gets the sum of the two least incidence elements. */
@@ -1836,8 +1828,8 @@ static my_off_t calc_packed_length(HUFF_
queue. This successively replaces the references to counts by
references to HUFF_ELEMENTs.
*/
- queue.root[1]=(uchar*) new_huff_el;
- queue_replaced(&queue);
+ queue_top(&queue)= (uchar*) new_huff_el;
+ queue_replace_top(&queue);
}
DBUG_RETURN(bytes_packed+(bits_packed+7)/8);
}
=== modified file 'storage/myisam/ft_boolean_search.c'
--- a/storage/myisam/ft_boolean_search.c 2010-06-01 19:52:20 +0000
+++ b/storage/myisam/ft_boolean_search.c 2010-07-16 07:33:01 +0000
@@ -482,16 +482,18 @@ static int _ft2_search(FTB *ftb, FTB_WOR
static void _ftb_init_index_search(FT_INFO *ftb)
{
- int i;
+ uint i;
FTB_WORD *ftbw;
if (ftb->state == UNINITIALIZED || ftb->keynr == NO_SUCH_KEY)
return;
ftb->state=INDEX_SEARCH;
- for (i=ftb->queue.elements; i; i--)
+ for (i= queue_last_element(&ftb->queue);
+ (int) i >= (int) queue_first_element(&ftb->queue);
+ i--)
{
- ftbw=(FTB_WORD *)(ftb->queue.root[i]);
+ ftbw=(FTB_WORD *)(queue_element(&ftb->queue, i));
if (ftbw->flags & FTB_FLAG_TRUNC)
{
@@ -595,12 +597,12 @@ FT_INFO * ft_init_boolean_search(MI_INFO
sizeof(void *))))
goto err;
reinit_queue(&ftb->queue, ftb->queue.max_elements, 0, 0,
- (int (*)(void*, uchar*, uchar*))FTB_WORD_cmp, 0);
+ (int (*)(void*, uchar*, uchar*))FTB_WORD_cmp, 0, 0, 0);
for (ftbw= ftb->last_word; ftbw; ftbw= ftbw->prev)
queue_insert(&ftb->queue, (uchar *)ftbw);
ftb->list=(FTB_WORD **)alloc_root(&ftb->mem_root,
sizeof(FTB_WORD *)*ftb->queue.elements);
- memcpy(ftb->list, ftb->queue.root+1, sizeof(FTB_WORD *)*ftb->queue.elements);
+ memcpy(ftb->list, &queue_top(&ftb->queue), sizeof(FTB_WORD *)*ftb->queue.elements);
my_qsort2(ftb->list, ftb->queue.elements, sizeof(FTB_WORD *),
(qsort2_cmp)FTB_WORD_cmp_list, (void*) ftb->charset);
if (ftb->queue.elements<2) ftb->with_scan &= ~FTB_FLAG_TRUNC;
@@ -839,7 +841,7 @@ int ft_boolean_read_next(FT_INFO *ftb, c
/* update queue */
_ft2_search(ftb, ftbw, 0);
- queue_replaced(& ftb->queue);
+ queue_replace_top(&ftb->queue);
}
ftbe=ftb->root;
=== modified file 'storage/myisam/ft_nlq_search.c'
--- a/storage/myisam/ft_nlq_search.c 2010-01-27 21:53:08 +0000
+++ b/storage/myisam/ft_nlq_search.c 2010-07-16 07:33:01 +0000
@@ -250,12 +250,12 @@ FT_INFO *ft_init_nlq_search(MI_INFO *inf
{
QUEUE best;
init_queue(&best,ft_query_expansion_limit,0,0, (queue_compare) &FT_DOC_cmp,
- 0);
+ 0, 0, 0);
tree_walk(&aio.dtree, (tree_walk_action) &walk_and_push,
&best, left_root_right);
while (best.elements)
{
- my_off_t docid=((FT_DOC *)queue_remove(& best, 0))->dpos;
+ my_off_t docid= ((FT_DOC *)queue_remove_top(&best))->dpos;
if (!(*info->read_record)(info,docid,record))
{
info->update|= HA_STATE_AKTIV;
=== modified file 'storage/myisam/mi_test_all.sh'
--- a/storage/myisam/mi_test_all.sh 2007-07-28 11:36:20 +0000
+++ b/storage/myisam/mi_test_all.sh 2010-07-16 07:33:01 +0000
@@ -5,6 +5,7 @@
valgrind="valgrind --alignment=8 --leak-check=yes"
silent="-s"
+rm -f test1.TMD
if test -f mi_test1$MACH ; then suffix=$MACH ; else suffix=""; fi
./mi_test1$suffix $silent
=== modified file 'storage/myisam/myisampack.c'
--- a/storage/myisam/myisampack.c 2009-02-19 09:01:25 +0000
+++ b/storage/myisam/myisampack.c 2010-07-16 07:33:01 +0000
@@ -576,7 +576,7 @@ static int compress(PACK_MRG_INFO *mrg,c
Create a global priority queue in preparation for making
temporary Huffman trees.
*/
- if (init_queue(&queue,256,0,0,compare_huff_elements,0))
+ if (init_queue(&queue, 256, 0, 0, compare_huff_elements, 0, 0, 0))
goto err;
/*
@@ -1511,7 +1511,7 @@ static int make_huff_tree(HUFF_TREE *huf
if (queue.max_elements < found)
{
delete_queue(&queue);
- if (init_queue(&queue,found,0,0,compare_huff_elements,0))
+ if (init_queue(&queue,found, 0, 0, compare_huff_elements, 0, 0, 0))
return -1;
}
@@ -1615,8 +1615,7 @@ static int make_huff_tree(HUFF_TREE *huf
Make a priority queue from the queue. Construct its index so that we
have a partially ordered tree.
*/
- for (i=found/2 ; i > 0 ; i--)
- _downheap(&queue,i);
+ queue_fix(&queue);
/* The Huffman algorithm. */
bytes_packed=0; bits_packed=0;
@@ -1627,12 +1626,9 @@ static int make_huff_tree(HUFF_TREE *huf
Popping from a priority queue includes a re-ordering of the queue,
to get the next least incidence element to the top.
*/
- a=(HUFF_ELEMENT*) queue_remove(&queue,0);
- /*
- Copy the next least incidence element. The queue implementation
- reserves root[0] for temporary purposes. root[1] is the top.
- */
- b=(HUFF_ELEMENT*) queue.root[1];
+ a=(HUFF_ELEMENT*) queue_remove_top(&queue);
+ /* Copy the next least incidence element */
+ b=(HUFF_ELEMENT*) queue_top(&queue);
/* Get a new element from the element buffer. */
new_huff_el=huff_tree->element_buffer+found+i;
/* The new element gets the sum of the two least incidence elements. */
@@ -1654,8 +1650,8 @@ static int make_huff_tree(HUFF_TREE *huf
Replace the copied top element by the new element and re-order the
queue.
*/
- queue.root[1]=(uchar*) new_huff_el;
- queue_replaced(&queue);
+ queue_top(&queue)= (uchar*) new_huff_el;
+ queue_replace_top(&queue);
}
huff_tree->root=(HUFF_ELEMENT*) queue.root[1];
huff_tree->bytes_packed=bytes_packed+(bits_packed+7)/8;
@@ -1786,8 +1782,7 @@ static my_off_t calc_packed_length(HUFF_
Make a priority queue from the queue. Construct its index so that we
have a partially ordered tree.
*/
- for (i=(found+1)/2 ; i > 0 ; i--)
- _downheap(&queue,i);
+ queue_fix(&queue);
/* The Huffman algorithm. */
for (i=0 ; i < found-1 ; i++)
@@ -1801,12 +1796,9 @@ static my_off_t calc_packed_length(HUFF_
incidence). Popping from a priority queue includes a re-ordering
of the queue, to get the next least incidence element to the top.
*/
- a= (my_off_t*) queue_remove(&queue, 0);
- /*
- Copy the next least incidence element. The queue implementation
- reserves root[0] for temporary purposes. root[1] is the top.
- */
- b= (my_off_t*) queue.root[1];
+ a= (my_off_t*) queue_remove_top(&queue);
+ /* Copy the next least incidence element. */
+ b= (my_off_t*) queue_top(&queue);
/* Create a new element in a local (automatic) buffer. */
new_huff_el= element_buffer + i;
/* The new element gets the sum of the two least incidence elements. */
@@ -1826,8 +1818,8 @@ static my_off_t calc_packed_length(HUFF_
queue. This successively replaces the references to counts by
references to HUFF_ELEMENTs.
*/
- queue.root[1]=(uchar*) new_huff_el;
- queue_replaced(&queue);
+ queue_top(&queue)= (uchar*) new_huff_el;
+ queue_replace_top(&queue);
}
DBUG_RETURN(bytes_packed+(bits_packed+7)/8);
}
=== modified file 'storage/myisam/sort.c'
--- a/storage/myisam/sort.c 2010-04-28 12:52:24 +0000
+++ b/storage/myisam/sort.c 2010-07-16 07:33:01 +0000
@@ -920,7 +920,7 @@ merge_buffers(MI_SORT_PARAM *info, uint
if (init_queue(&queue,(uint) (Tb-Fb)+1,offsetof(BUFFPEK,key),0,
(int (*)(void*, uchar *,uchar*)) info->key_cmp,
- (void*) info))
+ (void*) info, 0, 0))
DBUG_RETURN(1); /* purecov: inspected */
for (buffpek= Fb ; buffpek <= Tb ; buffpek++)
@@ -969,7 +969,7 @@ merge_buffers(MI_SORT_PARAM *info, uint
uchar *base= buffpek->base;
uint max_keys=buffpek->max_keys;
- VOID(queue_remove(&queue,0));
+ VOID(queue_remove_top(&queue));
/* Put room used by buffer to use in other buffer */
for (refpek= (BUFFPEK**) &queue_top(&queue);
@@ -994,7 +994,7 @@ merge_buffers(MI_SORT_PARAM *info, uint
}
else if (error == -1)
goto err; /* purecov: inspected */
- queue_replaced(&queue); /* Top element has been replaced */
+ queue_replace_top(&queue); /* Top element has been replaced */
}
}
buffpek=(BUFFPEK*) queue_top(&queue);
=== modified file 'storage/myisammrg/myrg_queue.c'
--- a/storage/myisammrg/myrg_queue.c 2007-05-10 09:59:39 +0000
+++ b/storage/myisammrg/myrg_queue.c 2010-07-16 07:33:01 +0000
@@ -52,7 +52,7 @@ int _myrg_init_queue(MYRG_INFO *info,int
if (init_queue(q,info->tables, 0,
(myisam_readnext_vec[search_flag] == SEARCH_SMALLER),
queue_key_cmp,
- info->open_tables->table->s->keyinfo[inx].seg))
+ info->open_tables->table->s->keyinfo[inx].seg, 0, 0))
error=my_errno;
}
else
@@ -60,7 +60,7 @@ int _myrg_init_queue(MYRG_INFO *info,int
if (reinit_queue(q,info->tables, 0,
(myisam_readnext_vec[search_flag] == SEARCH_SMALLER),
queue_key_cmp,
- info->open_tables->table->s->keyinfo[inx].seg))
+ info->open_tables->table->s->keyinfo[inx].seg, 0, 0))
error=my_errno;
}
}
=== modified file 'storage/myisammrg/myrg_rnext.c'
--- a/storage/myisammrg/myrg_rnext.c 2007-05-10 09:59:39 +0000
+++ b/storage/myisammrg/myrg_rnext.c 2010-07-16 07:33:01 +0000
@@ -32,7 +32,7 @@ int myrg_rnext(MYRG_INFO *info, uchar *b
{
if (err == HA_ERR_END_OF_FILE)
{
- queue_remove(&(info->by_key),0);
+ queue_remove_top(&(info->by_key));
if (!info->by_key.elements)
return HA_ERR_END_OF_FILE;
}
@@ -43,7 +43,7 @@ int myrg_rnext(MYRG_INFO *info, uchar *b
{
/* Found here, adding to queue */
queue_top(&(info->by_key))=(uchar *)(info->current_table);
- queue_replaced(&(info->by_key));
+ queue_replace_top(&(info->by_key));
}
/* now, mymerge's read_next is as simple as one queue_top */
=== modified file 'storage/myisammrg/myrg_rnext_same.c'
--- a/storage/myisammrg/myrg_rnext_same.c 2007-05-10 09:59:39 +0000
+++ b/storage/myisammrg/myrg_rnext_same.c 2010-07-16 07:33:01 +0000
@@ -29,7 +29,7 @@ int myrg_rnext_same(MYRG_INFO *info, uch
{
if (err == HA_ERR_END_OF_FILE)
{
- queue_remove(&(info->by_key),0);
+ queue_remove_top(&(info->by_key));
if (!info->by_key.elements)
return HA_ERR_END_OF_FILE;
}
@@ -40,7 +40,7 @@ int myrg_rnext_same(MYRG_INFO *info, uch
{
/* Found here, adding to queue */
queue_top(&(info->by_key))=(uchar *)(info->current_table);
- queue_replaced(&(info->by_key));
+ queue_replace_top(&(info->by_key));
}
/* now, mymerge's read_next is as simple as one queue_top */
=== modified file 'storage/myisammrg/myrg_rprev.c'
--- a/storage/myisammrg/myrg_rprev.c 2007-05-10 09:59:39 +0000
+++ b/storage/myisammrg/myrg_rprev.c 2010-07-16 07:33:01 +0000
@@ -32,7 +32,7 @@ int myrg_rprev(MYRG_INFO *info, uchar *b
{
if (err == HA_ERR_END_OF_FILE)
{
- queue_remove(&(info->by_key),0);
+ queue_remove_top(&(info->by_key));
if (!info->by_key.elements)
return HA_ERR_END_OF_FILE;
}
@@ -43,7 +43,7 @@ int myrg_rprev(MYRG_INFO *info, uchar *b
{
/* Found here, adding to queue */
queue_top(&(info->by_key))=(uchar *)(info->current_table);
- queue_replaced(&(info->by_key));
+ queue_replace_top(&(info->by_key));
}
/* now, mymerge's read_prev is as simple as one queue_top */
[Maria-developers] bzr commit into file:///home/tsk/mprog/src/5.3/ branch (timour:2805)
by timour@askmonty.org 16 Jul '10
#At file:///home/tsk/mprog/src/5.3/ based on revid:psergey@askmonty.org-20100716090711-5ijpspzyvmoi5mix
2805 timour(a)askmonty.org 2010-07-16
Fixed a problem where the temp table of a materialized subquery
was not cleaned up between PS re-executions. The reason was two-fold:
- a merge with mysql-6.0 missed select_union::cleanup() that should
have cleaned up the temp table, and
- the subclass of select_union used by materialization didn't call
the base class cleanup() method.
modified:
mysql-test/r/subselect_mat.result
mysql-test/t/subselect_mat.test
sql/sql_class.cc
sql/sql_class.h
sql/sql_union.cc
=== modified file 'mysql-test/r/subselect_mat.result'
--- a/mysql-test/r/subselect_mat.result 2010-06-26 10:05:41 +0000
+++ b/mysql-test/r/subselect_mat.result 2010-07-16 11:02:15 +0000
@@ -1246,3 +1246,29 @@ i
4
set session optimizer_switch=@save_optimizer_switch;
drop table t1, t2, t3;
+create table t0 (a int);
+insert into t0 values (0),(1),(2);
+create table t1 (a int);
+insert into t1 values (0),(1),(2);
+explain select a, a in (select a from t1) from t0;
+id select_type table type possible_keys key key_len ref rows Extra
+1 PRIMARY t0 ALL NULL NULL NULL NULL 3
+2 SUBQUERY t1 ALL NULL NULL NULL NULL 3
+select a, a in (select a from t1) from t0;
+a a in (select a from t1)
+0 1
+1 1
+2 1
+prepare s from 'select a, a in (select a from t1) from t0';
+execute s;
+a a in (select a from t1)
+0 1
+1 1
+2 1
+update t1 set a=123;
+execute s;
+a a in (select a from t1)
+0 0
+1 0
+2 0
+drop table t0, t1;
=== modified file 'mysql-test/t/subselect_mat.test'
--- a/mysql-test/t/subselect_mat.test 2010-03-13 20:04:52 +0000
+++ b/mysql-test/t/subselect_mat.test 2010-07-16 11:02:15 +0000
@@ -905,3 +905,19 @@ select * from t1 where t1.i in (select t
set session optimizer_switch=@save_optimizer_switch;
drop table t1, t2, t3;
+#
+# Test that the contents of the temp table of a materialized subquery is
+# cleaned up between PS re-executions.
+#
+
+create table t0 (a int);
+insert into t0 values (0),(1),(2);
+create table t1 (a int);
+insert into t1 values (0),(1),(2);
+explain select a, a in (select a from t1) from t0;
+select a, a in (select a from t1) from t0;
+prepare s from 'select a, a in (select a from t1) from t0';
+execute s;
+update t1 set a=123;
+execute s;
+drop table t0, t1;
=== modified file 'sql/sql_class.cc'
--- a/sql/sql_class.cc 2010-07-10 10:37:30 +0000
+++ b/sql/sql_class.cc 2010-07-16 11:02:15 +0000
@@ -2994,14 +2994,28 @@ create_result_table(THD *thd_arg, List<I
if (!stat)
return TRUE;
- cleanup();
-
+ reset();
table->file->extra(HA_EXTRA_WRITE_CACHE);
table->file->extra(HA_EXTRA_IGNORE_DUP_KEY);
return FALSE;
}
+void select_materialize_with_stats::reset()
+{
+ memset(col_stat, 0, table->s->fields * sizeof(Column_statistics));
+ max_nulls_in_row= 0;
+ count_rows= 0;
+}
+
+
+void select_materialize_with_stats::cleanup()
+{
+ reset();
+ select_union::cleanup();
+}
+
+
/**
Override select_union::send_data to analyze each row for NULLs and to
update null_statistics before sending data to the client.
=== modified file 'sql/sql_class.h'
--- a/sql/sql_class.h 2010-07-10 10:37:30 +0000
+++ b/sql/sql_class.h 2010-07-16 11:02:15 +0000
@@ -2905,7 +2905,7 @@ public:
bool send_data(List<Item> &items);
bool send_eof();
bool flush();
-
+ void cleanup();
virtual bool create_result_table(THD *thd, List<Item> *column_types,
bool is_distinct, ulonglong options,
const char *alias, bool bit_fields_as_long);
@@ -2968,6 +2968,9 @@ protected:
*/
ha_rows count_rows;
+protected:
+ void reset();
+
public:
select_materialize_with_stats() {}
virtual bool create_result_table(THD *thd, List<Item> *column_types,
@@ -2975,12 +2978,7 @@ public:
const char *alias, bool bit_fields_as_long);
bool init_result_table(ulonglong select_options);
bool send_data(List<Item> &items);
- void cleanup()
- {
- memset(col_stat, 0, table->s->fields * sizeof(Column_statistics));
- max_nulls_in_row= 0;
- count_rows= 0;
- }
+ void cleanup();
ha_rows get_null_count_of_col(uint idx)
{
DBUG_ASSERT(idx < table->s->fields);
=== modified file 'sql/sql_union.cc'
--- a/sql/sql_union.cc 2010-07-10 10:37:30 +0000
+++ b/sql/sql_union.cc 2010-07-16 11:02:15 +0000
@@ -136,6 +136,22 @@ select_union::create_result_table(THD *t
}
+/**
+ Reset and empty the temporary table that stores the materialized query result.
+
+ @note The cleanup performed here is exactly the same as for the two temp
+ tables of JOIN - exec_tmp_table_[1 | 2].
+*/
+
+void select_union::cleanup()
+{
+ table->file->extra(HA_EXTRA_RESET_STATE);
+ table->file->ha_delete_all_rows();
+ free_io_cache(table);
+ filesort_free_buffers(table,0);
+}
+
+
/*
initialization procedures before fake_select_lex preparation()
[Maria-developers] bzr commit into file:///home/tsk/mprog/src/5.3-mwl89/ branch (timour:2801)
by timour@askmonty.org 16 Jul '10
#At file:///home/tsk/mprog/src/5.3-mwl89/ based on revid:sanja@askmonty.org-20100710103730-ayy6a61pdibspf4o
2801 timour(a)askmonty.org 2010-07-16
MWL#89: Cost-based choice between Materialization and IN->EXISTS transformation
1. Changed the lazy optimization for subqueries that can be
materialized into bottom-up optimization during the optimization of
the main query.
The main change is implemented by the method
Item_in_subselect::setup_engine.
All other changes were required to correct problems resulting from
changing the order of optimization. Most of these problems followed
the same pattern - there are some shared structures between a
subquery and its parent query. Depending on which one is optimized
first (parent or child query), these shared strucutres may get
different values, thus resulting in an inconsistent query plan.
2. Changed the code-generation for subquery materialization to be
performed in runtime memory for each (re)execution, instead of in
statement memory (once per prepared statement).
- Item_in_subselect::setup_engine() no longer creates materialization
related objects in statement memory.
- Merged subselect_hash_sj_engine::init_permanent and
subselect_hash_sj_engine::init_runtime into
subselect_hash_sj_engine::init, which is called for each
(re)execution.
- Fixed deletion of the temp table accordingly.
@ mysql-test/r/subselect_mat.result
Adjusted changed EXPLAIN because of earlier optimization of subqueries.
modified:
mysql-test/r/subselect_mat.result
sql/item_subselect.cc
sql/item_subselect.h
sql/sql_class.cc
sql/sql_class.h
sql/sql_select.cc
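Before the patch itself, here is a deliberately small, hypothetical C model of
the per-execution flow the commit message describes. Every function name below
is an invented stub, not a server API; the real logic is in
Item_in_subselect::setup_engine in the diff that follows.
/* Toy model only: invented stubs standing in for the steps described in the
   commit message above. Not MariaDB code. */
#include <stdio.h>
#include <stdbool.h>
static bool optimize_subquery_join(void)      { return false; } /* stub: ok */
static bool init_materialization_engine(void) { return false; } /* stub: ok */
static bool fall_back_to_in_to_exists(void)   { return false; } /* stub: ok */
/* Returns true on error, mirroring the bool-as-error convention in the patch. */
static bool setup_engine_sketch(void)
{
  /* 1. The subquery is optimized bottom-up, while the parent is optimized. */
  if (optimize_subquery_join())
    return true;
  /* 2. Materialization objects are (re)created for each execution in runtime
        memory, instead of once per prepared statement. */
  if (init_materialization_engine())
  {
    /* 3. Materialization is unusable: drop its objects and fall back to the
          IN=>EXISTS transformation; the subquery is re-optimized on its first
          execution because new predicates were injected. */
    return fall_back_to_in_to_exists();
  }
  /* 4. Success: the materialization engine is used for this execution. */
  return false;
}
int main(void)
{
  printf("setup_engine_sketch(): %s\n", setup_engine_sketch() ? "error" : "ok");
  return 0;
}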
=== modified file 'mysql-test/r/subselect_mat.result'
--- a/mysql-test/r/subselect_mat.result 2010-06-26 10:05:41 +0000
+++ b/mysql-test/r/subselect_mat.result 2010-07-16 10:52:02 +0000
@@ -1139,7 +1139,7 @@ insert into t1 values (5);
explain select min(a1) from t1 where 7 in (select b1 from t2 group by b1);
id select_type table type possible_keys key key_len ref rows Extra
1 PRIMARY NULL NULL NULL NULL NULL NULL NULL Select tables optimized away
-2 SUBQUERY t2 system NULL NULL NULL NULL 0 const row not found
+2 SUBQUERY NULL NULL NULL NULL NULL NULL NULL no matching row in const table
select min(a1) from t1 where 7 in (select b1 from t2 group by b1);
min(a1)
set @@optimizer_switch='default,materialization=off';
@@ -1153,7 +1153,7 @@ set @@optimizer_switch='default,semijoin
explain select min(a1) from t1 where 7 in (select b1 from t2);
id select_type table type possible_keys key key_len ref rows Extra
1 PRIMARY NULL NULL NULL NULL NULL NULL NULL Select tables optimized away
-2 SUBQUERY t2 system NULL NULL NULL NULL 0 const row not found
+2 SUBQUERY NULL NULL NULL NULL NULL NULL NULL no matching row in const table
select min(a1) from t1 where 7 in (select b1 from t2);
min(a1)
set @@optimizer_switch='default,materialization=off';
=== modified file 'sql/item_subselect.cc'
--- a/sql/item_subselect.cc 2010-07-10 10:37:30 +0000
+++ b/sql/item_subselect.cc 2010-07-16 10:52:02 +0000
@@ -166,6 +166,7 @@ void Item_in_subselect::cleanup()
Item_subselect::~Item_subselect()
{
delete engine;
+ engine= NULL;
}
Item_subselect::trans_res
@@ -2220,73 +2221,73 @@ void Item_in_subselect::update_used_tabl
bool Item_in_subselect::setup_engine()
{
- subselect_hash_sj_engine *new_engine= NULL;
- bool res= FALSE;
+ subselect_hash_sj_engine *mat_engine= NULL;
+ subselect_single_select_engine *select_engine;
DBUG_ENTER("Item_in_subselect::setup_engine");
- if (engine->engine_type() == subselect_engine::SINGLE_SELECT_ENGINE)
- {
- /* Create/initialize objects in permanent memory. */
- subselect_single_select_engine *old_engine;
- Query_arena *arena= thd->stmt_arena, backup;
- old_engine= (subselect_single_select_engine*) engine;
+ SELECT_LEX *save_select= thd->lex->current_select;
+ thd->lex->current_select= get_select_lex();
+ int res= thd->lex->current_select->join->optimize();
+ thd->lex->current_select= save_select;
+ if (res)
+ DBUG_RETURN(TRUE);
- if (arena->is_conventional())
- arena= 0;
- else
- thd->set_n_backup_active_arena(arena, &backup);
+ /*
+ The select_engine (that executes transformed IN=>EXISTS subselects) is
+ pre-created at parse time, and is stored in statment memory (preserved
+ across PS executions).
+ */
+ DBUG_ASSERT(engine->engine_type() == subselect_engine::SINGLE_SELECT_ENGINE);
+ select_engine= (subselect_single_select_engine*) engine;
- if (!(new_engine= new subselect_hash_sj_engine(thd, this,
- old_engine)) ||
- new_engine->init_permanent(unit->get_unit_column_types()))
- {
- Item_subselect::trans_res trans_res;
- /*
- If for some reason we cannot use materialization for this IN predicate,
- delete all materialization-related objects, and apply the IN=>EXISTS
- transformation.
- */
- delete new_engine;
- new_engine= NULL;
- exec_method= NOT_TRANSFORMED;
- if (left_expr->cols() == 1)
- trans_res= single_value_in_to_exists_transformer(old_engine->join,
- &eq_creator);
- else
- trans_res= row_value_in_to_exists_transformer(old_engine->join);
- res= (trans_res != Item_subselect::RES_OK);
- }
- if (new_engine)
- engine= new_engine;
+ /* Create/initialize execution objects. */
+ if (!(mat_engine= new subselect_hash_sj_engine(thd, this, select_engine)))
+ DBUG_RETURN(TRUE);
- if (arena)
- thd->restore_active_arena(arena, &backup);
- }
- else
+ if (mat_engine->init(&select_engine->join->fields_list))
{
- DBUG_ASSERT(engine->engine_type() == subselect_engine::HASH_SJ_ENGINE);
- new_engine= (subselect_hash_sj_engine*) engine;
- }
+ Item_subselect::trans_res trans_res;
+ /*
+ If for some reason we cannot use materialization for this IN predicate,
+ delete all materialization-related objects, and apply the IN=>EXISTS
+ transformation.
+ */
+ delete mat_engine;
+ mat_engine= NULL;
+ exec_method= NOT_TRANSFORMED;
+
+ if (left_expr->cols() == 1)
+ trans_res= single_value_in_to_exists_transformer(select_engine->join,
+ &eq_creator);
+ else
+ trans_res= row_value_in_to_exists_transformer(select_engine->join);
- /* Initilizations done in runtime memory, repeated for each execution. */
- if (new_engine)
- {
/*
- Reset the LIMIT 1 set in Item_exists_subselect::fix_length_and_dec.
- TODO:
- Currently we set the subquery LIMIT to infinity, and this is correct
- because we forbid at parse time LIMIT inside IN subqueries (see
- Item_in_subselect::test_limit). However, once we allow this, here
- we should set the correct limit if given in the query.
+ The IN=>EXISTS transformation above injects new predicates into the
+ WHERE and HAVING clauses. Since the subquery was already optimized,
+ below we force its reoptimization with the new injected conditions
+ by the first call to subselect_single_select_engine::exec().
+ This is the only case of lazy subquery optimization in the server.
*/
- unit->global_parameters->select_limit= NULL;
- if ((res= new_engine->init_runtime()))
- DBUG_RETURN(res);
+ DBUG_ASSERT(select_engine->join->optimized);
+ select_engine->join->optimized= false;
+ DBUG_RETURN(trans_res != Item_subselect::RES_OK);
}
- DBUG_RETURN(res);
+ /*
+ Reset the "LIMIT 1" set in Item_exists_subselect::fix_length_and_dec.
+ TODO:
+ Currently we set the subquery LIMIT to infinity, and this is correct
+ because we forbid at parse time LIMIT inside IN subqueries (see
+ Item_in_subselect::test_limit). However, once we allow this, here
+ we should set the correct limit if given in the query.
+ */
+ unit->global_parameters->select_limit= NULL;
+
+ engine= mat_engine;
+ DBUG_RETURN(FALSE);
}
@@ -3787,13 +3788,14 @@ bitmap_init_memroot(MY_BITMAP *map, uint
@retval FALSE otherwise
*/
-bool subselect_hash_sj_engine::init_permanent(List<Item> *tmp_columns)
+bool subselect_hash_sj_engine::init(List<Item> *tmp_columns)
{
+ select_union *result_sink;
/* Options to create_tmp_table. */
ulonglong tmp_create_options= thd->options | TMP_TABLE_ALL_COLUMNS;
/* | TMP_TABLE_FORCE_MYISAM; TIMOUR: force MYISAM */
- DBUG_ENTER("subselect_hash_sj_engine::init_permanent");
+ DBUG_ENTER("subselect_hash_sj_engine::init");
if (bitmap_init_memroot(&non_null_key_parts, tmp_columns->elements,
thd->mem_root) ||
@@ -3822,15 +3824,16 @@ bool subselect_hash_sj_engine::init_perm
DBUG_RETURN(TRUE);
}
*/
- if (!(result= new select_materialize_with_stats))
+ if (!(result_sink= new select_materialize_with_stats))
DBUG_RETURN(TRUE);
-
- if (((select_union*) result)->create_result_table(
- thd, tmp_columns, TRUE, tmp_create_options,
- "materialized subselect", TRUE))
+ result_sink->get_tmp_table_param()->materialized_subquery= true;
+ if (result_sink->create_result_table(thd, tmp_columns, TRUE,
+ tmp_create_options,
+ "materialized subselect", TRUE))
DBUG_RETURN(TRUE);
- tmp_table= ((select_union*) result)->table;
+ tmp_table= result_sink->table;
+ result= result_sink;
/*
If the subquery has blobs, or the total key lenght is bigger than
@@ -3867,6 +3870,17 @@ bool subselect_hash_sj_engine::init_perm
!(lookup_engine= make_unique_engine()))
DBUG_RETURN(TRUE);
+ /*
+ Repeat name resolution for 'cond' since cond is not part of any
+ clause of the query, and it is not 'fixed' during JOIN::prepare.
+ */
+ if (semi_join_conds && !semi_join_conds->fixed &&
+ semi_join_conds->fix_fields(thd, (Item**)&semi_join_conds))
+ DBUG_RETURN(TRUE);
+ /* Let our engine reuse this query plan for materialization. */
+ materialize_join= materialize_engine->join;
+ materialize_join->change_result(result);
+
DBUG_RETURN(FALSE);
}
@@ -3957,8 +3971,6 @@ subselect_hash_sj_engine::make_unique_en
Item_iterator_row it(item_in->left_expr);
/* The only index on the temporary table. */
KEY *tmp_key= tmp_table->key_info;
- /* Number of keyparts in tmp_key. */
- uint tmp_key_parts= tmp_key->key_parts;
JOIN_TAB *tab;
DBUG_ENTER("subselect_hash_sj_engine::make_unique_engine");
@@ -3981,41 +3993,22 @@ subselect_hash_sj_engine::make_unique_en
}
-/**
- Initialize members of the engine that need to be re-initilized at each
- execution.
+subselect_hash_sj_engine::~subselect_hash_sj_engine()
+{
+ delete lookup_engine;
+ delete result;
+ if (tmp_table)
+ free_tmp_table(thd, tmp_table);
+}
- @retval TRUE if a memory allocation error occurred
- @retval FALSE if success
-*/
-bool subselect_hash_sj_engine::init_runtime()
+int subselect_hash_sj_engine::prepare()
{
/*
Create and optimize the JOIN that will be used to materialize
the subquery if not yet created.
*/
- materialize_engine->prepare();
- /*
- Repeat name resolution for 'cond' since cond is not part of any
- clause of the query, and it is not 'fixed' during JOIN::prepare.
- */
- if (semi_join_conds && !semi_join_conds->fixed &&
- semi_join_conds->fix_fields(thd, (Item**)&semi_join_conds))
- return TRUE;
- /* Let our engine reuse this query plan for materialization. */
- materialize_join= materialize_engine->join;
- materialize_join->change_result(result);
- return FALSE;
-}
-
-
-subselect_hash_sj_engine::~subselect_hash_sj_engine()
-{
- delete lookup_engine;
- delete result;
- if (tmp_table)
- free_tmp_table(thd, tmp_table);
+ return materialize_engine->prepare();
}
@@ -4036,6 +4029,12 @@ void subselect_hash_sj_engine::cleanup()
count_null_only_columns= 0;
strategy= UNDEFINED;
materialize_engine->cleanup();
+ /*
+ Restore the original Item_in_subselect engine. This engine is created once
+ at parse time and stored across executions, while all other materialization
+ related engines are created and chosen for each execution.
+ */
+ ((Item_in_subselect *) item)->engine= materialize_engine;
if (lookup_engine_type == TABLE_SCAN_ENGINE ||
lookup_engine_type == ROWID_MERGE_ENGINE)
{
@@ -4052,6 +4051,9 @@ void subselect_hash_sj_engine::cleanup()
DBUG_ASSERT(lookup_engine->engine_type() == UNIQUESUBQUERY_ENGINE);
lookup_engine->cleanup();
result->cleanup(); /* Resets the temp table as well. */
+ DBUG_ASSERT(tmp_table);
+ free_tmp_table(thd, tmp_table);
+ tmp_table= NULL;
}
@@ -4080,9 +4082,8 @@ int subselect_hash_sj_engine::exec()
the subquery predicate.
*/
thd->lex->current_select= materialize_engine->select_lex;
- if ((res= materialize_join->optimize()))
- goto err; /* purecov: inspected */
- DBUG_ASSERT(!is_materialized); /* We should materialize only once. */
+ /* The subquery should be optimized, and materialized only once. */
+ DBUG_ASSERT(materialize_join->optimized && !is_materialized);
materialize_join->exec();
if ((res= test(materialize_join->error || thd->is_fatal_error)))
goto err;
=== modified file 'sql/item_subselect.h'
--- a/sql/item_subselect.h 2010-07-10 10:37:30 +0000
+++ b/sql/item_subselect.h 2010-07-16 10:52:02 +0000
@@ -817,10 +817,9 @@ public:
}
~subselect_hash_sj_engine();
- bool init_permanent(List<Item> *tmp_columns);
- bool init_runtime();
+ bool init(List<Item> *tmp_columns);
void cleanup();
- int prepare() { return 0; } /* Override virtual function in base class. */
+ int prepare();
int exec();
virtual void print(String *str, enum_query_type query_type);
uint cols()
=== modified file 'sql/sql_class.cc'
--- a/sql/sql_class.cc 2010-07-10 10:37:30 +0000
+++ b/sql/sql_class.cc 2010-07-16 10:52:02 +0000
@@ -3052,6 +3052,7 @@ void TMP_TABLE_PARAM::init()
table_charset= 0;
precomputed_group_by= 0;
bit_fields_as_long= 0;
+ materialized_subquery= 0;
skip_create_table= 0;
DBUG_VOID_RETURN;
}
=== modified file 'sql/sql_class.h'
--- a/sql/sql_class.h 2010-07-10 10:37:30 +0000
+++ b/sql/sql_class.h 2010-07-16 10:52:02 +0000
@@ -2852,6 +2852,8 @@ public:
uint convert_blob_length;
CHARSET_INFO *table_charset;
bool schema_table;
+ /* TRUE if the temp table is created for subquery materialization. */
+ bool materialized_subquery;
/*
True if GROUP BY and its aggregate functions are already computed
by a table access method (e.g. by loose index scan). In this case
@@ -2875,8 +2877,8 @@ public:
TMP_TABLE_PARAM()
:copy_field(0), group_parts(0),
group_length(0), group_null_parts(0), convert_blob_length(0),
- schema_table(0), precomputed_group_by(0), force_copy_fields(0),
- bit_fields_as_long(0), skip_create_table(0)
+ schema_table(0), materialized_subquery(0), precomputed_group_by(0),
+ force_copy_fields(0), bit_fields_as_long(0), skip_create_table(0)
{}
~TMP_TABLE_PARAM()
{
@@ -2905,6 +2907,7 @@ public:
bool send_data(List<Item> &items);
bool send_eof();
bool flush();
+ TMP_TABLE_PARAM *get_tmp_table_param() { return &tmp_table_param; }
virtual bool create_result_table(THD *thd, List<Item> *column_types,
bool is_distinct, ulonglong options,
@@ -2969,7 +2972,7 @@ protected:
ha_rows count_rows;
public:
- select_materialize_with_stats() {}
+ select_materialize_with_stats() { tmp_table_param.init(); }
virtual bool create_result_table(THD *thd, List<Item> *column_types,
bool is_distinct, ulonglong options,
const char *alias, bool bit_fields_as_long);
=== modified file 'sql/sql_select.cc'
--- a/sql/sql_select.cc 2010-07-10 10:37:30 +0000
+++ b/sql/sql_select.cc 2010-07-16 10:52:02 +0000
@@ -2586,14 +2586,13 @@ err:
Setup for execution all subqueries of a query, for which the optimizer
chose hash semi-join.
- @details Iterate over all subqueries of the query, and if they are under an
- IN predicate, and the optimizer chose to compute it via hash semi-join:
- - try to initialize all data structures needed for the materialized execution
- of the IN predicate,
- - if this fails, then perform the IN=>EXISTS transformation which was
- previously blocked during JOIN::prepare.
-
- This method is part of the "code generation" query processing phase.
+ @details Iterate over all immediate child subqueries of the query, and if
+ they are under an IN predicate, and the optimizer chose to compute it via
+ materialization:
+ - optimize each subquery,
+ - choose an optimial execution strategy for the IN predicate - either
+ materialization, or an IN=>EXISTS transformation with an approriate
+ engine.
This phase must be called after substitute_for_best_equal_field() because
that function may replace items with other items from a multiple equality,
@@ -7925,7 +7924,7 @@ bool TABLE_REF::tmp_table_index_lookup_i
use that information instead.
*/
cur_ref_buff + null_count,
- null_count ? key_buff : 0,
+ null_count ? cur_ref_buff : 0,
cur_key_part->length, items[i], value);
cur_ref_buff+= cur_key_part->store_length;
}
@@ -11408,10 +11407,30 @@ create_tmp_table(THD *thd,TMP_TABLE_PARA
{
if (thd->is_fatal_error)
goto err; // Got OOM
- continue; // Some kindf of const item
+ continue; // Some kind of const item
}
if (type == Item::SUM_FUNC_ITEM)
- ((Item_sum *) item)->result_field= new_field;
+ {
+ Item_sum *agg_item= (Item_sum *) item;
+ /*
+ Update the result field only if it has never been set, or if the
+ created temporary table is not to be used for subquery
+ materialization.
+
+ The reason is that for subqueries that require materialization as part
+ of their plan, we create the 'external' temporary table needed for IN
+ execution, after the 'internal' temporary table needed for grouping.
+ Since both the external and the internal temporary tables are created
+ for the same list of SELECT fields of the subquery, setting
+ 'result_field' for each invocation of create_tmp_table overrides the
+ previous value of 'result_field'.
+
+ The condition below prevents the creation of the external temp table
+ to override the 'result_field' that was set for the internal temp table.
+ */
+ if (!agg_item->result_field || !param->materialized_subquery)
+ agg_item->result_field= new_field;
+ }
tmp_from_field++;
reclength+=new_field->pack_length();
if (!(new_field->flags & NOT_NULL_FLAG))
@@ -19240,6 +19259,8 @@ bool JOIN::change_result(select_result *
{
DBUG_ENTER("JOIN::change_result");
result= res;
+ if (tmp_join)
+ tmp_join->result= res;
if (!procedure && (result->prepare(fields_list, select_lex->master_unit()) ||
result->prepare2()))
{
[Maria-developers] WL#126 New (by Monty): Sync also maria_control_file when doing a flush tables
by worklog-noreply@askmonty.org 16 Jul '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Sync also maria_control_file when doing a flush tables
CREATION DATE..: Fri, 16 Jul 2010, 09:45
SUPERVISOR.....:
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Maria-BackLog
TASK ID........: 126 (http://askmonty.org/worklog/?tid=126)
VERSION........: WorkLog-4.0
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 8 (hours remain)
ORIG. ESTIMATE.: 8
PROGRESS NOTES:
DESCRIPTION:
Sync also maria_control_file when doing a flush tables
This is to solve the following problem:
- Fill data in a maria table
- Flush tables
- kill mysqld (without shutdown)
Now when one runs maria_chk on the table, one can get a warning like:
maria_chk: error: Found row with transaction id 206985052 when max transaction
id according to maria_control_file is 206984765
which is a bit confusing.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)
16 Jul '10
Hi All,
Yes, PBXT uses a lot of file handles: generally 3 per table. It opens
all tables before recovery. Unfortunately, I have not worked on
optimizing this behavior.
The only reason for this is to resolve possible FK/PK
relationships, so I think this should be considered a PBXT bug, and
the problem should be fixed.
Anyway this will certainly be the problem in this case. As Kristian
says, select() is severely limited in that it only works with the low
numbered file handles.
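As an aside for readers unfamiliar with the limitation being discussed: the
following is a minimal standalone C sketch, not MariaDB code, showing why
select() stops being usable once a descriptor number reaches FD_SETSIZE
(usually 1024) while poll() keeps working. The /dev/null loop is just a way to
burn through low-numbered descriptors; you may need to raise ulimit -n first.
/* Minimal illustration only (POSIX assumed), unrelated to the server code. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <poll.h>
#include <sys/select.h>
int main(void)
{
  int fd = -1;
  /* Exhaust the low-numbered descriptors so the last one is >= FD_SETSIZE. */
  for (int i = 0; i <= FD_SETSIZE; i++)
  {
    if ((fd = open("/dev/null", O_RDONLY)) < 0)
    {
      perror("open");                 /* raise ulimit -n if this fails early */
      return 1;
    }
  }
  /* An fd >= FD_SETSIZE cannot legally be put into an fd_set: FD_SET() would
     write past the end of the bitmap (the buffer overflow in the bug report),
     so select() simply cannot watch such a connection. */
  printf("highest fd=%d, FD_SETSIZE=%d, usable with select(): %s\n",
         fd, FD_SETSIZE, fd < FD_SETSIZE ? "yes" : "no");
  /* poll() takes an array of descriptors and has no such compile-time limit. */
  struct pollfd pfd;
  pfd.fd = fd;
  pfd.events = POLLIN;
  printf("poll() on fd %d returned %d\n", fd, poll(&pfd, 1, 0));
  close(fd);
  return 0;
}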
To get the server up and running again you can just delete the tables
manually (the table files: .frm, .xtd, .xtr, .xti), when MySQL is not
running. PBXT will complain on startup, but should recover anyway.
Best regards,
Paul
On Jul 15, 2010, at 9:48 AM, Michael Widenius wrote:
>
> Hi!
>
>>>>>> "Time" == Time Less <timelessness(a)gmail.com> writes:
>
>>> I think it is very likely that you are hitting this bug:
>>>
>>> http://bugs.mysql.com/bug.php?id=48929
>>>
>
> Time> Ah, yes, could be. Though my strace was different than this
> bug shows.
>
>
>>> The problem is that MySQL/MariaDB is using select() to accept new
>>> connections. But select() has a hard-coded limit of 1024 on the
>>> max number
>>> of
>>> open files it can support. It seems PBXT uses an open file
>>> descriptor per
>>> table,
>
>
> Time> Not two per table? It has it looks like many log files, then
> also a data and
> Time> index file per table. On my system where I'm trying to use
> 1,000 tables, I
> Time> expect about 2,000+[mumble] file handles.
>
>>> Maybe MariaDB should backport the fix, it is actually a buffer
>>> overflow
>>> (though it is hard to see how it could be exploitable), but
>>> perhaps more
>>> relevant it is a rather nasty state for the server to get into,
>>> and not
>>> really
>>> clear how to get it out of it again :-(.
>>>
>
> Time> You can't get out of the state again. The server won't accept
> connections,
> Time> so you can't drop any tables. You just have to wipe the DB and
> start over.
> Time> If PBXT is resilient to its tables disappearing between server
> restarts, you
> Time> could rm files from the data directory.
>
> I didn't know that PBXT would open up all files at startup. If that's
> the case, we should switch to use poll in MariaDB 5.3 ASAP (and
> provide a patch for those that wants it for MariaDB 5.1).
>
> Paul, can you verify the above is the case for PBXT (ie, that PBXT
> uses one file descriptor per table and don't free these at all?)
>
> Regards,
> Monty
>
> _______________________________________________
> Mailing list: https://launchpad.net/~maria-developers
> Post to : maria-developers(a)lists.launchpad.net
> Unsubscribe : https://launchpad.net/~maria-developers
> More help : https://help.launchpad.net/ListHelp
[Maria-developers] Rev 2801: Fixed an error in the creation of REF access method for materialized in file:///home/tsk/mprog/src/5.3/
by timour@askmonty.org 15 Jul '10
At file:///home/tsk/mprog/src/5.3/
------------------------------------------------------------
revno: 2801
revision-id: timour(a)askmonty.org-20100715135910-y1gvcc3d63sod6xt
parent: sanja(a)askmonty.org-20100710103730-ayy6a61pdibspf4o
committer: timour(a)askmonty.org
branch nick: 5.3
timestamp: Thu 2010-07-15 16:59:10 +0300
message:
Fixed an error in the creation of REF access method for materialized
subquery execution, where the REF buffer format was mistaken to
be in record format instead of key format. The error was that the null
byte for all fields of the record was at the front of the buffer,
and not before each field's data.
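As background for the fix, the layout difference the message refers to can be
pictured with a small standalone C sketch. The layouts below are hypothetical
simplifications (two nullable INT key parts), meant only to illustrate the
general idea, not the server's actual ref-buffer code.
/* Illustration only: "record format" groups the null flags at the front,
   "key format" puts a null byte immediately before each key part's data. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>
int main(void)
{
  int32_t a = 42, b = 7;
  unsigned char record_fmt[16], key_fmt[16];
  /* Record format: one null-flag area for all fields at the front,
     followed by the field data back to back. */
  size_t r = 0;
  record_fmt[r++] = 0x00;                       /* null bits for a and b */
  memcpy(record_fmt + r, &a, sizeof(a)); r += sizeof(a);
  memcpy(record_fmt + r, &b, sizeof(b)); r += sizeof(b);
  /* Key format: each nullable key part is preceded by its own null byte. */
  size_t k = 0;
  key_fmt[k++] = 0x00;                          /* null byte for key part a */
  memcpy(key_fmt + k, &a, sizeof(a)); k += sizeof(a);
  key_fmt[k++] = 0x00;                          /* null byte for key part b */
  memcpy(key_fmt + k, &b, sizeof(b)); k += sizeof(b);
  printf("record format: %zu bytes, key format: %zu bytes\n", r, k);
  return 0;
}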
=== modified file 'sql/item_subselect.cc'
--- a/sql/item_subselect.cc 2010-07-10 10:37:30 +0000
+++ b/sql/item_subselect.cc 2010-07-15 13:59:10 +0000
@@ -3957,8 +3957,6 @@
Item_iterator_row it(item_in->left_expr);
/* The only index on the temporary table. */
KEY *tmp_key= tmp_table->key_info;
- /* Number of keyparts in tmp_key. */
- uint tmp_key_parts= tmp_key->key_parts;
JOIN_TAB *tab;
DBUG_ENTER("subselect_hash_sj_engine::make_unique_engine");
=== modified file 'sql/sql_select.cc'
--- a/sql/sql_select.cc 2010-07-10 10:37:30 +0000
+++ b/sql/sql_select.cc 2010-07-15 13:59:10 +0000
@@ -7925,7 +7925,7 @@
use that information instead.
*/
cur_ref_buff + null_count,
- null_count ? key_buff : 0,
+ null_count ? cur_ref_buff : 0,
cur_key_part->length, items[i], value);
cur_ref_buff+= cur_key_part->store_length;
}
15 Jul '10
Hi!
>>>>> "Time" == Time Less <timelessness(a)gmail.com> writes:
>> I think it is very likely that you are hitting this bug:
>>
>> http://bugs.mysql.com/bug.php?id=48929
>>
Time> Ah, yes, could be. Though my strace was different than this bug shows.
>> The problem is that MySQL/MariaDB is using select() to accept new
>> connections. But select() has a hard-coded limit of 1024 on the max number
>> of
>> open files it can support. It seems PBXT uses an open file descriptor per
>> table,
Time> Not two per table? It has it looks like many log files, then also a data and
Time> index file per table. On my system where I'm trying to use 1,000 tables, I
Time> expect about 2,000+[mumble] file handles.
>> Maybe MariaDB should backport the fix, it is actually a buffer overflow
>> (though it is hard to see how it could be exploitable), but perhaps more
>> relevant it is a rather nasty state for the server to get into, and not
>> really
>> clear how to get it out of it again :-(.
>>
Time> You can't get out of the state again. The server won't accept connections,
Time> so you can't drop any tables. You just have to wipe the DB and start over.
Time> If PBXT is resilient to its tables disappearing between server restarts, you
Time> could rm files from the data directory.
I didn't know that PBXT would open up all files at startup. If that's
the case, we should switch to use poll in MariaDB 5.3 ASAP (and
provide a patch for those who want it for MariaDB 5.1).
Paul, can you verify the above is the case for PBXT (i.e., that PBXT
uses one file descriptor per table and doesn't free them at all)?
Regards,
Monty
14 Jul '10
Time Less <timelessness(a)gmail.com> writes:
>> The problem is that MySQL/MariaDB is using select() to accept new
>> connections. But select() has a hard-coded limit of 1024 on the max number
>> of
>> open files it can support. It seems PBXT uses an open file descriptor per
>> table,
>
>
> Not two per table? It has it looks like many log files, then also a data and
> index file per table. On my system where I'm trying to use 1,000 tables, I
> expect about 2,000+[mumble] file handles.
Sure, could be; I don't know the details, just that it seems to open >1000
files simultaneously.
> You can't get out of the state again. The server won't accept connections,
> so you can't drop any tables. You just have to wipe the DB and start over.
> If PBXT is resilient to its tables disappearing between server restarts, you
> could rm files from the data directory.
One way is to use the --bootstrap parameter to mysqld (after stopping the
server).
This allows you to start the server, run a set of commands, and shut down, without
needing to connect. Something like
(echo "DROP TABLE t1;" ; echo "DROP TABLE t2;"; ...) | mysqld --defaults-file=/etc/my.cnf --bootstrap
(I actually tried this).
One way or the other, it's a nasty bug.
- Kristian.
14 Jul '10
Hi!
I am resending this to maria-developers, as this is a more appropriate
address for PBXT issues.
>>>>> "Time" == Time Less <timelessness(a)gmail.com> writes:
Time> Just thought I'd try out PBXT. I created a database instance with 10
Time> databases, inside each 100 tables. They were MyISAM tables. Then I did an
Time> "alter table <name> engine=pbxt" across all of them. That worked fine. Then
Time> I restarted MySQL, and now it won't come up. Error logs are filled with
Time> thousands of:
Time> # tail /var/log/mysql.err
Time> 100713 12:33:33 [ERROR] Error in accept: Bad file descriptor
Time> 100713 12:33:33 [ERROR] Error in accept: Bad file descriptor
Time> 100713 12:33:33 [ERROR] Error in accept: Bad file descriptor
Time> To the tune of tens (or hundreds?) per second. I've tried restarting mysqld
Time> with *ulimit -n 32768* and with my.cnf setting *open_files_limit = 32768* to
Time> no effect. Does anyone have experience with this? Extensive Google searches
Time> turn up nothing except esoteric OpenBSD problems or such.
Time> MariaDB is MariaDB-server-5.1.44-75.el5.
Sorry, but this is an issue that I haven't heard about before.
However, I am sure that the PBXT team will have some suggestions for
you on how to solve this.
Regards,
Monty
Re: [Maria-developers] [Fwd: [Commits] bzr commit into Mariadb 5.2, with Maria 2.0:maria/5.2 branch (igor:2822) Bug#603654]
by Sergey Petrunya 13 Jul '10
Hello Igor,
Ok to push.
On Mon, Jul 12, 2010 at 07:08:58PM -0700, Igor Babaev wrote:
> Please review this patch for the 5.2 tree.
>
> Regards,
> Igor.
>
>
> -------- Original Message --------
> Subject: [Commits] bzr commit into Mariadb 5.2, with Maria 2.0:maria/5.2
> branch (igor:2822) Bug#603654
> Date: Mon, 12 Jul 2010 19:05:37 -0700 (PDT)
> From: Igor Babaev <igor(a)askmonty.org>
> Reply-To: maria-developers(a)lists.launchpad.net
> To: commits(a)mariadb.org
>
> #At lp:maria/5.2 based on
> revid:igor@askmonty.org-20100713012307-rnom77fx57ef900o
>
> 2822 Igor Babaev 2010-07-12
> Fixed bug #603654.
> If a virtual column was used in the ORDER BY clause of a query
> and some of the columns this virtual column was based upon were
> not referred anywhere in the query then the execution of the
> query could cause an assertion failure.
> It happened because in this case the bitmap of the columns used
> for ordering keys was not formed correctly.
> modified:
> mysql-test/suite/vcol/r/vcol_misc.result
> mysql-test/suite/vcol/t/vcol_misc.test
> sql/filesort.cc
>
> === modified file 'mysql-test/suite/vcol/r/vcol_misc.result'
> --- a/mysql-test/suite/vcol/r/vcol_misc.result 2010-07-13 01:23:07 +0000
> +++ b/mysql-test/suite/vcol/r/vcol_misc.result 2010-07-13 02:05:28 +0000
> @@ -35,3 +35,13 @@ a int NOT NULL DEFAULT '0',
> v double AS ((1, a)) VIRTUAL
> );
> ERROR HY000: Expression for computed column cannot return a row
> +CREATE TABLE t1 (
> +a CHAR(255) BINARY NOT NULL DEFAULT 0,
> +b CHAR(255) BINARY NOT NULL DEFAULT 0,
> +v CHAR(255) BINARY AS (CONCAT(a,b)) VIRTUAL );
> +INSERT INTO t1(a,b) VALUES ('4','7'), ('4','6');
> +SELECT 1 AS C FROM t1 ORDER BY v;
> +C
> +1
> +1
> +DROP TABLE t1;
>
> === modified file 'mysql-test/suite/vcol/t/vcol_misc.test'
> --- a/mysql-test/suite/vcol/t/vcol_misc.test 2010-07-13 01:23:07 +0000
> +++ b/mysql-test/suite/vcol/t/vcol_misc.test 2010-07-13 02:05:28 +0000
> @@ -30,6 +30,19 @@ CREATE TABLE t1 (
> v double AS ((1, a)) VIRTUAL
> );
>
> +#
> +# Bug#603654: Virtual column in ORDER BY, no other references of table
> columns
> +#
> +
> +CREATE TABLE t1 (
> + a CHAR(255) BINARY NOT NULL DEFAULT 0,
> + b CHAR(255) BINARY NOT NULL DEFAULT 0,
> + v CHAR(255) BINARY AS (CONCAT(a,b)) VIRTUAL );
> +INSERT INTO t1(a,b) VALUES ('4','7'), ('4','6');
> +SELECT 1 AS C FROM t1 ORDER BY v;
> +
> +DROP TABLE t1;
> +
>
>
>
>
> === modified file 'sql/filesort.cc'
> --- a/sql/filesort.cc 2010-06-01 19:52:20 +0000
> +++ b/sql/filesort.cc 2010-07-13 02:05:28 +0000
> @@ -1009,7 +1009,14 @@ static void register_used_fields(SORTPAR
> if ((field= sort_field->field))
> {
> if (field->table == table)
> - bitmap_set_bit(bitmap, field->field_index);
> + {
> + if (field->vcol_info)
> + {
> + Item *vcol_item= field->vcol_info->expr_item;
> + vcol_item->walk(&Item::register_field_in_read_map, 1, (uchar
> *) 0);
> + }
> + bitmap_set_bit(bitmap, field->field_index);
> + }
> }
> else
> { // Item
>
> _______________________________________________
> commits mailing list
> commits(a)mariadb.org
> https://lists.askmonty.org/cgi-bin/mailman/listinfo/commits
--
BR
Sergey
--
Sergey Petrunia, Software Developer
Monty Program AB, http://askmonty.org
Blog: http://s.petrunia.net/blog
Hi people,
Our cmake scripts weren't able to build outside the source dir. This is
bad for 3 reasons:
- it's supposed to work
- cmake recommends building outside the src dir
- on Windows, cmake can't build 32 and 64 bit in the same solution, so
patches have to be manually copied to test compilation on the other architecture
The attached patch fixes all the issues. I ran a find on the source dir
before and after the build, and they were identical. I didn't
actually check that no existing files were modified, but bzr di claims
this isn't the case.
I didn't port the zip creation script, but that shouldn't be a problem.
src != blddir is more a development thing, and the buildbot slave
producing the zip file will just continue to build in the same dir. The
cpack installer builds just fine.
OK to push to 5.1?
Should I send this patch to Oracle as well to minimize the drift
between the cmake files?
Bo Thorsen.
Monty Program AB.
--
MariaDB: MySQL replacement
Community developed. Feature enhanced. Backward compatible.
Hi,
I have pushed SphinxSE into MariaDB 5.2. It should be available in the
upcoming MariaDB 5.2.2, which is currently planned within a few weeks.
For now, only the code for the storage engine is pushed, not for the
mysql-test-run.pl stuff. Sergei Golubchik wanted to modify mysql-test-run.pl
first so that engine-specific code can be added in a more modular fashion (he
will do it when he returns from vacation).
For now, the engine is by default built as a loadable .so plugin. This is the
general policy for MariaDB for new plugins. I think the idea is that engines
that prove to be stable and well supported by their maintainer can get
promoted to build statically by default (we are still fleshing out the new
policy). Note that the engine will still be available by default, the user
will just need to do a one-time INSTALL PLUGIN command. Note also that
SphinxSE can still be built statically with ./configure --with-plugin-sphinx,
only the default is dynamic.
Daniel, you should probably look into documenting the SphinxSE for MariaDB
5.2.2. Andrew kindly gave us permission to use what we need from the Sphinx
documentation:
http://sphinxsearch.com/docs/current.html#sphinxse
Andrew suggested including also a link to the Sphinx webpage in the MariaDB
documentation for SphinxSE, which is of course a very good idea.
Also you should document how to do the INSTALL PLUGIN command (it's similar to
what's done for OQGraph). Note that the sphinx plugin will be included in e.g. deb packages
and so on and installed by default, just the INSTALL PLUGIN command is needed
to enable it.
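For reference, enabling the dynamically built engine should look roughly like
this (a sketch only; the exact plugin and library names depend on how the
packaging ends up, so treat 'sphinx' and 'ha_sphinx.so' as assumptions):

  -- one-time activation of the loadable engine
  INSTALL PLUGIN sphinx SONAME 'ha_sphinx.so';
  -- verify that it shows up
  SHOW PLUGINS;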
Andrew, thanks for all your work with this! It is very good to have SphinxSE
included, there have been several requests for this.
We (mainly Sergei) made some changes to the code during review. I attach a
patch for all the changes since your original import, in case you are
interested. Note that I re-added the code when I merged to 5.2 to get a clean
history, so future work on SphinxSE in MariaDB should be based on
lp:maria/5.2, not our old sphinx bzr trees.
- Kristian.

08 Jul '10
The attached patch reduces the warnings in a 32 bit build with Visual
Studio 2008 to a set in the flex/bison generated code. I'll handle those
later.
I'll add comments on some of the changes here.
The libmysqld.def file: The description line isn't supported in the more
recent compilers.
In fsp0fsp.c, there is an explicit cast to ullint. In this case, the
code does what's intended, and the compiler warning is one of the "are
you sure this is right" warnings. C4334 gives a warning if you make a
1u << 10 and store that in a 64 bit variable, because you could have
meant 1i64 << 10.
The change in i_s.cc is the one I'm most worried about. It used to store
the unsigned long long in a double. The change I did could be wrong. But
even if it's not, I'm worried about what happens when this runs on an existing
set of tables.
We had a very long discussion about the cast in row0sel.c. The
conclusion was that
* auto_increment on doubles or floats is a very odd case
* there is a compiler bug in Visual Studio for very large values (above
around 2^53 it converts doubles to uint64 incorrectly)
I'd like to separate this discussion from this patch, though, and submit
a bug report on the innodb code on mysql.
I assume there will be something here I should change, but if parts of
the patch are ready for pushing, let me know that.
Bo Thorsen.
Monty Program AB.
--
MariaDB: MySQL replacement
Community developed. Feature enhanced. Backward compatible.
Hi Philip, all
Philip's bug on recovery from a kill -9 during DML reminded me... from
my experience, aborting a DDL query (ALTER TABLE - things like adding
an index usually) can be tricky and more often than not causes hassles.
By abort in this case I mean killing the connection/thread that does
the DDL.
It would be great to somehow have a few test cases in the suite and it
needs to be tested with all storage engines.
Provided the table it operates on is big enough, it should be possible
to just zap it by timing.
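A minimal sketch of what such a test case would do (table, index, and thread id
are made up for illustration):

  -- connection 1: start a long-running DDL on a big enough table
  ALTER TABLE big_t ADD INDEX idx_a (a);

  -- connection 2: while the ALTER is still running, find and kill it
  SHOW PROCESSLIST;       -- note the id of the ALTER TABLE thread, say 123
  KILL 123;

  -- afterwards: check the table is still consistent
  CHECK TABLE big_t;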
Thoughts?
Cheers,
Arjen.

07 Jul '10
Danny,
this is the log from a short discussion about how to provide the
documentation for Sphinx to our users. Can you think of some
easy way to automate at least the notification process that
the docs changed?
<timour> knielsen, shodan IMHO it wuold be best if Sphinx could notify us automatically about changes in the docs, rather than having a possibly outdated documentation, with a link to the up-to-date docs.
<timour> If we take this manual approach to copy/paste docs for a dozen storage engines, we will end up in a big mess.
<shodan> timour: would http://code.google.com/feeds/p/sphinxsearch/svnchanges/basic do for notifications?
<timour> shodan, isn't that a feed with all changes?
<shodan> timour: yes, and so doc/ changes are there too
<timour> knielsen, perhaps we could pull the docs automatically?
<shodan> timour: maybe we could setup a special hook on commits to just doc/ but def not to a specific section
<timour> shodan, I think we have to ask our docs guy what is the best way to at least be automatically notified of doc changes (for the same release).
<timour> To me there are 2 kinds of doc changes:
<timour> - improvements of the docs for the same version of the code,
<timour> - changes due to a new release.
<timour> The latter will be straightforward, because we will know when we release a new version of Sphinx, but not the former.
What do you think about the problem with outdated docs of storage engines?
Timour
Hi all,
The attached patch adds a -64 keyword to the script
make_mariadb_win_dist. With this patch, the script produces both 32 bit
zip files (without any arguments) and 64 bit zip files (with -64). I
also expanded it so it's possible to run it with "-nobuild -64", and added a
help text.
Kristian, when this patch is pushed, you can add a new buildbot slave
based on the one that builds the 32 bit zip file and installer. Adding
the -64 argument on the call to this script is the only thing necessary
for buildbot to produce 64 bit binaries. The result files are named
-win64 instead of -win32.
Ok to push this to 5.1?
Bo Thorsen.
Monty Program AB.
--
MariaDB: MySQL replacement
Community developed. Feature enhanced. Backward compatible.

05 Jul '10
The three companies Continuent, Codership, and Monty Program are planning to
start working on some enhancements to the replication system in MariaDB,
together with anyone interested in joining in.
At this stage, there are no fixed directions for the project, and to do this
in as open a way as possible with the maximum community involvement and interest,
we agreed to start with an email discussion on the maria-developers@ mailing
list. So consider it started!
The plan so far is:
1) The parties taking this initiative, MP, Continuent, and Codership, present
their own ideas in this thread on maria-developers@ (and everyone else who
wants to chime in at this stage).
2) Once we have some concrete suggestions as a starting point, we use this to
reach out in a broader way with some blog posts on planetmysql / planetmariadb
to encourage further input and discussions for possible directions of the
project. Eventually we want to end up with a list of the most important goals
and a possible roadmap for replication enhancements.
(It is best to have something concrete as a basis of a broad community
discussion/process).
To start off, here are some points of interest that I collected. Everyone
please chime in with your own additional points, as well as comments and
further details on these.
Three areas in particular seem to be of high interest in the community
currently (not excluding any other areas):
- High Availability
* Most seems to focus on building HA solutions on top of MySQL
replication, eg. MMM and Tungsten.
* For this project, seems mainly to be to implement improvements to
replication that help facilitate improving these on-top HA solutions.
* Tools to automate (or help automate) failover from a master.
* Better facilities to do initial setup of new slave without downtime, or
re-sync of an old master or slave that has been outside of the
replication topology for some period of time.
- Performance, especially scalability
* Multi-threaded slave SQL thread.
* Store the binlog inside a transactional engine (eg. InnoDB) to reduce
I/O, solve problems like group commit, and simplify crash recovery.
- More pluggable replication
* Make the replication code and APIs be more suitable for people to build
extra functionality into or on top of the stock MySQL replication.
* Better documentation of C++ APIs and binlog format.
* Adding extra information to binlog that can be useful for non-standard
replication stuff. For example column names (for RBR), checksums.
* Refactoring the server code to be more modular with APIs more suitable
for external usage.
* Add support for replication plugins, types to be determined. For example
binlog filtering plugins?
It is also very important to consider the work that the replication team at
MySQL is doing (and has done). I found a good deal of interesting information
about this here:
http://forge.mysql.com/wiki/MySQL_Replication:_Walk-through_of_the_new_5.1_…)
This describes a number of 6.0/5.4 and preview features that we could merge
and/or contribute to. Here are the highlights that I found:
- Features included in 6.0/5.4 (which are cancelled I think, but presumably
this will go in a milestone release):
* CHANGE MASTER ... IGNORE_SERVER_IDS for better support of circular
replication.
* Replication heartbeat.
* sync_relay_log_info, sync_master_info, sync_relay_log, relay_log_recovery
for crash recovery on slave.
* Binlog Performance Optimization (lock contention improvement).
* Semi-synchronous Replication, with Pluggable Replication Architecture.
http://forge.mysql.com/wiki/ReplicationFeatures/SemiSyncReplication
- Feature previews:
* Parallel slave application: WL#4648
http://forge.mysql.com/wiki/ReplicationFeatures/ParallelSlave
* Time-delayed replication: WL#344
http://forge.mysql.com/wiki/ReplicationFeatures/DelayedReplication
* Scriptable Replication: WL#4008
http://forge.mysql.com/wiki/ReplicationFeatures/ScriptableReplication
* Synchronous Replication.
Drizzle is also doing work on a new replication system. I read through the
series of blog posts that Jay Pipes wrote on this subject. They mostly deal
with how this is designed in terms of the Drizzle server code, and are low on
detail about how the replication will actually work (the only thing I could
really extract was that it is a form of row-based replication). If someone has
links to information about this that I missed, it could be interesting.
Let the discussion begin!
- Kristian.

[Maria-developers] MWL#123, sql layer part (was: Re: mwl#121: follow up)
by Sergey Petrunya 05 Jul '10
by Sergey Petrunya 05 Jul '10
05 Jul '10
Hello Igor,
On Thu, Jul 01, 2010 at 11:55:58PM -0700, Igor Babaev wrote:
> According to our agreement I introduced a new flag for MRR.
> I called it HA_MRR_MATERIALIZED_KEYS. This flag passed to any MMR
> interface function that takes mrr_mode as a parameter says:
> key values used in ranges are materialized in some buffers external to MRR.
>
This patch only gives DS-MRR the information that the keys are materialized.
However, DS-MRR needs to work on (key, range_id) pairs. The patch doesn't
allow associating a key with its range_id (or vice versa), so DS-MRR will have to
keep (key_pointer, range_id) tuples.
If we consider a 64-bit environment, then sizeof(key_value_pointer) ==
sizeof(key_value), and this patch won't bring any benefit at all.
I was expecting that the patch would provide some means to associate
key_value_pointer with range_id, so that DS-MRR won't need to store both.
Can we discuss this at the scrum meeting or on Monday evening?
> === modified file 'sql/handler.h'
> --- sql/handler.h 2010-03-20 12:01:47 +0000
> +++ sql/handler.h 2010-07-01 20:00:35 +0000
> @@ -1212,6 +1212,12 @@ void get_sweep_read_cost(TABLE *table, h
> */
> #define HA_MRR_NO_NULL_ENDPOINTS 128
>
> +/*
> + The MRR user has materialized range keys somewhere in the user's buffer.
> + This can be used for optimization of the procedure that sorts these keys
> + since in this case key values don't have to be copied into the MRR buffer.
> +*/
> +#define HA_MRR_MATERIALIZED_KEYS 256
>
>
> /*
>
> === modified file 'sql/sql_join_cache.cc'
> --- sql/sql_join_cache.cc 2010-03-07 15:41:45 +0000
> +++ sql/sql_join_cache.cc 2010-07-01 19:59:55 +0000
> @@ -651,6 +651,9 @@ int JOIN_CACHE_BKA::init()
>
> use_emb_key= check_emb_key_usage();
>
> + if (use_emb_key)
> + mrr_mode|= HA_MRR_MATERIALIZED_KEYS;
> +
> create_remaining_fields(FALSE);
>
> set_constants();
> @@ -2617,6 +2620,8 @@ int JOIN_CACHE_BKA_UNIQUE::init()
> data_fields_offset+= copy->length;
> }
>
> + mrr_mode|= HA_MRR_MATERIALIZED_KEYS;
> +
> DBUG_RETURN(rc);
> }
>
>
BR
Sergey
--
Sergey Petrunia, Software Developer
Monty Program AB, http://askmonty.org
Blog: http://s.petrunia.net/blog

[Maria-developers] [5.3 merge] Item_in_subselect::init_left_expr_cache() question.
by Sergey Petrunya 03 Jul '10
by Sergey Petrunya 03 Jul '10
03 Jul '10
Hi Timour,
I'm having difficulties with finishing the 5.2->5.3 merge. Could you please take a look at
https://bugs.launchpad.net/maria/+bug/598972 ? The questions are in the bug
entry.
BR
Sergey
--
Sergey Petrunia, Software Developer
Monty Program AB, http://askmonty.org
Blog: http://s.petrunia.net/blog

[Maria-developers] DS-MRR: extra work filed as new WL entries, questions
by Sergey Petrunya 03 Jul '10
by Sergey Petrunya 03 Jul '10
03 Jul '10
Hello Igor,
Based on our discussions, I've filed
* http://askmonty.org/worklog/Server-RawIdeaBin/index.pl?tid=123
"DS-MRR for clustered PKs: more efficient buffer use"
* http://askmonty.org/worklog/Server-RawIdeaBin/index.pl?tid=124
"DS-MRR for clustered PKs: cost function"
* http://askmonty.org/worklog/Client-BackLog/index.pl?tid=125
"Make DS-MRR sort the ranges before scanning the index"
Could you please check if that correctly describes the conclusions we've
arrived at? Also, WL texts contain several unresolved questions, marked with
"TODO".
BR
Sergey
--
Sergey Petrunia, Software Developer
Monty Program AB, http://askmonty.org
Blog: http://s.petrunia.net/blog

[Maria-developers] WL#122 New (by Sergei): fix locking in XA tc_log
by worklog-noreply@askmonty.org 30 Jun '10
by worklog-noreply@askmonty.org 30 Jun '10
30 Jun '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: fix locking in XA tc_log
CREATION DATE..: Wed, 30 Jun 2010, 17:26
SUPERVISOR.....: Sergei
IMPLEMENTOR....: Sergei
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 122 (http://askmonty.org/worklog/?tid=122)
VERSION........: Server-5.1
STATUS.........: Assigned
PRIORITY.......: 90
WORKED HOURS...: 0
ESTIMATE.......: 8 (hours remain)
ORIG. ESTIMATE.: 8
PROGRESS NOTES:
DESCRIPTION:
We need to analyze and fix the mutex lock order in the TC_LOG code of XA.
This is needed to fix https://bugs.launchpad.net/maria/+bug/578117
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)

[Maria-developers] WL#99 Updated (by Sergei): dummy test task
by worklog-noreply@askmonty.org 30 Jun '10
by worklog-noreply@askmonty.org 30 Jun '10
30 Jun '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: dummy test task
CREATION DATE..: Thu, 04 Mar 2010, 07:18
SUPERVISOR.....: Knielsen
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Client-BackLog
TASK ID........: 99 (http://askmonty.org/worklog/?tid=99)
VERSION........: Server-9.x
STATUS.........: Cancelled
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Sergei - Wed, 30 Jun 2010, 16:27)=-=-
Version updated.
--- /tmp/wklog.99.old.30336 2010-06-30 16:27:06.000000000 +0000
+++ /tmp/wklog.99.new.30336 2010-06-30 16:27:06.000000000 +0000
@@ -1 +1 @@
-Benchmarks-3.0
+Server-9.x
-=-=(Sergei - Wed, 30 Jun 2010, 16:27)=-=-
Status updated.
--- /tmp/wklog.99.old.30336 2010-06-30 16:27:06.000000000 +0000
+++ /tmp/wklog.99.new.30336 2010-06-30 16:27:06.000000000 +0000
@@ -1 +1 @@
-Un-Assigned
+Cancelled
DESCRIPTION:
test missing estimate
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)

30 Jun '10
Hi everyone,
This patch adds a page to the Windows installer that asks the user if he
wants to set up MariaDB as a Windows service.
The way to do this is to hack the NSIS template for CPack. IMHO, this is
a bad way, but it is the recommended (and only) way to do stuff like this.
The patch in the attached file install-service.patch is the only thing
necessary for the current sources. It's a one liner. So far so good :)
The actual patch adds a file win\cmake\NSIS.template.in which is a copy
of the NSIS.template.in from CMake. The attached patch
NSIS.template.in.patch shows the diff with the code I have added.
Bo Thorsen.
Monty Program AB.
--
MariaDB: MySQL replacement
Community developed. Feature enhanced. Backward compatible.

[Maria-developers] WL#85 Updated (by Sergei): Partitioned Key Cache for MyISAM
by worklog-noreply@askmonty.org 29 Jun '10
by worklog-noreply@askmonty.org 29 Jun '10
29 Jun '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Partitioned Key Cache for MyISAM
CREATION DATE..: Sun, 14 Feb 2010, 00:10
SUPERVISOR.....: Monty
IMPLEMENTOR....:
COPIES TO......: Igor, Monty, Sergei
CATEGORY.......: Server-Sprint
TASK ID........: 85 (http://askmonty.org/worklog/?tid=85)
VERSION........: Server-5.2
STATUS.........: Complete
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 80 (hours remain)
ORIG. ESTIMATE.: 80
PROGRESS NOTES:
-=-=(Sergei - Tue, 29 Jun 2010, 14:05)=-=-
Status updated.
--- /tmp/wklog.85.old.32131 2010-06-29 14:05:44.000000000 +0000
+++ /tmp/wklog.85.new.32131 2010-06-29 14:05:44.000000000 +0000
@@ -1 +1 @@
-Assigned
+Complete
-=-=(Igor - Tue, 16 Mar 2010, 19:34)=-=-
High Level Description modified.
--- /tmp/wklog.85.old.22371 2010-03-16 19:34:33.000000000 +0000
+++ /tmp/wklog.85.new.22371 2010-03-16 19:34:33.000000000 +0000
@@ -15,4 +15,5 @@
the chances for threads not compete for the same key cache lock better.
The idea and the original of the partitioned key cache was provided by one of
-our external contributers.
+our external contributers (see the attached file segmented_keycache_v2.diff with
+the original patch from the contributor).
-=-=(Igor - Sun, 14 Feb 2010, 00:15)=-=-
Category updated.
--- /tmp/wklog.85.old.9810 2010-02-13 22:15:43.000000000 +0000
+++ /tmp/wklog.85.new.9810 2010-02-13 22:15:43.000000000 +0000
@@ -1 +1 @@
-Server-BackLog
+Server-Sprint
-=-=(Igor - Sun, 14 Feb 2010, 00:15)=-=-
Version updated.
--- /tmp/wklog.85.old.9810 2010-02-13 22:15:43.000000000 +0000
+++ /tmp/wklog.85.new.9810 2010-02-13 22:15:43.000000000 +0000
@@ -1 +1 @@
-Benchmarks-3.0
+Server-5.2
-=-=(Igor - Sun, 14 Feb 2010, 00:12)=-=-
New attachment: 'segmented_keycache_v2.diff'
DESCRIPTION:
A partitioned key cache is a collection of structures for regular MyISAM key
caches called key cache partitions. Any page from a file can be placed into a
buffer of only one partition. The number of the partition is calculated from the
file number and the position of the page in the file, and it's always the same
for the page. The function that maps pages into partitions takes care of even
distribution of pages among partitions.
A partitioned key cache mitigates one of the major problems of the simple key
cache: thread contention for the key cache lock (mutex). Every call of a key
cache interface function must acquire this lock. So threads compete for this
lock even when they have acquired shared locks for the file and the pages they
want to read are already in the key cache buffers. When working with a
partitioned key cache, any key cache interface function that needs only one page
has to acquire the key cache lock only for the partition the page is ascribed
to. This reduces the chances of threads competing for the same key cache lock.
The idea and the original patch for the partitioned key cache were provided by
one of our external contributors (see the attached file
segmented_keycache_v2.diff with the original patch from the contributor).
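As a rough illustration of how this is expected to surface in SQL (a sketch
only; the key_cache_segments variable name follows the contributed segmented
key cache patch and should be treated as an assumption until the interface is
finalized):

  -- create a dedicated key cache split into 16 partitions and assign an index to it
  SET GLOBAL hot_cache.key_buffer_size = 128 * 1024 * 1024;
  SET GLOBAL hot_cache.key_cache_segments = 16;  -- assumed variable name
  CACHE INDEX t1 IN hot_cache;
  LOAD INDEX INTO CACHE t1;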
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)

[Maria-developers] WL#112 Updated (by Sergei): Merge OQGraph into MariaDB
by worklog-noreply@askmonty.org 29 Jun '10
by worklog-noreply@askmonty.org 29 Jun '10
29 Jun '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Merge OQGraph into MariaDB
CREATION DATE..: Mon, 29 Mar 2010, 18:00
SUPERVISOR.....: Knielsen
IMPLEMENTOR....: Knielsen
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 112 (http://askmonty.org/worklog/?tid=112)
VERSION........: Server-5.2
STATUS.........: Complete
PRIORITY.......: 60
WORKED HOURS...: 13
ESTIMATE.......: 2 (hours remain)
ORIG. ESTIMATE.: 15
PROGRESS NOTES:
-=-=(Sergei - Tue, 29 Jun 2010, 14:04)=-=-
Status updated.
--- /tmp/wklog.112.old.32115 2010-06-29 14:04:40.000000000 +0000
+++ /tmp/wklog.112.new.32115 2010-06-29 14:04:40.000000000 +0000
@@ -1 +1 @@
-Code-Review
+Complete
-=-=(Knielsen - Tue, 06 Apr 2010, 15:28)=-=-
Fixed all issues from first code review.
Implement packaging for OQGraph in bakery.
Set up buildbot hosts for including OQGraph, including binary packaging.
-=-=(Knielsen - Wed, 31 Mar 2010, 13:38)=-=-
Status updated.
--- /tmp/wklog.112.old.12166 2010-03-31 13:38:25.000000000 +0000
+++ /tmp/wklog.112.new.12166 2010-03-31 13:38:25.000000000 +0000
@@ -1 +1 @@
-Assigned
+Code-Review
-=-=(Knielsen - Wed, 31 Mar 2010, 13:38)=-=-
High-Level Specification modified.
--- /tmp/wklog.112.old.12070 2010-03-31 13:38:08.000000000 +0000
+++ /tmp/wklog.112.new.12070 2010-03-31 13:38:08.000000000 +0000
@@ -15,3 +15,5 @@
Fix OQGraph plug.in to detect boost version >= 1.40.0, and only enable OQGraph
if such boost is found.
+Update the packaging in ourdelta/bakery to include the oqgraph_engine.so and
+link with g++ rather than gcc.
-=-=(Knielsen - Mon, 29 Mar 2010, 21:46)=-=-
High-Level Specification modified.
--- /tmp/wklog.112.old.31142 2010-03-29 21:46:11.000000000 +0000
+++ /tmp/wklog.112.new.31142 2010-03-29 21:46:11.000000000 +0000
@@ -1,28 +1,17 @@
Tasks:
-Find the latest version of OQGraph to base this on (there should be a
-Launchpad branch somewhere, match it up with what is in the OQGraph patch for
-MySQL 5.0 in the ourdelta stuff).
-
-Extract the correct version of Boost from the MySQL 5.0 ourdelta patch. This
-is a patched version of Boost fixing a bug that is supposedly fatal for
-OQGraph (details are not known at the time of writing).
+Base work on the Launchpad branch lp:~knielsen/maria/mariadb-5.1-oqgraph
-Document in OQGraph README the need for boost of a specific version, and point
-to where it can be obtained. Also include the patch for boost if the correct
-base version of boost to do this against can be determined.
+OQGraph requires Boost >= 1.40.0 (earlier versions have a bug that affects
+OQGraph).
-Install the patched boost in /usr/local/ on the build machines (release builds
-and selected Buildbot slaves).
+Document in OQGraph README the need for boost of a specific version, and point
+to where it can be obtained.
-Fix OQGraph plug.in to detect correct version of OQGraph that makes the build
-not break. Check which version in Ubuntu starts working (I think it was
-Jaunty), and require at least that version.
-
-Setup some repository or source tarball of the patched boost
-somewhere. Preferably a Launchpad branch or similar (if upstream project can
-be found).
+Install the patched boost in /usr/local/include/boost on the build machines
+(release builds and selected Buildbot slaves). G++ seems to by default look in
+/usr/local/include, so that is sufficient to find it.
-Setup in plug.in or /configure.in appropriate --with-boost=xxx. Or in a pinch,
-we can make do with CFLAGS=-Ixxx, or even default look in /usr/local/.
+Fix OQGraph plug.in to detect boost version >= 1.40.0, and only enable OQGraph
+if such boost is found.
-=-=(Knielsen - Mon, 29 Mar 2010, 18:09)=-=-
High-Level Specification modified.
--- /tmp/wklog.112.old.23061 2010-03-29 18:09:27.000000000 +0000
+++ /tmp/wklog.112.new.23061 2010-03-29 18:09:27.000000000 +0000
@@ -1 +1,28 @@
+Tasks:
+
+Find the latest version of OQGraph to base this on (there should be a
+Launchpad branch somewhere, match it up with what is in the OQGraph patch for
+MySQL 5.0 in the ourdelta stuff).
+
+Extract the correct version of Boost from the MySQL 5.0 ourdelta patch. This
+is a patched version of Boost fixing a bug that is supposedly fatal for
+OQGraph (details are not known at the time of writing).
+
+Document in OQGraph README the need for boost of a specific version, and point
+to where it can be obtained. Also include the patch for boost if the correct
+base version of boost to do this against can be determined.
+
+Install the patched boost in /usr/local/ on the build machines (release builds
+and selected Buildbot slaves).
+
+Fix OQGraph plug.in to detect correct version of OQGraph that makes the build
+not break. Check which version in Ubuntu starts working (I think it was
+Jaunty), and require at least that version.
+
+Setup some repository or source tarball of the patched boost
+somewhere. Preferably a Launchpad branch or similar (if upstream project can
+be found).
+
+Setup in plug.in or /configure.in appropriate --with-boost=xxx. Or in a pinch,
+we can make do with CFLAGS=-Ixxx, or even default look in /usr/local/.
DESCRIPTION:
Get the OQGraph storage engine merged into MariaDB, fixing the remaining
problems blocking the merge.
HIGH-LEVEL SPECIFICATION:
Tasks:
Base work on the Launchpad branch lp:~knielsen/maria/mariadb-5.1-oqgraph
OQGraph requires Boost >= 1.40.0 (earlier versions have a bug that affects
OQGraph).
Document in OQGraph README the need for boost of a specific version, and point
to where it can be obtained.
Install the patched boost in /usr/local/include/boost on the build machines
(release builds and selected Buildbot slaves). G++ seems to by default look in
/usr/local/include, so that is sufficient to find it.
Fix OQGraph plug.in to detect boost version >= 1.40.0, and only enable OQGraph
if such boost is found.
Update the packaging in ourdelta/bakery to include the oqgraph_engine.so and
link with g++ rather than gcc.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)

29 Jun '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Connect by
CREATION DATE..: Thu, 26 Mar 2009, 00:30
SUPERVISOR.....: Monty
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Server-BackLog
TASK ID........: 11 (http://askmonty.org/worklog/?tid=11)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 220 (hours remain)
ORIG. ESTIMATE.: 220
PROGRESS NOTES:
-=-=(Sergei - Tue, 29 Jun 2010, 14:03)=-=-
Category updated.
--- /tmp/wklog.11.old.32063 2010-06-29 14:03:35.000000000 +0000
+++ /tmp/wklog.11.new.32063 2010-06-29 14:03:35.000000000 +0000
@@ -1 +1 @@
-Server-Sprint
+Server-BackLog
-=-=(Guest - Tue, 19 May 2009, 18:27)=-=-
High Level Description modified.
--- /tmp/wklog.11.old.21953 2009-05-19 18:27:14.000000000 +0300
+++ /tmp/wklog.11.new.21953 2009-05-19 18:27:14.000000000 +0300
@@ -1 +1,360 @@
-Add CONNECT BY syntax
+<contents>
+1. Background information
+2. CONNECT BY semantics, properties and limitations
+2.1 Additional CONNECT BY features
+2.2 Limitations
+3. Our implementation
+3.1 Scope Questions
+3.2 CONNECT BY execution
+3.2.1 Straightforward (recursive) evaluation algorithm
+3.2.2 Transitive-closure evaluation algorithms
+3.2.3 Other algorithms
+3.2.4 Loop detection
+3.2.4.1 The upper bound of produced records
+3.2.4.1 Straightforward approach: track chains
+3.2.3 Improvements for straightforward execution strategy
+3.3. Optimization
+4. Use-cases dump
+</contents>
+
+1. Background information
+-------------------------
+* CONNECT BY is a non-standard, Oracle's syntax. It is also supported by
+ EnterpriseDB (Q: any other implementations?)
+
+* PostgreSQL 8.4 (now beta) has support for SQL-standard compliant WITH
+ RECURSIVE (aka Common Table Expressions, CTE) query syntax:
+ http://www.postgresql.org/docs/8.4/static/queries-with.html
+ http://www.postgresql.org/about/news.1074
+ http://archives.postgresql.org/pgsql-hackers/2008-02/msg00642.php
+ http://archives.postgresql.org/pgsql-patches/2008-05/msg00362.php
+
+* Evgen's attempt:
+ http://lists.mysql.com/internals/15569
+
+DB2 and MS SQL support SQL standard's WITH RECURSIVE clause.
+
+2. CONNECT BY semantics, properties and limitations
+---------------------------------------------------
+From Oracle's manual:
+
+<almost-quote>
+
+ SELECT ...
+ FROM ...
+ WHERE ...
+ START WITH cond
+ CONNECT BY connect_cond
+ ORDER [SIBLINGS] BY
+
+In oracle, one expression in connect_cond must be
+
+ PRIOR expr = expr
+
+ or
+
+ expr = PRIOR expr
+
+The manner in which Oracle processes a WHERE clause (if any) in a hierarchical
+query depends on whether the WHERE clause contains a join:
+
+ * If the WHERE predicate contains a join, Oracle applies the join predicates
+ before doing the CONNECT BY processing.
+ * If the WHERE clause does not contain a join, Oracle applies all predicates
+ other than the CONNECT BY predicates after doing the CONNECT BY processing
+ without affecting the other rows of the hierarchy.
+</almost-quote>
+
+See http://www.adp-gmbh.ch/ora/sql/connect_by.html
+http://download-uk.oracle.com/docs/cd/B10501_01/server.920/a96540/queries4a.htm
+
+
+2.1 Additional CONNECT BY features
+----------------------------------
+
+LEVEL pseudocolumn
+ indicates ancestry depth of the record (inital row has level=1, its children
+ have level=2 and so forth). Can be used in CONNECT BY clause to limit
+ traversal depth.
+
+SYS_CONNECT_BY_PATH(column, 'char')
+ returns path from root to the node.
+
+NOCYCLE and CONNECT_BY_ISCYCLE
+ "With the 10g keyword NOCYCLE, hierarchical queries detect loops and do not
+ generate errors. CONNECT_BY_ISCYCLE pseudo-column is a flag that can be used
+ to detect which row is cycling"
+ http://www.dba-oracle.com/t_advanced_sql_connect_by_loop.htm
+
+ORDER SIBLINGS BY
+ CONNECT BY produces records in "children follow parents" order, with order
+ of the siblings unspecified. ORDER SIBLINGS BY orders siblings within each
+ "generation".
+
+2.2 Limitations
+---------------
+Other limitations (which we might or might not want to replicate)
+
+* There is this error:
+ ORA-01437: cannot have join with CONNECT BY
+ Cause: A join operation was specified with a CONNECT BY clause. If a
+ CONNECT BY clause is used in a SELECT statement for a tree-
+ structured query, only one table may be referenced in the query.
+ Action: Remove either the CONNECT BY clause or the join operation from
+ the SQL statement.
+ It seems oracle had this limitation before version 10G
+
+* LEVEL cannot be used on the left side of IN-comparison if the right side is a
+ subquery
+http://download.oracle.com/docs/cd/B10501_01/server.920/a96540/sql_elements6a.htm#9547
+ This seems to have been lifted in version 10?
+
+3. Our implementation
+---------------------
+
+3.1 Scope Questions
+-------------------
+* Are we sure we want CONNECT BY syntax and not SQL standard' one? (I'm not
+ suggesting one or the other, just want to make sure we've made a conscious
+ decision)
+
+* Any use-cases we need to make sure to handle well?
+
+Will we implement any of these features:
+
+* Output is ordered (children follow parents)
+* "ORDER SIBLINGS BY" variant of ORDER BY
+* NOCYCLE/CONNECT_BY_ISCYCLE
+ - It seems any checking for cycles will cause overhead. Do we implement a
+ mode for those who know what they are doing, where the server doesn't
+ actually check cycles but only reports error if it happened to enumerate,
+ say MAX(1M, #records_in_table * 10) records? (This doesn't guarantee that
+ there are no cycles, but this is just beyond what one could logically want)
+
+* Oracle's treatment of WHERE (if there's a join - the WHERE is applied after
+ connect by, otherwise before) [Yes]
+* Can one use SYS_CONNECT_BY_PATH in the CONNECT BY expression?
+
+
+3.2 CONNECT BY execution
+------------------------
+
+3.2.1 Straightforward (recursive) evaluation algorithm
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+As specified in CONNECT BY definition, breadth-first, parent-to-children
+traversal:
+
+ start with a scan that retrieves records using the START WITH condition;
+ pass rows to ouptut and also record them (i.e. needed columns) in
+ some sort of growable, overflow-to-disk buffer in_buf;
+
+ while(in_buf is not empty)
+ {
+ for each record in the buffer
+ {
+ do a scan based on CONNECT BY condition;
+ pass rows to output and also record them (i.e. needed columns) in
+ a growable, overflow-to-disk buffer out_buf;
+ }
+ in_buf= out_buf;
+ }
+
+This algorithm will produce rows in the required order.
+
+3.2.2 Transitive-closure evaluation algorithms
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+When CONNECT BY clause refers only to current and PRIOR records (and doesn't
+refer to connect path using LEVEL or SYS_CONNECT_BY_PATH functions), then
+evaluation of CONNECT BY operation is equivalent to building a transitive
+closure of a certain relation.
+
+TODO: can we use LEVEL/SYS_CONNECT_BY_PATH in select list with these
+ algorithms? looks like no?
+
+There are special algorithms to build transitive closure of relation that is
+represented as a table of edges, e.g. Blocked Warshall Algorithm.
+
+Q: Do we investigate further in this direction?
+
+3.2.3 Other algorithms
+----------------------
+To be resolved: Do we always start from the first clause and go to children?
+Does it make sense to proceed in other direction, from children to parents?
+Looks like no? TODO need definite answer.
+
+3.2.4 Loop detection
+~~~~~~~~~~~~~~~~~~~~
+Transitive-closure algorithms can detect loops (it seems some of them can also
+handle loop avoidance but that needs to be verified).
+
+Straightforward-evaluation algorithm will work forever if there is a loop,
+hence will need assistance in loop detection/avoidance.
+
+3.2.4.1 The upper bound of produced records
+-------------------------------------------
+There is an upper bound of the amount of records CONNECT BY runtime can
+generate without generating a loop.
+
+The worst case is when
+ * every record in a source table was in the parent generation (and thus has
+ started a parent->child->child->... chain)
+ * every chain is of #table-records length.
+
+example of such case:
+
+ SELECT * FROM employees
+ START WITH true
+ CONNECT BY
+ PRIOR emp_id = (emp_id + 1) MOD $n_employees AND
+ length(SYS_CONNECT_BY_PATH('-')) = $n_employees -- guard againist
+ -- forming loops
+
+this gives that we can at most generate O(#table_records^2) records. This
+limitation can be used as a primitive way to stop evaluation.
+
+
+3.2.4.1 Straightforward approach: track chains
+----------------------------------------------
+In general case, we will have to track which records we have seen across each
+of the parent-child chains. The same record can show up in different chains
+at different times and this won't form a loop:
+
+ parent generation1 generation2
+
+ row1- --+---row2---- ---row3-- (chain1)
+ |
+ \--row3-+-- ---row2-- (chain2)
+ |
+ \- ---row4-- (chain3)
+ row4- ...
+
+Tracking can be done by
+- Numbering the chains and using one structure (e.g temptable) to store
+ (rowid, chain#) pairs and check them for uniqueness.
+
+- Using per-chain data structure which we could serialize/deserialize. This
+ could be
+ - serializable hashtable
+ - ordered rowid list
+ - serializable sparse bitmap
+
+One can expect a lot of chains to have common starts (eg. look at chain2 and
+chain3). I don't see how one could take advantage of that, though.
+
+3.2.3 Improvements for straightforward execution strategy
+---------------------------------------------------------
+
+* If the query is a join, it may make sense to materialize it join result
+ (including creation of appropriate index) so we're able to make
+ parent-to-child transitions faster.
+ This seems to be connected to Evgen's work on FROM subqueries.
+
+* If there is a suitable index, we can employ a variant of BatchedKeyAccess.
+
+* Part of CONNECT BY expression that places restrictions on subsequent
+ generation can be moved to the WHERE. If we do that, we get two recordsets:
+
+1. Initial START WITH recordset
+
+2. A recordset to be used to advance to subsequent generation
+
+
+3.3. Optimization
+-----------------
+It seems it is nearly impossible to estimate how many iterations we'll have
+to make and how many records we will end up producing.
+
+TODO: some bad estimates.. assume a fixed number of generations, reuse ref
+accces estimations for fanount, which gives
+
+ access_method_estimate ^ number_of_generations
+
+estimate?
+
+4. Use-cases dump
+=================
+
+http://www.orafaq.com/usenet/comp.databases.oracle.server/2007/01/05/0264.htm:
+ select mak_xx,nr_porz,level lvl from spacer_strona
+ where nvl(dervlvl,0)<3
+ start with mak_xx=125414 and nr_porz=0
+ connect by mak_xx = prior derv_mak_xx and nr_porz = prior derv_nr_porz
+ and prior dervlvl=3
+
+
+http://www.orafaq.com/usenet/comp.databases.oracle.server/2007/01/04/0196.htm:
+ SELECT OPM_N_ID FROM RAFC_ADM.AFC_T_OPERATION_METIER START WITH OPM_N_ID IN
+ (
+ SELECT OPM_N_ID FROM RAFC_ADM.AFC_T_OPERATION_METIER x
+ START WITH x.OPM_N_ID IN (4846)
+ CONNECT BY ((PRIOR x.OPM_MERE_OPM_N_ID = x.OPM_N_ID)
+ OR (PRIOR x.OPM_ANNULEE_OPM_N_ID = x.OPM_N_ID))
+ )
+ CONNECT BY ((PRIOR OPM_N_ID = OPM_MERE_OPM_N_ID) OR (PRIOR OPM_N_ID =
+OPM_ANNULEE_OPM_N_ID))
+
+http://forums.enterprisedb.com/posts/list/737.page:
+ select lpad(' ',2*(level-1)) || to_char(child) s
+ from x
+ start with parent is null
+ connect by prior child = parent;
+
+ select *
+ from emp, dept
+ where dept.deptno = emp.deptno
+ start with mgr is null
+ connect by mgr = prior empno
+
+http://forums.oracle.com/forums/thread.jspa?threadID=623173:
+ SELECT cust_number
+ FROM customer
+ START WITH cust_number = '5568677999'
+ CONNECT BY PRIOR cust_number = cust_group_code.
+
+http://www.orafaq.com/forum/t/118879/0/
+ SELECT COUNT(a.dataid), c.name
+ FROM dauditnew a, dtree b, kuaf c
+ WHERE a.auditdate > SYSDATE-10 AND a.auditstr IN ('Create', 'AddVersion')
+ AND a.dataid = b.dataid AND c.id = a.performerid
+ AND a.SUBTYPE = 0
+ START WITH b.dataid = 6132086 CONNECT BY PRIOR a.dataid = b.parentid GROUP BY
+c.name
+
+
+http://www.postgresql-support.de/blog/blog_hans.html
+ SELECT METIER_ID||'|'||ORGANISATION_ID AS JOBORG
+ FROM INTRA_METIER,INTRA_ORGANISATION
+ WHERE METIER_ID IN(
+ SELECT METIER_ID
+ FROM INTRA_METIER
+ START WITH METIER_ID= '99533220-e8b2-4121-998c-808ea8ca2da7'
+ CONNECT BY METIER_ID= PRIOR PARENT_METIER_ID
+ ) AND ORGANISATION_ID IN (
+ SELECT ORGANISATION_ID
+ FROM INTRA_ORGANISATION
+ START WITH ORGANISATION_ID='025ee58f-35a3-4183-8679-01472838f753'
+ CONNECT BY ORGANISATION_ID= PRIOR PARENT_ORGANISATION_ID
+ );
+
+http://oracle.com
+ Oracle database uses CONNECT BY to generate EXPLAINs.
+
+http://practical-sql-tuning.blogspot.com/2009/01/use-of-statistically-incorrect.html
+
+ select sum(human_cnt) from facts
+ where territory_id in (select territory_id
+ from dic$territory
+ start with territory_code = :code
+ connect by prior territory_id = territory_parent);
+
+http://www.dbasupport.com/forums/archive/index.php/t-30008.html
+
+
+ SELECT LEVEL,LPAD(' ',8*(LEVEL-1))||T_COM_OBJ.OBJ_NAME, T_COM_OBJ.OBJ_PARENT,
+T_COM_OBJ.OBJ_ID
+ FROM VDR.T_COM_OBJ
+ START WITH T_COM_OBJ.OBJ_ID in (select obj_id obj_main from vdr.t_com_obj
+where obj_id=obj_parent)
+ CONNECT BY PRIOR T_COM_OBJ.OBJ_ID = T_COM_OBJ.OBJ_PARENT
+
+
DESCRIPTION:
<contents>
1. Background information
2. CONNECT BY semantics, properties and limitations
2.1 Additional CONNECT BY features
2.2 Limitations
3. Our implementation
3.1 Scope Questions
3.2 CONNECT BY execution
3.2.1 Straightforward (recursive) evaluation algorithm
3.2.2 Transitive-closure evaluation algorithms
3.2.3 Other algorithms
3.2.4 Loop detection
3.2.4.1 The upper bound of produced records
3.2.4.2 Straightforward approach: track chains
3.2.5 Improvements for straightforward execution strategy
3.3. Optimization
4. Use-cases dump
</contents>
1. Background information
-------------------------
* CONNECT BY is non-standard, Oracle-specific syntax. It is also supported by
EnterpriseDB (Q: any other implementations?)
* PostgreSQL 8.4 (now beta) has support for SQL-standard compliant WITH
RECURSIVE (aka Common Table Expressions, CTE) query syntax:
http://www.postgresql.org/docs/8.4/static/queries-with.html
http://www.postgresql.org/about/news.1074
http://archives.postgresql.org/pgsql-hackers/2008-02/msg00642.php
http://archives.postgresql.org/pgsql-patches/2008-05/msg00362.php
* Evgen's attempt:
http://lists.mysql.com/internals/15569
DB2 and MS SQL support SQL standard's WITH RECURSIVE clause.
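For orientation, here is the same hierarchy query in both syntaxes (against a
hypothetical employees(emp_id, mgr_id, name) table):

  -- Oracle-style hierarchical query:
  SELECT emp_id, name, LEVEL
  FROM employees
  START WITH mgr_id IS NULL
  CONNECT BY PRIOR emp_id = mgr_id;

  -- SQL-standard equivalent (as in PostgreSQL 8.4):
  WITH RECURSIVE tree (emp_id, name, lvl) AS (
    SELECT emp_id, name, 1 FROM employees WHERE mgr_id IS NULL
    UNION ALL
    SELECT e.emp_id, e.name, t.lvl + 1
    FROM employees e JOIN tree t ON e.mgr_id = t.emp_id
  )
  SELECT emp_id, name, lvl FROM tree;

Note that CONNECT BY additionally guarantees the children-follow-parents output
order (see 2.1 below), which the plain WITH RECURSIVE form does not.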
2. CONNECT BY semantics, properties and limitations
---------------------------------------------------
From Oracle's manual:
<almost-quote>
SELECT ...
FROM ...
WHERE ...
START WITH cond
CONNECT BY connect_cond
ORDER [SIBLINGS] BY
In oracle, one expression in connect_cond must be
PRIOR expr = expr
or
expr = PRIOR expr
The manner in which Oracle processes a WHERE clause (if any) in a hierarchical
query depends on whether the WHERE clause contains a join:
* If the WHERE predicate contains a join, Oracle applies the join predicates
before doing the CONNECT BY processing.
* If the WHERE clause does not contain a join, Oracle applies all predicates
other than the CONNECT BY predicates after doing the CONNECT BY processing
without affecting the other rows of the hierarchy.
</almost-quote>
See http://www.adp-gmbh.ch/ora/sql/connect_by.html
http://download-uk.oracle.com/docs/cd/B10501_01/server.920/a96540/queries4a…
2.1 Additional CONNECT BY features
----------------------------------
LEVEL pseudocolumn
indicates ancestry depth of the record (initial row has level=1, its children
have level=2 and so forth). Can be used in CONNECT BY clause to limit
traversal depth.
SYS_CONNECT_BY_PATH(column, 'char')
returns path from root to the node.
NOCYCLE and CONNECT_BY_ISCYCLE
"With the 10g keyword NOCYCLE, hierarchical queries detect loops and do not
generate errors. CONNECT_BY_ISCYCLE pseudo-column is a flag that can be used
to detect which row is cycling"
http://www.dba-oracle.com/t_advanced_sql_connect_by_loop.htm
ORDER SIBLINGS BY
CONNECT BY produces records in "children follow parents" order, with order
of the siblings unspecified. ORDER SIBLINGS BY orders siblings within each
"generation".
2.2 Limitations
---------------
Other limitations (which we might or might not want to replicate)
* There is this error:
ORA-01437: cannot have join with CONNECT BY
Cause: A join operation was specified with a CONNECT BY clause. If a
CONNECT BY clause is used in a SELECT statement for a tree-
structured query, only one table may be referenced in the query.
Action: Remove either the CONNECT BY clause or the join operation from
the SQL statement.
It seems oracle had this limitation before version 10G
* LEVEL cannot be used on the left side of IN-comparison if the right side is a
subquery
http://download.oracle.com/docs/cd/B10501_01/server.920/a96540/sql_elements…
This seems to have been lifted in version 10?
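To make 2.1 and 2.2 concrete, here is a sketch of a single query exercising the
extra features (Oracle syntax, hypothetical emp(empno, mgr, ename) table):

  SELECT LPAD(' ', 2*(LEVEL-1)) || ename AS tree,
         SYS_CONNECT_BY_PATH(ename, '/') AS path,
         CONNECT_BY_ISCYCLE AS in_cycle
  FROM emp
  START WITH mgr IS NULL
  CONNECT BY NOCYCLE PRIOR empno = mgr
  ORDER SIBLINGS BY ename;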
3. Our implementation
---------------------
3.1 Scope Questions
-------------------
* Are we sure we want CONNECT BY syntax and not the SQL standard one? (I'm not
suggesting one or the other, just want to make sure we've made a conscious
decision)
* Any use-cases we need to make sure to handle well?
Will we implement any of these features:
* Output is ordered (children follow parents)
* "ORDER SIBLINGS BY" variant of ORDER BY
* NOCYCLE/CONNECT_BY_ISCYCLE
- It seems any checking for cycles will cause overhead. Do we implement a
mode for those who know what they are doing, where the server doesn't
actually check cycles but only reports an error if it happens to enumerate,
say MAX(1M, #records_in_table * 10) records? (This doesn't guarantee that
there are no cycles, but this is just beyond what one could logically want)
* Oracle's treatment of WHERE (if there's a join - the WHERE is applied after
connect by, otherwise before) [Yes]
* Can one use SYS_CONNECT_BY_PATH in the CONNECT BY expression?
3.2 CONNECT BY execution
------------------------
3.2.1 Straightforward (recursive) evaluation algorithm
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As specified in CONNECT BY definition, breadth-first, parent-to-children
traversal:
  start with a scan that retrieves records using the START WITH condition;
  pass rows to output and also record them (i.e. needed columns) in
  some sort of growable, overflow-to-disk buffer in_buf;

  while (in_buf is not empty)
  {
    for each record in the buffer
    {
      do a scan based on the CONNECT BY condition;
      pass rows to output and also record them (i.e. needed columns) in
      a growable, overflow-to-disk buffer out_buf;
    }
    in_buf= out_buf;
  }
This algorithm will produce rows in the required order.
3.2.2 Transitive-closure evaluation algorithms
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When CONNECT BY clause refers only to current and PRIOR records (and doesn't
refer to connect path using LEVEL or SYS_CONNECT_BY_PATH functions), then
evaluation of CONNECT BY operation is equivalent to building a transitive
closure of a certain relation.
TODO: can we use LEVEL/SYS_CONNECT_BY_PATH in select list with these
algorithms? looks like no?
There are special algorithms to build transitive closure of relation that is
represented as a table of edges, e.g. Blocked Warshall Algorithm.
Q: Do we investigate further in this direction?
3.2.3 Other algorithms
----------------------
To be resolved: Do we always start from the first clause and go to children?
Does it make sense to proceed in other direction, from children to parents?
Looks like no? TODO need definite answer.
3.2.4 Loop detection
~~~~~~~~~~~~~~~~~~~~
Transitive-closure algorithms can detect loops (it seems some of them can also
handle loop avoidance but that needs to be verified).
Straightforward-evaluation algorithm will work forever if there is a loop,
hence will need assistance in loop detection/avoidance.
3.2.4.1 The upper bound of produced records
-------------------------------------------
There is an upper bound on the number of records the CONNECT BY runtime can
generate without forming a loop.
The worst case is when
* every record in a source table was in the parent generation (and thus has
started a parent->child->child->... chain)
* every chain is of #table-records length.
An example of such a case:

  SELECT * FROM employees
  START WITH true
  CONNECT BY
    PRIOR emp_id = (emp_id + 1) MOD $n_employees AND
    length(SYS_CONNECT_BY_PATH('-')) = $n_employees -- guard against
                                                    -- forming loops

This means we can generate at most O(#table_records^2) records. This
limitation can be used as a primitive way to stop evaluation.
3.2.4.2 Straightforward approach: track chains
----------------------------------------------
In the general case, we will have to track which records we have seen across each
of the parent-child chains. The same record can show up in different chains
at different times and this won't form a loop:
parent generation1 generation2
row1- --+---row2---- ---row3-- (chain1)
|
\--row3-+-- ---row2-- (chain2)
|
\- ---row4-- (chain3)
row4- ...
Tracking can be done by
- Numbering the chains and using one structure (e.g. a temptable) to store
  (rowid, chain#) pairs and check them for uniqueness (see the sketch below).
- Using a per-chain data structure which we could serialize/deserialize. This
  could be
  - a serializable hashtable
  - an ordered rowid list
  - a serializable sparse bitmap
One can expect a lot of chains to have common starts (e.g. look at chain2 and
chain3). I don't see how one could take advantage of that, though.
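A minimal sketch of the "(rowid, chain#) pairs" variant (illustrative names
only; a real implementation would use a temptable with a unique key in place
of the Python set):

  def extend_chain(seen, rowid, chain_no):
      """Return False (cycle detected) if rowid was already seen on this chain."""
      key = (rowid, chain_no)
      if key in seen:
          return False            # same record twice in one chain => cycle
      seen.add(key)
      return True

  seen = set()
  assert extend_chain(seen, rowid=1, chain_no=1)
  assert extend_chain(seen, rowid=1, chain_no=2)      # same row, different chain: OK
  assert not extend_chain(seen, rowid=1, chain_no=1)  # row repeats on chain 1: cycle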
3.2.5 Improvements for the straightforward execution strategy
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* If the query is a join, it may make sense to materialize the join result
  (including creation of an appropriate index) so that we're able to make
  parent-to-child transitions faster.
  This seems to be connected to Evgen's work on FROM subqueries.
* If there is a suitable index, we can employ a variant of BatchedKeyAccess.
* The part of the CONNECT BY expression that places restrictions on the next
  generation can be moved to the WHERE. If we do that, we get two recordsets:
  1. The initial START WITH recordset
  2. A recordset used to advance to the next generation
3.3. Optimization
-----------------
It seems nearly impossible to estimate how many iterations we'll have
to make and how many records we will end up producing.
TODO: some crude estimates: assume a fixed number of generations and reuse the
ref access estimates for the fanout, which would give an estimate of

  access_method_estimate ^ number_of_generations
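A toy illustration of that estimate (the numbers are made up): with a ref
access fanout estimate of 5 children per row and an assumed depth of 4
generations, the estimate is 5^4 rows per START WITH row.

  fanout_estimate = 5          # hypothetical ref-access fanout
  assumed_generations = 4      # assumed fixed number of generations
  rows_estimate = fanout_estimate ** assumed_generations   # 625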
4. Use-cases dump
=================
http://www.orafaq.com/usenet/comp.databases.oracle.server/2007/01/05/0264.h…:
select mak_xx,nr_porz,level lvl from spacer_strona
where nvl(dervlvl,0)<3
start with mak_xx=125414 and nr_porz=0
connect by mak_xx = prior derv_mak_xx and nr_porz = prior derv_nr_porz
and prior dervlvl=3
http://www.orafaq.com/usenet/comp.databases.oracle.server/2007/01/04/0196.h…:
SELECT OPM_N_ID FROM RAFC_ADM.AFC_T_OPERATION_METIER START WITH OPM_N_ID IN
(
SELECT OPM_N_ID FROM RAFC_ADM.AFC_T_OPERATION_METIER x
START WITH x.OPM_N_ID IN (4846)
CONNECT BY ((PRIOR x.OPM_MERE_OPM_N_ID = x.OPM_N_ID)
OR (PRIOR x.OPM_ANNULEE_OPM_N_ID = x.OPM_N_ID))
)
CONNECT BY ((PRIOR OPM_N_ID = OPM_MERE_OPM_N_ID) OR (PRIOR OPM_N_ID =
OPM_ANNULEE_OPM_N_ID))
http://forums.enterprisedb.com/posts/list/737.page:
select lpad(' ',2*(level-1)) || to_char(child) s
from x
start with parent is null
connect by prior child = parent;
select *
from emp, dept
where dept.deptno = emp.deptno
start with mgr is null
connect by mgr = prior empno
http://forums.oracle.com/forums/thread.jspa?threadID=623173:
SELECT cust_number
FROM customer
START WITH cust_number = '5568677999'
CONNECT BY PRIOR cust_number = cust_group_code.
http://www.orafaq.com/forum/t/118879/0/
SELECT COUNT(a.dataid), c.name
FROM dauditnew a, dtree b, kuaf c
WHERE a.auditdate > SYSDATE-10 AND a.auditstr IN ('Create', 'AddVersion')
AND a.dataid = b.dataid AND c.id = a.performerid
AND a.SUBTYPE = 0
START WITH b.dataid = 6132086 CONNECT BY PRIOR a.dataid = b.parentid GROUP BY
c.name
http://www.postgresql-support.de/blog/blog_hans.html
SELECT METIER_ID||'|'||ORGANISATION_ID AS JOBORG
FROM INTRA_METIER,INTRA_ORGANISATION
WHERE METIER_ID IN(
SELECT METIER_ID
FROM INTRA_METIER
START WITH METIER_ID= '99533220-e8b2-4121-998c-808ea8ca2da7'
CONNECT BY METIER_ID= PRIOR PARENT_METIER_ID
) AND ORGANISATION_ID IN (
SELECT ORGANISATION_ID
FROM INTRA_ORGANISATION
START WITH ORGANISATION_ID='025ee58f-35a3-4183-8679-01472838f753'
CONNECT BY ORGANISATION_ID= PRIOR PARENT_ORGANISATION_ID
);
http://oracle.com
Oracle database uses CONNECT BY to generate EXPLAINs.
http://practical-sql-tuning.blogspot.com/2009/01/use-of-statistically-incor…
select sum(human_cnt) from facts
where territory_id in (select territory_id
from dic$territory
start with territory_code = :code
connect by prior territory_id = territory_parent);
http://www.dbasupport.com/forums/archive/index.php/t-30008.html
SELECT LEVEL,LPAD(' ',8*(LEVEL-1))||T_COM_OBJ.OBJ_NAME, T_COM_OBJ.OBJ_PARENT,
T_COM_OBJ.OBJ_ID
FROM VDR.T_COM_OBJ
START WITH T_COM_OBJ.OBJ_ID in (select obj_id obj_main from vdr.t_com_obj
where obj_id=obj_parent)
CONNECT BY PRIOR T_COM_OBJ.OBJ_ID = T_COM_OBJ.OBJ_PARENT
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)
[Maria-developers] WL#10 Updated (by Sergei): Microseconds
by worklog-noreply@askmonty.org 29 Jun '10
by worklog-noreply@askmonty.org 29 Jun '10
29 Jun '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Microseconds
CREATION DATE..: Thu, 26 Mar 2009, 00:29
SUPERVISOR.....: Monty
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Server-BackLog
TASK ID........: 10 (http://askmonty.org/worklog/?tid=10)
VERSION........: Server-5.3
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 80 (hours remain)
ORIG. ESTIMATE.: 80
PROGRESS NOTES:
-=-=(Sergei - Tue, 29 Jun 2010, 14:03)=-=-
Category updated.
--- /tmp/wklog.10.old.32058 2010-06-29 14:03:11.000000000 +0000
+++ /tmp/wklog.10.new.32058 2010-06-29 14:03:11.000000000 +0000
@@ -1 +1 @@
-Server-RawIdeaBin
+Server-BackLog
-=-=(Sergei - Tue, 29 Jun 2010, 14:03)=-=-
Category updated.
--- /tmp/wklog.10.old.31970 2010-06-29 14:03:01.000000000 +0000
+++ /tmp/wklog.10.new.31970 2010-06-29 14:03:01.000000000 +0000
@@ -1 +1 @@
-Server-Sprint
+Server-RawIdeaBin
-=-=(Monty - Fri, 29 Jan 2010, 19:05)=-=-
Version updated.
--- /tmp/wklog.10.old.5698 2010-01-29 19:05:42.000000000 +0200
+++ /tmp/wklog.10.new.5698 2010-01-29 19:05:42.000000000 +0200
@@ -1 +1 @@
-Server-5.2
+Server-5.3
DESCRIPTION:
Add microsecond precision to NOW()
Add new field types for time and datetime with microsecond precision
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)
[Maria-developers] WL#10 Updated (by Sergei): Microseconds
by worklog-noreply@askmonty.org 29 Jun '10
by worklog-noreply@askmonty.org 29 Jun '10
29 Jun '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Microseconds
CREATION DATE..: Thu, 26 Mar 2009, 00:29
SUPERVISOR.....: Monty
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Server-RawIdeaBin
TASK ID........: 10 (http://askmonty.org/worklog/?tid=10)
VERSION........: Server-5.3
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 80 (hours remain)
ORIG. ESTIMATE.: 80
PROGRESS NOTES:
-=-=(Sergei - Tue, 29 Jun 2010, 14:03)=-=-
Category updated.
--- /tmp/wklog.10.old.31970 2010-06-29 14:03:01.000000000 +0000
+++ /tmp/wklog.10.new.31970 2010-06-29 14:03:01.000000000 +0000
@@ -1 +1 @@
-Server-Sprint
+Server-RawIdeaBin
-=-=(Monty - Fri, 29 Jan 2010, 19:05)=-=-
Version updated.
--- /tmp/wklog.10.old.5698 2010-01-29 19:05:42.000000000 +0200
+++ /tmp/wklog.10.new.5698 2010-01-29 19:05:42.000000000 +0200
@@ -1 +1 @@
-Server-5.2
+Server-5.3
DESCRIPTION:
Add microsecond precision to NOW()
Add new field types for time and datetime with microsecond precision
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)
[Maria-developers] WL#24 Updated (by Sergei): index_merge: fair choice between index_merge union and range access
by worklog-noreply@askmonty.org 29 Jun '10
by worklog-noreply@askmonty.org 29 Jun '10
29 Jun '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: index_merge: fair choice between index_merge union and range access
CREATION DATE..: Tue, 26 May 2009, 12:10
SUPERVISOR.....: Monty
IMPLEMENTOR....:
COPIES TO......: Psergey
CATEGORY.......: Server-RawIdeaBin
TASK ID........: 24 (http://askmonty.org/worklog/?tid=24)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Sergei - Tue, 29 Jun 2010, 14:00)=-=-
Category updated.
--- /tmp/wklog.24.old.31772 2010-06-29 14:00:05.000000000 +0000
+++ /tmp/wklog.24.new.31772 2010-06-29 14:00:05.000000000 +0000
@@ -1 +1 @@
-Server-Sprint
+Server-RawIdeaBin
-=-=(Guest - Sun, 16 Aug 2009, 02:13)=-=-
Low Level Design modified.
--- /tmp/wklog.24.old.23383 2009-08-16 02:13:54.000000000 +0300
+++ /tmp/wklog.24.new.23383 2009-08-16 02:13:54.000000000 +0300
@@ -125,7 +125,7 @@
The optimizer will generate the plans that only use the "col1=c1" part. The
right side of the AND will be ignored even if it has good selectivity.
-(Here an imerge for col2=c2 OR col3=c3 won't be built since neither col2=c2 nor
+(Here no imerge for col2=c2 OR col3=c3 will be built since neither col2=c2 nor
col3=c3 represent index ranges.)
@@ -199,7 +199,7 @@
O2. "Create index_merge accesses when possible"
Current tree_or() will not create index_merge access when it could create
- non-index merge access (see DISCARD-IMERGE-3 and its example in the "Problems
+ non-index merge access (see DISCARD-IMERGE-2 and its example in the "Problems
in the current implementation" section). This will be changed to work as
follows: we will create index_merge made for index scans that didn't have
their match in the other sel_tree.
-=-=(Guest - Sun, 16 Aug 2009, 01:03)=-=-
Low Level Design modified.
--- /tmp/wklog.24.old.20767 2009-08-16 01:03:11.000000000 +0300
+++ /tmp/wklog.24.new.20767 2009-08-16 01:03:11.000000000 +0300
@@ -18,6 +18,8 @@
# a range tree has range access options, possibly for several keys
range_tree = range(key1) AND range(key2) AND ... AND range(keyN);
+ (here range(keyi) may represent ranges not for initial keyi prefixes,
+ but ranges for any infixes for keyi)
# merge tree represents several way to index_merge
imerge_tree = imerge1 AND imerge2 AND ...
@@ -47,13 +49,13 @@
R.add(range_union(A.range(i), B.range(i)));
if (R has at least one range access)
- return R;
+ return R; // DISCARD-IMERGE-2
else
{
/* could not build any range accesses. construct index_merge */
- remove non-ranges from A; // DISCARD-IMERGE-2
+ remove non-ranges from A;
remove non-ranges from B;
- return new index_merge(A, B);
+ return new index_merge(A, B); // DISCARD-IMERGE-3
}
}
else if (A is range tree and B is index_merge tree (or vice versa))
@@ -65,12 +67,12 @@
(range_treeB_11 OR range_treeB_12 OR ... OR range_treeB_1N) AND
(range_treeB_21 OR range_treeB_22 OR ... OR range_treeB_2N) AND
...
- (range_treeB_K1 OR range_treeB_K2 OR ... OR range_treeB_kN) AND
+ (range_treeB_K1 OR range_treeB_K2 OR ... OR range_treeB_kN)
=
(range_treeA OR range_treeB_11 OR ... OR range_treeB_1N) AND
(range_treeA OR range_treeB_21 OR ... OR range_treeB_2N) AND
...
- (range_treeA OR range_treeB_11 OR ... OR range_treeB_1N) AND
+ (range_treeA OR range_treeB_11 OR ... OR range_treeB_1N)
Now each line represents an index_merge..
}
@@ -82,18 +84,18 @@
OR
imergeB1 AND imergeB2 AND ... AND imergeBN
- -> (discard all imergeA{i=2,3,...} -> // DISCARD-IMERGE-3
+ -> (discard all imergeA{i=2,3,...} -> // DISCARD-IMERGE-4
imergeA1
OR
- imergeB1 AND imergeB2 AND ... AND imergeBN =
+ imergeB1 =
- = (combine imergeA1 with each of the imergeB{i} ) =
+ = (combine imergeA1 with each of the range_treeB_1{i} ) =
- combine(imergeA1 OR imergeB1) AND
- combine(imergeA1 OR imergeB2) AND
+ combine(imergeA1 OR range_treeB_11) AND
+ combine(imergeA1 OR range_treeB_12) AND
... AND
- combine(imergeA1 OR imergeBN)
+ combine(imergeA1 OR range_treeB_1N)
}
}
@@ -109,7 +111,7 @@
DISCARD-IMERGE-2 step will cause index_merge option to be discarded when
the WHERE clause has this form (conditions t.badkey may have abritrary form):
- (t.badkey<c1 AND t.key1=c1) OR (t.key1=c2 AND t.badkey < c2)
+ (t.badkey<c1 AND t.key1=c1) OR (t.key2=c2 AND t.badkey < c2)
DISCARD-IMERGE-3 manifests itself as the following effect: suppose there are
two indexes:
@@ -123,6 +125,8 @@
The optimizer will generate the plans that only use the "col1=c1" part. The
right side of the AND will be ignored even if it has good selectivity.
+(Here an imerge for col2=c2 OR col3=c3 won't be built since neither col2=c2 nor
+col3=c3 represent index ranges.)
2. New implementation
-=-=(Guest - Mon, 20 Jul 2009, 17:13)=-=-
Dependency deleted: 30 no longer depends on 24
-=-=(Guest - Sat, 20 Jun 2009, 09:34)=-=-
Low Level Design modified.
--- /tmp/wklog.24.old.21663 2009-06-20 09:34:48.000000000 +0300
+++ /tmp/wklog.24.new.21663 2009-06-20 09:34:48.000000000 +0300
@@ -4,6 +4,7 @@
2. New implementation
2.1 New tree_and()
2.2 New tree_or()
+3. Testing and required coverage
</contents>
1. Current implementation overview
@@ -240,3 +241,14 @@
In order to limit the impact of this combinatorial explosion, we will
introduce a rule that we won't generate more than #defined
MAX_IMERGE_OPTS options.
+
+3. Testing and required coverage
+================================
+So far could find the following user cases:
+
+* BUG#17259: Query optimizer chooses wrong index
+* BUG#17673: Optimizer does not use Index Merge optimization in some cases
+* BUG#23322: Optimizer sometimes erroniously prefers other index over index merge
+* BUG#30151: optimizer is very reluctant to chose index_merge algorithm
+
+
-=-=(Guest - Thu, 18 Jun 2009, 16:55)=-=-
Low Level Design modified.
--- /tmp/wklog.24.old.19152 2009-06-18 16:55:00.000000000 +0300
+++ /tmp/wklog.24.new.19152 2009-06-18 16:55:00.000000000 +0300
@@ -141,13 +141,15 @@
Operations on SEL_ARG trees will be modified to produce/process the trees of
this kind:
+
2.1 New tree_and()
------------------
In order not to lose plans, we'll make these changes:
-1. Don't remove index_merge part of the tree.
+A1. Don't remove index_merge part of the tree (this will take care of
+ DISCARD-IMERGE-1 problem)
-2. Push range conditions down into index_merge trees that may support them.
+A2. Push range conditions down into index_merge trees that may support them.
if one tree has range(key1) and the other tree has imerge(key1 OR key2)
then perform an equvalent of this operation:
@@ -155,8 +157,86 @@
(rangeA(key1) AND rangeB(key1)) OR (rangeA(key1) AND rangeB(key2))
-3. Just as before: if both sel_tree A and sel_tree B have index_merge options,
+A3. Just as before: if both sel_tree A and sel_tree B have index_merge options,
concatenate them together.
-2.2 New tree_or()
+2.2 New tree_or()
+-----------------
+O1. Dont remove non-range plans:
+ Current tree_or() code will refuse to produce index_merge plans for
+ conditions like
+
+ "t.key1part2=const OR t.key2part1=const"
+
+ (this is marked as DISCARD-IMERGE-3). This was justifed as the left part of
+ the AND condition is not usable for range access, and the operation of
+ tree_and() guaranteed that there was no way it could changed to make a
+ usable range plan. With new tree_and() and rule A2, this is no longer the
+ case. For example for this query:
+
+ (t.key1part2=const OR t.key2part1=const) AND t.key1part1=const
+
+ it will construct a
+
+ imerge(t.key1part2=const OR t.key2part1=const), range(t.key1part1=const)
+
+ then tree_and() will apply rule A2 to push the range down into index merge
+ and after that we'll have:
+
+ range(t.key1part1=const)
+ imerge(
+ t.key1part2=const AND t.key1part1=const,
+ t.key2part1=const
+ )
+ note that imerge(...) describes a usable index_merge plan and it's possible
+ that it will be the best access path.
+
+O2. "Create index_merge accesses when possible"
+ Current tree_or() will not create index_merge access when it could create
+ non-index merge access (see DISCARD-IMERGE-3 and its example in the "Problems
+ in the current implementation" section). This will be changed to work as
+ follows: we will create index_merge made for index scans that didn't have
+ their match in the other sel_tree.
+ Ilustrating it with an example:
+
+ | sel_tree_A | sel_tree_B | A or B | include in index_merge?
+ ------+------------+------------+--------+------------------------
+ key1 | cond1 | cond2 | condM | no
+ key2 | cond3 | cond4 | NULL | no
+ key3 | cond5 | | | yes, A-side
+ key4 | cond6 | | | yes, A-side
+ key5 | | cond7 | | yes, B-side
+ key6 | | cond8 | | yes, B-side
+
+ here we assume that
+ - (cond1 OR cond2) did produce a combined range. Not including them in
+ index_merge.
+ - (cond3 OR cond4) didn't produce a usable range (e.g. they were
+ t.key1part1=c1 AND t.key1part2=c1, respectively, and combining them
+ didn't yield any range list)
+ - All other scand didn't have their counterparts, so we'll end up with a
+ SEL_TREE of:
+
+ range(condM) AND index_merge((cond5 AND cond6),(cond7 AND cond8))
+ .
+
+O4. There is no O4. DISCARD-INDEX-MERGE-4 will remain there. The idea is
+that although DISCARD-INDEX-MERGE-4 does discard plans, so far we haven
+seen any complaints that could be attributed to it.
+If we face the need to lift DISCARD-INDEX-MERGE-4, our answer will be to
+lift it ,and produce a cross-product:
+
+ ((key1p OR key2p) AND (key3p OR key4p))
+ OR
+ ((key5p OR key6p) AND (key7p OR key8p))
+
+ = (key1p OR key2p OR key5p OR key6p) AND // this part is currently
+ (key3p OR key4p OR key5p OR key6p) AND // produced
+
+ (key1p OR key2p OR key5p OR key6p) AND // this part will be added
+ (key3p OR key4p OR key5p OR key6p) //.
+
+In order to limit the impact of this combinatorial explosion, we will
+introduce a rule that we won't generate more than #defined
+MAX_IMERGE_OPTS options.
-=-=(Guest - Thu, 18 Jun 2009, 14:56)=-=-
Low Level Design modified.
--- /tmp/wklog.24.old.15612 2009-06-18 14:56:09.000000000 +0300
+++ /tmp/wklog.24.new.15612 2009-06-18 14:56:09.000000000 +0300
@@ -1 +1,162 @@
+<contents>
+1. Current implementation overview
+1.1. Problems in the current implementation
+2. New implementation
+2.1 New tree_and()
+2.2 New tree_or()
+</contents>
+
+1. Current implementation overview
+==================================
+At the moment, range analyzer works as follows:
+
+SEL_TREE structure represents
+
+ # There are sel_trees, a sel_tree is either range or merge tree
+ sel_tree = range_tree | imerge_tree
+
+ # a range tree has range access options, possibly for several keys
+ range_tree = range(key1) AND range(key2) AND ... AND range(keyN);
+
+ # merge tree represents several way to index_merge
+ imerge_tree = imerge1 AND imerge2 AND ...
+
+ # a way to do index merge == a set to use of different indexes.
+ imergeX = range_tree1 OR range_tree2 OR ..
+ where no pair of range_treeX have ranges over the same index.
+
+
+ tree_and(A, B)
+ {
+ if (both A and B are range trees)
+ return a range_tree with computed intersection for each range;
+ if (only one of A and B is a range tree)
+ return that tree; // DISCARD-IMERGE-1
+ // at this point both trees are index_merge trees
+ return concat_lists( A.imerge1 ... A.imergeN, B.imerge1 ... B.imergeN);
+ }
+
+
+ tree_or(A, B)
+ {
+ if (A and B are range trees)
+ {
+ R = new range_tree;
+ for each index i
+ R.add(range_union(A.range(i), B.range(i)));
+
+ if (R has at least one range access)
+ return R;
+ else
+ {
+ /* could not build any range accesses. construct index_merge */
+ remove non-ranges from A; // DISCARD-IMERGE-2
+ remove non-ranges from B;
+ return new index_merge(A, B);
+ }
+ }
+ else if (A is range tree and B is index_merge tree (or vice versa))
+ {
+ Perform this transformation:
+
+ range_treeA // this is A
+ OR
+ (range_treeB_11 OR range_treeB_12 OR ... OR range_treeB_1N) AND
+ (range_treeB_21 OR range_treeB_22 OR ... OR range_treeB_2N) AND
+ ...
+ (range_treeB_K1 OR range_treeB_K2 OR ... OR range_treeB_kN) AND
+ =
+ (range_treeA OR range_treeB_11 OR ... OR range_treeB_1N) AND
+ (range_treeA OR range_treeB_21 OR ... OR range_treeB_2N) AND
+ ...
+ (range_treeA OR range_treeB_11 OR ... OR range_treeB_1N) AND
+
+ Now each line represents an index_merge..
+ }
+ else if (both A and B are index_merge trees)
+ {
+ Perform this transformation:
+
+ imergeA1 AND imergeA2 AND ... AND imergeAN
+ OR
+ imergeB1 AND imergeB2 AND ... AND imergeBN
+
+ -> (discard all imergeA{i=2,3,...} -> // DISCARD-IMERGE-3
+
+ imergeA1
+ OR
+ imergeB1 AND imergeB2 AND ... AND imergeBN =
+
+ = (combine imergeA1 with each of the imergeB{i} ) =
+
+ combine(imergeA1 OR imergeB1) AND
+ combine(imergeA1 OR imergeB2) AND
+ ... AND
+ combine(imergeA1 OR imergeBN)
+ }
+ }
+
+1.1. Problems in the current implementation
+-------------------------------------------
+As marked in the code above:
+
+DISCARD-IMERGE-1 step will cause index_merge option to be discarded when
+the WHERE clause has this form:
+
+ (t.key1=c1 OR t.key2=c2) AND t.badkey < c3
+
+DISCARD-IMERGE-2 step will cause index_merge option to be discarded when
+the WHERE clause has this form (conditions t.badkey may have abritrary form):
+
+ (t.badkey<c1 AND t.key1=c1) OR (t.key1=c2 AND t.badkey < c2)
+
+DISCARD-IMERGE-3 manifests itself as the following effect: suppose there are
+two indexes:
+
+ INDEX i1(col1, col2),
+ INDEX i2(col1, col3)
+
+and this WHERE clause:
+
+ col1=c1 AND (col2=c2 OR col3=c3)
+
+The optimizer will generate the plans that only use the "col1=c1" part. The
+right side of the AND will be ignored even if it has good selectivity.
+
+
+2. New implementation
+=====================
+
+<general idea>
+* Don't start fighting combinatorial explosion until we've actually got one.
+</>
+
+SEL_TREE structure will be now able to hold both index_merge and range scan
+candidates at the same time. That is,
+
+ sel_tree2 = range_tree AND imerge_tree
+
+where both parts are optional (i.e. can be empty)
+
+Operations on SEL_ARG trees will be modified to produce/process the trees of
+this kind:
+
+2.1 New tree_and()
+------------------
+In order not to lose plans, we'll make these changes:
+
+1. Don't remove index_merge part of the tree.
+
+2. Push range conditions down into index_merge trees that may support them.
+ if one tree has range(key1) and the other tree has imerge(key1 OR key2)
+ then perform an equvalent of this operation:
+
+ rangeA(key1) AND ( rangeB(key1) OR rangeB(key2)) =
+
+ (rangeA(key1) AND rangeB(key1)) OR (rangeA(key1) AND rangeB(key2))
+
+3. Just as before: if both sel_tree A and sel_tree B have index_merge options,
+ concatenate them together.
+
+2.2 New tree_or()
-=-=(Psergey - Wed, 03 Jun 2009, 12:09)=-=-
Dependency created: 30 now depends on 24
-=-=(Guest - Mon, 01 Jun 2009, 23:30)=-=-
High-Level Specification modified.
--- /tmp/wklog.24.old.21580 2009-06-01 23:30:06.000000000 +0300
+++ /tmp/wklog.24.new.21580 2009-06-01 23:30:06.000000000 +0300
@@ -64,6 +64,9 @@
* How strict is the limitation on the form of the WHERE?
+* Which version should this be based on? 5.1? Which patches are should be in
+ (google's/percona's/maria/etc?)
+
* TODO: The optimizer didn't compare costs of index_merge and range before (ok
it did but that was done for accesses to different tables). Will there be any
possible gotchas here?
-=-=(Guest - Wed, 27 May 2009, 13:59)=-=-
Title modified.
--- /tmp/wklog.24.old.9498 2009-05-27 13:59:23.000000000 +0300
+++ /tmp/wklog.24.new.9498 2009-05-27 13:59:23.000000000 +0300
@@ -1 +1 @@
-index_merge optimizer: dont discard index_merge union strategies when range is available
+index_merge: fair choice between index_merge union and range access
------------------------------------------------------------
-=-=(View All Progress Notes, 11 total)=-=-
http://askmonty.org/worklog/index.pl?tid=24&nolimit=1
DESCRIPTION:
The current range optimizer will discard possible index_merge/[sort]union
strategies when there is a possible range plan. This is one of the measures
we take to avoid a combinatorial explosion of possible range/index_merge
strategies.
A bad side effect of this is that for WHERE clauses of the form

  t.key1= 'very-frequent-value' AND (t.key2='rare-value1' OR t.key3='rare-value2')

the optimizer will
- discard union(key2,key3) in favor of range(key1)
- consider the cost of using range(key1) and possibly discard that plan as well
and the overall effect is that a possibly poor range access will cause a
possibly good index_merge access not to be considered.
This WL is about lifting this limitation, at least for some subset of WHERE
clauses.
HIGH-LEVEL SPECIFICATION:
(Not a ready HLS but draft)
<contents>
Solution overview
Limitations
TODO
</contents>
Solution overview
=================
The idea is to delay discarding potential index_merge plans until the point
where it is really necessary.
This way, we won't have to make many changes in the range analyzer, but will
be able to keep potential index_merge plans around just long enough that they
can be considered together with range access plans.
Since there are no changes in the optimizer, the ability to consider both
range and index_merge options will be limited to WHERE clauses of this form:
  WHERE := range_cond(key1_1) AND
           range_cond(key2_1) AND
           other_cond AND
           index_merge_OR_cond1(key3_1, key3_2, ...) AND
           index_merge_OR_cond2(key4_1, key4_2, ...)
where
  index_merge_OR_cond{N} := (range_cond(keyN_1) OR
                             range_cond(keyN_2) OR ...)

  range_cond(keyX) := a condition that allows construction of range access over
                      keyX and doesn't allow construction of range/index_merge
                      accesses over any other keys of the table in question.
For such WHERE clauses, the range analyzer will produce a SEL_TREE of this form:

  SEL_TREE(
    range(key1_1),
    ...
    range(key2_1),
    SEL_IMERGE(              (1)
      SEL_TREE(key3_1)
      SEL_TREE(key3_2)
      ...
    )
    ...
  )

which can be used to make a cost-based choice between range and index_merge.
Limitations
-----------
This will not be a full solution in the sense that the range analyzer will not
be able to produce sel_tree (1) if the WHERE clause is specified in another
form (e.g. with the brackets expanded).
TODO
----
* Is it a problem if there are keys that are referred to both from
  index_merge and from range access?
* How strict is the limitation on the form of the WHERE?
* Which version should this be based on? 5.1? Which patches should be in
  (Google's/Percona's/Maria's/etc.)?
* TODO: The optimizer didn't compare costs of index_merge and range before
  (OK, it did, but that was done for accesses to different tables). Will there
  be any possible gotchas here?
LOW-LEVEL DESIGN:
<contents>
1. Current implementation overview
1.1. Problems in the current implementation
2. New implementation
2.1 New tree_and()
2.2 New tree_or()
3. Testing and required coverage
</contents>
1. Current implementation overview
==================================
At the moment, the range analyzer works as follows:

The SEL_TREE structure represents

  # There are sel_trees; a sel_tree is either a range tree or a merge tree
  sel_tree = range_tree | imerge_tree

  # a range tree has range access options, possibly for several keys
  range_tree = range(key1) AND range(key2) AND ... AND range(keyN);
    (here range(keyi) may represent ranges not for initial keyi prefixes,
    but ranges for any infixes of keyi)

  # a merge tree represents several ways to do index_merge
  imerge_tree = imerge1 AND imerge2 AND ...

  # a way to do index merge == a set of different indexes to use.
  imergeX = range_tree1 OR range_tree2 OR ..
    where no pair of range_treeX have ranges over the same index.


  tree_and(A, B)
  {
    if (both A and B are range trees)
      return a range_tree with computed intersection for each range;
    if (only one of A and B is a range tree)
      return that tree; // DISCARD-IMERGE-1
    // at this point both trees are index_merge trees
    return concat_lists( A.imerge1 ... A.imergeN, B.imerge1 ... B.imergeN);
  }


  tree_or(A, B)
  {
    if (A and B are range trees)
    {
      R = new range_tree;
      for each index i
        R.add(range_union(A.range(i), B.range(i)));

      if (R has at least one range access)
        return R; // DISCARD-IMERGE-2
      else
      {
        /* could not build any range accesses. construct index_merge */
        remove non-ranges from A;
        remove non-ranges from B;
        return new index_merge(A, B); // DISCARD-IMERGE-3
      }
    }
    else if (A is range tree and B is index_merge tree (or vice versa))
    {
      Perform this transformation:

      range_treeA  // this is A
      OR
      (range_treeB_11 OR range_treeB_12 OR ... OR range_treeB_1N) AND
      (range_treeB_21 OR range_treeB_22 OR ... OR range_treeB_2N) AND
      ...
      (range_treeB_K1 OR range_treeB_K2 OR ... OR range_treeB_KN)
      =
      (range_treeA OR range_treeB_11 OR ... OR range_treeB_1N) AND
      (range_treeA OR range_treeB_21 OR ... OR range_treeB_2N) AND
      ...
      (range_treeA OR range_treeB_K1 OR ... OR range_treeB_KN)

      Now each line represents an index_merge.
    }
    else if (both A and B are index_merge trees)
    {
      Perform this transformation:

      imergeA1 AND imergeA2 AND ... AND imergeAN
      OR
      imergeB1 AND imergeB2 AND ... AND imergeBN

      -> (discard all imergeA{i=2,3,...}) ->  // DISCARD-IMERGE-4

      imergeA1
      OR
      imergeB1 =

      = (combine imergeA1 with each of the range_treeB_1{i}) =

      combine(imergeA1 OR range_treeB_11) AND
      combine(imergeA1 OR range_treeB_12) AND
      ... AND
      combine(imergeA1 OR range_treeB_1N)
    }
  }
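As an aside, a toy Python model of the current tree_and() (an illustrative
tuple representation only, not the real SEL_TREE code) makes DISCARD-IMERGE-1
easy to see:

  def tree_and_current(a, b, intersect_ranges):
      """Trees are ("range", {...}) or ("imerge", [...]) tuples."""
      kind_a, body_a = a
      kind_b, body_b = b
      if kind_a == "range" and kind_b == "range":
          return ("range", intersect_ranges(body_a, body_b))
      if kind_a == "range":
          return a                 # B's imerges are dropped: DISCARD-IMERGE-1
      if kind_b == "range":
          return b                 # A's imerges are dropped: DISCARD-IMERGE-1
      return ("imerge", body_a + body_b)   # both imerge trees: concatenate

  r = ("range",  {"key1": "t.key1=c1"})
  m = ("imerge", [[{"key1": "t.key1=c1"}, {"key2": "t.key2=c2"}]])
  assert tree_and_current(m, r, intersect_ranges=lambda x, y: x) == r  # imerge lost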
1.1. Problems in the current implementation
-------------------------------------------
As marked in the code above:

The DISCARD-IMERGE-1 step will cause the index_merge option to be discarded
when the WHERE clause has this form:

  (t.key1=c1 OR t.key2=c2) AND t.badkey < c3

The DISCARD-IMERGE-2 step will cause the index_merge option to be discarded
when the WHERE clause has this form (the t.badkey conditions may have
arbitrary form):

  (t.badkey<c1 AND t.key1=c1) OR (t.key2=c2 AND t.badkey < c2)

DISCARD-IMERGE-3 manifests itself as the following effect: suppose there are
two indexes:

  INDEX i1(col1, col2),
  INDEX i2(col1, col3)

and this WHERE clause:

  col1=c1 AND (col2=c2 OR col3=c3)

The optimizer will generate only plans that use the "col1=c1" part. The
right side of the AND will be ignored even if it has good selectivity.
(Here no imerge for col2=c2 OR col3=c3 will be built since neither col2=c2 nor
col3=c3 represents an index range.)
2. New implementation
=====================
<general idea>
* Don't start fighting combinatorial explosion until we've actually got one.
</>

The SEL_TREE structure will now be able to hold both index_merge and range scan
candidates at the same time. That is,

  sel_tree2 = range_tree AND imerge_tree

where both parts are optional (i.e. can be empty).

Operations on SEL_ARG trees will be modified to produce/process trees of
this kind:
2.1 New tree_and()
------------------
In order not to lose plans, we'll make these changes:

A1. Don't remove the index_merge part of the tree (this will take care of
    the DISCARD-IMERGE-1 problem).

A2. Push range conditions down into index_merge trees that may support them:
    if one tree has range(key1) and the other tree has imerge(key1 OR key2),
    then perform an equivalent of this operation:

      rangeA(key1) AND ( rangeB(key1) OR rangeB(key2) ) =
      (rangeA(key1) AND rangeB(key1)) OR (rangeA(key1) AND rangeB(key2))

A3. Just as before: if both sel_tree A and sel_tree B have index_merge options,
    concatenate them together.
2.2 New tree_or()
-----------------
O1. Don't remove non-range plans:
    The current tree_or() code will refuse to produce index_merge plans for
    conditions like

      "t.key1part2=const OR t.key2part1=const"

    (this is marked as DISCARD-IMERGE-3). This was justified because the left
    part of the AND condition is not usable for range access, and the operation
    of tree_and() guaranteed that there was no way it could be changed to make
    a usable range plan. With the new tree_and() and rule A2, this is no longer
    the case. For example, for this query:

      (t.key1part2=const OR t.key2part1=const) AND t.key1part1=const

    it will construct

      imerge(t.key1part2=const OR t.key2part1=const), range(t.key1part1=const)

    then tree_and() will apply rule A2 to push the range down into the index
    merge, and after that we'll have:

      range(t.key1part1=const)
      imerge(
        t.key1part2=const AND t.key1part1=const,
        t.key2part1=const
      )

    Note that imerge(...) describes a usable index_merge plan, and it's
    possible that it will be the best access path. (A toy sketch of this
    push-down follows below.)
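    A toy Python sketch of the A2 push-down used in this example (an
    illustrative dict/list representation, not the server's SEL_TREE /
    SEL_IMERGE classes; it follows the example above by pushing the range
    only into imerge alternatives over the same key):

      def push_range_into_imerge(range_part, imerge):
          """range_part: {key: condition}; imerge: list of such dicts (ORed)."""
          pushed = []
          for alternative in imerge:
              merged = dict(alternative)
              for key, cond in range_part.items():
                  if key in merged:       # same key => AND the range condition in
                      merged[key] = f"({cond}) AND ({merged[key]})"
              pushed.append(merged)
          return pushed

      imerge = [{"key1": "key1part2=const"}, {"key2": "key2part1=const"}]
      range_part = {"key1": "key1part1=const"}
      print(push_range_into_imerge(range_part, imerge))
      # [{'key1': '(key1part1=const) AND (key1part2=const)'},
      #  {'key2': 'key2part1=const'}]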
O2. "Create index_merge accesses when possible"
Current tree_or() will not create index_merge access when it could create
non-index merge access (see DISCARD-IMERGE-2 and its example in the "Problems
in the current implementation" section). This will be changed to work as
follows: we will create index_merge made for index scans that didn't have
their match in the other sel_tree.
Ilustrating it with an example:
| sel_tree_A | sel_tree_B | A or B | include in index_merge?
------+------------+------------+--------+------------------------
key1 | cond1 | cond2 | condM | no
key2 | cond3 | cond4 | NULL | no
key3 | cond5 | | | yes, A-side
key4 | cond6 | | | yes, A-side
key5 | | cond7 | | yes, B-side
key6 | | cond8 | | yes, B-side
here we assume that
- (cond1 OR cond2) did produce a combined range. Not including them in
index_merge.
- (cond3 OR cond4) didn't produce a usable range (e.g. they were
t.key1part1=c1 AND t.key1part2=c1, respectively, and combining them
didn't yield any range list)
- All other scand didn't have their counterparts, so we'll end up with a
SEL_TREE of:
range(condM) AND index_merge((cond5 AND cond6),(cond7 AND cond8))
.
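    A toy Python sketch of the O2 rule under the same illustrative
    representation as above (again, not the server's data structures):

      def tree_or_sketch(tree_a, tree_b, combinable):
          """tree_a/tree_b: {key: condition}. combinable(key) says whether the
          union of the two conditions on `key` still forms a usable range."""
          range_part, a_side, b_side = {}, {}, {}
          for key in sorted(set(tree_a) | set(tree_b)):
              in_a, in_b = key in tree_a, key in tree_b
              if in_a and in_b:
                  if combinable(key):
                      range_part[key] = f"({tree_a[key]}) OR ({tree_b[key]})"
                  # else: the key2 case above: no range, and not in the imerge
              elif in_a:
                  a_side[key] = tree_a[key]   # scan with no counterpart in B
              else:
                  b_side[key] = tree_b[key]   # scan with no counterpart in A
          imerge = [alt for alt in (a_side, b_side) if alt]
          return range_part, imerge

      tree_a = {"key1": "cond1", "key2": "cond3", "key3": "cond5", "key4": "cond6"}
      tree_b = {"key1": "cond2", "key2": "cond4", "key5": "cond7", "key6": "cond8"}
      ranges, imerge = tree_or_sketch(tree_a, tree_b, combinable=lambda k: k == "key1")
      # ranges -> {'key1': '(cond1) OR (cond2)'}          (condM)
      # imerge -> [{'key3': 'cond5', 'key4': 'cond6'},    (A-side: cond5 AND cond6)
      #            {'key5': 'cond7', 'key6': 'cond8'}]    (B-side: cond7 AND cond8)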
O4. There is no O4. DISCARD-INDEX-MERGE-4 will remain there. The idea is
    that although DISCARD-INDEX-MERGE-4 does discard plans, so far we haven't
    seen any complaints that could be attributed to it.
    If we face the need to lift DISCARD-INDEX-MERGE-4, our answer will be to
    lift it and produce a cross-product:

      ((key1p OR key2p) AND (key3p OR key4p))
      OR
      ((key5p OR key6p) AND (key7p OR key8p))

      = (key1p OR key2p OR key5p OR key6p) AND   // this part is currently
        (key3p OR key4p OR key5p OR key6p) AND   // produced

        (key1p OR key2p OR key7p OR key8p) AND   // this part will be added
        (key3p OR key4p OR key7p OR key8p)

In order to limit the impact of this combinatorial explosion, we will
introduce a rule that we won't generate more than a #defined
MAX_IMERGE_OPTS number of options.
3. Testing and required coverage
================================
So far we could find the following use cases:
* BUG#17259: Query optimizer chooses wrong index
* BUG#17673: Optimizer does not use Index Merge optimization in some cases
* BUG#23322: Optimizer sometimes erroniously prefers other index over index merge
* BUG#30151: optimizer is very reluctant to chose index_merge algorithm
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)
[Maria-developers] WL#24 Updated (by Sergei): index_merge: fair choice between index_merge union and range access
by worklog-noreply@askmonty.org 29 Jun '10
by worklog-noreply@askmonty.org 29 Jun '10
29 Jun '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: index_merge: fair choice between index_merge union and range access
CREATION DATE..: Tue, 26 May 2009, 12:10
SUPERVISOR.....: Monty
IMPLEMENTOR....:
COPIES TO......: Psergey
CATEGORY.......: Server-RawIdeaBin
TASK ID........: 24 (http://askmonty.org/worklog/?tid=24)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Sergei - Tue, 29 Jun 2010, 14:00)=-=-
Category updated.
--- /tmp/wklog.24.old.31772 2010-06-29 14:00:05.000000000 +0000
+++ /tmp/wklog.24.new.31772 2010-06-29 14:00:05.000000000 +0000
@@ -1 +1 @@
-Server-Sprint
+Server-RawIdeaBin
-=-=(Guest - Sun, 16 Aug 2009, 02:13)=-=-
Low Level Design modified.
--- /tmp/wklog.24.old.23383 2009-08-16 02:13:54.000000000 +0300
+++ /tmp/wklog.24.new.23383 2009-08-16 02:13:54.000000000 +0300
@@ -125,7 +125,7 @@
The optimizer will generate the plans that only use the "col1=c1" part. The
right side of the AND will be ignored even if it has good selectivity.
-(Here an imerge for col2=c2 OR col3=c3 won't be built since neither col2=c2 nor
+(Here no imerge for col2=c2 OR col3=c3 will be built since neither col2=c2 nor
col3=c3 represent index ranges.)
@@ -199,7 +199,7 @@
O2. "Create index_merge accesses when possible"
Current tree_or() will not create index_merge access when it could create
- non-index merge access (see DISCARD-IMERGE-3 and its example in the "Problems
+ non-index merge access (see DISCARD-IMERGE-2 and its example in the "Problems
in the current implementation" section). This will be changed to work as
follows: we will create index_merge made for index scans that didn't have
their match in the other sel_tree.
-=-=(Guest - Sun, 16 Aug 2009, 01:03)=-=-
Low Level Design modified.
--- /tmp/wklog.24.old.20767 2009-08-16 01:03:11.000000000 +0300
+++ /tmp/wklog.24.new.20767 2009-08-16 01:03:11.000000000 +0300
@@ -18,6 +18,8 @@
# a range tree has range access options, possibly for several keys
range_tree = range(key1) AND range(key2) AND ... AND range(keyN);
+ (here range(keyi) may represent ranges not for initial keyi prefixes,
+ but ranges for any infixes for keyi)
# merge tree represents several way to index_merge
imerge_tree = imerge1 AND imerge2 AND ...
@@ -47,13 +49,13 @@
R.add(range_union(A.range(i), B.range(i)));
if (R has at least one range access)
- return R;
+ return R; // DISCARD-IMERGE-2
else
{
/* could not build any range accesses. construct index_merge */
- remove non-ranges from A; // DISCARD-IMERGE-2
+ remove non-ranges from A;
remove non-ranges from B;
- return new index_merge(A, B);
+ return new index_merge(A, B); // DISCARD-IMERGE-3
}
}
else if (A is range tree and B is index_merge tree (or vice versa))
@@ -65,12 +67,12 @@
(range_treeB_11 OR range_treeB_12 OR ... OR range_treeB_1N) AND
(range_treeB_21 OR range_treeB_22 OR ... OR range_treeB_2N) AND
...
- (range_treeB_K1 OR range_treeB_K2 OR ... OR range_treeB_kN) AND
+ (range_treeB_K1 OR range_treeB_K2 OR ... OR range_treeB_kN)
=
(range_treeA OR range_treeB_11 OR ... OR range_treeB_1N) AND
(range_treeA OR range_treeB_21 OR ... OR range_treeB_2N) AND
...
- (range_treeA OR range_treeB_11 OR ... OR range_treeB_1N) AND
+ (range_treeA OR range_treeB_11 OR ... OR range_treeB_1N)
Now each line represents an index_merge..
}
@@ -82,18 +84,18 @@
OR
imergeB1 AND imergeB2 AND ... AND imergeBN
- -> (discard all imergeA{i=2,3,...} -> // DISCARD-IMERGE-3
+ -> (discard all imergeA{i=2,3,...} -> // DISCARD-IMERGE-4
imergeA1
OR
- imergeB1 AND imergeB2 AND ... AND imergeBN =
+ imergeB1 =
- = (combine imergeA1 with each of the imergeB{i} ) =
+ = (combine imergeA1 with each of the range_treeB_1{i} ) =
- combine(imergeA1 OR imergeB1) AND
- combine(imergeA1 OR imergeB2) AND
+ combine(imergeA1 OR range_treeB_11) AND
+ combine(imergeA1 OR range_treeB_12) AND
... AND
- combine(imergeA1 OR imergeBN)
+ combine(imergeA1 OR range_treeB_1N)
}
}
@@ -109,7 +111,7 @@
DISCARD-IMERGE-2 step will cause index_merge option to be discarded when
the WHERE clause has this form (conditions t.badkey may have abritrary form):
- (t.badkey<c1 AND t.key1=c1) OR (t.key1=c2 AND t.badkey < c2)
+ (t.badkey<c1 AND t.key1=c1) OR (t.key2=c2 AND t.badkey < c2)
DISCARD-IMERGE-3 manifests itself as the following effect: suppose there are
two indexes:
@@ -123,6 +125,8 @@
The optimizer will generate the plans that only use the "col1=c1" part. The
right side of the AND will be ignored even if it has good selectivity.
+(Here an imerge for col2=c2 OR col3=c3 won't be built since neither col2=c2 nor
+col3=c3 represent index ranges.)
2. New implementation
-=-=(Guest - Mon, 20 Jul 2009, 17:13)=-=-
Dependency deleted: 30 no longer depends on 24
-=-=(Guest - Sat, 20 Jun 2009, 09:34)=-=-
Low Level Design modified.
--- /tmp/wklog.24.old.21663 2009-06-20 09:34:48.000000000 +0300
+++ /tmp/wklog.24.new.21663 2009-06-20 09:34:48.000000000 +0300
@@ -4,6 +4,7 @@
2. New implementation
2.1 New tree_and()
2.2 New tree_or()
+3. Testing and required coverage
</contents>
1. Current implementation overview
@@ -240,3 +241,14 @@
In order to limit the impact of this combinatorial explosion, we will
introduce a rule that we won't generate more than #defined
MAX_IMERGE_OPTS options.
+
+3. Testing and required coverage
+================================
+So far could find the following user cases:
+
+* BUG#17259: Query optimizer chooses wrong index
+* BUG#17673: Optimizer does not use Index Merge optimization in some cases
+* BUG#23322: Optimizer sometimes erroniously prefers other index over index merge
+* BUG#30151: optimizer is very reluctant to chose index_merge algorithm
+
+
-=-=(Guest - Thu, 18 Jun 2009, 16:55)=-=-
Low Level Design modified.
--- /tmp/wklog.24.old.19152 2009-06-18 16:55:00.000000000 +0300
+++ /tmp/wklog.24.new.19152 2009-06-18 16:55:00.000000000 +0300
@@ -141,13 +141,15 @@
Operations on SEL_ARG trees will be modified to produce/process the trees of
this kind:
+
2.1 New tree_and()
------------------
In order not to lose plans, we'll make these changes:
-1. Don't remove index_merge part of the tree.
+A1. Don't remove index_merge part of the tree (this will take care of
+ DISCARD-IMERGE-1 problem)
-2. Push range conditions down into index_merge trees that may support them.
+A2. Push range conditions down into index_merge trees that may support them.
if one tree has range(key1) and the other tree has imerge(key1 OR key2)
then perform an equvalent of this operation:
@@ -155,8 +157,86 @@
(rangeA(key1) AND rangeB(key1)) OR (rangeA(key1) AND rangeB(key2))
-3. Just as before: if both sel_tree A and sel_tree B have index_merge options,
+A3. Just as before: if both sel_tree A and sel_tree B have index_merge options,
concatenate them together.
-2.2 New tree_or()
+2.2 New tree_or()
+-----------------
+O1. Dont remove non-range plans:
+ Current tree_or() code will refuse to produce index_merge plans for
+ conditions like
+
+ "t.key1part2=const OR t.key2part1=const"
+
+ (this is marked as DISCARD-IMERGE-3). This was justifed as the left part of
+ the AND condition is not usable for range access, and the operation of
+ tree_and() guaranteed that there was no way it could changed to make a
+ usable range plan. With new tree_and() and rule A2, this is no longer the
+ case. For example for this query:
+
+ (t.key1part2=const OR t.key2part1=const) AND t.key1part1=const
+
+ it will construct a
+
+ imerge(t.key1part2=const OR t.key2part1=const), range(t.key1part1=const)
+
+ then tree_and() will apply rule A2 to push the range down into index merge
+ and after that we'll have:
+
+ range(t.key1part1=const)
+ imerge(
+ t.key1part2=const AND t.key1part1=const,
+ t.key2part1=const
+ )
+ note that imerge(...) describes a usable index_merge plan and it's possible
+ that it will be the best access path.
+
+O2. "Create index_merge accesses when possible"
+ Current tree_or() will not create index_merge access when it could create
+ non-index merge access (see DISCARD-IMERGE-3 and its example in the "Problems
+ in the current implementation" section). This will be changed to work as
+ follows: we will create index_merge made for index scans that didn't have
+ their match in the other sel_tree.
+ Ilustrating it with an example:
+
+ | sel_tree_A | sel_tree_B | A or B | include in index_merge?
+ ------+------------+------------+--------+------------------------
+ key1 | cond1 | cond2 | condM | no
+ key2 | cond3 | cond4 | NULL | no
+ key3 | cond5 | | | yes, A-side
+ key4 | cond6 | | | yes, A-side
+ key5 | | cond7 | | yes, B-side
+ key6 | | cond8 | | yes, B-side
+
+ here we assume that
+ - (cond1 OR cond2) did produce a combined range. Not including them in
+ index_merge.
+ - (cond3 OR cond4) didn't produce a usable range (e.g. they were
+ t.key1part1=c1 AND t.key1part2=c1, respectively, and combining them
+ didn't yield any range list)
+ - All other scand didn't have their counterparts, so we'll end up with a
+ SEL_TREE of:
+
+ range(condM) AND index_merge((cond5 AND cond6),(cond7 AND cond8))
+ .
+
+O4. There is no O4. DISCARD-INDEX-MERGE-4 will remain there. The idea is
+that although DISCARD-INDEX-MERGE-4 does discard plans, so far we haven
+seen any complaints that could be attributed to it.
+If we face the need to lift DISCARD-INDEX-MERGE-4, our answer will be to
+lift it ,and produce a cross-product:
+
+ ((key1p OR key2p) AND (key3p OR key4p))
+ OR
+ ((key5p OR key6p) AND (key7p OR key8p))
+
+ = (key1p OR key2p OR key5p OR key6p) AND // this part is currently
+ (key3p OR key4p OR key5p OR key6p) AND // produced
+
+ (key1p OR key2p OR key5p OR key6p) AND // this part will be added
+ (key3p OR key4p OR key5p OR key6p) //.
+
+In order to limit the impact of this combinatorial explosion, we will
+introduce a rule that we won't generate more than #defined
+MAX_IMERGE_OPTS options.
-=-=(Guest - Thu, 18 Jun 2009, 14:56)=-=-
Low Level Design modified.
--- /tmp/wklog.24.old.15612 2009-06-18 14:56:09.000000000 +0300
+++ /tmp/wklog.24.new.15612 2009-06-18 14:56:09.000000000 +0300
@@ -1 +1,162 @@
+<contents>
+1. Current implementation overview
+1.1. Problems in the current implementation
+2. New implementation
+2.1 New tree_and()
+2.2 New tree_or()
+</contents>
+
+1. Current implementation overview
+==================================
+At the moment, range analyzer works as follows:
+
+SEL_TREE structure represents
+
+ # There are sel_trees, a sel_tree is either range or merge tree
+ sel_tree = range_tree | imerge_tree
+
+ # a range tree has range access options, possibly for several keys
+ range_tree = range(key1) AND range(key2) AND ... AND range(keyN);
+
+ # merge tree represents several way to index_merge
+ imerge_tree = imerge1 AND imerge2 AND ...
+
+ # a way to do index merge == a set to use of different indexes.
+ imergeX = range_tree1 OR range_tree2 OR ..
+ where no pair of range_treeX have ranges over the same index.
+
+
+ tree_and(A, B)
+ {
+ if (both A and B are range trees)
+ return a range_tree with computed intersection for each range;
+ if (only one of A and B is a range tree)
+ return that tree; // DISCARD-IMERGE-1
+ // at this point both trees are index_merge trees
+ return concat_lists( A.imerge1 ... A.imergeN, B.imerge1 ... B.imergeN);
+ }
+
+
+ tree_or(A, B)
+ {
+ if (A and B are range trees)
+ {
+ R = new range_tree;
+ for each index i
+ R.add(range_union(A.range(i), B.range(i)));
+
+ if (R has at least one range access)
+ return R;
+ else
+ {
+ /* could not build any range accesses. construct index_merge */
+ remove non-ranges from A; // DISCARD-IMERGE-2
+ remove non-ranges from B;
+ return new index_merge(A, B);
+ }
+ }
+ else if (A is range tree and B is index_merge tree (or vice versa))
+ {
+ Perform this transformation:
+
+ range_treeA // this is A
+ OR
+ (range_treeB_11 OR range_treeB_12 OR ... OR range_treeB_1N) AND
+ (range_treeB_21 OR range_treeB_22 OR ... OR range_treeB_2N) AND
+ ...
+ (range_treeB_K1 OR range_treeB_K2 OR ... OR range_treeB_kN) AND
+ =
+ (range_treeA OR range_treeB_11 OR ... OR range_treeB_1N) AND
+ (range_treeA OR range_treeB_21 OR ... OR range_treeB_2N) AND
+ ...
+ (range_treeA OR range_treeB_11 OR ... OR range_treeB_1N) AND
+
+ Now each line represents an index_merge..
+ }
+ else if (both A and B are index_merge trees)
+ {
+ Perform this transformation:
+
+ imergeA1 AND imergeA2 AND ... AND imergeAN
+ OR
+ imergeB1 AND imergeB2 AND ... AND imergeBN
+
+ -> (discard all imergeA{i=2,3,...} -> // DISCARD-IMERGE-3
+
+ imergeA1
+ OR
+ imergeB1 AND imergeB2 AND ... AND imergeBN =
+
+ = (combine imergeA1 with each of the imergeB{i} ) =
+
+ combine(imergeA1 OR imergeB1) AND
+ combine(imergeA1 OR imergeB2) AND
+ ... AND
+ combine(imergeA1 OR imergeBN)
+ }
+ }
+
+1.1. Problems in the current implementation
+-------------------------------------------
+As marked in the code above:
+
+DISCARD-IMERGE-1 step will cause index_merge option to be discarded when
+the WHERE clause has this form:
+
+ (t.key1=c1 OR t.key2=c2) AND t.badkey < c3
+
+DISCARD-IMERGE-2 step will cause index_merge option to be discarded when
+the WHERE clause has this form (conditions t.badkey may have abritrary form):
+
+ (t.badkey<c1 AND t.key1=c1) OR (t.key1=c2 AND t.badkey < c2)
+
+DISCARD-IMERGE-3 manifests itself as the following effect: suppose there are
+two indexes:
+
+ INDEX i1(col1, col2),
+ INDEX i2(col1, col3)
+
+and this WHERE clause:
+
+ col1=c1 AND (col2=c2 OR col3=c3)
+
+The optimizer will generate the plans that only use the "col1=c1" part. The
+right side of the AND will be ignored even if it has good selectivity.
+
+
+2. New implementation
+=====================
+
+<general idea>
+* Don't start fighting combinatorial explosion until we've actually got one.
+</>
+
+SEL_TREE structure will be now able to hold both index_merge and range scan
+candidates at the same time. That is,
+
+ sel_tree2 = range_tree AND imerge_tree
+
+where both parts are optional (i.e. can be empty)
+
+Operations on SEL_ARG trees will be modified to produce/process the trees of
+this kind:
+
+2.1 New tree_and()
+------------------
+In order not to lose plans, we'll make these changes:
+
+1. Don't remove index_merge part of the tree.
+
+2. Push range conditions down into index_merge trees that may support them.
+ if one tree has range(key1) and the other tree has imerge(key1 OR key2)
+ then perform an equvalent of this operation:
+
+ rangeA(key1) AND ( rangeB(key1) OR rangeB(key2)) =
+
+ (rangeA(key1) AND rangeB(key1)) OR (rangeA(key1) AND rangeB(key2))
+
+3. Just as before: if both sel_tree A and sel_tree B have index_merge options,
+ concatenate them together.
+
+2.2 New tree_or()
-=-=(Psergey - Wed, 03 Jun 2009, 12:09)=-=-
Dependency created: 30 now depends on 24
-=-=(Guest - Mon, 01 Jun 2009, 23:30)=-=-
High-Level Specification modified.
--- /tmp/wklog.24.old.21580 2009-06-01 23:30:06.000000000 +0300
+++ /tmp/wklog.24.new.21580 2009-06-01 23:30:06.000000000 +0300
@@ -64,6 +64,9 @@
* How strict is the limitation on the form of the WHERE?
+* Which version should this be based on? 5.1? Which patches are should be in
+ (google's/percona's/maria/etc?)
+
* TODO: The optimizer didn't compare costs of index_merge and range before (ok
it did but that was done for accesses to different tables). Will there be any
possible gotchas here?
-=-=(Guest - Wed, 27 May 2009, 13:59)=-=-
Title modified.
--- /tmp/wklog.24.old.9498 2009-05-27 13:59:23.000000000 +0300
+++ /tmp/wklog.24.new.9498 2009-05-27 13:59:23.000000000 +0300
@@ -1 +1 @@
-index_merge optimizer: dont discard index_merge union strategies when range is available
+index_merge: fair choice between index_merge union and range access
------------------------------------------------------------
-=-=(View All Progress Notes, 11 total)=-=-
http://askmonty.org/worklog/index.pl?tid=24&nolimit=1
DESCRIPTION:
Current range optimizer will discard possible index_merge/[sort]union
strategies when there is a possible range plan. This action is a part of
measures we take to avoid combinatorial explosion of possible range/
index_merge strategies.
A bad side effect of this is that for WHERE clauses in form
t.key1= 'very-frequent-value' AND (t.key2='rare-value1' OR t.key3='rare-value2')
the optimizer will
- discard union(key2,key3) in favor of range(key1)
- consider costs of using range(key1) and discard that plan also
and the overall effect is that possible poor range access will cause possible
good index_merge access not to be considered.
This WL is to about lifting this limitation at least for some subset of WHERE
clauses.
HIGH-LEVEL SPECIFICATION:
(Not a ready HLS but draft)
<contents>
Solution overview
Limitations
TODO
</contents>
Solution overview
=================
The idea is to delay discarding potential index_merge plans until the point
where it is really necessary.
This way, we won't have to do much changes in the range analyzer, but will be
able to keep potential index_merge plan just enough so that it's possible to
take it into consideration together with range access plans.
Since there are no changes in the optimizer, the ability to consider both
range and index_merge options will be limited to WHERE clauses of this form:
WHERE := range_cond(key1_1) AND
range_cond(key2_1) AND
other_cond AND
index_merge_OR_cond1(key3_1, key3_2, ...)
index_merge_OR_cond2(key4_1, key4_2, ...)
where
index_merge_OR_cond{N} := (range_cond(keyN_1) OR
range_cond(keyN_2) OR ...)
range_cond(keyX) := condition that allows to construct range access of keyX
and doesn't allow to construct range/index_merge accesses
for any keys of the table in question.
For such WHERE clauses, the range analyzer will produce SEL_TREE of this form:
SEL_TREE(
range(key1_1),
...
range(key2_1),
SEL_IMERGE( (1)
SEL_TREE(key3_1})
SEL_TREE(key3_2})
...
)
...
)
which can be used to make a cost-based choice between range and index_merge.
Limitations
-----------
This will not be a full solution in a sense that the range analyzer will not
be able to produce sel_tree (1) if the WHERE clause is specified in other form
(e.g. brackets were opened).
TODO
----
* is it a problem if there are keys that are referred to both from
index_merge and from range access?
* How strict is the limitation on the form of the WHERE?
* Which version should this be based on? 5.1? Which patches are should be in
(google's/percona's/maria/etc?)
* TODO: The optimizer didn't compare costs of index_merge and range before (ok
it did but that was done for accesses to different tables). Will there be any
possible gotchas here?
LOW-LEVEL DESIGN:
<contents>
1. Current implementation overview
1.1. Problems in the current implementation
2. New implementation
2.1 New tree_and()
2.2 New tree_or()
3. Testing and required coverage
</contents>
1. Current implementation overview
==================================
At the moment, range analyzer works as follows:
SEL_TREE structure represents
# There are sel_trees, a sel_tree is either range or merge tree
sel_tree = range_tree | imerge_tree
# a range tree has range access options, possibly for several keys
range_tree = range(key1) AND range(key2) AND ... AND range(keyN);
(here range(keyi) may represent ranges not for initial keyi prefixes,
but ranges for any infixes for keyi)
# merge tree represents several way to index_merge
imerge_tree = imerge1 AND imerge2 AND ...
# a way to do index merge == a set to use of different indexes.
imergeX = range_tree1 OR range_tree2 OR ..
where no pair of range_treeX have ranges over the same index.
tree_and(A, B)
{
if (both A and B are range trees)
return a range_tree with computed intersection for each range;
if (only one of A and B is a range tree)
return that tree; // DISCARD-IMERGE-1
// at this point both trees are index_merge trees
return concat_lists( A.imerge1 ... A.imergeN, B.imerge1 ... B.imergeN);
}
tree_or(A, B)
{
if (A and B are range trees)
{
R = new range_tree;
for each index i
R.add(range_union(A.range(i), B.range(i)));
if (R has at least one range access)
return R; // DISCARD-IMERGE-2
else
{
/* could not build any range accesses. construct index_merge */
remove non-ranges from A;
remove non-ranges from B;
return new index_merge(A, B); // DISCARD-IMERGE-3
}
}
else if (A is range tree and B is index_merge tree (or vice versa))
{
Perform this transformation:
range_treeA // this is A
OR
(range_treeB_11 OR range_treeB_12 OR ... OR range_treeB_1N) AND
(range_treeB_21 OR range_treeB_22 OR ... OR range_treeB_2N) AND
...
(range_treeB_K1 OR range_treeB_K2 OR ... OR range_treeB_kN)
=
(range_treeA OR range_treeB_11 OR ... OR range_treeB_1N) AND
(range_treeA OR range_treeB_21 OR ... OR range_treeB_2N) AND
...
(range_treeA OR range_treeB_11 OR ... OR range_treeB_1N)
Now each line represents an index_merge..
}
else if (both A and B are index_merge trees)
{
Perform this transformation:
imergeA1 AND imergeA2 AND ... AND imergeAN
OR
imergeB1 AND imergeB2 AND ... AND imergeBN
-> (discard all imergeA{i=2,3,...} -> // DISCARD-IMERGE-4
imergeA1
OR
imergeB1 =
= (combine imergeA1 with each of the range_treeB_1{i} ) =
combine(imergeA1 OR range_treeB_11) AND
combine(imergeA1 OR range_treeB_12) AND
... AND
combine(imergeA1 OR range_treeB_1N)
}
}
1.1. Problems in the current implementation
-------------------------------------------
As marked in the code above:
DISCARD-IMERGE-1 step will cause index_merge option to be discarded when
the WHERE clause has this form:
(t.key1=c1 OR t.key2=c2) AND t.badkey < c3
DISCARD-IMERGE-2 step will cause index_merge option to be discarded when
the WHERE clause has this form (conditions t.badkey may have abritrary form):
(t.badkey<c1 AND t.key1=c1) OR (t.key2=c2 AND t.badkey < c2)
DISCARD-IMERGE-3 manifests itself as the following effect: suppose there are
two indexes:
INDEX i1(col1, col2),
INDEX i2(col1, col3)
and this WHERE clause:
col1=c1 AND (col2=c2 OR col3=c3)
The optimizer will generate the plans that only use the "col1=c1" part. The
right side of the AND will be ignored even if it has good selectivity.
(Here no imerge for col2=c2 OR col3=c3 will be built since neither col2=c2 nor
col3=c3 represent index ranges.)
2. New implementation
=====================
<general idea>
* Don't start fighting combinatorial explosion until we've actually got one.
</>
The SEL_TREE structure will now be able to hold both index_merge and range
scan candidates at the same time. That is,
sel_tree2 = range_tree AND imerge_tree
where both parts are optional (i.e. can be empty)
Operations on SEL_ARG trees will be modified to produce/process the trees of
this kind:
2.1 New tree_and()
------------------
In order not to lose plans, we'll make these changes:
A1. Don't remove the index_merge part of the tree (this takes care of the
DISCARD-IMERGE-1 problem).
A2. Push range conditions down into index_merge trees that may support them:
if one tree has range(key1) and the other tree has imerge(key1 OR key2),
then perform an equivalent of this operation (see the sketch after this
list):
rangeA(key1) AND (rangeB(key1) OR rangeB(key2)) =
(rangeA(key1) AND rangeB(key1)) OR (rangeA(key1) AND rangeB(key2))
A3. Just as before: if both sel_tree A and sel_tree B have index_merge options,
concatenate them together.
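To make rule A2 concrete, below is a minimal illustrative sketch of the new
tree_and() along these lines. All names here (RangeCond, RangeTree, Imerge,
SelTree2, push_range_into_imerge) are simplified stand-ins invented for the
sketch, not the real SEL_TREE/SEL_IMERGE classes of opt_range.cc, and the
per-key intersection of two range parts is left out:

#include <map>
#include <optional>
#include <string>
#include <vector>

// Simplified stand-ins: a range condition is kept as text, a range tree maps
// key name -> range condition, an imerge is an OR-list of range trees.
using RangeCond = std::string;
using RangeTree = std::map<std::string, RangeCond>;
using Imerge    = std::vector<RangeTree>;

// sel_tree2 = [range_tree] AND imerge_tree (both parts optional).
struct SelTree2 {
  std::optional<RangeTree> range_part;
  std::vector<Imerge> imerges;
};

// Rule A2: AND a range condition on `key` into every imerge branch that has
// a range over the same key (branches over other keys are left unchanged,
// as in the O1 example below).
static void push_range_into_imerge(const std::string &key,
                                   const RangeCond &cond, Imerge &imerge) {
  for (RangeTree &branch : imerge) {
    auto it = branch.find(key);
    if (it != branch.end())
      it->second = "(" + it->second + ") AND (" + cond + ")";
  }
}

// New tree_and(): A1 - keep all imerge parts, A3 - concatenate them,
// A2 - push the surviving range conditions down into the imerges.
SelTree2 tree_and(const SelTree2 &a, const SelTree2 &b) {
  SelTree2 result;
  // The real code intersects the two range parts per key; omitted here.
  result.range_part = a.range_part ? a.range_part : b.range_part;
  result.imerges = a.imerges;                                     // A1 + A3
  result.imerges.insert(result.imerges.end(),
                        b.imerges.begin(), b.imerges.end());
  if (result.range_part)                                          // A2
    for (const auto &[key, cond] : *result.range_part)
      for (Imerge &im : result.imerges)
        push_range_into_imerge(key, cond, im);
  return result;
}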
2.2 New tree_or()
-----------------
O1. Don't remove non-range plans:
Current tree_or() code will refuse to produce index_merge plans for
conditions like
"t.key1part2=const OR t.key2part1=const"
(this is marked as DISCARD-IMERGE-3). This was justified because the left
part of the AND condition is not usable for range access, and the operation
of tree_and() guaranteed that there was no way it could be changed into a
usable range plan. With the new tree_and() and rule A2, this is no longer
the case. For example, for this query:
(t.key1part2=const OR t.key2part1=const) AND t.key1part1=const
it will construct:
imerge(t.key1part2=const OR t.key2part1=const), range(t.key1part1=const)
then tree_and() will apply rule A2 to push the range down into index merge
and after that we'll have:
range(t.key1part1=const)
imerge(
t.key1part2=const AND t.key1part1=const,
t.key2part1=const
)
Note that imerge(...) describes a usable index_merge plan, and it is
possible that it will be the best access path.
O2. "Create index_merge accesses when possible"
Current tree_or() will not create an index_merge access when it could create
a non-index_merge access (see DISCARD-IMERGE-2 and its example in the
"Problems in the current implementation" section). This will be changed to
work as follows: we will create an index_merge from the index scans that
didn't have a match in the other sel_tree.
Illustrating this with an example:
| sel_tree_A | sel_tree_B | A or B | include in index_merge?
------+------------+------------+--------+------------------------
key1 | cond1 | cond2 | condM | no
key2 | cond3 | cond4 | NULL | no
key3 | cond5 | | | yes, A-side
key4 | cond6 | | | yes, A-side
key5 | | cond7 | | yes, B-side
key6 | | cond8 | | yes, B-side
Here we assume that:
- (cond1 OR cond2) did produce a combined range (condM), so they are not
included in the index_merge.
- (cond3 OR cond4) didn't produce a usable range (e.g. they were
t.key2part1=c1 and t.key2part2=c2, respectively, and OR-ing them didn't
yield any range list).
- All other scans didn't have their counterparts, so we'll end up with a
SEL_TREE of:
range(condM) AND index_merge((cond5 AND cond6), (cond7 AND cond8)).
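A corresponding illustrative sketch of the per-key decision in the new
tree_or(), for the case of two plain range trees, follows. All types and
names are again simplified stand-ins invented for the sketch (the same ones
as in the tree_and() sketch above), and range_union() is a trivial
placeholder for the real per-key OR-ing of SEL_ARG graphs:

#include <map>
#include <optional>
#include <string>
#include <vector>

using RangeCond = std::string;
using RangeTree = std::map<std::string, RangeCond>;
using Imerge    = std::vector<RangeTree>;

struct SelTree2 {
  std::optional<RangeTree> range_part;
  std::vector<Imerge> imerges;
};

// Placeholder: the real code ORs SEL_ARG graphs and may fail to produce a
// usable range list (the NULL row in the table above); here we just
// concatenate the text and pretend the union always succeeds.
static std::optional<RangeCond> range_union(const RangeCond &a,
                                            const RangeCond &b) {
  return "(" + a + ") OR (" + b + ")";
}

// New tree_or() for two plain range trees: per-key unions that work go into
// the range part (key1/condM), failed pairs are dropped (key2), and scans
// with no counterpart form one index_merge of two range trees (rule O2).
SelTree2 tree_or_ranges(const RangeTree &a, const RangeTree &b) {
  SelTree2 result;
  RangeTree merged, a_only = a, b_only = b;
  for (const auto &[key, cond_a] : a) {
    auto it = b.find(key);
    if (it == b.end())
      continue;                       // A-side only: stays in a_only
    a_only.erase(key);
    b_only.erase(key);
    if (auto u = range_union(cond_a, it->second))
      merged[key] = *u;               // e.g. key1 -> condM
    // else: the pair is dropped entirely, like key2 in the table
  }
  if (!merged.empty())
    result.range_part = merged;
  if (!a_only.empty() && !b_only.empty())
    result.imerges.push_back(Imerge{a_only, b_only});   // O2
  return result;
}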
O4. There is no new rule for DISCARD-IMERGE-4; it will remain in place. The
idea is that although DISCARD-IMERGE-4 does discard plans, so far we haven't
seen any complaints that could be attributed to it.
If we ever face the need to lift DISCARD-IMERGE-4, our answer will be to
lift it and produce a cross-product:
((key1p OR key2p) AND (key3p OR key4p))
OR
((key5p OR key6p) AND (key7p OR key8p))
= (key1p OR key2p OR key5p OR key6p) AND // this part is currently
(key3p OR key4p OR key5p OR key6p) AND // produced
(key1p OR key2p OR key7p OR key8p) AND // this part will be added
(key3p OR key4p OR key7p OR key8p)
In order to limit the impact of this combinatorial explosion, we will
introduce a rule that we won't generate more than #defined
MAX_IMERGE_OPTS options.
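Should DISCARD-IMERGE-4 ever be lifted, the capped cross-product could look
roughly like the sketch below. MAX_IMERGE_OPTS and the string representation
of an imerge are hypothetical placeholders used only for illustration:

#include <cstddef>
#include <string>
#include <vector>

using Imerge = std::string;                     // e.g. "(key1p OR key2p)"

static const std::size_t MAX_IMERGE_OPTS = 16;  // hypothetical cap

// (A1 AND A2 AND ...) OR (B1 AND B2 AND ...) ->
//   AND over all pairs (Ai OR Bj), truncated once the cap is reached.
std::vector<Imerge> or_imerge_lists(const std::vector<Imerge> &a,
                                    const std::vector<Imerge> &b) {
  std::vector<Imerge> result;
  for (const Imerge &ai : a)
    for (const Imerge &bj : b) {
      if (result.size() >= MAX_IMERGE_OPTS)
        return result;                          // give up on remaining pairs
      result.push_back("(" + ai + " OR " + bj + ")");
    }
  return result;
}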
3. Testing and required coverage
================================
So far we could find the following use cases:
* BUG#17259: Query optimizer chooses wrong index
* BUG#17673: Optimizer does not use Index Merge optimization in some cases
* BUG#23322: Optimizer sometimes erroniously prefers other index over index merge
* BUG#30151: optimizer is very reluctant to chose index_merge algorithm
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)
[Maria-developers] WL#67 Updated (by Psergey): ICP/MRR backport
by worklog-noreply@askmonty.org 29 Jun '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: ICP/MRR backport
CREATION DATE..: Thu, 26 Nov 2009, 15:19
SUPERVISOR.....: Monty
IMPLEMENTOR....: Psergey
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 67 (http://askmonty.org/worklog/?tid=67)
VERSION........: Server-9.x
STATUS.........: Complete
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Psergey - Tue, 29 Jun 2010, 13:57)=-=-
Status updated.
--- /tmp/wklog.67.old.31561 2010-06-29 13:57:50.000000000 +0000
+++ /tmp/wklog.67.new.31561 2010-06-29 13:57:50.000000000 +0000
@@ -1 +1 @@
-Un-Assigned
+Complete
-=-=(Guest - Sun, 13 Jun 2010, 16:57)=-=-
Dependency deleted: 91 no longer depends on 67
-=-=(Igor - Wed, 10 Mar 2010, 19:14)=-=-
High Level Description modified.
--- /tmp/wklog.67.old.25641 2010-03-10 19:14:45.000000000 +0000
+++ /tmp/wklog.67.new.25641 2010-03-10 19:14:45.000000000 +0000
@@ -1,2 +1,2 @@
-Backport DS-MRR into MariaDB-5.2 codebase, also adding certain extra features to
-make it more usable.
+Backport ICP and DS-MRR into MariaDB-5.2 codebase, also adding certain extra
+features to make it more usable.
-=-=(Guest - Wed, 10 Mar 2010, 19:12)=-=-
Title modified.
--- /tmp/wklog.67.old.25456 2010-03-10 19:12:57.000000000 +0000
+++ /tmp/wklog.67.new.25456 2010-03-10 19:12:57.000000000 +0000
@@ -1 +1 @@
-MRR backport
+ICP/MRR backport
-=-=(Psergey - Sun, 28 Feb 2010, 14:56)=-=-
Dependency created: 91 now depends on 67
-=-=(Psergey - Sun, 28 Feb 2010, 14:54)=-=-
Dependency deleted: 94 no longer depends on 67
-=-=(Psergey - Sun, 28 Feb 2010, 14:09)=-=-
Dependency created: 94 now depends on 67
-=-=(Psergey - Thu, 26 Nov 2009, 20:21)=-=-
High-Level Specification modified.
--- /tmp/wklog.67.old.9329 2009-11-26 20:21:28.000000000 +0200
+++ /tmp/wklog.67.new.9329 2009-11-26 20:21:28.000000000 +0200
@@ -65,17 +65,19 @@
2.5 Make MRR code more of a module
----------------------------------
-Some code in handler.cc can be moved to separate file.
-But changes in opt_range.cc can't.
-TODO: Sort out how much we really can do here. Initial guess is not much as the
-code consists of:
+It is not possible to make MRR to be a totally separate module, as its code
+consists of :
- Default MRR implementation in handler.cc
- Changes in opt_range.cc to use MRR instead of multiple records_in_range()
- calls. These rely on opt_range.cc's internal structures like SEL_ARG trees and
+ calls. These rely on opt_range.cc's internal stuctures like SEL_ARG trees and
so there is not much point in moving them out.
-- DS-MRR implementations which are spread over storage engines.
-and the only modularization we see is to move #1 into a separate file which
-won't achieve much.
+- DS-MRR impelementations which are spread over storage engines.
+
+We'll try to modularize what we can:
+- Move out default MRR implementation from handler.cc
+- Move possible parts out of opt_range.cc into a separate file.
+
+
2.6 Improve the cost model
--------------------------
-=-=(Psergey - Thu, 26 Nov 2009, 19:06)=-=-
High-Level Specification modified.
--- /tmp/wklog.67.old.6449 2009-11-26 19:06:04.000000000 +0200
+++ /tmp/wklog.67.new.6449 2009-11-26 19:06:04.000000000 +0200
@@ -1,4 +1,3 @@
-
<contents>
1. Requirements
2. Required actions
@@ -44,6 +43,7 @@
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=index_condi…
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=mrr
+http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=icp
2.2 Backport DS-MRR code to MariaDB 5.2
---------------------------------------
-=-=(Psergey - Thu, 26 Nov 2009, 18:15)=-=-
High-Level Specification modified.
--- /tmp/wklog.67.old.4161 2009-11-26 18:15:36.000000000 +0200
+++ /tmp/wklog.67.new.4161 2009-11-26 18:15:36.000000000 +0200
@@ -1,3 +1,17 @@
+
+<contents>
+1. Requirements
+2. Required actions
+2.1 Fix DS-MRR/InnoDB bugs
+2.2 Backport DS-MRR code to MariaDB 5.2
+2.3 Introduce control variables
+2.4 Other backport issues
+2.5 Make MRR code more of a module
+2.6 Improve the cost model
+2.7 Let DS-MRR support clustered primary keys
+</contents>
+
+
1. Requirements
===============
@@ -63,4 +77,28 @@
and the only modularization we see is to move #1 into a separate file which
won't achieve much.
+2.6 Improve the cost model
+--------------------------
+At the moment DS-MRR cost formula re-uses non-MRR scan costs, which uses
+records_in_range() calls, followed by index_only_read_time() or read_time()
+calls to produce the estimate for read cost.
+
+ We should change this (TODO sort out how exactly)
+
+Note: this means that the query plans will change from MariaDB 5.2.
+
+2.7 Let DS-MRR support clustered primary keys
+---------------------------------------------
+At the moment DS-MRR is not supported for clustered primary keys. It is not
+needed when MRR is used for range access, because range access is done over
+an ordered list of ranges, but it is useful for BKA.
+
+TODO:
+ it's useful for BKA because BKA makes MRR scans over un-orderered
+ non-disjoint lists of ranges. Then we can sort these and do ordered scans.
+ There is still no use for DS-MRR over clustered primary key for range
+ access, where the ranges are disjoint and ordered.
+ How about postponing this item until BKA is backported?
+
+
------------------------------------------------------------
-=-=(View All Progress Notes, 11 total)=-=-
http://askmonty.org/worklog/index.pl?tid=67&nolimit=1
DESCRIPTION:
Backport ICP and DS-MRR into MariaDB-5.2 codebase, also adding certain extra
features to make it more usable.
HIGH-LEVEL SPECIFICATION:
<contents>
1. Requirements
2. Required actions
2.1 Fix DS-MRR/InnoDB bugs
2.2 Backport DS-MRR code to MariaDB 5.2
2.3 Introduce control variables
2.4 Other backport issues
2.5 Make MRR code more of a module
2.6 Improve the cost model
2.7 Let DS-MRR support clustered primary keys
</contents>
1. Requirements
===============
We need the following:
1. Latest MRR interface support, including extensions to support ICP when
using BKA.
2. Let DS-MRR support clustered primary keys (needed when using BKA).
3. Remove conditions used for key access from the condition pushed to index
(ATM this manifests itself as "Using index condition" appearing where there
was no "Using where". TODO: example of this?)
4. Introduce a separate @@optimizer_switch flag for turning ICP on/off (ATM
it is switched on/off by @@engine_condition_pushdown).
5. Introduce a separate @@mrr_buffer_size variable to control MRR buffer size
for range+MRR scans. ATM it is controlled by @@read_rnd_size, which makes it
non-obvious for a number of users.
6. Rename multi_range_read_info_const() to look like it is not a part of the
MRR interface.
7. Improve MRR's cost model.
8. Try to make MRR more of a module.
2. Required actions
===================
Roughly in the order in which it will be done:
2.1 Fix DS-MRR/InnoDB bugs
--------------------------
We need to fix the bugs listed here:
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=index_condi…
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=mrr
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=icp
2.2 Backport DS-MRR code to MariaDB 5.2
---------------------------------------
The easiest way seems to be to manually move the needed code from mysql-6.0
(or whatever it's called now) to MariaDB.
2.3 Introduce control variables
-------------------------------
Act on items #4 and #5 from the requirements. Should be easy as
@@optimizer_switch is supported in 5.1 codebase.
2.4 Other backport issues
-------------------------
* Figure out what to do with NDB/MRR. 5.1 codebase has "old" NDB/MRR
implementation. mysql-6.0 (and NDB's branch) have the updated NDB/MRR
but merging it into 5.1 can be very labor-intensive.
Will it be ok to disable NDB/MRR altogether?
2.5 Make MRR code more of a module
----------------------------------
It is not possible to make MRR a totally separate module, as its code
consists of:
- Default MRR implementation in handler.cc
- Changes in opt_range.cc to use MRR instead of multiple records_in_range()
calls. These rely on opt_range.cc's internal structures like SEL_ARG trees and
so there is not much point in moving them out.
- DS-MRR implementations which are spread over storage engines.
We'll try to modularize what we can:
- Move out default MRR implementation from handler.cc
- Move possible parts out of opt_range.cc into a separate file.
2.6 Improve the cost model
--------------------------
At the moment the DS-MRR cost formula re-uses the non-MRR scan cost, which
uses records_in_range() calls followed by index_only_read_time() or
read_time() calls to produce the read cost estimate.
We should change this (TODO: sort out how exactly).
Note: this means that the query plans will change from MariaDB 5.2.
2.7 Let DS-MRR support clustered primary keys
---------------------------------------------
At the moment DS-MRR is not supported for clustered primary keys. It is not
needed when MRR is used for range access, because range access is done over
an ordered list of ranges, but it is useful for BKA.
TODO:
It's useful for BKA because BKA makes MRR scans over un-ordered,
non-disjoint lists of ranges. Then we can sort these and do ordered scans.
There is still no use for DS-MRR over clustered primary key for range
access, where the ranges are disjoint and ordered.
How about postponing this item until BKA is backported?
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)
[Maria-developers] WL#120 Updated (by Knielsen): Replication API for stacked event generators
by worklog-noreply@askmonty.org 29 Jun '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Replication API for stacked event generators
CREATION DATE..: Mon, 07 Jun 2010, 13:13
SUPERVISOR.....: Knielsen
IMPLEMENTOR....: Knielsen
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 120 (http://askmonty.org/worklog/?tid=120)
VERSION........: Server-9.x
STATUS.........: In-Progress
PRIORITY.......: 60
WORKED HOURS...: 2
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Knielsen - Tue, 29 Jun 2010, 13:51)=-=-
Status updated.
--- /tmp/wklog.120.old.31179 2010-06-29 13:51:20.000000000 +0000
+++ /tmp/wklog.120.new.31179 2010-06-29 13:51:20.000000000 +0000
@@ -1 +1 @@
-Assigned
+In-Progress
-=-=(Knielsen - Mon, 28 Jun 2010, 07:11)=-=-
High-Level Specification modified.
--- /tmp/wklog.120.old.18416 2010-06-28 07:11:09.000000000 +0000
+++ /tmp/wklog.120.new.18416 2010-06-28 07:11:09.000000000 +0000
@@ -14,10 +14,14 @@
An example of an event consumer is the writing of the binlog on a master.
-Event generators are not really plugins. Rather, there are specific points in
-the server where events are generated. However, a generator can be part of a
-plugin, for example a PBXT engine-level replication event generator would be
-part of the PBXT storage engine plugin.
+Some event generators are not really plugins. Rather, there are specific
+points in the server where events are generated. However, a generator can be
+part of a plugin, for example a PBXT engine-level replication event generator
+would be part of the PBXT storage engine plugin. And for example we could
+write a filter plugin, which would be stacked on top of an existing generator
+and provide the same event types and interfaces, but filtered in some way (for
+example by removing certain events on the master side, or by re-writing events
+in certain ways).
Event consumers on the other hand could be a plugin.
@@ -85,6 +89,9 @@
some is kept as reference to context (eg. THD) only. This however looses most
of the mentioned advantages for materialisation.
+Also, the non-materialising interface should be a good interface on top of
+which to build a materialising interface.
+
The design proposed here aims for as little materialisation as possible.
-=-=(Knielsen - Thu, 24 Jun 2010, 14:29)=-=-
Category updated.
--- /tmp/wklog.120.old.7440 2010-06-24 14:29:38.000000000 +0000
+++ /tmp/wklog.120.new.7440 2010-06-24 14:29:38.000000000 +0000
@@ -1 +1 @@
-Server-RawIdeaBin
+Server-Sprint
-=-=(Knielsen - Thu, 24 Jun 2010, 14:29)=-=-
Status updated.
--- /tmp/wklog.120.old.7440 2010-06-24 14:29:38.000000000 +0000
+++ /tmp/wklog.120.new.7440 2010-06-24 14:29:38.000000000 +0000
@@ -1 +1 @@
-Un-Assigned
+Assigned
-=-=(Knielsen - Thu, 24 Jun 2010, 14:29)=-=-
Low Level Design modified.
--- /tmp/wklog.120.old.7355 2010-06-24 14:29:11.000000000 +0000
+++ /tmp/wklog.120.new.7355 2010-06-24 14:29:11.000000000 +0000
@@ -1 +1,385 @@
+A consumer is implented as a virtual class (interface). There is one virtual
+function for every event that can be received. A consumer would derive from
+the base class and override methods for the events it wants to receive.
+
+There is one consumer interface for each generator. When a generator A is
+stacked on B, the consumer interface for A inherits from the interface for
+B. This way, when A defers an event to B, the consumer for A will receive the
+corresponding event from B.
+
+There are methods for a consumer to register itself to receive events from
+each generator. I still need to find a way for a consumer in one plugin to
+register itself with a generator implemented in another plugin (eg. PBXT
+engine-level replication). I also need to add a way for consumers to
+de-register themselves.
+
+The current design has consumer callbacks return 0 for success and error code
+otherwise. I still need to think more about whether this is useful (ie. what
+is the semantics of returning an error from a consumer callback).
+
+Each event passed to consumers is defined as a class with public accessor
+methods to a private context (which is mostly the THD).
+
+My intension is to make all events passed around const, so that the same event
+can be passed to each of multiple registered consumers (and to emphasise that
+consumers do not have the ability to modify events). It still needs to be seen
+whether that const-ness will be feasible in practise without very heavy
+modification/constification of exiting code.
+
+What follows is a partial draft of a possible definition of the API as
+concrete C++ class definitions.
+
+-----------------------------------------------------------------------
+
+/*
+ Virtual base class for generated replication events.
+
+ This is the parent of events generated from all kinds of generators. Only
+ child classes can be instantiated.
+
+ This class can be used by code that wants to treat events in a generic way,
+ without any knowledge of event details. I still need to decide whether such
+ generic code is sensible.
+*/
+class rpl_event_base
+{
+ /*
+ Maybe we will want the ability to materialise an event to a standard
+ binary format. This could be achieved with a base method like this. The
+ actual materialisation would be implemented in each deriving class. The
+ public methods would provide different interfaces for specifying the
+ buffer or for writing directly into IO_CACHE or file.
+ */
+
+ /* Return 0 on success, -1 on error, -2 on out-of-buffer. */
+ int materialise(uchar *buffer, size_t buflen) const;
+ /*
+ Returns NULL on error or else malloc()ed buffer with materialised event,
+ caller must free().
+ */
+ uchar *materialise() const;
+ /* Same but using passed in memroot. */
+ uchar *materialise(mem_root *memroot) const;
+ /*
+ Materialise to user-supplied writer function (could write directly to file
+ or the like).
+ */
+ int materialise(int (*writer)(uchar *data, size_t len, void *context)) const;
+
+ /*
+ As to for what to do with a materialised event, there are a couple of
+ possibilities.
+
+ One is to have a de_materialise() method somewhere that can construct an
+ rpl_event_base (really a derived class of course) from a buffer or writer
+ function. This would require each accessor function to conditionally read
+ its data from either THD context or buffer (GCC is able to optimise
+ several such conditionals in multiple accessor function calls into one
+ conditional), or we can make all accessors virtual if the performance hit
+ is acceptable.
+
+ Another is to have different classes for accessing events read from
+ materialised event data.
+
+ Also, I still need to think about whether it is at all useful to be able
+ to generically materialise an event at this level. It may be that any
+ binlog/transport will in any case need to undertand more of the format of
+ events, so that such materialisation/transport is better done at a
+ different layer.
+ */
+
+protected:
+ /* Implementation which is the basis for materialise(). */
+ virtual int do_materialise(int (*writer)(uchar *data, size_t len,
+ void *context)) const = 0;
+
+private:
+ /* Virtual base class, private constructor to prevent instantiation. */
+ rpl_event_base();
+};
+
+
+/*
+ These are the event types output from the transaction event generator.
+
+ This generator is not stacked on anything.
+
+ The transaction event generator marks the start and end (commit or rollback)
+ of transactions. It also gives information about whether the transaction was
+ a full transaction or autocommitted statement, whether transactional tables
+ were involved, whether non-transactional tables were involved, and XA
+ information (ToDo).
+*/
+
+/* Base class for transaction events. */
+class rpl_event_transaction_base : public rpl_event_base
+{
+public:
+ /*
+ Get the local transaction id. This idea is only unique within one server.
+ It is allocated whenever a new transaction is started.
+ Can be used to identify events belonging to the same transaction in a
+ binlog-like stream of events streamed in parallel among multiple
+ transactions.
+ */
+ uint64_t get_local_trx_id() const { return thd->local_trx_id; };
+
+ bool get_is_autocommit() const;
+
+private:
+ /* The context is the THD. */
+ THD *thd;
+
+ rpl_event_transaction_base(THD *_thd) : thd(_thd) { };
+};
+
+/* Transaction start event. */
+class rpl_event_transaction_start : public rpl_event_transaction_base
+{
+
+};
+
+/* Transaction commit. */
+class rpl_event_transaction_commit : public rpl_event_transaction_base
+{
+public:
+ /*
+ The global transaction id is unique cross-server.
+
+ It can be used to identify the position from which to start a slave
+ replicating from a master.
+
+ This global ID is only available once the transaction is decided to commit
+ by the TC manager / primary redundancy service. This TC also allocates the
+ ID and decides the exact semantics (can there be gaps, etc); however the
+ format is fixed (cluster_id, running_counter).
+ */
+ struct global_transaction_id
+ {
+ uint32_t cluster_id;
+ uint64_t counter;
+ };
+
+ const global_transaction_id *get_global_transaction_id() const;
+};
+
+/* Transaction rollback. */
+class rpl_event_transaction_rollback : public rpl_event_transaction_base
+{
+
+};
+
+
+/* Base class for statement events. */
+class rpl_event_statement_base : public rpl_event_base
+{
+public:
+ LEX_STRING get_current_db() const;
+};
+
+class rpl_event_statement_start : public rpl_event_statement_base
+{
+
+};
+
+class rpl_event_statement_end : public rpl_event_statement_base
+{
+public:
+ int get_errorcode() const;
+};
+
+class rpl_event_statement_query : public rpl_event_statement_base
+{
+public:
+ LEX_STRING get_query_string();
+ ulong get_sql_mode();
+ const CHARSET_INFO *get_character_set_client();
+ const CHARSET_INFO *get_collation_connection();
+ const CHARSET_INFO *get_collation_server();
+ const CHARSET_INFO *get_collation_default_db();
+
+ /*
+ Access to relevant flags that affect query execution.
+
+ Use as if (ev->get_flags() & (uint32)ROW_FOREIGN_KEY_CHECKS) { ... }
+ */
+ enum flag_bits
+ {
+ STMT_FOREIGN_KEY_CHECKS, // @@foreign_key_checks
+ STMT_UNIQUE_KEY_CHECKS, // @@unique_checks
+ STMT_AUTO_IS_NULL, // @@sql_auto_is_null
+ };
+ uint32_t get_flags();
+
+ ulong get_auto_increment_offset();
+ ulong get_auto_increment_increment();
+
+ // And so on for: time zone; day/month names; connection id; LAST_INSERT_ID;
+ // INSERT_ID; random seed; user variables.
+ //
+ // We probably also need get_uses_temporary_table(), get_used_user_vars(),
+ // get_uses_auto_increment() and so on, so a consumer can get more
+ // information about what kind of context information a query will need when
+ // executed on a slave.
+};
+
+class rpl_event_statement_load_query : public rpl_event_statement_query
+{
+
+};
+
+/*
+ This event is fired with blocks of data for files read (from server-local
+ file or client connection) for LOAD DATA.
+*/
+class rpl_event_statement_load_data_block : public rpl_event_statement_base
+{
+public:
+ struct block
+ {
+ const uchar *ptr;
+ size_t size;
+ };
+ block get_block() const;
+};
+
+/* Base class for row-based replication events. */
+class rpl_event_row_base : public rpl_event_base
+{
+public:
+ /*
+ Access to relevant handler extra flags and other flags that affect row
+ operations.
+
+ Use as if (ev->get_flags() & (uint32)ROW_WRITE_CAN_REPLACE) { ... }
+ */
+ enum flag_bits
+ {
+ ROW_WRITE_CAN_REPLACE, // HA_EXTRA_WRITE_CAN_REPLACE
+ ROW_IGNORE_DUP_KEY, // HA_EXTRA_IGNORE_DUP_KEY
+ ROW_IGNORE_NO_KEY, // HA_EXTRA_IGNORE_NO_KEY
+ ROW_DISABLE_FOREIGN_KEY_CHECKS, // ! @@foreign_key_checks
+ ROW_DISABLE_UNIQUE_KEY_CHECKS, // ! @@unique_checks
+ };
+ uint32_t get_flags();
+
+ /* Access to list of tables modified. */
+ class table_iterator
+ {
+ public:
+ /* Returns table, NULL after last. */
+ const TABLE *get_next();
+ private:
+ // ...
+ };
+ table_iterator get_modified_tables() const;
+
+private:
+ /* Context used to provide accessors. */
+ THD *thd;
+
+protected:
+ rpl_event_row_base(THD *_thd) : thd(_thd) { }
+};
+
+
+class rpl_event_row_write : public rpl_event_row_base
+{
+public:
+ const BITMAP *get_write_set() const;
+ const uchar *get_after_image() const;
+};
+
+class rpl_event_row_update : public rpl_event_row_base
+{
+public:
+ const BITMAP *get_read_set() const;
+ const BITMAP *get_write_set() const;
+ const uchar *get_before_image() const;
+ const uchar *get_after_image() const;
+};
+
+class rpl_event_row_delete : public rpl_event_row_base
+{
+public:
+ const BITMAP *get_read_set() const;
+ const uchar *get_before_image() const;
+};
+
+
+/*
+ Event consumer callbacks.
+
+ An event consumer registers with an event generator to receive event
+ notifications from that generator.
+
+ The consumer has callbacks (in the form of virtual functions) for the
+ individual event types the consumer is interested in. Only callbacks that
+ are non-NULL will be invoked. If an event applies to multiple callbacks in a
+ single callback struct, it will only be passed to the most specific non-NULL
+ callback (so events never fire more than once per registration).
+
+ The lifetime of the memory holding the event is only for the duration of the
+ callback invocation, unless otherwise noted.
+
+ Callbacks return 0 for success or error code (ToDo: does this make sense?).
+*/
+
+struct rpl_event_consumer_transaction
+{
+ virtual int trx_start(const rpl_event_transaction_start *) { return 0; }
+ virtual int trx_commit(const rpl_event_transaction_commit *) { return 0; }
+ virtual int trx_rollback(const rpl_event_transaction_rollback *) { return 0; }
+};
+
+/*
+ Consuming statement-based events.
+
+ The statement event generator is stacked on top of the transaction event
+ generator, so we can receive those events as well.
+*/
+struct rpl_event_consumer_statement : public rpl_event_consumer_transaction
+{
+ virtual int stmt_start(const rpl_event_statement_start *) { return 0; }
+ virtual int stmt_end(const rpl_event_statement_end *) { return 0; }
+
+ virtual int stmt_query(const rpl_event_statement_query *) { return 0; }
+
+ /* Data for a file used in LOAD DATA [LOCAL] INFILE. */
+ virtual int stmt_load_data_block(const rpl_event_statement_load_data_block *)
+ { return 0; }
+
+ /*
+ These are specific kinds of statements; if specified they override
+ consume_stmt_query() for the corresponding event.
+ */
+ virtual int stmt_load_query(const rpl_event_statement_load_query *ev)
+ { return stmt_query(ev); }
+};
+
+/*
+ Consuming row-based events.
+
+ The row event generator is stacked on top of the statement event generator.
+*/
+struct rpl_event_consumer_row : public rpl_event_consumer_statement
+{
+ virtual int row_write(const rpl_event_row_write *) { return 0; }
+ virtual int row_update(const rpl_event_row_update *) { return 0; }
+ virtual int row_delete(const rpl_event_row_delete *) { return 0; }
+};
+
+
+/*
+ Registration functions.
+
+ ToDo: Make a way to de-register.
+
+ ToDo: Find a good way for a plugin (eg. PBXT) to publish a generator
+ registration method.
+*/
+
+int rpl_event_transaction_register(const rpl_event_consumer_transaction *cbs);
+int rpl_event_statement_register(const rpl_event_consumer_statement *cbs);
+int rpl_event_row_register(const rpl_event_consumer_row *cbs);
-=-=(Knielsen - Thu, 24 Jun 2010, 14:28)=-=-
High-Level Specification modified.
--- /tmp/wklog.120.old.7341 2010-06-24 14:28:17.000000000 +0000
+++ /tmp/wklog.120.new.7341 2010-06-24 14:28:17.000000000 +0000
@@ -1 +1,159 @@
+Generators and consumbers
+-------------------------
+
+We have the two concepts:
+
+1. Event _generators_, that produce events describing all changes to data in a
+ server.
+
+2. Event consumers, that receive such events and use them in various ways.
+
+Examples of event generators is execution of SQL statements, which generates
+events like those used for statement-based replication. Another example is
+PBXT engine-level replication.
+
+An example of an event consumer is the writing of the binlog on a master.
+
+Event generators are not really plugins. Rather, there are specific points in
+the server where events are generated. However, a generator can be part of a
+plugin, for example a PBXT engine-level replication event generator would be
+part of the PBXT storage engine plugin.
+
+Event consumers on the other hand could be a plugin.
+
+One generator can be stacked on top of another. This means that a generator on
+top (for example row-based events) will handle some events itself
+(eg. non-deterministic update in mixed-mode binlogging). Other events that it
+does not want to or cannot handle (for example deterministic delete or DDL)
+will be defered to the generator below (for example statement-based events).
+
+
+Materialisation (or not)
+------------------------
+
+A central decision is how to represent events that are generated in the API at
+the point of generation.
+
+I want to avoid making the API require that events are materialised. By
+"Materialised" I mean that all (or most) of the data for the event is written
+into memory in a struct/class used inside the server or serialised in a data
+buffer (byte buffer) in a format suitable for network transport or disk
+storage.
+
+Using a non-materialised event means storing just a reference to appropriate
+context that allows to retrieve all information for the event using
+accessors. Ie. typically this would be based on getting the event information
+from the THD pointer.
+
+Some reasons to avoid using materialised events in the API:
+
+ - Replication events have a _lot_ of detailed context information that can be
+ needed in events: user-defined variables, random seed, character sets,
+ table column names and types, etc. etc. If we make the API based on
+ materialisation, then the initial decision about which context information
+ to include with which events will have to be done in the API, while ideally
+ we want this decision to be done by the individual consumer plugin. There
+ will this be a conflict between what to include (to allow consumers access)
+ and what to exclude (to avoid excessive needless work).
+
+ - Materialising means defining a very specific format, which will tend to
+ make the API less generic and flexible.
+
+ - Unless the materialised format is made _very_ specific (and thus very
+ inflexible), it is unlikely to be directly useful for transport
+ (eg. binlog), so it will need to be re-materialised into a different format
+ anyway, wasting work.
+
+ - If a generator on top handles an event, then we want to avoid wasting work
+ materialising an event in a generator below which would be completely
+ unused. Thus there would be a need for the upper generator to somehow
+ notify the lower generator ahead of event generation time to not fire an
+ event, complicating the API.
+
+Some advantages for materialisation:
+
+ - Using an API based on passing around some well-defined struct event (or
+ byte buffer) will be simpler than the complex class hierarchy proposed here
+ with no requirement for materialisation.
+
+ - Defining a materialised format would allow an easy way to use the same
+ consumer code on a generator that produces events at the source of
+ execution and on a generator that produces events from eg. reading them
+ from an event log.
+
+Note that there can be some middle way, where some data is materialised and
+some is kept as reference to context (eg. THD) only. This however looses most
+of the mentioned advantages for materialisation.
+
+The design proposed here aims for as little materialisation as possible.
+
+
+Default materialisation format
+------------------------------
+
+
+While the proposed API doesn't _require_ materialisation, we can still think
+about providing the _option_ for built-in materialisation. This could be
+useful if such materialisation is made suitable for transport to a different
+server (eg. no endian-dependance etc). If there is a facility for such
+materialisation built-in to the API, it becomes possible to write something
+like a generic binlog plugin or generic network transport plugin. This would
+be really useful for eg. PBXT engine-level replication, as it could be
+implemented without having to re-invent a binlog format.
+
+I added in the proposed API a simple facility to materialise every event as a
+string of bytes. To use this, I still need to add a suitable facility to
+de-materialise the event.
+
+However, it is still an open question whether such a facility will be at all
+useful. It still has some of the problems with materialisation mentioned
+above. And I think it is likely that a good binlog implementation will need
+to do more than just blindly copy opaque events from one endpoint to
+another. For example, it might need different event boundaries (merge and/or
+split events); it might need to augment or modify events, or inject new
+events, etc.
+
+So I think maybe it is better to add such a generic materialisation facility
+on top of the basic event generator API. Such a facility would provide
+materialisation of an replication event stream, not of individual events, so
+would be more flexible in providing a good implementation. It would be
+implemented for all generators. It would separate from both the event
+generator API (so we have flexibility to put a filter class in-between
+generator and materialisation), and could also be separate from the actual
+transport handling stuff like fsync() of binlog files and socket connections
+etc. It would be paired with a corresponding applier API which would handle
+executing events on a slave.
+
+Then we can have a default materialised event format, which is available, but
+not mandatory. So there can still be other formats alongside (like legacy
+MySQL 5.1 binlog event format and maybe Tungsten would have its own format).
+
+
+Encapsulation
+-------------
+
+Another fundamental question about the design is the level of encapsulation
+used for the API.
+
+At the implementation level, a lot of the work is basically to pull out all of
+the needed information from the THD object/context. The API I propose tries to
+_not_ expose the THD to consumers. Instead it provides accessor functions for
+all the bits and pieces relevant to each replication event, while the event
+class itself likely will be more or less just an encapsulated THD.
+
+So an alternative would be to have a generic event that was just (type, THD).
+Then consumers could just pull out whatever information they want from the
+THD. The THD implementation is already exposed to storage engines. This would
+of course greatly reduce the size of the API, eliminating lots of class
+definitions and accessor functions. Though arguably it wouldn't really
+simplify the API, as the complexity would just be in understanding the THD
+class.
+
+Note that we do not have to take any performance hit from using encapsulated
+accessors since compilers can inline them (though if inlining then we do not
+get any ABI stability with respect to THD implemetation).
+
+For now, the API is proposed without exposing the THD class. (Similar
+encapsulation could be added in actual implementation to also not expose TABLE
+and similar classes).
-=-=(Knielsen - Thu, 24 Jun 2010, 12:04)=-=-
Dependency created: 107 now depends on 120
-=-=(Knielsen - Thu, 24 Jun 2010, 11:59)=-=-
High Level Description modified.
--- /tmp/wklog.120.old.516 2010-06-24 11:59:24.000000000 +0000
+++ /tmp/wklog.120.new.516 2010-06-24 11:59:24.000000000 +0000
@@ -11,4 +11,4 @@
Event generators can be stacked, and a generator may defer event generation to
the next one down the stack. For example, the row-level replication event
-generator may defer DLL to the statement-level replication event generator.
+generator may defer DDL to the statement-level replication event generator.
-=-=(Knielsen - Mon, 21 Jun 2010, 08:35)=-=-
Research and design thoughts.
DESCRIPTION:
A part of the replication project, MWL#107.
Events are produced by event Generators. Examples are
- Generation of statement-based replication events
- Generation of row-based events
- Generation of PBXT engine-level replication events
Maybe reading of events from the relay log on a slave is also an example of
generating events.
Event generators can be stacked, and a generator may defer event generation to
the next one down the stack. For example, the row-level replication event
generator may defer DDL to the statement-level replication event generator.
HIGH-LEVEL SPECIFICATION:
Generators and consumers
-------------------------
We have the two concepts:
1. Event _generators_, that produce events describing all changes to data in a
server.
2. Event consumers, that receive such events and use them in various ways.
An example of an event generator is the execution of SQL statements, which
generates events like those used for statement-based replication. Another
example is PBXT engine-level replication.
An example of an event consumer is the writing of the binlog on a master.
Some event generators are not really plugins. Rather, there are specific
points in the server where events are generated. However, a generator can be
part of a plugin, for example a PBXT engine-level replication event generator
would be part of the PBXT storage engine plugin. And for example we could
write a filter plugin, which would be stacked on top of an existing generator
and provide the same event types and interfaces, but filtered in some way (for
example by removing certain events on the master side, or by re-writing events
in certain ways).
Event consumers, on the other hand, could be plugins.
One generator can be stacked on top of another. This means that a generator on
top (for example row-based events) will handle some events itself
(eg. non-deterministic update in mixed-mode binlogging). Other events that it
does not want to or cannot handle (for example deterministic delete or DDL)
will be deferred to the generator below (for example statement-based events).
Materialisation (or not)
------------------------
A central decision is how to represent events that are generated in the API at
the point of generation.
I want to avoid making the API require that events are materialised. By
"Materialised" I mean that all (or most) of the data for the event is written
into memory in a struct/class used inside the server or serialised in a data
buffer (byte buffer) in a format suitable for network transport or disk
storage.
Using a non-materialised event means storing just a reference to the
appropriate context that allows all information for the event to be
retrieved using accessors. I.e., typically this would be based on getting
the event information from the THD pointer.
Some reasons to avoid using materialised events in the API:
- Replication events have a _lot_ of detailed context information that can be
needed in events: user-defined variables, random seed, character sets,
table column names and types, etc. etc. If we make the API based on
materialisation, then the initial decision about which context information
to include with which events will have to be done in the API, while ideally
we want this decision to be done by the individual consumer plugin. There
will thus be a conflict between what to include (to allow consumers access)
and what to exclude (to avoid excessive needless work).
- Materialising means defining a very specific format, which will tend to
make the API less generic and flexible.
- Unless the materialised format is made _very_ specific (and thus very
inflexible), it is unlikely to be directly useful for transport
(eg. binlog), so it will need to be re-materialised into a different format
anyway, wasting work.
- If a generator on top handles an event, then we want to avoid wasting work
materialising an event in a generator below which would be completely
unused. Thus there would be a need for the upper generator to somehow
notify the lower generator ahead of event generation time to not fire an
event, complicating the API.
Some advantages for materialisation:
- Using an API based on passing around some well-defined struct event (or
byte buffer) will be simpler than the complex class hierarchy proposed here
with no requirement for materialisation.
- Defining a materialised format would allow an easy way to use the same
consumer code on a generator that produces events at the source of
execution and on a generator that produces events from eg. reading them
from an event log.
Note that there can be some middle way, where some data is materialised and
some is kept as reference to context (eg. THD) only. This however loses most
of the mentioned advantages of materialisation.
Also, the non-materialising interface should be a good interface on top of
which to build a materialising interface.
The design proposed here aims for as little materialisation as possible.
Default materialisation format
------------------------------
While the proposed API doesn't _require_ materialisation, we can still think
about providing the _option_ for built-in materialisation. This could be
useful if such materialisation is made suitable for transport to a different
server (eg. no endian-dependence etc). If there is a facility for such
materialisation built-in to the API, it becomes possible to write something
like a generic binlog plugin or generic network transport plugin. This would
be really useful for eg. PBXT engine-level replication, as it could be
implemented without having to re-invent a binlog format.
I added in the proposed API a simple facility to materialise every event as a
string of bytes. To use this, I still need to add a suitable facility to
de-materialise the event.
However, it is still an open question whether such a facility will be at all
useful. It still has some of the problems with materialisation mentioned
above. And I think it is likely that a good binlog implementation will need
to do more than just blindly copy opaque events from one endpoint to
another. For example, it might need different event boundaries (merge and/or
split events); it might need to augment or modify events, or inject new
events, etc.
So I think maybe it is better to add such a generic materialisation facility
on top of the basic event generator API. Such a facility would provide
materialisation of a replication event stream, not of individual events, so
it would be more flexible in providing a good implementation. It would be
implemented for all generators. It would be separate from both the event
generator API (so we have flexibility to put a filter class in-between
generator and materialisation), and could also be separate from the actual
transport handling stuff like fsync() of binlog files and socket connections
etc. It would be paired with a corresponding applier API which would handle
executing events on a slave.
Then we can have a default materialised event format, which is available, but
not mandatory. So there can still be other formats alongside (like legacy
MySQL 5.1 binlog event format and maybe Tungsten would have its own format).
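As a concrete, hypothetical illustration of the per-event materialisation
hook mentioned above, a consumer-side writer callback matching the draft
materialise(writer) signature from the low-level design might look as
follows. The byte_sink type and buffer_writer function are invented for this
sketch, and uchar is assumed to correspond to the server's usual unsigned
char typedef:

#include <cstddef>
#include <vector>

typedef unsigned char uchar;   // assumed to match the server's typedef

// Accumulates the materialised bytes of one event in memory.
struct byte_sink
{
  std::vector<uchar> bytes;
};

// Matches the draft writer signature
//   int (*writer)(uchar *data, size_t len, void *context)
// and returns 0 for success.
static int buffer_writer(uchar *data, std::size_t len, void *context)
{
  byte_sink *sink= static_cast<byte_sink*>(context);
  sink->bytes.insert(sink->bytes.end(), data, data + len);
  return 0;
}

(The draft materialise(writer) method does not yet show how the context
pointer is supplied to the writer; presumably that is one of the open
questions about this materialisation facility noted above.)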
Encapsulation
-------------
Another fundamental question about the design is the level of encapsulation
used for the API.
At the implementation level, a lot of the work is basically to pull out all of
the needed information from the THD object/context. The API I propose tries to
_not_ expose the THD to consumers. Instead it provides accessor functions for
all the bits and pieces relevant to each replication event, while the event
class itself likely will be more or less just an encapsulated THD.
So an alternative would be to have a generic event that was just (type, THD).
Then consumers could just pull out whatever information they want from the
THD. The THD implementation is already exposed to storage engines. This would
of course greatly reduce the size of the API, eliminating lots of class
definitions and accessor functions. Though arguably it wouldn't really
simplify the API, as the complexity would just be in understanding the THD
class.
Note that we do not have to take any performance hit from using encapsulated
accessors since compilers can inline them (though with inlining we do not
get any ABI stability with respect to the THD implementation).
For now, the API is proposed without exposing the THD class. (Similar
encapsulation could be added in actual implementation to also not expose TABLE
and similar classes).
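To make the consumer side concrete, here is a hypothetical usage sketch
written against the draft C++ definitions in the low-level design below. The
my_trx_logger class and register_my_trx_logger() function are invented for
illustration; only the rpl_event_* names come from the draft, and the sketch
assumes those declarations are available from a header:

#include <cstdio>

// A consumer that only cares about transaction boundaries: derive from the
// draft rpl_event_consumer_transaction interface, override the callbacks of
// interest, and register with the transaction event generator.
struct my_trx_logger : public rpl_event_consumer_transaction
{
  virtual int trx_commit(const rpl_event_transaction_commit *ev)
  {
    // Accessors keep the THD encapsulated; we only see what the event
    // class exposes. The global ID is available at commit time.
    const rpl_event_transaction_commit::global_transaction_id *gtid=
      ev->get_global_transaction_id();
    if (gtid)
      fprintf(stderr, "commit: cluster=%lu counter=%llu\n",
              (unsigned long)gtid->cluster_id,
              (unsigned long long)gtid->counter);
    return 0;                              /* 0 == success */
  }
  virtual int trx_rollback(const rpl_event_transaction_rollback *)
  {
    return 0;                              /* rollbacks are ignored */
  }
  /* trx_start() is not overridden; the default no-op is used. */
};

static my_trx_logger trx_logger;

/* Called somewhere during plugin or server initialisation. */
int register_my_trx_logger()
{
  return rpl_event_transaction_register(&trx_logger);
}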
LOW-LEVEL DESIGN:
A consumer is implemented as a virtual class (interface). There is one virtual
function for every event that can be received. A consumer would derive from
the base class and override methods for the events it wants to receive.
There is one consumer interface for each generator. When a generator A is
stacked on B, the consumer interface for A inherits from the interface for
B. This way, when A defers an event to B, the consumer for A will receive the
corresponding event from B.
There are methods for a consumer to register itself to receive events from
each generator. I still need to find a way for a consumer in one plugin to
register itself with a generator implemented in another plugin (eg. PBXT
engine-level replication). I also need to add a way for consumers to
de-register themselves.
The current design has consumer callbacks return 0 for success and error code
otherwise. I still need to think more about whether this is useful (ie. what
is the semantics of returning an error from a consumer callback).
Each event passed to consumers is defined as a class with public accessor
methods to a private context (which is mostly the THD).
My intention is to make all events passed around const, so that the same event
can be passed to each of multiple registered consumers (and to emphasise that
consumers do not have the ability to modify events). It still needs to be seen
whether that const-ness will be feasible in practice without very heavy
modification/constification of existing code.
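For example, generator-side dispatch under this model could look roughly like
the sketch below; dispatch_stmt_query() and the way registered consumers are
stored are illustrative only, the draft below does not define them.
/*
  Illustration only: the same const event object is offered to every
  registered consumer in turn, and none of them can modify it.
*/
int dispatch_stmt_query(const rpl_event_statement_query *ev,
                        rpl_event_consumer_statement **consumers,
                        size_t consumer_count)
{
  for (size_t i= 0; i < consumer_count; i++)
  {
    int err= consumers[i]->stmt_query(ev);
    if (err)
      return err; /* Open question (see above): what should an error mean here? */
  }
  return 0;
}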
What follows is a partial draft of a possible definition of the API as
concrete C++ class definitions.
-----------------------------------------------------------------------
/*
Virtual base class for generated replication events.
This is the parent of events generated from all kinds of generators. Only
child classes can be instantiated.
This class can be used by code that wants to treat events in a generic way,
without any knowledge of event details. I still need to decide whether such
generic code is sensible.
*/
class rpl_event_base
{
public:
/*
Maybe we will want the ability to materialise an event to a standard
binary format. This could be achieved with a base method like this. The
actual materialisation would be implemented in each deriving class. The
public methods would provide different interfaces for specifying the
buffer or for writing directly into IO_CACHE or file.
*/
/* Return 0 on success, -1 on error, -2 on out-of-buffer. */
int materialise(uchar *buffer, size_t buflen) const;
/*
Returns NULL on error or else malloc()ed buffer with materialised event,
caller must free().
*/
uchar *materialise() const;
/* Same but using passed in memroot. */
uchar *materialise(mem_root *memroot) const;
/*
Materialise to user-supplied writer function (could write directly to file
or the like).
*/
int materialise(int (*writer)(uchar *data, size_t len, void *context)) const;
/*
As for what to do with a materialised event, there are a couple of
possibilities.
One is to have a de_materialise() method somewhere that can construct an
rpl_event_base (really a derived class of course) from a buffer or writer
function. This would require each accessor function to conditionally read
its data from either THD context or buffer (GCC is able to optimise
several such conditionals in multiple accessor function calls into one
conditional), or we can make all accessors virtual if the performance hit
is acceptable.
Another is to have different classes for accessing events read from
materialised event data.
Also, I still need to think about whether it is at all useful to be able
to generically materialise an event at this level. It may be that any
binlog/transport will in any case need to understand more of the format of
events, so that such materialisation/transport is better done at a
different layer.
*/
protected:
/* Implementation which is the basis for materialise(). */
virtual int do_materialise(int (*writer)(uchar *data, size_t len,
void *context)) const = 0;
private:
/* Virtual base class, private constructor to prevent instantiation. */
rpl_event_base();
};
/*
These are the event types output from the transaction event generator.
This generator is not stacked on anything.
The transaction event generator marks the start and end (commit or rollback)
of transactions. It also gives information about whether the transaction was
a full transaction or autocommitted statement, whether transactional tables
were involved, whether non-transactional tables were involved, and XA
information (ToDo).
*/
/* Base class for transaction events. */
class rpl_event_transaction_base : public rpl_event_base
{
public:
/*
Get the local transaction id. This id is only unique within one server.
It is allocated whenever a new transaction is started.
Can be used to identify events belonging to the same transaction in a
binlog-like stream of events streamed in parallel among multiple
transactions.
*/
uint64_t get_local_trx_id() const { return thd->local_trx_id; };
bool get_is_autocommit() const;
private:
/* The context is the THD. */
THD *thd;
rpl_event_transaction_base(THD *_thd) : thd(_thd) { };
};
/* Transaction start event. */
class rpl_event_transaction_start : public rpl_event_transaction_base
{
};
/* Transaction commit. */
class rpl_event_transaction_commit : public rpl_event_transaction_base
{
public:
/*
The global transaction id is unique cross-server.
It can be used to identify the position from which to start a slave
replicating from a master.
This global ID is only available once the transaction is decided to commit
by the TC manager / primary redundancy service. This TC also allocates the
ID and decides the exact semantics (can there be gaps, etc); however the
format is fixed (cluster_id, running_counter).
*/
struct global_transaction_id
{
uint32_t cluster_id;
uint64_t counter;
};
const global_transaction_id *get_global_transaction_id() const;
};
/* Transaction rollback. */
class rpl_event_transaction_rollback : public rpl_event_transaction_base
{
};
/* Base class for statement events. */
class rpl_event_statement_base : public rpl_event_base
{
public:
LEX_STRING get_current_db() const;
};
class rpl_event_statement_start : public rpl_event_statement_base
{
};
class rpl_event_statement_end : public rpl_event_statement_base
{
public:
int get_errorcode() const;
};
class rpl_event_statement_query : public rpl_event_statement_base
{
public:
LEX_STRING get_query_string();
ulong get_sql_mode();
const CHARSET_INFO *get_character_set_client();
const CHARSET_INFO *get_collation_connection();
const CHARSET_INFO *get_collation_server();
const CHARSET_INFO *get_collation_default_db();
/*
Access to relevant flags that affect query execution.
Use as if (ev->get_flags() & (uint32)STMT_FOREIGN_KEY_CHECKS) { ... }
*/
enum flag_bits
{
STMT_FOREIGN_KEY_CHECKS= (1 << 0), // @@foreign_key_checks
STMT_UNIQUE_KEY_CHECKS= (1 << 1), // @@unique_checks
STMT_AUTO_IS_NULL= (1 << 2), // @@sql_auto_is_null
};
uint32_t get_flags();
ulong get_auto_increment_offset();
ulong get_auto_increment_increment();
// And so on for: time zone; day/month names; connection id; LAST_INSERT_ID;
// INSERT_ID; random seed; user variables.
//
// We probably also need get_uses_temporary_table(), get_used_user_vars(),
// get_uses_auto_increment() and so on, so a consumer can get more
// information about what kind of context information a query will need when
// executed on a slave.
};
class rpl_event_statement_load_query : public rpl_event_statement_query
{
};
/*
This event is fired with blocks of data for files read (from server-local
file or client connection) for LOAD DATA.
*/
class rpl_event_statement_load_data_block : public rpl_event_statement_base
{
public:
struct block
{
const uchar *ptr;
size_t size;
};
block get_block() const;
};
/* Base class for row-based replication events. */
class rpl_event_row_base : public rpl_event_base
{
public:
/*
Access to relevant handler extra flags and other flags that affect row
operations.
Use as if (ev->get_flags() & (uint32)ROW_WRITE_CAN_REPLACE) { ... }
*/
enum flag_bits
{
ROW_WRITE_CAN_REPLACE= (1 << 0), // HA_EXTRA_WRITE_CAN_REPLACE
ROW_IGNORE_DUP_KEY= (1 << 1), // HA_EXTRA_IGNORE_DUP_KEY
ROW_IGNORE_NO_KEY= (1 << 2), // HA_EXTRA_IGNORE_NO_KEY
ROW_DISABLE_FOREIGN_KEY_CHECKS= (1 << 3), // ! @@foreign_key_checks
ROW_DISABLE_UNIQUE_KEY_CHECKS= (1 << 4), // ! @@unique_checks
};
uint32_t get_flags();
/* Access to list of tables modified. */
class table_iterator
{
public:
/* Returns table, NULL after last. */
const TABLE *get_next();
private:
// ...
};
table_iterator get_modified_tables() const;
private:
/* Context used to provide accessors. */
THD *thd;
protected:
rpl_event_row_base(THD *_thd) : thd(_thd) { }
};
class rpl_event_row_write : public rpl_event_row_base
{
public:
const BITMAP *get_write_set() const;
const uchar *get_after_image() const;
};
class rpl_event_row_update : public rpl_event_row_base
{
public:
const BITMAP *get_read_set() const;
const BITMAP *get_write_set() const;
const uchar *get_before_image() const;
const uchar *get_after_image() const;
};
class rpl_event_row_delete : public rpl_event_row_base
{
public:
const BITMAP *get_read_set() const;
const uchar *get_before_image() const;
};
/*
Event consumer callbacks.
An event consumer registers with an event generator to receive event
notifications from that generator.
The consumer has callbacks (in the form of virtual functions) for the
individual event types the consumer is interested in. Only callbacks that
are non-NULL will be invoked. If an event applies to multiple callbacks in a
single callback struct, it will only be passed to the most specific non-NULL
callback (so events never fire more than once per registration).
The lifetime of the memory holding the event is only for the duration of the
callback invocation, unless otherwise noted.
Callbacks return 0 for success or error code (ToDo: does this make sense?).
*/
struct rpl_event_consumer_transaction
{
virtual int trx_start(const rpl_event_transaction_start *) { return 0; }
virtual int trx_commit(const rpl_event_transaction_commit *) { return 0; }
virtual int trx_rollback(const rpl_event_transaction_rollback *) { return 0; }
};
/*
Consuming statement-based events.
The statement event generator is stacked on top of the transaction event
generator, so we can receive those events as well.
*/
struct rpl_event_consumer_statement : public rpl_event_consumer_transaction
{
virtual int stmt_start(const rpl_event_statement_start *) { return 0; }
virtual int stmt_end(const rpl_event_statement_end *) { return 0; }
virtual int stmt_query(const rpl_event_statement_query *) { return 0; }
/* Data for a file used in LOAD DATA [LOCAL] INFILE. */
virtual int stmt_load_data_block(const rpl_event_statement_load_data_block *)
{ return 0; }
/*
These are specific kinds of statements; if specified they override
stmt_query() for the corresponding event.
*/
virtual int stmt_load_query(const rpl_event_statement_load_query *ev)
{ return stmt_query(ev); }
};
/*
Consuming row-based events.
The row event generator is stacked on top of the statement event generator.
*/
struct rpl_event_consumer_row : public rpl_event_consumer_statement
{
virtual int row_write(const rpl_event_row_write *) { return 0; }
virtual int row_update(const rpl_event_row_update *) { return 0; }
virtual int row_delete(const rpl_event_row_delete *) { return 0; }
};
/*
Registration functions.
ToDo: Make a way to de-register.
ToDo: Find a good way for a plugin (eg. PBXT) to publish a generator
registration method.
*/
int rpl_event_transaction_register(const rpl_event_consumer_transaction *cbs);
int rpl_event_statement_register(const rpl_event_consumer_statement *cbs);
int rpl_event_row_register(const rpl_event_consumer_row *cbs);
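As a usage sketch of the interfaces above: a plugin interested only in commits
and row writes would derive from the most specific consumer interface it needs
and register once. Everything prefixed my_ below (including the init hook) is
a hypothetical placeholder, not part of the proposal.
/* Illustration only. */
class my_audit_consumer : public rpl_event_consumer_row
{
public:
  virtual int trx_commit(const rpl_event_transaction_commit *ev)
  {
    const rpl_event_transaction_commit::global_transaction_id *gtid=
      ev->get_global_transaction_id();
    return my_audit_log_commit(gtid->cluster_id, gtid->counter);
  }

  virtual int row_write(const rpl_event_row_write *ev)
  {
    /* Only interested in which tables were touched. */
    rpl_event_row_base::table_iterator it= ev->get_modified_tables();
    while (const TABLE *table= it.get_next())
      my_audit_log_table(table);
    return 0;
  }

  /* All other callbacks keep their default no-op implementations. */
};

static my_audit_consumer audit_consumer;

/* Called from the plugin's (hypothetical) init function. */
int my_audit_plugin_init()
{
  /*
    Registering with the row generator also delivers statement and
    transaction events, since the consumer interfaces inherit from each
    other (rpl_event_consumer_row -> _statement -> _transaction).
  */
  return rpl_event_row_register(&audit_consumer);
}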
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)
[Maria-developers] WL#120 Updated (by Knielsen): Replication API for stacked event generators
by worklog-noreply(a)askmonty.org 29 Jun '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Replication API for stacked event generators
CREATION DATE..: Mon, 07 Jun 2010, 13:13
SUPERVISOR.....: Knielsen
IMPLEMENTOR....: Knielsen
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 120 (http://askmonty.org/worklog/?tid=120)
VERSION........: Server-9.x
STATUS.........: In-Progress
PRIORITY.......: 60
WORKED HOURS...: 2
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Knielsen - Tue, 29 Jun 2010, 13:51)=-=-
Status updated.
--- /tmp/wklog.120.old.31179 2010-06-29 13:51:20.000000000 +0000
+++ /tmp/wklog.120.new.31179 2010-06-29 13:51:20.000000000 +0000
@@ -1 +1 @@
-Assigned
+In-Progress
-=-=(Knielsen - Mon, 28 Jun 2010, 07:11)=-=-
High-Level Specification modified.
--- /tmp/wklog.120.old.18416 2010-06-28 07:11:09.000000000 +0000
+++ /tmp/wklog.120.new.18416 2010-06-28 07:11:09.000000000 +0000
@@ -14,10 +14,14 @@
An example of an event consumer is the writing of the binlog on a master.
-Event generators are not really plugins. Rather, there are specific points in
-the server where events are generated. However, a generator can be part of a
-plugin, for example a PBXT engine-level replication event generator would be
-part of the PBXT storage engine plugin.
+Some event generators are not really plugins. Rather, there are specific
+points in the server where events are generated. However, a generator can be
+part of a plugin, for example a PBXT engine-level replication event generator
+would be part of the PBXT storage engine plugin. And for example we could
+write a filter plugin, which would be stacked on top of an existing generator
+and provide the same event types and interfaces, but filtered in some way (for
+example by removing certain events on the master side, or by re-writing events
+in certain ways).
Event consumers on the other hand could be a plugin.
@@ -85,6 +89,9 @@
some is kept as reference to context (eg. THD) only. This however looses most
of the mentioned advantages for materialisation.
+Also, the non-materialising interface should be a good interface on top of
+which to build a materialising interface.
+
The design proposed here aims for as little materialisation as possible.
-=-=(Knielsen - Thu, 24 Jun 2010, 14:29)=-=-
Category updated.
--- /tmp/wklog.120.old.7440 2010-06-24 14:29:38.000000000 +0000
+++ /tmp/wklog.120.new.7440 2010-06-24 14:29:38.000000000 +0000
@@ -1 +1 @@
-Server-RawIdeaBin
+Server-Sprint
-=-=(Knielsen - Thu, 24 Jun 2010, 14:29)=-=-
Status updated.
--- /tmp/wklog.120.old.7440 2010-06-24 14:29:38.000000000 +0000
+++ /tmp/wklog.120.new.7440 2010-06-24 14:29:38.000000000 +0000
@@ -1 +1 @@
-Un-Assigned
+Assigned
-=-=(Knielsen - Thu, 24 Jun 2010, 14:29)=-=-
Low Level Design modified.
--- /tmp/wklog.120.old.7355 2010-06-24 14:29:11.000000000 +0000
+++ /tmp/wklog.120.new.7355 2010-06-24 14:29:11.000000000 +0000
@@ -1 +1,385 @@
+A consumer is implented as a virtual class (interface). There is one virtual
+function for every event that can be received. A consumer would derive from
+the base class and override methods for the events it wants to receive.
+
+There is one consumer interface for each generator. When a generator A is
+stacked on B, the consumer interface for A inherits from the interface for
+B. This way, when A defers an event to B, the consumer for A will receive the
+corresponding event from B.
+
+There are methods for a consumer to register itself to receive events from
+each generator. I still need to find a way for a consumer in one plugin to
+register itself with a generator implemented in another plugin (eg. PBXT
+engine-level replication). I also need to add a way for consumers to
+de-register themselves.
+
+The current design has consumer callbacks return 0 for success and error code
+otherwise. I still need to think more about whether this is useful (ie. what
+is the semantics of returning an error from a consumer callback).
+
+Each event passed to consumers is defined as a class with public accessor
+methods to a private context (which is mostly the THD).
+
+My intension is to make all events passed around const, so that the same event
+can be passed to each of multiple registered consumers (and to emphasise that
+consumers do not have the ability to modify events). It still needs to be seen
+whether that const-ness will be feasible in practise without very heavy
+modification/constification of exiting code.
+
+What follows is a partial draft of a possible definition of the API as
+concrete C++ class definitions.
+
+-----------------------------------------------------------------------
+
+/*
+ Virtual base class for generated replication events.
+
+ This is the parent of events generated from all kinds of generators. Only
+ child classes can be instantiated.
+
+ This class can be used by code that wants to treat events in a generic way,
+ without any knowledge of event details. I still need to decide whether such
+ generic code is sensible.
+*/
+class rpl_event_base
+{
+ /*
+ Maybe we will want the ability to materialise an event to a standard
+ binary format. This could be achieved with a base method like this. The
+ actual materialisation would be implemented in each deriving class. The
+ public methods would provide different interfaces for specifying the
+ buffer or for writing directly into IO_CACHE or file.
+ */
+
+ /* Return 0 on success, -1 on error, -2 on out-of-buffer. */
+ int materialise(uchar *buffer, size_t buflen) const;
+ /*
+ Returns NULL on error or else malloc()ed buffer with materialised event,
+ caller must free().
+ */
+ uchar *materialise() const;
+ /* Same but using passed in memroot. */
+ uchar *materialise(mem_root *memroot) const;
+ /*
+ Materialise to user-supplied writer function (could write directly to file
+ or the like).
+ */
+ int materialise(int (*writer)(uchar *data, size_t len, void *context)) const;
+
+ /*
+ As to for what to do with a materialised event, there are a couple of
+ possibilities.
+
+ One is to have a de_materialise() method somewhere that can construct an
+ rpl_event_base (really a derived class of course) from a buffer or writer
+ function. This would require each accessor function to conditionally read
+ its data from either THD context or buffer (GCC is able to optimise
+ several such conditionals in multiple accessor function calls into one
+ conditional), or we can make all accessors virtual if the performance hit
+ is acceptable.
+
+ Another is to have different classes for accessing events read from
+ materialised event data.
+
+ Also, I still need to think about whether it is at all useful to be able
+ to generically materialise an event at this level. It may be that any
+ binlog/transport will in any case need to undertand more of the format of
+ events, so that such materialisation/transport is better done at a
+ different layer.
+ */
+
+protected:
+ /* Implementation which is the basis for materialise(). */
+ virtual int do_materialise(int (*writer)(uchar *data, size_t len,
+ void *context)) const = 0;
+
+private:
+ /* Virtual base class, private constructor to prevent instantiation. */
+ rpl_event_base();
+};
+
+
+/*
+ These are the event types output from the transaction event generator.
+
+ This generator is not stacked on anything.
+
+ The transaction event generator marks the start and end (commit or rollback)
+ of transactions. It also gives information about whether the transaction was
+ a full transaction or autocommitted statement, whether transactional tables
+ were involved, whether non-transactional tables were involved, and XA
+ information (ToDo).
+*/
+
+/* Base class for transaction events. */
+class rpl_event_transaction_base : public rpl_event_base
+{
+public:
+ /*
+ Get the local transaction id. This idea is only unique within one server.
+ It is allocated whenever a new transaction is started.
+ Can be used to identify events belonging to the same transaction in a
+ binlog-like stream of events streamed in parallel among multiple
+ transactions.
+ */
+ uint64_t get_local_trx_id() const { return thd->local_trx_id; };
+
+ bool get_is_autocommit() const;
+
+private:
+ /* The context is the THD. */
+ THD *thd;
+
+ rpl_event_transaction_base(THD *_thd) : thd(_thd) { };
+};
+
+/* Transaction start event. */
+class rpl_event_transaction_start : public rpl_event_transaction_base
+{
+
+};
+
+/* Transaction commit. */
+class rpl_event_transaction_commit : public rpl_event_transaction_base
+{
+public:
+ /*
+ The global transaction id is unique cross-server.
+
+ It can be used to identify the position from which to start a slave
+ replicating from a master.
+
+ This global ID is only available once the transaction is decided to commit
+ by the TC manager / primary redundancy service. This TC also allocates the
+ ID and decides the exact semantics (can there be gaps, etc); however the
+ format is fixed (cluster_id, running_counter).
+ */
+ struct global_transaction_id
+ {
+ uint32_t cluster_id;
+ uint64_t counter;
+ };
+
+ const global_transaction_id *get_global_transaction_id() const;
+};
+
+/* Transaction rollback. */
+class rpl_event_transaction_rollback : public rpl_event_transaction_base
+{
+
+};
+
+
+/* Base class for statement events. */
+class rpl_event_statement_base : public rpl_event_base
+{
+public:
+ LEX_STRING get_current_db() const;
+};
+
+class rpl_event_statement_start : public rpl_event_statement_base
+{
+
+};
+
+class rpl_event_statement_end : public rpl_event_statement_base
+{
+public:
+ int get_errorcode() const;
+};
+
+class rpl_event_statement_query : public rpl_event_statement_base
+{
+public:
+ LEX_STRING get_query_string();
+ ulong get_sql_mode();
+ const CHARSET_INFO *get_character_set_client();
+ const CHARSET_INFO *get_collation_connection();
+ const CHARSET_INFO *get_collation_server();
+ const CHARSET_INFO *get_collation_default_db();
+
+ /*
+ Access to relevant flags that affect query execution.
+
+ Use as if (ev->get_flags() & (uint32)ROW_FOREIGN_KEY_CHECKS) { ... }
+ */
+ enum flag_bits
+ {
+ STMT_FOREIGN_KEY_CHECKS, // @@foreign_key_checks
+ STMT_UNIQUE_KEY_CHECKS, // @@unique_checks
+ STMT_AUTO_IS_NULL, // @@sql_auto_is_null
+ };
+ uint32_t get_flags();
+
+ ulong get_auto_increment_offset();
+ ulong get_auto_increment_increment();
+
+ // And so on for: time zone; day/month names; connection id; LAST_INSERT_ID;
+ // INSERT_ID; random seed; user variables.
+ //
+ // We probably also need get_uses_temporary_table(), get_used_user_vars(),
+ // get_uses_auto_increment() and so on, so a consumer can get more
+ // information about what kind of context information a query will need when
+ // executed on a slave.
+};
+
+class rpl_event_statement_load_query : public rpl_event_statement_query
+{
+
+};
+
+/*
+ This event is fired with blocks of data for files read (from server-local
+ file or client connection) for LOAD DATA.
+*/
+class rpl_event_statement_load_data_block : public rpl_event_statement_base
+{
+public:
+ struct block
+ {
+ const uchar *ptr;
+ size_t size;
+ };
+ block get_block() const;
+};
+
+/* Base class for row-based replication events. */
+class rpl_event_row_base : public rpl_event_base
+{
+public:
+ /*
+ Access to relevant handler extra flags and other flags that affect row
+ operations.
+
+ Use as if (ev->get_flags() & (uint32)ROW_WRITE_CAN_REPLACE) { ... }
+ */
+ enum flag_bits
+ {
+ ROW_WRITE_CAN_REPLACE, // HA_EXTRA_WRITE_CAN_REPLACE
+ ROW_IGNORE_DUP_KEY, // HA_EXTRA_IGNORE_DUP_KEY
+ ROW_IGNORE_NO_KEY, // HA_EXTRA_IGNORE_NO_KEY
+ ROW_DISABLE_FOREIGN_KEY_CHECKS, // ! @@foreign_key_checks
+ ROW_DISABLE_UNIQUE_KEY_CHECKS, // ! @@unique_checks
+ };
+ uint32_t get_flags();
+
+ /* Access to list of tables modified. */
+ class table_iterator
+ {
+ public:
+ /* Returns table, NULL after last. */
+ const TABLE *get_next();
+ private:
+ // ...
+ };
+ table_iterator get_modified_tables() const;
+
+private:
+ /* Context used to provide accessors. */
+ THD *thd;
+
+protected:
+ rpl_event_row_base(THD *_thd) : thd(_thd) { }
+};
+
+
+class rpl_event_row_write : public rpl_event_row_base
+{
+public:
+ const BITMAP *get_write_set() const;
+ const uchar *get_after_image() const;
+};
+
+class rpl_event_row_update : public rpl_event_row_base
+{
+public:
+ const BITMAP *get_read_set() const;
+ const BITMAP *get_write_set() const;
+ const uchar *get_before_image() const;
+ const uchar *get_after_image() const;
+};
+
+class rpl_event_row_delete : public rpl_event_row_base
+{
+public:
+ const BITMAP *get_read_set() const;
+ const uchar *get_before_image() const;
+};
+
+
+/*
+ Event consumer callbacks.
+
+ An event consumer registers with an event generator to receive event
+ notifications from that generator.
+
+ The consumer has callbacks (in the form of virtual functions) for the
+ individual event types the consumer is interested in. Only callbacks that
+ are non-NULL will be invoked. If an event applies to multiple callbacks in a
+ single callback struct, it will only be passed to the most specific non-NULL
+ callback (so events never fire more than once per registration).
+
+ The lifetime of the memory holding the event is only for the duration of the
+ callback invocation, unless otherwise noted.
+
+ Callbacks return 0 for success or error code (ToDo: does this make sense?).
+*/
+
+struct rpl_event_consumer_transaction
+{
+ virtual int trx_start(const rpl_event_transaction_start *) { return 0; }
+ virtual int trx_commit(const rpl_event_transaction_commit *) { return 0; }
+ virtual int trx_rollback(const rpl_event_transaction_rollback *) { return 0; }
+};
+
+/*
+ Consuming statement-based events.
+
+ The statement event generator is stacked on top of the transaction event
+ generator, so we can receive those events as well.
+*/
+struct rpl_event_consumer_statement : public rpl_event_consumer_transaction
+{
+ virtual int stmt_start(const rpl_event_statement_start *) { return 0; }
+ virtual int stmt_end(const rpl_event_statement_end *) { return 0; }
+
+ virtual int stmt_query(const rpl_event_statement_query *) { return 0; }
+
+ /* Data for a file used in LOAD DATA [LOCAL] INFILE. */
+ virtual int stmt_load_data_block(const rpl_event_statement_load_data_block *)
+ { return 0; }
+
+ /*
+ These are specific kinds of statements; if specified they override
+ consume_stmt_query() for the corresponding event.
+ */
+ virtual int stmt_load_query(const rpl_event_statement_load_query *ev)
+ { return stmt_query(ev); }
+};
+
+/*
+ Consuming row-based events.
+
+ The row event generator is stacked on top of the statement event generator.
+*/
+struct rpl_event_consumer_row : public rpl_event_consumer_statement
+{
+ virtual int row_write(const rpl_event_row_write *) { return 0; }
+ virtual int row_update(const rpl_event_row_update *) { return 0; }
+ virtual int row_delete(const rpl_event_row_delete *) { return 0; }
+};
+
+
+/*
+ Registration functions.
+
+ ToDo: Make a way to de-register.
+
+ ToDo: Find a good way for a plugin (eg. PBXT) to publish a generator
+ registration method.
+*/
+
+int rpl_event_transaction_register(const rpl_event_consumer_transaction *cbs);
+int rpl_event_statement_register(const rpl_event_consumer_statement *cbs);
+int rpl_event_row_register(const rpl_event_consumer_row *cbs);
-=-=(Knielsen - Thu, 24 Jun 2010, 14:28)=-=-
High-Level Specification modified.
--- /tmp/wklog.120.old.7341 2010-06-24 14:28:17.000000000 +0000
+++ /tmp/wklog.120.new.7341 2010-06-24 14:28:17.000000000 +0000
@@ -1 +1,159 @@
+Generators and consumbers
+-------------------------
+
+We have the two concepts:
+
+1. Event _generators_, that produce events describing all changes to data in a
+ server.
+
+2. Event consumers, that receive such events and use them in various ways.
+
+Examples of event generators is execution of SQL statements, which generates
+events like those used for statement-based replication. Another example is
+PBXT engine-level replication.
+
+An example of an event consumer is the writing of the binlog on a master.
+
+Event generators are not really plugins. Rather, there are specific points in
+the server where events are generated. However, a generator can be part of a
+plugin, for example a PBXT engine-level replication event generator would be
+part of the PBXT storage engine plugin.
+
+Event consumers on the other hand could be a plugin.
+
+One generator can be stacked on top of another. This means that a generator on
+top (for example row-based events) will handle some events itself
+(eg. non-deterministic update in mixed-mode binlogging). Other events that it
+does not want to or cannot handle (for example deterministic delete or DDL)
+will be defered to the generator below (for example statement-based events).
+
+
+Materialisation (or not)
+------------------------
+
+A central decision is how to represent events that are generated in the API at
+the point of generation.
+
+I want to avoid making the API require that events are materialised. By
+"Materialised" I mean that all (or most) of the data for the event is written
+into memory in a struct/class used inside the server or serialised in a data
+buffer (byte buffer) in a format suitable for network transport or disk
+storage.
+
+Using a non-materialised event means storing just a reference to appropriate
+context that allows to retrieve all information for the event using
+accessors. Ie. typically this would be based on getting the event information
+from the THD pointer.
+
+Some reasons to avoid using materialised events in the API:
+
+ - Replication events have a _lot_ of detailed context information that can be
+ needed in events: user-defined variables, random seed, character sets,
+ table column names and types, etc. etc. If we make the API based on
+ materialisation, then the initial decision about which context information
+ to include with which events will have to be done in the API, while ideally
+ we want this decision to be done by the individual consumer plugin. There
+ will this be a conflict between what to include (to allow consumers access)
+ and what to exclude (to avoid excessive needless work).
+
+ - Materialising means defining a very specific format, which will tend to
+ make the API less generic and flexible.
+
+ - Unless the materialised format is made _very_ specific (and thus very
+ inflexible), it is unlikely to be directly useful for transport
+ (eg. binlog), so it will need to be re-materialised into a different format
+ anyway, wasting work.
+
+ - If a generator on top handles an event, then we want to avoid wasting work
+ materialising an event in a generator below which would be completely
+ unused. Thus there would be a need for the upper generator to somehow
+ notify the lower generator ahead of event generation time to not fire an
+ event, complicating the API.
+
+Some advantages for materialisation:
+
+ - Using an API based on passing around some well-defined struct event (or
+ byte buffer) will be simpler than the complex class hierarchy proposed here
+ with no requirement for materialisation.
+
+ - Defining a materialised format would allow an easy way to use the same
+ consumer code on a generator that produces events at the source of
+ execution and on a generator that produces events from eg. reading them
+ from an event log.
+
+Note that there can be some middle way, where some data is materialised and
+some is kept as reference to context (eg. THD) only. This however looses most
+of the mentioned advantages for materialisation.
+
+The design proposed here aims for as little materialisation as possible.
+
+
+Default materialisation format
+------------------------------
+
+
+While the proposed API doesn't _require_ materialisation, we can still think
+about providing the _option_ for built-in materialisation. This could be
+useful if such materialisation is made suitable for transport to a different
+server (eg. no endian-dependance etc). If there is a facility for such
+materialisation built-in to the API, it becomes possible to write something
+like a generic binlog plugin or generic network transport plugin. This would
+be really useful for eg. PBXT engine-level replication, as it could be
+implemented without having to re-invent a binlog format.
+
+I added in the proposed API a simple facility to materialise every event as a
+string of bytes. To use this, I still need to add a suitable facility to
+de-materialise the event.
+
+However, it is still an open question whether such a facility will be at all
+useful. It still has some of the problems with materialisation mentioned
+above. And I think it is likely that a good binlog implementation will need
+to do more than just blindly copy opaque events from one endpoint to
+another. For example, it might need different event boundaries (merge and/or
+split events); it might need to augment or modify events, or inject new
+events, etc.
+
+So I think maybe it is better to add such a generic materialisation facility
+on top of the basic event generator API. Such a facility would provide
+materialisation of an replication event stream, not of individual events, so
+would be more flexible in providing a good implementation. It would be
+implemented for all generators. It would separate from both the event
+generator API (so we have flexibility to put a filter class in-between
+generator and materialisation), and could also be separate from the actual
+transport handling stuff like fsync() of binlog files and socket connections
+etc. It would be paired with a corresponding applier API which would handle
+executing events on a slave.
+
+Then we can have a default materialised event format, which is available, but
+not mandatory. So there can still be other formats alongside (like legacy
+MySQL 5.1 binlog event format and maybe Tungsten would have its own format).
+
+
+Encapsulation
+-------------
+
+Another fundamental question about the design is the level of encapsulation
+used for the API.
+
+At the implementation level, a lot of the work is basically to pull out all of
+the needed information from the THD object/context. The API I propose tries to
+_not_ expose the THD to consumers. Instead it provides accessor functions for
+all the bits and pieces relevant to each replication event, while the event
+class itself likely will be more or less just an encapsulated THD.
+
+So an alternative would be to have a generic event that was just (type, THD).
+Then consumers could just pull out whatever information they want from the
+THD. The THD implementation is already exposed to storage engines. This would
+of course greatly reduce the size of the API, eliminating lots of class
+definitions and accessor functions. Though arguably it wouldn't really
+simplify the API, as the complexity would just be in understanding the THD
+class.
+
+Note that we do not have to take any performance hit from using encapsulated
+accessors since compilers can inline them (though if inlining then we do not
+get any ABI stability with respect to THD implemetation).
+
+For now, the API is proposed without exposing the THD class. (Similar
+encapsulation could be added in actual implementation to also not expose TABLE
+and similar classes).
-=-=(Knielsen - Thu, 24 Jun 2010, 12:04)=-=-
Dependency created: 107 now depends on 120
-=-=(Knielsen - Thu, 24 Jun 2010, 11:59)=-=-
High Level Description modified.
--- /tmp/wklog.120.old.516 2010-06-24 11:59:24.000000000 +0000
+++ /tmp/wklog.120.new.516 2010-06-24 11:59:24.000000000 +0000
@@ -11,4 +11,4 @@
Event generators can be stacked, and a generator may defer event generation to
the next one down the stack. For example, the row-level replication event
-generator may defer DLL to the statement-level replication event generator.
+generator may defer DDL to the statement-level replication event generator.
-=-=(Knielsen - Mon, 21 Jun 2010, 08:35)=-=-
Research and design thoughts.
DESCRIPTION:
A part of the replication project, MWL#107.
Events are produced by event Generators. Examples are
- Generation of statement-based replication events
- Generation of row-based events
- Generation of PBXT engine-level replication events
and reading of events from the relay log on a slave may also be an example of
generating events.
Event generators can be stacked, and a generator may defer event generation to
the next one down the stack. For example, the row-level replication event
generator may defer DDL to the statement-level replication event generator.
HIGH-LEVEL SPECIFICATION:
Generators and consumers
-------------------------
We have the two concepts:
1. Event _generators_, that produce events describing all changes to data in a
server.
2. Event consumers, that receive such events and use them in various ways.
Examples of event generators are the execution of SQL statements, which generates
events like those used for statement-based replication. Another example is
PBXT engine-level replication.
An example of an event consumer is the writing of the binlog on a master.
Some event generators are not really plugins. Rather, there are specific
points in the server where events are generated. However, a generator can be
part of a plugin, for example a PBXT engine-level replication event generator
would be part of the PBXT storage engine plugin. And for example we could
write a filter plugin, which would be stacked on top of an existing generator
and provide the same event types and interfaces, but filtered in some way (for
example by removing certain events on the master side, or by re-writing events
in certain ways).
Event consumers on the other hand could be a plugin.
One generator can be stacked on top of another. This means that a generator on
top (for example row-based events) will handle some events itself
(eg. non-deterministic update in mixed-mode binlogging). Other events that it
does not want to or cannot handle (for example deterministic delete or DDL)
will be deferred to the generator below (for example statement-based events).
Materialisation (or not)
------------------------
A central decision is how to represent events that are generated in the API at
the point of generation.
I want to avoid making the API require that events are materialised. By
"Materialised" I mean that all (or most) of the data for the event is written
into memory in a struct/class used inside the server or serialised in a data
buffer (byte buffer) in a format suitable for network transport or disk
storage.
Using a non-materialised event means storing just a reference to the appropriate
context, which allows all information for the event to be retrieved using
accessors. Ie. typically this would be based on getting the event information
from the THD pointer.
Some reasons to avoid using materialised events in the API:
- Replication events have a _lot_ of detailed context information that can be
needed in events: user-defined variables, random seed, character sets,
table column names and types, etc. etc. If we make the API based on
materialisation, then the initial decision about which context information
to include with which events will have to be done in the API, while ideally
we want this decision to be done by the individual consumer plugin. There
will thus be a conflict between what to include (to allow consumers access)
and what to exclude (to avoid excessive needless work).
- Materialising means defining a very specific format, which will tend to
make the API less generic and flexible.
- Unless the materialised format is made _very_ specific (and thus very
inflexible), it is unlikely to be directly useful for transport
(eg. binlog), so it will need to be re-materialised into a different format
anyway, wasting work.
- If a generator on top handles an event, then we want to avoid wasting work
materialising an event in a generator below which would be completely
unused. Thus there would be a need for the upper generator to somehow
notify the lower generator ahead of event generation time to not fire an
event, complicating the API.
Some advantages for materialisation:
- Using an API based on passing around some well-defined struct event (or
byte buffer) will be simpler than the complex class hierarchy proposed here
with no requirement for materialisation.
- Defining a materialised format would allow an easy way to use the same
consumer code on a generator that produces events at the source of
execution and on a generator that produces events from eg. reading them
from an event log.
Note that there can be some middle way, where some data is materialised and
some is kept as reference to context (eg. THD) only. This however looses most
of the mentioned advantages for materialisation.
Also, the non-materialising interface should be a good interface on top of
which to build a materialising interface.
The design proposed here aims for as little materialisation as possible.
Default materialisation format
------------------------------
While the proposed API doesn't _require_ materialisation, we can still think
about providing the _option_ for built-in materialisation. This could be
useful if such materialisation is made suitable for transport to a different
server (eg. no endian-dependence etc). If there is a facility for such
materialisation built-in to the API, it becomes possible to write something
like a generic binlog plugin or generic network transport plugin. This would
be really useful for eg. PBXT engine-level replication, as it could be
implemented without having to re-invent a binlog format.
I added in the proposed API a simple facility to materialise every event as a
string of bytes. To use this, I still need to add a suitable facility to
de-materialise the event.
However, it is still an open question whether such a facility will be at all
useful. It still has some of the problems with materialisation mentioned
above. And I think it is likely that a good binlog implementation will need
to do more than just blindly copy opaque events from one endpoint to
another. For example, it might need different event boundaries (merge and/or
split events); it might need to augment or modify events, or inject new
events, etc.
So I think maybe it is better to add such a generic materialisation facility
on top of the basic event generator API. Such a facility would provide
materialisation of a replication event stream, not of individual events, so
would be more flexible in providing a good implementation. It would be
implemented for all generators. It would be separate from both the event
generator API (so we have flexibility to put a filter class in-between
generator and materialisation), and could also be separate from the actual
transport handling stuff like fsync() of binlog files and socket connections
etc. It would be paired with a corresponding applier API which would handle
executing events on a slave.
Then we can have a default materialised event format, which is available, but
not mandatory. So there can still be other formats alongside (like legacy
MySQL 5.1 binlog event format and maybe Tungsten would have its own format).
Encapsulation
-------------
Another fundamental question about the design is the level of encapsulation
used for the API.
At the implementation level, a lot of the work is basically to pull out all of
the needed information from the THD object/context. The API I propose tries to
_not_ expose the THD to consumers. Instead it provides accessor functions for
all the bits and pieces relevant to each replication event, while the event
class itself likely will be more or less just an encapsulated THD.
So an alternative would be to have a generic event that was just (type, THD).
Then consumers could just pull out whatever information they want from the
THD. The THD implementation is already exposed to storage engines. This would
of course greatly reduce the size of the API, eliminating lots of class
definitions and accessor functions. Though arguably it wouldn't really
simplify the API, as the complexity would just be in understanding the THD
class.
Note that we do not have to take any performance hit from using encapsulated
accessors since compilers can inline them (though if inlining then we do not
get any ABI stability with respect to THD implementation).
For now, the API is proposed without exposing the THD class. (Similar
encapsulation could be added in actual implementation to also not expose TABLE
and similar classes).
LOW-LEVEL DESIGN:
A consumer is implemented as a virtual class (interface). There is one virtual
function for every event that can be received. A consumer would derive from
the base class and override methods for the events it wants to receive.
There is one consumer interface for each generator. When a generator A is
stacked on B, the consumer interface for A inherits from the interface for
B. This way, when A defers an event to B, the consumer for A will receive the
corresponding event from B.
There are methods for a consumer to register itself to receive events from
each generator. I still need to find a way for a consumer in one plugin to
register itself with a generator implemented in another plugin (eg. PBXT
engine-level replication). I also need to add a way for consumers to
de-register themselves.
The current design has consumer callbacks return 0 for success and error code
otherwise. I still need to think more about whether this is useful (ie. what
is the semantics of returning an error from a consumer callback).
Each event passed to consumers is defined as a class with public accessor
methods to a private context (which is mostly the THD).
My intention is to make all events passed around const, so that the same event
can be passed to each of multiple registered consumers (and to emphasise that
consumers do not have the ability to modify events). It still needs to be seen
whether that const-ness will be feasible in practice without very heavy
modification/constification of existing code.
What follows is a partial draft of a possible definition of the API as
concrete C++ class definitions.
-----------------------------------------------------------------------
/*
Virtual base class for generated replication events.
This is the parent of events generated from all kinds of generators. Only
child classes can be instantiated.
This class can be used by code that wants to treat events in a generic way,
without any knowledge of event details. I still need to decide whether such
generic code is sensible.
*/
class rpl_event_base
{
public:
/*
Maybe we will want the ability to materialise an event to a standard
binary format. This could be achieved with a base method like this. The
actual materialisation would be implemented in each deriving class. The
public methods would provide different interfaces for specifying the
buffer or for writing directly into IO_CACHE or file.
*/
/* Return 0 on success, -1 on error, -2 on out-of-buffer. */
int materialise(uchar *buffer, size_t buflen) const;
/*
Returns NULL on error or else malloc()ed buffer with materialised event,
caller must free().
*/
uchar *materialise() const;
/* Same but using passed in memroot. */
uchar *materialise(mem_root *memroot) const;
/*
Materialise to user-supplied writer function (could write directly to file
or the like).
*/
int materialise(int (*writer)(uchar *data, size_t len, void *context)) const;
/*
As for what to do with a materialised event, there are a couple of
possibilities.
One is to have a de_materialise() method somewhere that can construct an
rpl_event_base (really a derived class of course) from a buffer or writer
function. This would require each accessor function to conditionally read
its data from either THD context or buffer (GCC is able to optimise
several such conditionals in multiple accessor function calls into one
conditional), or we can make all accessors virtual if the performance hit
is acceptable.
Another is to have different classes for accessing events read from
materialised event data.
Also, I still need to think about whether it is at all useful to be able
to generically materialise an event at this level. It may be that any
binlog/transport will in any case need to understand more of the format of
events, so that such materialisation/transport is better done at a
different layer.
*/
protected:
/* Implementation which is the basis for materialise(). */
virtual int do_materialise(int (*writer)(uchar *data, size_t len,
void *context)) const = 0;
private:
/* Virtual base class, private constructor to prevent instantiation. */
rpl_event_base();
};
/*
These are the event types output from the transaction event generator.
This generator is not stacked on anything.
The transaction event generator marks the start and end (commit or rollback)
of transactions. It also gives information about whether the transaction was
a full transaction or autocommitted statement, whether transactional tables
were involved, whether non-transactional tables were involved, and XA
information (ToDo).
*/
/* Base class for transaction events. */
class rpl_event_transaction_base : public rpl_event_base
{
public:
/*
Get the local transaction id. This id is only unique within one server.
It is allocated whenever a new transaction is started.
Can be used to identify events belonging to the same transaction in a
binlog-like stream of events streamed in parallel among multiple
transactions.
*/
uint64_t get_local_trx_id() const { return thd->local_trx_id; };
bool get_is_autocommit() const;
private:
/* The context is the THD. */
THD *thd;
rpl_event_transaction_base(THD *_thd) : thd(_thd) { };
};
/* Transaction start event. */
class rpl_event_transaction_start : public rpl_event_transaction_base
{
};
/* Transaction commit. */
class rpl_event_transaction_commit : public rpl_event_transaction_base
{
public:
/*
The global transaction id is unique cross-server.
It can be used to identify the position from which to start a slave
replicating from a master.
This global ID is only available once the transaction is decided to commit
by the TC manager / primary redundancy service. This TC also allocates the
ID and decides the exact semantics (can there be gaps, etc); however the
format is fixed (cluster_id, running_counter).
*/
struct global_transaction_id
{
uint32_t cluster_id;
uint64_t counter;
};
const global_transaction_id *get_global_transaction_id() const;
};
/* Transaction rollback. */
class rpl_event_transaction_rollback : public rpl_event_transaction_base
{
};
/* Base class for statement events. */
class rpl_event_statement_base : public rpl_event_base
{
public:
LEX_STRING get_current_db() const;
};
class rpl_event_statement_start : public rpl_event_statement_base
{
};
class rpl_event_statement_end : public rpl_event_statement_base
{
public:
int get_errorcode() const;
};
class rpl_event_statement_query : public rpl_event_statement_base
{
public:
LEX_STRING get_query_string();
ulong get_sql_mode();
const CHARSET_INFO *get_character_set_client();
const CHARSET_INFO *get_collation_connection();
const CHARSET_INFO *get_collation_server();
const CHARSET_INFO *get_collation_default_db();
/*
Access to relevant flags that affect query execution.
Use as if (ev->get_flags() & (uint32)STMT_FOREIGN_KEY_CHECKS) { ... }
*/
enum flag_bits
{
STMT_FOREIGN_KEY_CHECKS= (1 << 0), // @@foreign_key_checks
STMT_UNIQUE_KEY_CHECKS= (1 << 1), // @@unique_checks
STMT_AUTO_IS_NULL= (1 << 2), // @@sql_auto_is_null
};
uint32_t get_flags();
ulong get_auto_increment_offset();
ulong get_auto_increment_increment();
// And so on for: time zone; day/month names; connection id; LAST_INSERT_ID;
// INSERT_ID; random seed; user variables.
//
// We probably also need get_uses_temporary_table(), get_used_user_vars(),
// get_uses_auto_increment() and so on, so a consumer can get more
// information about what kind of context information a query will need when
// executed on a slave.
};
class rpl_event_statement_load_query : public rpl_event_statement_query
{
};
/*
This event is fired with blocks of data for files read (from server-local
file or client connection) for LOAD DATA.
*/
class rpl_event_statement_load_data_block : public rpl_event_statement_base
{
public:
struct block
{
const uchar *ptr;
size_t size;
};
block get_block() const;
};
/* Base class for row-based replication events. */
class rpl_event_row_base : public rpl_event_base
{
public:
/*
Access to relevant handler extra flags and other flags that affect row
operations.
Use as if (ev->get_flags() & (uint32)ROW_WRITE_CAN_REPLACE) { ... }
*/
enum flag_bits
{
ROW_WRITE_CAN_REPLACE= (1 << 0), // HA_EXTRA_WRITE_CAN_REPLACE
ROW_IGNORE_DUP_KEY= (1 << 1), // HA_EXTRA_IGNORE_DUP_KEY
ROW_IGNORE_NO_KEY= (1 << 2), // HA_EXTRA_IGNORE_NO_KEY
ROW_DISABLE_FOREIGN_KEY_CHECKS= (1 << 3), // ! @@foreign_key_checks
ROW_DISABLE_UNIQUE_KEY_CHECKS= (1 << 4), // ! @@unique_checks
};
uint32_t get_flags();
/* Access to list of tables modified. */
class table_iterator
{
public:
/* Returns table, NULL after last. */
const TABLE *get_next();
private:
// ...
};
table_iterator get_modified_tables() const;
private:
/* Context used to provide accessors. */
THD *thd;
protected:
rpl_event_row_base(THD *_thd) : thd(_thd) { }
};
class rpl_event_row_write : public rpl_event_row_base
{
public:
const BITMAP *get_write_set() const;
const uchar *get_after_image() const;
};
class rpl_event_row_update : public rpl_event_row_base
{
public:
const BITMAP *get_read_set() const;
const BITMAP *get_write_set() const;
const uchar *get_before_image() const;
const uchar *get_after_image() const;
};
class rpl_event_row_delete : public rpl_event_row_base
{
public:
const BITMAP *get_read_set() const;
const uchar *get_before_image() const;
};
/*
Event consumer callbacks.
An event consumer registers with an event generator to receive event
notifications from that generator.
The consumer has callbacks (in the form of virtual functions) for the
individual event types the consumer is interested in. Only callbacks that
are non-NULL will be invoked. If an event applies to multiple callbacks in a
single callback struct, it will only be passed to the most specific non-NULL
callback (so events never fire more than once per registration).
The lifetime of the memory holding the event is only for the duration of the
callback invocation, unless otherwise noted.
Callbacks return 0 for success or error code (ToDo: does this make sense?).
*/
struct rpl_event_consumer_transaction
{
virtual int trx_start(const rpl_event_transaction_start *) { return 0; }
virtual int trx_commit(const rpl_event_transaction_commit *) { return 0; }
virtual int trx_rollback(const rpl_event_transaction_rollback *) { return 0; }
};
/*
Consuming statement-based events.
The statement event generator is stacked on top of the transaction event
generator, so we can receive those events as well.
*/
struct rpl_event_consumer_statement : public rpl_event_consumer_transaction
{
virtual int stmt_start(const rpl_event_statement_start *) { return 0; }
virtual int stmt_end(const rpl_event_statement_end *) { return 0; }
virtual int stmt_query(const rpl_event_statement_query *) { return 0; }
/* Data for a file used in LOAD DATA [LOCAL] INFILE. */
virtual int stmt_load_data_block(const rpl_event_statement_load_data_block *)
{ return 0; }
/*
These are specific kinds of statements; if specified they override
stmt_query() for the corresponding event.
*/
virtual int stmt_load_query(const rpl_event_statement_load_query *ev)
{ return stmt_query(ev); }
};
/*
Consuming row-based events.
The row event generator is stacked on top of the statement event generator.
*/
struct rpl_event_consumer_row : public rpl_event_consumer_statement
{
virtual int row_write(const rpl_event_row_write *) { return 0; }
virtual int row_update(const rpl_event_row_update *) { return 0; }
virtual int row_delete(const rpl_event_row_delete *) { return 0; }
};
/*
Registration functions.
ToDo: Make a way to de-register.
ToDo: Find a good way for a plugin (eg. PBXT) to publish a generator
registration method.
*/
int rpl_event_transaction_register(const rpl_event_consumer_transaction *cbs);
int rpl_event_statement_register(const rpl_event_consumer_statement *cbs);
int rpl_event_row_register(const rpl_event_consumer_row *cbs);
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)
[Maria-developers] WL#107 Updated (by Sergei): New replication APIs
by worklog-noreply(a)askmonty.org 29 Jun '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: New replication APIs
CREATION DATE..: Mon, 15 Mar 2010, 13:55
SUPERVISOR.....: Knielsen
IMPLEMENTOR....: Knielsen
COPIES TO......: Sergei
CATEGORY.......: Server-Sprint
TASK ID........: 107 (http://askmonty.org/worklog/?tid=107)
VERSION........: Server-9.x
STATUS.........: In-Progress
PRIORITY.......: 60
WORKED HOURS...: 69
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Sergei - Tue, 29 Jun 2010, 13:50)=-=-
Status updated.
--- /tmp/wklog.107.old.31164 2010-06-29 13:50:15.000000000 +0000
+++ /tmp/wklog.107.new.31164 2010-06-29 13:50:15.000000000 +0000
@@ -1 +1 @@
-Un-Assigned
+In-Progress
-=-=(Knielsen - Thu, 24 Jun 2010, 12:04)=-=-
Dependency created: 107 now depends on 120
-=-=(Knielsen - Mon, 21 Jun 2010, 08:36)=-=-
Research and design thoughts.
-=-=(Knielsen - Mon, 07 Jun 2010, 12:11)=-=-
High Level Description modified.
--- /tmp/wklog.107.old.31097 2010-06-07 12:11:57.000000000 +0000
+++ /tmp/wklog.107.new.31097 2010-06-07 12:11:57.000000000 +0000
@@ -7,3 +7,6 @@
https://lists.launchpad.net/maria-developers/msg01998.html
+Wiki page for the project:
+
+ http://askmonty.org/wiki/ReplicationProject
-=-=(Knielsen - Mon, 29 Mar 2010, 07:33)=-=-
Research and design discussions: Galera, 2pc/XA, group commit, multi-engine transactions.
-=-=(Knielsen - Wed, 24 Mar 2010, 10:39)=-=-
Design discussions
-=-=(Knielsen - Mon, 15 Mar 2010, 14:28)=-=-
Research into the problem, and discussions on phone/mailing list
-=-=(Guest - Mon, 15 Mar 2010, 14:18)=-=-
High-Level Specification modified.
--- /tmp/wklog.107.old.9086 2010-03-15 14:18:18.000000000 +0000
+++ /tmp/wklog.107.new.9086 2010-03-15 14:18:18.000000000 +0000
@@ -1 +1,43 @@
+Current ideas/status after discussions on the mailing list:
+
+ - Implement a set of plugin APIs and use them to move all of the existing
+ MySQL replication into a (set of) plugins.
+
+ - Design the APIs so that they can support full MySQL replication, but also
+ so that they do not hardcode assumptions about how this replication
+ implementation is done, and so that they will be suitable for other types of
+ replication (Tungsten, Galera, parallel replication, ...).
+
+ - APIs need to include the concept of a global transaction ID. Need to
+ determine the extent to which the semantics of such ID will be defined
+ by the API, and to which extend it will be defined by the plugin
+ implementations.
+
+ - APIs should properly support reliable crash-recovery with decent
+ performance (eg. not require multiple mandatory fsync()s per commit, and
+ not make group commit impossible).
+
+ - Would be nice if the API provided facilities for implementing good
+ consistency checking support (mainly checking master tables against slave
+ tables is hard here I think, but also applying wrong binlog data and
+ individual event checksums).
+
+
+Steps to make this more concrete:
+
+ - Investigate the current MySQL replication, and list all of the places where
+ a plugin implementation will need to connect/hook into the MySQL server.
+ * handler::{write,update,delete}_row()
+ * Statement execution
+ * Transaction start/commit
+ * Table open
+ * Query safe/not/safe for statement based replication
+ * Statement-based logging details (user variables, random seed, etc.)
+ * ...
+
+ - Use this list to make an initial sketch of the set of APIs we need.
+
+ - Use the list to determine the feasibility of this project and the level of
+ detail in the API needed to support a full replication implementation as a
+ plugin.
-=-=(Sergei - Mon, 15 Mar 2010, 14:13)=-=-
Observers changed: Sergei
DESCRIPTION:
This is a top-level task for the project of designing a new set of replication
APIs for MariaDB.
This task is for the initial discussion of what to do and where to focus.
The project was started in this email thread:
https://lists.launchpad.net/maria-developers/msg01998.html
Wiki page for the project:
http://askmonty.org/wiki/ReplicationProject
HIGH-LEVEL SPECIFICATION:
Current ideas/status after discussions on the mailing list:
- Implement a set of plugin APIs and use them to move all of the existing
MySQL replication into a (set of) plugins.
- Design the APIs so that they can support full MySQL replication, but also
so that they do not hardcode assumptions about how this replication
implementation is done, and so that they will be suitable for other types of
replication (Tungsten, Galera, parallel replication, ...).
- APIs need to include the concept of a global transaction ID (one possible
shape is sketched, purely as an illustration, after this list). Need to
determine the extent to which the semantics of such an ID will be defined
by the API, and to which extent by the plugin implementations.
- APIs should properly support reliable crash-recovery with decent
performance (eg. not require multiple mandatory fsync()s per commit, and
not make group commit impossible).
- It would be nice if the API provided facilities for implementing good
consistency checking (checking master tables against slave tables is the
hard part here, I think; detecting that wrong binlog data was applied, and
per-event checksums, would also be useful).
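Purely as an illustration of the global transaction ID point above (none of
these names or fields are from the worklog, and nothing here is decided): an
ID could be as small as a (server id, sequence number) pair, with anything
richer left to the API discussion or to the plugins:

struct rpl_global_trx_id                /* hypothetical */
{
  uint32 server_id;                     /* server that first committed the transaction */
  ulonglong seq_no;                     /* monotonically increasing on that server */
};

/*
  Total order for IDs originating on one server; how IDs from different
  servers compare is exactly the kind of semantics the API would have to
  either define or leave to the plugin.
*/
bool before(const rpl_global_trx_id &a, const rpl_global_trx_id &b)
{
  return a.server_id == b.server_id && a.seq_no < b.seq_no;
}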
Steps to make this more concrete:
- Investigate the current MySQL replication, and list all of the places where
a plugin implementation will need to connect/hook into the MySQL server.
* handler::{write,update,delete}_row()
* Statement execution
* Transaction start/commit
* Table open
* Query safe/not safe for statement-based replication
* Statement-based logging details (user variables, random seed, etc.)
* ...
- Use this list to make an initial sketch of the set of APIs we need.
- Use the list to determine the feasibility of this project and the level of
detail in the API needed to support a full replication implementation as a
plugin.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v4.0.0)
