[Maria-developers] [Branch ~maria-captains/maria/5.1] Rev 2834: Fixed compiler warnings and sporadic failures in test cases
by noreply@launchpad.net 28 Mar '10
------------------------------------------------------------
revno: 2834
committer: Michael Widenius <monty@askmonty.org>
branch nick: maria-5.1
timestamp: Sun 2010-03-28 21:10:00 +0300
message:
Fixed compiler warnings and sporadic failures in test cases
modified:
mysql-test/extra/rpl_tests/rpl_tmp_table_and_DDL.test
mysql-test/include/default_mysqld.cnf
mysql-test/lib/My/SafeProcess/safe_process.cc
mysql-test/lib/v1/mysql-test-run.pl
mysql-test/suite/rpl/r/rpl_do_grant.result
mysql-test/suite/rpl/t/rpl_do_grant.test
mysql-test/suite/rpl/t/rpl_name_const.test
mysql-test/suite/rpl/t/rpl_row_basic_11bugs.test
mysql-test/t/bug47671-master.opt
mysql-test/t/ctype_latin1_de-master.opt
mysql-test/t/ctype_ucs2_def-master.opt
sql-common/client.c
sql/item.cc
sql/item.h
sql/item_cmpfunc.cc
sql/item_create.cc
sql/item_create.h
sql/item_sum.cc
sql/item_sum.h
sql/set_var.cc
sql/sql_yacc.yy
storage/example/ha_example.h
storage/maria/ma_search.c
storage/maria/maria_def.h
storage/myisam/ft_stopwords.c
storage/xtradb/fil/fil0fil.c
storage/xtradb/include/page0page.h
storage/xtradb/include/page0page.ic
support-files/compiler_warnings.supp
--
lp:maria
https://code.launchpad.net/~maria-captains/maria/5.1
Your team Maria developers is subscribed to branch lp:maria.
To unsubscribe from this branch go to https://code.launchpad.net/~maria-captains/maria/5.1/+edit-subscription.
Hi,
Kristian, AM_MAKEFLAGS is a nice feature in the build scripts, but
there's one problem with it: we also have branches in buildbot that do
not support AM_MAKEFLAGS. mysql-5.1-testing, for example, and I suppose
it's not the only one.
What can we do? We could add AM_MAKEFLAGS support to mysql-5.1-testing
and the other branches, which should be pretty safe. Or we could
implement a more robust limiting of buildbot slave resources that does
not depend on cooperating build scripts.
Regards,
Sergei
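For readers unfamiliar with the mechanism being discussed: automake-generated
Makefiles forward $(AM_MAKEFLAGS) to their recursive $(MAKE) calls, so a value
set on the command line propagates into sub-makes. The fragment below is a
minimal, hypothetical sketch of that pattern (the target name and directory are
illustrative, not taken from the tree); a buildbot slave could then run
"make AM_MAKEFLAGS='-j2'" to cap parallelism without the build script needing
any slave-specific knowledge.

# Hypothetical Makefile.am fragment -- illustrative only, not from the tree.
# Recipe lines must be indented with a tab.
# A buildbot slave invokes:   make AM_MAKEFLAGS='-j2'
# and the -j limit propagates into the nested make below.
test-force:
	cd mysql-test ; $(MAKE) $(AM_MAKEFLAGS) test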
[Maria-developers] Rev 2751: options for CREATE TABLE (MWL#43) in file:///home/bell/maria/bzr/work-maria-5.2-createoptions2/
by sanja@askmonty.org 26 Mar '10
At file:///home/bell/maria/bzr/work-maria-5.2-createoptions2/
------------------------------------------------------------
revno: 2751
revision-id: sanja@askmonty.org-20100326214958-d7twnmqeoxt6hpl3
parent: sergii@pisem.net-20100323092233-t2gwaclx94hd6exa
committer: sanja@askmonty.org
branch nick: work-maria-5.2-createoptions2
timestamp: Fri 2010-03-26 23:49:58 +0200
message:
options for CREATE TABLE (MWL#43)
=== modified file 'Docs/sp-imp-spec.txt'
--- a/Docs/sp-imp-spec.txt 2004-03-23 11:04:40 +0000
+++ b/Docs/sp-imp-spec.txt 2010-03-26 21:49:58 +0000
@@ -1075,7 +1075,7 @@
'PIPES_AS_CONCAT',
'ANSI_QUOTES',
'IGNORE_SPACE',
- 'NOT_USED',
+ 'CREATE_OPTIONS_ERR',
'ONLY_FULL_GROUP_BY',
'NO_UNSIGNED_SUBTRACTION',
'NO_DIR_IN_CREATE',
@@ -1097,4 +1097,4 @@
) comment='Stored Procedures';
--
-
\ No newline at end of file
+
=== modified file 'include/my_base.h'
--- a/include/my_base.h 2010-02-10 19:06:24 +0000
+++ b/include/my_base.h 2010-03-26 21:49:58 +0000
@@ -314,6 +314,8 @@
#define HA_OPTION_RELIES_ON_SQL_LAYER 512
#define HA_OPTION_NULL_FIELDS 1024
#define HA_OPTION_PAGE_CHECKSUM 2048
+/* .frm has extra create options in linked-list format */
+#define HA_OPTION_TEXT_CREATE_OPTIONS (1L << 14)
#define HA_OPTION_TEMP_COMPRESS_RECORD (1L << 15) /* set by isamchk */
#define HA_OPTION_READ_ONLY_DATA (1L << 16) /* Set by isamchk */
#define HA_OPTION_NO_CHECKSUM (1L << 17)
=== modified file 'libmysqld/CMakeLists.txt'
--- a/libmysqld/CMakeLists.txt 2010-01-31 09:13:21 +0000
+++ b/libmysqld/CMakeLists.txt 2010-03-26 21:49:58 +0000
@@ -139,7 +139,8 @@
../sql/strfunc.cc ../sql/table.cc ../sql/thr_malloc.cc
../sql/time.cc ../sql/tztime.cc ../sql/uniques.cc ../sql/unireg.cc
../sql/partition_info.cc ../sql/sql_connect.cc
- ../sql/scheduler.cc ../sql/event_parse_data.cc
+ ../sql/scheduler.cc ../sql/event_parse_data.cc
+ ../sql/create_options.cc
${GEN_SOURCES}
${LIB_SOURCES})
=== modified file 'libmysqld/Makefile.am'
--- a/libmysqld/Makefile.am 2009-12-03 11:19:05 +0000
+++ b/libmysqld/Makefile.am 2010-03-26 21:49:58 +0000
@@ -75,7 +75,7 @@
parse_file.cc sql_view.cc sql_trigger.cc my_decimal.cc \
rpl_filter.cc sql_partition.cc sql_builtin.cc sql_plugin.cc \
debug_sync.cc \
- sql_tablespace.cc \
+ sql_tablespace.cc create_options.cc \
rpl_injector.cc my_user.c partition_info.cc \
sql_servers.cc event_parse_data.cc opt_table_elimination.cc
=== added file 'mysql-test/r/create_options.result'
--- a/mysql-test/r/create_options.result 1970-01-01 00:00:00 +0000
+++ b/mysql-test/r/create_options.result 2010-03-26 21:49:58 +0000
@@ -0,0 +1,169 @@
+drop table if exists t1;
+SET @OLD_SQL_MODE=@@SQL_MODE;
+SET SQL_MODE='';
+create table t1 (a int fkey=vvv, key akey (a) dff=vvv) tkey1=1v1;
+Warnings:
+Warning 1650 Unknown option 'fkey'='vvv'
+Warning 1650 Unknown option 'dff'='vvv'
+Warning 1650 Unknown option 'tkey1'='1v1'
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey='vvv',
+ KEY `akey` (`a`) dff='vvv'
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey1='1v1'
+drop table t1;
+#reassiginig options in the same line
+create table t1 (a int fkey=vvv, key akey (a) dff=vvv) tkey1=1v1 TKEY1=DEFAULT tkey1=1v2 tkey2=2v1;
+Warnings:
+Warning 1650 Unknown option 'fkey'='vvv'
+Warning 1650 Unknown option 'dff'='vvv'
+Warning 1650 Unknown option 'tkey1'='1v2'
+Warning 1650 Unknown option 'tkey2'='2v1'
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey='vvv',
+ KEY `akey` (`a`) dff='vvv'
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey1='1v2' tkey2='2v1'
+#add option
+alter table t1 tkey4=4v1;
+Warnings:
+Warning 1650 Unknown option 'tkey4'='4v1'
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey='vvv',
+ KEY `akey` (`a`) dff='vvv'
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey1='1v2' tkey2='2v1' tkey4='4v1'
+#remove options
+alter table t1 tkey3=DEFAULT tkey4=DEFAULT;
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey='vvv',
+ KEY `akey` (`a`) dff='vvv'
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey1='1v2' tkey2='2v1'
+drop table t1;
+create table t1 (a int fkey1=v1, key akey (a) kkey1=v1) tkey1=1v1 tkey1=1v2 TKEY1=DEFAULT tkey2=2v1 tkey3=3v1;
+Warnings:
+Warning 1650 Unknown option 'fkey1'='v1'
+Warning 1650 Unknown option 'kkey1'='v1'
+Warning 1650 Unknown option 'tkey2'='2v1'
+Warning 1650 Unknown option 'tkey3'='3v1'
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey1='v1',
+ KEY `akey` (`a`) kkey1='v1'
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey2='2v1' tkey3='3v1'
+#change field with option with the same option
+alter table t1 change a a int `FKEY1`='v1';
+Warnings:
+Warning 1650 Unknown option 'FKEY1'='v1'
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL FKEY1='v1',
+ KEY `akey` (`a`) kkey1='v1'
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey2='2v1' tkey3='3v1'
+#change field with option with a different option
+alter table t1 change a a int fkey1=v2;
+Warnings:
+Warning 1650 Unknown option 'fkey1'='v2'
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey1='v2',
+ KEY `akey` (`a`) kkey1='v1'
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey2='2v1' tkey3='3v1'
+#new column no options
+alter table t1 add column b int;
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey1='v2',
+ `b` int(11) DEFAULT NULL,
+ KEY `akey` (`a`) kkey1='v1'
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey2='2v1' tkey3='3v1'
+#new key with options
+alter table t1 add key bkey (b) kkey2=v1;
+Warnings:
+Warning 1650 Unknown option 'kkey2'='v1'
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey1='v2',
+ `b` int(11) DEFAULT NULL,
+ KEY `akey` (`a`) kkey1='v1',
+ KEY `bkey` (`b`) kkey2='v1'
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey2='2v1' tkey3='3v1'
+#new column with options
+alter table t1 add column c int fkey1=v1 fkey2=v2;
+Warnings:
+Warning 1650 Unknown option 'fkey1'='v1'
+Warning 1650 Unknown option 'fkey2'='v2'
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey1='v2',
+ `b` int(11) DEFAULT NULL,
+ `c` int(11) DEFAULT NULL fkey1='v1' fkey2='v2',
+ KEY `akey` (`a`) kkey1='v1',
+ KEY `bkey` (`b`) kkey2='v1'
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey2='2v1' tkey3='3v1'
+#new key no options
+alter table t1 add key ckey (c);
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey1='v2',
+ `b` int(11) DEFAULT NULL,
+ `c` int(11) DEFAULT NULL fkey1='v1' fkey2='v2',
+ KEY `akey` (`a`) kkey1='v1',
+ KEY `bkey` (`b`) kkey2='v1',
+ KEY `ckey` (`c`)
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey2='2v1' tkey3='3v1'
+#drop column
+alter table t1 drop b;
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey1='v2',
+ `c` int(11) DEFAULT NULL fkey1='v1' fkey2='v2',
+ KEY `akey` (`a`) kkey1='v1',
+ KEY `ckey` (`c`)
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey2='2v1' tkey3='3v1'
+#add column with options after delete
+alter table t1 add column b int fkey2=v1;
+Warnings:
+Warning 1650 Unknown option 'fkey2'='v1'
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey1='v2',
+ `c` int(11) DEFAULT NULL fkey1='v1' fkey2='v2',
+ `b` int(11) DEFAULT NULL fkey2='v1',
+ KEY `akey` (`a`) kkey1='v1',
+ KEY `ckey` (`c`)
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey2='2v1' tkey3='3v1'
+#add key
+alter table t1 add key bkey (b) kkey2=v2;
+Warnings:
+Warning 1650 Unknown option 'kkey2'='v2'
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey1='v2',
+ `c` int(11) DEFAULT NULL fkey1='v1' fkey2='v2',
+ `b` int(11) DEFAULT NULL fkey2='v1',
+ KEY `akey` (`a`) kkey1='v1',
+ KEY `ckey` (`c`),
+ KEY `bkey` (`b`) kkey2='v2'
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey2='2v1' tkey3='3v1'
+drop table t1;
+#error on unknown option
+SET SQL_MODE='CREATE_OPTIONS_ERR';
+create table t1 (a int fkey=vvv, key akey (a) dff=vvv) tkey1=1v1;
+ERROR HY000: Unknown option 'fkey'='vvv'
+SET @@SQL_MODE=@OLD_SQL_MODE;
=== modified file 'mysql-test/r/events_bugs.result'
--- a/mysql-test/r/events_bugs.result 2009-03-11 20:30:56 +0000
+++ b/mysql-test/r/events_bugs.result 2010-03-26 21:49:58 +0000
@@ -729,9 +729,8 @@
create event e1 on schedule every 1 day do select 1;
select @@sql_mode;
@@sql_mode
-REAL_AS_FLOAT,PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE,?,ONLY_FULL_GROUP_BY,NO_UNSIGNED_SUBTRACTION,NO_DIR_IN_CREATE,POSTGRESQL,ORACLE,MSSQL,DB2,MAXDB,NO_KEY_OPTIONS,NO_TABLE_OPTIONS,NO_FIELD_OPTIONS,MYSQL323,MYSQL40,ANSI,NO_AUTO_VALUE_ON_ZERO,NO_BACKSLASH_ESCAPES,STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ALLOW_INVALID_DATES,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,HIGH_NOT_PRECEDENCE,NO_ENGINE_SUBSTITUTION,PAD_CHAR_TO_FULL_LENGTH
+REAL_AS_FLOAT,PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE,CREATE_OPTIONS_ERR,ONLY_FULL_GROUP_BY,NO_UNSIGNED_SUBTRACTION,NO_DIR_IN_CREATE,POSTGRESQL,ORACLE,MSSQL,DB2,MAXDB,NO_KEY_OPTIONS,NO_TABLE_OPTIONS,NO_FIELD_OPTIONS,MYSQL323,MYSQL40,ANSI,NO_AUTO_VALUE_ON_ZERO,NO_BACKSLASH_ESCAPES,STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ALLOW_INVALID_DATES,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,HIGH_NOT_PRECEDENCE,NO_ENGINE_SUBSTITUTION,PAD_CHAR_TO_FULL_LENGTH
set @@sql_mode= @old_mode;
-select replace(@full_mode, '?', 'NOT_USED') into @full_mode;
select replace(@full_mode, 'ALLOW_INVALID_DATES', 'INVALID_DATES') into @full_mode;
select name from mysql.event where name = 'p' and sql_mode = @full_mode;
name
=== modified file 'mysql-test/r/information_schema.result'
--- a/mysql-test/r/information_schema.result 2010-03-15 11:51:23 +0000
+++ b/mysql-test/r/information_schema.result 2010-03-26 21:49:58 +0000
@@ -615,7 +615,7 @@
proc definer char(77)
proc created timestamp
proc modified timestamp
-proc sql_mode set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
+proc sql_mode set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
proc comment char(64)
proc character_set_client char(32)
proc collation_connection char(32)
=== modified file 'mysql-test/r/plugin_load.result'
--- a/mysql-test/r/plugin_load.result 2008-01-26 00:05:15 +0000
+++ b/mysql-test/r/plugin_load.result 2010-03-26 21:49:58 +0000
@@ -1,3 +1,30 @@
SELECT @@global.example_enum_var = 'e2';
@@global.example_enum_var = 'e2'
1
+#legal values
+CREATE TABLE t1 ( a int complex='c,f,f,f' ) ENGINE=example UUL=10000 STR='dskj' one_or_two='one' YESNO=0;
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL complex='c,f,f,f'
+) ENGINE=EXAMPLE DEFAULT CHARSET=latin1 UUL=10000 STR='dskj' one_or_two='one' YESNO=0
+drop table t1;
+SET @OLD_SQL_MODE=@@SQL_MODE;
+SET SQL_MODE='';
+#illegal value fixed
+CREATE TABLE t1 (a int) ENGINE=example UUL=10000000000000000000 one_or_two='ttt' YESNO=SSS;
+Warnings:
+Warning 1651 Incorrect option value 'UUL'='10000000000000000000'
+Warning 1651 Incorrect option value 'one_or_two'='ttt'
+Warning 1651 Incorrect option value 'YESNO'='SSS'
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL
+) ENGINE=EXAMPLE DEFAULT CHARSET=latin1 UUL=4294967295 YESNO=0
+drop table t1;
+#illegal value error
+SET SQL_MODE='CREATE_OPTIONS_ERR';
+CREATE TABLE t1 (a int) ENGINE=example UUL=10000000000000000000 one_or_two='ttt' YESNO=SSS;
+ERROR HY000: Incorrect option value 'UUL'='10000000000000000000'
+SET @@SQL_MODE=@OLD_SQL_MODE;
=== modified file 'mysql-test/r/sp.result'
--- a/mysql-test/r/sp.result 2009-12-23 13:44:03 +0000
+++ b/mysql-test/r/sp.result 2010-03-26 21:49:58 +0000
@@ -6940,9 +6940,8 @@
call p();
select @@sql_mode;
@@sql_mode
-REAL_AS_FLOAT,PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE,?,ONLY_FULL_GROUP_BY,NO_UNSIGNED_SUBTRACTION,NO_DIR_IN_CREATE,POSTGRESQL,ORACLE,MSSQL,DB2,MAXDB,NO_KEY_OPTIONS,NO_TABLE_OPTIONS,NO_FIELD_OPTIONS,MYSQL323,MYSQL40,ANSI,NO_AUTO_VALUE_ON_ZERO,NO_BACKSLASH_ESCAPES,STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ALLOW_INVALID_DATES,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,HIGH_NOT_PRECEDENCE,NO_ENGINE_SUBSTITUTION,PAD_CHAR_TO_FULL_LENGTH
+REAL_AS_FLOAT,PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE,CREATE_OPTIONS_ERR,ONLY_FULL_GROUP_BY,NO_UNSIGNED_SUBTRACTION,NO_DIR_IN_CREATE,POSTGRESQL,ORACLE,MSSQL,DB2,MAXDB,NO_KEY_OPTIONS,NO_TABLE_OPTIONS,NO_FIELD_OPTIONS,MYSQL323,MYSQL40,ANSI,NO_AUTO_VALUE_ON_ZERO,NO_BACKSLASH_ESCAPES,STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ALLOW_INVALID_DATES,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,HIGH_NOT_PRECEDENCE,NO_ENGINE_SUBSTITUTION,PAD_CHAR_TO_FULL_LENGTH
set @@sql_mode= @old_mode;
-select replace(@full_mode, '?', 'NOT_USED') into @full_mode;
select replace(@full_mode, 'ALLOW_INVALID_DATES', 'INVALID_DATES') into @full_mode;
select name from mysql.proc where name = 'p' and sql_mode = @full_mode;
name
=== modified file 'mysql-test/r/system_mysql_db.result'
--- a/mysql-test/r/system_mysql_db.result 2009-10-27 10:09:36 +0000
+++ b/mysql-test/r/system_mysql_db.result 2010-03-26 21:49:58 +0000
@@ -200,7 +200,7 @@
`definer` char(77) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '',
`created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`modified` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
- `sql_mode` set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH') NOT NULL DEFAULT '',
+ `sql_mode` set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH') NOT NULL DEFAULT '',
`comment` char(64) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '',
`character_set_client` char(32) CHARACTER SET utf8 COLLATE utf8_bin DEFAULT NULL,
`collation_connection` char(32) CHARACTER SET utf8 COLLATE utf8_bin DEFAULT NULL,
@@ -225,7 +225,7 @@
`ends` datetime DEFAULT NULL,
`status` enum('ENABLED','DISABLED','SLAVESIDE_DISABLED') NOT NULL DEFAULT 'ENABLED',
`on_completion` enum('DROP','PRESERVE') NOT NULL DEFAULT 'DROP',
- `sql_mode` set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH') NOT NULL DEFAULT '',
+ `sql_mode` set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH') NOT NULL DEFAULT '',
`comment` char(64) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '',
`originator` int(10) unsigned NOT NULL,
`time_zone` char(64) CHARACTER SET latin1 NOT NULL DEFAULT 'SYSTEM',
=== modified file 'mysql-test/suite/funcs_1/r/is_columns_mysql.result'
--- a/mysql-test/suite/funcs_1/r/is_columns_mysql.result 2009-10-28 09:23:02 +0000
+++ b/mysql-test/suite/funcs_1/r/is_columns_mysql.result 2010-03-26 21:49:58 +0000
@@ -49,7 +49,7 @@
NULL mysql event name 2 NO char 64 192 NULL NULL utf8 utf8_general_ci char(64) PRI select,insert,update,references
NULL mysql event on_completion 14 DROP NO enum 8 24 NULL NULL utf8 utf8_general_ci enum('DROP','PRESERVE') select,insert,update,references
NULL mysql event originator 17 NULL NO int NULL NULL 10 0 NULL NULL int(10) unsigned select,insert,update,references
-NULL mysql event sql_mode 15 NO set 478 1434 NULL NULL utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH') select,insert,update,references
+NULL mysql event sql_mode 15 NO set 488 1464 NULL NULL utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH') select,insert,update,references
NULL mysql event starts 11 NULL YES datetime NULL NULL NULL NULL NULL NULL datetime select,insert,update,references
NULL mysql event status 13 ENABLED NO enum 18 54 NULL NULL utf8 utf8_general_ci enum('ENABLED','DISABLED','SLAVESIDE_DISABLED') select,insert,update,references
NULL mysql event time_zone 18 SYSTEM NO char 64 64 NULL NULL latin1 latin1_swedish_ci char(64) select,insert,update,references
@@ -124,7 +124,7 @@
NULL mysql proc security_type 8 DEFINER NO enum 7 21 NULL NULL utf8 utf8_general_ci enum('INVOKER','DEFINER') select,insert,update,references
NULL mysql proc specific_name 4 NO char 64 192 NULL NULL utf8 utf8_general_ci char(64) select,insert,update,references
NULL mysql proc sql_data_access 6 CONTAINS_SQL NO enum 17 51 NULL NULL utf8 utf8_general_ci enum('CONTAINS_SQL','NO_SQL','READS_SQL_DATA','MODIFIES_SQL_DATA') select,insert,update,references
-NULL mysql proc sql_mode 15 NO set 478 1434 NULL NULL utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH') select,insert,update,references
+NULL mysql proc sql_mode 15 NO set 488 1464 NULL NULL utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH') select,insert,update,references
NULL mysql proc type 3 NULL NO enum 9 27 NULL NULL utf8 utf8_general_ci enum('FUNCTION','PROCEDURE') PRI select,insert,update,references
NULL mysql procs_priv Db 2 NO char 64 192 NULL NULL utf8 utf8_bin char(64) PRI select,insert,update,references
NULL mysql procs_priv Grantor 6 NO char 77 231 NULL NULL utf8 utf8_bin char(77) MUL select,insert,update,references
@@ -327,7 +327,7 @@
NULL mysql event ends datetime NULL NULL NULL NULL datetime
3.0000 mysql event status enum 18 54 utf8 utf8_general_ci enum('ENABLED','DISABLED','SLAVESIDE_DISABLED')
3.0000 mysql event on_completion enum 8 24 utf8 utf8_general_ci enum('DROP','PRESERVE')
-3.0000 mysql event sql_mode set 478 1434 utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
+3.0000 mysql event sql_mode set 488 1464 utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
3.0000 mysql event comment char 64 192 utf8 utf8_bin char(64)
NULL mysql event originator int NULL NULL NULL NULL int(10) unsigned
1.0000 mysql event time_zone char 64 64 latin1 latin1_swedish_ci char(64)
@@ -402,7 +402,7 @@
3.0000 mysql proc definer char 77 231 utf8 utf8_bin char(77)
NULL mysql proc created timestamp NULL NULL NULL NULL timestamp
NULL mysql proc modified timestamp NULL NULL NULL NULL timestamp
-3.0000 mysql proc sql_mode set 478 1434 utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
+3.0000 mysql proc sql_mode set 488 1464 utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
3.0000 mysql proc comment char 64 192 utf8 utf8_bin char(64)
3.0000 mysql proc character_set_client char 32 96 utf8 utf8_bin char(32)
3.0000 mysql proc collation_connection char 32 96 utf8 utf8_bin char(32)
=== modified file 'mysql-test/suite/funcs_1/r/is_columns_mysql_embedded.result'
--- a/mysql-test/suite/funcs_1/r/is_columns_mysql_embedded.result 2009-05-19 16:43:50 +0000
+++ b/mysql-test/suite/funcs_1/r/is_columns_mysql_embedded.result 2010-03-26 21:49:58 +0000
@@ -49,7 +49,7 @@
NULL mysql event name 2 NO char 64 192 NULL NULL utf8 utf8_general_ci char(64) PRI
NULL mysql event on_completion 14 DROP NO enum 8 24 NULL NULL utf8 utf8_general_ci enum('DROP','PRESERVE')
NULL mysql event originator 17 NULL NO int NULL NULL 10 0 NULL NULL int(10) unsigned
-NULL mysql event sql_mode 15 NO set 478 1434 NULL NULL utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
+NULL mysql event sql_mode 15 NO set 478 1434 NULL NULL utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
NULL mysql event starts 11 NULL YES datetime NULL NULL NULL NULL NULL NULL datetime
NULL mysql event status 13 ENABLED NO enum 18 54 NULL NULL utf8 utf8_general_ci enum('ENABLED','DISABLED','SLAVESIDE_DISABLED')
NULL mysql event time_zone 18 SYSTEM NO char 64 64 NULL NULL latin1 latin1_swedish_ci char(64)
@@ -124,7 +124,7 @@
NULL mysql proc security_type 8 DEFINER NO enum 7 21 NULL NULL utf8 utf8_general_ci enum('INVOKER','DEFINER')
NULL mysql proc specific_name 4 NO char 64 192 NULL NULL utf8 utf8_general_ci char(64)
NULL mysql proc sql_data_access 6 CONTAINS_SQL NO enum 17 51 NULL NULL utf8 utf8_general_ci enum('CONTAINS_SQL','NO_SQL','READS_SQL_DATA','MODIFIES_SQL_DATA')
-NULL mysql proc sql_mode 15 NO set 478 1434 NULL NULL utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
+NULL mysql proc sql_mode 15 NO set 478 1434 NULL NULL utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
NULL mysql proc type 3 NULL NO enum 9 27 NULL NULL utf8 utf8_general_ci enum('FUNCTION','PROCEDURE') PRI
NULL mysql procs_priv Db 2 NO char 64 192 NULL NULL utf8 utf8_bin char(64) PRI
NULL mysql procs_priv Grantor 6 NO char 77 231 NULL NULL utf8 utf8_bin char(77) MUL
@@ -327,7 +327,7 @@
NULL mysql event ends datetime NULL NULL NULL NULL datetime
3.0000 mysql event status enum 18 54 utf8 utf8_general_ci enum('ENABLED','DISABLED','SLAVESIDE_DISABLED')
3.0000 mysql event on_completion enum 8 24 utf8 utf8_general_ci enum('DROP','PRESERVE')
-3.0000 mysql event sql_mode set 478 1434 utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
+3.0000 mysql event sql_mode set 478 1434 utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
3.0000 mysql event comment char 64 192 utf8 utf8_bin char(64)
NULL mysql event originator int NULL NULL NULL NULL int(10) unsigned
1.0000 mysql event time_zone char 64 64 latin1 latin1_swedish_ci char(64)
@@ -402,7 +402,7 @@
3.0000 mysql proc definer char 77 231 utf8 utf8_bin char(77)
NULL mysql proc created timestamp NULL NULL NULL NULL timestamp
NULL mysql proc modified timestamp NULL NULL NULL NULL timestamp
-3.0000 mysql proc sql_mode set 478 1434 utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
+3.0000 mysql proc sql_mode set 478 1434 utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
3.0000 mysql proc comment char 64 192 utf8 utf8_bin char(64)
3.0000 mysql proc character_set_client char 32 96 utf8 utf8_bin char(32)
3.0000 mysql proc collation_connection char 32 96 utf8 utf8_bin char(32)
=== added file 'mysql-test/t/create_options.test'
--- a/mysql-test/t/create_options.test 1970-01-01 00:00:00 +0000
+++ b/mysql-test/t/create_options.test 2010-03-26 21:49:58 +0000
@@ -0,0 +1,63 @@
+--disable_warnings
+drop table if exists t1;
+--enable_warnings
+
+SET @OLD_SQL_MODE=@@SQL_MODE;
+SET SQL_MODE='';
+
+create table t1 (a int fkey=vvv, key akey (a) dff=vvv) tkey1=1v1;
+show create table t1;
+drop table t1;
+
+--echo #reassiginig options in the same line
+create table t1 (a int fkey=vvv, key akey (a) dff=vvv) tkey1=1v1 TKEY1=DEFAULT tkey1=1v2 tkey2=2v1;
+show create table t1;
+
+-- echo #add option
+alter table t1 tkey4=4v1;
+show create table t1;
+
+--echo #remove options
+alter table t1 tkey3=DEFAULT tkey4=DEFAULT;
+show create table t1;
+
+drop table t1;
+
+create table t1 (a int fkey1=v1, key akey (a) kkey1=v1) tkey1=1v1 tkey1=1v2 TKEY1=DEFAULT tkey2=2v1 tkey3=3v1;
+show create table t1;
+
+--echo #change field with option with the same option
+alter table t1 change a a int `FKEY1`='v1';
+show create table t1;
+--echo #change field with option with a different option
+alter table t1 change a a int fkey1=v2;
+show create table t1;
+--echo #new column no options
+alter table t1 add column b int;
+show create table t1;
+--echo #new key with options
+alter table t1 add key bkey (b) kkey2=v1;
+show create table t1;
+--echo #new column with options
+alter table t1 add column c int fkey1=v1 fkey2=v2;
+show create table t1;
+--echo #new key no options
+alter table t1 add key ckey (c);
+show create table t1;
+--echo #drop column
+alter table t1 drop b;
+show create table t1;
+--echo #add column with options after delete
+alter table t1 add column b int fkey2=v1;
+show create table t1;
+--echo #add key
+alter table t1 add key bkey (b) kkey2=v2;
+show create table t1;
+drop table t1;
+
+--echo #error on unknown option
+SET SQL_MODE='CREATE_OPTIONS_ERR';
+--error ER_UNKNOWN_OPTION
+create table t1 (a int fkey=vvv, key akey (a) dff=vvv) tkey1=1v1;
+
+SET @@SQL_MODE=@OLD_SQL_MODE;
=== modified file 'mysql-test/t/events_bugs.test'
--- a/mysql-test/t/events_bugs.test 2009-03-11 20:30:56 +0000
+++ b/mysql-test/t/events_bugs.test 2010-03-26 21:49:58 +0000
@@ -1204,7 +1204,6 @@
select @@sql_mode;
set @@sql_mode= @old_mode;
# Rename SQL modes that differ in name between the server and the table definition.
-select replace(@full_mode, '?', 'NOT_USED') into @full_mode;
select replace(@full_mode, 'ALLOW_INVALID_DATES', 'INVALID_DATES') into @full_mode;
select name from mysql.event where name = 'p' and sql_mode = @full_mode;
drop event e1;
=== modified file 'mysql-test/t/exampledb.test'
--- a/mysql-test/t/exampledb.test 2006-05-05 17:08:40 +0000
+++ b/mysql-test/t/exampledb.test 2010-03-26 21:49:58 +0000
@@ -20,3 +20,4 @@
drop table t1;
# End of 4.1 tests
+
=== modified file 'mysql-test/t/plugin_load.test'
--- a/mysql-test/t/plugin_load.test 2009-10-08 08:39:15 +0000
+++ b/mysql-test/t/plugin_load.test 2010-03-26 21:49:58 +0000
@@ -2,3 +2,30 @@
--source include/have_example_plugin.inc
SELECT @@global.example_enum_var = 'e2';
+
+--echo #legal values
+CREATE TABLE t1 ( a int complex='c,f,f,f' ) ENGINE=example UUL=10000 STR='dskj' one_or_two='one' YESNO=0;
+show create table t1;
+drop table t1;
+
+SET @OLD_SQL_MODE=@@SQL_MODE;
+SET SQL_MODE='';
+
+--echo #illegal value fixed
+CREATE TABLE t1 (a int) ENGINE=example UUL=10000000000000000000 one_or_two='ttt' YESNO=SSS;
+show create table t1;
+
+--echo #alter table
+alter table t1 UUL=10000000;
+show create table t1;
+alter table t1 change a a int complex='c,c,c';
+show create table t1;
+drop table t1;
+
+
+--echo #illegal value error
+SET SQL_MODE='CREATE_OPTIONS_ERR';
+--error ER_BAD_OPTION_VALUE
+CREATE TABLE t1 (a int) ENGINE=example UUL=10000000000000000000 one_or_two='ttt' YESNO=SSS;
+
+SET @@SQL_MODE=@OLD_SQL_MODE;
=== modified file 'mysql-test/t/sp.test'
--- a/mysql-test/t/sp.test 2009-12-23 13:44:03 +0000
+++ b/mysql-test/t/sp.test 2010-03-26 21:49:58 +0000
@@ -8210,7 +8210,6 @@
select @@sql_mode;
set @@sql_mode= @old_mode;
# Rename SQL modes that differ in name between the server and the table definition.
-select replace(@full_mode, '?', 'NOT_USED') into @full_mode;
select replace(@full_mode, 'ALLOW_INVALID_DATES', 'INVALID_DATES') into @full_mode;
select name from mysql.proc where name = 'p' and sql_mode = @full_mode;
drop procedure p;
=== modified file 'scripts/mysql_system_tables.sql'
--- a/scripts/mysql_system_tables.sql 2009-10-27 10:09:36 +0000
+++ b/scripts/mysql_system_tables.sql 2010-03-26 21:49:58 +0000
@@ -60,7 +60,7 @@
CREATE TABLE IF NOT EXISTS time_zone_leap_second ( Transition_time bigint signed NOT NULL, Correction int signed NOT NULL, PRIMARY KEY TranTime (Transition_time) ) engine=MyISAM CHARACTER SET utf8 comment='Leap seconds information for time zones';
-CREATE TABLE IF NOT EXISTS proc (db char(64) collate utf8_bin DEFAULT '' NOT NULL, name char(64) DEFAULT '' NOT NULL, type enum('FUNCTION','PROCEDURE') NOT NULL, specific_name char(64) DEFAULT '' NOT NULL, language enum('SQL') DEFAULT 'SQL' NOT NULL, sql_data_access enum( 'CONTAINS_SQL', 'NO_SQL', 'READS_SQL_DATA', 'MODIFIES_SQL_DATA') DEFAULT 'CONTAINS_SQL' NOT NULL, is_deterministic enum('YES','NO') DEFAULT 'NO' NOT NULL, security_type enum('INVOKER','DEFINER') DEFAULT 'DEFINER' NOT NULL, param_list blob NOT NULL, returns longblob DEFAULT '' NOT NULL, body longblob NOT NULL, definer char(77) collate utf8_bin DEFAULT '' NOT NULL, created timestamp, modified timestamp, sql_mode set( 'REAL_AS_FLOAT', 'PIPES_AS_CONCAT', 'ANSI_QUOTES', 'IGNORE_SPACE', 'NOT_USED', 'ONLY_FULL_GROUP_BY', 'NO_UNSIGNED_SUBTRACTION', 'NO_DIR_IN_CREATE', 'POSTGRESQL', 'ORACLE', 'MSSQL', 'DB2', 'MAXDB', 'NO_KEY_OPTIONS', 'NO_TABLE_OPTIONS', 'NO_FIELD_OPTIONS', 'MYSQL323', 'MYSQL40', 'ANSI', 'NO_AUTO_VALUE_ON_ZERO', 'NO_BACKSLASH_ESCAPES', 'STRICT_TRANS_TABLES', 'STRICT_ALL_TABLES', 'NO_ZERO_IN_DATE', 'NO_ZERO_DATE', 'INVALID_DATES', 'ERROR_FOR_DIVISION_BY_ZERO', 'TRADITIONAL', 'NO_AUTO_CREATE_USER', 'HIGH_NOT_PRECEDENCE', 'NO_ENGINE_SUBSTITUTION', 'PAD_CHAR_TO_FULL_LENGTH') DEFAULT '' NOT NULL, comment char(64) collate utf8_bin DEFAULT '' NOT NULL, character_set_client char(32) collate utf8_bin, collation_connection char(32) collate utf8_bin, db_collation char(32) collate utf8_bin, body_utf8 longblob, PRIMARY KEY (db,name,type)) engine=MyISAM character set utf8 comment='Stored Procedures';
+CREATE TABLE IF NOT EXISTS proc (db char(64) collate utf8_bin DEFAULT '' NOT NULL, name char(64) DEFAULT '' NOT NULL, type enum('FUNCTION','PROCEDURE') NOT NULL, specific_name char(64) DEFAULT '' NOT NULL, language enum('SQL') DEFAULT 'SQL' NOT NULL, sql_data_access enum( 'CONTAINS_SQL', 'NO_SQL', 'READS_SQL_DATA', 'MODIFIES_SQL_DATA') DEFAULT 'CONTAINS_SQL' NOT NULL, is_deterministic enum('YES','NO') DEFAULT 'NO' NOT NULL, security_type enum('INVOKER','DEFINER') DEFAULT 'DEFINER' NOT NULL, param_list blob NOT NULL, returns longblob DEFAULT '' NOT NULL, body longblob NOT NULL, definer char(77) collate utf8_bin DEFAULT '' NOT NULL, created timestamp, modified timestamp, sql_mode set( 'REAL_AS_FLOAT', 'PIPES_AS_CONCAT', 'ANSI_QUOTES', 'IGNORE_SPACE', 'CREATE_OPTIONS_ERR', 'ONLY_FULL_GROUP_BY', 'NO_UNSIGNED_SUBTRACTION', 'NO_DIR_IN_CREATE', 'POSTGRESQL', 'ORACLE', 'MSSQL', 'DB2', 'MAXDB', 'NO_KEY_OPTIONS', 'NO_TABLE_OPTIONS', 'NO_FIELD_OPTIONS', 'MYSQL323', 'MYSQL40', 'ANSI', 'NO_AUTO_VALUE_ON_ZERO', 'NO_BACKSLASH_ESCAPES', 'STRICT_TRANS_TABLES', 'STRICT_ALL_TABLES', 'NO_ZERO_IN_DATE', 'NO_ZERO_DATE', 'INVALID_DATES', 'ERROR_FOR_DIVISION_BY_ZERO', 'TRADITIONAL', 'NO_AUTO_CREATE_USER', 'HIGH_NOT_PRECEDENCE', 'NO_ENGINE_SUBSTITUTION', 'PAD_CHAR_TO_FULL_LENGTH') DEFAULT '' NOT NULL, comment char(64) collate utf8_bin DEFAULT '' NOT NULL, character_set_client char(32) collate utf8_bin, collation_connection char(32) collate utf8_bin, db_collation char(32) collate utf8_bin, body_utf8 longblob, PRIMARY KEY (db,name,type)) engine=MyISAM character set utf8 comment='Stored Procedures';
CREATE TABLE IF NOT EXISTS procs_priv ( Host char(60) binary DEFAULT '' NOT NULL, Db char(64) binary DEFAULT '' NOT NULL, User char(16) binary DEFAULT '' NOT NULL, Routine_name char(64) COLLATE utf8_general_ci DEFAULT '' NOT NULL, Routine_type enum('FUNCTION','PROCEDURE') NOT NULL, Grantor char(77) DEFAULT '' NOT NULL, Proc_priv set('Execute','Alter Routine','Grant') COLLATE utf8_general_ci DEFAULT '' NOT NULL, Timestamp timestamp(14), PRIMARY KEY (Host,Db,User,Routine_name,Routine_type), KEY Grantor (Grantor) ) engine=MyISAM CHARACTER SET utf8 COLLATE utf8_bin comment='Procedure privileges';
@@ -80,7 +80,7 @@
EXECUTE stmt;
DROP PREPARE stmt;
-CREATE TABLE IF NOT EXISTS event ( db char(64) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL default '', name char(64) CHARACTER SET utf8 NOT NULL default '', body longblob NOT NULL, definer char(77) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL default '', execute_at DATETIME default NULL, interval_value int(11) default NULL, interval_field ENUM('YEAR','QUARTER','MONTH','DAY','HOUR','MINUTE','WEEK','SECOND','MICROSECOND','YEAR_MONTH','DAY_HOUR','DAY_MINUTE','DAY_SECOND','HOUR_MINUTE','HOUR_SECOND','MINUTE_SECOND','DAY_MICROSECOND','HOUR_MICROSECOND','MINUTE_MICROSECOND','SECOND_MICROSECOND') default NULL, created TIMESTAMP NOT NULL, modified TIMESTAMP NOT NULL, last_executed DATETIME default NULL, starts DATETIME default NULL, ends DATETIME default NULL, status ENUM('ENABLED','DISABLED','SLAVESIDE_DISABLED') NOT NULL default 'ENABLED', on_completion ENUM('DROP','PRESERVE') NOT NULL default 'DROP', sql_mode set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH') DEFAULT '' NOT NULL, comment char(64) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL default '', originator INTEGER UNSIGNED NOT NULL, time_zone char(64) CHARACTER SET latin1 NOT NULL DEFAULT 'SYSTEM', character_set_client char(32) collate utf8_bin, collation_connection char(32) collate utf8_bin, db_collation char(32) collate utf8_bin, body_utf8 longblob, PRIMARY KEY (db, name) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT 'Events';
+CREATE TABLE IF NOT EXISTS event ( db char(64) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL default '', name char(64) CHARACTER SET utf8 NOT NULL default '', body longblob NOT NULL, definer char(77) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL default '', execute_at DATETIME default NULL, interval_value int(11) default NULL, interval_field ENUM('YEAR','QUARTER','MONTH','DAY','HOUR','MINUTE','WEEK','SECOND','MICROSECOND','YEAR_MONTH','DAY_HOUR','DAY_MINUTE','DAY_SECOND','HOUR_MINUTE','HOUR_SECOND','MINUTE_SECOND','DAY_MICROSECOND','HOUR_MICROSECOND','MINUTE_MICROSECOND','SECOND_MICROSECOND') default NULL, created TIMESTAMP NOT NULL, modified TIMESTAMP NOT NULL, last_executed DATETIME default NULL, starts DATETIME default NULL, ends DATETIME default NULL, status ENUM('ENABLED','DISABLED','SLAVESIDE_DISABLED') NOT NULL default 'ENABLED', on_completion ENUM('DROP','PRESERVE') NOT NULL default 'DROP', sql_mode set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH') DEFAULT '' NOT NULL, comment char(64) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL default '', originator INTEGER UNSIGNED NOT NULL, time_zone char(64) CHARACTER SET latin1 NOT NULL DEFAULT 'SYSTEM', character_set_client char(32) collate utf8_bin, collation_connection char(32) collate utf8_bin, db_collation char(32) collate utf8_bin, body_utf8 longblob, PRIMARY KEY (db, name) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT 'Events';
CREATE TABLE IF NOT EXISTS ndb_binlog_index (Position BIGINT UNSIGNED NOT NULL, File VARCHAR(255) NOT NULL, epoch BIGINT UNSIGNED NOT NULL, inserts BIGINT UNSIGNED NOT NULL, updates BIGINT UNSIGNED NOT NULL, deletes BIGINT UNSIGNED NOT NULL, schemaops BIGINT UNSIGNED NOT NULL, PRIMARY KEY(epoch)) ENGINE=MYISAM;
=== modified file 'scripts/mysql_system_tables_fix.sql'
--- a/scripts/mysql_system_tables_fix.sql 2009-12-03 16:15:47 +0000
+++ b/scripts/mysql_system_tables_fix.sql 2010-03-26 21:49:58 +0000
@@ -368,7 +368,7 @@
'PIPES_AS_CONCAT',
'ANSI_QUOTES',
'IGNORE_SPACE',
- 'NOT_USED',
+ 'CREATE_OPTIONS_ERR',
'ONLY_FULL_GROUP_BY',
'NO_UNSIGNED_SUBTRACTION',
'NO_DIR_IN_CREATE',
@@ -482,14 +482,14 @@
ALTER TABLE event DROP PRIMARY KEY;
ALTER TABLE event ADD PRIMARY KEY(db, name);
# Add sql_mode column just in case.
-ALTER TABLE event ADD sql_mode set ('NOT_USED') AFTER on_completion;
+ALTER TABLE event ADD sql_mode set ('CREATE_OPTIONS_ERR') AFTER on_completion;
# Update list of sql_mode values.
ALTER TABLE event MODIFY sql_mode
set('REAL_AS_FLOAT',
'PIPES_AS_CONCAT',
'ANSI_QUOTES',
'IGNORE_SPACE',
- 'NOT_USED',
+ 'CREATE_OPTIONS_ERR',
'ONLY_FULL_GROUP_BY',
'NO_UNSIGNED_SUBTRACTION',
'NO_DIR_IN_CREATE',
=== modified file 'sql/CMakeLists.txt'
--- a/sql/CMakeLists.txt 2010-03-03 14:44:14 +0000
+++ b/sql/CMakeLists.txt 2010-03-26 21:49:58 +0000
@@ -77,6 +77,7 @@
rpl_rli.cc rpl_mi.cc sql_servers.cc
sql_connect.cc scheduler.cc
sql_profile.cc event_parse_data.cc opt_table_elimination.cc
+ create_options.cc
${PROJECT_SOURCE_DIR}/sql/sql_yacc.cc
${PROJECT_SOURCE_DIR}/sql/sql_yacc.h
${PROJECT_SOURCE_DIR}/include/mysqld_error.h
=== modified file 'sql/Makefile.am'
--- a/sql/Makefile.am 2010-03-03 14:44:14 +0000
+++ b/sql/Makefile.am 2010-03-26 21:49:58 +0000
@@ -78,7 +78,8 @@
sql_plugin.h authors.h event_parse_data.h \
event_data_objects.h event_scheduler.h \
sql_partition.h partition_info.h partition_element.h \
- contributors.h sql_servers.h
+ contributors.h sql_servers.h \
+ create_options.h
mysqld_SOURCES = sql_lex.cc sql_handler.cc sql_partition.cc \
item.cc item_sum.cc item_buff.cc item_func.cc \
@@ -124,7 +125,7 @@
sql_plugin.cc sql_binlog.cc \
sql_builtin.cc sql_tablespace.cc partition_info.cc \
sql_servers.cc event_parse_data.cc \
- opt_table_elimination.cc
+ opt_table_elimination.cc create_options.cc
nodist_mysqld_SOURCES = mini_client_errors.c pack.c client.c my_time.c my_user.c
=== modified file 'sql/event_db_repository.cc'
--- a/sql/event_db_repository.cc 2010-03-15 11:51:23 +0000
+++ b/sql/event_db_repository.cc 2010-03-26 21:49:58 +0000
@@ -105,7 +105,8 @@
{
{ C_STRING_WITH_LEN("sql_mode") },
{ C_STRING_WITH_LEN("set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES',"
- "'IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION',"
+ "'IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY',"
+ "'NO_UNSIGNED_SUBTRACTION',"
"'NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB',"
"'NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40',"
"'ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES',"
=== modified file 'sql/field.cc'
--- a/sql/field.cc 2010-03-17 02:32:31 +0000
+++ b/sql/field.cc 2010-03-26 21:49:58 +0000
@@ -1308,7 +1308,8 @@
utype unireg_check_arg, const char *field_name_arg)
:ptr(ptr_arg), null_ptr(null_ptr_arg),
table(0), orig_table(0), table_name(0),
- field_name(field_name_arg),
+ field_name(field_name_arg), option_list(0),
+ option_struct(0), new_option_struct(0),
key_start(0), part_of_key(0), part_of_key_not_clustered(0),
part_of_sortkey(0), unireg_check(unireg_check_arg),
field_length(length_arg), null_bit(null_bit_arg),
@@ -9567,7 +9568,8 @@
Item *fld_on_update_value, LEX_STRING *fld_comment,
char *fld_change, List<String> *fld_interval_list,
CHARSET_INFO *fld_charset, uint fld_geom_type,
- Virtual_column_info *fld_vcol_info)
+ Virtual_column_info *fld_vcol_info,
+ engine_option_value *create_opt)
{
uint sign_len, allowed_type_modifier= 0;
ulong max_field_charlength= MAX_FIELD_CHARLENGTH;
@@ -9578,6 +9580,7 @@
field_name= fld_name;
def= fld_default_value;
flags= fld_type_modifier;
+ option_list= create_opt;
unireg_check= (fld_type_modifier & AUTO_INCREMENT_FLAG ?
Field::NEXT_NUMBER : Field::NONE);
decimals= fld_decimals ? (uint)atoi(fld_decimals) : 0;
@@ -10217,6 +10220,8 @@
decimals= old_field->decimals();
vcol_info= old_field->vcol_info;
stored_in_db= old_field->stored_in_db;
+ option_list= old_field->option_list;
+ option_struct= NULL;
/* Fix if the original table had 4 byte pointer blobs */
if (flags & BLOB_FLAG)
@@ -10291,6 +10296,22 @@
/**
+ Makes a clone of this object for ALTER/CREATE TABLE
+
+ @note: We need to do the clone of the list because in
+ ALTER TABLE we may change the list for the cloned field
+
+ @param mem_root MEM_ROOT where to clone the field
+*/
+
+Create_field *Create_field::clone(MEM_ROOT *mem_root) const
+{
+ Create_field *res= new (mem_root) Create_field(*this);
+ return res;
+}
+
+
+/**
maximum possible display length for blob.
@return
=== modified file 'sql/field.h'
--- a/sql/field.h 2010-03-15 11:51:23 +0000
+++ b/sql/field.h 2010-03-26 21:49:58 +0000
@@ -137,6 +137,14 @@
struct st_table *table; // Pointer for table
struct st_table *orig_table; // Pointer to original table
const char **table_name, *field_name;
+ /** reference to the list of options or NULL */
+ engine_option_value *option_list;
+ void *option_struct; /* structure with parsed options */
+ /**
+ structure with parsed new field parameters in ALTER TABLE for
+ check_if_incompatible_data()
+ */
+ void *new_option_struct;
LEX_STRING comment;
/* Field is part of the following keys */
key_map key_start, part_of_key, part_of_key_not_clustered;
@@ -2145,6 +2153,9 @@
CHARSET_INFO *charset;
Field::geometry_type geom_type;
Field *field; // For alter table
+ engine_option_value *option_list;
+ /** structure with parsed options (for comparing fields in ALTER TABLE) */
+ void *option_struct;
uint8 row,col,sc_length,interval_id; // For rea_create_table
uint offset,pack_flag;
@@ -2162,11 +2173,11 @@
*/
bool stored_in_db;
- Create_field() :after(0) {}
+ Create_field() :after(0), option_list(NULL), option_struct(NULL)
+ {}
Create_field(Field *field, Field *orig_field);
/* Used to make a clone of this object for ALTER/CREATE TABLE */
- Create_field *clone(MEM_ROOT *mem_root) const
- { return new (mem_root) Create_field(*this); }
+ Create_field *clone(MEM_ROOT *mem_root) const;
void create_length_to_internal_length(void);
/* Init for a tmp table field. To be extended if need be. */
@@ -2178,8 +2189,8 @@
char *decimals, uint type_modifier, Item *default_value,
Item *on_update_value, LEX_STRING *comment, char *change,
List<String> *interval_list, CHARSET_INFO *cs,
- uint uint_geom_type,
- Virtual_column_info *vcol_info);
+ uint uint_geom_type, Virtual_column_info *vcol_info,
+ engine_option_value *option_list);
bool field_flags_are_binary()
{
=== modified file 'sql/ha_partition.cc'
--- a/sql/ha_partition.cc 2010-03-15 11:51:23 +0000
+++ b/sql/ha_partition.cc 2010-03-26 21:49:58 +0000
@@ -1218,7 +1218,9 @@
DBUG_ENTER("prepare_new_partition");
if ((error= set_up_table_before_create(tbl, part_name, create_info,
- 0, p_elem)))
+ 0, p_elem)) ||
+ parse_engine_table_options(ha_thd(), file->ht,
+ file->table_share))
goto error_create;
if ((error= file->ha_create(part_name, tbl, create_info)))
{
@@ -1869,6 +1871,8 @@
{
if ((error= set_up_table_before_create(table_arg, from_buff,
create_info, i, NULL)) ||
+ parse_engine_table_options(ha_thd(), (*file)->ht,
+ (*file)->table_share) ||
((error= (*file)->ha_create(from_buff, table_arg, create_info))))
goto create_error;
}
=== modified file 'sql/handler.cc'
--- a/sql/handler.cc 2010-03-15 11:51:23 +0000
+++ b/sql/handler.cc 2010-03-26 21:49:58 +0000
@@ -3716,7 +3716,12 @@
name= get_canonical_filename(table.file, share.path.str, name_buff);
+ if (parse_engine_table_options(thd, table.file->ht, &share))
+ goto err;
+
error= table.file->ha_create(name, &table, create_info);
+
+
VOID(closefrm(&table, 0));
if (error)
{
=== modified file 'sql/handler.h'
--- a/sql/handler.h 2010-02-01 06:14:12 +0000
+++ b/sql/handler.h 2010-03-26 21:49:58 +0000
@@ -16,6 +16,9 @@
/* Definitions for parameters to do with handler-routines */
+#ifndef _HANDLER_H
+#define _HANDLER_H
+
#ifdef USE_PRAGMA_INTERFACE
#pragma interface /* gcc class implementation */
#endif
@@ -23,6 +26,7 @@
#include <my_handler.h>
#include <ft_global.h>
#include <keycache.h>
+#include "create_options.h"
#ifndef NO_HASH
#define NO_HASH /* Not yet implemented */
@@ -516,6 +520,7 @@
struct st_table;
typedef struct st_table TABLE;
typedef struct st_table_share TABLE_SHARE;
+class engine_option;
struct st_foreign_key_info;
typedef struct st_foreign_key_info FOREIGN_KEY_INFO;
typedef bool (stat_print_fn)(THD *thd, const char *type, uint type_len,
@@ -549,6 +554,71 @@
enum log_status status;
};
+enum ha_option_type { HA_OPTION_TYPE_ULL, /* unsigned long long */
+ HA_OPTION_TYPE_STRING, /* char * */
+ HA_OPTION_TYPE_ENUM, /* uint */
+ HA_OPTION_TYPE_BOOL}; /* uint */
+
+#define HA_xOPTION_ULL(name, struc, field, def, min, max, blk_siz) \
+ { HA_OPTION_TYPE_ULL, name, sizeof(name)-1, \
+ offsetof(struc, field), def, min, max, blk_siz, 0 }
+#define HA_xOPTION_STRING(name, struc, field) \
+ { HA_OPTION_TYPE_STRING, name, sizeof(name)-1, \
+ offsetof(struc, field), 0, 0, 0, 0, 0 }
+#define HA_xOPTION_ENUM(name, struc, field, values, def) \
+ { HA_OPTION_TYPE_ENUM, name, sizeof(name)-1, \
+ offsetof(struc, field), def, 0, \
+ sizeof(values)-1, 0, values }
+#define HA_xOPTION_BOOL(name, struc, field, def) \
+ { HA_OPTION_TYPE_BOOL, name, sizeof(name)-1, \
+ offsetof(struc, field), def, 0, 1, 0, 0 }
+#define HA_xOPTION_END { HA_OPTION_TYPE_ULL, 0, 0, 0, 0, 0, 0, 0, 0 }
+
+#define HA_TOPTION_ULL(name, field, def, min, max, blk_siz) \
+ HA_xOPTION_ULL(name, ha_table_option_struct, field, def, min, max, blk_siz)
+#define HA_TOPTION_STRING(name, field) \
+ HA_xOPTION_STRING(name, ha_table_option_struct, field)
+#define HA_TOPTION_ENUM(name, field, values, def) \
+ HA_xOPTION_ENUM(name, ha_table_option_struct, field, values, def)
+#define HA_TOPTION_BOOL(name, field, def) \
+ HA_xOPTION_BOOL(name, ha_table_option_struct, field, def)
+#define HA_TOPTION_END HA_xOPTION_END
+
+#define HA_FOPTION_ULL(name, field, def, min, max, blk_siz) \
+ HA_xOPTION_ULL(name, ha_field_option_struct, field, def, min, max, blk_siz)
+#define HA_FOPTION_STRING(name, field) \
+ HA_xOPTION_STRING(name, ha_field_option_struct, field)
+#define HA_FOPTION_ENUM(name, field, values, def) \
+ HA_xOPTION_ENUM(name, ha_field_option_struct, field, values, def)
+#define HA_FOPTION_BOOL(name, field, def) \
+ HA_xOPTION_BOOL(name, ha_field_option_struct, field, def)
+#define HA_FOPTION_END HA_xOPTION_END
+
+#define HA_KOPTION_ULL(name, field, def, min, max, blk_siz) \
+ HA_xOPTION_ULL(name, ha_key_option_struct, field, def, min, max, blk_siz)
+#define HA_KOPTION_STRING(name, field) \
+ HA_xOPTION_STRING(name, ha_key_option_struct, field)
+#define HA_KOPTION_ENUM(name, field, values, def) \
+ HA_xOPTION_ENUM(name, ha_key_option_struct, field, values, def)
+#define HA_KOPTION_BOOL(name, field, def) \
+ HA_xOPTION_BOOL(name, ha_key_option_struct, field, def)
+#define HA_KOPTION_END HA_xOPTION_END
+
+typedef struct st_ha_create_table_option {
+ enum ha_option_type type;
+ const char *name;
+ size_t name_length;
+ ptrdiff_t offset;
+ ulonglong def_value;
+ ulonglong min_value, max_value, block_size;
+ const char *values;
+} ha_create_table_option;
+
+typedef struct st_ha_create_table_option_rules {
+ ha_create_table_option *table,
+ *field,
+ *key;
+} ha_create_table_option_rules;
enum handler_iterator_type
{
@@ -721,7 +791,7 @@
int (*table_exists_in_engine)(handlerton *hton, THD* thd, const char *db,
const char *name);
uint32 license; /* Flag for Engine License */
- void *data; /* Location for engines to keep personal structures */
+ ha_create_table_option_rules *table_options_rules;
};
@@ -950,6 +1020,16 @@
bool varchar; /* 1 if table has a VARCHAR */
enum ha_storage_media storage_media; /* DEFAULT, DISK or MEMORY */
enum ha_choice page_checksum; /* If we have page_checksums */
+ engine_option_value *option_list; /* list of table create options */
+ engine_option_value *option_list_last;
+ /** structure with parsed options (for comparing fields in ALTER TABLE) */
+ void *option_struct;
+ /* following 4 fields assigned only for check_if_incompatible_data() */
+ void *old_option_struct;
+ Field **old_field;
+ KEY *old_key_info;
+ uint old_keys;
+
} HA_CREATE_INFO;
@@ -2241,3 +2321,5 @@
#define ha_binlog_wait(a) do {} while (0)
#define ha_binlog_end(a) do {} while (0)
#endif
+
+#endif
=== modified file 'sql/log_event.h'
--- a/sql/log_event.h 2010-03-15 11:51:23 +0000
+++ b/sql/log_event.h 2010-03-26 21:49:58 +0000
@@ -1371,7 +1371,7 @@
MODE_PIPES_AS_CONCAT==0x2
MODE_ANSI_QUOTES==0x4
MODE_IGNORE_SPACE==0x8
- MODE_NOT_USED==0x10
+ MODE_CREATE_OPTIONS_ERR==0x10
MODE_ONLY_FULL_GROUP_BY==0x20
MODE_NO_UNSIGNED_SUBTRACTION==0x40
MODE_NO_DIR_IN_CREATE==0x80
=== modified file 'sql/mysql_priv.h'
--- a/sql/mysql_priv.h 2010-03-15 11:51:23 +0000
+++ b/sql/mysql_priv.h 2010-03-26 21:49:58 +0000
@@ -54,7 +54,6 @@
#include "sql_plugin.h"
#include "scheduler.h"
#include "log_slow.h"
-
class Parser_state;
/**
@@ -520,7 +519,7 @@
#define MODE_PIPES_AS_CONCAT 2
#define MODE_ANSI_QUOTES 4
#define MODE_IGNORE_SPACE 8
-#define MODE_NOT_USED 16
+#define MODE_CREATE_OPTIONS_ERR 16
#define MODE_ONLY_FULL_GROUP_BY 32
#define MODE_NO_UNSIGNED_SUBTRACTION 64
#define MODE_NO_DIR_IN_CREATE 128
@@ -783,6 +782,7 @@
ulonglong *engine_data);
#include "sql_string.h"
#include "sql_list.h"
+#include "create_options.h"
#include "sql_map.h"
#include "my_decimal.h"
#include "handler.h"
@@ -1508,7 +1508,8 @@
char *change, List<String> *interval_list,
CHARSET_INFO *cs,
uint uint_geom_type,
- Virtual_column_info *vcol_info);
+ Virtual_column_info *vcol_info,
+ engine_option_value *create_options);
Create_field * new_create_field(THD *thd, char *field_name, enum_field_types type,
char *length, char *decimals,
uint type_modifier,
=== modified file 'sql/mysqld.cc'
--- a/sql/mysqld.cc 2010-03-15 11:51:23 +0000
+++ b/sql/mysqld.cc 2010-03-26 21:49:58 +0000
@@ -243,7 +243,7 @@
static const char *sql_mode_names[]=
{
"REAL_AS_FLOAT", "PIPES_AS_CONCAT", "ANSI_QUOTES", "IGNORE_SPACE",
- "?", "ONLY_FULL_GROUP_BY", "NO_UNSIGNED_SUBTRACTION",
+ "CREATE_OPTIONS_ERR", "ONLY_FULL_GROUP_BY", "NO_UNSIGNED_SUBTRACTION",
"NO_DIR_IN_CREATE",
"POSTGRESQL", "ORACLE", "MSSQL", "DB2", "MAXDB", "NO_KEY_OPTIONS",
"NO_TABLE_OPTIONS", "NO_FIELD_OPTIONS", "MYSQL323", "MYSQL40", "ANSI",
@@ -263,7 +263,7 @@
/*PIPES_AS_CONCAT*/ 15,
/*ANSI_QUOTES*/ 11,
/*IGNORE_SPACE*/ 12,
- /*?*/ 1,
+ /*CREATE_OPTIONS_ERR*/ 18,
/*ONLY_FULL_GROUP_BY*/ 18,
/*NO_UNSIGNED_SUBTRACTION*/ 23,
/*NO_DIR_IN_CREATE*/ 16,
=== modified file 'sql/share/errmsg.txt'
--- a/sql/share/errmsg.txt 2010-03-15 11:51:23 +0000
+++ b/sql/share/errmsg.txt 2010-03-26 21:49:58 +0000
@@ -6240,3 +6240,8 @@
ER_DEBUG_SYNC_HIT_LIMIT
eng "debug sync point hit limit reached"
ger "Debug Sync Point Hit Limit erreicht"
+
+ER_UNKNOWN_OPTION
+ eng "Unknown option '%-.64s'='%-.64s'"
+ER_BAD_OPTION_VALUE
+ eng "Incorrect option value '%-.64s'='%-.64s'"
=== modified file 'sql/sp.cc'
--- a/sql/sp.cc 2010-03-15 11:51:23 +0000
+++ b/sql/sp.cc 2010-03-26 21:49:58 +0000
@@ -147,7 +147,8 @@
{
{ C_STRING_WITH_LEN("sql_mode") },
{ C_STRING_WITH_LEN("set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES',"
- "'IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION',"
+ "'IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY',"
+ "'NO_UNSIGNED_SUBTRACTION',"
"'NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB',"
"'NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40',"
"'ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES',"
=== modified file 'sql/sp_head.cc'
--- a/sql/sp_head.cc 2010-03-15 11:51:23 +0000
+++ b/sql/sp_head.cc 2010-03-26 21:49:58 +0000
@@ -2216,7 +2216,7 @@
lex->charset ? lex->charset :
thd->variables.collation_database,
lex->uint_geom_type,
- lex->vcol_info))
+ lex->vcol_info, lex->option_list))
return TRUE;
if (field_def->interval_list.elements)
=== modified file 'sql/sql_class.cc'
--- a/sql/sql_class.cc 2010-03-16 12:38:35 +0000
+++ b/sql/sql_class.cc 2010-03-26 21:49:58 +0000
@@ -106,6 +106,7 @@
key_create_info(rhs.key_create_info),
columns(rhs.columns, mem_root),
name(rhs.name),
+ option_list(rhs.option_list),
generated(rhs.generated)
{
list_copy_and_replace_each_value(columns, mem_root);
@@ -775,6 +776,7 @@
void THD::push_internal_handler(Internal_error_handler *handler)
{
+ DBUG_ENTER("THD::push_internal_handler");
if (m_internal_handler)
{
handler->m_prev_internal_handler= m_internal_handler;
@@ -784,6 +786,7 @@
{
m_internal_handler= handler;
}
+ DBUG_VOID_RETURN;
}
@@ -803,8 +806,10 @@
void THD::pop_internal_handler()
{
+ DBUG_ENTER("THD::pop_internal_handler");
DBUG_ASSERT(m_internal_handler != NULL);
m_internal_handler= m_internal_handler->m_prev_internal_handler;
+ DBUG_VOID_RETURN;
}
extern "C"
=== modified file 'sql/sql_class.h'
--- a/sql/sql_class.h 2010-03-15 11:51:23 +0000
+++ b/sql/sql_class.h 2010-03-26 21:49:58 +0000
@@ -204,13 +204,15 @@
KEY_CREATE_INFO key_create_info;
List<Key_part_spec> columns;
const char *name;
+ engine_option_value *option_list;
bool generated;
Key(enum Keytype type_par, const char *name_arg,
KEY_CREATE_INFO *key_info_arg,
- bool generated_arg, List<Key_part_spec> &cols)
+ bool generated_arg, List<Key_part_spec> &cols,
+ engine_option_value *create_opt)
:type(type_par), key_create_info(*key_info_arg), columns(cols),
- name(name_arg), generated(generated_arg)
+ name(name_arg), option_list(create_opt), generated(generated_arg)
{}
Key(const Key &rhs, MEM_ROOT *mem_root);
virtual ~Key() {}
@@ -239,7 +241,7 @@
Foreign_key(const char *name_arg, List<Key_part_spec> &cols,
Table_ident *table, List<Key_part_spec> &ref_cols,
uint delete_opt_arg, uint update_opt_arg, uint match_opt_arg)
- :Key(FOREIGN_KEY, name_arg, &default_key_create_info, 0, cols),
+ :Key(FOREIGN_KEY, name_arg, &default_key_create_info, 0, cols, NULL),
ref_table(table), ref_columns(ref_cols),
delete_opt(delete_opt_arg), update_opt(update_opt_arg),
match_opt(match_opt_arg)
=== modified file 'sql/sql_lex.h'
--- a/sql/sql_lex.h 2010-03-15 11:51:23 +0000
+++ b/sql/sql_lex.h 2010-03-26 21:49:58 +0000
@@ -869,6 +869,7 @@
#define ALTER_ALL_PARTITION (1L << 21)
#define ALTER_REMOVE_PARTITIONING (1L << 22)
#define ALTER_FOREIGN_KEY (1L << 23)
+#define ALTER_CREATE_OPT (1L << 24)
enum enum_alter_table_change_level
{
@@ -1747,6 +1748,11 @@
const char *stmt_definition_end;
/**
+ Collects create options for Field and KEY
+ */
+ engine_option_value *option_list, *option_list_last;
+
+ /**
During name resolution search only in the table list given by
Name_resolution_context::first_name_resolution_table and
Name_resolution_context::last_name_resolution_table
=== modified file 'sql/sql_parse.cc'
--- a/sql/sql_parse.cc 2010-03-16 12:38:35 +0000
+++ b/sql/sql_parse.cc 2010-03-26 21:49:58 +0000
@@ -6155,7 +6155,8 @@
char *change,
List<String> *interval_list, CHARSET_INFO *cs,
uint uint_geom_type,
- Virtual_column_info *vcol_info)
+ Virtual_column_info *vcol_info,
+ engine_option_value *create_options)
{
register Create_field *new_field;
LEX *lex= thd->lex;
@@ -6173,7 +6174,7 @@
lex->col_list.push_back(new Key_part_spec(field_name->str, 0));
key= new Key(Key::PRIMARY, NullS,
&default_key_create_info,
- 0, lex->col_list);
+ 0, lex->col_list, NULL);
lex->alter_info.key_list.push_back(key);
lex->col_list.empty();
}
@@ -6183,7 +6184,7 @@
lex->col_list.push_back(new Key_part_spec(field_name->str, 0));
key= new Key(Key::UNIQUE, NullS,
&default_key_create_info, 0,
- lex->col_list);
+ lex->col_list, NULL);
lex->alter_info.key_list.push_back(key);
lex->col_list.empty();
}
@@ -6241,7 +6242,8 @@
if (!(new_field= new Create_field()) ||
new_field->init(thd, field_name->str, type, length, decimals, type_modifier,
default_value, on_update_value, comment, change,
- interval_list, cs, uint_geom_type, vcol_info))
+ interval_list, cs, uint_geom_type, vcol_info,
+ create_options))
DBUG_RETURN(1);
lex->alter_info.create_list.push_back(new_field);
=== modified file 'sql/sql_show.cc'
--- a/sql/sql_show.cc 2010-03-15 11:51:23 +0000
+++ b/sql/sql_show.cc 2010-03-26 21:49:58 +0000
@@ -83,6 +83,11 @@
static void
append_algorithm(TABLE_LIST *table, String *buff);
+static void
+append_quoted(THD *thd, String *packet, const char *name, uint length,
+ int q);
+static int get_quote_char_for_option(THD *thd, const char *name, uint length);
+
static COND * make_cond_for_info_schema(COND *cond, TABLE_LIST *table);
/***************************************************************************
@@ -951,6 +956,30 @@
DBUG_RETURN(0);
}
+
+/**
+ Goes through all characters and ensures that the string is an unsigned number.
+
+ @param name attribute name
+ @param name_length length of name
+
+ @retval # Pointer to the first non-digit character
+ @retval 0 No conflicting character (all digits)
+*/
+
+static const char *is_unsigned_number(const char *name, uint name_length)
+{
+ const char *end= name + name_length;
+
+ for (; name < end ; name++)
+ {
+ uchar chr= (uchar) *name;
+ if (chr < '0' || chr > '9')
+ return name;
+ }
+ return 0;
+}
+
/*
Go through all character combinations and ensure that sql_lex.cc can
parse it as an identifier.
@@ -1001,19 +1030,26 @@
void
append_identifier(THD *thd, String *packet, const char *name, uint length)
{
+ int q= get_quote_char_for_identifier(thd, name, length);
+
+ append_quoted(thd, packet, name, length, q);
+}
+
+static void
+append_quoted(THD *thd, String *packet, const char *name, uint length,
+ int q)
+{
+ char quote_char;
const char *name_end;
- char quote_char;
- int q= get_quote_char_for_identifier(thd, name, length);
if (q == EOF)
{
packet->append(name, length, packet->charset());
return;
}
-
/*
The identifier must be quoted as it includes a quote character or
- it's a keyword
+ it's a keyword
*/
VOID(packet->reserve(length*2 + 2));
@@ -1076,6 +1112,27 @@
return '`';
}
+/**
+ Gets the quote character for displaying an option key.
+
+ @param thd Thread handler
+ @param name name to quote
+ @param length length of name
+
+ @retval EOF No quote character is needed
+ @retval # Quote character
+*/
+
+static int get_quote_char_for_option(THD *thd, const char *name, uint length)
+{
+ if (length &&
+ !require_quotes(name, length))
+ return EOF;
+ if (thd->variables.sql_mode & MODE_ANSI_QUOTES)
+ return '"';
+ return '`';
+}
+
/* Append directory name (if exists) to CREATE INFO */
@@ -1173,6 +1230,35 @@
return has_default;
}
+
+/**
+ Appends list of options to string
+
+ @param thd thread handler
+ @param packet string to append
+ @param opt list of options
+*/
+
+static void append_create_options(THD *thd, String *packet,
+ engine_option_value *opt)
+{
+ for(; opt; opt= opt->next)
+ {
+ packet->append(' ');
+ {
+ int q= get_quote_char_for_option(thd, opt->name.str, opt->name.length);
+
+ append_quoted(thd, packet, opt->name.str, opt->name.length, q);
+ }
+ packet->append('=');
+ if (opt->value.length < 21 &&
+ is_unsigned_number(opt->value.str, opt->value.length) == NULL)
+ packet->append(opt->value.str, opt->value.length);
+ else
+ append_unescaped(packet, opt->value.str, opt->value.length);
+ }
+}
+
/*
Build a CREATE TABLE statement for a table.
@@ -1355,6 +1441,8 @@
packet->append(STRING_WITH_LEN(" COMMENT "));
append_unescaped(packet, field->comment.str, field->comment.length);
}
+ if (field->option_list)
+ append_create_options(thd, packet, field->option_list);
}
key_info= table->key_info;
@@ -1426,6 +1514,8 @@
append_identifier(thd, packet, parser_name->str, parser_name->length);
packet->append(STRING_WITH_LEN(" */ "));
}
+ if (key_info->option_list)
+ append_create_options(thd, packet, key_info->option_list);
}
/*
@@ -1585,6 +1675,10 @@
packet->append(STRING_WITH_LEN(" CONNECTION="));
append_unescaped(packet, share->connect_string.str, share->connect_string.length);
}
+ /* create_table_options can be NULL for temporary tables */
+ if (share->option_list)
+ append_create_options(thd, packet,
+ share->option_list);
append_directory(thd, packet, "DATA", create_info.data_file_name);
append_directory(thd, packet, "INDEX", create_info.index_file_name);
}
=== modified file 'sql/sql_table.cc'
--- a/sql/sql_table.cc 2010-03-15 11:51:23 +0000
+++ b/sql/sql_table.cc 2010-03-26 21:49:58 +0000
@@ -2562,6 +2562,7 @@
ulong record_offset= 0;
KEY *key_info;
KEY_PART_INFO *key_part_info;
+ ha_create_table_option_rules *rules, fake_empty={NULL,NULL,NULL};
int timestamps= 0, timestamps_with_niladic= 0;
int field_no,dup_no;
int select_field_pos,auto_increment=0;
@@ -2570,6 +2571,10 @@
uint total_uneven_bit_length= 0;
DBUG_ENTER("mysql_prepare_create_table");
+ rules= (create_info->db_type->table_options_rules ?
+ create_info->db_type->table_options_rules:
+ &fake_empty);
+
select_field_pos= alter_info->create_list.elements - select_field_count;
null_fields=blob_columns=0;
create_info->varchar= 0;
@@ -2863,6 +2868,11 @@
sql_field->offset= record_offset;
if (MTYP_TYPENR(sql_field->unireg_check) == Field::NEXT_NUMBER)
auto_increment++;
+ if (parse_option_list(thd, &sql_field->option_struct,
+ sql_field->option_list,
+ rules->field, FALSE,
+ thd->mem_root))
+ DBUG_RETURN(TRUE);
/*
For now skip fields that are not physically stored in the database
(virtual fields) and update their offset later
@@ -3061,6 +3071,12 @@
key_info->key_part=key_part_info;
key_info->usable_key_parts= key_number;
key_info->algorithm= key->key_create_info.algorithm;
+ key_info->option_list= key->option_list;
+ if (parse_option_list(thd, &key_info->option_struct,
+ key_info->option_list,
+ rules->key, FALSE,
+ thd->mem_root))
+ DBUG_RETURN(TRUE);
if (key->type == Key::FULLTEXT)
{
@@ -3438,6 +3454,12 @@
}
}
+ if (parse_option_list(thd, &create_info->option_struct,
+ create_info->option_list,
+ rules->table, FALSE,
+ thd->mem_root))
+ DBUG_RETURN(TRUE);
+
DBUG_RETURN(FALSE);
}
@@ -5809,6 +5831,9 @@
DBUG_RETURN(0);
}
+ /* to allow check_if_incompatible_data compare */
+ field->new_option_struct= tmp_new_field->option_struct;
+
/* Don't pack rows in old tables if the user has requested this. */
if (create_info->row_type == ROW_TYPE_DYNAMIC ||
(tmp_new_field->flags & BLOB_FLAG) ||
@@ -5956,9 +5981,15 @@
}
DBUG_PRINT("info", ("index added: '%s'", new_key->name));
}
+ else
+ table_key->new_option_struct= new_key->option_struct;
}
/* Check if changes are compatible with current handler without a copy */
+ create_info->old_option_struct= table->s->option_struct;
+ create_info->old_field= table->field;
+ create_info->old_keys= table->s->keys;
+ create_info->old_key_info= table->key_info;
if (table->file->check_if_incompatible_data(create_info, changes))
{
DBUG_PRINT("info", ("check_if_incompatible_data() -> "
@@ -6132,6 +6163,16 @@
}
restore_record(table, s->default_values); // Empty record for DEFAULT
+ if (create_info->option_list)
+ {
+ create_info->option_list=
+ merge_engine_table_options(table->s->option_list,
+ create_info->option_list,
+ thd->mem_root);
+ }
+ else
+ create_info->option_list= table->s->option_list;
+
/*
First collect all fields from table which isn't in drop_list
*/
@@ -6384,7 +6425,7 @@
key= new Key(key_type, key_name,
&key_create_info,
test(key_info->flags & HA_GENERATED_KEY),
- key_parts);
+ key_parts, key_info->option_list);
new_key_list.push_back(key);
}
}
=== modified file 'sql/sql_view.cc'
--- a/sql/sql_view.cc 2010-03-04 08:03:07 +0000
+++ b/sql/sql_view.cc 2010-03-26 21:49:58 +0000
@@ -1183,7 +1183,7 @@
+ MODE_PIPES_AS_CONCAT affect expression parsing
+ MODE_ANSI_QUOTES affect expression parsing
+ MODE_IGNORE_SPACE affect expression parsing
- - MODE_NOT_USED not used :)
+ - MODE_CREATE_OPTIONS_ERR affect only CREATE/ALTER TABLE parsing
* MODE_ONLY_FULL_GROUP_BY affect execution
* MODE_NO_UNSIGNED_SUBTRACTION affect execution
- MODE_NO_DIR_IN_CREATE affect table creation only
=== modified file 'sql/sql_yacc.yy'
--- a/sql/sql_yacc.yy 2010-03-15 11:51:23 +0000
+++ b/sql/sql_yacc.yy 2010-03-26 21:49:58 +0000
@@ -607,6 +607,7 @@
lex->alter_info.flags= ALTER_ADD_INDEX;
lex->col_list.empty();
lex->change= NullS;
+ lex->option_list= lex->option_list_last= NULL;
return FALSE;
}
@@ -616,7 +617,7 @@
{
Key *key;
key= new Key(type, name, info ? info : &lex->key_create_info, generated,
- lex->col_list);
+ lex->col_list, lex->option_list);
if (key == NULL)
return TRUE;
@@ -1858,6 +1859,8 @@
lex->create_info.default_table_charset= NULL;
lex->name.str= 0;
lex->name.length= 0;
+ lex->create_info.option_list=
+ lex->create_info.option_list_last= NULL;
}
create2
{
@@ -2340,6 +2343,7 @@
lex->interval_list.empty();
lex->uint_geom_type= 0;
+ lex->option_list= lex->option_list_last= NULL;
}
;
@@ -4748,6 +4752,43 @@
Lex->create_info.used_fields|= HA_CREATE_USED_TRANSACTIONAL;
Lex->create_info.transactional= $3;
}
+ | IDENT_sys equal TEXT_STRING_sys
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ $3.str, $3.length,
+ &Lex->create_info.option_list,
+ &Lex->create_info.option_list_last);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
+ | IDENT_sys equal ident
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ $3.str, $3.length,
+ &Lex->create_info.option_list,
+ &Lex->create_info.option_list_last);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
+ | IDENT_sys equal ulonglong_num
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ $3,
+ &Lex->create_info.option_list,
+ &Lex->create_info.option_list_last,
+ YYTHD->mem_root);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
+ | IDENT_sys equal DEFAULT
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ NULL, 0,
+ &Lex->create_info.option_list,
+ &Lex->create_info.option_list_last);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
;
default_charset:
@@ -4869,25 +4910,33 @@
;
key_def:
- normal_key_type opt_ident key_alg '(' key_list ')' normal_key_options
+ normal_key_type opt_ident key_alg '(' key_list ')'
+ { Lex->option_list= Lex->option_list_last= NULL; }
+ normal_key_options
{
if (add_create_index (Lex, $1, $2))
MYSQL_YYABORT;
}
| fulltext opt_key_or_index opt_ident init_key_options
- '(' key_list ')' fulltext_key_options
+ '(' key_list ')'
+ { Lex->option_list= Lex->option_list_last= NULL; }
+ fulltext_key_options
{
if (add_create_index (Lex, $1, $3))
MYSQL_YYABORT;
}
| spatial opt_key_or_index opt_ident init_key_options
- '(' key_list ')' spatial_key_options
+ '(' key_list ')'
+ { Lex->option_list= Lex->option_list_last= NULL; }
+ spatial_key_options
{
if (add_create_index (Lex, $1, $3))
MYSQL_YYABORT;
}
| opt_constraint constraint_key_type opt_ident key_alg
- '(' key_list ')' normal_key_options
+ '(' key_list ')'
+ { Lex->option_list= Lex->option_list_last= NULL; }
+ normal_key_options
{
if (add_create_index (Lex, $2, $3 ? $3 : $1))
MYSQL_YYABORT;
@@ -4950,6 +4999,7 @@
lex->comment=null_lex_str;
lex->charset=NULL;
lex->vcol_info= 0;
+ lex->option_list= lex->option_list_last= NULL;
}
field_def
{
@@ -4960,7 +5010,7 @@
&lex->comment,
lex->change,&lex->interval_list,lex->charset,
lex->uint_geom_type,
- lex->vcol_info))
+ lex->vcol_info, lex->option_list))
MYSQL_YYABORT;
}
;
@@ -5380,6 +5430,43 @@
Lex->charset=$2;
}
}
+ | IDENT_sys equal TEXT_STRING_sys
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ $3.str, $3.length,
+ &Lex->option_list,
+ &Lex->option_list_last);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
+ | IDENT_sys equal ident
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ $3.str, $3.length,
+ &Lex->option_list,
+ &Lex->option_list_last);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
+ | IDENT_sys equal ulonglong_num
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ $3,
+ &Lex->option_list,
+ &Lex->option_list_last,
+ YYTHD->mem_root);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
+ | IDENT_sys equal DEFAULT
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ NULL, 0,
+ &Lex->option_list,
+ &Lex->option_list_last);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
;
now_or_signed_literal:
@@ -5669,6 +5756,43 @@
all_key_opt:
KEY_BLOCK_SIZE opt_equal ulong_num
{ Lex->key_create_info.block_size= $3; }
+ | IDENT_sys equal TEXT_STRING_sys
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ $3.str, $3.length,
+ &Lex->option_list,
+ &Lex->option_list_last);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
+ | IDENT_sys equal ident
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ $3.str, $3.length,
+ &Lex->option_list,
+ &Lex->option_list_last);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
+ | IDENT_sys equal ulonglong_num
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ $3,
+ &Lex->option_list,
+ &Lex->option_list_last,
+ YYTHD->mem_root);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
+ | IDENT_sys equal DEFAULT
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ NULL, 0,
+ &Lex->option_list,
+ &Lex->option_list_last);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
;
normal_key_opt:
@@ -6158,6 +6282,7 @@
LEX *lex=Lex;
lex->change= $3.str;
lex->alter_info.flags|= ALTER_CHANGE_COLUMN;
+ lex->option_list= lex->option_list_last= NULL;
}
field_spec opt_place
| MODIFY_SYM opt_column field_ident
@@ -6169,6 +6294,7 @@
lex->charset= NULL;
lex->alter_info.flags|= ALTER_CHANGE_COLUMN;
lex->vcol_info= 0;
+ lex->option_list= lex->option_list_last= NULL;
}
field_def
{
@@ -6180,7 +6306,7 @@
&lex->comment,
$3.str, &lex->interval_list, lex->charset,
lex->uint_geom_type,
- lex->vcol_info))
+ lex->vcol_info, lex->option_list))
MYSQL_YYABORT;
}
opt_place
@@ -6287,8 +6413,7 @@
}
| create_table_options_space_separated
{
- LEX *lex=Lex;
- lex->alter_info.flags|= ALTER_OPTIONS;
+ Lex->alter_info.flags|= ALTER_OPTIONS;
}
| FORCE_SYM
{
@@ -13630,6 +13755,7 @@
lex->interval_list.empty();
lex->type= 0;
lex->vcol_info= 0;
+ lex->option_list= lex->option_list_last= NULL;
}
type /* $11 */
{ /* $12 */
@@ -13880,6 +14006,7 @@
}
;
+
/**
@} (end of group Parser)
*/
=== modified file 'sql/structs.h'
--- a/sql/structs.h 2010-02-01 06:14:12 +0000
+++ b/sql/structs.h 2010-03-26 21:49:58 +0000
@@ -68,6 +68,7 @@
uint8 null_bit; /* Position to null_bit */
} KEY_PART_INFO ;
+class engine_option_value;
typedef struct st_key {
uint key_length; /* Tot length of key */
@@ -101,6 +102,14 @@
int bdb_return_if_eq;
} handler;
struct st_table *table;
+ /** reference to the list of options or NULL */
+ engine_option_value *option_list;
+ void *option_struct; /* structure with parsed options */
+ /**
+ structure with parsed new field parameters in ALTER TABLE for
+ check_if_incompatible_data()
+ */
+ void *new_option_struct;
} KEY;
=== modified file 'sql/table.cc'
--- a/sql/table.cc 2010-03-15 11:51:23 +0000
+++ b/sql/table.cc 2010-03-26 21:49:58 +0000
@@ -667,12 +667,13 @@
uint db_create_options, keys, key_parts, n_length;
uint key_info_length, com_length, null_bit_pos;
uint vcol_screen_length;
- uint extra_rec_buf_length;
+ uint extra_rec_buf_length, options_len;
uint i,j;
bool use_hash;
char *keynames, *names, *comment_pos, *vcol_screen_pos;
uchar *record;
- uchar *disk_buff, *strpos, *null_flags, *null_pos;
+ uchar *disk_buff, *strpos, *null_flags, *null_pos, *options;
+ uchar *buff= 0;
ulong pos, record_offset, *rec_per_key, rec_buff_length;
handler *handler_file= 0;
KEY *keyinfo;
@@ -788,7 +789,6 @@
for (i=0 ; i < keys ; i++, keyinfo++)
{
- keyinfo->table= 0; // Updated in open_frm
if (new_frm_ver >= 3)
{
keyinfo->flags= (uint) uint2korr(strpos) ^ HA_NOSAME;
@@ -858,15 +858,14 @@
if ((n_length= uint4korr(head+55)))
{
/* Read extra data segment */
- uchar *buff, *next_chunk, *buff_end;
+ uchar *next_chunk, *buff_end;
DBUG_PRINT("info", ("extra segment size is %u bytes", n_length));
if (!(next_chunk= buff= (uchar*) my_malloc(n_length, MYF(MY_WME))))
goto err;
if (my_pread(file, buff, n_length, record_offset + share->reclength,
MYF(MY_NABP)))
{
- my_free(buff, MYF(0));
- goto err;
+ goto free_and_err;
}
share->connect_string.length= uint2korr(buff);
if (!(share->connect_string.str= strmake_root(&share->mem_root,
@@ -874,8 +873,7 @@
share->connect_string.
length)))
{
- my_free(buff, MYF(0));
- goto err;
+ goto free_and_err;
}
next_chunk+= share->connect_string.length + 2;
buff_end= buff + n_length;
@@ -895,8 +893,7 @@
plugin_data(tmp_plugin, handlerton *)))
{
/* bad file, legacy_db_type did not match the name */
- my_free(buff, MYF(0));
- goto err;
+ goto free_and_err;
}
/*
tmp_plugin is locked with a local lock.
@@ -925,8 +922,7 @@
error= 8;
my_error(ER_OPTION_PREVENTS_STATEMENT, MYF(0),
"--skip-partition");
- my_free(buff, MYF(0));
- goto err;
+ goto free_and_err;
}
plugin_unlock(NULL, share->db_plugin);
share->db_plugin= ha_lock_engine(NULL, partition_hton);
@@ -940,8 +936,7 @@
/* purecov: begin inspected */
error= 8;
my_error(ER_UNKNOWN_STORAGE_ENGINE, MYF(0), name.str);
- my_free(buff, MYF(0));
- goto err;
+ goto free_and_err;
/* purecov: end */
}
next_chunk+= str_db_type_length + 2;
@@ -957,16 +952,14 @@
memdup_root(&share->mem_root, next_chunk + 4,
partition_info_len + 1)))
{
- my_free(buff, MYF(0));
- goto err;
+ goto free_and_err;
}
}
#else
if (partition_info_len)
{
DBUG_PRINT("info", ("WITH_PARTITION_STORAGE_ENGINE is not defined"));
- my_free(buff, MYF(0));
- goto err;
+ goto free_and_err;
}
#endif
next_chunk+= 5 + partition_info_len;
@@ -992,6 +985,17 @@
#endif
next_chunk++;
}
+ if (share->db_create_options & HA_OPTION_TEXT_CREATE_OPTIONS)
+ {
+ /*
+ remember the position of the options, but postpone parsing
+ them until the number of fields is known
+ */
+ options_len= uint4korr(next_chunk);
+ options= next_chunk + 4;
+ next_chunk+= options_len;
+ options_len-= 4;
+ }
keyinfo= share->key_info;
for (i= 0; i < keys; i++, keyinfo++)
{
@@ -1002,8 +1006,7 @@
{
DBUG_PRINT("error",
("fulltext key uses parser that is not defined in .frm"));
- my_free(buff, MYF(0));
- goto err;
+ goto free_and_err;
}
parser_name.str= (char*) next_chunk;
parser_name.length= strlen((char*) next_chunk);
@@ -1013,12 +1016,10 @@
if (! keyinfo->parser)
{
my_error(ER_PLUGIN_IS_NOT_LOADED, MYF(0), parser_name.str);
- my_free(buff, MYF(0));
- goto err;
+ goto free_and_err;
}
}
}
- my_free(buff, MYF(0));
}
share->key_block_size= uint2korr(head+62);
@@ -1028,21 +1029,21 @@
share->rec_buff_length= rec_buff_length;
if (!(record= (uchar *) alloc_root(&share->mem_root,
rec_buff_length)))
- goto err; /* purecov: inspected */
+ goto free_and_err; /* purecov: inspected */
share->default_values= record;
if (my_pread(file, record, (size_t) share->reclength,
record_offset, MYF(MY_NABP)))
- goto err; /* purecov: inspected */
+ goto free_and_err; /* purecov: inspected */
VOID(my_seek(file,pos,MY_SEEK_SET,MYF(0)));
if (my_read(file, head,288,MYF(MY_NABP)))
- goto err;
+ goto free_and_err;
#ifdef HAVE_CRYPTED_FRM
if (crypted)
{
crypted->decode((char*) head+256,288-256);
if (sint2korr(head+284) != 0) // Should be 0
- goto err; // Wrong password
+ goto free_and_err; // Wrong password
}
#endif
@@ -1062,6 +1063,8 @@
share->comment.length);
DBUG_PRINT("info",("i_count: %d i_parts: %d index: %d n_length: %d int_length: %d com_length: %d vcol_screen_length: %d", interval_count,interval_parts, share->keys,n_length,int_length, com_length, vcol_screen_length));
+
+
if (!(field_ptr = (Field **)
alloc_root(&share->mem_root,
(uint) ((share->fields+1)*sizeof(Field*)+
@@ -1070,14 +1073,14 @@
keys+3)*sizeof(char *)+
(n_length+int_length+com_length+
vcol_screen_length)))))
- goto err; /* purecov: inspected */
+ goto free_and_err; /* purecov: inspected */
share->field= field_ptr;
read_length=(uint) (share->fields * field_pack_length +
pos+ (uint) (n_length+int_length+com_length+
vcol_screen_length));
if (read_string(file,(uchar**) &disk_buff,read_length))
- goto err; /* purecov: inspected */
+ goto free_and_err; /* purecov: inspected */
#ifdef HAVE_CRYPTED_FRM
if (crypted)
{
@@ -1104,7 +1107,7 @@
fix_type_pointers(&interval_array, &share->fieldnames, 1, &names);
if (share->fieldnames.count != share->fields)
- goto err;
+ goto free_and_err;
fix_type_pointers(&interval_array, share->intervals, interval_count,
&names);
@@ -1118,7 +1121,7 @@
uint count= (uint) (interval->count + 1) * sizeof(uint);
if (!(interval->type_lengths= (uint *) alloc_root(&share->mem_root,
count)))
- goto err;
+ goto free_and_err;
for (count= 0; count < interval->count; count++)
{
char *val= (char*) interval->type_names[count];
@@ -1134,7 +1137,7 @@
/* Allocate handler */
if (!(handler_file= get_new_handler(share, thd->mem_root,
share->db_type())))
- goto err;
+ goto free_and_err;
record= share->default_values-1; /* Fieldstart = 1 */
if (share->null_field_first)
@@ -1196,7 +1199,7 @@
charset= &my_charset_bin;
#else
error= 4; // unsupported field type
- goto err;
+ goto free_and_err;
#endif
}
else
@@ -1207,7 +1210,7 @@
{
error= 5; // Unknown or unavailable charset
errarg= (int) strpos[14];
- goto err;
+ goto free_and_err;
}
}
@@ -1247,7 +1250,7 @@
if ((uint)vcol_screen_pos[0] != 1)
{
error= 4;
- goto err;
+ goto free_and_err;
}
field_type= (enum_field_types) (uchar) vcol_screen_pos[1];
fld_stored_in_db= (bool) (uint) vcol_screen_pos[2];
@@ -1256,7 +1259,7 @@
(char *)memdup_root(&share->mem_root,
vcol_screen_pos+(uint)FRM_VCOL_HEADER_SIZE,
vcol_expr_length)))
- goto err;
+ goto free_and_err;
vcol_info->expr_str.length= vcol_expr_length;
vcol_screen_pos+= vcol_info_length;
share->vfields++;
@@ -1346,7 +1349,7 @@
if (!reg_field) // Not supported field type
{
error= 4;
- goto err; /* purecov: inspected */
+ goto free_and_err; /* purecov: inspected */
}
reg_field->field_index= i;
@@ -1385,7 +1388,7 @@
sent (OOM).
*/
error= 8;
- goto err;
+ goto free_and_err;
}
}
if (!reg_field->stored_in_db)
@@ -1462,7 +1465,7 @@
if (!key_part->fieldnr)
{
error= 4; // Wrong file
- goto err;
+ goto free_and_err;
}
field= key_part->field= share->field[key_part->fieldnr-1];
key_part->type= field->key_type();
@@ -1627,6 +1630,15 @@
null_length, 255);
}
+ if (share->db_create_options & HA_OPTION_TEXT_CREATE_OPTIONS)
+ {
+ DBUG_ASSERT(options_len);
+ if (engine_table_options_frm_read(options, options_len, share) ||
+ parse_engine_table_options(thd, handler_file->ht, share))
+ goto free_and_err;
+ }
+ my_free(buff, MYF(MY_ALLOW_ZERO_PTR));
+
if (share->found_next_number_field)
{
reg_field= *share->found_next_number_field;
@@ -1685,6 +1697,8 @@
#endif
DBUG_RETURN (0);
+ free_and_err:
+ my_free(buff, MYF(MY_ALLOW_ZERO_PTR));
err:
share->error= error;
share->open_errno= my_errno;
@@ -2883,6 +2897,7 @@
ulong length;
uchar fill[IO_SIZE];
int create_flags= O_RDWR | O_TRUNC;
+ DBUG_ENTER("create_frm");
if (create_info->options & HA_LEX_CREATE_TMP_TABLE)
create_flags|= O_EXCL | O_NOFOLLOW;
@@ -2964,7 +2979,7 @@
{
VOID(my_close(file,MYF(0)));
VOID(my_delete(name,MYF(0)));
- return(-1);
+ DBUG_RETURN(-1);
}
}
}
@@ -2975,7 +2990,7 @@
else
my_error(ER_CANT_CREATE_TABLE,MYF(0),table,my_errno);
}
- return (file);
+ DBUG_RETURN(file);
} /* create_frm */
@@ -2993,7 +3008,7 @@
create_info->table_charset= 0;
create_info->comment= share->comment;
create_info->transactional= share->transactional;
- create_info->page_checksum= share->page_checksum;
+ create_info->option_list= share->option_list;
DBUG_VOID_RETURN;
}
=== modified file 'sql/table.h'
--- a/sql/table.h 2010-02-12 08:47:31 +0000
+++ b/sql/table.h 2010-03-26 21:49:58 +0000
@@ -340,6 +340,8 @@
#ifdef NOT_YET
struct st_table *open_tables; /* link to open tables */
#endif
+ engine_option_value *option_list; /* text options for table */
+ void *option_struct; /* structure with parsed options */
/* The following is copied to each TABLE on OPEN */
Field **field;
=== modified file 'sql/unireg.cc'
--- a/sql/unireg.cc 2010-01-04 17:54:42 +0000
+++ b/sql/unireg.cc 2010-03-26 21:49:58 +0000
@@ -46,6 +46,13 @@
uint reclength, ulong data_offset,
handler *handler);
+uint engine_table_options_frm_length(engine_option_value *table_option_list,
+ List<Create_field> &create_fields,
+ uint keys, KEY *key_info);
+uchar *engine_table_options_frm_image(uchar *buff,
+ engine_option_value *table_option_list,
+ List<Create_field> &create_fields,
+ uint keys, KEY *key_info);
/**
An interceptor to hijack ER_TOO_MANY_FIELDS error from
pack_screens and retry again without UNIREG screens.
@@ -75,6 +82,7 @@
return is_handled;
}
+
/*
Create a frm (table definition) file
@@ -107,6 +115,7 @@
ulong key_buff_length;
File file;
ulong filepos, data_offset;
+ uint options_len= 0;
uchar fileinfo[64],forminfo[288],*keybuff;
TYPELIB formnames;
uchar *screen_buff;
@@ -183,6 +192,17 @@
create_info->extra_size+= key_info[i].parser_name->length + 1;
}
+ {
+ options_len= engine_table_options_frm_length(create_info->option_list,
+ create_fields,
+ keys, key_info);
+ if (options_len)
+ {
+ create_info->table_options|= HA_OPTION_TEXT_CREATE_OPTIONS;
+ create_info->extra_size+= (options_len+= 4);
+ }
+ }
+
if ((file=create_frm(thd, file_name, db, table, reclength, fileinfo,
create_info, keys)) < 0)
{
@@ -294,6 +314,25 @@
if (my_write(file, (uchar*) buff, 6, MYF_RW))
goto err;
}
+
+ if (options_len)
+ {
+ uchar *optbuff= (uchar *)my_malloc(options_len, MYF(0));
+ my_bool error;
+ DBUG_PRINT("info", ("Create options length: %u", options_len));
+ if (!optbuff)
+ goto err;
+ int4store(optbuff, options_len);
+ engine_table_options_frm_image(optbuff + 4,
+ create_info->option_list,
+ create_fields,
+ keys, key_info);
+ error= my_write(file, optbuff, options_len, MYF_RW);
+ my_free(optbuff, MYF(0));
+ if (error)
+ goto err;
+ }
+
for (i= 0; i < keys; i++)
{
if (key_info[i].parser_name)
=== modified file 'storage/example/ha_example.cc'
--- a/storage/example/ha_example.cc 2010-03-03 14:44:14 +0000
+++ b/storage/example/ha_example.cc 2010-03-26 21:49:58 +0000
@@ -62,7 +62,7 @@
ha_example::rnd_next
ha_example::rnd_next
ha_example::rnd_next
- ha_example::rnd_next
+ ha_example::rnd_nex/t
ha_example::rnd_next
ha_example::rnd_next
ha_example::rnd_next
@@ -113,6 +113,55 @@
/* The mutex used to init the hash; variable for example share methods */
pthread_mutex_t example_mutex;
+
+/**
+ structure for CREATE TABLE options (table options)
+*/
+
+struct example_table_options_struct
+{
+ const char *strparam;
+ ulonglong ullparam;
+ uint enumparam;
+ uint boolparam;
+};
+
+
+/**
+ structure for CREATE TABLE options (field options)
+*/
+
+struct example_field_options_struct
+{
+ const char *compex_param_to_parse_it_in_engine;
+};
+
+#define ha_table_option_struct example_table_options_struct
+ha_create_table_option example_table_option_list[]=
+{
+ HA_TOPTION_ULL("UUL", ullparam, UINT_MAX32, 0, UINT_MAX32, 1),
+ HA_TOPTION_STRING("STR", strparam),
+ HA_TOPTION_ENUM("one_or_two", enumparam, "one,two", 0),
+ HA_TOPTION_BOOL("YESNO", boolparam, 1),
+ HA_TOPTION_END
+};
+
+#define ha_field_option_struct example_field_options_struct
+ha_create_table_option example_field_option_list[]=
+{
+ HA_FOPTION_STRING("COMPLEX", compex_param_to_parse_it_in_engine),
+ HA_FOPTION_END
+};
+
+
+ha_create_table_option_rules example_table_option_list_rules=
+{
+ example_table_option_list,
+ example_field_option_list,
+ NULL
+};
+
+
/**
@brief
Function we use in the creation of our hash to get key.
@@ -138,6 +187,7 @@
example_hton->state= SHOW_OPTION_YES;
example_hton->create= example_create_handler;
example_hton->flags= HTON_CAN_RECREATE;
+ example_hton->table_options_rules= &example_table_option_list_rules;
DBUG_RETURN(0);
}
@@ -789,7 +839,7 @@
int ha_example::rename_table(const char * from, const char * to)
{
DBUG_ENTER("ha_example::rename_table ");
- DBUG_RETURN(HA_ERR_WRONG_COMMAND);
+ DBUG_RETURN(0);
}
@@ -836,14 +886,85 @@
int ha_example::create(const char *name, TABLE *table_arg,
HA_CREATE_INFO *create_info)
{
+ example_table_options_struct *prm=
+ (example_table_options_struct *)table_arg->s->option_struct;
DBUG_ENTER("ha_example::create");
/*
This is not implemented but we want someone to be able to see that it
works.
*/
+
+ DBUG_ASSERT(prm);
+ DBUG_PRINT("info", ("strparam: '%-.64s' ullparam: %llu enumparam: %u "\
+ "boolparam: %u",
+ (prm->strparam ? prm->strparam : "<NULL>"),
+ prm->ullparam, prm->enumparam, prm->boolparam));
+ for (Field **field= table_arg->s->field; *field; field++)
+ {
+ example_field_options_struct *fprm=
+ (example_field_options_struct *)(*field)->option_struct;
+ DBUG_ASSERT(fprm);
+ DBUG_PRINT("info", ("field: %s complex: '%-.64s'",
+ (*field)->field_name,
+ (fprm->compex_param_to_parse_it_in_engine ?
+ fprm->compex_param_to_parse_it_in_engine :
+ "<NULL>")));
+
+ }
+
DBUG_RETURN(0);
}
+bool ha_example::check_if_incompatible_data(HA_CREATE_INFO *info,
+ uint table_changes)
+{
+ example_table_options_struct *prm;
+ DBUG_ENTER("ha_example::check_if_incompatible_data");
+ DBUG_ASSERT(info->option_struct);
+ DBUG_ASSERT(info->old_option_struct);
+ prm= (example_table_options_struct *)info->option_struct;
+ DBUG_PRINT("info", ("new strparam: '%-.64s' ullparam: %llu enumparam: %u "\
+ "boolparam: %u",
+ (prm->strparam ? prm->strparam : "<NULL>"),
+ prm->ullparam, prm->enumparam, prm->boolparam));
+
+ prm= (example_table_options_struct *)info->old_option_struct;
+ DBUG_PRINT("info", ("old strparam: '%-.64s' ullparam: %llu enumparam: %u "\
+ "boolparam: %u",
+ (prm->strparam ? prm->strparam : "<NULL>"),
+ prm->ullparam, prm->enumparam, prm->boolparam));
+
+ for (Field **field= info->old_field; *field; field++)
+ {
+ example_field_options_struct *fprm;
+ if ((*field)->new_option_struct)
+ {
+ fprm=
+ (example_field_options_struct *)(*field)->new_option_struct;
+ DBUG_PRINT("info", ("new field: %s complex: '%-.64s'",
+ (*field)->field_name,
+ (fprm->compex_param_to_parse_it_in_engine ?
+ fprm->compex_param_to_parse_it_in_engine :
+ "<NULL>")));
+ }
+ else
+ DBUG_PRINT("info", ("new field %s is the same", (*field)->field_name));
+
+ fprm=
+ (example_field_options_struct *)(*field)->option_struct;
+ DBUG_ASSERT(fprm);
+ DBUG_PRINT("info", ("old field: %s complex: '%-.64s'",
+ (*field)->field_name,
+ (fprm->compex_param_to_parse_it_in_engine ?
+ fprm->compex_param_to_parse_it_in_engine :
+ "<NULL>")));
+ }
+
+ DBUG_RETURN(COMPATIBLE_DATA_YES);
+}
+
+
struct st_mysql_storage_engine example_storage_engine=
{ MYSQL_HANDLERTON_INTERFACE_VERSION };
=== modified file 'storage/example/ha_example.h'
--- a/storage/example/ha_example.h 2007-08-13 13:11:25 +0000
+++ b/storage/example/ha_example.h 2010-03-26 21:49:58 +0000
@@ -245,6 +245,8 @@
int rename_table(const char * from, const char * to);
int create(const char *name, TABLE *form,
HA_CREATE_INFO *create_info); ///< required
+ bool check_if_incompatible_data(HA_CREATE_INFO *info,
+ uint table_changes);
THR_LOCK_DATA **store_lock(THD *thd, THR_LOCK_DATA **to,
enum thr_lock_type lock_type); ///< required
=== modified file 'storage/pbxt/src/discover_xt.cc'
--- a/storage/pbxt/src/discover_xt.cc 2010-02-01 06:14:12 +0000
+++ b/storage/pbxt/src/discover_xt.cc 2010-03-26 21:49:58 +0000
@@ -1623,7 +1623,7 @@
#endif
NULL /*default_value*/, NULL /*on_update_value*/, &comment, NULL /*change*/,
NULL /*interval_list*/, info->field_charset, 0 /*uint_geom_type*/,
- NULL /*vcol_info*/))
+ NULL /*vcol_info*/, NULL /* create options */))
#endif
goto error;
[Maria-developers] Rev 2751: options for CREATE TABLE (MWL#43) in file:///home/bell/maria/bzr/work-maria-5.2-createoptions2/
by sanja@askmonty.org 26 Mar '10
by sanja@askmonty.org 26 Mar '10
26 Mar '10
At file:///home/bell/maria/bzr/work-maria-5.2-createoptions2/
------------------------------------------------------------
revno: 2751
revision-id: sanja(a)askmonty.org-20100326191133-m67ekl1rviaf347m
parent: sergii(a)pisem.net-20100323092233-t2gwaclx94hd6exa
committer: sanja(a)askmonty.org
branch nick: work-maria-5.2-createoptions2
timestamp: Fri 2010-03-26 21:11:33 +0200
message:
options for CREATE TABLE (MWL#43)
=== modified file 'Docs/sp-imp-spec.txt'
--- a/Docs/sp-imp-spec.txt 2004-03-23 11:04:40 +0000
+++ b/Docs/sp-imp-spec.txt 2010-03-26 19:11:33 +0000
@@ -1075,7 +1075,7 @@
'PIPES_AS_CONCAT',
'ANSI_QUOTES',
'IGNORE_SPACE',
- 'NOT_USED',
+ 'CREATE_OPTIONS_ERR',
'ONLY_FULL_GROUP_BY',
'NO_UNSIGNED_SUBTRACTION',
'NO_DIR_IN_CREATE',
@@ -1097,4 +1097,4 @@
) comment='Stored Procedures';
--
-
\ No newline at end of file
+
=== modified file 'include/my_base.h'
--- a/include/my_base.h 2010-02-10 19:06:24 +0000
+++ b/include/my_base.h 2010-03-26 19:11:33 +0000
@@ -314,6 +314,8 @@
#define HA_OPTION_RELIES_ON_SQL_LAYER 512
#define HA_OPTION_NULL_FIELDS 1024
#define HA_OPTION_PAGE_CHECKSUM 2048
+/* .frm has extra create options in linked-list format */
+#define HA_OPTION_TEXT_CREATE_OPTIONS (1L << 14)
#define HA_OPTION_TEMP_COMPRESS_RECORD (1L << 15) /* set by isamchk */
#define HA_OPTION_READ_ONLY_DATA (1L << 16) /* Set by isamchk */
#define HA_OPTION_NO_CHECKSUM (1L << 17)
=== modified file 'libmysqld/CMakeLists.txt'
--- a/libmysqld/CMakeLists.txt 2010-01-31 09:13:21 +0000
+++ b/libmysqld/CMakeLists.txt 2010-03-26 19:11:33 +0000
@@ -139,7 +139,8 @@
../sql/strfunc.cc ../sql/table.cc ../sql/thr_malloc.cc
../sql/time.cc ../sql/tztime.cc ../sql/uniques.cc ../sql/unireg.cc
../sql/partition_info.cc ../sql/sql_connect.cc
- ../sql/scheduler.cc ../sql/event_parse_data.cc
+ ../sql/scheduler.cc ../sql/event_parse_data.cc
+ ../sql/create_options.cc
${GEN_SOURCES}
${LIB_SOURCES})
=== modified file 'libmysqld/Makefile.am'
--- a/libmysqld/Makefile.am 2009-12-03 11:19:05 +0000
+++ b/libmysqld/Makefile.am 2010-03-26 19:11:33 +0000
@@ -75,7 +75,7 @@
parse_file.cc sql_view.cc sql_trigger.cc my_decimal.cc \
rpl_filter.cc sql_partition.cc sql_builtin.cc sql_plugin.cc \
debug_sync.cc \
- sql_tablespace.cc \
+ sql_tablespace.cc create_options.cc \
rpl_injector.cc my_user.c partition_info.cc \
sql_servers.cc event_parse_data.cc opt_table_elimination.cc
=== added file 'mysql-test/r/create_options.result'
--- a/mysql-test/r/create_options.result 1970-01-01 00:00:00 +0000
+++ b/mysql-test/r/create_options.result 2010-03-26 19:11:33 +0000
@@ -0,0 +1,169 @@
+drop table if exists t1;
+SET @OLD_SQL_MODE=@@SQL_MODE;
+SET SQL_MODE='';
+create table t1 (a int fkey=vvv, key akey (a) dff=vvv) tkey1=1v1;
+Warnings:
+Warning 1650 Unknown option 'fkey'='vvv'
+Warning 1650 Unknown option 'dff'='vvv'
+Warning 1650 Unknown option 'tkey1'='1v1'
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey='vvv',
+ KEY `akey` (`a`) dff='vvv'
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey1='1v1'
+drop table t1;
+#reassiginig options in the same line
+create table t1 (a int fkey=vvv, key akey (a) dff=vvv) tkey1=1v1 TKEY1=DEFAULT tkey1=1v2 tkey2=2v1;
+Warnings:
+Warning 1650 Unknown option 'fkey'='vvv'
+Warning 1650 Unknown option 'dff'='vvv'
+Warning 1650 Unknown option 'tkey1'='1v2'
+Warning 1650 Unknown option 'tkey2'='2v1'
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey='vvv',
+ KEY `akey` (`a`) dff='vvv'
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey1='1v2' tkey2='2v1'
+#add option
+alter table t1 tkey4=4v1;
+Warnings:
+Warning 1650 Unknown option 'tkey4'='4v1'
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey='vvv',
+ KEY `akey` (`a`) dff='vvv'
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey1='1v2' tkey2='2v1' tkey4='4v1'
+#remove options
+alter table t1 tkey3=DEFAULT tkey4=DEFAULT;
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey='vvv',
+ KEY `akey` (`a`) dff='vvv'
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey1='1v2' tkey2='2v1'
+drop table t1;
+create table t1 (a int fkey1=v1, key akey (a) kkey1=v1) tkey1=1v1 tkey1=1v2 TKEY1=DEFAULT tkey2=2v1 tkey3=3v1;
+Warnings:
+Warning 1650 Unknown option 'fkey1'='v1'
+Warning 1650 Unknown option 'kkey1'='v1'
+Warning 1650 Unknown option 'tkey2'='2v1'
+Warning 1650 Unknown option 'tkey3'='3v1'
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey1='v1',
+ KEY `akey` (`a`) kkey1='v1'
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey2='2v1' tkey3='3v1'
+#change field with option with the same option
+alter table t1 change a a int `FKEY1`='v1';
+Warnings:
+Warning 1650 Unknown option 'FKEY1'='v1'
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL FKEY1='v1',
+ KEY `akey` (`a`) kkey1='v1'
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey2='2v1' tkey3='3v1'
+#change field with option with a different option
+alter table t1 change a a int fkey1=v2;
+Warnings:
+Warning 1650 Unknown option 'fkey1'='v2'
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey1='v2',
+ KEY `akey` (`a`) kkey1='v1'
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey2='2v1' tkey3='3v1'
+#new column no options
+alter table t1 add column b int;
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey1='v2',
+ `b` int(11) DEFAULT NULL,
+ KEY `akey` (`a`) kkey1='v1'
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey2='2v1' tkey3='3v1'
+#new key with options
+alter table t1 add key bkey (b) kkey2=v1;
+Warnings:
+Warning 1650 Unknown option 'kkey2'='v1'
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey1='v2',
+ `b` int(11) DEFAULT NULL,
+ KEY `akey` (`a`) kkey1='v1',
+ KEY `bkey` (`b`) kkey2='v1'
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey2='2v1' tkey3='3v1'
+#new column with options
+alter table t1 add column c int fkey1=v1 fkey2=v2;
+Warnings:
+Warning 1650 Unknown option 'fkey1'='v1'
+Warning 1650 Unknown option 'fkey2'='v2'
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey1='v2',
+ `b` int(11) DEFAULT NULL,
+ `c` int(11) DEFAULT NULL fkey1='v1' fkey2='v2',
+ KEY `akey` (`a`) kkey1='v1',
+ KEY `bkey` (`b`) kkey2='v1'
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey2='2v1' tkey3='3v1'
+#new key no options
+alter table t1 add key ckey (c);
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey1='v2',
+ `b` int(11) DEFAULT NULL,
+ `c` int(11) DEFAULT NULL fkey1='v1' fkey2='v2',
+ KEY `akey` (`a`) kkey1='v1',
+ KEY `bkey` (`b`) kkey2='v1',
+ KEY `ckey` (`c`)
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey2='2v1' tkey3='3v1'
+#drop column
+alter table t1 drop b;
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey1='v2',
+ `c` int(11) DEFAULT NULL fkey1='v1' fkey2='v2',
+ KEY `akey` (`a`) kkey1='v1',
+ KEY `ckey` (`c`)
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey2='2v1' tkey3='3v1'
+#add column with options after delete
+alter table t1 add column b int fkey2=v1;
+Warnings:
+Warning 1650 Unknown option 'fkey2'='v1'
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey1='v2',
+ `c` int(11) DEFAULT NULL fkey1='v1' fkey2='v2',
+ `b` int(11) DEFAULT NULL fkey2='v1',
+ KEY `akey` (`a`) kkey1='v1',
+ KEY `ckey` (`c`)
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey2='2v1' tkey3='3v1'
+#add key
+alter table t1 add key bkey (b) kkey2=v2;
+Warnings:
+Warning 1650 Unknown option 'kkey2'='v2'
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL fkey1='v2',
+ `c` int(11) DEFAULT NULL fkey1='v1' fkey2='v2',
+ `b` int(11) DEFAULT NULL fkey2='v1',
+ KEY `akey` (`a`) kkey1='v1',
+ KEY `ckey` (`c`),
+ KEY `bkey` (`b`) kkey2='v2'
+) ENGINE=MyISAM DEFAULT CHARSET=latin1 tkey2='2v1' tkey3='3v1'
+drop table t1;
+#error on unknown option
+SET SQL_MODE='CREATE_OPTIONS_ERR';
+create table t1 (a int fkey=vvv, key akey (a) dff=vvv) tkey1=1v1;
+ERROR HY000: Unknown option 'fkey'='vvv'
+SET @@SQL_MODE=@OLD_SQL_MODE;
=== modified file 'mysql-test/r/events_bugs.result'
--- a/mysql-test/r/events_bugs.result 2009-03-11 20:30:56 +0000
+++ b/mysql-test/r/events_bugs.result 2010-03-26 19:11:33 +0000
@@ -729,9 +729,8 @@
create event e1 on schedule every 1 day do select 1;
select @@sql_mode;
@@sql_mode
-REAL_AS_FLOAT,PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE,?,ONLY_FULL_GROUP_BY,NO_UNSIGNED_SUBTRACTION,NO_DIR_IN_CREATE,POSTGRESQL,ORACLE,MSSQL,DB2,MAXDB,NO_KEY_OPTIONS,NO_TABLE_OPTIONS,NO_FIELD_OPTIONS,MYSQL323,MYSQL40,ANSI,NO_AUTO_VALUE_ON_ZERO,NO_BACKSLASH_ESCAPES,STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ALLOW_INVALID_DATES,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,HIGH_NOT_PRECEDENCE,NO_ENGINE_SUBSTITUTION,PAD_CHAR_TO_FULL_LENGTH
+REAL_AS_FLOAT,PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE,CREATE_OPTIONS_ERR,ONLY_FULL_GROUP_BY,NO_UNSIGNED_SUBTRACTION,NO_DIR_IN_CREATE,POSTGRESQL,ORACLE,MSSQL,DB2,MAXDB,NO_KEY_OPTIONS,NO_TABLE_OPTIONS,NO_FIELD_OPTIONS,MYSQL323,MYSQL40,ANSI,NO_AUTO_VALUE_ON_ZERO,NO_BACKSLASH_ESCAPES,STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ALLOW_INVALID_DATES,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,HIGH_NOT_PRECEDENCE,NO_ENGINE_SUBSTITUTION,PAD_CHAR_TO_FULL_LENGTH
set @@sql_mode= @old_mode;
-select replace(@full_mode, '?', 'NOT_USED') into @full_mode;
select replace(@full_mode, 'ALLOW_INVALID_DATES', 'INVALID_DATES') into @full_mode;
select name from mysql.event where name = 'p' and sql_mode = @full_mode;
name
=== modified file 'mysql-test/r/information_schema.result'
--- a/mysql-test/r/information_schema.result 2010-03-15 11:51:23 +0000
+++ b/mysql-test/r/information_schema.result 2010-03-26 19:11:33 +0000
@@ -615,7 +615,7 @@
proc definer char(77)
proc created timestamp
proc modified timestamp
-proc sql_mode set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
+proc sql_mode set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
proc comment char(64)
proc character_set_client char(32)
proc collation_connection char(32)
=== modified file 'mysql-test/r/plugin_load.result'
--- a/mysql-test/r/plugin_load.result 2008-01-26 00:05:15 +0000
+++ b/mysql-test/r/plugin_load.result 2010-03-26 19:11:33 +0000
@@ -1,3 +1,30 @@
SELECT @@global.example_enum_var = 'e2';
@@global.example_enum_var = 'e2'
1
+#legal values
+CREATE TABLE t1 ( a int complex='c,f,f,f' ) ENGINE=example UUL=10000 STR='dskj' one_or_two='one' YESNO=0;
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL complex='c,f,f,f'
+) ENGINE=EXAMPLE DEFAULT CHARSET=latin1 UUL=10000 STR='dskj' one_or_two='one' YESNO=0
+drop table t1;
+SET @OLD_SQL_MODE=@@SQL_MODE;
+SET SQL_MODE='';
+#illegal value fixed
+CREATE TABLE t1 (a int) ENGINE=example UUL=10000000000000000000 one_or_two='ttt' YESNO=SSS;
+Warnings:
+Warning 1651 Incorrect option value 'UUL'='10000000000000000000'
+Warning 1651 Incorrect option value 'one_or_two'='ttt'
+Warning 1651 Incorrect option value 'YESNO'='SSS'
+show create table t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `a` int(11) DEFAULT NULL
+) ENGINE=EXAMPLE DEFAULT CHARSET=latin1 UUL=4294967295 YESNO=0
+drop table t1;
+#illegal value error
+SET SQL_MODE='CREATE_OPTIONS_ERR';
+CREATE TABLE t1 (a int) ENGINE=example UUL=10000000000000000000 one_or_two='ttt' YESNO=SSS;
+ERROR HY000: Incorrect option value 'UUL'='10000000000000000000'
+SET @@SQL_MODE=@OLD_SQL_MODE;
=== modified file 'mysql-test/r/sp.result'
--- a/mysql-test/r/sp.result 2009-12-23 13:44:03 +0000
+++ b/mysql-test/r/sp.result 2010-03-26 19:11:33 +0000
@@ -6940,9 +6940,8 @@
call p();
select @@sql_mode;
@@sql_mode
-REAL_AS_FLOAT,PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE,?,ONLY_FULL_GROUP_BY,NO_UNSIGNED_SUBTRACTION,NO_DIR_IN_CREATE,POSTGRESQL,ORACLE,MSSQL,DB2,MAXDB,NO_KEY_OPTIONS,NO_TABLE_OPTIONS,NO_FIELD_OPTIONS,MYSQL323,MYSQL40,ANSI,NO_AUTO_VALUE_ON_ZERO,NO_BACKSLASH_ESCAPES,STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ALLOW_INVALID_DATES,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,HIGH_NOT_PRECEDENCE,NO_ENGINE_SUBSTITUTION,PAD_CHAR_TO_FULL_LENGTH
+REAL_AS_FLOAT,PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE,CREATE_OPTIONS_ERR,ONLY_FULL_GROUP_BY,NO_UNSIGNED_SUBTRACTION,NO_DIR_IN_CREATE,POSTGRESQL,ORACLE,MSSQL,DB2,MAXDB,NO_KEY_OPTIONS,NO_TABLE_OPTIONS,NO_FIELD_OPTIONS,MYSQL323,MYSQL40,ANSI,NO_AUTO_VALUE_ON_ZERO,NO_BACKSLASH_ESCAPES,STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ALLOW_INVALID_DATES,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,HIGH_NOT_PRECEDENCE,NO_ENGINE_SUBSTITUTION,PAD_CHAR_TO_FULL_LENGTH
set @@sql_mode= @old_mode;
-select replace(@full_mode, '?', 'NOT_USED') into @full_mode;
select replace(@full_mode, 'ALLOW_INVALID_DATES', 'INVALID_DATES') into @full_mode;
select name from mysql.proc where name = 'p' and sql_mode = @full_mode;
name
=== modified file 'mysql-test/r/system_mysql_db.result'
--- a/mysql-test/r/system_mysql_db.result 2009-10-27 10:09:36 +0000
+++ b/mysql-test/r/system_mysql_db.result 2010-03-26 19:11:33 +0000
@@ -200,7 +200,7 @@
`definer` char(77) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '',
`created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`modified` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
- `sql_mode` set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH') NOT NULL DEFAULT '',
+ `sql_mode` set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH') NOT NULL DEFAULT '',
`comment` char(64) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '',
`character_set_client` char(32) CHARACTER SET utf8 COLLATE utf8_bin DEFAULT NULL,
`collation_connection` char(32) CHARACTER SET utf8 COLLATE utf8_bin DEFAULT NULL,
@@ -225,7 +225,7 @@
`ends` datetime DEFAULT NULL,
`status` enum('ENABLED','DISABLED','SLAVESIDE_DISABLED') NOT NULL DEFAULT 'ENABLED',
`on_completion` enum('DROP','PRESERVE') NOT NULL DEFAULT 'DROP',
- `sql_mode` set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH') NOT NULL DEFAULT '',
+ `sql_mode` set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH') NOT NULL DEFAULT '',
`comment` char(64) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL DEFAULT '',
`originator` int(10) unsigned NOT NULL,
`time_zone` char(64) CHARACTER SET latin1 NOT NULL DEFAULT 'SYSTEM',
=== modified file 'mysql-test/suite/funcs_1/r/is_columns_mysql.result'
--- a/mysql-test/suite/funcs_1/r/is_columns_mysql.result 2009-10-28 09:23:02 +0000
+++ b/mysql-test/suite/funcs_1/r/is_columns_mysql.result 2010-03-26 19:11:33 +0000
@@ -49,7 +49,7 @@
NULL mysql event name 2 NO char 64 192 NULL NULL utf8 utf8_general_ci char(64) PRI select,insert,update,references
NULL mysql event on_completion 14 DROP NO enum 8 24 NULL NULL utf8 utf8_general_ci enum('DROP','PRESERVE') select,insert,update,references
NULL mysql event originator 17 NULL NO int NULL NULL 10 0 NULL NULL int(10) unsigned select,insert,update,references
-NULL mysql event sql_mode 15 NO set 478 1434 NULL NULL utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH') select,insert,update,references
+NULL mysql event sql_mode 15 NO set 488 1464 NULL NULL utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH') select,insert,update,references
NULL mysql event starts 11 NULL YES datetime NULL NULL NULL NULL NULL NULL datetime select,insert,update,references
NULL mysql event status 13 ENABLED NO enum 18 54 NULL NULL utf8 utf8_general_ci enum('ENABLED','DISABLED','SLAVESIDE_DISABLED') select,insert,update,references
NULL mysql event time_zone 18 SYSTEM NO char 64 64 NULL NULL latin1 latin1_swedish_ci char(64) select,insert,update,references
@@ -124,7 +124,7 @@
NULL mysql proc security_type 8 DEFINER NO enum 7 21 NULL NULL utf8 utf8_general_ci enum('INVOKER','DEFINER') select,insert,update,references
NULL mysql proc specific_name 4 NO char 64 192 NULL NULL utf8 utf8_general_ci char(64) select,insert,update,references
NULL mysql proc sql_data_access 6 CONTAINS_SQL NO enum 17 51 NULL NULL utf8 utf8_general_ci enum('CONTAINS_SQL','NO_SQL','READS_SQL_DATA','MODIFIES_SQL_DATA') select,insert,update,references
-NULL mysql proc sql_mode 15 NO set 478 1434 NULL NULL utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH') select,insert,update,references
+NULL mysql proc sql_mode 15 NO set 488 1464 NULL NULL utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH') select,insert,update,references
NULL mysql proc type 3 NULL NO enum 9 27 NULL NULL utf8 utf8_general_ci enum('FUNCTION','PROCEDURE') PRI select,insert,update,references
NULL mysql procs_priv Db 2 NO char 64 192 NULL NULL utf8 utf8_bin char(64) PRI select,insert,update,references
NULL mysql procs_priv Grantor 6 NO char 77 231 NULL NULL utf8 utf8_bin char(77) MUL select,insert,update,references
@@ -327,7 +327,7 @@
NULL mysql event ends datetime NULL NULL NULL NULL datetime
3.0000 mysql event status enum 18 54 utf8 utf8_general_ci enum('ENABLED','DISABLED','SLAVESIDE_DISABLED')
3.0000 mysql event on_completion enum 8 24 utf8 utf8_general_ci enum('DROP','PRESERVE')
-3.0000 mysql event sql_mode set 478 1434 utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
+3.0000 mysql event sql_mode set 488 1464 utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
3.0000 mysql event comment char 64 192 utf8 utf8_bin char(64)
NULL mysql event originator int NULL NULL NULL NULL int(10) unsigned
1.0000 mysql event time_zone char 64 64 latin1 latin1_swedish_ci char(64)
@@ -402,7 +402,7 @@
3.0000 mysql proc definer char 77 231 utf8 utf8_bin char(77)
NULL mysql proc created timestamp NULL NULL NULL NULL timestamp
NULL mysql proc modified timestamp NULL NULL NULL NULL timestamp
-3.0000 mysql proc sql_mode set 478 1434 utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
+3.0000 mysql proc sql_mode set 488 1464 utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
3.0000 mysql proc comment char 64 192 utf8 utf8_bin char(64)
3.0000 mysql proc character_set_client char 32 96 utf8 utf8_bin char(32)
3.0000 mysql proc collation_connection char 32 96 utf8 utf8_bin char(32)
=== modified file 'mysql-test/suite/funcs_1/r/is_columns_mysql_embedded.result'
--- a/mysql-test/suite/funcs_1/r/is_columns_mysql_embedded.result 2009-05-19 16:43:50 +0000
+++ b/mysql-test/suite/funcs_1/r/is_columns_mysql_embedded.result 2010-03-26 19:11:33 +0000
@@ -49,7 +49,7 @@
NULL mysql event name 2 NO char 64 192 NULL NULL utf8 utf8_general_ci char(64) PRI
NULL mysql event on_completion 14 DROP NO enum 8 24 NULL NULL utf8 utf8_general_ci enum('DROP','PRESERVE')
NULL mysql event originator 17 NULL NO int NULL NULL 10 0 NULL NULL int(10) unsigned
-NULL mysql event sql_mode 15 NO set 478 1434 NULL NULL utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
+NULL mysql event sql_mode 15 NO set 478 1434 NULL NULL utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
NULL mysql event starts 11 NULL YES datetime NULL NULL NULL NULL NULL NULL datetime
NULL mysql event status 13 ENABLED NO enum 18 54 NULL NULL utf8 utf8_general_ci enum('ENABLED','DISABLED','SLAVESIDE_DISABLED')
NULL mysql event time_zone 18 SYSTEM NO char 64 64 NULL NULL latin1 latin1_swedish_ci char(64)
@@ -124,7 +124,7 @@
NULL mysql proc security_type 8 DEFINER NO enum 7 21 NULL NULL utf8 utf8_general_ci enum('INVOKER','DEFINER')
NULL mysql proc specific_name 4 NO char 64 192 NULL NULL utf8 utf8_general_ci char(64)
NULL mysql proc sql_data_access 6 CONTAINS_SQL NO enum 17 51 NULL NULL utf8 utf8_general_ci enum('CONTAINS_SQL','NO_SQL','READS_SQL_DATA','MODIFIES_SQL_DATA')
-NULL mysql proc sql_mode 15 NO set 478 1434 NULL NULL utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
+NULL mysql proc sql_mode 15 NO set 478 1434 NULL NULL utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
NULL mysql proc type 3 NULL NO enum 9 27 NULL NULL utf8 utf8_general_ci enum('FUNCTION','PROCEDURE') PRI
NULL mysql procs_priv Db 2 NO char 64 192 NULL NULL utf8 utf8_bin char(64) PRI
NULL mysql procs_priv Grantor 6 NO char 77 231 NULL NULL utf8 utf8_bin char(77) MUL
@@ -327,7 +327,7 @@
NULL mysql event ends datetime NULL NULL NULL NULL datetime
3.0000 mysql event status enum 18 54 utf8 utf8_general_ci enum('ENABLED','DISABLED','SLAVESIDE_DISABLED')
3.0000 mysql event on_completion enum 8 24 utf8 utf8_general_ci enum('DROP','PRESERVE')
-3.0000 mysql event sql_mode set 478 1434 utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
+3.0000 mysql event sql_mode set 478 1434 utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
3.0000 mysql event comment char 64 192 utf8 utf8_bin char(64)
NULL mysql event originator int NULL NULL NULL NULL int(10) unsigned
1.0000 mysql event time_zone char 64 64 latin1 latin1_swedish_ci char(64)
@@ -402,7 +402,7 @@
3.0000 mysql proc definer char 77 231 utf8 utf8_bin char(77)
NULL mysql proc created timestamp NULL NULL NULL NULL timestamp
NULL mysql proc modified timestamp NULL NULL NULL NULL timestamp
-3.0000 mysql proc sql_mode set 478 1434 utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
+3.0000 mysql proc sql_mode set 478 1434 utf8 utf8_general_ci set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH')
3.0000 mysql proc comment char 64 192 utf8 utf8_bin char(64)
3.0000 mysql proc character_set_client char 32 96 utf8 utf8_bin char(32)
3.0000 mysql proc collation_connection char 32 96 utf8 utf8_bin char(32)
=== added file 'mysql-test/t/create_options.test'
--- a/mysql-test/t/create_options.test 1970-01-01 00:00:00 +0000
+++ b/mysql-test/t/create_options.test 2010-03-26 19:11:33 +0000
@@ -0,0 +1,63 @@
+--disable_warnings
+drop table if exists t1;
+--enable_warnings
+
+SET @OLD_SQL_MODE=@@SQL_MODE;
+SET SQL_MODE='';
+
+create table t1 (a int fkey=vvv, key akey (a) dff=vvv) tkey1=1v1;
+show create table t1;
+drop table t1;
+
+--echo #reassigning options in the same line
+create table t1 (a int fkey=vvv, key akey (a) dff=vvv) tkey1=1v1 TKEY1=DEFAULT tkey1=1v2 tkey2=2v1;
+show create table t1;
+
+-- echo #add option
+alter table t1 tkey4=4v1;
+show create table t1;
+
+--echo #remove options
+alter table t1 tkey3=DEFAULT tkey4=DEFAULT;
+show create table t1;
+
+drop table t1;
+
+create table t1 (a int fkey1=v1, key akey (a) kkey1=v1) tkey1=1v1 tkey1=1v2 TKEY1=DEFAULT tkey2=2v1 tkey3=3v1;
+show create table t1;
+
+--echo #change field with option with the same option
+alter table t1 change a a int `FKEY1`='v1';
+show create table t1;
+--echo #change field with option with a different option
+alter table t1 change a a int fkey1=v2;
+show create table t1;
+--echo #new column no options
+alter table t1 add column b int;
+show create table t1;
+--echo #new key with options
+alter table t1 add key bkey (b) kkey2=v1;
+show create table t1;
+--echo #new column with options
+alter table t1 add column c int fkey1=v1 fkey2=v2;
+show create table t1;
+--echo #new key no options
+alter table t1 add key ckey (c);
+show create table t1;
+--echo #drop column
+alter table t1 drop b;
+show create table t1;
+--echo #add column with options after delete
+alter table t1 add column b int fkey2=v1;
+show create table t1;
+--echo #add key
+alter table t1 add key bkey (b) kkey2=v2;
+show create table t1;
+drop table t1;
+
+--echo #error on unknown option
+SET SQL_MODE='CREATE_OPTIONS_ERR';
+--error ER_UNKNOWN_OPTION
+create table t1 (a int fkey=vvv, key akey (a) dff=vvv) tkey1=1v1;
+
+SET @@SQL_MODE=@OLD_SQL_MODE;
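
The test above hinges on the new CREATE_OPTIONS_ERR sql_mode bit: with the bit off, options the engine does not know are kept (and echoed by SHOW CREATE TABLE), while with the bit on the server fails with the new ER_UNKNOWN_OPTION error. A minimal sketch of such a check, assuming a helper of this shape in the new sql/create_options.cc (the real implementation may differ):

/* Illustrative sketch only: decide between a warning and a hard error for
   an option name the engine did not declare, based on the new
   MODE_CREATE_OPTIONS_ERR sql_mode bit. */
static bool report_unknown_option(THD *thd, engine_option_value *opt)
{
  if (thd->variables.sql_mode & MODE_CREATE_OPTIONS_ERR)
  {
    my_error(ER_UNKNOWN_OPTION, MYF(0), opt->name.str, opt->value.str);
    return TRUE;                          /* abort CREATE/ALTER TABLE */
  }
  push_warning_printf(thd, MYSQL_ERROR::WARN_LEVEL_WARN, ER_UNKNOWN_OPTION,
                      ER(ER_UNKNOWN_OPTION), opt->name.str, opt->value.str);
  return FALSE;                           /* keep the option, just warn */
}
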
=== modified file 'mysql-test/t/events_bugs.test'
--- a/mysql-test/t/events_bugs.test 2009-03-11 20:30:56 +0000
+++ b/mysql-test/t/events_bugs.test 2010-03-26 19:11:33 +0000
@@ -1204,7 +1204,6 @@
select @@sql_mode;
set @@sql_mode= @old_mode;
# Rename SQL modes that differ in name between the server and the table definition.
-select replace(@full_mode, '?', 'NOT_USED') into @full_mode;
select replace(@full_mode, 'ALLOW_INVALID_DATES', 'INVALID_DATES') into @full_mode;
select name from mysql.event where name = 'p' and sql_mode = @full_mode;
drop event e1;
=== modified file 'mysql-test/t/exampledb.test'
--- a/mysql-test/t/exampledb.test 2006-05-05 17:08:40 +0000
+++ b/mysql-test/t/exampledb.test 2010-03-26 19:11:33 +0000
@@ -20,3 +20,4 @@
drop table t1;
# End of 4.1 tests
+
=== modified file 'mysql-test/t/plugin_load.test'
--- a/mysql-test/t/plugin_load.test 2009-10-08 08:39:15 +0000
+++ b/mysql-test/t/plugin_load.test 2010-03-26 19:11:33 +0000
@@ -2,3 +2,23 @@
--source include/have_example_plugin.inc
SELECT @@global.example_enum_var = 'e2';
+
+--echo #legal values
+CREATE TABLE t1 ( a int complex='c,f,f,f' ) ENGINE=example UUL=10000 STR='dskj' one_or_two='one' YESNO=0;
+show create table t1;
+drop table t1;
+
+SET @OLD_SQL_MODE=@@SQL_MODE;
+SET SQL_MODE='';
+
+--echo #illegal value fixed
+CREATE TABLE t1 (a int) ENGINE=example UUL=10000000000000000000 one_or_two='ttt' YESNO=SSS;
+show create table t1;
+drop table t1;
+
+--echo #illegal value error
+SET SQL_MODE='CREATE_OPTIONS_ERR';
+--error ER_BAD_OPTION_VALUE
+CREATE TABLE t1 (a int) ENGINE=example UUL=10000000000000000000 one_or_two='ttt' YESNO=SSS;
+
+SET @@SQL_MODE=@OLD_SQL_MODE;
=== modified file 'mysql-test/t/sp.test'
--- a/mysql-test/t/sp.test 2009-12-23 13:44:03 +0000
+++ b/mysql-test/t/sp.test 2010-03-26 19:11:33 +0000
@@ -8210,7 +8210,6 @@
select @@sql_mode;
set @@sql_mode= @old_mode;
# Rename SQL modes that differ in name between the server and the table definition.
-select replace(@full_mode, '?', 'NOT_USED') into @full_mode;
select replace(@full_mode, 'ALLOW_INVALID_DATES', 'INVALID_DATES') into @full_mode;
select name from mysql.proc where name = 'p' and sql_mode = @full_mode;
drop procedure p;
=== modified file 'scripts/mysql_system_tables.sql'
--- a/scripts/mysql_system_tables.sql 2009-10-27 10:09:36 +0000
+++ b/scripts/mysql_system_tables.sql 2010-03-26 19:11:33 +0000
@@ -60,7 +60,7 @@
CREATE TABLE IF NOT EXISTS time_zone_leap_second ( Transition_time bigint signed NOT NULL, Correction int signed NOT NULL, PRIMARY KEY TranTime (Transition_time) ) engine=MyISAM CHARACTER SET utf8 comment='Leap seconds information for time zones';
-CREATE TABLE IF NOT EXISTS proc (db char(64) collate utf8_bin DEFAULT '' NOT NULL, name char(64) DEFAULT '' NOT NULL, type enum('FUNCTION','PROCEDURE') NOT NULL, specific_name char(64) DEFAULT '' NOT NULL, language enum('SQL') DEFAULT 'SQL' NOT NULL, sql_data_access enum( 'CONTAINS_SQL', 'NO_SQL', 'READS_SQL_DATA', 'MODIFIES_SQL_DATA') DEFAULT 'CONTAINS_SQL' NOT NULL, is_deterministic enum('YES','NO') DEFAULT 'NO' NOT NULL, security_type enum('INVOKER','DEFINER') DEFAULT 'DEFINER' NOT NULL, param_list blob NOT NULL, returns longblob DEFAULT '' NOT NULL, body longblob NOT NULL, definer char(77) collate utf8_bin DEFAULT '' NOT NULL, created timestamp, modified timestamp, sql_mode set( 'REAL_AS_FLOAT', 'PIPES_AS_CONCAT', 'ANSI_QUOTES', 'IGNORE_SPACE', 'NOT_USED', 'ONLY_FULL_GROUP_BY', 'NO_UNSIGNED_SUBTRACTION', 'NO_DIR_IN_CREATE', 'POSTGRESQL', 'ORACLE', 'MSSQL', 'DB2', 'MAXDB', 'NO_KEY_OPTIONS', 'NO_TABLE_OPTIONS', 'NO_FIELD_OPTIONS', 'MYSQL323', 'MYSQL40', 'ANSI', 'NO_AUTO_VALUE_ON_ZERO', 'NO_BACKSLASH_ESCAPES', 'STRICT_TRANS_TABLES', 'STRICT_ALL_TABLES', 'NO_ZERO_IN_DATE', 'NO_ZERO_DATE', 'INVALID_DATES', 'ERROR_FOR_DIVISION_BY_ZERO', 'TRADITIONAL', 'NO_AUTO_CREATE_USER', 'HIGH_NOT_PRECEDENCE', 'NO_ENGINE_SUBSTITUTION', 'PAD_CHAR_TO_FULL_LENGTH') DEFAULT '' NOT NULL, comment char(64) collate utf8_bin DEFAULT '' NOT NULL, character_set_client char(32) collate utf8_bin, collation_connection char(32) collate utf8_bin, db_collation char(32) collate utf8_bin, body_utf8 longblob, PRIMARY KEY (db,name,type)) engine=MyISAM character set utf8 comment='Stored Procedures';
+CREATE TABLE IF NOT EXISTS proc (db char(64) collate utf8_bin DEFAULT '' NOT NULL, name char(64) DEFAULT '' NOT NULL, type enum('FUNCTION','PROCEDURE') NOT NULL, specific_name char(64) DEFAULT '' NOT NULL, language enum('SQL') DEFAULT 'SQL' NOT NULL, sql_data_access enum( 'CONTAINS_SQL', 'NO_SQL', 'READS_SQL_DATA', 'MODIFIES_SQL_DATA') DEFAULT 'CONTAINS_SQL' NOT NULL, is_deterministic enum('YES','NO') DEFAULT 'NO' NOT NULL, security_type enum('INVOKER','DEFINER') DEFAULT 'DEFINER' NOT NULL, param_list blob NOT NULL, returns longblob DEFAULT '' NOT NULL, body longblob NOT NULL, definer char(77) collate utf8_bin DEFAULT '' NOT NULL, created timestamp, modified timestamp, sql_mode set( 'REAL_AS_FLOAT', 'PIPES_AS_CONCAT', 'ANSI_QUOTES', 'IGNORE_SPACE', 'CREATE_OPTIONS_ERR', 'ONLY_FULL_GROUP_BY', 'NO_UNSIGNED_SUBTRACTION', 'NO_DIR_IN_CREATE', 'POSTGRESQL', 'ORACLE', 'MSSQL', 'DB2', 'MAXDB', 'NO_KEY_OPTIONS', 'NO_TABLE_OPTIONS', 'NO_FIELD_OPTIONS', 'MYSQL323', 'MYSQL40', 'ANSI', 'NO_AUTO_VALUE_ON_ZERO', 'NO_BACKSLASH_ESCAPES', 'STRICT_TRANS_TABLES', 'STRICT_ALL_TABLES', 'NO_ZERO_IN_DATE', 'NO_ZERO_DATE', 'INVALID_DATES', 'ERROR_FOR_DIVISION_BY_ZERO', 'TRADITIONAL', 'NO_AUTO_CREATE_USER', 'HIGH_NOT_PRECEDENCE', 'NO_ENGINE_SUBSTITUTION', 'PAD_CHAR_TO_FULL_LENGTH') DEFAULT '' NOT NULL, comment char(64) collate utf8_bin DEFAULT '' NOT NULL, character_set_client char(32) collate utf8_bin, collation_connection char(32) collate utf8_bin, db_collation char(32) collate utf8_bin, body_utf8 longblob, PRIMARY KEY (db,name,type)) engine=MyISAM character set utf8 comment='Stored Procedures';
CREATE TABLE IF NOT EXISTS procs_priv ( Host char(60) binary DEFAULT '' NOT NULL, Db char(64) binary DEFAULT '' NOT NULL, User char(16) binary DEFAULT '' NOT NULL, Routine_name char(64) COLLATE utf8_general_ci DEFAULT '' NOT NULL, Routine_type enum('FUNCTION','PROCEDURE') NOT NULL, Grantor char(77) DEFAULT '' NOT NULL, Proc_priv set('Execute','Alter Routine','Grant') COLLATE utf8_general_ci DEFAULT '' NOT NULL, Timestamp timestamp(14), PRIMARY KEY (Host,Db,User,Routine_name,Routine_type), KEY Grantor (Grantor) ) engine=MyISAM CHARACTER SET utf8 COLLATE utf8_bin comment='Procedure privileges';
@@ -80,7 +80,7 @@
EXECUTE stmt;
DROP PREPARE stmt;
-CREATE TABLE IF NOT EXISTS event ( db char(64) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL default '', name char(64) CHARACTER SET utf8 NOT NULL default '', body longblob NOT NULL, definer char(77) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL default '', execute_at DATETIME default NULL, interval_value int(11) default NULL, interval_field ENUM('YEAR','QUARTER','MONTH','DAY','HOUR','MINUTE','WEEK','SECOND','MICROSECOND','YEAR_MONTH','DAY_HOUR','DAY_MINUTE','DAY_SECOND','HOUR_MINUTE','HOUR_SECOND','MINUTE_SECOND','DAY_MICROSECOND','HOUR_MICROSECOND','MINUTE_MICROSECOND','SECOND_MICROSECOND') default NULL, created TIMESTAMP NOT NULL, modified TIMESTAMP NOT NULL, last_executed DATETIME default NULL, starts DATETIME default NULL, ends DATETIME default NULL, status ENUM('ENABLED','DISABLED','SLAVESIDE_DISABLED') NOT NULL default 'ENABLED', on_completion ENUM('DROP','PRESERVE') NOT NULL default 'DROP', sql_mode set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH') DEFAULT '' NOT NULL, comment char(64) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL default '', originator INTEGER UNSIGNED NOT NULL, time_zone char(64) CHARACTER SET latin1 NOT NULL DEFAULT 'SYSTEM', character_set_client char(32) collate utf8_bin, collation_connection char(32) collate utf8_bin, db_collation char(32) collate utf8_bin, body_utf8 longblob, PRIMARY KEY (db, name) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT 'Events';
+CREATE TABLE IF NOT EXISTS event ( db char(64) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL default '', name char(64) CHARACTER SET utf8 NOT NULL default '', body longblob NOT NULL, definer char(77) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL default '', execute_at DATETIME default NULL, interval_value int(11) default NULL, interval_field ENUM('YEAR','QUARTER','MONTH','DAY','HOUR','MINUTE','WEEK','SECOND','MICROSECOND','YEAR_MONTH','DAY_HOUR','DAY_MINUTE','DAY_SECOND','HOUR_MINUTE','HOUR_SECOND','MINUTE_SECOND','DAY_MICROSECOND','HOUR_MICROSECOND','MINUTE_MICROSECOND','SECOND_MICROSECOND') default NULL, created TIMESTAMP NOT NULL, modified TIMESTAMP NOT NULL, last_executed DATETIME default NULL, starts DATETIME default NULL, ends DATETIME default NULL, status ENUM('ENABLED','DISABLED','SLAVESIDE_DISABLED') NOT NULL default 'ENABLED', on_completion ENUM('DROP','PRESERVE') NOT NULL default 'DROP', sql_mode set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH') DEFAULT '' NOT NULL, comment char(64) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL default '', originator INTEGER UNSIGNED NOT NULL, time_zone char(64) CHARACTER SET latin1 NOT NULL DEFAULT 'SYSTEM', character_set_client char(32) collate utf8_bin, collation_connection char(32) collate utf8_bin, db_collation char(32) collate utf8_bin, body_utf8 longblob, PRIMARY KEY (db, name) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT 'Events';
CREATE TABLE IF NOT EXISTS ndb_binlog_index (Position BIGINT UNSIGNED NOT NULL, File VARCHAR(255) NOT NULL, epoch BIGINT UNSIGNED NOT NULL, inserts BIGINT UNSIGNED NOT NULL, updates BIGINT UNSIGNED NOT NULL, deletes BIGINT UNSIGNED NOT NULL, schemaops BIGINT UNSIGNED NOT NULL, PRIMARY KEY(epoch)) ENGINE=MYISAM;
=== modified file 'scripts/mysql_system_tables_fix.sql'
--- a/scripts/mysql_system_tables_fix.sql 2009-12-03 16:15:47 +0000
+++ b/scripts/mysql_system_tables_fix.sql 2010-03-26 19:11:33 +0000
@@ -368,7 +368,7 @@
'PIPES_AS_CONCAT',
'ANSI_QUOTES',
'IGNORE_SPACE',
- 'NOT_USED',
+ 'CREATE_OPTIONS_ERR',
'ONLY_FULL_GROUP_BY',
'NO_UNSIGNED_SUBTRACTION',
'NO_DIR_IN_CREATE',
@@ -482,14 +482,14 @@
ALTER TABLE event DROP PRIMARY KEY;
ALTER TABLE event ADD PRIMARY KEY(db, name);
# Add sql_mode column just in case.
-ALTER TABLE event ADD sql_mode set ('NOT_USED') AFTER on_completion;
+ALTER TABLE event ADD sql_mode set ('CREATE_OPTIONS_ERR') AFTER on_completion;
# Update list of sql_mode values.
ALTER TABLE event MODIFY sql_mode
set('REAL_AS_FLOAT',
'PIPES_AS_CONCAT',
'ANSI_QUOTES',
'IGNORE_SPACE',
- 'NOT_USED',
+ 'CREATE_OPTIONS_ERR',
'ONLY_FULL_GROUP_BY',
'NO_UNSIGNED_SUBTRACTION',
'NO_DIR_IN_CREATE',
=== modified file 'sql/CMakeLists.txt'
--- a/sql/CMakeLists.txt 2010-03-03 14:44:14 +0000
+++ b/sql/CMakeLists.txt 2010-03-26 19:11:33 +0000
@@ -77,6 +77,7 @@
rpl_rli.cc rpl_mi.cc sql_servers.cc
sql_connect.cc scheduler.cc
sql_profile.cc event_parse_data.cc opt_table_elimination.cc
+ create_options.cc
${PROJECT_SOURCE_DIR}/sql/sql_yacc.cc
${PROJECT_SOURCE_DIR}/sql/sql_yacc.h
${PROJECT_SOURCE_DIR}/include/mysqld_error.h
=== modified file 'sql/Makefile.am'
--- a/sql/Makefile.am 2010-03-03 14:44:14 +0000
+++ b/sql/Makefile.am 2010-03-26 19:11:33 +0000
@@ -78,7 +78,8 @@
sql_plugin.h authors.h event_parse_data.h \
event_data_objects.h event_scheduler.h \
sql_partition.h partition_info.h partition_element.h \
- contributors.h sql_servers.h
+ contributors.h sql_servers.h \
+ create_options.h
mysqld_SOURCES = sql_lex.cc sql_handler.cc sql_partition.cc \
item.cc item_sum.cc item_buff.cc item_func.cc \
@@ -124,7 +125,7 @@
sql_plugin.cc sql_binlog.cc \
sql_builtin.cc sql_tablespace.cc partition_info.cc \
sql_servers.cc event_parse_data.cc \
- opt_table_elimination.cc
+ opt_table_elimination.cc create_options.cc
nodist_mysqld_SOURCES = mini_client_errors.c pack.c client.c my_time.c my_user.c
=== modified file 'sql/event_db_repository.cc'
--- a/sql/event_db_repository.cc 2010-03-15 11:51:23 +0000
+++ b/sql/event_db_repository.cc 2010-03-26 19:11:33 +0000
@@ -105,7 +105,8 @@
{
{ C_STRING_WITH_LEN("sql_mode") },
{ C_STRING_WITH_LEN("set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES',"
- "'IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION',"
+ "'IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY',"
+ "'NO_UNSIGNED_SUBTRACTION',"
"'NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB',"
"'NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40',"
"'ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES',"
=== modified file 'sql/field.cc'
--- a/sql/field.cc 2010-03-17 02:32:31 +0000
+++ b/sql/field.cc 2010-03-26 19:11:33 +0000
@@ -1308,7 +1308,8 @@
utype unireg_check_arg, const char *field_name_arg)
:ptr(ptr_arg), null_ptr(null_ptr_arg),
table(0), orig_table(0), table_name(0),
- field_name(field_name_arg),
+ field_name(field_name_arg), option_list(0),
+ option_struct(0), new_option_struct(0),
key_start(0), part_of_key(0), part_of_key_not_clustered(0),
part_of_sortkey(0), unireg_check(unireg_check_arg),
field_length(length_arg), null_bit(null_bit_arg),
@@ -9567,7 +9568,8 @@
Item *fld_on_update_value, LEX_STRING *fld_comment,
char *fld_change, List<String> *fld_interval_list,
CHARSET_INFO *fld_charset, uint fld_geom_type,
- Virtual_column_info *fld_vcol_info)
+ Virtual_column_info *fld_vcol_info,
+ engine_option_value *create_opt)
{
uint sign_len, allowed_type_modifier= 0;
ulong max_field_charlength= MAX_FIELD_CHARLENGTH;
@@ -9578,6 +9580,7 @@
field_name= fld_name;
def= fld_default_value;
flags= fld_type_modifier;
+ option_list= create_opt;
unireg_check= (fld_type_modifier & AUTO_INCREMENT_FLAG ?
Field::NEXT_NUMBER : Field::NONE);
decimals= fld_decimals ? (uint)atoi(fld_decimals) : 0;
@@ -10217,6 +10220,8 @@
decimals= old_field->decimals();
vcol_info= old_field->vcol_info;
stored_in_db= old_field->stored_in_db;
+ option_list= old_field->option_list;
+ option_struct= NULL;
/* Fix if the original table had 4 byte pointer blobs */
if (flags & BLOB_FLAG)
@@ -10291,6 +10296,22 @@
/**
+ Makes a clone of this object for ALTER/CREATE TABLE
+
+  @note: We need to clone the list because in ALTER TABLE
+  we may change the list of the cloned field
+
+ @param mem_root MEM_ROOT where to clone the field
+*/
+
+Create_field *Create_field::clone(MEM_ROOT *mem_root) const
+{
+ Create_field *res= new (mem_root) Create_field(*this);
+ return res;
+}
+
+
+/**
maximum possible display length for blob.
@return
=== modified file 'sql/field.h'
--- a/sql/field.h 2010-03-15 11:51:23 +0000
+++ b/sql/field.h 2010-03-26 19:11:33 +0000
@@ -137,6 +137,14 @@
struct st_table *table; // Pointer for table
struct st_table *orig_table; // Pointer to original table
const char **table_name, *field_name;
+ /** reference to the list of options or NULL */
+ engine_option_value *option_list;
+ void *option_struct; /* structure with parsed options */
+ /**
+ structure with parsed new field parameters in ALTER TABLE for
+ check_if_incompatible_data()
+ */
+ void *new_option_struct;
LEX_STRING comment;
/* Field is part of the following keys */
key_map key_start, part_of_key, part_of_key_not_clustered;
@@ -2145,6 +2153,9 @@
CHARSET_INFO *charset;
Field::geometry_type geom_type;
Field *field; // For alter table
+ engine_option_value *option_list;
+ /** structure with parsed options (for comparing fields in ALTER TABLE) */
+ void *option_struct;
uint8 row,col,sc_length,interval_id; // For rea_create_table
uint offset,pack_flag;
@@ -2162,11 +2173,11 @@
*/
bool stored_in_db;
- Create_field() :after(0) {}
+ Create_field() :after(0), option_list(NULL), option_struct(NULL)
+ {}
Create_field(Field *field, Field *orig_field);
/* Used to make a clone of this object for ALTER/CREATE TABLE */
- Create_field *clone(MEM_ROOT *mem_root) const
- { return new (mem_root) Create_field(*this); }
+ Create_field *clone(MEM_ROOT *mem_root) const;
void create_length_to_internal_length(void);
/* Init for a tmp table field. To be extended if need be. */
@@ -2178,8 +2189,8 @@
char *decimals, uint type_modifier, Item *default_value,
Item *on_update_value, LEX_STRING *comment, char *change,
List<String> *interval_list, CHARSET_INFO *cs,
- uint uint_geom_type,
- Virtual_column_info *vcol_info);
+ uint uint_geom_type, Virtual_column_info *vcol_info,
+ engine_option_value *option_list);
bool field_flags_are_binary()
{
=== modified file 'sql/ha_partition.cc'
--- a/sql/ha_partition.cc 2010-03-15 11:51:23 +0000
+++ b/sql/ha_partition.cc 2010-03-26 19:11:33 +0000
@@ -1218,7 +1218,9 @@
DBUG_ENTER("prepare_new_partition");
if ((error= set_up_table_before_create(tbl, part_name, create_info,
- 0, p_elem)))
+ 0, p_elem)) ||
+ parse_engine_table_options(ha_thd(), file->ht,
+ file->table_share))
goto error_create;
if ((error= file->ha_create(part_name, tbl, create_info)))
{
@@ -1869,6 +1871,8 @@
{
if ((error= set_up_table_before_create(table_arg, from_buff,
create_info, i, NULL)) ||
+ parse_engine_table_options(ha_thd(), (*file)->ht,
+ (*file)->table_share) ||
((error= (*file)->ha_create(from_buff, table_arg, create_info))))
goto create_error;
}
=== modified file 'sql/handler.cc'
--- a/sql/handler.cc 2010-03-15 11:51:23 +0000
+++ b/sql/handler.cc 2010-03-26 19:11:33 +0000
@@ -3716,7 +3716,12 @@
name= get_canonical_filename(table.file, share.path.str, name_buff);
+ if (parse_engine_table_options(thd, table.file->ht, &share))
+ goto err;
+
error= table.file->ha_create(name, &table, create_info);
+
+
VOID(closefrm(&table, 0));
if (error)
{
=== modified file 'sql/handler.h'
--- a/sql/handler.h 2010-02-01 06:14:12 +0000
+++ b/sql/handler.h 2010-03-26 19:11:33 +0000
@@ -16,6 +16,9 @@
/* Definitions for parameters to do with handler-routines */
+#ifndef _HANDLER_H
+#define _HANDLER_H
+
#ifdef USE_PRAGMA_INTERFACE
#pragma interface /* gcc class implementation */
#endif
@@ -23,6 +26,7 @@
#include <my_handler.h>
#include <ft_global.h>
#include <keycache.h>
+#include "create_options.h"
#ifndef NO_HASH
#define NO_HASH /* Not yet implemented */
@@ -516,6 +520,7 @@
struct st_table;
typedef struct st_table TABLE;
typedef struct st_table_share TABLE_SHARE;
+class engine_option;
struct st_foreign_key_info;
typedef struct st_foreign_key_info FOREIGN_KEY_INFO;
typedef bool (stat_print_fn)(THD *thd, const char *type, uint type_len,
@@ -549,6 +554,71 @@
enum log_status status;
};
+enum ha_option_type { HA_OPTION_TYPE_ULL, /* unsigned long long */
+ HA_OPTION_TYPE_STRING, /* char * */
+ HA_OPTION_TYPE_ENUM, /* uint */
+ HA_OPTION_TYPE_BOOL}; /* uint */
+
+#define HA_xOPTION_ULL(name, struc, field, def, min, max, blk_siz) \
+ { HA_OPTION_TYPE_ULL, name, sizeof(name)-1, \
+ offsetof(struc, field), def, min, max, blk_siz, 0 }
+#define HA_xOPTION_STRING(name, struc, field) \
+ { HA_OPTION_TYPE_STRING, name, sizeof(name)-1, \
+ offsetof(struc, field), 0, 0, 0, 0, 0 }
+#define HA_xOPTION_ENUM(name, struc, field, values, def) \
+ { HA_OPTION_TYPE_ENUM, name, sizeof(name)-1, \
+ offsetof(struc, field), def, 0, \
+ sizeof(values)-1, 0, values }
+#define HA_xOPTION_BOOL(name, struc, field, def) \
+ { HA_OPTION_TYPE_BOOL, name, sizeof(name)-1, \
+ offsetof(struc, field), def, 0, 1, 0, 0 }
+#define HA_xOPTION_END { HA_OPTION_TYPE_ULL, 0, 0, 0, 0, 0, 0, 0, 0 }
+
+#define HA_TOPTION_ULL(name, field, def, min, max, blk_siz) \
+ HA_xOPTION_ULL(name, ha_table_option_struct, field, def, min, max, blk_siz)
+#define HA_TOPTION_STRING(name, field) \
+ HA_xOPTION_STRING(name, ha_table_option_struct, field)
+#define HA_TOPTION_ENUM(name, field, values, def) \
+ HA_xOPTION_ENUM(name, ha_table_option_struct, field, values, def)
+#define HA_TOPTION_BOOL(name, field, def) \
+ HA_xOPTION_BOOL(name, ha_table_option_struct, field, def)
+#define HA_TOPTION_END HA_xOPTION_END
+
+#define HA_FOPTION_ULL(name, field, def, min, max, blk_siz) \
+ HA_xOPTION_ULL(name, ha_field_option_struct, field, def, min, max, blk_siz)
+#define HA_FOPTION_STRING(name, field) \
+ HA_xOPTION_STRING(name, ha_field_option_struct, field)
+#define HA_FOPTION_ENUM(name, field, values, def) \
+ HA_xOPTION_ENUM(name, ha_field_option_struct, field, values, def)
+#define HA_FOPTION_BOOL(name, field, def) \
+ HA_xOPTION_BOOL(name, ha_field_option_struct, field, def)
+#define HA_FOPTION_END HA_xOPTION_END
+
+#define HA_KOPTION_ULL(name, field, def, min, max, blk_siz) \
+ HA_xOPTION_ULL(name, ha_key_option_struct, field, def, min, max, blk_siz)
+#define HA_KOPTION_STRING(name, field) \
+ HA_xOPTION_STRING(name, ha_key_option_struct, field)
+#define HA_KOPTION_ENUM(name, field, values, def) \
+ HA_xOPTION_ENUM(name, ha_key_option_struct, field, values, def)
+#define HA_KOPTION_BOOL(name, field, values, def) \
+ HA_xOPTION_BOOL(name, ha_key_option_struct, field, values, def)
+#define HA_KOPTION_END HA_xOPTION_END
+
+typedef struct st_ha_create_table_option {
+ enum ha_option_type type;
+ const char *name;
+ size_t name_length;
+ ptrdiff_t offset;
+ ulonglong def_value;
+ ulonglong min_value, max_value, block_size;
+ const char *values;
+} ha_create_table_option;
+
+typedef struct st_ha_create_table_option_rules {
+ ha_create_table_option *table,
+ *field,
+ *key;
+} ha_create_table_option_rules;
enum handler_iterator_type
{
@@ -721,7 +791,7 @@
int (*table_exists_in_engine)(handlerton *hton, THD* thd, const char *db,
const char *name);
uint32 license; /* Flag for Engine License */
- void *data; /* Location for engines to keep personal structures */
+ ha_create_table_option_rules *table_options_rules;
};
@@ -950,6 +1020,10 @@
bool varchar; /* 1 if table has a VARCHAR */
enum ha_storage_media storage_media; /* DEFAULT, DISK or MEMORY */
enum ha_choice page_checksum; /* If we have page_checksums */
+ engine_option_value *option_list; /* list of table create options */
+ engine_option_value *option_list_last;
+  /** structure with the parsed table options (also used for comparison in ALTER TABLE) */
+ void *option_struct;
} HA_CREATE_INFO;
@@ -2241,3 +2315,5 @@
#define ha_binlog_wait(a) do {} while (0)
#define ha_binlog_end(a) do {} while (0)
#endif
+
+#endif
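
The macros above let an engine declare its options declaratively and hand them to the server through the new handlerton::table_options_rules pointer (which replaces the void *data slot). A hedged sketch of what such a declaration could look like; the struct members and option names are illustrative, not taken from the patch, and note that HA_TOPTION_* hard-wires the struct name ha_table_option_struct through offsetof():

struct ha_table_option_struct
{
  ulonglong ull_param;                    /* ULL option */
  char *str_param;                        /* STRING option */
  uint one_or_two;                        /* ENUM: index into the value list */
  uint yesno;                             /* BOOL: 0 or 1 */
};

static ha_create_table_option example_table_option_list[]=
{
  HA_TOPTION_ULL("UUL", ull_param, UINT_MAX32, 0, UINT_MAX32, 1),
  HA_TOPTION_STRING("STR", str_param),
  HA_TOPTION_ENUM("one_or_two", one_or_two, "one,two", 0),
  HA_TOPTION_BOOL("YESNO", yesno, 1),
  HA_TOPTION_END
};

static ha_create_table_option_rules example_option_rules=
{
  example_table_option_list,              /* table-level options */
  NULL,                                   /* field-level options */
  NULL                                    /* key-level options */
};

/* and in the engine's init function (illustrative):
     hton->table_options_rules= &example_option_rules;  */
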
=== modified file 'sql/log_event.h'
--- a/sql/log_event.h 2010-03-15 11:51:23 +0000
+++ b/sql/log_event.h 2010-03-26 19:11:33 +0000
@@ -1371,7 +1371,7 @@
MODE_PIPES_AS_CONCAT==0x2
MODE_ANSI_QUOTES==0x4
MODE_IGNORE_SPACE==0x8
- MODE_NOT_USED==0x10
+ MODE_CREATE_OPTIONS_ERR==0x10
MODE_ONLY_FULL_GROUP_BY==0x20
MODE_NO_UNSIGNED_SUBTRACTION==0x40
MODE_NO_DIR_IN_CREATE==0x80
=== modified file 'sql/mysql_priv.h'
--- a/sql/mysql_priv.h 2010-03-15 11:51:23 +0000
+++ b/sql/mysql_priv.h 2010-03-26 19:11:33 +0000
@@ -54,7 +54,6 @@
#include "sql_plugin.h"
#include "scheduler.h"
#include "log_slow.h"
-
class Parser_state;
/**
@@ -520,7 +519,7 @@
#define MODE_PIPES_AS_CONCAT 2
#define MODE_ANSI_QUOTES 4
#define MODE_IGNORE_SPACE 8
-#define MODE_NOT_USED 16
+#define MODE_CREATE_OPTIONS_ERR 16
#define MODE_ONLY_FULL_GROUP_BY 32
#define MODE_NO_UNSIGNED_SUBTRACTION 64
#define MODE_NO_DIR_IN_CREATE 128
@@ -783,6 +782,7 @@
ulonglong *engine_data);
#include "sql_string.h"
#include "sql_list.h"
+#include "create_options.h"
#include "sql_map.h"
#include "my_decimal.h"
#include "handler.h"
@@ -1508,7 +1508,8 @@
char *change, List<String> *interval_list,
CHARSET_INFO *cs,
uint uint_geom_type,
- Virtual_column_info *vcol_info);
+ Virtual_column_info *vcol_info,
+ engine_option_value *create_options);
Create_field * new_create_field(THD *thd, char *field_name, enum_field_types type,
char *length, char *decimals,
uint type_modifier,
=== modified file 'sql/mysqld.cc'
--- a/sql/mysqld.cc 2010-03-15 11:51:23 +0000
+++ b/sql/mysqld.cc 2010-03-26 19:11:33 +0000
@@ -243,7 +243,7 @@
static const char *sql_mode_names[]=
{
"REAL_AS_FLOAT", "PIPES_AS_CONCAT", "ANSI_QUOTES", "IGNORE_SPACE",
- "?", "ONLY_FULL_GROUP_BY", "NO_UNSIGNED_SUBTRACTION",
+ "CREATE_OPTIONS_ERR", "ONLY_FULL_GROUP_BY", "NO_UNSIGNED_SUBTRACTION",
"NO_DIR_IN_CREATE",
"POSTGRESQL", "ORACLE", "MSSQL", "DB2", "MAXDB", "NO_KEY_OPTIONS",
"NO_TABLE_OPTIONS", "NO_FIELD_OPTIONS", "MYSQL323", "MYSQL40", "ANSI",
@@ -263,7 +263,7 @@
/*PIPES_AS_CONCAT*/ 15,
/*ANSI_QUOTES*/ 11,
/*IGNORE_SPACE*/ 12,
- /*?*/ 1,
+ /*CREATE_OPTIONS_ERR*/ 18,
/*ONLY_FULL_GROUP_BY*/ 18,
/*NO_UNSIGNED_SUBTRACTION*/ 23,
/*NO_DIR_IN_CREATE*/ 16,
=== modified file 'sql/share/errmsg.txt'
--- a/sql/share/errmsg.txt 2010-03-15 11:51:23 +0000
+++ b/sql/share/errmsg.txt 2010-03-26 19:11:33 +0000
@@ -6240,3 +6240,8 @@
ER_DEBUG_SYNC_HIT_LIMIT
eng "debug sync point hit limit reached"
ger "Debug Sync Point Hit Limit erreicht"
+
+ER_UNKNOWN_OPTION
+ eng "Unknown option '%-.64s'='%-.64s'"
+ER_BAD_OPTION_VALUE
+ eng "Incorrect option value '%-.64s'='%-.64s'"
=== modified file 'sql/sp.cc'
--- a/sql/sp.cc 2010-03-15 11:51:23 +0000
+++ b/sql/sp.cc 2010-03-26 19:11:33 +0000
@@ -147,7 +147,8 @@
{
{ C_STRING_WITH_LEN("sql_mode") },
{ C_STRING_WITH_LEN("set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES',"
- "'IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION',"
+ "'IGNORE_SPACE','CREATE_OPTIONS_ERR','ONLY_FULL_GROUP_BY',"
+ "'NO_UNSIGNED_SUBTRACTION',"
"'NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB',"
"'NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40',"
"'ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES',"
=== modified file 'sql/sp_head.cc'
--- a/sql/sp_head.cc 2010-03-15 11:51:23 +0000
+++ b/sql/sp_head.cc 2010-03-26 19:11:33 +0000
@@ -2216,7 +2216,7 @@
lex->charset ? lex->charset :
thd->variables.collation_database,
lex->uint_geom_type,
- lex->vcol_info))
+ lex->vcol_info, lex->option_list))
return TRUE;
if (field_def->interval_list.elements)
=== modified file 'sql/sql_class.cc'
--- a/sql/sql_class.cc 2010-03-16 12:38:35 +0000
+++ b/sql/sql_class.cc 2010-03-26 19:11:33 +0000
@@ -106,6 +106,7 @@
key_create_info(rhs.key_create_info),
columns(rhs.columns, mem_root),
name(rhs.name),
+ option_list(rhs.option_list),
generated(rhs.generated)
{
list_copy_and_replace_each_value(columns, mem_root);
@@ -775,6 +776,7 @@
void THD::push_internal_handler(Internal_error_handler *handler)
{
+ DBUG_ENTER("THD::push_internal_handler");
if (m_internal_handler)
{
handler->m_prev_internal_handler= m_internal_handler;
@@ -784,6 +786,7 @@
{
m_internal_handler= handler;
}
+ DBUG_VOID_RETURN;
}
@@ -803,8 +806,10 @@
void THD::pop_internal_handler()
{
+ DBUG_ENTER("THD::pop_internal_handler");
DBUG_ASSERT(m_internal_handler != NULL);
m_internal_handler= m_internal_handler->m_prev_internal_handler;
+ DBUG_VOID_RETURN;
}
extern "C"
=== modified file 'sql/sql_class.h'
--- a/sql/sql_class.h 2010-03-15 11:51:23 +0000
+++ b/sql/sql_class.h 2010-03-26 19:11:33 +0000
@@ -204,13 +204,15 @@
KEY_CREATE_INFO key_create_info;
List<Key_part_spec> columns;
const char *name;
+ engine_option_value *option_list;
bool generated;
Key(enum Keytype type_par, const char *name_arg,
KEY_CREATE_INFO *key_info_arg,
- bool generated_arg, List<Key_part_spec> &cols)
+ bool generated_arg, List<Key_part_spec> &cols,
+ engine_option_value *create_opt)
:type(type_par), key_create_info(*key_info_arg), columns(cols),
- name(name_arg), generated(generated_arg)
+ name(name_arg), option_list(create_opt), generated(generated_arg)
{}
Key(const Key &rhs, MEM_ROOT *mem_root);
virtual ~Key() {}
@@ -239,7 +241,7 @@
Foreign_key(const char *name_arg, List<Key_part_spec> &cols,
Table_ident *table, List<Key_part_spec> &ref_cols,
uint delete_opt_arg, uint update_opt_arg, uint match_opt_arg)
- :Key(FOREIGN_KEY, name_arg, &default_key_create_info, 0, cols),
+ :Key(FOREIGN_KEY, name_arg, &default_key_create_info, 0, cols, NULL),
ref_table(table), ref_columns(ref_cols),
delete_opt(delete_opt_arg), update_opt(update_opt_arg),
match_opt(match_opt_arg)
=== modified file 'sql/sql_lex.h'
--- a/sql/sql_lex.h 2010-03-15 11:51:23 +0000
+++ b/sql/sql_lex.h 2010-03-26 19:11:33 +0000
@@ -869,6 +869,7 @@
#define ALTER_ALL_PARTITION (1L << 21)
#define ALTER_REMOVE_PARTITIONING (1L << 22)
#define ALTER_FOREIGN_KEY (1L << 23)
+#define ALTER_CREATE_OPT (1L << 24)
enum enum_alter_table_change_level
{
@@ -1747,6 +1748,11 @@
const char *stmt_definition_end;
/**
+ Collects create options for Field and KEY
+ */
+ engine_option_value *option_list, *option_list_last;
+
+ /**
During name resolution search only in the table list given by
Name_resolution_context::first_name_resolution_table and
Name_resolution_context::last_name_resolution_table
=== modified file 'sql/sql_parse.cc'
--- a/sql/sql_parse.cc 2010-03-16 12:38:35 +0000
+++ b/sql/sql_parse.cc 2010-03-26 19:11:33 +0000
@@ -6155,7 +6155,8 @@
char *change,
List<String> *interval_list, CHARSET_INFO *cs,
uint uint_geom_type,
- Virtual_column_info *vcol_info)
+ Virtual_column_info *vcol_info,
+ engine_option_value *create_options)
{
register Create_field *new_field;
LEX *lex= thd->lex;
@@ -6173,7 +6174,7 @@
lex->col_list.push_back(new Key_part_spec(field_name->str, 0));
key= new Key(Key::PRIMARY, NullS,
&default_key_create_info,
- 0, lex->col_list);
+ 0, lex->col_list, NULL);
lex->alter_info.key_list.push_back(key);
lex->col_list.empty();
}
@@ -6183,7 +6184,7 @@
lex->col_list.push_back(new Key_part_spec(field_name->str, 0));
key= new Key(Key::UNIQUE, NullS,
&default_key_create_info, 0,
- lex->col_list);
+ lex->col_list, NULL);
lex->alter_info.key_list.push_back(key);
lex->col_list.empty();
}
@@ -6241,7 +6242,8 @@
if (!(new_field= new Create_field()) ||
new_field->init(thd, field_name->str, type, length, decimals, type_modifier,
default_value, on_update_value, comment, change,
- interval_list, cs, uint_geom_type, vcol_info))
+ interval_list, cs, uint_geom_type, vcol_info,
+ create_options))
DBUG_RETURN(1);
lex->alter_info.create_list.push_back(new_field);
=== modified file 'sql/sql_show.cc'
--- a/sql/sql_show.cc 2010-03-15 11:51:23 +0000
+++ b/sql/sql_show.cc 2010-03-26 19:11:33 +0000
@@ -83,6 +83,11 @@
static void
append_algorithm(TABLE_LIST *table, String *buff);
+static void
+append_quoted(THD *thd, String *packet, const char *name, uint length,
+ int q);
+static int get_quote_char_for_option(THD *thd, const char *name, uint length);
+
static COND * make_cond_for_info_schema(COND *cond, TABLE_LIST *table);
/***************************************************************************
@@ -951,6 +956,30 @@
DBUG_RETURN(0);
}
+
+/**
+  Check that every character in a string is a digit, i.e. that the string
+  is an unsigned number.
+
+  @param name attribute name
+  @param name_length length of name
+
+  @retval # Pointer to the first non-digit character
+  @retval 0 All characters are digits
+*/
+
+static const char *is_unsigned_number(const char *name, uint name_length)
+{
+ const char *end= name + name_length;
+
+ for (; name < end ; name++)
+ {
+ uchar chr= (uchar) *name;
+ if (chr < '0' || chr > '9')
+ return name;
+ }
+ return 0;
+}
+
/*
Go through all character combinations and ensure that sql_lex.cc can
parse it as an identifier.
@@ -1001,19 +1030,26 @@
void
append_identifier(THD *thd, String *packet, const char *name, uint length)
{
+ int q= get_quote_char_for_identifier(thd, name, length);
+
+ append_quoted(thd, packet, name, length, q);
+}
+
+static void
+append_quoted(THD *thd, String *packet, const char *name, uint length,
+ int q)
+{
+ char quote_char;
const char *name_end;
- char quote_char;
- int q= get_quote_char_for_identifier(thd, name, length);
if (q == EOF)
{
packet->append(name, length, packet->charset());
return;
}
-
/*
The identifier must be quoted as it includes a quote character or
- it's a keyword
+ it's a keyword
*/
VOID(packet->reserve(length*2 + 2));
@@ -1076,6 +1112,27 @@
return '`';
}
+/**
+ Gets the quote character for displaying an option key.
+
+ @param thd Thread handler
+ @param name name to quote
+ @param length length of name
+
+ @retval EOF No quote character is needed
+ @retval # Quote character
+*/
+
+static int get_quote_char_for_option(THD *thd, const char *name, uint length)
+{
+ if (length &&
+ !require_quotes(name, length))
+ return EOF;
+ if (thd->variables.sql_mode & MODE_ANSI_QUOTES)
+ return '"';
+ return '`';
+}
+
/* Append directory name (if exists) to CREATE INFO */
@@ -1173,6 +1230,35 @@
return has_default;
}
+
+/**
+ Appends list of options to string
+
+ @param thd thread handler
+ @param packet string to append
+ @param opt list of options
+*/
+
+static void append_create_options(THD *thd, String *packet,
+ engine_option_value *opt)
+{
+ for(; opt; opt= opt->next)
+ {
+ packet->append(' ');
+ {
+ int q= get_quote_char_for_option(thd, opt->name.str, opt->name.length);
+
+ append_quoted(thd, packet, opt->name.str, opt->name.length, q);
+ }
+ packet->append('=');
+ if (opt->value.length < 21 &&
+ is_unsigned_number(opt->value.str, opt->value.length) == NULL)
+ packet->append(opt->value.str, opt->value.length);
+ else
+ append_unescaped(packet, opt->value.str, opt->value.length);
+ }
+}
+
/*
Build a CREATE TABLE statement for a table.
@@ -1355,6 +1441,8 @@
packet->append(STRING_WITH_LEN(" COMMENT "));
append_unescaped(packet, field->comment.str, field->comment.length);
}
+ if (field->option_list)
+ append_create_options(thd, packet, field->option_list);
}
key_info= table->key_info;
@@ -1426,6 +1514,8 @@
append_identifier(thd, packet, parser_name->str, parser_name->length);
packet->append(STRING_WITH_LEN(" */ "));
}
+ if (key_info->option_list)
+ append_create_options(thd, packet, key_info->option_list);
}
/*
@@ -1585,6 +1675,10 @@
packet->append(STRING_WITH_LEN(" CONNECTION="));
append_unescaped(packet, share->connect_string.str, share->connect_string.length);
}
+ /* create_table_options can be NULL for temporary tables */
+ if (share->option_list)
+ append_create_options(thd, packet,
+ share->option_list);
append_directory(thd, packet, "DATA", create_info.data_file_name);
append_directory(thd, packet, "INDEX", create_info.index_file_name);
}
=== modified file 'sql/sql_table.cc'
--- a/sql/sql_table.cc 2010-03-15 11:51:23 +0000
+++ b/sql/sql_table.cc 2010-03-26 19:11:33 +0000
@@ -2562,6 +2562,7 @@
ulong record_offset= 0;
KEY *key_info;
KEY_PART_INFO *key_part_info;
+ ha_create_table_option_rules *rules, fake_empty={NULL,NULL,NULL};
int timestamps= 0, timestamps_with_niladic= 0;
int field_no,dup_no;
int select_field_pos,auto_increment=0;
@@ -2570,6 +2571,10 @@
uint total_uneven_bit_length= 0;
DBUG_ENTER("mysql_prepare_create_table");
+ rules= (create_info->db_type->table_options_rules ?
+ create_info->db_type->table_options_rules:
+ &fake_empty);
+
select_field_pos= alter_info->create_list.elements - select_field_count;
null_fields=blob_columns=0;
create_info->varchar= 0;
@@ -2863,6 +2868,11 @@
sql_field->offset= record_offset;
if (MTYP_TYPENR(sql_field->unireg_check) == Field::NEXT_NUMBER)
auto_increment++;
+ if (parse_option_list(thd, &sql_field->option_struct,
+ sql_field->option_list,
+ rules->field, FALSE,
+ thd->mem_root))
+ DBUG_RETURN(TRUE);
/*
For now skip fields that are not physically stored in the database
(virtual fields) and update their offset later
@@ -3061,6 +3071,12 @@
key_info->key_part=key_part_info;
key_info->usable_key_parts= key_number;
key_info->algorithm= key->key_create_info.algorithm;
+ key_info->option_list= key->option_list;
+ if (parse_option_list(thd, &key_info->option_struct,
+ key_info->option_list,
+ rules->key, FALSE,
+ thd->mem_root))
+ DBUG_RETURN(TRUE);
if (key->type == Key::FULLTEXT)
{
@@ -3438,6 +3454,12 @@
}
}
+ if (parse_option_list(thd, &create_info->option_struct,
+ create_info->option_list,
+ rules->table, FALSE,
+ thd->mem_root))
+ DBUG_RETURN(TRUE);
+
DBUG_RETURN(FALSE);
}
@@ -5756,7 +5778,8 @@
create_info->used_fields & HA_CREATE_USED_TRANSACTIONAL ||
create_info->used_fields & HA_CREATE_USED_PACK_KEYS ||
create_info->used_fields & HA_CREATE_USED_MAX_ROWS ||
- (alter_info->flags & (ALTER_RECREATE | ALTER_FOREIGN_KEY)) ||
+ (alter_info->flags & (ALTER_RECREATE | ALTER_FOREIGN_KEY |
+ ALTER_CREATE_OPT)) ||
order_num ||
!table->s->mysql_version ||
(table->s->frm_version < FRM_VER_TRUE_VARCHAR && varchar))
@@ -5809,6 +5832,9 @@
DBUG_RETURN(0);
}
+ /* to allow check_if_incompatible_data compare */
+ field->new_option_struct= tmp_new_field->option_struct;
+
/* Don't pack rows in old tables if the user has requested this. */
if (create_info->row_type == ROW_TYPE_DYNAMIC ||
(tmp_new_field->flags & BLOB_FLAG) ||
@@ -5956,6 +5982,8 @@
}
DBUG_PRINT("info", ("index added: '%s'", new_key->name));
}
+ else
+ table_key->new_option_struct= new_key->option_struct;
}
/* Check if changes are compatible with current handler without a copy */
@@ -6132,6 +6160,16 @@
}
restore_record(table, s->default_values); // Empty record for DEFAULT
+ if (create_info->option_list)
+ {
+ create_info->option_list=
+ merge_engine_table_options(table->s->option_list,
+ create_info->option_list,
+ thd->mem_root);
+ }
+ else
+ create_info->option_list= table->s->option_list;
+
/*
First collect all fields from table which isn't in drop_list
*/
@@ -6384,7 +6422,7 @@
key= new Key(key_type, key_name,
&key_create_info,
test(key_info->flags & HA_GENERATED_KEY),
- key_parts);
+ key_parts, key_info->option_list);
new_key_list.push_back(key);
}
}
=== modified file 'sql/sql_view.cc'
--- a/sql/sql_view.cc 2010-03-04 08:03:07 +0000
+++ b/sql/sql_view.cc 2010-03-26 19:11:33 +0000
@@ -1183,7 +1183,7 @@
+ MODE_PIPES_AS_CONCAT affect expression parsing
+ MODE_ANSI_QUOTES affect expression parsing
+ MODE_IGNORE_SPACE affect expression parsing
- - MODE_NOT_USED not used :)
+ - MODE_CREATE_OPTIONS_ERR affect only CREATE/ALTER TABLE parsing
* MODE_ONLY_FULL_GROUP_BY affect execution
* MODE_NO_UNSIGNED_SUBTRACTION affect execution
- MODE_NO_DIR_IN_CREATE affect table creation only
=== modified file 'sql/sql_yacc.yy'
--- a/sql/sql_yacc.yy 2010-03-15 11:51:23 +0000
+++ b/sql/sql_yacc.yy 2010-03-26 19:11:33 +0000
@@ -607,6 +607,7 @@
lex->alter_info.flags= ALTER_ADD_INDEX;
lex->col_list.empty();
lex->change= NullS;
+ lex->option_list= lex->option_list_last= NULL;
return FALSE;
}
@@ -616,7 +617,7 @@
{
Key *key;
key= new Key(type, name, info ? info : &lex->key_create_info, generated,
- lex->col_list);
+ lex->col_list, lex->option_list);
if (key == NULL)
return TRUE;
@@ -1858,6 +1859,8 @@
lex->create_info.default_table_charset= NULL;
lex->name.str= 0;
lex->name.length= 0;
+ lex->create_info.option_list=
+ lex->create_info.option_list_last= NULL;
}
create2
{
@@ -2340,6 +2343,7 @@
lex->interval_list.empty();
lex->uint_geom_type= 0;
+ lex->option_list= lex->option_list_last= NULL;
}
;
@@ -4748,6 +4752,43 @@
Lex->create_info.used_fields|= HA_CREATE_USED_TRANSACTIONAL;
Lex->create_info.transactional= $3;
}
+ | IDENT_sys equal TEXT_STRING_sys
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ $3.str, $3.length,
+ &Lex->create_info.option_list,
+ &Lex->create_info.option_list_last);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
+ | IDENT_sys equal ident
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ $3.str, $3.length,
+ &Lex->create_info.option_list,
+ &Lex->create_info.option_list_last);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
+ | IDENT_sys equal ulonglong_num
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ $3,
+ &Lex->create_info.option_list,
+ &Lex->create_info.option_list_last,
+ YYTHD->mem_root);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
+ | IDENT_sys equal DEFAULT
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ NULL, 0,
+ &Lex->create_info.option_list,
+ &Lex->create_info.option_list_last);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
;
default_charset:
@@ -4869,25 +4910,33 @@
;
key_def:
- normal_key_type opt_ident key_alg '(' key_list ')' normal_key_options
+ normal_key_type opt_ident key_alg '(' key_list ')'
+ { Lex->option_list= Lex->option_list_last= NULL; }
+ normal_key_options
{
if (add_create_index (Lex, $1, $2))
MYSQL_YYABORT;
}
| fulltext opt_key_or_index opt_ident init_key_options
- '(' key_list ')' fulltext_key_options
+ '(' key_list ')'
+ { Lex->option_list= Lex->option_list_last= NULL; }
+ fulltext_key_options
{
if (add_create_index (Lex, $1, $3))
MYSQL_YYABORT;
}
| spatial opt_key_or_index opt_ident init_key_options
- '(' key_list ')' spatial_key_options
+ '(' key_list ')'
+ { Lex->option_list= Lex->option_list_last= NULL; }
+ spatial_key_options
{
if (add_create_index (Lex, $1, $3))
MYSQL_YYABORT;
}
| opt_constraint constraint_key_type opt_ident key_alg
- '(' key_list ')' normal_key_options
+ '(' key_list ')'
+ { Lex->option_list= Lex->option_list_last= NULL; }
+ normal_key_options
{
if (add_create_index (Lex, $2, $3 ? $3 : $1))
MYSQL_YYABORT;
@@ -4950,6 +4999,7 @@
lex->comment=null_lex_str;
lex->charset=NULL;
lex->vcol_info= 0;
+ lex->option_list= lex->option_list_last= NULL;
}
field_def
{
@@ -4960,7 +5010,7 @@
&lex->comment,
lex->change,&lex->interval_list,lex->charset,
lex->uint_geom_type,
- lex->vcol_info))
+ lex->vcol_info, lex->option_list))
MYSQL_YYABORT;
}
;
@@ -5380,6 +5430,43 @@
Lex->charset=$2;
}
}
+ | IDENT_sys equal TEXT_STRING_sys
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ $3.str, $3.length,
+ &Lex->option_list,
+ &Lex->option_list_last);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
+ | IDENT_sys equal ident
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ $3.str, $3.length,
+ &Lex->option_list,
+ &Lex->option_list_last);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
+ | IDENT_sys equal ulonglong_num
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ $3,
+ &Lex->option_list,
+ &Lex->option_list_last,
+ YYTHD->mem_root);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
+ | IDENT_sys equal DEFAULT
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ NULL, 0,
+ &Lex->option_list,
+ &Lex->option_list_last);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
;
now_or_signed_literal:
@@ -5669,6 +5756,43 @@
all_key_opt:
KEY_BLOCK_SIZE opt_equal ulong_num
{ Lex->key_create_info.block_size= $3; }
+ | IDENT_sys equal TEXT_STRING_sys
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ $3.str, $3.length,
+ &Lex->option_list,
+ &Lex->option_list_last);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
+ | IDENT_sys equal ident
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ $3.str, $3.length,
+ &Lex->option_list,
+ &Lex->option_list_last);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
+ | IDENT_sys equal ulonglong_num
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ $3,
+ &Lex->option_list,
+ &Lex->option_list_last,
+ YYTHD->mem_root);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
+ | IDENT_sys equal DEFAULT
+ {
+ new (YYTHD->mem_root)
+ engine_option_value($1,
+ NULL, 0,
+ &Lex->option_list,
+ &Lex->option_list_last);
+ Lex->alter_info.flags|= ALTER_CREATE_OPT;
+ }
;
normal_key_opt:
@@ -6158,6 +6282,7 @@
LEX *lex=Lex;
lex->change= $3.str;
lex->alter_info.flags|= ALTER_CHANGE_COLUMN;
+ lex->option_list= lex->option_list_last= NULL;
}
field_spec opt_place
| MODIFY_SYM opt_column field_ident
@@ -6169,6 +6294,7 @@
lex->charset= NULL;
lex->alter_info.flags|= ALTER_CHANGE_COLUMN;
lex->vcol_info= 0;
+ lex->option_list= lex->option_list_last= NULL;
}
field_def
{
@@ -6180,7 +6306,7 @@
&lex->comment,
$3.str, &lex->interval_list, lex->charset,
lex->uint_geom_type,
- lex->vcol_info))
+ lex->vcol_info, lex->option_list))
MYSQL_YYABORT;
}
opt_place
@@ -6287,8 +6413,7 @@
}
| create_table_options_space_separated
{
- LEX *lex=Lex;
- lex->alter_info.flags|= ALTER_OPTIONS;
+ Lex->alter_info.flags|= ALTER_OPTIONS;
}
| FORCE_SYM
{
@@ -13630,6 +13755,7 @@
lex->interval_list.empty();
lex->type= 0;
lex->vcol_info= 0;
+ lex->option_list= lex->option_list_last= NULL;
}
type /* $11 */
{ /* $12 */
@@ -13880,6 +14006,7 @@
}
;
+
/**
@} (end of group Parser)
*/
=== modified file 'sql/structs.h'
--- a/sql/structs.h 2010-02-01 06:14:12 +0000
+++ b/sql/structs.h 2010-03-26 19:11:33 +0000
@@ -68,6 +68,7 @@
uint8 null_bit; /* Position to null_bit */
} KEY_PART_INFO ;
+class engine_option_value;
typedef struct st_key {
uint key_length; /* Tot length of key */
@@ -101,6 +102,14 @@
int bdb_return_if_eq;
} handler;
struct st_table *table;
+ /** reference to the list of options or NULL */
+ engine_option_value *option_list;
+ void *option_struct; /* structure with parsed options */
+ /**
+ structure with parsed new field parameters in ALTER TABLE for
+ check_if_incompatible_data()
+ */
+ void *new_option_struct;
} KEY;
=== modified file 'sql/table.cc'
--- a/sql/table.cc 2010-03-15 11:51:23 +0000
+++ b/sql/table.cc 2010-03-26 19:11:33 +0000
@@ -667,12 +667,13 @@
uint db_create_options, keys, key_parts, n_length;
uint key_info_length, com_length, null_bit_pos;
uint vcol_screen_length;
- uint extra_rec_buf_length;
+ uint extra_rec_buf_length, options_len;
uint i,j;
bool use_hash;
char *keynames, *names, *comment_pos, *vcol_screen_pos;
uchar *record;
- uchar *disk_buff, *strpos, *null_flags, *null_pos;
+ uchar *disk_buff, *strpos, *null_flags, *null_pos, *options;
+ uchar *buff= 0;
ulong pos, record_offset, *rec_per_key, rec_buff_length;
handler *handler_file= 0;
KEY *keyinfo;
@@ -788,7 +789,6 @@
for (i=0 ; i < keys ; i++, keyinfo++)
{
- keyinfo->table= 0; // Updated in open_frm
if (new_frm_ver >= 3)
{
keyinfo->flags= (uint) uint2korr(strpos) ^ HA_NOSAME;
@@ -858,15 +858,14 @@
if ((n_length= uint4korr(head+55)))
{
/* Read extra data segment */
- uchar *buff, *next_chunk, *buff_end;
+ uchar *next_chunk, *buff_end;
DBUG_PRINT("info", ("extra segment size is %u bytes", n_length));
if (!(next_chunk= buff= (uchar*) my_malloc(n_length, MYF(MY_WME))))
goto err;
if (my_pread(file, buff, n_length, record_offset + share->reclength,
MYF(MY_NABP)))
{
- my_free(buff, MYF(0));
- goto err;
+ goto free_and_err;
}
share->connect_string.length= uint2korr(buff);
if (!(share->connect_string.str= strmake_root(&share->mem_root,
@@ -874,8 +873,7 @@
share->connect_string.
length)))
{
- my_free(buff, MYF(0));
- goto err;
+ goto free_and_err;
}
next_chunk+= share->connect_string.length + 2;
buff_end= buff + n_length;
@@ -895,8 +893,7 @@
plugin_data(tmp_plugin, handlerton *)))
{
/* bad file, legacy_db_type did not match the name */
- my_free(buff, MYF(0));
- goto err;
+ goto free_and_err;
}
/*
tmp_plugin is locked with a local lock.
@@ -925,8 +922,7 @@
error= 8;
my_error(ER_OPTION_PREVENTS_STATEMENT, MYF(0),
"--skip-partition");
- my_free(buff, MYF(0));
- goto err;
+ goto free_and_err;
}
plugin_unlock(NULL, share->db_plugin);
share->db_plugin= ha_lock_engine(NULL, partition_hton);
@@ -940,8 +936,7 @@
/* purecov: begin inspected */
error= 8;
my_error(ER_UNKNOWN_STORAGE_ENGINE, MYF(0), name.str);
- my_free(buff, MYF(0));
- goto err;
+ goto free_and_err;
/* purecov: end */
}
next_chunk+= str_db_type_length + 2;
@@ -957,16 +952,14 @@
memdup_root(&share->mem_root, next_chunk + 4,
partition_info_len + 1)))
{
- my_free(buff, MYF(0));
- goto err;
+ goto free_and_err;
}
}
#else
if (partition_info_len)
{
DBUG_PRINT("info", ("WITH_PARTITION_STORAGE_ENGINE is not defined"));
- my_free(buff, MYF(0));
- goto err;
+ goto free_and_err;
}
#endif
next_chunk+= 5 + partition_info_len;
@@ -992,6 +985,17 @@
#endif
next_chunk++;
}
+ if (share->db_create_options & HA_OPTION_TEXT_CREATE_OPTIONS)
+ {
+ /*
+ store options position, but skip till the time we will
+ know number of fields
+ */
+ options_len= uint4korr(next_chunk);
+ options= next_chunk + 4;
+ next_chunk+= options_len;
+ options_len-= 4;
+ }
keyinfo= share->key_info;
for (i= 0; i < keys; i++, keyinfo++)
{
@@ -1002,8 +1006,7 @@
{
DBUG_PRINT("error",
("fulltext key uses parser that is not defined in .frm"));
- my_free(buff, MYF(0));
- goto err;
+ goto free_and_err;
}
parser_name.str= (char*) next_chunk;
parser_name.length= strlen((char*) next_chunk);
@@ -1013,12 +1016,10 @@
if (! keyinfo->parser)
{
my_error(ER_PLUGIN_IS_NOT_LOADED, MYF(0), parser_name.str);
- my_free(buff, MYF(0));
- goto err;
+ goto free_and_err;
}
}
}
- my_free(buff, MYF(0));
}
share->key_block_size= uint2korr(head+62);
@@ -1028,21 +1029,21 @@
share->rec_buff_length= rec_buff_length;
if (!(record= (uchar *) alloc_root(&share->mem_root,
rec_buff_length)))
- goto err; /* purecov: inspected */
+ goto free_and_err; /* purecov: inspected */
share->default_values= record;
if (my_pread(file, record, (size_t) share->reclength,
record_offset, MYF(MY_NABP)))
- goto err; /* purecov: inspected */
+ goto free_and_err; /* purecov: inspected */
VOID(my_seek(file,pos,MY_SEEK_SET,MYF(0)));
if (my_read(file, head,288,MYF(MY_NABP)))
- goto err;
+ goto free_and_err;
#ifdef HAVE_CRYPTED_FRM
if (crypted)
{
crypted->decode((char*) head+256,288-256);
if (sint2korr(head+284) != 0) // Should be 0
- goto err; // Wrong password
+ goto free_and_err; // Wrong password
}
#endif
@@ -1062,6 +1063,8 @@
share->comment.length);
DBUG_PRINT("info",("i_count: %d i_parts: %d index: %d n_length: %d int_length: %d com_length: %d vcol_screen_length: %d", interval_count,interval_parts, share->keys,n_length,int_length, com_length, vcol_screen_length));
+
+
if (!(field_ptr = (Field **)
alloc_root(&share->mem_root,
(uint) ((share->fields+1)*sizeof(Field*)+
@@ -1070,14 +1073,14 @@
keys+3)*sizeof(char *)+
(n_length+int_length+com_length+
vcol_screen_length)))))
- goto err; /* purecov: inspected */
+ goto free_and_err; /* purecov: inspected */
share->field= field_ptr;
read_length=(uint) (share->fields * field_pack_length +
pos+ (uint) (n_length+int_length+com_length+
vcol_screen_length));
if (read_string(file,(uchar**) &disk_buff,read_length))
- goto err; /* purecov: inspected */
+ goto free_and_err; /* purecov: inspected */
#ifdef HAVE_CRYPTED_FRM
if (crypted)
{
@@ -1104,7 +1107,7 @@
fix_type_pointers(&interval_array, &share->fieldnames, 1, &names);
if (share->fieldnames.count != share->fields)
- goto err;
+ goto free_and_err;
fix_type_pointers(&interval_array, share->intervals, interval_count,
&names);
@@ -1118,7 +1121,7 @@
uint count= (uint) (interval->count + 1) * sizeof(uint);
if (!(interval->type_lengths= (uint *) alloc_root(&share->mem_root,
count)))
- goto err;
+ goto free_and_err;
for (count= 0; count < interval->count; count++)
{
char *val= (char*) interval->type_names[count];
@@ -1134,7 +1137,7 @@
/* Allocate handler */
if (!(handler_file= get_new_handler(share, thd->mem_root,
share->db_type())))
- goto err;
+ goto free_and_err;
record= share->default_values-1; /* Fieldstart = 1 */
if (share->null_field_first)
@@ -1196,7 +1199,7 @@
charset= &my_charset_bin;
#else
error= 4; // unsupported field type
- goto err;
+ goto free_and_err;
#endif
}
else
@@ -1207,7 +1210,7 @@
{
error= 5; // Unknown or unavailable charset
errarg= (int) strpos[14];
- goto err;
+ goto free_and_err;
}
}
@@ -1247,7 +1250,7 @@
if ((uint)vcol_screen_pos[0] != 1)
{
error= 4;
- goto err;
+ goto free_and_err;
}
field_type= (enum_field_types) (uchar) vcol_screen_pos[1];
fld_stored_in_db= (bool) (uint) vcol_screen_pos[2];
@@ -1256,7 +1259,7 @@
(char *)memdup_root(&share->mem_root,
vcol_screen_pos+(uint)FRM_VCOL_HEADER_SIZE,
vcol_expr_length)))
- goto err;
+ goto free_and_err;
vcol_info->expr_str.length= vcol_expr_length;
vcol_screen_pos+= vcol_info_length;
share->vfields++;
@@ -1346,7 +1349,7 @@
if (!reg_field) // Not supported field type
{
error= 4;
- goto err; /* purecov: inspected */
+ goto free_and_err; /* purecov: inspected */
}
reg_field->field_index= i;
@@ -1385,7 +1388,7 @@
sent (OOM).
*/
error= 8;
- goto err;
+ goto free_and_err;
}
}
if (!reg_field->stored_in_db)
@@ -1462,7 +1465,7 @@
if (!key_part->fieldnr)
{
error= 4; // Wrong file
- goto err;
+ goto free_and_err;
}
field= key_part->field= share->field[key_part->fieldnr-1];
key_part->type= field->key_type();
@@ -1627,6 +1630,15 @@
null_length, 255);
}
+ if (share->db_create_options & HA_OPTION_TEXT_CREATE_OPTIONS)
+ {
+ DBUG_ASSERT(options_len);
+ if (engine_table_options_frm_read(options, options_len, share) ||
+ parse_engine_table_options(thd, handler_file->ht, share))
+ goto free_and_err;
+ }
+ my_free(buff, MYF(MY_ALLOW_ZERO_PTR));
+
if (share->found_next_number_field)
{
reg_field= *share->found_next_number_field;
@@ -1685,6 +1697,8 @@
#endif
DBUG_RETURN (0);
+ free_and_err:
+ my_free(buff, MYF(MY_ALLOW_ZERO_PTR));
err:
share->error= error;
share->open_errno= my_errno;
@@ -2883,6 +2897,7 @@
ulong length;
uchar fill[IO_SIZE];
int create_flags= O_RDWR | O_TRUNC;
+ DBUG_ENTER("create_frm");
if (create_info->options & HA_LEX_CREATE_TMP_TABLE)
create_flags|= O_EXCL | O_NOFOLLOW;
@@ -2964,7 +2979,7 @@
{
VOID(my_close(file,MYF(0)));
VOID(my_delete(name,MYF(0)));
- return(-1);
+ DBUG_RETURN(-1);
}
}
}
@@ -2975,7 +2990,7 @@
else
my_error(ER_CANT_CREATE_TABLE,MYF(0),table,my_errno);
}
- return (file);
+ DBUG_RETURN(file);
} /* create_frm */
@@ -2993,7 +3008,7 @@
create_info->table_charset= 0;
create_info->comment= share->comment;
create_info->transactional= share->transactional;
- create_info->page_checksum= share->page_checksum;
+ create_info->option_list= share->option_list;
DBUG_VOID_RETURN;
}
=== modified file 'sql/table.h'
--- a/sql/table.h 2010-02-12 08:47:31 +0000
+++ b/sql/table.h 2010-03-26 19:11:33 +0000
@@ -340,6 +340,8 @@
#ifdef NOT_YET
struct st_table *open_tables; /* link to open tables */
#endif
+ engine_option_value *option_list; /* text options for table */
+ void *option_struct; /* structure with parsed options */
/* The following is copied to each TABLE on OPEN */
Field **field;
=== modified file 'sql/unireg.cc'
--- a/sql/unireg.cc 2010-01-04 17:54:42 +0000
+++ b/sql/unireg.cc 2010-03-26 19:11:33 +0000
@@ -46,6 +46,13 @@
uint reclength, ulong data_offset,
handler *handler);
+uint engine_table_options_frm_length(engine_option_value *table_option_list,
+ List<Create_field> &create_fields,
+ uint keys, KEY *key_info);
+uchar *engine_table_options_frm_image(uchar *buff,
+ engine_option_value *table_option_list,
+ List<Create_field> &create_fields,
+ uint keys, KEY *key_info);
/**
An interceptor to hijack ER_TOO_MANY_FIELDS error from
pack_screens and retry again without UNIREG screens.
@@ -75,6 +82,7 @@
return is_handled;
}
+
/*
Create a frm (table definition) file
@@ -107,6 +115,7 @@
ulong key_buff_length;
File file;
ulong filepos, data_offset;
+ uint options_len= 0;
uchar fileinfo[64],forminfo[288],*keybuff;
TYPELIB formnames;
uchar *screen_buff;
@@ -183,6 +192,17 @@
create_info->extra_size+= key_info[i].parser_name->length + 1;
}
+ {
+ options_len= engine_table_options_frm_length(create_info->option_list,
+ create_fields,
+ keys, key_info);
+ if (options_len)
+ {
+ create_info->table_options|= HA_OPTION_TEXT_CREATE_OPTIONS;
+ create_info->extra_size+= (options_len+= 4);
+ }
+ }
+
if ((file=create_frm(thd, file_name, db, table, reclength, fileinfo,
create_info, keys)) < 0)
{
@@ -294,6 +314,25 @@
if (my_write(file, (uchar*) buff, 6, MYF_RW))
goto err;
}
+
+ if (options_len)
+ {
+ uchar *optbuff= (uchar *)my_malloc(options_len, MYF(0));
+ my_bool error;
+ DBUG_PRINT("info", ("Create options length: %u", options_len));
+ if (!optbuff)
+ goto err;
+ int4store(optbuff, options_len);
+ engine_table_options_frm_image(optbuff + 4,
+ create_info->option_list,
+ create_fields,
+ keys, key_info);
+ error= my_write(file, optbuff, options_len, MYF_RW);
+ my_free(optbuff, MYF(0));
+ if (error)
+ goto err;
+ }
+
for (i= 0; i < keys; i++)
{
if (key_info[i].parser_name)
=== modified file 'storage/example/ha_example.cc'
--- a/storage/example/ha_example.cc 2010-03-03 14:44:14 +0000
+++ b/storage/example/ha_example.cc 2010-03-26 19:11:33 +0000
@@ -113,6 +113,55 @@
/* The mutex used to init the hash; variable for example share methods */
pthread_mutex_t example_mutex;
+
+/**
+ structure for CREATE TABLE options (table options)
+*/
+
+struct example_table_options_struct
+{
+ const char *strparam;
+ ulonglong ullparam;
+ uint enumparam;
+ uint boolparam;
+};
+
+
+/**
+ structure for CREATE TABLE options (field options)
+*/
+
+struct example_field_options_struct
+{
+ const char *compex_param_to_parse_it_in_engine;
+};
+
+#define ha_table_option_struct example_table_options_struct
+ha_create_table_option example_table_option_list[]=
+{
+ HA_TOPTION_ULL("UUL", ullparam, UINT_MAX32, 0, UINT_MAX32, 1),
+ HA_TOPTION_STRING("STR", strparam),
+ HA_TOPTION_ENUM("one_or_two", enumparam, "one,two", 0),
+ HA_TOPTION_BOOL("YESNO", boolparam, 1),
+ HA_TOPTION_END
+};
+
+#define ha_field_option_struct example_field_options_struct
+ha_create_table_option example_field_option_list[]=
+{
+ HA_FOPTION_STRING("COMPLEX", compex_param_to_parse_it_in_engine),
+ HA_FOPTION_END
+};
+
+
+ha_create_table_option_rules example_table_option_list_rules=
+{
+ example_table_option_list,
+ example_field_option_list,
+ NULL
+};
+
+
/**
@brief
Function we use in the creation of our hash to get key.
@@ -138,6 +187,7 @@
example_hton->state= SHOW_OPTION_YES;
example_hton->create= example_create_handler;
example_hton->flags= HTON_CAN_RECREATE;
+ example_hton->table_options_rules= &example_table_option_list_rules;
DBUG_RETURN(0);
}
=== modified file 'storage/pbxt/src/discover_xt.cc'
--- a/storage/pbxt/src/discover_xt.cc 2010-02-01 06:14:12 +0000
+++ b/storage/pbxt/src/discover_xt.cc 2010-03-26 19:11:33 +0000
@@ -1623,7 +1623,7 @@
#endif
NULL /*default_value*/, NULL /*on_update_value*/, &comment, NULL /*change*/,
NULL /*interval_list*/, info->field_charset, 0 /*uint_geom_type*/,
- NULL /*vcol_info*/))
+ NULL /*vcol_info*/, NULL /* create options */))
#endif
goto error;
Hi!
>>>>> "Colin" == Colin Charles <colin(a)askmonty.org> writes:
Colin> Hi!
Colin> I found this:
Colin> http://tokutek.com/products/mysql-patches/
Colin> Its nice to know they mention MariaDB there. But more importantly, any
Colin> thoughts on implementing these patches that MySQL has just dropped the
Colin> ball on?
Colin> http://bugs.mysql.com/bug.php?id=44927 - auto increment counter (serg
Colin> has reviewed this before, fwiw)
Colin> http://bugs.mysql.com/bug.php?id=45458 - support for multiple
Colin> clustering indexes
Colin> http://bugs.mysql.com/bug.php?id=45759 - notify an engine of full scan
Colin> on secondary indexes (serg has also reviewed this before)
This is already in MariaDB.
Colin> Good job here. We implemented:
Colin> http://bugs.mysql.com/bug.php?id=45754 - increasing maximum columns in
Colin> an index from 16 to 32
This one too.
-------
Agree that we should look at adding the first two to MariaDB too,
if they make sense. I did read up on bug 44927 (auto increment
counter), and I'm still not sure whether Serg or Zardosht is right.
When checking the code, it looks like using HA_AUTO_PART_KEY in the
engine is the right way to go. After all, this is just a check of whether
we should allow auto_increment as a secondary part of a key.
If so, there is no patch to add to MariaDB.
Serg, can you please verify this?
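For reference, HA_AUTO_PART_KEY is the handler flag that lets an engine accept an auto_increment column as a non-first key part, something MyISAM already supports:
  CREATE TABLE t1 (
    grp CHAR(10) NOT NULL,
    id INT NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (grp, id)    # MyISAM keeps a separate counter per grp value
  ) ENGINE=MyISAM;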
Regarding http://bugs.mysql.com/bug.php?id=45458, it would be good
to use the new CREATE options we have in 5.2 to store whether a
key is clustered or not.
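Purely as an illustration, with the 5.2 engine-defined options that could look something like this (the option name and value are invented here, not a decided syntax):
  CREATE TABLE t1 (
    a INT PRIMARY KEY,
    b INT,
    KEY k_b (b) clustering='yes'    # hypothetical per-key option stored via the new mechanism
  );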
Serg, would you like to review the patch and see if it can go into 5.2
or 5.3 ?
Regards,
Monty
[Maria-developers] Rev 2751: options for CREATE TABLE (MWL#43) in file:///home/bell/maria/bzr/work-maria-5.2-createoptions2/
by sanja@askmonty.org 26 Mar '10
At file:///home/bell/maria/bzr/work-maria-5.2-createoptions2/
------------------------------------------------------------
revno: 2751
revision-id: sanja(a)askmonty.org-20100326161906-nl78vlyf8h00jzop
parent: sergii(a)pisem.net-20100323092233-t2gwaclx94hd6exa
committer: sanja(a)askmonty.org
branch nick: work-maria-5.2-createoptions2
timestamp: Fri 2010-03-26 18:19:06 +0200
message:
options for CREATE TABLE (MWL#43)
Diff too large for email (2307 lines, the limit is 1000).
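For readers without the diff: the user-visible side of MWL#43 is engine-defined name=value options on tables, fields and keys. Going by the grammar changes and the ha_example declarations in the patch, a use of it looks roughly like this (values invented for illustration):
  CREATE TABLE t1 (
    a INT COMPLEX='parsed by the engine'    # field-level option declared by ha_example
  ) ENGINE=EXAMPLE
    UUL=10000 STR='dskj' one_or_two='two' YESNO=0;    # table-level options declared by ha_example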
[Maria-developers] bzr commit into MariaDB 5.1, with Maria 1.5:maria branch (monty:2825)
by Michael Widenius 26 Mar '10
#At lp:maria based on revid:sergii@pisem.net-20100308170509-gsqfnt3a9rdaxj32
2825 Michael Widenius 2010-03-09
Added count of my_sync calls (to SHOW STATUS)
tmp_table_size can now be set to 0 (to disable in memory internal temp tables)
Improved speed for internal Maria temp tables:
- Don't use packed keys, except with long text fields.
- Don't copy all accessed key pages during key search.
Some new benchmark tests to sql-bench (for group by)
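The first two changes are user visible; a quick way to exercise them (a sketch, using the variable and status names introduced by this patch):
  SET GLOBAL tmp_table_size=0;        # 0 now means: always use on-disk internal temp tables
  SHOW GLOBAL STATUS LIKE 'Syncs';    # new counter of my_sync() calls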
modified:
BUILD/compile-pentium64-gcov
BUILD/compile-pentium64-gprof
include/my_sys.h
mysql-test/r/variables.result
mysys/my_sync.c
sql-bench/test-select.sh
sql/mysqld.cc
sql/sql_select.cc
storage/maria/ma_key_recover.h
storage/maria/ma_page.c
storage/maria/ma_rkey.c
storage/maria/ma_search.c
storage/maria/ma_write.c
storage/maria/maria_def.h
per-file messages:
BUILD/compile-pentium64-gcov
Update script to use same pentium_config flags as other tests
BUILD/compile-pentium64-gprof
Update script to use same pentium_config flags as other tests
include/my_sys.h
Added count of my_sync calls
mysql-test/r/variables.result
tmp_table_size can now be set to 0
sql-bench/test-select.sh
Added some new tests for GROUP BY on a non-key field and GROUP BY with a different ORDER BY
sql/mysqld.cc
Added count of my_sync calls
tmp_table_size can now be set to 0 (to disable in memory internal temp tables)
sql/sql_select.cc
If tmp_table_size is 0, don't use in memory temp tables (good for benchmarking MyISAM/Maria temp tables)
Don't pack keys for Maria tables; The 8K page size makes packed keys too slow for temp tables.
storage/maria/ma_key_recover.h
Moved definition to maria_def.h
storage/maria/ma_page.c
Moved code used to simplify comparing of identical Maria tables to own function (page_cleanup())
Fixed that one can read a page with a read lock.
storage/maria/ma_rkey.c
For non-exact key reads, cache the page where we found the key (to speed up future read-next/read-prev calls)
storage/maria/ma_search.c
Moved code to cache last key page to separate function.
Instead of copying pages, only get a link to the page. This notably speeds up key searches on bigger tables.
storage/maria/ma_write.c
Added comment
storage/maria/maria_def.h
Moved page_cleanup() to separate function.
=== modified file 'BUILD/compile-pentium64-gcov'
--- a/BUILD/compile-pentium64-gcov 2007-08-16 00:10:16 +0000
+++ b/BUILD/compile-pentium64-gcov 2010-03-09 19:22:24 +0000
@@ -9,9 +9,9 @@ export CCACHE_DISABLE
export LDFLAGS="$gcov_link_flags"
-extra_flags="$pentium64_cflags $debug_cflags $max_cflags $gcov_compile_flags"
+extra_flags="$pentium64_cflags $max_cflags $gcov_compile_flags"
c_warnings="$c_warnings $debug_extra_warnings"
cxx_warnings="$cxx_warnings $debug_extra_warnings"
-extra_configs="$pentium64_configs $debug_configs $gcov_configs $max_configs"
+extra_configs="$pentium_configs $debug_configs $gcov_configs $max_configs --with-zlib-dir=bundled"
. "$path/FINISH.sh"
=== modified file 'BUILD/compile-pentium64-gprof'
--- a/BUILD/compile-pentium64-gprof 2007-08-16 00:10:16 +0000
+++ b/BUILD/compile-pentium64-gprof 2010-03-09 19:22:24 +0000
@@ -4,6 +4,6 @@ path=`dirname $0`
. "$path/SETUP.sh"
extra_flags="$pentium64_cflags $gprof_compile_flags"
-extra_configs="$pentium64_configs $debug_configs $gprof_link_flags"
+extra_configs="$pentium_configs $max_configs $gprof_link_flags --with-zlib-dir=bundled"
. "$path/FINISH.sh"
=== modified file 'include/my_sys.h'
--- a/include/my_sys.h 2009-12-03 11:19:05 +0000
+++ b/include/my_sys.h 2010-03-09 19:22:24 +0000
@@ -247,6 +247,7 @@ extern CHARSET_INFO compiled_charsets[];
/* statistics */
extern ulong my_file_opened,my_stream_opened, my_tmp_file_created;
extern ulong my_file_total_opened;
+extern ulong my_sync_count;
extern uint mysys_usage_id;
extern my_bool my_init_done;
=== modified file 'mysql-test/r/variables.result'
--- a/mysql-test/r/variables.result 2010-02-10 19:06:24 +0000
+++ b/mysql-test/r/variables.result 2010-03-09 19:22:24 +0000
@@ -575,8 +575,6 @@ set storage_engine=myisam;
set global thread_cache_size=100;
set timestamp=1, timestamp=default;
set tmp_table_size=100;
-Warnings:
-Warning 1292 Truncated incorrect tmp_table_size value: '100'
set tx_isolation="READ-COMMITTED";
set wait_timeout=100;
set log_warnings=1;
=== modified file 'mysys/my_sync.c'
--- a/mysys/my_sync.c 2010-01-15 15:27:55 +0000
+++ b/mysys/my_sync.c 2010-03-09 19:22:24 +0000
@@ -17,6 +17,8 @@
#include "mysys_err.h"
#include <errno.h>
+ulong my_sync_count; /* Count number of sync calls */
+
/*
Sync data in file to disk
@@ -46,6 +48,7 @@ int my_sync(File fd, myf my_flags)
DBUG_ENTER("my_sync");
DBUG_PRINT("my",("fd: %d my_flags: %d", fd, my_flags));
+ statistic_increment(my_sync_count,&THR_LOCK_open);
do
{
#if defined(F_FULLFSYNC)
=== modified file 'sql-bench/test-select.sh'
--- a/sql-bench/test-select.sh 2010-02-17 20:10:02 +0000
+++ b/sql-bench/test-select.sh 2010-03-09 19:22:24 +0000
@@ -68,7 +68,8 @@ do_many($dbh,$server->create("bench1",
["region char(1) NOT NULL",
"idn integer(6) NOT NULL",
"rev_idn integer(6) NOT NULL",
- "grp integer(6) NOT NULL"],
+ "grp integer(6) NOT NULL",
+ "grp_no_key integer(6) NOT NULL"],
["primary key (region,idn)",
"unique (region,rev_idn)",
"unique (region,grp,idn)"]));
@@ -105,10 +106,10 @@ for ($id=0,$rev_id=$opt_loop_count-1 ; $
{
$grp=$id*3 % $opt_groups;
$region=chr(65+$id%$opt_regions);
- do_query($dbh,"$query'$region',$id,$rev_id,$grp)");
+ do_query($dbh,"$query'$region',$id,$rev_id,$grp,$grp)");
if ($id == $half_done)
{ # Test with different insert
- $query="insert into bench1 (region,idn,rev_idn,grp) values (";
+ $query="insert into bench1 (region,idn,rev_idn,grp,grp_no_key) values (";
}
}
@@ -323,6 +324,26 @@ if ($limits->{'group_functions'})
$end_time=new Benchmark;
print "Time for count_group_on_key_parts ($i:$rows): " .
timestr(timediff($end_time, $loop_time),"all") . "\n";
+
+ $loop_time=new Benchmark;
+ $rows=0;
+ for ($i=0 ; $i < $opt_medium_loop_count ; $i++)
+ {
+ $rows+=fetch_all_rows($dbh,"select grp_no_key,count(*) from bench1 group by grp_no_key");
+ }
+ $end_time=new Benchmark;
+ print "Time for count_group ($i:$rows): " .
+ timestr(timediff($end_time, $loop_time),"all") . "\n";
+
+ $loop_time=new Benchmark;
+ $rows=0;
+ for ($i=0 ; $i < $opt_medium_loop_count ; $i++)
+ {
+ $rows+=fetch_all_rows($dbh,"select grp_no_key,count(*) as cnt from bench1 group by grp_no_key order by cnt");
+ }
+ $end_time=new Benchmark;
+ print "Time for count_group_with_order ($i:$rows): " .
+ timestr(timediff($end_time, $loop_time),"all") . "\n";
}
if ($limits->{'group_distinct_functions'})
=== modified file 'sql/mysqld.cc'
--- a/sql/mysqld.cc 2010-02-11 19:15:24 +0000
+++ b/sql/mysqld.cc 2010-03-09 19:22:24 +0000
@@ -7273,10 +7273,10 @@ The minimum value for this variable is 4
0, GET_STR, REQUIRED_ARG, 0, 0, 0, 0, 0, 0},
{"tmp_table_size", OPT_TMP_TABLE_SIZE,
"If an internal in-memory temporary table exceeds this size, MySQL will"
- " automatically convert it to an on-disk MyISAM table.",
+ " automatically convert it to an on-disk MyISAM/Maria table.",
(uchar**) &global_system_variables.tmp_table_size,
(uchar**) &max_system_variables.tmp_table_size, 0, GET_ULL,
- REQUIRED_ARG, 16*1024*1024L, 1024, MAX_MEM_TABLE_SIZE, 0, 1, 0},
+ REQUIRED_ARG, 16*1024*1024L, 0, MAX_MEM_TABLE_SIZE, 0, 1, 0},
{"transaction_alloc_block_size", OPT_TRANS_ALLOC_BLOCK_SIZE,
"Allocation block size for transactions to be stored in binary log",
(uchar**) &global_system_variables.trans_alloc_block_size,
@@ -7778,6 +7778,7 @@ SHOW_VAR status_vars[]= {
{"Ssl_verify_mode", (char*) &show_ssl_get_verify_mode, SHOW_FUNC},
{"Ssl_version", (char*) &show_ssl_get_version, SHOW_FUNC},
#endif /* HAVE_OPENSSL */
+ {"Syncs", (char*) &my_sync_count, SHOW_LONG_NOFLUSH},
{"Table_locks_immediate", (char*) &locks_immediate, SHOW_LONG},
{"Table_locks_waited", (char*) &locks_waited, SHOW_LONG},
#ifdef HAVE_MMAP
=== modified file 'sql/sql_select.cc'
--- a/sql/sql_select.cc 2010-03-08 13:57:32 +0000
+++ b/sql/sql_select.cc 2010-03-09 19:22:24 +0000
@@ -10168,7 +10168,8 @@ create_tmp_table(THD *thd,TMP_TABLE_PARA
/* future: storage engine selection can be made dynamic? */
if (blob_count || using_unique_constraint ||
(select_options & (OPTION_BIG_TABLES | SELECT_SMALL_RESULT)) ==
- OPTION_BIG_TABLES || (select_options & TMP_TABLE_FORCE_MYISAM))
+ OPTION_BIG_TABLES || (select_options & TMP_TABLE_FORCE_MYISAM) ||
+ !thd->variables.tmp_table_size)
{
share->db_plugin= ha_lock_engine(0, TMP_ENGINE_HTON);
table->file= get_new_handler(share, &table->mem_root,
@@ -10707,7 +10708,7 @@ static bool create_internal_tmp_table(TA
{
/* Create an unique key */
bzero((char*) &keydef,sizeof(keydef));
- keydef.flag=HA_NOSAME | HA_BINARY_PACK_KEY | HA_PACK_KEY;
+ keydef.flag=HA_NOSAME;
keydef.keysegs= keyinfo->key_parts;
keydef.seg= seg;
}
@@ -10732,7 +10733,7 @@ static bool create_internal_tmp_table(TA
seg->type= keyinfo->key_part[i].type;
/* Tell handler if it can do suffic space compression */
if (field->real_type() == MYSQL_TYPE_STRING &&
- keyinfo->key_part[i].length > 4)
+ keyinfo->key_part[i].length > 32)
seg->flag|= HA_SPACE_PACK;
}
if (!(field->flags & NOT_NULL_FLAG))
=== modified file 'storage/maria/ma_key_recover.h'
--- a/storage/maria/ma_key_recover.h 2008-09-01 17:31:40 +0000
+++ b/storage/maria/ma_key_recover.h 2010-03-09 19:22:24 +0000
@@ -63,7 +63,6 @@ extern my_bool write_hook_for_undo_key_i
extern my_bool write_hook_for_undo_key_delete(enum translog_record_type type,
TRN *trn, MARIA_HA *tbl_info,
LSN *lsn, void *hook_arg);
-void _ma_unpin_all_pages(MARIA_HA *info, LSN undo_lsn);
my_bool _ma_log_prefix(MARIA_PAGE *page, uint changed_length, int move_length);
my_bool _ma_log_suffix(MARIA_PAGE *page, uint org_length,
=== modified file 'storage/maria/ma_page.c'
--- a/storage/maria/ma_page.c 2009-05-06 12:03:24 +0000
+++ b/storage/maria/ma_page.c 2010-03-09 19:22:24 +0000
@@ -64,6 +64,15 @@ void _ma_page_setup(MARIA_PAGE *page, MA
share->base.key_reflength : 0);
}
+#ifdef IDENTICAL_PAGES_AFTER_RECOVERY
+void page_cleanup(MARIA_SHARE *share, MARIA_PAGE *page)
+{
+ uint length= page->size;
+ DBUG_ASSERT(length <= block_size - KEYPAGE_CHECKSUM_SIZE);
+ bzero(page->buff + length, share->block_size - length);
+}
+#endif
+
/**
Fetch a key-page in memory
@@ -102,8 +111,10 @@ my_bool _ma_fetch_keypage(MARIA_PAGE *pa
if (lock != PAGECACHE_LOCK_LEFT_UNLOCKED)
{
- DBUG_ASSERT(lock == PAGECACHE_LOCK_WRITE);
- page_link.unlock= PAGECACHE_LOCK_WRITE_UNLOCK;
+ DBUG_ASSERT(lock == PAGECACHE_LOCK_WRITE || PAGECACHE_LOCK_READ);
+ page_link.unlock= (lock == PAGECACHE_LOCK_WRITE ?
+ PAGECACHE_LOCK_WRITE_UNLOCK :
+ PAGECACHE_LOCK_READ_UNLOCK);
page_link.changed= 0;
push_dynamic(&info->pinned_pages, (void*) &page_link);
page->link_offset= info->pinned_pages.elements-1;
@@ -209,14 +220,7 @@ my_bool _ma_write_keypage(MARIA_PAGE *pa
}
#endif
-#ifdef IDENTICAL_PAGES_AFTER_RECOVERY
- {
- uint length= page->size;
- DBUG_ASSERT(length <= block_size - KEYPAGE_CHECKSUM_SIZE);
- bzero(buff + length, block_size - length);
- }
-#endif
-
+ page_cleanup(share, page);
res= pagecache_write(share->pagecache,
&share->kfile,
(pgcache_page_no_t) (page->pos / block_size),
=== modified file 'storage/maria/ma_rkey.c'
--- a/storage/maria/ma_rkey.c 2008-06-26 05:18:28 +0000
+++ b/storage/maria/ma_rkey.c 2010-03-09 19:22:24 +0000
@@ -82,6 +82,9 @@ int maria_rkey(MARIA_HA *info, uchar *bu
rw_rdlock(&keyinfo->root_lock);
nextflag= maria_read_vec[search_flag] | key.flag;
+ if (search_flag != HA_READ_KEY_EXACT ||
+ ((keyinfo->flag & (HA_NOSAME | HA_NULL_PART)) != HA_NOSAME))
+ nextflag|= SEARCH_SAVE_BUFF;
switch (keyinfo->key_alg) {
#ifdef HAVE_RTREE_KEYS
=== modified file 'storage/maria/ma_search.c'
--- a/storage/maria/ma_search.c 2009-05-06 12:03:24 +0000
+++ b/storage/maria/ma_search.c 2010-03-09 19:22:24 +0000
@@ -18,6 +18,10 @@
#include "ma_fulltext.h"
#include "m_ctype.h"
+static int _ma_search_no_save(register MARIA_HA *info, MARIA_KEY *key,
+ uint32 nextflag, register my_off_t pos,
+ MARIA_PINNED_PAGE **res_page_link,
+ uchar **res_page_buff);
static my_bool _ma_get_prev_key(MARIA_KEY *key, MARIA_PAGE *ma_page,
uchar *keypos);
@@ -57,7 +61,51 @@ int _ma_check_index(MARIA_HA *info, int
*/
int _ma_search(register MARIA_HA *info, MARIA_KEY *key, uint32 nextflag,
- register my_off_t pos)
+ my_off_t pos)
+{
+ int error;
+ MARIA_PINNED_PAGE *page_link;
+ uchar *page_buff;
+
+ info->page_changed= 1; /* If page not saved */
+ if (!(error= _ma_search_no_save(info, key, nextflag, pos, &page_link,
+ &page_buff)))
+ {
+ if (nextflag & SEARCH_SAVE_BUFF)
+ {
+ bmove512(info->keyread_buff, page_buff, info->s->block_size);
+
+ /* Save position for a possible read next / previous */
+ info->int_keypos= info->keyread_buff + (ulonglong) info->int_keypos;
+ info->int_maxpos= info->keyread_buff + (ulonglong) info->int_maxpos;
+ info->int_keytree_version= key->keyinfo->version;
+ info->last_search_keypage= info->last_keypage;
+ info->page_changed= 0;
+ info->keyread_buff_used= 0;
+ }
+ }
+ _ma_unpin_all_pages(info, LSN_IMPOSSIBLE);
+ return (error);
+}
+
+/**
+ @brief Search for a row by a key
+
+ ret_page_link Will contain pointer to page where we found key
+
+ @note
+ Position to row is stored in info->lastpos
+
+ @return
+ @retval 0 ok (key found)
+ @retval -1 Not found
+ @retval 1 If one should continue search on higher level
+*/
+
+static int _ma_search_no_save(register MARIA_HA *info, MARIA_KEY *key,
+ uint32 nextflag, register my_off_t pos,
+ MARIA_PINNED_PAGE **res_page_link,
+ uchar **res_page_buff)
{
my_bool last_key_not_used;
int error,flag;
@@ -66,6 +114,7 @@ int _ma_search(register MARIA_HA *info,
uchar lastkey[MARIA_MAX_KEY_BUFF];
MARIA_KEYDEF *keyinfo= key->keyinfo;
MARIA_PAGE page;
+ MARIA_PINNED_PAGE *page_link;
DBUG_ENTER("_ma_search");
DBUG_PRINT("enter",("pos: %lu nextflag: %u lastpos: %lu",
(ulong) pos, nextflag, (ulong) info->cur_row.lastpos));
@@ -81,10 +130,11 @@ int _ma_search(register MARIA_HA *info,
}
if (_ma_fetch_keypage(&page, info, keyinfo, pos,
- PAGECACHE_LOCK_LEFT_UNLOCKED,
- DFLT_INIT_HITS, info->keyread_buff,
- test(!(nextflag & SEARCH_SAVE_BUFF))))
+ PAGECACHE_LOCK_READ, DFLT_INIT_HITS, 0, 0))
goto err;
+ page_link= dynamic_element(&info->pinned_pages,
+ info->pinned_pages.elements-1,
+ MARIA_PINNED_PAGE*);
DBUG_DUMP("page", page.buff, page.size);
flag= (*keyinfo->bin_search)(key, &page, nextflag, &keypos, lastkey,
@@ -98,8 +148,9 @@ int _ma_search(register MARIA_HA *info,
if (flag)
{
- if ((error= _ma_search(info, key, nextflag,
- _ma_kpos(nod_flag,keypos))) <= 0)
+ if ((error= _ma_search_no_save(info, key, nextflag,
+ _ma_kpos(nod_flag,keypos),
+ res_page_link, res_page_buff)) <= 0)
DBUG_RETURN(error);
if (flag >0)
@@ -118,26 +169,15 @@ int _ma_search(register MARIA_HA *info,
((keyinfo->flag & (HA_NOSAME | HA_NULL_PART)) != HA_NOSAME ||
(key->flag & SEARCH_PART_KEY) || info->s->base.born_transactional))
{
- if ((error= _ma_search(info, key, (nextflag | SEARCH_FIND) &
- ~(SEARCH_BIGGER | SEARCH_SMALLER | SEARCH_LAST),
- _ma_kpos(nod_flag,keypos))) >= 0 ||
+ if ((error= _ma_search_no_save(info, key, (nextflag | SEARCH_FIND) &
+ ~(SEARCH_BIGGER | SEARCH_SMALLER |
+ SEARCH_LAST),
+ _ma_kpos(nod_flag,keypos),
+ res_page_link, res_page_buff)) >= 0 ||
my_errno != HA_ERR_KEY_NOT_FOUND)
DBUG_RETURN(error);
- info->last_keypage= HA_OFFSET_ERROR; /* Buffer not in mem */
}
}
- if (pos != info->last_keypage)
- {
- uchar *old_buff= page.buff;
- if (_ma_fetch_keypage(&page, info, keyinfo, pos,
- PAGECACHE_LOCK_LEFT_UNLOCKED,DFLT_INIT_HITS,
- info->keyread_buff,
- test(!(nextflag & SEARCH_SAVE_BUFF))))
- goto err;
- /* Restore position if page buffer moved */
- keypos= page.buff + (keypos - old_buff);
- maxpos= page.buff + (maxpos - old_buff);
- }
info->last_key.keyinfo= keyinfo;
if ((nextflag & (SEARCH_SMALLER | SEARCH_LAST)) && flag != 0)
@@ -172,16 +212,15 @@ int _ma_search(register MARIA_HA *info,
}
info->cur_row.lastpos= _ma_row_pos_from_key(&info->last_key);
info->cur_row.trid= _ma_trid_from_key(&info->last_key);
- /* Save position for a possible read next / previous */
- info->int_keypos= info->keyread_buff + (keypos - page.buff);
- info->int_maxpos= info->keyread_buff + (maxpos - page.buff);
- info->int_nod_flag=nod_flag;
- info->int_keytree_version=keyinfo->version;
- info->last_search_keypage=info->last_keypage;
- info->page_changed=0;
- /* Set marker that buffer was used (Marker for mi_search_next()) */
- info->keyread_buff_used= (info->keyread_buff != page.buff);
+ /* Store offset to key */
+ info->int_keypos= (uchar*) (keypos - page.buff);
+ info->int_maxpos= (uchar*) (maxpos - page.buff);
+ info->int_nod_flag= nod_flag;
+ info->last_keypage= pos;
+ *res_page_link= page_link;
+ *res_page_buff= page.buff;
+
DBUG_PRINT("exit",("found key at %lu",(ulong) info->cur_row.lastpos));
DBUG_RETURN(0);
@@ -190,7 +229,7 @@ err:
info->cur_row.lastpos= HA_OFFSET_ERROR;
info->page_changed=1;
DBUG_RETURN (-1);
-} /* _ma_search */
+}
/*
=== modified file 'storage/maria/ma_write.c'
--- a/storage/maria/ma_write.c 2009-02-19 09:01:25 +0000
+++ b/storage/maria/ma_write.c 2010-03-09 19:22:24 +0000
@@ -587,6 +587,12 @@ my_bool _ma_enlarge_root(MARIA_HA *info,
/*
Search after a position for a key and store it there
+ TODO:
+ Change this to use pagecache directly instead of creating a copy
+ of the page. To do this, we must however change write-key-on-page
+ algorithm to not overwrite the buffer but instead store any overflow
+ key in a separate buffer.
+
@return
@retval -1 error
@retval 0 ok
=== modified file 'storage/maria/maria_def.h'
--- a/storage/maria/maria_def.h 2010-02-10 19:06:24 +0000
+++ b/storage/maria/maria_def.h 2010-03-09 19:22:24 +0000
@@ -979,6 +979,11 @@ extern ulonglong transid_get_packed(MARI
#define page_store_info(share, page) \
_ma_store_keypage_flag((share), (page)->buff, (page)->flag); \
_ma_store_page_used((share), (page)->buff, (page)->size);
+#ifdef IDENTICAL_PAGES_AFTER_RECOVERY
+void page_cleanup(MARIA_SHARE *share, MARIA_PAGE *page)
+#else
+#define page_cleanup(A,B) while (0)
+#endif
extern MARIA_KEY *_ma_make_key(MARIA_HA *info, MARIA_KEY *int_key, uint keynr,
uchar *key, const uchar *record,
@@ -1197,7 +1202,7 @@ void _ma_tmp_disable_logging_for_table(M
my_bool log_incomplete);
my_bool _ma_reenable_logging_for_table(MARIA_HA *info, my_bool flush_pages);
my_bool write_log_record_for_bulk_insert(MARIA_HA *info);
-
+void _ma_unpin_all_pages(MARIA_HA *info, LSN undo_lsn);
#define MARIA_NO_CRC_NORMAL_PAGE 0xffffffff
#define MARIA_NO_CRC_BITMAP_PAGE 0xfffffffe
Re: [Maria-developers] [Bug 544173] [NEW] Server crash for multi-engine transaction with binlog disabled
by Arjen Lentz 25 Mar '10
Hi Kristian
On 22/03/2010, at 11:44 PM, Kristian Nielsen wrote:
> Public bug reported:
> If using both PBXT and XtraDB in the same transaction, and log_bin is
> disabled, the server crashes:
> [...]
> Note, the crash only happens when log_bin is disabled.
As a sideline of this, we need a test that verifies that the two-phase
commit process actually works.
This could be done by starting a transaction, doing some write in both
XtraDB and PBXT, but setting things up in such a way that the commit will
fail in one of them - perhaps Paul can provide a good idea for this - and
then the test case can simply check the values in the XtraDB and PBXT
tables afterwards to assess what happened.
This test is as important as the crash above: multi-engine transactions
are something people will be using and relying on to work correctly, yet
the code inside the MySQL core that handles them is wholly untested. The
test is of course also important to guard against regressions when other
bugs around this code get found and fixed over time.
Thanks
Regards,
Arjen.
--
Arjen Lentz, Exec.Director @ Open Query (http://openquery.com)
Exceptional Services for MySQL at a fixed budget.
Follow our blog at http://openquery.com/blog/
OurDelta: packages for MySQL and MariaDB @ http://ourdelta.org
Re: [Maria-developers] [Bug 544173] [NEW] Server crash for multi-engine transaction with binlog disabled
by Kristian Nielsen 25 Mar '10
Paul McCullagh <paul.mccullagh(a)primebase.com> writes:
> On Mar 25, 2010, at 8:39 AM, Kristian Nielsen wrote:
>> Yes. This is simple enough to do with DBUG. Just insert code that
>> makes each
>> engine fail in their prepare() when the appropriate DBUG flag is set.
>>
>> Another test we need is to have similar code to crash the server at
>> the same
>> points. Then on server restart check that the other engine does
>> rollback.
>
> It would be great to have these tests. As Arjen says, this code is not
> well traveled.
An example of how to do this (from a random dive into the source tree) is in
mysql-test/suite/maria/t/maria-recovery.test, which uses
mysql-test/include/maria_verify_recovery.inc to crash the server at a specific
point and verify that crash recovery works. It shouldn't be hard to do
something similar for this case (also with just a commit failure instead of a crash).
Hopefully this could be useful if someone wants to implement such a test.
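A minimal sketch of the SQL side of such a test, on a debug build (the debug keyword that makes PBXT fail in prepare() is hypothetical and would have to be added to the engine first):
  CREATE TABLE t_innodb (a INT) ENGINE=InnoDB;
  CREATE TABLE t_pbxt (a INT) ENGINE=PBXT;
  SET SESSION debug='+d,pbxt_fail_prepare';    # hypothetical injection point
  BEGIN;
  INSERT INTO t_innodb VALUES (1);
  INSERT INTO t_pbxt VALUES (1);
  COMMIT;                                      # expected to fail
  SET SESSION debug='';
  SELECT COUNT(*) FROM t_innodb;               # must be 0 if both engines rolled back
  SELECT COUNT(*) FROM t_pbxt;                 # must be 0 as well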
>> (For example current code has no protection against another
>> transaction
>> seeing a transient state with one engine committed and another not,
>> even using
>> START TRANSACTION WITH CONSISTENT SNAPSHOT. And there are
>> fundamentally
>> unsolvable problems with transactions that span both MVCC- and lock-
>> based
>> engines).
>
> I would be interested in an actual example of something does not work.
> Right now I have a problem imagining why something would not work.
I just happened to run the following test two days ago:
Consider the following tables
create table t1 (a int primary key) engine=innodb
create table t2 (b int primary key) engine=pbxt
insert into t1 values (1)
insert into t2 values (1)
I run the following statement repeatedly in one thread:
UPDATE t1,t2 SET a=a+1,b=b+1
It would be natural to assume that other threads will never be able to see
different values for a and b in a single transaction, but that assumption
would be wrong. I run the following statement repeatedly in a different
thread:
SELECT a,b FROM t1,t2
After just a few iterations, this will return a row with a=b+1.
(The reason I did this test was that I was looking at the XA multi-engine
code, and not seeing any code to enforce cross-engine consistency. I guess
this test shows there just is no such code ...)
I didn't report it as a bug, as I was not sure if it is a bug or not ... maybe
it is? I'd want a better fix than just taking a global lock over every commit
(which could hurt performance a lot), and such fix may be non-trivial...
Here are the Perl long-liners I used to see this, just run them in parallel to
see the failure:
perl -MDBI -le 'use strict; my $dbh=DBI->connect("dbi:mysql:database=test", "<user>", "<password>", {RaiseError => 1}); $dbh->do("SET binlog_format=row"); $dbh->do($_) for ("drop table if exists t1,t2", "create table t1 (a int primary key) engine=innodb", "create table t2 (b int primary key) engine=pbxt", "begin", "insert into t1 values (1)", "insert into t2 values (1)", "commit"); for (;;) { $dbh->do("UPDATE t1,t2 SET a=a+1,b=b+1");}'
perl -MDBI -le 'use strict; my $dbh=DBI->connect("dbi:mysql:database=test", "<user>", "<password>", {RaiseError => 1}); $dbh->do("SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ"); for(;;) {my $a=$dbh->selectrow_arrayref("SELECT a,b FROM t1,t2"); print join(" ", @$a); die if $a->[0] != $a->[1];}'
(CONSISTENT SNAPSHOT does not make a difference, as is seen by replacing the
second command with this:
perl -MDBI -le 'use strict; my $dbh=DBI->connect("dbi:mysql:database=test", "<user>", "<password>", {RaiseError => 1}); $dbh->do("SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ"); for(;;) {$dbh->do("START TRANSACTION WITH CONSISTENT SNAPSHOT");my $a=$dbh->selectrow_arrayref("SELECT a,b FROM t1,t2"); print join(" ", @$a); die if $a->[0] != $a->[1];$dbh->do("COMMIT");}'
)
With respect to the fundamental problems with combining MVCC and locking
engines; I know I reported a documentation bug for this long ago, though
google failed to find it for me. And I'm not sure how to repeat it now, as I
don't have any non-mvcc engines easily available (MariaDB has no NDB, and BDB
is gone also). But it goes something like this:
In a locking engine, a transaction sees the (consistent) state of the database
as it is at the *end* of the transaction. So the transaction can see all other
transactions that committed before it did.
In an mvcc engine, a transaction sees the (consistent) state of the database
as it is at the *start* of the transaction. So it sees no transactions that
started after it did.
So to get an inconsistency, something like this should work:
TRN1: SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
TRN1: BEGIN;
TRN1: SELECT * FROM mvcc_table;
TRN2: BEGIN;
TRN2: UPDATE mvcc_table SET amount - amount - 100;
TRN2: UPDATE locking_table SET amount = amount + 100;
TRN2: COMMIT;
TRN1: SELECT * from locking_table;
TRN1: COMMIT;
In such a case, TRN1 will see the update of the locking_table, but not the
update of the mvcc_table, which gives an inconsistent view of the database. I
don't really see any way to solve this. (Of course one could add LOCK IN
SHARE MODE to every select, in effect turning the mvcc engine into a locking
engine).
- Kristian.
Colin Charles <colin(a)askmonty.org> writes:
> On 25 Mar 2010, at 02:09, Daniel Bartholomew wrote:
>
>> If so, my thinking is that the first 5.2 release will be called
>> "5.2.1"
>> and then go up from there.
>
>
> This is the only logical way forward
>
> We can then say "5.2.1 branched from MySQL 5.x" (for example)
>
> We've got to be clear where we've pulled things from, because some
> things might be fixed in later versions of MySQL. Also in case there
> are changes (i.e. that may affect folk upgrading), its really
> important to know where things are branched from
>
> Its also good that we "deviate" from their numbering. Putting on a
> marketing hat, it does sound like we're doing well with a greater
> version number (ok, I don't necessarily believe this, but I was semi-
> convinced when I heard the explanation given to me by the marketing
> folk at MySQL - it helps CIOs think, maybe)
I don't want a marketing hat :)
But from a technical point of view I just want to make it clear that there is
no difference in this respect between MariaDB 5.1 and MariaDB 5.2. The only
meaningful difference between 5.1 and 5.2 is that at some point we stopped
adding features to 5.1 to make a stable release, and thus all additions that
are not bugfixes are now called 5.2.
But in terms of numbering from 1,2,3 or from corresponding MySQL versions,
there is no difference. Any argument for one or the other numbering applies
equally to both 5.1 and 5.2. The merging from MySQL is identical.
So the consistent way would be to release 5.2.44, 5.2.45, ...
(On the other hand there may be other reasons to prefer 5.2.1, 5.2.2, fine with
me. We can't go back to MariaDB 5.1.1, but for MariaDB 5.2 we can choose).
- Kristian.
[Maria-developers] bzr commit into Mariadb 5.2, with Maria 2.0:maria/5.2 branch (monty:2751)
by Michael Widenius 25 Mar '10
#At lp:maria/5.2 based on revid:sergii@pisem.net-20100323092233-t2gwaclx94hd6exa
2751 Michael Widenius 2010-03-25
simple speed & space optimization:
- Avoid full inline of mark_trx_read_write() for many functions
- Avoid somewhat expensive tests for every write/update/delete row
modified:
sql/handler.cc
sql/handler.h
sql/sql_base.cc
per-file messages:
sql/handler.h
Adde ha_start_of_new_statement() to reset internal variables as part of the code in "open_table" that resets TABLE object for the new statement
Faster mark_trx_read_write_part()
sql/sql_base.cc
Don't manipulate table->file internal structs directly
=== modified file 'sql/handler.cc'
--- a/sql/handler.cc 2010-03-15 11:51:23 +0000
+++ b/sql/handler.cc 2010-03-25 13:33:39 +0000
@@ -3110,11 +3110,14 @@ int handler::ha_check(THD *thd, HA_CHECK
if it is started.
*/
-inline
void
-handler::mark_trx_read_write()
+handler::mark_trx_read_write_part2()
{
Ha_trx_info *ha_info= &ha_thd()->ha_data[ht->slot].ha_info[0];
+
+ /* Don't call this function again for this statement */
+ mark_trx_done= TRUE;
+
/*
When a storage engine method is called, the transaction must
have been started, unless it's a DDL call, for which the
=== modified file 'sql/handler.h'
--- a/sql/handler.h 2010-02-01 06:14:12 +0000
+++ b/sql/handler.h 2010-03-25 13:33:39 +0000
@@ -1134,6 +1134,7 @@ public:
enum {NONE=0, INDEX, RND} inited;
bool locked;
bool implicit_emptied; /* Can be !=0 only if HEAP */
+ bool mark_trx_done;
const COND *pushed_cond;
/**
next_insert_id is the next value which should be inserted into the
@@ -1177,7 +1178,7 @@ public:
ref(0), key_used_on_scan(MAX_KEY), active_index(MAX_KEY),
ref_length(sizeof(my_off_t)),
ft_handler(0), inited(NONE),
- locked(FALSE), implicit_emptied(0),
+ locked(FALSE), implicit_emptied(FALSE), mark_trx_done(FALSE),
pushed_cond(0), next_insert_id(0), insert_id_for_cur_row(0),
auto_inc_intervals_count(0)
{
@@ -1232,6 +1233,13 @@ public:
DBUG_RETURN(rnd_end());
}
int ha_reset();
+ /* Tell handler (not storage engine) this is start of a new statement */
+ void ha_start_of_new_statement()
+ {
+ ft_handler= 0;
+ mark_trx_done= FALSE;
+ }
+
/* this is necessary in many places, e.g. in HANDLER command */
int ha_index_or_rnd_end()
{
@@ -1943,8 +1951,13 @@ protected:
private:
/* Private helpers */
- inline void mark_trx_read_write();
-private:
+ void mark_trx_read_write_part2();
+ inline void mark_trx_read_write()
+ {
+ if (!mark_trx_done)
+ mark_trx_read_write_part2();
+ }
+
/*
Low-level primitives for storage engines. These should be
overridden by the storage engine class. To call these methods, use
=== modified file 'sql/sql_base.cc'
--- a/sql/sql_base.cc 2010-03-15 11:51:23 +0000
+++ b/sql/sql_base.cc 2010-03-25 13:33:39 +0000
@@ -2996,7 +2996,7 @@ TABLE *open_table(THD *thd, TABLE_LIST *
table->status=STATUS_NO_RECORD;
table->insert_values= 0;
table->fulltext_searched= 0;
- table->file->ft_handler= 0;
+ table->file->ha_start_of_new_statement();
table->reginfo.impossible_range= 0;
/* Catch wrong handling of the auto_increment_field_not_null. */
DBUG_ASSERT(!table->auto_increment_field_not_null);
[Maria-developers] Rev 28: Added note about noop IO scheduler for Linux. Refactor variable name. in file:///Users/hakan/work/monty_program/mariadb-tools/
by Hakan Kuecuekyilmaz 25 Mar '10
At file:///Users/hakan/work/monty_program/mariadb-tools/
------------------------------------------------------------
revno: 28
revision-id: hakan(a)askmonty.org-20100325014205-kwsruwixwlymz1ti
parent: hakan(a)askmonty.org-20100310010046-hwv56n4wfn4t4odp
committer: Hakan Kuecuekyilmaz <hakan(a)askmonty.org>
branch nick: mariadb-tools
timestamp: Thu 2010-03-25 02:42:05 +0100
message:
Added note about noop IO scheduler for Linux. Refactor variable name.
For MyISAM related tests added key_cache statistics dump out.
=== modified file 'sysbench/run-sysbench-myisam.sh'
--- a/sysbench/run-sysbench-myisam.sh 2010-03-10 01:00:46 +0000
+++ b/sysbench/run-sysbench-myisam.sh 2010-03-25 01:42:05 +0000
@@ -6,16 +6,20 @@
# * Do not run this script with root privileges. We use
# killall -9, which can cause severe side effects!
# * By bzr pull we mean bzr merge --pull
+# * For reasonable performance set your IO scheduler to noop or deadline, for
+# reference please check
+# http://www.mysqlperformanceblog.com/2009/01/30/linux-schedulers-in-tpcc-lik…
#
# Index sizes for 20 mio rows (--table-size=20000000).
-# * delete.lua: 313M sbtest.MYI
-# * insert.lua: 4.0K sbtest.MYI
-# * oltp_complex_ro.lua: 313M sbtest.MYI
-# * oltp_complex_rw.lua: 313M sbtest.MYI
-# * oltp_simple.lua: 325M sbtest.MYI
-# * select.lua: 313M sbtest.MYI
-# * update_index.lua: 313M sbtest.MYI
-# * update_non_index.lua: 313M sbtest.MYI
+# * delete.lua 313M sbtest.MYI
+# * insert.lua 4.0K sbtest.MYI
+# * oltp_complex_ro.lua 313M sbtest.MYI
+# * oltp_complex_rw.lua 313M sbtest.MYI
+# * oltp_simple.lua 325M sbtest.MYI
+# * select.lua 313M sbtest.MYI
+# * select_random_ranges.lua 313M sbtest.MYI
+# * update_index.lua 313M sbtest.MYI
+# * update_non_index.lua 313M sbtest.MYI
#
# Hakan Kuecuekyilmaz <hakan at askmonty dot org> 2010-02-19.
#
@@ -60,13 +64,14 @@
# change these, except you exactly know what you are doing.
#
MYSQLADMIN='client/mysqladmin'
+MYSQL='client/mysql'
#
# Variables.
#
MY_SOCKET="/tmp/mysql.sock"
MYSQLADMIN_OPTIONS="--no-defaults -uroot --socket=$MY_SOCKET"
-MYSQL_OPTIONS="--no-defaults \
+MYSQLD_OPTIONS="--no-defaults \
--datadir=$DATA_DIR \
--language=./sql/share/english \
--key_buffer_size=32M \
@@ -104,6 +109,7 @@
oltp_complex_rw.lua \
oltp_simple.lua \
select.lua \
+ select_random_ranges.lua \
update_index.lua \
update_non_index.lua"
@@ -240,7 +246,7 @@
}
function start_mysqld {
- sql/mysqld $MYSQL_OPTIONS &
+ sql/mysqld $MYSQLD_OPTIONS &
j=0
STARTED=-1
@@ -269,7 +275,7 @@
#
# Write out configurations used for future refernce.
#
-echo $MYSQL_OPTIONS > ${RESULT_DIR}/${TODAY}/${PRODUCT}/mysqld_options.txt
+echo $MYSQLD_OPTIONS > ${RESULT_DIR}/${TODAY}/${PRODUCT}/mysqld_options.txt
echo $SYSBENCH_OPTIONS > ${RESULT_DIR}/${TODAY}/${PRODUCT}/sysbench_options.txt
echo '' >> ${RESULT_DIR}/${TODAY}/${PRODUCT}/sysbench_options.txt
echo "Warm up time is: $WARM_UP_TIME" >> ${RESULT_DIR}/${TODAY}/${PRODUCT}/sysbench_options.txt
@@ -331,6 +337,7 @@
echo "[$(date "+%Y-%m-%d %H:%M:%S")] Starting warm up of $WARM_UP_TIME seconds."
$SYSBENCH $SYSBENCH_OPTIONS_WARM_UP run
sync
+ echo 'FLUSH STATUS' | $MYSQL -uroot
echo "[$(date "+%Y-%m-%d %H:%M:%S")] Finnished warm up."
echo "[$(date "+%Y-%m-%d %H:%M:%S")] Starting actual sysbench run."
@@ -338,6 +345,8 @@
grep "write requests:" ${THIS_RESULT_DIR}/result${k}.txt | awk '{ print $4 }' | sed -e 's/(//' >> ${THIS_RESULT_DIR}/results.txt
+ echo 'SELECT * FROM INFORMATION_SCHEMA.KEY_CACHES' | $MYSQL -uroot > ${THIS_RESULT_DIR}/key_cache_stats{k}.txt
+
k=$(($k + 1))
done
=== modified file 'sysbench/run-sysbench.sh'
--- a/sysbench/run-sysbench.sh 2010-03-10 00:02:01 +0000
+++ b/sysbench/run-sysbench.sh 2010-03-25 01:42:05 +0000
@@ -6,6 +6,9 @@
# * Do not run this script with root privileges. We use
# killall -9, which can cause severe side effects!
# * By bzr pull we mean bzr merge --pull
+# * For reasonable performance set your IO scheduler to noop or deadline, for
+# reference please check
+# http://www.mysqlperformanceblog.com/2009/01/30/linux-schedulers-in-tpcc-lik…
#
# Hakan Kuecuekyilmaz <hakan at askmonty dot org> 2010-02-19.
#
@@ -56,7 +59,7 @@
#
MY_SOCKET="/tmp/mysql.sock"
MYSQLADMIN_OPTIONS="--no-defaults -uroot --socket=$MY_SOCKET"
-MYSQL_OPTIONS="--no-defaults \
+MYSQLD_OPTIONS="--no-defaults \
--datadir=$DATA_DIR \
--language=./sql/share/english \
--max_connections=256 \
@@ -236,7 +239,7 @@
}
function start_mysqld {
- sql/mysqld $MYSQL_OPTIONS &
+ sql/mysqld $MYSQLD_OPTIONS &
j=0
STARTED=-1
@@ -265,7 +268,7 @@
#
# Write out configurations used for future refernce.
#
-echo $MYSQL_OPTIONS > ${RESULT_DIR}/${TODAY}/${PRODUCT}/mysqld_options.txt
+echo $MYSQLD_OPTIONS > ${RESULT_DIR}/${TODAY}/${PRODUCT}/mysqld_options.txt
echo $SYSBENCH_OPTIONS > ${RESULT_DIR}/${TODAY}/${PRODUCT}/sysbench_options.txt
echo '' >> ${RESULT_DIR}/${TODAY}/${PRODUCT}/sysbench_options.txt
echo "Warm up time is: $WARM_UP_TIME" >> ${RESULT_DIR}/${TODAY}/${PRODUCT}/sysbench_options.txt
[Maria-developers] [Branch ~maria-captains/maria/5.1] Rev 2833: two crashes in the TC_LOG_MMAP:
by noreply@launchpad.net 24 Mar '10
------------------------------------------------------------
revno: 2833
fixes bug(s): https://launchpad.net/bugs/544173
committer: Sergei Golubchik <sergii(a)pisem.net>
branch nick: maria-5.1
timestamp: Wed 2010-03-24 23:12:39 +0100
message:
two crashes in the TC_LOG_MMAP:
1. don't forget to initialize page->ptr
2. don't signal active->cond, if active is NULL
added:
mysql-test/suite/pbxt/r/pbxt_xa.result
mysql-test/suite/pbxt/t/pbxt_xa.test
modified:
sql/log.cc
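For reference, TC_LOG_MMAP is the mmap()-based transaction coordinator log
used when the binary log is disabled and more than one XA-capable engine
takes part in a transaction, which is presumably what the new pbxt_xa test
exercises. A minimal sketch of that kind of cross-engine test case (the table
definitions and the second engine are assumptions, not the actual test
contents):
  CREATE TABLE t1 (a INT) ENGINE=PBXT;
  CREATE TABLE t2 (a INT) ENGINE=InnoDB;
  BEGIN;
  INSERT INTO t1 VALUES (1);
  INSERT INTO t2 VALUES (1);
  -- two transactional engines and no --log-bin: the commit is coordinated
  -- through the mmap-based TC log, which is the code that used to crash
  COMMIT;
  DROP TABLE t1, t2;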
--
lp:maria
https://code.launchpad.net/~maria-captains/maria/5.1
Your team Maria developers is subscribed to branch lp:maria.
To unsubscribe from this branch go to https://code.launchpad.net/~maria-captains/maria/5.1/+edit-subscription.
[Maria-developers] Updated (by Psergey): Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE (90)
by worklog-noreply@askmonty.org 24 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Subqueries: Inside-out execution for non-semijoin materialized
subqueries that are AND-parts of the WHERE
CREATION DATE..: Sun, 28 Feb 2010, 13:45
SUPERVISOR.....: Monty
IMPLEMENTOR....: Psergey
COPIES TO......: Igor, Psergey, Timour
CATEGORY.......: Server-RawIdeaBin
TASK ID........: 90 (http://askmonty.org/worklog/?tid=90)
VERSION........: Server-5.3
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: -1 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Psergey - Wed, 24 Mar 2010, 14:42)=-=-
Low Level Design modified.
--- /tmp/wklog.90.old.19182 2010-03-24 14:42:54.000000000 +0000
+++ /tmp/wklog.90.new.19182 2010-03-24 14:42:54.000000000 +0000
@@ -1 +1,140 @@
+<contents>
+1. Applicability check
+2. Representation
+2.1 Option #1: Convert to TABLE_LIST
+2.2 On subquery predicate removal
+2.3 Option #2: No TABLE_LIST, convert to JOIN_TAB right away
+2.3 What is expected of the result of conversion
+3. Pre-optimization steps
+3.1 Constant detection
+3.3 update_ref_and_keys
+3.4 JOIN_TAB sorting criteria
+4. Optimization
+5. Execution
+User interface.
+</contents>
+
+We'll call the new execution strategy "jtbm-materialization", for the lack of
+better name.
+
+1. Applicability check
+======================
+The criteria for checking whether a subquery can be processed with
+jtbm-materialization can be checked at JOIN::prepare stage (like it
+happens with semi-join check)
+
+2. Representation
+=================
+
+2.1 Option #1: Convert to TABLE_LIST
+------------------------------------
+Make it work like semi-join nests: each jtbm-predicate is converted into a
+TABLE_LIST object. This will make it
+
+ - uniform with semi-joins (we've stepped on all rakes there)
+ - allow to process JTBM-subqueries in ON expressions
+
+simplify_joins() will handle jtbm TABLE_LISTs as some kinds of opaque base
+tables.
+
+for EXPLAIN EXTENDED, it would be natural to print something semi-join like,
+i.e. for
+
+ SELECT ... FROM ot WHERE oe IN (SELECT ie FROM materialized-non-sj-select)
+
+we'll print
+
+ SELECT ... FROM ot SJ (SELECT ie FROM materialized-non-sj-select) ON oe=XX
+
+the XX part is not clear. we don't want to print 'ie' the second time here?
+
+2.2 On subquery predicate removal
+---------------------------------
+Q: if we remove the subquery predicate permanently, who will run
+fix_fields() for it? For semi-joins we don't have the problem as we
+inject into ON expression (right? or not? we have sj_on_expr, too...
+(Investigation: we the the same Item* pointer both in WHERE and
+as sj_on_expr. fix_fields() is called for the WHERE part and that's
+how sj_on_expr gets fixed. This works as long as
+Item_func_eq::fix_fields() does not try to substitute itself with
+another item).
+A: ?
+
+2.3 Option #2: No TABLE_LIST, convert to JOIN_TAB right away
+------------------------------------------------------------
+JOIN_TABs only live for the duration of one PS re-execution, so we'll have to
+- make conversion fully undoable
+- perform it sufficiently late in the optimization process, at the point
+ where JOIN_TABs are already allocated.
+
+Note that if we don't position JTBM predicates in join's TABLE_LIST tree, then
+it will be impossible to handle JTBM queries inside/outside of outer joins.
+
+2.3 What is expected of the result of conversion
+------------------------------------------------
+Join [pre]optimization relies on each optimized entity to have a bit in
+table_map.
+
+TODO: where do we check if there will be enough bits for everyone? (at the
+ point where we assign them?)
+
+The bit stored in join_tab->table->map, and the apparent problem is that JTBM
+join_tabs do not naturally have TABLE* object.
+
+We could use the the one that will be used for Materialization, but that will
+stop working when we will have to include IN->EXISTS in the choice.
+
+Current approach: don't create a table. create a table_map element in JOIN_TAB
+instead. Evgen has probably done something like that already.
+
+3. Pre-optimization steps
+=========================
+JOIN_TABs are allocated in make_join_statistics(). This where the changes will
+be needed: for JOIN_TABs that correspond to JTBM-tables:
+
+- don't set tab->table, set tab->jtbm_select (or whatever)
+- run subquery's optimizer to get its output cardinality
+
+3.1 Constant detection
+----------------------
+What about subqueries that "are constant"?
+ const_item IN (SELECT uncorrelated) -> is constant, but not something
+ we would want to evaluate.
+ something IN (SELECT from_constant_join) -> is constant
+
+Do we need to mark their JOIN_TABs as constant?
+
+3.3 update_ref_and_keys
+-----------------------
+* Walk through JTBM elements and inject KEYUSE elements for their
+ IN-equalities.
+
+TODO: KEYUSE elements imply presense of KEYs! Which we don't have!
+
+3.4 JOIN_TAB sorting criteria
+-----------------------------
+Q: Where do we put JTBM's join_tab when pre-sorting records?
+A: it should sort as regular table.
+
+TODO: where do we remove the predicates from the WHERE?
+ - remove them like SJ-converter does
+ - remove them with optimizer (like remove_eq_conds does)
+
+4. Optimization
+===============
+Add a branch in best_access_path to account for
+- JTBM-Materialization
+- JTBM-Materialization-Scan.
+
+5. Execution
+============
+* We should be able to reuse item_subselect.cc code for lookups
+* But will have to use our own temptable scan code
+
+TODO: is it possible to have any unification with SJ-Materialization?
+
+User interface
+--------------
+Any @@optimizer_switch flags for all this?
+
-=-=(Igor - Wed, 10 Mar 2010, 22:02)=-=-
High Level Description modified.
--- /tmp/wklog.90.old.2007 2010-03-10 22:02:23.000000000 +0000
+++ /tmp/wklog.90.new.2007 2010-03-10 22:02:23.000000000 +0000
@@ -13,8 +13,8 @@
for each record R2 in big_table such that oe=R1
pass R2 to output
-Semi-join materialization supports such strategy with SJM-Scan strategy. This WL
-entry is about adding support for such strategies for non-semijoin subqueries.
+Semi-join materialization supports the inside-out strategy. This WL entry is
+about adding support for such strategies for non-semijoin subqueries.
Once WL#89 is done, there will be a cost-based choice between
-=-=(Igor - Wed, 10 Mar 2010, 21:52)=-=-
Status updated.
--- /tmp/wklog.90.old.882 2010-03-10 21:52:02.000000000 +0000
+++ /tmp/wklog.90.new.882 2010-03-10 21:52:02.000000000 +0000
@@ -1 +1 @@
-Un-Assigned
+Assigned
-=-=(Psergey - Sun, 28 Feb 2010, 15:37)=-=-
High Level Description modified.
--- /tmp/wklog.90.old.23524 2010-02-28 15:37:47.000000000 +0000
+++ /tmp/wklog.90.new.23524 2010-02-28 15:37:47.000000000 +0000
@@ -15,3 +15,7 @@
Semi-join materialization supports such strategy with SJM-Scan strategy. This WL
entry is about adding support for such strategies for non-semijoin subqueries.
+
+
+Once WL#89 is done, there will be a cost-based choice between
+Materialization+lookup, Materialization+scan, and IN->EXISTS+lookup strategies.
-=-=(Psergey - Sun, 28 Feb 2010, 15:22)=-=-
High-Level Specification modified.
--- /tmp/wklog.90.old.23033 2010-02-28 15:22:09.000000000 +0000
+++ /tmp/wklog.90.new.23033 2010-02-28 15:22:09.000000000 +0000
@@ -1 +1,33 @@
+Basic idea on how this could be achieved:
+
+Pre-optimization phase
+----------------------
+
+The rewrite
+~~~~~~~~~~~
+If we find a subquery predicate that is
+- not processed by current semi-join optimizations
+- is an AND-part of the WHERE/ON clause
+- can be executed with Materialization
+
+then
+- Remove the predicate from WHERE/ON clause
+- Add a special JOIN_TAB object instead.
+
+Plan options
+~~~~~~~~~~~~
+- Use the IN-equality to create KEYUSE elements.
+
+Optimization
+------------
+- Pre-optimize the subquery so we know materialization cost
+- Whenever best_access_path() encounters the "special JOIN_TAB" it should
+ consider two strategies:
+ A. Materialization and making lookups in the materialized table (if applicable)
+ B. Materialization and then scanning the materialized table.
+
+
+EXPLAIN
+-------
+TODO how this will look in EXPLAIN output?
-=-=(Psergey - Sun, 28 Feb 2010, 14:56)=-=-
Dependency created: 91 now depends on 90
-=-=(Psergey - Sun, 28 Feb 2010, 14:54)=-=-
Dependency deleted: 94 no longer depends on 90
-=-=(Psergey - Sun, 28 Feb 2010, 14:47)=-=-
Title modified.
--- /tmp/wklog.90.old.21903 2010-02-28 14:47:54.000000000 +0000
+++ /tmp/wklog.90.new.21903 2010-02-28 14:47:54.000000000 +0000
@@ -1 +1 @@
- Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE
+Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE
-=-=(Psergey - Sun, 28 Feb 2010, 14:47)=-=-
High Level Description modified.
--- /tmp/wklog.90.old.21880 2010-02-28 14:47:28.000000000 +0000
+++ /tmp/wklog.90.new.21880 2010-02-28 14:47:28.000000000 +0000
@@ -1,10 +1,17 @@
-For uncorrelated IN subqueries that can't be converted to semi-joins it is
-necessary to make a cost-based choice between IN->EXISTS and Materialization
-strategies.
+Consider the following case:
-Both strategies handle two cases:
-1. A simple case w/o NULLs handling
-2. Handling NULLs.
+SELECT * FROM big_table
+WHERE oe IN (SELECT ie FROM table_with_few_groups
+ WHERE ...
+ GROUP BY group_col) AND ...
-This WL is about making cost-based decision for #1.
+Here the best way to execute the query is:
+ Materialize the subquery;
+ # now run the join:
+ for each record R1 in materialized table
+ for each record R2 in big_table such that oe=R1
+ pass R2 to output
+
+Semi-join materialization supports such strategy with SJM-Scan strategy. This WL
+entry is about adding support for such strategies for non-semijoin subqueries.
-=-=(Psergey - Sun, 28 Feb 2010, 14:47)=-=-
Title modified.
--- /tmp/wklog.90.old.21859 2010-02-28 14:47:02.000000000 +0000
+++ /tmp/wklog.90.new.21859 2010-02-28 14:47:02.000000000 +0000
@@ -1 +1 @@
-Subqueries: cost-based choice between Materialization and IN->EXISTS transformation
+ Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE
------------------------------------------------------------
-=-=(View All Progress Notes, 11 total)=-=-
http://askmonty.org/worklog/index.pl?tid=90&nolimit=1
DESCRIPTION:
Consider the following case:
SELECT * FROM big_table
WHERE oe IN (SELECT ie FROM table_with_few_groups
WHERE ...
GROUP BY group_col) AND ...
Here the best way to execute the query is:
Materialize the subquery;
# now run the join:
for each record R1 in materialized table
for each record R2 in big_table such that oe=R1
pass R2 to output
Semi-join materialization supports the inside-out strategy. This WL entry is
about adding support for such strategies for non-semijoin subqueries.
Once WL#89 is done, there will be a cost-based choice between
Materialization+lookup, Materialization+scan, and IN->EXISTS+lookup strategies.
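A hedged hand-rewrite of what this inside-out plan amounts to, using the table
names from the example above (the explicit temporary table and the DISTINCT
are only illustrative, and the subquery's own WHERE clause is omitted):
  CREATE TEMPORARY TABLE mat AS
    SELECT DISTINCT ie FROM table_with_few_groups GROUP BY group_col;
  -- drive the join from the small materialized table into big_table
  SELECT big_table.* FROM mat JOIN big_table ON big_table.oe = mat.ie;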
HIGH-LEVEL SPECIFICATION:
Basic idea on how this could be achieved:
Pre-optimization phase
----------------------
The rewrite
~~~~~~~~~~~
If we find a subquery predicate that
- is not processed by the current semi-join optimizations
- is an AND-part of the WHERE/ON clause
- can be executed with Materialization
then
- Remove the predicate from WHERE/ON clause
- Add a special JOIN_TAB object instead.
Plan options
~~~~~~~~~~~~
- Use the IN-equality to create KEYUSE elements.
Optimization
------------
- Pre-optimize the subquery so we know materialization cost
- Whenever best_access_path() encounters the "special JOIN_TAB" it should
consider two strategies:
A. Materialization and making lookups in the materialized table (if applicable)
B. Materialization and then scanning the materialized table.
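As a hedged illustration of when each strategy should pay off (the table
names below are invented):
  -- B (materialize, then scan the materialized table) tends to win when the
  -- outer table is large and has an index on oe:
  SELECT * FROM big_table WHERE oe IN (SELECT ie FROM small_groups GROUP BY g);
  -- A (materialize, then do lookups into it) tends to win when the outer
  -- side produces only a few rows:
  SELECT * FROM tiny_table WHERE oe IN (SELECT ie FROM big_groups GROUP BY g);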
EXPLAIN
-------
TODO: how will this look in EXPLAIN output?
LOW-LEVEL DESIGN:
<contents>
1. Applicability check
2. Representation
2.1 Option #1: Convert to TABLE_LIST
2.2 On subquery predicate removal
2.3 Option #2: No TABLE_LIST, convert to JOIN_TAB right away
2.3 What is expected of the result of conversion
3. Pre-optimization steps
3.1 Constant detection
3.3 update_ref_and_keys
3.4 JOIN_TAB sorting criteria
4. Optimization
5. Execution
User interface.
</contents>
We'll call the new execution strategy "jtbm-materialization", for lack of a
better name.
1. Applicability check
======================
Whether a subquery can be processed with jtbm-materialization can be checked
at the JOIN::prepare stage (as is done for the semi-join check).
2. Representation
=================
2.1 Option #1: Convert to TABLE_LIST
------------------------------------
Make it work like semi-join nests: each jtbm-predicate is converted into a
TABLE_LIST object. This will
- keep it uniform with semi-joins (we've stepped on all the rakes there)
- allow JTBM-subqueries in ON expressions to be processed
simplify_joins() will handle jtbm TABLE_LISTs as some kinds of opaque base
tables.
For EXPLAIN EXTENDED it would be natural to print something semi-join-like,
i.e. for
SELECT ... FROM ot WHERE oe IN (SELECT ie FROM materialized-non-sj-select)
we'll print
SELECT ... FROM ot SJ (SELECT ie FROM materialized-non-sj-select) ON oe=XX
The XX part is not clear; we don't want to print 'ie' a second time here, do we?
2.2 On subquery predicate removal
---------------------------------
Q: If we remove the subquery predicate permanently, who will run
fix_fields() for it? For semi-joins we don't have this problem, as we
inject the predicate into the ON expression (right? or not? we have
sj_on_expr, too...). Investigation: we use the same Item* pointer both in
the WHERE clause and as sj_on_expr; fix_fields() is called for the WHERE
part and that's how sj_on_expr gets fixed. This works as long as
Item_func_eq::fix_fields() does not try to substitute itself with another
item.
A: ?
2.3 Option #2: No TABLE_LIST, convert to JOIN_TAB right away
------------------------------------------------------------
JOIN_TABs only live for the duration of one PS re-execution, so we'll have to
- make the conversion fully undoable
- perform it sufficiently late in the optimization process, at the point
where JOIN_TABs are already allocated.
Note that if we don't position JTBM predicates in the join's TABLE_LIST tree,
it will be impossible to handle JTBM queries inside/outside of outer joins.
2.3 What is expected of the result of conversion
------------------------------------------------
Join [pre]optimization relies on each optimized entity having a bit in the
table_map.
TODO: where do we check if there will be enough bits for everyone? (at the
point where we assign them?)
The bit is stored in join_tab->table->map, and the apparent problem is that
JTBM join_tabs do not naturally have a TABLE* object.
We could use the one that will be used for Materialization, but that will
stop working once we have to include IN->EXISTS in the choice.
Current approach: don't create a table; create a table_map element in JOIN_TAB
instead. Evgen has probably done something like that already.
3. Pre-optimization steps
=========================
JOIN_TABs are allocated in make_join_statistics(). This is where the changes
will be needed, for JOIN_TABs that correspond to JTBM-tables:
- don't set tab->table, set tab->jtbm_select (or whatever)
- run subquery's optimizer to get its output cardinality
3.1 Constant detection
----------------------
What about subqueries that "are constant"?
const_item IN (SELECT uncorrelated) -> is constant, but not something
we would want to evaluate.
something IN (SELECT from_constant_join) -> is constant
Do we need to mark their JOIN_TABs as constant?
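Hedged concrete examples of the two shapes above (table and column names are
made up, and pk is assumed to be t2's primary key):
  -- const_item IN (SELECT uncorrelated): constant, but not something we
  -- would want to evaluate during optimization
  SELECT * FROM t1 WHERE 5 IN (SELECT b FROM t2 GROUP BY b);
  -- something IN (SELECT from_constant_join): the subquery's FROM is a
  -- single const table
  SELECT * FROM t1 WHERE t1.a IN (SELECT b FROM t2 WHERE t2.pk = 1);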
3.3 update_ref_and_keys
-----------------------
* Walk through JTBM elements and inject KEYUSE elements for their
IN-equalities.
TODO: KEYUSE elements imply the presence of KEYs, which we don't have!
3.4 JOIN_TAB sorting criteria
-----------------------------
Q: Where do we put JTBM's join_tab when pre-sorting records?
A: It should sort as a regular table.
TODO: where do we remove the predicates from the WHERE?
- remove them like SJ-converter does
- remove them with optimizer (like remove_eq_conds does)
4. Optimization
===============
Add a branch in best_access_path to account for
- JTBM-Materialization
- JTBM-Materialization-Scan.
5. Execution
============
* We should be able to reuse item_subselect.cc code for lookups
* But will have to use our own temptable scan code
TODO: is it possible to have any unification with SJ-Materialization?
User interface
--------------
Any @@optimizer_switch flags for all this?
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] Updated (by Psergey): Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE (90)
by worklog-noreply@askmonty.org 24 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Subqueries: Inside-out execution for non-semijoin materialized
subqueries that are AND-parts of the WHERE
CREATION DATE..: Sun, 28 Feb 2010, 13:45
SUPERVISOR.....: Monty
IMPLEMENTOR....: Psergey
COPIES TO......: Igor, Psergey, Timour
CATEGORY.......: Server-RawIdeaBin
TASK ID........: 90 (http://askmonty.org/worklog/?tid=90)
VERSION........: Server-5.3
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: -1 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Psergey - Wed, 24 Mar 2010, 14:42)=-=-
Low Level Design modified.
--- /tmp/wklog.90.old.19182 2010-03-24 14:42:54.000000000 +0000
+++ /tmp/wklog.90.new.19182 2010-03-24 14:42:54.000000000 +0000
@@ -1 +1,140 @@
+<contents>
+1. Applicability check
+2. Representation
+2.1 Option #1: Convert to TABLE_LIST
+2.2 On subquery predicate removal
+2.3 Option #2: No TABLE_LIST, convert to JOIN_TAB right away
+2.3 What is expected of the result of conversion
+3. Pre-optimization steps
+3.1 Constant detection
+3.3 update_ref_and_keys
+3.4 JOIN_TAB sorting criteria
+4. Optimization
+5. Execution
+User interface.
+</contents>
+
+We'll call the new execution strategy "jtbm-materialization", for the lack of
+better name.
+
+1. Applicability check
+======================
+The criteria for checking whether a subquery can be processed with
+jtbm-materialization can be checked at JOIN::prepare stage (like it
+happens with semi-join check)
+
+2. Representation
+=================
+
+2.1 Option #1: Convert to TABLE_LIST
+------------------------------------
+Make it work like semi-join nests: each jtbm-predicate is converted into a
+TABLE_LIST object. This will make it
+
+ - uniform with semi-joins (we've stepped on all rakes there)
+ - allow to process JTBM-subqueries in ON expressions
+
+simplify_joins() will handle jtbm TABLE_LISTs as some kinds of opaque base
+tables.
+
+for EXPLAIN EXTENDED, it would be natural to print something semi-join like,
+i.e. for
+
+ SELECT ... FROM ot WHERE oe IN (SELECT ie FROM materialized-non-sj-select)
+
+we'll print
+
+ SELECT ... FROM ot SJ (SELECT ie FROM materialized-non-sj-select) ON oe=XX
+
+the XX part is not clear. we don't want to print 'ie' the second time here?
+
+2.2 On subquery predicate removal
+---------------------------------
+Q: if we remove the subquery predicate permanently, who will run
+fix_fields() for it? For semi-joins we don't have the problem as we
+inject into ON expression (right? or not? we have sj_on_expr, too...
+(Investigation: we the the same Item* pointer both in WHERE and
+as sj_on_expr. fix_fields() is called for the WHERE part and that's
+how sj_on_expr gets fixed. This works as long as
+Item_func_eq::fix_fields() does not try to substitute itself with
+another item).
+A: ?
+
+2.3 Option #2: No TABLE_LIST, convert to JOIN_TAB right away
+------------------------------------------------------------
+JOIN_TABs only live for the duration of one PS re-execution, so we'll have to
+- make conversion fully undoable
+- perform it sufficiently late in the optimization process, at the point
+ where JOIN_TABs are already allocated.
+
+Note that if we don't position JTBM predicates in join's TABLE_LIST tree, then
+it will be impossible to handle JTBM queries inside/outside of outer joins.
+
+2.3 What is expected of the result of conversion
+------------------------------------------------
+Join [pre]optimization relies on each optimized entity to have a bit in
+table_map.
+
+TODO: where do we check if there will be enough bits for everyone? (at the
+ point where we assign them?)
+
+The bit stored in join_tab->table->map, and the apparent problem is that JTBM
+join_tabs do not naturally have TABLE* object.
+
+We could use the the one that will be used for Materialization, but that will
+stop working when we will have to include IN->EXISTS in the choice.
+
+Current approach: don't create a table. create a table_map element in JOIN_TAB
+instead. Evgen has probably done something like that already.
+
+3. Pre-optimization steps
+=========================
+JOIN_TABs are allocated in make_join_statistics(). This where the changes will
+be needed: for JOIN_TABs that correspond to JTBM-tables:
+
+- don't set tab->table, set tab->jtbm_select (or whatever)
+- run subquery's optimizer to get its output cardinality
+
+3.1 Constant detection
+----------------------
+What about subqueries that "are constant"?
+ const_item IN (SELECT uncorrelated) -> is constant, but not something
+ we would want to evaluate.
+ something IN (SELECT from_constant_join) -> is constant
+
+Do we need to mark their JOIN_TABs as constant?
+
+3.3 update_ref_and_keys
+-----------------------
+* Walk through JTBM elements and inject KEYUSE elements for their
+ IN-equalities.
+
+TODO: KEYUSE elements imply presense of KEYs! Which we don't have!
+
+3.4 JOIN_TAB sorting criteria
+-----------------------------
+Q: Where do we put JTBM's join_tab when pre-sorting records?
+A: it should sort as regular table.
+
+TODO: where do we remove the predicates from the WHERE?
+ - remove them like SJ-converter does
+ - remove them with optimizer (like remove_eq_conds does)
+
+4. Optimization
+===============
+Add a branch in best_access_path to account for
+- JTBM-Materialization
+- JTBM-Materialization-Scan.
+
+5. Execution
+============
+* We should be able to reuse item_subselect.cc code for lookups
+* But will have to use our own temptable scan code
+
+TODO: is it possible to have any unification with SJ-Materialization?
+
+User interface
+--------------
+Any @@optimizer_switch flags for all this?
+
-=-=(Igor - Wed, 10 Mar 2010, 22:02)=-=-
High Level Description modified.
--- /tmp/wklog.90.old.2007 2010-03-10 22:02:23.000000000 +0000
+++ /tmp/wklog.90.new.2007 2010-03-10 22:02:23.000000000 +0000
@@ -13,8 +13,8 @@
for each record R2 in big_table such that oe=R1
pass R2 to output
-Semi-join materialization supports such strategy with SJM-Scan strategy. This WL
-entry is about adding support for such strategies for non-semijoin subqueries.
+Semi-join materialization supports the inside-out strategy. This WL entry is
+about adding support for such strategies for non-semijoin subqueries.
Once WL#89 is done, there will be a cost-based choice between
-=-=(Igor - Wed, 10 Mar 2010, 21:52)=-=-
Status updated.
--- /tmp/wklog.90.old.882 2010-03-10 21:52:02.000000000 +0000
+++ /tmp/wklog.90.new.882 2010-03-10 21:52:02.000000000 +0000
@@ -1 +1 @@
-Un-Assigned
+Assigned
-=-=(Psergey - Sun, 28 Feb 2010, 15:37)=-=-
High Level Description modified.
--- /tmp/wklog.90.old.23524 2010-02-28 15:37:47.000000000 +0000
+++ /tmp/wklog.90.new.23524 2010-02-28 15:37:47.000000000 +0000
@@ -15,3 +15,7 @@
Semi-join materialization supports such strategy with SJM-Scan strategy. This WL
entry is about adding support for such strategies for non-semijoin subqueries.
+
+
+Once WL#89 is done, there will be a cost-based choice between
+Materialization+lookup, Materialization+scan, and IN->EXISTS+lookup strategies.
-=-=(Psergey - Sun, 28 Feb 2010, 15:22)=-=-
High-Level Specification modified.
--- /tmp/wklog.90.old.23033 2010-02-28 15:22:09.000000000 +0000
+++ /tmp/wklog.90.new.23033 2010-02-28 15:22:09.000000000 +0000
@@ -1 +1,33 @@
+Basic idea on how this could be achieved:
+
+Pre-optimization phase
+----------------------
+
+The rewrite
+~~~~~~~~~~~
+If we find a subquery predicate that is
+- not processed by current semi-join optimizations
+- is an AND-part of the WHERE/ON clause
+- can be executed with Materialization
+
+then
+- Remove the predicate from WHERE/ON clause
+- Add a special JOIN_TAB object instead.
+
+Plan options
+~~~~~~~~~~~~
+- Use the IN-equality to create KEYUSE elements.
+
+Optimization
+------------
+- Pre-optimize the subquery so we know materialization cost
+- Whenever best_access_path() encounters the "special JOIN_TAB" it should
+ consider two strategies:
+ A. Materialization and making lookups in the materialized table (if applicable)
+ B. Materialization and then scanning the materialized table.
+
+
+EXPLAIN
+-------
+TODO how this will look in EXPLAIN output?
-=-=(Psergey - Sun, 28 Feb 2010, 14:56)=-=-
Dependency created: 91 now depends on 90
-=-=(Psergey - Sun, 28 Feb 2010, 14:54)=-=-
Dependency deleted: 94 no longer depends on 90
-=-=(Psergey - Sun, 28 Feb 2010, 14:47)=-=-
Title modified.
--- /tmp/wklog.90.old.21903 2010-02-28 14:47:54.000000000 +0000
+++ /tmp/wklog.90.new.21903 2010-02-28 14:47:54.000000000 +0000
@@ -1 +1 @@
- Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE
+Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE
-=-=(Psergey - Sun, 28 Feb 2010, 14:47)=-=-
High Level Description modified.
--- /tmp/wklog.90.old.21880 2010-02-28 14:47:28.000000000 +0000
+++ /tmp/wklog.90.new.21880 2010-02-28 14:47:28.000000000 +0000
@@ -1,10 +1,17 @@
-For uncorrelated IN subqueries that can't be converted to semi-joins it is
-necessary to make a cost-based choice between IN->EXISTS and Materialization
-strategies.
+Consider the following case:
-Both strategies handle two cases:
-1. A simple case w/o NULLs handling
-2. Handling NULLs.
+SELECT * FROM big_table
+WHERE oe IN (SELECT ie FROM table_with_few_groups
+ WHERE ...
+ GROUP BY group_col) AND ...
-This WL is about making cost-based decision for #1.
+Here the best way to execute the query is:
+ Materialize the subquery;
+ # now run the join:
+ for each record R1 in materialized table
+ for each record R2 in big_table such that oe=R1
+ pass R2 to output
+
+Semi-join materialization supports such strategy with SJM-Scan strategy. This WL
+entry is about adding support for such strategies for non-semijoin subqueries.
-=-=(Psergey - Sun, 28 Feb 2010, 14:47)=-=-
Title modified.
--- /tmp/wklog.90.old.21859 2010-02-28 14:47:02.000000000 +0000
+++ /tmp/wklog.90.new.21859 2010-02-28 14:47:02.000000000 +0000
@@ -1 +1 @@
-Subqueries: cost-based choice between Materialization and IN->EXISTS transformation
+ Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE
------------------------------------------------------------
-=-=(View All Progress Notes, 11 total)=-=-
http://askmonty.org/worklog/index.pl?tid=90&nolimit=1
DESCRIPTION:
Consider the following case:
SELECT * FROM big_table
WHERE oe IN (SELECT ie FROM table_with_few_groups
WHERE ...
GROUP BY group_col) AND ...
Here the best way to execute the query is:
Materialize the subquery;
# now run the join:
for each record R1 in materialized table
for each record R2 in big_table such that oe=R1
pass R2 to output
Semi-join materialization supports the inside-out strategy. This WL entry is
about adding support for such strategies for non-semijoin subqueries.
Once WL#89 is done, there will be a cost-based choice between
Materialization+lookup, Materialization+scan, and IN->EXISTS+lookup strategies.
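To make the intended execution order concrete, here is a small self-contained
C++ mock-up of the inside-out plan (the data and container choices are
illustrative assumptions only, not server code):

  #include <cstdio>
  #include <map>
  #include <set>
  #include <string>

  int main()
  {
    // big_table with an index on column oe, modeled as oe -> row payload.
    std::multimap<int, std::string> big_table= {
      {1, "row-a"}, {1, "row-b"}, {2, "row-c"}, {5, "row-d"}
    };

    // Step 1: materialize the subquery into a duplicate-free temp table.
    std::set<int> materialized= {1, 5, 7};  // SELECT ie FROM ... GROUP BY ...

    // Step 2: inside-out join: scan the materialized table and, for each
    // value, use the index on big_table.oe to fetch the matching rows.
    for (int r1 : materialized)
    {
      auto range= big_table.equal_range(r1);
      for (auto it= range.first; it != range.second; ++it)
        std::printf("oe=%d  %s\n", r1, it->second.c_str());
    }
    return 0;
  }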
HIGH-LEVEL SPECIFICATION:
Basic idea on how this could be achieved:
Pre-optimization phase
----------------------
The rewrite
~~~~~~~~~~~
If we find a subquery predicate that
- is not processed by current semi-join optimizations
- is an AND-part of the WHERE/ON clause
- can be executed with Materialization
then
- Remove the predicate from WHERE/ON clause
- Add a special JOIN_TAB object instead.
Plan options
~~~~~~~~~~~~
- Use the IN-equality to create KEYUSE elements.
Optimization
------------
- Pre-optimize the subquery so we know materialization cost
- Whenever best_access_path() encounters the "special JOIN_TAB" it should
consider two strategies:
A. Materialization and making lookups in the materialized table (if applicable)
B. Materialization and then scanning the materialized table.
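As a toy illustration of the decision best_access_path() would have to make
between strategies A and B, here is a standalone sketch; the cost formulas
and constants below are invented placeholders, not the server's cost model:

  #include <cstdio>

  int main()
  {
    double materialize_cost= 1000.0; // filling the temp table (paid once)
    double subq_rows= 500.0;         // rows in the materialized table
    double outer_rows= 1e6;          // row combinations of the outer tables
    double lookup_unit= 0.01;        // cost of one temp-table lookup
    double outer_ref_unit= 0.2;      // cost of one indexed probe into outer

    // A. Materialization + lookup: probe the temp table once per outer row.
    double cost_lookup= materialize_cost + outer_rows * lookup_unit;

    // B. Materialization + scan: scan the temp table and probe the outer
    //    tables once per materialized row.
    double cost_scan= materialize_cost + subq_rows * outer_ref_unit;

    std::printf("strategy %s wins (%.0f vs %.0f)\n",
                cost_lookup < cost_scan ? "A (lookup)" : "B (scan)",
                cost_lookup, cost_scan);
    return 0;
  }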
EXPLAIN
-------
TODO: how will this look in EXPLAIN output?
LOW-LEVEL DESIGN:
<contents>
1. Applicability check
2. Representation
2.1 Option #1: Convert to TABLE_LIST
2.2 On subquery predicate removal
2.3 Option #2: No TABLE_LIST, convert to JOIN_TAB right away
2.3 What is expected of the result of conversion
3. Pre-optimization steps
3.1 Constant detection
3.3 update_ref_and_keys
3.4 JOIN_TAB sorting criteria
4. Optimization
5. Execution
User interface.
</contents>
We'll call the new execution strategy "jtbm-materialization", for lack of a
better name.
1. Applicability check
======================
Whether a subquery can be processed with jtbm-materialization can be checked
at the JOIN::prepare stage (as is done for the semi-join check).
2. Representation
=================
2.1 Option #1: Convert to TABLE_LIST
------------------------------------
Make it work like semi-join nests: each jtbm-predicate is converted into a
TABLE_LIST object. This will
- make it uniform with semi-joins (we've stepped on all rakes there)
- allow us to process JTBM-subqueries in ON expressions
simplify_joins() will handle jtbm TABLE_LISTs as opaque base tables.
For EXPLAIN EXTENDED, it would be natural to print something semi-join-like,
i.e. for
SELECT ... FROM ot WHERE oe IN (SELECT ie FROM materialized-non-sj-select)
we'll print
SELECT ... FROM ot SJ (SELECT ie FROM materialized-non-sj-select) ON oe=XX
The XX part is not clear: we don't want to print 'ie' a second time here, do we?
2.2 On subquery predicate removal
---------------------------------
Q: if we remove the subquery predicate permanently, who will run
fix_fields() for it? For semi-joins we don't have this problem, as we
inject into the ON expression (right? or not? we have sj_on_expr, too...
(Investigation: we use the same Item* pointer both in the WHERE and
as sj_on_expr. fix_fields() is called for the WHERE part and that's
how sj_on_expr gets fixed. This works as long as
Item_func_eq::fix_fields() does not try to substitute itself with
another item).
A: ?
2.3 Option #2: No TABLE_LIST, convert to JOIN_TAB right away
------------------------------------------------------------
JOIN_TABs only live for the duration of one PS re-execution, so we'll have to
- make conversion fully undoable
- perform it sufficiently late in the optimization process, at the point
where JOIN_TABs are already allocated.
Note that if we don't position JTBM predicates in join's TABLE_LIST tree, then
it will be impossible to handle JTBM queries inside/outside of outer joins.
2.3 What is expected of the result of conversion
------------------------------------------------
Join [pre]optimization relies on each optimized entity having a bit in
table_map.
TODO: where do we check if there will be enough bits for everyone? (at the
point where we assign them?)
The bit is stored in join_tab->table->map, and the apparent problem is that
JTBM join_tabs do not naturally have a TABLE* object.
We could use the one that will be used for Materialization, but that will
stop working once we have to include IN->EXISTS in the choice.
Current approach: don't create a table; create a table_map element in JOIN_TAB
instead. Evgen has probably done something like that already.
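A tiny standalone illustration of the bit-per-entity idea, under the
assumption that a JTBM join_tab simply gets the next free bit even though it
owns no TABLE object (table_map is modeled as a plain 64-bit mask; the struct
and member names below are illustrative, not server symbols):

  #include <cassert>
  #include <cstdint>
  #include <cstdio>

  typedef uint64_t table_map;

  struct Tab_sketch
  {
    table_map map;   // the bit that identifies this entity in the join
    bool is_jtbm;    // true for a materialized-subquery pseudo-table
  };

  int main()
  {
    Tab_sketch tabs[3]= {};
    unsigned next_bit= 0;

    for (Tab_sketch &t : tabs)
    {
      assert(next_bit < 64);               // "enough bits for everyone"
      t.map= table_map(1) << next_bit++;
    }
    tabs[2].is_jtbm= true;                 // JTBM entity: has a bit, no TABLE*

    table_map all_tables= tabs[0].map | tabs[1].map | tabs[2].map;
    std::printf("all-tables map: 0x%llx\n", (unsigned long long) all_tables);
    return 0;
  }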
3. Pre-optimization steps
=========================
JOIN_TABs are allocated in make_join_statistics(). This is where the changes
will be needed: for JOIN_TABs that correspond to JTBM-tables:
- don't set tab->table, set tab->jtbm_select (or whatever)
- run subquery's optimizer to get its output cardinality
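A standalone mock-up of what this per-JOIN_TAB setup could look like
(jtbm_select and the other members are hypothetical names used only for
illustration; they are not existing server fields):

  #include <cstdio>

  struct Select_sketch { double estimated_rows; };  // stand-in for a subquery

  struct Join_tab_sketch
  {
    void *table;                  // real tables: TABLE*; JTBM tabs: NULL
    Select_sketch *jtbm_select;   // subquery to materialize (JTBM tabs only)
    double records;               // output cardinality estimate
  };

  // Mirrors the two steps above: a JTBM tab gets no TABLE, and we
  // pre-optimize the subquery to obtain its output cardinality.
  static void setup_jtbm_tab(Join_tab_sketch *tab, Select_sketch *subq)
  {
    tab->table= 0;
    tab->jtbm_select= subq;
    tab->records= subq->estimated_rows;
  }

  int main()
  {
    Select_sketch subq= {500.0};
    Join_tab_sketch tab;
    setup_jtbm_tab(&tab, &subq);
    std::printf("JTBM tab cardinality: %g rows\n", tab.records);
    return 0;
  }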
3.1 Constant detection
----------------------
What about subqueries that "are constant"?
const_item IN (SELECT uncorrelated) -> is constant, but not something
we would want to evaluate.
something IN (SELECT from_constant_join) -> is constant
Do we need to mark their JOIN_TABs as constant?
3.3 update_ref_and_keys
-----------------------
* Walk through JTBM elements and inject KEYUSE elements for their
IN-equalities.
TODO: KEYUSE elements imply the presence of KEYs, which we don't have!
3.4 JOIN_TAB sorting criteria
-----------------------------
Q: Where do we put JTBM's join_tab when pre-sorting records?
A: it should sort as a regular table.
TODO: where do we remove the predicates from the WHERE?
- remove them like SJ-converter does
- remove them with optimizer (like remove_eq_conds does)
4. Optimization
===============
Add a branch in best_access_path to account for
- JTBM-Materialization
- JTBM-Materialization-Scan.
5. Execution
============
* We should be able to reuse item_subselect.cc code for lookups
* But will have to use our own temptable scan code
TODO: is it possible to have any unification with SJ-Materialization?
User interface
--------------
Any @@optimizer_switch flags for all this?
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Progress (by Knielsen): New replication APIs (107)
by worklog-noreply@askmonty.org 24 Mar '10
by worklog-noreply@askmonty.org 24 Mar '10
24 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: New replication APIs
CREATION DATE..: Mon, 15 Mar 2010, 13:55
SUPERVISOR.....: Knielsen
IMPLEMENTOR....: Knielsen
COPIES TO......: Sergei
CATEGORY.......: Server-Sprint
TASK ID........: 107 (http://askmonty.org/worklog/?tid=107)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 36
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Knielsen - Wed, 24 Mar 2010, 10:39)=-=-
Design discussions
Worked 11 hours and estimate 0 hours remain (original estimate increased by 11 hours).
-=-=(Knielsen - Mon, 15 Mar 2010, 14:28)=-=-
Research into the problem, and discussions on phone/mailing list
Worked 25 hours and estimate 0 hours remain (original estimate increased by 25 hours).
-=-=(Guest - Mon, 15 Mar 2010, 14:18)=-=-
High-Level Specification modified.
--- /tmp/wklog.107.old.9086 2010-03-15 14:18:18.000000000 +0000
+++ /tmp/wklog.107.new.9086 2010-03-15 14:18:18.000000000 +0000
@@ -1 +1,43 @@
+Current ideas/status after discussions on the mailing list:
+
+ - Implement a set of plugin APIs and use them to move all of the existing
+ MySQL replication into a (set of) plugins.
+
+ - Design the APIs so that they can support full MySQL replication, but also
+ so that they do not hardcode assumptions about how this replication
+ implementation is done, and so that they will be suitable for other types of
+ replication (Tungsten, Galera, parallel replication, ...).
+
+ - APIs need to include the concept of a global transaction ID. Need to
+ determine the extent to which the semantics of such ID will be defined
+ by the API, and to which extend it will be defined by the plugin
+ implementations.
+
+ - APIs should properly support reliable crash-recovery with decent
+ performance (eg. not require multiple mandatory fsync()s per commit, and
+ not make group commit impossible).
+
+ - Would be nice if the API provided facilities for implementing good
+ consistency checking support (mainly checking master tables against slave
+ tables is hard here I think, but also applying wrong binlog data and
+ individual event checksums).
+
+
+Steps to make this more concrete:
+
+ - Investigate the current MySQL replication, and list all of the places where
+ a plugin implementation will need to connect/hook into the MySQL server.
+ * handler::{write,update,delete}_row()
+ * Statement execution
+ * Transaction start/commit
+ * Table open
+ * Query safe/not/safe for statement based replication
+ * Statement-based logging details (user variables, random seed, etc.)
+ * ...
+
+ - Use this list to make an initial sketch of the set of APIs we need.
+
+ - Use the list to determine the feasibility of this project and the level of
+ detail in the API needed to support a full replication implementation as a
+ plugin.
-=-=(Serg - Mon, 15 Mar 2010, 14:13)=-=-
Observers changed: Sergei
DESCRIPTION:
This is a top-level task for the project of designing a new set of replication
APIs for MariaDB.
This task is for the initial discussion of what to do and where to focus.
The project is started in this email thread:
https://lists.launchpad.net/maria-developers/msg01998.html
HIGH-LEVEL SPECIFICATION:
Current ideas/status after discussions on the mailing list:
- Implement a set of plugin APIs and use them to move all of the existing
MySQL replication into a (set of) plugins.
- Design the APIs so that they can support full MySQL replication, but also
so that they do not hardcode assumptions about how this replication
implementation is done, and so that they will be suitable for other types of
replication (Tungsten, Galera, parallel replication, ...).
- APIs need to include the concept of a global transaction ID. Need to
determine the extent to which the semantics of such an ID will be defined
by the API, and to what extent it will be defined by the plugin
implementations.
- APIs should properly support reliable crash-recovery with decent
performance (e.g. not require multiple mandatory fsync()s per commit, and
not make group commit impossible).
- Would be nice if the API provided facilities for implementing good
consistency checking support (mainly checking master tables against slave
tables, which I think is the hard part here, but also detecting application
of wrong binlog data, and individual event checksums).
Steps to make this more concrete:
- Investigate the current MySQL replication, and list all of the places where
a plugin implementation will need to connect/hook into the MySQL server.
* handler::{write,update,delete}_row()
* Statement execution
* Transaction start/commit
* Table open
* Query safe/not safe for statement-based replication
* Statement-based logging details (user variables, random seed, etc.)
* ...
- Use this list to make an initial sketch of the set of APIs we need.
- Use the list to determine the feasibility of this project and the level of
detail in the API needed to support a full replication implementation as a
plugin.
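To give the hook list above some shape, here is a purely speculative C++
interface sketch; none of these class or method names exist in the server,
and the real API would certainly look different:

  #include <cstdint>
  #include <string>

  // Hypothetical set of callbacks a replication plugin could implement.
  class Replication_event_consumer_sketch
  {
  public:
    virtual ~Replication_event_consumer_sketch() {}

    // Row-level hooks, roughly where handler::{write,update,delete}_row()
    // would call into the plugin.
    virtual void row_written(const std::string &table,
                             const std::string &after_image)= 0;
    virtual void row_updated(const std::string &table,
                             const std::string &before_image,
                             const std::string &after_image)= 0;
    virtual void row_deleted(const std::string &table,
                             const std::string &before_image)= 0;

    // Statement and transaction boundaries, carrying a global transaction
    // ID whose exact semantics are one of the open questions above.
    virtual void statement_executed(const std::string &query)= 0;
    virtual void transaction_committed(uint64_t global_trx_id)= 0;
  };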
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Knielsen): Table elimination (17)
by worklog-noreply@askmonty.org 24 Mar '10
by worklog-noreply@askmonty.org 24 Mar '10
24 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Table elimination
CREATION DATE..: Sun, 10 May 2009, 19:57
SUPERVISOR.....: Monty
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 17 (http://askmonty.org/worklog/?tid=17)
VERSION........: Server-9.x
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 1
ESTIMATE.......: 3 (hours remain)
ORIG. ESTIMATE.: 3
PROGRESS NOTES:
-=-=(Guest - Wed, 24 Mar 2010, 05:58)=-=-
Status updated.
--- /tmp/wklog.17.old.13223 2010-03-24 05:58:26.000000000 +0000
+++ /tmp/wklog.17.new.13223 2010-03-24 05:58:26.000000000 +0000
@@ -1 +1 @@
-In-Progress
+Assigned
-=-=(Guest - Wed, 24 Mar 2010, 05:58)=-=-
Privacy level updated.
--- /tmp/wklog.17.old.13223 2010-03-24 05:58:26.000000000 +0000
+++ /tmp/wklog.17.new.13223 2010-03-24 05:58:26.000000000 +0000
@@ -1 +1 @@
-n
+y
-=-=(Guest - Wed, 28 Oct 2009, 02:01)=-=-
Version updated.
--- /tmp/wklog.17.old.24041 2009-10-28 02:01:58.000000000 +0200
+++ /tmp/wklog.17.new.24041 2009-10-28 02:01:58.000000000 +0200
@@ -1 +1 @@
-9.x
+Server-9.x
-=-=(Guest - Sun, 16 Aug 2009, 16:16)=-=-
Category updated.
--- /tmp/wklog.17.old.24882 2009-08-16 16:16:49.000000000 +0300
+++ /tmp/wklog.17.new.24882 2009-08-16 16:16:49.000000000 +0300
@@ -1 +1 @@
-Client-BackLog
+Server-Sprint
-=-=(Guest - Sun, 16 Aug 2009, 16:16)=-=-
Version updated.
--- /tmp/wklog.17.old.24882 2009-08-16 16:16:49.000000000 +0300
+++ /tmp/wklog.17.new.24882 2009-08-16 16:16:49.000000000 +0300
@@ -1 +1 @@
-Server-5.1
+9.x
-=-=(Guest - Wed, 29 Jul 2009, 21:41)=-=-
Low Level Design modified.
--- /tmp/wklog.17.old.26011 2009-07-29 21:41:04.000000000 +0300
+++ /tmp/wklog.17.new.26011 2009-07-29 21:41:04.000000000 +0300
@@ -2,163 +2,146 @@
~maria-captains/maria/maria-5.1-table-elimination tree.
<contents>
-1. Conditions for removal
-1.1 Quick check if there are candidates
-2. Removal operation properties
-3. Removal operation
-4. User interface
-5. Tests and benchmarks
-6. Todo, issues to resolve
-6.1 To resolve
-6.2 Resolved
-7. Additional issues
+1. Elimination criteria
+2. No outside references check
+2.1 Quick check if there are tables with no outside references
+3. One-match check
+3.1 Functional dependency source #1: Potential eq_ref access
+3.2 Functional dependency source #2: col2=func(col1)
+3.3 Functional dependency source #3: One or zero records in the table
+3.4 Functional dependency check implementation
+3.4.1 Equality collection: Option1
+3.4.2 Equality collection: Option2
+3.4.3 Functional dependency propagation - option 1
+3.4.4 Functional dependency propagation - option 2
+4. Removal operation properties
+5. Removal operation
+6. User interface
+6.1 @@optimizer_switch flag
+6.2 EXPLAIN [EXTENDED]
+7. Miscellaneous adjustments
+7.1 Fix used_tables() of aggregate functions
+7.2 Make subquery predicates collect their outer references
+8. Other concerns
+8.1 Relationship with outer->inner joins converter
+8.2 Relationship with prepared statements
+8.3 Relationship with constant table detection
+9. Tests and benchmarks
</contents>
It's not really about elimination of tables, it's about elimination of inner
sides of outer joins.
-1. Conditions for removal
--------------------------
-We can eliminate an inner side of outer join if:
-1. For each record combination of outer tables, it will always produce
- exactly one record.
-2. There are no references to columns of the inner tables anywhere else in
+1. Elimination criteria
+=======================
+We can eliminate inner side of an outer join nest if:
+
+1. There are no references to columns of the inner tables anywhere else in
the query.
+2. For each record combination of outer tables, it will always produce
+ exactly one matching record combination.
+
+Most of effort in this WL entry is checking these two conditions.
-#1 means that every table inside the outer join nest is:
- - is a constant table:
- = because it can be accessed via eq_ref(const) access, or
- = it is a zero-rows or one-row MyISAM-like table [MARK1]
- - has an eq_ref access method candidate.
-
-#2 means that WHERE clause, ON clauses of embedding outer joins, ORDER BY,
- GROUP BY and HAVING do not refer to the inner tables of the outer join
- nest.
-
-1.1 Quick check if there are candidates
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Before we start to enumerate join nests, here is a quick way to check if
-there *can be* something to be removed:
+2. No outside references check
+==============================
+Criterion #1 means that the WHERE clause, ON clauses of embedding/subsequent
+outer joins, ORDER BY, GROUP BY and HAVING must have no references to inner
+tables of the outer join nest we're trying to remove.
+
+For multi-table UPDATE/DELETE we also must not remove tables that we're
+updating/deleting from or tables that are used in UPDATE's SET clause.
+
+2.1 Quick check if there are tables with no outside references
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Before we start searching for outer join nests that could be eliminated,
+we'll do a quick and cheap check if there possibly could be something that
+could be eliminated:
- if ((tables used in select_list |
+ if (there are outer joins &&
+ (tables used in select_list |
tables used in group/order by UNION |
- tables used in where) != bitmap_of_all_tables)
+ tables used in where) != bitmap_of_all_join_tables)
{
attempt table elimination;
}
-2. Removal operation properties
--------------------------------
-* There is always one way to remove (no choice to remove either this or that)
-* It is always better to remove as much tables as possible (at least within
- our cost model).
-Thus, no need for any cost calculations/etc. It's an unconditional rewrite.
-3. Removal operation
---------------------
-* Remove the outer join nest's nested join structure (i.e. get the
- outer join's TABLE_LIST object $OJ and remove it from $OJ->embedding,
- $OJ->embedding->nested_join. Update table_map's of all ancestor nested
- joins). [MARK2]
+3. One-match check
+==================
+We can eliminate inner side of outer join if it will always generate exactly
+one matching record combination.
-* Move the tables and their JOIN_TABs to front like it is done with const
- tables, with exception that if eliminated outer join nest was within
- another outer join nest, that shouldn't prevent us from moving away the
- eliminated tables.
+By definition of OUTER JOIN, a NULL-complemented record combination will be
+generated when the inner side of outer join has not produced any matches.
-* Update join->table_count and all-join-tables bitmap.
+What remains to be checked is that there is no possiblity that inner side of
+the outer join could produce more than one matching record combination.
-* That's it. Nothing else?
+We'll refer to one-match property as "functional dependency":
-4. User interface
------------------
-* We'll add an @@optimizer switch flag for table elimination. Tentative
- name: 'table_elimination'.
- (Note ^^ utility of the above questioned ^, as table elimination can never
- be worse than no elimination. We're leaning towards not adding the flag)
-
-* EXPLAIN will not show the removed tables at all. This will allow to check
- if tables were removed, and also will behave nicely with anchor model and
- VIEWs: stuff that user doesn't care about just won't be there.
+- A outer join nest is functionally dependent [wrt outer tables] if it will
+ produce one matching record combination per each record combination of
+ outer tables
-5. Tests and benchmarks
------------------------
-Create a benchmark in sql-bench which checks if the DBMS has table
-elimination.
-[According to Monty] Run
- - queries that would use elimination
- - queries that are very similar to one above (so that they would have same
- QEP, execution cost, etc) but cannot use table elimination.
-then compare run times and make a conclusion about whether dbms supports table
-elimination.
+- A table is functionally dependent wrt certain set of dependency tables, if
+ record combination of dependency tables uniquely identifies zero or one
+ matching record in the table
-6. Todo, issues to resolve
---------------------------
+- Definitions of functional dependency of keys (=column tuples) and columns are
+ apparent.
-6.1 To resolve
-~~~~~~~~~~~~~~
-- Relationship with prepared statements.
- On one hand, it's natural to desire to make table elimination a
- once-per-statement operation, like outer->inner join conversion. We'll have
- to limit the applicability by removing [MARK1] as that can change during
- lifetime of the statement.
-
- The other option is to do table elimination every time. This will require to
- rework operation [MARK2] to be undoable.
-
- I'm leaning towards doing the former. With anchor modeling, it is unlikely
- that we'll meet outer joins which have N inner tables of which some are 1-row
- MyISAM tables that do not have primary key.
-
-6.2 Resolved
-~~~~~~~~~~~~
-* outer->inner join conversion is not a problem for table elimination.
- We make outer->inner conversions based on predicates in WHERE. If the WHERE
- referred to an inner table (requirement for OJ->IJ conversion) then table
- elimination would not be applicable anyway.
-
-* For Multi-table UPDATEs/DELETEs, need to also analyze the SET clause:
- - affected tables must not be eliminated
- - tables that are used on the right side of the SET x=y assignments must
- not be eliminated either.
+Our goal is to prove that the entire join nest is functionally-dependent.
-* Aggregate functions used to report that they depend on all tables, that is,
+Join nest is functionally dependent (on the otside tables) if each of its
+elements (those can be either base tables or join nests) is functionally
+dependent.
- item_agg_func->used_tables() == (1ULL << join->tables) - 1
+Functional dependency is transitive: if table A is f-dependent on the outer
+tables and table B is f.dependent on {A, outer_tables} then B is functionally
+dependent on the outer tables.
+
+Subsequent sections list cases when we can declare a table to be
+functionally-dependent.
+
+3.1 Functional dependency source #1: Potential eq_ref access
+------------------------------------------------------------
+This is the most practically-important case. Taking the example from the HLD
+of this WL entry:
+
+ select
+ A.colA
+ from
+ tableA A
+ left outer join
+ tableB B
+ on
+ B.id = A.id;
- always. Fixed it, now aggregate function reports it depends on
- tables that its arguments depend on. In particular, COUNT(*) reports
- that it depends on no tables (item_count_star->used_tables()==0).
- One consequence of that is that "item->used_tables()==0" is not
- equivalent to "item->const_item()==true" anymore (not sure if it's
- "anymore" or this has been already happening).
-
-* EXPLAIN EXTENDED warning text was generated after the JOIN object has
- been discarded. This didn't allow to use information about join plan
- when printing the warning. Fixed this by keeping the JOIN objects until
- we've printed the warning (have also an intent to remove the const
- tables from the join output).
-
-7. Additional issues
---------------------
-* We remove ON clauses within outer join nests. If these clauses contain
- subqueries, they probably should be gone from EXPLAIN output also?
- Yes. Current approach: when removing an outer join nest, walk the ON clause
- and mark subselects as eliminated. Then let EXPLAIN code check if the
- SELECT was eliminated before the printing (EXPLAIN is generated by doing
- a recursive descent, so the check will also cause children of eliminated
- selects not to be printed)
-
-* Table elimination is performed after constant table detection (but before
- the range analysis). Constant tables are technically different from
- eliminated ones (e.g. the former are shown in EXPLAIN and the latter aren't).
- Considering we've already done the join_read_const_table() call, is there any
- real difference between constant table and eliminated one? If there is, should
- we mark const tables also as eliminated?
- from user/EXPLAIN point of view: no. constant table is the one that we read
- one record from. eliminated table is the one that we don't acccess at all.
+and generalizing it: a table TBL is functionally-dependent if the ON
+expression allows to construct a potential eq_ref access to table TBL that
+uses only outer or functionally-dependent tables.
+
+In other words: table TBL will have one match if the ON expression can be
+converted into this form
+
+ TBL.unique_key=func(one_match_tables) AND .. remainder ...
+
+(with appropriate extension for multi-part keys), where
+
+ one_match_tables= {
+ tables that are not on the inner side of the outer join in question, and
+ functionally dependent tables
+ }
+
+Note that this will cover constant tables, except those that are constant because
+they have 0/1 record or are partitioned and have no used partitions.
+
+
+3.2 Functional dependency source #2: col2=func(col1)
+----------------------------------------------------
+This comes from the second example in the HLS:
-* What is described above will not be able to eliminate this outer join
create unique index idx on tableB (id, fromDate);
...
left outer join
@@ -169,32 +152,331 @@
B.fromDate = (select max(sub.fromDate)
from tableB sub where sub.id = A.id);
- This is because condition "B.fromDate= func(tableB)" cannot be used.
- Reason#1: update_ref_and_keys() does not consider such conditions to
- be of any use (and indeed they are not usable for ref access)
- so they are not put into KEYUSE array.
- Reason#2: even if they were put there, we would need to be able to tell
- between predicates like
- B.fromDate= func(B.id) // guarantees only one matching row as
- // B.id is already bound by B.id=A.id
- // hence B.fromDate becomes bound too.
- and
- "B.fromDate= func(B.*)" // Can potentially have many matching
- // records.
- We need to
- - Have update_ref_and_keys() create KEYUSE elements for such equalities
- - Have eliminate_tables() and friends make a more accurate check.
- The right check is to check whether all parts of a unique key are bound.
- If we have keypartX to be bound, then t.keypartY=func(keypartX) makes
- keypartY to be bound.
- The difficulty here is that correlated subquery predicate cannot tell what
- columns it depends on (it only remembers tables).
- Traversing the predicate is expensive and complicated.
- We're leaning towards making each subquery predicate have a List<Item> with
- items that
- - are in the current select
- - and it depends on.
- This list will be useful in certain other subquery optimizations as well,
- it is cheap to collect it in fix_fields() phase, so it will be collected
- for every subquery predicate.
+Here it is apparent that tableB can be eliminated. It is not possible to
+construct eq_ref access to tableB, though, because for the second part of the
+primary key (fromDate column) we only got a condition in this form:
+
+ B.fromDate= func(tableB)
+
+(we write "func(tableB)" because ref optimizer can only determine which tables
+the right part of the equality depends on).
+
+In general case, equality like this doesn't guarantee functional dependency.
+For example, if func() == { return fromDate;}, i.e the ON expression is
+
+ ... ON B.id = A.id and B.fromDate = B.fromDate
+
+then that would allow table B to have multiple matches per record of table A.
+
+In order to be able to distinguish between these two cases, we'll need to go
+down to column level:
+
+- A table is functionally dependent if it has a unique key that's functionally
+ dependent
+
+- A unique key is functionally dependent when all of its columns are
+ functionally dependent
+
+- A table column is functionally dependent if the ON clause allows to extract
+ an AND-part in this form:
+
+ tbl.column = f(functionally-dependent columns or columns of outer tables)
+
+3.3 Functional dependency source #3: One or zero records in the table
+---------------------------------------------------------------------
+A table with one or zero records cannot generate more than one matching
+record. This source is of lesser importance as one/zero-record tables are only
+MyISAM tables.
+
+3.4 Functional dependency check implementation
+----------------------------------------------
+As shown above, we need something similar to KEYUSE structures, but not
+exactly that (we need things that current ref optimizer considers unusable and
+don't need things that it considers usable).
+
+3.4.1 Equality collection: Option1
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+We could
+- extend KEYUSE structures to store all kinds of equalities we need
+- change update_ref_and_keys() and co. to collect equalities both for ref
+ access and for table elimination
+ = [possibly] Improve [eq_]ref access to be able to use equalities in
+ form keypart2=func(keypart1)
+- process the KEYUSE array both by table elimination and by ref access
+ optimizer.
+
++ This requires less effort.
+- Code will have to be changed all over sql_select.cc
+- update_ref_and_keys() and co. already do several unrelated things. Hooking
+ up table elimination will make it even worse.
+
+3.4.2 Equality collection: Option2
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Alternatively, we could process the WHERE clause totally on our own.
++ Table elimination is standalone and easy to detach module.
+- Some code duplication with update_ref_and_keys() and co.
+
+Having got the equalities, we'll to propagate functional dependency property
+to unique keys, tables and, ultimately, join nests.
+
+3.4.3 Functional dependency propagation - option 1
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Borrow the approach used in constant table detection code:
+
+ do
+ {
+ converted= FALSE;
+ for each table T in join nest
+ {
+ if (check_if_functionally_dependent(T))
+ converted= TRUE;
+ }
+ } while (converted == TRUE);
+
+ check_if_functionally_dependent(T)
+ {
+ if (T has eq_ref access based on func_dep_tables)
+ return TRUE;
+
+ Apply the same do-while loop-based approach to available equalities
+ T.column1=func(other columns)
+ to spread the set of functionally-dependent columns. The goal is to get
+ all columns of a certain unique key to be bound.
+ }
+
+
+3.4.4 Functional dependency propagation - option 2
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Analyze the ON expression(s) and build a list of
+
+ tbl.field = expr(...)
+
+equalities. tbl here is a table that belongs to a join nest that could
+potentially be eliminated.
+
+besides those, add to the list
+ - An element for each unique key in the table that needs to be eliminated
+ - An element for each table that needs to be eliminated
+ - An element for each join nest that can be eliminated (i.e. has no
+ references from outside).
+
+Then, setup "reverse dependencies": each element should have pointers to
+elements that are functionally dependent on it:
+
+- "tbl.field=expr(...)" equality is functionally dependent on all fields that
+ are used in "expr(...)" (here we take into account only fields that belong
+ to tables that can potentially be eliminated).
+- a unique key is dependent on all of its components
+- a table is dependent on all of its unique keys
+- a join nest is dependent on all tables that it contains
+
+These pointers are stored in form of one bitmap, such that:
+
+ "X depends on Y" == test( bitmap[(X's number)*n_objects + (Y's number)] )
+
+Each object also stores a number of dependencies it needs to be satisfied
+before it itself is satisfied:
+
+- "tbl.field=expr(...)" needs all its underlying fields (if a field is
+ referenced many times it is counted only once)
+
+- a unique key needs all of its key parts
+
+- a table needs only one of its unique keys
+
+- a join nest needs all of its tables
+
+(TODO: so what do we do when we've marked a table as constant? We'll need to
+update the "field=expr(....)" elements that use fields of that table. And the
+problem is that we won't know how much to decrement from the counters of those
+elements.
+
+Solution#1: switch to table_map() based approach.
+Solution#2: introduce separate elements for each involved field.
+ field will depend on its table,
+ "field=expr" will depend on fields.
+)
+
+Besides the above, let each element have a pointer to another element, so that
+we can have a linked list of elements.
+
+After the above structures have been created, we start the main algorithm.
+
+The first step is to create a list of functionally-dependent elements. We walk
+across array of dependencies and mark those elements that are already bound
+(i.e. their dependencies are satisfied). At the moment those immediately-bound
+are only "field=expr" dependencies that don't refer to any columns that are
+not bound.
+
+The second step is the loop
+
+ while (bound_list is not empty)
+ {
+ Take the first bound element F off the list.
+ Use the bitmap to find out what other elements depended on it
+ for each such element E
+ {
+ if (E becomes bound after F is bound)
+ add E to the list;
+ }
+ }
+
+The last step is to walk through elements that represent the join nests. Those
+that are bound can be eliminated.
+
+4. Removal operation properties
+===============================
+* There is always one way to remove (no choice to remove either this or that)
+* It is always better to remove as much tables as possible (at least within
+ our cost model).
+Thus, no need for any cost calculations/etc. It's an unconditional rewrite.
+
+
+5. Removal operation
+====================
+(This depends a lot on whether we make table elimination a one-off rewrite or
+conditional)
+
+At the moment table elimination is re-done for each join re-execution, hence
+the removal operation is designed not to modify any statement's permanent
+members.
+
+* Remove the outer join nest's nested join structure (i.e. get the
+ outer join's TABLE_LIST object $OJ and remove it from $OJ->embedding,
+ $OJ->embedding->nested_join. Update table_map's of all ancestor nested
+ joins). [MARK2]
+
+* Move the tables and their JOIN_TABs to the front of join order, like it is
+ done with const tables, with exception that if eliminated outer join nest
+ was within another outer join nest, that shouldn't prevent us from moving
+ away the eliminated tables.
+
+* Update join->table_count and all-join-tables bitmap.
+ ^ TODO: not true anymore ^
+
+* That's it. Nothing else?
+
+6. User interface
+=================
+
+6.1 @@optimizer_switch flag
+---------------------------
+Argument againist adding the flag:
+* It is always better to perform table elimination than not to do it.
+
+Arguments for the flag:
+* It is always theoretically possible that the new code will cause unintended
+ slowdowns.
+* Having the flag is useful for QA and comparative benchmarking.
+
+Decision so far: add the flag under #ifdef. Make the flag be present in debug
+builds.
+
+6.2 EXPLAIN [EXTENDED]
+----------------------
+There are two possible options:
+1. Show eliminated tables, like we do with const tables.
+2. Do not show eliminated tables.
+
+We chose option 2, because:
+- the table is not accessed at all (besides locking it)
+- it is more natural for anchor model user - when he's querying an anchor-
+ and attributes view, he doesn't care about the unused attributes.
+
+EXPLAIN EXTENDED+SHOW WARNINGS won't show the removed table either.
+
+NOTE: Before this WL, the warning text was generated after all JOIN objects
+have been destroyed. This didn't allow to use information about join plan
+when printing the warning. We've fixed this by keeping the JOIN objects until
+the warning text has been generated.
+
+Table elimination removes inner sides of outer join, and logically the ON
+clause is also removed. If this clause has any subqueries, they will be
+also removed from EXPLAIN output.
+
+An exception to the above is that if we eliminate a derived table, it will
+still be shown in EXPLAIN output. This comes from the fact that the FROM
+subqueries are evaluated before table elimination is invoked.
+TODO: Is the above ok or still remove parts of FROM subqueries?
+
+7. Miscellaneous adjustments
+============================
+
+7.1 Fix used_tables() of aggregate functions
+--------------------------------------------
+Aggregate functions used to report that they depend on all tables, that is,
+
+ item_agg_func->used_tables() == (1ULL << join->tables) - 1
+
+always. Fixed it, now aggregate function reports that it depends on the
+tables that its arguments depend on. In particular, COUNT(*) reports that it
+depends on no tables (item_count_star->used_tables()==0). One consequence of
+that is that "item->used_tables()==0" is not equivalent to
+"item->const_item()==true" anymore (not sure if it's "anymore" or this has
+been already so for some items).
+
+7.2 Make subquery predicates collect their outer references
+-----------------------------------------------------------
+Per-column functional dependency analysis requires us to take a
+
+ tbl.field = func(...)
+
+equality and tell which columns of which tables are referred from func(...)
+expression. For scalar expressions, this is accomplished by Item::walk()-based
+traversal. It should be reasonably cheap (the only practical Item that can be
+expensive to traverse seems to be a special case of "col IN (const1,const2,
+...)". check if we traverse the long list for such items).
+
+For correlated subqueries, traversal can be expensive, it is cheaper to make
+each subquery item have a list of its outer references. The list can be
+collected at fix_fields() stage with very little extra cost, and then it could
+be used for other optimizations.
+
+
+8. Other concerns
+=================
+
+8.1 Relationship with outer->inner joins converter
+--------------------------------------------------
+One could suspect that outer->inner join conversion could get in the way
+of table elimination by changing outer joins (which could be eliminated)
+to inner (which we will not try to eliminate).
+This concern is not valid: we make outer->inner conversions based on
+predicates in WHERE. If the WHERE referred to an inner table (this is a
+requirement for the conversion) then table elimination would not be
+applicable anyway.
+
+8.2 Relationship with prepared statements
+-----------------------------------------
+On one hand, it's natural to desire to make table elimination a
+once-per-statement operation, like outer->inner join conversion. We'll have
+to limit the applicability by removing [MARK1] as that can change during
+lifetime of the statement.
+
+The other option is to do table elimination every time. This will require to
+rework operation [MARK2] to be undoable.
+
+
+8.3 Relationship with constant table detection
+----------------------------------------------
+Table elimination is performed after constant table detection (but before
+the range analysis). Constant tables are technically different from
+eliminated ones (e.g. the former are shown in EXPLAIN and the latter aren't).
+Considering we've already done the join_read_const_table() call, is there any
+real difference between constant table and eliminated one? If there is, should
+we mark const tables also as eliminated?
+from user/EXPLAIN point of view: no. constant table is the one that we read
+one record from. eliminated table is the one that we don't acccess at all.
+TODO
+
+9. Tests and benchmarks
+=======================
+Create a benchmark in sql-bench which checks if the DBMS has table
+elimination.
+[According to Monty] Run
+ - query Q1 that would use elimination
+ - query Q2 that is very similar to Q1 (so that they would have same
+ QEP, execution cost, etc) but cannot use table elimination.
+then compare run times and make a conclusion about whether the used dbms
+supports table elimination.
-=-=(Guest - Thu, 23 Jul 2009, 20:07)=-=-
Dependency created: 29 now depends on 17
-=-=(Monty - Thu, 23 Jul 2009, 09:19)=-=-
Version updated.
--- /tmp/wklog.17.old.24090 2009-07-23 09:19:32.000000000 +0300
+++ /tmp/wklog.17.new.24090 2009-07-23 09:19:32.000000000 +0300
@@ -1 +1 @@
-Server-9.x
+Server-5.1
-=-=(Guest - Mon, 20 Jul 2009, 14:28)=-=-
deukje weg
Worked 1 hour and estimate 3 hours remain (original estimate increased by 4 hours).
-=-=(Guest - Fri, 17 Jul 2009, 02:44)=-=-
Version updated.
--- /tmp/wklog.17.old.24138 2009-07-17 02:44:49.000000000 +0300
+++ /tmp/wklog.17.new.24138 2009-07-17 02:44:49.000000000 +0300
@@ -1 +1 @@
-9.x
+Server-9.x
------------------------------------------------------------
-=-=(View All Progress Notes, 31 total)=-=-
http://askmonty.org/worklog/index.pl?tid=17&nolimit=1
DESCRIPTION:
Eliminate unneeded tables from SELECT queries.
This will speed up some views and automatically generated queries.
Example:
CREATE TABLE tableB (id int primary key);
select
A.colA
from
tableA A
left outer join
tableB B
on
B.id = A.id;
In this case we can remove table B and the join from the query.
HIGH-LEVEL SPECIFICATION:
Here is an extended explanation of table elimination.
Table elimination is a feature found in some modern query optimizers, of
which Microsoft SQL Server 2005/2008 seems to have the most advanced
implementation. Oracle 11g has also been confirmed to use table
elimination but not to the same extent.
Basically, what table elimination does, is to remove tables from the
execution plan when it is unnecessary to include them. This can, of
course, only happen if the right circumstances arise. Let us for example
look at the following query:
select
A.colA
from
tableA A
left outer join
tableB B
on
B.id = A.id;
When using A as the left table we ensure that the query will return at
least as many rows as there are in that table. For rows where the join
condition (B.id = A.id) is not met, the selected column (A.colA) will
still contain its original value; the columns of the unmatched B.* row will all be NULL.
However, the result set could actually contain more rows than what is
found in tableA if there are duplicates of the column B.id in tableB. If
A contains a row [1, "val1"] and B the rows [1, "other1a"],[1, "other1b"]
then two rows will match in the join condition. The only way to know
what the result will look like is to actually touch both tables during
execution.
Instead, let's say that tableB contains rows that make it possible to
place a unique constraint on the column B.id, for example (and often the
case) a primary key. In this situation we know that we will get exactly
as many rows as there are in tableA, since joining with tableB cannot
introduce any duplicates. If further, as in the example query, we do not
select any columns from tableB, touching that table during execution is
unnecessary. We can remove the whole join operation from the execution
plan.
Both SQL Server 2005/2008 and Oracle 11g will deploy table elimination
in the case described above. Let us look at a more advanced query, where
Oracle fails.
select
A.colA
from
tableA A
left outer join
tableB B
on
B.id = A.id
and
B.fromDate = (
select
max(sub.fromDate)
from
tableB sub
where
sub.id = A.id
);
In this example we have added another join condition, which ensures
that we only pick the matching row from tableB having the latest
fromDate. In this case tableB will contain duplicates of the column
B.id, so in order to ensure uniqueness the primary key has to contain
the fromDate column as well. In other words the primary key of tableB
is (B.id, B.fromDate).
Furthermore, since the subselect ensures that we only pick the latest
B.fromDate for a given B.id we know that at most one row will match
the join condition. We will again have the situation where joining
with tableB cannot affect the number of rows in the result set. Since
we do not select any columns from tableB, the whole join operation can
be eliminated from the execution plan.
SQL Server 2005/2008 will deploy table elimination in this situation as
well. We have not found a way to make Oracle 11g use it for this type of
query. Queries like these arise in two situations. Either when you have
a denormalized model consisting of a fact table with several related
dimension tables, or when you have a highly normalized model where each
attribute is stored in its own table. The example with the subselect is
common whenever you store historized/versioned data.
LOW-LEVEL DESIGN:
The code (currently in development) is at lp:
~maria-captains/maria/maria-5.1-table-elimination tree.
<contents>
1. Elimination criteria
2. No outside references check
2.1 Quick check if there are tables with no outside references
3. One-match check
3.1 Functional dependency source #1: Potential eq_ref access
3.2 Functional dependency source #2: col2=func(col1)
3.3 Functional dependency source #3: One or zero records in the table
3.4 Functional dependency check implementation
3.4.1 Equality collection: Option1
3.4.2 Equality collection: Option2
3.4.3 Functional dependency propagation - option 1
3.4.4 Functional dependency propagation - option 2
4. Removal operation properties
5. Removal operation
6. User interface
6.1 @@optimizer_switch flag
6.2 EXPLAIN [EXTENDED]
7. Miscellaneous adjustments
7.1 Fix used_tables() of aggregate functions
7.2 Make subquery predicates collect their outer references
8. Other concerns
8.1 Relationship with outer->inner joins converter
8.2 Relationship with prepared statements
8.3 Relationship with constant table detection
9. Tests and benchmarks
</contents>
It's not really about elimination of tables, it's about elimination of inner
sides of outer joins.
1. Elimination criteria
=======================
We can eliminate inner side of an outer join nest if:
1. There are no references to columns of the inner tables anywhere else in
the query.
2. For each record combination of outer tables, it will always produce
exactly one matching record combination.
Most of the effort in this WL entry is in checking these two conditions.
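For contrast, a minimal sketch of a query where criterion #2 fails (tableC is a
hypothetical table whose ref column has no unique index, so one row of A can
match several rows of C):
  select A.colA
  from tableA A left outer join tableC C on C.ref = A.id;
  -- nothing outside the ON clause refers to C, but C.ref is not unique, so
  -- the join can change the number of result rows and C cannot be eliminated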
2. No outside references check
==============================
Criterion #1 means that the WHERE clause, ON clauses of embedding/subsequent
outer joins, ORDER BY, GROUP BY and HAVING must have no references to inner
tables of the outer join nest we're trying to remove.
For multi-table UPDATE/DELETE we also must not remove tables that we're
updating/deleting from or tables that are used in UPDATE's SET clause.
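A sketch of the UPDATE case (t1, t2 and their columns are made-up names): even
though t2 is not updated itself, it is used in the SET clause and therefore
must not be removed:
  update t1 left join t2 on t2.pk = t1.a
  set t1.col = t2.col;
  -- t2 supplies values for the SET clause, so it must stay in the plan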
2.1 Quick check if there are tables with no outside references
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before we start searching for outer join nests that could be eliminated,
we'll do a quick and cheap check if there possibly could be something that
could be eliminated:
if (there are outer joins &&
(tables used in select_list |
tables used in group/order by UNION |
tables used in where) != bitmap_of_all_join_tables)
{
attempt table elimination;
}
3. One-match check
==================
We can eliminate inner side of outer join if it will always generate exactly
one matching record combination.
By definition of OUTER JOIN, a NULL-complemented record combination will be
generated when the inner side of outer join has not produced any matches.
What remains to be checked is that there is no possibility that the inner side
of the outer join could produce more than one matching record combination.
We'll refer to one-match property as "functional dependency":
- An outer join nest is functionally dependent [wrt outer tables] if it will
produce one matching record combination per each record combination of
outer tables
- A table is functionally dependent wrt a certain set of dependency tables, if
a record combination of the dependency tables uniquely identifies zero or one
matching record in the table
- Definitions of functional dependency of keys (=column tuples) and columns are
apparent.
Our goal is to prove that the entire join nest is functionally-dependent.
A join nest is functionally dependent (on the outside tables) if each of its
elements (those can be either base tables or join nests) is functionally
dependent.
Functional dependency is transitive: if table A is f-dependent on the outer
tables and table B is f-dependent on {A, outer_tables}, then B is functionally
dependent on the outer tables.
Subsequent sections list cases when we can declare a table to be
functionally-dependent.
3.1 Functional dependency source #1: Potential eq_ref access
------------------------------------------------------------
This is the most practically-important case. Taking the example from the HLD
of this WL entry:
select
A.colA
from
tableA A
left outer join
tableB B
on
B.id = A.id;
and generalizing it: a table TBL is functionally-dependent if the ON
expression allows constructing a potential eq_ref access to table TBL that
uses only outer or functionally-dependent tables.
In other words: table TBL will have one match if the ON expression can be
converted into this form
TBL.unique_key=func(one_match_tables) AND .. remainder ...
(with appropriate extension for multi-part keys), where
one_match_tables= {
tables that are not on the inner side of the outer join in question, and
functionally dependent tables
}
Note that this will cover constant tables, except those that are constant because
they have 0/1 record or are partitioned and have no used partitions.
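A sketch of the multi-part key case, reusing the (id, fromDate) unique key from
the HLS and a hypothetical column A.someDate: both key parts are bound by
expressions over outer tables only, so a potential eq_ref access exists and
tableB has at most one match per row of tableA:
  create unique index idx on tableB (id, fromDate);
  select A.colA
  from tableA A
  left outer join tableB B
    on B.id = A.id and B.fromDate = A.someDate;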
3.2 Functional dependency source #2: col2=func(col1)
----------------------------------------------------
This comes from the second example in the HLS:
create unique index idx on tableB (id, fromDate);
...
left outer join
tableB B
on
B.id = A.id
and
B.fromDate = (select max(sub.fromDate)
from tableB sub where sub.id = A.id);
Here it is apparent that tableB can be eliminated. It is not possible to
construct eq_ref access to tableB, though, because for the second part of the
primary key (fromDate column) we only got a condition in this form:
B.fromDate= func(tableB)
(we write "func(tableB)" because ref optimizer can only determine which tables
the right part of the equality depends on).
In the general case, an equality like this doesn't guarantee functional
dependency. For example, if func() == { return fromDate;}, i.e. the ON expression is
... ON B.id = A.id and B.fromDate = B.fromDate
then that would allow table B to have multiple matches per record of table A.
In order to be able to distinguish between these two cases, we'll need to go
down to column level:
- A table is functionally dependent if it has a unique key that's functionally
dependent
- A unique key is functionally dependent when all of its columns are
functionally dependent
- A table column is functionally dependent if the ON clause allows extracting
an AND-part in this form:
tbl.column = f(functionally-dependent columns or columns of outer tables)
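Applying these rules to the example (a sketch): in the original ON clause both
primary key columns of tableB end up bound by outer references, while in the
degenerate variant the second key column is only "bound" by itself:
  -- eliminable: B.id is bound by A.id; B.fromDate is bound by an expression
  -- whose outer references boil down to A.id
  on B.id = A.id and
     B.fromDate = (select max(sub.fromDate) from tableB sub where sub.id = A.id)

  -- not eliminable: B.fromDate is equated only to itself, so the unique key
  -- (B.id, B.fromDate) is not functionally dependent
  on B.id = A.id and B.fromDate = B.fromDate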
3.3 Functional dependency source #3: One or zero records in the table
---------------------------------------------------------------------
A table with one or zero records cannot generate more than one matching
record. This source is of lesser importance, as the only tables known to have
one/zero records at this point are MyISAM tables.
3.4 Functional dependency check implementation
----------------------------------------------
As shown above, we need something similar to KEYUSE structures, but not
exactly that (we need things that current ref optimizer considers unusable and
don't need things that it considers usable).
3.4.1 Equality collection: Option1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We could
- extend KEYUSE structures to store all kinds of equalities we need
- change update_ref_and_keys() and co. to collect equalities both for ref
access and for table elimination
= [possibly] Improve [eq_]ref access to be able to use equalities in
form keypart2=func(keypart1)
- process the KEYUSE array both by table elimination and by ref access
optimizer.
+ This requires less effort.
- Code will have to be changed all over sql_select.cc
- update_ref_and_keys() and co. already do several unrelated things. Hooking
up table elimination will make it even worse.
3.4.2 Equality collection: Option2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Alternatively, we could process the WHERE clause totally on our own.
+ Table elimination becomes a standalone module that is easy to detach.
- Some code duplication with update_ref_and_keys() and co.
Having got the equalities, we'll need to propagate the functional dependency
property to unique keys, tables and, ultimately, join nests.
3.4.3 Functional dependency propagation - option 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Borrow the approach used in constant table detection code:
do
{
converted= FALSE;
for each table T in join nest
{
if (check_if_functionally_dependent(T))
converted= TRUE;
}
} while (converted == TRUE);
check_if_functionally_dependent(T)
{
if (T has eq_ref access based on func_dep_tables)
return TRUE;
Apply the same do-while loop-based approach to available equalities
T.column1=func(other columns)
to spread the set of functionally-dependent columns. The goal is to get
all columns of a certain unique key to be bound.
}
3.4.4 Functional dependency propagation - option 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Analyze the ON expression(s) and build a list of
tbl.field = expr(...)
equalities. tbl here is a table that belongs to a join nest that could
potentially be eliminated.
Besides those, add to the list:
- An element for each unique key in the table that needs to be eliminated
- An element for each table that needs to be eliminated
- An element for each join nest that can be eliminated (i.e. has no
references from outside).
Then, setup "reverse dependencies": each element should have pointers to
elements that are functionally dependent on it:
- "tbl.field=expr(...)" equality is functionally dependent on all fields that
are used in "expr(...)" (here we take into account only fields that belong
to tables that can potentially be eliminated).
- a unique key is dependent on all of its components
- a table is dependent on all of its unique keys
- a join nest is dependent on all tables that it contains
These pointers are stored in the form of one bitmap, such that:
"X depends on Y" == test( bitmap[(X's number)*n_objects + (Y's number)] )
Each object also stores the number of dependencies that must be satisfied
before the object itself is satisfied:
- "tbl.field=expr(...)" needs all its underlying fields (if a field is
referenced many times it is counted only once)
- a unique key needs all of its key parts
- a table needs only one of its unique keys
- a join nest needs all of its tables
(TODO: so what do we do when we've marked a table as constant? We'll need to
update the "field=expr(....)" elements that use fields of that table. And the
problem is that we won't know how much to decrement from the counters of those
elements.
Solution#1: switch to table_map() based approach.
Solution#2: introduce separate elements for each involved field.
field will depend on its table,
"field=expr" will depend on fields.
)
Besides the above, let each element have a pointer to another element, so that
we can have a linked list of elements.
After the above structures have been created, we start the main algorithm.
The first step is to create a list of functionally-dependent elements. We walk
across the array of dependencies and mark those elements that are already bound
(i.e. their dependencies are satisfied). At this point the only immediately-bound
elements are "field=expr" dependencies that don't refer to any columns that are
not bound.
The second step is the loop
while (bound_list is not empty)
{
Take the first bound element F off the list.
Use the bitmap to find out what other elements depended on it
for each such element E
{
if (E becomes bound after F is bound)
add E to the list;
}
}
The last step is to walk through elements that represent the join nests. Those
that are bound can be eliminated.
4. Removal operation properties
===============================
* There is always one way to remove (no choice to remove either this or that)
* It is always better to remove as many tables as possible (at least within
our cost model).
Thus, no need for any cost calculations/etc. It's an unconditional rewrite.
5. Removal operation
====================
(This depends a lot on whether we make table elimination a one-off rewrite or
conditional)
At the moment table elimination is re-done for each join re-execution, hence
the removal operation is designed not to modify any statement's permanent
members.
* Remove the outer join nest's nested join structure (i.e. get the
outer join's TABLE_LIST object $OJ and remove it from $OJ->embedding,
$OJ->embedding->nested_join. Update table_map's of all ancestor nested
joins). [MARK2]
* Move the tables and their JOIN_TABs to the front of join order, like it is
done with const tables, with exception that if eliminated outer join nest
was within another outer join nest, that shouldn't prevent us from moving
away the eliminated tables.
* Update join->table_count and all-join-tables bitmap.
^ TODO: not true anymore ^
* That's it. Nothing else?
6. User interface
=================
6.1 @@optimizer_switch flag
---------------------------
Argument against adding the flag:
* It is always better to perform table elimination than not to do it.
Arguments for the flag:
* It is always theoretically possible that the new code will cause unintended
slowdowns.
* Having the flag is useful for QA and comparative benchmarking.
Decision so far: add the flag under #ifdef. Make the flag be present in debug
builds.
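If the flag is added, usage would presumably look as follows (a sketch assuming
the tentative flag name 'table_elimination' from an earlier draft of this
entry; the final name and availability outside debug builds are still open):
  set optimizer_switch='table_elimination=off';  -- e.g. for QA or benchmarking
  set optimizer_switch='table_elimination=on';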
6.2 EXPLAIN [EXTENDED]
----------------------
There are two possible options:
1. Show eliminated tables, like we do with const tables.
2. Do not show eliminated tables.
We chose option 2, because:
- the table is not accessed at all (besides locking it)
- it is more natural for an anchor model user: when querying an anchor-
and-attributes view, he doesn't care about the unused attributes.
EXPLAIN EXTENDED+SHOW WARNINGS won't show the removed table either.
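For instance, with option 2 the eliminable query from the HLS would produce an
EXPLAIN listing only tableA (a sketch of the intended behaviour, not captured
output):
  explain select A.colA
  from tableA A left outer join tableB B on B.id = A.id;
  -- only a row for table A appears; B is neither listed nor marked in any way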
NOTE: Before this WL, the warning text was generated after all JOIN objects
had been destroyed. This made it impossible to use information about the join
plan when printing the warning. We've fixed this by keeping the JOIN objects until
the warning text has been generated.
Table elimination removes inner sides of outer join, and logically the ON
clause is also removed. If this clause has any subqueries, they will also be
removed from the EXPLAIN output.
An exception to the above is that if we eliminate a derived table, it will
still be shown in EXPLAIN output. This comes from the fact that the FROM
subqueries are evaluated before table elimination is invoked.
TODO: Is the above ok, or should we still remove parts of FROM subqueries?
7. Miscellaneous adjustments
============================
7.1 Fix used_tables() of aggregate functions
--------------------------------------------
Aggregate functions used to report that they depend on all tables, that is,
item_agg_func->used_tables() == (1ULL << join->tables) - 1
always. Fixed it; now an aggregate function reports that it depends on the
tables that its arguments depend on. In particular, COUNT(*) reports that it
depends on no tables (item_count_star->used_tables()==0). One consequence of
that is that "item->used_tables()==0" is not equivalent to
"item->const_item()==true" anymore (not sure if it's "anymore" or this has
been already so for some items).
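One visible consequence of the fix (a sketch with made-up table names t1, t2;
t2.pk is assumed unique): a COUNT(*)-only select list no longer creates a
spurious dependency on the inner table, so elimination becomes possible:
  select count(*)
  from t1 left outer join t2 on t2.pk = t1.a;
  -- count(*) now reports used_tables()==0, so nothing outside the ON clause
  -- refers to t2 and the join to t2 can be eliminated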
7.2 Make subquery predicates collect their outer references
-----------------------------------------------------------
Per-column functional dependency analysis requires us to take a
tbl.field = func(...)
equality and tell which columns of which tables are referred from func(...)
expression. For scalar expressions, this is accomplished by Item::walk()-based
traversal. It should be reasonably cheap (the only practical Item that can be
expensive to traverse seems to be a special case of "col IN (const1,const2,
...)". check if we traverse the long list for such items).
For correlated subqueries, traversal can be expensive, so it is cheaper to make
each subquery item have a list of its outer references. The list can be
collected at fix_fields() stage with very little extra cost, and then it could
be used for other optimizations.
8. Other concerns
=================
8.1 Relationship with outer->inner joins converter
--------------------------------------------------
One could suspect that outer->inner join conversion could get in the way
of table elimination by changing outer joins (which could be eliminated)
to inner (which we will not try to eliminate).
This concern is not valid: we make outer->inner conversions based on
predicates in WHERE. If the WHERE referred to an inner table (this is a
requirement for the conversion) then table elimination would not be
applicable anyway.
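For example (a sketch; colB is a hypothetical column of tableB): the WHERE
clause below refers to the inner table, which is what enables the
outer->inner conversion, and that same reference already disqualifies tableB
from elimination:
  select A.colA
  from tableA A left outer join tableB B on B.id = A.id
  where B.colB = 5;
  -- the reference to B.colB outside the ON clause makes OJ->IJ conversion
  -- possible and, independently, makes elimination of B inapplicable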
8.2 Relationship with prepared statements
-----------------------------------------
On one hand, it's natural to desire to make table elimination a
once-per-statement operation, like outer->inner join conversion. We'll have
to limit the applicability by removing [MARK1] as that can change during
lifetime of the statement.
The other option is to do table elimination every time. This will require to
rework operation [MARK2] to be undoable.
8.3 Relationship with constant table detection
----------------------------------------------
Table elimination is performed after constant table detection (but before
the range analysis). Constant tables are technically different from
eliminated ones (e.g. the former are shown in EXPLAIN and the latter aren't).
Considering we've already done the join_read_const_table() call, is there any
real difference between constant table and eliminated one? If there is, should
we mark const tables also as eliminated?
From the user/EXPLAIN point of view: no. A constant table is one that we read
one record from; an eliminated table is one that we don't access at all.
TODO
9. Tests and benchmarks
=======================
Create a benchmark in sql-bench which checks if the DBMS has table
elimination.
[According to Monty] Run
- query Q1 that would use elimination
- query Q2 that is very similar to Q1 (so that they would have same
QEP, execution cost, etc) but cannot use table elimination.
then compare run times and draw a conclusion about whether the DBMS under test
supports table elimination.
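A possible Q1/Q2 pair for such a benchmark (a sketch; tableB.id is assumed to
be a primary key):
  -- Q1: B is not referenced outside the ON clause, elimination applies
  select A.colA
  from tableA A left outer join tableB B on B.id = A.id;

  -- Q2: nearly the same query, but the reference to B.colB forces B to be read
  select A.colA, B.colB
  from tableA A left outer join tableB B on B.id = A.id;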
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] Updated (by Guest): Table elimination (17)
by worklog-noreply@askmonty.org 24 Mar '10
24 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Table elimination
CREATION DATE..: Sun, 10 May 2009, 19:57
SUPERVISOR.....: Monty
IMPLEMENTOR....: Knielsen
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 17 (http://askmonty.org/worklog/?tid=17)
VERSION........: Server-9.x
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 1
ESTIMATE.......: 3 (hours remain)
ORIG. ESTIMATE.: 3
PROGRESS NOTES:
-=-=(Guest - Wed, 24 Mar 2010, 05:58)=-=-
Status updated.
--- /tmp/wklog.17.old.13223 2010-03-24 05:58:26.000000000 +0000
+++ /tmp/wklog.17.new.13223 2010-03-24 05:58:26.000000000 +0000
@@ -1 +1 @@
-In-Progress
+Assigned
-=-=(Guest - Wed, 24 Mar 2010, 05:58)=-=-
Privacy level updated.
--- /tmp/wklog.17.old.13223 2010-03-24 05:58:26.000000000 +0000
+++ /tmp/wklog.17.new.13223 2010-03-24 05:58:26.000000000 +0000
@@ -1 +1 @@
-n
+y
-=-=(Guest - Wed, 28 Oct 2009, 02:01)=-=-
Version updated.
--- /tmp/wklog.17.old.24041 2009-10-28 02:01:58.000000000 +0200
+++ /tmp/wklog.17.new.24041 2009-10-28 02:01:58.000000000 +0200
@@ -1 +1 @@
-9.x
+Server-9.x
-=-=(Guest - Sun, 16 Aug 2009, 16:16)=-=-
Category updated.
--- /tmp/wklog.17.old.24882 2009-08-16 16:16:49.000000000 +0300
+++ /tmp/wklog.17.new.24882 2009-08-16 16:16:49.000000000 +0300
@@ -1 +1 @@
-Client-BackLog
+Server-Sprint
-=-=(Guest - Sun, 16 Aug 2009, 16:16)=-=-
Version updated.
--- /tmp/wklog.17.old.24882 2009-08-16 16:16:49.000000000 +0300
+++ /tmp/wklog.17.new.24882 2009-08-16 16:16:49.000000000 +0300
@@ -1 +1 @@
-Server-5.1
+9.x
-=-=(Guest - Wed, 29 Jul 2009, 21:41)=-=-
Low Level Design modified.
--- /tmp/wklog.17.old.26011 2009-07-29 21:41:04.000000000 +0300
+++ /tmp/wklog.17.new.26011 2009-07-29 21:41:04.000000000 +0300
@@ -2,163 +2,146 @@
~maria-captains/maria/maria-5.1-table-elimination tree.
<contents>
-1. Conditions for removal
-1.1 Quick check if there are candidates
-2. Removal operation properties
-3. Removal operation
-4. User interface
-5. Tests and benchmarks
-6. Todo, issues to resolve
-6.1 To resolve
-6.2 Resolved
-7. Additional issues
+1. Elimination criteria
+2. No outside references check
+2.1 Quick check if there are tables with no outside references
+3. One-match check
+3.1 Functional dependency source #1: Potential eq_ref access
+3.2 Functional dependency source #2: col2=func(col1)
+3.3 Functional dependency source #3: One or zero records in the table
+3.4 Functional dependency check implementation
+3.4.1 Equality collection: Option1
+3.4.2 Equality collection: Option2
+3.4.3 Functional dependency propagation - option 1
+3.4.4 Functional dependency propagation - option 2
+4. Removal operation properties
+5. Removal operation
+6. User interface
+6.1 @@optimizer_switch flag
+6.2 EXPLAIN [EXTENDED]
+7. Miscellaneous adjustments
+7.1 Fix used_tables() of aggregate functions
+7.2 Make subquery predicates collect their outer references
+8. Other concerns
+8.1 Relationship with outer->inner joins converter
+8.2 Relationship with prepared statements
+8.3 Relationship with constant table detection
+9. Tests and benchmarks
</contents>
It's not really about elimination of tables, it's about elimination of inner
sides of outer joins.
-1. Conditions for removal
--------------------------
-We can eliminate an inner side of outer join if:
-1. For each record combination of outer tables, it will always produce
- exactly one record.
-2. There are no references to columns of the inner tables anywhere else in
+1. Elimination criteria
+=======================
+We can eliminate inner side of an outer join nest if:
+
+1. There are no references to columns of the inner tables anywhere else in
the query.
+2. For each record combination of outer tables, it will always produce
+ exactly one matching record combination.
+
+Most of effort in this WL entry is checking these two conditions.
-#1 means that every table inside the outer join nest is:
- - is a constant table:
- = because it can be accessed via eq_ref(const) access, or
- = it is a zero-rows or one-row MyISAM-like table [MARK1]
- - has an eq_ref access method candidate.
-
-#2 means that WHERE clause, ON clauses of embedding outer joins, ORDER BY,
- GROUP BY and HAVING do not refer to the inner tables of the outer join
- nest.
-
-1.1 Quick check if there are candidates
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Before we start to enumerate join nests, here is a quick way to check if
-there *can be* something to be removed:
+2. No outside references check
+==============================
+Criterion #1 means that the WHERE clause, ON clauses of embedding/subsequent
+outer joins, ORDER BY, GROUP BY and HAVING must have no references to inner
+tables of the outer join nest we're trying to remove.
+
+For multi-table UPDATE/DELETE we also must not remove tables that we're
+updating/deleting from or tables that are used in UPDATE's SET clause.
+
+2.1 Quick check if there are tables with no outside references
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Before we start searching for outer join nests that could be eliminated,
+we'll do a quick and cheap check if there possibly could be something that
+could be eliminated:
- if ((tables used in select_list |
+ if (there are outer joins &&
+ (tables used in select_list |
tables used in group/order by UNION |
- tables used in where) != bitmap_of_all_tables)
+ tables used in where) != bitmap_of_all_join_tables)
{
attempt table elimination;
}
-2. Removal operation properties
--------------------------------
-* There is always one way to remove (no choice to remove either this or that)
-* It is always better to remove as much tables as possible (at least within
- our cost model).
-Thus, no need for any cost calculations/etc. It's an unconditional rewrite.
-3. Removal operation
---------------------
-* Remove the outer join nest's nested join structure (i.e. get the
- outer join's TABLE_LIST object $OJ and remove it from $OJ->embedding,
- $OJ->embedding->nested_join. Update table_map's of all ancestor nested
- joins). [MARK2]
+3. One-match check
+==================
+We can eliminate inner side of outer join if it will always generate exactly
+one matching record combination.
-* Move the tables and their JOIN_TABs to front like it is done with const
- tables, with exception that if eliminated outer join nest was within
- another outer join nest, that shouldn't prevent us from moving away the
- eliminated tables.
+By definition of OUTER JOIN, a NULL-complemented record combination will be
+generated when the inner side of outer join has not produced any matches.
-* Update join->table_count and all-join-tables bitmap.
+What remains to be checked is that there is no possiblity that inner side of
+the outer join could produce more than one matching record combination.
-* That's it. Nothing else?
+We'll refer to one-match property as "functional dependency":
-4. User interface
------------------
-* We'll add an @@optimizer switch flag for table elimination. Tentative
- name: 'table_elimination'.
- (Note ^^ utility of the above questioned ^, as table elimination can never
- be worse than no elimination. We're leaning towards not adding the flag)
-
-* EXPLAIN will not show the removed tables at all. This will allow to check
- if tables were removed, and also will behave nicely with anchor model and
- VIEWs: stuff that user doesn't care about just won't be there.
+- A outer join nest is functionally dependent [wrt outer tables] if it will
+ produce one matching record combination per each record combination of
+ outer tables
-5. Tests and benchmarks
------------------------
-Create a benchmark in sql-bench which checks if the DBMS has table
-elimination.
-[According to Monty] Run
- - queries that would use elimination
- - queries that are very similar to one above (so that they would have same
- QEP, execution cost, etc) but cannot use table elimination.
-then compare run times and make a conclusion about whether dbms supports table
-elimination.
+- A table is functionally dependent wrt certain set of dependency tables, if
+ record combination of dependency tables uniquely identifies zero or one
+ matching record in the table
-6. Todo, issues to resolve
---------------------------
+- Definitions of functional dependency of keys (=column tuples) and columns are
+ apparent.
-6.1 To resolve
-~~~~~~~~~~~~~~
-- Relationship with prepared statements.
- On one hand, it's natural to desire to make table elimination a
- once-per-statement operation, like outer->inner join conversion. We'll have
- to limit the applicability by removing [MARK1] as that can change during
- lifetime of the statement.
-
- The other option is to do table elimination every time. This will require to
- rework operation [MARK2] to be undoable.
-
- I'm leaning towards doing the former. With anchor modeling, it is unlikely
- that we'll meet outer joins which have N inner tables of which some are 1-row
- MyISAM tables that do not have primary key.
-
-6.2 Resolved
-~~~~~~~~~~~~
-* outer->inner join conversion is not a problem for table elimination.
- We make outer->inner conversions based on predicates in WHERE. If the WHERE
- referred to an inner table (requirement for OJ->IJ conversion) then table
- elimination would not be applicable anyway.
-
-* For Multi-table UPDATEs/DELETEs, need to also analyze the SET clause:
- - affected tables must not be eliminated
- - tables that are used on the right side of the SET x=y assignments must
- not be eliminated either.
+Our goal is to prove that the entire join nest is functionally-dependent.
-* Aggregate functions used to report that they depend on all tables, that is,
+Join nest is functionally dependent (on the otside tables) if each of its
+elements (those can be either base tables or join nests) is functionally
+dependent.
- item_agg_func->used_tables() == (1ULL << join->tables) - 1
+Functional dependency is transitive: if table A is f-dependent on the outer
+tables and table B is f.dependent on {A, outer_tables} then B is functionally
+dependent on the outer tables.
+
+Subsequent sections list cases when we can declare a table to be
+functionally-dependent.
+
+3.1 Functional dependency source #1: Potential eq_ref access
+------------------------------------------------------------
+This is the most practically-important case. Taking the example from the HLD
+of this WL entry:
+
+ select
+ A.colA
+ from
+ tableA A
+ left outer join
+ tableB B
+ on
+ B.id = A.id;
- always. Fixed it, now aggregate function reports it depends on
- tables that its arguments depend on. In particular, COUNT(*) reports
- that it depends on no tables (item_count_star->used_tables()==0).
- One consequence of that is that "item->used_tables()==0" is not
- equivalent to "item->const_item()==true" anymore (not sure if it's
- "anymore" or this has been already happening).
-
-* EXPLAIN EXTENDED warning text was generated after the JOIN object has
- been discarded. This didn't allow to use information about join plan
- when printing the warning. Fixed this by keeping the JOIN objects until
- we've printed the warning (have also an intent to remove the const
- tables from the join output).
-
-7. Additional issues
---------------------
-* We remove ON clauses within outer join nests. If these clauses contain
- subqueries, they probably should be gone from EXPLAIN output also?
- Yes. Current approach: when removing an outer join nest, walk the ON clause
- and mark subselects as eliminated. Then let EXPLAIN code check if the
- SELECT was eliminated before the printing (EXPLAIN is generated by doing
- a recursive descent, so the check will also cause children of eliminated
- selects not to be printed)
-
-* Table elimination is performed after constant table detection (but before
- the range analysis). Constant tables are technically different from
- eliminated ones (e.g. the former are shown in EXPLAIN and the latter aren't).
- Considering we've already done the join_read_const_table() call, is there any
- real difference between constant table and eliminated one? If there is, should
- we mark const tables also as eliminated?
- from user/EXPLAIN point of view: no. constant table is the one that we read
- one record from. eliminated table is the one that we don't acccess at all.
+and generalizing it: a table TBL is functionally-dependent if the ON
+expression allows to construct a potential eq_ref access to table TBL that
+uses only outer or functionally-dependent tables.
+
+In other words: table TBL will have one match if the ON expression can be
+converted into this form
+
+ TBL.unique_key=func(one_match_tables) AND .. remainder ...
+
+(with appropriate extension for multi-part keys), where
+
+ one_match_tables= {
+ tables that are not on the inner side of the outer join in question, and
+ functionally dependent tables
+ }
+
+Note that this will cover constant tables, except those that are constant because
+they have 0/1 record or are partitioned and have no used partitions.
+
+
+3.2 Functional dependency source #2: col2=func(col1)
+----------------------------------------------------
+This comes from the second example in the HLS:
-* What is described above will not be able to eliminate this outer join
create unique index idx on tableB (id, fromDate);
...
left outer join
@@ -169,32 +152,331 @@
B.fromDate = (select max(sub.fromDate)
from tableB sub where sub.id = A.id);
- This is because condition "B.fromDate= func(tableB)" cannot be used.
- Reason#1: update_ref_and_keys() does not consider such conditions to
- be of any use (and indeed they are not usable for ref access)
- so they are not put into KEYUSE array.
- Reason#2: even if they were put there, we would need to be able to tell
- between predicates like
- B.fromDate= func(B.id) // guarantees only one matching row as
- // B.id is already bound by B.id=A.id
- // hence B.fromDate becomes bound too.
- and
- "B.fromDate= func(B.*)" // Can potentially have many matching
- // records.
- We need to
- - Have update_ref_and_keys() create KEYUSE elements for such equalities
- - Have eliminate_tables() and friends make a more accurate check.
- The right check is to check whether all parts of a unique key are bound.
- If we have keypartX to be bound, then t.keypartY=func(keypartX) makes
- keypartY to be bound.
- The difficulty here is that correlated subquery predicate cannot tell what
- columns it depends on (it only remembers tables).
- Traversing the predicate is expensive and complicated.
- We're leaning towards making each subquery predicate have a List<Item> with
- items that
- - are in the current select
- - and it depends on.
- This list will be useful in certain other subquery optimizations as well,
- it is cheap to collect it in fix_fields() phase, so it will be collected
- for every subquery predicate.
+Here it is apparent that tableB can be eliminated. It is not possible to
+construct eq_ref access to tableB, though, because for the second part of the
+primary key (fromDate column) we only got a condition in this form:
+
+ B.fromDate= func(tableB)
+
+(we write "func(tableB)" because ref optimizer can only determine which tables
+the right part of the equality depends on).
+
+In general case, equality like this doesn't guarantee functional dependency.
+For example, if func() == { return fromDate;}, i.e the ON expression is
+
+ ... ON B.id = A.id and B.fromDate = B.fromDate
+
+then that would allow table B to have multiple matches per record of table A.
+
+In order to be able to distinguish between these two cases, we'll need to go
+down to column level:
+
+- A table is functionally dependent if it has a unique key that's functionally
+ dependent
+
+- A unique key is functionally dependent when all of its columns are
+ functionally dependent
+
+- A table column is functionally dependent if the ON clause allows to extract
+ an AND-part in this form:
+
+ tbl.column = f(functionally-dependent columns or columns of outer tables)
+
+3.3 Functional dependency source #3: One or zero records in the table
+---------------------------------------------------------------------
+A table with one or zero records cannot generate more than one matching
+record. This source is of lesser importance as one/zero-record tables are only
+MyISAM tables.
+
+3.4 Functional dependency check implementation
+----------------------------------------------
+As shown above, we need something similar to KEYUSE structures, but not
+exactly that (we need things that current ref optimizer considers unusable and
+don't need things that it considers usable).
+
+3.4.1 Equality collection: Option1
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+We could
+- extend KEYUSE structures to store all kinds of equalities we need
+- change update_ref_and_keys() and co. to collect equalities both for ref
+ access and for table elimination
+ = [possibly] Improve [eq_]ref access to be able to use equalities in
+ form keypart2=func(keypart1)
+- process the KEYUSE array both by table elimination and by ref access
+ optimizer.
+
++ This requires less effort.
+- Code will have to be changed all over sql_select.cc
+- update_ref_and_keys() and co. already do several unrelated things. Hooking
+ up table elimination will make it even worse.
+
+3.4.2 Equality collection: Option2
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Alternatively, we could process the WHERE clause totally on our own.
++ Table elimination is standalone and easy to detach module.
+- Some code duplication with update_ref_and_keys() and co.
+
+Having got the equalities, we'll to propagate functional dependency property
+to unique keys, tables and, ultimately, join nests.
+
+3.4.3 Functional dependency propagation - option 1
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Borrow the approach used in constant table detection code:
+
+ do
+ {
+ converted= FALSE;
+ for each table T in join nest
+ {
+ if (check_if_functionally_dependent(T))
+ converted= TRUE;
+ }
+ } while (converted == TRUE);
+
+ check_if_functionally_dependent(T)
+ {
+ if (T has eq_ref access based on func_dep_tables)
+ return TRUE;
+
+ Apply the same do-while loop-based approach to available equalities
+ T.column1=func(other columns)
+ to spread the set of functionally-dependent columns. The goal is to get
+ all columns of a certain unique key to be bound.
+ }
+
+
+3.4.4 Functional dependency propagation - option 2
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Analyze the ON expression(s) and build a list of
+
+ tbl.field = expr(...)
+
+equalities. tbl here is a table that belongs to a join nest that could
+potentially be eliminated.
+
+besides those, add to the list
+ - An element for each unique key in the table that needs to be eliminated
+ - An element for each table that needs to be eliminated
+ - An element for each join nest that can be eliminated (i.e. has no
+ references from outside).
+
+Then, setup "reverse dependencies": each element should have pointers to
+elements that are functionally dependent on it:
+
+- "tbl.field=expr(...)" equality is functionally dependent on all fields that
+ are used in "expr(...)" (here we take into account only fields that belong
+ to tables that can potentially be eliminated).
+- a unique key is dependent on all of its components
+- a table is dependent on all of its unique keys
+- a join nest is dependent on all tables that it contains
+
+These pointers are stored in form of one bitmap, such that:
+
+ "X depends on Y" == test( bitmap[(X's number)*n_objects + (Y's number)] )
+
+Each object also stores a number of dependencies it needs to be satisfied
+before it itself is satisfied:
+
+- "tbl.field=expr(...)" needs all its underlying fields (if a field is
+ referenced many times it is counted only once)
+
+- a unique key needs all of its key parts
+
+- a table needs only one of its unique keys
+
+- a join nest needs all of its tables
+
+(TODO: so what do we do when we've marked a table as constant? We'll need to
+update the "field=expr(....)" elements that use fields of that table. And the
+problem is that we won't know how much to decrement from the counters of those
+elements.
+
+Solution#1: switch to table_map() based approach.
+Solution#2: introduce separate elements for each involved field.
+ field will depend on its table,
+ "field=expr" will depend on fields.
+)
+
+Besides the above, let each element have a pointer to another element, so that
+we can have a linked list of elements.
+
+After the above structures have been created, we start the main algorithm.
+
+The first step is to create a list of functionally-dependent elements. We walk
+across array of dependencies and mark those elements that are already bound
+(i.e. their dependencies are satisfied). At the moment those immediately-bound
+are only "field=expr" dependencies that don't refer to any columns that are
+not bound.
+
+The second step is the loop
+
+ while (bound_list is not empty)
+ {
+ Take the first bound element F off the list.
+ Use the bitmap to find out what other elements depended on it
+ for each such element E
+ {
+ if (E becomes bound after F is bound)
+ add E to the list;
+ }
+ }
+
+The last step is to walk through elements that represent the join nests. Those
+that are bound can be eliminated.
+
+4. Removal operation properties
+===============================
+* There is always one way to remove (no choice to remove either this or that)
+* It is always better to remove as much tables as possible (at least within
+ our cost model).
+Thus, no need for any cost calculations/etc. It's an unconditional rewrite.
+
+
+5. Removal operation
+====================
+(This depends a lot on whether we make table elimination a one-off rewrite or
+conditional)
+
+At the moment table elimination is re-done for each join re-execution, hence
+the removal operation is designed not to modify any statement's permanent
+members.
+
+* Remove the outer join nest's nested join structure (i.e. get the
+ outer join's TABLE_LIST object $OJ and remove it from $OJ->embedding,
+ $OJ->embedding->nested_join. Update table_map's of all ancestor nested
+ joins). [MARK2]
+
+* Move the tables and their JOIN_TABs to the front of join order, like it is
+ done with const tables, with exception that if eliminated outer join nest
+ was within another outer join nest, that shouldn't prevent us from moving
+ away the eliminated tables.
+
+* Update join->table_count and all-join-tables bitmap.
+ ^ TODO: not true anymore ^
+
+* That's it. Nothing else?
+
+6. User interface
+=================
+
+6.1 @@optimizer_switch flag
+---------------------------
+Argument againist adding the flag:
+* It is always better to perform table elimination than not to do it.
+
+Arguments for the flag:
+* It is always theoretically possible that the new code will cause unintended
+ slowdowns.
+* Having the flag is useful for QA and comparative benchmarking.
+
+Decision so far: add the flag under #ifdef. Make the flag be present in debug
+builds.
+
+6.2 EXPLAIN [EXTENDED]
+----------------------
+There are two possible options:
+1. Show eliminated tables, like we do with const tables.
+2. Do not show eliminated tables.
+
+We chose option 2, because:
+- the table is not accessed at all (besides locking it)
+- it is more natural for anchor model user - when he's querying an anchor-
+ and attributes view, he doesn't care about the unused attributes.
+
+EXPLAIN EXTENDED+SHOW WARNINGS won't show the removed table either.
+
+NOTE: Before this WL, the warning text was generated after all JOIN objects
+have been destroyed. This didn't allow to use information about join plan
+when printing the warning. We've fixed this by keeping the JOIN objects until
+the warning text has been generated.
+
+Table elimination removes inner sides of outer join, and logically the ON
+clause is also removed. If this clause has any subqueries, they will be
+also removed from EXPLAIN output.
+
+An exception to the above is that if we eliminate a derived table, it will
+still be shown in EXPLAIN output. This comes from the fact that the FROM
+subqueries are evaluated before table elimination is invoked.
+TODO: Is the above ok or still remove parts of FROM subqueries?
+
+7. Miscellaneous adjustments
+============================
+
+7.1 Fix used_tables() of aggregate functions
+--------------------------------------------
+Aggregate functions used to report that they depend on all tables, that is,
+
+ item_agg_func->used_tables() == (1ULL << join->tables) - 1
+
+always. Fixed it, now aggregate function reports that it depends on the
+tables that its arguments depend on. In particular, COUNT(*) reports that it
+depends on no tables (item_count_star->used_tables()==0). One consequence of
+that is that "item->used_tables()==0" is not equivalent to
+"item->const_item()==true" anymore (not sure if it's "anymore" or this has
+been already so for some items).
+
+7.2 Make subquery predicates collect their outer references
+-----------------------------------------------------------
+Per-column functional dependency analysis requires us to take a
+
+ tbl.field = func(...)
+
+equality and tell which columns of which tables are referred from func(...)
+expression. For scalar expressions, this is accomplished by Item::walk()-based
+traversal. It should be reasonably cheap (the only practical Item that can be
+expensive to traverse seems to be a special case of "col IN (const1,const2,
+...)". check if we traverse the long list for such items).
+
+For correlated subqueries, traversal can be expensive, it is cheaper to make
+each subquery item have a list of its outer references. The list can be
+collected at fix_fields() stage with very little extra cost, and then it could
+be used for other optimizations.
+
+
+8. Other concerns
+=================
+
+8.1 Relationship with outer->inner joins converter
+--------------------------------------------------
+One could suspect that outer->inner join conversion could get in the way
+of table elimination by changing outer joins (which could be eliminated)
+to inner (which we will not try to eliminate).
+This concern is not valid: we make outer->inner conversions based on
+predicates in WHERE. If the WHERE referred to an inner table (this is a
+requirement for the conversion) then table elimination would not be
+applicable anyway.
+
+8.2 Relationship with prepared statements
+-----------------------------------------
+On one hand, it's natural to desire to make table elimination a
+once-per-statement operation, like outer->inner join conversion. We'll have
+to limit the applicability by removing [MARK1] as that can change during
+lifetime of the statement.
+
+The other option is to do table elimination every time. This will require to
+rework operation [MARK2] to be undoable.
+
+
+8.3 Relationship with constant table detection
+----------------------------------------------
+Table elimination is performed after constant table detection (but before
+the range analysis). Constant tables are technically different from
+eliminated ones (e.g. the former are shown in EXPLAIN and the latter aren't).
+Considering we've already done the join_read_const_table() call, is there any
+real difference between constant table and eliminated one? If there is, should
+we mark const tables also as eliminated?
+from user/EXPLAIN point of view: no. constant table is the one that we read
+one record from. eliminated table is the one that we don't acccess at all.
+TODO
+
+9. Tests and benchmarks
+=======================
+Create a benchmark in sql-bench which checks if the DBMS has table
+elimination.
+[According to Monty] Run
+ - query Q1 that would use elimination
+ - query Q2 that is very similar to Q1 (so that they would have same
+ QEP, execution cost, etc) but cannot use table elimination.
+then compare run times and make a conclusion about whether the used dbms
+supports table elimination.
-=-=(Guest - Thu, 23 Jul 2009, 20:07)=-=-
Dependency created: 29 now depends on 17
-=-=(Monty - Thu, 23 Jul 2009, 09:19)=-=-
Version updated.
--- /tmp/wklog.17.old.24090 2009-07-23 09:19:32.000000000 +0300
+++ /tmp/wklog.17.new.24090 2009-07-23 09:19:32.000000000 +0300
@@ -1 +1 @@
-Server-9.x
+Server-5.1
-=-=(Guest - Mon, 20 Jul 2009, 14:28)=-=-
deukje weg
Worked 1 hour and estimate 3 hours remain (original estimate increased by 4 hours).
-=-=(Guest - Fri, 17 Jul 2009, 02:44)=-=-
Version updated.
--- /tmp/wklog.17.old.24138 2009-07-17 02:44:49.000000000 +0300
+++ /tmp/wklog.17.new.24138 2009-07-17 02:44:49.000000000 +0300
@@ -1 +1 @@
-9.x
+Server-9.x
------------------------------------------------------------
-=-=(View All Progress Notes, 31 total)=-=-
http://askmonty.org/worklog/index.pl?tid=17&nolimit=1
DESCRIPTION:
Eliminate tables that are not needed from SELECT queries.
This will speed up some views and automatically generated queries.
Example:
CREATE TABLE B (id int primary key);
select
A.colA
from
tableA A
left outer join
tableB B
on
B.id = A.id;
In this case we can remove table B and the join from the query.
HIGH-LEVEL SPECIFICATION:
Here is an extended explanation of table elimination.
Table elimination is a feature found in some modern query optimizers, of
which Microsoft SQL Server 2005/2008 seems to have the most advanced
implementation. Oracle 11g has also been confirmed to use table
elimination but not to the same extent.
Basically, what table elimination does, is to remove tables from the
execution plan when it is unnecessary to include them. This can, of
course, only happen if the right circumstances arise. Let us for example
look at the following query:
select
A.colA
from
tableA A
left outer join
tableB B
on
B.id = A.id;
When using A as the left table we ensure that the query will return at
least as many rows as there are in that table. For rows where the join
condition (B.id = A.id) is not met, the selected column (A.colA) will
still contain its original value; the columns of the unmatched B.* row will all be NULL.
However, the result set could actually contain more rows than what is
found in tableA if there are duplicates of the column B.id in tableB. If
A contains a row [1, "val1"] and B the rows [1, "other1a"],[1, "other1b"]
then two rows will match in the join condition. The only way to know
what the result will look like is to actually touch both tables during
execution.
Instead, let's say that tableB contains rows that make it possible to
place a unique constraint on the column B.id, for example (and often the
case) a primary key. In this situation we know that we will get exactly
as many rows as there are in tableA, since joining with tableB cannot
introduce any duplicates. If further, as in the example query, we do not
select any columns from tableB, touching that table during execution is
unnecessary. We can remove the whole join operation from the execution
plan.
Both SQL Server 2005/2008 and Oracle 11g will deploy table elimination
in the case described above. Let us look at a more advanced query, where
Oracle fails.
select
A.colA
from
tableA A
left outer join
tableB B
on
B.id = A.id
and
B.fromDate = (
select
max(sub.fromDate)
from
tableB sub
where
sub.id = A.id
);
In this example we have added another join condition, which ensures
that we only pick the matching row from tableB having the latest
fromDate. In this case tableB will contain duplicates of the column
B.id, so in order to ensure uniqueness the primary key has to contain
the fromDate column as well. In other words the primary key of tableB
is (B.id, B.fromDate).
Furthermore, since the subselect ensures that we only pick the latest
B.fromDate for a given B.id we know that at most one row will match
the join condition. We will again have the situation where joining
with tableB cannot affect the number of rows in the result set. Since
we do not select any columns from tableB, the whole join operation can
be eliminated from the execution plan.
SQL Server 2005/2008 will deploy table elimination in this situation as
well. We have not found a way to make Oracle 11g use it for this type of
query. Queries like these arise in two situations. Either when you have
a denormalized model consisting of a fact table with several related
dimension tables, or when you have a highly normalized model where each
attribute is stored in its own table. The example with the subselect is
common whenever you store historized/versioned data.
LOW-LEVEL DESIGN:
The code (currently in development) is at lp:
~maria-captains/maria/maria-5.1-table-elimination tree.
<contents>
1. Elimination criteria
2. No outside references check
2.1 Quick check if there are tables with no outside references
3. One-match check
3.1 Functional dependency source #1: Potential eq_ref access
3.2 Functional dependency source #2: col2=func(col1)
3.3 Functional dependency source #3: One or zero records in the table
3.4 Functional dependency check implementation
3.4.1 Equality collection: Option1
3.4.2 Equality collection: Option2
3.4.3 Functional dependency propagation - option 1
3.4.4 Functional dependency propagation - option 2
4. Removal operation properties
5. Removal operation
6. User interface
6.1 @@optimizer_switch flag
6.2 EXPLAIN [EXTENDED]
7. Miscellaneous adjustments
7.1 Fix used_tables() of aggregate functions
7.2 Make subquery predicates collect their outer references
8. Other concerns
8.1 Relationship with outer->inner joins converter
8.2 Relationship with prepared statements
8.3 Relationship with constant table detection
9. Tests and benchmarks
</contents>
It's not really about elimination of tables, it's about elimination of inner
sides of outer joins.
1. Elimination criteria
=======================
We can eliminate inner side of an outer join nest if:
1. There are no references to columns of the inner tables anywhere else in
the query.
2. For each record combination of outer tables, it will always produce
exactly one matching record combination.
Most of the effort in this WL entry is in checking these two conditions.
2. No outside references check
==============================
Criterion #1 means that the WHERE clause, ON clauses of embedding/subsequent
outer joins, ORDER BY, GROUP BY and HAVING must have no references to inner
tables of the outer join nest we're trying to remove.
For multi-table UPDATE/DELETE we also must not remove tables that we're
updating/deleting from or tables that are used in UPDATE's SET clause.
2.1 Quick check if there are tables with no outside references
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before we start searching for outer join nests that could be eliminated,
we'll do a quick and cheap check if there possibly could be something that
could be eliminated:
if (there are outer joins &&
(tables used in select_list |
tables used in group/order by UNION |
tables used in where) != bitmap_of_all_join_tables)
{
attempt table elimination;
}
3. One-match check
==================
We can eliminate inner side of outer join if it will always generate exactly
one matching record combination.
By definition of OUTER JOIN, a NULL-complemented record combination will be
generated when the inner side of outer join has not produced any matches.
What remains to be checked is that there is no possibility that the inner side
of the outer join could produce more than one matching record combination.
We'll refer to one-match property as "functional dependency":
- An outer join nest is functionally dependent [wrt outer tables] if it will
produce one matching record combination per each record combination of
outer tables
- A table is functionally dependent wrt a certain set of dependency tables, if
a record combination of the dependency tables uniquely identifies zero or one
matching record in the table
- Definitions of functional dependency of keys (=column tuples) and columns are
apparent.
Our goal is to prove that the entire join nest is functionally-dependent.
A join nest is functionally dependent (on the outside tables) if each of its
elements (those can be either base tables or join nests) is functionally
dependent.
Functional dependency is transitive: if table A is f-dependent on the outer
tables and table B is f-dependent on {A, outer_tables}, then B is functionally
dependent on the outer tables.
Subsequent sections list cases when we can declare a table to be
functionally-dependent.
3.1 Functional dependency source #1: Potential eq_ref access
------------------------------------------------------------
This is the most practically-important case. Taking the example from the HLD
of this WL entry:
select
A.colA
from
tableA A
left outer join
tableB B
on
B.id = A.id;
and generalizing it: a table TBL is functionally dependent if the ON
expression allows us to construct a potential eq_ref access to table TBL that
uses only outer or functionally-dependent tables.
In other words: table TBL will have one match if the ON expression can be
converted into this form
TBL.unique_key=func(one_match_tables) AND .. remainder ...
(with appropriate extension for multi-part keys), where
one_match_tables= {
tables that are not on the inner side of the outer join in question, and
functionally dependent tables
}
Note that this will cover constant tables, except those that are constant because
they have 0/1 record or are partitioned and have no used partitions.
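To illustrate the check just described, here is a small self-contained sketch
(the structures and names are invented; they are not the server's KEYUSE
machinery): for one unique key of TBL, each key part either has an equality
"keypart = func(...)" extracted from the ON expression or it doesn't, and we
test whether every func(...) uses only one_match_tables:

  #include <vector>

  typedef unsigned long long table_map;

  /* One key part of TBL's unique key, as extracted from the ON expression */
  struct KeypartBinding
  {
    bool      have_equality;     /* was "keypart = func(...)" found at all? */
    table_map tables_in_func;    /* tables used by the right-hand side */
  };

  /*
    TBL has a potential eq_ref access usable for elimination if every key part
    of some unique key is equated to an expression over one_match_tables only.
  */
  bool potential_eq_ref(const std::vector<KeypartBinding> &key_parts,
                        table_map one_match_tables)
  {
    for (const KeypartBinding &kp : key_parts)
    {
      if (!kp.have_equality)
        return false;
      if (kp.tables_in_func & ~one_match_tables)  /* refers to other tables */
        return false;
    }
    return true;
  }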
3.2 Functional dependency source #2: col2=func(col1)
----------------------------------------------------
This comes from the second example in the HLS:
create unique index idx on tableB (id, fromDate);
...
left outer join
tableB B
on
B.id = A.id
and
B.fromDate = (select max(sub.fromDate)
from tableB sub where sub.id = A.id);
Here it is apparent that tableB can be eliminated. It is not possible to
construct eq_ref access to tableB, though, because for the second part of the
primary key (the fromDate column) we only have a condition of this form:
B.fromDate= func(tableB)
(we write "func(tableB)" because ref optimizer can only determine which tables
the right part of the equality depends on).
In the general case, an equality like this doesn't guarantee functional dependency.
For example, if func() == { return fromDate;}, i.e. the ON expression is
... ON B.id = A.id and B.fromDate = B.fromDate
then that would allow table B to have multiple matches per record of table A.
In order to be able to distinguish between these two cases, we'll need to go
down to column level:
- A table is functionally dependent if it has a unique key that's functionally
dependent
- A unique key is functionally dependent when all of its columns are
functionally dependent
- A table column is functionally dependent if the ON clause allows us to extract
  an AND-part of this form:
tbl.column = f(functionally-dependent columns or columns of outer tables)
3.3 Functional dependency source #3: One or zero records in the table
---------------------------------------------------------------------
A table with one or zero records cannot generate more than one matching
record. This source is of lesser importance as one/zero-record tables are only
MyISAM tables.
3.4 Functional dependency check implementation
----------------------------------------------
As shown above, we need something similar to KEYUSE structures, but not
exactly that (we need things that current ref optimizer considers unusable and
don't need things that it considers usable).
3.4.1 Equality collection: Option1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We could
- extend KEYUSE structures to store all kinds of equalities we need
- change update_ref_and_keys() and co. to collect equalities both for ref
access and for table elimination
= [possibly] Improve [eq_]ref access to be able to use equalities in
form keypart2=func(keypart1)
- process the KEYUSE array both by table elimination and by ref access
optimizer.
+ This requires less effort.
- Code will have to be changed all over sql_select.cc
- update_ref_and_keys() and co. already do several unrelated things. Hooking
up table elimination will make it even worse.
3.4.2 Equality collection: Option2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Alternatively, we could process the WHERE clause totally on our own.
+ Table elimination is a standalone, easy-to-detach module.
- Some code duplication with update_ref_and_keys() and co.
Having collected the equalities, we'll need to propagate the functional dependency
property to unique keys, tables and, ultimately, join nests.
3.4.3 Functional dependency propagation - option 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Borrow the approach used in constant table detection code:
do
{
converted= FALSE;
for each table T in join nest
{
if (check_if_functionally_dependent(T))
converted= TRUE;
}
} while (converted == TRUE);
check_if_functionally_dependent(T)
{
if (T has eq_ref access based on func_dep_tables)
return TRUE;
Apply the same do-while loop-based approach to available equalities
T.column1=func(other columns)
to spread the set of functionally-dependent columns. The goal is to get
all columns of a certain unique key to be bound.
}
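To make the column-level part of this concrete, here is a small self-contained
sketch of the fixpoint propagation (not the server code; the structures are
invented for illustration). For the fromDate example: starting with B.id bound
through B.id=A.id, the equality B.fromDate=func(B.id) binds B.fromDate, and the
unique key (id, fromDate) becomes fully bound:

  #include <set>
  #include <string>
  #include <vector>

  /* One "column = func(required columns)" equality taken from the ON clause */
  struct ColumnEquality
  {
    std::string           column;     /* e.g. "B.fromDate" */
    std::set<std::string> required;   /* columns used on the right-hand side */
  };

  /*
    Returns true if every column of 'unique_key' can be bound, starting from
    'bound' (columns of outer or already functionally-dependent tables) and
    repeatedly applying the equalities until a fixpoint is reached.
  */
  bool unique_key_becomes_bound(const std::vector<ColumnEquality> &equalities,
                                std::set<std::string> bound,
                                const std::vector<std::string> &unique_key)
  {
    bool changed= true;
    while (changed)
    {
      changed= false;
      for (const ColumnEquality &eq : equalities)
      {
        if (bound.count(eq.column))
          continue;                             /* already bound */
        bool all_required_bound= true;
        for (const std::string &col : eq.required)
        {
          if (!bound.count(col))
          {
            all_required_bound= false;
            break;
          }
        }
        if (all_required_bound)
        {
          bound.insert(eq.column);              /* column becomes bound */
          changed= true;
        }
      }
    }
    for (const std::string &col : unique_key)
    {
      if (!bound.count(col))
        return false;
    }
    return true;
  }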
3.4.4 Functional dependency propagation - option 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Analyze the ON expression(s) and build a list of
tbl.field = expr(...)
equalities. tbl here is a table that belongs to a join nest that could
potentially be eliminated.
Besides those, add to the list:
- An element for each unique key in the table that needs to be eliminated
- An element for each table that needs to be eliminated
- An element for each join nest that can be eliminated (i.e. has no
references from outside).
Then, set up "reverse dependencies": each element should have pointers to
the elements that are functionally dependent on it:
- "tbl.field=expr(...)" equality is functionally dependent on all fields that
are used in "expr(...)" (here we take into account only fields that belong
to tables that can potentially be eliminated).
- a unique key is dependent on all of its components
- a table is dependent on all of its unique keys
- a join nest is dependent on all tables that it contains
These pointers are stored in the form of one bitmap, such that:
"X depends on Y" == test( bitmap[(X's number)*n_objects + (Y's number)] )
Each object also stores the number of dependencies that need to be satisfied
before it itself is satisfied:
- "tbl.field=expr(...)" needs all its underlying fields (if a field is
referenced many times it is counted only once)
- a unique key needs all of its key parts
- a table needs only one of its unique keys
- a join nest needs all of its tables
(TODO: so what do we do when we've marked a table as constant? We'll need to
update the "field=expr(....)" elements that use fields of that table. And the
problem is that we won't know how much to decrement from the counters of those
elements.
Solution#1: switch to table_map() based approach.
Solution#2: introduce separate elements for each involved field.
field will depend on its table,
"field=expr" will depend on fields.
)
Besides the above, let each element have a pointer to another element, so that
we can have a linked list of elements.
After the above structures have been created, we start the main algorithm.
The first step is to create a list of functionally-dependent elements. We walk
across the array of dependencies and mark those elements that are already bound
(i.e. their dependencies are satisfied). Initially, the only immediately-bound
elements are "field=expr" dependencies that don't refer to any columns that are
not yet bound.
The second step is the loop
while (bound_list is not empty)
{
Take the first bound element F off the list.
Use the bitmap to find out what other elements depended on it
for each such element E
{
if (E becomes bound after F is bound)
add E to the list;
}
}
The last step is to walk through elements that represent the join nests. Those
that are bound can be eliminated.
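A self-contained sketch of these steps (the element layout and names are
invented for illustration and may differ from the real implementation):

  #include <vector>

  /* Illustrative dependency element; the real layout may differ */
  struct DepElement
  {
    int         number;        /* position in the element array */
    int         unsatisfied;   /* dependencies still missing before "bound" */
    bool        is_join_nest;  /* true for elements representing join nests */
    DepElement *next;          /* link for the list of bound elements */
  };

  /* "X depends on Y" == test(bitmap[(X's number)*n_objects + (Y's number)]) */
  static bool depends_on(const std::vector<bool> &bitmap, int n_objects,
                         const DepElement *x, const DepElement *y)
  {
    return bitmap[x->number * n_objects + y->number];
  }

  /*
    bound_list initially holds the immediately-bound "field=expr" elements.
    Afterwards, join-nest elements whose 'unsatisfied' counter reached zero
    are the ones that can be eliminated.
  */
  void propagate_bound(std::vector<DepElement*> &elements,
                       const std::vector<bool> &bitmap,
                       DepElement *bound_list)
  {
    int n_objects= (int)elements.size();
    while (bound_list)
    {
      DepElement *f= bound_list;             /* take the first bound element F */
      bound_list= f->next;
      for (DepElement *e : elements)         /* elements that depend on F */
      {
        if (!depends_on(bitmap, n_objects, e, f))
          continue;
        if (--e->unsatisfied == 0)           /* E becomes bound after F */
        {
          e->next= bound_list;               /* add E to the list */
          bound_list= e;
        }
      }
    }
  }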
4. Removal operation properties
===============================
* There is always one way to remove (no choice to remove either this or that)
* It is always better to remove as many tables as possible (at least within
our cost model).
Thus, no need for any cost calculations/etc. It's an unconditional rewrite.
5. Removal operation
====================
(This depends a lot on whether we make table elimination a one-off rewrite or
conditional)
At the moment table elimination is re-done for each join re-execution, hence
the removal operation is designed not to modify any statement's permanent
members.
* Remove the outer join nest's nested join structure (i.e. get the
outer join's TABLE_LIST object $OJ and remove it from $OJ->embedding,
$OJ->embedding->nested_join. Update table_map's of all ancestor nested
joins). [MARK2]
* Move the tables and their JOIN_TABs to the front of the join order, like it is
  done with const tables, with the exception that if the eliminated outer join
  nest was within another outer join nest, that shouldn't prevent us from moving
  away the eliminated tables.
* Update join->table_count and all-join-tables bitmap.
^ TODO: not true anymore ^
* That's it. Nothing else?
6. User interface
=================
6.1 @@optimizer_switch flag
---------------------------
Argument against adding the flag:
* It is always better to perform table elimination than not to do it.
Arguments for the flag:
* It is always theoretically possible that the new code will cause unintended
slowdowns.
* Having the flag is useful for QA and comparative benchmarking.
Decision so far: add the flag under #ifdef, and make the flag present in debug
builds.
6.2 EXPLAIN [EXTENDED]
----------------------
There are two possible options:
1. Show eliminated tables, like we do with const tables.
2. Do not show eliminated tables.
We chose option 2, because:
- the table is not accessed at all (besides locking it)
- it is more natural for an anchor model user: when querying an anchor-and-
  attributes view, they don't care about the unused attributes.
EXPLAIN EXTENDED+SHOW WARNINGS won't show the removed table either.
NOTE: Before this WL, the warning text was generated after all JOIN objects
had been destroyed. This made it impossible to use information about the join
plan when printing the warning. We've fixed this by keeping the JOIN objects
until the warning text has been generated.
Table elimination removes inner sides of outer joins, and logically the ON
clause is also removed. If this clause has any subqueries, they will also be
removed from the EXPLAIN output.
An exception to the above is that if we eliminate a derived table, it will
still be shown in EXPLAIN output. This comes from the fact that the FROM
subqueries are evaluated before table elimination is invoked.
TODO: Is the above OK, or should we still remove the corresponding parts of FROM subqueries?
7. Miscellaneous adjustments
============================
7.1 Fix used_tables() of aggregate functions
--------------------------------------------
Aggregate functions used to report that they depend on all tables, that is,
item_agg_func->used_tables() == (1ULL << join->tables) - 1
always. We fixed this; now an aggregate function reports that it depends on the
tables that its arguments depend on. In particular, COUNT(*) reports that it
depends on no tables (item_count_star->used_tables()==0). One consequence of
that is that "item->used_tables()==0" is no longer equivalent to
"item->const_item()==true" (not sure if this is new or whether it was already
the case for some items).
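The bitmap arithmetic involved is simple; a small illustration (not the actual
server functions) of the old "all tables" value versus the new per-argument
behaviour:

  typedef unsigned long long table_map;

  /* Bitmap with one bit set for each of the n_tables join tables */
  table_map all_tables_map(unsigned n_tables)
  {
    return (1ULL << n_tables) - 1;           /* bits 0 .. n_tables-1 */
  }

  /*
    Old behaviour: every aggregate reported all_tables_map(join->tables).
    New behaviour (illustrative): the aggregate ORs together the maps of its
    arguments, so COUNT(*) with no arguments reports 0.
  */
  table_map aggregate_used_tables(const table_map *arg_maps, unsigned n_args)
  {
    table_map used= 0;
    for (unsigned i= 0; i < n_args; i++)
      used|= arg_maps[i];
    return used;
  }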
7.2 Make subquery predicates collect their outer references
-----------------------------------------------------------
Per-column functional dependency analysis requires us to take a
tbl.field = func(...)
equality and tell which columns of which tables are referred to from the func(...)
expression. For scalar expressions, this is accomplished by Item::walk()-based
traversal. It should be reasonably cheap (the only practical Item that can be
expensive to traverse seems to be the special case of "col IN (const1,const2,
...)"; TODO: check whether we traverse the long list for such items).
For correlated subqueries, traversal can be expensive; it is cheaper to make
each subquery item keep a list of its outer references. The list can be
collected at the fix_fields() stage with very little extra cost, and it could
then also be used for other optimizations.
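A minimal sketch of the idea (the types and names below are invented; the real
change lives in the Item class hierarchy): while a subquery's expression is
resolved, every reference that resolves to a column of an outer select is also
appended to a list owned by the subquery predicate, so later optimizations can
read that list instead of re-traversing the subquery:

  #include <string>
  #include <vector>

  /* Invented stand-in for a column reference resolved to an outer select */
  struct OuterRef
  {
    std::string table;
    std::string column;
  };

  /* Invented stand-in for a subquery predicate item */
  struct SubqueryPredicate
  {
    std::vector<OuterRef> outer_refs;        /* collected during resolution */

    /* Called when a field inside the subquery resolves to an outer column */
    void note_outer_reference(const std::string &table, const std::string &column)
    {
      outer_refs.push_back(OuterRef{table, column});
    }
  };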
8. Other concerns
=================
8.1 Relationship with outer->inner joins converter
--------------------------------------------------
One could suspect that outer->inner join conversion could get in the way
of table elimination by changing outer joins (which could be eliminated)
to inner (which we will not try to eliminate).
This concern is not valid: we make outer->inner conversions based on
predicates in WHERE. If the WHERE referred to an inner table (this is a
requirement for the conversion) then table elimination would not be
applicable anyway.
8.2 Relationship with prepared statements
-----------------------------------------
On one hand, it's natural to desire to make table elimination a
once-per-statement operation, like outer->inner join conversion. We'll have
to limit the applicability by removing [MARK1] as that can change during
lifetime of the statement.
The other option is to do table elimination every time. This will require
reworking operation [MARK2] to be undoable.
8.3 Relationship with constant table detection
----------------------------------------------
Table elimination is performed after constant table detection (but before
the range analysis). Constant tables are technically different from
eliminated ones (e.g. the former are shown in EXPLAIN and the latter aren't).
Considering we've already done the join_read_const_table() call, is there any
real difference between constant table and eliminated one? If there is, should
we mark const tables also as eliminated?
From the user/EXPLAIN point of view: no. A constant table is one that we read
one record from; an eliminated table is one that we don't access at all.
TODO
9. Tests and benchmarks
=======================
Create a benchmark in sql-bench which checks if the DBMS has table
elimination.
[According to Monty] Run
- query Q1 that would use elimination
- query Q2 that is very similar to Q1 (so that they would have same
QEP, execution cost, etc) but cannot use table elimination.
then compare run times and draw a conclusion about whether the DBMS under test
supports table elimination.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] Updated (by Guest): Table elimination (17)
by worklog-noreply@askmonty.org 24 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Table elimination
CREATION DATE..: Sun, 10 May 2009, 19:57
SUPERVISOR.....: Monty
IMPLEMENTOR....: Knielsen
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 17 (http://askmonty.org/worklog/?tid=17)
VERSION........: Server-9.x
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 1
ESTIMATE.......: 3 (hours remain)
ORIG. ESTIMATE.: 3
PROGRESS NOTES:
-=-=(Guest - Wed, 24 Mar 2010, 05:58)=-=-
Status updated.
--- /tmp/wklog.17.old.13223 2010-03-24 05:58:26.000000000 +0000
+++ /tmp/wklog.17.new.13223 2010-03-24 05:58:26.000000000 +0000
@@ -1 +1 @@
-In-Progress
+Assigned
-=-=(Guest - Wed, 24 Mar 2010, 05:58)=-=-
Privacy level updated.
--- /tmp/wklog.17.old.13223 2010-03-24 05:58:26.000000000 +0000
+++ /tmp/wklog.17.new.13223 2010-03-24 05:58:26.000000000 +0000
@@ -1 +1 @@
-n
+y
-=-=(Guest - Wed, 28 Oct 2009, 02:01)=-=-
Version updated.
--- /tmp/wklog.17.old.24041 2009-10-28 02:01:58.000000000 +0200
+++ /tmp/wklog.17.new.24041 2009-10-28 02:01:58.000000000 +0200
@@ -1 +1 @@
-9.x
+Server-9.x
-=-=(Guest - Sun, 16 Aug 2009, 16:16)=-=-
Category updated.
--- /tmp/wklog.17.old.24882 2009-08-16 16:16:49.000000000 +0300
+++ /tmp/wklog.17.new.24882 2009-08-16 16:16:49.000000000 +0300
@@ -1 +1 @@
-Client-BackLog
+Server-Sprint
-=-=(Guest - Sun, 16 Aug 2009, 16:16)=-=-
Version updated.
--- /tmp/wklog.17.old.24882 2009-08-16 16:16:49.000000000 +0300
+++ /tmp/wklog.17.new.24882 2009-08-16 16:16:49.000000000 +0300
@@ -1 +1 @@
-Server-5.1
+9.x
-=-=(Guest - Wed, 29 Jul 2009, 21:41)=-=-
Low Level Design modified.
--- /tmp/wklog.17.old.26011 2009-07-29 21:41:04.000000000 +0300
+++ /tmp/wklog.17.new.26011 2009-07-29 21:41:04.000000000 +0300
@@ -2,163 +2,146 @@
~maria-captains/maria/maria-5.1-table-elimination tree.
<contents>
-1. Conditions for removal
-1.1 Quick check if there are candidates
-2. Removal operation properties
-3. Removal operation
-4. User interface
-5. Tests and benchmarks
-6. Todo, issues to resolve
-6.1 To resolve
-6.2 Resolved
-7. Additional issues
+1. Elimination criteria
+2. No outside references check
+2.1 Quick check if there are tables with no outside references
+3. One-match check
+3.1 Functional dependency source #1: Potential eq_ref access
+3.2 Functional dependency source #2: col2=func(col1)
+3.3 Functional dependency source #3: One or zero records in the table
+3.4 Functional dependency check implementation
+3.4.1 Equality collection: Option1
+3.4.2 Equality collection: Option2
+3.4.3 Functional dependency propagation - option 1
+3.4.4 Functional dependency propagation - option 2
+4. Removal operation properties
+5. Removal operation
+6. User interface
+6.1 @@optimizer_switch flag
+6.2 EXPLAIN [EXTENDED]
+7. Miscellaneous adjustments
+7.1 Fix used_tables() of aggregate functions
+7.2 Make subquery predicates collect their outer references
+8. Other concerns
+8.1 Relationship with outer->inner joins converter
+8.2 Relationship with prepared statements
+8.3 Relationship with constant table detection
+9. Tests and benchmarks
</contents>
It's not really about elimination of tables, it's about elimination of inner
sides of outer joins.
-1. Conditions for removal
--------------------------
-We can eliminate an inner side of outer join if:
-1. For each record combination of outer tables, it will always produce
- exactly one record.
-2. There are no references to columns of the inner tables anywhere else in
+1. Elimination criteria
+=======================
+We can eliminate inner side of an outer join nest if:
+
+1. There are no references to columns of the inner tables anywhere else in
the query.
+2. For each record combination of outer tables, it will always produce
+ exactly one matching record combination.
+
+Most of effort in this WL entry is checking these two conditions.
-#1 means that every table inside the outer join nest is:
- - is a constant table:
- = because it can be accessed via eq_ref(const) access, or
- = it is a zero-rows or one-row MyISAM-like table [MARK1]
- - has an eq_ref access method candidate.
-
-#2 means that WHERE clause, ON clauses of embedding outer joins, ORDER BY,
- GROUP BY and HAVING do not refer to the inner tables of the outer join
- nest.
-
-1.1 Quick check if there are candidates
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Before we start to enumerate join nests, here is a quick way to check if
-there *can be* something to be removed:
+2. No outside references check
+==============================
+Criterion #1 means that the WHERE clause, ON clauses of embedding/subsequent
+outer joins, ORDER BY, GROUP BY and HAVING must have no references to inner
+tables of the outer join nest we're trying to remove.
+
+For multi-table UPDATE/DELETE we also must not remove tables that we're
+updating/deleting from or tables that are used in UPDATE's SET clause.
+
+2.1 Quick check if there are tables with no outside references
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Before we start searching for outer join nests that could be eliminated,
+we'll do a quick and cheap check if there possibly could be something that
+could be eliminated:
- if ((tables used in select_list |
+ if (there are outer joins &&
+ (tables used in select_list |
tables used in group/order by UNION |
- tables used in where) != bitmap_of_all_tables)
+ tables used in where) != bitmap_of_all_join_tables)
{
attempt table elimination;
}
-2. Removal operation properties
--------------------------------
-* There is always one way to remove (no choice to remove either this or that)
-* It is always better to remove as much tables as possible (at least within
- our cost model).
-Thus, no need for any cost calculations/etc. It's an unconditional rewrite.
-3. Removal operation
---------------------
-* Remove the outer join nest's nested join structure (i.e. get the
- outer join's TABLE_LIST object $OJ and remove it from $OJ->embedding,
- $OJ->embedding->nested_join. Update table_map's of all ancestor nested
- joins). [MARK2]
+3. One-match check
+==================
+We can eliminate inner side of outer join if it will always generate exactly
+one matching record combination.
-* Move the tables and their JOIN_TABs to front like it is done with const
- tables, with exception that if eliminated outer join nest was within
- another outer join nest, that shouldn't prevent us from moving away the
- eliminated tables.
+By definition of OUTER JOIN, a NULL-complemented record combination will be
+generated when the inner side of outer join has not produced any matches.
-* Update join->table_count and all-join-tables bitmap.
+What remains to be checked is that there is no possiblity that inner side of
+the outer join could produce more than one matching record combination.
-* That's it. Nothing else?
+We'll refer to one-match property as "functional dependency":
-4. User interface
------------------
-* We'll add an @@optimizer switch flag for table elimination. Tentative
- name: 'table_elimination'.
- (Note ^^ utility of the above questioned ^, as table elimination can never
- be worse than no elimination. We're leaning towards not adding the flag)
-
-* EXPLAIN will not show the removed tables at all. This will allow to check
- if tables were removed, and also will behave nicely with anchor model and
- VIEWs: stuff that user doesn't care about just won't be there.
+- A outer join nest is functionally dependent [wrt outer tables] if it will
+ produce one matching record combination per each record combination of
+ outer tables
-5. Tests and benchmarks
------------------------
-Create a benchmark in sql-bench which checks if the DBMS has table
-elimination.
-[According to Monty] Run
- - queries that would use elimination
- - queries that are very similar to one above (so that they would have same
- QEP, execution cost, etc) but cannot use table elimination.
-then compare run times and make a conclusion about whether dbms supports table
-elimination.
+- A table is functionally dependent wrt certain set of dependency tables, if
+ record combination of dependency tables uniquely identifies zero or one
+ matching record in the table
-6. Todo, issues to resolve
---------------------------
+- Definitions of functional dependency of keys (=column tuples) and columns are
+ apparent.
-6.1 To resolve
-~~~~~~~~~~~~~~
-- Relationship with prepared statements.
- On one hand, it's natural to desire to make table elimination a
- once-per-statement operation, like outer->inner join conversion. We'll have
- to limit the applicability by removing [MARK1] as that can change during
- lifetime of the statement.
-
- The other option is to do table elimination every time. This will require to
- rework operation [MARK2] to be undoable.
-
- I'm leaning towards doing the former. With anchor modeling, it is unlikely
- that we'll meet outer joins which have N inner tables of which some are 1-row
- MyISAM tables that do not have primary key.
-
-6.2 Resolved
-~~~~~~~~~~~~
-* outer->inner join conversion is not a problem for table elimination.
- We make outer->inner conversions based on predicates in WHERE. If the WHERE
- referred to an inner table (requirement for OJ->IJ conversion) then table
- elimination would not be applicable anyway.
-
-* For Multi-table UPDATEs/DELETEs, need to also analyze the SET clause:
- - affected tables must not be eliminated
- - tables that are used on the right side of the SET x=y assignments must
- not be eliminated either.
+Our goal is to prove that the entire join nest is functionally-dependent.
-* Aggregate functions used to report that they depend on all tables, that is,
+Join nest is functionally dependent (on the otside tables) if each of its
+elements (those can be either base tables or join nests) is functionally
+dependent.
- item_agg_func->used_tables() == (1ULL << join->tables) - 1
+Functional dependency is transitive: if table A is f-dependent on the outer
+tables and table B is f.dependent on {A, outer_tables} then B is functionally
+dependent on the outer tables.
+
+Subsequent sections list cases when we can declare a table to be
+functionally-dependent.
+
+3.1 Functional dependency source #1: Potential eq_ref access
+------------------------------------------------------------
+This is the most practically-important case. Taking the example from the HLD
+of this WL entry:
+
+ select
+ A.colA
+ from
+ tableA A
+ left outer join
+ tableB B
+ on
+ B.id = A.id;
- always. Fixed it, now aggregate function reports it depends on
- tables that its arguments depend on. In particular, COUNT(*) reports
- that it depends on no tables (item_count_star->used_tables()==0).
- One consequence of that is that "item->used_tables()==0" is not
- equivalent to "item->const_item()==true" anymore (not sure if it's
- "anymore" or this has been already happening).
-
-* EXPLAIN EXTENDED warning text was generated after the JOIN object has
- been discarded. This didn't allow to use information about join plan
- when printing the warning. Fixed this by keeping the JOIN objects until
- we've printed the warning (have also an intent to remove the const
- tables from the join output).
-
-7. Additional issues
---------------------
-* We remove ON clauses within outer join nests. If these clauses contain
- subqueries, they probably should be gone from EXPLAIN output also?
- Yes. Current approach: when removing an outer join nest, walk the ON clause
- and mark subselects as eliminated. Then let EXPLAIN code check if the
- SELECT was eliminated before the printing (EXPLAIN is generated by doing
- a recursive descent, so the check will also cause children of eliminated
- selects not to be printed)
-
-* Table elimination is performed after constant table detection (but before
- the range analysis). Constant tables are technically different from
- eliminated ones (e.g. the former are shown in EXPLAIN and the latter aren't).
- Considering we've already done the join_read_const_table() call, is there any
- real difference between constant table and eliminated one? If there is, should
- we mark const tables also as eliminated?
- from user/EXPLAIN point of view: no. constant table is the one that we read
- one record from. eliminated table is the one that we don't acccess at all.
+and generalizing it: a table TBL is functionally-dependent if the ON
+expression allows to construct a potential eq_ref access to table TBL that
+uses only outer or functionally-dependent tables.
+
+In other words: table TBL will have one match if the ON expression can be
+converted into this form
+
+ TBL.unique_key=func(one_match_tables) AND .. remainder ...
+
+(with appropriate extension for multi-part keys), where
+
+ one_match_tables= {
+ tables that are not on the inner side of the outer join in question, and
+ functionally dependent tables
+ }
+
+Note that this will cover constant tables, except those that are constant because
+they have 0/1 record or are partitioned and have no used partitions.
+
+
+3.2 Functional dependency source #2: col2=func(col1)
+----------------------------------------------------
+This comes from the second example in the HLS:
-* What is described above will not be able to eliminate this outer join
create unique index idx on tableB (id, fromDate);
...
left outer join
@@ -169,32 +152,331 @@
B.fromDate = (select max(sub.fromDate)
from tableB sub where sub.id = A.id);
- This is because condition "B.fromDate= func(tableB)" cannot be used.
- Reason#1: update_ref_and_keys() does not consider such conditions to
- be of any use (and indeed they are not usable for ref access)
- so they are not put into KEYUSE array.
- Reason#2: even if they were put there, we would need to be able to tell
- between predicates like
- B.fromDate= func(B.id) // guarantees only one matching row as
- // B.id is already bound by B.id=A.id
- // hence B.fromDate becomes bound too.
- and
- "B.fromDate= func(B.*)" // Can potentially have many matching
- // records.
- We need to
- - Have update_ref_and_keys() create KEYUSE elements for such equalities
- - Have eliminate_tables() and friends make a more accurate check.
- The right check is to check whether all parts of a unique key are bound.
- If we have keypartX to be bound, then t.keypartY=func(keypartX) makes
- keypartY to be bound.
- The difficulty here is that correlated subquery predicate cannot tell what
- columns it depends on (it only remembers tables).
- Traversing the predicate is expensive and complicated.
- We're leaning towards making each subquery predicate have a List<Item> with
- items that
- - are in the current select
- - and it depends on.
- This list will be useful in certain other subquery optimizations as well,
- it is cheap to collect it in fix_fields() phase, so it will be collected
- for every subquery predicate.
+Here it is apparent that tableB can be eliminated. It is not possible to
+construct eq_ref access to tableB, though, because for the second part of the
+primary key (fromDate column) we only got a condition in this form:
+
+ B.fromDate= func(tableB)
+
+(we write "func(tableB)" because ref optimizer can only determine which tables
+the right part of the equality depends on).
+
+In general case, equality like this doesn't guarantee functional dependency.
+For example, if func() == { return fromDate;}, i.e the ON expression is
+
+ ... ON B.id = A.id and B.fromDate = B.fromDate
+
+then that would allow table B to have multiple matches per record of table A.
+
+In order to be able to distinguish between these two cases, we'll need to go
+down to column level:
+
+- A table is functionally dependent if it has a unique key that's functionally
+ dependent
+
+- A unique key is functionally dependent when all of its columns are
+ functionally dependent
+
+- A table column is functionally dependent if the ON clause allows to extract
+ an AND-part in this form:
+
+ tbl.column = f(functionally-dependent columns or columns of outer tables)
+
+3.3 Functional dependency source #3: One or zero records in the table
+---------------------------------------------------------------------
+A table with one or zero records cannot generate more than one matching
+record. This source is of lesser importance as one/zero-record tables are only
+MyISAM tables.
+
+3.4 Functional dependency check implementation
+----------------------------------------------
+As shown above, we need something similar to KEYUSE structures, but not
+exactly that (we need things that current ref optimizer considers unusable and
+don't need things that it considers usable).
+
+3.4.1 Equality collection: Option1
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+We could
+- extend KEYUSE structures to store all kinds of equalities we need
+- change update_ref_and_keys() and co. to collect equalities both for ref
+ access and for table elimination
+ = [possibly] Improve [eq_]ref access to be able to use equalities in
+ form keypart2=func(keypart1)
+- process the KEYUSE array both by table elimination and by ref access
+ optimizer.
+
++ This requires less effort.
+- Code will have to be changed all over sql_select.cc
+- update_ref_and_keys() and co. already do several unrelated things. Hooking
+ up table elimination will make it even worse.
+
+3.4.2 Equality collection: Option2
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Alternatively, we could process the WHERE clause totally on our own.
++ Table elimination is standalone and easy to detach module.
+- Some code duplication with update_ref_and_keys() and co.
+
+Having got the equalities, we'll to propagate functional dependency property
+to unique keys, tables and, ultimately, join nests.
+
+3.4.3 Functional dependency propagation - option 1
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Borrow the approach used in constant table detection code:
+
+ do
+ {
+ converted= FALSE;
+ for each table T in join nest
+ {
+ if (check_if_functionally_dependent(T))
+ converted= TRUE;
+ }
+ } while (converted == TRUE);
+
+ check_if_functionally_dependent(T)
+ {
+ if (T has eq_ref access based on func_dep_tables)
+ return TRUE;
+
+ Apply the same do-while loop-based approach to available equalities
+ T.column1=func(other columns)
+ to spread the set of functionally-dependent columns. The goal is to get
+ all columns of a certain unique key to be bound.
+ }
+
+
+3.4.4 Functional dependency propagation - option 2
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Analyze the ON expression(s) and build a list of
+
+ tbl.field = expr(...)
+
+equalities. tbl here is a table that belongs to a join nest that could
+potentially be eliminated.
+
+besides those, add to the list
+ - An element for each unique key in the table that needs to be eliminated
+ - An element for each table that needs to be eliminated
+ - An element for each join nest that can be eliminated (i.e. has no
+ references from outside).
+
+Then, setup "reverse dependencies": each element should have pointers to
+elements that are functionally dependent on it:
+
+- "tbl.field=expr(...)" equality is functionally dependent on all fields that
+ are used in "expr(...)" (here we take into account only fields that belong
+ to tables that can potentially be eliminated).
+- a unique key is dependent on all of its components
+- a table is dependent on all of its unique keys
+- a join nest is dependent on all tables that it contains
+
+These pointers are stored in form of one bitmap, such that:
+
+ "X depends on Y" == test( bitmap[(X's number)*n_objects + (Y's number)] )
+
+Each object also stores a number of dependencies it needs to be satisfied
+before it itself is satisfied:
+
+- "tbl.field=expr(...)" needs all its underlying fields (if a field is
+ referenced many times it is counted only once)
+
+- a unique key needs all of its key parts
+
+- a table needs only one of its unique keys
+
+- a join nest needs all of its tables
+
+(TODO: so what do we do when we've marked a table as constant? We'll need to
+update the "field=expr(....)" elements that use fields of that table. And the
+problem is that we won't know how much to decrement from the counters of those
+elements.
+
+Solution#1: switch to table_map() based approach.
+Solution#2: introduce separate elements for each involved field.
+ field will depend on its table,
+ "field=expr" will depend on fields.
+)
+
+Besides the above, let each element have a pointer to another element, so that
+we can have a linked list of elements.
+
+After the above structures have been created, we start the main algorithm.
+
+The first step is to create a list of functionally-dependent elements. We walk
+across array of dependencies and mark those elements that are already bound
+(i.e. their dependencies are satisfied). At the moment those immediately-bound
+are only "field=expr" dependencies that don't refer to any columns that are
+not bound.
+
+The second step is the loop
+
+ while (bound_list is not empty)
+ {
+ Take the first bound element F off the list.
+ Use the bitmap to find out what other elements depended on it
+ for each such element E
+ {
+ if (E becomes bound after F is bound)
+ add E to the list;
+ }
+ }
+
+The last step is to walk through elements that represent the join nests. Those
+that are bound can be eliminated.
+
+4. Removal operation properties
+===============================
+* There is always one way to remove (no choice to remove either this or that)
+* It is always better to remove as much tables as possible (at least within
+ our cost model).
+Thus, no need for any cost calculations/etc. It's an unconditional rewrite.
+
+
+5. Removal operation
+====================
+(This depends a lot on whether we make table elimination a one-off rewrite or
+conditional)
+
+At the moment table elimination is re-done for each join re-execution, hence
+the removal operation is designed not to modify any statement's permanent
+members.
+
+* Remove the outer join nest's nested join structure (i.e. get the
+ outer join's TABLE_LIST object $OJ and remove it from $OJ->embedding,
+ $OJ->embedding->nested_join. Update table_map's of all ancestor nested
+ joins). [MARK2]
+
+* Move the tables and their JOIN_TABs to the front of join order, like it is
+ done with const tables, with exception that if eliminated outer join nest
+ was within another outer join nest, that shouldn't prevent us from moving
+ away the eliminated tables.
+
+* Update join->table_count and all-join-tables bitmap.
+ ^ TODO: not true anymore ^
+
+* That's it. Nothing else?
+
+6. User interface
+=================
+
+6.1 @@optimizer_switch flag
+---------------------------
+Argument againist adding the flag:
+* It is always better to perform table elimination than not to do it.
+
+Arguments for the flag:
+* It is always theoretically possible that the new code will cause unintended
+ slowdowns.
+* Having the flag is useful for QA and comparative benchmarking.
+
+Decision so far: add the flag under #ifdef. Make the flag be present in debug
+builds.
+
+6.2 EXPLAIN [EXTENDED]
+----------------------
+There are two possible options:
+1. Show eliminated tables, like we do with const tables.
+2. Do not show eliminated tables.
+
+We chose option 2, because:
+- the table is not accessed at all (besides locking it)
+- it is more natural for anchor model user - when he's querying an anchor-
+ and attributes view, he doesn't care about the unused attributes.
+
+EXPLAIN EXTENDED+SHOW WARNINGS won't show the removed table either.
+
+NOTE: Before this WL, the warning text was generated after all JOIN objects
+have been destroyed. This didn't allow to use information about join plan
+when printing the warning. We've fixed this by keeping the JOIN objects until
+the warning text has been generated.
+
+Table elimination removes inner sides of outer join, and logically the ON
+clause is also removed. If this clause has any subqueries, they will be
+also removed from EXPLAIN output.
+
+An exception to the above is that if we eliminate a derived table, it will
+still be shown in EXPLAIN output. This comes from the fact that the FROM
+subqueries are evaluated before table elimination is invoked.
+TODO: Is the above ok or still remove parts of FROM subqueries?
+
+7. Miscellaneous adjustments
+============================
+
+7.1 Fix used_tables() of aggregate functions
+--------------------------------------------
+Aggregate functions used to report that they depend on all tables, that is,
+
+ item_agg_func->used_tables() == (1ULL << join->tables) - 1
+
+always. Fixed it, now aggregate function reports that it depends on the
+tables that its arguments depend on. In particular, COUNT(*) reports that it
+depends on no tables (item_count_star->used_tables()==0). One consequence of
+that is that "item->used_tables()==0" is not equivalent to
+"item->const_item()==true" anymore (not sure if it's "anymore" or this has
+been already so for some items).
+
+7.2 Make subquery predicates collect their outer references
+-----------------------------------------------------------
+Per-column functional dependency analysis requires us to take a
+
+ tbl.field = func(...)
+
+equality and tell which columns of which tables are referred from func(...)
+expression. For scalar expressions, this is accomplished by Item::walk()-based
+traversal. It should be reasonably cheap (the only practical Item that can be
+expensive to traverse seems to be a special case of "col IN (const1,const2,
+...)". check if we traverse the long list for such items).
+
+For correlated subqueries, traversal can be expensive, it is cheaper to make
+each subquery item have a list of its outer references. The list can be
+collected at fix_fields() stage with very little extra cost, and then it could
+be used for other optimizations.
+
+
+8. Other concerns
+=================
+
+8.1 Relationship with outer->inner joins converter
+--------------------------------------------------
+One could suspect that outer->inner join conversion could get in the way
+of table elimination by changing outer joins (which could be eliminated)
+to inner (which we will not try to eliminate).
+This concern is not valid: we make outer->inner conversions based on
+predicates in WHERE. If the WHERE referred to an inner table (this is a
+requirement for the conversion) then table elimination would not be
+applicable anyway.
+
+8.2 Relationship with prepared statements
+-----------------------------------------
+On one hand, it's natural to desire to make table elimination a
+once-per-statement operation, like outer->inner join conversion. We'll have
+to limit the applicability by removing [MARK1] as that can change during
+lifetime of the statement.
+
+The other option is to do table elimination every time. This will require to
+rework operation [MARK2] to be undoable.
+
+
+8.3 Relationship with constant table detection
+----------------------------------------------
+Table elimination is performed after constant table detection (but before
+the range analysis). Constant tables are technically different from
+eliminated ones (e.g. the former are shown in EXPLAIN and the latter aren't).
+Considering we've already done the join_read_const_table() call, is there any
+real difference between constant table and eliminated one? If there is, should
+we mark const tables also as eliminated?
+from user/EXPLAIN point of view: no. constant table is the one that we read
+one record from. eliminated table is the one that we don't acccess at all.
+TODO
+
+9. Tests and benchmarks
+=======================
+Create a benchmark in sql-bench which checks if the DBMS has table
+elimination.
+[According to Monty] Run
+ - query Q1 that would use elimination
+ - query Q2 that is very similar to Q1 (so that they would have same
+ QEP, execution cost, etc) but cannot use table elimination.
+then compare run times and make a conclusion about whether the used dbms
+supports table elimination.
-=-=(Guest - Thu, 23 Jul 2009, 20:07)=-=-
Dependency created: 29 now depends on 17
-=-=(Monty - Thu, 23 Jul 2009, 09:19)=-=-
Version updated.
--- /tmp/wklog.17.old.24090 2009-07-23 09:19:32.000000000 +0300
+++ /tmp/wklog.17.new.24090 2009-07-23 09:19:32.000000000 +0300
@@ -1 +1 @@
-Server-9.x
+Server-5.1
-=-=(Guest - Mon, 20 Jul 2009, 14:28)=-=-
deukje weg
Worked 1 hour and estimate 3 hours remain (original estimate increased by 4 hours).
-=-=(Guest - Fri, 17 Jul 2009, 02:44)=-=-
Version updated.
--- /tmp/wklog.17.old.24138 2009-07-17 02:44:49.000000000 +0300
+++ /tmp/wklog.17.new.24138 2009-07-17 02:44:49.000000000 +0300
@@ -1 +1 @@
-9.x
+Server-9.x
------------------------------------------------------------
-=-=(View All Progress Notes, 31 total)=-=-
http://askmonty.org/worklog/index.pl?tid=17&nolimit=1
DESCRIPTION:
Eliminate tables that are not needed from SELECT queries.
This will speed up some views and automatically generated queries.
Example:
CREATE TABLE tableB (id int primary key);
select
A.colA
from
tableA A
left outer join
tableB B
on
B.id = A.id;
In this case we can remove table B and the join from the query.
HIGH-LEVEL SPECIFICATION:
Here is an extended explanation of table elimination.
Table elimination is a feature found in some modern query optimizers, of
which Microsoft SQL Server 2005/2008 seems to have the most advanced
implementation. Oracle 11g has also been confirmed to use table
elimination but not to the same extent.
Basically, what table elimination does, is to remove tables from the
execution plan when it is unnecessary to include them. This can, of
course, only happen if the right circumstances arise. Let us for example
look at the following query:
select
A.colA
from
tableA A
left outer join
tableB B
on
B.id = A.id;
When using A as the left table we ensure that the query will return at
least as many rows as there are in that table. For rows where the join
condition (B.id = A.id) is not met, the selected column (A.colA) will
still contain its original value, and the unmatched B.* columns would all be NULL.
However, the result set could actually contain more rows than what is
found in tableA if there are duplicates of the column B.id in tableB. If
A contains a row [1, "val1"] and B the rows [1, "other1a"],[1, "other1b"]
then two rows will match in the join condition. The only way to know
what the result will look like is to actually touch both tables during
execution.
Instead, let's say that tableB contains rows that make it possible to
place a unique constraint on the column B.id, for example (and often the
case in practice) a primary key. In this situation we know that we will get exactly
as many rows as there are in tableA, since joining with tableB cannot
introduce any duplicates. If further, as in the example query, we do not
select any columns from tableB, touching that table during execution is
unnecessary. We can remove the whole join operation from the execution
plan.
Both SQL Server 2005/2008 and Oracle 11g will deploy table elimination
in the case described above. Let us look at a more advanced query, where
Oracle fails.
select
A.colA
from
tableA A
left outer join
tableB B
on
B.id = A.id
and
B.fromDate = (
select
max(sub.fromDate)
from
tableB sub
where
sub.id = A.id
);
In this example we have added another join condition, which ensures
that we only pick the matching row from tableB having the latest
fromDate. In this case tableB will contain duplicates of the column
B.id, so in order to ensure uniqueness the primary key has to contain
the fromDate column as well. In other words the primary key of tableB
is (B.id, B.fromDate).
Furthermore, since the subselect ensures that we only pick the latest
B.fromDate for a given B.id we know that at most one row will match
the join condition. We will again have the situation where joining
with tableB cannot affect the number of rows in the result set. Since
we do not select any columns from tableB, the whole join operation can
be eliminated from the execution plan.
SQL Server 2005/2008 will deploy table elimination in this situation as
well. We have not found a way to make Oracle 11g use it for this type of
query. Queries like these arise in two situations. Either when you have
denormalized model consisting of a fact table with several related
dimension tables, or when you have a highly normalized model where each
attribute is stored in its own table. The example with the subselect is
common whenever you store historized/versioned data.
LOW-LEVEL DESIGN:
The code (currently in development) is at lp:
~maria-captains/maria/maria-5.1-table-elimination tree.
<contents>
1. Elimination criteria
2. No outside references check
2.1 Quick check if there are tables with no outside references
3. One-match check
3.1 Functional dependency source #1: Potential eq_ref access
3.2 Functional dependency source #2: col2=func(col1)
3.3 Functional dependency source #3: One or zero records in the table
3.4 Functional dependency check implementation
3.4.1 Equality collection: Option1
3.4.2 Equality collection: Option2
3.4.3 Functional dependency propagation - option 1
3.4.4 Functional dependency propagation - option 2
4. Removal operation properties
5. Removal operation
6. User interface
6.1 @@optimizer_switch flag
6.2 EXPLAIN [EXTENDED]
7. Miscellaneous adjustments
7.1 Fix used_tables() of aggregate functions
7.2 Make subquery predicates collect their outer references
8. Other concerns
8.1 Relationship with outer->inner joins converter
8.2 Relationship with prepared statements
8.3 Relationship with constant table detection
9. Tests and benchmarks
</contents>
It's not really about elimination of tables, it's about elimination of inner
sides of outer joins.
1. Elimination criteria
=======================
We can eliminate inner side of an outer join nest if:
1. There are no references to columns of the inner tables anywhere else in
the query.
2. For each record combination of outer tables, it will always produce
exactly one matching record combination.
Most of the effort in this WL entry goes into checking these two conditions.
2. No outside references check
==============================
Criterion #1 means that the WHERE clause, ON clauses of embedding/subsequent
outer joins, ORDER BY, GROUP BY and HAVING must have no references to inner
tables of the outer join nest we're trying to remove.
For multi-table UPDATE/DELETE we also must not remove tables that we're
updating/deleting from or tables that are used in UPDATE's SET clause.
2.1 Quick check if there are tables with no outside references
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before we start searching for outer join nests that could be eliminated,
we'll do a quick and cheap check if there possibly could be something that
could be eliminated:
if (there are outer joins &&
(tables used in select_list |
tables used in group/order by UNION |
tables used in where) != bitmap_of_all_join_tables)
{
attempt table elimination;
}
3. One-match check
==================
We can eliminate inner side of outer join if it will always generate exactly
one matching record combination.
By definition of OUTER JOIN, a NULL-complemented record combination will be
generated when the inner side of outer join has not produced any matches.
What remains to be checked is that there is no possibility that the inner side of
the outer join could produce more than one matching record combination.
We'll refer to the one-match property as "functional dependency":
- An outer join nest is functionally dependent [wrt outer tables] if it will
  produce one matching record combination per each record combination of the
  outer tables
- A table is functionally dependent wrt a certain set of dependency tables if a
  record combination of the dependency tables uniquely identifies zero or one
  matching record in the table
- Definitions of functional dependency of keys (=column tuples) and columns are
apparent.
Our goal is to prove that the entire join nest is functionally dependent.
A join nest is functionally dependent (on the outside tables) if each of its
elements (which can be either base tables or join nests) is functionally
dependent.
Functional dependency is transitive: if table A is f-dependent on the outer
tables and table B is f-dependent on {A, outer_tables}, then B is functionally
dependent on the outer tables.
Subsequent sections list cases when we can declare a table to be
functionally-dependent.
3.1 Functional dependency source #1: Potential eq_ref access
------------------------------------------------------------
This is the most practically-important case. Taking the example from the HLD
of this WL entry:
select
A.colA
from
tableA A
left outer join
tableB B
on
B.id = A.id;
and generalizing it: a table TBL is functionally dependent if the ON
expression allows us to construct a potential eq_ref access to table TBL that
uses only outer or functionally-dependent tables.
In other words: table TBL will have one match if the ON expression can be
converted into this form
TBL.unique_key=func(one_match_tables) AND .. remainder ...
(with appropriate extension for multi-part keys), where
one_match_tables= {
tables that are not on the inner side of the outer join in question, and
functionally dependent tables
}
Note that this will cover constant tables, except those that are constant because
they have 0/1 record or are partitioned and have no used partitions.
3.2 Functional dependency source #2: col2=func(col1)
----------------------------------------------------
This comes from the second example in the HLS:
create unique index idx on tableB (id, fromDate);
...
left outer join
tableB B
on
B.id = A.id
and
B.fromDate = (select max(sub.fromDate)
from tableB sub where sub.id = A.id);
Here it is apparent that tableB can be eliminated. It is not possible to
construct eq_ref access to tableB, though, because for the second part of the
primary key (fromDate column) we only got a condition in this form:
B.fromDate= func(tableB)
(we write "func(tableB)" because ref optimizer can only determine which tables
the right part of the equality depends on).
In the general case, an equality like this doesn't guarantee functional dependency.
For example, if func() == { return fromDate;}, i.e. the ON expression is
... ON B.id = A.id and B.fromDate = B.fromDate
then that would allow table B to have multiple matches per record of table A.
In order to be able to distinguish between these two cases, we'll need to go
down to column level:
- A table is functionally dependent if it has a unique key that's functionally
dependent
- A unique key is functionally dependent when all of its columns are
functionally dependent
- A table column is functionally dependent if the ON clause allows us to extract
  an AND-part of this form:
tbl.column = f(functionally-dependent columns or columns of outer tables)
3.3 Functional dependency source #3: One or zero records in the table
---------------------------------------------------------------------
A table with one or zero records cannot generate more than one matching
record. This source is of lesser importance as one/zero-record tables are only
MyISAM tables.
3.4 Functional dependency check implementation
----------------------------------------------
As shown above, we need something similar to KEYUSE structures, but not
exactly that (we need things that current ref optimizer considers unusable and
don't need things that it considers usable).
3.4.1 Equality collection: Option1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We could
- extend KEYUSE structures to store all kinds of equalities we need
- change update_ref_and_keys() and co. to collect equalities both for ref
access and for table elimination
= [possibly] Improve [eq_]ref access to be able to use equalities in
form keypart2=func(keypart1)
- process the KEYUSE array both by table elimination and by ref access
optimizer.
+ This requires less effort.
- Code will have to be changed all over sql_select.cc
- update_ref_and_keys() and co. already do several unrelated things. Hooking
up table elimination will make it even worse.
3.4.2 Equality collection: Option2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Alternatively, we could process the WHERE clause totally on our own.
+ Table elimination is a standalone, easy-to-detach module.
- Some code duplication with update_ref_and_keys() and co.
Having collected the equalities, we'll need to propagate the functional dependency
property to unique keys, tables and, ultimately, join nests.
3.4.3 Functional dependency propagation - option 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Borrow the approach used in constant table detection code:
do
{
converted= FALSE;
for each table T in join nest
{
if (check_if_functionally_dependent(T))
converted= TRUE;
}
} while (converted == TRUE);
check_if_functionally_dependent(T)
{
if (T has eq_ref access based on func_dep_tables)
return TRUE;
Apply the same do-while loop-based approach to available equalities
T.column1=func(other columns)
to spread the set of functionally-dependent columns. The goal is to get
all columns of a certain unique key to be bound.
}
3.4.4 Functional dependency propagation - option 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Analyze the ON expression(s) and build a list of
tbl.field = expr(...)
equalities. tbl here is a table that belongs to a join nest that could
potentially be eliminated.
Besides those, add to the list:
- An element for each unique key in the table that needs to be eliminated
- An element for each table that needs to be eliminated
- An element for each join nest that can be eliminated (i.e. has no
references from outside).
Then, setup "reverse dependencies": each element should have pointers to
elements that are functionally dependent on it:
- "tbl.field=expr(...)" equality is functionally dependent on all fields that
are used in "expr(...)" (here we take into account only fields that belong
to tables that can potentially be eliminated).
- a unique key is dependent on all of its components
- a table is dependent on all of its unique keys
- a join nest is dependent on all tables that it contains
These pointers are stored in the form of one bitmap, such that:
"X depends on Y" == test( bitmap[(X's number)*n_objects + (Y's number)] )
Each object also stores the number of dependencies that need to be satisfied
before it itself is satisfied:
- "tbl.field=expr(...)" needs all its underlying fields (if a field is
referenced many times it is counted only once)
- a unique key needs all of its key parts
- a table needs only one of its unique keys
- a join nest needs all of its tables
(TODO: so what do we do when we've marked a table as constant? We'll need to
update the "field=expr(....)" elements that use fields of that table. And the
problem is that we won't know by how much to decrement the counters of those
elements.
Solution#1: switch to table_map() based approach.
Solution#2: introduce separate elements for each involved field.
field will depend on its table,
"field=expr" will depend on fields.
)
Besides the above, let each element have a pointer to another element, so that
we can have a linked list of elements.
After the above structures have been created, we start the main algorithm.
The first step is to create a list of bound (functionally-dependent) elements.
We walk across the array of dependencies and mark those elements that are
already bound (i.e. their dependencies are satisfied). At this point, the only
immediately-bound elements are the "field=expr" equalities that don't refer to
any columns that are not yet bound.
The second step is the loop
while (bound_list is not empty)
{
  Take the first bound element F off the list.
  Use the bitmap to find out what other elements depend on it.
  for each such element E
  {
    if (E becomes bound after F is bound)
      add E to the list;
  }
}
The last step is to walk through elements that represent the join nests. Those
that are bound can be eliminated.
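
For illustration, here is a compact, self-contained C++ sketch of the
counters-plus-reverse-dependency-bitmap propagation. Element, DepGraph and the
toy instance in main() are hypothetical (the real structures live in the
optimizer and cover equalities, fields, unique keys, tables and join nests),
but the seeding step and the worklist loop follow the two steps described
above.

#include <cstdio>
#include <utility>
#include <vector>

/* One entry of the flattened dependency graph described above */
struct Element
{
  const char *name;
  int  unsatisfied;   /* how many dependencies must be bound before this one */
  bool bound;
};

struct DepGraph
{
  int n;
  std::vector<Element> elems;
  std::vector<bool> bitmap;            /* bitmap[x*n + y] <=> "x depends on y" */

  explicit DepGraph(std::vector<Element> e)
    : n((int)e.size()), elems(std::move(e)), bitmap(n * n, false) {}

  void add_dependency(int x, int y) { bitmap[x * n + y]= true; }
  bool depends_on(int x, int y) const { return bitmap[x * n + y]; }

  void propagate()
  {
    /* Step 1: seed the list with elements whose counters are already zero */
    std::vector<int> bound_list;
    for (int i= 0; i < n; i++)
      if (elems[i].unsatisfied == 0)
      {
        elems[i].bound= true;
        bound_list.push_back(i);
      }

    /* Step 2: take bound elements off the list and update whatever depends
       on them; an element whose counter drops to zero becomes bound too */
    while (!bound_list.empty())
    {
      int y= bound_list.back();
      bound_list.pop_back();
      for (int x= 0; x < n; x++)
        if (!elems[x].bound && depends_on(x, y) && --elems[x].unsatisfied == 0)
        {
          elems[x].bound= true;
          bound_list.push_back(x);
        }
    }
  }
};

int main()
{
  /* Toy instance: one "t.pk = outer.col" equality with no unbound columns,
     the unique key that needs that one key part, and the table that needs
     one of its unique keys. */
  DepGraph g({ {"eq: t.pk=outer.col", 0, false},
               {"unique key (pk)",    1, false},
               {"table t",            1, false} });
  g.add_dependency(1, 0);   /* the key depends on the equality */
  g.add_dependency(2, 1);   /* the table depends on the key    */
  g.propagate();
  for (const Element &e : g.elems)
    printf("%-22s %s\n", e.name, e.bound ? "bound" : "not bound");
  return 0;
}

In the real structures, the element representing the eliminable join nest
would be checked last: if it ends up bound, the nest can be eliminated.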
4. Removal operation properties
===============================
* There is always one way to remove (no choice to remove either this or that)
* It is always better to remove as many tables as possible (at least within
our cost model).
Thus, no need for any cost calculations/etc. It's an unconditional rewrite.
5. Removal operation
====================
(This depends a lot on whether we make table elimination a one-off rewrite or
conditional)
At the moment, table elimination is re-done for each join re-execution; hence
the removal operation is designed not to modify any of the statement's
permanent members.
* Remove the outer join nest's nested join structure (i.e. get the
outer join's TABLE_LIST object $OJ and remove it from $OJ->embedding,
$OJ->embedding->nested_join. Update table_map's of all ancestor nested
joins). [MARK2]
* Move the tables and their JOIN_TABs to the front of the join order, like it
is done with const tables, with the exception that if the eliminated outer
join nest was within another outer join nest, that shouldn't prevent us from
moving the eliminated tables away.
* Update join->table_count and all-join-tables bitmap.
^ TODO: not true anymore ^
* That's it. Nothing else?
6. User interface
=================
6.1 @@optimizer_switch flag
---------------------------
Argument against adding the flag:
* It is always better to perform table elimination than not to do it.
Arguments for the flag:
* It is always theoretically possible that the new code will cause unintended
slowdowns.
* Having the flag is useful for QA and comparative benchmarking.
Decision so far: add the flag under #ifdef, and make the flag present in debug
builds.
6.2 EXPLAIN [EXTENDED]
----------------------
There are two possible options:
1. Show eliminated tables, like we do with const tables.
2. Do not show eliminated tables.
We chose option 2, because:
- the table is not accessed at all (besides locking it)
- it is more natural for an anchor model user: when querying an anchor-and-
attributes view, they don't care about the unused attributes.
EXPLAIN EXTENDED+SHOW WARNINGS won't show the removed table either.
NOTE: Before this WL, the warning text was generated after all JOIN objects
had been destroyed. This made it impossible to use information about the join
plan when printing the warning. We've fixed this by keeping the JOIN objects until
the warning text has been generated.
Table elimination removes the inner side of an outer join, and logically the
ON clause is also removed. If this clause has any subqueries, they will also
be removed from the EXPLAIN output.
An exception to the above is that if we eliminate a derived table, it will
still be shown in EXPLAIN output. This comes from the fact that the FROM
subqueries are evaluated before table elimination is invoked.
TODO: Is the above OK, or should we still remove parts of FROM subqueries?
7. Miscellaneous adjustments
============================
7.1 Fix used_tables() of aggregate functions
--------------------------------------------
Aggregate functions used to report that they depend on all tables, that is,
  item_agg_func->used_tables() == (1ULL << join->tables) - 1
always. This has been fixed: an aggregate function now reports that it depends
on the tables that its arguments depend on. In particular, COUNT(*) reports
that it depends on no tables (item_count_star->used_tables()==0). One
consequence is that "item->used_tables()==0" is no longer equivalent to
"item->const_item()==true" (not sure if it's really "no longer", or whether
this has already been the case for some items).
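
To illustrate the new behaviour, here is a simplified stand-in (not the actual
Item/Item_sum definitions from the server; just a toy model): an aggregate's
map is the OR of its arguments' maps, so COUNT(*), having no arguments,
reports an empty map even though it is certainly not a constant item.

#include <cstdint>
#include <cstdio>
#include <memory>
#include <vector>

typedef uint64_t table_map;          /* bit i set <=> depends on table #i */

/* Toy stand-ins for the server's Item hierarchy (illustration only) */
struct Item
{
  virtual ~Item() {}
  virtual table_map used_tables() const = 0;
};

struct Item_field : Item             /* a column reference, e.g. t3.a */
{
  table_map map;
  explicit Item_field(int table_no) : map(table_map(1) << table_no) {}
  table_map used_tables() const override { return map; }
};

/* After the fix: an aggregate depends exactly on what its arguments depend
   on; COUNT(*) has no arguments and therefore reports 0 */
struct Item_sum : Item
{
  std::vector<std::unique_ptr<Item>> args;
  table_map used_tables() const override
  {
    table_map m= 0;
    for (const auto &a : args)
      m|= a->used_tables();
    return m;
  }
};

int main()
{
  Item_sum count_star;               /* COUNT(*): no arguments */
  Item_sum sum_a;                    /* e.g. SUM(t3.a)         */
  sum_a.args.push_back(std::make_unique<Item_field>(3));
  printf("COUNT(*): 0x%llx  SUM(t3.a): 0x%llx\n",
         (unsigned long long)count_star.used_tables(),
         (unsigned long long)sum_a.used_tables());
  return 0;
}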
7.2 Make subquery predicates collect their outer references
-----------------------------------------------------------
Per-column functional dependency analysis requires us to take a
tbl.field = func(...)
equality and tell which columns of which tables are referred to from the
func(...) expression. For scalar expressions, this is accomplished by an
Item::walk()-based traversal. It should be reasonably cheap (the only
practical Item that can be expensive to traverse seems to be the special case
of "col IN (const1, const2, ...)"; TODO: check whether we traverse the long
list for such items).
For correlated subqueries, traversal can be expensive; it is cheaper to make
each subquery item keep a list of its outer references. The list can be
collected at the fix_fields() stage with very little extra cost, and it could
then be used for other optimizations as well.
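
A sketch of the difference, again with hypothetical classes (Expr, Column,
Func and SubqueryPredicate are illustrative stand-ins for the Item hierarchy):
a scalar expression answers "which tables do you refer to?" by walking its
children, while a subquery predicate answers from the list of outer references
collected once during name resolution, without touching the subquery body.

#include <cstdint>
#include <cstdio>
#include <vector>

typedef uint64_t table_map;          /* bit i set <=> refers to table #i */

struct Expr
{
  virtual ~Expr() {}
  virtual table_map referred_tables() const = 0;
};

struct Column : Expr                 /* a tbl.field reference */
{
  table_map map;
  explicit Column(int table_no) : map(table_map(1) << table_no) {}
  table_map referred_tables() const override { return map; }
};

struct Func : Expr                   /* func(arg1, arg2, ...): walk children */
{
  std::vector<Expr*> args;
  table_map referred_tables() const override
  {
    table_map m= 0;
    for (const Expr *a : args)
      m|= a->referred_tables();
    return m;
  }
};

/* A correlated subquery predicate: instead of walking its (potentially
   large) body, expose the outer references collected once at fix_fields()
   time */
struct SubqueryPredicate : Expr
{
  std::vector<Column*> outer_refs;   /* filled during name resolution */
  table_map referred_tables() const override
  {
    table_map m= 0;
    for (const Column *c : outer_refs)
      m|= c->map;
    return m;
  }
};

int main()
{
  Column a(1), b(2);                 /* t1.a and t2.b               */
  Func f;
  f.args= {&a, &b};                  /* func(t1.a, t2.b)            */
  SubqueryPredicate sq;
  sq.outer_refs= {&a};               /* subquery correlated on t1.a */
  printf("func: 0x%llx  subquery: 0x%llx\n",
         (unsigned long long)f.referred_tables(),
         (unsigned long long)sq.referred_tables());
  return 0;
}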
8. Other concerns
=================
8.1 Relationship with outer->inner joins converter
--------------------------------------------------
One could suspect that outer->inner join conversion could get in the way
of table elimination by changing outer joins (which could be eliminated)
to inner (which we will not try to eliminate).
This concern is not valid: we make outer->inner conversions based on
predicates in WHERE. If the WHERE referred to an inner table (this is a
requirement for the conversion) then table elimination would not be
applicable anyway.
8.2 Relationship with prepared statements
-----------------------------------------
On one hand, it's natural to want to make table elimination a
once-per-statement operation, like outer->inner join conversion. We'd have to
limit its applicability by removing [MARK1], as that can change during the
lifetime of the statement.
The other option is to do table elimination every time. This will require
reworking operation [MARK2] to be undoable.
8.3 Relationship with constant table detection
----------------------------------------------
Table elimination is performed after constant table detection (but before
the range analysis). Constant tables are technically different from
eliminated ones (e.g. the former are shown in EXPLAIN and the latter aren't).
Considering we've already done the join_read_const_table() call, is there any
real difference between a constant table and an eliminated one? If there is,
should we also mark const tables as eliminated?
From the user/EXPLAIN point of view: no. A constant table is one that we read
one record from; an eliminated table is one that we don't access at all.
TODO
9. Tests and benchmarks
=======================
Create a benchmark in sql-bench which checks if the DBMS has table
elimination.
[According to Monty] Run
- query Q1 that would use elimination
- query Q2 that is very similar to Q1 (so that they would have the same
  QEP, execution cost, etc.) but cannot use table elimination.
Then compare run times and conclude whether the DBMS under test supports
table elimination.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] Rev 2785: Disable subselect_notembedded.test due to LPBUG#545137 in file:///home/psergey/dev/maria-5.3-subqueries-r10/
by Sergey Petrunya 23 Mar '10
At file:///home/psergey/dev/maria-5.3-subqueries-r10/
------------------------------------------------------------
revno: 2785
revision-id: psergey(a)askmonty.org-20100323145750-sr9oucry979i3p60
parent: psergey(a)askmonty.org-20100321200604-oxw7ri2qu5c9n7gy
committer: Sergey Petrunya <psergey(a)askmonty.org>
branch nick: maria-5.3-subqueries-r10
timestamp: Tue 2010-03-23 17:57:50 +0300
message:
Disable subselect_notembedded.test due to LPBUG#545137
=== modified file 'mysql-test/t/disabled.def'
--- a/mysql-test/t/disabled.def 2010-03-04 08:03:07 +0000
+++ b/mysql-test/t/disabled.def 2010-03-23 14:57:50 +0000
@@ -11,3 +11,4 @@
##############################################################################
kill : Bug#37780 2008-12-03 HHunger need some changes to be robust enough for pushbuild.
query_cache_28249 : Bug#43861 2009-03-25 main.query_cache_28249 fails sporadically
+subselect_notembedded : MariaDB LPBUG#545137: crashes on some platforms
23 Mar '10
Kristian Nielsen <knielsen(a)knielsen-hq.org> writes:
>> I know how to take an external project and merge it into MariaDB. This uses
>> the "merge-into" bzr plugin.
>>
>> However, I do not know how to go in the opposite direction, that is take a
>> part of the MariaDB tree and split it out in a separate project, and still
>> allow merging it back in.
Actually, I just noticed the `bzr split` command, which seems to claim to do
exactly this.
I never used it, but you might want to google/experiment with it a bit and see
if it would work.
- Kristian.
Hi!
On 21 March 2010, at 23:50, Sergei Golubchik wrote:
> Hi.
>
> In few places ma_loghandler.c uses soft_sync_rwl.
> It's never defined, and the build fails on platforms where 5.2
> has no native support for atomic ops.
>
> What should it be ? soft_sync_rwl should be defined or it should be
> never used ?
>
> Can you fix that ?
Yes. We agreed with Monty that I should remove it all.
[skip]
22 Mar '10
Hi Sergey,
Thanks for offering to help with the memory leak. Here are the details.
Tree is here:
lp:~maria-captains/maria/mariadb-5.1-knielsen
This tree is the merge with MySQL-5.1.44, including your fix for the
uninitialised variable in table elimination.
The test case is the following, which is a simplified version of main.union
(so main.union shows the same memory leak):
-------------------------------- cut here --------------------------------
CREATE TABLE t1 (a VARCHAR(10), FULLTEXT KEY a (a));
INSERT INTO t1 VALUES (1),(2);
CREATE TABLE t2 (b INT);
INSERT INTO t2 VALUES (1),(2);
EXPLAIN EXTENDED
SELECT * FROM t1 UNION SELECT * FROM t1
ORDER BY (SELECT a FROM t2 WHERE b = 12);
DROP TABLE t1,t2;
-------------------------------- cut here --------------------------------
Here is the stack trace from Valgrind:
main.knielsen [ pass ] 1604
***Warnings generated in error logs during shutdown after running tests: main.knielsen
==11409==
==11409== 1,440 bytes in 1 blocks are definitely lost in loss record 7 of 7
==11409== at 0x4C22FAB: malloc (vg_replace_malloc.c:207)
==11409== by 0xB3DD44: my_malloc (my_malloc.c:37)
==11409== by 0xB4D078: init_dynamic_array2 (array.c:64)
==11409== by 0x7146BA: update_ref_and_keys(THD*, st_dynamic_array*, st_join_table*, unsigned, Item*, COND_EQUAL*, unsigned long long, st_select_lex*, st_sargable_param**) (sql_select.cc:3930)
==11409== by 0x7156C5: make_join_statistics(JOIN*, TABLE_LIST*, Item*, st_dynamic_array*) (sql_select.cc:2721)
==11409== by 0x717998: JOIN::optimize() (sql_select.cc:1002)
==11409== by 0x71BC37: mysql_select(THD*, Item***, TABLE_LIST*, unsigned, List<Item>&, Item*, unsigned, st_order*, st_order*, Item*, st_order*, unsigned long long, select_result*, st_select_lex_unit*, st_select_lex*) (sql_select.cc:2473)
==11409== by 0x71C1C5: mysql_explain_union(THD*, st_select_lex_unit*, select_result*) (sql_select.cc:16946)
==11409== by 0x71E942: select_describe(JOIN*, bool, bool, bool, char const*) (sql_select.cc:16887)
==11409== by 0x71F60F: JOIN::exec() (sql_select.cc:1837)
==11409== by 0x846859: st_select_lex_unit::exec() (sql_union.cc:513)
==11409== by 0x71C0A0: mysql_explain_union(THD*, st_select_lex_unit*, select_result*) (sql_select.cc:16929)
==11409== by 0x688108: execute_sqlcom_select(THD*, TABLE_LIST*) (sql_parse.cc:5091)
==11409== by 0x68A2AC: mysql_execute_command(THD*) (sql_parse.cc:2299)
==11409== by 0x692EFF: mysql_parse(THD*, char const*, unsigned, char const**) (sql_parse.cc:6034)
==11409== by 0x693D11: dispatch_command(enum_server_command, THD*, char*, unsigned) (sql_parse.cc:1247)
- Kristian.
[Maria-developers] Rev 2784: Make test result stable (had different result orderings, on some platforms, both in file:///home/psergey/dev/maria-5.3-subqueries-r10/
by Sergey Petrunya 21 Mar '10
At file:///home/psergey/dev/maria-5.3-subqueries-r10/
------------------------------------------------------------
revno: 2784
revision-id: psergey(a)askmonty.org-20100321200604-oxw7ri2qu5c9n7gy
parent: psergey(a)askmonty.org-20100321195033-jzl1l091k2nbb6zh
committer: Sergey Petrunya <psergey(a)askmonty.org>
branch nick: maria-5.3-subqueries-r10
timestamp: Sun 2010-03-21 23:06:04 +0300
message:
Make test result stable (had different result orderings, on some platforms, both
of which satisfied the ORDER BY clause).
=== modified file 'mysql-test/r/join_outer.result'
--- a/mysql-test/r/join_outer.result 2010-03-20 12:01:47 +0000
+++ b/mysql-test/r/join_outer.result 2010-03-21 20:06:04 +0000
@@ -416,10 +416,10 @@
select t1.*, t2.* from t1 left join t2 on t1.n = t2.n and
t1.m = t2.m where t1.n = 1 order by t1.o;
n m o n m o
+1 2 11 1 2 3
1 2 7 1 2 3
1 2 9 1 2 3
1 3 9 NULL NULL NULL
-1 2 11 1 2 3
drop table t1,t2;
CREATE TABLE t1 (id1 INT NOT NULL PRIMARY KEY, dat1 CHAR(1), id2 INT);
INSERT INTO t1 VALUES (1,'a',1);
=== modified file 'mysql-test/r/join_outer_jcl6.result'
--- a/mysql-test/r/join_outer_jcl6.result 2010-03-20 12:01:47 +0000
+++ b/mysql-test/r/join_outer_jcl6.result 2010-03-21 20:06:04 +0000
@@ -420,10 +420,10 @@
select t1.*, t2.* from t1 left join t2 on t1.n = t2.n and
t1.m = t2.m where t1.n = 1 order by t1.o;
n m o n m o
+1 2 11 1 2 3
1 2 7 1 2 3
1 2 9 1 2 3
1 3 9 NULL NULL NULL
-1 2 11 1 2 3
drop table t1,t2;
CREATE TABLE t1 (id1 INT NOT NULL PRIMARY KEY, dat1 CHAR(1), id2 INT);
INSERT INTO t1 VALUES (1,'a',1);
=== modified file 'mysql-test/t/join_outer.test'
--- a/mysql-test/t/join_outer.test 2009-12-17 09:55:18 +0000
+++ b/mysql-test/t/join_outer.test 2010-03-21 20:06:04 +0000
@@ -308,6 +308,7 @@
insert into t2 values (1, 2, 3),(2, 2, 8), (4,3,9),(3,2,10);
select t1.*, t2.* from t1 left join t2 on t1.n = t2.n and
t1.m = t2.m where t1.n = 1;
+--sorted_result
select t1.*, t2.* from t1 left join t2 on t1.n = t2.n and
t1.m = t2.m where t1.n = 1 order by t1.o;
drop table t1,t2;
[Maria-developers] Rev 2783: Fix merge error in pbxt suite test results in file:///home/psergey/dev/maria-5.3-subqueries-r10/
by Sergey Petrunya 21 Mar '10
At file:///home/psergey/dev/maria-5.3-subqueries-r10/
------------------------------------------------------------
revno: 2783
revision-id: psergey(a)askmonty.org-20100321195033-jzl1l091k2nbb6zh
parent: psergey(a)askmonty.org-20100320165930-ehfull9rin1bdme4
committer: Sergey Petrunya <psergey(a)askmonty.org>
branch nick: maria-5.3-subqueries-r10
timestamp: Sun 2010-03-21 22:50:33 +0300
message:
Fix merge error in pbxt suite test results
=== modified file 'mysql-test/suite/pbxt/r/join_nested.result'
--- a/mysql-test/suite/pbxt/r/join_nested.result 2010-03-20 12:01:47 +0000
+++ b/mysql-test/suite/pbxt/r/join_nested.result 2010-03-21 19:50:33 +0000
@@ -1055,8 +1055,8 @@
(t8.b=t9.b OR t8.c IS NULL) AND
(t9.a=1);
id select_type table type possible_keys key key_len ref rows Extra
-1 SIMPLE t0 ref idx_a idx_a 5 const 1 100.00
-1 SIMPLE t1 ref idx_b idx_b 5 test.t0.b 1 100.00
+1 SIMPLE t0 ref idx_a idx_a 5 const 1
+1 SIMPLE t1 ref idx_b idx_b 5 test.t0.b 1
1 SIMPLE t2 ALL NULL NULL NULL NULL 3
1 SIMPLE t3 ALL NULL NULL NULL NULL 2
1 SIMPLE t4 ref idx_b idx_b 5 test.t2.b 1
[Maria-developers] Rev 2782: Fix union.test failure in buildbot: alternate fix for BUG#49734 in file:///home/psergey/dev/maria-5.3-subqueries-r10/
by Sergey Petrunya 20 Mar '10
At file:///home/psergey/dev/maria-5.3-subqueries-r10/
------------------------------------------------------------
revno: 2782
revision-id: psergey(a)askmonty.org-20100320165930-ehfull9rin1bdme4
parent: psergey(a)askmonty.org-20100320120844-n8dvu5loib2fjvwl
committer: Sergey Petrunya <psergey(a)askmonty.org>
branch nick: maria-5.3-subqueries-r10
timestamp: Sat 2010-03-20 19:59:30 +0300
message:
Fix union.test failure in buildbot: alternate fix for BUG#49734
=== modified file 'mysql-test/r/union.result'
--- a/mysql-test/r/union.result 2010-03-20 12:01:47 +0000
+++ b/mysql-test/r/union.result 2010-03-20 16:59:30 +0000
@@ -1632,7 +1632,7 @@
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
2 UNION t1 ALL NULL NULL NULL NULL 2 100.00
-3 SUBQUERY t2 ALL NULL NULL NULL NULL 2 100.00 Using where
+3 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 2 100.00 Using where
NULL UNION RESULT <union1,2> ALL NULL NULL NULL NULL NULL NULL Using filesort
Warnings:
Note 1276 Field or reference 'test.t1.a' of SELECT #3 was resolved in SELECT #2
=== modified file 'sql/sql_select.cc'
--- a/sql/sql_select.cc 2010-03-20 12:01:47 +0000
+++ b/sql/sql_select.cc 2010-03-20 16:59:30 +0000
@@ -18319,6 +18319,26 @@
unit;
unit= unit->next_unit())
{
+ /*
+ This fix_fields() call is to handle an edge case like this:
+
+ SELECT ... UNION SELECT ... ORDER BY (SELECT ...)
+
+ for such queries, we'll get here before having called
+ subquery_expr->fix_fields(), which will cause failure to
+ */
+ if (unit->item && !unit->item->fixed)
+ {
+ Item *ref= unit->item;
+ if (unit->item->fix_fields(thd, &ref))
+ DBUG_VOID_RETURN;
+ DBUG_ASSERT(ref == unit->item);
+ }
+
+ /*
+ Display subqueries only if they are not parts of eliminated WHERE/ON
+ clauses.
+ */
if (!(unit->item && unit->item->eliminated))
{
if (mysql_explain_union(thd, unit, result))
Hi!
I was looking through subquery code and found the following issue
with FirstMatch strategy:
Our original intent with FirstMatch strategy was to support join orders where
sj-inner tables are interleaved with outer tables that are not correlated
with the subquery. FirstMatch spec is here,
http://forge.mysql.com/worklog/task.php?id=3750; the question of interleaving
is covered in section 2.2.
[I assumed] I have coded this for non-buffered join execution, both optimizer
and executioner support. The first problem I saw was that it didn't seem to
be possible to come up with dataset/query that would cause the join optimizer
to pick such join order. I don't know whether this is because the cost formulas
make the choice impossible or I'm just not finding the right examples. Either
way, mysql-test-run suite has no coverage for FirstMatch+interleaving.
Now, when I look at the source code and/or force the choice of FirstMatch+
interleaving join order by changing costs from gdb, I find out that:
- setup_semijoin_dups_elimination() has a bug that will make the query produce
incorrect result
- Join buffering now supports FirstMatch with multiple inner tables but
doesn't support FirstMatch+interleaving.
Since I'm not comfortable with making fixes for something that I can't have
testcases for, I'm considering disabling FirstMatch+interleaving. We can get
back to it when we have a better understanding of what goes on in the cost
model.
Any objections?
BR
Sergey
--
Sergey Petrunia, Software Developer
Monty Program AB, http://askmonty.org
Blog: http://s.petrunia.net/blog
[Maria-developers] Rev 2781: Merge in file:///home/psergey/dev/maria-5.3-subqueries-r10/
by Sergey Petrunya 20 Mar '10
At file:///home/psergey/dev/maria-5.3-subqueries-r10/
------------------------------------------------------------
revno: 2781 [merge]
revision-id: psergey(a)askmonty.org-20100320120844-n8dvu5loib2fjvwl
parent: psergey(a)askmonty.org-20100320120147-bbjquol551u9u8sq
parent: timour(a)askmonty.org-20100315224130-321rym1lsuwz2j5z
committer: Sergey Petrunya <psergey(a)askmonty.org>
branch nick: maria-5.3-subqueries-r10
timestamp: Sat 2010-03-20 15:08:44 +0300
message:
Merge
modified:
mysql-test/suite/pbxt/r/subselect.result subselect.result-20090402100035-4ilk9i91sh65vjcb-146
mysql-test/suite/pbxt/t/subselect.test subselect.test-20090402100035-4ilk9i91sh65vjcb-313
=== modified file 'mysql-test/suite/pbxt/r/subselect.result'
--- a/mysql-test/suite/pbxt/r/subselect.result 2010-03-20 12:01:47 +0000
+++ b/mysql-test/suite/pbxt/r/subselect.result 2010-03-20 12:08:44 +0000
@@ -876,6 +876,8 @@
4.5
NULL
drop table t1;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
CREATE TABLE t1 (a int(11) NOT NULL default '0', PRIMARY KEY (a));
CREATE TABLE t2 (a int(11) default '0', INDEX (a));
INSERT INTO t1 VALUES (1),(2),(3),(4);
@@ -1771,6 +1773,7 @@
Warnings:
Note 1003 select `test`.`a`.`id` AS `id`,`test`.`a`.`text` AS `text`,`test`.`b`.`id` AS `id`,`test`.`b`.`text` AS `text`,`test`.`c`.`id` AS `id`,`test`.`c`.`text` AS `text` from `test`.`t1` `a` left join `test`.`t2` `b` on(((`test`.`b`.`id` = `test`.`a`.`id`) or isnull(`test`.`b`.`id`))) join `test`.`t1` `c` where (if(isnull(`test`.`b`.`id`),1000,`test`.`b`.`id`) = `test`.`c`.`id`)
drop table t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
create table t1 (a int);
insert into t1 values (1);
explain select benchmark(1000, (select a from t1 where a=sha(rand())));
@@ -2750,6 +2753,8 @@
max(fld)
1
drop table t1;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
CREATE TABLE t1 (one int, two int, flag char(1));
CREATE TABLE t2 (one int, two int, flag char(1));
INSERT INTO t1 VALUES(1,2,'Y'),(2,3,'Y'),(3,4,'Y'),(5,6,'N'),(7,8,'N');
@@ -2834,6 +2839,7 @@
Warnings:
Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one` AS `one`,`test`.`t2`.`two` AS `two` from `test`.`t2` where (`test`.`t2`.`flag` = '0') group by `test`.`t2`.`one`,`test`.`t2`.`two` having (trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`))) and trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`))))) AS `test` from `test`.`t1`
DROP TABLE t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
CREATE TABLE t1 (a char(5), b char(5));
INSERT INTO t1 VALUES (NULL,'aaa'), ('aaa','aaa');
SELECT * FROM t1 WHERE (a,b) IN (('aaa','aaa'), ('aaa','bbb'));
@@ -3004,6 +3010,8 @@
1 1
1 3
DROP TABLE t1, t2;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
CREATE TABLE t1(a int, INDEX (a));
INSERT INTO t1 VALUES (1), (3), (5), (7);
INSERT INTO t1 VALUES (NULL);
@@ -3019,6 +3027,7 @@
2 NULL
3 1
DROP TABLE t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
CREATE TABLE t1 (a DATETIME);
INSERT INTO t1 VALUES ('1998-09-23'), ('2003-03-25');
CREATE TABLE t2 AS SELECT
=== modified file 'mysql-test/suite/pbxt/t/subselect.test'
--- a/mysql-test/suite/pbxt/t/subselect.test 2009-11-24 10:19:08 +0000
+++ b/mysql-test/suite/pbxt/t/subselect.test 2010-03-20 12:08:44 +0000
@@ -477,6 +477,9 @@
# Null with keys
#
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
CREATE TABLE t1 (a int(11) NOT NULL default '0', PRIMARY KEY (a));
CREATE TABLE t2 (a int(11) default '0', INDEX (a));
INSERT INTO t1 VALUES (1),(2),(3),(4);
@@ -1121,6 +1124,8 @@
explain extended select * from t1 a left join t2 b on (a.id=b.id or b.id is null) join t1 c on (if(isnull(b.id), 1000, b.id)=c.id);
drop table t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
+
#
# Static tables & rund() in subqueries
#
@@ -1784,6 +1789,9 @@
# Bug #11867: queries with ROW(,elems>) IN (SELECT DISTINCT <cols> FROM ...)
#
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
CREATE TABLE t1 (one int, two int, flag char(1));
CREATE TABLE t2 (one int, two int, flag char(1));
INSERT INTO t1 VALUES(1,2,'Y'),(2,3,'Y'),(3,4,'Y'),(5,6,'N'),(7,8,'N');
@@ -1811,6 +1819,9 @@
explain extended SELECT one,two,ROW(one,two) IN (SELECT one,two FROM t2 WHERE flag = '0' group by one,two) as 'test' from t1;
DROP TABLE t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
+
+
#
# Bug #12392: where cond with IN predicate for rows and NULL values in table
#
@@ -1972,6 +1983,9 @@
# with possible NULL values by index access from the outer query
#
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
CREATE TABLE t1(a int, INDEX (a));
INSERT INTO t1 VALUES (1), (3), (5), (7);
INSERT INTO t1 VALUES (NULL);
@@ -1984,6 +1998,8 @@
DROP TABLE t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
+
#
# Bug #11302: getObject() returns a String for a sub-query of type datetime
#
@@ -3096,6 +3112,7 @@
DROP TABLE t1,t2;
+
#
# Bug #32400: Complex SELECT query returns correct result only on some
# occasions
[Maria-developers] Rev 2780: Merge MariaDB-5.2 -> MariaDB 5.3 in file:///home/psergey/dev/maria-5.3-subqueries-r10/
by Sergey Petrunya 20 Mar '10
At file:///home/psergey/dev/maria-5.3-subqueries-r10/
------------------------------------------------------------
revno: 2780 [merge]
revision-id: psergey(a)askmonty.org-20100320120147-bbjquol551u9u8sq
parent: timour(a)askmonty.org-20100315195258-nhomb3anbb1tv3mi
parent: sergii(a)pisem.net-20100315115123-21tgprclhz7qbk6m
committer: Sergey Petrunya <psergey(a)askmonty.org>
branch nick: maria-5.3-subqueries-r10
timestamp: Sat 2010-03-20 15:01:47 +0300
message:
Merge MariaDB-5.2 -> MariaDB 5.3
removed:
mysql-test/include/have_dynamic_loading.inc have_dynamic_loading-20090522174437-1iywv3u2rmhtf5lw-1
mysql-test/r/have_big5.require sp1f-have_big5.require-20031224125945-cflpywfuzirasphsblv2ajraikylgacr
mysql-test/r/have_cp1250_ch.require sp1f-have_cp1250_ch.requi-20050303101531-cjneybwikyehox3jnm45loqqnaqwzkvj
mysql-test/r/have_cp1251.require sp1f-have_cp1251.require-20070628173450-i62zj4c2fxsc6eqgfqxy543guy4g3jrr
mysql-test/r/have_cp866.require sp1f-have_cp866.require-20070628173450-5fxsfev3ij44465l3cemvjqc7xiykvqq
mysql-test/r/have_cp932.require sp1f-have_cp932.require-20050201103743-aippxhnxxupykb6tdknithdkgv2oldnl
mysql-test/r/have_eucjpms.require sp1f-have_eucjpms.require-20050201103744-7hge7ogmcdtxhvv5tea4uplkfukyvism
mysql-test/r/have_euckr.require sp1f-have_euckr.require-20051209123314-widiorc3ixsg255w6ktst5ua4hsyqhca
mysql-test/r/have_gb2312.require sp1f-have_gb2312.require-20051209123314-7a5t363pdilm7l74pbcfkqxz3ta2ezev
mysql-test/r/have_gbk.require sp1f-have_gbk.require-20050722160346-et4yh4jixrhfv7i3iuebueeoacpozpwp
mysql-test/r/have_koi8r.require sp1f-have_koi8r.require-20070628173450-ufxb2v6lwytl7ugrukqsbakved2vlv6e
mysql-test/r/have_latin2_ch.require sp1f-have_latin2_ch.requi-20060320122820-jgi5uvsybpyb4lsz7taskv3rbgbji7ux
mysql-test/r/have_sjis.require sp1f-have_sjis.require-20040325102937-vjhvbemd525sdpmsqpdnetb3yzj676xh
mysql-test/r/have_tis620.require sp1f-have_tis620.require-20031225161058-upcrdr654iwyylqwxr64m3p5oxdrgjvm
mysql-test/r/have_ucs2.require sp1f-have_ucs2.require-20030523101003-bzbtmopmdpmfdhm6ozkjbn6shb6rewgu
mysql-test/r/have_ujis.require sp1f-have_ujis.require-20030523101003-iivfus472t4mc7ej3nvhxmb4aggtmtfz
mysql-test/r/have_utf8.require sp1f-have_utf8.require-20070628173450-mxsljvgqt5xfqethuo226imugbtei5g4
mysql-test/r/subselect4.result subselect4.result-20100117143923-se9pticby23pjrq4-1
mysql-test/suite/innodb/r/innodb_file_format.result innodb_file_format.r-20090730123212-ozvn639s8f71467w-1
mysql-test/suite/innodb/t/innodb_file_format.test innodb_file_format.t-20090730123218-e1mr6rq2zau4lsif-1
mysql-test/suite/maria/t/maria2-master.opt maria2master.opt-20090215104617-4fkcge2h3ssmgmv0-1
mysql-test/t/subselect4.test subselect4.test-20100117143929-ndri90rgqi9bqhat-1
mysys/mf_strip.c sp1f-mf_stripp.c-19700101030959-2ym735i6ydnmctuca7s77ymbqt7v66me
storage/innodb_plugin/README readme-20090527093836-7v4wb2xxka10h4d0-6
storage/innodb_plugin/handler/handler0vars.h handler0vars.h-20090730095633-tiqyypxaa0sth5gl-1
storage/innodb_plugin/handler/win_delay_loader.cc win_delay_loader.cc-20090730095633-tiqyypxaa0sth5gl-2
storage/innodb_plugin/win-plugin/ winplugin-20090527093836-7v4wb2xxka10h4d0-45
storage/innodb_plugin/win-plugin/README readme-20090527093836-7v4wb2xxka10h4d0-400
storage/innodb_plugin/win-plugin/win-plugin.diff winplugin.diff-20090527093836-7v4wb2xxka10h4d0-401
storage/xtradb/handler/handler0vars.h handler0vars.h-20081203050234-edoolglm28lyejuc-3
storage/xtradb/handler/win_delay_loader.cc win_delay_loader.cc-20081203050234-edoolglm28lyejuc-4
storage/xtradb/ut/ut0auxconf.c ut0auxconf.c-20090326061054-ylrdb8libxw6u7e9-10
storage/xtradb/win-plugin/ winplugin-20081203050234-edoolglm28lyejuc-2
storage/xtradb/win-plugin/README readme-20081203050234-edoolglm28lyejuc-15
storage/xtradb/win-plugin/win-plugin.diff winplugin.diff-20081203050234-edoolglm28lyejuc-16
added:
BUILD/compile-bintar compilebintar-20100107101810-lelof47hh40zljzw-1
BUILD/util.sh util.sh-20100107105306-5523083hapn1b4n5-1
include/mysql/service_my_snprintf.h service_my_snprintf.-20100228160908-qfuxjw8mp0bdlmzs-3
include/mysql/service_thd_alloc.h service_thd_alloc.h-20100228160908-qfuxjw8mp0bdlmzs-4
include/mysql/services.h services.h-20100228160908-qfuxjw8mp0bdlmzs-5
include/service_versions.h service_versions.h-20100228160908-qfuxjw8mp0bdlmzs-2
libservices/ libservices-20100228160908-qfuxjw8mp0bdlmzs-1
libservices/CMakeLists.txt cmakelists.txt-20100228160908-qfuxjw8mp0bdlmzs-8
libservices/HOWTO howto-20100228160908-qfuxjw8mp0bdlmzs-9
libservices/Makefile.am makefile.am-20100228160908-qfuxjw8mp0bdlmzs-10
libservices/my_snprintf_service.c my_snprintf_service.-20100228160908-qfuxjw8mp0bdlmzs-11
libservices/thd_alloc_service.c thd_alloc_service.c-20100228160908-qfuxjw8mp0bdlmzs-12
mysql-test/extra/binlog_tests/binlog_failure_mixing_engines.test binlog_failure_mixin-20091006002922-iahrhss60j05ei1h-1
mysql-test/extra/rpl_tests/rpl_auto_increment_insert_view.test rpl_auto_increment_i-20090914101609-rtpx81itbaubpmgc-1
mysql-test/extra/rpl_tests/rpl_auto_increment_invoke_trigger.test rpl_auto_increment_i-20090914101534-x7z3gkpnlcvpe8tc-1
mysql-test/extra/rpl_tests/rpl_autoinc_func_invokes_trigger.test rpl_autoinc_func_inv-20090927065809-91q8o1j3cgydbnbf-1
mysql-test/extra/rpl_tests/rpl_mixing_engines.inc rpl_mixing_engines.i-20091123213554-45y47762glp89til-1
mysql-test/extra/rpl_tests/rpl_not_null.test rpl_not_null.test-20090929140808-ln24vq9he9qkn20x-1
mysql-test/extra/rpl_tests/rpl_set_null.test rpl_set_null.test-20100120221348-fnma1tw4m3kl9b8c-1
mysql-test/extra/rpl_tests/rpl_tmp_table_and_DDL.test rpl_tmp_table_and_dd-20091231103103-hi3hbqvf2sei3ju0-1
mysql-test/include/binlog_inject_error.inc binlog_inject_error.-20090930014344-gklg7lve4oqyvyfm-1
mysql-test/include/have_case_insensitive_fs.inc have_case_insensitiv-20091027080400-j8v9m0ohbe2h6g9j-1
mysql-test/include/have_collation.inc have_collation.inc-20091227132600-3ntwjbkwy845e8l2-1
mysql-test/include/have_debug_sync.inc have_debug_sync.inc-20090925124518-2m0htks1bbp5jaf5-1
mysql-test/include/have_dynamic_loading.inc have_dynamic_loading-20090904194100-ugojr9bb769e3fbq-1
mysql-test/include/have_mysql_upgrade.inc have_mysql_upgrade.i-20090917092107-9hgddfty57tkdmvp-1
mysql-test/include/have_not_innodb_plugin.inc have_not_innodb_plug-20090923075831-ufghhe437p4n3lll-2
mysql-test/include/not_windows_embedded.inc not_windows_embedded-20091008083404-t3r99o2khtp7kg90-1
mysql-test/include/truncate_file.inc truncate_file.inc-20100105102156-fcyy3dww59ncnqzk-1
mysql-test/lib/v1/incompatible.tests incompatible.tests-20090716120442-nls1338ddzoeujfy-1
mysql-test/r/bug46760.result bug46760.result-20090918125958-1wth1m4e5jtlymw0-1
mysql-test/r/bug47671.result bug47671.result-20091125061811-9f1u8eyhc4lhli89-1
mysql-test/r/case_insensitive_fs.require case_insensitive_fs.-20091027080416-za7pbbn63nzc3sq1-1
mysql-test/r/create-uca.result createuca.result-20091227132600-3ntwjbkwy845e8l2-2
mysql-test/r/debug_sync.result debug_sync.result-20090925124518-2m0htks1bbp5jaf5-2
mysql-test/r/grant_lowercase_fs.result grant_lowercase_fs.r-20091027080441-p2gwdn1qob67e2sn-1
mysql-test/r/have_debug_sync.require have_debug_sync.requ-20090925124518-2m0htks1bbp5jaf5-3
mysql-test/r/innodb-autoinc-44030.result innodbautoinc44030.r-20100122100315-suuvvijmkzm7mfvd-1
mysql-test/r/innodb-consistent.result innodbconsistent.res-20100106115838-xm7yncpgr8fsekjt-2
mysql-test/r/innodb_bug44369.result innodb_bug44369.resu-20091005111405-nbp5t33h95jrqha2-1
mysql-test/r/innodb_bug44571.result innodb_bug44571.resu-20100106115838-xm7yncpgr8fsekjt-6
mysql-test/r/innodb_bug46000.result innodb_bug46000.resu-20091005110756-z0lgt2d7519jnym8-1
mysql-test/r/innodb_bug46676.result innodb_bug46676.resu-20100106115838-xm7yncpgr8fsekjt-10
mysql-test/r/innodb_bug47167.result innodb_bug47167.resu-20100106115838-xm7yncpgr8fsekjt-12
mysql-test/r/innodb_bug47777.result innodb_bug47777.resu-20091102144133-qtzn9xarzh75dlyu-1
mysql-test/r/innodb_file_format.result innodb_file_format.r-20090923000535-ke95wdd4zn27df71-18
mysql-test/r/innodb_utf8.result innodb_utf8.result-20091227132600-3ntwjbkwy845e8l2-3
mysql-test/r/locale.result locale.result-20091019084441-qaiek29ss4m5rftg-2
mysql-test/r/lowercase_mixed_tmpdir_innodb.result lowercase_mixed_tmpd-20090909093609-nobx9nnvim2rm4w6-1
mysql-test/r/not_true.require not_true.require-20090923075831-ufghhe437p4n3lll-1
mysql-test/r/partition_innodb_builtin.result partition_innodb_bui-20090923084042-z98c8mrrs0g20ib8-1
mysql-test/r/partition_innodb_plugin.result partition_innodb_plu-20090923084042-z98c8mrrs0g20ib8-2
mysql-test/r/partition_open_files_limit.result partition_open_files-20091007154706-q8bf9g4f8yeyzaha-1
mysql-test/r/sp-bugs.result spbugs.result-20091008145837-dfo321akug9vxs7z-2
mysql-test/r/sp_sync.result sp_sync.result-20100112141220-hrw3yyvlvrr1hnom-1
mysql-test/r/subselect4.result subselect4.result-20090903150316-1sul3u8k29ooxm3r-2
mysql-test/r/udf_query_cache.result udf_query_cache.resu-20100111125806-7lkv720m352b58ia-1
mysql-test/std_data/binlog_transaction.000001 binlog_transaction.0-20090924075501-0p7j6mqdvkocgsqt-1
mysql-test/std_data/bug47012.ARM bug47012.arm-20091110122641-iqp1lul6n2ekz7m1-3
mysql-test/std_data/bug47012.ARZ bug47012.arz-20091110122641-iqp1lul6n2ekz7m1-2
mysql-test/std_data/bug47012.frm bug47012.frm-20091110122641-iqp1lul6n2ekz7m1-1
mysql-test/std_data/bug47142_master-bin.000001 bug47142_masterbin.0-20100122163109-0oyg80muz1vjfwjt-1
mysql-test/std_data/latin1.xml latin1.xml-20091012072835-0kzrquhyy5du9pfx-1
mysql-test/suite/binlog/r/binlog_delete_and_flush_index.result binlog_delete_and_fl-20091019225654-0f8r2m0kfopmcyv9-1
mysql-test/suite/binlog/r/binlog_mixed_failure_mixing_engines.result binlog_mixed_failure-20091006002922-iahrhss60j05ei1h-2
mysql-test/suite/binlog/r/binlog_row_failure_mixing_engines.result binlog_row_failure_m-20091006002922-iahrhss60j05ei1h-3
mysql-test/suite/binlog/r/binlog_row_mysqlbinlog_verbose.result binlog_row_mysqlbinl-20091009083137-wijoi4v49b1btvoa-1
mysql-test/suite/binlog/r/binlog_stm_do_db.result binlog_stm_do_db.res-20090723105556-zwq0kkax3cohfix5-1
mysql-test/suite/binlog/r/binlog_write_error.result binlog_write_error.r-20090930014338-on5w673994v1ozm2-1
mysql-test/suite/binlog/std_data/update-full-row.binlog updatefullrow.binlog-20091009085429-osiv6twgm7nh9ra3-1
mysql-test/suite/binlog/std_data/update-partial-row.binlog updatepartialrow.bin-20091009085429-osiv6twgm7nh9ra3-2
mysql-test/suite/binlog/std_data/write-full-row.binlog writefullrow.binlog-20091009085435-3lxvujqi3h4xswim-1
mysql-test/suite/binlog/std_data/write-partial-row.binlog writepartialrow.binl-20091009085435-3lxvujqi3h4xswim-2
mysql-test/suite/binlog/t/binlog_delete_and_flush_index.test binlog_delete_and_fl-20091019225654-0f8r2m0kfopmcyv9-2
mysql-test/suite/binlog/t/binlog_mixed_failure_mixing_engines.test binlog_mixed_failure-20091006002922-iahrhss60j05ei1h-4
mysql-test/suite/binlog/t/binlog_row_failure_mixing_engines.test binlog_row_failure_m-20091006002922-iahrhss60j05ei1h-5
mysql-test/suite/binlog/t/binlog_row_mysqlbinlog_options-master.opt binlog_row_mysqlbinl-20100201185634-3gvrp49sr2zc4z92-1
mysql-test/suite/binlog/t/binlog_row_mysqlbinlog_verbose.test binlog_row_mysqlbinl-20091009083137-wijoi4v49b1btvoa-2
mysql-test/suite/binlog/t/binlog_stm_do_db-master.opt binlog_stm_do_dbmast-20090723105556-zwq0kkax3cohfix5-3
mysql-test/suite/binlog/t/binlog_stm_do_db.test binlog_stm_do_db.tes-20090723105556-zwq0kkax3cohfix5-2
mysql-test/suite/binlog/t/binlog_write_error.test binlog_write_error.t-20090930014333-w23q33ka9niwwhur-1
mysql-test/suite/federated/federated_debug-master.opt federated_debugmaste-20090930202808-tky4roen9kpzlmdp-1
mysql-test/suite/federated/federated_debug.result federated_debug.resu-20090930202818-9otpqz3uxpfx2iv1-1
mysql-test/suite/federated/federated_debug.test federated_debug.test-20090930202805-kt19apxdz61tx0ln-1
mysql-test/suite/ibmdb2i/r/ibmdb2i_bug_49329.result ibmdb2i_bug_49329.re-20091211060357-tvb1hizf1z36d1y8-1
mysql-test/suite/ibmdb2i/t/ibmdb2i_bug_49329.test ibmdb2i_bug_49329.te-20091211060412-968w5ehzmv73mkce-1
mysql-test/suite/innodb/r/innodb-consistent.result innodbconsistent.res-20091009132519-hgdn500g0czzt422-1
mysql-test/suite/innodb/r/innodb_bug44571.result innodb_bug44571.resu-20091008104658-12126vr05wqyllai-1
mysql-test/suite/innodb/r/innodb_bug46676.result innodb_bug46676.resu-20091130121246-w0etrydh59vqrhfk-1
mysql-test/suite/innodb/r/innodb_bug47167.result innodb_bug47167.resu-20091130114932-z1l0gqbz3zjhelcn-1
mysql-test/suite/innodb/t/innodb-consistent-master.opt innodbconsistentmast-20091009132511-05q1yxchk8rz94rf-1
mysql-test/suite/innodb/t/innodb-consistent.test innodbconsistent.tes-20091009132503-a1s2ak2b3c32x2xl-1
mysql-test/suite/innodb/t/innodb_bug44571.test innodb_bug44571.test-20091008104658-12126vr05wqyllai-2
mysql-test/suite/innodb/t/innodb_bug46676.test innodb_bug46676.test-20091130121246-w0etrydh59vqrhfk-2
mysql-test/suite/innodb/t/innodb_bug47167.test innodb_bug47167.test-20091130114932-z1l0gqbz3zjhelcn-2
mysql-test/suite/maria/r/group_commit.result group_commit.result-20100212130139-9we3v7ru70ruix5k-1
mysql-test/suite/maria/t/group_commit.test group_commit.test-20100212130125-57z0du2gotniu91w-1
mysql-test/suite/ndb/r/ndb_tmp_table_and_DDL.result ndb_tmp_table_and_dd-20091231103010-uyd7xh08px6j3kml-1
mysql-test/suite/ndb/t/ndb_tmp_table_and_DDL.test ndb_tmp_table_and_dd-20091231102956-hjjqhloqiuzk5oj0-1
mysql-test/suite/parts/t/partition_repair_myisam-master.opt partition_repair_myi-20100210190644-plfsej29z9xwjugc-1
mysql-test/suite/rpl/r/rpl_auto_increment_update_failure.result rpl_auto_increment_u-20090903144503-al54eug7lpxe5bxp-1
mysql-test/suite/rpl/r/rpl_geometry.result rpl_geometry.result-20091230085514-e2rpmsnfp40w154r-1
mysql-test/suite/rpl/r/rpl_loaddata_concurrent.result rpl_loaddata_concurr-20091128070839-rtfxv9y0fvqmvx6o-4
mysql-test/suite/rpl/r/rpl_loaddata_symlink.result rpl_loaddata_symlink-20091112200612-x5ek042xipzbdluy-1
mysql-test/suite/rpl/r/rpl_manual_change_index_file.result rpl_manual_change_in-20091210082846-phn3436ez8k1e97a-1
mysql-test/suite/rpl/r/rpl_mysql_upgrade.result rpl_mysql_upgrade.re-20090915080949-vr0m9hda55fjmelr-1
mysql-test/suite/rpl/r/rpl_nondeterministic_functions.result rpl_nondeterministic-20091118142013-9zx8msk45pqp0kcn-1
mysql-test/suite/rpl/r/rpl_not_null_innodb.result rpl_not_null_innodb.-20090929140808-ln24vq9he9qkn20x-2
mysql-test/suite/rpl/r/rpl_not_null_myisam.result rpl_not_null_myisam.-20090929140808-ln24vq9he9qkn20x-3
mysql-test/suite/rpl/r/rpl_row_disabled_slave_key.result rpl_row_disabled_sla-20090926230521-lkpisp969kum1ko2-1
mysql-test/suite/rpl/r/rpl_row_trunc_temp.result rpl_row_trunc_temp.r-20091117145658-biyfd3a42vxttw4g-1
mysql-test/suite/rpl/r/rpl_set_null_innodb.result rpl_set_null_innodb.-20100119233203-2fidx106m4xxom3o-1
mysql-test/suite/rpl/r/rpl_set_null_myisam.result rpl_set_null_myisam.-20100119233203-2fidx106m4xxom3o-2
mysql-test/suite/rpl/r/rpl_stm_binlog_direct.result rpl_stm_causality_mi-20091103194246-m70twufoi2rr96fm-1
mysql-test/suite/rpl/r/rpl_tmp_table_and_DDL.result rpl_tmp_table_and_dd-20091231103048-p7b9555jbqbbdi62-1
mysql-test/suite/rpl/t/rpl_auto_increment_update_failure.test rpl_auto_increment_u-20090903144442-bgwonv8p7ky8c3ze-1
mysql-test/suite/rpl/t/rpl_geometry.test rpl_geometry.test-20091230085507-ztznucmqabqhaprh-1
mysql-test/suite/rpl/t/rpl_get_master_version_and_clock-slave.opt rpl_get_master_versi-20091022015603-0bswyro3q6eqinsm-1
mysql-test/suite/rpl/t/rpl_loaddata_concurrent.test rpl_loaddata_concurr-20091128070839-rtfxv9y0fvqmvx6o-5
mysql-test/suite/rpl/t/rpl_loaddata_symlink-master.opt rpl_loaddata_symlink-20091112200645-33lzxlcjtrz4ekex-1
mysql-test/suite/rpl/t/rpl_loaddata_symlink-master.sh rpl_loaddata_symlink-20091120052332-4iio0f9yuk8srq37-1
mysql-test/suite/rpl/t/rpl_loaddata_symlink-slave.opt rpl_loaddata_symlink-20091120052318-rey16d8e27esu4af-1
mysql-test/suite/rpl/t/rpl_loaddata_symlink-slave.sh rpl_loaddata_symlink-20091120052322-exs2kyw12kmns1tw-1
mysql-test/suite/rpl/t/rpl_loaddata_symlink.test rpl_loaddata_symlink-20091112200628-bv7esgx1vr7rhcky-1
mysql-test/suite/rpl/t/rpl_manual_change_index_file.test rpl_manual_change_in-20091210082953-vw8rrlgzmjt2dq6q-1
mysql-test/suite/rpl/t/rpl_mysql_upgrade.test rpl_mysql_upgrade.te-20090915080946-ihj08jolsl0jiel5-1
mysql-test/suite/rpl/t/rpl_nondeterministic_functions.test rpl_nondeterministic-20091118141837-p0896pkizwxx60yu-1
mysql-test/suite/rpl/t/rpl_not_null_innodb.test rpl_not_null_innodb.-20090929140808-ln24vq9he9qkn20x-4
mysql-test/suite/rpl/t/rpl_not_null_myisam.test rpl_not_null_myisam.-20090929140808-ln24vq9he9qkn20x-5
mysql-test/suite/rpl/t/rpl_row_disabled_slave_key.test rpl_row_disabled_sla-20090926230521-lkpisp969kum1ko2-2
mysql-test/suite/rpl/t/rpl_row_trunc_temp.test rpl_row_trunc_temp.t-20091117145658-biyfd3a42vxttw4g-2
mysql-test/suite/rpl/t/rpl_set_null_innodb.test rpl_set_null_innodb.-20100119233130-z2zgo3y1jd439fhw-1
mysql-test/suite/rpl/t/rpl_set_null_myisam.test rpl_set_null_myisam.-20100119233130-z2zgo3y1jd439fhw-2
mysql-test/suite/rpl/t/rpl_stm_binlog_direct-master.opt rpl_stm_causality_mi-20091103192642-utv7cpex2kl69fum-1
mysql-test/suite/rpl/t/rpl_stm_binlog_direct.test rpl_stm_causality_mi-20091103192642-utv7cpex2kl69fum-2
mysql-test/suite/rpl/t/rpl_tmp_table_and_DDL.test rpl_tmp_table_and_dd-20091231103025-vh61hrs2phd29zoi-1
mysql-test/suite/rpl_ndb/r/rpl_ndb_set_null.result rpl_ndb_set_null.res-20100121135153-mksmczmjr7tutspb-1
mysql-test/suite/rpl_ndb/t/rpl_ndb_set_null.test rpl_ndb_set_null.tes-20100121135153-mksmczmjr7tutspb-2
mysql-test/t/bug46760-master.opt bug46760master.opt-20090918125958-1wth1m4e5jtlymw0-2
mysql-test/t/bug46760.test bug46760.test-20090918125958-1wth1m4e5jtlymw0-3
mysql-test/t/bug47671-master.opt bug47671master.opt-20091125061855-277zbp2qtbff4akr-1
mysql-test/t/bug47671.test bug47671.test-20091125061804-u0cz6u7eeeta2ao3-1
mysql-test/t/create-uca.test createuca.test-20091227132600-3ntwjbkwy845e8l2-4
mysql-test/t/debug_sync.test debug_sync.test-20090925124518-2m0htks1bbp5jaf5-4
mysql-test/t/grant_lowercase_fs.test grant_lowercase_fs.t-20091027080502-vaql5cl7hm77d4va-1
mysql-test/t/innodb-autoinc-44030.test innodbautoinc44030.t-20100122100303-9hpn5iovz0an1r0w-1
mysql-test/t/innodb-consistent-master.opt innodbconsistentmast-20100106115838-xm7yncpgr8fsekjt-1
mysql-test/t/innodb-consistent.test innodbconsistent.tes-20100106115838-xm7yncpgr8fsekjt-3
mysql-test/t/innodb_bug44369.test innodb_bug44369.test-20091005111405-nbp5t33h95jrqha2-2
mysql-test/t/innodb_bug44571.test innodb_bug44571.test-20100106115838-xm7yncpgr8fsekjt-7
mysql-test/t/innodb_bug46000.test innodb_bug46000.test-20091005110740-z4rhixe6pxtvfzwg-1
mysql-test/t/innodb_bug46676.test innodb_bug46676.test-20100106115838-xm7yncpgr8fsekjt-11
mysql-test/t/innodb_bug47167.test innodb_bug47167.test-20100106115838-xm7yncpgr8fsekjt-13
mysql-test/t/innodb_bug47777.test innodb_bug47777.test-20091102144121-in0bnk577l2r2niz-1
mysql-test/t/innodb_file_format.test innodb_file_format.t-20090923000535-ke95wdd4zn27df71-19
mysql-test/t/innodb_utf8.test innodb_utf8.test-20091227132600-3ntwjbkwy845e8l2-5
mysql-test/t/locale.test locale.test-20091019084441-qaiek29ss4m5rftg-1
mysql-test/t/lowercase_mixed_tmpdir_innodb-master.opt lowercase_mixed_tmpd-20090909093621-493y6grd4ycy587n-1
mysql-test/t/lowercase_mixed_tmpdir_innodb-master.sh lowercase_mixed_tmpd-20090909093706-tbaggs2flpboi335-1
mysql-test/t/lowercase_mixed_tmpdir_innodb.test lowercase_mixed_tmpd-20090909093625-zly7ha6rwwxch86u-1
mysql-test/t/mysqlbinlog2-master.opt mysqlbinlog2master.o-20100119103451-7nkhltk4tgr4fegm-1
mysql-test/t/mysqlbinlog_row-master.opt mysqlbinlog_rowmaste-20100127131139-b8fh6zh2p2j13cfj-1
mysql-test/t/mysqlbinlog_row_innodb-master.opt mysqlbinlog_row_inno-20100119125405-22esbb4wolnwm5h3-1
mysql-test/t/mysqlbinlog_row_myisam-master.opt mysqlbinlog_row_myis-20100119125405-22esbb4wolnwm5h3-2
mysql-test/t/mysqlbinlog_row_trans-master.opt mysqlbinlog_row_tran-20100127113200-cas1q0skxod0t0xi-2
mysql-test/t/partition_innodb-master.opt partition_innodbmast-20100118141528-c6b5g3k82kd9iv87-1
mysql-test/t/partition_innodb_builtin.test partition_innodb_bui-20090923082845-g02wtqf81dvzw6gc-1
mysql-test/t/partition_innodb_plugin.test partition_innodb_plu-20090923082850-5l2dv4lq6f99lruy-1
mysql-test/t/partition_open_files_limit-master.opt partition_open_files-20091007154613-kkfm9vev52v7g5qx-1
mysql-test/t/partition_open_files_limit.test partition_open_files-20091007154613-kkfm9vev52v7g5qx-2
mysql-test/t/sp-bugs.test spbugs.test-20091008145837-dfo321akug9vxs7z-1
mysql-test/t/sp_sync.test sp_sync.test-20100112141216-z3d36b7ouqevywqw-1
mysql-test/t/status-master.opt statusmaster.opt-20090827131631-hxfk90w60wdsy2hc-1
mysql-test/t/subselect4.test subselect4.test-20090903150316-1sul3u8k29ooxm3r-1
mysql-test/t/udf_query_cache-master.opt udf_query_cachemaste-20100111125806-7lkv720m352b58ia-2
mysql-test/t/udf_query_cache.test udf_query_cache.test-20100111125806-7lkv720m352b58ia-3
randgen/ randgen-20100212130043-p9e1fp04f2z82fo0-1
randgen/conf/ conf-20100212130043-p9e1fp04f2z82fo0-2
randgen/conf/maria_group_commit.yy maria_group_commit.y-20100212130043-p9e1fp04f2z82fo0-3
sql/debug_sync.cc debug_sync.cc-20090925124518-2m0htks1bbp5jaf5-5
sql/debug_sync.h debug_sync.h-20090925124518-2m0htks1bbp5jaf5-6
sql/sql_plugin_services.h sql_plugin_services.-20100228160908-qfuxjw8mp0bdlmzs-6
storage/innodb_plugin/mysql-test/innodb-consistent-master.opt innodbconsistentmast-20091012122637-mepyyow3z5ui6cel-1
storage/innodb_plugin/mysql-test/innodb-consistent.result innodbconsistent.res-20091012122706-h9tv41qfkzisq1b6-1
storage/innodb_plugin/mysql-test/innodb-consistent.test innodbconsistent.tes-20091012122706-h9tv41qfkzisq1b6-2
storage/innodb_plugin/mysql-test/innodb_bug44369.result innodb_bug44369.resu-20091012122706-h9tv41qfkzisq1b6-3
storage/innodb_plugin/mysql-test/innodb_bug44369.test innodb_bug44369.test-20091012122706-h9tv41qfkzisq1b6-4
storage/innodb_plugin/mysql-test/innodb_bug44571.result innodb_bug44571.resu-20091012122706-h9tv41qfkzisq1b6-5
storage/innodb_plugin/mysql-test/innodb_bug44571.test innodb_bug44571.test-20091012122706-h9tv41qfkzisq1b6-6
storage/innodb_plugin/mysql-test/innodb_bug46000.result innodb_bug46000.resu-20091012122706-h9tv41qfkzisq1b6-7
storage/innodb_plugin/mysql-test/innodb_bug46000.test innodb_bug46000.test-20091012122706-h9tv41qfkzisq1b6-8
storage/innodb_plugin/revert_gen.sh revert_gen.sh-20091012123850-w0rv1f2ijprz292d-1
storage/innodb_plugin/scripts/export.sh export.sh-20091012120743-l9z3v18op9lk6dhw-1
storage/innodb_plugin/ut/ut0auxconf_have_gcc_atomics.c ut0auxconf_have_gcc_-20091009120907-rmzjolcnf1dsprof-1
storage/pbxt/src/backup_xt.cc backup_xt.cc-20091124102728-ubj97poywwsd340z-1
storage/pbxt/src/backup_xt.h backup_xt.h-20091124102731-w2h8sg00631qsdab-1
storage/xtradb/COPYING.Percona copying.percona-20090923000535-ke95wdd4zn27df71-1
storage/xtradb/COPYING.Sun_Microsystems copying.sun_microsys-20090923000535-ke95wdd4zn27df71-2
storage/xtradb/Doxyfile doxyfile-20090923000535-ke95wdd4zn27df71-3
storage/xtradb/include/fsp0types.h fsp0types.h-20090923000535-ke95wdd4zn27df71-4
storage/xtradb/ut/ut0auxconf_atomic_pthread_t_gcc.c ut0auxconf_atomic_pt-20090923000535-ke95wdd4zn27df71-20
storage/xtradb/ut/ut0auxconf_atomic_pthread_t_solaris.c ut0auxconf_atomic_pt-20090923000535-ke95wdd4zn27df71-21
storage/xtradb/ut/ut0auxconf_have_gcc_atomics.c ut0auxconf_have_gcc_-20100106115838-xm7yncpgr8fsekjt-16
storage/xtradb/ut/ut0auxconf_have_solaris_atomics.c ut0auxconf_have_sola-20090923000535-ke95wdd4zn27df71-22
storage/xtradb/ut/ut0auxconf_pause.c ut0auxconf_pause.c-20090923000535-ke95wdd4zn27df71-23
storage/xtradb/ut/ut0auxconf_sizeof_pthread_t.c ut0auxconf_sizeof_pt-20090923000535-ke95wdd4zn27df71-24
unittest/mysys/my_vsnprintf-t.c my_vsnprintft.c-20100228160908-qfuxjw8mp0bdlmzs-7
renamed:
mysql-test/r/bug40113.result => mysql-test/r/innodb_lock_wait_timeout_1.result bug40113.result-20090619150423-w3im08cym6tyzn8f-3
mysql-test/suite/binlog/r/binlog_tbl_metadata.result => mysql-test/suite/rpl/r/rpl_row_tbl_metadata.result binlog_tbl_metadata.-20090512114928-2whj3n6g302nij5u-1
mysql-test/suite/binlog/t/binlog_tbl_metadata.test => mysql-test/suite/rpl/t/rpl_row_tbl_metadata.test binlog_tbl_metadata.-20090512113345-zzqv0wdjojj5q8oq-1
mysql-test/suite/innodb/r/innodb_bug44032.result => mysql-test/r/innodb_bug44032.result innodb_bug44032.resu-20090610132748-q9m60aph2eqy8zr6-20
mysql-test/suite/innodb/t/innodb_bug44032.test => mysql-test/t/innodb_bug44032.test innodb_bug44032.test-20090610132748-q9m60aph2eqy8zr6-34
mysql-test/suite/pbxt/t/load_unique_error1.inc => mysql-test/std_data/pbxt_load_unique_error1.inc load_unique_error1.i-20090407105731-jrdzpnlb2nlsfdp1-1
mysql-test/t/bug40113-master.opt => mysql-test/t/innodb_lock_wait_timeout_1-master.opt bug40113master.opt-20090619150423-w3im08cym6tyzn8f-1
mysql-test/t/bug40113.test => mysql-test/t/innodb_lock_wait_timeout_1.test bug40113.test-20090619150423-w3im08cym6tyzn8f-2
modified:
.bzrignore sp1f-ignore-20001018235455-q4gxfbritt5f42nwix354ufpsvrf5ebj
BUILD/FINISH.sh sp1f-finish.sh-20001218212418-rjfhkdbumwhfwg4upd5j2pgfe375sjfq
BUILD/Makefile.am sp1f-makefile.am-20020102192940-dza66ux2yxyklupzjz4q3km3hvye5rnj
BUILD/SETUP.sh sp1f-setup.sh-20001218212418-itvzddls4bsqffggcsjklbawdmaxdhde
BUILD/compile-pentium sp1f-compilepentium-19700101030959-3x4duhrh57l4x3qrpnugtf7w3vum6zfn
BUILD/compile-pentium64-gcov sp1f-compilepentium64gcov-20070816001013-3llo7o5e2z4354roescfeskiytwudssu
BUILD/compile-pentium64-gprof sp1f-compilepentium64gpro-20070816001013-vd5zmy2apndy3mhuc3e2haatwsn3cart
BUILD/compile-solaris-amd64-debug-forte* compilesolarisamd64d-20090707110736-p2i53hs87u5tkgxs-1
BUILD/compile-solaris-x86-32* compilesolarisx8632-20090707110736-p2i53hs87u5tkgxs-2
BUILD/compile-solaris-x86-32-debug* compilesolarisx8632d-20090707110736-p2i53hs87u5tkgxs-3
BUILD/compile-solaris-x86-32-debug-forte* compilesolarisx8632d-20090707110736-p2i53hs87u5tkgxs-4
BUILD/compile-solaris-x86-forte-32* compilesolarisx86for-20090707110736-p2i53hs87u5tkgxs-5
CMakeLists.txt sp1f-cmakelists.txt-20060831175236-433hkm7nrqfjbwios4ancgytabw354nr
Docs/INSTALL-BINARY sp1f-installbinary-20071102002932-lm64vo6qp4t3tz7t2edp2mrg5udn2xsy
INSTALL-SOURCE sp1f-installsource-20071102002932-2zshtrwogoj3cs24gl2smml7cigenare
INSTALL-WIN-SOURCE sp1f-installwinsource-20071102113629-gnlyyhvspfpki3lit2lps4hw6blq6u3q
Makefile.am sp1f-makefile.am-19700101030959-jbbpiygwpgybyqknlavdxxupbrjonu7h
README sp1f-readme-19700101030959-ipf4glwvob7zbr3norl5feyy3jwy3sod
client/client_priv.h sp1f-client_priv.h-20010912205330-fzvv7eg77ywdut64ojoihwu3lhbabphc
client/mysql.cc sp1f-mysql.cc-19700101030959-5sipizk7ehvbsi3tywrkdords5qy5zdl
client/mysql_upgrade.c sp1f-mysql_upgrade.c-20060428040559-3xcugp4nhhb6qfwfacoqw3d4ibgbeboz
client/mysqladmin.cc sp1f-mysqladmin.c-19700101030959-ud6encjcx2oypzvp7ptmojbi3xdos2fs
client/mysqlbinlog.cc sp1f-mysqlbinlog.cc-19700101030959-b3vgyo47ljent5mhbyj6ik33bi4bukad
client/mysqlcheck.c sp1f-mysqlcheck.c-20010419220847-mlhe2ixwl5ajjyneyciytsdsis3iujhl
client/mysqldump.c sp1f-mysqldump.c-19700101030959-thxq2iabzu3yo5snymsubfeclf7v5rac
client/mysqlimport.c sp1f-mysqlimport.c-19700101030959-m6nmuvl5kbp2qmdqtmu5mxafegtn7ipv
client/mysqlslap.c sp1f-mysqlslap.c-20051130000206-7t375hf5mtlqof5xd4nj76yckxvxykhv
client/mysqltest.cc sp1f-mysqltest.c-20001010065317-ix4zw26srlev7yugcz455ux22zwyynyf
cmd-line-utils/readline/config_readline.h sp1f-config_readline.h-20050419111215-h7qj4r3krmwazb64gm46kzkidw6a2fht
cmd-line-utils/readline/display.c sp1f-display.c-19700101030959-dhuvqj5evnaoid2tbnpgu7zlqrfutn7k
cmd-line-utils/readline/history.c sp1f-history.c-19700101030959-r6q6omzzgxsad5ifoewu7its7jgwaeur
cmd-line-utils/readline/rlmbutil.h sp1f-rlmbutil.h-20021119142529-aqfum336cg5t2jfa5i5odubnq55bm67y
cmd-line-utils/readline/text.c sp1f-text.c-20021119142529-oqwzsftz5nwbqldwyqtvfjqrvzhbolus
cmd-line-utils/readline/xmalloc.c sp1f-xmalloc.c-19700101030959-7zjd3wwo5mcvdg4qn57dmr76ccjlzxop
config/ac-macros/libevent.m4 libevent.m4-20090312215838-41pxaswf0zgarxu3-1
config/ac-macros/plugins.m4 sp1f-plugins.m4-20060413204924-cltp6uagmyygsgdno6od3mamfizdhk3m
configure.in sp1f-configure.in-19700101030959-mgdpoxtnh2ewmvusvfpkreuhwvffkcjw
dbug/dbug.c sp1f-dbug.c-19700101030959-dmu3qmh72hbptp5opqnmfgjtfckxi5ug
extra/comp_err.c sp1f-comp_err.c-19700101030959-xhnod5xbbwq5dckoic5y65at66d3sgik
extra/libevent/devpoll.c devpoll.c-20090312215838-41pxaswf0zgarxu3-30
extra/libevent/epoll.c epoll.c-20090312215838-41pxaswf0zgarxu3-31
extra/libevent/evbuffer.c evbuffer.c-20090312215838-41pxaswf0zgarxu3-33
extra/libevent/event.c event.c-20090312215838-41pxaswf0zgarxu3-35
extra/libevent/select.c select.c-20090312215838-41pxaswf0zgarxu3-44
extra/libevent/signal.c signal.c-20090312215838-41pxaswf0zgarxu3-45
extra/yassl/include/yassl_int.hpp sp1f-yassl_int.hpp-20050428132307-uqdopnog3njo2nicimdqmt7fco35gagn
extra/yassl/src/yassl_error.cpp sp1f-yassl_error.cpp-20050428132311-uwd5s2khyebi5wkzp66p3hhvr4sh44f3
extra/yassl/taocrypt/include/asn.hpp sp1f-asn.hpp-20050428132312-5ijcjgxj7cy3t67jcpi4rg3rbr4nnfmn
extra/yassl/taocrypt/include/block.hpp sp1f-block.hpp-20050428132313-36s5yrjvbk36ud3nkobt5zsv2ctac72e
extra/yassl/taocrypt/src/asn.cpp sp1f-asn.cpp-20050428132318-okq6hllvtur6rcfg4gc5pbxebunf764v
extra/yassl/taocrypt/src/random.cpp sp1f-random.cpp-20050428132321-q6wudeoop6upz7agf4pmigiyw6d6d3mt
include/Makefile.am sp1f-makefile.am-19700101030959-42ptynoryou25hajenjryzqyuvicbucw
include/config-win.h sp1f-configwin.h-19700101030959-5jisatcch5e354bfnojnqaygf3ow2zyt
include/ft_global.h sp1f-ft_global.h-19700101030959-qzez255ofrojrptdc5z2oi3sfi3bemf7
include/m_ctype.h sp1f-m_ctype.h-19700101030959-dsoy764fhlrdbb6tsjldsk3e5fohfai6
include/m_string.h sp1f-m_string.h-19700101030959-rraattbvw5ffkokv4sixxf3s7brqqaga
include/maria.h sp1f-maria.h-20060411134400-ylx7cem3pcdf2jg6it2tuutxyzoljzvv
include/my_base.h sp1f-my_base.h-19700101030959-3yhq5cta6tatwfxpqmoukzvevlehtxoz
include/my_dbug.h sp1f-dbug.h-19700101030959-jbasz7hhskrakujn4b3uatfstocyueon
include/my_global.h sp1f-my_global.h-20010915021246-4vawdgfw4vg3tuxq6mejt7lrchcnceha
include/my_no_pthread.h sp1f-my_no_pthread.h-19700101030959-ssl6ztub7u5hhlgqmx5jnoc6dzemclbd
include/my_pthread.h sp1f-my_pthread.h-19700101030959-z4yp3kljwx5fgmhlyvumtwxuw73xgrjn
include/my_stacktrace.h sp1f-stacktrace.h-20010513221240-cbiiu5humlsaar4gwejtppmg4trxkj47
include/my_sys.h sp1f-my_sys.h-19700101030959-lyllvna5vzqfcjnmlcrutgqocylhtb54
include/my_tree.h sp1f-my_tree.h-19700101030959-zri3kg3jzgh5btaillgci2qzmynzkewi
include/myisam.h sp1f-myisam.h-19700101030959-2zv2wn7kuuvbyktuyfsitra6cl37h3mm
include/myisamchk.h sp1f-myisamchk.h-20060411134400-oxba7mdmuzv2d62tlze6mrs5tpbmhrzw
include/mysql.h sp1f-mysql.h-19700101030959-soi7hu6ji273nui3fm25jjf4m4362pcw
include/mysql.h.pp mysql.h.pp-20080613094407-2m1760u4zdzt4dc7-1
include/mysql/plugin.h sp1f-plugin.h-20051105112032-xacmvx22ghtcgtqhu6v56b4bneqtx7l5
include/mysql/plugin.h.pp plugin.h.pp-20080613094359-py8jez90546shnqt-1
include/mysql_com.h sp1f-mysql_com.h-19700101030959-a255cet4ojn7jbd4gb4wadueimhj57r7
include/mysys_err.h sp1f-mysys_err.h-19700101030959-z4zqei4o4eblzvr5cgyspg6icfo7trix
include/violite.h sp1f-violite.h-19700101030959-jfyqeh5pmto4ncgcdcdf36bl5ininiqx
libmysql/libmysql.c sp1f-libmysql.c-19700101030959-ba4gwsjdmik5puh2qyrfpvoflwer257l
libmysqld/CMakeLists.txt sp1f-cmakelists.txt-20060403082523-x3vxka3k56u2wpzwcrlpykznlz2akpxd
libmysqld/Makefile.am sp1f-makefile.am-20010411110351-26htpk3ynkyh7pkfvnshztqrxx3few4g
libmysqld/lib_sql.cc sp1f-lib_sql.cc-20010411110351-gt5febleap73tqvapkesopvqtuht5sf5
libmysqld/libmysqld.c sp1f-libmysqld.c-20010411110351-4556sgf6vpnoounnscj2q6zw56ccl332
man/comp_err.1 comp_err.1-20090525095603-4fk5uwzrx2c7shtv-1
man/innochecksum.1 innochecksum.1-20090525095601-iqajpx4dd1tmgcv5-1
man/make_win_bin_dist.1 make_win_bin_dist.1-20090525095616-uxyflq5ybbyscodp-1
man/msql2mysql.1 msql2mysql.1-20090525095615-gdgvhigy9if7xq4g-1
man/my_print_defaults.1 my_print_defaults.1-20090525095609-tni7cnhq2jgs39kn-1
man/myisam_ftdump.1 myisam_ftdump.1-20090525095628-2jh4hkxgv438xsgv-1
man/myisamchk.1 myisamchk.1-20090525095638-dzseeu4ycd9e5ord-1
man/myisamlog.1 myisamlog.1-20090525095612-q6pse99hpl4xobaa-1
man/myisampack.1 myisampack.1-20090525095647-5dbjtz28ostitqta-1
man/mysql-stress-test.pl.1 mysqlstresstest.pl.1-20090525095649-19nwoj9hjrh2h5g6-1
man/mysql-test-run.pl.1 mysqltestrun.pl.1-20090525095613-lxqhkj31dtsojqe9-1
man/mysql.1 mysql.1-20090525095614-mcx09byxwzmmjve9-1
man/mysql.server.1 mysql.server.1-20090525095639-cn3iskg7mylu3f7y-1
man/mysql_client_test.1 mysql_client_test.1-20090525095633-wzvw7zq0frrhvzt9-1
man/mysql_config.1 mysql_config.1-20090525095621-i03klu91hgc51v7v-1
man/mysql_convert_table_format.1 mysql_convert_table_-20090525095631-181nirm84ivqfftv-1
man/mysql_find_rows.1 mysql_find_rows.1-20090525095619-4siq8k2zias4vosn-1
man/mysql_fix_extensions.1 mysql_fix_extensions-20090525095608-bybthq626if21blt-1
man/mysql_fix_privilege_tables.1 mysql_fix_privilege_-20090525095618-7dbntbmbro9h1mq4-1
man/mysql_install_db.1 mysql_install_db.1-20090525095630-dmkfbumaw81atu24-1
man/mysql_secure_installation.1 mysql_secure_install-20090525095646-5j5oc7jsecqqi5fv-1
man/mysql_setpermission.1 mysql_setpermission.-20090525095648-7qb24l20njulwpaf-1
man/mysql_tzinfo_to_sql.1 mysql_tzinfo_to_sql.-20090525095622-95m0xn1wx377hecm-1
man/mysql_upgrade.1 mysql_upgrade.1-20090525095624-3lc34mq3fnbz0keh-1
man/mysql_waitpid.1 mysql_waitpid.1-20090525095623-ukknhxen5c561a01-1
man/mysql_zap.1 mysql_zap.1-20090525095636-fttn5to1r6eqoptk-1
man/mysqlaccess.1 mysqlaccess.1-20090525095634-e84nr9bc820h5lh3-1
man/mysqladmin.1 mysqladmin.1-20090525095607-93akc7rtgncf8d4z-1
man/mysqlbinlog.1 mysqlbinlog.1-20090525095626-jfmz563j8jg73xrb-1
man/mysqlbug.1 mysqlbug.1-20090525095618-hzp6y070bmc1a6cn-1
man/mysqlcheck.1 mysqlcheck.1-20090525095611-8xoc706tixmrd1jf-1
man/mysqld.8 mysqld.8-20090525095600-80gvk83sokzwkf93-1
man/mysqld_multi.1 mysqld_multi.1-20090525095604-4k46xzppayrgbb2p-1
man/mysqld_safe.1 mysqld_safe.1-20090525095642-929nskpxfqo13wst-1
man/mysqldump.1 mysqldump.1-20090525095617-c0vnkqb3h4a3blus-1
man/mysqldumpslow.1 mysqldumpslow.1-20090525095629-dwva1gytzeq6ebha-1
man/mysqlhotcopy.1 mysqlhotcopy.1-20090525095641-uaojhd5lf772ga9b-1
man/mysqlimport.1 mysqlimport.1-20090525095627-mfl13ovw3oornooa-1
man/mysqlmanager.8 mysqlmanager.8-20090525095605-1h3mmx4430hb2n18-1
man/mysqlshow.1 mysqlshow.1-20090525095602-g122t7jqoc3bnqcn-1
man/mysqlslap.1 mysqlslap.1-20090525095610-qfprw8oc5cgsp0wu-1
man/mysqltest.1 mysqltest.1-20090525095620-v74fgi55hw1sr3v8-1
man/ndbd.8 ndbd.8-20090525095643-bylmcypb2afkbix1-1
man/ndbd_redo_log_reader.1 ndbd_redo_log_reader-20090525095632-uavlb6ttl24fathd-1
man/ndbmtd.8 ndbmtd.8-20090525095640-qnk6tubhl257cee3-1
man/perror.1 perror.1-20090525095645-tik3ibzzfve4fvqu-1
man/replace.1 replace.1-20090525095644-t2e5pbl5bcho9z6l-1
man/resolve_stack_dump.1 resolve_stack_dump.1-20090525095637-enz9ko57nji5y4pa-1
man/resolveip.1 resolveip.1-20090525095606-x51yf7vy7vpijf0l-1
mysql-test/collections/README.experimental readme.experimental-20090224115153-en8qgzjquiw0dxzn-1
mysql-test/collections/default.experimental default.experimental-20090224104813-e52mxw708penxv44-1
mysql-test/extra/binlog_tests/binlog.test sp1f-binlog.test-20050223135508-76cdewwz46hwby5kk5g5wmkoxb74yv4y
mysql-test/extra/binlog_tests/drop_temp_table.test sp1f-drop_temp_table.test-20030928163144-dyfrto7gxylnrnim2c5i5wfn7mvjqtgp
mysql-test/extra/rpl_tests/rpl_auto_increment.test sp1f-rpl_auto_increment.t-20051222053450-wnlpmgbkojqq6fnvjocf5ltfbtofin7z
mysql-test/extra/rpl_tests/rpl_extraSlave_Col.test sp1f-rpl_extraslave_col.t-20061103140340-egmkull7owd2wp7d4egg6itzef6p7g23
mysql-test/extra/rpl_tests/rpl_failed_optimize.test sp1f-rpl_failed_optimize.-20051222053450-ylhvmukj7czrgqebj3psg6xeigeaovdj
mysql-test/extra/rpl_tests/rpl_loaddata.test sp1f-rpl_loaddata.test-20051222053450-dtp64wd4tqum3ghoumeskw4aslqd3ntv
mysql-test/extra/rpl_tests/rpl_row_func003.test sp1f-rpl_row_func003.test-20051222053450-vjm7llz5qkczxajhh2ccgw3gtxckqkci
mysql-test/extra/rpl_tests/rpl_row_sp006.test sp1f-rpl_row_sp006.test-20051222053450-5432dx6ssedtpzocu66e5qdwfxa4vd3c
mysql-test/extra/rpl_tests/rpl_row_tabledefs.test sp1f-rpl_row_tabledefs.te-20051222053451-cr6a33nz4a4knerv7ws3ffszjgkqfet3
mysql-test/extra/rpl_tests/rpl_stm_000001.test sp1f-rpl000001.test-20001118063528-ailyrmllkfzwjx3qfvmu555ijzuk5yur
mysql-test/include/check-warnings.test sp1f-checkwarnings.test-20080408145123-h7zoaw4uh3notptmpiebbf3ot3ltza4q
mysql-test/include/concurrent.inc sp1f-innodb_concurrent.te-20051222053459-lwg5sp2ww5pt2wipchfkjjvnmslyp3g3
mysql-test/include/have_big5.inc sp1f-have_big5.inc-20031224125944-5tkectrdy42o4sp27tc264vbtsbsp5zk
mysql-test/include/have_cp1250_ch.inc sp1f-have_cp1250_ch.inc-20050303101530-hgs6b3lnqann3p7ezzu2oork2bt4c73e
mysql-test/include/have_cp1251.inc sp1f-have_cp1251.inc-20070628173450-c6uctbbjbg4cjafdu7n37mliedkrvpf6
mysql-test/include/have_cp866.inc sp1f-have_cp866.inc-20070628173450-ihf7ytndx3tlsk3bndatmunymaack3yr
mysql-test/include/have_cp932.inc sp1f-have_cp932.inc-20050201103742-esjnhrejoad6u7kin5dcrqhy3cp6cjkv
mysql-test/include/have_eucjpms.inc sp1f-have_eucjpms.inc-20050201103743-twgzhhnrnqlyk22if43qv5pp6x6mhh3j
mysql-test/include/have_euckr.inc sp1f-have_euckr.inc-20051209123331-7j7l35wk7rqbeilf5h2llsgozan57mps
mysql-test/include/have_example_plugin.inc sp1f-have_example_plugin.-20061214230953-aurfxypudnc5qjcbqivskkglb5ml6ogr
mysql-test/include/have_gb2312.inc sp1f-have_gb2312.inc-20051209123331-r3djrxfcjo25jhnahmhfpnb5exjh45rw
mysql-test/include/have_gbk.inc sp1f-have_gbk.inc-20050722160355-sjecbg3iondw577wdombapw2jo4xnvku
mysql-test/include/have_koi8r.inc sp1f-have_koi8r.inc-20070628173450-hy526uxwnx2jszukamifvrxjqazziqaz
mysql-test/include/have_latin2_ch.inc sp1f-have_latin2_ch.inc-20060320122819-24bzpko4oaio3gzcxwqxph366tvvcjhb
mysql-test/include/have_simple_parser.inc have_simple_parser.i-20081217115927-orp35vg5f5j5mx25-1
mysql-test/include/have_sjis.inc sp1f-have_sjis.inc-20040325102937-o7cuiclffpte3ihjypzjt3ddqzkwftpg
mysql-test/include/have_tis620.inc sp1f-have_tis620.inc-20031225161058-e77xn25ogqkblhtwdursdputw3ab4bek
mysql-test/include/have_ucs2.inc sp1f-have_ucs2.inc-20030523101002-dylfpt3tlwxfx3fomq4nuxs23v32bkk5
mysql-test/include/have_udf.inc sp1f-have_udf.inc-20060215161119-xf74h7vjdqy73koybl4szdso2mj6cr5n
mysql-test/include/have_ujis.inc sp1f-have_ujis.inc-20030523101003-tlztpr4vjkmdr665l2xwom3sf4onqy5b
mysql-test/include/have_utf8.inc sp1f-have_utf8.inc-20070628173450-5imgq7hc2puarovhwuaak6wxubgrejsu
mysql-test/include/index_merge2.inc sp1f-index_merge_innodb.t-20031120202449-xlcs3ske4vzaexqx7gqraoruphzq475l
mysql-test/include/kill_query.inc kill_query.inc-20090311072104-f3fohpyu4uduixz8-2
mysql-test/include/mix1.inc sp1f-innodb_mysql.test-20060426055153-mgtahdmgajg7vffqbq4xrmkzbhvanlaz
mysql-test/include/mtr_warnings.sql sp1f-mtr_warnings.sql-20080408145123-lhtlr627ins6hwi3hxjrcytx4t27nyjr
mysql-test/include/ps_conv.inc sp1f-ps_conv.inc-20040925150736-7yq4rnzrahaz656cmry5skpgvu5fjbet
mysql-test/include/setup_fake_relay_log.inc setup_fake_relay_log-20081113174908-ha6gaq0z4mt31re9-2
mysql-test/lib/My/ConfigFactory.pm sp1f-configfactory.pm-20071212171904-umibosyolpj2kzgk32rt5p6pl6vztmaq
mysql-test/lib/My/Platform.pm sp1f-platform.pm-20080220135528-i7dsgofojc7pzjnisb6y43f3kmu6j4ci
mysql-test/lib/My/SafeProcess.pm sp1f-safeprocess.pm-20071212171904-5dvgbxdpklkwzt5ti7tudtpid4iugxmy
mysql-test/lib/My/SafeProcess/safe_kill_win.cc sp1f-safe_kill_win.cc-20071212171905-p26lghnyebz5xacmoguzikby3mb4dmhf
mysql-test/lib/My/SafeProcess/safe_process_win.cc sp1f-safe_process_win.cc-20071212171905-5fxmtbbzzyquscvb7nvjxobmzfwasomh
mysql-test/lib/mtr_cases.pm sp1f-mtr_cases.pl-20050203205008-rrteoawyobvgq6u7zeyce4tmuu334ayg
mysql-test/lib/mtr_report.pm sp1f-mtr_report.pl-20041230152648-5foxu5uozo2rvqqrcdpi6gnt4o3z47is
mysql-test/lib/v1/mtr_cases.pl mtr_cases.pl-20081114073900-ptb78lyx8r1awhv9-1
mysql-test/lib/v1/mysql-test-run.pl mtrv1.pl-20081114074002-vy3rb5dcxdfqqvah-1
mysql-test/mysql-stress-test.pl sp1f-mysqlstresstest.pl-20051018162159-2oxz4uxwtkipjw3r7znlxngpg6l5vf63
mysql-test/mysql-test-run.pl sp1f-mysqltestrun.pl-20041230152716-xjnn5ndv4rr4by6ijmj5a4ysubxc7qh3
mysql-test/r/almost_full.result sp1f-almost_full.result-20071112090021-zhr5drqqn7ijqmzeawaiwjfhnqvkfkjr
mysql-test/r/alter_table.result sp1f-alter_table.result-20001228015632-hk5kqhiea33uxdjhnqa2vnagoypjqbi3
mysql-test/r/analyse.result sp1f-analyse.result-20001228015632-2j2wtzdyfq62m65gl6nxekuosny6gy6v
mysql-test/r/archive.result sp1f-archive.result-20040525194738-teb7vr2fyyav2vmvw55tdwgvu3h65flc
mysql-test/r/bug46080.result* bug46080.result-20090710115544-bi718vttwzhdrezd-3
mysql-test/r/count_distinct.result sp1f-count_distinct.resul-20001228015633-djnru5b3kboerj5im4kx2b4osqbhafxi
mysql-test/r/create.result sp1f-create.result-20001228015633-uy7n6oztnd6vmqcrw6z5tloij5yxv4ov
mysql-test/r/ctype_ldml.result sp1f-ctype_ldml.result-20070607125553-fkqnsdgkmmqoecb76tjv3wzmyqfaik22
mysql-test/r/ctype_uca.result sp1f-ctype_uca.result-20040614165527-sh3blw3yx5333e2d42ctirm72htvwqqs
mysql-test/r/ctype_ucs.result sp1f-ctype_ucs.result-20030916112630-sohywipskzw3eqmbhfsqxqjteoun6t2g
mysql-test/r/ctype_utf8.result sp1f-ctype_utf8.result-20030919115911-4q7n5xhenmb42lvyvvkhuqs4dpcxwbbf
mysql-test/r/delayed.result sp1f-delayed.result-20001228015633-d5brh5c3ulnb2qshtfvxu5cvvvxf4lsr
mysql-test/r/delete.result sp1f-delete.result-20010928050551-vf5sxtd554vuepifylwowaaq7k3mbilw
mysql-test/r/distinct.result sp1f-distinct.result-20001228015633-adu7puhxwf4tiwor5amegcrjobuxljra
mysql-test/r/explain.result sp1f-explain.result-20001228015633-fcck4ixyixae4yjfpahxubumufcrdc7p
mysql-test/r/foreign_key.result sp1f-foreign_key.result-20010928050551-csnooy5bq2ndts35xe5gcihbjbt64tjv
mysql-test/r/fulltext.result sp1f-fulltext.result-20001228015633-fi5pm63lvhgn665dsef6fjihfkijbrvt
mysql-test/r/fulltext_order_by.result sp1f-ft0000001.result-20001230154952-a5oer4qq4f6k3srtmabiulmzb2wat4uc
mysql-test/r/func_concat.result sp1f-func_concat.result-20020517075056-2asrraq724esh34kbyhuakztazkhtbrk
mysql-test/r/func_group.result sp1f-func_group.result-20001228015633-oe57bieiww3s6erojiyha7p26m5ul5ql
mysql-test/r/func_in.result sp1f-func_in.result-20001228015633-taucsvp7ggm45m64jbcfu6nyfgdhosnc
mysql-test/r/func_misc.result sp1f-func_misc.result-20001228015633-4sy6dzzt7xcs4ubzcxloyguc7zhougbr
mysql-test/r/func_str.result sp1f-strfunc.result-20001215085543-qraqxeite2ybbq4se6ojb2lwaxem3br3
mysql-test/r/func_time.result sp1f-func_time.result-20001228015633-voo54ouzgwctcbmk22zwazoev5os5o4x
mysql-test/r/gis-rtree.result sp1f-gisrtree.result-20030312125159-uqk53j6wi5kgqzfuaned6oxulziutwoz
mysql-test/r/gis.result sp1f-gis.result-20030301091631-7oyzcwsw4xnrr5tisytvtyymj3p6lvak
mysql-test/r/grant.result sp1f-grant.result-20020905131705-2gfwpyej777fcllxzcvadzd6tqdxfho3
mysql-test/r/grant2.result sp1f-grant2.result-20030722200047-flh2uaxcbwah7yfj5uohcoxndutgaced
mysql-test/r/grant3.result sp1f-grant3.result-20050322110338-ewbo53qs6fkxfzkc7u2ojzyu6bvyp7w6
mysql-test/r/group_min_max.result sp1f-group_min_max.result-20040827133611-aqzadxttbw23mkanmvdsiaambv2pcy27
mysql-test/r/index_merge_innodb.result sp1f-index_merge_innodb.r-20060816114352-umgqkfavfljswrg7qhdkcoptdwi5gipo
mysql-test/r/index_merge_myisam.result sp1f-index_merge_myisam.r-20060816114353-wd2664hjxwyjdvm4snup647av5fmxfln
mysql-test/r/information_schema.result sp1f-information_schema.r-20041113105544-waoxa2fjjsicturpothmjmi6jc3yrovn
mysql-test/r/information_schema_all_engines.result information_schema_a-20090408133348-au36idguotknighe-2
mysql-test/r/information_schema_db.result sp1f-information_schema_d-20050506190605-i2emmavt52skkx7n6b5jklprebhrdrxo
mysql-test/r/innodb-autoinc.result innodbautoinc.result-20081201061010-zymrrwrczns2vrex-280
mysql-test/r/innodb-index.result innodbindex.result-20081201061010-zymrrwrczns2vrex-284
mysql-test/r/innodb-timeout.result innodbtimeout.result-20081203050234-edoolglm28lyejuc-7
mysql-test/r/innodb-zip.result innodbzip.result-20081201061010-zymrrwrczns2vrex-296
mysql-test/r/innodb.result innodb.result-20081201061010-zymrrwrczns2vrex-298
mysql-test/r/innodb_bug36169.result innodb_bug36169.resu-20081201061010-zymrrwrczns2vrex-306
mysql-test/r/innodb_mysql.result sp1f-innodb_mysql.result-20060426055153-bychbbfnqtvmvrwccwhn24i6yi46uqjv
mysql-test/r/innodb_xtradb_bug317074.result innodb_xtradb_bug317-20090326061054-ylrdb8libxw6u7e9-8
mysql-test/r/insert_select.result sp1f-insert_select.result-20001228015633-wih3pifcw5hofocy6banrbkyhtfy6prn
mysql-test/r/join.result sp1f-join.result-20001228015633-f4navd6fbbzksvhaaqulo5ihgevkjty2
mysql-test/r/join_outer.result sp1f-join_outer.result-20001228015633-vk2jshiracfus3ze2d2bim2csnnrc5us
mysql-test/r/join_outer_jcl6.result join_outer_jcl6.resu-20091221012858-uiftww98yhc31z02-1
mysql-test/r/lowercase_fs_off.result sp1f-lowercase_fs_off.res-20060504065503-e5mqzuzzst4qdncn4przr2qqszyfdhf4
mysql-test/r/lowercase_table3.result sp1f-lowercase_table3.res-20040306084333-bdleyleqjz73g4ocjm43zmbf6zkjsipm
mysql-test/r/myisam.result sp1f-myisam.result-20010411215653-pgxkk2xg4lh3nxresmfnsuszf5h3nont
mysql-test/r/myisam_crash_before_flush_keys.result myisam_crash_before_-20090402094502-ekp4zzeucx0vftta-1
mysql-test/r/mysql.result sp1f-mysql.result-20050517191330-5ywsflw7k6pttof273om5l2mb7pyiu22
mysql-test/r/mysql_upgrade.result sp1f-mysql_upgrade.result-20061113123947-wmzpg5cqviqzmxda3qnho2oqhz3t3qhs
mysql-test/r/mysqlbinlog.result sp1f-mysqlbinlog.result-20030924192555-7477cirsvcmvihphlv4wbcvd5dfoh3bm
mysql-test/r/mysqltest.result sp1f-mysqltest.result-20041022024801-dfor5httbrm4yhbhqtfjzpkst5hoejym
mysql-test/r/olap.result sp1f-olap.result-20020720115150-egx2d46xkyxi5dgcpyjexyj4ri6wlcvb
mysql-test/r/openssl_1.result sp1f-ssl.result-20010831211351-xa6w74zno32dlg3iwugerlalsvrsq5hn
mysql-test/r/order_by.result sp1f-order_by.result-20001228015634-omkoitbok7pbz53pkfmplnhbifnrebic
mysql-test/r/partition.result sp1f-partition.result-20050718113029-xlmjyugiq5h2b5wjp236ipsmkmej7i62
mysql-test/r/partition_bug18198.result sp1f-partition_bug18198.r-20070412183249-2zywzajt4whjqp5wz4c7argi77yqhbwb
mysql-test/r/partition_csv.result sp1f-partition_csv.result-20071022181049-b7rjlf3ifzxbwhry54tng7jif24fuapa
mysql-test/r/partition_error.result sp1f-partition_error.resu-20050720124221-iacosg62tkflqhizuvhtiem5oeicwjit
mysql-test/r/partition_innodb.result sp1f-partition_innodb.res-20060518171642-5muwpwnvtxepgop4yhgzzrv2xo2wjlps
mysql-test/r/partition_pruning.result sp1f-partition_pruning.re-20051222092851-tdvef3tnyhio2fj4ktnbr4ienfg7k5qr
mysql-test/r/plugin.result sp1f-plugin.result-20061214230953-dljmjo3wuacc3eox3gwroufqmk3hlne7
mysql-test/r/ps.result sp1f-ps.result-20040405154119-efxzt5onloys45nfjak4gt44kr4awkdi
mysql-test/r/ps_ddl.result sp1f-ps_ddl.result-20071215004622-7wxecn5bjzrz7scbog54tuaaobpayisn
mysql-test/r/ps_grant.result sp1f-ps_grant.result-20050330011743-myy3zlxbj2yg7thj2vovrm2sszh6yzas
mysql-test/r/query_cache.result sp1f-query_cache.result-20011205230530-qf3qzwsqsgfi67vv5ijruxeci6cbkwjl
mysql-test/r/query_cache_notembedded.result sp1f-query_cache_notembed-20050729121335-enxz2r7srcrudvsmkq357ior3n4nlqpq
mysql-test/r/range.result sp1f-range.result-20001228015634-6hpoyn74lnc7irf4gop2jbowgpazbbae
mysql-test/r/select.result sp1f-select.result-20010103001548-znkoalxem6wchsbxizfosjhpfmhfyxuk
mysql-test/r/select_jcl6.result select_jcl6.result-20091221012908-0kl039gl68crw8rz-1
mysql-test/r/show_check.result sp1f-show_check.result-20001228015634-5hf7elb3nj3zmuz6tosvytmbu52bploi
mysql-test/r/sp-destruct.result sp1f-spdestruct.result-20051026133447-inwbkiot3w72y54qgbh3r3qkm7io632d
mysql-test/r/sp-error.result sp1f-sperror.result-20030305184512-euxcpn3oxmcl4dn2kqbdx73ljcbivzto
mysql-test/r/sp-security.result sp1f-spsecurity.result-20031213154048-xglie74lizlzappe5papku3ysbvrzg75
mysql-test/r/sp-ucs2.result sp1f-spucs2.result-20070219105703-dslt4pnq7whgwekaepg7blc2c3zdmu5l
mysql-test/r/sp.result sp1f-sp.result-20030117133802-duandg3yzagzyv7zhhbbt2kcomcegpc7
mysql-test/r/sp_notembedded.result sp1f-sp_notembedded.resul-20060224163410-okgh2uh6w7jxcoszw5y4sk6pq3ngt5n6
mysql-test/r/subselect.result sp1f-subselect.result-20020512204640-zgegcsgavnfd7t7eyrf7ibuqomsw7uzo
mysql-test/r/subselect_no_mat.result subselect_no_mat.res-20100117143924-hut18sl9k2c7qdj8-1
mysql-test/r/subselect_no_opts.result subselect_no_opts.re-20100117143925-pabg7o8iyokjlu93-1
mysql-test/r/subselect_no_semijoin.result subselect_no_semijoi-20100117143925-9yfygtcm7fwsuq2p-1
mysql-test/r/subselect_notembedded.result sp1f-subselect_notembedde-20060224163410-digkbayynk56rddadjogerwuvuzc6zn5
mysql-test/r/system_mysql_db.result sp1f-system_mysql_db.resu-20040310185404-f7br5g4442iqwxireltudlyu5ppbkijo
mysql-test/r/table_elim.result table_elim.result-20090603125022-nge13y0ohk1g2tt2-1
mysql-test/r/trigger.result sp1f-trigger.result-20040907122911-6m6f5d2ijohoqspgy53ybn6kavj4zefi
mysql-test/r/trigger_notembedded.result sp1f-triggergrant.result-20051110192455-2zus7d4a7l2y7ldnokefkk6ibykyn46y
mysql-test/r/type_bit.result sp1f-type_bit.result-20041217140559-ppf6bkjkl3r4tbmlt7ngn46zm6tapa46
mysql-test/r/type_newdecimal.result sp1f-type_newdecimal.resu-20050208224936-kj6dxol5i7zbbfohybib53ffje5i63mk
mysql-test/r/type_year.result sp1f-type_year.result-20001228015634-qnsjzaaz3ams6pb2etie5s3eleghfgp5
mysql-test/r/udf.result sp1f-udf.result-20060215161120-pm5l3nyny5gbznc2egfu4bhwgxbuc6wz
mysql-test/r/union.result sp1f-unions_one.result-20010725122836-ofxtwraxeohz7whhrmfdz57sl4a5prmp
mysql-test/r/update.result sp1f-update.result-20001228015634-eddqlilwpyd255hhzq2fnktefhhgzua5
mysql-test/r/upgrade.result sp1f-upgrade.result-20060217091846-vefhbvl6q255bcptbelqp7boemlf7jyp
mysql-test/r/user_var.result sp1f-user_var.result-20010314060712-zoeg6ozjsxpa3xj7eqrupoba3mq7wmey
mysql-test/r/variables.result sp1f-variables.result-20001228015635-w5m2doorn7gzhyyhpqrlqsupnwn6f6xh
mysql-test/r/view.result sp1f-view.result-20040715221517-nqk3l34grrhprjiitidhfjyjqlgh6a5v
mysql-test/r/view_grant.result sp1f-view_grant.result-20050404194355-hbbr5ud3thpo5tn65q6eyecswq5mdhwk
mysql-test/r/warnings.result sp1f-warnings.result-20010928050551-uka7prbsewkm4k6eu4jrzvvwjvhxgw3y
mysql-test/r/windows.result sp1f-windows.result-20050901013212-d5xc5u66d6z25c2hig2xgc2zjyazfh6i
mysql-test/r/xa.result sp1f-xa.result-20050403224953-ks6zlldv2mxqgh4edidq7sdrlmj7ko4l
mysql-test/std_data/Index.xml sp1f-index.xml-20070607125553-3iauribiu2zgziawdyrtrdxne2gs5o2y
mysql-test/std_data/cacert.pem sp1f-cacert.pem-20010724060723-mzsvdgy4lyvjjx62aqycz65bca4q4ien
mysql-test/std_data/client-cert.pem sp1f-clientcert.pem-20010724060723-x4ijt4gnmldkenng3te7iebn6qfarrjj
mysql-test/std_data/client-key.pem sp1f-clientkey.pem-20010822071134-2mmefinjcnjzc2gu6vh3ord32xvhpbch
mysql-test/std_data/server-cert.pem sp1f-servercert.pem-20010724060723-w7e2s2asnomtwtus3ncffn53f2qjifu6
mysql-test/std_data/server-key.pem sp1f-serverkey.pem-20010822071134-trmhdwb2jmf3nfahr76457fjthitvdm7
mysql-test/std_data/server8k-cert.pem sp1f-server8kcert.pem-20070717184057-mhwrxzunqmiwul3dm6t5hdhtvilxzvcg
mysql-test/std_data/server8k-key.pem sp1f-server8kkey.pem-20070717184104-vbwamgorrqrnzfjitahtkfvhkrdmxzd6
mysql-test/suite/binlog/r/binlog_index.result sp1f-binlog_index.result-20080317181903-axnmkwnuuu6l3mwrtun7rig7lcfcsg7r
mysql-test/suite/binlog/r/binlog_killed_simulate.result sp1f-binlog_killed_simula-20071031094843-hn7c5gekip7rxgka7cic7saaqoy6afkn
mysql-test/suite/binlog/r/binlog_row_binlog.result sp1f-binlog_row_binlog.re-20051222053451-vl3rsa7i5wuxj6pjofa7aylzersn25i6
mysql-test/suite/binlog/r/binlog_row_drop_tmp_tbl.result sp1f-binlog_row_drop_tmp_-20051222053451-bsaj7qioh5jt4woxo36t7i6tvuqwdny3
mysql-test/suite/binlog/r/binlog_row_mix_innodb_myisam.result sp1f-binlog_row_mix_innod-20051222053451-e65hkpwkx65zhmyddgnozkwk6mudqlhi
mysql-test/suite/binlog/r/binlog_stm_binlog.result sp1f-binlog.result-20050223135507-y6mkcjto5zdkpgaevaqo5epoafa3yiq5
mysql-test/suite/binlog/r/binlog_stm_blackhole.result sp1f-blackhole.result-20050323001036-qq4xfb3k4omfzzjj3to67lnds6i7m6zw
mysql-test/suite/binlog/r/binlog_stm_drop_tmp_tbl.result sp1f-drop_temp_table.resu-20030928163144-txtshow2e37lpjddhpazuhs4eulbw2dd
mysql-test/suite/binlog/r/binlog_stm_mix_innodb_myisam.result sp1f-mix_innodb_myisam_bi-20030822133916-l72xhg2oxjthj6ichxowk55lrbjebfxa
mysql-test/suite/binlog/r/binlog_stm_row.result binlog_stm_row.resul-20080929134451-58v46d7hr9wzyo6f-1
mysql-test/suite/binlog/r/binlog_unsafe.result sp1f-binlog_unsafe.result-20070514124535-jqttmp2p2jfelgeewle4swp5rb4j7pbi
mysql-test/suite/binlog/t/binlog_index.test sp1f-binlog_index.test-20080317181903-dbjxnu4iciqjlbq6m3a26wpp4kir4km3
mysql-test/suite/binlog/t/binlog_killed.test sp1f-binlog_killed.test-20070528192019-wgkf3lpghurbledmqfyi43fg3mlrhxby
mysql-test/suite/binlog/t/binlog_row_mysqlbinlog_db_filter.test binlog_row_mysqlbinl-20090527141831-4bqcdit6efzx76qm-1
mysql-test/suite/binlog/t/binlog_stm_mix_innodb_myisam.test sp1f-binlog_stm_mix_innod-20051222053459-semsiuc62fcxnhjqz663pgpzwfjexwci
mysql-test/suite/binlog/t/binlog_stm_row.test binlog_stm_row.test-20080929134444-cku0pcqzy8apdoac-1
mysql-test/suite/binlog/t/binlog_stm_unsafe_warning.test binlog_stm_unsafe_wa-20090627130655-dgww8l4zomxa6h9l-3
mysql-test/suite/binlog/t/binlog_unsafe.test sp1f-binlog_unsafe.test-20070514124535-pau2ov4yta3qsk5zbdwkywce3vhd54xr
mysql-test/suite/federated/disabled.def sp1f-disabled.def-20071212171904-pmuy45bfadht6mm7ig3qvzqc5tfvwfjy
mysql-test/suite/federated/federated_server.result sp1f-federated_server.res-20061202004729-hi4tlt4wdwlni7gwjyolgiey4bwt4ehp
mysql-test/suite/federated/federated_server.test sp1f-federated_server.tes-20061202004730-idhp5qb6ntspupkz2wftb5aguipv27kk
mysql-test/suite/federated/my.cnf sp1f-my.cnf-20071212171904-j7gg67dkfqjo4f5yirx3nscdzpxieqyv
mysql-test/suite/funcs_1/datadict/processlist_val.inc sp1f-processlist_val.inc-20070815194641-3hfsmyh3jr2gjhvgms52iydpkamdbnoz
mysql-test/suite/funcs_1/r/innodb_func_view.result sp1f-innodb_func_view.res-20070206175435-zksozsfanah6kcpyvvlsojjqamdrlaix
mysql-test/suite/funcs_1/r/is_columns_is.result sp1f-is_columns_is.result-20080307163304-7bd6seaxklddmff6f3bb54inlyw6unpw
mysql-test/suite/funcs_1/r/is_columns_mysql.result sp1f-is_columns_mysql.res-20080307163304-a2ymkif2vzliwqqqzr22cglbaf75tthe
mysql-test/suite/funcs_1/r/is_statistics.result sp1f-is_statistics.result-20080307163304-m5utnb3asmvn24qpbivqzlypbjibrtqv
mysql-test/suite/funcs_1/r/is_tables_is.result sp1f-is_tables_is.result-20080307163304-6xl5vbegso6wet3dzoehxb645vntxpig
mysql-test/suite/funcs_1/r/memory_func_view.result sp1f-memory_func_view.res-20070206175436-3fqxnwbedehhtbacqfdpguyzlzigftqi
mysql-test/suite/funcs_1/r/myisam_func_view.result sp1f-myisam_func_view.res-20070206175437-hx5547ncsoco2bpruecyswjziotocyhd
mysql-test/suite/innodb/r/innodb-index.result innodbindex.result-20090610132748-q9m60aph2eqy8zr6-10
mysql-test/suite/innodb/r/innodb-zip.result innodbzip.result-20090610132748-q9m60aph2eqy8zr6-14
mysql-test/suite/innodb/t/innodb-index.test innodbindex.test-20090610132748-q9m60aph2eqy8zr6-24
mysql-test/suite/innodb/t/innodb-zip.test innodbzip.test-20090610132748-q9m60aph2eqy8zr6-28
mysql-test/suite/innodb/t/innodb_information_schema.test innodb_information_s-20090610132748-q9m60aph2eqy8zr6-35
mysql-test/suite/maria/r/maria-recover.result mariarecover.result-20080602174048-lgw3ipowzkym118b-1
mysql-test/suite/maria/r/maria3.result maria3.result-20080701120735-95p69v855sl5nh1m-1
mysql-test/suite/maria/t/maria-recover.test mariarecover.test-20080602174033-rnr5wg8wn2bqarwk-1
mysql-test/suite/maria/t/maria-recovery2-master.opt mariarecovery2master-20080630093401-xzvz0rumgt352cnt-1
mysql-test/suite/maria/t/maria3.test maria3.test-20080701120729-b8g279bfeomagayo-1
mysql-test/suite/parts/inc/part_blocked_sql_funcs_main.inc sp1f-partition_blocked_sq-20070206122149-oq7yhvvqv7hffx475yg4an3z5nuor77z
mysql-test/suite/parts/inc/partition_auto_increment.inc partition_auto_incre-20080902080504-smrqvl9x3yj3y30u-1
mysql-test/suite/parts/inc/partition_timestamp.inc sp1f-partition_timestamp.-20070206122236-ukxavk6zxnrewb3yuohuvwqpnkf3pfne
mysql-test/suite/parts/r/part_blocked_sql_func_innodb.result sp1f-partition_blocked_sq-20070206122237-2udvvhmox7s25o2jq2wzfmmulxd6zmnl
mysql-test/suite/parts/r/part_blocked_sql_func_myisam.result sp1f-partition_blocked_sq-20070206122237-57kffxtanupddrcg7v63inkali5lx6cr
mysql-test/suite/parts/r/partition_auto_increment_innodb.result partition_auto_incre-20080902080538-cpp3r5wsg1iaf0hq-1
mysql-test/suite/parts/r/partition_auto_increment_maria.result partition_auto_incre-20081121141947-lh5ecnr2v4dkwli4-3
mysql-test/suite/parts/r/partition_auto_increment_memory.result partition_auto_incre-20080902130257-psqogw2uun5wtb8c-1
mysql-test/suite/parts/r/partition_auto_increment_myisam.result partition_auto_incre-20080902080538-cpp3r5wsg1iaf0hq-2
mysql-test/suite/parts/r/partition_auto_increment_ndb.result partition_auto_incre-20080902130251-otbifbb7638h2boy-3
mysql-test/suite/parts/r/partition_datetime_innodb.result sp1f-partition_datetime_i-20070206122237-xrcjoqheeg4rbzuvskiypk36bfuncyyi
mysql-test/suite/parts/r/partition_datetime_myisam.result sp1f-partition_datetime_m-20070206122237-bkjjeosasfsj2akgc3hh6ktjzuxlmbmw
mysql-test/suite/parts/t/partition_alter1_2_innodb.test sp1f-partition_alter1_2_i-20080513231047-g6yx4e6tufcbky2wq463xgngz66cuqnu
mysql-test/suite/parts/t/partition_alter2_1_innodb.test sp1f-partition_alter2_inn-20070206122238-btax2l7djjymz6hlnkmuvwdt2nbqbxed
mysql-test/suite/parts/t/partition_alter2_2_innodb.test partition_alter2_2_i-20080908140941-884mge0s10lxgki2-3
mysql-test/suite/parts/t/partition_alter4_innodb.test sp1f-partition_alter4_inn-20070206122238-wzdnxx4co5gutrm6mnbees2dkz7bwsii
mysql-test/suite/parts/t/partition_auto_increment_archive.test partition_auto_incre-20080902130246-2kpijvsifmpe2sjo-1
mysql-test/suite/parts/t/partition_auto_increment_blackhole.test partition_auto_incre-20080902130246-2kpijvsifmpe2sjo-2
mysql-test/suite/parts/t/partition_recover_myisam.test partition_repair_myi-20080609121315-mjya2e9ekn7bunzm-2
mysql-test/suite/pbxt/r/func_group.result func_group.result-20090402100035-4ilk9i91sh65vjcb-50
mysql-test/suite/pbxt/r/grant.result grant.result-20090402100035-4ilk9i91sh65vjcb-65
mysql-test/suite/pbxt/r/group_min_max.result group_min_max.result-20090402100035-4ilk9i91sh65vjcb-69
mysql-test/suite/pbxt/r/join_nested.result join_nested.result-20090402100035-4ilk9i91sh65vjcb-81
mysql-test/suite/pbxt/r/lock_multi.result lock_multi.result-20090402100035-4ilk9i91sh65vjcb-90
mysql-test/suite/pbxt/r/mysqlshow.result mysqlshow.result-20090402100035-4ilk9i91sh65vjcb-101
mysql-test/suite/pbxt/r/negation_elimination.result negation_elimination-20090402100035-4ilk9i91sh65vjcb-103
mysql-test/suite/pbxt/r/partition_error.result partition_error.resu-20090402100035-4ilk9i91sh65vjcb-112
mysql-test/suite/pbxt/r/partition_pruning.result partition_pruning.re-20090402100035-4ilk9i91sh65vjcb-117
mysql-test/suite/pbxt/r/pbxt_bugs.result pbxt_bugs.result-20090402100035-4ilk9i91sh65vjcb-120
mysql-test/suite/pbxt/r/ps_grant.result ps_grant.result-20090402100035-4ilk9i91sh65vjcb-131
mysql-test/suite/pbxt/r/skip_grants.result skip_grants.result-20090402100035-4ilk9i91sh65vjcb-142
mysql-test/suite/pbxt/r/subselect.result subselect.result-20090402100035-4ilk9i91sh65vjcb-146
mysql-test/suite/pbxt/r/view_grant.result view_grant.result-20090402100035-4ilk9i91sh65vjcb-169
mysql-test/suite/pbxt/t/grant.test grant.test-20090402100035-4ilk9i91sh65vjcb-232
mysql-test/suite/pbxt/t/join_nested.test join_nested.test-20090402100035-4ilk9i91sh65vjcb-248
mysql-test/suite/pbxt/t/lock_multi.test lock_multi.test-20090402100035-4ilk9i91sh65vjcb-257
mysql-test/suite/pbxt/t/partition_error.test partition_error.test-20090402100035-4ilk9i91sh65vjcb-279
mysql-test/suite/pbxt/t/pbxt_bugs.test pbxt_bugs.test-20090402100035-4ilk9i91sh65vjcb-287
mysql-test/suite/pbxt/t/pbxt_locking.test pbxt_locking.test-20090402100035-4ilk9i91sh65vjcb-288
mysql-test/suite/pbxt/t/pbxt_transactions.test pbxt_transactions.te-20090402100035-4ilk9i91sh65vjcb-291
mysql-test/suite/pbxt/t/ps_1general.test ps_1general.test-20090402100035-4ilk9i91sh65vjcb-297
mysql-test/suite/pbxt/t/ps_grant.test ps_grant.test-20090402100035-4ilk9i91sh65vjcb-298
mysql-test/suite/pbxt/t/subselect.test subselect.test-20090402100035-4ilk9i91sh65vjcb-313
mysql-test/suite/rpl/r/rpl_auto_increment.result sp1f-rpl_auto_increment.r-20040915191013-ch23iwok7bzpqd7wtcsf4m6mt4txcr3o
mysql-test/suite/rpl/r/rpl_bug33931.result sp1f-rpl_bug33931.result-20080213120940-oicizd2vl3efcyhgeza6sjfbgw5j6ack
mysql-test/suite/rpl/r/rpl_create_if_not_exists.result rpl_create_if_not_ex-20090810060616-r2uzycs3kvm3gfjx-1
mysql-test/suite/rpl/r/rpl_do_grant.result sp1f-rpl_do_grant.result-20030802214618-dy6h6p3wwcdlud4mk6ivfxsgu5celg5t
mysql-test/suite/rpl/r/rpl_drop_temp.result sp1f-rpl_drop_temp.result-20050214224645-fme2pf42uu4gldojyb2mvqelfmrk6b24
mysql-test/suite/rpl/r/rpl_err_ignoredtable.result sp1f-rpl_error_ignored_ta-20030708095945-wtbf3wai2clqedywdvwntfdfwmloumec
mysql-test/suite/rpl/r/rpl_extraCol_innodb.result sp1f-rpl_extracol_innodb.-20061103140439-oyaqsdcl3ymjfl5y2wvwjz3cgb36dbj3
mysql-test/suite/rpl/r/rpl_extraCol_myisam.result sp1f-rpl_extracol_myisam.-20061103140439-ipxcnvlavhkichgny6fvkejbdgnvudtd
mysql-test/suite/rpl/r/rpl_get_lock.result sp1f-rpl_get_lock.result-20010831215549-yggpnug7jmur7hftd3ln6ytpungqckpy
mysql-test/suite/rpl/r/rpl_get_master_version_and_clock.result rpl_get_master_versi-20090714012948-jn3dghe3lx2bda9a-1
mysql-test/suite/rpl/r/rpl_idempotency.result sp1f-rpl_idempotency.resu-20071030201714-gapul4f6owmfen7q2slybdwkvwtonjr4
mysql-test/suite/rpl/r/rpl_init_slave_errors.result rpl_bug38197.result-20090211115331-pkf48eusdtkxwt89-1
mysql-test/suite/rpl/r/rpl_innodb_mixed_dml.result sp1f-rpl_innodb_mixed_dml-20070206122521-55gg47nv2ebmatncbhfe7ihcle7wzd6n
mysql-test/suite/rpl/r/rpl_killed_ddl.result rpl_killed_ddl.resul-20090311072154-xs2e2pig7q07b8hs-1
mysql-test/suite/rpl/r/rpl_loaddata.result sp1f-rpl_loaddata.result-20030114092724-fx5ivt6tn56aiwq4siqhlaxoyw5gha66
mysql-test/suite/rpl/r/rpl_loaddata_fatal.result sp1f-rpl_loaddata_fatal.r-20070609051931-4yb5joctyn3lsurioiijsyz4xb4r2sqj
mysql-test/suite/rpl/r/rpl_loaddata_map.result sp1f-rpl_loaddata_map.res-20071221203440-ccy2zkl5istm7qrpaydhz3lbzt355aie
mysql-test/suite/rpl/r/rpl_loaddatalocal.result sp1f-rpl_loaddatalocal.re-20030228202359-vlyqlboybbakyklqm2lhptrbwruhia2n
mysql-test/suite/rpl/r/rpl_log_pos.result sp1f-rpl000014.result-20001212220135-a5ffppjzfu3hlnpovghh4w3fdmgdmk6c
mysql-test/suite/rpl/r/rpl_misc_functions.result sp1f-rpl_misc_functions.r-20030513204954-3552lzak55uhlqumdmnrmzqsudmv3lpi
mysql-test/suite/rpl/r/rpl_mixed_ddl_dml.result sp1f-rpl000002.result-20001118063528-dp4vigctbaz5p7s7r7cqtgabk25a5j3m
mysql-test/suite/rpl/r/rpl_optimize.result sp1f-rpl_optimize.result-20040222102242-g4p766mhq2a7b26lunf5duf5kc7r3nf6
mysql-test/suite/rpl/r/rpl_packet.result sp1f-rpl_packet.result-20060911211902-zl764nrlzzu3kom3pf3rrm6eltxveylw
mysql-test/suite/rpl/r/rpl_relayspace.result sp1f-rpl_relayspace.resul-20030317215153-kx422hojs2xkiqciwgt7jps2hdk376fb
mysql-test/suite/rpl/r/rpl_row_create_table.result sp1f-rpl_row_create_table-20051222053452-uud3ktz3erqptqb64rkh7ftoo7bdbf6c
mysql-test/suite/rpl/r/rpl_row_func003.result sp1f-rpl_row_func003.resu-20051222053453-pcvnyat2pciwhauzrtgsrny3ithtt7hr
mysql-test/suite/rpl/r/rpl_row_mysqlbinlog.result sp1f-rpl_row_mysqlbinlog.-20060222210227-sa7szacuk2lezzunpti7n7ixdtinkuvx
mysql-test/suite/rpl/r/rpl_row_sp006_InnoDB.result sp1f-rpl_row_sp006_innodb-20051222053454-fadhobwa33eljoqv6iunh46xfx3f4gab
mysql-test/suite/rpl/r/rpl_row_tabledefs_2myisam.result sp1f-rpl_row_tabledefs.re-20051222053457-perubbeq3fwsqe5phfwcpsstjqltqrnj
mysql-test/suite/rpl/r/rpl_row_tabledefs_3innodb.result sp1f-rpl_row_tabledefs_3i-20060508180502-wvvscuvjv34fiuhqjuhi6yvk6cwbmrpz
mysql-test/suite/rpl/r/rpl_slave_load_remove_tmpfile.result rpl_slave_fail_load_-20090317104308-hlm8218eqojabbnb-1
mysql-test/suite/rpl/r/rpl_sp.result sp1f-rpl_sp.result-20050505122047-6pz3qkb234acgvxly33c2rm665rolo6w
mysql-test/suite/rpl/r/rpl_stm_000001.result sp1f-rpl000001.result-20010116163624-seoa5zygxq5ibscm6kld7cneoimbmer4
mysql-test/suite/rpl/r/rpl_stm_log.result sp1f-rpl_log.result-20010621191923-r3yiuhrqrbautxnc66pw6bzlo6qp7sds
mysql-test/suite/rpl/r/rpl_stm_maria.result sp1f-rpl_stm_maria.result-20080120042524-br2zhhb4frmlzevzypcjsymengenqpmw
mysql-test/suite/rpl/r/rpl_stm_until.result sp1f-rpl_until.result-20030913201321-p3mvlpsb7inibiskpnbpjn5h4joda666
mysql-test/suite/rpl/r/rpl_temporary.result sp1f-rpl_temporary.result-20021229214238-uwyas6jaay7ygaqsdwolzlcec6reyckw
mysql-test/suite/rpl/r/rpl_temporary_errors.result sp1f-rpl_temporary_errors-20071020181608-xundxvrjaumpxwejukituoslcc42my7p
mysql-test/suite/rpl/r/rpl_trigger.result sp1f-rpl_trigger.result-20050815151505-durmghxr6fqgzgolaezf6gfotoeptwhz
mysql-test/suite/rpl/t/disabled.def sp1f-disabled.def-20070627122758-vdqevwzhnizicdrxrmfy4w4afgprx46x
mysql-test/suite/rpl/t/rpl_bug33931.test sp1f-rpl_bug33931.test-20080213120940-zrvpqfftvysveivghh37d77gupwv6ryc
mysql-test/suite/rpl/t/rpl_circular_for_4_hosts.test sp1f-rpl_circular_for_4_h-20080424204102-x4muyiodsvxrxdug2mnsmhevbu7k4bxf
mysql-test/suite/rpl/t/rpl_create_if_not_exists.test rpl_create_if_not_ex-20090810060633-wnx1xn452r0bo0j0-1
mysql-test/suite/rpl/t/rpl_do_grant.test sp1f-rpl_do_grant.test-20030802214619-wincvjltx3w7wntnmnquss36fcszy2wa
mysql-test/suite/rpl/t/rpl_drop_temp.test sp1f-rpl_drop_temp.test-20050214224649-nglnrerjic7a76wtis7mlhcr3nzxcioi
mysql-test/suite/rpl/t/rpl_err_ignoredtable.test sp1f-rpl_error_ignored_ta-20030708095933-nrriw3pbfsfrugbgvjpriczjb3dwm4mn
mysql-test/suite/rpl/t/rpl_get_lock.test sp1f-rpl_get_lock.test-20010831215549-mxwhygd7dfgxcx4dnnjjomunk7oojsj5
mysql-test/suite/rpl/t/rpl_get_master_version_and_clock.test rpl_get_master_versi-20090714012921-gi0n2a3z1he17ht5-1
mysql-test/suite/rpl/t/rpl_idempotency.test sp1f-rpl_idempotency.test-20071030201714-dbiujbc2tp25eunlqd4msz66kri542ju
mysql-test/suite/rpl/t/rpl_ignore_table.test sp1f-rpl_ignore_table.tes-20060126105131-jcbijazldvuzr3nd6gryujowhn5fthha
mysql-test/suite/rpl/t/rpl_init_slave_errors.test rpl_bug38197.test-20090211115331-pkf48eusdtkxwt89-2
mysql-test/suite/rpl/t/rpl_killed_ddl.test rpl_killed_ddl.test-20090311072121-1wioz7o6u1ok14u2-1
mysql-test/suite/rpl/t/rpl_loaddatalocal.test sp1f-rpl_loaddatalocal.te-20030228202359-thkinry6nfontmohwt4ppxanoj2g2yfd
mysql-test/suite/rpl/t/rpl_log_pos.test sp1f-rpl000014.test-20001212220135-bejuuiqndvmgqdoni6j4db7qxhci65dg
mysql-test/suite/rpl/t/rpl_misc_functions.test sp1f-rpl_misc_functions.t-20030513204954-oc37hig7s6qcysxxz3xjyeuqapgdwnud
mysql-test/suite/rpl/t/rpl_mixed_ddl_dml.test sp1f-rpl000002.test-20001118063528-xtihamqla2qxwkn544mamd5mlt5pev33
mysql-test/suite/rpl/t/rpl_optimize.test sp1f-rpl_optimize.test-20040222102242-sa235qkcz6ilnydeaz2vkcubkou7kkyr
mysql-test/suite/rpl/t/rpl_packet.test sp1f-rpl_packet.test-20060911211902-zxv62juvripcfcrseay32yorn6veiwfl
mysql-test/suite/rpl/t/rpl_relayspace.test sp1f-rpl_relayspace.test-20030317215153-aincxws3k2fb4ojvtowjzgandznovi7b
mysql-test/suite/rpl/t/rpl_rotate_logs.test sp1f-rpl000016.test-20001215004309-uqid5ejphbyjwielf3t6nd7523ynp353
mysql-test/suite/rpl/t/rpl_row_create_table.test sp1f-rpl_row_create_table-20051222053501-6a6pxustyjnj6swfd4bbsptzllxov5ao
mysql-test/suite/rpl/t/rpl_slave_load_remove_tmpfile.test rpl_slave_fail_load_-20090316155115-ydeyw4dljkuo6vf5-1
mysql-test/suite/rpl/t/rpl_stm_maria.test sp1f-rpl_stm_maria.test-20080120042525-zg3xxjue5l6hbj7esu6qdkidmspwgidv
mysql-test/suite/rpl/t/rpl_stm_until.test sp1f-rpl_until.test-20030913201321-dilykyg5bgdyqteanf25enggl6ekigje
mysql-test/suite/rpl/t/rpl_temporary.test sp1f-rpl_temporary.test-20021229214239-nxqbr5fvrk5sm3d5xf2gwzhfim3xdy7k
mysql-test/suite/rpl/t/rpl_temporary_errors.test sp1f-rpl_temporary_errors-20071020181608-od3xavonaky32wsy7gbvwomesyi5fiye
mysql-test/suite/rpl/t/rpl_timezone.test sp1f-rpl_timezone.test-20040618061121-pvoxjozqh37mmxq6duvd5limkzd2hj2m
mysql-test/suite/rpl/t/rpl_trigger.test sp1f-rpl_trigger.test-20050815151506-5wtqt7aazfxwebrvi37du2prbglo4lmq
mysql-test/suite/rpl_ndb/r/rpl_ndb_circular_simplex.result sp1f-rpl_ndb_circular_sim-20070412065801-k2ky5wpm6vtdcj53bacwmwxhhj4ofcy2
mysql-test/suite/rpl_ndb/r/rpl_ndb_extraCol.result sp1f-rpl_ndb_extracol.res-20061103140449-ucihyswq7mtsamyjm2whggyjxyfekxeo
mysql-test/suite/rpl_ndb/r/rpl_ndb_func003.result sp1f-rpl_ndb_func003.resu-20060203192750-aghso723jniyz5tfcig2dxfztgivxay2
mysql-test/suite/rpl_ndb/r/rpl_ndb_sp006.result sp1f-rpl_ndb_sp006.result-20060209212318-oue4xsmivduntdk47fzqt36ws5bbmhmq
mysql-test/suite/rpl_ndb/t/rpl_ndb_circular.test sp1f-rpl_ndb_circular.tes-20070412141346-av6rslz2h32ovpuk3ppyehbg7dbsgcu4
mysql-test/suite/rpl_ndb/t/rpl_ndb_circular_simplex.test sp1f-rpl_ndb_circular_sim-20070412065801-khbc7ydsapuk7454j2xfa54untsyzox5
mysql-test/t/almost_full.test sp1f-almost_full.test-20071112090021-ehngu75n2rqd6mngclfwfoxrj6ipxofo
mysql-test/t/alter_table.test sp1f-alter_table.test-20001228015635-ibytgjjpm4y57rzxqoascmr2hqujnjge
mysql-test/t/analyse.test sp1f-analyse.test-20001228015635-x364ynbakdxnjmftcf6js527huqaoipj
mysql-test/t/archive.test sp1f-archive.test-20040525194738-qla5yawytktcj3tlbgrlhvf3thbo6ghq
mysql-test/t/bug46080.test bug46080.test-20090710115544-bi718vttwzhdrezd-1
mysql-test/t/count_distinct.test sp1f-count_distinct.test-20001228015635-5dt4cdsivx3h7f5crgubhv46uwpwy5h2
mysql-test/t/create.test sp1f-create.test-20001228015635-grq5cruh7q3juapcegeza6mshjkzsxzo
mysql-test/t/ctype_ldml.test sp1f-ctype_ldml.test-20070607125553-mqkp3r7v2bep7crb3ooj7q5qxx7m2257
mysql-test/t/ctype_uca.test sp1f-ctype_uca.test-20040614165527-nfbh56fowcrwgwtbyyi5nczcykzaya45
mysql-test/t/ctype_ucs.test sp1f-ctype_ucs.test-20030916112631-7diba44oovwv3h5kqbswfqbveczwbrxv
mysql-test/t/ctype_utf8.test sp1f-ctype_utf8.test-20030919115911-7wynyiumuz3atssfpy7o5lgxxbdwqugi
mysql-test/t/ddl_i18n_koi8r.test sp1f-ddl_i18n_koi8r.test-20070628173450-l2wp2w535liri5p7y2vyec7jb4pdmrny
mysql-test/t/ddl_i18n_utf8.test sp1f-ddl_i18n_utf8.test-20070628173450-v2oss2i527p7rgwxdqhg2y4ciy4wrfvu
mysql-test/t/delayed.test sp1f-delayed.test-20001228015635-nfs6w3ic7qt55pnm6uld7wmmq4p73afq
mysql-test/t/delete.test sp1f-delete.test-20001228015635-7lhk263y3s3wild7htgoaesssx5wdy4s
mysql-test/t/disabled.def sp1f-disabled.def-20050315184020-inpdp4hiogithilv62snllppjz2dcing
mysql-test/t/distinct.test sp1f-distinct.test-20001228015635-qewnmgwdqesya4ppb2fbev4mxjvlme2f
mysql-test/t/explain.test sp1f-explain.test-20001228015635-wk7l25cmz54vfufovxkip3auyxz2s36e
mysql-test/t/flush_read_lock_kill.test sp1f-flush_read_lock_kill-20041202220229-cx7dgk5hubrznmauw6vq6lnvahvcwew5
mysql-test/t/foreign_key.test sp1f-foreign_key.test-20001228015635-w3cmvipwwfenx3qs7jgumvzpprnbkkb2
mysql-test/t/fulltext.test sp1f-fulltext.test-20001228015635-snfzkwn2snrqit5pagdg2vzhcoa56eea
mysql-test/t/fulltext2.test sp1f-fulltext2.test-20030113131636-wcatqsjdrsu5c7e7xt2i4fol6phzvwnp
mysql-test/t/fulltext_order_by.test sp1f-ft0000001.test-20001211130622-67fyfykug6v4jvdsllpj3bxfqopztoci
mysql-test/t/func_concat.test sp1f-func_concat.test-20020517075056-6eu3pfmbxcsbmfzicyneluklehdgammd
mysql-test/t/func_group.test sp1f-func_group.test-20001228015635-wkz277djccbddkitm63hibutxp7o4rb7
mysql-test/t/func_if.test sp1f-func_if.test-20020422204141-q7i5g4jckjd6a5k66ybo2si2cvk3wbfl
mysql-test/t/func_in.test sp1f-func_in.test-20001228015635-dykb2qebuowolk7cf6gpa4brezc4m5gk
mysql-test/t/func_misc.test sp1f-func_misc.test-20001228015635-kayguwcdgtjnekzavvdzbsnqcdwfm36c
mysql-test/t/func_str.test sp1f-strfunc.test-20001215085543-mqigcxue3chlbvewleghlo7v5ob5x6vj
mysql-test/t/gis-rtree.test sp1f-gisrtree.test-20030312125159-kg66qt2bmrgz7yscu55gymo7pqha5ra2
mysql-test/t/gis.test sp1f-gis.test-20030301091631-6xbsjkakono4hhavzhol5dhxlmcms4pj
mysql-test/t/grant.test sp1f-grant.test-20020905131705-iadu5zcjshnxgtjx7qpmfrs77bl75suy
mysql-test/t/grant2.test sp1f-grant2.test-20030722200048-galnas2hib5h2ygo4rzcnpblby7awdow
mysql-test/t/grant3.test sp1f-grant3.test-20050322110327-afyko7s7c6kg2wqmwekxz7stzflyxe2s
mysql-test/t/group_min_max.test sp1f-group_min_max.test-20040827133612-bbe7hj6l7byvtyxsg4iicylzflsgy6vj
mysql-test/t/information_schema.test sp1f-information_schema.t-20041113105545-lgutyhqnhpfgiswiwj2ykmjnolmsfq5h
mysql-test/t/information_schema_db.test sp1f-information_schema_d-20050506190606-kvrvmvgttlnqukdm6gfrtdntjs4tfjrm
mysql-test/t/innodb-analyze.test innodbanalyze.test-20081203050234-edoolglm28lyejuc-6
mysql-test/t/innodb-autoinc.test innodbautoinc.test-20081201061010-zymrrwrczns2vrex-281
mysql-test/t/innodb-index.test innodbindex.test-20081201061010-zymrrwrczns2vrex-285
mysql-test/t/innodb-master.opt innodbmaster.opt-20081201061010-zymrrwrczns2vrex-290
mysql-test/t/innodb-semi-consistent-master.opt innodbsemiconsistent-20081201061010-zymrrwrczns2vrex-293
mysql-test/t/innodb-timeout.test innodbtimeout.test-20081203050234-edoolglm28lyejuc-8
mysql-test/t/innodb-use-sys-malloc-master.opt innodbusesysmallocma-20090326061054-ylrdb8libxw6u7e9-3
mysql-test/t/innodb-zip.test innodbzip.test-20081201061010-zymrrwrczns2vrex-297
mysql-test/t/innodb.test innodb.test-20081201061010-zymrrwrczns2vrex-299
mysql-test/t/innodb_bug34300.test innodb_bug34300.test-20081201061010-zymrrwrczns2vrex-303
mysql-test/t/innodb_bug36169.test innodb_bug36169.test-20081201061010-zymrrwrczns2vrex-307
mysql-test/t/innodb_bug36172.test innodb_bug36172.test-20081203050234-edoolglm28lyejuc-10
mysql-test/t/innodb_bug39438.test innodb_bug39438.test-20081214202842-57uir9gc3v9g1pge-3
mysql-test/t/innodb_bug42101-nonzero-master.opt innodb_bug42101nonze-20090519075917-c0hbhca1f80pmx80-4
mysql-test/t/innodb_information_schema.test innodb_information_s-20081201061010-zymrrwrczns2vrex-309
mysql-test/t/innodb_mysql.test sp1f-innodb_mysql.test-20060816102624-6ymo37d3nyhvbqyzqn5ohsfuydwo426k
mysql-test/t/innodb_xtradb_bug317074.test innodb_xtradb_bug317-20090326061054-ylrdb8libxw6u7e9-9
mysql-test/t/insert_select.test sp1f-insert_select.test-20001228015636-zjrqdr7pnvxymgj7brilmnuk2ywuj5u4
mysql-test/t/join.test sp1f-join.test-20001228015636-punt3oq3irbqswtbrlkelkxape6lttnl
mysql-test/t/join_outer.test sp1f-join_outer.test-20001228015636-himrcptylaquy6l5d7pl7pawom3ytmtw
mysql-test/t/kill.test sp1f-kill.test-20010314060712-batcuefxmzrvmgnamk2ljdbhvztus52g
mysql-test/t/lock_multi.test sp1f-lock_multi.test-20011008015806-67rpwlsigaymaevma2l42r5edbtot3fp
mysql-test/t/lowercase_fs_off.test sp1f-lowercase_fs_off.tes-20060504065504-i6dehorpaxhrhthhkdl7ioqcnzsktaqa
mysql-test/t/lowercase_table3.test sp1f-lowercase_table3.tes-20040306084333-hnvnsyhtqiysn2fmfz3zkqfjy3mivo4d
mysql-test/t/myisam-system.test sp1f-myisamsystem.test-20060503125913-ozpyqynhhwo5iiwxdgyt7iljauvr2obd
mysql-test/t/myisam.test sp1f-myisam.test-20010411215653-cdmhjqbeu3xtipkauwbbirystludnac3
mysql-test/t/myisam_crash_before_flush_keys.test myisam_crash_before_-20090402094436-q8pxw24lav2lpjzk-1
mysql-test/t/mysql.test sp1f-mysql.test-20050517191330-gc7zxd3q7cgw4g3pdnswpnxwqnvqhwks
mysql-test/t/mysql_upgrade.test sp1f-mysql_upgrade.test-20061113123947-lushdxyf7hmcmdtuvg7gdaq5a6wjiswn
mysql-test/t/mysqlbinlog.test sp1f-mysqlbinlog.test-20030924192555-wsghaldjdiuxo36sss7fu2urp47axjk7
mysql-test/t/mysqltest.test sp1f-mysqltest.test-20041022024800-v3hvkzs4236l6rpunai7xttdltot7rvz
mysql-test/t/named_pipe.test sp1f-named_pipe.test-20070924104241-daakp7etk2k4hxuzofbxrgkdkzcwmw2a
mysql-test/t/not_partition.test sp1f-not_partition.test-20061026171106-weuf2mmixpkzlidd3r3j4yme2whe35rj
mysql-test/t/olap.test sp1f-olap.test-20020720115151-u3y5qjyyz4c7hufu5vftj74rijkr7rf2
mysql-test/t/openssl_1.test sp1f-ssl.test-20010831211355-mk47pipvythsqcor32yidzoopgdewdo6
mysql-test/t/order_by.test sp1f-order_by.test-20001228015636-nr7aml75ra7mdlruhoqo5dgbfv5tcesc
mysql-test/t/partition.test sp1f-partition.test-20050718113034-pbo3ht3bf4gfa3mz44on3sqafyctwo35
mysql-test/t/partition_bug18198.test sp1f-partition_bug18198.t-20070412183249-7r7ram5fqiyssvy4t5irs2a7sdd4oimt
mysql-test/t/partition_csv.test sp1f-partition_csv.test-20071022181049-u2nodruhqzkicgs2isvjxv5xfkj3q5hc
mysql-test/t/partition_error.test sp1f-partition_error.test-20050720124214-od2aou4vzloggrqktgmbjzvuhajiukpm
mysql-test/t/partition_innodb.test sp1f-partition_innodb.tes-20060518171642-twfw23mpackjkfvorfvay4dhvjxhtbfm
mysql-test/t/partition_innodb_semi_consistent.test partition_innodb_sem-20081216114001-2cqkultf4k3xhbvc-2
mysql-test/t/partition_pruning.test sp1f-partition_pruning.te-20051222092851-w33h4bmtllkwolwe5birv6mwcwoe2uys
mysql-test/t/plugin.test sp1f-plugin.test-20061214230953-rdqkovjzpupoeypjzzvefseahkmrdz4f
mysql-test/t/plugin_load.test sp1f-plugin_load.test-20080126000459-32quuvob6bm45mqub6nydrb66zhyumvz
mysql-test/t/ps.test sp1f-ps.test-20040405154119-4zqf6po44yypvz5foa2osprg5kb5ok63
mysql-test/t/ps_ddl.test sp1f-ps_ddl.test-20071215004622-2fkvss6xi7zvoksbhhmbwak3gs54jnbo
mysql-test/t/ps_not_windows.test sp1f-ps_not_windows.test-20061117214908-5zcao5tiy3glx2i3aqg2sn4koijafp6u
mysql-test/t/query_cache.test sp1f-query_cache.test-20011205230530-yfwho76ujeasygr3magwlmssnvwsukio
mysql-test/t/query_cache_debug.test sp1f-query_cache_debug.te-20080107200614-idvgytisf3mqftabyk43v42cynhijq5h
mysql-test/t/query_cache_notembedded.test sp1f-query_cache_notembed-20050729121335-367lhbc36drodp262lkuott3pk25wcdt
mysql-test/t/query_cache_ps_no_prot.test sp1f-query_cache_ps_no_pr-20070524201347-frtbp77ujtz4lm5li5y6jidbc7x5grb6
mysql-test/t/query_cache_ps_ps_prot.test sp1f-query_cache_ps_ps_pr-20070524201347-bph44fceahlyq3j3xcirxchzllxweimq
mysql-test/t/range.test sp1f-range.test-20001228015636-xfak6bsaw5p3ek36np7bznadjb3boh2q
mysql-test/t/select.test sp1f-select.test-20010103001548-tbl2ff7qehzh43qnsmf4ejhjqe66f46n
mysql-test/t/show_check-master.opt sp1f-show_checkmaster.opt-20061004031826-m2pj2wv7l6njctrnpaenfdqxhckyfxpz
mysql-test/t/show_check.test sp1f-show_check.test-20001228015637-uv35wm2ryvpkyrr6ojhmi2nq6x6jgdod
mysql-test/t/sp-destruct.test sp1f-spdestruct.test-20051026133448-bt2vu42upsulrlap6vytgz7hygx6a2hj
mysql-test/t/sp-error.test sp1f-sperror.test-20030305184512-aipdocqcicc6rgsz672mr32qowtm5ceb
mysql-test/t/sp-security.test sp1f-spsecurity.test-20031213154048-snbqkvepvo4c45wtxld2qrc3h35ap4ty
mysql-test/t/sp-ucs2.test sp1f-spucs2.test-20070219105703-22h7c3qd3wz3n3vlkunsrexgznpaqbin
mysql-test/t/sp.test sp1f-sp.test-20030117133803-b6pcfv2yscbqkur5fszep7acmdg7nf5k
mysql-test/t/sp_notembedded.test sp1f-sp_notembedded.test-20060224163411-4bxzhibgkpu3fm3zoyvknrqo3zudvvfa
mysql-test/t/subselect.test sp1f-subselect.test-20020512204640-lyqrayx6uwsn7zih6y7kerkenuitzbvr
mysql-test/t/subselect_notembedded.test sp1f-subselect_notembedde-20060224163411-obdnoufdqsjbzgzetzyx3v2slnzgry4n
mysql-test/t/table_elim.test table_elim.test-20090603125018-ka3vcfrm07bsldz8-1
mysql-test/t/trigger.test sp1f-trigger.test-20040907122911-eamsjnplirl554ohkncdnwi765xm2hbk
mysql-test/t/trigger_notembedded.test sp1f-triggergrant.test-20051110192456-j6hwzoi4loitpk57ccqotlhkzrm6ucsv
mysql-test/t/type_bit.test sp1f-type_bit.test-20041217140559-tzpygypzmjyjiukpq75swmn6zq4ytqe4
mysql-test/t/type_newdecimal.test sp1f-type_newdecimal.test-20050208224936-e244l5ugrk3oditjqp53n6izptkrteq2
mysql-test/t/type_year.test sp1f-type_year.test-20001228015637-j547qmpytndiwdwgn35oq34jgjduzo6l
mysql-test/t/udf.test sp1f-udf.test-20060215161120-inrv7ph3327gnzcvcqk25vmihneybyhk
mysql-test/t/union.test sp1f-unions_one.test-20010725122836-57cnbpjvizewgwar32kmidvvj6jsf7rz
mysql-test/t/update.test sp1f-update.test-20001228015637-63zlejfzul4bql7vagkgrfew3bn7qdhq
mysql-test/t/upgrade.test sp1f-upgrade.test-20060217091845-b2j6eahffx256stwqu5aki5p55sq2bz3
mysql-test/t/user_var.test sp1f-user_var.test-20010314060712-dapoxkmrpmwi4yh7qj36h6tolpnzg5bi
mysql-test/t/variables.test sp1f-variables.test-20001228015637-u4toadkin7aellpwwz75e5h5zuutteid
mysql-test/t/view.test sp1f-view.test-20040715221517-2kxb7l4itrpl4mw266xe5gby4vftru3z
mysql-test/t/view_grant.test sp1f-view_grant.test-20050404194355-y5ik7soywcms7xriyzo72dooviahc7cx
mysql-test/t/warnings.test sp1f-warnings.test-20001228015637-zfi7dd3hclrgbgbjruiknua2ytqtagx4
mysql-test/t/windows.test sp1f-windows.test-20050901013213-brlrkwlhdfgrngb2t563kyzyenq6gls2
mysql-test/t/xa.test sp1f-xa.test-20050403224954-lwrpyxgzlsnvmlzqdkoeuxg62yhkyecp
mysql-test/valgrind.supp sp1f-valgrind.supp-20050406142216-yg7xhezklqhgqlc3inx36vbghodhbovy
mysys/CMakeLists.txt sp1f-cmakelists.txt-20060831175237-shgpjtu5x7rmyswxjiriviagwnm5kvpd
mysys/Makefile.am sp1f-makefile.am-19700101030959-36zaboyabq4ooqfc2jpion3pic7yhpgb
mysys/charset-def.c sp1f-charsetdef.c-20031006195631-2zdalecmihnwhbxa6t6z67yy7avh5zgj
mysys/charset.c sp1f-charset.c-19700101030959-5zcsgxug5xkrzzvqsq6sm2nyc6has42i
mysys/default.c sp1f-default.c-19700101030959-yapqvlw42xojx5avf7fykpwrwpf3yvtz
mysys/errors.c sp1f-errors.c-19700101030959-xfxez7oem4mhljuj3yhmevl3vohnvgh3
mysys/hash.c sp1f-hash.c-19700101030959-ny373x26eb7225kqbdbb7l23bjwr6pun
mysys/lf_hash.c sp1f-lf_hash.c-20060802110035-r7bxwavbhraiyl2mnakqg4fj6dkhs2bi
mysys/mf_keycache.c sp1f-mf_keycache.c-19700101030959-wtigyxt4n6zscc6ezr56wziqguyc5bds
mysys/mf_pack.c sp1f-mf_pack.c-19700101030959-u7bzjnr4w3idabvny244w5gzcf33butm
mysys/my_copy.c sp1f-my_copy.c-19700101030959-jt6k3uijhnzawhndg4d4ocqbucojvwve
mysys/my_file.c sp1f-my_file.c-20040219173304-xky3nl63gme3w2apldzfpufwgcfnq23x
mysys/my_getopt.c sp1f-my_getopt.c-20020125212008-5ppwsdqmfhny46gxkjxph22zh3phetir
mysys/my_init.c sp1f-my_init.c-19700101030959-ghisog5texwet5e5x7gn35bf4c4d3v3h
mysys/my_largepage.c sp1f-my_largepage.c-20041214192504-4n2x3wmc6b43qlmnfmpmjlxvtke7lrcz
mysys/my_redel.c sp1f-my_redel.c-19700101030959-ki322j4p74mpkdbdsettqo2bh2y2ln2g
mysys/my_seek.c sp1f-my_seek.c-19700101030959-ud6rvcvrfr5z3bkv2uapwxioynoau3pv
mysys/my_static.c sp1f-my_static.c-19700101030959-vmmfiyygpz2awmm7d3pguy4rsuugbhcs
mysys/my_sync.c sp1f-my_sync.c-20031102135456-o4s6sunug6w2ch4bok2p3auq37qgqzox
mysys/my_thr_init.c sp1f-my_thr_init.c-19700101030959-siwg2eavxsdwdc4kkmwxvs42rp6ntkrm
mysys/my_uuid.c sp1f-my_uuid.c-20071009180946-wypun6nqt33pvndknve5p4tq2cesvp3q
mysys/my_wincond.c sp1f-my_wincond.c-19700101030959-qdv7yylq5t4imwxjnjub6dyqcq3wqwow
mysys/my_winthread.c sp1f-my_winthread.c-19700101030959-rvu37ehbcbsxvwhztjz3v3hjruvag5in
mysys/stacktrace.c sp1f-stacktrace.c-20010513221240-wwmyzt4dneecpsyuor3g7w3zacc6u4mq
mysys/thr_lock.c sp1f-thr_lock.c-19700101030959-igvxgo25qd7i2moc4pgo5eoth3xp34mh
mysys/thr_mutex.c sp1f-thr_mutex.c-19700101030959-ukbiyuwnq6i24hphqxuabckaeqaffe4p
mysys/tree.c sp1f-tree.c-19700101030959-febyd36tcwqmhiyrrcuaqate66xykgg3
mysys/typelib.c sp1f-typelib.c-19700101030959-yks6u7xso4ru4dpd6v7uq7ynmxg6wsgt
netware/libmysqlmain.c sp1f-libmysqlmain.c-20030131234059-qbfglreqvjkt5jiy4pjfxw4hkeyargq2
plugin/daemon_example/Makefile.am sp1f-makefile.am-20061111012154-qsjmuxl7tumesqnmmnvrlpe523w362jw
plugin/fulltext/plugin_example.c sp1f-plugin_example.c-20051228120521-okmw6cytrbhx3kpxsuswy6v7agqdiaik
regex/CMakeLists.txt sp1f-cmakelists.txt-20060831175237-6kmn7fqqj7jzzviead26v47chae5xab3
regex/engine.c sp1f-engine.c-19700101030959-ot7oocwwykoohvrrlnq6jcjy5gqp4hbe
regex/engine.ih sp1f-engine.ih-19700101030959-kktic6x6t5cuk5tpi7skcvt4cxs2im2m
scripts/fill_help_tables.sql sp1f-fill_help_tables.sql-20050413141357-uj2k2dquxn5t7vqrggtap5xrxnotzrts
scripts/make_win_bin_dist sp1f-make_win_bin_dist-20060901123056-xnusgszvkfrrcxkqidb7zszax2ezpyto
scripts/mysql_secure_installation.pl.in sp1f-mysql_secure_install-20071228215050-nnco3kgp33fxs7ja6zdy6xh56zszi2cc
scripts/mysql_secure_installation.sh sp1f-mysql_secure_install-20020609041239-b4zztmtqycjs24aubuhwun6np5wuuesy
scripts/mysql_system_tables.sql sp1f-mysql_system_tables.-20070226104923-4n5a67fuifobcyhhicfbacpsv5npohgv
scripts/mysql_system_tables_fix.sql sp1f-mysql_fix_privilege_-20030604152848-cz6lnrig5srcrvkt7d5m35bk3wsz4bdc
scripts/mysqlbug.sh sp1f-mysqlbug.sh-19700101030959-j7qi4cykvv7fsu2jsvoqipk2abu6gxq5
scripts/mysqld_multi.sh sp1f-mysql_multi_mysqld-20001207000014-ssrlcxbkjfvu6rzlxve43apfuc7dawcj
server-tools/instance-manager/instance_map.cc sp1f-instance_map.cc-20041023073148-3m4k4sa6saci6vpfwohgcv6ab3kiocum
server-tools/instance-manager/listener.cc sp1f-listener.cc-20030819155518-tnbkvf6bxiknuulx7xoa525vfmgo3aw7
server-tools/instance-manager/options.cc sp1f-options.cc-20030816174400-modlrjxbvzx6sqjnjyy5q57arvlfhziv
server-tools/instance-manager/user_map.cc sp1f-user_map.cc-20041023073155-hsq3vemyam75rv6ejrax7idczyxqkybc
sql-bench/bench-init.pl.sh sp1f-benchinit.pl.sh-19700101030959-y33ba2ynqtvduvv3xqcvc5edybs4aqqx
sql-bench/server-cfg.sh sp1f-servercfg.sh-19700101030959-nxao3bnxftxkyby3ucmddzuxl3vgejd3
sql-bench/test-ATIS.sh sp1f-testatis.sh-19700101030959-wlujbomwavyscmpdl7cpxrubzhfbe2s2
sql-bench/test-alter-table.sh sp1f-testaltertable.sh-19700101030959-nqya5yglxujxq4nq3cc5jtygmdptrbf3
sql-bench/test-big-tables.sh sp1f-testbigtables.sh-19700101030959-27ima5xzvezqmauxlrutitl2bt6t47po
sql-bench/test-connect.sh sp1f-testconnect.sh-19700101030959-hqm2sljcxv5567eboz3sryp77qsxovjh
sql-bench/test-create.sh sp1f-testcreate.sh-19700101030959-gkaml7qwon4qyltk33bfk4e6ahlezd7g
sql-bench/test-select.sh sp1f-testselect.sh-19700101030959-wtijbxbtdkj2ikmgrowedm2rjh5zewiw
sql-bench/test-transactions.sh sp1f-testtransactions.sh-20011122155518-i337wznzhz7yvjdnp7ytg3r5pyk7pqml
sql-bench/test-wisconsin.sh sp1f-testwisconsin.sh-19700101030959-irptrkrqm3jhd6iudg2svakvtwzeo72l
sql-common/client.c sp1f-client.c-20030502160736-oraaciqy6jkfwygs6tqfoaxgjbi65yo7
sql-common/my_time.c sp1f-my_time.c-20040624160839-c5ljhxyjpi5czybdscnldwjexwdyx3o6
sql/CMakeLists.txt sp1f-cmakelists.txt-20060831175237-esoeu5kpdtwjvehkghwy6fzbleniq2wy
sql/Makefile.am sp1f-makefile.am-19700101030959-xsjdiakci3nqcdd4xl4yomwdl5eo2f3q
sql/event_data_objects.cc sp1f-event_timed.cc-20051205104456-ckd2gzuwhr4u5umqbncmt43nvv45pxmf
sql/event_db_repository.cc sp1f-event_db_repository.-20060627064838-k6rpjg72omnihtxhbubu6ht7wjvxggb7
sql/event_scheduler.cc* sp1f-event_scheduler.cc-20060522184601-btrj3nnnhmns6ciogy2l5aueg53vywzf
sql/events.cc sp1f-event.cc-20051202122200-as66hughd4bhrhu2uqbb6mpogou2yihk
sql/field.cc sp1f-field.cc-19700101030959-f4imaofclsea3n4fj4ow5m7havmyxa2r
sql/field.h sp1f-field.h-19700101030959-3n6smzxcwkjl7bikm3wg4hfkjn66uvvp
sql/filesort.cc sp1f-filesort.cc-19700101030959-mfm2vmdgqqru7emm2meeecleb2q3zdso
sql/ha_ndbcluster.cc sp1f-ha_ndbcluster.cc-20040414175836-rvqnoxrkqexyhfu3d62s4t345ip7rez2
sql/ha_ndbcluster_binlog.cc sp1f-ha_ndbcluster_binlog-20060112185048-3hthowbxyrrly3srxavlrufjf5mmgqm6
sql/ha_partition.cc sp1f-ha_partition.cc-20050718113037-eoky4qluumb5dmdyg5z6n2fvdkgutxms
sql/ha_partition.h sp1f-ha_partition.h-20050718113038-4xxwqkuu2xgxqtrwfbc43zgfyfcwzjsq
sql/handler.cc sp1f-handler.cc-19700101030959-ta6zfrlbxzucylciyro3musjsdpocrdh
sql/handler.h sp1f-handler.h-19700101030959-mumq2hpilkpgxuf22ftyv5kbilysnzvn
sql/item.cc sp1f-item.cc-19700101030959-u7hxqopwpfly4kf5ctlyk2dvrq4l3dhn
sql/item.h sp1f-item.h-19700101030959-rrkb43htudd62batmoteashkebcwykpa
sql/item_cmpfunc.cc sp1f-item_cmpfunc.cc-19700101030959-hrk7pi2n6qpwxauufnkizirsoucdcx2e
sql/item_cmpfunc.h sp1f-item_cmpfunc.h-19700101030959-pcvbjplo4e4ng7ibynfhcd6pjyem57gr
sql/item_create.cc sp1f-item_create.cc-19700101030959-zdsezbi5r5xu5syntjdzqs2d2dswsojn
sql/item_func.cc sp1f-item_func.cc-19700101030959-3wmsx76yvc25sroqpfrx2n77kqdxxn3y
sql/item_func.h sp1f-item_func.h-19700101030959-fbjcbwkg66qubbzptqwh5w5evhnpukze
sql/item_geofunc.cc sp1f-item_geofunc.cc-20030530102226-vdbf2bd6tpkrzoy6q2wdibkzd3bkv2io
sql/item_strfunc.cc sp1f-item_strfunc.cc-19700101030959-yl2pwnrngmla3nmlgiuiwrztx3iu4ffl
sql/item_strfunc.h sp1f-item_strfunc.h-19700101030959-x4djohef3q433aqvcrybhjmclafdu4sx
sql/item_subselect.cc sp1f-item_subselect.cc-20020512204640-qep43aqhsfrwkqmrobni6czc3fqj36oo
sql/item_subselect.h sp1f-item_subselect.h-20020512204640-qdg77wil56cxyhtc2bjjdrppxq3wqgh3
sql/item_sum.cc sp1f-item_sum.cc-19700101030959-4woo23bi3am2t2zvsddqbpxk7xbttdkm
sql/item_sum.h sp1f-item_sum.h-19700101030959-ecgohlekwm355wxl5fv4zzq3alalbwyl
sql/item_timefunc.cc sp1f-item_timefunc.cc-19700101030959-rvvlgmw5b4ewpuuxuntrkiqimyrr5sw2
sql/item_timefunc.h sp1f-item_timefunc.h-19700101030959-o34ypz6ggolzqmhgsjnqh6inkvgugi46
sql/item_xmlfunc.cc sp1f-item_xmlfunc.cc-20051221130500-wo5dgojvjjm6mmra7fay3ri7ud5ow3yl
sql/lock.cc sp1f-lock.cc-19700101030959-lzrt5tyolna3dcihuenjh7nlicr7llt7
sql/log.cc sp1f-log.cc-19700101030959-r3hdfovek4kl6nd64ovoaknmirota6bq
sql/log.h sp1f-log.h-20051222053446-ggv6hdi5fnxggnjemezvv7n2bcbkx45e
sql/log_event.cc sp1f-log_event.cc-19700101030959-msmqlflsngxosswid2hpzxly5vfqdddc
sql/log_event.h sp1f-log_event.h-19700101030959-clq6ett55tcqbpys2i4cpfrdccq7j4om
sql/log_event_old.cc sp1f-log_event_old.cc-20070412135046-uu5xq4cnpwslzif6fbmj3g65x4vdkzxu
sql/my_decimal.cc sp1f-my_decimal.cc-20050208224937-tgb63ttruwc4ihp23jkciv5vfpwwm5bv
sql/my_decimal.h sp1f-my_decimal.h-20050208224937-z6shzy3pf5uyso4mvtc2f6pckjzfeg5f
sql/mysql_priv.h sp1f-mysql_priv.h-19700101030959-4fl65tqpop5zfgxaxkqotu2fa2ree5ci
sql/mysqld.cc sp1f-mysqld.cc-19700101030959-zpswdvekpvixxzxf7gdtofzel7nywtfj
sql/net_serv.cc sp1f-net_serv.cc-19700101030959-dp4s27g5nk64sph4g6g54dghekqozzmy
sql/opt_range.cc sp1f-opt_range.cc-19700101030959-afe3wtevb7zwrg4xyibt35uamov5r7ds
sql/opt_sum.cc sp1f-opt_sum.cc-19700101030959-ygmsylwaxwx3wf77i2nv2hdupycvexro
sql/opt_table_elimination.cc opt_table_eliminatio-20090625095316-7ka9w3zr7n5114iv-1
sql/partition_info.cc sp1f-partition_info.cpp-20060216163637-eco35bnz46tcywduzmpjofzudmzlgyog
sql/records.cc sp1f-records.cc-19700101030959-xg6elqzdqhvrmobazxrjajmiyqxf7lx7
sql/repl_failsafe.cc sp1f-repl_failsafe.cc-20011010025623-k7zhoyc3smc7tbliyp7vaf3f4idq22so
sql/rpl_filter.cc sp1f-table_filter.cc-20050308201116-4anzb26smj76r56ihkpxzbtnzlzatr2k
sql/rpl_injector.cc sp1f-rpl_injector.cc-20060112185049-nmh4krszzy7lfqbhbaznaczvq36kmykc
sql/rpl_record.cc sp1f-rpl_record.cc-20070413125523-wuthuk5jk7uxikuioz6esll6xakdngs4
sql/rpl_record.h sp1f-rpl_record.h-20070413125523-xvn32ub2xcvqged7y6ayilghjetpvkvg
sql/rpl_rli.cc sp1f-rpl_rli.cc-20061031112305-25t7pxjrjm24qo5h65c7rml66xu3uw4p
sql/rpl_rli.h sp1f-rpl_rli.h-20051222053448-bte4b72jikihtk3zbn5jyj2vbiawtwgc
sql/rpl_tblmap.cc sp1f-rpl_tblmap.cc-20051222053448-sgowtys7fb4tdpjvmzwktjxmb5krm3cc
sql/rpl_utility.h sp1f-rpl_utility.h-20060503130029-u44nzzcbdenh2gegnnyzro26kbk5quw7
sql/scheduler.cc sp1f-scheduler.cc-20070223111352-qwtxm54kmlec25urietuqte6v76hjp4b
sql/set_var.cc sp1f-set_var.cc-20020723153119-nwbpg2pwpz55pfw7yfzaxt7hsszzy7y3
sql/set_var.h sp1f-set_var.h-20020723153119-2yomygq3s4xjbqvuue3cdlpbjtj3kwmk
sql/share/errmsg.txt sp1f-errmsg.txt-20041213212820-do5w642w224ja7ctyqhyl6iihdmpkzv5
sql/slave.cc sp1f-slave.cc-19700101030959-a636aj3mjxgu7fnznrg5kt77p3u2bvhh
sql/slave.h sp1f-slave.h-20001111215010-k3xq56z2cul6s766om7zrdsnlwdc23y5
sql/sp.cc sp1f-sp.cc-20021212121421-6xwuvxq5bku2b4yv655kp2e5gsvautd5
sql/sp.h sp1f-sp.h-20021212121421-eh5y7kpcb3hkgy4wjuh3q3non36itye5
sql/sp_cache.cc sp1f-sp_cache.cc-20030703140129-ugsn54s2jpxh7hdznsgxn6ubwvbtj5hw
sql/sp_head.cc sp1f-sp_head.cc-20021208185920-jtgc5wvyqdnu2gvcdus3gazrfhxbofxd
sql/sp_head.h sp1f-sp_head.h-20021208185920-yrolg3rzamehfoejkbiai4q7njg5w6cd
sql/sp_pcontext.h sp1f-sp_pcontext.h-20021208185920-yc2b6m7m5tjs6mjmppqrnuvgy6pwra2z
sql/sp_rcontext.cc sp1f-sp_rcontext.cc-20030916122605-vg62h2qkjnbj54b4zlijgqswudijiyaf
sql/sql_acl.cc sp1f-sql_acl.cc-19700101030959-c4hku3uqxzujthqnndeprbrhamqy6a4i
sql/sql_acl.h sp1f-sql_acl.h-19700101030959-byf4bn7yfbxu6wa6z76kqcuspjl67msj
sql/sql_base.cc sp1f-sql_base.cc-19700101030959-w7tul2gb2n4jzayjwlslj3ybmf3uhk6a
sql/sql_binlog.cc sp1f-sql_binlog.cc-20051222053449-o6vkdfrjkuledkjdwz2jx3zykz4izfsz
sql/sql_builtin.cc.in sp1f-sql_builtin.cc.in-20060413204924-2uqxqmqkyuh3wtmodadlo23ag3lchfp6
sql/sql_cache.cc sp1f-sql_cache.cc-19700101030959-74bsqwcnhboovijsogcenqana5inu6wo
sql/sql_cache.h sp1f-sql_cache.h-20011202123401-gegktsz2a3er7fqwpmpoejydzpkeadeo
sql/sql_class.cc sp1f-sql_class.cc-19700101030959-rpotnweaff2pikkozh3butrf7mv3oero
sql/sql_class.h sp1f-sql_class.h-19700101030959-jnqnbrjyqsvgncsibnumsmg3lyi7pa5s
sql/sql_connect.cc sp1f-sql_connect.cc-20070223111352-fhh5znxdfvzxuca7da3uu4olnwgkrm4n
sql/sql_crypt.cc sp1f-sql_crypt.cc-19700101030959-fr47anlofzobv4xjnsn3plpnbguuuzc2
sql/sql_crypt.h sp1f-sql_crypt.h-19700101030959-5jrthewwwxpychvg5tyhjmv57x2tuzxx
sql/sql_db.cc sp1f-sql_db.cc-19700101030959-hyw6zjuisjyda5cj5746a2zzuzz5yibr
sql/sql_delete.cc sp1f-sql_delete.cc-19700101030959-ch2a6r6ushvc2vfwxt7ehcjuplelwthr
sql/sql_handler.cc sp1f-sql_handler.cc-20010406221833-l4tsiortoyipmoyajcoz2tcdppvyeltl
sql/sql_insert.cc sp1f-sql_insert.cc-19700101030959-xgwqe5svnimxudzdcuitauljzz2zjk5g
sql/sql_lex.cc sp1f-sql_lex.cc-19700101030959-4pizwlu5rqkti27gcwsvxkawq6bc2kph
sql/sql_lex.h sp1f-sql_lex.h-19700101030959-sgldb2sooc7twtw5q7pgjx7qzqiaa3sn
sql/sql_load.cc sp1f-sql_load.cc-19700101030959-hoqlay5we4yslrw23xqedulkejw6a3o5
sql/sql_locale.cc sp1f-sql_locale.cc-20060704124016-q5yfdbfinszhklmgyjf4kmnepgd4biai
sql/sql_parse.cc sp1f-sql_parse.cc-19700101030959-ehcre3rwhv5l3mlxqhaxg36ujenxnrcd
sql/sql_partition.cc sp1f-sql_partition.cc-20050718113038-57h5bzswps6cel2y7k7qideue3ghbg3u
sql/sql_partition.h sp1f-sql_partition.h-20060216163825-un6ect3xl76xucer2pubythjlegvmy43
sql/sql_plugin.cc sp1f-sql_plugin.cc-20051105112032-hrm64p6xfjq33ud6zy3uivpo7azm75a2
sql/sql_plugin.h sp1f-sql_plugin.h-20051105112032-zbh53e3gtkwr4zcntlb42fx3w6cr7ilm
sql/sql_prepare.cc sp1f-sql_prepare.cc-20020612210720-gtqjjiu7vpmfxb5xct2qke7urmqcabli
sql/sql_profile.cc sp1f-sql_profile.cc-20070222150305-yv5grcusm3k2b6rrcx3kkqggtm33i3z4
sql/sql_profile.h sp1f-sql_profile.h-20070222150305-tbdpljkisvi3e657yunhkvsnjw6ifjru
sql/sql_rename.cc sp1f-sql_rename.cc-20000821000147-ltbepgfv52umnrkaxzycedl5p2tlr3fp
sql/sql_repl.cc sp1f-sql_repl.cc-20001002032713-xqbns5ofqsaebhgi2ypcfn7nhz7nh5rp
sql/sql_select.cc sp1f-sql_select.cc-19700101030959-egb7whpkh76zzvikycs5nsnuviu4fdlb
sql/sql_select.h sp1f-sql_select.h-19700101030959-oqegfxr76xlgmrzd6qlevonoibfnwzoz
sql/sql_servers.cc sp1f-sql_servers.cc-20061202004728-casoctfc22ftno7vvvcsjbokkttngpwa
sql/sql_show.cc sp1f-sql_show.cc-19700101030959-umlljfnpplg452h7reeyqr4xnbmlkvfj
sql/sql_table.cc sp1f-sql_table.cc-19700101030959-tzdkvgigezpuaxnldqh3fx2h7h2ggslu
sql/sql_tablespace.cc sp1f-sql_tablespace.cc-20060111103519-oyr2sz233kphdr5xpru4mqwtac2mt4uf
sql/sql_test.cc sp1f-sql_test.cc-19700101030959-7fpt436b3qzk75qpy7rqpho7nkesvwuz
sql/sql_trigger.cc sp1f-sql_trigger.cc-20040907122911-35k3wamrp6g7qsupxe7hisftpobcwin5
sql/sql_udf.cc sp1f-sql_udf.cc-19700101030959-tk7ysmv4dpwkfhtdovfbqe5i6uvq67ft
sql/sql_union.cc sp1f-sql_unions.cc-20010613103653-ljpdcuczpligpiljxsifua5riwhxyomz
sql/sql_update.cc sp1f-sql_update.cc-19700101030959-edlgskfuer2ylczbw2znrr5gzfefiyw7
sql/sql_view.cc sp1f-sql_view.cc-20040715221517-nw4p4mja6nzzlvwwhzfgfqb4umxqobe4
sql/sql_yacc.yy sp1f-sql_yacc.yy-19700101030959-wvn4qyy2drpmge7kaq3dysprbhlrv27j
sql/structs.h sp1f-structs.h-19700101030959-dqulhwijezc2pwv2x4g32qdggnybj2nc
sql/table.cc sp1f-table.cc-19700101030959-nsxtem2adyqzwe6nz4cgrpcmts3o54v7
sql/table.h sp1f-table.h-19700101030959-dv72bajftxj5fbdjuajquappanuv2ija
sql/time.cc sp1f-time.cc-19700101030959-vhvl5k35iuojsrxbsg62xysptyi4pc64
sql/udf_example.c sp1f-udf_example.cc-19700101030959-ze6kwdimrvfxkxofoegzwby3qce75brj
sql/udf_example.def sp1f-udf_example.def-20060922124240-fxmt4egcapnpdlbg5u4xlrq4bppsjcnw
sql/unireg.cc sp1f-unireg.cc-19700101030959-6a4wymwak6cmvk25gch56ctjvadrhu3v
storage/archive/CMakeLists.txt* sp1f-cmakelists.txt-20060324221904-eeglvbzhfjgzaeenavdxsdvjzfjatvre
storage/archive/azio.c sp1f-azio.c-20051223034959-gjinfrr75cpes7iyw4uasbps2bkn65rc
storage/archive/azlib.h sp1f-azlib.h-20051223035000-23fs3st2s6whj3xl4hjzfxuzsjyzgnf6
storage/archive/ha_archive.cc sp1f-ha_archive.cc-20040521001938-uy57z43drkjeirpjafdzdpvfxruqho4q
storage/blackhole/ha_blackhole.cc sp1f-ha_blackhole.cc-20050323001036-ikllt6ts2equ6w4aru2q3rhdbrn64twz
storage/csv/ha_tina.cc sp1f-ha_tina.cc-20040813035429-5pwcme2ehkkuei6gu6ueo4tfldeeyw7l
storage/example/Makefile.am sp1f-makefile.am-20051221181830-qhpudfvq7qccif4djsijfcwl2l4igz4a
storage/example/ha_example.cc sp1f-ha_example.cc-20040331231732-d55r4dr2e7cf5dutte3f74z6h6yxdywb
storage/federated/CMakeLists.txt* sp1f-cmakelists.txt-20060819155453-lvbabhbwyj6n6qzjroyxasefis7wczs5
storage/federatedx/Makefile.am makefile.am-20091029224633-m824ql737a2j6q5a-9
storage/federatedx/ha_federatedx.cc ha_federatedx.cc-20091029224633-m824ql737a2j6q5a-6
storage/heap/hp_write.c sp1f-hp_write.c-19700101030959-fyft5higet4kliqpr6vywernwiypjfzr
storage/ibmdb2i/db2i_constraints.cc db2i_constraints.cc-20090215021022-15ov21fvravqaicb-10
storage/ibmdb2i/ha_ibmdb2i.cc ha_ibmdb2i.cc-20090215021022-15ov21fvravqaicb-30
storage/innobase/btr/btr0btr.c sp1f-btr0btr.c-20010217121858-mwrlor7vioqhw732pmqhbjjhpajttjyp
storage/innobase/data/data0type.c sp1f-data0type.c-20010217121859-zfpknsetjkfoyjnhomi2z2ek3jda2wji
storage/innobase/dict/dict0dict.c sp1f-dict0dict.c-20010217121859-dhmp6wllhccos4vvwyuqz5dmuctjxgmm
storage/innobase/fil/fil0fil.c sp1f-fil0fil.c-20010217121900-knsuuyumzpgipz5yczc5amt2guvsyowr
storage/innobase/handler/ha_innodb.cc sp1f-ha_innobase.cc-20001205235417-rlet3ei56gdrss673dssnrqgug67lwno
storage/innobase/handler/ha_innodb.h sp1f-ha_innobase.h-20001205235417-hami5r4niirc73bybnkeudrtmaqghhlk
storage/innobase/include/fil0fil.h sp1f-fil0fil.h-20010217121902-3sor3hp4np3u2e5w6isu7cbb47d7tvda
storage/innobase/include/ha_prototypes.h sp1f-ha_prototypes.h-20060601063356-7fsbop6gnnzwnq7pndelgmztdm2uygg2
storage/innobase/include/lock0lock.h sp1f-lock0lock.h-20010217121903-hypevxwxs43iajin636mkmjrunvza4o6
storage/innobase/include/mach0data.h sp1f-mach0data.h-20010217121903-66grfy3b6skgudvdli4owdshxnvcjo2o
storage/innobase/include/mach0data.ic sp1f-mach0data.ic-20010217121903-n2kjcumh2n3j5ydrwt5cllgns7c4kpyw
storage/innobase/include/mtr0mtr.h sp1f-mtr0mtr.h-20010217121904-yucrtgssz6acxwcs6rnsnopzvcfk2pxu
storage/innobase/include/os0file.h sp1f-os0file.h-20010217121904-5jy6i4kwdivoji2tzlssz6tj3hmntl4c
storage/innobase/include/srv0srv.h sp1f-srv0srv.h-20010217121907-oei3on7bcl3tqesmlbtyoybnevxxx2j5
storage/innobase/include/trx0trx.h sp1f-trx0trx.h-20010217121908-in6xvc2qnxbrz74ytwthpviclfo53zls
storage/innobase/lock/lock0lock.c sp1f-lock0lock.c-20010217121909-slawfq55wuitbo2ujz7h5doebhwxfmhe
storage/innobase/log/log0log.c sp1f-log0log.c-20010217121910-j2njcv6gzkte3dgq55bqukdb4z2bzuyt
storage/innobase/log/log0recv.c sp1f-log0recv.c-20010217121910-w5c7st35rpw6d6cesgisnx3vbdbsxxkt
storage/innobase/os/os0file.c sp1f-os0file.c-20010217121911-3wbmvsjzaw7z42arqr46leh46lbamvcp
storage/innobase/os/os0proc.c sp1f-os0proc.c-20010217121911-iociah67deec5bczgjf6gr33stj75df2
storage/innobase/row/row0mysql.c sp1f-row0mysql.c-20010217121914-f6pdtzldiainoq3xyil2uwziayos4irm
storage/innobase/row/row0sel.c sp1f-row0sel.c-20010217121914-c6o7vqncdgzrorm4pko5tpdlfeyujhvq
storage/innobase/srv/srv0srv.c sp1f-srv0srv.c-20010217121915-oxgww23dgwrrrgscuox5pkpnefaged77
storage/innobase/srv/srv0start.c sp1f-srv0start.c-20010217121915-yvimtjmlcqac56y3j7qw7wywb7hayn25
storage/innobase/trx/trx0trx.c sp1f-trx0trx.c-20010217121916-b5g7pmqxezfo2mktfpeiapibgdhogrv2
storage/innobase/ut/ut0ut.c sp1f-ut0ut.c-20010217121917-7nqugyaqeu7yecvgv73lm354njkx74zq
storage/innodb_plugin/CMakeLists.txt cmakelists.txt-20090527093836-7v4wb2xxka10h4d0-2
storage/innodb_plugin/ChangeLog changelog-20090527093836-7v4wb2xxka10h4d0-4
storage/innodb_plugin/Makefile.am makefile.am-20090527093836-7v4wb2xxka10h4d0-5
storage/innodb_plugin/btr/btr0btr.c btr0btr.c-20090527093836-7v4wb2xxka10h4d0-46
storage/innodb_plugin/btr/btr0sea.c btr0sea.c-20090527093836-7v4wb2xxka10h4d0-49
storage/innodb_plugin/buf/buf0buf.c buf0buf.c-20090527093836-7v4wb2xxka10h4d0-51
storage/innodb_plugin/buf/buf0flu.c buf0flu.c-20090527093836-7v4wb2xxka10h4d0-52
storage/innodb_plugin/buf/buf0lru.c buf0lru.c-20090527093836-7v4wb2xxka10h4d0-53
storage/innodb_plugin/buf/buf0rea.c buf0rea.c-20090527093836-7v4wb2xxka10h4d0-54
storage/innodb_plugin/data/data0type.c data0type.c-20090527093836-7v4wb2xxka10h4d0-56
storage/innodb_plugin/dict/dict0crea.c dict0crea.c-20090527093836-7v4wb2xxka10h4d0-58
storage/innodb_plugin/dict/dict0dict.c dict0dict.c-20090527093836-7v4wb2xxka10h4d0-59
storage/innodb_plugin/fil/fil0fil.c fil0fil.c-20090527093836-7v4wb2xxka10h4d0-65
storage/innodb_plugin/fsp/fsp0fsp.c fsp0fsp.c-20090527093836-7v4wb2xxka10h4d0-66
storage/innodb_plugin/handler/ha_innodb.cc ha_innodb.cc-20090527093836-7v4wb2xxka10h4d0-72
storage/innodb_plugin/handler/ha_innodb.h ha_innodb.h-20090527093836-7v4wb2xxka10h4d0-73
storage/innodb_plugin/handler/handler0alter.cc handler0alter.cc-20090527093836-7v4wb2xxka10h4d0-74
storage/innodb_plugin/ibuf/ibuf0ibuf.c ibuf0ibuf.c-20090527093836-7v4wb2xxka10h4d0-80
storage/innodb_plugin/include/btr0sea.h btr0sea.h-20090527093836-7v4wb2xxka10h4d0-87
storage/innodb_plugin/include/buf0buf.h buf0buf.h-20090527093836-7v4wb2xxka10h4d0-92
storage/innodb_plugin/include/buf0buf.ic buf0buf.ic-20090527093836-7v4wb2xxka10h4d0-93
storage/innodb_plugin/include/buf0lru.h buf0lru.h-20090527093836-7v4wb2xxka10h4d0-96
storage/innodb_plugin/include/buf0rea.h buf0rea.h-20090527093836-7v4wb2xxka10h4d0-98
storage/innodb_plugin/include/buf0types.h buf0types.h-20090527093836-7v4wb2xxka10h4d0-99
storage/innodb_plugin/include/db0err.h db0err.h-20090527093836-7v4wb2xxka10h4d0-105
storage/innodb_plugin/include/dict0crea.h dict0crea.h-20090527093836-7v4wb2xxka10h4d0-108
storage/innodb_plugin/include/dict0dict.h dict0dict.h-20090527093836-7v4wb2xxka10h4d0-110
storage/innodb_plugin/include/dict0mem.h dict0mem.h-20090527093836-7v4wb2xxka10h4d0-114
storage/innodb_plugin/include/fil0fil.h fil0fil.h-20090527093836-7v4wb2xxka10h4d0-123
storage/innodb_plugin/include/fsp0fsp.h fsp0fsp.h-20090527093836-7v4wb2xxka10h4d0-124
storage/innodb_plugin/include/ha_prototypes.h ha_prototypes.h-20090527093836-7v4wb2xxka10h4d0-134
storage/innodb_plugin/include/ibuf0ibuf.h ibuf0ibuf.h-20090527093836-7v4wb2xxka10h4d0-138
storage/innodb_plugin/include/lock0lock.h lock0lock.h-20090527093836-7v4wb2xxka10h4d0-142
storage/innodb_plugin/include/log0log.h log0log.h-20090527093836-7v4wb2xxka10h4d0-147
storage/innodb_plugin/include/log0log.ic log0log.ic-20090527093836-7v4wb2xxka10h4d0-148
storage/innodb_plugin/include/log0recv.h log0recv.h-20090527093836-7v4wb2xxka10h4d0-149
storage/innodb_plugin/include/mem0mem.h mem0mem.h-20090527093836-7v4wb2xxka10h4d0-155
storage/innodb_plugin/include/mem0pool.h mem0pool.h-20090527093836-7v4wb2xxka10h4d0-157
storage/innodb_plugin/include/mtr0mtr.h mtr0mtr.h-20090527093836-7v4wb2xxka10h4d0-161
storage/innodb_plugin/include/os0file.h os0file.h-20090527093836-7v4wb2xxka10h4d0-165
storage/innodb_plugin/include/os0sync.h os0sync.h-20090527093836-7v4wb2xxka10h4d0-168
storage/innodb_plugin/include/page0page.h page0page.h-20090527093836-7v4wb2xxka10h4d0-174
storage/innodb_plugin/include/page0page.ic page0page.ic-20090527093836-7v4wb2xxka10h4d0-175
storage/innodb_plugin/include/page0zip.h page0zip.h-20090527093836-7v4wb2xxka10h4d0-177
storage/innodb_plugin/include/pars0pars.h pars0pars.h-20090527093836-7v4wb2xxka10h4d0-182
storage/innodb_plugin/include/rem0cmp.h rem0cmp.h-20090527093836-7v4wb2xxka10h4d0-193
storage/innodb_plugin/include/rem0rec.ic rem0rec.ic-20090527093836-7v4wb2xxka10h4d0-196
storage/innodb_plugin/include/row0ins.h row0ins.h-20090527093836-7v4wb2xxka10h4d0-200
storage/innodb_plugin/include/row0mysql.h row0mysql.h-20090527093836-7v4wb2xxka10h4d0-203
storage/innodb_plugin/include/srv0srv.h srv0srv.h-20090527093836-7v4wb2xxka10h4d0-223
storage/innodb_plugin/include/thr0loc.h thr0loc.h-20090527093836-7v4wb2xxka10h4d0-233
storage/innodb_plugin/include/trx0i_s.h trx0i_s.h-20090527093836-7v4wb2xxka10h4d0-235
storage/innodb_plugin/include/trx0purge.h trx0purge.h-20090527093836-7v4wb2xxka10h4d0-236
storage/innodb_plugin/include/trx0rec.h trx0rec.h-20090527093836-7v4wb2xxka10h4d0-238
storage/innodb_plugin/include/trx0rec.ic trx0rec.ic-20090527093836-7v4wb2xxka10h4d0-239
storage/innodb_plugin/include/trx0roll.h trx0roll.h-20090527093836-7v4wb2xxka10h4d0-240
storage/innodb_plugin/include/trx0rseg.h trx0rseg.h-20090527093836-7v4wb2xxka10h4d0-242
storage/innodb_plugin/include/trx0sys.h trx0sys.h-20090527093836-7v4wb2xxka10h4d0-244
storage/innodb_plugin/include/trx0sys.ic trx0sys.ic-20090527093836-7v4wb2xxka10h4d0-245
storage/innodb_plugin/include/trx0trx.h trx0trx.h-20090527093836-7v4wb2xxka10h4d0-246
storage/innodb_plugin/include/trx0undo.h trx0undo.h-20090527093836-7v4wb2xxka10h4d0-249
storage/innodb_plugin/include/univ.i univ.i-20090527093836-7v4wb2xxka10h4d0-252
storage/innodb_plugin/include/usr0sess.h usr0sess.h-20090527093836-7v4wb2xxka10h4d0-253
storage/innodb_plugin/include/ut0auxconf.h ut0auxconf.h-20090527093836-7v4wb2xxka10h4d0-256
storage/innodb_plugin/include/ut0byte.h ut0byte.h-20090527093836-7v4wb2xxka10h4d0-257
storage/innodb_plugin/include/ut0byte.ic ut0byte.ic-20090527093836-7v4wb2xxka10h4d0-258
storage/innodb_plugin/include/ut0ut.h ut0ut.h-20090527093836-7v4wb2xxka10h4d0-268
storage/innodb_plugin/lock/lock0lock.c lock0lock.c-20090527093836-7v4wb2xxka10h4d0-274
storage/innodb_plugin/log/log0log.c log0log.c-20090527093836-7v4wb2xxka10h4d0-275
storage/innodb_plugin/log/log0recv.c log0recv.c-20090527093836-7v4wb2xxka10h4d0-276
storage/innodb_plugin/mem/mem0dbg.c mem0dbg.c-20090527093836-7v4wb2xxka10h4d0-278
storage/innodb_plugin/mem/mem0mem.c mem0mem.c-20090527093836-7v4wb2xxka10h4d0-279
storage/innodb_plugin/mem/mem0pool.c mem0pool.c-20090527093836-7v4wb2xxka10h4d0-280
storage/innodb_plugin/mtr/mtr0mtr.c mtr0mtr.c-20090527093836-7v4wb2xxka10h4d0-282
storage/innodb_plugin/mysql-test/innodb-analyze.test innodbanalyze.test-20090527093836-7v4wb2xxka10h4d0-286
storage/innodb_plugin/mysql-test/innodb-zip.result innodbzip.result-20090527093836-7v4wb2xxka10h4d0-307
storage/innodb_plugin/mysql-test/innodb-zip.test innodbzip.test-20090527093836-7v4wb2xxka10h4d0-308
storage/innodb_plugin/mysql-test/innodb_bug34300.test innodb_bug34300.test-20090527093836-7v4wb2xxka10h4d0-314
storage/innodb_plugin/mysql-test/innodb_bug36169.test innodb_bug36169.test-20090527093836-7v4wb2xxka10h4d0-318
storage/innodb_plugin/mysql-test/innodb_bug36172.test innodb_bug36172.test-20090527093836-7v4wb2xxka10h4d0-320
storage/innodb_plugin/mysql-test/innodb_file_format.result innodb_file_format.r-20090730103340-a7df5hza0ep3xo6j-7
storage/innodb_plugin/mysql-test/innodb_file_format.test innodb_file_format.t-20090730103340-a7df5hza0ep3xo6j-8
storage/innodb_plugin/os/os0file.c os0file.c-20090527093836-7v4wb2xxka10h4d0-338
storage/innodb_plugin/os/os0proc.c os0proc.c-20090527093836-7v4wb2xxka10h4d0-339
storage/innodb_plugin/os/os0sync.c os0sync.c-20090527093836-7v4wb2xxka10h4d0-340
storage/innodb_plugin/os/os0thread.c os0thread.c-20090527093836-7v4wb2xxka10h4d0-341
storage/innodb_plugin/page/page0cur.c page0cur.c-20090527093836-7v4wb2xxka10h4d0-342
storage/innodb_plugin/page/page0page.c page0page.c-20090527093836-7v4wb2xxka10h4d0-343
storage/innodb_plugin/page/page0zip.c page0zip.c-20090527093836-7v4wb2xxka10h4d0-344
storage/innodb_plugin/pars/lexyy.c lexyy.c-20090527093836-7v4wb2xxka10h4d0-345
storage/innodb_plugin/pars/pars0lex.l pars0lex.l-20090527093836-7v4wb2xxka10h4d0-350
storage/innodb_plugin/plug.in.disabled plug.in-20090527093836-7v4wb2xxka10h4d0-32
storage/innodb_plugin/que/que0que.c que0que.c-20090527093836-7v4wb2xxka10h4d0-354
storage/innodb_plugin/rem/rem0cmp.c rem0cmp.c-20090527093836-7v4wb2xxka10h4d0-356
storage/innodb_plugin/row/row0ins.c row0ins.c-20090527093836-7v4wb2xxka10h4d0-359
storage/innodb_plugin/row/row0merge.c row0merge.c-20090527093836-7v4wb2xxka10h4d0-360
storage/innodb_plugin/row/row0mysql.c row0mysql.c-20090527093836-7v4wb2xxka10h4d0-361
storage/innodb_plugin/srv/srv0srv.c srv0srv.c-20090527093836-7v4wb2xxka10h4d0-373
storage/innodb_plugin/srv/srv0start.c srv0start.c-20090527093836-7v4wb2xxka10h4d0-374
storage/innodb_plugin/sync/sync0arr.c sync0arr.c-20090527093836-7v4wb2xxka10h4d0-375
storage/innodb_plugin/sync/sync0rw.c sync0rw.c-20090527093836-7v4wb2xxka10h4d0-376
storage/innodb_plugin/sync/sync0sync.c sync0sync.c-20090527093836-7v4wb2xxka10h4d0-377
storage/innodb_plugin/thr/thr0loc.c thr0loc.c-20090527093836-7v4wb2xxka10h4d0-378
storage/innodb_plugin/trx/trx0i_s.c trx0i_s.c-20090527093836-7v4wb2xxka10h4d0-379
storage/innodb_plugin/trx/trx0purge.c trx0purge.c-20090527093836-7v4wb2xxka10h4d0-380
storage/innodb_plugin/trx/trx0rec.c trx0rec.c-20090527093836-7v4wb2xxka10h4d0-381
storage/innodb_plugin/trx/trx0roll.c trx0roll.c-20090527093836-7v4wb2xxka10h4d0-382
storage/innodb_plugin/trx/trx0rseg.c trx0rseg.c-20090527093836-7v4wb2xxka10h4d0-383
storage/innodb_plugin/trx/trx0sys.c trx0sys.c-20090527093836-7v4wb2xxka10h4d0-384
storage/innodb_plugin/trx/trx0trx.c trx0trx.c-20090527093836-7v4wb2xxka10h4d0-385
storage/innodb_plugin/trx/trx0undo.c trx0undo.c-20090527093836-7v4wb2xxka10h4d0-386
storage/innodb_plugin/usr/usr0sess.c usr0sess.c-20090527093836-7v4wb2xxka10h4d0-387
storage/innodb_plugin/ut/ut0auxconf_atomic_pthread_t_solaris.c ut0auxconf_atomic_pt-20090527093836-7v4wb2xxka10h4d0-389
storage/innodb_plugin/ut/ut0mem.c ut0mem.c-20090527093836-7v4wb2xxka10h4d0-395
storage/innodb_plugin/ut/ut0ut.c ut0ut.c-20090527093836-7v4wb2xxka10h4d0-397
storage/maria/ft_maria.c sp1f-ft_maria.c-20060411134407-c7zixlxmx36vm37l35blmgkjeq5e2zgv
storage/maria/ha_maria.cc sp1f-ha_maria.cc-20060411134405-dmngb4v5x5fxlxhff527ud3etiutxuxk
storage/maria/lockman.c sp1f-lockman.c-20061011145721-yvoyfytlt3pai3ojxszlkm7aoskod2k7
storage/maria/ma_blockrec.c sp1f-ma_blockrec.c-20070118193810-5wtbfa4irhu4voa3diiuus5km2j6jvlv
storage/maria/ma_check.c sp1f-ma_check.c-20060411134408-m5d5jao4sr32xsjjkig2uhdndqm5cgba
storage/maria/ma_check_standalone.h sp1f-ma_check_standalone.-20071003161031-zy6jbpaapkfiopgjilyz6crfhjcyqqwq
storage/maria/ma_close.c sp1f-ma_close.c-20060411134409-5c3eq7j6oloex4c4hrvcqrsuvz7xohev
storage/maria/ma_create.c sp1f-ma_create.c-20060411134410-ozzigempkjj2kdgxfbasiwfjzwjejevd
storage/maria/ma_delete.c sp1f-ma_delete.c-20060411134411-zqjgd6e2tfxthoferwrw46au2meb3dlj
storage/maria/ma_extra.c sp1f-ma_extra.c-20060411134414-odsjlm2dvwmrpwdcyu3eqmkilaatl3gb
storage/maria/ma_ft_boolean_search.c sp1f-ma_ft_boolean_search-20060411134414-l4bscelblvehls4cor5iwq3lbxkj4zwx
storage/maria/ma_ft_nlq_search.c sp1f-ma_ft_nlq_search.c-20060411134416-6huckpmebrw3u4tbi3z7hva6ghbomxzz
storage/maria/ma_ft_parser.c sp1f-ma_ft_parser.c-20060411134416-kws2xhd3kaxxjif2sauw4k5gztutzjye
storage/maria/ma_ftdefs.h sp1f-ma_ftdefs.h-20060411134419-oqk6vygfkih4joszmxt5mgy46rv2fizk
storage/maria/ma_init.c sp1f-ma_init.c-20060411134421-xondjvbbgljl5sotsqcyuevjzsskvtkl
storage/maria/ma_key_recover.h sp1f-ma_key_recover.h-20071114170804-7ufzuq5qi6ug2h7j34l5izqkwiklhgzm
storage/maria/ma_locking.c sp1f-ma_locking.c-20060411134423-5iokjcgoouoi54g2x2kfvwbt5xhjsnl3
storage/maria/ma_loghandler.c sp1f-ma_loghandler.c-20070202074129-utpzp3km4lrxldm2tdhejae2zy6zlmhq
storage/maria/ma_loghandler.h sp1f-ma_loghandler.h-20070202074129-s3537sryeljtck6bbguozuod72mp2gd4
storage/maria/ma_page.c sp1f-ma_page.c-20060411134426-pq3f3up2oh47zyjesvtfooc3xbtun42s
storage/maria/ma_recovery.c sp1f-recovery.c-20060427140636-kkuwrxyvjp42wmupdfbxuaro456oprrg
storage/maria/ma_rkey.c sp1f-ma_rkey.c-20060411134430-uvwfeqz5doebhdmjmb2yvcactm5q4qyx
storage/maria/ma_search.c sp1f-ma_search.c-20060411134442-haqjkc7jzp7zkt3fejfz4bdvhobo734v
storage/maria/ma_sort.c sp1f-ma_sort.c-20060411134442-cgxklkm2tqazbdc57w5xhs3qxbdcjpmh
storage/maria/ma_state.c sp1f-ma_state.c-20080529153331-ttwxiq5ksyib6sdrdsdl2lnbbm362lwh
storage/maria/ma_test3.c sp1f-ma_test3.c-20060411134447-llbsdlhu2zyxbt6taoa2lsts7snaic2j
storage/maria/ma_write.c sp1f-ma_write.c-20060411134450-llgjlkzrighulmt3uicm5qub3r63llaq
storage/maria/maria_def.h sp1f-maria_def.h-20060411134454-urdx4joxwcwzxbmltpzejn53y2rgjs44
storage/maria/maria_ftdump.c sp1f-maria_ftdump.c-20060411134454-xw7hmkx3ryphoh7mqirrxbrnvcewp5yj
storage/maria/trnman.c sp1f-trxman.c-20060816182810-j2a3jdxiefkpc62ad3xtioi44dbmr3dv
storage/myisam/ft_boolean_search.c sp1f-ft_boolean_search.c-20010411110351-pu6lfsyiumvnnewko2oqbyjz6g3q4xm3
storage/myisam/ft_myisam.c sp1f-ft_myisam.c-20060411134458-uct2l3bly6nrej2hilx4explav2i3kgv
storage/myisam/ft_nlq_search.c sp1f-ft_nlq_search.c-20010411110351-a7dhoojgfpsydi5k4qawswaatmakqe7b
storage/myisam/ft_parser.c sp1f-ft_parser.c-19700101030959-goim35zn24ujo7rbznobwhhw5r3lemab
storage/myisam/ft_stopwords.c sp1f-ft_stopwords.c-19700101030959-vgask5ebyzpaoa7j37ybfnjhx4rkzm63
storage/myisam/ftdefs.h sp1f-ftdefs.h-19700101030959-c5sgpgnpbutzv5fvbe6a63x6up2niz2p
storage/myisam/ha_myisam.cc sp1f-ha_myisam.cc-19700101030959-7xzssylbn7zfz3nupnsw43wws6xlltsu
storage/myisam/mi_check.c sp1f-mi_check.c-19700101030959-yzbhnjgzcmqdyj4zz5codhkkw5eedp6f
storage/myisam/mi_close.c sp1f-mi_close.c-19700101030959-vfzd6ivgjccwix7o2yyzgyxbqeuk6zz7
storage/myisam/mi_create.c sp1f-mi_create.c-19700101030959-i6lazhpsyf7ggr2yjukf6xxybhraxup3
storage/myisam/mi_extra.c sp1f-mi_extra.c-19700101030959-y5yhfph7parv3zdbew22zss3ho57dgvr
storage/myisam/mi_open.c sp1f-mi_open.c-19700101030959-2q2rxowhivdg4hjkjxyf2wtczsod5d6a
storage/myisam/mi_packrec.c sp1f-mi_packrec.c-19700101030959-q5c7eimwd4jctgok3jwycbwjfq3qs6lj
storage/myisam/mi_search.c sp1f-mi_search.c-19700101030959-kdl3zf7h3booyy7xyrnnoejouhznu4cs
storage/myisam/mi_static.c sp1f-mi_static.c-19700101030959-tdmnpz55hlrequ6y4hc3azz6hpxqfv75
storage/myisam/mi_test3.c sp1f-mi_test3.c-19700101030959-3yn4wc53noyuhc4nphffr4hxesodgfyf
storage/myisam/mi_write.c sp1f-mi_write.c-19700101030959-l47ss6e3phtvbf4dlpzjkleglspv72ef
storage/myisam/myisam_ftdump.c sp1f-ft_dump.c-20010411110351-jbiwe2tgwwoql6m5wgmorbnd5fydlehw
storage/myisam/myisamchk.c sp1f-myisamchk.c-19700101030959-hdnrqowbdb3ujo3qgjtzs6lgogwckvgc
storage/myisam/myisamdef.h sp1f-myisamdef.h-19700101030959-fzrxvpmzhzqfn5w2clasmcw7af4kanoa
storage/myisam/myisamlog.c sp1f-myisamlog.c-19700101030959-curz5f2h5crvlm6bfj5a5el6y4pad2ul
storage/myisam/sort.c sp1f-sort.c-19700101030959-n36775hcenftishba6lu6m7qtninzzgb
storage/myisammrg/ha_myisammrg.cc sp1f-ha_myisammrg.cc-19700101030959-7fis6yttnmseasvj7uuicb6o6kghtqxf
storage/myisammrg/myrg_open.c sp1f-myrg_open.c-19700101030959-vmszttys66wqrvmecn2q3yr57pnxhjox
storage/mysql_storage_engine.cmake mysql_storage_engine-20090610083740-kj4pwd9fzdgs1ocd-1
storage/ndb/plug.in sp1f-plug.in-20060819041916-otylumxnybrgve2wk4v7aaonziw24jdy
storage/ndb/src/kernel/blocks/dblqh/DblqhMain.cpp sp1f-dblqhmain.cpp-20040414082419-5mtvgr6eg47fgddawjjch74crdnaduvi
storage/ndb/src/kernel/blocks/suma/Suma.cpp sp1f-suma.cpp-20040414082421-p4toipzza63cmyzczerf4mdsbvqbwi5r
storage/pbxt/ChangeLog changelog-20090326121724-x683v32twzr3fi0y-3
storage/pbxt/plug.in plug.in-20090326121724-x683v32twzr3fi0y-9
storage/pbxt/src/Makefile.am makefile.am-20090326121724-x683v32twzr3fi0y-13
storage/pbxt/src/cache_xt.cc cache_xt.cc-20090326121724-x683v32twzr3fi0y-16
storage/pbxt/src/cache_xt.h cache_xt.h-20090326121724-x683v32twzr3fi0y-17
storage/pbxt/src/database_xt.cc database_xt.cc-20090326121724-x683v32twzr3fi0y-20
storage/pbxt/src/database_xt.h database_xt.h-20090326121724-x683v32twzr3fi0y-21
storage/pbxt/src/datadic_xt.cc datadic_xt.cc-20090326121724-x683v32twzr3fi0y-22
storage/pbxt/src/datalog_xt.cc datalog_xt.cc-20090326121724-x683v32twzr3fi0y-24
storage/pbxt/src/discover_xt.cc discover_xt.cc-20090326121724-x683v32twzr3fi0y-26
storage/pbxt/src/filesys_xt.cc filesys_xt.cc-20090326121724-x683v32twzr3fi0y-28
storage/pbxt/src/filesys_xt.h filesys_xt.h-20090326121724-x683v32twzr3fi0y-29
storage/pbxt/src/ha_pbxt.cc ha_pbxt.cc-20090326121724-x683v32twzr3fi0y-30
storage/pbxt/src/ha_pbxt.h ha_pbxt.h-20090326121724-x683v32twzr3fi0y-31
storage/pbxt/src/ha_xtsys.h ha_xtsys.h-20090326121724-x683v32twzr3fi0y-33
storage/pbxt/src/heap_xt.cc heap_xt.cc-20090326121724-x683v32twzr3fi0y-36
storage/pbxt/src/index_xt.cc index_xt.cc-20090326121724-x683v32twzr3fi0y-38
storage/pbxt/src/lock_xt.cc lock_xt.cc-20090326121724-x683v32twzr3fi0y-42
storage/pbxt/src/locklist_xt.h locklist_xt.h-20090326121724-x683v32twzr3fi0y-45
storage/pbxt/src/memory_xt.cc memory_xt.cc-20090326121724-x683v32twzr3fi0y-46
storage/pbxt/src/myxt_xt.cc myxt_xt.cc-20090326121724-x683v32twzr3fi0y-48
storage/pbxt/src/myxt_xt.h myxt_xt.h-20090326121724-x683v32twzr3fi0y-49
storage/pbxt/src/pbms.h pbms.h-20090326121724-x683v32twzr3fi0y-50
storage/pbxt/src/pbms_enabled.cc pbms_enabled.cc-20090818074502-tybcb62hp1kdrk3l-1
storage/pbxt/src/pbms_enabled.h pbms_enabled.h-20090818074502-tybcb62hp1kdrk3l-2
storage/pbxt/src/pthread_xt.cc pthread_xt.cc-20090326121724-x683v32twzr3fi0y-51
storage/pbxt/src/restart_xt.cc restart_xt.cc-20090326121724-x683v32twzr3fi0y-53
storage/pbxt/src/restart_xt.h restart_xt.h-20090326121724-x683v32twzr3fi0y-54
storage/pbxt/src/strutil_xt.cc strutil_xt.cc-20090326121724-x683v32twzr3fi0y-59
storage/pbxt/src/systab_xt.cc systab_xt.cc-20090326121724-x683v32twzr3fi0y-61
storage/pbxt/src/tabcache_xt.cc tabcache_xt.cc-20090326121724-x683v32twzr3fi0y-63
storage/pbxt/src/tabcache_xt.h tabcache_xt.h-20090326121724-x683v32twzr3fi0y-64
storage/pbxt/src/table_xt.cc table_xt.cc-20090326121724-x683v32twzr3fi0y-65
storage/pbxt/src/table_xt.h table_xt.h-20090326121724-x683v32twzr3fi0y-66
storage/pbxt/src/thread_xt.cc thread_xt.cc-20090326121724-x683v32twzr3fi0y-67
storage/pbxt/src/thread_xt.h thread_xt.h-20090326121724-x683v32twzr3fi0y-68
storage/pbxt/src/trace_xt.cc trace_xt.cc-20090326121724-x683v32twzr3fi0y-69
storage/pbxt/src/util_xt.cc util_xt.cc-20090326121724-x683v32twzr3fi0y-71
storage/pbxt/src/util_xt.h util_xt.h-20090326121724-x683v32twzr3fi0y-72
storage/pbxt/src/xaction_xt.cc xaction_xt.cc-20090326121724-x683v32twzr3fi0y-74
storage/pbxt/src/xaction_xt.h xaction_xt.h-20090326121724-x683v32twzr3fi0y-75
storage/pbxt/src/xactlog_xt.cc xactlog_xt.cc-20090326121724-x683v32twzr3fi0y-76
storage/pbxt/src/xactlog_xt.h xactlog_xt.h-20090326121724-x683v32twzr3fi0y-77
storage/pbxt/src/xt_config.h xt_config.h-20090326121724-x683v32twzr3fi0y-78
storage/pbxt/src/xt_defs.h xt_defs.h-20090326121724-x683v32twzr3fi0y-79
storage/pbxt/src/xt_errno.h xt_errno.h-20090326121724-x683v32twzr3fi0y-80
storage/xtradb/CMakeLists.txt cmakelists.txt-20081201061010-zymrrwrczns2vrex-1
storage/xtradb/ChangeLog changelog-20081201061010-zymrrwrczns2vrex-3
storage/xtradb/Makefile.am makefile.am-20081201061010-zymrrwrczns2vrex-4
storage/xtradb/btr/btr0btr.c btr0btr.c-20081201061010-zymrrwrczns2vrex-44
storage/xtradb/btr/btr0cur.c btr0cur.c-20081201061010-zymrrwrczns2vrex-45
storage/xtradb/btr/btr0pcur.c btr0pcur.c-20081201061010-zymrrwrczns2vrex-46
storage/xtradb/btr/btr0sea.c btr0sea.c-20081201061010-zymrrwrczns2vrex-47
storage/xtradb/buf/buf0buddy.c buf0buddy.c-20081201061010-zymrrwrczns2vrex-48
storage/xtradb/buf/buf0buf.c buf0buf.c-20081201061010-zymrrwrczns2vrex-49
storage/xtradb/buf/buf0flu.c buf0flu.c-20081201061010-zymrrwrczns2vrex-50
storage/xtradb/buf/buf0lru.c buf0lru.c-20081201061010-zymrrwrczns2vrex-51
storage/xtradb/buf/buf0rea.c buf0rea.c-20081201061010-zymrrwrczns2vrex-52
storage/xtradb/data/data0data.c data0data.c-20081201061010-zymrrwrczns2vrex-53
storage/xtradb/data/data0type.c data0type.c-20081201061010-zymrrwrczns2vrex-54
storage/xtradb/dict/dict0boot.c dict0boot.c-20081201061010-zymrrwrczns2vrex-55
storage/xtradb/dict/dict0crea.c dict0crea.c-20081201061010-zymrrwrczns2vrex-56
storage/xtradb/dict/dict0dict.c dict0dict.c-20081201061010-zymrrwrczns2vrex-57
storage/xtradb/dict/dict0load.c dict0load.c-20081201061010-zymrrwrczns2vrex-58
storage/xtradb/dict/dict0mem.c dict0mem.c-20081201061010-zymrrwrczns2vrex-59
storage/xtradb/dyn/dyn0dyn.c dyn0dyn.c-20081201061010-zymrrwrczns2vrex-60
storage/xtradb/eval/eval0eval.c eval0eval.c-20081201061010-zymrrwrczns2vrex-61
storage/xtradb/eval/eval0proc.c eval0proc.c-20081201061010-zymrrwrczns2vrex-62
storage/xtradb/fil/fil0fil.c fil0fil.c-20081201061010-zymrrwrczns2vrex-63
storage/xtradb/fsp/fsp0fsp.c fsp0fsp.c-20081201061010-zymrrwrczns2vrex-64
storage/xtradb/fut/fut0fut.c fut0fut.c-20081201061010-zymrrwrczns2vrex-65
storage/xtradb/fut/fut0lst.c fut0lst.c-20081201061010-zymrrwrczns2vrex-66
storage/xtradb/ha/ha0ha.c ha0ha.c-20081201061010-zymrrwrczns2vrex-67
storage/xtradb/ha/ha0storage.c ha0storage.c-20081201061010-zymrrwrczns2vrex-68
storage/xtradb/ha/hash0hash.c hash0hash.c-20081201061010-zymrrwrczns2vrex-69
storage/xtradb/handler/ha_innodb.cc ha_innodb.cc-20081201061010-zymrrwrczns2vrex-70
storage/xtradb/handler/ha_innodb.h ha_innodb.h-20081201061010-zymrrwrczns2vrex-71
storage/xtradb/handler/handler0alter.cc handler0alter.cc-20081201061010-zymrrwrczns2vrex-72
storage/xtradb/handler/i_s.cc i_s.cc-20081201061010-zymrrwrczns2vrex-73
storage/xtradb/handler/i_s.h i_s.h-20081201061010-zymrrwrczns2vrex-74
storage/xtradb/handler/innodb_patch_info.h innodb_patch_info.h-20081206234022-hep6ryfeacyr0572-1
storage/xtradb/handler/mysql_addons.cc mysql_addons.cc-20081201061010-zymrrwrczns2vrex-75
storage/xtradb/ibuf/ibuf0ibuf.c ibuf0ibuf.c-20081201061010-zymrrwrczns2vrex-76
storage/xtradb/include/btr0btr.h btr0btr.h-20081201061010-zymrrwrczns2vrex-77
storage/xtradb/include/btr0btr.ic btr0btr.ic-20081201061010-zymrrwrczns2vrex-78
storage/xtradb/include/btr0cur.h btr0cur.h-20081201061010-zymrrwrczns2vrex-79
storage/xtradb/include/btr0cur.ic btr0cur.ic-20081201061010-zymrrwrczns2vrex-80
storage/xtradb/include/btr0pcur.h btr0pcur.h-20081201061010-zymrrwrczns2vrex-81
storage/xtradb/include/btr0pcur.ic btr0pcur.ic-20081201061010-zymrrwrczns2vrex-82
storage/xtradb/include/btr0sea.h btr0sea.h-20081201061010-zymrrwrczns2vrex-83
storage/xtradb/include/btr0sea.ic btr0sea.ic-20081201061010-zymrrwrczns2vrex-84
storage/xtradb/include/btr0types.h btr0types.h-20081201061010-zymrrwrczns2vrex-85
storage/xtradb/include/buf0buddy.h buf0buddy.h-20081201061010-zymrrwrczns2vrex-86
storage/xtradb/include/buf0buddy.ic buf0buddy.ic-20081201061010-zymrrwrczns2vrex-87
storage/xtradb/include/buf0buf.h buf0buf.h-20081201061010-zymrrwrczns2vrex-88
storage/xtradb/include/buf0buf.ic buf0buf.ic-20081201061010-zymrrwrczns2vrex-89
storage/xtradb/include/buf0flu.h buf0flu.h-20081201061010-zymrrwrczns2vrex-90
storage/xtradb/include/buf0flu.ic buf0flu.ic-20081201061010-zymrrwrczns2vrex-91
storage/xtradb/include/buf0lru.h buf0lru.h-20081201061010-zymrrwrczns2vrex-92
storage/xtradb/include/buf0lru.ic buf0lru.ic-20081201061010-zymrrwrczns2vrex-93
storage/xtradb/include/buf0rea.h buf0rea.h-20081201061010-zymrrwrczns2vrex-94
storage/xtradb/include/buf0types.h buf0types.h-20081201061010-zymrrwrczns2vrex-95
storage/xtradb/include/data0data.h data0data.h-20081201061010-zymrrwrczns2vrex-96
storage/xtradb/include/data0data.ic data0data.ic-20081201061010-zymrrwrczns2vrex-97
storage/xtradb/include/data0type.h data0type.h-20081201061010-zymrrwrczns2vrex-98
storage/xtradb/include/data0type.ic data0type.ic-20081201061010-zymrrwrczns2vrex-99
storage/xtradb/include/data0types.h data0types.h-20081201061010-zymrrwrczns2vrex-100
storage/xtradb/include/db0err.h db0err.h-20081201061010-zymrrwrczns2vrex-101
storage/xtradb/include/dict0boot.h dict0boot.h-20081201061010-zymrrwrczns2vrex-102
storage/xtradb/include/dict0boot.ic dict0boot.ic-20081201061010-zymrrwrczns2vrex-103
storage/xtradb/include/dict0crea.h dict0crea.h-20081201061010-zymrrwrczns2vrex-104
storage/xtradb/include/dict0crea.ic dict0crea.ic-20081201061010-zymrrwrczns2vrex-105
storage/xtradb/include/dict0dict.h dict0dict.h-20081201061010-zymrrwrczns2vrex-106
storage/xtradb/include/dict0dict.ic dict0dict.ic-20081201061010-zymrrwrczns2vrex-107
storage/xtradb/include/dict0load.h dict0load.h-20081201061010-zymrrwrczns2vrex-108
storage/xtradb/include/dict0load.ic dict0load.ic-20081201061010-zymrrwrczns2vrex-109
storage/xtradb/include/dict0mem.h dict0mem.h-20081201061010-zymrrwrczns2vrex-110
storage/xtradb/include/dict0mem.ic dict0mem.ic-20081201061010-zymrrwrczns2vrex-111
storage/xtradb/include/dict0types.h dict0types.h-20081201061010-zymrrwrczns2vrex-112
storage/xtradb/include/dyn0dyn.h dyn0dyn.h-20081201061010-zymrrwrczns2vrex-113
storage/xtradb/include/dyn0dyn.ic dyn0dyn.ic-20081201061010-zymrrwrczns2vrex-114
storage/xtradb/include/eval0eval.h eval0eval.h-20081201061010-zymrrwrczns2vrex-115
storage/xtradb/include/eval0eval.ic eval0eval.ic-20081201061010-zymrrwrczns2vrex-116
storage/xtradb/include/eval0proc.h eval0proc.h-20081201061010-zymrrwrczns2vrex-117
storage/xtradb/include/eval0proc.ic eval0proc.ic-20081201061010-zymrrwrczns2vrex-118
storage/xtradb/include/fil0fil.h fil0fil.h-20081201061010-zymrrwrczns2vrex-119
storage/xtradb/include/fsp0fsp.h fsp0fsp.h-20081201061010-zymrrwrczns2vrex-120
storage/xtradb/include/fsp0fsp.ic fsp0fsp.ic-20081201061010-zymrrwrczns2vrex-121
storage/xtradb/include/fut0fut.h fut0fut.h-20081201061010-zymrrwrczns2vrex-122
storage/xtradb/include/fut0fut.ic fut0fut.ic-20081201061010-zymrrwrczns2vrex-123
storage/xtradb/include/fut0lst.h fut0lst.h-20081201061010-zymrrwrczns2vrex-124
storage/xtradb/include/fut0lst.ic fut0lst.ic-20081201061010-zymrrwrczns2vrex-125
storage/xtradb/include/ha0ha.h ha0ha.h-20081201061010-zymrrwrczns2vrex-126
storage/xtradb/include/ha0ha.ic ha0ha.ic-20081201061010-zymrrwrczns2vrex-127
storage/xtradb/include/ha0storage.h ha0storage.h-20081201061010-zymrrwrczns2vrex-128
storage/xtradb/include/ha0storage.ic ha0storage.ic-20081201061010-zymrrwrczns2vrex-129
storage/xtradb/include/ha_prototypes.h ha_prototypes.h-20081201061010-zymrrwrczns2vrex-130
storage/xtradb/include/handler0alter.h handler0alter.h-20081201061010-zymrrwrczns2vrex-131
storage/xtradb/include/hash0hash.h hash0hash.h-20081201061010-zymrrwrczns2vrex-132
storage/xtradb/include/hash0hash.ic hash0hash.ic-20081201061010-zymrrwrczns2vrex-133
storage/xtradb/include/ibuf0ibuf.h ibuf0ibuf.h-20081201061010-zymrrwrczns2vrex-134
storage/xtradb/include/ibuf0ibuf.ic ibuf0ibuf.ic-20081201061010-zymrrwrczns2vrex-135
storage/xtradb/include/ibuf0types.h ibuf0types.h-20081201061010-zymrrwrczns2vrex-136
storage/xtradb/include/lock0iter.h lock0iter.h-20081201061010-zymrrwrczns2vrex-137
storage/xtradb/include/lock0lock.h lock0lock.h-20081201061010-zymrrwrczns2vrex-138
storage/xtradb/include/lock0lock.ic lock0lock.ic-20081201061010-zymrrwrczns2vrex-139
storage/xtradb/include/lock0priv.h lock0priv.h-20081201061010-zymrrwrczns2vrex-140
storage/xtradb/include/lock0priv.ic lock0priv.ic-20081201061010-zymrrwrczns2vrex-141
storage/xtradb/include/lock0types.h lock0types.h-20081201061010-zymrrwrczns2vrex-142
storage/xtradb/include/log0log.h log0log.h-20081201061010-zymrrwrczns2vrex-143
storage/xtradb/include/log0log.ic log0log.ic-20081201061010-zymrrwrczns2vrex-144
storage/xtradb/include/log0recv.h log0recv.h-20081201061010-zymrrwrczns2vrex-145
storage/xtradb/include/log0recv.ic log0recv.ic-20081201061010-zymrrwrczns2vrex-146
storage/xtradb/include/mach0data.h mach0data.h-20081201061010-zymrrwrczns2vrex-147
storage/xtradb/include/mach0data.ic mach0data.ic-20081201061010-zymrrwrczns2vrex-148
storage/xtradb/include/mem0dbg.h mem0dbg.h-20081201061010-zymrrwrczns2vrex-149
storage/xtradb/include/mem0dbg.ic mem0dbg.ic-20081201061010-zymrrwrczns2vrex-150
storage/xtradb/include/mem0mem.h mem0mem.h-20081201061010-zymrrwrczns2vrex-151
storage/xtradb/include/mem0mem.ic mem0mem.ic-20081201061010-zymrrwrczns2vrex-152
storage/xtradb/include/mem0pool.h mem0pool.h-20081201061010-zymrrwrczns2vrex-153
storage/xtradb/include/mem0pool.ic mem0pool.ic-20081201061010-zymrrwrczns2vrex-154
storage/xtradb/include/mtr0log.h mtr0log.h-20081201061010-zymrrwrczns2vrex-155
storage/xtradb/include/mtr0log.ic mtr0log.ic-20081201061010-zymrrwrczns2vrex-156
storage/xtradb/include/mtr0mtr.h mtr0mtr.h-20081201061010-zymrrwrczns2vrex-157
storage/xtradb/include/mtr0mtr.ic mtr0mtr.ic-20081201061010-zymrrwrczns2vrex-158
storage/xtradb/include/mtr0types.h mtr0types.h-20081201061010-zymrrwrczns2vrex-159
storage/xtradb/include/mysql_addons.h mysql_addons.h-20081201061010-zymrrwrczns2vrex-160
storage/xtradb/include/os0file.h os0file.h-20081201061010-zymrrwrczns2vrex-161
storage/xtradb/include/os0proc.h os0proc.h-20081201061010-zymrrwrczns2vrex-162
storage/xtradb/include/os0proc.ic os0proc.ic-20081201061010-zymrrwrczns2vrex-163
storage/xtradb/include/os0sync.h os0sync.h-20081201061010-zymrrwrczns2vrex-164
storage/xtradb/include/os0sync.ic os0sync.ic-20081201061010-zymrrwrczns2vrex-165
storage/xtradb/include/os0thread.h os0thread.h-20081201061010-zymrrwrczns2vrex-166
storage/xtradb/include/os0thread.ic os0thread.ic-20081201061010-zymrrwrczns2vrex-167
storage/xtradb/include/page0cur.h page0cur.h-20081201061010-zymrrwrczns2vrex-168
storage/xtradb/include/page0cur.ic page0cur.ic-20081201061010-zymrrwrczns2vrex-169
storage/xtradb/include/page0page.h page0page.h-20081201061010-zymrrwrczns2vrex-170
storage/xtradb/include/page0page.ic page0page.ic-20081201061010-zymrrwrczns2vrex-171
storage/xtradb/include/page0types.h page0types.h-20081201061010-zymrrwrczns2vrex-172
storage/xtradb/include/page0zip.h page0zip.h-20081201061010-zymrrwrczns2vrex-173
storage/xtradb/include/page0zip.ic page0zip.ic-20081201061010-zymrrwrczns2vrex-174
storage/xtradb/include/pars0opt.h pars0opt.h-20081201061010-zymrrwrczns2vrex-176
storage/xtradb/include/pars0opt.ic pars0opt.ic-20081201061010-zymrrwrczns2vrex-177
storage/xtradb/include/pars0pars.h pars0pars.h-20081201061010-zymrrwrczns2vrex-178
storage/xtradb/include/pars0pars.ic pars0pars.ic-20081201061010-zymrrwrczns2vrex-179
storage/xtradb/include/pars0sym.h pars0sym.h-20081201061010-zymrrwrczns2vrex-180
storage/xtradb/include/pars0sym.ic pars0sym.ic-20081201061010-zymrrwrczns2vrex-181
storage/xtradb/include/pars0types.h pars0types.h-20081201061010-zymrrwrczns2vrex-182
storage/xtradb/include/que0que.h que0que.h-20081201061010-zymrrwrczns2vrex-183
storage/xtradb/include/que0que.ic que0que.ic-20081201061010-zymrrwrczns2vrex-184
storage/xtradb/include/que0types.h que0types.h-20081201061010-zymrrwrczns2vrex-185
storage/xtradb/include/read0read.h read0read.h-20081201061010-zymrrwrczns2vrex-186
storage/xtradb/include/read0read.ic read0read.ic-20081201061010-zymrrwrczns2vrex-187
storage/xtradb/include/read0types.h read0types.h-20081201061010-zymrrwrczns2vrex-188
storage/xtradb/include/rem0cmp.h rem0cmp.h-20081201061010-zymrrwrczns2vrex-189
storage/xtradb/include/rem0cmp.ic rem0cmp.ic-20081201061010-zymrrwrczns2vrex-190
storage/xtradb/include/rem0rec.h rem0rec.h-20081201061010-zymrrwrczns2vrex-191
storage/xtradb/include/rem0rec.ic rem0rec.ic-20081201061010-zymrrwrczns2vrex-192
storage/xtradb/include/rem0types.h rem0types.h-20081201061010-zymrrwrczns2vrex-193
storage/xtradb/include/row0ext.h row0ext.h-20081201061010-zymrrwrczns2vrex-194
storage/xtradb/include/row0ext.ic row0ext.ic-20081201061010-zymrrwrczns2vrex-195
storage/xtradb/include/row0ins.h row0ins.h-20081201061010-zymrrwrczns2vrex-196
storage/xtradb/include/row0ins.ic row0ins.ic-20081201061010-zymrrwrczns2vrex-197
storage/xtradb/include/row0merge.h row0merge.h-20081201061010-zymrrwrczns2vrex-198
storage/xtradb/include/row0mysql.h row0mysql.h-20081201061010-zymrrwrczns2vrex-199
storage/xtradb/include/row0mysql.ic row0mysql.ic-20081201061010-zymrrwrczns2vrex-200
storage/xtradb/include/row0purge.h row0purge.h-20081201061010-zymrrwrczns2vrex-201
storage/xtradb/include/row0purge.ic row0purge.ic-20081201061010-zymrrwrczns2vrex-202
storage/xtradb/include/row0row.h row0row.h-20081201061010-zymrrwrczns2vrex-203
storage/xtradb/include/row0row.ic row0row.ic-20081201061010-zymrrwrczns2vrex-204
storage/xtradb/include/row0sel.h row0sel.h-20081201061010-zymrrwrczns2vrex-205
storage/xtradb/include/row0sel.ic row0sel.ic-20081201061010-zymrrwrczns2vrex-206
storage/xtradb/include/row0types.h row0types.h-20081201061010-zymrrwrczns2vrex-207
storage/xtradb/include/row0uins.h row0uins.h-20081201061010-zymrrwrczns2vrex-208
storage/xtradb/include/row0uins.ic row0uins.ic-20081201061010-zymrrwrczns2vrex-209
storage/xtradb/include/row0umod.h row0umod.h-20081201061010-zymrrwrczns2vrex-210
storage/xtradb/include/row0umod.ic row0umod.ic-20081201061010-zymrrwrczns2vrex-211
storage/xtradb/include/row0undo.h row0undo.h-20081201061010-zymrrwrczns2vrex-212
storage/xtradb/include/row0undo.ic row0undo.ic-20081201061010-zymrrwrczns2vrex-213
storage/xtradb/include/row0upd.h row0upd.h-20081201061010-zymrrwrczns2vrex-214
storage/xtradb/include/row0upd.ic row0upd.ic-20081201061010-zymrrwrczns2vrex-215
storage/xtradb/include/row0vers.h row0vers.h-20081201061010-zymrrwrczns2vrex-216
storage/xtradb/include/row0vers.ic row0vers.ic-20081201061010-zymrrwrczns2vrex-217
storage/xtradb/include/srv0que.h srv0que.h-20081201061010-zymrrwrczns2vrex-218
storage/xtradb/include/srv0srv.h srv0srv.h-20081201061010-zymrrwrczns2vrex-219
storage/xtradb/include/srv0srv.ic srv0srv.ic-20081201061010-zymrrwrczns2vrex-220
storage/xtradb/include/srv0start.h srv0start.h-20081201061010-zymrrwrczns2vrex-221
storage/xtradb/include/sync0arr.h sync0arr.h-20081201061010-zymrrwrczns2vrex-222
storage/xtradb/include/sync0arr.ic sync0arr.ic-20081201061010-zymrrwrczns2vrex-223
storage/xtradb/include/sync0rw.h sync0rw.h-20081201061010-zymrrwrczns2vrex-224
storage/xtradb/include/sync0rw.ic sync0rw.ic-20081201061010-zymrrwrczns2vrex-225
storage/xtradb/include/sync0sync.h sync0sync.h-20081201061010-zymrrwrczns2vrex-226
storage/xtradb/include/sync0sync.ic sync0sync.ic-20081201061010-zymrrwrczns2vrex-227
storage/xtradb/include/sync0types.h sync0types.h-20081201061010-zymrrwrczns2vrex-228
storage/xtradb/include/thr0loc.h thr0loc.h-20081201061010-zymrrwrczns2vrex-229
storage/xtradb/include/thr0loc.ic thr0loc.ic-20081201061010-zymrrwrczns2vrex-230
storage/xtradb/include/trx0i_s.h trx0i_s.h-20081201061010-zymrrwrczns2vrex-231
storage/xtradb/include/trx0purge.h trx0purge.h-20081201061010-zymrrwrczns2vrex-232
storage/xtradb/include/trx0purge.ic trx0purge.ic-20081201061010-zymrrwrczns2vrex-233
storage/xtradb/include/trx0rec.h trx0rec.h-20081201061010-zymrrwrczns2vrex-234
storage/xtradb/include/trx0rec.ic trx0rec.ic-20081201061010-zymrrwrczns2vrex-235
storage/xtradb/include/trx0roll.h trx0roll.h-20081201061010-zymrrwrczns2vrex-236
storage/xtradb/include/trx0roll.ic trx0roll.ic-20081201061010-zymrrwrczns2vrex-237
storage/xtradb/include/trx0rseg.h trx0rseg.h-20081201061010-zymrrwrczns2vrex-238
storage/xtradb/include/trx0rseg.ic trx0rseg.ic-20081201061010-zymrrwrczns2vrex-239
storage/xtradb/include/trx0sys.h trx0sys.h-20081201061010-zymrrwrczns2vrex-240
storage/xtradb/include/trx0sys.ic trx0sys.ic-20081201061010-zymrrwrczns2vrex-241
storage/xtradb/include/trx0trx.h trx0trx.h-20081201061010-zymrrwrczns2vrex-242
storage/xtradb/include/trx0trx.ic trx0trx.ic-20081201061010-zymrrwrczns2vrex-243
storage/xtradb/include/trx0types.h trx0types.h-20081201061010-zymrrwrczns2vrex-244
storage/xtradb/include/trx0undo.h trx0undo.h-20081201061010-zymrrwrczns2vrex-245
storage/xtradb/include/trx0undo.ic trx0undo.ic-20081201061010-zymrrwrczns2vrex-246
storage/xtradb/include/trx0xa.h trx0xa.h-20081201061010-zymrrwrczns2vrex-247
storage/xtradb/include/univ.i univ.i-20081201061010-zymrrwrczns2vrex-248
storage/xtradb/include/usr0sess.h usr0sess.h-20081201061010-zymrrwrczns2vrex-249
storage/xtradb/include/usr0sess.ic usr0sess.ic-20081201061010-zymrrwrczns2vrex-250
storage/xtradb/include/usr0types.h usr0types.h-20081201061010-zymrrwrczns2vrex-251
storage/xtradb/include/ut0auxconf.h ut0auxconf.h-20090326061054-ylrdb8libxw6u7e9-2
storage/xtradb/include/ut0byte.h ut0byte.h-20081201061010-zymrrwrczns2vrex-252
storage/xtradb/include/ut0byte.ic ut0byte.ic-20081201061010-zymrrwrczns2vrex-253
storage/xtradb/include/ut0dbg.h ut0dbg.h-20081201061010-zymrrwrczns2vrex-254
storage/xtradb/include/ut0list.h ut0list.h-20081201061010-zymrrwrczns2vrex-255
storage/xtradb/include/ut0list.ic ut0list.ic-20081201061010-zymrrwrczns2vrex-256
storage/xtradb/include/ut0lst.h ut0lst.h-20081201061010-zymrrwrczns2vrex-257
storage/xtradb/include/ut0mem.h ut0mem.h-20081201061010-zymrrwrczns2vrex-258
storage/xtradb/include/ut0mem.ic ut0mem.ic-20081201061010-zymrrwrczns2vrex-259
storage/xtradb/include/ut0rnd.h ut0rnd.h-20081201061010-zymrrwrczns2vrex-260
storage/xtradb/include/ut0rnd.ic ut0rnd.ic-20081201061010-zymrrwrczns2vrex-261
storage/xtradb/include/ut0sort.h ut0sort.h-20081201061010-zymrrwrczns2vrex-262
storage/xtradb/include/ut0ut.h ut0ut.h-20081201061010-zymrrwrczns2vrex-263
storage/xtradb/include/ut0ut.ic ut0ut.ic-20081201061010-zymrrwrczns2vrex-264
storage/xtradb/include/ut0vec.h ut0vec.h-20081201061010-zymrrwrczns2vrex-265
storage/xtradb/include/ut0vec.ic ut0vec.ic-20081201061010-zymrrwrczns2vrex-266
storage/xtradb/include/ut0wqueue.h ut0wqueue.h-20081201061010-zymrrwrczns2vrex-267
storage/xtradb/lock/lock0iter.c lock0iter.c-20081201061010-zymrrwrczns2vrex-268
storage/xtradb/lock/lock0lock.c lock0lock.c-20081201061010-zymrrwrczns2vrex-269
storage/xtradb/log/log0log.c log0log.c-20081201061010-zymrrwrczns2vrex-270
storage/xtradb/log/log0recv.c log0recv.c-20081201061010-zymrrwrczns2vrex-271
storage/xtradb/mach/mach0data.c mach0data.c-20081201061010-zymrrwrczns2vrex-272
storage/xtradb/mem/mem0dbg.c mem0dbg.c-20081201061010-zymrrwrczns2vrex-273
storage/xtradb/mem/mem0mem.c mem0mem.c-20081201061010-zymrrwrczns2vrex-274
storage/xtradb/mem/mem0pool.c mem0pool.c-20081201061010-zymrrwrczns2vrex-275
storage/xtradb/mtr/mtr0log.c mtr0log.c-20081201061010-zymrrwrczns2vrex-276
storage/xtradb/mtr/mtr0mtr.c mtr0mtr.c-20081201061010-zymrrwrczns2vrex-277
storage/xtradb/os/os0file.c os0file.c-20081201061010-zymrrwrczns2vrex-313
storage/xtradb/os/os0proc.c os0proc.c-20081201061010-zymrrwrczns2vrex-314
storage/xtradb/os/os0sync.c os0sync.c-20081201061010-zymrrwrczns2vrex-315
storage/xtradb/os/os0thread.c os0thread.c-20081201061010-zymrrwrczns2vrex-316
storage/xtradb/page/page0cur.c page0cur.c-20081201061010-zymrrwrczns2vrex-317
storage/xtradb/page/page0page.c page0page.c-20081201061010-zymrrwrczns2vrex-318
storage/xtradb/page/page0zip.c page0zip.c-20081201061010-zymrrwrczns2vrex-319
storage/xtradb/pars/lexyy.c lexyy.c-20081201061010-zymrrwrczns2vrex-320
storage/xtradb/pars/pars0lex.l pars0lex.l-20081201061010-zymrrwrczns2vrex-325
storage/xtradb/pars/pars0opt.c pars0opt.c-20081201061010-zymrrwrczns2vrex-326
storage/xtradb/pars/pars0pars.c pars0pars.c-20081201061010-zymrrwrczns2vrex-327
storage/xtradb/pars/pars0sym.c pars0sym.c-20081201061010-zymrrwrczns2vrex-328
storage/xtradb/plug.in plug.in-20081201061010-zymrrwrczns2vrex-31
storage/xtradb/que/que0que.c que0que.c-20081201061010-zymrrwrczns2vrex-329
storage/xtradb/read/read0read.c read0read.c-20081201061010-zymrrwrczns2vrex-330
storage/xtradb/rem/rem0cmp.c rem0cmp.c-20081201061010-zymrrwrczns2vrex-331
storage/xtradb/rem/rem0rec.c rem0rec.c-20081201061010-zymrrwrczns2vrex-332
storage/xtradb/row/row0ext.c row0ext.c-20081201061010-zymrrwrczns2vrex-333
storage/xtradb/row/row0ins.c row0ins.c-20081201061010-zymrrwrczns2vrex-334
storage/xtradb/row/row0merge.c row0merge.c-20081201061010-zymrrwrczns2vrex-335
storage/xtradb/row/row0mysql.c row0mysql.c-20081201061010-zymrrwrczns2vrex-336
storage/xtradb/row/row0purge.c row0purge.c-20081201061010-zymrrwrczns2vrex-337
storage/xtradb/row/row0row.c row0row.c-20081201061010-zymrrwrczns2vrex-338
storage/xtradb/row/row0sel.c row0sel.c-20081201061010-zymrrwrczns2vrex-339
storage/xtradb/row/row0uins.c row0uins.c-20081201061010-zymrrwrczns2vrex-340
storage/xtradb/row/row0umod.c row0umod.c-20081201061010-zymrrwrczns2vrex-341
storage/xtradb/row/row0undo.c row0undo.c-20081201061010-zymrrwrczns2vrex-342
storage/xtradb/row/row0upd.c row0upd.c-20081201061010-zymrrwrczns2vrex-343
storage/xtradb/row/row0vers.c row0vers.c-20081201061010-zymrrwrczns2vrex-344
storage/xtradb/scripts/install_innodb_plugins.sql install_innodb_plugi-20081201061010-zymrrwrczns2vrex-345
storage/xtradb/scripts/install_innodb_plugins_win.sql install_innodb_plugi-20081203050234-edoolglm28lyejuc-14
storage/xtradb/srv/srv0que.c srv0que.c-20081201061010-zymrrwrczns2vrex-346
storage/xtradb/srv/srv0srv.c srv0srv.c-20081201061010-zymrrwrczns2vrex-347
storage/xtradb/srv/srv0start.c srv0start.c-20081201061010-zymrrwrczns2vrex-348
storage/xtradb/sync/sync0arr.c sync0arr.c-20081201061010-zymrrwrczns2vrex-349
storage/xtradb/sync/sync0rw.c sync0rw.c-20081201061010-zymrrwrczns2vrex-350
storage/xtradb/sync/sync0sync.c sync0sync.c-20081201061010-zymrrwrczns2vrex-351
storage/xtradb/thr/thr0loc.c thr0loc.c-20081201061010-zymrrwrczns2vrex-352
storage/xtradb/trx/trx0i_s.c trx0i_s.c-20081201061010-zymrrwrczns2vrex-353
storage/xtradb/trx/trx0purge.c trx0purge.c-20081201061010-zymrrwrczns2vrex-354
storage/xtradb/trx/trx0rec.c trx0rec.c-20081201061010-zymrrwrczns2vrex-355
storage/xtradb/trx/trx0roll.c trx0roll.c-20081201061010-zymrrwrczns2vrex-356
storage/xtradb/trx/trx0rseg.c trx0rseg.c-20081201061010-zymrrwrczns2vrex-357
storage/xtradb/trx/trx0sys.c trx0sys.c-20081201061010-zymrrwrczns2vrex-358
storage/xtradb/trx/trx0trx.c trx0trx.c-20081201061010-zymrrwrczns2vrex-359
storage/xtradb/trx/trx0undo.c trx0undo.c-20081201061010-zymrrwrczns2vrex-360
storage/xtradb/usr/usr0sess.c usr0sess.c-20081201061010-zymrrwrczns2vrex-361
storage/xtradb/ut/ut0byte.c ut0byte.c-20081201061010-zymrrwrczns2vrex-362
storage/xtradb/ut/ut0dbg.c ut0dbg.c-20081201061010-zymrrwrczns2vrex-363
storage/xtradb/ut/ut0list.c ut0list.c-20081201061010-zymrrwrczns2vrex-364
storage/xtradb/ut/ut0mem.c ut0mem.c-20081201061010-zymrrwrczns2vrex-365
storage/xtradb/ut/ut0rnd.c ut0rnd.c-20081201061010-zymrrwrczns2vrex-366
storage/xtradb/ut/ut0ut.c ut0ut.c-20081201061010-zymrrwrczns2vrex-367
storage/xtradb/ut/ut0vec.c ut0vec.c-20081201061010-zymrrwrczns2vrex-368
storage/xtradb/ut/ut0wqueue.c ut0wqueue.c-20081201061010-zymrrwrczns2vrex-369
strings/Makefile.am sp1f-makefile.am-19700101030959-jfitkanzc3r4h2otoyaaprgqn7muf4ux
strings/conf_to_src.c sp1f-conf_to_src.c-19700101030959-nvuvqe3jufdn2xi2v44sqkqtdpbbntah
strings/ctype-big5.c sp1f-ctypebig5.c-19700101030959-6cf5cz2yuk2totfrhn4wkbdnv2h7dq4b
strings/ctype-bin.c sp1f-ctypebin.c-20021023103022-yp52ewkogsbee4owkmbsigoo2qmhxsyw
strings/ctype-cp932.c sp1f-ctypecp932.c-20050112013139-evve6ejkfqxb5witjvxcnsrxp526tcuf
strings/ctype-czech.c sp1f-ctypeczech.c-19700101030959-fwxewpxo3ku6me4wnqcyhhbimr7pgbao
strings/ctype-euc_kr.c sp1f-ctypeeuc_kr.c-19700101030959-xtlkmcyvuckg2nfe6bxqxknimnib2ede
strings/ctype-eucjpms.c sp1f-ctypeeucjp_ms.c-20050112013139-g6o6gsnc6mipg6fk6gcn6hf5q54uvjc6
strings/ctype-extra.c sp1f-ctypeextra.c-20030129110807-75c3aglmos72axutct436sid7rpl7dpe
strings/ctype-gb2312.c sp1f-ctypegb2312.c-19700101030959-dxdbnfhbjfnuhqvk7r4oqdmzmxoy5cau
strings/ctype-gbk.c sp1f-ctypegbk.c-19700101030959-glit55deurqcrnqzs26kctev6hwtvk3f
strings/ctype-latin1.c sp1f-ctypelatin1.c-20030129133118-5vxg5x3t3iaskywfqp4xpe6xm63wpybx
strings/ctype-mb.c sp1f-ctypemb.c-20020312173754-rtl7oemmrocifpvc2z4og7rvep3jrhkh
strings/ctype-simple.c sp1f-ctypesimple.c-20020312173754-2nnl6235owml5myqwzsl3uzlhz72bwho
strings/ctype-sjis.c sp1f-ctypesjis.c-19700101030959-wee5mqv7qwhcc4x7jwhrmnyvzq3xfa5d
strings/ctype-tis620.c sp1f-ctypetis620.c-19700101030959-qphk64jej3b56zx33ubermglkaoasplq
strings/ctype-uca.c sp1f-ctypeuca.c-20040324121604-kwaskdasqzdrufymlf27j4gl3gwdy5fq
strings/ctype-ucs2.c sp1f-ctypeucs2.c-20030521102942-3fr4x6ti6jw6vqwdh7byhlxpu6oivdnn
strings/ctype-ujis.c sp1f-ctypeujis.c-19700101030959-qf5fzrgee4i2xz7tlr2qtzveandfhlpo
strings/ctype-utf8.c sp1f-ctypeutf8.c-20020328133143-7ldgrkcon3ejrongwc7hy4m63qddjsal
strings/ctype-win1250ch.c sp1f-ctypewin1250ch.c-20020417105712-fnmrblvlgis3o5sq3rxkqentnz2rkc2r
strings/ctype.c sp1f-ctype.c-19700101030959-kcyj7oyype5kohlxym7bzzw5go5qcmh4
strings/int2str.c sp1f-int2str.c-19700101030959-n4dtundq6ky54wd4qh3hkfjabt73ajhf
strings/my_vsnprintf.c sp1f-my_vsnprintf.c-19700101030959-tpt7gim7wclzegsmsqqysncmxhlmjrhp
strings/strmov.c sp1f-strmov.c-19700101030959-frzarqjtsxtbjwmkzysa6z3iqtmyz54o
support-files/MacOSX/ReadMe.txt sp1f-readme.txt-20071102002932-oqaazdag65tr7zn4vqtp6h2aywxyg754
support-files/Makefile.am sp1f-makefile.am-19700101030959-277rra4r5vvtfge67hzsdpbpm2puxyqy
support-files/binary-configure.sh sp1f-binaryconfigure.sh-19700101030959-brbiq3yf2mdlmehb4p77iqvjg535f4fs
support-files/compiler_warnings.supp sp1f-disabled_compiler_wa-20070110170439-wzgdkamsch2nrkgvcp2hytmquqeorohi
support-files/mysql.spec.sh sp1f-mysql.spec.sh-19700101030959-man6e3acwxvf62bdqvkpcpsvdtokf3ff
tests/mysql_client_test.c sp1f-client_test.c-20020614002636-eqy2zzksgelocknwbbogfuwxfwqy7q5x
unittest/mysys/Makefile.am sp1f-makefile.am-20060404161610-vihzdr4qjuef3o5tlkhxxs3o74qy7bln
unittest/mysys/waiting_threads-t.c waiting_threadst.c-20080623170213-r8baqa2porlpxzq1-5
vio/vio.c sp1f-vio.c-20010520120430-aw76h22ssarmssof7rplhty5elqiexku
vio/vio_priv.h sp1f-vio_priv.h-20030826235137-5sdl43z73qga2fo4s5g55pqqgyvkhbo7
vio/viosocket.c sp1f-viotcpip.c-20010520120437-u3pbzbt3fdfbclbmusalnzmuqh2y4nav
vio/viossl.c sp1f-viossl.c-20010520120431-amywaj3niiokylabjhaly7w33kgdifl6
vio/viosslfactories.c sp1f-viosslfactories.c-20010520120431-walfvbsc6adzg7cj5g6xl3r73ycxspmb
win/configure.js sp1f-configure.js-20060131135210-xvfnytwaxztc3ytr6pmdtutht4i26rdu
mysql-test/r/innodb_lock_wait_timeout_1.result bug40113.result-20090619150423-w3im08cym6tyzn8f-3
mysql-test/suite/rpl/r/rpl_row_tbl_metadata.result binlog_tbl_metadata.-20090512114928-2whj3n6g302nij5u-1
mysql-test/suite/rpl/t/rpl_row_tbl_metadata.test binlog_tbl_metadata.-20090512113345-zzqv0wdjojj5q8oq-1
mysql-test/t/innodb_lock_wait_timeout_1.test bug40113.test-20090619150423-w3im08cym6tyzn8f-2
Diff too large for email (252404 lines, the limit is 100000).
1
0
I've recently increased my down/up link to the Internet so checkouts should
be extremely fast. Builds on the hosts should obviously remain roughly the
same, but turnaround time due to download/upload bottlenecks should no
longer be an issue. The bots that should see an improvement are:
adutko-ultrasparc3
adutko-centos5-amd64
mariadb-brs
1
0
[Maria-developers] Updated (by Knielsen): Update packaging scripts for MariaDB 5.2 (88)
by worklog-noreply@askmonty.org 19 Mar '10
by worklog-noreply@askmonty.org 19 Mar '10
19 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Update packaging scripts for MariaDB 5.2
CREATION DATE..: Sat, 27 Feb 2010, 16:39
SUPERVISOR.....: Knielsen
IMPLEMENTOR....: Knielsen
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 88 (http://askmonty.org/worklog/?tid=88)
VERSION........: Server-5.2
STATUS.........: Complete
PRIORITY.......: 60
WORKED HOURS...: 10
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 30
PROGRESS NOTES:
-=-=(Knielsen - Fri, 19 Mar 2010, 08:05)=-=-
Status updated.
--- /tmp/wklog.88.old.4249 2010-03-19 08:05:09.000000000 +0000
+++ /tmp/wklog.88.new.4249 2010-03-19 08:05:09.000000000 +0000
@@ -1 +1 @@
-Assigned
+Complete
-=-=(Knielsen - Fri, 19 Mar 2010, 08:04)=-=-
Low Level Design modified.
--- /tmp/wklog.88.old.4242 2010-03-19 08:04:40.000000000 +0000
+++ /tmp/wklog.88.new.4242 2010-03-19 08:04:40.000000000 +0000
@@ -7,5 +7,3 @@
- Fix provides: / replaces: and similar to ensure proper upgrade from mysql
5.0/5.1 and mariadb 5.1.
- - Setup Buildbot upgrade test from MariaDB 5.1.42
-
-=-=(Knielsen - Fri, 19 Mar 2010, 08:04)=-=-
High Level Description modified.
--- /tmp/wklog.88.old.4232 2010-03-19 08:04:28.000000000 +0000
+++ /tmp/wklog.88.new.4232 2010-03-19 08:04:28.000000000 +0000
@@ -5,6 +5,4 @@
The .rpm also need to be checked.
-Buildbot needs to be updated to do the new upgrade tests (mariadb-5.1 ->
-mariadb 5.2)
-
+See also WL#108 and WL#109 for upgrade testing related to this.
-=-=(Knielsen - Fri, 19 Mar 2010, 08:03)=-=-
The .deb and .rpm scripts have been fixed, and packages are now working OK in Buildbot for 5.2.
I moved the upgrade testing to separate worklogs WL#108 and WL#109. It was decided that an enhanced
upgrade test is not a blocker for the 5.2 alpha releases (and I do not have time to work on it now).
Worked 10 hours and estimate 0 hours remain (original estimate decreased by 20 hours).
-=-=(Knielsen - Sat, 13 Mar 2010, 08:14)=-=-
Low Level Design modified.
--- /tmp/wklog.88.old.22266 2010-03-13 08:14:47.000000000 +0000
+++ /tmp/wklog.88.new.22266 2010-03-13 08:14:47.000000000 +0000
@@ -1 +1,11 @@
+Some of the tasks that need to be done.
+
+ - Setup a 5.2 version of .deb files and .rpm spec file.
+
+ - Rename 5.1->5.2 in relevant places.
+
+ - Fix provides: / replaces: and similar to ensure proper upgrade from mysql
+ 5.0/5.1 and mariadb 5.1.
+
+ - Setup Buildbot upgrade test from MariaDB 5.1.42
-=-=(Guest - Sat, 13 Mar 2010, 08:12)=-=-
Category updated.
--- /tmp/wklog.88.old.22167 2010-03-13 08:12:01.000000000 +0000
+++ /tmp/wklog.88.new.22167 2010-03-13 08:12:01.000000000 +0000
@@ -1 +1 @@
-Server-RawIdeaBin
+Server-Sprint
DESCRIPTION:
The packaging scripts need to be updated to work for MariaDB 5.2.
Currently, 5.2 package builds fail in Buildbot. The .debs are missing a
debian-5.2 subdirectory.
The .rpm packaging also needs to be checked.
See also WL#108 and WL#109 for upgrade testing related to this.
LOW-LEVEL DESIGN:
Some of the tasks that need to be done.
- Setup a 5.2 version of .deb files and .rpm spec file.
- Rename 5.1->5.2 in relevant places.
- Fix provides: / replaces: and similar to ensure proper upgrade from mysql
5.0/5.1 and mariadb 5.1.
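As an illustration only (not from the worklog itself; directory and file names
are hypothetical and a standard Debian packaging layout is assumed), the first
two items could amount to a small shell sketch like:

  # Hypothetical sketch -- not the actual MariaDB packaging change.
  # Create the missing debian-5.2 directory from the existing 5.1 one:
  cp -a debian-5.1 debian-5.2
  # Rename 5.1 -> 5.2 in the copied file names:
  for f in debian-5.2/*5.1*; do
    [ -e "$f" ] || continue
    mv "$f" "$(printf '%s' "$f" | sed 's/5\.1/5.2/g')"
  done
  # Bump version strings inside control/rules, which also covers most of the
  # Provides:/Replaces: lines (the remainder needs manual review):
  sed -i 's/-5\.1/-5.2/g' debian-5.2/control debian-5.2/rules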
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Knielsen): Update packaging scripts for MariaDB 5.2 (88)
by worklog-noreply@askmonty.org 19 Mar '10
by worklog-noreply@askmonty.org 19 Mar '10
19 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Update packaging scripts for MariaDB 5.2
CREATION DATE..: Sat, 27 Feb 2010, 16:39
SUPERVISOR.....: Knielsen
IMPLEMENTOR....: Knielsen
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 88 (http://askmonty.org/worklog/?tid=88)
VERSION........: Server-5.2
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 10
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 30
PROGRESS NOTES:
-=-=(Knielsen - Fri, 19 Mar 2010, 08:04)=-=-
Low Level Design modified.
--- /tmp/wklog.88.old.4242 2010-03-19 08:04:40.000000000 +0000
+++ /tmp/wklog.88.new.4242 2010-03-19 08:04:40.000000000 +0000
@@ -7,5 +7,3 @@
- Fix provides: / replaces: and similar to ensure proper upgrade from mysql
5.0/5.1 and mariadb 5.1.
- - Setup Buildbot upgrade test from MariaDB 5.1.42
-
-=-=(Knielsen - Fri, 19 Mar 2010, 08:04)=-=-
High Level Description modified.
--- /tmp/wklog.88.old.4232 2010-03-19 08:04:28.000000000 +0000
+++ /tmp/wklog.88.new.4232 2010-03-19 08:04:28.000000000 +0000
@@ -5,6 +5,4 @@
The .rpm also need to be checked.
-Buildbot needs to be updated to do the new upgrade tests (mariadb-5.1 ->
-mariadb 5.2)
-
+See also WL#108 and WL#109 for upgrade testing related to this.
-=-=(Knielsen - Fri, 19 Mar 2010, 08:03)=-=-
The .deb and .rpm scripts have been fixed, and packages are now working OK in Buildbot for 5.2.
I moved the upgrade testing to separate worklogs WL#108 and WL#109. It was decided that an enhanced
upgrade test is not a blocker for the 5.2 alpha releases (and I do not have time to work on it now).
Worked 10 hours and estimate 0 hours remain (original estimate decreased by 20 hours).
-=-=(Knielsen - Sat, 13 Mar 2010, 08:14)=-=-
Low Level Design modified.
--- /tmp/wklog.88.old.22266 2010-03-13 08:14:47.000000000 +0000
+++ /tmp/wklog.88.new.22266 2010-03-13 08:14:47.000000000 +0000
@@ -1 +1,11 @@
+Some of the tasks that need to be done.
+
+ - Setup a 5.2 version of .deb files and .rpm spec file.
+
+ - Rename 5.1->5.2 in relevant places.
+
+ - Fix provides: / replaces: and similar to ensure proper upgrade from mysql
+ 5.0/5.1 and mariadb 5.1.
+
+ - Setup Buildbot upgrade test from MariaDB 5.1.42
-=-=(Guest - Sat, 13 Mar 2010, 08:12)=-=-
Category updated.
--- /tmp/wklog.88.old.22167 2010-03-13 08:12:01.000000000 +0000
+++ /tmp/wklog.88.new.22167 2010-03-13 08:12:01.000000000 +0000
@@ -1 +1 @@
-Server-RawIdeaBin
+Server-Sprint
DESCRIPTION:
The packaging scripts need to be updated to work for MariaDB 5.2.
Currently, 5.2 package builds fail in Buildbot. The .debs are missing a
debian-5.2 subdirectory.
The .rpm packaging also needs to be checked.
See also WL#108 and WL#109 for upgrade testing related to this.
LOW-LEVEL DESIGN:
Some of the tasks that need to be done.
- Setup a 5.2 version of .deb files and .rpm spec file.
- Rename 5.1->5.2 in relevant places.
- Fix provides: / replaces: and similar to ensure proper upgrade from mysql
5.0/5.1 and mariadb 5.1.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Knielsen): Update packaging scripts for MariaDB 5.2 (88)
by worklog-noreply@askmonty.org 19 Mar '10
by worklog-noreply@askmonty.org 19 Mar '10
19 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Update packaging scripts for MariaDB 5.2
CREATION DATE..: Sat, 27 Feb 2010, 16:39
SUPERVISOR.....: Knielsen
IMPLEMENTOR....: Knielsen
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 88 (http://askmonty.org/worklog/?tid=88)
VERSION........: Server-5.2
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 10
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 30
PROGRESS NOTES:
-=-=(Knielsen - Fri, 19 Mar 2010, 08:04)=-=-
High Level Description modified.
--- /tmp/wklog.88.old.4232 2010-03-19 08:04:28.000000000 +0000
+++ /tmp/wklog.88.new.4232 2010-03-19 08:04:28.000000000 +0000
@@ -5,6 +5,4 @@
The .rpm also need to be checked.
-Buildbot needs to be updated to do the new upgrade tests (mariadb-5.1 ->
-mariadb 5.2)
-
+See also WL#108 and WL#109 for upgrade testing related to this.
-=-=(Knielsen - Fri, 19 Mar 2010, 08:03)=-=-
The .deb and .rpm scripts have been fixed, and packages are now working OK in Buildbot for 5.2.
I moved the upgrade testing to separate worklogs WL#108 and WL#109. It was decided that an enhanced
upgrade test is not a blocker for the 5.2 alpha releases (and I do not have time to work on it now).
Worked 10 hours and estimate 0 hours remain (original estimate decreased by 20 hours).
-=-=(Knielsen - Sat, 13 Mar 2010, 08:14)=-=-
Low Level Design modified.
--- /tmp/wklog.88.old.22266 2010-03-13 08:14:47.000000000 +0000
+++ /tmp/wklog.88.new.22266 2010-03-13 08:14:47.000000000 +0000
@@ -1 +1,11 @@
+Some of the tasks that need to be done.
+
+ - Setup a 5.2 version of .deb files and .rpm spec file.
+
+ - Rename 5.1->5.2 in relevant places.
+
+ - Fix provides: / replaces: and similar to ensure proper upgrade from mysql
+ 5.0/5.1 and mariadb 5.1.
+
+ - Setup Buildbot upgrade test from MariaDB 5.1.42
-=-=(Guest - Sat, 13 Mar 2010, 08:12)=-=-
Category updated.
--- /tmp/wklog.88.old.22167 2010-03-13 08:12:01.000000000 +0000
+++ /tmp/wklog.88.new.22167 2010-03-13 08:12:01.000000000 +0000
@@ -1 +1 @@
-Server-RawIdeaBin
+Server-Sprint
DESCRIPTION:
The packaging scripts need to be updated to work for MariaDB 5.2.
Currently, 5.2 package builds fail in Buildbot. The .debs are missing a
debian-5.2 subdirectory.
The .rpm packaging also needs to be checked.
See also WL#108 and WL#109 for upgrade testing related to this.
LOW-LEVEL DESIGN:
Some of the tasks that need to be done.
- Setup a 5.2 version of .deb files and .rpm spec file.
- Rename 5.1->5.2 in relevant places.
- Fix provides: / replaces: and similar to ensure proper upgrade from mysql
5.0/5.1 and mariadb 5.1.
- Setup Buildbot upgrade test from MariaDB 5.1.42
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Progress (by Knielsen): Update packaging scripts for MariaDB 5.2 (88)
by worklog-noreply@askmonty.org 19 Mar '10
by worklog-noreply@askmonty.org 19 Mar '10
19 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Update packaging scripts for MariaDB 5.2
CREATION DATE..: Sat, 27 Feb 2010, 16:39
SUPERVISOR.....: Knielsen
IMPLEMENTOR....: Knielsen
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 88 (http://askmonty.org/worklog/?tid=88)
VERSION........: Server-5.2
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 10
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 30
PROGRESS NOTES:
-=-=(Knielsen - Fri, 19 Mar 2010, 08:03)=-=-
The .deb and .rpm scripts have been fixed, and packages are now working OK in Buildbot for 5.2.
I moved the upgrade testing to separate worklogs WL#108 and WL#109. It was decided that an enhanced
upgrade test is not a blocker for the 5.2 alpha releases (and I do not have time to work on it now).
Worked 10 hours and estimate 0 hours remain (original estimate decreased by 20 hours).
-=-=(Knielsen - Sat, 13 Mar 2010, 08:14)=-=-
Low Level Design modified.
--- /tmp/wklog.88.old.22266 2010-03-13 08:14:47.000000000 +0000
+++ /tmp/wklog.88.new.22266 2010-03-13 08:14:47.000000000 +0000
@@ -1 +1,11 @@
+Some of the tasks that need to be done.
+
+ - Setup a 5.2 version of .deb files and .rpm spec file.
+
+ - Rename 5.1->5.2 in relevant places.
+
+ - Fix provides: / replaces: and similar to ensure proper upgrade from mysql
+ 5.0/5.1 and mariadb 5.1.
+
+ - Setup Buildbot upgrade test from MariaDB 5.1.42
-=-=(Guest - Sat, 13 Mar 2010, 08:12)=-=-
Category updated.
--- /tmp/wklog.88.old.22167 2010-03-13 08:12:01.000000000 +0000
+++ /tmp/wklog.88.new.22167 2010-03-13 08:12:01.000000000 +0000
@@ -1 +1 @@
-Server-RawIdeaBin
+Server-Sprint
DESCRIPTION:
The packaging scripts need to be updated to work for MariaDB 5.2.
Currently, 5.2 package builds fail in Buildbot. The .debs are missing a
debian-5.2 subdirectory.
The .rpm packaging also needs to be checked.
Buildbot needs to be updated to do the new upgrade tests (mariadb-5.1 ->
mariadb 5.2).
LOW-LEVEL DESIGN:
Some of the tasks that need to be done.
- Setup a 5.2 version of .deb files and .rpm spec file.
- Rename 5.1->5.2 in relevant places.
- Fix provides: / replaces: and similar to ensure proper upgrade from mysql
5.0/5.1 and mariadb 5.1.
- Setup Buildbot upgrade test from MariaDB 5.1.42
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Knielsen): Improved upgrade testing in Buildbot (109)
by worklog-noreply@askmonty.org 19 Mar '10
by worklog-noreply@askmonty.org 19 Mar '10
19 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Improved upgrade testing in Buildbot
CREATION DATE..: Fri, 19 Mar 2010, 07:56
SUPERVISOR.....: Knielsen
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Other
TASK ID........: 109 (http://askmonty.org/worklog/?tid=109)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 50 (hours remain)
ORIG. ESTIMATE.: 50
PROGRESS NOTES:
-=-=(Knielsen - Fri, 19 Mar 2010, 08:00)=-=-
High-Level Specification modified.
--- /tmp/wklog.109.old.3875 2010-03-19 08:00:03.000000000 +0000
+++ /tmp/wklog.109.new.3875 2010-03-19 08:00:03.000000000 +0000
@@ -1 +1,15 @@
+The main tasks here is to prepare sample data, and scripts to first insert the
+data, and second to verify the data. Possibly some existing sample databases
+could be used for this.
+There needs to be also developed or found some script/automatic way to check the
+contents of the database.
+
+Such scripts need to be included in the MariaDB source tree (and packages).
+
+Once this is available, the next step is to set up virtual kvm images with this
+sample data for MySQL and MariaDB, to replace the current images used for
+upgrade tests.
+
+It is then a simple matter to re-configure Buildbot to use this new more
+comprehensive upgrade test.
DESCRIPTION:
The current .deb upgrade test is very simplistic. It mainly tests that the .deb
package scripts pre/postinst work correctly.
The actual data check covers only a single table with a single value, and it
verifies only that the table is present. There is no check that the data itself
is preserved correctly.
It would be nice to extend this to include a set of example databases, covering
a reasonable range of commonly used column types, indexes, storage engines, etc.
It would also need some facility for checking that the data is correct after the
upgrade.
HIGH-LEVEL SPECIFICATION:
The main tasks here are to prepare sample data, plus scripts that first insert
the data and then verify it. Possibly some existing sample databases
could be used for this.
Some script or other automated way to check the contents of the database also
needs to be developed or found.
Such scripts need to be included in the MariaDB source tree (and packages).
Once this is available, the next step is to set up virtual kvm images with this
sample data for MySQL and MariaDB, to replace the current images used for
upgrade tests.
It is then a simple matter to re-configure Buildbot to use this new, more
comprehensive upgrade test.
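As an illustration only (not part of the worklog; the database name, table
definition and the expected_checksums.txt file are all hypothetical), such an
insert/verify script pair could be as small as:

  #!/bin/sh
  # insert_sample_data.sh -- hypothetical sketch; loads a tiny sample database
  mysql -u root -e "CREATE DATABASE IF NOT EXISTS upgrade_sample"
  mysql -u root -e "CREATE TABLE upgrade_sample.t1 (id INT PRIMARY KEY, txt VARCHAR(100), created DATETIME) ENGINE=MyISAM"
  mysql -u root -e "INSERT INTO upgrade_sample.t1 VALUES (1, 'hello', '2010-03-19 08:00:00')"

  # verify_sample_data.sh -- hypothetical sketch; run after the upgrade and
  # compare table checksums against values recorded before the upgrade
  mysql -u root -N -e "CHECKSUM TABLE upgrade_sample.t1" > /tmp/after.chk
  diff /tmp/after.chk expected_checksums.txt && echo "sample data preserved OK"

The real scripts would of course cover many more column types, storage engines
and index combinations, as the description asks for.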
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] New (by Knielsen): Improved upgrade testing in Buildbot (109)
by worklog-noreply@askmonty.org 19 Mar '10
by worklog-noreply@askmonty.org 19 Mar '10
19 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Improved upgrade testing in Buildbot
CREATION DATE..: Fri, 19 Mar 2010, 07:56
SUPERVISOR.....: Knielsen
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Other
TASK ID........: 109 (http://askmonty.org/worklog/?tid=109)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 50 (hours remain)
ORIG. ESTIMATE.: 50
PROGRESS NOTES:
DESCRIPTION:
The current .deb upgrade test is very simplistic. It mainly tests that the .deb
package scripts pre/postinst work correctly.
The actual data check covers only a single table with a single value, and it
verifies only that the table is present. There is no check that the data itself
is preserved correctly.
It would be nice to extend this to include a set of example databases, covering
a reasonable range of commonly used column types, indexes, storage engines, etc.
It would also need some facility for checking that the data is correct after the
upgrade.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Knielsen): Buildbot upgrade test from MariaDB 5.1.42->newest (108)
by worklog-noreply@askmonty.org 19 Mar '10
by worklog-noreply@askmonty.org 19 Mar '10
19 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Buildbot upgrade test from MariaDB 5.1.42->newest
CREATION DATE..: Fri, 19 Mar 2010, 07:48
SUPERVISOR.....: Knielsen
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Other
TASK ID........: 108 (http://askmonty.org/worklog/?tid=108)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 25 (hours remain)
ORIG. ESTIMATE.: 25
PROGRESS NOTES:
-=-=(Knielsen - Fri, 19 Mar 2010, 07:52)=-=-
Low Level Design modified.
--- /tmp/wklog.108.old.3290 2010-03-19 07:52:59.000000000 +0000
+++ /tmp/wklog.108.new.3290 2010-03-19 07:52:59.000000000 +0000
@@ -1 +1,11 @@
+Tasks needed to do this:
+
+ - For all the .deb kvm builders, set up a new virtual machine image. Based on
+ -serial or -install, pre-install the appropriate MariaDB 5.1.42 package
+ with some data. Similar to how the -upgrade images are set up with MySQL
+ pre-installed.
+
+ - Add a test step in the Buildbot configuration that runs another upgrade
+ test, similar to the existing upgrade test, but with the images with
+ MariaDB preinstalled rather than MySQL.
DESCRIPTION:
Buildbot currently does an automatic upgrade test for .debs from the official
MySQL 5.0/5.1 packages to the newest MariaDB package.
In addition to this, we need an upgrade test from an older MariaDB release.
Probably 5.1.42, the first stable release, is the one to use.
This is particularly important for the 5.2/5.3 trees, to test that the upgrade
from 5.1->5.2/5.3 works.
LOW-LEVEL DESIGN:
Tasks needed to do this:
- For all the .deb kvm builders, set up a new virtual machine image. Based on
-serial or -install, pre-install the appropriate MariaDB 5.1.42 package
with some data. Similar to how the -upgrade images are set up with MySQL
pre-installed.
- Add a test step in the Buildbot configuration that runs another upgrade
test, similar to the existing upgrade test, but with the images with
MariaDB preinstalled rather than MySQL.
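As a rough illustration (package file names and paths are hypothetical; the
real step would be driven by the Buildbot master and run inside the kvm
image), the extra upgrade-test step could boil down to something like:

  # Hypothetical sketch of the commands the new Buildbot step would run inside
  # a kvm image that already has MariaDB 5.1.42 and some sample data installed.
  set -e
  # Install the freshly built 5.2 packages over the preinstalled 5.1.42:
  sudo dpkg -i mariadb-*.deb || sudo apt-get -f -y install
  sudo /etc/init.d/mysql restart
  # Check that the server restarts and the pre-upgrade data is still readable:
  mysql -u root -e "SELECT COUNT(*) FROM upgrade_sample.t1"

(upgrade_sample.t1 here refers to the hypothetical sample table sketched under
WL#109 above.)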
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] New (by Knielsen): Buildbot upgrade test from MariaDB 5.1.42->newest (108)
by worklog-noreply@askmonty.org 19 Mar '10
by worklog-noreply@askmonty.org 19 Mar '10
19 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Buildbot upgrade test from MariaDB 5.1.42->newest
CREATION DATE..: Fri, 19 Mar 2010, 07:48
SUPERVISOR.....: Knielsen
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Other
TASK ID........: 108 (http://askmonty.org/worklog/?tid=108)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 25 (hours remain)
ORIG. ESTIMATE.: 25
PROGRESS NOTES:
DESCRIPTION:
Buildbot currently does an automatic upgrade test for .debs from the official
MySQL 5.0/5.1 packages to the newest MariaDB package.
In addition to this, we need an upgrade test from an older MariaDB release.
Probably 5.1.42, the first stable release, is the one to use.
This is particularly important for the 5.2/5.3 trees, to test that the upgrade
from 5.1->5.2/5.3 works.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] bzr commit into Mariadb 5.2, with Maria 2.0:maria/5.2 branch (igor:2746)
by Igor Babaev 19 Mar '10
by Igor Babaev 19 Mar '10
19 Mar '10
#At lp:maria/5.2 based on revid:igor@askmonty.org-20100319062332-4ghnaoaxcgagwj3m
2746 Igor Babaev 2010-03-19 [merge]
Merge
modified:
configure.in
mysql-test/Makefile.am
=== modified file 'configure.in'
--- a/configure.in 2010-03-15 11:51:23 +0000
+++ b/configure.in 2010-03-18 12:08:39 +0000
@@ -4,9 +4,6 @@ dnl Process this file with autoconf to p
# Minimum Autoconf version required.
AC_PREREQ(2.59)
-# Minimum Autoconf version required.
-AC_PREREQ(2.59)
-
# Remember to also update version.c in ndb.
# When changing major version number please also check switch statement
# in client/mysqlbinlog.cc / check_master_version().
@@ -15,7 +12,7 @@ AC_PREREQ(2.59)
# MySQL version number.
#
# Note: the following line must be parseable by win/configure.js:GetVersion()
-AC_INIT([MariaDB Server], [5.2.0-MariaDB], [], [mysql])
+AC_INIT([MariaDB Server], [5.2.0-MariaDB-alpha], [], [mysql])
AC_CONFIG_SRCDIR([sql/mysqld.cc])
AC_CANONICAL_SYSTEM
# USTAR format gives us the possibility to store longer path names in
=== modified file 'mysql-test/Makefile.am'
--- a/mysql-test/Makefile.am 2009-12-03 11:19:05 +0000
+++ b/mysql-test/Makefile.am 2010-03-18 12:08:39 +0000
@@ -102,7 +102,8 @@ TEST_DIRS = t r include std_data std_dat
suite/rpl_ndb suite/rpl_ndb/t suite/rpl_ndb/r \
suite/parts suite/parts/t suite/parts/r suite/parts/inc \
suite/pbxt/t suite/pbxt/r \
- suite/innodb suite/innodb/t suite/innodb/r suite/innodb/include
+ suite/innodb suite/innodb/t suite/innodb/r suite/innodb/include \
+ suite/vcol suite/vcol/t suite/vcol/r suite/vcol/inc
# Used by dist-hook and install-data-local to copy all
# test files into either dist or install directory
1
0
[Maria-developers] bzr commit into Mariadb 5.2, with Maria 2.0:maria/5.2 branch (igor:2745)
by Igor Babaev 19 Mar '10
by Igor Babaev 19 Mar '10
19 Mar '10
#At lp:maria/5.2 based on revid:igor@askmonty.org-20100317023231-w7h0euroof0lul8e
2745 Igor Babaev 2010-03-18
Made the vcol suite independent of the time zone.
modified:
mysql-test/suite/vcol/inc/vcol_supported_sql_funcs_main.inc
mysql-test/suite/vcol/r/vcol_supported_sql_funcs_innodb.result
mysql-test/suite/vcol/r/vcol_supported_sql_funcs_myisam.result
=== modified file 'mysql-test/suite/vcol/inc/vcol_supported_sql_funcs_main.inc'
--- a/mysql-test/suite/vcol/inc/vcol_supported_sql_funcs_main.inc 2009-10-16 22:57:48 +0000
+++ b/mysql-test/suite/vcol/inc/vcol_supported_sql_funcs_main.inc 2010-03-19 06:23:32 +0000
@@ -912,6 +912,7 @@ let $rows = 1;
let $cols = a long, b datetime as (from_unixtime(a));
let $values1 = 1196440219,default;
let $rows = 1;
+set time_zone='UTC';
--source suite/vcol/inc/vcol_supported_sql_funcs.inc
--echo # GET_FORMAT()
=== modified file 'mysql-test/suite/vcol/r/vcol_supported_sql_funcs_innodb.result'
--- a/mysql-test/suite/vcol/r/vcol_supported_sql_funcs_innodb.result 2010-03-17 02:32:31 +0000
+++ b/mysql-test/suite/vcol/r/vcol_supported_sql_funcs_innodb.result 2010-03-19 06:23:32 +0000
@@ -2194,6 +2194,7 @@ a b
drop table t1;
set sql_warnings = 0;
# FROM_UNIXTIME()
+set time_zone='UTC';
set sql_warnings = 1;
create table t1 (a long, b datetime as (from_unixtime(a)));
show create table t1;
@@ -2205,7 +2206,7 @@ t1 CREATE TABLE `t1` (
insert into t1 values (1196440219,default);
select * from t1;
a b
-1196440219 2007-11-30 08:30:19
+1196440219 2007-11-30 16:30:19
drop table t1;
set sql_warnings = 0;
# GET_FORMAT()
=== modified file 'mysql-test/suite/vcol/r/vcol_supported_sql_funcs_myisam.result'
--- a/mysql-test/suite/vcol/r/vcol_supported_sql_funcs_myisam.result 2010-03-17 02:32:31 +0000
+++ b/mysql-test/suite/vcol/r/vcol_supported_sql_funcs_myisam.result 2010-03-19 06:23:32 +0000
@@ -2194,6 +2194,7 @@ a b
drop table t1;
set sql_warnings = 0;
# FROM_UNIXTIME()
+set time_zone='UTC';
set sql_warnings = 1;
create table t1 (a long, b datetime as (from_unixtime(a)));
show create table t1;
@@ -2205,7 +2206,7 @@ t1 CREATE TABLE `t1` (
insert into t1 values (1196440219,default);
select * from t1;
a b
-1196440219 2007-11-30 08:30:19
+1196440219 2007-11-30 16:30:19
drop table t1;
set sql_warnings = 0;
# GET_FORMAT()
1
0
19 Mar '10
Hakan Kuecuekyilmaz <hakan(a)askmonty.org> writes:
> we had some emails from Mark Callaghan, Patrick Galbraith, and Brian Aker
> asking for an external branch of our sql-bench directory. They want to extend
> and enhance the sql-bench benchmark suite.
>
> After a short phone call with Monty, he told me that there is a way to export
> a directory structure with bzr in such a way that merging it back into our
> main tree would be easier.
>
> I checked http://askmonty.org/wiki/MariaDB::MergingMySQL but I could not find
> a hint there.
>
> Monty told me that we are doing something like that with the PBXT sources.
>
> Do you have an idea, what Monty meant?
I know how to take an external project and merge it into MariaDB. This uses
the "merge-into" bzr plugin.
However, I do not know how to go in the opposite direction, that is, take a
part of the MariaDB tree, split it out into a separate project, and still
allow merging it back in.
Unless we want to delete it from our tree, re-create it as a separate project,
and merge that external project back using merge-into. This would lose the
revision history (I think).
Also note that there seems to be a number of problems with this approach due
to this bug:
https://bugs.launchpad.net/bzr/+bug/375898
(For us, it works for xtradb, but not pbxt. Recently, some more people seem to
also be hit by this, including MySQL people.)
- Kristian.
1
0
[Maria-developers] Rev 2833: Fixed bug in view code when numeric reference in ORDER BY makes unusable View. in file:///Users/bell/maria/bzr/work-maria-5.1-view-order-bug/
by sanja@askmonty.org 18 Mar '10
by sanja@askmonty.org 18 Mar '10
18 Mar '10
At file:///Users/bell/maria/bzr/work-maria-5.1-view-order-bug/
------------------------------------------------------------
revno: 2833
revision-id: sanja(a)askmonty.org-20100318191914-wupwctzwixm1144h
parent: sergii(a)pisem.net-20100312190521-jw1nggiv4427l5sm
committer: sanja(a)askmonty.org
branch nick: work-maria-5.1-view-order-bug
timestamp: Thu 2010-03-18 21:19:14 +0200
message:
Fixed a bug in the view code where a numeric column reference in ORDER BY made the view unusable.
In the stored view representation we now print the expression instead of its numeric reference.
=== modified file 'mysql-test/r/view.result'
--- a/mysql-test/r/view.result 2010-02-10 19:06:24 +0000
+++ b/mysql-test/r/view.result 2010-03-18 19:19:14 +0000
@@ -3844,6 +3844,53 @@
ALTER TABLE v1;
DROP VIEW v1;
DROP TABLE t1;
+#
+# Maria Bug #???: ORDER BY column reference in view leads
+# to unusable view
+#
+create table t1 (a int, b int);
+insert into t1 values (2,70), (8, 30), (1, 20);
+create view v1 as select a, b from t1 order by 2;
+select v1.a from v1;
+a
+1
+8
+2
+show create view v1;
+View Create View character_set_client collation_connection
+v1 CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER VIEW `v1` AS select `t1`.`a` AS `a`,`t1`.`b` AS `b` from `t1` order by `t1`.`b` latin1 latin1_swedish_ci
+drop view v1;
+create view v1 as select a, b+3 as d from t1 order by 2;
+select v1.a from v1;
+a
+1
+8
+2
+show create view v1;
+View Create View character_set_client collation_connection
+v1 CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER VIEW `v1` AS select `t1`.`a` AS `a`,(`t1`.`b` + 3) AS `d` from `t1` order by (`t1`.`b` + 3) latin1 latin1_swedish_ci
+drop view v1;
+create view v1 (a,v) as select a, b+3 as d from t1 order by 2;
+select v1.a from v1;
+a
+1
+8
+2
+show create view v1;
+View Create View character_set_client collation_connection
+v1 CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER VIEW `v1` AS select `t1`.`a` AS `a`,(`t1`.`b` + 3) AS `v` from `t1` order by (`t1`.`b` + 3) latin1 latin1_swedish_ci
+drop view v1;
+create view v1 as select a, 3 as d from t1 order by 2;
+select v1.a from v1;
+a
+2
+8
+1
+show create view v1;
+View Create View character_set_client collation_connection
+v1 CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER VIEW `v1` AS select `t1`.`a` AS `a`,3 AS `d` from `t1` order by (2 + 0) latin1 latin1_swedish_ci
+drop view v1;
+drop table t1;
# -----------------------------------------------------------------
# -- End of 5.1 tests.
# -----------------------------------------------------------------
=== modified file 'mysql-test/t/view.test'
--- a/mysql-test/t/view.test 2010-02-10 19:06:24 +0000
+++ b/mysql-test/t/view.test 2010-03-18 19:19:14 +0000
@@ -3869,6 +3869,29 @@
DROP VIEW v1;
DROP TABLE t1;
+--echo #
+--echo # Maria Bug #???: ORDER BY column reference in view leads
+--echo # to unusable view
+--echo #
+create table t1 (a int, b int);
+insert into t1 values (2,70), (8, 30), (1, 20);
+create view v1 as select a, b from t1 order by 2;
+select v1.a from v1;
+show create view v1;
+drop view v1;
+create view v1 as select a, b+3 as d from t1 order by 2;
+select v1.a from v1;
+show create view v1;
+drop view v1;
+create view v1 (a,v) as select a, b+3 as d from t1 order by 2;
+select v1.a from v1;
+show create view v1;
+drop view v1;
+create view v1 as select a, 3 as d from t1 order by 2;
+select v1.a from v1;
+show create view v1;
+drop view v1;
+drop table t1;
--echo # -----------------------------------------------------------------
--echo # -- End of 5.1 tests.
=== modified file 'sql/item.cc'
--- a/sql/item.cc 2010-03-04 08:03:07 +0000
+++ b/sql/item.cc 2010-03-18 19:19:14 +0000
@@ -2410,7 +2410,7 @@
void Item_string::print(String *str, enum_query_type query_type)
{
- if (query_type == QT_ORDINARY && is_cs_specified())
+ if (query_type != QT_IS && is_cs_specified())
{
str->append('_');
str->append(collation.collation->csname);
@@ -2418,7 +2418,7 @@
str->append('\'');
- if (query_type == QT_ORDINARY ||
+ if (query_type != QT_IS ||
my_charset_same(str_value.charset(), system_charset_info))
{
str_value.print(str);
=== modified file 'sql/mysql_priv.h'
--- a/sql/mysql_priv.h 2010-03-04 08:03:07 +0000
+++ b/sql/mysql_priv.h 2010-03-18 19:19:14 +0000
@@ -52,12 +52,15 @@
QT_ORDINARY -- ordinary SQL query.
QT_IS -- SQL query to be shown in INFORMATION_SCHEMA (in utf8 and without
+ QT_VIEW_INTERNAL -- view internal representation (like QT_ORDINARY except
+ ORDER BY clause)
character set introducers).
*/
enum enum_query_type
{
QT_ORDINARY,
- QT_IS
+ QT_IS,
+ QT_VIEW_INTERNAL
};
/* TODO convert all these three maps to Bitmap classes */
=== modified file 'sql/sql_lex.cc'
--- a/sql/sql_lex.cc 2010-03-10 10:32:14 +0000
+++ b/sql/sql_lex.cc 2010-03-18 19:19:14 +0000
@@ -2057,9 +2057,27 @@
{
if (order->counter_used)
{
- char buffer[20];
- size_t length= my_snprintf(buffer, 20, "%d", order->counter);
- str->append(buffer, (uint) length);
+ if (query_type != QT_VIEW_INTERNAL)
+ {
+ char buffer[20];
+ size_t length= my_snprintf(buffer, 20, "%d", order->counter);
+ str->append(buffer, (uint) length);
+ }
+ else
+ {
+ /* replace numeric reference with expression */
+ if (order->item[0]->type() == Item::INT_ITEM &&
+ order->item[0]->basic_const_item())
+ {
+ char buffer[20];
+ size_t length= my_snprintf(buffer, 20, "%d", order->counter);
+ str->append(buffer, (uint) length);
+ /* make it expression instead of integer constant */
+ str->append(STRING_WITH_LEN("+0"));
+ }
+ else
+ (*order->item)->print(str, query_type);
+ }
}
else
(*order->item)->print(str, query_type);
@@ -2069,7 +2087,7 @@
str->append(',');
}
}
-
+
void st_select_lex::print_limit(THD *thd,
String *str,
=== modified file 'sql/sql_view.cc'
--- a/sql/sql_view.cc 2010-03-04 08:03:07 +0000
+++ b/sql/sql_view.cc 2010-03-18 19:19:14 +0000
@@ -814,7 +814,7 @@
ulong sql_mode= thd->variables.sql_mode & MODE_ANSI_QUOTES;
thd->variables.sql_mode&= ~MODE_ANSI_QUOTES;
- lex->unit.print(&view_query, QT_ORDINARY);
+ lex->unit.print(&view_query, QT_VIEW_INTERNAL);
lex->unit.print(&is_query, QT_IS);
thd->variables.sql_mode|= sql_mode;
1
0
[Maria-developers] bzr commit into Mariadb 5.2, with Maria 2.0:maria/5.2 branch (knielsen:2745)
by knielsen@knielsen-hq.org 18 Mar '10
by knielsen@knielsen-hq.org 18 Mar '10
18 Mar '10
#At lp:maria/5.2
2745 knielsen(a)knielsen-hq.org 2010-03-18
Fix merge errors in configure.in
Add vcol test suite to `make dist`.
modified:
configure.in
mysql-test/Makefile.am
=== modified file 'configure.in'
--- a/configure.in 2010-03-15 11:51:23 +0000
+++ b/configure.in 2010-03-18 12:08:39 +0000
@@ -4,9 +4,6 @@ dnl Process this file with autoconf to p
# Minimum Autoconf version required.
AC_PREREQ(2.59)
-# Minimum Autoconf version required.
-AC_PREREQ(2.59)
-
# Remember to also update version.c in ndb.
# When changing major version number please also check switch statement
# in client/mysqlbinlog.cc / check_master_version().
@@ -15,7 +12,7 @@ AC_PREREQ(2.59)
# MySQL version number.
#
# Note: the following line must be parseable by win/configure.js:GetVersion()
-AC_INIT([MariaDB Server], [5.2.0-MariaDB], [], [mysql])
+AC_INIT([MariaDB Server], [5.2.0-MariaDB-alpha], [], [mysql])
AC_CONFIG_SRCDIR([sql/mysqld.cc])
AC_CANONICAL_SYSTEM
# USTAR format gives us the possibility to store longer path names in
=== modified file 'mysql-test/Makefile.am'
--- a/mysql-test/Makefile.am 2009-12-03 11:19:05 +0000
+++ b/mysql-test/Makefile.am 2010-03-18 12:08:39 +0000
@@ -102,7 +102,8 @@ TEST_DIRS = t r include std_data std_dat
suite/rpl_ndb suite/rpl_ndb/t suite/rpl_ndb/r \
suite/parts suite/parts/t suite/parts/r suite/parts/inc \
suite/pbxt/t suite/pbxt/r \
- suite/innodb suite/innodb/t suite/innodb/r suite/innodb/include
+ suite/innodb suite/innodb/t suite/innodb/r suite/innodb/include \
+ suite/vcol suite/vcol/t suite/vcol/r suite/vcol/inc
# Used by dist-hook and install-data-local to copy all
# test files into either dist or install directory
1
0
[Maria-developers] bzr commit into Mariadb 5.2, with Maria 2.0:maria/5.2 branch (knielsen:2745)
by knielsen@knielsen-hq.org 18 Mar '10
by knielsen@knielsen-hq.org 18 Mar '10
18 Mar '10
#At lp:maria/5.2
2745 knielsen(a)knielsen-hq.org 2010-03-18
Fix merge errors in configure.in
modified:
configure.in
=== modified file 'configure.in'
--- a/configure.in 2010-03-15 11:51:23 +0000
+++ b/configure.in 2010-03-18 11:51:40 +0000
@@ -4,9 +4,6 @@ dnl Process this file with autoconf to p
# Minimum Autoconf version required.
AC_PREREQ(2.59)
-# Minimum Autoconf version required.
-AC_PREREQ(2.59)
-
# Remember to also update version.c in ndb.
# When changing major version number please also check switch statement
# in client/mysqlbinlog.cc / check_master_version().
@@ -15,7 +12,7 @@ AC_PREREQ(2.59)
# MySQL version number.
#
# Note: the following line must be parseable by win/configure.js:GetVersion()
-AC_INIT([MariaDB Server], [5.2.0-MariaDB], [], [mysql])
+AC_INIT([MariaDB Server], [5.2.0-MariaDB-alpha], [], [mysql])
AC_CONFIG_SRCDIR([sql/mysqld.cc])
AC_CANONICAL_SYSTEM
# USTAR format gives us the possibility to store longer path names in
1
0
Hi.
That's a rough list of enhancement ideas for worklog.
Some of them I'm going to do (but not all).
Feel free to suggest more or "get rid of WL, use X instead",
just don't forget the "because of" part.
report: easy-to-do tasks
make it provide a list of "low-hanging fruit": tasks that
could be done relatively easily and without extensive prior knowledge of
the MySQL source. For community members who want to help.
report: who's doing what NOW
that's pretty obvious
report: tasks for specific version, roadmap
worklog kind of does it now, mostly listed for
completeness
ability to remove hours
Somebody mentioned that the number of hours could be
increased by mistake, and there should be a way to decrease
it back. I'm not convinced it's a good idea, though.
generate weekly report templates
to reduce the need for double or triple reporting. WL generates a
weekly report for a developer, based on its data, and sends it to
this developer. The developer in question can edit it and send it to
reports@, or copy-paste from it, or filter it out and ignore it
completely.
private tasks and categories
There should be a way to have private categories and tasks in WL.
This includes fixing the "private" field in WL tasks.
distinguishing between employees and users
for many of the tasks above, WL needs to distinguish between MPAB
employees and other registered users.
better search
Sanja seems to be unhappy with WL search
make it more readable for novice wl readers
No big redesign or anything, though. But I think that moving the task
description up and, say, attached files and the estimated number of
hours down could help somewhat.
not editable unless authenticated
That's more a bug than a new feature: no changes to tasks should be
allowed unless the user is authenticated.
subscribe w/o authentication
But perhaps we may want to allow users to subscribe to tasks w/o
being authenticated? I'm not going to do that, though.
embeddable views
a couple of pages that could be easily inserted (iframe-ed?) into
other pages without disrupting the design too much. Mainly the "roadmap"
and "easy tasks" reports, but also "WL of the day".
"WL of the day" - it's a new crazy idea, I wanted to discuss.
Basically, that's a small block somewhere on the main page that shows a
randomly (?) selected WL task - only the description (or a part of it),
a link to a full task page, and voting controls (useless... very important).
Regards,
Sergei
6
10
[Maria-developers] bzr commit into Mariadb 5.2, with Maria 2.0:maria/5.2 branch (igor:2744) Bug#539643
by Igor Babaev 17 Mar '10
by Igor Babaev 17 Mar '10
17 Mar '10
#At lp:maria/5.2 based on revid:sergii@pisem.net-20100315115123-21tgprclhz7qbk6m
2744 Igor Babaev 2010-03-16
Fixed bug #539643.
The cause of the problem is a bad merge MariaDB-5.1=>MariaDB-5.2.
Added the vcol suite to the list of the default suites run
by mysql-test-run.pl.
modified:
mysql-test/mysql-test-run.pl
mysql-test/suite/vcol/r/vcol_supported_sql_funcs_innodb.result
mysql-test/suite/vcol/r/vcol_supported_sql_funcs_myisam.result
sql/field.cc
=== modified file 'mysql-test/mysql-test-run.pl'
--- a/mysql-test/mysql-test-run.pl 2010-03-10 10:32:14 +0000
+++ b/mysql-test/mysql-test-run.pl 2010-03-17 02:32:31 +0000
@@ -126,7 +126,7 @@ my $path_config_file; # The ge
# executables will be used by the test suite.
our $opt_vs_config = $ENV{'MTR_VS_CONFIG'};
-my $DEFAULT_SUITES= "main,binlog,federated,rpl,maria,parts";
+my $DEFAULT_SUITES= "main,binlog,federated,rpl,maria,parts,vcol";
my $opt_suites;
our $opt_verbose= 0; # Verbose output, enable with --verbose
=== modified file 'mysql-test/suite/vcol/r/vcol_supported_sql_funcs_innodb.result'
--- a/mysql-test/suite/vcol/r/vcol_supported_sql_funcs_innodb.result 2009-10-16 22:57:48 +0000
+++ b/mysql-test/suite/vcol/r/vcol_supported_sql_funcs_innodb.result 2010-03-17 02:32:31 +0000
@@ -2205,7 +2205,7 @@ t1 CREATE TABLE `t1` (
insert into t1 values (1196440219,default);
select * from t1;
a b
-1196440219 2007-11-30 19:30:19
+1196440219 2007-11-30 08:30:19
drop table t1;
set sql_warnings = 0;
# GET_FORMAT()
=== modified file 'mysql-test/suite/vcol/r/vcol_supported_sql_funcs_myisam.result'
--- a/mysql-test/suite/vcol/r/vcol_supported_sql_funcs_myisam.result 2009-10-16 22:57:48 +0000
+++ b/mysql-test/suite/vcol/r/vcol_supported_sql_funcs_myisam.result 2010-03-17 02:32:31 +0000
@@ -2205,7 +2205,7 @@ t1 CREATE TABLE `t1` (
insert into t1 values (1196440219,default);
select * from t1;
a b
-1196440219 2007-11-30 19:30:19
+1196440219 2007-11-30 08:30:19
drop table t1;
set sql_warnings = 0;
# GET_FORMAT()
=== modified file 'sql/field.cc'
--- a/sql/field.cc 2010-03-15 11:51:23 +0000
+++ b/sql/field.cc 2010-03-17 02:32:31 +0000
@@ -9598,13 +9598,13 @@ bool Create_field::init(THD *thd, char *
interval_list.empty();
comment= *fld_comment;
+ vcol_info= fld_vcol_info;
stored_in_db= TRUE;
/* Initialize data for a computed field */
if ((uchar)fld_type == (uchar)MYSQL_TYPE_VIRTUAL)
{
DBUG_ASSERT(vcol_info && vcol_info->expr_item);
- vcol_info= fld_vcol_info;
stored_in_db= vcol_info->is_stored();
/*
Walk through the Item tree checking if all items are valid
@@ -9624,8 +9624,6 @@ bool Create_field::init(THD *thd, char *
*/
sql_type= fld_type= vcol_info->get_real_type();
}
- else
- vcol_info= NULL;
/*
Set NO_DEFAULT_VALUE_FLAG if this field doesn't have a default value and
1
0
16 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Phone home
CREATION DATE..: Wed, 01 Apr 2009, 15:30
SUPERVISOR.....: Knielsen
IMPLEMENTOR....: Bothorsen
COPIES TO......: Sergei
CATEGORY.......: Server-RawIdeaBin
TASK ID........: 12 (http://askmonty.org/worklog/?tid=12)
VERSION........: Connector/.NET-1.6
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 120 (hours remain)
ORIG. ESTIMATE.: 120
PROGRESS NOTES:
-=-=(Serg - Tue, 16 Mar 2010, 19:51)=-=-
Observers changed: Sergei
-=-=(Monty - Tue, 09 Mar 2010, 19:28)=-=-
High Level Description modified.
--- /tmp/wklog.12.old.27527 2010-03-09 19:28:51.000000000 +0000
+++ /tmp/wklog.12.new.27527 2010-03-09 19:28:51.000000000 +0000
@@ -2,10 +2,10 @@
a "phone home" feature. When this plugin is installed, the database
server will regularly contact a web service operated by Monty Program Ab, and
upload a bundle of non-sensitive information. The collection and
-analysis of this information will give MySQL AB useful insight into
+analysis of this information will give MariaDB developers useful insight into
the user base
-Summary of collected information
+Summary of collected information (that anyone will be allowed to access)
how many servers are running the plugin
- can help estimate total number of running servers worldwide
what platform & hardware they are running on
@@ -28,7 +28,7 @@
gives among other things, the MySQL server version & Server uptime
(optionally by user) geographic location
(optionally by user) user information / company name
- (optionally by user) MySQL customer support contract id
+ (optionally by user) Monty Program Ab customer support contract id
Data Not Sent
Contents or names of any user database
-=-=(Monty - Tue, 09 Mar 2010, 19:25)=-=-
High Level Description modified.
--- /tmp/wklog.12.old.27502 2010-03-09 19:25:56.000000000 +0000
+++ /tmp/wklog.12.new.27502 2010-03-09 19:25:56.000000000 +0000
@@ -1,6 +1,6 @@
-This project is to develop a plugin for the MySQL server that provides
+This project is to develop a plugin for the MariaDB server that provides
a "phone home" feature. When this plugin is installed, the database
-server will regularly contact a web service operated by MySQL AB, and
+server will regularly contact a web service operated by Monty Program Ab, and
upload a bundle of non-sensitive information. The collection and
analysis of this information will give MySQL AB useful insight into
the user base
@@ -32,10 +32,10 @@
Data Not Sent
Contents or names of any user database
- Anything that allows MySQL to track down the user
+ Anything that allows MariaDB to track down the user
if the user doesn't explicitly permit it
-What will run at MySQL's datacenter
+What will run at Monty Program's or/and the users datacenter
simple CGI on Apache
takes a HTTP REST PUT
insert received information into a database
DESCRIPTION:
This project is to develop a plugin for the MariaDB server that provides
a "phone home" feature. When this plugin is installed, the database
server will regularly contact a web service operated by Monty Program Ab, and
upload a bundle of non-sensitive information. The collection and
analysis of this information will give MariaDB developers useful insight into
the user base
Summary of collected information (that anyone will be allowed to access)
how many servers are running the plugin
- can help estimate total number of running servers worldwide
what platform & hardware they are running on
what version and build they are running
what features are being used
Information sent from each instance
A unique server identifier
secure hash of MAC address + listening port
unique, but doesn't leak customer data
Processor type, speed, processor count / core count, bitwidth (32/64)
OS / Distro / Kernel id and version
Which storage engines are in use
Number and size of databases (disk space, probably can't for cluster)
Counts / Rates of I/O activity
List of loaded plugins
SHOW STATUS
SHOW VARIABLES (but not anything that can give away user identity)
will be an explicit list of what variables will be shown
gives, among other things, the MySQL server version & server uptime
(optionally by user) geographic location
(optionally by user) user information / company name
(optionally by user) Monty Program Ab customer support contract id
Data Not Sent
Contents or names of any user database
Anything that allows MariaDB to track down the user
if the user doesn't explicitly permit it
What will run at Monty Program's and/or the users' datacenter
simple CGI on Apache
takes a HTTP REST PUT
insert received information into a database
database schema is TBD, but not complicated
Analysis/Reporting/Business Process are TBD
What will run on the users' machines
Daemon Plugin module
can be dynamically loaded, or statically compiled
runs as part of the mysqld process
daemon plugins have full access to server internals
Start and use its own thread
will not block normal operation
will yield often
will hold read mutexes for as short a time as possible
Loop and delay on an interval (specified by option)
probably default to be on server restart and about once a week
randomly spread time to avoid too many calling in at the same moment
Gather data from internal mysqld data structures
Convert data into simple text format (human readable)
Transmit data via HTTP REST POST to one or more given URLs
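The design above is a daemon plugin running inside mysqld, so the following is
only a hedged illustration of the final "gather data and POST it" step, done
from the shell (the URL, file name and variable selection are invented):

  # Hypothetical sketch of the gather-and-upload step; the actual worklog
  # design performs this inside a mysqld daemon plugin, not a script.
  mysql -u root -N -e 'SHOW GLOBAL STATUS' > /tmp/phonehome.txt
  mysql -u root -N -e "SHOW VARIABLES LIKE 'version%'" >> /tmp/phonehome.txt
  # One HTTP REST POST per configured URL (example.org is a placeholder):
  curl -X POST --data-binary @/tmp/phonehome.txt http://phonehome.example.org/upload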
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
16 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Phone home
CREATION DATE..: Wed, 01 Apr 2009, 15:30
SUPERVISOR.....: Knielsen
IMPLEMENTOR....: Bothorsen
COPIES TO......: Sergei
CATEGORY.......: Server-RawIdeaBin
TASK ID........: 12 (http://askmonty.org/worklog/?tid=12)
VERSION........: Connector/.NET-1.6
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 120 (hours remain)
ORIG. ESTIMATE.: 120
PROGRESS NOTES:
-=-=(Serg - Tue, 16 Mar 2010, 19:51)=-=-
Observers changed: Sergei
-=-=(Monty - Tue, 09 Mar 2010, 19:28)=-=-
High Level Description modified.
--- /tmp/wklog.12.old.27527 2010-03-09 19:28:51.000000000 +0000
+++ /tmp/wklog.12.new.27527 2010-03-09 19:28:51.000000000 +0000
@@ -2,10 +2,10 @@
a "phone home" feature. When this plugin is installed, the database
server will regularly contact a web service operated by Monty Program Ab, and
upload a bundle of non-sensitive information. The collection and
-analysis of this information will give MySQL AB useful insight into
+analysis of this information will give MariaDB developers useful insight into
the user base
-Summary of collected information
+Summary of collected information (that anyone will be allowed to access)
how many servers are running the plugin
- can help estimate total number of running servers worldwide
what platform & hardware they are running on
@@ -28,7 +28,7 @@
gives among other things, the MySQL server version & Server uptime
(optionally by user) geographic location
(optionally by user) user information / company name
- (optionally by user) MySQL customer support contract id
+ (optionally by user) Monty Program Ab customer support contract id
Data Not Sent
Contents or names of any user database
-=-=(Monty - Tue, 09 Mar 2010, 19:25)=-=-
High Level Description modified.
--- /tmp/wklog.12.old.27502 2010-03-09 19:25:56.000000000 +0000
+++ /tmp/wklog.12.new.27502 2010-03-09 19:25:56.000000000 +0000
@@ -1,6 +1,6 @@
-This project is to develop a plugin for the MySQL server that provides
+This project is to develop a plugin for the MariaDB server that provides
a "phone home" feature. When this plugin is installed, the database
-server will regularly contact a web service operated by MySQL AB, and
+server will regularly contact a web service operated by Monty Program Ab, and
upload a bundle of non-sensitive information. The collection and
analysis of this information will give MySQL AB useful insight into
the user base
@@ -32,10 +32,10 @@
Data Not Sent
Contents or names of any user database
- Anything that allows MySQL to track down the user
+ Anything that allows MariaDB to track down the user
if the user doesn't explicitly permit it
-What will run at MySQL's datacenter
+What will run at Monty Program's or/and the users datacenter
simple CGI on Apache
takes a HTTP REST PUT
insert received information into a database
DESCRIPTION:
This project is to develop a plugin for the MariaDB server that provides
a "phone home" feature. When this plugin is installed, the database
server will regularly contact a web service operated by Monty Program Ab, and
upload a bundle of non-sensitive information. The collection and
analysis of this information will give MariaDB developers useful insight into
the user base
Summary of collected information (that anyone will be allowed to access)
how many servers are running the plugin
- can help estimate total number of running servers worldwide
what platform & hardware they are running on
what version and build they are running
what features are being used
Information sent from each instance
A unique server identifier
secure hash of MAC address + listening port
unique, but doesn't leak customer data
Processor type, speed, processor count / core count, bitwidth (32/64)
OS / Distro / Kernel id and version
Which storage engines are in use
Number and size of databases (disk space, probably can't for cluster)
Counts / Rates of I/O activity
List of loaded plugins
SHOW STATUS
SHOW VARIABLES (but not anything that can give away user identity)
will be an explicit list of what variables will be shown
gives, among other things, the MySQL server version & server uptime
(optionally by user) geographic location
(optionally by user) user information / company name
(optionally by user) Monty Program Ab customer support contract id
Data Not Sent
Contents or names of any user database
Anything that allows MariaDB to track down the user
if the user doesn't explicitly permit it
What will run at Monty Program's and/or the users' datacenter
simple CGI on Apache
takes a HTTP REST PUT
insert received information into a database
database schema is TBD, but not complicated
Analysis/Reporting/Business Process are TBD
What will run on the users' machines
Daemon Plugin module
can be dynamically loaded, or statically compiled
runs as part of the mysqld process
daemon plugins have full access to server internals
Start and use its own thread
will not block normal operation
will yield often
will hold read mutexes for as short a time as possible
Loop and delay on an interval (specified by option)
probably default to be on server restart and about once a week
randomly spread time to avoid too many calling in at the same moment
Gather data from internal mysqld data structures
Convert data into simple text format (human readable)
Transmit data via HTTP REST POST to one or more given URLs
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
16 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Phone home
CREATION DATE..: Wed, 01 Apr 2009, 15:30
SUPERVISOR.....: Knielsen
IMPLEMENTOR....: Bothorsen
COPIES TO......: Sergei
CATEGORY.......: Server-RawIdeaBin
TASK ID........: 12 (http://askmonty.org/worklog/?tid=12)
VERSION........: Connector/.NET-1.6
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 120 (hours remain)
ORIG. ESTIMATE.: 120
PROGRESS NOTES:
-=-=(Serg - Tue, 16 Mar 2010, 19:51)=-=-
Observers changed: Sergei
-=-=(Monty - Tue, 09 Mar 2010, 19:28)=-=-
High Level Description modified.
--- /tmp/wklog.12.old.27527 2010-03-09 19:28:51.000000000 +0000
+++ /tmp/wklog.12.new.27527 2010-03-09 19:28:51.000000000 +0000
@@ -2,10 +2,10 @@
a "phone home" feature. When this plugin is installed, the database
server will regularly contact a web service operated by Monty Program Ab, and
upload a bundle of non-sensitive information. The collection and
-analysis of this information will give MySQL AB useful insight into
+analysis of this information will give MariaDB developers useful insight into
the user base
-Summary of collected information
+Summary of collected information (that anyone will be allowed to access)
how many servers are running the plugin
- can help estimate total number of running servers worldwide
what platform & hardware they are running on
@@ -28,7 +28,7 @@
gives among other things, the MySQL server version & Server uptime
(optionally by user) geographic location
(optionally by user) user information / company name
- (optionally by user) MySQL customer support contract id
+ (optionally by user) Monty Program Ab customer support contract id
Data Not Sent
Contents or names of any user database
-=-=(Monty - Tue, 09 Mar 2010, 19:25)=-=-
High Level Description modified.
--- /tmp/wklog.12.old.27502 2010-03-09 19:25:56.000000000 +0000
+++ /tmp/wklog.12.new.27502 2010-03-09 19:25:56.000000000 +0000
@@ -1,6 +1,6 @@
-This project is to develop a plugin for the MySQL server that provides
+This project is to develop a plugin for the MariaDB server that provides
a "phone home" feature. When this plugin is installed, the database
-server will regularly contact a web service operated by MySQL AB, and
+server will regularly contact a web service operated by Monty Program Ab, and
upload a bundle of non-sensitive information. The collection and
analysis of this information will give MySQL AB useful insight into
the user base
@@ -32,10 +32,10 @@
Data Not Sent
Contents or names of any user database
- Anything that allows MySQL to track down the user
+ Anything that allows MariaDB to track down the user
if the user doesn't explicitly permit it
-What will run at MySQL's datacenter
+What will run at Monty Program's or/and the users datacenter
simple CGI on Apache
takes a HTTP REST PUT
insert received information into a database
DESCRIPTION:
This project is to develop a plugin for the MariaDB server that provides
a "phone home" feature. When this plugin is installed, the database
server will regularly contact a web service operated by Monty Program Ab, and
upload a bundle of non-sensitive information. The collection and
analysis of this information will give MariaDB developers useful insight into
the user base
Summary of collected information (that anyone will be allowed to access)
how many servers are running the plugin
- can help estimate total number of running servers worldwide
what platform & hardware they are running on
what version and build they are running
what features are being used
Information sent from each instance
A unique server identifier
secure hash of MAC address + listening port
unique, but doesn't leak customer data
Processor type, speed, processor count / core count, bitwidth (32/64)
OS / Distro / Kernel id and version
Which storage engines are in use
Number and size of databases (disk space, probably can't for cluster)
Counts / Rates of I/O activity
List of loaded plugins
SHOW STATUS
SHOW VARIABLES (but not anything that can give away user identity)
will be an explicit list of which variables will be shown
gives among other things, the MySQL server version & Server uptime
(optionally by user) geographic location
(optionally by user) user information / company name
(optionally by user) Monty Program Ab customer support contract id
Data Not Sent
Contents or names of any user database
Anything that allows MariaDB to track down the user
if the user doesn't explicitly permit it
What will run at Monty Program's and/or the user's datacenter
simple CGI on Apache
takes an HTTP REST PUT
inserts received information into a database
database schema is TBD, but not complicated
Analysis/Reporting/Business Process are TBD
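As a rough illustration of this collector side (the worklog only specifies Apache, a CGI and a database, and explicitly leaves the schema TBD), a minimal Python CGI sketch that stores the received text bundle; the file path and table layout are made up for the example:

#!/usr/bin/env python
# Hypothetical collector-side CGI: reads the uploaded text bundle from the
# request body and stores it with a timestamp.  Paths and schema are
# illustrative only; the real schema is TBD in this worklog.
import os, sys, time, sqlite3

def main():
    length = int(os.environ.get("CONTENT_LENGTH") or 0)
    body = sys.stdin.read(length)                              # the uploaded report
    conn = sqlite3.connect("/var/lib/phone_home/reports.db")   # made-up path
    conn.execute("CREATE TABLE IF NOT EXISTS reports "
                 "(received_at REAL, remote_addr TEXT, payload TEXT)")
    conn.execute("INSERT INTO reports VALUES (?, ?, ?)",
                 (time.time(), os.environ.get("REMOTE_ADDR", ""), body))
    conn.commit()
    conn.close()
    sys.stdout.write("Status: 200 OK\r\nContent-Type: text/plain\r\n\r\nok\n")

if __name__ == "__main__":
    main()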
What will run on the users' machines
Daemon Plugin module
can be dynamically loaded, or statically compiled
runs as part of the mysqld process
daemon plugins have full access to server internals
Start and use its own thread
will not block normal operation
will yield often
will hold read mutexes for as short a time as possible
Loop and delay on an interval (specified by option)
probably default to reporting on server restart and about once a week
randomly spread the reporting time to avoid too many servers calling in at the same moment
Gather data from internal mysqld data structures
Convert data into simple text format (human readable)
Transmit data via HTTP REST POST to one or more given URLs
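To make the reporting loop above concrete, a small Python sketch follows (the actual plugin would be C code running inside mysqld); the URL, port, interval and collected fields are placeholders, not part of this worklog:

import hashlib, random, time, uuid
import urllib.request

REPORT_URL = "https://example.org/phone_home"   # placeholder collector URL
LISTEN_PORT = 3306
WEEK = 7 * 24 * 3600

def server_id():
    # "secure hash of MAC address + listening port": unique, but does not
    # leak customer data by itself
    mac = uuid.getnode().to_bytes(6, "big")
    return hashlib.sha256(mac + str(LISTEN_PORT).encode()).hexdigest()

def collect_report():
    # the real plugin would read mysqld's internal data structures;
    # here we only build a human-readable key: value text bundle
    lines = ["server_id: " + server_id(),
             "version: 5.2.0-example",
             "uptime_seconds: 12345"]
    return "\n".join(lines).encode()

def report_loop():
    while True:
        req = urllib.request.Request(REPORT_URL, data=collect_report(),
                                     method="POST")
        urllib.request.urlopen(req)
        # sleep roughly a week, with random spread so that servers started
        # at the same time do not all call in at the same moment
        time.sleep(WEEK + random.uniform(0, 24 * 3600))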
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Igor): Partitioned Key Cache for MyISAM (85)
by worklog-noreply@askmonty.org 16 Mar '10
16 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Partitioned Key Cache for MyISAM
CREATION DATE..: Sun, 14 Feb 2010, 00:10
SUPERVISOR.....: Monty
IMPLEMENTOR....:
COPIES TO......: Igor, Monty, Sergei
CATEGORY.......: Server-Sprint
TASK ID........: 85 (http://askmonty.org/worklog/?tid=85)
VERSION........: Server-5.2
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 80 (hours remain)
ORIG. ESTIMATE.: 80
PROGRESS NOTES:
-=-=(Igor - Tue, 16 Mar 2010, 19:34)=-=-
High Level Description modified.
--- /tmp/wklog.85.old.22371 2010-03-16 19:34:33.000000000 +0000
+++ /tmp/wklog.85.new.22371 2010-03-16 19:34:33.000000000 +0000
@@ -15,4 +15,5 @@
the chances for threads not compete for the same key cache lock better.
The idea and the original of the partitioned key cache was provided by one of
-our external contributers.
+our external contributers (see the attached file segmented_keycache_v2.diff with
+the original patch from the contributor).
-=-=(Igor - Sun, 14 Feb 2010, 00:15)=-=-
Category updated.
--- /tmp/wklog.85.old.9810 2010-02-13 22:15:43.000000000 +0000
+++ /tmp/wklog.85.new.9810 2010-02-13 22:15:43.000000000 +0000
@@ -1 +1 @@
-Server-BackLog
+Server-Sprint
-=-=(Igor - Sun, 14 Feb 2010, 00:15)=-=-
Version updated.
--- /tmp/wklog.85.old.9810 2010-02-13 22:15:43.000000000 +0000
+++ /tmp/wklog.85.new.9810 2010-02-13 22:15:43.000000000 +0000
@@ -1 +1 @@
-Benchmarks-3.0
+Server-5.2
-=-=(Igor - Sun, 14 Feb 2010, 00:12)=-=-
New attachment: 'segmented_keycache_v2.diff'
DESCRIPTION:
A partitioned key cache is a collection of structures for regular MyISAM key
caches called key cache partitions. Any page from a file can be placed into a
buffer of only one partition. The number of the partition is calculated from the
file number and the position of the page in the file, and it is always the same
for the page. The function that maps pages into partitions takes care of even
distribution of pages among partitions.
A partitioned key cache mitigates one of the major problems of the simple key cache:
thread contention for the key cache lock (mutex). Every call of a key cache
interface function must acquire this lock. So threads compete for this lock even
when they have acquired shared locks for the file and the pages they want to read
are already in the key cache buffers. When working with a partitioned key cache,
any key cache interface function that needs only one page has to acquire
the key cache lock only for the partition the page is ascribed to. This reduces
the chance that threads compete for the same key cache lock.
The idea and the original patch for the partitioned key cache were provided by one of
our external contributors (see the attached file segmented_keycache_v2.diff with
the original patch from the contributor).
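A toy sketch of the page-to-partition mapping described above (the hash below is illustrative only; the real function is defined in the attached segmented_keycache_v2.diff):

def key_cache_partition(file_number, page_offset, block_size, n_partitions):
    # The partition depends only on the file number and the page's position,
    # so a given page always maps to the same partition; adding the page
    # number spreads consecutive pages across partitions, and the multiplier
    # decorrelates different files.
    page_number = page_offset // block_size
    return (file_number * 2654435761 + page_number) % n_partitions

# Example: consecutive pages of file 42 land in different partitions,
# but repeating the call for the same page always gives the same answer.
for page in range(4):
    print(key_cache_partition(42, page * 1024, 1024, 8))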
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
16 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Unused
CREATION DATE..: Sun, 14 Feb 2010, 00:17
SUPERVISOR.....: Monty
IMPLEMENTOR....: Igor
COPIES TO......: Igor, Monty, Sergei
CATEGORY.......: Server-Sprint
TASK ID........: 86 (http://askmonty.org/worklog/?tid=86)
VERSION........: Server-5.2
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 80 (hours remain)
ORIG. ESTIMATE.: 80
PROGRESS NOTES:
-=-=(Igor - Tue, 16 Mar 2010, 19:30)=-=-
Title modified.
--- /tmp/wklog.86.old.22309 2010-03-16 19:30:04.000000000 +0000
+++ /tmp/wklog.86.new.22309 2010-03-16 19:30:04.000000000 +0000
@@ -1 +1 @@
-Partitioned Key Cache for MyISAM
+Unused
-=-=(Igor - Tue, 16 Mar 2010, 19:29)=-=-
High Level Description modified.
--- /tmp/wklog.86.old.22292 2010-03-16 19:29:37.000000000 +0000
+++ /tmp/wklog.86.new.22292 2010-03-16 19:29:37.000000000 +0000
@@ -1,19 +1 @@
-A partitioned key cache is a collection of structures for regular MyiSAM key
-caches called key cache partitions. Any page from a file can be placed into a
-buffer of only one partition. The number of the partition is calculated from the
-file number and the position of the page in the file, and it's always the same
-for the page. The function that maps pages into partitions takes care of even
-distribution of pages among partitions.
-Partition key cache mitigate one of the major problem of simple key cache:
-thread contention for key cache lock (mutex). Every call of a key cache
-interface function must acquire this lock. So threads compete for this lock even
-in the case when they have acquired shared locks for the file and pages they
-want read from are in the key cache buffers. When working with a partitioned key
-cache any key cache interface function that needs only one page has to acquire
-the key cache lock only for the partition the page is ascribed to. This makes
-the chances for threads not compete for the same key cache lock better.
-
-The idea and the original of the partitioned key cache was provided by one of
-our external contributers (see the attached file segmented_keycache_v2.diff with
-the original patch from the contributor).
-=-=(Igor - Sun, 14 Feb 2010, 00:19)=-=-
Privacy level updated.
--- /tmp/wklog.86.old.10092 2010-02-13 22:19:03.000000000 +0000
+++ /tmp/wklog.86.new.10092 2010-02-13 22:19:03.000000000 +0000
@@ -1 +1 @@
-y
+n
-=-=(Igor - Sun, 14 Feb 2010, 00:19)=-=-
Category updated.
--- /tmp/wklog.86.old.10092 2010-02-13 22:19:03.000000000 +0000
+++ /tmp/wklog.86.new.10092 2010-02-13 22:19:03.000000000 +0000
@@ -1 +1 @@
-Server-BackLog
+Server-Sprint
-=-=(Igor - Sun, 14 Feb 2010, 00:18)=-=-
Version updated.
--- /tmp/wklog.86.old.10044 2010-02-14 00:18:31.000000000 +0200
+++ /tmp/wklog.86.new.10044 2010-02-14 00:18:31.000000000 +0200
@@ -1 +1 @@
-Benchmarks-3.0
+Server-5.2
DESCRIPTION:
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Igor): Partitioned Key Cache for MyISAM (86)
by worklog-noreply@askmonty.org 16 Mar '10
16 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Partitioned Key Cache for MyISAM
CREATION DATE..: Sun, 14 Feb 2010, 00:17
SUPERVISOR.....: Monty
IMPLEMENTOR....: Igor
COPIES TO......: Igor, Monty, Sergei
CATEGORY.......: Server-Sprint
TASK ID........: 86 (http://askmonty.org/worklog/?tid=86)
VERSION........: Server-5.2
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 80 (hours remain)
ORIG. ESTIMATE.: 80
PROGRESS NOTES:
-=-=(Igor - Tue, 16 Mar 2010, 19:29)=-=-
High Level Description modified.
--- /tmp/wklog.86.old.22292 2010-03-16 19:29:37.000000000 +0000
+++ /tmp/wklog.86.new.22292 2010-03-16 19:29:37.000000000 +0000
@@ -1,19 +1 @@
-A partitioned key cache is a collection of structures for regular MyiSAM key
-caches called key cache partitions. Any page from a file can be placed into a
-buffer of only one partition. The number of the partition is calculated from the
-file number and the position of the page in the file, and it's always the same
-for the page. The function that maps pages into partitions takes care of even
-distribution of pages among partitions.
-Partition key cache mitigate one of the major problem of simple key cache:
-thread contention for key cache lock (mutex). Every call of a key cache
-interface function must acquire this lock. So threads compete for this lock even
-in the case when they have acquired shared locks for the file and pages they
-want read from are in the key cache buffers. When working with a partitioned key
-cache any key cache interface function that needs only one page has to acquire
-the key cache lock only for the partition the page is ascribed to. This makes
-the chances for threads not compete for the same key cache lock better.
-
-The idea and the original of the partitioned key cache was provided by one of
-our external contributers (see the attached file segmented_keycache_v2.diff with
-the original patch from the contributor).
-=-=(Igor - Sun, 14 Feb 2010, 00:19)=-=-
Privacy level updated.
--- /tmp/wklog.86.old.10092 2010-02-13 22:19:03.000000000 +0000
+++ /tmp/wklog.86.new.10092 2010-02-13 22:19:03.000000000 +0000
@@ -1 +1 @@
-y
+n
-=-=(Igor - Sun, 14 Feb 2010, 00:19)=-=-
Category updated.
--- /tmp/wklog.86.old.10092 2010-02-13 22:19:03.000000000 +0000
+++ /tmp/wklog.86.new.10092 2010-02-13 22:19:03.000000000 +0000
@@ -1 +1 @@
-Server-BackLog
+Server-Sprint
-=-=(Igor - Sun, 14 Feb 2010, 00:18)=-=-
Version updated.
--- /tmp/wklog.86.old.10044 2010-02-14 00:18:31.000000000 +0200
+++ /tmp/wklog.86.new.10044 2010-02-14 00:18:31.000000000 +0200
@@ -1 +1 @@
-Benchmarks-3.0
+Server-5.2
DESCRIPTION:
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
16 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Unused
CREATION DATE..: Sun, 14 Feb 2010, 00:09
SUPERVISOR.....: Monty
IMPLEMENTOR....:
COPIES TO......: Igor, Monty, Sergei
CATEGORY.......: Server-BackLog
TASK ID........: 84 (http://askmonty.org/worklog/?tid=84)
VERSION........: Server-9.x
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Tue, 16 Mar 2010, 19:28)=-=-
Title modified.
--- /tmp/wklog.84.old.22271 2010-03-16 19:28:50.000000000 +0000
+++ /tmp/wklog.84.new.22271 2010-03-16 19:28:50.000000000 +0000
@@ -1 +1 @@
-Partitioned Key Cache for MyISAM
+Unused
-=-=(Igor - Tue, 16 Mar 2010, 19:28)=-=-
Version updated.
--- /tmp/wklog.84.old.22271 2010-03-16 19:28:50.000000000 +0000
+++ /tmp/wklog.84.new.22271 2010-03-16 19:28:50.000000000 +0000
@@ -1 +1 @@
-Benchmarks-3.0
+Server-9.x
-=-=(Igor - Tue, 16 Mar 2010, 19:28)=-=-
High Level Description modified.
--- /tmp/wklog.84.old.22253 2010-03-16 19:28:09.000000000 +0000
+++ /tmp/wklog.84.new.22253 2010-03-16 19:28:09.000000000 +0000
@@ -1,18 +1 @@
-A partitioned key cache is a collection of structures for regular MyiSAM key
-caches called key cache partitions. Any page from a file can be placed into a
-buffer of only one partition. The number of the partition is calculated from the
-file number and the position of the page in the file, and it's always the same
-for the page. The function that maps pages into partitions takes care of even
-distribution of pages among partitions.
-Partition key cache mitigate one of the major problem of simple key cache:
-thread contention for key cache lock (mutex). Every call of a key cache
-interface function must acquire this lock. So threads compete for this lock even
-in the case when they have acquired shared locks for the file and pages they
-want read from are in the key cache buffers. When working with a partitioned key
-cache any key cache interface function that needs only one page has to acquire
-the key cache lock only for the partition the page is ascribed to. This makes
-the chances for threads not compete for the same key cache lock better.
-
-The idea and the original of the partitioned key cache was provided by one of
-our external contributers.
DESCRIPTION:
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Igor): Partitioned Key Cache for MyISAM (84)
by worklog-noreply@askmonty.org 16 Mar '10
16 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Partitioned Key Cache for MyISAM
CREATION DATE..: Sun, 14 Feb 2010, 00:09
SUPERVISOR.....: Monty
IMPLEMENTOR....:
COPIES TO......: Igor, Monty, Sergei
CATEGORY.......: Server-BackLog
TASK ID........: 84 (http://askmonty.org/worklog/?tid=84)
VERSION........: Benchmarks-3.0
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Tue, 16 Mar 2010, 19:28)=-=-
High Level Description modified.
--- /tmp/wklog.84.old.22253 2010-03-16 19:28:09.000000000 +0000
+++ /tmp/wklog.84.new.22253 2010-03-16 19:28:09.000000000 +0000
@@ -1,18 +1 @@
-A partitioned key cache is a collection of structures for regular MyiSAM key
-caches called key cache partitions. Any page from a file can be placed into a
-buffer of only one partition. The number of the partition is calculated from the
-file number and the position of the page in the file, and it's always the same
-for the page. The function that maps pages into partitions takes care of even
-distribution of pages among partitions.
-Partition key cache mitigate one of the major problem of simple key cache:
-thread contention for key cache lock (mutex). Every call of a key cache
-interface function must acquire this lock. So threads compete for this lock even
-in the case when they have acquired shared locks for the file and pages they
-want read from are in the key cache buffers. When working with a partitioned key
-cache any key cache interface function that needs only one page has to acquire
-the key cache lock only for the partition the page is ascribed to. This makes
-the chances for threads not compete for the same key cache lock better.
-
-The idea and the original of the partitioned key cache was provided by one of
-our external contributers.
DESCRIPTION:
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Rev 2780: MWL#68: Subquery optimization: Efficient NOT IN execution with NULLs in file:///home/tsk/mprog/src/5.3-subqueries/
by timour@askmonty.org 15 Mar '10
15 Mar '10
At file:///home/tsk/mprog/src/5.3-subqueries/
------------------------------------------------------------
revno: 2780
revision-id: timour(a)askmonty.org-20100315224130-321rym1lsuwz2j5z
parent: timour(a)askmonty.org-20100315195258-nhomb3anbb1tv3mi
committer: timour(a)askmonty.org
branch nick: 5.3-subqueries
timestamp: Tue 2010-03-16 00:41:30 +0200
message:
MWL#68: Subquery optimization: Efficient NOT IN execution with NULLs
Fix for the PBXT copy of subselect.test.
=== modified file 'mysql-test/suite/pbxt/r/subselect.result'
--- a/mysql-test/suite/pbxt/r/subselect.result 2010-02-23 09:22:02 +0000
+++ b/mysql-test/suite/pbxt/r/subselect.result 2010-03-15 22:41:30 +0000
@@ -876,6 +876,8 @@
4.5
NULL
drop table t1;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
CREATE TABLE t1 (a int(11) NOT NULL default '0', PRIMARY KEY (a));
CREATE TABLE t2 (a int(11) default '0', INDEX (a));
INSERT INTO t1 VALUES (1),(2),(3),(4);
@@ -1771,6 +1773,7 @@
Warnings:
Note 1003 select `test`.`a`.`id` AS `id`,`test`.`a`.`text` AS `text`,`test`.`b`.`id` AS `id`,`test`.`b`.`text` AS `text`,`test`.`c`.`id` AS `id`,`test`.`c`.`text` AS `text` from `test`.`t1` `a` left join `test`.`t2` `b` on(((`test`.`b`.`id` = `test`.`a`.`id`) or isnull(`test`.`b`.`id`))) join `test`.`t1` `c` where (if(isnull(`test`.`b`.`id`),1000,`test`.`b`.`id`) = `test`.`c`.`id`)
drop table t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
create table t1 (a int);
insert into t1 values (1);
explain select benchmark(1000, (select a from t1 where a=sha(rand())));
@@ -2750,6 +2753,8 @@
max(fld)
1
drop table t1;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
CREATE TABLE t1 (one int, two int, flag char(1));
CREATE TABLE t2 (one int, two int, flag char(1));
INSERT INTO t1 VALUES(1,2,'Y'),(2,3,'Y'),(3,4,'Y'),(5,6,'N'),(7,8,'N');
@@ -2834,6 +2839,7 @@
Warnings:
Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one` AS `one`,`test`.`t2`.`two` AS `two` from `test`.`t2` where (`test`.`t2`.`flag` = '0') group by `test`.`t2`.`one`,`test`.`t2`.`two` having (trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`))) and trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`))))) AS `test` from `test`.`t1`
DROP TABLE t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
CREATE TABLE t1 (a char(5), b char(5));
INSERT INTO t1 VALUES (NULL,'aaa'), ('aaa','aaa');
SELECT * FROM t1 WHERE (a,b) IN (('aaa','aaa'), ('aaa','bbb'));
@@ -3004,6 +3010,8 @@
1 1
1 3
DROP TABLE t1, t2;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
CREATE TABLE t1(a int, INDEX (a));
INSERT INTO t1 VALUES (1), (3), (5), (7);
INSERT INTO t1 VALUES (NULL);
@@ -3019,6 +3027,7 @@
2 NULL
3 1
DROP TABLE t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
CREATE TABLE t1 (a DATETIME);
INSERT INTO t1 VALUES ('1998-09-23'), ('2003-03-25');
CREATE TABLE t2 AS SELECT
=== modified file 'mysql-test/suite/pbxt/t/subselect.test'
--- a/mysql-test/suite/pbxt/t/subselect.test 2009-11-06 17:22:32 +0000
+++ b/mysql-test/suite/pbxt/t/subselect.test 2010-03-15 22:41:30 +0000
@@ -477,6 +477,9 @@
# Null with keys
#
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
CREATE TABLE t1 (a int(11) NOT NULL default '0', PRIMARY KEY (a));
CREATE TABLE t2 (a int(11) default '0', INDEX (a));
INSERT INTO t1 VALUES (1),(2),(3),(4);
@@ -1121,6 +1124,8 @@
explain extended select * from t1 a left join t2 b on (a.id=b.id or b.id is null) join t1 c on (if(isnull(b.id), 1000, b.id)=c.id);
drop table t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
+
#
# Static tables & rund() in subqueries
#
@@ -1784,6 +1789,9 @@
# Bug #11867: queries with ROW(,elems>) IN (SELECT DISTINCT <cols> FROM ...)
#
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
CREATE TABLE t1 (one int, two int, flag char(1));
CREATE TABLE t2 (one int, two int, flag char(1));
INSERT INTO t1 VALUES(1,2,'Y'),(2,3,'Y'),(3,4,'Y'),(5,6,'N'),(7,8,'N');
@@ -1811,6 +1819,9 @@
explain extended SELECT one,two,ROW(one,two) IN (SELECT one,two FROM t2 WHERE flag = '0' group by one,two) as 'test' from t1;
DROP TABLE t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
+
+
#
# Bug #12392: where cond with IN predicate for rows and NULL values in table
#
@@ -1972,6 +1983,9 @@
# with possible NULL values by index access from the outer query
#
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
CREATE TABLE t1(a int, INDEX (a));
INSERT INTO t1 VALUES (1), (3), (5), (7);
INSERT INTO t1 VALUES (NULL);
@@ -1984,6 +1998,8 @@
DROP TABLE t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
+
#
# Bug #11302: getObject() returns a String for a sub-query of type datetime
#
@@ -3096,6 +3112,7 @@
DROP TABLE t1,t2;
+
#
# Bug #32400: Complex SELECT query returns correct result only on some
# occasions
1
0
[Maria-developers] bzr commit into file:///home/tsk/mprog/src/5.3-subqueries/ branch (timour:2780)
by timour@askmonty.org 15 Mar '10
15 Mar '10
#At file:///home/tsk/mprog/src/5.3-subqueries/ based on revid:timour@askmonty.org-20100315195258-nhomb3anbb1tv3mi
2780 timour(a)askmonty.org 2010-03-16
MWL#68: Subquery optimization: Efficient NOT IN execution with NULLs
Fix for the PBXT copy of subselect.test.
modified:
mysql-test/suite/pbxt/r/subselect.result
mysql-test/suite/pbxt/t/subselect.test
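The fix wraps each affected PBXT test case in the same save/disable/restore sequence so that the plans recorded in the result file keep using the IN=>EXISTS strategy. A minimal sketch of that pattern (the table and query are illustrative only, not taken from the patch):

set @save_optimizer_switch=@@optimizer_switch;
set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
# ... test case whose EXPLAIN output depends on the IN=>EXISTS strategy ...
explain select a from t1 where a not in (select b from t2);
set @@optimizer_switch=@save_optimizer_switch;
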
=== modified file 'mysql-test/suite/pbxt/r/subselect.result'
--- a/mysql-test/suite/pbxt/r/subselect.result 2010-02-23 09:22:02 +0000
+++ b/mysql-test/suite/pbxt/r/subselect.result 2010-03-15 22:41:30 +0000
@@ -876,6 +876,8 @@ select (select a+1) from t1;
4.5
NULL
drop table t1;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
CREATE TABLE t1 (a int(11) NOT NULL default '0', PRIMARY KEY (a));
CREATE TABLE t2 (a int(11) default '0', INDEX (a));
INSERT INTO t1 VALUES (1),(2),(3),(4);
@@ -1771,6 +1773,7 @@ id select_type table type possible_keys
Warnings:
Note 1003 select `test`.`a`.`id` AS `id`,`test`.`a`.`text` AS `text`,`test`.`b`.`id` AS `id`,`test`.`b`.`text` AS `text`,`test`.`c`.`id` AS `id`,`test`.`c`.`text` AS `text` from `test`.`t1` `a` left join `test`.`t2` `b` on(((`test`.`b`.`id` = `test`.`a`.`id`) or isnull(`test`.`b`.`id`))) join `test`.`t1` `c` where (if(isnull(`test`.`b`.`id`),1000,`test`.`b`.`id`) = `test`.`c`.`id`)
drop table t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
create table t1 (a int);
insert into t1 values (1);
explain select benchmark(1000, (select a from t1 where a=sha(rand())));
@@ -2750,6 +2753,8 @@ select * from (select max(fld) from t1)
max(fld)
1
drop table t1;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
CREATE TABLE t1 (one int, two int, flag char(1));
CREATE TABLE t2 (one int, two int, flag char(1));
INSERT INTO t1 VALUES(1,2,'Y'),(2,3,'Y'),(3,4,'Y'),(5,6,'N'),(7,8,'N');
@@ -2834,6 +2839,7 @@ id select_type table type possible_keys
Warnings:
Note 1003 select `test`.`t1`.`one` AS `one`,`test`.`t1`.`two` AS `two`,<in_optimizer>((`test`.`t1`.`one`,`test`.`t1`.`two`),<exists>(select `test`.`t2`.`one` AS `one`,`test`.`t2`.`two` AS `two` from `test`.`t2` where (`test`.`t2`.`flag` = '0') group by `test`.`t2`.`one`,`test`.`t2`.`two` having (trigcond(((<cache>(`test`.`t1`.`one`) = `test`.`t2`.`one`) or isnull(`test`.`t2`.`one`))) and trigcond(((<cache>(`test`.`t1`.`two`) = `test`.`t2`.`two`) or isnull(`test`.`t2`.`two`))) and trigcond(<is_not_null_test>(`test`.`t2`.`one`)) and trigcond(<is_not_null_test>(`test`.`t2`.`two`))))) AS `test` from `test`.`t1`
DROP TABLE t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
CREATE TABLE t1 (a char(5), b char(5));
INSERT INTO t1 VALUES (NULL,'aaa'), ('aaa','aaa');
SELECT * FROM t1 WHERE (a,b) IN (('aaa','aaa'), ('aaa','bbb'));
@@ -3004,6 +3010,8 @@ field1 field2
1 1
1 3
DROP TABLE t1, t2;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
CREATE TABLE t1(a int, INDEX (a));
INSERT INTO t1 VALUES (1), (3), (5), (7);
INSERT INTO t1 VALUES (NULL);
@@ -3019,6 +3027,7 @@ a a IN (SELECT a FROM t1)
2 NULL
3 1
DROP TABLE t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
CREATE TABLE t1 (a DATETIME);
INSERT INTO t1 VALUES ('1998-09-23'), ('2003-03-25');
CREATE TABLE t2 AS SELECT
=== modified file 'mysql-test/suite/pbxt/t/subselect.test'
--- a/mysql-test/suite/pbxt/t/subselect.test 2009-11-06 17:22:32 +0000
+++ b/mysql-test/suite/pbxt/t/subselect.test 2010-03-15 22:41:30 +0000
@@ -477,6 +477,9 @@ drop table t1;
# Null with keys
#
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
CREATE TABLE t1 (a int(11) NOT NULL default '0', PRIMARY KEY (a));
CREATE TABLE t2 (a int(11) default '0', INDEX (a));
INSERT INTO t1 VALUES (1),(2),(3),(4);
@@ -1121,6 +1124,8 @@ select * from t1 a left join t2 b on (a.
explain extended select * from t1 a left join t2 b on (a.id=b.id or b.id is null) join t1 c on (if(isnull(b.id), 1000, b.id)=c.id);
drop table t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
+
#
# Static tables & rund() in subqueries
#
@@ -1784,6 +1789,9 @@ drop table t1;
# Bug #11867: queries with ROW(,elems>) IN (SELECT DISTINCT <cols> FROM ...)
#
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
CREATE TABLE t1 (one int, two int, flag char(1));
CREATE TABLE t2 (one int, two int, flag char(1));
INSERT INTO t1 VALUES(1,2,'Y'),(2,3,'Y'),(3,4,'Y'),(5,6,'N'),(7,8,'N');
@@ -1811,6 +1819,9 @@ explain extended SELECT one,two from t1
explain extended SELECT one,two,ROW(one,two) IN (SELECT one,two FROM t2 WHERE flag = '0' group by one,two) as 'test' from t1;
DROP TABLE t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
+
+
#
# Bug #12392: where cond with IN predicate for rows and NULL values in table
#
@@ -1972,6 +1983,9 @@ DROP TABLE t1, t2;
# with possible NULL values by index access from the outer query
#
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
CREATE TABLE t1(a int, INDEX (a));
INSERT INTO t1 VALUES (1), (3), (5), (7);
INSERT INTO t1 VALUES (NULL);
@@ -1984,6 +1998,8 @@ SELECT a, a IN (SELECT a FROM t1) FROM t
DROP TABLE t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
+
#
# Bug #11302: getObject() returns a String for a sub-query of type datetime
#
@@ -3096,6 +3112,7 @@ SELECT a,b FROM t1 WHERE b IN (SELECT a
DROP TABLE t1,t2;
+
#
# Bug #32400: Complex SELECT query returns correct result only on some
# occasions
[Maria-developers] Rev 2779: Merge in MWL#68: Subquery optimization: Efficient NOT IN execution with NULLs in file:///home/tsk/mprog/src/5.3-subqueries/
by timour@askmonty.org 15 Mar '10
At file:///home/tsk/mprog/src/5.3-subqueries/
------------------------------------------------------------
revno: 2779 [merge]
revision-id: timour(a)askmonty.org-20100315195258-nhomb3anbb1tv3mi
parent: psergey(a)askmonty.org-20100315063535-jsp4jgya6lfqt8e6
parent: timour(a)askmonty.org-20100311214331-kw8ng8aiy6h60vai
committer: timour(a)askmonty.org
branch nick: 5.3-subqueries
timestamp: Mon 2010-03-15 21:52:58 +0200
message:
Merge in MWL#68: Subquery optimization: Efficient NOT IN execution with NULLs
modified:
mysql-test/include/mix1.inc sp1f-innodb_mysql.test-20060426055153-mgtahdmgajg7vffqbq4xrmkzbhvanlaz
mysql-test/r/index_merge_myisam.result sp1f-index_merge_myisam.r-20060816114353-wd2664hjxwyjdvm4snup647av5fmxfln
mysql-test/r/innodb_mysql.result sp1f-innodb_mysql.result-20060426055153-bychbbfnqtvmvrwccwhn24i6yi46uqjv
mysql-test/r/myisam_mrr.result myisam_mrr.result-20091215071345-6wadxunod6vi8m48-1
mysql-test/r/ps.result sp1f-ps.result-20040405154119-efxzt5onloys45nfjak4gt44kr4awkdi
mysql-test/r/subselect.result sp1f-subselect.result-20020512204640-zgegcsgavnfd7t7eyrf7ibuqomsw7uzo
mysql-test/r/subselect3.result sp1f-subselect3.result-20061031174245-v7hvtc7uwevifiq4lziwv5gdcxpeak7t
mysql-test/r/subselect3_jcl6.result subselect3_jcl6.resu-20100117143923-cf6j4mu5zzng00u7-1
mysql-test/r/subselect_no_mat.result subselect_no_mat.res-20100117143924-hut18sl9k2c7qdj8-1
mysql-test/r/subselect_no_opts.result subselect_no_opts.re-20100117143925-pabg7o8iyokjlu93-1
mysql-test/r/subselect_no_semijoin.result subselect_no_semijoi-20100117143925-9yfygtcm7fwsuq2p-1
mysql-test/r/subselect_sj.result subselect_sj.result-20100117143926-nrop4ku355g3kv8b-1
mysql-test/r/subselect_sj_jcl6.result subselect_sj_jcl6.re-20100117143928-7vzk51yaf29cdavp-1
mysql-test/t/ps.test sp1f-ps.test-20040405154119-4zqf6po44yypvz5foa2osprg5kb5ok63
mysql-test/t/subselect.test sp1f-subselect.test-20020512204640-lyqrayx6uwsn7zih6y7kerkenuitzbvr
mysql-test/t/subselect3.test sp1f-subselect3.test-20061031174245-pcxt5ljylerxhx2jkfhrbqfv5vqcazlz
sql/item_cmpfunc.h sp1f-item_cmpfunc.h-19700101030959-pcvbjplo4e4ng7ibynfhcd6pjyem57gr
sql/item_subselect.cc sp1f-item_subselect.cc-20020512204640-qep43aqhsfrwkqmrobni6czc3fqj36oo
sql/item_subselect.h sp1f-item_subselect.h-20020512204640-qdg77wil56cxyhtc2bjjdrppxq3wqgh3
sql/mysql_priv.h sp1f-mysql_priv.h-19700101030959-4fl65tqpop5zfgxaxkqotu2fa2ree5ci
sql/mysqld.cc sp1f-mysqld.cc-19700101030959-zpswdvekpvixxzxf7gdtofzel7nywtfj
sql/opt_subselect.cc opt_subselect.cc-20100215190428-nekkl8wisp0k6nlk-1
sql/set_var.cc sp1f-set_var.cc-20020723153119-nwbpg2pwpz55pfw7yfzaxt7hsszzy7y3
sql/sql_class.cc sp1f-sql_class.cc-19700101030959-rpotnweaff2pikkozh3butrf7mv3oero
sql/sql_class.h sp1f-sql_class.h-19700101030959-jnqnbrjyqsvgncsibnumsmg3lyi7pa5s
sql/sql_select.cc sp1f-sql_select.cc-19700101030959-egb7whpkh76zzvikycs5nsnuviu4fdlb
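For context, the worklog targets NOT IN predicates whose operands may be NULL, where the predicate can evaluate to unknown rather than FALSE and a plain index lookup on the materialized subquery is not enough. A small illustrative example (hypothetical tables, not part of this commit):

create table outer_t (a int, b int);
create table inner_t (a int, b int);
insert into outer_t values (1, NULL);
insert into inner_t values (1, 2), (3, NULL);
# (1, NULL) NOT IN ((1,2),(3,NULL)) is unknown, not TRUE: the NULLs may or
# may not match, so the row is filtered out and the query returns no rows.
# Deciding this efficiently over a materialized subquery is what the new
# partial_match_* strategies are for; with them switched off, the tests
# below fall back to the IN=>EXISTS rewrite.
select * from outer_t where (a, b) not in (select a, b from inner_t);
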
=== modified file 'mysql-test/include/mix1.inc'
--- a/mysql-test/include/mix1.inc 2009-09-15 06:08:54 +0000
+++ b/mysql-test/include/mix1.inc 2010-03-11 21:43:31 +0000
@@ -1177,8 +1177,11 @@
create table t1 (a bit(1) not null,b int) engine=myisam;
create table t2 (c int) engine=innodb;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch='partial_match_rowid_merge=off,partial_match_table_scan=off';
explain
select b from t1 where a not in (select b from t1,t2 group by a) group by a;
+set optimizer_switch=@save_optimizer_switch;
DROP TABLE t1,t2;
--echo End of 5.0 tests
=== modified file 'mysql-test/r/index_merge_myisam.result'
--- a/mysql-test/r/index_merge_myisam.result 2010-01-17 14:51:10 +0000
+++ b/mysql-test/r/index_merge_myisam.result 2010-03-11 21:43:31 +0000
@@ -1419,19 +1419,19 @@
#
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='index_merge=off,index_merge_union=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='index_merge_union=on';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=off,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=off,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,index_merge_sort_union=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=off,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=off,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch=4;
ERROR 42000: Variable 'optimizer_switch' can't be set to the value of '4'
set optimizer_switch=NULL;
@@ -1458,21 +1458,21 @@
set optimizer_switch='index_merge=off,index_merge_union=off,default';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch=default;
select @@global.optimizer_switch;
@@global.optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set @@global.optimizer_switch=default;
select @@global.optimizer_switch;
@@global.optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
#
# Check index_merge's @@optimizer_switch flags
#
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
create table t0 (a int);
insert into t0 values (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);
create table t1 (a int, b int, c int, filler char(100),
@@ -1582,5 +1582,5 @@
set optimizer_switch=default;
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
drop table t0, t1;
=== modified file 'mysql-test/r/innodb_mysql.result'
--- a/mysql-test/r/innodb_mysql.result 2009-12-15 07:16:46 +0000
+++ b/mysql-test/r/innodb_mysql.result 2010-03-11 21:43:31 +0000
@@ -1425,12 +1425,15 @@
#
create table t1 (a bit(1) not null,b int) engine=myisam;
create table t2 (c int) engine=innodb;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch='partial_match_rowid_merge=off,partial_match_table_scan=off';
explain
select b from t1 where a not in (select b from t1,t2 group by a) group by a;
id select_type table type possible_keys key key_len ref rows Extra
1 PRIMARY NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
2 DEPENDENT SUBQUERY t1 system NULL NULL NULL NULL 0 const row not found
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 1
+set optimizer_switch=@save_optimizer_switch;
DROP TABLE t1,t2;
End of 5.0 tests
CREATE TABLE `t2` (
=== modified file 'mysql-test/r/myisam_mrr.result'
--- a/mysql-test/r/myisam_mrr.result 2010-01-17 14:51:10 +0000
+++ b/mysql-test/r/myisam_mrr.result 2010-03-11 21:43:31 +0000
@@ -394,7 +394,7 @@
# - engine_condition_pushdown does not affect ICP
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
create table t0 (a int);
insert into t0 values (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);
create table t1 (a int, b int, key(a));
=== modified file 'mysql-test/r/ps.result'
--- a/mysql-test/r/ps.result 2009-05-27 15:19:44 +0000
+++ b/mysql-test/r/ps.result 2010-03-11 21:43:31 +0000
@@ -149,6 +149,8 @@
c32 set('monday', 'tuesday', 'wednesday')
) engine = MYISAM ;
create table t2 like t1;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
set @stmt= ' explain SELECT (SELECT SUM(c1 + c12 + 0.0) FROM t2 where (t1.c2 - 0e-3) = t2.c2 GROUP BY t1.c15 LIMIT 1) as scalar_s, exists (select 1.0e+0 from t2 where t2.c3 * 9.0000000000 = t1.c4) as exists_s, c5 * 4 in (select c6 + 0.3e+1 from t2) as in_s, (c7 - 4, c8 - 4) in (select c9 + 4.0, c10 + 40e-1 from t2) as in_row_s FROM t1, (select c25 x, c32 y from t2) tt WHERE x * 1 = c25 ' ;
prepare stmt1 from @stmt ;
execute stmt1 ;
@@ -177,6 +179,7 @@
2 DEPENDENT SUBQUERY NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
deallocate prepare stmt1;
drop tables t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
set @arg00=1;
prepare stmt1 from ' create table t1 (m int) as select 1 as m ' ;
execute stmt1 ;
=== modified file 'mysql-test/r/subselect.result'
--- a/mysql-test/r/subselect.result 2010-02-17 21:59:41 +0000
+++ b/mysql-test/r/subselect.result 2010-03-11 21:43:31 +0000
@@ -1,4 +1,6 @@
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
select (select 2);
(select 2)
2
@@ -4803,4 +4805,5 @@
1
1
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
End of 5.1 tests.
=== modified file 'mysql-test/r/subselect3.result'
--- a/mysql-test/r/subselect3.result 2010-02-17 10:05:27 +0000
+++ b/mysql-test/r/subselect3.result 2010-03-11 21:43:31 +0000
@@ -63,12 +63,15 @@
select ' ^ This must show 11' Z;
Z
^ This must show 11
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
explain extended select a in (select max(ie) from t1 where oref=4 group by grp) from t3;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t3 ALL NULL NULL NULL NULL 2 100.00
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 6 100.00 Using where; Using temporary; Using filesort
Warnings:
Note 1003 select <in_optimizer>(`test`.`t3`.`a`,<exists>(select max(`test`.`t1`.`ie`) AS `max(ie)` from `test`.`t1` where (`test`.`t1`.`oref` = 4) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t3`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`)))))) AS `a in (select max(ie) from t1 where oref=4 group by grp)` from `test`.`t3`
+set @@optimizer_switch=@save_optimizer_switch;
drop table t1, t2, t3;
create table t1 (a int, oref int, key(a));
insert into t1 values
@@ -692,6 +695,8 @@
2 3 h
3 4 i
DROP TABLE t1, t2;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
CREATE TABLE t1 (a int);
CREATE TABLE t2 (b int, PRIMARY KEY(b));
INSERT INTO t1 VALUES (1), (NULL), (4);
@@ -759,6 +764,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 4 Using where
2 DEPENDENT SUBQUERY t2 unique_subquery PRIMARY PRIMARY 4 func 1 Using index; Using where
DROP TABLE t1, t2;
+set @@optimizer_switch=@save_optimizer_switch;
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES(1);
CREATE TABLE t2 (placeholder CHAR(11));
@@ -960,7 +966,7 @@
# Baseline:
SHOW STATUS LIKE '%Handler_read_rnd_next';
Variable_name Value
-Handler_read_rnd_next 17
+Handler_read_rnd_next 18
INSERT INTO t1 VALUES (NULL, NULL);
FLUSH STATUS;
@@ -977,7 +983,7 @@
# (read record from t1, but do not read from t2)
SHOW STATUS LIKE '%Handler_read_rnd_next';
Variable_name Value
-Handler_read_rnd_next 18
+Handler_read_rnd_next 19
DROP TABLE t1,t2;
End of 5.1 tests
CREATE TABLE t1 (
=== modified file 'mysql-test/r/subselect3_jcl6.result'
--- a/mysql-test/r/subselect3_jcl6.result 2010-02-17 10:47:55 +0000
+++ b/mysql-test/r/subselect3_jcl6.result 2010-03-11 21:43:31 +0000
@@ -67,12 +67,15 @@
select ' ^ This must show 11' Z;
Z
^ This must show 11
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
explain extended select a in (select max(ie) from t1 where oref=4 group by grp) from t3;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t3 ALL NULL NULL NULL NULL 2 100.00
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 6 100.00 Using where; Using temporary; Using filesort
Warnings:
Note 1003 select <in_optimizer>(`test`.`t3`.`a`,<exists>(select max(`test`.`t1`.`ie`) AS `max(ie)` from `test`.`t1` where (`test`.`t1`.`oref` = 4) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t3`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`)))))) AS `a in (select max(ie) from t1 where oref=4 group by grp)` from `test`.`t3`
+set @@optimizer_switch=@save_optimizer_switch;
drop table t1, t2, t3;
create table t1 (a int, oref int, key(a));
insert into t1 values
@@ -696,6 +699,8 @@
2 3 h
3 4 i
DROP TABLE t1, t2;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
CREATE TABLE t1 (a int);
CREATE TABLE t2 (b int, PRIMARY KEY(b));
INSERT INTO t1 VALUES (1), (NULL), (4);
@@ -763,6 +768,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 4 Using where
2 DEPENDENT SUBQUERY t2 unique_subquery PRIMARY PRIMARY 4 func 1 Using index; Using where
DROP TABLE t1, t2;
+set @@optimizer_switch=@save_optimizer_switch;
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES(1);
CREATE TABLE t2 (placeholder CHAR(11));
@@ -964,7 +970,7 @@
# Baseline:
SHOW STATUS LIKE '%Handler_read_rnd_next';
Variable_name Value
-Handler_read_rnd_next 17
+Handler_read_rnd_next 18
INSERT INTO t1 VALUES (NULL, NULL);
FLUSH STATUS;
@@ -981,7 +987,7 @@
# (read record from t1, but do not read from t2)
SHOW STATUS LIKE '%Handler_read_rnd_next';
Variable_name Value
-Handler_read_rnd_next 18
+Handler_read_rnd_next 19
DROP TABLE t1,t2;
End of 5.1 tests
CREATE TABLE t1 (
=== modified file 'mysql-test/r/subselect_no_mat.result'
--- a/mysql-test/r/subselect_no_mat.result 2010-02-21 07:33:54 +0000
+++ b/mysql-test/r/subselect_no_mat.result 2010-03-11 21:43:31 +0000
@@ -1,8 +1,10 @@
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='materialization=off';
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
select (select 2);
(select 2)
2
@@ -4807,8 +4809,9 @@
1
1
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
End of 5.1 tests.
set optimizer_switch=default;
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
=== modified file 'mysql-test/r/subselect_no_opts.result'
--- a/mysql-test/r/subselect_no_opts.result 2010-02-21 07:33:54 +0000
+++ b/mysql-test/r/subselect_no_opts.result 2010-03-11 21:43:31 +0000
@@ -1,8 +1,10 @@
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='materialization=off,semijoin=off';
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
select (select 2);
(select 2)
2
@@ -4807,8 +4809,9 @@
1
1
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
End of 5.1 tests.
set optimizer_switch=default;
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
=== modified file 'mysql-test/r/subselect_no_semijoin.result'
--- a/mysql-test/r/subselect_no_semijoin.result 2010-02-21 07:33:54 +0000
+++ b/mysql-test/r/subselect_no_semijoin.result 2010-03-11 21:43:31 +0000
@@ -1,8 +1,10 @@
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='semijoin=off';
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
select (select 2);
(select 2)
2
@@ -4807,8 +4809,9 @@
1
1
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
End of 5.1 tests.
set optimizer_switch=default;
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
=== modified file 'mysql-test/r/subselect_sj.result'
--- a/mysql-test/r/subselect_sj.result 2010-03-15 06:32:54 +0000
+++ b/mysql-test/r/subselect_sj.result 2010-03-15 19:52:58 +0000
@@ -202,39 +202,39 @@
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,materialization=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off,semijoin=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,materialization=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch=default;
drop table t0, t1, t2;
drop table t10, t11, t12;
=== modified file 'mysql-test/r/subselect_sj_jcl6.result'
--- a/mysql-test/r/subselect_sj_jcl6.result 2010-03-15 06:32:54 +0000
+++ b/mysql-test/r/subselect_sj_jcl6.result 2010-03-15 19:52:58 +0000
@@ -206,39 +206,39 @@
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,materialization=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off,semijoin=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,materialization=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch=default;
drop table t0, t1, t2;
drop table t10, t11, t12;
=== modified file 'mysql-test/t/ps.test'
--- a/mysql-test/t/ps.test 2009-05-27 15:19:44 +0000
+++ b/mysql-test/t/ps.test 2010-03-11 21:43:31 +0000
@@ -163,6 +163,9 @@
) engine = MYISAM ;
create table t2 like t1;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
set @stmt= ' explain SELECT (SELECT SUM(c1 + c12 + 0.0) FROM t2 where (t1.c2 - 0e-3) = t2.c2 GROUP BY t1.c15 LIMIT 1) as scalar_s, exists (select 1.0e+0 from t2 where t2.c3 * 9.0000000000 = t1.c4) as exists_s, c5 * 4 in (select c6 + 0.3e+1 from t2) as in_s, (c7 - 4, c8 - 4) in (select c9 + 4.0, c10 + 40e-1 from t2) as in_row_s FROM t1, (select c25 x, c32 y from t2) tt WHERE x * 1 = c25 ' ;
prepare stmt1 from @stmt ;
execute stmt1 ;
@@ -171,6 +174,8 @@
deallocate prepare stmt1;
drop tables t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
+
#
# parameters from variables (for field creation)
#
=== modified file 'mysql-test/t/subselect.test'
--- a/mysql-test/t/subselect.test 2010-01-17 20:52:20 +0000
+++ b/mysql-test/t/subselect.test 2010-03-11 21:43:31 +0000
@@ -11,6 +11,9 @@
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
--enable_warnings
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
select (select 2);
explain extended select (select 2);
SELECT (SELECT 1) UNION SELECT (SELECT 2);
@@ -4061,4 +4064,6 @@
(SELECT LAST_INSERT_ID() FROM t1 ORDER BY MIN(a) ASC LIMIT 1);
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
+
--echo End of 5.1 tests.
=== modified file 'mysql-test/t/subselect3.test'
--- a/mysql-test/t/subselect3.test 2010-01-17 14:51:10 +0000
+++ b/mysql-test/t/subselect3.test 2010-03-11 21:43:31 +0000
@@ -59,9 +59,13 @@
show status like 'Handler_read_rnd_next';
select ' ^ This must show 11' Z;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
# This must show trigcond:
explain extended select a in (select max(ie) from t1 where oref=4 group by grp) from t3;
+set @@optimizer_switch=@save_optimizer_switch;
drop table t1, t2, t3;
#
@@ -529,6 +533,9 @@
DROP TABLE t1, t2;
+# The next three test cases must be executed with the IN=>EXISTS strategy
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
#
# Bug #27870: crash of an equijoin query with WHERE condition containing
@@ -588,6 +595,8 @@
DROP TABLE t1, t2;
+set @@optimizer_switch=@save_optimizer_switch;
+
#
# Bug #34763: item_subselect.cc:1235:Item_in_subselect::row_value_transformer:
# Assertion failed, unexpected error message:
=== modified file 'sql/item_cmpfunc.h'
--- a/sql/item_cmpfunc.h 2010-03-13 20:04:52 +0000
+++ b/sql/item_cmpfunc.h 2010-03-15 19:52:58 +0000
@@ -350,6 +350,7 @@
CHARSET_INFO *compare_collation() { return cmp.cmp_collation.collation; }
uint decimal_precision() const { return 1; }
void top_level_item() { abort_on_null= TRUE; }
+ Arg_comparator *get_comparator() { return &cmp; }
friend class Arg_comparator;
};
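The item_subselect.cc changes that follow add the partial-matching machinery itself: a schema-based and a data-based analysis of which columns need partial matching, plus a choice between a rowid-merge and a table-scan strategy bounded by the rowid_merge_buff_size variable referenced in the patch. At the SQL level the choice can be constrained with the two new switches (a sketch; only the flag names introduced by this worklog are assumed):

set optimizer_switch='partial_match_rowid_merge=on,partial_match_table_scan=off';  # prefer rowid merge
set optimizer_switch='partial_match_rowid_merge=off,partial_match_table_scan=on';  # prefer table scan
# With both flags on (or both off) the engine chooses by cost; a merge whose
# buffers would not fit in rowid_merge_buff_size falls back to the table scan.
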
=== modified file 'sql/item_subselect.cc'
--- a/sql/item_subselect.cc 2010-02-21 06:32:23 +0000
+++ b/sql/item_subselect.cc 2010-03-09 10:14:06 +0000
@@ -138,6 +138,7 @@
left_expr_cache= NULL;
}
first_execution= TRUE;
+ is_constant= FALSE;
Item_subselect::cleanup();
DBUG_VOID_RETURN;
}
@@ -449,8 +450,10 @@
int res;
if (thd->is_error())
- /* Do not execute subselect in case of a fatal error */
+ {
+ /* Do not execute subselect in case of a fatal error */
return 1;
+ }
/*
Simulate a failure in sub-query execution. Used to test e.g.
out of memory or query being killed conditions.
@@ -475,9 +478,6 @@
bool Item_in_subselect::exec()
{
DBUG_ENTER("Item_in_subselect::exec");
- DBUG_ASSERT(exec_method != MATERIALIZATION ||
- (exec_method == MATERIALIZATION &&
- engine->engine_type() == subselect_engine::HASH_SJ_ENGINE));
/*
Initialize the cache of the left predicate operand. This has to be done as
late as now, because Cached_item directly contains a resolved field (not
@@ -493,14 +493,14 @@
if (!left_expr_cache && exec_method == MATERIALIZATION)
init_left_expr_cache();
- /* If the new left operand is already in the cache, reuse the old result. */
- if (left_expr_cache && test_if_item_cache_changed(*left_expr_cache) < 0)
- {
- /* Always compute IN for the first row as the cache is not valid for it. */
- if (!first_execution)
- DBUG_RETURN(FALSE);
- first_execution= FALSE;
- }
+ /*
+ If the new left operand is already in the cache, reuse the old result.
+ Use the cached result only if this is not the first execution of IN
+ because the cache is not valid for the first execution.
+ */
+ if (!first_execution && left_expr_cache &&
+ test_if_item_cache_changed(*left_expr_cache) < 0)
+ DBUG_RETURN(FALSE);
/*
The exec() method below updates item::value, and item::null_value, thus if
@@ -910,8 +910,8 @@
Item_in_subselect::Item_in_subselect(Item * left_exp,
st_select_lex *select_lex):
Item_exists_subselect(), left_expr_cache(0), first_execution(TRUE),
- optimizer(0), pushed_cond_guards(NULL), exec_method(NOT_TRANSFORMED),
- upper_item(0)
+ is_constant(FALSE), optimizer(0), pushed_cond_guards(NULL),
+ exec_method(NOT_TRANSFORMED), upper_item(0)
{
DBUG_ENTER("Item_in_subselect::Item_in_subselect");
left_expr= left_exp;
@@ -1105,6 +1105,8 @@
{
DBUG_ASSERT(fixed == 1);
null_value= 0;
+ if (is_constant)
+ return value;
if (exec())
{
reset();
@@ -1571,9 +1573,9 @@
DBUG_ENTER("Item_in_subselect::row_value_transformer");
// psergey: duplicated_subselect_card_check
- if (select_lex->item_list.elements != left_expr->cols())
+ if (select_lex->item_list.elements != cols_num)
{
- my_error(ER_OPERAND_COLUMNS, MYF(0), left_expr->cols());
+ my_error(ER_OPERAND_COLUMNS, MYF(0), cols_num);
DBUG_RETURN(RES_ERROR);
}
@@ -1980,17 +1982,69 @@
bool Item_in_subselect::fix_fields(THD *thd_arg, Item **ref)
{
- bool result = 0;
+ uint outer_cols_num;
+ List<Item> *inner_cols;
if (exec_method == SEMI_JOIN)
return !( (*ref)= new Item_int(1));
- if (thd_arg->lex->view_prepare_mode && left_expr && !left_expr->fixed)
- result = left_expr->fix_fields(thd_arg, &left_expr);
-
- return result || Item_subselect::fix_fields(thd_arg, ref);
+ /*
+ Check if the outer and inner IN operands match in those cases when we
+ will not perform IN=>EXISTS transformation. Currently this is when we
+ use subquery materialization.
+
+ The condition below is true when this method was called recursively from
+ inside JOIN::prepare for the JOIN object created by the call chain
+ Item_subselect::fix_fields -> subselect_single_select_engine::prepare,
+ which creates a JOIN object for the subquery and calls JOIN::prepare for
+ the JOIN of the subquery.
+ Notice that in some cases, this doesn't happen, and the check_cols()
+ test for each Item happens later in
+ Item_in_subselect::row_value_in_to_exists_transformer.
+ The reason for this mess is that our JOIN::prepare phase works top-down
+ instead of bottom-up, so we first do name resoluton and semantic checks
+ for the outer selects, then for the inner.
+ */
+ if (engine &&
+ engine->engine_type() == subselect_engine::SINGLE_SELECT_ENGINE &&
+ ((subselect_single_select_engine*)engine)->join)
+ {
+ outer_cols_num= left_expr->cols();
+
+ if (unit->is_union())
+ inner_cols= &(unit->types);
+ else
+ inner_cols= &(unit->first_select()->item_list);
+ if (outer_cols_num != inner_cols->elements)
+ {
+ my_error(ER_OPERAND_COLUMNS, MYF(0), outer_cols_num);
+ return TRUE;
+ }
+ if (outer_cols_num > 1)
+ {
+ List_iterator<Item> inner_col_it(*inner_cols);
+ Item *inner_col;
+ for (uint i= 0; i < outer_cols_num; i++)
+ {
+ inner_col= inner_col_it++;
+ if (inner_col->check_cols(left_expr->element_index(i)->cols()))
+ return TRUE;
+ }
+ }
+ }
+
+ if (thd_arg->lex->view_prepare_mode && left_expr && !left_expr->fixed &&
+ left_expr->fix_fields(thd_arg, &left_expr))
+ return TRUE;
+ if (Item_subselect::fix_fields(thd_arg, ref))
+ return TRUE;
+
+ fixed= TRUE;
+
+ return FALSE;
}
+
void Item_in_subselect::fix_after_pullout(st_select_lex *new_parent, Item **ref)
{
left_expr->fix_after_pullout(new_parent, &left_expr);
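
The fix_fields() hunk above moves the outer/inner operand column-count check so that it also runs when subquery materialization (rather than IN=>EXISTS) is used. The user-visible behaviour it preserves is the standard operand-columns error; a small illustrative session (hypothetical table t, not part of the patch):

create table t (a int, b int);
select (1, 2) in (select a from t);
# fails with ER_OPERAND_COLUMNS: Operand should contain 2 column(s)
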
@@ -2267,10 +2321,9 @@
void subselect_uniquesubquery_engine::cleanup()
{
DBUG_ENTER("subselect_uniquesubquery_engine::cleanup");
- /*
- subselect_uniquesubquery_engine have not 'result' assigbed, so we do not
- cleanup() it
- */
+ /* Tell handler we don't need the index anymore */
+ if (tab->table->file->inited)
+ tab->table->file->ha_index_end();
DBUG_VOID_RETURN;
}
@@ -2291,7 +2344,7 @@
Create and prepare the JOIN object that represents the query execution
plan for the subquery.
- @detail
+ @details
This method is called from Item_subselect::fix_fields. For prepared
statements it is called both during the PREPARE and EXECUTE phases in the
following ways:
@@ -2593,14 +2646,23 @@
for (;;)
{
error=table->file->ha_rnd_next(table->record[0]);
- if (error && error != HA_ERR_END_OF_FILE)
- {
- error= report_error(table, error);
- break;
+ if (error) {
+ if (error == HA_ERR_RECORD_DELETED)
+ {
+ error= 0;
+ continue;
+ }
+ if (error == HA_ERR_END_OF_FILE)
+ {
+ error= 0;
+ break;
+ }
+ else
+ {
+ error= report_error(table, error);
+ break;
+ }
}
- /* No more rows */
- if (table->status)
- break;
if (!cond || cond->val_int())
{
@@ -2711,6 +2773,56 @@
/*
+ @retval 1 A NULL was found in the outer reference, index lookup is
+ not applicable, the outer ref is unusable as a lookup key,
+ use some other method to find a match.
+ @retval 0 The outer ref was copied into an index lookup key.
+ @retval -1 The outer ref cannot possibly match any row, IN is FALSE.
+*/
+/* TIMOUR: this method is a variant of copy_ref_key(), needs refactoring. */
+
+int subselect_uniquesubquery_engine::copy_ref_key_simple()
+{
+ for (store_key **copy= tab->ref.key_copy ; *copy ; copy++)
+ {
+ enum store_key::store_key_result store_res;
+ store_res= (*copy)->copy();
+ tab->ref.key_err= store_res;
+
+ /*
+ When there is a NULL part in the key we don't need to make index
+ lookup for such key thus we don't need to copy whole key.
+ If we later should do a sequential scan return OK. Fail otherwise.
+
+ See also the comment for the subselect_uniquesubquery_engine::exec()
+ function.
+ */
+ null_keypart= (*copy)->null_key;
+ if (null_keypart)
+ return 1;
+
+ /*
+ Check if the error is equal to STORE_KEY_FATAL. This is not expressed
+ using the store_key::store_key_result enum because ref.key_err is a
+ boolean and we want to detect both TRUE and STORE_KEY_FATAL from the
+ space of the union of the values of [TRUE, FALSE] and
+ store_key::store_key_result.
+ TODO: fix the variable and return types.
+ */
+ if (store_res == store_key::STORE_KEY_FATAL)
+ {
+ /*
+ Error converting the left IN operand to the column type of the right
+ IN operand.
+ */
+ return -1;
+ }
+ }
+ return 0;
+}
+
+
+/*
Execute subselect
SYNOPSIS
@@ -2750,7 +2862,13 @@
/* TODO: change to use of 'full_scan' here? */
if (copy_ref_key())
+ {
+ /*
+ TIMOUR: copy_ref_key() == 1 means NULL result, not error, why return 1?
+ Check who relies on this result.
+ */
DBUG_RETURN(1);
+ }
if (table->status)
{
/*
@@ -2791,6 +2909,46 @@
}
+/*
+ TIMOUR: write comment
+*/
+
+int subselect_uniquesubquery_engine::index_lookup()
+{
+ DBUG_ENTER("subselect_uniquesubquery_engine::index_lookup");
+ int error;
+ TABLE *table= tab->table;
+
+ if (!table->file->inited)
+ table->file->ha_index_init(tab->ref.key, 0);
+ error= table->file->ha_index_read_map(table->record[0],
+ tab->ref.key_buff,
+ make_prev_keypart_map(tab->
+ ref.key_parts),
+ HA_READ_KEY_EXACT);
+ DBUG_PRINT("info", ("lookup result: %i", error));
+
+ if (error && error != HA_ERR_KEY_NOT_FOUND && error != HA_ERR_END_OF_FILE)
+ {
+ /*
+ TIMOUR: I don't understand at all when do we need to call report_error.
+ In most places where we access an index, we don't do this. Why here?
+ */
+ error= report_error(table, error);
+ DBUG_RETURN(error);
+ }
+
+ table->null_row= 0;
+ if (!error && (!cond || cond->val_int()))
+ ((Item_in_subselect *) item)->value= 1;
+ else
+ ((Item_in_subselect *) item)->value= 0;
+
+ DBUG_RETURN(0);
+}
+
+
+
subselect_uniquesubquery_engine::~subselect_uniquesubquery_engine()
{
/* Tell handler we don't need the index anymore */
@@ -3225,6 +3383,7 @@
bool subselect_uniquesubquery_engine::no_tables()
{
/* returning value is correct, but this method should never be called */
+ DBUG_ASSERT(FALSE);
return 0;
}
@@ -3235,16 +3394,259 @@
/**
+ Check if an IN predicate should be executed via partial matching using
+ only schema information.
+
+ @details
+ This test essentially has three results:
+ - partial matching is applicable, but cannot be executed due to a
+ limitation in the total number of indexes, as a result we can't
+ use subquery materialization at all.
+ - partial matching is either applicable or not, and this can be
+ determined by looking at 'this->max_keys'.
+ If max_keys > 1, then we need partial matching because there are
+ more indexes than just the one we use during materialization to
+ remove duplicates.
+
+ @note
+ TIMOUR: The schema-based analysis for partial matching can be done once for
+ prepared statement and remembered. It is done here to remove the need to
+ save/restore all related variables between each re-execution, thus making
+ the code simpler.
+
+ @retval PARTIAL_MATCH if a partial match should be used
+ @retval COMPLETE_MATCH if a complete match (index lookup) should be used
+*/
+
+subselect_hash_sj_engine::exec_strategy
+subselect_hash_sj_engine::get_strategy_using_schema()
+{
+ Item_in_subselect *item_in= (Item_in_subselect *) item;
+
+ if (item_in->is_top_level_item())
+ return COMPLETE_MATCH;
+ else
+ {
+ List_iterator<Item> inner_col_it(*item_in->unit->get_unit_column_types());
+ Item *outer_col, *inner_col;
+
+ for (uint i= 0; i < item_in->left_expr->cols(); i++)
+ {
+ outer_col= item_in->left_expr->element_index(i);
+ inner_col= inner_col_it++;
+
+ if (!inner_col->maybe_null && !outer_col->maybe_null)
+ bitmap_set_bit(&non_null_key_parts, i);
+ else
+ {
+ bitmap_set_bit(&partial_match_key_parts, i);
+ ++count_partial_match_columns;
+ }
+ }
+ }
+
+ /* If no column contains NULLs use regular hash index lookups. */
+ if (count_partial_match_columns)
+ return PARTIAL_MATCH;
+ return COMPLETE_MATCH;
+}
+
+
+/**
+ Test whether an IN predicate must be computed via partial matching
+ based on the NULL statistics for each column of a materialized subquery.
+
+ @details The procedure analyzes column NULL statistics, updates the
+ matching type of columns that cannot be NULL or that contain only NULLs.
+ Based on this, the procedure determines the final execution strategy for
+ the [NOT] IN predicate.
+
+ @retval PARTIAL_MATCH if a partial match should be used
+ @retval COMPLETE_MATCH if a complete match (index lookup) should be used
+*/
+
+subselect_hash_sj_engine::exec_strategy
+subselect_hash_sj_engine::get_strategy_using_data()
+{
+ Item_in_subselect *item_in= (Item_in_subselect *) item;
+ select_materialize_with_stats *result_sink=
+ (select_materialize_with_stats *) result;
+ Item *outer_col;
+
+ /*
+ If we already determined that a complete match is enough based on schema
+ information, nothing can be better.
+ */
+ if (strategy == COMPLETE_MATCH)
+ return COMPLETE_MATCH;
+
+ for (uint i= 0; i < item_in->left_expr->cols(); i++)
+ {
+ if (!bitmap_is_set(&partial_match_key_parts, i))
+ continue;
+ outer_col= item_in->left_expr->element_index(i);
+ /*
+ If column 'i' doesn't contain NULLs, and the corresponding outer reference
+ cannot have a NULL value, then 'i' is a non-nullable column.
+ */
+ if (result_sink->get_null_count_of_col(i) == 0 && !outer_col->maybe_null)
+ {
+ bitmap_clear_bit(&partial_match_key_parts, i);
+ bitmap_set_bit(&non_null_key_parts, i);
+ --count_partial_match_columns;
+ }
+ if (result_sink->get_null_count_of_col(i) ==
+ tmp_table->file->stats.records)
+ ++count_null_only_columns;
+ }
+
+ /* If no column contains NULLs use regular hash index lookups. */
+ if (!count_partial_match_columns)
+ return COMPLETE_MATCH;
+ return PARTIAL_MATCH;
+}
+
+
+void
+subselect_hash_sj_engine::choose_partial_match_strategy(
+ bool has_non_null_key, bool has_covering_null_row,
+ MY_BITMAP *partial_match_key_parts)
+{
+ size_t pm_buff_size;
+
+ DBUG_ASSERT(strategy == PARTIAL_MATCH);
+ /*
+ Choose according to global optimizer switch. If only one of the switches is
+ 'ON', then the remaining strategy is the only possible one. The only cases
+ when this will be overridden is when the total size of all buffers for the
+ merge strategy is bigger than the 'rowid_merge_buff_size' system variable,
+ or if there isn't enough physical memory to allocate the buffers.
+ */
+ if (!optimizer_flag(thd, OPTIMIZER_SWITCH_PARTIAL_MATCH_ROWID_MERGE) &&
+ optimizer_flag(thd, OPTIMIZER_SWITCH_PARTIAL_MATCH_TABLE_SCAN))
+ strategy= PARTIAL_MATCH_SCAN;
+ else if
+ ( optimizer_flag(thd, OPTIMIZER_SWITCH_PARTIAL_MATCH_ROWID_MERGE) &&
+ !optimizer_flag(thd, OPTIMIZER_SWITCH_PARTIAL_MATCH_TABLE_SCAN))
+ strategy= PARTIAL_MATCH_MERGE;
+
+ /*
+ If both switches are ON, or both are OFF, we interpret that as "let the
+ optimizer decide". Perform a cost based choice between the two partial
+ matching strategies.
+ */
+ /*
+ TIMOUR: the above interpretation of the switch values could be changed to:
+ - if both are ON - let the optimizer decide,
+ - if both are OFF - do not use partial matching, therefore do not use
+ materialization in non-top-level predicates.
+ The problem with this is that we know for sure if we need partial matching
+ only after the subquery is materialized, and this is too late to revert to
+ the IN=>EXISTS strategy.
+ */
+ if (strategy == PARTIAL_MATCH)
+ {
+ /*
+ TIMOUR: Currently we use a super simplistic measure. This will be
+ addressed in a separate task.
+ */
+ if (tmp_table->file->stats.records < 100)
+ strategy= PARTIAL_MATCH_SCAN;
+ else
+ strategy= PARTIAL_MATCH_MERGE;
+ }
+
+ /* Check if there is enough memory for the rowid merge strategy. */
+ if (strategy == PARTIAL_MATCH_MERGE)
+ {
+ pm_buff_size= rowid_merge_buff_size(has_non_null_key,
+ has_covering_null_row,
+ partial_match_key_parts);
+ if (pm_buff_size > thd->variables.rowid_merge_buff_size)
+ strategy= PARTIAL_MATCH_SCAN;
+ }
+}
+
+
+/*
+ Compute the memory size of all buffers proportional to the number of rows
+ in tmp_table.
+
+ @details
+ If the result is bigger than thd->variables.rowid_merge_buff_size, partial
+ matching via merging is not applicable.
+*/
+
+size_t subselect_hash_sj_engine::rowid_merge_buff_size(
+ bool has_non_null_key, bool has_covering_null_row,
+ MY_BITMAP *partial_match_key_parts)
+{
+ size_t buff_size; /* Total size of all buffers used by partial matching. */
+ ha_rows row_count= tmp_table->file->stats.records;
+ uint rowid_length= tmp_table->file->ref_length;
+ select_materialize_with_stats *result_sink=
+ (select_materialize_with_stats *) result;
+
+ /* Size of the subselect_rowid_merge_engine::row_num_to_rowid buffer. */
+ buff_size= row_count * rowid_length * sizeof(uchar);
+
+ if (has_non_null_key)
+ {
+ /* Add the size of Ordered_key::key_buff of the only non-NULL key. */
+ buff_size+= row_count * sizeof(rownum_t);
+ }
+
+ if (!has_covering_null_row)
+ {
+ for (uint i= 0; i < partial_match_key_parts->n_bits; i++)
+ {
+ if (!bitmap_is_set(partial_match_key_parts, i) ||
+ result_sink->get_null_count_of_col(i) == row_count)
+ continue; /* In these cases we wouldn't construct Ordered keys. */
+
+ /* Add the size of Ordered_key::key_buff */
+ buff_size+= (row_count - result_sink->get_null_count_of_col(i)) *
+ sizeof(rownum_t);
+ /* Add the size of Ordered_key::null_key */
+ buff_size+= bitmap_buffer_size(result_sink->get_max_null_of_col(i));
+ }
+ }
+
+ return buff_size;
+}
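
For illustration, the estimate can be reproduced outside the server. The sketch
below is a standalone restatement under simplifying assumptions: row numbers are
taken to be 8 bytes, and each NULL bitmap is sized by the total row count and
rounded up to whole bytes (the server sizes it by the largest NULL row number
via bitmap_buffer_size()):

  #include <cstddef>
  #include <vector>

  /*
    rows:         number of rows in the materialized temp table
    rowid_length: bytes per rowid (handler::ref_length)
    nulls_per_indexed_col: for every single-column key that would be built,
                  the number of NULLs in that column (all-NULL columns are
                  excluded, since no key is built for them)
  */
  static size_t estimate_merge_buffers(size_t rows, size_t rowid_length,
                                       bool has_non_null_key,
                                       bool has_covering_null_row,
                                       const std::vector<size_t>& nulls_per_indexed_col)
  {
    const size_t rownum_size= 8;             /* assumed sizeof(rownum_t) */
    size_t buff= rows * rowid_length;        /* row_num -> rowid mapping */

    if (has_non_null_key)
      buff+= rows * rownum_size;             /* key_buff of the non-NULL key */

    if (!has_covering_null_row)
    {
      for (size_t nulls : nulls_per_indexed_col)
      {
        buff+= (rows - nulls) * rownum_size; /* key_buff of this column   */
        buff+= (rows + 7) / 8;               /* its null_key bitmap bytes */
      }
    }
    return buff;
  }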
+
+
+/*
+ Initialize a MY_BITMAP with a buffer allocated on the current
+ memory root.
+ TIMOUR: move to bitmap C file?
+*/
+
+static my_bool
+bitmap_init_memroot(MY_BITMAP *map, uint n_bits, MEM_ROOT *mem_root)
+{
+ my_bitmap_map *bitmap_buf;
+
+ if (!(bitmap_buf= (my_bitmap_map*) alloc_root(mem_root,
+ bitmap_buffer_size(n_bits))) ||
+ bitmap_init(map, bitmap_buf, n_bits, FALSE))
+ return TRUE;
+ bitmap_clear_all(map);
+ return FALSE;
+}
+
+
+/**
Create all structures needed for IN execution that can live between PS
reexecution.
- @detail
+ @param tmp_columns the items that produce the data for the temp table
+
+ @details
- Create a temporary table to store the result of the IN subquery. The
temporary table has one hash index on all its columns.
- Create a new result sink that sends the result stream of the subquery to
the temporary table,
- - Create and initialize a new JOIN_TAB, and TABLE_REF objects to perform
- lookups into the indexed temporary table.
@notice:
Currently Item_subselect::init() already chooses and creates at parse
@@ -3256,71 +3658,178 @@
bool subselect_hash_sj_engine::init_permanent(List<Item> *tmp_columns)
{
- /* The result sink where we will materialize the subquery result. */
- select_union *tmp_result_sink;
- /* The table into which the subquery is materialized. */
- TABLE *tmp_table;
- KEY *tmp_key; /* The only index on the temporary table. */
- uint tmp_key_parts; /* Number of keyparts in tmp_key. */
- Item_in_subselect *item_in= (Item_in_subselect *) item;
+ /* Options to create_tmp_table. */
+ ulonglong tmp_create_options= thd->options | TMP_TABLE_ALL_COLUMNS;
+ /* | TMP_TABLE_FORCE_MYISAM; TIMOUR: force MYISAM */
DBUG_ENTER("subselect_hash_sj_engine::init_permanent");
- /* 1. Create/initialize materialization related objects. */
+ if (bitmap_init_memroot(&non_null_key_parts, tmp_columns->elements,
+ thd->mem_root) ||
+ bitmap_init_memroot(&partial_match_key_parts, tmp_columns->elements,
+ thd->mem_root))
+ DBUG_RETURN(TRUE);
/*
Create and initialize a select result interceptor that stores the
result stream in a temporary table. The temporary table itself is
managed (created/filled/etc) internally by the interceptor.
*/
- if (!(tmp_result_sink= new select_union))
- DBUG_RETURN(TRUE);
- if (tmp_result_sink->create_result_table(
- thd, tmp_columns, TRUE,
- thd->options | TMP_TABLE_ALL_COLUMNS,
+/*
+ TIMOUR:
+ Select a more efficient result sink when we know there is no need to collect
+ data statistics.
+
+ if (strategy == COMPLETE_MATCH)
+ {
+ if (!(result= new select_union))
+ DBUG_RETURN(TRUE);
+ }
+ else if (strategy == PARTIAL_MATCH)
+ {
+ if (!(result= new select_materialize_with_stats))
+ DBUG_RETURN(TRUE);
+ }
+*/
+ if (!(result= new select_materialize_with_stats))
+ DBUG_RETURN(TRUE);
+
+ if (((select_union*) result)->create_result_table(
+ thd, tmp_columns, TRUE, tmp_create_options,
"materialized subselect", TRUE))
DBUG_RETURN(TRUE);
- tmp_table= tmp_result_sink->table;
- tmp_key= tmp_table->key_info;
- tmp_key_parts= tmp_key->key_parts;
+ tmp_table= ((select_union*) result)->table;
/*
- If the subquery has blobs, or the total key lenght is bigger than some
- length, then the created index cannot be used for lookups and we
- can't use hash semi join. If this is the case, delete the temporary
- table since it will not be used, and tell the caller we failed to
- initialize the engine.
+    If the subquery has blobs, or the total key length is bigger than
+ some length, or the total number of key parts is more than the
+ allowed maximum (currently MAX_REF_PARTS == 16), then the created
+ index cannot be used for lookups and we can't use hash semi
+ join. If this is the case, delete the temporary table since it
+ will not be used, and tell the caller we failed to initialize the
+ engine.
*/
if (tmp_table->s->keys == 0)
{
-#ifndef DBUG_OFF
- handlerton *tmp_table_hton= tmp_table->s->db_type();
-#ifdef USE_MARIA_FOR_TMP_TABLES
- DBUG_ASSERT(tmp_table_hton == maria_hton);
-#else
- DBUG_ASSERT(tmp_table_hton == myisam_hton);
-#endif
-#endif
DBUG_ASSERT(
tmp_table->s->uniques ||
tmp_table->key_info->key_length >= tmp_table->file->max_key_length() ||
tmp_table->key_info->key_parts > tmp_table->file->max_key_parts());
free_tmp_table(thd, tmp_table);
+ tmp_table= NULL;
delete result;
result= NULL;
DBUG_RETURN(TRUE);
}
- result= tmp_result_sink;
/*
Make sure there is only one index on the temp table, and it doesn't have
the extra key part created when s->uniques > 0.
*/
- DBUG_ASSERT(tmp_table->s->keys == 1 && tmp_columns->elements == tmp_key_parts);
-
-
- /* 2. Create/initialize execution related objects. */
+ DBUG_ASSERT(tmp_table->s->keys == 1 &&
+ ((Item_in_subselect *) item)->left_expr->cols() ==
+ tmp_table->key_info->key_parts);
+
+ if (make_semi_join_conds() ||
+ /* A unique_engine is used both for complete and partial matching. */
+ !(lookup_engine= make_unique_engine()))
+ DBUG_RETURN(TRUE);
+
+ DBUG_RETURN(FALSE);
+}
+
+
+/*
+ Create an artificial condition to post-filter those rows matched by index
+ lookups that cannot be distinguished by the index lookup procedure.
+
+ @notes
+ The need for post-filtering may occur e.g. because of
+ truncation. Prepared statements execution requires that fix_fields is
+ called for every execution. In order to call fix_fields we need to
+ create a Name_resolution_context and a corresponding TABLE_LIST for
+ the temporary table for the subquery, so that all column references
+ to the materialized subquery table can be resolved correctly.
+
+ @returns
+ @retval TRUE memory allocation error occurred
+ @retval FALSE the conditions were created and resolved (fixed)
+*/
+
+bool subselect_hash_sj_engine::make_semi_join_conds()
+{
+ /*
+ Table reference for tmp_table that is used to resolve column references
+ (Item_fields) to columns in tmp_table.
+ */
+ TABLE_LIST *tmp_table_ref;
+ /* Name resolution context for all tmp_table columns created below. */
+ Name_resolution_context *context;
+ Item_in_subselect *item_in= (Item_in_subselect *) item;
+
+ DBUG_ENTER("subselect_hash_sj_engine::make_semi_join_conds");
+ DBUG_ASSERT(semi_join_conds == NULL);
+
+ if (!(semi_join_conds= new Item_cond_and))
+ DBUG_RETURN(TRUE);
+
+ if (!(tmp_table_ref= (TABLE_LIST*) thd->alloc(sizeof(TABLE_LIST))))
+ DBUG_RETURN(TRUE);
+
+ tmp_table_ref->init_one_table("", "materialized subselect", TL_READ);
+ tmp_table_ref->table= tmp_table;
+
+ context= new Name_resolution_context;
+ context->init();
+ context->first_name_resolution_table=
+ context->last_name_resolution_table= tmp_table_ref;
+
+ for (uint i= 0; i < item_in->left_expr->cols(); i++)
+ {
+ Item_func_eq *eq_cond; /* New equi-join condition for the current column. */
+ /* Item for the corresponding field from the materialized temp table. */
+ Item_field *right_col_item;
+
+ if (!(right_col_item= new Item_field(thd, context, tmp_table->field[i])) ||
+ !(eq_cond= new Item_func_eq(item_in->left_expr->element_index(i),
+ right_col_item)) ||
+ (((Item_cond_and*)semi_join_conds)->add(eq_cond)))
+ {
+ delete semi_join_conds;
+ semi_join_conds= NULL;
+ DBUG_RETURN(TRUE);
+ }
+ }
+ if (semi_join_conds->fix_fields(thd, (Item**)&semi_join_conds))
+ DBUG_RETURN(TRUE);
+
+ DBUG_RETURN(FALSE);
+}
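
As a standalone illustration of why the post-filter is needed (independent of
the server's hash index implementation; the 4-character prefix below simply
stands in for a lossy, truncated key): when the lookup structure only sees a
truncated form of the value, distinct values can collide, and only an exact
re-comparison separates them.

  #include <iostream>
  #include <map>
  #include <string>

  int main()
  {
    /* "Index" keyed by a truncated (4-character) form of the value. */
    std::multimap<std::string, std::string> index;
    index.insert(std::make_pair(std::string("abcd"), std::string("abcdef")));
    index.insert(std::make_pair(std::string("abcd"), std::string("abcdxy")));

    std::string outer_value("abcdef");
    std::string lookup_key(outer_value.substr(0, 4));

    /* The truncated lookup matches both stored rows ... */
    auto range= index.equal_range(lookup_key);
    for (auto it= range.first; it != range.second; ++it)
    {
      /* ... so an explicit equality re-check filters out the false match. */
      if (it->second == outer_value)
        std::cout << "real match: " << it->second << "\n";
    }
    return 0;
  }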
+
+
+/**
+ Create a new uniquesubquery engine for the execution of an IN predicate.
+
+ @details
+  Create and initialize a new JOIN_TAB and TABLE_REF objects to perform
+ lookups into the indexed temporary table.
+
+  @retval A new subselect_uniquesubquery_engine object
+ @retval NULL if a memory allocation error occurs
+*/
+
+subselect_uniquesubquery_engine*
+subselect_hash_sj_engine::make_unique_engine()
+{
+ Item_in_subselect *item_in= (Item_in_subselect *) item;
+ /* The only index on the temporary table. */
+ KEY *tmp_key= tmp_table->key_info;
+ /* Number of keyparts in tmp_key. */
+ uint tmp_key_parts= tmp_key->key_parts;
+ JOIN_TAB *tab;
+
+ DBUG_ENTER("subselect_hash_sj_engine::make_unique_engine");
/*
Create and initialize the JOIN_TAB that represents an index lookup
@@ -3328,9 +3837,9 @@
- this JOIN_TAB has no corresponding JOIN (and doesn't need one), and
- here we initialize only those members that are used by
subselect_uniquesubquery_engine, so these objects are incomplete.
- */
+ */
if (!(tab= (JOIN_TAB*) thd->alloc(sizeof(JOIN_TAB))))
- DBUG_RETURN(TRUE);
+ DBUG_RETURN(NULL);
tab->table= tmp_table;
tab->ref.key= 0; /* The only temp table index. */
tab->ref.key_length= tmp_key->key_length;
@@ -3341,60 +3850,18 @@
(tmp_key_parts + 1)))) ||
!(tab->ref.items=
(Item**) thd->alloc(sizeof(Item*) * tmp_key_parts)))
- DBUG_RETURN(TRUE);
+ DBUG_RETURN(NULL);
KEY_PART_INFO *cur_key_part= tmp_key->key_part;
store_key **ref_key= tab->ref.key_copy;
uchar *cur_ref_buff= tab->ref.key_buff;
-
- /*
- Create an artificial condition to post-filter those rows matched by index
- lookups that cannot be distinguished by the index lookup procedure, e.g.
- because of truncation. Prepared statements execution requires that
- fix_fields is called for every execution. In order to call fix_fields we
- need to create a Name_resolution_context and a corresponding TABLE_LIST
- for the temporary table for the subquery, so that all column references
- to the materialized subquery table can be resolved correctly.
- */
- DBUG_ASSERT(cond == NULL);
- if (!(cond= new Item_cond_and))
- DBUG_RETURN(TRUE);
- /*
- Table reference for tmp_table that is used to resolve column references
- (Item_fields) to columns in tmp_table.
- */
- TABLE_LIST *tmp_table_ref;
- if (!(tmp_table_ref= (TABLE_LIST*) thd->alloc(sizeof(TABLE_LIST))))
- DBUG_RETURN(TRUE);
-
- tmp_table_ref->init_one_table("", "materialized subselect", TL_READ);
- tmp_table_ref->table= tmp_table;
-
- /* Name resolution context for all tmp_table columns created below. */
- Name_resolution_context *context= new Name_resolution_context;
- context->init();
- context->first_name_resolution_table=
- context->last_name_resolution_table= tmp_table_ref;
for (uint i= 0; i < tmp_key_parts; i++, cur_key_part++, ref_key++)
{
- Item_func_eq *eq_cond; /* New equi-join condition for the current column. */
- /* Item for the corresponding field from the materialized temp table. */
- Item_field *right_col_item;
+ tab->ref.items[i]= item_in->left_expr->element_index(i);
int null_count= test(cur_key_part->field->real_maybe_null());
- tab->ref.items[i]= item_in->left_expr->element_index(i);
-
- if (!(right_col_item= new Item_field(thd, context, cur_key_part->field)) ||
- !(eq_cond= new Item_func_eq(tab->ref.items[i], right_col_item)) ||
- ((Item_cond_and*)cond)->add(eq_cond))
- {
- delete cond;
- cond= NULL;
- DBUG_RETURN(TRUE);
- }
-
*ref_key= new store_key_item(thd, cur_key_part->field,
- /* TODO:
+ /* TIMOUR:
the NULL byte is taken into account in
cur_key_part->store_length, so instead of
cur_ref_buff + test(maybe_null), we could
@@ -3409,10 +3876,8 @@
tab->ref.key_err= 1;
tab->ref.key_parts= tmp_key_parts;
- if (cond->fix_fields(thd, &cond))
- DBUG_RETURN(TRUE);
-
- DBUG_RETURN(FALSE);
+ DBUG_RETURN(new subselect_uniquesubquery_engine(thd, tab, item,
+ semi_join_conds));
}
@@ -3435,7 +3900,8 @@
Repeat name resolution for 'cond' since cond is not part of any
clause of the query, and it is not 'fixed' during JOIN::prepare.
*/
- if (cond && !cond->fixed && cond->fix_fields(thd, &cond))
+ if (semi_join_conds && !semi_join_conds->fixed &&
+ semi_join_conds->fix_fields(thd, (Item**)&semi_join_conds))
return TRUE;
/* Let our engine reuse this query plan for materialization. */
materialize_join= materialize_engine->join;
@@ -3446,32 +3912,53 @@
subselect_hash_sj_engine::~subselect_hash_sj_engine()
{
+ delete lookup_engine;
delete result;
- if (tab)
- free_tmp_table(thd, tab->table);
+ if (tmp_table)
+ free_tmp_table(thd, tmp_table);
}
/**
Cleanup performed after each PS execution.
- @detail
+ @details
Called in the end of JOIN::prepare for PS from Item_subselect::cleanup.
*/
void subselect_hash_sj_engine::cleanup()
{
+ enum_engine_type lookup_engine_type= lookup_engine->engine_type();
is_materialized= FALSE;
+ bitmap_clear_all(&non_null_key_parts);
+ bitmap_clear_all(&partial_match_key_parts);
+ count_partial_match_columns= 0;
+ count_null_only_columns= 0;
+ strategy= UNDEFINED;
+ materialize_engine->cleanup();
+ if (lookup_engine_type == TABLE_SCAN_ENGINE ||
+ lookup_engine_type == ROWID_MERGE_ENGINE)
+ {
+ subselect_engine *inner_lookup_engine;
+ inner_lookup_engine=
+ ((subselect_partial_match_engine*) lookup_engine)->lookup_engine;
+ /*
+ Partial match engines are recreated for each PS execution inside
+ subselect_hash_sj_engine::exec().
+ */
+ delete lookup_engine;
+ lookup_engine= inner_lookup_engine;
+ }
+ DBUG_ASSERT(lookup_engine->engine_type() == UNIQUESUBQUERY_ENGINE);
+ lookup_engine->cleanup();
result->cleanup(); /* Resets the temp table as well. */
- materialize_engine->cleanup();
- subselect_uniquesubquery_engine::cleanup();
}
/**
Execute a subquery IN predicate via materialization.
- @detail
+ @details
If needed materialize the subquery into a temporary table, then
copmpute the predicate via a lookup into this table.
@@ -3482,6 +3969,9 @@
int subselect_hash_sj_engine::exec()
{
Item_in_subselect *item_in= (Item_in_subselect *) item;
+ SELECT_LEX *save_select= thd->lex->current_select;
+ subselect_partial_match_engine *pm_engine= NULL;
+ int res= 0;
DBUG_ENTER("subselect_hash_sj_engine::exec");
@@ -3489,56 +3979,126 @@
Optimize and materialize the subquery during the first execution of
the subquery predicate.
*/
- if (!is_materialized)
- {
- int res= 0;
- SELECT_LEX *save_select= thd->lex->current_select;
- thd->lex->current_select= materialize_engine->select_lex;
- if ((res= materialize_join->optimize()))
- goto err; /* purecov: inspected */
- materialize_join->exec();
- if ((res= test(materialize_join->error || thd->is_fatal_error)))
- goto err;
-
- /*
- TODO:
- - Unlock all subquery tables as we don't need them. To implement this
- we need to add new functionality to JOIN::join_free that can unlock
- all tables in a subquery (and all its subqueries).
- - The temp table used for grouping in the subquery can be freed
- immediately after materialization (yet it's done together with
- unlocking).
- */
- is_materialized= TRUE;
- /*
- If the subquery returned no rows, the temporary table is empty, so we know
- directly that the result of IN is FALSE. We first update the table
- statistics, then we test if the temporary table for the query result is
- empty.
- */
- tab->table->file->info(HA_STATUS_VARIABLE);
- if (!tab->table->file->stats.records)
- {
- empty_result_set= TRUE;
- item_in->value= FALSE;
- /* TODO: check we need this: item_in->null_value= FALSE; */
- DBUG_RETURN(FALSE);
- }
- /* Set tmp_param only if its usable, i.e. tmp_param->copy_field != NULL. */
- tmp_param= &(item_in->unit->outer_select()->join->tmp_table_param);
- if (tmp_param && !tmp_param->copy_field)
- tmp_param= NULL;
+ thd->lex->current_select= materialize_engine->select_lex;
+ if ((res= materialize_join->optimize()))
+ goto err; /* purecov: inspected */
+ DBUG_ASSERT(!is_materialized); /* We should materialize only once. */
+ materialize_join->exec();
+ if ((res= test(materialize_join->error || thd->is_fatal_error)))
+ goto err;
+
+ /*
+ TODO:
+ - Unlock all subquery tables as we don't need them. To implement this
+ we need to add new functionality to JOIN::join_free that can unlock
+ all tables in a subquery (and all its subqueries).
+ - The temp table used for grouping in the subquery can be freed
+ immediately after materialization (yet it's done together with
+ unlocking).
+ */
+ is_materialized= TRUE;
+ /*
+ If the subquery returned no rows, the temporary table is empty, so we know
+ directly that the result of IN is FALSE. We first update the table
+ statistics, then we test if the temporary table for the query result is
+ empty.
+ */
+ tmp_table->file->info(HA_STATUS_VARIABLE);
+ if (!tmp_table->file->stats.records)
+ {
+ item_in->value= FALSE;
+ /* The value of IN will not change during this execution. */
+ item_in->is_constant= TRUE;
+ item_in->set_first_execution();
+ /* TIMOUR: check if we need this: item_in->null_value= FALSE; */
+ DBUG_RETURN(FALSE);
+ }
+
+ /*
+ TIMOUR: The schema-based analysis for partial matching can be done once for
+ prepared statement and remembered. It is done here to remove the need to
+ save/restore all related variables between each re-execution, thus making
+ the code simpler.
+ */
+ strategy= get_strategy_using_schema();
+ /* This call may discover that we don't need partial matching at all. */
+ strategy= get_strategy_using_data();
+ if (strategy == PARTIAL_MATCH)
+ {
+ uint count_pm_keys; /* Total number of keys needed for partial matching. */
+ MY_BITMAP *nn_key_parts; /* The key parts of the only non-NULL index. */
+ uint covering_null_row_width;
+ select_materialize_with_stats *result_sink=
+ (select_materialize_with_stats *) result;
+
+ nn_key_parts= (count_partial_match_columns < tmp_table->s->fields) ?
+ &non_null_key_parts : NULL;
+
+ if (result_sink->get_max_nulls_in_row() ==
+ tmp_table->s->fields -
+ (nn_key_parts ? bitmap_bits_set(nn_key_parts) : 0))
+ covering_null_row_width= result_sink->get_max_nulls_in_row();
+ else
+ covering_null_row_width= 0;
+
+ if (covering_null_row_width)
+ count_pm_keys= nn_key_parts ? 1 : 0;
+ else
+ count_pm_keys= count_partial_match_columns - count_null_only_columns +
+ (nn_key_parts ? 1 : 0);
+
+ choose_partial_match_strategy(test(nn_key_parts),
+ test(covering_null_row_width),
+ &partial_match_key_parts);
+ DBUG_ASSERT(strategy == PARTIAL_MATCH_MERGE ||
+ strategy == PARTIAL_MATCH_SCAN);
+ if (strategy == PARTIAL_MATCH_MERGE)
+ {
+ pm_engine=
+ new subselect_rowid_merge_engine((subselect_uniquesubquery_engine*)
+ lookup_engine, tmp_table,
+ count_pm_keys,
+ covering_null_row_width,
+ item, result,
+ semi_join_conds->argument_list());
+ if (!pm_engine ||
+ ((subselect_rowid_merge_engine*) pm_engine)->
+ init(nn_key_parts, &partial_match_key_parts))
+ {
+ /*
+ The call to init() would fail if there was not enough memory to allocate
+ all buffers for the rowid merge strategy. In this case revert to table
+ scanning which doesn't need any big buffers.
+ */
+ delete pm_engine;
+ pm_engine= NULL;
+ strategy= PARTIAL_MATCH_SCAN;
+ }
+ }
+
+ if (strategy == PARTIAL_MATCH_SCAN)
+ {
+ if (!(pm_engine=
+ new subselect_table_scan_engine((subselect_uniquesubquery_engine*)
+ lookup_engine, tmp_table,
+ item, result,
+ semi_join_conds->argument_list(),
+ covering_null_row_width)))
+ {
+ /* This is an irrecoverable error. */
+ res= 1;
+ goto err;
+ }
+ }
+ }
+
+ if (pm_engine)
+ lookup_engine= pm_engine;
+ item_in->change_engine(lookup_engine);
err:
- thd->lex->current_select= save_select;
- if (res)
- DBUG_RETURN(res);
- }
-
- /*
- Lookup the left IN operand in the hash index of the materialized subquery.
- */
- DBUG_RETURN(subselect_uniquesubquery_engine::exec());
+ thd->lex->current_select= save_select;
+ DBUG_RETURN(res);
}
@@ -3551,10 +4111,1008 @@
str->append(STRING_WITH_LEN(" <materialize> ("));
materialize_engine->print(str, query_type);
str->append(STRING_WITH_LEN(" ), "));
- if (tab)
- subselect_uniquesubquery_engine::print(str, query_type);
+
+ if (lookup_engine)
+ lookup_engine->print(str, query_type);
else
str->append(STRING_WITH_LEN(
- "<the access method for lookups is not yet created>"
+ "<engine selected at execution time>"
));
}
+
+void subselect_hash_sj_engine::fix_length_and_dec(Item_cache** row)
+{
+ DBUG_ASSERT(FALSE);
+}
+
+void subselect_hash_sj_engine::exclude()
+{
+ DBUG_ASSERT(FALSE);
+}
+
+bool subselect_hash_sj_engine::no_tables()
+{
+ DBUG_ASSERT(FALSE);
+ return FALSE;
+}
+
+bool subselect_hash_sj_engine::change_result(Item_subselect *si,
+ select_result_interceptor *res)
+{
+ DBUG_ASSERT(FALSE);
+ return TRUE;
+}
+
+
+Ordered_key::Ordered_key(uint keyid_arg, TABLE *tbl_arg, Item *search_key_arg,
+ ha_rows null_count_arg, ha_rows min_null_row_arg,
+ ha_rows max_null_row_arg, uchar *row_num_to_rowid_arg)
+ : keyid(keyid_arg), tbl(tbl_arg), search_key(search_key_arg),
+ row_num_to_rowid(row_num_to_rowid_arg), null_count(null_count_arg)
+{
+ DBUG_ASSERT(tbl->file->stats.records > null_count);
+ key_buff_elements= tbl->file->stats.records - null_count;
+ cur_key_idx= HA_POS_ERROR;
+
+ DBUG_ASSERT((null_count && min_null_row_arg && max_null_row_arg) ||
+ (!null_count && !min_null_row_arg && !max_null_row_arg));
+ if (null_count)
+ {
+    /* The counters are 1-based; for key access we need 0-based indexes. */
+ min_null_row= min_null_row_arg - 1;
+ max_null_row= max_null_row_arg - 1;
+ }
+ else
+ min_null_row= max_null_row= 0;
+}
+
+
+Ordered_key::~Ordered_key()
+{
+ my_free((char*) key_buff, MYF(0));
+ bitmap_free(&null_key);
+}
+
+
+/*
+ Cleanup that needs to be done for each PS (re)execution.
+*/
+
+void Ordered_key::cleanup()
+{
+ /*
+ Currently these keys are recreated for each PS re-execution, thus
+ there is nothing to cleanup, the whole object goes away after execution
+ is over. All handler related initialization/deinitialization is done by
+ the parent subselect_rowid_merge_engine object.
+ */
+}
+
+
+/*
+ Initialize a multi-column index.
+*/
+
+bool Ordered_key::init(MY_BITMAP *columns_to_index)
+{
+ THD *thd= tbl->in_use;
+ uint cur_key_col= 0;
+ Item_field *cur_tmp_field;
+ Item_func_lt *fn_less_than;
+
+ key_column_count= bitmap_bits_set(columns_to_index);
+
+ // TIMOUR: check for mem allocation err, revert to scan
+
+ key_columns= (Item_field**) thd->alloc(key_column_count *
+ sizeof(Item_field*));
+ compare_pred= (Item_func_lt**) thd->alloc(key_column_count *
+ sizeof(Item_func_lt*));
+
+ for (uint i= 0; i < columns_to_index->n_bits; i++)
+ {
+ if (!bitmap_is_set(columns_to_index, i))
+ continue;
+ cur_tmp_field= new Item_field(tbl->field[i]);
+ /* Create the predicate (tmp_column[i] < outer_ref[i]). */
+ fn_less_than= new Item_func_lt(cur_tmp_field,
+ search_key->element_index(i));
+ fn_less_than->fix_fields(thd, (Item**) &fn_less_than);
+ key_columns[cur_key_col]= cur_tmp_field;
+ compare_pred[cur_key_col]= fn_less_than;
+ ++cur_key_col;
+ }
+
+ if (alloc_keys_buffers())
+ {
+ /* TIMOUR revert to partial match via table scan. */
+ return TRUE;
+ }
+ return FALSE;
+}
+
+
+/*
+ Initialize a single-column index.
+*/
+
+bool Ordered_key::init(int col_idx)
+{
+ THD *thd= tbl->in_use;
+
+ key_column_count= 1;
+
+ // TIMOUR: check for mem allocation err, revert to scan
+
+ key_columns= (Item_field**) thd->alloc(sizeof(Item_field*));
+ compare_pred= (Item_func_lt**) thd->alloc(sizeof(Item_func_lt*));
+
+ key_columns[0]= new Item_field(tbl->field[col_idx]);
+ /* Create the predicate (tmp_column[i] < outer_ref[i]). */
+ compare_pred[0]= new Item_func_lt(key_columns[0],
+ search_key->element_index(col_idx));
+ compare_pred[0]->fix_fields(thd, (Item**)&compare_pred[0]);
+
+ if (alloc_keys_buffers())
+ {
+ /* TIMOUR revert to partial match via table scan. */
+ return TRUE;
+ }
+ return FALSE;
+}
+
+
+/*
+ Allocate the buffers for both the row number, and the NULL-bitmap indexes.
+*/
+
+bool Ordered_key::alloc_keys_buffers()
+{
+ DBUG_ASSERT(key_buff_elements > 0);
+
+ if (!(key_buff= (rownum_t*) my_malloc(key_buff_elements * sizeof(rownum_t),
+ MYF(MY_WME))))
+ return TRUE;
+
+ /*
+ TIMOUR: it is enough to create bitmaps with size
+ (max_null_row - min_null_row), and then use min_null_row as
+ lookup offset.
+ */
+  /* Notice that max_null_row is the maximum array index; we need a count, hence +1. */
+ if (bitmap_init(&null_key, NULL, max_null_row + 1, FALSE))
+ return TRUE;
+
+ cur_key_idx= HA_POS_ERROR;
+
+ return FALSE;
+}
+
+
+/*
+  Quick sort comparison function that compares two rows of the same table
+  identified by their row numbers.
+
+  @retval -1 if row 'a' sorts before row 'b'
+  @retval 0  if the rows compare as equal on all indexed columns
+  @retval +1 if row 'a' sorts after row 'b'
+*/
+
+int
+Ordered_key::cmp_keys_by_row_data(ha_rows a, ha_rows b)
+{
+ uchar *rowid_a, *rowid_b;
+ int error, cmp_res;
+ /* The length in bytes of the rowids (positions) of tmp_table. */
+ uint rowid_length= tbl->file->ref_length;
+
+ if (a == b)
+ return 0;
+ /* Get the corresponding rowids. */
+ rowid_a= row_num_to_rowid + a * rowid_length;
+ rowid_b= row_num_to_rowid + b * rowid_length;
+ /* Fetch the rows for comparison. */
+ error= tbl->file->ha_rnd_pos(tbl->record[0], rowid_a);
+ DBUG_ASSERT(!error);
+ error= tbl->file->ha_rnd_pos(tbl->record[1], rowid_b);
+ DBUG_ASSERT(!error);
+ /*
+ Compare the two rows by the corresponding values of the indexed
+ columns.
+ */
+ for (uint i= 0; i < key_column_count; i++)
+ {
+ Field *cur_field= key_columns[i]->field;
+ if ((cmp_res= cur_field->cmp_offset(tbl->s->rec_buff_length)))
+ return (cmp_res > 0 ? 1 : -1);
+ }
+ return 0;
+}
+
+
+int
+Ordered_key::cmp_keys_by_row_data_and_rownum(Ordered_key *key,
+ rownum_t* a, rownum_t* b)
+{
+ /* The result of comparing the two keys according to their row data. */
+ int cmp_row_res= key->cmp_keys_by_row_data(*a, *b);
+ if (cmp_row_res)
+ return cmp_row_res;
+ return (*a < *b) ? -1 : (*a > *b) ? 1 : 0;
+}
+
+
+void Ordered_key::sort_keys()
+{
+ my_qsort2(key_buff, key_buff_elements, sizeof(rownum_t),
+ (qsort2_cmp) &cmp_keys_by_row_data_and_rownum, (void*) this);
+ /* Invalidate the current row position. */
+ cur_key_idx= HA_POS_ERROR;
+}
+
+
+/*
+ The fraction of rows that do not contain NULL in the columns indexed by
+ this key.
+
+ @retval 1 if there are no NULLs
+ @retval 0 if only NULLs
+*/
+
+double Ordered_key::null_selectivity()
+{
+ /* We should not be processing empty tables. */
+ DBUG_ASSERT(tbl->file->stats.records);
+ return (1 - (double) null_count / (double) tbl->file->stats.records);
+}
+
+
+/*
+ Compare the value(s) of the current key in 'search_key' with the
+ data of the current table record.
+
+ @notes The comparison result follows from the way compare_pred
+ is created in Ordered_key::init. Currently compare_pred compares
+  a field of the current row with the corresponding Item that
+ contains the search key.
+
+ @param row_num Number of the row (not index in the key_buff array)
+
+ @retval -1 if (current row < search_key)
+ @retval 0 if (current row == search_key)
+ @retval +1 if (current row > search_key)
+*/
+
+int Ordered_key::cmp_key_with_search_key(rownum_t row_num)
+{
+ /* The length in bytes of the rowids (positions) of tmp_table. */
+ uint rowid_length= tbl->file->ref_length;
+ uchar *cur_rowid= row_num_to_rowid + row_num * rowid_length;
+ int error, cmp_res;
+
+ error= tbl->file->ha_rnd_pos(tbl->record[0], cur_rowid);
+ DBUG_ASSERT(!error);
+
+ for (uint i= 0; i < key_column_count; i++)
+ {
+ cmp_res= compare_pred[i]->get_comparator()->compare();
+ /* Unlike Arg_comparator::compare_row() here there should be no NULLs. */
+ DBUG_ASSERT(!compare_pred[i]->null_value);
+ if (cmp_res)
+ return (cmp_res > 0 ? 1 : -1);
+ }
+ return 0;
+}
+
+
+/*
+ Find a key in a sorted array of keys via binary search.
+
+ see create_subq_in_equalities()
+*/
+
+bool Ordered_key::lookup()
+{
+ DBUG_ASSERT(key_buff_elements);
+
+ ha_rows lo= 0;
+ ha_rows hi= key_buff_elements - 1;
+ ha_rows mid;
+ int cmp_res;
+
+ while (lo <= hi)
+ {
+ mid= lo + (hi - lo) / 2;
+ cmp_res= cmp_key_with_search_key(key_buff[mid]);
+ /*
+      In order to find the leftmost match, check whether the previous element
+      is also equal to the search key. If it is, we need to search further to
+      the left.
+ */
+ if (!cmp_res && mid > 0)
+ cmp_res= !cmp_key_with_search_key(key_buff[mid - 1]) ? 1 : 0;
+
+ if (cmp_res == -1)
+ {
+ /* row[mid] < search_key */
+ lo= mid + 1;
+ }
+ else if (cmp_res == 1)
+ {
+ /* row[mid] > search_key */
+ if (!mid)
+ goto not_found;
+ hi= mid - 1;
+ }
+ else
+ {
+ /* row[mid] == search_key */
+ cur_key_idx= mid;
+ return TRUE;
+ }
+ }
+not_found:
+ cur_key_idx= HA_POS_ERROR;
+ return FALSE;
+}
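
The same "leftmost equal element" search can be shown on a plain sorted array.
A minimal standalone sketch (the integer comparison stands in for
cmp_key_with_search_key()):

  #include <cstddef>
  #include <vector>

  /*
    Return the index of the first element equal to 'key' in the sorted
    vector 'v', or v.size() if there is no such element.
  */
  static size_t lookup_first_equal(const std::vector<int>& v, int key)
  {
    if (v.empty())
      return 0;
    size_t lo= 0, hi= v.size() - 1;          /* inclusive bounds */
    while (lo <= hi)
    {
      size_t mid= lo + (hi - lo) / 2;
      int cmp= (v[mid] < key) ? -1 : (v[mid] > key) ? 1 : 0;
      /* If the previous element is also equal, keep searching to the left. */
      if (cmp == 0 && mid > 0 && v[mid - 1] == key)
        cmp= 1;
      if (cmp < 0)
        lo= mid + 1;
      else if (cmp > 0)
      {
        if (mid == 0)
          return v.size();                   /* everything is bigger than key */
        hi= mid - 1;
      }
      else
        return mid;                          /* leftmost match found */
    }
    return v.size();                         /* not found */
  }

The extra "previous element" check is what turns an arbitrary match into the
leftmost one, so that next_same() can then walk the run of equal keys from its
start.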
+
+
+/*
+ Move the current index pointer to the next key with the same column
+ values as the current key. Since the index is sorted, all such keys
+ are contiguous.
+*/
+
+bool Ordered_key::next_same()
+{
+ DBUG_ASSERT(key_buff_elements);
+
+ if (cur_key_idx < key_buff_elements - 1)
+ {
+ /*
+ TIMOUR:
+ The below is quite inefficient, since as a result we will fetch every
+ row (except the last one) twice. There must be a more efficient way,
+ e.g. swapping record[0] and record[1], and reading only the new record.
+ */
+ if (!cmp_keys_by_row_data(key_buff[cur_key_idx], key_buff[cur_key_idx + 1]))
+ {
+ ++cur_key_idx;
+ return TRUE;
+ }
+ }
+ return FALSE;
+}
+
+
+void Ordered_key::print(String *str)
+{
+ uint i;
+ str->append("{idx=");
+ str->qs_append(keyid);
+ str->append(", (");
+ for (i= 0; i < key_column_count - 1; i++)
+ {
+ str->append(key_columns[i]->field->field_name);
+ str->append(", ");
+ }
+ str->append(key_columns[i]->field->field_name);
+ str->append("), ");
+
+ str->append("null_bitmap: (bits=");
+ str->qs_append(null_key.n_bits);
+ str->append(", nulls= ");
+ str->qs_append((double)null_count);
+ str->append(", min_null= ");
+ str->qs_append((double)min_null_row);
+ str->append(", max_null= ");
+ str->qs_append((double)max_null_row);
+ str->append("), ");
+
+ str->append('}');
+}
+
+
+subselect_partial_match_engine::subselect_partial_match_engine(
+ subselect_uniquesubquery_engine *engine_arg,
+ TABLE *tmp_table_arg, Item_subselect *item_arg,
+ select_result_interceptor *result_arg,
+ List<Item> *equi_join_conds_arg,
+ uint covering_null_row_width_arg)
+ :subselect_engine(item_arg, result_arg),
+ tmp_table(tmp_table_arg), lookup_engine(engine_arg),
+ equi_join_conds(equi_join_conds_arg),
+ covering_null_row_width(covering_null_row_width_arg)
+{}
+
+
+int subselect_partial_match_engine::exec()
+{
+ Item_in_subselect *item_in= (Item_in_subselect *) item;
+ int res;
+
+ /* Try to find a matching row by index lookup. */
+ res= lookup_engine->copy_ref_key_simple();
+ if (res == -1)
+ {
+ /* The result is FALSE based on the outer reference. */
+ item_in->value= 0;
+ item_in->null_value= 0;
+ return 0;
+ }
+ else if (res == 0)
+ {
+ /* Search for a complete match. */
+ if ((res= lookup_engine->index_lookup()))
+ {
+      /* An error occurred during lookup(). */
+ item_in->value= 0;
+ item_in->null_value= 0;
+ return res;
+ }
+ else if (item_in->value)
+ {
+ /*
+ A complete match was found, the result of IN is TRUE.
+ Notice: (this->item == lookup_engine->item)
+ */
+ return 0;
+ }
+ }
+
+ if (covering_null_row_width == tmp_table->s->fields)
+ {
+ /*
+      If there is a NULL-only row that covers all columns, the result of IN
+ is UNKNOWN.
+ */
+ item_in->value= 0;
+ /*
+ TIMOUR: which one is the right way to propagate an UNKNOWN result?
+ Should we also set empty_result_set= FALSE; ???
+ */
+ //item_in->was_null= 1;
+ item_in->null_value= 1;
+ return 0;
+ }
+
+ /*
+ There is no complete match. Look for a partial match (UNKNOWN result), or
+ no match (FALSE).
+ */
+ if (tmp_table->file->inited)
+ tmp_table->file->ha_index_end();
+
+ if (partial_match())
+ {
+ /* The result of IN is UNKNOWN. */
+ item_in->value= 0;
+ /*
+ TIMOUR: which one is the right way to propagate an UNKNOWN result?
+ Should we also set empty_result_set= FALSE; ???
+ */
+ //item_in->was_null= 1;
+ item_in->null_value= 1;
+ }
+ else
+ {
+ /* The result of IN is FALSE. */
+ item_in->value= 0;
+ /*
+ TIMOUR: which one is the right way to propagate an UNKNOWN result?
+ Should we also set empty_result_set= FALSE; ???
+ */
+ //item_in->was_null= 0;
+ item_in->null_value= 0;
+ }
+
+ return 0;
+}
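
The decision ladder above is ordinary three-valued IN semantics. A compact
standalone restatement (the flag names are illustrative only):

  enum in_result { IN_FALSE, IN_TRUE, IN_UNKNOWN };

  /*
    complete_match:    an index lookup found a row equal to the outer value.
    covering_null_row: the temp table has a row that is NULL in every column.
    partial_match:     some row matches the outer value in every column that
                       is not NULL on either side.
  */
  static in_result evaluate_in(bool complete_match, bool covering_null_row,
                               bool partial_match)
  {
    if (complete_match)
      return IN_TRUE;                /* x IN (...) is TRUE  */
    if (covering_null_row || partial_match)
      return IN_UNKNOWN;             /* x IN (...) is NULL  */
    return IN_FALSE;                 /* x IN (...) is FALSE */
  }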
+
+
+void subselect_partial_match_engine::print(String *str,
+ enum_query_type query_type)
+{
+ /*
+ Should never be called as the actual engine cannot be known at query
+ optimization time.
+ */
+ DBUG_ASSERT(FALSE);
+}
+
+
+/*
+ @param non_null_key_parts
+ @param partial_match_key_parts A union of all single-column NULL key parts.
+ @param count_partial_match_columns Number of NULL keyparts (set bits above).
+
+ @retval FALSE the engine was initialized successfully
+  @retval TRUE there was some (memory allocation) error during initialization;
+    such errors should be interpreted as a signal to revert to another strategy
+*/
+
+bool
+subselect_rowid_merge_engine::init(MY_BITMAP *non_null_key_parts,
+ MY_BITMAP *partial_match_key_parts)
+{
+ /* The length in bytes of the rowids (positions) of tmp_table. */
+ uint rowid_length= tmp_table->file->ref_length;
+ ha_rows row_count= tmp_table->file->stats.records;
+ rownum_t cur_rownum= 0;
+ select_materialize_with_stats *result_sink=
+ (select_materialize_with_stats *) result;
+ uint cur_keyid= 0;
+ Item_in_subselect *item_in= (Item_in_subselect*) item;
+ int error;
+
+ if (keys_count == 0)
+ {
+ /* There is nothing to initialize, we will only do regular lookups. */
+ return FALSE;
+ }
+
+ DBUG_ASSERT(!covering_null_row_width || (covering_null_row_width &&
+ keys_count == 1 &&
+ non_null_key_parts));
+ /*
+ Allocate buffers to hold the merged keys and the mapping between rowids and
+ row numbers.
+ */
+ if (!(merge_keys= (Ordered_key**) thd->alloc(keys_count *
+ sizeof(Ordered_key*))) ||
+ !(row_num_to_rowid= (uchar*) my_malloc(row_count * rowid_length *
+ sizeof(uchar), MYF(MY_WME))))
+ return TRUE;
+
+ /* Create the only non-NULL key if there is any. */
+ if (non_null_key_parts)
+ {
+ non_null_key= new Ordered_key(cur_keyid, tmp_table, item_in->left_expr,
+ 0, 0, 0, row_num_to_rowid);
+ if (non_null_key->init(non_null_key_parts))
+ return TRUE;
+ merge_keys[cur_keyid]= non_null_key;
+ merge_keys[cur_keyid]->first();
+ ++cur_keyid;
+ }
+
+ /*
+ If there is a covering NULL row, the only key that is needed is the
+ only non-NULL key that is already created above. We create keys on
+ NULL-able columns only if there is no covering NULL row.
+ */
+ if (!covering_null_row_width)
+ {
+ if (bitmap_init_memroot(&matching_keys, keys_count, thd->mem_root) ||
+ bitmap_init_memroot(&matching_outer_cols, keys_count, thd->mem_root) ||
+ bitmap_init_memroot(&null_only_columns, keys_count, thd->mem_root))
+ return TRUE;
+
+ /*
+ Create one single-column NULL-key for each column in
+ partial_match_key_parts.
+ */
+ for (uint i= 0; i < partial_match_key_parts->n_bits; i++)
+ {
+ if (!bitmap_is_set(partial_match_key_parts, i))
+ continue;
+
+ if (result_sink->get_null_count_of_col(i) == row_count)
+ bitmap_set_bit(&null_only_columns, cur_keyid);
+ else
+ {
+ merge_keys[cur_keyid]= new Ordered_key(
+ cur_keyid, tmp_table,
+ item_in->left_expr->element_index(i),
+ result_sink->get_null_count_of_col(i),
+ result_sink->get_min_null_of_col(i),
+ result_sink->get_max_null_of_col(i),
+ row_num_to_rowid);
+ if (merge_keys[cur_keyid]->init(i))
+ return TRUE;
+ merge_keys[cur_keyid]->first();
+ }
+ ++cur_keyid;
+ }
+ }
+
+ /* Populate the indexes with data from the temporary table. */
+ tmp_table->file->ha_rnd_init(1);
+ tmp_table->file->extra_opt(HA_EXTRA_CACHE,
+ current_thd->variables.read_buff_size);
+ tmp_table->null_row= 0;
+ while (TRUE)
+ {
+ error= tmp_table->file->ha_rnd_next(tmp_table->record[0]);
+ if (error == HA_ERR_RECORD_DELETED)
+ {
+ /* We get this for duplicate records that should not be in tmp_table. */
+ continue;
+ }
+ /*
+      This is a temp table that we fully own, so there should be no other
+      cause to stop the iteration than EOF.
+ */
+ DBUG_ASSERT(!error || error == HA_ERR_END_OF_FILE);
+ if (error == HA_ERR_END_OF_FILE)
+ {
+ DBUG_ASSERT(cur_rownum == tmp_table->file->stats.records);
+ break;
+ }
+
+ /*
+ Save the position of this record in the row_num -> rowid mapping.
+ */
+ tmp_table->file->position(tmp_table->record[0]);
+ memcpy(row_num_to_rowid + cur_rownum * rowid_length,
+ tmp_table->file->ref, rowid_length);
+
+ /* Add the current row number to the corresponding keys. */
+ if (non_null_key)
+ {
+ /* By definition there are no NULLs in the non-NULL key. */
+ non_null_key->add_key(cur_rownum);
+ }
+
+ for (uint i= (non_null_key ? 1 : 0); i < keys_count; i++)
+ {
+ /*
+        Check if the first and only indexed column contains NULL in the current
+ row, and add the row number to the corresponding key.
+ */
+ if (tmp_table->field[merge_keys[i]->get_field_idx(0)]->is_null())
+ merge_keys[i]->set_null(cur_rownum);
+ else
+ merge_keys[i]->add_key(cur_rownum);
+ }
+ ++cur_rownum;
+ }
+
+ tmp_table->file->ha_rnd_end();
+
+ /* Sort all the keys by their NULL selectivity. */
+ my_qsort(merge_keys, keys_count, sizeof(Ordered_key*),
+ (qsort_cmp) cmp_keys_by_null_selectivity);
+
+ /* Sort the keys in each of the indexes. */
+ for (uint i= 0; i < keys_count; i++)
+ merge_keys[i]->sort_keys();
+
+ if (init_queue(&pq, keys_count, 0, FALSE,
+ subselect_rowid_merge_engine::cmp_keys_by_cur_rownum, NULL))
+ return TRUE;
+
+ return FALSE;
+}
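
What init() builds for each NULL-able column can be pictured with plain
containers. A minimal standalone sketch, assuming int values, with
std::optional standing in for the temp-table fields and std::vector<bool> for
the MY_BITMAP (the analogues of add_key(), set_null() and sort_keys() are
marked in the comments):

  #include <algorithm>
  #include <cstddef>
  #include <optional>
  #include <vector>

  /* A simplified stand-in for one Ordered_key built over a single column. */
  struct ColumnIndex
  {
    std::vector<size_t> key_buff;  /* row numbers of non-NULL values, sorted */
    std::vector<bool>   null_key;  /* true where the column is NULL */
  };

  static ColumnIndex build_column_index(const std::vector<std::optional<int>>& col)
  {
    ColumnIndex idx;
    idx.null_key.resize(col.size(), false);
    for (size_t row= 0; row < col.size(); row++)
    {
      if (col[row].has_value())
        idx.key_buff.push_back(row);        /* analogue of add_key()  */
      else
        idx.null_key[row]= true;            /* analogue of set_null() */
    }
    /* Analogue of sort_keys(): order by (column value, row number). */
    std::sort(idx.key_buff.begin(), idx.key_buff.end(),
              [&col](size_t a, size_t b)
              {
                if (*col[a] != *col[b])
                  return *col[a] < *col[b];
                return a < b;
              });
    return idx;
  }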
+
+
+subselect_rowid_merge_engine::~subselect_rowid_merge_engine()
+{
+ /* None of the resources below is allocated if there are no ordered keys. */
+ if (keys_count)
+ {
+ my_free((char*) row_num_to_rowid, MYF(0));
+ for (uint i= 0; i < keys_count; i++)
+ delete merge_keys[i];
+ delete_queue(&pq);
+ if (tmp_table->file->inited == handler::RND)
+ tmp_table->file->ha_rnd_end();
+ }
+}
+
+
+void subselect_rowid_merge_engine::cleanup()
+{
+}
+
+
+/*
+  Quick sort comparison function to compare keys in order of decreasing NULL
+ selectivity, so that the most selective keys come first.
+
+ @param k1 first key to compare
+ @param k2 second key to compare
+
+ @retval 1 if k1 is less selective than k2
+ @retval 0 if k1 is equally selective as k2
+ @retval -1 if k1 is more selective than k2
+*/
+
+int
+subselect_rowid_merge_engine::cmp_keys_by_null_selectivity(Ordered_key **k1,
+ Ordered_key **k2)
+{
+ double k1_sel= (*k1)->null_selectivity();
+ double k2_sel= (*k2)->null_selectivity();
+ if (k1_sel < k2_sel)
+ return 1;
+ if (k1_sel > k2_sel)
+ return -1;
+ return 0;
+}
+
+
+/*
+  Compare two Ordered_key elements by the row number their cursors currently
+  point to; used as the ordering function of the priority queue built in
+  partial_match().
+*/
+
+int
+subselect_rowid_merge_engine::cmp_keys_by_cur_rownum(void *arg,
+ uchar *k1, uchar *k2)
+{
+ rownum_t r1= ((Ordered_key*) k1)->current();
+ rownum_t r2= ((Ordered_key*) k2)->current();
+
+ return (r1 < r2) ? -1 : (r1 > r2) ? 1 : 0;
+}
+
+
+/*
+  Check if a certain table row contains a NULL in all columns for which there is
+ no match in the corresponding value index.
+
+ @retval TRUE if a NULL row exists
+ @retval FALSE otherwise
+*/
+
+bool subselect_rowid_merge_engine::test_null_row(rownum_t row_num)
+{
+ Ordered_key *cur_key;
+ uint cur_id;
+ for (uint i = 0; i < keys_count; i++)
+ {
+ cur_key= merge_keys[i];
+ cur_id= cur_key->get_keyid();
+ if (bitmap_is_set(&matching_keys, cur_id))
+ {
+ /*
+        The key 'i' (with id 'cur_id') already matches a value in row
+        'row_num', thus we skip it as it can't possibly match a NULL.
+ */
+ continue;
+ }
+ if (!cur_key->is_null(row_num))
+ return FALSE;
+ }
+ return TRUE;
+}
+
+
+/*
+ @retval TRUE there is a partial match (UNKNOWN)
+ @retval FALSE there is no match at all (FALSE)
+*/
+
+bool subselect_rowid_merge_engine::partial_match()
+{
+ Ordered_key *min_key; /* Key that contains the current minimum position. */
+ rownum_t min_row_num; /* Current row number of min_key. */
+ Ordered_key *cur_key;
+ rownum_t cur_row_num;
+ uint count_nulls_in_search_key= 0;
+ bool res= FALSE;
+
+ /* If there is a non-NULL key, it must be the first key in the keys array. */
+ DBUG_ASSERT(!non_null_key || (non_null_key && merge_keys[0] == non_null_key));
+
+ /* All data accesses during execution are via handler::ha_rnd_pos() */
+ tmp_table->file->ha_rnd_init(0);
+
+ /* Check if there is a match for the columns of the only non-NULL key. */
+ if (non_null_key && !non_null_key->lookup())
+ {
+ res= FALSE;
+ goto end;
+ }
+
+ /*
+ If there is a NULL (sub)row that covers all NULL-able columns,
+    then there is a guaranteed partial match, and we don't need to search
+ for the matching row.
+ */
+ if (covering_null_row_width)
+ {
+ res= TRUE;
+ goto end;
+ }
+
+ if (non_null_key)
+ queue_insert(&pq, (uchar *) non_null_key);
+  /*
+    The loop below starts after the non_null_key (if any), since that key was
+    already looked up and added to the queue above.
+  */
+ bitmap_clear_all(&matching_outer_cols);
+ for (uint i= test(non_null_key); i < keys_count; i++)
+ {
+ DBUG_ASSERT(merge_keys[i]->get_column_count() == 1);
+ if (merge_keys[i]->get_search_key(0)->is_null())
+ {
+ ++count_nulls_in_search_key;
+ bitmap_set_bit(&matching_outer_cols, merge_keys[i]->get_keyid());
+ }
+ else if (merge_keys[i]->lookup())
+ queue_insert(&pq, (uchar *) merge_keys[i]);
+ }
+
+ /*
+ If the outer reference consists of only NULLs, or if it has NULLs in all
+ nullable columns, the result is UNKNOWN.
+ */
+ if (count_nulls_in_search_key ==
+ ((Item_in_subselect *) item)->left_expr->cols() -
+ (non_null_key ? non_null_key->get_column_count() : 0))
+ {
+ res= TRUE;
+ goto end;
+ }
+
+ /*
+ If there is no NULL (sub)row that covers all NULL columns, and there is no
+ single match for any of the NULL columns, the result is FALSE.
+ */
+ if (pq.elements - test(non_null_key) == 0)
+ {
+ res= FALSE;
+ goto end;
+ }
+
+ DBUG_ASSERT(pq.elements);
+
+ min_key= (Ordered_key*) queue_remove(&pq, 0);
+ min_row_num= min_key->current();
+ bitmap_copy(&matching_keys, &null_only_columns);
+ bitmap_set_bit(&matching_keys, min_key->get_keyid());
+ bitmap_union(&matching_keys, &matching_outer_cols);
+ if (min_key->next_same())
+ queue_insert(&pq, (uchar *) min_key);
+
+ if (pq.elements == 0)
+ {
+ /*
+ Check the only matching row of the only key min_key for NULL matches
+ in the other columns.
+ */
+ res= test_null_row(min_row_num);
+ goto end;
+ }
+
+ while (TRUE)
+ {
+ cur_key= (Ordered_key*) queue_remove(&pq, 0);
+ cur_row_num= cur_key->current();
+
+ if (cur_row_num == min_row_num)
+ bitmap_set_bit(&matching_keys, cur_key->get_keyid());
+ else
+ {
+      /* Follows from the correct use of the priority queue. */
+ DBUG_ASSERT(cur_row_num > min_row_num);
+ if (test_null_row(min_row_num))
+ {
+ res= TRUE;
+ goto end;
+ }
+ else
+ {
+ min_key= cur_key;
+ min_row_num= cur_row_num;
+ bitmap_copy(&matching_keys, &null_only_columns);
+ bitmap_set_bit(&matching_keys, min_key->get_keyid());
+ bitmap_union(&matching_keys, &matching_outer_cols);
+ }
+ }
+
+ if (cur_key->next_same())
+ queue_insert(&pq, (uchar *) cur_key);
+
+ if (pq.elements == 0)
+ {
+ /* Check the last row of the last column in PQ for NULL matches. */
+ res= test_null_row(min_row_num);
+ goto end;
+ }
+ }
+
+ /* We should never get here - all branches must be handled explicitly above. */
+ DBUG_ASSERT(FALSE);
+
+end:
+ tmp_table->file->ha_rnd_end();
+ return res;
+}
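
The priority-queue walk can be illustrated by a simplified standalone version
that ignores the NULL bitmaps and the non-NULL composite key: given one sorted
list of matching row numbers per column, it reports whether some row number
occurs in every list, i.e. whether a single temp-table row satisfies all
single-column keys (all names below are illustrative):

  #include <cstddef>
  #include <functional>
  #include <queue>
  #include <utility>
  #include <vector>

  static bool row_in_all_lists(const std::vector<std::vector<size_t> >& lists)
  {
    const size_t k= lists.size();
    std::vector<size_t> pos(k, 0);               /* cursor into each list */
    typedef std::pair<size_t, size_t> Entry;     /* (row number, list index) */
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry> > pq;

    for (size_t i= 0; i < k; i++)
    {
      if (lists[i].empty())
        return false;                            /* one column never matches */
      pq.push(Entry(lists[i][0], i));
    }

    size_t candidate= pq.top().first;            /* current minimum row number */
    size_t agree= 0;                             /* lists that contain it */
    while (!pq.empty())
    {
      Entry top= pq.top();
      pq.pop();
      if (top.first == candidate)
      {
        if (++agree == k)
          return true;                           /* all lists share this row */
      }
      else
      {
        candidate= top.first;                    /* move on to the next row */
        agree= 1;
      }
      size_t idx= top.second;
      if (++pos[idx] < lists[idx].size())
        pq.push(Entry(lists[idx][pos[idx]], idx));
    }
    return false;
  }

The real engine additionally treats a NULL in an indexed column as a wildcard
match for that key, which is what the null_key bitmaps and test_null_row()
provide on top of this merge.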
+
+
+subselect_table_scan_engine::subselect_table_scan_engine(
+ subselect_uniquesubquery_engine *engine_arg,
+ TABLE *tmp_table_arg,
+ Item_subselect *item_arg,
+ select_result_interceptor *result_arg,
+ List<Item> *equi_join_conds_arg,
+ uint covering_null_row_width_arg)
+ :subselect_partial_match_engine(engine_arg, tmp_table_arg, item_arg,
+ result_arg, equi_join_conds_arg,
+ covering_null_row_width_arg)
+{}
+
+
+/*
+ TIMOUR:
+ This method is based on subselect_uniquesubquery_engine::scan_table().
+  Consider refactoring somehow; 80% of the code is the same.
+
+ for each row_i in tmp_table
+ {
+ count_matches= 0;
+ for each row element row_i[j]
+ {
+ if (outer_ref[j] is NULL || row_i[j] is NULL || outer_ref[j] == row_i[j])
+ ++count_matches;
+ }
+ if (count_matches == outer_ref.elements)
+ return TRUE
+ }
+ return FALSE
+*/
+
+bool subselect_table_scan_engine::partial_match()
+{
+ List_iterator_fast<Item> equality_it(*equi_join_conds);
+ Item *cur_eq;
+ uint count_matches;
+ int error;
+ bool res;
+
+ tmp_table->file->ha_rnd_init(1);
+ tmp_table->file->extra_opt(HA_EXTRA_CACHE,
+ current_thd->variables.read_buff_size);
+ /*
+ TIMOUR:
+ scan_table() also calls "table->null_row= 0;", why, do we need it?
+ */
+ for (;;)
+ {
+ error= tmp_table->file->ha_rnd_next(tmp_table->record[0]);
+ if (error) {
+ if (error == HA_ERR_RECORD_DELETED)
+ {
+ error= 0;
+ continue;
+ }
+ if (error == HA_ERR_END_OF_FILE)
+ {
+ error= 0;
+ break;
+ }
+ else
+ {
+ error= report_error(tmp_table, error);
+ break;
+ }
+ }
+
+ equality_it.rewind();
+ count_matches= 0;
+ while ((cur_eq= equality_it++))
+ {
+ DBUG_ASSERT(cur_eq->type() == Item::FUNC_ITEM &&
+ ((Item_func*)cur_eq)->functype() == Item_func::EQ_FUNC);
+ if (!cur_eq->val_int() && !cur_eq->null_value)
+ break;
+ ++count_matches;
+ }
+ if (count_matches == tmp_table->s->fields)
+ {
+ res= TRUE; /* Found a matching row. */
+ goto end;
+ }
+ }
+
+ res= FALSE;
+end:
+ tmp_table->file->ha_rnd_end();
+ return res;
+}
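
The pseudocode above, and the loop that implements it, amount to the following
standalone check over plain values, where std::optional models NULL on either
side (all names are illustrative):

  #include <cstddef>
  #include <optional>
  #include <vector>

  typedef std::vector<std::optional<int> > Row;

  /* TRUE iff every column of 'row' equals the outer value or one side is NULL. */
  static bool row_partially_matches(const Row& outer_ref, const Row& row)
  {
    for (size_t j= 0; j < outer_ref.size(); j++)
    {
      if (!outer_ref[j].has_value() || !row[j].has_value())
        continue;                                /* NULL matches anything */
      if (*outer_ref[j] != *row[j])
        return false;
    }
    return true;
  }

  /* Scan all rows of the materialized table, as the engine above does. */
  static bool partial_match_by_scan(const Row& outer_ref,
                                    const std::vector<Row>& tmp_table)
  {
    for (const Row& row : tmp_table)
    {
      if (row_partially_matches(outer_ref, row))
        return true;
    }
    return false;
  }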
+
+
+void subselect_table_scan_engine::cleanup()
+{
+}
=== modified file 'sql/item_subselect.h'
--- a/sql/item_subselect.h 2010-02-11 23:59:58 +0000
+++ b/sql/item_subselect.h 2010-03-09 10:14:06 +0000
@@ -297,7 +297,7 @@
Representation of IN subquery predicates of the form
"left_expr IN (SELECT ...)".
- @detail
+ @details
This class has:
- A "subquery execution engine" (as a subclass of Item_subselect) that allows
it to evaluate subqueries. (and this class participates in execution by
@@ -319,6 +319,12 @@
*/
List<Cached_item> *left_expr_cache;
bool first_execution;
+ /*
+ Set to TRUE if at query execution time we determine that this item's
+ value is a constant during this execution. We need this member because
+ it is not possible to substitute 'this' with a constant item.
+ */
+ bool is_constant;
/*
expr & optimizer used in subselect rewriting to store Item for
@@ -387,8 +393,8 @@
Item_in_subselect(Item * left_expr, st_select_lex *select_lex);
Item_in_subselect()
:Item_exists_subselect(), left_expr_cache(0), first_execution(TRUE),
- optimizer(0), abort_on_null(0), pushed_cond_guards(NULL),
- exec_method(NOT_TRANSFORMED), upper_item(0)
+ is_constant(FALSE), optimizer(0), abort_on_null(0),
+ pushed_cond_guards(NULL), exec_method(NOT_TRANSFORMED), upper_item(0)
{}
void cleanup();
subs_type substype() { return IN_SUBS; }
@@ -421,6 +427,8 @@
void update_used_tables();
bool setup_engine();
bool init_left_expr_cache();
+ /* Inform 'this' that it was computed, and contains a valid result. */
+ void set_first_execution() { if (first_execution) first_execution= FALSE; }
bool is_expensive_processor(uchar *arg);
friend class Item_ref_null_helper;
@@ -428,6 +436,7 @@
friend class Item_in_optimizer;
friend class subselect_indexsubquery_engine;
friend class subselect_hash_sj_engine;
+ friend class subselect_partial_match_engine;
};
@@ -462,7 +471,8 @@
enum enum_engine_type {ABSTRACT_ENGINE, SINGLE_SELECT_ENGINE,
UNION_ENGINE, UNIQUESUBQUERY_ENGINE,
- INDEXSUBQUERY_ENGINE, HASH_SJ_ENGINE};
+ INDEXSUBQUERY_ENGINE, HASH_SJ_ENGINE,
+ ROWID_MERGE_ENGINE, TABLE_SCAN_ENGINE};
subselect_engine(Item_subselect *si, select_result_interceptor *res)
:thd(0)
@@ -635,8 +645,10 @@
virtual void print (String *str, enum_query_type query_type);
bool change_result(Item_subselect *si, select_result_interceptor *result);
bool no_tables();
+ int index_lookup(); /* TIMOUR: this method needs refactoring. */
int scan_table();
bool copy_ref_key();
+ int copy_ref_key_simple(); /* TIMOUR: this method needs refactoring. */
bool no_rows() { return empty_result_set; }
virtual enum_engine_type engine_type() { return UNIQUESUBQUERY_ENGINE; }
};
@@ -705,50 +717,439 @@
/**
- Compute an IN predicate via a hash semi-join. The subquery is materialized
- during the first evaluation of the IN predicate. The IN predicate is executed
- via the functionality inherited from subselect_uniquesubquery_engine.
+ Compute an IN predicate via a hash semi-join. This class is responsible for
+ the materialization of the subquery, and the selection of the correct and
+ optimal execution method (e.g. direct index lookup, or partial matching) for
+ the IN predicate.
*/
-class subselect_hash_sj_engine: public subselect_uniquesubquery_engine
+class subselect_hash_sj_engine : public subselect_engine
{
protected:
+ /* The table into which the subquery is materialized. */
+ TABLE *tmp_table;
/* TRUE if the subquery was materialized into a temp table. */
bool is_materialized;
/*
The old engine already chosen at parse time and stored in permanent memory.
Through this member we can re-create and re-prepare materialize_join for
- each execution of a prepared statement. We akso resuse the functionality
+ each execution of a prepared statement. We also reuse the functionality
of subselect_single_select_engine::[prepare | cols].
*/
subselect_single_select_engine *materialize_engine;
+ /* The engine used to compute the IN predicate. */
+ subselect_engine *lookup_engine;
/*
QEP to execute the subquery and materialize its result into a
temporary table. Created during the first call to exec().
*/
JOIN *materialize_join;
- /* Temp table context of the outer select's JOIN. */
- TMP_TABLE_PARAM *tmp_param;
+
+ /* Keyparts of the only non-NULL composite index in a rowid merge. */
+ MY_BITMAP non_null_key_parts;
+ /* Keyparts of the single column indexes with NULL, one keypart per index. */
+ MY_BITMAP partial_match_key_parts;
+ uint count_partial_match_columns;
+ uint count_null_only_columns;
+ /*
+    A conjunction of all the equality conditions between all pairs of expressions
+ that are arguments of an IN predicate. We need these to post-filter some
+ IN results because index lookups sometimes match values that are actually
+ not equal to the search key in SQL terms.
+ */
+ Item_cond_and *semi_join_conds;
+ /* Possible execution strategies that can be used to compute hash semi-join.*/
+ enum exec_strategy {
+ UNDEFINED,
+ COMPLETE_MATCH, /* Use regular index lookups. */
+ PARTIAL_MATCH, /* Use some partial matching strategy. */
+ PARTIAL_MATCH_MERGE, /* Use partial matching through index merging. */
+ PARTIAL_MATCH_SCAN, /* Use partial matching through table scan. */
+ IMPOSSIBLE /* Subquery materialization is not applicable. */
+ };
+ /* The chosen execution strategy. Computed after materialization. */
+ exec_strategy strategy;
+protected:
+ exec_strategy get_strategy_using_schema();
+ exec_strategy get_strategy_using_data();
+ size_t rowid_merge_buff_size(bool has_non_null_key,
+ bool has_covering_null_row,
+ MY_BITMAP *partial_match_key_parts);
+ void choose_partial_match_strategy(bool has_non_null_key,
+ bool has_covering_null_row,
+ MY_BITMAP *partial_match_key_parts);
+ bool make_semi_join_conds();
+ subselect_uniquesubquery_engine* make_unique_engine();
public:
subselect_hash_sj_engine(THD *thd, Item_subselect *in_predicate,
- subselect_single_select_engine *old_engine)
- :subselect_uniquesubquery_engine(thd, NULL, in_predicate, NULL),
- is_materialized(FALSE), materialize_engine(old_engine),
- materialize_join(NULL), tmp_param(NULL)
- {}
+ subselect_single_select_engine *old_engine)
+ :subselect_engine(in_predicate, NULL), tmp_table(NULL),
+ is_materialized(FALSE), materialize_engine(old_engine), lookup_engine(NULL),
+ materialize_join(NULL), count_partial_match_columns(0),
+ count_null_only_columns(0), semi_join_conds(NULL), strategy(UNDEFINED)
+ {
+ set_thd(thd);
+ }
~subselect_hash_sj_engine();
bool init_permanent(List<Item> *tmp_columns);
bool init_runtime();
void cleanup();
- int prepare() { return 0; }
+ int prepare() { return 0; } /* Override virtual function in base class. */
int exec();
- virtual void print (String *str, enum_query_type query_type);
+ virtual void print(String *str, enum_query_type query_type);
uint cols()
{
return materialize_engine->cols();
}
+ uint8 uncacheable() { return UNCACHEABLE_DEPENDENT; }
+ table_map upper_select_const_tables() { return 0; }
+ bool no_rows() { return !tmp_table->file->stats.records; }
virtual enum_engine_type engine_type() { return HASH_SJ_ENGINE; }
-};
-
+ /*
+ TODO: factor out all these methods in a base subselect_index_engine class
+ because all of them have dummy implementations and should never be called.
+ */
+ void fix_length_and_dec(Item_cache** row);//=>base class
+ void exclude(); //=>base class
+ //=>base class
+ bool change_result(Item_subselect *si, select_result_interceptor *result);
+ bool no_tables();//=>base class
+};
+
+
+/*
+  Distinguish the type of (0-based) row numbers from the type of the index into
+ an array of row numbers.
+*/
+typedef ha_rows rownum_t;
+
+
+/*
+ An Ordered_key is an in-memory table index that allows O(log(N)) time
+ lookups of a multi-part key.
+
+ If the index is over a single column, then this column may contain NULLs, and
+ the NULLs are stored and tested separately for NULL in O(1) via is_null().
+ Multi-part indexes assume that the indexed columns do not contain NULLs.
+
+ TODO:
+  = Due to the unnatural asymmetry between single and multi-part indexes, it
+ makes sense to somehow refactor or extend the class.
+
+ = This class can be refactored into a base abstract interface, and two
+ subclasses:
+ - one to represent single-column indexes, and
+ - another to represent multi-column indexes.
+ Such separation would allow slightly more efficient implementation of
+ the single-column indexes.
+ = The current design requires such indexes to be fully recreated for each
+ PS (re)execution, however most of the comprising objects can be reused.
+*/
+
+class Ordered_key : public Sql_alloc
+{
+protected:
+ /*
+    Index of the key in an array of keys. This index makes it possible to
+    construct (sub)sets of keys represented by bitmaps.
+ */
+ uint keyid;
+ /* The table being indexed. */
+ TABLE *tbl;
+ /* The columns being indexed. */
+ Item_field **key_columns;
+ /* Number of elements in 'key_columns' (number of key parts). */
+ uint key_column_count;
+ /*
+ An expression, or sequence of expressions that forms the search key.
+ The search key is a sequence when it is Item_row. Each element of the
+ sequence is accessible via Item::element_index(int i).
+ */
+ Item *search_key;
+
+/* Value index related members. */
+ /*
+    The actual value index consists of a sorted sequence of row numbers.
+ */
+ rownum_t *key_buff;
+ /* Number of elements in key_buff. */
+ ha_rows key_buff_elements;
+ /* Current element in 'key_buff'. */
+ ha_rows cur_key_idx;
+ /*
+ Mapping from row numbers to row ids. The element row_num_to_rowid[i]
+ contains a buffer with the rowid for the row numbered 'i'.
+    The memory for this member is not maintained by this class because
+ all Ordered_key indexes of the same table share the same mapping.
+ */
+ uchar *row_num_to_rowid;
+ /*
+ A sequence of predicates to compare the search key with the corresponding
+ columns of a table row from the index.
+ */
+ Item_func_lt **compare_pred;
+
+/* Null index related members. */
+ MY_BITMAP null_key;
+ /* Count of NULLs per column. */
+ ha_rows null_count;
+ /* The row number that contains the first NULL in a column. */
+ ha_rows min_null_row;
+ /* The row number that contains the last NULL in a column. */
+ ha_rows max_null_row;
+
+protected:
+ bool alloc_keys_buffers();
+ /*
+ Quick sort comparison function that compares two rows of the same table
+    identified by their row numbers.
+ */
+ int cmp_keys_by_row_data(rownum_t a, rownum_t b);
+ static int cmp_keys_by_row_data_and_rownum(Ordered_key *key,
+ rownum_t* a, rownum_t* b);
+
+ int cmp_key_with_search_key(rownum_t row_num);
+
+public:
+ Ordered_key(uint keyid_arg, TABLE *tbl_arg,
+ Item *search_key_arg, ha_rows null_count_arg,
+ ha_rows min_null_row_arg, ha_rows max_null_row_arg,
+ uchar *row_num_to_rowid_arg);
+ ~Ordered_key();
+ void cleanup();
+ /* Initialize a multi-column index. */
+ bool init(MY_BITMAP *columns_to_index);
+ /* Initialize a single-column index. */
+ bool init(int col_idx);
+
+ uint get_column_count() { return key_column_count; }
+ uint get_keyid() { return keyid; }
+ uint get_field_idx(uint i)
+ {
+ DBUG_ASSERT(i < key_column_count);
+ return key_columns[i]->field->field_index;
+ }
+ /*
+ Get the search key element that corresponds to the i-th key part of this
+ index.
+ */
+ Item *get_search_key(uint i)
+ {
+ return search_key->element_index(key_columns[i]->field->field_index);
+ }
+ void add_key(rownum_t row_num)
+ {
+ /* The caller must know how many elements to add. */
+ DBUG_ASSERT(key_buff_elements && cur_key_idx < key_buff_elements);
+ key_buff[cur_key_idx]= row_num;
+ ++cur_key_idx;
+ }
+
+ void sort_keys();
+ double null_selectivity();
+
+ /*
+ Position the current element at the first row that matches the key.
+    The key itself is obtained by evaluating the current value(s) of
+ this->search_key.
+ */
+ bool lookup();
+ /* Move the current index cursor to the first key. */
+ void first()
+ {
+ DBUG_ASSERT(key_buff_elements);
+ cur_key_idx= 0;
+ }
+ /* TODO */
+ bool next_same();
+ /* Move the current index cursor to the next key. */
+ bool next()
+ {
+ DBUG_ASSERT(key_buff_elements);
+ if (cur_key_idx < key_buff_elements - 1)
+ {
+ ++cur_key_idx;
+ return TRUE;
+ }
+ return FALSE;
+ };
+ /* Return the current index element. */
+ rownum_t current()
+ {
+ DBUG_ASSERT(key_buff_elements && cur_key_idx < key_buff_elements);
+ return key_buff[cur_key_idx];
+ }
+
+ void set_null(rownum_t row_num)
+ {
+ bitmap_set_bit(&null_key, row_num);
+ }
+ bool is_null(rownum_t row_num)
+ {
+ /*
+ Indexes consisting of only NULLs do not have a bitmap buffer at all.
+ Their only initialized member is 'n_bits', which is equal to the number
+ of temp table rows.
+ */
+ if (null_count == tbl->file->stats.records)
+ {
+ DBUG_ASSERT(tbl->file->stats.records == null_key.n_bits);
+ return TRUE;
+ }
+ if (row_num > max_null_row || row_num < min_null_row)
+ return FALSE;
+ return bitmap_is_set(&null_key, row_num);
+ }
+ void print(String *str);
+};
+
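As a rough mental model only -- not part of the patch, and the 0-based row
numbering and exact layout are assumptions for illustration -- consider a
tmp-table column holding the values (3, NULL, 1, NULL, 2) in rows 0..4. An
Ordered_key over that column would presumably keep the non-NULL rows in
key_buff, sorted by column value, and track the NULL rows through the
null-related members:

  # key_buff          = [2, 4, 0]   -- the non-NULL rows, in value order 1, 2, 3
  # key_buff_elements = 3
  # null_count        = 2           -- rows 1 and 3 hold NULL
  # min_null_row = 1, max_null_row = 3; null_key has bits 1 and 3 set

so lookup()/next_same() only have to search key_buff, while is_null() only
consults the bitmap.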
+
+class subselect_partial_match_engine : public subselect_engine
+{
+protected:
+ /* The temporary table that contains a materialized subquery. */
+ TABLE *tmp_table;
+ /*
+ The engine used to check whether an IN predicate is TRUE or not. If not
+ TRUE, then subselect_rowid_merge_engine further distinguishes between
+ FALSE and UNKNOWN.
+ */
+ subselect_uniquesubquery_engine *lookup_engine;
+ /* A list of equalities between each pair of IN operands. */
+ List<Item> *equi_join_conds;
+ /*
+    If there is a row such that all its NULLable components are NULL, this
+    member is set to the number of covered columns. If there is no covering
+    row, then this is 0.
+ */
+ uint covering_null_row_width;
+protected:
+ virtual bool partial_match()= 0;
+public:
+ subselect_partial_match_engine(subselect_uniquesubquery_engine *engine_arg,
+ TABLE *tmp_table_arg, Item_subselect *item_arg,
+ select_result_interceptor *result_arg,
+ List<Item> *equi_join_conds_arg,
+ uint covering_null_row_width_arg);
+ int prepare() { return 0; }
+ int exec();
+ void fix_length_and_dec(Item_cache**) {}
+ uint cols() { /* TODO: what is the correct value? */ return 1; }
+ uint8 uncacheable() { return UNCACHEABLE_DEPENDENT; }
+ void exclude() {}
+ table_map upper_select_const_tables() { return 0; }
+ bool change_result(Item_subselect*, select_result_interceptor*)
+ { DBUG_ASSERT(FALSE); return false; }
+ bool no_tables() { return false; }
+ bool no_rows()
+ {
+ /*
+      TODO: It is completely unclear what the semantics of this
+      method are. The current result is computed so that the call to no_rows()
+ from Item_in_optimizer::val_int() sets Item_in_optimizer::null_value
+ correctly.
+ */
+ return !(((Item_in_subselect *) item)->null_value);
+ }
+ void print(String*, enum_query_type);
+
+ friend void subselect_hash_sj_engine::cleanup();
+};
+
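Illustration only, not part of the patch (the table below is made up). A
"partial match" is the case where no stored row is equal to the outer tuple,
but some row agrees with it on every column that is non-NULL on both sides;
IN must then evaluate to UNKNOWN rather than FALSE:

  create table inner_t (x int, y int);
  insert into inner_t values (1, NULL), (3, 4);
  # No row equals (1,2), but (1,NULL) matches it on every non-NULL column,
  # so the predicate is UNKNOWN (NULL), not FALSE (0):
  select (1, 2) in (select x, y from inner_t);
  # (5,6) matches no row even partially, so this one is FALSE (0):
  select (5, 6) in (select x, y from inner_t);

This FALSE-versus-UNKNOWN distinction is what the partial match engines below
are responsible for once the lookup engine fails to find an exact match.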
+
+class subselect_rowid_merge_engine: public subselect_partial_match_engine
+{
+protected:
+ /*
+ Mapping from row numbers to row ids. The rowids are stored sequentially
+ in the array - rowid[i] is located in row_num_to_rowid + i * rowid_length.
+ */
+ uchar *row_num_to_rowid;
+ /*
+ A subset of all the keys for which there is a match for the same row.
+ Used during execution. Computed for each outer reference
+ */
+ MY_BITMAP matching_keys;
+ /*
+ The columns of the outer reference that are NULL. Computed for each
+ outer reference.
+ */
+ MY_BITMAP matching_outer_cols;
+ /*
+ Columns that consist of only NULLs. Such columns match any value.
+ Computed once per query execution.
+ */
+ MY_BITMAP null_only_columns;
+ /*
+ Indexes of row numbers, sorted by <column_value, row_number>. If an
+ index may contain NULLs, the NULLs are stored efficiently in a bitmap.
+
+ The indexes are sorted by the selectivity of their NULL sub-indexes, the
+    one with the fewest NULLs first. Thus, if there is any index on
+ non-NULL columns, it is contained in keys[0].
+ */
+ Ordered_key **merge_keys;
+  /* The number of elements in merge_keys. */
+ uint keys_count;
+ /*
+ An index on all non-NULL columns of 'tmp_table'. The index has the
+    logical form: <[v_i1 | ... | v_ik], rownum>. It makes it possible to find
+    the row number where the columns c_i1,...,c_ik contain the values
+    v_i1,...,v_ik. If such an index exists, it is always the first element
+    of 'merge_keys'.
+ */
+ Ordered_key *non_null_key;
+ /*
+ Priority queue of Ordered_key indexes, one per NULLable column.
+ This queue is used by the partial match algorithm in method exec().
+ */
+ QUEUE pq;
+protected:
+ /*
+ Comparison function to compare keys in order of decreasing bitmap
+ selectivity.
+ */
+ static int cmp_keys_by_null_selectivity(Ordered_key **k1, Ordered_key **k2);
+ /*
+ Comparison function used by the priority queue pq, the 'smaller' key
+ is the one with the smaller current row number.
+ */
+ static int cmp_keys_by_cur_rownum(void *arg, uchar *k1, uchar *k2);
+
+ bool test_null_row(rownum_t row_num);
+ bool partial_match();
+public:
+ subselect_rowid_merge_engine(subselect_uniquesubquery_engine *engine_arg,
+ TABLE *tmp_table_arg, uint keys_count_arg,
+ uint covering_null_row_width_arg,
+ Item_subselect *item_arg,
+ select_result_interceptor *result_arg,
+ List<Item> *equi_join_conds_arg)
+ :subselect_partial_match_engine(engine_arg, tmp_table_arg, item_arg,
+ result_arg, equi_join_conds_arg,
+ covering_null_row_width_arg),
+ keys_count(keys_count_arg), non_null_key(NULL)
+ {
+ thd= lookup_engine->get_thd();
+ }
+ ~subselect_rowid_merge_engine();
+ bool init(MY_BITMAP *non_null_key_parts, MY_BITMAP *partial_match_key_parts);
+ void cleanup();
+ virtual enum_engine_type engine_type() { return ROWID_MERGE_ENGINE; }
+};
+
+
+class subselect_table_scan_engine: public subselect_partial_match_engine
+{
+protected:
+ bool partial_match();
+public:
+ subselect_table_scan_engine(subselect_uniquesubquery_engine *engine_arg,
+ TABLE *tmp_table_arg, Item_subselect *item_arg,
+ select_result_interceptor *result_arg,
+ List<Item> *equi_join_conds_arg,
+ uint covering_null_row_width_arg);
+ void cleanup();
+ virtual enum_engine_type engine_type() { return TABLE_SCAN_ENGINE; }
+};
=== modified file 'sql/mysql_priv.h'
--- a/sql/mysql_priv.h 2010-01-17 14:55:08 +0000
+++ b/sql/mysql_priv.h 2010-03-09 10:14:06 +0000
@@ -552,12 +552,14 @@
#define OPTIMIZER_SWITCH_LOOSE_SCAN 64
#define OPTIMIZER_SWITCH_MATERIALIZATION 128
#define OPTIMIZER_SWITCH_SEMIJOIN 256
+#define OPTIMIZER_SWITCH_PARTIAL_MATCH_ROWID_MERGE 512
+#define OPTIMIZER_SWITCH_PARTIAL_MATCH_TABLE_SCAN 1024
#ifdef DBUG_OFF
-# define OPTIMIZER_SWITCH_LAST 512
+# define OPTIMIZER_SWITCH_LAST 2048
#else
-# define OPTIMIZER_SWITCH_TABLE_ELIMINATION 512
-# define OPTIMIZER_SWITCH_LAST 1024
+# define OPTIMIZER_SWITCH_TABLE_ELIMINATION 2048
+# define OPTIMIZER_SWITCH_LAST 4096
#endif
#ifdef DBUG_OFF
@@ -570,8 +572,10 @@
OPTIMIZER_SWITCH_FIRSTMATCH | \
OPTIMIZER_SWITCH_LOOSE_SCAN | \
OPTIMIZER_SWITCH_MATERIALIZATION | \
- OPTIMIZER_SWITCH_SEMIJOIN)
-#else
+ OPTIMIZER_SWITCH_SEMIJOIN | \
+ OPTIMIZER_SWITCH_PARTIAL_MATCH_ROWID_MERGE|\
+ OPTIMIZER_SWITCH_PARTIAL_MATCH_TABLE_SCAN)
+#else
# define OPTIMIZER_SWITCH_DEFAULT (OPTIMIZER_SWITCH_INDEX_MERGE | \
OPTIMIZER_SWITCH_INDEX_MERGE_UNION | \
OPTIMIZER_SWITCH_INDEX_MERGE_SORT_UNION | \
@@ -581,7 +585,9 @@
OPTIMIZER_SWITCH_FIRSTMATCH | \
OPTIMIZER_SWITCH_LOOSE_SCAN | \
OPTIMIZER_SWITCH_MATERIALIZATION | \
- OPTIMIZER_SWITCH_SEMIJOIN)
+ OPTIMIZER_SWITCH_SEMIJOIN | \
+ OPTIMIZER_SWITCH_PARTIAL_MATCH_ROWID_MERGE|\
+ OPTIMIZER_SWITCH_PARTIAL_MATCH_TABLE_SCAN)
#endif
/*
=== modified file 'sql/mysqld.cc'
--- a/sql/mysqld.cc 2010-01-17 14:55:08 +0000
+++ b/sql/mysqld.cc 2010-03-09 10:14:06 +0000
@@ -301,7 +301,9 @@
"index_merge","index_merge_union","index_merge_sort_union",
"index_merge_intersection",
"index_condition_pushdown",
- "firstmatch","loosescan","materialization", "semijoin",
+ "firstmatch","loosescan","materialization", "semijoin",
+ "partial_match_rowid_merge",
+ "partial_match_table_scan",
#ifndef DBUG_OFF
"table_elimination",
#endif
@@ -320,6 +322,8 @@
sizeof("loosescan") - 1,
sizeof("materialization") - 1,
sizeof("semijoin") - 1,
+ sizeof("partial_match_rowid_merge") - 1,
+ sizeof("partial_match_table_scan") - 1,
#ifndef DBUG_OFF
sizeof("table_elimination") - 1,
#endif
@@ -5794,7 +5798,8 @@
OPT_RECORD_RND_BUFFER, OPT_DIV_PRECINCREMENT, OPT_RELAY_LOG_SPACE_LIMIT,
OPT_RELAY_LOG_PURGE,
OPT_SLAVE_NET_TIMEOUT, OPT_SLAVE_COMPRESSED_PROTOCOL, OPT_SLOW_LAUNCH_TIME,
- OPT_SLAVE_TRANS_RETRIES, OPT_READONLY, OPT_DEBUGGING, OPT_DEBUG_FLUSH,
+ OPT_SLAVE_TRANS_RETRIES, OPT_READONLY, OPT_ROWID_MERGE_BUFF_SIZE,
+ OPT_DEBUGGING, OPT_DEBUG_FLUSH,
OPT_SORT_BUFFER, OPT_TABLE_OPEN_CACHE, OPT_TABLE_DEF_CACHE,
OPT_THREAD_CONCURRENCY, OPT_THREAD_CACHE_SIZE,
OPT_TMP_TABLE_SIZE, OPT_THREAD_STACK,
@@ -7130,6 +7135,11 @@
(uchar**) &max_system_variables.range_alloc_block_size, 0, GET_ULONG,
REQUIRED_ARG, RANGE_ALLOC_BLOCK_SIZE, RANGE_ALLOC_BLOCK_SIZE,
(longlong) ULONG_MAX, 0, 1024, 0},
+ {"rowid_merge_buff_size", OPT_ROWID_MERGE_BUFF_SIZE,
+ "The size of the buffers used [NOT] IN evaluation via partial matching.",
+ (uchar**) &global_system_variables.rowid_merge_buff_size,
+ (uchar**) &max_system_variables.rowid_merge_buff_size, 0, GET_ULONG,
+ REQUIRED_ARG, 8*1024*1024L, 0, MAX_MEM_TABLE_SIZE/2, 0, 1, 0},
{"read_buffer_size", OPT_RECORD_BUFFER,
"Each thread that does a sequential scan allocates a buffer of this size for each table it scans. If you do many sequential scans, you may want to increase this value.",
(uchar**) &global_system_variables.read_buff_size,
=== modified file 'sql/opt_subselect.cc'
--- a/sql/opt_subselect.cc 2010-03-15 06:32:54 +0000
+++ b/sql/opt_subselect.cc 2010-03-15 19:52:58 +0000
@@ -187,10 +187,10 @@
does not call setup_subquery_materialization(). We could make
SELECT ... FROM DUAL call that function but that doesn't seem
to be the case that is worth handling.
- 4. Subquery predicate is a top-level predicate
- (this implies it is not negated)
- TODO: this is a limitation that should be lifted once we
- implement correct NULL semantics (WL#3830)
+ 4. Either the subquery predicate is a top-level predicate, or at
+ least one partial match strategy is enabled. If no partial match
+ strategy is enabled, then materialization cannot be used for
+ non-top-level queries because it cannot handle NULLs correctly.
5. Subquery is non-correlated
TODO:
This is an overly restrictive condition. It can be extended to:
@@ -204,8 +204,8 @@
(*) The subquery must be part of a SELECT statement. The current
condition also excludes multi-table update statements.
- We have to determine whether we will perform subquery materialization
- before calling the IN=>EXISTS transformation, so that we know whether to
+ Determine whether we will perform subquery materialization before
+ calling the IN=>EXISTS transformation, so that we know whether to
perform the whole transformation or only that part of it which wraps
Item_in_subselect in an Item_in_optimizer.
*/
@@ -215,12 +215,14 @@
select_lex->master_unit()->first_select()->leaf_tables && // 3
thd->lex->sql_command == SQLCOM_SELECT && // *
select_lex->outer_select()->leaf_tables && // 3A
- subquery_types_allow_materialization(in_subs))
+ subquery_types_allow_materialization(in_subs) &&
+ // psergey-todo: duplicated_subselect_card_check: where it's done?
+ (in_subs->is_top_level_item() ||
+ optimizer_flag(thd, OPTIMIZER_SWITCH_PARTIAL_MATCH_ROWID_MERGE) ||
+ optimizer_flag(thd, OPTIMIZER_SWITCH_PARTIAL_MATCH_TABLE_SCAN)) &&//4
+ !in_subs->is_correlated && // 5
+ in_subs->exec_method == Item_in_subselect::NOT_TRANSFORMED) // 6
{
- // psergey-todo: duplicated_subselect_card_check: where it's done?
- if (in_subs->is_top_level_item() && // 4
- !in_subs->is_correlated && // 5
- in_subs->exec_method == Item_in_subselect::NOT_TRANSFORMED) // 6
in_subs->exec_method= Item_in_subselect::MATERIALIZATION;
}
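Illustration only, not part of the patch (tables are made up). The query shape
that the relaxed condition 4 now admits into MATERIALIZATION is a negated --
hence non-top-level -- IN predicate whose subquery may produce NULLs:

  create table t1 (a int);
  create table t2 (b int);
  insert into t1 values (1), (2);
  insert into t2 values (1), (NULL);
  # For a=2, "2 NOT IN (1, NULL)" is UNKNOWN, not TRUE, so a materialization
  # plan is only correct if a partial match strategy handles the NULLs.
  select a from t1 where a not in (select b from t2);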
=== modified file 'sql/set_var.cc'
--- a/sql/set_var.cc 2009-12-22 12:49:15 +0000
+++ b/sql/set_var.cc 2010-03-09 10:14:06 +0000
@@ -540,6 +540,9 @@
static sys_var_thd_ulong sys_range_alloc_block_size(&vars, "range_alloc_block_size",
&SV::range_alloc_block_size);
+static sys_var_thd_ulong sys_rowid_merge_buff_size(&vars, "rowid_merge_buff_size",
+ &SV::rowid_merge_buff_size);
+
static sys_var_thd_ulong sys_query_alloc_block_size(&vars, "query_alloc_block_size",
&SV::query_alloc_block_size,
0, fix_thd_mem_root);
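Not part of the patch: rowid_merge_buff_size is registered here as an ordinary
per-thread ulong variable, so it can be inspected and set at session or global
scope; the value below is arbitrary, within the documented range of
0 .. MAX_MEM_TABLE_SIZE/2 (default 8M):

  select @@rowid_merge_buff_size;
  set session rowid_merge_buff_size = 16*1024*1024;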
=== modified file 'sql/sql_class.cc'
--- a/sql/sql_class.cc 2010-02-17 21:59:41 +0000
+++ b/sql/sql_class.cc 2010-02-19 21:55:57 +0000
@@ -42,6 +42,7 @@
#include "sp_rcontext.h"
#include "sp_cache.h"
+#include "sql_select.h" /* declares create_tmp_table() */
/*
The following is used to initialise Table_ident with a internal
@@ -2877,6 +2878,71 @@
return 0;
}
+
+bool
+select_materialize_with_stats::
+create_result_table(THD *thd_arg, List<Item> *column_types,
+ bool is_union_distinct, ulonglong options,
+ const char *table_alias, bool bit_fields_as_long)
+{
+ DBUG_ASSERT(table == 0);
+ tmp_table_param.field_count= column_types->elements;
+ tmp_table_param.bit_fields_as_long= bit_fields_as_long;
+
+ if (! (table= create_tmp_table(thd_arg, &tmp_table_param, *column_types,
+ (ORDER*) 0, is_union_distinct, 1,
+ options, HA_POS_ERROR, (char*) table_alias)))
+ return TRUE;
+
+ col_stat= (Column_statistics*) table->in_use->alloc(table->s->fields *
+ sizeof(Column_statistics));
+  if (!col_stat)
+ return TRUE;
+
+ cleanup();
+
+ table->file->extra(HA_EXTRA_WRITE_CACHE);
+ table->file->extra(HA_EXTRA_IGNORE_DUP_KEY);
+ return FALSE;
+}
+
+
+/**
+ Override select_union::send_data to analyze each row for NULLs and to
+  update the NULL statistics before sending data to the client.
+
+ @return TRUE if fatal error when sending data to the client
+ @return FALSE on success
+*/
+
+bool select_materialize_with_stats::send_data(List<Item> &items)
+{
+ List_iterator_fast<Item> item_it(items);
+ Item *cur_item;
+ Column_statistics *cur_col_stat= col_stat;
+ uint nulls_in_row= 0;
+
+ ++count_rows;
+
+ while ((cur_item= item_it++))
+ {
+ if (cur_item->is_null())
+ {
+ ++cur_col_stat->null_count;
+ cur_col_stat->max_null_row= count_rows;
+ if (!cur_col_stat->min_null_row)
+ cur_col_stat->min_null_row= count_rows;
+ ++nulls_in_row;
+ }
+ ++cur_col_stat;
+ }
+ if (nulls_in_row > max_nulls_in_row)
+ max_nulls_in_row= nulls_in_row;
+
+ return select_union::send_data(items);
+}
+
+
/****************************************************************************
TMP_TABLE_PARAM
****************************************************************************/
=== modified file 'sql/sql_class.h'
--- a/sql/sql_class.h 2010-02-17 21:59:41 +0000
+++ b/sql/sql_class.h 2010-03-09 10:14:06 +0000
@@ -343,6 +343,8 @@
ulong mrr_buff_size;
ulong div_precincrement;
ulong sortbuff_size;
+ /* Total size of all buffers used by the subselect_rowid_merge_engine. */
+ ulong rowid_merge_buff_size;
ulong thread_handling;
ulong tx_isolation;
ulong completion_type;
@@ -2740,19 +2742,20 @@
class select_union :public select_result_interceptor
{
+protected:
TMP_TABLE_PARAM tmp_table_param;
public:
TABLE *table;
- select_union() :table(0) {}
+ select_union() :table(0) { tmp_table_param.init(); }
int prepare(List<Item> &list, SELECT_LEX_UNIT *u);
bool send_data(List<Item> &items);
bool send_eof();
bool flush();
- bool create_result_table(THD *thd, List<Item> *column_types,
- bool is_distinct, ulonglong options,
- const char *alias, bool bit_fields_as_long);
+ virtual bool create_result_table(THD *thd, List<Item> *column_types,
+ bool is_distinct, ulonglong options,
+ const char *alias, bool bit_fields_as_long);
};
/* Base subselect interface class */
@@ -2776,6 +2779,74 @@
bool send_data(List<Item> &items);
};
+
+/*
+ This class specializes select_union to collect statistics about the
+  data stored in the temp table. Currently the class collects statistics
+ about NULLs.
+*/
+
+class select_materialize_with_stats : public select_union
+{
+protected:
+ class Column_statistics
+ {
+ public:
+ /* Count of NULLs per column. */
+ ha_rows null_count;
+ /* The row number that contains the first NULL in a column. */
+ ha_rows min_null_row;
+ /* The row number that contains the last NULL in a column. */
+ ha_rows max_null_row;
+ };
+
+ /* Array of statistics data per column. */
+ Column_statistics* col_stat;
+
+ /*
+ The number of columns in the biggest sub-row that consists of only
+ NULL values.
+ */
+ ha_rows max_nulls_in_row;
+ /*
+    Count of rows written to the temp table. This is redundant as it is
+    already stored in handler::stats.records; however, that one is relatively
+    expensive to compute (given we need it for every row).
+ */
+ ha_rows count_rows;
+
+public:
+ select_materialize_with_stats() {}
+ virtual bool create_result_table(THD *thd, List<Item> *column_types,
+ bool is_distinct, ulonglong options,
+ const char *alias, bool bit_fields_as_long);
+ bool init_result_table(ulonglong select_options);
+ bool send_data(List<Item> &items);
+ void cleanup()
+ {
+ memset(col_stat, 0, table->s->fields * sizeof(Column_statistics));
+ max_nulls_in_row= 0;
+ count_rows= 0;
+ }
+ ha_rows get_null_count_of_col(uint idx)
+ {
+ DBUG_ASSERT(idx < table->s->fields);
+ return col_stat[idx].null_count;
+ }
+ ha_rows get_max_null_of_col(uint idx)
+ {
+ DBUG_ASSERT(idx < table->s->fields);
+ return col_stat[idx].max_null_row;
+ }
+ ha_rows get_min_null_of_col(uint idx)
+ {
+ DBUG_ASSERT(idx < table->s->fields);
+ return col_stat[idx].min_null_row;
+ }
+ ha_rows get_max_nulls_in_row() { return max_nulls_in_row; }
+};
+
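A worked example, not part of the patch; row numbers are 1-based because
count_rows is incremented before a row is examined. For a materialized result
containing the rows (1, NULL), (NULL, NULL), (3, 4), send_data() accumulates:

  # column 1: null_count=1, min_null_row=2, max_null_row=2
  # column 2: null_count=2, min_null_row=1, max_null_row=2
  # max_nulls_in_row=2   -- row 2 consists of NULLs only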
+
/* used in independent ALL/ANY optimisation */
class select_max_min_finder_subselect :public select_subselect
{
=== modified file 'sql/sql_select.cc'
--- a/sql/sql_select.cc 2010-03-14 18:25:43 +0000
+++ b/sql/sql_select.cc 2010-03-15 19:52:58 +0000
@@ -874,6 +874,9 @@
{
DBUG_PRINT("info",("No tables"));
error= 0;
+ /* Create all structures needed for materialized subquery execution. */
+ if (setup_subquery_materialization())
+ DBUG_RETURN(1);
DBUG_RETURN(0);
}
error= -1; // Error is sent to client
@@ -11258,7 +11261,7 @@
param->group_buff=group_buff;
share->keys=1;
share->uniques= test(using_unique_constraint);
- table->key_info=keyinfo;
+ table->key_info= table->s->key_info= keyinfo;
keyinfo->key_part=key_part_info;
keyinfo->flags=HA_NOSAME;
keyinfo->usable_key_parts=keyinfo->key_parts= param->group_parts;
@@ -11344,7 +11347,7 @@
keyinfo->key_parts * sizeof(KEY_PART_INFO))))
goto err;
bzero((void*) key_part_info, keyinfo->key_parts * sizeof(KEY_PART_INFO));
- table->key_info=keyinfo;
+ table->key_info= table->s->key_info= keyinfo;
keyinfo->key_part=key_part_info;
keyinfo->flags=HA_NOSAME | HA_NULL_ARE_EQUAL;
keyinfo->key_length= 0; // Will compute the sum of the parts below.
[Maria-developers] bzr commit into file:///home/tsk/mprog/src/5.3-subqueries/ branch (timour:2779)
by timour@askmonty.org 15 Mar '10
#At file:///home/tsk/mprog/src/5.3-subqueries/ based on revid:psergey@askmonty.org-20100315063535-jsp4jgya6lfqt8e6
2779 timour(a)askmonty.org 2010-03-15 [merge]
Merge in MWL#68: Subquery optimization: Efficient NOT IN execution with NULLs
modified:
mysql-test/include/mix1.inc
mysql-test/r/index_merge_myisam.result
mysql-test/r/innodb_mysql.result
mysql-test/r/myisam_mrr.result
mysql-test/r/ps.result
mysql-test/r/subselect.result
mysql-test/r/subselect3.result
mysql-test/r/subselect3_jcl6.result
mysql-test/r/subselect_no_mat.result
mysql-test/r/subselect_no_opts.result
mysql-test/r/subselect_no_semijoin.result
mysql-test/r/subselect_sj.result
mysql-test/r/subselect_sj_jcl6.result
mysql-test/t/ps.test
mysql-test/t/subselect.test
mysql-test/t/subselect3.test
sql/item_cmpfunc.h
sql/item_subselect.cc
sql/item_subselect.h
sql/mysql_priv.h
sql/mysqld.cc
sql/opt_subselect.cc
sql/set_var.cc
sql/sql_class.cc
sql/sql_class.h
sql/sql_select.cc
=== modified file 'mysql-test/include/mix1.inc'
--- a/mysql-test/include/mix1.inc 2009-09-15 06:08:54 +0000
+++ b/mysql-test/include/mix1.inc 2010-03-11 21:43:31 +0000
@@ -1177,8 +1177,11 @@ DROP TABLE t1;
create table t1 (a bit(1) not null,b int) engine=myisam;
create table t2 (c int) engine=innodb;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch='partial_match_rowid_merge=off,partial_match_table_scan=off';
explain
select b from t1 where a not in (select b from t1,t2 group by a) group by a;
+set optimizer_switch=@save_optimizer_switch;
DROP TABLE t1,t2;
--echo End of 5.0 tests
=== modified file 'mysql-test/r/index_merge_myisam.result'
--- a/mysql-test/r/index_merge_myisam.result 2010-01-17 14:51:10 +0000
+++ b/mysql-test/r/index_merge_myisam.result 2010-03-11 21:43:31 +0000
@@ -1419,19 +1419,19 @@ drop table t1;
#
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='index_merge=off,index_merge_union=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='index_merge_union=on';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=off,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=off,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,index_merge_sort_union=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=off,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=off,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch=4;
ERROR 42000: Variable 'optimizer_switch' can't be set to the value of '4'
set optimizer_switch=NULL;
@@ -1458,21 +1458,21 @@ set optimizer_switch=default;
set optimizer_switch='index_merge=off,index_merge_union=off,default';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch=default;
select @@global.optimizer_switch;
@@global.optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set @@global.optimizer_switch=default;
select @@global.optimizer_switch;
@@global.optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
#
# Check index_merge's @@optimizer_switch flags
#
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
create table t0 (a int);
insert into t0 values (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);
create table t1 (a int, b int, c int, filler char(100),
@@ -1582,5 +1582,5 @@ id select_type table type possible_keys
set optimizer_switch=default;
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
drop table t0, t1;
=== modified file 'mysql-test/r/innodb_mysql.result'
--- a/mysql-test/r/innodb_mysql.result 2009-12-15 07:16:46 +0000
+++ b/mysql-test/r/innodb_mysql.result 2010-03-11 21:43:31 +0000
@@ -1425,12 +1425,15 @@ DROP TABLE t1;
#
create table t1 (a bit(1) not null,b int) engine=myisam;
create table t2 (c int) engine=innodb;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch='partial_match_rowid_merge=off,partial_match_table_scan=off';
explain
select b from t1 where a not in (select b from t1,t2 group by a) group by a;
id select_type table type possible_keys key key_len ref rows Extra
1 PRIMARY NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
2 DEPENDENT SUBQUERY t1 system NULL NULL NULL NULL 0 const row not found
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 1
+set optimizer_switch=@save_optimizer_switch;
DROP TABLE t1,t2;
End of 5.0 tests
CREATE TABLE `t2` (
=== modified file 'mysql-test/r/myisam_mrr.result'
--- a/mysql-test/r/myisam_mrr.result 2010-01-17 14:51:10 +0000
+++ b/mysql-test/r/myisam_mrr.result 2010-03-11 21:43:31 +0000
@@ -394,7 +394,7 @@ drop table t0, t1;
# - engine_condition_pushdown does not affect ICP
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
create table t0 (a int);
insert into t0 values (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);
create table t1 (a int, b int, key(a));
=== modified file 'mysql-test/r/ps.result'
--- a/mysql-test/r/ps.result 2009-05-27 15:19:44 +0000
+++ b/mysql-test/r/ps.result 2010-03-11 21:43:31 +0000
@@ -149,6 +149,8 @@ c29 longblob, c30 longtext, c31 enum('on
c32 set('monday', 'tuesday', 'wednesday')
) engine = MYISAM ;
create table t2 like t1;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
set @stmt= ' explain SELECT (SELECT SUM(c1 + c12 + 0.0) FROM t2 where (t1.c2 - 0e-3) = t2.c2 GROUP BY t1.c15 LIMIT 1) as scalar_s, exists (select 1.0e+0 from t2 where t2.c3 * 9.0000000000 = t1.c4) as exists_s, c5 * 4 in (select c6 + 0.3e+1 from t2) as in_s, (c7 - 4, c8 - 4) in (select c9 + 4.0, c10 + 40e-1 from t2) as in_row_s FROM t1, (select c25 x, c32 y from t2) tt WHERE x * 1 = c25 ' ;
prepare stmt1 from @stmt ;
execute stmt1 ;
@@ -177,6 +179,7 @@ id select_type table type possible_keys
2 DEPENDENT SUBQUERY NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
deallocate prepare stmt1;
drop tables t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
set @arg00=1;
prepare stmt1 from ' create table t1 (m int) as select 1 as m ' ;
execute stmt1 ;
=== modified file 'mysql-test/r/subselect.result'
--- a/mysql-test/r/subselect.result 2010-02-17 21:59:41 +0000
+++ b/mysql-test/r/subselect.result 2010-03-11 21:43:31 +0000
@@ -1,4 +1,6 @@
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
select (select 2);
(select 2)
2
@@ -4803,4 +4805,5 @@ SELECT 1 FROM t1 GROUP BY
1
1
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
End of 5.1 tests.
=== modified file 'mysql-test/r/subselect3.result'
--- a/mysql-test/r/subselect3.result 2010-02-17 10:05:27 +0000
+++ b/mysql-test/r/subselect3.result 2010-03-11 21:43:31 +0000
@@ -63,12 +63,15 @@ Handler_read_rnd_next 11
select ' ^ This must show 11' Z;
Z
^ This must show 11
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
explain extended select a in (select max(ie) from t1 where oref=4 group by grp) from t3;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t3 ALL NULL NULL NULL NULL 2 100.00
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 6 100.00 Using where; Using temporary; Using filesort
Warnings:
Note 1003 select <in_optimizer>(`test`.`t3`.`a`,<exists>(select max(`test`.`t1`.`ie`) AS `max(ie)` from `test`.`t1` where (`test`.`t1`.`oref` = 4) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t3`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`)))))) AS `a in (select max(ie) from t1 where oref=4 group by grp)` from `test`.`t3`
+set @@optimizer_switch=@save_optimizer_switch;
drop table t1, t2, t3;
create table t1 (a int, oref int, key(a));
insert into t1 values
@@ -692,6 +695,8 @@ a MAX(b) test
2 3 h
3 4 i
DROP TABLE t1, t2;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
CREATE TABLE t1 (a int);
CREATE TABLE t2 (b int, PRIMARY KEY(b));
INSERT INTO t1 VALUES (1), (NULL), (4);
@@ -759,6 +764,7 @@ id select_type table type possible_keys
1 PRIMARY t1 ALL NULL NULL NULL NULL 4 Using where
2 DEPENDENT SUBQUERY t2 unique_subquery PRIMARY PRIMARY 4 func 1 Using index; Using where
DROP TABLE t1, t2;
+set @@optimizer_switch=@save_optimizer_switch;
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES(1);
CREATE TABLE t2 (placeholder CHAR(11));
@@ -960,7 +966,7 @@ i1 i2
# Baseline:
SHOW STATUS LIKE '%Handler_read_rnd_next';
Variable_name Value
-Handler_read_rnd_next 17
+Handler_read_rnd_next 18
INSERT INTO t1 VALUES (NULL, NULL);
FLUSH STATUS;
@@ -977,7 +983,7 @@ i1 i2
# (read record from t1, but do not read from t2)
SHOW STATUS LIKE '%Handler_read_rnd_next';
Variable_name Value
-Handler_read_rnd_next 18
+Handler_read_rnd_next 19
DROP TABLE t1,t2;
End of 5.1 tests
CREATE TABLE t1 (
=== modified file 'mysql-test/r/subselect3_jcl6.result'
--- a/mysql-test/r/subselect3_jcl6.result 2010-02-17 10:47:55 +0000
+++ b/mysql-test/r/subselect3_jcl6.result 2010-03-11 21:43:31 +0000
@@ -67,12 +67,15 @@ Handler_read_rnd_next 11
select ' ^ This must show 11' Z;
Z
^ This must show 11
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
explain extended select a in (select max(ie) from t1 where oref=4 group by grp) from t3;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t3 ALL NULL NULL NULL NULL 2 100.00
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 6 100.00 Using where; Using temporary; Using filesort
Warnings:
Note 1003 select <in_optimizer>(`test`.`t3`.`a`,<exists>(select max(`test`.`t1`.`ie`) AS `max(ie)` from `test`.`t1` where (`test`.`t1`.`oref` = 4) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t3`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`)))))) AS `a in (select max(ie) from t1 where oref=4 group by grp)` from `test`.`t3`
+set @@optimizer_switch=@save_optimizer_switch;
drop table t1, t2, t3;
create table t1 (a int, oref int, key(a));
insert into t1 values
@@ -696,6 +699,8 @@ a MAX(b) test
2 3 h
3 4 i
DROP TABLE t1, t2;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
CREATE TABLE t1 (a int);
CREATE TABLE t2 (b int, PRIMARY KEY(b));
INSERT INTO t1 VALUES (1), (NULL), (4);
@@ -763,6 +768,7 @@ id select_type table type possible_keys
1 PRIMARY t1 ALL NULL NULL NULL NULL 4 Using where
2 DEPENDENT SUBQUERY t2 unique_subquery PRIMARY PRIMARY 4 func 1 Using index; Using where
DROP TABLE t1, t2;
+set @@optimizer_switch=@save_optimizer_switch;
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES(1);
CREATE TABLE t2 (placeholder CHAR(11));
@@ -964,7 +970,7 @@ i1 i2
# Baseline:
SHOW STATUS LIKE '%Handler_read_rnd_next';
Variable_name Value
-Handler_read_rnd_next 17
+Handler_read_rnd_next 18
INSERT INTO t1 VALUES (NULL, NULL);
FLUSH STATUS;
@@ -981,7 +987,7 @@ i1 i2
# (read record from t1, but do not read from t2)
SHOW STATUS LIKE '%Handler_read_rnd_next';
Variable_name Value
-Handler_read_rnd_next 18
+Handler_read_rnd_next 19
DROP TABLE t1,t2;
End of 5.1 tests
CREATE TABLE t1 (
=== modified file 'mysql-test/r/subselect_no_mat.result'
--- a/mysql-test/r/subselect_no_mat.result 2010-02-21 07:33:54 +0000
+++ b/mysql-test/r/subselect_no_mat.result 2010-03-11 21:43:31 +0000
@@ -1,8 +1,10 @@
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='materialization=off';
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
select (select 2);
(select 2)
2
@@ -4807,8 +4809,9 @@ SELECT 1 FROM t1 GROUP BY
1
1
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
End of 5.1 tests.
set optimizer_switch=default;
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
=== modified file 'mysql-test/r/subselect_no_opts.result'
--- a/mysql-test/r/subselect_no_opts.result 2010-02-21 07:33:54 +0000
+++ b/mysql-test/r/subselect_no_opts.result 2010-03-11 21:43:31 +0000
@@ -1,8 +1,10 @@
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='materialization=off,semijoin=off';
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
select (select 2);
(select 2)
2
@@ -4807,8 +4809,9 @@ SELECT 1 FROM t1 GROUP BY
1
1
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
End of 5.1 tests.
set optimizer_switch=default;
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
=== modified file 'mysql-test/r/subselect_no_semijoin.result'
--- a/mysql-test/r/subselect_no_semijoin.result 2010-02-21 07:33:54 +0000
+++ b/mysql-test/r/subselect_no_semijoin.result 2010-03-11 21:43:31 +0000
@@ -1,8 +1,10 @@
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='semijoin=off';
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
select (select 2);
(select 2)
2
@@ -4807,8 +4809,9 @@ SELECT 1 FROM t1 GROUP BY
1
1
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
End of 5.1 tests.
set optimizer_switch=default;
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
=== modified file 'mysql-test/r/subselect_sj.result'
--- a/mysql-test/r/subselect_sj.result 2010-03-15 06:32:54 +0000
+++ b/mysql-test/r/subselect_sj.result 2010-03-15 19:52:58 +0000
@@ -202,39 +202,39 @@ BUG#37120 optimizer_switch allowable val
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,materialization=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off,semijoin=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,materialization=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch=default;
drop table t0, t1, t2;
drop table t10, t11, t12;
=== modified file 'mysql-test/r/subselect_sj_jcl6.result'
--- a/mysql-test/r/subselect_sj_jcl6.result 2010-03-15 06:32:54 +0000
+++ b/mysql-test/r/subselect_sj_jcl6.result 2010-03-15 19:52:58 +0000
@@ -206,39 +206,39 @@ BUG#37120 optimizer_switch allowable val
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,materialization=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off,semijoin=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,materialization=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch=default;
drop table t0, t1, t2;
drop table t10, t11, t12;
=== modified file 'mysql-test/t/ps.test'
--- a/mysql-test/t/ps.test 2009-05-27 15:19:44 +0000
+++ b/mysql-test/t/ps.test 2010-03-11 21:43:31 +0000
@@ -163,6 +163,9 @@ create table t1
) engine = MYISAM ;
create table t2 like t1;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
set @stmt= ' explain SELECT (SELECT SUM(c1 + c12 + 0.0) FROM t2 where (t1.c2 - 0e-3) = t2.c2 GROUP BY t1.c15 LIMIT 1) as scalar_s, exists (select 1.0e+0 from t2 where t2.c3 * 9.0000000000 = t1.c4) as exists_s, c5 * 4 in (select c6 + 0.3e+1 from t2) as in_s, (c7 - 4, c8 - 4) in (select c9 + 4.0, c10 + 40e-1 from t2) as in_row_s FROM t1, (select c25 x, c32 y from t2) tt WHERE x * 1 = c25 ' ;
prepare stmt1 from @stmt ;
execute stmt1 ;
@@ -171,6 +174,8 @@ explain SELECT (SELECT SUM(c1 + c12 + 0.
deallocate prepare stmt1;
drop tables t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
+
#
# parameters from variables (for field creation)
#
=== modified file 'mysql-test/t/subselect.test'
--- a/mysql-test/t/subselect.test 2010-01-17 20:52:20 +0000
+++ b/mysql-test/t/subselect.test 2010-03-11 21:43:31 +0000
@@ -11,6 +11,9 @@
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
--enable_warnings
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
select (select 2);
explain extended select (select 2);
SELECT (SELECT 1) UNION SELECT (SELECT 2);
@@ -4061,4 +4064,6 @@ SELECT 1 FROM t1 GROUP BY
(SELECT LAST_INSERT_ID() FROM t1 ORDER BY MIN(a) ASC LIMIT 1);
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
+
--echo End of 5.1 tests.
=== modified file 'mysql-test/t/subselect3.test'
--- a/mysql-test/t/subselect3.test 2010-01-17 14:51:10 +0000
+++ b/mysql-test/t/subselect3.test 2010-03-11 21:43:31 +0000
@@ -59,9 +59,13 @@ select a in (select max(ie) from t1 wher
show status like 'Handler_read_rnd_next';
select ' ^ This must show 11' Z;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
# This must show trigcond:
explain extended select a in (select max(ie) from t1 where oref=4 group by grp) from t3;
+set @@optimizer_switch=@save_optimizer_switch;
drop table t1, t2, t3;
#
@@ -529,6 +533,9 @@ SELECT a, MAX(b),
DROP TABLE t1, t2;
+# The next three test cases must be executed with the IN=>EXISTS strategy
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
#
# Bug #27870: crash of an equijoin query with WHERE condition containing
@@ -588,6 +595,8 @@ EXPLAIN SELECT a FROM t1 WHERE a NOT IN
DROP TABLE t1, t2;
+set @@optimizer_switch=@save_optimizer_switch;
+
#
# Bug #34763: item_subselect.cc:1235:Item_in_subselect::row_value_transformer:
# Assertion failed, unexpected error message:
=== modified file 'sql/item_cmpfunc.h'
--- a/sql/item_cmpfunc.h 2010-03-13 20:04:52 +0000
+++ b/sql/item_cmpfunc.h 2010-03-15 19:52:58 +0000
@@ -350,6 +350,7 @@ public:
CHARSET_INFO *compare_collation() { return cmp.cmp_collation.collation; }
uint decimal_precision() const { return 1; }
void top_level_item() { abort_on_null= TRUE; }
+ Arg_comparator *get_comparator() { return &cmp; }
friend class Arg_comparator;
};
=== modified file 'sql/item_subselect.cc'
--- a/sql/item_subselect.cc 2010-02-21 06:32:23 +0000
+++ b/sql/item_subselect.cc 2010-03-09 10:14:06 +0000
@@ -138,6 +138,7 @@ void Item_in_subselect::cleanup()
left_expr_cache= NULL;
}
first_execution= TRUE;
+ is_constant= FALSE;
Item_subselect::cleanup();
DBUG_VOID_RETURN;
}
@@ -449,8 +450,10 @@ bool Item_subselect::exec()
int res;
if (thd->is_error())
- /* Do not execute subselect in case of a fatal error */
+ {
+ /* Do not execute subselect in case of a fatal error */
return 1;
+ }
/*
Simulate a failure in sub-query execution. Used to test e.g.
out of memory or query being killed conditions.
@@ -475,9 +478,6 @@ bool Item_subselect::exec()
bool Item_in_subselect::exec()
{
DBUG_ENTER("Item_in_subselect::exec");
- DBUG_ASSERT(exec_method != MATERIALIZATION ||
- (exec_method == MATERIALIZATION &&
- engine->engine_type() == subselect_engine::HASH_SJ_ENGINE));
/*
Initialize the cache of the left predicate operand. This has to be done as
late as now, because Cached_item directly contains a resolved field (not
@@ -493,14 +493,14 @@ bool Item_in_subselect::exec()
if (!left_expr_cache && exec_method == MATERIALIZATION)
init_left_expr_cache();
- /* If the new left operand is already in the cache, reuse the old result. */
- if (left_expr_cache && test_if_item_cache_changed(*left_expr_cache) < 0)
- {
- /* Always compute IN for the first row as the cache is not valid for it. */
- if (!first_execution)
- DBUG_RETURN(FALSE);
- first_execution= FALSE;
- }
+ /*
+ If the new left operand is already in the cache, reuse the old result.
+ Use the cached result only if this is not the first execution of IN
+ because the cache is not valid for the first execution.
+ */
+ if (!first_execution && left_expr_cache &&
+ test_if_item_cache_changed(*left_expr_cache) < 0)
+ DBUG_RETURN(FALSE);
/*
The exec() method below updates item::value, and item::null_value, thus if
@@ -910,8 +910,8 @@ bool Item_in_subselect::test_limit(st_se
Item_in_subselect::Item_in_subselect(Item * left_exp,
st_select_lex *select_lex):
Item_exists_subselect(), left_expr_cache(0), first_execution(TRUE),
- optimizer(0), pushed_cond_guards(NULL), exec_method(NOT_TRANSFORMED),
- upper_item(0)
+ is_constant(FALSE), optimizer(0), pushed_cond_guards(NULL),
+ exec_method(NOT_TRANSFORMED), upper_item(0)
{
DBUG_ENTER("Item_in_subselect::Item_in_subselect");
left_expr= left_exp;
@@ -1105,6 +1105,8 @@ bool Item_in_subselect::val_bool()
{
DBUG_ASSERT(fixed == 1);
null_value= 0;
+ if (is_constant)
+ return value;
if (exec())
{
reset();
@@ -1571,9 +1573,9 @@ Item_in_subselect::row_value_transformer
DBUG_ENTER("Item_in_subselect::row_value_transformer");
// psergey: duplicated_subselect_card_check
- if (select_lex->item_list.elements != left_expr->cols())
+ if (select_lex->item_list.elements != cols_num)
{
- my_error(ER_OPERAND_COLUMNS, MYF(0), left_expr->cols());
+ my_error(ER_OPERAND_COLUMNS, MYF(0), cols_num);
DBUG_RETURN(RES_ERROR);
}
@@ -1980,17 +1982,69 @@ void Item_in_subselect::print(String *st
bool Item_in_subselect::fix_fields(THD *thd_arg, Item **ref)
{
- bool result = 0;
+ uint outer_cols_num;
+ List<Item> *inner_cols;
if (exec_method == SEMI_JOIN)
return !( (*ref)= new Item_int(1));
- if (thd_arg->lex->view_prepare_mode && left_expr && !left_expr->fixed)
- result = left_expr->fix_fields(thd_arg, &left_expr);
+ /*
+ Check if the outer and inner IN operands match in those cases when we
+ will not perform IN=>EXISTS transformation. Currently this is when we
+ use subquery materialization.
+
+ The condition below is true when this method was called recursively from
+ inside JOIN::prepare for the JOIN object created by the call chain
+ Item_subselect::fix_fields -> subselect_single_select_engine::prepare,
+ which creates a JOIN object for the subquery and calls JOIN::prepare for
+ the JOIN of the subquery.
+ Notice that in some cases, this doesn't happen, and the check_cols()
+ test for each Item happens later in
+ Item_in_subselect::row_value_in_to_exists_transformer.
+ The reason for this mess is that our JOIN::prepare phase works top-down
+    instead of bottom-up, so we first do name resolution and semantic checks
+ for the outer selects, then for the inner.
+ */
+ if (engine &&
+ engine->engine_type() == subselect_engine::SINGLE_SELECT_ENGINE &&
+ ((subselect_single_select_engine*)engine)->join)
+ {
+ outer_cols_num= left_expr->cols();
+
+ if (unit->is_union())
+ inner_cols= &(unit->types);
+ else
+ inner_cols= &(unit->first_select()->item_list);
+ if (outer_cols_num != inner_cols->elements)
+ {
+ my_error(ER_OPERAND_COLUMNS, MYF(0), outer_cols_num);
+ return TRUE;
+ }
+ if (outer_cols_num > 1)
+ {
+ List_iterator<Item> inner_col_it(*inner_cols);
+ Item *inner_col;
+ for (uint i= 0; i < outer_cols_num; i++)
+ {
+ inner_col= inner_col_it++;
+ if (inner_col->check_cols(left_expr->element_index(i)->cols()))
+ return TRUE;
+ }
+ }
+ }
+
+ if (thd_arg->lex->view_prepare_mode && left_expr && !left_expr->fixed &&
+ left_expr->fix_fields(thd_arg, &left_expr))
+ return TRUE;
+ if (Item_subselect::fix_fields(thd_arg, ref))
+ return TRUE;
- return result || Item_subselect::fix_fields(thd_arg, ref);
+ fixed= TRUE;
+
+ return FALSE;
}
+
void Item_in_subselect::fix_after_pullout(st_select_lex *new_parent, Item **ref)
{
left_expr->fix_after_pullout(new_parent, &left_expr);
@@ -2267,10 +2321,9 @@ bool subselect_union_engine::no_rows()
void subselect_uniquesubquery_engine::cleanup()
{
DBUG_ENTER("subselect_uniquesubquery_engine::cleanup");
- /*
- subselect_uniquesubquery_engine have not 'result' assigbed, so we do not
- cleanup() it
- */
+ /* Tell handler we don't need the index anymore */
+ if (tab->table->file->inited)
+ tab->table->file->ha_index_end();
DBUG_VOID_RETURN;
}
@@ -2291,7 +2344,7 @@ subselect_union_engine::subselect_union_
Create and prepare the JOIN object that represents the query execution
plan for the subquery.
- @detail
+ @details
This method is called from Item_subselect::fix_fields. For prepared
statements it is called both during the PREPARE and EXECUTE phases in the
following ways:
@@ -2593,14 +2646,23 @@ int subselect_uniquesubquery_engine::sca
for (;;)
{
error=table->file->ha_rnd_next(table->record[0]);
- if (error && error != HA_ERR_END_OF_FILE)
- {
- error= report_error(table, error);
- break;
+    if (error)
+    {
+ if (error == HA_ERR_RECORD_DELETED)
+ {
+ error= 0;
+ continue;
+ }
+ if (error == HA_ERR_END_OF_FILE)
+ {
+ error= 0;
+ break;
+ }
+ else
+ {
+ error= report_error(table, error);
+ break;
+ }
}
- /* No more rows */
- if (table->status)
- break;
if (!cond || cond->val_int())
{
@@ -2711,6 +2773,56 @@ bool subselect_uniquesubquery_engine::co
/*
+ @retval 1 A NULL was found in the outer reference, index lookup is
+             not applicable, the outer ref is unusable as a lookup key,
+ use some other method to find a match.
+ @retval 0 The outer ref was copied into an index lookup key.
+ @retval -1 The outer ref cannot possibly match any row, IN is FALSE.
+*/
+/* TIMOUR: this method is a variant of copy_ref_key(), needs refactoring. */
+
+int subselect_uniquesubquery_engine::copy_ref_key_simple()
+{
+ for (store_key **copy= tab->ref.key_copy ; *copy ; copy++)
+ {
+ enum store_key::store_key_result store_res;
+ store_res= (*copy)->copy();
+ tab->ref.key_err= store_res;
+
+ /*
+      When there is a NULL part in the key we don't need to make an index
+      lookup for such a key, thus we don't need to copy the whole key.
+      If we should later do a sequential scan, return OK; fail otherwise.
+
+ See also the comment for the subselect_uniquesubquery_engine::exec()
+ function.
+ */
+ null_keypart= (*copy)->null_key;
+ if (null_keypart)
+ return 1;
+
+ /*
+ Check if the error is equal to STORE_KEY_FATAL. This is not expressed
+ using the store_key::store_key_result enum because ref.key_err is a
+ boolean and we want to detect both TRUE and STORE_KEY_FATAL from the
+ space of the union of the values of [TRUE, FALSE] and
+ store_key::store_key_result.
+      TODO: fix the variable and return types.
+ */
+ if (store_res == store_key::STORE_KEY_FATAL)
+ {
+ /*
+ Error converting the left IN operand to the column type of the right
+ IN operand.
+ */
+ return -1;
+ }
+ }
+ return 0;
+}
+
+
+/*
Execute subselect
SYNOPSIS
@@ -2750,7 +2862,13 @@ int subselect_uniquesubquery_engine::exe
/* TODO: change to use of 'full_scan' here? */
if (copy_ref_key())
+ {
+ /*
+ TIMOUR: copy_ref_key() == 1 means NULL result, not error, why return 1?
+      Check who relies on this result.
+ */
DBUG_RETURN(1);
+ }
if (table->status)
{
/*
@@ -2791,6 +2909,46 @@ int subselect_uniquesubquery_engine::exe
}
+/*
+  Look up the current value of the left IN operand in the only index of the
+  materialized temporary table, and set Item_in_subselect::value to reflect
+  whether a matching row was found.
+*/
+
+int subselect_uniquesubquery_engine::index_lookup()
+{
+ DBUG_ENTER("subselect_uniquesubquery_engine::index_lookup");
+ int error;
+ TABLE *table= tab->table;
+
+ if (!table->file->inited)
+ table->file->ha_index_init(tab->ref.key, 0);
+ error= table->file->ha_index_read_map(table->record[0],
+ tab->ref.key_buff,
+ make_prev_keypart_map(tab->
+ ref.key_parts),
+ HA_READ_KEY_EXACT);
+ DBUG_PRINT("info", ("lookup result: %i", error));
+
+ if (error && error != HA_ERR_KEY_NOT_FOUND && error != HA_ERR_END_OF_FILE)
+ {
+ /*
+ TIMOUR: I don't understand at all when do we need to call report_error.
+ In most places where we access an index, we don't do this. Why here?
+ */
+ error= report_error(table, error);
+ DBUG_RETURN(error);
+ }
+
+ table->null_row= 0;
+ if (!error && (!cond || cond->val_int()))
+ ((Item_in_subselect *) item)->value= 1;
+ else
+ ((Item_in_subselect *) item)->value= 0;
+
+ DBUG_RETURN(0);
+}
+
+
+
subselect_uniquesubquery_engine::~subselect_uniquesubquery_engine()
{
/* Tell handler we don't need the index anymore */
@@ -3225,6 +3383,7 @@ bool subselect_union_engine::no_tables()
bool subselect_uniquesubquery_engine::no_tables()
{
/* returning value is correct, but this method should never be called */
+ DBUG_ASSERT(FALSE);
return 0;
}
@@ -3235,16 +3394,259 @@ bool subselect_uniquesubquery_engine::no
/**
+ Check if an IN predicate should be executed via partial matching using
+ only schema information.
+
+ @details
+ This test essentially has three results:
+ - partial matching is applicable, but cannot be executed due to a
+ limitation in the total number of indexes, as a result we can't
+ use subquery materialization at all.
+ - partial matching is either applicable or not, and this can be
+ determined by looking at 'this->max_keys'.
+ If max_keys > 1, then we need partial matching because there are
+ more indexes than just the one we use during materialization to
+ remove duplicates.
+
+ @note
+ TIMOUR: The schema-based analysis for partial matching can be done once for
+ prepared statement and remembered. It is done here to remove the need to
+ save/restore all related variables between each re-execution, thus making
+ the code simpler.
+
+ @retval PARTIAL_MATCH if a partial match should be used
+ @retval COMPLETE_MATCH if a complete match (index lookup) should be used
+*/
+
+subselect_hash_sj_engine::exec_strategy
+subselect_hash_sj_engine::get_strategy_using_schema()
+{
+ Item_in_subselect *item_in= (Item_in_subselect *) item;
+
+ if (item_in->is_top_level_item())
+ return COMPLETE_MATCH;
+ else
+ {
+ List_iterator<Item> inner_col_it(*item_in->unit->get_unit_column_types());
+ Item *outer_col, *inner_col;
+
+ for (uint i= 0; i < item_in->left_expr->cols(); i++)
+ {
+ outer_col= item_in->left_expr->element_index(i);
+ inner_col= inner_col_it++;
+
+ if (!inner_col->maybe_null && !outer_col->maybe_null)
+ bitmap_set_bit(&non_null_key_parts, i);
+ else
+ {
+ bitmap_set_bit(&partial_match_key_parts, i);
+ ++count_partial_match_columns;
+ }
+ }
+ }
+
+  /*
+    If at least one column may contain NULLs, we need partial matching;
+    otherwise regular hash index lookups are enough.
+  */
+ if (count_partial_match_columns)
+ return PARTIAL_MATCH;
+ return COMPLETE_MATCH;
+}
+
+
+/**
+ Test whether an IN predicate must be computed via partial matching
+ based on the NULL statistics for each column of a materialized subquery.
+
+ @details The procedure analyzes column NULL statistics, updates the
+ matching type of columns that cannot be NULL or that contain only NULLs.
+ Based on this, the procedure determines the final execution strategy for
+ the [NOT] IN predicate.
+
+ @retval PARTIAL_MATCH if a partial match should be used
+ @retval COMPLETE_MATCH if a complete match (index lookup) should be used
+*/
+
+subselect_hash_sj_engine::exec_strategy
+subselect_hash_sj_engine::get_strategy_using_data()
+{
+ Item_in_subselect *item_in= (Item_in_subselect *) item;
+ select_materialize_with_stats *result_sink=
+ (select_materialize_with_stats *) result;
+ Item *outer_col;
+
+ /*
+ If we already determined that a complete match is enough based on schema
+ information, nothing can be better.
+ */
+ if (strategy == COMPLETE_MATCH)
+ return COMPLETE_MATCH;
+
+ for (uint i= 0; i < item_in->left_expr->cols(); i++)
+ {
+ if (!bitmap_is_set(&partial_match_key_parts, i))
+ continue;
+ outer_col= item_in->left_expr->element_index(i);
+ /*
+ If column 'i' doesn't contain NULLs, and the corresponding outer reference
+ cannot have a NULL value, then 'i' is a non-nullable column.
+ */
+ if (result_sink->get_null_count_of_col(i) == 0 && !outer_col->maybe_null)
+ {
+ bitmap_clear_bit(&partial_match_key_parts, i);
+ bitmap_set_bit(&non_null_key_parts, i);
+ --count_partial_match_columns;
+ }
+ if (result_sink->get_null_count_of_col(i) ==
+ tmp_table->file->stats.records)
+ ++count_null_only_columns;
+ }
+
+ /* If no column contains NULLs use regular hash index lookups. */
+ if (!count_partial_match_columns)
+ return COMPLETE_MATCH;
+ return PARTIAL_MATCH;
+}
+
+
+void
+subselect_hash_sj_engine::choose_partial_match_strategy(
+ bool has_non_null_key, bool has_covering_null_row,
+ MY_BITMAP *partial_match_key_parts)
+{
+ size_t pm_buff_size;
+
+ DBUG_ASSERT(strategy == PARTIAL_MATCH);
+ /*
+ Choose according to global optimizer switch. If only one of the switches is
+ 'ON', then the remaining strategy is the only possible one. The only cases
+    'ON', then the remaining strategy is the only possible one. The only cases
+    when this will be overridden are when the total size of all buffers for the
+ or if there isn't enough physical memory to allocate the buffers.
+ */
+ if (!optimizer_flag(thd, OPTIMIZER_SWITCH_PARTIAL_MATCH_ROWID_MERGE) &&
+ optimizer_flag(thd, OPTIMIZER_SWITCH_PARTIAL_MATCH_TABLE_SCAN))
+ strategy= PARTIAL_MATCH_SCAN;
+ else if
+ ( optimizer_flag(thd, OPTIMIZER_SWITCH_PARTIAL_MATCH_ROWID_MERGE) &&
+ !optimizer_flag(thd, OPTIMIZER_SWITCH_PARTIAL_MATCH_TABLE_SCAN))
+ strategy= PARTIAL_MATCH_MERGE;
+
+ /*
+ If both switches are ON, or both are OFF, we interpret that as "let the
+ optimizer decide". Perform a cost based choice between the two partial
+ matching strategies.
+ */
+ /*
+ TIMOUR: the above interpretation of the switch values could be changed to:
+ - if both are ON - let the optimizer decide,
+ - if both are OFF - do not use partial matching, therefore do not use
+ materialization in non-top-level predicates.
+ The problem with this is that we know for sure if we need partial matching
+ only after the subquery is materialized, and this is too late to revert to
+ the IN=>EXISTS strategy.
+ */
+ if (strategy == PARTIAL_MATCH)
+ {
+ /*
+ TIMOUR: Currently we use a super simplistic measure. This will be
+ addressed in a separate task.
+ */
+ if (tmp_table->file->stats.records < 100)
+ strategy= PARTIAL_MATCH_SCAN;
+ else
+ strategy= PARTIAL_MATCH_MERGE;
+ }
+
+ /* Check if there is enough memory for the rowid merge strategy. */
+ if (strategy == PARTIAL_MATCH_MERGE)
+ {
+ pm_buff_size= rowid_merge_buff_size(has_non_null_key,
+ has_covering_null_row,
+ partial_match_key_parts);
+ if (pm_buff_size > thd->variables.rowid_merge_buff_size)
+ strategy= PARTIAL_MATCH_SCAN;
+ }
+}
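
Read as a decision table, the switch handling above boils down to the following
standalone sketch (hypothetical names; it ignores the buffer-size fallback
applied further down):

#include <cstdio>

enum pm_strategy { PM_SCAN, PM_MERGE, PM_COST_BASED };

/*
  One switch ON and the other OFF picks that strategy outright; both ON or
  both OFF means "let the optimizer decide" via the cost-based choice.
*/
static pm_strategy pick_strategy(bool rowid_merge_on, bool table_scan_on)
{
  if (!rowid_merge_on && table_scan_on)
    return PM_SCAN;
  if (rowid_merge_on && !table_scan_on)
    return PM_MERGE;
  return PM_COST_BASED;
}

int main()
{
  std::printf("%d %d %d %d\n",
              pick_strategy(false, true),    /* PM_SCAN       */
              pick_strategy(true,  false),   /* PM_MERGE      */
              pick_strategy(true,  true),    /* PM_COST_BASED */
              pick_strategy(false, false));  /* PM_COST_BASED */
  return 0;
}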
+
+
+/*
+ Compute the memory size of all buffers proportional to the number of rows
+ in tmp_table.
+
+ @details
+ If the result is bigger than thd->variables.rowid_merge_buff_size, partial
+ matching via merging is not applicable.
+*/
+
+size_t subselect_hash_sj_engine::rowid_merge_buff_size(
+ bool has_non_null_key, bool has_covering_null_row,
+ MY_BITMAP *partial_match_key_parts)
+{
+ size_t buff_size; /* Total size of all buffers used by partial matching. */
+ ha_rows row_count= tmp_table->file->stats.records;
+ uint rowid_length= tmp_table->file->ref_length;
+ select_materialize_with_stats *result_sink=
+ (select_materialize_with_stats *) result;
+
+ /* Size of the subselect_rowid_merge_engine::row_num_to_rowid buffer. */
+ buff_size= row_count * rowid_length * sizeof(uchar);
+
+ if (has_non_null_key)
+ {
+ /* Add the size of Ordered_key::key_buff of the only non-NULL key. */
+ buff_size+= row_count * sizeof(rownum_t);
+ }
+
+ if (!has_covering_null_row)
+ {
+ for (uint i= 0; i < partial_match_key_parts->n_bits; i++)
+ {
+ if (!bitmap_is_set(partial_match_key_parts, i) ||
+ result_sink->get_null_count_of_col(i) == row_count)
+ continue; /* In these cases we wouldn't construct Ordered keys. */
+
+ /* Add the size of Ordered_key::key_buff */
+ buff_size+= (row_count - result_sink->get_null_count_of_col(i)) *
+ sizeof(rownum_t);
+ /* Add the size of Ordered_key::null_key */
+ buff_size+= bitmap_buffer_size(result_sink->get_max_null_of_col(i));
+ }
+ }
+
+ return buff_size;
+}
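
A worked example makes the estimate concrete. The sketch below is a standalone
illustration only: the row counts, the 8-byte rownum_t and the simplified
bitmap_buffer_size() are assumptions, not values taken from this patch.

#include <cstddef>
#include <cstdio>

/* Mirrors bitmap_buffer_size(): round up to whole 32-bit words, in bytes. */
static std::size_t bitmap_bytes(std::size_t bits)
{
  return ((bits + 31) / 32) * 4;
}

int main()
{
  const std::size_t row_count=    10000; /* rows in the materialized temp table */
  const std::size_t rowid_length= 8;     /* handler::ref_length of the table    */
  const std::size_t rownum_size=  8;     /* assumed sizeof(rownum_t)            */

  /* subselect_rowid_merge_engine::row_num_to_rowid */
  std::size_t buff= row_count * rowid_length;
  /* Ordered_key::key_buff of the single non-NULL key */
  buff+= row_count * rownum_size;

  /* Two nullable partial-match columns: (null_count, max_null_row) pairs. */
  const std::size_t nulls[2]=        {1000, 9000};
  const std::size_t max_null_row[2]= {9800, 9990};
  for (int i= 0; i < 2; i++)
  {
    buff+= (row_count - nulls[i]) * rownum_size; /* Ordered_key::key_buff */
    buff+= bitmap_bytes(max_null_row[i]);        /* Ordered_key::null_key */
  }
  std::printf("estimated rowid-merge buffers: %zu bytes\n", buff); /* 242480 */
  return 0;
}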
+
+
+/*
+ Initialize a MY_BITMAP with a buffer allocated on the current
+ memory root.
+ TIMOUR: move to bitmap C file?
+*/
+
+static my_bool
+bitmap_init_memroot(MY_BITMAP *map, uint n_bits, MEM_ROOT *mem_root)
+{
+ my_bitmap_map *bitmap_buf;
+
+ if (!(bitmap_buf= (my_bitmap_map*) alloc_root(mem_root,
+ bitmap_buffer_size(n_bits))) ||
+ bitmap_init(map, bitmap_buf, n_bits, FALSE))
+ return TRUE;
+ bitmap_clear_all(map);
+ return FALSE;
+}
+
+
+/**
Create all structures needed for IN execution that can live between PS
reexecution.
- @detail
+ @param tmp_columns the items that produce the data for the temp table
+
+ @details
- Create a temporary table to store the result of the IN subquery. The
temporary table has one hash index on all its columns.
- Create a new result sink that sends the result stream of the subquery to
the temporary table,
- - Create and initialize a new JOIN_TAB, and TABLE_REF objects to perform
- lookups into the indexed temporary table.
@notice:
Currently Item_subselect::init() already chooses and creates at parse
@@ -3256,145 +3658,210 @@ bool subselect_uniquesubquery_engine::no
bool subselect_hash_sj_engine::init_permanent(List<Item> *tmp_columns)
{
- /* The result sink where we will materialize the subquery result. */
- select_union *tmp_result_sink;
- /* The table into which the subquery is materialized. */
- TABLE *tmp_table;
- KEY *tmp_key; /* The only index on the temporary table. */
- uint tmp_key_parts; /* Number of keyparts in tmp_key. */
- Item_in_subselect *item_in= (Item_in_subselect *) item;
+ /* Options to create_tmp_table. */
+ ulonglong tmp_create_options= thd->options | TMP_TABLE_ALL_COLUMNS;
+ /* | TMP_TABLE_FORCE_MYISAM; TIMOUR: force MYISAM */
DBUG_ENTER("subselect_hash_sj_engine::init_permanent");
- /* 1. Create/initialize materialization related objects. */
+ if (bitmap_init_memroot(&non_null_key_parts, tmp_columns->elements,
+ thd->mem_root) ||
+ bitmap_init_memroot(&partial_match_key_parts, tmp_columns->elements,
+ thd->mem_root))
+ DBUG_RETURN(TRUE);
/*
Create and initialize a select result interceptor that stores the
result stream in a temporary table. The temporary table itself is
managed (created/filled/etc) internally by the interceptor.
*/
- if (!(tmp_result_sink= new select_union))
+/*
+ TIMOUR:
+ Select a more efficient result sink when we know there is no need to collect
+ data statistics.
+
+ if (strategy == COMPLETE_MATCH)
+ {
+ if (!(result= new select_union))
+ DBUG_RETURN(TRUE);
+ }
+ else if (strategy == PARTIAL_MATCH)
+ {
+ if (!(result= new select_materialize_with_stats))
+ DBUG_RETURN(TRUE);
+ }
+*/
+ if (!(result= new select_materialize_with_stats))
DBUG_RETURN(TRUE);
- if (tmp_result_sink->create_result_table(
- thd, tmp_columns, TRUE,
- thd->options | TMP_TABLE_ALL_COLUMNS,
+
+ if (((select_union*) result)->create_result_table(
+ thd, tmp_columns, TRUE, tmp_create_options,
"materialized subselect", TRUE))
DBUG_RETURN(TRUE);
- tmp_table= tmp_result_sink->table;
- tmp_key= tmp_table->key_info;
- tmp_key_parts= tmp_key->key_parts;
+ tmp_table= ((select_union*) result)->table;
/*
- If the subquery has blobs, or the total key lenght is bigger than some
- length, then the created index cannot be used for lookups and we
- can't use hash semi join. If this is the case, delete the temporary
- table since it will not be used, and tell the caller we failed to
- initialize the engine.
+    If the subquery has blobs, or the total key length is bigger than
+ some length, or the total number of key parts is more than the
+ allowed maximum (currently MAX_REF_PARTS == 16), then the created
+ index cannot be used for lookups and we can't use hash semi
+ join. If this is the case, delete the temporary table since it
+ will not be used, and tell the caller we failed to initialize the
+ engine.
*/
if (tmp_table->s->keys == 0)
{
-#ifndef DBUG_OFF
- handlerton *tmp_table_hton= tmp_table->s->db_type();
-#ifdef USE_MARIA_FOR_TMP_TABLES
- DBUG_ASSERT(tmp_table_hton == maria_hton);
-#else
- DBUG_ASSERT(tmp_table_hton == myisam_hton);
-#endif
-#endif
DBUG_ASSERT(
tmp_table->s->uniques ||
tmp_table->key_info->key_length >= tmp_table->file->max_key_length() ||
tmp_table->key_info->key_parts > tmp_table->file->max_key_parts());
free_tmp_table(thd, tmp_table);
+ tmp_table= NULL;
delete result;
result= NULL;
DBUG_RETURN(TRUE);
}
- result= tmp_result_sink;
/*
Make sure there is only one index on the temp table, and it doesn't have
the extra key part created when s->uniques > 0.
*/
- DBUG_ASSERT(tmp_table->s->keys == 1 && tmp_columns->elements == tmp_key_parts);
+ DBUG_ASSERT(tmp_table->s->keys == 1 &&
+ ((Item_in_subselect *) item)->left_expr->cols() ==
+ tmp_table->key_info->key_parts);
+
+ if (make_semi_join_conds() ||
+ /* A unique_engine is used both for complete and partial matching. */
+ !(lookup_engine= make_unique_engine()))
+ DBUG_RETURN(TRUE);
+
+ DBUG_RETURN(FALSE);
+}
- /* 2. Create/initialize execution related objects. */
+/*
+ Create an artificial condition to post-filter those rows matched by index
+ lookups that cannot be distinguished by the index lookup procedure.
- /*
- Create and initialize the JOIN_TAB that represents an index lookup
- plan operator into the materialized subquery result. Notice that:
- - this JOIN_TAB has no corresponding JOIN (and doesn't need one), and
- - here we initialize only those members that are used by
- subselect_uniquesubquery_engine, so these objects are incomplete.
- */
- if (!(tab= (JOIN_TAB*) thd->alloc(sizeof(JOIN_TAB))))
- DBUG_RETURN(TRUE);
- tab->table= tmp_table;
- tab->ref.key= 0; /* The only temp table index. */
- tab->ref.key_length= tmp_key->key_length;
- if (!(tab->ref.key_buff=
- (uchar*) thd->calloc(ALIGN_SIZE(tmp_key->key_length) * 2)) ||
- !(tab->ref.key_copy=
- (store_key**) thd->alloc((sizeof(store_key*) *
- (tmp_key_parts + 1)))) ||
- !(tab->ref.items=
- (Item**) thd->alloc(sizeof(Item*) * tmp_key_parts)))
- DBUG_RETURN(TRUE);
+ @notes
+ The need for post-filtering may occur e.g. because of
+ truncation. Prepared statements execution requires that fix_fields is
+  truncation. Prepared statement execution requires that fix_fields is
+ create a Name_resolution_context and a corresponding TABLE_LIST for
+ the temporary table for the subquery, so that all column references
+ to the materialized subquery table can be resolved correctly.
- KEY_PART_INFO *cur_key_part= tmp_key->key_part;
- store_key **ref_key= tab->ref.key_copy;
- uchar *cur_ref_buff= tab->ref.key_buff;
+ @returns
+ @retval TRUE memory allocation error occurred
+ @retval FALSE the conditions were created and resolved (fixed)
+*/
- /*
- Create an artificial condition to post-filter those rows matched by index
- lookups that cannot be distinguished by the index lookup procedure, e.g.
- because of truncation. Prepared statements execution requires that
- fix_fields is called for every execution. In order to call fix_fields we
- need to create a Name_resolution_context and a corresponding TABLE_LIST
- for the temporary table for the subquery, so that all column references
- to the materialized subquery table can be resolved correctly.
- */
- DBUG_ASSERT(cond == NULL);
- if (!(cond= new Item_cond_and))
- DBUG_RETURN(TRUE);
+bool subselect_hash_sj_engine::make_semi_join_conds()
+{
/*
Table reference for tmp_table that is used to resolve column references
(Item_fields) to columns in tmp_table.
*/
TABLE_LIST *tmp_table_ref;
+ /* Name resolution context for all tmp_table columns created below. */
+ Name_resolution_context *context;
+ Item_in_subselect *item_in= (Item_in_subselect *) item;
+
+ DBUG_ENTER("subselect_hash_sj_engine::make_semi_join_conds");
+ DBUG_ASSERT(semi_join_conds == NULL);
+
+ if (!(semi_join_conds= new Item_cond_and))
+ DBUG_RETURN(TRUE);
+
if (!(tmp_table_ref= (TABLE_LIST*) thd->alloc(sizeof(TABLE_LIST))))
DBUG_RETURN(TRUE);
tmp_table_ref->init_one_table("", "materialized subselect", TL_READ);
tmp_table_ref->table= tmp_table;
- /* Name resolution context for all tmp_table columns created below. */
- Name_resolution_context *context= new Name_resolution_context;
+ context= new Name_resolution_context;
context->init();
context->first_name_resolution_table=
context->last_name_resolution_table= tmp_table_ref;
- for (uint i= 0; i < tmp_key_parts; i++, cur_key_part++, ref_key++)
+ for (uint i= 0; i < item_in->left_expr->cols(); i++)
{
Item_func_eq *eq_cond; /* New equi-join condition for the current column. */
/* Item for the corresponding field from the materialized temp table. */
Item_field *right_col_item;
- int null_count= test(cur_key_part->field->real_maybe_null());
- tab->ref.items[i]= item_in->left_expr->element_index(i);
- if (!(right_col_item= new Item_field(thd, context, cur_key_part->field)) ||
- !(eq_cond= new Item_func_eq(tab->ref.items[i], right_col_item)) ||
- ((Item_cond_and*)cond)->add(eq_cond))
+ if (!(right_col_item= new Item_field(thd, context, tmp_table->field[i])) ||
+ !(eq_cond= new Item_func_eq(item_in->left_expr->element_index(i),
+ right_col_item)) ||
+ (((Item_cond_and*)semi_join_conds)->add(eq_cond)))
{
- delete cond;
- cond= NULL;
+ delete semi_join_conds;
+ semi_join_conds= NULL;
DBUG_RETURN(TRUE);
}
+ }
+ if (semi_join_conds->fix_fields(thd, (Item**)&semi_join_conds))
+ DBUG_RETURN(TRUE);
+
+ DBUG_RETURN(FALSE);
+}
+
+
+/**
+ Create a new uniquesubquery engine for the execution of an IN predicate.
+
+ @details
+  Create and initialize a new JOIN_TAB and TABLE_REF object to perform
+  lookups into the indexed temporary table.
+
+  @retval A new subselect_uniquesubquery_engine object
+ @retval NULL if a memory allocation error occurs
+*/
+
+subselect_uniquesubquery_engine*
+subselect_hash_sj_engine::make_unique_engine()
+{
+ Item_in_subselect *item_in= (Item_in_subselect *) item;
+ /* The only index on the temporary table. */
+ KEY *tmp_key= tmp_table->key_info;
+ /* Number of keyparts in tmp_key. */
+ uint tmp_key_parts= tmp_key->key_parts;
+ JOIN_TAB *tab;
+
+ DBUG_ENTER("subselect_hash_sj_engine::make_unique_engine");
+
+ /*
+ Create and initialize the JOIN_TAB that represents an index lookup
+ plan operator into the materialized subquery result. Notice that:
+ - this JOIN_TAB has no corresponding JOIN (and doesn't need one), and
+ - here we initialize only those members that are used by
+ subselect_uniquesubquery_engine, so these objects are incomplete.
+ */
+ if (!(tab= (JOIN_TAB*) thd->alloc(sizeof(JOIN_TAB))))
+ DBUG_RETURN(NULL);
+ tab->table= tmp_table;
+ tab->ref.key= 0; /* The only temp table index. */
+ tab->ref.key_length= tmp_key->key_length;
+ if (!(tab->ref.key_buff=
+ (uchar*) thd->calloc(ALIGN_SIZE(tmp_key->key_length) * 2)) ||
+ !(tab->ref.key_copy=
+ (store_key**) thd->alloc((sizeof(store_key*) *
+ (tmp_key_parts + 1)))) ||
+ !(tab->ref.items=
+ (Item**) thd->alloc(sizeof(Item*) * tmp_key_parts)))
+ DBUG_RETURN(NULL);
+ KEY_PART_INFO *cur_key_part= tmp_key->key_part;
+ store_key **ref_key= tab->ref.key_copy;
+ uchar *cur_ref_buff= tab->ref.key_buff;
+
+ for (uint i= 0; i < tmp_key_parts; i++, cur_key_part++, ref_key++)
+ {
+ tab->ref.items[i]= item_in->left_expr->element_index(i);
+ int null_count= test(cur_key_part->field->real_maybe_null());
*ref_key= new store_key_item(thd, cur_key_part->field,
- /* TODO:
+ /* TIMOUR:
the NULL byte is taken into account in
cur_key_part->store_length, so instead of
cur_ref_buff + test(maybe_null), we could
@@ -3409,10 +3876,8 @@ bool subselect_hash_sj_engine::init_perm
tab->ref.key_err= 1;
tab->ref.key_parts= tmp_key_parts;
- if (cond->fix_fields(thd, &cond))
- DBUG_RETURN(TRUE);
-
- DBUG_RETURN(FALSE);
+ DBUG_RETURN(new subselect_uniquesubquery_engine(thd, tab, item,
+ semi_join_conds));
}
@@ -3435,7 +3900,8 @@ bool subselect_hash_sj_engine::init_runt
Repeat name resolution for 'cond' since cond is not part of any
clause of the query, and it is not 'fixed' during JOIN::prepare.
*/
- if (cond && !cond->fixed && cond->fix_fields(thd, &cond))
+ if (semi_join_conds && !semi_join_conds->fixed &&
+ semi_join_conds->fix_fields(thd, (Item**)&semi_join_conds))
return TRUE;
/* Let our engine reuse this query plan for materialization. */
materialize_join= materialize_engine->join;
@@ -3446,32 +3912,53 @@ bool subselect_hash_sj_engine::init_runt
subselect_hash_sj_engine::~subselect_hash_sj_engine()
{
+ delete lookup_engine;
delete result;
- if (tab)
- free_tmp_table(thd, tab->table);
+ if (tmp_table)
+ free_tmp_table(thd, tmp_table);
}
/**
Cleanup performed after each PS execution.
- @detail
+ @details
Called in the end of JOIN::prepare for PS from Item_subselect::cleanup.
*/
void subselect_hash_sj_engine::cleanup()
{
+ enum_engine_type lookup_engine_type= lookup_engine->engine_type();
is_materialized= FALSE;
- result->cleanup(); /* Resets the temp table as well. */
+ bitmap_clear_all(&non_null_key_parts);
+ bitmap_clear_all(&partial_match_key_parts);
+ count_partial_match_columns= 0;
+ count_null_only_columns= 0;
+ strategy= UNDEFINED;
materialize_engine->cleanup();
- subselect_uniquesubquery_engine::cleanup();
+ if (lookup_engine_type == TABLE_SCAN_ENGINE ||
+ lookup_engine_type == ROWID_MERGE_ENGINE)
+ {
+ subselect_engine *inner_lookup_engine;
+ inner_lookup_engine=
+ ((subselect_partial_match_engine*) lookup_engine)->lookup_engine;
+ /*
+ Partial match engines are recreated for each PS execution inside
+ subselect_hash_sj_engine::exec().
+ */
+ delete lookup_engine;
+ lookup_engine= inner_lookup_engine;
+ }
+ DBUG_ASSERT(lookup_engine->engine_type() == UNIQUESUBQUERY_ENGINE);
+ lookup_engine->cleanup();
+ result->cleanup(); /* Resets the temp table as well. */
}
/**
Execute a subquery IN predicate via materialization.
- @detail
+ @details
If needed, materialize the subquery into a temporary table, then
compute the predicate via a lookup into this table.
@@ -3482,6 +3969,9 @@ void subselect_hash_sj_engine::cleanup()
int subselect_hash_sj_engine::exec()
{
Item_in_subselect *item_in= (Item_in_subselect *) item;
+ SELECT_LEX *save_select= thd->lex->current_select;
+ subselect_partial_match_engine *pm_engine= NULL;
+ int res= 0;
DBUG_ENTER("subselect_hash_sj_engine::exec");
@@ -3489,56 +3979,126 @@ int subselect_hash_sj_engine::exec()
Optimize and materialize the subquery during the first execution of
the subquery predicate.
*/
- if (!is_materialized)
- {
- int res= 0;
- SELECT_LEX *save_select= thd->lex->current_select;
- thd->lex->current_select= materialize_engine->select_lex;
- if ((res= materialize_join->optimize()))
- goto err; /* purecov: inspected */
- materialize_join->exec();
- if ((res= test(materialize_join->error || thd->is_fatal_error)))
- goto err;
-
- /*
- TODO:
- - Unlock all subquery tables as we don't need them. To implement this
- we need to add new functionality to JOIN::join_free that can unlock
- all tables in a subquery (and all its subqueries).
- - The temp table used for grouping in the subquery can be freed
- immediately after materialization (yet it's done together with
- unlocking).
- */
- is_materialized= TRUE;
- /*
- If the subquery returned no rows, the temporary table is empty, so we know
- directly that the result of IN is FALSE. We first update the table
- statistics, then we test if the temporary table for the query result is
- empty.
- */
- tab->table->file->info(HA_STATUS_VARIABLE);
- if (!tab->table->file->stats.records)
- {
- empty_result_set= TRUE;
- item_in->value= FALSE;
- /* TODO: check we need this: item_in->null_value= FALSE; */
- DBUG_RETURN(FALSE);
- }
- /* Set tmp_param only if its usable, i.e. tmp_param->copy_field != NULL. */
- tmp_param= &(item_in->unit->outer_select()->join->tmp_table_param);
- if (tmp_param && !tmp_param->copy_field)
- tmp_param= NULL;
+ thd->lex->current_select= materialize_engine->select_lex;
+ if ((res= materialize_join->optimize()))
+ goto err; /* purecov: inspected */
+ DBUG_ASSERT(!is_materialized); /* We should materialize only once. */
+ materialize_join->exec();
+ if ((res= test(materialize_join->error || thd->is_fatal_error)))
+ goto err;
-err:
- thd->lex->current_select= save_select;
- if (res)
- DBUG_RETURN(res);
+ /*
+ TODO:
+ - Unlock all subquery tables as we don't need them. To implement this
+ we need to add new functionality to JOIN::join_free that can unlock
+ all tables in a subquery (and all its subqueries).
+ - The temp table used for grouping in the subquery can be freed
+ immediately after materialization (yet it's done together with
+ unlocking).
+ */
+ is_materialized= TRUE;
+ /*
+ If the subquery returned no rows, the temporary table is empty, so we know
+ directly that the result of IN is FALSE. We first update the table
+ statistics, then we test if the temporary table for the query result is
+ empty.
+ */
+ tmp_table->file->info(HA_STATUS_VARIABLE);
+ if (!tmp_table->file->stats.records)
+ {
+ item_in->value= FALSE;
+ /* The value of IN will not change during this execution. */
+ item_in->is_constant= TRUE;
+ item_in->set_first_execution();
+ /* TIMOUR: check if we need this: item_in->null_value= FALSE; */
+ DBUG_RETURN(FALSE);
}
/*
- Lookup the left IN operand in the hash index of the materialized subquery.
+ TIMOUR: The schema-based analysis for partial matching can be done once for
+ prepared statement and remembered. It is done here to remove the need to
+ save/restore all related variables between each re-execution, thus making
+ the code simpler.
*/
- DBUG_RETURN(subselect_uniquesubquery_engine::exec());
+ strategy= get_strategy_using_schema();
+ /* This call may discover that we don't need partial matching at all. */
+ strategy= get_strategy_using_data();
+ if (strategy == PARTIAL_MATCH)
+ {
+ uint count_pm_keys; /* Total number of keys needed for partial matching. */
+ MY_BITMAP *nn_key_parts; /* The key parts of the only non-NULL index. */
+ uint covering_null_row_width;
+ select_materialize_with_stats *result_sink=
+ (select_materialize_with_stats *) result;
+
+ nn_key_parts= (count_partial_match_columns < tmp_table->s->fields) ?
+ &non_null_key_parts : NULL;
+
+ if (result_sink->get_max_nulls_in_row() ==
+ tmp_table->s->fields -
+ (nn_key_parts ? bitmap_bits_set(nn_key_parts) : 0))
+ covering_null_row_width= result_sink->get_max_nulls_in_row();
+ else
+ covering_null_row_width= 0;
+
+ if (covering_null_row_width)
+ count_pm_keys= nn_key_parts ? 1 : 0;
+ else
+ count_pm_keys= count_partial_match_columns - count_null_only_columns +
+ (nn_key_parts ? 1 : 0);
+
+ choose_partial_match_strategy(test(nn_key_parts),
+ test(covering_null_row_width),
+ &partial_match_key_parts);
+ DBUG_ASSERT(strategy == PARTIAL_MATCH_MERGE ||
+ strategy == PARTIAL_MATCH_SCAN);
+ if (strategy == PARTIAL_MATCH_MERGE)
+ {
+ pm_engine=
+ new subselect_rowid_merge_engine((subselect_uniquesubquery_engine*)
+ lookup_engine, tmp_table,
+ count_pm_keys,
+ covering_null_row_width,
+ item, result,
+ semi_join_conds->argument_list());
+ if (!pm_engine ||
+ ((subselect_rowid_merge_engine*) pm_engine)->
+ init(nn_key_parts, &partial_match_key_parts))
+ {
+ /*
+ The call to init() would fail if there was not enough memory to allocate
+ all buffers for the rowid merge strategy. In this case revert to table
+ scanning which doesn't need any big buffers.
+ */
+ delete pm_engine;
+ pm_engine= NULL;
+ strategy= PARTIAL_MATCH_SCAN;
+ }
+ }
+
+ if (strategy == PARTIAL_MATCH_SCAN)
+ {
+ if (!(pm_engine=
+ new subselect_table_scan_engine((subselect_uniquesubquery_engine*)
+ lookup_engine, tmp_table,
+ item, result,
+ semi_join_conds->argument_list(),
+ covering_null_row_width)))
+ {
+ /* This is an irrecoverable error. */
+ res= 1;
+ goto err;
+ }
+ }
+ }
+
+ if (pm_engine)
+ lookup_engine= pm_engine;
+ item_in->change_engine(lookup_engine);
+
+err:
+ thd->lex->current_select= save_select;
+ DBUG_RETURN(res);
}
@@ -3551,10 +4111,1008 @@ void subselect_hash_sj_engine::print(Str
str->append(STRING_WITH_LEN(" <materialize> ("));
materialize_engine->print(str, query_type);
str->append(STRING_WITH_LEN(" ), "));
- if (tab)
- subselect_uniquesubquery_engine::print(str, query_type);
+
+ if (lookup_engine)
+ lookup_engine->print(str, query_type);
else
str->append(STRING_WITH_LEN(
- "<the access method for lookups is not yet created>"
+ "<engine selected at execution time>"
));
}
+
+void subselect_hash_sj_engine::fix_length_and_dec(Item_cache** row)
+{
+ DBUG_ASSERT(FALSE);
+}
+
+void subselect_hash_sj_engine::exclude()
+{
+ DBUG_ASSERT(FALSE);
+}
+
+bool subselect_hash_sj_engine::no_tables()
+{
+ DBUG_ASSERT(FALSE);
+ return FALSE;
+}
+
+bool subselect_hash_sj_engine::change_result(Item_subselect *si,
+ select_result_interceptor *res)
+{
+ DBUG_ASSERT(FALSE);
+ return TRUE;
+}
+
+
+Ordered_key::Ordered_key(uint keyid_arg, TABLE *tbl_arg, Item *search_key_arg,
+ ha_rows null_count_arg, ha_rows min_null_row_arg,
+ ha_rows max_null_row_arg, uchar *row_num_to_rowid_arg)
+ : keyid(keyid_arg), tbl(tbl_arg), search_key(search_key_arg),
+ row_num_to_rowid(row_num_to_rowid_arg), null_count(null_count_arg)
+{
+ DBUG_ASSERT(tbl->file->stats.records > null_count);
+ key_buff_elements= tbl->file->stats.records - null_count;
+ cur_key_idx= HA_POS_ERROR;
+
+ DBUG_ASSERT((null_count && min_null_row_arg && max_null_row_arg) ||
+ (!null_count && !min_null_row_arg && !max_null_row_arg));
+ if (null_count)
+ {
+ /* The counters are 1-based, for key access we need 0-based indexes. */
+ min_null_row= min_null_row_arg - 1;
+ max_null_row= max_null_row_arg - 1;
+ }
+ else
+ min_null_row= max_null_row= 0;
+}
+
+
+Ordered_key::~Ordered_key()
+{
+ my_free((char*) key_buff, MYF(0));
+ bitmap_free(&null_key);
+}
+
+
+/*
+ Cleanup that needs to be done for each PS (re)execution.
+*/
+
+void Ordered_key::cleanup()
+{
+ /*
+ Currently these keys are recreated for each PS re-execution, thus
+ there is nothing to cleanup, the whole object goes away after execution
+ is over. All handler related initialization/deinitialization is done by
+ the parent subselect_rowid_merge_engine object.
+ */
+}
+
+
+/*
+ Initialize a multi-column index.
+*/
+
+bool Ordered_key::init(MY_BITMAP *columns_to_index)
+{
+ THD *thd= tbl->in_use;
+ uint cur_key_col= 0;
+ Item_field *cur_tmp_field;
+ Item_func_lt *fn_less_than;
+
+ key_column_count= bitmap_bits_set(columns_to_index);
+
+ // TIMOUR: check for mem allocation err, revert to scan
+
+ key_columns= (Item_field**) thd->alloc(key_column_count *
+ sizeof(Item_field*));
+ compare_pred= (Item_func_lt**) thd->alloc(key_column_count *
+ sizeof(Item_func_lt*));
+
+ for (uint i= 0; i < columns_to_index->n_bits; i++)
+ {
+ if (!bitmap_is_set(columns_to_index, i))
+ continue;
+ cur_tmp_field= new Item_field(tbl->field[i]);
+ /* Create the predicate (tmp_column[i] < outer_ref[i]). */
+ fn_less_than= new Item_func_lt(cur_tmp_field,
+ search_key->element_index(i));
+ fn_less_than->fix_fields(thd, (Item**) &fn_less_than);
+ key_columns[cur_key_col]= cur_tmp_field;
+ compare_pred[cur_key_col]= fn_less_than;
+ ++cur_key_col;
+ }
+
+ if (alloc_keys_buffers())
+ {
+ /* TIMOUR revert to partial match via table scan. */
+ return TRUE;
+ }
+ return FALSE;
+}
+
+
+/*
+ Initialize a single-column index.
+*/
+
+bool Ordered_key::init(int col_idx)
+{
+ THD *thd= tbl->in_use;
+
+ key_column_count= 1;
+
+ // TIMOUR: check for mem allocation err, revert to scan
+
+ key_columns= (Item_field**) thd->alloc(sizeof(Item_field*));
+ compare_pred= (Item_func_lt**) thd->alloc(sizeof(Item_func_lt*));
+
+ key_columns[0]= new Item_field(tbl->field[col_idx]);
+ /* Create the predicate (tmp_column[i] < outer_ref[i]). */
+ compare_pred[0]= new Item_func_lt(key_columns[0],
+ search_key->element_index(col_idx));
+ compare_pred[0]->fix_fields(thd, (Item**)&compare_pred[0]);
+
+ if (alloc_keys_buffers())
+ {
+ /* TIMOUR revert to partial match via table scan. */
+ return TRUE;
+ }
+ return FALSE;
+}
+
+
+/*
+ Allocate the buffers for both the row number, and the NULL-bitmap indexes.
+*/
+
+bool Ordered_key::alloc_keys_buffers()
+{
+ DBUG_ASSERT(key_buff_elements > 0);
+
+ if (!(key_buff= (rownum_t*) my_malloc(key_buff_elements * sizeof(rownum_t),
+ MYF(MY_WME))))
+ return TRUE;
+
+ /*
+ TIMOUR: it is enough to create bitmaps with size
+ (max_null_row - min_null_row), and then use min_null_row as
+ lookup offset.
+ */
+ /* Notice that max_null_row is max array index, we need count, so +1. */
+ if (bitmap_init(&null_key, NULL, max_null_row + 1, FALSE))
+ return TRUE;
+
+ cur_key_idx= HA_POS_ERROR;
+
+ return FALSE;
+}
+
+
+/*
+ Quick sort comparison function that compares two rows of the same table
+  identified by their row numbers.
+
+  @retval -1  if the row referenced by a sorts before the row referenced by b
+  @retval  0  if the two rows are equal on the indexed columns
+  @retval +1  if the row referenced by a sorts after the row referenced by b
+*/
+
+int
+Ordered_key::cmp_keys_by_row_data(ha_rows a, ha_rows b)
+{
+ uchar *rowid_a, *rowid_b;
+ int error, cmp_res;
+ /* The length in bytes of the rowids (positions) of tmp_table. */
+ uint rowid_length= tbl->file->ref_length;
+
+ if (a == b)
+ return 0;
+ /* Get the corresponding rowids. */
+ rowid_a= row_num_to_rowid + a * rowid_length;
+ rowid_b= row_num_to_rowid + b * rowid_length;
+ /* Fetch the rows for comparison. */
+ error= tbl->file->ha_rnd_pos(tbl->record[0], rowid_a);
+ DBUG_ASSERT(!error);
+ error= tbl->file->ha_rnd_pos(tbl->record[1], rowid_b);
+ DBUG_ASSERT(!error);
+ /*
+ Compare the two rows by the corresponding values of the indexed
+ columns.
+ */
+ for (uint i= 0; i < key_column_count; i++)
+ {
+ Field *cur_field= key_columns[i]->field;
+ if ((cmp_res= cur_field->cmp_offset(tbl->s->rec_buff_length)))
+ return (cmp_res > 0 ? 1 : -1);
+ }
+ return 0;
+}
+
+
+int
+Ordered_key::cmp_keys_by_row_data_and_rownum(Ordered_key *key,
+ rownum_t* a, rownum_t* b)
+{
+ /* The result of comparing the two keys according to their row data. */
+ int cmp_row_res= key->cmp_keys_by_row_data(*a, *b);
+ if (cmp_row_res)
+ return cmp_row_res;
+ return (*a < *b) ? -1 : (*a > *b) ? 1 : 0;
+}
+
+
+void Ordered_key::sort_keys()
+{
+ my_qsort2(key_buff, key_buff_elements, sizeof(rownum_t),
+ (qsort2_cmp) &cmp_keys_by_row_data_and_rownum, (void*) this);
+ /* Invalidate the current row position. */
+ cur_key_idx= HA_POS_ERROR;
+}
+
+
+/*
+ The fraction of rows that do not contain NULL in the columns indexed by
+ this key.
+
+ @retval 1 if there are no NULLs
+ @retval 0 if only NULLs
+*/
+
+double Ordered_key::null_selectivity()
+{
+ /* We should not be processing empty tables. */
+ DBUG_ASSERT(tbl->file->stats.records);
+ return (1 - (double) null_count / (double) tbl->file->stats.records);
+}
+
+
+/*
+ Compare the value(s) of the current key in 'search_key' with the
+ data of the current table record.
+
+ @notes The comparison result follows from the way compare_pred
+ is created in Ordered_key::init. Currently compare_pred compares
+  a field of the current row with the corresponding Item that
+ contains the search key.
+
+ @param row_num Number of the row (not index in the key_buff array)
+
+ @retval -1 if (current row < search_key)
+ @retval 0 if (current row == search_key)
+ @retval +1 if (current row > search_key)
+*/
+
+int Ordered_key::cmp_key_with_search_key(rownum_t row_num)
+{
+ /* The length in bytes of the rowids (positions) of tmp_table. */
+ uint rowid_length= tbl->file->ref_length;
+ uchar *cur_rowid= row_num_to_rowid + row_num * rowid_length;
+ int error, cmp_res;
+
+ error= tbl->file->ha_rnd_pos(tbl->record[0], cur_rowid);
+ DBUG_ASSERT(!error);
+
+ for (uint i= 0; i < key_column_count; i++)
+ {
+ cmp_res= compare_pred[i]->get_comparator()->compare();
+ /* Unlike Arg_comparator::compare_row() here there should be no NULLs. */
+ DBUG_ASSERT(!compare_pred[i]->null_value);
+ if (cmp_res)
+ return (cmp_res > 0 ? 1 : -1);
+ }
+ return 0;
+}
+
+
+/*
+ Find a key in a sorted array of keys via binary search.
+
+ see create_subq_in_equalities()
+*/
+
+bool Ordered_key::lookup()
+{
+ DBUG_ASSERT(key_buff_elements);
+
+ ha_rows lo= 0;
+ ha_rows hi= key_buff_elements - 1;
+ ha_rows mid;
+ int cmp_res;
+
+ while (lo <= hi)
+ {
+ mid= lo + (hi - lo) / 2;
+ cmp_res= cmp_key_with_search_key(key_buff[mid]);
+ /*
+      In order to find the minimum match, check if the previous element is
+ equal or smaller than the found one. If equal, we need to search further
+ to the left.
+ */
+ if (!cmp_res && mid > 0)
+ cmp_res= !cmp_key_with_search_key(key_buff[mid - 1]) ? 1 : 0;
+
+ if (cmp_res == -1)
+ {
+ /* row[mid] < search_key */
+ lo= mid + 1;
+ }
+ else if (cmp_res == 1)
+ {
+ /* row[mid] > search_key */
+ if (!mid)
+ goto not_found;
+ hi= mid - 1;
+ }
+ else
+ {
+ /* row[mid] == search_key */
+ cur_key_idx= mid;
+ return TRUE;
+ }
+ }
+not_found:
+ cur_key_idx= HA_POS_ERROR;
+ return FALSE;
+}
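
lookup() must land on the left-most of possibly several equal keys, so that
next_same() can then walk the contiguous run of duplicates. Below is a minimal
standalone sketch of that idea over a plain sorted array of ints; it is a
simplification, since the real code compares whole rows fetched by rowid and
uses the previous-element check instead of narrowing the range further.

#include <cstdio>
#include <vector>

/* Returns the index of the first element equal to 'key', or -1 if none. */
static long leftmost_match(const std::vector<int> &sorted, int key)
{
  long lo= 0, hi= (long) sorted.size() - 1, found= -1;
  while (lo <= hi)
  {
    long mid= lo + (hi - lo) / 2;
    if (sorted[mid] < key)
      lo= mid + 1;                 /* everything up to mid is too small */
    else if (sorted[mid] > key)
      hi= mid - 1;                 /* everything from mid on is too big */
    else
    {
      found= mid;                  /* remember the match ...            */
      hi= mid - 1;                 /* ... but keep searching to the left */
    }
  }
  return found;
}

int main()
{
  std::vector<int> v= {1, 3, 3, 3, 7, 9};
  std::printf("%ld\n", leftmost_match(v, 3));   /* prints 1  */
  std::printf("%ld\n", leftmost_match(v, 5));   /* prints -1 */
  return 0;
}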
+
+
+/*
+ Move the current index pointer to the next key with the same column
+ values as the current key. Since the index is sorted, all such keys
+ are contiguous.
+*/
+
+bool Ordered_key::next_same()
+{
+ DBUG_ASSERT(key_buff_elements);
+
+ if (cur_key_idx < key_buff_elements - 1)
+ {
+ /*
+ TIMOUR:
+ The below is quite inefficient, since as a result we will fetch every
+ row (except the last one) twice. There must be a more efficient way,
+ e.g. swapping record[0] and record[1], and reading only the new record.
+ */
+ if (!cmp_keys_by_row_data(key_buff[cur_key_idx], key_buff[cur_key_idx + 1]))
+ {
+ ++cur_key_idx;
+ return TRUE;
+ }
+ }
+ return FALSE;
+}
+
+
+void Ordered_key::print(String *str)
+{
+ uint i;
+ str->append("{idx=");
+ str->qs_append(keyid);
+ str->append(", (");
+ for (i= 0; i < key_column_count - 1; i++)
+ {
+ str->append(key_columns[i]->field->field_name);
+ str->append(", ");
+ }
+ str->append(key_columns[i]->field->field_name);
+ str->append("), ");
+
+ str->append("null_bitmap: (bits=");
+ str->qs_append(null_key.n_bits);
+ str->append(", nulls= ");
+ str->qs_append((double)null_count);
+ str->append(", min_null= ");
+ str->qs_append((double)min_null_row);
+ str->append(", max_null= ");
+ str->qs_append((double)max_null_row);
+ str->append("), ");
+
+ str->append('}');
+}
+
+
+subselect_partial_match_engine::subselect_partial_match_engine(
+ subselect_uniquesubquery_engine *engine_arg,
+ TABLE *tmp_table_arg, Item_subselect *item_arg,
+ select_result_interceptor *result_arg,
+ List<Item> *equi_join_conds_arg,
+ uint covering_null_row_width_arg)
+ :subselect_engine(item_arg, result_arg),
+ tmp_table(tmp_table_arg), lookup_engine(engine_arg),
+ equi_join_conds(equi_join_conds_arg),
+ covering_null_row_width(covering_null_row_width_arg)
+{}
+
+
+int subselect_partial_match_engine::exec()
+{
+ Item_in_subselect *item_in= (Item_in_subselect *) item;
+ int res;
+
+ /* Try to find a matching row by index lookup. */
+ res= lookup_engine->copy_ref_key_simple();
+ if (res == -1)
+ {
+ /* The result is FALSE based on the outer reference. */
+ item_in->value= 0;
+ item_in->null_value= 0;
+ return 0;
+ }
+ else if (res == 0)
+ {
+ /* Search for a complete match. */
+ if ((res= lookup_engine->index_lookup()))
+ {
+      /* An error occurred during lookup(). */
+ item_in->value= 0;
+ item_in->null_value= 0;
+ return res;
+ }
+ else if (item_in->value)
+ {
+ /*
+ A complete match was found, the result of IN is TRUE.
+ Notice: (this->item == lookup_engine->item)
+ */
+ return 0;
+ }
+ }
+
+ if (covering_null_row_width == tmp_table->s->fields)
+ {
+ /*
+      If there is a NULL-only row that covers all columns, the result of IN
+ is UNKNOWN.
+ */
+ item_in->value= 0;
+ /*
+ TIMOUR: which one is the right way to propagate an UNKNOWN result?
+ Should we also set empty_result_set= FALSE; ???
+ */
+ //item_in->was_null= 1;
+ item_in->null_value= 1;
+ return 0;
+ }
+
+ /*
+ There is no complete match. Look for a partial match (UNKNOWN result), or
+ no match (FALSE).
+ */
+ if (tmp_table->file->inited)
+ tmp_table->file->ha_index_end();
+
+ if (partial_match())
+ {
+ /* The result of IN is UNKNOWN. */
+ item_in->value= 0;
+ /*
+ TIMOUR: which one is the right way to propagate an UNKNOWN result?
+ Should we also set empty_result_set= FALSE; ???
+ */
+ //item_in->was_null= 1;
+ item_in->null_value= 1;
+ }
+ else
+ {
+ /* The result of IN is FALSE. */
+ item_in->value= 0;
+ /*
+ TIMOUR: which one is the right way to propagate an UNKNOWN result?
+ Should we also set empty_result_set= FALSE; ???
+ */
+ //item_in->was_null= 0;
+ item_in->null_value= 0;
+ }
+
+ return 0;
+}
+
+
+void subselect_partial_match_engine::print(String *str,
+ enum_query_type query_type)
+{
+ /*
+ Should never be called as the actual engine cannot be known at query
+ optimization time.
+ */
+ DBUG_ASSERT(FALSE);
+}
+
+
+/*
+  @param non_null_key_parts      Key parts of the only non-NULL key, if any.
+ @param partial_match_key_parts A union of all single-column NULL key parts.
+ @param count_partial_match_columns Number of NULL keyparts (set bits above).
+
+ @retval FALSE the engine was initialized successfully
+ @retval TRUE there was some (memory allocation) error during initialization,
+ such errors should be interpreted as revert to other strategy
+*/
+
+bool
+subselect_rowid_merge_engine::init(MY_BITMAP *non_null_key_parts,
+ MY_BITMAP *partial_match_key_parts)
+{
+ /* The length in bytes of the rowids (positions) of tmp_table. */
+ uint rowid_length= tmp_table->file->ref_length;
+ ha_rows row_count= tmp_table->file->stats.records;
+ rownum_t cur_rownum= 0;
+ select_materialize_with_stats *result_sink=
+ (select_materialize_with_stats *) result;
+ uint cur_keyid= 0;
+ Item_in_subselect *item_in= (Item_in_subselect*) item;
+ int error;
+
+ if (keys_count == 0)
+ {
+ /* There is nothing to initialize, we will only do regular lookups. */
+ return FALSE;
+ }
+
+ DBUG_ASSERT(!covering_null_row_width || (covering_null_row_width &&
+ keys_count == 1 &&
+ non_null_key_parts));
+ /*
+ Allocate buffers to hold the merged keys and the mapping between rowids and
+ row numbers.
+ */
+ if (!(merge_keys= (Ordered_key**) thd->alloc(keys_count *
+ sizeof(Ordered_key*))) ||
+ !(row_num_to_rowid= (uchar*) my_malloc(row_count * rowid_length *
+ sizeof(uchar), MYF(MY_WME))))
+ return TRUE;
+
+ /* Create the only non-NULL key if there is any. */
+ if (non_null_key_parts)
+ {
+ non_null_key= new Ordered_key(cur_keyid, tmp_table, item_in->left_expr,
+ 0, 0, 0, row_num_to_rowid);
+ if (non_null_key->init(non_null_key_parts))
+ return TRUE;
+ merge_keys[cur_keyid]= non_null_key;
+ merge_keys[cur_keyid]->first();
+ ++cur_keyid;
+ }
+
+ /*
+ If there is a covering NULL row, the only key that is needed is the
+ only non-NULL key that is already created above. We create keys on
+ NULL-able columns only if there is no covering NULL row.
+ */
+ if (!covering_null_row_width)
+ {
+ if (bitmap_init_memroot(&matching_keys, keys_count, thd->mem_root) ||
+ bitmap_init_memroot(&matching_outer_cols, keys_count, thd->mem_root) ||
+ bitmap_init_memroot(&null_only_columns, keys_count, thd->mem_root))
+ return TRUE;
+
+ /*
+ Create one single-column NULL-key for each column in
+ partial_match_key_parts.
+ */
+ for (uint i= 0; i < partial_match_key_parts->n_bits; i++)
+ {
+ if (!bitmap_is_set(partial_match_key_parts, i))
+ continue;
+
+ if (result_sink->get_null_count_of_col(i) == row_count)
+ bitmap_set_bit(&null_only_columns, cur_keyid);
+ else
+ {
+ merge_keys[cur_keyid]= new Ordered_key(
+ cur_keyid, tmp_table,
+ item_in->left_expr->element_index(i),
+ result_sink->get_null_count_of_col(i),
+ result_sink->get_min_null_of_col(i),
+ result_sink->get_max_null_of_col(i),
+ row_num_to_rowid);
+ if (merge_keys[cur_keyid]->init(i))
+ return TRUE;
+ merge_keys[cur_keyid]->first();
+ }
+ ++cur_keyid;
+ }
+ }
+
+ /* Populate the indexes with data from the temporary table. */
+ tmp_table->file->ha_rnd_init(1);
+ tmp_table->file->extra_opt(HA_EXTRA_CACHE,
+ current_thd->variables.read_buff_size);
+ tmp_table->null_row= 0;
+ while (TRUE)
+ {
+ error= tmp_table->file->ha_rnd_next(tmp_table->record[0]);
+ if (error == HA_ERR_RECORD_DELETED)
+ {
+ /* We get this for duplicate records that should not be in tmp_table. */
+ continue;
+ }
+ /*
+ This is a temp table that we fully own, there should be no other
+ cause to stop the iteration than EOF.
+ */
+ DBUG_ASSERT(!error || error == HA_ERR_END_OF_FILE);
+ if (error == HA_ERR_END_OF_FILE)
+ {
+ DBUG_ASSERT(cur_rownum == tmp_table->file->stats.records);
+ break;
+ }
+
+ /*
+ Save the position of this record in the row_num -> rowid mapping.
+ */
+ tmp_table->file->position(tmp_table->record[0]);
+ memcpy(row_num_to_rowid + cur_rownum * rowid_length,
+ tmp_table->file->ref, rowid_length);
+
+ /* Add the current row number to the corresponding keys. */
+ if (non_null_key)
+ {
+ /* By definition there are no NULLs in the non-NULL key. */
+ non_null_key->add_key(cur_rownum);
+ }
+
+ for (uint i= (non_null_key ? 1 : 0); i < keys_count; i++)
+ {
+ /*
+        Check if the first and only indexed column contains NULL in the current
+ row, and add the row number to the corresponding key.
+ */
+ if (tmp_table->field[merge_keys[i]->get_field_idx(0)]->is_null())
+ merge_keys[i]->set_null(cur_rownum);
+ else
+ merge_keys[i]->add_key(cur_rownum);
+ }
+ ++cur_rownum;
+ }
+
+ tmp_table->file->ha_rnd_end();
+
+ /* Sort all the keys by their NULL selectivity. */
+ my_qsort(merge_keys, keys_count, sizeof(Ordered_key*),
+ (qsort_cmp) cmp_keys_by_null_selectivity);
+
+ /* Sort the keys in each of the indexes. */
+ for (uint i= 0; i < keys_count; i++)
+ merge_keys[i]->sort_keys();
+
+ if (init_queue(&pq, keys_count, 0, FALSE,
+ subselect_rowid_merge_engine::cmp_keys_by_cur_rownum, NULL))
+ return TRUE;
+
+ return FALSE;
+}
+
+
+subselect_rowid_merge_engine::~subselect_rowid_merge_engine()
+{
+ /* None of the resources below is allocated if there are no ordered keys. */
+ if (keys_count)
+ {
+ my_free((char*) row_num_to_rowid, MYF(0));
+ for (uint i= 0; i < keys_count; i++)
+ delete merge_keys[i];
+ delete_queue(&pq);
+ if (tmp_table->file->inited == handler::RND)
+ tmp_table->file->ha_rnd_end();
+ }
+}
+
+
+void subselect_rowid_merge_engine::cleanup()
+{
+}
+
+
+/*
+ Quick sort comparison function to compare keys in order of decreasing bitmap
+ selectivity, so that the most selective keys come first.
+
+ @param k1 first key to compare
+ @param k2 second key to compare
+
+ @retval 1 if k1 is less selective than k2
+ @retval 0 if k1 is equally selective as k2
+ @retval -1 if k1 is more selective than k2
+*/
+
+int
+subselect_rowid_merge_engine::cmp_keys_by_null_selectivity(Ordered_key **k1,
+ Ordered_key **k2)
+{
+ double k1_sel= (*k1)->null_selectivity();
+ double k2_sel= (*k2)->null_selectivity();
+ if (k1_sel < k2_sel)
+ return 1;
+ if (k1_sel > k2_sel)
+ return -1;
+ return 0;
+}
+
+
+/*
+  Quick sort-style comparison function used to order the priority queue of
+  Ordered_key objects by the row number at their current position.
+*/
+
+int
+subselect_rowid_merge_engine::cmp_keys_by_cur_rownum(void *arg,
+ uchar *k1, uchar *k2)
+{
+ rownum_t r1= ((Ordered_key*) k1)->current();
+ rownum_t r2= ((Ordered_key*) k2)->current();
+
+ return (r1 < r2) ? -1 : (r1 > r2) ? 1 : 0;
+}
+
+
+/*
+ Check if certain table row contains a NULL in all columns for which there is
+ no match in the corresponding value index.
+
+ @retval TRUE if a NULL row exists
+ @retval FALSE otherwise
+*/
+
+bool subselect_rowid_merge_engine::test_null_row(rownum_t row_num)
+{
+ Ordered_key *cur_key;
+ uint cur_id;
+ for (uint i = 0; i < keys_count; i++)
+ {
+ cur_key= merge_keys[i];
+ cur_id= cur_key->get_keyid();
+ if (bitmap_is_set(&matching_keys, cur_id))
+ {
+ /*
+        The key 'i' (with id 'cur_id') already matches a value in row 'row_num',
+ thus we skip it as it can't possibly match a NULL.
+ */
+ continue;
+ }
+ if (!cur_key->is_null(row_num))
+ return FALSE;
+ }
+ return TRUE;
+}
+
+
+/*
+ @retval TRUE there is a partial match (UNKNOWN)
+ @retval FALSE there is no match at all (FALSE)
+*/
+
+bool subselect_rowid_merge_engine::partial_match()
+{
+ Ordered_key *min_key; /* Key that contains the current minimum position. */
+ rownum_t min_row_num; /* Current row number of min_key. */
+ Ordered_key *cur_key;
+ rownum_t cur_row_num;
+ uint count_nulls_in_search_key= 0;
+ bool res= FALSE;
+
+ /* If there is a non-NULL key, it must be the first key in the keys array. */
+ DBUG_ASSERT(!non_null_key || (non_null_key && merge_keys[0] == non_null_key));
+
+ /* All data accesses during execution are via handler::ha_rnd_pos() */
+ tmp_table->file->ha_rnd_init(0);
+
+ /* Check if there is a match for the columns of the only non-NULL key. */
+ if (non_null_key && !non_null_key->lookup())
+ {
+ res= FALSE;
+ goto end;
+ }
+
+ /*
+ If there is a NULL (sub)row that covers all NULL-able columns,
+    then there is a guaranteed partial match, and we don't need to search
+ for the matching row.
+ */
+ if (covering_null_row_width)
+ {
+ res= TRUE;
+ goto end;
+ }
+
+ if (non_null_key)
+ queue_insert(&pq, (uchar *) non_null_key);
+ /*
+ Do not add the non_null_key, since it was already processed above.
+ */
+ bitmap_clear_all(&matching_outer_cols);
+ for (uint i= test(non_null_key); i < keys_count; i++)
+ {
+ DBUG_ASSERT(merge_keys[i]->get_column_count() == 1);
+ if (merge_keys[i]->get_search_key(0)->is_null())
+ {
+ ++count_nulls_in_search_key;
+ bitmap_set_bit(&matching_outer_cols, merge_keys[i]->get_keyid());
+ }
+ else if (merge_keys[i]->lookup())
+ queue_insert(&pq, (uchar *) merge_keys[i]);
+ }
+
+ /*
+ If the outer reference consists of only NULLs, or if it has NULLs in all
+ nullable columns, the result is UNKNOWN.
+ */
+ if (count_nulls_in_search_key ==
+ ((Item_in_subselect *) item)->left_expr->cols() -
+ (non_null_key ? non_null_key->get_column_count() : 0))
+ {
+ res= TRUE;
+ goto end;
+ }
+
+ /*
+ If there is no NULL (sub)row that covers all NULL columns, and there is no
+ single match for any of the NULL columns, the result is FALSE.
+ */
+ if (pq.elements - test(non_null_key) == 0)
+ {
+ res= FALSE;
+ goto end;
+ }
+
+ DBUG_ASSERT(pq.elements);
+
+ min_key= (Ordered_key*) queue_remove(&pq, 0);
+ min_row_num= min_key->current();
+ bitmap_copy(&matching_keys, &null_only_columns);
+ bitmap_set_bit(&matching_keys, min_key->get_keyid());
+ bitmap_union(&matching_keys, &matching_outer_cols);
+ if (min_key->next_same())
+ queue_insert(&pq, (uchar *) min_key);
+
+ if (pq.elements == 0)
+ {
+ /*
+ Check the only matching row of the only key min_key for NULL matches
+ in the other columns.
+ */
+ res= test_null_row(min_row_num);
+ goto end;
+ }
+
+ while (TRUE)
+ {
+ cur_key= (Ordered_key*) queue_remove(&pq, 0);
+ cur_row_num= cur_key->current();
+
+ if (cur_row_num == min_row_num)
+ bitmap_set_bit(&matching_keys, cur_key->get_keyid());
+ else
+ {
+ /* Follows from the correct use of priority queue. */
+ DBUG_ASSERT(cur_row_num > min_row_num);
+ if (test_null_row(min_row_num))
+ {
+ res= TRUE;
+ goto end;
+ }
+ else
+ {
+ min_key= cur_key;
+ min_row_num= cur_row_num;
+ bitmap_copy(&matching_keys, &null_only_columns);
+ bitmap_set_bit(&matching_keys, min_key->get_keyid());
+ bitmap_union(&matching_keys, &matching_outer_cols);
+ }
+ }
+
+ if (cur_key->next_same())
+ queue_insert(&pq, (uchar *) cur_key);
+
+ if (pq.elements == 0)
+ {
+ /* Check the last row of the last column in PQ for NULL matches. */
+ res= test_null_row(min_row_num);
+ goto end;
+ }
+ }
+
+ /* We should never get here - all branches must be handled explicitly above. */
+ DBUG_ASSERT(FALSE);
+
+end:
+ tmp_table->file->ha_rnd_end();
+ return res;
+}
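
The merge loop above is easier to follow through a much simplified standalone
analogue. The sketch below is illustrative only: it uses std::priority_queue
instead of the server's QUEUE, assumes one strictly increasing list of matching
row numbers per column, and leaves out the NULL-complement handling done by
test_null_row().

#include <cstddef>
#include <cstdio>
#include <functional>
#include <queue>
#include <tuple>
#include <vector>

/* (current row number, index of the list it came from, position in that list) */
typedef std::tuple<long, std::size_t, std::size_t> Cursor;

/*
  Given one strictly increasing list of matching row numbers per column,
  check whether some row number occurs in every list, by merging the lists
  through a min-heap ordered by the current row number.
*/
static bool common_row_exists(const std::vector<std::vector<long> > &lists)
{
  std::priority_queue<Cursor, std::vector<Cursor>, std::greater<Cursor> > pq;
  for (std::size_t i= 0; i < lists.size(); i++)
  {
    if (lists[i].empty())
      return false;                     /* this column can never match */
    pq.push(Cursor(lists[i][0], i, 0));
  }

  long cur_row= -1;                     /* candidate row number            */
  std::size_t matches= 0;               /* lists seen that contain cur_row */
  while (!pq.empty())
  {
    Cursor top= pq.top();
    pq.pop();
    long row=        std::get<0>(top);
    std::size_t src= std::get<1>(top);
    std::size_t pos= std::get<2>(top);

    if (row == cur_row)
      matches++;
    else
    {
      cur_row= row;                     /* start counting a new candidate */
      matches= 1;
    }
    if (matches == lists.size())
      return true;                      /* every list contains cur_row */

    if (pos + 1 < lists[src].size())
      pq.push(Cursor(lists[src][pos + 1], src, pos + 1));
  }
  return false;
}

int main()
{
  std::vector<std::vector<long> > lists= {{1, 4, 7}, {2, 4, 9}, {0, 4, 7}};
  std::printf("%s\n", common_row_exists(lists) ? "partial match" : "no match");
  return 0;
}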
+
+
+subselect_table_scan_engine::subselect_table_scan_engine(
+ subselect_uniquesubquery_engine *engine_arg,
+ TABLE *tmp_table_arg,
+ Item_subselect *item_arg,
+ select_result_interceptor *result_arg,
+ List<Item> *equi_join_conds_arg,
+ uint covering_null_row_width_arg)
+ :subselect_partial_match_engine(engine_arg, tmp_table_arg, item_arg,
+ result_arg, equi_join_conds_arg,
+ covering_null_row_width_arg)
+{}
+
+
+/*
+ TIMOUR:
+ This method is based on subselect_uniquesubquery_engine::scan_table().
+ Consider refactoring somehow, 80% of the code is the same.
+
+ for each row_i in tmp_table
+ {
+ count_matches= 0;
+ for each row element row_i[j]
+ {
+ if (outer_ref[j] is NULL || row_i[j] is NULL || outer_ref[j] == row_i[j])
+ ++count_matches;
+ }
+ if (count_matches == outer_ref.elements)
+ return TRUE
+ }
+ return FALSE
+*/
+
+bool subselect_table_scan_engine::partial_match()
+{
+ List_iterator_fast<Item> equality_it(*equi_join_conds);
+ Item *cur_eq;
+ uint count_matches;
+ int error;
+ bool res;
+
+ tmp_table->file->ha_rnd_init(1);
+ tmp_table->file->extra_opt(HA_EXTRA_CACHE,
+ current_thd->variables.read_buff_size);
+ /*
+ TIMOUR:
+ scan_table() also calls "table->null_row= 0;", why, do we need it?
+ */
+ for (;;)
+ {
+ error= tmp_table->file->ha_rnd_next(tmp_table->record[0]);
+    if (error)
+    {
+ if (error == HA_ERR_RECORD_DELETED)
+ {
+ error= 0;
+ continue;
+ }
+ if (error == HA_ERR_END_OF_FILE)
+ {
+ error= 0;
+ break;
+ }
+ else
+ {
+ error= report_error(tmp_table, error);
+ break;
+ }
+ }
+
+ equality_it.rewind();
+ count_matches= 0;
+ while ((cur_eq= equality_it++))
+ {
+ DBUG_ASSERT(cur_eq->type() == Item::FUNC_ITEM &&
+ ((Item_func*)cur_eq)->functype() == Item_func::EQ_FUNC);
+ if (!cur_eq->val_int() && !cur_eq->null_value)
+ break;
+ ++count_matches;
+ }
+ if (count_matches == tmp_table->s->fields)
+ {
+ res= TRUE; /* Found a matching row. */
+ goto end;
+ }
+ }
+
+ res= FALSE;
+end:
+ tmp_table->file->ha_rnd_end();
+ return res;
+}
+
+
+void subselect_table_scan_engine::cleanup()
+{
+}
=== modified file 'sql/item_subselect.h'
--- a/sql/item_subselect.h 2010-02-11 23:59:58 +0000
+++ b/sql/item_subselect.h 2010-03-09 10:14:06 +0000
@@ -297,7 +297,7 @@ public:
Representation of IN subquery predicates of the form
"left_expr IN (SELECT ...)".
- @detail
+ @details
This class has:
- A "subquery execution engine" (as a subclass of Item_subselect) that allows
it to evaluate subqueries. (and this class participates in execution by
@@ -319,6 +319,12 @@ protected:
*/
List<Cached_item> *left_expr_cache;
bool first_execution;
+ /*
+ Set to TRUE if at query execution time we determine that this item's
+ value is a constant during this execution. We need this member because
+ it is not possible to substitute 'this' with a constant item.
+ */
+ bool is_constant;
/*
expr & optimizer used in subselect rewriting to store Item for
@@ -387,8 +393,8 @@ public:
Item_in_subselect(Item * left_expr, st_select_lex *select_lex);
Item_in_subselect()
:Item_exists_subselect(), left_expr_cache(0), first_execution(TRUE),
- optimizer(0), abort_on_null(0), pushed_cond_guards(NULL),
- exec_method(NOT_TRANSFORMED), upper_item(0)
+ is_constant(FALSE), optimizer(0), abort_on_null(0),
+ pushed_cond_guards(NULL), exec_method(NOT_TRANSFORMED), upper_item(0)
{}
void cleanup();
subs_type substype() { return IN_SUBS; }
@@ -421,6 +427,8 @@ public:
void update_used_tables();
bool setup_engine();
bool init_left_expr_cache();
+ /* Inform 'this' that it was computed, and contains a valid result. */
+ void set_first_execution() { if (first_execution) first_execution= FALSE; }
bool is_expensive_processor(uchar *arg);
friend class Item_ref_null_helper;
@@ -428,6 +436,7 @@ public:
friend class Item_in_optimizer;
friend class subselect_indexsubquery_engine;
friend class subselect_hash_sj_engine;
+ friend class subselect_partial_match_engine;
};
@@ -462,7 +471,8 @@ public:
enum enum_engine_type {ABSTRACT_ENGINE, SINGLE_SELECT_ENGINE,
UNION_ENGINE, UNIQUESUBQUERY_ENGINE,
- INDEXSUBQUERY_ENGINE, HASH_SJ_ENGINE};
+ INDEXSUBQUERY_ENGINE, HASH_SJ_ENGINE,
+ ROWID_MERGE_ENGINE, TABLE_SCAN_ENGINE};
subselect_engine(Item_subselect *si, select_result_interceptor *res)
:thd(0)
@@ -635,8 +645,10 @@ public:
virtual void print (String *str, enum_query_type query_type);
bool change_result(Item_subselect *si, select_result_interceptor *result);
bool no_tables();
+ int index_lookup(); /* TIMOUR: this method needs refactoring. */
int scan_table();
bool copy_ref_key();
+ int copy_ref_key_simple(); /* TIMOUR: this method needs refactoring. */
bool no_rows() { return empty_result_set; }
virtual enum_engine_type engine_type() { return UNIQUESUBQUERY_ENGINE; }
};
@@ -705,50 +717,439 @@ inline bool Item_subselect::is_uncacheab
/**
- Compute an IN predicate via a hash semi-join. The subquery is materialized
- during the first evaluation of the IN predicate. The IN predicate is executed
- via the functionality inherited from subselect_uniquesubquery_engine.
+ Compute an IN predicate via a hash semi-join. This class is responsible for
+ the materialization of the subquery, and the selection of the correct and
+ optimal execution method (e.g. direct index lookup, or partial matching) for
+ the IN predicate.
*/
-class subselect_hash_sj_engine: public subselect_uniquesubquery_engine
+class subselect_hash_sj_engine : public subselect_engine
{
protected:
+ /* The table into which the subquery is materialized. */
+ TABLE *tmp_table;
/* TRUE if the subquery was materialized into a temp table. */
bool is_materialized;
/*
The old engine already chosen at parse time and stored in permanent memory.
Through this member we can re-create and re-prepare materialize_join for
- each execution of a prepared statement. We akso resuse the functionality
+ each execution of a prepared statement. We also reuse the functionality
of subselect_single_select_engine::[prepare | cols].
*/
subselect_single_select_engine *materialize_engine;
+ /* The engine used to compute the IN predicate. */
+ subselect_engine *lookup_engine;
/*
QEP to execute the subquery and materialize its result into a
temporary table. Created during the first call to exec().
*/
JOIN *materialize_join;
- /* Temp table context of the outer select's JOIN. */
- TMP_TABLE_PARAM *tmp_param;
+
+ /* Keyparts of the only non-NULL composite index in a rowid merge. */
+ MY_BITMAP non_null_key_parts;
+ /* Keyparts of the single column indexes with NULL, one keypart per index. */
+ MY_BITMAP partial_match_key_parts;
+ uint count_partial_match_columns;
+ uint count_null_only_columns;
+ /*
+ A conjunction of all the equality conditions between all pairs of expressions
+ that are arguments of an IN predicate. We need these to post-filter some
+ IN results because index lookups sometimes match values that are actually
+ not equal to the search key in SQL terms.
+ */
+ Item_cond_and *semi_join_conds;
+ /* Possible execution strategies that can be used to compute hash semi-join.*/
+ enum exec_strategy {
+ UNDEFINED,
+ COMPLETE_MATCH, /* Use regular index lookups. */
+ PARTIAL_MATCH, /* Use some partial matching strategy. */
+ PARTIAL_MATCH_MERGE, /* Use partial matching through index merging. */
+ PARTIAL_MATCH_SCAN, /* Use partial matching through table scan. */
+ IMPOSSIBLE /* Subquery materialization is not applicable. */
+ };
+ /* The chosen execution strategy. Computed after materialization. */
+ exec_strategy strategy;
+protected:
+ exec_strategy get_strategy_using_schema();
+ exec_strategy get_strategy_using_data();
+ size_t rowid_merge_buff_size(bool has_non_null_key,
+ bool has_covering_null_row,
+ MY_BITMAP *partial_match_key_parts);
+ void choose_partial_match_strategy(bool has_non_null_key,
+ bool has_covering_null_row,
+ MY_BITMAP *partial_match_key_parts);
+ bool make_semi_join_conds();
+ subselect_uniquesubquery_engine* make_unique_engine();
public:
subselect_hash_sj_engine(THD *thd, Item_subselect *in_predicate,
- subselect_single_select_engine *old_engine)
- :subselect_uniquesubquery_engine(thd, NULL, in_predicate, NULL),
- is_materialized(FALSE), materialize_engine(old_engine),
- materialize_join(NULL), tmp_param(NULL)
- {}
+ subselect_single_select_engine *old_engine)
+ :subselect_engine(in_predicate, NULL), tmp_table(NULL),
+ is_materialized(FALSE), materialize_engine(old_engine), lookup_engine(NULL),
+ materialize_join(NULL), count_partial_match_columns(0),
+ count_null_only_columns(0), semi_join_conds(NULL), strategy(UNDEFINED)
+ {
+ set_thd(thd);
+ }
~subselect_hash_sj_engine();
bool init_permanent(List<Item> *tmp_columns);
bool init_runtime();
void cleanup();
- int prepare() { return 0; }
+ int prepare() { return 0; } /* Override virtual function in base class. */
int exec();
- virtual void print (String *str, enum_query_type query_type);
+ virtual void print(String *str, enum_query_type query_type);
uint cols()
{
return materialize_engine->cols();
}
+ uint8 uncacheable() { return UNCACHEABLE_DEPENDENT; }
+ table_map upper_select_const_tables() { return 0; }
+ bool no_rows() { return !tmp_table->file->stats.records; }
virtual enum_engine_type engine_type() { return HASH_SJ_ENGINE; }
+ /*
+ TODO: factor out all these methods in a base subselect_index_engine class
+ because all of them have dummy implementations and should never be called.
+ */
+ void fix_length_and_dec(Item_cache** row);//=>base class
+ void exclude(); //=>base class
+ //=>base class
+ bool change_result(Item_subselect *si, select_result_interceptor *result);
+ bool no_tables();//=>base class
+};
+
+
+/*
+ Distinguish the type of (0-based) row numbers from the type of the index into
+ an array of row numbers.
+*/
+typedef ha_rows rownum_t;
+
+
+/*
+ An Ordered_key is an in-memory table index that allows O(log(N)) time
+ lookups of a multi-part key.
+
+ If the index is over a single column, then this column may contain NULLs, and
+ the NULLs are stored and tested separately for NULL in O(1) via is_null().
+ Multi-part indexes assume that the indexed columns do not contain NULLs.
+
+ TODO:
+ = Due to the unnatural asymmetry between single and multi-part indexes, it
+ makes sense to somehow refactor or extend the class.
+
+ = This class can be refactored into a base abstract interface, and two
+ subclasses:
+ - one to represent single-column indexes, and
+ - another to represent multi-column indexes.
+ Such separation would allow slightly more efficient implementation of
+ the single-column indexes.
+ = The current design requires such indexes to be fully recreated for each
+ PS (re)execution, however most of the constituent objects can be reused.
+*/
+
+class Ordered_key : public Sql_alloc
+{
+protected:
+ /*
+ Index of the key in an array of keys. This index allows to
+ construct (sub)sets of keys represented by bitmaps.
+ */
+ uint keyid;
+ /* The table being indexed. */
+ TABLE *tbl;
+ /* The columns being indexed. */
+ Item_field **key_columns;
+ /* Number of elements in 'key_columns' (number of key parts). */
+ uint key_column_count;
+ /*
+ An expression, or sequence of expressions that forms the search key.
+ The search key is a sequence when it is Item_row. Each element of the
+ sequence is accessible via Item::element_index(int i).
+ */
+ Item *search_key;
+
+/* Value index related members. */
+ /*
+ The actual value index, consists of a sorted sequence of row numbers.
+ */
+ rownum_t *key_buff;
+ /* Number of elements in key_buff. */
+ ha_rows key_buff_elements;
+ /* Current element in 'key_buff'. */
+ ha_rows cur_key_idx;
+ /*
+ Mapping from row numbers to row ids. The element row_num_to_rowid[i]
+ contains a buffer with the rowid for the row numbered 'i'.
+ The memory for this member is not maintained by this class because
+ all Ordered_key indexes of the same table share the same mapping.
+ */
+ uchar *row_num_to_rowid;
+ /*
+ A sequence of predicates to compare the search key with the corresponding
+ columns of a table row from the index.
+ */
+ Item_func_lt **compare_pred;
+
+/* Null index related members. */
+ MY_BITMAP null_key;
+ /* Count of NULLs per column. */
+ ha_rows null_count;
+ /* The row number that contains the first NULL in a column. */
+ ha_rows min_null_row;
+ /* The row number that contains the last NULL in a column. */
+ ha_rows max_null_row;
+
+protected:
+ bool alloc_keys_buffers();
+ /*
+ Quick sort comparison function that compares two rows of the same table
+ identified by their row numbers.
+ */
+ int cmp_keys_by_row_data(rownum_t a, rownum_t b);
+ static int cmp_keys_by_row_data_and_rownum(Ordered_key *key,
+ rownum_t* a, rownum_t* b);
+
+ int cmp_key_with_search_key(rownum_t row_num);
+
+public:
+ Ordered_key(uint keyid_arg, TABLE *tbl_arg,
+ Item *search_key_arg, ha_rows null_count_arg,
+ ha_rows min_null_row_arg, ha_rows max_null_row_arg,
+ uchar *row_num_to_rowid_arg);
+ ~Ordered_key();
+ void cleanup();
+ /* Initialize a multi-column index. */
+ bool init(MY_BITMAP *columns_to_index);
+ /* Initialize a single-column index. */
+ bool init(int col_idx);
+
+ uint get_column_count() { return key_column_count; }
+ uint get_keyid() { return keyid; }
+ uint get_field_idx(uint i)
+ {
+ DBUG_ASSERT(i < key_column_count);
+ return key_columns[i]->field->field_index;
+ }
+ /*
+ Get the search key element that corresponds to the i-th key part of this
+ index.
+ */
+ Item *get_search_key(uint i)
+ {
+ return search_key->element_index(key_columns[i]->field->field_index);
+ }
+ void add_key(rownum_t row_num)
+ {
+ /* The caller must know how many elements to add. */
+ DBUG_ASSERT(key_buff_elements && cur_key_idx < key_buff_elements);
+ key_buff[cur_key_idx]= row_num;
+ ++cur_key_idx;
+ }
+
+ void sort_keys();
+ double null_selectivity();
+
+ /*
+ Position the current element at the first row that matches the key.
+ The key itself is propagated by evaluating the current value(s) of
+ this->search_key.
+ */
+ bool lookup();
+ /* Move the current index cursor to the first key. */
+ void first()
+ {
+ DBUG_ASSERT(key_buff_elements);
+ cur_key_idx= 0;
+ }
+ /* TODO */
+ bool next_same();
+ /* Move the current index cursor to the next key. */
+ bool next()
+ {
+ DBUG_ASSERT(key_buff_elements);
+ if (cur_key_idx < key_buff_elements - 1)
+ {
+ ++cur_key_idx;
+ return TRUE;
+ }
+ return FALSE;
+ };
+ /* Return the current index element. */
+ rownum_t current()
+ {
+ DBUG_ASSERT(key_buff_elements && cur_key_idx < key_buff_elements);
+ return key_buff[cur_key_idx];
+ }
+
+ void set_null(rownum_t row_num)
+ {
+ bitmap_set_bit(&null_key, row_num);
+ }
+ bool is_null(rownum_t row_num)
+ {
+ /*
+ Indexes consisting of only NULLs do not have a bitmap buffer at all.
+ Their only initialized member is 'n_bits', which is equal to the number
+ of temp table rows.
+ */
+ if (null_count == tbl->file->stats.records)
+ {
+ DBUG_ASSERT(tbl->file->stats.records == null_key.n_bits);
+ return TRUE;
+ }
+ if (row_num > max_null_row || row_num < min_null_row)
+ return FALSE;
+ return bitmap_is_set(&null_key, row_num);
+ }
+ void print(String *str);
+};
+
+
+class subselect_partial_match_engine : public subselect_engine
+{
+protected:
+ /* The temporary table that contains a materialized subquery. */
+ TABLE *tmp_table;
+ /*
+ The engine used to check whether an IN predicate is TRUE or not. If not
+ TRUE, then subselect_rowid_merge_engine further distinguishes between
+ FALSE and UNKNOWN.
+ */
+ subselect_uniquesubquery_engine *lookup_engine;
+ /* A list of equalities between each pair of IN operands. */
+ List<Item> *equi_join_conds;
+ /*
+ If there is a row, such that all its NULL-able components are NULL, this
+ member is set to the number of covered columns. If there is no covering
+ row, then this is 0.
+ */
+ uint covering_null_row_width;
+protected:
+ virtual bool partial_match()= 0;
+public:
+ subselect_partial_match_engine(subselect_uniquesubquery_engine *engine_arg,
+ TABLE *tmp_table_arg, Item_subselect *item_arg,
+ select_result_interceptor *result_arg,
+ List<Item> *equi_join_conds_arg,
+ uint covering_null_row_width_arg);
+ int prepare() { return 0; }
+ int exec();
+ void fix_length_and_dec(Item_cache**) {}
+ uint cols() { /* TODO: what is the correct value? */ return 1; }
+ uint8 uncacheable() { return UNCACHEABLE_DEPENDENT; }
+ void exclude() {}
+ table_map upper_select_const_tables() { return 0; }
+ bool change_result(Item_subselect*, select_result_interceptor*)
+ { DBUG_ASSERT(FALSE); return false; }
+ bool no_tables() { return false; }
+ bool no_rows()
+ {
+ /*
+ TODO: It is completely unclear what the semantics of this
+ method is. The current result is computed so that the call to no_rows()
+ from Item_in_optimizer::val_int() sets Item_in_optimizer::null_value
+ correctly.
+ */
+ return !(((Item_in_subselect *) item)->null_value);
+ }
+ void print(String*, enum_query_type);
+
+ friend void subselect_hash_sj_engine::cleanup();
+};
+
+
+class subselect_rowid_merge_engine: public subselect_partial_match_engine
+{
+protected:
+ /*
+ Mapping from row numbers to row ids. The rowids are stored sequentially
+ in the array - rowid[i] is located in row_num_to_rowid + i * rowid_length.
+ */
+ uchar *row_num_to_rowid;
+ /*
+ A subset of all the keys for which there is a match for the same row.
+ Used during execution. Computed for each outer reference
+ */
+ MY_BITMAP matching_keys;
+ /*
+ The columns of the outer reference that are NULL. Computed for each
+ outer reference.
+ */
+ MY_BITMAP matching_outer_cols;
+ /*
+ Columns that consist of only NULLs. Such columns match any value.
+ Computed once per query execution.
+ */
+ MY_BITMAP null_only_columns;
+ /*
+ Indexes of row numbers, sorted by <column_value, row_number>. If an
+ index may contain NULLs, the NULLs are stored efficiently in a bitmap.
+
+ The indexes are sorted by the selectivity of their NULL sub-indexes, the
+ one with the fewest NULLs is first. Thus, if there is any index on
+ non-NULL columns, it is contained in keys[0].
+ */
+ Ordered_key **merge_keys;
+ /* The number of elements in keys. */
+ uint keys_count;
+ /*
+ An index on all non-NULL columns of 'tmp_table'. The index has the
+ logical form: <[v_i1 | ... | v_ik], rownum>. It allows to find the row
+ number where the columns c_i1,...,c_ik contain the values v_i1,...,v_ik.
+ If such an index exists, it is always the first element of 'keys'.
+ */
+ Ordered_key *non_null_key;
+ /*
+ Priority queue of Ordered_key indexes, one per NULLable column.
+ This queue is used by the partial match algorithm in method exec().
+ */
+ QUEUE pq;
+protected:
+ /*
+ Comparison function to compare keys in order of decreasing bitmap
+ selectivity.
+ */
+ static int cmp_keys_by_null_selectivity(Ordered_key **k1, Ordered_key **k2);
+ /*
+ Comparison function used by the priority queue pq, the 'smaller' key
+ is the one with the smaller current row number.
+ */
+ static int cmp_keys_by_cur_rownum(void *arg, uchar *k1, uchar *k2);
+
+ bool test_null_row(rownum_t row_num);
+ bool partial_match();
+public:
+ subselect_rowid_merge_engine(subselect_uniquesubquery_engine *engine_arg,
+ TABLE *tmp_table_arg, uint keys_count_arg,
+ uint covering_null_row_width_arg,
+ Item_subselect *item_arg,
+ select_result_interceptor *result_arg,
+ List<Item> *equi_join_conds_arg)
+ :subselect_partial_match_engine(engine_arg, tmp_table_arg, item_arg,
+ result_arg, equi_join_conds_arg,
+ covering_null_row_width_arg),
+ keys_count(keys_count_arg), non_null_key(NULL)
+ {
+ thd= lookup_engine->get_thd();
+ }
+ ~subselect_rowid_merge_engine();
+ bool init(MY_BITMAP *non_null_key_parts, MY_BITMAP *partial_match_key_parts);
+ void cleanup();
+ virtual enum_engine_type engine_type() { return ROWID_MERGE_ENGINE; }
};
+
+class subselect_table_scan_engine: public subselect_partial_match_engine
+{
+protected:
+ bool partial_match();
+public:
+ subselect_table_scan_engine(subselect_uniquesubquery_engine *engine_arg,
+ TABLE *tmp_table_arg, Item_subselect *item_arg,
+ select_result_interceptor *result_arg,
+ List<Item> *equi_join_conds_arg,
+ uint covering_null_row_width_arg);
+ void cleanup();
+ virtual enum_engine_type engine_type() { return TABLE_SCAN_ENGINE; }
+};
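
For context, the engines declared above target NOT IN subqueries where either the outer expression or the materialized subquery may contain NULLs, so a failed index lookup alone cannot distinguish FALSE from UNKNOWN. A minimal, purely illustrative predicate of this kind (the tables are adapted from the test changes further down; the INSERT into t2 is an added example value):

CREATE TABLE t1 (a int);
CREATE TABLE t2 (b int, PRIMARY KEY(b));
INSERT INTO t1 VALUES (1), (NULL), (4);
INSERT INTO t2 VALUES (1), (2);
# For the row with a = NULL, "a NOT IN (SELECT b FROM t2)" is UNKNOWN rather
# than FALSE; after a failed lookup into the materialized subquery, the
# partial-match engines (rowid merge or table scan) decide between the two.
SELECT a FROM t1 WHERE a NOT IN (SELECT b FROM t2);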
=== modified file 'sql/mysql_priv.h'
--- a/sql/mysql_priv.h 2010-01-17 14:55:08 +0000
+++ b/sql/mysql_priv.h 2010-03-09 10:14:06 +0000
@@ -552,12 +552,14 @@ protected:
#define OPTIMIZER_SWITCH_LOOSE_SCAN 64
#define OPTIMIZER_SWITCH_MATERIALIZATION 128
#define OPTIMIZER_SWITCH_SEMIJOIN 256
+#define OPTIMIZER_SWITCH_PARTIAL_MATCH_ROWID_MERGE 512
+#define OPTIMIZER_SWITCH_PARTIAL_MATCH_TABLE_SCAN 1024
#ifdef DBUG_OFF
-# define OPTIMIZER_SWITCH_LAST 512
+# define OPTIMIZER_SWITCH_LAST 2048
#else
-# define OPTIMIZER_SWITCH_TABLE_ELIMINATION 512
-# define OPTIMIZER_SWITCH_LAST 1024
+# define OPTIMIZER_SWITCH_TABLE_ELIMINATION 2048
+# define OPTIMIZER_SWITCH_LAST 4096
#endif
#ifdef DBUG_OFF
@@ -570,8 +572,10 @@ protected:
OPTIMIZER_SWITCH_FIRSTMATCH | \
OPTIMIZER_SWITCH_LOOSE_SCAN | \
OPTIMIZER_SWITCH_MATERIALIZATION | \
- OPTIMIZER_SWITCH_SEMIJOIN)
-#else
+ OPTIMIZER_SWITCH_SEMIJOIN | \
+ OPTIMIZER_SWITCH_PARTIAL_MATCH_ROWID_MERGE|\
+ OPTIMIZER_SWITCH_PARTIAL_MATCH_TABLE_SCAN)
+#else
# define OPTIMIZER_SWITCH_DEFAULT (OPTIMIZER_SWITCH_INDEX_MERGE | \
OPTIMIZER_SWITCH_INDEX_MERGE_UNION | \
OPTIMIZER_SWITCH_INDEX_MERGE_SORT_UNION | \
@@ -581,7 +585,9 @@ protected:
OPTIMIZER_SWITCH_FIRSTMATCH | \
OPTIMIZER_SWITCH_LOOSE_SCAN | \
OPTIMIZER_SWITCH_MATERIALIZATION | \
- OPTIMIZER_SWITCH_SEMIJOIN)
+ OPTIMIZER_SWITCH_SEMIJOIN | \
+ OPTIMIZER_SWITCH_PARTIAL_MATCH_ROWID_MERGE|\
+ OPTIMIZER_SWITCH_PARTIAL_MATCH_TABLE_SCAN)
#endif
/*
=== modified file 'sql/mysqld.cc'
--- a/sql/mysqld.cc 2010-01-17 14:55:08 +0000
+++ b/sql/mysqld.cc 2010-03-09 10:14:06 +0000
@@ -301,7 +301,9 @@ static const char *optimizer_switch_name
"index_merge","index_merge_union","index_merge_sort_union",
"index_merge_intersection",
"index_condition_pushdown",
- "firstmatch","loosescan","materialization", "semijoin",
+ "firstmatch","loosescan","materialization", "semijoin",
+ "partial_match_rowid_merge",
+ "partial_match_table_scan",
#ifndef DBUG_OFF
"table_elimination",
#endif
@@ -320,6 +322,8 @@ static const unsigned int optimizer_swit
sizeof("loosescan") - 1,
sizeof("materialization") - 1,
sizeof("semijoin") - 1,
+ sizeof("partial_match_rowid_merge") - 1,
+ sizeof("partial_match_table_scan") - 1,
#ifndef DBUG_OFF
sizeof("table_elimination") - 1,
#endif
@@ -5794,7 +5798,8 @@ enum options_mysqld
OPT_RECORD_RND_BUFFER, OPT_DIV_PRECINCREMENT, OPT_RELAY_LOG_SPACE_LIMIT,
OPT_RELAY_LOG_PURGE,
OPT_SLAVE_NET_TIMEOUT, OPT_SLAVE_COMPRESSED_PROTOCOL, OPT_SLOW_LAUNCH_TIME,
- OPT_SLAVE_TRANS_RETRIES, OPT_READONLY, OPT_DEBUGGING, OPT_DEBUG_FLUSH,
+ OPT_SLAVE_TRANS_RETRIES, OPT_READONLY, OPT_ROWID_MERGE_BUFF_SIZE,
+ OPT_DEBUGGING, OPT_DEBUG_FLUSH,
OPT_SORT_BUFFER, OPT_TABLE_OPEN_CACHE, OPT_TABLE_DEF_CACHE,
OPT_THREAD_CONCURRENCY, OPT_THREAD_CACHE_SIZE,
OPT_TMP_TABLE_SIZE, OPT_THREAD_STACK,
@@ -7130,6 +7135,11 @@ The minimum value for this variable is 4
(uchar**) &max_system_variables.range_alloc_block_size, 0, GET_ULONG,
REQUIRED_ARG, RANGE_ALLOC_BLOCK_SIZE, RANGE_ALLOC_BLOCK_SIZE,
(longlong) ULONG_MAX, 0, 1024, 0},
+ {"rowid_merge_buff_size", OPT_ROWID_MERGE_BUFF_SIZE,
+ "The size of the buffers used [NOT] IN evaluation via partial matching.",
+ (uchar**) &global_system_variables.rowid_merge_buff_size,
+ (uchar**) &max_system_variables.rowid_merge_buff_size, 0, GET_ULONG,
+ REQUIRED_ARG, 8*1024*1024L, 0, MAX_MEM_TABLE_SIZE/2, 0, 1, 0},
{"read_buffer_size", OPT_RECORD_BUFFER,
"Each thread that does a sequential scan allocates a buffer of this size for each table it scans. If you do many sequential scans, you may want to increase this value.",
(uchar**) &global_system_variables.read_buff_size,
=== modified file 'sql/opt_subselect.cc'
--- a/sql/opt_subselect.cc 2010-03-15 06:32:54 +0000
+++ b/sql/opt_subselect.cc 2010-03-15 19:52:58 +0000
@@ -187,10 +187,10 @@ int check_and_do_in_subquery_rewrites(JO
does not call setup_subquery_materialization(). We could make
SELECT ... FROM DUAL call that function but that doesn't seem
to be the case that is worth handling.
- 4. Subquery predicate is a top-level predicate
- (this implies it is not negated)
- TODO: this is a limitation that should be lifted once we
- implement correct NULL semantics (WL#3830)
+ 4. Either the subquery predicate is a top-level predicate, or at
+ least one partial match strategy is enabled. If no partial match
+ strategy is enabled, then materialization cannot be used for
+ non-top-level queries because it cannot handle NULLs correctly.
5. Subquery is non-correlated
TODO:
This is an overly restrictive condition. It can be extended to:
@@ -204,8 +204,8 @@ int check_and_do_in_subquery_rewrites(JO
(*) The subquery must be part of a SELECT statement. The current
condition also excludes multi-table update statements.
- We have to determine whether we will perform subquery materialization
- before calling the IN=>EXISTS transformation, so that we know whether to
+ Determine whether we will perform subquery materialization before
+ calling the IN=>EXISTS transformation, so that we know whether to
perform the whole transformation or only that part of it which wraps
Item_in_subselect in an Item_in_optimizer.
*/
@@ -215,12 +215,14 @@ int check_and_do_in_subquery_rewrites(JO
select_lex->master_unit()->first_select()->leaf_tables && // 3
thd->lex->sql_command == SQLCOM_SELECT && // *
select_lex->outer_select()->leaf_tables && // 3A
- subquery_types_allow_materialization(in_subs))
+ subquery_types_allow_materialization(in_subs) &&
+ // psergey-todo: duplicated_subselect_card_check: where it's done?
+ (in_subs->is_top_level_item() ||
+ optimizer_flag(thd, OPTIMIZER_SWITCH_PARTIAL_MATCH_ROWID_MERGE) ||
+ optimizer_flag(thd, OPTIMIZER_SWITCH_PARTIAL_MATCH_TABLE_SCAN)) &&//4
+ !in_subs->is_correlated && // 5
+ in_subs->exec_method == Item_in_subselect::NOT_TRANSFORMED) // 6
{
- // psergey-todo: duplicated_subselect_card_check: where it's done?
- if (in_subs->is_top_level_item() && // 4
- !in_subs->is_correlated && // 5
- in_subs->exec_method == Item_in_subselect::NOT_TRANSFORMED) // 6
in_subs->exec_method= Item_in_subselect::MATERIALIZATION;
}
=== modified file 'sql/set_var.cc'
--- a/sql/set_var.cc 2009-12-22 12:49:15 +0000
+++ b/sql/set_var.cc 2010-03-09 10:14:06 +0000
@@ -540,6 +540,9 @@ static sys_var_long_ptr sys_query_cache_
static sys_var_thd_ulong sys_range_alloc_block_size(&vars, "range_alloc_block_size",
&SV::range_alloc_block_size);
+static sys_var_thd_ulong sys_rowid_merge_buff_size(&vars, "rowid_merge_buff_size",
+ &SV::rowid_merge_buff_size);
+
static sys_var_thd_ulong sys_query_alloc_block_size(&vars, "query_alloc_block_size",
&SV::query_alloc_block_size,
0, fix_thd_mem_root);
=== modified file 'sql/sql_class.cc'
--- a/sql/sql_class.cc 2010-02-17 21:59:41 +0000
+++ b/sql/sql_class.cc 2010-02-19 21:55:57 +0000
@@ -42,6 +42,7 @@
#include "sp_rcontext.h"
#include "sp_cache.h"
+#include "sql_select.h" /* declares create_tmp_table() */
/*
The following is used to initialise Table_ident with a internal
@@ -2877,6 +2878,71 @@ bool select_dumpvar::send_eof()
return 0;
}
+
+bool
+select_materialize_with_stats::
+create_result_table(THD *thd_arg, List<Item> *column_types,
+ bool is_union_distinct, ulonglong options,
+ const char *table_alias, bool bit_fields_as_long)
+{
+ DBUG_ASSERT(table == 0);
+ tmp_table_param.field_count= column_types->elements;
+ tmp_table_param.bit_fields_as_long= bit_fields_as_long;
+
+ if (! (table= create_tmp_table(thd_arg, &tmp_table_param, *column_types,
+ (ORDER*) 0, is_union_distinct, 1,
+ options, HA_POS_ERROR, (char*) table_alias)))
+ return TRUE;
+
+ col_stat= (Column_statistics*) table->in_use->alloc(table->s->fields *
+ sizeof(Column_statistics));
+ if (!col_stat)
+ return TRUE;
+
+ cleanup();
+
+ table->file->extra(HA_EXTRA_WRITE_CACHE);
+ table->file->extra(HA_EXTRA_IGNORE_DUP_KEY);
+ return FALSE;
+}
+
+
+/**
+ Override select_union::send_data to analyze each row for NULLs and to
+ update null_statistics before sending data to the client.
+
+ @return TRUE if fatal error when sending data to the client
+ @return FALSE on success
+*/
+
+bool select_materialize_with_stats::send_data(List<Item> &items)
+{
+ List_iterator_fast<Item> item_it(items);
+ Item *cur_item;
+ Column_statistics *cur_col_stat= col_stat;
+ uint nulls_in_row= 0;
+
+ ++count_rows;
+
+ while ((cur_item= item_it++))
+ {
+ if (cur_item->is_null())
+ {
+ ++cur_col_stat->null_count;
+ cur_col_stat->max_null_row= count_rows;
+ if (!cur_col_stat->min_null_row)
+ cur_col_stat->min_null_row= count_rows;
+ ++nulls_in_row;
+ }
+ ++cur_col_stat;
+ }
+ if (nulls_in_row > max_nulls_in_row)
+ max_nulls_in_row= nulls_in_row;
+
+ return select_union::send_data(items);
+}
+
+
/****************************************************************************
TMP_TABLE_PARAM
****************************************************************************/
=== modified file 'sql/sql_class.h'
--- a/sql/sql_class.h 2010-02-17 21:59:41 +0000
+++ b/sql/sql_class.h 2010-03-09 10:14:06 +0000
@@ -343,6 +343,8 @@ struct system_variables
ulong mrr_buff_size;
ulong div_precincrement;
ulong sortbuff_size;
+ /* Total size of all buffers used by the subselect_rowid_merge_engine. */
+ ulong rowid_merge_buff_size;
ulong thread_handling;
ulong tx_isolation;
ulong completion_type;
@@ -2740,19 +2742,20 @@ public:
class select_union :public select_result_interceptor
{
+protected:
TMP_TABLE_PARAM tmp_table_param;
public:
TABLE *table;
- select_union() :table(0) {}
+ select_union() :table(0) { tmp_table_param.init(); }
int prepare(List<Item> &list, SELECT_LEX_UNIT *u);
bool send_data(List<Item> &items);
bool send_eof();
bool flush();
- bool create_result_table(THD *thd, List<Item> *column_types,
- bool is_distinct, ulonglong options,
- const char *alias, bool bit_fields_as_long);
+ virtual bool create_result_table(THD *thd, List<Item> *column_types,
+ bool is_distinct, ulonglong options,
+ const char *alias, bool bit_fields_as_long);
};
/* Base subselect interface class */
@@ -2776,6 +2779,74 @@ public:
bool send_data(List<Item> &items);
};
+
+/*
+ This class specializes select_union to collect statistics about the
+ data stored in the temp table. Currently the class collects statistics
+ about NULLs.
+*/
+
+class select_materialize_with_stats : public select_union
+{
+protected:
+ class Column_statistics
+ {
+ public:
+ /* Count of NULLs per column. */
+ ha_rows null_count;
+ /* The row number that contains the first NULL in a column. */
+ ha_rows min_null_row;
+ /* The row number that contains the last NULL in a column. */
+ ha_rows max_null_row;
+ };
+
+ /* Array of statistics data per column. */
+ Column_statistics* col_stat;
+
+ /*
+ The number of columns in the biggest sub-row that consists of only
+ NULL values.
+ */
+ ha_rows max_nulls_in_row;
+ /*
+ Count of rows written to the temp table. This is redundant as it is
+ already stored in handler::stats.records, however that one is relatively
+ expensive to compute (given we need that for every row).
+ */
+ ha_rows count_rows;
+
+public:
+ select_materialize_with_stats() {}
+ virtual bool create_result_table(THD *thd, List<Item> *column_types,
+ bool is_distinct, ulonglong options,
+ const char *alias, bool bit_fields_as_long);
+ bool init_result_table(ulonglong select_options);
+ bool send_data(List<Item> &items);
+ void cleanup()
+ {
+ memset(col_stat, 0, table->s->fields * sizeof(Column_statistics));
+ max_nulls_in_row= 0;
+ count_rows= 0;
+ }
+ ha_rows get_null_count_of_col(uint idx)
+ {
+ DBUG_ASSERT(idx < table->s->fields);
+ return col_stat[idx].null_count;
+ }
+ ha_rows get_max_null_of_col(uint idx)
+ {
+ DBUG_ASSERT(idx < table->s->fields);
+ return col_stat[idx].max_null_row;
+ }
+ ha_rows get_min_null_of_col(uint idx)
+ {
+ DBUG_ASSERT(idx < table->s->fields);
+ return col_stat[idx].min_null_row;
+ }
+ ha_rows get_max_nulls_in_row() { return max_nulls_in_row; }
+};
+
+
/* used in independent ALL/ANY optimisation */
class select_max_min_finder_subselect :public select_subselect
{
=== modified file 'sql/sql_select.cc'
--- a/sql/sql_select.cc 2010-03-14 18:25:43 +0000
+++ b/sql/sql_select.cc 2010-03-15 19:52:58 +0000
@@ -874,6 +874,9 @@ JOIN::optimize()
{
DBUG_PRINT("info",("No tables"));
error= 0;
+ /* Create all structures needed for materialized subquery execution. */
+ if (setup_subquery_materialization())
+ DBUG_RETURN(1);
DBUG_RETURN(0);
}
error= -1; // Error is sent to client
@@ -11258,7 +11261,7 @@ create_tmp_table(THD *thd,TMP_TABLE_PARA
param->group_buff=group_buff;
share->keys=1;
share->uniques= test(using_unique_constraint);
- table->key_info=keyinfo;
+ table->key_info= table->s->key_info= keyinfo;
keyinfo->key_part=key_part_info;
keyinfo->flags=HA_NOSAME;
keyinfo->usable_key_parts=keyinfo->key_parts= param->group_parts;
@@ -11344,7 +11347,7 @@ create_tmp_table(THD *thd,TMP_TABLE_PARA
keyinfo->key_parts * sizeof(KEY_PART_INFO))))
goto err;
bzero((void*) key_part_info, keyinfo->key_parts * sizeof(KEY_PART_INFO));
- table->key_info=keyinfo;
+ table->key_info= table->s->key_info= keyinfo;
keyinfo->key_part=key_part_info;
keyinfo->flags=HA_NOSAME | HA_NULL_ARE_EQUAL;
keyinfo->key_length= 0; // Will compute the sum of the parts below.
[Maria-developers] Rev 2779: Merge in MWL#68: Subquery optimization: Efficient NOT IN execution with NULLs in file:///home/psergey/dev/maria-5.3-subqueries-r7/
by Sergey Petrunya 15 Mar '10
At file:///home/psergey/dev/maria-5.3-subqueries-r7/
------------------------------------------------------------
revno: 2779 [merge]
revision-id: psergey(a)askmonty.org-20100315150935-4xm838tskbh9k3ci
parent: psergey(a)askmonty.org-20100315063535-jsp4jgya6lfqt8e6
parent: timour(a)sun.com-20100315143456-82d9rq3lbdscbr2n
committer: Sergey Petrunya <psergey(a)askmonty.org>
branch nick: maria-5.3-subqueries-r7
timestamp: Mon 2010-03-15 18:09:35 +0300
message:
Merge in MWL#68: Subquery optimization: Efficient NOT IN execution with NULLs
modified:
mysql-test/include/mix1.inc sp1f-innodb_mysql.test-20060426055153-mgtahdmgajg7vffqbq4xrmkzbhvanlaz
mysql-test/r/index_merge_myisam.result sp1f-index_merge_myisam.r-20060816114353-wd2664hjxwyjdvm4snup647av5fmxfln
mysql-test/r/innodb_mysql.result sp1f-innodb_mysql.result-20060426055153-bychbbfnqtvmvrwccwhn24i6yi46uqjv
mysql-test/r/myisam_mrr.result myisam_mrr.result-20091215071345-6wadxunod6vi8m48-1
mysql-test/r/ps.result sp1f-ps.result-20040405154119-efxzt5onloys45nfjak4gt44kr4awkdi
mysql-test/r/subselect.result sp1f-subselect.result-20020512204640-zgegcsgavnfd7t7eyrf7ibuqomsw7uzo
mysql-test/r/subselect3.result sp1f-subselect3.result-20061031174245-v7hvtc7uwevifiq4lziwv5gdcxpeak7t
mysql-test/r/subselect3_jcl6.result subselect3_jcl6.resu-20100117143923-cf6j4mu5zzng00u7-1
mysql-test/r/subselect_no_mat.result subselect_no_mat.res-20100117143924-hut18sl9k2c7qdj8-1
mysql-test/r/subselect_no_opts.result subselect_no_opts.re-20100117143925-pabg7o8iyokjlu93-1
mysql-test/r/subselect_no_semijoin.result subselect_no_semijoi-20100117143925-9yfygtcm7fwsuq2p-1
mysql-test/r/subselect_sj.result subselect_sj.result-20100117143926-nrop4ku355g3kv8b-1
mysql-test/r/subselect_sj_jcl6.result subselect_sj_jcl6.re-20100117143928-7vzk51yaf29cdavp-1
mysql-test/t/ps.test sp1f-ps.test-20040405154119-4zqf6po44yypvz5foa2osprg5kb5ok63
mysql-test/t/subselect.test sp1f-subselect.test-20020512204640-lyqrayx6uwsn7zih6y7kerkenuitzbvr
mysql-test/t/subselect3.test sp1f-subselect3.test-20061031174245-pcxt5ljylerxhx2jkfhrbqfv5vqcazlz
sql/item_cmpfunc.h sp1f-item_cmpfunc.h-19700101030959-pcvbjplo4e4ng7ibynfhcd6pjyem57gr
sql/item_subselect.cc sp1f-item_subselect.cc-20020512204640-qep43aqhsfrwkqmrobni6czc3fqj36oo
sql/item_subselect.h sp1f-item_subselect.h-20020512204640-qdg77wil56cxyhtc2bjjdrppxq3wqgh3
sql/mysql_priv.h sp1f-mysql_priv.h-19700101030959-4fl65tqpop5zfgxaxkqotu2fa2ree5ci
sql/mysqld.cc sp1f-mysqld.cc-19700101030959-zpswdvekpvixxzxf7gdtofzel7nywtfj
sql/opt_subselect.cc opt_subselect.cc-20100215190428-nekkl8wisp0k6nlk-1
sql/set_var.cc sp1f-set_var.cc-20020723153119-nwbpg2pwpz55pfw7yfzaxt7hsszzy7y3
sql/sql_class.cc sp1f-sql_class.cc-19700101030959-rpotnweaff2pikkozh3butrf7mv3oero
sql/sql_class.h sp1f-sql_class.h-19700101030959-jnqnbrjyqsvgncsibnumsmg3lyi7pa5s
sql/sql_select.cc sp1f-sql_select.cc-19700101030959-egb7whpkh76zzvikycs5nsnuviu4fdlb
=== modified file 'mysql-test/include/mix1.inc'
--- a/mysql-test/include/mix1.inc 2009-09-15 06:08:54 +0000
+++ b/mysql-test/include/mix1.inc 2010-03-11 21:43:31 +0000
@@ -1177,8 +1177,11 @@
create table t1 (a bit(1) not null,b int) engine=myisam;
create table t2 (c int) engine=innodb;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch='partial_match_rowid_merge=off,partial_match_table_scan=off';
explain
select b from t1 where a not in (select b from t1,t2 group by a) group by a;
+set optimizer_switch=@save_optimizer_switch;
DROP TABLE t1,t2;
--echo End of 5.0 tests
=== modified file 'mysql-test/r/index_merge_myisam.result'
--- a/mysql-test/r/index_merge_myisam.result 2010-01-17 14:51:10 +0000
+++ b/mysql-test/r/index_merge_myisam.result 2010-03-11 21:43:31 +0000
@@ -1419,19 +1419,19 @@
#
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='index_merge=off,index_merge_union=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='index_merge_union=on';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=off,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=off,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,index_merge_sort_union=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=off,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=off,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch=4;
ERROR 42000: Variable 'optimizer_switch' can't be set to the value of '4'
set optimizer_switch=NULL;
@@ -1458,21 +1458,21 @@
set optimizer_switch='index_merge=off,index_merge_union=off,default';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch=default;
select @@global.optimizer_switch;
@@global.optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set @@global.optimizer_switch=default;
select @@global.optimizer_switch;
@@global.optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
#
# Check index_merge's @@optimizer_switch flags
#
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
create table t0 (a int);
insert into t0 values (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);
create table t1 (a int, b int, c int, filler char(100),
@@ -1582,5 +1582,5 @@
set optimizer_switch=default;
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
drop table t0, t1;
=== modified file 'mysql-test/r/innodb_mysql.result'
--- a/mysql-test/r/innodb_mysql.result 2009-12-15 07:16:46 +0000
+++ b/mysql-test/r/innodb_mysql.result 2010-03-11 21:43:31 +0000
@@ -1425,12 +1425,15 @@
#
create table t1 (a bit(1) not null,b int) engine=myisam;
create table t2 (c int) engine=innodb;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch='partial_match_rowid_merge=off,partial_match_table_scan=off';
explain
select b from t1 where a not in (select b from t1,t2 group by a) group by a;
id select_type table type possible_keys key key_len ref rows Extra
1 PRIMARY NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
2 DEPENDENT SUBQUERY t1 system NULL NULL NULL NULL 0 const row not found
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 1
+set optimizer_switch=@save_optimizer_switch;
DROP TABLE t1,t2;
End of 5.0 tests
CREATE TABLE `t2` (
=== modified file 'mysql-test/r/myisam_mrr.result'
--- a/mysql-test/r/myisam_mrr.result 2010-01-17 14:51:10 +0000
+++ b/mysql-test/r/myisam_mrr.result 2010-03-11 21:43:31 +0000
@@ -394,7 +394,7 @@
# - engine_condition_pushdown does not affect ICP
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
create table t0 (a int);
insert into t0 values (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);
create table t1 (a int, b int, key(a));
=== modified file 'mysql-test/r/ps.result'
--- a/mysql-test/r/ps.result 2009-05-27 15:19:44 +0000
+++ b/mysql-test/r/ps.result 2010-03-11 21:43:31 +0000
@@ -149,6 +149,8 @@
c32 set('monday', 'tuesday', 'wednesday')
) engine = MYISAM ;
create table t2 like t1;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
set @stmt= ' explain SELECT (SELECT SUM(c1 + c12 + 0.0) FROM t2 where (t1.c2 - 0e-3) = t2.c2 GROUP BY t1.c15 LIMIT 1) as scalar_s, exists (select 1.0e+0 from t2 where t2.c3 * 9.0000000000 = t1.c4) as exists_s, c5 * 4 in (select c6 + 0.3e+1 from t2) as in_s, (c7 - 4, c8 - 4) in (select c9 + 4.0, c10 + 40e-1 from t2) as in_row_s FROM t1, (select c25 x, c32 y from t2) tt WHERE x * 1 = c25 ' ;
prepare stmt1 from @stmt ;
execute stmt1 ;
@@ -177,6 +179,7 @@
2 DEPENDENT SUBQUERY NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
deallocate prepare stmt1;
drop tables t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
set @arg00=1;
prepare stmt1 from ' create table t1 (m int) as select 1 as m ' ;
execute stmt1 ;
=== modified file 'mysql-test/r/subselect.result'
--- a/mysql-test/r/subselect.result 2010-02-17 21:59:41 +0000
+++ b/mysql-test/r/subselect.result 2010-03-11 21:43:31 +0000
@@ -1,4 +1,6 @@
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
select (select 2);
(select 2)
2
@@ -4803,4 +4805,5 @@
1
1
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
End of 5.1 tests.
=== modified file 'mysql-test/r/subselect3.result'
--- a/mysql-test/r/subselect3.result 2010-02-17 10:05:27 +0000
+++ b/mysql-test/r/subselect3.result 2010-03-11 21:43:31 +0000
@@ -63,12 +63,15 @@
select ' ^ This must show 11' Z;
Z
^ This must show 11
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
explain extended select a in (select max(ie) from t1 where oref=4 group by grp) from t3;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t3 ALL NULL NULL NULL NULL 2 100.00
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 6 100.00 Using where; Using temporary; Using filesort
Warnings:
Note 1003 select <in_optimizer>(`test`.`t3`.`a`,<exists>(select max(`test`.`t1`.`ie`) AS `max(ie)` from `test`.`t1` where (`test`.`t1`.`oref` = 4) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t3`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`)))))) AS `a in (select max(ie) from t1 where oref=4 group by grp)` from `test`.`t3`
+set @@optimizer_switch=@save_optimizer_switch;
drop table t1, t2, t3;
create table t1 (a int, oref int, key(a));
insert into t1 values
@@ -692,6 +695,8 @@
2 3 h
3 4 i
DROP TABLE t1, t2;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
CREATE TABLE t1 (a int);
CREATE TABLE t2 (b int, PRIMARY KEY(b));
INSERT INTO t1 VALUES (1), (NULL), (4);
@@ -759,6 +764,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 4 Using where
2 DEPENDENT SUBQUERY t2 unique_subquery PRIMARY PRIMARY 4 func 1 Using index; Using where
DROP TABLE t1, t2;
+set @@optimizer_switch=@save_optimizer_switch;
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES(1);
CREATE TABLE t2 (placeholder CHAR(11));
@@ -960,7 +966,7 @@
# Baseline:
SHOW STATUS LIKE '%Handler_read_rnd_next';
Variable_name Value
-Handler_read_rnd_next 17
+Handler_read_rnd_next 18
INSERT INTO t1 VALUES (NULL, NULL);
FLUSH STATUS;
@@ -977,7 +983,7 @@
# (read record from t1, but do not read from t2)
SHOW STATUS LIKE '%Handler_read_rnd_next';
Variable_name Value
-Handler_read_rnd_next 18
+Handler_read_rnd_next 19
DROP TABLE t1,t2;
End of 5.1 tests
CREATE TABLE t1 (
=== modified file 'mysql-test/r/subselect3_jcl6.result'
--- a/mysql-test/r/subselect3_jcl6.result 2010-02-17 10:47:55 +0000
+++ b/mysql-test/r/subselect3_jcl6.result 2010-03-11 21:43:31 +0000
@@ -67,12 +67,15 @@
select ' ^ This must show 11' Z;
Z
^ This must show 11
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
explain extended select a in (select max(ie) from t1 where oref=4 group by grp) from t3;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t3 ALL NULL NULL NULL NULL 2 100.00
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 6 100.00 Using where; Using temporary; Using filesort
Warnings:
Note 1003 select <in_optimizer>(`test`.`t3`.`a`,<exists>(select max(`test`.`t1`.`ie`) AS `max(ie)` from `test`.`t1` where (`test`.`t1`.`oref` = 4) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t3`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`)))))) AS `a in (select max(ie) from t1 where oref=4 group by grp)` from `test`.`t3`
+set @@optimizer_switch=@save_optimizer_switch;
drop table t1, t2, t3;
create table t1 (a int, oref int, key(a));
insert into t1 values
@@ -696,6 +699,8 @@
2 3 h
3 4 i
DROP TABLE t1, t2;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
CREATE TABLE t1 (a int);
CREATE TABLE t2 (b int, PRIMARY KEY(b));
INSERT INTO t1 VALUES (1), (NULL), (4);
@@ -763,6 +768,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 4 Using where
2 DEPENDENT SUBQUERY t2 unique_subquery PRIMARY PRIMARY 4 func 1 Using index; Using where
DROP TABLE t1, t2;
+set @@optimizer_switch=@save_optimizer_switch;
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES(1);
CREATE TABLE t2 (placeholder CHAR(11));
@@ -964,7 +970,7 @@
# Baseline:
SHOW STATUS LIKE '%Handler_read_rnd_next';
Variable_name Value
-Handler_read_rnd_next 17
+Handler_read_rnd_next 18
INSERT INTO t1 VALUES (NULL, NULL);
FLUSH STATUS;
@@ -981,7 +987,7 @@
# (read record from t1, but do not read from t2)
SHOW STATUS LIKE '%Handler_read_rnd_next';
Variable_name Value
-Handler_read_rnd_next 18
+Handler_read_rnd_next 19
DROP TABLE t1,t2;
End of 5.1 tests
CREATE TABLE t1 (
=== modified file 'mysql-test/r/subselect_no_mat.result'
--- a/mysql-test/r/subselect_no_mat.result 2010-02-21 07:33:54 +0000
+++ b/mysql-test/r/subselect_no_mat.result 2010-03-11 21:43:31 +0000
@@ -1,8 +1,10 @@
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='materialization=off';
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
select (select 2);
(select 2)
2
@@ -4807,8 +4809,9 @@
1
1
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
End of 5.1 tests.
set optimizer_switch=default;
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
=== modified file 'mysql-test/r/subselect_no_opts.result'
--- a/mysql-test/r/subselect_no_opts.result 2010-02-21 07:33:54 +0000
+++ b/mysql-test/r/subselect_no_opts.result 2010-03-11 21:43:31 +0000
@@ -1,8 +1,10 @@
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='materialization=off,semijoin=off';
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
select (select 2);
(select 2)
2
@@ -4807,8 +4809,9 @@
1
1
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
End of 5.1 tests.
set optimizer_switch=default;
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
=== modified file 'mysql-test/r/subselect_no_semijoin.result'
--- a/mysql-test/r/subselect_no_semijoin.result 2010-02-21 07:33:54 +0000
+++ b/mysql-test/r/subselect_no_semijoin.result 2010-03-11 21:43:31 +0000
@@ -1,8 +1,10 @@
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='semijoin=off';
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
select (select 2);
(select 2)
2
@@ -4807,8 +4809,9 @@
1
1
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
End of 5.1 tests.
set optimizer_switch=default;
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
=== modified file 'mysql-test/r/subselect_sj.result'
--- a/mysql-test/r/subselect_sj.result 2010-03-15 06:32:54 +0000
+++ b/mysql-test/r/subselect_sj.result 2010-03-15 15:09:35 +0000
@@ -202,39 +202,39 @@
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,materialization=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off,semijoin=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,materialization=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch=default;
drop table t0, t1, t2;
drop table t10, t11, t12;
=== modified file 'mysql-test/r/subselect_sj_jcl6.result'
--- a/mysql-test/r/subselect_sj_jcl6.result 2010-03-15 06:32:54 +0000
+++ b/mysql-test/r/subselect_sj_jcl6.result 2010-03-15 15:09:35 +0000
@@ -206,39 +206,39 @@
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,materialization=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off,semijoin=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,materialization=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch=default;
drop table t0, t1, t2;
drop table t10, t11, t12;
=== modified file 'mysql-test/t/ps.test'
--- a/mysql-test/t/ps.test 2009-05-27 15:19:44 +0000
+++ b/mysql-test/t/ps.test 2010-03-11 21:43:31 +0000
@@ -163,6 +163,9 @@
) engine = MYISAM ;
create table t2 like t1;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
set @stmt= ' explain SELECT (SELECT SUM(c1 + c12 + 0.0) FROM t2 where (t1.c2 - 0e-3) = t2.c2 GROUP BY t1.c15 LIMIT 1) as scalar_s, exists (select 1.0e+0 from t2 where t2.c3 * 9.0000000000 = t1.c4) as exists_s, c5 * 4 in (select c6 + 0.3e+1 from t2) as in_s, (c7 - 4, c8 - 4) in (select c9 + 4.0, c10 + 40e-1 from t2) as in_row_s FROM t1, (select c25 x, c32 y from t2) tt WHERE x * 1 = c25 ' ;
prepare stmt1 from @stmt ;
execute stmt1 ;
@@ -171,6 +174,8 @@
deallocate prepare stmt1;
drop tables t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
+
#
# parameters from variables (for field creation)
#
=== modified file 'mysql-test/t/subselect.test'
--- a/mysql-test/t/subselect.test 2010-01-17 20:52:20 +0000
+++ b/mysql-test/t/subselect.test 2010-03-11 21:43:31 +0000
@@ -11,6 +11,9 @@
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
--enable_warnings
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
select (select 2);
explain extended select (select 2);
SELECT (SELECT 1) UNION SELECT (SELECT 2);
@@ -4061,4 +4064,6 @@
(SELECT LAST_INSERT_ID() FROM t1 ORDER BY MIN(a) ASC LIMIT 1);
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
+
--echo End of 5.1 tests.
=== modified file 'mysql-test/t/subselect3.test'
--- a/mysql-test/t/subselect3.test 2010-01-17 14:51:10 +0000
+++ b/mysql-test/t/subselect3.test 2010-03-11 21:43:31 +0000
@@ -59,9 +59,13 @@
show status like 'Handler_read_rnd_next';
select ' ^ This must show 11' Z;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
# This must show trigcond:
explain extended select a in (select max(ie) from t1 where oref=4 group by grp) from t3;
+set @@optimizer_switch=@save_optimizer_switch;
drop table t1, t2, t3;
#
@@ -529,6 +533,9 @@
DROP TABLE t1, t2;
+# The next three test cases must be executed with the IN=>EXISTS strategy
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
#
# Bug #27870: crash of an equijoin query with WHERE condition containing
@@ -588,6 +595,8 @@
DROP TABLE t1, t2;
+set @@optimizer_switch=@save_optimizer_switch;
+
#
# Bug #34763: item_subselect.cc:1235:Item_in_subselect::row_value_transformer:
# Assertion failed, unexpected error message:
=== modified file 'sql/item_cmpfunc.h'
--- a/sql/item_cmpfunc.h 2010-03-13 20:04:52 +0000
+++ b/sql/item_cmpfunc.h 2010-03-15 14:34:56 +0000
@@ -350,6 +350,7 @@
CHARSET_INFO *compare_collation() { return cmp.cmp_collation.collation; }
uint decimal_precision() const { return 1; }
void top_level_item() { abort_on_null= TRUE; }
+ Arg_comparator *get_comparator() { return &cmp; }
friend class Arg_comparator;
};
=== modified file 'sql/item_subselect.cc'
--- a/sql/item_subselect.cc 2010-02-21 06:32:23 +0000
+++ b/sql/item_subselect.cc 2010-03-09 10:14:06 +0000
@@ -138,6 +138,7 @@
left_expr_cache= NULL;
}
first_execution= TRUE;
+ is_constant= FALSE;
Item_subselect::cleanup();
DBUG_VOID_RETURN;
}
@@ -449,8 +450,10 @@
int res;
if (thd->is_error())
- /* Do not execute subselect in case of a fatal error */
+ {
+ /* Do not execute subselect in case of a fatal error */
return 1;
+ }
/*
Simulate a failure in sub-query execution. Used to test e.g.
out of memory or query being killed conditions.
@@ -475,9 +478,6 @@
bool Item_in_subselect::exec()
{
DBUG_ENTER("Item_in_subselect::exec");
- DBUG_ASSERT(exec_method != MATERIALIZATION ||
- (exec_method == MATERIALIZATION &&
- engine->engine_type() == subselect_engine::HASH_SJ_ENGINE));
/*
Initialize the cache of the left predicate operand. This has to be done as
late as now, because Cached_item directly contains a resolved field (not
@@ -493,14 +493,14 @@
if (!left_expr_cache && exec_method == MATERIALIZATION)
init_left_expr_cache();
- /* If the new left operand is already in the cache, reuse the old result. */
- if (left_expr_cache && test_if_item_cache_changed(*left_expr_cache) < 0)
- {
- /* Always compute IN for the first row as the cache is not valid for it. */
- if (!first_execution)
- DBUG_RETURN(FALSE);
- first_execution= FALSE;
- }
+ /*
+ If the new left operand is already in the cache, reuse the old result.
+ Use the cached result only if this is not the first execution of IN
+ because the cache is not valid for the first execution.
+ */
+ if (!first_execution && left_expr_cache &&
+ test_if_item_cache_changed(*left_expr_cache) < 0)
+ DBUG_RETURN(FALSE);
/*
The exec() method below updates item::value, and item::null_value, thus if
@@ -910,8 +910,8 @@
Item_in_subselect::Item_in_subselect(Item * left_exp,
st_select_lex *select_lex):
Item_exists_subselect(), left_expr_cache(0), first_execution(TRUE),
- optimizer(0), pushed_cond_guards(NULL), exec_method(NOT_TRANSFORMED),
- upper_item(0)
+ is_constant(FALSE), optimizer(0), pushed_cond_guards(NULL),
+ exec_method(NOT_TRANSFORMED), upper_item(0)
{
DBUG_ENTER("Item_in_subselect::Item_in_subselect");
left_expr= left_exp;
@@ -1105,6 +1105,8 @@
{
DBUG_ASSERT(fixed == 1);
null_value= 0;
+ if (is_constant)
+ return value;
if (exec())
{
reset();
@@ -1571,9 +1573,9 @@
DBUG_ENTER("Item_in_subselect::row_value_transformer");
// psergey: duplicated_subselect_card_check
- if (select_lex->item_list.elements != left_expr->cols())
+ if (select_lex->item_list.elements != cols_num)
{
- my_error(ER_OPERAND_COLUMNS, MYF(0), left_expr->cols());
+ my_error(ER_OPERAND_COLUMNS, MYF(0), cols_num);
DBUG_RETURN(RES_ERROR);
}
@@ -1980,17 +1982,69 @@
bool Item_in_subselect::fix_fields(THD *thd_arg, Item **ref)
{
- bool result = 0;
+ uint outer_cols_num;
+ List<Item> *inner_cols;
if (exec_method == SEMI_JOIN)
return !( (*ref)= new Item_int(1));
- if (thd_arg->lex->view_prepare_mode && left_expr && !left_expr->fixed)
- result = left_expr->fix_fields(thd_arg, &left_expr);
-
- return result || Item_subselect::fix_fields(thd_arg, ref);
+ /*
+ Check if the outer and inner IN operands match in those cases when we
+ will not perform IN=>EXISTS transformation. Currently this is when we
+ use subquery materialization.
+
+ The condition below is true when this method was called recursively from
+ inside JOIN::prepare for the JOIN object created by the call chain
+ Item_subselect::fix_fields -> subselect_single_select_engine::prepare,
+ which creates a JOIN object for the subquery and calls JOIN::prepare for
+ the JOIN of the subquery.
+ Notice that in some cases, this doesn't happen, and the check_cols()
+ test for each Item happens later in
+ Item_in_subselect::row_value_in_to_exists_transformer.
+ The reason for this mess is that our JOIN::prepare phase works top-down
+ instead of bottom-up, so we first do name resolution and semantic checks
+ for the outer selects, then for the inner.
+ */
+ if (engine &&
+ engine->engine_type() == subselect_engine::SINGLE_SELECT_ENGINE &&
+ ((subselect_single_select_engine*)engine)->join)
+ {
+ outer_cols_num= left_expr->cols();
+
+ if (unit->is_union())
+ inner_cols= &(unit->types);
+ else
+ inner_cols= &(unit->first_select()->item_list);
+ if (outer_cols_num != inner_cols->elements)
+ {
+ my_error(ER_OPERAND_COLUMNS, MYF(0), outer_cols_num);
+ return TRUE;
+ }
+ if (outer_cols_num > 1)
+ {
+ List_iterator<Item> inner_col_it(*inner_cols);
+ Item *inner_col;
+ for (uint i= 0; i < outer_cols_num; i++)
+ {
+ inner_col= inner_col_it++;
+ if (inner_col->check_cols(left_expr->element_index(i)->cols()))
+ return TRUE;
+ }
+ }
+ }
+
+ if (thd_arg->lex->view_prepare_mode && left_expr && !left_expr->fixed &&
+ left_expr->fix_fields(thd_arg, &left_expr))
+ return TRUE;
+ if (Item_subselect::fix_fields(thd_arg, ref))
+ return TRUE;
+
+ fixed= TRUE;
+
+ return FALSE;
}
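
For illustration, the column-count check added to fix_fields() above reports
ER_OPERAND_COLUMNS as soon as the outer and inner operands disagree, instead
of waiting for the IN=>EXISTS transformation. A minimal sketch, assuming a
table t2 with a single integer column b:

  SELECT (1, 2) IN (SELECT b FROM t2);
  -- fails with ER_OPERAND_COLUMNS: "Operand should contain 2 column(s)"
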
+
void Item_in_subselect::fix_after_pullout(st_select_lex *new_parent, Item **ref)
{
left_expr->fix_after_pullout(new_parent, &left_expr);
@@ -2267,10 +2321,9 @@
void subselect_uniquesubquery_engine::cleanup()
{
DBUG_ENTER("subselect_uniquesubquery_engine::cleanup");
- /*
- subselect_uniquesubquery_engine have not 'result' assigbed, so we do not
- cleanup() it
- */
+ /* Tell handler we don't need the index anymore */
+ if (tab->table->file->inited)
+ tab->table->file->ha_index_end();
DBUG_VOID_RETURN;
}
@@ -2291,7 +2344,7 @@
Create and prepare the JOIN object that represents the query execution
plan for the subquery.
- @detail
+ @details
This method is called from Item_subselect::fix_fields. For prepared
statements it is called both during the PREPARE and EXECUTE phases in the
following ways:
@@ -2593,14 +2646,23 @@
for (;;)
{
error=table->file->ha_rnd_next(table->record[0]);
- if (error && error != HA_ERR_END_OF_FILE)
- {
- error= report_error(table, error);
- break;
+ if (error) {
+ if (error == HA_ERR_RECORD_DELETED)
+ {
+ error= 0;
+ continue;
+ }
+ if (error == HA_ERR_END_OF_FILE)
+ {
+ error= 0;
+ break;
+ }
+ else
+ {
+ error= report_error(table, error);
+ break;
+ }
}
- /* No more rows */
- if (table->status)
- break;
if (!cond || cond->val_int())
{
@@ -2711,6 +2773,56 @@
/*
+ @retval 1 A NULL was found in the outer reference, index lookup is
+ not applicable, the outer ref is unsusable as a lookup key,
+ not applicable, the outer ref is unusable as a lookup key,
+ @retval 0 The outer ref was copied into an index lookup key.
+ @retval -1 The outer ref cannot possibly match any row, IN is FALSE.
+*/
+/* TIMOUR: this method is a variant of copy_ref_key(), needs refactoring. */
+
+int subselect_uniquesubquery_engine::copy_ref_key_simple()
+{
+ for (store_key **copy= tab->ref.key_copy ; *copy ; copy++)
+ {
+ enum store_key::store_key_result store_res;
+ store_res= (*copy)->copy();
+ tab->ref.key_err= store_res;
+
+ /*
+ When there is a NULL part in the key we don't need to make an index
+ lookup for such a key, thus we don't need to copy the whole key.
+ If a sequential scan is to be done later, return OK; fail otherwise.
+
+ See also the comment for the subselect_uniquesubquery_engine::exec()
+ function.
+ */
+ null_keypart= (*copy)->null_key;
+ if (null_keypart)
+ return 1;
+
+ /*
+ Check if the error is equal to STORE_KEY_FATAL. This is not expressed
+ using the store_key::store_key_result enum because ref.key_err is a
+ boolean and we want to detect both TRUE and STORE_KEY_FATAL from the
+ space of the union of the values of [TRUE, FALSE] and
+ store_key::store_key_result.
+ TODO: fix the variable and return types.
+ */
+ if (store_res == store_key::STORE_KEY_FATAL)
+ {
+ /*
+ Error converting the left IN operand to the column type of the right
+ IN operand.
+ */
+ return -1;
+ }
+ }
+ return 0;
+}
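
The three return values above map to the three ways an outer reference can
relate to the hash index on the materialized table. As a hedged illustration
for a predicate like (a, b) IN (SELECT ...): if the current outer row has
a = NULL, no index lookup can match, so the function returns 1 and the caller
must fall back to a partial-match method; if a cannot be converted to the
column type of the materialized column at all (store_key::STORE_KEY_FATAL),
it returns -1 and the IN predicate is FALSE for this row; otherwise the key
is copied and 0 is returned.
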
+
+
+/*
Execute subselect
SYNOPSIS
@@ -2750,7 +2862,13 @@
/* TODO: change to use of 'full_scan' here? */
if (copy_ref_key())
+ {
+ /*
+ TIMOUR: copy_ref_key() == 1 means NULL result, not error, why return 1?
+ Check who relies on this result.
+ */
DBUG_RETURN(1);
+ }
if (table->status)
{
/*
@@ -2791,6 +2909,46 @@
}
+/*
+ TIMOUR: write comment
+*/
+
+int subselect_uniquesubquery_engine::index_lookup()
+{
+ DBUG_ENTER("subselect_uniquesubquery_engine::index_lookup");
+ int error;
+ TABLE *table= tab->table;
+
+ if (!table->file->inited)
+ table->file->ha_index_init(tab->ref.key, 0);
+ error= table->file->ha_index_read_map(table->record[0],
+ tab->ref.key_buff,
+ make_prev_keypart_map(tab->
+ ref.key_parts),
+ HA_READ_KEY_EXACT);
+ DBUG_PRINT("info", ("lookup result: %i", error));
+
+ if (error && error != HA_ERR_KEY_NOT_FOUND && error != HA_ERR_END_OF_FILE)
+ {
+ /*
+ TIMOUR: I don't understand at all when we need to call report_error.
+ In most places where we access an index, we don't do this. Why here?
+ */
+ error= report_error(table, error);
+ DBUG_RETURN(error);
+ }
+
+ table->null_row= 0;
+ if (!error && (!cond || cond->val_int()))
+ ((Item_in_subselect *) item)->value= 1;
+ else
+ ((Item_in_subselect *) item)->value= 0;
+
+ DBUG_RETURN(0);
+}
+
+
+
subselect_uniquesubquery_engine::~subselect_uniquesubquery_engine()
{
/* Tell handler we don't need the index anymore */
@@ -3225,6 +3383,7 @@
bool subselect_uniquesubquery_engine::no_tables()
{
/* returning value is correct, but this method should never be called */
+ DBUG_ASSERT(FALSE);
return 0;
}
@@ -3235,16 +3394,259 @@
/**
+ Check if an IN predicate should be executed via partial matching using
+ only schema information.
+
+ @details
+ This test essentially has three results:
+ - partial matching is applicable, but cannot be executed due to a
+ limitation in the total number of indexes, as a result we can't
+ use subquery materialization at all.
+ - partial matching is either applicable or not, and this can be
+ determined by looking at 'this->max_keys'.
+ If max_keys > 1, then we need partial matching because there are
+ more indexes than just the one we use during materialization to
+ remove duplicates.
+
+ @note
+ TIMOUR: The schema-based analysis for partial matching can be done once for
+ prepared statement and remembered. It is done here to remove the need to
+ save/restore all related variables between each re-execution, thus making
+ the code simpler.
+
+ @retval PARTIAL_MATCH if a partial match should be used
+ @retval COMPLETE_MATCH if a complete match (index lookup) should be used
+*/
+
+subselect_hash_sj_engine::exec_strategy
+subselect_hash_sj_engine::get_strategy_using_schema()
+{
+ Item_in_subselect *item_in= (Item_in_subselect *) item;
+
+ if (item_in->is_top_level_item())
+ return COMPLETE_MATCH;
+ else
+ {
+ List_iterator<Item> inner_col_it(*item_in->unit->get_unit_column_types());
+ Item *outer_col, *inner_col;
+
+ for (uint i= 0; i < item_in->left_expr->cols(); i++)
+ {
+ outer_col= item_in->left_expr->element_index(i);
+ inner_col= inner_col_it++;
+
+ if (!inner_col->maybe_null && !outer_col->maybe_null)
+ bitmap_set_bit(&non_null_key_parts, i);
+ else
+ {
+ bitmap_set_bit(&partial_match_key_parts, i);
+ ++count_partial_match_columns;
+ }
+ }
+ }
+
+ /* If no column contains NULLs use regular hash index lookups. */
+ if (count_partial_match_columns)
+ return PARTIAL_MATCH;
+ return COMPLETE_MATCH;
+}
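
Partial matching only matters when a NULL, on either side, can turn the IN
result into UNKNOWN rather than FALSE, which is visible to NOT IN and to IN
in a non-top-level position. A sketch, assuming tables t1(a INT) and
t2(b INT) where b is nullable:

  SELECT a FROM t1 WHERE a NOT IN (SELECT b FROM t2);
  -- if any t2.b is NULL, rows of t1 that match no b evaluate the inner IN
  -- to UNKNOWN, so NOT IN is also UNKNOWN and the row is filtered out;
  -- distinguishing this case from FALSE is what partial matching provides.
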
+
+
+/**
+ Test whether an IN predicate must be computed via partial matching
+ based on the NULL statistics for each column of a materialized subquery.
+
+ @details The procedure analyzes column NULL statistics, updates the
+ matching type of columns that cannot be NULL or that contain only NULLs.
+ Based on this, the procedure determines the final execution strategy for
+ the [NOT] IN predicate.
+
+ @retval PARTIAL_MATCH if a partial match should be used
+ @retval COMPLETE_MATCH if a complete match (index lookup) should be used
+*/
+
+subselect_hash_sj_engine::exec_strategy
+subselect_hash_sj_engine::get_strategy_using_data()
+{
+ Item_in_subselect *item_in= (Item_in_subselect *) item;
+ select_materialize_with_stats *result_sink=
+ (select_materialize_with_stats *) result;
+ Item *outer_col;
+
+ /*
+ If we already determined that a complete match is enough based on schema
+ information, nothing can be better.
+ */
+ if (strategy == COMPLETE_MATCH)
+ return COMPLETE_MATCH;
+
+ for (uint i= 0; i < item_in->left_expr->cols(); i++)
+ {
+ if (!bitmap_is_set(&partial_match_key_parts, i))
+ continue;
+ outer_col= item_in->left_expr->element_index(i);
+ /*
+ If column 'i' doesn't contain NULLs, and the corresponding outer reference
+ cannot have a NULL value, then 'i' is a non-nullable column.
+ */
+ if (result_sink->get_null_count_of_col(i) == 0 && !outer_col->maybe_null)
+ {
+ bitmap_clear_bit(&partial_match_key_parts, i);
+ bitmap_set_bit(&non_null_key_parts, i);
+ --count_partial_match_columns;
+ }
+ if (result_sink->get_null_count_of_col(i) ==
+ tmp_table->file->stats.records)
+ ++count_null_only_columns;
+ }
+
+ /* If no column contains NULLs use regular hash index lookups. */
+ if (!count_partial_match_columns)
+ return COMPLETE_MATCH;
+ return PARTIAL_MATCH;
+}
+
+
+void
+subselect_hash_sj_engine::choose_partial_match_strategy(
+ bool has_non_null_key, bool has_covering_null_row,
+ MY_BITMAP *partial_match_key_parts)
+{
+ size_t pm_buff_size;
+
+ DBUG_ASSERT(strategy == PARTIAL_MATCH);
+ /*
+ Choose according to global optimizer switch. If only one of the switches is
+ 'ON', then the remaining strategy is the only possible one. The only case
+ when this will be overridden is when the total size of all buffers for the
+ merge strategy is bigger than the 'rowid_merge_buff_size' system variable,
+ or if there isn't enough physical memory to allocate the buffers.
+ */
+ if (!optimizer_flag(thd, OPTIMIZER_SWITCH_PARTIAL_MATCH_ROWID_MERGE) &&
+ optimizer_flag(thd, OPTIMIZER_SWITCH_PARTIAL_MATCH_TABLE_SCAN))
+ strategy= PARTIAL_MATCH_SCAN;
+ else if
+ ( optimizer_flag(thd, OPTIMIZER_SWITCH_PARTIAL_MATCH_ROWID_MERGE) &&
+ !optimizer_flag(thd, OPTIMIZER_SWITCH_PARTIAL_MATCH_TABLE_SCAN))
+ strategy= PARTIAL_MATCH_MERGE;
+
+ /*
+ If both switches are ON, or both are OFF, we interpret that as "let the
+ optimizer decide". Perform a cost based choice between the two partial
+ matching strategies.
+ */
+ /*
+ TIMOUR: the above interpretation of the switch values could be changed to:
+ - if both are ON - let the optimizer decide,
+ - if both are OFF - do not use partial matching, therefore do not use
+ materialization in non-top-level predicates.
+ The problem with this is that we know for sure if we need partial matching
+ only after the subquery is materialized, and this is too late to revert to
+ the IN=>EXISTS strategy.
+ */
+ if (strategy == PARTIAL_MATCH)
+ {
+ /*
+ TIMOUR: Currently we use a super simplistic measure. This will be
+ addressed in a separate task.
+ */
+ if (tmp_table->file->stats.records < 100)
+ strategy= PARTIAL_MATCH_SCAN;
+ else
+ strategy= PARTIAL_MATCH_MERGE;
+ }
+
+ /* Check if there is enough memory for the rowid merge strategy. */
+ if (strategy == PARTIAL_MATCH_MERGE)
+ {
+ pm_buff_size= rowid_merge_buff_size(has_non_null_key,
+ has_covering_null_row,
+ partial_match_key_parts);
+ if (pm_buff_size > thd->variables.rowid_merge_buff_size)
+ strategy= PARTIAL_MATCH_SCAN;
+ }
+}
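
The two optimizer_switch flags consulted above can pin the choice from SQL;
the syntax below is the same one the updated test files use:

  SET optimizer_switch='partial_match_rowid_merge=off,partial_match_table_scan=on';
  -- only the table-scan strategy remains available
  SET optimizer_switch='default';
  -- both flags on again: cost-based choice, subject to @@rowid_merge_buff_size
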
+
+
+/*
+ Compute the memory size of all buffers proportional to the number of rows
+ in tmp_table.
+
+ @details
+ If the result is bigger than thd->variables.rowid_merge_buff_size, partial
+ matching via merging is not applicable.
+*/
+
+size_t subselect_hash_sj_engine::rowid_merge_buff_size(
+ bool has_non_null_key, bool has_covering_null_row,
+ MY_BITMAP *partial_match_key_parts)
+{
+ size_t buff_size; /* Total size of all buffers used by partial matching. */
+ ha_rows row_count= tmp_table->file->stats.records;
+ uint rowid_length= tmp_table->file->ref_length;
+ select_materialize_with_stats *result_sink=
+ (select_materialize_with_stats *) result;
+
+ /* Size of the subselect_rowid_merge_engine::row_num_to_rowid buffer. */
+ buff_size= row_count * rowid_length * sizeof(uchar);
+
+ if (has_non_null_key)
+ {
+ /* Add the size of Ordered_key::key_buff of the only non-NULL key. */
+ buff_size+= row_count * sizeof(rownum_t);
+ }
+
+ if (!has_covering_null_row)
+ {
+ for (uint i= 0; i < partial_match_key_parts->n_bits; i++)
+ {
+ if (!bitmap_is_set(partial_match_key_parts, i) ||
+ result_sink->get_null_count_of_col(i) == row_count)
+ continue; /* In these cases we wouldn't construct Ordered keys. */
+
+ /* Add the size of Ordered_key::key_buff */
+ buff_size+= (row_count - result_sink->get_null_count_of_col(i)) *
+ sizeof(rownum_t);
+ /* Add the size of Ordered_key::null_key */
+ buff_size+= bitmap_buffer_size(result_sink->get_max_null_of_col(i));
+ }
+ }
+
+ return buff_size;
+}
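
A rough worked example of the formula above, with illustrative numbers only
(assuming 6-byte rowids and 8-byte row numbers): for 1,000,000 materialized
rows the row_num_to_rowid map needs about 6 MB; one NULL-able keyed column
with 100,000 NULLs adds (1,000,000 - 100,000) * 8 bytes, roughly 7.2 MB, for
its Ordered_key::key_buff, plus a null bitmap of at most about 125 KB. If the
total exceeds @@rowid_merge_buff_size, choose_partial_match_strategy() falls
back to PARTIAL_MATCH_SCAN.
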
+
+
+/*
+ Initialize a MY_BITMAP with a buffer allocated on the current
+ memory root.
+ TIMOUR: move to bitmap C file?
+*/
+
+static my_bool
+bitmap_init_memroot(MY_BITMAP *map, uint n_bits, MEM_ROOT *mem_root)
+{
+ my_bitmap_map *bitmap_buf;
+
+ if (!(bitmap_buf= (my_bitmap_map*) alloc_root(mem_root,
+ bitmap_buffer_size(n_bits))) ||
+ bitmap_init(map, bitmap_buf, n_bits, FALSE))
+ return TRUE;
+ bitmap_clear_all(map);
+ return FALSE;
+}
+
+
+/**
Create all structures needed for IN execution that can live between PS
reexecution.
- @detail
+ @param tmp_columns the items that produce the data for the temp table
+
+ @details
- Create a temporary table to store the result of the IN subquery. The
temporary table has one hash index on all its columns.
- Create a new result sink that sends the result stream of the subquery to
the temporary table,
- - Create and initialize a new JOIN_TAB, and TABLE_REF objects to perform
- lookups into the indexed temporary table.
@notice:
Currently Item_subselect::init() already chooses and creates at parse
@@ -3256,71 +3658,178 @@
bool subselect_hash_sj_engine::init_permanent(List<Item> *tmp_columns)
{
- /* The result sink where we will materialize the subquery result. */
- select_union *tmp_result_sink;
- /* The table into which the subquery is materialized. */
- TABLE *tmp_table;
- KEY *tmp_key; /* The only index on the temporary table. */
- uint tmp_key_parts; /* Number of keyparts in tmp_key. */
- Item_in_subselect *item_in= (Item_in_subselect *) item;
+ /* Options to create_tmp_table. */
+ ulonglong tmp_create_options= thd->options | TMP_TABLE_ALL_COLUMNS;
+ /* | TMP_TABLE_FORCE_MYISAM; TIMOUR: force MYISAM */
DBUG_ENTER("subselect_hash_sj_engine::init_permanent");
- /* 1. Create/initialize materialization related objects. */
+ if (bitmap_init_memroot(&non_null_key_parts, tmp_columns->elements,
+ thd->mem_root) ||
+ bitmap_init_memroot(&partial_match_key_parts, tmp_columns->elements,
+ thd->mem_root))
+ DBUG_RETURN(TRUE);
/*
Create and initialize a select result interceptor that stores the
result stream in a temporary table. The temporary table itself is
managed (created/filled/etc) internally by the interceptor.
*/
- if (!(tmp_result_sink= new select_union))
- DBUG_RETURN(TRUE);
- if (tmp_result_sink->create_result_table(
- thd, tmp_columns, TRUE,
- thd->options | TMP_TABLE_ALL_COLUMNS,
+/*
+ TIMOUR:
+ Select a more efficient result sink when we know there is no need to collect
+ data statistics.
+
+ if (strategy == COMPLETE_MATCH)
+ {
+ if (!(result= new select_union))
+ DBUG_RETURN(TRUE);
+ }
+ else if (strategy == PARTIAL_MATCH)
+ {
+ if (!(result= new select_materialize_with_stats))
+ DBUG_RETURN(TRUE);
+ }
+*/
+ if (!(result= new select_materialize_with_stats))
+ DBUG_RETURN(TRUE);
+
+ if (((select_union*) result)->create_result_table(
+ thd, tmp_columns, TRUE, tmp_create_options,
"materialized subselect", TRUE))
DBUG_RETURN(TRUE);
- tmp_table= tmp_result_sink->table;
- tmp_key= tmp_table->key_info;
- tmp_key_parts= tmp_key->key_parts;
+ tmp_table= ((select_union*) result)->table;
/*
- If the subquery has blobs, or the total key lenght is bigger than some
- length, then the created index cannot be used for lookups and we
- can't use hash semi join. If this is the case, delete the temporary
- table since it will not be used, and tell the caller we failed to
- initialize the engine.
+ If the subquery has blobs, or the total key length is bigger than
+ some length, or the total number of key parts is more than the
+ allowed maximum (currently MAX_REF_PARTS == 16), then the created
+ index cannot be used for lookups and we can't use hash semi
+ join. If this is the case, delete the temporary table since it
+ will not be used, and tell the caller we failed to initialize the
+ engine.
*/
if (tmp_table->s->keys == 0)
{
-#ifndef DBUG_OFF
- handlerton *tmp_table_hton= tmp_table->s->db_type();
-#ifdef USE_MARIA_FOR_TMP_TABLES
- DBUG_ASSERT(tmp_table_hton == maria_hton);
-#else
- DBUG_ASSERT(tmp_table_hton == myisam_hton);
-#endif
-#endif
DBUG_ASSERT(
tmp_table->s->uniques ||
tmp_table->key_info->key_length >= tmp_table->file->max_key_length() ||
tmp_table->key_info->key_parts > tmp_table->file->max_key_parts());
free_tmp_table(thd, tmp_table);
+ tmp_table= NULL;
delete result;
result= NULL;
DBUG_RETURN(TRUE);
}
- result= tmp_result_sink;
/*
Make sure there is only one index on the temp table, and it doesn't have
the extra key part created when s->uniques > 0.
*/
- DBUG_ASSERT(tmp_table->s->keys == 1 && tmp_columns->elements == tmp_key_parts);
-
-
- /* 2. Create/initialize execution related objects. */
+ DBUG_ASSERT(tmp_table->s->keys == 1 &&
+ ((Item_in_subselect *) item)->left_expr->cols() ==
+ tmp_table->key_info->key_parts);
+
+ if (make_semi_join_conds() ||
+ /* A unique_engine is used both for complete and partial matching. */
+ !(lookup_engine= make_unique_engine()))
+ DBUG_RETURN(TRUE);
+
+ DBUG_RETURN(FALSE);
+}
+
+
+/*
+ Create an artificial condition to post-filter those rows matched by index
+ lookups that cannot be distinguished by the index lookup procedure.
+
+ @notes
+ The need for post-filtering may occur e.g. because of
+ truncation. Prepared statements execution requires that fix_fields is
+ called for every execution. In order to call fix_fields we need to
+ create a Name_resolution_context and a corresponding TABLE_LIST for
+ the temporary table for the subquery, so that all column references
+ to the materialized subquery table can be resolved correctly.
+
+ @returns
+ @retval TRUE memory allocation error occurred
+ @retval FALSE the conditions were created and resolved (fixed)
+*/
+
+bool subselect_hash_sj_engine::make_semi_join_conds()
+{
+ /*
+ Table reference for tmp_table that is used to resolve column references
+ (Item_fields) to columns in tmp_table.
+ */
+ TABLE_LIST *tmp_table_ref;
+ /* Name resolution context for all tmp_table columns created below. */
+ Name_resolution_context *context;
+ Item_in_subselect *item_in= (Item_in_subselect *) item;
+
+ DBUG_ENTER("subselect_hash_sj_engine::make_semi_join_conds");
+ DBUG_ASSERT(semi_join_conds == NULL);
+
+ if (!(semi_join_conds= new Item_cond_and))
+ DBUG_RETURN(TRUE);
+
+ if (!(tmp_table_ref= (TABLE_LIST*) thd->alloc(sizeof(TABLE_LIST))))
+ DBUG_RETURN(TRUE);
+
+ tmp_table_ref->init_one_table("", "materialized subselect", TL_READ);
+ tmp_table_ref->table= tmp_table;
+
+ context= new Name_resolution_context;
+ context->init();
+ context->first_name_resolution_table=
+ context->last_name_resolution_table= tmp_table_ref;
+
+ for (uint i= 0; i < item_in->left_expr->cols(); i++)
+ {
+ Item_func_eq *eq_cond; /* New equi-join condition for the current column. */
+ /* Item for the corresponding field from the materialized temp table. */
+ Item_field *right_col_item;
+
+ if (!(right_col_item= new Item_field(thd, context, tmp_table->field[i])) ||
+ !(eq_cond= new Item_func_eq(item_in->left_expr->element_index(i),
+ right_col_item)) ||
+ (((Item_cond_and*)semi_join_conds)->add(eq_cond)))
+ {
+ delete semi_join_conds;
+ semi_join_conds= NULL;
+ DBUG_RETURN(TRUE);
+ }
+ }
+ if (semi_join_conds->fix_fields(thd, (Item**)&semi_join_conds))
+ DBUG_RETURN(TRUE);
+
+ DBUG_RETURN(FALSE);
+}
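
The artificial condition built above is easiest to read as SQL. For a
predicate like

  (t1.a, t1.b) IN (SELECT x, y FROM t2)

with the subquery materialized into a temporary table mat(x, y), the
generated semi_join_conds is essentially

  t1.a = mat.x AND t1.b = mat.y

evaluated against the row located by the hash-index lookup (or against each
candidate row during a partial-match scan), so that values the index cannot
distinguish, for example after truncation, are still filtered correctly.
Here t1, t2 and mat are illustrative names, not identifiers from the patch.
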
+
+
+/**
+ Create a new uniquesubquery engine for the execution of an IN predicate.
+
+ @details
+ Create and initialize a new JOIN_TAB, and TABLE_REF objects to perform
+ lookups into the indexed temporary table.
+
+ @retval A new subselect_uniquesubquery_engine object
+ @retval NULL if a memory allocation error occurs
+*/
+
+subselect_uniquesubquery_engine*
+subselect_hash_sj_engine::make_unique_engine()
+{
+ Item_in_subselect *item_in= (Item_in_subselect *) item;
+ /* The only index on the temporary table. */
+ KEY *tmp_key= tmp_table->key_info;
+ /* Number of keyparts in tmp_key. */
+ uint tmp_key_parts= tmp_key->key_parts;
+ JOIN_TAB *tab;
+
+ DBUG_ENTER("subselect_hash_sj_engine::make_unique_engine");
/*
Create and initialize the JOIN_TAB that represents an index lookup
@@ -3328,9 +3837,9 @@
- this JOIN_TAB has no corresponding JOIN (and doesn't need one), and
- here we initialize only those members that are used by
subselect_uniquesubquery_engine, so these objects are incomplete.
- */
+ */
if (!(tab= (JOIN_TAB*) thd->alloc(sizeof(JOIN_TAB))))
- DBUG_RETURN(TRUE);
+ DBUG_RETURN(NULL);
tab->table= tmp_table;
tab->ref.key= 0; /* The only temp table index. */
tab->ref.key_length= tmp_key->key_length;
@@ -3341,60 +3850,18 @@
(tmp_key_parts + 1)))) ||
!(tab->ref.items=
(Item**) thd->alloc(sizeof(Item*) * tmp_key_parts)))
- DBUG_RETURN(TRUE);
+ DBUG_RETURN(NULL);
KEY_PART_INFO *cur_key_part= tmp_key->key_part;
store_key **ref_key= tab->ref.key_copy;
uchar *cur_ref_buff= tab->ref.key_buff;
-
- /*
- Create an artificial condition to post-filter those rows matched by index
- lookups that cannot be distinguished by the index lookup procedure, e.g.
- because of truncation. Prepared statements execution requires that
- fix_fields is called for every execution. In order to call fix_fields we
- need to create a Name_resolution_context and a corresponding TABLE_LIST
- for the temporary table for the subquery, so that all column references
- to the materialized subquery table can be resolved correctly.
- */
- DBUG_ASSERT(cond == NULL);
- if (!(cond= new Item_cond_and))
- DBUG_RETURN(TRUE);
- /*
- Table reference for tmp_table that is used to resolve column references
- (Item_fields) to columns in tmp_table.
- */
- TABLE_LIST *tmp_table_ref;
- if (!(tmp_table_ref= (TABLE_LIST*) thd->alloc(sizeof(TABLE_LIST))))
- DBUG_RETURN(TRUE);
-
- tmp_table_ref->init_one_table("", "materialized subselect", TL_READ);
- tmp_table_ref->table= tmp_table;
-
- /* Name resolution context for all tmp_table columns created below. */
- Name_resolution_context *context= new Name_resolution_context;
- context->init();
- context->first_name_resolution_table=
- context->last_name_resolution_table= tmp_table_ref;
for (uint i= 0; i < tmp_key_parts; i++, cur_key_part++, ref_key++)
{
- Item_func_eq *eq_cond; /* New equi-join condition for the current column. */
- /* Item for the corresponding field from the materialized temp table. */
- Item_field *right_col_item;
+ tab->ref.items[i]= item_in->left_expr->element_index(i);
int null_count= test(cur_key_part->field->real_maybe_null());
- tab->ref.items[i]= item_in->left_expr->element_index(i);
-
- if (!(right_col_item= new Item_field(thd, context, cur_key_part->field)) ||
- !(eq_cond= new Item_func_eq(tab->ref.items[i], right_col_item)) ||
- ((Item_cond_and*)cond)->add(eq_cond))
- {
- delete cond;
- cond= NULL;
- DBUG_RETURN(TRUE);
- }
-
*ref_key= new store_key_item(thd, cur_key_part->field,
- /* TODO:
+ /* TIMOUR:
the NULL byte is taken into account in
cur_key_part->store_length, so instead of
cur_ref_buff + test(maybe_null), we could
@@ -3409,10 +3876,8 @@
tab->ref.key_err= 1;
tab->ref.key_parts= tmp_key_parts;
- if (cond->fix_fields(thd, &cond))
- DBUG_RETURN(TRUE);
-
- DBUG_RETURN(FALSE);
+ DBUG_RETURN(new subselect_uniquesubquery_engine(thd, tab, item,
+ semi_join_conds));
}
@@ -3435,7 +3900,8 @@
Repeat name resolution for 'cond' since cond is not part of any
clause of the query, and it is not 'fixed' during JOIN::prepare.
*/
- if (cond && !cond->fixed && cond->fix_fields(thd, &cond))
+ if (semi_join_conds && !semi_join_conds->fixed &&
+ semi_join_conds->fix_fields(thd, (Item**)&semi_join_conds))
return TRUE;
/* Let our engine reuse this query plan for materialization. */
materialize_join= materialize_engine->join;
@@ -3446,32 +3912,53 @@
subselect_hash_sj_engine::~subselect_hash_sj_engine()
{
+ delete lookup_engine;
delete result;
- if (tab)
- free_tmp_table(thd, tab->table);
+ if (tmp_table)
+ free_tmp_table(thd, tmp_table);
}
/**
Cleanup performed after each PS execution.
- @detail
+ @details
Called in the end of JOIN::prepare for PS from Item_subselect::cleanup.
*/
void subselect_hash_sj_engine::cleanup()
{
+ enum_engine_type lookup_engine_type= lookup_engine->engine_type();
is_materialized= FALSE;
+ bitmap_clear_all(&non_null_key_parts);
+ bitmap_clear_all(&partial_match_key_parts);
+ count_partial_match_columns= 0;
+ count_null_only_columns= 0;
+ strategy= UNDEFINED;
+ materialize_engine->cleanup();
+ if (lookup_engine_type == TABLE_SCAN_ENGINE ||
+ lookup_engine_type == ROWID_MERGE_ENGINE)
+ {
+ subselect_engine *inner_lookup_engine;
+ inner_lookup_engine=
+ ((subselect_partial_match_engine*) lookup_engine)->lookup_engine;
+ /*
+ Partial match engines are recreated for each PS execution inside
+ subselect_hash_sj_engine::exec().
+ */
+ delete lookup_engine;
+ lookup_engine= inner_lookup_engine;
+ }
+ DBUG_ASSERT(lookup_engine->engine_type() == UNIQUESUBQUERY_ENGINE);
+ lookup_engine->cleanup();
result->cleanup(); /* Resets the temp table as well. */
- materialize_engine->cleanup();
- subselect_uniquesubquery_engine::cleanup();
}
/**
Execute a subquery IN predicate via materialization.
- @detail
+ @details
If needed materialize the subquery into a temporary table, then
copmpute the predicate via a lookup into this table.
@@ -3482,6 +3969,9 @@
int subselect_hash_sj_engine::exec()
{
Item_in_subselect *item_in= (Item_in_subselect *) item;
+ SELECT_LEX *save_select= thd->lex->current_select;
+ subselect_partial_match_engine *pm_engine= NULL;
+ int res= 0;
DBUG_ENTER("subselect_hash_sj_engine::exec");
@@ -3489,56 +3979,126 @@
Optimize and materialize the subquery during the first execution of
the subquery predicate.
*/
- if (!is_materialized)
- {
- int res= 0;
- SELECT_LEX *save_select= thd->lex->current_select;
- thd->lex->current_select= materialize_engine->select_lex;
- if ((res= materialize_join->optimize()))
- goto err; /* purecov: inspected */
- materialize_join->exec();
- if ((res= test(materialize_join->error || thd->is_fatal_error)))
- goto err;
-
- /*
- TODO:
- - Unlock all subquery tables as we don't need them. To implement this
- we need to add new functionality to JOIN::join_free that can unlock
- all tables in a subquery (and all its subqueries).
- - The temp table used for grouping in the subquery can be freed
- immediately after materialization (yet it's done together with
- unlocking).
- */
- is_materialized= TRUE;
- /*
- If the subquery returned no rows, the temporary table is empty, so we know
- directly that the result of IN is FALSE. We first update the table
- statistics, then we test if the temporary table for the query result is
- empty.
- */
- tab->table->file->info(HA_STATUS_VARIABLE);
- if (!tab->table->file->stats.records)
- {
- empty_result_set= TRUE;
- item_in->value= FALSE;
- /* TODO: check we need this: item_in->null_value= FALSE; */
- DBUG_RETURN(FALSE);
- }
- /* Set tmp_param only if its usable, i.e. tmp_param->copy_field != NULL. */
- tmp_param= &(item_in->unit->outer_select()->join->tmp_table_param);
- if (tmp_param && !tmp_param->copy_field)
- tmp_param= NULL;
+ thd->lex->current_select= materialize_engine->select_lex;
+ if ((res= materialize_join->optimize()))
+ goto err; /* purecov: inspected */
+ DBUG_ASSERT(!is_materialized); /* We should materialize only once. */
+ materialize_join->exec();
+ if ((res= test(materialize_join->error || thd->is_fatal_error)))
+ goto err;
+
+ /*
+ TODO:
+ - Unlock all subquery tables as we don't need them. To implement this
+ we need to add new functionality to JOIN::join_free that can unlock
+ all tables in a subquery (and all its subqueries).
+ - The temp table used for grouping in the subquery can be freed
+ immediately after materialization (yet it's done together with
+ unlocking).
+ */
+ is_materialized= TRUE;
+ /*
+ If the subquery returned no rows, the temporary table is empty, so we know
+ directly that the result of IN is FALSE. We first update the table
+ statistics, then we test if the temporary table for the query result is
+ empty.
+ */
+ tmp_table->file->info(HA_STATUS_VARIABLE);
+ if (!tmp_table->file->stats.records)
+ {
+ item_in->value= FALSE;
+ /* The value of IN will not change during this execution. */
+ item_in->is_constant= TRUE;
+ item_in->set_first_execution();
+ /* TIMOUR: check if we need this: item_in->null_value= FALSE; */
+ DBUG_RETURN(FALSE);
+ }
+
+ /*
+ TIMOUR: The schema-based analysis for partial matching can be done once for
+ prepared statement and remembered. It is done here to remove the need to
+ save/restore all related variables between each re-execution, thus making
+ the code simpler.
+ */
+ strategy= get_strategy_using_schema();
+ /* This call may discover that we don't need partial matching at all. */
+ strategy= get_strategy_using_data();
+ if (strategy == PARTIAL_MATCH)
+ {
+ uint count_pm_keys; /* Total number of keys needed for partial matching. */
+ MY_BITMAP *nn_key_parts; /* The key parts of the only non-NULL index. */
+ uint covering_null_row_width;
+ select_materialize_with_stats *result_sink=
+ (select_materialize_with_stats *) result;
+
+ nn_key_parts= (count_partial_match_columns < tmp_table->s->fields) ?
+ &non_null_key_parts : NULL;
+
+ if (result_sink->get_max_nulls_in_row() ==
+ tmp_table->s->fields -
+ (nn_key_parts ? bitmap_bits_set(nn_key_parts) : 0))
+ covering_null_row_width= result_sink->get_max_nulls_in_row();
+ else
+ covering_null_row_width= 0;
+
+ if (covering_null_row_width)
+ count_pm_keys= nn_key_parts ? 1 : 0;
+ else
+ count_pm_keys= count_partial_match_columns - count_null_only_columns +
+ (nn_key_parts ? 1 : 0);
+
+ choose_partial_match_strategy(test(nn_key_parts),
+ test(covering_null_row_width),
+ &partial_match_key_parts);
+ DBUG_ASSERT(strategy == PARTIAL_MATCH_MERGE ||
+ strategy == PARTIAL_MATCH_SCAN);
+ if (strategy == PARTIAL_MATCH_MERGE)
+ {
+ pm_engine=
+ new subselect_rowid_merge_engine((subselect_uniquesubquery_engine*)
+ lookup_engine, tmp_table,
+ count_pm_keys,
+ covering_null_row_width,
+ item, result,
+ semi_join_conds->argument_list());
+ if (!pm_engine ||
+ ((subselect_rowid_merge_engine*) pm_engine)->
+ init(nn_key_parts, &partial_match_key_parts))
+ {
+ /*
+ The call to init() would fail if there was not enough memory to allocate
+ all buffers for the rowid merge strategy. In this case revert to table
+ scanning which doesn't need any big buffers.
+ */
+ delete pm_engine;
+ pm_engine= NULL;
+ strategy= PARTIAL_MATCH_SCAN;
+ }
+ }
+
+ if (strategy == PARTIAL_MATCH_SCAN)
+ {
+ if (!(pm_engine=
+ new subselect_table_scan_engine((subselect_uniquesubquery_engine*)
+ lookup_engine, tmp_table,
+ item, result,
+ semi_join_conds->argument_list(),
+ covering_null_row_width)))
+ {
+ /* This is an irrecoverable error. */
+ res= 1;
+ goto err;
+ }
+ }
+ }
+
+ if (pm_engine)
+ lookup_engine= pm_engine;
+ item_in->change_engine(lookup_engine);
err:
- thd->lex->current_select= save_select;
- if (res)
- DBUG_RETURN(res);
- }
-
- /*
- Lookup the left IN operand in the hash index of the materialized subquery.
- */
- DBUG_RETURN(subselect_uniquesubquery_engine::exec());
+ thd->lex->current_select= save_select;
+ DBUG_RETURN(res);
}
@@ -3551,10 +4111,1008 @@
str->append(STRING_WITH_LEN(" <materialize> ("));
materialize_engine->print(str, query_type);
str->append(STRING_WITH_LEN(" ), "));
- if (tab)
- subselect_uniquesubquery_engine::print(str, query_type);
+
+ if (lookup_engine)
+ lookup_engine->print(str, query_type);
else
str->append(STRING_WITH_LEN(
- "<the access method for lookups is not yet created>"
+ "<engine selected at execution time>"
));
}
+
+void subselect_hash_sj_engine::fix_length_and_dec(Item_cache** row)
+{
+ DBUG_ASSERT(FALSE);
+}
+
+void subselect_hash_sj_engine::exclude()
+{
+ DBUG_ASSERT(FALSE);
+}
+
+bool subselect_hash_sj_engine::no_tables()
+{
+ DBUG_ASSERT(FALSE);
+ return FALSE;
+}
+
+bool subselect_hash_sj_engine::change_result(Item_subselect *si,
+ select_result_interceptor *res)
+{
+ DBUG_ASSERT(FALSE);
+ return TRUE;
+}
+
+
+Ordered_key::Ordered_key(uint keyid_arg, TABLE *tbl_arg, Item *search_key_arg,
+ ha_rows null_count_arg, ha_rows min_null_row_arg,
+ ha_rows max_null_row_arg, uchar *row_num_to_rowid_arg)
+ : keyid(keyid_arg), tbl(tbl_arg), search_key(search_key_arg),
+ row_num_to_rowid(row_num_to_rowid_arg), null_count(null_count_arg)
+{
+ DBUG_ASSERT(tbl->file->stats.records > null_count);
+ key_buff_elements= tbl->file->stats.records - null_count;
+ cur_key_idx= HA_POS_ERROR;
+
+ DBUG_ASSERT((null_count && min_null_row_arg && max_null_row_arg) ||
+ (!null_count && !min_null_row_arg && !max_null_row_arg));
+ if (null_count)
+ {
+ /* The counters are 1-based, for key access we need 0-based indexes. */
+ min_null_row= min_null_row_arg - 1;
+ max_null_row= max_null_row_arg - 1;
+ }
+ else
+ min_null_row= max_null_row= 0;
+}
+
+
+Ordered_key::~Ordered_key()
+{
+ my_free((char*) key_buff, MYF(0));
+ bitmap_free(&null_key);
+}
+
+
+/*
+ Cleanup that needs to be done for each PS (re)execution.
+*/
+
+void Ordered_key::cleanup()
+{
+ /*
+ Currently these keys are recreated for each PS re-execution, thus
+ there is nothing to cleanup, the whole object goes away after execution
+ is over. All handler related initialization/deinitialization is done by
+ the parent subselect_rowid_merge_engine object.
+ */
+}
+
+
+/*
+ Initialize a multi-column index.
+*/
+
+bool Ordered_key::init(MY_BITMAP *columns_to_index)
+{
+ THD *thd= tbl->in_use;
+ uint cur_key_col= 0;
+ Item_field *cur_tmp_field;
+ Item_func_lt *fn_less_than;
+
+ key_column_count= bitmap_bits_set(columns_to_index);
+
+ // TIMOUR: check for mem allocation err, revert to scan
+
+ key_columns= (Item_field**) thd->alloc(key_column_count *
+ sizeof(Item_field*));
+ compare_pred= (Item_func_lt**) thd->alloc(key_column_count *
+ sizeof(Item_func_lt*));
+
+ for (uint i= 0; i < columns_to_index->n_bits; i++)
+ {
+ if (!bitmap_is_set(columns_to_index, i))
+ continue;
+ cur_tmp_field= new Item_field(tbl->field[i]);
+ /* Create the predicate (tmp_column[i] < outer_ref[i]). */
+ fn_less_than= new Item_func_lt(cur_tmp_field,
+ search_key->element_index(i));
+ fn_less_than->fix_fields(thd, (Item**) &fn_less_than);
+ key_columns[cur_key_col]= cur_tmp_field;
+ compare_pred[cur_key_col]= fn_less_than;
+ ++cur_key_col;
+ }
+
+ if (alloc_keys_buffers())
+ {
+ /* TIMOUR revert to partial match via table scan. */
+ return TRUE;
+ }
+ return FALSE;
+}
+
+
+/*
+ Initialize a single-column index.
+*/
+
+bool Ordered_key::init(int col_idx)
+{
+ THD *thd= tbl->in_use;
+
+ key_column_count= 1;
+
+ // TIMOUR: check for mem allocation err, revert to scan
+
+ key_columns= (Item_field**) thd->alloc(sizeof(Item_field*));
+ compare_pred= (Item_func_lt**) thd->alloc(sizeof(Item_func_lt*));
+
+ key_columns[0]= new Item_field(tbl->field[col_idx]);
+ /* Create the predicate (tmp_column[i] < outer_ref[i]). */
+ compare_pred[0]= new Item_func_lt(key_columns[0],
+ search_key->element_index(col_idx));
+ compare_pred[0]->fix_fields(thd, (Item**)&compare_pred[0]);
+
+ if (alloc_keys_buffers())
+ {
+ /* TIMOUR revert to partial match via table scan. */
+ return TRUE;
+ }
+ return FALSE;
+}
+
+
+/*
+ Allocate the buffers for both the row number, and the NULL-bitmap indexes.
+*/
+
+bool Ordered_key::alloc_keys_buffers()
+{
+ DBUG_ASSERT(key_buff_elements > 0);
+
+ if (!(key_buff= (rownum_t*) my_malloc(key_buff_elements * sizeof(rownum_t),
+ MYF(MY_WME))))
+ return TRUE;
+
+ /*
+ TIMOUR: it is enough to create bitmaps with size
+ (max_null_row - min_null_row), and then use min_null_row as
+ lookup offset.
+ */
+ /* Notice that max_null_row is max array index, we need count, so +1. */
+ if (bitmap_init(&null_key, NULL, max_null_row + 1, FALSE))
+ return TRUE;
+
+ cur_key_idx= HA_POS_ERROR;
+
+ return FALSE;
+}
+
+
+/*
+ Quick sort comparison function that compares two rows of the same table
+ identified with their row numbers.
+
+ @retval -1 if the row indexed by 'a' sorts before the row indexed by 'b'
+ @retval  0 if the two rows compare as equal on the indexed columns
+ @retval +1 if the row indexed by 'a' sorts after the row indexed by 'b'
+*/
+
+int
+Ordered_key::cmp_keys_by_row_data(ha_rows a, ha_rows b)
+{
+ uchar *rowid_a, *rowid_b;
+ int error, cmp_res;
+ /* The length in bytes of the rowids (positions) of tmp_table. */
+ uint rowid_length= tbl->file->ref_length;
+
+ if (a == b)
+ return 0;
+ /* Get the corresponding rowids. */
+ rowid_a= row_num_to_rowid + a * rowid_length;
+ rowid_b= row_num_to_rowid + b * rowid_length;
+ /* Fetch the rows for comparison. */
+ error= tbl->file->ha_rnd_pos(tbl->record[0], rowid_a);
+ DBUG_ASSERT(!error);
+ error= tbl->file->ha_rnd_pos(tbl->record[1], rowid_b);
+ DBUG_ASSERT(!error);
+ /*
+ Compare the two rows by the corresponding values of the indexed
+ columns.
+ */
+ for (uint i= 0; i < key_column_count; i++)
+ {
+ Field *cur_field= key_columns[i]->field;
+ if ((cmp_res= cur_field->cmp_offset(tbl->s->rec_buff_length)))
+ return (cmp_res > 0 ? 1 : -1);
+ }
+ return 0;
+}
+
+
+int
+Ordered_key::cmp_keys_by_row_data_and_rownum(Ordered_key *key,
+ rownum_t* a, rownum_t* b)
+{
+ /* The result of comparing the two keys according to their row data. */
+ int cmp_row_res= key->cmp_keys_by_row_data(*a, *b);
+ if (cmp_row_res)
+ return cmp_row_res;
+ return (*a < *b) ? -1 : (*a > *b) ? 1 : 0;
+}
+
+
+void Ordered_key::sort_keys()
+{
+ my_qsort2(key_buff, key_buff_elements, sizeof(rownum_t),
+ (qsort2_cmp) &cmp_keys_by_row_data_and_rownum, (void*) this);
+ /* Invalidate the current row position. */
+ cur_key_idx= HA_POS_ERROR;
+}
+
+
+/*
+ The fraction of rows that do not contain NULL in the columns indexed by
+ this key.
+
+ @retval 1 if there are no NULLs
+ @retval 0 if only NULLs
+*/
+
+double Ordered_key::null_selectivity()
+{
+ /* We should not be processing empty tables. */
+ DBUG_ASSERT(tbl->file->stats.records);
+ return (1 - (double) null_count / (double) tbl->file->stats.records);
+}
+
+
+/*
+ Compare the value(s) of the current key in 'search_key' with the
+ data of the current table record.
+
+ @notes The comparison result follows from the way compare_pred
+ is created in Ordered_key::init. Currently compare_pred compares
+ a field of the current row with the corresponding Item that
+ contains the search key.
+
+ @param row_num Number of the row (not index in the key_buff array)
+
+ @retval -1 if (current row < search_key)
+ @retval 0 if (current row == search_key)
+ @retval +1 if (current row > search_key)
+*/
+
+int Ordered_key::cmp_key_with_search_key(rownum_t row_num)
+{
+ /* The length in bytes of the rowids (positions) of tmp_table. */
+ uint rowid_length= tbl->file->ref_length;
+ uchar *cur_rowid= row_num_to_rowid + row_num * rowid_length;
+ int error, cmp_res;
+
+ error= tbl->file->ha_rnd_pos(tbl->record[0], cur_rowid);
+ DBUG_ASSERT(!error);
+
+ for (uint i= 0; i < key_column_count; i++)
+ {
+ cmp_res= compare_pred[i]->get_comparator()->compare();
+ /* Unlike Arg_comparator::compare_row() here there should be no NULLs. */
+ DBUG_ASSERT(!compare_pred[i]->null_value);
+ if (cmp_res)
+ return (cmp_res > 0 ? 1 : -1);
+ }
+ return 0;
+}
+
+
+/*
+ Find a key in a sorted array of keys via binary search.
+
+ see create_subq_in_equalities()
+*/
+
+bool Ordered_key::lookup()
+{
+ DBUG_ASSERT(key_buff_elements);
+
+ ha_rows lo= 0;
+ ha_rows hi= key_buff_elements - 1;
+ ha_rows mid;
+ int cmp_res;
+
+ while (lo <= hi)
+ {
+ mid= lo + (hi - lo) / 2;
+ cmp_res= cmp_key_with_search_key(key_buff[mid]);
+ /*
+ In order to find the minimum match, check if the previous element is
+ equal or smaller than the found one. If equal, we need to search further
+ to the left.
+ */
+ if (!cmp_res && mid > 0)
+ cmp_res= !cmp_key_with_search_key(key_buff[mid - 1]) ? 1 : 0;
+
+ if (cmp_res == -1)
+ {
+ /* row[mid] < search_key */
+ lo= mid + 1;
+ }
+ else if (cmp_res == 1)
+ {
+ /* row[mid] > search_key */
+ if (!mid)
+ goto not_found;
+ hi= mid - 1;
+ }
+ else
+ {
+ /* row[mid] == search_key */
+ cur_key_idx= mid;
+ return TRUE;
+ }
+ }
+not_found:
+ cur_key_idx= HA_POS_ERROR;
+ return FALSE;
+}
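
A small worked example of the leftmost-match behaviour above: if the sorted
key_buff corresponds to column values (2, 5, 5, 5, 9) and the search key is
5, the extra comparison against key_buff[mid - 1] treats any 5 that still has
an equal element to its left as "greater", so the search keeps moving left
until cur_key_idx lands on the first 5; next_same() can then walk the
remaining equal keys in row-number order.
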
+
+
+/*
+ Move the current index pointer to the next key with the same column
+ values as the current key. Since the index is sorted, all such keys
+ are contiguous.
+*/
+
+bool Ordered_key::next_same()
+{
+ DBUG_ASSERT(key_buff_elements);
+
+ if (cur_key_idx < key_buff_elements - 1)
+ {
+ /*
+ TIMOUR:
+ The below is quite inefficient, since as a result we will fetch every
+ row (except the last one) twice. There must be a more efficient way,
+ e.g. swapping record[0] and record[1], and reading only the new record.
+ */
+ if (!cmp_keys_by_row_data(key_buff[cur_key_idx], key_buff[cur_key_idx + 1]))
+ {
+ ++cur_key_idx;
+ return TRUE;
+ }
+ }
+ return FALSE;
+}
+
+
+void Ordered_key::print(String *str)
+{
+ uint i;
+ str->append("{idx=");
+ str->qs_append(keyid);
+ str->append(", (");
+ for (i= 0; i < key_column_count - 1; i++)
+ {
+ str->append(key_columns[i]->field->field_name);
+ str->append(", ");
+ }
+ str->append(key_columns[i]->field->field_name);
+ str->append("), ");
+
+ str->append("null_bitmap: (bits=");
+ str->qs_append(null_key.n_bits);
+ str->append(", nulls= ");
+ str->qs_append((double)null_count);
+ str->append(", min_null= ");
+ str->qs_append((double)min_null_row);
+ str->append(", max_null= ");
+ str->qs_append((double)max_null_row);
+ str->append("), ");
+
+ str->append('}');
+}
+
+
+subselect_partial_match_engine::subselect_partial_match_engine(
+ subselect_uniquesubquery_engine *engine_arg,
+ TABLE *tmp_table_arg, Item_subselect *item_arg,
+ select_result_interceptor *result_arg,
+ List<Item> *equi_join_conds_arg,
+ uint covering_null_row_width_arg)
+ :subselect_engine(item_arg, result_arg),
+ tmp_table(tmp_table_arg), lookup_engine(engine_arg),
+ equi_join_conds(equi_join_conds_arg),
+ covering_null_row_width(covering_null_row_width_arg)
+{}
+
+
+int subselect_partial_match_engine::exec()
+{
+ Item_in_subselect *item_in= (Item_in_subselect *) item;
+ int res;
+
+ /* Try to find a matching row by index lookup. */
+ res= lookup_engine->copy_ref_key_simple();
+ if (res == -1)
+ {
+ /* The result is FALSE based on the outer reference. */
+ item_in->value= 0;
+ item_in->null_value= 0;
+ return 0;
+ }
+ else if (res == 0)
+ {
+ /* Search for a complete match. */
+ if ((res= lookup_engine->index_lookup()))
+ {
+ /* An error occurred during lookup(). */
+ item_in->value= 0;
+ item_in->null_value= 0;
+ return res;
+ }
+ else if (item_in->value)
+ {
+ /*
+ A complete match was found, the result of IN is TRUE.
+ Notice: (this->item == lookup_engine->item)
+ */
+ return 0;
+ }
+ }
+
+ if (covering_null_row_width == tmp_table->s->fields)
+ {
+ /*
+ If there is a NULL-only row that covers all columns, the result of IN
+ is UNKNOWN.
+ */
+ item_in->value= 0;
+ /*
+ TIMOUR: which one is the right way to propagate an UNKNOWN result?
+ Should we also set empty_result_set= FALSE; ???
+ */
+ //item_in->was_null= 1;
+ item_in->null_value= 1;
+ return 0;
+ }
+
+ /*
+ There is no complete match. Look for a partial match (UNKNOWN result), or
+ no match (FALSE).
+ */
+ if (tmp_table->file->inited)
+ tmp_table->file->ha_index_end();
+
+ if (partial_match())
+ {
+ /* The result of IN is UNKNOWN. */
+ item_in->value= 0;
+ /*
+ TIMOUR: which one is the right way to propagate an UNKNOWN result?
+ Should we also set empty_result_set= FALSE; ???
+ */
+ //item_in->was_null= 1;
+ item_in->null_value= 1;
+ }
+ else
+ {
+ /* The result of IN is FALSE. */
+ item_in->value= 0;
+ /*
+ TIMOUR: which one is the right way to propagate an UNKNOWN result?
+ Should we also set empty_result_set= FALSE; ???
+ */
+ //item_in->was_null= 0;
+ item_in->null_value= 0;
+ }
+
+ return 0;
+}
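
The FALSE versus UNKNOWN distinction computed above is the whole point of
partial matching. A sketch of the SQL semantics, assuming t2(x INT, y INT)
contains the single row (1, NULL):

  SELECT (1, 2) IN (SELECT x, y FROM t2);
  -- NULL: x matches, y compares against NULL, so the row comparison is UNKNOWN
  SELECT (7, 2) IN (SELECT x, y FROM t2);
  -- 0: x differs, so the only row definitely does not match
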
+
+
+void subselect_partial_match_engine::print(String *str,
+ enum_query_type query_type)
+{
+ /*
+ Should never be called as the actual engine cannot be known at query
+ optimization time.
+ */
+ DBUG_ASSERT(FALSE);
+}
+
+
+/*
+ @param non_null_key_parts
+ @param partial_match_key_parts A union of all single-column NULL key parts.
+ @param count_partial_match_columns Number of NULL keyparts (set bits above).
+
+ @retval FALSE the engine was initialized successfully
+ @retval TRUE there was some (memory allocation) error during initialization,
+ such errors should be interpreted as revert to other strategy
+*/
+
+bool
+subselect_rowid_merge_engine::init(MY_BITMAP *non_null_key_parts,
+ MY_BITMAP *partial_match_key_parts)
+{
+ /* The length in bytes of the rowids (positions) of tmp_table. */
+ uint rowid_length= tmp_table->file->ref_length;
+ ha_rows row_count= tmp_table->file->stats.records;
+ rownum_t cur_rownum= 0;
+ select_materialize_with_stats *result_sink=
+ (select_materialize_with_stats *) result;
+ uint cur_keyid= 0;
+ Item_in_subselect *item_in= (Item_in_subselect*) item;
+ int error;
+
+ if (keys_count == 0)
+ {
+ /* There is nothing to initialize, we will only do regular lookups. */
+ return FALSE;
+ }
+
+ DBUG_ASSERT(!covering_null_row_width || (covering_null_row_width &&
+ keys_count == 1 &&
+ non_null_key_parts));
+ /*
+ Allocate buffers to hold the merged keys and the mapping between rowids and
+ row numbers.
+ */
+ if (!(merge_keys= (Ordered_key**) thd->alloc(keys_count *
+ sizeof(Ordered_key*))) ||
+ !(row_num_to_rowid= (uchar*) my_malloc(row_count * rowid_length *
+ sizeof(uchar), MYF(MY_WME))))
+ return TRUE;
+
+ /* Create the only non-NULL key if there is any. */
+ if (non_null_key_parts)
+ {
+ non_null_key= new Ordered_key(cur_keyid, tmp_table, item_in->left_expr,
+ 0, 0, 0, row_num_to_rowid);
+ if (non_null_key->init(non_null_key_parts))
+ return TRUE;
+ merge_keys[cur_keyid]= non_null_key;
+ merge_keys[cur_keyid]->first();
+ ++cur_keyid;
+ }
+
+ /*
+ If there is a covering NULL row, the only key that is needed is the
+ only non-NULL key that is already created above. We create keys on
+ NULL-able columns only if there is no covering NULL row.
+ */
+ if (!covering_null_row_width)
+ {
+ if (bitmap_init_memroot(&matching_keys, keys_count, thd->mem_root) ||
+ bitmap_init_memroot(&matching_outer_cols, keys_count, thd->mem_root) ||
+ bitmap_init_memroot(&null_only_columns, keys_count, thd->mem_root))
+ return TRUE;
+
+ /*
+ Create one single-column NULL-key for each column in
+ partial_match_key_parts.
+ */
+ for (uint i= 0; i < partial_match_key_parts->n_bits; i++)
+ {
+ if (!bitmap_is_set(partial_match_key_parts, i))
+ continue;
+
+ if (result_sink->get_null_count_of_col(i) == row_count)
+ bitmap_set_bit(&null_only_columns, cur_keyid);
+ else
+ {
+ merge_keys[cur_keyid]= new Ordered_key(
+ cur_keyid, tmp_table,
+ item_in->left_expr->element_index(i),
+ result_sink->get_null_count_of_col(i),
+ result_sink->get_min_null_of_col(i),
+ result_sink->get_max_null_of_col(i),
+ row_num_to_rowid);
+ if (merge_keys[cur_keyid]->init(i))
+ return TRUE;
+ merge_keys[cur_keyid]->first();
+ }
+ ++cur_keyid;
+ }
+ }
+
+ /* Populate the indexes with data from the temporary table. */
+ tmp_table->file->ha_rnd_init(1);
+ tmp_table->file->extra_opt(HA_EXTRA_CACHE,
+ current_thd->variables.read_buff_size);
+ tmp_table->null_row= 0;
+ while (TRUE)
+ {
+ error= tmp_table->file->ha_rnd_next(tmp_table->record[0]);
+ if (error == HA_ERR_RECORD_DELETED)
+ {
+ /* We get this for duplicate records that should not be in tmp_table. */
+ continue;
+ }
+ /*
+ This is a temp table that we fully own, there should be no other
+ cause to stop the iteration than EOF.
+ */
+ DBUG_ASSERT(!error || error == HA_ERR_END_OF_FILE);
+ if (error == HA_ERR_END_OF_FILE)
+ {
+ DBUG_ASSERT(cur_rownum == tmp_table->file->stats.records);
+ break;
+ }
+
+ /*
+ Save the position of this record in the row_num -> rowid mapping.
+ */
+ tmp_table->file->position(tmp_table->record[0]);
+ memcpy(row_num_to_rowid + cur_rownum * rowid_length,
+ tmp_table->file->ref, rowid_length);
+
+ /* Add the current row number to the corresponding keys. */
+ if (non_null_key)
+ {
+ /* By definition there are no NULLs in the non-NULL key. */
+ non_null_key->add_key(cur_rownum);
+ }
+
+ for (uint i= (non_null_key ? 1 : 0); i < keys_count; i++)
+ {
+ /*
+        Check if the first and only indexed column contains NULL in the current
+ row, and add the row number to the corresponding key.
+ */
+ if (tmp_table->field[merge_keys[i]->get_field_idx(0)]->is_null())
+ merge_keys[i]->set_null(cur_rownum);
+ else
+ merge_keys[i]->add_key(cur_rownum);
+ }
+ ++cur_rownum;
+ }
+
+ tmp_table->file->ha_rnd_end();
+
+ /* Sort all the keys by their NULL selectivity. */
+ my_qsort(merge_keys, keys_count, sizeof(Ordered_key*),
+ (qsort_cmp) cmp_keys_by_null_selectivity);
+
+ /* Sort the keys in each of the indexes. */
+ for (uint i= 0; i < keys_count; i++)
+ merge_keys[i]->sort_keys();
+
+ if (init_queue(&pq, keys_count, 0, FALSE,
+ subselect_rowid_merge_engine::cmp_keys_by_cur_rownum, NULL))
+ return TRUE;
+
+ return FALSE;
+}
+
+
+subselect_rowid_merge_engine::~subselect_rowid_merge_engine()
+{
+ /* None of the resources below is allocated if there are no ordered keys. */
+ if (keys_count)
+ {
+ my_free((char*) row_num_to_rowid, MYF(0));
+ for (uint i= 0; i < keys_count; i++)
+ delete merge_keys[i];
+ delete_queue(&pq);
+ if (tmp_table->file->inited == handler::RND)
+ tmp_table->file->ha_rnd_end();
+ }
+}
+
+
+void subselect_rowid_merge_engine::cleanup()
+{
+}
+
+
+/*
+ Quick sort comparison function to compare keys in order of decreasing bitmap
+ selectivity, so that the most selective keys come first.
+
+ @param k1 first key to compare
+ @param k2 second key to compare
+
+ @retval 1 if k1 is less selective than k2
+ @retval 0 if k1 is equally selective as k2
+ @retval -1 if k1 is more selective than k2
+*/
+
+int
+subselect_rowid_merge_engine::cmp_keys_by_null_selectivity(Ordered_key **k1,
+ Ordered_key **k2)
+{
+ double k1_sel= (*k1)->null_selectivity();
+ double k2_sel= (*k2)->null_selectivity();
+ if (k1_sel < k2_sel)
+ return 1;
+ if (k1_sel > k2_sel)
+ return -1;
+ return 0;
+}
+
+
+/*
+  Comparison function for the priority queue: the 'smaller' key is the one
+  with the smaller current row number.
+*/
+
+int
+subselect_rowid_merge_engine::cmp_keys_by_cur_rownum(void *arg,
+ uchar *k1, uchar *k2)
+{
+ rownum_t r1= ((Ordered_key*) k1)->current();
+ rownum_t r2= ((Ordered_key*) k2)->current();
+
+ return (r1 < r2) ? -1 : (r1 > r2) ? 1 : 0;
+}
+
+
+/*
+ Check if certain table row contains a NULL in all columns for which there is
+ no match in the corresponding value index.
+
+ @retval TRUE if a NULL row exists
+ @retval FALSE otherwise
+*/
+
+bool subselect_rowid_merge_engine::test_null_row(rownum_t row_num)
+{
+ Ordered_key *cur_key;
+ uint cur_id;
+ for (uint i = 0; i < keys_count; i++)
+ {
+ cur_key= merge_keys[i];
+ cur_id= cur_key->get_keyid();
+ if (bitmap_is_set(&matching_keys, cur_id))
+ {
+ /*
+        The key 'i' (with id 'cur_id') already matches a value in row
+        'row_num', thus we skip it as it can't possibly match a NULL.
+ */
+ continue;
+ }
+ if (!cur_key->is_null(row_num))
+ return FALSE;
+ }
+ return TRUE;
+}
+
+
+/*
+ @retval TRUE there is a partial match (UNKNOWN)
+ @retval FALSE there is no match at all (FALSE)
+*/
+
+bool subselect_rowid_merge_engine::partial_match()
+{
+ Ordered_key *min_key; /* Key that contains the current minimum position. */
+ rownum_t min_row_num; /* Current row number of min_key. */
+ Ordered_key *cur_key;
+ rownum_t cur_row_num;
+ uint count_nulls_in_search_key= 0;
+ bool res= FALSE;
+
+ /* If there is a non-NULL key, it must be the first key in the keys array. */
+ DBUG_ASSERT(!non_null_key || (non_null_key && merge_keys[0] == non_null_key));
+
+ /* All data accesses during execution are via handler::ha_rnd_pos() */
+ tmp_table->file->ha_rnd_init(0);
+
+ /* Check if there is a match for the columns of the only non-NULL key. */
+ if (non_null_key && !non_null_key->lookup())
+ {
+ res= FALSE;
+ goto end;
+ }
+
+ /*
+ If there is a NULL (sub)row that covers all NULL-able columns,
+    then there is a guaranteed partial match, and we don't need to search
+ for the matching row.
+ */
+ if (covering_null_row_width)
+ {
+ res= TRUE;
+ goto end;
+ }
+
+ if (non_null_key)
+ queue_insert(&pq, (uchar *) non_null_key);
+ /*
+    The loop below skips the non_null_key, since it was already added above.
+ */
+ bitmap_clear_all(&matching_outer_cols);
+ for (uint i= test(non_null_key); i < keys_count; i++)
+ {
+ DBUG_ASSERT(merge_keys[i]->get_column_count() == 1);
+ if (merge_keys[i]->get_search_key(0)->is_null())
+ {
+ ++count_nulls_in_search_key;
+ bitmap_set_bit(&matching_outer_cols, merge_keys[i]->get_keyid());
+ }
+ else if (merge_keys[i]->lookup())
+ queue_insert(&pq, (uchar *) merge_keys[i]);
+ }
+
+ /*
+ If the outer reference consists of only NULLs, or if it has NULLs in all
+ nullable columns, the result is UNKNOWN.
+ */
+ if (count_nulls_in_search_key ==
+ ((Item_in_subselect *) item)->left_expr->cols() -
+ (non_null_key ? non_null_key->get_column_count() : 0))
+ {
+ res= TRUE;
+ goto end;
+ }
+
+ /*
+ If there is no NULL (sub)row that covers all NULL columns, and there is no
+ single match for any of the NULL columns, the result is FALSE.
+ */
+ if (pq.elements - test(non_null_key) == 0)
+ {
+ res= FALSE;
+ goto end;
+ }
+
+ DBUG_ASSERT(pq.elements);
+
+ min_key= (Ordered_key*) queue_remove(&pq, 0);
+ min_row_num= min_key->current();
+ bitmap_copy(&matching_keys, &null_only_columns);
+ bitmap_set_bit(&matching_keys, min_key->get_keyid());
+ bitmap_union(&matching_keys, &matching_outer_cols);
+ if (min_key->next_same())
+ queue_insert(&pq, (uchar *) min_key);
+
+ if (pq.elements == 0)
+ {
+ /*
+ Check the only matching row of the only key min_key for NULL matches
+ in the other columns.
+ */
+ res= test_null_row(min_row_num);
+ goto end;
+ }
+
+ while (TRUE)
+ {
+ cur_key= (Ordered_key*) queue_remove(&pq, 0);
+ cur_row_num= cur_key->current();
+
+ if (cur_row_num == min_row_num)
+ bitmap_set_bit(&matching_keys, cur_key->get_keyid());
+ else
+ {
+ /* Follows from the correct use of priority queue. */
+ DBUG_ASSERT(cur_row_num > min_row_num);
+ if (test_null_row(min_row_num))
+ {
+ res= TRUE;
+ goto end;
+ }
+ else
+ {
+ min_key= cur_key;
+ min_row_num= cur_row_num;
+ bitmap_copy(&matching_keys, &null_only_columns);
+ bitmap_set_bit(&matching_keys, min_key->get_keyid());
+ bitmap_union(&matching_keys, &matching_outer_cols);
+ }
+ }
+
+ if (cur_key->next_same())
+ queue_insert(&pq, (uchar *) cur_key);
+
+ if (pq.elements == 0)
+ {
+ /* Check the last row of the last column in PQ for NULL matches. */
+ res= test_null_row(min_row_num);
+ goto end;
+ }
+ }
+
+ /* We should never get here - all branches must be handled explicitly above. */
+ DBUG_ASSERT(FALSE);
+
+end:
+ tmp_table->file->ha_rnd_end();
+ return res;
+}
+
+
+subselect_table_scan_engine::subselect_table_scan_engine(
+ subselect_uniquesubquery_engine *engine_arg,
+ TABLE *tmp_table_arg,
+ Item_subselect *item_arg,
+ select_result_interceptor *result_arg,
+ List<Item> *equi_join_conds_arg,
+ uint covering_null_row_width_arg)
+ :subselect_partial_match_engine(engine_arg, tmp_table_arg, item_arg,
+ result_arg, equi_join_conds_arg,
+ covering_null_row_width_arg)
+{}
+
+
+/*
+ TIMOUR:
+ This method is based on subselect_uniquesubquery_engine::scan_table().
+ Consider refactoring somehow, 80% of the code is the same.
+
+ for each row_i in tmp_table
+ {
+ count_matches= 0;
+ for each row element row_i[j]
+ {
+ if (outer_ref[j] is NULL || row_i[j] is NULL || outer_ref[j] == row_i[j])
+ ++count_matches;
+ }
+ if (count_matches == outer_ref.elements)
+ return TRUE
+ }
+ return FALSE
+*/
+
+bool subselect_table_scan_engine::partial_match()
+{
+ List_iterator_fast<Item> equality_it(*equi_join_conds);
+ Item *cur_eq;
+ uint count_matches;
+ int error;
+ bool res;
+
+ tmp_table->file->ha_rnd_init(1);
+ tmp_table->file->extra_opt(HA_EXTRA_CACHE,
+ current_thd->variables.read_buff_size);
+ /*
+ TIMOUR:
+ scan_table() also calls "table->null_row= 0;", why, do we need it?
+ */
+ for (;;)
+ {
+ error= tmp_table->file->ha_rnd_next(tmp_table->record[0]);
+ if (error) {
+ if (error == HA_ERR_RECORD_DELETED)
+ {
+ error= 0;
+ continue;
+ }
+ if (error == HA_ERR_END_OF_FILE)
+ {
+ error= 0;
+ break;
+ }
+ else
+ {
+ error= report_error(tmp_table, error);
+ break;
+ }
+ }
+
+ equality_it.rewind();
+ count_matches= 0;
+ while ((cur_eq= equality_it++))
+ {
+ DBUG_ASSERT(cur_eq->type() == Item::FUNC_ITEM &&
+ ((Item_func*)cur_eq)->functype() == Item_func::EQ_FUNC);
+ if (!cur_eq->val_int() && !cur_eq->null_value)
+ break;
+ ++count_matches;
+ }
+ if (count_matches == tmp_table->s->fields)
+ {
+ res= TRUE; /* Found a matching row. */
+ goto end;
+ }
+ }
+
+ res= FALSE;
+end:
+ tmp_table->file->ha_rnd_end();
+ return res;
+}
+
+
+void subselect_table_scan_engine::cleanup()
+{
+}
=== modified file 'sql/item_subselect.h'
--- a/sql/item_subselect.h 2010-02-11 23:59:58 +0000
+++ b/sql/item_subselect.h 2010-03-09 10:14:06 +0000
@@ -297,7 +297,7 @@
Representation of IN subquery predicates of the form
"left_expr IN (SELECT ...)".
- @detail
+ @details
This class has:
- A "subquery execution engine" (as a subclass of Item_subselect) that allows
it to evaluate subqueries. (and this class participates in execution by
@@ -319,6 +319,12 @@
*/
List<Cached_item> *left_expr_cache;
bool first_execution;
+ /*
+ Set to TRUE if at query execution time we determine that this item's
+ value is a constant during this execution. We need this member because
+ it is not possible to substitute 'this' with a constant item.
+ */
+ bool is_constant;
/*
expr & optimizer used in subselect rewriting to store Item for
@@ -387,8 +393,8 @@
Item_in_subselect(Item * left_expr, st_select_lex *select_lex);
Item_in_subselect()
:Item_exists_subselect(), left_expr_cache(0), first_execution(TRUE),
- optimizer(0), abort_on_null(0), pushed_cond_guards(NULL),
- exec_method(NOT_TRANSFORMED), upper_item(0)
+ is_constant(FALSE), optimizer(0), abort_on_null(0),
+ pushed_cond_guards(NULL), exec_method(NOT_TRANSFORMED), upper_item(0)
{}
void cleanup();
subs_type substype() { return IN_SUBS; }
@@ -421,6 +427,8 @@
void update_used_tables();
bool setup_engine();
bool init_left_expr_cache();
+ /* Inform 'this' that it was computed, and contains a valid result. */
+ void set_first_execution() { if (first_execution) first_execution= FALSE; }
bool is_expensive_processor(uchar *arg);
friend class Item_ref_null_helper;
@@ -428,6 +436,7 @@
friend class Item_in_optimizer;
friend class subselect_indexsubquery_engine;
friend class subselect_hash_sj_engine;
+ friend class subselect_partial_match_engine;
};
@@ -462,7 +471,8 @@
enum enum_engine_type {ABSTRACT_ENGINE, SINGLE_SELECT_ENGINE,
UNION_ENGINE, UNIQUESUBQUERY_ENGINE,
- INDEXSUBQUERY_ENGINE, HASH_SJ_ENGINE};
+ INDEXSUBQUERY_ENGINE, HASH_SJ_ENGINE,
+ ROWID_MERGE_ENGINE, TABLE_SCAN_ENGINE};
subselect_engine(Item_subselect *si, select_result_interceptor *res)
:thd(0)
@@ -635,8 +645,10 @@
virtual void print (String *str, enum_query_type query_type);
bool change_result(Item_subselect *si, select_result_interceptor *result);
bool no_tables();
+ int index_lookup(); /* TIMOUR: this method needs refactoring. */
int scan_table();
bool copy_ref_key();
+ int copy_ref_key_simple(); /* TIMOUR: this method needs refactoring. */
bool no_rows() { return empty_result_set; }
virtual enum_engine_type engine_type() { return UNIQUESUBQUERY_ENGINE; }
};
@@ -705,50 +717,439 @@
/**
- Compute an IN predicate via a hash semi-join. The subquery is materialized
- during the first evaluation of the IN predicate. The IN predicate is executed
- via the functionality inherited from subselect_uniquesubquery_engine.
+ Compute an IN predicate via a hash semi-join. This class is responsible for
+ the materialization of the subquery, and the selection of the correct and
+ optimal execution method (e.g. direct index lookup, or partial matching) for
+ the IN predicate.
*/
-class subselect_hash_sj_engine: public subselect_uniquesubquery_engine
+class subselect_hash_sj_engine : public subselect_engine
{
protected:
+ /* The table into which the subquery is materialized. */
+ TABLE *tmp_table;
/* TRUE if the subquery was materialized into a temp table. */
bool is_materialized;
/*
The old engine already chosen at parse time and stored in permanent memory.
Through this member we can re-create and re-prepare materialize_join for
- each execution of a prepared statement. We akso resuse the functionality
+ each execution of a prepared statement. We also reuse the functionality
of subselect_single_select_engine::[prepare | cols].
*/
subselect_single_select_engine *materialize_engine;
+ /* The engine used to compute the IN predicate. */
+ subselect_engine *lookup_engine;
/*
QEP to execute the subquery and materialize its result into a
temporary table. Created during the first call to exec().
*/
JOIN *materialize_join;
- /* Temp table context of the outer select's JOIN. */
- TMP_TABLE_PARAM *tmp_param;
+
+ /* Keyparts of the only non-NULL composite index in a rowid merge. */
+ MY_BITMAP non_null_key_parts;
+ /* Keyparts of the single column indexes with NULL, one keypart per index. */
+ MY_BITMAP partial_match_key_parts;
+ uint count_partial_match_columns;
+ uint count_null_only_columns;
+ /*
+    A conjunction of all the equality conditions between all pairs of expressions
+ that are arguments of an IN predicate. We need these to post-filter some
+ IN results because index lookups sometimes match values that are actually
+ not equal to the search key in SQL terms.
+ */
+ Item_cond_and *semi_join_conds;
+ /* Possible execution strategies that can be used to compute hash semi-join.*/
+ enum exec_strategy {
+ UNDEFINED,
+ COMPLETE_MATCH, /* Use regular index lookups. */
+ PARTIAL_MATCH, /* Use some partial matching strategy. */
+ PARTIAL_MATCH_MERGE, /* Use partial matching through index merging. */
+ PARTIAL_MATCH_SCAN, /* Use partial matching through table scan. */
+ IMPOSSIBLE /* Subquery materialization is not applicable. */
+ };
+ /* The chosen execution strategy. Computed after materialization. */
+ exec_strategy strategy;
+protected:
+ exec_strategy get_strategy_using_schema();
+ exec_strategy get_strategy_using_data();
+ size_t rowid_merge_buff_size(bool has_non_null_key,
+ bool has_covering_null_row,
+ MY_BITMAP *partial_match_key_parts);
+ void choose_partial_match_strategy(bool has_non_null_key,
+ bool has_covering_null_row,
+ MY_BITMAP *partial_match_key_parts);
+ bool make_semi_join_conds();
+ subselect_uniquesubquery_engine* make_unique_engine();
public:
subselect_hash_sj_engine(THD *thd, Item_subselect *in_predicate,
- subselect_single_select_engine *old_engine)
- :subselect_uniquesubquery_engine(thd, NULL, in_predicate, NULL),
- is_materialized(FALSE), materialize_engine(old_engine),
- materialize_join(NULL), tmp_param(NULL)
- {}
+ subselect_single_select_engine *old_engine)
+ :subselect_engine(in_predicate, NULL), tmp_table(NULL),
+ is_materialized(FALSE), materialize_engine(old_engine), lookup_engine(NULL),
+ materialize_join(NULL), count_partial_match_columns(0),
+ count_null_only_columns(0), semi_join_conds(NULL), strategy(UNDEFINED)
+ {
+ set_thd(thd);
+ }
~subselect_hash_sj_engine();
bool init_permanent(List<Item> *tmp_columns);
bool init_runtime();
void cleanup();
- int prepare() { return 0; }
+ int prepare() { return 0; } /* Override virtual function in base class. */
int exec();
- virtual void print (String *str, enum_query_type query_type);
+ virtual void print(String *str, enum_query_type query_type);
uint cols()
{
return materialize_engine->cols();
}
+ uint8 uncacheable() { return UNCACHEABLE_DEPENDENT; }
+ table_map upper_select_const_tables() { return 0; }
+ bool no_rows() { return !tmp_table->file->stats.records; }
virtual enum_engine_type engine_type() { return HASH_SJ_ENGINE; }
-};
-
+ /*
+ TODO: factor out all these methods in a base subselect_index_engine class
+ because all of them have dummy implementations and should never be called.
+ */
+ void fix_length_and_dec(Item_cache** row);//=>base class
+ void exclude(); //=>base class
+ //=>base class
+ bool change_result(Item_subselect *si, select_result_interceptor *result);
+ bool no_tables();//=>base class
+};
+
+
+/*
+  Distinguish the type of (0-based) row numbers from the type of the index into
+ an array of row numbers.
+*/
+typedef ha_rows rownum_t;
+
+
+/*
+ An Ordered_key is an in-memory table index that allows O(log(N)) time
+ lookups of a multi-part key.
+
+ If the index is over a single column, then this column may contain NULLs, and
+ the NULLs are stored and tested separately for NULL in O(1) via is_null().
+ Multi-part indexes assume that the indexed columns do not contain NULLs.
+
+ TODO:
+  = Due to the unnatural asymmetry between single and multi-part indexes, it
+ makes sense to somehow refactor or extend the class.
+
+ = This class can be refactored into a base abstract interface, and two
+ subclasses:
+ - one to represent single-column indexes, and
+ - another to represent multi-column indexes.
+ Such separation would allow slightly more efficient implementation of
+ the single-column indexes.
+ = The current design requires such indexes to be fully recreated for each
+ PS (re)execution, however most of the comprising objects can be reused.
+*/
+
+class Ordered_key : public Sql_alloc
+{
+protected:
+ /*
+    Index of the key in an array of keys. This index makes it possible to
+    construct (sub)sets of keys represented by bitmaps.
+ */
+ uint keyid;
+ /* The table being indexed. */
+ TABLE *tbl;
+ /* The columns being indexed. */
+ Item_field **key_columns;
+ /* Number of elements in 'key_columns' (number of key parts). */
+ uint key_column_count;
+ /*
+ An expression, or sequence of expressions that forms the search key.
+ The search key is a sequence when it is Item_row. Each element of the
+ sequence is accessible via Item::element_index(int i).
+ */
+ Item *search_key;
+
+/* Value index related members. */
+ /*
+ The actual value index, consists of a sorted sequence of row numbers.
+ */
+ rownum_t *key_buff;
+ /* Number of elements in key_buff. */
+ ha_rows key_buff_elements;
+ /* Current element in 'key_buff'. */
+ ha_rows cur_key_idx;
+ /*
+ Mapping from row numbers to row ids. The element row_num_to_rowid[i]
+ contains a buffer with the rowid for the row numbered 'i'.
+    The memory for this member is not maintained by this class because
+ all Ordered_key indexes of the same table share the same mapping.
+ */
+ uchar *row_num_to_rowid;
+ /*
+ A sequence of predicates to compare the search key with the corresponding
+ columns of a table row from the index.
+ */
+ Item_func_lt **compare_pred;
+
+/* Null index related members. */
+ MY_BITMAP null_key;
+ /* Count of NULLs per column. */
+ ha_rows null_count;
+ /* The row number that contains the first NULL in a column. */
+ ha_rows min_null_row;
+ /* The row number that contains the last NULL in a column. */
+ ha_rows max_null_row;
+
+protected:
+ bool alloc_keys_buffers();
+ /*
+ Quick sort comparison function that compares two rows of the same table
+    identified by their row numbers.
+ */
+ int cmp_keys_by_row_data(rownum_t a, rownum_t b);
+ static int cmp_keys_by_row_data_and_rownum(Ordered_key *key,
+ rownum_t* a, rownum_t* b);
+
+ int cmp_key_with_search_key(rownum_t row_num);
+
+public:
+ Ordered_key(uint keyid_arg, TABLE *tbl_arg,
+ Item *search_key_arg, ha_rows null_count_arg,
+ ha_rows min_null_row_arg, ha_rows max_null_row_arg,
+ uchar *row_num_to_rowid_arg);
+ ~Ordered_key();
+ void cleanup();
+ /* Initialize a multi-column index. */
+ bool init(MY_BITMAP *columns_to_index);
+ /* Initialize a single-column index. */
+ bool init(int col_idx);
+
+ uint get_column_count() { return key_column_count; }
+ uint get_keyid() { return keyid; }
+ uint get_field_idx(uint i)
+ {
+ DBUG_ASSERT(i < key_column_count);
+ return key_columns[i]->field->field_index;
+ }
+ /*
+ Get the search key element that corresponds to the i-th key part of this
+ index.
+ */
+ Item *get_search_key(uint i)
+ {
+ return search_key->element_index(key_columns[i]->field->field_index);
+ }
+ void add_key(rownum_t row_num)
+ {
+ /* The caller must know how many elements to add. */
+ DBUG_ASSERT(key_buff_elements && cur_key_idx < key_buff_elements);
+ key_buff[cur_key_idx]= row_num;
+ ++cur_key_idx;
+ }
+
+ void sort_keys();
+ double null_selectivity();
+
+ /*
+ Position the current element at the first row that matches the key.
+ The key itself is propagated by evaluating the current value(s) of
+ this->search_key.
+ */
+ bool lookup();
+ /* Move the current index cursor to the first key. */
+ void first()
+ {
+ DBUG_ASSERT(key_buff_elements);
+ cur_key_idx= 0;
+ }
+ /* TODO */
+ bool next_same();
+ /* Move the current index cursor to the next key. */
+ bool next()
+ {
+ DBUG_ASSERT(key_buff_elements);
+ if (cur_key_idx < key_buff_elements - 1)
+ {
+ ++cur_key_idx;
+ return TRUE;
+ }
+ return FALSE;
+ };
+ /* Return the current index element. */
+ rownum_t current()
+ {
+ DBUG_ASSERT(key_buff_elements && cur_key_idx < key_buff_elements);
+ return key_buff[cur_key_idx];
+ }
+
+ void set_null(rownum_t row_num)
+ {
+ bitmap_set_bit(&null_key, row_num);
+ }
+ bool is_null(rownum_t row_num)
+ {
+ /*
+ Indexes consisting of only NULLs do not have a bitmap buffer at all.
+ Their only initialized member is 'n_bits', which is equal to the number
+ of temp table rows.
+ */
+ if (null_count == tbl->file->stats.records)
+ {
+ DBUG_ASSERT(tbl->file->stats.records == null_key.n_bits);
+ return TRUE;
+ }
+ if (row_num > max_null_row || row_num < min_null_row)
+ return FALSE;
+ return bitmap_is_set(&null_key, row_num);
+ }
+ void print(String *str);
+};
+
+
+class subselect_partial_match_engine : public subselect_engine
+{
+protected:
+ /* The temporary table that contains a materialized subquery. */
+ TABLE *tmp_table;
+ /*
+ The engine used to check whether an IN predicate is TRUE or not. If not
+ TRUE, then subselect_rowid_merge_engine further distinguishes between
+ FALSE and UNKNOWN.
+ */
+ subselect_uniquesubquery_engine *lookup_engine;
+ /* A list of equalities between each pair of IN operands. */
+ List<Item> *equi_join_conds;
+ /*
+ If there is a row, such that all its NULL-able components are NULL, this
+ member is set to the number of covered columns. If there is no covering
+ row, then this is 0.
+ */
+ uint covering_null_row_width;
+protected:
+ virtual bool partial_match()= 0;
+public:
+ subselect_partial_match_engine(subselect_uniquesubquery_engine *engine_arg,
+ TABLE *tmp_table_arg, Item_subselect *item_arg,
+ select_result_interceptor *result_arg,
+ List<Item> *equi_join_conds_arg,
+ uint covering_null_row_width_arg);
+ int prepare() { return 0; }
+ int exec();
+ void fix_length_and_dec(Item_cache**) {}
+ uint cols() { /* TODO: what is the correct value? */ return 1; }
+ uint8 uncacheable() { return UNCACHEABLE_DEPENDENT; }
+ void exclude() {}
+ table_map upper_select_const_tables() { return 0; }
+ bool change_result(Item_subselect*, select_result_interceptor*)
+ { DBUG_ASSERT(FALSE); return false; }
+ bool no_tables() { return false; }
+ bool no_rows()
+ {
+ /*
+      TODO: The semantics of this method is completely unclear. The
+      current result is computed so that the call to no_rows()
+ from Item_in_optimizer::val_int() sets Item_in_optimizer::null_value
+ correctly.
+ */
+ return !(((Item_in_subselect *) item)->null_value);
+ }
+ void print(String*, enum_query_type);
+
+ friend void subselect_hash_sj_engine::cleanup();
+};
+
+
+class subselect_rowid_merge_engine: public subselect_partial_match_engine
+{
+protected:
+ /*
+ Mapping from row numbers to row ids. The rowids are stored sequentially
+ in the array - rowid[i] is located in row_num_to_rowid + i * rowid_length.
+ */
+ uchar *row_num_to_rowid;
+ /*
+ A subset of all the keys for which there is a match for the same row.
+ Used during execution. Computed for each outer reference
+ */
+ MY_BITMAP matching_keys;
+ /*
+ The columns of the outer reference that are NULL. Computed for each
+ outer reference.
+ */
+ MY_BITMAP matching_outer_cols;
+ /*
+ Columns that consist of only NULLs. Such columns match any value.
+ Computed once per query execution.
+ */
+ MY_BITMAP null_only_columns;
+ /*
+ Indexes of row numbers, sorted by <column_value, row_number>. If an
+ index may contain NULLs, the NULLs are stored efficiently in a bitmap.
+
+    The indexes are sorted by the selectivity of their NULL sub-indexes; the
+    one with the fewest NULLs comes first. Thus, if there is any index on
+ non-NULL columns, it is contained in keys[0].
+ */
+ Ordered_key **merge_keys;
+ /* The number of elements in keys. */
+ uint keys_count;
+ /*
+ An index on all non-NULL columns of 'tmp_table'. The index has the
+    logical form: <[v_i1 | ... | v_ik], rownum>. It allows finding the row
+    number where the columns c_i1,...,c_ik contain the values v_i1,...,v_ik.
+ If such an index exists, it is always the first element of 'keys'.
+ */
+ Ordered_key *non_null_key;
+ /*
+ Priority queue of Ordered_key indexes, one per NULLable column.
+ This queue is used by the partial match algorithm in method exec().
+ */
+ QUEUE pq;
+protected:
+ /*
+ Comparison function to compare keys in order of decreasing bitmap
+ selectivity.
+ */
+ static int cmp_keys_by_null_selectivity(Ordered_key **k1, Ordered_key **k2);
+ /*
+ Comparison function used by the priority queue pq, the 'smaller' key
+ is the one with the smaller current row number.
+ */
+ static int cmp_keys_by_cur_rownum(void *arg, uchar *k1, uchar *k2);
+
+ bool test_null_row(rownum_t row_num);
+ bool partial_match();
+public:
+ subselect_rowid_merge_engine(subselect_uniquesubquery_engine *engine_arg,
+ TABLE *tmp_table_arg, uint keys_count_arg,
+ uint covering_null_row_width_arg,
+ Item_subselect *item_arg,
+ select_result_interceptor *result_arg,
+ List<Item> *equi_join_conds_arg)
+ :subselect_partial_match_engine(engine_arg, tmp_table_arg, item_arg,
+ result_arg, equi_join_conds_arg,
+ covering_null_row_width_arg),
+ keys_count(keys_count_arg), non_null_key(NULL)
+ {
+ thd= lookup_engine->get_thd();
+ }
+ ~subselect_rowid_merge_engine();
+ bool init(MY_BITMAP *non_null_key_parts, MY_BITMAP *partial_match_key_parts);
+ void cleanup();
+ virtual enum_engine_type engine_type() { return ROWID_MERGE_ENGINE; }
+};
+
+
+class subselect_table_scan_engine: public subselect_partial_match_engine
+{
+protected:
+ bool partial_match();
+public:
+ subselect_table_scan_engine(subselect_uniquesubquery_engine *engine_arg,
+ TABLE *tmp_table_arg, Item_subselect *item_arg,
+ select_result_interceptor *result_arg,
+ List<Item> *equi_join_conds_arg,
+ uint covering_null_row_width_arg);
+ void cleanup();
+ virtual enum_engine_type engine_type() { return TABLE_SCAN_ENGINE; }
+};
=== modified file 'sql/mysql_priv.h'
--- a/sql/mysql_priv.h 2010-01-17 14:55:08 +0000
+++ b/sql/mysql_priv.h 2010-03-09 10:14:06 +0000
@@ -552,12 +552,14 @@
#define OPTIMIZER_SWITCH_LOOSE_SCAN 64
#define OPTIMIZER_SWITCH_MATERIALIZATION 128
#define OPTIMIZER_SWITCH_SEMIJOIN 256
+#define OPTIMIZER_SWITCH_PARTIAL_MATCH_ROWID_MERGE 512
+#define OPTIMIZER_SWITCH_PARTIAL_MATCH_TABLE_SCAN 1024
#ifdef DBUG_OFF
-# define OPTIMIZER_SWITCH_LAST 512
+# define OPTIMIZER_SWITCH_LAST 2048
#else
-# define OPTIMIZER_SWITCH_TABLE_ELIMINATION 512
-# define OPTIMIZER_SWITCH_LAST 1024
+# define OPTIMIZER_SWITCH_TABLE_ELIMINATION 2048
+# define OPTIMIZER_SWITCH_LAST 4096
#endif
#ifdef DBUG_OFF
@@ -570,8 +572,10 @@
OPTIMIZER_SWITCH_FIRSTMATCH | \
OPTIMIZER_SWITCH_LOOSE_SCAN | \
OPTIMIZER_SWITCH_MATERIALIZATION | \
- OPTIMIZER_SWITCH_SEMIJOIN)
-#else
+ OPTIMIZER_SWITCH_SEMIJOIN | \
+ OPTIMIZER_SWITCH_PARTIAL_MATCH_ROWID_MERGE|\
+ OPTIMIZER_SWITCH_PARTIAL_MATCH_TABLE_SCAN)
+#else
# define OPTIMIZER_SWITCH_DEFAULT (OPTIMIZER_SWITCH_INDEX_MERGE | \
OPTIMIZER_SWITCH_INDEX_MERGE_UNION | \
OPTIMIZER_SWITCH_INDEX_MERGE_SORT_UNION | \
@@ -581,7 +585,9 @@
OPTIMIZER_SWITCH_FIRSTMATCH | \
OPTIMIZER_SWITCH_LOOSE_SCAN | \
OPTIMIZER_SWITCH_MATERIALIZATION | \
- OPTIMIZER_SWITCH_SEMIJOIN)
+ OPTIMIZER_SWITCH_SEMIJOIN | \
+ OPTIMIZER_SWITCH_PARTIAL_MATCH_ROWID_MERGE|\
+ OPTIMIZER_SWITCH_PARTIAL_MATCH_TABLE_SCAN)
#endif
/*
=== modified file 'sql/mysqld.cc'
--- a/sql/mysqld.cc 2010-01-17 14:55:08 +0000
+++ b/sql/mysqld.cc 2010-03-09 10:14:06 +0000
@@ -301,7 +301,9 @@
"index_merge","index_merge_union","index_merge_sort_union",
"index_merge_intersection",
"index_condition_pushdown",
- "firstmatch","loosescan","materialization", "semijoin",
+ "firstmatch","loosescan","materialization", "semijoin",
+ "partial_match_rowid_merge",
+ "partial_match_table_scan",
#ifndef DBUG_OFF
"table_elimination",
#endif
@@ -320,6 +322,8 @@
sizeof("loosescan") - 1,
sizeof("materialization") - 1,
sizeof("semijoin") - 1,
+ sizeof("partial_match_rowid_merge") - 1,
+ sizeof("partial_match_table_scan") - 1,
#ifndef DBUG_OFF
sizeof("table_elimination") - 1,
#endif
@@ -5794,7 +5798,8 @@
OPT_RECORD_RND_BUFFER, OPT_DIV_PRECINCREMENT, OPT_RELAY_LOG_SPACE_LIMIT,
OPT_RELAY_LOG_PURGE,
OPT_SLAVE_NET_TIMEOUT, OPT_SLAVE_COMPRESSED_PROTOCOL, OPT_SLOW_LAUNCH_TIME,
- OPT_SLAVE_TRANS_RETRIES, OPT_READONLY, OPT_DEBUGGING, OPT_DEBUG_FLUSH,
+ OPT_SLAVE_TRANS_RETRIES, OPT_READONLY, OPT_ROWID_MERGE_BUFF_SIZE,
+ OPT_DEBUGGING, OPT_DEBUG_FLUSH,
OPT_SORT_BUFFER, OPT_TABLE_OPEN_CACHE, OPT_TABLE_DEF_CACHE,
OPT_THREAD_CONCURRENCY, OPT_THREAD_CACHE_SIZE,
OPT_TMP_TABLE_SIZE, OPT_THREAD_STACK,
@@ -7130,6 +7135,11 @@
(uchar**) &max_system_variables.range_alloc_block_size, 0, GET_ULONG,
REQUIRED_ARG, RANGE_ALLOC_BLOCK_SIZE, RANGE_ALLOC_BLOCK_SIZE,
(longlong) ULONG_MAX, 0, 1024, 0},
+ {"rowid_merge_buff_size", OPT_ROWID_MERGE_BUFF_SIZE,
+ "The size of the buffers used [NOT] IN evaluation via partial matching.",
+ (uchar**) &global_system_variables.rowid_merge_buff_size,
+ (uchar**) &max_system_variables.rowid_merge_buff_size, 0, GET_ULONG,
+ REQUIRED_ARG, 8*1024*1024L, 0, MAX_MEM_TABLE_SIZE/2, 0, 1, 0},
{"read_buffer_size", OPT_RECORD_BUFFER,
"Each thread that does a sequential scan allocates a buffer of this size for each table it scans. If you do many sequential scans, you may want to increase this value.",
(uchar**) &global_system_variables.read_buff_size,
=== modified file 'sql/opt_subselect.cc'
--- a/sql/opt_subselect.cc 2010-03-15 06:32:54 +0000
+++ b/sql/opt_subselect.cc 2010-03-15 15:09:35 +0000
@@ -187,10 +187,10 @@
does not call setup_subquery_materialization(). We could make
SELECT ... FROM DUAL call that function but that doesn't seem
to be the case that is worth handling.
- 4. Subquery predicate is a top-level predicate
- (this implies it is not negated)
- TODO: this is a limitation that should be lifted once we
- implement correct NULL semantics (WL#3830)
+ 4. Either the subquery predicate is a top-level predicate, or at
+ least one partial match strategy is enabled. If no partial match
+ strategy is enabled, then materialization cannot be used for
+ non-top-level queries because it cannot handle NULLs correctly.
5. Subquery is non-correlated
TODO:
This is an overly restrictive condition. It can be extended to:
@@ -204,8 +204,8 @@
(*) The subquery must be part of a SELECT statement. The current
condition also excludes multi-table update statements.
- We have to determine whether we will perform subquery materialization
- before calling the IN=>EXISTS transformation, so that we know whether to
+ Determine whether we will perform subquery materialization before
+ calling the IN=>EXISTS transformation, so that we know whether to
perform the whole transformation or only that part of it which wraps
Item_in_subselect in an Item_in_optimizer.
*/
@@ -215,12 +215,14 @@
select_lex->master_unit()->first_select()->leaf_tables && // 3
thd->lex->sql_command == SQLCOM_SELECT && // *
select_lex->outer_select()->leaf_tables && // 3A
- subquery_types_allow_materialization(in_subs))
+ subquery_types_allow_materialization(in_subs) &&
+ // psergey-todo: duplicated_subselect_card_check: where it's done?
+ (in_subs->is_top_level_item() ||
+ optimizer_flag(thd, OPTIMIZER_SWITCH_PARTIAL_MATCH_ROWID_MERGE) ||
+ optimizer_flag(thd, OPTIMIZER_SWITCH_PARTIAL_MATCH_TABLE_SCAN)) &&//4
+ !in_subs->is_correlated && // 5
+ in_subs->exec_method == Item_in_subselect::NOT_TRANSFORMED) // 6
{
- // psergey-todo: duplicated_subselect_card_check: where it's done?
- if (in_subs->is_top_level_item() && // 4
- !in_subs->is_correlated && // 5
- in_subs->exec_method == Item_in_subselect::NOT_TRANSFORMED) // 6
in_subs->exec_method= Item_in_subselect::MATERIALIZATION;
}
=== modified file 'sql/set_var.cc'
--- a/sql/set_var.cc 2009-12-22 12:49:15 +0000
+++ b/sql/set_var.cc 2010-03-09 10:14:06 +0000
@@ -540,6 +540,9 @@
static sys_var_thd_ulong sys_range_alloc_block_size(&vars, "range_alloc_block_size",
&SV::range_alloc_block_size);
+static sys_var_thd_ulong sys_rowid_merge_buff_size(&vars, "rowid_merge_buff_size",
+ &SV::rowid_merge_buff_size);
+
static sys_var_thd_ulong sys_query_alloc_block_size(&vars, "query_alloc_block_size",
&SV::query_alloc_block_size,
0, fix_thd_mem_root);
=== modified file 'sql/sql_class.cc'
--- a/sql/sql_class.cc 2010-02-17 21:59:41 +0000
+++ b/sql/sql_class.cc 2010-02-19 21:55:57 +0000
@@ -42,6 +42,7 @@
#include "sp_rcontext.h"
#include "sp_cache.h"
+#include "sql_select.h" /* declares create_tmp_table() */
/*
The following is used to initialise Table_ident with a internal
@@ -2877,6 +2878,71 @@
return 0;
}
+
+bool
+select_materialize_with_stats::
+create_result_table(THD *thd_arg, List<Item> *column_types,
+ bool is_union_distinct, ulonglong options,
+ const char *table_alias, bool bit_fields_as_long)
+{
+ DBUG_ASSERT(table == 0);
+ tmp_table_param.field_count= column_types->elements;
+ tmp_table_param.bit_fields_as_long= bit_fields_as_long;
+
+ if (! (table= create_tmp_table(thd_arg, &tmp_table_param, *column_types,
+ (ORDER*) 0, is_union_distinct, 1,
+ options, HA_POS_ERROR, (char*) table_alias)))
+ return TRUE;
+
+ col_stat= (Column_statistics*) table->in_use->alloc(table->s->fields *
+ sizeof(Column_statistics));
+  if (!col_stat)
+ return TRUE;
+
+ cleanup();
+
+ table->file->extra(HA_EXTRA_WRITE_CACHE);
+ table->file->extra(HA_EXTRA_IGNORE_DUP_KEY);
+ return FALSE;
+}
+
+
+/**
+ Override select_union::send_data to analyze each row for NULLs and to
+ update null_statistics before sending data to the client.
+
+ @return TRUE if fatal error when sending data to the client
+ @return FALSE on success
+*/
+
+bool select_materialize_with_stats::send_data(List<Item> &items)
+{
+ List_iterator_fast<Item> item_it(items);
+ Item *cur_item;
+ Column_statistics *cur_col_stat= col_stat;
+ uint nulls_in_row= 0;
+
+ ++count_rows;
+
+ while ((cur_item= item_it++))
+ {
+ if (cur_item->is_null())
+ {
+ ++cur_col_stat->null_count;
+ cur_col_stat->max_null_row= count_rows;
+ if (!cur_col_stat->min_null_row)
+ cur_col_stat->min_null_row= count_rows;
+ ++nulls_in_row;
+ }
+ ++cur_col_stat;
+ }
+ if (nulls_in_row > max_nulls_in_row)
+ max_nulls_in_row= nulls_in_row;
+
+ return select_union::send_data(items);
+}
+
+
/****************************************************************************
TMP_TABLE_PARAM
****************************************************************************/
=== modified file 'sql/sql_class.h'
--- a/sql/sql_class.h 2010-02-17 21:59:41 +0000
+++ b/sql/sql_class.h 2010-03-09 10:14:06 +0000
@@ -343,6 +343,8 @@
ulong mrr_buff_size;
ulong div_precincrement;
ulong sortbuff_size;
+ /* Total size of all buffers used by the subselect_rowid_merge_engine. */
+ ulong rowid_merge_buff_size;
ulong thread_handling;
ulong tx_isolation;
ulong completion_type;
@@ -2740,19 +2742,20 @@
class select_union :public select_result_interceptor
{
+protected:
TMP_TABLE_PARAM tmp_table_param;
public:
TABLE *table;
- select_union() :table(0) {}
+ select_union() :table(0) { tmp_table_param.init(); }
int prepare(List<Item> &list, SELECT_LEX_UNIT *u);
bool send_data(List<Item> &items);
bool send_eof();
bool flush();
- bool create_result_table(THD *thd, List<Item> *column_types,
- bool is_distinct, ulonglong options,
- const char *alias, bool bit_fields_as_long);
+ virtual bool create_result_table(THD *thd, List<Item> *column_types,
+ bool is_distinct, ulonglong options,
+ const char *alias, bool bit_fields_as_long);
};
/* Base subselect interface class */
@@ -2776,6 +2779,74 @@
bool send_data(List<Item> &items);
};
+
+/*
+ This class specializes select_union to collect statistics about the
+  data stored in the temp table. Currently the class collects statistics
+ about NULLs.
+*/
+
+class select_materialize_with_stats : public select_union
+{
+protected:
+ class Column_statistics
+ {
+ public:
+ /* Count of NULLs per column. */
+ ha_rows null_count;
+ /* The row number that contains the first NULL in a column. */
+ ha_rows min_null_row;
+ /* The row number that contains the last NULL in a column. */
+ ha_rows max_null_row;
+ };
+
+ /* Array of statistics data per column. */
+ Column_statistics* col_stat;
+
+ /*
+ The number of columns in the biggest sub-row that consists of only
+ NULL values.
+ */
+ ha_rows max_nulls_in_row;
+ /*
+    Count of rows written to the temp table. This is redundant as it is
+    already stored in handler::stats.records, however that one is relatively
+    expensive to compute (given we need that for every row).
+ */
+ ha_rows count_rows;
+
+public:
+ select_materialize_with_stats() {}
+ virtual bool create_result_table(THD *thd, List<Item> *column_types,
+ bool is_distinct, ulonglong options,
+ const char *alias, bool bit_fields_as_long);
+ bool init_result_table(ulonglong select_options);
+ bool send_data(List<Item> &items);
+ void cleanup()
+ {
+ memset(col_stat, 0, table->s->fields * sizeof(Column_statistics));
+ max_nulls_in_row= 0;
+ count_rows= 0;
+ }
+ ha_rows get_null_count_of_col(uint idx)
+ {
+ DBUG_ASSERT(idx < table->s->fields);
+ return col_stat[idx].null_count;
+ }
+ ha_rows get_max_null_of_col(uint idx)
+ {
+ DBUG_ASSERT(idx < table->s->fields);
+ return col_stat[idx].max_null_row;
+ }
+ ha_rows get_min_null_of_col(uint idx)
+ {
+ DBUG_ASSERT(idx < table->s->fields);
+ return col_stat[idx].min_null_row;
+ }
+ ha_rows get_max_nulls_in_row() { return max_nulls_in_row; }
+};
+
+
/* used in independent ALL/ANY optimisation */
class select_max_min_finder_subselect :public select_subselect
{
=== modified file 'sql/sql_select.cc'
--- a/sql/sql_select.cc 2010-03-14 18:25:43 +0000
+++ b/sql/sql_select.cc 2010-03-15 14:34:56 +0000
@@ -874,6 +874,9 @@
{
DBUG_PRINT("info",("No tables"));
error= 0;
+ /* Create all structures needed for materialized subquery execution. */
+ if (setup_subquery_materialization())
+ DBUG_RETURN(1);
DBUG_RETURN(0);
}
error= -1; // Error is sent to client
@@ -11258,7 +11261,7 @@
param->group_buff=group_buff;
share->keys=1;
share->uniques= test(using_unique_constraint);
- table->key_info=keyinfo;
+ table->key_info= table->s->key_info= keyinfo;
keyinfo->key_part=key_part_info;
keyinfo->flags=HA_NOSAME;
keyinfo->usable_key_parts=keyinfo->key_parts= param->group_parts;
@@ -11344,7 +11347,7 @@
keyinfo->key_parts * sizeof(KEY_PART_INFO))))
goto err;
bzero((void*) key_part_info, keyinfo->key_parts * sizeof(KEY_PART_INFO));
- table->key_info=keyinfo;
+ table->key_info= table->s->key_info= keyinfo;
keyinfo->key_part=key_part_info;
keyinfo->flags=HA_NOSAME | HA_NULL_ARE_EQUAL;
keyinfo->key_length= 0; // Will compute the sum of the parts below.
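
The heart of the patch above is the priority-queue merge in
subselect_rowid_merge_engine::partial_match(). A self-contained sketch of the
same idea, simplified to standard C++ containers (std::priority_queue instead
of the server's QUEUE, plain vectors instead of Ordered_key, and a std::set
standing in for the NULL bitmaps), purely illustrative and not part of the
patch:

/*
  Illustration only: for each column we are given the sorted row numbers
  whose value matches the outer reference, plus the set of rows that are
  NULL in that column. A row is a partial match if every column either
  matches it by value or contains NULL in it.
*/
#include <cstddef>
#include <functional>
#include <queue>
#include <set>
#include <utility>
#include <vector>

struct ColumnIndex
{
  std::vector<size_t> matching_rows;  /* sorted rows with a value match */
  std::set<size_t>    null_rows;      /* rows that are NULL in this column */
  size_t              pos= 0;         /* cursor into matching_rows */
  bool   exhausted() const { return pos >= matching_rows.size(); }
  size_t current()   const { return matching_rows[pos]; }
};

/* Returns true if some row is a partial match (IN is TRUE or UNKNOWN). */
bool partial_match(std::vector<ColumnIndex> &cols)
{
  typedef std::pair<size_t, size_t> Entry;   /* <row number, column index> */
  std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry> > pq;

  for (size_t c= 0; c < cols.size(); c++)
    if (!cols[c].exhausted())
      pq.push(Entry(cols[c].current(), c));

  while (!pq.empty())
  {
    size_t row= pq.top().first;
    std::vector<bool> matched(cols.size(), false);
    /* Pop every column whose current entry is this row. */
    while (!pq.empty() && pq.top().first == row)
    {
      size_t c= pq.top().second;
      pq.pop();
      matched[c]= true;
      if (++cols[c].pos < cols[c].matching_rows.size())
        pq.push(Entry(cols[c].current(), c));
    }
    /* Columns without a value match must contain NULL in this row. */
    bool covered= true;
    for (size_t c= 0; c < cols.size(); c++)
      if (!matched[c] && !cols[c].null_rows.count(row))
      {
        covered= false;
        break;
      }
    if (covered)
      return true;                           /* partial match found */
  }
  return false;                              /* no match at all: FALSE */
}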
Hi!
>>>>> "Sergei" == Sergei Golubchik <serg(a)askmonty.org> writes:
<cut>
>>> 2. Unknown option should be an error by default.
>>
>> OK. The only problem is that it contradicts Monty's requirements.
>> Our initial decision was to issue an error if an option was added explicitly.
>> The only problem is that it is very difficult to implement - we write
>> options to .frm first then read them and pass to engine. I have no
>> idea how to pass this information via/over frm.
Sergei> I hope you've seen my reasoning below about optimizing for a common
Sergei> case. Monty wants boundary cases to work - like changing engines back
Sergei> and forth and replication. I am saying that by default unknown options
Sergei> should be an error, but one should be able to disable that.
Sergei> "An error if opion as added explicitly" does not solve all boundary
Sergei> cases, for example, restoring a dump into a different engine.
Sergei> Monty would probably want to cover that too.
As almost all options are just 'extra information', I prefer that by
default one doesn't get an error if the engine doesn't recognize the
option.
Otherwise it's hell for automatic create table tools to work.
It's much easier if one can just choose an engine and then different
options, some of which are supported and others that may not be supported.
Otherwise each tool would need to have a list of all existing engines
and what options each supports, which would be real hell.
>>> 3. use something my_getopt-like as we discussed, don't force every
>>> engine to parse its options
>>
>> I can add such a function for users to use, but it will be their choice
>> to use it or not, is that OK?
Sergei> What was the problem with doing it automatically ?
Because the engine will still need to do a switch over all options it
supports, so it's hard to do it automatically.
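
(To make that concrete: even with a generic parser, a purely hypothetical
engine-side handler would still end up doing something like the following.
The struct and option names are invented for illustration; this is not an
actual storage engine API.)

#include <stdlib.h>
#include <string.h>
#include <strings.h>

struct example_create_info { unsigned long block_size; int compressed; };

static int example_apply_option(struct example_create_info *info,
                                const char *key, const char *value)
{
  if (!strcasecmp(key, "block_size"))
    info->block_size= strtoul(value, NULL, 10);
  else if (!strcasecmp(key, "compressed"))
    info->compressed= !strcasecmp(value, "yes");
  else
    return 1;      /* unknown option: the caller decides warning vs. error */
  return 0;
}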
>>> 4. make options immutable to avoid copying them in ::clone
>>
>> I do not know a way to do it if they should be allocated in different
>> mem_roots.
Sergei> Example ? Where are they allocated in different memroots ?
This should work if we create the new table and reopen it before the
old table is closed (which should be the case).
>>> 5. don't check for changed options in alter table with your
>>> check_if_incompatible_data. let the engine do that.
>>
>> This and 8 require big changes to the engine and ALTER TABLE. Monty's
>> requirement was to not touch current code. I would be glad if you
>> discuss it and make a non-contradicting requirement.
No comments, but I think this is easier to do on the top level than in
the engine (but I don't remember Sanja's code exactly regarding this).
>>> 7. parser: make the equal sign optional
>>
>> I have some doubts that it is doable
>>
>> DATA DIRECTORY TEST VALUE ...
>>
>> Does it mean:
>>
>> DATA = DIRECTORY TEST = VALUE ...
>>
>> or
>>
>> DATA DIRECTORY = TEST VALUE ... ? - error
>> (ALTER TABLE uses create_table_options_space_separated list of options)
Sergei> did you try the code from my previous email ?
Agree with Sanja that not having = can lead to parse problems.
Also, using = is more readable, so I would prefer to start deprecating
the space between keyword and value over time.
<cut>
>>>> === modified file 'sql/sql_table.cc'
>>>> --- sql/sql_table.cc 2010-02-12 08:47:31 +0000
>>>> +++ sql/sql_table.cc 2010-03-04 20:46:55 +0000
>>>> @@ -5789,6 +5791,15 @@ compare_tables(TABLE *table,
>>>> DBUG_RETURN(0);
>>>> }
>>>>
>>>> + if (!is_equal_create_options(tmp_new_field->create_options.first,
>>>> + field->create_options.first))
>>>> + {
>>>
>>> I am not sure this should be checked on MySQL level, we don't know the
>>> semantics of options. I'd say this check belong to
>>> handler::check_if_incompatible_data() and should be implemented in the
>>> storage engine internally.
>>
>> Monty even requested me to recreate the .frm even if the case of a KEY was changed
>> (which clearly does not change semantics) - i.e. any change == rewriting the
>> .frm. So your requests contradict each other here and should be discussed (I do not see
>> sense nor harm in such a rewriting policy)
Sergei> recreating frm is one thing, doing a full alter with copying the data is
Sergei> another. I'm saying that it's not MySQL that should decide what change
Sergei> in table options requires copy_data_between_tables - but the engine
Sergei> itself.
Agree that it's only the engine that knows if we need to copy the data
or not.
>>>> +plugin_option_value:
>>>> + DEFAULT
>>>> + {
>>>> + $$.str= NULL; /* We are going to remove the option */
>>>> + $$.length= 0;
>>>> + }
>>>> + | NULL_SYM
>>>
>>> I don't like this trick.
>>> If you don't support NULLs, dont't allow users to specify them
>>
>> how can it be stored as a parameter value? Such semantics prevent users from
>> thinking that assigning NULL will make it really NULL, not "NULL".
Sergei> It won't be "NULL", IDENT_sys that you use in plugin_option_value
Sergei> will not treat NULL as an ident. I think if you simply remove
Sergei> NULL alternative from the plugin_option_value rule, you'll end up
Sergei> having a syntax error for option=NULL, which is better than what you
Sergei> have now.
Ok with me that we delete the =NULL syntax to remove options.
>>>> +++ sql/sql_create_options.cc 2010-03-04 20:46:55 +0000
>>>> +my_bool create_option_add(CREATE_OPTION_LIST *options, MEM_ROOT *root,
>>>> + const LEX_STRING *str_key,
>>>> + const LEX_STRING *str_val,
>>>> + my_bool *changed)
>>>> +{
>>>> + CREATE_OPTION *cur_option, **option;
>>>> + char *key, *val;
>>>> + my_bool not_used;
>>>> + my_bool copy= FALSE;
>>>> + my_bool replace= FALSE;
>>>> + DBUG_ENTER("create_option_add");
>>>> + DBUG_PRINT("enter", ("key: '%s' value: '%s'",
>>>> + str_key->str, str_val->str));
>>>> + if (changed)
>>>> + copy= TRUE;
>>>> + else
>>>> + changed= ¬_used;
>>>> +
>>>> + DBUG_ASSERT(options->first ||
>>>> + (!options->first && options->last == &options->first));
>>>> + *changed= FALSE;
>>>
>>> Hmm, strange. From the way you use 'changed' I thought it should
>>> accumulate
>>> the results - I mean, it's one variable that is passed into
>>> create_option_add() for all options. Apparently at the end it should be
>>> true if *any* of the options has changed.
>>>
>>> But then, why do you set it to false inside create_option_add() ?
>>
>> It was special case for call from ALTER TABLE and from parser. Only ALTER
>> TABLE was interested in changes and so required copying parameters.
Sergei> I don't understand.
In my review I also thought it would be much more logical if 'changed'
were reset (if needed) on the outer level, not in the function.
>>>> +
>>>> + /* try to find the option first */
>>>> + for (option= &(options->first);
>>>> + *option && my_strcasecmp(system_charset_info,
>>>> + str_key->str, (*option)->key.str);
>>>> + option= &((*option)->next)) ;
>>>> + if (str_val->str)
>>>> + {
>>>> + /* add / replace */
>>>> + if (*option)
>>>> + {
>>>> + /* replace */
>>>> + cur_option= *option;
>>>> + if (!(*changed) &&
>>>> + (cur_option->val.length != str_val->length ||
>>>> + memcmp(cur_option->val.str, str_val->str, str_val->length)))
>>>> + {
>>>> + *changed= TRUE;
>>>> + }
>>>> + replace= TRUE;
>>>> + }
>>>> + else
>>>> + {
Sergei> ...
>>>> +CREATE_OPTION_LIST *create_create_options_array(MEM_ROOT *root, uint n)
>>>
>>> "create_create" is not a good name :(
>>
>> I did not find a better one but am open for suggestions.
Sergei> make_create_options_array ?
Sergei> construct_create_options_array ?
construct_create_options_array sounds nice to me.
>>>> +my_bool create_options_read(const uchar *buff, uint length, MEM_ROOT
>>>> *root,
>>>> + TABLE_OPTIONS *opt)
>>>> +{
>>>> + const uchar *buff_end= buff + length;
>>>> + DBUG_ENTER("create_options_read");
>>>> + while (buff < buff_end)
>>>> + {
>>>> + CREATE_OPTION *option;
>>>> + CREATE_OPTION_TYPES type;
>>>> + uint index= 0;
>>>> +
>>>> + if (!(option= (CREATE_OPTION *) alloc_root(root,
>>>> sizeof(CREATE_OPTION))))
>>>> + DBUG_RETURN(TRUE);
>>>> +
>>>> + DBUG_ASSERT(buff + 4 <= buff_end);
>>>> + option->val.length= uint2korr(buff);
>>>> + option->key.length= buff[2];
>>>> + option->next= NULL;
>>>> + type= (CREATE_OPTION_TYPES)buff[3];
>>>> + buff+= 4;
>>>> + switch (type) {
>>>> + case CREATE_OPTION_FIELD:
>>>
>>> interesting encoding. so basically you support the case when field,
>>> key, and table options are all written interleaved:
>>>
>>> <table option><key 1 option><field 5 option><table option><field 3 option> <key 4 option>...
>>>
>>> why the heck do you want to support it ?
>>
>> Could you propose another encoding, taking into account that some fields, keys
>> and tables do not have parameters and some have several?
Sergei> Sure. Many :)
Sergei> For example
Sergei> <number of table options>
Sergei> <length-encoded strings for table options>
Sergei> <number of field 1 options>
Sergei> <length-encoded strings for field 1 options>
Sergei> <number of field 2 options>
Sergei> <length-encoded strings for field 2 options>
Sergei> ...
Sergei> <number of key 1 options>
Sergei> <length-encoded strings for key 1 options>
Sergei> <number of key 2 options>
Sergei> <length-encoded strings for key 2 options>
Sergei> Assuming a table with three fields and two keys that would be
Sergei> 0x02 0x05 "topt1" 0x03 "val" 0x03 "to2" 0x04 "val2"
Sergei> 0x00
Sergei> 0x01 0x04 "fil1" 0x01 "1"
Sergei> 0x03 0x01 "A" 0x02 "bb" 0x01 "B" 0x02 "CC" 0x02 "de" 0x01 "0"
Sergei> 0x01 0x06 "packed" 0x03 "yes"
Sergei> 0x00
I also originally thought about this (I would probably have stored
things the above way if I had coded this).
However, I am not sure that the code would be shorter than Sanja's
code. The fact that the code can handle cases that never happen in
reality didn't bother me.
Regards,
Monty
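
(For illustration only: a minimal sketch of a reader for the layout proposed
above, that is, one count byte per object followed by length-prefixed
key/value strings as in the example bytes. This is hypothetical code, not the
code under review in sql_create_options.cc.)

#include <stdio.h>

static const unsigned char *read_option_block(const unsigned char *p,
                                              const unsigned char *end)
{
  unsigned count= *p++;
  unsigned i;
  for (i= 0; i < count && p < end; i++)
  {
    unsigned key_len= *p++;
    const char *key= (const char *) p;
    p+= key_len;
    unsigned val_len= *p++;
    const char *val= (const char *) p;
    p+= val_len;
    printf("%.*s=%.*s\n", (int) key_len, key, (int) val_len, val);
  }
  return p;      /* start of the next object's option block */
}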
[Maria-developers] Progress (by Knielsen): New replication APIs (107)
by worklog-noreply@askmonty.org 15 Mar '10
by worklog-noreply@askmonty.org 15 Mar '10
15 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: New replication APIs
CREATION DATE..: Mon, 15 Mar 2010, 13:55
SUPERVISOR.....: Knielsen
IMPLEMENTOR....: Knielsen
COPIES TO......: Sergei
CATEGORY.......: Server-Sprint
TASK ID........: 107 (http://askmonty.org/worklog/?tid=107)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 25
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Knielsen - Mon, 15 Mar 2010, 14:28)=-=-
Research into the problem, and discussions on phone/mailing list
Worked 25 hours and estimate 0 hours remain (original estimate increased by 25 hours).
-=-=(Guest - Mon, 15 Mar 2010, 14:18)=-=-
High-Level Specification modified.
--- /tmp/wklog.107.old.9086 2010-03-15 14:18:18.000000000 +0000
+++ /tmp/wklog.107.new.9086 2010-03-15 14:18:18.000000000 +0000
@@ -1 +1,43 @@
+Current ideas/status after discussions on the mailing list:
+
+ - Implement a set of plugin APIs and use them to move all of the existing
+ MySQL replication into a (set of) plugins.
+
+ - Design the APIs so that they can support full MySQL replication, but also
+ so that they do not hardcode assumptions about how this replication
+ implementation is done, and so that they will be suitable for other types of
+ replication (Tungsten, Galera, parallel replication, ...).
+
+ - APIs need to include the concept of a global transaction ID. Need to
+   determine the extent to which the semantics of such an ID will be defined
+   by the API, and to which extent it will be defined by the plugin
+ implementations.
+
+ - APIs should properly support reliable crash-recovery with decent
+ performance (eg. not require multiple mandatory fsync()s per commit, and
+ not make group commit impossible).
+
+ - Would be nice if the API provided facilities for implementing good
+ consistency checking support (mainly checking master tables against slave
+ tables is hard here I think, but also applying wrong binlog data and
+ individual event checksums).
+
+
+Steps to make this more concrete:
+
+ - Investigate the current MySQL replication, and list all of the places where
+ a plugin implementation will need to connect/hook into the MySQL server.
+ * handler::{write,update,delete}_row()
+ * Statement execution
+ * Transaction start/commit
+ * Table open
+   * Query safe/not safe for statement-based replication
+ * Statement-based logging details (user variables, random seed, etc.)
+ * ...
+
+ - Use this list to make an initial sketch of the set of APIs we need.
+
+ - Use the list to determine the feasibility of this project and the level of
+ detail in the API needed to support a full replication implementation as a
+ plugin.
-=-=(Serg - Mon, 15 Mar 2010, 14:13)=-=-
Observers changed: Sergei
DESCRIPTION:
This is a top-level task for the project of designing a new set of replication
APIs for MariaDB.
This task is for the initial discussion of what to do and where to focus.
The project is started in this email thread:
https://lists.launchpad.net/maria-developers/msg01998.html
HIGH-LEVEL SPECIFICATION:
Current ideas/status after discussions on the mailing list:
- Implement a set of plugin APIs and use them to move all of the existing
MySQL replication into a (set of) plugins.
- Design the APIs so that they can support full MySQL replication, but also
so that they do not hardcode assumptions about how this replication
implementation is done, and so that they will be suitable for other types of
replication (Tungsten, Galera, parallel replication, ...).
- APIs need to include the concept of a global transaction ID. Need to
   determine the extent to which the semantics of such an ID will be defined
   by the API, and to which extent it will be defined by the plugin
implementations.
- APIs should properly support reliable crash-recovery with decent
performance (eg. not require multiple mandatory fsync()s per commit, and
not make group commit impossible).
- Would be nice if the API provided facilities for implementing good
consistency checking support (mainly checking master tables against slave
tables is hard here I think, but also applying wrong binlog data and
individual event checksums).
Steps to make this more concrete:
- Investigate the current MySQL replication, and list all of the places where
a plugin implementation will need to connect/hook into the MySQL server.
* handler::{write,update,delete}_row()
* Statement execution
* Transaction start/commit
* Table open
* Query safe/not safe for statement-based replication
* Statement-based logging details (user variables, random seed, etc.)
* ...
- Use this list to make an initial sketch of the set of APIs we need.
- Use the list to determine the feasibility of this project and the level of
detail in the API needed to support a full replication implementation as a
plugin.
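To make the hook list above more concrete, the following is a rough sketch of the shape such a plugin interface could take. It is purely illustrative: the class name, method names and signatures are invented for this discussion and do not correspond to any existing MariaDB or MySQL API; a real proposal would come out of the investigation step above.

/* Illustrative sketch only -- not an existing MariaDB/MySQL header.
 * The methods mirror the hook points listed above: row changes,
 * statement execution, table open, transaction boundaries, and
 * statement-based replication (SBR) safety/logging context. */
#include <cstdint>
#include <string>

// Semantics of the global transaction ID (how much is defined by the API
// and how much by the plugin) are still an open question, as noted above.
struct GlobalTrxId { std::uint64_t domain; std::uint64_t seq_no; };

class ReplicationPlugin {
public:
  virtual ~ReplicationPlugin() {}

  // handler::{write,update,delete}_row() hooks
  virtual void row_write(const void *table, const void *record) = 0;
  virtual void row_update(const void *table, const void *before, const void *after) = 0;
  virtual void row_delete(const void *table, const void *record) = 0;

  // Statement execution and table open
  virtual void statement_start(const std::string &query) = 0;
  virtual void table_open(const std::string &db, const std::string &table_name) = 0;

  // Transaction boundaries; commit must not force extra fsync()s or
  // defeat group commit, per the crash-recovery requirement above.
  virtual void trx_start() = 0;
  virtual GlobalTrxId trx_commit() = 0;

  // Statement-based replication support
  virtual bool statement_is_safe_for_sbr(const std::string &query) = 0;
  virtual void log_sbr_context(std::uint32_t random_seed) = 0;  // user variables etc. omitted
};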
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Progress (by Knielsen): New replication APIs (107)
by worklog-noreply@askmonty.org 15 Mar '10
by worklog-noreply@askmonty.org 15 Mar '10
15 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: New replication APIs
CREATION DATE..: Mon, 15 Mar 2010, 13:55
SUPERVISOR.....: Knielsen
IMPLEMENTOR....: Knielsen
COPIES TO......: Sergei
CATEGORY.......: Server-Sprint
TASK ID........: 107 (http://askmonty.org/worklog/?tid=107)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 25
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Knielsen - Mon, 15 Mar 2010, 14:28)=-=-
Research into the problem, and discussions on phone/mailing list
Worked 25 hours and estimate 0 hours remain (original estimate increased by 25 hours).
-=-=(Guest - Mon, 15 Mar 2010, 14:18)=-=-
High-Level Specification modified.
--- /tmp/wklog.107.old.9086 2010-03-15 14:18:18.000000000 +0000
+++ /tmp/wklog.107.new.9086 2010-03-15 14:18:18.000000000 +0000
@@ -1 +1,43 @@
+Current ideas/status after discussions on the mailing list:
+
+ - Implement a set of plugin APIs and use them to move all of the existing
+ MySQL replication into a (set of) plugins.
+
+ - Design the APIs so that they can support full MySQL replication, but also
+ so that they do not hardcode assumptions about how this replication
+ implementation is done, and so that they will be suitable for other types of
+ replication (Tungsten, Galera, parallel replication, ...).
+
+ - APIs need to include the concept of a global transaction ID. Need to
+ determine the extent to which the semantics of such ID will be defined
+   by the API, and to which extent it will be defined by the plugin
+   implementations.
+
+ - APIs should properly support reliable crash-recovery with decent
+   performance (e.g. not require multiple mandatory fsync()s per commit, and
+   not make group commit impossible).
+
+ - Would be nice if the API provided facilities for implementing good
+ consistency checking support (mainly checking master tables against slave
+ tables is hard here I think, but also applying wrong binlog data and
+ individual event checksums).
+
+
+Steps to make this more concrete:
+
+ - Investigate the current MySQL replication, and list all of the places where
+ a plugin implementation will need to connect/hook into the MySQL server.
+ * handler::{write,update,delete}_row()
+ * Statement execution
+ * Transaction start/commit
+ * Table open
+    * Query safe/not safe for statement-based replication
+ * Statement-based logging details (user variables, random seed, etc.)
+ * ...
+
+ - Use this list to make an initial sketch of the set of APIs we need.
+
+ - Use the list to determine the feasibility of this project and the level of
+ detail in the API needed to support a full replication implementation as a
+ plugin.
-=-=(Serg - Mon, 15 Mar 2010, 14:13)=-=-
Observers changed: Sergei
DESCRIPTION:
This is a top-level task for the project of designing a new set of replication
APIs for MariaDB.
This task is for the initial discussion of what to do and where to focus.
The project is started in this email thread:
https://lists.launchpad.net/maria-developers/msg01998.html
HIGH-LEVEL SPECIFICATION:
Current ideas/status after discussions on the mailing list:
- Implement a set of plugin APIs and use them to move all of the existing
MySQL replication into a (set of) plugins.
- Design the APIs so that they can support full MySQL replication, but also
so that they do not hardcode assumptions about how this replication
implementation is done, and so that they will be suitable for other types of
replication (Tungsten, Galera, parallel replication, ...).
- APIs need to include the concept of a global transaction ID. Need to
determine the extent to which the semantics of such ID will be defined
by the API, and to which extent it will be defined by the plugin
implementations.
- APIs should properly support reliable crash-recovery with decent
performance (e.g. not require multiple mandatory fsync()s per commit, and
not make group commit impossible).
- Would be nice if the API provided facilities for implementing good
consistency checking support (mainly checking master tables against slave
tables is hard here I think, but also applying wrong binlog data and
individual event checksums).
Steps to make this more concrete:
- Investigate the current MySQL replication, and list all of the places where
a plugin implementation will need to connect/hook into the MySQL server.
* handler::{write,update,delete}_row()
* Statement execution
* Transaction start/commit
* Table open
* Query safe/not safe for statement-based replication
* Statement-based logging details (user variables, random seed, etc.)
* ...
- Use this list to make an initial sketch of the set of APIs we need.
- Use the list to determine the feasibility of this project and the level of
detail in the API needed to support a full replication implementation as a
plugin.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Guest): New replication APIs (107)
by worklog-noreply@askmonty.org 15 Mar '10
by worklog-noreply@askmonty.org 15 Mar '10
15 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: New replication APIs
CREATION DATE..: Mon, 15 Mar 2010, 13:55
SUPERVISOR.....: Knielsen
IMPLEMENTOR....: Knielsen
COPIES TO......: Sergei
CATEGORY.......: Server-Sprint
TASK ID........: 107 (http://askmonty.org/worklog/?tid=107)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Guest - Mon, 15 Mar 2010, 14:18)=-=-
High-Level Specification modified.
--- /tmp/wklog.107.old.9086 2010-03-15 14:18:18.000000000 +0000
+++ /tmp/wklog.107.new.9086 2010-03-15 14:18:18.000000000 +0000
@@ -1 +1,43 @@
+Current ideas/status after discussions on the mailing list:
+
+ - Implement a set of plugin APIs and use them to move all of the existing
+ MySQL replication into a (set of) plugins.
+
+ - Design the APIs so that they can support full MySQL replication, but also
+ so that they do not hardcode assumptions about how this replication
+ implementation is done, and so that they will be suitable for other types of
+ replication (Tungsten, Galera, parallel replication, ...).
+
+ - APIs need to include the concept of a global transaction ID. Need to
+ determine the extent to which the semantics of such ID will be defined
+   by the API, and to which extent it will be defined by the plugin
+   implementations.
+
+ - APIs should properly support reliable crash-recovery with decent
+   performance (e.g. not require multiple mandatory fsync()s per commit, and
+   not make group commit impossible).
+
+ - Would be nice if the API provided facilities for implementing good
+ consistency checking support (mainly checking master tables against slave
+ tables is hard here I think, but also applying wrong binlog data and
+ individual event checksums).
+
+
+Steps to make this more concrete:
+
+ - Investigate the current MySQL replication, and list all of the places where
+ a plugin implementation will need to connect/hook into the MySQL server.
+ * handler::{write,update,delete}_row()
+ * Statement execution
+ * Transaction start/commit
+ * Table open
+    * Query safe/not safe for statement-based replication
+ * Statement-based logging details (user variables, random seed, etc.)
+ * ...
+
+ - Use this list to make an initial sketch of the set of APIs we need.
+
+ - Use the list to determine the feasibility of this project and the level of
+ detail in the API needed to support a full replication implementation as a
+ plugin.
-=-=(Serg - Mon, 15 Mar 2010, 14:13)=-=-
Observers changed: Sergei
DESCRIPTION:
This is a top-level task for the project of designing a new set of replication
APIs for MariaDB.
This task is for the initial discussion of what to do and where to focus.
The project is started in this email thread:
https://lists.launchpad.net/maria-developers/msg01998.html
HIGH-LEVEL SPECIFICATION:
Current ideas/status after discussions on the mailing list:
- Implement a set of plugin APIs and use them to move all of the existing
MySQL replication into a (set of) plugins.
- Design the APIs so that they can support full MySQL replication, but also
so that they do not hardcode assumptions about how this replication
implementation is done, and so that they will be suitable for other types of
replication (Tungsten, Galera, parallel replication, ...).
- APIs need to include the concept of a global transaction ID. Need to
determine the extent to which the semantics of such ID will be defined
by the API, and to which extent it will be defined by the plugin
implementations.
- APIs should properly support reliable crash-recovery with decent
performance (e.g. not require multiple mandatory fsync()s per commit, and
not make group commit impossible).
- Would be nice if the API provided facilities for implementing good
consistency checking support (mainly checking master tables against slave
tables is hard here I think, but also applying wrong binlog data and
individual event checksums).
Steps to make this more concrete:
- Investigate the current MySQL replication, and list all of the places where
a plugin implementation will need to connect/hook into the MySQL server.
* handler::{write,update,delete}_row()
* Statement execution
* Transaction start/commit
* Table open
* Query safe/not safe for statement-based replication
* Statement-based logging details (user variables, random seed, etc.)
* ...
- Use this list to make an initial sketch of the set of APIs we need.
- Use the list to determine the feasibility of this project and the level of
detail in the API needed to support a full replication implementation as a
plugin.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Guest): New replication APIs (107)
by worklog-noreply@askmonty.org 15 Mar '10
by worklog-noreply@askmonty.org 15 Mar '10
15 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: New replication APIs
CREATION DATE..: Mon, 15 Mar 2010, 13:55
SUPERVISOR.....: Knielsen
IMPLEMENTOR....: Knielsen
COPIES TO......: Sergei
CATEGORY.......: Server-Sprint
TASK ID........: 107 (http://askmonty.org/worklog/?tid=107)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Guest - Mon, 15 Mar 2010, 14:18)=-=-
High-Level Specification modified.
--- /tmp/wklog.107.old.9086 2010-03-15 14:18:18.000000000 +0000
+++ /tmp/wklog.107.new.9086 2010-03-15 14:18:18.000000000 +0000
@@ -1 +1,43 @@
+Current ideas/status after discussions on the mailing list:
+
+ - Implement a set of plugin APIs and use them to move all of the existing
+ MySQL replication into a (set of) plugins.
+
+ - Design the APIs so that they can support full MySQL replication, but also
+ so that they do not hardcode assumptions about how this replication
+ implementation is done, and so that they will be suitable for other types of
+ replication (Tungsten, Galera, parallel replication, ...).
+
+ - APIs need to include the concept of a global transaction ID. Need to
+ determine the extent to which the semantics of such ID will be defined
+   by the API, and to which extent it will be defined by the plugin
+   implementations.
+
+ - APIs should properly support reliable crash-recovery with decent
+   performance (e.g. not require multiple mandatory fsync()s per commit, and
+   not make group commit impossible).
+
+ - Would be nice if the API provided facilities for implementing good
+ consistency checking support (mainly checking master tables against slave
+ tables is hard here I think, but also applying wrong binlog data and
+ individual event checksums).
+
+
+Steps to make this more concrete:
+
+ - Investigate the current MySQL replication, and list all of the places where
+ a plugin implementation will need to connect/hook into the MySQL server.
+ * handler::{write,update,delete}_row()
+ * Statement execution
+ * Transaction start/commit
+ * Table open
+    * Query safe/not safe for statement-based replication
+ * Statement-based logging details (user variables, random seed, etc.)
+ * ...
+
+ - Use this list to make an initial sketch of the set of APIs we need.
+
+ - Use the list to determine the feasibility of this project and the level of
+ detail in the API needed to support a full replication implementation as a
+ plugin.
-=-=(Serg - Mon, 15 Mar 2010, 14:13)=-=-
Observers changed: Sergei
DESCRIPTION:
This is a top-level task for the project of designing a new set of replication
APIs for MariaDB.
This task is for the initial discussion of what to do and where to focus.
The project is started in this email thread:
https://lists.launchpad.net/maria-developers/msg01998.html
HIGH-LEVEL SPECIFICATION:
Current ideas/status after discussions on the mailing list:
- Implement a set of plugin APIs and use them to move all of the existing
MySQL replication into a (set of) plugins.
- Design the APIs so that they can support full MySQL replication, but also
so that they do not hardcode assumptions about how this replication
implementation is done, and so that they will be suitable for other types of
replication (Tungsten, Galera, parallel replication, ...).
- APIs need to include the concept of a global transaction ID. Need to
determine the extent to which the semantics of such ID will be defined
by the API, and to which extent it will be defined by the plugin
implementations.
- APIs should properly support reliable crash-recovery with decent
performance (e.g. not require multiple mandatory fsync()s per commit, and
not make group commit impossible).
- Would be nice if the API provided facilities for implementing good
consistency checking support (mainly checking master tables against slave
tables is hard here I think, but also applying wrong binlog data and
individual event checksums).
Steps to make this more concrete:
- Investigate the current MySQL replication, and list all of the places where
a plugin implementation will need to connect/hook into the MySQL server.
* handler::{write,update,delete}_row()
* Statement execution
* Transaction start/commit
* Table open
* Query safe/not safe for statement-based replication
* Statement-based logging details (user variables, random seed, etc.)
* ...
- Use this list to make an initial sketch of the set of APIs we need.
- Use the list to determine the feasibility of this project and the level of
detail in the API needed to support a full replication implementation as a
plugin.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Serg): New replication APIs (107)
by worklog-noreply@askmonty.org 15 Mar '10
by worklog-noreply@askmonty.org 15 Mar '10
15 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: New replication APIs
CREATION DATE..: Mon, 15 Mar 2010, 13:55
SUPERVISOR.....: Knielsen
IMPLEMENTOR....: Knielsen
COPIES TO......: Sergei
CATEGORY.......: Server-Sprint
TASK ID........: 107 (http://askmonty.org/worklog/?tid=107)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Serg - Mon, 15 Mar 2010, 14:13)=-=-
Observers changed: Sergei
DESCRIPTION:
This is a top-level task for the project of designing a new set of replication
APIs for MariaDB.
This task is for the initial discussion of what to do and where to focus.
The project is started in this email thread:
https://lists.launchpad.net/maria-developers/msg01998.html
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Serg): New replication APIs (107)
by worklog-noreply@askmonty.org 15 Mar '10
by worklog-noreply@askmonty.org 15 Mar '10
15 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: New replication APIs
CREATION DATE..: Mon, 15 Mar 2010, 13:55
SUPERVISOR.....: Knielsen
IMPLEMENTOR....: Knielsen
COPIES TO......: Sergei
CATEGORY.......: Server-Sprint
TASK ID........: 107 (http://askmonty.org/worklog/?tid=107)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Serg - Mon, 15 Mar 2010, 14:13)=-=-
Observers changed: Sergei
DESCRIPTION:
This is a top-level task for the project of designing a new set of replication
APIs for MariaDB.
This task is for the initial discussion of what to do and where to focus.
The project is started in this email thread:
https://lists.launchpad.net/maria-developers/msg01998.html
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] New (by Knielsen): New replication APIs (107)
by worklog-noreply@askmonty.org 15 Mar '10
by worklog-noreply@askmonty.org 15 Mar '10
15 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: New replication APIs
CREATION DATE..: Mon, 15 Mar 2010, 13:55
SUPERVISOR.....: Knielsen
IMPLEMENTOR....: Knielsen
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 107 (http://askmonty.org/worklog/?tid=107)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
DESCRIPTION:
This is a top-level task for the project of designing a new set of replication
APIs for MariaDB.
This task is for the initial discussion of what to do and where to focus.
The project is started in this email thread:
https://lists.launchpad.net/maria-developers/msg01998.html
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Rev 2778: Merge in file:///home/psergey/dev/maria-5.3-subqueries-r7-rel/
by Sergey Petrunya 15 Mar '10
by Sergey Petrunya 15 Mar '10
15 Mar '10
At file:///home/psergey/dev/maria-5.3-subqueries-r7-rel/
------------------------------------------------------------
revno: 2778 [merge]
revision-id: psergey(a)askmonty.org-20100315063535-jsp4jgya6lfqt8e6
parent: psergey(a)askmonty.org-20100315063254-z1ctm7srl0573s5c
parent: psergey(a)askmonty.org-20100315060659-0spqc4jdav12ja2u
committer: Sergey Petrunya <psergey(a)askmonty.org>
branch nick: maria-5.3-subqueries-r7-rel
timestamp: Mon 2010-03-15 09:35:35 +0300
message:
Merge
modified:
mysql-test/r/type_datetime.result sp1f-type_datetime.result-20001228015634-jrgwqpilnfn4kvdp6wm5hp5imvf3tkek
=== modified file 'mysql-test/r/type_datetime.result'
--- a/mysql-test/r/type_datetime.result 2010-02-11 21:59:32 +0000
+++ b/mysql-test/r/type_datetime.result 2010-03-15 06:06:59 +0000
@@ -516,7 +516,7 @@
1 PRIMARY NULL NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
Warnings:
Note 1276 Field or reference 'test.t1.cur_date' of SELECT #2 was resolved in SELECT #1
-Note 1003 select '1' AS `id`,'2007-04-25 18:30:22' AS `cur_date` from `test`.`t1` `x1` join `test`.`t1` where (('2007-04-25 18:30:22' = 0))
+Note 1003 select '1' AS `id`,'2007-04-25 18:30:22' AS `cur_date` from `test`.`t1` semi join (`test`.`t1` `x1`) where (('2007-04-25 18:30:22' = 0))
select * from t1
where id in (select id from t1 as x1 where (t1.cur_date is null));
id cur_date
@@ -527,7 +527,7 @@
1 PRIMARY NULL NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
Warnings:
Note 1276 Field or reference 'test.t2.cur_date' of SELECT #2 was resolved in SELECT #1
-Note 1003 select '1' AS `id`,'2007-04-25' AS `cur_date` from `test`.`t2` `x1` join `test`.`t2` where (('2007-04-25' = 0))
+Note 1003 select '1' AS `id`,'2007-04-25' AS `cur_date` from `test`.`t2` semi join (`test`.`t2` `x1`) where (('2007-04-25' = 0))
select * from t2
where id in (select id from t2 as x1 where (t2.cur_date is null));
id cur_date
1
0
[Maria-developers] Rev 2777: Apply fix by Roy Lyseng: in file:///home/psergey/dev/maria-5.3-subqueries-r7-rel/
by Sergey Petrunya 15 Mar '10
by Sergey Petrunya 15 Mar '10
15 Mar '10
At file:///home/psergey/dev/maria-5.3-subqueries-r7-rel/
------------------------------------------------------------
revno: 2777
revision-id: psergey(a)askmonty.org-20100315063254-z1ctm7srl0573s5c
parent: psergey(a)askmonty.org-20100314182543-4t3ehit7df20adu8
committer: Sergey Petrunya <psergey(a)askmonty.org>
branch nick: maria-5.3-subqueries-r7-rel
timestamp: Mon 2010-03-15 09:32:54 +0300
message:
Apply fix by Roy Lyseng:
Bug#48623: Multiple subqueries are optimized incorrectly
The function setup_semijoin_dups_elimination() has a major loop that
goes through every table in the JOIN object. Usually, there is a normal
"plus one" increment in the for loop that implements this, but each semijoin
nest is treated as one entity and there is another increment that skips past
the semijoin nest to the next table in the JOIN object. However, when
combining these two increments, the next joined table is skipped, and if that
happens to be the start of another semijoin nest, the correct processing
for that nest will not be carried out.
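The shape of the fix can be seen in the diff below: the unconditional increment in the for loop is removed and the index is instead advanced explicitly in every case, so a semijoin nest advances by its own length and a plain table advances by one. A minimal sketch of the buggy and fixed loop shapes, with invented names (this is not the actual sql/opt_subselect.cc code):

#include <cstddef>
#include <vector>

struct TablePos { bool starts_sj_nest; std::size_t n_sj_tables; };

// Buggy shape: the for loop adds 1 on every iteration *and* the nest
// handling adds n_sj_tables, so the table right after a nest is skipped.
void walk_buggy(const std::vector<TablePos> &pos) {
  for (std::size_t i = 0; i < pos.size(); i++) {
    if (pos[i].starts_sj_nest)
      i += pos[i].n_sj_tables;   // combined with the i++ above: one table too far
  }
}

// Fixed shape: advance exactly once per iteration, in every branch.
void walk_fixed(const std::vector<TablePos> &pos) {
  for (std::size_t i = 0; i < pos.size(); ) {
    if (pos[i].starts_sj_nest)
      i += pos[i].n_sj_tables;   // step over the whole nest
    else
      i++;                       // plain table: move to the next one
  }
}

With the buggy shape, the table immediately following a semijoin nest is never visited; if that table starts another semijoin nest, its setup is silently skipped, which is exactly the wrong-result scenario described above.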
=== modified file 'mysql-test/r/subselect_sj.result'
--- a/mysql-test/r/subselect_sj.result 2010-03-14 18:25:43 +0000
+++ b/mysql-test/r/subselect_sj.result 2010-03-15 06:32:54 +0000
@@ -1079,3 +1079,36 @@
partner_id
partner2
drop table t1,t2,t3,t4;
+#
+# Bug#48623 Multiple subqueries are optimized incorrectly
+#
+CREATE TABLE t1(val VARCHAR(10));
+CREATE TABLE t2(val VARCHAR(10));
+CREATE TABLE t3(val VARCHAR(10));
+INSERT INTO t1 VALUES('aaa'), ('bbb'), ('eee'), ('mmm'), ('ppp');
+INSERT INTO t2 VALUES('aaa'), ('aaa'), ('bbb'), ('eee'), ('mmm'), ('ppp');
+INSERT INTO t3 VALUES('aaa'), ('bbb'), ('eee'), ('mmm'), ('ppp');
+EXPLAIN
+SELECT *
+FROM t1
+WHERE t1.val IN (SELECT t2.val FROM t2
+WHERE t2.val LIKE 'a%' OR t2.val LIKE 'e%')
+AND t1.val IN (SELECT t3.val FROM t3
+WHERE t3.val LIKE 'a%' OR t3.val LIKE 'e%');
+id select_type table type possible_keys key key_len ref rows Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 5
+1 PRIMARY t3 ALL NULL NULL NULL NULL 5 Using where; FirstMatch(t1)
+1 PRIMARY t2 ALL NULL NULL NULL NULL 6 Using where; FirstMatch(t3)
+SELECT *
+FROM t1
+WHERE t1.val IN (SELECT t2.val FROM t2
+WHERE t2.val LIKE 'a%' OR t2.val LIKE 'e%')
+AND t1.val IN (SELECT t3.val FROM t3
+WHERE t3.val LIKE 'a%' OR t3.val LIKE 'e%');
+val
+aaa
+eee
+DROP TABLE t1;
+DROP TABLE t2;
+DROP TABLE t3;
+# End of Bug#48623
=== modified file 'mysql-test/r/subselect_sj_jcl6.result'
--- a/mysql-test/r/subselect_sj_jcl6.result 2010-03-14 18:25:43 +0000
+++ b/mysql-test/r/subselect_sj_jcl6.result 2010-03-15 06:32:54 +0000
@@ -1083,6 +1083,39 @@
partner_id
partner2
drop table t1,t2,t3,t4;
+#
+# Bug#48623 Multiple subqueries are optimized incorrectly
+#
+CREATE TABLE t1(val VARCHAR(10));
+CREATE TABLE t2(val VARCHAR(10));
+CREATE TABLE t3(val VARCHAR(10));
+INSERT INTO t1 VALUES('aaa'), ('bbb'), ('eee'), ('mmm'), ('ppp');
+INSERT INTO t2 VALUES('aaa'), ('aaa'), ('bbb'), ('eee'), ('mmm'), ('ppp');
+INSERT INTO t3 VALUES('aaa'), ('bbb'), ('eee'), ('mmm'), ('ppp');
+EXPLAIN
+SELECT *
+FROM t1
+WHERE t1.val IN (SELECT t2.val FROM t2
+WHERE t2.val LIKE 'a%' OR t2.val LIKE 'e%')
+AND t1.val IN (SELECT t3.val FROM t3
+WHERE t3.val LIKE 'a%' OR t3.val LIKE 'e%');
+id select_type table type possible_keys key key_len ref rows Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 5
+1 PRIMARY t3 ALL NULL NULL NULL NULL 5 Using where; FirstMatch(t1); Using join buffer
+1 PRIMARY t2 ALL NULL NULL NULL NULL 6 Using where; FirstMatch(t3); Using join buffer
+SELECT *
+FROM t1
+WHERE t1.val IN (SELECT t2.val FROM t2
+WHERE t2.val LIKE 'a%' OR t2.val LIKE 'e%')
+AND t1.val IN (SELECT t3.val FROM t3
+WHERE t3.val LIKE 'a%' OR t3.val LIKE 'e%');
+val
+aaa
+eee
+DROP TABLE t1;
+DROP TABLE t2;
+DROP TABLE t3;
+# End of Bug#48623
#
# BUG#49129: Wrong result with IN-subquery with join_cache_level=6 and firstmatch=off
#
=== modified file 'mysql-test/t/subselect_sj.test'
--- a/mysql-test/t/subselect_sj.test 2010-03-14 18:25:43 +0000
+++ b/mysql-test/t/subselect_sj.test 2010-03-15 06:32:54 +0000
@@ -943,5 +943,35 @@
execute stmt;
drop table t1,t2,t3,t4;
-
-
+--echo #
+--echo # Bug#48623 Multiple subqueries are optimized incorrectly
+--echo #
+
+CREATE TABLE t1(val VARCHAR(10));
+CREATE TABLE t2(val VARCHAR(10));
+CREATE TABLE t3(val VARCHAR(10));
+
+INSERT INTO t1 VALUES('aaa'), ('bbb'), ('eee'), ('mmm'), ('ppp');
+INSERT INTO t2 VALUES('aaa'), ('aaa'), ('bbb'), ('eee'), ('mmm'), ('ppp');
+INSERT INTO t3 VALUES('aaa'), ('bbb'), ('eee'), ('mmm'), ('ppp');
+
+EXPLAIN
+SELECT *
+FROM t1
+WHERE t1.val IN (SELECT t2.val FROM t2
+ WHERE t2.val LIKE 'a%' OR t2.val LIKE 'e%')
+ AND t1.val IN (SELECT t3.val FROM t3
+ WHERE t3.val LIKE 'a%' OR t3.val LIKE 'e%');
+
+SELECT *
+FROM t1
+WHERE t1.val IN (SELECT t2.val FROM t2
+ WHERE t2.val LIKE 'a%' OR t2.val LIKE 'e%')
+ AND t1.val IN (SELECT t3.val FROM t3
+ WHERE t3.val LIKE 'a%' OR t3.val LIKE 'e%');
+
+DROP TABLE t1;
+DROP TABLE t2;
+DROP TABLE t3;
+
+--echo # End of Bug#48623
=== modified file 'sql/opt_subselect.cc'
--- a/sql/opt_subselect.cc 2010-03-14 18:25:43 +0000
+++ b/sql/opt_subselect.cc 2010-03-15 06:32:54 +0000
@@ -3030,7 +3030,7 @@
THD *thd= join->thd;
DBUG_ENTER("setup_semijoin_dups_elimination");
- for (i= join->const_tables ; i < join->tables ; i++)
+ for (i= join->const_tables ; i < join->tables; )
{
JOIN_TAB *tab=join->join_tab + i;
POSITION *pos= join->best_positions + i;
@@ -3039,7 +3039,7 @@
case SJ_OPT_MATERIALIZE:
case SJ_OPT_MATERIALIZE_SCAN:
/* Do nothing */
- i += pos->n_sj_tables;
+ i+= pos->n_sj_tables;
break;
case SJ_OPT_LOOSE_SCAN:
{
@@ -3055,7 +3055,7 @@
tab->loosescan_key_len= keylen;
if (pos->n_sj_tables > 1)
tab[pos->n_sj_tables - 1].do_firstmatch= tab;
- i += pos->n_sj_tables;
+ i+= pos->n_sj_tables;
break;
}
case SJ_OPT_DUPS_WEEDOUT:
@@ -3152,7 +3152,7 @@
join->join_tab[first_table].flush_weedout_table= sjtbl;
join->join_tab[i + pos->n_sj_tables - 1].check_weed_out_table= sjtbl;
- i += pos->n_sj_tables;
+ i+= pos->n_sj_tables;
break;
}
case SJ_OPT_FIRST_MATCH:
@@ -3174,10 +3174,11 @@
}
}
j[-1].do_firstmatch= jump_to;
- i += pos->n_sj_tables;
+ i+= pos->n_sj_tables;
break;
}
case SJ_OPT_NONE:
+ i++;
break;
}
}
1
0
[Maria-developers] Rev 2777: Update test results for the previous push in file:///home/psergey/dev/maria-5.3-subqueries-r7/
by Sergey Petrunya 15 Mar '10
by Sergey Petrunya 15 Mar '10
15 Mar '10
At file:///home/psergey/dev/maria-5.3-subqueries-r7/
------------------------------------------------------------
revno: 2777
revision-id: psergey(a)askmonty.org-20100315060659-0spqc4jdav12ja2u
parent: psergey(a)askmonty.org-20100314182543-4t3ehit7df20adu8
committer: Sergey Petrunya <psergey(a)askmonty.org>
branch nick: maria-5.3-subqueries-r7
timestamp: Mon 2010-03-15 09:06:59 +0300
message:
Update test results for the previous push
=== modified file 'mysql-test/r/type_datetime.result'
--- a/mysql-test/r/type_datetime.result 2010-02-11 21:59:32 +0000
+++ b/mysql-test/r/type_datetime.result 2010-03-15 06:06:59 +0000
@@ -516,7 +516,7 @@
1 PRIMARY NULL NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
Warnings:
Note 1276 Field or reference 'test.t1.cur_date' of SELECT #2 was resolved in SELECT #1
-Note 1003 select '1' AS `id`,'2007-04-25 18:30:22' AS `cur_date` from `test`.`t1` `x1` join `test`.`t1` where (('2007-04-25 18:30:22' = 0))
+Note 1003 select '1' AS `id`,'2007-04-25 18:30:22' AS `cur_date` from `test`.`t1` semi join (`test`.`t1` `x1`) where (('2007-04-25 18:30:22' = 0))
select * from t1
where id in (select id from t1 as x1 where (t1.cur_date is null));
id cur_date
@@ -527,7 +527,7 @@
1 PRIMARY NULL NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
Warnings:
Note 1276 Field or reference 'test.t2.cur_date' of SELECT #2 was resolved in SELECT #1
-Note 1003 select '1' AS `id`,'2007-04-25' AS `cur_date` from `test`.`t2` `x1` join `test`.`t2` where (('2007-04-25' = 0))
+Note 1003 select '1' AS `id`,'2007-04-25' AS `cur_date` from `test`.`t2` semi join (`test`.`t2` `x1`) where (('2007-04-25' = 0))
select * from t2
where id in (select id from t2 as x1 where (t2.cur_date is null));
id cur_date
1
0
[Maria-developers] Rev 2776: Merge in file:///home/psergey/dev/maria-5.3-subqueries-r7-rel/
by Sergey Petrunya 14 Mar '10
by Sergey Petrunya 14 Mar '10
14 Mar '10
At file:///home/psergey/dev/maria-5.3-subqueries-r7-rel/
------------------------------------------------------------
revno: 2776 [merge]
revision-id: psergey(a)askmonty.org-20100314182543-4t3ehit7df20adu8
parent: psergey(a)askmonty.org-20100314175549-0gcze3pxaudgapxh
parent: psergey(a)askmonty.org-20100313211106-5xyfyl02gfenbi7f
committer: Sergey Petrunya <psergey(a)askmonty.org>
branch nick: maria-5.3-subqueries-r7-rel
timestamp: Sun 2010-03-14 21:25:43 +0300
message:
Merge
modified:
mysql-test/r/subselect_mat.result subselect_mat.result-20100117143924-r0jv32dj80dg3b5h-1
mysql-test/r/subselect_sj.result subselect_sj.result-20100117143926-nrop4ku355g3kv8b-1
mysql-test/r/subselect_sj_jcl6.result subselect_sj_jcl6.re-20100117143928-7vzk51yaf29cdavp-1
mysql-test/t/subselect_mat.test subselect_mat.test-20100117143929-iif102ysgna1tyj0-1
mysql-test/t/subselect_sj.test subselect_sj.test-20100117143931-qp396ufpe3k0scre-1
sql/item.cc sp1f-item.cc-19700101030959-u7hxqopwpfly4kf5ctlyk2dvrq4l3dhn
sql/item_cmpfunc.cc sp1f-item_cmpfunc.cc-19700101030959-hrk7pi2n6qpwxauufnkizirsoucdcx2e
sql/item_cmpfunc.h sp1f-item_cmpfunc.h-19700101030959-pcvbjplo4e4ng7ibynfhcd6pjyem57gr
sql/opt_subselect.cc opt_subselect.cc-20100215190428-nekkl8wisp0k6nlk-1
sql/sql_select.cc sp1f-sql_select.cc-19700101030959-egb7whpkh76zzvikycs5nsnuviu4fdlb
sql/sql_select.h sp1f-sql_select.h-19700101030959-oqegfxr76xlgmrzd6qlevonoibfnwzoz
=== modified file 'mysql-test/r/subselect_mat.result'
--- a/mysql-test/r/subselect_mat.result 2010-01-17 14:51:10 +0000
+++ b/mysql-test/r/subselect_mat.result 2010-03-13 21:11:06 +0000
@@ -583,7 +583,7 @@
1 PRIMARY t1_16 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_16 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <in_optimizer>(`test`.`t1_16`.`a1`,`test`.`t1_16`.`a1` in (select 1 AS `Not_used` from `test`.`t2_16` where ((`test`.`t2_16`.`b1` > '0') and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`))))
+Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <in_optimizer>(`test`.`t1_16`.`a1`,<exists>(select 1 AS `Not_used` from `test`.`t2_16` where ((`test`.`t2_16`.`b1` > '0') and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`))))
select left(a1,7), left(a2,7)
from t1_16
where a1 in (select b1 from t2_16 where b1 > '0');
@@ -597,7 +597,7 @@
1 PRIMARY t1_16 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_16 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <in_optimizer>((`test`.`t1_16`.`a1`,`test`.`t1_16`.`a2`),(`test`.`t1_16`.`a1`,`test`.`t1_16`.`a2`) in (select `test`.`t2_16`.`b1` AS `b1`,`test`.`t2_16`.`b2` AS `b2` from `test`.`t2_16` where ((`test`.`t2_16`.`b1` > '0') and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`) and (<cache>(`test`.`t1_16`.`a2`) = `test`.`t2_16`.`b2`))))
+Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <in_optimizer>((`test`.`t1_16`.`a1`,`test`.`t1_16`.`a2`),<exists>(select `test`.`t2_16`.`b1` AS `b1`,`test`.`t2_16`.`b2` AS `b2` from `test`.`t2_16` where ((`test`.`t2_16`.`b1` > '0') and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`) and (<cache>(`test`.`t1_16`.`a2`) = `test`.`t2_16`.`b2`))))
select left(a1,7), left(a2,7)
from t1_16
where (a1,a2) in (select b1, b2 from t2_16 where b1 > '0');
@@ -625,7 +625,7 @@
1 PRIMARY t1_16 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_16 ALL NULL NULL NULL NULL 3 100.00 Using filesort
Warnings:
-Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <in_optimizer>(`test`.`t1_16`.`a1`,`test`.`t1_16`.`a1` in (select group_concat(`test`.`t2_16`.`b1` separator ',') AS `group_concat(b1)` from `test`.`t2_16` group by `test`.`t2_16`.`b2` having (<cache>(`test`.`t1_16`.`a1`) = <ref_null_helper>(group_concat(`test`.`t2_16`.`b1` separator ',')))))
+Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <in_optimizer>(`test`.`t1_16`.`a1`,<exists>(select group_concat(`test`.`t2_16`.`b1` separator ',') AS `group_concat(b1)` from `test`.`t2_16` group by `test`.`t2_16`.`b2` having (<cache>(`test`.`t1_16`.`a1`) = <ref_null_helper>(group_concat(`test`.`t2_16`.`b1` separator ',')))))
select left(a1,7), left(a2,7)
from t1_16
where a1 in (select group_concat(b1) from t2_16 group by b2);
@@ -662,7 +662,7 @@
3 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 5 100.00 Using where; Using join buffer
4 SUBQUERY t3 ALL NULL NULL NULL NULL 4 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <in_optimizer>(concat(`test`.`t1`.`a1`,'x'),<exists>(select 1 AS `Not_used` from `test`.`t1_16` where (<in_optimizer>((`test`.`t1_16`.`a1`,`test`.`t1_16`.`a2`),(`test`.`t1_16`.`a1`,`test`.`t1_16`.`a2`) in (select `test`.`t2_16`.`b1` AS `b1`,`test`.`t2_16`.`b2` AS `b2` from `test`.`t2_16` join `test`.`t2` where ((`test`.`t2`.`b2` = substr(`test`.`t2_16`.`b2`,1,6)) and <in_optimizer>(`test`.`t2`.`b1`,`test`.`t2`.`b1` in ( <materialize> (select `test`.`t3`.`c1` AS `c1` from `test`.`t3` where (`test`.`t3`.`c2` > '0') ), <primary_index_lookup>(`test`.`t2`.`b1` in <temporary table> on distinct_key where ((`test`.`t2`.`b1` = `materialized subselect`.`c1`))))) and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`) and (<cache>(`test`.`t1_16`.`a2`) = `test`.`t2_16`.`b2`)))) and (<cache>(concat(`test`.`t1`.`a1`,'x')) = left(`test`.`t1_16`.`a1`,8)))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <in_optimizer>(concat(`test`.`t1`.`a1`,'x'),<exists>(select 1 AS `Not_used` from `test`.`t1_16` where (<in_optimizer>((`test`.`t1_16`.`a1`,`test`.`t1_16`.`a2`),<exists>(select `test`.`t2_16`.`b1` AS `b1`,`test`.`t2_16`.`b2` AS `b2` from `test`.`t2_16` join `test`.`t2` where ((`test`.`t2`.`b2` = substr(`test`.`t2_16`.`b2`,1,6)) and <in_optimizer>(`test`.`t2`.`b1`,`test`.`t2`.`b1` in ( <materialize> (select `test`.`t3`.`c1` AS `c1` from `test`.`t3` where (`test`.`t3`.`c2` > '0') ), <primary_index_lookup>(`test`.`t2`.`b1` in <temporary table> on distinct_key where ((`test`.`t2`.`b1` = `materialized subselect`.`c1`))))) and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`) and (<cache>(`test`.`t1_16`.`a2`) = `test`.`t2_16`.`b2`)))) and (<cache>(concat(`test`.`t1`.`a1`,'x')) = left(`test`.`t1_16`.`a1`,8)))))
drop table t1_16, t2_16, t3_16;
set @blob_len = 512;
set @suffix_len = @blob_len - @prefix_len;
@@ -696,7 +696,7 @@
1 PRIMARY t1_512 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_512 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <in_optimizer>(`test`.`t1_512`.`a1`,`test`.`t1_512`.`a1` in (select 1 AS `Not_used` from `test`.`t2_512` where ((`test`.`t2_512`.`b1` > '0') and (<cache>(`test`.`t1_512`.`a1`) = `test`.`t2_512`.`b1`))))
+Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <in_optimizer>(`test`.`t1_512`.`a1`,<exists>(select 1 AS `Not_used` from `test`.`t2_512` where ((`test`.`t2_512`.`b1` > '0') and (<cache>(`test`.`t1_512`.`a1`) = `test`.`t2_512`.`b1`))))
select left(a1,7), left(a2,7)
from t1_512
where a1 in (select b1 from t2_512 where b1 > '0');
@@ -710,7 +710,7 @@
1 PRIMARY t1_512 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_512 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <in_optimizer>((`test`.`t1_512`.`a1`,`test`.`t1_512`.`a2`),(`test`.`t1_512`.`a1`,`test`.`t1_512`.`a2`) in (select `test`.`t2_512`.`b1` AS `b1`,`test`.`t2_512`.`b2` AS `b2` from `test`.`t2_512` where ((`test`.`t2_512`.`b1` > '0') and (<cache>(`test`.`t1_512`.`a1`) = `test`.`t2_512`.`b1`) and (<cache>(`test`.`t1_512`.`a2`) = `test`.`t2_512`.`b2`))))
+Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <in_optimizer>((`test`.`t1_512`.`a1`,`test`.`t1_512`.`a2`),<exists>(select `test`.`t2_512`.`b1` AS `b1`,`test`.`t2_512`.`b2` AS `b2` from `test`.`t2_512` where ((`test`.`t2_512`.`b1` > '0') and (<cache>(`test`.`t1_512`.`a1`) = `test`.`t2_512`.`b1`) and (<cache>(`test`.`t1_512`.`a2`) = `test`.`t2_512`.`b2`))))
select left(a1,7), left(a2,7)
from t1_512
where (a1,a2) in (select b1, b2 from t2_512 where b1 > '0');
@@ -789,7 +789,7 @@
1 PRIMARY t1_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <in_optimizer>(`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a1` in (select 1 AS `Not_used` from `test`.`t2_1024` where ((`test`.`t2_1024`.`b1` > '0') and (<cache>(`test`.`t1_1024`.`a1`) = `test`.`t2_1024`.`b1`))))
+Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <in_optimizer>(`test`.`t1_1024`.`a1`,<exists>(select 1 AS `Not_used` from `test`.`t2_1024` where ((`test`.`t2_1024`.`b1` > '0') and (<cache>(`test`.`t1_1024`.`a1`) = `test`.`t2_1024`.`b1`))))
select left(a1,7), left(a2,7)
from t1_1024
where a1 in (select b1 from t2_1024 where b1 > '0');
@@ -803,7 +803,7 @@
1 PRIMARY t1_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <in_optimizer>((`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a2`),(`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a2`) in (select `test`.`t2_1024`.`b1` AS `b1`,`test`.`t2_1024`.`b2` AS `b2` from `test`.`t2_1024` where ((`test`.`t2_1024`.`b1` > '0') and (<cache>(`test`.`t1_1024`.`a1`) = `test`.`t2_1024`.`b1`) and (<cache>(`test`.`t1_1024`.`a2`) = `test`.`t2_1024`.`b2`))))
+Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <in_optimizer>((`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a2`),<exists>(select `test`.`t2_1024`.`b1` AS `b1`,`test`.`t2_1024`.`b2` AS `b2` from `test`.`t2_1024` where ((`test`.`t2_1024`.`b1` > '0') and (<cache>(`test`.`t1_1024`.`a1`) = `test`.`t2_1024`.`b1`) and (<cache>(`test`.`t1_1024`.`a2`) = `test`.`t2_1024`.`b2`))))
select left(a1,7), left(a2,7)
from t1_1024
where (a1,a2) in (select b1, b2 from t2_1024 where b1 > '0');
@@ -882,7 +882,7 @@
1 PRIMARY t1_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <in_optimizer>(`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a1` in (select 1 AS `Not_used` from `test`.`t2_1025` where ((`test`.`t2_1025`.`b1` > '0') and (<cache>(`test`.`t1_1025`.`a1`) = `test`.`t2_1025`.`b1`))))
+Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <in_optimizer>(`test`.`t1_1025`.`a1`,<exists>(select 1 AS `Not_used` from `test`.`t2_1025` where ((`test`.`t2_1025`.`b1` > '0') and (<cache>(`test`.`t1_1025`.`a1`) = `test`.`t2_1025`.`b1`))))
select left(a1,7), left(a2,7)
from t1_1025
where a1 in (select b1 from t2_1025 where b1 > '0');
@@ -896,7 +896,7 @@
1 PRIMARY t1_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <in_optimizer>((`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a2`),(`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a2`) in (select `test`.`t2_1025`.`b1` AS `b1`,`test`.`t2_1025`.`b2` AS `b2` from `test`.`t2_1025` where ((`test`.`t2_1025`.`b1` > '0') and (<cache>(`test`.`t1_1025`.`a1`) = `test`.`t2_1025`.`b1`) and (<cache>(`test`.`t1_1025`.`a2`) = `test`.`t2_1025`.`b2`))))
+Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <in_optimizer>((`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a2`),<exists>(select `test`.`t2_1025`.`b1` AS `b1`,`test`.`t2_1025`.`b2` AS `b2` from `test`.`t2_1025` where ((`test`.`t2_1025`.`b1` > '0') and (<cache>(`test`.`t1_1025`.`a1`) = `test`.`t2_1025`.`b1`) and (<cache>(`test`.`t1_1025`.`a2`) = `test`.`t2_1025`.`b2`))))
select left(a1,7), left(a2,7)
from t1_1025
where (a1,a2) in (select b1, b2 from t2_1025 where b1 > '0');
@@ -982,7 +982,7 @@
1 PRIMARY t1bb ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2bb ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select conv(`test`.`t1bb`.`a1`,10,2) AS `bin(a1)`,`test`.`t1bb`.`a2` AS `a2` from `test`.`t1bb` where <in_optimizer>((`test`.`t1bb`.`a1`,`test`.`t1bb`.`a2`),(`test`.`t1bb`.`a1`,`test`.`t1bb`.`a2`) in (select `test`.`t2bb`.`b1` AS `b1`,`test`.`t2bb`.`b2` AS `b2` from `test`.`t2bb` where ((<cache>(`test`.`t1bb`.`a1`) = `test`.`t2bb`.`b1`) and (<cache>(`test`.`t1bb`.`a2`) = `test`.`t2bb`.`b2`))))
+Note 1003 select conv(`test`.`t1bb`.`a1`,10,2) AS `bin(a1)`,`test`.`t1bb`.`a2` AS `a2` from `test`.`t1bb` where <in_optimizer>((`test`.`t1bb`.`a1`,`test`.`t1bb`.`a2`),<exists>(select `test`.`t2bb`.`b1` AS `b1`,`test`.`t2bb`.`b2` AS `b2` from `test`.`t2bb` where ((<cache>(`test`.`t1bb`.`a1`) = `test`.`t2bb`.`b1`) and (<cache>(`test`.`t1bb`.`a2`) = `test`.`t2bb`.`b2`))))
select bin(a1), a2
from t1bb
where (a1, a2) in (select b1, b2 from t2bb);
@@ -1219,3 +1219,28 @@
pk
2
DROP TABLE t1, t2;
+#
+# BUG#50019: Wrong result for IN-subquery with materialization
+#
+create table t1(i int);
+insert into t1 values (1), (2), (3), (4), (5), (6), (7), (8), (9), (10);
+create table t2(i int);
+insert into t2 values (1), (2), (3), (4), (5), (6), (7), (8), (9), (10);
+create table t3(i int);
+insert into t3 values (1), (2), (3), (4), (5), (6), (7), (8), (9), (10);
+select * from t1 where t1.i in (select t2.i from t2 join t3 where t2.i + t3.i = 5);
+i
+1
+2
+3
+4
+set @save_optimizer_switch=@@optimizer_switch;
+set session optimizer_switch='materialization=off';
+select * from t1 where t1.i in (select t2.i from t2 join t3 where t2.i + t3.i = 5);
+i
+1
+2
+3
+4
+set session optimizer_switch=@save_optimizer_switch;
+drop table t1, t2, t3;
=== modified file 'mysql-test/r/subselect_sj.result'
--- a/mysql-test/r/subselect_sj.result 2010-03-14 17:54:12 +0000
+++ b/mysql-test/r/subselect_sj.result 2010-03-14 18:25:43 +0000
@@ -825,6 +825,127 @@
2
drop table t1, t2, t3;
#
+# Bug#48213 Materialized subselect crashes if using GEOMETRY type
+#
+CREATE TABLE t1 (
+pk int,
+a varchar(1),
+b varchar(4),
+c tinyblob,
+d blob,
+e mediumblob,
+f longblob,
+g tinytext,
+h text,
+i mediumtext,
+j longtext,
+k geometry,
+PRIMARY KEY (pk)
+);
+INSERT INTO t1 VALUES (1,'o','ffff','ffff','ffoo','ffff','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))')), (2,'f','ffff','ffff','ffff', 'ffff','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))'));
+CREATE TABLE t2 LIKE t1;
+INSERT INTO t2 VALUES (1,'i','iiii','iiii','iiii','iiii','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))')), (2,'f','ffff','ffff','ffff','ffff','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))'));
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (a, b) IN (SELECT a, b FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using MRR; Materialize
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (a, b) IN (SELECT a, b FROM t2 WHERE pk > 0);
+pk
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, c) IN (SELECT b, c FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1)
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`c` = `test`.`t1`.`c`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, c) IN (SELECT b, c FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, d) IN (SELECT b, d FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1)
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`d` = `test`.`t1`.`d`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, d) IN (SELECT b, d FROM t2 WHERE pk > 0);
+pk
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, e) IN (SELECT b, e FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1)
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`e` = `test`.`t1`.`e`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, e) IN (SELECT b, e FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, f) IN (SELECT b, f FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1)
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`f` = `test`.`t1`.`f`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, f) IN (SELECT b, f FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, g) IN (SELECT b, g FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1)
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`g` = `test`.`t1`.`g`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, g) IN (SELECT b, g FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, h) IN (SELECT b, h FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1)
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`h` = `test`.`t1`.`h`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, h) IN (SELECT b, h FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, i) IN (SELECT b, i FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1)
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`i` = `test`.`t1`.`i`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, i) IN (SELECT b, i FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, j) IN (SELECT b, j FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1)
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`j` = `test`.`t1`.`j`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, j) IN (SELECT b, j FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, k) IN (SELECT b, k FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1)
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`k` = `test`.`t1`.`k`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, k) IN (SELECT b, k FROM t2 WHERE pk > 0);
+pk
+1
+2
+DROP TABLE t1, t2;
+# End of Bug#48213
+#
# Bug#49198 Wrong result for second call of procedure
# with view in subselect.
#
@@ -872,6 +993,42 @@
DROP VIEW v2, v3;
# End of Bug#49198
#
+# Bug#45174: Incorrectly applied equality propagation caused wrong
+# result on a query with a materialized semi-join.
+#
+CREATE TABLE `t1` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`varchar_key` varchar(1) NOT NULL,
+`varchar_nokey` varchar(1) NOT NULL,
+PRIMARY KEY (`pk`),
+KEY `varchar_key` (`varchar_key`)
+);
+INSERT INTO `t1` VALUES (11,'m','m'),(12,'j','j'),(13,'z','z'),(14,'a','a'),(15,'',''),(16,'e','e'),(17,'t','t'),(19,'b','b'),(20,'w','w'),(21,'m','m'),(23,'',''),(24,'w','w'),(26,'e','e'),(27,'e','e'),(28,'p','p');
+CREATE TABLE `t2` (
+`varchar_nokey` varchar(1) NOT NULL
+);
+INSERT INTO `t2` VALUES ('v'),('u'),('n'),('l'),('h'),('u'),('n'),('j'),('k'),('e'),('i'),('u'),('n'),('b'),('x'),(''),('q'),('u');
+EXPLAIN EXTENDED SELECT varchar_nokey
+FROM t2
+WHERE ( `varchar_nokey` , `varchar_nokey` ) IN (
+SELECT `varchar_key` , `varchar_nokey`
+FROM t1
+WHERE `varchar_nokey` < 'n' XOR `pk` ) ;
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t2 ALL NULL NULL NULL NULL 18 100.00
+1 PRIMARY t1 ALL varchar_key NULL NULL NULL 15 100.00 Using where; Materialize
+Warnings:
+Note 1003 select `test`.`t2`.`varchar_nokey` AS `varchar_nokey` from `test`.`t2` semi join (`test`.`t1`) where ((`test`.`t1`.`varchar_nokey` = `test`.`t1`.`varchar_key`) and ((`test`.`t1`.`varchar_nokey` < 'n') xor `test`.`t1`.`pk`))
+SELECT varchar_nokey
+FROM t2
+WHERE ( `varchar_nokey` , `varchar_nokey` ) IN (
+SELECT `varchar_key` , `varchar_nokey`
+FROM t1
+WHERE `varchar_nokey` < 'n' XOR `pk` ) ;
+varchar_nokey
+DROP TABLE t1, t2;
+# End of the test for bug#45174.
+#
# BUG#43768: Prepared query with nested subqueries core dumps on second execution
#
create table t1 (
=== modified file 'mysql-test/r/subselect_sj_jcl6.result'
--- a/mysql-test/r/subselect_sj_jcl6.result 2010-03-14 17:54:12 +0000
+++ b/mysql-test/r/subselect_sj_jcl6.result 2010-03-14 18:25:43 +0000
@@ -829,6 +829,127 @@
2
drop table t1, t2, t3;
#
+# Bug#48213 Materialized subselect crashes if using GEOMETRY type
+#
+CREATE TABLE t1 (
+pk int,
+a varchar(1),
+b varchar(4),
+c tinyblob,
+d blob,
+e mediumblob,
+f longblob,
+g tinytext,
+h text,
+i mediumtext,
+j longtext,
+k geometry,
+PRIMARY KEY (pk)
+);
+INSERT INTO t1 VALUES (1,'o','ffff','ffff','ffoo','ffff','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))')), (2,'f','ffff','ffff','ffff', 'ffff','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))'));
+CREATE TABLE t2 LIKE t1;
+INSERT INTO t2 VALUES (1,'i','iiii','iiii','iiii','iiii','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))')), (2,'f','ffff','ffff','ffff','ffff','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))'));
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (a, b) IN (SELECT a, b FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using MRR; Materialize
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (a, b) IN (SELECT a, b FROM t2 WHERE pk > 0);
+pk
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, c) IN (SELECT b, c FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1); Using join buffer
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`c` = `test`.`t1`.`c`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, c) IN (SELECT b, c FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, d) IN (SELECT b, d FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1); Using join buffer
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`d` = `test`.`t1`.`d`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, d) IN (SELECT b, d FROM t2 WHERE pk > 0);
+pk
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, e) IN (SELECT b, e FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1); Using join buffer
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`e` = `test`.`t1`.`e`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, e) IN (SELECT b, e FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, f) IN (SELECT b, f FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1); Using join buffer
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`f` = `test`.`t1`.`f`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, f) IN (SELECT b, f FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, g) IN (SELECT b, g FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1); Using join buffer
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`g` = `test`.`t1`.`g`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, g) IN (SELECT b, g FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, h) IN (SELECT b, h FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1); Using join buffer
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`h` = `test`.`t1`.`h`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, h) IN (SELECT b, h FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, i) IN (SELECT b, i FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1); Using join buffer
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`i` = `test`.`t1`.`i`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, i) IN (SELECT b, i FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, j) IN (SELECT b, j FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1); Using join buffer
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`j` = `test`.`t1`.`j`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, j) IN (SELECT b, j FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, k) IN (SELECT b, k FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1); Using join buffer
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`k` = `test`.`t1`.`k`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, k) IN (SELECT b, k FROM t2 WHERE pk > 0);
+pk
+1
+2
+DROP TABLE t1, t2;
+# End of Bug#48213
+#
# Bug#49198 Wrong result for second call of procedure
# with view in subselect.
#
@@ -876,6 +997,42 @@
DROP VIEW v2, v3;
# End of Bug#49198
#
+# Bug#45174: Incorrectly applied equality propagation caused wrong
+# result on a query with a materialized semi-join.
+#
+CREATE TABLE `t1` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`varchar_key` varchar(1) NOT NULL,
+`varchar_nokey` varchar(1) NOT NULL,
+PRIMARY KEY (`pk`),
+KEY `varchar_key` (`varchar_key`)
+);
+INSERT INTO `t1` VALUES (11,'m','m'),(12,'j','j'),(13,'z','z'),(14,'a','a'),(15,'',''),(16,'e','e'),(17,'t','t'),(19,'b','b'),(20,'w','w'),(21,'m','m'),(23,'',''),(24,'w','w'),(26,'e','e'),(27,'e','e'),(28,'p','p');
+CREATE TABLE `t2` (
+`varchar_nokey` varchar(1) NOT NULL
+);
+INSERT INTO `t2` VALUES ('v'),('u'),('n'),('l'),('h'),('u'),('n'),('j'),('k'),('e'),('i'),('u'),('n'),('b'),('x'),(''),('q'),('u');
+EXPLAIN EXTENDED SELECT varchar_nokey
+FROM t2
+WHERE ( `varchar_nokey` , `varchar_nokey` ) IN (
+SELECT `varchar_key` , `varchar_nokey`
+FROM t1
+WHERE `varchar_nokey` < 'n' XOR `pk` ) ;
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t2 ALL NULL NULL NULL NULL 18 100.00
+1 PRIMARY t1 ALL varchar_key NULL NULL NULL 15 100.00 Using where; Materialize
+Warnings:
+Note 1003 select `test`.`t2`.`varchar_nokey` AS `varchar_nokey` from `test`.`t2` semi join (`test`.`t1`) where ((`test`.`t1`.`varchar_nokey` = `test`.`t1`.`varchar_key`) and ((`test`.`t1`.`varchar_nokey` < 'n') xor `test`.`t1`.`pk`))
+SELECT varchar_nokey
+FROM t2
+WHERE ( `varchar_nokey` , `varchar_nokey` ) IN (
+SELECT `varchar_key` , `varchar_nokey`
+FROM t1
+WHERE `varchar_nokey` < 'n' XOR `pk` ) ;
+varchar_nokey
+DROP TABLE t1, t2;
+# End of the test for bug#45174.
+#
# BUG#43768: Prepared query with nested subqueries core dumps on second execution
#
create table t1 (
=== modified file 'mysql-test/t/subselect_mat.test'
--- a/mysql-test/t/subselect_mat.test 2010-01-17 14:51:10 +0000
+++ b/mysql-test/t/subselect_mat.test 2010-03-13 20:04:52 +0000
@@ -889,3 +889,19 @@
SELECT pk FROM t1 WHERE (b,c,d) IN (SELECT b,c,d FROM t2 WHERE pk > 0);
DROP TABLE t1, t2;
+--echo #
+--echo # BUG#50019: Wrong result for IN-subquery with materialization
+--echo #
+create table t1(i int);
+insert into t1 values (1), (2), (3), (4), (5), (6), (7), (8), (9), (10);
+create table t2(i int);
+insert into t2 values (1), (2), (3), (4), (5), (6), (7), (8), (9), (10);
+create table t3(i int);
+insert into t3 values (1), (2), (3), (4), (5), (6), (7), (8), (9), (10);
+select * from t1 where t1.i in (select t2.i from t2 join t3 where t2.i + t3.i = 5);
+set @save_optimizer_switch=@@optimizer_switch;
+set session optimizer_switch='materialization=off';
+select * from t1 where t1.i in (select t2.i from t2 join t3 where t2.i + t3.i = 5);
+set session optimizer_switch=@save_optimizer_switch;
+drop table t1, t2, t3;
+
=== modified file 'mysql-test/t/subselect_sj.test'
--- a/mysql-test/t/subselect_sj.test 2010-03-14 17:54:12 +0000
+++ b/mysql-test/t/subselect_sj.test 2010-03-14 18:25:43 +0000
@@ -729,6 +729,86 @@
drop table t1, t2, t3;
--echo #
+--echo # Bug#48213 Materialized subselect crashes if using GEOMETRY type
+--echo #
+
+CREATE TABLE t1 (
+ pk int,
+ a varchar(1),
+ b varchar(4),
+ c tinyblob,
+ d blob,
+ e mediumblob,
+ f longblob,
+ g tinytext,
+ h text,
+ i mediumtext,
+ j longtext,
+ k geometry,
+ PRIMARY KEY (pk)
+);
+
+INSERT INTO t1 VALUES (1,'o','ffff','ffff','ffoo','ffff','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))')), (2,'f','ffff','ffff','ffff', 'ffff','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))'));
+
+CREATE TABLE t2 LIKE t1;
+INSERT INTO t2 VALUES (1,'i','iiii','iiii','iiii','iiii','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))')), (2,'f','ffff','ffff','ffff','ffff','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))'));
+
+# Test that materialization is skipped for semijoins where materialized
+# table would contain GEOMETRY or different kinds of BLOB/TEXT columns
+let $query=
+SELECT pk FROM t1 WHERE (a, b) IN (SELECT a, b FROM t2 WHERE pk > 0);
+eval EXPLAIN EXTENDED $query;
+eval $query;
+
+let $query=
+SELECT pk FROM t1 WHERE (b, c) IN (SELECT b, c FROM t2 WHERE pk > 0);
+eval EXPLAIN EXTENDED $query;
+eval $query;
+
+let $query=
+SELECT pk FROM t1 WHERE (b, d) IN (SELECT b, d FROM t2 WHERE pk > 0);
+eval EXPLAIN EXTENDED $query;
+eval $query;
+
+let $query=
+SELECT pk FROM t1 WHERE (b, e) IN (SELECT b, e FROM t2 WHERE pk > 0);
+eval EXPLAIN EXTENDED $query;
+eval $query;
+
+let $query=
+SELECT pk FROM t1 WHERE (b, f) IN (SELECT b, f FROM t2 WHERE pk > 0);
+eval EXPLAIN EXTENDED $query;
+eval $query;
+
+let $query=
+SELECT pk FROM t1 WHERE (b, g) IN (SELECT b, g FROM t2 WHERE pk > 0);
+eval EXPLAIN EXTENDED $query;
+eval $query;
+
+let $query=
+SELECT pk FROM t1 WHERE (b, h) IN (SELECT b, h FROM t2 WHERE pk > 0);
+eval EXPLAIN EXTENDED $query;
+eval $query;
+
+let $query=
+SELECT pk FROM t1 WHERE (b, i) IN (SELECT b, i FROM t2 WHERE pk > 0);
+eval EXPLAIN EXTENDED $query;
+eval $query;
+
+let $query=
+SELECT pk FROM t1 WHERE (b, j) IN (SELECT b, j FROM t2 WHERE pk > 0);
+eval EXPLAIN EXTENDED $query;
+eval $query;
+
+let $query=
+SELECT pk FROM t1 WHERE (b, k) IN (SELECT b, k FROM t2 WHERE pk > 0);
+eval EXPLAIN EXTENDED $query;
+eval $query;
+
+DROP TABLE t1, t2;
+--echo # End of Bug#48213
+
+--echo #
--echo # Bug#49198 Wrong result for second call of procedure
--echo # with view in subselect.
--echo #
@@ -772,6 +852,44 @@
--echo # End of Bug#49198
--echo #
+--echo # Bug#45174: Incorrectly applied equality propagation caused wrong
+--echo # result on a query with a materialized semi-join.
+--echo #
+
+CREATE TABLE `t1` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `varchar_key` varchar(1) NOT NULL,
+ `varchar_nokey` varchar(1) NOT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `varchar_key` (`varchar_key`)
+);
+
+INSERT INTO `t1` VALUES (11,'m','m'),(12,'j','j'),(13,'z','z'),(14,'a','a'),(15,'',''),(16,'e','e'),(17,'t','t'),(19,'b','b'),(20,'w','w'),(21,'m','m'),(23,'',''),(24,'w','w'),(26,'e','e'),(27,'e','e'),(28,'p','p');
+
+CREATE TABLE `t2` (
+ `varchar_nokey` varchar(1) NOT NULL
+);
+
+INSERT INTO `t2` VALUES ('v'),('u'),('n'),('l'),('h'),('u'),('n'),('j'),('k'),('e'),('i'),('u'),('n'),('b'),('x'),(''),('q'),('u');
+
+EXPLAIN EXTENDED SELECT varchar_nokey
+FROM t2
+WHERE ( `varchar_nokey` , `varchar_nokey` ) IN (
+SELECT `varchar_key` , `varchar_nokey`
+FROM t1
+WHERE `varchar_nokey` < 'n' XOR `pk` ) ;
+
+SELECT varchar_nokey
+FROM t2
+WHERE ( `varchar_nokey` , `varchar_nokey` ) IN (
+SELECT `varchar_key` , `varchar_nokey`
+FROM t1
+WHERE `varchar_nokey` < 'n' XOR `pk` ) ;
+
+DROP TABLE t1, t2;
+
+--echo # End of the test for bug#45174.
+--echo #
--echo # BUG#43768: Prepared query with nested subqueries core dumps on second execution
--echo #
create table t1 (
=== modified file 'sql/item.cc'
--- a/sql/item.cc 2010-02-24 11:33:42 +0000
+++ b/sql/item.cc 2010-03-13 20:04:52 +0000
@@ -4761,7 +4761,7 @@
return this;
return const_item;
}
- Item_field *subst= item_equal->get_first();
+ Item_field *subst= item_equal->get_first(this);
if (subst && field->table != subst->field->table && !field->eq(subst->field))
return subst;
}
=== modified file 'sql/item_cmpfunc.cc'
--- a/sql/item_cmpfunc.cc 2010-02-17 10:05:27 +0000
+++ b/sql/item_cmpfunc.cc 2010-03-13 20:04:52 +0000
@@ -5369,7 +5369,7 @@
void Item_equal::fix_length_and_dec()
{
- Item *item= get_first();
+ Item *item= get_first(NULL);
eval_item= cmp_item::get_comparator(item->result_type(),
item->collation.collation);
}
@@ -5432,3 +5432,128 @@
str->append(')');
}
+
+/*
+  @brief Get the first equal field of a multiple equality.
+  @param[in] field  the field to find an equal field for
+
+  @details Get the first field of the multiple equality that is equal to the
+  given field. In order to make the semi-join materialization strategy work
+  correctly we can't propagate equal fields from the upper select to a
+  materialized semi-join.
+  Thus the field is returned according to the following rules:
+
+  1) If the given field belongs to a semi-join then the first field of the
+  multiple equality which belongs to the same semi-join is returned.
+  Otherwise NULL is returned.
+  2) If the given field doesn't belong to a semi-join then
+  the first field in the multiple equality that doesn't belong to any
+  semi-join is returned.
+  If all fields in the equality belong to semi-join(s) then NULL
+  is returned.
+  3) If no field is given then the first field in the multiple equality
+  is returned regardless of whether it belongs to a semi-join or not.
+
+  @retval Pointer to the first suitable field in the multiple equality.
+  @retval 0 if no suitable field was found.
+*/
+
+Item_field* Item_equal::get_first(Item_field *field)
+{
+ List_iterator<Item_field> it(fields);
+ Item_field *item;
+ JOIN_TAB *field_tab;
+
+ if (!field)
+ return fields.head();
+
+ /*
+ Of all equal fields, return the first one we can use. Normally, this is the
+ field which belongs to the table that is the first in the join order.
+
+ There is one exception to this: When semi-join materialization strategy is
+ used, and the given field belongs to a table within the semi-join nest, we
+ must pick the first field in the semi-join nest.
+
+ Example: suppose we have a join order:
+
+ ot1 ot2 SJ-Mat(it1 it2 it3) ot3
+
+ and equality ot2.col = it1.col = it2.col
+ If we're looking for best substitute for 'it2.col', we should pick it1.col
+ and not ot2.col.
+
+ eliminate_item_equal() also has code that deals with equality substitution
+    in presence of SJM nests.
+ */
+
+ field_tab= field->field->table->reginfo.join_tab;
+
+ TABLE_LIST *emb_nest= field->field->table->pos_in_table_list->embedding;
+
+ if (emb_nest && emb_nest->sj_mat_info && emb_nest->sj_mat_info->is_used)
+ {
+ /*
+      It's a field from a materialized semi-join. We can substitute it only
+ for a field from the same semi-join.
+ */
+ JOIN_TAB *first;
+ JOIN *join= field_tab->join;
+ uint tab_idx= field_tab - field_tab->join->join_tab;
+
+ /* Find the first table of this semi-join nest */
+ for (uint i= tab_idx; i != join->const_tables; i--)
+ {
+ if (join->join_tab[i].table->map & emb_nest->sj_inner_tables)
+ first= join->join_tab + i;
+ else
+ // Found first tab that doesn't belong to current SJ.
+ break;
+ }
+ /* Find an item to substitute for. */
+ while ((item= it++))
+ {
+ if (item->field->table->reginfo.join_tab >= first)
+ {
+ /*
+ If we found given field then return NULL to avoid unnecessary
+ substitution.
+ */
+ return (item != field) ? item : NULL;
+ }
+ }
+ }
+ else
+ {
+#if 0
+ /*
+ The field is not in SJ-Materialization nest. We must return the first
+ field that's not embedded in a SJ-Materialization nest.
+ Example: suppose we have a join order:
+
+ SJ-Mat(it1 it2) ot1 ot2
+
+ and equality ot2.col = ot1.col = it2.col
+ If we're looking for best substitute for 'ot2.col', we should pick ot1.col
+ and not it2.col, because when we run a join between ot1 and ot2
+ execution of SJ-Mat(...) has already finished and we can't rely on the
+ value of it*.*.
+ psergey-fix-fix: ^^ THAT IS INCORRECT ^^. Pick the first, whatever that
+ is.
+ */
+ while ((item= it++))
+ {
+ TABLE_LIST *emb_nest= item->field->table->pos_in_table_list->embedding;
+ if (!emb_nest || !emb_nest->sj_mat_info ||
+ !emb_nest->sj_mat_info->is_used)
+ {
+ return item;
+ }
+ }
+#endif
+ return fields.head();
+ }
+ // Shouldn't get here.
+ DBUG_ASSERT(0);
+ return NULL;
+}
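
As a concrete illustration of the rules above -- a sketch only; ot1, ot2, it1, it2, it3
and the column col are the hypothetical names from the join-order example in the
comment, not tables from this patch or its tests -- roughly this query shape gives
rise to such a multiple equality:

-- Join order: ot1 ot2 SJ-Mat(it1 it2 it3) ot3
SELECT ot1.col
FROM ot1, ot2, ot3
WHERE ot2.col IN (SELECT it1.col
                  FROM it1, it2, it3
                  WHERE it1.col = it2.col AND it2.col = it3.col);
-- The multiple equality is roughly {ot2.col, it1.col, it2.col, it3.col}.
-- When looking for a substitute for it2.col inside the materialized nest,
-- get_first(it2.col) must return it1.col (the first field belonging to the
-- same SJM nest) and never ot2.col, whose value is not available while the
-- nest is being materialized.
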
=== modified file 'sql/item_cmpfunc.h'
--- a/sql/item_cmpfunc.h 2010-02-17 10:05:27 +0000
+++ b/sql/item_cmpfunc.h 2010-03-13 20:04:52 +0000
@@ -1589,7 +1589,7 @@
void add(Item_field *f);
uint members();
bool contains(Field *field);
- Item_field* get_first() { return fields.head(); }
+ Item_field* get_first(Item_field *field);
uint n_fields() { return fields.elements; }
void merge(Item_equal *item);
void update_const();
=== modified file 'sql/opt_subselect.cc'
--- a/sql/opt_subselect.cc 2010-03-14 17:54:12 +0000
+++ b/sql/opt_subselect.cc 2010-03-14 18:25:43 +0000
@@ -322,7 +322,13 @@
default:
;/* suitable for materialization */
}
+
+ // Materialization does not work with BLOB columns
+ if (inner->field_type() == MYSQL_TYPE_BLOB ||
+ inner->field_type() == MYSQL_TYPE_GEOMETRY)
+ DBUG_RETURN(FALSE);
}
+
in_subs->types_allow_materialization= TRUE;
in_subs->sjm_scan_allowed= all_are_fields;
DBUG_PRINT("info",("subquery_types_allow_materialization: ok, allowed"));
@@ -2181,6 +2187,8 @@
if (tablenr != first)
pos->sj_strategy= SJ_OPT_NONE;
remaining_tables |= s->table->map;
+ //s->sj_strategy= pos->sj_strategy;
+ join->join_tab[first].sj_strategy= join->best_positions[first].sj_strategy;
}
}
=== modified file 'sql/sql_select.cc'
--- a/sql/sql_select.cc 2010-03-14 17:54:12 +0000
+++ b/sql/sql_select.cc 2010-03-14 18:25:43 +0000
@@ -8869,6 +8869,15 @@
}
+static TABLE_LIST* embedding_sjm(Item_field *item_field)
+{
+ TABLE_LIST *nest= item_field->field->table->pos_in_table_list->embedding;
+ if (nest && nest->sj_mat_info && nest->sj_mat_info->is_used)
+ return nest;
+ else
+ return NULL;
+}
+
/**
Generate minimal set of simple equalities equivalent to a multiple equality.
@@ -8902,6 +8911,23 @@
So only t1.a=t3.c should be left in the lower level.
If cond is equal to 0, then not more then one equality is generated
and a pointer to it is returned as the result of the function.
+
+  Equality substitution and semi-join materialization nests:
+
+ In case join order looks like this:
+
+ outer_tbl1 outer_tbl2 SJM (inner_tbl1 inner_tbl2) outer_tbl3
+
+ We must not construct equalities like
+
+ outer_tbl1.col = inner_tbl1.col
+
+ because they would get attached to inner_tbl1 and will get evaluated
+ during materialization phase, when we don't have current value of
+ outer_tbl1.col.
+
+ Item_equal::get_first() also takes similar measures for dealing with
+  equality substitution in presence of SJM nests.
@return
- The condition with generated simple equalities or
@@ -8919,18 +8945,44 @@
Item *item_const= item_equal->get_const();
Item_equal_iterator it(*item_equal);
Item *head;
+ TABLE_LIST *current_sjm= NULL;
+ Item *current_sjm_head= NULL;
+
+ /*
+ Pick the "head" item: the constant one or the first in the join order
+ that's not inside some SJM nest.
+ */
if (item_const)
head= item_const;
else
{
- head= item_equal->get_first();
+ TABLE_LIST *emb_nest;
+ Item_field *item_field;
+ head= item_field= item_equal->get_first(NULL);
it++;
+ if ((emb_nest= embedding_sjm(item_field)))
+ {
+ current_sjm= emb_nest;
+ current_sjm_head= head;
+ }
}
+
Item_field *item_field;
+ /*
+    For each other item, generate an "item=head" equality (except for the
+    tables that are within SJ-Materialization nests; for those, "head" is
+    defined differently)
+ */
while ((item_field= it++))
{
Item_equal *upper= item_field->find_item_equal(upper_levels);
Item_field *item= item_field;
+ TABLE_LIST *field_sjm= embedding_sjm(item_field);
+
+ /*
+ Check if "item_field=head" equality is already guaranteed to be true
+ on upper AND-levels.
+ */
if (upper)
{
if (item_const && upper->get_const())
@@ -8945,65 +8997,29 @@
}
}
}
- if (item == item_field)
+
+ bool produce_equality= test(item == item_field);
+ if (!item_const && field_sjm && field_sjm != current_sjm)
+ {
+ /* Entering an SJM nest */
+ current_sjm_head= item_field;
+ if (!field_sjm->sj_mat_info->is_sj_scan)
+ produce_equality= FALSE;
+ }
+
+ if (produce_equality)
{
if (eq_item)
eq_list.push_back(eq_item);
- /*
- item_field might refer to a table that is within a semi-join
- materialization nest. In that case, the join order looks like this:
-
- outer_tbl1 outer_tbl2 SJM (inner_tbl1 inner_tbl2) outer_tbl3
-
- We must not construct equalities like
-
- outer_tbl1.col = inner_tbl1.col
-
- because they would get attached to inner_tbl1 and will get evaluated
- during materialization phase, when we don't have current value of
- outer_tbl1.col.
- */
- TABLE_LIST *emb_nest=
- item_field->field->table->pos_in_table_list->embedding;
- if (!item_const && emb_nest && emb_nest->sj_mat_info &&
- emb_nest->sj_mat_info->is_used)
- {
- /*
- Find the first equal expression that refers to a table that is
- within the semijoin nest. If we can't find it, do nothing
- */
- List_iterator<Item_field> fit(item_equal->fields);
- Item_field *head_in_sjm;
- bool found= FALSE;
- while ((head_in_sjm= fit++))
- {
- if (head_in_sjm->used_tables() & emb_nest->sj_inner_tables)
- {
- if (head_in_sjm == item_field)
- {
- /* This is the first table inside the semi-join*/
- eq_item= new Item_func_eq(item_field, head);
- /* Tell make_cond_for_table don't use this. */
- eq_item->marker=3;
- }
- else
- {
- eq_item= new Item_func_eq(item_field, head_in_sjm);
- found= TRUE;
- }
- break;
- }
- }
- if (!found)
- continue;
- }
- else
- eq_item= new Item_func_eq(item_field, head);
+
+ eq_item= new Item_func_eq(item_field, current_sjm? current_sjm_head: head);
+
if (!eq_item)
return 0;
eq_item->set_cmp_func();
eq_item->quick_fix_field();
}
+ current_sjm= field_sjm;
}
if (!cond && !eq_list.head())
=== modified file 'sql/sql_select.h'
--- a/sql/sql_select.h 2010-03-05 18:54:48 +0000
+++ b/sql/sql_select.h 2010-03-13 20:04:52 +0000
@@ -279,6 +279,13 @@
/* NestedOuterJoins: Bitmap of nested joins this table is part of */
nested_join_map embedding_map;
+ /*
+ Semi-join strategy to be used for this join table. This is a copy of
+ POSITION::sj_strategy field. This field is set up by the
+    fix_semijoin_strategies_for_picked_join_order() function.
+ */
+ uint sj_strategy;
+
void cleanup();
inline bool is_using_loose_index_scan()
{
[Maria-developers] Rev 2775: Fix support-files/build-tags to work with recent versions of bazaar. in file:///home/psergey/dev/maria-5.3-subqueries-r7-rel/
by Sergey Petrunya 14 Mar '10
by Sergey Petrunya 14 Mar '10
14 Mar '10
At file:///home/psergey/dev/maria-5.3-subqueries-r7-rel/
------------------------------------------------------------
revno: 2775
revision-id: psergey(a)askmonty.org-20100314175549-0gcze3pxaudgapxh
parent: psergey(a)askmonty.org-20100314175412-umtxuabkn4txl1yd
committer: Sergey Petrunya <psergey(a)askmonty.org>
branch nick: maria-5.3-subqueries-r7-rel
timestamp: Sun 2010-03-14 20:55:49 +0300
message:
Fix support-files/build-tags to work with recent versions of bazaar.
=== modified file 'support-files/build-tags'
--- a/support-files/build-tags 2009-12-15 07:16:46 +0000
+++ b/support-files/build-tags 2010-03-14 17:55:49 +0000
@@ -4,7 +4,7 @@
filter='\.cc$\|\.c$\|\.h$\|\.yy$'
list="find . -type f"
-bzr root >/dev/null 2>/dev/null && list="bzr ls --from-root --kind=file --versioned"
+bzr root >/dev/null 2>/dev/null && list="bzr ls --from-root -R --kind=file --versioned"
$list |grep $filter |while read f;
do
[Maria-developers] Rev 2774: BUG#43768: Prepared query with nested subqueries core dumps on second execution in file:///home/psergey/dev/maria-5.3-subqueries-r7-rel/
by Sergey Petrunya 14 Mar '10
by Sergey Petrunya 14 Mar '10
14 Mar '10
At file:///home/psergey/dev/maria-5.3-subqueries-r7-rel/
------------------------------------------------------------
revno: 2774
revision-id: psergey(a)askmonty.org-20100314175412-umtxuabkn4txl1yd
parent: psergey(a)askmonty.org-20100307154145-ksby2b1l0sqm1xne
committer: Sergey Petrunya <psergey(a)askmonty.org>
branch nick: maria-5.3-subqueries-r7-rel
timestamp: Sun 2010-03-14 20:54:12 +0300
message:
BUG#43768: Prepared query with nested subqueries core dumps on second execution
Fix two problems:
1. Let optimize_semijoin_nests() reset sj_nest->sj_mat_info irrespective
of the value of optimizer_flag. We need this in case somebody has turned the
optimization off between re-executions of the same statement.
2. Do not pull constant tables out of semi-join nests. The problem is that the
pullout operation is not undoable, and if a table is constant because it is a
1/0-row table it may cease to be constant on the next execution. Note that
tables that are constant because of possible eq_ref(const) access will still
be pulled out, as they are considered functionally dependent.
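
To see why pulling a 1/0-row constant table out of a semi-join nest is not safe
across re-executions, here is a minimal sketch of the scenario (outer_t and
inner_t are illustrative names, not the tables used in the regression test below):

create table outer_t (a int);
create table inner_t (a int);
insert into inner_t values (1);       -- one row: inner_t is a constant table
prepare ps from
  'select * from outer_t where a in (select a from inner_t)';
execute ps;                           -- inner_t could be pulled out of the nest here
insert into inner_t values (2), (3);  -- inner_t is no longer a 1-row table
execute ps;                           -- on re-execution inner_t is not constant,
                                      -- but the earlier pullout cannot be undone
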
=== modified file 'mysql-test/r/subselect_sj.result'
--- a/mysql-test/r/subselect_sj.result 2010-02-24 11:33:42 +0000
+++ b/mysql-test/r/subselect_sj.result 2010-03-14 17:54:12 +0000
@@ -1,4 +1,4 @@
-drop table if exists t0, t1, t2, t10, t11, t12;
+drop table if exists t0, t1, t2, t3, t4, t10, t11, t12;
create table t0 (a int);
insert into t0 values (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);
create table t1(a int, b int);
@@ -871,3 +871,54 @@
DROP TABLE t1, t2, t3;
DROP VIEW v2, v3;
# End of Bug#49198
+#
+# BUG#43768: Prepared query with nested subqueries core dumps on second execution
+#
+create table t1 (
+id int(11) unsigned not null primary key auto_increment,
+partner_id varchar(35) not null,
+t1_status_id int(10) unsigned
+);
+insert into t1 values ("1", "partner1", "10"), ("2", "partner2", "10"),
+("3", "partner3", "10"), ("4", "partner4", "10");
+create table t2 (
+id int(11) unsigned not null default '0',
+t1_line_id int(11) unsigned not null default '0',
+article_id varchar(20),
+sequence int(11) not null default '0',
+primary key (id,t1_line_id)
+);
+insert into t2 values ("1", "1", "sup", "0"), ("2", "1", "sup", "1"),
+("2", "2", "sup", "2"), ("2", "3", "sup", "3"),
+("2", "4", "imp", "4"), ("3", "1", "sup", "0"),
+("4", "1", "sup", "0");
+create table t3 (
+id int(11) not null default '0',
+preceeding_id int(11) not null default '0',
+primary key (id,preceeding_id)
+);
+create table t4 (
+user_id varchar(50) not null,
+article_id varchar(20) not null,
+primary key (user_id,article_id)
+);
+insert into t4 values("nicke", "imp");
+prepare stmt from
+'select t1.partner_id
+from t1
+where
+ t1.id in (
+ select pl_inner.id
+ from t2 as pl_inner
+ where pl_inner.article_id in (
+ select t4.article_id from t4
+ where t4.user_id = \'nicke\'
+ )
+ )';
+execute stmt;
+partner_id
+partner2
+execute stmt;
+partner_id
+partner2
+drop table t1,t2,t3,t4;
=== modified file 'mysql-test/r/subselect_sj_jcl6.result'
--- a/mysql-test/r/subselect_sj_jcl6.result 2010-03-07 15:41:45 +0000
+++ b/mysql-test/r/subselect_sj_jcl6.result 2010-03-14 17:54:12 +0000
@@ -2,7 +2,7 @@
show variables like 'join_cache_level';
Variable_name Value
join_cache_level 6
-drop table if exists t0, t1, t2, t10, t11, t12;
+drop table if exists t0, t1, t2, t3, t4, t10, t11, t12;
create table t0 (a int);
insert into t0 values (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);
create table t1(a int, b int);
@@ -876,6 +876,57 @@
DROP VIEW v2, v3;
# End of Bug#49198
#
+# BUG#43768: Prepared query with nested subqueries core dumps on second execution
+#
+create table t1 (
+id int(11) unsigned not null primary key auto_increment,
+partner_id varchar(35) not null,
+t1_status_id int(10) unsigned
+);
+insert into t1 values ("1", "partner1", "10"), ("2", "partner2", "10"),
+("3", "partner3", "10"), ("4", "partner4", "10");
+create table t2 (
+id int(11) unsigned not null default '0',
+t1_line_id int(11) unsigned not null default '0',
+article_id varchar(20),
+sequence int(11) not null default '0',
+primary key (id,t1_line_id)
+);
+insert into t2 values ("1", "1", "sup", "0"), ("2", "1", "sup", "1"),
+("2", "2", "sup", "2"), ("2", "3", "sup", "3"),
+("2", "4", "imp", "4"), ("3", "1", "sup", "0"),
+("4", "1", "sup", "0");
+create table t3 (
+id int(11) not null default '0',
+preceeding_id int(11) not null default '0',
+primary key (id,preceeding_id)
+);
+create table t4 (
+user_id varchar(50) not null,
+article_id varchar(20) not null,
+primary key (user_id,article_id)
+);
+insert into t4 values("nicke", "imp");
+prepare stmt from
+'select t1.partner_id
+from t1
+where
+ t1.id in (
+ select pl_inner.id
+ from t2 as pl_inner
+ where pl_inner.article_id in (
+ select t4.article_id from t4
+ where t4.user_id = \'nicke\'
+ )
+ )';
+execute stmt;
+partner_id
+partner2
+execute stmt;
+partner_id
+partner2
+drop table t1,t2,t3,t4;
+#
# BUG#49129: Wrong result with IN-subquery with join_cache_level=6 and firstmatch=off
#
CREATE TABLE t0 (a INT);
=== modified file 'mysql-test/t/subselect_sj.test'
--- a/mysql-test/t/subselect_sj.test 2010-02-24 11:33:42 +0000
+++ b/mysql-test/t/subselect_sj.test 2010-03-14 17:54:12 +0000
@@ -2,7 +2,7 @@
# Nested Loops semi-join subquery evaluation tests
#
--disable_warnings
-drop table if exists t0, t1, t2, t10, t11, t12;
+drop table if exists t0, t1, t2, t3, t4, t10, t11, t12;
--enable_warnings
#
@@ -770,3 +770,60 @@
DROP VIEW v2, v3;
--echo # End of Bug#49198
+
+--echo #
+--echo # BUG#43768: Prepared query with nested subqueries core dumps on second execution
+--echo #
+create table t1 (
+ id int(11) unsigned not null primary key auto_increment,
+ partner_id varchar(35) not null,
+ t1_status_id int(10) unsigned
+);
+
+insert into t1 values ("1", "partner1", "10"), ("2", "partner2", "10"),
+ ("3", "partner3", "10"), ("4", "partner4", "10");
+
+create table t2 (
+ id int(11) unsigned not null default '0',
+ t1_line_id int(11) unsigned not null default '0',
+ article_id varchar(20),
+ sequence int(11) not null default '0',
+ primary key (id,t1_line_id)
+);
+
+insert into t2 values ("1", "1", "sup", "0"), ("2", "1", "sup", "1"),
+ ("2", "2", "sup", "2"), ("2", "3", "sup", "3"),
+ ("2", "4", "imp", "4"), ("3", "1", "sup", "0"),
+ ("4", "1", "sup", "0");
+create table t3 (
+ id int(11) not null default '0',
+ preceeding_id int(11) not null default '0',
+ primary key (id,preceeding_id)
+);
+
+create table t4 (
+ user_id varchar(50) not null,
+ article_id varchar(20) not null,
+ primary key (user_id,article_id)
+);
+
+insert into t4 values("nicke", "imp");
+prepare stmt from
+'select t1.partner_id
+from t1
+where
+ t1.id in (
+ select pl_inner.id
+ from t2 as pl_inner
+ where pl_inner.article_id in (
+ select t4.article_id from t4
+ where t4.user_id = \'nicke\'
+ )
+ )';
+
+execute stmt;
+execute stmt;
+drop table t1,t2,t3,t4;
+
+
+
=== modified file 'sql/opt_subselect.cc'
--- a/sql/opt_subselect.cc 2010-03-07 15:41:45 +0000
+++ b/sql/opt_subselect.cc 2010-03-14 17:54:12 +0000
@@ -963,7 +963,6 @@
{
/* Action #1: Mark the constant tables to be pulled out */
table_map pulled_tables= 0;
-
List_iterator<TABLE_LIST> child_li(sj_nest->nested_join->join_list);
TABLE_LIST *tbl;
while ((tbl= child_li++))
@@ -971,12 +970,34 @@
if (tbl->table)
{
tbl->table->reginfo.join_tab->emb_sj_nest= sj_nest;
+#if 0
+ /*
+ Do not pull out tables because they are constant. This operation has
+ a problem:
+ - Some constant tables may become/cease to be constant across PS
+ re-executions
+ - Contrary to our initial assumption, it turned out that table pullout
+ operation is not easily undoable.
+
+ The solution is to leave constant tables where they are. This will
+ affect only constant tables that are 1-row or empty, tables that are
+ constant because they are accessed via eq_ref(const) access will
+ still be pulled out as functionally-dependent.
+
+ This will cause us to miss the chance to flatten some of the
+ subqueries, but since const tables do not generate many duplicates,
+ it really doesn't matter that much whether they were pulled out or
+ not.
+
+ All of this was done as fix for BUG#43768.
+ */
if (tbl->table->map & join->const_table_map)
{
pulled_tables |= tbl->table->map;
DBUG_PRINT("info", ("Table %s pulled out (reason: constant)",
tbl->table->alias));
}
+#endif
}
}
@@ -1048,6 +1069,7 @@
pointers.
*/
child_li.remove();
+ sj_nest->nested_join->used_tables &= ~tbl->table->map;
upper_join_list->push_back(tbl);
tbl->join_list= upper_join_list;
tbl->embedding= sj_nest->embedding;
@@ -1104,20 +1126,20 @@
DBUG_ENTER("optimize_semijoin_nests");
List_iterator<TABLE_LIST> sj_list_it(join->select_lex->sj_nests);
TABLE_LIST *sj_nest;
- /*
- The statement may have been executed with 'semijoin=on' earlier.
- We need to verify that 'semijoin=on' still holds.
- */
- if (optimizer_flag(join->thd, OPTIMIZER_SWITCH_SEMIJOIN) &&
- optimizer_flag(join->thd, OPTIMIZER_SWITCH_MATERIALIZATION))
+ while ((sj_nest= sj_list_it++))
{
- while ((sj_nest= sj_list_it++))
+ /* semi-join nests with only constant tables are not valid */
+ /// DBUG_ASSERT(sj_nest->sj_inner_tables & ~join->const_table_map);
+
+ sj_nest->sj_mat_info= NULL;
+ /*
+ The statement may have been executed with 'semijoin=on' earlier.
+ We need to verify that 'semijoin=on' still holds.
+ */
+ if (optimizer_flag(join->thd, OPTIMIZER_SWITCH_SEMIJOIN) &&
+ optimizer_flag(join->thd, OPTIMIZER_SWITCH_MATERIALIZATION))
{
- /* semi-join nests with only constant tables are not valid */
- DBUG_ASSERT(sj_nest->sj_inner_tables & ~join->const_table_map);
-
- sj_nest->sj_mat_info= NULL;
- if (sj_nest->sj_inner_tables && /* not everything was pulled out */
+ if ((sj_nest->sj_inner_tables & ~join->const_table_map) && /* not everything was pulled out */
!sj_nest->sj_subq_pred->is_correlated &&
sj_nest->sj_subq_pred->types_allow_materialization)
{
@@ -1128,7 +1150,7 @@
The best plan to run the subquery is now in join->best_positions,
save it.
*/
- uint n_tables= my_count_bits(sj_nest->sj_inner_tables);
+ uint n_tables= my_count_bits(sj_nest->sj_inner_tables & ~join->const_table_map);
SJ_MATERIALIZATION_INFO* sjm;
if (!(sjm= new SJ_MATERIALIZATION_INFO) ||
!(sjm->positions= (POSITION*)join->thd->alloc(sizeof(POSITION)*
@@ -1443,7 +1465,7 @@
new_join_tab->emb_sj_nest->nested_join->sj_corr_tables |
new_join_tab->emb_sj_nest->nested_join->sj_depends_on;
const table_map sj_inner_tables=
- new_join_tab->emb_sj_nest->sj_inner_tables;
+ new_join_tab->emb_sj_nest->sj_inner_tables & ~join->const_table_map;
/*
Enter condition:
=== modified file 'sql/sql_select.cc'
--- a/sql/sql_select.cc 2010-03-07 15:41:45 +0000
+++ b/sql/sql_select.cc 2010-03-14 17:54:12 +0000
@@ -5127,7 +5127,9 @@
/* number of tables that remain to be optimized */
n_tables= size_remain= my_count_bits(remaining_tables &
(join->emb_sjm_nest?
- join->emb_sjm_nest->sj_inner_tables :
+ (join->emb_sjm_nest->sj_inner_tables &
+ ~join->const_table_map)
+ :
~(table_map)0));
do {
@@ -5387,7 +5389,7 @@
table_map allowed_tables= ~(table_map)0;
if (join->emb_sjm_nest)
- allowed_tables= join->emb_sjm_nest->sj_inner_tables;
+ allowed_tables= join->emb_sjm_nest->sj_inner_tables & ~join->const_table_map;
for (JOIN_TAB **pos= join->best_ref + idx ; (s= *pos) ; pos++)
{
[Maria-developers] Rev 2775: Apply fix by oystein.grovlen@sun.com 2010-03-12: in file:///home/psergey/dev/maria-5.3-subqueries-r7/
by Sergey Petrunya 13 Mar '10
by Sergey Petrunya 13 Mar '10
13 Mar '10
At file:///home/psergey/dev/maria-5.3-subqueries-r7/
------------------------------------------------------------
revno: 2775
revision-id: psergey(a)askmonty.org-20100313211106-5xyfyl02gfenbi7f
parent: psergey(a)askmonty.org-20100313200452-kq4dxayp7b45zum1
committer: Sergey Petrunya <psergey(a)askmonty.org>
branch nick: maria-5.3-subqueries-r7
timestamp: Sun 2010-03-14 00:11:06 +0300
message:
Apply fix by oystein.grovlen(a)sun.com 2010-03-12:
Bug#48213 Materialized subselect crashes if using GEOMETRY type
The problem occurred because during semi-join processing a materialized table
was created which contained a GEOMETRY column, which is a specialized
BLOB column. This caused a segmentation fault because such tables will
have extra columns, and the semi-join code was not prepared for that.
The solution is to disable materialization when BLOB/GEOMETRY columns would
need to be materialized. BLOB columns cannot be used for index look-up
anyway, so it does not make sense to use materialization.
This fix implies that it is detected earlier that subquery materialization
cannot be used. As a result, the IN->EXISTS optimization may
be performed for such queries. Hence, extended query plans for such
queries had to be updated.
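
A minimal sketch of the affected query class (ta and tb are hypothetical tables
in the spirit of the test cases added below, which use t1/t2 with columns a..k):

CREATE TABLE ta (pk int, b varchar(4), k geometry, PRIMARY KEY (pk));
CREATE TABLE tb LIKE ta;
-- Before the fix the optimizer could pick the Materialize strategy for the
-- semi-join below and crash on the GEOMETRY (specialized BLOB) column.
-- After the fix, materialization is not considered for such subqueries and
-- another strategy (e.g. FirstMatch) or the IN->EXISTS transformation is used.
EXPLAIN EXTENDED SELECT pk FROM ta
WHERE (b, k) IN (SELECT b, k FROM tb WHERE pk > 0);
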
=== modified file 'mysql-test/r/subselect_mat.result'
--- a/mysql-test/r/subselect_mat.result 2010-03-13 20:04:52 +0000
+++ b/mysql-test/r/subselect_mat.result 2010-03-13 21:11:06 +0000
@@ -583,7 +583,7 @@
1 PRIMARY t1_16 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_16 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <in_optimizer>(`test`.`t1_16`.`a1`,`test`.`t1_16`.`a1` in (select 1 AS `Not_used` from `test`.`t2_16` where ((`test`.`t2_16`.`b1` > '0') and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`))))
+Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <in_optimizer>(`test`.`t1_16`.`a1`,<exists>(select 1 AS `Not_used` from `test`.`t2_16` where ((`test`.`t2_16`.`b1` > '0') and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`))))
select left(a1,7), left(a2,7)
from t1_16
where a1 in (select b1 from t2_16 where b1 > '0');
@@ -597,7 +597,7 @@
1 PRIMARY t1_16 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_16 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <in_optimizer>((`test`.`t1_16`.`a1`,`test`.`t1_16`.`a2`),(`test`.`t1_16`.`a1`,`test`.`t1_16`.`a2`) in (select `test`.`t2_16`.`b1` AS `b1`,`test`.`t2_16`.`b2` AS `b2` from `test`.`t2_16` where ((`test`.`t2_16`.`b1` > '0') and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`) and (<cache>(`test`.`t1_16`.`a2`) = `test`.`t2_16`.`b2`))))
+Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <in_optimizer>((`test`.`t1_16`.`a1`,`test`.`t1_16`.`a2`),<exists>(select `test`.`t2_16`.`b1` AS `b1`,`test`.`t2_16`.`b2` AS `b2` from `test`.`t2_16` where ((`test`.`t2_16`.`b1` > '0') and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`) and (<cache>(`test`.`t1_16`.`a2`) = `test`.`t2_16`.`b2`))))
select left(a1,7), left(a2,7)
from t1_16
where (a1,a2) in (select b1, b2 from t2_16 where b1 > '0');
@@ -625,7 +625,7 @@
1 PRIMARY t1_16 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_16 ALL NULL NULL NULL NULL 3 100.00 Using filesort
Warnings:
-Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <in_optimizer>(`test`.`t1_16`.`a1`,`test`.`t1_16`.`a1` in (select group_concat(`test`.`t2_16`.`b1` separator ',') AS `group_concat(b1)` from `test`.`t2_16` group by `test`.`t2_16`.`b2` having (<cache>(`test`.`t1_16`.`a1`) = <ref_null_helper>(group_concat(`test`.`t2_16`.`b1` separator ',')))))
+Note 1003 select left(`test`.`t1_16`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_16`.`a2`,7) AS `left(a2,7)` from `test`.`t1_16` where <in_optimizer>(`test`.`t1_16`.`a1`,<exists>(select group_concat(`test`.`t2_16`.`b1` separator ',') AS `group_concat(b1)` from `test`.`t2_16` group by `test`.`t2_16`.`b2` having (<cache>(`test`.`t1_16`.`a1`) = <ref_null_helper>(group_concat(`test`.`t2_16`.`b1` separator ',')))))
select left(a1,7), left(a2,7)
from t1_16
where a1 in (select group_concat(b1) from t2_16 group by b2);
@@ -662,7 +662,7 @@
3 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 5 100.00 Using where; Using join buffer
4 SUBQUERY t3 ALL NULL NULL NULL NULL 4 100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <in_optimizer>(concat(`test`.`t1`.`a1`,'x'),<exists>(select 1 AS `Not_used` from `test`.`t1_16` where (<in_optimizer>((`test`.`t1_16`.`a1`,`test`.`t1_16`.`a2`),(`test`.`t1_16`.`a1`,`test`.`t1_16`.`a2`) in (select `test`.`t2_16`.`b1` AS `b1`,`test`.`t2_16`.`b2` AS `b2` from `test`.`t2_16` join `test`.`t2` where ((`test`.`t2`.`b2` = substr(`test`.`t2_16`.`b2`,1,6)) and <in_optimizer>(`test`.`t2`.`b1`,`test`.`t2`.`b1` in ( <materialize> (select `test`.`t3`.`c1` AS `c1` from `test`.`t3` where (`test`.`t3`.`c2` > '0') ), <primary_index_lookup>(`test`.`t2`.`b1` in <temporary table> on distinct_key where ((`test`.`t2`.`b1` = `materialized subselect`.`c1`))))) and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`) and (<cache>(`test`.`t1_16`.`a2`) = `test`.`t2_16`.`b2`)))) and (<cache>(concat(`test`.`t1`.`a1`,'x')) = left(`test`.`t1_16`.`a1`,8)))))
+Note 1003 select `test`.`t1`.`a1` AS `a1`,`test`.`t1`.`a2` AS `a2` from `test`.`t1` where <in_optimizer>(concat(`test`.`t1`.`a1`,'x'),<exists>(select 1 AS `Not_used` from `test`.`t1_16` where (<in_optimizer>((`test`.`t1_16`.`a1`,`test`.`t1_16`.`a2`),<exists>(select `test`.`t2_16`.`b1` AS `b1`,`test`.`t2_16`.`b2` AS `b2` from `test`.`t2_16` join `test`.`t2` where ((`test`.`t2`.`b2` = substr(`test`.`t2_16`.`b2`,1,6)) and <in_optimizer>(`test`.`t2`.`b1`,`test`.`t2`.`b1` in ( <materialize> (select `test`.`t3`.`c1` AS `c1` from `test`.`t3` where (`test`.`t3`.`c2` > '0') ), <primary_index_lookup>(`test`.`t2`.`b1` in <temporary table> on distinct_key where ((`test`.`t2`.`b1` = `materialized subselect`.`c1`))))) and (<cache>(`test`.`t1_16`.`a1`) = `test`.`t2_16`.`b1`) and (<cache>(`test`.`t1_16`.`a2`) = `test`.`t2_16`.`b2`)))) and (<cache>(concat(`test`.`t1`.`a1`,'x')) = left(`test`.`t1_16`.`a1`,8)))))
drop table t1_16, t2_16, t3_16;
set @blob_len = 512;
set @suffix_len = @blob_len - @prefix_len;
@@ -696,7 +696,7 @@
1 PRIMARY t1_512 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_512 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <in_optimizer>(`test`.`t1_512`.`a1`,`test`.`t1_512`.`a1` in (select 1 AS `Not_used` from `test`.`t2_512` where ((`test`.`t2_512`.`b1` > '0') and (<cache>(`test`.`t1_512`.`a1`) = `test`.`t2_512`.`b1`))))
+Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <in_optimizer>(`test`.`t1_512`.`a1`,<exists>(select 1 AS `Not_used` from `test`.`t2_512` where ((`test`.`t2_512`.`b1` > '0') and (<cache>(`test`.`t1_512`.`a1`) = `test`.`t2_512`.`b1`))))
select left(a1,7), left(a2,7)
from t1_512
where a1 in (select b1 from t2_512 where b1 > '0');
@@ -710,7 +710,7 @@
1 PRIMARY t1_512 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_512 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <in_optimizer>((`test`.`t1_512`.`a1`,`test`.`t1_512`.`a2`),(`test`.`t1_512`.`a1`,`test`.`t1_512`.`a2`) in (select `test`.`t2_512`.`b1` AS `b1`,`test`.`t2_512`.`b2` AS `b2` from `test`.`t2_512` where ((`test`.`t2_512`.`b1` > '0') and (<cache>(`test`.`t1_512`.`a1`) = `test`.`t2_512`.`b1`) and (<cache>(`test`.`t1_512`.`a2`) = `test`.`t2_512`.`b2`))))
+Note 1003 select left(`test`.`t1_512`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_512`.`a2`,7) AS `left(a2,7)` from `test`.`t1_512` where <in_optimizer>((`test`.`t1_512`.`a1`,`test`.`t1_512`.`a2`),<exists>(select `test`.`t2_512`.`b1` AS `b1`,`test`.`t2_512`.`b2` AS `b2` from `test`.`t2_512` where ((`test`.`t2_512`.`b1` > '0') and (<cache>(`test`.`t1_512`.`a1`) = `test`.`t2_512`.`b1`) and (<cache>(`test`.`t1_512`.`a2`) = `test`.`t2_512`.`b2`))))
select left(a1,7), left(a2,7)
from t1_512
where (a1,a2) in (select b1, b2 from t2_512 where b1 > '0');
@@ -789,7 +789,7 @@
1 PRIMARY t1_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <in_optimizer>(`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a1` in (select 1 AS `Not_used` from `test`.`t2_1024` where ((`test`.`t2_1024`.`b1` > '0') and (<cache>(`test`.`t1_1024`.`a1`) = `test`.`t2_1024`.`b1`))))
+Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <in_optimizer>(`test`.`t1_1024`.`a1`,<exists>(select 1 AS `Not_used` from `test`.`t2_1024` where ((`test`.`t2_1024`.`b1` > '0') and (<cache>(`test`.`t1_1024`.`a1`) = `test`.`t2_1024`.`b1`))))
select left(a1,7), left(a2,7)
from t1_1024
where a1 in (select b1 from t2_1024 where b1 > '0');
@@ -803,7 +803,7 @@
1 PRIMARY t1_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_1024 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <in_optimizer>((`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a2`),(`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a2`) in (select `test`.`t2_1024`.`b1` AS `b1`,`test`.`t2_1024`.`b2` AS `b2` from `test`.`t2_1024` where ((`test`.`t2_1024`.`b1` > '0') and (<cache>(`test`.`t1_1024`.`a1`) = `test`.`t2_1024`.`b1`) and (<cache>(`test`.`t1_1024`.`a2`) = `test`.`t2_1024`.`b2`))))
+Note 1003 select left(`test`.`t1_1024`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1024`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1024` where <in_optimizer>((`test`.`t1_1024`.`a1`,`test`.`t1_1024`.`a2`),<exists>(select `test`.`t2_1024`.`b1` AS `b1`,`test`.`t2_1024`.`b2` AS `b2` from `test`.`t2_1024` where ((`test`.`t2_1024`.`b1` > '0') and (<cache>(`test`.`t1_1024`.`a1`) = `test`.`t2_1024`.`b1`) and (<cache>(`test`.`t1_1024`.`a2`) = `test`.`t2_1024`.`b2`))))
select left(a1,7), left(a2,7)
from t1_1024
where (a1,a2) in (select b1, b2 from t2_1024 where b1 > '0');
@@ -882,7 +882,7 @@
1 PRIMARY t1_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <in_optimizer>(`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a1` in (select 1 AS `Not_used` from `test`.`t2_1025` where ((`test`.`t2_1025`.`b1` > '0') and (<cache>(`test`.`t1_1025`.`a1`) = `test`.`t2_1025`.`b1`))))
+Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <in_optimizer>(`test`.`t1_1025`.`a1`,<exists>(select 1 AS `Not_used` from `test`.`t2_1025` where ((`test`.`t2_1025`.`b1` > '0') and (<cache>(`test`.`t1_1025`.`a1`) = `test`.`t2_1025`.`b1`))))
select left(a1,7), left(a2,7)
from t1_1025
where a1 in (select b1 from t2_1025 where b1 > '0');
@@ -896,7 +896,7 @@
1 PRIMARY t1_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2_1025 ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <in_optimizer>((`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a2`),(`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a2`) in (select `test`.`t2_1025`.`b1` AS `b1`,`test`.`t2_1025`.`b2` AS `b2` from `test`.`t2_1025` where ((`test`.`t2_1025`.`b1` > '0') and (<cache>(`test`.`t1_1025`.`a1`) = `test`.`t2_1025`.`b1`) and (<cache>(`test`.`t1_1025`.`a2`) = `test`.`t2_1025`.`b2`))))
+Note 1003 select left(`test`.`t1_1025`.`a1`,7) AS `left(a1,7)`,left(`test`.`t1_1025`.`a2`,7) AS `left(a2,7)` from `test`.`t1_1025` where <in_optimizer>((`test`.`t1_1025`.`a1`,`test`.`t1_1025`.`a2`),<exists>(select `test`.`t2_1025`.`b1` AS `b1`,`test`.`t2_1025`.`b2` AS `b2` from `test`.`t2_1025` where ((`test`.`t2_1025`.`b1` > '0') and (<cache>(`test`.`t1_1025`.`a1`) = `test`.`t2_1025`.`b1`) and (<cache>(`test`.`t1_1025`.`a2`) = `test`.`t2_1025`.`b2`))))
select left(a1,7), left(a2,7)
from t1_1025
where (a1,a2) in (select b1, b2 from t2_1025 where b1 > '0');
@@ -982,7 +982,7 @@
1 PRIMARY t1bb ALL NULL NULL NULL NULL 3 100.00 Using where
2 DEPENDENT SUBQUERY t2bb ALL NULL NULL NULL NULL 3 100.00 Using where
Warnings:
-Note 1003 select conv(`test`.`t1bb`.`a1`,10,2) AS `bin(a1)`,`test`.`t1bb`.`a2` AS `a2` from `test`.`t1bb` where <in_optimizer>((`test`.`t1bb`.`a1`,`test`.`t1bb`.`a2`),(`test`.`t1bb`.`a1`,`test`.`t1bb`.`a2`) in (select `test`.`t2bb`.`b1` AS `b1`,`test`.`t2bb`.`b2` AS `b2` from `test`.`t2bb` where ((<cache>(`test`.`t1bb`.`a1`) = `test`.`t2bb`.`b1`) and (<cache>(`test`.`t1bb`.`a2`) = `test`.`t2bb`.`b2`))))
+Note 1003 select conv(`test`.`t1bb`.`a1`,10,2) AS `bin(a1)`,`test`.`t1bb`.`a2` AS `a2` from `test`.`t1bb` where <in_optimizer>((`test`.`t1bb`.`a1`,`test`.`t1bb`.`a2`),<exists>(select `test`.`t2bb`.`b1` AS `b1`,`test`.`t2bb`.`b2` AS `b2` from `test`.`t2bb` where ((<cache>(`test`.`t1bb`.`a1`) = `test`.`t2bb`.`b1`) and (<cache>(`test`.`t1bb`.`a2`) = `test`.`t2bb`.`b2`))))
select bin(a1), a2
from t1bb
where (a1, a2) in (select b1, b2 from t2bb);
=== modified file 'mysql-test/r/subselect_sj.result'
--- a/mysql-test/r/subselect_sj.result 2010-03-13 20:04:52 +0000
+++ b/mysql-test/r/subselect_sj.result 2010-03-13 21:11:06 +0000
@@ -825,6 +825,127 @@
2
drop table t1, t2, t3;
#
+# Bug#48213 Materialized subselect crashes if using GEOMETRY type
+#
+CREATE TABLE t1 (
+pk int,
+a varchar(1),
+b varchar(4),
+c tinyblob,
+d blob,
+e mediumblob,
+f longblob,
+g tinytext,
+h text,
+i mediumtext,
+j longtext,
+k geometry,
+PRIMARY KEY (pk)
+);
+INSERT INTO t1 VALUES (1,'o','ffff','ffff','ffoo','ffff','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))')), (2,'f','ffff','ffff','ffff', 'ffff','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))'));
+CREATE TABLE t2 LIKE t1;
+INSERT INTO t2 VALUES (1,'i','iiii','iiii','iiii','iiii','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))')), (2,'f','ffff','ffff','ffff','ffff','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))'));
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (a, b) IN (SELECT a, b FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using MRR; Materialize
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (a, b) IN (SELECT a, b FROM t2 WHERE pk > 0);
+pk
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, c) IN (SELECT b, c FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1)
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`c` = `test`.`t1`.`c`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, c) IN (SELECT b, c FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, d) IN (SELECT b, d FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1)
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`d` = `test`.`t1`.`d`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, d) IN (SELECT b, d FROM t2 WHERE pk > 0);
+pk
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, e) IN (SELECT b, e FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1)
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`e` = `test`.`t1`.`e`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, e) IN (SELECT b, e FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, f) IN (SELECT b, f FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1)
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`f` = `test`.`t1`.`f`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, f) IN (SELECT b, f FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, g) IN (SELECT b, g FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1)
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`g` = `test`.`t1`.`g`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, g) IN (SELECT b, g FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, h) IN (SELECT b, h FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1)
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`h` = `test`.`t1`.`h`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, h) IN (SELECT b, h FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, i) IN (SELECT b, i FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1)
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`i` = `test`.`t1`.`i`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, i) IN (SELECT b, i FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, j) IN (SELECT b, j FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1)
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`j` = `test`.`t1`.`j`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, j) IN (SELECT b, j FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, k) IN (SELECT b, k FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1)
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`k` = `test`.`t1`.`k`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, k) IN (SELECT b, k FROM t2 WHERE pk > 0);
+pk
+1
+2
+DROP TABLE t1, t2;
+# End of Bug#48213
+#
# Bug#49198 Wrong result for second call of procedure
# with view in subselect.
#
=== modified file 'mysql-test/r/subselect_sj_jcl6.result'
--- a/mysql-test/r/subselect_sj_jcl6.result 2010-03-13 20:04:52 +0000
+++ b/mysql-test/r/subselect_sj_jcl6.result 2010-03-13 21:11:06 +0000
@@ -829,6 +829,127 @@
2
drop table t1, t2, t3;
#
+# Bug#48213 Materialized subselect crashes if using GEOMETRY type
+#
+CREATE TABLE t1 (
+pk int,
+a varchar(1),
+b varchar(4),
+c tinyblob,
+d blob,
+e mediumblob,
+f longblob,
+g tinytext,
+h text,
+i mediumtext,
+j longtext,
+k geometry,
+PRIMARY KEY (pk)
+);
+INSERT INTO t1 VALUES (1,'o','ffff','ffff','ffoo','ffff','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))')), (2,'f','ffff','ffff','ffff', 'ffff','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))'));
+CREATE TABLE t2 LIKE t1;
+INSERT INTO t2 VALUES (1,'i','iiii','iiii','iiii','iiii','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))')), (2,'f','ffff','ffff','ffff','ffff','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))'));
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (a, b) IN (SELECT a, b FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using MRR; Materialize
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (a, b) IN (SELECT a, b FROM t2 WHERE pk > 0);
+pk
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, c) IN (SELECT b, c FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1); Using join buffer
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`c` = `test`.`t1`.`c`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, c) IN (SELECT b, c FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, d) IN (SELECT b, d FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1); Using join buffer
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`d` = `test`.`t1`.`d`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, d) IN (SELECT b, d FROM t2 WHERE pk > 0);
+pk
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, e) IN (SELECT b, e FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1); Using join buffer
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`e` = `test`.`t1`.`e`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, e) IN (SELECT b, e FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, f) IN (SELECT b, f FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1); Using join buffer
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`f` = `test`.`t1`.`f`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, f) IN (SELECT b, f FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, g) IN (SELECT b, g FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1); Using join buffer
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`g` = `test`.`t1`.`g`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, g) IN (SELECT b, g FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, h) IN (SELECT b, h FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1); Using join buffer
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`h` = `test`.`t1`.`h`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, h) IN (SELECT b, h FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, i) IN (SELECT b, i FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1); Using join buffer
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`i` = `test`.`t1`.`i`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, i) IN (SELECT b, i FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, j) IN (SELECT b, j FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1); Using join buffer
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`j` = `test`.`t1`.`j`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, j) IN (SELECT b, j FROM t2 WHERE pk > 0);
+pk
+1
+2
+EXPLAIN EXTENDED SELECT pk FROM t1 WHERE (b, k) IN (SELECT b, k FROM t2 WHERE pk > 0);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+1 PRIMARY t2 range PRIMARY PRIMARY 4 NULL 2 100.00 Using index condition; Using where; Using MRR; FirstMatch(t1); Using join buffer
+Warnings:
+Note 1003 select `test`.`t1`.`pk` AS `pk` from `test`.`t1` semi join (`test`.`t2`) where ((`test`.`t2`.`k` = `test`.`t1`.`k`) and (`test`.`t2`.`b` = `test`.`t1`.`b`) and (`test`.`t2`.`pk` > 0))
+SELECT pk FROM t1 WHERE (b, k) IN (SELECT b, k FROM t2 WHERE pk > 0);
+pk
+1
+2
+DROP TABLE t1, t2;
+# End of Bug#48213
+#
# Bug#49198 Wrong result for second call of procedure
# with view in subselect.
#
=== modified file 'mysql-test/t/subselect_sj.test'
--- a/mysql-test/t/subselect_sj.test 2010-03-13 20:04:52 +0000
+++ b/mysql-test/t/subselect_sj.test 2010-03-13 21:11:06 +0000
@@ -729,6 +729,86 @@
drop table t1, t2, t3;
--echo #
+--echo # Bug#48213 Materialized subselect crashes if using GEOMETRY type
+--echo #
+
+CREATE TABLE t1 (
+ pk int,
+ a varchar(1),
+ b varchar(4),
+ c tinyblob,
+ d blob,
+ e mediumblob,
+ f longblob,
+ g tinytext,
+ h text,
+ i mediumtext,
+ j longtext,
+ k geometry,
+ PRIMARY KEY (pk)
+);
+
+INSERT INTO t1 VALUES (1,'o','ffff','ffff','ffoo','ffff','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))')), (2,'f','ffff','ffff','ffff', 'ffff','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))'));
+
+CREATE TABLE t2 LIKE t1;
+INSERT INTO t2 VALUES (1,'i','iiii','iiii','iiii','iiii','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))')), (2,'f','ffff','ffff','ffff','ffff','ffff','ffff','ffff','ffff','ffff',GeomFromText('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))'));
+
+# Test that materialization is skipped for semijoins where materialized
+# table would contain GEOMETRY or different kinds of BLOB/TEXT columns
+let $query=
+SELECT pk FROM t1 WHERE (a, b) IN (SELECT a, b FROM t2 WHERE pk > 0);
+eval EXPLAIN EXTENDED $query;
+eval $query;
+
+let $query=
+SELECT pk FROM t1 WHERE (b, c) IN (SELECT b, c FROM t2 WHERE pk > 0);
+eval EXPLAIN EXTENDED $query;
+eval $query;
+
+let $query=
+SELECT pk FROM t1 WHERE (b, d) IN (SELECT b, d FROM t2 WHERE pk > 0);
+eval EXPLAIN EXTENDED $query;
+eval $query;
+
+let $query=
+SELECT pk FROM t1 WHERE (b, e) IN (SELECT b, e FROM t2 WHERE pk > 0);
+eval EXPLAIN EXTENDED $query;
+eval $query;
+
+let $query=
+SELECT pk FROM t1 WHERE (b, f) IN (SELECT b, f FROM t2 WHERE pk > 0);
+eval EXPLAIN EXTENDED $query;
+eval $query;
+
+let $query=
+SELECT pk FROM t1 WHERE (b, g) IN (SELECT b, g FROM t2 WHERE pk > 0);
+eval EXPLAIN EXTENDED $query;
+eval $query;
+
+let $query=
+SELECT pk FROM t1 WHERE (b, h) IN (SELECT b, h FROM t2 WHERE pk > 0);
+eval EXPLAIN EXTENDED $query;
+eval $query;
+
+let $query=
+SELECT pk FROM t1 WHERE (b, i) IN (SELECT b, i FROM t2 WHERE pk > 0);
+eval EXPLAIN EXTENDED $query;
+eval $query;
+
+let $query=
+SELECT pk FROM t1 WHERE (b, j) IN (SELECT b, j FROM t2 WHERE pk > 0);
+eval EXPLAIN EXTENDED $query;
+eval $query;
+
+let $query=
+SELECT pk FROM t1 WHERE (b, k) IN (SELECT b, k FROM t2 WHERE pk > 0);
+eval EXPLAIN EXTENDED $query;
+eval $query;
+
+DROP TABLE t1, t2;
+--echo # End of Bug#48213
+
+--echo #
--echo # Bug#49198 Wrong result for second call of procedure
--echo # with view in subselect.
--echo #
=== modified file 'sql/opt_subselect.cc'
--- a/sql/opt_subselect.cc 2010-03-13 20:04:52 +0000
+++ b/sql/opt_subselect.cc 2010-03-13 21:11:06 +0000
@@ -322,7 +322,13 @@
default:
;/* suitable for materialization */
}
+
+ // Materialization does not work with BLOB columns
+ if (inner->field_type() == MYSQL_TYPE_BLOB ||
+ inner->field_type() == MYSQL_TYPE_GEOMETRY)
+ DBUG_RETURN(FALSE);
}
+
in_subs->types_allow_materialization= TRUE;
in_subs->sjm_scan_allowed= all_are_fields;
DBUG_PRINT("info",("subquery_types_allow_materialization: ok, allowed"));
Re: [Maria-developers] bzr commit into mysql-5.4 branch (epotemkin:2814) Bug#45174
by Sergey Petrunya 13 Mar '10
by Sergey Petrunya 13 Mar '10
13 Mar '10
Hi!
I can't offer any testcases but I think this patch has several issues. See
below.
On Tue, Oct 13, 2009 at 09:38:52AM +0000, Evgeny Potemkin wrote:
> #At file:///work/bzrroot/45174-bug-azalea/ based on revid:alik@sun.com-20090702085822-8svd0aslr7qnddbb
>
> 2814 Evgeny Potemkin 2009-10-13
> Bug#45174: Incorrectly applied equality propagation caused wrong result
> on a query with a materialized semi-join.
>
> When a subquery is a subject to a semi-join optimization its tables are
> merged to the upper query and later they treated as usual tables.
> This allows a bunch of optimizations to be applied, equality
> propagation is among them. Equality propagation is done after query execution
> plan is chosen. It substitutes fields from tables being retrieved later for
> fields from tables being retrieved earlier. However it can't be applied as is
> to any semi-join table.
> The semi-join materialization strategy differs from other semi-join
> strategies that the data from materialized semi-join tables isn't used
> directly but saved to a temporary table first. The materialization isn't
> isolated is a separate step, it is done inline within the nested loop execution.
> When it comes to fetch rows from the first table in
> the block of materialized semi-join tables they are isolated and the
> sub_select function is called to materialize result and save it in the
> semi-join result table. Materialization is done once and later data from the
> semi-join result table is used.
> Due to this we can't substitute fields that belong to the semi-join
> for fields from outer query and vice versa.
>
> Example: suppose we have a join order:
>
> ot1 ot2 SJ-Mat(it1 it2 it3) ot3
>
> and equality ot2.col = it1.col = it2.col
> If we're looking for best substitute for 'it2.col', we should pick it1.col
> and not ot2.col.
>
> For a field that is not in a materialized semi-join we must pick a field
> that's not embedded in a materialized semi-join.
>
> Example: suppose we have a join order:
>
> SJ-Mat(it1 it2) ot1 ot2
>
> and equality ot2.col = ot1.col = it2.col
> If we're looking for best substitute for 'ot2.col', we should pick ot1.col
> and not it2.col, because when we run a join between ot1 and ot2
> execution of SJ-Mat(...) has already finished and we can't rely on the value
> of it*.*.
>
> Now the Item_equal::get_first function accepts as a parameter a field being
> substituted and checks whether it belongs to a materialized semi-join.
> Depending on the check result a field to substitute for or NULL is returned.
>
> The sj_strategy field is added to the st_join_table structure. It's a copy of the
> POSITION::sj_strategy field and is used to easy checks.
> @ mysql-test/r/subselect_sj.result
> A test case added for the bug#45174.
> @ mysql-test/r/subselect_sj_jcl6.result
> A test case added for the bug#45174.
> @ mysql-test/t/subselect_sj.test
> A test case added for the bug#45174.
> @ sql/item.cc
> Bug#45174: Incorrectly applied equality propagation caused wrong result
> on a query with a materialized semi-join.
> Now the Item_equal::get_first function accepts as a parameter a field being
> substituted.
> @ sql/item_cmpfunc.cc
> Bug#45174: Incorrectly applied equality propagation caused wrong result
> on a query with a materialized semi-join.
>
> Now the Item_equal::get_first function accepts a field being substituted and
> checks whether it belongs to a materialized semi-join. Depending on the check
> result a field to substitute for or NULL is returned.
> @ sql/item_cmpfunc.h
> Bug#45174: Incorrectly applied equality propagation caused wrong result
> on a query with a materialized semi-join.
>
> Now the Item_equal::get_first function accepts as a parameter a field being
> substituted.
> @ sql/sql_select.cc
> Bug#45174: Incorrectly applied equality propagation caused wrong result
> on a query with a materialized semi-join.
> The is_sj_materialization_strategy method is added to the JOIN_TAB class to
> check whether JOIN_TAB belongs to a materialized semi-join.
> @ sql/sql_select.h
> Bug#45174: Incorrectly applied equality propagation caused wrong result
> on a query with a materialized semi-join.
>
> The sj_strategy field is added to the st_join_table structure. It's a copy of the
> POSITION::sj_strategy field and is used to easy checks.
>
> modified:
> mysql-test/r/subselect_sj.result
> mysql-test/r/subselect_sj_jcl6.result
> mysql-test/t/subselect_sj.test
> sql/item.cc
> sql/item_cmpfunc.cc
> sql/item_cmpfunc.h
> sql/sql_select.cc
> sql/sql_select.h
> === modified file 'sql/item.cc'
> --- a/sql/item.cc 2009-06-09 16:53:34 +0000
> +++ b/sql/item.cc 2009-10-13 09:38:46 +0000
> @@ -4883,7 +4883,7 @@ Item *Item_field::replace_equal_field(uc
> return this;
> return const_item;
> }
> - Item_field *subst= item_equal->get_first();
> + Item_field *subst= item_equal->get_first(this);
> if (subst && field->table != subst->field->table && !field->eq(subst->field))
> return subst;
> }
>
> === modified file 'sql/item_cmpfunc.cc'
> --- a/sql/item_cmpfunc.cc 2009-06-09 16:53:34 +0000
> +++ b/sql/item_cmpfunc.cc 2009-10-13 09:38:46 +0000
> @@ -5376,7 +5376,7 @@ longlong Item_equal::val_int()
>
> void Item_equal::fix_length_and_dec()
> {
> - Item *item= get_first();
> + Item *item= get_first(NULL);
> eval_item= cmp_item::get_comparator(item->result_type(),
> item->collation.collation);
> }
> @@ -5439,3 +5439,115 @@ void Item_equal::print(String *str, enum
> str->append(')');
> }
>
> +
> +/*
> + @brief Get the first equal field of multiple equality.
> + @param[in] field the field to get equal field to
> +
> + @details Get the first field of multiple equality that is equal to the
> + given field. In order to make semi-join materialization strategy work
> + correctly we can't propagate equal fields from upper select to a
> + materialized semi-join.
> + Thus the fields is returned according to following rules:
> +
> + 1) If the given field belongs to a semi-join then the first field in
> + multiple equality which belong to the same semi-join is returned.
> + Otherwise NULL is returned.
> + 2) If the given field doesn't belong to a semi-join then
> + the first field in the multiple equality that doesn't belong to any
> + semi-join is returned.
> + If all fields in the equality are belong to semi-join(s) then NULL
> + is returned.
> + 3) If no field is given then the first field in the multiple equality
> + is returned without regarding whether it belongs to a semi-join or not.
> +
> + @retval Found first field in the multiple equality.
> + @retval 0 if no field found.
> +*/
> +
> +Item_field* Item_equal::get_first(Item_field *field)
> +{
> + List_iterator<Item_field> it(fields);
> + Item_field *item;
> + JOIN_TAB *field_tab;
> +
> + if (!field)
> + return fields.head();
> + /*
> + Of all equal fields, return the first one we can use. Normally, this is the
> + field which belongs to the table that is the first in the join order.
> +
> + There is one exception to this: When semi-join materialization strategy is
> + used, and the given field belongs to a table within the semi-join nest, we
> + must pick the first field in the semi-join nest.
> +
> + Example: suppose we have a join order:
> +
> + ot1 ot2 SJ-Mat(it1 it2 it3) ot3
> +
> + and equality ot2.col = it1.col = it2.col
> + If we're looking for best substitute for 'it2.col', we should pick it1.col
> + and not ot2.col.
> + */
> +
> + field_tab= field->field->table->reginfo.join_tab;
> + if (field_tab->sj_strategy == SJ_OPT_MATERIALIZE ||
> + field_tab->sj_strategy == SJ_OPT_MATERIALIZE_SCAN)
> + {
> + /*
> + It's a field from an materialized semi-join. We can substitute it only
> + for a field from the same semi-join.
> + */
> + JOIN_TAB *first;
> + JOIN *join= field_tab->join;
> + uint tab_idx= field_tab - field_tab->join->join_tab;
> + /* Find first table of this semi-join. */
> + for (int i=tab_idx; i >= join->const_tables; i--)
> + {
> + if (join->best_positions[i].sj_strategy == SJ_OPT_MATERIALIZE ||
> + join->best_positions[i].sj_strategy == SJ_OPT_MATERIALIZE_SCAN)
> + first= join->join_tab + i;
> + else
> + // Found first tab that doesn't belong to current SJ.
> + break;
> + }
> + /* Find an item to substitute for. */
> + while ((item= it++))
> + {
> + if (item->field->table->reginfo.join_tab >= first)
> + {
> + /*
> + If we found given field then return NULL to avoid unnecessary
> + substitution.
> + */
> + return (item != field) ? item : NULL;
> + }
> + }
> + }
> + else
> + {
> + /*
> + The field is not in SJ-Materialization nest. We must return the first
> + field that's not embedded in a SJ-Materialization nest.
> + Example: suppose we have a join order:
> +
> + SJ-Mat(it1 it2) ot1 ot2
> +
> + and equality ot2.col = ot1.col = it2.col
> + If we're looking for best substitute for 'ot2.col', we should pick ot1.col
> + and not it2.col, because when we run a join between ot1 and ot2
> + execution of SJ-Mat(...) has already finished and we can't rely on the
> + value of it*.*.
This can cause a cross join to be computed between the materialization result and
table it1. Actually, substitution with table it2 should be fine, as
SJ-Materialization-Scan (and this example cannot use the lookup variant) will 'unpack'
the column value into it2.col when doing the scan of the materialized temptable.
I've written up my understanding of the problem here (with pics, so on the wiki):
http://askmonty.org/wiki/EqualityPropagationAndEqualityPropagationAndSemiJo…
> + */
> + while ((item= it++))
> + {
> + field_tab= item->field->table->reginfo.join_tab;
> + if (!(field_tab->sj_strategy == SJ_OPT_MATERIALIZE ||
> + field_tab->sj_strategy == SJ_OPT_MATERIALIZE_SCAN))
This is the wrong way to check whether a field is inside an SJ-Materialization nest. The
condition is true only for the first table of the SJ-Materialization nest, while we
need to catch *any* SJ-Mat-inner table.
The correct way to check this is as follows:
field_tab->pos_in_table_list->embedding &&
field_tab->pos_in_table_list->embedding->sj_mat &&
field_tab->pos_in_table_list->embedding->sj_mat->is_used
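For reference, the follow-up patch later in this thread adds an embedding_sjm() helper to
sql/sql_select.cc that factors out exactly this check; a minimal sketch along those lines
(types and member names as shown in that patch, so treat it as illustration rather than
verified code):

static TABLE_LIST* embedding_sjm(Item_field *item_field)
{
  /* Return the SJ-Materialization nest the field's table is embedded in, if any */
  TABLE_LIST *nest= item_field->field->table->pos_in_table_list->embedding;
  if (nest && nest->sj_mat_info && nest->sj_mat_info->is_used)
    return nest;
  return NULL;
}

With such a helper, "field is inside a used SJ-Mat nest" is simply
embedding_sjm(item) != NULL, and it holds for *any* inner table of the nest, not only
the first one.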
> + return item;
> + }
> + }
> + // Shouldn't get here.
> + DBUG_ASSERT(0);
> + return NULL;
> +}
>
> === modified file 'sql/item_cmpfunc.h'
> --- a/sql/item_cmpfunc.h 2009-01-26 16:03:39 +0000
> +++ b/sql/item_cmpfunc.h 2009-10-13 09:38:46 +0000
> @@ -1592,7 +1592,7 @@ public:
> void add(Item_field *f);
> uint members();
> bool contains(Field *field);
> - Item_field* get_first() { return fields.head(); }
> + Item_field* get_first(Item_field *field);
> void merge(Item_equal *item);
> void update_const();
> enum Functype functype() const { return MULT_EQUAL_FUNC; }
>
> === modified file 'sql/sql_select.cc'
> --- a/sql/sql_select.cc 2009-06-30 08:03:05 +0000
> +++ b/sql/sql_select.cc 2009-10-13 09:38:46 +0000
> @@ -7911,6 +7911,7 @@ static void fix_semijoin_strategies_for_
> if (tablenr != first)
> pos->sj_strategy= SJ_OPT_NONE;
> remaining_tables |= s->table->map;
> + s->sj_strategy= pos->sj_strategy;
> }
> }
>
> @@ -11706,7 +11707,7 @@ Item *eliminate_item_equal(COND *cond, C
> head= item_const;
> else
> {
> - head= item_equal->get_first();
> + head= item_equal->get_first(NULL);
> it++;
> }
> Item_field *item_field;
>
> === modified file 'sql/sql_select.h'
> --- a/sql/sql_select.h 2009-05-07 20:48:24 +0000
> +++ b/sql/sql_select.h 2009-10-13 09:38:46 +0000
> @@ -274,6 +274,13 @@ typedef struct st_join_table
> /* NestedOuterJoins: Bitmap of nested joins this table is part of */
> nested_join_map embedding_map;
>
> + /*
> + Semi-join strategy to be used for this join table. This is a copy of
> + POSITION::sj_strategy field. This field is set up by the
> + fix_semijion_strategies_for_picked_join_order.
> + */
> + uint sj_strategy;
> +
> void cleanup();
> inline bool is_using_loose_index_scan()
> {
>
BR
Sergey
--
Sergey Petrunia, Software Developer
Monty Program AB, http://askmonty.org
Blog: http://s.petrunia.net/blog
[Maria-developers] Rev 2774: BUG#45174: XOR in subqueries produces differing results in 5.1 and 5.4 in file:///home/psergey/dev/maria-5.3-subqueries-r7/
by Sergey Petrunya 13 Mar '10
by Sergey Petrunya 13 Mar '10
13 Mar '10
At file:///home/psergey/dev/maria-5.3-subqueries-r7/
------------------------------------------------------------
revno: 2774
revision-id: psergey(a)askmonty.org-20100313200452-kq4dxayp7b45zum1
parent: psergey(a)askmonty.org-20100307154145-ksby2b1l0sqm1xne
committer: Sergey Petrunya <psergey(a)askmonty.org>
branch nick: maria-5.3-subqueries-r7
timestamp: Sat 2010-03-13 23:04:52 +0300
message:
BUG#45174: XOR in subqueries produces differing results in 5.1 and 5.4
BUG#50019: Wrong result for IN-subquery with materialization
- Fix equality substitution in presence of semi-join materialization, lookup and scan variants
(started off from fix by Evgen Potemkin, then modified it to work in all cases)
=== modified file 'mysql-test/r/subselect_mat.result'
--- a/mysql-test/r/subselect_mat.result 2010-01-17 14:51:10 +0000
+++ b/mysql-test/r/subselect_mat.result 2010-03-13 20:04:52 +0000
@@ -1219,3 +1219,28 @@
pk
2
DROP TABLE t1, t2;
+#
+# BUG#50019: Wrong result for IN-subquery with materialization
+#
+create table t1(i int);
+insert into t1 values (1), (2), (3), (4), (5), (6), (7), (8), (9), (10);
+create table t2(i int);
+insert into t2 values (1), (2), (3), (4), (5), (6), (7), (8), (9), (10);
+create table t3(i int);
+insert into t3 values (1), (2), (3), (4), (5), (6), (7), (8), (9), (10);
+select * from t1 where t1.i in (select t2.i from t2 join t3 where t2.i + t3.i = 5);
+i
+1
+2
+3
+4
+set @save_optimizer_switch=@@optimizer_switch;
+set session optimizer_switch='materialization=off';
+select * from t1 where t1.i in (select t2.i from t2 join t3 where t2.i + t3.i = 5);
+i
+1
+2
+3
+4
+set session optimizer_switch=@save_optimizer_switch;
+drop table t1, t2, t3;
=== modified file 'mysql-test/r/subselect_sj.result'
--- a/mysql-test/r/subselect_sj.result 2010-02-24 11:33:42 +0000
+++ b/mysql-test/r/subselect_sj.result 2010-03-13 20:04:52 +0000
@@ -871,3 +871,39 @@
DROP TABLE t1, t2, t3;
DROP VIEW v2, v3;
# End of Bug#49198
+#
+# Bug#45174: Incorrectly applied equality propagation caused wrong
+# result on a query with a materialized semi-join.
+#
+CREATE TABLE `t1` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`varchar_key` varchar(1) NOT NULL,
+`varchar_nokey` varchar(1) NOT NULL,
+PRIMARY KEY (`pk`),
+KEY `varchar_key` (`varchar_key`)
+);
+INSERT INTO `t1` VALUES (11,'m','m'),(12,'j','j'),(13,'z','z'),(14,'a','a'),(15,'',''),(16,'e','e'),(17,'t','t'),(19,'b','b'),(20,'w','w'),(21,'m','m'),(23,'',''),(24,'w','w'),(26,'e','e'),(27,'e','e'),(28,'p','p');
+CREATE TABLE `t2` (
+`varchar_nokey` varchar(1) NOT NULL
+);
+INSERT INTO `t2` VALUES ('v'),('u'),('n'),('l'),('h'),('u'),('n'),('j'),('k'),('e'),('i'),('u'),('n'),('b'),('x'),(''),('q'),('u');
+EXPLAIN EXTENDED SELECT varchar_nokey
+FROM t2
+WHERE ( `varchar_nokey` , `varchar_nokey` ) IN (
+SELECT `varchar_key` , `varchar_nokey`
+FROM t1
+WHERE `varchar_nokey` < 'n' XOR `pk` ) ;
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t2 ALL NULL NULL NULL NULL 18 100.00
+1 PRIMARY t1 ALL varchar_key NULL NULL NULL 15 100.00 Using where; Materialize
+Warnings:
+Note 1003 select `test`.`t2`.`varchar_nokey` AS `varchar_nokey` from `test`.`t2` semi join (`test`.`t1`) where ((`test`.`t1`.`varchar_nokey` = `test`.`t1`.`varchar_key`) and ((`test`.`t1`.`varchar_nokey` < 'n') xor `test`.`t1`.`pk`))
+SELECT varchar_nokey
+FROM t2
+WHERE ( `varchar_nokey` , `varchar_nokey` ) IN (
+SELECT `varchar_key` , `varchar_nokey`
+FROM t1
+WHERE `varchar_nokey` < 'n' XOR `pk` ) ;
+varchar_nokey
+DROP TABLE t1, t2;
+# End of the test for bug#45174.
=== modified file 'mysql-test/r/subselect_sj_jcl6.result'
--- a/mysql-test/r/subselect_sj_jcl6.result 2010-03-07 15:41:45 +0000
+++ b/mysql-test/r/subselect_sj_jcl6.result 2010-03-13 20:04:52 +0000
@@ -876,6 +876,42 @@
DROP VIEW v2, v3;
# End of Bug#49198
#
+# Bug#45174: Incorrectly applied equality propagation caused wrong
+# result on a query with a materialized semi-join.
+#
+CREATE TABLE `t1` (
+`pk` int(11) NOT NULL AUTO_INCREMENT,
+`varchar_key` varchar(1) NOT NULL,
+`varchar_nokey` varchar(1) NOT NULL,
+PRIMARY KEY (`pk`),
+KEY `varchar_key` (`varchar_key`)
+);
+INSERT INTO `t1` VALUES (11,'m','m'),(12,'j','j'),(13,'z','z'),(14,'a','a'),(15,'',''),(16,'e','e'),(17,'t','t'),(19,'b','b'),(20,'w','w'),(21,'m','m'),(23,'',''),(24,'w','w'),(26,'e','e'),(27,'e','e'),(28,'p','p');
+CREATE TABLE `t2` (
+`varchar_nokey` varchar(1) NOT NULL
+);
+INSERT INTO `t2` VALUES ('v'),('u'),('n'),('l'),('h'),('u'),('n'),('j'),('k'),('e'),('i'),('u'),('n'),('b'),('x'),(''),('q'),('u');
+EXPLAIN EXTENDED SELECT varchar_nokey
+FROM t2
+WHERE ( `varchar_nokey` , `varchar_nokey` ) IN (
+SELECT `varchar_key` , `varchar_nokey`
+FROM t1
+WHERE `varchar_nokey` < 'n' XOR `pk` ) ;
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t2 ALL NULL NULL NULL NULL 18 100.00
+1 PRIMARY t1 ALL varchar_key NULL NULL NULL 15 100.00 Using where; Materialize
+Warnings:
+Note 1003 select `test`.`t2`.`varchar_nokey` AS `varchar_nokey` from `test`.`t2` semi join (`test`.`t1`) where ((`test`.`t1`.`varchar_nokey` = `test`.`t1`.`varchar_key`) and ((`test`.`t1`.`varchar_nokey` < 'n') xor `test`.`t1`.`pk`))
+SELECT varchar_nokey
+FROM t2
+WHERE ( `varchar_nokey` , `varchar_nokey` ) IN (
+SELECT `varchar_key` , `varchar_nokey`
+FROM t1
+WHERE `varchar_nokey` < 'n' XOR `pk` ) ;
+varchar_nokey
+DROP TABLE t1, t2;
+# End of the test for bug#45174.
+#
# BUG#49129: Wrong result with IN-subquery with join_cache_level=6 and firstmatch=off
#
CREATE TABLE t0 (a INT);
=== modified file 'mysql-test/t/subselect_mat.test'
--- a/mysql-test/t/subselect_mat.test 2010-01-17 14:51:10 +0000
+++ b/mysql-test/t/subselect_mat.test 2010-03-13 20:04:52 +0000
@@ -889,3 +889,19 @@
SELECT pk FROM t1 WHERE (b,c,d) IN (SELECT b,c,d FROM t2 WHERE pk > 0);
DROP TABLE t1, t2;
+--echo #
+--echo # BUG#50019: Wrong result for IN-subquery with materialization
+--echo #
+create table t1(i int);
+insert into t1 values (1), (2), (3), (4), (5), (6), (7), (8), (9), (10);
+create table t2(i int);
+insert into t2 values (1), (2), (3), (4), (5), (6), (7), (8), (9), (10);
+create table t3(i int);
+insert into t3 values (1), (2), (3), (4), (5), (6), (7), (8), (9), (10);
+select * from t1 where t1.i in (select t2.i from t2 join t3 where t2.i + t3.i = 5);
+set @save_optimizer_switch=@@optimizer_switch;
+set session optimizer_switch='materialization=off';
+select * from t1 where t1.i in (select t2.i from t2 join t3 where t2.i + t3.i = 5);
+set session optimizer_switch=@save_optimizer_switch;
+drop table t1, t2, t3;
+
=== modified file 'mysql-test/t/subselect_sj.test'
--- a/mysql-test/t/subselect_sj.test 2010-02-24 11:33:42 +0000
+++ b/mysql-test/t/subselect_sj.test 2010-03-13 20:04:52 +0000
@@ -770,3 +770,42 @@
DROP VIEW v2, v3;
--echo # End of Bug#49198
+
+--echo #
+--echo # Bug#45174: Incorrectly applied equality propagation caused wrong
+--echo # result on a query with a materialized semi-join.
+--echo #
+
+CREATE TABLE `t1` (
+ `pk` int(11) NOT NULL AUTO_INCREMENT,
+ `varchar_key` varchar(1) NOT NULL,
+ `varchar_nokey` varchar(1) NOT NULL,
+ PRIMARY KEY (`pk`),
+ KEY `varchar_key` (`varchar_key`)
+);
+
+INSERT INTO `t1` VALUES (11,'m','m'),(12,'j','j'),(13,'z','z'),(14,'a','a'),(15,'',''),(16,'e','e'),(17,'t','t'),(19,'b','b'),(20,'w','w'),(21,'m','m'),(23,'',''),(24,'w','w'),(26,'e','e'),(27,'e','e'),(28,'p','p');
+
+CREATE TABLE `t2` (
+ `varchar_nokey` varchar(1) NOT NULL
+);
+
+INSERT INTO `t2` VALUES ('v'),('u'),('n'),('l'),('h'),('u'),('n'),('j'),('k'),('e'),('i'),('u'),('n'),('b'),('x'),(''),('q'),('u');
+
+EXPLAIN EXTENDED SELECT varchar_nokey
+FROM t2
+WHERE ( `varchar_nokey` , `varchar_nokey` ) IN (
+SELECT `varchar_key` , `varchar_nokey`
+FROM t1
+WHERE `varchar_nokey` < 'n' XOR `pk` ) ;
+
+SELECT varchar_nokey
+FROM t2
+WHERE ( `varchar_nokey` , `varchar_nokey` ) IN (
+SELECT `varchar_key` , `varchar_nokey`
+FROM t1
+WHERE `varchar_nokey` < 'n' XOR `pk` ) ;
+
+DROP TABLE t1, t2;
+
+--echo # End of the test for bug#45174.
=== modified file 'sql/item.cc'
--- a/sql/item.cc 2010-02-24 11:33:42 +0000
+++ b/sql/item.cc 2010-03-13 20:04:52 +0000
@@ -4761,7 +4761,7 @@
return this;
return const_item;
}
- Item_field *subst= item_equal->get_first();
+ Item_field *subst= item_equal->get_first(this);
if (subst && field->table != subst->field->table && !field->eq(subst->field))
return subst;
}
=== modified file 'sql/item_cmpfunc.cc'
--- a/sql/item_cmpfunc.cc 2010-02-17 10:05:27 +0000
+++ b/sql/item_cmpfunc.cc 2010-03-13 20:04:52 +0000
@@ -5369,7 +5369,7 @@
void Item_equal::fix_length_and_dec()
{
- Item *item= get_first();
+ Item *item= get_first(NULL);
eval_item= cmp_item::get_comparator(item->result_type(),
item->collation.collation);
}
@@ -5432,3 +5432,128 @@
str->append(')');
}
+
+/*
+ @brief Get the first equal field of multiple equality.
+ @param[in] field the field to get equal field to
+
+ @details Get the first field of multiple equality that is equal to the
+ given field. In order to make semi-join materialization strategy work
+ correctly we can't propagate equal fields from upper select to a
+ materialized semi-join.
+ Thus the fields is returned according to following rules:
+
+ 1) If the given field belongs to a semi-join then the first field in
+ multiple equality which belong to the same semi-join is returned.
+ Otherwise NULL is returned.
+ 2) If the given field doesn't belong to a semi-join then
+ the first field in the multiple equality that doesn't belong to any
+ semi-join is returned.
+ If all fields in the equality are belong to semi-join(s) then NULL
+ is returned.
+ 3) If no field is given then the first field in the multiple equality
+ is returned without regarding whether it belongs to a semi-join or not.
+
+ @retval Found first field in the multiple equality.
+ @retval 0 if no field found.
+*/
+
+Item_field* Item_equal::get_first(Item_field *field)
+{
+ List_iterator<Item_field> it(fields);
+ Item_field *item;
+ JOIN_TAB *field_tab;
+
+ if (!field)
+ return fields.head();
+
+ /*
+ Of all equal fields, return the first one we can use. Normally, this is the
+ field which belongs to the table that is the first in the join order.
+
+ There is one exception to this: When semi-join materialization strategy is
+ used, and the given field belongs to a table within the semi-join nest, we
+ must pick the first field in the semi-join nest.
+
+ Example: suppose we have a join order:
+
+ ot1 ot2 SJ-Mat(it1 it2 it3) ot3
+
+ and equality ot2.col = it1.col = it2.col
+ If we're looking for best substitute for 'it2.col', we should pick it1.col
+ and not ot2.col.
+
+ eliminate_item_equal() also has code that deals with equality substitution
+ in presense of SJM nests.
+ */
+
+ field_tab= field->field->table->reginfo.join_tab;
+
+ TABLE_LIST *emb_nest= field->field->table->pos_in_table_list->embedding;
+
+ if (emb_nest && emb_nest->sj_mat_info && emb_nest->sj_mat_info->is_used)
+ {
+ /*
+ It's a field from an materialized semi-join. We can substitute it only
+ for a field from the same semi-join.
+ */
+ JOIN_TAB *first;
+ JOIN *join= field_tab->join;
+ uint tab_idx= field_tab - field_tab->join->join_tab;
+
+ /* Find the first table of this semi-join nest */
+ for (uint i= tab_idx; i != join->const_tables; i--)
+ {
+ if (join->join_tab[i].table->map & emb_nest->sj_inner_tables)
+ first= join->join_tab + i;
+ else
+ // Found first tab that doesn't belong to current SJ.
+ break;
+ }
+ /* Find an item to substitute for. */
+ while ((item= it++))
+ {
+ if (item->field->table->reginfo.join_tab >= first)
+ {
+ /*
+ If we found given field then return NULL to avoid unnecessary
+ substitution.
+ */
+ return (item != field) ? item : NULL;
+ }
+ }
+ }
+ else
+ {
+#if 0
+ /*
+ The field is not in SJ-Materialization nest. We must return the first
+ field that's not embedded in a SJ-Materialization nest.
+ Example: suppose we have a join order:
+
+ SJ-Mat(it1 it2) ot1 ot2
+
+ and equality ot2.col = ot1.col = it2.col
+ If we're looking for best substitute for 'ot2.col', we should pick ot1.col
+ and not it2.col, because when we run a join between ot1 and ot2
+ execution of SJ-Mat(...) has already finished and we can't rely on the
+ value of it*.*.
+ psergey-fix-fix: ^^ THAT IS INCORRECT ^^. Pick the first, whatever that
+ is.
+ */
+ while ((item= it++))
+ {
+ TABLE_LIST *emb_nest= item->field->table->pos_in_table_list->embedding;
+ if (!emb_nest || !emb_nest->sj_mat_info ||
+ !emb_nest->sj_mat_info->is_used)
+ {
+ return item;
+ }
+ }
+#endif
+ return fields.head();
+ }
+ // Shouldn't get here.
+ DBUG_ASSERT(0);
+ return NULL;
+}
=== modified file 'sql/item_cmpfunc.h'
--- a/sql/item_cmpfunc.h 2010-02-17 10:05:27 +0000
+++ b/sql/item_cmpfunc.h 2010-03-13 20:04:52 +0000
@@ -1589,7 +1589,7 @@
void add(Item_field *f);
uint members();
bool contains(Field *field);
- Item_field* get_first() { return fields.head(); }
+ Item_field* get_first(Item_field *field);
uint n_fields() { return fields.elements; }
void merge(Item_equal *item);
void update_const();
=== modified file 'sql/opt_subselect.cc'
--- a/sql/opt_subselect.cc 2010-03-07 15:41:45 +0000
+++ b/sql/opt_subselect.cc 2010-03-13 20:04:52 +0000
@@ -2159,6 +2159,8 @@
if (tablenr != first)
pos->sj_strategy= SJ_OPT_NONE;
remaining_tables |= s->table->map;
+ //s->sj_strategy= pos->sj_strategy;
+ join->join_tab[first].sj_strategy= join->best_positions[first].sj_strategy;
}
}
=== modified file 'sql/sql_select.cc'
--- a/sql/sql_select.cc 2010-03-07 15:41:45 +0000
+++ b/sql/sql_select.cc 2010-03-13 20:04:52 +0000
@@ -8867,6 +8867,15 @@
}
+static TABLE_LIST* embedding_sjm(Item_field *item_field)
+{
+ TABLE_LIST *nest= item_field->field->table->pos_in_table_list->embedding;
+ if (nest && nest->sj_mat_info && nest->sj_mat_info->is_used)
+ return nest;
+ else
+ return NULL;
+}
+
/**
Generate minimal set of simple equalities equivalent to a multiple equality.
@@ -8900,6 +8909,23 @@
So only t1.a=t3.c should be left in the lower level.
If cond is equal to 0, then not more then one equality is generated
and a pointer to it is returned as the result of the function.
+
+ Equality substutution and semi-join materialization nests:
+
+ In case join order looks like this:
+
+ outer_tbl1 outer_tbl2 SJM (inner_tbl1 inner_tbl2) outer_tbl3
+
+ We must not construct equalities like
+
+ outer_tbl1.col = inner_tbl1.col
+
+ because they would get attached to inner_tbl1 and will get evaluated
+ during materialization phase, when we don't have current value of
+ outer_tbl1.col.
+
+ Item_equal::get_first() also takes similar measures for dealing with
+ equality substitution in presense of SJM nests.
@return
- The condition with generated simple equalities or
@@ -8917,18 +8943,44 @@
Item *item_const= item_equal->get_const();
Item_equal_iterator it(*item_equal);
Item *head;
+ TABLE_LIST *current_sjm= NULL;
+ Item *current_sjm_head= NULL;
+
+ /*
+ Pick the "head" item: the constant one or the first in the join order
+ that's not inside some SJM nest.
+ */
if (item_const)
head= item_const;
else
{
- head= item_equal->get_first();
+ TABLE_LIST *emb_nest;
+ Item_field *item_field;
+ head= item_field= item_equal->get_first(NULL);
it++;
+ if ((emb_nest= embedding_sjm(item_field)))
+ {
+ current_sjm= emb_nest;
+ current_sjm_head= head;
+ }
}
+
Item_field *item_field;
+ /*
+ For each other item, generate "item=head" equality (except the tables that
+ are within SJ-Materialization nests, for those "head" is defined
+ differently)
+ */
while ((item_field= it++))
{
Item_equal *upper= item_field->find_item_equal(upper_levels);
Item_field *item= item_field;
+ TABLE_LIST *field_sjm= embedding_sjm(item_field);
+
+ /*
+ Check if "item_field=head" equality is already guaranteed to be true
+ on upper AND-levels.
+ */
if (upper)
{
if (item_const && upper->get_const())
@@ -8943,65 +8995,29 @@
}
}
}
- if (item == item_field)
+
+ bool produce_equality= test(item == item_field);
+ if (!item_const && field_sjm && field_sjm != current_sjm)
+ {
+ /* Entering an SJM nest */
+ current_sjm_head= item_field;
+ if (!field_sjm->sj_mat_info->is_sj_scan)
+ produce_equality= FALSE;
+ }
+
+ if (produce_equality)
{
if (eq_item)
eq_list.push_back(eq_item);
- /*
- item_field might refer to a table that is within a semi-join
- materialization nest. In that case, the join order looks like this:
-
- outer_tbl1 outer_tbl2 SJM (inner_tbl1 inner_tbl2) outer_tbl3
-
- We must not construct equalities like
-
- outer_tbl1.col = inner_tbl1.col
-
- because they would get attached to inner_tbl1 and will get evaluated
- during materialization phase, when we don't have current value of
- outer_tbl1.col.
- */
- TABLE_LIST *emb_nest=
- item_field->field->table->pos_in_table_list->embedding;
- if (!item_const && emb_nest && emb_nest->sj_mat_info &&
- emb_nest->sj_mat_info->is_used)
- {
- /*
- Find the first equal expression that refers to a table that is
- within the semijoin nest. If we can't find it, do nothing
- */
- List_iterator<Item_field> fit(item_equal->fields);
- Item_field *head_in_sjm;
- bool found= FALSE;
- while ((head_in_sjm= fit++))
- {
- if (head_in_sjm->used_tables() & emb_nest->sj_inner_tables)
- {
- if (head_in_sjm == item_field)
- {
- /* This is the first table inside the semi-join*/
- eq_item= new Item_func_eq(item_field, head);
- /* Tell make_cond_for_table don't use this. */
- eq_item->marker=3;
- }
- else
- {
- eq_item= new Item_func_eq(item_field, head_in_sjm);
- found= TRUE;
- }
- break;
- }
- }
- if (!found)
- continue;
- }
- else
- eq_item= new Item_func_eq(item_field, head);
+
+ eq_item= new Item_func_eq(item_field, current_sjm? current_sjm_head: head);
+
if (!eq_item)
return 0;
eq_item->set_cmp_func();
eq_item->quick_fix_field();
}
+ current_sjm= field_sjm;
}
if (!cond && !eq_list.head())
=== modified file 'sql/sql_select.h'
--- a/sql/sql_select.h 2010-03-05 18:54:48 +0000
+++ b/sql/sql_select.h 2010-03-13 20:04:52 +0000
@@ -279,6 +279,13 @@
/* NestedOuterJoins: Bitmap of nested joins this table is part of */
nested_join_map embedding_map;
+ /*
+ Semi-join strategy to be used for this join table. This is a copy of
+ POSITION::sj_strategy field. This field is set up by the
+ fix_semijion_strategies_for_picked_join_order.
+ */
+ uint sj_strategy;
+
void cleanup();
inline bool is_using_loose_index_scan()
{
13 Mar '10
"Adam M. Dutko" <dutko.adam(a)gmail.com> writes:
> I've packaged RPMs before if you'd like me to take a stab at it. Do you
> have an existing spec file?
Any help would be highly appreciated, thanks!
I guess we just need to coordinate to not duplicate efforts.
The spec file is in this repository on Launchpad:
lp:~ourdelta-core/ourdelta/ourdelta-mariadb-5.2
The file in that repository is
bakery/mysql51-ourdelta-centos.spec
Maybe we should make a separate copy of that for 5.2, not sure.
I guess at this point the main issue is to handle the dependency headers
(provides:, replaces:, depends:) correctly. I'm pretty blank on that
area for .rpm.
I'll start looking at the .deb stuff following Arjen's suggestions.
- Kristian.
[Maria-developers] Updated (by Knielsen): Update packaging scripts for MariaDB 5.2 (88)
by worklog-noreply@askmonty.org 13 Mar '10
by worklog-noreply@askmonty.org 13 Mar '10
13 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Update packaging scripts for MariaDB 5.2
CREATION DATE..: Sat, 27 Feb 2010, 16:39
SUPERVISOR.....: Knielsen
IMPLEMENTOR....: Knielsen
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 88 (http://askmonty.org/worklog/?tid=88)
VERSION........: Server-5.2
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 30 (hours remain)
ORIG. ESTIMATE.: 30
PROGRESS NOTES:
-=-=(Knielsen - Sat, 13 Mar 2010, 08:14)=-=-
Low Level Design modified.
--- /tmp/wklog.88.old.22266 2010-03-13 08:14:47.000000000 +0000
+++ /tmp/wklog.88.new.22266 2010-03-13 08:14:47.000000000 +0000
@@ -1 +1,11 @@
+Some of the tasks that need to be done.
+
+ - Setup a 5.2 version of .deb files and .rpm spec file.
+
+ - Rename 5.1->5.2 in relevant places.
+
+ - Fix provides: / replaces: and similar to ensure proper upgrade from mysql
+ 5.0/5.1 and mariadb 5.1.
+
+ - Setup Buildbot upgrade test from MariaDB 5.1.42
-=-=(Guest - Sat, 13 Mar 2010, 08:12)=-=-
Category updated.
--- /tmp/wklog.88.old.22167 2010-03-13 08:12:01.000000000 +0000
+++ /tmp/wklog.88.new.22167 2010-03-13 08:12:01.000000000 +0000
@@ -1 +1 @@
-Server-RawIdeaBin
+Server-Sprint
DESCRIPTION:
The packaging scripts need to be updated to work for MariaDB 5.2
Currently, 5.2 package builds fail in Buildbot. The .debs are missing a
debian-5.2 subdirectory.
The .rpm packages also need to be checked.
Buildbot needs to be updated to do the new upgrade tests (mariadb-5.1 ->
mariadb 5.2)
LOW-LEVEL DESIGN:
Some of the tasks that need to be done.
- Setup a 5.2 version of .deb files and .rpm spec file.
- Rename 5.1->5.2 in relevant places.
- Fix provides: / replaces: and similar to ensure proper upgrade from mysql
5.0/5.1 and mariadb 5.1.
- Setup Buildbot upgrade test from MariaDB 5.1.42
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] Updated (by Guest): Update packaging scripts for MariaDB 5.2 (88)
by worklog-noreply@askmonty.org 13 Mar '10
by worklog-noreply@askmonty.org 13 Mar '10
13 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Update packaging scripts for MariaDB 5.2
CREATION DATE..: Sat, 27 Feb 2010, 16:39
SUPERVISOR.....: Knielsen
IMPLEMENTOR....: Knielsen
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 88 (http://askmonty.org/worklog/?tid=88)
VERSION........: Server-5.2
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 30 (hours remain)
ORIG. ESTIMATE.: 30
PROGRESS NOTES:
-=-=(Guest - Sat, 13 Mar 2010, 08:12)=-=-
Category updated.
--- /tmp/wklog.88.old.22167 2010-03-13 08:12:01.000000000 +0000
+++ /tmp/wklog.88.new.22167 2010-03-13 08:12:01.000000000 +0000
@@ -1 +1 @@
-Server-RawIdeaBin
+Server-Sprint
DESCRIPTION:
The packaging scripts need to be updated to work for MariaDB 5.2
Currently, 5.2 package builds fail in Buildbot. The .debs are missing a
debian-5.2 subdirectory.
The .rpm packages also need to be checked.
Buildbot needs to be updated to do the new upgrade tests (mariadb-5.1 ->
mariadb 5.2)
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] [Branch ~maria-captains/maria/5.1] Rev 2832: 1. don't crash on failing to load a plugin with newer MYSQL_PLUGIN_INTERFACE_VERSION
by noreply@launchpad.net 12 Mar '10
by noreply@launchpad.net 12 Mar '10
12 Mar '10
------------------------------------------------------------
revno: 2832
committer: Sergei Golubchik <sergii(a)pisem.net>
branch nick: maria-5.1
timestamp: Fri 2010-03-12 20:05:21 +0100
message:
1. don't crash on failing to load a plugin with newer MYSQL_PLUGIN_INTERFACE_VERSION
2. don't copy st_mysql_plugin structure unnecessary (sizeof hasn't changed)
modified:
sql/sql_plugin.cc
sql/sql_plugin.h
--
lp:maria
https://code.launchpad.net/~maria-captains/maria/5.1
Your team Maria developers is subscribed to branch lp:maria.
To unsubscribe from this branch go to https://code.launchpad.net/~maria-captains/maria/5.1/+edit-subscription.
[Maria-developers] [Branch ~maria-captains/maria/5.1] Rev 2831: Fix myisam checksum patch to check for HA_OPTION_CHECKSUM after it was set, not before
by noreply@launchpad.net 12 Mar '10
by noreply@launchpad.net 12 Mar '10
12 Mar '10
------------------------------------------------------------
revno: 2831
committer: Sergei Golubchik <sergii(a)pisem.net>
branch nick: maria-5.1
timestamp: Fri 2010-03-12 20:03:37 +0100
message:
Fix myisam checksum patch to check for HA_OPTION_CHECKSUM after it was set, not before
modified:
storage/myisam/mi_create.c
--
lp:maria
https://code.launchpad.net/~maria-captains/maria/5.1
Your team Maria developers is subscribed to branch lp:maria.
To unsubscribe from this branch go to https://code.launchpad.net/~maria-captains/maria/5.1/+edit-subscription.
[Maria-developers] Updated (by Timour): Subqueries: cost-based choice between Materialization and IN->EXISTS transformation (89)
by worklog-noreply@askmonty.org 12 Mar '10
by worklog-noreply@askmonty.org 12 Mar '10
12 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Subqueries: cost-based choice between Materialization and IN->EXISTS
transformation
CREATION DATE..: Sun, 28 Feb 2010, 13:39
SUPERVISOR.....: Monty
IMPLEMENTOR....: Timour
COPIES TO......: Igor, Psergey, Timour
CATEGORY.......: Server-Sprint
TASK ID........: 89 (http://askmonty.org/worklog/?tid=89)
VERSION........: Server-5.3
STATUS.........: In-Progress
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Timour - Fri, 12 Mar 2010, 09:17)=-=-
Status updated.
--- /tmp/wklog.89.old.13018 2010-03-12 09:17:25.000000000 +0000
+++ /tmp/wklog.89.new.13018 2010-03-12 09:17:25.000000000 +0000
@@ -1 +1 @@
-Assigned
+In-Progress
-=-=(Igor - Wed, 10 Mar 2010, 21:48)=-=-
Category updated.
--- /tmp/wklog.89.old.778 2010-03-10 21:48:08.000000000 +0000
+++ /tmp/wklog.89.new.778 2010-03-10 21:48:08.000000000 +0000
@@ -1 +1 @@
-Server-RawIdeaBin
+Server-Sprint
-=-=(Igor - Wed, 10 Mar 2010, 21:48)=-=-
Status updated.
--- /tmp/wklog.89.old.778 2010-03-10 21:48:08.000000000 +0000
+++ /tmp/wklog.89.new.778 2010-03-10 21:48:08.000000000 +0000
@@ -1 +1 @@
-Un-Assigned
+Assigned
-=-=(Psergey - Sun, 28 Feb 2010, 16:34)=-=-
High-Level Specification modified.
--- /tmp/wklog.89.old.24497 2010-02-28 16:34:05.000000000 +0000
+++ /tmp/wklog.89.new.24497 2010-02-28 16:34:05.000000000 +0000
@@ -36,8 +36,8 @@
So, we'll need to compute both exists_select_cost and materialization_cost.
-Difficulty with computing the two costs
----------------------------------------
+Difficulty with the need to run select optimization two times
+-------------------------------------------------------------
The problem is in this scenario:
1. We compute materialization_cost by running optimization for the original
subquery select.
@@ -46,4 +46,10 @@
3. Then we find that cost #1 is less and want to execute the materialization
strategy.
+The problem is that once one injects "oe=ie", it can trigger some optimization
+steps that are not possible to undo.
+- Example1: outer->inner join conversion
+- non-Example: according to Igor, "oe=ie" won't participate in equality propagation.
+- ... what else ?
+
-=-=(Psergey - Sun, 28 Feb 2010, 16:08)=-=-
High-Level Specification modified.
--- /tmp/wklog.89.old.24098 2010-02-28 16:08:56.000000000 +0000
+++ /tmp/wklog.89.new.24098 2010-02-28 16:08:56.000000000 +0000
@@ -36,3 +36,14 @@
So, we'll need to compute both exists_select_cost and materialization_cost.
+Difficulty with computing the two costs
+---------------------------------------
+The problem is in this scenario:
+1. We compute materialization_cost by running optimization for the original
+ subquery select.
+2. We compute exists_select_cost by running optimization for the subquery's
+ select with "oe=ie" injected into WHERE
+3. Then we find that cost #1 is less and want to execute the materialization
+ strategy.
+
+
-=-=(Psergey - Sun, 28 Feb 2010, 15:57)=-=-
High-Level Specification modified.
--- /tmp/wklog.89.old.24045 2010-02-28 15:57:49.000000000 +0000
+++ /tmp/wklog.89.new.24045 2010-02-28 15:57:49.000000000 +0000
@@ -1 +1,38 @@
+Why need two optimizations
+--------------------------
+Consider a query with subquery:
+
+ SELECT
+ oe IN (SELECT ie FROM inner_tbl WHERE inner_cond)
+ FROM outer_tbl
+ WHERE outer_cond
+
+If we use Materialization strategy, the costs will be
+
+ cost of accessing outer_tbl +
+ materialization_cost +
+ #records(outer_tbl w/o outer_cond) * lookup_cost
+
+where
+
+ materialization_cost=
+ cost of executing the (SELECT ie FROM inner_tbl WHERE inner_cond)
+
+On the other hand, for IN->EXISTS strategy, the subquery will be rewritten into
+
+ SELECT
+ EXISTS (SELECT 1 FROM inner_tbl WHERE inner_cond AND oe=ie)
+ FROM outer_tbl
+ WHERE outer_cond
+
+and the costs will be
+
+ cost of accessing outer_tbl +
+ #records(outer_tbl w/o outer_cond) * exists_select_cost
+
+where
+ exists_select_cost=
+ cost of executing the (SELECT 1 FROM inner_tbl WHERE inner_cond AND oe=ie)
+
+So, we'll need to compute both exists_select_cost and materialization_cost.
-=-=(Psergey - Sun, 28 Feb 2010, 15:07)=-=-
Dependency created: 91 now depends on 89
DESCRIPTION:
For uncorrelated IN subqueries that can't be converted to semi-joins it is
necessary to make a cost-based choice between IN->EXISTS and Materialization
strategies.
Both strategies handle two cases:
1. A simple case w/o NULLs handling
2. Handling NULLs.
This WL is about making a cost-based decision for case #1.
HIGH-LEVEL SPECIFICATION:
Why need two optimizations
--------------------------
Consider a query with subquery:
SELECT
oe IN (SELECT ie FROM inner_tbl WHERE inner_cond)
FROM outer_tbl
WHERE outer_cond
If we use Materialization strategy, the costs will be
cost of accessing outer_tbl +
materialization_cost +
#records(outer_tbl w/o outer_cond) * lookup_cost
where
materialization_cost=
cost of executing the (SELECT ie FROM inner_tbl WHERE inner_cond)
On the other hand, for IN->EXISTS strategy, the subquery will be rewritten into
SELECT
EXISTS (SELECT 1 FROM inner_tbl WHERE inner_cond AND oe=ie)
FROM outer_tbl
WHERE outer_cond
and the costs will be
cost of accessing outer_tbl +
#records(outer_tbl w/o outer_cond) * exists_select_cost
where
exists_select_cost=
cost of executing the (SELECT 1 FROM inner_tbl WHERE inner_cond AND oe=ie)
So, we'll need to compute both exists_select_cost and materialization_cost.
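A rough worked illustration (all numbers are assumed for the example, they are not
measurements): suppose accessing outer_tbl costs 100, one execution of the subquery
select costs 500, #records(outer_tbl w/o outer_cond) = 1000, lookup_cost = 0.2 and
exists_select_cost = 2. Then
  Materialization: 100 + 500 + 1000 * 0.2 = 800
  IN->EXISTS: 100 + 1000 * 2 = 2100
so Materialization wins. With only 10 outer records the same formulas give 602 vs 120
and IN->EXISTS wins, which is why the choice has to be cost-based.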
Difficulty with the need to run select optimization two times
-------------------------------------------------------------
The problem is in this scenario:
1. We compute materialization_cost by running optimization for the original
subquery select.
2. We compute exists_select_cost by running optimization for the subquery's
select with "oe=ie" injected into WHERE
3. Then we find that cost #1 is less and want to execute the materialization
strategy.
The problem is that once one injects "oe=ie", it can trigger some optimization
steps that are not possible to undo.
- Example1: outer->inner join conversion
- non-Example: according to Igor, "oe=ie" won't participate in equality propagation.
- ... what else ?
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] Updated (by Timour): Subqueries: cost-based choice between Materialization and IN->EXISTS transformation (89)
by worklog-noreply@askmonty.org 12 Mar '10
by worklog-noreply@askmonty.org 12 Mar '10
12 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Subqueries: cost-based choice between Materialization and IN->EXISTS
transformation
CREATION DATE..: Sun, 28 Feb 2010, 13:39
SUPERVISOR.....: Monty
IMPLEMENTOR....: Timour
COPIES TO......: Igor, Psergey, Timour
CATEGORY.......: Server-Sprint
TASK ID........: 89 (http://askmonty.org/worklog/?tid=89)
VERSION........: Server-5.3
STATUS.........: In-Progress
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Timour - Fri, 12 Mar 2010, 09:17)=-=-
Status updated.
--- /tmp/wklog.89.old.13018 2010-03-12 09:17:25.000000000 +0000
+++ /tmp/wklog.89.new.13018 2010-03-12 09:17:25.000000000 +0000
@@ -1 +1 @@
-Assigned
+In-Progress
-=-=(Igor - Wed, 10 Mar 2010, 21:48)=-=-
Category updated.
--- /tmp/wklog.89.old.778 2010-03-10 21:48:08.000000000 +0000
+++ /tmp/wklog.89.new.778 2010-03-10 21:48:08.000000000 +0000
@@ -1 +1 @@
-Server-RawIdeaBin
+Server-Sprint
-=-=(Igor - Wed, 10 Mar 2010, 21:48)=-=-
Status updated.
--- /tmp/wklog.89.old.778 2010-03-10 21:48:08.000000000 +0000
+++ /tmp/wklog.89.new.778 2010-03-10 21:48:08.000000000 +0000
@@ -1 +1 @@
-Un-Assigned
+Assigned
-=-=(Psergey - Sun, 28 Feb 2010, 16:34)=-=-
High-Level Specification modified.
--- /tmp/wklog.89.old.24497 2010-02-28 16:34:05.000000000 +0000
+++ /tmp/wklog.89.new.24497 2010-02-28 16:34:05.000000000 +0000
@@ -36,8 +36,8 @@
So, we'll need to compute both exists_select_cost and materialization_cost.
-Difficulty with computing the two costs
----------------------------------------
+Difficulty with the need to run select optimization two times
+-------------------------------------------------------------
The problem is in this scenario:
1. We compute materialization_cost by running optimization for the original
subquery select.
@@ -46,4 +46,10 @@
3. Then we find that cost #1 is less and want to execute the materialization
strategy.
+The problem is that once one injects "oe=ie", it can trigger some optimization
+steps that are not possible to undo.
+- Example1: outer->inner join conversion
+- non-Example: according to Igor, "oe=ie" won't participate in equality propagation.
+- ... what else ?
+
-=-=(Psergey - Sun, 28 Feb 2010, 16:08)=-=-
High-Level Specification modified.
--- /tmp/wklog.89.old.24098 2010-02-28 16:08:56.000000000 +0000
+++ /tmp/wklog.89.new.24098 2010-02-28 16:08:56.000000000 +0000
@@ -36,3 +36,14 @@
So, we'll need to compute both exists_select_cost and materialization_cost.
+Difficulty with computing the two costs
+---------------------------------------
+The problem is in this scenario:
+1. We compute materialization_cost by running optimization for the original
+ subquery select.
+2. We compute exists_select_cost by running optimization for the subquery's
+ select with "oe=ie" injected into WHERE
+3. Then we find that cost #1 is less and want to execute the materialization
+ strategy.
+
+
-=-=(Psergey - Sun, 28 Feb 2010, 15:57)=-=-
High-Level Specification modified.
--- /tmp/wklog.89.old.24045 2010-02-28 15:57:49.000000000 +0000
+++ /tmp/wklog.89.new.24045 2010-02-28 15:57:49.000000000 +0000
@@ -1 +1,38 @@
+Why need two optimizations
+--------------------------
+Consider a query with subquery:
+
+ SELECT
+ oe IN (SELECT ie FROM inner_tbl WHERE inner_cond)
+ FROM outer_tbl
+ WHERE outer_cond
+
+If we use Materialization strategy, the costs will be
+
+ cost of accessing outer_tbl +
+ materialization_cost +
+ #records(outer_tbl w/o outer_cond) * lookup_cost
+
+where
+
+ materialization_cost=
+ cost of executing the (SELECT ie FROM inner_tbl WHERE inner_cond)
+
+On the other hand, for IN->EXISTS strategy, the subquery will be rewritten into
+
+ SELECT
+ EXISTS (SELECT 1 FROM inner_tbl WHERE inner_cond AND oe=ie)
+ FROM outer_tbl
+ WHERE outer_cond
+
+and the costs will be
+
+ cost of accessing outer_tbl +
+ #records(outer_tbl w/o outer_cond) * exists_select_cost
+
+where
+ exists_select_cost=
+ cost of executing the (SELECT 1 FROM inner_tbl WHERE inner_cond AND oe=ie)
+
+So, we'll need to compute both exists_select_cost and materialization_cost.
-=-=(Psergey - Sun, 28 Feb 2010, 15:07)=-=-
Dependency created: 91 now depends on 89
DESCRIPTION:
For uncorrelated IN subqueries that can't be converted to semi-joins it is
necessary to make a cost-based choice between IN->EXISTS and Materialization
strategies.
Both strategies handle two cases:
1. A simple case w/o NULLs handling
2. Handling NULLs.
This WL is about making a cost-based decision for #1.
HIGH-LEVEL SPECIFICATION:
Why we need two optimizations
-----------------------------
Consider a query with a subquery:

  SELECT
    oe IN (SELECT ie FROM inner_tbl WHERE inner_cond)
  FROM outer_tbl
  WHERE outer_cond

If we use the Materialization strategy, the cost will be

  cost of accessing outer_tbl +
  materialization_cost +
  #records(outer_tbl w/o outer_cond) * lookup_cost

where

  materialization_cost =
    cost of executing the (SELECT ie FROM inner_tbl WHERE inner_cond)

On the other hand, for the IN->EXISTS strategy, the subquery will be rewritten into

  SELECT
    EXISTS (SELECT 1 FROM inner_tbl WHERE inner_cond AND oe=ie)
  FROM outer_tbl
  WHERE outer_cond

and the cost will be

  cost of accessing outer_tbl +
  #records(outer_tbl w/o outer_cond) * exists_select_cost

where

  exists_select_cost =
    cost of executing the (SELECT 1 FROM inner_tbl WHERE inner_cond AND oe=ie)

So we'll need to compute both exists_select_cost and materialization_cost.
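
As a purely illustrative sketch of the comparison the two formulas above imply
(the type and function names below are invented for this note and are not the
server's actual classes), the decision boils down to comparing the two totals:

  // Illustrative only: compare the two cost totals described above.
  // All fields are assumed to be filled in by running the subquery
  // optimizer once without and once with the injected "oe=ie" condition.
  struct SubqueryCosts
  {
    double outer_access_cost;    // cost of accessing outer_tbl
    double outer_records;        // #records(outer_tbl w/o outer_cond)
    double materialization_cost; // one-time cost of materializing the subquery
    double lookup_cost;          // cost of one lookup into the materialized table
    double exists_select_cost;   // cost of one execution of the EXISTS select
  };

  enum subquery_strategy { MATERIALIZATION, IN_TO_EXISTS };

  static subquery_strategy choose_strategy(const SubqueryCosts &c)
  {
    double mat_total=    c.outer_access_cost + c.materialization_cost +
                         c.outer_records * c.lookup_cost;
    double exists_total= c.outer_access_cost +
                         c.outer_records * c.exists_select_cost;
    return mat_total < exists_total ? MATERIALIZATION : IN_TO_EXISTS;
  }

Any tie-breaking rule would do when the two estimates happen to be equal.
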
Difficulty with the need to run select optimization two times
-------------------------------------------------------------
The problem is in this scenario:
1. We compute materialization_cost by running optimization for the original
   subquery select.
2. We compute exists_select_cost by running optimization for the subquery's
   select with "oe=ie" injected into WHERE.
3. Then we find that cost #1 is less and want to execute the materialization
   strategy.
The problem is that once "oe=ie" is injected, it can trigger optimization
steps that are not possible to undo.
- Example 1: outer->inner join conversion
- non-example: according to Igor, "oe=ie" won't participate in equality propagation.
- ... what else?
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Timour): Subquery optimization: Efficient NOT IN execution with NULLs (68)
by worklog-noreply@askmonty.org 12 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Subquery optimization: Efficient NOT IN execution with NULLs
CREATION DATE..: Fri, 27 Nov 2009, 13:22
SUPERVISOR.....: Monty
IMPLEMENTOR....: Timour
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 68 (http://askmonty.org/worklog/?tid=68)
VERSION........: Server-9.x
STATUS.........: In-Progress
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Psergey - Sun, 28 Feb 2010, 14:56)=-=-
Dependency created: 91 now depends on 68
-=-=(Psergey - Sun, 28 Feb 2010, 14:54)=-=-
Dependency deleted: 94 no longer depends on 68
-=-=(Psergey - Sun, 28 Feb 2010, 14:08)=-=-
Dependency created: 94 now depends on 68
-=-=(Guest - Sat, 27 Feb 2010, 10:11)=-=-
Status updated.
No change.
-=-=(Guest - Sat, 27 Feb 2010, 10:11)=-=-
Status updated.
--- /tmp/wklog.68.old.24229 2010-02-27 10:11:57.000000000 +0000
+++ /tmp/wklog.68.new.24229 2010-02-27 10:11:57.000000000 +0000
@@ -1 +1 @@
-Assigned
+In-Progress
-=-=(Timour - Mon, 22 Feb 2010, 17:39)=-=-
High-Level Specification modified.
--- /tmp/wklog.68.old.17116 2010-02-22 17:39:48.000000000 +0200
+++ /tmp/wklog.68.new.17116 2010-02-22 17:39:48.000000000 +0200
@@ -233,6 +233,7 @@
1. If columns a_j1,...,a_jm do not contain null values in the temporary
table at all and v_j1,...,v_jm cannot be null, create for these columns
only one index array (and of course do not create any bitmaps for them).
+[done]
2. Consider the ratio d(a_i)=N'(a_i)/V(a_i), where N'(a_i) is the number
of rows, where a_i is not null and V(a_i) is the number of distinct
@@ -264,6 +265,10 @@
7. If you get a row with nulls in all columns stop filling the temporary
table and return UNKNOWN for any tuple <v1,...,vn>.
+[This is wrong, because if we don't fill the whole temp table, there may
+ be some tuple(s) that would match some outer tuple. In such cases, if we
+ stop filling the temp table, we would miss a TRUE result. Having a partial
+ match doesn't preclude us from having a complete match].
8. [timour]
Consider that due to materialization, we already have a unique index
-=-=(Timour - Tue, 19 Jan 2010, 18:44)=-=-
High-Level Specification modified.
--- /tmp/wklog.68.old.22569 2010-01-19 18:44:01.000000000 +0200
+++ /tmp/wklog.68.new.22569 2010-01-19 18:44:01.000000000 +0200
@@ -132,11 +132,10 @@
if (nonull_key && ! nonull_key->lookup(outer_ref))
return FALSE
- if (nonull_key)
- pq.insert(nonull_key)
for (i = 1; i <= n; i++)
{
+ if (vkey[i] != nonull_key)
vkey[i].lookup(outer_ref)
if (! vkey[i].is_eof())
pq.insert(i)
@@ -167,7 +166,7 @@
/* There cannot be a complete match, as we already checked for one. */
assert(matching_keys.elements < n)
}
- else if (cur_min_key == nonull_key)
+ else if (vkey[cur_min_key] == nonull_key)
{
/*
The non-NULL key has no corresponding NULL index, so we know for
@@ -183,8 +182,10 @@
/*
Check if all null_keys contain a NULL at row 'min_row'. The procedure
internally checks all keys in a special precomputed order. A prior
- procedure determines an optimal order and a mapping
- idx_no -> idx_order (encoded as an array).
+ procedure determines an optimal order and a mapping idx_no -> idx_order
+ (encoded as an array).
+
+ This procedure makes sure not to match the non-NULL column.
*/
if (test_null_row(null_keys, min_row))
return TRUE
@@ -198,6 +199,14 @@
vkey[cur_min_key].next()
if (! vkey[cur_min_key].is_eof())
pq.insert(cur_min_key)
+ else if (vkey[cur_min_key] == nonull_key)
+ {
+ /*
+ If there can't be more matches for the nonull_key, we know for sure
+ there is no match, since there is no possible NULL match.
+ */
+ return FALSE
+ }
if (pq.is_empty())
{
@@ -216,7 +225,6 @@
}
-
3. Directions for improvement
========================================================================
-=-=(Timour - Tue, 19 Jan 2010, 18:29)=-=-
High-Level Specification modified.
--- /tmp/wklog.68.old.21045 2010-01-19 18:29:12.000000000 +0200
+++ /tmp/wklog.68.new.21045 2010-01-19 18:29:12.000000000 +0200
@@ -132,6 +132,8 @@
if (nonull_key && ! nonull_key->lookup(outer_ref))
return FALSE
+ if (nonull_key)
+ pq.insert(nonull_key)
for (i = 1; i <= n; i++)
{
-=-=(Guest - Tue, 19 Jan 2010, 18:15)=-=-
High-Level Specification modified.
--- /tmp/wklog.68.old.19825 2010-01-19 18:15:30.000000000 +0200
+++ /tmp/wklog.68.new.19825 2010-01-19 18:15:30.000000000 +0200
@@ -1,8 +1,16 @@
-This a copy of the initial algorithm proposed by Igor:
-======================================================
+Contents
+========================================================================
-For each left side tuple (v_1,...,v_n) we have to find the following set
-of rowids for the temp table containing N rows as the result of
+1. Initial idea as proposed by Igor
+2. Algorithm for IN execution with partial matching
+3. Directions for improvement
+
+
+1. Initial idea as proposed by Igor
+========================================================================
+
+For each left side tuple (v_1,...,v_n) we have to find the following
+set of rowids for the temp table containing N rows as the result of
materialization of the subquery:
R= INTERSECT (rowid{a_i=v_i} UNION rowid{a_i is null} where i runs
@@ -18,38 +26,198 @@
- it requires minimum memory: not more than N*n bits in total
- search of an element in a set is extremely cheap
-Taken all above into account I could suggest the following algorithm to
-build R:
+Taken all above into account I could suggest the following algorithm
+to build R:
- Using indexes (read about them below) for each column participating in the
- intersection,
- merge ordered sets rowid{a_i=v_i} in the following manner.
+ Using indexes (read about them below) for each column participating
+ in the intersection, merge ordered sets rowid{a_i=v_i} in the
+ following manner.
If a rowid r has been encountered maximum in k sets
-rowid{a_i1=v_i1},...,rowid(a_ik=v_ik),
+ rowid{a_i1=v_i1},...,rowid(a_ik=v_ik),
then it has to be checked against all rowid{a_i=v_i} such that i is
-not in {i1,...,ik}.
+ not in {i1,...,ik}.
As soon as we fail to find r in one of these sets we discard it.
If r has been found in all of them then r belongs to the set R.
-Here we use the property (1): any r from rowid{a_i=v_i} UNION rowid{a_i
-is null} is either
+Here we use the property (1):
+any r from rowid{a_i=v_i} UNION rowid{a_i is null} is either
belongs to rowid{a_i=v_i} or to rowid{a_i is null}. From this we can
-infer that for any r from R
-indexes a_i can be uniquely divided into two groups: one contains
-indexes a_i where r belongs to
-the sets rowid{a_i=v_i}, the other contains indexes a_j such that r
-belongs to rowid{a_j is null}.
-
-Now let's talk how to get elements from rowid{a_i=v_i} in a sorted order
-needed for the merge procedure. We could use BTREE indexes for temp
-table. But they are rather expensive and
-take a lot of memory as the are implemented with RB trees.
+infer that for any r from R indexes a_i can be uniquely divided into
+two groups:
+- one contains indexes a_i where r belongs to the sets rowid{a_i=v_i},
+- the other contains indexes a_j such that r belongs to
+ rowid{a_j is null}.
+
+Now let's talk how to get elements from rowid{a_i=v_i} in a sorted
+order needed for the merge procedure. We could use BTREE indexes for
+temp table. But they are rather expensive and take a lot of memory as
+the are implemented with RB trees.
I would suggest creating for each column from the temporary table just
an array of rowids sorted by the value from column a.
Index lookup in such an array is cheap. It's also rather cheap to check
that the next rowid refers to a row with a different value in column a.
The array can be created on demand.
+2. Algorithm for IN execution with partial matching
+========================================================================
+
+2.1 Below is shown the top-level algorithm to execute an IN predicate
+with partial matching. This algorithm is essentially the implementation
+of Item_subselect:exec().
+
+int lookup_with_null_semantics(outer_ref[], mat_subquery)
+{
+ if (index_lookup(outer_ref, mat_subquery)
+ return TRUE
+ else
+ {
+ /*
+ Check if there is a partial match (UNKNOWN) or no match (NULL).
+ */
+ if (this is the first partial match)
+ {
+ vkey[] = build array of value keys for each NULL-able column
+ of mat_subquery.
+ nkey[] = build a bitmap NULL index for each column of mat_subquery
+ that contains NULLs
+ nonull_key = build a key over all non-NULL columns of mat_subquery
+ }
+ if (partial_match(outer_ref, vkey[], nkey[], nonull_key)
+ return UNKNOWN
+ else
+ return FALSE
+ }
+}
+
+2.2 The implementation of partial matching is as follows
+
+/*
+ Assumptions:
+ - It has already been checked if there is a complete match by a
+ regular index lookup, and the test failed.
+ - It has already been checked if there is a complete NULL row,
+ and if there was we wouldn't call this function. Thus we assume
+ that there is no complete NULL row.
+ - Not all vidx_i are empty, but some can be empty. If all were empty,
+ then the only possibility for a match is a complete NULL row, which
+ we already checked.
+
+ @param outer_ref - the uter (left) IN argument.
+ @param vidx[] - array of value keys
+ Ordered sequences of rowids of the corresponding columns a_i, such
+ that all rowids in idx_i are the ones where column a_i contains some
+ value or NULL. Each idx_i is derived dynamically, for each different
+ left argument of an IN predicate.
+ @param nidx[] - array of NULL keys
+ Bitmpas, one per each column, where a bit is set if the corresponding
+ row has a NULL value for the corresponding column.
+ @nonull_key - the only key over all columns of the materialized subquery
+ that do not contain NULLs
+
+ @returns
+ @retval FALSE if there is no match
+ @retval TRUE if there is a partial match
+*/
+
+Boolean partial_match(outer_ref, vkey[], nkey[], nonull_key)
+{
+ /* Set of the keys (columns) that form a partial match. */
+ Set matching_keys = {}
+ /* A subset of all keys that need to be checked for NULL matches. */
+ Set null_keys = {}
+ Int min_key /* Key that contains the current minimum position. */
+ Int min_row /* Current row number of min_key. */
+ Int cur_min_key, cur_min_row
+ PriorityQueue pq
+
+ if (nonull_key && ! nonull_key->lookup(outer_ref))
+ return FALSE
+
+ for (i = 1; i <= n; i++)
+ {
+ vkey[i].lookup(outer_ref)
+ if (! vkey[i].is_eof())
+ pq.insert(i)
+ }
+ /*
+ Not all value keys are empty, thus we don't have only NULL
+ keys. If we had, the only possible match is a NULL row, and
+ we cheked there is no such row, therefore the result is known
+ to be FALSE.
+ In fact this algorithm makes sense for at least two non-NULL
+ columns.
+ */
+ assert(pq.elements > 1)
+
+ (min_key, min_row) = pq.pop()
+ matching_keys.add(min_key)
+ vkey[min_key].next()
+ if (! vkey[min_key].is_eof())
+ pq.insert(min_key)
+
+ while (TRUE)
+ {
+ (cur_min_key, cur_min_row) = pq.pop()
+
+ if (cur_min_row == min_row)
+ {
+ matching_keys.add(cur_min_key)
+ /* There cannot be a complete match, as we already checked for one. */
+ assert(matching_keys.elements < n)
+ }
+ else if (cur_min_key == nonull_key)
+ {
+ /*
+ The non-NULL key has no corresponding NULL index, so we know for
+ sure that the row 'min_row' is not a match.
+ */
+ (min_key, min_row) = (cur_min_key, cur_min_row)
+ matching_keys = {min_key}
+ }
+ else
+ {
+ assert(cur_min_row > min_row) /* Follows from the use of PQ. */
+ null_keys = set_difference(all keys vkey[], matching_keys)
+ /*
+ Check if all null_keys contain a NULL at row 'min_row'. The procedure
+ internally checks all keys in a special precomputed order. A prior
+ procedure determines an optimal order and a mapping
+ idx_no -> idx_order (encoded as an array).
+ */
+ if (test_null_row(null_keys, min_row))
+ return TRUE
+ else
+ {
+ (min_key, min_row) = (cur_min_key, cur_min_row)
+ matching_keys = {min_key}
+ }
+ }
+
+ vkey[cur_min_key].next()
+ if (! vkey[cur_min_key].is_eof())
+ pq.insert(cur_min_key)
+
+ if (pq.is_empty())
+ {
+ /* Check the last row of the last column in PQ for NULL matches. */
+ null_keys = set_difference(all keys vkey[], matching_keys)
+ if (test_null_row(null_keys, min_row))
+ return TRUE
+ else
+ return FALSE
+ }
+ }
+
+ /* We should never get here. */
+ assert(FALSE)
+ return FALSE
+}
+
+
+
+3. Directions for improvement
+========================================================================
+
Other consideration that may be taken into account:
1. If columns a_j1,...,a_jm do not contain null values in the temporary
-=-=(Timour - Sun, 06 Dec 2009, 14:36)=-=-
High-Level Specification modified.
--- /tmp/wklog.68.old.12919 2009-12-06 14:36:18.000000000 +0200
+++ /tmp/wklog.68.new.12919 2009-12-06 14:36:18.000000000 +0200
@@ -87,3 +87,8 @@
7. If you get a row with nulls in all columns stop filling the temporary
table and return UNKNOWN for any tuple <v1,...,vn>.
+8. [timour]
+ Consider that due to materialization, we already have a unique index
+on all columns <a_1,..., a_n>. We can use the first key part of this index
+over column a_1, instead of the index rowid{a_i=v_i}. Thus we can avoid
+creating the index rowid{a_i=v_i}.
------------------------------------------------------------
-=-=(View All Progress Notes, 16 total)=-=-
http://askmonty.org/worklog/index.pl?tid=68&nolimit=1
DESCRIPTION:
The goal of this task is to implement efficient execution of NOT IN
subquery predicates of the form:
<oe_1,...,oe_n> NOT IN <non_correlated subquery>
when either some oe_i or some subquery result column contains NULLs.
The problem with such predicates is that it is possible to use index
lookups only when neither argument of the predicate contains NULLs.
If some argument contains a NULL, then due to NULL semantics, it
plays the role of a wildcard. If we were to use regular index lookups,
then we would get 'no match' for some outer tuple (thus the predicate
evaluates to FALSE), while the SQL semantics means 'partial match', and
the predicate should evaluate to NULL.
This task implements an efficient algorithm to compute such 'partial
matches', where a NULL matches any value.
HIGH-LEVEL SPECIFICATION:
Contents
========================================================================
1. Initial idea as proposed by Igor
2. Algorithm for IN execution with partial matching
3. Directions for improvement
1. Initial idea as proposed by Igor
========================================================================
For each left side tuple (v_1,...,v_n) we have to find the following
set of rowids for the temp table containing N rows as the result of
materialization of the subquery:
R = INTERSECT (rowid{a_i=v_i} UNION rowid{a_i is null}) where i runs
through all indexes from [1..n] such that v_i is not null.
Bear in mind the following specifics of this intersection:
(1) For each i: rowid{a_i=v_i} and rowid{a_i is null} are disjoint
(2) For each i: rowid{a_i is null} is the same for each tuple,
that is, this set is independent of the left-side tuples.
Due to (2) it makes sense to build rowid{a_i is null} only once.
A good representation for such sets would be bitmaps:
- it requires minimum memory: not more than N*n bits in total
- search of an element in a set is extremely cheap
Taken all above into account I could suggest the following algorithm
to build R:
Using indexes (read about them below) for each column participating
in the intersection, merge ordered sets rowid{a_i=v_i} in the
following manner.
If a rowid r has been encountered maximum in k sets
rowid{a_i1=v_i1},...,rowid(a_ik=v_ik),
then it has to be checked against all rowid{a_i=v_i} such that i is
not in {i1,...,ik}.
As soon as we fail to find r in one of these sets we discard it.
If r has been found in all of them then r belongs to the set R.
Here we use the property (1):
any r from rowid{a_i=v_i} UNION rowid{a_i is null} either
belongs to rowid{a_i=v_i} or to rowid{a_i is null}. From this we can
infer that for any r from R indexes a_i can be uniquely divided into
two groups:
- one contains indexes a_i where r belongs to the sets rowid{a_i=v_i},
- the other contains indexes a_j such that r belongs to
rowid{a_j is null}.
Now let's talk how to get elements from rowid{a_i=v_i} in a sorted
order needed for the merge procedure. We could use BTREE indexes for
temp table. But they are rather expensive and take a lot of memory as
they are implemented with RB trees.
I would suggest creating for each column from the temporary table just
an array of rowids sorted by the value from column a.
Index lookup in such an array is cheap. It's also rather cheap to check
that the next rowid refers to a row with a different value in column a.
The array can be created on demand.
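
A minimal sketch of this per-column structure (toy C++ with invented names,
not the actual server data structures) keeps one value-sorted rowid array plus
one NULL bitmap per column of the temporary table:

  #include <algorithm>
  #include <cstdint>
  #include <utility>
  #include <vector>

  // Toy per-column index: (value, rowid) pairs sorted by value, plus a
  // flag per row marking where the column is NULL. Built on demand from
  // the materialized temp table.
  struct ColumnIndex
  {
    std::vector<std::pair<int64_t, uint32_t>> by_value; // sorted by (value, rowid)
    std::vector<bool> is_null;                          // one flag per temp-table row

    // Rowids with column value == v, in ascending rowid order
    // (this is the ordered set rowid{a_i=v_i} used by the merge).
    std::vector<uint32_t> equal_rowids(int64_t v) const
    {
      std::vector<uint32_t> out;
      auto it= std::lower_bound(by_value.begin(), by_value.end(),
                                std::make_pair(v, uint32_t{0}));
      for (; it != by_value.end() && it->first == v; ++it)
        out.push_back(it->second);
      return out;
    }
  };
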
2. Algorithm for IN execution with partial matching
========================================================================
2.1 Below is shown the top-level algorithm to execute an IN predicate
with partial matching. This algorithm is essentially the implementation
of Item_subselect::exec().
int lookup_with_null_semantics(outer_ref[], mat_subquery)
{
  if (index_lookup(outer_ref, mat_subquery))
    return TRUE
  else
  {
    /*
      Check if there is a partial match (UNKNOWN) or no match (NULL).
    */
    if (this is the first partial match)
    {
      vkey[] = build array of value keys for each NULL-able column
               of mat_subquery.
      nkey[] = build a bitmap NULL index for each column of mat_subquery
               that contains NULLs
      nonull_key = build a key over all non-NULL columns of mat_subquery
    }
    if (partial_match(outer_ref, vkey[], nkey[], nonull_key))
      return UNKNOWN
    else
      return FALSE
  }
}
2.2 The implementation of partial matching is as follows
/*
  Assumptions:
  - It has already been checked if there is a complete match by a
    regular index lookup, and the test failed.
  - It has already been checked if there is a complete NULL row,
    and if there was we wouldn't call this function. Thus we assume
    that there is no complete NULL row.
  - Not all vidx_i are empty, but some can be empty. If all were empty,
    then the only possibility for a match is a complete NULL row, which
    we already checked.

  @param outer_ref - the outer (left) IN argument.
  @param vidx[] - array of value keys
    Ordered sequences of rowids of the corresponding columns a_i, such
    that all rowids in idx_i are the ones where column a_i contains some
    value or NULL. Each idx_i is derived dynamically, for each different
    left argument of an IN predicate.
  @param nidx[] - array of NULL keys
    Bitmaps, one per column, where a bit is set if the corresponding
    row has a NULL value for the corresponding column.
  @param nonull_key - the only key over all columns of the materialized
    subquery that do not contain NULLs

  @returns
    @retval FALSE if there is no match
    @retval TRUE if there is a partial match
*/
Boolean partial_match(outer_ref, vkey[], nkey[], nonull_key)
{
  /* Set of the keys (columns) that form a partial match. */
  Set matching_keys = {}
  /* A subset of all keys that need to be checked for NULL matches. */
  Set null_keys = {}
  Int min_key /* Key that contains the current minimum position. */
  Int min_row /* Current row number of min_key. */
  Int cur_min_key, cur_min_row
  PriorityQueue pq

  if (nonull_key && ! nonull_key->lookup(outer_ref))
    return FALSE

  for (i = 1; i <= n; i++)
  {
    if (vkey[i] != nonull_key)
      vkey[i].lookup(outer_ref)
    if (! vkey[i].is_eof())
      pq.insert(i)
  }
  /*
    Not all value keys are empty, thus we don't have only NULL
    keys. If we had, the only possible match is a NULL row, and
    we checked there is no such row, therefore the result is known
    to be FALSE.
    In fact this algorithm makes sense for at least two non-NULL
    columns.
  */
  assert(pq.elements > 1)

  (min_key, min_row) = pq.pop()
  matching_keys.add(min_key)
  vkey[min_key].next()
  if (! vkey[min_key].is_eof())
    pq.insert(min_key)

  while (TRUE)
  {
    (cur_min_key, cur_min_row) = pq.pop()

    if (cur_min_row == min_row)
    {
      matching_keys.add(cur_min_key)
      /* There cannot be a complete match, as we already checked for one. */
      assert(matching_keys.elements < n)
    }
    else if (vkey[cur_min_key] == nonull_key)
    {
      /*
        The non-NULL key has no corresponding NULL index, so we know for
        sure that the row 'min_row' is not a match.
      */
      (min_key, min_row) = (cur_min_key, cur_min_row)
      matching_keys = {min_key}
    }
    else
    {
      assert(cur_min_row > min_row) /* Follows from the use of PQ. */
      null_keys = set_difference(all keys vkey[], matching_keys)
      /*
        Check if all null_keys contain a NULL at row 'min_row'. The procedure
        internally checks all keys in a special precomputed order. A prior
        procedure determines an optimal order and a mapping idx_no -> idx_order
        (encoded as an array).

        This procedure makes sure not to match the non-NULL column.
      */
      if (test_null_row(null_keys, min_row))
        return TRUE
      else
      {
        (min_key, min_row) = (cur_min_key, cur_min_row)
        matching_keys = {min_key}
      }
    }

    vkey[cur_min_key].next()
    if (! vkey[cur_min_key].is_eof())
      pq.insert(cur_min_key)
    else if (vkey[cur_min_key] == nonull_key)
    {
      /*
        If there can't be more matches for the nonull_key, we know for sure
        there is no match, since there is no possible NULL match.
      */
      return FALSE
    }

    if (pq.is_empty())
    {
      /* Check the last row of the last column in PQ for NULL matches. */
      null_keys = set_difference(all keys vkey[], matching_keys)
      if (test_null_row(null_keys, min_row))
        return TRUE
      else
        return FALSE
    }
  }

  /* We should never get here. */
  assert(FALSE)
  return FALSE
}
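
To illustrate the three-valued result that the algorithm above produces, here
is a hypothetical brute-force reference check (C++17, invented names, not the
indexed algorithm itself): a row that agrees with the outer tuple on every
column where both sides are non-NULL, but involves a NULL somewhere, is only a
partial match, so the IN lookup yields UNKNOWN rather than FALSE, and NOT IN
yields NULL.

  #include <cassert>
  #include <optional>
  #include <vector>

  // Brute-force reference for the lookup semantics only; the algorithm
  // above computes the same answer without scanning every row.
  enum class match3 { F, T, UNKNOWN };

  using Row = std::vector<std::optional<int>>;  // one row of the materialized subquery

  match3 in_lookup(const Row &outer, const std::vector<Row> &table)
  {
    bool partial= false;
    for (const Row &r : table)
    {
      bool complete= true, compatible= true;
      for (size_t i= 0; i < outer.size(); i++)
      {
        if (!r[i] || !outer[i]) { complete= false; continue; } // NULL = wildcard
        if (*r[i] != *outer[i]) { compatible= false; break; }
      }
      if (compatible && complete) return match3::T;  // complete match
      if (compatible) partial= true;                 // partial match via NULLs
    }
    return partial ? match3::UNKNOWN : match3::F;
  }

  int main()
  {
    std::vector<Row> t= { {1, 2}, {3, std::nullopt} };
    assert(in_lookup({1, 2}, t) == match3::T);        // exact match
    assert(in_lookup({3, 7}, t) == match3::UNKNOWN);  // second column is a NULL wildcard
    assert(in_lookup({5, 5}, t) == match3::F);        // no row is compatible
    return 0;
  }
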
3. Directions for improvement
========================================================================
Other considerations that may be taken into account:
1. If columns a_j1,...,a_jm do not contain null values in the temporary
table at all and v_j1,...,v_jm cannot be null, create for these columns
only one index array (and of course do not create any bitmaps for them).
[done]
2. Consider the ratio d(a_i)=N'(a_i)/V(a_i), where N'(a_i) is the number
of rows in which a_i is not null and V(a_i) is the number of distinct
values for a_i excluding nulls.
If d(a_i) is close to N'(a_i) then do not create any index array: check
whether there is a match running through the records that have been
filtered in. Anyway if d(a_i) is close to N'(a_i) then the intersection
with rowid{a_i=v_i} will not reduce the number of remaining rowids
significantly.
In other words, if V(a_i) exceeds some threshold there is no sense to
create an index for a_i.
If additionally N-N'(a_i) is small do not create a bitmap for this
column either.
3. If for a column a_i d(a_i) is not close to N'(a_i), but N-N'(a_i) is
small a sorted array of rowids from the set rowid{a_i is null} can be
used instead of a bitmap.
4. We always have a match if R0= INTERSECT rowid{a_i is null} is not
empty. Here i runs through all indexes from [1..n] such that v_i is not
null. For a given subset of columns this fact has to be checked only
once. It can be easily done with bitmap intersection (see the sketch after this list).
5. If v1,...,vn can never be null, then indexes (sorted arrays) can be
created only for rows with nulls.
6. If v1,...,vn can never be null and the number of rows with nulls is
small, do not create indexes and do not create bitmaps.
7. If you get a row with nulls in all columns stop filling the temporary
table and return UNKNOWN for any tuple <v1,...,vn>.
[This is wrong, because if we don't fill the whole temp table, there may
be some tuple(s) that would match some outer tuple. In such cases, if we
stop filling the temp table, we would miss a TRUE result. Having a partial
match doesn't preclude us from having a complete match].
8. [timour]
Consider that due to materialization, we already have a unique index
on all columns <a_1,..., a_n>. We can use the first key part of this index
over column a_1, instead of the index rowid{a_i=v_i}. Thus we can avoid
creating the index rowid{a_i=v_i}.
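
A sketch of the bitmap intersection mentioned in item 4 (toy C++ with an
invented helper; word-packed bitmaps of equal length, one bit per row of the
temporary table):

  #include <cstdint>
  #include <vector>

  using Bitmap = std::vector<uint64_t>;  // one bit per temp-table row, word-packed

  // Intersect the NULL bitmaps of the columns a_i for which the outer
  // value v_i is not NULL. A non-empty result means some row is NULL in
  // all of those columns, so the outer tuple has at least a partial match.
  bool null_row_exists(const std::vector<Bitmap> &null_bitmaps)
  {
    if (null_bitmaps.empty())
      return false;
    Bitmap acc= null_bitmaps[0];
    for (size_t i= 1; i < null_bitmaps.size(); i++)
      for (size_t w= 0; w < acc.size(); w++)
        acc[w]&= null_bitmaps[i][w];
    for (size_t w= 0; w < acc.size(); w++)
      if (acc[w])
        return true;
    return false;
  }
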
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Timour): Subquery optimization: Efficient NOT IN execution with NULLs (68)
by worklog-noreply@askmonty.org 12 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Subquery optimization: Efficient NOT IN execution with NULLs
CREATION DATE..: Fri, 27 Nov 2009, 13:22
SUPERVISOR.....: Monty
IMPLEMENTOR....: Timour
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 68 (http://askmonty.org/worklog/?tid=68)
VERSION........: Server-9.x
STATUS.........: In-Progress
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Psergey - Sun, 28 Feb 2010, 14:56)=-=-
Dependency created: 91 now depends on 68
-=-=(Psergey - Sun, 28 Feb 2010, 14:54)=-=-
Dependency deleted: 94 no longer depends on 68
-=-=(Psergey - Sun, 28 Feb 2010, 14:08)=-=-
Dependency created: 94 now depends on 68
-=-=(Guest - Sat, 27 Feb 2010, 10:11)=-=-
Status updated.
No change.
-=-=(Guest - Sat, 27 Feb 2010, 10:11)=-=-
Status updated.
--- /tmp/wklog.68.old.24229 2010-02-27 10:11:57.000000000 +0000
+++ /tmp/wklog.68.new.24229 2010-02-27 10:11:57.000000000 +0000
@@ -1 +1 @@
-Assigned
+In-Progress
-=-=(Timour - Mon, 22 Feb 2010, 17:39)=-=-
High-Level Specification modified.
--- /tmp/wklog.68.old.17116 2010-02-22 17:39:48.000000000 +0200
+++ /tmp/wklog.68.new.17116 2010-02-22 17:39:48.000000000 +0200
@@ -233,6 +233,7 @@
1. If columns a_j1,...,a_jm do not contain null values in the temporary
table at all and v_j1,...,v_jm cannot be null, create for these columns
only one index array (and of course do not create any bitmaps for them).
+[done]
2. Consider the ratio d(a_i)=N'(a_i)/V(a_i), where N'(a_i) is the number
of rows, where a_i is not null and V(a_i) is the number of distinct
@@ -264,6 +265,10 @@
7. If you get a row with nulls in all columns stop filling the temporary
table and return UNKNOWN for any tuple <v1,...,vn>.
+[This is wrong, because if we don't fill the whole temp table, there may
+ be some tuple(s) that would match some outer tuple. In such cases, if we
+ stop filling the temp table, we would miss a TRUE result. Having a partial
+ match doesn't preclude us from having a complete match].
8. [timour]
Consider that due to materialization, we already have a unique index
-=-=(Timour - Tue, 19 Jan 2010, 18:44)=-=-
High-Level Specification modified.
--- /tmp/wklog.68.old.22569 2010-01-19 18:44:01.000000000 +0200
+++ /tmp/wklog.68.new.22569 2010-01-19 18:44:01.000000000 +0200
@@ -132,11 +132,10 @@
if (nonull_key && ! nonull_key->lookup(outer_ref))
return FALSE
- if (nonull_key)
- pq.insert(nonull_key)
for (i = 1; i <= n; i++)
{
+ if (vkey[i] != nonull_key)
vkey[i].lookup(outer_ref)
if (! vkey[i].is_eof())
pq.insert(i)
@@ -167,7 +166,7 @@
/* There cannot be a complete match, as we already checked for one. */
assert(matching_keys.elements < n)
}
- else if (cur_min_key == nonull_key)
+ else if (vkey[cur_min_key] == nonull_key)
{
/*
The non-NULL key has no corresponding NULL index, so we know for
@@ -183,8 +182,10 @@
/*
Check if all null_keys contain a NULL at row 'min_row'. The procedure
internally checks all keys in a special precomputed order. A prior
- procedure determines an optimal order and a mapping
- idx_no -> idx_order (encoded as an array).
+ procedure determines an optimal order and a mapping idx_no -> idx_order
+ (encoded as an array).
+
+ This procedure makes sure not to match the non-NULL column.
*/
if (test_null_row(null_keys, min_row))
return TRUE
@@ -198,6 +199,14 @@
vkey[cur_min_key].next()
if (! vkey[cur_min_key].is_eof())
pq.insert(cur_min_key)
+ else if (vkey[cur_min_key] == nonull_key)
+ {
+ /*
+ If there can't be more matches for the nonull_key, we know for sure
+ there is no match, since there is no possible NULL match.
+ */
+ return FALSE
+ }
if (pq.is_empty())
{
@@ -216,7 +225,6 @@
}
-
3. Directions for improvement
========================================================================
-=-=(Timour - Tue, 19 Jan 2010, 18:29)=-=-
High-Level Specification modified.
--- /tmp/wklog.68.old.21045 2010-01-19 18:29:12.000000000 +0200
+++ /tmp/wklog.68.new.21045 2010-01-19 18:29:12.000000000 +0200
@@ -132,6 +132,8 @@
if (nonull_key && ! nonull_key->lookup(outer_ref))
return FALSE
+ if (nonull_key)
+ pq.insert(nonull_key)
for (i = 1; i <= n; i++)
{
-=-=(Guest - Tue, 19 Jan 2010, 18:15)=-=-
High-Level Specification modified.
--- /tmp/wklog.68.old.19825 2010-01-19 18:15:30.000000000 +0200
+++ /tmp/wklog.68.new.19825 2010-01-19 18:15:30.000000000 +0200
@@ -1,8 +1,16 @@
-This a copy of the initial algorithm proposed by Igor:
-======================================================
+Contents
+========================================================================
-For each left side tuple (v_1,...,v_n) we have to find the following set
-of rowids for the temp table containing N rows as the result of
+1. Initial idea as proposed by Igor
+2. Algorithm for IN execution with partial matching
+3. Directions for improvement
+
+
+1. Initial idea as proposed by Igor
+========================================================================
+
+For each left side tuple (v_1,...,v_n) we have to find the following
+set of rowids for the temp table containing N rows as the result of
materialization of the subquery:
R= INTERSECT (rowid{a_i=v_i} UNION rowid{a_i is null} where i runs
@@ -18,38 +26,198 @@
- it requires minimum memory: not more than N*n bits in total
- search of an element in a set is extremely cheap
-Taken all above into account I could suggest the following algorithm to
-build R:
+Taken all above into account I could suggest the following algorithm
+to build R:
- Using indexes (read about them below) for each column participating in the
- intersection,
- merge ordered sets rowid{a_i=v_i} in the following manner.
+ Using indexes (read about them below) for each column participating
+ in the intersection, merge ordered sets rowid{a_i=v_i} in the
+ following manner.
If a rowid r has been encountered maximum in k sets
-rowid{a_i1=v_i1},...,rowid(a_ik=v_ik),
+ rowid{a_i1=v_i1},...,rowid(a_ik=v_ik),
then it has to be checked against all rowid{a_i=v_i} such that i is
-not in {i1,...,ik}.
+ not in {i1,...,ik}.
As soon as we fail to find r in one of these sets we discard it.
If r has been found in all of them then r belongs to the set R.
-Here we use the property (1): any r from rowid{a_i=v_i} UNION rowid{a_i
-is null} is either
+Here we use the property (1):
+any r from rowid{a_i=v_i} UNION rowid{a_i is null} is either
belongs to rowid{a_i=v_i} or to rowid{a_i is null}. From this we can
-infer that for any r from R
-indexes a_i can be uniquely divided into two groups: one contains
-indexes a_i where r belongs to
-the sets rowid{a_i=v_i}, the other contains indexes a_j such that r
-belongs to rowid{a_j is null}.
-
-Now let's talk how to get elements from rowid{a_i=v_i} in a sorted order
-needed for the merge procedure. We could use BTREE indexes for temp
-table. But they are rather expensive and
-take a lot of memory as the are implemented with RB trees.
+infer that for any r from R indexes a_i can be uniquely divided into
+two groups:
+- one contains indexes a_i where r belongs to the sets rowid{a_i=v_i},
+- the other contains indexes a_j such that r belongs to
+ rowid{a_j is null}.
+
+Now let's talk how to get elements from rowid{a_i=v_i} in a sorted
+order needed for the merge procedure. We could use BTREE indexes for
+temp table. But they are rather expensive and take a lot of memory as
+the are implemented with RB trees.
I would suggest creating for each column from the temporary table just
an array of rowids sorted by the value from column a.
Index lookup in such an array is cheap. It's also rather cheap to check
that the next rowid refers to a row with a different value in column a.
The array can be created on demand.
+2. Algorithm for IN execution with partial matching
+========================================================================
+
+2.1 Below is shown the top-level algorithm to execute an IN predicate
+with partial matching. This algorithm is essentially the implementation
+of Item_subselect:exec().
+
+int lookup_with_null_semantics(outer_ref[], mat_subquery)
+{
+ if (index_lookup(outer_ref, mat_subquery)
+ return TRUE
+ else
+ {
+ /*
+ Check if there is a partial match (UNKNOWN) or no match (NULL).
+ */
+ if (this is the first partial match)
+ {
+ vkey[] = build array of value keys for each NULL-able column
+ of mat_subquery.
+ nkey[] = build a bitmap NULL index for each column of mat_subquery
+ that contains NULLs
+ nonull_key = build a key over all non-NULL columns of mat_subquery
+ }
+ if (partial_match(outer_ref, vkey[], nkey[], nonull_key)
+ return UNKNOWN
+ else
+ return FALSE
+ }
+}
+
+2.2 The implementation of partial matching is as follows
+
+/*
+ Assumptions:
+ - It has already been checked if there is a complete match by a
+ regular index lookup, and the test failed.
+ - It has already been checked if there is a complete NULL row,
+ and if there was we wouldn't call this function. Thus we assume
+ that there is no complete NULL row.
+ - Not all vidx_i are empty, but some can be empty. If all were empty,
+ then the only possibility for a match is a complete NULL row, which
+ we already checked.
+
+ @param outer_ref - the uter (left) IN argument.
+ @param vidx[] - array of value keys
+ Ordered sequences of rowids of the corresponding columns a_i, such
+ that all rowids in idx_i are the ones where column a_i contains some
+ value or NULL. Each idx_i is derived dynamically, for each different
+ left argument of an IN predicate.
+ @param nidx[] - array of NULL keys
+ Bitmpas, one per each column, where a bit is set if the corresponding
+ row has a NULL value for the corresponding column.
+ @nonull_key - the only key over all columns of the materialized subquery
+ that do not contain NULLs
+
+ @returns
+ @retval FALSE if there is no match
+ @retval TRUE if there is a partial match
+*/
+
+Boolean partial_match(outer_ref, vkey[], nkey[], nonull_key)
+{
+ /* Set of the keys (columns) that form a partial match. */
+ Set matching_keys = {}
+ /* A subset of all keys that need to be checked for NULL matches. */
+ Set null_keys = {}
+ Int min_key /* Key that contains the current minimum position. */
+ Int min_row /* Current row number of min_key. */
+ Int cur_min_key, cur_min_row
+ PriorityQueue pq
+
+ if (nonull_key && ! nonull_key->lookup(outer_ref))
+ return FALSE
+
+ for (i = 1; i <= n; i++)
+ {
+ vkey[i].lookup(outer_ref)
+ if (! vkey[i].is_eof())
+ pq.insert(i)
+ }
+ /*
+ Not all value keys are empty, thus we don't have only NULL
+ keys. If we had, the only possible match is a NULL row, and
+ we cheked there is no such row, therefore the result is known
+ to be FALSE.
+ In fact this algorithm makes sense for at least two non-NULL
+ columns.
+ */
+ assert(pq.elements > 1)
+
+ (min_key, min_row) = pq.pop()
+ matching_keys.add(min_key)
+ vkey[min_key].next()
+ if (! vkey[min_key].is_eof())
+ pq.insert(min_key)
+
+ while (TRUE)
+ {
+ (cur_min_key, cur_min_row) = pq.pop()
+
+ if (cur_min_row == min_row)
+ {
+ matching_keys.add(cur_min_key)
+ /* There cannot be a complete match, as we already checked for one. */
+ assert(matching_keys.elements < n)
+ }
+ else if (cur_min_key == nonull_key)
+ {
+ /*
+ The non-NULL key has no corresponding NULL index, so we know for
+ sure that the row 'min_row' is not a match.
+ */
+ (min_key, min_row) = (cur_min_key, cur_min_row)
+ matching_keys = {min_key}
+ }
+ else
+ {
+ assert(cur_min_row > min_row) /* Follows from the use of PQ. */
+ null_keys = set_difference(all keys vkey[], matching_keys)
+ /*
+ Check if all null_keys contain a NULL at row 'min_row'. The procedure
+ internally checks all keys in a special precomputed order. A prior
+ procedure determines an optimal order and a mapping
+ idx_no -> idx_order (encoded as an array).
+ */
+ if (test_null_row(null_keys, min_row))
+ return TRUE
+ else
+ {
+ (min_key, min_row) = (cur_min_key, cur_min_row)
+ matching_keys = {min_key}
+ }
+ }
+
+ vkey[cur_min_key].next()
+ if (! vkey[cur_min_key].is_eof())
+ pq.insert(cur_min_key)
+
+ if (pq.is_empty())
+ {
+ /* Check the last row of the last column in PQ for NULL matches. */
+ null_keys = set_difference(all keys vkey[], matching_keys)
+ if (test_null_row(null_keys, min_row))
+ return TRUE
+ else
+ return FALSE
+ }
+ }
+
+ /* We should never get here. */
+ assert(FALSE)
+ return FALSE
+}
+
+
+
+3. Directions for improvement
+========================================================================
+
Other consideration that may be taken into account:
1. If columns a_j1,...,a_jm do not contain null values in the temporary
-=-=(Timour - Sun, 06 Dec 2009, 14:36)=-=-
High-Level Specification modified.
--- /tmp/wklog.68.old.12919 2009-12-06 14:36:18.000000000 +0200
+++ /tmp/wklog.68.new.12919 2009-12-06 14:36:18.000000000 +0200
@@ -87,3 +87,8 @@
7. If you get a row with nulls in all columns stop filling the temporary
table and return UNKNOWN for any tuple <v1,...,vn>.
+8. [timour]
+ Consider that due to materialization, we already have a unique index
+on all columns <a_1,..., a_n>. We can use the first key part of this index
+over column a_1, instead of the index rowid{a_i=v_i}. Thus we can avoid
+creating the index rowid{a_i=v_i}.
------------------------------------------------------------
-=-=(View All Progress Notes, 16 total)=-=-
http://askmonty.org/worklog/index.pl?tid=68&nolimit=1
DESCRIPTION:
The goal of this task is to implement efficient execution of NOT IN
subquery predicates of the form:
<oe_1,...,oe_n> NOT IN <non_correlated subquery>
when either some oe_i or some subquery result column contains NULLs.
The problem with such predicates is that it is possible to use index
lookups only when neither argument of the predicate contains NULLs.
If some argument contains a NULL, then due to NULL semantics, it
plays the role of a wildcard. If we were to use regular index lookups,
then we would get 'no match' for some outer tuple (thus the predicate
evaluates to FALSE), while the SQL semantics means 'partial match', and
the predicate should evaluate to NULL.
This task implements an efficient algorithm to compute such 'partial
matches', where a NULL matches any value.
HIGH-LEVEL SPECIFICATION:
Contents
========================================================================
1. Initial idea as proposed by Igor
2. Algorithm for IN execution with partial matching
3. Directions for improvement
1. Initial idea as proposed by Igor
========================================================================
For each left side tuple (v_1,...,v_n) we have to find the following
set of rowids for the temp table containing N rows as the result of
materialization of the subquery:
R = INTERSECT (rowid{a_i=v_i} UNION rowid{a_i is null}) where i runs
through all indexes from [1..n] such that v_i is not null.
Bear in mind the following specifics of this intersection:
(1) For each i: rowid{a_i=v_i} and rowid{a_i is null} are disjoint
(2) For each i: rowid{a_i is null} is the same for each tuple,
that is, this set is independent of the left-side tuples.
Due to (2) it makes sense to build rowid{a_i is null} only once.
A good representation for such sets would be bitmaps:
- it requires minimum memory: not more than N*n bits in total
- search of an element in a set is extremely cheap
Taken all above into account I could suggest the following algorithm
to build R:
Using indexes (read about them below) for each column participating
in the intersection, merge ordered sets rowid{a_i=v_i} in the
following manner.
If a rowid r has been encountered maximum in k sets
rowid{a_i1=v_i1},...,rowid(a_ik=v_ik),
then it has to be checked against all rowid{a_i=v_i} such that i is
not in {i1,...,ik}.
As soon as we fail to find r in one of these sets we discard it.
If r has been found in all of them then r belongs to the set R.
Here we use the property (1):
any r from rowid{a_i=v_i} UNION rowid{a_i is null} either belongs
to rowid{a_i=v_i} or to rowid{a_i is null}. From this we can
infer that for any r from R the indexes a_i can be uniquely divided into
two groups:
- one contains indexes a_i where r belongs to the sets rowid{a_i=v_i},
- the other contains indexes a_j such that r belongs to
rowid{a_j is null}.
Now let's discuss how to get elements from rowid{a_i=v_i} in the sorted
order needed for the merge procedure. We could use BTREE indexes for the
temp table, but they are rather expensive and take a lot of memory, as
they are implemented with red-black trees.
I would suggest creating for each column from the temporary table just
an array of rowids sorted by the value from column a.
Index lookup in such an array is cheap. It's also rather cheap to check
that the next rowid refers to a row with a different value in column a.
The array can be created on demand.
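
A minimal, self-contained sketch of this representation is given below. It is
only an illustration under stated assumptions (the materialized temp table is
modeled as in-memory columns of nullable ints; the names ValueKey, Column and
null_bitmap are invented for the example and are not MariaDB identifiers):

#include <algorithm>
#include <cstdint>
#include <optional>
#include <vector>

typedef uint32_t Rowid;
typedef std::vector<std::optional<int> > Column;  /* one entry per row; nullopt = NULL */

struct ValueKey
{
  const Column *col;
  std::vector<Rowid> rowids;                      /* non-NULL rows, ordered by value */

  explicit ValueKey(const Column &c) : col(&c)
  {
    for (Rowid r= 0; r < (Rowid) c.size(); r++)
      if (c[r].has_value())
        rowids.push_back(r);
    /* stable_sort keeps rowids with equal values in ascending rowid order. */
    std::stable_sort(rowids.begin(), rowids.end(),
                     [&c](Rowid a, Rowid b) { return *c[a] < *c[b]; });
  }

  /* rowid{a_i = v}: binary search into the sorted array, then collect the
     run of equal values. Much cheaper than maintaining an RB-tree index. */
  std::vector<Rowid> lookup(int v) const
  {
    std::vector<Rowid> out;
    auto it= std::partition_point(rowids.begin(), rowids.end(),
                                  [&](Rowid r) { return *(*col)[r] < v; });
    for (; it != rowids.end() && *(*col)[*it] == v; ++it)
      out.push_back(*it);
    return out;
  }
};

/* rowid{a_i is null} as a bitmap: one bit per row of the temp table. */
static std::vector<bool> null_bitmap(const Column &c)
{
  std::vector<bool> bits(c.size(), false);
  for (Rowid r= 0; r < (Rowid) c.size(); r++)
    bits[r]= !c[r].has_value();
  return bits;
}

Because rowids are appended in ascending order and sorted with a stable sort,
a lookup returns its matching rowids in ascending rowid order, which is what
the merge procedure below requires.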
2. Algorithm for IN execution with partial matching
========================================================================
2.1 Below is shown the top-level algorithm to execute an IN predicate
with partial matching. This algorithm is essentially the implementation
of Item_subselect::exec().
int lookup_with_null_semantics(outer_ref[], mat_subquery)
{
if (index_lookup(outer_ref, mat_subquery))
return TRUE
else
{
/*
Check if there is a partial match (UNKNOWN) or no match (NULL).
*/
if (this is the first partial match)
{
vkey[] = build array of value keys for each NULL-able column
of mat_subquery.
nkey[] = build a bitmap NULL index for each column of mat_subquery
that contains NULLs
nonull_key = build a key over all non-NULL columns of mat_subquery
}
if (partial_match(outer_ref, vkey[], nkey[], nonull_key))
return UNKNOWN
else
return FALSE
}
}
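
A hedged, self-contained sketch of this control flow follows (Tri, OuterRef and
MatSubquery are hypothetical stand-ins, not MariaDB classes). The point it
illustrates is that the partial-match structures are built lazily, on the first
outer tuple whose complete-match lookup fails, and are then reused for all
later outer tuples:

#include <functional>
#include <optional>
#include <vector>

enum class Tri { FALSE_VAL, TRUE_VAL, UNKNOWN_VAL };
typedef std::vector<std::optional<int> > OuterRef;     /* the left-hand IN tuple */

struct MatSubquery
{
  bool keys_built= false;
  std::function<bool(const OuterRef&)> index_lookup;   /* complete match via the unique index */
  std::function<void()>                build_keys;     /* value keys, NULL bitmaps, nonull key */
  std::function<bool(const OuterRef&)> partial_match;  /* the merge from section 2.2 */

  Tri lookup_with_null_semantics(const OuterRef &ref)
  {
    if (index_lookup(ref))
      return Tri::TRUE_VAL;                            /* complete match */
    if (!keys_built)
    {
      build_keys();                                    /* built once, on first failure */
      keys_built= true;
    }
    return partial_match(ref) ? Tri::UNKNOWN_VAL : Tri::FALSE_VAL;
  }
};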
2.2 The implementation of partial matching is as follows
/*
Assumptions:
- It has already been checked if there is a complete match by a
regular index lookup, and the test failed.
- It has already been checked if there is a complete NULL row,
and if there was we wouldn't call this function. Thus we assume
that there is no complete NULL row.
- Not all vkey_i are empty, but some can be empty. If all were empty,
then the only possibility for a match is a complete NULL row, which
we already checked.
@param outer_ref - the outer (left) IN argument.
@param vkey[] - array of value keys
Ordered sequences of rowids of the corresponding columns a_i, such
that all rowids in vkey_i are the ones where column a_i contains some
value or NULL. Each vkey_i is derived dynamically, for each different
left argument of an IN predicate.
@param nkey[] - array of NULL keys
Bitmaps, one per column, where a bit is set if the corresponding
row has a NULL value in the corresponding column.
@param nonull_key - the only key over all columns of the materialized subquery
that do not contain NULLs
@returns
@retval FALSE if there is no match
@retval TRUE if there is a partial match
*/
Boolean partial_match(outer_ref, vkey[], nkey[], nonull_key)
{
/* Set of the keys (columns) that form a partial match. */
Set matching_keys = {}
/* A subset of all keys that need to be checked for NULL matches. */
Set null_keys = {}
Int min_key /* Key that contains the current minimum position. */
Int min_row /* Current row number of min_key. */
Int cur_min_key, cur_min_row
PriorityQueue pq
if (nonull_key && ! nonull_key->lookup(outer_ref))
return FALSE
for (i = 1; i <= n; i++)
{
if (vkey[i] != nonull_key)
vkey[i].lookup(outer_ref)
if (! vkey[i].is_eof())
pq.insert(i)
}
/*
Not all value keys are empty, thus we don't have only NULL
keys. If we had, the only possible match is a NULL row, and
we checked there is no such row, therefore the result is known
to be FALSE.
In fact this algorithm makes sense for at least two non-NULL
columns.
*/
assert(pq.elements > 1)
(min_key, min_row) = pq.pop()
matching_keys.add(min_key)
vkey[min_key].next()
if (! vkey[min_key].is_eof())
pq.insert(min_key)
while (TRUE)
{
(cur_min_key, cur_min_row) = pq.pop()
if (cur_min_row == min_row)
{
matching_keys.add(cur_min_key)
/* There cannot be a complete match, as we already checked for one. */
assert(matching_keys.elements < n)
}
else if (vkey[cur_min_key] == nonull_key)
{
/*
The non-NULL key has no corresponding NULL index, so we know for
sure that the row 'min_row' is not a match.
*/
(min_key, min_row) = (cur_min_key, cur_min_row)
matching_keys = {min_key}
}
else
{
assert(cur_min_row > min_row) /* Follows from the use of PQ. */
null_keys = set_difference(all keys vkey[], matching_keys)
/*
Check if all null_keys contain a NULL at row 'min_row'. The procedure
internally checks all keys in a special precomputed order. A prior
procedure determines an optimal order and a mapping idx_no -> idx_order
(encoded as an array).
This procedure makes sure not to match the non-NULL column.
*/
if (test_null_row(null_keys, min_row))
return TRUE
else
{
(min_key, min_row) = (cur_min_key, cur_min_row)
matching_keys = {min_key}
}
}
vkey[cur_min_key].next()
if (! vkey[cur_min_key].is_eof())
pq.insert(cur_min_key)
else if (vkey[cur_min_key] == nonull_key)
{
/*
If there can't be more matches for the nonull_key, we know for sure
there is no match, since there is no possible NULL match.
*/
return FALSE
}
if (pq.is_empty())
{
/* Check the last row of the last column in PQ for NULL matches. */
null_keys = set_difference(all keys vkey[], matching_keys)
if (test_null_row(null_keys, min_row))
return TRUE
else
return FALSE
}
}
/* We should never get here. */
assert(FALSE)
return FALSE
}
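
Below is a simplified, runnable sketch of the same merge, again an illustration
rather than MariaDB source. Each participating column is reduced to its list of
matching rowids for the current outer tuple plus its NULL bitmap, and the
nonull_key special-casing is dropped, since a key with an empty NULL bitmap can
never contribute a NULL match anyway:

#include <cstdint>
#include <functional>
#include <queue>
#include <set>
#include <utility>
#include <vector>

typedef uint32_t Rowid;

/* One cursor per participating column: the ordered rowids where a_i = v_i
   (already looked up for the current outer tuple) and the column's NULL
   bitmap (left empty for a column that contains no NULLs). */
struct ColumnKey
{
  std::vector<Rowid> matches;
  std::vector<bool>  is_null;
  size_t pos= 0;
  bool  eof() const { return pos >= matches.size(); }
  Rowid row() const { return matches[pos]; }
};

/* Do all columns outside 'matching' have NULL at 'row'?
   (The pseudocode's test_null_row over the complement set.) */
static bool null_row_complement(const std::vector<ColumnKey> &keys,
                                const std::set<size_t> &matching, Rowid row)
{
  for (size_t i= 0; i < keys.size(); i++)
  {
    if (matching.count(i))
      continue;
    if (keys[i].is_null.empty() || !keys[i].is_null[row])
      return false;
  }
  return true;
}

/* Priority-queue merge of the per-column rowid lists: a row is a partial
   match iff every column either equals its outer value there or is NULL. */
static bool partial_match(std::vector<ColumnKey> keys)
{
  typedef std::pair<Rowid, size_t> Entry;              /* (current rowid, key index) */
  std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry> > pq;
  for (size_t i= 0; i < keys.size(); i++)
    if (!keys[i].eof())
      pq.push(Entry(keys[i].row(), i));
  if (pq.empty())
    return false;                                      /* only a full NULL row could match */

  std::set<size_t> matching;                           /* keys that match at min_row */
  Rowid min_row= pq.top().first;
  while (!pq.empty())
  {
    Entry top= pq.top();
    pq.pop();
    if (top.first != min_row)
    {
      /* All keys positioned at min_row were seen; the rest must be NULL there. */
      if (null_row_complement(keys, matching, min_row))
        return true;
      matching.clear();
      min_row= top.first;
    }
    matching.insert(top.second);
    keys[top.second].pos++;                            /* advance this cursor */
    if (!keys[top.second].eof())
      pq.push(Entry(keys[top.second].row(), top.second));
  }
  /* Check the last candidate row once the queue drains. */
  return null_row_complement(keys, matching, min_row);
}

The invariant is the same as in the pseudocode above: a row is a partial match
iff every column either equals its outer value at that row or is NULL there.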
3. Directions for improvement
========================================================================
Other considerations that may be taken into account:
1. If columns a_j1,...,a_jm do not contain null values in the temporary
table at all and v_j1,...,v_jm cannot be null, create for these columns
only one index array (and of course do not create any bitmaps for them).
[done]
2. Consider the ratio d(a_i)=N'(a_i)/V(a_i), where N'(a_i) is the number
of rows, where a_i is not null and V(a_i) is the number of distinct
values for a_i excluding nulls.
If d(a_i) is close to N'(a_i) then do not create any index array: check
whether there is a match running through the records that have been
filtered in. Anyway if d(a_i) is close to N'(a_i) then the intersection
with rowid{a_i=v_i} will not reduce the number of remaining rowids
significantly.
In other words, if V(a_i) exceeds some threshold there is no point in
creating an index for a_i.
If additionally N-N'(a_i) is small do not create a bitmap for this
column either.
3. If for a column a_i, d(a_i) is not close to N'(a_i), but N-N'(a_i) is
small, a sorted array of rowids from the set rowid{a_i is null} can be
used instead of a bitmap.
4. We always have a match if R0 = INTERSECT rowid{a_i is null} is not
empty, where i runs through all indexes from [1..n] such that v_i is not
null. For a given subset of columns this fact has to be checked only
once; it can easily be done with bitmap intersection (see the sketch
after this list).
5. If v1,...,vn can never be NULL, then indexes (sorted arrays) can be
created only for rows with NULLs.
6. If v1,...,vn can never be NULL and the number of rows with NULLs is
small, do not create indexes and do not create bitmaps.
7. If you get a row with nulls in all columns stop filling the temporary
table and return UNKNOWN for any tuple <v1,...,vn>.
[This is wrong, because if we don't fill the whole temp table, there may
be some tuple(s) that would match some outer tuple. In such cases, if we
stop filling the temp table, we would miss a TRUE result. Having a partial
match doesn't preclude us from having a complete match].
8. [timour]
Consider that due to materialization, we already have a unique index
on all columns <a_1,..., a_n>. We can use the first key part of this index
over column a_1, instead of the index rowid{a_i=v_i}. Thus we can avoid
creating the index rowid{a_i=v_i}.
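
A small sketch of the bitmap intersection mentioned in item 4, assuming (as in
the earlier sketches) that the NULL bitmaps are plain bit vectors with one bit
per temp-table row:

#include <vector>

/* R0 = INTERSECT rowid{a_i is null} over the columns whose outer value v_i is
   not NULL: if some row is NULL in every such column, it is a partial match.
   The result depends only on which columns are bound, so it can be cached
   per subset of columns. */
static bool null_intersection_nonempty(const std::vector<std::vector<bool> > &null_bitmaps)
{
  if (null_bitmaps.empty())
    return false;
  size_t rows= null_bitmaps[0].size();
  for (size_t r= 0; r < rows; r++)
  {
    bool all_null= true;
    for (size_t i= 0; i < null_bitmaps.size(); i++)
      if (!null_bitmaps[i][r])
      {
        all_null= false;
        break;
      }
    if (all_null)
      return true;                                     /* R0 is not empty */
  }
  return false;
}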
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] Rev 2767: MWL#68 Subquery optimization: Efficient NOT IN execution with NULLs in file:///home/tsk/mprog/src/5.3-mwl68/
by timour@askmonty.org 11 Mar '10
by timour@askmonty.org 11 Mar '10
11 Mar '10
At file:///home/tsk/mprog/src/5.3-mwl68/
------------------------------------------------------------
revno: 2767
revision-id: timour(a)askmonty.org-20100311214331-kw8ng8aiy6h60vai
parent: timour(a)askmonty.org-20100309103615-dzmm6xt7ye5xfs25
committer: timour(a)askmonty.org
branch nick: 5.3-mwl68
timestamp: Thu 2010-03-11 23:43:31 +0200
message:
MWL#68 Subquery optimization: Efficient NOT IN execution with NULLs
This patch does three things:
- It adds the possibility to force the execution of top-level [NOT] IN
subquery predicates via the IN=>EXISTS transformation. This is done by
setting both optimizer switches partial_match_rowid_merge and
partial_match_table_scan to "off".
- It adjusts all test cases where the complete optimizer_switch is
selected because now we have two more switches.
- For those test cases where the plan changes because of the new available
strategies, we switch off both partial match strategies in order to
force the "old" IN=>EXISTS strategy. This is done because most of these
test cases specifically test bugs in this strategy.
=== modified file 'mysql-test/include/mix1.inc'
--- a/mysql-test/include/mix1.inc 2009-09-15 06:08:54 +0000
+++ b/mysql-test/include/mix1.inc 2010-03-11 21:43:31 +0000
@@ -1177,8 +1177,11 @@
create table t1 (a bit(1) not null,b int) engine=myisam;
create table t2 (c int) engine=innodb;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch='partial_match_rowid_merge=off,partial_match_table_scan=off';
explain
select b from t1 where a not in (select b from t1,t2 group by a) group by a;
+set optimizer_switch=@save_optimizer_switch;
DROP TABLE t1,t2;
--echo End of 5.0 tests
=== modified file 'mysql-test/r/index_merge_myisam.result'
--- a/mysql-test/r/index_merge_myisam.result 2010-01-17 14:51:10 +0000
+++ b/mysql-test/r/index_merge_myisam.result 2010-03-11 21:43:31 +0000
@@ -1419,19 +1419,19 @@
#
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='index_merge=off,index_merge_union=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='index_merge_union=on';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=off,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=off,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,index_merge_sort_union=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=off,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=off,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch=4;
ERROR 42000: Variable 'optimizer_switch' can't be set to the value of '4'
set optimizer_switch=NULL;
@@ -1458,21 +1458,21 @@
set optimizer_switch='index_merge=off,index_merge_union=off,default';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch=default;
select @@global.optimizer_switch;
@@global.optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set @@global.optimizer_switch=default;
select @@global.optimizer_switch;
@@global.optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
#
# Check index_merge's @@optimizer_switch flags
#
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
create table t0 (a int);
insert into t0 values (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);
create table t1 (a int, b int, c int, filler char(100),
@@ -1582,5 +1582,5 @@
set optimizer_switch=default;
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
drop table t0, t1;
=== modified file 'mysql-test/r/innodb_mysql.result'
--- a/mysql-test/r/innodb_mysql.result 2009-12-15 07:16:46 +0000
+++ b/mysql-test/r/innodb_mysql.result 2010-03-11 21:43:31 +0000
@@ -1425,12 +1425,15 @@
#
create table t1 (a bit(1) not null,b int) engine=myisam;
create table t2 (c int) engine=innodb;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch='partial_match_rowid_merge=off,partial_match_table_scan=off';
explain
select b from t1 where a not in (select b from t1,t2 group by a) group by a;
id select_type table type possible_keys key key_len ref rows Extra
1 PRIMARY NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
2 DEPENDENT SUBQUERY t1 system NULL NULL NULL NULL 0 const row not found
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 1
+set optimizer_switch=@save_optimizer_switch;
DROP TABLE t1,t2;
End of 5.0 tests
CREATE TABLE `t2` (
=== modified file 'mysql-test/r/myisam_mrr.result'
--- a/mysql-test/r/myisam_mrr.result 2010-01-17 14:51:10 +0000
+++ b/mysql-test/r/myisam_mrr.result 2010-03-11 21:43:31 +0000
@@ -394,7 +394,7 @@
# - engine_condition_pushdown does not affect ICP
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
create table t0 (a int);
insert into t0 values (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);
create table t1 (a int, b int, key(a));
=== modified file 'mysql-test/r/ps.result'
--- a/mysql-test/r/ps.result 2009-05-27 15:19:44 +0000
+++ b/mysql-test/r/ps.result 2010-03-11 21:43:31 +0000
@@ -149,6 +149,8 @@
c32 set('monday', 'tuesday', 'wednesday')
) engine = MYISAM ;
create table t2 like t1;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
set @stmt= ' explain SELECT (SELECT SUM(c1 + c12 + 0.0) FROM t2 where (t1.c2 - 0e-3) = t2.c2 GROUP BY t1.c15 LIMIT 1) as scalar_s, exists (select 1.0e+0 from t2 where t2.c3 * 9.0000000000 = t1.c4) as exists_s, c5 * 4 in (select c6 + 0.3e+1 from t2) as in_s, (c7 - 4, c8 - 4) in (select c9 + 4.0, c10 + 40e-1 from t2) as in_row_s FROM t1, (select c25 x, c32 y from t2) tt WHERE x * 1 = c25 ' ;
prepare stmt1 from @stmt ;
execute stmt1 ;
@@ -177,6 +179,7 @@
2 DEPENDENT SUBQUERY NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
deallocate prepare stmt1;
drop tables t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
set @arg00=1;
prepare stmt1 from ' create table t1 (m int) as select 1 as m ' ;
execute stmt1 ;
=== modified file 'mysql-test/r/subselect.result'
--- a/mysql-test/r/subselect.result 2010-02-17 21:59:41 +0000
+++ b/mysql-test/r/subselect.result 2010-03-11 21:43:31 +0000
@@ -1,4 +1,6 @@
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
select (select 2);
(select 2)
2
@@ -4803,4 +4805,5 @@
1
1
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
End of 5.1 tests.
=== modified file 'mysql-test/r/subselect3.result'
--- a/mysql-test/r/subselect3.result 2010-02-17 10:05:27 +0000
+++ b/mysql-test/r/subselect3.result 2010-03-11 21:43:31 +0000
@@ -63,12 +63,15 @@
select ' ^ This must show 11' Z;
Z
^ This must show 11
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
explain extended select a in (select max(ie) from t1 where oref=4 group by grp) from t3;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t3 ALL NULL NULL NULL NULL 2 100.00
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 6 100.00 Using where; Using temporary; Using filesort
Warnings:
Note 1003 select <in_optimizer>(`test`.`t3`.`a`,<exists>(select max(`test`.`t1`.`ie`) AS `max(ie)` from `test`.`t1` where (`test`.`t1`.`oref` = 4) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t3`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`)))))) AS `a in (select max(ie) from t1 where oref=4 group by grp)` from `test`.`t3`
+set @@optimizer_switch=@save_optimizer_switch;
drop table t1, t2, t3;
create table t1 (a int, oref int, key(a));
insert into t1 values
@@ -692,6 +695,8 @@
2 3 h
3 4 i
DROP TABLE t1, t2;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
CREATE TABLE t1 (a int);
CREATE TABLE t2 (b int, PRIMARY KEY(b));
INSERT INTO t1 VALUES (1), (NULL), (4);
@@ -759,6 +764,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 4 Using where
2 DEPENDENT SUBQUERY t2 unique_subquery PRIMARY PRIMARY 4 func 1 Using index; Using where
DROP TABLE t1, t2;
+set @@optimizer_switch=@save_optimizer_switch;
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES(1);
CREATE TABLE t2 (placeholder CHAR(11));
@@ -960,7 +966,7 @@
# Baseline:
SHOW STATUS LIKE '%Handler_read_rnd_next';
Variable_name Value
-Handler_read_rnd_next 17
+Handler_read_rnd_next 18
INSERT INTO t1 VALUES (NULL, NULL);
FLUSH STATUS;
@@ -977,7 +983,7 @@
# (read record from t1, but do not read from t2)
SHOW STATUS LIKE '%Handler_read_rnd_next';
Variable_name Value
-Handler_read_rnd_next 18
+Handler_read_rnd_next 19
DROP TABLE t1,t2;
End of 5.1 tests
CREATE TABLE t1 (
=== modified file 'mysql-test/r/subselect3_jcl6.result'
--- a/mysql-test/r/subselect3_jcl6.result 2010-02-17 10:47:55 +0000
+++ b/mysql-test/r/subselect3_jcl6.result 2010-03-11 21:43:31 +0000
@@ -67,12 +67,15 @@
select ' ^ This must show 11' Z;
Z
^ This must show 11
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
explain extended select a in (select max(ie) from t1 where oref=4 group by grp) from t3;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t3 ALL NULL NULL NULL NULL 2 100.00
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 6 100.00 Using where; Using temporary; Using filesort
Warnings:
Note 1003 select <in_optimizer>(`test`.`t3`.`a`,<exists>(select max(`test`.`t1`.`ie`) AS `max(ie)` from `test`.`t1` where (`test`.`t1`.`oref` = 4) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t3`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`)))))) AS `a in (select max(ie) from t1 where oref=4 group by grp)` from `test`.`t3`
+set @@optimizer_switch=@save_optimizer_switch;
drop table t1, t2, t3;
create table t1 (a int, oref int, key(a));
insert into t1 values
@@ -696,6 +699,8 @@
2 3 h
3 4 i
DROP TABLE t1, t2;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
CREATE TABLE t1 (a int);
CREATE TABLE t2 (b int, PRIMARY KEY(b));
INSERT INTO t1 VALUES (1), (NULL), (4);
@@ -763,6 +768,7 @@
1 PRIMARY t1 ALL NULL NULL NULL NULL 4 Using where
2 DEPENDENT SUBQUERY t2 unique_subquery PRIMARY PRIMARY 4 func 1 Using index; Using where
DROP TABLE t1, t2;
+set @@optimizer_switch=@save_optimizer_switch;
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES(1);
CREATE TABLE t2 (placeholder CHAR(11));
@@ -964,7 +970,7 @@
# Baseline:
SHOW STATUS LIKE '%Handler_read_rnd_next';
Variable_name Value
-Handler_read_rnd_next 17
+Handler_read_rnd_next 18
INSERT INTO t1 VALUES (NULL, NULL);
FLUSH STATUS;
@@ -981,7 +987,7 @@
# (read record from t1, but do not read from t2)
SHOW STATUS LIKE '%Handler_read_rnd_next';
Variable_name Value
-Handler_read_rnd_next 18
+Handler_read_rnd_next 19
DROP TABLE t1,t2;
End of 5.1 tests
CREATE TABLE t1 (
=== modified file 'mysql-test/r/subselect_no_mat.result'
--- a/mysql-test/r/subselect_no_mat.result 2010-02-21 07:33:54 +0000
+++ b/mysql-test/r/subselect_no_mat.result 2010-03-11 21:43:31 +0000
@@ -1,8 +1,10 @@
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='materialization=off';
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
select (select 2);
(select 2)
2
@@ -4807,8 +4809,9 @@
1
1
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
End of 5.1 tests.
set optimizer_switch=default;
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
=== modified file 'mysql-test/r/subselect_no_opts.result'
--- a/mysql-test/r/subselect_no_opts.result 2010-02-21 07:33:54 +0000
+++ b/mysql-test/r/subselect_no_opts.result 2010-03-11 21:43:31 +0000
@@ -1,8 +1,10 @@
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='materialization=off,semijoin=off';
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
select (select 2);
(select 2)
2
@@ -4807,8 +4809,9 @@
1
1
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
End of 5.1 tests.
set optimizer_switch=default;
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
=== modified file 'mysql-test/r/subselect_no_semijoin.result'
--- a/mysql-test/r/subselect_no_semijoin.result 2010-02-21 07:33:54 +0000
+++ b/mysql-test/r/subselect_no_semijoin.result 2010-03-11 21:43:31 +0000
@@ -1,8 +1,10 @@
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='semijoin=off';
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
select (select 2);
(select 2)
2
@@ -4807,8 +4809,9 @@
1
1
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
End of 5.1 tests.
set optimizer_switch=default;
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
=== modified file 'mysql-test/r/subselect_sj.result'
--- a/mysql-test/r/subselect_sj.result 2010-02-24 11:33:42 +0000
+++ b/mysql-test/r/subselect_sj.result 2010-03-11 21:43:31 +0000
@@ -202,39 +202,39 @@
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,materialization=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off,semijoin=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,materialization=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch=default;
drop table t0, t1, t2;
drop table t10, t11, t12;
=== modified file 'mysql-test/r/subselect_sj_jcl6.result'
--- a/mysql-test/r/subselect_sj_jcl6.result 2010-03-07 15:41:45 +0000
+++ b/mysql-test/r/subselect_sj_jcl6.result 2010-03-11 21:43:31 +0000
@@ -206,39 +206,39 @@
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,materialization=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off,semijoin=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,materialization=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch=default;
drop table t0, t1, t2;
drop table t10, t11, t12;
=== modified file 'mysql-test/t/ps.test'
--- a/mysql-test/t/ps.test 2009-05-27 15:19:44 +0000
+++ b/mysql-test/t/ps.test 2010-03-11 21:43:31 +0000
@@ -163,6 +163,9 @@
) engine = MYISAM ;
create table t2 like t1;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
set @stmt= ' explain SELECT (SELECT SUM(c1 + c12 + 0.0) FROM t2 where (t1.c2 - 0e-3) = t2.c2 GROUP BY t1.c15 LIMIT 1) as scalar_s, exists (select 1.0e+0 from t2 where t2.c3 * 9.0000000000 = t1.c4) as exists_s, c5 * 4 in (select c6 + 0.3e+1 from t2) as in_s, (c7 - 4, c8 - 4) in (select c9 + 4.0, c10 + 40e-1 from t2) as in_row_s FROM t1, (select c25 x, c32 y from t2) tt WHERE x * 1 = c25 ' ;
prepare stmt1 from @stmt ;
execute stmt1 ;
@@ -171,6 +174,8 @@
deallocate prepare stmt1;
drop tables t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
+
#
# parameters from variables (for field creation)
#
=== modified file 'mysql-test/t/subselect.test'
--- a/mysql-test/t/subselect.test 2010-01-17 20:52:20 +0000
+++ b/mysql-test/t/subselect.test 2010-03-11 21:43:31 +0000
@@ -11,6 +11,9 @@
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
--enable_warnings
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
select (select 2);
explain extended select (select 2);
SELECT (SELECT 1) UNION SELECT (SELECT 2);
@@ -4061,4 +4064,6 @@
(SELECT LAST_INSERT_ID() FROM t1 ORDER BY MIN(a) ASC LIMIT 1);
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
+
--echo End of 5.1 tests.
=== modified file 'mysql-test/t/subselect3.test'
--- a/mysql-test/t/subselect3.test 2010-01-17 14:51:10 +0000
+++ b/mysql-test/t/subselect3.test 2010-03-11 21:43:31 +0000
@@ -59,9 +59,13 @@
show status like 'Handler_read_rnd_next';
select ' ^ This must show 11' Z;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
# This must show trigcond:
explain extended select a in (select max(ie) from t1 where oref=4 group by grp) from t3;
+set @@optimizer_switch=@save_optimizer_switch;
drop table t1, t2, t3;
#
@@ -529,6 +533,9 @@
DROP TABLE t1, t2;
+# The next three test cases must be executed with the IN=>EXISTS strategy
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
#
# Bug #27870: crash of an equijoin query with WHERE condition containing
@@ -588,6 +595,8 @@
DROP TABLE t1, t2;
+set @@optimizer_switch=@save_optimizer_switch;
+
#
# Bug #34763: item_subselect.cc:1235:Item_in_subselect::row_value_transformer:
# Assertion failed, unexpected error message:
=== modified file 'sql/opt_subselect.cc'
--- a/sql/opt_subselect.cc 2010-03-09 10:36:15 +0000
+++ b/sql/opt_subselect.cc 2010-03-11 21:43:31 +0000
@@ -187,7 +187,11 @@
does not call setup_subquery_materialization(). We could make
SELECT ... FROM DUAL call that function but that doesn't seem
to be the case that is worth handling.
- 4. Subquery is non-correlated
+ 4. Either the subquery predicate is a top-level predicate, or at
+ least one partial match strategy is enabled. If no partial match
+ strategy is enabled, then materialization cannot be used for
+ non-top-level queries because it cannot handle NULLs correctly.
+ 5. Subquery is non-correlated
TODO:
This is an overly restrictive condition. It can be extended to:
(Subquery is non-correlated ||
@@ -195,13 +199,13 @@
(Subquery is correlated to the immediate outer query &&
Subquery !contains {GROUP BY, ORDER BY [LIMIT],
aggregate functions}) && subquery predicate is not under "NOT IN"))
- 5. No execution method was already chosen (by a prepared statement).
+ 6. No execution method was already chosen (by a prepared statement).
(*) The subquery must be part of a SELECT statement. The current
condition also excludes multi-table update statements.
- We have to determine whether we will perform subquery materialization
- before calling the IN=>EXISTS transformation, so that we know whether to
+ Determine whether we will perform subquery materialization before
+ calling the IN=>EXISTS transformation, so that we know whether to
perform the whole transformation or only that part of it which wraps
Item_in_subselect in an Item_in_optimizer.
*/
@@ -211,11 +215,14 @@
select_lex->master_unit()->first_select()->leaf_tables && // 3
thd->lex->sql_command == SQLCOM_SELECT && // *
select_lex->outer_select()->leaf_tables && // 3A
- subquery_types_allow_materialization(in_subs))
+ subquery_types_allow_materialization(in_subs) &&
+ // psergey-todo: duplicated_subselect_card_check: where it's done?
+ (in_subs->is_top_level_item() ||
+ optimizer_flag(thd, OPTIMIZER_SWITCH_PARTIAL_MATCH_ROWID_MERGE) ||
+ optimizer_flag(thd, OPTIMIZER_SWITCH_PARTIAL_MATCH_TABLE_SCAN)) &&//4
+ !in_subs->is_correlated && // 5
+ in_subs->exec_method == Item_in_subselect::NOT_TRANSFORMED) // 6
{
- // psergey-todo: duplicated_subselect_card_check: where it's done?
- if (!in_subs->is_correlated && // 4
- in_subs->exec_method == Item_in_subselect::NOT_TRANSFORMED) // 5
in_subs->exec_method= Item_in_subselect::MATERIALIZATION;
}
[Maria-developers] bzr commit into file:///home/tsk/mprog/src/5.3-mwl68/ branch (timour:2767)
by timour@askmonty.org 11 Mar '10
by timour@askmonty.org 11 Mar '10
11 Mar '10
#At file:///home/tsk/mprog/src/5.3-mwl68/ based on revid:timour@askmonty.org-20100309103615-dzmm6xt7ye5xfs25
2767 timour(a)askmonty.org 2010-03-11
MWL#68 Subquery optimization: Efficient NOT IN execution with NULLs
This patch does three things:
- It adds the possibility to force the execution of top-level [NOT] IN
subquery predicates via the IN=>EXISTS transformation. This is done by
setting both optimizer switches partial_match_rowid_merge and
partial_match_table_scan to "off".
- It adjusts all test cases where the complete optimizer_switch is
selected because now we have two more switches.
- For those test cases where the plan changes because of the new available
strategies, we switch off both partial match strategies in order to
force the "old" IN=>EXISTS strategy. This is done because most of these
test cases specifically test bugs in this strategy.
@ sql/opt_subselect.cc
Adds the possibility to force the execution of top-level [NOT] IN
subquery predicates via the IN=>EXISTS transformation. This is done by
setting both optimizer switches partial_match_rowid_merge and
partial_match_table_scan to "off".
modified:
mysql-test/include/mix1.inc
mysql-test/r/index_merge_myisam.result
mysql-test/r/innodb_mysql.result
mysql-test/r/myisam_mrr.result
mysql-test/r/ps.result
mysql-test/r/subselect.result
mysql-test/r/subselect3.result
mysql-test/r/subselect3_jcl6.result
mysql-test/r/subselect_no_mat.result
mysql-test/r/subselect_no_opts.result
mysql-test/r/subselect_no_semijoin.result
mysql-test/r/subselect_sj.result
mysql-test/r/subselect_sj_jcl6.result
mysql-test/t/ps.test
mysql-test/t/subselect.test
mysql-test/t/subselect3.test
sql/opt_subselect.cc
=== modified file 'mysql-test/include/mix1.inc'
--- a/mysql-test/include/mix1.inc 2009-09-15 06:08:54 +0000
+++ b/mysql-test/include/mix1.inc 2010-03-11 21:43:31 +0000
@@ -1177,8 +1177,11 @@ DROP TABLE t1;
create table t1 (a bit(1) not null,b int) engine=myisam;
create table t2 (c int) engine=innodb;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch='partial_match_rowid_merge=off,partial_match_table_scan=off';
explain
select b from t1 where a not in (select b from t1,t2 group by a) group by a;
+set optimizer_switch=@save_optimizer_switch;
DROP TABLE t1,t2;
--echo End of 5.0 tests
=== modified file 'mysql-test/r/index_merge_myisam.result'
--- a/mysql-test/r/index_merge_myisam.result 2010-01-17 14:51:10 +0000
+++ b/mysql-test/r/index_merge_myisam.result 2010-03-11 21:43:31 +0000
@@ -1419,19 +1419,19 @@ drop table t1;
#
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='index_merge=off,index_merge_union=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='index_merge_union=on';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=off,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=off,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,index_merge_sort_union=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=off,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=off,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch=4;
ERROR 42000: Variable 'optimizer_switch' can't be set to the value of '4'
set optimizer_switch=NULL;
@@ -1458,21 +1458,21 @@ set optimizer_switch=default;
set optimizer_switch='index_merge=off,index_merge_union=off,default';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=off,index_merge_union=off,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch=default;
select @@global.optimizer_switch;
@@global.optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set @@global.optimizer_switch=default;
select @@global.optimizer_switch;
@@global.optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
#
# Check index_merge's @@optimizer_switch flags
#
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
create table t0 (a int);
insert into t0 values (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);
create table t1 (a int, b int, c int, filler char(100),
@@ -1582,5 +1582,5 @@ id select_type table type possible_keys
set optimizer_switch=default;
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
drop table t0, t1;
=== modified file 'mysql-test/r/innodb_mysql.result'
--- a/mysql-test/r/innodb_mysql.result 2009-12-15 07:16:46 +0000
+++ b/mysql-test/r/innodb_mysql.result 2010-03-11 21:43:31 +0000
@@ -1425,12 +1425,15 @@ DROP TABLE t1;
#
create table t1 (a bit(1) not null,b int) engine=myisam;
create table t2 (c int) engine=innodb;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch='partial_match_rowid_merge=off,partial_match_table_scan=off';
explain
select b from t1 where a not in (select b from t1,t2 group by a) group by a;
id select_type table type possible_keys key key_len ref rows Extra
1 PRIMARY NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
2 DEPENDENT SUBQUERY t1 system NULL NULL NULL NULL 0 const row not found
2 DEPENDENT SUBQUERY t2 ALL NULL NULL NULL NULL 1
+set optimizer_switch=@save_optimizer_switch;
DROP TABLE t1,t2;
End of 5.0 tests
CREATE TABLE `t2` (
=== modified file 'mysql-test/r/myisam_mrr.result'
--- a/mysql-test/r/myisam_mrr.result 2010-01-17 14:51:10 +0000
+++ b/mysql-test/r/myisam_mrr.result 2010-03-11 21:43:31 +0000
@@ -394,7 +394,7 @@ drop table t0, t1;
# - engine_condition_pushdown does not affect ICP
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
create table t0 (a int);
insert into t0 values (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);
create table t1 (a int, b int, key(a));
=== modified file 'mysql-test/r/ps.result'
--- a/mysql-test/r/ps.result 2009-05-27 15:19:44 +0000
+++ b/mysql-test/r/ps.result 2010-03-11 21:43:31 +0000
@@ -149,6 +149,8 @@ c29 longblob, c30 longtext, c31 enum('on
c32 set('monday', 'tuesday', 'wednesday')
) engine = MYISAM ;
create table t2 like t1;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
set @stmt= ' explain SELECT (SELECT SUM(c1 + c12 + 0.0) FROM t2 where (t1.c2 - 0e-3) = t2.c2 GROUP BY t1.c15 LIMIT 1) as scalar_s, exists (select 1.0e+0 from t2 where t2.c3 * 9.0000000000 = t1.c4) as exists_s, c5 * 4 in (select c6 + 0.3e+1 from t2) as in_s, (c7 - 4, c8 - 4) in (select c9 + 4.0, c10 + 40e-1 from t2) as in_row_s FROM t1, (select c25 x, c32 y from t2) tt WHERE x * 1 = c25 ' ;
prepare stmt1 from @stmt ;
execute stmt1 ;
@@ -177,6 +179,7 @@ id select_type table type possible_keys
2 DEPENDENT SUBQUERY NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
deallocate prepare stmt1;
drop tables t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
set @arg00=1;
prepare stmt1 from ' create table t1 (m int) as select 1 as m ' ;
execute stmt1 ;
=== modified file 'mysql-test/r/subselect.result'
--- a/mysql-test/r/subselect.result 2010-02-17 21:59:41 +0000
+++ b/mysql-test/r/subselect.result 2010-03-11 21:43:31 +0000
@@ -1,4 +1,6 @@
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
select (select 2);
(select 2)
2
@@ -4803,4 +4805,5 @@ SELECT 1 FROM t1 GROUP BY
1
1
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
End of 5.1 tests.
=== modified file 'mysql-test/r/subselect3.result'
--- a/mysql-test/r/subselect3.result 2010-02-17 10:05:27 +0000
+++ b/mysql-test/r/subselect3.result 2010-03-11 21:43:31 +0000
@@ -63,12 +63,15 @@ Handler_read_rnd_next 11
select ' ^ This must show 11' Z;
Z
^ This must show 11
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
explain extended select a in (select max(ie) from t1 where oref=4 group by grp) from t3;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t3 ALL NULL NULL NULL NULL 2 100.00
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 6 100.00 Using where; Using temporary; Using filesort
Warnings:
Note 1003 select <in_optimizer>(`test`.`t3`.`a`,<exists>(select max(`test`.`t1`.`ie`) AS `max(ie)` from `test`.`t1` where (`test`.`t1`.`oref` = 4) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t3`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`)))))) AS `a in (select max(ie) from t1 where oref=4 group by grp)` from `test`.`t3`
+set @@optimizer_switch=@save_optimizer_switch;
drop table t1, t2, t3;
create table t1 (a int, oref int, key(a));
insert into t1 values
@@ -692,6 +695,8 @@ a MAX(b) test
2 3 h
3 4 i
DROP TABLE t1, t2;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
CREATE TABLE t1 (a int);
CREATE TABLE t2 (b int, PRIMARY KEY(b));
INSERT INTO t1 VALUES (1), (NULL), (4);
@@ -759,6 +764,7 @@ id select_type table type possible_keys
1 PRIMARY t1 ALL NULL NULL NULL NULL 4 Using where
2 DEPENDENT SUBQUERY t2 unique_subquery PRIMARY PRIMARY 4 func 1 Using index; Using where
DROP TABLE t1, t2;
+set @@optimizer_switch=@save_optimizer_switch;
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES(1);
CREATE TABLE t2 (placeholder CHAR(11));
@@ -960,7 +966,7 @@ i1 i2
# Baseline:
SHOW STATUS LIKE '%Handler_read_rnd_next';
Variable_name Value
-Handler_read_rnd_next 17
+Handler_read_rnd_next 18
INSERT INTO t1 VALUES (NULL, NULL);
FLUSH STATUS;
@@ -977,7 +983,7 @@ i1 i2
# (read record from t1, but do not read from t2)
SHOW STATUS LIKE '%Handler_read_rnd_next';
Variable_name Value
-Handler_read_rnd_next 18
+Handler_read_rnd_next 19
DROP TABLE t1,t2;
End of 5.1 tests
CREATE TABLE t1 (
=== modified file 'mysql-test/r/subselect3_jcl6.result'
--- a/mysql-test/r/subselect3_jcl6.result 2010-02-17 10:47:55 +0000
+++ b/mysql-test/r/subselect3_jcl6.result 2010-03-11 21:43:31 +0000
@@ -67,12 +67,15 @@ Handler_read_rnd_next 11
select ' ^ This must show 11' Z;
Z
^ This must show 11
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
explain extended select a in (select max(ie) from t1 where oref=4 group by grp) from t3;
id select_type table type possible_keys key key_len ref rows filtered Extra
1 PRIMARY t3 ALL NULL NULL NULL NULL 2 100.00
2 DEPENDENT SUBQUERY t1 ALL NULL NULL NULL NULL 6 100.00 Using where; Using temporary; Using filesort
Warnings:
Note 1003 select <in_optimizer>(`test`.`t3`.`a`,<exists>(select max(`test`.`t1`.`ie`) AS `max(ie)` from `test`.`t1` where (`test`.`t1`.`oref` = 4) group by `test`.`t1`.`grp` having trigcond((<cache>(`test`.`t3`.`a`) = <ref_null_helper>(max(`test`.`t1`.`ie`)))))) AS `a in (select max(ie) from t1 where oref=4 group by grp)` from `test`.`t3`
+set @@optimizer_switch=@save_optimizer_switch;
drop table t1, t2, t3;
create table t1 (a int, oref int, key(a));
insert into t1 values
@@ -696,6 +699,8 @@ a MAX(b) test
2 3 h
3 4 i
DROP TABLE t1, t2;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
CREATE TABLE t1 (a int);
CREATE TABLE t2 (b int, PRIMARY KEY(b));
INSERT INTO t1 VALUES (1), (NULL), (4);
@@ -763,6 +768,7 @@ id select_type table type possible_keys
1 PRIMARY t1 ALL NULL NULL NULL NULL 4 Using where
2 DEPENDENT SUBQUERY t2 unique_subquery PRIMARY PRIMARY 4 func 1 Using index; Using where
DROP TABLE t1, t2;
+set @@optimizer_switch=@save_optimizer_switch;
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES(1);
CREATE TABLE t2 (placeholder CHAR(11));
@@ -964,7 +970,7 @@ i1 i2
# Baseline:
SHOW STATUS LIKE '%Handler_read_rnd_next';
Variable_name Value
-Handler_read_rnd_next 17
+Handler_read_rnd_next 18
INSERT INTO t1 VALUES (NULL, NULL);
FLUSH STATUS;
@@ -981,7 +987,7 @@ i1 i2
# (read record from t1, but do not read from t2)
SHOW STATUS LIKE '%Handler_read_rnd_next';
Variable_name Value
-Handler_read_rnd_next 18
+Handler_read_rnd_next 19
DROP TABLE t1,t2;
End of 5.1 tests
CREATE TABLE t1 (
=== modified file 'mysql-test/r/subselect_no_mat.result'
--- a/mysql-test/r/subselect_no_mat.result 2010-02-21 07:33:54 +0000
+++ b/mysql-test/r/subselect_no_mat.result 2010-03-11 21:43:31 +0000
@@ -1,8 +1,10 @@
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='materialization=off';
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
select (select 2);
(select 2)
2
@@ -4807,8 +4809,9 @@ SELECT 1 FROM t1 GROUP BY
1
1
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
End of 5.1 tests.
set optimizer_switch=default;
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
=== modified file 'mysql-test/r/subselect_no_opts.result'
--- a/mysql-test/r/subselect_no_opts.result 2010-02-21 07:33:54 +0000
+++ b/mysql-test/r/subselect_no_opts.result 2010-03-11 21:43:31 +0000
@@ -1,8 +1,10 @@
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='materialization=off,semijoin=off';
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
select (select 2);
(select 2)
2
@@ -4807,8 +4809,9 @@ SELECT 1 FROM t1 GROUP BY
1
1
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
End of 5.1 tests.
set optimizer_switch=default;
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
=== modified file 'mysql-test/r/subselect_no_semijoin.result'
--- a/mysql-test/r/subselect_no_semijoin.result 2010-02-21 07:33:54 +0000
+++ b/mysql-test/r/subselect_no_semijoin.result 2010-03-11 21:43:31 +0000
@@ -1,8 +1,10 @@
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='semijoin=off';
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
select (select 2);
(select 2)
2
@@ -4807,8 +4809,9 @@ SELECT 1 FROM t1 GROUP BY
1
1
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
End of 5.1 tests.
set optimizer_switch=default;
show variables like 'optimizer_switch';
Variable_name Value
-optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
=== modified file 'mysql-test/r/subselect_sj.result'
--- a/mysql-test/r/subselect_sj.result 2010-02-24 11:33:42 +0000
+++ b/mysql-test/r/subselect_sj.result 2010-03-11 21:43:31 +0000
@@ -202,39 +202,39 @@ BUG#37120 optimizer_switch allowable val
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,materialization=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off,semijoin=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,materialization=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch=default;
drop table t0, t1, t2;
drop table t10, t11, t12;
=== modified file 'mysql-test/r/subselect_sj_jcl6.result'
--- a/mysql-test/r/subselect_sj_jcl6.result 2010-03-07 15:41:45 +0000
+++ b/mysql-test/r/subselect_sj_jcl6.result 2010-03-11 21:43:31 +0000
@@ -206,39 +206,39 @@ BUG#37120 optimizer_switch allowable val
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,materialization=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off,semijoin=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=on,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,materialization=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,semijoin=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=off
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=on,semijoin=off,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch='default,materialization=off,loosescan=off';
select @@optimizer_switch;
@@optimizer_switch
-index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=on
+index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_condition_pushdown=on,firstmatch=on,loosescan=off,materialization=off,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on
set optimizer_switch=default;
drop table t0, t1, t2;
drop table t10, t11, t12;
=== modified file 'mysql-test/t/ps.test'
--- a/mysql-test/t/ps.test 2009-05-27 15:19:44 +0000
+++ b/mysql-test/t/ps.test 2010-03-11 21:43:31 +0000
@@ -163,6 +163,9 @@ create table t1
) engine = MYISAM ;
create table t2 like t1;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
set @stmt= ' explain SELECT (SELECT SUM(c1 + c12 + 0.0) FROM t2 where (t1.c2 - 0e-3) = t2.c2 GROUP BY t1.c15 LIMIT 1) as scalar_s, exists (select 1.0e+0 from t2 where t2.c3 * 9.0000000000 = t1.c4) as exists_s, c5 * 4 in (select c6 + 0.3e+1 from t2) as in_s, (c7 - 4, c8 - 4) in (select c9 + 4.0, c10 + 40e-1 from t2) as in_row_s FROM t1, (select c25 x, c32 y from t2) tt WHERE x * 1 = c25 ' ;
prepare stmt1 from @stmt ;
execute stmt1 ;
@@ -171,6 +174,8 @@ explain SELECT (SELECT SUM(c1 + c12 + 0.
deallocate prepare stmt1;
drop tables t1,t2;
+set @@optimizer_switch=@save_optimizer_switch;
+
#
# parameters from variables (for field creation)
#
=== modified file 'mysql-test/t/subselect.test'
--- a/mysql-test/t/subselect.test 2010-01-17 20:52:20 +0000
+++ b/mysql-test/t/subselect.test 2010-03-11 21:43:31 +0000
@@ -11,6 +11,9 @@
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t11,t12;
--enable_warnings
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
select (select 2);
explain extended select (select 2);
SELECT (SELECT 1) UNION SELECT (SELECT 2);
@@ -4061,4 +4064,6 @@ SELECT 1 FROM t1 GROUP BY
(SELECT LAST_INSERT_ID() FROM t1 ORDER BY MIN(a) ASC LIMIT 1);
DROP TABLE t1;
+set @@optimizer_switch=@save_optimizer_switch;
+
--echo End of 5.1 tests.
=== modified file 'mysql-test/t/subselect3.test'
--- a/mysql-test/t/subselect3.test 2010-01-17 14:51:10 +0000
+++ b/mysql-test/t/subselect3.test 2010-03-11 21:43:31 +0000
@@ -59,9 +59,13 @@ select a in (select max(ie) from t1 wher
show status like 'Handler_read_rnd_next';
select ' ^ This must show 11' Z;
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
+
# This must show trigcond:
explain extended select a in (select max(ie) from t1 where oref=4 group by grp) from t3;
+set @@optimizer_switch=@save_optimizer_switch;
drop table t1, t2, t3;
#
@@ -529,6 +533,9 @@ SELECT a, MAX(b),
DROP TABLE t1, t2;
+# The next three test cases must be executed with the IN=>EXISTS strategy
+set @save_optimizer_switch=@@optimizer_switch;
+set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
#
# Bug #27870: crash of an equijoin query with WHERE condition containing
@@ -588,6 +595,8 @@ EXPLAIN SELECT a FROM t1 WHERE a NOT IN
DROP TABLE t1, t2;
+set @@optimizer_switch=@save_optimizer_switch;
+
#
# Bug #34763: item_subselect.cc:1235:Item_in_subselect::row_value_transformer:
# Assertion failed, unexpected error message:
=== modified file 'sql/opt_subselect.cc'
--- a/sql/opt_subselect.cc 2010-03-09 10:36:15 +0000
+++ b/sql/opt_subselect.cc 2010-03-11 21:43:31 +0000
@@ -187,7 +187,11 @@ int check_and_do_in_subquery_rewrites(JO
does not call setup_subquery_materialization(). We could make
SELECT ... FROM DUAL call that function but that doesn't seem
to be the case that is worth handling.
- 4. Subquery is non-correlated
+ 4. Either the subquery predicate is a top-level predicate, or at
+ least one partial match strategy is enabled. If no partial match
+ strategy is enabled, then materialization cannot be used for
+ non-top-level queries because it cannot handle NULLs correctly.
+ 5. Subquery is non-correlated
TODO:
This is an overly restrictive condition. It can be extended to:
(Subquery is non-correlated ||
@@ -195,13 +199,13 @@ int check_and_do_in_subquery_rewrites(JO
(Subquery is correlated to the immediate outer query &&
Subquery !contains {GROUP BY, ORDER BY [LIMIT],
aggregate functions}) && subquery predicate is not under "NOT IN"))
- 5. No execution method was already chosen (by a prepared statement).
+ 6. No execution method was already chosen (by a prepared statement).
(*) The subquery must be part of a SELECT statement. The current
condition also excludes multi-table update statements.
- We have to determine whether we will perform subquery materialization
- before calling the IN=>EXISTS transformation, so that we know whether to
+ Determine whether we will perform subquery materialization before
+ calling the IN=>EXISTS transformation, so that we know whether to
perform the whole transformation or only that part of it which wraps
Item_in_subselect in an Item_in_optimizer.
*/
@@ -211,11 +215,14 @@ int check_and_do_in_subquery_rewrites(JO
select_lex->master_unit()->first_select()->leaf_tables && // 3
thd->lex->sql_command == SQLCOM_SELECT && // *
select_lex->outer_select()->leaf_tables && // 3A
- subquery_types_allow_materialization(in_subs))
+ subquery_types_allow_materialization(in_subs) &&
+ // psergey-todo: duplicated_subselect_card_check: where it's done?
+ (in_subs->is_top_level_item() ||
+ optimizer_flag(thd, OPTIMIZER_SWITCH_PARTIAL_MATCH_ROWID_MERGE) ||
+ optimizer_flag(thd, OPTIMIZER_SWITCH_PARTIAL_MATCH_TABLE_SCAN)) &&//4
+ !in_subs->is_correlated && // 5
+ in_subs->exec_method == Item_in_subselect::NOT_TRANSFORMED) // 6
{
- // psergey-todo: duplicated_subselect_card_check: where it's done?
- if (!in_subs->is_correlated && // 4
- in_subs->exec_method == Item_in_subselect::NOT_TRANSFORMED) // 5
in_subs->exec_method= Item_in_subselect::MATERIALIZATION;
}
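Pulled straight out of the result-file diffs above, the pattern every affected test now follows is simply: save the switch, turn both partial-match strategies off so the subqueries keep being planned with the IN=>EXISTS strategy, and restore it afterwards.

set @save_optimizer_switch=@@optimizer_switch;
set @@optimizer_switch="partial_match_rowid_merge=off,partial_match_table_scan=off";
# ... statements whose plans must keep using the IN=>EXISTS strategy ...
set @@optimizer_switch=@save_optimizer_switch;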
Hi!
Some of your requests contradict Monty's, so I think this should be
discussed somehow.
On 11 March 2010, at 12:30, Sergei Golubchik wrote:
> Hi, Sanja!
>
> Here's the review, below:
>
> Summary:
>
> 1. please, store options together with the objects they describe, not
> separately.
I tried to do so in my very first implementation and IMHO it was my
mistake. The same applies to the way elements are stored - it should be
the same for parsing and reading. During parsing we use one set of
structures and classes, and for storing in TABLE/TABLE_SHARE another.
Moving data between them (taking into account the different mem_roots
where they are allocated) was quite tricky. I can do it again if you both
think that it is really important.
> 2. Unknown option should be an error by default.
OK. The only problem is that it contradicts Monty's requirements.
Our initial decision was to issue an error if the option was added explicitly.
The other problem is that it is very difficult to implement - we write
options to .frm first, then read them back and pass them to the engine.
I have no idea how to pass this information via/over the .frm.
> 3. use something my_getopt-like as we discussed, don't force every
> engine to parse its options
I can add such a function for users to use, but it will be their choice
whether to use it or not - is that OK?
> 4. make options immutable to avoid copying them in ::clone
I do not know a way to do that if they have to be allocated in different
mem_roots.
> 5. don't check for changed options in alter table with your
> check_if_incompatible_data. let the engine do that.
This and point 8 require big changes to the engine and to ALTER TABLE.
Monty's requirement was not to touch the current code. I would be glad if
you two discussed it and agreed on a non-contradicting requirement.
> 6. parser: use ident, not IDENT_sys
OK
> 7. parser: make the equal sign optional
I have some doubts that this is doable:
DATA DIRECTORY TEST VALUE ...
Does it mean:
DATA = DIRECTORY TEST = VALUE ...
or
DATA DIRECTORY = TEST VALUE ... ? - error
(ALTER TABLE uses create_table_options_space_separated list of options)
Another problem is whether we should store old options the new way, the
old way, or both (I think in this case both).
> 8. few existing options, like row_format, insert_method, checksum,
> delay_key_write, key_block_size, min_rows/max_rows, avg_row_length,
> tablespace, connection, pack_keys could be moved into storage
> engines
> out of the parser.
See above.
> 9. make sure your code works (and tested) with table options specified
> per partition/subpartition
OK.
> 10. misc details, like using 'changed' or unnecessary complex encoding
> of options in the frm file, see below.
>
>> === added file 'mysql-test/r/create_options.result'
>> --- mysql-test/r/create_options.result 1970-01-01 00:00:00 +0000
>> +++ mysql-test/r/create_options.result 2010-03-04 20:46:55 +0000
>> @@ -0,0 +1,197 @@
>> +drop table if exists t1;
>> +create table t1 (a int fkey1=v1, key akey (a) kkey1=v1) tkey1=1v1
>> TKEY1=NULL tkey1=1v2 tkey2=2v1 tkey3=3v1;
>> +Warnings:
>> +Warning 1650 Unused option 'tkey1'='1v2'
>> +Warning 1650 Unused option 'tkey2'='2v1'
>> +Warning 1650 Unused option 'tkey3'='3v1'
>> +Warning 1651 Unused option 'fkey1'='v1' of field 'a'
>> +Warning 1652 Unused option 'kkey1'='v1' of key 'akey'
>
> 1. Better "unknown" or "unsupported" e.g.
>
> Unknown option 'tkey1'
> Unsupported option 'fkey1' specified for field 'a'
> Invalid option 'kkey1' used for key 'akey'
>
> no, "invalid" is bad here, scratch that
ok
>
> 2. why there's no warning for TKEY1=NULL ?
Because it means removing the option.
>
>> +drop table t1;
>> +create table t1 (a int fkey1=v1, key akey (a) kkey1=v1) tkey1=1v1
>> tkey1=1v2 TKEY1=NULL tkey1=1v1 tkey1=1v2 tkey2=2v1 tkey3=3v1;
>
> I don't understand how this is different from the first test
> (and many of the tests below),
> could you please add short one-line comments to the .test file
The keys are in a different order.
> explaining what you test in each statement ?
OK
>
> also, a thought about "warning vs errors":
> making warnings for typos and unknown options is one of the most
> disliked features in MySQL - judging from the number of bugreports
> (bug reports about USE HASH/BTREE, mind you - only a couple of places
> where MySQL is promiscuous like that, guess what will happen when your
> patch will take it to a whole new level!).
>
> moving engines, and so on, I know - but most users don't care.
>
> STRICT mode is too strict here, I think, it adds too much strictness
> everywhere. What about adding a special mode that's only "strict" in
> create
> table (and alter table - user specified part) ? That should be ON by
> default
> (or, rather, a negative mode should be OFF by default).
>
> In other words - I want the patch to be optimized (performance, and
> user
> experience) for the common case, not to boundary cases. And the common
> case, I believe, is the one when a user does not change engines all
> the
> time. We support the boundary case, yes, but optimize for the common
> one.
You remember that I was also for errors, but Monty still wants warnings.
Also there is a problem with implementing the approach we agreed on (see
above about ALTER TABLE).
>> +Warnings:
>> +Warning 1650 Unused option 'tkey1'='1v2'
>> +Warning 1650 Unused option 'tkey2'='2v1'
>> +Warning 1650 Unused option 'tkey3'='3v1'
>> +Warning 1651 Unused option 'fkey1'='v1' of field 'a'
>> +Warning 1652 Unused option 'kkey1'='v1' of key 'akey'
>> +drop table t1;
>> +create table t1 (a int fkey1=v1, key akey (a) kkey1=v1) tkey1=1v1
>> tkey1=1v2 TKEY1='NULL' tkey2=2v1 tkey3=3v1;
> ...
>> === added file 'mysql-test/t/create_options_example.test'
>> --- mysql-test/t/create_options_example.test 1970-01-01 00:00:00
>> +0000
>> +++ mysql-test/t/create_options_example.test 2010-03-04 20:46:55
>> +0000
>> @@ -0,0 +1,16 @@
>> +--source include/have_example_plugin.inc
>> +
>> +--disable_warnings
>> +drop table if exists t1;
>> +--enable_warnings
>> +
>> +#All vaues with warnings
>
> this should go into plugin.test or exampledb.test
Why not a separate test?
>> +create table t1 (a int ttt=xxx E=1, key akey (a) kkk=xxx ) E=1
>> ttt=xxx ttt=yyy TTT=DEFAULT mmm=CCC zzz=MMM;
>> +
>> +drop table t1;
>> +
>> +# E=1 accepted by engine
>> +create table t1 (a int ttt=xxx E=1) ENGINE=EXAMPLE E=1 ttt=xxx
>> ttt=yyy TTT=DEFAULT mmm=CCC zzz=MMM;
>> +
>> +drop table t1;
>> +
>> === modified file 'sql/Makefile.am'
>> --- sql/Makefile.am 2010-03-03 14:44:14 +0000
>> +++ sql/Makefile.am 2010-03-04 20:46:55 +0000
>> @@ -124,7 +124,7 @@ mysqld_SOURCES = sql_lex.cc sql_handler.
>> sql_plugin.cc sql_binlog.cc \
>> sql_builtin.cc sql_tablespace.cc
>> partition_info.cc \
>> sql_servers.cc event_parse_data.cc \
>> - opt_table_elimination.cc
>> + opt_table_elimination.cc
>> sql_create_options.cc
>
> please make sure that 'make distcheck' works after your changes
OK
>>
>> nodist_mysqld_SOURCES = mini_client_errors.c pack.c client.c
>> my_time.c my_user.c
>>
>> === modified file 'storage/example/ha_example.cc'
>> --- storage/example/ha_example.cc 2010-03-03 14:44:14 +0000
>> +++ storage/example/ha_example.cc 2010-03-04 20:46:55 +0000
>> @@ -836,11 +836,43 @@ ha_rows ha_example::records_in_range(uin
>> int ha_example::create(const char *name, TABLE *table_arg,
>> HA_CREATE_INFO *create_info)
>> {
>> + CREATE_OPTION *opt;
>> DBUG_ENTER("ha_example::create");
>> /*
>> This is not implemented but we want someone to be able to see
>> that it
>> works.
>> */
>> + /* Example of checking parameters for table*/
>> + if (!create_info->create_table_options)
>> + DBUG_RETURN(0);
>> + for (opt= create_info->create_table_options->table_opt.first;
>> + opt;
>> + opt= opt->next)
>> + {
>> + /* check for legal options and its legal values */
>> + if (opt->key.length == 1 &&
>> + (opt->key.str[0] == 'e' || opt->key.str[0] == 'E') &&
>> + opt->val.length == 1 &&
>> + opt->val.str[0] == '1')
>> + opt->used= 1; /* tell MariaDB that we used the only legal
>> parameter */
>> + }
>> + /* Example of checking parameters for fields*/
>> + for (Field **field= table_arg->s->field; *field; field++)
>> + {
>> + if ((*field)->create_options.first)
>> + {
>> + for (opt= (*field)->create_options.first; opt; opt= opt->next)
>> + {
>> + /* check for legal options and its legal values */
>> + if (opt->key.length == 1 &&
>> + (opt->key.str[0] == 'e' || opt->key.str[0] == 'E') &&
>> + opt->val.length == 1 &&
>> + opt->val.str[0] == '1')
>> + opt->used= 1; /* tell MariaDB that we used the only
>> legal parameter */
>> + }
>> + }
>> + }
>
> No, that's way too complex and too much code.
> *every* engine will need to do that, which means - it should be done
> in the
> server for all engines. Why you didn't use my_getopt as we originally
> discussed ?
OK (see above)
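For illustration only - nothing like this exists in the patch - the my_getopt-style approach suggested above could look roughly like the following, where the engine only declares which options it accepts and the server does the walking and validation (every name here is hypothetical):

/* Hypothetical sketch: a declarative option table the engine could export
   instead of hand-parsing create_info->create_table_options itself. */
enum example_opt_type { OPT_TYPE_BOOL, OPT_TYPE_ULL, OPT_TYPE_STRING };

struct example_table_option
{
  const char *name;                     /* option keyword, e.g. "E"        */
  enum example_opt_type type;           /* how the value should be parsed  */
  unsigned long long min_val, max_val, def_val;
};

static struct example_table_option ha_example_table_options[]=
{
  { "E", OPT_TYPE_ULL, 0, 1, 0 },       /* the only option EXAMPLE accepts */
  { 0, OPT_TYPE_BOOL, 0, 0, 0 }         /* terminator                      */
};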
>> +
>> DBUG_RETURN(0);
>> }
>>
>> === added file 'sql/sql_create_options.h'
>> --- sql/sql_create_options.h 1970-01-01 00:00:00 +0000
>> +++ sql/sql_create_options.h 2010-03-04 20:46:55 +0000
>> @@ -0,0 +1,102 @@
>> +
>> +#ifndef _SQL_CREATE_OPTIONS_H
>> +#define _SQL_CREATE_OPTIONS_H
>> +
>> +
>> +/* types of cretate options records on disk, also it is length of
>> extra data */
>
> 1. typo: create
> 2. I know what does "length of extra data" mean, but the comment does
> not help to understand it.
I just forgot to change the comment after changing the parameter (Monty
also wanted 2 bytes for the key number, just to be able to increase the
number of keys).
>> +typedef enum enum_create_options_type {
>> + CREATE_OPTION_TABLE= 0,
>> + CREATE_OPTION_KEY= 1,
>> + CREATE_OPTION_FIELD= 2
>> +} CREATE_OPTION_TYPES;
>> +
>> +typedef struct st_create_option {
>> + /* pointer to the next option or NULL */
>> + struct st_create_option *next;
>> + /* pointer to Field or KEY or NULL */
>> + void *owner;
>
> 1. better to use union { Field *, KEY *}
> 2. even better - use 'const char* name' as you don't need anything
> else
> from your fields/keys here
> 3. even better remove this 'owner' at all, you don't need it - see
> below,
> if you iterate the list of fields and keys you always know
> what field/key the option belongs to.
OK
>> + /* key and value of the option (\0 terminated)*/
>> + LEX_STRING key, val;
>> + /* used to issue warnings about unused options */
>> + my_bool used;
>> +} CREATE_OPTION;
>> +
>> +struct st_table_options;
>> +
>> +
>> +class st_create_option_list {
>
> why did you need to create your own list implementation instead of
> using
> either one that MySQL already has ?
> (hint: LIST, I_List, List, or even dynamic array)
>
> and -
> why do you need any list at all, if you store options in Fields and
> KEYs
> and simply can use the existing lists of fields and keys ?
Historically. But yes, now it would be better to use LIST (if it allows
inserting an item at the end)...
>> +public:
>> + /**
>> + pointer on the first list element
>> + */
>> + CREATE_OPTION *first;
>> + /**
>> + pointer on last list '.next' or beginning of the list in case
>> of empty list
>> +
>> + @note:
>> + If it is NULL then it is just sign of array of list end
>> + */
>> +private:
>> + CREATE_OPTION **last;
>> +public:
>> + void empty() {first= NULL; last= &first;}
>> + st_create_option_list() {empty();}
>> + st_create_option_list(const st_create_option_list &o)
>> + {
>> + if ((first= o.first))
>> + last= o.last;
>> + else
>> + last= &first;
>> + }
>> + my_bool last_opt() { return last == NULL; }
>> + friend my_bool create_option_add(st_create_option_list *options,
>> + MEM_ROOT *root,
>> + const LEX_STRING *str_key,
>> + const LEX_STRING *str_val,
>> + my_bool *changed);
>> + friend st_create_option_list
>> *create_create_options_array(MEM_ROOT *root,
>> + uint n);
>> + friend my_bool create_options_read(const uchar *buff, uint length,
>> + MEM_ROOT *root,
>> + st_table_options *opt);
>> + friend my_bool create_options_clone(MEM_ROOT *root,
>> + st_create_option_list *opts);
>> +};
>> +typedef class st_create_option_list CREATE_OPTION_LIST;
>> +
>> +
>> +typedef struct st_table_options {
>> + CREATE_OPTION_LIST table_opt; /* table options list */
>> + CREATE_OPTION_LIST *field_opt; /* fields options array */
>> + CREATE_OPTION_LIST *key_opt; /* keys options array */
>> +} TABLE_OPTIONS;
>> +
>> +CREATE_OPTION_LIST *create_create_options_array(MEM_ROOT *root,
>> uint n);
>> +TABLE_OPTIONS *create_create_options(MEM_ROOT *root, uint fields,
>> uint keys);
>> +
>> +my_bool create_options_read(const uchar *buff, uint length,
>> MEM_ROOT *root,
>> + TABLE_OPTIONS *opt);
>> +
>> +my_bool create_option_add(CREATE_OPTION_LIST *options, MEM_ROOT
>> *root,
>> + const LEX_STRING *k, const LEX_STRING *v,
>> + my_bool *chanes);
>> +
>> +ulong create_options_length(TABLE_OPTIONS *opt);
>> +
>> +void create_options_store(uchar *buff, TABLE_OPTIONS *opt);
>> +
>> +void create_options_check_unused(THD *thd,
>> + TABLE_OPTIONS *options);
>> +
>> +struct st_table_share;
>> +void create_options_binding(struct st_table_share *share);
>> +
>> +my_bool create_options_clone(MEM_ROOT *root, CREATE_OPTION_LIST
>> *opt);
>> +
>> +CREATE_OPTION_LIST *create_table_list_merge(CREATE_OPTION_LIST
>> *source,
>> + CREATE_OPTION_LIST
>> *changes,
>> + MEM_ROOT *root,
>> + my_bool *changed);
>> +my_bool is_equal_create_options(CREATE_OPTION *opt1, CREATE_OPTION
>> *opt2);
>> +
>> +#endif
>> === modified file 'sql/table.h'
>> --- sql/table.h 2010-02-12 08:47:31 +0000
>> +++ sql/table.h 2010-03-04 20:46:55 +0000
>> @@ -340,6 +340,7 @@ typedef struct st_table_share
>> #ifdef NOT_YET
>> struct st_table *open_tables; /* link to open tables */
>> #endif
>> + TABLE_OPTIONS *create_table_options; /* text options for table */
>
> do you need TABLE_OPTIONS - I mean, table, field, and key options -
> here ? TABLE_SHARE has an array of KEYs and KEYs store options
> internally (in KEY::create_options). And exactly the same
> applies to Fields.
I described it in the beginning.
>>
>> /* The following is copied to each TABLE on OPEN */
>> Field **field;
>> === modified file 'sql/structs.h'
>> --- sql/structs.h 2010-02-01 06:14:12 +0000
>> +++ sql/structs.h 2010-03-04 20:46:55 +0000
>> @@ -101,6 +101,8 @@ typedef struct st_key {
>> int bdb_return_if_eq;
>> } handler;
>> struct st_table *table;
>> + /** reference to the list of options or NULL */
>> + CREATE_OPTION_LIST create_options;
>
> eh, strictly speaking 'create_options' is not a pointer and it
> cannot be NULL.
> And it is not a reference in the C++ sense either.
>
> you could've simply said "list of options"
That is a comment I did not fix, sorry.
>> } KEY;
>>
>>
>> === modified file 'sql/handler.h'
>> --- sql/handler.h 2010-02-01 06:14:12 +0000
>> +++ sql/handler.h 2010-03-04 20:46:55 +0000
>> @@ -919,6 +919,12 @@ typedef struct st_ha_create_information
>> LEX_STRING connect_string;
>> const char *password, *tablespace;
>> LEX_STRING comment;
>> + TABLE_OPTIONS create_table_options_orig;
>> + /**
>> + Originally create_table_options points on above field, but
>> during ALTER
>> + TABLE of the options it points on new built parameters
>> + */
>> + TABLE_OPTIONS *create_table_options;
>
> after reading the patch I still don't understand why do you need
> create_table_options_orig
To avoid allocating it for a normal table; it is changed during the
ALTER TABLE process.
>> const char *data_file_name, *index_file_name;
>> const char *alias;
>> ulonglong max_rows,min_rows;
>> === modified file 'sql/sql_class.cc'
>> --- sql/sql_class.cc 2010-02-01 06:14:12 +0000
>> +++ sql/sql_class.cc 2010-03-04 20:46:55 +0000
>> @@ -109,6 +109,8 @@ Key::Key(const Key &rhs, MEM_ROOT *mem_r
>> generated(rhs.generated)
>> {
>> list_copy_and_replace_each_value(columns, mem_root);
>> + create_options= rhs.create_options;
>> + create_options_clone(mem_root, &create_options);
>
> in create_options_clone() you don't need to clone everything,
> this constructor only copies elements that can change during
> execution,
> for example field and key names don't change and don't need to be
> copied. And options don't change either, only their "used" property
> is.
> but it would be best if you would get rid of it and make options
> completely
> immutable.
There were problems, like pointers to freed memory, which went away after
this; I suspect the different mem_roots.
>> }
>>
>> /**
>> === modified file 'sql/field.h'
>> --- sql/field.h 2010-02-01 06:14:12 +0000
>> +++ sql/field.h 2010-03-04 20:46:55 +0000
>> @@ -137,6 +137,8 @@ class Field
>> struct st_table *table; // Pointer for table
>> struct st_table *orig_table; // Pointer to original table
>> const char **table_name, *field_name;
>> + /** reference to the list of options or NULL */
>
> this is neither a reference nor it can be NULL
That is an old comment.
>> + CREATE_OPTION_LIST create_options;
>> LEX_STRING comment;
>> /* Field is part of the following keys */
>> key_map key_start, part_of_key, part_of_key_not_clustered;
>> === modified file 'sql/field.cc'
>> --- sql/field.cc 2010-02-01 06:14:12 +0000
>> +++ sql/field.cc 2010-03-04 20:46:55 +0000
>> @@ -10220,6 +10225,7 @@ Create_field::Create_field(Field *old_fi
>> decimals= old_field->decimals();
>> vcol_info= old_field->vcol_info;
>> stored_in_db= old_field->stored_in_db;
>> + create_options= old_field->create_options;
>
> explain in a comment please why you don't need to copy the data
> here, and can simply assign pointers
Because the copy constructor makes a correct list assignment - would that
be a correct comment?
>
>>
>> /* Fix if the original table had 4 byte pointer blobs */
>> if (flags & BLOB_FLAG)
>> === modified file 'sql/sql_show.cc'
>> --- sql/sql_show.cc 2010-02-01 06:14:12 +0000
>> +++ sql/sql_show.cc 2010-03-04 20:46:55 +0000
>> @@ -1356,6 +1376,8 @@ int store_create_info(THD *thd, TABLE_LI
>> packet->append(STRING_WITH_LEN(" COMMENT "));
>> append_unescaped(packet, field->comment.str, field-
>> >comment.length);
>> }
>> + if (field->create_options.first)
>
> you don't need an if() here and below, append_create_options()
> can handle the case of create_options.first == 0
OK (but I will need to change the list implementation in any case).
>
>> + append_create_options(thd, packet, field-
>> >create_options.first);
>> }
>>
>> key_info= table->key_info;
>> @@ -1586,6 +1610,11 @@ int store_create_info(THD *thd, TABLE_LI
>> packet->append(STRING_WITH_LEN(" CONNECTION="));
>> append_unescaped(packet, share->connect_string.str, share-
>> >connect_string.length);
>> }
>> + /* create_table_options can be NULL for temporary tables */
>> + if (share->create_table_options &&
>
> why TABLE_SHARE::create_table_options is a pointer to something
> allocated
> on TABLE_SHARE::mem_root ? In Field and KEY it's simply
> a structure - part of the Field/KEY class, why not the same here ?
Most of the time it is a pointer to create_table_options_orig.
It is not the same here because of ALTER TABLE and the way it plays
with TABLE_SHARE.
>
>> + share->create_table_options->table_opt.first)
>> + append_create_options(thd, packet,
>> + share->create_table_options-
>> >table_opt.first);
>> append_directory(thd, packet, "DATA",
>> create_info.data_file_name);
>> append_directory(thd, packet, "INDEX",
>> create_info.index_file_name);
>> }
>> === modified file 'sql/sql_table.cc'
>> --- sql/sql_table.cc 2010-02-12 08:47:31 +0000
>> +++ sql/sql_table.cc 2010-03-04 20:46:55 +0000
>> @@ -5789,6 +5791,15 @@ compare_tables(TABLE *table,
>> DBUG_RETURN(0);
>> }
>>
>> + if (!is_equal_create_options(tmp_new_field-
>> >create_options.first,
>> + field->create_options.first))
>> + {
>
> I am not sure this should be checked on MySQL level, we don't know the
> semantics of options. I'd say this check belong to
> handler::check_if_incompatible_data() and should be implemented in the
> storage engine internally.
Monty even requested that I recreate the .frm even if only the case of a
KEY was changed (which clearly does not change semantics) - i.e. any
change == rewriting the .frm. So your requests contradict each other here
and it should be discussed (I see neither sense nor harm in such a
rewriting policy).
>> + DBUG_PRINT("info", ("Options difference in field '%s'",
>> + new_field->field_name));
>> + *need_copy_table= ALTER_TABLE_DATA_CHANGED;
>> + DBUG_RETURN(0);
>> + }
>> +
>> /* Don't pack rows in old tables if the user has requested
>> this. */
>> if (create_info->row_type == ROW_TYPE_DYNAMIC ||
>> (tmp_new_field->flags & BLOB_FLAG) ||
>> @@ -6112,6 +6125,41 @@ mysql_prepare_alter_table(THD *thd, TABL
>> }
>> restore_record(table, s->default_values); // Empty record for
>> DEFAULT
>>
>> + if (create_info->create_table_options_orig.table_opt.first)
>> + {
>> + CREATE_OPTION_LIST *res;
>> + my_bool changed= FALSE;
>> + if (!table->s->create_table_options &&
>> + !(table->s->create_table_options=
>> + create_create_options(&table->s->mem_root,
>> + table->s->fields, table->s->keys)))
>> + goto err;
>> +
>> + if (!(res=
>> + create_table_list_merge(&table->s->create_table_options-
>> >table_opt,
>> + &create_info->
>> +
>> create_table_options_orig.table_opt,
>> + thd->mem_root,
>> + &changed)))
>> + goto err;
>> + DBUG_ASSERT(res->first);
>> + create_info->create_table_options_orig.table_opt= *res;
>> +
>> + if (changed)
>> + alter_info->change_level= ALTER_TABLE_DATA_CHANGED;
>> + else
>> + {
>> + alter_info->flags&= ~ALTER_CREATE_OPT;
>> + DBUG_PRINT("info", ("Table options was not changed"));
>> + }
>> + }
>> + else
>> + if (table->s->create_table_options)
>> + create_info->create_table_options_orig.table_opt=
>> + table->s->create_table_options->table_opt;
>
> why don't you set ALTER_TABLE_DATA_CHANGED here ?
It is used as a flag from the parser only.
>
>> + else
>> + create_info->create_table_options_orig.table_opt.empty();
>> +
>> /*
>> First collect all fields from table which isn't in drop_list
>> */
>> === modified file 'sql/sql_yacc.yy'
>> --- sql/sql_yacc.yy 2010-02-01 06:14:12 +0000
>> +++ sql/sql_yacc.yy 2010-03-04 20:46:55 +0000
>> @@ -4714,6 +4718,16 @@ create_table_option:
>> Lex->create_info.used_fields|=
>> HA_CREATE_USED_TRANSACTIONAL;
>> Lex->create_info.transactional= $3;
>> }
>> + | IDENT_sys equal plugin_option_value
>
> 1. why IDENT_sys and not ident ?
OK
> 2. perhaps we should make the equal sign optional ?
> first - that's backward compatible,
> second - that would allow us to simplify the code quite a bit,
> moving existing table and index options onto a new framework
Answered above.
>> + {
>> + LEX *lex= Lex;
>> + create_option_add(&(lex->
>> + create_info.
>> +
>> create_table_options_orig.table_opt),
>> + YYTHD->mem_root, &$1, &$3,
>> + NULL);
>> + lex->alter_info.flags|= ALTER_CREATE_OPT;
>> + }
>> ;
>>
>> default_charset:
>> @@ -13827,6 +13867,32 @@ uninstall:
>> }
>> ;
>>
>> +/
>> **************************************************************************
>> +
>> + Create options
>> +
>> +
>> **************************************************************************/
>> +
>> +plugin_option_value:
>> + DEFAULT
>> + {
>> + $$.str= NULL; /* We are going to remove the option */
>> + $$.length= 0;
>> + }
>> + | NULL_SYM
>
> I don't like this trick.
> If you don't support NULLs, dont't allow users to specify them
How could it be stored as a parameter value? These semantics prevent users
from thinking that assigning NULL will make the value really NULL rather
than the string "NULL".
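In terms of the create_options test quoted at the top of the review, the intended distinction appears to be (warnings about unused options aside):

# a bare NULL (like DEFAULT) removes the previously given option:
create table t1 (a int) tkey1=v1 TKEY1=NULL;   # t1 ends up without tkey1
# a quoted string is just a value, so the option is kept:
create table t2 (a int) tkey1='NULL';          # t2 stores tkey1='NULL'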
>> + {
>> + $$.str= NULL; /* We are going to remove the option */
>> + $$.length= 0;
>> + }
>> + | IDENT_sys { $$ = $1; }
>> + | TEXT_STRING_sys { $$ = $1; }
>> + | DECIMAL_NUM { $$ = $1; }
>> + | FLOAT_NUM { $$ = $1; }
>> + | NUM { $$ = $1; }
>> + | LONG_NUM { $$ = $1; }
>> + | HEX_NUM { $$ = $1; }
>
> looks like you forgot a semicolon here
OK
>> +
>> +
>> /**
>> @} (end of group Parser)
>> */
>>
>> === added file 'sql/sql_create_options.cc'
>> --- sql/sql_create_options.cc 1970-01-01 00:00:00 +0000
>> +++ sql/sql_create_options.cc 2010-03-04 20:46:55 +0000
>> @@ -0,0 +1,646 @@
>> +
>> +#include "mysql_priv.h"
>> +
>> +/* Additional length of index for CREATE_OPTION_XXX types */
>
> the comment is confusing. I could understand from the code what
> create_options_len[] is for, but the comment did not help in the least
"Length of additional data stored for every CREATE_OPTION_XXX types "
Is it OK?
>
>> +static uint create_options_len[3]= {0, 2, 2};
>> +
>> +
>> +/**
>> + Adds new option to this list
>> +
>> + @param options pointer to the list
>> + @param root memroot to allocate option
>> + @param str_key key
>> + @param str_val value
>> + @param changed pointer to variable to report changed data
>> +
>> + @retval TRUE error
>> + @retval FALSE OK
>> +*/
>> +
>> +my_bool create_option_add(CREATE_OPTION_LIST *options, MEM_ROOT
>> *root,
>> + const LEX_STRING *str_key,
>> + const LEX_STRING *str_val,
>> + my_bool *changed)
>> +{
>> + CREATE_OPTION *cur_option, **option;
>> + char *key, *val;
>> + my_bool not_used;
>> + my_bool copy= FALSE;
>> + my_bool replace= FALSE;
>> + DBUG_ENTER("create_option_add");
>> + DBUG_PRINT("enter", ("key: '%s' value: '%s'",
>> + str_key->str, str_val->str));
>> + if (changed)
>> + copy= TRUE;
>> + else
>> + changed= ¬_used;
>> +
>> + DBUG_ASSERT(options->first ||
>> + (!options->first && options->last == &options-
>> >first));
>> + *changed= FALSE;
>
> Hmm, strange. From the way you use 'changed' I thought it should
> accumulate
> the results - I mean, it's one variable that is passed into
> create_option_add() for all options. Apparently at the end it should
> be
> true if *any* of the options has changed.
>
> But then, why do you set it to false inside create_option_add() ?
It was a special case for calls from ALTER TABLE versus from the parser.
Only ALTER TABLE was interested in changes and so required copying the
parameters.
>> +
>> + /* try to find the option first */
>> + for (option= &(options->first);
>> + *option && my_strcasecmp(system_charset_info,
>> + str_key->str, (*option)->key.str);
>> + option= &((*option)->next)) ;
>> + if (str_val->str)
>> + {
>> + /* add / replace */
>> + if (*option)
>> + {
>> + /* replace */
>> + cur_option= *option;
>> + if (!(*changed) &&
>> + (cur_option->val.length != str_val->length ||
>> + memcmp(cur_option->val.str, str_val->str, str_val-
>> >length)))
>> + {
>> + *changed= TRUE;
>> + }
>> + replace= TRUE;
>> + }
>> + else
>> + {
>> + /* add */
>> + if (!(cur_option= (CREATE_OPTION *)alloc_root(root,
>> +
>> sizeof(CREATE_OPTION))))
>> + DBUG_RETURN(TRUE);
>> + bzero(cur_option, sizeof(CREATE_OPTION));
>> + *(options->last)= cur_option;
>> + options->last= &(cur_option->next);
>> + *changed= TRUE;
>> + }
>> + if (changed || replace)
>> + {
>> + /*
>> + In case of replace we use new key in case it differ only
>> in case
>> + like 'key' and 'KEY'
>> + */
>> + if (!multi_alloc_root(root, &key, str_key->length + 1,
>> + &val, str_val->length + 1, NULL))
>> + DBUG_RETURN(TRUE);
>> + cur_option->key.str=
>> + (char *)memcpy(key, str_key->str,
>> + (cur_option->key.length= str_key->length));
>> + key[str_key->length]= '\0';
>> + cur_option->val.str=
>> + (char *)memcpy(val, str_val->str,
>> + (cur_option->val.length= str_val->length));
>> + val[str_val->length]= '\0';
>> + cur_option->used= FALSE;
>> + cur_option->owner= NULL;
>> + }
>> + DBUG_ASSERT(options->first ||
>> + (!options->first && options->last == &options-
>> >first));
>> + }
>> + else
>> + {
>> + /* remove */
>> + if (*option)
>> + {
>> + if (options->last == &((*option)->next))
>> + options->last= option; /* we deleted last option */
>> + *option= (*option)->next;
>> + *changed= TRUE;
>> + DBUG_ASSERT(options->first ||
>> + (!options->first && options->last == &options-
>> >first));
>> + }
>> + }
>> + DBUG_RETURN(FALSE);
>> +}
>> +
>> +
>> +/**
>> + Creates empty fields/keys array for table create options structure
>> +
>> + @param root memroot where to allocate memory for this
>> structure
>> + @param n number of fields/keys
>> +
>> + @return pointer to array or NULL in case of error.
>> +*/
>> +
>> +CREATE_OPTION_LIST *create_create_options_array(MEM_ROOT *root,
>> uint n)
>
> "create_create" is not a good name :(
I did not find a better one, but I am open to suggestions.
>
>> +{
>> + uint i;
>> + DBUG_ENTER("create_create_options_array");
>> + DBUG_PRINT("enter", ("Number: %u", n));
>> +
>> + CREATE_OPTION_LIST *res=
>> + (CREATE_OPTION_LIST *) alloc_root(root,
>> + sizeof(CREATE_OPTION_LIST) * (n
>> + 1));
>> + bzero(res, sizeof(CREATE_OPTION_LIST) * (n + 1));
>> + if (!res)
>> + DBUG_RETURN(NULL);
>> + for (i= 0; i < n; i++)
>> + res[i].last= &res[i].first;
>> + /* We do not do above for res[n]. It is sign of array end */
>> + DBUG_RETURN(res);
>> +}
>> +
>> +
>> +/**
>> + Reads options from this buffer
>> +
>> + @param buffer the buffer to read from
>> + @param mem_root memroot for allocating
>> + @param opt parametes to write to
>> +
>> + @retval TRUE Error
>> + @retval FALSE OK
>> +*/
>> +
>> +my_bool create_options_read(const uchar *buff, uint length,
>> MEM_ROOT *root,
>> + TABLE_OPTIONS *opt)
>> +{
>> + const uchar *buff_end= buff + length;
>> + DBUG_ENTER("create_options_read");
>> + while (buff < buff_end)
>> + {
>> + CREATE_OPTION *option;
>> + CREATE_OPTION_TYPES type;
>> + uint index= 0;
>> +
>> + if (!(option= (CREATE_OPTION *) alloc_root(root,
>> sizeof(CREATE_OPTION))))
>> + DBUG_RETURN(TRUE);
>> +
>> + DBUG_ASSERT(buff + 4 <= buff_end);
>> + option->val.length= uint2korr(buff);
>> + option->key.length= buff[2];
>> + option->next= NULL;
>> + type= (CREATE_OPTION_TYPES)buff[3];
>> + buff+= 4;
>> + switch (type) {
>> + case CREATE_OPTION_FIELD:
>
> interesting encoding. so basically you support the case when field,
> key, and table options are all written interleaved:
>
> <table option><key 1 option><field 5 option><table option><field 3
> option><key 4 option>...
>
> why the heck do you want to support it ?
Could you propose another encoding, taking into account that some fields,
keys, and tables have no parameters while others have several?
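For reference, from create_options_read() below and the length comment later in the patch, each serialized option record appears to be laid out as:

  2 bytes    value length (uint2korr)
  1 byte     key length
  1 byte     record type: CREATE_OPTION_TABLE / _KEY / _FIELD
  0/2 bytes  key or field index (absent for table options)
  key bytes, then value bytes (no terminators on disk)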
>> + index= uint2korr(buff);
>> + buff+= 2;
>> + *(opt->field_opt[index].last)= option;
>> + opt->field_opt[index].last= &option->next;
>> + break;
>> + case CREATE_OPTION_KEY:
>> + index= uint2korr(buff);
>> + buff+= 2;
>> + *(opt->key_opt[index].last)= option;
>> + opt->key_opt[index].last= &option->next;
>> + break;
>> + case CREATE_OPTION_TABLE:
>> + /* table */
>> + *(opt->table_opt.last)= option;
>> + opt->table_opt.last= &option->next;
>> + break;
>> + default:
>> + DBUG_ASSERT(0);
>> + }
>> + if (!(option->key.str= strmake_root(root, (const char*)buff,
>> + option->key.length)))
>> + DBUG_RETURN(TRUE);
>> + buff+= option->key.length;
>> + if (!(option->val.str= strmake_root(root, (const char*)buff,
>> + option->val.length)))
>> + DBUG_RETURN(TRUE);
>> + buff+= option->val.length;
>> + option->used= FALSE;
>> + option->owner= NULL;
>> + DBUG_PRINT("info", ("type: %u index: %u key: '%s' value:
>> '%s'",
>> + (uint) type, (uint) index,
>> + option->key.str, option->val.str));
>> + }
>> + DBUG_RETURN(FALSE);
>> +}
>> +
>> +/**
>> + Calculates length of saved image of the option lists
>> +
>> + @param opt list of options
>> + @param extra_length type of the record
>
> eh, extra_length is not really a "type of the record", is it ?
It was, but you are right, it should be fixed.
>> +
>> + @return length
>> +*/
>> +
>> +static ulong create_options_list_length(CREATE_OPTION_LIST *opts,
>> int extra_length)
>> +{
>> + CREATE_OPTION *opt;
>> + ulong res= 0;
>> + DBUG_ENTER("create_options_list_length");
>> + for (opt= opts->first; opt != NULL; opt= opt->next)
>> + {
>> + DBUG_PRINT("info", ("key: '%s' value: '%s'",
>> + (opt->key.str ? opt->key.str : "<NULL>"),
>> + (opt->val.str ? opt->val.str : "<NULL>")));
>> + DBUG_ASSERT(opt->key.length);
>> + /*
>> + length of disk for every record:
>> + 2 bytes - value length
>> + 1 byte - key length
>> + 1 byte - record type
>> + 0/2 bytes - none/key number/field number
>> + */
>> + res+= 2 + 1 + 1 + extra_length + opt->key.length + opt-
>> >val.length;
>> + }
>> + DBUG_RETURN(res);
>> +}
>> +
>> +/**
>> + Calculates length of saved image of the all options of the table
>> +
>> + @param opts table of options
>> +
>> + @return length
>> +*/
>> +
>> +ulong create_options_length(TABLE_OPTIONS *opt)
>> +{
>> + CREATE_OPTION_LIST *opts;
>> + ulong res;
>> + DBUG_ENTER("create_options_length");
>> +
>> + res=
>> + (opt->table_opt.first ?
>> + create_options_list_length(&opt->table_opt,
>> +
>> create_options_len[CREATE_OPTION_TABLE]):
>> + 0);
>> + if (opt->field_opt)
>> + {
>> + for (opts= opt->field_opt; !opts->last_opt(); opts++)
>
> why wouldn't you simply iterate over an array of the fixed length -
> you know how many fields and keys are there. And you wouldn't need
> this "invalid list" array element at the end.
To avoid knowing too much about other structures and classes.
> even better - as I wrote above, keep options together with fields/
> keys only
> and don't maintain a separate array of them.
I explained what problems that brings; if you think it is vitally
important I will do it.
>> + res+=
>> + create_options_list_length(opts,
>> +
>> create_options_len[CREATE_OPTION_FIELD]);
>> + }
>> + if (opt->key_opt)
>> + {
>> + for (opts= opt->key_opt; !opts->last_opt(); opts++)
>> + res+=
>> + create_options_list_length(opts,
>> +
>> create_options_len[CREATE_OPTION_KEY]);
>> + }
>> + DBUG_RETURN(res);
>> +}
>
>
> Regards,
> Sergei
[Maria-developers] Rev 2734: Maria WL#61 in file:///Users/bell/maria/bzr/work-maria-5.2-engine/
by sanja@askmonty.org 11 Mar '10
At file:///Users/bell/maria/bzr/work-maria-5.2-engine/
------------------------------------------------------------
revno: 2734
revision-id: sanja(a)askmonty.org-20100311150203-mg6478pobnln5x22
parent: psergey(a)askmonty.org-20091202142609-18bp41q8mejxl47t
committer: sanja(a)askmonty.org
branch nick: work-maria-5.2-engine
timestamp: Thu 2010-03-11 17:02:03 +0200
message:
Maria WL#61
Interface for maria extensions.
Alternative plugin interface with additional info (maturity and string version).
=== modified file 'CMakeLists.txt'
--- a/CMakeLists.txt 2009-10-03 19:24:13 +0000
+++ b/CMakeLists.txt 2010-03-11 15:02:03 +0000
@@ -250,7 +250,7 @@
ENDIF(WITH_${ENGINE}_STORAGE_ENGINE AND MYSQL_PLUGIN_STATIC)
IF (ENGINE_BUILD_TYPE STREQUAL "STATIC")
- SET (mysql_plugin_defs "${mysql_plugin_defs},builtin_${PLUGIN_NAME}_plugin")
+ SET (maria_plugin_defs "${maria_plugin_defs},builtin_maria_${PLUGIN_NAME}_plugin")
SET (MYSQLD_STATIC_ENGINE_LIBS ${MYSQLD_STATIC_ENGINE_LIBS} ${PLUGIN_NAME})
SET (STORAGE_ENGINE_DEFS "${STORAGE_ENGINE_DEFS} -DWITH_${ENGINE}_STORAGE_ENGINE")
SET (WITH_${ENGINE}_STORAGE_ENGINE TRUE)
@@ -268,7 +268,7 @@
# Special handling for partition(not really pluggable)
IF(NOT WITHOUT_PARTITION_STORAGE_ENGINE)
SET (STORAGE_ENGINE_DEFS "${STORAGE_ENGINE_DEFS} -DWITH_PARTITION_STORAGE_ENGINE")
- SET (mysql_plugin_defs "${mysql_plugin_defs},builtin_partition_plugin")
+ SET (maria_plugin_defs "${maria_plugin_defs},builtin_maria_partition_plugin")
ENDIF(NOT WITHOUT_PARTITION_STORAGE_ENGINE)
# Special handling for tmp tables with the maria engine
=== modified file 'config/ac-macros/plugins.m4'
--- a/config/ac-macros/plugins.m4 2009-04-25 10:05:32 +0000
+++ b/config/ac-macros/plugins.m4 2010-03-11 15:02:03 +0000
@@ -460,7 +460,7 @@
])
])
])
- mysql_plugin_defs="$mysql_plugin_defs, [builtin_]$2[_plugin]"
+ maria_plugin_defs="$maria_plugin_defs, [builtin_maria_]$2[_plugin]"
[with_plugin_]$2=yes
AC_MSG_RESULT([yes])
m4_ifdef([$11],[
=== modified file 'configure.in'
--- a/configure.in 2009-11-12 04:31:28 +0000
+++ b/configure.in 2010-03-11 15:02:03 +0000
@@ -2841,7 +2841,7 @@
AC_SUBST(mysql_plugin_dirs)
AC_SUBST(mysql_plugin_libs)
-AC_SUBST(mysql_plugin_defs)
+AC_SUBST(maria_plugin_defs)
# Now that sql_client_dirs and sql_server_dirs are stable, determine the union.
=== modified file 'include/mysql/plugin.h'
--- a/include/mysql/plugin.h 2009-09-07 20:50:10 +0000
+++ b/include/mysql/plugin.h 2010-03-11 15:02:03 +0000
@@ -65,7 +65,10 @@
Plugin API. Common for all plugin types.
*/
+/* MySQL plugin interface version */
#define MYSQL_PLUGIN_INTERFACE_VERSION 0x0100
+/* MariaDB plugin interface version */
+#define MARIA_PLUGIN_INTERFACE_VERSION 0x0100
/*
The allowable types of plugins
@@ -86,6 +89,21 @@
#define PLUGIN_LICENSE_GPL_STRING "GPL"
#define PLUGIN_LICENSE_BSD_STRING "BSD"
+/* definitions of code maturity for plugins */
+#define PLUGIN_MATURITY_UNKNOWN 0
+#define PLUGIN_MATURITY_TEST 1
+#define PLUGIN_MATURITY_ALPHA 2
+#define PLUGIN_MATURITY_BETA 3
+#define PLUGIN_MATURITY_GAMMA 4
+#define PLUGIN_MATURITY_RELEASE 5
+
+#define PLUGIN_MATURITY_UNKNOWN_STR "Unknown"
+#define PLUGIN_MATURITY_TEST_STR "Test"
+#define PLUGIN_MATURITY_ALPHA_STR "Alpha"
+#define PLUGIN_MATURITY_BETA_STR "Beta"
+#define PLUGIN_MATURITY_GAMMA_STR "Gamma"
+#define PLUGIN_MATURITY_RELEASE_STR "Release"
+
/*
Macros for beginning and ending plugin declarations. Between
mysql_declare_plugin and mysql_declare_plugin_end there should
@@ -94,15 +112,29 @@
#ifndef MYSQL_DYNAMIC_PLUGIN
+
#define __MYSQL_DECLARE_PLUGIN(NAME, VERSION, PSIZE, DECLS) \
int VERSION= MYSQL_PLUGIN_INTERFACE_VERSION; \
int PSIZE= sizeof(struct st_mysql_plugin); \
struct st_mysql_plugin DECLS[]= {
+
+#define __MARIA_DECLARE_PLUGIN(NAME, VERSION, PSIZE, DECLS) \
+int VERSION= MARIA_PLUGIN_INTERFACE_VERSION; \
+int PSIZE= sizeof(struct st_maria_plugin); \
+struct st_maria_plugin DECLS[]= {
+
#else
+
#define __MYSQL_DECLARE_PLUGIN(NAME, VERSION, PSIZE, DECLS) \
MYSQL_PLUGIN_EXPORT int _mysql_plugin_interface_version_= MYSQL_PLUGIN_INTERFACE_VERSION; \
MYSQL_PLUGIN_EXPORT int _mysql_sizeof_struct_st_plugin_= sizeof(struct st_mysql_plugin); \
MYSQL_PLUGIN_EXPORT struct st_mysql_plugin _mysql_plugin_declarations_[]= {
+
+#define __MARIA_DECLARE_PLUGIN(NAME, VERSION, PSIZE, DECLS) \
+MYSQL_PLUGIN_EXPORT int _maria_plugin_interface_version_= MARIA_PLUGIN_INTERFACE_VERSION; \
+MYSQL_PLUGIN_EXPORT int _maria_sizeof_struct_st_plugin_= sizeof(struct st_maria_plugin); \
+MYSQL_PLUGIN_EXPORT struct st_maria_plugin _maria_plugin_declarations_[]= {
+
#endif
#define mysql_declare_plugin(NAME) \
@@ -111,7 +143,14 @@
builtin_ ## NAME ## _sizeof_struct_st_plugin, \
builtin_ ## NAME ## _plugin)
+#define maria_declare_plugin(NAME) \
+__MARIA_DECLARE_PLUGIN(NAME, \
+ builtin_maria_ ## NAME ## _plugin_interface_version, \
+ builtin_maria_ ## NAME ## _sizeof_struct_st_plugin, \
+ builtin_maria_ ## NAME ## _plugin)
+
#define mysql_declare_plugin_end ,{0,0,0,0,0,0,0,0,0,0,0,0}}
+#define maria_declare_plugin_end ,{0,0,0,0,0,0,0,0,0,0,0,0,0,0}}
/*
declarations for SHOW STATUS support in plugins
@@ -407,6 +446,31 @@
void * __reserved1; /* reserved for dependency checking */
};
+/*
+ MariaDB extension for plugins declaration structure.
+
+ It also copy current MySQL plugin fields to have more independency
+ in plugins extension
+*/
+
+struct st_maria_plugin
+{
+ int type; /* the plugin type (a MYSQL_XXX_PLUGIN value) */
+ void *info; /* pointer to type-specific plugin descriptor */
+ const char *name; /* plugin name */
+ const char *author; /* plugin author (for SHOW PLUGINS) */
+ const char *descr; /* general descriptive text (for SHOW PLUGINS ) */
+ int license; /* the plugin license (PLUGIN_LICENSE_XXX) */
+ int (*init)(void *); /* the function to invoke when plugin is loaded */
+ int (*deinit)(void *);/* the function to invoke when plugin is unloaded */
+ unsigned int version; /* plugin version (for SHOW PLUGINS) */
+ struct st_mysql_show_var *status_vars;
+ struct st_mysql_sys_var **system_vars;
+ const char *version_info; /* plugin version string */
+ int maturity; /* HA_PLUGIN_MATURITY_XXX */
+ void * __reserved1; /* reserved for dependency checking */
+};
+
/*************************************************************************
API for Full-text parser plugin. (MYSQL_FTPARSER_PLUGIN)
*/
=== modified file 'include/mysql/plugin.h.pp'
--- a/include/mysql/plugin.h.pp 2008-10-10 15:28:41 +0000
+++ b/include/mysql/plugin.h.pp 2010-03-11 15:02:03 +0000
@@ -46,6 +46,23 @@
struct st_mysql_sys_var **system_vars;
void * __reserved1;
};
+struct st_maria_plugin
+{
+ int type;
+ void *info;
+ const char *name;
+ const char *author;
+ const char *descr;
+ int license;
+ int (*init)(void *);
+ int (*deinit)(void *);
+ unsigned int version;
+ struct st_mysql_show_var *status_vars;
+ struct st_mysql_sys_var **system_vars;
+ const char *version_info;
+ int maturity;
+ void * __reserved1;
+};
enum enum_ftparser_mode
{
MYSQL_FTPARSER_SIMPLE_MODE= 0,
=== modified file 'mysql-test/r/information_schema.result'
--- a/mysql-test/r/information_schema.result 2009-10-19 17:14:48 +0000
+++ b/mysql-test/r/information_schema.result 2010-03-11 15:02:03 +0000
@@ -1175,7 +1175,7 @@
group by column_type order by num;
column_type group_concat(table_schema, '.', table_name) num
varchar(27) information_schema.COLUMNS 1
-varchar(7) information_schema.ROUTINES,information_schema.VIEWS 2
+varchar(7) information_schema.PLUGINS,information_schema.ROUTINES,information_schema.VIEWS 3
varchar(20) information_schema.FILES,information_schema.FILES,information_schema.PLUGINS,information_schema.PLUGINS,information_schema.PLUGINS,information_schema.PROFILING 6
create table t1(f1 char(1) not null, f2 char(9) not null)
default character set utf8;
=== modified file 'plugin/daemon_example/daemon_example.cc'
--- a/plugin/daemon_example/daemon_example.cc 2007-06-27 14:49:12 +0000
+++ b/plugin/daemon_example/daemon_example.cc 2010-03-11 15:02:03 +0000
@@ -200,3 +200,21 @@
NULL /* config options */
}
mysql_declare_plugin_end;
+maria_declare_plugin(daemon_example)
+{
+ MYSQL_DAEMON_PLUGIN,
+ &daemon_example_plugin,
+ "daemon_example",
+ "Brian Aker",
+ "Daemon example, creates a heartbeat beat file in mysql-heartbeat.log",
+ PLUGIN_LICENSE_GPL,
+ daemon_example_plugin_init, /* Plugin Init */
+ daemon_example_plugin_deinit, /* Plugin Deinit */
+ 0x0100 /* 1.0 */,
+ NULL, /* status variables */
+ NULL, /* system variables */
+ "1.0", /* string version */
+ PLUGIN_MATURITY_TEST, /* maturity */
+ NULL /* config options */
+}
+maria_declare_plugin_end;
=== modified file 'plugin/fulltext/plugin_example.c'
--- a/plugin/fulltext/plugin_example.c 2007-04-26 19:26:04 +0000
+++ b/plugin/fulltext/plugin_example.c 2010-03-11 15:02:03 +0000
@@ -270,4 +270,22 @@
NULL
}
mysql_declare_plugin_end;
+maria_declare_plugin(ftexample)
+{
+ MYSQL_FTPARSER_PLUGIN, /* type */
+ &simple_parser_descriptor, /* descriptor */
+ "simple_parser", /* name */
+ "MySQL AB", /* author */
+ "Simple Full-Text Parser", /* description */
+ PLUGIN_LICENSE_GPL,
+ simple_parser_plugin_init, /* init function (when loaded) */
+ simple_parser_plugin_deinit,/* deinit function (when unloaded) */
+ 0x0001, /* version */
+ simple_status, /* status variables */
+ simple_system_variables, /* system variables */
+ "0.01", /* string version */
+ PLUGIN_MATURITY_TEST, /* maturity */
+ NULL
+}
+maria_declare_plugin_end;
=== modified file 'sql/ha_ndbcluster.cc'
--- a/sql/ha_ndbcluster.cc 2009-09-07 20:50:10 +0000
+++ b/sql/ha_ndbcluster.cc 2010-03-11 15:02:03 +0000
@@ -10561,5 +10561,23 @@
NULL /* config options */
}
mysql_declare_plugin_end;
+maria_declare_plugin(ndbcluster)
+{
+ MYSQL_STORAGE_ENGINE_PLUGIN,
+ &ndbcluster_storage_engine,
+ ndbcluster_hton_name,
+ "MySQL AB",
+ "Clustered, fault-tolerant tables",
+ PLUGIN_LICENSE_GPL,
+ ndbcluster_init, /* Plugin Init */
+ NULL, /* Plugin Deinit */
+ 0x0100 /* 1.0 */,
+ ndb_status_variables_export,/* status variables */
+ NULL, /* system variables */
+ "1.0", /* string version */
+ PLUGIN_MATURITY_BETA, /* maturity */
+ NULL /* config options */
+}
+maria_declare_plugin_end;
#endif
=== modified file 'sql/ha_partition.cc'
--- a/sql/ha_partition.cc 2009-11-12 04:31:28 +0000
+++ b/sql/ha_partition.cc 2010-03-11 15:02:03 +0000
@@ -6510,5 +6510,23 @@
NULL /* config options */
}
mysql_declare_plugin_end;
+maria_declare_plugin(partition)
+{
+ MYSQL_STORAGE_ENGINE_PLUGIN,
+ &partition_storage_engine,
+ "partition",
+ "Mikael Ronstrom, MySQL AB",
+ "Partition Storage Engine Helper",
+ PLUGIN_LICENSE_GPL,
+ partition_initialize, /* Plugin Init */
+ NULL, /* Plugin Deinit */
+ 0x0100, /* 1.0 */
+ NULL, /* status variables */
+ NULL, /* system variables */
+ "1.0", /* string version */
+ PLUGIN_MATURITY_RELEASE, /* maturity */
+ NULL /* config options */
+}
+maria_declare_plugin_end;
#endif
=== modified file 'sql/log.cc'
--- a/sql/log.cc 2009-11-12 04:31:28 +0000
+++ b/sql/log.cc 2010-03-11 15:02:03 +0000
@@ -5795,3 +5795,21 @@
NULL /* config options */
}
mysql_declare_plugin_end;
+maria_declare_plugin(binlog)
+{
+ MYSQL_STORAGE_ENGINE_PLUGIN,
+ &binlog_storage_engine,
+ "binlog",
+ "MySQL AB",
+ "This is a pseudo storage engine to represent the binlog in a transaction",
+ PLUGIN_LICENSE_GPL,
+ binlog_init, /* Plugin Init */
+ NULL, /* Plugin Deinit */
+ 0x0100 /* 1.0 */,
+ NULL, /* status variables */
+ NULL, /* system variables */
+ "1.0", /* string version */
+ PLUGIN_MATURITY_RELEASE, /* maturity */
+ NULL /* config options */
+}
+maria_declare_plugin_end;
=== modified file 'sql/sql_builtin.cc.in'
--- a/sql/sql_builtin.cc.in 2006-12-31 01:29:11 +0000
+++ b/sql/sql_builtin.cc.in 2010-03-11 15:02:03 +0000
@@ -15,13 +15,12 @@
#include <mysql/plugin.h>
-typedef struct st_mysql_plugin builtin_plugin[];
-
-extern builtin_plugin
- builtin_binlog_plugin@mysql_plugin_defs@;
-
-struct st_mysql_plugin *mysqld_builtins[]=
+typedef struct st_maria_plugin builtin_maria_plugin[];
+
+extern builtin_maria_plugin
+ builtin_maria_binlog_plugin@maria_plugin_defs@;
+
+struct st_maria_plugin *mariadb_builtins[]=
{
- builtin_binlog_plugin@mysql_plugin_defs@,(struct st_mysql_plugin *)0
+ builtin_maria_binlog_plugin@maria_plugin_defs@,(struct st_maria_plugin *)0
};
-
=== modified file 'sql/sql_plugin.cc'
--- a/sql/sql_plugin.cc 2009-11-12 04:31:28 +0000
+++ b/sql/sql_plugin.cc 2010-03-11 15:02:03 +0000
@@ -27,7 +27,7 @@
#define plugin_int_to_ref(A) &(A)
#endif
-extern struct st_mysql_plugin *mysqld_builtins[];
+extern struct st_maria_plugin *mariadb_builtins[];
/**
@note The order of the enumeration is critical.
@@ -82,6 +82,14 @@
"_mysql_sizeof_struct_st_plugin_";
static const char *plugin_declarations_sym= "_mysql_plugin_declarations_";
static int min_plugin_interface_version= MYSQL_PLUGIN_INTERFACE_VERSION & ~0xFF;
+static const char *maria_plugin_interface_version_sym=
+ "_maria_plugin_interface_version_";
+static const char *maria_sizeof_st_plugin_sym=
+ "_maria_sizeof_struct_st_plugin_";
+static const char *maria_plugin_declarations_sym=
+ "_maria_plugin_declarations_";
+static int min_maria_plugin_interface_version=
+ MARIA_PLUGIN_INTERFACE_VERSION & ~0xFF;
#endif
/* Note that 'int version' must be the first field of every plugin
@@ -205,7 +213,7 @@
const char *list);
static int test_plugin_options(MEM_ROOT *, struct st_plugin_int *,
int *, char **);
-static bool register_builtin(struct st_mysql_plugin *, struct st_plugin_int *,
+static bool register_builtin(struct st_maria_plugin *, struct st_plugin_int *,
struct st_plugin_int **);
static void unlock_variables(THD *thd, struct system_variables *vars);
static void cleanup_variables(THD *thd, struct system_variables *vars);
@@ -341,11 +349,261 @@
dlclose(p->handle);
#endif
my_free(p->dl.str, MYF(MY_ALLOW_ZERO_PTR));
- if (p->version != MYSQL_PLUGIN_INTERFACE_VERSION)
+ if (p->mariaversion != MARIA_PLUGIN_INTERFACE_VERSION)
my_free((uchar*)p->plugins, MYF(MY_ALLOW_ZERO_PTR));
}
+/**
+ Reads data from mysql plugin interface
+
+ @param plugin_dl Structure where the data should be put
+ @param sym Reverence on version info
+ @param dlpath Path to the module
+ @param report What errors should be reported
+
+ @retval FALSE OK
+ @retval TRUE ERROR
+*/
+
+static my_bool read_mysql_plugin_info(struct st_plugin_dl *plugin_dl,
+ void *sym, char *dlpath,
+ int report)
+{
+ DBUG_ENTER("read_maria_plugin_info");
+ /* Determine interface version */
+ if (!sym)
+ {
+ free_plugin_mem(plugin_dl);
+ if (report & REPORT_TO_USER)
+ my_error(ER_CANT_FIND_DL_ENTRY, MYF(0), plugin_interface_version_sym);
+ if (report & REPORT_TO_LOG)
+ sql_print_error(ER(ER_CANT_FIND_DL_ENTRY), plugin_interface_version_sym);
+ DBUG_RETURN(TRUE);
+ }
+ plugin_dl->mariaversion= 0;
+ plugin_dl->mysqlversion= *(int *)sym;
+ /* Versioning */
+ if (plugin_dl->mysqlversion < min_plugin_interface_version ||
+ (plugin_dl->mysqlversion >> 8) > (MYSQL_PLUGIN_INTERFACE_VERSION >> 8))
+ {
+ free_plugin_mem(plugin_dl);
+ if (report & REPORT_TO_USER)
+ my_error(ER_CANT_OPEN_LIBRARY, MYF(0), dlpath, 0,
+ "plugin interface version mismatch");
+ if (report & REPORT_TO_LOG)
+ sql_print_error(ER(ER_CANT_OPEN_LIBRARY), dlpath, 0,
+ "plugin interface version mismatch");
+ DBUG_RETURN(TRUE);
+ }
+ /* Find plugin declarations */
+ if (!(sym= dlsym(plugin_dl->handle, plugin_declarations_sym)))
+ {
+ free_plugin_mem(plugin_dl);
+ if (report & REPORT_TO_USER)
+ my_error(ER_CANT_FIND_DL_ENTRY, MYF(0), plugin_declarations_sym);
+ if (report & REPORT_TO_LOG)
+ sql_print_error(ER(ER_CANT_FIND_DL_ENTRY), plugin_declarations_sym);
+ DBUG_RETURN(TRUE);
+ }
+
+ /* convert mysql declaration to maria one */
+ {
+ int i;
+ uint sizeof_st_plugin;
+ struct st_mysql_plugin *old;
+ struct st_maria_plugin *cur;
+ char *ptr= (char *)sym;
+
+ if ((sym= dlsym(plugin_dl->handle, sizeof_st_plugin_sym)))
+ sizeof_st_plugin= *(int *)sym;
+ else
+ {
+#ifdef ERROR_ON_NO_SIZEOF_PLUGIN_SYMBOL
+ free_plugin_mem(plugin_dl);
+ if (report & REPORT_TO_USER)
+ my_error(ER_CANT_FIND_DL_ENTRY, MYF(0), sizeof_st_plugin_sym);
+ if (report & REPORT_TO_LOG)
+ sql_print_error(ER(ER_CANT_FIND_DL_ENTRY), sizeof_st_plugin_sym);
+ DBUG_RETURN(TRUE);
+#else
+ /*
+ When the following assert starts failing, we'll have to switch
+ to the upper branch of the #ifdef
+ */
+ DBUG_ASSERT(min_plugin_interface_version == 0);
+ sizeof_st_plugin= (int)offsetof(struct st_mysql_plugin, version);
+#endif
+ }
+
+ for (i= 0;
+ ((struct st_mysql_plugin *)(ptr+i*sizeof_st_plugin))->info;
+ i++)
+ /* no op */;
+
+ cur= (struct st_maria_plugin*)
+ my_malloc(i * sizeof(struct st_maria_plugin),
+ MYF(MY_ZEROFILL|MY_WME));
+ if (!cur)
+ {
+ free_plugin_mem(plugin_dl);
+ if (report & REPORT_TO_USER)
+ my_error(ER_OUTOFMEMORY, MYF(0), plugin_dl->dl.length);
+ if (report & REPORT_TO_LOG)
+ sql_print_error(ER(ER_OUTOFMEMORY), plugin_dl->dl.length);
+ DBUG_RETURN(TRUE);
+ }
+ /*
+ All st_plugin fields not initialized in the plugin explicitly, are
+ set to 0. It matches C standard behaviour for struct initializers that
+ have less values than the struct definition.
+ */
+ for (i=0;
+ (old=(struct st_mysql_plugin *)(ptr+i*sizeof_st_plugin))->info;
+ i++)
+ {
+
+ cur->type= old->type;
+ cur->info= old->info;
+ cur->name= old->name;
+ cur->author= old->author;
+ cur->descr= old->descr;
+ cur->license= old->license;
+ cur->init= old->init;
+ cur->deinit= old->deinit;
+ cur->version= old->version;
+ cur->status_vars= old->status_vars;
+ cur->system_vars= old->system_vars;
+ /*
+ Something like this should be added to process
+ new mysql plugin versions:
+ if (plugin_dl->mysqlversion > 0x0100)
+ {
+ cur->newfield= CONSTANT_MEANS_UNKNOWN;
+ }
+ else
+ {
+ cur->newfield= old->newfield;
+ }
+ */
+ /* Maria only fields */
+ cur->version_info= "Unknown";
+ cur->maturity= PLUGIN_MATURITY_UNKNOWN;
+ }
+
+ plugin_dl->plugins= (struct st_maria_plugin *)cur;
+ }
+
+ DBUG_RETURN(FALSE);
+}
+
+
+/**
+ Reads data from maria plugin interface
+
+ @param plugin_dl Structure where the data should be put
+ @param sym Reverence on version info
+ @param dlpath Path to the module
+ @param report what errors should be reported
+
+ @retval FALSE OK
+ @retval TRUE ERROR
+*/
+
+static my_bool read_maria_plugin_info(struct st_plugin_dl *plugin_dl,
+ void *sym, char *dlpath,
+ int report)
+{
+ DBUG_ENTER("read_maria_plugin_info");
+
+ /* Determine interface version */
+ if (!(sym))
+ {
+ free_plugin_mem(plugin_dl);
+ if (report & REPORT_TO_USER)
+ my_error(ER_CANT_FIND_DL_ENTRY, MYF(0), plugin_interface_version_sym);
+ if (report & REPORT_TO_LOG)
+ sql_print_error(ER(ER_CANT_FIND_DL_ENTRY), plugin_interface_version_sym);
+ DBUG_RETURN(TRUE);
+ }
+ plugin_dl->mariaversion= *(int *)sym;
+ plugin_dl->mysqlversion= 0;
+ /* Versioning */
+ if (plugin_dl->mariaversion < min_maria_plugin_interface_version ||
+ (plugin_dl->mariaversion >> 8) > (MARIA_PLUGIN_INTERFACE_VERSION >> 8))
+ {
+ free_plugin_mem(plugin_dl);
+ if (report & REPORT_TO_USER)
+ my_error(ER_CANT_OPEN_LIBRARY, MYF(0), dlpath, 0,
+ "plugin interface version mismatch");
+ if (report & REPORT_TO_LOG)
+ sql_print_error(ER(ER_CANT_OPEN_LIBRARY), dlpath, 0,
+ "plugin interface version mismatch");
+ DBUG_RETURN(TRUE);
+ }
+ /* Find plugin declarations */
+ if (!(sym= dlsym(plugin_dl->handle, maria_plugin_declarations_sym)))
+ {
+ free_plugin_mem(plugin_dl);
+ if (report & REPORT_TO_USER)
+ my_error(ER_CANT_FIND_DL_ENTRY, MYF(0), plugin_declarations_sym);
+ if (report & REPORT_TO_LOG)
+ sql_print_error(ER(ER_CANT_FIND_DL_ENTRY), plugin_declarations_sym);
+ DBUG_RETURN(TRUE);
+ }
+ if (plugin_dl->mariaversion != MARIA_PLUGIN_INTERFACE_VERSION)
+ {
+ int i;
+ uint sizeof_st_plugin;
+ struct st_maria_plugin *old, *cur;
+ char *ptr= (char *)sym;
+
+ if ((sym= dlsym(plugin_dl->handle, maria_sizeof_st_plugin_sym)))
+ sizeof_st_plugin= *(int *)sym;
+ else
+ {
+ free_plugin_mem(plugin_dl);
+ if (report & REPORT_TO_USER)
+ my_error(ER_CANT_FIND_DL_ENTRY, MYF(0), sizeof_st_plugin_sym);
+ if (report & REPORT_TO_LOG)
+ sql_print_error(ER(ER_CANT_FIND_DL_ENTRY), sizeof_st_plugin_sym);
+ DBUG_RETURN(TRUE);
+ }
+
+ for (i= 0;
+ ((struct st_maria_plugin *)(ptr+i*sizeof_st_plugin))->info;
+ i++)
+ /* no op */;
+
+ cur= (struct st_maria_plugin*)
+ my_malloc(i * sizeof(struct st_maria_plugin),
+ MYF(MY_ZEROFILL|MY_WME));
+ if (!cur)
+ {
+ free_plugin_mem(plugin_dl);
+ if (report & REPORT_TO_USER)
+ my_error(ER_OUTOFMEMORY, MYF(0), plugin_dl->dl.length);
+ if (report & REPORT_TO_LOG)
+ sql_print_error(ER(ER_OUTOFMEMORY), plugin_dl->dl.length);
+ DBUG_RETURN(TRUE);
+ }
+ /*
+ All st_plugin fields not initialized in the plugin explicitly, are
+ set to 0. It matches C standard behaviour for struct initializers that
+ have less values than the struct definition.
+ */
+ for (i=0;
+ (old=(struct st_maria_plugin *)(ptr+i*sizeof_st_plugin))->info;
+ i++)
+ memcpy(cur+i, old, min(sizeof(cur[i]), sizeof_st_plugin));
+
+ sym= cur;
+ }
+ plugin_dl->plugins= (struct st_maria_plugin *)sym;
+
+ DBUG_RETURN(FALSE);
+}
+
static st_plugin_dl *plugin_dl_add(const LEX_STRING *dl, int report)
{
#ifdef HAVE_DLOPEN
@@ -399,98 +657,22 @@
sql_print_error(ER(ER_CANT_OPEN_LIBRARY), dlpath, errno, errmsg);
DBUG_RETURN(0);
}
- /* Determine interface version */
- if (!(sym= dlsym(plugin_dl.handle, plugin_interface_version_sym)))
- {
- free_plugin_mem(&plugin_dl);
- if (report & REPORT_TO_USER)
- my_error(ER_CANT_FIND_DL_ENTRY, MYF(0), plugin_interface_version_sym);
- if (report & REPORT_TO_LOG)
- sql_print_error(ER(ER_CANT_FIND_DL_ENTRY), plugin_interface_version_sym);
- DBUG_RETURN(0);
- }
- plugin_dl.version= *(int *)sym;
- /* Versioning */
- if (plugin_dl.version < min_plugin_interface_version ||
- (plugin_dl.version >> 8) > (MYSQL_PLUGIN_INTERFACE_VERSION >> 8))
- {
- free_plugin_mem(&plugin_dl);
- if (report & REPORT_TO_USER)
- my_error(ER_CANT_OPEN_LIBRARY, MYF(0), dlpath, 0,
- "plugin interface version mismatch");
- if (report & REPORT_TO_LOG)
- sql_print_error(ER(ER_CANT_OPEN_LIBRARY), dlpath, 0,
- "plugin interface version mismatch");
- DBUG_RETURN(0);
- }
- /* Find plugin declarations */
- if (!(sym= dlsym(plugin_dl.handle, plugin_declarations_sym)))
- {
- free_plugin_mem(&plugin_dl);
- if (report & REPORT_TO_USER)
- my_error(ER_CANT_FIND_DL_ENTRY, MYF(0), plugin_declarations_sym);
- if (report & REPORT_TO_LOG)
- sql_print_error(ER(ER_CANT_FIND_DL_ENTRY), plugin_declarations_sym);
- DBUG_RETURN(0);
- }
-
- if (plugin_dl.version != MYSQL_PLUGIN_INTERFACE_VERSION)
- {
- int i;
- uint sizeof_st_plugin;
- struct st_mysql_plugin *old, *cur;
- char *ptr= (char *)sym;
-
- if ((sym= dlsym(plugin_dl.handle, sizeof_st_plugin_sym)))
- sizeof_st_plugin= *(int *)sym;
- else
- {
-#ifdef ERROR_ON_NO_SIZEOF_PLUGIN_SYMBOL
- free_plugin_mem(&plugin_dl);
- if (report & REPORT_TO_USER)
- my_error(ER_CANT_FIND_DL_ENTRY, MYF(0), sizeof_st_plugin_sym);
- if (report & REPORT_TO_LOG)
- sql_print_error(ER(ER_CANT_FIND_DL_ENTRY), sizeof_st_plugin_sym);
- DBUG_RETURN(0);
-#else
- /*
- When the following assert starts failing, we'll have to switch
- to the upper branch of the #ifdef
- */
- DBUG_ASSERT(min_plugin_interface_version == 0);
- sizeof_st_plugin= (int)offsetof(struct st_mysql_plugin, version);
-#endif
- }
-
- for (i= 0;
- ((struct st_mysql_plugin *)(ptr+i*sizeof_st_plugin))->info;
- i++)
- /* no op */;
-
- cur= (struct st_mysql_plugin*)
- my_malloc(i*sizeof(struct st_mysql_plugin), MYF(MY_ZEROFILL|MY_WME));
- if (!cur)
- {
- free_plugin_mem(&plugin_dl);
- if (report & REPORT_TO_USER)
- my_error(ER_OUTOFMEMORY, MYF(0), plugin_dl.dl.length);
- if (report & REPORT_TO_LOG)
- sql_print_error(ER(ER_OUTOFMEMORY), plugin_dl.dl.length);
- DBUG_RETURN(0);
- }
- /*
- All st_plugin fields not initialized in the plugin explicitly, are
- set to 0. It matches C standard behaviour for struct initializers that
- have less values than the struct definition.
- */
- for (i=0;
- (old=(struct st_mysql_plugin *)(ptr+i*sizeof_st_plugin))->info;
- i++)
- memcpy(cur+i, old, min(sizeof(cur[i]), sizeof_st_plugin));
-
- sym= cur;
- }
- plugin_dl.plugins= (struct st_mysql_plugin *)sym;
+
+ /* Checks which plugin interface present and reads info */
+ if (!(sym= dlsym(plugin_dl.handle, maria_plugin_interface_version_sym)))
+ {
+ if (read_mysql_plugin_info(&plugin_dl,
+ dlsym(plugin_dl.handle,
+ plugin_interface_version_sym),
+ dlpath,
+ report))
+ DBUG_RETURN(0);
+ }
+ else
+ {
+ if (read_maria_plugin_info(&plugin_dl, sym, dlpath, report))
+ DBUG_RETURN(0);
+ }
/* Duplicate and convert dll name */
plugin_dl.dl.length= dl->length * files_charset_info->mbmaxlen + 1;
@@ -718,7 +900,7 @@
int *argc, char **argv, int report)
{
struct st_plugin_int tmp;
- struct st_mysql_plugin *plugin;
+ struct st_maria_plugin *plugin;
DBUG_ENTER("plugin_add");
if (plugin_find_internal(name, MYSQL_ANY_PLUGIN))
{
@@ -1120,8 +1302,8 @@
{
uint i;
bool is_myisam;
- struct st_mysql_plugin **builtins;
- struct st_mysql_plugin *plugin;
+ struct st_maria_plugin **builtins;
+ struct st_maria_plugin *plugin;
struct st_plugin_int tmp, *plugin_ptr, **reap;
MEM_ROOT tmp_root;
bool reaped_mandatory_plugin= FALSE;
@@ -1160,7 +1342,7 @@
/*
First we register builtin plugins
*/
- for (builtins= mysqld_builtins; *builtins; builtins++)
+ for (builtins= mariadb_builtins; *builtins; builtins++)
{
for (plugin= *builtins; plugin->info; plugin++)
{
@@ -1290,7 +1472,7 @@
}
-static bool register_builtin(struct st_mysql_plugin *plugin,
+static bool register_builtin(struct st_maria_plugin *plugin,
struct st_plugin_int *tmp,
struct st_plugin_int **ptr)
{
@@ -1326,7 +1508,7 @@
RETURN
false - plugin registered successfully
*/
-bool plugin_register_builtin(THD *thd, struct st_mysql_plugin *plugin)
+bool plugin_register_builtin(THD *thd, struct st_maria_plugin *plugin)
{
struct st_plugin_int tmp, *ptr;
bool result= true;
@@ -1455,7 +1637,7 @@
char buffer[FN_REFLEN];
LEX_STRING name= {buffer, 0}, dl= {NULL, 0}, *str= &name;
struct st_plugin_dl *plugin_dl;
- struct st_mysql_plugin *plugin;
+ struct st_maria_plugin *plugin;
char *p= buffer;
DBUG_ENTER("plugin_load_list");
while (list)
=== modified file 'sql/sql_plugin.h'
--- a/sql/sql_plugin.h 2009-05-14 12:03:33 +0000
+++ b/sql/sql_plugin.h 2010-03-11 15:02:03 +0000
@@ -62,8 +62,9 @@
{
LEX_STRING dl;
void *handle;
- struct st_mysql_plugin *plugins;
- int version;
+ struct st_maria_plugin *plugins;
+ int mysqlversion;
+ int mariaversion;
uint ref_count; /* number of plugins loaded from the library */
};
@@ -72,7 +73,7 @@
struct st_plugin_int
{
LEX_STRING name;
- struct st_mysql_plugin *plugin;
+ struct st_maria_plugin *plugin;
struct st_plugin_dl *plugin_dl;
uint state;
uint ref_count; /* number of threads using the plugin */
=== modified file 'sql/sql_show.cc'
--- a/sql/sql_show.cc 2009-11-12 04:31:28 +0000
+++ b/sql/sql_show.cc 2010-03-11 15:02:03 +0000
@@ -94,11 +94,19 @@
return my_snprintf(buf, buf_length, "%d.%d", version>>8,version&0xff);
}
+static const LEX_STRING maturity_name[]={
+ { C_STRING_WITH_LEN(PLUGIN_MATURITY_UNKNOWN_STR) },
+ { C_STRING_WITH_LEN(PLUGIN_MATURITY_TEST_STR) },
+ { C_STRING_WITH_LEN(PLUGIN_MATURITY_ALPHA_STR) },
+ { C_STRING_WITH_LEN(PLUGIN_MATURITY_BETA_STR) },
+ { C_STRING_WITH_LEN(PLUGIN_MATURITY_GAMMA_STR) },
+ { C_STRING_WITH_LEN(PLUGIN_MATURITY_RELEASE_STR) }};
+
static my_bool show_plugins(THD *thd, plugin_ref plugin,
void *arg)
{
TABLE *table= (TABLE*) arg;
- struct st_mysql_plugin *plug= plugin_decl(plugin);
+ struct st_maria_plugin *plug= plugin_decl(plugin);
struct st_plugin_dl *plugin_dl= plugin_dlib(plugin);
CHARSET_INFO *cs= system_charset_info;
char version_buf[20];
@@ -143,7 +151,9 @@
table->field[5]->set_notnull();
table->field[6]->store(version_buf,
make_version_string(version_buf, sizeof(version_buf),
- plugin_dl->version),
+ (plugin_dl->mariaversion ?
+ plugin_dl->mariaversion :
+ plugin_dl->mysqlversion)),
cs);
table->field[6]->set_notnull();
}
@@ -186,6 +196,26 @@
}
table->field[9]->set_notnull();
+ if ((uint) plug->maturity <= PLUGIN_MATURITY_RELEASE)
+ table->field[10]->store(maturity_name[plug->maturity].str,
+ maturity_name[plug->maturity].length,
+ cs);
+ else
+ {
+ DBUG_ASSERT(0);
+ table->field[10]->store("Unknown", 7, cs);
+ }
+ table->field[10]->set_notnull();
+
+ if (plug->version_info)
+ {
+ table->field[11]->store(plug->version_info,
+ strlen(plug->version_info), cs);
+ table->field[11]->set_notnull();
+ }
+ else
+ table->field[11]->set_null();
+
return schema_table_store_record(thd, table);
}
@@ -4293,7 +4323,7 @@
if (plugin_state(plugin) != PLUGIN_IS_READY)
{
- struct st_mysql_plugin *plug= plugin_decl(plugin);
+ struct st_maria_plugin *plug= plugin_decl(plugin);
if (!(wild && wild[0] &&
wild_case_compare(scs, plug->name,wild)))
{
@@ -6990,6 +7020,8 @@
{"PLUGIN_AUTHOR", NAME_CHAR_LEN, MYSQL_TYPE_STRING, 0, 1, 0, SKIP_OPEN_TABLE},
{"PLUGIN_DESCRIPTION", 65535, MYSQL_TYPE_STRING, 0, 1, 0, SKIP_OPEN_TABLE},
{"PLUGIN_LICENSE", 80, MYSQL_TYPE_STRING, 0, 1, "License", SKIP_OPEN_TABLE},
+ {"PLUGIN_MATURITY", 7, MYSQL_TYPE_STRING, 0, 1, 0, SKIP_OPEN_TABLE},
+ {"PLUGIN_AUTH_VERSION", 80, MYSQL_TYPE_STRING, 0, 1, 0, SKIP_OPEN_TABLE},
{0, 0, MYSQL_TYPE_STRING, 0, 0, 0, SKIP_OPEN_TABLE}
};
=== modified file 'storage/archive/ha_archive.cc'
--- a/storage/archive/ha_archive.cc 2009-09-07 20:50:10 +0000
+++ b/storage/archive/ha_archive.cc 2010-03-11 15:02:03 +0000
@@ -1642,4 +1642,22 @@
NULL /* config options */
}
mysql_declare_plugin_end;
+maria_declare_plugin(archive)
+{
+ MYSQL_STORAGE_ENGINE_PLUGIN,
+ &archive_storage_engine,
+ "ARCHIVE",
+ "Brian Aker, MySQL AB",
+ "Archive storage engine",
+ PLUGIN_LICENSE_GPL,
+ archive_db_init, /* Plugin Init */
+ archive_db_done, /* Plugin Deinit */
+ 0x0300 /* 3.0 */,
+ NULL, /* status variables */
+ NULL, /* system variables */
+ "1.0", /* string version */
+ PLUGIN_MATURITY_RELEASE, /* maturity */
+ NULL /* config options */
+}
+maria_declare_plugin_end;
=== modified file 'storage/blackhole/ha_blackhole.cc'
--- a/storage/blackhole/ha_blackhole.cc 2008-11-10 20:21:49 +0000
+++ b/storage/blackhole/ha_blackhole.cc 2010-03-11 15:02:03 +0000
@@ -369,3 +369,21 @@
NULL /* config options */
}
mysql_declare_plugin_end;
+maria_declare_plugin(blackhole)
+{
+ MYSQL_STORAGE_ENGINE_PLUGIN,
+ &blackhole_storage_engine,
+ "BLACKHOLE",
+ "MySQL AB",
+ "/dev/null storage engine (anything you write to it disappears)",
+ PLUGIN_LICENSE_GPL,
+ blackhole_init, /* Plugin Init */
+ blackhole_fini, /* Plugin Deinit */
+ 0x0100 /* 1.0 */,
+ NULL, /* status variables */
+ NULL, /* system variables */
+ "1.0", /* string version */
+ PLUGIN_MATURITY_RELEASE, /* maturity */
+ NULL /* config options */
+}
+maria_declare_plugin_end;
=== modified file 'storage/csv/ha_tina.cc'
--- a/storage/csv/ha_tina.cc 2009-04-25 10:05:32 +0000
+++ b/storage/csv/ha_tina.cc 2010-03-11 15:02:03 +0000
@@ -1636,4 +1636,21 @@
NULL /* config options */
}
mysql_declare_plugin_end;
-
+maria_declare_plugin(csv)
+{
+ MYSQL_STORAGE_ENGINE_PLUGIN,
+ &csv_storage_engine,
+ "CSV",
+ "Brian Aker, MySQL AB",
+ "CSV storage engine",
+ PLUGIN_LICENSE_GPL,
+ tina_init_func, /* Plugin Init */
+ tina_done_func, /* Plugin Deinit */
+ 0x0100 /* 1.0 */,
+ NULL, /* status variables */
+ NULL, /* system variables */
+ "1.0", /* string version */
+ PLUGIN_MATURITY_RELEASE, /* maturity */
+ NULL /* config options */
+}
+maria_declare_plugin_end;
=== modified file 'storage/example/ha_example.cc'
--- a/storage/example/ha_example.cc 2008-02-24 13:12:17 +0000
+++ b/storage/example/ha_example.cc 2010-03-11 15:02:03 +0000
@@ -906,3 +906,21 @@
NULL /* config options */
}
mysql_declare_plugin_end;
+maria_declare_plugin(example)
+{
+ MYSQL_STORAGE_ENGINE_PLUGIN,
+ &example_storage_engine,
+ "EXAMPLE",
+ "Brian Aker, MySQL AB",
+ "Example storage engine",
+ PLUGIN_LICENSE_GPL,
+ example_init_func, /* Plugin Init */
+ example_done_func, /* Plugin Deinit */
+ 0x0001 /* 0.1 */,
+ NULL, /* status variables */
+ example_system_variables, /* system variables */
+ "0.1", /* string version */
+ PLUGIN_MATURITY_TEST, /* maturity */
+ NULL /* config options */
+}
+maria_declare_plugin_end;
=== modified file 'storage/federated/ha_federated.cc'
--- a/storage/federated/ha_federated.cc 2009-09-07 20:50:10 +0000
+++ b/storage/federated/ha_federated.cc 2010-03-11 15:02:03 +0000
@@ -3379,3 +3379,21 @@
NULL /* config options */
}
mysql_declare_plugin_end;
+maria_declare_plugin(federated)
+{
+ MYSQL_STORAGE_ENGINE_PLUGIN,
+ &federated_storage_engine,
+ "FEDERATED",
+ "Patrick Galbraith and Brian Aker, MySQL AB",
+ "Federated MySQL storage engine",
+ PLUGIN_LICENSE_GPL,
+ federated_db_init, /* Plugin Init */
+ federated_done, /* Plugin Deinit */
+ 0x0100 /* 1.0 */,
+ NULL, /* status variables */
+ NULL, /* system variables */
+ "1.0", /* string version */
+ PLUGIN_MATURITY_BETA, /* maturity */
+ NULL /* config options */
+}
+maria_declare_plugin_end;
=== modified file 'storage/federatedx/ha_federatedx.cc'
--- a/storage/federatedx/ha_federatedx.cc 2009-11-03 11:08:09 +0000
+++ b/storage/federatedx/ha_federatedx.cc 2010-03-11 15:02:03 +0000
@@ -3485,9 +3485,27 @@
PLUGIN_LICENSE_GPL,
federatedx_db_init, /* Plugin Init */
federatedx_done, /* Plugin Deinit */
- 0x0100 /* 1.0 */,
+ 0x0200 /* 2.0 */,
NULL, /* status variables */
NULL, /* system variables */
NULL /* config options */
}
mysql_declare_plugin_end;
+maria_declare_plugin(federated)
+{
+ MYSQL_STORAGE_ENGINE_PLUGIN,
+ &federatedx_storage_engine,
+ "FEDERATED",
+ "Patrick Galbraith",
+ "FederatedX pluggable storage engine",
+ PLUGIN_LICENSE_GPL,
+ federatedx_db_init, /* Plugin Init */
+ federatedx_done, /* Plugin Deinit */
+ 0x0200 /* 2.0 */,
+ NULL, /* status variables */
+ NULL, /* system variables */
+ "2.0", /* string version */
+ PLUGIN_MATURITY_BETA, /* maturity */
+ NULL /* config options */
+}
+maria_declare_plugin_end;
=== modified file 'storage/heap/ha_heap.cc'
--- a/storage/heap/ha_heap.cc 2009-09-07 20:50:10 +0000
+++ b/storage/heap/ha_heap.cc 2010-03-11 15:02:03 +0000
@@ -767,3 +767,21 @@
NULL /* config options */
}
mysql_declare_plugin_end;
+maria_declare_plugin(heap)
+{
+ MYSQL_STORAGE_ENGINE_PLUGIN,
+ &heap_storage_engine,
+ "MEMORY",
+ "MySQL AB",
+ "Hash based, stored in memory, useful for temporary tables",
+ PLUGIN_LICENSE_GPL,
+ heap_init,
+ NULL,
+ 0x0100, /* 1.0 */
+ NULL, /* status variables */
+ NULL, /* system variables */
+ "1.0", /* string version */
+ PLUGIN_MATURITY_RELEASE, /* maturity */
+ NULL /* config options */
+}
+maria_declare_plugin_end;
=== modified file 'storage/ibmdb2i/ha_ibmdb2i.cc'
--- a/storage/ibmdb2i/ha_ibmdb2i.cc 2009-07-08 09:10:01 +0000
+++ b/storage/ibmdb2i/ha_ibmdb2i.cc 2010-03-11 15:02:03 +0000
@@ -3357,3 +3357,21 @@
NULL /* config options */
}
mysql_declare_plugin_end;
+maria_declare_plugin(ibmdb2i)
+{
+ MYSQL_STORAGE_ENGINE_PLUGIN,
+ &ibmdb2i_storage_engine,
+ "IBMDB2I",
+ "The IBM development team in Rochester, Minnesota",
+ "IBM DB2 for i Storage Engine",
+ PLUGIN_LICENSE_GPL,
+ ibmdb2i_init_func, /* Plugin Init */
+ ibmdb2i_done_func, /* Plugin Deinit */
+ 0x0100 /* 1.0 */,
+ NULL, /* status variables */
+ ibmdb2i_system_variables, /* system variables */
+ "1.0", /* string version */
+ PLUGIN_MATURITY_UNKNOWN, /* maturity */
+ NULL /* config options */
+}
+maria_declare_plugin_end;
=== modified file 'storage/innobase/handler/ha_innodb.cc'
--- a/storage/innobase/handler/ha_innodb.cc 2009-10-16 22:57:48 +0000
+++ b/storage/innobase/handler/ha_innodb.cc 2010-03-11 15:02:03 +0000
@@ -8684,6 +8684,24 @@
NULL /* reserved */
}
mysql_declare_plugin_end;
+maria_declare_plugin(innobase)
+{
+ MYSQL_STORAGE_ENGINE_PLUGIN,
+ &innobase_storage_engine,
+ innobase_hton_name,
+ "Innobase OY",
+ "Supports transactions, row-level locking, and foreign keys",
+ PLUGIN_LICENSE_GPL,
+ innobase_init, /* Plugin Init */
+ NULL, /* Plugin Deinit */
+ 0x0100 /* 1.0 */,
+ innodb_status_variables_export,/* status variables */
+ innobase_system_variables, /* system variables */
+ "1.0", /* string version */
+ PLUGIN_MATURITY_RELEASE, /* maturity */
+ NULL /* reserved */
+}
+maria_declare_plugin_end;
/** @brief Initialize the default value of innodb_commit_concurrency.
=== modified file 'storage/innodb_plugin/handler/i_s.cc'
--- a/storage/innodb_plugin/handler/i_s.cc 2009-08-14 15:18:52 +0000
+++ b/storage/innodb_plugin/handler/i_s.cc 2010-03-11 15:02:03 +0000
@@ -455,6 +455,63 @@
STRUCT_FLD(__reserved1, NULL)
};
+UNIV_INTERN struct st_maria_plugin i_s_innodb_trx_maria =
+{
+ /* the plugin type (a MYSQL_XXX_PLUGIN value) */
+ /* int */
+ STRUCT_FLD(type, MYSQL_INFORMATION_SCHEMA_PLUGIN),
+
+ /* pointer to type-specific plugin descriptor */
+ /* void* */
+ STRUCT_FLD(info, &i_s_info),
+
+ /* plugin name */
+ /* const char* */
+ STRUCT_FLD(name, "INNODB_TRX"),
+
+ /* plugin author (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(author, plugin_author),
+
+ /* general descriptive text (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(descr, "InnoDB transactions"),
+
+ /* the plugin license (PLUGIN_LICENSE_XXX) */
+ /* int */
+ STRUCT_FLD(license, PLUGIN_LICENSE_GPL),
+
+ /* the function to invoke when plugin is loaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(init, innodb_trx_init),
+
+ /* the function to invoke when plugin is unloaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(deinit, i_s_common_deinit),
+
+ /* plugin version (for SHOW PLUGINS) */
+ /* unsigned int */
+ STRUCT_FLD(version, INNODB_VERSION_SHORT),
+
+ /* struct st_mysql_show_var* */
+ STRUCT_FLD(status_vars, NULL),
+
+ /* struct st_mysql_sys_var** */
+ STRUCT_FLD(system_vars, NULL),
+
+ /* string version */
+ /* const char * */
+ STRUCT_FLD(version_info, "1.0"),
+
+ /* Maturity */
+ /* int */
+ STRUCT_FLD(maturity, PLUGIN_MATURITY_RELEASE),
+
+ /* reserved for dependency checking */
+ /* void* */
+ STRUCT_FLD(__reserved1, NULL)
+};
+
/* Fields of the dynamic table INFORMATION_SCHEMA.innodb_locks */
static ST_FIELD_INFO innodb_locks_fields_info[] =
{
@@ -730,6 +787,63 @@
STRUCT_FLD(__reserved1, NULL)
};
+UNIV_INTERN struct st_maria_plugin i_s_innodb_locks_maria =
+{
+ /* the plugin type (a MYSQL_XXX_PLUGIN value) */
+ /* int */
+ STRUCT_FLD(type, MYSQL_INFORMATION_SCHEMA_PLUGIN),
+
+ /* pointer to type-specific plugin descriptor */
+ /* void* */
+ STRUCT_FLD(info, &i_s_info),
+
+ /* plugin name */
+ /* const char* */
+ STRUCT_FLD(name, "INNODB_LOCKS"),
+
+ /* plugin author (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(author, plugin_author),
+
+ /* general descriptive text (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(descr, "InnoDB conflicting locks"),
+
+ /* the plugin license (PLUGIN_LICENSE_XXX) */
+ /* int */
+ STRUCT_FLD(license, PLUGIN_LICENSE_GPL),
+
+ /* the function to invoke when plugin is loaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(init, innodb_locks_init),
+
+ /* the function to invoke when plugin is unloaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(deinit, i_s_common_deinit),
+
+ /* plugin version (for SHOW PLUGINS) */
+ /* unsigned int */
+ STRUCT_FLD(version, INNODB_VERSION_SHORT),
+
+ /* struct st_mysql_show_var* */
+ STRUCT_FLD(status_vars, NULL),
+
+ /* struct st_mysql_sys_var** */
+ STRUCT_FLD(system_vars, NULL),
+
+ /* string version */
+ /* const char * */
+ STRUCT_FLD(version_info, "1.0"),
+
+ /* Maturity */
+ /* int */
+ STRUCT_FLD(maturity, PLUGIN_MATURITY_RELEASE),
+
+ /* reserved for dependency checking */
+ /* void* */
+ STRUCT_FLD(__reserved1, NULL)
+};
+
/* Fields of the dynamic table INFORMATION_SCHEMA.innodb_lock_waits */
static ST_FIELD_INFO innodb_lock_waits_fields_info[] =
{
@@ -913,6 +1027,63 @@
STRUCT_FLD(__reserved1, NULL)
};
+UNIV_INTERN struct st_maria_plugin i_s_innodb_lock_waits_maria =
+{
+ /* the plugin type (a MYSQL_XXX_PLUGIN value) */
+ /* int */
+ STRUCT_FLD(type, MYSQL_INFORMATION_SCHEMA_PLUGIN),
+
+ /* pointer to type-specific plugin descriptor */
+ /* void* */
+ STRUCT_FLD(info, &i_s_info),
+
+ /* plugin name */
+ /* const char* */
+ STRUCT_FLD(name, "INNODB_LOCK_WAITS"),
+
+ /* plugin author (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(author, "Innobase Oy"),
+
+ /* general descriptive text (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(descr, "InnoDB which lock is blocking which"),
+
+ /* the plugin license (PLUGIN_LICENSE_XXX) */
+ /* int */
+ STRUCT_FLD(license, PLUGIN_LICENSE_GPL),
+
+ /* the function to invoke when plugin is loaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(init, innodb_lock_waits_init),
+
+ /* the function to invoke when plugin is unloaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(deinit, i_s_common_deinit),
+
+ /* plugin version (for SHOW PLUGINS) */
+ /* unsigned int */
+ STRUCT_FLD(version, INNODB_VERSION_SHORT),
+
+ /* struct st_mysql_show_var* */
+ STRUCT_FLD(status_vars, NULL),
+
+ /* struct st_mysql_sys_var** */
+ STRUCT_FLD(system_vars, NULL),
+
+ /* string version */
+ /* const char * */
+ STRUCT_FLD(version_info, "1.0"),
+
+ /* Maturity */
+ /* int */
+ STRUCT_FLD(maturity, PLUGIN_MATURITY_RELEASE),
+
+ /* reserved for dependency checking */
+ /* void* */
+ STRUCT_FLD(__reserved1, NULL)
+};
+
/*******************************************************************//**
Common function to fill any of the dynamic tables:
INFORMATION_SCHEMA.innodb_trx
@@ -1245,6 +1416,63 @@
STRUCT_FLD(__reserved1, NULL)
};
+UNIV_INTERN struct st_mysql_plugin i_s_innodb_cmp_maria =
+{
+ /* the plugin type (a MYSQL_XXX_PLUGIN value) */
+ /* int */
+ STRUCT_FLD(type, MYSQL_INFORMATION_SCHEMA_PLUGIN),
+
+ /* pointer to type-specific plugin descriptor */
+ /* void* */
+ STRUCT_FLD(info, &i_s_info),
+
+ /* plugin name */
+ /* const char* */
+ STRUCT_FLD(name, "INNODB_CMP"),
+
+ /* plugin author (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(author, plugin_author),
+
+ /* general descriptive text (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(descr, "Statistics for the InnoDB compression"),
+
+ /* the plugin license (PLUGIN_LICENSE_XXX) */
+ /* int */
+ STRUCT_FLD(license, PLUGIN_LICENSE_GPL),
+
+ /* the function to invoke when plugin is loaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(init, i_s_cmp_init),
+
+ /* the function to invoke when plugin is unloaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(deinit, i_s_common_deinit),
+
+ /* plugin version (for SHOW PLUGINS) */
+ /* unsigned int */
+ STRUCT_FLD(version, INNODB_VERSION_SHORT),
+
+ /* struct st_mysql_show_var* */
+ STRUCT_FLD(status_vars, NULL),
+
+ /* struct st_mysql_sys_var** */
+ STRUCT_FLD(system_vars, NULL),
+
+ /* string version */
+ /* const char * */
+ STRUCT_FLD(version_info, "1.0"),
+
+ /* Maturity */
+ /* int */
+ STRUCT_FLD(maturity, PLUGIN_MATURITY_RELEASE),
+
+ /* reserved for dependency checking */
+ /* void* */
+ STRUCT_FLD(__reserved1, NULL)
+};
+
UNIV_INTERN struct st_mysql_plugin i_s_innodb_cmp_reset =
{
/* the plugin type (a MYSQL_XXX_PLUGIN value) */
@@ -1295,6 +1523,64 @@
STRUCT_FLD(__reserved1, NULL)
};
+UNIV_INTERN struct st_maria_plugin i_s_innodb_cmp_reset_maria =
+{
+ /* the plugin type (a MYSQL_XXX_PLUGIN value) */
+ /* int */
+ STRUCT_FLD(type, MYSQL_INFORMATION_SCHEMA_PLUGIN),
+
+ /* pointer to type-specific plugin descriptor */
+ /* void* */
+ STRUCT_FLD(info, &i_s_info),
+
+ /* plugin name */
+ /* const char* */
+ STRUCT_FLD(name, "INNODB_CMP_RESET"),
+
+ /* plugin author (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(author, plugin_author),
+
+ /* general descriptive text (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(descr, "Statistics for the InnoDB compression;"
+ " reset cumulated counts"),
+
+ /* the plugin license (PLUGIN_LICENSE_XXX) */
+ /* int */
+ STRUCT_FLD(license, PLUGIN_LICENSE_GPL),
+
+ /* the function to invoke when plugin is loaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(init, i_s_cmp_reset_init),
+
+ /* the function to invoke when plugin is unloaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(deinit, i_s_common_deinit),
+
+ /* plugin version (for SHOW PLUGINS) */
+ /* unsigned int */
+ STRUCT_FLD(version, INNODB_VERSION_SHORT),
+
+ /* struct st_mysql_show_var* */
+ STRUCT_FLD(status_vars, NULL),
+
+ /* struct st_mysql_sys_var** */
+ STRUCT_FLD(system_vars, NULL),
+
+ /* string version */
+ /* const char * */
+ STRUCT_FLD(version_info, "1.0"),
+
+ /* Maturity */
+ /* int */
+ STRUCT_FLD(maturity, PLUGIN_MATURITY_RELEASE),
+
+ /* reserved for dependency checking */
+ /* void* */
+ STRUCT_FLD(__reserved1, NULL)
+};
+
/* Fields of the dynamic table information_schema.innodb_cmpmem. */
static ST_FIELD_INFO i_s_cmpmem_fields_info[] =
{
@@ -1511,6 +1797,63 @@
STRUCT_FLD(__reserved1, NULL)
};
+UNIV_INTERN struct st_maria_plugin i_s_innodb_cmpmem_maria =
+{
+ /* the plugin type (a MYSQL_XXX_PLUGIN value) */
+ /* int */
+ STRUCT_FLD(type, MYSQL_INFORMATION_SCHEMA_PLUGIN),
+
+ /* pointer to type-specific plugin descriptor */
+ /* void* */
+ STRUCT_FLD(info, &i_s_info),
+
+ /* plugin name */
+ /* const char* */
+ STRUCT_FLD(name, "INNODB_CMPMEM"),
+
+ /* plugin author (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(author, plugin_author),
+
+ /* general descriptive text (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(descr, "Statistics for the InnoDB compressed buffer pool"),
+
+ /* the plugin license (PLUGIN_LICENSE_XXX) */
+ /* int */
+ STRUCT_FLD(license, PLUGIN_LICENSE_GPL),
+
+ /* the function to invoke when plugin is loaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(init, i_s_cmpmem_init),
+
+ /* the function to invoke when plugin is unloaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(deinit, i_s_common_deinit),
+
+ /* plugin version (for SHOW PLUGINS) */
+ /* unsigned int */
+ STRUCT_FLD(version, INNODB_VERSION_SHORT),
+
+ /* struct st_mysql_show_var* */
+ STRUCT_FLD(status_vars, NULL),
+
+ /* struct st_mysql_sys_var** */
+ STRUCT_FLD(system_vars, NULL),
+
+ /* string version */
+ /* const char * */
+ STRUCT_FLD(version_info, "1.0"),
+
+ /* Maturity */
+ /* int */
+ STRUCT_FLD(maturity, PLUGIN_MATURITY_RELEASE),
+
+ /* reserved for dependency checking */
+ /* void* */
+ STRUCT_FLD(__reserved1, NULL)
+};
+
UNIV_INTERN struct st_mysql_plugin i_s_innodb_cmpmem_reset =
{
/* the plugin type (a MYSQL_XXX_PLUGIN value) */
@@ -1561,6 +1904,64 @@
STRUCT_FLD(__reserved1, NULL)
};
+UNIV_INTERN struct st_maria_plugin i_s_innodb_cmpmem_reset_maria =
+{
+ /* the plugin type (a MYSQL_XXX_PLUGIN value) */
+ /* int */
+ STRUCT_FLD(type, MYSQL_INFORMATION_SCHEMA_PLUGIN),
+
+ /* pointer to type-specific plugin descriptor */
+ /* void* */
+ STRUCT_FLD(info, &i_s_info),
+
+ /* plugin name */
+ /* const char* */
+ STRUCT_FLD(name, "INNODB_CMPMEM_RESET"),
+
+ /* plugin author (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(author, plugin_author),
+
+ /* general descriptive text (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(descr, "Statistics for the InnoDB compressed buffer pool;"
+ " reset cumulated counts"),
+
+ /* the plugin license (PLUGIN_LICENSE_XXX) */
+ /* int */
+ STRUCT_FLD(license, PLUGIN_LICENSE_GPL),
+
+ /* the function to invoke when plugin is loaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(init, i_s_cmpmem_reset_init),
+
+ /* the function to invoke when plugin is unloaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(deinit, i_s_common_deinit),
+
+ /* plugin version (for SHOW PLUGINS) */
+ /* unsigned int */
+ STRUCT_FLD(version, INNODB_VERSION_SHORT),
+
+ /* struct st_mysql_show_var* */
+ STRUCT_FLD(status_vars, NULL),
+
+ /* struct st_mysql_sys_var** */
+ STRUCT_FLD(system_vars, NULL),
+
+ /* string version */
+ /* const char * */
+ STRUCT_FLD(version_info, "1.0"),
+
+ /* Maturity */
+ /* int */
+ STRUCT_FLD(maturity, PLUGIN_MATURITY_RELEASE),
+
+ /* reserved for dependency checking */
+ /* void* */
+ STRUCT_FLD(__reserved1, NULL)
+};
+
/*******************************************************************//**
Unbind a dynamic INFORMATION_SCHEMA table.
@return 0 on success */
=== modified file 'storage/maria/ha_maria.cc'
--- a/storage/maria/ha_maria.cc 2009-10-26 11:35:42 +0000
+++ b/storage/maria/ha_maria.cc 2010-03-11 15:02:03 +0000
@@ -3346,9 +3346,27 @@
PLUGIN_LICENSE_GPL,
ha_maria_init, /* Plugin Init */
NULL, /* Plugin Deinit */
- 0x0100, /* 1.0 */
+ 0x0105, /* 1.5 */
status_variables, /* status variables */
system_variables, /* system variables */
NULL
}
mysql_declare_plugin_end;
+maria_declare_plugin(maria)
+{
+ MYSQL_STORAGE_ENGINE_PLUGIN,
+ &maria_storage_engine,
+ "MARIA",
+ "MySQL AB",
+ "Crash-safe tables with MyISAM heritage",
+ PLUGIN_LICENSE_GPL,
+ ha_maria_init, /* Plugin Init */
+ NULL, /* Plugin Deinit */
+ 0x0105, /* 1.5 */
+ status_variables, /* status variables */
+ system_variables, /* system variables */
+ "1.5", /* string version */
+ PLUGIN_MATURITY_GAMMA, /* maturity */
+ NULL
+}
+maria_declare_plugin_end;
=== modified file 'storage/myisam/ha_myisam.cc'
--- a/storage/myisam/ha_myisam.cc 2009-10-17 19:12:28 +0000
+++ b/storage/myisam/ha_myisam.cc 2010-03-11 15:02:03 +0000
@@ -2183,6 +2183,24 @@
NULL /* config options */
}
mysql_declare_plugin_end;
+maria_declare_plugin(myisam)
+{
+ MYSQL_STORAGE_ENGINE_PLUGIN,
+ &myisam_storage_engine,
+ "MyISAM",
+ "MySQL AB",
+ "Default engine as of MySQL 3.23 with great performance",
+ PLUGIN_LICENSE_GPL,
+ myisam_init, /* Plugin Init */
+ NULL, /* Plugin Deinit */
+ 0x0100, /* 1.0 */
+ NULL, /* status variables */
+ NULL, /* system variables */
+ "1.0", /* string version */
+ PLUGIN_MATURITY_RELEASE, /* maturity */
+ NULL /* config options */
+}
+maria_declare_plugin_end;
#ifdef HAVE_QUERY_CACHE
=== modified file 'storage/myisammrg/ha_myisammrg.cc'
--- a/storage/myisammrg/ha_myisammrg.cc 2009-10-15 21:38:29 +0000
+++ b/storage/myisammrg/ha_myisammrg.cc 2010-03-11 15:02:03 +0000
@@ -1289,3 +1289,21 @@
NULL /* config options */
}
mysql_declare_plugin_end;
+maria_declare_plugin(myisammrg)
+{
+ MYSQL_STORAGE_ENGINE_PLUGIN,
+ &myisammrg_storage_engine,
+ "MRG_MYISAM",
+ "MySQL AB",
+ "Collection of identical MyISAM tables",
+ PLUGIN_LICENSE_GPL,
+ myisammrg_init, /* Plugin Init */
+ NULL, /* Plugin Deinit */
+ 0x0100, /* 1.0 */
+ NULL, /* status variables */
+ NULL, /* system variables */
+ "1.0", /* string version */
+ PLUGIN_MATURITY_RELEASE, /* maturity */
+ NULL /* config options */
+}
+maria_declare_plugin_end;
=== modified file 'storage/pbxt/src/ha_pbxt.cc'
--- a/storage/pbxt/src/ha_pbxt.cc 2009-09-03 06:15:03 +0000
+++ b/storage/pbxt/src/ha_pbxt.cc 2010-03-11 15:02:03 +0000
@@ -5507,6 +5507,42 @@
drizzle_declare_plugin_end;
#else
mysql_declare_plugin_end;
+#ifdef MARIADB_BASE_VERSION
+maria_declare_plugin(pbxt)
+{ /* PBXT */
+ MYSQL_STORAGE_ENGINE_PLUGIN,
+ &pbxt_storage_engine,
+ "PBXT",
+ "Paul McCullagh, PrimeBase Technologies GmbH",
+ "High performance, multi-versioning transactional engine",
+ PLUGIN_LICENSE_GPL,
+ pbxt_init, /* Plugin Init */
+ pbxt_end, /* Plugin Deinit */
+ 0x0001 /* 0.1 */,
+ NULL, /* status variables */
+ pbxt_system_variables, /* system variables */
+ "1.0.09g RC3", /* string version */
+ PLUGIN_MATURITY_GAMMA, /* maturity */
+ NULL /* config options */
+},
+{ /* PBXT_STATISTICS */
+ MYSQL_INFORMATION_SCHEMA_PLUGIN,
+ &pbxt_statitics,
+ "PBXT_STATISTICS",
+ "Paul McCullagh, PrimeBase Technologies GmbH",
+ "PBXT internal system statitics",
+ PLUGIN_LICENSE_GPL,
+ pbxt_init_statitics, /* plugin init */
+ pbxt_exit_statitics, /* plugin deinit */
+ 0x0005,
+ NULL, /* status variables */
+ NULL, /* system variables */
+ "1.0.09g RC3", /* string version */
+ PLUGIN_MATURITY_GAMMA, /* maturity */
+ NULL /* config options */
+}
+maria_declare_plugin_end;
+#endif
#endif
#if defined(XT_WIN) && defined(XT_COREDUMP)
=== modified file 'storage/xtradb/handler/ha_innodb.cc'
--- a/storage/xtradb/handler/ha_innodb.cc 2009-10-16 22:57:48 +0000
+++ b/storage/xtradb/handler/ha_innodb.cc 2010-03-11 15:02:03 +0000
@@ -10540,6 +10540,39 @@
i_s_innodb_index_stats,
i_s_innodb_patches
mysql_declare_plugin_end;
+maria_declare_plugin(innobase)
+{ /* InnoDB */
+ MYSQL_STORAGE_ENGINE_PLUGIN,
+ &innobase_storage_engine,
+ innobase_hton_name,
+ "Innobase Oy",
+ "Supports transactions, row-level locking, and foreign keys",
+ PLUGIN_LICENSE_GPL,
+ innobase_init, /* Plugin Init */
+ NULL, /* Plugin Deinit */
+ INNODB_VERSION_SHORT,
+ innodb_status_variables_export,/* status variables */
+ innobase_system_variables, /* system variables */
+ INNODB_VERSION_STR, /* string version */
+ PLUGIN_MATURITY_RELEASE, /* maturity */
+ NULL /* reserved */
+},
+i_s_innodb_rseg_maria,
+i_s_innodb_buffer_pool_pages_maria,
+i_s_innodb_buffer_pool_pages_index_maria,
+i_s_innodb_buffer_pool_pages_blob_maria,
+i_s_innodb_trx_maria,
+i_s_innodb_locks_maria,
+i_s_innodb_lock_waits_maria,
+i_s_innodb_cmp_maria,
+i_s_innodb_cmp_reset_maria,
+i_s_innodb_cmpmem_maria,
+i_s_innodb_cmpmem_reset_maria,
+i_s_innodb_table_stats_maria,
+i_s_innodb_index_stats_maria,
+i_s_innodb_patches_maria
+maria_declare_plugin_end;
+
/** @brief Initialize the default value of innodb_commit_concurrency.
=== modified file 'storage/xtradb/handler/i_s.cc'
--- a/storage/xtradb/handler/i_s.cc 2009-09-15 10:46:35 +0000
+++ b/storage/xtradb/handler/i_s.cc 2010-03-11 15:02:03 +0000
@@ -390,6 +390,63 @@
STRUCT_FLD(__reserved1, NULL)
};
+UNIV_INTERN struct st_maria_plugin i_s_innodb_patches_maria =
+{
+ /* the plugin type (a MYSQL_XXX_PLUGIN value) */
+ /* int */
+ STRUCT_FLD(type, MYSQL_INFORMATION_SCHEMA_PLUGIN),
+
+ /* pointer to type-specific plugin descriptor */
+ /* void* */
+ STRUCT_FLD(info, &i_s_info),
+
+ /* plugin name */
+ /* const char* */
+ STRUCT_FLD(name, "XTRADB_ENHANCEMENTS"),
+
+ /* plugin author (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(author, "Percona"),
+
+ /* general descriptive text (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(descr, "Enhancements applied to InnoDB plugin"),
+
+ /* the plugin license (PLUGIN_LICENSE_XXX) */
+ /* int */
+ STRUCT_FLD(license, PLUGIN_LICENSE_GPL),
+
+ /* the function to invoke when plugin is loaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(init, innodb_patches_init),
+
+ /* the function to invoke when plugin is unloaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(deinit, i_s_common_deinit),
+
+ /* plugin version (for SHOW PLUGINS) */
+ /* unsigned int */
+ STRUCT_FLD(version, INNODB_VERSION_SHORT),
+
+ /* struct st_mysql_show_var* */
+ STRUCT_FLD(status_vars, NULL),
+
+ /* struct st_mysql_sys_var** */
+ STRUCT_FLD(system_vars, NULL),
+
+ /* string version */
+ /* const char * */
+ STRUCT_FLD(version_info, "1.0"),
+
+ /* Maturity */
+ /* int */
+ STRUCT_FLD(maturity, PLUGIN_MATURITY_RELEASE),
+
+ /* reserved for dependency checking */
+ /* void* */
+ STRUCT_FLD(__reserved1, NULL)
+};
+
static ST_FIELD_INFO i_s_innodb_buffer_pool_pages_fields_info[] =
{
@@ -1037,6 +1094,63 @@
STRUCT_FLD(__reserved1, NULL)
};
+UNIV_INTERN struct st_maria_plugin i_s_innodb_buffer_pool_pages_maria =
+{
+ /* the plugin type (a MYSQL_XXX_PLUGIN value) */
+ /* int */
+ STRUCT_FLD(type, MYSQL_INFORMATION_SCHEMA_PLUGIN),
+
+ /* pointer to type-specific plugin descriptor */
+ /* void* */
+ STRUCT_FLD(info, &i_s_info),
+
+ /* plugin name */
+ /* const char* */
+ STRUCT_FLD(name, "INNODB_BUFFER_POOL_PAGES"),
+
+ /* plugin author (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(author, plugin_author),
+
+ /* general descriptive text (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(descr, "InnoDB buffer pool pages"),
+
+ /* the plugin license (PLUGIN_LICENSE_XXX) */
+ /* int */
+ STRUCT_FLD(license, PLUGIN_LICENSE_GPL),
+
+ /* the function to invoke when plugin is loaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(init, i_s_innodb_buffer_pool_pages_init),
+
+ /* the function to invoke when plugin is unloaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(deinit, i_s_common_deinit),
+
+ /* plugin version (for SHOW PLUGINS) */
+ /* unsigned int */
+ STRUCT_FLD(version, 0x0100 /* 1.0 */),
+
+ /* struct st_mysql_show_var* */
+ STRUCT_FLD(status_vars, NULL),
+
+ /* struct st_mysql_sys_var** */
+ STRUCT_FLD(system_vars, NULL),
+
+ /* string version */
+ /* const char * */
+ STRUCT_FLD(version_info, "1.0"),
+
+ /* Maturity */
+ /* int */
+ STRUCT_FLD(maturity, PLUGIN_MATURITY_RELEASE),
+
+ /* reserved for dependency checking */
+ /* void* */
+ STRUCT_FLD(__reserved1, NULL)
+};
+
UNIV_INTERN struct st_mysql_plugin i_s_innodb_buffer_pool_pages_index =
{
/* the plugin type (a MYSQL_XXX_PLUGIN value) */
@@ -1086,6 +1200,63 @@
STRUCT_FLD(__reserved1, NULL)
};
+UNIV_INTERN struct st_maria_plugin i_s_innodb_buffer_pool_pages_index_maria =
+{
+ /* the plugin type (a MYSQL_XXX_PLUGIN value) */
+ /* int */
+ STRUCT_FLD(type, MYSQL_INFORMATION_SCHEMA_PLUGIN),
+
+ /* pointer to type-specific plugin descriptor */
+ /* void* */
+ STRUCT_FLD(info, &i_s_info),
+
+ /* plugin name */
+ /* const char* */
+ STRUCT_FLD(name, "INNODB_BUFFER_POOL_PAGES_INDEX"),
+
+ /* plugin author (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(author, plugin_author),
+
+ /* general descriptive text (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(descr, "InnoDB buffer pool index pages"),
+
+ /* the plugin license (PLUGIN_LICENSE_XXX) */
+ /* int */
+ STRUCT_FLD(license, PLUGIN_LICENSE_GPL),
+
+ /* the function to invoke when plugin is loaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(init, i_s_innodb_buffer_pool_pages_index_init),
+
+ /* the function to invoke when plugin is unloaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(deinit, i_s_common_deinit),
+
+ /* plugin version (for SHOW PLUGINS) */
+ /* unsigned int */
+ STRUCT_FLD(version, 0x0100 /* 1.0 */),
+
+ /* struct st_mysql_show_var* */
+ STRUCT_FLD(status_vars, NULL),
+
+ /* struct st_mysql_sys_var** */
+ STRUCT_FLD(system_vars, NULL),
+
+ /* string version */
+ /* const char * */
+ STRUCT_FLD(version_info, "1.0"),
+
+ /* Maturity */
+ /* int */
+ STRUCT_FLD(maturity, PLUGIN_MATURITY_RELEASE),
+
+ /* reserved for dependency checking */
+ /* void* */
+ STRUCT_FLD(__reserved1, NULL)
+};
+
UNIV_INTERN struct st_mysql_plugin i_s_innodb_buffer_pool_pages_blob =
{
/* the plugin type (a MYSQL_XXX_PLUGIN value) */
@@ -1135,6 +1306,63 @@
STRUCT_FLD(__reserved1, NULL)
};
+UNIV_INTERN struct st_maria_plugin i_s_innodb_buffer_pool_pages_blob_maria =
+{
+ /* the plugin type (a MYSQL_XXX_PLUGIN value) */
+ /* int */
+ STRUCT_FLD(type, MYSQL_INFORMATION_SCHEMA_PLUGIN),
+
+ /* pointer to type-specific plugin descriptor */
+ /* void* */
+ STRUCT_FLD(info, &i_s_info),
+
+ /* plugin name */
+ /* const char* */
+ STRUCT_FLD(name, "INNODB_BUFFER_POOL_PAGES_BLOB"),
+
+ /* plugin author (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(author, plugin_author),
+
+ /* general descriptive text (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(descr, "InnoDB buffer pool blob pages"),
+
+ /* the plugin license (PLUGIN_LICENSE_XXX) */
+ /* int */
+ STRUCT_FLD(license, PLUGIN_LICENSE_GPL),
+
+ /* the function to invoke when plugin is loaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(init, i_s_innodb_buffer_pool_pages_blob_init),
+
+ /* the function to invoke when plugin is unloaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(deinit, i_s_common_deinit),
+
+ /* plugin version (for SHOW PLUGINS) */
+ /* unsigned int */
+ STRUCT_FLD(version, 0x0100 /* 1.0 */),
+
+ /* struct st_mysql_show_var* */
+ STRUCT_FLD(status_vars, NULL),
+
+ /* struct st_mysql_sys_var** */
+ STRUCT_FLD(system_vars, NULL),
+
+ /* string version */
+ /* const char * */
+ STRUCT_FLD(version_info, "1.0"),
+
+ /* Maturity */
+ /* int */
+ STRUCT_FLD(maturity, PLUGIN_MATURITY_RELEASE),
+
+ /* reserved for dependency checking */
+ /* void* */
+ STRUCT_FLD(__reserved1, NULL)
+};
+
/* Fields of the dynamic table INFORMATION_SCHEMA.innodb_trx */
static ST_FIELD_INFO innodb_trx_fields_info[] =
@@ -1370,6 +1598,64 @@
STRUCT_FLD(__reserved1, NULL)
};
+
+UNIV_INTERN struct st_maria_plugin i_s_innodb_trx_maria =
+{
+ /* the plugin type (a MYSQL_XXX_PLUGIN value) */
+ /* int */
+ STRUCT_FLD(type, MYSQL_INFORMATION_SCHEMA_PLUGIN),
+
+ /* pointer to type-specific plugin descriptor */
+ /* void* */
+ STRUCT_FLD(info, &i_s_info),
+
+ /* plugin name */
+ /* const char* */
+ STRUCT_FLD(name, "INNODB_TRX"),
+
+ /* plugin author (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(author, plugin_author),
+
+ /* general descriptive text (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(descr, "InnoDB transactions"),
+
+ /* the plugin license (PLUGIN_LICENSE_XXX) */
+ /* int */
+ STRUCT_FLD(license, PLUGIN_LICENSE_GPL),
+
+ /* the function to invoke when plugin is loaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(init, innodb_trx_init),
+
+ /* the function to invoke when plugin is unloaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(deinit, i_s_common_deinit),
+
+ /* plugin version (for SHOW PLUGINS) */
+ /* unsigned int */
+ STRUCT_FLD(version, INNODB_VERSION_SHORT),
+
+ /* struct st_mysql_show_var* */
+ STRUCT_FLD(status_vars, NULL),
+
+ /* struct st_mysql_sys_var** */
+ STRUCT_FLD(system_vars, NULL),
+
+ /* string version */
+ /* const char * */
+ STRUCT_FLD(version_info, "1.0"),
+
+ /* Maturity */
+ /* int */
+ STRUCT_FLD(maturity, PLUGIN_MATURITY_RELEASE),
+
+ /* reserved for dependency checking */
+ /* void* */
+ STRUCT_FLD(__reserved1, NULL)
+};
+
/* Fields of the dynamic table INFORMATION_SCHEMA.innodb_locks */
static ST_FIELD_INFO innodb_locks_fields_info[] =
{
@@ -1645,6 +1931,63 @@
STRUCT_FLD(__reserved1, NULL)
};
+UNIV_INTERN struct st_maria_plugin i_s_innodb_locks_maria =
+{
+ /* the plugin type (a MYSQL_XXX_PLUGIN value) */
+ /* int */
+ STRUCT_FLD(type, MYSQL_INFORMATION_SCHEMA_PLUGIN),
+
+ /* pointer to type-specific plugin descriptor */
+ /* void* */
+ STRUCT_FLD(info, &i_s_info),
+
+ /* plugin name */
+ /* const char* */
+ STRUCT_FLD(name, "INNODB_LOCKS"),
+
+ /* plugin author (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(author, plugin_author),
+
+ /* general descriptive text (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(descr, "InnoDB conflicting locks"),
+
+ /* the plugin license (PLUGIN_LICENSE_XXX) */
+ /* int */
+ STRUCT_FLD(license, PLUGIN_LICENSE_GPL),
+
+ /* the function to invoke when plugin is loaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(init, innodb_locks_init),
+
+ /* the function to invoke when plugin is unloaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(deinit, i_s_common_deinit),
+
+ /* plugin version (for SHOW PLUGINS) */
+ /* unsigned int */
+ STRUCT_FLD(version, INNODB_VERSION_SHORT),
+
+ /* struct st_mysql_show_var* */
+ STRUCT_FLD(status_vars, NULL),
+
+ /* struct st_mysql_sys_var** */
+ STRUCT_FLD(system_vars, NULL),
+
+ /* string version */
+ /* const char * */
+ STRUCT_FLD(version_info, "1.0"),
+
+ /* Maturity */
+ /* int */
+ STRUCT_FLD(maturity, PLUGIN_MATURITY_RELEASE),
+
+ /* reserved for dependency checking */
+ /* void* */
+ STRUCT_FLD(__reserved1, NULL)
+};
+
/* Fields of the dynamic table INFORMATION_SCHEMA.innodb_lock_waits */
static ST_FIELD_INFO innodb_lock_waits_fields_info[] =
{
@@ -1828,6 +2171,63 @@
STRUCT_FLD(__reserved1, NULL)
};
+UNIV_INTERN struct st_maria_plugin i_s_innodb_lock_waits_maria =
+{
+ /* the plugin type (a MYSQL_XXX_PLUGIN value) */
+ /* int */
+ STRUCT_FLD(type, MYSQL_INFORMATION_SCHEMA_PLUGIN),
+
+ /* pointer to type-specific plugin descriptor */
+ /* void* */
+ STRUCT_FLD(info, &i_s_info),
+
+ /* plugin name */
+ /* const char* */
+ STRUCT_FLD(name, "INNODB_LOCK_WAITS"),
+
+ /* plugin author (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(author, "Innobase Oy"),
+
+ /* general descriptive text (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(descr, "InnoDB which lock is blocking which"),
+
+ /* the plugin license (PLUGIN_LICENSE_XXX) */
+ /* int */
+ STRUCT_FLD(license, PLUGIN_LICENSE_GPL),
+
+ /* the function to invoke when plugin is loaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(init, innodb_lock_waits_init),
+
+ /* the function to invoke when plugin is unloaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(deinit, i_s_common_deinit),
+
+ /* plugin version (for SHOW PLUGINS) */
+ /* unsigned int */
+ STRUCT_FLD(version, INNODB_VERSION_SHORT),
+
+ /* struct st_mysql_show_var* */
+ STRUCT_FLD(status_vars, NULL),
+
+ /* struct st_mysql_sys_var** */
+ STRUCT_FLD(system_vars, NULL),
+
+ /* string version */
+ /* const char * */
+ STRUCT_FLD(version_info, "1.0"),
+
+ /* Maturity */
+ /* int */
+ STRUCT_FLD(maturity, PLUGIN_MATURITY_RELEASE),
+
+ /* reserved for dependency checking */
+ /* void* */
+ STRUCT_FLD(__reserved1, NULL)
+};
+
/***********************************************************************
Common function to fill any of the dynamic tables:
INFORMATION_SCHEMA.innodb_trx
@@ -2160,6 +2560,63 @@
STRUCT_FLD(__reserved1, NULL)
};
+UNIV_INTERN struct st_maria_plugin i_s_innodb_cmp_maria =
+{
+ /* the plugin type (a MYSQL_XXX_PLUGIN value) */
+ /* int */
+ STRUCT_FLD(type, MYSQL_INFORMATION_SCHEMA_PLUGIN),
+
+ /* pointer to type-specific plugin descriptor */
+ /* void* */
+ STRUCT_FLD(info, &i_s_info),
+
+ /* plugin name */
+ /* const char* */
+ STRUCT_FLD(name, "INNODB_CMP"),
+
+ /* plugin author (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(author, plugin_author),
+
+ /* general descriptive text (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(descr, "Statistics for the InnoDB compression"),
+
+ /* the plugin license (PLUGIN_LICENSE_XXX) */
+ /* int */
+ STRUCT_FLD(license, PLUGIN_LICENSE_GPL),
+
+ /* the function to invoke when plugin is loaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(init, i_s_cmp_init),
+
+ /* the function to invoke when plugin is unloaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(deinit, i_s_common_deinit),
+
+ /* plugin version (for SHOW PLUGINS) */
+ /* unsigned int */
+ STRUCT_FLD(version, INNODB_VERSION_SHORT),
+
+ /* struct st_mysql_show_var* */
+ STRUCT_FLD(status_vars, NULL),
+
+ /* struct st_mysql_sys_var** */
+ STRUCT_FLD(system_vars, NULL),
+
+ /* string version */
+ /* const char * */
+ STRUCT_FLD(version_info, "1.0"),
+
+ /* Maturity */
+ /* int */
+ STRUCT_FLD(maturity, PLUGIN_MATURITY_RELEASE),
+
+ /* reserved for dependency checking */
+ /* void* */
+ STRUCT_FLD(__reserved1, NULL)
+};
+
UNIV_INTERN struct st_mysql_plugin i_s_innodb_cmp_reset =
{
/* the plugin type (a MYSQL_XXX_PLUGIN value) */
@@ -2210,6 +2667,63 @@
STRUCT_FLD(__reserved1, NULL)
};
+UNIV_INTERN struct st_maria_plugin i_s_innodb_cmp_reset_maria =
+{
+ /* the plugin type (a MYSQL_XXX_PLUGIN value) */
+ /* int */
+ STRUCT_FLD(type, MYSQL_INFORMATION_SCHEMA_PLUGIN),
+
+ /* pointer to type-specific plugin descriptor */
+ /* void* */
+ STRUCT_FLD(info, &i_s_info),
+
+ /* plugin name */
+ /* const char* */
+ STRUCT_FLD(name, "INNODB_CMP_RESET"),
+
+ /* plugin author (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(author, plugin_author),
+
+ /* general descriptive text (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(descr, "Statistics for the InnoDB compression;"
+ " reset cumulated counts"),
+
+ /* the plugin license (PLUGIN_LICENSE_XXX) */
+ /* int */
+ STRUCT_FLD(license, PLUGIN_LICENSE_GPL),
+
+ /* the function to invoke when plugin is loaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(init, i_s_cmp_reset_init),
+
+ /* the function to invoke when plugin is unloaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(deinit, i_s_common_deinit),
+
+ /* plugin version (for SHOW PLUGINS) */
+ /* unsigned int */
+ STRUCT_FLD(version, INNODB_VERSION_SHORT),
+
+ /* struct st_mysql_show_var* */
+ STRUCT_FLD(status_vars, NULL),
+
+ /* struct st_mysql_sys_var** */
+ STRUCT_FLD(system_vars, NULL),
+
+ /* string version */
+ /* const char * */
+ STRUCT_FLD(version_info, "1.0"),
+
+ /* Maturity */
+ /* int */
+ STRUCT_FLD(maturity, PLUGIN_MATURITY_RELEASE),
+
+ /* reserved for dependency checking */
+ /* void* */
+ STRUCT_FLD(__reserved1, NULL)
+};
/* Fields of the dynamic table information_schema.innodb_cmpmem. */
static ST_FIELD_INFO i_s_cmpmem_fields_info[] =
{
@@ -2428,6 +2942,63 @@
STRUCT_FLD(__reserved1, NULL)
};
+UNIV_INTERN struct st_maria_plugin i_s_innodb_cmpmem_maria =
+{
+ /* the plugin type (a MYSQL_XXX_PLUGIN value) */
+ /* int */
+ STRUCT_FLD(type, MYSQL_INFORMATION_SCHEMA_PLUGIN),
+
+ /* pointer to type-specific plugin descriptor */
+ /* void* */
+ STRUCT_FLD(info, &i_s_info),
+
+ /* plugin name */
+ /* const char* */
+ STRUCT_FLD(name, "INNODB_CMPMEM"),
+
+ /* plugin author (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(author, plugin_author),
+
+ /* general descriptive text (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(descr, "Statistics for the InnoDB compressed buffer pool"),
+
+ /* the plugin license (PLUGIN_LICENSE_XXX) */
+ /* int */
+ STRUCT_FLD(license, PLUGIN_LICENSE_GPL),
+
+ /* the function to invoke when plugin is loaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(init, i_s_cmpmem_init),
+
+ /* the function to invoke when plugin is unloaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(deinit, i_s_common_deinit),
+
+ /* plugin version (for SHOW PLUGINS) */
+ /* unsigned int */
+ STRUCT_FLD(version, INNODB_VERSION_SHORT),
+
+ /* struct st_mysql_show_var* */
+ STRUCT_FLD(status_vars, NULL),
+
+ /* struct st_mysql_sys_var** */
+ STRUCT_FLD(system_vars, NULL),
+
+ /* string version */
+ /* const char * */
+ STRUCT_FLD(version_info, "1.0"),
+
+ /* Maturity */
+ /* int */
+ STRUCT_FLD(maturity, PLUGIN_MATURITY_RELEASE),
+
+ /* reserved for dependency checking */
+ /* void* */
+ STRUCT_FLD(__reserved1, NULL)
+};
+
UNIV_INTERN struct st_mysql_plugin i_s_innodb_cmpmem_reset =
{
/* the plugin type (a MYSQL_XXX_PLUGIN value) */
@@ -2478,6 +3049,64 @@
STRUCT_FLD(__reserved1, NULL)
};
+UNIV_INTERN struct st_maria_plugin i_s_innodb_cmpmem_reset_maria =
+{
+ /* the plugin type (a MYSQL_XXX_PLUGIN value) */
+ /* int */
+ STRUCT_FLD(type, MYSQL_INFORMATION_SCHEMA_PLUGIN),
+
+ /* pointer to type-specific plugin descriptor */
+ /* void* */
+ STRUCT_FLD(info, &i_s_info),
+
+ /* plugin name */
+ /* const char* */
+ STRUCT_FLD(name, "INNODB_CMPMEM_RESET"),
+
+ /* plugin author (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(author, plugin_author),
+
+ /* general descriptive text (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(descr, "Statistics for the InnoDB compressed buffer pool;"
+ " reset cumulated counts"),
+
+ /* the plugin license (PLUGIN_LICENSE_XXX) */
+ /* int */
+ STRUCT_FLD(license, PLUGIN_LICENSE_GPL),
+
+ /* the function to invoke when plugin is loaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(init, i_s_cmpmem_reset_init),
+
+ /* the function to invoke when plugin is unloaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(deinit, i_s_common_deinit),
+
+ /* plugin version (for SHOW PLUGINS) */
+ /* unsigned int */
+ STRUCT_FLD(version, INNODB_VERSION_SHORT),
+
+ /* struct st_mysql_show_var* */
+ STRUCT_FLD(status_vars, NULL),
+
+ /* struct st_mysql_sys_var** */
+ STRUCT_FLD(system_vars, NULL),
+
+ /* string version */
+ /* const char * */
+ STRUCT_FLD(version_info, "1.0"),
+
+ /* Maturity */
+ /* int */
+ STRUCT_FLD(maturity, PLUGIN_MATURITY_RELEASE),
+
+ /* reserved for dependency checking */
+ /* void* */
+ STRUCT_FLD(__reserved1, NULL)
+};
+
/***********************************************************************
Unbind a dynamic INFORMATION_SCHEMA table. */
static
@@ -2657,6 +3286,63 @@
STRUCT_FLD(__reserved1, NULL)
};
+UNIV_INTERN struct st_maria_plugin i_s_innodb_rseg_maria =
+{
+ /* the plugin type (a MYSQL_XXX_PLUGIN value) */
+ /* int */
+ STRUCT_FLD(type, MYSQL_INFORMATION_SCHEMA_PLUGIN),
+
+ /* pointer to type-specific plugin descriptor */
+ /* void* */
+ STRUCT_FLD(info, &i_s_info),
+
+ /* plugin name */
+ /* const char* */
+ STRUCT_FLD(name, "INNODB_RSEG"),
+
+ /* plugin author (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(author, plugin_author),
+
+ /* general descriptive text (for SHOW PLUGINS) */
+ /* const char* */
+ STRUCT_FLD(descr, "InnoDB rollback segment information"),
+
+ /* the plugin license (PLUGIN_LICENSE_XXX) */
+ /* int */
+ STRUCT_FLD(license, PLUGIN_LICENSE_GPL),
+
+ /* the function to invoke when plugin is loaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(init, i_s_innodb_rseg_init),
+
+ /* the function to invoke when plugin is unloaded */
+ /* int (*)(void*); */
+ STRUCT_FLD(deinit, i_s_common_deinit),
+
+ /* plugin version (for SHOW PLUGINS) */
+ /* unsigned int */
+ STRUCT_FLD(version, 0x0100 /* 1.0 */),
+
+ /* struct st_mysql_show_var* */
+ STRUCT_FLD(status_vars, NULL),
+
+ /* struct st_mysql_sys_var** */
+ STRUCT_FLD(system_vars, NULL),
+
+ /* string version */
+ /* const char * */
+ STRUCT_FLD(version_info, "1.0"),
+
+ /* Maturity */
+ /* int */
+ STRUCT_FLD(maturity, PLUGIN_MATURITY_RELEASE),
+
+ /* reserved for dependency checking */
+ /* void* */
+ STRUCT_FLD(__reserved1, NULL)
+};
+
/***********************************************************************
*/
static ST_FIELD_INFO i_s_innodb_table_stats_info[] =
@@ -2937,6 +3623,24 @@
STRUCT_FLD(__reserved1, NULL)
};
+UNIV_INTERN struct st_maria_plugin i_s_innodb_table_stats_maria =
+{
+ STRUCT_FLD(type, MYSQL_INFORMATION_SCHEMA_PLUGIN),
+ STRUCT_FLD(info, &i_s_info),
+ STRUCT_FLD(name, "INNODB_TABLE_STATS"),
+ STRUCT_FLD(author, plugin_author),
+ STRUCT_FLD(descr, "InnoDB table statistics in memory"),
+ STRUCT_FLD(license, PLUGIN_LICENSE_GPL),
+ STRUCT_FLD(init, i_s_innodb_table_stats_init),
+ STRUCT_FLD(deinit, i_s_common_deinit),
+ STRUCT_FLD(version, 0x0100 /* 1.0 */),
+ STRUCT_FLD(status_vars, NULL),
+ STRUCT_FLD(system_vars, NULL),
+ STRUCT_FLD(version_info, "1.0"),
+ STRUCT_FLD(maturity, PLUGIN_MATURITY_RELEASE),
+ STRUCT_FLD(__reserved1, NULL)
+};
+
UNIV_INTERN struct st_mysql_plugin i_s_innodb_index_stats =
{
STRUCT_FLD(type, MYSQL_INFORMATION_SCHEMA_PLUGIN),
@@ -2952,3 +3656,21 @@
STRUCT_FLD(system_vars, NULL),
STRUCT_FLD(__reserved1, NULL)
};
+
+UNIV_INTERN struct st_maria_plugin i_s_innodb_index_stats_maria =
+{
+ STRUCT_FLD(type, MYSQL_INFORMATION_SCHEMA_PLUGIN),
+ STRUCT_FLD(info, &i_s_info),
+ STRUCT_FLD(name, "INNODB_INDEX_STATS"),
+ STRUCT_FLD(author, plugin_author),
+ STRUCT_FLD(descr, "InnoDB index statistics in memory"),
+ STRUCT_FLD(license, PLUGIN_LICENSE_GPL),
+ STRUCT_FLD(init, i_s_innodb_index_stats_init),
+ STRUCT_FLD(deinit, i_s_common_deinit),
+ STRUCT_FLD(version, 0x0100 /* 1.0 */),
+ STRUCT_FLD(status_vars, NULL),
+ STRUCT_FLD(system_vars, NULL),
+ STRUCT_FLD(version_info, "1.0"),
+ STRUCT_FLD(maturity, PLUGIN_MATURITY_RELEASE),
+ STRUCT_FLD(__reserved1, NULL)
+};
=== modified file 'storage/xtradb/handler/i_s.h'
--- a/storage/xtradb/handler/i_s.h 2009-06-25 01:43:25 +0000
+++ b/storage/xtradb/handler/i_s.h 2010-03-11 15:02:03 +0000
@@ -40,4 +40,19 @@
extern struct st_mysql_plugin i_s_innodb_table_stats;
extern struct st_mysql_plugin i_s_innodb_index_stats;
+extern struct st_maria_plugin i_s_innodb_buffer_pool_pages_maria;
+extern struct st_maria_plugin i_s_innodb_buffer_pool_pages_index_maria;
+extern struct st_maria_plugin i_s_innodb_buffer_pool_pages_blob_maria;
+extern struct st_maria_plugin i_s_innodb_trx_maria;
+extern struct st_maria_plugin i_s_innodb_locks_maria;
+extern struct st_maria_plugin i_s_innodb_lock_waits_maria;
+extern struct st_maria_plugin i_s_innodb_cmp_maria;
+extern struct st_maria_plugin i_s_innodb_cmp_reset_maria;
+extern struct st_maria_plugin i_s_innodb_cmpmem_maria;
+extern struct st_maria_plugin i_s_innodb_cmpmem_reset_maria;
+extern struct st_maria_plugin i_s_innodb_patches_maria;
+extern struct st_maria_plugin i_s_innodb_rseg_maria;
+extern struct st_maria_plugin i_s_innodb_table_stats_maria;
+extern struct st_maria_plugin i_s_innodb_index_stats_maria;
+
#endif /* i_s_h */
1
0
Hi!
A quick answer, to settle the "big" questions faster.
On 11 March 2010, at 12:30, Sergei Golubchik wrote:
> Hi, Sanja!
>
> Here's the review, below:
>
> Summary:
>
> 1. please, store options together with the objects they describe, not
> separately.
The .frm file is not a place where I can store things wherever I want; IMHO
the one extension, stored the way it is now, is the only way to keep .frm compatible.
> 2. Unknown option should be an error by default.
> 3. use something my_getopt-like as we discussed, don't force every
> engine to parse its options
That goes directly against Monty's expectations. I remember an old discussion
where, as far as I recall, the my_getopt-like idea was rejected in the end.
As for error messages, that can be done for CREATE TABLE, but for ALTER TABLE
we agreed to issue warnings.
[skip]
> 5. don't check for changed options in alter table with your
> check_if_incompatible_data. let the engine do that.
Do you mean an additional call to the engine?
[skip]
> 7. parser: make the equal sign optional
> 8. few existing options, like row_format, insert_method, checksum,
> delay_key_write, key_block_size, min_rows/max_rows, avg_row_length,
> tablespace, connection, pack_keys could be moved into storage
> engines
> out of the parser.
In some cases options go without a comma, and there are three-word options such
as DATA DIRECTORY <value> and INDEX DIRECTORY <value>, so I can't see how to
move the existing options into the engines and make the equal sign optional.
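For reference, the existing multi-word options already allow omitting the
equal sign; here is a rough sketch (the t2 statement is purely hypothetical,
showing what engine-defined key/value options might look like):

CREATE TABLE t1 (a INT) ENGINE=MyISAM
  DATA DIRECTORY = '/data/t1'
  INDEX DIRECTORY '/idx/t1';       -- '=' already optional, no commas needed

-- hypothetical engine-defined options, with and without '=':
CREATE TABLE t2 (a INT) ENGINE=SomeEngine
  some_engine_option = 'value'
  another_option 42;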
[skip]
1
0
Hi,
Following this discussion from 2007: http://lists.mysql.com/internals/34287
Is there any plan to implement such an optimisation in MariaDB? (I think a lot
of web apps using pagination could benefit from such an optimisation, although
there are some workarounds to avoid a big LIMIT for pagination.)
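One common such workaround, sketched here with example table and column names,
is keyset ("seek") pagination, which replaces the big offset with a range
condition on the last row of the previous page:

-- offset pagination: the server still reads and discards the first 100000 rows
SELECT id, title FROM articles ORDER BY id LIMIT 100000, 10;

-- keyset pagination: continue from the last id seen on the previous page
SELECT id, title FROM articles WHERE id > 100010 ORDER BY id LIMIT 10;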
Thanks!
Jocelyn
3
4
10 Mar '10
Hi Arjen,
We are starting to think about MariaDB 5.2, in particular about starting to
make (alpha) releases with packages.
The current ourdelta packaging scripts for MariaDB 5.1 fail to build 5.2 (seen
in Buildbot).
I took a look at it; unfortunately it is more complicated than what I can
easily fix with my limited knowledge of packaging.
For .deb, the "5.1" is part of the package names, which in turn are part of the
package dependencies. So they need to be updated with correct names and with
"Replaces:", "Conflicts:", etc. headers.
For .rpm the issue may be similar; I'm not sure.
Can you (or others from OurDelta) help me with this, or suggest the best way
forward?
- Kristian.
2
1
[Maria-developers] Updated (by Igor): Backport optimizations for derived tables and views (106)
by worklog-noreply@askmonty.org 10 Mar '10
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Backport optimizations for derived tables and views
CREATION DATE..: Wed, 10 Mar 2010, 22:16
SUPERVISOR.....: Monty
IMPLEMENTOR....: Igor
COPIES TO......: Igor, Monty, Psergey, Sanja, Sergei, Timour
CATEGORY.......: Server-Sprint
TASK ID........: 106 (http://askmonty.org/worklog/?tid=106)
VERSION........: Server-9.x
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Wed, 10 Mar 2010, 22:17)=-=-
Title modified.
--- /tmp/wklog.106.old.2763 2010-03-10 22:17:28.000000000 +0000
+++ /tmp/wklog.106.new.2763 2010-03-10 22:17:28.000000000 +0000
@@ -1 +1 @@
-Backport optimizations for derived tables and views.
+Backport optimizations for derived tables and views
-=-=(Igor - Wed, 10 Mar 2010, 22:17)=-=-
Version updated.
--- /tmp/wklog.106.old.2763 2010-03-10 22:17:28.000000000 +0000
+++ /tmp/wklog.106.new.2763 2010-03-10 22:17:28.000000000 +0000
@@ -1 +1 @@
-WorkLog-3.4
+Server-9.x
DESCRIPTION:
The goal of this task is to backport the implementation of the late
materialization of derived tables and views and the additional optimizations for
derived tables/views from MySQL 6.0 code line to MariaDB 5.3.
Numerous bugs in the existing code concerning this functionality are to be fixed
within this task.
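As an illustration only (this example is not part of the worklog text), the
kind of query affected is a derived table or view that is filtered from the
outside; late materialization lets the optimizer decide how to handle it at
optimization time instead of always materializing it eagerly:

SELECT dt.customer_id, dt.total
FROM (SELECT customer_id, SUM(amount) AS total
      FROM orders
      GROUP BY customer_id) AS dt
WHERE dt.customer_id = 42;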
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] New (by Igor): Backport optimizations for derived tables and views. (106)
by worklog-noreply@askmonty.org 10 Mar '10
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Backport optimizations for derived tables and views.
CREATION DATE..: Wed, 10 Mar 2010, 22:16
SUPERVISOR.....: Monty
IMPLEMENTOR....: Igor
COPIES TO......: Igor, Monty, Psergey, Sanja, Sergei, Timour
CATEGORY.......: Server-Sprint
TASK ID........: 106 (http://askmonty.org/worklog/?tid=106)
VERSION........: WorkLog-3.4
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
DESCRIPTION:
The goal of this task is to backport the implementation of the late
materialization of derived tables and views and the additional optimizations for
derived tables/views from MySQL 6.0 code line to MariaDB 5.3.
Numerous bugs in the existing code concerning this functionality are to be fixed
within this task.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Igor): Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE (90)
by worklog-noreply@askmonty.org 10 Mar '10
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Subqueries: Inside-out execution for non-semijoin materialized
subqueries that are AND-parts of the WHERE
CREATION DATE..: Sun, 28 Feb 2010, 13:45
SUPERVISOR.....: Monty
IMPLEMENTOR....: Psergey
COPIES TO......: Igor, Psergey, Timour
CATEGORY.......: Server-RawIdeaBin
TASK ID........: 90 (http://askmonty.org/worklog/?tid=90)
VERSION........: Server-5.3
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: -1 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Wed, 10 Mar 2010, 22:02)=-=-
High Level Description modified.
--- /tmp/wklog.90.old.2007 2010-03-10 22:02:23.000000000 +0000
+++ /tmp/wklog.90.new.2007 2010-03-10 22:02:23.000000000 +0000
@@ -13,8 +13,8 @@
for each record R2 in big_table such that oe=R1
pass R2 to output
-Semi-join materialization supports such strategy with SJM-Scan strategy. This WL
-entry is about adding support for such strategies for non-semijoin subqueries.
+Semi-join materialization supports the inside-out strategy. This WL entry is
+about adding support for such strategies for non-semijoin subqueries.
Once WL#89 is done, there will be a cost-based choice between
-=-=(Igor - Wed, 10 Mar 2010, 21:52)=-=-
Status updated.
--- /tmp/wklog.90.old.882 2010-03-10 21:52:02.000000000 +0000
+++ /tmp/wklog.90.new.882 2010-03-10 21:52:02.000000000 +0000
@@ -1 +1 @@
-Un-Assigned
+Assigned
-=-=(Psergey - Sun, 28 Feb 2010, 15:37)=-=-
High Level Description modified.
--- /tmp/wklog.90.old.23524 2010-02-28 15:37:47.000000000 +0000
+++ /tmp/wklog.90.new.23524 2010-02-28 15:37:47.000000000 +0000
@@ -15,3 +15,7 @@
Semi-join materialization supports such strategy with SJM-Scan strategy. This WL
entry is about adding support for such strategies for non-semijoin subqueries.
+
+
+Once WL#89 is done, there will be a cost-based choice between
+Materialization+lookup, Materialization+scan, and IN->EXISTS+lookup strategies.
-=-=(Psergey - Sun, 28 Feb 2010, 15:22)=-=-
High-Level Specification modified.
--- /tmp/wklog.90.old.23033 2010-02-28 15:22:09.000000000 +0000
+++ /tmp/wklog.90.new.23033 2010-02-28 15:22:09.000000000 +0000
@@ -1 +1,33 @@
+Basic idea on how this could be achieved:
+
+Pre-optimization phase
+----------------------
+
+The rewrite
+~~~~~~~~~~~
+If we find a subquery predicate that is
+- not processed by current semi-join optimizations
+- is an AND-part of the WHERE/ON clause
+- can be executed with Materialization
+
+then
+- Remove the predicate from WHERE/ON clause
+- Add a special JOIN_TAB object instead.
+
+Plan options
+~~~~~~~~~~~~
+- Use the IN-equality to create KEYUSE elements.
+
+Optimization
+------------
+- Pre-optimize the subquery so we know materialization cost
+- Whenever best_access_path() encounters the "special JOIN_TAB" it should
+ consider two strategies:
+ A. Materialization and making lookups in the materialized table (if applicable)
+ B. Materialization and then scanning the materialized table.
+
+
+EXPLAIN
+-------
+TODO how this will look in EXPLAIN output?
-=-=(Psergey - Sun, 28 Feb 2010, 14:56)=-=-
Dependency created: 91 now depends on 90
-=-=(Psergey - Sun, 28 Feb 2010, 14:54)=-=-
Dependency deleted: 94 no longer depends on 90
-=-=(Psergey - Sun, 28 Feb 2010, 14:47)=-=-
Title modified.
--- /tmp/wklog.90.old.21903 2010-02-28 14:47:54.000000000 +0000
+++ /tmp/wklog.90.new.21903 2010-02-28 14:47:54.000000000 +0000
@@ -1 +1 @@
- Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE
+Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE
-=-=(Psergey - Sun, 28 Feb 2010, 14:47)=-=-
High Level Description modified.
--- /tmp/wklog.90.old.21880 2010-02-28 14:47:28.000000000 +0000
+++ /tmp/wklog.90.new.21880 2010-02-28 14:47:28.000000000 +0000
@@ -1,10 +1,17 @@
-For uncorrelated IN subqueries that can't be converted to semi-joins it is
-necessary to make a cost-based choice between IN->EXISTS and Materialization
-strategies.
+Consider the following case:
-Both strategies handle two cases:
-1. A simple case w/o NULLs handling
-2. Handling NULLs.
+SELECT * FROM big_table
+WHERE oe IN (SELECT ie FROM table_with_few_groups
+ WHERE ...
+ GROUP BY group_col) AND ...
-This WL is about making cost-based decision for #1.
+Here the best way to execute the query is:
+ Materialize the subquery;
+ # now run the join:
+ for each record R1 in materialized table
+ for each record R2 in big_table such that oe=R1
+ pass R2 to output
+
+Semi-join materialization supports such strategy with SJM-Scan strategy. This WL
+entry is about adding support for such strategies for non-semijoin subqueries.
-=-=(Psergey - Sun, 28 Feb 2010, 14:47)=-=-
Title modified.
--- /tmp/wklog.90.old.21859 2010-02-28 14:47:02.000000000 +0000
+++ /tmp/wklog.90.new.21859 2010-02-28 14:47:02.000000000 +0000
@@ -1 +1 @@
-Subqueries: cost-based choice between Materialization and IN->EXISTS transformation
+ Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE
-=-=(Psergey - Sun, 28 Feb 2010, 14:08)=-=-
Dependency created: 94 now depends on 90
DESCRIPTION:
Consider the following case:
SELECT * FROM big_table
WHERE oe IN (SELECT ie FROM table_with_few_groups
WHERE ...
GROUP BY group_col) AND ...
Here the best way to execute the query is:
Materialize the subquery;
# now run the join:
for each record R1 in materialized table
for each record R2 in big_table such that oe=R1
pass R2 to output
Semi-join materialization supports the inside-out strategy. This WL entry is
about adding support for such strategies for non-semijoin subqueries.
Once WL#89 is done, there will be a cost-based choice between
Materialization+lookup, Materialization+scan, and IN->EXISTS+lookup strategies.
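Written out by hand (purely illustrative, reusing the table and column names
above and dropping the elided WHERE condition), the materialize-then-scan plan
corresponds to something like:

CREATE TEMPORARY TABLE mat AS
  SELECT ie FROM table_with_few_groups GROUP BY group_col;

SELECT big_table.*
FROM mat JOIN big_table ON big_table.oe = mat.ie;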
HIGH-LEVEL SPECIFICATION:
Basic idea on how this could be achieved:
Pre-optimization phase
----------------------
The rewrite
~~~~~~~~~~~
If we find a subquery predicate that is
- not processed by current semi-join optimizations
- is an AND-part of the WHERE/ON clause
- can be executed with Materialization
then
- Remove the predicate from WHERE/ON clause
- Add a special JOIN_TAB object instead.
Plan options
~~~~~~~~~~~~
- Use the IN-equality to create KEYUSE elements.
Optimization
------------
- Pre-optimize the subquery so we know materialization cost
- Whenever best_access_path() encounters the "special JOIN_TAB" it should
consider two strategies:
A. Materialization and making lookups in the materialized table (if applicable)
B. Materialization and then scanning the materialized table.
EXPLAIN
-------
TODO how this will look in EXPLAIN output?
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Igor): Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE (90)
by worklog-noreply@askmonty.org 10 Mar '10
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Subqueries: Inside-out execution for non-semijoin materialized
subqueries that are AND-parts of the WHERE
CREATION DATE..: Sun, 28 Feb 2010, 13:45
SUPERVISOR.....: Monty
IMPLEMENTOR....: Psergey
COPIES TO......: Igor, Psergey, Timour
CATEGORY.......: Server-RawIdeaBin
TASK ID........: 90 (http://askmonty.org/worklog/?tid=90)
VERSION........: Server-5.3
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: -1 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Wed, 10 Mar 2010, 22:02)=-=-
High Level Description modified.
--- /tmp/wklog.90.old.2007 2010-03-10 22:02:23.000000000 +0000
+++ /tmp/wklog.90.new.2007 2010-03-10 22:02:23.000000000 +0000
@@ -13,8 +13,8 @@
for each record R2 in big_table such that oe=R1
pass R2 to output
-Semi-join materialization supports such strategy with SJM-Scan strategy. This WL
-entry is about adding support for such strategies for non-semijoin subqueries.
+Semi-join materialization supports the inside-out strategy. This WL entry is
+about adding support for such strategies for non-semijoin subqueries.
Once WL#89 is done, there will be a cost-based choice between
-=-=(Igor - Wed, 10 Mar 2010, 21:52)=-=-
Status updated.
--- /tmp/wklog.90.old.882 2010-03-10 21:52:02.000000000 +0000
+++ /tmp/wklog.90.new.882 2010-03-10 21:52:02.000000000 +0000
@@ -1 +1 @@
-Un-Assigned
+Assigned
-=-=(Psergey - Sun, 28 Feb 2010, 15:37)=-=-
High Level Description modified.
--- /tmp/wklog.90.old.23524 2010-02-28 15:37:47.000000000 +0000
+++ /tmp/wklog.90.new.23524 2010-02-28 15:37:47.000000000 +0000
@@ -15,3 +15,7 @@
Semi-join materialization supports such strategy with SJM-Scan strategy. This WL
entry is about adding support for such strategies for non-semijoin subqueries.
+
+
+Once WL#89 is done, there will be a cost-based choice between
+Materialization+lookup, Materialization+scan, and IN->EXISTS+lookup strategies.
-=-=(Psergey - Sun, 28 Feb 2010, 15:22)=-=-
High-Level Specification modified.
--- /tmp/wklog.90.old.23033 2010-02-28 15:22:09.000000000 +0000
+++ /tmp/wklog.90.new.23033 2010-02-28 15:22:09.000000000 +0000
@@ -1 +1,33 @@
+Basic idea on how this could be achieved:
+
+Pre-optimization phase
+----------------------
+
+The rewrite
+~~~~~~~~~~~
+If we find a subquery predicate that is
+- not processed by current semi-join optimizations
+- is an AND-part of the WHERE/ON clause
+- can be executed with Materialization
+
+then
+- Remove the predicate from WHERE/ON clause
+- Add a special JOIN_TAB object instead.
+
+Plan options
+~~~~~~~~~~~~
+- Use the IN-equality to create KEYUSE elements.
+
+Optimization
+------------
+- Pre-optimize the subquery so we know materialization cost
+- Whenever best_access_path() encounters the "special JOIN_TAB" it should
+ consider two strategies:
+ A. Materialization and making lookups in the materialized table (if applicable)
+ B. Materialization and then scanning the materialized table.
+
+
+EXPLAIN
+-------
+TODO how this will look in EXPLAIN output?
-=-=(Psergey - Sun, 28 Feb 2010, 14:56)=-=-
Dependency created: 91 now depends on 90
-=-=(Psergey - Sun, 28 Feb 2010, 14:54)=-=-
Dependency deleted: 94 no longer depends on 90
-=-=(Psergey - Sun, 28 Feb 2010, 14:47)=-=-
Title modified.
--- /tmp/wklog.90.old.21903 2010-02-28 14:47:54.000000000 +0000
+++ /tmp/wklog.90.new.21903 2010-02-28 14:47:54.000000000 +0000
@@ -1 +1 @@
- Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE
+Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE
-=-=(Psergey - Sun, 28 Feb 2010, 14:47)=-=-
High Level Description modified.
--- /tmp/wklog.90.old.21880 2010-02-28 14:47:28.000000000 +0000
+++ /tmp/wklog.90.new.21880 2010-02-28 14:47:28.000000000 +0000
@@ -1,10 +1,17 @@
-For uncorrelated IN subqueries that can't be converted to semi-joins it is
-necessary to make a cost-based choice between IN->EXISTS and Materialization
-strategies.
+Consider the following case:
-Both strategies handle two cases:
-1. A simple case w/o NULLs handling
-2. Handling NULLs.
+SELECT * FROM big_table
+WHERE oe IN (SELECT ie FROM table_with_few_groups
+ WHERE ...
+ GROUP BY group_col) AND ...
-This WL is about making cost-based decision for #1.
+Here the best way to execute the query is:
+ Materialize the subquery;
+ # now run the join:
+ for each record R1 in materialized table
+ for each record R2 in big_table such that oe=R1
+ pass R2 to output
+
+Semi-join materialization supports such strategy with SJM-Scan strategy. This WL
+entry is about adding support for such strategies for non-semijoin subqueries.
-=-=(Psergey - Sun, 28 Feb 2010, 14:47)=-=-
Title modified.
--- /tmp/wklog.90.old.21859 2010-02-28 14:47:02.000000000 +0000
+++ /tmp/wklog.90.new.21859 2010-02-28 14:47:02.000000000 +0000
@@ -1 +1 @@
-Subqueries: cost-based choice between Materialization and IN->EXISTS transformation
+ Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE
-=-=(Psergey - Sun, 28 Feb 2010, 14:08)=-=-
Dependency created: 94 now depends on 90
DESCRIPTION:
Consider the following case:
SELECT * FROM big_table
WHERE oe IN (SELECT ie FROM table_with_few_groups
WHERE ...
GROUP BY group_col) AND ...
Here the best way to execute the query is:
Materialize the subquery;
# now run the join:
for each record R1 in materialized table
for each record R2 in big_table such that oe=R1
pass R2 to output
Semi-join materialization supports the inside-out strategy. This WL entry is
about adding support for such strategies for non-semijoin subqueries.
Once WL#89 is done, there will be a cost-based choice between
Materialization+lookup, Materialization+scan, and IN->EXISTS+lookup strategies.
HIGH-LEVEL SPECIFICATION:
Basic idea on how this could be achieved:
Pre-optimization phase
----------------------
The rewrite
~~~~~~~~~~~
If we find a subquery predicate that is
- not processed by current semi-join optimizations
- is an AND-part of the WHERE/ON clause
- can be executed with Materialization
then
- Remove the predicate from WHERE/ON clause
- Add a special JOIN_TAB object instead.
Plan options
~~~~~~~~~~~~
- Use the IN-equality to create KEYUSE elements.
Optimization
------------
- Pre-optimize the subquery so we know materialization cost
- Whenever best_access_path() encounters the "special JOIN_TAB" it should
consider two strategies:
A. Materialization and making lookups in the materialized table (if applicable)
B. Materialization and then scanning the materialized table.
EXPLAIN
-------
TODO how this will look in EXPLAIN output?
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Igor): Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE (90)
by worklog-noreply@askmonty.org 10 Mar '10
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Subqueries: Inside-out execution for non-semijoin materialized
subqueries that are AND-parts of the WHERE
CREATION DATE..: Sun, 28 Feb 2010, 13:45
SUPERVISOR.....: Monty
IMPLEMENTOR....: Psergey
COPIES TO......: Igor, Psergey, Timour
CATEGORY.......: Server-RawIdeaBin
TASK ID........: 90 (http://askmonty.org/worklog/?tid=90)
VERSION........: Server-5.3
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: -1 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Wed, 10 Mar 2010, 22:02)=-=-
High Level Description modified.
--- /tmp/wklog.90.old.2007 2010-03-10 22:02:23.000000000 +0000
+++ /tmp/wklog.90.new.2007 2010-03-10 22:02:23.000000000 +0000
@@ -13,8 +13,8 @@
for each record R2 in big_table such that oe=R1
pass R2 to output
-Semi-join materialization supports such strategy with SJM-Scan strategy. This WL
-entry is about adding support for such strategies for non-semijoin subqueries.
+Semi-join materialization supports the inside-out strategy. This WL entry is
+about adding support for such strategies for non-semijoin subqueries.
Once WL#89 is done, there will be a cost-based choice between
-=-=(Igor - Wed, 10 Mar 2010, 21:52)=-=-
Status updated.
--- /tmp/wklog.90.old.882 2010-03-10 21:52:02.000000000 +0000
+++ /tmp/wklog.90.new.882 2010-03-10 21:52:02.000000000 +0000
@@ -1 +1 @@
-Un-Assigned
+Assigned
-=-=(Psergey - Sun, 28 Feb 2010, 15:37)=-=-
High Level Description modified.
--- /tmp/wklog.90.old.23524 2010-02-28 15:37:47.000000000 +0000
+++ /tmp/wklog.90.new.23524 2010-02-28 15:37:47.000000000 +0000
@@ -15,3 +15,7 @@
Semi-join materialization supports such strategy with SJM-Scan strategy. This WL
entry is about adding support for such strategies for non-semijoin subqueries.
+
+
+Once WL#89 is done, there will be a cost-based choice between
+Materialization+lookup, Materialization+scan, and IN->EXISTS+lookup strategies.
-=-=(Psergey - Sun, 28 Feb 2010, 15:22)=-=-
High-Level Specification modified.
--- /tmp/wklog.90.old.23033 2010-02-28 15:22:09.000000000 +0000
+++ /tmp/wklog.90.new.23033 2010-02-28 15:22:09.000000000 +0000
@@ -1 +1,33 @@
+Basic idea on how this could be achieved:
+
+Pre-optimization phase
+----------------------
+
+The rewrite
+~~~~~~~~~~~
+If we find a subquery predicate that is
+- not processed by current semi-join optimizations
+- is an AND-part of the WHERE/ON clause
+- can be executed with Materialization
+
+then
+- Remove the predicate from WHERE/ON clause
+- Add a special JOIN_TAB object instead.
+
+Plan options
+~~~~~~~~~~~~
+- Use the IN-equality to create KEYUSE elements.
+
+Optimization
+------------
+- Pre-optimize the subquery so we know materialization cost
+- Whenever best_access_path() encounters the "special JOIN_TAB" it should
+ consider two strategies:
+ A. Materialization and making lookups in the materialized table (if applicable)
+ B. Materialization and then scanning the materialized table.
+
+
+EXPLAIN
+-------
+TODO how this will look in EXPLAIN output?
-=-=(Psergey - Sun, 28 Feb 2010, 14:56)=-=-
Dependency created: 91 now depends on 90
-=-=(Psergey - Sun, 28 Feb 2010, 14:54)=-=-
Dependency deleted: 94 no longer depends on 90
-=-=(Psergey - Sun, 28 Feb 2010, 14:47)=-=-
Title modified.
--- /tmp/wklog.90.old.21903 2010-02-28 14:47:54.000000000 +0000
+++ /tmp/wklog.90.new.21903 2010-02-28 14:47:54.000000000 +0000
@@ -1 +1 @@
- Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE
+Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE
-=-=(Psergey - Sun, 28 Feb 2010, 14:47)=-=-
High Level Description modified.
--- /tmp/wklog.90.old.21880 2010-02-28 14:47:28.000000000 +0000
+++ /tmp/wklog.90.new.21880 2010-02-28 14:47:28.000000000 +0000
@@ -1,10 +1,17 @@
-For uncorrelated IN subqueries that can't be converted to semi-joins it is
-necessary to make a cost-based choice between IN->EXISTS and Materialization
-strategies.
+Consider the following case:
-Both strategies handle two cases:
-1. A simple case w/o NULLs handling
-2. Handling NULLs.
+SELECT * FROM big_table
+WHERE oe IN (SELECT ie FROM table_with_few_groups
+ WHERE ...
+ GROUP BY group_col) AND ...
-This WL is about making cost-based decision for #1.
+Here the best way to execute the query is:
+ Materialize the subquery;
+ # now run the join:
+ for each record R1 in materialized table
+ for each record R2 in big_table such that oe=R1
+ pass R2 to output
+
+Semi-join materialization supports such strategy with SJM-Scan strategy. This WL
+entry is about adding support for such strategies for non-semijoin subqueries.
-=-=(Psergey - Sun, 28 Feb 2010, 14:47)=-=-
Title modified.
--- /tmp/wklog.90.old.21859 2010-02-28 14:47:02.000000000 +0000
+++ /tmp/wklog.90.new.21859 2010-02-28 14:47:02.000000000 +0000
@@ -1 +1 @@
-Subqueries: cost-based choice between Materialization and IN->EXISTS transformation
+ Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE
-=-=(Psergey - Sun, 28 Feb 2010, 14:08)=-=-
Dependency created: 94 now depends on 90
DESCRIPTION:
Consider the following case:
SELECT * FROM big_table
WHERE oe IN (SELECT ie FROM table_with_few_groups
WHERE ...
GROUP BY group_col) AND ...
Here the best way to execute the query is:
Materialize the subquery;
# now run the join:
for each record R1 in materialized table
for each record R2 in big_table such that oe=R1
pass R2 to output
Semi-join materialization supports the inside-out strategy. This WL entry is
about adding support for such strategies for non-semijoin subqueries.
Once WL#89 is done, there will be a cost-based choice between
Materialization+lookup, Materialization+scan, and IN->EXISTS+lookup strategies.
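As a rough illustration, the inside-out plan above computes approximately the
same result as the following explicit join against the materialized subquery.
Table and column names are taken from the example; the elided WHERE condition
is omitted, and duplicate elimination and NULL handling are ignored here:
  SELECT big_table.*
  FROM (SELECT ie
        FROM table_with_few_groups
        GROUP BY group_col) AS materialized    # "materialized" is just an alias
  JOIN big_table ON big_table.oe = materialized.ie;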
HIGH-LEVEL SPECIFICATION:
Basic idea on how this could be achieved:
Pre-optimization phase
----------------------
The rewrite
~~~~~~~~~~~
If we find a subquery predicate that is
- not processed by current semi-join optimizations
- is an AND-part of the WHERE/ON clause
- can be executed with Materialization
then
- Remove the predicate from WHERE/ON clause
- Add a special JOIN_TAB object instead.
Plan options
~~~~~~~~~~~~
- Use the IN-equality to create KEYUSE elements.
Optimization
------------
- Pre-optimize the subquery so we know materialization cost
- Whenever best_access_path() encounters the "special JOIN_TAB" it should
consider two strategies:
A. Materialization and making lookups in the materialized table (if applicable)
B. Materialization and then scanning the materialized table.
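Assuming the subquery result has already been written into a temporary table,
called materialized(ie) here purely for illustration, the two strategies
correspond roughly to:
  # A. Materialization + lookup: for each big_table row, probe the
  #    materialized table (assumed to have a unique key on ie)
  SELECT b.* FROM big_table b
  WHERE b.oe IN (SELECT ie FROM materialized);

  # B. Materialization + scan (inside-out): scan the materialized table and
  #    use the IN-equality oe=ie to look up matching big_table rows
  SELECT STRAIGHT_JOIN b.*
  FROM materialized m JOIN big_table b ON b.oe = m.ie;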
EXPLAIN
-------
TODO how this will look in EXPLAIN output?
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] Updated (by Igor): Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE (90)
by worklog-noreply@askmonty.org 10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Subqueries: Inside-out execution for non-semijoin materialized
subqueries that are AND-parts of the WHERE
CREATION DATE..: Sun, 28 Feb 2010, 13:45
SUPERVISOR.....: Monty
IMPLEMENTOR....: Psergey
COPIES TO......: Igor, Psergey, Timour
CATEGORY.......: Server-RawIdeaBin
TASK ID........: 90 (http://askmonty.org/worklog/?tid=90)
VERSION........: Server-5.3
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: -1 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Wed, 10 Mar 2010, 22:02)=-=-
High Level Description modified.
--- /tmp/wklog.90.old.2007 2010-03-10 22:02:23.000000000 +0000
+++ /tmp/wklog.90.new.2007 2010-03-10 22:02:23.000000000 +0000
@@ -13,8 +13,8 @@
for each record R2 in big_table such that oe=R1
pass R2 to output
-Semi-join materialization supports such strategy with SJM-Scan strategy. This WL
-entry is about adding support for such strategies for non-semijoin subqueries.
+Semi-join materialization supports the inside-out strategy. This WL entry is
+about adding support for such strategies for non-semijoin subqueries.
Once WL#89 is done, there will be a cost-based choice between
-=-=(Igor - Wed, 10 Mar 2010, 21:52)=-=-
Status updated.
--- /tmp/wklog.90.old.882 2010-03-10 21:52:02.000000000 +0000
+++ /tmp/wklog.90.new.882 2010-03-10 21:52:02.000000000 +0000
@@ -1 +1 @@
-Un-Assigned
+Assigned
-=-=(Psergey - Sun, 28 Feb 2010, 15:37)=-=-
High Level Description modified.
--- /tmp/wklog.90.old.23524 2010-02-28 15:37:47.000000000 +0000
+++ /tmp/wklog.90.new.23524 2010-02-28 15:37:47.000000000 +0000
@@ -15,3 +15,7 @@
Semi-join materialization supports such strategy with SJM-Scan strategy. This WL
entry is about adding support for such strategies for non-semijoin subqueries.
+
+
+Once WL#89 is done, there will be a cost-based choice between
+Materialization+lookup, Materialization+scan, and IN->EXISTS+lookup strategies.
-=-=(Psergey - Sun, 28 Feb 2010, 15:22)=-=-
High-Level Specification modified.
--- /tmp/wklog.90.old.23033 2010-02-28 15:22:09.000000000 +0000
+++ /tmp/wklog.90.new.23033 2010-02-28 15:22:09.000000000 +0000
@@ -1 +1,33 @@
+Basic idea on how this could be achieved:
+
+Pre-optimization phase
+----------------------
+
+The rewrite
+~~~~~~~~~~~
+If we find a subquery predicate that is
+- not processed by current semi-join optimizations
+- is an AND-part of the WHERE/ON clause
+- can be executed with Materialization
+
+then
+- Remove the predicate from WHERE/ON clause
+- Add a special JOIN_TAB object instead.
+
+Plan options
+~~~~~~~~~~~~
+- Use the IN-equality to create KEYUSE elements.
+
+Optimization
+------------
+- Pre-optimize the subquery so we know materialization cost
+- Whenever best_access_path() encounters the "special JOIN_TAB" it should
+ consider two strategies:
+ A. Materialization and making lookups in the materialized table (if applicable)
+ B. Materialization and then scanning the materialized table.
+
+
+EXPLAIN
+-------
+TODO how this will look in EXPLAIN output?
-=-=(Psergey - Sun, 28 Feb 2010, 14:56)=-=-
Dependency created: 91 now depends on 90
-=-=(Psergey - Sun, 28 Feb 2010, 14:54)=-=-
Dependency deleted: 94 no longer depends on 90
-=-=(Psergey - Sun, 28 Feb 2010, 14:47)=-=-
Title modified.
--- /tmp/wklog.90.old.21903 2010-02-28 14:47:54.000000000 +0000
+++ /tmp/wklog.90.new.21903 2010-02-28 14:47:54.000000000 +0000
@@ -1 +1 @@
- Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE
+Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE
-=-=(Psergey - Sun, 28 Feb 2010, 14:47)=-=-
High Level Description modified.
--- /tmp/wklog.90.old.21880 2010-02-28 14:47:28.000000000 +0000
+++ /tmp/wklog.90.new.21880 2010-02-28 14:47:28.000000000 +0000
@@ -1,10 +1,17 @@
-For uncorrelated IN subqueries that can't be converted to semi-joins it is
-necessary to make a cost-based choice between IN->EXISTS and Materialization
-strategies.
+Consider the following case:
-Both strategies handle two cases:
-1. A simple case w/o NULLs handling
-2. Handling NULLs.
+SELECT * FROM big_table
+WHERE oe IN (SELECT ie FROM table_with_few_groups
+ WHERE ...
+ GROUP BY group_col) AND ...
-This WL is about making cost-based decision for #1.
+Here the best way to execute the query is:
+ Materialize the subquery;
+ # now run the join:
+ for each record R1 in materialized table
+ for each record R2 in big_table such that oe=R1
+ pass R2 to output
+
+Semi-join materialization supports such strategy with SJM-Scan strategy. This WL
+entry is about adding support for such strategies for non-semijoin subqueries.
-=-=(Psergey - Sun, 28 Feb 2010, 14:47)=-=-
Title modified.
--- /tmp/wklog.90.old.21859 2010-02-28 14:47:02.000000000 +0000
+++ /tmp/wklog.90.new.21859 2010-02-28 14:47:02.000000000 +0000
@@ -1 +1 @@
-Subqueries: cost-based choice between Materialization and IN->EXISTS transformation
+ Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE
-=-=(Psergey - Sun, 28 Feb 2010, 14:08)=-=-
Dependency created: 94 now depends on 90
DESCRIPTION:
Consider the following case:
SELECT * FROM big_table
WHERE oe IN (SELECT ie FROM table_with_few_groups
WHERE ...
GROUP BY group_col) AND ...
Here the best way to execute the query is:
Materialize the subquery;
# now run the join:
for each record R1 in materialized table
for each record R2 in big_table such that oe=R1
pass R2 to output
Semi-join materialization supports the inside-out strategy. This WL entry is
about adding support for such strategies for non-semijoin subqueries.
Once WL#89 is done, there will be a cost-based choice between
Materialization+lookup, Materialization+scan, and IN->EXISTS+lookup strategies.
HIGH-LEVEL SPECIFICATION:
Basic idea on how this could be achieved:
Pre-optimization phase
----------------------
The rewrite
~~~~~~~~~~~
If we find a subquery predicate that is
- not processed by current semi-join optimizations
- is an AND-part of the WHERE/ON clause
- can be executed with Materialization
then
- Remove the predicate from WHERE/ON clause
- Add a special JOIN_TAB object instead.
Plan options
~~~~~~~~~~~~
- Use the IN-equality to create KEYUSE elements.
Optimization
------------
- Pre-optimize the subquery so we know materialization cost
- Whenever best_access_path() encounters the "special JOIN_TAB" it should
consider two strategies:
A. Materialization and making lookups in the materialized table (if applicable)
B. Materialization and then scanning the materialized table.
EXPLAIN
-------
TODO how this will look in EXPLAIN output?
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] Updated (by Igor): Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE (90)
by worklog-noreply@askmonty.org 10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Subqueries: Inside-out execution for non-semijoin materialized
subqueries that are AND-parts of the WHERE
CREATION DATE..: Sun, 28 Feb 2010, 13:45
SUPERVISOR.....: Monty
IMPLEMENTOR....: Psergey
COPIES TO......: Igor, Psergey, Timour
CATEGORY.......: Server-RawIdeaBin
TASK ID........: 90 (http://askmonty.org/worklog/?tid=90)
VERSION........: Server-5.3
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: -1 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Wed, 10 Mar 2010, 21:52)=-=-
Status updated.
--- /tmp/wklog.90.old.882 2010-03-10 21:52:02.000000000 +0000
+++ /tmp/wklog.90.new.882 2010-03-10 21:52:02.000000000 +0000
@@ -1 +1 @@
-Un-Assigned
+Assigned
-=-=(Psergey - Sun, 28 Feb 2010, 15:37)=-=-
High Level Description modified.
--- /tmp/wklog.90.old.23524 2010-02-28 15:37:47.000000000 +0000
+++ /tmp/wklog.90.new.23524 2010-02-28 15:37:47.000000000 +0000
@@ -15,3 +15,7 @@
Semi-join materialization supports such strategy with SJM-Scan strategy. This WL
entry is about adding support for such strategies for non-semijoin subqueries.
+
+
+Once WL#89 is done, there will be a cost-based choice between
+Materialization+lookup, Materialization+scan, and IN->EXISTS+lookup strategies.
-=-=(Psergey - Sun, 28 Feb 2010, 15:22)=-=-
High-Level Specification modified.
--- /tmp/wklog.90.old.23033 2010-02-28 15:22:09.000000000 +0000
+++ /tmp/wklog.90.new.23033 2010-02-28 15:22:09.000000000 +0000
@@ -1 +1,33 @@
+Basic idea on how this could be achieved:
+
+Pre-optimization phase
+----------------------
+
+The rewrite
+~~~~~~~~~~~
+If we find a subquery predicate that is
+- not processed by current semi-join optimizations
+- is an AND-part of the WHERE/ON clause
+- can be executed with Materialization
+
+then
+- Remove the predicate from WHERE/ON clause
+- Add a special JOIN_TAB object instead.
+
+Plan options
+~~~~~~~~~~~~
+- Use the IN-equality to create KEYUSE elements.
+
+Optimization
+------------
+- Pre-optimize the subquery so we know materialization cost
+- Whenever best_access_path() encounters the "special JOIN_TAB" it should
+ consider two strategies:
+ A. Materialization and making lookups in the materialized table (if applicable)
+ B. Materialization and then scanning the materialized table.
+
+
+EXPLAIN
+-------
+TODO how this will look in EXPLAIN output?
-=-=(Psergey - Sun, 28 Feb 2010, 14:56)=-=-
Dependency created: 91 now depends on 90
-=-=(Psergey - Sun, 28 Feb 2010, 14:54)=-=-
Dependency deleted: 94 no longer depends on 90
-=-=(Psergey - Sun, 28 Feb 2010, 14:47)=-=-
Title modified.
--- /tmp/wklog.90.old.21903 2010-02-28 14:47:54.000000000 +0000
+++ /tmp/wklog.90.new.21903 2010-02-28 14:47:54.000000000 +0000
@@ -1 +1 @@
- Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE
+Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE
-=-=(Psergey - Sun, 28 Feb 2010, 14:47)=-=-
High Level Description modified.
--- /tmp/wklog.90.old.21880 2010-02-28 14:47:28.000000000 +0000
+++ /tmp/wklog.90.new.21880 2010-02-28 14:47:28.000000000 +0000
@@ -1,10 +1,17 @@
-For uncorrelated IN subqueries that can't be converted to semi-joins it is
-necessary to make a cost-based choice between IN->EXISTS and Materialization
-strategies.
+Consider the following case:
-Both strategies handle two cases:
-1. A simple case w/o NULLs handling
-2. Handling NULLs.
+SELECT * FROM big_table
+WHERE oe IN (SELECT ie FROM table_with_few_groups
+ WHERE ...
+ GROUP BY group_col) AND ...
-This WL is about making cost-based decision for #1.
+Here the best way to execute the query is:
+ Materialize the subquery;
+ # now run the join:
+ for each record R1 in materialized table
+ for each record R2 in big_table such that oe=R1
+ pass R2 to output
+
+Semi-join materialization supports such strategy with SJM-Scan strategy. This WL
+entry is about adding support for such strategies for non-semijoin subqueries.
-=-=(Psergey - Sun, 28 Feb 2010, 14:47)=-=-
Title modified.
--- /tmp/wklog.90.old.21859 2010-02-28 14:47:02.000000000 +0000
+++ /tmp/wklog.90.new.21859 2010-02-28 14:47:02.000000000 +0000
@@ -1 +1 @@
-Subqueries: cost-based choice between Materialization and IN->EXISTS transformation
+ Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE
-=-=(Psergey - Sun, 28 Feb 2010, 14:08)=-=-
Dependency created: 94 now depends on 90
DESCRIPTION:
Consider the following case:
SELECT * FROM big_table
WHERE oe IN (SELECT ie FROM table_with_few_groups
WHERE ...
GROUP BY group_col) AND ...
Here the best way to execute the query is:
Materialize the subquery;
# now run the join:
for each record R1 in materialized table
for each record R2 in big_table such that oe=R1
pass R2 to output
Semi-join materialization supports such strategy with SJM-Scan strategy. This WL
entry is about adding support for such strategies for non-semijoin subqueries.
Once WL#89 is done, there will be a cost-based choice between
Materialization+lookup, Materialization+scan, and IN->EXISTS+lookup strategies.
HIGH-LEVEL SPECIFICATION:
Basic idea on how this could be achieved:
Pre-optimization phase
----------------------
The rewrite
~~~~~~~~~~~
If we find a subquery predicate that is
- not processed by current semi-join optimizations
- is an AND-part of the WHERE/ON clause
- can be executed with Materialization
then
- Remove the predicate from WHERE/ON clause
- Add a special JOIN_TAB object instead.
Plan options
~~~~~~~~~~~~
- Use the IN-equality to create KEYUSE elements.
Optimization
------------
- Pre-optimize the subquery so we know materialization cost
- Whenever best_access_path() encounters the "special JOIN_TAB" it should
consider two strategies:
A. Materialization and making lookups in the materialized table (if applicable)
B. Materialization and then scanning the materialized table.
EXPLAIN
-------
TODO how this will look in EXPLAIN output?
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] Updated (by Igor): Subqueries: cost-based choice between Materialization and IN->EXISTS transformation (89)
by worklog-noreply@askmonty.org 10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Subqueries: cost-based choice between Materialization and IN->EXISTS
transformation
CREATION DATE..: Sun, 28 Feb 2010, 13:39
SUPERVISOR.....: Monty
IMPLEMENTOR....: Timour
COPIES TO......: Igor, Psergey, Timour
CATEGORY.......: Server-Sprint
TASK ID........: 89 (http://askmonty.org/worklog/?tid=89)
VERSION........: Server-5.3
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Wed, 10 Mar 2010, 21:48)=-=-
Category updated.
--- /tmp/wklog.89.old.778 2010-03-10 21:48:08.000000000 +0000
+++ /tmp/wklog.89.new.778 2010-03-10 21:48:08.000000000 +0000
@@ -1 +1 @@
-Server-RawIdeaBin
+Server-Sprint
-=-=(Igor - Wed, 10 Mar 2010, 21:48)=-=-
Status updated.
--- /tmp/wklog.89.old.778 2010-03-10 21:48:08.000000000 +0000
+++ /tmp/wklog.89.new.778 2010-03-10 21:48:08.000000000 +0000
@@ -1 +1 @@
-Un-Assigned
+Assigned
-=-=(Psergey - Sun, 28 Feb 2010, 16:34)=-=-
High-Level Specification modified.
--- /tmp/wklog.89.old.24497 2010-02-28 16:34:05.000000000 +0000
+++ /tmp/wklog.89.new.24497 2010-02-28 16:34:05.000000000 +0000
@@ -36,8 +36,8 @@
So, we'll need to compute both exists_select_cost and materialization_cost.
-Difficulty with computing the two costs
----------------------------------------
+Difficulty with the need to run select optimization two times
+-------------------------------------------------------------
The problem is in this scenario:
1. We compute materialization_cost by running optimization for the original
subquery select.
@@ -46,4 +46,10 @@
3. Then we find that cost #1 is less and want to execute the materialization
strategy.
+The problem is that once one injects "oe=ie", it can trigger some optimization
+steps that are not possible to undo.
+- Example1: outer->inner join conversion
+- non-Example: according to Igor, "oe=ie" won't participate in equality propagation.
+- ... what else ?
+
-=-=(Psergey - Sun, 28 Feb 2010, 16:08)=-=-
High-Level Specification modified.
--- /tmp/wklog.89.old.24098 2010-02-28 16:08:56.000000000 +0000
+++ /tmp/wklog.89.new.24098 2010-02-28 16:08:56.000000000 +0000
@@ -36,3 +36,14 @@
So, we'll need to compute both exists_select_cost and materialization_cost.
+Difficulty with computing the two costs
+---------------------------------------
+The problem is in this scenario:
+1. We compute materialization_cost by running optimization for the original
+ subquery select.
+2. We compute exists_select_cost by running optimization for the subquery's
+ select with "oe=ie" injected into WHERE
+3. Then we find that cost #1 is less and want to execute the materialization
+ strategy.
+
+
-=-=(Psergey - Sun, 28 Feb 2010, 15:57)=-=-
High-Level Specification modified.
--- /tmp/wklog.89.old.24045 2010-02-28 15:57:49.000000000 +0000
+++ /tmp/wklog.89.new.24045 2010-02-28 15:57:49.000000000 +0000
@@ -1 +1,38 @@
+Why need two optimizations
+--------------------------
+Consider a query with subquery:
+
+ SELECT
+ oe IN (SELECT ie FROM inner_tbl WHERE inner_cond)
+ FROM outer_tbl
+ WHERE outer_cond
+
+If we use Materialization strategy, the costs will be
+
+ cost of accessing outer_tbl +
+ materialization_cost +
+ #records(outer_tbl w/o outer_cond) * lookup_cost
+
+where
+
+ materialization_cost=
+ cost of executing the (SELECT ie FROM inner_tbl WHERE inner_cond)
+
+On the other hand, for IN->EXISTS strategy, the subquery will be rewritten into
+
+ SELECT
+ EXISTS (SELECT 1 FROM inner_tbl WHERE inner_cond AND oe=ie)
+ FROM outer_tbl
+ WHERE outer_cond
+
+and the costs will be
+
+ cost of accessing outer_tbl +
+ #records(outer_tbl w/o outer_cond) * exists_select_cost
+
+where
+ exists_select_cost=
+ cost of executing the (SELECT 1 FROM inner_tbl WHERE inner_cond AND oe=ie)
+
+So, we'll need to compute both exists_select_cost and materialization_cost.
-=-=(Psergey - Sun, 28 Feb 2010, 15:07)=-=-
Dependency created: 91 now depends on 89
DESCRIPTION:
For uncorrelated IN subqueries that can't be converted to semi-joins it is
necessary to make a cost-based choice between IN->EXISTS and Materialization
strategies.
Both strategies handle two cases:
1. A simple case w/o NULLs handling
2. Handling NULLs.
This WL is about making cost-based decision for #1.
HIGH-LEVEL SPECIFICATION:
Why need two optimizations
--------------------------
Consider a query with subquery:
SELECT
oe IN (SELECT ie FROM inner_tbl WHERE inner_cond)
FROM outer_tbl
WHERE outer_cond
If we use Materialization strategy, the costs will be
cost of accessing outer_tbl +
materialization_cost +
#records(outer_tbl w/o outer_cond) * lookup_cost
where
materialization_cost=
cost of executing the (SELECT ie FROM inner_tbl WHERE inner_cond)
On the other hand, for IN->EXISTS strategy, the subquery will be rewritten into
SELECT
EXISTS (SELECT 1 FROM inner_tbl WHERE inner_cond AND oe=ie)
FROM outer_tbl
WHERE outer_cond
and the costs will be
cost of accessing outer_tbl +
#records(outer_tbl w/o outer_cond) * exists_select_cost
where
exists_select_cost=
cost of executing the (SELECT 1 FROM inner_tbl WHERE inner_cond AND oe=ie)
So, we'll need to compute both exists_select_cost and materialization_cost.
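Purely illustrative numbers (made up for this example) show how the choice
can go either way:
  cost of accessing outer_tbl        = 1000
  #records(outer_tbl w/o outer_cond) = 10000
  materialization_cost               = 5000
  lookup_cost                        = 0.2
  exists_select_cost                 = 3
  Materialization: 1000 + 5000 + 10000 * 0.2 = 8000
  IN->EXISTS:      1000 + 10000 * 3          = 31000
Here Materialization wins; with a cheap exists_select_cost (say 0.5) the
IN->EXISTS total drops to 6000 and the comparison flips, which is why both
costs have to be computed before choosing.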
Difficulty with the need to run select optimization two times
-------------------------------------------------------------
The problem is in this scenario:
1. We compute materialization_cost by running optimization for the original
subquery select.
2. We compute exists_select_cost by running optimization for the subquery's
select with "oe=ie" injected into WHERE
3. Then we find that cost #1 is less and want to execute the materialization
strategy.
The problem is that once one injects "oe=ie", it can trigger some optimization
steps that are not possible to undo.
- Example1: outer->inner join conversion
- non-Example: according to Igor, "oe=ie" won't participate in equality propagation.
- ... what else ?
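A minimal sketch of the Example1 case above, using hypothetical tables t1 and
t2: a condition that is null-rejecting for the inner side of an outer join
(as an injected equality can be) lets the optimizer convert the outer join
into an inner join, and that conversion is not rolled back afterwards:
  SELECT *
  FROM t1 LEFT JOIN t2 ON t1.a = t2.a
  WHERE t2.b = t1.c;   # t2.b = t1.c is false for NULL-complemented t2 rows,
                       # so the LEFT JOIN can be treated as an inner join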
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] Updated (by Igor): Subqueries: cost-based choice between Materialization and IN->EXISTS transformation (89)
by worklog-noreply@askmonty.org 10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Subqueries: cost-based choice between Materialization and IN->EXISTS
transformation
CREATION DATE..: Sun, 28 Feb 2010, 13:39
SUPERVISOR.....: Monty
IMPLEMENTOR....: Timour
COPIES TO......: Igor, Psergey, Timour
CATEGORY.......: Server-Sprint
TASK ID........: 89 (http://askmonty.org/worklog/?tid=89)
VERSION........: Server-5.3
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Wed, 10 Mar 2010, 21:48)=-=-
Category updated.
--- /tmp/wklog.89.old.778 2010-03-10 21:48:08.000000000 +0000
+++ /tmp/wklog.89.new.778 2010-03-10 21:48:08.000000000 +0000
@@ -1 +1 @@
-Server-RawIdeaBin
+Server-Sprint
-=-=(Igor - Wed, 10 Mar 2010, 21:48)=-=-
Status updated.
--- /tmp/wklog.89.old.778 2010-03-10 21:48:08.000000000 +0000
+++ /tmp/wklog.89.new.778 2010-03-10 21:48:08.000000000 +0000
@@ -1 +1 @@
-Un-Assigned
+Assigned
-=-=(Psergey - Sun, 28 Feb 2010, 16:34)=-=-
High-Level Specification modified.
--- /tmp/wklog.89.old.24497 2010-02-28 16:34:05.000000000 +0000
+++ /tmp/wklog.89.new.24497 2010-02-28 16:34:05.000000000 +0000
@@ -36,8 +36,8 @@
So, we'll need to compute both exists_select_cost and materialization_cost.
-Difficulty with computing the two costs
----------------------------------------
+Difficulty with the need to run select optimization two times
+-------------------------------------------------------------
The problem is in this scenario:
1. We compute materialization_cost by running optimization for the original
subquery select.
@@ -46,4 +46,10 @@
3. Then we find that cost #1 is less and want to execute the materialization
strategy.
+The problem is that once one injects "oe=ie", it can trigger some optimization
+steps that are not possible to undo.
+- Example1: outer->inner join conversion
+- non-Example: according to Igor, "oe=ie" won't participate in equality propagation.
+- ... what else ?
+
-=-=(Psergey - Sun, 28 Feb 2010, 16:08)=-=-
High-Level Specification modified.
--- /tmp/wklog.89.old.24098 2010-02-28 16:08:56.000000000 +0000
+++ /tmp/wklog.89.new.24098 2010-02-28 16:08:56.000000000 +0000
@@ -36,3 +36,14 @@
So, we'll need to compute both exists_select_cost and materialization_cost.
+Difficulty with computing the two costs
+---------------------------------------
+The problem is in this scenario:
+1. We compute materialization_cost by running optimization for the original
+ subquery select.
+2. We compute exists_select_cost by running optimization for the subquery's
+ select with "oe=ie" injected into WHERE
+3. Then we find that cost #1 is less and want to execute the materialization
+ strategy.
+
+
-=-=(Psergey - Sun, 28 Feb 2010, 15:57)=-=-
High-Level Specification modified.
--- /tmp/wklog.89.old.24045 2010-02-28 15:57:49.000000000 +0000
+++ /tmp/wklog.89.new.24045 2010-02-28 15:57:49.000000000 +0000
@@ -1 +1,38 @@
+Why need two optimizations
+--------------------------
+Consider a query with subquery:
+
+ SELECT
+ oe IN (SELECT ie FROM inner_tbl WHERE inner_cond)
+ FROM outer_tbl
+ WHERE outer_cond
+
+If we use Materialization strategy, the costs will be
+
+ cost of accessing outer_tbl +
+ materialization_cost +
+ #records(outer_tbl w/o outer_cond) * lookup_cost
+
+where
+
+ materialization_cost=
+ cost of executing the (SELECT ie FROM inner_tbl WHERE inner_cond)
+
+On the other hand, for IN->EXISTS strategy, the subquery will be rewritten into
+
+ SELECT
+ EXISTS (SELECT 1 FROM inner_tbl WHERE inner_cond AND oe=ie)
+ FROM outer_tbl
+ WHERE outer_cond
+
+and the costs will be
+
+ cost of accessing outer_tbl +
+ #records(outer_tbl w/o outer_cond) * exists_select_cost
+
+where
+ exists_select_cost=
+ cost of executing the (SELECT 1 FROM inner_tbl WHERE inner_cond AND oe=ie)
+
+So, we'll need to compute both exists_select_cost and materialization_cost.
-=-=(Psergey - Sun, 28 Feb 2010, 15:07)=-=-
Dependency created: 91 now depends on 89
DESCRIPTION:
For uncorrelated IN subqueries that can't be converted to semi-joins it is
necessary to make a cost-based choice between IN->EXISTS and Materialization
strategies.
Both strategies handle two cases:
1. A simple case w/o NULLs handling
2. Handling NULLs.
This WL is about making cost-based decision for #1.
HIGH-LEVEL SPECIFICATION:
Why need two optimizations
--------------------------
Consider a query with subquery:
SELECT
oe IN (SELECT ie FROM inner_tbl WHERE inner_cond)
FROM outer_tbl
WHERE outer_cond
If we use Materialization strategy, the costs will be
cost of accessing outer_tbl +
materialization_cost +
#records(outer_tbl w/o outer_cond) * lookup_cost
where
materialization_cost=
cost of executing the (SELECT ie FROM inner_tbl WHERE inner_cond)
On the other hand, for IN->EXISTS strategy, the subquery will be rewritten into
SELECT
EXISTS (SELECT 1 FROM inner_tbl WHERE inner_cond AND oe=ie)
FROM outer_tbl
WHERE outer_cond
and the costs will be
cost of accessing outer_tbl +
#records(outer_tbl w/o outer_cond) * exists_select_cost
where
exists_select_cost=
cost of executing the (SELECT 1 FROM inner_tbl WHERE inner_cond AND oe=ie)
So, we'll need to compute both exists_select_cost and materialization_cost.
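To make the trade-off concrete, here is a minimal standalone sketch (illustrative C++ only, not
optimizer code; all names and numbers are hypothetical) of comparing the two totals described
above and picking the cheaper strategy:

  /*
    Standalone sketch of the cost comparison described above.
    outer_records stands for #records(outer_tbl w/o outer_cond).
  */
  #include <cstdio>

  enum subquery_strategy { MATERIALIZATION, IN_TO_EXISTS };

  static subquery_strategy choose_strategy(double outer_access_cost,
                                           double outer_records,
                                           double materialization_cost,
                                           double lookup_cost,
                                           double exists_select_cost)
  {
    /* cost of accessing outer_tbl + materialization + one lookup per outer row */
    double materialization_total=
      outer_access_cost + materialization_cost + outer_records * lookup_cost;
    /* cost of accessing outer_tbl + one EXISTS re-execution per outer row */
    double in_to_exists_total=
      outer_access_cost + outer_records * exists_select_cost;
    return materialization_total < in_to_exists_total ? MATERIALIZATION
                                                      : IN_TO_EXISTS;
  }

  int main()
  {
    /* hypothetical numbers: many outer rows make materialization pay off */
    subquery_strategy s= choose_strategy(100, 10000, 500, 0.2, 5);
    printf("%s\n", s == MATERIALIZATION ? "MATERIALIZATION" : "IN->EXISTS");
    return 0;
  }

The real decision would of course use the optimizer's own estimates for each of these quantities;
the sketch only shows why both exists_select_cost and materialization_cost must be available.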
Difficulty with the need to run select optimization two times
-------------------------------------------------------------
The problem is in this scenario:
1. We compute materialization_cost by running optimization for the original
subquery select.
2. We compute exists_select_cost by running optimization for the subquery's
select with "oe=ie" injected into WHERE
3. Then we find that cost #1 is less and want to execute the materialization
strategy.
The problem is that once one injects "oe=ie", it can trigger some optimization
steps that are not possible to undo.
- Example1: outer->inner join conversion
- non-Example: according to Igor, "oe=ie" won't participate in equality propagation.
- ... what else ?
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Igor): Subqueries backport: fix known semi-join subquery bugs (92)
by worklog-noreply@askmonty.org 10 Mar '10
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Subqueries backport: fix known semi-join subquery bugs
CREATION DATE..: Sun, 28 Feb 2010, 14:02
SUPERVISOR.....: Monty
IMPLEMENTOR....:
COPIES TO......: Igor, Psergey, Timour
CATEGORY.......: Server-RawIdeaBin
TASK ID........: 92 (http://askmonty.org/worklog/?tid=92)
VERSION........: WorkLog-3.4
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Wed, 10 Mar 2010, 21:33)=-=-
High Level Description modified.
--- /tmp/wklog.92.old.32291 2010-03-10 21:33:29.000000000 +0000
+++ /tmp/wklog.92.new.32291 2010-03-10 21:33:29.000000000 +0000
@@ -1,3 +1,5 @@
+The goal of this task is to fix all known subquery semi-join bugs.
+
We must fix known subquery semi-join bugs.
* outer join + semi join problem
* Duplicate Weedout + join caching problem.
-=-=(Psergey - Sun, 28 Feb 2010, 16:41)=-=-
High Level Description modified.
--- /tmp/wklog.92.old.24539 2010-02-28 16:41:06.000000000 +0000
+++ /tmp/wklog.92.new.24539 2010-02-28 16:41:06.000000000 +0000
@@ -1 +1,4 @@
We must fix known subquery semi-join bugs.
+* outer join + semi join problem
+* Duplicate Weedout + join caching problem.
+
-=-=(Psergey - Sun, 28 Feb 2010, 15:06)=-=-
Dependency created: 91 now depends on 92
-=-=(Psergey - Sun, 28 Feb 2010, 15:06)=-=-
High Level Description modified.
--- /tmp/wklog.92.old.22593 2010-02-28 15:06:23.000000000 +0000
+++ /tmp/wklog.92.new.22593 2010-02-28 15:06:23.000000000 +0000
@@ -1 +1 @@
-
+We must fix known subquery semi-join bugs.
-=-=(Psergey - Sun, 28 Feb 2010, 15:03)=-=-
Title modified.
--- /tmp/wklog.92.old.22572 2010-02-28 15:03:51.000000000 +0000
+++ /tmp/wklog.92.new.22572 2010-02-28 15:03:51.000000000 +0000
@@ -1 +1 @@
-Unused
+Subqueries backport: fix known semi-join subquery bugs
-=-=(Psergey - Sun, 28 Feb 2010, 14:58)=-=-
Dependency deleted: 91 no longer depends on 92
-=-=(Psergey - Sun, 28 Feb 2010, 14:58)=-=-
Dependency created: 91 now depends on 92
-=-=(Psergey - Sun, 28 Feb 2010, 14:57)=-=-
High Level Description modified.
--- /tmp/wklog.92.old.22267 2010-02-28 14:57:52.000000000 +0000
+++ /tmp/wklog.92.new.22267 2010-02-28 14:57:52.000000000 +0000
@@ -1 +1 @@
-We must fix known semi-join subquery bugs.
+
-=-=(Psergey - Sun, 28 Feb 2010, 14:57)=-=-
Title modified.
--- /tmp/wklog.92.old.22249 2010-02-28 14:57:41.000000000 +0000
+++ /tmp/wklog.92.new.22249 2010-02-28 14:57:41.000000000 +0000
@@ -1 +1 @@
-Subqueries: Inside-out execution for non-semijoin materialized subqueries that are AND-parts of the WHERE
+Unused
-=-=(Psergey - Sun, 28 Feb 2010, 14:51)=-=-
High Level Description modified.
--- /tmp/wklog.92.old.21961 2010-02-28 14:51:06.000000000 +0000
+++ /tmp/wklog.92.new.21961 2010-02-28 14:51:06.000000000 +0000
@@ -1,18 +1 @@
-Consider the following case:
-
-SELECT * FROM big_table
-WHERE oe IN (SELECT ie FROM table_with_few_groups
- WHERE ...
- GROUP BY group_col) AND ...
-
-Here the best way to execute the query is:
-
- Materialize the subquery;
- # now run the join:
- for each record R1 in materialized table
- for each record R2 in big_table such that oe=R1
- pass R2 to output
-
-Semi-join materialization supports such strategy with SJM-Scan strategy. This WL
-entry is about adding support for such strategies for non-semijoin subqueries.
-
+We must fix known semi-join subquery bugs.
------------------------------------------------------------
-=-=(View All Progress Notes, 11 total)=-=-
http://askmonty.org/worklog/index.pl?tid=92&nolimit=1
DESCRIPTION:
The goal of this task is to fix all known subquery semi-join bugs.
We must fix known subquery semi-join bugs.
* outer join + semi join problem
* Duplicate Weedout + join caching problem.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Igor): Subquery optimization: Avoid recalculating subquery if external fields values found in subquery cache (66)
by worklog-noreply@askmonty.org 10 Mar '10
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Subquery optimization: Avoid recalculating subquery if external fields
values found in subquery cache
CREATION DATE..: Wed, 25 Nov 2009, 22:25
SUPERVISOR.....: Monty
IMPLEMENTOR....: Sanja
COPIES TO......:
CATEGORY.......: Client-Sprint
TASK ID........: 66 (http://askmonty.org/worklog/?tid=66)
VERSION........: 9.x
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Wed, 10 Mar 2010, 21:29)=-=-
High Level Description modified.
--- /tmp/wklog.66.old.32188 2010-03-10 21:29:16.000000000 +0000
+++ /tmp/wklog.66.new.32188 2010-03-10 21:29:16.000000000 +0000
@@ -1,3 +1,10 @@
+The goal of this task is to optimize evaluation of subqueries and subquery
+predicates by storing the results of a correlated subquery together with
+correlation parameters in a cache and reusing those results for the same sets of
+parameters.
+
+Here's what is to be done in this task in more details:
+
Collect all outer items/references (left part of the subquiery and outer
references inside the subquery) in key string. Compare the string (which
represents certain value set of the references) against values in hash table and
-=-=(Igor - Wed, 10 Mar 2010, 21:13)=-=-
Dependency created: 91 now depends on 66
-=-=(Igor - Wed, 10 Mar 2010, 21:12)=-=-
Category updated.
--- /tmp/wklog.66.old.31558 2010-03-10 21:12:50.000000000 +0000
+++ /tmp/wklog.66.new.31558 2010-03-10 21:12:50.000000000 +0000
@@ -1 +1 @@
-Server-BackLog
+Client-Sprint
-=-=(Igor - Wed, 10 Mar 2010, 21:12)=-=-
Version updated.
--- /tmp/wklog.66.old.31558 2010-03-10 21:12:50.000000000 +0000
+++ /tmp/wklog.66.new.31558 2010-03-10 21:12:50.000000000 +0000
@@ -1 +1 @@
-Server-5.3
+9.x
-=-=(Monty - Fri, 29 Jan 2010, 19:07)=-=-
Version updated.
--- /tmp/wklog.66.old.5893 2010-01-29 19:07:10.000000000 +0200
+++ /tmp/wklog.66.new.5893 2010-01-29 19:07:10.000000000 +0200
@@ -1 +1 @@
-Server-5.2
+Server-5.3
-=-=(Psergey - Wed, 20 Jan 2010, 14:50)=-=-
High-Level Specification modified.
--- /tmp/wklog.66.old.26873 2010-01-20 14:50:41.000000000 +0200
+++ /tmp/wklog.66.new.26873 2010-01-20 14:50:41.000000000 +0200
@@ -4,7 +4,6 @@
To check/discuss:
-----------------
-* Do we put subquery cache on all levels of subqueries or on highest level only
* Will there be any means to measure subquery cache hit rate?
* MySQL-6.0 has a one-element predicate result cache. It is called "left
expression cache", grep for left_expr_cache in sql/item_subselect.*
@@ -41,7 +40,12 @@
- subquery_item_result is 'bool' for subquery predicates, and is of
some scalar or ROW(scalar1,...scalarN) type for scalar-context subquery.
-We dont support cases when outer_expr or correlation_references are blobs.
+We don't support cases when outer_expr or correlation_references are blobs.
+
+All subquery predicates are cached. That is, if one subquery predicate is
+located within another, both of them will have caches (one option to reduce
+cache memory usage was to use cache only for the upper-most select. we decided
+against it).
2. Data structure used for the cache
------------------------------------
-=-=(Psergey - Wed, 20 Jan 2010, 13:07)=-=-
High-Level Specification modified.
--- /tmp/wklog.66.old.17649 2010-01-20 13:07:07.000000000 +0200
+++ /tmp/wklog.66.new.17649 2010-01-20 13:07:07.000000000 +0200
@@ -3,7 +3,13 @@
To check/discuss:
- To put subquery cache on all levels of subqueries or on highest level only.
+-----------------
+* Do we put subquery cache on all levels of subqueries or on highest level only
+* Will there be any means to measure subquery cache hit rate?
+* MySQL-6.0 has a one-element predicate result cache. It is called "left
+ expression cache", grep for left_expr_cache in sql/item_subselect.*
+ When this WL is merged with 6.0's optimizations, these two caches will
+ need to be unified somehow.
<contents>
-=-=(Psergey - Mon, 18 Jan 2010, 16:40)=-=-
Low Level Design modified.
--- /tmp/wklog.66.old.24899 2010-01-18 16:40:16.000000000 +0200
+++ /tmp/wklog.66.new.24899 2010-01-18 16:40:16.000000000 +0200
@@ -1,3 +1,5 @@
+* Target version: base on mysql-5.2 code
+
All items on which subquery depend could be collected in
st_select_lex::mark_as_dependent (direct of indirect reference?)
-=-=(Psergey - Mon, 18 Jan 2010, 16:37)=-=-
Low Level Design modified.
--- /tmp/wklog.66.old.24586 2010-01-18 16:37:07.000000000 +0200
+++ /tmp/wklog.66.new.24586 2010-01-18 16:37:07.000000000 +0200
@@ -4,6 +4,11 @@
Temporary table index should be created by all fields except result field
(TMP_TABLE_PARAM::keyinfo).
+How to fill the temptable
+-------------------------
+Can reuse approach from SJ-Materialization. Its code is in end_sj_materialize()
+and is supposed to be quite trivial.
+
How to make lookups into temptable
----------------------------------
We'll reuse approach used by SJ-Materialization in 6.0.
-=-=(Psergey - Mon, 18 Jan 2010, 16:34)=-=-
Low Level Design modified.
--- /tmp/wklog.66.old.24328 2010-01-18 16:34:19.000000000 +0200
+++ /tmp/wklog.66.new.24328 2010-01-18 16:34:19.000000000 +0200
@@ -32,8 +32,8 @@
Question: or perhaps that is not necessarry?
</questionable>
-Execution process
-~~~~~~~~~~~~~~~~~
+Doing the lookup
+~~~~~~~~~~~~~~~~
SJ-Materialization does lookup in sub_select_sjm(), with this code:
/* Do index lookup in the materialized table */
@@ -42,4 +42,12 @@
if (res || !sjm->in_equality->val_int())
DBUG_RETURN(NESTED_LOOP_NO_MORE_ROWS);
+The code in this WL will use the same approach
+Extracting the value of the subquery predicate
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The goal of making the lookup is to get the value of subquery predicate.
+This is done by creating an Item_field $I which refers to appropriate
+temporary table's field and then subquery_predicate->val_int() will invoke
+$I->val_int(), subquery_predicate->val_str() will invoke $I->val_str() and so
+forth.
------------------------------------------------------------
-=-=(View All Progress Notes, 17 total)=-=-
http://askmonty.org/worklog/index.pl?tid=66&nolimit=1
DESCRIPTION:
The goal of this task is to optimize evaluation of subqueries and subquery
predicates by storing the results of a correlated subquery together with
correlation parameters in a cache and reusing those results for the same sets of
parameters.
Here's what is to be done in this task in more detail:
Collect all outer items/references (the left part of the subquery and outer
references inside the subquery) into a key string. Compare the string (which
represents a certain value set of the references) against values in a hash table and
return the cached result of the subquery if that combination of reference values
has already been used.
For example, in the following subquery:
(L1, L2) IN (SELECT A, B FROM T WHERE T.F1>OTER_FIELD)
the set of references to look up in the subquery cache is (L1, L2, OTER_FIELD).
The subquery cache should be implemented as a simple LRU cache attached to the subquery.
The size of the subquery cache (in number of results, or maybe in amount of memory
used) is limited by a session variable (query parameter?).
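As a rough illustration of the caching idea, here is a minimal standalone sketch (plain C++,
not the server's Item or temporary-table machinery; the class and helper names are hypothetical):
the current values of the outer references form the key, and the cached subquery predicate
result is the value. The WL itself plans to use a temporary table rather than an in-memory map.

  /* Standalone sketch: cache keyed on the outer reference values. */
  #include <map>
  #include <string>
  #include <optional>

  class subquery_cache
  {
    std::map<std::string, bool> entries;     // key string -> predicate result
  public:
    /* Build the key from the current values of the outer references,
       e.g. (L1, L2, OTER_FIELD) in the example above. */
    static std::string make_key(const std::string &l1, const std::string &l2,
                                const std::string &outer_field)
    {
      return l1 + '\0' + l2 + '\0' + outer_field;
    }
    std::optional<bool> lookup(const std::string &key) const
    {
      auto it= entries.find(key);
      if (it == entries.end())
        return std::nullopt;                 // miss: subquery must be executed
      return it->second;                     // hit: reuse the cached result
    }
    void store(const std::string &key, bool result) { entries[key]= result; }
  };

  int main()
  {
    subquery_cache cache;
    std::string key= subquery_cache::make_key("1", "2", "10");
    if (!cache.lookup(key))                  // first evaluation: cache miss
      cache.store(key, true);                // run the subquery, remember result
    return cache.lookup(key).value() ? 0 : 1; // later evaluation: cache hit
  }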
HIGH-LEVEL SPECIFICATION:
Attach a subquery cache to each Item_subquery. The interface should allow using
either a hash table or a temporary table inside.
To check/discuss:
-----------------
* Will there be any means to measure subquery cache hit rate?
* MySQL-6.0 has a one-element predicate result cache. It is called "left
expression cache", grep for left_expr_cache in sql/item_subselect.*
When this WL is merged with 6.0's optimizations, these two caches will
need to be unified somehow.
<contents>
1. Scope of the task
2. Data structure used for the cache
3. Cache size
4. Interplay with other subquery optimizations
5. User interface
</contents>
1. Scope of the task
--------------------
This WL should handle all subquery predicates, i.e. it should handle these
cases:
outer_expr IN (SELECT correlated_select)
outer_expr $CMP$ ALL/ANY (SELECT correlated_select)
EXISTS (SELECT correlated_select)
scalar-context subquery: (SELECT correlated_select)
The cache will maintain
(outer_expr, correlation_references)-> subquery_item_result
mapping, where
- correlation_references is a list of tablename.column_name that are referred
to from the correlated_select, where tablename is a table that is outside the
subquery.
- subquery_item_result is 'bool' for subquery predicates, and is of
some scalar or ROW(scalar1,...scalarN) type for scalar-context subquery.
We don't support cases when outer_expr or correlation_references are blobs.
All subquery predicates are cached. That is, if one subquery predicate is
located within another, both of them will have caches (one option to reduce
cache memory usage was to use the cache only for the upper-most select; we decided
against it).
2. Data structure used for the cache
------------------------------------
There are two data structures available in the codebase that will allow fast
equality lookups:
1. HASH (mysys/hash.c) tables
2. Temporary tables (the ones that are used for e.g. GROUP BY)
None of them has any support for element eviction on overflow (using LRU or
some other policy).
The query cache and MyISAM/Maria's key/page cache ought to support some eviction
mechanism, but code-wise it is not readily reusable; one would need to factor
it out (or copy it).
We choose to use #2, and not to have any eviction policy. See subsequent
sections for details and reasoning behind the decision.
3. Cache size
-------------
Typically, a cache has some maximum size and a policy which is used to
select a cache entry for removal when the cache becomes full (e.g. find
and remove the least [recently] used entry).
For this WL entry we will use a cache of infinite size. The reasoning behind
this is that:
- it is easy to do: we have temporary tables that can grow to arbitrarily
large size while still providing the same insert/lookup interface.
- it suits us: unless the subquery is resolved with one index lookup,
hitting the cache would be many times cheaper than re-running the
subquery, so the cache is worth having.
4. Interplay with other subquery optimizations
----------------------------------------------
* This WL entry should not care about IN->EXISTS transformation: caching for
IN subquery and result of its conversion to EXISTS would work in the same
way.
* This optimization is orthogonal to <=>ANY -> MIN/MAX rewrite (it will
work/be useful irrespective of whether the rewrite has been performed or
not)
* TODO: compare this with materialization for uncorrelated IN-subqueries. Is
this basically the same?
A: no, it is not:
- IN-Materialization has to perform full materialization before it can
do the first subquery evaluation. This WL's code has almost no startup
costs.
- This optimization has temp.table of (corr_reference, predicate_value),
while IN-materialization will have (corr_reference) only.
5. User interface
-----------------
* There will be an @@optimizer_switch flag to turn this optimization on and
off (TODO: name of the flag?)
* TODO: how do we show this in EXPLAIN [EXTENDED]? The easiest is to
print something in the warning text of EXPLAIN EXTENDED that would indicate
use of the cache.
* temporary table sizing (max size for heap table, whether to use MyISAM or
Maria) will be controlled with common temp.table control variables.
LOW-LEVEL DESIGN:
* Target version: base on mysql-5.2 code
All items on which the subquery depends could be collected in
st_select_lex::mark_as_dependent (direct or indirect reference?)
The temporary table index should be created on all fields except the result field
(TMP_TABLE_PARAM::keyinfo).
How to fill the temptable
-------------------------
Can reuse approach from SJ-Materialization. Its code is in end_sj_materialize()
and is supposed to be quite trivial.
How to make lookups into temptable
----------------------------------
We'll reuse approach used by SJ-Materialization in 6.0.
Setup process
~~~~~~~~~~~~~
Setup is performed in the same way as in setup_sj_materialization(),
see the code that starts with these lines:
/*
Create/initialize everything we will need to index lookups into the
temptable.
*/
and ends at this line:
Remove the injected semi-join IN-equalities from join_tab conds. This
<questionable>
We'll also need to check equalities, i.e. do an equivalent of this:
if (!(sjm->in_equality= create_subq_in_equalities(thd, sjm,
emb_sj_nest->sj_subq_pred)))
DBUG_RETURN(TRUE); /* purecov: inspected */
Question: or perhaps that is not necessary?
</questionable>
Doing the lookup
~~~~~~~~~~~~~~~~
SJ-Materialization does lookup in sub_select_sjm(), with this code:
/* Do index lookup in the materialized table */
if ((res= join_read_key2(join_tab, sjm->table, sjm->tab_ref)) == 1)
DBUG_RETURN(NESTED_LOOP_ERROR); /* purecov: inspected */
if (res || !sjm->in_equality->val_int())
DBUG_RETURN(NESTED_LOOP_NO_MORE_ROWS);
The code in this WL will use the same approach.
Extracting the value of the subquery predicate
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The goal of making the lookup is to get the value of the subquery predicate.
This is done by creating an Item_field $I which refers to the appropriate
temporary table field; then subquery_predicate->val_int() will invoke
$I->val_int(), subquery_predicate->val_str() will invoke $I->val_str(), and so
forth.
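The delegation pattern this describes can be illustrated with a simplified standalone sketch
(plain C++ with made-up struct names, not the real Item class hierarchy): after a successful
temptable lookup, the subquery predicate simply forwards val_int()/val_str() to an object that
reads the result column of the cached row.

  /* Standalone illustration of forwarding val_int()/val_str() to a cached field. */
  #include <string>
  #include <cstdio>

  struct cached_result_field            // stands in for the Item_field $I
  {
    long long int_value;
    std::string str_value;
    long long val_int() const { return int_value; }
    const std::string &val_str() const { return str_value; }
  };

  struct subquery_predicate
  {
    cached_result_field *result_field;  // set up when the temptable lookup succeeds
    long long val_int() const { return result_field->val_int(); }
    const std::string &val_str() const { return result_field->val_str(); }
  };

  int main()
  {
    cached_result_field f= { 1, "1" };
    subquery_predicate pred= { &f };
    printf("%lld\n", pred.val_int());   // forwards to the cached field
    return 0;
  }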
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Igor): Subquery optimization: Avoid recalculating subquery if external fields values found in subquery cache (66)
by worklog-noreply@askmonty.org 10 Mar '10
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Subquery optimization: Avoid recalculating subquery if external fields
values found in subquery cache
CREATION DATE..: Wed, 25 Nov 2009, 22:25
SUPERVISOR.....: Monty
IMPLEMENTOR....: Sanja
COPIES TO......:
CATEGORY.......: Client-Sprint
TASK ID........: 66 (http://askmonty.org/worklog/?tid=66)
VERSION........: 9.x
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Wed, 10 Mar 2010, 21:12)=-=-
Category updated.
--- /tmp/wklog.66.old.31558 2010-03-10 21:12:50.000000000 +0000
+++ /tmp/wklog.66.new.31558 2010-03-10 21:12:50.000000000 +0000
@@ -1 +1 @@
-Server-BackLog
+Client-Sprint
-=-=(Igor - Wed, 10 Mar 2010, 21:12)=-=-
Version updated.
--- /tmp/wklog.66.old.31558 2010-03-10 21:12:50.000000000 +0000
+++ /tmp/wklog.66.new.31558 2010-03-10 21:12:50.000000000 +0000
@@ -1 +1 @@
-Server-5.3
+9.x
-=-=(Monty - Fri, 29 Jan 2010, 19:07)=-=-
Version updated.
--- /tmp/wklog.66.old.5893 2010-01-29 19:07:10.000000000 +0200
+++ /tmp/wklog.66.new.5893 2010-01-29 19:07:10.000000000 +0200
@@ -1 +1 @@
-Server-5.2
+Server-5.3
-=-=(Psergey - Wed, 20 Jan 2010, 14:50)=-=-
High-Level Specification modified.
--- /tmp/wklog.66.old.26873 2010-01-20 14:50:41.000000000 +0200
+++ /tmp/wklog.66.new.26873 2010-01-20 14:50:41.000000000 +0200
@@ -4,7 +4,6 @@
To check/discuss:
-----------------
-* Do we put subquery cache on all levels of subqueries or on highest level only
* Will there be any means to measure subquery cache hit rate?
* MySQL-6.0 has a one-element predicate result cache. It is called "left
expression cache", grep for left_expr_cache in sql/item_subselect.*
@@ -41,7 +40,12 @@
- subquery_item_result is 'bool' for subquery predicates, and is of
some scalar or ROW(scalar1,...scalarN) type for scalar-context subquery.
-We dont support cases when outer_expr or correlation_references are blobs.
+We don't support cases when outer_expr or correlation_references are blobs.
+
+All subquery predicates are cached. That is, if one subquery predicate is
+located within another, both of them will have caches (one option to reduce
+cache memory usage was to use cache only for the upper-most select. we decided
+against it).
2. Data structure used for the cache
------------------------------------
-=-=(Psergey - Wed, 20 Jan 2010, 13:07)=-=-
High-Level Specification modified.
--- /tmp/wklog.66.old.17649 2010-01-20 13:07:07.000000000 +0200
+++ /tmp/wklog.66.new.17649 2010-01-20 13:07:07.000000000 +0200
@@ -3,7 +3,13 @@
To check/discuss:
- To put subquery cache on all levels of subqueries or on highest level only.
+-----------------
+* Do we put subquery cache on all levels of subqueries or on highest level only
+* Will there be any means to measure subquery cache hit rate?
+* MySQL-6.0 has a one-element predicate result cache. It is called "left
+ expression cache", grep for left_expr_cache in sql/item_subselect.*
+ When this WL is merged with 6.0's optimizations, these two caches will
+ need to be unified somehow.
<contents>
-=-=(Psergey - Mon, 18 Jan 2010, 16:40)=-=-
Low Level Design modified.
--- /tmp/wklog.66.old.24899 2010-01-18 16:40:16.000000000 +0200
+++ /tmp/wklog.66.new.24899 2010-01-18 16:40:16.000000000 +0200
@@ -1,3 +1,5 @@
+* Target version: base on mysql-5.2 code
+
All items on which subquery depend could be collected in
st_select_lex::mark_as_dependent (direct of indirect reference?)
-=-=(Psergey - Mon, 18 Jan 2010, 16:37)=-=-
Low Level Design modified.
--- /tmp/wklog.66.old.24586 2010-01-18 16:37:07.000000000 +0200
+++ /tmp/wklog.66.new.24586 2010-01-18 16:37:07.000000000 +0200
@@ -4,6 +4,11 @@
Temporary table index should be created by all fields except result field
(TMP_TABLE_PARAM::keyinfo).
+How to fill the temptable
+-------------------------
+Can reuse approach from SJ-Materialization. Its code is in end_sj_materialize()
+and is supposed to be quite trivial.
+
How to make lookups into temptable
----------------------------------
We'll reuse approach used by SJ-Materialization in 6.0.
-=-=(Psergey - Mon, 18 Jan 2010, 16:34)=-=-
Low Level Design modified.
--- /tmp/wklog.66.old.24328 2010-01-18 16:34:19.000000000 +0200
+++ /tmp/wklog.66.new.24328 2010-01-18 16:34:19.000000000 +0200
@@ -32,8 +32,8 @@
Question: or perhaps that is not necessarry?
</questionable>
-Execution process
-~~~~~~~~~~~~~~~~~
+Doing the lookup
+~~~~~~~~~~~~~~~~
SJ-Materialization does lookup in sub_select_sjm(), with this code:
/* Do index lookup in the materialized table */
@@ -42,4 +42,12 @@
if (res || !sjm->in_equality->val_int())
DBUG_RETURN(NESTED_LOOP_NO_MORE_ROWS);
+The code in this WL will use the same approach
+Extracting the value of the subquery predicate
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The goal of making the lookup is to get the value of subquery predicate.
+This is done by creating an Item_field $I which refers to appropriate
+temporary table's field and then subquery_predicate->val_int() will invoke
+$I->val_int(), subquery_predicate->val_str() will invoke $I->val_str() and so
+forth.
-=-=(Psergey - Mon, 18 Jan 2010, 16:23)=-=-
Low Level Design modified.
--- /tmp/wklog.66.old.23203 2010-01-18 16:23:18.000000000 +0200
+++ /tmp/wklog.66.new.23203 2010-01-18 16:23:18.000000000 +0200
@@ -31,3 +31,15 @@
Question: or perhaps that is not necessarry?
</questionable>
+
+Execution process
+~~~~~~~~~~~~~~~~~
+SJ-Materialization does lookup in sub_select_sjm(), with this code:
+
+ /* Do index lookup in the materialized table */
+ if ((res= join_read_key2(join_tab, sjm->table, sjm->tab_ref)) == 1)
+ DBUG_RETURN(NESTED_LOOP_ERROR); /* purecov: inspected */
+ if (res || !sjm->in_equality->val_int())
+ DBUG_RETURN(NESTED_LOOP_NO_MORE_ROWS);
+
+
-=-=(Psergey - Mon, 18 Jan 2010, 16:22)=-=-
Low Level Design modified.
--- /tmp/wklog.66.old.23076 2010-01-18 16:22:07.000000000 +0200
+++ /tmp/wklog.66.new.23076 2010-01-18 16:22:07.000000000 +0200
@@ -4,3 +4,30 @@
Temporary table index should be created by all fields except result field
(TMP_TABLE_PARAM::keyinfo).
+How to make lookups into temptable
+----------------------------------
+We'll reuse approach used by SJ-Materialization in 6.0.
+
+Setup process
+~~~~~~~~~~~~~
+Setup is performed in the same way as in setup_sj_materialization(),
+see the code that starts these lines:
+
+ /*
+ Create/initialize everything we will need to index lookups into the
+ temptable.
+ */
+
+and ends at this line:
+
+ Remove the injected semi-join IN-equalities from join_tab conds. This
+
+<questionable>
+We'll also need to check equalities, i.e. do an equivalent of this:
+
+ if (!(sjm->in_equality= create_subq_in_equalities(thd, sjm,
+ emb_sj_nest->sj_subq_pred)))
+ DBUG_RETURN(TRUE); /* purecov: inspected */
+
+Question: or perhaps that is not necessarry?
+</questionable>
------------------------------------------------------------
-=-=(View All Progress Notes, 15 total)=-=-
http://askmonty.org/worklog/index.pl?tid=66&nolimit=1
DESCRIPTION:
Collect all outer items/references (the left part of the subquery and outer
references inside the subquery) into a key string. Compare the string (which
represents a certain value set of the references) against values in a hash table and
return the cached result of the subquery if that combination of reference values
has already been used.
For example, in the following subquery:
(L1, L2) IN (SELECT A, B FROM T WHERE T.F1>OTER_FIELD)
the set of references to look up in the subquery cache is (L1, L2, OTER_FIELD).
The subquery cache should be implemented as a simple LRU cache attached to the subquery.
The size of the subquery cache (in number of results, or maybe in amount of memory
used) is limited by a session variable (query parameter?).
HIGH-LEVEL SPECIFICATION:
Attach a subquery cache to each Item_subquery. The interface should allow using
either a hash table or a temporary table inside.
To check/discuss:
-----------------
* Will there be any means to measure subquery cache hit rate?
* MySQL-6.0 has a one-element predicate result cache. It is called "left
expression cache", grep for left_expr_cache in sql/item_subselect.*
When this WL is merged with 6.0's optimizations, these two caches will
need to be unified somehow.
<contents>
1. Scope of the task
2. Data structure used for the cache
3. Cache size
4. Interplay with other subquery optimizations
5. User interface
</contents>
1. Scope of the task
--------------------
This WL should handle all subquery predicates, i.e. it should handle these
cases:
outer_expr IN (SELECT correlated_select)
outer_expr $CMP$ ALL/ANY (SELECT correlated_select)
EXISTS (SELECT correlated_select)
scalar-context subquery: (SELECT correlated_select)
The cache will maintain
(outer_expr, correlation_references)-> subquery_item_result
mapping, where
- correlation_references is a list of tablename.column_name entries that are
referred to from the correlated_select, where tablename is a table outside the
subquery.
- subquery_item_result is 'bool' for subquery predicates, and is of
some scalar or ROW(scalar1,...scalarN) type for scalar-context subquery.
We don't support cases when outer_expr or correlation_references are blobs.
All subquery predicates are cached. That is, if one subquery predicate is
located within another, both of them will have caches (one option to reduce
cache memory usage was to use the cache only for the upper-most select; we
decided against it).
2. Data structure used for the cache
------------------------------------
There are two data structures available in the codebase that will allow fast
equality lookups:
1. HASH (mysys/hash.c) tables
2. Temporary tables (the ones that are used for e.g. GROUP BY)
None of them has any support for element eviction on overflow (using LRU or
some other policy).
The query cache and MyISAM/Maria's key/page caches do support some eviction
mechanism, but code-wise it is not readily reusable; one would need to factor
it out (or copy it).
We choose to use #2, and not to have any eviction policy. See subsequent
sections for details and reasoning behind the decision.
3. Cache size
-------------
Typically, a cache has some maximum size and a policy which is used to
select a cache entry for removal when the cache becomes full (e.g. find
and remove the least [recently] used entry).
For this WL entry we will use a cache of infinite size. The reasoning behind
this is that:
- it is easy to do: we have temporary tables that can grow to an arbitrarily
large size while still providing the same insert/lookup interface.
- it suits us: unless the subquery is resolved with one index lookup,
hitting the cache would be many times cheaper than re-running the
subquery, so the cache is worth having.
4. Interplay with other subquery optimizations
----------------------------------------------
* This WL entry should not care about IN->EXISTS transformation: caching for
IN subquery and result of its conversion to EXISTS would work in the same
way.
* This optimization is orthogonal to the <=>ANY -> MIN/MAX rewrite (it will
work/be useful irrespective of whether the rewrite has been performed or
not).
* TODO: compare this with materialization for uncorrelated IN-subqueries. Is
this basically the same?
A: no, it is not:
- IN-Materialization has to perform full materialization before it can
do the first subquery evaluation. This WL's code has almost no startup
costs.
- This optimization has temp.table of (corr_reference, predicate_value),
while IN-materialization will have (corr_reference) only.
5. User interface
-----------------
* There will be an @@optimizer_switch flag to turn this optimization on and
off (TODO: name of the flag?)
* TODO: how do we show this in EXPLAIN [EXTENDED]? The easiest is to
print something in the warning text of EXPLAIN EXTENDED that would indicate
use of the cache.
* temporary table sizing (max size for heap table, whether to use MyISAM or
Maria) will be controlled with common temp.table control variables.
LOW-LEVEL DESIGN:
* Target version: base on mysql-5.2 code
All items on which the subquery depends could be collected in
st_select_lex::mark_as_dependent (direct or indirect reference?).
The temporary table index should be created over all fields except the result
field (TMP_TABLE_PARAM::keyinfo).
How to fill the temptable
-------------------------
We can reuse the approach from SJ-Materialization. Its code is in
end_sj_materialize() and is supposed to be quite trivial; a rough sketch is
given below.
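A minimal sketch of what filling one cache row could look like (hypothetical
function and variable names, simplified error handling; Item::save_in_field()
and handler::ha_write_row() are the usual primitives for filling internal
temporary tables):

  /*
    Sketch only: store the current values of the correlation references plus
    the subquery predicate's result into the cache's temporary table.
    'cache_table' and 'items' are hypothetical names.
  */
  static bool subquery_cache_store_row(TABLE *cache_table, List<Item> &items)
  {
    List_iterator<Item> it(items);
    Item *item;
    uint field_no= 0;
    while ((item= it++))
      item->save_in_field(cache_table->field[field_no++], TRUE);
    /* TODO: handle duplicate-key and table-full errors properly */
    return cache_table->file->ha_write_row(cache_table->record[0]) != 0;
  }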
How to make lookups into temptable
----------------------------------
We'll reuse approach used by SJ-Materialization in 6.0.
Setup process
~~~~~~~~~~~~~
Setup is performed in the same way as in setup_sj_materialization(),
see the code that starts with these lines:
/*
Create/initialize everything we will need to index lookups into the
temptable.
*/
and ends at this line:
Remove the injected semi-join IN-equalities from join_tab conds. This
<questionable>
We'll also need to check equalities, i.e. do an equivalent of this:
if (!(sjm->in_equality= create_subq_in_equalities(thd, sjm,
emb_sj_nest->sj_subq_pred)))
DBUG_RETURN(TRUE); /* purecov: inspected */
Question: or perhaps that is not necessary?
</questionable>
Doing the lookup
~~~~~~~~~~~~~~~~
SJ-Materialization does lookup in sub_select_sjm(), with this code:
/* Do index lookup in the materialized table */
if ((res= join_read_key2(join_tab, sjm->table, sjm->tab_ref)) == 1)
DBUG_RETURN(NESTED_LOOP_ERROR); /* purecov: inspected */
if (res || !sjm->in_equality->val_int())
DBUG_RETURN(NESTED_LOOP_NO_MORE_ROWS);
The code in this WL will use the same approach.
Extracting the value of the subquery predicate
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The goal of making the lookup is to get the value of the subquery predicate.
This is done by creating an Item_field $I which refers to the appropriate
field of the temporary table; subquery_predicate->val_int() will then invoke
$I->val_int(), subquery_predicate->val_str() will invoke $I->val_str(), and so
forth.
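A minimal sketch of that delegation (illustration only; the class name and
members are hypothetical and not the planned patch):

  /*
    Sketch: on a cache hit the subquery predicate delegates value requests to
    an Item_field created over the result column of the cache's temporary
    table.  'cached_result' is the $I from the text above.
  */
  class Subquery_predicate_cache_sketch
  {
  public:
    Item_field *cached_result;            // $I: points at the result field

    longlong cached_val_int()
    {
      return cached_result->val_int();    // what subquery_predicate->val_int()
    }                                     // ends up calling on a hit
    String *cached_val_str(String *buf)
    {
      return cached_result->val_str(buf); // same pattern for val_str(), etc.
    }
  };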
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Igor): Subquery optimization: Avoid recalculating subquery if external fields values found in subquery cache (66)
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Subquery optimization: Avoid recalculating subquery if external fields
values found in subquery cache
CREATION DATE..: Wed, 25 Nov 2009, 22:25
SUPERVISOR.....: Monty
IMPLEMENTOR....: Sanja
COPIES TO......:
CATEGORY.......: Client-Sprint
TASK ID........: 66 (http://askmonty.org/worklog/?tid=66)
VERSION........: 9.x
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Wed, 10 Mar 2010, 21:12)=-=-
Category updated.
--- /tmp/wklog.66.old.31558 2010-03-10 21:12:50.000000000 +0000
+++ /tmp/wklog.66.new.31558 2010-03-10 21:12:50.000000000 +0000
@@ -1 +1 @@
-Server-BackLog
+Client-Sprint
-=-=(Igor - Wed, 10 Mar 2010, 21:12)=-=-
Version updated.
--- /tmp/wklog.66.old.31558 2010-03-10 21:12:50.000000000 +0000
+++ /tmp/wklog.66.new.31558 2010-03-10 21:12:50.000000000 +0000
@@ -1 +1 @@
-Server-5.3
+9.x
-=-=(Monty - Fri, 29 Jan 2010, 19:07)=-=-
Version updated.
--- /tmp/wklog.66.old.5893 2010-01-29 19:07:10.000000000 +0200
+++ /tmp/wklog.66.new.5893 2010-01-29 19:07:10.000000000 +0200
@@ -1 +1 @@
-Server-5.2
+Server-5.3
-=-=(Psergey - Wed, 20 Jan 2010, 14:50)=-=-
High-Level Specification modified.
--- /tmp/wklog.66.old.26873 2010-01-20 14:50:41.000000000 +0200
+++ /tmp/wklog.66.new.26873 2010-01-20 14:50:41.000000000 +0200
@@ -4,7 +4,6 @@
To check/discuss:
-----------------
-* Do we put subquery cache on all levels of subqueries or on highest level only
* Will there be any means to measure subquery cache hit rate?
* MySQL-6.0 has a one-element predicate result cache. It is called "left
expression cache", grep for left_expr_cache in sql/item_subselect.*
@@ -41,7 +40,12 @@
- subquery_item_result is 'bool' for subquery predicates, and is of
some scalar or ROW(scalar1,...scalarN) type for scalar-context subquery.
-We dont support cases when outer_expr or correlation_references are blobs.
+We don't support cases when outer_expr or correlation_references are blobs.
+
+All subquery predicates are cached. That is, if one subquery predicate is
+located within another, both of them will have caches (one option to reduce
+cache memory usage was to use cache only for the upper-most select. we decided
+against it).
2. Data structure used for the cache
------------------------------------
-=-=(Psergey - Wed, 20 Jan 2010, 13:07)=-=-
High-Level Specification modified.
--- /tmp/wklog.66.old.17649 2010-01-20 13:07:07.000000000 +0200
+++ /tmp/wklog.66.new.17649 2010-01-20 13:07:07.000000000 +0200
@@ -3,7 +3,13 @@
To check/discuss:
- To put subquery cache on all levels of subqueries or on highest level only.
+-----------------
+* Do we put subquery cache on all levels of subqueries or on highest level only
+* Will there be any means to measure subquery cache hit rate?
+* MySQL-6.0 has a one-element predicate result cache. It is called "left
+ expression cache", grep for left_expr_cache in sql/item_subselect.*
+ When this WL is merged with 6.0's optimizations, these two caches will
+ need to be unified somehow.
<contents>
-=-=(Psergey - Mon, 18 Jan 2010, 16:40)=-=-
Low Level Design modified.
--- /tmp/wklog.66.old.24899 2010-01-18 16:40:16.000000000 +0200
+++ /tmp/wklog.66.new.24899 2010-01-18 16:40:16.000000000 +0200
@@ -1,3 +1,5 @@
+* Target version: base on mysql-5.2 code
+
All items on which subquery depend could be collected in
st_select_lex::mark_as_dependent (direct of indirect reference?)
-=-=(Psergey - Mon, 18 Jan 2010, 16:37)=-=-
Low Level Design modified.
--- /tmp/wklog.66.old.24586 2010-01-18 16:37:07.000000000 +0200
+++ /tmp/wklog.66.new.24586 2010-01-18 16:37:07.000000000 +0200
@@ -4,6 +4,11 @@
Temporary table index should be created by all fields except result field
(TMP_TABLE_PARAM::keyinfo).
+How to fill the temptable
+-------------------------
+Can reuse approach from SJ-Materialization. Its code is in end_sj_materialize()
+and is supposed to be quite trivial.
+
How to make lookups into temptable
----------------------------------
We'll reuse approach used by SJ-Materialization in 6.0.
-=-=(Psergey - Mon, 18 Jan 2010, 16:34)=-=-
Low Level Design modified.
--- /tmp/wklog.66.old.24328 2010-01-18 16:34:19.000000000 +0200
+++ /tmp/wklog.66.new.24328 2010-01-18 16:34:19.000000000 +0200
@@ -32,8 +32,8 @@
Question: or perhaps that is not necessarry?
</questionable>
-Execution process
-~~~~~~~~~~~~~~~~~
+Doing the lookup
+~~~~~~~~~~~~~~~~
SJ-Materialization does lookup in sub_select_sjm(), with this code:
/* Do index lookup in the materialized table */
@@ -42,4 +42,12 @@
if (res || !sjm->in_equality->val_int())
DBUG_RETURN(NESTED_LOOP_NO_MORE_ROWS);
+The code in this WL will use the same approach
+Extracting the value of the subquery predicate
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The goal of making the lookup is to get the value of subquery predicate.
+This is done by creating an Item_field $I which refers to appropriate
+temporary table's field and then subquery_predicate->val_int() will invoke
+$I->val_int(), subquery_predicate->val_str() will invoke $I->val_str() and so
+forth.
-=-=(Psergey - Mon, 18 Jan 2010, 16:23)=-=-
Low Level Design modified.
--- /tmp/wklog.66.old.23203 2010-01-18 16:23:18.000000000 +0200
+++ /tmp/wklog.66.new.23203 2010-01-18 16:23:18.000000000 +0200
@@ -31,3 +31,15 @@
Question: or perhaps that is not necessarry?
</questionable>
+
+Execution process
+~~~~~~~~~~~~~~~~~
+SJ-Materialization does lookup in sub_select_sjm(), with this code:
+
+ /* Do index lookup in the materialized table */
+ if ((res= join_read_key2(join_tab, sjm->table, sjm->tab_ref)) == 1)
+ DBUG_RETURN(NESTED_LOOP_ERROR); /* purecov: inspected */
+ if (res || !sjm->in_equality->val_int())
+ DBUG_RETURN(NESTED_LOOP_NO_MORE_ROWS);
+
+
-=-=(Psergey - Mon, 18 Jan 2010, 16:22)=-=-
Low Level Design modified.
--- /tmp/wklog.66.old.23076 2010-01-18 16:22:07.000000000 +0200
+++ /tmp/wklog.66.new.23076 2010-01-18 16:22:07.000000000 +0200
@@ -4,3 +4,30 @@
Temporary table index should be created by all fields except result field
(TMP_TABLE_PARAM::keyinfo).
+How to make lookups into temptable
+----------------------------------
+We'll reuse approach used by SJ-Materialization in 6.0.
+
+Setup process
+~~~~~~~~~~~~~
+Setup is performed in the same way as in setup_sj_materialization(),
+see the code that starts these lines:
+
+ /*
+ Create/initialize everything we will need to index lookups into the
+ temptable.
+ */
+
+and ends at this line:
+
+ Remove the injected semi-join IN-equalities from join_tab conds. This
+
+<questionable>
+We'll also need to check equalities, i.e. do an equivalent of this:
+
+ if (!(sjm->in_equality= create_subq_in_equalities(thd, sjm,
+ emb_sj_nest->sj_subq_pred)))
+ DBUG_RETURN(TRUE); /* purecov: inspected */
+
+Question: or perhaps that is not necessarry?
+</questionable>
------------------------------------------------------------
-=-=(View All Progress Notes, 15 total)=-=-
http://askmonty.org/worklog/index.pl?tid=66&nolimit=1
DESCRIPTION:
Collect all outer items/references (the left part of the subquery and the outer
references inside the subquery) into a key string. Compare that string (which
represents one combination of values of the references) against the values in a
hash table, and return the cached result of the subquery if that combination of
reference values has already been used.
For example, in the following subquery:
(L1, L2) IN (SELECT A, B FROM T WHERE T.F1 > OUTER_FIELD)
the set of references to look up in the subquery cache is (L1, L2, OUTER_FIELD).
The subquery cache should be implemented as a simple LRU attached to the subquery.
The size of the subquery cache (in number of results, or maybe in memory used)
is limited by a session variable (query parameter?).
HIGH-LEVEL SPECIFICATION:
Attach a subquery cache to each Item_subquery. The interface should allow using
either a hash table or a temporary table inside.
To check/discuss:
-----------------
* Will there be any means to measure subquery cache hit rate?
* MySQL-6.0 has a one-element predicate result cache. It is called "left
expression cache", grep for left_expr_cache in sql/item_subselect.*
When this WL is merged with 6.0's optimizations, these two caches will
need to be unified somehow.
<contents>
1. Scope of the task
2. Data structure used for the cache
3. Cache size
4. Interplay with other subquery optimizations
5. User interface
</contents>
1. Scope of the task
--------------------
This WL should handle all subquery predicates, i.e. it should handle these
cases:
outer_expr IN (SELECT correlated_select)
outer_expr $CMP$ ALL/ANY (SELECT correlated_select)
EXISTS (SELECT correlated_select)
scalar-context subquery: (SELECT correlated_select)
The cache will maintain
(outer_expr, correlation_references)-> subquery_item_result
mapping, where
- correlation_references is a list of tablename.column_name entries that are
referred to from the correlated_select, where tablename is a table outside the
subquery.
- subquery_item_result is 'bool' for subquery predicates, and is of
some scalar or ROW(scalar1,...scalarN) type for scalar-context subquery.
We don't support cases when outer_expr or correlation_references are blobs.
All subquery predicates are cached. That is, if one subquery predicate is
located within another, both of them will have caches (one option to reduce
cache memory usage was to use the cache only for the upper-most select; we
decided against it).
2. Data structure used for the cache
------------------------------------
There are two data structures available in the codebase that will allow fast
equality lookups:
1. HASH (mysys/hash.c) tables
2. Temporary tables (the ones that are used for e.g. GROUP BY)
None of them has any support for element eviction on overflow (using LRU or
some other policy).
The query cache and MyISAM/Maria's key/page caches do support some eviction
mechanism, but code-wise it is not readily reusable; one would need to factor
it out (or copy it).
We choose to use #2, and not to have any eviction policy. See subsequent
sections for details and reasoning behind the decision.
3. Cache size
-------------
Typically, a cache has some maximum size and a policy which is used to
select a cache entry for removal when the cache becomes full (e.g. find
and remove the least [recently] used entry).
For this WL entry we will use a cache of infinite size. The reasoning behind
this is that:
- it is easy to do: we have temporary tables that can grow to an arbitrarily
large size while still providing the same insert/lookup interface.
- it suits us: unless the subquery is resolved with one index lookup,
hitting the cache would be many times cheaper than re-running the
subquery, so the cache is worth having.
4. Interplay with other subquery optimizations
----------------------------------------------
* This WL entry should not care about IN->EXISTS transformation: caching for
IN subquery and result of its conversion to EXISTS would work in the same
way.
* This optimization is orthogonal to the <=>ANY -> MIN/MAX rewrite (it will
work/be useful irrespective of whether the rewrite has been performed or
not).
* TODO: compare this with materialization for uncorrelated IN-subqueries. Is
this basically the same?
A: no, it is not:
- IN-Materialization has to perform full materialization before it can
do the first subquery evaluation. This WL's code has almost no startup
costs.
- This optimization has temp.table of (corr_reference, predicate_value),
while IN-materialization will have (corr_reference) only.
5. User interface
-----------------
* There will be an @@optimizer_switch flag to turn this optimization on and
off (TODO: name of the flag?)
* TODO: how do we show this in EXPLAIN [EXTENDED]? The easiest is to
print something in the warning text of EXPLAIN EXTENDED that would indicate
use of the cache.
* temporary table sizing (max size for heap table, whether to use MyISAM or
Maria) will be controlled with common temp.table control variables.
LOW-LEVEL DESIGN:
* Target version: base on mysql-5.2 code
All items on which the subquery depends could be collected in
st_select_lex::mark_as_dependent (direct or indirect reference?).
The temporary table index should be created over all fields except the result
field (TMP_TABLE_PARAM::keyinfo).
How to fill the temptable
-------------------------
We can reuse the approach from SJ-Materialization. Its code is in
end_sj_materialize() and is supposed to be quite trivial.
How to make lookups into temptable
----------------------------------
We'll reuse approach used by SJ-Materialization in 6.0.
Setup process
~~~~~~~~~~~~~
Setup is performed in the same way as in setup_sj_materialization(),
see the code that starts with these lines:
/*
Create/initialize everything we will need to index lookups into the
temptable.
*/
and ends at this line:
Remove the injected semi-join IN-equalities from join_tab conds. This
<questionable>
We'll also need to check equalities, i.e. do an equivalent of this:
if (!(sjm->in_equality= create_subq_in_equalities(thd, sjm,
emb_sj_nest->sj_subq_pred)))
DBUG_RETURN(TRUE); /* purecov: inspected */
Question: or perhaps that is not necessary?
</questionable>
Doing the lookup
~~~~~~~~~~~~~~~~
SJ-Materialization does lookup in sub_select_sjm(), with this code:
/* Do index lookup in the materialized table */
if ((res= join_read_key2(join_tab, sjm->table, sjm->tab_ref)) == 1)
DBUG_RETURN(NESTED_LOOP_ERROR); /* purecov: inspected */
if (res || !sjm->in_equality->val_int())
DBUG_RETURN(NESTED_LOOP_NO_MORE_ROWS);
The code in this WL will use the same approach.
Extracting the value of the subquery predicate
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The goal of making the lookup is to get the value of the subquery predicate.
This is done by creating an Item_field $I which refers to the appropriate
field of the temporary table; subquery_predicate->val_int() will then invoke
$I->val_int(), subquery_predicate->val_str() will invoke $I->val_str(), and so
forth.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Igor): Backport subquery optimizations (104)
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Backport subquery optimizations
CREATION DATE..: Wed, 10 Mar 2010, 18:54
SUPERVISOR.....: Igor
IMPLEMENTOR....: Psergey
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 104 (http://askmonty.org/worklog/?tid=104)
VERSION........: Server-5.3
STATUS.........: Complete
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Wed, 10 Mar 2010, 20:55)=-=-
High Level Description modified.
--- /tmp/wklog.104.old.30849 2010-03-10 20:55:30.000000000 +0000
+++ /tmp/wklog.104.new.30849 2010-03-10 20:55:30.000000000 +0000
@@ -1,2 +1,2 @@
-The target of this task is backport the code for subquery optimizations from the
+The goal of this task is to backport the code for subquery optimizations from the
MySQL 6.0 code line to MariaDB 5.3.
-=-=(Igor - Wed, 10 Mar 2010, 20:54)=-=-
High Level Description modified.
--- /tmp/wklog.104.old.30758 2010-03-10 20:54:41.000000000 +0000
+++ /tmp/wklog.104.new.30758 2010-03-10 20:54:41.000000000 +0000
@@ -1,2 +1,2 @@
The target of this task is backport the code for subquery optimizations from the
-MySQL 6.0 code line to MariaDB 5.3
+MySQL 6.0 code line to MariaDB 5.3.
-=-=(Igor - Wed, 10 Mar 2010, 20:53)=-=-
Title modified.
--- /tmp/wklog.104.old.30184 2010-03-10 20:53:31.000000000 +0000
+++ /tmp/wklog.104.new.30184 2010-03-10 20:53:31.000000000 +0000
@@ -1 +1 @@
-Backport 6.0 subquery code
+Backport subquery optimizations
-=-=(Igor - Wed, 10 Mar 2010, 20:52)=-=-
High Level Description modified.
--- /tmp/wklog.104.old.30172 2010-03-10 20:52:27.000000000 +0000
+++ /tmp/wklog.104.new.30172 2010-03-10 20:52:27.000000000 +0000
@@ -1 +1,2 @@
-Backport 6.0 subquery code to MariaDB 5.3
+The target of this task is backport the code for subquery optimizations from the
+MySQL 6.0 code line to MariaDB 5.3
-=-=(Guest - Wed, 10 Mar 2010, 20:48)=-=-
Title modified.
--- /tmp/wklog.104.old.29904 2010-03-10 20:48:13.000000000 +0000
+++ /tmp/wklog.104.new.29904 2010-03-10 20:48:13.000000000 +0000
@@ -1 +1 @@
-Backport 6.0 subquery code to MariaDB 5.3
+Backport 6.0 subquery code
-=-=(Psergey - Wed, 10 Mar 2010, 18:55)=-=-
Dependency created: 91 now depends on 104
DESCRIPTION:
The goal of this task is to backport the code for subquery optimizations from the
MySQL 6.0 code line to MariaDB 5.3.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Igor): Backport subquery optimizations (104)
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Backport subquery optimizations
CREATION DATE..: Wed, 10 Mar 2010, 18:54
SUPERVISOR.....: Igor
IMPLEMENTOR....: Psergey
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 104 (http://askmonty.org/worklog/?tid=104)
VERSION........: Server-5.3
STATUS.........: Complete
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Wed, 10 Mar 2010, 20:55)=-=-
High Level Description modified.
--- /tmp/wklog.104.old.30849 2010-03-10 20:55:30.000000000 +0000
+++ /tmp/wklog.104.new.30849 2010-03-10 20:55:30.000000000 +0000
@@ -1,2 +1,2 @@
-The target of this task is backport the code for subquery optimizations from the
+The goal of this task is to backport the code for subquery optimizations from the
MySQL 6.0 code line to MariaDB 5.3.
-=-=(Igor - Wed, 10 Mar 2010, 20:54)=-=-
High Level Description modified.
--- /tmp/wklog.104.old.30758 2010-03-10 20:54:41.000000000 +0000
+++ /tmp/wklog.104.new.30758 2010-03-10 20:54:41.000000000 +0000
@@ -1,2 +1,2 @@
The target of this task is backport the code for subquery optimizations from the
-MySQL 6.0 code line to MariaDB 5.3
+MySQL 6.0 code line to MariaDB 5.3.
-=-=(Igor - Wed, 10 Mar 2010, 20:53)=-=-
Title modified.
--- /tmp/wklog.104.old.30184 2010-03-10 20:53:31.000000000 +0000
+++ /tmp/wklog.104.new.30184 2010-03-10 20:53:31.000000000 +0000
@@ -1 +1 @@
-Backport 6.0 subquery code
+Backport subquery optimizations
-=-=(Igor - Wed, 10 Mar 2010, 20:52)=-=-
High Level Description modified.
--- /tmp/wklog.104.old.30172 2010-03-10 20:52:27.000000000 +0000
+++ /tmp/wklog.104.new.30172 2010-03-10 20:52:27.000000000 +0000
@@ -1 +1,2 @@
-Backport 6.0 subquery code to MariaDB 5.3
+The target of this task is backport the code for subquery optimizations from the
+MySQL 6.0 code line to MariaDB 5.3
-=-=(Guest - Wed, 10 Mar 2010, 20:48)=-=-
Title modified.
--- /tmp/wklog.104.old.29904 2010-03-10 20:48:13.000000000 +0000
+++ /tmp/wklog.104.new.29904 2010-03-10 20:48:13.000000000 +0000
@@ -1 +1 @@
-Backport 6.0 subquery code to MariaDB 5.3
+Backport 6.0 subquery code
-=-=(Psergey - Wed, 10 Mar 2010, 18:55)=-=-
Dependency created: 91 now depends on 104
DESCRIPTION:
The goal of this task is to backport the code for subquery optimizations from the
MySQL 6.0 code line to MariaDB 5.3.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Igor): Backport subquery optimizations (104)
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Backport subquery optimizations
CREATION DATE..: Wed, 10 Mar 2010, 18:54
SUPERVISOR.....: Igor
IMPLEMENTOR....: Psergey
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 104 (http://askmonty.org/worklog/?tid=104)
VERSION........: Server-5.3
STATUS.........: Complete
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Wed, 10 Mar 2010, 20:54)=-=-
High Level Description modified.
--- /tmp/wklog.104.old.30758 2010-03-10 20:54:41.000000000 +0000
+++ /tmp/wklog.104.new.30758 2010-03-10 20:54:41.000000000 +0000
@@ -1,2 +1,2 @@
The target of this task is backport the code for subquery optimizations from the
-MySQL 6.0 code line to MariaDB 5.3
+MySQL 6.0 code line to MariaDB 5.3.
-=-=(Igor - Wed, 10 Mar 2010, 20:53)=-=-
Title modified.
--- /tmp/wklog.104.old.30184 2010-03-10 20:53:31.000000000 +0000
+++ /tmp/wklog.104.new.30184 2010-03-10 20:53:31.000000000 +0000
@@ -1 +1 @@
-Backport 6.0 subquery code
+Backport subquery optimizations
-=-=(Igor - Wed, 10 Mar 2010, 20:52)=-=-
High Level Description modified.
--- /tmp/wklog.104.old.30172 2010-03-10 20:52:27.000000000 +0000
+++ /tmp/wklog.104.new.30172 2010-03-10 20:52:27.000000000 +0000
@@ -1 +1,2 @@
-Backport 6.0 subquery code to MariaDB 5.3
+The target of this task is backport the code for subquery optimizations from the
+MySQL 6.0 code line to MariaDB 5.3
-=-=(Guest - Wed, 10 Mar 2010, 20:48)=-=-
Title modified.
--- /tmp/wklog.104.old.29904 2010-03-10 20:48:13.000000000 +0000
+++ /tmp/wklog.104.new.29904 2010-03-10 20:48:13.000000000 +0000
@@ -1 +1 @@
-Backport 6.0 subquery code to MariaDB 5.3
+Backport 6.0 subquery code
-=-=(Psergey - Wed, 10 Mar 2010, 18:55)=-=-
Dependency created: 91 now depends on 104
DESCRIPTION:
The target of this task is backport the code for subquery optimizations from the
MySQL 6.0 code line to MariaDB 5.3.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Igor): Backport subquery optimizations (104)
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Backport subquery optimizations
CREATION DATE..: Wed, 10 Mar 2010, 18:54
SUPERVISOR.....: Igor
IMPLEMENTOR....: Psergey
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 104 (http://askmonty.org/worklog/?tid=104)
VERSION........: Server-5.3
STATUS.........: Complete
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Wed, 10 Mar 2010, 20:54)=-=-
High Level Description modified.
--- /tmp/wklog.104.old.30758 2010-03-10 20:54:41.000000000 +0000
+++ /tmp/wklog.104.new.30758 2010-03-10 20:54:41.000000000 +0000
@@ -1,2 +1,2 @@
The target of this task is backport the code for subquery optimizations from the
-MySQL 6.0 code line to MariaDB 5.3
+MySQL 6.0 code line to MariaDB 5.3.
-=-=(Igor - Wed, 10 Mar 2010, 20:53)=-=-
Title modified.
--- /tmp/wklog.104.old.30184 2010-03-10 20:53:31.000000000 +0000
+++ /tmp/wklog.104.new.30184 2010-03-10 20:53:31.000000000 +0000
@@ -1 +1 @@
-Backport 6.0 subquery code
+Backport subquery optimizations
-=-=(Igor - Wed, 10 Mar 2010, 20:52)=-=-
High Level Description modified.
--- /tmp/wklog.104.old.30172 2010-03-10 20:52:27.000000000 +0000
+++ /tmp/wklog.104.new.30172 2010-03-10 20:52:27.000000000 +0000
@@ -1 +1,2 @@
-Backport 6.0 subquery code to MariaDB 5.3
+The target of this task is backport the code for subquery optimizations from the
+MySQL 6.0 code line to MariaDB 5.3
-=-=(Guest - Wed, 10 Mar 2010, 20:48)=-=-
Title modified.
--- /tmp/wklog.104.old.29904 2010-03-10 20:48:13.000000000 +0000
+++ /tmp/wklog.104.new.29904 2010-03-10 20:48:13.000000000 +0000
@@ -1 +1 @@
-Backport 6.0 subquery code to MariaDB 5.3
+Backport 6.0 subquery code
-=-=(Psergey - Wed, 10 Mar 2010, 18:55)=-=-
Dependency created: 91 now depends on 104
DESCRIPTION:
The target of this task is backport the code for subquery optimizations from the
MySQL 6.0 code line to MariaDB 5.3.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Igor): Backport subquery optimizations (104)
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Backport subquery optimizations
CREATION DATE..: Wed, 10 Mar 2010, 18:54
SUPERVISOR.....: Igor
IMPLEMENTOR....: Psergey
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 104 (http://askmonty.org/worklog/?tid=104)
VERSION........: Server-5.3
STATUS.........: Complete
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Wed, 10 Mar 2010, 20:53)=-=-
Title modified.
--- /tmp/wklog.104.old.30184 2010-03-10 20:53:31.000000000 +0000
+++ /tmp/wklog.104.new.30184 2010-03-10 20:53:31.000000000 +0000
@@ -1 +1 @@
-Backport 6.0 subquery code
+Backport subquery optimizations
-=-=(Igor - Wed, 10 Mar 2010, 20:52)=-=-
High Level Description modified.
--- /tmp/wklog.104.old.30172 2010-03-10 20:52:27.000000000 +0000
+++ /tmp/wklog.104.new.30172 2010-03-10 20:52:27.000000000 +0000
@@ -1 +1,2 @@
-Backport 6.0 subquery code to MariaDB 5.3
+The target of this task is backport the code for subquery optimizations from the
+MySQL 6.0 code line to MariaDB 5.3
-=-=(Guest - Wed, 10 Mar 2010, 20:48)=-=-
Title modified.
--- /tmp/wklog.104.old.29904 2010-03-10 20:48:13.000000000 +0000
+++ /tmp/wklog.104.new.29904 2010-03-10 20:48:13.000000000 +0000
@@ -1 +1 @@
-Backport 6.0 subquery code to MariaDB 5.3
+Backport 6.0 subquery code
-=-=(Psergey - Wed, 10 Mar 2010, 18:55)=-=-
Dependency created: 91 now depends on 104
DESCRIPTION:
The target of this task is backport the code for subquery optimizations from the
MySQL 6.0 code line to MariaDB 5.3
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Igor): Backport subquery optimizations (104)
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Backport subquery optimizations
CREATION DATE..: Wed, 10 Mar 2010, 18:54
SUPERVISOR.....: Igor
IMPLEMENTOR....: Psergey
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 104 (http://askmonty.org/worklog/?tid=104)
VERSION........: Server-5.3
STATUS.........: Complete
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Wed, 10 Mar 2010, 20:53)=-=-
Title modified.
--- /tmp/wklog.104.old.30184 2010-03-10 20:53:31.000000000 +0000
+++ /tmp/wklog.104.new.30184 2010-03-10 20:53:31.000000000 +0000
@@ -1 +1 @@
-Backport 6.0 subquery code
+Backport subquery optimizations
-=-=(Igor - Wed, 10 Mar 2010, 20:52)=-=-
High Level Description modified.
--- /tmp/wklog.104.old.30172 2010-03-10 20:52:27.000000000 +0000
+++ /tmp/wklog.104.new.30172 2010-03-10 20:52:27.000000000 +0000
@@ -1 +1,2 @@
-Backport 6.0 subquery code to MariaDB 5.3
+The target of this task is backport the code for subquery optimizations from the
+MySQL 6.0 code line to MariaDB 5.3
-=-=(Guest - Wed, 10 Mar 2010, 20:48)=-=-
Title modified.
--- /tmp/wklog.104.old.29904 2010-03-10 20:48:13.000000000 +0000
+++ /tmp/wklog.104.new.29904 2010-03-10 20:48:13.000000000 +0000
@@ -1 +1 @@
-Backport 6.0 subquery code to MariaDB 5.3
+Backport 6.0 subquery code
-=-=(Psergey - Wed, 10 Mar 2010, 18:55)=-=-
Dependency created: 91 now depends on 104
DESCRIPTION:
The target of this task is backport the code for subquery optimizations from the
MySQL 6.0 code line to MariaDB 5.3
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Igor): Backport 6.0 subquery code (104)
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Backport 6.0 subquery code
CREATION DATE..: Wed, 10 Mar 2010, 18:54
SUPERVISOR.....: Igor
IMPLEMENTOR....: Psergey
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 104 (http://askmonty.org/worklog/?tid=104)
VERSION........: Server-5.3
STATUS.........: Complete
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Wed, 10 Mar 2010, 20:52)=-=-
High Level Description modified.
--- /tmp/wklog.104.old.30172 2010-03-10 20:52:27.000000000 +0000
+++ /tmp/wklog.104.new.30172 2010-03-10 20:52:27.000000000 +0000
@@ -1 +1,2 @@
-Backport 6.0 subquery code to MariaDB 5.3
+The target of this task is backport the code for subquery optimizations from the
+MySQL 6.0 code line to MariaDB 5.3
-=-=(Guest - Wed, 10 Mar 2010, 20:48)=-=-
Title modified.
--- /tmp/wklog.104.old.29904 2010-03-10 20:48:13.000000000 +0000
+++ /tmp/wklog.104.new.29904 2010-03-10 20:48:13.000000000 +0000
@@ -1 +1 @@
-Backport 6.0 subquery code to MariaDB 5.3
+Backport 6.0 subquery code
-=-=(Psergey - Wed, 10 Mar 2010, 18:55)=-=-
Dependency created: 91 now depends on 104
DESCRIPTION:
The target of this task is backport the code for subquery optimizations from the
MySQL 6.0 code line to MariaDB 5.3
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Igor): Backport 6.0 subquery code (104)
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Backport 6.0 subquery code
CREATION DATE..: Wed, 10 Mar 2010, 18:54
SUPERVISOR.....: Igor
IMPLEMENTOR....: Psergey
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 104 (http://askmonty.org/worklog/?tid=104)
VERSION........: Server-5.3
STATUS.........: Complete
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Wed, 10 Mar 2010, 20:52)=-=-
High Level Description modified.
--- /tmp/wklog.104.old.30172 2010-03-10 20:52:27.000000000 +0000
+++ /tmp/wklog.104.new.30172 2010-03-10 20:52:27.000000000 +0000
@@ -1 +1,2 @@
-Backport 6.0 subquery code to MariaDB 5.3
+The target of this task is backport the code for subquery optimizations from the
+MySQL 6.0 code line to MariaDB 5.3
-=-=(Guest - Wed, 10 Mar 2010, 20:48)=-=-
Title modified.
--- /tmp/wklog.104.old.29904 2010-03-10 20:48:13.000000000 +0000
+++ /tmp/wklog.104.new.29904 2010-03-10 20:48:13.000000000 +0000
@@ -1 +1 @@
-Backport 6.0 subquery code to MariaDB 5.3
+Backport 6.0 subquery code
-=-=(Psergey - Wed, 10 Mar 2010, 18:55)=-=-
Dependency created: 91 now depends on 104
DESCRIPTION:
The target of this task is backport the code for subquery optimizations from the
MySQL 6.0 code line to MariaDB 5.3
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Guest): Backport 6.0 subquery code (104)
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Backport 6.0 subquery code
CREATION DATE..: Wed, 10 Mar 2010, 18:54
SUPERVISOR.....: Igor
IMPLEMENTOR....: Psergey
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 104 (http://askmonty.org/worklog/?tid=104)
VERSION........: Server-5.3
STATUS.........: Complete
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Guest - Wed, 10 Mar 2010, 20:48)=-=-
Title modified.
--- /tmp/wklog.104.old.29904 2010-03-10 20:48:13.000000000 +0000
+++ /tmp/wklog.104.new.29904 2010-03-10 20:48:13.000000000 +0000
@@ -1 +1 @@
-Backport 6.0 subquery code to MariaDB 5.3
+Backport 6.0 subquery code
-=-=(Psergey - Wed, 10 Mar 2010, 18:55)=-=-
Dependency created: 91 now depends on 104
DESCRIPTION:
Backport 6.0 subquery code to MariaDB 5.3
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Guest): Backport 6.0 subquery code (104)
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Backport 6.0 subquery code
CREATION DATE..: Wed, 10 Mar 2010, 18:54
SUPERVISOR.....: Igor
IMPLEMENTOR....: Psergey
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 104 (http://askmonty.org/worklog/?tid=104)
VERSION........: Server-5.3
STATUS.........: Complete
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Guest - Wed, 10 Mar 2010, 20:48)=-=-
Title modified.
--- /tmp/wklog.104.old.29904 2010-03-10 20:48:13.000000000 +0000
+++ /tmp/wklog.104.new.29904 2010-03-10 20:48:13.000000000 +0000
@@ -1 +1 @@
-Backport 6.0 subquery code to MariaDB 5.3
+Backport 6.0 subquery code
-=-=(Psergey - Wed, 10 Mar 2010, 18:55)=-=-
Dependency created: 91 now depends on 104
DESCRIPTION:
Backport 6.0 subquery code to MariaDB 5.3
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
10 Mar '10
Hi!
>>>>> "Colin" == Colin Charles <colin(a)askmonty.org> writes:
Colin> Hi!
Colin> http://askmonty.org/wiki/index.php/MariaDB:Download
Colin> Under source tarball, we recommend people to go grab stuff from
Colin> Launchpad
Colin> It will be nice if we told them how to do so
Colin> 1. How to use Launchpad
Colin> 2. What to branch
Colin> 3. Introduction to bazaar
Colin> I'm guessing all this should be part of our cogent manual
Colin> What do you think?
Something like this ?
http://askmonty.org/wiki/index.php/Getting_the_MariaDB_Source_Code
Of course, we can add more stuff and we should link to this page from
our source tarball.
Regards,
Monty
1
0
[Maria-developers] Rev 2818: Increased loop counts of sql-bench tests to get run times around in file:///Users/hakan/work/monty_program/maria/
by Hakan Kuecuekyilmaz 10 Mar '10
10 Mar '10
At file:///Users/hakan/work/monty_program/maria/
------------------------------------------------------------
revno: 2818
revision-id: hakan(a)askmonty.org-20100217201002-gax8y3ts7yf6u50a
parent: monty(a)askmonty.org-20100212142113-wdv50xx19quursaf
committer: Hakan Kuecuekyilmaz <hakan(a)askmonty.org>
branch nick: maria
timestamp: Wed 2010-02-17 21:10:02 +0100
message:
Increased loop counts of sql-bench tests to get run times around
5 minutes on current machines. Tested on a Xeon machine and a new dual core laptop.
=== modified file 'sql-bench/test-ATIS.sh'
--- a/sql-bench/test-ATIS.sh 2009-05-29 13:40:55 +0000
+++ b/sql-bench/test-ATIS.sh 2010-02-17 20:10:02 +0000
@@ -28,7 +28,7 @@
use DBI;
use Benchmark;
-$opt_loop_count=100; # Run selects this many times
+$opt_loop_count=5000; # Run selects this many times
$pwd = cwd(); $pwd = "." if ($pwd eq '');
require "$pwd/bench-init.pl" || die "Can't read Configuration file: $!\n";
=== modified file 'sql-bench/test-alter-table.sh'
--- a/sql-bench/test-alter-table.sh 2009-05-29 13:40:55 +0000
+++ b/sql-bench/test-alter-table.sh 2010-02-17 20:10:02 +0000
@@ -25,7 +25,7 @@
use Benchmark;
$opt_start_field_count=8; # start with this many fields
-$opt_loop_count=100; # How many tests to do
+$opt_loop_count=10000; # How many tests to do
$opt_row_count=1000; # Rows in the table
$opt_field_count=1000; # Add until this many fields.
$opt_time_limit=10*60; # Don't wait more than 10 min for some tests
=== modified file 'sql-bench/test-big-tables.sh'
--- a/sql-bench/test-big-tables.sh 2009-05-29 13:40:55 +0000
+++ b/sql-bench/test-big-tables.sh 2010-02-17 20:10:02 +0000
@@ -25,7 +25,7 @@
use DBI;
use Benchmark;
-$opt_loop_count=1000; # Change this to make test harder/easier
+$opt_loop_count=70000; # Change this to make test harder/easier
$opt_field_count=1000;
$pwd = cwd(); $pwd = "." if ($pwd eq '');
=== modified file 'sql-bench/test-connect.sh'
--- a/sql-bench/test-connect.sh 2010-02-10 21:26:06 +0000
+++ b/sql-bench/test-connect.sh 2010-02-17 20:10:02 +0000
@@ -28,7 +28,7 @@
use DBI;
use Benchmark;
-$opt_loop_count=100000; # Change this to make test harder/easier
+$opt_loop_count=500000; # Change this to make test harder/easier
$str_length=65000; # This is the length of blob strings in PART:5
$max_test=20; # How many times to test if the server is busy
=== modified file 'sql-bench/test-select.sh'
--- a/sql-bench/test-select.sh 2009-05-29 13:40:55 +0000
+++ b/sql-bench/test-select.sh 2010-02-17 20:10:02 +0000
@@ -26,7 +26,7 @@
use Benchmark;
$opt_loop_count=10000;
-$opt_medium_loop_count=1000;
+$opt_medium_loop_count=7000;
$opt_small_loop_count=10;
$opt_regions=6;
$opt_groups=100;
=== modified file 'sql-bench/test-transactions.sh'
--- a/sql-bench/test-transactions.sh 2009-05-29 13:40:55 +0000
+++ b/sql-bench/test-transactions.sh 2010-02-17 20:10:02 +0000
@@ -28,8 +28,8 @@
$opt_groups=27; # Characters are 'A' -> Z
-$opt_loop_count=10000; # Change this to make test harder/easier
-$opt_medium_loop_count=100; # Change this to make test harder/easier
+$opt_loop_count=500000; # Change this to make test harder/easier
+$opt_medium_loop_count=10000; # Change this to make test harder/easier
$pwd = cwd(); $pwd = "." if ($pwd eq '');
require "$pwd/bench-init.pl" || die "Can't read Configuration file: $!\n";
=== modified file 'sql-bench/test-wisconsin.sh'
--- a/sql-bench/test-wisconsin.sh 2009-05-29 13:40:55 +0000
+++ b/sql-bench/test-wisconsin.sh 2010-02-17 20:10:02 +0000
@@ -21,7 +21,7 @@
use DBI;
use Benchmark;
-$opt_loop_count=10;
+$opt_loop_count=5000;
$pwd = cwd(); $pwd = "." if ($pwd eq '');
require "$pwd/bench-init.pl" || die "Can't read Configuration file: $!\n";
2
1
Hi,
Oli Sennhauser is asking how DELAY_KEY_WRITE works in MariaDB.
Is it different or better performing than the MySQL one?
Best regards,
Hakan
Begin forwarded message:
> From: "oli.sennhauser(a)bluewin.ch" <oli.sennhauser(a)bluewin.ch>
> Date: 16 February 2010 11:41:26 CET
> To: hakan(a)askmonty.org
> Subject: DELAY_KEY_WRITE and MariaDB
> Reply-To: oli.sennhauser(a)bluewin.ch
>
> Hi Hakan,
>
> Do you have any info on how DELAY_KEY_WRITE behaves in MariaDB? Is everything the same, or much better/worse? Performance?
>
> Thanks and regards,
> Oli
--
Hakan Küçükyılmaz, QA/Benchmark Engineer, Stuttgart/Germany
Monty Program Ab, http://askmonty.org/
Skype: hakank_ Phone: +49 171 1919839
2
1
[Maria-developers] Updated (by Igor): ICP/MRR backport (67)
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: ICP/MRR backport
CREATION DATE..: Thu, 26 Nov 2009, 15:19
SUPERVISOR.....: Monty
IMPLEMENTOR....: Psergey
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 67 (http://askmonty.org/worklog/?tid=67)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Wed, 10 Mar 2010, 19:14)=-=-
High Level Description modified.
--- /tmp/wklog.67.old.25641 2010-03-10 19:14:45.000000000 +0000
+++ /tmp/wklog.67.new.25641 2010-03-10 19:14:45.000000000 +0000
@@ -1,2 +1,2 @@
-Backport DS-MRR into MariaDB-5.2 codebase, also adding certain extra features to
-make it more usable.
+Backport ICP and DS-MRR into MariaDB-5.2 codebase, also adding certain extra
+features to make it more usable.
-=-=(Guest - Wed, 10 Mar 2010, 19:12)=-=-
Title modified.
--- /tmp/wklog.67.old.25456 2010-03-10 19:12:57.000000000 +0000
+++ /tmp/wklog.67.new.25456 2010-03-10 19:12:57.000000000 +0000
@@ -1 +1 @@
-MRR backport
+ICP/MRR backport
-=-=(Psergey - Sun, 28 Feb 2010, 14:56)=-=-
Dependency created: 91 now depends on 67
-=-=(Psergey - Sun, 28 Feb 2010, 14:54)=-=-
Dependency deleted: 94 no longer depends on 67
-=-=(Psergey - Sun, 28 Feb 2010, 14:09)=-=-
Dependency created: 94 now depends on 67
-=-=(Psergey - Thu, 26 Nov 2009, 20:21)=-=-
High-Level Specification modified.
--- /tmp/wklog.67.old.9329 2009-11-26 20:21:28.000000000 +0200
+++ /tmp/wklog.67.new.9329 2009-11-26 20:21:28.000000000 +0200
@@ -65,17 +65,19 @@
2.5 Make MRR code more of a module
----------------------------------
-Some code in handler.cc can be moved to separate file.
-But changes in opt_range.cc can't.
-TODO: Sort out how much we really can do here. Initial guess is not much as the
-code consists of:
+It is not possible to make MRR to be a totally separate module, as its code
+consists of :
- Default MRR implementation in handler.cc
- Changes in opt_range.cc to use MRR instead of multiple records_in_range()
- calls. These rely on opt_range.cc's internal structures like SEL_ARG trees and
+ calls. These rely on opt_range.cc's internal stuctures like SEL_ARG trees and
so there is not much point in moving them out.
-- DS-MRR implementations which are spread over storage engines.
-and the only modularization we see is to move #1 into a separate file which
-won't achieve much.
+- DS-MRR impelementations which are spread over storage engines.
+
+We'll try to modularize what we can:
+- Move out default MRR implementation from handler.cc
+- Move possible parts out of opt_range.cc into a separate file.
+
+
2.6 Improve the cost model
--------------------------
-=-=(Psergey - Thu, 26 Nov 2009, 19:06)=-=-
High-Level Specification modified.
--- /tmp/wklog.67.old.6449 2009-11-26 19:06:04.000000000 +0200
+++ /tmp/wklog.67.new.6449 2009-11-26 19:06:04.000000000 +0200
@@ -1,4 +1,3 @@
-
<contents>
1. Requirements
2. Required actions
@@ -44,6 +43,7 @@
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=index_condi…
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=mrr
+http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=icp
2.2 Backport DS-MRR code to MariaDB 5.2
---------------------------------------
-=-=(Psergey - Thu, 26 Nov 2009, 18:15)=-=-
High-Level Specification modified.
--- /tmp/wklog.67.old.4161 2009-11-26 18:15:36.000000000 +0200
+++ /tmp/wklog.67.new.4161 2009-11-26 18:15:36.000000000 +0200
@@ -1,3 +1,17 @@
+
+<contents>
+1. Requirements
+2. Required actions
+2.1 Fix DS-MRR/InnoDB bugs
+2.2 Backport DS-MRR code to MariaDB 5.2
+2.3 Introduce control variables
+2.4 Other backport issues
+2.5 Make MRR code more of a module
+2.6 Improve the cost model
+2.7 Let DS-MRR support clustered primary keys
+</contents>
+
+
1. Requirements
===============
@@ -63,4 +77,28 @@
and the only modularization we see is to move #1 into a separate file which
won't achieve much.
+2.6 Improve the cost model
+--------------------------
+At the moment DS-MRR cost formula re-uses non-MRR scan costs, which uses
+records_in_range() calls, followed by index_only_read_time() or read_time()
+calls to produce the estimate for read cost.
+
+ We should change this (TODO sort out how exactly)
+
+Note: this means that the query plans will change from MariaDB 5.2.
+
+2.7 Let DS-MRR support clustered primary keys
+---------------------------------------------
+At the moment DS-MRR is not supported for clustered primary keys. It is not
+needed when MRR is used for range access, because range access is done over
+an ordered list of ranges, but it is useful for BKA.
+
+TODO:
+ it's useful for BKA because BKA makes MRR scans over un-orderered
+ non-disjoint lists of ranges. Then we can sort these and do ordered scans.
+ There is still no use for DS-MRR over clustered primary key for range
+ access, where the ranges are disjoint and ordered.
+ How about postponing this item until BKA is backported?
+
+
-=-=(Guest - Thu, 26 Nov 2009, 16:52)=-=-
High-Level Specification modified.
--- /tmp/wklog.67.old.694 2009-11-26 14:52:53.000000000 +0000
+++ /tmp/wklog.67.new.694 2009-11-26 14:52:53.000000000 +0000
@@ -1 +1,66 @@
+1. Requirements
+===============
+
+We need the following:
+
+1. Latest MRR interface support, including extensions to support ICP when
+ using BKA.
+2. Let DS-MRR support clustered primary keys (needed when using BKA).
+3. Remove conditions used for key access from the condition pushed to index
+ (ATM this manifests itself as "Using index condition" appearing where there
+ was no "Using where". TODO: example of this?)
+4. Introduce a separate @@optimizer_switch flag for turning on/out ICP (atm it
+ is switched on/off by @@engine_condition_pushdown)
+5. Introduce a separate @@mrr_buffer_size variable to control MRR buffer size
+ for range+MRR scans. ATM it is controlled by @@read_rnd_size flag and that
+ makes it unobvious for a number of users.
+6. Rename multi_range_read_info_const() to look like it is not a part of MRR
+ interface.
+8. Try to make MRR to be more of a module
+7. Improve MRR's cost model.
+
+2. Required actions
+===================
+
+Roughly in the order in which it will be done:
+
+2.1 Fix DS-MRR/InnoDB bugs
+--------------------------
+We need to fix the bugs listed here:
+
+http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=index_condition_pushdown
+http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=mrr
+
+2.2 Backport DS-MRR code to MariaDB 5.2
+---------------------------------------
+The easiest way seems to be to to manually move the needed code from mysql-6.0
+(or whatever it's called now) to MariaDB.
+
+2.3 Introduce control variables
+-------------------------------
+Act on items #4 and #5 from the requirements. Should be easy as
+@@optimizer_switch is supported in 5.1 codebase.
+
+2.4 Other backport issues
+-------------------------
+* Figure out what to do with NDB/MRR. 5.1 codebase has "old" NDB/MRR
+ implementation. mysql-6.0 (and NDB's branch) have the updated NDB/MRR
+ but merging it into 5.1 can be very labor-intensive.
+ Will it be ok to disable NDB/MRR altogether?
+
+
+2.5 Make MRR code more of a module
+----------------------------------
+Some code in handler.cc can be moved to separate file.
+But changes in opt_range.cc can't.
+TODO: Sort out how much we really can do here. Initial guess is not much as the
+code consists of:
+- Default MRR implementation in handler.cc
+- Changes in opt_range.cc to use MRR instead of multiple records_in_range()
+ calls. These rely on opt_range.cc's internal structures like SEL_ARG trees and
+ so there is not much point in moving them out.
+- DS-MRR implementations which are spread over storage engines.
+and the only modularization we see is to move #1 into a separate file which
+won't achieve much.
+
DESCRIPTION:
Backport ICP and DS-MRR into MariaDB-5.2 codebase, also adding certain extra
features to make it more usable.
HIGH-LEVEL SPECIFICATION:
<contents>
1. Requirements
2. Required actions
2.1 Fix DS-MRR/InnoDB bugs
2.2 Backport DS-MRR code to MariaDB 5.2
2.3 Introduce control variables
2.4 Other backport issues
2.5 Make MRR code more of a module
2.6 Improve the cost model
2.7 Let DS-MRR support clustered primary keys
</contents>
1. Requirements
===============
We need the following:
1. Latest MRR interface support, including extensions to support ICP when
using BKA.
2. Let DS-MRR support clustered primary keys (needed when using BKA).
3. Remove conditions used for key access from the condition pushed to index
(ATM this manifests itself as "Using index condition" appearing where there
was no "Using where". TODO: example of this?)
4. Introduce a separate @@optimizer_switch flag for turning ICP on/off (ATM it
is switched on/off by @@engine_condition_pushdown).
5. Introduce a separate @@mrr_buffer_size variable to control the MRR buffer
size for range+MRR scans. ATM it is controlled by @@read_rnd_buffer_size, which
is non-obvious to a number of users.
6. Rename multi_range_read_info_const() so that it does not look like a part of
the MRR interface.
7. Try to make MRR more of a module.
8. Improve MRR's cost model.
2. Required actions
===================
Roughly in the order in which it will be done:
2.1 Fix DS-MRR/InnoDB bugs
--------------------------
We need to fix the bugs listed here:
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=index_condi…
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=mrr
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=icp
2.2 Backport DS-MRR code to MariaDB 5.2
---------------------------------------
The easiest way seems to be to manually move the needed code from mysql-6.0
(or whatever it's called now) to MariaDB.
2.3 Introduce control variables
-------------------------------
Act on items #4 and #5 from the requirements. Should be easy as
@@optimizer_switch is supported in 5.1 codebase.
2.4 Other backport issues
-------------------------
* Figure out what to do with NDB/MRR. 5.1 codebase has "old" NDB/MRR
implementation. mysql-6.0 (and NDB's branch) have the updated NDB/MRR
but merging it into 5.1 can be very labor-intensive.
Will it be ok to disable NDB/MRR altogether?
2.5 Make MRR code more of a module
----------------------------------
It is not possible to make MRR a totally separate module, as its code
consists of:
- The default MRR implementation in handler.cc
- Changes in opt_range.cc to use MRR instead of multiple records_in_range()
calls. These rely on opt_range.cc's internal structures like SEL_ARG trees, so
there is not much point in moving them out.
- DS-MRR implementations which are spread over storage engines.
We'll try to modularize what we can:
- Move the default MRR implementation out of handler.cc
- Move whatever parts are possible out of opt_range.cc into a separate file.
2.6 Improve the cost model
--------------------------
At the moment the DS-MRR cost formula re-uses the non-MRR scan cost, which uses
records_in_range() calls, followed by index_only_read_time() or read_time()
calls, to produce the read cost estimate.
We should change this (TODO: sort out how exactly).
Note: this means that query plans will change starting from MariaDB 5.2.
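Roughly, the current estimate has the following shape (a sketch of the default,
records_in_range()-based costing described above, for n ranges r_1..r_n; it is
not the formula this task will introduce):
rows = \sum_{i=1}^{n} \mathrm{records\_in\_range}(r_i)
cost \approx \begin{cases}
\mathrm{index\_only\_read\_time}(\mathrm{keyno},\ rows) & \text{index-only scan} \\
\mathrm{read\_time}(\mathrm{keyno},\ n,\ rows) & \text{otherwise}
\end{cases}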
2.7 Let DS-MRR support clustered primary keys
---------------------------------------------
At the moment DS-MRR is not supported for clustered primary keys. It is not
needed when MRR is used for range access, because range access is done over
an ordered list of ranges, but it is useful for BKA.
TODO:
It is useful for BKA because BKA makes MRR scans over un-ordered,
non-disjoint lists of ranges; we can then sort these and do ordered scans.
There is still no use for DS-MRR over a clustered primary key for range
access, where the ranges are disjoint and ordered.
How about postponing this item until BKA is backported?
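To make the BKA case concrete (a hedged sketch; the tables and query are made
up for illustration):
create table orders (id int primary key, customer_id int) engine=innodb;
create table customers (id int primary key, name varchar(64)) engine=innodb;
# With BKA, customer_id values are batched in the order they are read from
# `orders`: un-ordered and possibly repeated, i.e. a non-disjoint list of
# point ranges over customers' clustered primary key. DS-MRR could sort the
# batch and then scan the primary key in order.
select o.id, c.name
from orders o join customers c on c.id = o.customer_id;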
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Igor): ICP/MRR backport (67)
by worklog-noreply@askmonty.org 10 Mar '10
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: ICP/MRR backport
CREATION DATE..: Thu, 26 Nov 2009, 15:19
SUPERVISOR.....: Monty
IMPLEMENTOR....: Psergey
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 67 (http://askmonty.org/worklog/?tid=67)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Igor - Wed, 10 Mar 2010, 19:14)=-=-
High Level Description modified.
--- /tmp/wklog.67.old.25641 2010-03-10 19:14:45.000000000 +0000
+++ /tmp/wklog.67.new.25641 2010-03-10 19:14:45.000000000 +0000
@@ -1,2 +1,2 @@
-Backport DS-MRR into MariaDB-5.2 codebase, also adding certain extra features to
-make it more usable.
+Backport ICP and DS-MRR into MariaDB-5.2 codebase, also adding certain extra
+features to make it more usable.
-=-=(Guest - Wed, 10 Mar 2010, 19:12)=-=-
Title modified.
--- /tmp/wklog.67.old.25456 2010-03-10 19:12:57.000000000 +0000
+++ /tmp/wklog.67.new.25456 2010-03-10 19:12:57.000000000 +0000
@@ -1 +1 @@
-MRR backport
+ICP/MRR backport
-=-=(Psergey - Sun, 28 Feb 2010, 14:56)=-=-
Dependency created: 91 now depends on 67
-=-=(Psergey - Sun, 28 Feb 2010, 14:54)=-=-
Dependency deleted: 94 no longer depends on 67
-=-=(Psergey - Sun, 28 Feb 2010, 14:09)=-=-
Dependency created: 94 now depends on 67
-=-=(Psergey - Thu, 26 Nov 2009, 20:21)=-=-
High-Level Specification modified.
--- /tmp/wklog.67.old.9329 2009-11-26 20:21:28.000000000 +0200
+++ /tmp/wklog.67.new.9329 2009-11-26 20:21:28.000000000 +0200
@@ -65,17 +65,19 @@
2.5 Make MRR code more of a module
----------------------------------
-Some code in handler.cc can be moved to separate file.
-But changes in opt_range.cc can't.
-TODO: Sort out how much we really can do here. Initial guess is not much as the
-code consists of:
+It is not possible to make MRR to be a totally separate module, as its code
+consists of :
- Default MRR implementation in handler.cc
- Changes in opt_range.cc to use MRR instead of multiple records_in_range()
- calls. These rely on opt_range.cc's internal structures like SEL_ARG trees and
+ calls. These rely on opt_range.cc's internal stuctures like SEL_ARG trees and
so there is not much point in moving them out.
-- DS-MRR implementations which are spread over storage engines.
-and the only modularization we see is to move #1 into a separate file which
-won't achieve much.
+- DS-MRR impelementations which are spread over storage engines.
+
+We'll try to modularize what we can:
+- Move out default MRR implementation from handler.cc
+- Move possible parts out of opt_range.cc into a separate file.
+
+
2.6 Improve the cost model
--------------------------
-=-=(Psergey - Thu, 26 Nov 2009, 19:06)=-=-
High-Level Specification modified.
--- /tmp/wklog.67.old.6449 2009-11-26 19:06:04.000000000 +0200
+++ /tmp/wklog.67.new.6449 2009-11-26 19:06:04.000000000 +0200
@@ -1,4 +1,3 @@
-
<contents>
1. Requirements
2. Required actions
@@ -44,6 +43,7 @@
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=index_condi…
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=mrr
+http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=icp
2.2 Backport DS-MRR code to MariaDB 5.2
---------------------------------------
-=-=(Psergey - Thu, 26 Nov 2009, 18:15)=-=-
High-Level Specification modified.
--- /tmp/wklog.67.old.4161 2009-11-26 18:15:36.000000000 +0200
+++ /tmp/wklog.67.new.4161 2009-11-26 18:15:36.000000000 +0200
@@ -1,3 +1,17 @@
+
+<contents>
+1. Requirements
+2. Required actions
+2.1 Fix DS-MRR/InnoDB bugs
+2.2 Backport DS-MRR code to MariaDB 5.2
+2.3 Introduce control variables
+2.4 Other backport issues
+2.5 Make MRR code more of a module
+2.6 Improve the cost model
+2.7 Let DS-MRR support clustered primary keys
+</contents>
+
+
1. Requirements
===============
@@ -63,4 +77,28 @@
and the only modularization we see is to move #1 into a separate file which
won't achieve much.
+2.6 Improve the cost model
+--------------------------
+At the moment DS-MRR cost formula re-uses non-MRR scan costs, which uses
+records_in_range() calls, followed by index_only_read_time() or read_time()
+calls to produce the estimate for read cost.
+
+ We should change this (TODO sort out how exactly)
+
+Note: this means that the query plans will change from MariaDB 5.2.
+
+2.7 Let DS-MRR support clustered primary keys
+---------------------------------------------
+At the moment DS-MRR is not supported for clustered primary keys. It is not
+needed when MRR is used for range access, because range access is done over
+an ordered list of ranges, but it is useful for BKA.
+
+TODO:
+ it's useful for BKA because BKA makes MRR scans over un-orderered
+ non-disjoint lists of ranges. Then we can sort these and do ordered scans.
+ There is still no use for DS-MRR over clustered primary key for range
+ access, where the ranges are disjoint and ordered.
+ How about postponing this item until BKA is backported?
+
+
-=-=(Guest - Thu, 26 Nov 2009, 16:52)=-=-
High-Level Specification modified.
--- /tmp/wklog.67.old.694 2009-11-26 14:52:53.000000000 +0000
+++ /tmp/wklog.67.new.694 2009-11-26 14:52:53.000000000 +0000
@@ -1 +1,66 @@
+1. Requirements
+===============
+
+We need the following:
+
+1. Latest MRR interface support, including extensions to support ICP when
+ using BKA.
+2. Let DS-MRR support clustered primary keys (needed when using BKA).
+3. Remove conditions used for key access from the condition pushed to index
+ (ATM this manifests itself as "Using index condition" appearing where there
+ was no "Using where". TODO: example of this?)
+4. Introduce a separate @@optimizer_switch flag for turning on/out ICP (atm it
+ is switched on/off by @@engine_condition_pushdown)
+5. Introduce a separate @@mrr_buffer_size variable to control MRR buffer size
+ for range+MRR scans. ATM it is controlled by @@read_rnd_size flag and that
+ makes it unobvious for a number of users.
+6. Rename multi_range_read_info_const() to look like it is not a part of MRR
+ interface.
+8. Try to make MRR to be more of a module
+7. Improve MRR's cost model.
+
+2. Required actions
+===================
+
+Roughly in the order in which it will be done:
+
+2.1 Fix DS-MRR/InnoDB bugs
+--------------------------
+We need to fix the bugs listed here:
+
+http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=index_condition_pushdown
+http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=mrr
+
+2.2 Backport DS-MRR code to MariaDB 5.2
+---------------------------------------
+The easiest way seems to be to to manually move the needed code from mysql-6.0
+(or whatever it's called now) to MariaDB.
+
+2.3 Introduce control variables
+-------------------------------
+Act on items #4 and #5 from the requirements. Should be easy as
+@@optimizer_switch is supported in 5.1 codebase.
+
+2.4 Other backport issues
+-------------------------
+* Figure out what to do with NDB/MRR. 5.1 codebase has "old" NDB/MRR
+ implementation. mysql-6.0 (and NDB's branch) have the updated NDB/MRR
+ but merging it into 5.1 can be very labor-intensive.
+ Will it be ok to disable NDB/MRR altogether?
+
+
+2.5 Make MRR code more of a module
+----------------------------------
+Some code in handler.cc can be moved to separate file.
+But changes in opt_range.cc can't.
+TODO: Sort out how much we really can do here. Initial guess is not much as the
+code consists of:
+- Default MRR implementation in handler.cc
+- Changes in opt_range.cc to use MRR instead of multiple records_in_range()
+ calls. These rely on opt_range.cc's internal structures like SEL_ARG trees and
+ so there is not much point in moving them out.
+- DS-MRR implementations which are spread over storage engines.
+and the only modularization we see is to move #1 into a separate file which
+won't achieve much.
+
DESCRIPTION:
Backport ICP and DS-MRR into MariaDB-5.2 codebase, also adding certain extra
features to make it more usable.
HIGH-LEVEL SPECIFICATION:
<contents>
1. Requirements
2. Required actions
2.1 Fix DS-MRR/InnoDB bugs
2.2 Backport DS-MRR code to MariaDB 5.2
2.3 Introduce control variables
2.4 Other backport issues
2.5 Make MRR code more of a module
2.6 Improve the cost model
2.7 Let DS-MRR support clustered primary keys
</contents>
1. Requirements
===============
We need the following:
1. Latest MRR interface support, including extensions to support ICP when
using BKA.
2. Let DS-MRR support clustered primary keys (needed when using BKA).
3. Remove conditions used for key access from the condition pushed to index
(ATM this manifests itself as "Using index condition" appearing where there
was no "Using where". TODO: example of this?)
4. Introduce a separate @@optimizer_switch flag for turning ICP on/off (atm it
is switched on/off by @@engine_condition_pushdown)
5. Introduce a separate @@mrr_buffer_size variable to control MRR buffer size
for range+MRR scans. ATM it is controlled by the @@read_rnd_size flag, which
is non-obvious to a number of users.
6. Rename multi_range_read_info_const() so that it does not look like a part
of the MRR interface.
7. Try to make MRR more of a module.
8. Improve MRR's cost model.
2. Required actions
===================
Roughly in the order in which it will be done:
2.1 Fix DS-MRR/InnoDB bugs
--------------------------
We need to fix the bugs listed here:
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=index_condi…
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=mrr
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=icp
2.2 Backport DS-MRR code to MariaDB 5.2
---------------------------------------
The easiest way seems to be to manually move the needed code from mysql-6.0
(or whatever it's called now) to MariaDB.
2.3 Introduce control variables
-------------------------------
Act on items #4 and #5 from the requirements. Should be easy as
@@optimizer_switch is supported in 5.1 codebase.
2.4 Other backport issues
-------------------------
* Figure out what to do with NDB/MRR. The 5.1 codebase has the "old" NDB/MRR
implementation; mysql-6.0 (and NDB's branch) has the updated NDB/MRR, but
merging it into 5.1 can be very labor-intensive.
Will it be ok to disable NDB/MRR altogether?
2.5 Make MRR code more of a module
----------------------------------
It is not possible to make MRR a totally separate module, as its code
consists of:
- Default MRR implementation in handler.cc
- Changes in opt_range.cc to use MRR instead of multiple records_in_range()
calls. These rely on opt_range.cc's internal structures like SEL_ARG trees and
so there is not much point in moving them out.
- DS-MRR implementations which are spread over storage engines.
We'll try to modularize what we can:
- Move out default MRR implementation from handler.cc
- Move possible parts out of opt_range.cc into a separate file.
2.6 Improve the cost model
--------------------------
At the moment the DS-MRR cost formula re-uses the non-MRR scan cost, which uses
records_in_range() calls, followed by index_only_read_time() or read_time()
calls, to produce the read cost estimate.
We should change this (TODO: sort out how exactly).
Note: this means that query plans will change starting from MariaDB 5.2.
2.7 Let DS-MRR support clustered primary keys
---------------------------------------------
At the moment DS-MRR is not supported for clustered primary keys. It is not
needed when MRR is used for range access, because range access is done over
an ordered list of ranges, but it is useful for BKA.
TODO:
It is useful for BKA because BKA makes MRR scans over un-ordered,
non-disjoint lists of ranges; we can then sort these and do ordered scans.
There is still no use for DS-MRR over a clustered primary key for range
access, where the ranges are disjoint and ordered.
How about postponing this item until BKA is backported?
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Guest): ICP/MRR backport (67)
by worklog-noreply@askmonty.org 10 Mar '10
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: ICP/MRR backport
CREATION DATE..: Thu, 26 Nov 2009, 15:19
SUPERVISOR.....: Monty
IMPLEMENTOR....: Psergey
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 67 (http://askmonty.org/worklog/?tid=67)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Guest - Wed, 10 Mar 2010, 19:12)=-=-
Title modified.
--- /tmp/wklog.67.old.25456 2010-03-10 19:12:57.000000000 +0000
+++ /tmp/wklog.67.new.25456 2010-03-10 19:12:57.000000000 +0000
@@ -1 +1 @@
-MRR backport
+ICP/MRR backport
-=-=(Psergey - Sun, 28 Feb 2010, 14:56)=-=-
Dependency created: 91 now depends on 67
-=-=(Psergey - Sun, 28 Feb 2010, 14:54)=-=-
Dependency deleted: 94 no longer depends on 67
-=-=(Psergey - Sun, 28 Feb 2010, 14:09)=-=-
Dependency created: 94 now depends on 67
-=-=(Psergey - Thu, 26 Nov 2009, 20:21)=-=-
High-Level Specification modified.
--- /tmp/wklog.67.old.9329 2009-11-26 20:21:28.000000000 +0200
+++ /tmp/wklog.67.new.9329 2009-11-26 20:21:28.000000000 +0200
@@ -65,17 +65,19 @@
2.5 Make MRR code more of a module
----------------------------------
-Some code in handler.cc can be moved to separate file.
-But changes in opt_range.cc can't.
-TODO: Sort out how much we really can do here. Initial guess is not much as the
-code consists of:
+It is not possible to make MRR to be a totally separate module, as its code
+consists of :
- Default MRR implementation in handler.cc
- Changes in opt_range.cc to use MRR instead of multiple records_in_range()
- calls. These rely on opt_range.cc's internal structures like SEL_ARG trees and
+ calls. These rely on opt_range.cc's internal stuctures like SEL_ARG trees and
so there is not much point in moving them out.
-- DS-MRR implementations which are spread over storage engines.
-and the only modularization we see is to move #1 into a separate file which
-won't achieve much.
+- DS-MRR impelementations which are spread over storage engines.
+
+We'll try to modularize what we can:
+- Move out default MRR implementation from handler.cc
+- Move possible parts out of opt_range.cc into a separate file.
+
+
2.6 Improve the cost model
--------------------------
-=-=(Psergey - Thu, 26 Nov 2009, 19:06)=-=-
High-Level Specification modified.
--- /tmp/wklog.67.old.6449 2009-11-26 19:06:04.000000000 +0200
+++ /tmp/wklog.67.new.6449 2009-11-26 19:06:04.000000000 +0200
@@ -1,4 +1,3 @@
-
<contents>
1. Requirements
2. Required actions
@@ -44,6 +43,7 @@
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=index_condi…
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=mrr
+http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=icp
2.2 Backport DS-MRR code to MariaDB 5.2
---------------------------------------
-=-=(Psergey - Thu, 26 Nov 2009, 18:15)=-=-
High-Level Specification modified.
--- /tmp/wklog.67.old.4161 2009-11-26 18:15:36.000000000 +0200
+++ /tmp/wklog.67.new.4161 2009-11-26 18:15:36.000000000 +0200
@@ -1,3 +1,17 @@
+
+<contents>
+1. Requirements
+2. Required actions
+2.1 Fix DS-MRR/InnoDB bugs
+2.2 Backport DS-MRR code to MariaDB 5.2
+2.3 Introduce control variables
+2.4 Other backport issues
+2.5 Make MRR code more of a module
+2.6 Improve the cost model
+2.7 Let DS-MRR support clustered primary keys
+</contents>
+
+
1. Requirements
===============
@@ -63,4 +77,28 @@
and the only modularization we see is to move #1 into a separate file which
won't achieve much.
+2.6 Improve the cost model
+--------------------------
+At the moment DS-MRR cost formula re-uses non-MRR scan costs, which uses
+records_in_range() calls, followed by index_only_read_time() or read_time()
+calls to produce the estimate for read cost.
+
+ We should change this (TODO sort out how exactly)
+
+Note: this means that the query plans will change from MariaDB 5.2.
+
+2.7 Let DS-MRR support clustered primary keys
+---------------------------------------------
+At the moment DS-MRR is not supported for clustered primary keys. It is not
+needed when MRR is used for range access, because range access is done over
+an ordered list of ranges, but it is useful for BKA.
+
+TODO:
+ it's useful for BKA because BKA makes MRR scans over un-orderered
+ non-disjoint lists of ranges. Then we can sort these and do ordered scans.
+ There is still no use for DS-MRR over clustered primary key for range
+ access, where the ranges are disjoint and ordered.
+ How about postponing this item until BKA is backported?
+
+
-=-=(Guest - Thu, 26 Nov 2009, 16:52)=-=-
High-Level Specification modified.
--- /tmp/wklog.67.old.694 2009-11-26 14:52:53.000000000 +0000
+++ /tmp/wklog.67.new.694 2009-11-26 14:52:53.000000000 +0000
@@ -1 +1,66 @@
+1. Requirements
+===============
+
+We need the following:
+
+1. Latest MRR interface support, including extensions to support ICP when
+ using BKA.
+2. Let DS-MRR support clustered primary keys (needed when using BKA).
+3. Remove conditions used for key access from the condition pushed to index
+ (ATM this manifests itself as "Using index condition" appearing where there
+ was no "Using where". TODO: example of this?)
+4. Introduce a separate @@optimizer_switch flag for turning on/out ICP (atm it
+ is switched on/off by @@engine_condition_pushdown)
+5. Introduce a separate @@mrr_buffer_size variable to control MRR buffer size
+ for range+MRR scans. ATM it is controlled by @@read_rnd_size flag and that
+ makes it unobvious for a number of users.
+6. Rename multi_range_read_info_const() to look like it is not a part of MRR
+ interface.
+8. Try to make MRR to be more of a module
+7. Improve MRR's cost model.
+
+2. Required actions
+===================
+
+Roughly in the order in which it will be done:
+
+2.1 Fix DS-MRR/InnoDB bugs
+--------------------------
+We need to fix the bugs listed here:
+
+http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=index_condition_pushdown
+http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=mrr
+
+2.2 Backport DS-MRR code to MariaDB 5.2
+---------------------------------------
+The easiest way seems to be to to manually move the needed code from mysql-6.0
+(or whatever it's called now) to MariaDB.
+
+2.3 Introduce control variables
+-------------------------------
+Act on items #4 and #5 from the requirements. Should be easy as
+@@optimizer_switch is supported in 5.1 codebase.
+
+2.4 Other backport issues
+-------------------------
+* Figure out what to do with NDB/MRR. 5.1 codebase has "old" NDB/MRR
+ implementation. mysql-6.0 (and NDB's branch) have the updated NDB/MRR
+ but merging it into 5.1 can be very labor-intensive.
+ Will it be ok to disable NDB/MRR altogether?
+
+
+2.5 Make MRR code more of a module
+----------------------------------
+Some code in handler.cc can be moved to separate file.
+But changes in opt_range.cc can't.
+TODO: Sort out how much we really can do here. Initial guess is not much as the
+code consists of:
+- Default MRR implementation in handler.cc
+- Changes in opt_range.cc to use MRR instead of multiple records_in_range()
+ calls. These rely on opt_range.cc's internal structures like SEL_ARG trees and
+ so there is not much point in moving them out.
+- DS-MRR implementations which are spread over storage engines.
+and the only modularization we see is to move #1 into a separate file which
+won't achieve much.
+
DESCRIPTION:
Backport DS-MRR into MariaDB-5.2 codebase, also adding certain extra features to
make it more usable.
HIGH-LEVEL SPECIFICATION:
<contents>
1. Requirements
2. Required actions
2.1 Fix DS-MRR/InnoDB bugs
2.2 Backport DS-MRR code to MariaDB 5.2
2.3 Introduce control variables
2.4 Other backport issues
2.5 Make MRR code more of a module
2.6 Improve the cost model
2.7 Let DS-MRR support clustered primary keys
</contents>
1. Requirements
===============
We need the following:
1. Latest MRR interface support, including extensions to support ICP when
using BKA.
2. Let DS-MRR support clustered primary keys (needed when using BKA).
3. Remove conditions used for key access from the condition pushed to index
(ATM this manifests itself as "Using index condition" appearing where there
was no "Using where". TODO: example of this?)
4. Introduce a separate @@optimizer_switch flag for turning ICP on/off (atm it
is switched on/off by @@engine_condition_pushdown)
5. Introduce a separate @@mrr_buffer_size variable to control MRR buffer size
for range+MRR scans. ATM it is controlled by the @@read_rnd_size flag, which
is non-obvious to a number of users.
6. Rename multi_range_read_info_const() so that it does not look like a part
of the MRR interface.
7. Try to make MRR more of a module.
8. Improve MRR's cost model.
2. Required actions
===================
Roughly in the order in which it will be done:
2.1 Fix DS-MRR/InnoDB bugs
--------------------------
We need to fix the bugs listed here:
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=index_condi…
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=mrr
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=icp
2.2 Backport DS-MRR code to MariaDB 5.2
---------------------------------------
The easiest way seems to be to manually move the needed code from mysql-6.0
(or whatever it's called now) to MariaDB.
2.3 Introduce control variables
-------------------------------
Act on items #4 and #5 from the requirements. Should be easy as
@@optimizer_switch is supported in 5.1 codebase.
2.4 Other backport issues
-------------------------
* Figure out what to do with NDB/MRR. The 5.1 codebase has the "old" NDB/MRR
implementation; mysql-6.0 (and NDB's branch) has the updated NDB/MRR, but
merging it into 5.1 can be very labor-intensive.
Will it be ok to disable NDB/MRR altogether?
2.5 Make MRR code more of a module
----------------------------------
It is not possible to make MRR a totally separate module, as its code
consists of:
- Default MRR implementation in handler.cc
- Changes in opt_range.cc to use MRR instead of multiple records_in_range()
calls. These rely on opt_range.cc's internal structures like SEL_ARG trees and
so there is not much point in moving them out.
- DS-MRR implementations which are spread over storage engines.
We'll try to modularize what we can:
- Move out default MRR implementation from handler.cc
- Move possible parts out of opt_range.cc into a separate file.
2.6 Improve the cost model
--------------------------
At the moment the DS-MRR cost formula re-uses the non-MRR scan cost, which uses
records_in_range() calls, followed by index_only_read_time() or read_time()
calls, to produce the read cost estimate.
We should change this (TODO: sort out how exactly).
Note: this means that query plans will change starting from MariaDB 5.2.
2.7 Let DS-MRR support clustered primary keys
---------------------------------------------
At the moment DS-MRR is not supported for clustered primary keys. It is not
needed when MRR is used for range access, because range access is done over
an ordered list of ranges, but it is useful for BKA.
TODO:
It is useful for BKA because BKA makes MRR scans over un-ordered,
non-disjoint lists of ranges; we can then sort these and do ordered scans.
There is still no use for DS-MRR over a clustered primary key for range
access, where the ranges are disjoint and ordered.
How about postponing this item until BKA is backported?
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Guest): ICP/MRR backport (67)
by worklog-noreply@askmonty.org 10 Mar '10
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: ICP/MRR backport
CREATION DATE..: Thu, 26 Nov 2009, 15:19
SUPERVISOR.....: Monty
IMPLEMENTOR....: Psergey
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 67 (http://askmonty.org/worklog/?tid=67)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Guest - Wed, 10 Mar 2010, 19:12)=-=-
Title modified.
--- /tmp/wklog.67.old.25456 2010-03-10 19:12:57.000000000 +0000
+++ /tmp/wklog.67.new.25456 2010-03-10 19:12:57.000000000 +0000
@@ -1 +1 @@
-MRR backport
+ICP/MRR backport
-=-=(Psergey - Sun, 28 Feb 2010, 14:56)=-=-
Dependency created: 91 now depends on 67
-=-=(Psergey - Sun, 28 Feb 2010, 14:54)=-=-
Dependency deleted: 94 no longer depends on 67
-=-=(Psergey - Sun, 28 Feb 2010, 14:09)=-=-
Dependency created: 94 now depends on 67
-=-=(Psergey - Thu, 26 Nov 2009, 20:21)=-=-
High-Level Specification modified.
--- /tmp/wklog.67.old.9329 2009-11-26 20:21:28.000000000 +0200
+++ /tmp/wklog.67.new.9329 2009-11-26 20:21:28.000000000 +0200
@@ -65,17 +65,19 @@
2.5 Make MRR code more of a module
----------------------------------
-Some code in handler.cc can be moved to separate file.
-But changes in opt_range.cc can't.
-TODO: Sort out how much we really can do here. Initial guess is not much as the
-code consists of:
+It is not possible to make MRR to be a totally separate module, as its code
+consists of :
- Default MRR implementation in handler.cc
- Changes in opt_range.cc to use MRR instead of multiple records_in_range()
- calls. These rely on opt_range.cc's internal structures like SEL_ARG trees and
+ calls. These rely on opt_range.cc's internal stuctures like SEL_ARG trees and
so there is not much point in moving them out.
-- DS-MRR implementations which are spread over storage engines.
-and the only modularization we see is to move #1 into a separate file which
-won't achieve much.
+- DS-MRR impelementations which are spread over storage engines.
+
+We'll try to modularize what we can:
+- Move out default MRR implementation from handler.cc
+- Move possible parts out of opt_range.cc into a separate file.
+
+
2.6 Improve the cost model
--------------------------
-=-=(Psergey - Thu, 26 Nov 2009, 19:06)=-=-
High-Level Specification modified.
--- /tmp/wklog.67.old.6449 2009-11-26 19:06:04.000000000 +0200
+++ /tmp/wklog.67.new.6449 2009-11-26 19:06:04.000000000 +0200
@@ -1,4 +1,3 @@
-
<contents>
1. Requirements
2. Required actions
@@ -44,6 +43,7 @@
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=index_condi…
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=mrr
+http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=icp
2.2 Backport DS-MRR code to MariaDB 5.2
---------------------------------------
-=-=(Psergey - Thu, 26 Nov 2009, 18:15)=-=-
High-Level Specification modified.
--- /tmp/wklog.67.old.4161 2009-11-26 18:15:36.000000000 +0200
+++ /tmp/wklog.67.new.4161 2009-11-26 18:15:36.000000000 +0200
@@ -1,3 +1,17 @@
+
+<contents>
+1. Requirements
+2. Required actions
+2.1 Fix DS-MRR/InnoDB bugs
+2.2 Backport DS-MRR code to MariaDB 5.2
+2.3 Introduce control variables
+2.4 Other backport issues
+2.5 Make MRR code more of a module
+2.6 Improve the cost model
+2.7 Let DS-MRR support clustered primary keys
+</contents>
+
+
1. Requirements
===============
@@ -63,4 +77,28 @@
and the only modularization we see is to move #1 into a separate file which
won't achieve much.
+2.6 Improve the cost model
+--------------------------
+At the moment DS-MRR cost formula re-uses non-MRR scan costs, which uses
+records_in_range() calls, followed by index_only_read_time() or read_time()
+calls to produce the estimate for read cost.
+
+ We should change this (TODO sort out how exactly)
+
+Note: this means that the query plans will change from MariaDB 5.2.
+
+2.7 Let DS-MRR support clustered primary keys
+---------------------------------------------
+At the moment DS-MRR is not supported for clustered primary keys. It is not
+needed when MRR is used for range access, because range access is done over
+an ordered list of ranges, but it is useful for BKA.
+
+TODO:
+ it's useful for BKA because BKA makes MRR scans over un-orderered
+ non-disjoint lists of ranges. Then we can sort these and do ordered scans.
+ There is still no use for DS-MRR over clustered primary key for range
+ access, where the ranges are disjoint and ordered.
+ How about postponing this item until BKA is backported?
+
+
-=-=(Guest - Thu, 26 Nov 2009, 16:52)=-=-
High-Level Specification modified.
--- /tmp/wklog.67.old.694 2009-11-26 14:52:53.000000000 +0000
+++ /tmp/wklog.67.new.694 2009-11-26 14:52:53.000000000 +0000
@@ -1 +1,66 @@
+1. Requirements
+===============
+
+We need the following:
+
+1. Latest MRR interface support, including extensions to support ICP when
+ using BKA.
+2. Let DS-MRR support clustered primary keys (needed when using BKA).
+3. Remove conditions used for key access from the condition pushed to index
+ (ATM this manifests itself as "Using index condition" appearing where there
+ was no "Using where". TODO: example of this?)
+4. Introduce a separate @@optimizer_switch flag for turning on/out ICP (atm it
+ is switched on/off by @@engine_condition_pushdown)
+5. Introduce a separate @@mrr_buffer_size variable to control MRR buffer size
+ for range+MRR scans. ATM it is controlled by @@read_rnd_size flag and that
+ makes it unobvious for a number of users.
+6. Rename multi_range_read_info_const() to look like it is not a part of MRR
+ interface.
+8. Try to make MRR to be more of a module
+7. Improve MRR's cost model.
+
+2. Required actions
+===================
+
+Roughly in the order in which it will be done:
+
+2.1 Fix DS-MRR/InnoDB bugs
+--------------------------
+We need to fix the bugs listed here:
+
+http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=index_condition_pushdown
+http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=mrr
+
+2.2 Backport DS-MRR code to MariaDB 5.2
+---------------------------------------
+The easiest way seems to be to to manually move the needed code from mysql-6.0
+(or whatever it's called now) to MariaDB.
+
+2.3 Introduce control variables
+-------------------------------
+Act on items #4 and #5 from the requirements. Should be easy as
+@@optimizer_switch is supported in 5.1 codebase.
+
+2.4 Other backport issues
+-------------------------
+* Figure out what to do with NDB/MRR. 5.1 codebase has "old" NDB/MRR
+ implementation. mysql-6.0 (and NDB's branch) have the updated NDB/MRR
+ but merging it into 5.1 can be very labor-intensive.
+ Will it be ok to disable NDB/MRR altogether?
+
+
+2.5 Make MRR code more of a module
+----------------------------------
+Some code in handler.cc can be moved to separate file.
+But changes in opt_range.cc can't.
+TODO: Sort out how much we really can do here. Initial guess is not much as the
+code consists of:
+- Default MRR implementation in handler.cc
+- Changes in opt_range.cc to use MRR instead of multiple records_in_range()
+ calls. These rely on opt_range.cc's internal structures like SEL_ARG trees and
+ so there is not much point in moving them out.
+- DS-MRR implementations which are spread over storage engines.
+and the only modularization we see is to move #1 into a separate file which
+won't achieve much.
+
DESCRIPTION:
Backport DS-MRR into MariaDB-5.2 codebase, also adding certain extra features to
make it more usable.
HIGH-LEVEL SPECIFICATION:
<contents>
1. Requirements
2. Required actions
2.1 Fix DS-MRR/InnoDB bugs
2.2 Backport DS-MRR code to MariaDB 5.2
2.3 Introduce control variables
2.4 Other backport issues
2.5 Make MRR code more of a module
2.6 Improve the cost model
2.7 Let DS-MRR support clustered primary keys
</contents>
1. Requirements
===============
We need the following:
1. Latest MRR interface support, including extensions to support ICP when
using BKA.
2. Let DS-MRR support clustered primary keys (needed when using BKA).
3. Remove conditions used for key access from the condition pushed to index
(ATM this manifests itself as "Using index condition" appearing where there
was no "Using where". TODO: example of this?)
4. Introduce a separate @@optimizer_switch flag for turning ICP on/off (atm it
is switched on/off by @@engine_condition_pushdown)
5. Introduce a separate @@mrr_buffer_size variable to control MRR buffer size
for range+MRR scans. ATM it is controlled by the @@read_rnd_size flag, which
is non-obvious to a number of users.
6. Rename multi_range_read_info_const() so that it does not look like a part
of the MRR interface.
7. Try to make MRR more of a module.
8. Improve MRR's cost model.
2. Required actions
===================
Roughly in the order in which it will be done:
2.1 Fix DS-MRR/InnoDB bugs
--------------------------
We need to fix the bugs listed here:
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=index_condi…
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=mrr
http://bugs.mysql.com/search.php?cmd=display&status=Active&tags=icp
2.2 Backport DS-MRR code to MariaDB 5.2
---------------------------------------
The easiest way seems to be to manually move the needed code from mysql-6.0
(or whatever it's called now) to MariaDB.
2.3 Introduce control variables
-------------------------------
Act on items #4 and #5 from the requirements. Should be easy as
@@optimizer_switch is supported in 5.1 codebase.
2.4 Other backport issues
-------------------------
* Figure out what to do with NDB/MRR. The 5.1 codebase has the "old" NDB/MRR
implementation; mysql-6.0 (and NDB's branch) has the updated NDB/MRR, but
merging it into 5.1 can be very labor-intensive.
Will it be ok to disable NDB/MRR altogether?
2.5 Make MRR code more of a module
----------------------------------
It is not possible to make MRR a totally separate module, as its code
consists of:
- Default MRR implementation in handler.cc
- Changes in opt_range.cc to use MRR instead of multiple records_in_range()
calls. These rely on opt_range.cc's internal structures like SEL_ARG trees and
so there is not much point in moving them out.
- DS-MRR implementations which are spread over storage engines.
We'll try to modularize what we can:
- Move out default MRR implementation from handler.cc
- Move possible parts out of opt_range.cc into a separate file.
2.6 Improve the cost model
--------------------------
At the moment the DS-MRR cost formula re-uses the non-MRR scan cost, which uses
records_in_range() calls, followed by index_only_read_time() or read_time()
calls, to produce the read cost estimate.
We should change this (TODO: sort out how exactly).
Note: this means that query plans will change starting from MariaDB 5.2.
2.7 Let DS-MRR support clustered primary keys
---------------------------------------------
At the moment DS-MRR is not supported for clustered primary keys. It is not
needed when MRR is used for range access, because range access is done over
an ordered list of ranges, but it is useful for BKA.
TODO:
It is useful for BKA because BKA makes MRR scans over un-ordered,
non-disjoint lists of ranges; we can then sort these and do ordered scans.
There is still no use for DS-MRR over a clustered primary key for range
access, where the ranges are disjoint and ordered.
How about postponing this item until BKA is backported?
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Guest): BKA backport (105)
by worklog-noreply@askmonty.org 10 Mar '10
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: BKA backport
CREATION DATE..: Wed, 10 Mar 2010, 19:07
SUPERVISOR.....: Monty
IMPLEMENTOR....: Igor
COPIES TO......: Igor, Monty, Psergey, Sergei, Timour
CATEGORY.......: Server-Sprint
TASK ID........: 105 (http://askmonty.org/worklog/?tid=105)
VERSION........: Server-5.3
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 32 (hours remain)
ORIG. ESTIMATE.: 32
PROGRESS NOTES:
-=-=(Guest - Wed, 10 Mar 2010, 19:10)=-=-
Title modified.
--- /tmp/wklog.105.old.25340 2010-03-10 19:10:27.000000000 +0000
+++ /tmp/wklog.105.new.25340 2010-03-10 19:10:27.000000000 +0000
@@ -1 +1 @@
-Backport BKA
+BKA backport
-=-=(Guest - Wed, 10 Mar 2010, 19:09)=-=-
Dependency created: 91 now depends on 105
DESCRIPTION:
The goal of this task is to back-port the optimizations that use join buffers
from the MySQL 6.0 code line into MariaDB 5.3 code.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Guest): BKA backport (105)
by worklog-noreply@askmonty.org 10 Mar '10
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: BKA backport
CREATION DATE..: Wed, 10 Mar 2010, 19:07
SUPERVISOR.....: Monty
IMPLEMENTOR....: Igor
COPIES TO......: Igor, Monty, Psergey, Sergei, Timour
CATEGORY.......: Server-Sprint
TASK ID........: 105 (http://askmonty.org/worklog/?tid=105)
VERSION........: Server-5.3
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 32 (hours remain)
ORIG. ESTIMATE.: 32
PROGRESS NOTES:
-=-=(Guest - Wed, 10 Mar 2010, 19:10)=-=-
Title modified.
--- /tmp/wklog.105.old.25340 2010-03-10 19:10:27.000000000 +0000
+++ /tmp/wklog.105.new.25340 2010-03-10 19:10:27.000000000 +0000
@@ -1 +1 @@
-Backport BKA
+BKA backport
-=-=(Guest - Wed, 10 Mar 2010, 19:09)=-=-
Dependency created: 91 now depends on 105
DESCRIPTION:
The goal of this task is to back-port the optimizations that use join buffers
from the MySQL 6.0 code line into MariaDB 5.3 code.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Guest): BKA backport (105)
by worklog-noreply@askmonty.org 10 Mar '10
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: BKA backport
CREATION DATE..: Wed, 10 Mar 2010, 19:07
SUPERVISOR.....: Monty
IMPLEMENTOR....: Igor
COPIES TO......: Igor, Monty, Psergey, Sergei, Timour
CATEGORY.......: Server-Sprint
TASK ID........: 105 (http://askmonty.org/worklog/?tid=105)
VERSION........: Server-5.3
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 32 (hours remain)
ORIG. ESTIMATE.: 32
PROGRESS NOTES:
-=-=(Guest - Wed, 10 Mar 2010, 19:10)=-=-
Title modified.
--- /tmp/wklog.105.old.25340 2010-03-10 19:10:27.000000000 +0000
+++ /tmp/wklog.105.new.25340 2010-03-10 19:10:27.000000000 +0000
@@ -1 +1 @@
-Backport BKA
+BKA backport
-=-=(Guest - Wed, 10 Mar 2010, 19:09)=-=-
Dependency created: 91 now depends on 105
DESCRIPTION:
The goal of this task is to back-port the optimizations that use join buffers
from the MySQL 6.0 code line into MariaDB 5.3 code.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Guest): BKA backport (105)
by worklog-noreply@askmonty.org 10 Mar '10
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: BKA backport
CREATION DATE..: Wed, 10 Mar 2010, 19:07
SUPERVISOR.....: Monty
IMPLEMENTOR....: Igor
COPIES TO......: Igor, Monty, Psergey, Sergei, Timour
CATEGORY.......: Server-Sprint
TASK ID........: 105 (http://askmonty.org/worklog/?tid=105)
VERSION........: Server-5.3
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 32 (hours remain)
ORIG. ESTIMATE.: 32
PROGRESS NOTES:
-=-=(Guest - Wed, 10 Mar 2010, 19:10)=-=-
Title modified.
--- /tmp/wklog.105.old.25340 2010-03-10 19:10:27.000000000 +0000
+++ /tmp/wklog.105.new.25340 2010-03-10 19:10:27.000000000 +0000
@@ -1 +1 @@
-Backport BKA
+BKA backport
-=-=(Guest - Wed, 10 Mar 2010, 19:09)=-=-
Dependency created: 91 now depends on 105
DESCRIPTION:
The goal of this task is to back-port the optimizations that use join buffers
from the MySQL 6.0 code line into MariaDB 5.3 code.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] Updated (by Guest): BKA backport (105)
by worklog-noreply@askmonty.org 10 Mar '10
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: BKA backport
CREATION DATE..: Wed, 10 Mar 2010, 19:07
SUPERVISOR.....: Monty
IMPLEMENTOR....: Igor
COPIES TO......: Igor, Monty, Psergey, Sergei, Timour
CATEGORY.......: Server-Sprint
TASK ID........: 105 (http://askmonty.org/worklog/?tid=105)
VERSION........: Server-5.3
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 32 (hours remain)
ORIG. ESTIMATE.: 32
PROGRESS NOTES:
-=-=(Guest - Wed, 10 Mar 2010, 19:10)=-=-
Title modified.
--- /tmp/wklog.105.old.25340 2010-03-10 19:10:27.000000000 +0000
+++ /tmp/wklog.105.new.25340 2010-03-10 19:10:27.000000000 +0000
@@ -1 +1 @@
-Backport BKA
+BKA backport
-=-=(Guest - Wed, 10 Mar 2010, 19:09)=-=-
Dependency created: 91 now depends on 105
DESCRIPTION:
The goal of this task is to back-port the optimizations that use join buffers
from the MySQL 6.0 code line into MariaDB 5.3 code.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Backport BKA
CREATION DATE..: Wed, 10 Mar 2010, 19:07
SUPERVISOR.....: Monty
IMPLEMENTOR....: Igor
COPIES TO......: Igor, Monty, Psergey, Sergei, Timour
CATEGORY.......: Server-Sprint
TASK ID........: 105 (http://askmonty.org/worklog/?tid=105)
VERSION........: Server-5.3
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 32 (hours remain)
ORIG. ESTIMATE.: 32
PROGRESS NOTES:
DESCRIPTION:
The goal of this task is to back-port the optimizations that use join buffers
from the MySQL 6.0 code line into MariaDB 5.3 code.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Backport BKA
CREATION DATE..: Wed, 10 Mar 2010, 19:07
SUPERVISOR.....: Monty
IMPLEMENTOR....: Igor
COPIES TO......: Igor, Monty, Psergey, Sergei, Timour
CATEGORY.......: Server-Sprint
TASK ID........: 105 (http://askmonty.org/worklog/?tid=105)
VERSION........: Server-5.3
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 32 (hours remain)
ORIG. ESTIMATE.: 32
PROGRESS NOTES:
DESCRIPTION:
The goal of this task is to back-port the optimizations that use join buffers
from the MySQL 6.0 code line into MariaDB 5.3 code.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Backport BKA
CREATION DATE..: Wed, 10 Mar 2010, 19:07
SUPERVISOR.....: Monty
IMPLEMENTOR....: Igor
COPIES TO......: Igor, Monty, Psergey, Sergei, Timour
CATEGORY.......: Server-Sprint
TASK ID........: 105 (http://askmonty.org/worklog/?tid=105)
VERSION........: Server-5.3
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 32 (hours remain)
ORIG. ESTIMATE.: 32
PROGRESS NOTES:
DESCRIPTION:
The goal of this task is to back-port the optimizations that use join buffers
from the MySQL 6.0 code line into MariaDB 5.3 code.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Backport BKA
CREATION DATE..: Wed, 10 Mar 2010, 19:07
SUPERVISOR.....: Monty
IMPLEMENTOR....: Igor
COPIES TO......: Igor, Monty, Psergey, Sergei, Timour
CATEGORY.......: Server-Sprint
TASK ID........: 105 (http://askmonty.org/worklog/?tid=105)
VERSION........: Server-5.3
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 32 (hours remain)
ORIG. ESTIMATE.: 32
PROGRESS NOTES:
DESCRIPTION:
The goal of this task is to back-port the optimizations that use join buffers
from the MySQL 6.0 code line into MariaDB 5.3 code.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Backport BKA
CREATION DATE..: Wed, 10 Mar 2010, 19:07
SUPERVISOR.....: Monty
IMPLEMENTOR....: Igor
COPIES TO......: Igor, Monty, Psergey, Sergei, Timour
CATEGORY.......: Server-Sprint
TASK ID........: 105 (http://askmonty.org/worklog/?tid=105)
VERSION........: Server-5.3
STATUS.........: Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 32 (hours remain)
ORIG. ESTIMATE.: 32
PROGRESS NOTES:
DESCRIPTION:
The goal of this task is to back-port the optimizations that use join buffers
from the MySQL 6.0 code line into MariaDB 5.3 code.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] [Branch ~maria-captains/maria/5.1] Rev 2830: Fix for Bug #534626 MyISAM table created in MariaDB not readable by MySQL
by noreply@launchpad.net 10 Mar '10
by noreply@launchpad.net 10 Mar '10
10 Mar '10
------------------------------------------------------------
revno: 2830
committer: Michael Widenius <monty(a)askmonty.org>
branch nick: maria-5.1
timestamp: Wed 2010-03-10 21:00:34 +0200
message:
Fix for Bug #534626 MyISAM table created in MariaDB not readable by MySQL
modified:
storage/myisam/mi_create.c
--
lp:maria
https://code.launchpad.net/~maria-captains/maria/5.1
Your team Maria developers is subscribed to branch lp:maria.
To unsubscribe from this branch go to https://code.launchpad.net/~maria-captains/maria/5.1/+edit-subscription.
1
0
[Maria-developers] bzr commit into MariaDB 5.1, with Maria 1.5:maria branch (monty:2830) Bug#534626
by Michael Widenius 10 Mar '10
by Michael Widenius 10 Mar '10
10 Mar '10
#At lp:maria based on revid:monty@askmonty.org-20100310135540-mm65on3y6vktzpoy
2830 Michael Widenius 2010-03-10
Fix for Bug #534626 MyISAM table created in MariaDB not readable by MySQL
modified:
storage/myisam/mi_create.c
per-file messages:
storage/myisam/mi_create.c
Don't set HA_OPTION_NULL_FIELDS if table is not using CHECKSUM as this makes the table incompatible with MySQL.
=== modified file 'storage/myisam/mi_create.c'
--- a/storage/myisam/mi_create.c 2010-01-14 16:51:00 +0000
+++ b/storage/myisam/mi_create.c 2010-03-10 19:00:34 +0000
@@ -175,6 +175,13 @@ int mi_create(const char *name,uint keys
}
}
+ /*
+ Don't set HA_OPTION_NULL_FIELDS if no checksums, as this flag makes
+ that file incompatible with MySQL. This is ok, as this flag is only
+ used if one specifics table level checksums.
+ */
+ if (!(options & HA_OPTION_CHECKSUM))
+ options&= ~HA_OPTION_NULL_FIELDS;
if (packed || (flags & HA_PACK_RECORD))
options|=HA_OPTION_PACK_RECORD; /* Must use packed records */
/* We can't use checksum with static length rows */
1
0
[Maria-developers] New (by Psergey): Backport 6.0 subquery code to MariaDB 5.3 (104)
by worklog-noreply@askmonty.org 10 Mar '10
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Backport 6.0 subquery code to MariaDB 5.3
CREATION DATE..: Wed, 10 Mar 2010, 18:54
SUPERVISOR.....: Igor
IMPLEMENTOR....: Psergey
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 104 (http://askmonty.org/worklog/?tid=104)
VERSION........: Server-5.3
STATUS.........: Complete
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
DESCRIPTION:
Backport 6.0 subquery code to MariaDB 5.3
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] New (by Psergey): Backport 6.0 subquery code to MariaDB 5.3 (104)
by worklog-noreply@askmonty.org 10 Mar '10
by worklog-noreply@askmonty.org 10 Mar '10
10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Backport 6.0 subquery code to MariaDB 5.3
CREATION DATE..: Wed, 10 Mar 2010, 18:54
SUPERVISOR.....: Igor
IMPLEMENTOR....: Psergey
COPIES TO......:
CATEGORY.......: Server-Sprint
TASK ID........: 104 (http://askmonty.org/worklog/?tid=104)
VERSION........: Server-5.3
STATUS.........: Complete
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
DESCRIPTION:
Backport 6.0 subquery code to MariaDB 5.3
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
1
0
[Maria-developers] [Branch ~maria-captains/maria/5.1] Rev 2829: Automatic merge
by noreply@launchpad.net 10 Mar '10
by noreply@launchpad.net 10 Mar '10
10 Mar '10
Merge authors:
Michael Widenius (monty)
------------------------------------------------------------
revno: 2829 [merge]
committer: Michael Widenius <monty(a)askmonty.org>
branch nick: maria-5.1
timestamp: Wed 2010-03-10 15:55:40 +0200
message:
Automatic merge
modified:
mysql-test/r/foreign_key.result
mysql-test/t/foreign_key.test
sql/sql_delete.cc
--
lp:maria
https://code.launchpad.net/~maria-captains/maria/5.1
Your team Maria developers is subscribed to branch lp:maria.
To unsubscribe from this branch go to https://code.launchpad.net/~maria-captains/maria/5.1/+edit-subscription.
1
0
[Maria-developers] bzr commit into MariaDB 5.1, with Maria 1.5:maria branch (monty:2829)
by Michael Widenius 10 Mar '10
by Michael Widenius 10 Mar '10
10 Mar '10
#At lp:maria based on revid:knielsen@knielsen-hq.org-20100310103214-4ao08thvv174zwv6
2829 Michael Widenius 2010-03-10 [merge]
Automatic merge
modified:
mysql-test/r/foreign_key.result
mysql-test/t/foreign_key.test
sql/sql_delete.cc
=== modified file 'mysql-test/r/foreign_key.result'
--- a/mysql-test/r/foreign_key.result 2001-09-28 05:05:54 +0000
+++ b/mysql-test/r/foreign_key.result 2010-03-10 13:39:02 +0000
@@ -1,4 +1,4 @@
-drop table if exists t1;
+drop table if exists t1,t2;
create table t1 (
a int not null references t2,
b int not null references t2 (c),
@@ -13,3 +13,30 @@ foreign key (a,b) references t3 (c,d) on
create index a on t1 (a);
create unique index b on t1 (a,b);
drop table t1;
+create table t1 (id int primary key) engine = innodb;
+create table t2 (id int PRIMARY KEY, FOREIGN KEY (id) REFERENCES t1(id)) engine=innodb;
+insert into t1 values (1), (2), (3), (4), (5), (6);
+insert into t2 values (3), (5);
+delete from t1;
+ERROR 23000: Cannot delete or update a parent row: a foreign key constraint fails (`test`.`t2`, CONSTRAINT `t2_ibfk_1` FOREIGN KEY (`id`) REFERENCES `t1` (`id`))
+select * from t1;
+id
+1
+2
+3
+4
+5
+6
+delete ignore from t1;
+Warnings:
+Error 1451 Cannot delete or update a parent row: a foreign key constraint fails (`test`.`t2`, CONSTRAINT `t2_ibfk_1` FOREIGN KEY (`id`) REFERENCES `t1` (`id`))
+Error 1451 Cannot delete or update a parent row: a foreign key constraint fails (`test`.`t2`, CONSTRAINT `t2_ibfk_1` FOREIGN KEY (`id`) REFERENCES `t1` (`id`))
+select row_count();
+row_count()
+-1
+select * from t1;
+id
+3
+5
+drop table t2;
+drop table t1;
=== modified file 'mysql-test/t/foreign_key.test'
--- a/mysql-test/t/foreign_key.test 2005-07-28 00:22:47 +0000
+++ b/mysql-test/t/foreign_key.test 2010-03-10 13:39:02 +0000
@@ -2,8 +2,10 @@
# Test syntax of foreign keys
#
+-- source include/have_innodb.inc
+
--disable_warnings
-drop table if exists t1;
+drop table if exists t1,t2;
--enable_warnings
create table t1 (
@@ -23,3 +25,23 @@ create unique index b on t1 (a,b);
drop table t1;
# End of 4.1 tests
+
+#
+# Test DELETE IGNORE
+# Bug#44987 DELETE IGNORE and FK constraint
+#
+
+create table t1 (id int primary key) engine = innodb;
+create table t2 (id int PRIMARY KEY, FOREIGN KEY (id) REFERENCES t1(id)) engine=innodb;
+insert into t1 values (1), (2), (3), (4), (5), (6);
+insert into t2 values (3), (5);
+
+--error 1451
+delete from t1;
+select * from t1;
+
+delete ignore from t1;
+select row_count();
+select * from t1;
+drop table t2;
+drop table t1;
=== modified file 'sql/sql_delete.cc'
--- a/sql/sql_delete.cc 2010-03-04 08:03:07 +0000
+++ b/sql/sql_delete.cc 2010-03-10 13:55:40 +0000
@@ -335,8 +335,11 @@ bool mysql_delete(THD *thd, TABLE_LIST *
InnoDB it can fail in a FOREIGN KEY error or an
out-of-tablespace error.
*/
- error= 1;
- break;
+ if (!select_lex->no_error)
+ {
+ error= 1;
+ break;
+ }
}
}
else
1
0
[Maria-developers] bzr commit into MariaDB 5.1, with Maria 1.5:maria branch (monty:2826) Bug#44987
by Michael Widenius 10 Mar '10
by Michael Widenius 10 Mar '10
10 Mar '10
#At lp:maria based on revid:monty@askmonty.org-20100309192224-ycijqc2xxv8v9dai
2826 Michael Widenius 2010-03-10
Fix for: Bug#44987 DELETE IGNORE and FK constraint
- Now DELETE IGNORE skips over rows with foreign key constraints (as it was supposed to do)
modified:
mysql-test/r/foreign_key.result
mysql-test/t/foreign_key.test
sql/sql_delete.cc
per-file messages:
mysql-test/r/foreign_key.result
Test case for Bug#44987 DELETE IGNORE and FK constraint
mysql-test/t/foreign_key.test
Test case for Bug#44987 DELETE IGNORE and FK constraint
sql/sql_delete.cc
Fix for Bug#44987 DELETE IGNORE and FK constraint
Now DELETE IGNORE skips over rows with foreign key constraints (as it was supposed to do)
Bug fix inspired by: Moritz Mertinkat
=== modified file 'mysql-test/r/foreign_key.result'
--- a/mysql-test/r/foreign_key.result 2001-09-28 05:05:54 +0000
+++ b/mysql-test/r/foreign_key.result 2010-03-10 13:39:02 +0000
@@ -1,4 +1,4 @@
-drop table if exists t1;
+drop table if exists t1,t2;
create table t1 (
a int not null references t2,
b int not null references t2 (c),
@@ -13,3 +13,30 @@ foreign key (a,b) references t3 (c,d) on
create index a on t1 (a);
create unique index b on t1 (a,b);
drop table t1;
+create table t1 (id int primary key) engine = innodb;
+create table t2 (id int PRIMARY KEY, FOREIGN KEY (id) REFERENCES t1(id)) engine=innodb;
+insert into t1 values (1), (2), (3), (4), (5), (6);
+insert into t2 values (3), (5);
+delete from t1;
+ERROR 23000: Cannot delete or update a parent row: a foreign key constraint fails (`test`.`t2`, CONSTRAINT `t2_ibfk_1` FOREIGN KEY (`id`) REFERENCES `t1` (`id`))
+select * from t1;
+id
+1
+2
+3
+4
+5
+6
+delete ignore from t1;
+Warnings:
+Error 1451 Cannot delete or update a parent row: a foreign key constraint fails (`test`.`t2`, CONSTRAINT `t2_ibfk_1` FOREIGN KEY (`id`) REFERENCES `t1` (`id`))
+Error 1451 Cannot delete or update a parent row: a foreign key constraint fails (`test`.`t2`, CONSTRAINT `t2_ibfk_1` FOREIGN KEY (`id`) REFERENCES `t1` (`id`))
+select row_count();
+row_count()
+-1
+select * from t1;
+id
+3
+5
+drop table t2;
+drop table t1;
=== modified file 'mysql-test/t/foreign_key.test'
--- a/mysql-test/t/foreign_key.test 2005-07-28 00:22:47 +0000
+++ b/mysql-test/t/foreign_key.test 2010-03-10 13:39:02 +0000
@@ -2,8 +2,10 @@
# Test syntax of foreign keys
#
+-- source include/have_innodb.inc
+
--disable_warnings
-drop table if exists t1;
+drop table if exists t1,t2;
--enable_warnings
create table t1 (
@@ -23,3 +25,23 @@ create unique index b on t1 (a,b);
drop table t1;
# End of 4.1 tests
+
+#
+# Test DELETE IGNORE
+# Bug#44987 DELETE IGNORE and FK constraint
+#
+
+create table t1 (id int primary key) engine = innodb;
+create table t2 (id int PRIMARY KEY, FOREIGN KEY (id) REFERENCES t1(id)) engine=innodb;
+insert into t1 values (1), (2), (3), (4), (5), (6);
+insert into t2 values (3), (5);
+
+--error 1451
+delete from t1;
+select * from t1;
+
+delete ignore from t1;
+select row_count();
+select * from t1;
+drop table t2;
+drop table t1;
=== modified file 'sql/sql_delete.cc'
--- a/sql/sql_delete.cc 2010-02-10 19:06:24 +0000
+++ b/sql/sql_delete.cc 2010-03-10 13:39:02 +0000
@@ -335,8 +335,11 @@ bool mysql_delete(THD *thd, TABLE_LIST *
InnoDB it can fail in a FOREIGN KEY error or an
out-of-tablespace error.
*/
- error= 1;
- break;
+ if (!select_lex->no_error)
+ {
+ error= 1;
+ break;
+ }
}
}
else
[Maria-developers] Updated (by Serg): MySQL/Maria Wait Interface (102)
by worklog-noreply@askmonty.org 10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: MySQL/Maria Wait Interface
CREATION DATE..: Wed, 10 Mar 2010, 10:53
SUPERVISOR.....: Bothorsen
IMPLEMENTOR....:
COPIES TO......: Sergei
CATEGORY.......: Client-BackLog
TASK ID........: 102 (http://askmonty.org/worklog/?tid=102)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Serg - Wed, 10 Mar 2010, 12:41)=-=-
Observers changed: Sergei
-=-=(Shinguz - Wed, 10 Mar 2010, 12:41)=-=-
High Level Description modified.
--- /tmp/wklog.102.old.2095 2010-03-10 12:41:15.000000000 +0000
+++ /tmp/wklog.102.new.2095 2010-03-10 12:41:15.000000000 +0000
@@ -10,6 +10,9 @@
http://www.xaprb.com/blog/2009/11/07/a-review-of-optimizing-oracle-performa…
http://www.facebook.com/note.php?note_id=355839540932
+Possibly the MySQL performance schema can cover part/all of it yet?
+http://dev.mysql.com/doc/performance-schema/en/index.html
+
Good sources to start with:
http://www.amazon.com/Oracle-Wait-Interface-Performance-Diagnostics/dp/0072…
http://books.google.ch/books?id=14OmJzfCfXMC&pg=PA6&lpg=PA6&dq=owi+is+a+per…
-=-=(Shinguz - Wed, 10 Mar 2010, 10:55)=-=-
Title modified.
--- /tmp/wklog.102.old.29019 2010-03-10 10:55:18.000000000 +0000
+++ /tmp/wklog.102.new.29019 2010-03-10 10:55:18.000000000 +0000
@@ -1 +1 @@
-MySQL Wait Interface
+MySQL/Maria Wait Interface
-=-=(Shinguz - Wed, 10 Mar 2010, 10:55)=-=-
Version updated.
--- /tmp/wklog.102.old.29019 2010-03-10 10:55:18.000000000 +0000
+++ /tmp/wklog.102.new.29019 2010-03-10 10:55:18.000000000 +0000
@@ -1 +1 @@
-Benchmarks-3.0
+Server-9.x
DESCRIPTION:
What MySQL lacks is something similar to the Oracle Wait Interface. The basic
idea is that every connection/thread (and the whole system) shows, on request,
where it spent how much time doing what.
Oracle implemented the wait interface (which is possibly a sub-set of Sun's
dtrace) so that you can see where you lose your time.
Mark Callaghan and Baron Schwartz blogged in this direction recently:
http://www.xaprb.com/blog/2009/11/07/a-review-of-optimizing-oracle-performa…
http://www.facebook.com/note.php?note_id=355839540932
Possibly the MySQL performance schema can already cover part or all of this?
http://dev.mysql.com/doc/performance-schema/en/index.html
Good sources to start with:
http://www.amazon.com/Oracle-Wait-Interface-Performance-Diagnostics/dp/0072…
http://books.google.ch/books?id=14OmJzfCfXMC&pg=PA6&lpg=PA6&dq=owi+is+a+per…
http://www.oracle.com/technology/books/pdfs/sample%20chapter%202729x.pdf
http://www.amazon.com/Optimizing-Oracle-Performance-Cary-Millsap/dp/0596005…
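As a very rough illustration of the kind of per-wait accounting asked for here,
this is how it could look from SQL using the MySQL performance schema linked
above (a sketch only; the table and column names are those documented for the
MySQL 5.5 performance schema and are not part of this worklog):

  -- enable timing of all instrumented wait points
  update performance_schema.setup_instruments
     set enabled = 'YES', timed = 'YES';

  -- where did the server spend its time waiting, system-wide?
  select event_name, count_star, sum_timer_wait
    from performance_schema.events_waits_summary_global_by_event_name
   order by sum_timer_wait desc
   limit 10;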
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] Updated (by Serg): MySQL/Maria Wait Interface (102)
by worklog-noreply@askmonty.org 10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: MySQL/Maria Wait Interface
CREATION DATE..: Wed, 10 Mar 2010, 10:53
SUPERVISOR.....: Bothorsen
IMPLEMENTOR....:
COPIES TO......: Sergei
CATEGORY.......: Client-BackLog
TASK ID........: 102 (http://askmonty.org/worklog/?tid=102)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Serg - Wed, 10 Mar 2010, 12:41)=-=-
Observers changed: Sergei
-=-=(Shinguz - Wed, 10 Mar 2010, 12:41)=-=-
High Level Description modified.
--- /tmp/wklog.102.old.2095 2010-03-10 12:41:15.000000000 +0000
+++ /tmp/wklog.102.new.2095 2010-03-10 12:41:15.000000000 +0000
@@ -10,6 +10,9 @@
http://www.xaprb.com/blog/2009/11/07/a-review-of-optimizing-oracle-performa…
http://www.facebook.com/note.php?note_id=355839540932
+Possibly the MySQL performance schema can cover part/all of it yet?
+http://dev.mysql.com/doc/performance-schema/en/index.html
+
Good sources to start with:
http://www.amazon.com/Oracle-Wait-Interface-Performance-Diagnostics/dp/0072…
http://books.google.ch/books?id=14OmJzfCfXMC&pg=PA6&lpg=PA6&dq=owi+is+a+per…
-=-=(Shinguz - Wed, 10 Mar 2010, 10:55)=-=-
Title modified.
--- /tmp/wklog.102.old.29019 2010-03-10 10:55:18.000000000 +0000
+++ /tmp/wklog.102.new.29019 2010-03-10 10:55:18.000000000 +0000
@@ -1 +1 @@
-MySQL Wait Interface
+MySQL/Maria Wait Interface
-=-=(Shinguz - Wed, 10 Mar 2010, 10:55)=-=-
Version updated.
--- /tmp/wklog.102.old.29019 2010-03-10 10:55:18.000000000 +0000
+++ /tmp/wklog.102.new.29019 2010-03-10 10:55:18.000000000 +0000
@@ -1 +1 @@
-Benchmarks-3.0
+Server-9.x
DESCRIPTION:
What MySQL lacks is something similar to the Oracle Wait Interface. The basic
idea is that every connection/thread (and the whole system) shows, on request,
where it spent how much time doing what.
Oracle implemented the wait interface (which is possibly a sub-set of Sun's
dtrace) so that you can see where you lose your time.
Mark Callaghan and Baron Schwartz blogged in this direction recently:
http://www.xaprb.com/blog/2009/11/07/a-review-of-optimizing-oracle-performa…
http://www.facebook.com/note.php?note_id=355839540932
Possibly the MySQL performance schema can already cover part or all of this?
http://dev.mysql.com/doc/performance-schema/en/index.html
Good sources to start with:
http://www.amazon.com/Oracle-Wait-Interface-Performance-Diagnostics/dp/0072…
http://books.google.ch/books?id=14OmJzfCfXMC&pg=PA6&lpg=PA6&dq=owi+is+a+per…
http://www.oracle.com/technology/books/pdfs/sample%20chapter%202729x.pdf
http://www.amazon.com/Optimizing-Oracle-Performance-Cary-Millsap/dp/0596005…
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] Updated (by Serg): MySQL/Maria Wait Interface (102)
by worklog-noreply@askmonty.org 10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: MySQL/Maria Wait Interface
CREATION DATE..: Wed, 10 Mar 2010, 10:53
SUPERVISOR.....: Bothorsen
IMPLEMENTOR....:
COPIES TO......: Sergei
CATEGORY.......: Client-BackLog
TASK ID........: 102 (http://askmonty.org/worklog/?tid=102)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Serg - Wed, 10 Mar 2010, 12:41)=-=-
Observers changed: Sergei
-=-=(Shinguz - Wed, 10 Mar 2010, 12:41)=-=-
High Level Description modified.
--- /tmp/wklog.102.old.2095 2010-03-10 12:41:15.000000000 +0000
+++ /tmp/wklog.102.new.2095 2010-03-10 12:41:15.000000000 +0000
@@ -10,6 +10,9 @@
http://www.xaprb.com/blog/2009/11/07/a-review-of-optimizing-oracle-performa…
http://www.facebook.com/note.php?note_id=355839540932
+Possibly the MySQL performance schema can cover part/all of it yet?
+http://dev.mysql.com/doc/performance-schema/en/index.html
+
Good sources to start with:
http://www.amazon.com/Oracle-Wait-Interface-Performance-Diagnostics/dp/0072…
http://books.google.ch/books?id=14OmJzfCfXMC&pg=PA6&lpg=PA6&dq=owi+is+a+per…
-=-=(Shinguz - Wed, 10 Mar 2010, 10:55)=-=-
Title modified.
--- /tmp/wklog.102.old.29019 2010-03-10 10:55:18.000000000 +0000
+++ /tmp/wklog.102.new.29019 2010-03-10 10:55:18.000000000 +0000
@@ -1 +1 @@
-MySQL Wait Interface
+MySQL/Maria Wait Interface
-=-=(Shinguz - Wed, 10 Mar 2010, 10:55)=-=-
Version updated.
--- /tmp/wklog.102.old.29019 2010-03-10 10:55:18.000000000 +0000
+++ /tmp/wklog.102.new.29019 2010-03-10 10:55:18.000000000 +0000
@@ -1 +1 @@
-Benchmarks-3.0
+Server-9.x
DESCRIPTION:
What MySQL lacks is something similar to the Oracle Wait Interface. The basic
idea is that every connection/thread (and the whole system) shows, on request,
where it spent how much time doing what.
Oracle implemented the wait interface (which is possibly a sub-set of Sun's
dtrace) so that you can see where you lose your time.
Mark Callaghan and Baron Schwartz blogged in this direction recently:
http://www.xaprb.com/blog/2009/11/07/a-review-of-optimizing-oracle-performa…
http://www.facebook.com/note.php?note_id=355839540932
Possibly the MySQL performance schema can already cover part or all of this?
http://dev.mysql.com/doc/performance-schema/en/index.html
Good sources to start with:
http://www.amazon.com/Oracle-Wait-Interface-Performance-Diagnostics/dp/0072…
http://books.google.ch/books?id=14OmJzfCfXMC&pg=PA6&lpg=PA6&dq=owi+is+a+per…
http://www.oracle.com/technology/books/pdfs/sample%20chapter%202729x.pdf
http://www.amazon.com/Optimizing-Oracle-Performance-Cary-Millsap/dp/0596005…
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] Updated (by Shinguz): MySQL/Maria Wait Interface (102)
by worklog-noreply@askmonty.org 10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: MySQL/Maria Wait Interface
CREATION DATE..: Wed, 10 Mar 2010, 10:53
SUPERVISOR.....: Bothorsen
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Client-BackLog
TASK ID........: 102 (http://askmonty.org/worklog/?tid=102)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Shinguz - Wed, 10 Mar 2010, 12:41)=-=-
High Level Description modified.
--- /tmp/wklog.102.old.2095 2010-03-10 12:41:15.000000000 +0000
+++ /tmp/wklog.102.new.2095 2010-03-10 12:41:15.000000000 +0000
@@ -10,6 +10,9 @@
http://www.xaprb.com/blog/2009/11/07/a-review-of-optimizing-oracle-performa…
http://www.facebook.com/note.php?note_id=355839540932
+Possibly the MySQL performance schema can cover part/all of it yet?
+http://dev.mysql.com/doc/performance-schema/en/index.html
+
Good sources to start with:
http://www.amazon.com/Oracle-Wait-Interface-Performance-Diagnostics/dp/0072…
http://books.google.ch/books?id=14OmJzfCfXMC&pg=PA6&lpg=PA6&dq=owi+is+a+per…
-=-=(Shinguz - Wed, 10 Mar 2010, 10:55)=-=-
Title modified.
--- /tmp/wklog.102.old.29019 2010-03-10 10:55:18.000000000 +0000
+++ /tmp/wklog.102.new.29019 2010-03-10 10:55:18.000000000 +0000
@@ -1 +1 @@
-MySQL Wait Interface
+MySQL/Maria Wait Interface
-=-=(Shinguz - Wed, 10 Mar 2010, 10:55)=-=-
Version updated.
--- /tmp/wklog.102.old.29019 2010-03-10 10:55:18.000000000 +0000
+++ /tmp/wklog.102.new.29019 2010-03-10 10:55:18.000000000 +0000
@@ -1 +1 @@
-Benchmarks-3.0
+Server-9.x
DESCRIPTION:
What MySQL lacks is something similar to the Oracle Wait Interface. The basic
idea is that every connection/thread (and the whole system) shows, on request,
where it spent how much time doing what.
Oracle implemented the wait interface (which is possibly a sub-set of Sun's
dtrace) so that you can see where you lose your time.
Mark Callaghan and Baron Schwartz blogged in this direction recently:
http://www.xaprb.com/blog/2009/11/07/a-review-of-optimizing-oracle-performa…
http://www.facebook.com/note.php?note_id=355839540932
Possibly the MySQL performance schema can already cover part or all of this?
http://dev.mysql.com/doc/performance-schema/en/index.html
Good sources to start with:
http://www.amazon.com/Oracle-Wait-Interface-Performance-Diagnostics/dp/0072…
http://books.google.ch/books?id=14OmJzfCfXMC&pg=PA6&lpg=PA6&dq=owi+is+a+per…
http://www.oracle.com/technology/books/pdfs/sample%20chapter%202729x.pdf
http://www.amazon.com/Optimizing-Oracle-Performance-Cary-Millsap/dp/0596005…
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] Updated (by Shinguz): MySQL/Maria Wait Interface (102)
by worklog-noreply@askmonty.org 10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: MySQL/Maria Wait Interface
CREATION DATE..: Wed, 10 Mar 2010, 10:53
SUPERVISOR.....: Bothorsen
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Client-BackLog
TASK ID........: 102 (http://askmonty.org/worklog/?tid=102)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Shinguz - Wed, 10 Mar 2010, 12:41)=-=-
High Level Description modified.
--- /tmp/wklog.102.old.2095 2010-03-10 12:41:15.000000000 +0000
+++ /tmp/wklog.102.new.2095 2010-03-10 12:41:15.000000000 +0000
@@ -10,6 +10,9 @@
http://www.xaprb.com/blog/2009/11/07/a-review-of-optimizing-oracle-performa…
http://www.facebook.com/note.php?note_id=355839540932
+Possibly the MySQL performance schema can cover part/all of it yet?
+http://dev.mysql.com/doc/performance-schema/en/index.html
+
Good sources to start with:
http://www.amazon.com/Oracle-Wait-Interface-Performance-Diagnostics/dp/0072…
http://books.google.ch/books?id=14OmJzfCfXMC&pg=PA6&lpg=PA6&dq=owi+is+a+per…
-=-=(Shinguz - Wed, 10 Mar 2010, 10:55)=-=-
Title modified.
--- /tmp/wklog.102.old.29019 2010-03-10 10:55:18.000000000 +0000
+++ /tmp/wklog.102.new.29019 2010-03-10 10:55:18.000000000 +0000
@@ -1 +1 @@
-MySQL Wait Interface
+MySQL/Maria Wait Interface
-=-=(Shinguz - Wed, 10 Mar 2010, 10:55)=-=-
Version updated.
--- /tmp/wklog.102.old.29019 2010-03-10 10:55:18.000000000 +0000
+++ /tmp/wklog.102.new.29019 2010-03-10 10:55:18.000000000 +0000
@@ -1 +1 @@
-Benchmarks-3.0
+Server-9.x
DESCRIPTION:
What MySQL lacks is something similar to the Oracle Wait Interface. The basic
idea is that every connection/thread (and the whole system) shows, on request,
where it spent how much time doing what.
Oracle implemented the wait interface (which is possibly a sub-set of Sun's
dtrace) so that you can see where you lose your time.
Mark Callaghan and Baron Schwartz blogged in this direction recently:
http://www.xaprb.com/blog/2009/11/07/a-review-of-optimizing-oracle-performa…
http://www.facebook.com/note.php?note_id=355839540932
Possibly the MySQL performance schema can already cover part or all of this?
http://dev.mysql.com/doc/performance-schema/en/index.html
Good sources to start with:
http://www.amazon.com/Oracle-Wait-Interface-Performance-Diagnostics/dp/0072…
http://books.google.ch/books?id=14OmJzfCfXMC&pg=PA6&lpg=PA6&dq=owi+is+a+per…
http://www.oracle.com/technology/books/pdfs/sample%20chapter%202729x.pdf
http://www.amazon.com/Optimizing-Oracle-Performance-Cary-Millsap/dp/0596005…
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] Updated (by Shinguz): Index usage tracker (103)
by worklog-noreply@askmonty.org 10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Index usage tracker
CREATION DATE..: Wed, 10 Mar 2010, 11:29
SUPERVISOR.....: Bothorsen
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Client-BackLog
TASK ID........: 103 (http://askmonty.org/worklog/?tid=103)
VERSION........: Benchmarks-3.0
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Shinguz - Wed, 10 Mar 2010, 11:30)=-=-
High Level Description modified.
--- /tmp/wklog.103.old.30875 2010-03-10 11:30:45.000000000 +0000
+++ /tmp/wklog.103.new.30875 2010-03-10 11:30:45.000000000 +0000
@@ -35,3 +35,6 @@
Possibly this could/should be implemented on the handler interface level because
there we know what we are touching? But I am not familiar with the code.
+
+If some more specification is need for implementation let me know and I will
+collect the stuff from the sources mentioned above.
DESCRIPTION:
What indexes are needed is often easy to find. What is more difficult is to find
which indexes are not used at all.
Thus some statistics about the index usage would be nice.
I think Percona already did something like this some time ago, and big O
has similar functionality.
I could imagine something like this:
+------------+------------+--------------+---------------------+---------------------+---------------------+------------+----------+-----------+-----------+------------+
| table_name | index_name | index_length | create_time         | update_time         | use_time            | read_first | read_key | read_next | full_scan | range_scan |
+------------+------------+--------------+---------------------+---------------------+---------------------+------------+----------+-----------+-----------+------------+
| test_table | idx1       | 1234567890   | 2010-01-01 00:00:00 | 2010-01-01 00:00:00 | 2010-03-10 11:34:56 | 1234       | 42560    | 2468      | 234       | 321        |
| test_table | idx2       | 123456       | 2010-01-01 00:00:00 | NULL                | NULL                | 0          | 0        | 0         | 0         | 0          |
| test_table | idx3       | 234561       | 2010-01-01 00:00:00 | 2010-03-10 11:12:34 | 2010-03-10 11:34:56 | 7890       | 89890    | 15780     | 678       | 321        |
| test_table | idx4       | 345612       | 2010-01-01 00:00:00 | 2010-03-10 11:34:56 | NULL                | 0          | 0        | 0         | 0         | 0          |
| test_table | idx5       | 456123       | 2010-01-01 00:00:00 | 2010-03-10 06:56:12 | NULL                | 0          | 0        | 0         | 0         | 0          |
| test_table | idx6       | 561234       | 2010-01-01 00:00:00 | 2010-03-10 01:12:34 | 2010-03-10 11:34:42 | 3456       | 12356    | 6912      | 123       | 12         |
+------------+------------+--------------+---------------------+---------------------+---------------------+------------+----------+-----------+-----------+------------+
Possibly this could/should be implemented at the handler interface level,
because there we know what we are touching. But I am not familiar with the code.
If more specification is needed for the implementation, let me know and I will
collect the material from the sources mentioned above.
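To make the "unused index" part concrete, here is a sketch of how it can be
answered with the Percona userstat patch mentioned above (INDEX_STATISTICS and
the userstat_running variable belong to that patch, not to MariaDB; they are
shown only as an example of the kind of data this worklog asks for):

  -- Percona patch: start collecting per-index usage counters
  set global userstat_running = 1;

  -- indexes that have never been read since counting started
  select s.table_schema, s.table_name, s.index_name
    from information_schema.statistics s
    left join information_schema.index_statistics i
           on s.table_schema = i.table_schema
          and s.table_name   = i.table_name
          and s.index_name   = i.index_name
   where i.index_name is null
   group by s.table_schema, s.table_name, s.index_name;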
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] Updated (by Shinguz): Index usage tracker (103)
by worklog-noreply@askmonty.org 10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Index usage tracker
CREATION DATE..: Wed, 10 Mar 2010, 11:29
SUPERVISOR.....: Bothorsen
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Client-BackLog
TASK ID........: 103 (http://askmonty.org/worklog/?tid=103)
VERSION........: Benchmarks-3.0
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Shinguz - Wed, 10 Mar 2010, 11:30)=-=-
High Level Description modified.
--- /tmp/wklog.103.old.30875 2010-03-10 11:30:45.000000000 +0000
+++ /tmp/wklog.103.new.30875 2010-03-10 11:30:45.000000000 +0000
@@ -35,3 +35,6 @@
Possibly this could/should be implemented on the handler interface level because
there we know what we are touching? But I am not familiar with the code.
+
+If some more specification is need for implementation let me know and I will
+collect the stuff from the sources mentioned above.
DESCRIPTION:
What indexes are needed is often easy to find. What is more difficult is to find
which indexes are not used at all.
Thus some statistics about the index usage would be nice.
I think Percona already did something like this some time ago, and big O
has similar functionality.
I could imagine something like this:
+------------+------------+--------------+---------------------+---------------------+---------------------+------------+----------+-----------+-----------+------------+
| table_name | index_name | index_length | create_time         | update_time         | use_time            | read_first | read_key | read_next | full_scan | range_scan |
+------------+------------+--------------+---------------------+---------------------+---------------------+------------+----------+-----------+-----------+------------+
| test_table | idx1       | 1234567890   | 2010-01-01 00:00:00 | 2010-01-01 00:00:00 | 2010-03-10 11:34:56 | 1234       | 42560    | 2468      | 234       | 321        |
| test_table | idx2       | 123456       | 2010-01-01 00:00:00 | NULL                | NULL                | 0          | 0        | 0         | 0         | 0          |
| test_table | idx3       | 234561       | 2010-01-01 00:00:00 | 2010-03-10 11:12:34 | 2010-03-10 11:34:56 | 7890       | 89890    | 15780     | 678       | 321        |
| test_table | idx4       | 345612       | 2010-01-01 00:00:00 | 2010-03-10 11:34:56 | NULL                | 0          | 0        | 0         | 0         | 0          |
| test_table | idx5       | 456123       | 2010-01-01 00:00:00 | 2010-03-10 06:56:12 | NULL                | 0          | 0        | 0         | 0         | 0          |
| test_table | idx6       | 561234       | 2010-01-01 00:00:00 | 2010-03-10 01:12:34 | 2010-03-10 11:34:42 | 3456       | 12356    | 6912      | 123       | 12         |
+------------+------------+--------------+---------------------+---------------------+---------------------+------------+----------+-----------+-----------+------------+
Possibly this could/should be implemented at the handler interface level,
because there we know what we are touching. But I am not familiar with the code.
If more specification is needed for the implementation, let me know and I will
collect the material from the sources mentioned above.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] New (by Shinguz): Index usage tracker (103)
by worklog-noreply@askmonty.org 10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: Index usage tracker
CREATION DATE..: Wed, 10 Mar 2010, 11:29
SUPERVISOR.....: Bothorsen
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Client-BackLog
TASK ID........: 103 (http://askmonty.org/worklog/?tid=103)
VERSION........: Benchmarks-3.0
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
DESCRIPTION:
What indexes are needed is often easy to find. What is more difficult is to find
which indexes are not used at all.
Thus some statistics about the index usage would be nice.
I think Percona already did something like this some time ago, and big O
has similar functionality.
I could imagine something like this:
+------------+------------+--------------+---------------------+---------------------+---------------------+------------+----------+-----------+-----------+------------+
| table_name | index_name | index_length | create_time         | update_time         | use_time            | read_first | read_key | read_next | full_scan | range_scan |
+------------+------------+--------------+---------------------+---------------------+---------------------+------------+----------+-----------+-----------+------------+
| test_table | idx1       | 1234567890   | 2010-01-01 00:00:00 | 2010-01-01 00:00:00 | 2010-03-10 11:34:56 | 1234       | 42560    | 2468      | 234       | 321        |
| test_table | idx2       | 123456       | 2010-01-01 00:00:00 | NULL                | NULL                | 0          | 0        | 0         | 0         | 0          |
| test_table | idx3       | 234561       | 2010-01-01 00:00:00 | 2010-03-10 11:12:34 | 2010-03-10 11:34:56 | 7890       | 89890    | 15780     | 678       | 321        |
| test_table | idx4       | 345612       | 2010-01-01 00:00:00 | 2010-03-10 11:34:56 | NULL                | 0          | 0        | 0         | 0         | 0          |
| test_table | idx5       | 456123       | 2010-01-01 00:00:00 | 2010-03-10 06:56:12 | NULL                | 0          | 0        | 0         | 0         | 0          |
| test_table | idx6       | 561234       | 2010-01-01 00:00:00 | 2010-03-10 01:12:34 | 2010-03-10 11:34:42 | 3456       | 12356    | 6912      | 123       | 12         |
+------------+------------+--------------+---------------------+---------------------+---------------------+------------+----------+-----------+-----------+------------+
Possibly this could/should be implemented at the handler interface level,
because there we know what we are touching. But I am not familiar with the code.
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] [Branch ~maria-captains/maria/5.1] Rev 2828: Fix some compiler warnings seen in Buildbot.
by noreply@launchpad.net 10 Mar '10
------------------------------------------------------------
revno: 2828
committer: knielsen(a)knielsen-hq.org
branch nick: mariadb-5.1
timestamp: Wed 2010-03-10 11:32:14 +0100
message:
Fix some compiler warnings seen in Buildbot.
Add some extra error output and code cleanup in an attempt to fix/debug
a rare random testsuite problem in check_warnings, where the exit code
from mysqltest is somehow corrupted inside mysql-test-run.pl.
modified:
include/my_global.h
mysql-test/lib/My/SafeProcess.pm
mysql-test/mysql-test-run.pl
sql/sql_lex.cc
sql/table.cc
storage/federatedx/ha_federatedx.cc
storage/maria/ma_delete.c
storage/maria/ma_loghandler.c
storage/myisam/ft_stopwords.c
storage/myisam/mi_write.c
storage/xtradb/btr/btr0cur.c
support-files/compiler_warnings.supp
vio/viossl.c
--
lp:maria
https://code.launchpad.net/~maria-captains/maria/5.1
Your team Maria developers is subscribed to branch lp:maria.
To unsubscribe from this branch go to https://code.launchpad.net/~maria-captains/maria/5.1/+edit-subscription.
[Maria-developers] Updated (by Shinguz): MySQL/Maria Wait Interface (102)
by worklog-noreply@askmonty.org 10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: MySQL/Maria Wait Interface
CREATION DATE..: Wed, 10 Mar 2010, 10:53
SUPERVISOR.....: Bothorsen
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Client-BackLog
TASK ID........: 102 (http://askmonty.org/worklog/?tid=102)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Shinguz - Wed, 10 Mar 2010, 10:55)=-=-
Title modified.
--- /tmp/wklog.102.old.29019 2010-03-10 10:55:18.000000000 +0000
+++ /tmp/wklog.102.new.29019 2010-03-10 10:55:18.000000000 +0000
@@ -1 +1 @@
-MySQL Wait Interface
+MySQL/Maria Wait Interface
-=-=(Shinguz - Wed, 10 Mar 2010, 10:55)=-=-
Version updated.
--- /tmp/wklog.102.old.29019 2010-03-10 10:55:18.000000000 +0000
+++ /tmp/wklog.102.new.29019 2010-03-10 10:55:18.000000000 +0000
@@ -1 +1 @@
-Benchmarks-3.0
+Server-9.x
DESCRIPTION:
What MySQL lacks is something similar to the Oracle Wait Interface. The basic
idea is that every connection/thread (and the whole system) shows, on request,
where it spent how much time doing what.
Oracle implemented the wait interface (which is possibly a sub-set of Sun's
dtrace) so that you can see where you lose your time.
Mark Callaghan and Baron Schwartz blogged in this direction recently:
http://www.xaprb.com/blog/2009/11/07/a-review-of-optimizing-oracle-performa…
http://www.facebook.com/note.php?note_id=355839540932
Good sources to start with:
http://www.amazon.com/Oracle-Wait-Interface-Performance-Diagnostics/dp/0072…
http://books.google.ch/books?id=14OmJzfCfXMC&pg=PA6&lpg=PA6&dq=owi+is+a+per…
http://www.oracle.com/technology/books/pdfs/sample%20chapter%202729x.pdf
http://www.amazon.com/Optimizing-Oracle-Performance-Cary-Millsap/dp/0596005…
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] Updated (by Shinguz): MySQL/Maria Wait Interface (102)
by worklog-noreply@askmonty.org 10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: MySQL/Maria Wait Interface
CREATION DATE..: Wed, 10 Mar 2010, 10:53
SUPERVISOR.....: Bothorsen
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Client-BackLog
TASK ID........: 102 (http://askmonty.org/worklog/?tid=102)
VERSION........: Server-9.x
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
-=-=(Shinguz - Wed, 10 Mar 2010, 10:55)=-=-
Title modified.
--- /tmp/wklog.102.old.29019 2010-03-10 10:55:18.000000000 +0000
+++ /tmp/wklog.102.new.29019 2010-03-10 10:55:18.000000000 +0000
@@ -1 +1 @@
-MySQL Wait Interface
+MySQL/Maria Wait Interface
-=-=(Shinguz - Wed, 10 Mar 2010, 10:55)=-=-
Version updated.
--- /tmp/wklog.102.old.29019 2010-03-10 10:55:18.000000000 +0000
+++ /tmp/wklog.102.new.29019 2010-03-10 10:55:18.000000000 +0000
@@ -1 +1 @@
-Benchmarks-3.0
+Server-9.x
DESCRIPTION:
What MySQL lacks is something similar to the Oracle Wait Interface. The basic
idea is that every connection/thread (and the whole system) shows, on request,
where it spent how much time doing what.
Oracle implemented the wait interface (which is possibly a sub-set of Sun's
dtrace) so that you can see where you lose your time.
Mark Callaghan and Baron Schwartz blogged in this direction recently:
http://www.xaprb.com/blog/2009/11/07/a-review-of-optimizing-oracle-performa…
http://www.facebook.com/note.php?note_id=355839540932
Good sources to start with:
http://www.amazon.com/Oracle-Wait-Interface-Performance-Diagnostics/dp/0072…
http://books.google.ch/books?id=14OmJzfCfXMC&pg=PA6&lpg=PA6&dq=owi+is+a+per…
http://www.oracle.com/technology/books/pdfs/sample%20chapter%202729x.pdf
http://www.amazon.com/Optimizing-Oracle-Performance-Cary-Millsap/dp/0596005…
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] New (by Shinguz): MySQL Wait Interface (102)
by worklog-noreply@askmonty.org 10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: MySQL Wait Interface
CREATION DATE..: Wed, 10 Mar 2010, 10:53
SUPERVISOR.....: Bothorsen
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Client-BackLog
TASK ID........: 102 (http://askmonty.org/worklog/?tid=102)
VERSION........: Benchmarks-3.0
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
DESCRIPTION:
What MySQL lacks is something similar to the Oracle Wait Interface. The basic
idea is that every connection/thread (and the whole system) shows, on request,
where it spent how much time doing what.
Oracle implemented the wait interface (which is possibly a sub-set of Sun's
dtrace) so that you can see where you lose your time.
Mark Callaghan and Baron Schwartz blogged in this direction recently:
http://www.xaprb.com/blog/2009/11/07/a-review-of-optimizing-oracle-performa…
http://www.facebook.com/note.php?note_id=355839540932
Good sources to start with:
http://www.amazon.com/Oracle-Wait-Interface-Performance-Diagnostics/dp/0072…
http://books.google.ch/books?id=14OmJzfCfXMC&pg=PA6&lpg=PA6&dq=owi+is+a+per…
http://www.oracle.com/technology/books/pdfs/sample%20chapter%202729x.pdf
http://www.amazon.com/Optimizing-Oracle-Performance-Cary-Millsap/dp/0596005…
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] New (by Shinguz): MySQL Wait Interface (102)
by worklog-noreply@askmonty.org 10 Mar '10
-----------------------------------------------------------------------
WORKLOG TASK
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
TASK...........: MySQL Wait Interface
CREATION DATE..: Wed, 10 Mar 2010, 10:53
SUPERVISOR.....: Bothorsen
IMPLEMENTOR....:
COPIES TO......:
CATEGORY.......: Client-BackLog
TASK ID........: 102 (http://askmonty.org/worklog/?tid=102)
VERSION........: Benchmarks-3.0
STATUS.........: Un-Assigned
PRIORITY.......: 60
WORKED HOURS...: 0
ESTIMATE.......: 0 (hours remain)
ORIG. ESTIMATE.: 0
PROGRESS NOTES:
DESCRIPTION:
What MySQL lacks is something similar to the Oracle Wait Interface. The basic
idea is that every connection/thread (and the whole system) shows, on request,
where it spent how much time doing what.
Oracle implemented the wait interface (which is possibly a sub-set of Sun's
dtrace) so that you can see where you lose your time.
Mark Callaghan and Baron Schwartz blogged in this direction recently:
http://www.xaprb.com/blog/2009/11/07/a-review-of-optimizing-oracle-performa…
http://www.facebook.com/note.php?note_id=355839540932
Good sources to start with:
http://www.amazon.com/Oracle-Wait-Interface-Performance-Diagnostics/dp/0072…
http://books.google.ch/books?id=14OmJzfCfXMC&pg=PA6&lpg=PA6&dq=owi+is+a+per…
http://www.oracle.com/technology/books/pdfs/sample%20chapter%202729x.pdf
http://www.amazon.com/Optimizing-Oracle-Performance-Cary-Millsap/dp/0596005…
ESTIMATED WORK TIME
ESTIMATED COMPLETION DATE
-----------------------------------------------------------------------
WorkLog (v3.5.9)
[Maria-developers] bzr commit into MariaDB 5.1, with Maria 1.5:maria branch (knielsen:2828)
by knielsen@knielsen-hq.org 10 Mar '10
#At lp:maria
2828 knielsen(a)knielsen-hq.org 2010-03-10
Fix some compiler warnings seen in Buildbot.
Add some extra error output and code cleanup in an attempt to fix/debug
a rare random testsuite problem in check_warnings, where the exit code
from mysqltest is somehow corrupted inside mysql-test-run.pl.
modified:
include/my_global.h
mysql-test/lib/My/SafeProcess.pm
mysql-test/mysql-test-run.pl
sql/sql_lex.cc
sql/table.cc
storage/federatedx/ha_federatedx.cc
storage/maria/ma_delete.c
storage/maria/ma_loghandler.c
storage/myisam/ft_stopwords.c
storage/myisam/mi_write.c
storage/xtradb/btr/btr0cur.c
support-files/compiler_warnings.supp
vio/viossl.c
per-file messages:
include/my_global.h
Fix compiler warnings on some platforms.
mysql-test/lib/My/SafeProcess.pm
Move dereference of $? subprocess exit code closer to where it is generated,
to make the code more robust and on the chance that this will fix the
occasional problems in check_warnings we see in Buildbot.
mysql-test/mysql-test-run.pl
When check_warnings failed, also log the mysqld server for which it failed.
sql/sql_lex.cc
Fix compiler warning about possibly uninitialised value, by rewriting a for()
loop that is always executed at least once into a do .. while() loop with an
assert.
sql/table.cc
Fix compiler warning about uninitialised value.
storage/federatedx/ha_federatedx.cc
Fix uninitialised variable.
storage/maria/ma_delete.c
Fix compiler warning about uninitialised value.
storage/maria/ma_loghandler.c
Fix compiler warning about uninitialised value.
storage/myisam/ft_stopwords.c
Fix compiler warning.
storage/myisam/mi_write.c
Fix compiler warning about possibly uninitialised value, by rewriting a while()
loop that is always executed at least once into a do .. while() loop with an
assert.
storage/xtradb/btr/btr0cur.c
Fix compiler warning about possibly uninitialised value.
support-files/compiler_warnings.supp
Fix warning suppression to cover all cases in yassl.
vio/viossl.c
Fix compiler warning.
=== modified file 'include/my_global.h'
--- a/include/my_global.h 2010-03-04 08:03:07 +0000
+++ b/include/my_global.h 2010-03-10 10:32:14 +0000
@@ -1260,9 +1260,9 @@ do { doubleget_union _tmp; \
} while (0)
#define float4get(V,M) do { *((float *) &(V)) = *((const float*) (M)); } while(0)
#define float8get(V,M) doubleget((V),(M))
-#define float4store(V,M) memcpy((uchar*) V,(const uchar*) (&M),sizeof(float))
-#define floatstore(T,V) memcpy((uchar*)(T), (const uchar*)(&V),sizeof(float))
-#define floatget(V,M) memcpy((uchar*) &V,(const uchar*) (M),sizeof(float))
+#define float4store(V,M) memcpy((uchar*) V,(uchar*) (&M),sizeof(float))
+#define floatstore(T,V) memcpy((uchar*)(T), (uchar*)(&V),sizeof(float))
+#define floatget(V,M) memcpy((uchar*) &V,(uchar*) (M),sizeof(float))
#define float8store(V,M) doublestore((V),(M))
#else
=== modified file 'mysql-test/lib/My/SafeProcess.pm'
--- a/mysql-test/lib/My/SafeProcess.pm 2009-04-29 14:13:38 +0000
+++ b/mysql-test/lib/My/SafeProcess.pm 2010-03-10 10:32:14 +0000
@@ -384,9 +384,9 @@ sub kill {
sub _collect {
- my ($self)= @_;
+ my ($self, $exit_code)= @_;
- $self->{EXIT_STATUS}= $?;
+ $self->{EXIT_STATUS}= $exit_code;
_verbose("_collect: $self");
# Take the process out of running list
@@ -453,6 +453,7 @@ sub wait_one {
#_verbose("blocking: $blocking, use_alarm: $use_alarm");
my $retpid;
+ my $exit_code;
eval
{
# alarm should break the wait
@@ -461,6 +462,7 @@ sub wait_one {
alarm($timeout) if $use_alarm;
$retpid= waitpid($pid, $blocking ? 0 : &WNOHANG);
+ $exit_code= $?;
alarm(0) if $use_alarm;
};
@@ -492,7 +494,7 @@ sub wait_one {
#warn "wait_one: expected pid $pid but got $retpid"
# unless( $retpid == $pid );
- $self->_collect();
+ $self->_collect($exit_code);
return 0;
}
@@ -505,6 +507,8 @@ sub wait_one {
#
sub wait_any {
my $ret_pid;
+ my $exit_code;
+
if (IS_WIN32PERL) {
# Can't wait for -1 => use a polling loop
do {
@@ -514,6 +518,7 @@ sub wait_any {
last if $pid == $ret_pid;
}
} while ($ret_pid == 0);
+ $exit_code= $?;
}
else
{
@@ -523,6 +528,7 @@ sub wait_any {
print STDERR "wait_any, got invalid pid: $ret_pid\n";
return undef;
}
+ $exit_code= $?;
}
# Look it up in "running" table
@@ -532,7 +538,7 @@ sub wait_any {
print STDERR "running: ". join(", ", keys(%running)). "\n";
return undef;
}
- $proc->_collect;
+ $proc->_collect($exit_code);
return $proc;
}
=== modified file 'mysql-test/mysql-test-run.pl'
--- a/mysql-test/mysql-test-run.pl 2010-03-04 08:03:07 +0000
+++ b/mysql-test/mysql-test-run.pl 2010-03-10 10:32:14 +0000
@@ -4102,7 +4102,7 @@ sub start_check_warnings ($$) {
error => $errfile,
output => $errfile,
args => \$args,
- user_data => $errfile,
+ user_data => [$errfile, $mysqld],
verbose => $opt_verbose,
);
mtr_verbose("Started $proc");
@@ -4148,7 +4148,7 @@ sub check_warnings ($) {
if ( delete $started{$proc->pid()} ) {
# One check warning process returned
my $res= $proc->exit_status();
- my $err_file= $proc->user_data();
+ my ($err_file, $mysqld)= @{$proc->user_data()};
if ( $res == 0 or $res == 62 ){
@@ -4184,7 +4184,8 @@ sub check_warnings ($) {
my $report= mtr_grab_file($err_file);
$tinfo->{comment}.=
"Could not execute 'check-warnings' for ".
- "testcase '$tname' (res: $res):\n";
+ "testcase '$tname' (res: $res) server: '".
+ $mysqld->name() .":\n";
$tinfo->{comment}.= $report;
$result= 2;
=== modified file 'sql/sql_lex.cc'
--- a/sql/sql_lex.cc 2009-12-03 11:19:05 +0000
+++ b/sql/sql_lex.cc 2010-03-10 10:32:14 +0000
@@ -1842,13 +1842,15 @@ void st_select_lex_unit::exclude_tree()
void st_select_lex::mark_as_dependent(st_select_lex *last, Item *dependency)
{
SELECT_LEX *next_to_last;
+
+ DBUG_ASSERT(this != last);
+
/*
Mark all selects from resolved to 1 before select where was
found table as depended (of select where was found table)
*/
- for (SELECT_LEX *s= this;
- s && s != last;
- s= s->outer_select())
+ SELECT_LEX *s= this;
+ do
{
if (!(s->uncacheable & UNCACHEABLE_DEPENDENT))
{
@@ -1866,7 +1868,8 @@ void st_select_lex::mark_as_dependent(st
}
}
next_to_last= s;
- }
+ } while ((s= s->outer_select()) != last && s != 0);
+
is_correlated= TRUE;
this->master_unit()->item->is_correlated= TRUE;
if (dependency)
=== modified file 'sql/table.cc'
--- a/sql/table.cc 2010-03-09 19:23:30 +0000
+++ b/sql/table.cc 2010-03-10 10:32:14 +0000
@@ -2053,6 +2053,8 @@ ulong get_form_pos(File file, uchar *hea
ulong ret_value=0;
DBUG_ENTER("get_form_pos");
+ LINT_INIT(buf);
+
names=uint2korr(head+8);
a_length=(names+2)*sizeof(char *); /* Room for two extra */
=== modified file 'storage/federatedx/ha_federatedx.cc'
--- a/storage/federatedx/ha_federatedx.cc 2009-12-03 11:34:11 +0000
+++ b/storage/federatedx/ha_federatedx.cc 2010-03-10 10:32:14 +0000
@@ -1783,7 +1783,7 @@ int ha_federatedx::open(const char *name
int ha_federatedx::close(void)
{
- int retval, error;
+ int retval= 0, error;
THD *thd= current_thd;
DBUG_ENTER("ha_federatedx::close");
=== modified file 'storage/maria/ma_delete.c'
--- a/storage/maria/ma_delete.c 2009-01-12 11:12:00 +0000
+++ b/storage/maria/ma_delete.c 2010-03-10 10:32:14 +0000
@@ -169,6 +169,8 @@ my_bool _ma_ck_delete(MARIA_HA *info, MA
MARIA_KEY org_key;
DBUG_ENTER("_ma_ck_delete");
+ LINT_INIT_STRUCT(org_key);
+
save_key_data= key->data;
if (share->now_transactional)
{
=== modified file 'storage/maria/ma_loghandler.c'
--- a/storage/maria/ma_loghandler.c 2010-01-06 21:27:53 +0000
+++ b/storage/maria/ma_loghandler.c 2010-03-10 10:32:14 +0000
@@ -1394,6 +1394,7 @@ LSN translog_get_file_max_lsn_stored(uin
{
LOGHANDLER_FILE_INFO info;
+ LINT_INIT_STRUCT(info);
File fd= open_logfile_by_number_no_cache(file);
if ((fd < 0) ||
(translog_read_file_header(&info, fd) | my_close(fd, MYF(MY_WME))))
=== modified file 'storage/myisam/ft_stopwords.c'
--- a/storage/myisam/ft_stopwords.c 2010-01-28 14:49:14 +0000
+++ b/storage/myisam/ft_stopwords.c 2010-03-10 10:32:14 +0000
@@ -45,7 +45,7 @@ static int ft_add_stopword(const char *w
{
FT_STOPWORD sw;
return !w ||
- (((sw.len= (uint) strlen(sw.pos=w)) >= ft_min_word_len) &&
+ (((sw.len= (uint) strlen(sw.pos=(const uchar *)w)) >= ft_min_word_len) &&
(tree_insert(stopwords3, &sw, 0, stopwords3->custom_arg)==NULL));
}
=== modified file 'storage/myisam/mi_write.c'
--- a/storage/myisam/mi_write.c 2009-12-03 11:19:05 +0000
+++ b/storage/myisam/mi_write.c 2010-03-10 10:32:14 +0000
@@ -735,10 +735,12 @@ static uchar *_mi_find_last_pos(MI_KEYDE
}
end=page+length-key_ref_length;
+ DBUG_ASSERT(page < end);
*key='\0';
length=0;
lastpos=page;
- while (page < end)
+
+ do
{
prevpos=lastpos; lastpos=page;
last_length=length;
@@ -749,7 +751,8 @@ static uchar *_mi_find_last_pos(MI_KEYDE
my_errno=HA_ERR_CRASHED;
DBUG_RETURN(0);
}
- }
+ } while (page < end);
+
*return_key_length=last_length;
*after_key=lastpos;
DBUG_PRINT("exit",("returns: 0x%lx page: 0x%lx end: 0x%lx",
=== modified file 'storage/xtradb/btr/btr0cur.c'
--- a/storage/xtradb/btr/btr0cur.c 2009-11-13 21:26:08 +0000
+++ b/storage/xtradb/btr/btr0cur.c 2010-03-10 10:32:14 +0000
@@ -3233,7 +3233,7 @@ btr_estimate_number_of_different_key_val
ulint matched_bytes;
ib_int64_t n_recs = 0;
ib_int64_t* n_diff;
- ib_int64_t* n_not_nulls;
+ ib_int64_t* n_not_nulls= 0;
ullint n_sample_pages; /* number of pages to sample */
ulint not_empty_flag = 0;
ulint total_external_size = 0;
=== modified file 'support-files/compiler_warnings.supp'
--- a/support-files/compiler_warnings.supp 2010-01-28 14:49:14 +0000
+++ b/support-files/compiler_warnings.supp 2010-03-10 10:32:14 +0000
@@ -108,7 +108,7 @@ ha_pbxt\.cc : variable.*might be clobber
#
# Yassl
include/runtime.hpp: .*pure_error.*
-.*/extra/yassl/taocrypt/.*: comparison with string literal
+.*/extra/yassl/.*taocrypt/.*: comparison with string literal
.*/extra/yassl/taocrypt/src/blowfish\.cpp: array subscript is above array bounds
.*/extra/yassl/taocrypt/src/file\.cpp: ignoring return value
.*/extra/yassl/taocrypt/src/integer\.cpp: control reaches end of non-void function
=== modified file 'vio/viossl.c'
--- a/vio/viossl.c 2010-01-29 10:42:31 +0000
+++ b/vio/viossl.c 2010-03-10 10:32:14 +0000
@@ -75,9 +75,11 @@ report_errors(SSL* ssl)
if (ssl)
{
+#ifndef DBUG_OFF
int error= SSL_get_error(ssl, l);
DBUG_PRINT("error", ("error: %s (%d)",
ERR_error_string(error, buf), error));
+#endif
}
DBUG_PRINT("info", ("socket_errno: %d", socket_errno));
[Maria-developers] [Branch ~maria-captains/maria/5.1] Rev 2827: Automerge MySQL 5.1.44 merge into latest MariaDB trunk.
by noreply@launchpad.net 10 Mar '10
Merge authors:
<Dao-Gang.Qu(a)sun.com>
<Li-Bing.Song(a)sun.com>
Alexander Barkov <bar(a)mysql.com>
Alexander Nozdrin <alik(a)sun.com>
Alexey Kopytov (akopytov)...
------------------------------------------------------------
revno: 2827 [merge]
committer: knielsen(a)knielsen-hq.org
branch nick: mariadb-5.1
timestamp: Wed 2010-03-10 10:12:23 +0100
message:
Automerge MySQL 5.1.44 merge into latest MariaDB trunk.
added:
mysql-test/extra/rpl_tests/rpl_mixing_engines.inc
mysql-test/extra/rpl_tests/rpl_set_null.test
mysql-test/extra/rpl_tests/rpl_tmp_table_and_DDL.test
mysql-test/include/binlog_inject_error.inc
mysql-test/include/truncate_file.inc
mysql-test/r/innodb-autoinc-44030.result
mysql-test/r/sp_sync.result
mysql-test/std_data/bug47142_master-bin.000001
mysql-test/suite/binlog/r/binlog_write_error.result
mysql-test/suite/binlog/t/binlog_write_error.test
mysql-test/suite/ibmdb2i/r/ibmdb2i_bug_49329.result
mysql-test/suite/ibmdb2i/t/ibmdb2i_bug_49329.test
mysql-test/suite/ndb/r/ndb_tmp_table_and_DDL.result
mysql-test/suite/ndb/t/ndb_tmp_table_and_DDL.test
mysql-test/suite/rpl/r/rpl_geometry.result
mysql-test/suite/rpl/r/rpl_loaddata_concurrent.result
mysql-test/suite/rpl/r/rpl_manual_change_index_file.result
mysql-test/suite/rpl/r/rpl_set_null_innodb.result
mysql-test/suite/rpl/r/rpl_set_null_myisam.result
mysql-test/suite/rpl/r/rpl_stm_binlog_direct.result
mysql-test/suite/rpl/r/rpl_tmp_table_and_DDL.result
mysql-test/suite/rpl/t/rpl_geometry.test
mysql-test/suite/rpl/t/rpl_loaddata_concurrent.test
mysql-test/suite/rpl/t/rpl_manual_change_index_file.test
mysql-test/suite/rpl/t/rpl_set_null_innodb.test
mysql-test/suite/rpl/t/rpl_set_null_myisam.test
mysql-test/suite/rpl/t/rpl_stm_binlog_direct-master.opt
mysql-test/suite/rpl/t/rpl_stm_binlog_direct.test
mysql-test/suite/rpl/t/rpl_tmp_table_and_DDL.test
mysql-test/suite/rpl_ndb/r/rpl_ndb_set_null.result
mysql-test/suite/rpl_ndb/t/rpl_ndb_set_null.test
mysql-test/t/innodb-autoinc-44030.test
mysql-test/t/partition_innodb-master.opt
mysql-test/t/sp_sync.test
renamed:
mysql-test/suite/binlog/r/binlog_tbl_metadata.result => mysql-test/suite/rpl/r/rpl_row_tbl_metadata.result
mysql-test/suite/binlog/t/binlog_tbl_metadata.test => mysql-test/suite/rpl/t/rpl_row_tbl_metadata.test
modified:
client/client_priv.h
client/mysql.cc
client/mysql_upgrade.c
client/mysqladmin.cc
client/mysqlbinlog.cc
client/mysqldump.c
configure.in
extra/yassl/taocrypt/src/asn.cpp
include/config-win.h
include/m_string.h
include/my_global.h
include/my_no_pthread.h
include/my_pthread.h
include/my_stacktrace.h
include/my_sys.h
include/myisam.h
libmysql/libmysql.c
mysql-test/collections/default.experimental
mysql-test/extra/rpl_tests/rpl_loaddata.test
mysql-test/extra/rpl_tests/rpl_row_func003.test
mysql-test/include/kill_query.inc
mysql-test/include/setup_fake_relay_log.inc
mysql-test/lib/v1/mysql-test-run.pl
mysql-test/mysql-test-run.pl
mysql-test/r/alter_table.result
mysql-test/r/bug46080.result*
mysql-test/r/count_distinct.result
mysql-test/r/create.result
mysql-test/r/ctype_ucs.result
mysql-test/r/ctype_utf8.result
mysql-test/r/delete.result
mysql-test/r/fulltext.result
mysql-test/r/fulltext_order_by.result
mysql-test/r/func_concat.result
mysql-test/r/func_str.result
mysql-test/r/func_time.result
mysql-test/r/gis.result
mysql-test/r/information_schema.result
mysql-test/r/innodb-autoinc.result
mysql-test/r/join_outer.result
mysql-test/r/myisam.result
mysql-test/r/mysql.result
mysql-test/r/mysql_upgrade.result
mysql-test/r/mysqlbinlog.result
mysql-test/r/openssl_1.result
mysql-test/r/order_by.result
mysql-test/r/partition.result
mysql-test/r/partition_bug18198.result
mysql-test/r/partition_error.result
mysql-test/r/partition_innodb.result
mysql-test/r/partition_pruning.result
mysql-test/r/ps.result
mysql-test/r/ps_ddl.result
mysql-test/r/select.result
mysql-test/r/sp-ucs2.result
mysql-test/r/sp.result
mysql-test/r/subselect.result
mysql-test/r/union.result
mysql-test/r/user_var.result
mysql-test/r/variables.result
mysql-test/std_data/Index.xml
mysql-test/std_data/cacert.pem
mysql-test/std_data/client-cert.pem
mysql-test/std_data/client-key.pem
mysql-test/std_data/server-cert.pem
mysql-test/std_data/server-key.pem
mysql-test/std_data/server8k-cert.pem
mysql-test/std_data/server8k-key.pem
mysql-test/suite/binlog/r/binlog_index.result
mysql-test/suite/binlog/r/binlog_killed_simulate.result
mysql-test/suite/binlog/r/binlog_row_mix_innodb_myisam.result
mysql-test/suite/binlog/r/binlog_stm_blackhole.result
mysql-test/suite/binlog/r/binlog_stm_mix_innodb_myisam.result
mysql-test/suite/binlog/r/binlog_unsafe.result
mysql-test/suite/binlog/t/binlog_index.test
mysql-test/suite/binlog/t/binlog_unsafe.test
mysql-test/suite/parts/inc/part_blocked_sql_funcs_main.inc
mysql-test/suite/parts/inc/partition_timestamp.inc
mysql-test/suite/parts/r/part_blocked_sql_func_innodb.result
mysql-test/suite/parts/r/part_blocked_sql_func_myisam.result
mysql-test/suite/parts/r/partition_datetime_innodb.result
mysql-test/suite/parts/r/partition_datetime_myisam.result
mysql-test/suite/pbxt/r/partition_error.result
mysql-test/suite/pbxt/r/partition_pruning.result
mysql-test/suite/pbxt/t/partition_error.test
mysql-test/suite/rpl/r/rpl_create_if_not_exists.result
mysql-test/suite/rpl/r/rpl_do_grant.result
mysql-test/suite/rpl/r/rpl_drop_temp.result
mysql-test/suite/rpl/r/rpl_get_master_version_and_clock.result
mysql-test/suite/rpl/r/rpl_innodb_mixed_dml.result
mysql-test/suite/rpl/r/rpl_killed_ddl.result
mysql-test/suite/rpl/r/rpl_loaddata.result
mysql-test/suite/rpl/r/rpl_loaddata_fatal.result
mysql-test/suite/rpl/r/rpl_loaddata_map.result
mysql-test/suite/rpl/r/rpl_misc_functions.result
mysql-test/suite/rpl/r/rpl_nondeterministic_functions.result
mysql-test/suite/rpl/r/rpl_optimize.result
mysql-test/suite/rpl/r/rpl_row_func003.result
mysql-test/suite/rpl/r/rpl_row_mysqlbinlog.result
mysql-test/suite/rpl/r/rpl_sp.result
mysql-test/suite/rpl/r/rpl_stm_log.result
mysql-test/suite/rpl/r/rpl_stm_maria.result
mysql-test/suite/rpl/r/rpl_stm_until.result
mysql-test/suite/rpl/r/rpl_temporary.result
mysql-test/suite/rpl/t/rpl_circular_for_4_hosts.test
mysql-test/suite/rpl/t/rpl_create_if_not_exists.test
mysql-test/suite/rpl/t/rpl_do_grant.test
mysql-test/suite/rpl/t/rpl_drop_temp.test
mysql-test/suite/rpl/t/rpl_get_master_version_and_clock.test
mysql-test/suite/rpl/t/rpl_killed_ddl.test
mysql-test/suite/rpl/t/rpl_misc_functions.test
mysql-test/suite/rpl/t/rpl_nondeterministic_functions.test
mysql-test/suite/rpl/t/rpl_optimize.test
mysql-test/suite/rpl/t/rpl_stm_maria.test
mysql-test/suite/rpl/t/rpl_stm_until.test
mysql-test/suite/rpl/t/rpl_temporary.test
mysql-test/suite/rpl/t/rpl_timezone.test
mysql-test/suite/rpl/t/rpl_trigger.test
mysql-test/suite/rpl_ndb/r/rpl_ndb_func003.result
mysql-test/t/alter_table.test
mysql-test/t/bug46080.test
mysql-test/t/count_distinct.test
mysql-test/t/create.test
mysql-test/t/ctype_ucs.test
mysql-test/t/ctype_utf8.test
mysql-test/t/delete.test
mysql-test/t/disabled.def
mysql-test/t/fulltext.test
mysql-test/t/fulltext_order_by.test
mysql-test/t/func_concat.test
mysql-test/t/func_str.test
mysql-test/t/gis.test
mysql-test/t/information_schema.test
mysql-test/t/innodb-autoinc.test
mysql-test/t/join_outer.test
mysql-test/t/lock_multi.test
mysql-test/t/myisam.test
mysql-test/t/mysql.test
mysql-test/t/mysql_upgrade.test
mysql-test/t/mysqlbinlog.test
mysql-test/t/openssl_1.test
mysql-test/t/order_by.test
mysql-test/t/partition.test
mysql-test/t/partition_bug18198.test
mysql-test/t/partition_error.test
mysql-test/t/partition_innodb.test
mysql-test/t/partition_pruning.test
mysql-test/t/ps.test
mysql-test/t/ps_ddl.test
mysql-test/t/select.test
mysql-test/t/sp-ucs2.test
mysql-test/t/sp.test
mysql-test/t/subselect.test
mysql-test/t/union.test
mysql-test/t/user_var.test
mysql-test/t/variables.test
mysys/charset.c
mysys/default.c
mysys/mf_pack.c
mysys/my_getopt.c
mysys/my_init.c
mysys/my_thr_init.c
mysys/my_winthread.c
mysys/stacktrace.c
netware/libmysqlmain.c
scripts/mysql_system_tables_fix.sql
scripts/mysqld_multi.sh
server-tools/instance-manager/instance_map.cc
server-tools/instance-manager/listener.cc
server-tools/instance-manager/options.cc
server-tools/instance-manager/user_map.cc
sql/event_data_objects.cc
sql/event_db_repository.cc
sql/event_scheduler.cc*
sql/events.cc
sql/field.cc
sql/field.h
sql/filesort.cc
sql/ha_partition.cc
sql/ha_partition.h
sql/item.cc
sql/item.h
sql/item_cmpfunc.cc
sql/item_cmpfunc.h
sql/item_create.cc
sql/item_func.cc
sql/item_func.h
sql/item_strfunc.cc
sql/item_strfunc.h
sql/item_subselect.cc
sql/item_subselect.h
sql/item_timefunc.cc
sql/item_timefunc.h
sql/log.cc
sql/log.h
sql/log_event.cc
sql/log_event.h
sql/log_event_old.cc
sql/mysql_priv.h
sql/mysqld.cc
sql/rpl_injector.cc
sql/rpl_record.cc
sql/rpl_rli.cc
sql/rpl_rli.h
sql/rpl_utility.h
sql/set_var.cc
sql/share/errmsg.txt
sql/slave.cc
sql/sp.cc
sql/sp_head.cc
sql/sp_pcontext.h
sql/sql_acl.cc
sql/sql_base.cc
sql/sql_class.h
sql/sql_connect.cc
sql/sql_crypt.cc
sql/sql_crypt.h
sql/sql_db.cc
sql/sql_delete.cc
sql/sql_insert.cc
sql/sql_load.cc
sql/sql_parse.cc
sql/sql_partition.cc
sql/sql_partition.h
sql/sql_plugin.cc
sql/sql_prepare.cc
sql/sql_rename.cc
sql/sql_repl.cc
sql/sql_select.cc
sql/sql_select.h
sql/sql_servers.cc
sql/sql_show.cc
sql/sql_table.cc
sql/sql_tablespace.cc
sql/sql_test.cc
sql/sql_trigger.cc
sql/sql_udf.cc
sql/sql_union.cc
sql/sql_update.cc
sql/sql_view.cc
sql/sql_yacc.yy
sql/table.cc
storage/archive/ha_archive.cc
storage/ibmdb2i/db2i_constraints.cc
storage/ibmdb2i/ha_ibmdb2i.cc
storage/innobase/fil/fil0fil.c
storage/innobase/handler/ha_innodb.cc
storage/innobase/handler/ha_innodb.h
storage/innobase/include/fil0fil.h
storage/innobase/include/lock0lock.h
storage/innobase/include/mtr0mtr.h
storage/innobase/include/srv0srv.h
storage/innobase/lock/lock0lock.c
storage/innobase/log/log0log.c
storage/innobase/log/log0recv.c
storage/innobase/row/row0mysql.c
storage/innobase/srv/srv0srv.c
storage/innobase/srv/srv0start.c
storage/innodb_plugin/CMakeLists.txt
storage/innodb_plugin/handler/ha_innodb.cc
storage/myisam/mi_packrec.c
storage/myisam/mi_static.c
storage/myisam/myisamdef.h
storage/myisammrg/ha_myisammrg.cc
strings/Makefile.am
strings/ctype-ucs2.c
strings/strmov.c
support-files/Makefile.am
support-files/mysql.spec.sh
win/configure.js
mysql-test/suite/rpl/r/rpl_row_tbl_metadata.result
mysql-test/suite/rpl/t/rpl_row_tbl_metadata.test
The size of the diff (20019 lines) is larger than your specified limit of 5000 lines
--
lp:maria
https://code.launchpad.net/~maria-captains/maria/5.1
Your team Maria developers is subscribed to branch lp:maria.
To unsubscribe from this branch go to https://code.launchpad.net/~maria-captains/maria/5.1/+edit-subscription.
[Maria-developers] [Branch ~maria-captains/maria/5.1] Rev 2826: Fixes for two test failures in Buildbot.
by noreply@launchpad.net 10 Mar '10
------------------------------------------------------------
revno: 2826
committer: knielsen(a)knielsen-hq.org
branch nick: mariadb-5.1
timestamp: Wed 2010-03-10 10:11:02 +0100
message:
Fixes for two test failures in Buildbot.
- Adjust timing in test case, to avoid test failures caused by high load
on machines and consequent race conditions in the test case.
- Add another variant of Valgrind suppressions for memory leak in system
libraries when unloading dynamic object files.
modified:
mysql-test/r/information_schema.result
mysql-test/t/information_schema.test
mysql-test/valgrind.supp
--
lp:maria
https://code.launchpad.net/~maria-captains/maria/5.1
Your team Maria developers is subscribed to branch lp:maria.
To unsubscribe from this branch go to https://code.launchpad.net/~maria-captains/maria/5.1/+edit-subscription.
[Maria-developers] bzr commit into MariaDB 5.1, with Maria 1.5:maria branch (knielsen:2827)
by knielsen@knielsen-hq.org 10 Mar '10
#At lp:maria
2827 knielsen(a)knielsen-hq.org 2010-03-10 [merge]
Automerge MySQL 5.1.44 merge into latest MariaDB trunk.
added:
mysql-test/extra/rpl_tests/rpl_mixing_engines.inc
mysql-test/extra/rpl_tests/rpl_set_null.test
mysql-test/extra/rpl_tests/rpl_tmp_table_and_DDL.test
mysql-test/include/binlog_inject_error.inc
mysql-test/include/truncate_file.inc
mysql-test/r/innodb-autoinc-44030.result
mysql-test/r/sp_sync.result
mysql-test/std_data/bug47142_master-bin.000001
mysql-test/suite/binlog/r/binlog_write_error.result
mysql-test/suite/binlog/t/binlog_write_error.test
mysql-test/suite/ibmdb2i/r/ibmdb2i_bug_49329.result
mysql-test/suite/ibmdb2i/t/ibmdb2i_bug_49329.test
mysql-test/suite/ndb/r/ndb_tmp_table_and_DDL.result
mysql-test/suite/ndb/t/ndb_tmp_table_and_DDL.test
mysql-test/suite/rpl/r/rpl_geometry.result
mysql-test/suite/rpl/r/rpl_loaddata_concurrent.result
mysql-test/suite/rpl/r/rpl_manual_change_index_file.result
mysql-test/suite/rpl/r/rpl_set_null_innodb.result
mysql-test/suite/rpl/r/rpl_set_null_myisam.result
mysql-test/suite/rpl/r/rpl_stm_binlog_direct.result
mysql-test/suite/rpl/r/rpl_tmp_table_and_DDL.result
mysql-test/suite/rpl/t/rpl_geometry.test
mysql-test/suite/rpl/t/rpl_loaddata_concurrent.test
mysql-test/suite/rpl/t/rpl_manual_change_index_file.test
mysql-test/suite/rpl/t/rpl_set_null_innodb.test
mysql-test/suite/rpl/t/rpl_set_null_myisam.test
mysql-test/suite/rpl/t/rpl_stm_binlog_direct-master.opt
mysql-test/suite/rpl/t/rpl_stm_binlog_direct.test
mysql-test/suite/rpl/t/rpl_tmp_table_and_DDL.test
mysql-test/suite/rpl_ndb/r/rpl_ndb_set_null.result
mysql-test/suite/rpl_ndb/t/rpl_ndb_set_null.test
mysql-test/t/innodb-autoinc-44030.test
mysql-test/t/partition_innodb-master.opt
mysql-test/t/sp_sync.test
renamed:
mysql-test/suite/binlog/r/binlog_tbl_metadata.result => mysql-test/suite/rpl/r/rpl_row_tbl_metadata.result
mysql-test/suite/binlog/t/binlog_tbl_metadata.test => mysql-test/suite/rpl/t/rpl_row_tbl_metadata.test
modified:
client/client_priv.h
client/mysql.cc
client/mysql_upgrade.c
client/mysqladmin.cc
client/mysqlbinlog.cc
client/mysqldump.c
configure.in
extra/yassl/taocrypt/src/asn.cpp
include/config-win.h
include/m_string.h
include/my_global.h
include/my_no_pthread.h
include/my_pthread.h
include/my_stacktrace.h
include/my_sys.h
include/myisam.h
libmysql/libmysql.c
mysql-test/collections/default.experimental
mysql-test/extra/rpl_tests/rpl_loaddata.test
mysql-test/extra/rpl_tests/rpl_row_func003.test
mysql-test/include/kill_query.inc
mysql-test/include/setup_fake_relay_log.inc
mysql-test/lib/v1/mysql-test-run.pl
mysql-test/mysql-test-run.pl
mysql-test/r/alter_table.result
mysql-test/r/bug46080.result*
mysql-test/r/count_distinct.result
mysql-test/r/create.result
mysql-test/r/ctype_ucs.result
mysql-test/r/ctype_utf8.result
mysql-test/r/delete.result
mysql-test/r/fulltext.result
mysql-test/r/fulltext_order_by.result
mysql-test/r/func_concat.result
mysql-test/r/func_str.result
mysql-test/r/func_time.result
mysql-test/r/gis.result
mysql-test/r/information_schema.result
mysql-test/r/innodb-autoinc.result
mysql-test/r/join_outer.result
mysql-test/r/myisam.result
mysql-test/r/mysql.result
mysql-test/r/mysql_upgrade.result
mysql-test/r/mysqlbinlog.result
mysql-test/r/openssl_1.result
mysql-test/r/order_by.result
mysql-test/r/partition.result
mysql-test/r/partition_bug18198.result
mysql-test/r/partition_error.result
mysql-test/r/partition_innodb.result
mysql-test/r/partition_pruning.result
mysql-test/r/ps.result
mysql-test/r/ps_ddl.result
mysql-test/r/select.result
mysql-test/r/sp-ucs2.result
mysql-test/r/sp.result
mysql-test/r/subselect.result
mysql-test/r/union.result
mysql-test/r/user_var.result
mysql-test/r/variables.result
mysql-test/std_data/Index.xml
mysql-test/std_data/cacert.pem
mysql-test/std_data/client-cert.pem
mysql-test/std_data/client-key.pem
mysql-test/std_data/server-cert.pem
mysql-test/std_data/server-key.pem
mysql-test/std_data/server8k-cert.pem
mysql-test/std_data/server8k-key.pem
mysql-test/suite/binlog/r/binlog_index.result
mysql-test/suite/binlog/r/binlog_killed_simulate.result
mysql-test/suite/binlog/r/binlog_row_mix_innodb_myisam.result
mysql-test/suite/binlog/r/binlog_stm_blackhole.result
mysql-test/suite/binlog/r/binlog_stm_mix_innodb_myisam.result
mysql-test/suite/binlog/r/binlog_unsafe.result
mysql-test/suite/binlog/t/binlog_index.test
mysql-test/suite/binlog/t/binlog_unsafe.test
mysql-test/suite/parts/inc/part_blocked_sql_funcs_main.inc
mysql-test/suite/parts/inc/partition_timestamp.inc
mysql-test/suite/parts/r/part_blocked_sql_func_innodb.result
mysql-test/suite/parts/r/part_blocked_sql_func_myisam.result
mysql-test/suite/parts/r/partition_datetime_innodb.result
mysql-test/suite/parts/r/partition_datetime_myisam.result
mysql-test/suite/pbxt/r/partition_error.result
mysql-test/suite/pbxt/r/partition_pruning.result
mysql-test/suite/pbxt/t/partition_error.test
mysql-test/suite/rpl/r/rpl_create_if_not_exists.result
mysql-test/suite/rpl/r/rpl_do_grant.result
mysql-test/suite/rpl/r/rpl_drop_temp.result
mysql-test/suite/rpl/r/rpl_get_master_version_and_clock.result
mysql-test/suite/rpl/r/rpl_innodb_mixed_dml.result
mysql-test/suite/rpl/r/rpl_killed_ddl.result
mysql-test/suite/rpl/r/rpl_loaddata.result
mysql-test/suite/rpl/r/rpl_loaddata_fatal.result
mysql-test/suite/rpl/r/rpl_loaddata_map.result
mysql-test/suite/rpl/r/rpl_misc_functions.result
mysql-test/suite/rpl/r/rpl_nondeterministic_functions.result
mysql-test/suite/rpl/r/rpl_optimize.result
mysql-test/suite/rpl/r/rpl_row_func003.result
mysql-test/suite/rpl/r/rpl_row_mysqlbinlog.result
mysql-test/suite/rpl/r/rpl_sp.result
mysql-test/suite/rpl/r/rpl_stm_log.result
mysql-test/suite/rpl/r/rpl_stm_maria.result
mysql-test/suite/rpl/r/rpl_stm_until.result
mysql-test/suite/rpl/r/rpl_temporary.result
mysql-test/suite/rpl/t/rpl_circular_for_4_hosts.test
mysql-test/suite/rpl/t/rpl_create_if_not_exists.test
mysql-test/suite/rpl/t/rpl_do_grant.test
mysql-test/suite/rpl/t/rpl_drop_temp.test
mysql-test/suite/rpl/t/rpl_get_master_version_and_clock.test
mysql-test/suite/rpl/t/rpl_killed_ddl.test
mysql-test/suite/rpl/t/rpl_misc_functions.test
mysql-test/suite/rpl/t/rpl_nondeterministic_functions.test
mysql-test/suite/rpl/t/rpl_optimize.test
mysql-test/suite/rpl/t/rpl_stm_maria.test
mysql-test/suite/rpl/t/rpl_stm_until.test
mysql-test/suite/rpl/t/rpl_temporary.test
mysql-test/suite/rpl/t/rpl_timezone.test
mysql-test/suite/rpl/t/rpl_trigger.test
mysql-test/suite/rpl_ndb/r/rpl_ndb_func003.result
mysql-test/t/alter_table.test
mysql-test/t/bug46080.test
mysql-test/t/count_distinct.test
mysql-test/t/create.test
mysql-test/t/ctype_ucs.test
mysql-test/t/ctype_utf8.test
mysql-test/t/delete.test
mysql-test/t/disabled.def
mysql-test/t/fulltext.test
mysql-test/t/fulltext_order_by.test
mysql-test/t/func_concat.test
mysql-test/t/func_str.test
mysql-test/t/gis.test
mysql-test/t/information_schema.test
mysql-test/t/innodb-autoinc.test
mysql-test/t/join_outer.test
mysql-test/t/lock_multi.test
mysql-test/t/myisam.test
mysql-test/t/mysql.test
mysql-test/t/mysql_upgrade.test
mysql-test/t/mysqlbinlog.test
mysql-test/t/openssl_1.test
mysql-test/t/order_by.test
mysql-test/t/partition.test
mysql-test/t/partition_bug18198.test
mysql-test/t/partition_error.test
mysql-test/t/partition_innodb.test
mysql-test/t/partition_pruning.test
mysql-test/t/ps.test
mysql-test/t/ps_ddl.test
mysql-test/t/select.test
mysql-test/t/sp-ucs2.test
mysql-test/t/sp.test
mysql-test/t/subselect.test
mysql-test/t/union.test
mysql-test/t/user_var.test
mysql-test/t/variables.test
mysys/charset.c
mysys/default.c
mysys/mf_pack.c
mysys/my_getopt.c
mysys/my_init.c
mysys/my_thr_init.c
mysys/my_winthread.c
mysys/stacktrace.c
netware/libmysqlmain.c
scripts/mysql_system_tables_fix.sql
scripts/mysqld_multi.sh
server-tools/instance-manager/instance_map.cc
server-tools/instance-manager/listener.cc
server-tools/instance-manager/options.cc
server-tools/instance-manager/user_map.cc
sql/event_data_objects.cc
sql/event_db_repository.cc
sql/event_scheduler.cc*
sql/events.cc
sql/field.cc
sql/field.h
sql/filesort.cc
sql/ha_partition.cc
sql/ha_partition.h
sql/item.cc
sql/item.h
sql/item_cmpfunc.cc
sql/item_cmpfunc.h
sql/item_create.cc
sql/item_func.cc
sql/item_func.h
sql/item_strfunc.cc
sql/item_strfunc.h
sql/item_subselect.cc
sql/item_subselect.h
sql/item_timefunc.cc
sql/item_timefunc.h
sql/log.cc
sql/log.h
sql/log_event.cc
sql/log_event.h
sql/log_event_old.cc
sql/mysql_priv.h
sql/mysqld.cc
sql/rpl_injector.cc
sql/rpl_record.cc
sql/rpl_rli.cc
sql/rpl_rli.h
sql/rpl_utility.h
sql/set_var.cc
sql/share/errmsg.txt
sql/slave.cc
sql/sp.cc
sql/sp_head.cc
sql/sp_pcontext.h
sql/sql_acl.cc
sql/sql_base.cc
sql/sql_class.h
sql/sql_connect.cc
sql/sql_crypt.cc
sql/sql_crypt.h
sql/sql_db.cc
sql/sql_delete.cc
sql/sql_insert.cc
sql/sql_load.cc
sql/sql_parse.cc
sql/sql_partition.cc
sql/sql_partition.h
sql/sql_plugin.cc
sql/sql_prepare.cc
sql/sql_rename.cc
sql/sql_repl.cc
sql/sql_select.cc
sql/sql_select.h
sql/sql_servers.cc
sql/sql_show.cc
sql/sql_table.cc
sql/sql_tablespace.cc
sql/sql_test.cc
sql/sql_trigger.cc
sql/sql_udf.cc
sql/sql_union.cc
sql/sql_update.cc
sql/sql_view.cc
sql/sql_yacc.yy
sql/table.cc
storage/archive/ha_archive.cc
storage/ibmdb2i/db2i_constraints.cc
storage/ibmdb2i/ha_ibmdb2i.cc
storage/innobase/fil/fil0fil.c
storage/innobase/handler/ha_innodb.cc
storage/innobase/handler/ha_innodb.h
storage/innobase/include/fil0fil.h
storage/innobase/include/lock0lock.h
storage/innobase/include/mtr0mtr.h
storage/innobase/include/srv0srv.h
storage/innobase/lock/lock0lock.c
storage/innobase/log/log0log.c
storage/innobase/log/log0recv.c
storage/innobase/row/row0mysql.c
storage/innobase/srv/srv0srv.c
storage/innobase/srv/srv0start.c
storage/innodb_plugin/CMakeLists.txt
storage/innodb_plugin/handler/ha_innodb.cc
storage/myisam/mi_packrec.c
storage/myisam/mi_static.c
storage/myisam/myisamdef.h
storage/myisammrg/ha_myisammrg.cc
strings/Makefile.am
strings/ctype-ucs2.c
strings/strmov.c
support-files/Makefile.am
support-files/mysql.spec.sh
win/configure.js
mysql-test/suite/rpl/r/rpl_row_tbl_metadata.result
mysql-test/suite/rpl/t/rpl_row_tbl_metadata.test
=== modified file 'client/client_priv.h'
--- a/client/client_priv.h 2008-07-09 13:09:30 +0000
+++ b/client/client_priv.h 2010-03-04 08:03:07 +0000
@@ -31,6 +31,15 @@
# endif
#endif
+/* Version numbers for deprecation messages */
+#define VER_CELOSIA "5.6"
+
+#define WARN_DEPRECATED(Ver,Old,New) \
+ do { \
+ printf("Warning: The option '%s' is deprecated and will be removed " \
+ "in a future release. Please use %s instead.\n", (Old), (New)); \
+ } while(0);
+
enum options_client
{
OPT_CHARSETS_DIR=256, OPT_DEFAULT_CHARSET,
@@ -48,8 +57,8 @@ enum options_client
OPT_PROMPT, OPT_IGN_LINES,OPT_TRANSACTION,OPT_MYSQL_PROTOCOL,
OPT_SHARED_MEMORY_BASE_NAME, OPT_FRM, OPT_SKIP_OPTIMIZATION,
OPT_COMPATIBLE, OPT_RECONNECT, OPT_DELIMITER, OPT_SECURE_AUTH,
- OPT_OPEN_FILES_LIMIT, OPT_SET_CHARSET, OPT_CREATE_OPTIONS, OPT_SERVER_ARG,
- OPT_START_POSITION, OPT_STOP_POSITION, OPT_START_DATETIME, OPT_STOP_DATETIME,
+ OPT_OPEN_FILES_LIMIT, OPT_SET_CHARSET, OPT_SERVER_ARG,
+ OPT_POSITION, OPT_STOP_POSITION, OPT_START_DATETIME, OPT_STOP_DATETIME,
OPT_SIGINT_IGNORE, OPT_HEXBLOB, OPT_ORDER_BY_PRIMARY, OPT_COUNT,
#ifdef HAVE_NDBCLUSTER_DB
OPT_NDBCLUSTER, OPT_NDB_CONNECTSTRING,
@@ -81,5 +90,7 @@ enum options_client
OPT_DEBUG_INFO, OPT_DEBUG_CHECK, OPT_COLUMN_TYPES, OPT_ERROR_LOG_FILE,
OPT_WRITE_BINLOG, OPT_DUMP_DATE,
OPT_ABORT_SOURCE_ON_ERROR,
+ OPT_FIRST_SLAVE,
+ OPT_ALL,
OPT_MAX_CLIENT_OPTION
};
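As a side note on the WARN_DEPRECATED macro added above: it expands to a plain printf() and the Ver argument is not used in the message text. Below is a small, self-contained sketch that exercises the macro with the same arguments mysql.cc's OPT_NOTEE handler uses later in this diff; the standalone main() exists only for illustration.

/* Illustration only: the macro as added to client_priv.h, driven from a
   tiny standalone program.  Output goes to stdout, as in the clients. */
#include <stdio.h>

#define VER_CELOSIA "5.6"

#define WARN_DEPRECATED(Ver,Old,New) \
  do { \
    printf("Warning: The option '%s' is deprecated and will be removed " \
           "in a future release. Please use %s instead.\n", (Old), (New)); \
  } while(0);

int main(void)
{
  /* Prints: Warning: The option '--no-tee' is deprecated and will be
     removed in a future release. Please use --disable-tee instead. */
  WARN_DEPRECATED(VER_CELOSIA, "--no-tee", "--disable-tee");
  return 0;
}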
=== modified file 'client/mysql.cc'
--- a/client/mysql.cc 2010-01-15 15:27:55 +0000
+++ b/client/mysql.cc 2010-03-04 08:03:07 +0000
@@ -54,6 +54,9 @@ static char *server_version= NULL;
/* Array of options to pass to libemysqld */
#define MAX_SERVER_ARGS 64
+/* Version numbers for deprecation messages */
+#define VER_CELOSIA "5.6"
+
void* sql_alloc(unsigned size); // Don't use mysqld alloc for these
void sql_element_free(void *ptr);
#include "sql_string.h"
@@ -1349,7 +1352,7 @@ static struct my_option my_long_options[
(uchar**) &opt_rehash, (uchar**) &opt_rehash, 0, GET_BOOL, NO_ARG, 1, 0, 0, 0,
0, 0},
{"no-auto-rehash", 'A',
- "No automatic rehashing. One has to use 'rehash' to get table and field completion. This gives a quicker start of mysql and disables rehashing on reconnect. WARNING: options deprecated; use --disable-auto-rehash instead.",
+ "No automatic rehashing. One has to use 'rehash' to get table and field completion. This gives a quicker start of mysql and disables rehashing on reconnect.",
0, 0, 0, GET_NO_ARG, NO_ARG, 0, 0, 0, 0, 0, 0},
{"batch", 'B',
"Don't use history file. Disable interactive behavior. (Enables --silent)", 0, 0, 0, GET_NO_ARG, NO_ARG, 0, 0, 0, 0, 0, 0},
@@ -1418,7 +1421,7 @@ static struct my_option my_long_options[
{"line-numbers", OPT_LINE_NUMBERS, "Write line numbers for errors.",
(uchar**) &line_numbers, (uchar**) &line_numbers, 0, GET_BOOL,
NO_ARG, 1, 0, 0, 0, 0, 0},
- {"skip-line-numbers", 'L', "Don't write line number for errors. WARNING: -L is deprecated, use long version of this option instead.", 0, 0, 0, GET_NO_ARG,
+ {"skip-line-numbers", 'L', "Don't write line number for errors.", 0, 0, 0, GET_NO_ARG,
NO_ARG, 0, 0, 0, 0, 0, 0},
{"unbuffered", 'n', "Flush buffer after each query.", (uchar**) &unbuffered,
(uchar**) &unbuffered, 0, GET_BOOL, NO_ARG, 0, 0, 0, 0, 0, 0},
@@ -1426,7 +1429,7 @@ static struct my_option my_long_options[
(uchar**) &column_names, (uchar**) &column_names, 0, GET_BOOL,
NO_ARG, 1, 0, 0, 0, 0, 0},
{"skip-column-names", 'N',
- "Don't write column names in results. WARNING: -N is deprecated, use long version of this options instead.",
+ "Don't write column names in results.",
0, 0, 0, GET_NO_ARG, NO_ARG, 0, 0, 0, 0, 0, 0},
{"set-variable", 'O',
"Change the value of a variable. Please note that this option is deprecated; you can set variables directly with --variable-name=value.",
@@ -1633,7 +1636,7 @@ get_one_option(int optid, const struct m
init_tee(argument);
break;
case OPT_NOTEE:
- printf("WARNING: option deprecated; use --disable-tee instead.\n");
+ WARN_DEPRECATED(VER_CELOSIA, "--no-tee", "--disable-tee");
if (opt_outfile)
end_tee();
break;
@@ -1656,7 +1659,7 @@ get_one_option(int optid, const struct m
}
break;
case OPT_NOPAGER:
- printf("WARNING: option deprecated; use --disable-pager instead.\n");
+ WARN_DEPRECATED(VER_CELOSIA, "--no-pager", "--disable-pager");
opt_nopager= 1;
break;
case OPT_MYSQL_PROTOCOL:
@@ -1702,12 +1705,18 @@ get_one_option(int optid, const struct m
if (!(status.line_buff= batch_readline_command(status.line_buff, argument)))
return 1;
break;
+ case 'g':
+ WARN_DEPRECATED(VER_CELOSIA, "-g, --no-named-commands", "--skip-named-commands");
+ break;
case 'o':
if (argument == disabled_my_option)
one_database= 0;
else
one_database= skip_updates= 1;
break;
+ case 'O':
+ WARN_DEPRECATED(VER_CELOSIA, "-O, --set-variable", "--variable-name=value");
+ break;
case 'p':
if (argument == disabled_my_option)
argument= (char*) ""; // Don't require password
@@ -3530,7 +3539,8 @@ print_table_data_vertically(MYSQL_RES *r
for (uint off=0; off < mysql_num_fields(result); off++)
{
field= mysql_fetch_field(result);
- tee_fprintf(PAGER, "%*s: ",(int) max_length,field->name);
+ if (column_names)
+ tee_fprintf(PAGER, "%*s: ",(int) max_length,field->name);
if (cur[off])
{
unsigned int i;
@@ -4215,7 +4225,7 @@ char *get_arg(char *line, my_bool get_ne
if (*ptr == '\\' && ptr[1]) // escaped character
{
// Remove the backslash
- strmov(ptr, ptr+1);
+ strmov_overlapp(ptr, ptr+1);
}
else if ((!quoted && *ptr == ' ') || (quoted && *ptr == qtype))
{
=== modified file 'client/mysql_upgrade.c'
--- a/client/mysql_upgrade.c 2009-12-03 11:34:11 +0000
+++ b/client/mysql_upgrade.c 2010-03-04 08:03:07 +0000
@@ -776,6 +776,10 @@ static int run_sql_fix_privilege_tables(
found_real_errors++;
print_line(line);
}
+ else if (strncmp(line, "WARNING", 7) == 0)
+ {
+ print_line(line);
+ }
} while ((line= get_line(line)) && *line);
}
=== modified file 'client/mysqladmin.cc'
--- a/client/mysqladmin.cc 2009-11-14 19:33:59 +0000
+++ b/client/mysqladmin.cc 2010-03-04 08:03:07 +0000
@@ -282,6 +282,9 @@ get_one_option(int optid, const struct m
charsets_dir = argument;
#endif
break;
+ case 'O':
+ WARN_DEPRECATED(VER_CELOSIA, "--set-variable", "--variable-name=value");
+ break;
case OPT_MYSQL_PROTOCOL:
opt_protocol= find_type_or_exit(argument, &sql_protocol_typelib,
opt->name);
=== modified file 'client/mysqlbinlog.cc'
--- a/client/mysqlbinlog.cc 2010-01-09 09:04:51 +0000
+++ b/client/mysqlbinlog.cc 2010-03-04 08:03:07 +0000
@@ -41,6 +41,7 @@
#define CLIENT_CAPABILITIES (CLIENT_LONG_PASSWORD | CLIENT_LONG_FLAG | CLIENT_LOCAL_FILES)
+
char server_version[SERVER_VERSION_LENGTH];
ulong server_id = 0;
@@ -1060,7 +1061,7 @@ static struct my_option my_long_options[
"built-in default (" STRINGIFY_ARG(MYSQL_PORT) ").",
(uchar**) &port, (uchar**) &port, 0, GET_INT, REQUIRED_ARG,
0, 0, 0, 0, 0, 0},
- {"position", 'j', "Deprecated. Use --start-position instead.",
+ {"position", OPT_POSITION, "Deprecated. Use --start-position instead.",
(uchar**) &start_position, (uchar**) &start_position, 0, GET_ULL,
REQUIRED_ARG, BIN_LOG_HEADER_SIZE, BIN_LOG_HEADER_SIZE,
/* COM_BINLOG_DUMP accepts only 4 bytes for the position */
@@ -1103,7 +1104,7 @@ static struct my_option my_long_options[
"(you should probably use quotes for your shell to set it properly).",
(uchar**) &start_datetime_str, (uchar**) &start_datetime_str,
0, GET_STR_ALLOC, REQUIRED_ARG, 0, 0, 0, 0, 0, 0},
- {"start-position", OPT_START_POSITION,
+ {"start-position", 'j',
"Start reading the binlog at position N. Applies to the first binlog "
"passed on the command line.",
(uchar**) &start_position, (uchar**) &start_position, 0, GET_ULL,
@@ -1314,6 +1315,9 @@ get_one_option(int optid, const struct m
case 'R':
remote_opt= 1;
break;
+ case OPT_POSITION:
+ WARN_DEPRECATED(VER_CELOSIA, "--position", "--start-position");
+ break;
case OPT_MYSQL_PROTOCOL:
opt_protocol= find_type_or_exit(argument, &sql_protocol_typelib,
opt->name);
=== modified file 'client/mysqldump.c'
--- a/client/mysqldump.c 2009-10-15 21:38:29 +0000
+++ b/client/mysqldump.c 2010-03-04 08:03:07 +0000
@@ -179,7 +179,7 @@ HASH ignore_table;
static struct my_option my_long_options[] =
{
- {"all", 'a', "Deprecated. Use --create-options instead.",
+ {"all", OPT_ALL, "Deprecated. Use --create-options instead.",
(uchar**) &create_options, (uchar**) &create_options, 0, GET_BOOL, NO_ARG, 1,
0, 0, 0, 0, 0},
{"all-databases", 'A',
@@ -230,7 +230,7 @@ static struct my_option my_long_options[
{"compress", 'C', "Use compression in server/client protocol.",
(uchar**) &opt_compress, (uchar**) &opt_compress, 0, GET_BOOL, NO_ARG, 0, 0, 0,
0, 0, 0},
- {"create-options", OPT_CREATE_OPTIONS,
+ {"create-options", 'a',
"Include all MySQL specific create options.",
(uchar**) &create_options, (uchar**) &create_options, 0, GET_BOOL, NO_ARG, 1,
0, 0, 0, 0, 0},
@@ -268,7 +268,7 @@ static struct my_option my_long_options[
(uchar**) &opt_events, (uchar**) &opt_events, 0, GET_BOOL,
NO_ARG, 0, 0, 0, 0, 0, 0},
{"extended-insert", 'e',
- "Allows utilization of the new, much faster INSERT syntax.",
+ "Use multiple-row INSERT syntax that include several VALUES lists.",
(uchar**) &extended_insert, (uchar**) &extended_insert, 0, GET_BOOL, NO_ARG,
1, 0, 0, 0, 0, 0},
{"fields-terminated-by", OPT_FTB,
@@ -282,7 +282,7 @@ static struct my_option my_long_options[
(uchar**) &opt_enclosed, 0, GET_STR, REQUIRED_ARG, 0, 0, 0, 0 ,0, 0},
{"fields-escaped-by", OPT_ESC, "Fields in the i.file are escaped by ...",
(uchar**) &escaped, (uchar**) &escaped, 0, GET_STR, REQUIRED_ARG, 0, 0, 0, 0, 0, 0},
- {"first-slave", 'x', "Deprecated, renamed to --lock-all-tables.",
+ {"first-slave", OPT_FIRST_SLAVE, "Deprecated, renamed to --lock-all-tables.",
(uchar**) &opt_lock_all_tables, (uchar**) &opt_lock_all_tables, 0, GET_BOOL, NO_ARG,
0, 0, 0, 0, 0, 0},
{"flush-logs", 'F', "Flush logs file in server before starting dump. "
@@ -366,8 +366,7 @@ static struct my_option my_long_options[
NO_ARG, 0, 0, 0, 0, 0, 0},
{"no-data", 'd', "No row information.", (uchar**) &opt_no_data,
(uchar**) &opt_no_data, 0, GET_BOOL, NO_ARG, 0, 0, 0, 0, 0, 0},
- {"no-set-names", 'N',
- "Deprecated. Use --skip-set-charset instead.",
+ {"no-set-names", 'N',"Suppress the SET NAMES statement",
0, 0, 0, GET_NO_ARG, NO_ARG, 0, 0, 0, 0, 0, 0},
{"opt", OPT_OPTIMIZE,
"Same as --add-drop-table, --add-locks, --create-options, --quick, --extended-insert, --lock-tables, --set-charset, and --disable-keys. Enabled by default, disable with --skip-opt.",
@@ -760,6 +759,15 @@ get_one_option(int optid, const struct m
case '?':
usage();
exit(0);
+ case 'O':
+ WARN_DEPRECATED(VER_CELOSIA, "--set-variable", "--variable-name=value");
+ break;
+ case (int) OPT_ALL:
+ WARN_DEPRECATED(VER_CELOSIA, "--all", "--create-options");
+ break;
+ case (int) OPT_FIRST_SLAVE:
+ WARN_DEPRECATED(VER_CELOSIA, "--first-slave", "--lock-all-tables");
+ break;
case (int) OPT_MASTER_DATA:
if (!argument) /* work like in old versions */
opt_master_data= MYSQL_OPT_MASTER_DATA_EFFECTIVE_SQL;
@@ -808,7 +816,7 @@ get_one_option(int optid, const struct m
&err_ptr, &err_len);
if (err_len)
{
- strmake(buff, err_ptr, min(sizeof(buff), err_len));
+ strmake(buff, err_ptr, min(sizeof(buff) - 1, err_len));
fprintf(stderr, "Invalid mode to --compatible: %s\n", buff);
exit(1);
}
@@ -4486,7 +4494,7 @@ static ulong find_set(TYPELIB *lib, cons
for (; pos != end && *pos != ','; pos++) ;
var_len= (uint) (pos - start);
- strmake(buff, start, min(sizeof(buff), var_len));
+ strmake(buff, start, min(sizeof(buff) - 1, var_len));
find= find_type(buff, lib, var_len);
if (!find)
{
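Both mysqldump.c hunks above cap the copy at sizeof(buff) - 1 rather than sizeof(buff): strmake() copies at most `length` characters and then always appends a terminating NUL, so the destination needs length + 1 bytes (the libmysql.c comment later in this diff makes the same point). Here is a standalone illustration using a simplified strmake-like helper, not the real strings/ implementation.

/* Simplified stand-in for strmake() to show the off-by-one: the helper
   writes up to `length` characters plus one '\0', so the destination
   must have room for length + 1 bytes. */
#include <stdio.h>
#include <string.h>

static char *strmake_sketch(char *dst, const char *src, size_t length)
{
  size_t n= strlen(src);
  if (n > length)
    n= length;
  memcpy(dst, src, n);
  dst[n]= '\0';                      /* one byte beyond the copied data */
  return dst + n;
}

int main(void)
{
  char buff[8];
  const char *mode= "0123456789";
  /* Capping at sizeof(buff) - 1 leaves room for the trailing '\0';
     passing sizeof(buff) could write one byte past the end of buff. */
  strmake_sketch(buff, mode, sizeof(buff) - 1);
  printf("%s\n", buff);              /* prints 0123456 */
  return 0;
}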
=== modified file 'configure.in'
--- a/configure.in 2010-01-29 20:37:22 +0000
+++ b/configure.in 2010-03-04 08:03:07 +0000
@@ -1,17 +1,22 @@
dnl -*- ksh -*-
dnl Process this file with autoconf to produce a configure script.
-AC_PREREQ(2.52)dnl Minimum Autoconf version required.
+# Minimum Autoconf version required.
+AC_PREREQ(2.59)
-AC_INIT(sql/mysqld.cc)
-AC_CANONICAL_SYSTEM
-# The Docs Makefile.am parses this line!
-# remember to also update version.c in ndb
-#
+# Remember to also update version.c in ndb.
# When changing major version number please also check switch statement
# in mysqlbinlog::check_master_version().
-AM_INIT_AUTOMAKE(mysql, 5.1.42-MariaDB)
-AM_CONFIG_HEADER([include/config.h:config.h.in])
+AC_INIT([MariaDB Server], [5.1.44-MariaDB], [], [mysql])
+AC_CONFIG_SRCDIR([sql/mysqld.cc])
+AC_CANONICAL_SYSTEM
+# USTAR format gives us the possibility to store longer path names in
+# TAR files, the path name is split into two parts, a 155 chacater
+# first part and a 100 character second part.
+AM_INIT_AUTOMAKE([1.9 tar-ustar])
+AC_PROG_LIBTOOL
+
+AM_CONFIG_HEADER([include/config.h])
# Request support for automake silent-rules if available.
# Default to verbose output. One can use the configure-time
@@ -31,12 +36,14 @@ NDB_SHARED_LIB_VERSION=$NDB_SHARED_LIB_M
# Remember that regexps needs to quote [ and ] since this is run through m4
# We take some made up examples
#
-# VERSION 5.1.40sp1-alpha 5.0.34a
-# MYSQL_NO_DASH_VERSION 5.1.40sp1 5.0.34a
-# MYSQL_NUMERIC_VERSION 5.1.40 5.0.34
-# MYSQL_BASE_VERSION 5.1 5.0
-# MYSQL_VERSION_ID 50140 50034
+# VERSION 5.1.40sp1-alpha 5.0.34a 5.5.1-m2
+# MYSQL_U_SCORE_VERSION 5.1.40sp1_alpha 5.0.34a 5.5.1_m2
+# MYSQL_NO_DASH_VERSION 5.1.40sp1 5.0.34a 5.5.1
+# MYSQL_NUMERIC_VERSION 5.1.40 5.0.34 5.5.1
+# MYSQL_BASE_VERSION 5.1 5.0 5.5
+# MYSQL_VERSION_ID 50140 50034 50501
#
+MYSQL_U_SCORE_VERSION=`echo $VERSION | sed -e "s|-|_|"`
MYSQL_NO_DASH_VERSION=`echo $VERSION | sed -e "s|-.*$||"`
MYSQL_NUMERIC_VERSION=`echo $MYSQL_NO_DASH_VERSION | sed -e "s|[[a-z]][[a-z0-9]]*$||"`
MYSQL_BASE_VERSION=`echo $MYSQL_NUMERIC_VERSION | sed -e "s|\.[[^.]]*$||"`
@@ -74,6 +81,7 @@ romanian russian serbian slovak spanish
#####
#####
+AC_SUBST(MYSQL_U_SCORE_VERSION)
AC_SUBST(MYSQL_NO_DASH_VERSION)
AC_SUBST(MYSQL_BASE_VERSION)
AC_SUBST(MYSQL_VERSION_ID)
@@ -2095,7 +2103,7 @@ AC_CHECK_FUNCS(alarm bcmp bfill bmove bs
sighold sigset sigthreadmask port_create sleep thr_yield \
snprintf socket stpcpy strcasecmp strerror strsignal strnlen strpbrk strstr \
strtol strtoll strtoul strtoull tell tempnam thr_setconcurrency vidattr \
- posix_fallocate backtrace backtrace_symbols backtrace_symbols_fd)
+ posix_fallocate backtrace backtrace_symbols backtrace_symbols_fd printstack)
#
#
@@ -2938,7 +2946,8 @@ echo " * Community Features: $E
echo ""
echo "---"
-# The following text is checked in ./Do-compile to verify that configure
+# The first line "Thank you ..." is checked in ./Do-compile to verify that configure
# ended sucessfully - don't remove it.
+echo
echo "Thank you for choosing MariaDB!"
echo
=== modified file 'extra/yassl/taocrypt/src/asn.cpp'
--- a/extra/yassl/taocrypt/src/asn.cpp 2010-01-27 10:38:29 +0000
+++ b/extra/yassl/taocrypt/src/asn.cpp 2010-03-04 08:03:07 +0000
@@ -652,22 +652,20 @@ word32 CertDecoder::GetDigest()
}
-// memory length checked add tag to buffer
-char* CertDecoder::AddTag(char* ptr, const char* buf_end, const char* tag_name,
- word32 tag_name_length, word32 tag_value_length)
+char *CertDecoder::AddTag(char *ptr, const char *buf_end,
+ const char *tag_name, word32 tag_name_length,
+ word32 tag_value_length)
{
- if (ptr + tag_name_length + tag_value_length > buf_end) {
- source_.SetError(CONTENT_E);
- return 0;
- }
-
- memcpy(ptr, tag_name, tag_name_length);
- ptr += tag_name_length;
-
- memcpy(ptr, source_.get_current(), tag_value_length);
- ptr += tag_value_length;
-
- return ptr;
+ if (ptr + tag_name_length + tag_value_length > buf_end)
+ return 0;
+
+ memcpy(ptr, tag_name, tag_name_length);
+ ptr+= tag_name_length;
+
+ memcpy(ptr, source_.get_current(), tag_value_length);
+ ptr+= tag_value_length;
+
+ return ptr;
}
@@ -680,19 +678,18 @@ void CertDecoder::GetName(NameType nt)
word32 length = GetSequence(); // length of all distinguished names
if (length >= ASN_NAME_MAX)
- return;
+ goto err;
length += source_.get_index();
- char* ptr;
- char* buf_end;
+ char *ptr, *buf_end;
if (nt == ISSUER) {
- ptr = issuer_;
- buf_end = ptr + sizeof(issuer_) - 1; // 1 byte for trailing 0
+ ptr= issuer_;
+ buf_end= ptr + sizeof(issuer_) - 1; // 1 byte for trailing 0
}
else {
- ptr = subject_;
- buf_end = ptr + sizeof(subject_) - 1; // 1 byte for trailing 0
+ ptr= subject_;
+ buf_end= ptr + sizeof(subject_) - 1; // 1 byte for trailing 0
}
while (source_.get_index() < length) {
@@ -718,32 +715,32 @@ void CertDecoder::GetName(NameType nt)
switch (id) {
case COMMON_NAME:
- if (!(ptr = AddTag(ptr, buf_end, "/CN=", 4, strLen)))
- return;
+ if (!(ptr= AddTag(ptr, buf_end, "/CN=", 4, strLen)))
+ goto err;
break;
case SUR_NAME:
- if (!(ptr = AddTag(ptr, buf_end, "/SN=", 4, strLen)))
- return;
+ if (!(ptr= AddTag(ptr, buf_end, "/SN=", 4, strLen)))
+ goto err;
break;
case COUNTRY_NAME:
- if (!(ptr = AddTag(ptr, buf_end, "/C=", 3, strLen)))
- return;
+ if (!(ptr= AddTag(ptr, buf_end, "/C=", 3, strLen)))
+ goto err;
break;
case LOCALITY_NAME:
- if (!(ptr = AddTag(ptr, buf_end, "/L=", 3, strLen)))
- return;
+ if (!(ptr= AddTag(ptr, buf_end, "/L=", 3, strLen)))
+ goto err;
break;
case STATE_NAME:
- if (!(ptr = AddTag(ptr, buf_end, "/ST=", 4, strLen)))
- return;
+ if (!(ptr= AddTag(ptr, buf_end, "/ST=", 4, strLen)))
+ goto err;
break;
case ORG_NAME:
- if (!(ptr = AddTag(ptr, buf_end, "/O=", 3, strLen)))
- return;
+ if (!(ptr= AddTag(ptr, buf_end, "/O=", 3, strLen)))
+ goto err;
break;
case ORGUNIT_NAME:
- if (!(ptr = AddTag(ptr, buf_end, "/OU=", 4, strLen)))
- return;
+ if (!(ptr= AddTag(ptr, buf_end, "/OU=", 4, strLen)))
+ goto err;
break;
}
@@ -758,21 +755,20 @@ void CertDecoder::GetName(NameType nt)
source_.advance(oidSz + 1);
word32 length = GetLength(source_);
- if (email) {
- if (!(ptr = AddTag(ptr, buf_end, "/emailAddress=", 14, length)))
- return;
- }
+ if (email && !(ptr= AddTag(ptr, buf_end, "/emailAddress=", 14, length)))
+ goto err;
source_.advance(length);
}
}
+ *ptr= 0;
- *ptr = 0;
-
- if (nt == ISSUER)
- sha.Final(issuerHash_);
- else
- sha.Final(subjectHash_);
+ sha.Final(nt == ISSUER ? issuerHash_ : subjectHash_);
+
+ return;
+
+err:
+ source_.SetError(CONTENT_E);
}
=== modified file 'include/config-win.h'
--- a/include/config-win.h 2009-09-07 20:50:10 +0000
+++ b/include/config-win.h 2010-03-04 08:03:07 +0000
@@ -192,7 +192,7 @@ typedef SSIZE_T ssize_t;
#define isnan(X) _isnan(X)
#define finite(X) _finite(X)
-#ifndef UNDEF_THREAD_HACK
+#ifndef MYSQL_CLIENT_NO_THREADS
#define THREAD
#endif
#define VOID_SIGHANDLER
=== modified file 'include/m_string.h'
--- a/include/m_string.h 2009-08-13 21:12:12 +0000
+++ b/include/m_string.h 2010-03-04 08:03:07 +0000
@@ -95,13 +95,7 @@ extern char NEAR _dig_vec_lower[];
/* Defined in strtod.c */
extern const double log_10[309];
-#ifdef BAD_STRING_COMPILER
-#define strmov(A,B) (memccpy(A,B,0,INT_MAX)-1)
-#else
extern char *strmov_overlapp(char *dest, const char *src);
-/* Warning: the following is likely not to work: */
-#define strmake_overlapp(A,B,C) strmake(A,B,C)
-#endif
#ifdef BAD_MEMCPY /* Problem with gcc on Alpha */
#define memcpy_fixed(A,B,C) bmove((A),(B),(C))
@@ -162,9 +156,6 @@ extern size_t strinstr(const char *str,c
extern size_t r_strinstr(const char *str, size_t from, const char *search);
extern char *strkey(char *dst,char *head,char *tail,char *flags);
extern char *strmake(char *dst,const char *src,size_t length);
-#ifndef strmake_overlapp
-extern char *strmake_overlapp(char *dst,const char *src, size_t length);
-#endif
#ifndef strmov
extern char *strmov(char *dst,const char *src);
=== modified file 'include/my_global.h'
--- a/include/my_global.h 2009-12-03 11:19:05 +0000
+++ b/include/my_global.h 2010-03-04 08:03:07 +0000
@@ -889,7 +889,7 @@ typedef SOCKET_SIZE_TYPE size_socket;
#define FLT_MAX ((float)3.40282346638528860e+38)
#endif
#ifndef SIZE_T_MAX
-#define SIZE_T_MAX ~((size_t) 0)
+#define SIZE_T_MAX (~((size_t) 0))
#endif
#ifndef isfinite
=== modified file 'include/my_no_pthread.h'
--- a/include/my_no_pthread.h 2006-12-23 19:20:40 +0000
+++ b/include/my_no_pthread.h 2009-12-12 18:11:25 +0000
@@ -47,4 +47,12 @@
#define rw_unlock(A)
#define rwlock_destroy(A)
+typedef int my_pthread_once_t;
+#define MY_PTHREAD_ONCE_INIT 0
+#define MY_PTHREAD_ONCE_DONE 1
+
+#define my_pthread_once(C,F) do { \
+ if (*(C) != MY_PTHREAD_ONCE_DONE) { F(); *(C)= MY_PTHREAD_ONCE_DONE; } \
+ } while(0)
+
#endif
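The block above gives non-threaded builds a minimal my_pthread_once(), so one-time initialization can be written the same way with or without threads. A standalone usage sketch follows, with a made-up init routine; the actual callers are elsewhere in this changeset and are not shown here.

/* Usage sketch only (names are invented): one-time initialization guarded
   by the my_pthread_once() fallback defined above for non-threaded builds. */
#include <stdio.h>

/* In a non-threaded build these come from my_no_pthread.h: */
typedef int my_pthread_once_t;
#define MY_PTHREAD_ONCE_INIT 0
#define MY_PTHREAD_ONCE_DONE 1
#define my_pthread_once(C,F) do { \
    if (*(C) != MY_PTHREAD_ONCE_DONE) { F(); *(C)= MY_PTHREAD_ONCE_DONE; } \
  } while(0)

static my_pthread_once_t init_done= MY_PTHREAD_ONCE_INIT;

static void init_tables(void)                 /* hypothetical init routine */
{
  puts("initialized once");
}

int main(void)
{
  my_pthread_once(&init_done, init_tables);   /* runs init_tables() */
  my_pthread_once(&init_done, init_tables);   /* no-op: already done */
  return 0;
}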
=== modified file 'include/my_pthread.h'
--- a/include/my_pthread.h 2010-01-14 16:51:00 +0000
+++ b/include/my_pthread.h 2010-03-04 08:03:07 +0000
@@ -69,6 +69,11 @@ typedef int pthread_mutexattr_t;
#define pthread_handler_t EXTERNC void * __cdecl
typedef void * (__cdecl *pthread_handler)(void *);
+typedef volatile LONG my_pthread_once_t;
+#define MY_PTHREAD_ONCE_INIT 0
+#define MY_PTHREAD_ONCE_INPROGRESS 1
+#define MY_PTHREAD_ONCE_DONE 2
+
/*
Struct and macros to be used in combination with the
windows implementation of pthread_cond_timedwait
@@ -116,6 +121,7 @@ int pthread_attr_init(pthread_attr_t *co
int pthread_attr_setstacksize(pthread_attr_t *connect_att,DWORD stack);
int pthread_attr_setprio(pthread_attr_t *connect_att,int priority);
int pthread_attr_destroy(pthread_attr_t *connect_att);
+int my_pthread_once(my_pthread_once_t *once_control,void (*init_routine)(void));
struct tm *localtime_r(const time_t *timep,struct tm *tmp);
struct tm *gmtime_r(const time_t *timep,struct tm *tmp);
@@ -215,6 +221,10 @@ extern int my_pthread_getprio(pthread_t
#define pthread_handler_t EXTERNC void *
typedef void *(* pthread_handler)(void *);
+#define my_pthread_once_t pthread_once_t
+#define MY_PTHREAD_ONCE_INIT PTHREAD_ONCE_INIT
+#define my_pthread_once(C,F) pthread_once(C,F)
+
/* Test first for RTS or FSU threads */
#if defined(PTHREAD_SCOPE_GLOBAL) && !defined(PTHREAD_SCOPE_SYSTEM)
=== modified file 'include/my_stacktrace.h'
--- a/include/my_stacktrace.h 2008-06-19 14:02:32 +0000
+++ b/include/my_stacktrace.h 2010-01-27 10:42:20 +0000
@@ -23,7 +23,7 @@
(defined(__alpha__) && defined(__GNUC__))
#define HAVE_STACKTRACE 1
#endif
-#elif defined(__WIN__)
+#elif defined(__WIN__) || defined(HAVE_PRINTSTACK)
#define HAVE_STACKTRACE 1
#endif
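With this change, platforms that lack glibc's backtrace() but provide printstack() — notably Solaris, which the new printstack check added to configure.in earlier in this diff targets — also report HAVE_STACKTRACE. A rough sketch of the kind of fallback this enables is below; it is illustrative only, since the corresponding mysys/stacktrace.c change is not shown in this truncated diff.

/* Sketch only: a printstack()-based stack dump as available on Solaris.
   This is not the server's actual code; it just shows the API the new
   configure check probes for. */
#include <stdio.h>

#ifdef HAVE_PRINTSTACK
#include <ucontext.h>                /* Solaris declares printstack() here */

void print_stacktrace_sketch(void)
{
  printstack(fileno(stderr));        /* write the current call stack to stderr */
}
#endif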
=== modified file 'include/my_sys.h'
--- a/include/my_sys.h 2010-03-09 19:22:24 +0000
+++ b/include/my_sys.h 2010-03-10 09:12:23 +0000
@@ -999,7 +999,6 @@ extern my_bool resolve_collation(const c
CHARSET_INFO *default_cl,
CHARSET_INFO **cl);
-extern void free_charsets(void);
extern char *get_charsets_dir(char *buf);
extern my_bool my_charset_same(CHARSET_INFO *cs1, CHARSET_INFO *cs2);
extern my_bool init_compiled_charsets(myf flags);
=== modified file 'include/myisam.h'
--- a/include/myisam.h 2009-12-03 11:19:05 +0000
+++ b/include/myisam.h 2010-03-04 08:03:07 +0000
@@ -251,6 +251,8 @@ extern ulong myisam_bulk_insert_tree_siz
/* usually used to check if a symlink points into the mysql data home */
/* which is normally forbidden */
extern int (*myisam_test_invalid_symlink)(const char *filename);
+extern ulonglong myisam_mmap_size, myisam_mmap_used;
+extern pthread_mutex_t THR_LOCK_myisam_mmap;
/* Prototypes for myisam-functions */
@@ -296,6 +298,7 @@ extern int mi_delete_all_rows(struct st_
extern ulong _mi_calc_blob_length(uint length , const uchar *pos);
extern uint mi_get_pointer_length(ulonglong file_length, uint def);
+#define MEMMAP_EXTRA_MARGIN 7 /* Write this as a suffix for mmap file */
/* this is used to pass to mysql_myisamchk_table */
#define MYISAMCHK_REPAIR 1 /* equivalent to myisamchk -r */
=== modified file 'libmysql/libmysql.c'
--- a/libmysql/libmysql.c 2010-01-15 15:27:55 +0000
+++ b/libmysql/libmysql.c 2010-03-04 08:03:07 +0000
@@ -211,7 +211,6 @@ void STDCALL mysql_server_end()
}
else
{
- free_charsets();
mysql_thread_end();
}
@@ -719,7 +718,10 @@ my_bool STDCALL mysql_change_user(MYSQL
if (!passwd)
passwd="";
- /* Store user into the buffer */
+ /*
+ Store user into the buffer.
+ Advance position as strmake returns a pointer to the closing NUL.
+ */
end= strmake(end, user, USERNAME_LENGTH) + 1;
/* write scrambled password according to server capabilities */
@@ -1269,7 +1271,7 @@ mysql_list_fields(MYSQL *mysql, const ch
{
MYSQL_RES *result;
MYSQL_FIELD *fields;
- char buff[257],*end;
+ char buff[258],*end;
DBUG_ENTER("mysql_list_fields");
DBUG_PRINT("enter",("table: '%s' wild: '%s'",table,wild ? wild : ""));
@@ -2284,7 +2286,7 @@ mysql_stmt_param_metadata(MYSQL_STMT *st
/* Store type of parameter in network buffer. */
-static void store_param_type(uchar **pos, MYSQL_BIND *param)
+static void store_param_type(unsigned char **pos, MYSQL_BIND *param)
{
uint typecode= param->buffer_type | (param->is_unsigned ? 32768 : 0);
int2store(*pos, typecode);
=== modified file 'mysql-test/collections/default.experimental'
--- a/mysql-test/collections/default.experimental 2009-12-02 09:47:49 +0000
+++ b/mysql-test/collections/default.experimental 2010-02-01 12:05:21 +0000
@@ -14,13 +14,11 @@ funcs_2.ndb_charset
main.ctype_gbk_binlog @solaris # Bug#46010: main.ctype_gbk_binlog fails sporadically : Table 't2' already exists
main.plugin_load @solaris # Bug#42144
+main.outfile_loaddata @solaris # joro : Bug #46895
ndb.* # joro : NDB tests marked as experimental as agreed with bochklin
-rpl.rpl_cross_version* # Bug#48340 2009-12-01 Daogang rpl_cross_version: Found warnings/errors in server log file!
-rpl.rpl_get_master_version_and_clock* # Bug #49191 2009-12-01 Daogang rpl_get_master_version_and_clock failed on PB2: COM_REGISTER_SLAVE failed
rpl.rpl_innodb_bug28430* @solaris # Bug#46029
-rpl.rpl_trigger* # Bug#47810 2009-10-04 joro rpl.rpl_trigger.test fails with valgrind errors with the innodb plugin
rpl_ndb.* # joro : NDB tests marked as experimental as agreed with bochklin
rpl_ndb.rpl_ndb_log # Bug#38998
=== modified file 'mysql-test/extra/rpl_tests/rpl_loaddata.test'
--- a/mysql-test/extra/rpl_tests/rpl_loaddata.test 2009-12-08 09:26:11 +0000
+++ b/mysql-test/extra/rpl_tests/rpl_loaddata.test 2010-01-13 10:28:42 +0000
@@ -21,14 +21,26 @@ connection slave;
reset master;
connection master;
+# MTR is not case-sensitive.
+let $lower_stmt_head= load data;
+let $UPPER_STMT_HEAD= LOAD DATA;
+if (`SELECT '$lock_option' <> ''`)
+{
+ #if $lock_option is null, an extra blank is added into the statement,
+ #this will change the result of rpl_loaddata test case. so $lock_option
+ #is set only when it is not null.
+ let $lower_stmt_head= load data $lock_option;
+ let $UPPER_STMT_HEAD= LOAD DATA $lock_option;
+}
+
select last_insert_id();
create table t1(a int not null auto_increment, b int, primary key(a) );
-load data infile '../../std_data/rpl_loaddata.dat' into table t1;
+eval $lower_stmt_head infile '../../std_data/rpl_loaddata.dat' into table t1;
# verify that LAST_INSERT_ID() is set by LOAD DATA INFILE
select last_insert_id();
create temporary table t2 (day date,id int(9),category enum('a','b','c'),name varchar(60));
-load data infile '../../std_data/rpl_loaddata2.dat' into table t2 fields terminated by ',' optionally enclosed by '%' escaped by '@' lines terminated by '\n##\n' starting by '>' ignore 1 lines;
+eval $lower_stmt_head infile '../../std_data/rpl_loaddata2.dat' into table t2 fields terminated by ',' optionally enclosed by '%' escaped by '@' lines terminated by '\n##\n' starting by '>' ignore 1 lines;
create table t3 (day date,id int(9),category enum('a','b','c'),name varchar(60));
insert into t3 select * from t2;
@@ -56,7 +68,7 @@ sync_with_master;
insert into t1 values(1,10);
connection master;
-load data infile '../../std_data/rpl_loaddata.dat' into table t1;
+eval $lower_stmt_head infile '../../std_data/rpl_loaddata.dat' into table t1;
save_master_pos;
connection slave;
@@ -70,9 +82,11 @@ connection slave;
set global sql_slave_skip_counter=1;
start slave;
sync_with_master;
---replace_result $MASTER_MYPORT MASTER_PORT
---replace_column 1 # 8 # 9 # 16 # 23 # 33 #
-show slave status;
+let $last_error= query_get_value(SHOW SLAVE STATUS, Last_SQL_Errno, 1);
+echo Last_SQL_Errno=$last_error;
+let $last_error= query_get_value(SHOW SLAVE STATUS, Last_SQL_Error, 1);
+echo Last_SQL_Error;
+echo $last_error;
# Trigger error again to test CHANGE MASTER
@@ -80,7 +94,7 @@ connection master;
set sql_log_bin=0;
delete from t1;
set sql_log_bin=1;
-load data infile '../../std_data/rpl_loaddata.dat' into table t1;
+eval $lower_stmt_head infile '../../std_data/rpl_loaddata.dat' into table t1;
save_master_pos;
connection slave;
# The SQL slave thread should be stopped now.
@@ -92,9 +106,11 @@ connection slave;
stop slave;
change master to master_user='test';
change master to master_user='root';
---replace_result $MASTER_MYPORT MASTER_PORT
---replace_column 1 # 8 # 9 # 16 # 23 # 33 #
-show slave status;
+let $last_error= query_get_value(SHOW SLAVE STATUS, Last_SQL_Errno, 1);
+echo Last_SQL_Errno=$last_error;
+let $last_error= query_get_value(SHOW SLAVE STATUS, Last_SQL_Error, 1);
+echo Last_SQL_Error;
+echo $last_error;
# Trigger error again to test RESET SLAVE
@@ -105,7 +121,7 @@ connection master;
set sql_log_bin=0;
delete from t1;
set sql_log_bin=1;
-load data infile '../../std_data/rpl_loaddata.dat' into table t1;
+eval $lower_stmt_head infile '../../std_data/rpl_loaddata.dat' into table t1;
save_master_pos;
connection slave;
# The SQL slave thread should be stopped now.
@@ -114,9 +130,11 @@ connection slave;
# RESET SLAVE and see if error is cleared in SHOW SLAVE STATUS.
stop slave;
reset slave;
---replace_result $MASTER_MYPORT MASTER_PORT
---replace_column 1 # 8 # 9 # 16 # 23 # 33 #
-show slave status;
+let $last_error= query_get_value(SHOW SLAVE STATUS, Last_SQL_Errno, 1);
+echo Last_SQL_Errno=$last_error;
+let $last_error= query_get_value(SHOW SLAVE STATUS, Last_SQL_Error, 1);
+echo Last_SQL_Error;
+echo $last_error;
# Finally, see if logging is done ok on master for a failing LOAD DATA INFILE
@@ -125,7 +143,7 @@ reset master;
eval create table t2 (day date,id int(9),category enum('a','b','c'),name varchar(60),
unique(day)) engine=$engine_type; # no transactions
--error ER_DUP_ENTRY
-load data infile '../../std_data/rpl_loaddata2.dat' into table t2 fields
+eval $lower_stmt_head infile '../../std_data/rpl_loaddata2.dat' into table t2 fields
terminated by ',' optionally enclosed by '%' escaped by '@' lines terminated by
'\n##\n' starting by '>' ignore 1 lines;
select * from t2;
@@ -141,7 +159,7 @@ alter table t2 drop key day;
connection master;
delete from t2;
--error ER_DUP_ENTRY
-load data infile '../../std_data/rpl_loaddata2.dat' into table t2 fields
+eval $lower_stmt_head infile '../../std_data/rpl_loaddata2.dat' into table t2 fields
terminated by ',' optionally enclosed by '%' escaped by '@' lines terminated by
'\n##\n' starting by '>' ignore 1 lines;
connection slave;
@@ -154,7 +172,7 @@ drop table t1, t2;
CREATE TABLE t1 (word CHAR(20) NOT NULL PRIMARY KEY) ENGINE=INNODB;
--error ER_DUP_ENTRY
-LOAD DATA INFILE "../../std_data/words.dat" INTO TABLE t1;
+eval $UPPER_STMT_HEAD INFILE "../../std_data/words.dat" INTO TABLE t1;
DROP TABLE IF EXISTS t1;
@@ -182,17 +200,17 @@ DROP TABLE IF EXISTS t1;
-- echo ### assertion: works with cross-referenced database
-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
--- eval LOAD DATA LOCAL INFILE '$MYSQLTEST_VARDIR/std_data/loaddata5.dat' INTO TABLE $db1.t1
+-- eval $UPPER_STMT_HEAD LOCAL INFILE '$MYSQLTEST_VARDIR/std_data/loaddata5.dat' INTO TABLE $db1.t1
-- eval use $db1
-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
-- echo ### assertion: works with fully qualified name on current database
-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
--- eval LOAD DATA LOCAL INFILE '$MYSQLTEST_VARDIR/std_data/loaddata5.dat' INTO TABLE $db1.t1
+-- eval $UPPER_STMT_HEAD LOCAL INFILE '$MYSQLTEST_VARDIR/std_data/loaddata5.dat' INTO TABLE $db1.t1
-- echo ### assertion: works without fully qualified name on current database
-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
--- eval LOAD DATA LOCAL INFILE '$MYSQLTEST_VARDIR/std_data/loaddata5.dat' INTO TABLE t1
+-- eval $UPPER_STMT_HEAD LOCAL INFILE '$MYSQLTEST_VARDIR/std_data/loaddata5.dat' INTO TABLE t1
-- echo ### create connection without default database
-- echo ### connect (conn2,localhost,root,,*NO-ONE*);
@@ -200,7 +218,7 @@ connect (conn2,localhost,root,,*NO-ONE*)
-- connection conn2
-- echo ### assertion: works without stating the default database
-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
--- eval LOAD DATA LOCAL INFILE '$MYSQLTEST_VARDIR/std_data/loaddata5.dat' INTO TABLE $db1.t1
+-- eval $UPPER_STMT_HEAD LOCAL INFILE '$MYSQLTEST_VARDIR/std_data/loaddata5.dat' INTO TABLE $db1.t1
-- echo ### disconnect and switch back to master connection
-- disconnect conn2
-- connection master
@@ -219,4 +237,18 @@ source include/diff_tables.inc;
-- sync_slave_with_master
+# BUG#49479: LOAD DATA INFILE is binlogged without escaping field names
+-- source include/master-slave-reset.inc
+-- connection master
+use test;
+CREATE TABLE t1 (`key` TEXT, `text` TEXT);
+
+LOAD DATA INFILE '../../std_data/loaddata2.dat' REPLACE INTO TABLE `t1` FIELDS TERMINATED BY ',';
+SELECT * FROM t1;
+
+-- sync_slave_with_master
+-- connection master
+DROP TABLE t1;
+-- sync_slave_with_master
+
# End of 4.1 tests
=== added file 'mysql-test/extra/rpl_tests/rpl_mixing_engines.inc'
--- a/mysql-test/extra/rpl_tests/rpl_mixing_engines.inc 1970-01-01 00:00:00 +0000
+++ b/mysql-test/extra/rpl_tests/rpl_mixing_engines.inc 2010-01-20 19:08:16 +0000
@@ -0,0 +1,554 @@
+################################################################################
+# This is an auxiliary file used by rpl_mixing_engines.test, and that it
+# executes SQL statements according to a format string, as specified in
+# rpl_mixing_engines.test. In addition, it accepts the special format
+# strings 'configure' and 'clean', used before and after everything else.
+################################################################################
+
+if (`SELECT HEX(@commands) = HEX('configure')`)
+{
+ connection master;
+
+ SET SQL_LOG_BIN=0;
+ eval CREATE TABLE nt_1 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+ eval CREATE TABLE nt_2 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+ eval CREATE TABLE nt_3 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+ eval CREATE TABLE nt_4 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+ eval CREATE TABLE nt_5 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+ eval CREATE TABLE nt_6 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+ eval CREATE TABLE tt_1 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = $engine_type;
+ eval CREATE TABLE tt_2 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = $engine_type;
+ eval CREATE TABLE tt_3 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = $engine_type;
+ eval CREATE TABLE tt_4 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = $engine_type;
+ eval CREATE TABLE tt_5 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = $engine_type;
+ eval CREATE TABLE tt_6 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = $engine_type;
+ eval SET SQL_LOG_BIN=1;
+
+ connection slave;
+
+ SET SQL_LOG_BIN=0;
+ eval CREATE TABLE nt_1 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+ eval CREATE TABLE nt_2 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+ eval CREATE TABLE nt_3 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+ eval CREATE TABLE nt_4 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+ eval CREATE TABLE nt_5 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+ eval CREATE TABLE nt_6 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+ eval CREATE TABLE tt_1 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = $engine_type;
+ eval CREATE TABLE tt_2 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = $engine_type;
+ eval CREATE TABLE tt_3 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = $engine_type;
+ eval CREATE TABLE tt_4 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = $engine_type;
+ eval CREATE TABLE tt_5 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = $engine_type;
+ eval CREATE TABLE tt_6 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = $engine_type;
+ SET SQL_LOG_BIN=1;
+
+ connection master;
+
+ INSERT INTO nt_1(trans_id, stmt_id) VALUES(1,1);
+ INSERT INTO nt_2(trans_id, stmt_id) VALUES(1,1);
+ INSERT INTO nt_3(trans_id, stmt_id) VALUES(1,1);
+ INSERT INTO nt_4(trans_id, stmt_id) VALUES(1,1);
+ INSERT INTO nt_5(trans_id, stmt_id) VALUES(1,1);
+ INSERT INTO nt_6(trans_id, stmt_id) VALUES(1,1);
+
+ INSERT INTO tt_1(trans_id, stmt_id) VALUES(1,1);
+ INSERT INTO tt_2(trans_id, stmt_id) VALUES(1,1);
+ INSERT INTO tt_3(trans_id, stmt_id) VALUES(1,1);
+ INSERT INTO tt_4(trans_id, stmt_id) VALUES(1,1);
+ INSERT INTO tt_5(trans_id, stmt_id) VALUES(1,1);
+ INSERT INTO tt_6(trans_id, stmt_id) VALUES(1,1);
+
+ DELIMITER |;
+
+ CREATE PROCEDURE pc_i_tt_5_suc (IN p_trans_id INTEGER, IN p_stmt_id INTEGER)
+ BEGIN
+ DECLARE in_stmt_id INTEGER;
+ SELECT max(stmt_id) INTO in_stmt_id FROM tt_5 WHERE trans_id= p_trans_id;
+ SELECT COALESCE(greatest(in_stmt_id + 1, p_stmt_id), 1) INTO in_stmt_id;
+ INSERT INTO tt_5(trans_id, stmt_id) VALUES (p_trans_id, in_stmt_id);
+ INSERT INTO tt_5(trans_id, stmt_id) VALUES (p_trans_id, in_stmt_id + 1);
+ END|
+
+ CREATE PROCEDURE pc_i_nt_5_suc (IN p_trans_id INTEGER, IN p_stmt_id INTEGER)
+ BEGIN
+ DECLARE in_stmt_id INTEGER;
+ SELECT max(stmt_id) INTO in_stmt_id FROM nt_5 WHERE trans_id= p_trans_id;
+ SELECT COALESCE(greatest(in_stmt_id + 1, p_stmt_id), 1) INTO in_stmt_id;
+ INSERT INTO nt_5(trans_id, stmt_id) VALUES (p_trans_id, in_stmt_id);
+ INSERT INTO nt_5(trans_id, stmt_id) VALUES (p_trans_id, in_stmt_id + 1);
+ END|
+
+ CREATE FUNCTION fc_i_tt_5_suc (p_trans_id INTEGER, p_stmt_id INTEGER) RETURNS VARCHAR(64)
+ BEGIN
+ DECLARE in_stmt_id INTEGER;
+ SELECT max(stmt_id) INTO in_stmt_id FROM tt_5 WHERE trans_id= p_trans_id;
+ SELECT COALESCE(greatest(in_stmt_id + 1, p_stmt_id), 1) INTO in_stmt_id;
+ INSERT INTO tt_5(trans_id, stmt_id) VALUES (p_trans_id, in_stmt_id);
+ INSERT INTO tt_5(trans_id, stmt_id) VALUES (p_trans_id, in_stmt_id + 1);
+ RETURN "fc_i_tt_5_suc";
+ END|
+
+ CREATE FUNCTION fc_i_nt_5_suc (p_trans_id INTEGER, p_stmt_id INTEGER) RETURNS VARCHAR(64)
+ BEGIN
+ DECLARE in_stmt_id INTEGER;
+ SELECT max(stmt_id) INTO in_stmt_id FROM nt_5 WHERE trans_id= p_trans_id;
+ SELECT COALESCE(greatest(in_stmt_id + 1, p_stmt_id), 1) INTO in_stmt_id;
+ INSERT INTO nt_5(trans_id, stmt_id) VALUES (p_trans_id, in_stmt_id);
+ INSERT INTO nt_5(trans_id, stmt_id) VALUES (p_trans_id, in_stmt_id + 1);
+ RETURN "fc_i_nt_5_suc";
+ END|
+
+ CREATE TRIGGER tr_i_tt_3_to_nt_3 AFTER INSERT ON tt_3 FOR EACH ROW
+ BEGIN
+ DECLARE in_stmt_id INTEGER;
+ SELECT max(stmt_id) INTO in_stmt_id FROM nt_3 WHERE trans_id= NEW.trans_id;
+ SELECT COALESCE(greatest(in_stmt_id + 1, NEW.stmt_id), 1) INTO in_stmt_id;
+ INSERT INTO nt_3(trans_id, stmt_id) VALUES (NEW.trans_id, in_stmt_id);
+ INSERT INTO nt_3(trans_id, stmt_id) VALUES (NEW.trans_id, in_stmt_id + 1);
+ END|
+
+ CREATE TRIGGER tr_i_nt_4_to_tt_4 AFTER INSERT ON nt_4 FOR EACH ROW
+ BEGIN
+ DECLARE in_stmt_id INTEGER;
+ SELECT max(stmt_id) INTO in_stmt_id FROM tt_4 WHERE trans_id= NEW.trans_id;
+ SELECT COALESCE(greatest(in_stmt_id + 1, NEW.stmt_id), 1) INTO in_stmt_id;
+ INSERT INTO tt_4(trans_id, stmt_id) VALUES (NEW.trans_id, in_stmt_id);
+ INSERT INTO tt_4(trans_id, stmt_id) VALUES (NEW.trans_id, in_stmt_id + 1);
+ END|
+
+ CREATE TRIGGER tr_i_tt_5_to_tt_6 AFTER INSERT ON tt_5 FOR EACH ROW
+ BEGIN
+ DECLARE in_stmt_id INTEGER;
+ SELECT max(stmt_id) INTO in_stmt_id FROM tt_6 WHERE trans_id= NEW.trans_id;
+ SELECT COALESCE(greatest(in_stmt_id + 1, NEW.stmt_id, 1), 1) INTO in_stmt_id;
+ INSERT INTO tt_6(trans_id, stmt_id) VALUES (NEW.trans_id, in_stmt_id);
+ INSERT INTO tt_6(trans_id, stmt_id) VALUES (NEW.trans_id, in_stmt_id + 1);
+ END|
+
+ CREATE TRIGGER tr_i_nt_5_to_nt_6 AFTER INSERT ON nt_5 FOR EACH ROW
+ BEGIN
+ DECLARE in_stmt_id INTEGER;
+ SELECT max(stmt_id) INTO in_stmt_id FROM nt_6 WHERE trans_id= NEW.trans_id;
+ SELECT COALESCE(greatest(in_stmt_id + 1, NEW.stmt_id), 1) INTO in_stmt_id;
+ INSERT INTO nt_6(trans_id, stmt_id) VALUES (NEW.trans_id, in_stmt_id);
+ INSERT INTO nt_6(trans_id, stmt_id) VALUES (NEW.trans_id, in_stmt_id + 1);
+ END|
+
+ DELIMITER ;|
+
+ let $pos_trans_command= query_get_value("SHOW MASTER STATUS", Position, 1);
+
+ let $trans_id= 7;
+ let $tb_id= 1;
+ let $stmt_id= 1;
+ let $commands= '';
+
+ SET @commands= '';
+}
+
+if (`SELECT HEX(@commands) = HEX('clean')`)
+{
+ connection master;
+
+ DROP TABLE tt_1;
+ DROP TABLE tt_2;
+ DROP TABLE tt_3;
+ DROP TABLE tt_4;
+ DROP TABLE tt_5;
+ DROP TABLE tt_6;
+
+ DROP TABLE nt_1;
+ DROP TABLE nt_2;
+ DROP TABLE nt_3;
+ DROP TABLE nt_4;
+ DROP TABLE nt_5;
+ DROP TABLE nt_6;
+
+ DROP PROCEDURE pc_i_tt_5_suc;
+ DROP PROCEDURE pc_i_nt_5_suc;
+ DROP FUNCTION fc_i_tt_5_suc;
+ DROP FUNCTION fc_i_nt_5_suc;
+
+ sync_slave_with_master;
+
+ SET @commands= '';
+}
+
+while (`SELECT HEX(@commands) != HEX('')`)
+{
+ --disable_query_log
+ SET @command= SUBSTRING_INDEX(@commands, ' ', 1);
+ let $command= `SELECT @command`;
+ --eval SET @check_commands= '$commands'
+ if (`SELECT HEX(@check_commands) = HEX('''')`)
+ {
+ let $commands= `SELECT @commands`;
+ }
+ --echo -b-b-b-b-b-b-b-b-b-b-b- >> $command << -b-b-b-b-b-b-b-b-b-b-b-
+ let $pos_command= query_get_value("SHOW MASTER STATUS", Position, 1);
+ --enable_query_log
+ if (`SELECT HEX(@command) = HEX('B')`)
+ {
+ eval BEGIN;
+ }
+ if (`SELECT HEX(@command) = HEX('T')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ eval INSERT INTO tt_1(trans_id, stmt_id) VALUES ($trans_id, $stmt_id);
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('T-trig')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ eval INSERT INTO tt_5(trans_id, stmt_id) VALUES ($trans_id, $stmt_id);
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('T-func')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ eval SELECT fc_i_tt_5_suc ($trans_id, $stmt_id);
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('T-proc')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ eval CALL pc_i_tt_5_suc ($trans_id, $stmt_id);
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('eT')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ let $old_trans_id= `SELECT max(trans_id) from tt_1`;
+ let $old_stmt_id= `SELECT max(stmt_id) from tt_1 where trans_id= $old_trans_id`;
+ --error ER_DUP_ENTRY, ER_DUP_KEY
+ eval INSERT INTO tt_1(trans_id, stmt_id) VALUES ($old_trans_id, $old_stmt_id);
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('Te')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ let $old_trans_id= `SELECT max(trans_id) from tt_1`;
+ let $old_stmt_id= `SELECT max(stmt_id) from tt_1 where trans_id= $old_trans_id`;
+ --error ER_DUP_ENTRY, ER_DUP_KEY
+ eval INSERT INTO tt_1(trans_id, stmt_id) VALUES ($trans_id, $stmt_id), ($old_trans_id, $old_stmt_id);
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('Te-trig')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ let $old_trans_id= `SELECT max(trans_id) from tt_5`;
+ let $old_stmt_id= `SELECT max(stmt_id) from tt_5 where trans_id= $old_trans_id`;
+ --error ER_DUP_ENTRY, ER_DUP_KEY
+ eval INSERT INTO tt_5(trans_id, stmt_id) VALUES ($trans_id, $stmt_id), ($old_trans_id, $old_stmt_id);
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('Te-func')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ let $old_trans_id= `SELECT max(trans_id) from tt_1`;
+ let $old_stmt_id= `SELECT max(stmt_id) from tt_1 where trans_id= $old_trans_id`;
+ --error ER_DUP_ENTRY, ER_DUP_KEY
+ eval INSERT INTO tt_1(trans_id, stmt_id, info) VALUES ($trans_id, $stmt_id, ''), ($old_trans_id, $old_stmt_id, fc_i_tt_5_suc ($trans_id, $stmt_id));
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('N')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ eval INSERT INTO nt_1(trans_id, stmt_id) VALUES ($trans_id, $stmt_id);
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('N-trig')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ eval INSERT INTO nt_5(trans_id, stmt_id) VALUES ($trans_id, $stmt_id);
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('N-func')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ eval SELECT fc_i_nt_5_suc ($trans_id, $stmt_id);
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('N-proc')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ eval CALL pc_i_nt_5_suc ($trans_id, $stmt_id);
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('eN')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ let $old_trans_id= `SELECT max(trans_id) from nt_1`;
+ let $old_stmt_id= `SELECT max(stmt_id) from nt_1 where trans_id= $old_trans_id`;
+ --error ER_DUP_ENTRY, ER_DUP_KEY
+ eval INSERT INTO nt_1(trans_id, stmt_id) VALUES ($old_trans_id, $old_stmt_id);
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('Ne')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ let $old_trans_id= `SELECT max(trans_id) from nt_1`;
+ let $old_stmt_id= `SELECT max(stmt_id) from nt_1 where trans_id= $old_trans_id`;
+ --error ER_DUP_ENTRY, ER_DUP_KEY
+ eval INSERT INTO nt_1(trans_id, stmt_id) VALUES ($trans_id, $stmt_id), ($old_trans_id, $old_stmt_id);
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('Ne-trig')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ let $old_trans_id= `SELECT max(trans_id) from nt_5`;
+ let $old_stmt_id= `SELECT max(stmt_id) from nt_5 where trans_id= $old_trans_id`;
+ --error ER_DUP_ENTRY, ER_DUP_KEY
+ eval INSERT INTO nt_5(trans_id, stmt_id) VALUES ($trans_id, $stmt_id), ($old_trans_id, $old_stmt_id);
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('Ne-func')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ let $old_trans_id= `SELECT max(trans_id) from nt_1`;
+ let $old_stmt_id= `SELECT max(stmt_id) from nt_1 where trans_id= $old_trans_id`;
+ --error ER_DUP_ENTRY, ER_DUP_KEY
+ eval INSERT INTO nt_1(trans_id, stmt_id, info) VALUES ($trans_id, $stmt_id, ''), ($old_trans_id, $old_stmt_id, fc_i_nt_5_suc ($trans_id, $stmt_id));
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('tN')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ eval INSERT INTO nt_1(trans_id, stmt_id, info) SELECT $trans_id, $stmt_id, COUNT(*) FROM tt_1;
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('tNe')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ let $old_trans_id= `SELECT max(trans_id) from nt_1`;
+ let $old_stmt_id= `SELECT max(stmt_id) from nt_1 where trans_id= $old_trans_id`;
+ --error ER_DUP_ENTRY, ER_DUP_KEY
+ eval INSERT INTO nt_1(trans_id, stmt_id, info) SELECT $trans_id, $stmt_id, COUNT(*) FROM tt_1 UNION SELECT $old_trans_id, $old_stmt_id, COUNT(*) FROM tt_1;
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('nT')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ eval INSERT INTO tt_1(trans_id, stmt_id, info) SELECT $trans_id, $stmt_id, COUNT(*) FROM nt_1;
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('nTe')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ let $old_trans_id= `SELECT max(trans_id) from tt_1`;
+ let $old_stmt_id= `SELECT max(stmt_id) from tt_1 where trans_id= $old_trans_id`;
+ --error ER_DUP_ENTRY, ER_DUP_KEY
+ eval INSERT INTO tt_1(trans_id, stmt_id, info) SELECT $trans_id, $stmt_id, COUNT(*) FROM nt_1 UNION SELECT $old_trans_id, $old_stmt_id, COUNT(*) FROM nt_1;
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('NT')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ eval UPDATE nt_3, tt_3 SET nt_3.info= "new text $trans_id --> $stmt_id", tt_3.info= "new text $trans_id --> $stmt_id" where nt_3.trans_id = tt_3.trans_id and tt_3.trans_id = 1;
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('NT-trig')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ eval INSERT INTO nt_4(trans_id, stmt_id) VALUES ($trans_id, $stmt_id);
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('NT-func')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ eval INSERT INTO nt_5(trans_id, stmt_id, info) VALUES ($trans_id, $stmt_id, fc_i_tt_5_suc($trans_id, $stmt_id));
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('NeT-trig')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ let $old_trans_id= `SELECT max(trans_id) from nt_4`;
+ let $old_stmt_id= `SELECT max(stmt_id) from nt_4 where trans_id= $old_trans_id`;
+ --error ER_DUP_ENTRY, ER_DUP_KEY
+ eval INSERT INTO nt_4(trans_id, stmt_id) VALUES ($trans_id, $stmt_id), ($old_trans_id, $old_stmt_id);
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('NeT-func')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ let $old_trans_id= `SELECT max(trans_id) from nt_5`;
+ let $old_stmt_id= `SELECT max(stmt_id) from nt_5 where trans_id= $old_trans_id`;
+ --error ER_DUP_ENTRY, ER_DUP_KEY
+ eval INSERT INTO nt_5(trans_id, stmt_id, info) VALUES ($trans_id, $stmt_id, ''), ($old_trans_id, $old_stmt_id, fc_i_tt_5_suc ($trans_id, $stmt_id));
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('TN')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ eval UPDATE tt_4, nt_4 SET tt_4.info= "new text $trans_id --> $stmt_id", nt_4.info= "new text $trans_id --> $stmt_id" where nt_4.trans_id = tt_4.trans_id and tt_4.trans_id = 1;
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('TN-trig')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ eval INSERT INTO tt_3(trans_id, stmt_id) VALUES ($trans_id, $stmt_id);
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('TN-func')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ eval INSERT INTO tt_5(trans_id, stmt_id, info) VALUES ($trans_id, $stmt_id, fc_i_nt_5_suc($trans_id, $stmt_id));
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('TeN-trig')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ let $old_trans_id= `SELECT max(trans_id) from tt_3`;
+ let $old_stmt_id= `SELECT max(stmt_id) from tt_3 where trans_id= $old_trans_id`;
+ --error ER_DUP_ENTRY, ER_DUP_KEY
+ eval INSERT INTO tt_3(trans_id, stmt_id) VALUES ($trans_id, $stmt_id), ($old_trans_id, $old_stmt_id);
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('TeN-func')`)
+ {
+ #--echo DEBUG-- (trans_id, stmt_id) --> ($trans_id, $stmt_id)
+ let $old_trans_id= `SELECT max(trans_id) from tt_5`;
+ let $old_stmt_id= `SELECT max(stmt_id) from tt_5 where trans_id= $old_trans_id`;
+ --error ER_DUP_ENTRY, ER_DUP_KEY
+ eval INSERT INTO tt_5(trans_id, stmt_id, info) VALUES ($trans_id, $stmt_id, ''), ($old_trans_id, $old_stmt_id, fc_i_nt_5_suc ($trans_id, $stmt_id));
+ inc $stmt_id;
+ }
+ if (`SELECT HEX(@command) = HEX('CS-T->T')`)
+ {
+ --eval CREATE TABLE tt_xx_$tb_id (PRIMARY KEY(trans_id, stmt_id)) engine=$engine_type SELECT * FROM tt_1;
+ }
+ if (`SELECT HEX(@command) = HEX('CS-N->N')`)
+ {
+ --eval CREATE TABLE nt_xx_$tb_id (PRIMARY KEY(trans_id, stmt_id)) engine=MyIsam SELECT * FROM nt_1;
+ }
+ if (`SELECT HEX(@command) = HEX('CS-T->N')`)
+ {
+ --eval CREATE TABLE tt_xx_$tb_id (PRIMARY KEY(trans_id, stmt_id)) engine=$engine_type SELECT * FROM nt_1;
+ }
+ if (`SELECT HEX(@command) = HEX('CS-N->T')`)
+ {
+ --eval CREATE TABLE nt_xx_$tb_id (PRIMARY KEY(trans_id, stmt_id)) engine=MyIsam SELECT * FROM tt_1;
+ }
+ if (`SELECT HEX(@command) = HEX('CSe-T->T')`)
+ {
+ --error ER_DUP_ENTRY, ER_DUP_KEY
+ --eval CREATE TABLE tt_xx_$tb_id (PRIMARY KEY (stmt_id)) engine=$engine_type SELECT stmt_id FROM tt_1;
+ }
+ if (`SELECT HEX(@command) = HEX('CSe-N->N')`)
+ {
+ --error ER_DUP_ENTRY, ER_DUP_KEY
+ --eval CREATE TABLE nt_xx_$tb_id (PRIMARY KEY (stmt_id)) engine=MyIsam SELECT stmt_id FROM nt_1;
+ }
+ if (`SELECT HEX(@command) = HEX('CSe-T->N')`)
+ {
+ --error ER_DUP_ENTRY, ER_DUP_KEY
+ --eval CREATE TABLE tt_xx_$tb_id (PRIMARY KEY (stmt_id)) engine=$engine_type SELECT stmt_id FROM nt_1;
+ }
+ if (`SELECT HEX(@command) = HEX('CSe-N->T')`)
+ {
+ --error ER_DUP_ENTRY, ER_DUP_KEY
+ --eval CREATE TABLE nt_xx_$tb_id (PRIMARY KEY (stmt_id)) engine=MyIsam SELECT stmt_id FROM tt_1;
+ }
+ if (`SELECT HEX(@command) = HEX('CT')`)
+ {
+ --eval CREATE TEMPORARY TABLE tt_xx_$tb_id (a int) engine=$engine_type;
+ }
+ if (`SELECT HEX(@command) = HEX('IS-T<-N')`)
+ {
+ --eval INSERT INTO tt_xx_$tb_id(trans_id, stmt_id, info) SELECT trans_id, stmt_id, USER() FROM nt_1;
+ }
+ if (`SELECT HEX(@command) = HEX('ISe-T<-N')`)
+ {
+ --error ER_DUP_ENTRY, ER_DUP_KEY
+ --eval INSERT INTO tt_xx_$tb_id(trans_id, stmt_id, info) SELECT trans_id, trans_id, USER() FROM nt_1;
+ }
+ if (`SELECT HEX(@command) = HEX('IS-N<-T')`)
+ {
+ --eval INSERT INTO nt_xx_$tb_id(trans_id, stmt_id, info) SELECT trans_id, stmt_id, USER() FROM tt_1;
+ }
+ if (`SELECT HEX(@command) = HEX('ISe-N<-T')`)
+ {
+ --error ER_DUP_ENTRY, ER_DUP_KEY
+ --eval INSERT INTO nt_xx_$tb_id(trans_id, stmt_id, info) SELECT trans_id, trans_id, USER() FROM tt_1;
+ }
+ if (`SELECT HEX(@command) = HEX('IS-T<-T')`)
+ {
+ --eval INSERT INTO tt_xx_$tb_id(trans_id, stmt_id, info) SELECT trans_id, stmt_id, USER() FROM tt_1;
+ }
+ if (`SELECT HEX(@command) = HEX('ISe-T<-T')`)
+ {
+ --error ER_DUP_ENTRY, ER_DUP_KEY
+ --eval INSERT INTO tt_xx_$tb_id(trans_id, stmt_id, info) SELECT trans_id, trans_id, USER() FROM tt_1;
+ }
+ if (`SELECT HEX(@command) = HEX('IS-N<-N')`)
+ {
+ --eval INSERT INTO nt_xx_$tb_id(trans_id, stmt_id, info) SELECT trans_id, stmt_id, USER() FROM nt_1;
+ }
+ if (`SELECT HEX(@command) = HEX('ISe-N<-N')`)
+ {
+ --error ER_DUP_ENTRY, ER_DUP_KEY
+ --eval INSERT INTO nt_xx_$tb_id(trans_id, stmt_id, info) SELECT trans_id, trans_id, USER() FROM nt_1;
+ }
+ if (`SELECT HEX(@command) = HEX('trunc-CS-T')`)
+ {
+ eval TRUNCATE TABLE tt_xx_$tb_id;
+ }
+ if (`SELECT HEX(@command) = HEX('trunc-CS-N')`)
+ {
+ eval TRUNCATE TABLE nt_xx_$tb_id;
+ }
+ if (`SELECT HEX(@command) = HEX('trunc-CT')`)
+ {
+ eval TRUNCATE TABLE tt_xx_$tb_id;
+ }
+ if (`SELECT HEX(@command) = HEX('drop-CS')`)
+ {
+ --disable_warnings
+ eval DROP TABLE IF EXISTS tt_xx_$tb_id, nt_xx_$tb_id;
+ inc $tb_id;
+ --enable_warnings
+ }
+ if (`SELECT HEX(@command) = HEX('drop-CT')`)
+ {
+ --disable_warnings
+ eval DROP TEMPORARY TABLE IF EXISTS tt_xx_$tb_id;
+ inc $tb_id;
+ --enable_warnings
+ }
+ if (`SELECT HEX(@command) = HEX('C')`)
+ {
+ --error 0, ER_GET_ERRMSG
+ eval COMMIT;
+ }
+ if (`SELECT HEX(@command) = HEX('R')`)
+ {
+ --error 0, ER_GET_ERRMSG
+ eval ROLLBACK;
+ }
+ if (`SELECT HEX(@command) = HEX('S1')`)
+ {
+ eval SAVEPOINT s1;
+ }
+ if (`SELECT HEX(@command) = HEX('R1')`)
+ {
+ eval ROLLBACK TO s1;
+ }
+ --disable_query_log
+ SET @commands= LTRIM(SUBSTRING(@commands, LENGTH(@command) + 1));
+ inc $stmt_id;
+
+ let $binlog_start= $pos_command;
+ --source include/show_binlog_events.inc
+ --echo -e-e-e-e-e-e-e-e-e-e-e- >> $command << -e-e-e-e-e-e-e-e-e-e-e-
+ if (`SELECT HEX(@commands) = HEX('')`)
+ {
+ let $binlog_start= $pos_trans_command;
+ --echo -b-b-b-b-b-b-b-b-b-b-b- >> $commands << -b-b-b-b-b-b-b-b-b-b-b-
+ --source include/show_binlog_events.inc
+ --echo -e-e-e-e-e-e-e-e-e-e-e- >> $commands << -e-e-e-e-e-e-e-e-e-e-e-
+ --echo
+ let $pos_trans_command= query_get_value("SHOW MASTER STATUS", Position, 1);
+ let $stmt_id= 1;
+ inc $trans_id;
+ let $commands= '';
+ }
+}
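
The block above implements a small command dispatcher: a caller puts a
space-separated sequence of command tokens into the @commands user variable
and sources this include once per sequence; the loop consumes one token per
iteration, runs the matching statement, and prints the binlog events it
produced. A minimal caller sketch (not part of this changeset; the include
path extra/rpl_tests/rpl_mixing_engines.test is an assumption, as the file
name is not visible in this hunk):

  # begin, insert into a transactional and a non-transactional table, commit
  SET @commands= 'B T N C';
  --source extra/rpl_tests/rpl_mixing_engines.test
  # drop the helper tables, procedures and functions
  SET @commands= 'clean';
  --source extra/rpl_tests/rpl_mixing_engines.test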
=== modified file 'mysql-test/extra/rpl_tests/rpl_row_func003.test'
--- a/mysql-test/extra/rpl_tests/rpl_row_func003.test 2007-06-18 13:36:10 +0000
+++ b/mysql-test/extra/rpl_tests/rpl_row_func003.test 2010-01-13 09:00:03 +0000
@@ -18,6 +18,8 @@
# Vs slave. #
#############################################################################
+CALL mtr.add_suppression('Statement may not be safe to log in statement format.');
+
# Begin clean up test section
connection master;
--disable_warnings
@@ -43,10 +45,12 @@ RETURN tmp;
END|
delimiter ;|
+--disable_warnings
INSERT INTO test.t1 VALUES (null,test.f1()),(null,test.f1()),(null,test.f1());
sleep 6;
INSERT INTO test.t1 VALUES (null,test.f1()),(null,test.f1()),(null,test.f1());
sleep 6;
+--enable_warnings
#Select in this test are used for debugging
#select * from test.t1;
@@ -56,7 +60,9 @@ sleep 6;
connection master;
SET AUTOCOMMIT=0;
START TRANSACTION;
+--disable_warnings
INSERT INTO test.t1 VALUES (null,test.f1());
+--enable_warnings
ROLLBACK;
SET AUTOCOMMIT=1;
#select * from test.t1;
=== added file 'mysql-test/extra/rpl_tests/rpl_set_null.test'
--- a/mysql-test/extra/rpl_tests/rpl_set_null.test 1970-01-01 00:00:00 +0000
+++ b/mysql-test/extra/rpl_tests/rpl_set_null.test 2010-01-21 17:20:24 +0000
@@ -0,0 +1,86 @@
+# Both of the following tests check that comparison of the binlog BI (before
+# image) against the SE (storage engine) record will not fail due to leftovers
+# of previous values in the SE record (from before a given field was set to NULL).
+#
+# In MIXED mode:
+# - Insert and update are executed as statements
+# - Delete is executed as a row event
+# - Assertion: checks that comparison will not fail because the update
+# statement will clear the record contents for the nulled
+# field. If data was not cleared, some engines may keep
+# the value and return it later as garbage - despite the
+# fact that field is null. This may cause slave to
+#              fact that the field is null. This may cause the slave to
+# because of "garbage" in record data).
+#
+# In ROW mode:
+# - Insert, update and delete are executed as row events.
+# - Assertion: checks that comparison will not fail because the update
+# rows event will clear the record contents before
+# feeding the new value to the SE. This protects against
+# SEs that do not clear record contents when storing
+# nulled fields. If the engine did not clear the data it
+#                would cause the slave to falsely fail in the comparison
+# (memcmp would fail because of "garbage" in record
+#                data). This scenario is much the same as the one described
+#                above for MIXED mode, but exercises a different execution
+#                path in the slave.
+
+# BUG#49481: RBR: MyISAM and bit fields may cause slave to stop on
+# delete cant find record
+
+-- source include/master-slave-reset.inc
+
+-- connection master
+-- eval CREATE TABLE t1 (c1 BIT, c2 INT) Engine=$engine
+INSERT INTO `t1` VALUES ( 1, 1 );
+UPDATE t1 SET c1=NULL where c2=1;
+-- sync_slave_with_master
+
+-- let $diff_table_1=master:test.t1
+-- let $diff_table_2=slave:test.t1
+-- source include/diff_tables.inc
+
+-- connection master
+# triggers switch to row mode when on mixed
+DELETE FROM t1 WHERE c2=1 LIMIT 1;
+-- sync_slave_with_master
+
+-- let $diff_table_1=master:test.t1
+-- let $diff_table_2=slave:test.t1
+-- source include/diff_tables.inc
+
+-- connection master
+DROP TABLE t1;
+-- sync_slave_with_master
+
+-- source include/master-slave-reset.inc
+
+-- connection master
+
+# BUG#49482: RBR: Replication may break on deletes when MyISAM tables
+# + char field are used
+
+-- eval CREATE TABLE t1 (c1 CHAR) Engine=$engine
+
+INSERT INTO t1 ( c1 ) VALUES ( 'w' ) ;
+SELECT * FROM t1;
+UPDATE t1 SET c1=NULL WHERE c1='w';
+-- sync_slave_with_master
+
+-- let $diff_table_1=master:test.t1
+-- let $diff_table_2=slave:test.t1
+-- source include/diff_tables.inc
+
+-- connection master
+# triggers switch to row mode when on mixed
+DELETE FROM t1 LIMIT 2;
+-- sync_slave_with_master
+
+-- let $diff_table_1=master:test.t1
+-- let $diff_table_2=slave:test.t1
+-- source include/diff_tables.inc
+
+-- connection master
+DROP TABLE t1;
+-- sync_slave_with_master
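
The new test file above is parameterized by the $engine variable (see the
CREATE TABLE ... Engine=$engine statements), so it is meant to be sourced from
a small per-engine wrapper. A hedged sketch of such a wrapper (not part of
this changeset; the setup include and engine choice are assumptions):

  --source include/master-slave.inc
  --let $engine= MyISAM
  --source extra/rpl_tests/rpl_set_null.test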
=== added file 'mysql-test/extra/rpl_tests/rpl_tmp_table_and_DDL.test'
--- a/mysql-test/extra/rpl_tests/rpl_tmp_table_and_DDL.test 1970-01-01 00:00:00 +0000
+++ b/mysql-test/extra/rpl_tests/rpl_tmp_table_and_DDL.test 2010-01-22 09:38:21 +0000
@@ -0,0 +1,159 @@
+#
+# This test verifies whether executing a DDL statement before trying to
+# manipulate a temporary table causes row-based replication to break with
+# the error 'table does not exist'.
+#
+
+# CREATE TABLE when a temporary table is open.
+CREATE TEMPORARY TABLE t1 (a INT);
+EVAL CREATE TABLE t2 (a INT, b INT) ENGINE= $ENGINE_TYPE;
+INSERT INTO t1 VALUES (1);
+
+# CREATE EVENT when a temporary table is open.
+CREATE EVENT e1 ON SCHEDULE EVERY 10 HOUR DO SELECT 1;
+INSERT INTO t1 VALUES (1);
+
+# ALTER EVENT when a temporary table is open.
+ALTER EVENT e1 ON SCHEDULE EVERY 20 HOUR DO SELECT 1;
+INSERT INTO t1 VALUES (1);
+
+# DROP EVENT when a temporary table is open.
+DROP EVENT IF EXISTS e1;
+INSERT INTO t1 VALUES (1);
+
+# CREATE PROCEDURE when a temporary table is open.
+CREATE PROCEDURE p1() SELECT 1;
+INSERT INTO t1 VALUES (1);
+
+# Alter PROCEDURE when a temporary table is open.
+ALTER PROCEDURE p1 SQL SECURITY INVOKER;
+INSERT INTO t1 VALUES (1);
+
+# CREATE FUNCTION when a temporary table is open.
+CREATE FUNCTION f1() RETURNS INT RETURN 123;
+INSERT INTO t1 VALUES (1);
+
+# ALTER FUNCTION when a temporary table is open.
+ALTER FUNCTION f1 SQL SECURITY INVOKER;
+INSERT INTO t1 VALUES (1);
+
+# CREATE DATABASE when a temporary table is open.
+CREATE DATABASE mysqltest1;
+INSERT INTO t1 VALUES (1);
+
+# DROP DATABASE when a temporary table is open.
+DROP DATABASE mysqltest1;
+INSERT INTO t1 VALUES (1);
+
+# CREATE USER when a temporary table is open.
+CREATE USER test_1@localhost;
+INSERT INTO t1 VALUES (1);
+
+# GRANT select on table to user when a temporary table is open.
+GRANT SELECT ON t2 TO test_1@localhost;
+INSERT INTO t1 VALUES (1);
+
+# GRANT all on function to user when a temporary table is open.
+GRANT ALL ON f1 TO test_1@localhost;
+INSERT INTO t1 VALUES (1);
+
+# GRANT all on procedure to user when a temporary table is open.
+GRANT ALL ON p1 TO test_1@localhost;
+INSERT INTO t1 VALUES (1);
+
+# GRANT usage on *.* to user when a temporary table is open.
+GRANT USAGE ON *.* TO test_1@localhost;
+INSERT INTO t1 VALUES (1);
+
+# REVOKE ALL PRIVILEGES on function to user when a temporary table is open.
+REVOKE ALL PRIVILEGES ON f1 FROM test_1@localhost;
+INSERT INTO t1 VALUES (1);
+
+# REVOKE ALL PRIVILEGES on procedure to user when a temporary table is open.
+REVOKE ALL PRIVILEGES ON p1 FROM test_1@localhost;
+INSERT INTO t1 VALUES (1);
+
+# REVOKE ALL PRIVILEGES on table to user when a temporary table is open.
+REVOKE ALL PRIVILEGES ON t2 FROM test_1@localhost;
+INSERT INTO t1 VALUES (1);
+
+# REVOKE usage on *.* from user when a temporary table is open.
+REVOKE USAGE ON *.* FROM test_1@localhost;
+INSERT INTO t1 VALUES (1);
+
+# RENAME USER when a temporary table is open.
+RENAME USER test_1@localhost TO test_2@localhost;
+INSERT INTO t1 VALUES (1);
+
+# DROP USER when a temporary table is open.
+DROP USER test_2@localhost;
+INSERT INTO t1 VALUES (1);
+
+# Test ACL statement in sub statement
+DELIMITER |;
+CREATE PROCEDURE p2()
+BEGIN
+ # CREATE USER when a temporary table is open.
+ CREATE TEMPORARY TABLE t3 (a INT);
+ CREATE USER test_2@localhost;
+ INSERT INTO t1 VALUES (1);
+
+ # GRANT select on table to user when a temporary table is open.
+ GRANT SELECT ON t2 TO test_2@localhost;
+ INSERT INTO t1 VALUES (1);
+
+ # GRANT all on function to user when a temporary table is open.
+ GRANT ALL ON f1 TO test_2@localhost;
+ INSERT INTO t1 VALUES (1);
+
+ # GRANT all on procedure to user when a temporary table is open.
+ GRANT ALL ON p1 TO test_2@localhost;
+ INSERT INTO t1 VALUES (1);
+
+ # GRANT usage on *.* to user when a temporary table is open.
+ GRANT USAGE ON *.* TO test_2@localhost;
+ INSERT INTO t1 VALUES (1);
+
+ # REVOKE ALL PRIVILEGES on function to user when a temporary table is open.
+ REVOKE ALL PRIVILEGES ON f1 FROM test_2@localhost;
+ INSERT INTO t1 VALUES (1);
+
+ # REVOKE ALL PRIVILEGES on procedure to user when a temporary table is open.
+ REVOKE ALL PRIVILEGES ON p1 FROM test_2@localhost;
+ INSERT INTO t1 VALUES (1);
+
+ # REVOKE ALL PRIVILEGES on table to user when a temporary table is open.
+ REVOKE ALL PRIVILEGES ON t2 FROM test_2@localhost;
+ INSERT INTO t1 VALUES (1);
+
+ # REVOKE usage on *.* from user when a temporary table is open.
+ REVOKE USAGE ON *.* FROM test_2@localhost;
+ INSERT INTO t1 VALUES (1);
+
+ # RENAME USER when a temporary table is open.
+ RENAME USER test_2@localhost TO test_3@localhost;
+ INSERT INTO t1 VALUES (1);
+
+ # DROP USER when a temporary table is open.
+ DROP USER test_3@localhost;
+ INSERT INTO t1 VALUES (1);
+ DROP TEMPORARY TABLE t3;
+END |
+DELIMITER ;|
+
+# DROP PROCEDURE when a temporary table is open.
+DROP PROCEDURE p1;
+INSERT INTO t1 VALUES (1);
+DROP PROCEDURE p2;
+INSERT INTO t1 VALUES (1);
+
+# DROP FUNCTION when a temporary table is open.
+DROP FUNCTION f1;
+INSERT INTO t1 VALUES (1);
+
+# DROP TABLE when a temporary table is open.
+DROP TABLE t2;
+INSERT INTO t1 VALUES (1);
+
+DROP TEMPORARY TABLE t1;
+
=== added file 'mysql-test/include/binlog_inject_error.inc'
--- a/mysql-test/include/binlog_inject_error.inc 1970-01-01 00:00:00 +0000
+++ b/mysql-test/include/binlog_inject_error.inc 2010-01-24 07:03:23 +0000
@@ -0,0 +1,22 @@
+#
+# === Name
+#
+# binlog_inject_error.inc
+#
+# === Description
+#
+# Injects a binlog write error when running the query and verifies that the
+# query fails with the proper error (ER_ERROR_ON_WRITE).
+#
+# === Usage
+#
+# let query= 'CREATE TABLE t1 (a INT)';
+# source include/binlog_inject_error.inc;
+#
+
+SET GLOBAL debug='d,injecting_fault_writing';
+--echo $query;
+--replace_regex /(errno: .*)/(errno: #)/
+--error ER_ERROR_ON_WRITE
+--eval $query
+SET GLOBAL debug='';
=== modified file 'mysql-test/include/kill_query.inc'
--- a/mysql-test/include/kill_query.inc 2009-03-27 05:19:50 +0000
+++ b/mysql-test/include/kill_query.inc 2009-12-10 03:44:19 +0000
@@ -52,7 +52,7 @@ if (`SELECT '$debug_lock' != ''`)
# reap the result of the waiting query
connection $connection_name;
-error 0, 1317, 1307, 1306, 1334, 1305;
+error 0, 1317, 1307, 1306, 1334, 1305, 1034;
reap;
connection master;
=== modified file 'mysql-test/include/setup_fake_relay_log.inc'
--- a/mysql-test/include/setup_fake_relay_log.inc 2009-02-09 13:17:04 +0000
+++ b/mysql-test/include/setup_fake_relay_log.inc 2010-02-02 15:16:47 +0000
@@ -69,7 +69,22 @@ let $_fake_relay_log_purge= `SELECT @@gl
# Create relay log file.
copy_file $fake_relay_log $_fake_relay_log;
# Create relay log index.
---exec echo $_fake_filename-fake.000001 > $_fake_relay_index
+
+if (`SELECT LENGTH(@@secure_file_priv) > 0`)
+{
+ -- let $_file_priv_dir= `SELECT @@secure_file_priv`;
+ -- let $_suffix= `SELECT UUID()`
+ -- let $_tmp_file= $_file_priv_dir/fake-index.$_suffix
+
+ -- eval select '$_fake_filename-fake.000001\n' into dumpfile '$_tmp_file'
+ -- copy_file $_tmp_file $_fake_relay_index
+ -- remove_file $_tmp_file
+}
+
+if (`SELECT LENGTH(@@secure_file_priv) = 0`)
+{
+ -- eval select '$_fake_filename-fake.000001\n' into dumpfile '$_fake_relay_index'
+}
# Setup replication from existing relay log.
eval CHANGE MASTER TO MASTER_HOST='dummy.localdomain', RELAY_LOG_FILE='$_fake_filename-fake.000001', RELAY_LOG_POS=4;
=== added file 'mysql-test/include/truncate_file.inc'
--- a/mysql-test/include/truncate_file.inc 1970-01-01 00:00:00 +0000
+++ b/mysql-test/include/truncate_file.inc 2010-01-08 05:42:23 +0000
@@ -0,0 +1,16 @@
+# truncate a given file; all contents of the file are cleared (see the usage sketch after this file)
+
+if (`SELECT 'x$file' = 'x'`)
+{
+ --echo Please assign a file name to $file!!
+ exit;
+}
+
+let TRUNCATE_FILE= $file;
+
+perl;
+use Env;
+Env::import('TRUNCATE_FILE');
+open FILE, '>', $TRUNCATE_FILE or die "Cannot open file $TRUNCATE_FILE: $!";
+close FILE;
+EOF
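
A minimal usage sketch for the new truncate_file.inc helper above (the sketch
itself is not part of this changeset; the target path is only a placeholder):

  --let $file= $MYSQLTEST_VARDIR/tmp/file_to_truncate.txt
  --source include/truncate_file.inc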
=== modified file 'mysql-test/lib/v1/mysql-test-run.pl'
--- a/mysql-test/lib/v1/mysql-test-run.pl 2009-12-09 16:43:00 +0000
+++ b/mysql-test/lib/v1/mysql-test-run.pl 2010-03-04 08:03:07 +0000
@@ -1117,14 +1117,16 @@ sub command_line_setup () {
if ( ! $opt_testcase_timeout )
{
- $opt_testcase_timeout= $default_testcase_timeout;
+ $opt_testcase_timeout=
+ $ENV{MTR_TESTCASE_TIMEOUT} || $default_testcase_timeout;
$opt_testcase_timeout*= 10 if $opt_valgrind;
$opt_testcase_timeout*= 10 if ($opt_debug and $glob_win32);
}
if ( ! $opt_suite_timeout )
{
- $opt_suite_timeout= $default_suite_timeout;
+ $opt_suite_timeout=
+ $ENV{MTR_SUITE_TIMEOUT} || $default_suite_timeout;
$opt_suite_timeout*= 6 if $opt_valgrind;
$opt_suite_timeout*= 6 if ($opt_debug and $glob_win32);
}
=== modified file 'mysql-test/mysql-test-run.pl'
--- a/mysql-test/mysql-test-run.pl 2010-02-10 19:06:24 +0000
+++ b/mysql-test/mysql-test-run.pl 2010-03-04 08:03:07 +0000
@@ -201,10 +201,10 @@ my $opt_mark_progress;
my $opt_sleep;
-my $opt_testcase_timeout= 15; # 15 minutes
-my $opt_suite_timeout = 360; # 6 hours
-my $opt_shutdown_timeout= 10; # 10 seconds
-my $opt_start_timeout = 180; # 180 seconds
+my $opt_testcase_timeout= $ENV{MTR_TESTCASE_TIMEOUT} || 15; # minutes
+my $opt_suite_timeout = $ENV{MTR_SUITE_TIMEOUT} || 360; # minutes
+my $opt_shutdown_timeout= $ENV{MTR_SHUTDOWN_TIMEOUT} || 10; # seconds
+my $opt_start_timeout = $ENV{MTR_START_TIMEOUT} || 180; # seconds
sub testcase_timeout { return $opt_testcase_timeout * 60; };
sub suite_timeout { return $opt_suite_timeout * 60; };
@@ -4018,6 +4018,8 @@ sub extract_warning_lines ($) {
qr/Error reading packet/,
qr/Slave: Can't drop database.* database doesn't exist/,
qr/Slave: Operation DROP USER failed for 'create_rout_db'/,
+ qr|Checking table: '\./mtr/test_suppressions'|,
+ qr|mysqld: Table '\./mtr/test_suppressions' is marked as crashed and should be repaired|
);
my $matched_lines= [];
=== modified file 'mysql-test/r/alter_table.result'
--- a/mysql-test/r/alter_table.result 2009-12-03 11:19:05 +0000
+++ b/mysql-test/r/alter_table.result 2010-03-04 08:03:07 +0000
@@ -1245,4 +1245,11 @@ ALTER TABLE t1 CHANGE COLUMN f1 f1_no_re
affected rows: 0
info: Records: 0 Duplicates: 0 Warnings: 0
DROP TABLE t1;
+#
+# Bug #31145: ALTER TABLE DROP COLUMN, ADD COLUMN crashes (linux)
+# or freezes (win) the server
+#
+CREATE TABLE t1 (a TEXT, id INT, b INT);
+ALTER TABLE t1 DROP COLUMN a, ADD COLUMN c TEXT FIRST;
+DROP TABLE t1;
End of 5.1 tests
=== modified file 'mysql-test/r/bug46080.result' (properties changed: -x to +x)
--- a/mysql-test/r/bug46080.result 2009-09-03 06:38:06 +0000
+++ b/mysql-test/r/bug46080.result 2010-02-02 12:17:21 +0000
@@ -2,8 +2,8 @@
# Bug #46080: group_concat(... order by) crashes server when
# sort_buffer_size cannot allocate
#
-call mtr.add_suppression("Out of memory at line .*, 'my_alloc.c'");
-call mtr.add_suppression("needed .* byte .*k., memory in use: .* bytes .*k");
+call mtr.add_suppression("Out of memory at line .*, '.*my_alloc.c'");
+call mtr.add_suppression("needed .* byte (.*k)., memory in use: .* bytes (.*k)");
CREATE TABLE t1(a CHAR(255));
INSERT INTO t1 VALUES ('a');
SET @@SESSION.sort_buffer_size=5*16*1000000;
=== modified file 'mysql-test/r/count_distinct.result'
--- a/mysql-test/r/count_distinct.result 2005-05-29 23:32:50 +0000
+++ b/mysql-test/r/count_distinct.result 2009-12-22 09:52:23 +0000
@@ -40,6 +40,26 @@ select t2.isbn,city,t1.libname,count(dis
isbn city libname a
007 Berkeley Berkeley Public1 2
000 New York New York Public Libra 2
+select t2.isbn,city,@bar:=t1.libname,count(distinct t1.libname) as a
+from t3 left join t1 on t3.libname=t1.libname left join t2
+on t3.isbn=t2.isbn group by city having count(distinct
+t1.libname) > 1;
+isbn city @bar:=t1.libname a
+007 Berkeley Berkeley Public1 2
+000 New York New York Public Libra 2
+SELECT @bar;
+@bar
+Berkeley Public2
+select t2.isbn,city,concat(@bar:=t1.libname),count(distinct t1.libname) as a
+from t3 left join t1 on t3.libname=t1.libname left join t2
+on t3.isbn=t2.isbn group by city having count(distinct
+t1.libname) > 1;
+isbn city concat(@bar:=t1.libname) a
+007 Berkeley Berkeley Public1 2
+000 New York New York Public Libra 2
+SELECT @bar;
+@bar
+Berkeley Public2
drop table t1, t2, t3;
create table t1 (f1 int);
insert into t1 values (1);
=== modified file 'mysql-test/r/create.result'
--- a/mysql-test/r/create.result 2009-12-27 13:54:41 +0000
+++ b/mysql-test/r/create.result 2010-03-04 08:03:07 +0000
@@ -820,16 +820,13 @@ i
drop table t1;
create temporary table t1 (j int);
create table if not exists t1 select 1;
-Warnings:
-Note 1050 Table 't1' already exists
select * from t1;
j
-1
drop temporary table t1;
select * from t1;
-ERROR 42S02: Table 'test.t1' doesn't exist
+1
+1
drop table t1;
-ERROR 42S02: Unknown table 't1'
create table t1 (i int);
insert into t1 values (1), (2);
lock tables t1 read;
=== modified file 'mysql-test/r/ctype_ucs.result'
--- a/mysql-test/r/ctype_ucs.result 2009-12-03 12:02:37 +0000
+++ b/mysql-test/r/ctype_ucs.result 2010-03-04 08:03:07 +0000
@@ -116,6 +116,26 @@ select binary 'a a' > 'a', binary 'a \
binary 'a a' > 'a' binary 'a \0' > 'a' binary 'a\0' > 'a'
1 1 1
SET CHARACTER SET koi8r;
+create table t1 (a varchar(2) character set ucs2 collate ucs2_bin, key(a));
+insert into t1 values ('A'),('A'),('B'),('C'),('D'),('A\t');
+insert into t1 values ('A\0'),('A\0'),('A\0'),('A\0'),('AZ');
+select hex(a) from t1 where a like 'A_' order by a;
+hex(a)
+00410000
+00410000
+00410000
+00410000
+00410009
+0041005A
+select hex(a) from t1 ignore key(a) where a like 'A_' order by a;
+hex(a)
+00410000
+00410000
+00410000
+00410000
+00410009
+0041005A
+drop table t1;
CREATE TABLE t1 (word VARCHAR(64) CHARACTER SET ucs2, word2 CHAR(64) CHARACTER SET ucs2);
INSERT INTO t1 VALUES (_koi8r'�koi8r'� (X'2004',X'2004');
SELECT hex(word) FROM t1 ORDER BY word;
=== modified file 'mysql-test/r/ctype_utf8.result'
--- a/mysql-test/r/ctype_utf8.result 2010-01-04 12:35:54 +0000
+++ b/mysql-test/r/ctype_utf8.result 2010-03-04 08:03:07 +0000
@@ -1850,6 +1850,24 @@ select hex(_utf8 B'001111111111');
ERROR HY000: Invalid utf8 character string: 'FF'
select (_utf8 X'616263FF');
ERROR HY000: Invalid utf8 character string: 'FF'
+#
+# Bug#44131 Binary-mode "order by" returns records in incorrect order for UTF-8 strings
+#
+CREATE TABLE t1 (id int not null primary key, name varchar(10)) character set utf8;
+INSERT INTO t1 VALUES
+(2,'一二三01'),(3,'一二三09'),(4,'一二三02'),(5,'一二三08'),
+(6,'一二三11'),(7,'一二三91'),(8,'一二三21'),(9,'一二三81');
+SELECT * FROM t1 ORDER BY BINARY(name);
+id name
+2 一二三01
+4 一二三02
+5 一二三08
+3 一二三09
+6 一二三11
+8 一二三21
+9 一二三81
+7 一二三91
+DROP TABLE t1;
CREATE TABLE t1 (a INT NOT NULL, b INT NOT NULL);
INSERT INTO t1 VALUES (70000, 1092), (70001, 1085), (70002, 1065);
SELECT CONVERT(a, CHAR), CONVERT(b, CHAR) FROM t1 GROUP BY b;
=== modified file 'mysql-test/r/delete.result'
--- a/mysql-test/r/delete.result 2009-11-18 09:32:03 +0000
+++ b/mysql-test/r/delete.result 2010-01-29 09:36:28 +0000
@@ -337,3 +337,16 @@ END |
DELETE IGNORE FROM t1;
ERROR HY000: Can't update table 't1' in stored function/trigger because it is already used by statement which invoked this stored function/trigger.
DROP TABLE t1;
+#
+# Bug #49552 : sql_buffer_result cause crash + not found records
+# in multitable delete/subquery
+#
+CREATE TABLE t1(a INT);
+INSERT INTO t1 VALUES (1),(2),(3);
+SET SESSION SQL_BUFFER_RESULT=1;
+DELETE t1 FROM (SELECT SUM(a) a FROM t1) x,t1;
+SET SESSION SQL_BUFFER_RESULT=DEFAULT;
+SELECT * FROM t1;
+a
+DROP TABLE t1;
+End of 5.1 tests
=== modified file 'mysql-test/r/fulltext.result'
--- a/mysql-test/r/fulltext.result 2010-01-15 15:27:55 +0000
+++ b/mysql-test/r/fulltext.result 2010-03-04 08:03:07 +0000
@@ -560,6 +560,20 @@ MATCH (col) AGAINST('findme')
DEALLOCATE PREPARE s;
DROP TABLE t1;
#
+# Bug #49250 : spatial btree index corruption and crash
+# Part two : fulltext syntax check
+#
+CREATE TABLE t1(col1 TEXT,
+FULLTEXT INDEX USING BTREE (col1));
+ERROR 42000: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'USING BTREE (col1))' at line 2
+CREATE TABLE t2(col1 TEXT);
+CREATE FULLTEXT INDEX USING BTREE ON t2(col);
+ERROR 42000: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'USING BTREE ON t2(col)' at line 1
+ALTER TABLE t2 ADD FULLTEXT INDEX USING BTREE (col1);
+ERROR 42000: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'USING BTREE (col1)' at line 1
+DROP TABLE t2;
+End of 5.0 tests
+#
# Bug #47930: MATCH IN BOOLEAN MODE returns too many results
# inside subquery
#
@@ -597,4 +611,12 @@ WHERE t3.a=t1.a AND MATCH(b2) AGAINST('s
count(*)
0
DROP TABLE t1,t2,t3;
+#
+# Bug #49445: Assertion failed: 0, file .\item_row.cc, line 55 with
+# fulltext search and row op
+#
+CREATE TABLE t1(a CHAR(1),FULLTEXT(a));
+SELECT 1 FROM t1 WHERE MATCH(a) AGAINST ('') AND ROW(a,a) > ROW(1,1);
+1
+DROP TABLE t1;
End of 5.1 tests
=== modified file 'mysql-test/r/fulltext_order_by.result'
--- a/mysql-test/r/fulltext_order_by.result 2005-08-12 16:27:54 +0000
+++ b/mysql-test/r/fulltext_order_by.result 2009-12-22 15:52:15 +0000
@@ -126,7 +126,7 @@ group by
a.text, b.id, b.betreff
order by
match(b.betreff) against ('+abc' in boolean mode) desc;
-ERROR 42S22: Unknown column 'b.betreff' in 'order clause'
+ERROR 42000: Incorrect usage/placement of 'MATCH()'
select a.text, b.id, b.betreff
from
t2 a inner join t3 b on a.id = b.forum inner join
@@ -142,7 +142,7 @@ where
match(c.beitrag) against ('+abc' in boolean mode)
order by
match(b.betreff) against ('+abc' in boolean mode) desc;
-ERROR 42S22: Unknown column 'b.betreff' in 'order clause'
+ERROR 42000: Incorrect usage/placement of 'MATCH()'
select a.text, b.id, b.betreff
from
t2 a inner join t3 b on a.id = b.forum inner join
@@ -158,7 +158,7 @@ where
match(c.beitrag) against ('+abc' in boolean mode)
order by
match(betreff) against ('+abc' in boolean mode) desc;
-text id betreff
+ERROR 42000: Incorrect usage/placement of 'MATCH()'
(select b.id, b.betreff from t3 b) union
(select b.id, b.betreff from t3 b)
order by match(betreff) against ('+abc' in boolean mode) desc;
=== modified file 'mysql-test/r/func_concat.result'
--- a/mysql-test/r/func_concat.result 2009-05-21 08:06:43 +0000
+++ b/mysql-test/r/func_concat.result 2010-01-13 04:16:36 +0000
@@ -1,4 +1,5 @@
DROP TABLE IF EXISTS t1;
+DROP PROCEDURE IF EXISTS p1;
CREATE TABLE t1 ( number INT NOT NULL, alpha CHAR(6) NOT NULL );
INSERT INTO t1 VALUES (1413006,'idlfmv'),
(1413065,'smpsfz'),(1413127,'sljrhx'),(1413304,'qerfnd');
@@ -119,4 +120,14 @@ id select_type table type possible_keys
1 SIMPLE t2 index NULL PRIMARY 102 NULL 3 Using index
1 SIMPLE t1 eq_ref PRIMARY,a PRIMARY 318 func,const,const 1
DROP TABLE t1, t2;
+#
+# Bug #50096: CONCAT_WS inside procedure returning wrong data
+#
+CREATE PROCEDURE p1(a varchar(255), b int, c int)
+SET @query = CONCAT_WS(",", a, b, c);
+CALL p1("abcde", "0", "1234");
+SELECT @query;
+@query
+abcde,0,1234
+DROP PROCEDURE p1;
# End of 5.1 tests
=== modified file 'mysql-test/r/func_str.result'
--- a/mysql-test/r/func_str.result 2009-09-10 10:30:03 +0000
+++ b/mysql-test/r/func_str.result 2009-12-04 15:36:58 +0000
@@ -2558,3 +2558,32 @@ id select_type table type possible_keys
1 PRIMARY <derived2> ALL NULL NULL NULL NULL 2 Using join buffer
2 DERIVED t1 ALL NULL NULL NULL NULL 2
drop table t1;
+#
+# Bug#49141: Encode function is significantly slower in 5.1 compared to 5.0
+#
+DROP TABLE IF EXISTS t1, t2;
+CREATE TABLE t1 (a VARCHAR(20), b INT);
+CREATE TABLE t2 (a VARCHAR(20), b INT);
+INSERT INTO t1 VALUES ('ABC', 1);
+INSERT INTO t2 VALUES ('ABC', 1);
+SELECT DECODE((SELECT ENCODE('secret', t1.a) FROM t1,t2 WHERE t1.a = t2.a GROUP BY t1.b), t2.a)
+FROM t1,t2 WHERE t1.b = t1.b > 0 GROUP BY t2.b;
+DECODE((SELECT ENCODE('secret', t1.a) FROM t1,t2 WHERE t1.a = t2.a GROUP BY t1.b), t2.a)
+secret
+SELECT DECODE((SELECT ENCODE('secret', 'ABC') FROM t1,t2 WHERE t1.a = t2.a GROUP BY t1.b), t2.a)
+FROM t1,t2 WHERE t1.b = t1.b > 0 GROUP BY t2.b;
+DECODE((SELECT ENCODE('secret', 'ABC') FROM t1,t2 WHERE t1.a = t2.a GROUP BY t1.b), t2.a)
+secret
+SELECT DECODE((SELECT ENCODE('secret', t1.a) FROM t1,t2 WHERE t1.a = t2.a GROUP BY t1.b), 'ABC')
+FROM t1,t2 WHERE t1.b = t1.b > 0 GROUP BY t2.b;
+DECODE((SELECT ENCODE('secret', t1.a) FROM t1,t2 WHERE t1.a = t2.a GROUP BY t1.b), 'ABC')
+secret
+TRUNCATE TABLE t1;
+TRUNCATE TABLE t2;
+INSERT INTO t1 VALUES ('EDF', 3), ('BCD', 2), ('ABC', 1);
+INSERT INTO t2 VALUES ('EDF', 3), ('BCD', 2), ('ABC', 1);
+SELECT DECODE((SELECT ENCODE('secret', t1.a) FROM t1,t2 WHERE t1.a = t2.a GROUP BY t1.b LIMIT 1), t2.a)
+FROM t2 WHERE t2.b = 1 GROUP BY t2.b;
+DECODE((SELECT ENCODE('secret', t1.a) FROM t1,t2 WHERE t1.a = t2.a GROUP BY t1.b LIMIT 1), t2.a)
+secret
+DROP TABLE t1, t2;
=== modified file 'mysql-test/r/func_time.result'
--- a/mysql-test/r/func_time.result 2009-01-23 12:22:05 +0000
+++ b/mysql-test/r/func_time.result 2010-01-21 08:10:05 +0000
@@ -682,7 +682,7 @@ select timestampadd(SQL_TSI_FRAC_SECOND,
timestampadd(SQL_TSI_FRAC_SECOND, 1, date)
2003-01-02 00:00:00.000001
Warnings:
-Warning 1287 The syntax 'FRAC_SECOND' is deprecated and will be removed in MySQL 6.2. Please use MICROSECOND instead
+Warning 1287 The syntax 'FRAC_SECOND' is deprecated and will be removed in MySQL 5.6. Please use MICROSECOND instead
select timestampdiff(MONTH, '2001-02-01', '2001-05-01') as a;
a
3
@@ -717,7 +717,7 @@ select timestampdiff(SQL_TSI_FRAC_SECOND
a
7689538999999
Warnings:
-Warning 1287 The syntax 'FRAC_SECOND' is deprecated and will be removed in MySQL 6.2. Please use MICROSECOND instead
+Warning 1287 The syntax 'FRAC_SECOND' is deprecated and will be removed in MySQL 5.6. Please use MICROSECOND instead
select timestampdiff(SQL_TSI_DAY, '1986-02-01', '1986-03-01') as a1,
timestampdiff(SQL_TSI_DAY, '1900-02-01', '1900-03-01') as a2,
timestampdiff(SQL_TSI_DAY, '1996-02-01', '1996-03-01') as a3,
@@ -1088,7 +1088,7 @@ timestampdiff(SQL_TSI_FRAC_SECOND, '2001
id select_type table type possible_keys key key_len ref rows filtered Extra
1 SIMPLE NULL NULL NULL NULL NULL NULL NULL NULL No tables used
Warnings:
-Warning 1287 The syntax 'FRAC_SECOND' is deprecated and will be removed in MySQL 6.2. Please use MICROSECOND instead
+Warning 1287 The syntax 'FRAC_SECOND' is deprecated and will be removed in MySQL 5.6. Please use MICROSECOND instead
Note 1003 select timestampdiff(WEEK,'2001-02-01','2001-05-01') AS `a1`,timestampdiff(SECOND_FRAC,'2001-02-01 12:59:59.120000','2001-05-01 12:58:58.119999') AS `a2`
select time_format('100:00:00', '%H %k %h %I %l');
time_format('100:00:00', '%H %k %h %I %l')
@@ -1287,12 +1287,12 @@ SELECT TIMESTAMPADD(FRAC_SECOND, 1, '200
TIMESTAMPADD(FRAC_SECOND, 1, '2008-02-18')
2008-02-18 00:00:00.000001
Warnings:
-Warning 1287 The syntax 'FRAC_SECOND' is deprecated and will be removed in MySQL 6.2. Please use MICROSECOND instead
+Warning 1287 The syntax 'FRAC_SECOND' is deprecated and will be removed in MySQL 5.6. Please use MICROSECOND instead
SELECT TIMESTAMPDIFF(FRAC_SECOND, '2008-02-17', '2008-02-18');
TIMESTAMPDIFF(FRAC_SECOND, '2008-02-17', '2008-02-18')
86400000000
Warnings:
-Warning 1287 The syntax 'FRAC_SECOND' is deprecated and will be removed in MySQL 6.2. Please use MICROSECOND instead
+Warning 1287 The syntax 'FRAC_SECOND' is deprecated and will be removed in MySQL 5.6. Please use MICROSECOND instead
SELECT DATE_ADD('2008-02-18', INTERVAL 1 FRAC_SECOND);
ERROR 42000: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'FRAC_SECOND)' at line 1
SELECT DATE_SUB('2008-02-18', INTERVAL 1 FRAC_SECOND);
=== modified file 'mysql-test/r/gis.result'
--- a/mysql-test/r/gis.result 2009-12-08 09:26:11 +0000
+++ b/mysql-test/r/gis.result 2010-01-13 10:28:42 +0000
@@ -984,6 +984,19 @@ GEOMFROMTEXT(
SELECT 1 FROM t1 WHERE a <> (SELECT GEOMETRYCOLLECTIONFROMWKB(b) FROM t1);
1
DROP TABLE t1;
+#
+# Bug #49250 : spatial btree index corruption and crash
+# Part one : spatial syntax check
+#
+CREATE TABLE t1(col1 MULTIPOLYGON NOT NULL,
+SPATIAL INDEX USING BTREE (col1));
+ERROR 42000: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'USING BTREE (col1))' at line 2
+CREATE TABLE t2(col1 MULTIPOLYGON NOT NULL);
+CREATE SPATIAL INDEX USING BTREE ON t2(col);
+ERROR 42000: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'USING BTREE ON t2(col)' at line 1
+ALTER TABLE t2 ADD SPATIAL INDEX USING BTREE (col1);
+ERROR 42000: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'USING BTREE (col1)' at line 1
+DROP TABLE t2;
End of 5.0 tests
create table t1 (f1 tinyint(1), f2 char(1), f3 varchar(1), f4 geometry, f5 datetime);
create view v1 as select * from t1;
=== modified file 'mysql-test/r/information_schema.result'
--- a/mysql-test/r/information_schema.result 2010-03-10 09:11:02 +0000
+++ b/mysql-test/r/information_schema.result 2010-03-10 09:12:23 +0000
@@ -1617,4 +1617,26 @@ SET TIMESTAMP=@@TIMESTAMP + 10000000;
SELECT 'NOT_OK' AS TEST_RESULT FROM INFORMATION_SCHEMA.PROCESSLIST WHERE time < 0;
TEST_RESULT
SET TIMESTAMP=DEFAULT;
+#
+# Bug #50276: Security flaw in INFORMATION_SCHEMA.TABLES
+#
+CREATE DATABASE db1;
+USE db1;
+CREATE TABLE t1 (id INT);
+CREATE USER nonpriv;
+USE test;
+# connected as nonpriv
+# Should return 0
+SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME='t1';
+COUNT(*)
+0
+USE INFORMATION_SCHEMA;
+# Should return 0
+SELECT COUNT(*) FROM TABLES WHERE TABLE_NAME='t1';
+COUNT(*)
+0
+# connected as root
+DROP USER nonpriv;
+DROP TABLE db1.t1;
+DROP DATABASE db1;
End of 5.1 tests.
=== added file 'mysql-test/r/innodb-autoinc-44030.result'
--- a/mysql-test/r/innodb-autoinc-44030.result 1970-01-01 00:00:00 +0000
+++ b/mysql-test/r/innodb-autoinc-44030.result 2010-01-22 10:03:18 +0000
@@ -0,0 +1,30 @@
+drop table if exists t1;
+SET @@SESSION.AUTO_INCREMENT_INCREMENT=1, @@SESSION.AUTO_INCREMENT_OFFSET=1;
+CREATE TABLE t1 (c1 INT PRIMARY KEY AUTO_INCREMENT) ENGINE=InnoDB;
+INSERT INTO t1 VALUES (null);
+INSERT INTO t1 VALUES (null);
+ALTER TABLE t1 CHANGE c1 d1 INT NOT NULL AUTO_INCREMENT;
+SELECT * FROM t1;
+d1
+1
+2
+SELECT * FROM t1;
+d1
+1
+2
+INSERT INTO t1 VALUES(null);
+Got one of the listed errors
+ALTER TABLE t1 AUTO_INCREMENT = 3;
+SHOW CREATE TABLE t1;
+Table Create Table
+t1 CREATE TABLE `t1` (
+ `d1` int(11) NOT NULL AUTO_INCREMENT,
+ PRIMARY KEY (`d1`)
+) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=latin1
+INSERT INTO t1 VALUES(null);
+SELECT * FROM t1;
+d1
+1
+2
+3
+DROP TABLE t1;
=== modified file 'mysql-test/r/innodb-autoinc.result'
--- a/mysql-test/r/innodb-autoinc.result 2010-01-15 17:02:57 +0000
+++ b/mysql-test/r/innodb-autoinc.result 2010-03-04 08:03:07 +0000
@@ -868,35 +868,6 @@ Got one of the listed errors
DROP TABLE t1;
DROP TABLE t2;
SET @@SESSION.AUTO_INCREMENT_INCREMENT=1, @@SESSION.AUTO_INCREMENT_OFFSET=1;
-CREATE TABLE t1 (c1 INT PRIMARY KEY AUTO_INCREMENT) ENGINE=InnoDB;
-INSERT INTO t1 VALUES (null);
-INSERT INTO t1 VALUES (null);
-ALTER TABLE t1 CHANGE c1 d1 INT NOT NULL AUTO_INCREMENT;
-SELECT * FROM t1;
-d1
-1
-2
-SELECT * FROM t1;
-d1
-1
-2
-INSERT INTO t1 VALUES(null);
-Got one of the listed errors
-ALTER TABLE t1 AUTO_INCREMENT = 3;
-SHOW CREATE TABLE t1;
-Table Create Table
-t1 CREATE TABLE `t1` (
- `d1` int(11) NOT NULL AUTO_INCREMENT,
- PRIMARY KEY (`d1`)
-) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=latin1
-INSERT INTO t1 VALUES(null);
-SELECT * FROM t1;
-d1
-1
-2
-3
-DROP TABLE t1;
-SET @@SESSION.AUTO_INCREMENT_INCREMENT=1, @@SESSION.AUTO_INCREMENT_OFFSET=1;
SHOW VARIABLES LIKE "auto_inc%";
Variable_name Value
auto_increment_increment 1
@@ -1111,43 +1082,43 @@ c1 c2
3 innodb
4 NULL
DROP TABLE t1;
-CREATE TABLE T1 (c1 INT AUTO_INCREMENT, c2 INT, PRIMARY KEY(c1)) AUTO_INCREMENT=10 ENGINE=InnoDB;
-CREATE INDEX i1 on T1(c2);
-SHOW CREATE TABLE T1;
+CREATE TABLE t1 (c1 INT AUTO_INCREMENT, c2 INT, PRIMARY KEY(c1)) AUTO_INCREMENT=10 ENGINE=InnoDB;
+CREATE INDEX i1 on t1(c2);
+SHOW CREATE TABLE t1;
Table Create Table
-T1 CREATE TABLE `T1` (
+t1 CREATE TABLE `t1` (
`c1` int(11) NOT NULL AUTO_INCREMENT,
`c2` int(11) DEFAULT NULL,
PRIMARY KEY (`c1`),
KEY `i1` (`c2`)
) ENGINE=InnoDB AUTO_INCREMENT=10 DEFAULT CHARSET=latin1
-INSERT INTO T1 (c2) values (0);
-SELECT * FROM T1;
+INSERT INTO t1 (c2) values (0);
+SELECT * FROM t1;
c1 c2
10 0
-DROP TABLE T1;
-CREATE TABLE T1(C1 DOUBLE AUTO_INCREMENT KEY, C2 CHAR(10)) ENGINE=InnoDB;
-INSERT INTO T1(C1, C2) VALUES (1, 'innodb'), (3, 'innodb');
-INSERT INTO T1(C2) VALUES ('innodb');
-SHOW CREATE TABLE T1;
+DROP TABLE t1;
+CREATE TABLE t1(C1 DOUBLE AUTO_INCREMENT KEY, C2 CHAR(10)) ENGINE=InnoDB;
+INSERT INTO t1(C1, C2) VALUES (1, 'innodb'), (3, 'innodb');
+INSERT INTO t1(C2) VALUES ('innodb');
+SHOW CREATE TABLE t1;
Table Create Table
-T1 CREATE TABLE `T1` (
+t1 CREATE TABLE `t1` (
`C1` double NOT NULL AUTO_INCREMENT,
`C2` char(10) DEFAULT NULL,
PRIMARY KEY (`C1`)
) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=latin1
-DROP TABLE T1;
-CREATE TABLE T1(C1 FLOAT AUTO_INCREMENT KEY, C2 CHAR(10)) ENGINE=InnoDB;
-INSERT INTO T1(C1, C2) VALUES (1, 'innodb'), (3, 'innodb');
-INSERT INTO T1(C2) VALUES ('innodb');
-SHOW CREATE TABLE T1;
+DROP TABLE t1;
+CREATE TABLE t1(C1 FLOAT AUTO_INCREMENT KEY, C2 CHAR(10)) ENGINE=InnoDB;
+INSERT INTO t1(C1, C2) VALUES (1, 'innodb'), (3, 'innodb');
+INSERT INTO t1(C2) VALUES ('innodb');
+SHOW CREATE TABLE t1;
Table Create Table
-T1 CREATE TABLE `T1` (
+t1 CREATE TABLE `t1` (
`C1` float NOT NULL AUTO_INCREMENT,
`C2` char(10) DEFAULT NULL,
PRIMARY KEY (`C1`)
) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=latin1
-DROP TABLE T1;
+DROP TABLE t1;
CREATE TABLE t1 (c1 INT AUTO_INCREMENT PRIMARY KEY) ENGINE=InnoDB;
INSERT INTO t1 SET c1 = 1;
SHOW CREATE TABLE t1;
=== modified file 'mysql-test/r/join_outer.result'
--- a/mysql-test/r/join_outer.result 2007-05-27 19:22:44 +0000
+++ b/mysql-test/r/join_outer.result 2009-12-17 09:55:18 +0000
@@ -1254,3 +1254,38 @@ SELECT * FROM t1 LEFT JOIN t2 ON e<>0 WH
c e d
1 0 NULL
DROP TABLE t1,t2;
+#
+# Bug#47650: using group by with rollup without indexes returns incorrect
+# results with where
+#
+CREATE TABLE t1 ( a INT );
+INSERT INTO t1 VALUES (1);
+CREATE TABLE t2 ( a INT, b INT );
+INSERT INTO t2 VALUES (1, 1),(1, 2),(1, 3),(2, 4),(2, 5);
+EXPLAIN
+SELECT t1.a, COUNT( t2.b ), SUM( t2.b ), MAX( t2.b )
+FROM t1 LEFT JOIN t2 USING( a )
+GROUP BY t1.a WITH ROLLUP;
+id select_type table type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 system NULL NULL NULL NULL 1 Using temporary; Using filesort
+1 SIMPLE t2 ALL NULL NULL NULL NULL 5
+SELECT t1.a, COUNT( t2.b ), SUM( t2.b ), MAX( t2.b )
+FROM t1 LEFT JOIN t2 USING( a )
+GROUP BY t1.a WITH ROLLUP;
+a COUNT( t2.b ) SUM( t2.b ) MAX( t2.b )
+1 3 6 3
+NULL 3 6 3
+EXPLAIN
+SELECT t1.a, COUNT( t2.b ), SUM( t2.b ), MAX( t2.b )
+FROM t1 JOIN t2 USING( a )
+GROUP BY t1.a WITH ROLLUP;
+id select_type table type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 system NULL NULL NULL NULL 1 Using filesort
+1 SIMPLE t2 ALL NULL NULL NULL NULL 5 Using where
+SELECT t1.a, COUNT( t2.b ), SUM( t2.b ), MAX( t2.b )
+FROM t1 JOIN t2 USING( a )
+GROUP BY t1.a WITH ROLLUP;
+a COUNT( t2.b ) SUM( t2.b ) MAX( t2.b )
+1 3 6 3
+NULL 3 6 3
+DROP TABLE t1, t2;
=== modified file 'mysql-test/r/myisam.result'
--- a/mysql-test/r/myisam.result 2009-12-03 11:19:05 +0000
+++ b/mysql-test/r/myisam.result 2010-03-04 08:03:07 +0000
@@ -1853,6 +1853,21 @@ CHECK TABLE t1;
Table Op Msg_type Msg_text
test.t1 check status OK
DROP TABLE t1;
+#
+# Bug #49465: valgrind warnings and incorrect live checksum...
+#
+CREATE TABLE t1(
+a VARCHAR(1), b VARCHAR(1), c VARCHAR(1),
+f VARCHAR(1), g VARCHAR(1), h VARCHAR(1),
+i VARCHAR(1), j VARCHAR(1), k VARCHAR(1)) CHECKSUM=1;
+INSERT INTO t1 VALUES('', '', '', '', '', '', '', '', '');
+CHECKSUM TABLE t1 QUICK;
+Table Checksum
+test.t1 467455460
+CHECKSUM TABLE t1 EXTENDED;
+Table Checksum
+test.t1 467455460
+DROP TABLE t1;
End of 5.0 tests
create table t1 (a int not null, key `a` (a) key_block_size=1024);
show create table t1;
=== modified file 'mysql-test/r/mysql.result'
--- a/mysql-test/r/mysql.result 2009-11-27 14:41:45 +0000
+++ b/mysql-test/r/mysql.result 2009-12-17 20:06:36 +0000
@@ -229,4 +229,10 @@ a: b
</row>
</resultset>
drop table t1;
-End of 5.0 tests
+
+Bug #47147: mysql client option --skip-column-names does not apply to vertical output
+
+*************************** 1. row ***************************
+1
+
+End of tests
=== modified file 'mysql-test/r/mysql_upgrade.result'
--- a/mysql-test/r/mysql_upgrade.result 2009-01-26 14:20:33 +0000
+++ b/mysql-test/r/mysql_upgrade.result 2009-12-04 16:00:20 +0000
@@ -127,3 +127,45 @@ mysql.time_zone_transition
mysql.time_zone_transition_type OK
mysql.user OK
set GLOBAL sql_mode=default;
+#
+# Bug #41569 mysql_upgrade (ver 5.1) add 3 fields to mysql.proc table
+# but does not set values.
+#
+CREATE PROCEDURE testproc() BEGIN END;
+UPDATE mysql.proc SET character_set_client = NULL WHERE name LIKE 'testproc';
+UPDATE mysql.proc SET collation_connection = NULL WHERE name LIKE 'testproc';
+UPDATE mysql.proc SET db_collation = NULL WHERE name LIKE 'testproc';
+mtr.global_suppressions OK
+mtr.test_suppressions OK
+mysql.columns_priv OK
+mysql.db OK
+mysql.event OK
+mysql.func OK
+mysql.general_log
+Error : You can't use locks with log tables.
+status : OK
+mysql.help_category OK
+mysql.help_keyword OK
+mysql.help_relation OK
+mysql.help_topic OK
+mysql.host OK
+mysql.ndb_binlog_index OK
+mysql.plugin OK
+mysql.proc OK
+mysql.procs_priv OK
+mysql.servers OK
+mysql.slow_log
+Error : You can't use locks with log tables.
+status : OK
+mysql.tables_priv OK
+mysql.time_zone OK
+mysql.time_zone_leap_second OK
+mysql.time_zone_name OK
+mysql.time_zone_transition OK
+mysql.time_zone_transition_type OK
+mysql.user OK
+CALL testproc();
+DROP PROCEDURE testproc;
+WARNING: NULL values of the 'character_set_client' column ('mysql.proc' table) have been updated with a default value (latin1). Please verify if necessary.
+WARNING: NULL values of the 'collation_connection' column ('mysql.proc' table) have been updated with a default value (latin1_swedish_ci). Please verify if necessary.
+WARNING: NULL values of the 'db_collation' column ('mysql.proc' table) have been updated with default values. Please verify if necessary.
=== modified file 'mysql-test/r/mysqlbinlog.result'
--- a/mysql-test/r/mysqlbinlog.result 2009-09-30 02:31:25 +0000
+++ b/mysql-test/r/mysqlbinlog.result 2010-01-27 12:23:28 +0000
@@ -44,16 +44,16 @@ SET TIMESTAMP=1000000000/*!*/;
insert into t2 values ()
/*!*/;
SET TIMESTAMP=1000000000/*!*/;
-LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (word)
+LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (`word`)
/*!*/;
SET TIMESTAMP=1000000000/*!*/;
-LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (word)
+LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (`word`)
/*!*/;
SET TIMESTAMP=1000000000/*!*/;
-LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (word)
+LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (`word`)
/*!*/;
SET TIMESTAMP=1000000000/*!*/;
-LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (word)
+LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (`word`)
/*!*/;
DELIMITER ;
# End of log file
@@ -93,6 +93,7 @@ ROLLBACK /* added by mysqlbinlog */;
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
--- --position --
+Warning: The option '--position' is deprecated and will be removed in a future release. Please use --start-position instead.
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
@@ -144,16 +145,16 @@ SET TIMESTAMP=1000000000/*!*/;
insert into t2 values ()
/*!*/;
SET TIMESTAMP=1000000000/*!*/;
-LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (word)
+LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (`word`)
/*!*/;
SET TIMESTAMP=1000000000/*!*/;
-LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (word)
+LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (`word`)
/*!*/;
SET TIMESTAMP=1000000000/*!*/;
-LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (word)
+LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (`word`)
/*!*/;
SET TIMESTAMP=1000000000/*!*/;
-LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (word)
+LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (`word`)
/*!*/;
DELIMITER ;
# End of log file
@@ -193,6 +194,7 @@ ROLLBACK /* added by mysqlbinlog */;
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
--- --position --
+Warning: The option '--position' is deprecated and will be removed in a future release. Please use --start-position instead.
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
@@ -233,6 +235,7 @@ DELIMITER ;
# End of log file
ROLLBACK /* added by mysqlbinlog */;
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
+Warning: The option '--position' is deprecated and will be removed in a future release. Please use --start-position instead.
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
@@ -359,29 +362,29 @@ SET @@session.collation_database=DEFAULT
create table t1 (a varchar(64) character set utf8)
/*!*/;
SET TIMESTAMP=1000000000/*!*/;
-LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (a)
+LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (`a`)
/*!*/;
SET TIMESTAMP=1000000000/*!*/;
SET @@session.collation_database=7/*!*/;
-LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (a)
+LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (`a`)
/*!*/;
SET TIMESTAMP=1000000000/*!*/;
SET @@session.collation_database=DEFAULT/*!*/;
-LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (a)
+LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (`a`)
/*!*/;
SET TIMESTAMP=1000000000/*!*/;
-LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (a)
+LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-#-#' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (`a`)
/*!*/;
SET TIMESTAMP=1000000000/*!*/;
SET @@session.collation_database=7/*!*/;
-LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-a-0' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (a)
+LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-a-0' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (`a`)
/*!*/;
SET TIMESTAMP=1000000000/*!*/;
SET @@session.collation_database=DEFAULT/*!*/;
-LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-b-0' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (a)
+LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-b-0' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (`a`)
/*!*/;
SET TIMESTAMP=1000000000/*!*/;
-LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-c-0' INTO TABLE `t1` CHARACTER SET koi8r FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (a)
+LOAD DATA LOCAL INFILE 'MYSQLTEST_VARDIR/tmp/SQL_LOAD_MB-c-0' INTO TABLE `t1` CHARACTER SET koi8r FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (`a`)
/*!*/;
SET TIMESTAMP=1000000000/*!*/;
drop table t1
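A side note on the backquoted column lists above: the quoting presumably matters once a column name actually needs it, e.g. a reserved word. A minimal, hypothetical sketch (table, file name and column are made up, not from the patch):
create table t2 (`order` varchar(64));
load data local infile 'words.txt' into table t2 (`order`);
# The LOAD DATA event written to the binary log now quotes the column as (`order`),
# so the mysqlbinlog output can be replayed without a parse error on the unquoted name.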
=== modified file 'mysql-test/r/openssl_1.result'
--- a/mysql-test/r/openssl_1.result 2010-01-29 10:42:31 +0000
+++ b/mysql-test/r/openssl_1.result 2010-03-04 08:03:07 +0000
@@ -3,10 +3,8 @@ create table t1(f1 int);
insert into t1 values (5);
grant select on test.* to ssl_user1@localhost require SSL;
grant select on test.* to ssl_user2@localhost require cipher "DHE-RSA-AES256-SHA";
-grant select on test.* to ssl_user3@localhost require cipher
-"DHE-RSA-AES256-SHA" AND SUBJECT "/C=FI/ST=Tuusula/O=Monty Program Ab/emailAddress=abstract.developer(a)askmonty.org";
-grant select on test.* to ssl_user4@localhost require cipher
-"DHE-RSA-AES256-SHA" AND SUBJECT "/C=FI/ST=Tuusula/O=Monty Program Ab/emailAddress=abstract.developer(a)askmonty.org" ISSUER "/C=FI/ST=Tuusula/O=Monty Program Ab/emailAddress=abstract.developer(a)askmonty.org";
+grant select on test.* to ssl_user3@localhost require cipher "DHE-RSA-AES256-SHA" AND SUBJECT "/C=SE/ST=Uppsala/O=MySQL AB";
+grant select on test.* to ssl_user4@localhost require cipher "DHE-RSA-AES256-SHA" AND SUBJECT "/C=SE/ST=Uppsala/O=MySQL AB" ISSUER "/C=SE/ST=Uppsala/L=Uppsala/O=MySQL AB";
grant select on test.* to ssl_user5@localhost require cipher "DHE-RSA-AES256-SHA" AND SUBJECT "xxx";
flush privileges;
connect(localhost,ssl_user5,,test,MASTER_PORT,MASTER_SOCKET);
=== modified file 'mysql-test/r/order_by.result'
--- a/mysql-test/r/order_by.result 2009-11-10 08:58:43 +0000
+++ b/mysql-test/r/order_by.result 2009-12-10 15:38:01 +0000
@@ -1463,6 +1463,15 @@ id select_type table type possible_keys
SELECT 1 AS col FROM t1 WHERE a=2 AND (c=10 OR c IS NULL) ORDER BY c;
col
1
+# Must use ref-or-null on the a_c index
+EXPLAIN
+SELECT 1 AS col FROM t1 WHERE a=2 AND (c=10 OR c IS NULL) ORDER BY c DESC;
+id select_type table type possible_keys key key_len ref rows Extra
+x x x ref_or_null a_c,a x x x x x
+# Must return 1 row
+SELECT 1 AS col FROM t1 WHERE a=2 AND (c=10 OR c IS NULL) ORDER BY c DESC;
+col
+1
DROP TABLE t1;
End of 5.0 tests
CREATE TABLE t2 (a varchar(32), b int(11), c float, d double,
=== modified file 'mysql-test/r/partition.result'
--- a/mysql-test/r/partition.result 2010-01-15 15:27:55 +0000
+++ b/mysql-test/r/partition.result 2010-03-04 08:03:07 +0000
@@ -24,8 +24,8 @@ a timestamp NOT NULL DEFAULT CURRENT_TIM
b varchar(10),
PRIMARY KEY (a)
)
-PARTITION BY RANGE (to_days(a)) (
-PARTITION p1 VALUES LESS THAN (733407),
+PARTITION BY RANGE (UNIX_TIMESTAMP(a)) (
+PARTITION p1 VALUES LESS THAN (1199134800),
PARTITION pmax VALUES LESS THAN MAXVALUE
);
INSERT INTO t1 VALUES ('2007-07-30 17:35:48', 'p1');
@@ -37,7 +37,7 @@ a b
2009-07-14 17:35:55 pmax
2009-09-21 17:31:42 pmax
ALTER TABLE t1 REORGANIZE PARTITION pmax INTO (
-PARTITION p3 VALUES LESS THAN (733969),
+PARTITION p3 VALUES LESS THAN (1247688000),
PARTITION pmax VALUES LESS THAN MAXVALUE);
SELECT * FROM t1;
a b
@@ -51,9 +51,9 @@ t1 CREATE TABLE `t1` (
`b` varchar(10) DEFAULT NULL,
PRIMARY KEY (`a`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
-/*!50100 PARTITION BY RANGE (to_days(a))
-(PARTITION p1 VALUES LESS THAN (733407) ENGINE = MyISAM,
- PARTITION p3 VALUES LESS THAN (733969) ENGINE = MyISAM,
+/*!50100 PARTITION BY RANGE (UNIX_TIMESTAMP(a))
+(PARTITION p1 VALUES LESS THAN (1199134800) ENGINE = MyISAM,
+ PARTITION p3 VALUES LESS THAN (1247688000) ENGINE = MyISAM,
PARTITION pmax VALUES LESS THAN MAXVALUE ENGINE = MyISAM) */
DROP TABLE t1;
create table t1 (a int NOT NULL, b varchar(5) NOT NULL)
=== modified file 'mysql-test/r/partition_bug18198.result'
--- a/mysql-test/r/partition_bug18198.result 2007-06-13 15:28:59 +0000
+++ b/mysql-test/r/partition_bug18198.result 2009-12-13 20:29:50 +0000
@@ -126,7 +126,7 @@ ERROR HY000: This partition function is
create table t1 (col1 date)
partition by range(unix_timestamp(col1))
(partition p0 values less than (10), partition p1 values less than (30));
-ERROR HY000: This partition function is not allowed
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
create table t1 (col1 datetime)
partition by range(week(col1))
(partition p0 values less than (10), partition p1 values less than (30));
=== modified file 'mysql-test/r/partition_error.result'
--- a/mysql-test/r/partition_error.result 2009-02-18 20:29:30 +0000
+++ b/mysql-test/r/partition_error.result 2009-12-13 20:29:50 +0000
@@ -138,7 +138,7 @@ primary key(a,b))
partition by hash (rand(a))
partitions 2
(partition x1, partition x2);
-ERROR 42000: Constant/Random expression in (sub)partitioning function is not allowed near ')
+ERROR 42000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed near ')
partitions 2
(partition x1, partition x2)' at line 6
CREATE TABLE t1 (
@@ -149,7 +149,7 @@ primary key(a,b))
partition by range (rand(a))
partitions 2
(partition x1 values less than (0), partition x2 values less than (2));
-ERROR 42000: Constant/Random expression in (sub)partitioning function is not allowed near ')
+ERROR 42000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed near ')
partitions 2
(partition x1 values less than (0), partition x2 values less than' at line 6
CREATE TABLE t1 (
@@ -160,7 +160,7 @@ primary key(a,b))
partition by list (rand(a))
partitions 2
(partition x1 values in (1), partition x2 values in (2));
-ERROR 42000: Constant/Random expression in (sub)partitioning function is not allowed near ')
+ERROR 42000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed near ')
partitions 2
(partition x1 values in (1), partition x2 values in (2))' at line 6
CREATE TABLE t1 (
@@ -275,7 +275,7 @@ c int not null,
primary key (a,b))
partition by key (a)
subpartition by hash (rand(a+b));
-ERROR 42000: Constant/Random expression in (sub)partitioning function is not allowed near ')' at line 7
+ERROR 42000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed near ')' at line 7
CREATE TABLE t1 (
a int not null,
b int not null,
@@ -372,7 +372,7 @@ partition by range (3+4)
partitions 2
(partition x1 values less than (4) tablespace ts1,
partition x2 values less than (8) tablespace ts2);
-ERROR HY000: Constant/Random expression in (sub)partitioning function is not allowed
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
CREATE TABLE t1 (
a int not null,
b int not null,
@@ -542,7 +542,7 @@ partition by list (3+4)
partitions 2
(partition x1 values in (4) tablespace ts1,
partition x2 values in (8) tablespace ts2);
-ERROR HY000: Constant/Random expression in (sub)partitioning function is not allowed
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
CREATE TABLE t1 (
a int not null,
b int not null,
@@ -634,13 +634,13 @@ partition by range (ascii(v))
ERROR HY000: This partition function is not allowed
create table t1 (a int)
partition by hash (rand(a));
-ERROR 42000: Constant/Random expression in (sub)partitioning function is not allowed near ')' at line 2
+ERROR 42000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed near ')' at line 2
create table t1 (a int)
partition by hash(CURTIME() + a);
-ERROR 42000: Constant/Random expression in (sub)partitioning function is not allowed near ')' at line 2
+ERROR 42000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed near ')' at line 2
create table t1 (a int)
partition by hash (NOW()+a);
-ERROR 42000: Constant/Random expression in (sub)partitioning function is not allowed near ')' at line 2
+ERROR 42000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed near ')' at line 2
create table t1 (a int)
partition by hash (extract(hour from convert_tz(a, '+00:00', '+00:00')));
ERROR HY000: This partition function is not allowed
@@ -651,3 +651,295 @@ ERROR HY000: This partition function is
create table t1 (a char(10))
partition by hash (extractvalue(a,'a'));
ERROR HY000: This partition function is not allowed
+#
+# Bug #42849: innodb crash with varying time_zone on partitioned
+# timestamp primary key
+#
+CREATE TABLE old (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (UNIX_TIMESTAMP(a)) (
+PARTITION p VALUES LESS THAN (1219089600),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (a) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: The PARTITION function returns the wrong type
+ALTER TABLE old
+PARTITION BY RANGE (a) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: The PARTITION function returns the wrong type
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (a+0) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (a+0) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (a % 2) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (a % 2) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (ABS(a)) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (ABS(a)) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (CEILING(a)) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (CEILING(a)) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (FLOOR(a)) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (FLOOR(a)) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (TO_DAYS(a)) (
+PARTITION p VALUES LESS THAN (733638),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (TO_DAYS(a)) (
+PARTITION p VALUES LESS THAN (733638),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (DAYOFYEAR(a)) (
+PARTITION p VALUES LESS THAN (231),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (DAYOFYEAR(a)) (
+PARTITION p VALUES LESS THAN (231),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (DAYOFMONTH(a)) (
+PARTITION p VALUES LESS THAN (19),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (DAYOFMONTH(a)) (
+PARTITION p VALUES LESS THAN (19),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (DAYOFWEEK(a)) (
+PARTITION p VALUES LESS THAN (3),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (DAYOFWEEK(a)) (
+PARTITION p VALUES LESS THAN (3),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (MONTH(a)) (
+PARTITION p VALUES LESS THAN (8),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (MONTH(a)) (
+PARTITION p VALUES LESS THAN (8),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (HOUR(a)) (
+PARTITION p VALUES LESS THAN (17),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (HOUR(a)) (
+PARTITION p VALUES LESS THAN (17),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (MINUTE(a)) (
+PARTITION p VALUES LESS THAN (55),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (MINUTE(a)) (
+PARTITION p VALUES LESS THAN (55),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (QUARTER(a)) (
+PARTITION p VALUES LESS THAN (3),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (QUARTER(a)) (
+PARTITION p VALUES LESS THAN (3),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (SECOND(a)) (
+PARTITION p VALUES LESS THAN (7),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (SECOND(a)) (
+PARTITION p VALUES LESS THAN (7),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (YEARWEEK(a)) (
+PARTITION p VALUES LESS THAN (200833),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (YEARWEEK(a)) (
+PARTITION p VALUES LESS THAN (200833),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (YEAR(a)) (
+PARTITION p VALUES LESS THAN (2008),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (YEAR(a)) (
+PARTITION p VALUES LESS THAN (2008),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (WEEKDAY(a)) (
+PARTITION p VALUES LESS THAN (3),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (WEEKDAY(a)) (
+PARTITION p VALUES LESS THAN (3),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (TIME_TO_SEC(a)) (
+PARTITION p VALUES LESS THAN (64507),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (TIME_TO_SEC(a)) (
+PARTITION p VALUES LESS THAN (64507),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (EXTRACT(DAY FROM a)) (
+PARTITION p VALUES LESS THAN (18),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (EXTRACT(DAY FROM a)) (
+PARTITION p VALUES LESS THAN (18),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP NOT NULL, b TIMESTAMP NOT NULL, PRIMARY KEY(a,b))
+PARTITION BY RANGE (DATEDIFF(a, a)) (
+PARTITION p VALUES LESS THAN (18),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (DATEDIFF(a, a)) (
+PARTITION p VALUES LESS THAN (18),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (YEAR(a + 0)) (
+PARTITION p VALUES LESS THAN (2008),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (YEAR(a + 0)) (
+PARTITION p VALUES LESS THAN (2008),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (TO_DAYS(a + '2008-01-01')) (
+PARTITION p VALUES LESS THAN (733638),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (TO_DAYS(a + '2008-01-01')) (
+PARTITION p VALUES LESS THAN (733638),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (YEAR(a + '2008-01-01')) (
+PARTITION p VALUES LESS THAN (2008),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (YEAR(a + '2008-01-01')) (
+PARTITION p VALUES LESS THAN (2008),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old ADD COLUMN b DATE;
+CREATE TABLE new (a TIMESTAMP, b DATE)
+PARTITION BY RANGE (YEAR(a + b)) (
+PARTITION p VALUES LESS THAN (2008),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (YEAR(a + b)) (
+PARTITION p VALUES LESS THAN (2008),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP, b DATE)
+PARTITION BY RANGE (TO_DAYS(a + b)) (
+PARTITION p VALUES LESS THAN (733638),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (TO_DAYS(a + b)) (
+PARTITION p VALUES LESS THAN (733638),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP, b date)
+PARTITION BY RANGE (UNIX_TIMESTAMP(a + b)) (
+PARTITION p VALUES LESS THAN (1219089600),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old
+PARTITION BY RANGE (UNIX_TIMESTAMP(a + b)) (
+PARTITION p VALUES LESS THAN (1219089600),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+CREATE TABLE new (a TIMESTAMP, b TIMESTAMP)
+PARTITION BY RANGE (UNIX_TIMESTAMP(a + b)) (
+PARTITION p VALUES LESS THAN (1219089600),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+ALTER TABLE old MODIFY b TIMESTAMP;
+ALTER TABLE old
+PARTITION BY RANGE (UNIX_TIMESTAMP(a + b)) (
+PARTITION p VALUES LESS THAN (1219089600),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
+DROP TABLE old;
+End of 5.1 tests
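For reference, the partitioning function on a TIMESTAMP column that this check still accepts is UNIX_TIMESTAMP(), whose result does not depend on the session time zone (the accepted CREATE TABLE old at the top of the new test shows the same thing). A minimal sketch; the table name and boundary value are illustrative only:
CREATE TABLE ts_ok (a TIMESTAMP NOT NULL PRIMARY KEY)
PARTITION BY RANGE (UNIX_TIMESTAMP(a)) (
PARTITION p0 VALUES LESS THAN (UNIX_TIMESTAMP('2008-08-19 00:00:00')),
PARTITION pmax VALUES LESS THAN MAXVALUE);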
=== modified file 'mysql-test/r/partition_innodb.result'
--- a/mysql-test/r/partition_innodb.result 2009-09-10 06:54:26 +0000
+++ b/mysql-test/r/partition_innodb.result 2010-01-18 16:49:18 +0000
@@ -274,3 +274,47 @@ CREATE TABLE t1 (a INT) ENGINE=InnoDB
PARTITION BY list(a) (PARTITION p1 VALUES IN (1));
CREATE INDEX i1 ON t1 (a);
DROP TABLE t1;
+#
+# Bug#47343: InnoDB fails to clean-up after lock wait timeout on
+# REORGANIZE PARTITION
+#
+CREATE TABLE t1 (
+a INT,
+b DATE NOT NULL,
+PRIMARY KEY (a, b)
+) ENGINE=InnoDB
+PARTITION BY RANGE (a) (
+PARTITION pMAX VALUES LESS THAN MAXVALUE
+) ;
+INSERT INTO t1 VALUES (1, '2001-01-01'), (2, '2002-02-02'), (3, '2003-03-03');
+START TRANSACTION;
+SELECT * FROM t1 FOR UPDATE;
+a b
+1 2001-01-01
+2 2002-02-02
+3 2003-03-03
+# Connection con1
+ALTER TABLE t1 REORGANIZE PARTITION pMAX INTO
+(PARTITION p3 VALUES LESS THAN (3),
+PARTITION pMAX VALUES LESS THAN MAXVALUE);
+ERROR HY000: Lock wait timeout exceeded; try restarting transaction
+SHOW WARNINGS;
+Level Code Message
+Error 1205 Lock wait timeout exceeded; try restarting transaction
+ALTER TABLE t1 REORGANIZE PARTITION pMAX INTO
+(PARTITION p3 VALUES LESS THAN (3),
+PARTITION pMAX VALUES LESS THAN MAXVALUE);
+ERROR HY000: Lock wait timeout exceeded; try restarting transaction
+SHOW WARNINGS;
+Level Code Message
+Error 1205 Lock wait timeout exceeded; try restarting transaction
+t1.frm
+t1.par
+# Connection default
+SELECT * FROM t1;
+a b
+1 2001-01-01
+2 2002-02-02
+3 2003-03-03
+COMMIT;
+DROP TABLE t1;
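The Bug#47343 scenario above reduces to one transaction holding row locks while another connection runs the REORGANIZE; a condensed sketch of the pattern exercised by the result (the timeout itself is governed by innodb_lock_wait_timeout):
# connection 1: keep row locks open
START TRANSACTION;
SELECT * FROM t1 FOR UPDATE;
# connection 2: waits on those locks and fails with error 1205 (lock wait timeout);
# the file listing above (only t1.frm and t1.par left) indicates the failed ALTER is cleaned up
ALTER TABLE t1 REORGANIZE PARTITION pMAX INTO
(PARTITION p3 VALUES LESS THAN (3),
PARTITION pMAX VALUES LESS THAN MAXVALUE);
# connection 1: COMMIT releases the locks; the test then drops the table
COMMIT;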
=== modified file 'mysql-test/r/partition_pruning.result'
--- a/mysql-test/r/partition_pruning.result 2009-12-08 09:26:11 +0000
+++ b/mysql-test/r/partition_pruning.result 2010-01-17 21:00:37 +0000
@@ -1,4 +1,614 @@
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+#
+# Bug#49742: Partition Pruning not working correctly for RANGE
+#
+CREATE TABLE t1 (a INT PRIMARY KEY)
+PARTITION BY RANGE (a) (
+PARTITION p0 VALUES LESS THAN (1),
+PARTITION p1 VALUES LESS THAN (2),
+PARTITION p2 VALUES LESS THAN (3),
+PARTITION p3 VALUES LESS THAN (4),
+PARTITION p4 VALUES LESS THAN (5),
+PARTITION p5 VALUES LESS THAN (6),
+PARTITION max VALUES LESS THAN MAXVALUE);
+INSERT INTO t1 VALUES (-1),(0),(1),(2),(3),(4),(5),(6),(7),(8);
+SELECT * FROM t1 WHERE a < 1;
+a
+-1
+0
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 1;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0 index PRIMARY PRIMARY 4 NULL 2 Using where; Using index
+SELECT * FROM t1 WHERE a < 2;
+a
+-1
+0
+1
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 2;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1 index PRIMARY PRIMARY 4 NULL 3 Using where; Using index
+SELECT * FROM t1 WHERE a < 3;
+a
+-1
+0
+1
+2
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 3;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1,p2 index PRIMARY PRIMARY 4 NULL 4 Using where; Using index
+SELECT * FROM t1 WHERE a < 4;
+a
+-1
+0
+1
+2
+3
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 4;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1,p2,p3 index PRIMARY PRIMARY 4 NULL 5 Using where; Using index
+SELECT * FROM t1 WHERE a < 5;
+a
+-1
+0
+1
+2
+3
+4
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 5;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1,p2,p3,p4 index PRIMARY PRIMARY 4 NULL 6 Using where; Using index
+SELECT * FROM t1 WHERE a < 6;
+a
+-1
+0
+1
+2
+3
+4
+5
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 6;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1,p2,p3,p4,p5 index PRIMARY PRIMARY 4 NULL 7 Using where; Using index
+SELECT * FROM t1 WHERE a < 7;
+a
+-1
+0
+1
+2
+3
+4
+5
+6
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 7;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1,p2,p3,p4,p5,max range PRIMARY PRIMARY 4 NULL 9 Using where; Using index
+SELECT * FROM t1 WHERE a <= 1;
+a
+-1
+0
+1
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 1;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1 index PRIMARY PRIMARY 4 NULL 10 Using where; Using index
+SELECT * FROM t1 WHERE a <= 2;
+a
+-1
+0
+1
+2
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 2;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1,p2 index PRIMARY PRIMARY 4 NULL 10 Using where; Using index
+SELECT * FROM t1 WHERE a <= 3;
+a
+-1
+0
+1
+2
+3
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 3;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1,p2,p3 index PRIMARY PRIMARY 4 NULL 10 Using where; Using index
+SELECT * FROM t1 WHERE a <= 4;
+a
+-1
+0
+1
+2
+3
+4
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 4;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1,p2,p3,p4 index PRIMARY PRIMARY 4 NULL 10 Using where; Using index
+SELECT * FROM t1 WHERE a <= 5;
+a
+-1
+0
+1
+2
+3
+4
+5
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 5;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1,p2,p3,p4,p5 index PRIMARY PRIMARY 4 NULL 10 Using where; Using index
+SELECT * FROM t1 WHERE a <= 6;
+a
+-1
+0
+1
+2
+3
+4
+5
+6
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 6;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1,p2,p3,p4,p5,max range PRIMARY PRIMARY 4 NULL 9 Using where; Using index
+SELECT * FROM t1 WHERE a <= 7;
+a
+-1
+0
+1
+2
+3
+4
+5
+6
+7
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 7;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1,p2,p3,p4,p5,max range PRIMARY PRIMARY 4 NULL 9 Using where; Using index
+SELECT * FROM t1 WHERE a = 1;
+a
+1
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 1;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p1 system PRIMARY NULL NULL NULL 1
+SELECT * FROM t1 WHERE a = 2;
+a
+2
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 2;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p2 system PRIMARY NULL NULL NULL 1
+SELECT * FROM t1 WHERE a = 3;
+a
+3
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 3;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p3 system PRIMARY NULL NULL NULL 1
+SELECT * FROM t1 WHERE a = 4;
+a
+4
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 4;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p4 system PRIMARY NULL NULL NULL 1
+SELECT * FROM t1 WHERE a = 5;
+a
+5
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 5;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p5 system PRIMARY NULL NULL NULL 1
+SELECT * FROM t1 WHERE a = 6;
+a
+6
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 6;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 max const PRIMARY PRIMARY 4 const 1 Using index
+SELECT * FROM t1 WHERE a = 7;
+a
+7
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 7;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 max const PRIMARY PRIMARY 4 const 1 Using index
+SELECT * FROM t1 WHERE a >= 1;
+a
+1
+2
+3
+4
+5
+6
+7
+8
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 1;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p1,p2,p3,p4,p5,max index PRIMARY PRIMARY 4 NULL 10 Using where; Using index
+SELECT * FROM t1 WHERE a >= 2;
+a
+2
+3
+4
+5
+6
+7
+8
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 2;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p2,p3,p4,p5,max index PRIMARY PRIMARY 4 NULL 10 Using where; Using index
+SELECT * FROM t1 WHERE a >= 3;
+a
+3
+4
+5
+6
+7
+8
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 3;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p3,p4,p5,max index PRIMARY PRIMARY 4 NULL 10 Using where; Using index
+SELECT * FROM t1 WHERE a >= 4;
+a
+4
+5
+6
+7
+8
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 4;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p4,p5,max index PRIMARY PRIMARY 4 NULL 10 Using where; Using index
+SELECT * FROM t1 WHERE a >= 5;
+a
+5
+6
+7
+8
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 5;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p5,max index PRIMARY PRIMARY 4 NULL 10 Using where; Using index
+SELECT * FROM t1 WHERE a >= 6;
+a
+6
+7
+8
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 6;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 max index PRIMARY PRIMARY 4 NULL 10 Using where; Using index
+SELECT * FROM t1 WHERE a >= 7;
+a
+7
+8
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 7;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 max range PRIMARY PRIMARY 4 NULL 2 Using where; Using index
+SELECT * FROM t1 WHERE a > 1;
+a
+2
+3
+4
+5
+6
+7
+8
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 1;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p2,p3,p4,p5,max index PRIMARY PRIMARY 4 NULL 10 Using where; Using index
+SELECT * FROM t1 WHERE a > 2;
+a
+3
+4
+5
+6
+7
+8
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 2;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p3,p4,p5,max index PRIMARY PRIMARY 4 NULL 10 Using where; Using index
+SELECT * FROM t1 WHERE a > 3;
+a
+4
+5
+6
+7
+8
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 3;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p4,p5,max index PRIMARY PRIMARY 4 NULL 10 Using where; Using index
+SELECT * FROM t1 WHERE a > 4;
+a
+5
+6
+7
+8
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 4;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p5,max index PRIMARY PRIMARY 4 NULL 10 Using where; Using index
+SELECT * FROM t1 WHERE a > 5;
+a
+6
+7
+8
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 5;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 max index PRIMARY PRIMARY 4 NULL 10 Using where; Using index
+SELECT * FROM t1 WHERE a > 6;
+a
+7
+8
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 6;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 max range PRIMARY PRIMARY 4 NULL 2 Using where; Using index
+SELECT * FROM t1 WHERE a > 7;
+a
+8
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 7;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 max range PRIMARY PRIMARY 4 NULL 2 Using where; Using index
+DROP TABLE t1;
+CREATE TABLE t1 (a INT PRIMARY KEY)
+PARTITION BY RANGE (a) (
+PARTITION p0 VALUES LESS THAN (1),
+PARTITION p1 VALUES LESS THAN (2),
+PARTITION p2 VALUES LESS THAN (3),
+PARTITION p3 VALUES LESS THAN (4),
+PARTITION p4 VALUES LESS THAN (5),
+PARTITION max VALUES LESS THAN MAXVALUE);
+INSERT INTO t1 VALUES (-1),(0),(1),(2),(3),(4),(5),(6),(7);
+SELECT * FROM t1 WHERE a < 1;
+a
+-1
+0
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 1;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0 index PRIMARY PRIMARY 4 NULL 2 Using where; Using index
+SELECT * FROM t1 WHERE a < 2;
+a
+-1
+0
+1
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 2;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1 index PRIMARY PRIMARY 4 NULL 3 Using where; Using index
+SELECT * FROM t1 WHERE a < 3;
+a
+-1
+0
+1
+2
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 3;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1,p2 index PRIMARY PRIMARY 4 NULL 4 Using where; Using index
+SELECT * FROM t1 WHERE a < 4;
+a
+-1
+0
+1
+2
+3
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 4;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1,p2,p3 index PRIMARY PRIMARY 4 NULL 5 Using where; Using index
+SELECT * FROM t1 WHERE a < 5;
+a
+-1
+0
+1
+2
+3
+4
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 5;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1,p2,p3,p4 index PRIMARY PRIMARY 4 NULL 6 Using where; Using index
+SELECT * FROM t1 WHERE a < 6;
+a
+-1
+0
+1
+2
+3
+4
+5
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 6;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1,p2,p3,p4,max range PRIMARY PRIMARY 4 NULL 8 Using where; Using index
+SELECT * FROM t1 WHERE a <= 1;
+a
+-1
+0
+1
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 1;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1 index PRIMARY PRIMARY 4 NULL 9 Using where; Using index
+SELECT * FROM t1 WHERE a <= 2;
+a
+-1
+0
+1
+2
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 2;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1,p2 index PRIMARY PRIMARY 4 NULL 9 Using where; Using index
+SELECT * FROM t1 WHERE a <= 3;
+a
+-1
+0
+1
+2
+3
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 3;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1,p2,p3 index PRIMARY PRIMARY 4 NULL 9 Using where; Using index
+SELECT * FROM t1 WHERE a <= 4;
+a
+-1
+0
+1
+2
+3
+4
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 4;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1,p2,p3,p4 index PRIMARY PRIMARY 4 NULL 9 Using where; Using index
+SELECT * FROM t1 WHERE a <= 5;
+a
+-1
+0
+1
+2
+3
+4
+5
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 5;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1,p2,p3,p4,max range PRIMARY PRIMARY 4 NULL 8 Using where; Using index
+SELECT * FROM t1 WHERE a <= 6;
+a
+-1
+0
+1
+2
+3
+4
+5
+6
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 6;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p0,p1,p2,p3,p4,max range PRIMARY PRIMARY 4 NULL 8 Using where; Using index
+SELECT * FROM t1 WHERE a = 1;
+a
+1
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 1;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p1 system PRIMARY NULL NULL NULL 1
+SELECT * FROM t1 WHERE a = 2;
+a
+2
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 2;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p2 system PRIMARY NULL NULL NULL 1
+SELECT * FROM t1 WHERE a = 3;
+a
+3
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 3;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p3 system PRIMARY NULL NULL NULL 1
+SELECT * FROM t1 WHERE a = 4;
+a
+4
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 4;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p4 system PRIMARY NULL NULL NULL 1
+SELECT * FROM t1 WHERE a = 5;
+a
+5
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 5;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 max const PRIMARY PRIMARY 4 const 1 Using index
+SELECT * FROM t1 WHERE a = 6;
+a
+6
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 6;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 max const PRIMARY PRIMARY 4 const 1 Using index
+SELECT * FROM t1 WHERE a >= 1;
+a
+1
+2
+3
+4
+5
+6
+7
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 1;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p1,p2,p3,p4,max index PRIMARY PRIMARY 4 NULL 9 Using where; Using index
+SELECT * FROM t1 WHERE a >= 2;
+a
+2
+3
+4
+5
+6
+7
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 2;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p2,p3,p4,max index PRIMARY PRIMARY 4 NULL 9 Using where; Using index
+SELECT * FROM t1 WHERE a >= 3;
+a
+3
+4
+5
+6
+7
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 3;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p3,p4,max index PRIMARY PRIMARY 4 NULL 9 Using where; Using index
+SELECT * FROM t1 WHERE a >= 4;
+a
+4
+5
+6
+7
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 4;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p4,max index PRIMARY PRIMARY 4 NULL 9 Using where; Using index
+SELECT * FROM t1 WHERE a >= 5;
+a
+5
+6
+7
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 5;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 max index PRIMARY PRIMARY 4 NULL 9 Using where; Using index
+SELECT * FROM t1 WHERE a >= 6;
+a
+6
+7
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 6;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 max range PRIMARY PRIMARY 4 NULL 2 Using where; Using index
+SELECT * FROM t1 WHERE a > 1;
+a
+2
+3
+4
+5
+6
+7
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 1;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p2,p3,p4,max index PRIMARY PRIMARY 4 NULL 9 Using where; Using index
+SELECT * FROM t1 WHERE a > 2;
+a
+3
+4
+5
+6
+7
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 2;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p3,p4,max index PRIMARY PRIMARY 4 NULL 9 Using where; Using index
+SELECT * FROM t1 WHERE a > 3;
+a
+4
+5
+6
+7
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 3;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 p4,max index PRIMARY PRIMARY 4 NULL 9 Using where; Using index
+SELECT * FROM t1 WHERE a > 4;
+a
+5
+6
+7
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 4;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 max index PRIMARY PRIMARY 4 NULL 9 Using where; Using index
+SELECT * FROM t1 WHERE a > 5;
+a
+6
+7
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 5;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 max range PRIMARY PRIMARY 4 NULL 2 Using where; Using index
+SELECT * FROM t1 WHERE a > 6;
+a
+7
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 6;
+id select_type table partitions type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 max range PRIMARY PRIMARY 4 NULL 2 Using where; Using index
+DROP TABLE t1;
# test of RANGE and index
CREATE TABLE t1 (a DATE, KEY(a))
PARTITION BY RANGE (TO_DAYS(a))
@@ -1816,7 +2426,7 @@ id select_type table partitions type pos
1 SIMPLE t2 p0,p4 ALL NULL NULL NULL NULL 910 Using where
explain partitions select * from t2 where (a > 100 AND a < 600);
id select_type table partitions type possible_keys key key_len ref rows Extra
-1 SIMPLE t2 p0,p1,p2,p3 ALL NULL NULL NULL NULL 910 Using where
+1 SIMPLE t2 p0,p1,p2 ALL NULL NULL NULL NULL 910 Using where
explain partitions select * from t2 where b = 4;
id select_type table partitions type possible_keys key key_len ref rows Extra
1 SIMPLE t2 p0,p1,p2,p3,p4 ref b b 5 const 76 Using where
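The same pruning can be verified on any RANGE-partitioned table by checking the partitions column of EXPLAIN PARTITIONS; a minimal sketch with an illustrative table and boundaries (not part of the patch):
CREATE TABLE pr (a INT PRIMARY KEY)
PARTITION BY RANGE (a) (
PARTITION p0 VALUES LESS THAN (10),
PARTITION p1 VALUES LESS THAN (20),
PARTITION pmax VALUES LESS THAN MAXVALUE);
INSERT INTO pr VALUES (5), (15), (25);
# with the Bug#49742 fix this should list only p0 under partitions
EXPLAIN PARTITIONS SELECT * FROM pr WHERE a < 10;
DROP TABLE pr;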
=== modified file 'mysql-test/r/ps.result'
--- a/mysql-test/r/ps.result 2009-05-27 15:19:44 +0000
+++ b/mysql-test/r/ps.result 2009-12-26 11:25:56 +0000
@@ -1917,6 +1917,53 @@ execute stmt using @arg;
?
-12345.5432100000
deallocate prepare stmt;
+#
+# Bug#48508: Crash on prepared statement re-execution.
+#
+create table t1(b int);
+insert into t1 values (0);
+create view v1 AS select 1 as a from t1 where b;
+prepare stmt from "select * from v1 where a";
+execute stmt;
+a
+execute stmt;
+a
+deallocate prepare stmt;
+drop table t1;
+drop view v1;
+create table t1(a bigint);
+create table t2(b tinyint);
+insert into t2 values (null);
+prepare stmt from "select 1 from t1 join t2 on a xor b where b > 1 and a =1";
+execute stmt;
+1
+execute stmt;
+1
+deallocate prepare stmt;
+drop table t1,t2;
+#
+#
+# Bug #49570: Assertion failed: !(order->used & map)
+# on re-execution of prepared statement
+#
+CREATE TABLE t1(a INT PRIMARY KEY);
+INSERT INTO t1 VALUES(0), (1);
+PREPARE stmt FROM
+"SELECT 1 FROM t1 JOIN t1 t2 USING(a) GROUP BY t2.a, t1.a";
+EXECUTE stmt;
+1
+1
+1
+EXECUTE stmt;
+1
+1
+1
+EXECUTE stmt;
+1
+1
+1
+DEALLOCATE PREPARE stmt;
+DROP TABLE t1;
End of 5.0 tests.
create procedure proc_1() reset query cache;
call proc_1();
@@ -2922,4 +2969,23 @@ execute stmt;
Db Name Definer Time zone Type Execute at Interval value Interval field Starts Ends Status Originator character_set_client collation_connection Database Collation
drop table t1;
deallocate prepare stmt;
+#
+# Bug#49141: Encode function is significantly slower in 5.1 compared to 5.0
+#
+prepare encode from "select encode(?, ?) into @ciphertext";
+prepare decode from "select decode(?, ?) into @plaintext";
+set @str="abc", @key="cba";
+execute encode using @str, @key;
+execute decode using @ciphertext, @key;
+select @plaintext;
+@plaintext
+abc
+set @str="bcd", @key="dcb";
+execute encode using @str, @key;
+execute decode using @ciphertext, @key;
+select @plaintext;
+@plaintext
+bcd
+deallocate prepare encode;
+deallocate prepare decode;
End of 5.1 tests.
=== modified file 'mysql-test/r/ps_ddl.result'
--- a/mysql-test/r/ps_ddl.result 2008-08-13 19:42:21 +0000
+++ b/mysql-test/r/ps_ddl.result 2010-01-16 07:44:24 +0000
@@ -1695,23 +1695,23 @@ SUCCESS
drop table t2;
create temporary table t2 (a int);
execute stmt;
-ERROR 42S01: Table 't2' already exists
call p_verify_reprepare_count(1);
SUCCESS
execute stmt;
ERROR 42S01: Table 't2' already exists
-call p_verify_reprepare_count(0);
+call p_verify_reprepare_count(1);
SUCCESS
drop temporary table t2;
execute stmt;
-call p_verify_reprepare_count(1);
+ERROR 42S01: Table 't2' already exists
+call p_verify_reprepare_count(0);
SUCCESS
drop table t2;
execute stmt;
-call p_verify_reprepare_count(0);
+call p_verify_reprepare_count(1);
SUCCESS
drop table t2;
=== modified file 'mysql-test/r/select.result'
--- a/mysql-test/r/select.result 2010-01-15 15:27:55 +0000
+++ b/mysql-test/r/select.result 2010-03-04 08:03:07 +0000
@@ -4440,6 +4440,154 @@ SELECT 1 FROM t2 JOIN t1 ON 1=1
WHERE a != '1' AND NOT a >= b OR NOT ROW(b,a )<> ROW(a,a);
1
DROP TABLE t1,t2;
+#
+# Bug #49199: Optimizer handles incorrectly:
+# field='const1' AND field='const2' in some cases
+
+CREATE TABLE t1(a DATETIME NOT NULL);
+INSERT INTO t1 VALUES('2001-01-01');
+SELECT * FROM t1 WHERE a='2001-01-01' AND a='2001-01-01 00:00:00';
+a
+2001-01-01 00:00:00
+EXPLAIN EXTENDED SELECT * FROM t1 WHERE a='2001-01-01' AND a='2001-01-01 00:00:00';
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 SIMPLE t1 system NULL NULL NULL NULL 1 100.00
+Warnings:
+Note 1003 select '2001-01-01 00:00:00' AS `a` from `test`.`t1` where 1
+DROP TABLE t1;
+CREATE TABLE t1(a DATE NOT NULL);
+INSERT INTO t1 VALUES('2001-01-01');
+SELECT * FROM t1 WHERE a='2001-01-01' AND a='2001-01-01 00:00:00';
+a
+2001-01-01
+EXPLAIN EXTENDED SELECT * FROM t1 WHERE a='2001-01-01' AND a='2001-01-01 00:00:00';
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 SIMPLE t1 system NULL NULL NULL NULL 1 100.00
+Warnings:
+Note 1003 select '2001-01-01' AS `a` from `test`.`t1` where 1
+DROP TABLE t1;
+CREATE TABLE t1(a TIMESTAMP NOT NULL);
+INSERT INTO t1 VALUES('2001-01-01');
+SELECT * FROM t1 WHERE a='2001-01-01' AND a='2001-01-01 00:00:00';
+a
+2001-01-01 00:00:00
+EXPLAIN EXTENDED SELECT * FROM t1 WHERE a='2001-01-01' AND a='2001-01-01 00:00:00';
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 SIMPLE t1 system NULL NULL NULL NULL 1 100.00
+Warnings:
+Note 1003 select '2001-01-01 00:00:00' AS `a` from `test`.`t1` where 1
+DROP TABLE t1;
+CREATE TABLE t1(a DATETIME NOT NULL, b DATE NOT NULL);
+INSERT INTO t1 VALUES('2001-01-01', '2001-01-01');
+SELECT * FROM t1 WHERE a='2001-01-01' AND a=b AND b='2001-01-01 00:00:00';
+a b
+2001-01-01 00:00:00 2001-01-01
+EXPLAIN EXTENDED SELECT * FROM t1 WHERE a='2001-01-01' AND a=b AND b='2001-01-01 00:00:00';
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 SIMPLE t1 system NULL NULL NULL NULL 1 100.00
+Warnings:
+Note 1003 select '2001-01-01 00:00:00' AS `a`,'2001-01-01' AS `b` from `test`.`t1` where 1
+DROP TABLE t1;
+CREATE TABLE t1(a DATETIME NOT NULL, b VARCHAR(20) NOT NULL);
+INSERT INTO t1 VALUES('2001-01-01', '2001-01-01');
+SELECT * FROM t1 WHERE a='2001-01-01' AND a=b AND b='2001-01-01 00:00:00';
+a b
+EXPLAIN EXTENDED SELECT * FROM t1 WHERE a='2001-01-01' AND a=b AND b='2001-01-01 00:00:00';
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 SIMPLE NULL NULL NULL NULL NULL NULL NULL NULL Impossible WHERE noticed after reading const tables
+Warnings:
+Note 1003 select '2001-01-01 00:00:00' AS `a`,'2001-01-01' AS `b` from `test`.`t1` where 0
+SELECT * FROM t1 WHERE a='2001-01-01 00:00:00' AND a=b AND b='2001-01-01';
+a b
+2001-01-01 00:00:00 2001-01-01
+EXPLAIN EXTENDED SELECT * FROM t1 WHERE a='2001-01-01 00:00:00' AND a=b AND b='2001-01-01';
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 SIMPLE t1 system NULL NULL NULL NULL 1 100.00
+Warnings:
+Note 1003 select '2001-01-01 00:00:00' AS `a`,'2001-01-01' AS `b` from `test`.`t1` where 1
+DROP TABLE t1;
+CREATE TABLE t1(a DATETIME NOT NULL, b DATE NOT NULL);
+INSERT INTO t1 VALUES('2001-01-01', '2001-01-01');
+SELECT x.a, y.a, z.a FROM t1 x
+JOIN t1 y ON x.a=y.a
+JOIN t1 z ON y.a=z.a
+WHERE x.a='2001-01-01' AND z.a='2001-01-01 00:00:00';
+a a a
+2001-01-01 00:00:00 2001-01-01 00:00:00 2001-01-01 00:00:00
+EXPLAIN EXTENDED SELECT x.a, y.a, z.a FROM t1 x
+JOIN t1 y ON x.a=y.a
+JOIN t1 z ON y.a=z.a
+WHERE x.a='2001-01-01' AND z.a='2001-01-01 00:00:00';
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 SIMPLE x system NULL NULL NULL NULL 1 100.00
+1 SIMPLE y system NULL NULL NULL NULL 1 100.00
+1 SIMPLE z system NULL NULL NULL NULL 1 100.00
+Warnings:
+Note 1003 select '2001-01-01 00:00:00' AS `a`,'2001-01-01 00:00:00' AS `a`,'2001-01-01 00:00:00' AS `a` from `test`.`t1` `x` join `test`.`t1` `y` join `test`.`t1` `z` where 1
+DROP TABLE t1;
+#
+# Bug #49897: crash in ptr_compare when char(0) NOT NULL
+# column is used for ORDER BY
+#
+SET @old_sort_buffer_size= @@session.sort_buffer_size;
+SET @@sort_buffer_size= 40000;
+CREATE TABLE t1(a CHAR(0) NOT NULL);
+INSERT INTO t1 VALUES (0), (0), (0);
+INSERT INTO t1 SELECT t11.a FROM t1 t11, t1 t12;
+INSERT INTO t1 SELECT t11.a FROM t1 t11, t1 t12;
+INSERT INTO t1 SELECT t11.a FROM t1 t11, t1 t12;
+EXPLAIN SELECT a FROM t1 ORDER BY a;
+id select_type table type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 ALL NULL NULL NULL NULL 24492
+SELECT a FROM t1 ORDER BY a;
+DROP TABLE t1;
+CREATE TABLE t1(a CHAR(0) NOT NULL, b CHAR(0) NOT NULL, c int);
+INSERT INTO t1 VALUES (0, 0, 0), (0, 0, 2), (0, 0, 1);
+INSERT INTO t1 SELECT t11.a, t11.b, t11.c FROM t1 t11, t1 t12;
+INSERT INTO t1 SELECT t11.a, t11.b, t11.c FROM t1 t11, t1 t12;
+INSERT INTO t1 SELECT t11.a, t11.b, t11.c FROM t1 t11, t1 t12;
+EXPLAIN SELECT a FROM t1 ORDER BY a LIMIT 5;
+id select_type table type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 ALL NULL NULL NULL NULL 24492
+SELECT a FROM t1 ORDER BY a LIMIT 5;
+a
+
+
+
+
+
+EXPLAIN SELECT * FROM t1 ORDER BY a, b LIMIT 5;
+id select_type table type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 ALL NULL NULL NULL NULL 24492
+SELECT * FROM t1 ORDER BY a, b LIMIT 5;
+a b c
+ 0
+ 2
+ 1
+ 0
+ 2
+EXPLAIN SELECT * FROM t1 ORDER BY a, b, c LIMIT 5;
+id select_type table type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 ALL NULL NULL NULL NULL 24492 Using filesort
+SELECT * FROM t1 ORDER BY a, b, c LIMIT 5;
+a b c
+ 0
+ 0
+ 0
+ 0
+ 0
+EXPLAIN SELECT * FROM t1 ORDER BY c, a LIMIT 5;
+id select_type table type possible_keys key key_len ref rows Extra
+1 SIMPLE t1 ALL NULL NULL NULL NULL 24492 Using filesort
+SELECT * FROM t1 ORDER BY c, a LIMIT 5;
+a b c
+ 0
+ 0
+ 0
+ 0
+ 0
+SET @@sort_buffer_size= @old_sort_buffer_size;
+DROP TABLE t1;
End of 5.0 tests
create table t1(a INT, KEY (a));
INSERT INTO t1 VALUES (1),(2),(3),(4),(5);
=== modified file 'mysql-test/r/sp-ucs2.result'
--- a/mysql-test/r/sp-ucs2.result 2007-02-19 10:57:06 +0000
+++ b/mysql-test/r/sp-ucs2.result 2009-12-02 11:17:08 +0000
@@ -12,3 +12,29 @@ a
foo string
drop function bug17615|
drop table t3|
+SET NAMES utf8;
+DROP FUNCTION IF EXISTS bug48766;
+CREATE FUNCTION bug48766 ()
+RETURNS ENUM( 'w' ) CHARACTER SET ucs2
+RETURN 0;
+SHOW CREATE FUNCTION bug48766;
+Function sql_mode Create Function character_set_client collation_connection Database Collation
+bug48766 CREATE DEFINER=`root`@`localhost` FUNCTION `bug48766`() RETURNS enum('w') CHARSET ucs2
+RETURN 0 utf8 utf8_general_ci latin1_swedish_ci
+SELECT DTD_IDENTIFIER FROM INFORMATION_SCHEMA.ROUTINES
+WHERE ROUTINE_NAME='bug48766';
+DTD_IDENTIFIER
+enum('w') CHARSET ucs2
+DROP FUNCTION bug48766;
+CREATE FUNCTION bug48766 ()
+RETURNS ENUM('а','б','в','г') CHARACTER SET ucs2
+RETURN 0;
+SHOW CREATE FUNCTION bug48766;
+Function sql_mode Create Function character_set_client collation_connection Database Collation
+bug48766 CREATE DEFINER=`root`@`localhost` FUNCTION `bug48766`() RETURNS enum('а','б','в','г') CHARSET ucs2
+RETURN 0 utf8 utf8_general_ci latin1_swedish_ci
+SELECT DTD_IDENTIFIER FROM INFORMATION_SCHEMA.ROUTINES
+WHERE ROUTINE_NAME='bug48766';
+DTD_IDENTIFIER
+enum('а','б','в','г') CHARSET ucs2
+DROP FUNCTION bug48766;
=== modified file 'mysql-test/r/sp.result'
--- a/mysql-test/r/sp.result 2009-11-13 01:03:26 +0000
+++ b/mysql-test/r/sp.result 2009-12-23 13:44:03 +0000
@@ -6963,6 +6963,22 @@ CALL p1();
CALL p1();
DROP PROCEDURE p1;
DROP TABLE t1;
+CREATE TABLE t1 ( f1 integer, primary key (f1));
+CREATE TABLE t2 LIKE t1;
+CREATE TEMPORARY TABLE t3 LIKE t1;
+CREATE PROCEDURE p1 () BEGIN SELECT f1 FROM t3 AS A WHERE A.f1 IN ( SELECT f1 FROM t3 ) ;
+END|
+CALL p1;
+ERROR HY000: Can't reopen table: 'A'
+CREATE VIEW t3 AS SELECT f1 FROM t2 A WHERE A.f1 IN ( SELECT f1 FROM t2 );
+DROP TABLE t3;
+CALL p1;
+f1
+CALL p1;
+f1
+DROP PROCEDURE p1;
+DROP TABLE t1, t2;
+DROP VIEW t3;
#
# Bug #46629: Item_in_subselect::val_int(): Assertion `0'
# on subquery inside a SP
=== added file 'mysql-test/r/sp_sync.result'
--- a/mysql-test/r/sp_sync.result 1970-01-01 00:00:00 +0000
+++ b/mysql-test/r/sp_sync.result 2010-01-12 14:16:26 +0000
@@ -0,0 +1,23 @@
+Tests of synchronization of stored procedure execution.
+#
+# Bug#48157: crash in Item_field::used_tables
+#
+CREATE TABLE t1 AS SELECT 1 AS a, 1 AS b;
+CREATE TABLE t2 AS SELECT 1 AS a, 1 AS b;
+CREATE PROCEDURE p1()
+BEGIN
+UPDATE t1 JOIN t2 USING( a, b ) SET t1.b = 1, t2.b = 1;
+END|
+LOCK TABLES t1 WRITE, t2 WRITE;
+SET DEBUG_SYNC = 'multi_update_reopen_tables SIGNAL parked WAIT_FOR go';
+CALL p1();
+DROP TABLE t1, t2;
+SET DEBUG_SYNC = 'now WAIT_FOR parked';
+CREATE TABLE t1 AS SELECT 1 AS a, 1 AS b;
+CREATE TABLE t2 AS SELECT 1 AS a, 1 AS b;
+SET DEBUG_SYNC = 'now SIGNAL go';
+# Without the DEBUG_SYNC supplied in the same patch as this test in the
+# code, this test statement will hang.
+DROP TABLE t1, t2;
+DROP PROCEDURE p1;
+SET DEBUG_SYNC = 'RESET';
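The new sp_sync suite drives this race through the DEBUG_SYNC facility (available in debug builds only); the signal/wait handshake it relies on boils down to the following sketch, using the same sync point and signal names as the result above:
# in the session that should pause: register a pause at a named sync point
SET DEBUG_SYNC = 'multi_update_reopen_tables SIGNAL parked WAIT_FOR go';
# in a second session: block until the first one has reached that point ...
SET DEBUG_SYNC = 'now WAIT_FOR parked';
# ... perform the concurrent work of interest, then let the first session continue
SET DEBUG_SYNC = 'now SIGNAL go';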
=== modified file 'mysql-test/r/subselect.result'
--- a/mysql-test/r/subselect.result 2010-01-15 15:27:55 +0000
+++ b/mysql-test/r/subselect.result 2010-03-04 08:03:07 +0000
@@ -4410,6 +4410,31 @@ WHERE a = 230;
MAX(b) (SELECT COUNT(*) FROM st1,st2 WHERE st2.b <= t1.b)
NULL 0
DROP TABLE t1, st1, st2;
+#
+# Bug #48709: Assertion failed in sql_select.cc:11782:
+# int join_read_key(JOIN_TAB*)
+#
+CREATE TABLE t1 (pk int PRIMARY KEY, int_key int);
+INSERT INTO t1 VALUES (10,1), (14,1);
+CREATE TABLE t2 (pk int PRIMARY KEY, int_key int);
+INSERT INTO t2 VALUES (3,3), (5,NULL), (7,3);
+# should have eq_ref for t1
+EXPLAIN
+SELECT * FROM t2 outr
+WHERE outr.int_key NOT IN (SELECT t1.pk FROM t1, t2)
+ORDER BY outr.pk;
+id select_type table type possible_keys key key_len ref rows Extra
+x x outr ALL x x x x x x
+x x t1 eq_ref x x x x x x
+x x t2 index x x x x x x
+# should not crash on debug binaries
+SELECT * FROM t2 outr
+WHERE outr.int_key NOT IN (SELECT t1.pk FROM t1, t2)
+ORDER BY outr.pk;
+pk int_key
+3 3
+7 3
+DROP TABLE t1,t2;
End of 5.0 tests.
CREATE TABLE t1 (a INT, b INT);
INSERT INTO t1 VALUES (2,22),(1,11),(2,22);
@@ -4574,4 +4599,17 @@ SELECT 1 FROM t1 GROUP BY
1
1
DROP TABLE t1;
+#
+# Bug #49512 : subquery with aggregate function crash
+# subselect_single_select_engine::exec()
+CREATE TABLE t1(a INT);
+INSERT INTO t1 VALUES();
+# should not crash
+SELECT 1 FROM t1 WHERE a <> SOME
+(
+SELECT MAX((SELECT a FROM t1 LIMIT 1)) AS d
+FROM t1,t1 a
+);
+1
+DROP TABLE t1;
End of 5.1 tests.
=== modified file 'mysql-test/r/union.result'
--- a/mysql-test/r/union.result 2009-09-15 10:46:35 +0000
+++ b/mysql-test/r/union.result 2010-03-04 08:03:07 +0000
@@ -1588,3 +1588,63 @@ Warnings:
Note 1003 select '0' AS `a` from `test`.`t1` union select '0' AS `a` from `test`.`t1` order by `a`
DROP TABLE t1;
End of 5.0 tests
+#
+# Bug #49734: Crash on EXPLAIN EXTENDED UNION ... ORDER BY
+# <any non-const-function>
+#
+CREATE TABLE t1 (a VARCHAR(10), FULLTEXT KEY a (a));
+INSERT INTO t1 VALUES (1),(2);
+CREATE TABLE t2 (b INT);
+INSERT INTO t2 VALUES (1),(2);
+# Should not crash
+EXPLAIN EXTENDED
+SELECT * FROM t1 UNION SELECT * FROM t1 ORDER BY a + 12;
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+2 UNION t1 ALL NULL NULL NULL NULL 2 100.00
+NULL UNION RESULT <union1,2> ALL NULL NULL NULL NULL NULL NULL Using filesort
+Warnings:
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` union select `test`.`t1`.`a` AS `a` from `test`.`t1` order by (`a` + 12)
+# Should not crash
+SELECT * FROM t1 UNION SELECT * FROM t1 ORDER BY a + 12;
+a
+1
+2
+# Should not crash
+EXPLAIN EXTENDED
+SELECT * FROM t1 UNION SELECT * FROM t1
+ORDER BY MATCH(a) AGAINST ('+abc' IN BOOLEAN MODE);
+ERROR 42000: Incorrect usage/placement of 'MATCH()'
+# Should not crash
+SELECT * FROM t1 UNION SELECT * FROM t1
+ORDER BY MATCH(a) AGAINST ('+abc' IN BOOLEAN MODE);
+ERROR 42000: Incorrect usage/placement of 'MATCH()'
+# Should not crash
+(SELECT * FROM t1) UNION (SELECT * FROM t1)
+ORDER BY MATCH(a) AGAINST ('+abc' IN BOOLEAN MODE);
+a
+1
+2
+# Should not crash
+EXPLAIN EXTENDED
+SELECT * FROM t1 UNION SELECT * FROM t1
+ORDER BY (SELECT a FROM t2 WHERE b = 12);
+id select_type table type possible_keys key key_len ref rows filtered Extra
+1 PRIMARY t1 ALL NULL NULL NULL NULL 2 100.00
+2 UNION t1 ALL NULL NULL NULL NULL 2 100.00
+3 SUBQUERY t2 ALL NULL NULL NULL NULL 2 100.00 Using where
+NULL UNION RESULT <union1,2> ALL NULL NULL NULL NULL NULL NULL Using filesort
+Warnings:
+Note 1276 Field or reference 'test.t1.a' of SELECT #3 was resolved in SELECT #2
+Note 1003 select `test`.`t1`.`a` AS `a` from `test`.`t1` union select `test`.`t1`.`a` AS `a` from `test`.`t1` order by (select `test`.`t1`.`a` AS `a` from `test`.`t2` where (`test`.`t2`.`b` = 12))
+# Should not crash
+SELECT * FROM t1 UNION SELECT * FROM t1
+ORDER BY (SELECT a FROM t2 WHERE b = 12);
+# Should not crash
+SELECT * FROM t2 UNION SELECT * FROM t2
+ORDER BY (SELECT * FROM t1 WHERE MATCH(a) AGAINST ('+abc' IN BOOLEAN MODE));
+b
+1
+2
+DROP TABLE t1,t2;
+End of 5.1 tests
=== modified file 'mysql-test/r/user_var.result'
--- a/mysql-test/r/user_var.result 2009-05-15 13:03:22 +0000
+++ b/mysql-test/r/user_var.result 2009-12-22 10:38:33 +0000
@@ -409,6 +409,21 @@ SELECT a, b FROM t1 WHERE a=2 AND b=3 GR
a b
2 3
DROP TABLE t1;
+CREATE TABLE t1 (f1 int(11) default NULL, f2 int(11) default NULL);
+CREATE TABLE t2 (f1 int(11) default NULL, f2 int(11) default NULL, foo int(11));
+CREATE TABLE t3 (f1 int(11) default NULL, f2 int(11) default NULL);
+INSERT INTO t1 VALUES(10, 10);
+INSERT INTO t1 VALUES(10, 10);
+INSERT INTO t2 VALUES(10, 10, 10);
+INSERT INTO t2 VALUES(10, 10, 10);
+INSERT INTO t3 VALUES(10, 10);
+INSERT INTO t3 VALUES(10, 10);
+SELECT MIN(t2.f1),
+@bar:= (SELECT MIN(t3.f2) FROM t3 WHERE t3.f2 > foo)
+FROM t1,t2 WHERE t1.f1 = t2.f1 ORDER BY t2.f1;
+MIN(t2.f1) @bar:= (SELECT MIN(t3.f2) FROM t3 WHERE t3.f2 > foo)
+10 NULL
+DROP TABLE t1, t2, t3;
End of 5.0 tests
CREATE TABLE t1 (i INT);
CREATE TRIGGER t_after_insert AFTER INSERT ON t1 FOR EACH ROW SET @bug42188 = 10;
=== modified file 'mysql-test/r/variables.result'
--- a/mysql-test/r/variables.result 2010-03-09 19:22:24 +0000
+++ b/mysql-test/r/variables.result 2010-03-10 09:12:23 +0000
@@ -559,7 +559,7 @@ set sql_log_bin=1;
set sql_log_off=1;
set sql_log_update=1;
Warnings:
-Note 1315 The update log is deprecated and replaced by the binary log; SET SQL_LOG_UPDATE has been ignored
+Note 1315 The update log is deprecated and replaced by the binary log; SET SQL_LOG_UPDATE has been ignored. This option will be removed in MySQL 5.6.
set sql_low_priority_updates=1;
set sql_max_join_size=200;
select @@sql_max_join_size,@@max_join_size;
@@ -1009,6 +1009,12 @@ ERROR HY000: Variable 'hostname' is a re
show variables like 'hostname';
Variable_name Value
hostname #
+#
+# BUG#37408 - Compressed MyISAM files should not require/use mmap()
+#
+# Test 'myisam_mmap_size' option is not dynamic
+SET @@myisam_mmap_size= 500M;
+ERROR HY000: Variable 'myisam_mmap_size' is a read only variable
End of 5.0 tests
set join_buffer_size=1;
Warnings:
=== modified file 'mysql-test/std_data/Index.xml'
--- a/mysql-test/std_data/Index.xml 2009-10-12 07:43:15 +0000
+++ b/mysql-test/std_data/Index.xml 2009-12-15 09:48:29 +0000
@@ -8,6 +8,13 @@
</rules>
</collation>
+ <collation name="utf8_hugeid_ci" id="2047000000">
+ <rules>
+ <reset>a</reset>
+ <s>b</s>
+ </rules>
+ </collation>
+
</charset>
<charset name="ucs2">
=== added file 'mysql-test/std_data/bug47142_master-bin.000001'
Binary files a/mysql-test/std_data/bug47142_master-bin.000001 1970-01-01 00:00:00 +0000 and b/mysql-test/std_data/bug47142_master-bin.000001 2010-01-25 15:46:48 +0000 differ
=== modified file 'mysql-test/std_data/cacert.pem'
--- a/mysql-test/std_data/cacert.pem 2010-01-29 10:42:31 +0000
+++ b/mysql-test/std_data/cacert.pem 2010-03-04 08:03:07 +0000
@@ -1,19 +1,17 @@
-----BEGIN CERTIFICATE-----
-MIIDIjCCAougAwIBAgIJAJhuvLP+2mGwMA0GCSqGSIb3DQEBBQUAMGoxCzAJBgNV
-BAYTAkZJMRAwDgYDVQQIEwdUdXVzdWxhMRkwFwYDVQQKExBNb250eSBQcm9ncmFt
-IEFiMS4wLAYJKoZIhvcNAQkBFh9hYnN0cmFjdC5kZXZlbG9wZXJAYXNrbW9udHku
-b3JnMB4XDTEwMDEyODIxNTcyNVoXDTEwMDIyNzIxNTcyNVowajELMAkGA1UEBhMC
-RkkxEDAOBgNVBAgTB1R1dXN1bGExGTAXBgNVBAoTEE1vbnR5IFByb2dyYW0gQWIx
-LjAsBgkqhkiG9w0BCQEWH2Fic3RyYWN0LmRldmVsb3BlckBhc2ttb250eS5vcmcw
-gZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMIaY4pwbst50S32xJH3bSXhPGep
-6gx1AWwZKsHTXL3VeMO6PHmC8zu5HM0zbOcrIJcXL3YVnpmE4b9OQxIiMSx1Yd+U
-u8/sTkxgpsEKhCbIzECIwPhppyT/JP5aSXCadEvg+PSjikv8dOVkD68wVG4CcFIX
-MFttsPebBVzEokZZAgMBAAGjgc8wgcwwHQYDVR0OBBYEFOCKaNHFFPrju8AwzWxS
-f96IKfRwMIGcBgNVHSMEgZQwgZGAFOCKaNHFFPrju8AwzWxSf96IKfRwoW6kbDBq
-MQswCQYDVQQGEwJGSTEQMA4GA1UECBMHVHV1c3VsYTEZMBcGA1UEChMQTW9udHkg
-UHJvZ3JhbSBBYjEuMCwGCSqGSIb3DQEJARYfYWJzdHJhY3QuZGV2ZWxvcGVyQGFz
-a21vbnR5Lm9yZ4IJAJhuvLP+2mGwMAwGA1UdEwQFMAMBAf8wDQYJKoZIhvcNAQEF
-BQADgYEAsmCX2/k9AInq2qhXtnkLip6cB0iOerLTNAzEijZc/aVf4wUjkL3cqhmC
-kSTCwAHIOxp+ICwh6ky3xghXjoI9QnPFDVkRkzPT2tV0IoBaeQuI4e0CU2EY7L3P
-XoDqp3oq1XtVcr9ZZdP68fBYUG/qcrWcXWk45ZFaBmBv3TotsGk=
+MIICrTCCAhagAwIBAgIJAMI7xZKjhrDbMA0GCSqGSIb3DQEBBAUAMEQxCzAJBgNV
+BAYTAlNFMRAwDgYDVQQIEwdVcHBzYWxhMRAwDgYDVQQHEwdVcHBzYWxhMREwDwYD
+VQQKEwhNeVNRTCBBQjAeFw0xMDAxMjkxMTQ3MTBaFw0xNTAxMjgxMTQ3MTBaMEQx
+CzAJBgNVBAYTAlNFMRAwDgYDVQQIEwdVcHBzYWxhMRAwDgYDVQQHEwdVcHBzYWxh
+MREwDwYDVQQKEwhNeVNRTCBBQjCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA
+wQYsOEfrN4ESP3FjsI8cghE+tZVuyK2gck61lwieVxjgFMtBd65mI5a1y9pmlOI1
+yM4SB2Ppqcuw7/e1CdV1y7lvHrGNt5yqEHbN4QX1gvsN8TQauP/2WILturk4R4Hq
+rKg0ZySu7f1Xhl0ed9a48LpaEHD17IcxWEGMMJwAxF0CAwEAAaOBpjCBozAMBgNV
+HRMEBTADAQH/MB0GA1UdDgQWBBSvktYQ0ahLnyxyVKqty+WpBbBrDTB0BgNVHSME
+bTBrgBSvktYQ0ahLnyxyVKqty+WpBbBrDaFIpEYwRDELMAkGA1UEBhMCU0UxEDAO
+BgNVBAgTB1VwcHNhbGExEDAOBgNVBAcTB1VwcHNhbGExETAPBgNVBAoTCE15U1FM
+IEFCggkAwjvFkqOGsNswDQYJKoZIhvcNAQEEBQADgYEAdKN1PjwMHAKG2Ww1145g
+JQGBnKxSFOUaoSvkBi/4ntTM+ysnViWh7WvxyWjR9zU9arfr7aqsDeQxm0XDOqzj
+AQ/cQIla2/Li8tXyfc06bisH/IHRaSc2zWqioTKbEwMdVOdrvq4a8V8ic3xYyIWn
+7F4WeS07J8LKardSvM0+hOA=
-----END CERTIFICATE-----
=== modified file 'mysql-test/std_data/client-cert.pem'
--- a/mysql-test/std_data/client-cert.pem 2010-01-29 10:42:31 +0000
+++ b/mysql-test/std_data/client-cert.pem 2010-03-04 08:03:07 +0000
@@ -1,60 +1,46 @@
Certificate:
Data:
- Version: 3 (0x2)
- Serial Number: 2 (0x2)
- Signature Algorithm: sha1WithRSAEncryption
- Issuer: C=FI, ST=Tuusula, O=Monty Program Ab/emailAddress=abstract.developer(a)askmonty.org
+ Version: 1 (0x0)
+ Serial Number: 1048577 (0x100001)
+ Signature Algorithm: md5WithRSAEncryption
+ Issuer: C=SE, ST=Uppsala, L=Uppsala, O=MySQL AB
Validity
- Not Before: Jan 28 22:01:38 2010 GMT
- Not After : Dec 7 22:01:38 2019 GMT
- Subject: C=FI, ST=Tuusula, O=Monty Program Ab/emailAddress=abstract.developer(a)askmonty.org
+ Not Before: Jan 29 11:50:22 2010 GMT
+ Not After : Jan 28 11:50:22 2015 GMT
+ Subject: C=SE, ST=Uppsala, O=MySQL AB
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
- RSA Public Key: (1024 bit)
- Modulus (1024 bit):
- 00:bd:dc:3d:f8:3c:0b:d4:d2:c0:a3:9c:34:2d:e7:
- 11:ff:4d:43:35:17:f6:0d:91:01:92:9e:4f:4d:c0:
- 38:d5:62:03:55:33:db:66:a2:91:d3:f2:b6:23:34:
- 95:53:50:3d:4f:e3:0c:d7:76:fd:f6:54:64:e6:f6:
- dc:70:74:7c:6b:74:41:59:b0:19:5d:62:90:3c:a7:
- c8:5e:21:8f:2b:22:6b:c7:43:9b:be:79:84:60:da:
- 16:c9:ce:ee:fd:66:cb:54:81:e2:b5:1c:cf:f9:74:
- de:38:2b:28:d4:31:33:55:d2:30:1c:a3:e4:c2:c7:
- 31:46:43:d5:33:3d:8a:0b:47
+ Public-Key: (1024 bit)
+ Modulus:
+ 00:cc:9a:37:49:13:66:dc:cf:e3:0b:13:a1:23:ed:
+ 78:db:4e:bd:11:f6:8c:0d:76:f9:a3:32:56:9a:f8:
+ a1:21:6a:55:4e:4d:3f:e6:67:9d:26:99:b2:cd:a4:
+ 9a:d2:2b:59:5c:d7:8a:d3:60:68:f8:18:bd:c5:be:
+ 15:e1:2a:3c:a3:d4:61:cb:f5:11:94:17:81:81:f7:
+ 87:8c:f6:6a:d2:ee:d8:e6:77:f6:62:66:4d:2e:16:
+ 8d:08:81:4a:c9:c6:4b:31:e5:b9:c7:8a:84:96:48:
+ a7:47:8c:0d:26:90:56:4e:e6:a5:6e:8c:b3:f2:9f:
+ fc:3d:78:9b:49:6e:86:83:77
Exponent: 65537 (0x10001)
- X509v3 extensions:
- X509v3 Basic Constraints:
- CA:FALSE
- Netscape Comment:
- OpenSSL Generated Certificate
- X509v3 Subject Key Identifier:
- BE:E6:DB:19:8D:DB:72:9A:85:EE:B2:B8:5D:E7:FF:61:DF:09:08:AF
- X509v3 Authority Key Identifier:
- keyid:E0:8A:68:D1:C5:14:FA:E3:BB:C0:30:CD:6C:52:7F:DE:88:29:F4:70
-
- Signature Algorithm: sha1WithRSAEncryption
- 41:95:6d:0a:a4:ee:af:68:cd:94:26:59:9a:18:b7:75:3c:c5:
- 0f:22:d3:5c:31:9b:85:a0:93:b3:f0:50:29:ba:1e:d3:5a:43:
- 0b:77:2d:98:87:a7:a7:39:0f:40:8d:03:d3:b3:67:43:77:bc:
- 3c:51:c2:f9:9e:7a:2d:39:c4:5c:16:d7:70:d6:74:d1:6c:e1:
- 6a:4d:fd:1f:10:af:64:3b:f4:64:e9:b2:b3:fb:c8:cd:c5:41:
- cd:99:e0:ac:83:1d:81:2c:6b:99:ba:80:02:12:72:f7:3b:bb:
- 93:72:00:da:ff:d3:87:75:d2:3a:a4:ca:4d:c1:8b:c1:21:50:
- cb:57
+ Signature Algorithm: md5WithRSAEncryption
+ 5e:1f:a3:53:5f:24:13:1c:f8:28:32:b0:7f:69:69:f3:0e:c0:
+ 34:87:10:03:7d:da:15:8b:bd:19:b8:1a:56:31:e7:85:49:81:
+ c9:7f:45:20:74:3e:89:c0:e0:26:84:51:cc:04:16:ce:69:99:
+ 01:e1:26:99:b3:e3:f5:bd:ec:5f:a0:84:e4:38:da:75:78:7b:
+ 89:9c:d2:cd:60:95:20:ba:8e:e3:7c:e6:df:76:3a:7c:89:77:
+ 02:94:86:11:3a:c4:61:7d:6f:71:83:21:8a:17:fb:17:e2:ee:
+ 02:6b:61:c1:b4:52:63:d7:d8:46:b2:c5:9c:6f:38:91:8a:35:
+ 32:0b
-----BEGIN CERTIFICATE-----
-MIICxTCCAi6gAwIBAgIBAjANBgkqhkiG9w0BAQUFADBqMQswCQYDVQQGEwJGSTEQ
-MA4GA1UECBMHVHV1c3VsYTEZMBcGA1UEChMQTW9udHkgUHJvZ3JhbSBBYjEuMCwG
-CSqGSIb3DQEJARYfYWJzdHJhY3QuZGV2ZWxvcGVyQGFza21vbnR5Lm9yZzAeFw0x
-MDAxMjgyMjAxMzhaFw0xOTEyMDcyMjAxMzhaMGoxCzAJBgNVBAYTAkZJMRAwDgYD
-VQQIEwdUdXVzdWxhMRkwFwYDVQQKExBNb250eSBQcm9ncmFtIEFiMS4wLAYJKoZI
-hvcNAQkBFh9hYnN0cmFjdC5kZXZlbG9wZXJAYXNrbW9udHkub3JnMIGfMA0GCSqG
-SIb3DQEBAQUAA4GNADCBiQKBgQC93D34PAvU0sCjnDQt5xH/TUM1F/YNkQGSnk9N
-wDjVYgNVM9tmopHT8rYjNJVTUD1P4wzXdv32VGTm9txwdHxrdEFZsBldYpA8p8he
-IY8rImvHQ5u+eYRg2hbJzu79ZstUgeK1HM/5dN44KyjUMTNV0jAco+TCxzFGQ9Uz
-PYoLRwIDAQABo3sweTAJBgNVHRMEAjAAMCwGCWCGSAGG+EIBDQQfFh1PcGVuU1NM
-IEdlbmVyYXRlZCBDZXJ0aWZpY2F0ZTAdBgNVHQ4EFgQUvubbGY3bcpqF7rK4Xef/
-Yd8JCK8wHwYDVR0jBBgwFoAU4Ipo0cUU+uO7wDDNbFJ/3ogp9HAwDQYJKoZIhvcN
-AQEFBQADgYEAQZVtCqTur2jNlCZZmhi3dTzFDyLTXDGbhaCTs/BQKboe01pDC3ct
-mIenpzkPQI0D07NnQ3e8PFHC+Z56LTnEXBbXcNZ00Wzhak39HxCvZDv0ZOmys/vI
-zcVBzZngrIMdgSxrmbqAAhJy9zu7k3IA2v/Th3XSOqTKTcGLwSFQy1c=
+MIIB5zCCAVACAxAAATANBgkqhkiG9w0BAQQFADBEMQswCQYDVQQGEwJTRTEQMA4G
+A1UECBMHVXBwc2FsYTEQMA4GA1UEBxMHVXBwc2FsYTERMA8GA1UEChMITXlTUUwg
+QUIwHhcNMTAwMTI5MTE1MDIyWhcNMTUwMTI4MTE1MDIyWjAyMQswCQYDVQQGEwJT
+RTEQMA4GA1UECBMHVXBwc2FsYTERMA8GA1UEChMITXlTUUwgQUIwgZ8wDQYJKoZI
+hvcNAQEBBQADgY0AMIGJAoGBAMyaN0kTZtzP4wsToSPteNtOvRH2jA12+aMyVpr4
+oSFqVU5NP+ZnnSaZss2kmtIrWVzXitNgaPgYvcW+FeEqPKPUYcv1EZQXgYH3h4z2
+atLu2OZ39mJmTS4WjQiBSsnGSzHluceKhJZIp0eMDSaQVk7mpW6Ms/Kf/D14m0lu
+hoN3AgMBAAEwDQYJKoZIhvcNAQEEBQADgYEAXh+jU18kExz4KDKwf2lp8w7ANIcQ
+A33aFYu9GbgaVjHnhUmByX9FIHQ+icDgJoRRzAQWzmmZAeEmmbPj9b3sX6CE5Dja
+dXh7iZzSzWCVILqO43zm33Y6fIl3ApSGETrEYX1vcYMhihf7F+LuAmthwbRSY9fY
+RrLFnG84kYo1Mgs=
-----END CERTIFICATE-----
=== modified file 'mysql-test/std_data/client-key.pem'
--- a/mysql-test/std_data/client-key.pem 2010-01-29 10:42:31 +0000
+++ b/mysql-test/std_data/client-key.pem 2010-03-04 08:03:07 +0000
@@ -1,15 +1,15 @@
-----BEGIN RSA PRIVATE KEY-----
-MIICXgIBAAKBgQC93D34PAvU0sCjnDQt5xH/TUM1F/YNkQGSnk9NwDjVYgNVM9tm
-opHT8rYjNJVTUD1P4wzXdv32VGTm9txwdHxrdEFZsBldYpA8p8heIY8rImvHQ5u+
-eYRg2hbJzu79ZstUgeK1HM/5dN44KyjUMTNV0jAco+TCxzFGQ9UzPYoLRwIDAQAB
-AoGBAJa2lprPT7UJ99Ho1aL6ota/RnKHKtNqII17DgjyZis9OtgP6kJ3GrvdF6iq
-vT79my4nVrJTyxYXuGF/5U1/qqNjuPPBE1Xbu1ubQlFv8CT0kKYynQ7Z3ls8fAHC
-B3VJXnUVlG+GHtUEFFG4FQVX1fn/Sga67ioJ6ivAiBlHKaPBAkEA5f2ToWlj4u9O
-KgfRkN54wdIp4yu2c40pbhMfKGjGGsBAHk92+qSBpzEmxLcI6Ay+4/QysSR4jYmK
-jCJuxiTu1QJBANNU3Hx8Il2SF/2BqGLcIh2SHxzKQIT5wAyD2jb+P2cHvbk6pKGR
-VTmw5bibxXmYMS6J/L2zUF2xtFe+Svwz96sCQEnKYSqBqOWvyBFeLtPfPTlal8vm
-Q4SxfuBtTCrn6t+8XRYcgt0KGPsunvSwkS/6nuh+eiExxWgMACLUDVyPjv0CQQC4
-sJJc7LOv6Oy0bWr2swHRrBEqvQsz63zOszCzHPHWHirNxZV5aiT8XT/2XZRwlvRs
-gsVyGFLk/1fn0vN/g/8vAkEAxUdzUKvC1ZwjzGhgcz2bQU0tEZN4C9jBCiwOI2ud
-BpAsPG0xAGGL2+hz0B0n88XiTHobiTZ1bg4Z41i4pXx2ZA==
+MIICXQIBAAKBgQDMmjdJE2bcz+MLE6Ej7XjbTr0R9owNdvmjMlaa+KEhalVOTT/m
+Z50mmbLNpJrSK1lc14rTYGj4GL3FvhXhKjyj1GHL9RGUF4GB94eM9mrS7tjmd/Zi
+Zk0uFo0IgUrJxksx5bnHioSWSKdHjA0mkFZO5qVujLPyn/w9eJtJboaDdwIDAQAB
+AoGASqk/4We2En+93y3jkIO4pXafIe3w/3zZ7caRue1ehx4RUQh5d+95djuB9u7J
+HEZ7TpjM7QNyao5EueL6gvbxt0LXFvqAMni7yM9tt/HUYtHHPqYiRtUny9bKYFTm
+l8szCCMal/wD9GZU9ByHDNHm7tHUMyMhARNTYSgx+SERFmECQQD/6jJocC4SXf6f
+T3LqimWR02lbJ7qCoDgRglsUXh0zjrG+IIiAyE+QOCCx1GMe3Uw6bsIuYwdHT6as
+WcdPs04xAkEAzKulvEvLVvN5zfa/DTYRTV7jh6aDleOxjsD5oN/oJXoACnPzVuUL
+qQQMNtuAXm6Q1QItrRxpQsSKbY0UQka6JwJBAOSgoNoG5lIIYTKIMvzwGV+XBLeo
+HYsXgh+6Wo4uql3mLErUG78ZtWL9kc/tE4R+ZdyKGLaCR/1gXmH5bwN4B/ECQEBb
+uUH8k3REG4kojesZlVc+/00ojzgS4UKCa/yqa9VdB6ZBz8MDQydinnShkTwgiGpy
+xOoqhO753o2UT0qH8wECQQC99IEJWUnwvExVMkLaZH5NjAFJkb22sjkmuT11tAgU
+RQgOMoDOm6driojnOnDWOkx1r1Gy9NgMLooduja4v6cx
-----END RSA PRIVATE KEY-----
=== modified file 'mysql-test/std_data/server-cert.pem'
--- a/mysql-test/std_data/server-cert.pem 2010-01-29 10:42:31 +0000
+++ b/mysql-test/std_data/server-cert.pem 2010-03-04 08:03:07 +0000
@@ -1,61 +1,41 @@
Certificate:
Data:
- Version: 3 (0x2)
- Serial Number: 1 (0x1)
- Signature Algorithm: sha1WithRSAEncryption
- Issuer: C=FI, ST=Tuusula, O=Monty Program Ab/emailAddress=abstract.developer(a)askmonty.org
+ Version: 1 (0x0)
+ Serial Number: 1048578 (0x100002)
+ Signature Algorithm: md5WithRSAEncryption
+ Issuer: C=SE, ST=Uppsala, L=Uppsala, O=MySQL AB
Validity
- Not Before: Jan 28 21:59:14 2010 GMT
- Not After : Dec 7 21:59:14 2019 GMT
- Subject: C=FI, ST=Tuusula, O=Monty Program Ab, CN=localhost/emailAddress=abstract.developer(a)askmonty.org
+ Not Before: Jan 29 11:56:49 2010 GMT
+ Not After : Jan 28 11:56:49 2015 GMT
+ Subject: C=SE, ST=Uppsala, O=MySQL AB, CN=localhost
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
- RSA Public Key: (1024 bit)
- Modulus (1024 bit):
- 00:cc:79:74:2a:14:7e:77:06:b0:ec:1b:b6:da:70:
- 4c:4f:0e:94:04:8a:e7:69:f4:4c:9c:67:22:41:56:
- 3a:96:53:9e:95:9d:17:55:80:41:c0:13:d6:94:0f:
- cd:2c:51:fe:a4:6e:f2:74:d8:5d:3b:3a:80:e3:85:
- 5b:a5:bc:7d:5c:25:55:e5:40:77:fe:f3:cb:5b:cd:
- da:a5:f4:36:64:30:a2:a0:7f:93:b3:c4:56:75:2a:
- c0:f7:65:2a:d7:e6:ff:88:25:03:e0:b4:40:2e:74:
- 4c:cb:46:95:73:cb:25:5c:87:0e:ec:0f:5a:19:c2:
- b6:dc:9e:e8:f0:30:b1:9c:99
+ Public-Key: (512 bit)
+ Modulus:
+ 00:cd:e4:87:51:9d:72:11:a0:d1:fa:f3:92:8b:13:
+ 1c:eb:f7:e2:9a:2f:72:a8:d6:65:48:d1:69:af:1b:
+ c0:4c:13:e5:60:60:51:41:e9:ab:a6:bc:13:bb:0c:
+ 5e:32:7c:d9:6c:9e:cd:05:24:84:78:db:80:91:2e:
+ d8:88:2b:c2:ed
Exponent: 65537 (0x10001)
- X509v3 extensions:
- X509v3 Basic Constraints:
- CA:FALSE
- Netscape Comment:
- OpenSSL Generated Certificate
- X509v3 Subject Key Identifier:
- 6D:13:3B:40:52:3C:AF:18:EA:33:D1:B7:56:21:1B:05:FE:0B:9E:38
- X509v3 Authority Key Identifier:
- keyid:E0:8A:68:D1:C5:14:FA:E3:BB:C0:30:CD:6C:52:7F:DE:88:29:F4:70
-
- Signature Algorithm: sha1WithRSAEncryption
- 97:db:65:23:7f:f1:15:3c:1e:83:ac:0e:0a:50:a0:0c:22:b8:
- 45:d4:ca:21:05:47:3b:3d:03:b5:6c:4b:8d:bb:5f:57:c3:c7:
- 4e:71:23:cf:33:a3:7f:a0:3d:bd:58:75:b8:37:22:16:2f:e9:
- ed:ae:9b:94:29:81:6e:34:79:cf:41:bd:3d:8d:17:d7:22:1c:
- 1b:58:c7:0f:79:13:56:1d:e8:d8:4e:e5:07:3f:79:1b:dd:c4:
- 06:9b:c5:b6:02:34:43:c5:bf:e5:87:ad:f1:c1:8a:f2:be:c2:
- 00:1d:d4:27:1f:87:c8:80:31:ec:6e:97:95:b4:84:40:d1:73:
- 42:71
+ Signature Algorithm: md5WithRSAEncryption
+ 73:ce:9c:6e:39:46:b4:14:be:da:3f:f3:1b:ba:90:bc:23:43:
+ d7:82:2a:70:4e:a6:d9:5a:65:5c:b7:df:71:df:75:77:c5:80:
+ a4:af:fa:d2:59:e2:fd:c9:9c:f0:98:95:8e:69:a9:8c:7c:d8:
+ 6f:48:d2:e3:36:e0:cd:ff:3f:d1:a5:e6:ab:75:09:c4:50:10:
+ c4:96:dd:bf:3b:de:32:46:da:ca:4a:f1:d6:52:8a:33:2f:ab:
+ f5:2e:70:3f:d4:9c:be:00:c8:03:f9:39:8a:df:5b:70:3c:40:
+ ef:03:be:7c:3d:1d:32:32:f3:51:81:e2:83:30:6e:3d:38:9b:
+ fb:3c
-----BEGIN CERTIFICATE-----
-MIIC2TCCAkKgAwIBAgIBATANBgkqhkiG9w0BAQUFADBqMQswCQYDVQQGEwJGSTEQ
-MA4GA1UECBMHVHV1c3VsYTEZMBcGA1UEChMQTW9udHkgUHJvZ3JhbSBBYjEuMCwG
-CSqGSIb3DQEJARYfYWJzdHJhY3QuZGV2ZWxvcGVyQGFza21vbnR5Lm9yZzAeFw0x
-MDAxMjgyMTU5MTRaFw0xOTEyMDcyMTU5MTRaMH4xCzAJBgNVBAYTAkZJMRAwDgYD
-VQQIEwdUdXVzdWxhMRkwFwYDVQQKExBNb250eSBQcm9ncmFtIEFiMRIwEAYDVQQD
-Ewlsb2NhbGhvc3QxLjAsBgkqhkiG9w0BCQEWH2Fic3RyYWN0LmRldmVsb3BlckBh
-c2ttb250eS5vcmcwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMx5dCoUfncG
-sOwbttpwTE8OlASK52n0TJxnIkFWOpZTnpWdF1WAQcAT1pQPzSxR/qRu8nTYXTs6
-gOOFW6W8fVwlVeVAd/7zy1vN2qX0NmQwoqB/k7PEVnUqwPdlKtfm/4glA+C0QC50
-TMtGlXPLJVyHDuwPWhnCttye6PAwsZyZAgMBAAGjezB5MAkGA1UdEwQCMAAwLAYJ
-YIZIAYb4QgENBB8WHU9wZW5TU0wgR2VuZXJhdGVkIENlcnRpZmljYXRlMB0GA1Ud
-DgQWBBRtEztAUjyvGOoz0bdWIRsF/gueODAfBgNVHSMEGDAWgBTgimjRxRT647vA
-MM1sUn/eiCn0cDANBgkqhkiG9w0BAQUFAAOBgQCX22Ujf/EVPB6DrA4KUKAMIrhF
-1MohBUc7PQO1bEuNu19Xw8dOcSPPM6N/oD29WHW4NyIWL+ntrpuUKYFuNHnPQb09
-jRfXIhwbWMcPeRNWHejYTuUHP3kb3cQGm8W2AjRDxb/lh63xwYryvsIAHdQnH4fI
-gDHsbpeVtIRA0XNCcQ==
+MIIBtzCCASACAxAAAjANBgkqhkiG9w0BAQQFADBEMQswCQYDVQQGEwJTRTEQMA4G
+A1UECBMHVXBwc2FsYTEQMA4GA1UEBxMHVXBwc2FsYTERMA8GA1UEChMITXlTUUwg
+QUIwHhcNMTAwMTI5MTE1NjQ5WhcNMTUwMTI4MTE1NjQ5WjBGMQswCQYDVQQGEwJT
+RTEQMA4GA1UECBMHVXBwc2FsYTERMA8GA1UEChMITXlTUUwgQUIxEjAQBgNVBAMT
+CWxvY2FsaG9zdDBcMA0GCSqGSIb3DQEBAQUAA0sAMEgCQQDN5IdRnXIRoNH685KL
+Exzr9+KaL3Ko1mVI0WmvG8BME+VgYFFB6aumvBO7DF4yfNlsns0FJIR424CRLtiI
+K8LtAgMBAAEwDQYJKoZIhvcNAQEEBQADgYEAc86cbjlGtBS+2j/zG7qQvCND14Iq
+cE6m2VplXLffcd91d8WApK/60lni/cmc8JiVjmmpjHzYb0jS4zbgzf8/0aXmq3UJ
+xFAQxJbdvzveMkbaykrx1lKKMy+r9S5wP9ScvgDIA/k5it9bcDxA7wO+fD0dMjLz
+UYHigzBuPTib+zw=
-----END CERTIFICATE-----
=== modified file 'mysql-test/std_data/server-key.pem'
--- a/mysql-test/std_data/server-key.pem 2010-01-29 10:42:31 +0000
+++ b/mysql-test/std_data/server-key.pem 2010-03-04 08:03:07 +0000
@@ -1,15 +1,9 @@
-----BEGIN RSA PRIVATE KEY-----
-MIICXgIBAAKBgQDMeXQqFH53BrDsG7bacExPDpQEiudp9EycZyJBVjqWU56VnRdV
-gEHAE9aUD80sUf6kbvJ02F07OoDjhVulvH1cJVXlQHf+88tbzdql9DZkMKKgf5Oz
-xFZ1KsD3ZSrX5v+IJQPgtEAudEzLRpVzyyVchw7sD1oZwrbcnujwMLGcmQIDAQAB
-AoGBAMdMYkNZsmJFbVDVOobzCg3Mgc1jrmeBrOKNS8AvUe+QFXRyp3m5B102eOHb
-/PmD+hU/5qao9UZzoYkiRM/oRq45jrqJEYwWrX007bKK0F9hnErtC1ImM1nBFVhx
-6+6cr+ShUkvtj8+wJ2d5bIccUzGCUfFR5tb5BnePTXK8IVoBAkEA7WGNxHAVKgjS
-AzlpHr5fvpivA07hNVJizTwZdWGGYeETilZhkkuMRwREceeohF6ILMf0FTZdFSa/
-8EeLa3icIQJBANyDKFjynKwWy5pyRSz75mVwrEi+4eTQPsCPNWLkbpbEPwqPLYWJ
-2VSFkISXF7b7Od48JkQWgiB8/kXqMDEdsXkCQQCzZvj3ryWvoP7nhOoXXBWMPGR4
-gZLe86bMKVGsTsp7CtnzwRj4sbQQr/7yfvvzHmaYQX4M0gtDQwfolomd7YdBAkEA
-y24ETuqjNu9grf81aiaJipPDnOjcJOcovSRgr/blPxmUvv0Pld5yLNN7W5a4PgrO
-fAMpmi7ZpXcqbP17sBQgoQJAWTDFKAmfHPVdDGZuCw4yceP5d+Tv7ABglZUvpPKx
-kAvGN1WBASUuCQJDOIgzl6gvYX07S5p147i9mv7UBWOpvw==
+MIIBOwIBAAJBAM3kh1GdchGg0frzkosTHOv34povcqjWZUjRaa8bwEwT5WBgUUHp
+q6a8E7sMXjJ82WyezQUkhHjbgJEu2Igrwu0CAwEAAQJBAJuwhFbF3NzRpBbEmnqJ
+4GPa1UJMQMLFJF+04tqj/HxJcAIVhOJhGmmtYNw1yjz/ZsPnfJCMz4eFOtdjvGtf
+peECIQDmFFg2WLvYo+2m9w9V7z4ZIkg7ixYkI/ObUUctfZkPOQIhAOUWnrvjFrAX
+bIvYT/YR50+3ZDLEc51XxNgJnWqWYl1VAiEAnTOFWgyivFC1DgF8PvDp8u5TgCt2
+A1d1GMgd490O+TECIC/WMl0/hTxOF9930vKqOGf//o9PUGkZq8QE9fcM4gtlAiAE
+iOcFpnLjtWj57jrhuw214ucnB5rklkQQe+AtcARNkg==
-----END RSA PRIVATE KEY-----
=== modified file 'mysql-test/std_data/server8k-cert.pem'
--- a/mysql-test/std_data/server8k-cert.pem 2010-01-29 10:42:31 +0000
+++ b/mysql-test/std_data/server8k-cert.pem 2010-03-04 08:03:07 +0000
@@ -1,138 +1,125 @@
Certificate:
Data:
- Version: 3 (0x2)
- Serial Number: 4 (0x4)
- Signature Algorithm: sha1WithRSAEncryption
+ Version: 1 (0x0)
+ Serial Number: 1048579 (0x100003)
+ Signature Algorithm: md5WithRSAEncryption
Issuer: C=SE, ST=Uppsala, L=Uppsala, O=MySQL AB
Validity
- Not Before: Jan 28 11:12:27 2009 GMT
- Not After : Jan 28 11:12:27 2010 GMT
+ Not Before: Jan 29 12:01:53 2010 GMT
+ Not After : Jan 28 12:01:53 2015 GMT
Subject: C=SE, ST=Uppsala, O=MySQL AB, CN=server
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
- RSA Public Key: (8192 bit)
- Modulus (8192 bit):
- 00:c0:8f:22:03:24:59:67:46:14:d6:8f:60:09:58:
- 06:07:45:f1:78:71:55:f1:ea:b9:30:8a:cd:c3:3c:
- b9:bf:65:6e:18:ed:a0:b8:c9:19:56:6f:c4:90:19:
- c8:65:09:db:ff:bf:82:a1:08:ad:01:4f:5a:a3:d4:
- 3d:78:7e:4b:4a:01:a4:7d:e8:7b:05:3e:7d:d8:b9:
- 55:58:60:d6:1c:ce:e8:32:62:2c:19:60:f3:ed:05:
- 99:6d:c9:77:07:2e:11:6d:0b:9a:c7:68:38:46:e8:
- fa:31:80:df:e8:79:f0:f1:fd:a9:94:c3:fa:0d:f5:
- 78:ac:49:7e:d5:17:fd:e1:ee:44:f3:c7:0e:30:32:
- 5d:a9:19:25:e4:bb:21:1d:fe:3c:84:48:40:f5:58:
- f4:bf:13:8c:85:68:bb:ec:f5:dd:c6:38:d1:b0:77:
- 1f:a6:8e:4f:8d:e2:6f:49:74:f5:3f:90:65:8e:99:
- 1e:59:9c:1c:b5:26:24:c4:b1:de:1e:fb:96:65:c4:
- 31:14:1a:53:b8:5e:62:8a:c7:04:f7:b4:36:a4:af:
- 07:c8:27:06:ed:dd:e6:f4:8c:62:f1:65:40:d0:9f:
- 9f:a9:14:c8:8e:8b:74:d6:67:5a:d0:c9:4d:35:a1:
- d5:7b:39:3a:42:9f:e4:d0:f4:c6:0f:2e:42:30:4b:
- 56:b2:3d:6d:8e:2d:58:c5:69:99:35:49:95:95:99:
- b6:87:29:2b:32:d1:50:08:cd:25:14:48:6d:10:99:
- 85:61:3c:41:26:21:55:cc:1f:cf:ad:b0:2f:b9:89:
- d8:4e:a0:18:ff:75:1d:b6:97:7c:c5:fa:8b:dc:93:
- 17:86:0a:64:d4:09:35:d5:83:34:6d:5c:6d:c6:8c:
- cd:b9:ec:c2:93:c6:c1:b7:cc:04:6f:22:e0:07:bf:
- e0:d9:9b:2f:d5:a0:50:cc:f9:f0:95:83:8f:f4:30:
- 83:72:94:d7:b5:4b:da:cc:9f:54:3b:8d:78:77:0b:
- 24:6c:0f:c2:96:61:96:2f:b8:5f:b5:7a:ab:7a:5b:
- 97:7a:a9:ad:40:8b:f2:d6:c6:8d:81:d9:94:61:8f:
- 9d:03:c5:b9:10:03:68:83:bf:04:81:cc:ac:bd:34:
- 89:e8:d4:8d:43:20:e2:b6:a4:11:3d:15:2a:82:0c:
- d6:3a:6a:8c:62:d4:93:bc:c3:80:bf:1b:b4:2b:0a:
- 7a:34:f0:cd:1e:82:3f:25:0f:d1:04:a8:0a:05:19:
- b0:d6:16:83:39:af:0b:45:7d:cb:14:7e:4d:aa:aa:
- c2:39:a8:46:38:ab:bd:ab:2a:bd:34:43:7f:da:25:
- de:2b:fb:69:3b:fe:3b:87:fd:98:94:76:4a:bf:04:
- a3:31:e3:3a:ff:6f:04:fa:fa:24:e4:2a:89:e9:0e:
- bf:44:4c:72:85:82:3c:89:4a:03:63:01:41:92:53:
- d0:82:60:6e:d8:ff:8c:a2:b4:1a:3b:20:6d:ae:74:
- 92:30:4e:48:e3:51:a6:cb:73:97:06:13:03:32:23:
- 9b:7d:a2:c7:3a:a9:af:97:8c:51:ed:fe:fa:b4:b4:
- 1a:a3:87:fc:cf:8c:8e:e6:80:15:03:fd:fe:7d:bd:
- b1:76:f1:5f:b3:09:2b:4c:4d:a7:7c:b5:72:b1:d6:
- db:38:c0:67:a4:54:bc:87:09:a5:39:ba:1a:7e:3f:
- 74:60:ad:3d:4b:be:94:53:f3:64:16:c7:33:35:ec:
- 41:00:95:b6:de:99:62:a2:7a:28:9a:45:4d:fa:cd:
- a6:77:f6:de:58:72:50:c8:7d:69:38:db:07:04:84:
- d8:4d:39:f7:50:13:43:ae:2d:af:45:a4:2a:39:56:
- 3c:b8:b7:d8:26:a4:36:c9:23:aa:aa:b8:49:0b:21:
- ba:9e:7a:2b:7f:4d:29:9f:0e:00:1e:b4:5e:a6:fa:
- 49:fe:8d:e5:74:57:d8:ba:d9:92:2c:d2:ac:84:1d:
- f2:a6:a4:44:1c:bf:88:41:32:7e:d1:c3:2f:6e:bc:
- 0f:5d:19:a6:8f:74:2b:67:ba:dd:a9:db:68:b5:ce:
- 9d:25:48:df:54:08:d0:1d:4f:2e:5b:24:bc:05:0f:
- fb:58:46:fa:02:ca:53:93:29:cf:10:27:c2:a0:18:
- d0:f5:d4:b9:3c:5e:df:8e:6c:f5:7c:b9:b4:54:cc:
- 39:16:5d:3c:da:96:b3:c3:6c:d4:70:5d:d3:30:a7:
- a6:bd:6f:dd:41:bc:a8:de:42:60:59:9a:85:25:0d:
- 2a:45:c3:05:b4:6e:7a:4a:4d:ca:8c:0a:e5:6c:34:
- bc:20:9b:6d:4a:ca:ca:b6:a6:3a:a0:db:c3:0e:20:
- 1a:12:1b:77:dd:cb:1d:7f:c3:0d:0d:e7:c1:fd:96:
- d2:c7:68:80:99:a0:d9:8a:33:21:a3:8b:a2:5a:a7:
- 7e:27:06:02:7f:ed:60:11:37:34:54:17:7f:4d:90:
- 14:1e:69:37:0d:ba:f0:2b:f0:a3:2d:62:79:c8:76:
- a8:ea:c8:e7:3b:1f:c6:4f:c2:0c:d7:ac:f0:77:53:
- 5d:f0:50:b4:df:9b:03:ca:4d:41:e1:18:b2:25:30:
- 86:1d:63:e5:67:b1:53:cd:6b:4e:83:1a:b9:5e:2d:
- 05:15:6b:d4:8e:b1:97:fc:31:03:57:cb:bf:27:7f:
- cd:5f:27:7e:66:e7:3c:17:09:b6:11:2a:4f:33:cd:
- eb:1a:d3:6f:d5:15:8b:8b:ce:68:6b:7e:9a:95:e5:
- 74:7f:17:57:d9
+ Public-Key: (8192 bit)
+ Modulus:
+ 00:ca:aa:1d:c4:11:ec:91:f0:c7:ff:5f:90:92:fc:
+ 40:0c:5e:b7:3d:00:c5:20:d5:0f:89:31:07:d7:41:
+ 4c:8b:60:80:aa:38:14:de:93:6b:9c:74:88:41:68:
+ b5:02:41:01:2d:86:a2:7a:95:53:5e:7b:67:2f:6c:
+ 1e:29:51:f9:44:fd:4a:80:be:b2:23:a1:3e:1b:38:
+ cf:88:c4:71:ee:f8:6b:41:c5:2d:c0:c3:52:ac:59:
+ 7d:81:34:19:95:32:b8:9a:51:b6:41:36:d4:c4:a1:
+ ae:84:e6:38:b9:e8:bf:96:be:19:7a:6b:77:4d:e0:
+ de:e6:b3:b6:6b:bc:3d:dd:68:bc:4b:c4:eb:f5:36:
+ 93:ed:56:a2:15:50:8a:10:e8:d6:22:ed:6c:b1:cd:
+ c3:18:c9:f6:0a:e1:de:61:65:62:d6:14:41:8c:b5:
+ fb:14:68:c1:cf:12:5d:41:21:9d:57:11:43:7d:bb:
+ 43:2c:21:bb:c3:44:7d:a8:cf:1f:c3:71:75:b5:47:
+ c2:7d:ce:38:3c:73:64:9e:15:d8:a7:27:cf:bd:40:
+ c8:45:08:e3:c8:39:a8:0b:8e:c2:5b:7b:f1:47:91:
+ 12:91:cc:e1:00:e0:94:5b:bd:32:e4:0c:8d:c3:be:
+ cc:76:32:52:12:69:b0:18:e0:b0:c2:76:34:5a:5f:
+ 79:d9:f6:81:9d:02:0a:61:69:1c:33:ce:49:fa:76:
+ 03:1e:07:5b:27:0b:bf:34:9e:34:96:b8:03:9b:50:
+ 3a:6a:2f:17:7a:14:cf:65:63:00:37:52:a8:73:ce:
+ 4b:14:40:f4:d2:9a:56:54:33:b8:77:2e:42:5b:8f:
+ ec:1f:18:f4:ad:ab:8a:4a:8d:6d:70:25:f3:58:e7:
+ cb:66:51:14:7d:16:f4:eb:6d:56:76:76:51:6e:d6:
+ 1d:da:d3:8d:c0:64:5a:67:4e:af:e2:bf:33:d1:b8:
+ f6:2a:fc:57:87:a7:35:5e:80:c9:ac:fc:87:c9:71:
+ 17:91:bf:b7:4d:a3:ed:3c:1b:27:f4:66:a0:f9:46:
+ 03:27:cc:ea:80:f6:4b:40:f6:41:94:cd:bd:0a:b3:
+ ef:26:be:de:6f:69:ae:0f:3f:1c:55:63:33:90:9b:
+ ed:ca:5a:12:4d:de:4b:06:c2:a2:92:b0:42:3d:31:
+ af:a4:15:12:15:f8:8a:e9:88:8d:cf:fd:85:66:50:
+ 6f:11:f1:9f:48:f3:b5:ba:9d:86:68:24:a2:5d:a8:
+ 7c:54:42:fa:d8:b5:c5:f2:dd:0e:0f:d0:68:e4:54:
+ 7e:c5:b9:a0:9b:65:2d:77:f4:8f:b9:30:0a:d5:86:
+ 5c:ed:c9:7c:d1:da:9d:0d:63:50:ee:e5:1e:92:63:
+ cc:a2:0c:e8:4a:96:02:4d:dc:8f:df:7c:8f:08:18:
+ a8:30:88:d7:af:89:ad:fc:57:4b:10:f9:f1:cb:48:
+ e8:b6:3b:c8:3f:fc:c2:d3:d1:4a:10:3c:1b:6b:64:
+ dc:e5:65:1e:5b:b2:da:b1:e2:24:97:8f:ee:c0:4b:
+ 8e:18:83:7c:17:a6:3c:45:b3:60:06:23:f2:2f:18:
+ 13:9e:17:8a:c6:72:79:8c:4d:04:f3:9d:ea:e0:25:
+ d3:33:8c:1e:11:47:63:1f:a5:45:3f:bd:85:b3:fe:
+ a5:68:ee:48:b7:0c:a4:c9:7f:72:d0:75:66:9b:6a:
+ f9:a0:50:f3:a8:59:6d:a3:dd:38:4f:70:2b:bb:ff:
+ 92:2e:71:ab:ef:e9:00:ed:0d:d1:b4:6f:f0:8e:b2:
+ 09:fb:4d:61:0d:d9:10:d5:54:11:cd:03:94:84:fd:
+ a8:68:e4:45:6e:1e:6a:1e:2f:85:a1:6d:f5:b6:c0:
+ f1:ee:f7:36:e9:fe:c2:f7:ad:cc:13:46:5b:88:42:
+ f0:2d:1f:b5:0e:7e:b5:2b:e4:8d:ab:b9:87:30:6a:
+ 3d:12:f4:ad:f3:1c:ac:cc:1a:48:29:2a:96:7b:80:
+ 00:0b:6e:59:87:bf:a3:ca:70:99:1b:1c:fd:72:3d:
+ b2:d3:94:4a:cf:55:75:be:1f:40:ec:55:35:48:2d:
+ 55:f0:00:da:3c:b0:60:ba:11:32:66:54:0b:be:06:
+ a4:5e:b7:c9:59:bb:4d:f4:92:06:26:48:6e:c2:12:
+ d4:7c:f0:20:b8:a2:e1:bc:6a:b6:19:0e:37:47:55:
+ c9:f2:49:0d:96:75:a2:84:64:bf:34:fc:be:b2:41:
+ e4:f5:88:eb:e1:b7:26:a5:e5:41:c2:20:0c:f6:e2:
+ a8:a5:e7:76:54:a5:fb:4b:80:05:7d:18:85:7a:ba:
+ bc:b7:ad:c0:2f:60:85:cc:15:12:1c:2f:0a:9e:f3:
+ 7c:40:cf:f4:3e:23:d2:95:ca:d0:06:58:52:f0:84:
+ d8:0f:3d:eb:ff:12:68:94:79:8f:be:40:29:5f:98:
+ c8:90:6c:05:2f:99:8c:2a:63:78:1f:23:b1:29:c5:
+ e7:49:c9:b2:92:0f:53:0b:d5:71:28:17:c2:19:bf:
+ 60:bf:7c:87:a8:ab:c1:f4:0a:c1:b8:d2:68:ee:c1:
+ ce:a7:13:13:17:6d:24:5d:a2:37:a6:d7:7d:48:8b:
+ 2b:74:2d:40:2e:ca:19:d5:b6:3e:6c:42:71:fa:cf:
+ 85:87:f9:de:80:73:8b:89:f4:70:f0:d8:d7:ff:40:
+ 41:9c:c7:15:6d:9b:6e:4c:b5:52:02:99:79:32:73:
+ ca:26:a0:ac:31:6f:c4:b0:f5:da:bb:c2:1f:e0:9f:
+ 44:ba:25:f7:9f
Exponent: 65537 (0x10001)
- X509v3 extensions:
- X509v3 Basic Constraints:
- CA:FALSE
- X509v3 Subject Key Identifier:
- 58:12:24:59:A7:3C:29:15:89:5A:C2:12:DB:E7:A5:42:10:21:B7:BA
- X509v3 Authority Key Identifier:
- keyid:F2:E2:EA:55:65:A4:9A:E2:AC:9D:97:F5:45:6C:F6:F7:8C:11:AD:DF
- DirName:/C=SE/ST=Uppsala/L=Uppsala/O=MySQL AB
- serial:95:E9:78:F5:34:50:E4:D5
-
- Signature Algorithm: sha1WithRSAEncryption
- cd:cb:5c:83:35:ea:cb:cb:c3:a8:c3:95:e2:e6:6f:4d:d8:e4:
- ee:41:dd:3f:35:82:ac:2f:fd:63:89:4f:3a:19:d7:81:75:b3:
- a3:fc:36:b2:12:d5:c6:56:bc:13:60:37:33:6e:a0:d8:ae:7c:
- 88:f9:4b:ee:7b:1f:c8:f0:56:19:07:4d:bb:45:52:1c:78:81:
- 07:7c:13:86:b8:86:70:85:e4:71:25:58:78:d1:be:de:22:82:
- 6d:1a:4b:06:ac:f0:e8:50:87:c7:69:64:c2:61:43:cd:96:06:
- a6:7e:09:a9:02:01:2a:a2:40:f3:cd:10:80:48:d0:34:55:40:
- b9:ce
+ Signature Algorithm: md5WithRSAEncryption
+ 08:75:dc:b9:3f:aa:b6:7e:81:7a:39:d1:ee:ed:44:b6:ce:1b:
+ 37:c4:4c:19:d0:66:e6:eb:b5:4f:2a:ef:95:58:64:21:55:01:
+ 12:30:ac:8a:95:d1:06:de:29:46:a4:f1:7d:7f:b0:1e:d2:4e:
+ fb:f6:fa:9a:74:be:85:62:db:0b:82:90:58:62:c5:5f:f1:80:
+ 02:9f:c5:fb:f3:6b:b0:b4:3b:04:b1:e5:53:c2:d0:00:a1:1a:
+ 9d:65:60:6f:73:98:67:e0:9c:c8:12:94:79:59:bf:43:7b:f5:
+ 77:c8:8f:df:b1:cd:11:1c:01:19:99:c2:22:42:f7:41:ae:b4:
+ b8:1a
-----BEGIN CERTIFICATE-----
-MIIGJTCCBY6gAwIBAgIBBDANBgkqhkiG9w0BAQUFADBEMQswCQYDVQQGEwJTRTEQ
-MA4GA1UECBMHVXBwc2FsYTEQMA4GA1UEBxMHVXBwc2FsYTERMA8GA1UEChMITXlT
-UUwgQUIwHhcNMDkwMTI4MTExMjI3WhcNMTAwMTI4MTExMjI3WjBDMQswCQYDVQQG
-EwJTRTEQMA4GA1UECBMHVXBwc2FsYTERMA8GA1UEChMITXlTUUwgQUIxDzANBgNV
-BAMTBnNlcnZlcjCCBCIwDQYJKoZIhvcNAQEBBQADggQPADCCBAoCggQBAMCPIgMk
-WWdGFNaPYAlYBgdF8XhxVfHquTCKzcM8ub9lbhjtoLjJGVZvxJAZyGUJ2/+/gqEI
-rQFPWqPUPXh+S0oBpH3oewU+fdi5VVhg1hzO6DJiLBlg8+0FmW3JdwcuEW0Lmsdo
-OEbo+jGA3+h58PH9qZTD+g31eKxJftUX/eHuRPPHDjAyXakZJeS7IR3+PIRIQPVY
-9L8TjIVou+z13cY40bB3H6aOT43ib0l09T+QZY6ZHlmcHLUmJMSx3h77lmXEMRQa
-U7heYorHBPe0NqSvB8gnBu3d5vSMYvFlQNCfn6kUyI6LdNZnWtDJTTWh1Xs5OkKf
-5ND0xg8uQjBLVrI9bY4tWMVpmTVJlZWZtocpKzLRUAjNJRRIbRCZhWE8QSYhVcwf
-z62wL7mJ2E6gGP91HbaXfMX6i9yTF4YKZNQJNdWDNG1cbcaMzbnswpPGwbfMBG8i
-4Ae/4NmbL9WgUMz58JWDj/Qwg3KU17VL2syfVDuNeHcLJGwPwpZhli+4X7V6q3pb
-l3qprUCL8tbGjYHZlGGPnQPFuRADaIO/BIHMrL00iejUjUMg4rakET0VKoIM1jpq
-jGLUk7zDgL8btCsKejTwzR6CPyUP0QSoCgUZsNYWgzmvC0V9yxR+TaqqwjmoRjir
-vasqvTRDf9ol3iv7aTv+O4f9mJR2Sr8EozHjOv9vBPr6JOQqiekOv0RMcoWCPIlK
-A2MBQZJT0IJgbtj/jKK0Gjsgba50kjBOSONRpstzlwYTAzIjm32ixzqpr5eMUe3+
-+rS0GqOH/M+MjuaAFQP9/n29sXbxX7MJK0xNp3y1crHW2zjAZ6RUvIcJpTm6Gn4/
-dGCtPUu+lFPzZBbHMzXsQQCVtt6ZYqJ6KJpFTfrNpnf23lhyUMh9aTjbBwSE2E05
-91ATQ64tr0WkKjlWPLi32CakNskjqqq4SQshup56K39NKZ8OAB60Xqb6Sf6N5XRX
-2LrZkizSrIQd8qakRBy/iEEyftHDL268D10Zpo90K2e63anbaLXOnSVI31QI0B1P
-LlskvAUP+1hG+gLKU5MpzxAnwqAY0PXUuTxe345s9Xy5tFTMORZdPNqWs8Ns1HBd
-0zCnpr1v3UG8qN5CYFmahSUNKkXDBbRuekpNyowK5Ww0vCCbbUrKyramOqDbww4g
-GhIbd93LHX/DDQ3nwf2W0sdogJmg2YozIaOLolqnficGAn/tYBE3NFQXf02QFB5p
-Nw268Cvwoy1iech2qOrI5zsfxk/CDNes8HdTXfBQtN+bA8pNQeEYsiUwhh1j5Wex
-U81rToMauV4tBRVr1I6xl/wxA1fLvyd/zV8nfmbnPBcJthEqTzPN6xrTb9UVi4vO
-aGt+mpXldH8XV9kCAwEAAaOBozCBoDAJBgNVHRMEAjAAMB0GA1UdDgQWBBRYEiRZ
-pzwpFYlawhLb56VCECG3ujB0BgNVHSMEbTBrgBTy4upVZaSa4qydl/VFbPb3jBGt
-36FIpEYwRDELMAkGA1UEBhMCU0UxEDAOBgNVBAgTB1VwcHNhbGExEDAOBgNVBAcT
-B1VwcHNhbGExETAPBgNVBAoTCE15U1FMIEFCggkAlel49TRQ5NUwDQYJKoZIhvcN
-AQEFBQADgYEAzctcgzXqy8vDqMOV4uZvTdjk7kHdPzWCrC/9Y4lPOhnXgXWzo/w2
-shLVxla8E2A3M26g2K58iPlL7nsfyPBWGQdNu0VSHHiBB3wThriGcIXkcSVYeNG+
-3iKCbRpLBqzw6FCHx2lkwmFDzZYGpn4JqQIBKqJA880QgEjQNFVAuc4=
+MIIFfDCCBOUCAxAAAzANBgkqhkiG9w0BAQQFADBEMQswCQYDVQQGEwJTRTEQMA4G
+A1UECBMHVXBwc2FsYTEQMA4GA1UEBxMHVXBwc2FsYTERMA8GA1UEChMITXlTUUwg
+QUIwHhcNMTAwMTI5MTIwMTUzWhcNMTUwMTI4MTIwMTUzWjBDMQswCQYDVQQGEwJT
+RTEQMA4GA1UECBMHVXBwc2FsYTERMA8GA1UEChMITXlTUUwgQUIxDzANBgNVBAMT
+BnNlcnZlcjCCBCIwDQYJKoZIhvcNAQEBBQADggQPADCCBAoCggQBAMqqHcQR7JHw
+x/9fkJL8QAxetz0AxSDVD4kxB9dBTItggKo4FN6Ta5x0iEFotQJBAS2GonqVU157
+Zy9sHilR+UT9SoC+siOhPhs4z4jEce74a0HFLcDDUqxZfYE0GZUyuJpRtkE21MSh
+roTmOLnov5a+GXprd03g3uaztmu8Pd1ovEvE6/U2k+1WohVQihDo1iLtbLHNwxjJ
+9grh3mFlYtYUQYy1+xRowc8SXUEhnVcRQ327Qywhu8NEfajPH8NxdbVHwn3OODxz
+ZJ4V2Kcnz71AyEUI48g5qAuOwlt78UeREpHM4QDglFu9MuQMjcO+zHYyUhJpsBjg
+sMJ2NFpfedn2gZ0CCmFpHDPOSfp2Ax4HWycLvzSeNJa4A5tQOmovF3oUz2VjADdS
+qHPOSxRA9NKaVlQzuHcuQluP7B8Y9K2rikqNbXAl81jny2ZRFH0W9OttVnZ2UW7W
+HdrTjcBkWmdOr+K/M9G49ir8V4enNV6Ayaz8h8lxF5G/t02j7TwbJ/RmoPlGAyfM
+6oD2S0D2QZTNvQqz7ya+3m9prg8/HFVjM5Cb7cpaEk3eSwbCopKwQj0xr6QVEhX4
+iumIjc/9hWZQbxHxn0jztbqdhmgkol2ofFRC+ti1xfLdDg/QaORUfsW5oJtlLXf0
+j7kwCtWGXO3JfNHanQ1jUO7lHpJjzKIM6EqWAk3cj998jwgYqDCI16+JrfxXSxD5
+8ctI6LY7yD/8wtPRShA8G2tk3OVlHluy2rHiJJeP7sBLjhiDfBemPEWzYAYj8i8Y
+E54XisZyeYxNBPOd6uAl0zOMHhFHYx+lRT+9hbP+pWjuSLcMpMl/ctB1Zptq+aBQ
+86hZbaPdOE9wK7v/ki5xq+/pAO0N0bRv8I6yCftNYQ3ZENVUEc0DlIT9qGjkRW4e
+ah4vhaFt9bbA8e73Nun+wvetzBNGW4hC8C0ftQ5+tSvkjau5hzBqPRL0rfMcrMwa
+SCkqlnuAAAtuWYe/o8pwmRsc/XI9stOUSs9Vdb4fQOxVNUgtVfAA2jywYLoRMmZU
+C74GpF63yVm7TfSSBiZIbsIS1HzwILii4bxqthkON0dVyfJJDZZ1ooRkvzT8vrJB
+5PWI6+G3JqXlQcIgDPbiqKXndlSl+0uABX0YhXq6vLetwC9ghcwVEhwvCp7zfEDP
+9D4j0pXK0AZYUvCE2A896/8SaJR5j75AKV+YyJBsBS+ZjCpjeB8jsSnF50nJspIP
+UwvVcSgXwhm/YL98h6irwfQKwbjSaO7BzqcTExdtJF2iN6bXfUiLK3QtQC7KGdW2
+PmxCcfrPhYf53oBzi4n0cPDY1/9AQZzHFW2bbky1UgKZeTJzyiagrDFvxLD12rvC
+H+CfRLol958CAwEAATANBgkqhkiG9w0BAQQFAAOBgQAIddy5P6q2foF6OdHu7US2
+zhs3xEwZ0Gbm67VPKu+VWGQhVQESMKyKldEG3ilGpPF9f7Ae0k779vqadL6FYtsL
+gpBYYsVf8YACn8X782uwtDsEseVTwtAAoRqdZWBvc5hn4JzIEpR5Wb9De/V3yI/f
+sc0RHAEZmcIiQvdBrrS4Gg==
-----END CERTIFICATE-----
=== modified file 'mysql-test/std_data/server8k-key.pem'
--- a/mysql-test/std_data/server8k-key.pem 2009-06-11 16:21:32 +0000
+++ b/mysql-test/std_data/server8k-key.pem 2010-01-29 14:54:27 +0000
@@ -1,99 +1,99 @@
-----BEGIN RSA PRIVATE KEY-----
-MIISKAIBAAKCBAEAwI8iAyRZZ0YU1o9gCVgGB0XxeHFV8eq5MIrNwzy5v2VuGO2g
-uMkZVm/EkBnIZQnb/7+CoQitAU9ao9Q9eH5LSgGkfeh7BT592LlVWGDWHM7oMmIs
-GWDz7QWZbcl3By4RbQuax2g4Ruj6MYDf6Hnw8f2plMP6DfV4rEl+1Rf94e5E88cO
-MDJdqRkl5LshHf48hEhA9Vj0vxOMhWi77PXdxjjRsHcfpo5PjeJvSXT1P5Bljpke
-WZwctSYkxLHeHvuWZcQxFBpTuF5iiscE97Q2pK8HyCcG7d3m9Ixi8WVA0J+fqRTI
-jot01mda0MlNNaHVezk6Qp/k0PTGDy5CMEtWsj1tji1YxWmZNUmVlZm2hykrMtFQ
-CM0lFEhtEJmFYTxBJiFVzB/PrbAvuYnYTqAY/3Udtpd8xfqL3JMXhgpk1Ak11YM0
-bVxtxozNuezCk8bBt8wEbyLgB7/g2Zsv1aBQzPnwlYOP9DCDcpTXtUvazJ9UO414
-dwskbA/ClmGWL7hftXqreluXeqmtQIvy1saNgdmUYY+dA8W5EANog78EgcysvTSJ
-6NSNQyDitqQRPRUqggzWOmqMYtSTvMOAvxu0Kwp6NPDNHoI/JQ/RBKgKBRmw1haD
-Oa8LRX3LFH5NqqrCOahGOKu9qyq9NEN/2iXeK/tpO/47h/2YlHZKvwSjMeM6/28E
-+vok5CqJ6Q6/RExyhYI8iUoDYwFBklPQgmBu2P+MorQaOyBtrnSSME5I41Gmy3OX
-BhMDMiObfaLHOqmvl4xR7f76tLQao4f8z4yO5oAVA/3+fb2xdvFfswkrTE2nfLVy
-sdbbOMBnpFS8hwmlOboafj90YK09S76UU/NkFsczNexBAJW23plionoomkVN+s2m
-d/beWHJQyH1pONsHBITYTTn3UBNDri2vRaQqOVY8uLfYJqQ2ySOqqrhJCyG6nnor
-f00pnw4AHrRepvpJ/o3ldFfYutmSLNKshB3ypqREHL+IQTJ+0cMvbrwPXRmmj3Qr
-Z7rdqdtotc6dJUjfVAjQHU8uWyS8BQ/7WEb6AspTkynPECfCoBjQ9dS5PF7fjmz1
-fLm0VMw5Fl082pazw2zUcF3TMKemvW/dQbyo3kJgWZqFJQ0qRcMFtG56Sk3KjArl
-bDS8IJttSsrKtqY6oNvDDiAaEht33csdf8MNDefB/ZbSx2iAmaDZijMho4uiWqd+
-JwYCf+1gETc0VBd/TZAUHmk3DbrwK/CjLWJ5yHao6sjnOx/GT8IM16zwd1Nd8FC0
-35sDyk1B4RiyJTCGHWPlZ7FTzWtOgxq5Xi0FFWvUjrGX/DEDV8u/J3/NXyd+Zuc8
-Fwm2ESpPM83rGtNv1RWLi85oa36aleV0fxdX2QIDAQABAoIEAGv5ltvmLQ/A93xc
-x0BWEINRkBa2jrfpo9B5dOnuikWtza/Cx+X2NfQHFlSrcHhfr/JX5BsCb2iVo8DM
-CXAgeX1VMHS9wQXuxciaHCZDnqxmxUNDU3EjsYQOKLusRcdL6M+Zuz/ny+7PQ0Qw
-/N0yS46Wa9oUjon3RKRvTeSV4HIpFpcP3n/eLjDc/ielWuujnTGcBnjNWegvQROp
-5/7221YElGh8U84kbK2l9DtfjwoGoTv11lPvOxXE/scg6em7r9j+y3p3TMzMeDtT
-YBC6CA4Oa7GrWLJXROOKOQ0ddtvFNlUsZ02vG2QCbqU2y8mwJrJDI80qNbeKGel3
-SfwkssedtGoOOYHxNczwpyVNHVHrHuMPBe75gbo+5pFxVJ5ymCGWfbLJf73oVsqW
-ZimoknvkozW4+mlVlcmo3X73IxTW2U4RlXthYdj9KXsBLRaKVCQJDc934eHWkXHU
-GF2U2NonqOVd8YG/FmZQ2ig6EcW97hC6wnsWT2Uc7UNAE2RM4bY0xCUHaQiKTrEs
-CI6wpbbTV+XhDu2HmL9G+fsuSIu0RoSOCmr5jQDAVwCNPXFgBgcIxbPZ/UCJ7RHj
-GrWPBldAN8ip4osiA+B3XwBabcvwXP2fgBP/eLWN1St3q3tw5xpHpqCuhNuPSqsc
-0ntz0oIdJyRR6fXWmRFex4kXQ597z5ozm0uyg8arV3HJFxDC3DI6kKfs86/oqMSW
-l+9g+d4x6VrUOCTDk0bjN3T8HQ9ASfy9JVacqk6yuXX7a0WeeT+x9JsvFAjg2KmG
-CJUtm5w5siItMDSPpcRE4hlfgh+M7ZKS3PFgH3vvwfPMbC/IC93QoSaFzRJMyobX
-ei6PNwqJvL+HADlMfLmehE2w9ycp4Fe1Gw/NW0Ed1S6Ajo45hgXQJSIrzla6eglg
-JPsPpQ8b+weZNQ8zvc0KvfRJmZKKEb9dHvFdi68I1kV8aapQsjrMOjwHC2pnCFh/
-axkVc7a59fKUs7L6nAJhCs2sSixTorZz5PvJ6mXhWu72TCzu+kThNnEORrlWPHQl
-RFEAFpDDaGSzOMlhb92CWUMPyZU2qtzMzv4QGbP5YqTy121hXuT5OBKCF3eNLihV
-aje16k0RMFqqW3Olbm7Mp2P1C6DuwzsUJBnNwB5JzhC79Po88zNAl2d1h+qysKU1
-jxF316nhpWJ2dGJ/sbJ+XpUMd/tVrNFQMA254GFfXycsfBoQOSY5d6GfRwKUDOou
-xImbIzGUAaIYdsGKDuKtqs5S21JMJjJ/J5CwjLu9tbpP/jsp22KHCpraHAQCupSp
-+SFwWI7tRUXzREuxJixfUOnJFQYOATnMFvvtk1d6v4xoPYCVEhHq8gHqJkTyTi3Y
-BPVwT1UCggIBAOEy5gThTrEqSVFUcFJm9bJxtWZt/YhOIJWNNxeaxExHzy5hPpsw
-fZXtN4MUCeMSWI4isgIujmltwgOHMjQqsJPISn/1gVrqLmrZ2PnFzko/WA8rMUfd
-EUnOOpj2bKpChlRGHi76ZV4XGgoTXyO6mrVUcUgf3reSImdcdQ5IHa7J+lWhCQGb
-neZIyDOk41LX1TxjcYkY7vuUgmbBYComXPm2UaY3HN4E/3ElXntj6PrlozL33A56
-z4UPfv2Vv9kl0ydkTJe/WcUN2htqLFCYygF2XLlwbv2SYDCT31PkJUORbScUM46A
-DOhlxvLBFcpF+l0RtCtvnrKyFy9yZJKrcLh9x6xVChZ/aQqSptSHjll5IEcVm54Y
-Z1TjWizCI4txnaBFV0UCLt1CZrllXnyIksZLS4/dVqUIKmkxPBQUpiD5dmgDcmPB
-/LdWzS6k4MH3J3Y3tu3MNPHDwgUtnifSZrsWSYPK0F8J0dMU/mLaS9eOplAH7Eo1
-t7OrrImvitM6tUdErRYilIaoS/6YPmsPST5gY1N4n8Lf4sAE/tY8fwaWRpTVSrIw
-CoFwLtHESUOhqfuAOdr1EkDfo/RQTUVdnmWZ+D0j3du8MmsMje4x3f2CjBDXqArl
-gNnBQELDmrdif8KELNjlEpTIz0T7wEfquhVQ2dzhFpL7RLAgggD+oEBLAoICAQDa
-5WOWrAtaI1cC5C7LFxM2qXTHGRttfAtVxuigJapLqNASJuu59GGRxsCVwhthbNFh
-aCMSj+fZK7QNFkaoPwuZCEtzy0ErkVZzxYp3cP6b99mzGoCcuqiHiW5qhEkbxwdC
-f3YEsSGqE6j8TPW8feiziqo8q+QPSudI9ngkH1gjgbIrTu9iaxKJcF2CwBxe5tfB
-uFBNPIgJAaLPejRKQu17MAV2jDnBDIsZUZnm53IxQ+giIYUBay3cfC1KMJu/AnZ/
-CxETjgqqnzqdFW0b0o49Q6YQa6QXAiSjs+lL/BhjbdA5quVdFmA3CoASFQbihYfM
-4vilUg7Y4wXfzS7DyBZdfppIn+HI8PPSMv/lfdsQXecl5TU1fBDPRWYPpTZqm1II
-HDCkmGRKet/j4/oobabNRrJ6PJcxNjqeMVv/a72pypDRPIXzNxLb1BkfWDGfgu2R
-YAdRNBSJSpdoHDZ+1VO2A+/8gz9Zuiv1WxoX7+u3pCAd+0vCfHiaXiFVc7fI8F+m
-rtDmN5p3DD9l1+/v7yd+7eUezwxYecElw5E5MyAJRTYGrim8g7XvF/u9rXvH09VP
-TeIE8oJ7XzrxCmtGIxlJs6FmgUbUblOyfPZDUqPnzlo8Ru1H2iKRo2FPiMfij8mh
-H3wgFTnZpGDQjw/xop51bxVueXrmOeguS0wmk/8Z6wKCAgEA0y+bPApadJRWS1nn
-N69sTBqMZfFR6Eh0ECts9criuTJCXZk+T+SqcTYTb+4T04k52Jk63Aby8HXIkuxv
-LTK3gu86xkLiOvMP8o43Bwz0BvbeSuNThLQQ6Wjn1NiLUSOvu0pCNgYFl7YMalR+
-TRBK0y/MSDny762wa8Pt1iXVCDxLcY/h1UstSW8JqDzCHcdgJhCPwWTLgMxleZ1w
-5DYzzM2oRjq67I49Sssjjo1ESD2fzUVZbY7IG11L1t1fG3F4UiGiHlCJC92Qo1Lv
-Geoezj5EeHay70Mcx5F0xsRWGcZAWXx9WO5GrI39g1uFZro3Lp5SmsVDSwrt6UXa
-gR0bSThTTw40tqJnTE34+6ff25JWrbLay+jQxm+q+fxZvwQeMNW2IHYKot4JXWVt
-tVWSZzjnNJP6FCvTMfDFCYPPw26OFr7cwCaEKx7QriRazitMK3XWK6zsHalZwudj
-wK50PpCJAnno7KdVySCP6v4ST6Rr3POBKJq1ml2tITWo96u/ooUJ2I83QAyFr8zw
-BBBCvKdBnl6pW+P/TdmhbiEvcmrs59gaA34/6+DbV0Y++piZwswd9XML2iCgLZY8
-0IcZ6uf4PsXq4Yzcrz0HwM+tAXcyiPzkjstpCUxMShALgFxzuWOgdwpjYXnrviJk
-0EyUkzbOCHBhbhcK9CyYHfyrJX8CggIAdWwgJC9eV5glkPN+9osGT4hPkI4zXGPy
-YK03FNGfrL59/37JbRNfU6fen3dk4LpTB4Gpbserg6AiEfMlLBPF0O3WK+OYrhpk
-2e3Z/YCr1Fb8fUt2Op0W0r4ycQlNfo0ho9ZkJNgwSuAJAm72U4rnTYjREYLT8DAq
-KcWtZRM7YLCuNvU9DPqLExcn0n/juDT1AIIy8XvLLamnAM15R2znn/F+vL00Lg7g
-f1B60pbNdwgKemSoyL4J+ADU+rtgkPJtRnFVU7walLSd6K4ZvZcRnmOvrZdQitcn
-eHmGaLBvFMdPr9+w8mKScnQ7h3eoHdOrqYkIAQcn18jQ2eFjeLrY5IaJlPPPVs+K
-u/OHuj/tR7ZXzMhL5skK62U6/qGNs1pmgts8bM8i3aFUgRdGlnFbzTpje5cNM+T3
-RO0NgNL3ByIW1Wc2I+YjQ7FfWKUi2YKOljGBO1pIue09kyevRBKDuVwbXMW7MhLg
-idm5AaY+OGDeqbaoSUgkGgrsrr5IlI39gZi9jwG85qe3Spavq3ILKdfL1N8UrFGD
-/xIN0TVPtilede7vjKTK79tZu8JYaDWGc+g/mo/M1wmawLrqGNGzOwoVRruKl2In
-m9PU9wBZ1HuphDQ4DRdC/AU8qkGhmDOx4bDWEQ/R3KKFHNvhnamyfyR7xqt79gyS
-NGNIElnJuskCggIARFaK6yAVmaL74Qu3iiELj8FU9Cw8kPP5HeWUfGxCjlegdH3R
-FBtoQlDcQjYzO2uZR94Itg3yk3Dt+xbf7KxUsODwlgLj1UhV4eOXUDTosBFTrbTG
-v9gnRVH0Eyu9tF+CMUcCXhq6tnIrQOVv1ozcdXfIpk9gvIbfh4rlo6X0iM8Xge2t
-Vo7awq05t4wJBkO1xUtOaw9HabaszK/CU1iNV7cIBmaFF3AEP/KVfOs+kjubc9AF
-mqC+LVVClvJPNzm1YA5JZlxmQ0u1xXFqZv0OMoibgY+gSzaiAQz3eKB6vEv4Xv4U
-kaF9nEUTEjowpTE6uX9X0mGkXXT2wXmlTjosZFnxRX5IIrRNug30plRra5CNYPGp
-3uTmD/D7Nzi1iYitJg3yhrTQmCWiJY3x4Z0xophLkio2nlJ9WoTKf1AwTIATY7fa
-pX9bxEKldYXrYZNFlbqBPFgA/36v+JDVfMf2E9yRMCt0LAJ0HUM6zP0ngMv+S1TP
-Pu6X0WXR9JeuoaF4uJSty/xwdpST/CkHflFLVsk5n3tNQfWGjqoTSOJMgL9NRY9e
-Pc/OshHZHeCVFUSXtcf1pfmmBtT6FHX0L4cgVqA5xO8RYapnLDAFLXq2/dRv3NwW
-W9CzZcZKh7jmJw4iSIY5IU1+ThgugWoxlkcmjs/egjBclL8BBfqRIwx/vOE=
+MIISKgIBAAKCBAEAyqodxBHskfDH/1+QkvxADF63PQDFINUPiTEH10FMi2CAqjgU
+3pNrnHSIQWi1AkEBLYaiepVTXntnL2weKVH5RP1KgL6yI6E+GzjPiMRx7vhrQcUt
+wMNSrFl9gTQZlTK4mlG2QTbUxKGuhOY4uei/lr4Zemt3TeDe5rO2a7w93Wi8S8Tr
+9TaT7VaiFVCKEOjWIu1ssc3DGMn2CuHeYWVi1hRBjLX7FGjBzxJdQSGdVxFDfbtD
+LCG7w0R9qM8fw3F1tUfCfc44PHNknhXYpyfPvUDIRQjjyDmoC47CW3vxR5ESkczh
+AOCUW70y5AyNw77MdjJSEmmwGOCwwnY0Wl952faBnQIKYWkcM85J+nYDHgdbJwu/
+NJ40lrgDm1A6ai8XehTPZWMAN1Koc85LFED00ppWVDO4dy5CW4/sHxj0rauKSo1t
+cCXzWOfLZlEUfRb0621WdnZRbtYd2tONwGRaZ06v4r8z0bj2KvxXh6c1XoDJrPyH
+yXEXkb+3TaPtPBsn9Gag+UYDJ8zqgPZLQPZBlM29CrPvJr7eb2muDz8cVWMzkJvt
+yloSTd5LBsKikrBCPTGvpBUSFfiK6YiNz/2FZlBvEfGfSPO1up2GaCSiXah8VEL6
+2LXF8t0OD9Bo5FR+xbmgm2Utd/SPuTAK1YZc7cl80dqdDWNQ7uUekmPMogzoSpYC
+TdyP33yPCBioMIjXr4mt/FdLEPnxy0jotjvIP/zC09FKEDwba2Tc5WUeW7LaseIk
+l4/uwEuOGIN8F6Y8RbNgBiPyLxgTnheKxnJ5jE0E853q4CXTM4weEUdjH6VFP72F
+s/6laO5ItwykyX9y0HVmm2r5oFDzqFlto904T3Aru/+SLnGr7+kA7Q3RtG/wjrIJ
++01hDdkQ1VQRzQOUhP2oaORFbh5qHi+FoW31tsDx7vc26f7C963ME0ZbiELwLR+1
+Dn61K+SNq7mHMGo9EvSt8xyszBpIKSqWe4AAC25Zh7+jynCZGxz9cj2y05RKz1V1
+vh9A7FU1SC1V8ADaPLBguhEyZlQLvgakXrfJWbtN9JIGJkhuwhLUfPAguKLhvGq2
+GQ43R1XJ8kkNlnWihGS/NPy+skHk9Yjr4bcmpeVBwiAM9uKoped2VKX7S4AFfRiF
+erq8t63AL2CFzBUSHC8KnvN8QM/0PiPSlcrQBlhS8ITYDz3r/xJolHmPvkApX5jI
+kGwFL5mMKmN4HyOxKcXnScmykg9TC9VxKBfCGb9gv3yHqKvB9ArBuNJo7sHOpxMT
+F20kXaI3ptd9SIsrdC1ALsoZ1bY+bEJx+s+Fh/negHOLifRw8NjX/0BBnMcVbZtu
+TLVSApl5MnPKJqCsMW/EsPXau8If4J9EuiX3nwIDAQABAoIEAElnTjqq502AsV+c
+hGfId4ZDdAjjU4LtyJ+/I4DihM/ilxeQEnb/XDWhu4w9WXpEgyGzJvxRQ43wElKJ
+zW7X4voK58Yzy5++EhmX/QsjY8TTMz3yJf0wgawtCZkXfsCcS2KRf/qk2nGRwf0e
+yaMEWwhFOEMv01lgvjs/Ei55Usrz2Wd0HqaFKxUGkNQ5hJhVTOH/rqPDzAsZc0VD
+w+Dw8NhrI8bMTvF4c+IFW8NwYmWbuh87CTxdx30VPJI82ttWJ/UN1bLtU08J2IKt
+lPgOIl8ArMjcTGxD/cqZ3Wl3Pc/XCqvGUiSYMwP7Rgh1R4+DdtjEpxdGMmMAVuVI
+HPQyqpa4gv+UMqBPish0yjSuM7jXnztINOvg9Vk1sxC5AT9eaRltmiS1s+lVxe+T
+43ulf0ccYXJD/WclWSGCwloNFuokPIV+Lgo1pKsp4XDgoxQfkXwH8Q4dEqebY9rT
+Tv9FGb1bMbdl22X1oSu2lBltBZaB/QnruV7L2GaQ0tqLKizgBRuvZFSE+DWdMb6d
+9mnEB8LWtca/nzogXb5qv4GEMUX4FUAmSf1FnGWZwwDi1DFfJ860RVKf0xokGGQ3
+cm3H/F4veds88Z1hsAu0bG8h/bEAim+Whvag995cFHDD4on41KXW8wX1on9VFA1W
+CkaGUPhLRytXDBVCSJkOYYFSJlb2wqONiWe4Tn5hsantCfliTj/GVkgDq2h7dAGR
+WyoqTntJAv/xJsUOV9WmGXnWNeZX8BSO3P5dnXnMzhCWQGoprXmWFyJ3TYCJ2+CO
+rzkZbtuKvTvGc3sDJgrSVmmg0BrOkH+GyYVlJdTDBmfzoORludDCFHECa8oK7NwY
+t3o0eNlG6IqTxl2HIoPneW9nXFQtCXv6tpJjljwjlz5WpJG+kBW6bDedcxZu7olZ
+fqtnyZTB2SjzzbGdQ4JvFup8MxNyPvYiqumQXJgkyXFVDl/UFhjWuGe04i8NBJgJ
+xORcjfgLrKH1XKVBWPJdh/2YeUKIIvQ9RB4WVqXgGmD/21tgv1bVEMYabh23e/HE
+Fe1U2XQPJKxGCEtG6b4zhFP+PeZACS+Vk5IVJYK9n4SepPBPgX/wbJLOcKGpsKjp
+yx5WjopMO6T+VUV8HIduuZ+E8+uAILHDmo2Bq+LHblaxd4SkM0+hL2H36imK5CUO
+5fLuvHW88LvFtQw6xhP20s+BnmgzE5ZvNG4Iedkjvwe9HmdNDew0UYT5vNJN0ehh
+OlraBC++JYwEclrBD9SRvprT63XKDG735pPvzLQi7WKDCBn1/JEgxDIO8nkMewOZ
+FU48Mdmkn9wqPeIigQciwl62fuAQCGRG+RXMQqra4A1apqMZQEauTK50VhHDGdbc
+ye9LHaECggIBAO9lAzoYS/Lu0ticMt24P8BSbGdxSNIpEyIlTTs+7A0UjpfXsoK9
+4EJWZ7lhgbQh+SCTS662SeC+s8M6bT+3mELxUC5S/N3aCPyfjcM3JaoACkI9+VMn
+9otJZjAEwH7cNpMN0Xa8fHCEma3l3XKiVxEJbuJC86S5mpkjeXVnDajAidBtevBd
+LWJ9n2yXk+ZKUyI0mjpqItwUxOgQ/MOIvqAu66xyjg08/I1QQTuIrReAA+oaVKhp
+c42Ufn26hUhNrQCBAtMAO3VC/chciet6vEMNEM13GqLp4+PcPhRX90gO4+bNrScD
+WgiW/jc24CGan8gAenBWC/3l/C6JUsMp+ZYmPozsa0zo6edgiO/f2KXe5nP87wZT
+MxaYJgnyXJxMefI79kUHPrhpXZxuiSCEWLhCBN34Lhpr2L491i2g/FJj9i6N3EzE
+N3ic5Q63o4QFusjqIm3taQQFoGP2Cgg9owz5WJ0uRz/gtOE3XQiQA7+ozoAXOlTw
+pJK5MMtVrEoOLIbVJIpxfDcKDp3yorR8QCQLHgDBmFeNCDmk+7YP33dRIc/AVNLF
+q7cecqEc7D8AkXX8Q53GfCEg+uqbdeMQXK4BUE9iwRK9RiFhas/RJe73+Iio3S0L
+ekLpnnOfvk744ws+JWsLpsfC/ZE7OxBLPtq2xvGl/RT2G7tCjmpX3CbPAoICAQDY
+uOEJks2T105EcMPJjzNHCCqjK6S7qZaWkF3KT1Z0Mu5oUZwwHamsMg4BQJ2mjMrL
+fRBKfXQLA6vgE7zysw3F300RDxE1RVow5+JLDQ4bqupp27/M0a8fuwksyOdKHqCV
+YHzuTCxbVIFZawTjfOxJVXDHKCFCilfY1LsA+V+oFe3Ej8YYxWXkXA9ZLigpmt3s
+Wu6eFcZgF3utzIGjI6eP6lL5bWp6Bh9Avp2xrOvpFwE2m02Y7/Zom6MT4DXvByY2
+KHHQLsasEMpeLuxQXjLeTocwcxBwBFKhX95yFuv31k00VydT+NExtaZeUYi9l19J
+WmM4GjFjAqa3uUwMNVv5JfWtKMyk4FOox2XftLvMiIhV95B8hAGxtYr3hPkGg80O
+AWPq6OKUD332COXRaHkmL5aQdN3gP5zh9+rH6icLrrZbrQidVRyDw03doRoGrH7i
+ixXLyYoW80PHgqUDPohd5bFkZpi2vwXMl1YQ2TfN9TvYFSGme9YCm9ZuypnqauW/
+aAf0FI1MNwS+XDREtzPdFi0me6WxpKL4a2Z3GGNxIFuBjQ/uydWpjxkny9qI3KAp
+SgjI3kBUDGq3gf0R+Xo/d4d/4asK9Nv2Fi0X+RfGqioFaTbQl/1zhNdvhP9IcwEJ
+DLVQ3UhMdfg285RarC2Sihui0M8Smi9od9Dj6rdWMQKCAgEAiQVRFoRnnDGz/wVQ
+W/Wkj6jdoUuG+btG10lwbhOyuj3k6+Yqp4iUfoPENKgpu/eiB1InhGWT3Y5ph7m+
+ZDTqco56bTlUwIqWkDmmw3CiHy6MsKOWPFFoXQry8VMW9sWGex7yoDp8I07SQ2WJ
+HZ7rpLW4gMr/d25AnZxfXaJRgCBMAT9YmZFLc88hW99aaPproO1oxTyQnVVJ6uYm
+NqjjKv4QKJEc21jn2N5xp+iv4f6Evw65G/fXitbOm5oRxXOoLNyqyCie35wrc+37
+hwumC97DmkasuUiUBoy9/5jl0ZmsOiPJEsZpVvdNpD7FhJZjE++qJPgrPvTPJbe1
+5jz1PUrAjJqZQ9kgYC2x01JVR4NQdlz0VrNyT2FgjFrrRQ7E0bAeYh4meRjd2rat
+yC3YNgabkI0HnlnSIfl0yIMXSPUsKDNMP6gjc+aheI4FioBZC7xvXmn/rKynw+9E
+iLj2xWtGnBir8VTlUu8EUe1UJ/Qv1cL1wT5HhC95TTjJN03rkHUYyCDyjvIzsZX6
+KMHhWIAAeUBVuO7hIVVcOTXWmw2WA7o7ErTPdy13QN40Hk9t8pEkBn9f9vpQg83d
+aMypr3LTC80jY11wcZS3tSEpzCCkYVv91FV4cioTZmytWbg9A+dbNWzi1f22ctTr
+FoVrAXaSYie2trOy5bjPmPCW8qMCggIBALQUKymBSkDmTqqf6I+65ajIKGWdBizJ
+Jc/F9aj9c6DqER+tcFKq0ym6DdkMj/KsWnXrXXYH+DyOuGpg/EfOcEtS2P6rvmi9
+T8wDYg1qs6ZZxp5fcmgGc7Wx/FWyOj1kZZq5qhV4RgM9nJ1oR4+fZdcpn6RcvAZG
+XehWG20byVgpoIAL11cN7zRpKne32rd3b5/NjyjcfxGpcaNgovej0L/MvVV0jV0H
+aUCrIu1X+k6cRu3Q7hF+kwkpCcCiNS6AikfGI4wQ0hR3fy/zXXkKTMpcBglEEwyB
+Cwf8WSID2d79uvka0hr8TRc5ERyeMzkWZp7U9EzRtufGdDGFTqN2Uw4bdKCFnkYC
+AIHl7ciMrN+vM1n7c5uDNMUtTGOPojy/l8tjbFrtWBgfJ1Mg4ZW3cbNBJ6Kw+Qw0
+z28USYoEDp2uduiGRvo0lpUF29Wk37Nb8bLcTygeNxgK2u8Up3iipT0gdt4uQgbX
+g0IVHfayB6SjeS57oJJto85XHz7AKlSWroD1OGagDSifLtneU7AlanryymGHrI6H
+dsNkuqeLJFYDxQVI6UxJebiCpyxiPxwp9wtX8SS3SEyOZL5GzLn6ypGiCH1CTpW0
+EHHSy3V4DUGOc4w7eMirAnbSkxCfOmBA70NNw/uFY2XlQHKow0T0fImfKIeJagbT
+B0GPDYvUpLKBAoICAQCzYnq8xupXK7lvTLaj936qGSe54OC2sj9+UpsFiPxglNY2
+sO5zKWKyY7+rjK6zG2ciGfPEDsZNIqKw1W/KBfR2kRLqkt4bC3fSCvUztx0vtGUe
+veXlqiwETdE7RJXoaGJrgJArYJvpOd8PtWGeM+sSJNNrUlGlJnSiZ0CcypqUZgZL
+WzGFfLOQYAXCykdB1iZkBqU2C5wktvCb9sVz6G3TmAwSKTENOWWZWmh+W0J4pZFV
+ZEyvsxViJRQbwxa0kC0F5J/UtWZknO79/ZFj1H4jiAR45EjWHE+UZAkFwG8BSl54
+EKOx7GDanuRILr0dtbyi4d31nCYXdjs3x2+1N3exw4oKQIvNuF54WoowbNPu0kEb
+G+7/kLwcJqRnSV4AiLuMz5aOte7JJSw5tzgZZlAQwJO7IDfrLqodivcXF5yirwiF
+dyBpzSDmupy/aTHnCpT+l0H96jRU2awxaeRHZUqZog8gMHsslNVZEFvUFDJ7AUN/
+yyfUzJYjH18pZt0hS7jNb1O7KxZCkWGMiEcxHkgF/UINab5qruNBVKOkJ5vqGhYi
+uNkgeGsQtXJcpqMRRiVXJE0kE+26gk+iaYnBJN9jnwy8OEAlYFUHsbCPObe/vPMQ
+3RLl+ZoKdFkN/gTiy70wUTRVw+tWk+iAZc7GPX1CqDFOqGZ2t+xdF8hpsMtEww==
-----END RSA PRIVATE KEY-----
=== modified file 'mysql-test/suite/binlog/r/binlog_index.result'
--- a/mysql-test/suite/binlog/r/binlog_index.result 2009-01-23 12:22:05 +0000
+++ b/mysql-test/suite/binlog/r/binlog_index.result 2009-12-16 19:52:56 +0000
@@ -1,3 +1,8 @@
+call mtr.add_suppression('Attempting backtrace');
+call mtr.add_suppression('MSYQL_BIN_LOG::purge_logs failed to process registered files that would be purged.');
+call mtr.add_suppression('MSYQL_BIN_LOG::open failed to sync the index file');
+call mtr.add_suppression('Turning logging off for the whole duration of the MySQL server process.');
+call mtr.add_suppression('MSYQL_BIN_LOG::purge_logs failed to clean registers before purging logs.');
flush logs;
flush logs;
flush logs;
@@ -21,7 +26,6 @@ flush logs;
*** must be a warning master-bin.000001 was not found ***
Warnings:
Warning 1612 Being purged log master-bin.000001 was not found
-Warning 1612 Being purged log master-bin.000001 was not found
*** must show one record, of the active binlog, left in the index file after PURGE ***
show binary logs;
Log_name File_size
@@ -34,7 +38,114 @@ purge binary logs TO 'master-bin.000002'
ERROR HY000: Fatal error during log purge
show warnings;
Level Code Message
-Error 1377 a problem with deleting master-bin.000001; consider examining correspondence of your binlog index file to the actual binlog files
+Warning 1377 a problem with deleting master-bin.000001; consider examining correspondence of your binlog index file to the actual binlog files
Error 1377 Fatal error during log purge
reset master;
+# crash_purge_before_update_index
+flush logs;
+SET SESSION debug="+d,crash_purge_before_update_index";
+purge binary logs TO 'master-bin.000002';
+ERROR HY000: Lost connection to MySQL server during query
+SET @index=LOAD_FILE('MYSQLTEST_VARDIR/mysqld.1/data//master-bin.index');
+SELECT @index;
+@index
+master-bin.000001
+master-bin.000002
+master-bin.000003
+
+# crash_purge_non_critical_after_update_index
+flush logs;
+SET SESSION debug="+d,crash_purge_non_critical_after_update_index";
+purge binary logs TO 'master-bin.000004';
+ERROR HY000: Lost connection to MySQL server during query
+SET @index=LOAD_FILE('MYSQLTEST_VARDIR/mysqld.1/data//master-bin.index');
+SELECT @index;
+@index
+master-bin.000004
+master-bin.000005
+
+# crash_purge_critical_after_update_index
+flush logs;
+SET SESSION debug="+d,crash_purge_critical_after_update_index";
+purge binary logs TO 'master-bin.000006';
+ERROR HY000: Lost connection to MySQL server during query
+SET @index=LOAD_FILE('MYSQLTEST_VARDIR/mysqld.1/data//master-bin.index');
+SELECT @index;
+@index
+master-bin.000006
+master-bin.000007
+
+# crash_create_non_critical_before_update_index
+SET SESSION debug="+d,crash_create_non_critical_before_update_index";
+flush logs;
+ERROR HY000: Lost connection to MySQL server during query
+SET @index=LOAD_FILE('MYSQLTEST_VARDIR/mysqld.1/data//master-bin.index');
+SELECT @index;
+@index
+master-bin.000006
+master-bin.000007
+master-bin.000008
+
+# crash_create_critical_before_update_index
+SET SESSION debug="+d,crash_create_critical_before_update_index";
+flush logs;
+ERROR HY000: Lost connection to MySQL server during query
+SET @index=LOAD_FILE('MYSQLTEST_VARDIR/mysqld.1/data//master-bin.index');
+SELECT @index;
+@index
+master-bin.000006
+master-bin.000007
+master-bin.000008
+master-bin.000009
+
+# crash_create_after_update_index
+SET SESSION debug="+d,crash_create_after_update_index";
+flush logs;
+ERROR HY000: Lost connection to MySQL server during query
+SET @index=LOAD_FILE('MYSQLTEST_VARDIR/mysqld.1/data//master-bin.index');
+SELECT @index;
+@index
+master-bin.000006
+master-bin.000007
+master-bin.000008
+master-bin.000009
+master-bin.000010
+master-bin.000011
+
+#
+# This should put the server in unsafe state and stop
+# accepting any command. If we inject a fault at this
+# point and continue the execution the server crashes.
+# Besides the flush command does not report an error.
+#
+# fault_injection_registering_index
+SET SESSION debug="+d,fault_injection_registering_index";
+flush logs;
+SET @index=LOAD_FILE('MYSQLTEST_VARDIR/mysqld.1/data//master-bin.index');
+SELECT @index;
+@index
+master-bin.000006
+master-bin.000007
+master-bin.000008
+master-bin.000009
+master-bin.000010
+master-bin.000011
+master-bin.000012
+
+# fault_injection_updating_index
+SET SESSION debug="+d,fault_injection_updating_index";
+flush logs;
+SET @index=LOAD_FILE('MYSQLTEST_VARDIR/mysqld.1/data//master-bin.index');
+SELECT @index;
+@index
+master-bin.000006
+master-bin.000007
+master-bin.000008
+master-bin.000009
+master-bin.000010
+master-bin.000011
+master-bin.000012
+master-bin.000013
+
+SET SESSION debug="";
End of tests
=== modified file 'mysql-test/suite/binlog/r/binlog_killed_simulate.result'
--- a/mysql-test/suite/binlog/r/binlog_killed_simulate.result 2009-09-28 12:41:10 +0000
+++ b/mysql-test/suite/binlog/r/binlog_killed_simulate.result 2009-12-06 01:11:32 +0000
@@ -19,7 +19,7 @@ ERROR 70100: Query execution was interru
show binlog events from <binlog_start>;
Log_name Pos Event_type Server_id End_log_pos Info
master-bin.000001 # Begin_load_query # # ;file_id=#;block_len=#
-master-bin.000001 # Execute_load_query # # use `test`; LOAD DATA INFILE '../../std_data/rpl_loaddata.dat' INTO TABLE `t2` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (a, b) ;file_id=#
+master-bin.000001 # Execute_load_query # # use `test`; LOAD DATA INFILE '../../std_data/rpl_loaddata.dat' INTO TABLE `t2` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (`a`, `b`) ;file_id=#
select
(@a:=load_file("MYSQLTEST_VARDIR/tmp/binlog_killed_bug27571.binlog"))
is not null;
=== modified file 'mysql-test/suite/binlog/r/binlog_row_mix_innodb_myisam.result'
--- a/mysql-test/suite/binlog/r/binlog_row_mix_innodb_myisam.result 2009-10-06 10:25:36 +0000
+++ b/mysql-test/suite/binlog/r/binlog_row_mix_innodb_myisam.result 2010-01-22 09:38:21 +0000
@@ -772,8 +772,11 @@ insert into t2 values (bug27417(2));
ERROR 23000: Duplicate entry '2' for key 'PRIMARY'
show binlog events from <binlog_start>;
Log_name Pos Event_type Server_id End_log_pos Info
-master-bin.000001 # Intvar # # INSERT_ID=3
-master-bin.000001 # Query # # use `test`; insert into t2 values (bug27417(2))
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Table_map # # table_id: # (test.t2)
+master-bin.000001 # Table_map # # table_id: # (test.t1)
+master-bin.000001 # Write_rows # # table_id: # flags: STMT_END_F
+master-bin.000001 # Query # # ROLLBACK
select count(*) from t1 /* must be 3 */;
count(*)
3
@@ -787,8 +790,11 @@ count(*)
2
show binlog events from <binlog_start>;
Log_name Pos Event_type Server_id End_log_pos Info
-master-bin.000001 # Intvar # # INSERT_ID=4
-master-bin.000001 # Query # # use `test`; delete from t2 where a=bug27417(3)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Table_map # # table_id: # (test.t2)
+master-bin.000001 # Table_map # # table_id: # (test.t1)
+master-bin.000001 # Write_rows # # table_id: # flags: STMT_END_F
+master-bin.000001 # Query # # COMMIT
select count(*) from t1 /* must be 5 */;
count(*)
5
@@ -810,8 +816,9 @@ ERROR 23000: Duplicate entry '1' for key
show binlog events from <binlog_start>;
Log_name Pos Event_type Server_id End_log_pos Info
master-bin.000001 # Query # # BEGIN
-master-bin.000001 # Intvar # # INSERT_ID=1
-master-bin.000001 # Query # # use `test`; insert into t2 values (bug27417(1))
+master-bin.000001 # Table_map # # table_id: # (test.t2)
+master-bin.000001 # Table_map # # table_id: # (test.t1)
+master-bin.000001 # Write_rows # # table_id: # flags: STMT_END_F
master-bin.000001 # Query # # ROLLBACK
select count(*) from t1 /* must be 1 */;
count(*)
@@ -825,8 +832,10 @@ ERROR 23000: Duplicate entry '2' for key
show binlog events from <binlog_start>;
Log_name Pos Event_type Server_id End_log_pos Info
master-bin.000001 # Query # # BEGIN
-master-bin.000001 # Intvar # # INSERT_ID=2
-master-bin.000001 # Query # # use `test`; insert into t2 select bug27417(1) union select bug27417(2)
+master-bin.000001 # Table_map # # table_id: # (test.t2)
+master-bin.000001 # Table_map # # table_id: # (test.t1)
+master-bin.000001 # Write_rows # # table_id: #
+master-bin.000001 # Write_rows # # table_id: # flags: STMT_END_F
master-bin.000001 # Query # # ROLLBACK
select count(*) from t1 /* must be 2 */;
count(*)
@@ -838,8 +847,13 @@ update t3 set b=b+bug27417(1);
ERROR 23000: Duplicate entry '4' for key 'b'
show binlog events from <binlog_start>;
Log_name Pos Event_type Server_id End_log_pos Info
-master-bin.000001 # Intvar # # INSERT_ID=4
-master-bin.000001 # Query # # use `test`; update t3 set b=b+bug27417(1)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Table_map # # table_id: # (test.t3)
+master-bin.000001 # Table_map # # table_id: # (test.t1)
+master-bin.000001 # Write_rows # # table_id: #
+master-bin.000001 # Update_rows # # table_id: #
+master-bin.000001 # Write_rows # # table_id: # flags: STMT_END_F
+master-bin.000001 # Query # # ROLLBACK
select count(*) from t1 /* must be 2 */;
count(*)
2
@@ -853,8 +867,9 @@ ERROR 23000: Duplicate entry '2' for key
show binlog events from <binlog_start>;
Log_name Pos Event_type Server_id End_log_pos Info
master-bin.000001 # Query # # BEGIN
-master-bin.000001 # Intvar # # INSERT_ID=6
-master-bin.000001 # Query # # use `test`; UPDATE t4,t3 SET t4.a=t3.a + bug27417(1) /* top level non-ta table */
+master-bin.000001 # Table_map # # table_id: # (test.t4)
+master-bin.000001 # Table_map # # table_id: # (test.t1)
+master-bin.000001 # Write_rows # # table_id: # flags: STMT_END_F
master-bin.000001 # Query # # ROLLBACK
select count(*) from t1 /* must be 4 */;
count(*)
@@ -869,7 +884,7 @@ UPDATE t3,t4 SET t3.a=t4.a + bug27417(1)
ERROR 23000: Duplicate entry '2' for key 'PRIMARY'
select count(*) from t1 /* must be 1 */;
count(*)
-1
+2
drop table t4;
delete from t1;
delete from t2;
@@ -884,8 +899,10 @@ ERROR 23000: Duplicate entry '1' for key
show binlog events from <binlog_start>;
Log_name Pos Event_type Server_id End_log_pos Info
master-bin.000001 # Query # # BEGIN
-master-bin.000001 # Intvar # # INSERT_ID=9
-master-bin.000001 # Query # # use `test`; delete from t2
+master-bin.000001 # Table_map # # table_id: # (test.t2)
+master-bin.000001 # Table_map # # table_id: # (test.t3)
+master-bin.000001 # Table_map # # table_id: # (test.t1)
+master-bin.000001 # Write_rows # # table_id: # flags: STMT_END_F
master-bin.000001 # Query # # ROLLBACK
select count(*) from t1 /* must be 1 */;
count(*)
@@ -904,7 +921,11 @@ ERROR 23000: Duplicate entry '1' for key
show binlog events from <binlog_start>;
Log_name Pos Event_type Server_id End_log_pos Info
master-bin.000001 # Query # # BEGIN
-master-bin.000001 # Query # # use `test`; delete t2.* from t2,t5 where t2.a=t5.a + 1
+master-bin.000001 # Table_map # # table_id: # (test.t2)
+master-bin.000001 # Table_map # # table_id: # (test.t1)
+master-bin.000001 # Delete_rows # # table_id: #
+master-bin.000001 # Write_rows # # table_id: #
+master-bin.000001 # Delete_rows # # table_id: # flags: STMT_END_F
master-bin.000001 # Query # # ROLLBACK
select count(*) from t1 /* must be 1 */;
count(*)
@@ -924,12 +945,11 @@ count(*)
show binlog events from <binlog_start>;
Log_name Pos Event_type Server_id End_log_pos Info
master-bin.000001 # Query # # BEGIN
-master-bin.000001 # Intvar # # INSERT_ID=10
-master-bin.000001 # User var # # @`b`=_latin1 0x3135 COLLATE latin1_swedish_ci
-master-bin.000001 # Begin_load_query # # ;file_id=#;block_len=#
-master-bin.000001 # Intvar # # INSERT_ID=10
-master-bin.000001 # User var # # @`b`=_latin1 0x3135 COLLATE latin1_swedish_ci
-master-bin.000001 # Execute_load_query # # use `test`; LOAD DATA INFILE '../../std_data/rpl_loaddata.dat' INTO TABLE `t4` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (a, @b) SET b=((@b) + `bug27417`(2)) ;file_id=#
+master-bin.000001 # Table_map # # table_id: # (test.t4)
+master-bin.000001 # Table_map # # table_id: # (test.t1)
+master-bin.000001 # Write_rows # # table_id: #
+master-bin.000001 # Write_rows # # table_id: #
+master-bin.000001 # Write_rows # # table_id: # flags: STMT_END_F
master-bin.000001 # Query # # ROLLBACK
drop trigger trg_del_t2;
drop table t1,t2,t3,t4,t5;
=== modified file 'mysql-test/suite/binlog/r/binlog_stm_blackhole.result'
--- a/mysql-test/suite/binlog/r/binlog_stm_blackhole.result 2009-09-28 12:41:10 +0000
+++ b/mysql-test/suite/binlog/r/binlog_stm_blackhole.result 2009-12-06 01:11:32 +0000
@@ -127,7 +127,7 @@ master-bin.000001 # Query # # COMMIT
master-bin.000001 # Query # # use `test`; create table t2 (a varchar(200)) engine=blackhole
master-bin.000001 # Query # # BEGIN
master-bin.000001 # Begin_load_query # # ;file_id=#;block_len=581
-master-bin.000001 # Execute_load_query # # use `test`; LOAD DATA INFILE '../../std_data/words.dat' INTO TABLE `t2` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (a) ;file_id=#
+master-bin.000001 # Execute_load_query # # use `test`; LOAD DATA INFILE '../../std_data/words.dat' INTO TABLE `t2` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (`a`) ;file_id=#
master-bin.000001 # Query # # COMMIT
master-bin.000001 # Query # # use `test`; alter table t1 add b int
master-bin.000001 # Query # # use `test`; alter table t1 drop b
=== modified file 'mysql-test/suite/binlog/r/binlog_stm_mix_innodb_myisam.result'
--- a/mysql-test/suite/binlog/r/binlog_stm_mix_innodb_myisam.result 2009-11-18 14:50:31 +0000
+++ b/mysql-test/suite/binlog/r/binlog_stm_mix_innodb_myisam.result 2009-12-06 01:11:32 +0000
@@ -628,7 +628,7 @@ master-bin.000001 # Query # # BEGIN
master-bin.000001 # Intvar # # INSERT_ID=10
master-bin.000001 # Begin_load_query # # ;file_id=#;block_len=#
master-bin.000001 # Intvar # # INSERT_ID=10
-master-bin.000001 # Execute_load_query # # use `test`; LOAD DATA INFILE '../../std_data/rpl_loaddata.dat' INTO TABLE `t4` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (a, @b) SET b=((@b) + `bug27417`(2)) ;file_id=#
+master-bin.000001 # Execute_load_query # # use `test`; LOAD DATA INFILE '../../std_data/rpl_loaddata.dat' INTO TABLE `t4` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (`a`, @b) SET `b`=((@b) + `bug27417`(2)) ;file_id=#
master-bin.000001 # Query # # ROLLBACK
/* the output must denote there is the query */;
drop trigger trg_del_t2;
@@ -866,7 +866,7 @@ master-bin.000001 # User var # # @`b`=_l
master-bin.000001 # Begin_load_query # # ;file_id=#;block_len=#
master-bin.000001 # Intvar # # INSERT_ID=10
master-bin.000001 # User var # # @`b`=_latin1 0x3135 COLLATE latin1_swedish_ci
-master-bin.000001 # Execute_load_query # # use `test`; LOAD DATA INFILE '../../std_data/rpl_loaddata.dat' INTO TABLE `t4` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (a, @b) SET b=((@b) + `bug27417`(2)) ;file_id=#
+master-bin.000001 # Execute_load_query # # use `test`; LOAD DATA INFILE '../../std_data/rpl_loaddata.dat' INTO TABLE `t4` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (`a`, @b) SET `b`=((@b) + `bug27417`(2)) ;file_id=#
master-bin.000001 # Query # # ROLLBACK
drop trigger trg_del_t2;
drop table t1,t2,t3,t4,t5;
=== modified file 'mysql-test/suite/binlog/r/binlog_unsafe.result'
--- a/mysql-test/suite/binlog/r/binlog_unsafe.result 2010-01-19 10:36:52 +0000
+++ b/mysql-test/suite/binlog/r/binlog_unsafe.result 2010-03-04 08:03:07 +0000
@@ -379,6 +379,9 @@ Note 1592 Statement may not be safe to l
INSERT INTO t1 VALUES (VERSION());
Warnings:
Note 1592 Statement may not be safe to log in statement format.
+INSERT INTO t1 VALUES (RAND());
+Warnings:
+Note 1592 Statement may not be safe to log in statement format.
DELETE FROM t1;
SET TIME_ZONE= '+03:00';
SET TIMESTAMP=1000000;
=== added file 'mysql-test/suite/binlog/r/binlog_write_error.result'
--- a/mysql-test/suite/binlog/r/binlog_write_error.result 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/binlog/r/binlog_write_error.result 2010-01-24 07:03:23 +0000
@@ -0,0 +1,108 @@
+#
+# Initialization
+#
+DROP TABLE IF EXISTS t1, t2;
+DROP FUNCTION IF EXISTS f1;
+DROP FUNCTION IF EXISTS f2;
+DROP PROCEDURE IF EXISTS p1;
+DROP PROCEDURE IF EXISTS p2;
+DROP TRIGGER IF EXISTS tr1;
+DROP TRIGGER IF EXISTS tr2;
+DROP VIEW IF EXISTS v1, v2;
+#
+# Test injecting binlog write error when executing queries
+#
+SET GLOBAL debug='d,injecting_fault_writing';
+CREATE TABLE t1 (a INT);
+CREATE TABLE t1 (a INT);
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug='';
+INSERT INTO t1 VALUES (1),(2),(3);
+SET GLOBAL debug='d,injecting_fault_writing';
+INSERT INTO t1 VALUES (4),(5),(6);
+INSERT INTO t1 VALUES (4),(5),(6);
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug='';
+SET GLOBAL debug='d,injecting_fault_writing';
+UPDATE t1 set a=a+1;
+UPDATE t1 set a=a+1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug='';
+SET GLOBAL debug='d,injecting_fault_writing';
+DELETE FROM t1;
+DELETE FROM t1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug='';
+SET GLOBAL debug='d,injecting_fault_writing';
+CREATE TRIGGER tr1 AFTER INSERT ON t1 FOR EACH ROW INSERT INTO t1 VALUES (new.a + 100);
+CREATE TRIGGER tr1 AFTER INSERT ON t1 FOR EACH ROW INSERT INTO t1 VALUES (new.a + 100);
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug='';
+SET GLOBAL debug='d,injecting_fault_writing';
+DROP TRIGGER tr1;
+DROP TRIGGER tr1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug='';
+SET GLOBAL debug='d,injecting_fault_writing';
+ALTER TABLE t1 ADD (b INT);
+ALTER TABLE t1 ADD (b INT);
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug='';
+SET GLOBAL debug='d,injecting_fault_writing';
+CREATE VIEW v1 AS SELECT a FROM t1;
+CREATE VIEW v1 AS SELECT a FROM t1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug='';
+SET GLOBAL debug='d,injecting_fault_writing';
+DROP VIEW v1;
+DROP VIEW v1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug='';
+SET GLOBAL debug='d,injecting_fault_writing';
+CREATE PROCEDURE p1(OUT rows INT) SELECT count(*) INTO rows FROM t1;
+CREATE PROCEDURE p1(OUT rows INT) SELECT count(*) INTO rows FROM t1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug='';
+SET GLOBAL debug='d,injecting_fault_writing';
+DROP PROCEDURE p1;
+DROP PROCEDURE p1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug='';
+SET GLOBAL debug='d,injecting_fault_writing';
+DROP TABLE t1;
+DROP TABLE t1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug='';
+SET GLOBAL debug='d,injecting_fault_writing';
+CREATE FUNCTION f1() RETURNS INT return 1;
+CREATE FUNCTION f1() RETURNS INT return 1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug='';
+SET GLOBAL debug='d,injecting_fault_writing';
+DROP FUNCTION f1;
+DROP FUNCTION f1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug='';
+SET GLOBAL debug='d,injecting_fault_writing';
+CREATE USER user1;
+CREATE USER user1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug='';
+SET GLOBAL debug='d,injecting_fault_writing';
+REVOKE ALL PRIVILEGES, GRANT OPTION FROM user1;
+REVOKE ALL PRIVILEGES, GRANT OPTION FROM user1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug='';
+SET GLOBAL debug='d,injecting_fault_writing';
+DROP USER user1;
+DROP USER user1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug='';
+#
+# Cleanup
+#
+DROP TABLE IF EXISTS t1, t2;
+DROP FUNCTION IF EXISTS f1;
+DROP PROCEDURE IF EXISTS p1;
+DROP TRIGGER IF EXISTS tr1;
+DROP VIEW IF EXISTS v1, v2;
=== modified file 'mysql-test/suite/binlog/t/binlog_index.test'
--- a/mysql-test/suite/binlog/t/binlog_index.test 2008-04-05 11:09:53 +0000
+++ b/mysql-test/suite/binlog/t/binlog_index.test 2009-12-08 16:03:19 +0000
@@ -3,6 +3,18 @@
#
source include/have_log_bin.inc;
source include/not_embedded.inc;
+# Don't test this under valgrind, memory leaks will occur
+--source include/not_valgrind.inc
+source include/have_debug.inc;
+call mtr.add_suppression('Attempting backtrace');
+call mtr.add_suppression('MSYQL_BIN_LOG::purge_logs failed to process registered files that would be purged.');
+call mtr.add_suppression('MSYQL_BIN_LOG::open failed to sync the index file');
+call mtr.add_suppression('Turning logging off for the whole duration of the MySQL server process.');
+call mtr.add_suppression('MSYQL_BIN_LOG::purge_logs failed to clean registers before purging logs.');
+let $old=`select @@debug`;
+
+let $MYSQLD_DATADIR= `select @@datadir`;
+let $INDEX=$MYSQLD_DATADIR/master-bin.index;
#
# testing purge binary logs TO
@@ -13,7 +25,6 @@ flush logs;
flush logs;
source include/show_binary_logs.inc;
-let $MYSQLD_DATADIR= `select @@datadir`;
remove_file $MYSQLD_DATADIR/master-bin.000001;
# there must be a warning with file names
@@ -66,4 +77,159 @@ rmdir $MYSQLD_DATADIR/master-bin.000001;
--disable_warnings
reset master;
--enable_warnings
+
+--echo # crash_purge_before_update_index
+flush logs;
+
+--exec echo "restart" > $MYSQLTEST_VARDIR/tmp/mysqld.1.expect
+SET SESSION debug="+d,crash_purge_before_update_index";
+--error 2013
+purge binary logs TO 'master-bin.000002';
+
+--enable_reconnect
+--source include/wait_until_connected_again.inc
+
+file_exists $MYSQLD_DATADIR/master-bin.000001;
+file_exists $MYSQLD_DATADIR/master-bin.000002;
+file_exists $MYSQLD_DATADIR/master-bin.000003;
+--chmod 0644 $INDEX
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval SET @index=LOAD_FILE('$index')
+-- replace_regex /\.[\\\/]master/master/
+SELECT @index;
+
+--echo # crash_purge_non_critical_after_update_index
+flush logs;
+
+--exec echo "restart" > $MYSQLTEST_VARDIR/tmp/mysqld.1.expect
+SET SESSION debug="+d,crash_purge_non_critical_after_update_index";
+--error 2013
+purge binary logs TO 'master-bin.000004';
+
+--enable_reconnect
+--source include/wait_until_connected_again.inc
+
+--error 1
+file_exists $MYSQLD_DATADIR/master-bin.000001;
+--error 1
+file_exists $MYSQLD_DATADIR/master-bin.000002;
+--error 1
+file_exists $MYSQLD_DATADIR/master-bin.000003;
+--chmod 0644 $INDEX
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval SET @index=LOAD_FILE('$index')
+-- replace_regex /\.[\\\/]master/master/
+SELECT @index;
+
+--echo # crash_purge_critical_after_update_index
+flush logs;
+
+--exec echo "restart" > $MYSQLTEST_VARDIR/tmp/mysqld.1.expect
+SET SESSION debug="+d,crash_purge_critical_after_update_index";
+--error 2013
+purge binary logs TO 'master-bin.000006';
+
+--enable_reconnect
+--source include/wait_until_connected_again.inc
+
+--error 1
+file_exists $MYSQLD_DATADIR/master-bin.000004;
+--error 1
+file_exists $MYSQLD_DATADIR/master-bin.000005;
+file_exists $MYSQLD_DATADIR/master-bin.000006;
+file_exists $MYSQLD_DATADIR/master-bin.000007;
+--error 1
+file_exists $MYSQLD_DATADIR/master-bin.000008;
+--chmod 0644 $INDEX
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval SET @index=LOAD_FILE('$index')
+-- replace_regex /\.[\\\/]master/master/
+SELECT @index;
+
+--echo # crash_create_non_critical_before_update_index
+--exec echo "restart" > $MYSQLTEST_VARDIR/tmp/mysqld.1.expect
+SET SESSION debug="+d,crash_create_non_critical_before_update_index";
+--error 2013
+flush logs;
+
+--enable_reconnect
+--source include/wait_until_connected_again.inc
+
+file_exists $MYSQLD_DATADIR/master-bin.000008;
+--error 1
+file_exists $MYSQLD_DATADIR/master-bin.000009;
+--chmod 0644 $INDEX
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval SET @index=LOAD_FILE('$index')
+-- replace_regex /\.[\\\/]master/master/
+SELECT @index;
+
+--echo # crash_create_critical_before_update_index
+--exec echo "restart" > $MYSQLTEST_VARDIR/tmp/mysqld.1.expect
+SET SESSION debug="+d,crash_create_critical_before_update_index";
+--error 2013
+flush logs;
+
+--enable_reconnect
+--source include/wait_until_connected_again.inc
+
+file_exists $MYSQLD_DATADIR/master-bin.000009;
+--error 1
+file_exists $MYSQLD_DATADIR/master-bin.000010;
+--error 1
+file_exists $MYSQLD_DATADIR/master-bin.000011;
+--chmod 0644 $INDEX
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval SET @index=LOAD_FILE('$index')
+-- replace_regex /\.[\\\/]master/master/
+SELECT @index;
+
+--echo # crash_create_after_update_index
+--exec echo "restart" > $MYSQLTEST_VARDIR/tmp/mysqld.1.expect
+SET SESSION debug="+d,crash_create_after_update_index";
+--error 2013
+flush logs;
+
+--enable_reconnect
+--source include/wait_until_connected_again.inc
+
+file_exists $MYSQLD_DATADIR/master-bin.000010;
+file_exists $MYSQLD_DATADIR/master-bin.000011;
+--chmod 0644 $INDEX
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval SET @index=LOAD_FILE('$index')
+-- replace_regex /\.[\\\/]master/master/
+SELECT @index;
+
+--echo #
+--echo # This should put the server in unsafe state and stop
+--echo # accepting any command. If we inject a fault at this
+--echo # point and continue the execution the server crashes.
+--echo # Besides the flush command does not report an error.
+--echo #
+
+--echo # fault_injection_registering_index
+SET SESSION debug="+d,fault_injection_registering_index";
+flush logs;
+--source include/restart_mysqld.inc
+
+--chmod 0644 $INDEX
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval SET @index=LOAD_FILE('$index')
+-- replace_regex /\.[\\\/]master/master/
+SELECT @index;
+
+--echo # fault_injection_updating_index
+SET SESSION debug="+d,fault_injection_updating_index";
+flush logs;
+--source include/restart_mysqld.inc
+
+--chmod 0644 $INDEX
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval SET @index=LOAD_FILE('$index')
+-- replace_regex /\.[\\\/]master/master/
+SELECT @index;
+
+eval SET SESSION debug="$old";
+
--echo End of tests
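
The crash and fault-injection blocks added to binlog_index.test above all repeat the same mtr pattern: arm a DBUG crash point, run the statement that is expected to take the server down, wait for the automatic restart, then dump master-bin.index to see which binlog files survived. A minimal sketch of that pattern follows; the crash point name is hypothetical and only stands in for the real points exercised above:

# Sketch of the crash/restart pattern (crash point name is hypothetical).
--exec echo "restart" > $MYSQLTEST_VARDIR/tmp/mysqld.1.expect
SET SESSION debug="+d,crash_somewhere_in_purge";
# Error 2013 = "Lost connection to MySQL server" once the crash point fires.
--error 2013
purge binary logs TO 'master-bin.000002';
# Wait for the restarted server, then inspect the binlog index file.
--enable_reconnect
--source include/wait_until_connected_again.inc
--chmod 0644 $INDEX
--eval SET @index=LOAD_FILE('$INDEX')
SELECT @index;
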
=== modified file 'mysql-test/suite/binlog/t/binlog_unsafe.test'
--- a/mysql-test/suite/binlog/t/binlog_unsafe.test 2010-01-19 10:36:52 +0000
+++ b/mysql-test/suite/binlog/t/binlog_unsafe.test 2010-03-04 08:03:07 +0000
@@ -47,6 +47,8 @@
# BUG#34768: nondeterministic INSERT using LIMIT logged in stmt mode if binlog_format=mixed
# BUG#41980, SBL, INSERT .. SELECT .. LIMIT = ERROR, even when @@SQL_LOG_BIN is 0
# BUG#42640: mysqld crashes when unsafe statements are executed (STRICT_TRANS_TABLES mode)
+# BUG#47995: Mark user functions as unsafe
+# BUG#49222: Mark RAND() unsafe
#
# ==== Related test cases ====
#
@@ -391,6 +393,7 @@ SET @@SESSION.SQL_MODE = @save_sql_mode;
#
# BUG#47995: Mark user functions as unsafe
+# BUG#49222: Mark RAND() unsafe
#
# Test that the system functions that are supposed to be marked unsafe
# generate a warning. Each INSERT statement below should generate a
@@ -400,27 +403,28 @@ SET @@SESSION.SQL_MODE = @save_sql_mode;
CREATE TABLE t1 (a VARCHAR(1000));
INSERT INTO t1 VALUES (CURRENT_USER()); #marked unsafe before BUG#47995
INSERT INTO t1 VALUES (FOUND_ROWS()); #marked unsafe before BUG#47995
-INSERT INTO t1 VALUES (GET_LOCK('tmp', 1));
-INSERT INTO t1 VALUES (IS_FREE_LOCK('tmp'));
-INSERT INTO t1 VALUES (IS_USED_LOCK('tmp'));
-INSERT INTO t1 VALUES (LOAD_FILE('../../std_data/words2.dat')); #marked unsafe before BUG#47995
+INSERT INTO t1 VALUES (GET_LOCK('tmp', 1)); #marked unsafe in BUG#47995
+INSERT INTO t1 VALUES (IS_FREE_LOCK('tmp')); #marked unsafe in BUG#47995
+INSERT INTO t1 VALUES (IS_USED_LOCK('tmp')); #marked unsafe in BUG#47995
+INSERT INTO t1 VALUES (LOAD_FILE('../../std_data/words2.dat')); #marked unsafe in BUG#39701
INSERT INTO t1 VALUES (MASTER_POS_WAIT('dummy arg', 4711, 1));
-INSERT INTO t1 VALUES (RELEASE_LOCK('tmp'));
+INSERT INTO t1 VALUES (RELEASE_LOCK('tmp')); #marked unsafe in BUG#47995
INSERT INTO t1 VALUES (ROW_COUNT()); #marked unsafe before BUG#47995
INSERT INTO t1 VALUES (SESSION_USER()); #marked unsafe before BUG#47995
-INSERT INTO t1 VALUES (SLEEP(1));
-INSERT INTO t1 VALUES (SYSDATE());
+INSERT INTO t1 VALUES (SLEEP(1)); #marked unsafe in BUG#47995
+INSERT INTO t1 VALUES (SYSDATE()); #marked unsafe in BUG#47995
INSERT INTO t1 VALUES (SYSTEM_USER()); #marked unsafe before BUG#47995
INSERT INTO t1 VALUES (USER()); #marked unsafe before BUG#47995
INSERT INTO t1 VALUES (UUID()); #marked unsafe before BUG#47995
INSERT INTO t1 VALUES (UUID_SHORT()); #marked unsafe before BUG#47995
-INSERT INTO t1 VALUES (VERSION());
+INSERT INTO t1 VALUES (VERSION()); #marked unsafe in BUG#47995
+INSERT INTO t1 VALUES (RAND()); #marked unsafe in BUG#49222
DELETE FROM t1;
# Since we replicate the TIMESTAMP variable, functions affected by the
# TIMESTAMP variable are safe to replicate. So we check that the
-# following following functions depend on the TIMESTAMP variable and
-# don't generate a warning.
+# following functions that depend on the TIMESTAMP variable
+# are not unsafe and don't generate a warning.
SET TIME_ZONE= '+03:00';
SET TIMESTAMP=1000000;
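
The new entries above mark RAND() and several session-dependent functions as unsafe for statement-based logging, so each INSERT now expects warning 1592. A quick way to reproduce the warning interactively, assuming the server runs with log-bin enabled and statement-based binlogging (a sketch, not part of the patch):

SET SESSION binlog_format = 'STATEMENT';   # needs SUPER; assumes log-bin is on
CREATE TABLE t_demo (a VARCHAR(64));
INSERT INTO t_demo VALUES (RAND());        # expect: Note 1592 Statement may not be safe to log in statement format.
SHOW WARNINGS;
DROP TABLE t_demo;
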
=== added file 'mysql-test/suite/binlog/t/binlog_write_error.test'
--- a/mysql-test/suite/binlog/t/binlog_write_error.test 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/binlog/t/binlog_write_error.test 2010-01-24 07:03:23 +0000
@@ -0,0 +1,101 @@
+#
+# === Name ===
+#
+# binlog_write_error.test
+#
+# === Description ===
+#
+# This test case checks whether an error when writing to the binlog file is
+# properly reported and handled while executing statements.
+#
+# === Related Bugs ===
+#
+# BUG#37148
+#
+
+source include/have_log_bin.inc;
+source include/have_debug.inc;
+
+--echo #
+--echo # Initialization
+--echo #
+
+disable_warnings;
+DROP TABLE IF EXISTS t1, t2;
+DROP FUNCTION IF EXISTS f1;
+DROP FUNCTION IF EXISTS f2;
+DROP PROCEDURE IF EXISTS p1;
+DROP PROCEDURE IF EXISTS p2;
+DROP TRIGGER IF EXISTS tr1;
+DROP TRIGGER IF EXISTS tr2;
+DROP VIEW IF EXISTS v1, v2;
+enable_warnings;
+
+--echo #
+--echo # Test injecting binlog write error when executing queries
+--echo #
+
+let $query= CREATE TABLE t1 (a INT);
+source include/binlog_inject_error.inc;
+
+INSERT INTO t1 VALUES (1),(2),(3);
+
+let $query= INSERT INTO t1 VALUES (4),(5),(6);
+source include/binlog_inject_error.inc;
+
+let $query= UPDATE t1 set a=a+1;
+source include/binlog_inject_error.inc;
+
+let $query= DELETE FROM t1;
+source include/binlog_inject_error.inc;
+
+let $query= CREATE TRIGGER tr1 AFTER INSERT ON t1 FOR EACH ROW INSERT INTO t1 VALUES (new.a + 100);
+source include/binlog_inject_error.inc;
+
+let $query= DROP TRIGGER tr1;
+source include/binlog_inject_error.inc;
+
+let $query= ALTER TABLE t1 ADD (b INT);
+source include/binlog_inject_error.inc;
+
+let $query= CREATE VIEW v1 AS SELECT a FROM t1;
+source include/binlog_inject_error.inc;
+
+let $query= DROP VIEW v1;
+source include/binlog_inject_error.inc;
+
+let $query= CREATE PROCEDURE p1(OUT rows INT) SELECT count(*) INTO rows FROM t1;
+source include/binlog_inject_error.inc;
+
+let $query= DROP PROCEDURE p1;
+source include/binlog_inject_error.inc;
+
+let $query= DROP TABLE t1;
+source include/binlog_inject_error.inc;
+
+let $query= CREATE FUNCTION f1() RETURNS INT return 1;
+source include/binlog_inject_error.inc;
+
+let $query= DROP FUNCTION f1;
+source include/binlog_inject_error.inc;
+
+let $query= CREATE USER user1;
+source include/binlog_inject_error.inc;
+
+let $query= REVOKE ALL PRIVILEGES, GRANT OPTION FROM user1;
+source include/binlog_inject_error.inc;
+
+let $query= DROP USER user1;
+source include/binlog_inject_error.inc;
+
+--echo #
+--echo # Cleanup
+--echo #
+
+disable_warnings;
+DROP TABLE IF EXISTS t1, t2;
+DROP FUNCTION IF EXISTS f1;
+DROP PROCEDURE IF EXISTS p1;
+DROP TRIGGER IF EXISTS tr1;
+DROP VIEW IF EXISTS v1, v2;
+enable_warnings;
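
Each statement in binlog_write_error.test is driven through include/binlog_inject_error.inc, which is not part of this diff. Judging from the .result file added above, the include arms the injecting_fault_writing debug point, echoes and re-runs $query expecting the binlog write error, and then disarms the point; roughly the following sketch (the actual contents of the include are an assumption here):

# Assumed shape of include/binlog_inject_error.inc (not shown in this patch).
SET GLOBAL debug='d,injecting_fault_writing';
--echo $query
--replace_regex /\(errno: [0-9]+\)/(errno: #)/
--error ER_ERROR_ON_WRITE
--eval $query
SET GLOBAL debug='';
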
=== added file 'mysql-test/suite/ibmdb2i/r/ibmdb2i_bug_49329.result'
--- a/mysql-test/suite/ibmdb2i/r/ibmdb2i_bug_49329.result 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/ibmdb2i/r/ibmdb2i_bug_49329.result 2009-12-11 07:01:16 +0000
@@ -0,0 +1,9 @@
+create table ABC (i int) engine=ibmdb2i;
+insert into ABC values(1);
+create table abc (i int) engine=ibmdb2i;
+insert into abc values (2);
+select * from ABC;
+i
+1
+drop table ABC;
+drop table abc;
=== added file 'mysql-test/suite/ibmdb2i/t/ibmdb2i_bug_49329.test'
--- a/mysql-test/suite/ibmdb2i/t/ibmdb2i_bug_49329.test 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/ibmdb2i/t/ibmdb2i_bug_49329.test 2009-12-11 07:01:16 +0000
@@ -0,0 +1,10 @@
+source suite/ibmdb2i/include/have_ibmdb2i.inc;
+source include/have_case_sensitive_file_system.inc;
+
+create table ABC (i int) engine=ibmdb2i;
+insert into ABC values(1);
+create table abc (i int) engine=ibmdb2i;
+insert into abc values (2);
+select * from ABC;
+drop table ABC;
+drop table abc;
=== added file 'mysql-test/suite/ndb/r/ndb_tmp_table_and_DDL.result'
--- a/mysql-test/suite/ndb/r/ndb_tmp_table_and_DDL.result 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/ndb/r/ndb_tmp_table_and_DDL.result 2010-01-22 09:38:21 +0000
@@ -0,0 +1,90 @@
+CREATE TEMPORARY TABLE t1 (a INT);
+CREATE TABLE t2 (a INT, b INT) ENGINE= NDB;
+INSERT INTO t1 VALUES (1);
+CREATE EVENT e1 ON SCHEDULE EVERY 10 HOUR DO SELECT 1;
+INSERT INTO t1 VALUES (1);
+ALTER EVENT e1 ON SCHEDULE EVERY 20 HOUR DO SELECT 1;
+INSERT INTO t1 VALUES (1);
+DROP EVENT IF EXISTS e1;
+INSERT INTO t1 VALUES (1);
+CREATE PROCEDURE p1() SELECT 1;
+INSERT INTO t1 VALUES (1);
+ALTER PROCEDURE p1 SQL SECURITY INVOKER;
+INSERT INTO t1 VALUES (1);
+CREATE FUNCTION f1() RETURNS INT RETURN 123;
+INSERT INTO t1 VALUES (1);
+ALTER FUNCTION f1 SQL SECURITY INVOKER;
+INSERT INTO t1 VALUES (1);
+CREATE DATABASE mysqltest1;
+INSERT INTO t1 VALUES (1);
+DROP DATABASE mysqltest1;
+INSERT INTO t1 VALUES (1);
+CREATE USER test_1@localhost;
+INSERT INTO t1 VALUES (1);
+GRANT SELECT ON t2 TO test_1@localhost;
+INSERT INTO t1 VALUES (1);
+GRANT ALL ON f1 TO test_1@localhost;
+INSERT INTO t1 VALUES (1);
+GRANT ALL ON p1 TO test_1@localhost;
+INSERT INTO t1 VALUES (1);
+GRANT USAGE ON *.* TO test_1@localhost;
+INSERT INTO t1 VALUES (1);
+REVOKE ALL PRIVILEGES ON f1 FROM test_1@localhost;
+INSERT INTO t1 VALUES (1);
+REVOKE ALL PRIVILEGES ON p1 FROM test_1@localhost;
+INSERT INTO t1 VALUES (1);
+REVOKE ALL PRIVILEGES ON t2 FROM test_1@localhost;
+INSERT INTO t1 VALUES (1);
+REVOKE USAGE ON *.* FROM test_1@localhost;
+INSERT INTO t1 VALUES (1);
+RENAME USER test_1@localhost TO test_2@localhost;
+INSERT INTO t1 VALUES (1);
+DROP USER test_2@localhost;
+INSERT INTO t1 VALUES (1);
+CREATE PROCEDURE p2()
+BEGIN
+# CREATE USER when a temporary table is open.
+CREATE TEMPORARY TABLE t3 (a INT);
+CREATE USER test_2@localhost;
+INSERT INTO t1 VALUES (1);
+# GRANT select on table to user when a temporary table is open.
+GRANT SELECT ON t2 TO test_2@localhost;
+INSERT INTO t1 VALUES (1);
+# GRANT all on function to user when a temporary table is open.
+GRANT ALL ON f1 TO test_2@localhost;
+INSERT INTO t1 VALUES (1);
+# GRANT all on procedure to user when a temporary table is open.
+GRANT ALL ON p1 TO test_2@localhost;
+INSERT INTO t1 VALUES (1);
+# GRANT usage on *.* to user when a temporary table is open.
+GRANT USAGE ON *.* TO test_2@localhost;
+INSERT INTO t1 VALUES (1);
+# REVOKE ALL PRIVILEGES on function to user when a temporary table is open.
+REVOKE ALL PRIVILEGES ON f1 FROM test_2@localhost;
+INSERT INTO t1 VALUES (1);
+# REVOKE ALL PRIVILEGES on procedure to user when a temporary table is open.
+REVOKE ALL PRIVILEGES ON p1 FROM test_2@localhost;
+INSERT INTO t1 VALUES (1);
+# REVOKE ALL PRIVILEGES on table to user when a temporary table is open.
+REVOKE ALL PRIVILEGES ON t2 FROM test_2@localhost;
+INSERT INTO t1 VALUES (1);
+# REVOKE usage on *.* from user when a temporary table is open.
+REVOKE USAGE ON *.* FROM test_2@localhost;
+INSERT INTO t1 VALUES (1);
+# RENAME USER when a temporary table is open.
+RENAME USER test_2@localhost TO test_3@localhost;
+INSERT INTO t1 VALUES (1);
+# DROP USER when a temporary table is open.
+DROP USER test_3@localhost;
+INSERT INTO t1 VALUES (1);
+DROP TEMPORARY TABLE t3;
+END |
+DROP PROCEDURE p1;
+INSERT INTO t1 VALUES (1);
+DROP PROCEDURE p2;
+INSERT INTO t1 VALUES (1);
+DROP FUNCTION f1;
+INSERT INTO t1 VALUES (1);
+DROP TABLE t2;
+INSERT INTO t1 VALUES (1);
+DROP TEMPORARY TABLE t1;
=== added file 'mysql-test/suite/ndb/t/ndb_tmp_table_and_DDL.test'
--- a/mysql-test/suite/ndb/t/ndb_tmp_table_and_DDL.test 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/ndb/t/ndb_tmp_table_and_DDL.test 2010-01-22 09:38:21 +0000
@@ -0,0 +1,11 @@
+#
+# Bug#49132
+# This test verifies whether executing a DDL statement before manipulating
+# a temporary table causes row-based replication to break with the error
+# 'table does not exist' when using the NDB engine.
+#
+
+source include/have_ndb.inc;
+
+LET $ENGINE_TYPE= NDB;
+source extra/rpl_tests/rpl_tmp_table_and_DDL.test;
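
The wrapper above is intentionally tiny: it only picks the engine and delegates to the shared extra/rpl_tests/rpl_tmp_table_and_DDL.test script, so the same Bug#49132 coverage can be reused by other engines. A hypothetical InnoDB wrapper, shown purely to illustrate the pattern and not part of this patch, would look the same:

# Hypothetical rpl_innodb_tmp_table_and_DDL.test — wrapper pattern illustration only.
source include/have_innodb.inc;

LET $ENGINE_TYPE= InnoDB;
source extra/rpl_tests/rpl_tmp_table_and_DDL.test;
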
=== modified file 'mysql-test/suite/parts/inc/part_blocked_sql_funcs_main.inc'
--- a/mysql-test/suite/parts/inc/part_blocked_sql_funcs_main.inc 2007-11-20 15:04:07 +0000
+++ b/mysql-test/suite/parts/inc/part_blocked_sql_funcs_main.inc 2009-12-14 17:27:43 +0000
@@ -152,10 +152,16 @@ let $valsqlfunc = timestampdiff(YEAR,'20
let $coltype = datetime;
--source suite/parts/inc/partition_blocked_sql_funcs.inc
-let $sqlfunc = unix_timestamp(col1);
-let $valsqlfunc = unix_timestamp ('2002-05-01');
-let $coltype = date;
---source suite/parts/inc/partition_blocked_sql_funcs.inc
+################################################################################
+# After the fix for bug #42849 the server behavior does not fit into this test's
+# architecture: for UNIX_TIMESTAMP() some of the queries in
+# suite/parts/inc/partition_blocked_sql_funcs.inc will fail with a different
+# error (ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR) and some will succeed where
+################################################################################
+#let $sqlfunc = unix_timestamp(col1);
+#let $valsqlfunc = unix_timestamp ('2002-05-01');
+#let $coltype = date;
+#--source suite/parts/inc/partition_blocked_sql_funcs.inc
let $sqlfunc = week(col1);
let $valsqlfunc = week('2002-05-01');
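
The disabled UNIX_TIMESTAMP block above reflects the behaviour change from bug #42849: timezone-dependent expressions are now rejected in partitioning functions, while UNIX_TIMESTAMP() over a TIMESTAMP column became the sanctioned usage, so the old "must all fail" expectation no longer holds for it. A rough illustration of the new behaviour (a sketch; exact outcomes depend on the server version):

# Allowed after the fix (timezone-independent): partition a TIMESTAMP column by UNIX_TIMESTAMP().
CREATE TABLE ts_ok (a TIMESTAMP NOT NULL, PRIMARY KEY (a))
PARTITION BY RANGE (UNIX_TIMESTAMP(a))
(PARTITION p0 VALUES LESS THAN (1000000000),
 PARTITION p1 VALUES LESS THAN MAXVALUE);

# Rejected (timezone-dependent): expect ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR,
# "Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed".
CREATE TABLE ts_bad (a TIMESTAMP NOT NULL, PRIMARY KEY (a))
PARTITION BY RANGE (MONTH(a))
(PARTITION p0 VALUES LESS THAN (7),
 PARTITION p1 VALUES LESS THAN (13));
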
=== modified file 'mysql-test/suite/parts/inc/partition_timestamp.inc'
--- a/mysql-test/suite/parts/inc/partition_timestamp.inc 2010-01-27 17:41:05 +0000
+++ b/mysql-test/suite/parts/inc/partition_timestamp.inc 2010-03-04 08:03:07 +0000
@@ -36,51 +36,57 @@ select count(*) from t2;
select * from t2;
drop table t2;
-eval create table t3 (a timestamp not null, primary key(a)) engine=$engine
-partition by range (month(a)) subpartition by key (a)
-subpartitions 3 (
-partition quarter1 values less than (4),
-partition quarter2 values less than (7),
-partition quarter3 values less than (10),
-partition quarter4 values less than (13)
-);
-show create table t3;
-let $count=12;
---echo $count inserts;
---disable_query_log
-SET TIME_ZONE= '+03:00';
-begin;
-while ($count)
-{
-eval insert into t3 values (date_add('1970-01-01 00:00:00',interval $count-1 month));
-dec $count;
-}
-commit;
---enable_query_log
-select count(*) from t3;
-select * from t3;
-drop table t3;
+################################################################################
+# The following 2 tests are no longer valid after bug #42849 has been fixed:
+# it is not possible to use a timezone-dependent expression (such as month(timestamp_col)
+# or just a timestamp_col in a numeric context) in a partitioning function anymore.
+################################################################################
-eval create table t4 (a timestamp not null, primary key(a)) engine=$engine
-partition by list (month(a)) subpartition by key (a)
-subpartitions 3 (
-partition quarter1 values in (0,1,2,3),
-partition quarter2 values in (4,5,6),
-partition quarter3 values in (7,8,9),
-partition quarter4 values in (10,11,12)
-);
-show create table t4;
-let $count=12;
---echo $count inserts;
---disable_query_log
-begin;
-while ($count)
-{
-eval insert into t4 values (date_add('1970-01-01 00:00:00',interval $count-1 month));
-dec $count;
-}
-commit;
---enable_query_log
-select count(*) from t4;
-select * from t4;
-drop table t4;
+# eval create table t3 (a timestamp not null, primary key(a)) engine=$engine
+# partition by range (month(a)) subpartition by key (a)
+# subpartitions 3 (
+# partition quarter1 values less than (4),
+# partition quarter2 values less than (7),
+# partition quarter3 values less than (10),
+# partition quarter4 values less than (13)
+# );
+# show create table t3;
+# let $count=12;
+# --echo $count inserts;
+# --disable_query_log
+# SET TIME_ZONE= '+03:00';
+# begin;
+# while ($count)
+# {
+# eval insert into t3 values (date_add('1970-01-01 00:00:00',interval $count-1 month));
+# dec $count;
+# }
+# commit;
+# --enable_query_log
+# select count(*) from t3;
+# select * from t3;
+# drop table t3;
+
+# eval create table t4 (a timestamp not null, primary key(a)) engine=$engine
+# partition by list (month(a)) subpartition by key (a)
+# subpartitions 3 (
+# partition quarter1 values in (0,1,2,3),
+# partition quarter2 values in (4,5,6),
+# partition quarter3 values in (7,8,9),
+# partition quarter4 values in (10,11,12)
+# );
+# show create table t4;
+# let $count=12;
+# --echo $count inserts;
+# --disable_query_log
+# begin;
+# while ($count)
+# {
+# eval insert into t4 values (date_add('1970-01-01 00:00:00',interval $count-1 month));
+# dec $count;
+# }
+# commit;
+# --enable_query_log
+# select count(*) from t4;
+# select * from t4;
+# drop table t4;
=== modified file 'mysql-test/suite/parts/r/part_blocked_sql_func_innodb.result'
--- a/mysql-test/suite/parts/r/part_blocked_sql_func_innodb.result 2007-08-27 20:08:32 +0000
+++ b/mysql-test/suite/parts/r/part_blocked_sql_func_innodb.result 2009-12-14 17:27:43 +0000
@@ -2942,104 +2942,6 @@ drop table if exists t44 ;
drop table if exists t55 ;
drop table if exists t66 ;
-------------------------------------------------------------------------
---- unix_timestamp(col1) in partition with coltype date
--------------------------------------------------------------------------
-must all fail!
-drop table if exists t1 ;
-drop table if exists t2 ;
-drop table if exists t3 ;
-drop table if exists t4 ;
-drop table if exists t5 ;
-drop table if exists t6 ;
-create table t1 (col1 date) engine='INNODB'
-partition by range(unix_timestamp(col1))
-(partition p0 values less than (15),
-partition p1 values less than (31));
-Got one of the listed errors
-create table t2 (col1 date) engine='INNODB'
-partition by list(unix_timestamp(col1))
-(partition p0 values in (1,2,3,4,5,6,7,8,9,10),
-partition p1 values in (11,12,13,14,15,16,17,18,19,20),
-partition p2 values in (21,22,23,24,25,26,27,28,29,30));
-Got one of the listed errors
-create table t3 (col1 date) engine='INNODB'
-partition by hash(unix_timestamp(col1));
-Got one of the listed errors
-create table t4 (colint int, col1 date) engine='INNODB'
-partition by range(colint)
-subpartition by hash(unix_timestamp(col1)) subpartitions 2
-(partition p0 values less than (15),
-partition p1 values less than (31));
-Got one of the listed errors
-create table t5 (colint int, col1 date) engine='INNODB'
-partition by list(colint)
-subpartition by hash(unix_timestamp(col1)) subpartitions 2
-(partition p0 values in (1,2,3,4,5,6,7,8,9,10),
-partition p1 values in (11,12,13,14,15,16,17,18,19,20),
-partition p2 values in (21,22,23,24,25,26,27,28,29,30));
-Got one of the listed errors
-create table t6 (colint int, col1 date) engine='INNODB'
-partition by range(colint)
-(partition p0 values less than (unix_timestamp ('2002-05-01')),
-partition p1 values less than maxvalue);
-Got one of the listed errors
-drop table if exists t11 ;
-drop table if exists t22 ;
-drop table if exists t33 ;
-drop table if exists t44 ;
-drop table if exists t55 ;
-drop table if exists t66 ;
-create table t11 (col1 date) engine='INNODB' ;
-create table t22 (col1 date) engine='INNODB' ;
-create table t33 (col1 date) engine='INNODB' ;
-create table t44 (colint int, col1 date) engine='INNODB' ;
-create table t55 (colint int, col1 date) engine='INNODB' ;
-create table t66 (colint int, col1 date) engine='INNODB' ;
-alter table t11
-partition by range(unix_timestamp(col1))
-(partition p0 values less than (15),
-partition p1 values less than (31));
-Got one of the listed errors
-alter table t22
-partition by list(unix_timestamp(col1))
-(partition p0 values in (1,2,3,4,5,6,7,8,9,10),
-partition p1 values in (11,12,13,14,15,16,17,18,19,20),
-partition p2 values in (21,22,23,24,25,26,27,28,29,30));
-Got one of the listed errors
-alter table t33
-partition by hash(unix_timestamp(col1));
-Got one of the listed errors
-alter table t44
-partition by range(colint)
-subpartition by hash(unix_timestamp(col1)) subpartitions 2
-(partition p0 values less than (15),
-partition p1 values less than (31));
-Got one of the listed errors
-alter table t55
-partition by list(colint)
-subpartition by hash(unix_timestamp(col1)) subpartitions 2
-(partition p0 values in (1,2,3,4,5,6,7,8,9,10),
-partition p1 values in (11,12,13,14,15,16,17,18,19,20),
-partition p2 values in (21,22,23,24,25,26,27,28,29,30));
-Got one of the listed errors
-alter table t66
-partition by range(colint)
-(partition p0 values less than (unix_timestamp ('2002-05-01')),
-partition p1 values less than maxvalue);
-Got one of the listed errors
-drop table if exists t1 ;
-drop table if exists t2 ;
-drop table if exists t3 ;
-drop table if exists t4 ;
-drop table if exists t5 ;
-drop table if exists t6 ;
-drop table if exists t11 ;
-drop table if exists t22 ;
-drop table if exists t33 ;
-drop table if exists t44 ;
-drop table if exists t55 ;
-drop table if exists t66 ;
--------------------------------------------------------------------------
--- week(col1) in partition with coltype datetime
-------------------------------------------------------------------------
must all fail!
=== modified file 'mysql-test/suite/parts/r/part_blocked_sql_func_myisam.result'
--- a/mysql-test/suite/parts/r/part_blocked_sql_func_myisam.result 2007-08-27 20:08:32 +0000
+++ b/mysql-test/suite/parts/r/part_blocked_sql_func_myisam.result 2009-12-14 17:27:43 +0000
@@ -2942,104 +2942,6 @@ drop table if exists t44 ;
drop table if exists t55 ;
drop table if exists t66 ;
-------------------------------------------------------------------------
---- unix_timestamp(col1) in partition with coltype date
--------------------------------------------------------------------------
-must all fail!
-drop table if exists t1 ;
-drop table if exists t2 ;
-drop table if exists t3 ;
-drop table if exists t4 ;
-drop table if exists t5 ;
-drop table if exists t6 ;
-create table t1 (col1 date) engine='MYISAM'
-partition by range(unix_timestamp(col1))
-(partition p0 values less than (15),
-partition p1 values less than (31));
-Got one of the listed errors
-create table t2 (col1 date) engine='MYISAM'
-partition by list(unix_timestamp(col1))
-(partition p0 values in (1,2,3,4,5,6,7,8,9,10),
-partition p1 values in (11,12,13,14,15,16,17,18,19,20),
-partition p2 values in (21,22,23,24,25,26,27,28,29,30));
-Got one of the listed errors
-create table t3 (col1 date) engine='MYISAM'
-partition by hash(unix_timestamp(col1));
-Got one of the listed errors
-create table t4 (colint int, col1 date) engine='MYISAM'
-partition by range(colint)
-subpartition by hash(unix_timestamp(col1)) subpartitions 2
-(partition p0 values less than (15),
-partition p1 values less than (31));
-Got one of the listed errors
-create table t5 (colint int, col1 date) engine='MYISAM'
-partition by list(colint)
-subpartition by hash(unix_timestamp(col1)) subpartitions 2
-(partition p0 values in (1,2,3,4,5,6,7,8,9,10),
-partition p1 values in (11,12,13,14,15,16,17,18,19,20),
-partition p2 values in (21,22,23,24,25,26,27,28,29,30));
-Got one of the listed errors
-create table t6 (colint int, col1 date) engine='MYISAM'
-partition by range(colint)
-(partition p0 values less than (unix_timestamp ('2002-05-01')),
-partition p1 values less than maxvalue);
-Got one of the listed errors
-drop table if exists t11 ;
-drop table if exists t22 ;
-drop table if exists t33 ;
-drop table if exists t44 ;
-drop table if exists t55 ;
-drop table if exists t66 ;
-create table t11 (col1 date) engine='MYISAM' ;
-create table t22 (col1 date) engine='MYISAM' ;
-create table t33 (col1 date) engine='MYISAM' ;
-create table t44 (colint int, col1 date) engine='MYISAM' ;
-create table t55 (colint int, col1 date) engine='MYISAM' ;
-create table t66 (colint int, col1 date) engine='MYISAM' ;
-alter table t11
-partition by range(unix_timestamp(col1))
-(partition p0 values less than (15),
-partition p1 values less than (31));
-Got one of the listed errors
-alter table t22
-partition by list(unix_timestamp(col1))
-(partition p0 values in (1,2,3,4,5,6,7,8,9,10),
-partition p1 values in (11,12,13,14,15,16,17,18,19,20),
-partition p2 values in (21,22,23,24,25,26,27,28,29,30));
-Got one of the listed errors
-alter table t33
-partition by hash(unix_timestamp(col1));
-Got one of the listed errors
-alter table t44
-partition by range(colint)
-subpartition by hash(unix_timestamp(col1)) subpartitions 2
-(partition p0 values less than (15),
-partition p1 values less than (31));
-Got one of the listed errors
-alter table t55
-partition by list(colint)
-subpartition by hash(unix_timestamp(col1)) subpartitions 2
-(partition p0 values in (1,2,3,4,5,6,7,8,9,10),
-partition p1 values in (11,12,13,14,15,16,17,18,19,20),
-partition p2 values in (21,22,23,24,25,26,27,28,29,30));
-Got one of the listed errors
-alter table t66
-partition by range(colint)
-(partition p0 values less than (unix_timestamp ('2002-05-01')),
-partition p1 values less than maxvalue);
-Got one of the listed errors
-drop table if exists t1 ;
-drop table if exists t2 ;
-drop table if exists t3 ;
-drop table if exists t4 ;
-drop table if exists t5 ;
-drop table if exists t6 ;
-drop table if exists t11 ;
-drop table if exists t22 ;
-drop table if exists t33 ;
-drop table if exists t44 ;
-drop table if exists t55 ;
-drop table if exists t66 ;
--------------------------------------------------------------------------
--- week(col1) in partition with coltype datetime
-------------------------------------------------------------------------
must all fail!
=== modified file 'mysql-test/suite/parts/r/partition_datetime_innodb.result'
--- a/mysql-test/suite/parts/r/partition_datetime_innodb.result 2009-10-28 07:52:34 +0000
+++ b/mysql-test/suite/parts/r/partition_datetime_innodb.result 2010-03-04 08:03:07 +0000
@@ -125,90 +125,6 @@ a
1971-01-01 00:00:58
1971-01-01 00:00:59
drop table t2;
-create table t3 (a timestamp not null, primary key(a)) engine='InnoDB'
-partition by range (month(a)) subpartition by key (a)
-subpartitions 3 (
-partition quarter1 values less than (4),
-partition quarter2 values less than (7),
-partition quarter3 values less than (10),
-partition quarter4 values less than (13)
-);
-show create table t3;
-Table Create Table
-t3 CREATE TABLE `t3` (
- `a` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
- PRIMARY KEY (`a`)
-) ENGINE=InnoDB DEFAULT CHARSET=latin1
-/*!50100 PARTITION BY RANGE (month(a))
-SUBPARTITION BY KEY (a)
-SUBPARTITIONS 3
-(PARTITION quarter1 VALUES LESS THAN (4) ENGINE = InnoDB,
- PARTITION quarter2 VALUES LESS THAN (7) ENGINE = InnoDB,
- PARTITION quarter3 VALUES LESS THAN (10) ENGINE = InnoDB,
- PARTITION quarter4 VALUES LESS THAN (13) ENGINE = InnoDB) */
-12 inserts;
-Warnings:
-Warning 1264 Out of range value for column 'a' at row 1
-select count(*) from t3;
-count(*)
-12
-select * from t3;
-a
-0000-00-00 00:00:00
-1970-02-01 00:00:00
-1970-03-01 00:00:00
-1970-04-01 00:00:00
-1970-05-01 00:00:00
-1970-06-01 00:00:00
-1970-07-01 00:00:00
-1970-08-01 00:00:00
-1970-09-01 00:00:00
-1970-10-01 00:00:00
-1970-11-01 00:00:00
-1970-12-01 00:00:00
-drop table t3;
-create table t4 (a timestamp not null, primary key(a)) engine='InnoDB'
-partition by list (month(a)) subpartition by key (a)
-subpartitions 3 (
-partition quarter1 values in (0,1,2,3),
-partition quarter2 values in (4,5,6),
-partition quarter3 values in (7,8,9),
-partition quarter4 values in (10,11,12)
-);
-show create table t4;
-Table Create Table
-t4 CREATE TABLE `t4` (
- `a` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
- PRIMARY KEY (`a`)
-) ENGINE=InnoDB DEFAULT CHARSET=latin1
-/*!50100 PARTITION BY LIST (month(a))
-SUBPARTITION BY KEY (a)
-SUBPARTITIONS 3
-(PARTITION quarter1 VALUES IN (0,1,2,3) ENGINE = InnoDB,
- PARTITION quarter2 VALUES IN (4,5,6) ENGINE = InnoDB,
- PARTITION quarter3 VALUES IN (7,8,9) ENGINE = InnoDB,
- PARTITION quarter4 VALUES IN (10,11,12) ENGINE = InnoDB) */
-12 inserts;
-Warnings:
-Warning 1264 Out of range value for column 'a' at row 1
-select count(*) from t4;
-count(*)
-12
-select * from t4;
-a
-0000-00-00 00:00:00
-1970-02-01 00:00:00
-1970-03-01 00:00:00
-1970-04-01 00:00:00
-1970-05-01 00:00:00
-1970-06-01 00:00:00
-1970-07-01 00:00:00
-1970-08-01 00:00:00
-1970-09-01 00:00:00
-1970-10-01 00:00:00
-1970-11-01 00:00:00
-1970-12-01 00:00:00
-drop table t4;
create table t1 (a date not null, primary key(a)) engine='InnoDB'
partition by key (a) (
partition pa1 max_rows=20 min_rows=2,
=== modified file 'mysql-test/suite/parts/r/partition_datetime_myisam.result'
--- a/mysql-test/suite/parts/r/partition_datetime_myisam.result 2009-10-28 07:52:34 +0000
+++ b/mysql-test/suite/parts/r/partition_datetime_myisam.result 2010-03-04 08:03:07 +0000
@@ -125,90 +125,6 @@ a
1971-01-01 00:00:58
1971-01-01 00:00:59
drop table t2;
-create table t3 (a timestamp not null, primary key(a)) engine='MyISAM'
-partition by range (month(a)) subpartition by key (a)
-subpartitions 3 (
-partition quarter1 values less than (4),
-partition quarter2 values less than (7),
-partition quarter3 values less than (10),
-partition quarter4 values less than (13)
-);
-show create table t3;
-Table Create Table
-t3 CREATE TABLE `t3` (
- `a` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
- PRIMARY KEY (`a`)
-) ENGINE=MyISAM DEFAULT CHARSET=latin1
-/*!50100 PARTITION BY RANGE (month(a))
-SUBPARTITION BY KEY (a)
-SUBPARTITIONS 3
-(PARTITION quarter1 VALUES LESS THAN (4) ENGINE = MyISAM,
- PARTITION quarter2 VALUES LESS THAN (7) ENGINE = MyISAM,
- PARTITION quarter3 VALUES LESS THAN (10) ENGINE = MyISAM,
- PARTITION quarter4 VALUES LESS THAN (13) ENGINE = MyISAM) */
-12 inserts;
-Warnings:
-Warning 1264 Out of range value for column 'a' at row 1
-select count(*) from t3;
-count(*)
-12
-select * from t3;
-a
-0000-00-00 00:00:00
-1970-02-01 00:00:00
-1970-03-01 00:00:00
-1970-04-01 00:00:00
-1970-05-01 00:00:00
-1970-06-01 00:00:00
-1970-07-01 00:00:00
-1970-08-01 00:00:00
-1970-09-01 00:00:00
-1970-10-01 00:00:00
-1970-11-01 00:00:00
-1970-12-01 00:00:00
-drop table t3;
-create table t4 (a timestamp not null, primary key(a)) engine='MyISAM'
-partition by list (month(a)) subpartition by key (a)
-subpartitions 3 (
-partition quarter1 values in (0,1,2,3),
-partition quarter2 values in (4,5,6),
-partition quarter3 values in (7,8,9),
-partition quarter4 values in (10,11,12)
-);
-show create table t4;
-Table Create Table
-t4 CREATE TABLE `t4` (
- `a` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
- PRIMARY KEY (`a`)
-) ENGINE=MyISAM DEFAULT CHARSET=latin1
-/*!50100 PARTITION BY LIST (month(a))
-SUBPARTITION BY KEY (a)
-SUBPARTITIONS 3
-(PARTITION quarter1 VALUES IN (0,1,2,3) ENGINE = MyISAM,
- PARTITION quarter2 VALUES IN (4,5,6) ENGINE = MyISAM,
- PARTITION quarter3 VALUES IN (7,8,9) ENGINE = MyISAM,
- PARTITION quarter4 VALUES IN (10,11,12) ENGINE = MyISAM) */
-12 inserts;
-Warnings:
-Warning 1264 Out of range value for column 'a' at row 1
-select count(*) from t4;
-count(*)
-12
-select * from t4;
-a
-0000-00-00 00:00:00
-1970-02-01 00:00:00
-1970-03-01 00:00:00
-1970-04-01 00:00:00
-1970-05-01 00:00:00
-1970-06-01 00:00:00
-1970-07-01 00:00:00
-1970-08-01 00:00:00
-1970-09-01 00:00:00
-1970-10-01 00:00:00
-1970-11-01 00:00:00
-1970-12-01 00:00:00
-drop table t4;
create table t1 (a date not null, primary key(a)) engine='MyISAM'
partition by key (a) (
partition pa1 max_rows=20 min_rows=2,
=== modified file 'mysql-test/suite/pbxt/r/partition_error.result'
--- a/mysql-test/suite/pbxt/r/partition_error.result 2009-04-02 10:03:14 +0000
+++ b/mysql-test/suite/pbxt/r/partition_error.result 2010-03-09 15:03:54 +0000
@@ -107,7 +107,7 @@ primary key(a,b))
partition by hash (rand(a))
partitions 2
(partition x1, partition x2);
-ERROR 42000: Constant/Random expression in (sub)partitioning function is not allowed near ')
+ERROR 42000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed near ')
partitions 2
(partition x1, partition x2)' at line 6
CREATE TABLE t1 (
@@ -118,7 +118,7 @@ primary key(a,b))
partition by range (rand(a))
partitions 2
(partition x1 values less than (0), partition x2 values less than (2));
-ERROR 42000: Constant/Random expression in (sub)partitioning function is not allowed near ')
+ERROR 42000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed near ')
partitions 2
(partition x1 values less than (0), partition x2 values less than' at line 6
CREATE TABLE t1 (
@@ -129,7 +129,7 @@ primary key(a,b))
partition by list (rand(a))
partitions 2
(partition x1 values in (1), partition x2 values in (2));
-ERROR 42000: Constant/Random expression in (sub)partitioning function is not allowed near ')
+ERROR 42000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed near ')
partitions 2
(partition x1 values in (1), partition x2 values in (2))' at line 6
CREATE TABLE t1 (
@@ -244,7 +244,7 @@ c int not null,
primary key (a,b))
partition by key (a)
subpartition by hash (rand(a+b));
-ERROR 42000: Constant/Random expression in (sub)partitioning function is not allowed near ')' at line 7
+ERROR 42000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed near ')' at line 7
CREATE TABLE t1 (
a int not null,
b int not null,
@@ -341,7 +341,7 @@ partition by range (3+4)
partitions 2
(partition x1 values less than (4) tablespace ts1,
partition x2 values less than (8) tablespace ts2);
-ERROR HY000: Constant/Random expression in (sub)partitioning function is not allowed
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
CREATE TABLE t1 (
a int not null,
b int not null,
@@ -511,7 +511,7 @@ partition by list (3+4)
partitions 2
(partition x1 values in (4) tablespace ts1,
partition x2 values in (8) tablespace ts2);
-ERROR HY000: Constant/Random expression in (sub)partitioning function is not allowed
+ERROR HY000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed
CREATE TABLE t1 (
a int not null,
b int not null,
@@ -603,13 +603,13 @@ partition by range (ascii(v))
ERROR HY000: This partition function is not allowed
create table t1 (a int)
partition by hash (rand(a));
-ERROR 42000: Constant/Random expression in (sub)partitioning function is not allowed near ')' at line 2
+ERROR 42000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed near ')' at line 2
create table t1 (a int)
partition by hash(CURTIME() + a);
-ERROR 42000: Constant/Random expression in (sub)partitioning function is not allowed near ')' at line 2
+ERROR 42000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed near ')' at line 2
create table t1 (a int)
partition by hash (NOW()+a);
-ERROR 42000: Constant/Random expression in (sub)partitioning function is not allowed near ')' at line 2
+ERROR 42000: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed near ')' at line 2
create table t1 (a int)
partition by hash (extract(hour from convert_tz(a, '+00:00', '+00:00')));
ERROR HY000: This partition function is not allowed
=== modified file 'mysql-test/suite/pbxt/r/partition_pruning.result'
--- a/mysql-test/suite/pbxt/r/partition_pruning.result 2009-08-17 15:57:58 +0000
+++ b/mysql-test/suite/pbxt/r/partition_pruning.result 2010-03-09 15:03:54 +0000
@@ -527,7 +527,7 @@ id select_type table partitions type pos
1 SIMPLE t2 p0,p4 ALL NULL NULL NULL NULL 910 Using where
explain partitions select * from t2 where (a > 100 AND a < 600);
id select_type table partitions type possible_keys key key_len ref rows Extra
-1 SIMPLE t2 p0,p1,p2,p3 ALL NULL NULL NULL NULL 910 Using where
+1 SIMPLE t2 p0,p1,p2 ALL NULL NULL NULL NULL 910 Using where
analyze table t2;
Table Op Msg_type Msg_text
test.t2 analyze status OK
=== modified file 'mysql-test/suite/pbxt/t/partition_error.test'
--- a/mysql-test/suite/pbxt/t/partition_error.test 2009-04-02 10:03:14 +0000
+++ b/mysql-test/suite/pbxt/t/partition_error.test 2010-03-09 15:03:54 +0000
@@ -421,7 +421,7 @@ partitions 2
#
# Partition by range, constant partition function not allowed
#
---error ER_CONST_EXPR_IN_PARTITION_FUNC_ERROR
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
CREATE TABLE t1 (
a int not null,
b int not null,
@@ -636,7 +636,7 @@ partition by list (a);
#
# Partition by list, constant partition function not allowed
#
---error ER_CONST_EXPR_IN_PARTITION_FUNC_ERROR
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
CREATE TABLE t1 (
a int not null,
b int not null,
=== modified file 'mysql-test/suite/rpl/r/rpl_create_if_not_exists.result'
--- a/mysql-test/suite/rpl/r/rpl_create_if_not_exists.result 2009-08-29 08:52:22 +0000
+++ b/mysql-test/suite/rpl/r/rpl_create_if_not_exists.result 2010-01-16 07:44:24 +0000
@@ -31,3 +31,37 @@ SHOW EVENTS in mysqltest;
Db Name Definer Time zone Type Execute at Interval value Interval field Starts Ends Status Originator character_set_client collation_connection Database Collation
mysqltest e root@localhost SYSTEM ONE TIME # NULL NULL NULL NULL SLAVESIDE_DISABLED 1 latin1 latin1_swedish_ci latin1_swedish_ci
DROP DATABASE IF EXISTS mysqltest;
+-------------BUG#47418-------------
+USE test;
+DROP TABLE IF EXISTS t3;
+CREATE TABLE t3(c1 INTEGER);
+INSERT INTO t3 VALUES(33);
+CREATE TEMPORARY TABLE t1(c1 INTEGER);
+CREATE TEMPORARY TABLE t2(c1 INTEGER);
+INSERT INTO t1 VALUES(1);
+INSERT INTO t2 VALUES(1);
+CREATE TABLE IF NOT EXISTS t1(c1 INTEGER) SELECT c1 FROM t3;
+CREATE TABLE t2(c1 INTEGER) SELECT c1 FROM t3;
+SELECT * FROM t1;
+c1
+1
+SELECT * FROM t2;
+c1
+1
+SELECT * FROM t1;
+c1
+33
+SELECT * FROM t2;
+c1
+33
+DROP TEMPORARY TABLE t1;
+DROP TEMPORARY TABLE t2;
+SELECT * FROM t1;
+c1
+33
+SELECT * FROM t2;
+c1
+33
+DROP TABLE t1;
+DROP TABLE t2;
+DROP TABLE t3;
=== modified file 'mysql-test/suite/rpl/r/rpl_do_grant.result'
--- a/mysql-test/suite/rpl/r/rpl_do_grant.result 2009-09-01 11:38:17 +0000
+++ b/mysql-test/suite/rpl/r/rpl_do_grant.result 2009-12-06 23:12:11 +0000
@@ -169,4 +169,77 @@ DROP USER 'create_rout_db'@'localhost';
call mtr.add_suppression("Slave: Operation DROP USER failed for 'create_rout_db'@'localhost' Error_code: 1396");
USE mtr;
call mtr.add_suppression("Slave: Operation DROP USER failed for 'create_rout_db'@'localhost' Error_code: 1396");
+######## BUG#49119 #######
+### i) test case from the 'how to repeat section'
+stop slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+reset master;
+reset slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+start slave;
+CREATE TABLE t1(c1 INT);
+CREATE PROCEDURE p1() SELECT * FROM t1 |
+REVOKE EXECUTE ON PROCEDURE p1 FROM 'root'@'localhost';
+ERROR 42000: There is no such grant defined for user 'root' on host 'localhost' on routine 'p1'
+DROP TABLE t1;
+DROP PROCEDURE p1;
+### ii) Test case in which REVOKE partially succeeds
+stop slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+reset master;
+reset slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+start slave;
+CREATE TABLE t1(c1 INT);
+CREATE PROCEDURE p1() SELECT * FROM t1 |
+CREATE USER 'user49119'@'localhost';
+GRANT EXECUTE ON PROCEDURE p1 TO 'user49119'@'localhost';
+##############################################################
+### Showing grants for both users: root and user49119 (master)
+SHOW GRANTS FOR 'user49119'@'localhost';
+Grants for user49119@localhost
+GRANT USAGE ON *.* TO 'user49119'@'localhost'
+GRANT EXECUTE ON PROCEDURE `test`.`p1` TO 'user49119'@'localhost'
+SHOW GRANTS FOR CURRENT_USER;
+Grants for root@localhost
+GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION
+##############################################################
+##############################################################
+### Showing grants for both users: root and user49119 (master)
+SHOW GRANTS FOR 'user49119'@'localhost';
+Grants for user49119@localhost
+GRANT USAGE ON *.* TO 'user49119'@'localhost'
+GRANT EXECUTE ON PROCEDURE `test`.`p1` TO 'user49119'@'localhost'
+SHOW GRANTS FOR CURRENT_USER;
+Grants for root@localhost
+GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION
+##############################################################
+## This statement will make the revoke fail because root has no
+## execute grant. However, it will still revoke the grant for
+## user49119.
+REVOKE EXECUTE ON PROCEDURE p1 FROM 'user49119'@'localhost', 'root'@'localhost';
+ERROR 42000: There is no such grant defined for user 'root' on host 'localhost' on routine 'p1'
+##############################################################
+### Showing grants for both users: root and user49119 (master)
+### after revoke statement failure
+SHOW GRANTS FOR 'user49119'@'localhost';
+Grants for user49119@localhost
+GRANT USAGE ON *.* TO 'user49119'@'localhost'
+SHOW GRANTS FOR CURRENT_USER;
+Grants for root@localhost
+GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION
+##############################################################
+#############################################################
+### Showing grants for both users: root and user49119 (slave)
+### after revoke statement failure (should match
+SHOW GRANTS FOR 'user49119'@'localhost';
+Grants for user49119@localhost
+GRANT USAGE ON *.* TO 'user49119'@'localhost'
+SHOW GRANTS FOR CURRENT_USER;
+Grants for root@localhost
+GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION
+##############################################################
+DROP TABLE t1;
+DROP PROCEDURE p1;
+DROP USER 'user49119'@'localhost';
"End of test"
=== modified file 'mysql-test/suite/rpl/r/rpl_drop_temp.result'
--- a/mysql-test/suite/rpl/r/rpl_drop_temp.result 2009-08-28 09:45:57 +0000
+++ b/mysql-test/suite/rpl/r/rpl_drop_temp.result 2009-12-31 04:04:19 +0000
@@ -12,3 +12,17 @@ show status like 'Slave_open_temp_tables
Variable_name Value
Slave_open_temp_tables 0
drop database mysqltest;
+DROP TEMPORARY TABLE IF EXISTS tmp1;
+Warnings:
+Note 1051 Unknown table 'tmp1'
+CREATE TEMPORARY TABLE t1 ( a int );
+DROP TEMPORARY TABLE t1, t2;
+ERROR 42S02: Unknown table 't2'
+DROP TEMPORARY TABLE tmp2;
+ERROR 42S02: Unknown table 'tmp2'
+stop slave;
+**** On Master ****
+CREATE TEMPORARY TABLE tmp3 (a int);
+DROP TEMPORARY TABLE tmp3;
+SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1;
+START SLAVE;
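The STOP SLAVE / SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1 / START SLAVE sequence recorded above is the standard way to step the slave SQL thread past a single failed event. A minimal sketch of the same pattern outside the test harness (generic SQL, not taken from the patch; the final SHOW is only there to confirm the thread restarted cleanly):

  STOP SLAVE;
  SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
  START SLAVE;
  -- Slave_SQL_Running should return to Yes and Last_SQL_Errno to 0
  SHOW SLAVE STATUS;
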
=== added file 'mysql-test/suite/rpl/r/rpl_geometry.result'
--- a/mysql-test/suite/rpl/r/rpl_geometry.result 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/rpl/r/rpl_geometry.result 2010-01-05 06:25:29 +0000
@@ -0,0 +1,18 @@
+stop slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+reset master;
+reset slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+start slave;
+create table t1(a varchar(100),
+b multipoint not null,
+c varchar(256));
+insert into t1 set
+a='hello',
+b=geomfromtext('multipoint(1 1)'),
+c='geometry';
+create table t2 (a int(11) not null auto_increment primary key,
+b geometrycollection default null,
+c decimal(10,0));
+insert into t2(c) values (null);
+drop table t1, t2;
=== modified file 'mysql-test/suite/rpl/r/rpl_get_master_version_and_clock.result'
--- a/mysql-test/suite/rpl/r/rpl_get_master_version_and_clock.result 2009-12-03 11:19:05 +0000
+++ b/mysql-test/suite/rpl/r/rpl_get_master_version_and_clock.result 2010-03-04 08:03:07 +0000
@@ -4,10 +4,9 @@ reset master;
reset slave;
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
start slave;
-call mtr.add_suppression("Get master clock failed with error: ");
-call mtr.add_suppression("Get master SERVER_ID failed with error: ");
-call mtr.add_suppression("Slave I/O: Master command COM_REGISTER_SLAVE failed: failed registering on master, reconnecting to try again");
+call mtr.add_suppression("Slave I/O: Master command COM_REGISTER_SLAVE failed: .*");
call mtr.add_suppression("Fatal error: The slave I/O thread stops because master and slave have equal MySQL server ids; .*");
+call mtr.add_suppression("Slave I/O thread .* register on master");
SELECT IS_FREE_LOCK("debug_lock.before_get_UNIX_TIMESTAMP");
IS_FREE_LOCK("debug_lock.before_get_UNIX_TIMESTAMP")
1
=== modified file 'mysql-test/suite/rpl/r/rpl_innodb_mixed_dml.result'
--- a/mysql-test/suite/rpl/r/rpl_innodb_mixed_dml.result 2009-09-28 12:41:10 +0000
+++ b/mysql-test/suite/rpl/r/rpl_innodb_mixed_dml.result 2010-02-02 13:38:44 +0000
@@ -885,7 +885,7 @@ master-bin.000001 # Query 1 # use `test_
master-bin.000001 # Xid 1 # #
master-bin.000001 # Query 1 # BEGIN
master-bin.000001 # Begin_load_query 1 # ;file_id=#;block_len=#
-master-bin.000001 # Execute_load_query 1 # use `test_rpl`; LOAD DATA INFILE 'MYSQLTEST_VARDIR/std_data/rpl_mixed.dat' INTO TABLE `t1` FIELDS TERMINATED BY '|' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (a, b) ;file_id=#
+master-bin.000001 # Execute_load_query 1 # use `test_rpl`; LOAD DATA INFILE 'MYSQLTEST_VARDIR/std_data/rpl_mixed.dat' INTO TABLE `t1` FIELDS TERMINATED BY '|' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (`a`, `b`) ;file_id=#
master-bin.000001 # Xid 1 # #
master-bin.000001 # Query 1 # BEGIN
master-bin.000001 # Query 1 # use `test_rpl`; DELETE FROM t1
=== modified file 'mysql-test/suite/rpl/r/rpl_killed_ddl.result'
--- a/mysql-test/suite/rpl/r/rpl_killed_ddl.result 2009-04-08 23:42:51 +0000
+++ b/mysql-test/suite/rpl/r/rpl_killed_ddl.result 2009-12-10 03:51:42 +0000
@@ -63,7 +63,7 @@ source include/diff_master_slave.inc;
DROP DATABASE d1;
source include/kill_query.inc;
source include/diff_master_slave.inc;
-DROP DATABASE d2;
+DROP DATABASE IF EXISTS d2;
source include/kill_query.inc;
source include/diff_master_slave.inc;
CREATE EVENT e2
@@ -115,6 +115,7 @@ source include/diff_master_slave.inc;
DROP INDEX i1 on t1;
source include/kill_query.inc;
source include/diff_master_slave.inc;
+CREATE TABLE IF NOT EXISTS t4 (a int);
CREATE TRIGGER tr2 BEFORE INSERT ON t4
FOR EACH ROW BEGIN
DELETE FROM t1 WHERE a=NEW.a;
=== modified file 'mysql-test/suite/rpl/r/rpl_loaddata.result'
--- a/mysql-test/suite/rpl/r/rpl_loaddata.result 2009-12-08 09:26:11 +0000
+++ b/mysql-test/suite/rpl/r/rpl_loaddata.result 2010-01-13 10:28:42 +0000
@@ -34,9 +34,9 @@ insert into t1 values(1,10);
load data infile '../../std_data/rpl_loaddata.dat' into table t1;
set global sql_slave_skip_counter=1;
start slave;
-show slave status;
-Slave_IO_State Master_Host Master_User Master_Port Connect_Retry Master_Log_File Read_Master_Log_Pos Relay_Log_File Relay_Log_Pos Relay_Master_Log_File Slave_IO_Running Slave_SQL_Running Replicate_Do_DB Replicate_Ignore_DB Replicate_Do_Table Replicate_Ignore_Table Replicate_Wild_Do_Table Replicate_Wild_Ignore_Table Last_Errno Last_Error Skip_Counter Exec_Master_Log_Pos Relay_Log_Space Until_Condition Until_Log_File Until_Log_Pos Master_SSL_Allowed Master_SSL_CA_File Master_SSL_CA_Path Master_SSL_Cert Master_SSL_Cipher Master_SSL_Key Seconds_Behind_Master Master_SSL_Verify_Server_Cert Last_IO_Errno Last_IO_Error Last_SQL_Errno Last_SQL_Error
-# 127.0.0.1 root MASTER_PORT 1 master-bin.000001 2009 # # master-bin.000001 Yes Yes # 0 0 2009 # None 0 No # No 0 0
+Last_SQL_Errno=0
+Last_SQL_Error
+
set sql_log_bin=0;
delete from t1;
set sql_log_bin=1;
@@ -44,9 +44,9 @@ load data infile '../../std_data/rpl_loa
stop slave;
change master to master_user='test';
change master to master_user='root';
-show slave status;
-Slave_IO_State Master_Host Master_User Master_Port Connect_Retry Master_Log_File Read_Master_Log_Pos Relay_Log_File Relay_Log_Pos Relay_Master_Log_File Slave_IO_Running Slave_SQL_Running Replicate_Do_DB Replicate_Ignore_DB Replicate_Do_Table Replicate_Ignore_Table Replicate_Wild_Do_Table Replicate_Wild_Ignore_Table Last_Errno Last_Error Skip_Counter Exec_Master_Log_Pos Relay_Log_Space Until_Condition Until_Log_File Until_Log_Pos Master_SSL_Allowed Master_SSL_CA_File Master_SSL_CA_Path Master_SSL_Cert Master_SSL_Cipher Master_SSL_Key Seconds_Behind_Master Master_SSL_Verify_Server_Cert Last_IO_Errno Last_IO_Error Last_SQL_Errno Last_SQL_Error
-# 127.0.0.1 root MASTER_PORT 1 master-bin.000001 2044 # # master-bin.000001 No No # 0 0 2044 # None 0 No # No 0 0
+Last_SQL_Errno=0
+Last_SQL_Error
+
set global sql_slave_skip_counter=1;
start slave;
set sql_log_bin=0;
@@ -55,9 +55,9 @@ set sql_log_bin=1;
load data infile '../../std_data/rpl_loaddata.dat' into table t1;
stop slave;
reset slave;
-show slave status;
-Slave_IO_State Master_Host Master_User Master_Port Connect_Retry Master_Log_File Read_Master_Log_Pos Relay_Log_File Relay_Log_Pos Relay_Master_Log_File Slave_IO_Running Slave_SQL_Running Replicate_Do_DB Replicate_Ignore_DB Replicate_Do_Table Replicate_Ignore_Table Replicate_Wild_Do_Table Replicate_Wild_Ignore_Table Last_Errno Last_Error Skip_Counter Exec_Master_Log_Pos Relay_Log_Space Until_Condition Until_Log_File Until_Log_Pos Master_SSL_Allowed Master_SSL_CA_File Master_SSL_CA_Path Master_SSL_Cert Master_SSL_Cipher Master_SSL_Key Seconds_Behind_Master Master_SSL_Verify_Server_Cert Last_IO_Errno Last_IO_Error Last_SQL_Errno Last_SQL_Error
-# 127.0.0.1 root MASTER_PORT 1 4 # # No No # 0 0 0 # None 0 No # No 0 0
+Last_SQL_Errno=0
+Last_SQL_Error
+
reset master;
create table t2 (day date,id int(9),category enum('a','b','c'),name varchar(60),
unique(day)) engine=MyISAM;
@@ -115,3 +115,20 @@ use b48297_db1;
Comparing tables master:b48297_db1.t1 and slave:b48297_db1.t1
DROP DATABASE b48297_db1;
DROP DATABASE b42897_db2;
+stop slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+reset master;
+reset slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+start slave;
+use test;
+CREATE TABLE t1 (`key` TEXT, `text` TEXT);
+LOAD DATA INFILE '../../std_data/loaddata2.dat' REPLACE INTO TABLE `t1` FIELDS TERMINATED BY ',';
+SELECT * FROM t1;
+key text
+Field A 'Field B'
+Field 1 'Field 2'
+Field 3 'Field 4'
+'Field 5' 'Field 6'
+Field 6 'Field 7'
+DROP TABLE t1;
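In the earlier hunks for rpl_loaddata.result above, the full SHOW SLAVE STATUS row, which embeds binlog positions and therefore breaks whenever event sizes change, is replaced by just the Last_SQL_Errno and Last_SQL_Error fields. The .test side of that change is not part of this diff, so its exact wording may differ; such a check is commonly written in mysqltest with query_get_value, roughly like this (the $sql_errno/$sql_error variable names are illustrative):

  let $sql_errno= query_get_value(SHOW SLAVE STATUS, Last_SQL_Errno, 1);
  let $sql_error= query_get_value(SHOW SLAVE STATUS, Last_SQL_Error, 1);
  --echo Last_SQL_Errno=$sql_errno
  --echo Last_SQL_Error
  --echo $sql_error
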
=== added file 'mysql-test/suite/rpl/r/rpl_loaddata_concurrent.result'
--- a/mysql-test/suite/rpl/r/rpl_loaddata_concurrent.result 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/rpl/r/rpl_loaddata_concurrent.result 2010-01-07 10:34:27 +0000
@@ -0,0 +1,145 @@
+CREATE TABLE t1 (c1 char(50));
+LOAD DATA INFILE '../../std_data/words.dat' INTO TABLE t1;
+LOAD DATA CONCURRENT INFILE '../../std_data/words.dat' INTO TABLE t1;
+show binlog events from <binlog_start>;
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; CREATE TABLE t1 (c1 char(50))
+master-bin.000001 # Begin_load_query # # ;file_id=#;block_len=#
+master-bin.000001 # Execute_load_query # # use `test`; LOAD DATA INFILE '../../std_data/words.dat' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (`c1`) ;file_id=#
+master-bin.000001 # Begin_load_query # # ;file_id=#;block_len=#
+master-bin.000001 # Execute_load_query # # use `test`; LOAD DATA CONCURRENT INFILE '../../std_data/words.dat' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (`c1`) ;file_id=#
+DROP TABLE t1;
+stop slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+reset master;
+reset slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+start slave;
+reset master;
+select last_insert_id();
+last_insert_id()
+0
+create table t1(a int not null auto_increment, b int, primary key(a) );
+load data CONCURRENT infile '../../std_data/rpl_loaddata.dat' into table t1;
+select last_insert_id();
+last_insert_id()
+1
+create temporary table t2 (day date,id int(9),category enum('a','b','c'),name varchar(60));
+load data CONCURRENT infile '../../std_data/rpl_loaddata2.dat' into table t2 fields terminated by ',' optionally enclosed by '%' escaped by '@' lines terminated by '\n##\n' starting by '>' ignore 1 lines;
+create table t3 (day date,id int(9),category enum('a','b','c'),name varchar(60));
+insert into t3 select * from t2;
+select * from t1;
+a b
+1 10
+2 15
+select * from t3;
+day id category name
+2003-02-22 2461 b a a a @ % ' " a
+2003-03-22 2161 c asdf
+2003-03-22 2416 a bbbbb
+drop table t1;
+drop table t2;
+drop table t3;
+create table t1(a int, b int, unique(b));
+insert into t1 values(1,10);
+load data CONCURRENT infile '../../std_data/rpl_loaddata.dat' into table t1;
+set global sql_slave_skip_counter=1;
+start slave;
+Last_SQL_Errno=0
+Last_SQL_Error
+
+set sql_log_bin=0;
+delete from t1;
+set sql_log_bin=1;
+load data CONCURRENT infile '../../std_data/rpl_loaddata.dat' into table t1;
+stop slave;
+change master to master_user='test';
+change master to master_user='root';
+Last_SQL_Errno=0
+Last_SQL_Error
+
+set global sql_slave_skip_counter=1;
+start slave;
+set sql_log_bin=0;
+delete from t1;
+set sql_log_bin=1;
+load data CONCURRENT infile '../../std_data/rpl_loaddata.dat' into table t1;
+stop slave;
+reset slave;
+Last_SQL_Errno=0
+Last_SQL_Error
+
+reset master;
+create table t2 (day date,id int(9),category enum('a','b','c'),name varchar(60),
+unique(day)) engine=MyISAM;
+load data CONCURRENT infile '../../std_data/rpl_loaddata2.dat' into table t2 fields
+terminated by ',' optionally enclosed by '%' escaped by '@' lines terminated by
+'\n##\n' starting by '>' ignore 1 lines;
+ERROR 23000: Duplicate entry '2003-03-22' for key 'day'
+select * from t2;
+day id category name
+2003-02-22 2461 b a a a @ % ' " a
+2003-03-22 2161 c asdf
+start slave;
+select * from t2;
+day id category name
+2003-02-22 2461 b a a a @ % ' " a
+2003-03-22 2161 c asdf
+alter table t2 drop key day;
+delete from t2;
+load data CONCURRENT infile '../../std_data/rpl_loaddata2.dat' into table t2 fields
+terminated by ',' optionally enclosed by '%' escaped by '@' lines terminated by
+'\n##\n' starting by '>' ignore 1 lines;
+ERROR 23000: Duplicate entry '2003-03-22' for key 'day'
+drop table t1, t2;
+drop table t1, t2;
+CREATE TABLE t1 (word CHAR(20) NOT NULL PRIMARY KEY) ENGINE=INNODB;
+LOAD DATA CONCURRENT INFILE "../../std_data/words.dat" INTO TABLE t1;
+ERROR 23000: Duplicate entry 'Aarhus' for key 'PRIMARY'
+DROP TABLE IF EXISTS t1;
+stop slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+reset master;
+reset slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+start slave;
+drop database if exists b48297_db1;
+drop database if exists b42897_db2;
+create database b48297_db1;
+create database b42897_db2;
+use b48297_db1;
+CREATE TABLE t1 (c1 VARCHAR(256)) engine=MyISAM;;
+use b42897_db2;
+### assertion: works with cross-referenced database
+LOAD DATA CONCURRENT LOCAL INFILE 'MYSQLTEST_VARDIR/std_data/loaddata5.dat' INTO TABLE b48297_db1.t1;
+use b48297_db1;
+### assertion: works with fully qualified name on current database
+LOAD DATA CONCURRENT LOCAL INFILE 'MYSQLTEST_VARDIR/std_data/loaddata5.dat' INTO TABLE b48297_db1.t1;
+### assertion: works without fully qualified name on current database
+LOAD DATA CONCURRENT LOCAL INFILE 'MYSQLTEST_VARDIR/std_data/loaddata5.dat' INTO TABLE t1;
+### create connection without default database
+### connect (conn2,localhost,root,,*NO-ONE*);
+### assertion: works without stating the default database
+LOAD DATA CONCURRENT LOCAL INFILE 'MYSQLTEST_VARDIR/std_data/loaddata5.dat' INTO TABLE b48297_db1.t1;
+### disconnect and switch back to master connection
+use b48297_db1;
+Comparing tables master:b48297_db1.t1 and slave:b48297_db1.t1
+DROP DATABASE b48297_db1;
+DROP DATABASE b42897_db2;
+stop slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+reset master;
+reset slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+start slave;
+use test;
+CREATE TABLE t1 (`key` TEXT, `text` TEXT);
+LOAD DATA INFILE '../../std_data/loaddata2.dat' REPLACE INTO TABLE `t1` FIELDS TERMINATED BY ',';
+SELECT * FROM t1;
+key text
+Field A 'Field B'
+Field 1 'Field 2'
+Field 3 'Field 4'
+'Field 5' 'Field 6'
+Field 6 'Field 7'
+DROP TABLE t1;
=== modified file 'mysql-test/suite/rpl/r/rpl_loaddata_fatal.result'
--- a/mysql-test/suite/rpl/r/rpl_loaddata_fatal.result 2009-09-28 12:41:10 +0000
+++ b/mysql-test/suite/rpl/r/rpl_loaddata_fatal.result 2009-12-06 01:11:32 +0000
@@ -53,7 +53,7 @@ Master_User root
Master_Port MASTER_PORT
Connect_Retry 1
Master_Log_File master-bin.000001
-Read_Master_Log_Pos 556
+Read_Master_Log_Pos 560
Relay_Log_File #
Relay_Log_Pos #
Relay_Master_Log_File master-bin.000001
=== modified file 'mysql-test/suite/rpl/r/rpl_loaddata_map.result'
--- a/mysql-test/suite/rpl/r/rpl_loaddata_map.result 2009-09-28 12:41:10 +0000
+++ b/mysql-test/suite/rpl/r/rpl_loaddata_map.result 2009-12-06 01:11:32 +0000
@@ -20,7 +20,7 @@ master-bin.000001 # Query # # use `test`
master-bin.000001 # Begin_load_query # # ;file_id=#;block_len=#
master-bin.000001 # Append_block # # ;file_id=#;block_len=#
master-bin.000001 # Append_block # # ;file_id=#;block_len=#
-master-bin.000001 # Execute_load_query # # use `test`; LOAD DATA INFILE 'MYSQLTEST_VARDIR/tmp/bug30435_5k.txt' INTO TABLE `t2` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (id) ;file_id=#
+master-bin.000001 # Execute_load_query # # use `test`; LOAD DATA INFILE 'MYSQLTEST_VARDIR/tmp/bug30435_5k.txt' INTO TABLE `t2` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' (`id`) ;file_id=#
==== Verify results on slave ====
[on slave]
select count(*) from t2 /* 5 000 */;
=== added file 'mysql-test/suite/rpl/r/rpl_manual_change_index_file.result'
--- a/mysql-test/suite/rpl/r/rpl_manual_change_index_file.result 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/rpl/r/rpl_manual_change_index_file.result 2010-01-08 05:42:23 +0000
@@ -0,0 +1,25 @@
+stop slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+reset master;
+reset slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+start slave;
+FLUSH LOGS;
+CREATE TABLE t1(c1 INT);
+FLUSH LOGS;
+call mtr.add_suppression('Got fatal error 1236 from master when reading data from binary log: .*could not find next log');
+Last_IO_Error
+Got fatal error 1236 from master when reading data from binary log: 'could not find next log'
+CREATE TABLE t2(c1 INT);
+FLUSH LOGS;
+CREATE TABLE t3(c1 INT);
+FLUSH LOGS;
+CREATE TABLE t4(c1 INT);
+START SLAVE IO_THREAD;
+SHOW TABLES;
+Tables_in_test
+t1
+t2
+t3
+t4
+DROP TABLE t1, t2, t3, t4;
=== modified file 'mysql-test/suite/rpl/r/rpl_misc_functions.result'
--- a/mysql-test/suite/rpl/r/rpl_misc_functions.result 2008-10-07 12:22:28 +0000
+++ b/mysql-test/suite/rpl/r/rpl_misc_functions.result 2010-01-13 09:00:03 +0000
@@ -4,6 +4,7 @@ reset master;
reset slave;
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
start slave;
+CALL mtr.add_suppression('Statement may not be safe to log in statement format.');
create table t1(id int, i int, r1 int, r2 int, p varchar(100));
insert into t1 values(1, connection_id(), 0, 0, "");
insert into t1 values(2, 0, rand()*1000, rand()*1000, "");
=== modified file 'mysql-test/suite/rpl/r/rpl_nondeterministic_functions.result'
--- a/mysql-test/suite/rpl/r/rpl_nondeterministic_functions.result 2009-11-18 14:50:31 +0000
+++ b/mysql-test/suite/rpl/r/rpl_nondeterministic_functions.result 2010-01-13 09:00:03 +0000
@@ -4,6 +4,7 @@ reset master;
reset slave;
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
start slave;
+CALL mtr.add_suppression('Statement may not be safe to log in statement format.');
CREATE TABLE t1 (a VARCHAR(1000));
INSERT INTO t1 VALUES (CONNECTION_ID());
INSERT INTO t1 VALUES (CONNECTION_ID());
=== modified file 'mysql-test/suite/rpl/r/rpl_optimize.result'
--- a/mysql-test/suite/rpl/r/rpl_optimize.result 2007-06-27 12:28:02 +0000
+++ b/mysql-test/suite/rpl/r/rpl_optimize.result 2010-01-13 09:00:03 +0000
@@ -4,6 +4,7 @@ reset master;
reset slave;
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
start slave;
+CALL mtr.add_suppression('Statement may not be safe to log in statement format.');
create table t1 (a int not null auto_increment primary key, b int, key(b));
INSERT INTO t1 (a) VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10);
INSERT INTO t1 (a) SELECT null FROM t1;
=== modified file 'mysql-test/suite/rpl/r/rpl_row_func003.result'
--- a/mysql-test/suite/rpl/r/rpl_row_func003.result 2007-06-27 12:28:02 +0000
+++ b/mysql-test/suite/rpl/r/rpl_row_func003.result 2010-01-13 09:00:03 +0000
@@ -4,6 +4,7 @@ reset master;
reset slave;
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
start slave;
+CALL mtr.add_suppression('Statement may not be safe to log in statement format.');
DROP FUNCTION IF EXISTS test.f1;
DROP TABLE IF EXISTS test.t1;
CREATE TABLE test.t1 (a INT NOT NULL AUTO_INCREMENT, c CHAR(16),PRIMARY KEY(a))ENGINE=INNODB;
=== modified file 'mysql-test/suite/rpl/r/rpl_row_mysqlbinlog.result'
--- a/mysql-test/suite/rpl/r/rpl_row_mysqlbinlog.result 2008-04-02 09:49:22 +0000
+++ b/mysql-test/suite/rpl/r/rpl_row_mysqlbinlog.result 2010-01-27 12:23:28 +0000
@@ -152,6 +152,7 @@ c1 c3 c4 c5
5 2006-02-22 00:00:00 Tested in Texas 11
--- Test 2 position test --
+Warning: The option '--position' is deprecated and will be removed in a future release. Please use --start-position instead.
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
@@ -314,6 +315,7 @@ ROLLBACK /* added by mysqlbinlog */;
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
--- Test 7 reading stdin w/position --
+Warning: The option '--position' is deprecated and will be removed in a future release. Please use --start-position instead.
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
=== renamed file 'mysql-test/suite/binlog/r/binlog_tbl_metadata.result' => 'mysql-test/suite/rpl/r/rpl_row_tbl_metadata.result'
--- a/mysql-test/suite/binlog/r/binlog_tbl_metadata.result 2009-05-12 11:53:46 +0000
+++ b/mysql-test/suite/rpl/r/rpl_row_tbl_metadata.result 2010-01-07 17:45:54 +0000
@@ -1,5 +1,11 @@
-RESET MASTER;
+stop slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+reset master;
+reset slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+start slave;
DROP TABLE IF EXISTS `t1`;
+### TABLE with field_metadata_size == 290
CREATE TABLE `t1` (
`c1` int(11) NOT NULL AUTO_INCREMENT,
`c2` varchar(30) NOT NULL,
@@ -150,7 +156,51 @@ CREATE TABLE `t1` (
PRIMARY KEY (`c1`)
) ENGINE=InnoDB;
LOCK TABLES `t1` WRITE;
-INSERT INTO `t1` VALUES ('1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1');
-DROP TABLE `t1`;
+INSERT INTO `t1`(c2) VALUES ('1');
FLUSH LOGS;
+### assertion: the slave replicated event successfully and tables match
+Comparing tables master:test.t1 and slave:test.t1
+DROP TABLE `t1`;
=== Using mysqlbinlog to detect failure. Before the patch mysqlbinlog would find a corrupted event, thence would fail.
+stop slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+reset master;
+reset slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+start slave;
+### action: generating several tables with different metadata
+### sizes (resorting to perl)
+### testing table with 249 field metadata size.
+### testing table with 250 field metadata size.
+### testing table with 251 field metadata size.
+### testing table with 252 field metadata size.
+### testing table with 253 field metadata size.
+### testing table with 254 field metadata size.
+### testing table with 255 field metadata size.
+### testing table with 256 field metadata size.
+### testing table with 257 field metadata size.
+### testing table with 258 field metadata size.
+FLUSH LOGS;
+### assertion: the slave replicated event successfully and tables match for t10
+Comparing tables master:test.t10 and slave:test.t10
+### assertion: the slave replicated event successfully and tables match for t9
+Comparing tables master:test.t9 and slave:test.t9
+### assertion: the slave replicated event successfully and tables match for t8
+Comparing tables master:test.t8 and slave:test.t8
+### assertion: the slave replicated event successfully and tables match for t7
+Comparing tables master:test.t7 and slave:test.t7
+### assertion: the slave replicated event successfully and tables match for t6
+Comparing tables master:test.t6 and slave:test.t6
+### assertion: the slave replicated event successfully and tables match for t5
+Comparing tables master:test.t5 and slave:test.t5
+### assertion: the slave replicated event successfully and tables match for t4
+Comparing tables master:test.t4 and slave:test.t4
+### assertion: the slave replicated event successfully and tables match for t3
+Comparing tables master:test.t3 and slave:test.t3
+### assertion: the slave replicated event successfully and tables match for t2
+Comparing tables master:test.t2 and slave:test.t2
+### assertion: the slave replicated event successfully and tables match for t1
+Comparing tables master:test.t1 and slave:test.t1
+### assertion: check that binlog is not corrupt. Using mysqlbinlog to
+### detect failure. Before the patch mysqlbinlog would find
+### a corrupted event, thence would fail.
=== added file 'mysql-test/suite/rpl/r/rpl_set_null_innodb.result'
--- a/mysql-test/suite/rpl/r/rpl_set_null_innodb.result 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/rpl/r/rpl_set_null_innodb.result 2010-01-21 17:20:24 +0000
@@ -0,0 +1,35 @@
+stop slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+reset master;
+reset slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+start slave;
+stop slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+reset master;
+reset slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+start slave;
+CREATE TABLE t1 (c1 BIT, c2 INT) Engine=InnoDB;
+INSERT INTO `t1` VALUES ( 1, 1 );
+UPDATE t1 SET c1=NULL where c2=1;
+Comparing tables master:test.t1 and slave:test.t1
+DELETE FROM t1 WHERE c2=1 LIMIT 1;
+Comparing tables master:test.t1 and slave:test.t1
+DROP TABLE t1;
+stop slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+reset master;
+reset slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+start slave;
+CREATE TABLE t1 (c1 CHAR) Engine=InnoDB;
+INSERT INTO t1 ( c1 ) VALUES ( 'w' ) ;
+SELECT * FROM t1;
+c1
+w
+UPDATE t1 SET c1=NULL WHERE c1='w';
+Comparing tables master:test.t1 and slave:test.t1
+DELETE FROM t1 LIMIT 2;
+Comparing tables master:test.t1 and slave:test.t1
+DROP TABLE t1;
=== added file 'mysql-test/suite/rpl/r/rpl_set_null_myisam.result'
--- a/mysql-test/suite/rpl/r/rpl_set_null_myisam.result 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/rpl/r/rpl_set_null_myisam.result 2010-01-21 17:20:24 +0000
@@ -0,0 +1,35 @@
+stop slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+reset master;
+reset slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+start slave;
+stop slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+reset master;
+reset slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+start slave;
+CREATE TABLE t1 (c1 BIT, c2 INT) Engine=MyISAM;
+INSERT INTO `t1` VALUES ( 1, 1 );
+UPDATE t1 SET c1=NULL where c2=1;
+Comparing tables master:test.t1 and slave:test.t1
+DELETE FROM t1 WHERE c2=1 LIMIT 1;
+Comparing tables master:test.t1 and slave:test.t1
+DROP TABLE t1;
+stop slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+reset master;
+reset slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+start slave;
+CREATE TABLE t1 (c1 CHAR) Engine=MyISAM;
+INSERT INTO t1 ( c1 ) VALUES ( 'w' ) ;
+SELECT * FROM t1;
+c1
+w
+UPDATE t1 SET c1=NULL WHERE c1='w';
+Comparing tables master:test.t1 and slave:test.t1
+DELETE FROM t1 LIMIT 2;
+Comparing tables master:test.t1 and slave:test.t1
+DROP TABLE t1;
=== modified file 'mysql-test/suite/rpl/r/rpl_sp.result'
--- a/mysql-test/suite/rpl/r/rpl_sp.result 2009-07-02 13:40:27 +0000
+++ b/mysql-test/suite/rpl/r/rpl_sp.result 2010-02-02 13:38:44 +0000
@@ -195,7 +195,7 @@ set @old_log_bin_trust_routine_creators=
set @old_log_bin_trust_function_creators= @@global.log_bin_trust_function_creators;
set global log_bin_trust_routine_creators=1;
Warnings:
-Warning 1287 The syntax '@@log_bin_trust_routine_creators' is deprecated and will be removed in MySQL 6.0. Please use '@@log_bin_trust_function_creators' instead
+Warning 1287 The syntax '@@log_bin_trust_routine_creators' is deprecated and will be removed in MySQL 5.6. Please use '@@log_bin_trust_function_creators' instead
set global log_bin_trust_function_creators=0;
set global log_bin_trust_function_creators=1;
set @old_log_bin_trust_routine_creators= @@global.log_bin_trust_routine_creators;
@@ -559,11 +559,11 @@ end
master-bin.000001 # Query 1 # use `mysqltest`; SELECT `mysqltest2`.`f1`()
set @@global.log_bin_trust_routine_creators= @old_log_bin_trust_routine_creators;
Warnings:
-Warning 1287 The syntax '@@log_bin_trust_routine_creators' is deprecated and will be removed in MySQL 6.0. Please use '@@log_bin_trust_function_creators' instead
+Warning 1287 The syntax '@@log_bin_trust_routine_creators' is deprecated and will be removed in MySQL 5.6. Please use '@@log_bin_trust_function_creators' instead
set @@global.log_bin_trust_function_creators= @old_log_bin_trust_function_creators;
set @@global.log_bin_trust_routine_creators= @old_log_bin_trust_routine_creators;
Warnings:
-Warning 1287 The syntax '@@log_bin_trust_routine_creators' is deprecated and will be removed in MySQL 6.0. Please use '@@log_bin_trust_function_creators' instead
+Warning 1287 The syntax '@@log_bin_trust_routine_creators' is deprecated and will be removed in MySQL 5.6. Please use '@@log_bin_trust_function_creators' instead
set @@global.log_bin_trust_function_creators= @old_log_bin_trust_function_creators;
drop database mysqltest;
drop database mysqltest2;
=== added file 'mysql-test/suite/rpl/r/rpl_stm_binlog_direct.result'
--- a/mysql-test/suite/rpl/r/rpl_stm_binlog_direct.result 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/rpl/r/rpl_stm_binlog_direct.result 2010-01-20 19:08:16 +0000
@@ -0,0 +1,1360 @@
+stop slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+reset master;
+reset slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+start slave;
+set @@session.binlog_direct_non_transactional_updates= TRUE;
+#########################################################################
+# CONFIGURATION
+#########################################################################
+SET @commands= 'configure';
+SET SQL_LOG_BIN=0;
+CREATE TABLE nt_1 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+CREATE TABLE nt_2 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+CREATE TABLE nt_3 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+CREATE TABLE nt_4 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+CREATE TABLE nt_5 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+CREATE TABLE nt_6 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+CREATE TABLE tt_1 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = Innodb;
+CREATE TABLE tt_2 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = Innodb;
+CREATE TABLE tt_3 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = Innodb;
+CREATE TABLE tt_4 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = Innodb;
+CREATE TABLE tt_5 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = Innodb;
+CREATE TABLE tt_6 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = Innodb;
+SET SQL_LOG_BIN=1;
+SET SQL_LOG_BIN=0;
+CREATE TABLE nt_1 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+CREATE TABLE nt_2 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+CREATE TABLE nt_3 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+CREATE TABLE nt_4 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+CREATE TABLE nt_5 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+CREATE TABLE nt_6 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = MyISAM;
+CREATE TABLE tt_1 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = Innodb;
+CREATE TABLE tt_2 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = Innodb;
+CREATE TABLE tt_3 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = Innodb;
+CREATE TABLE tt_4 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = Innodb;
+CREATE TABLE tt_5 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = Innodb;
+CREATE TABLE tt_6 (trans_id INT, stmt_id INT, info VARCHAR(64), PRIMARY KEY(trans_id, stmt_id)) ENGINE = Innodb;
+SET SQL_LOG_BIN=1;
+INSERT INTO nt_1(trans_id, stmt_id) VALUES(1,1);
+INSERT INTO nt_2(trans_id, stmt_id) VALUES(1,1);
+INSERT INTO nt_3(trans_id, stmt_id) VALUES(1,1);
+INSERT INTO nt_4(trans_id, stmt_id) VALUES(1,1);
+INSERT INTO nt_5(trans_id, stmt_id) VALUES(1,1);
+INSERT INTO nt_6(trans_id, stmt_id) VALUES(1,1);
+INSERT INTO tt_1(trans_id, stmt_id) VALUES(1,1);
+INSERT INTO tt_2(trans_id, stmt_id) VALUES(1,1);
+INSERT INTO tt_3(trans_id, stmt_id) VALUES(1,1);
+INSERT INTO tt_4(trans_id, stmt_id) VALUES(1,1);
+INSERT INTO tt_5(trans_id, stmt_id) VALUES(1,1);
+INSERT INTO tt_6(trans_id, stmt_id) VALUES(1,1);
+CREATE PROCEDURE pc_i_tt_5_suc (IN p_trans_id INTEGER, IN p_stmt_id INTEGER)
+BEGIN
+DECLARE in_stmt_id INTEGER;
+SELECT max(stmt_id) INTO in_stmt_id FROM tt_5 WHERE trans_id= p_trans_id;
+SELECT COALESCE(greatest(in_stmt_id + 1, p_stmt_id), 1) INTO in_stmt_id;
+INSERT INTO tt_5(trans_id, stmt_id) VALUES (p_trans_id, in_stmt_id);
+INSERT INTO tt_5(trans_id, stmt_id) VALUES (p_trans_id, in_stmt_id + 1);
+END|
+CREATE PROCEDURE pc_i_nt_5_suc (IN p_trans_id INTEGER, IN p_stmt_id INTEGER)
+BEGIN
+DECLARE in_stmt_id INTEGER;
+SELECT max(stmt_id) INTO in_stmt_id FROM nt_5 WHERE trans_id= p_trans_id;
+SELECT COALESCE(greatest(in_stmt_id + 1, p_stmt_id), 1) INTO in_stmt_id;
+INSERT INTO nt_5(trans_id, stmt_id) VALUES (p_trans_id, in_stmt_id);
+INSERT INTO nt_5(trans_id, stmt_id) VALUES (p_trans_id, in_stmt_id + 1);
+END|
+CREATE FUNCTION fc_i_tt_5_suc (p_trans_id INTEGER, p_stmt_id INTEGER) RETURNS VARCHAR(64)
+BEGIN
+DECLARE in_stmt_id INTEGER;
+SELECT max(stmt_id) INTO in_stmt_id FROM tt_5 WHERE trans_id= p_trans_id;
+SELECT COALESCE(greatest(in_stmt_id + 1, p_stmt_id), 1) INTO in_stmt_id;
+INSERT INTO tt_5(trans_id, stmt_id) VALUES (p_trans_id, in_stmt_id);
+INSERT INTO tt_5(trans_id, stmt_id) VALUES (p_trans_id, in_stmt_id + 1);
+RETURN "fc_i_tt_5_suc";
+END|
+CREATE FUNCTION fc_i_nt_5_suc (p_trans_id INTEGER, p_stmt_id INTEGER) RETURNS VARCHAR(64)
+BEGIN
+DECLARE in_stmt_id INTEGER;
+SELECT max(stmt_id) INTO in_stmt_id FROM nt_5 WHERE trans_id= p_trans_id;
+SELECT COALESCE(greatest(in_stmt_id + 1, p_stmt_id), 1) INTO in_stmt_id;
+INSERT INTO nt_5(trans_id, stmt_id) VALUES (p_trans_id, in_stmt_id);
+INSERT INTO nt_5(trans_id, stmt_id) VALUES (p_trans_id, in_stmt_id + 1);
+RETURN "fc_i_nt_5_suc";
+END|
+CREATE TRIGGER tr_i_tt_3_to_nt_3 AFTER INSERT ON tt_3 FOR EACH ROW
+BEGIN
+DECLARE in_stmt_id INTEGER;
+SELECT max(stmt_id) INTO in_stmt_id FROM nt_3 WHERE trans_id= NEW.trans_id;
+SELECT COALESCE(greatest(in_stmt_id + 1, NEW.stmt_id), 1) INTO in_stmt_id;
+INSERT INTO nt_3(trans_id, stmt_id) VALUES (NEW.trans_id, in_stmt_id);
+INSERT INTO nt_3(trans_id, stmt_id) VALUES (NEW.trans_id, in_stmt_id + 1);
+END|
+CREATE TRIGGER tr_i_nt_4_to_tt_4 AFTER INSERT ON nt_4 FOR EACH ROW
+BEGIN
+DECLARE in_stmt_id INTEGER;
+SELECT max(stmt_id) INTO in_stmt_id FROM tt_4 WHERE trans_id= NEW.trans_id;
+SELECT COALESCE(greatest(in_stmt_id + 1, NEW.stmt_id), 1) INTO in_stmt_id;
+INSERT INTO tt_4(trans_id, stmt_id) VALUES (NEW.trans_id, in_stmt_id);
+INSERT INTO tt_4(trans_id, stmt_id) VALUES (NEW.trans_id, in_stmt_id + 1);
+END|
+CREATE TRIGGER tr_i_tt_5_to_tt_6 AFTER INSERT ON tt_5 FOR EACH ROW
+BEGIN
+DECLARE in_stmt_id INTEGER;
+SELECT max(stmt_id) INTO in_stmt_id FROM tt_6 WHERE trans_id= NEW.trans_id;
+SELECT COALESCE(greatest(in_stmt_id + 1, NEW.stmt_id, 1), 1) INTO in_stmt_id;
+INSERT INTO tt_6(trans_id, stmt_id) VALUES (NEW.trans_id, in_stmt_id);
+INSERT INTO tt_6(trans_id, stmt_id) VALUES (NEW.trans_id, in_stmt_id + 1);
+END|
+CREATE TRIGGER tr_i_nt_5_to_nt_6 AFTER INSERT ON nt_5 FOR EACH ROW
+BEGIN
+DECLARE in_stmt_id INTEGER;
+SELECT max(stmt_id) INTO in_stmt_id FROM nt_6 WHERE trans_id= NEW.trans_id;
+SELECT COALESCE(greatest(in_stmt_id + 1, NEW.stmt_id), 1) INTO in_stmt_id;
+INSERT INTO nt_6(trans_id, stmt_id) VALUES (NEW.trans_id, in_stmt_id);
+INSERT INTO nt_6(trans_id, stmt_id) VALUES (NEW.trans_id, in_stmt_id + 1);
+END|
+SET @commands= '';
+#########################################################################
+# 1 - BINLOG ORDER
+#########################################################################
+
+
+
+
+#
+#3) Generates in the binlog what follows:
+# --> STMT "N B T C" entries, format S.
+#
+SET @commands= 'B T N C';
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_1(trans_id, stmt_id) VALUES (7, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_1(trans_id, stmt_id) VALUES (7, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (7, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> C << -b-b-b-b-b-b-b-b-b-b-b-
+COMMIT;
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_1(trans_id, stmt_id) VALUES (7, 2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> C << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T N C << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (7, 4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_1(trans_id, stmt_id) VALUES (7, 2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> B T N C << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_1(trans_id, stmt_id) VALUES (8, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-trig << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_5(trans_id, stmt_id) VALUES (8, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES (8, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-trig << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> C << -b-b-b-b-b-b-b-b-b-b-b-
+COMMIT;
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_1(trans_id, stmt_id) VALUES (8, 2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> C << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T N-trig C << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES (8, 4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_1(trans_id, stmt_id) VALUES (8, 2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> B T N-trig C << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_1(trans_id, stmt_id) VALUES (9, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-func << -b-b-b-b-b-b-b-b-b-b-b-
+SELECT fc_i_nt_5_suc (9, 4);
+fc_i_nt_5_suc (9, 4)
+fc_i_nt_5_suc
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_nt_5_suc`(9,4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-func << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> C << -b-b-b-b-b-b-b-b-b-b-b-
+COMMIT;
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_1(trans_id, stmt_id) VALUES (9, 2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> C << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T N-func C << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_nt_5_suc`(9,4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_1(trans_id, stmt_id) VALUES (9, 2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> B T N-func C << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_1(trans_id, stmt_id) VALUES (10, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-proc << -b-b-b-b-b-b-b-b-b-b-b-
+CALL pc_i_nt_5_suc (10, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',10), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',10), NAME_CONST('in_stmt_id',1) + 1)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-proc << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> C << -b-b-b-b-b-b-b-b-b-b-b-
+COMMIT;
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_1(trans_id, stmt_id) VALUES (10, 2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> C << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T N-proc C << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',10), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',10), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_1(trans_id, stmt_id) VALUES (10, 2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> B T N-proc C << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-trig << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_5(trans_id, stmt_id) VALUES (11, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-trig << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_1(trans_id, stmt_id) VALUES (11, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (11, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> C << -b-b-b-b-b-b-b-b-b-b-b-
+COMMIT;
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES (11, 2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> C << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-trig N C << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (11, 4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES (11, 2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-trig N C << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-trig << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_5(trans_id, stmt_id) VALUES (12, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-trig << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-trig << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_5(trans_id, stmt_id) VALUES (12, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES (12, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-trig << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> C << -b-b-b-b-b-b-b-b-b-b-b-
+COMMIT;
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES (12, 2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> C << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-trig N-trig C << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES (12, 4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES (12, 2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-trig N-trig C << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-trig << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_5(trans_id, stmt_id) VALUES (13, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-trig << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-func << -b-b-b-b-b-b-b-b-b-b-b-
+SELECT fc_i_nt_5_suc (13, 4);
+fc_i_nt_5_suc (13, 4)
+fc_i_nt_5_suc
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_nt_5_suc`(13,4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-func << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> C << -b-b-b-b-b-b-b-b-b-b-b-
+COMMIT;
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES (13, 2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> C << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-trig N-func C << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_nt_5_suc`(13,4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES (13, 2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-trig N-func C << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-trig << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_5(trans_id, stmt_id) VALUES (14, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-trig << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-proc << -b-b-b-b-b-b-b-b-b-b-b-
+CALL pc_i_nt_5_suc (14, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',14), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',14), NAME_CONST('in_stmt_id',1) + 1)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-proc << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> C << -b-b-b-b-b-b-b-b-b-b-b-
+COMMIT;
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES (14, 2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> C << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-trig N-proc C << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',14), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',14), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES (14, 2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-trig N-proc C << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-func << -b-b-b-b-b-b-b-b-b-b-b-
+SELECT fc_i_tt_5_suc (15, 2);
+fc_i_tt_5_suc (15, 2)
+fc_i_tt_5_suc
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-func << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_1(trans_id, stmt_id) VALUES (15, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (15, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> C << -b-b-b-b-b-b-b-b-b-b-b-
+COMMIT;
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_tt_5_suc`(15,2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> C << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-func N C << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (15, 4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_tt_5_suc`(15,2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-func N C << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-func << -b-b-b-b-b-b-b-b-b-b-b-
+SELECT fc_i_tt_5_suc (16, 2);
+fc_i_tt_5_suc (16, 2)
+fc_i_tt_5_suc
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-func << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-trig << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_5(trans_id, stmt_id) VALUES (16, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES (16, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-trig << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> C << -b-b-b-b-b-b-b-b-b-b-b-
+COMMIT;
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_tt_5_suc`(16,2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> C << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-func N-trig C << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES (16, 4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_tt_5_suc`(16,2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-func N-trig C << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-func << -b-b-b-b-b-b-b-b-b-b-b-
+SELECT fc_i_tt_5_suc (17, 2);
+fc_i_tt_5_suc (17, 2)
+fc_i_tt_5_suc
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-func << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-func << -b-b-b-b-b-b-b-b-b-b-b-
+SELECT fc_i_nt_5_suc (17, 4);
+fc_i_nt_5_suc (17, 4)
+fc_i_nt_5_suc
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_nt_5_suc`(17,4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-func << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> C << -b-b-b-b-b-b-b-b-b-b-b-
+COMMIT;
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_tt_5_suc`(17,2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> C << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-func N-func C << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_nt_5_suc`(17,4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_tt_5_suc`(17,2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-func N-func C << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-func << -b-b-b-b-b-b-b-b-b-b-b-
+SELECT fc_i_tt_5_suc (18, 2);
+fc_i_tt_5_suc (18, 2)
+fc_i_tt_5_suc
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-func << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-proc << -b-b-b-b-b-b-b-b-b-b-b-
+CALL pc_i_nt_5_suc (18, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',18), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',18), NAME_CONST('in_stmt_id',1) + 1)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-proc << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> C << -b-b-b-b-b-b-b-b-b-b-b-
+COMMIT;
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_tt_5_suc`(18,2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> C << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-func N-proc C << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',18), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',18), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_tt_5_suc`(18,2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-func N-proc C << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-proc << -b-b-b-b-b-b-b-b-b-b-b-
+CALL pc_i_tt_5_suc (19, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-proc << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_1(trans_id, stmt_id) VALUES (19, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (19, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> C << -b-b-b-b-b-b-b-b-b-b-b-
+COMMIT;
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',19), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',19), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> C << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-proc N C << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (19, 4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',19), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',19), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-proc N C << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-proc << -b-b-b-b-b-b-b-b-b-b-b-
+CALL pc_i_tt_5_suc (20, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-proc << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-trig << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_5(trans_id, stmt_id) VALUES (20, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES (20, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-trig << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> C << -b-b-b-b-b-b-b-b-b-b-b-
+COMMIT;
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',20), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',20), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> C << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-proc N-trig C << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES (20, 4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',20), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',20), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-proc N-trig C << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-proc << -b-b-b-b-b-b-b-b-b-b-b-
+CALL pc_i_tt_5_suc (21, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-proc << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-func << -b-b-b-b-b-b-b-b-b-b-b-
+SELECT fc_i_nt_5_suc (21, 4);
+fc_i_nt_5_suc (21, 4)
+fc_i_nt_5_suc
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_nt_5_suc`(21,4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-func << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> C << -b-b-b-b-b-b-b-b-b-b-b-
+COMMIT;
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',21), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',21), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> C << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-proc N-func C << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_nt_5_suc`(21,4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',21), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',21), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-proc N-func C << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-proc << -b-b-b-b-b-b-b-b-b-b-b-
+CALL pc_i_tt_5_suc (22, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-proc << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-proc << -b-b-b-b-b-b-b-b-b-b-b-
+CALL pc_i_nt_5_suc (22, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',22), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',22), NAME_CONST('in_stmt_id',1) + 1)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-proc << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> C << -b-b-b-b-b-b-b-b-b-b-b-
+COMMIT;
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',22), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',22), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> C << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-proc N-proc C << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',22), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',22), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',22), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',22), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-proc N-proc C << -e-e-e-e-e-e-e-e-e-e-e-
+
+
+
+
+
+#
+#3.e) Generates in the binlog what follows if T-* fails:
+# --> STMT "N" entry, format S.
+# Otherwise, what follows if N-* fails and an N-Table is changed:
+# --> STMT "N B T C" entries, format S.
+#
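For orientation, here is a minimal sketch of the scenario this block exercises. The table definitions live earlier in the test and are not part of this hunk, so the engines and keys below are assumptions: tt_1 is taken to be transactional (XtraDB/InnoDB), nt_1 non-transactional (MyISAM), both with a primary key over (trans_id, stmt_id), and binlog_format=STATEMENT.

CREATE TABLE tt_1 (trans_id INT, stmt_id INT, PRIMARY KEY (trans_id, stmt_id)) ENGINE=InnoDB;
CREATE TABLE nt_1 (trans_id INT, stmt_id INT, PRIMARY KEY (trans_id, stmt_id)) ENGINE=MyISAM;
INSERT INTO tt_1 VALUES (10, 2);  # seed a row so the insert inside the transaction fails
BEGIN;
INSERT INTO tt_1 VALUES (10, 2);  # eT: duplicate key, the transactional statement fails
INSERT INTO nt_1 VALUES (23, 4);  # N: written to the non-transactional table right away
COMMIT;
# Expected binlog content for the whole block: only the nt_1 INSERT, logged as a
# plain statement with no BEGIN/COMMIT around it, as the "B eT N C" summary below shows.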
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> eT << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_1(trans_id, stmt_id) VALUES (10, 2);
+Got one of the listed errors
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> eT << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_1(trans_id, stmt_id) VALUES (23, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (23, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> C << -b-b-b-b-b-b-b-b-b-b-b-
+COMMIT;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> C << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B eT N C << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (23, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> B eT N C << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> Te << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_1(trans_id, stmt_id) VALUES (24, 2), (10, 2);
+Got one of the listed errors
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> Te << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_1(trans_id, stmt_id) VALUES (24, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (24, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> C << -b-b-b-b-b-b-b-b-b-b-b-
+COMMIT;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> C << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B Te N C << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (24, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> B Te N C << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_1(trans_id, stmt_id) VALUES (25, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> eN << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_1(trans_id, stmt_id) VALUES (24, 4);
+Got one of the listed errors
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> eN << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> C << -b-b-b-b-b-b-b-b-b-b-b-
+COMMIT;
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_1(trans_id, stmt_id) VALUES (25, 2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> C << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T eN C << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_1(trans_id, stmt_id) VALUES (25, 2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> B T eN C << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_1(trans_id, stmt_id) VALUES (26, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> Ne << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_1(trans_id, stmt_id) VALUES (26, 4), (24, 4);
+Got one of the listed errors
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (26, 4), (24, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> Ne << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> C << -b-b-b-b-b-b-b-b-b-b-b-
+COMMIT;
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_1(trans_id, stmt_id) VALUES (26, 2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> C << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T Ne C << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (26, 4), (24, 4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_1(trans_id, stmt_id) VALUES (26, 2)
+master-bin.000001 # Xid # # COMMIT /* XID */
+-e-e-e-e-e-e-e-e-e-e-e- >> B T Ne C << -e-e-e-e-e-e-e-e-e-e-e-
+
+
+
+
+
+#
+#4) Generates in the binlog what follows:
+# --> STMT "N B T R" entries, format S.
+#
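The rollback case uses the same assumed tt_1/nt_1 setup; in sketch form:

BEGIN;
INSERT INTO tt_1 VALUES (27, 2);  # T: transactional, undone by the rollback
INSERT INTO nt_1 VALUES (27, 4);  # N: non-transactional, cannot be undone
ROLLBACK;                         # raises warning 1196
# Expected binlog content: the nt_1 INSERT as a standalone statement, then
# BEGIN, the tt_1 INSERT and ROLLBACK, matching the "B T N R" summary below.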
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_1(trans_id, stmt_id) VALUES (27, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_1(trans_id, stmt_id) VALUES (27, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (27, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> R << -b-b-b-b-b-b-b-b-b-b-b-
+ROLLBACK;
+Warnings:
+Warning 1196 Some non-transactional changed tables couldn't be rolled back
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_1(trans_id, stmt_id) VALUES (27, 2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> R << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T N R << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (27, 4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_1(trans_id, stmt_id) VALUES (27, 2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> B T N R << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_1(trans_id, stmt_id) VALUES (28, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-trig << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_5(trans_id, stmt_id) VALUES (28, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES (28, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-trig << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> R << -b-b-b-b-b-b-b-b-b-b-b-
+ROLLBACK;
+Warnings:
+Warning 1196 Some non-transactional changed tables couldn't be rolled back
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_1(trans_id, stmt_id) VALUES (28, 2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> R << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T N-trig R << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES (28, 4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_1(trans_id, stmt_id) VALUES (28, 2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> B T N-trig R << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_1(trans_id, stmt_id) VALUES (29, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-func << -b-b-b-b-b-b-b-b-b-b-b-
+SELECT fc_i_nt_5_suc (29, 4);
+fc_i_nt_5_suc (29, 4)
+fc_i_nt_5_suc
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_nt_5_suc`(29,4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-func << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> R << -b-b-b-b-b-b-b-b-b-b-b-
+ROLLBACK;
+Warnings:
+Warning 1196 Some non-transactional changed tables couldn't be rolled back
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_1(trans_id, stmt_id) VALUES (29, 2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> R << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T N-func R << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_nt_5_suc`(29,4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_1(trans_id, stmt_id) VALUES (29, 2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> B T N-func R << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_1(trans_id, stmt_id) VALUES (30, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-proc << -b-b-b-b-b-b-b-b-b-b-b-
+CALL pc_i_nt_5_suc (30, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',30), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',30), NAME_CONST('in_stmt_id',1) + 1)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-proc << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> R << -b-b-b-b-b-b-b-b-b-b-b-
+ROLLBACK;
+Warnings:
+Warning 1196 Some non-transactional changed tables couldn't be rolled back
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_1(trans_id, stmt_id) VALUES (30, 2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> R << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T N-proc R << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',30), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',30), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_1(trans_id, stmt_id) VALUES (30, 2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> B T N-proc R << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-trig << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_5(trans_id, stmt_id) VALUES (31, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-trig << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_1(trans_id, stmt_id) VALUES (31, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (31, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> R << -b-b-b-b-b-b-b-b-b-b-b-
+ROLLBACK;
+Warnings:
+Warning 1196 Some non-transactional changed tables couldn't be rolled back
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES (31, 2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> R << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-trig N R << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (31, 4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES (31, 2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-trig N R << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-trig << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_5(trans_id, stmt_id) VALUES (32, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-trig << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-trig << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_5(trans_id, stmt_id) VALUES (32, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES (32, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-trig << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> R << -b-b-b-b-b-b-b-b-b-b-b-
+ROLLBACK;
+Warnings:
+Warning 1196 Some non-transactional changed tables couldn't be rolled back
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES (32, 2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> R << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-trig N-trig R << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES (32, 4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES (32, 2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-trig N-trig R << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-trig << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_5(trans_id, stmt_id) VALUES (33, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-trig << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-func << -b-b-b-b-b-b-b-b-b-b-b-
+SELECT fc_i_nt_5_suc (33, 4);
+fc_i_nt_5_suc (33, 4)
+fc_i_nt_5_suc
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_nt_5_suc`(33,4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-func << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> R << -b-b-b-b-b-b-b-b-b-b-b-
+ROLLBACK;
+Warnings:
+Warning 1196 Some non-transactional changed tables couldn't be rolled back
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES (33, 2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> R << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-trig N-func R << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_nt_5_suc`(33,4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES (33, 2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-trig N-func R << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-trig << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_5(trans_id, stmt_id) VALUES (34, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-trig << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-proc << -b-b-b-b-b-b-b-b-b-b-b-
+CALL pc_i_nt_5_suc (34, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',34), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',34), NAME_CONST('in_stmt_id',1) + 1)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-proc << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> R << -b-b-b-b-b-b-b-b-b-b-b-
+ROLLBACK;
+Warnings:
+Warning 1196 Some non-transactional changed tables couldn't be rolled back
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES (34, 2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> R << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-trig N-proc R << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',34), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',34), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES (34, 2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-trig N-proc R << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-func << -b-b-b-b-b-b-b-b-b-b-b-
+SELECT fc_i_tt_5_suc (35, 2);
+fc_i_tt_5_suc (35, 2)
+fc_i_tt_5_suc
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-func << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_1(trans_id, stmt_id) VALUES (35, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (35, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> R << -b-b-b-b-b-b-b-b-b-b-b-
+ROLLBACK;
+Warnings:
+Warning 1196 Some non-transactional changed tables couldn't be rolled back
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_tt_5_suc`(35,2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> R << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-func N R << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (35, 4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_tt_5_suc`(35,2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-func N R << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-func << -b-b-b-b-b-b-b-b-b-b-b-
+SELECT fc_i_tt_5_suc (36, 2);
+fc_i_tt_5_suc (36, 2)
+fc_i_tt_5_suc
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-func << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-trig << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_5(trans_id, stmt_id) VALUES (36, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES (36, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-trig << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> R << -b-b-b-b-b-b-b-b-b-b-b-
+ROLLBACK;
+Warnings:
+Warning 1196 Some non-transactional changed tables couldn't be rolled back
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_tt_5_suc`(36,2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> R << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-func N-trig R << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES (36, 4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_tt_5_suc`(36,2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-func N-trig R << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-func << -b-b-b-b-b-b-b-b-b-b-b-
+SELECT fc_i_tt_5_suc (37, 2);
+fc_i_tt_5_suc (37, 2)
+fc_i_tt_5_suc
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-func << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-func << -b-b-b-b-b-b-b-b-b-b-b-
+SELECT fc_i_nt_5_suc (37, 4);
+fc_i_nt_5_suc (37, 4)
+fc_i_nt_5_suc
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_nt_5_suc`(37,4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-func << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> R << -b-b-b-b-b-b-b-b-b-b-b-
+ROLLBACK;
+Warnings:
+Warning 1196 Some non-transactional changed tables couldn't be rolled back
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_tt_5_suc`(37,2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> R << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-func N-func R << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_nt_5_suc`(37,4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_tt_5_suc`(37,2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-func N-func R << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-func << -b-b-b-b-b-b-b-b-b-b-b-
+SELECT fc_i_tt_5_suc (38, 2);
+fc_i_tt_5_suc (38, 2)
+fc_i_tt_5_suc
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-func << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-proc << -b-b-b-b-b-b-b-b-b-b-b-
+CALL pc_i_nt_5_suc (38, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',38), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',38), NAME_CONST('in_stmt_id',1) + 1)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-proc << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> R << -b-b-b-b-b-b-b-b-b-b-b-
+ROLLBACK;
+Warnings:
+Warning 1196 Some non-transactional changed tables couldn't be rolled back
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_tt_5_suc`(38,2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> R << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-func N-proc R << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',38), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',38), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_tt_5_suc`(38,2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-func N-proc R << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-proc << -b-b-b-b-b-b-b-b-b-b-b-
+CALL pc_i_tt_5_suc (39, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-proc << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_1(trans_id, stmt_id) VALUES (39, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (39, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> R << -b-b-b-b-b-b-b-b-b-b-b-
+ROLLBACK;
+Warnings:
+Warning 1196 Some non-transactional changed tables couldn't be rolled back
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',39), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',39), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> R << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-proc N R << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (39, 4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',39), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',39), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-proc N R << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-proc << -b-b-b-b-b-b-b-b-b-b-b-
+CALL pc_i_tt_5_suc (40, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-proc << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-trig << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_5(trans_id, stmt_id) VALUES (40, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES (40, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-trig << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> R << -b-b-b-b-b-b-b-b-b-b-b-
+ROLLBACK;
+Warnings:
+Warning 1196 Some non-transactional changed tables couldn't be rolled back
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',40), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',40), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> R << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-proc N-trig R << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES (40, 4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',40), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',40), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-proc N-trig R << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-proc << -b-b-b-b-b-b-b-b-b-b-b-
+CALL pc_i_tt_5_suc (41, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-proc << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-func << -b-b-b-b-b-b-b-b-b-b-b-
+SELECT fc_i_nt_5_suc (41, 4);
+fc_i_nt_5_suc (41, 4)
+fc_i_nt_5_suc
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_nt_5_suc`(41,4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-func << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> R << -b-b-b-b-b-b-b-b-b-b-b-
+ROLLBACK;
+Warnings:
+Warning 1196 Some non-transactional changed tables couldn't be rolled back
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',41), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',41), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> R << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-proc N-func R << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; SELECT `test`.`fc_i_nt_5_suc`(41,4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',41), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',41), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-proc N-func R << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T-proc << -b-b-b-b-b-b-b-b-b-b-b-
+CALL pc_i_tt_5_suc (42, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T-proc << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N-proc << -b-b-b-b-b-b-b-b-b-b-b-
+CALL pc_i_nt_5_suc (42, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',42), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',42), NAME_CONST('in_stmt_id',1) + 1)
+-e-e-e-e-e-e-e-e-e-e-e- >> N-proc << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> R << -b-b-b-b-b-b-b-b-b-b-b-
+ROLLBACK;
+Warnings:
+Warning 1196 Some non-transactional changed tables couldn't be rolled back
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',42), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',42), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> R << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T-proc N-proc R << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',42), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',42), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',42), NAME_CONST('in_stmt_id',1))
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_5(trans_id, stmt_id) VALUES ( NAME_CONST('p_trans_id',42), NAME_CONST('in_stmt_id',1) + 1)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> B T-proc N-proc R << -e-e-e-e-e-e-e-e-e-e-e-
+
+
+
+
+
+#
+#4.e) Generates in the binlog what follows if T* fails:
+# --> STMT "B N C" entry, format S.
+# Otherwise, what follows if N* fails and an N-Table is changed:
+# --> STMT "N" entries, format S.
+#
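In sketch form (same assumed setup), the two failing variants checked next are:

INSERT INTO tt_1 VALUES (26, 2);              # seed so the transactional insert fails
BEGIN;
INSERT INTO tt_1 VALUES (26, 2);              # eT: duplicate key, fails
INSERT INTO nt_1 VALUES (43, 4);              # N
ROLLBACK;                                     # warning 1196; only the nt_1 row survives
# Expected binlog content: just the nt_1 INSERT.

INSERT INTO nt_1 VALUES (44, 4);              # seed so the multi-row insert fails part-way
BEGIN;
INSERT INTO tt_1 VALUES (46, 2);              # T
INSERT INTO nt_1 VALUES (46, 4), (44, 4);     # Ne: the second row hits a duplicate key,
                                              # but the first row is already in the MyISAM table
ROLLBACK;
# Expected binlog content: the failed multi-row nt_1 INSERT (so the slave can repeat
# the same partial change), then BEGIN, the tt_1 INSERT and ROLLBACK.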
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> eT << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_1(trans_id, stmt_id) VALUES (26, 2);
+Got one of the listed errors
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> eT << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_1(trans_id, stmt_id) VALUES (43, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (43, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> R << -b-b-b-b-b-b-b-b-b-b-b-
+ROLLBACK;
+Warnings:
+Warning 1196 Some non-transactional changed tables couldn't be rolled back
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> R << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B eT N R << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (43, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> B eT N R << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> Te << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_1(trans_id, stmt_id) VALUES (44, 2), (26, 2);
+Got one of the listed errors
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> Te << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> N << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_1(trans_id, stmt_id) VALUES (44, 4);
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (44, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> N << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> R << -b-b-b-b-b-b-b-b-b-b-b-
+ROLLBACK;
+Warnings:
+Warning 1196 Some non-transactional changed tables couldn't be rolled back
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> R << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B Te N R << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (44, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> B Te N R << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_1(trans_id, stmt_id) VALUES (45, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> eN << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_1(trans_id, stmt_id) VALUES (44, 4);
+Got one of the listed errors
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> eN << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> R << -b-b-b-b-b-b-b-b-b-b-b-
+ROLLBACK;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> R << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T eN R << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B T eN R << -e-e-e-e-e-e-e-e-e-e-e-
+
+-b-b-b-b-b-b-b-b-b-b-b- >> B << -b-b-b-b-b-b-b-b-b-b-b-
+BEGIN;
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> B << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> T << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO tt_1(trans_id, stmt_id) VALUES (46, 2);
+Log_name Pos Event_type Server_id End_log_pos Info
+-e-e-e-e-e-e-e-e-e-e-e- >> T << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> Ne << -b-b-b-b-b-b-b-b-b-b-b-
+INSERT INTO nt_1(trans_id, stmt_id) VALUES (46, 4), (44, 4);
+Got one of the listed errors
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (46, 4), (44, 4)
+-e-e-e-e-e-e-e-e-e-e-e- >> Ne << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> R << -b-b-b-b-b-b-b-b-b-b-b-
+ROLLBACK;
+Warnings:
+Warning 1196 Some non-transactional changed tables couldn't be rolled back
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_1(trans_id, stmt_id) VALUES (46, 2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> R << -e-e-e-e-e-e-e-e-e-e-e-
+-b-b-b-b-b-b-b-b-b-b-b- >> B T Ne R << -b-b-b-b-b-b-b-b-b-b-b-
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Query # # use `test`; INSERT INTO nt_1(trans_id, stmt_id) VALUES (46, 4), (44, 4)
+master-bin.000001 # Query # # BEGIN
+master-bin.000001 # Query # # use `test`; INSERT INTO tt_1(trans_id, stmt_id) VALUES (46, 2)
+master-bin.000001 # Query # # ROLLBACK
+-e-e-e-e-e-e-e-e-e-e-e- >> B T Ne R << -e-e-e-e-e-e-e-e-e-e-e-
+
+###################################################################################
+# CHECK CONSISTENCY
+###################################################################################
+###################################################################################
+# CLEAN
+###################################################################################
=== modified file 'mysql-test/suite/rpl/r/rpl_stm_log.result'
--- a/mysql-test/suite/rpl/r/rpl_stm_log.result 2009-09-28 12:41:10 +0000
+++ b/mysql-test/suite/rpl/r/rpl_stm_log.result 2009-12-06 01:11:32 +0000
@@ -25,7 +25,7 @@ master-bin.000001 # Query 1 # use `test`
master-bin.000001 # Query 1 # use `test`; drop table t1
master-bin.000001 # Query 1 # use `test`; create table t1 (word char(20) not null)ENGINE=MyISAM
master-bin.000001 # Begin_load_query 1 # ;file_id=1;block_len=581
-master-bin.000001 # Execute_load_query 1 # use `test`; LOAD DATA INFILE '../../std_data/words.dat' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' IGNORE 1 LINES (word) ;file_id=1
+master-bin.000001 # Execute_load_query 1 # use `test`; LOAD DATA INFILE '../../std_data/words.dat' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' IGNORE 1 LINES (`word`) ;file_id=1
show binlog events from 106 limit 1;
Log_name Pos Event_type Server_id End_log_pos Info
master-bin.000001 # Query 1 # use `test`; create table t1(n int not null auto_increment primary key)ENGINE=MyISAM
@@ -193,7 +193,7 @@ master-bin.000001 # Query # # use `test`
master-bin.000001 # Query # # use `test`; drop table t1
master-bin.000001 # Query # # use `test`; create table t1 (word char(20) not null)ENGINE=MyISAM
master-bin.000001 # Begin_load_query # # ;file_id=#;block_len=#
-master-bin.000001 # Execute_load_query # # use `test`; LOAD DATA INFILE '../../std_data/words.dat' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' IGNORE 1 LINES (word) ;file_id=#
+master-bin.000001 # Execute_load_query # # use `test`; LOAD DATA INFILE '../../std_data/words.dat' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' IGNORE 1 LINES (`word`) ;file_id=#
master-bin.000001 # Rotate # # master-bin.000002;pos=4
show binlog events in 'master-bin.000002';
Log_name Pos Event_type Server_id End_log_pos Info
@@ -218,7 +218,7 @@ slave-bin.000001 # Query 1 # use `test`;
slave-bin.000001 # Query 1 # use `test`; drop table t1
slave-bin.000001 # Query 1 # use `test`; create table t1 (word char(20) not null)ENGINE=MyISAM
slave-bin.000001 # Begin_load_query 1 # ;file_id=1;block_len=581
-slave-bin.000001 # Execute_load_query 1 # use `test`; LOAD DATA INFILE '../../tmp/SQL_LOAD-2-1-1.data' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' IGNORE 1 LINES (word) ;file_id=1
+slave-bin.000001 # Execute_load_query 1 # use `test`; LOAD DATA INFILE '../../tmp/SQL_LOAD-2-1-1.data' INTO TABLE `t1` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' IGNORE 1 LINES (`word`) ;file_id=1
slave-bin.000001 # Query 1 # use `test`; create table t3 (a int)ENGINE=MyISAM
slave-bin.000001 # Rotate 2 # slave-bin.000002;pos=4
show binlog events in 'slave-bin.000002' from 4;
=== modified file 'mysql-test/suite/rpl/r/rpl_stm_maria.result'
--- a/mysql-test/suite/rpl/r/rpl_stm_maria.result 2008-01-20 04:25:26 +0000
+++ b/mysql-test/suite/rpl/r/rpl_stm_maria.result 2010-03-04 08:03:07 +0000
@@ -4,6 +4,7 @@ reset master;
reset slave;
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
start slave;
+CALL mtr.add_suppression('Statement may not be safe to log in statement format.');
DROP TABLE IF EXISTS t1;
DROP TABLE IF EXISTS t2;
DROP TABLE IF EXISTS t3;
=== modified file 'mysql-test/suite/rpl/r/rpl_stm_until.result'
--- a/mysql-test/suite/rpl/r/rpl_stm_until.result 2008-07-23 11:23:52 +0000
+++ b/mysql-test/suite/rpl/r/rpl_stm_until.result 2010-01-27 17:27:49 +0000
@@ -212,3 +212,51 @@ start slave sql_thread;
start slave until master_log_file='master-bin.000001', master_log_pos=776;
Warnings:
Note 1254 Slave is already running
+include/stop_slave.inc
+drop table if exists t1;
+reset slave;
+change master to master_host='127.0.0.1',master_port=MASTER_PORT, master_user='root';
+drop table if exists t1;
+reset master;
+create table t1 (a int primary key auto_increment);
+start slave;
+include/stop_slave.inc
+master and slave are in sync now
+select 0 as zero;
+zero
+0
+insert into t1 set a=null;
+insert into t1 set a=null;
+select count(*) as two from t1;
+two
+2
+start slave until master_log_file='master-bin.000001', master_log_pos= UNTIL_POS;;
+slave stopped at the prescribed position
+select 0 as zero;
+zero
+0
+select count(*) as one from t1;
+one
+1
+drop table t1;
+start slave;
+stop slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+reset master;
+reset slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+start slave;
+drop table if exists t1;
+Warnings:
+Note 1051 Unknown table 't1'
+flush logs;
+stop slave;
+reset slave;
+start slave until master_log_file='master-bin.000001', master_log_pos=294 /* to stop right before DROP */;
+show tables /* t1 must exist */;
+Tables_in_test
+t1
+drop table t1;
+stop slave;
+reset slave;
+reset master;
=== modified file 'mysql-test/suite/rpl/r/rpl_temporary.result'
--- a/mysql-test/suite/rpl/r/rpl_temporary.result 2010-01-11 13:15:28 +0000
+++ b/mysql-test/suite/rpl/r/rpl_temporary.result 2010-03-04 08:03:07 +0000
@@ -37,8 +37,10 @@ ERROR 42000: Access denied; you need the
SELECT @@session.sql_select_limit = @save_select_limit;
@@session.sql_select_limit = @save_select_limit
1
+SET @save_conn_id= connection_id();
SET @@session.pseudo_thread_id=100;
SET @@session.pseudo_thread_id=connection_id();
+SET @@session.pseudo_thread_id=@save_conn_id;
SET @@session.sql_log_bin=0;
SET @@session.sql_log_bin=1;
drop table if exists t1,t2;
=== added file 'mysql-test/suite/rpl/r/rpl_tmp_table_and_DDL.result'
--- a/mysql-test/suite/rpl/r/rpl_tmp_table_and_DDL.result 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/rpl/r/rpl_tmp_table_and_DDL.result 2010-01-22 09:38:21 +0000
@@ -0,0 +1,96 @@
+stop slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+reset master;
+reset slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+start slave;
+CREATE TEMPORARY TABLE t1 (a INT);
+CREATE TABLE t2 (a INT, b INT) ENGINE= MyISAM;
+INSERT INTO t1 VALUES (1);
+CREATE EVENT e1 ON SCHEDULE EVERY 10 HOUR DO SELECT 1;
+INSERT INTO t1 VALUES (1);
+ALTER EVENT e1 ON SCHEDULE EVERY 20 HOUR DO SELECT 1;
+INSERT INTO t1 VALUES (1);
+DROP EVENT IF EXISTS e1;
+INSERT INTO t1 VALUES (1);
+CREATE PROCEDURE p1() SELECT 1;
+INSERT INTO t1 VALUES (1);
+ALTER PROCEDURE p1 SQL SECURITY INVOKER;
+INSERT INTO t1 VALUES (1);
+CREATE FUNCTION f1() RETURNS INT RETURN 123;
+INSERT INTO t1 VALUES (1);
+ALTER FUNCTION f1 SQL SECURITY INVOKER;
+INSERT INTO t1 VALUES (1);
+CREATE DATABASE mysqltest1;
+INSERT INTO t1 VALUES (1);
+DROP DATABASE mysqltest1;
+INSERT INTO t1 VALUES (1);
+CREATE USER test_1@localhost;
+INSERT INTO t1 VALUES (1);
+GRANT SELECT ON t2 TO test_1@localhost;
+INSERT INTO t1 VALUES (1);
+GRANT ALL ON f1 TO test_1@localhost;
+INSERT INTO t1 VALUES (1);
+GRANT ALL ON p1 TO test_1@localhost;
+INSERT INTO t1 VALUES (1);
+GRANT USAGE ON *.* TO test_1@localhost;
+INSERT INTO t1 VALUES (1);
+REVOKE ALL PRIVILEGES ON f1 FROM test_1@localhost;
+INSERT INTO t1 VALUES (1);
+REVOKE ALL PRIVILEGES ON p1 FROM test_1@localhost;
+INSERT INTO t1 VALUES (1);
+REVOKE ALL PRIVILEGES ON t2 FROM test_1@localhost;
+INSERT INTO t1 VALUES (1);
+REVOKE USAGE ON *.* FROM test_1@localhost;
+INSERT INTO t1 VALUES (1);
+RENAME USER test_1@localhost TO test_2@localhost;
+INSERT INTO t1 VALUES (1);
+DROP USER test_2@localhost;
+INSERT INTO t1 VALUES (1);
+CREATE PROCEDURE p2()
+BEGIN
+# CREATE USER when a temporary table is open.
+CREATE TEMPORARY TABLE t3 (a INT);
+CREATE USER test_2@localhost;
+INSERT INTO t1 VALUES (1);
+# GRANT select on table to user when a temporary table is open.
+GRANT SELECT ON t2 TO test_2@localhost;
+INSERT INTO t1 VALUES (1);
+# GRANT all on function to user when a temporary table is open.
+GRANT ALL ON f1 TO test_2@localhost;
+INSERT INTO t1 VALUES (1);
+# GRANT all on procedure to user when a temporary table is open.
+GRANT ALL ON p1 TO test_2@localhost;
+INSERT INTO t1 VALUES (1);
+# GRANT usage on *.* to user when a temporary table is open.
+GRANT USAGE ON *.* TO test_2@localhost;
+INSERT INTO t1 VALUES (1);
+# REVOKE ALL PRIVILEGES on function from user when a temporary table is open.
+REVOKE ALL PRIVILEGES ON f1 FROM test_2@localhost;
+INSERT INTO t1 VALUES (1);
+# REVOKE ALL PRIVILEGES on procedure from user when a temporary table is open.
+REVOKE ALL PRIVILEGES ON p1 FROM test_2@localhost;
+INSERT INTO t1 VALUES (1);
+# REVOKE ALL PRIVILEGES on table from user when a temporary table is open.
+REVOKE ALL PRIVILEGES ON t2 FROM test_2@localhost;
+INSERT INTO t1 VALUES (1);
+# REVOKE usage on *.* from user when a temporary table is open.
+REVOKE USAGE ON *.* FROM test_2@localhost;
+INSERT INTO t1 VALUES (1);
+# RENAME USER when a temporary table is open.
+RENAME USER test_2@localhost TO test_3@localhost;
+INSERT INTO t1 VALUES (1);
+# DROP USER when a temporary table is open.
+DROP USER test_3@localhost;
+INSERT INTO t1 VALUES (1);
+DROP TEMPORARY TABLE t3;
+END |
+DROP PROCEDURE p1;
+INSERT INTO t1 VALUES (1);
+DROP PROCEDURE p2;
+INSERT INTO t1 VALUES (1);
+DROP FUNCTION f1;
+INSERT INTO t1 VALUES (1);
+DROP TABLE t2;
+INSERT INTO t1 VALUES (1);
+DROP TEMPORARY TABLE t1;
=== modified file 'mysql-test/suite/rpl/t/rpl_circular_for_4_hosts.test'
--- a/mysql-test/suite/rpl/t/rpl_circular_for_4_hosts.test 2008-10-29 17:38:18 +0000
+++ b/mysql-test/suite/rpl/t/rpl_circular_for_4_hosts.test 2009-12-16 04:41:15 +0000
@@ -233,16 +233,7 @@ COMMIT;
--connection master_a
--enable_query_log
-
---let $wait_condition= SELECT COUNT(*)=400 FROM t2 WHERE c = 1
---connection master_a
---source include/wait_condition.inc
---connection master_b
---source include/wait_condition.inc
---connection master_c
---source include/wait_condition.inc
---connection master_d
---source include/wait_condition.inc
+--source include/circular_rpl_for_4_hosts_sync.inc
--connection master_a
SELECT 'Master A',b,COUNT(*) FROM t2 WHERE c = 1 GROUP BY b ORDER BY b;
@@ -282,15 +273,7 @@ ROLLBACK;
--connection master_a
--enable_query_log
---let $wait_condition= SELECT COUNT(*)=200 FROM t2 WHERE c = 2
---connection master_a
---source include/wait_condition.inc
---connection master_b
---source include/wait_condition.inc
---connection master_c
---source include/wait_condition.inc
---connection master_d
---source include/wait_condition.inc
+--source include/circular_rpl_for_4_hosts_sync.inc
--connection master_a
SELECT 'Master A',b,COUNT(*) FROM t2 WHERE c = 2 GROUP BY b ORDER BY b;
=== modified file 'mysql-test/suite/rpl/t/rpl_create_if_not_exists.test'
--- a/mysql-test/suite/rpl/t/rpl_create_if_not_exists.test 2009-08-13 02:48:57 +0000
+++ b/mysql-test/suite/rpl/t/rpl_create_if_not_exists.test 2010-01-16 07:44:24 +0000
@@ -67,4 +67,57 @@ SHOW EVENTS in mysqltest;
connection master;
DROP DATABASE IF EXISTS mysqltest;
+
+#
+# BUG#47418 RBR fails, failure with mixup of base/temporary/view TABLE DDL
+#
+# Before the patch for this bug, a 'CREATE TABLE IF NOT EXISTS ... SELECT'
+# statement was binlogged as creating a TEMPORARY table whenever a temporary
+# table with the same name existed. The reason was that the temporary table
+# was opened and the results of the 'SELECT' were inserted into it if a
+# temporary table with the same name existed.
+#
+# After the patch for this bug, the base table is created and the results of
+# the 'SELECT' are inserted into it, even though a temporary table exists with
+# the same name, and the statement is still binlogged as a base table.
+#
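+# For illustration, given a temporary table t1 and a base table t3 as below,
+#   CREATE TABLE IF NOT EXISTS t1(c1 INTEGER) SELECT c1 FROM t3;
+# now creates a base table t1 on the master and fills it from t3, while a
+# plain SELECT on the master still reads the temporary t1 that shadows it.
+#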
+
+echo -------------BUG#47418-------------;
+connection master;
+USE test;
+DROP TABLE IF EXISTS t3;
+--enable_warnings
+CREATE TABLE t3(c1 INTEGER);
+INSERT INTO t3 VALUES(33);
+
+CREATE TEMPORARY TABLE t1(c1 INTEGER);
+CREATE TEMPORARY TABLE t2(c1 INTEGER);
+INSERT INTO t1 VALUES(1);
+INSERT INTO t2 VALUES(1);
+
+CREATE TABLE IF NOT EXISTS t1(c1 INTEGER) SELECT c1 FROM t3;
+CREATE TABLE t2(c1 INTEGER) SELECT c1 FROM t3;
+
+# In these two statements, t1 and t2 are the temporary tables; they contain
+# only the value '1'. The records of t3 are not inserted into them.
+SELECT * FROM t1;
+SELECT * FROM t2;
+sync_slave_with_master;
+
+# In these two statements, t1 and t2 are the base tables. The records of t3
+# were inserted into them when CREATE TABLE ... SELECT was executed.
+SELECT * FROM t1;
+SELECT * FROM t2;
+
+connection master;
+DROP TEMPORARY TABLE t1;
+DROP TEMPORARY TABLE t2;
+# In these two statements, t1 and t2 are the base tables.
+SELECT * FROM t1;
+SELECT * FROM t2;
+
+DROP TABLE t1;
+DROP TABLE t2;
+DROP TABLE t3;
+
source include/master-slave-end.inc;
=== modified file 'mysql-test/suite/rpl/t/rpl_do_grant.test'
--- a/mysql-test/suite/rpl/t/rpl_do_grant.test 2009-12-03 11:19:05 +0000
+++ b/mysql-test/suite/rpl/t/rpl_do_grant.test 2010-03-04 08:03:07 +0000
@@ -216,4 +216,104 @@ connection slave;
USE mtr;
call mtr.add_suppression("Slave: Operation DROP USER failed for 'create_rout_db'@'localhost' Error_code: 1396");
+# BUG#49119: Master crashes when executing 'REVOKE ... ON
+# {PROCEDURE|FUNCTION} FROM ...'
+#
+# The tests are divided into two test cases:
+#
+# i) a test case that mimics the one in the bug report.
+#
+# - We show that, despite the fact that a revoke command fails when
+# binlogging is active, the master will not hit an assertion.
+#
+# ii) a test case showing that a statement that partially succeeds on
+# the master will also partially succeed on the slave.
+#
+# - The partially succeeding revoke statement tries to revoke an
+# EXECUTE grant from two users, only one of whom actually has
+# that grant. This causes mysql to drop one of the grants and
+# report an error for the statement. The slave should drop the
+# same grants that the master dropped, and the SQL thread should
+# not stop on the statement failure.
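+#
+# Concretely, after
+#   REVOKE EXECUTE ON PROCEDURE p1 FROM 'user49119'@'localhost', 'root'@'localhost';
+# the statement reports ER_NONEXISTING_PROC_GRANT (root never had the grant),
+# yet SHOW GRANTS FOR 'user49119'@'localhost' should no longer list EXECUTE
+# on p1, on the master and on the slave alike.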
+
+-- echo ######## BUG#49119 #######
+-- echo ### i) test case from the 'how to repeat section'
+-- source include/master-slave-reset.inc
+-- connection master
+
+CREATE TABLE t1(c1 INT);
+DELIMITER |;
+CREATE PROCEDURE p1() SELECT * FROM t1 |
+DELIMITER ;|
+-- error ER_NONEXISTING_PROC_GRANT
+REVOKE EXECUTE ON PROCEDURE p1 FROM 'root'@'localhost';
+
+-- sync_slave_with_master
+
+-- connection master
+DROP TABLE t1;
+DROP PROCEDURE p1;
+
+-- sync_slave_with_master
+
+-- echo ### ii) Test case in which REVOKE partially succeeds
+
+-- connection master
+-- source include/master-slave-reset.inc
+-- connection master
+
+CREATE TABLE t1(c1 INT);
+DELIMITER |;
+CREATE PROCEDURE p1() SELECT * FROM t1 |
+DELIMITER ;|
+
+CREATE USER 'user49119'@'localhost';
+GRANT EXECUTE ON PROCEDURE p1 TO 'user49119'@'localhost';
+
+-- echo ##############################################################
+-- echo ### Showing grants for both users: root and user49119 (master)
+SHOW GRANTS FOR 'user49119'@'localhost';
+SHOW GRANTS FOR CURRENT_USER;
+-- echo ##############################################################
+
+-- sync_slave_with_master
+
+-- echo ##############################################################
+-- echo ### Showing grants for both users: root and user49119 (master)
+SHOW GRANTS FOR 'user49119'@'localhost';
+SHOW GRANTS FOR CURRENT_USER;
+-- echo ##############################################################
+
+-- connection master
+
+-- echo ## This statement will make the revoke fail because root has no
+-- echo ## execute grant. However, it will still revoke the grant for
+-- echo ## user49119.
+-- error ER_NONEXISTING_PROC_GRANT
+REVOKE EXECUTE ON PROCEDURE p1 FROM 'user49119'@'localhost', 'root'@'localhost';
+
+-- echo ##############################################################
+-- echo ### Showing grants for both users: root and user49119 (master)
+-- echo ### after revoke statement failure
+SHOW GRANTS FOR 'user49119'@'localhost';
+SHOW GRANTS FOR CURRENT_USER;
+-- echo ##############################################################
+
+-- sync_slave_with_master
+
+-- echo #############################################################
+-- echo ### Showing grants for both users: root and user49119 (slave)
+-- echo ### after revoke statement failure (should match the master)
+SHOW GRANTS FOR 'user49119'@'localhost';
+SHOW GRANTS FOR CURRENT_USER;
+-- echo ##############################################################
+
+-- connection master
+DROP TABLE t1;
+DROP PROCEDURE p1;
+DROP USER 'user49119'@'localhost';
+
+-- sync_slave_with_master
+
--echo "End of test"
=== modified file 'mysql-test/suite/rpl/t/rpl_drop_temp.test'
--- a/mysql-test/suite/rpl/t/rpl_drop_temp.test 2009-09-13 20:52:14 +0000
+++ b/mysql-test/suite/rpl/t/rpl_drop_temp.test 2009-12-31 04:04:19 +0000
@@ -34,4 +34,36 @@ connection master;
drop database mysqltest;
sync_slave_with_master;
+#
+# Bug#49137
+# This test verifies whether a multi-table DROP TEMPORARY TABLE
+# causes different errors on the master and the slave
+# when one or more of the tables do not exist.
+#
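+# For example, with only the temporary table t1 existing,
+#   DROP TEMPORARY TABLE t1, t2;
+# drops t1 and fails with error 1051 (unknown table) for t2; the slave is
+# expected to end up in the same state for the binlogged statement instead
+# of diverging.
+#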
+
+connection master;
+DROP TEMPORARY TABLE IF EXISTS tmp1;
+CREATE TEMPORARY TABLE t1 ( a int );
+--error 1051
+DROP TEMPORARY TABLE t1, t2;
+--error 1051
+DROP TEMPORARY TABLE tmp2;
+sync_slave_with_master;
+
+connection slave;
+stop slave;
+wait_for_slave_to_stop;
+
+--echo **** On Master ****
+connection master;
+CREATE TEMPORARY TABLE tmp3 (a int);
+DROP TEMPORARY TABLE tmp3;
+
+connection slave;
+SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1;
+START SLAVE;
+
+connection master;
+sync_slave_with_master;
+
# End of 4.1 tests
=== added file 'mysql-test/suite/rpl/t/rpl_geometry.test'
--- a/mysql-test/suite/rpl/t/rpl_geometry.test 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/rpl/t/rpl_geometry.test 2010-01-05 06:25:29 +0000
@@ -0,0 +1,26 @@
+source include/master-slave.inc;
+source include/have_binlog_format_row.inc;
+
+#
+# Bug#48776, Bug#43784
+#
+create table t1(a varchar(100),
+ b multipoint not null,
+ c varchar(256));
+
+insert into t1 set
+ a='hello',
+ b=geomfromtext('multipoint(1 1)'),
+ c='geometry';
+
+create table t2 (a int(11) not null auto_increment primary key,
+ b geometrycollection default null,
+ c decimal(10,0));
+
+insert into t2(c) values (null);
+
+sync_slave_with_master;
+
+connection master;
+drop table t1, t2;
+source include/master-slave-end.inc;
=== modified file 'mysql-test/suite/rpl/t/rpl_get_master_version_and_clock.test'
--- a/mysql-test/suite/rpl/t/rpl_get_master_version_and_clock.test 2009-10-29 02:26:59 +0000
+++ b/mysql-test/suite/rpl/t/rpl_get_master_version_and_clock.test 2010-01-27 02:52:13 +0000
@@ -16,12 +16,13 @@
source include/master-slave.inc;
source include/have_debug.inc;
-call mtr.add_suppression("Get master clock failed with error: ");
-call mtr.add_suppression("Get master SERVER_ID failed with error: ");
-call mtr.add_suppression("Slave I/O: Master command COM_REGISTER_SLAVE failed: failed registering on master, reconnecting to try again");
+
+connection slave;
+call mtr.add_suppression("Slave I/O: Master command COM_REGISTER_SLAVE failed: .*");
call mtr.add_suppression("Fatal error: The slave I/O thread stops because master and slave have equal MySQL server ids; .*");
+call mtr.add_suppression("Slave I/O thread .* register on master");
+
#Test case 1: Try to get the value of the UNIX_TIMESTAMP from master under network disconnection
-connection slave;
let $debug_saved= `select @@global.debug`;
let $debug_lock= "debug_lock.before_get_UNIX_TIMESTAMP";
=== modified file 'mysql-test/suite/rpl/t/rpl_killed_ddl.test'
--- a/mysql-test/suite/rpl/t/rpl_killed_ddl.test 2009-09-25 06:42:43 +0000
+++ b/mysql-test/suite/rpl/t/rpl_killed_ddl.test 2010-03-04 08:03:07 +0000
@@ -156,13 +156,12 @@ source include/kill_query_and_diff_maste
send DROP DATABASE d1;
source include/kill_query_and_diff_master_slave.inc;
-send DROP DATABASE d2;
+send DROP DATABASE IF EXISTS d2;
source include/kill_query_and_diff_master_slave.inc;
######## EVENT ########
-let $diff_statement= SELECT event_name, event_body, execute_at
- FROM information_schema.events where event_name like 'e%';
+let $diff_statement= SELECT event_name, event_body, execute_at FROM information_schema.events where event_name like 'e%';
send CREATE EVENT e2
ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 1 DAY
@@ -229,7 +228,7 @@ source include/kill_query_and_diff_maste
send DROP PROCEDURE p1;
source include/kill_query_and_diff_master_slave.inc;
-# Temporarily disabled, see comment above for DROP FUNCTION IF EXISTS
+# Temporarily disabled because of bug#43353, see comment above for DROP FUNCTION IF EXISTS
#send DROP PROCEDURE IF EXISTS p2;
#source include/kill_query_and_diff_master_slave.inc;
@@ -280,6 +279,11 @@ source include/kill_query_and_diff_maste
######## TRIGGER ########
+# Make sure table t4 exists
+connection master;
+CREATE TABLE IF NOT EXISTS t4 (a int);
+connection master1;
+
let $diff_statement= SHOW TRIGGERS LIKE 'v%';
DELIMITER //;
=== added file 'mysql-test/suite/rpl/t/rpl_loaddata_concurrent.test'
--- a/mysql-test/suite/rpl/t/rpl_loaddata_concurrent.test 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/rpl/t/rpl_loaddata_concurrent.test 2009-12-16 04:25:46 +0000
@@ -0,0 +1,13 @@
+-- source include/not_ndb_default.inc
+-- source include/have_log_bin.inc
+
+let $binlog_start= query_get_value(SHOW MASTER STATUS, Position, 1);
+CREATE TABLE t1 (c1 char(50));
+LOAD DATA INFILE '../../std_data/words.dat' INTO TABLE t1;
+LOAD DATA CONCURRENT INFILE '../../std_data/words.dat' INTO TABLE t1;
+-- source include/show_binlog_events.inc
+DROP TABLE t1;
+
+let $lock_option= CONCURRENT;
+let $engine_type=MyISAM;
+-- source extra/rpl_tests/rpl_loaddata.test
=== added file 'mysql-test/suite/rpl/t/rpl_manual_change_index_file.test'
--- a/mysql-test/suite/rpl/t/rpl_manual_change_index_file.test 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/rpl/t/rpl_manual_change_index_file.test 2010-01-08 15:32:40 +0000
@@ -0,0 +1,106 @@
+source include/master-slave.inc;
+
+#
+# BUG#28421 Infinite loop on slave relay logs
+#
+# Manually deleting one or more entries from 'master-bin.index' used to make
+# the master loop infinitely, resending the same binlog file.
+#
+# Manually changing the index file is an illegal action, so when this
+# happens we send a fatal error to the slave and close the dump session.
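+#
+# Sketch of the scenario exercised below:
+#   master-bin.index before the edit:   ./master-bin.000001
+#                                       ./master-bin.000002
+#   master-bin.index after the edit:    ./master-bin.000002
+# After the next FLUSH LOGS the master expects master-bin.000003 on the
+# third line of the index file, finds no such line, and reports error 1236
+# to the slave instead of looping forever.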
+
+FLUSH LOGS;
+# Now, 2 entries in index file.
+# ./master-bin.000001
+# ./master-bin.000002
+
+CREATE TABLE t1(c1 INT);
+# Now, the current dump file (master-bin.000002) is on the second line of
+# the index file.
+sync_slave_with_master;
+# Now, all events have been replicated to the slave. As the current dump
+# file (master-bin.000002) is the last binlog file, the master is waiting
+# for new events.
+
+connection master;
+# Delete './master-bin.000001' from index file.
+let $MYSQLD_DATADIR= `SELECT @@DATADIR`;
+let $file= $MYSQLD_DATADIR/master-bin.index;
+source include/truncate_file.inc;
+
+if (`SELECT CONVERT(@@VERSION_COMPILE_OS USING latin1) NOT IN ('Win32', 'Win64', 'Windows')`)
+{
+append_file $MYSQLD_DATADIR/master-bin.index;
+./master-bin.000002
+EOF
+sleep 0.00000001;
+}
+
+if (`SELECT CONVERT(@@VERSION_COMPILE_OS USING latin1) IN ('Win32', 'Win64', 'Windows')`)
+{
+append_file $MYSQLD_DATADIR/master-bin.index;
+.\master-bin.000002
+EOF
+sleep 0.00000001;
+}
+
+# Now, only 1 entry in index file. ./master-bin.000002
+
+# Generate master-bin.000003, but it is on the second line.
+FLUSH LOGS;
+# Now, 2 entries in index file.
+# ./master-bin.000002
+# ./master-bin.000003
+
+# Now the master knows that a new binlog file (master-bin.000003) has been
+# generated. It expects the new binlog file to be on the third line of the
+# index file, but there is no third line, so the master sends an error to
+# the slave.
+call mtr.add_suppression('Got fatal error 1236 from master when reading data from binary log: .*could not find next log');
+connection slave;
+source include/wait_for_slave_io_to_stop.inc;
+let $last_error= query_get_value(SHOW SLAVE STATUS, Last_IO_Error, 1);
+echo Last_IO_Error;
+echo $last_error;
+
+connection master;
+
+source include/truncate_file.inc;
+
+if (`SELECT CONVERT(@@VERSION_COMPILE_OS USING latin1) NOT IN ('Win32', 'Win64', 'Windows')`)
+{
+append_file $MYSQLD_DATADIR/master-bin.index;
+./master-bin.000001
+./master-bin.000002
+./master-bin.000003
+EOF
+sleep 0.00000001;
+}
+
+if (`SELECT CONVERT(@@VERSION_COMPILE_OS USING latin1) IN ('Win32', 'Win64', 'Windows')`)
+{
+append_file $MYSQLD_DATADIR/master-bin.index;
+.\master-bin.000001
+.\master-bin.000002
+.\master-bin.000003
+EOF
+sleep 0.00000001;
+}
+
+CREATE TABLE t2(c1 INT);
+FLUSH LOGS;
+CREATE TABLE t3(c1 INT);
+FLUSH LOGS;
+CREATE TABLE t4(c1 INT);
+
+connection slave;
+START SLAVE IO_THREAD;
+source include/wait_for_slave_io_to_start.inc;
+
+connection master;
+sync_slave_with_master;
+SHOW TABLES;
+
+connection master;
+DROP TABLE t1, t2, t3, t4;
+source include/master-slave-end.inc;
=== modified file 'mysql-test/suite/rpl/t/rpl_misc_functions.test'
--- a/mysql-test/suite/rpl/t/rpl_misc_functions.test 2008-10-07 08:25:12 +0000
+++ b/mysql-test/suite/rpl/t/rpl_misc_functions.test 2010-01-13 09:00:03 +0000
@@ -3,12 +3,16 @@
#
source include/master-slave.inc;
+CALL mtr.add_suppression('Statement may not be safe to log in statement format.');
+
create table t1(id int, i int, r1 int, r2 int, p varchar(100));
insert into t1 values(1, connection_id(), 0, 0, "");
# don't put rand and password in the same query, to see if they replicate
# independently
# Pure rand test
+--disable_warnings
insert into t1 values(2, 0, rand()*1000, rand()*1000, "");
+--enable_warnings
# change the rand suite on the master (we do this because otherwise password()
# benefits from the fact that the above rand() is well replicated :
# it picks the same sequence element, which hides a possible bug in password() replication.
@@ -19,7 +23,9 @@ set sql_log_bin=1;
# Pure password test
insert into t1 values(3, 0, 0, 0, password('does_this_work?'));
# "altogether now"
+--disable_warnings
insert into t1 values(4, connection_id(), rand()*1000, rand()*1000, password('does_this_still_work?'));
+--enable_warnings
select * into outfile 'rpl_misc_functions.outfile' from t1;
let $MYSQLD_DATADIR= `select @@datadir`;
sync_slave_with_master;
@@ -73,11 +79,13 @@ DELIMITER ;|
# Exercise the functions and procedures then compare the results on
# the master to those on the slave.
+--disable_warnings
CALL test_replication_sp1();
CALL test_replication_sp2();
INSERT INTO t1 (col_a) VALUES (test_replication_sf());
INSERT INTO t1 (col_a) VALUES (test_replication_sf());
INSERT INTO t1 (col_a) VALUES (test_replication_sf());
+--enable_warnings
--sync_slave_with_master
=== modified file 'mysql-test/suite/rpl/t/rpl_nondeterministic_functions.test'
--- a/mysql-test/suite/rpl/t/rpl_nondeterministic_functions.test 2009-11-18 14:50:31 +0000
+++ b/mysql-test/suite/rpl/t/rpl_nondeterministic_functions.test 2010-01-13 09:00:03 +0000
@@ -17,6 +17,8 @@
--source include/master-slave.inc
+CALL mtr.add_suppression('Statement may not be safe to log in statement format.');
+
CREATE TABLE t1 (a VARCHAR(1000));
# We replicate the connection_id in the query_log_event
@@ -41,7 +43,9 @@ INSERT INTO t1 VALUES
(UTC_TIMESTAMP());
# We replicate the random seed in a rand_log_event
+--disable_warnings
INSERT INTO t1 VALUES (RAND());
+--enable_warnings
# We replicate the last_insert_id in an intvar_log_event
INSERT INTO t1 VALUES (LAST_INSERT_ID());
=== modified file 'mysql-test/suite/rpl/t/rpl_optimize.test'
--- a/mysql-test/suite/rpl/t/rpl_optimize.test 2009-06-05 15:35:22 +0000
+++ b/mysql-test/suite/rpl/t/rpl_optimize.test 2010-03-04 08:03:07 +0000
@@ -15,6 +15,8 @@
-- source include/not_staging.inc
-- source include/master-slave.inc
+CALL mtr.add_suppression('Statement may not be safe to log in statement format.');
+
create table t1 (a int not null auto_increment primary key, b int, key(b));
INSERT INTO t1 (a) VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10);
INSERT INTO t1 (a) SELECT null FROM t1;
@@ -32,8 +34,8 @@ INSERT INTO t1 (a) SELECT null FROM t1;
INSERT INTO t1 (a) SELECT null FROM t1;
save_master_pos;
# a few updates to force OPTIMIZE to do something
-update t1 set b=(a/2*rand());
--disable_warnings
+update t1 set b=(a/2*rand());
delete from t1 order by b limit 10000;
--enable_warnings
=== renamed file 'mysql-test/suite/binlog/t/binlog_tbl_metadata.test' => 'mysql-test/suite/rpl/t/rpl_row_tbl_metadata.test'
--- a/mysql-test/suite/binlog/t/binlog_tbl_metadata.test 2009-05-12 11:53:46 +0000
+++ b/mysql-test/suite/rpl/t/rpl_row_tbl_metadata.test 2010-01-14 10:49:40 +0000
@@ -2,38 +2,39 @@
# BUG#42749: infinite loop writing to row based binlog - processlist shows
# "freeing items"
#
+#
# WHY
# ===
-#
-# This bug would make table map event to report data_written one byte less
-# than what would actually be written in its body. This would cause one byte shorter
-# event end_log_pos. The ultimate impact was that it would make fixing the
-# position in MYSQL_BIN_LOG::write_cache bogus or end up in an infinite loop.
+#
+# This bug made the table map event report data_written as one
+# byte less than what was actually written in its body, which
+# made the event's end_log_pos one byte too short. The ultimate
+# impact was that fixing the position in
+# MYSQL_BIN_LOG::write_cache became bogus or ended up in an
+# infinite loop.
#
# HOW
# ===
#
# Checking that the patch fixes the problem is done as follows:
-# i) a table with several fields is created;
+#
+# i) one table with m_field_metadata sized at 290
# ii) an insert is performed;
# iii) the logs are flushed;
# iv) mysqlbinlog is used to check if it succeeds.
#
-# In step iv), before the bug was fixed, the test case would fail with
-# mysqlbinlog reporting that it was unable to succeed in reading the event.
-#
+# In step iv), before the bug was fixed, the test case would fail
+# with mysqlbinlog reporting that it was unable to read the event.
--- source include/have_log_bin.inc
+-- source include/master-slave.inc
-- source include/have_innodb.inc
-- source include/have_binlog_format_row.inc
--- connection default
-
-RESET MASTER;
-- disable_warnings
DROP TABLE IF EXISTS `t1`;
-- enable_warnings
+-- echo ### TABLE with field_metadata_size == 290
CREATE TABLE `t1` (
`c1` int(11) NOT NULL AUTO_INCREMENT,
`c2` varchar(30) NOT NULL,
@@ -185,15 +186,155 @@ CREATE TABLE `t1` (
) ENGINE=InnoDB;
LOCK TABLES `t1` WRITE;
+INSERT INTO `t1`(c2) VALUES ('1');
+FLUSH LOGS;
+
+-- sync_slave_with_master
+-- connection master
-INSERT INTO `t1` VALUES ('1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1','1');
+-- echo ### assertion: the slave replicated event successfully and tables match
+-- let $diff_table_1=master:test.t1
+-- let $diff_table_2=slave:test.t1
+-- source include/diff_tables.inc
DROP TABLE `t1`;
-FLUSH LOGS;
+-- connection master
+-- sync_slave_with_master
+-- connection master
-- echo === Using mysqlbinlog to detect failure. Before the patch mysqlbinlog would find a corrupted event, thence would fail.
-- let $MYSQLD_DATADIR= `SELECT @@datadir`;
-- exec $MYSQL_BINLOG $MYSQLD_DATADIR/master-bin.000001 > $MYSQLTEST_VARDIR/tmp/mysqlbinlog_bug42749.binlog
-- remove_file $MYSQLTEST_VARDIR/tmp/mysqlbinlog_bug42749.binlog
+
+#############################################################
+# BUG#50018: binlog corruption when table has many columns
+#
+# Same test from BUG#42749, but now we generate some SQL which
+# creates and inserts into tables with metadata size from 249
+# to 258.
+#
+# The test works as follows:
+# 1. SQL for several CREATE TABLE and INSERTS are generated
+# into a file.
+# 2. This file is then "sourced"
+# 3. The slave is synchronized with the master
+# 4. FLUSH LOGS on master
+# 5. Compare tables on master and slave.
+# 6. run mysqlbinlog on master's binary log
+#
+# Steps #5 and #6 assert that binary log is not corrupted
+# in both cases: when slave is replaying events and when
+# mysqlbinlog is used to read the binary log
+
+-- source include/master-slave-reset.inc
+-- connection master
+
+# Create several tables with field_metadata_size ranging
+# from 249 to 258 (so that we cover 251 and 255 range).
+# This should exercise the switch between using 1 or 3
+# bytes to pack m_field_metadata_size.
+#
+# Each varchar field takes up to 2 metadata bytes, see:
+#
+# Field_varstring::do_save_field_metadata (field.cc)
+#
+# The float field takes 1 byte, see:
+#
+# Field_float::do_save_field_metadata (field.cc)
+#
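+# Rough arithmetic behind the 249..258 range (assuming the byte counts
+# noted above): each of the 124 VARCHAR columns contributes 2 metadata
+# bytes and each FLOAT column contributes 1, so a table with i FLOAT
+# columns has field_metadata_size = 124 * 2 + i = 248 + i; i = 1..10
+# covers 249..258 and crosses the 251/255 packing boundary.
+#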
+
+-- let $generated_sql= $MYSQLTEST_VARDIR/tmp/b50018.sql
+-- let B50018_FILE= $generated_sql
+
+-- echo ### action: generating several tables with different metadata
+-- echo ### sizes (resorting to perl)
+-- perl
+my $file= $ENV{'B50018_FILE'};
+open(FILE, ">", "$file") or die "Unable to open bug 50018 generated SQL file: $!" ;
+
+my $tables= "";
+my $ntables= 10;
+my $base_ncols= 124;
+
+for my $i (1..$ntables)
+{
+ my $ncols= $base_ncols + $i;
+ # field_metadata_size for this table: 2 bytes per VARCHAR column
+ # plus 1 byte per FLOAT column (see the notes above)
+ my $metadata_size= $base_ncols * 2 + $i;
+
+ print FILE "-- echo ### testing table with " . ($base_ncols*2 + $i) . " field metadata size.\n";
+ print FILE "CREATE TABLE t$i (\n";
+ for my $n (1..$base_ncols)
+ {
+ print FILE "c$n VARCHAR(30) NOT NULL DEFAULT 'BUG#50018',\n";
+ }
+
+ for my $n (1..$i)
+ {
+ print FILE "c" . ($base_ncols+$n) . " FLOAT NOT NULL DEFAULT 0";
+ if ($n < $i)
+ {
+ print FILE ",\n";
+ }
+ }
+
+ print FILE ") Engine=InnoDB;\n";
+
+ $tables.= " t$i WRITE";
+ if ($i < $ntables)
+ {
+ $tables .=",";
+ }
+
+ print FILE "LOCK TABLES t$i WRITE;\n";
+ print FILE "INSERT INTO t$i(c". ($base_ncols+1) . ") VALUES (50018);\n";
+ print FILE "UNLOCK TABLES;";
+}
+
+close(FILE);
+
+EOF
+
+## we don't need this in the result file
+## however, for debugging purposes you
+## may want to reactivate query logging
+-- disable_query_log
+-- source $generated_sql
+-- enable_query_log
+
+-- sync_slave_with_master
+-- connection master
+
+FLUSH LOGS;
+
+-- let $ntables=10
+while($ntables)
+{
+ -- echo ### assertion: the slave replicated event successfully and tables match for t$ntables
+ -- let $diff_table_1=master:test.t$ntables
+ -- let $diff_table_2=slave:test.t$ntables
+ -- source include/diff_tables.inc
+
+ -- connection master
+ -- disable_query_log
+ -- eval DROP TABLE t$ntables
+ -- enable_query_log
+ -- sync_slave_with_master
+ -- connection master
+
+ -- dec $ntables
+}
+
+-- echo ### assertion: check that binlog is not corrupt. Using mysqlbinlog to
+-- echo ### detect failure. Before the patch mysqlbinlog would find
+-- echo ### a corrupted event, thence would fail.
+-- let $MYSQLD_DATADIR= `SELECT @@datadir`;
+-- exec $MYSQL_BINLOG -v --hex $MYSQLD_DATADIR/master-bin.000001 > $MYSQLTEST_VARDIR/tmp/mysqlbinlog_bug50018.binlog
+
+## clean up
+## For debugging purposes you might not want to remove these
+-- remove_file $MYSQLTEST_VARDIR/tmp/mysqlbinlog_bug50018.binlog
+-- remove_file $generated_sql
+-- source include/master-slave-end.inc
=== added file 'mysql-test/suite/rpl/t/rpl_set_null_innodb.test'
--- a/mysql-test/suite/rpl/t/rpl_set_null_innodb.test 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/rpl/t/rpl_set_null_innodb.test 2010-01-21 17:20:24 +0000
@@ -0,0 +1,6 @@
+-- source include/have_binlog_format_mixed_or_row.inc
+-- source include/master-slave.inc
+-- source include/have_innodb.inc
+
+-- let $engine= InnoDB
+-- source extra/rpl_tests/rpl_set_null.test
=== added file 'mysql-test/suite/rpl/t/rpl_set_null_myisam.test'
--- a/mysql-test/suite/rpl/t/rpl_set_null_myisam.test 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/rpl/t/rpl_set_null_myisam.test 2010-01-21 17:20:24 +0000
@@ -0,0 +1,5 @@
+-- source include/have_binlog_format_mixed_or_row.inc
+-- source include/master-slave.inc
+
+-- let $engine= MyISAM
+-- source extra/rpl_tests/rpl_set_null.test
=== added file 'mysql-test/suite/rpl/t/rpl_stm_binlog_direct-master.opt'
--- a/mysql-test/suite/rpl/t/rpl_stm_binlog_direct-master.opt 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/rpl/t/rpl_stm_binlog_direct-master.opt 2010-01-20 19:08:16 +0000
@@ -0,0 +1 @@
+--binlog-direct-non-transactional-updates
=== added file 'mysql-test/suite/rpl/t/rpl_stm_binlog_direct.test'
--- a/mysql-test/suite/rpl/t/rpl_stm_binlog_direct.test 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/rpl/t/rpl_stm_binlog_direct.test 2010-01-20 19:08:16 +0000
@@ -0,0 +1,230 @@
+################################################################################
+# This test case checks if the option "binlog-direct-non-transactional-updates"
+# causes non-transactional changes in the statement format to be written to the
+# binary log as soon as the statement commits.
+#
+# In what follows, we use the include file rpl_mixing_engines.inc to generate
+# sql commands from a format string. The format string consists of a sequence of
+# 'codes' separated by spaces. Before each set of commands, we paste the expected
+# sequence in the binary log. The following codes exist:
+#
+# - Define the scope of a transaction:
+# B - Begin.
+# C - Commit.
+# R - Rollback.
+#
+# - Change only T-Tables:
+# T - Updates a T-Table.
+# T-trig - Updates T-Tables through a trigger.
+# T-func - Updates T-Tables through a function.
+# T-proc - Updates T-Tables through a procedure.
+# eT - Fails while updating the first tuple in a T-Table.
+# Te - Fails while updating an n-tuple (n > 1) in a T-Table.
+# Te-trig - Fails while updating an n-tuple (n > 1) in a T-Table.
+# Te-func - Fails while updating an n-tuple (n > 1) in a T-Table.
+#
+# - Change only N-Tables:
+# N - Updates a N-Table.
+# N-trig - Updates N-Tables through a trigger.
+# N-func - Updates N-Tables through a function.
+# N-proc - Updates N-Tables through a procedure.
+# eN - Fails while updating the first tuple in a N-Table.
+# Ne - Fails while updating an n-tuple (n > 1) in a N-Table.
+# Ne-trig - Fails while updating an n-tuple (n > 1) in a N-Table.
+# Ne-func - Fails while updating an n-tuple (n > 1) in a N-Table.
+################################################################################
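+#
+# For example, the format string 'B T N C' expands (roughly) to:
+#   BEGIN; <update a T-Table>; <update an N-Table>; COMMIT;
+# and with binlog-direct-non-transactional-updates enabled the N-Table
+# change is expected to appear in the binary log ahead of the transaction,
+# i.e. as the "N B T C" statement entries referred to below.
+#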
+
+--source include/have_binlog_format_statement.inc
+--source include/master-slave.inc
+--source include/have_innodb.inc
+
+set @@session.binlog_direct_non_transactional_updates= TRUE;
+
+--echo #########################################################################
+--echo # CONFIGURATION
+--echo #########################################################################
+
+--let $engine_type= Innodb
+SET @commands= 'configure';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+--echo #########################################################################
+--echo # 1 - BINLOG ORDER
+--echo #########################################################################
+connection master;
+
+--echo
+--echo
+--echo
+--echo
+--echo #
+--echo #3) Generates in the binlog what follows:
+--echo # --> STMT "N B T C" entries, format S.
+--echo #
+SET @commands= 'B T N C';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T N-trig C';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T N-func C';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T N-proc C';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-trig N C';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-trig N-trig C';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-trig N-func C';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-trig N-proc C';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-func N C';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-func N-trig C';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-func N-func C';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-func N-proc C';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-proc N C';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-proc N-trig C';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-proc N-func C';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-proc N-proc C';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+
+--echo
+--echo
+--echo
+--echo
+--echo #
+--echo #3.e) Generates in the binlog what follows if T-* fails:
+--echo # --> STMT "N" entry, format S.
+--echo # Otherwise, what follows if N-* fails and a N-Table is changed:
+--echo # --> STMT "N B T C" entries, format S.
+--echo #
+SET @commands= 'B eT N C';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B Te N C';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T eN C';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T Ne C';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+
+--echo
+--echo
+--echo
+--echo
+--echo #
+--echo #4) Generates in the binlog what follows:
+--echo # --> STMT "N B T R" entries, format S.
+--echo #
+SET @commands= 'B T N R';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T N-trig R';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T N-func R';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T N-proc R';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-trig N R';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-trig N-trig R';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-trig N-func R';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-trig N-proc R';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-func N R';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-func N-trig R';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-func N-func R';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-func N-proc R';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-proc N R';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-proc N-trig R';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-proc N-func R';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T-proc N-proc R';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+
+--echo
+--echo
+--echo
+--echo
+--echo #
+--echo #4.e) Generates in the binlog what follows if T* fails:
+--echo # --> STMT "B N C" entry, format S.
+--echo # Otherwise, what follows if N* fails and a N-Table is changed:
+--echo # --> STMT "N" entries, format S.
+--echo #
+SET @commands= 'B eT N R';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B Te N R';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T eN R';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+SET @commands= 'B T Ne R';
+--source extra/rpl_tests/rpl_mixing_engines.inc
+
+
+--echo ###################################################################################
+--echo # CHECK CONSISTENCY
+--echo ###################################################################################
+connection master;
+sync_slave_with_master;
+
+--exec $MYSQL_DUMP --compact --order-by-primary --skip-extended-insert --no-create-info test > $MYSQLTEST_VARDIR/tmp/test-nmt-master.sql
+--exec $MYSQL_DUMP_SLAVE --compact --order-by-primary --skip-extended-insert --no-create-info test > $MYSQLTEST_VARDIR/tmp/test-nmt-slave.sql
+--diff_files $MYSQLTEST_VARDIR/tmp/test-nmt-master.sql $MYSQLTEST_VARDIR/tmp/test-nmt-slave.sql
+
+--echo ###################################################################################
+--echo # CLEAN
+--echo ###################################################################################
+SET @commands= 'clean';
+--source extra/rpl_tests/rpl_mixing_engines.inc
=== modified file 'mysql-test/suite/rpl/t/rpl_stm_maria.test'
--- a/mysql-test/suite/rpl/t/rpl_stm_maria.test 2008-01-20 04:25:26 +0000
+++ b/mysql-test/suite/rpl/t/rpl_stm_maria.test 2010-03-04 08:03:07 +0000
@@ -4,6 +4,9 @@
--source include/have_binlog_format_mixed_or_statement.inc
--source include/master-slave.inc
+# Suppress warnings that rand() is unsafe in statement binlog mode
+CALL mtr.add_suppression('Statement may not be safe to log in statement format.');
+
--disable_warnings
DROP TABLE IF EXISTS t1;
DROP TABLE IF EXISTS t2;
@@ -37,10 +40,12 @@ insert into t3 values(100,"log",0,0,0);
SET @@RAND_SEED1=658490765, @@RAND_SEED2=635893186;
+--disable_warnings
insert into t1 values(1,1,rand()),(NULL,2,rand());
insert into t2 (b) values(last_insert_id());
insert into t2 values(3,0),(NULL,0);
insert into t2 values(NULL,0),(500,0);
+--enable_warnings
select a,b, truncate(rand_value,4) from t1;
select * from t2;
=== modified file 'mysql-test/suite/rpl/t/rpl_stm_until.test'
--- a/mysql-test/suite/rpl/t/rpl_stm_until.test 2008-07-23 11:23:52 +0000
+++ b/mysql-test/suite/rpl/t/rpl_stm_until.test 2010-01-27 17:27:49 +0000
@@ -98,3 +98,102 @@ start slave until relay_log_file='slave-
start slave sql_thread;
start slave until master_log_file='master-bin.000001', master_log_pos=776;
+#
+# bug#47210 first execution of "start slave until" stops too early
+#
+# testing that a slave rotate event caused by stopping the slave
+# no longer interferes with the UNTIL condition.
+#
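+# Previously, that extra Rotate event in the relay log was counted against
+# the master UNTIL position, so the first START SLAVE UNTIL could stop one
+# event too early; below we check that the slave stops exactly at the
+# requested master_log_pos.
+#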
+
+connection slave;
+source include/stop_slave.inc;
+--disable_warnings
+drop table if exists t1;
+--enable_warnings
+reset slave;
+--replace_result $MASTER_MYPORT MASTER_PORT
+eval change master to master_host='127.0.0.1',master_port=$MASTER_MYPORT, master_user='root';
+
+connection master;
+--disable_warnings
+drop table if exists t1;
+--enable_warnings
+reset master;
+create table t1 (a int primary key auto_increment);
+save_master_pos;
+let $master_pos= query_get_value(SHOW MASTER STATUS, Position, 1);
+
+connection slave;
+start slave;
+sync_with_master;
+
+# At this point the slave will close the relay log, stamping it with its own
+# Rotate log event. This event is no longer examined when checking the
+# master UNTIL position.
+source include/stop_slave.inc;
+let $slave_exec_pos= query_get_value(SHOW SLAVE STATUS, Exec_Master_Log_Pos, 1);
+
+--echo master and slave are in sync now
+let $diff_pos= `select $master_pos - $slave_exec_pos`;
+eval select $diff_pos as zero;
+
+connection master;
+insert into t1 set a=null;
+let $until_pos= query_get_value(SHOW MASTER STATUS, Position, 1);
+insert into t1 set a=null;
+select count(*) as two from t1;
+
+connection slave;
+--replace_result $until_pos UNTIL_POS;
+eval start slave until master_log_file='master-bin.000001', master_log_pos= $until_pos;
+source include/wait_for_slave_sql_to_stop.inc;
+let $slave_exec_pos= query_get_value(SHOW SLAVE STATUS, Exec_Master_Log_Pos, 1);
+--echo slave stopped at the prescribed position
+let $diff_pos= `select $until_pos - $slave_exec_pos`;
+eval select $diff_pos as zero;
+select count(*) as one from t1;
+
+
+connection master;
+drop table t1;
+
+connection slave;
+start slave;
+sync_with_master;
+
+# Bug #47142 "slave start until" stops 1 event too late in 4.1 to 5.0 replication
+#
+# Testing fixes that refine the start position of events from a pre-5.0
+# master and thereby provide correct execution of
+# START SLAVE UNTIL ... master_log_pos= x;
+# Keep the test at the end of the file because it manipulates binlog files,
+# substituting the genuine one with a binlog prepared on a 4.1 server.
+#
+
+--source include/master-slave-reset.inc
+
+connection master;
+drop table if exists t1; # there is create table t1 in bug47142_master-bin.000001
+flush logs;
+let $MYSQLD_DATADIR= `select @@datadir`;
+--remove_file $MYSQLD_DATADIR/master-bin.000001
+--copy_file $MYSQL_TEST_DIR/std_data/bug47142_master-bin.000001 $MYSQLD_DATADIR/master-bin.000001
+
+connection slave;
+stop slave;
+reset slave;
+start slave until master_log_file='master-bin.000001', master_log_pos=294 /* to stop right before DROP */;
+--source include/wait_for_slave_sql_to_stop.inc
+
+show tables /* t1 must exist */;
+
+# clean-up of Bug #47142 testing
+
+drop table t1; # drop on slave only, master does not have t1.
+stop slave;
+reset slave;
+
+connection master;
+reset master;
+
+# End of tests
=== modified file 'mysql-test/suite/rpl/t/rpl_temporary.test'
--- a/mysql-test/suite/rpl/t/rpl_temporary.test 2010-01-11 13:15:28 +0000
+++ b/mysql-test/suite/rpl/t/rpl_temporary.test 2010-03-04 08:03:07 +0000
@@ -116,8 +116,10 @@ SET @@session.sql_select_limit=10, @@ses
SELECT @@session.sql_select_limit = @save_select_limit; #shouldn't have changed
# Now as root, to be sure it works
connection con2;
+SET @save_conn_id= connection_id();
SET @@session.pseudo_thread_id=100;
SET @@session.pseudo_thread_id=connection_id();
+SET @@session.pseudo_thread_id=@save_conn_id;
SET @@session.sql_log_bin=0;
SET @@session.sql_log_bin=1;
=== modified file 'mysql-test/suite/rpl/t/rpl_timezone.test'
--- a/mysql-test/suite/rpl/t/rpl_timezone.test 2009-03-25 10:42:16 +0000
+++ b/mysql-test/suite/rpl/t/rpl_timezone.test 2009-12-16 19:53:56 +0000
@@ -179,8 +179,11 @@ insert into t1 values('2008-12-23 19:39:
--connection master1
SET @@session.time_zone='+02:00';
insert delayed into t1 values ('2008-12-23 19:39:39',2);
-# Forces table t1 to be closed and flushes the query cache.
-# This makes sure that 'delayed insert' is executed before next statement.
+
+# wait for the delayed insert to be executed
+let $wait_condition= SELECT date FROM t1 WHERE a=2;
+--source include/wait_condition.inc
+
flush table t1;
flush logs;
select * from t1;
=== added file 'mysql-test/suite/rpl/t/rpl_tmp_table_and_DDL.test'
--- a/mysql-test/suite/rpl/t/rpl_tmp_table_and_DDL.test 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/rpl/t/rpl_tmp_table_and_DDL.test 2010-01-22 09:38:21 +0000
@@ -0,0 +1,13 @@
+#
+# Bug#49132
+# This test verifies whether executing a DDL statement before manipulating
+# a temporary table causes row-based replication to break with a 'table
+# does not exist' error when using the MyISAM engine.
+#
+
+source include/master-slave.inc;
+source include/have_binlog_format_row.inc;
+
+LET $ENGINE_TYPE= MyISAM;
+source extra/rpl_tests/rpl_tmp_table_and_DDL.test;
+
=== modified file 'mysql-test/suite/rpl/t/rpl_trigger.test'
--- a/mysql-test/suite/rpl/t/rpl_trigger.test 2009-11-18 14:50:31 +0000
+++ b/mysql-test/suite/rpl/t/rpl_trigger.test 2010-01-13 09:00:03 +0000
@@ -40,10 +40,12 @@ insert into t3 values(100,"log",0,0,0);
SET @@RAND_SEED1=658490765, @@RAND_SEED2=635893186;
# Emulate that we have rows 2-9 deleted on the slave
+--disable_warnings
insert into t1 values(1,1,rand()),(NULL,2,rand());
insert into t2 (b) values(last_insert_id());
insert into t2 values(3,0),(NULL,0);
insert into t2 values(NULL,0),(500,0);
+--enable_warnings
select a,b, truncate(rand_value,4) from t1;
select * from t2;
=== modified file 'mysql-test/suite/rpl_ndb/r/rpl_ndb_func003.result'
--- a/mysql-test/suite/rpl_ndb/r/rpl_ndb_func003.result 2007-06-27 12:28:02 +0000
+++ b/mysql-test/suite/rpl_ndb/r/rpl_ndb_func003.result 2010-01-13 09:00:03 +0000
@@ -4,6 +4,7 @@ reset master;
reset slave;
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
start slave;
+CALL mtr.add_suppression('Statement may not be safe to log in statement format.');
DROP FUNCTION IF EXISTS test.f1;
DROP TABLE IF EXISTS test.t1;
CREATE TABLE test.t1 (a INT NOT NULL AUTO_INCREMENT, c CHAR(16),PRIMARY KEY(a))ENGINE=NDB;
=== added file 'mysql-test/suite/rpl_ndb/r/rpl_ndb_set_null.result'
--- a/mysql-test/suite/rpl_ndb/r/rpl_ndb_set_null.result 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/rpl_ndb/r/rpl_ndb_set_null.result 2010-01-21 17:20:24 +0000
@@ -0,0 +1,35 @@
+stop slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+reset master;
+reset slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+start slave;
+stop slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+reset master;
+reset slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+start slave;
+CREATE TABLE t1 (c1 BIT, c2 INT) Engine=NDB;
+INSERT INTO `t1` VALUES ( 1, 1 );
+UPDATE t1 SET c1=NULL where c2=1;
+Comparing tables master:test.t1 and slave:test.t1
+DELETE FROM t1 WHERE c2=1 LIMIT 1;
+Comparing tables master:test.t1 and slave:test.t1
+DROP TABLE t1;
+stop slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+reset master;
+reset slave;
+drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
+start slave;
+CREATE TABLE t1 (c1 CHAR) Engine=NDB;
+INSERT INTO t1 ( c1 ) VALUES ( 'w' ) ;
+SELECT * FROM t1;
+c1
+w
+UPDATE t1 SET c1=NULL WHERE c1='w';
+Comparing tables master:test.t1 and slave:test.t1
+DELETE FROM t1 LIMIT 2;
+Comparing tables master:test.t1 and slave:test.t1
+DROP TABLE t1;
=== added file 'mysql-test/suite/rpl_ndb/t/rpl_ndb_set_null.test'
--- a/mysql-test/suite/rpl_ndb/t/rpl_ndb_set_null.test 1970-01-01 00:00:00 +0000
+++ b/mysql-test/suite/rpl_ndb/t/rpl_ndb_set_null.test 2010-01-21 17:20:24 +0000
@@ -0,0 +1,6 @@
+-- source include/have_ndb.inc
+-- source include/have_binlog_format_mixed_or_row.inc
+-- source include/ndb_master-slave.inc
+
+-- let $engine= NDB
+-- source extra/rpl_tests/rpl_set_null.test
=== modified file 'mysql-test/t/alter_table.test'
--- a/mysql-test/t/alter_table.test 2009-12-03 11:19:05 +0000
+++ b/mysql-test/t/alter_table.test 2010-03-04 08:03:07 +0000
@@ -1088,4 +1088,16 @@ ALTER TABLE t1 CHANGE COLUMN f1 f1_no_re
--disable_info
DROP TABLE t1;
+
+--echo #
+--echo # Bug #31145: ALTER TABLE DROP COLUMN, ADD COLUMN crashes (linux)
+--echo # or freezes (win) the server
+--echo #
+
+CREATE TABLE t1 (a TEXT, id INT, b INT);
+ALTER TABLE t1 DROP COLUMN a, ADD COLUMN c TEXT FIRST;
+
+DROP TABLE t1;
+
+
--echo End of 5.1 tests
=== modified file 'mysql-test/t/bug46080.test'
--- a/mysql-test/t/bug46080.test 2009-09-03 06:38:06 +0000
+++ b/mysql-test/t/bug46080.test 2010-02-02 12:17:21 +0000
@@ -3,8 +3,8 @@
--echo # sort_buffer_size cannot allocate
--echo #
-call mtr.add_suppression("Out of memory at line .*, 'my_alloc.c'");
-call mtr.add_suppression("needed .* byte .*k., memory in use: .* bytes .*k");
+call mtr.add_suppression("Out of memory at line .*, '.*my_alloc.c'");
+call mtr.add_suppression("needed .* byte (.*k)., memory in use: .* bytes (.*k)");
CREATE TABLE t1(a CHAR(255));
INSERT INTO t1 VALUES ('a');
=== modified file 'mysql-test/t/count_distinct.test'
--- a/mysql-test/t/count_distinct.test 2005-07-28 14:09:54 +0000
+++ b/mysql-test/t/count_distinct.test 2009-12-22 09:52:23 +0000
@@ -35,6 +35,25 @@ insert into t1 values ('NYC Lib','New Yo
select t2.isbn,city,t1.libname,count(t1.libname) as a from t3 left join t1 on t3.libname=t1.libname left join t2 on t3.isbn=t2.isbn group by city,t1.libname;
select t2.isbn,city,t1.libname,count(distinct t1.libname) as a from t3 left join t1 on t3.libname=t1.libname left join t2 on t3.isbn=t2.isbn group by city having count(distinct t1.libname) > 1;
select t2.isbn,city,t1.libname,count(distinct t1.libname) as a from t3 left join t1 on t3.libname=t1.libname left join t2 on t3.isbn=t2.isbn group by city having count(distinct concat(t1.libname,'a')) > 1;
+
+select t2.isbn,city,@bar:=t1.libname,count(distinct t1.libname) as a
+ from t3 left join t1 on t3.libname=t1.libname left join t2
+ on t3.isbn=t2.isbn group by city having count(distinct
+ t1.libname) > 1;
+#
+# Wrong result, see bug#49872
+#
+SELECT @bar;
+
+select t2.isbn,city,concat(@bar:=t1.libname),count(distinct t1.libname) as a
+ from t3 left join t1 on t3.libname=t1.libname left join t2
+ on t3.isbn=t2.isbn group by city having count(distinct
+ t1.libname) > 1;
+#
+# Wrong result, see bug#49872
+#
+SELECT @bar;
+
drop table t1, t2, t3;
#
=== modified file 'mysql-test/t/create.test'
--- a/mysql-test/t/create.test 2009-12-27 13:54:41 +0000
+++ b/mysql-test/t/create.test 2010-03-04 08:03:07 +0000
@@ -721,16 +721,15 @@ drop table t1;
# Base vs temporary tables dillema (a.k.a. bug#24508 "Inconsistent
# results of CREATE TABLE ... SELECT when temporary table exists").
# In this situation we either have to create non-temporary table and
-# insert data in it or insert data in temporary table without creation
-# of permanent table. Since currently temporary tables always shadow
-# permanent tables we adopt second approach.
+# insert data into it or insert data into the temporary table without creating
+# a permanent table. After the patch for Bug#47418, we create the base table
+# and insert data into it, even though a temporary table exists with the same
+# name.
create temporary table t1 (j int);
create table if not exists t1 select 1;
select * from t1;
drop temporary table t1;
---error ER_NO_SUCH_TABLE
select * from t1;
---error ER_BAD_TABLE_ERROR
drop table t1;
=== modified file 'mysql-test/t/ctype_ucs.test'
--- a/mysql-test/t/ctype_ucs.test 2009-12-03 12:02:37 +0000
+++ b/mysql-test/t/ctype_ucs.test 2010-03-04 08:03:07 +0000
@@ -15,6 +15,16 @@ SET character_set_connection=ucs2;
SET CHARACTER SET koi8r;
#
+# BUG#49028, error in LIKE with ucs2
+#
+create table t1 (a varchar(2) character set ucs2 collate ucs2_bin, key(a));
+insert into t1 values ('A'),('A'),('B'),('C'),('D'),('A\t');
+insert into t1 values ('A\0'),('A\0'),('A\0'),('A\0'),('AZ');
+select hex(a) from t1 where a like 'A_' order by a;
+select hex(a) from t1 ignore key(a) where a like 'A_' order by a;
+drop table t1;
+
+#
# Check that 0x20 is only trimmed when it is
# a part of real SPACE character, not just a part
# of a multibyte sequence.
=== modified file 'mysql-test/t/ctype_utf8.test'
--- a/mysql-test/t/ctype_utf8.test 2009-12-27 13:54:41 +0000
+++ b/mysql-test/t/ctype_utf8.test 2010-03-04 08:03:07 +0000
@@ -1449,6 +1449,16 @@ select hex(_utf8 B'001111111111');
--error ER_INVALID_CHARACTER_STRING
select (_utf8 X'616263FF');
+--echo #
+--echo # Bug#44131 Binary-mode "order by" returns records in incorrect order for UTF-8 strings
+--echo #
+CREATE TABLE t1 (id int not null primary key, name varchar(10)) character set utf8;
+INSERT INTO t1 VALUES
+(2,'一二三01'),(3,'一二三09'),(4,'一二三02'),(5,'一二三08'),
+(6,'一二三11'),(7,'一二三91'),(8,'一二三21'),(9,'一二三81');
+SELECT * FROM t1 ORDER BY BINARY(name);
+DROP TABLE t1;
+
#
# Bug #36772: When using UTF8, CONVERT with GROUP BY returns truncated results
#
=== modified file 'mysql-test/t/delete.test'
--- a/mysql-test/t/delete.test 2009-11-18 09:32:03 +0000
+++ b/mysql-test/t/delete.test 2010-01-29 09:36:28 +0000
@@ -357,4 +357,21 @@ END |
--error ER_CANT_UPDATE_USED_TABLE_IN_SF_OR_TRG
DELETE IGNORE FROM t1;
-DROP TABLE t1;
\ No newline at end of file
+DROP TABLE t1;
+
+
+--echo #
+--echo # Bug #49552 : sql_buffer_result cause crash + not found records
+--echo # in multitable delete/subquery
+--echo #
+
+CREATE TABLE t1(a INT);
+INSERT INTO t1 VALUES (1),(2),(3);
+SET SESSION SQL_BUFFER_RESULT=1;
+DELETE t1 FROM (SELECT SUM(a) a FROM t1) x,t1;
+
+SET SESSION SQL_BUFFER_RESULT=DEFAULT;
+SELECT * FROM t1;
+DROP TABLE t1;
+
+--echo End of 5.1 tests
=== modified file 'mysql-test/t/disabled.def'
--- a/mysql-test/t/disabled.def 2010-01-15 17:02:57 +0000
+++ b/mysql-test/t/disabled.def 2010-03-04 08:03:07 +0000
@@ -11,4 +11,3 @@
##############################################################################
kill : Bug#37780 2008-12-03 HHunger need some changes to be robust enough for pushbuild.
query_cache_28249 : Bug#43861 2009-03-25 main.query_cache_28249 fails sporadically
-rpl_killed_ddl : Bug#45520: rpl_killed_ddl fails sporadically in pb2
=== modified file 'mysql-test/t/fulltext.test'
--- a/mysql-test/t/fulltext.test 2010-01-15 15:27:55 +0000
+++ b/mysql-test/t/fulltext.test 2010-03-04 08:03:07 +0000
@@ -497,6 +497,27 @@ EXECUTE s;
DEALLOCATE PREPARE s;
DROP TABLE t1;
+
+--echo #
+--echo # Bug #49250 : spatial btree index corruption and crash
+--echo # Part two : fulltext syntax check
+--echo #
+
+--error ER_PARSE_ERROR
+CREATE TABLE t1(col1 TEXT,
+ FULLTEXT INDEX USING BTREE (col1));
+CREATE TABLE t2(col1 TEXT);
+--error ER_PARSE_ERROR
+CREATE FULLTEXT INDEX USING BTREE ON t2(col);
+--error ER_PARSE_ERROR
+ALTER TABLE t2 ADD FULLTEXT INDEX USING BTREE (col1);
+
+DROP TABLE t2;
+
+
+--echo End of 5.0 tests
+
+
--echo #
--echo # Bug #47930: MATCH IN BOOLEAN MODE returns too many results
--echo # inside subquery
@@ -536,4 +557,14 @@ SELECT count(*) FROM t1 WHERE
DROP TABLE t1,t2,t3;
+--echo #
+--echo # Bug #49445: Assertion failed: 0, file .\item_row.cc, line 55 with
+--echo # fulltext search and row op
+--echo #
+
+CREATE TABLE t1(a CHAR(1),FULLTEXT(a));
+SELECT 1 FROM t1 WHERE MATCH(a) AGAINST ('') AND ROW(a,a) > ROW(1,1);
+DROP TABLE t1;
+
+
--echo End of 5.1 tests
=== modified file 'mysql-test/t/fulltext_order_by.test'
--- a/mysql-test/t/fulltext_order_by.test 2005-08-12 16:27:54 +0000
+++ b/mysql-test/t/fulltext_order_by.test 2009-12-22 15:52:15 +0000
@@ -80,7 +80,7 @@ CREATE TABLE t3 (
FULLTEXT KEY betreff (betreff)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=996 ;
---error 1054
+--error ER_CANT_USE_OPTION_HERE
select a.text, b.id, b.betreff
from
t2 a inner join t3 b on a.id = b.forum inner join
@@ -100,7 +100,7 @@ group by
order by
match(b.betreff) against ('+abc' in boolean mode) desc;
---error 1054
+--error ER_CANT_USE_OPTION_HERE
select a.text, b.id, b.betreff
from
t2 a inner join t3 b on a.id = b.forum inner join
@@ -117,6 +117,7 @@ where
order by
match(b.betreff) against ('+abc' in boolean mode) desc;
+--error ER_CANT_USE_OPTION_HERE
select a.text, b.id, b.betreff
from
t2 a inner join t3 b on a.id = b.forum inner join
=== modified file 'mysql-test/t/func_concat.test'
--- a/mysql-test/t/func_concat.test 2009-05-21 08:06:43 +0000
+++ b/mysql-test/t/func_concat.test 2010-01-13 04:16:36 +0000
@@ -4,6 +4,7 @@
--disable_warnings
DROP TABLE IF EXISTS t1;
+DROP PROCEDURE IF EXISTS p1;
--enable_warnings
CREATE TABLE t1 ( number INT NOT NULL, alpha CHAR(6) NOT NULL );
@@ -111,4 +112,16 @@ EXPLAIN SELECT CONCAT('gui_', t2.a), t1.
DROP TABLE t1, t2;
+--echo #
+--echo # Bug #50096: CONCAT_WS inside procedure returning wrong data
+--echo #
+
+CREATE PROCEDURE p1(a varchar(255), b int, c int)
+ SET @query = CONCAT_WS(",", a, b, c);
+
+CALL p1("abcde", "0", "1234");
+SELECT @query;
+
+DROP PROCEDURE p1;
+
--echo # End of 5.1 tests
=== modified file 'mysql-test/t/func_str.test'
--- a/mysql-test/t/func_str.test 2009-09-10 10:30:03 +0000
+++ b/mysql-test/t/func_str.test 2009-12-04 15:36:58 +0000
@@ -1318,3 +1318,37 @@ insert into t1 values (-1),(null);
explain select 1 as a from t1,(select decode(f1,f1) as b from t1) a;
explain select 1 as a from t1,(select encode(f1,f1) as b from t1) a;
drop table t1;
+
+--echo #
+--echo # Bug#49141: Encode function is significantly slower in 5.1 compared to 5.0
+--echo #
+
+--disable_warnings
+DROP TABLE IF EXISTS t1, t2;
+--enable_warnings
+
+CREATE TABLE t1 (a VARCHAR(20), b INT);
+CREATE TABLE t2 (a VARCHAR(20), b INT);
+
+INSERT INTO t1 VALUES ('ABC', 1);
+INSERT INTO t2 VALUES ('ABC', 1);
+
+SELECT DECODE((SELECT ENCODE('secret', t1.a) FROM t1,t2 WHERE t1.a = t2.a GROUP BY t1.b), t2.a)
+ FROM t1,t2 WHERE t1.b = t1.b > 0 GROUP BY t2.b;
+
+SELECT DECODE((SELECT ENCODE('secret', 'ABC') FROM t1,t2 WHERE t1.a = t2.a GROUP BY t1.b), t2.a)
+ FROM t1,t2 WHERE t1.b = t1.b > 0 GROUP BY t2.b;
+
+SELECT DECODE((SELECT ENCODE('secret', t1.a) FROM t1,t2 WHERE t1.a = t2.a GROUP BY t1.b), 'ABC')
+ FROM t1,t2 WHERE t1.b = t1.b > 0 GROUP BY t2.b;
+
+TRUNCATE TABLE t1;
+TRUNCATE TABLE t2;
+
+INSERT INTO t1 VALUES ('EDF', 3), ('BCD', 2), ('ABC', 1);
+INSERT INTO t2 VALUES ('EDF', 3), ('BCD', 2), ('ABC', 1);
+
+SELECT DECODE((SELECT ENCODE('secret', t1.a) FROM t1,t2 WHERE t1.a = t2.a GROUP BY t1.b LIMIT 1), t2.a)
+ FROM t2 WHERE t2.b = 1 GROUP BY t2.b;
+
+DROP TABLE t1, t2;
=== modified file 'mysql-test/t/gis.test'
--- a/mysql-test/t/gis.test 2009-12-08 09:26:11 +0000
+++ b/mysql-test/t/gis.test 2010-01-13 10:28:42 +0000
@@ -670,6 +670,21 @@ SELECT 1 FROM t1 WHERE a <> (SELECT GEOM
DROP TABLE t1;
+--echo #
+--echo # Bug #49250 : spatial btree index corruption and crash
+--echo # Part one : spatial syntax check
+--echo #
+
+--error ER_PARSE_ERROR
+CREATE TABLE t1(col1 MULTIPOLYGON NOT NULL,
+ SPATIAL INDEX USING BTREE (col1));
+CREATE TABLE t2(col1 MULTIPOLYGON NOT NULL);
+--error ER_PARSE_ERROR
+CREATE SPATIAL INDEX USING BTREE ON t2(col);
+--error ER_PARSE_ERROR
+ALTER TABLE t2 ADD SPATIAL INDEX USING BTREE (col1);
+
+DROP TABLE t2;
--echo End of 5.0 tests
=== modified file 'mysql-test/t/information_schema.test'
--- a/mysql-test/t/information_schema.test 2010-03-10 09:11:02 +0000
+++ b/mysql-test/t/information_schema.test 2010-03-10 09:12:23 +0000
@@ -1386,6 +1386,33 @@ SET TIMESTAMP=@@TIMESTAMP + 10000000;
SELECT 'NOT_OK' AS TEST_RESULT FROM INFORMATION_SCHEMA.PROCESSLIST WHERE time < 0;
SET TIMESTAMP=DEFAULT;
+
+--echo #
+--echo # Bug #50276: Security flaw in INFORMATION_SCHEMA.TABLES
+--echo #
+CREATE DATABASE db1;
+USE db1;
+CREATE TABLE t1 (id INT);
+CREATE USER nonpriv;
+USE test;
+
+connect (nonpriv_con, localhost, nonpriv,,);
+connection nonpriv_con;
+--echo # connected as nonpriv
+--echo # Should return 0
+SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME='t1';
+USE INFORMATION_SCHEMA;
+--echo # Should return 0
+SELECT COUNT(*) FROM TABLES WHERE TABLE_NAME='t1';
+
+connection default;
+--echo # connected as root
+disconnect nonpriv_con;
+DROP USER nonpriv;
+DROP TABLE db1.t1;
+DROP DATABASE db1;
+
+
--echo End of 5.1 tests.
# Wait till all disconnects are completed
=== added file 'mysql-test/t/innodb-autoinc-44030.test'
--- a/mysql-test/t/innodb-autoinc-44030.test 1970-01-01 00:00:00 +0000
+++ b/mysql-test/t/innodb-autoinc-44030.test 2010-03-04 08:03:07 +0000
@@ -0,0 +1,40 @@
+-- source include/have_innodb.inc
+# this test restarts the server, which is not supported by the embedded server
+-- source include/not_embedded.inc
+
+let $file_format_check=`select @@innodb_file_format_check`;
+
+--disable_warnings
+drop table if exists t1;
+--enable_warnings
+
+#
+# 44030: Error: (1500) Couldn't read the MAX(ID) autoinc value from
+# the index (PRIMARY)
+# This test requires a restart of the server
+SET @@SESSION.AUTO_INCREMENT_INCREMENT=1, @@SESSION.AUTO_INCREMENT_OFFSET=1;
+CREATE TABLE t1 (c1 INT PRIMARY KEY AUTO_INCREMENT) ENGINE=InnoDB;
+INSERT INTO t1 VALUES (null);
+INSERT INTO t1 VALUES (null);
+ALTER TABLE t1 CHANGE c1 d1 INT NOT NULL AUTO_INCREMENT;
+SELECT * FROM t1;
+# Restart the server
+-- source include/restart_mysqld.inc
+# The MySQL and InnoDB data dictionaries should now be out of sync.
+# The select should print a message to the error log
+SELECT * FROM t1;
+# MySQL have made a change (http://lists.mysql.com/commits/75268) that no
+# longer results in the two data dictionaries being out of sync. If they
+# revert their changes then this check for ER_AUTOINC_READ_FAILED will need
+# to be enabled. Also, see http://bugs.mysql.com/bug.php?id=47621.
+-- error ER_AUTOINC_READ_FAILED,1467
+INSERT INTO t1 VALUES(null);
+ALTER TABLE t1 AUTO_INCREMENT = 3;
+SHOW CREATE TABLE t1;
+INSERT INTO t1 VALUES(null);
+SELECT * FROM t1;
+DROP TABLE t1;
+
+--disable_query_log
+EVAL SET GLOBAL innodb_file_format_check=$file_format_check;
+--enable_query_log
=== modified file 'mysql-test/t/innodb-autoinc.test'
--- a/mysql-test/t/innodb-autoinc.test 2010-01-15 21:12:30 +0000
+++ b/mysql-test/t/innodb-autoinc.test 2010-03-04 08:03:07 +0000
@@ -480,32 +480,6 @@ INSERT INTO t2 SELECT c1 FROM t1;
INSERT INTO t2 SELECT NULL FROM t1;
DROP TABLE t1;
DROP TABLE t2;
-#
-# 44030: Error: (1500) Couldn't read the MAX(ID) autoinc value from
-# the index (PRIMARY)
-# This test requires a restart of the server
-SET @@SESSION.AUTO_INCREMENT_INCREMENT=1, @@SESSION.AUTO_INCREMENT_OFFSET=1;
-CREATE TABLE t1 (c1 INT PRIMARY KEY AUTO_INCREMENT) ENGINE=InnoDB;
-INSERT INTO t1 VALUES (null);
-INSERT INTO t1 VALUES (null);
-ALTER TABLE t1 CHANGE c1 d1 INT NOT NULL AUTO_INCREMENT;
-SELECT * FROM t1;
-# Restart the server
--- source include/restart_mysqld.inc
-# The MySQL and InnoDB data dictionaries should now be out of sync.
-# The select should print message to the error log
-SELECT * FROM t1;
-# MySQL have made a change (http://lists.mysql.com/commits/75268) that no
-# longer results in the two data dictionaries being out of sync. If they
-# revert their changes then this check for ER_AUTOINC_READ_FAILED will need
-# to be enabled.
--- error ER_AUTOINC_READ_FAILED,1467
-INSERT INTO t1 VALUES(null);
-ALTER TABLE t1 AUTO_INCREMENT = 3;
-SHOW CREATE TABLE t1;
-INSERT INTO t1 VALUES(null);
-SELECT * FROM t1;
-DROP TABLE t1;
# If the user has specified negative values for an AUTOINC column then
# InnoDB should ignore those values when setting the table's max value.
@@ -616,30 +590,30 @@ DROP TABLE t1;
# 47125: auto_increment start value is ignored if an index is created
# and engine=innodb
#
-CREATE TABLE T1 (c1 INT AUTO_INCREMENT, c2 INT, PRIMARY KEY(c1)) AUTO_INCREMENT=10 ENGINE=InnoDB;
-CREATE INDEX i1 on T1(c2);
-SHOW CREATE TABLE T1;
-INSERT INTO T1 (c2) values (0);
-SELECT * FROM T1;
-DROP TABLE T1;
+CREATE TABLE t1 (c1 INT AUTO_INCREMENT, c2 INT, PRIMARY KEY(c1)) AUTO_INCREMENT=10 ENGINE=InnoDB;
+CREATE INDEX i1 on t1(c2);
+SHOW CREATE TABLE t1;
+INSERT INTO t1 (c2) values (0);
+SELECT * FROM t1;
+DROP TABLE t1;
##
# 49032: Use the correct function to read the AUTOINC column value
#
-CREATE TABLE T1(C1 DOUBLE AUTO_INCREMENT KEY, C2 CHAR(10)) ENGINE=InnoDB;
-INSERT INTO T1(C1, C2) VALUES (1, 'innodb'), (3, 'innodb');
+CREATE TABLE t1(C1 DOUBLE AUTO_INCREMENT KEY, C2 CHAR(10)) ENGINE=InnoDB;
+INSERT INTO t1(C1, C2) VALUES (1, 'innodb'), (3, 'innodb');
# Restart the server
-- source include/restart_mysqld.inc
-INSERT INTO T1(C2) VALUES ('innodb');
-SHOW CREATE TABLE T1;
-DROP TABLE T1;
-CREATE TABLE T1(C1 FLOAT AUTO_INCREMENT KEY, C2 CHAR(10)) ENGINE=InnoDB;
-INSERT INTO T1(C1, C2) VALUES (1, 'innodb'), (3, 'innodb');
+INSERT INTO t1(C2) VALUES ('innodb');
+SHOW CREATE TABLE t1;
+DROP TABLE t1;
+CREATE TABLE t1(C1 FLOAT AUTO_INCREMENT KEY, C2 CHAR(10)) ENGINE=InnoDB;
+INSERT INTO t1(C1, C2) VALUES (1, 'innodb'), (3, 'innodb');
# Restart the server
-- source include/restart_mysqld.inc
-INSERT INTO T1(C2) VALUES ('innodb');
-SHOW CREATE TABLE T1;
-DROP TABLE T1;
+INSERT INTO t1(C2) VALUES ('innodb');
+SHOW CREATE TABLE t1;
+DROP TABLE t1;
##
# 47720: REPLACE INTO Autoincrement column with negative values
=== modified file 'mysql-test/t/join_outer.test'
--- a/mysql-test/t/join_outer.test 2007-06-06 17:57:07 +0000
+++ b/mysql-test/t/join_outer.test 2009-12-17 09:55:18 +0000
@@ -867,3 +867,32 @@ SELECT * FROM t1 LEFT JOIN t2 ON e<>0 WH
DROP TABLE t1,t2;
+--echo #
+--echo # Bug#47650: using group by with rollup without indexes returns incorrect
+--echo # results with where
+--echo #
+CREATE TABLE t1 ( a INT );
+INSERT INTO t1 VALUES (1);
+
+CREATE TABLE t2 ( a INT, b INT );
+INSERT INTO t2 VALUES (1, 1),(1, 2),(1, 3),(2, 4),(2, 5);
+
+EXPLAIN
+SELECT t1.a, COUNT( t2.b ), SUM( t2.b ), MAX( t2.b )
+FROM t1 LEFT JOIN t2 USING( a )
+GROUP BY t1.a WITH ROLLUP;
+
+SELECT t1.a, COUNT( t2.b ), SUM( t2.b ), MAX( t2.b )
+FROM t1 LEFT JOIN t2 USING( a )
+GROUP BY t1.a WITH ROLLUP;
+
+EXPLAIN
+SELECT t1.a, COUNT( t2.b ), SUM( t2.b ), MAX( t2.b )
+FROM t1 JOIN t2 USING( a )
+GROUP BY t1.a WITH ROLLUP;
+
+SELECT t1.a, COUNT( t2.b ), SUM( t2.b ), MAX( t2.b )
+FROM t1 JOIN t2 USING( a )
+GROUP BY t1.a WITH ROLLUP;
+
+DROP TABLE t1, t2;
=== modified file 'mysql-test/t/lock_multi.test'
--- a/mysql-test/t/lock_multi.test 2009-07-10 23:12:13 +0000
+++ b/mysql-test/t/lock_multi.test 2009-12-18 20:32:55 +0000
@@ -626,9 +626,11 @@ let $wait_condition=
--source include/wait_condition.inc
let $tlwb= `show status like 'Table_locks_waited'`;
unlock tables;
+connection waiter;
+--reap
+connection default;
drop table t1;
disconnect waiter;
-connection default;
--disable_query_log
eval SET @tlwa= SUBSTRING_INDEX('$tlwa', ' ', -1);
eval SET @tlwb= SUBSTRING_INDEX('$tlwb', ' ', -1);
=== modified file 'mysql-test/t/myisam.test'
--- a/mysql-test/t/myisam.test 2009-12-03 11:19:05 +0000
+++ b/mysql-test/t/myisam.test 2010-03-04 08:03:07 +0000
@@ -1190,6 +1190,20 @@ SELECT a FROM t1;
CHECK TABLE t1;
DROP TABLE t1;
+
+--echo #
+--echo # Bug #49465: valgrind warnings and incorrect live checksum...
+--echo #
+CREATE TABLE t1(
+a VARCHAR(1), b VARCHAR(1), c VARCHAR(1),
+f VARCHAR(1), g VARCHAR(1), h VARCHAR(1),
+i VARCHAR(1), j VARCHAR(1), k VARCHAR(1)) CHECKSUM=1;
+INSERT INTO t1 VALUES('', '', '', '', '', '', '', '', '');
+CHECKSUM TABLE t1 QUICK;
+CHECKSUM TABLE t1 EXTENDED;
+DROP TABLE t1;
+
+
--echo End of 5.0 tests
=== modified file 'mysql-test/t/mysql.test'
--- a/mysql-test/t/mysql.test 2010-01-15 15:27:55 +0000
+++ b/mysql-test/t/mysql.test 2010-03-04 08:03:07 +0000
@@ -408,5 +408,10 @@ insert into t1 values ('\0b\0');
--exec $MYSQL --xml test -e "select a from t1"
drop table t1;
+--echo
+--echo Bug #47147: mysql client option --skip-column-names does not apply to vertical output
+--echo
+--exec $MYSQL --skip-column-names --vertical test -e "select 1 as a"
---echo End of 5.0 tests
+--echo
+--echo End of tests
=== modified file 'mysql-test/t/mysql_upgrade.test'
--- a/mysql-test/t/mysql_upgrade.test 2009-10-05 13:22:23 +0000
+++ b/mysql-test/t/mysql_upgrade.test 2010-03-04 08:03:07 +0000
@@ -90,3 +90,22 @@ DROP USER mysqltest1@'%';
set GLOBAL sql_mode='STRICT_ALL_TABLES,ANSI_QUOTES,NO_ZERO_DATE';
--exec $MYSQL_UPGRADE --skip-verbose --force 2>&1
eval set GLOBAL sql_mode=default;
+
+
+--echo #
+--echo # Bug #41569 mysql_upgrade (ver 5.1) add 3 fields to mysql.proc table
+--echo # but does not set values.
+--echo #
+
+# Create a stored procedure and set the fields in question to null.
+# When running mysql_upgrade, a warning should be written.
+
+CREATE PROCEDURE testproc() BEGIN END;
+UPDATE mysql.proc SET character_set_client = NULL WHERE name LIKE 'testproc';
+UPDATE mysql.proc SET collation_connection = NULL WHERE name LIKE 'testproc';
+UPDATE mysql.proc SET db_collation = NULL WHERE name LIKE 'testproc';
+--exec $MYSQL_UPGRADE --skip-verbose --force 2> $MYSQLTEST_VARDIR/tmp/41569.txt
+CALL testproc();
+DROP PROCEDURE testproc;
+--cat_file $MYSQLTEST_VARDIR/tmp/41569.txt
+--remove_file $MYSQLTEST_VARDIR/tmp/41569.txt
=== modified file 'mysql-test/t/mysqlbinlog.test'
--- a/mysql-test/t/mysqlbinlog.test 2009-09-30 02:31:25 +0000
+++ b/mysql-test/t/mysqlbinlog.test 2009-12-06 01:11:32 +0000
@@ -71,7 +71,7 @@ select "--- --position --" as "";
--enable_query_log
--replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
--replace_regex /SQL_LOAD_MB-[0-9]-[0-9]/SQL_LOAD_MB-#-#/
---exec $MYSQL_BINLOG --short-form --local-load=$MYSQLTEST_VARDIR/tmp/ --position=330 $MYSQLD_DATADIR/master-bin.000002
+--exec $MYSQL_BINLOG --short-form --local-load=$MYSQLTEST_VARDIR/tmp/ --position=332 $MYSQLD_DATADIR/master-bin.000002
# These are tests for remote binlog.
@@ -108,7 +108,7 @@ select "--- --position --" as "";
--enable_query_log
--replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
--replace_regex /SQL_LOAD_MB-[0-9]-[0-9]/SQL_LOAD_MB-#-#/
---exec $MYSQL_BINLOG --short-form --local-load=$MYSQLTEST_VARDIR/tmp/ --read-from-remote-server --position=330 --user=root --host=127.0.0.1 --port=$MASTER_MYPORT master-bin.000002
+--exec $MYSQL_BINLOG --short-form --local-load=$MYSQLTEST_VARDIR/tmp/ --read-from-remote-server --position=332 --user=root --host=127.0.0.1 --port=$MASTER_MYPORT master-bin.000002
# Bug#7853 mysqlbinlog does not accept input from stdin
--disable_query_log
=== modified file 'mysql-test/t/openssl_1.test'
--- a/mysql-test/t/openssl_1.test 2010-01-29 10:42:31 +0000
+++ b/mysql-test/t/openssl_1.test 2010-03-04 08:03:07 +0000
@@ -15,10 +15,8 @@ insert into t1 values (5);
grant select on test.* to ssl_user1@localhost require SSL;
grant select on test.* to ssl_user2@localhost require cipher "DHE-RSA-AES256-SHA";
-grant select on test.* to ssl_user3@localhost require cipher
-"DHE-RSA-AES256-SHA" AND SUBJECT "/C=FI/ST=Tuusula/O=Monty Program Ab/emailAddress=abstract.developer(a)askmonty.org";
-grant select on test.* to ssl_user4@localhost require cipher
-"DHE-RSA-AES256-SHA" AND SUBJECT "/C=FI/ST=Tuusula/O=Monty Program Ab/emailAddress=abstract.developer(a)askmonty.org" ISSUER "/C=FI/ST=Tuusula/O=Monty Program Ab/emailAddress=abstract.developer(a)askmonty.org";
+grant select on test.* to ssl_user3@localhost require cipher "DHE-RSA-AES256-SHA" AND SUBJECT "/C=SE/ST=Uppsala/O=MySQL AB";
+grant select on test.* to ssl_user4@localhost require cipher "DHE-RSA-AES256-SHA" AND SUBJECT "/C=SE/ST=Uppsala/O=MySQL AB" ISSUER "/C=SE/ST=Uppsala/L=Uppsala/O=MySQL AB";
grant select on test.* to ssl_user5@localhost require cipher "DHE-RSA-AES256-SHA" AND SUBJECT "xxx";
flush privileges;
=== modified file 'mysql-test/t/order_by.test'
--- a/mysql-test/t/order_by.test 2010-01-15 15:27:55 +0000
+++ b/mysql-test/t/order_by.test 2010-03-04 08:03:07 +0000
@@ -888,6 +888,15 @@ SELECT 1 AS col FROM t1 WHERE a=2 AND (c
--echo # Must return 1 row
SELECT 1 AS col FROM t1 WHERE a=2 AND (c=10 OR c IS NULL) ORDER BY c;
+# part 2 of the problem : DESC test cases
+--echo # Must use ref-or-null on the a_c index
+--replace_column 1 x 2 x 3 x 6 x 7 x 8 x 9 x 10 x
+EXPLAIN
+SELECT 1 AS col FROM t1 WHERE a=2 AND (c=10 OR c IS NULL) ORDER BY c DESC;
+--echo # Must return 1 row
+SELECT 1 AS col FROM t1 WHERE a=2 AND (c=10 OR c IS NULL) ORDER BY c DESC;
+
+
DROP TABLE t1;
=== modified file 'mysql-test/t/partition.test'
--- a/mysql-test/t/partition.test 2010-01-15 15:27:55 +0000
+++ b/mysql-test/t/partition.test 2010-03-04 08:03:07 +0000
@@ -53,8 +53,8 @@ CREATE TABLE t1 (
b varchar(10),
PRIMARY KEY (a)
)
-PARTITION BY RANGE (to_days(a)) (
- PARTITION p1 VALUES LESS THAN (733407),
+PARTITION BY RANGE (UNIX_TIMESTAMP(a)) (
+ PARTITION p1 VALUES LESS THAN (1199134800),
PARTITION pmax VALUES LESS THAN MAXVALUE
);
@@ -64,7 +64,7 @@ INSERT INTO t1 VALUES ('2009-09-21 17:31
SELECT * FROM t1;
ALTER TABLE t1 REORGANIZE PARTITION pmax INTO (
- PARTITION p3 VALUES LESS THAN (733969),
+ PARTITION p3 VALUES LESS THAN (1247688000),
PARTITION pmax VALUES LESS THAN MAXVALUE);
SELECT * FROM t1;
SHOW CREATE TABLE t1;
=== modified file 'mysql-test/t/partition_bug18198.test'
--- a/mysql-test/t/partition_bug18198.test 2007-06-13 15:28:59 +0000
+++ b/mysql-test/t/partition_bug18198.test 2009-12-13 20:29:50 +0000
@@ -158,7 +158,7 @@ create table t1 (col1 datetime)
partition by range(timestampdiff(day,5,col1))
(partition p0 values less than (10), partition p1 values less than (30));
--- error ER_PARTITION_FUNCTION_IS_NOT_ALLOWED
+-- error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
create table t1 (col1 date)
partition by range(unix_timestamp(col1))
(partition p0 values less than (10), partition p1 values less than (30));
=== modified file 'mysql-test/t/partition_error.test'
--- a/mysql-test/t/partition_error.test 2009-02-18 20:29:30 +0000
+++ b/mysql-test/t/partition_error.test 2009-12-13 20:29:50 +0000
@@ -466,7 +466,7 @@ partitions 2
#
# Partition by range, constant partition function not allowed
#
---error ER_CONST_EXPR_IN_PARTITION_FUNC_ERROR
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
CREATE TABLE t1 (
a int not null,
b int not null,
@@ -681,7 +681,7 @@ partition by list (a);
#
# Partition by list, constant partition function not allowed
#
---error ER_CONST_EXPR_IN_PARTITION_FUNC_ERROR
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
CREATE TABLE t1 (
a int not null,
b int not null,
@@ -840,4 +840,364 @@ partition by range (a + (select count(*)
create table t1 (a char(10))
partition by hash (extractvalue(a,'a'));
+--echo #
+--echo # Bug #42849: innodb crash with varying time_zone on partitioned
+--echo # timestamp primary key
+--echo #
+
+# A correctly partitioned table to test that trying to repartition it using
+# timezone-dependent expression will throw an error.
+CREATE TABLE old (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (UNIX_TIMESTAMP(a)) (
+PARTITION p VALUES LESS THAN (1219089600),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+# Check that allowed arithmetic/math functions involving TIMESTAMP values result
+# in ER_PARTITION_FUNC_NOT_ALLOWED_ERROR when used as a partitioning function
+
+--error ER_PARTITION_FUNC_NOT_ALLOWED_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (a) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_PARTITION_FUNC_NOT_ALLOWED_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (a) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (a+0) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (a+0) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (a % 2) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (a % 2) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (ABS(a)) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (ABS(a)) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (CEILING(a)) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (CEILING(a)) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (FLOOR(a)) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (FLOOR(a)) (
+PARTITION p VALUES LESS THAN (20080819),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+# Check that allowed date/time functions involving TIMESTAMP values result
+# in ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR when used as a partitioning function
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (TO_DAYS(a)) (
+PARTITION p VALUES LESS THAN (733638),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (TO_DAYS(a)) (
+PARTITION p VALUES LESS THAN (733638),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (DAYOFYEAR(a)) (
+PARTITION p VALUES LESS THAN (231),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (DAYOFYEAR(a)) (
+PARTITION p VALUES LESS THAN (231),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (DAYOFMONTH(a)) (
+PARTITION p VALUES LESS THAN (19),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (DAYOFMONTH(a)) (
+PARTITION p VALUES LESS THAN (19),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (DAYOFWEEK(a)) (
+PARTITION p VALUES LESS THAN (3),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (DAYOFWEEK(a)) (
+PARTITION p VALUES LESS THAN (3),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (MONTH(a)) (
+PARTITION p VALUES LESS THAN (8),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (MONTH(a)) (
+PARTITION p VALUES LESS THAN (8),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (HOUR(a)) (
+PARTITION p VALUES LESS THAN (17),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (HOUR(a)) (
+PARTITION p VALUES LESS THAN (17),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (MINUTE(a)) (
+PARTITION p VALUES LESS THAN (55),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (MINUTE(a)) (
+PARTITION p VALUES LESS THAN (55),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (QUARTER(a)) (
+PARTITION p VALUES LESS THAN (3),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (QUARTER(a)) (
+PARTITION p VALUES LESS THAN (3),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (SECOND(a)) (
+PARTITION p VALUES LESS THAN (7),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (SECOND(a)) (
+PARTITION p VALUES LESS THAN (7),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (YEARWEEK(a)) (
+PARTITION p VALUES LESS THAN (200833),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (YEARWEEK(a)) (
+PARTITION p VALUES LESS THAN (200833),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (YEAR(a)) (
+PARTITION p VALUES LESS THAN (2008),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (YEAR(a)) (
+PARTITION p VALUES LESS THAN (2008),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (WEEKDAY(a)) (
+PARTITION p VALUES LESS THAN (3),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (WEEKDAY(a)) (
+PARTITION p VALUES LESS THAN (3),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (TIME_TO_SEC(a)) (
+PARTITION p VALUES LESS THAN (64507),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (TIME_TO_SEC(a)) (
+PARTITION p VALUES LESS THAN (64507),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (EXTRACT(DAY FROM a)) (
+PARTITION p VALUES LESS THAN (18),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (EXTRACT(DAY FROM a)) (
+PARTITION p VALUES LESS THAN (18),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL, b TIMESTAMP NOT NULL, PRIMARY KEY(a,b))
+PARTITION BY RANGE (DATEDIFF(a, a)) (
+PARTITION p VALUES LESS THAN (18),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (DATEDIFF(a, a)) (
+PARTITION p VALUES LESS THAN (18),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (YEAR(a + 0)) (
+PARTITION p VALUES LESS THAN (2008),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (YEAR(a + 0)) (
+PARTITION p VALUES LESS THAN (2008),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (TO_DAYS(a + '2008-01-01')) (
+PARTITION p VALUES LESS THAN (733638),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (TO_DAYS(a + '2008-01-01')) (
+PARTITION p VALUES LESS THAN (733638),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP NOT NULL PRIMARY KEY)
+PARTITION BY RANGE (YEAR(a + '2008-01-01')) (
+PARTITION p VALUES LESS THAN (2008),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (YEAR(a + '2008-01-01')) (
+PARTITION p VALUES LESS THAN (2008),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+ALTER TABLE old ADD COLUMN b DATE;
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP, b DATE)
+PARTITION BY RANGE (YEAR(a + b)) (
+PARTITION p VALUES LESS THAN (2008),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (YEAR(a + b)) (
+PARTITION p VALUES LESS THAN (2008),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP, b DATE)
+PARTITION BY RANGE (TO_DAYS(a + b)) (
+PARTITION p VALUES LESS THAN (733638),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (TO_DAYS(a + b)) (
+PARTITION p VALUES LESS THAN (733638),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP, b date)
+PARTITION BY RANGE (UNIX_TIMESTAMP(a + b)) (
+PARTITION p VALUES LESS THAN (1219089600),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (UNIX_TIMESTAMP(a + b)) (
+PARTITION p VALUES LESS THAN (1219089600),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+CREATE TABLE new (a TIMESTAMP, b TIMESTAMP)
+PARTITION BY RANGE (UNIX_TIMESTAMP(a + b)) (
+PARTITION p VALUES LESS THAN (1219089600),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+
+ALTER TABLE old MODIFY b TIMESTAMP;
+
+--error ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ALTER TABLE old
+PARTITION BY RANGE (UNIX_TIMESTAMP(a + b)) (
+PARTITION p VALUES LESS THAN (1219089600),
+PARTITION pmax VALUES LESS THAN MAXVALUE);
+DROP TABLE old;
+
+--echo End of 5.1 tests
=== added file 'mysql-test/t/partition_innodb-master.opt'
--- a/mysql-test/t/partition_innodb-master.opt 1970-01-01 00:00:00 +0000
+++ b/mysql-test/t/partition_innodb-master.opt 2010-01-18 16:49:18 +0000
@@ -0,0 +1 @@
+--innodb_lock_wait_timeout=1
=== modified file 'mysql-test/t/partition_innodb.test'
--- a/mysql-test/t/partition_innodb.test 2009-12-03 11:19:05 +0000
+++ b/mysql-test/t/partition_innodb.test 2010-03-04 08:03:07 +0000
@@ -5,6 +5,8 @@
drop table if exists t1;
--enable_warnings
+let $MYSQLD_DATADIR= `SELECT @@datadir`;
+
#
# Bug#47029: Crash when reorganize partition with subpartition
#
@@ -296,6 +298,47 @@ CREATE TABLE t1 (a INT) ENGINE=InnoDB
PARTITION BY list(a) (PARTITION p1 VALUES IN (1));
CREATE INDEX i1 ON t1 (a);
DROP TABLE t1;
-let $MYSQLD_DATADIR= `SELECT @@datadir`;
+
# Before the fix it should show extra file like #sql-2405_2.par
--list_files $MYSQLD_DATADIR/test/ *
+
+--echo #
+--echo # Bug#47343: InnoDB fails to clean-up after lock wait timeout on
+--echo # REORGANIZE PARTITION
+--echo #
+CREATE TABLE t1 (
+ a INT,
+ b DATE NOT NULL,
+ PRIMARY KEY (a, b)
+) ENGINE=InnoDB
+PARTITION BY RANGE (a) (
+ PARTITION pMAX VALUES LESS THAN MAXVALUE
+) ;
+
+INSERT INTO t1 VALUES (1, '2001-01-01'), (2, '2002-02-02'), (3, '2003-03-03');
+
+START TRANSACTION;
+SELECT * FROM t1 FOR UPDATE;
+
+connect (con1, localhost, root,,);
+--echo # Connection con1
+--error ER_LOCK_WAIT_TIMEOUT
+ALTER TABLE t1 REORGANIZE PARTITION pMAX INTO
+(PARTITION p3 VALUES LESS THAN (3),
+ PARTITION pMAX VALUES LESS THAN MAXVALUE);
+SHOW WARNINGS;
+--error ER_LOCK_WAIT_TIMEOUT
+ALTER TABLE t1 REORGANIZE PARTITION pMAX INTO
+(PARTITION p3 VALUES LESS THAN (3),
+ PARTITION pMAX VALUES LESS THAN MAXVALUE);
+SHOW WARNINGS;
+
+#Contents of the 'test' database directory:
+--list_files $MYSQLD_DATADIR/test
+
+disconnect con1;
+connection default;
+--echo # Connection default
+SELECT * FROM t1;
+COMMIT;
+DROP TABLE t1;
=== modified file 'mysql-test/t/partition_pruning.test'
--- a/mysql-test/t/partition_pruning.test 2009-08-28 10:55:59 +0000
+++ b/mysql-test/t/partition_pruning.test 2009-12-22 17:59:37 +0000
@@ -8,6 +8,166 @@
drop table if exists t1,t2,t3,t4,t5,t6,t7,t8,t9;
--enable_warnings
+--echo #
+--echo # Bug#49742: Partition Pruning not working correctly for RANGE
+--echo #
+CREATE TABLE t1 (a INT PRIMARY KEY)
+PARTITION BY RANGE (a) (
+PARTITION p0 VALUES LESS THAN (1),
+PARTITION p1 VALUES LESS THAN (2),
+PARTITION p2 VALUES LESS THAN (3),
+PARTITION p3 VALUES LESS THAN (4),
+PARTITION p4 VALUES LESS THAN (5),
+PARTITION p5 VALUES LESS THAN (6),
+PARTITION max VALUES LESS THAN MAXVALUE);
+
+INSERT INTO t1 VALUES (-1),(0),(1),(2),(3),(4),(5),(6),(7),(8);
+
+SELECT * FROM t1 WHERE a < 1;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 1;
+SELECT * FROM t1 WHERE a < 2;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 2;
+SELECT * FROM t1 WHERE a < 3;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 3;
+SELECT * FROM t1 WHERE a < 4;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 4;
+SELECT * FROM t1 WHERE a < 5;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 5;
+SELECT * FROM t1 WHERE a < 6;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 6;
+SELECT * FROM t1 WHERE a < 7;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 7;
+SELECT * FROM t1 WHERE a <= 1;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 1;
+SELECT * FROM t1 WHERE a <= 2;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 2;
+SELECT * FROM t1 WHERE a <= 3;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 3;
+SELECT * FROM t1 WHERE a <= 4;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 4;
+SELECT * FROM t1 WHERE a <= 5;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 5;
+SELECT * FROM t1 WHERE a <= 6;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 6;
+SELECT * FROM t1 WHERE a <= 7;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 7;
+SELECT * FROM t1 WHERE a = 1;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 1;
+SELECT * FROM t1 WHERE a = 2;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 2;
+SELECT * FROM t1 WHERE a = 3;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 3;
+SELECT * FROM t1 WHERE a = 4;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 4;
+SELECT * FROM t1 WHERE a = 5;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 5;
+SELECT * FROM t1 WHERE a = 6;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 6;
+SELECT * FROM t1 WHERE a = 7;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 7;
+SELECT * FROM t1 WHERE a >= 1;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 1;
+SELECT * FROM t1 WHERE a >= 2;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 2;
+SELECT * FROM t1 WHERE a >= 3;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 3;
+SELECT * FROM t1 WHERE a >= 4;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 4;
+SELECT * FROM t1 WHERE a >= 5;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 5;
+SELECT * FROM t1 WHERE a >= 6;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 6;
+SELECT * FROM t1 WHERE a >= 7;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 7;
+SELECT * FROM t1 WHERE a > 1;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 1;
+SELECT * FROM t1 WHERE a > 2;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 2;
+SELECT * FROM t1 WHERE a > 3;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 3;
+SELECT * FROM t1 WHERE a > 4;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 4;
+SELECT * FROM t1 WHERE a > 5;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 5;
+SELECT * FROM t1 WHERE a > 6;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 6;
+SELECT * FROM t1 WHERE a > 7;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 7;
+DROP TABLE t1;
+
+CREATE TABLE t1 (a INT PRIMARY KEY)
+PARTITION BY RANGE (a) (
+PARTITION p0 VALUES LESS THAN (1),
+PARTITION p1 VALUES LESS THAN (2),
+PARTITION p2 VALUES LESS THAN (3),
+PARTITION p3 VALUES LESS THAN (4),
+PARTITION p4 VALUES LESS THAN (5),
+PARTITION max VALUES LESS THAN MAXVALUE);
+
+INSERT INTO t1 VALUES (-1),(0),(1),(2),(3),(4),(5),(6),(7);
+
+SELECT * FROM t1 WHERE a < 1;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 1;
+SELECT * FROM t1 WHERE a < 2;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 2;
+SELECT * FROM t1 WHERE a < 3;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 3;
+SELECT * FROM t1 WHERE a < 4;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 4;
+SELECT * FROM t1 WHERE a < 5;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 5;
+SELECT * FROM t1 WHERE a < 6;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a < 6;
+SELECT * FROM t1 WHERE a <= 1;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 1;
+SELECT * FROM t1 WHERE a <= 2;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 2;
+SELECT * FROM t1 WHERE a <= 3;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 3;
+SELECT * FROM t1 WHERE a <= 4;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 4;
+SELECT * FROM t1 WHERE a <= 5;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 5;
+SELECT * FROM t1 WHERE a <= 6;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a <= 6;
+SELECT * FROM t1 WHERE a = 1;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 1;
+SELECT * FROM t1 WHERE a = 2;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 2;
+SELECT * FROM t1 WHERE a = 3;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 3;
+SELECT * FROM t1 WHERE a = 4;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 4;
+SELECT * FROM t1 WHERE a = 5;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 5;
+SELECT * FROM t1 WHERE a = 6;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a = 6;
+SELECT * FROM t1 WHERE a >= 1;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 1;
+SELECT * FROM t1 WHERE a >= 2;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 2;
+SELECT * FROM t1 WHERE a >= 3;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 3;
+SELECT * FROM t1 WHERE a >= 4;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 4;
+SELECT * FROM t1 WHERE a >= 5;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 5;
+SELECT * FROM t1 WHERE a >= 6;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a >= 6;
+SELECT * FROM t1 WHERE a > 1;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 1;
+SELECT * FROM t1 WHERE a > 2;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 2;
+SELECT * FROM t1 WHERE a > 3;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 3;
+SELECT * FROM t1 WHERE a > 4;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 4;
+SELECT * FROM t1 WHERE a > 5;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 5;
+SELECT * FROM t1 WHERE a > 6;
+EXPLAIN PARTITIONS SELECT * FROM t1 WHERE a > 6;
+DROP TABLE t1;
+
#
# Bug#20577: Partitions: use of to_days() function leads to selection failures
#
=== modified file 'mysql-test/t/ps.test'
--- a/mysql-test/t/ps.test 2009-05-27 15:19:44 +0000
+++ b/mysql-test/t/ps.test 2009-12-26 11:25:56 +0000
@@ -1991,6 +1991,45 @@ select @arg;
execute stmt using @arg;
deallocate prepare stmt;
+--echo #
+--echo # Bug#48508: Crash on prepared statement re-execution.
+--echo #
+create table t1(b int);
+insert into t1 values (0);
+create view v1 AS select 1 as a from t1 where b;
+prepare stmt from "select * from v1 where a";
+execute stmt;
+execute stmt;
+deallocate prepare stmt;
+drop table t1;
+drop view v1;
+
+create table t1(a bigint);
+create table t2(b tinyint);
+insert into t2 values (null);
+prepare stmt from "select 1 from t1 join t2 on a xor b where b > 1 and a =1";
+execute stmt;
+execute stmt;
+deallocate prepare stmt;
+drop table t1,t2;
+--echo #
+
+
+--echo #
+--echo # Bug #49570: Assertion failed: !(order->used & map)
+--echo # on re-execution of prepared statement
+--echo #
+CREATE TABLE t1(a INT PRIMARY KEY);
+INSERT INTO t1 VALUES(0), (1);
+PREPARE stmt FROM
+ "SELECT 1 FROM t1 JOIN t1 t2 USING(a) GROUP BY t2.a, t1.a";
+EXECUTE stmt;
+EXECUTE stmt;
+EXECUTE stmt;
+DEALLOCATE PREPARE stmt;
+DROP TABLE t1;
+
+
--echo End of 5.0 tests.
#
@@ -3009,5 +3048,21 @@ execute stmt;
drop table t1;
deallocate prepare stmt;
+--echo #
+--echo # Bug#49141: Encode function is significantly slower in 5.1 compared to 5.0
+--echo #
+
+prepare encode from "select encode(?, ?) into @ciphertext";
+prepare decode from "select decode(?, ?) into @plaintext";
+set @str="abc", @key="cba";
+execute encode using @str, @key;
+execute decode using @ciphertext, @key;
+select @plaintext;
+set @str="bcd", @key="dcb";
+execute encode using @str, @key;
+execute decode using @ciphertext, @key;
+select @plaintext;
+deallocate prepare encode;
+deallocate prepare decode;
--echo End of 5.1 tests.
=== modified file 'mysql-test/t/ps_ddl.test'
--- a/mysql-test/t/ps_ddl.test 2008-08-13 19:42:21 +0000
+++ b/mysql-test/t/ps_ddl.test 2010-01-16 07:44:24 +0000
@@ -1445,18 +1445,19 @@ call p_verify_reprepare_count(0);
drop table t2;
# Temporary table with name of table to be created exists
create temporary table t2 (a int);
---error ER_TABLE_EXISTS_ERROR
+# Temporary table and base table are not in the same name space.
execute stmt;
call p_verify_reprepare_count(1);
--error ER_TABLE_EXISTS_ERROR
execute stmt;
-call p_verify_reprepare_count(0);
+call p_verify_reprepare_count(1);
drop temporary table t2;
+--error ER_TABLE_EXISTS_ERROR
execute stmt;
-call p_verify_reprepare_count(1);
+call p_verify_reprepare_count(0);
drop table t2;
execute stmt;
-call p_verify_reprepare_count(0);
+call p_verify_reprepare_count(1);
drop table t2;
# View with name of table to be created exists
# Attention:
=== modified file 'mysql-test/t/select.test'
--- a/mysql-test/t/select.test 2009-12-15 17:08:21 +0000
+++ b/mysql-test/t/select.test 2010-01-29 11:08:49 +0000
@@ -3786,6 +3786,96 @@ SELECT 1 FROM t2 JOIN t1 ON 1=1
DROP TABLE t1,t2;
+--echo #
+--echo # Bug #49199: Optimizer handles incorrectly:
+--echo # field='const1' AND field='const2' in some cases
+--echo
+CREATE TABLE t1(a DATETIME NOT NULL);
+INSERT INTO t1 VALUES('2001-01-01');
+SELECT * FROM t1 WHERE a='2001-01-01' AND a='2001-01-01 00:00:00';
+EXPLAIN EXTENDED SELECT * FROM t1 WHERE a='2001-01-01' AND a='2001-01-01 00:00:00';
+DROP TABLE t1;
+
+CREATE TABLE t1(a DATE NOT NULL);
+INSERT INTO t1 VALUES('2001-01-01');
+SELECT * FROM t1 WHERE a='2001-01-01' AND a='2001-01-01 00:00:00';
+EXPLAIN EXTENDED SELECT * FROM t1 WHERE a='2001-01-01' AND a='2001-01-01 00:00:00';
+DROP TABLE t1;
+
+CREATE TABLE t1(a TIMESTAMP NOT NULL);
+INSERT INTO t1 VALUES('2001-01-01');
+SELECT * FROM t1 WHERE a='2001-01-01' AND a='2001-01-01 00:00:00';
+EXPLAIN EXTENDED SELECT * FROM t1 WHERE a='2001-01-01' AND a='2001-01-01 00:00:00';
+DROP TABLE t1;
+
+CREATE TABLE t1(a DATETIME NOT NULL, b DATE NOT NULL);
+INSERT INTO t1 VALUES('2001-01-01', '2001-01-01');
+SELECT * FROM t1 WHERE a='2001-01-01' AND a=b AND b='2001-01-01 00:00:00';
+EXPLAIN EXTENDED SELECT * FROM t1 WHERE a='2001-01-01' AND a=b AND b='2001-01-01 00:00:00';
+DROP TABLE t1;
+
+CREATE TABLE t1(a DATETIME NOT NULL, b VARCHAR(20) NOT NULL);
+INSERT INTO t1 VALUES('2001-01-01', '2001-01-01');
+SELECT * FROM t1 WHERE a='2001-01-01' AND a=b AND b='2001-01-01 00:00:00';
+EXPLAIN EXTENDED SELECT * FROM t1 WHERE a='2001-01-01' AND a=b AND b='2001-01-01 00:00:00';
+
+SELECT * FROM t1 WHERE a='2001-01-01 00:00:00' AND a=b AND b='2001-01-01';
+EXPLAIN EXTENDED SELECT * FROM t1 WHERE a='2001-01-01 00:00:00' AND a=b AND b='2001-01-01';
+DROP TABLE t1;
+
+CREATE TABLE t1(a DATETIME NOT NULL, b DATE NOT NULL);
+INSERT INTO t1 VALUES('2001-01-01', '2001-01-01');
+SELECT x.a, y.a, z.a FROM t1 x
+ JOIN t1 y ON x.a=y.a
+ JOIN t1 z ON y.a=z.a
+ WHERE x.a='2001-01-01' AND z.a='2001-01-01 00:00:00';
+EXPLAIN EXTENDED SELECT x.a, y.a, z.a FROM t1 x
+ JOIN t1 y ON x.a=y.a
+ JOIN t1 z ON y.a=z.a
+ WHERE x.a='2001-01-01' AND z.a='2001-01-01 00:00:00';
+DROP TABLE t1;
+
+
+--echo #
+--echo # Bug #49897: crash in ptr_compare when char(0) NOT NULL
+--echo # column is used for ORDER BY
+--echo #
+SET @old_sort_buffer_size= @@session.sort_buffer_size;
+SET @@sort_buffer_size= 40000;
+
+CREATE TABLE t1(a CHAR(0) NOT NULL);
+--disable_warnings
+INSERT INTO t1 VALUES (0), (0), (0);
+--enable_warnings
+INSERT INTO t1 SELECT t11.a FROM t1 t11, t1 t12;
+INSERT INTO t1 SELECT t11.a FROM t1 t11, t1 t12;
+INSERT INTO t1 SELECT t11.a FROM t1 t11, t1 t12;
+EXPLAIN SELECT a FROM t1 ORDER BY a;
+--disable_result_log
+SELECT a FROM t1 ORDER BY a;
+--enable_result_log
+DROP TABLE t1;
+
+CREATE TABLE t1(a CHAR(0) NOT NULL, b CHAR(0) NOT NULL, c int);
+--disable_warnings
+INSERT INTO t1 VALUES (0, 0, 0), (0, 0, 2), (0, 0, 1);
+--enable_warnings
+INSERT INTO t1 SELECT t11.a, t11.b, t11.c FROM t1 t11, t1 t12;
+INSERT INTO t1 SELECT t11.a, t11.b, t11.c FROM t1 t11, t1 t12;
+INSERT INTO t1 SELECT t11.a, t11.b, t11.c FROM t1 t11, t1 t12;
+EXPLAIN SELECT a FROM t1 ORDER BY a LIMIT 5;
+SELECT a FROM t1 ORDER BY a LIMIT 5;
+EXPLAIN SELECT * FROM t1 ORDER BY a, b LIMIT 5;
+SELECT * FROM t1 ORDER BY a, b LIMIT 5;
+EXPLAIN SELECT * FROM t1 ORDER BY a, b, c LIMIT 5;
+SELECT * FROM t1 ORDER BY a, b, c LIMIT 5;
+EXPLAIN SELECT * FROM t1 ORDER BY c, a LIMIT 5;
+SELECT * FROM t1 ORDER BY c, a LIMIT 5;
+
+SET @@sort_buffer_size= @old_sort_buffer_size;
+DROP TABLE t1;
+
+
--echo End of 5.0 tests
#
=== modified file 'mysql-test/t/sp-ucs2.test'
--- a/mysql-test/t/sp-ucs2.test 2007-02-19 10:57:06 +0000
+++ b/mysql-test/t/sp-ucs2.test 2009-12-02 11:17:08 +0000
@@ -26,3 +26,32 @@ drop table t3|
delimiter ;|
+
+#
+# Bug#48766 SHOW CREATE FUNCTION returns extra data in return clause
+#
+SET NAMES utf8;
+--disable_warnings
+DROP FUNCTION IF EXISTS bug48766;
+--enable_warnings
+#
+# Test that Latin letters are not prepended with extra '\0'.
+#
+CREATE FUNCTION bug48766 ()
+ RETURNS ENUM( 'w' ) CHARACTER SET ucs2
+ RETURN 0;
+SHOW CREATE FUNCTION bug48766;
+SELECT DTD_IDENTIFIER FROM INFORMATION_SCHEMA.ROUTINES
+WHERE ROUTINE_NAME='bug48766';
+DROP FUNCTION bug48766;
+#
+# Test non-Latin characters
+#
+CREATE FUNCTION bug48766 ()
+ RETURNS ENUM('а','б','в','г') CHARACTER SET ucs2
+ RETURN 0;
+SHOW CREATE FUNCTION bug48766;
+SELECT DTD_IDENTIFIER FROM INFORMATION_SCHEMA.ROUTINES
+WHERE ROUTINE_NAME='bug48766';
+
+DROP FUNCTION bug48766;
=== modified file 'mysql-test/t/sp.test'
--- a/mysql-test/t/sp.test 2009-11-13 01:03:26 +0000
+++ b/mysql-test/t/sp.test 2009-12-23 13:44:03 +0000
@@ -8242,6 +8242,25 @@ while ($tab_count)
DROP PROCEDURE p1;
DROP TABLE t1;
+#
+# Bug#47649 crash during CALL procedure
+#
+CREATE TABLE t1 ( f1 integer, primary key (f1));
+CREATE TABLE t2 LIKE t1;
+CREATE TEMPORARY TABLE t3 LIKE t1;
+delimiter |;
+CREATE PROCEDURE p1 () BEGIN SELECT f1 FROM t3 AS A WHERE A.f1 IN ( SELECT f1 FROM t3 ) ;
+END|
+delimiter ;|
+--error ER_CANT_REOPEN_TABLE
+CALL p1;
+CREATE VIEW t3 AS SELECT f1 FROM t2 A WHERE A.f1 IN ( SELECT f1 FROM t2 );
+DROP TABLE t3;
+CALL p1;
+CALL p1;
+DROP PROCEDURE p1;
+DROP TABLE t1, t2;
+DROP VIEW t3;
--echo #
--echo # Bug #46629: Item_in_subselect::val_int(): Assertion `0'
=== added file 'mysql-test/t/sp_sync.test'
--- a/mysql-test/t/sp_sync.test 1970-01-01 00:00:00 +0000
+++ b/mysql-test/t/sp_sync.test 2010-01-15 08:51:39 +0000
@@ -0,0 +1,58 @@
+# This test should work in embedded server after mysqltest is fixed
+-- source include/not_embedded.inc
+
+--echo Tests of synchronization of stored procedure execution.
+
+--source include/have_debug_sync.inc
+
+--echo #
+--echo # Bug#48157: crash in Item_field::used_tables
+--echo #
+
+CREATE TABLE t1 AS SELECT 1 AS a, 1 AS b;
+CREATE TABLE t2 AS SELECT 1 AS a, 1 AS b;
+
+DELIMITER |;
+
+CREATE PROCEDURE p1()
+BEGIN
+ UPDATE t1 JOIN t2 USING( a, b ) SET t1.b = 1, t2.b = 1;
+END|
+
+DELIMITER ;|
+
+connect (con1,localhost,root,,);
+connect (con2,localhost,root,,);
+
+connection con1;
+LOCK TABLES t1 WRITE, t2 WRITE;
+
+connection con2;
+LET $ID= `select connection_id()`;
+SET DEBUG_SYNC = 'multi_update_reopen_tables SIGNAL parked WAIT_FOR go';
+--send CALL p1()
+
+connection con1;
+let $wait_condition= SELECT 1 FROM information_schema.processlist WHERE ID = $ID AND
+state = "Locked";
+--source include/wait_condition.inc
+DROP TABLE t1, t2;
+SET DEBUG_SYNC = 'now WAIT_FOR parked';
+CREATE TABLE t1 AS SELECT 1 AS a, 1 AS b;
+CREATE TABLE t2 AS SELECT 1 AS a, 1 AS b;
+SET DEBUG_SYNC = 'now SIGNAL go';
+
+connection con2;
+--reap
+
+disconnect con1;
+disconnect con2;
+connection default;
+
+--echo # Without the DEBUG_SYNC point added to the server code in the same
+--echo # patch as this test, the following statement would hang.
+DROP TABLE t1, t2;
+DROP PROCEDURE p1;
+
+SET DEBUG_SYNC = 'RESET';
+
=== modified file 'mysql-test/t/subselect.test'
--- a/mysql-test/t/subselect.test 2010-01-15 15:27:55 +0000
+++ b/mysql-test/t/subselect.test 2010-03-04 08:03:07 +0000
@@ -3374,6 +3374,32 @@ WHERE a = 230;
DROP TABLE t1, st1, st2;
+--echo #
+--echo # Bug #48709: Assertion failed in sql_select.cc:11782:
+--echo # int join_read_key(JOIN_TAB*)
+--echo #
+
+CREATE TABLE t1 (pk int PRIMARY KEY, int_key int);
+INSERT INTO t1 VALUES (10,1), (14,1);
+
+CREATE TABLE t2 (pk int PRIMARY KEY, int_key int);
+INSERT INTO t2 VALUES (3,3), (5,NULL), (7,3);
+
+--echo # should have eq_ref for t1
+--replace_column 1 x 2 x 5 x 6 x 7 x 8 x 9 x 10 x
+EXPLAIN
+SELECT * FROM t2 outr
+WHERE outr.int_key NOT IN (SELECT t1.pk FROM t1, t2)
+ORDER BY outr.pk;
+
+--echo # should not crash on debug binaries
+SELECT * FROM t2 outr
+WHERE outr.int_key NOT IN (SELECT t1.pk FROM t1, t2)
+ORDER BY outr.pk;
+
+DROP TABLE t1,t2;
+
+
--echo End of 5.0 tests.
#
@@ -3569,4 +3595,19 @@ SELECT 1 FROM t1 GROUP BY
(SELECT LAST_INSERT_ID() FROM t1 ORDER BY MIN(a) ASC LIMIT 1);
DROP TABLE t1;
+--echo #
+--echo # Bug #49512 : subquery with aggregate function crash
+--echo # subselect_single_select_engine::exec()
+
+CREATE TABLE t1(a INT);
+INSERT INTO t1 VALUES();
+
+--echo # should not crash
+SELECT 1 FROM t1 WHERE a <> SOME
+(
+ SELECT MAX((SELECT a FROM t1 LIMIT 1)) AS d
+ FROM t1,t1 a
+);
+DROP TABLE t1;
+
--echo End of 5.1 tests.
=== modified file 'mysql-test/t/union.test'
--- a/mysql-test/t/union.test 2009-05-15 07:11:07 +0000
+++ b/mysql-test/t/union.test 2010-01-06 10:24:51 +0000
@@ -1102,3 +1102,58 @@ DROP TABLE t1;
--echo End of 5.0 tests
+
+
+--echo #
+--echo # Bug #49734: Crash on EXPLAIN EXTENDED UNION ... ORDER BY
+--echo # <any non-const-function>
+--echo #
+
+CREATE TABLE t1 (a VARCHAR(10), FULLTEXT KEY a (a));
+INSERT INTO t1 VALUES (1),(2);
+CREATE TABLE t2 (b INT);
+INSERT INTO t2 VALUES (1),(2);
+
+--echo # Should not crash
+EXPLAIN EXTENDED
+SELECT * FROM t1 UNION SELECT * FROM t1 ORDER BY a + 12;
+
+--echo # Should not crash
+SELECT * FROM t1 UNION SELECT * FROM t1 ORDER BY a + 12;
+
+
+--echo # Should not crash
+--error ER_CANT_USE_OPTION_HERE
+EXPLAIN EXTENDED
+SELECT * FROM t1 UNION SELECT * FROM t1
+ ORDER BY MATCH(a) AGAINST ('+abc' IN BOOLEAN MODE);
+
+--echo # Should not crash
+--error ER_CANT_USE_OPTION_HERE
+SELECT * FROM t1 UNION SELECT * FROM t1
+ ORDER BY MATCH(a) AGAINST ('+abc' IN BOOLEAN MODE);
+
+--echo # Should not crash
+(SELECT * FROM t1) UNION (SELECT * FROM t1)
+ ORDER BY MATCH(a) AGAINST ('+abc' IN BOOLEAN MODE);
+
+
+--echo # Should not crash
+EXPLAIN EXTENDED
+SELECT * FROM t1 UNION SELECT * FROM t1
+ ORDER BY (SELECT a FROM t2 WHERE b = 12);
+
+--echo # Should not crash
+--disable_result_log
+SELECT * FROM t1 UNION SELECT * FROM t1
+ ORDER BY (SELECT a FROM t2 WHERE b = 12);
+--enable_result_log
+
+--echo # Should not crash
+SELECT * FROM t2 UNION SELECT * FROM t2
+ ORDER BY (SELECT * FROM t1 WHERE MATCH(a) AGAINST ('+abc' IN BOOLEAN MODE));
+
+DROP TABLE t1,t2;
+
+
+--echo End of 5.1 tests
=== modified file 'mysql-test/t/user_var.test'
--- a/mysql-test/t/user_var.test 2009-05-15 13:03:22 +0000
+++ b/mysql-test/t/user_var.test 2009-12-22 10:38:33 +0000
@@ -295,6 +295,26 @@ SELECT @a, @b;
SELECT a, b FROM t1 WHERE a=2 AND b=3 GROUP BY a, b;
DROP TABLE t1;
+#
+# Bug#47371: reference by same column name
+#
+CREATE TABLE t1 (f1 int(11) default NULL, f2 int(11) default NULL);
+CREATE TABLE t2 (f1 int(11) default NULL, f2 int(11) default NULL, foo int(11));
+CREATE TABLE t3 (f1 int(11) default NULL, f2 int(11) default NULL);
+
+INSERT INTO t1 VALUES(10, 10);
+INSERT INTO t1 VALUES(10, 10);
+INSERT INTO t2 VALUES(10, 10, 10);
+INSERT INTO t2 VALUES(10, 10, 10);
+INSERT INTO t3 VALUES(10, 10);
+INSERT INTO t3 VALUES(10, 10);
+
+SELECT MIN(t2.f1),
+@bar:= (SELECT MIN(t3.f2) FROM t3 WHERE t3.f2 > foo)
+FROM t1,t2 WHERE t1.f1 = t2.f1 ORDER BY t2.f1;
+
+DROP TABLE t1, t2, t3;
+
--echo End of 5.0 tests
#
=== modified file 'mysql-test/t/variables.test'
--- a/mysql-test/t/variables.test 2010-01-11 13:15:28 +0000
+++ b/mysql-test/t/variables.test 2010-03-04 08:03:07 +0000
@@ -772,6 +772,12 @@ set @@hostname= "anothername";
--replace_column 2 #
show variables like 'hostname';
+--echo #
+--echo # BUG#37408 - Compressed MyISAM files should not require/use mmap()
+--echo #
+--echo # Test 'myisam_mmap_size' option is not dynamic
+--error ER_INCORRECT_GLOBAL_LOCAL_VAR
+SET @@myisam_mmap_size= 500M;
--echo End of 5.0 tests
#
=== modified file 'mysys/charset.c'
--- a/mysys/charset.c 2009-09-07 20:50:10 +0000
+++ b/mysys/charset.c 2010-03-04 08:03:07 +0000
@@ -220,7 +220,8 @@ copy_uca_collation(CHARSET_INFO *to, CHA
static int add_collation(CHARSET_INFO *cs)
{
if (cs->name && (cs->number ||
- (cs->number=get_collation_number_internal(cs->name))))
+ (cs->number=get_collation_number_internal(cs->name))) &&
+ cs->number < array_elements(all_charsets))
{
if (!all_charsets[cs->number])
{
@@ -324,7 +325,6 @@ static int add_collation(CHARSET_INFO *c
#define MY_CHARSET_INDEX "Index.xml"
const char *charsets_dir= NULL;
-static int charset_initialized=0;
static my_bool my_read_charset_file(const char *filename, myf myflags)
@@ -402,63 +402,37 @@ static void *cs_alloc(size_t size)
}
-#ifdef __NETWARE__
-my_bool STDCALL init_available_charsets(myf myflags)
-#else
-static my_bool init_available_charsets(myf myflags)
-#endif
+static my_pthread_once_t charsets_initialized= MY_PTHREAD_ONCE_INIT;
+
+static void init_available_charsets(void)
{
char fname[FN_REFLEN + sizeof(MY_CHARSET_INDEX)];
- my_bool error=FALSE;
- /*
- We have to use charset_initialized to not lock on THR_LOCK_charset
- inside get_internal_charset...
- */
- if (!charset_initialized)
+ CHARSET_INFO **cs;
+
+ bzero(&all_charsets,sizeof(all_charsets));
+ init_compiled_charsets(MYF(0));
+
+ /* Copy compiled charsets */
+ for (cs=all_charsets;
+ cs < all_charsets+array_elements(all_charsets)-1 ;
+ cs++)
{
- CHARSET_INFO **cs;
- /*
- To make things thread safe we are not allowing other threads to interfere
- while we may changing the cs_info_table
- */
- pthread_mutex_lock(&THR_LOCK_charset);
- if (!charset_initialized)
+ if (*cs)
{
- bzero(&all_charsets,sizeof(all_charsets));
- init_compiled_charsets(myflags);
-
- /* Copy compiled charsets */
- for (cs=all_charsets;
- cs < all_charsets+array_elements(all_charsets)-1 ;
- cs++)
- {
- if (*cs)
- {
- if (cs[0]->ctype)
- if (init_state_maps(*cs))
- *cs= NULL;
- }
- }
-
- strmov(get_charsets_dir(fname), MY_CHARSET_INDEX);
- error= my_read_charset_file(fname,myflags);
- charset_initialized=1;
+ if (cs[0]->ctype)
+ if (init_state_maps(*cs))
+ *cs= NULL;
}
- pthread_mutex_unlock(&THR_LOCK_charset);
}
- return error;
-}
-
-
-void free_charsets(void)
-{
- charset_initialized=0;
+
+ strmov(get_charsets_dir(fname), MY_CHARSET_INDEX);
+ my_read_charset_file(fname, MYF(0));
}
uint get_collation_number(const char *name)
{
- init_available_charsets(MYF(0));
+ my_pthread_once(&charsets_initialized, init_available_charsets);
return get_collation_number_internal(name);
}
@@ -466,7 +440,7 @@ uint get_collation_number(const char *na
uint get_charset_number(const char *charset_name, uint cs_flags)
{
CHARSET_INFO **cs;
- init_available_charsets(MYF(0));
+ my_pthread_once(&charsets_initialized, init_available_charsets);
for (cs= all_charsets;
cs < all_charsets+array_elements(all_charsets)-1 ;
@@ -483,7 +457,7 @@ uint get_charset_number(const char *char
const char *get_charset_name(uint charset_number)
{
CHARSET_INFO *cs;
- init_available_charsets(MYF(0));
+ my_pthread_once(&charsets_initialized, init_available_charsets);
cs=all_charsets[charset_number];
if (cs && (cs->number == charset_number) && cs->name )
@@ -541,7 +515,7 @@ CHARSET_INFO *get_charset(uint cs_number
if (cs_number == default_charset_info->number)
return default_charset_info;
- (void) init_available_charsets(MYF(0)); /* If it isn't initialized */
+ my_pthread_once(&charsets_initialized, init_available_charsets);
if (!cs_number || cs_number >= array_elements(all_charsets)-1)
return NULL;
@@ -563,7 +537,7 @@ CHARSET_INFO *get_charset_by_name(const
{
uint cs_number;
CHARSET_INFO *cs;
- (void) init_available_charsets(MYF(0)); /* If it isn't initialized */
+ my_pthread_once(&charsets_initialized, init_available_charsets);
cs_number=get_collation_number(cs_name);
cs= cs_number ? get_internal_charset(cs_number,flags) : NULL;
@@ -588,7 +562,7 @@ CHARSET_INFO *get_charset_by_csname(cons
DBUG_ENTER("get_charset_by_csname");
DBUG_PRINT("enter",("name: '%s'", cs_name));
- (void) init_available_charsets(MYF(0)); /* If it isn't initialized */
+ my_pthread_once(&charsets_initialized, init_available_charsets);
cs_number= get_charset_number(cs_name, cs_flags);
cs= cs_number ? get_internal_charset(cs_number, flags) : NULL;
=== modified file 'mysys/default.c'
--- a/mysys/default.c 2009-03-24 13:58:52 +0000
+++ b/mysys/default.c 2009-12-18 18:44:24 +0000
@@ -650,7 +650,7 @@ static int search_default_file_with_ext(
int recursion_level)
{
char name[FN_REFLEN + 10], buff[4096], curr_gr[4096], *ptr, *end, **tmp_ext;
- char *value, option[4096], tmp[FN_REFLEN];
+ char *value, option[4096+2], tmp[FN_REFLEN];
static const char includedir_keyword[]= "includedir";
static const char include_keyword[]= "include";
const int max_recursion_level= 10;
=== modified file 'mysys/mf_pack.c'
--- a/mysys/mf_pack.c 2009-08-28 16:21:54 +0000
+++ b/mysys/mf_pack.c 2009-12-18 18:44:24 +0000
@@ -245,7 +245,7 @@ my_bool my_use_symdir=0; /* Set this if
#ifdef USE_SYMDIR
void symdirget(char *dir)
{
- char buff[FN_REFLEN];
+ char buff[FN_REFLEN+1];
char *pos=strend(dir);
if (dir[0] && pos[-1] != FN_DEVCHAR && my_access(dir, F_OK))
{
@@ -257,7 +257,7 @@ void symdirget(char *dir)
*pos++=temp; *pos=0; /* Restore old filename */
if (file >= 0)
{
- if ((length= my_read(file, buff, sizeof(buff), MYF(0))) > 0)
+ if ((length= my_read(file, buff, sizeof(buff) - 1, MYF(0))) > 0)
{
for (pos= buff + length ;
pos > buff && (iscntrl(pos[-1]) || isspace(pos[-1])) ;
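The default.c and mf_pack.c hunks above share one idea: when a fixed-size buffer is filled from an external source, cap the copy or read at sizeof(buf) - 1 so there is always room left to terminate or trim the data in place. A minimal standalone sketch of that pattern follows; the file name and the trimming step are illustrative only and are not the mysys code itself.

/*
  Sketch of the headroom pattern behind the mf_pack.c fix: read at most
  sizeof(buf) - 1 bytes so the buffer can always be NUL-terminated.
  "symlink_target.txt" is a hypothetical input file used for illustration.
*/
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
  char buf[256];
  ssize_t len;
  int fd= open("symlink_target.txt", O_RDONLY);
  if (fd < 0)
    return 1;

  /* Read at most sizeof(buf) - 1 bytes: the last byte stays free for '\0'. */
  len= read(fd, buf, sizeof(buf) - 1);
  close(fd);
  if (len < 0)
    return 1;

  /* Trim trailing newline/space, then terminate inside the buffer. */
  while (len > 0 && (buf[len - 1] == '\n' || buf[len - 1] == ' '))
    len--;
  buf[len]= '\0';

  printf("read: '%s'\n", buf);
  return 0;
}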
=== modified file 'mysys/my_getopt.c'
--- a/mysys/my_getopt.c 2010-01-15 15:27:55 +0000
+++ b/mysys/my_getopt.c 2010-03-04 08:03:07 +0000
@@ -145,6 +145,10 @@ int handle_options(int *argc, char ***ar
{ /* --set-variable, or -O */
if (*cur_arg == 'O')
{
+ my_getopt_error_reporter(WARNING_LEVEL,
+ "%s: Option '-O' is deprecated. "
+ "Use --variable-name=value instead.",
+ my_progname);
must_be_var= 1;
if (!(*++cur_arg)) /* If not -Ovar=# */
@@ -164,6 +168,11 @@ int handle_options(int *argc, char ***ar
}
else if (!getopt_compare_strings(cur_arg, "-set-variable", 13))
{
+ my_getopt_error_reporter(WARNING_LEVEL,
+ "%s: Option '--set-variable' is deprecated. "
+ "Use --variable-name=value instead.",
+ my_progname);
+
must_be_var= 1;
if (cur_arg[13] == '=')
{
=== modified file 'mysys/my_init.c'
--- a/mysys/my_init.c 2009-10-16 15:44:58 +0000
+++ b/mysys/my_init.c 2010-03-04 08:03:07 +0000
@@ -166,7 +166,6 @@ void my_end(int infoflag)
my_print_open_files();
}
}
- free_charsets();
my_error_unregister_all();
my_once_free();
#ifdef THREAD
=== modified file 'mysys/my_thr_init.c'
--- a/mysys/my_thr_init.c 2010-01-29 18:42:22 +0000
+++ b/mysys/my_thr_init.c 2010-03-04 08:03:07 +0000
@@ -30,7 +30,9 @@ pthread_key(struct st_my_thread_var, THR
#endif /* USE_TLS */
pthread_mutex_t THR_LOCK_malloc,THR_LOCK_open,
THR_LOCK_lock,THR_LOCK_isam,THR_LOCK_myisam,THR_LOCK_heap,
- THR_LOCK_net, THR_LOCK_charset, THR_LOCK_threads, THR_LOCK_time;
+ THR_LOCK_net, THR_LOCK_charset, THR_LOCK_threads, THR_LOCK_time,
+ THR_LOCK_myisam_mmap;
+
pthread_cond_t THR_COND_threads;
uint THR_thread_count= 0;
uint my_thread_end_wait_time= 5;
@@ -156,6 +158,7 @@ my_bool my_thread_global_init(void)
pthread_mutex_init(&THR_LOCK_lock,MY_MUTEX_INIT_FAST);
pthread_mutex_init(&THR_LOCK_isam,MY_MUTEX_INIT_SLOW);
pthread_mutex_init(&THR_LOCK_myisam,MY_MUTEX_INIT_SLOW);
+ pthread_mutex_init(&THR_LOCK_myisam_mmap,MY_MUTEX_INIT_FAST);
pthread_mutex_init(&THR_LOCK_heap,MY_MUTEX_INIT_FAST);
pthread_mutex_init(&THR_LOCK_net,MY_MUTEX_INIT_FAST);
pthread_mutex_init(&THR_LOCK_charset,MY_MUTEX_INIT_FAST);
@@ -253,6 +256,7 @@ void my_thread_destroy_mutex(void)
pthread_mutex_destroy(&THR_LOCK_lock);
pthread_mutex_destroy(&THR_LOCK_isam);
pthread_mutex_destroy(&THR_LOCK_myisam);
+ pthread_mutex_destroy(&THR_LOCK_myisam_mmap);
pthread_mutex_destroy(&THR_LOCK_heap);
pthread_mutex_destroy(&THR_LOCK_net);
pthread_mutex_destroy(&THR_LOCK_time);
=== modified file 'mysys/my_winthread.c'
--- a/mysys/my_winthread.c 2009-03-22 12:16:09 +0000
+++ b/mysys/my_winthread.c 2010-03-04 08:03:07 +0000
@@ -148,4 +148,36 @@ int win_pthread_setspecific(void *a,void
return 0;
}
+
+/*
+  One-time initialization. For simplicity, we assume the initializer thread
+  does not exit within init_routine().
+*/
+int my_pthread_once(my_pthread_once_t *once_control,
+ void (*init_routine)(void))
+{
+ LONG state= InterlockedCompareExchange(once_control, MY_PTHREAD_ONCE_INPROGRESS,
+ MY_PTHREAD_ONCE_INIT);
+ switch(state)
+ {
+ case MY_PTHREAD_ONCE_INIT:
+ /* This is initializer thread */
+ (*init_routine)();
+ *once_control= MY_PTHREAD_ONCE_DONE;
+ break;
+
+ case MY_PTHREAD_ONCE_INPROGRESS:
+ /* init_routine in progress. Wait for its completion */
+ while(*once_control == MY_PTHREAD_ONCE_INPROGRESS)
+ {
+ Sleep(1);
+ }
+ break;
+ case MY_PTHREAD_ONCE_DONE:
+ /* Nothing to do */
+ break;
+ }
+ return 0;
+}
+
#endif
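The my_pthread_once() added here gives the Windows build the same run-exactly-once contract that POSIX pthread_once() provides, which is what the charset.c hunk above relies on when it replaces the THR_LOCK_charset double-check with my_pthread_once(&charsets_initialized, init_available_charsets). Below is a minimal sketch of the calling pattern, written against the standard POSIX API; the table and function names are illustrative, not code from the patch.

/*
  One-shot initialization sketch using POSIX pthread_once(); the
  my_pthread_once() in my_winthread.c emulates the same contract on
  Windows with InterlockedCompareExchange().
*/
#include <pthread.h>
#include <stdio.h>

static pthread_once_t table_initialized= PTHREAD_ONCE_INIT;
static int table[4];

static void init_table(void)
{
  int i;
  /* Runs exactly once, however many threads race to get here first. */
  for (i= 0; i < 4; i++)
    table[i]= i * i;
}

static int lookup(int i)
{
  /* Every caller goes through pthread_once(); only the first call runs init. */
  pthread_once(&table_initialized, init_table);
  return table[i];
}

int main(void)
{
  printf("%d\n", lookup(3));   /* prints 9 */
  return 0;
}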
=== modified file 'mysys/stacktrace.c'
--- a/mysys/stacktrace.c 2008-09-16 13:23:07 +0000
+++ b/mysys/stacktrace.c 2010-01-27 10:42:20 +0000
@@ -63,7 +63,26 @@ void my_safe_print_str(const char* name,
fputc('\n', stderr);
}
-#if HAVE_BACKTRACE && (HAVE_BACKTRACE_SYMBOLS || HAVE_BACKTRACE_SYMBOLS_FD)
+#if defined(HAVE_PRINTSTACK)
+
+/* Use Solaris' symbolic stack trace routine. */
+#include <ucontext.h>
+
+void my_print_stacktrace(uchar* stack_bottom __attribute__((unused)),
+ ulong thread_stack __attribute__((unused)))
+{
+ if (printstack(fileno(stderr)) == -1)
+ fprintf(stderr, "Error when traversing the stack, stack appears corrupt.\n");
+ else
+ fprintf(stderr,
+ "Please read "
+ "http://dev.mysql.com/doc/refman/5.1/en/resolve-stack-dump.html\n"
+ "and follow instructions on how to resolve the stack trace.\n"
+ "Resolved stack trace is much more helpful in diagnosing the\n"
+ "problem, so please do resolve it\n");
+}
+
+#elif HAVE_BACKTRACE && (HAVE_BACKTRACE_SYMBOLS || HAVE_BACKTRACE_SYMBOLS_FD)
#if BACKTRACE_DEMANGLE
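
The new branch prefers Solaris' printstack(), which writes an already symbolic trace of the calling thread to a file descriptor. A minimal sketch of the same call from a fatal-signal handler (assumed Solaris; the handler name is illustrative and not part of this patch):

    #include <ucontext.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void fatal_signal_handler(int sig)
    {
      fprintf(stderr, "Got signal %d, printing stack:\n", sig);
      if (printstack(fileno(stderr)) == -1)
        fprintf(stderr, "Error when traversing the stack, stack appears corrupt.\n");
      _exit(1);
    }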
=== modified file 'netware/libmysqlmain.c'
--- a/netware/libmysqlmain.c 2003-01-31 23:42:26 +0000
+++ b/netware/libmysqlmain.c 2009-12-12 18:11:25 +0000
@@ -18,7 +18,7 @@
#include "my_global.h"
-my_bool init_available_charsets(myf myflags);
+void init_available_charsets(void);
/* this function is required so that global memory is allocated against this
library nlm, and not against a paticular client */
@@ -31,7 +31,7 @@ int _NonAppStart(void *NLMHandle, void *
{
mysql_server_init(0, NULL, NULL);
- init_available_charsets(MYF(0));
+ init_available_charsets();
return 0;
}
=== modified file 'scripts/mysql_system_tables_fix.sql'
--- a/scripts/mysql_system_tables_fix.sql 2009-10-27 10:09:36 +0000
+++ b/scripts/mysql_system_tables_fix.sql 2009-12-03 16:15:47 +0000
@@ -415,18 +415,48 @@ ALTER TABLE proc ADD character_set_clien
ALTER TABLE proc MODIFY character_set_client
char(32) collate utf8_bin DEFAULT NULL;
+SELECT CASE WHEN COUNT(*) > 0 THEN
+CONCAT ("WARNING: NULL values of the 'character_set_client' column ('mysql.proc' table) have been updated with a default value (", @@character_set_client, "). Please verify if necessary.")
+ELSE NULL
+END
+AS value FROM proc WHERE character_set_client IS NULL;
+
+UPDATE proc SET character_set_client = @@character_set_client
+ WHERE character_set_client IS NULL;
+
ALTER TABLE proc ADD collation_connection
char(32) collate utf8_bin DEFAULT NULL
AFTER character_set_client;
ALTER TABLE proc MODIFY collation_connection
char(32) collate utf8_bin DEFAULT NULL;
+SELECT CASE WHEN COUNT(*) > 0 THEN
+CONCAT ("WARNING: NULL values of the 'collation_connection' column ('mysql.proc' table) have been updated with a default value (", @@collation_connection, "). Please verify if necessary.")
+ELSE NULL
+END
+AS value FROM proc WHERE collation_connection IS NULL;
+
+UPDATE proc SET collation_connection = @@collation_connection
+ WHERE collation_connection IS NULL;
+
ALTER TABLE proc ADD db_collation
char(32) collate utf8_bin DEFAULT NULL
AFTER collation_connection;
ALTER TABLE proc MODIFY db_collation
char(32) collate utf8_bin DEFAULT NULL;
+SELECT CASE WHEN COUNT(*) > 0 THEN
+CONCAT ("WARNING: NULL values of the 'db_collation' column ('mysql.proc' table) have been updated with default values. Please verify if necessary.")
+ELSE NULL
+END
+AS value FROM proc WHERE db_collation IS NULL;
+
+UPDATE proc AS p SET db_collation =
+ ( SELECT DEFAULT_COLLATION_NAME
+ FROM INFORMATION_SCHEMA.SCHEMATA
+ WHERE SCHEMA_NAME = p.db)
+ WHERE db_collation IS NULL;
+
ALTER TABLE proc ADD body_utf8 longblob DEFAULT NULL
AFTER db_collation;
ALTER TABLE proc MODIFY body_utf8 longblob DEFAULT NULL;
=== modified file 'scripts/mysqld_multi.sh'
--- a/scripts/mysqld_multi.sh 2009-06-19 15:32:10 +0000
+++ b/scripts/mysqld_multi.sh 2010-01-21 08:10:05 +0000
@@ -68,7 +68,10 @@ sub main
# than a correct --defaults-extra-file option
unshift @defaults_options, "--defaults-extra-file=$1";
+ print "WARNING: --config-file is deprecated and will be removed\n";
+ print "in MySQL 5.6. Please use --defaults-extra-file instead\n";
}
+ }
}
foreach (@defaults_options)
=== modified file 'server-tools/instance-manager/instance_map.cc'
--- a/server-tools/instance-manager/instance_map.cc 2009-03-19 13:42:36 +0000
+++ b/server-tools/instance-manager/instance_map.cc 2009-12-18 19:14:09 +0000
@@ -117,13 +117,13 @@ static void parse_option(const char *opt
while (*ptr == '-')
++ptr;
- strmake(option_name_buf, ptr, MAX_OPTION_LEN + 1);
+ strmake(option_name_buf, ptr, MAX_OPTION_LEN);
eq_pos= strchr(ptr, '=');
if (eq_pos)
{
option_name_buf[eq_pos - ptr]= 0;
- strmake(option_value_buf, eq_pos + 1, MAX_OPTION_LEN + 1);
+ strmake(option_value_buf, eq_pos + 1, MAX_OPTION_LEN);
}
else
{
=== modified file 'server-tools/instance-manager/listener.cc'
--- a/server-tools/instance-manager/listener.cc 2009-04-25 10:05:32 +0000
+++ b/server-tools/instance-manager/listener.cc 2010-03-04 08:03:07 +0000
@@ -272,7 +272,7 @@ create_unix_socket(struct sockaddr_un &u
unix_socket_address.sun_family= AF_UNIX;
strmake(unix_socket_address.sun_path, Options::Main::socket_file_name,
- sizeof(unix_socket_address.sun_path));
+ sizeof(unix_socket_address.sun_path) - 1);
unlink(unix_socket_address.sun_path); // in case we have stale socket file
/*
=== modified file 'server-tools/instance-manager/options.cc'
--- a/server-tools/instance-manager/options.cc 2009-03-19 13:42:36 +0000
+++ b/server-tools/instance-manager/options.cc 2009-12-18 19:14:09 +0000
@@ -533,10 +533,10 @@ static int setup_windows_defaults()
return 1;
}
- strmake(base_name, base_name_ptr, FN_REFLEN);
+ strmake(base_name, base_name_ptr, FN_REFLEN - 1);
*base_name_ptr= 0;
- strmake(im_name, base_name, FN_REFLEN);
+ strmake(im_name, base_name, FN_REFLEN - 1);
ptr= strrchr(im_name, '.');
if (!ptr)
=== modified file 'server-tools/instance-manager/user_map.cc'
--- a/server-tools/instance-manager/user_map.cc 2009-06-29 14:00:47 +0000
+++ b/server-tools/instance-manager/user_map.cc 2009-12-18 19:14:09 +0000
@@ -25,7 +25,7 @@
User::User(const LEX_STRING *user_name_arg, const char *password)
{
user_length= (uint8) (strmake(user, user_name_arg->str,
- USERNAME_LENGTH + 1) - user);
+ USERNAME_LENGTH) - user);
set_password(password);
}
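
The instance-manager hunks above all correct the same off-by-one: strmake(dst, src, n) copies at most n characters and always writes a terminating NUL at dst[n] at the latest, so a buffer of N bytes must be passed n = N - 1. A sketch of the corrected pattern (buffer and source names are illustrative):

    char path[FN_REFLEN];
    /* leave one byte for the terminating NUL that strmake always writes */
    strmake(path, source_name, sizeof(path) - 1);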
=== modified file 'sql/event_data_objects.cc'
--- a/sql/event_data_objects.cc 2009-12-03 11:19:05 +0000
+++ b/sql/event_data_objects.cc 2010-03-04 08:03:07 +0000
@@ -1400,7 +1400,7 @@ Event_job_data::execute(THD *thd, bool d
#endif
if (check_access(thd, EVENT_ACL, dbname.str,
- 0, 0, 0, is_schema_db(dbname.str)))
+ 0, 0, 0, is_schema_db(dbname.str, dbname.length)))
{
/*
This aspect of behavior is defined in the worklog,
=== modified file 'sql/event_db_repository.cc'
--- a/sql/event_db_repository.cc 2010-01-15 15:27:55 +0000
+++ b/sql/event_db_repository.cc 2010-03-04 08:03:07 +0000
@@ -1045,6 +1045,7 @@ update_timing_fields_for_event(THD *thd,
TABLE *table= NULL;
Field **fields;
int ret= 1;
+ bool save_binlog_row_based;
DBUG_ENTER("Event_db_repository::update_timing_fields_for_event");
@@ -1052,8 +1053,8 @@ update_timing_fields_for_event(THD *thd,
Turn off row binlogging of event timing updates. These are not used
for RBR of events replicated to the slave.
*/
- if (thd->current_stmt_binlog_row_based)
- thd->clear_current_stmt_binlog_row_based();
+ save_binlog_row_based= thd->current_stmt_binlog_row_based;
+ thd->clear_current_stmt_binlog_row_based();
DBUG_ASSERT(thd->security_ctx->master_access & SUPER_ACL);
@@ -1095,6 +1096,8 @@ update_timing_fields_for_event(THD *thd,
end:
if (table)
close_thread_tables(thd);
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(test(ret));
}
=== modified file 'sql/event_scheduler.cc' (properties changed: -x to +x)
--- a/sql/event_scheduler.cc 2009-09-07 20:50:10 +0000
+++ b/sql/event_scheduler.cc 2010-03-04 08:03:07 +0000
@@ -235,8 +235,9 @@ event_scheduler_thread(void *arg)
if (!res)
scheduler->run(thd);
+ DBUG_LEAVE; // Against gcc warnings
my_thread_end();
- DBUG_RETURN(0); // Against gcc warnings
+ return 0;
}
=== modified file 'sql/events.cc'
--- a/sql/events.cc 2009-12-03 11:19:05 +0000
+++ b/sql/events.cc 2010-03-04 08:03:07 +0000
@@ -388,6 +388,7 @@ Events::create_event(THD *thd, Event_par
bool if_not_exists)
{
int ret;
+ bool save_binlog_row_based;
DBUG_ENTER("Events::create_event");
/*
@@ -414,7 +415,8 @@ Events::create_event(THD *thd, Event_par
DBUG_ASSERT(parse_data->expression || parse_data->execute_at);
if (check_access(thd, EVENT_ACL, parse_data->dbname.str, 0, 0, 0,
- is_schema_db(parse_data->dbname.str)))
+ is_schema_db(parse_data->dbname.str,
+ parse_data->dbname.length)))
DBUG_RETURN(TRUE);
if (check_db_dir_existence(parse_data->dbname.str))
@@ -429,8 +431,8 @@ Events::create_event(THD *thd, Event_par
Turn off row binlogging of this statement and use statement-based
so that all supporting tables are updated for CREATE EVENT command.
*/
- if (thd->current_stmt_binlog_row_based)
- thd->clear_current_stmt_binlog_row_based();
+ save_binlog_row_based= thd->current_stmt_binlog_row_based;
+ thd->clear_current_stmt_binlog_row_based();
pthread_mutex_lock(&LOCK_event_metadata);
@@ -470,14 +472,18 @@ Events::create_event(THD *thd, Event_par
{
sql_print_error("Event Error: An error occurred while creating query string, "
"before writing it into binary log.");
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(TRUE);
}
/* If the definer is not set or set to CURRENT_USER, the value of CURRENT_USER
will be written into the binary log as the definer for the SQL thread. */
- write_bin_log(thd, TRUE, log_query.c_ptr(), log_query.length());
+ ret= write_bin_log(thd, TRUE, log_query.c_ptr(), log_query.length());
}
}
pthread_mutex_unlock(&LOCK_event_metadata);
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(ret);
}
@@ -507,6 +513,7 @@ Events::update_event(THD *thd, Event_par
LEX_STRING *new_dbname, LEX_STRING *new_name)
{
int ret;
+ bool save_binlog_row_based;
Event_queue_element *new_element;
DBUG_ENTER("Events::update_event");
@@ -525,7 +532,8 @@ Events::update_event(THD *thd, Event_par
DBUG_RETURN(TRUE);
if (check_access(thd, EVENT_ACL, parse_data->dbname.str, 0, 0, 0,
- is_schema_db(parse_data->dbname.str)))
+ is_schema_db(parse_data->dbname.str,
+ parse_data->dbname.length)))
DBUG_RETURN(TRUE);
if (new_dbname) /* It's a rename */
@@ -547,7 +555,7 @@ Events::update_event(THD *thd, Event_par
access it.
*/
if (check_access(thd, EVENT_ACL, new_dbname->str, 0, 0, 0,
- is_schema_db(new_dbname->str)))
+ is_schema_db(new_dbname->str, new_dbname->length)))
DBUG_RETURN(TRUE);
/* Check that the target database exists */
@@ -562,8 +570,8 @@ Events::update_event(THD *thd, Event_par
Turn off row binlogging of this statement and use statement-based
so that all supporting tables are updated for UPDATE EVENT command.
*/
- if (thd->current_stmt_binlog_row_based)
- thd->clear_current_stmt_binlog_row_based();
+ save_binlog_row_based= thd->current_stmt_binlog_row_based;
+ thd->clear_current_stmt_binlog_row_based();
pthread_mutex_lock(&LOCK_event_metadata);
@@ -595,10 +603,12 @@ Events::update_event(THD *thd, Event_par
new_element);
/* Binlog the alter event. */
DBUG_ASSERT(thd->query() && thd->query_length());
- write_bin_log(thd, TRUE, thd->query(), thd->query_length());
+ ret= write_bin_log(thd, TRUE, thd->query(), thd->query_length());
}
}
pthread_mutex_unlock(&LOCK_event_metadata);
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(ret);
}
@@ -632,6 +642,7 @@ bool
Events::drop_event(THD *thd, LEX_STRING dbname, LEX_STRING name, bool if_exists)
{
int ret;
+ bool save_binlog_row_based;
DBUG_ENTER("Events::drop_event");
/*
@@ -652,15 +663,15 @@ Events::drop_event(THD *thd, LEX_STRING
DBUG_RETURN(TRUE);
if (check_access(thd, EVENT_ACL, dbname.str, 0, 0, 0,
- is_schema_db(dbname.str)))
+ is_schema_db(dbname.str, dbname.length)))
DBUG_RETURN(TRUE);
/*
Turn off row binlogging of this statement and use statement-based so
that all supporting tables are updated for DROP EVENT command.
*/
- if (thd->current_stmt_binlog_row_based)
- thd->clear_current_stmt_binlog_row_based();
+ save_binlog_row_based= thd->current_stmt_binlog_row_based;
+ thd->clear_current_stmt_binlog_row_based();
pthread_mutex_lock(&LOCK_event_metadata);
/* On error conditions my_error() is called so no need to handle here */
@@ -670,9 +681,11 @@ Events::drop_event(THD *thd, LEX_STRING
event_queue->drop_event(thd, dbname, name);
/* Binlog the drop event. */
DBUG_ASSERT(thd->query() && thd->query_length());
- write_bin_log(thd, TRUE, thd->query(), thd->query_length());
+ ret= write_bin_log(thd, TRUE, thd->query(), thd->query_length());
}
pthread_mutex_unlock(&LOCK_event_metadata);
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(ret);
}
@@ -809,7 +822,7 @@ Events::show_create_event(THD *thd, LEX_
DBUG_RETURN(TRUE);
if (check_access(thd, EVENT_ACL, dbname.str, 0, 0, 0,
- is_schema_db(dbname.str)))
+ is_schema_db(dbname.str, dbname.length)))
DBUG_RETURN(TRUE);
/*
@@ -867,7 +880,7 @@ Events::fill_schema_events(THD *thd, TAB
if (thd->lex->sql_command == SQLCOM_SHOW_EVENTS)
{
DBUG_ASSERT(thd->lex->select_lex.db);
- if (!is_schema_db(thd->lex->select_lex.db) && // There is no events in I_S
+ if (!is_schema_db(thd->lex->select_lex.db) && // There is no events in I_S
check_access(thd, EVENT_ACL, thd->lex->select_lex.db, 0, 0, 0, 0))
DBUG_RETURN(1);
db= thd->lex->select_lex.db;
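
Each of the Events:: methods above now saves the row-based-binlogging flag, forces statement format while mysql.event is modified, and restores the saved value on every exit path; previously the flag was only cleared and never restored. The same hunks also stop ignoring the return value of write_bin_log(). Reduced to a sketch with a hypothetical helper:

    bool save_binlog_row_based= thd->current_stmt_binlog_row_based;
    thd->clear_current_stmt_binlog_row_based();
    int ret= do_event_ddl(thd);               /* hypothetical helper */
    /* restore on every return path, error or not */
    thd->current_stmt_binlog_row_based= save_binlog_row_based;
    return test(ret);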
=== modified file 'sql/field.cc'
--- a/sql/field.cc 2010-01-15 15:27:55 +0000
+++ b/sql/field.cc 2010-03-04 08:03:07 +0000
@@ -8287,8 +8287,7 @@ uint Field_blob::is_equal(Create_field *
return ((new_field->sql_type == get_blob_type_from_length(max_data_length()))
&& new_field->charset == field_charset &&
- ((Field_blob *)new_field->field)->max_data_length() ==
- max_data_length());
+ new_field->pack_length == pack_length());
}
=== modified file 'sql/field.h'
--- a/sql/field.h 2010-01-15 15:27:55 +0000
+++ b/sql/field.h 2010-03-04 08:03:07 +0000
@@ -1926,7 +1926,12 @@ public:
uint32 max_display_length() { return field_length; }
uint size_of() const { return sizeof(*this); }
Item_result result_type () const { return INT_RESULT; }
- int reset(void) { bzero(ptr, bytes_in_rec); return 0; }
+ int reset(void) {
+ bzero(ptr, bytes_in_rec);
+ if (bit_ptr && (bit_len > 0)) // reset odd bits among null bits
+ clr_rec_bits(bit_ptr, bit_ofs, bit_len);
+ return 0;
+ }
int store(const char *to, uint length, CHARSET_INFO *charset);
int store(double nr);
int store(longlong nr, bool unsigned_val);
=== modified file 'sql/filesort.cc'
--- a/sql/filesort.cc 2009-09-03 14:05:38 +0000
+++ b/sql/filesort.cc 2010-03-04 08:03:07 +0000
@@ -142,6 +142,8 @@ ha_rows filesort(THD *thd, TABLE *table,
error= 1;
bzero((char*) &param,sizeof(param));
param.sort_length= sortlength(thd, sortorder, s_length, &multi_byte_charset);
+ /* filesort cannot handle zero-length records. */
+ DBUG_ASSERT(param.sort_length);
param.ref_length= table->file->ref_length;
param.addon_field= 0;
param.addon_length= 0;
=== modified file 'sql/ha_partition.cc'
--- a/sql/ha_partition.cc 2009-12-03 11:19:05 +0000
+++ b/sql/ha_partition.cc 2010-03-04 08:03:07 +0000
@@ -1215,17 +1215,28 @@ int ha_partition::prepare_new_partition(
partition_element *p_elem)
{
int error;
- bool create_flag= FALSE;
DBUG_ENTER("prepare_new_partition");
if ((error= set_up_table_before_create(tbl, part_name, create_info,
0, p_elem)))
- goto error;
+ goto error_create;
if ((error= file->ha_create(part_name, tbl, create_info)))
- goto error;
- create_flag= TRUE;
+ {
+ /*
+ Added for safety, InnoDB reports HA_ERR_FOUND_DUPP_KEY
+ if the table/partition already exists.
+ If we return that error code, then print_error would try to
+ get_dup_key on a non-existing partition.
+ So return a more reasonable error code.
+ */
+ if (error == HA_ERR_FOUND_DUPP_KEY)
+ error= HA_ERR_TABLE_EXIST;
+ goto error_create;
+ }
+ DBUG_PRINT("info", ("partition %s created", part_name));
if ((error= file->ha_open(tbl, part_name, m_mode, m_open_test_lock)))
- goto error;
+ goto error_open;
+ DBUG_PRINT("info", ("partition %s opened", part_name));
/*
Note: if you plan to add another call that may return failure,
better to do it before external_lock() as cleanup_new_partition()
@@ -1233,12 +1244,15 @@ int ha_partition::prepare_new_partition(
Otherwise see description for cleanup_new_partition().
*/
if ((error= file->ha_external_lock(ha_thd(), m_lock_type)))
- goto error;
+ goto error_external_lock;
+ DBUG_PRINT("info", ("partition %s external locked", part_name));
DBUG_RETURN(0);
-error:
- if (create_flag)
- VOID(file->ha_delete_table(part_name));
+error_external_lock:
+ VOID(file->close());
+error_open:
+ VOID(file->ha_delete_table(part_name));
+error_create:
DBUG_RETURN(error);
}
@@ -1272,19 +1286,23 @@ error:
void ha_partition::cleanup_new_partition(uint part_count)
{
- handler **save_m_file= m_file;
DBUG_ENTER("ha_partition::cleanup_new_partition");
- if (m_added_file && m_added_file[0])
+ if (m_added_file)
{
- m_file= m_added_file;
- m_added_file= NULL;
+ THD *thd= ha_thd();
+ handler **file= m_added_file;
+ while ((part_count > 0) && (*file))
+ {
+ (*file)->ha_external_lock(thd, F_UNLCK);
+ (*file)->close();
- external_lock(ha_thd(), F_UNLCK);
- /* delete_table also needed, a bit more complex */
- close();
+ /* Leave the (*file)->ha_delete_table(part_name) to the ddl-log */
- m_file= save_m_file;
+ file++;
+ part_count--;
+ }
+ m_added_file= NULL;
}
DBUG_VOID_RETURN;
}
@@ -1590,7 +1608,15 @@ int ha_partition::change_partitions(HA_C
part_elem->part_state= PART_TO_BE_DROPPED;
}
m_new_file= new_file_array;
- DBUG_RETURN(copy_partitions(copied, deleted));
+ if ((error= copy_partitions(copied, deleted)))
+ {
+ /*
+ Close and unlock the new temporary partitions.
+ They will later be deleted through the ddl-log.
+ */
+ cleanup_new_partition(part_count);
+ }
+ DBUG_RETURN(error);
}
@@ -1679,6 +1705,7 @@ int ha_partition::copy_partitions(ulongl
}
DBUG_RETURN(FALSE);
error:
+ m_reorged_file[reorg_part]->ha_rnd_end();
DBUG_RETURN(result);
}
@@ -5746,6 +5773,23 @@ const key_map *ha_partition::keys_to_use
DBUG_RETURN(m_file[0]->keys_to_use_for_scanning());
}
+#define MAX_PARTS_FOR_OPTIMIZER_CALLS 10
+/*
+ Prepare start variables for estimating optimizer costs.
+
+ @param[out] num_used_parts Number of partitions after pruning.
+ @param[out] check_min_num Number of partitions to call.
+ @param[out] first first used partition.
+*/
+void ha_partition::partitions_optimizer_call_preparations(uint *first,
+ uint *num_used_parts,
+ uint *check_min_num)
+{
+ *first= bitmap_get_first_set(&(m_part_info->used_partitions));
+ *num_used_parts= bitmap_bits_set(&(m_part_info->used_partitions));
+ *check_min_num= min(MAX_PARTS_FOR_OPTIMIZER_CALLS, *num_used_parts);
+}
+
/*
Return time for a scan of the table
@@ -5759,43 +5803,67 @@ const key_map *ha_partition::keys_to_use
double ha_partition::scan_time()
{
- double scan_time= 0;
- handler **file;
+ double scan_time= 0.0;
+ uint first, part_id, num_used_parts, check_min_num, partitions_called= 0;
DBUG_ENTER("ha_partition::scan_time");
- for (file= m_file; *file; file++)
- if (bitmap_is_set(&(m_part_info->used_partitions), (file - m_file)))
- scan_time+= (*file)->scan_time();
+ partitions_optimizer_call_preparations(&first, &num_used_parts, &check_min_num);
+ for (part_id= first; partitions_called < num_used_parts ; part_id++)
+ {
+ if (!bitmap_is_set(&(m_part_info->used_partitions), part_id))
+ continue;
+ scan_time+= m_file[part_id]->scan_time();
+ partitions_called++;
+ if (partitions_called >= check_min_num && scan_time != 0.0)
+ {
+ DBUG_RETURN(scan_time *
+ (double) num_used_parts / (double) partitions_called);
+ }
+ }
DBUG_RETURN(scan_time);
}
/*
- Get time to read
+ Estimate rows for records_in_range or estimate_rows_upper_bound.
- SYNOPSIS
- read_time()
- index Index number used
- ranges Number of ranges
- rows Number of rows
-
- RETURN VALUE
- time for read
+ @param is_records_in_range call records_in_range instead of
+ estimate_rows_upper_bound.
+ @param inx (only for records_in_range) index to use.
+ @param min_key (only for records_in_range) start of range.
+ @param max_key (only for records_in_range) end of range.
- DESCRIPTION
- This will be optimised later to include whether or not the index can
- be used with partitioning. To achieve we need to add another parameter
- that specifies how many of the index fields that are bound in the ranges.
- Possibly added as a new call to handlers.
+ @return Number of rows or HA_POS_ERROR.
*/
-
-double ha_partition::read_time(uint index, uint ranges, ha_rows rows)
+ha_rows ha_partition::estimate_rows(bool is_records_in_range, uint inx,
+ key_range *min_key, key_range *max_key)
{
- DBUG_ENTER("ha_partition::read_time");
+ ha_rows rows, estimated_rows= 0;
+ uint first, part_id, num_used_parts, check_min_num, partitions_called= 0;
+ DBUG_ENTER("ha_partition::records_in_range");
- DBUG_RETURN(m_file[0]->read_time(index, ranges, rows));
+ partitions_optimizer_call_preparations(&first, &num_used_parts, &check_min_num);
+ for (part_id= first; partitions_called < num_used_parts ; part_id++)
+ {
+ if (!bitmap_is_set(&(m_part_info->used_partitions), part_id))
+ continue;
+ if (is_records_in_range)
+ rows= m_file[part_id]->records_in_range(inx, min_key, max_key);
+ else
+ rows= m_file[part_id]->estimate_rows_upper_bound();
+ if (rows == HA_POS_ERROR)
+ DBUG_RETURN(HA_POS_ERROR);
+ estimated_rows+= rows;
+ partitions_called++;
+ if (partitions_called >= check_min_num && estimated_rows)
+ {
+ DBUG_RETURN(estimated_rows * num_used_parts / partitions_called);
+ }
+ }
+ DBUG_RETURN(estimated_rows);
}
+
/*
Find number of records in a range
@@ -5823,22 +5891,9 @@ double ha_partition::read_time(uint inde
ha_rows ha_partition::records_in_range(uint inx, key_range *min_key,
key_range *max_key)
{
- handler **file;
- ha_rows in_range= 0;
DBUG_ENTER("ha_partition::records_in_range");
- file= m_file;
- do
- {
- if (bitmap_is_set(&(m_part_info->used_partitions), (file - m_file)))
- {
- ha_rows tmp_in_range= (*file)->records_in_range(inx, min_key, max_key);
- if (tmp_in_range == HA_POS_ERROR)
- DBUG_RETURN(tmp_in_range);
- in_range+= tmp_in_range;
- }
- } while (*(++file));
- DBUG_RETURN(in_range);
+ DBUG_RETURN(estimate_rows(TRUE, inx, min_key, max_key));
}
@@ -5854,22 +5909,36 @@ ha_rows ha_partition::records_in_range(u
ha_rows ha_partition::estimate_rows_upper_bound()
{
- ha_rows rows, tot_rows= 0;
- handler **file;
DBUG_ENTER("ha_partition::estimate_rows_upper_bound");
- file= m_file;
- do
- {
- if (bitmap_is_set(&(m_part_info->used_partitions), (file - m_file)))
- {
- rows= (*file)->estimate_rows_upper_bound();
- if (rows == HA_POS_ERROR)
- DBUG_RETURN(HA_POS_ERROR);
- tot_rows+= rows;
- }
- } while (*(++file));
- DBUG_RETURN(tot_rows);
+ DBUG_RETURN(estimate_rows(FALSE, 0, NULL, NULL));
+}
+
+
+/*
+ Get time to read
+
+ SYNOPSIS
+ read_time()
+ index Index number used
+ ranges Number of ranges
+ rows Number of rows
+
+ RETURN VALUE
+ time for read
+
+ DESCRIPTION
+ This will be optimised later to include whether or not the index can
+ be used with partitioning. To achieve we need to add another parameter
+ that specifies how many of the index fields that are bound in the ranges.
+ Possibly added as a new call to handlers.
+*/
+
+double ha_partition::read_time(uint index, uint ranges, ha_rows rows)
+{
+ DBUG_ENTER("ha_partition::read_time");
+
+ DBUG_RETURN(m_file[0]->read_time(index, ranges, rows));
}
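
After this change scan_time(), records_in_range() and estimate_rows_upper_bound() no longer visit every used partition: they call at most MAX_PARTS_FOR_OPTIMIZER_CALLS (10) of them and scale the partial result by the ratio of used partitions to partitions actually called. The extrapolation as a self-contained sketch (function and parameter names are illustrative, not the handler code):

    double sampled_estimate(double (*cost_of)(unsigned), const unsigned *parts,
                            unsigned num_used, unsigned max_calls)
    {
      double sum= 0.0;
      unsigned called= 0;
      while (called < num_used && called < max_calls)
      {
        sum+= cost_of(parts[called]);
        called++;
      }
      if (called == 0 || sum == 0.0)
        return sum;
      /* assume the sampled partitions are representative of the rest */
      return sum * (double) num_used / (double) called;
    }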
=== modified file 'sql/ha_partition.h'
--- a/sql/ha_partition.h 2009-12-03 11:19:05 +0000
+++ b/sql/ha_partition.h 2010-03-04 08:03:07 +0000
@@ -547,6 +547,18 @@ public:
-------------------------------------------------------------------------
*/
+private:
+ /*
+ Helper function to get the minimum number of partitions to use for
+ the optimizer hints/cost calls.
+ */
+ void partitions_optimizer_call_preparations(uint *num_used_parts,
+ uint *check_min_num,
+ uint *first);
+ ha_rows estimate_rows(bool is_records_in_range, uint inx,
+ key_range *min_key, key_range *max_key);
+public:
+
/*
keys_to_use_for_scanning can probably be implemented as the
intersection of all underlying handlers if mixed handlers are used.
=== modified file 'sql/item.cc'
--- a/sql/item.cc 2010-01-27 21:53:08 +0000
+++ b/sql/item.cc 2010-03-04 08:03:07 +0000
@@ -5150,7 +5150,7 @@ int Item::save_in_field(Field *field, bo
field->set_notnull();
error=field->store(nr, unsigned_flag);
}
- return error;
+ return error ? error : (field->table->in_use->is_error() ? 2 : 0);
}
=== modified file 'sql/item.h'
--- a/sql/item.h 2010-01-15 15:27:55 +0000
+++ b/sql/item.h 2010-03-04 08:03:07 +0000
@@ -506,6 +506,13 @@ public:
char * name; /* Name from select */
/* Original item name (if it was renamed)*/
char * orig_name;
+ /**
+ Intrusive list pointer for free list. If not null, points to the next
+ Item on some Query_arena's free list. For instance, stored procedures
+ have their own Query_arena's.
+
+ @see Query_arena::free_list
+ */
Item *next;
uint32 max_length;
uint name_length; /* Length of name */
@@ -963,6 +970,32 @@ public:
virtual Item *equal_fields_propagator(uchar * arg) { return this; }
virtual bool set_no_const_sub(uchar *arg) { return FALSE; }
virtual Item *replace_equal_field(uchar * arg) { return this; }
+ /*
+ Check if an expression value depends on the current timezone. Used by
+ partitioning code to reject timezone-dependent expressions in a
+ (sub)partitioning function.
+ */
+ virtual bool is_timezone_dependent_processor(uchar *bool_arg)
+ {
+ return FALSE;
+ }
+
+ /**
+ Find a function of a given type
+
+ @param arg the function type to search (enum Item_func::Functype)
+ @return
+ @retval TRUE the function type we're searching for is found
+ @retval FALSE the function type wasn't found
+
+ @description
+ This function can be used (together with Item::walk()) to find functions
+ in an item tree fragment.
+ */
+ virtual bool find_function_processor (uchar *arg)
+ {
+ return FALSE;
+ }
/*
For SP local variable returns pointer to Item representing its
=== modified file 'sql/item_cmpfunc.cc'
--- a/sql/item_cmpfunc.cc 2010-01-15 15:27:55 +0000
+++ b/sql/item_cmpfunc.cc 2010-03-04 08:03:07 +0000
@@ -4251,7 +4251,7 @@ Item *Item_cond::compile(Item_analyzer a
uchar *arg_v= *arg_p;
Item *new_item= item->compile(analyzer, &arg_v, transformer, arg_t);
if (new_item && new_item != item)
- li.replace(new_item);
+ current_thd->change_item_tree(li.ref(), new_item);
}
return Item_func::transform(transformer, arg_t);
}
@@ -5252,7 +5252,8 @@ Item *Item_bool_rowready_func2::negated_
}
Item_equal::Item_equal(Item_field *f1, Item_field *f2)
- : Item_bool_func(), const_item(0), eval_item(0), cond_false(0)
+ : Item_bool_func(), const_item(0), eval_item(0), cond_false(0),
+ compare_as_dates(FALSE)
{
const_item_cache= 0;
fields.push_back(f1);
@@ -5265,6 +5266,7 @@ Item_equal::Item_equal(Item *c, Item_fie
const_item_cache= 0;
fields.push_back(f);
const_item= c;
+ compare_as_dates= f->is_datetime();
}
@@ -5279,9 +5281,45 @@ Item_equal::Item_equal(Item_equal *item_
fields.push_back(item);
}
const_item= item_equal->const_item;
+ compare_as_dates= item_equal->compare_as_dates;
cond_false= item_equal->cond_false;
}
+
+void Item_equal::compare_const(Item *c)
+{
+ if (compare_as_dates)
+ {
+ cmp.set_datetime_cmp_func(this, &c, &const_item);
+ cond_false= cmp.compare();
+ }
+ else
+ {
+ Item_func_eq *func= new Item_func_eq(c, const_item);
+ func->set_cmp_func();
+ func->quick_fix_field();
+ cond_false= !func->val_int();
+ }
+ if (cond_false)
+ const_item_cache= 1;
+}
+
+
+void Item_equal::add(Item *c, Item_field *f)
+{
+ if (cond_false)
+ return;
+ if (!const_item)
+ {
+ DBUG_ASSERT(f);
+ const_item= c;
+ compare_as_dates= f->is_datetime();
+ return;
+ }
+ compare_const(c);
+}
+
+
void Item_equal::add(Item *c)
{
if (cond_false)
@@ -5291,11 +5329,7 @@ void Item_equal::add(Item *c)
const_item= c;
return;
}
- Item_func_eq *func= new Item_func_eq(c, const_item);
- func->set_cmp_func();
- func->quick_fix_field();
- if ((cond_false= !func->val_int()))
- const_item_cache= 1;
+ compare_const(c);
}
void Item_equal::add(Item_field *f)
=== modified file 'sql/item_cmpfunc.h'
--- a/sql/item_cmpfunc.h 2010-01-15 15:27:55 +0000
+++ b/sql/item_cmpfunc.h 2010-03-04 08:03:07 +0000
@@ -1580,7 +1580,9 @@ class Item_equal: public Item_bool_func
List<Item_field> fields; /* list of equal field items */
Item *const_item; /* optional constant item equal to fields items */
cmp_item *eval_item;
+ Arg_comparator cmp;
bool cond_false;
+ bool compare_as_dates;
public:
inline Item_equal()
: Item_bool_func(), const_item(0), eval_item(0), cond_false(0)
@@ -1589,6 +1591,8 @@ public:
Item_equal(Item *c, Item_field *f);
Item_equal(Item_equal *item_equal);
inline Item* get_const() { return const_item; }
+ void compare_const(Item *c);
+ void add(Item *c, Item_field *f);
void add(Item *c);
void add(Item_field *f);
uint members();
=== modified file 'sql/item_create.cc'
--- a/sql/item_create.cc 2010-01-15 15:27:55 +0000
+++ b/sql/item_create.cc 2010-03-04 08:03:07 +0000
@@ -4178,6 +4178,16 @@ Create_func_rand::create_native(THD *thd
if (item_list != NULL)
arg_count= item_list->elements;
+ /*
+ When RAND() is binlogged, the seed is binlogged too. So the
+ sequence of random numbers is the same on a replication slave as
+ on the master. However, if several RAND() values are inserted
+ into a table, the order in which the rows are modified may differ
+ between master and slave, because the order is undefined. Hence,
+ the statement is unsafe to log in statement format.
+ */
+ thd->lex->set_stmt_unsafe();
+
switch (arg_count) {
case 0:
{
=== modified file 'sql/item_func.cc'
--- a/sql/item_func.cc 2010-01-15 15:27:55 +0000
+++ b/sql/item_func.cc 2010-03-04 08:03:07 +0000
@@ -605,7 +605,7 @@ void Item_func::signal_divide_by_null()
Item *Item_func::get_tmp_table_item(THD *thd)
{
- if (!with_sum_func && !const_item() && functype() != SUSERVAR_FUNC)
+ if (!with_sum_func && !const_item())
return new Item_field(result_field);
return copy_or_same(thd);
}
=== modified file 'sql/item_func.h'
--- a/sql/item_func.h 2010-01-15 15:27:55 +0000
+++ b/sql/item_func.h 2010-03-04 08:03:07 +0000
@@ -189,6 +189,34 @@ public:
null_value=1;
return 0.0;
}
+ bool has_timestamp_args()
+ {
+ DBUG_ASSERT(fixed == TRUE);
+ for (uint i= 0; i < arg_count; i++)
+ {
+ if (args[i]->type() == Item::FIELD_ITEM &&
+ args[i]->field_type() == MYSQL_TYPE_TIMESTAMP)
+ return TRUE;
+ }
+ return FALSE;
+ }
+ /*
+ We assume the result of any function that has a TIMESTAMP argument to be
+ timezone-dependent, since a TIMESTAMP value in both numeric and string
+ contexts is interpreted according to the current timezone.
+ The only exception is UNIX_TIMESTAMP() which returns the internal
+ representation of a TIMESTAMP argument verbatim, and thus does not depend on
+ the timezone.
+ */
+ virtual bool is_timezone_dependent_processor(uchar *bool_arg)
+ {
+ return has_timestamp_args();
+ }
+
+ virtual bool find_function_processor (uchar *arg)
+ {
+ return functype() == *(Functype *) arg;
+ }
};
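
Both new processors are meant to be driven through Item::walk(), which applies a member-function pointer to every node of an expression tree. An assumed usage sketch (the wrapper is illustrative; the walk() call follows the usual 5.1 signature of processor, walk_subquery flag, argument):

    bool expr_depends_on_timezone(Item *expr)
    {
      /* returns TRUE as soon as any node reports timezone dependence */
      return expr->walk(&Item::is_timezone_dependent_processor, 0, NULL);
    }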
=== modified file 'sql/item_strfunc.cc'
--- a/sql/item_strfunc.cc 2010-01-15 15:27:55 +0000
+++ b/sql/item_strfunc.cc 2010-03-04 08:03:07 +0000
@@ -42,6 +42,20 @@ C_MODE_END
String my_empty_string("",default_charset_info);
+/*
+ Convert an array of bytes to a hexadecimal representation.
+
+ Used to generate a hexadecimal representation of a message digest.
+*/
+static void array_to_hex(char *to, const char *str, uint len)
+{
+ const char *str_end= str + len;
+ for (; str != str_end; ++str)
+ {
+ *to++= _dig_vec_lower[((uchar) *str) >> 4];
+ *to++= _dig_vec_lower[((uchar) *str) & 0x0F];
+ }
+}
bool Item_str_func::fix_fields(THD *thd, Item **ref)
@@ -114,12 +128,7 @@ String *Item_func_md5::val_str(String *s
null_value=1;
return 0;
}
- sprintf((char *) str->ptr(),
- "%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x",
- digest[0], digest[1], digest[2], digest[3],
- digest[4], digest[5], digest[6], digest[7],
- digest[8], digest[9], digest[10], digest[11],
- digest[12], digest[13], digest[14], digest[15]);
+ array_to_hex((char *) str->ptr(), (const char*) digest, 16);
str->length((uint) 32);
return str;
}
@@ -160,15 +169,7 @@ String *Item_func_sha::val_str(String *s
if (!( str->alloc(SHA1_HASH_SIZE*2) ||
(mysql_sha1_result(&context,digest))))
{
- sprintf((char *) str->ptr(),
- "%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x%02x\
-%02x%02x%02x%02x%02x%02x%02x%02x",
- digest[0], digest[1], digest[2], digest[3],
- digest[4], digest[5], digest[6], digest[7],
- digest[8], digest[9], digest[10], digest[11],
- digest[12], digest[13], digest[14], digest[15],
- digest[16], digest[17], digest[18], digest[19]);
-
+ array_to_hex((char *) str->ptr(), (const char*) digest, SHA1_HASH_SIZE);
str->length((uint) SHA1_HASH_SIZE*2);
null_value=0;
return str;
@@ -678,8 +679,8 @@ String *Item_func_concat_ws::val_str(Str
res->length() + sep_str->length() + res2->length())
{
/* We have room in str; We can't get any errors here */
- if (str == res2)
- { // This is quote uncommon!
+ if (str->ptr() == res2->ptr())
+ { // This is quite uncommon!
str->replace(0,0,*sep_str);
str->replace(0,0,*res);
}
@@ -1721,68 +1722,65 @@ String *Item_func_encrypt::val_str(Strin
#endif /* HAVE_CRYPT */
}
+bool Item_func_encode::seed()
+{
+ char buf[80];
+ ulong rand_nr[2];
+ String *key, tmp(buf, sizeof(buf), system_charset_info);
+
+ if (!(key= args[1]->val_str(&tmp)))
+ return TRUE;
+
+ hash_password(rand_nr, key->ptr(), key->length());
+ sql_crypt.init(rand_nr);
+
+ return FALSE;
+}
+
void Item_func_encode::fix_length_and_dec()
{
max_length=args[0]->max_length;
maybe_null=args[0]->maybe_null || args[1]->maybe_null;
collation.set(&my_charset_bin);
+ /* Precompute the seed state if the item is constant. */
+ seeded= args[1]->const_item() &&
+ (args[1]->result_type() == STRING_RESULT) && !seed();
}
String *Item_func_encode::val_str(String *str)
{
String *res;
- char pw_buff[80];
- String tmp_pw_value(pw_buff, sizeof(pw_buff), system_charset_info);
- String *password;
DBUG_ASSERT(fixed == 1);
if (!(res=args[0]->val_str(str)))
{
- null_value=1; /* purecov: inspected */
- return 0; /* purecov: inspected */
+ null_value= 1;
+ return NULL;
}
- if (!(password=args[1]->val_str(& tmp_pw_value)))
+ if (!seeded && seed())
{
- null_value=1;
- return 0;
+ null_value= 1;
+ return NULL;
}
- null_value=0;
- res=copy_if_not_alloced(str,res,res->length());
- SQL_CRYPT sql_crypt(password->ptr(), password->length());
- sql_crypt.init();
- sql_crypt.encode((char*) res->ptr(),res->length());
- res->set_charset(&my_charset_bin);
+ null_value= 0;
+ res= copy_if_not_alloced(str, res, res->length());
+ crypto_transform(res);
+ sql_crypt.reinit();
+
return res;
}
-String *Item_func_decode::val_str(String *str)
+void Item_func_encode::crypto_transform(String *res)
{
- String *res;
- char pw_buff[80];
- String tmp_pw_value(pw_buff, sizeof(pw_buff), system_charset_info);
- String *password;
- DBUG_ASSERT(fixed == 1);
-
- if (!(res=args[0]->val_str(str)))
- {
- null_value=1; /* purecov: inspected */
- return 0; /* purecov: inspected */
- }
-
- if (!(password=args[1]->val_str(& tmp_pw_value)))
- {
- null_value=1;
- return 0;
- }
+ sql_crypt.encode((char*) res->ptr(),res->length());
+ res->set_charset(&my_charset_bin);
+}
- null_value=0;
- res=copy_if_not_alloced(str,res,res->length());
- SQL_CRYPT sql_crypt(password->ptr(), password->length());
- sql_crypt.init();
+void Item_func_decode::crypto_transform(String *res)
+{
sql_crypt.decode((char*) res->ptr(),res->length());
- return res;
}
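
array_to_hex() replaces the long sprintf() format strings for MD5() and SHA1() with a per-byte table lookup. A stand-alone equivalent (using a local digit table instead of the server's _dig_vec_lower):

    static void bytes_to_hex(char *to, const unsigned char *from, unsigned len)
    {
      static const char dig[]= "0123456789abcdef";
      const unsigned char *end= from + len;
      for (; from != end; from++)
      {
        *to++= dig[*from >> 4];      /* high nibble */
        *to++= dig[*from & 0x0F];    /* low nibble  */
      }
      /* no terminating NUL: the caller sets the String length explicitly */
    }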
=== modified file 'sql/item_strfunc.h'
--- a/sql/item_strfunc.h 2009-09-07 20:50:10 +0000
+++ b/sql/item_strfunc.h 2010-03-04 08:03:07 +0000
@@ -351,12 +351,22 @@ public:
class Item_func_encode :public Item_str_func
{
+private:
+ /** Whether the PRNG has already been seeded. */
+ bool seeded;
+protected:
+ SQL_CRYPT sql_crypt;
public:
Item_func_encode(Item *a, Item *seed):
Item_str_func(a, seed) {}
String *val_str(String *);
void fix_length_and_dec();
const char *func_name() const { return "encode"; }
+protected:
+ virtual void crypto_transform(String *);
+private:
+ /** Provide a seed for the PRNG sequence. */
+ bool seed();
};
@@ -364,8 +374,9 @@ class Item_func_decode :public Item_func
{
public:
Item_func_decode(Item *a, Item *seed): Item_func_encode(a, seed) {}
- String *val_str(String *);
const char *func_name() const { return "decode"; }
+protected:
+ void crypto_transform(String *);
};
=== modified file 'sql/item_subselect.cc'
--- a/sql/item_subselect.cc 2010-01-15 15:27:55 +0000
+++ b/sql/item_subselect.cc 2010-03-09 19:29:05 +0000
@@ -39,7 +39,8 @@ inline Item * and_items(Item* cond, Item
Item_subselect::Item_subselect():
Item_result_field(), value_assigned(0), thd(0), substitution(0),
engine(0), old_engine(0), used_tables_cache(0), have_to_be_excluded(0),
- const_item_cache(1), in_fix_fields(0), engine_changed(0), changed(0), is_correlated(FALSE)
+ const_item_cache(1), in_fix_fields(0), eliminated(FALSE),
+ engine_changed(0), changed(0), is_correlated(FALSE)
{
with_subselect= 1;
reset();
@@ -431,6 +432,7 @@ void Item_maxmin_subselect::print(String
void Item_singlerow_subselect::reset()
{
+ eliminated= FALSE;
null_value= 1;
if (value)
value->null_value= 1;
@@ -1774,6 +1776,10 @@ int subselect_single_select_engine::prep
{
if (prepared)
return 0;
+ if (select_lex->join)
+ {
+ select_lex->cleanup();
+ }
join= new JOIN(thd, select_lex->item_list,
select_lex->options | SELECT_NO_UNLOCK, result);
if (!join || !result)
=== modified file 'sql/item_subselect.h'
--- a/sql/item_subselect.h 2010-01-15 15:27:55 +0000
+++ b/sql/item_subselect.h 2010-03-09 15:03:54 +0000
@@ -90,6 +90,7 @@ public:
void cleanup();
virtual void reset()
{
+ eliminated= FALSE;
null_value= 1;
}
virtual trans_res select_transformer(JOIN *join);
@@ -235,6 +236,7 @@ public:
subs_type substype() { return EXISTS_SUBS; }
void reset()
{
+ eliminated= FALSE;
value= 0;
}
@@ -306,6 +308,7 @@ public:
subs_type substype() { return IN_SUBS; }
void reset()
{
+ eliminated= FALSE;
value= 0;
null_value= 0;
was_null= 0;
=== modified file 'sql/item_timefunc.cc'
--- a/sql/item_timefunc.cc 2010-01-15 15:27:55 +0000
+++ b/sql/item_timefunc.cc 2010-03-04 08:03:07 +0000
@@ -2560,9 +2560,9 @@ void Item_char_typecast::fix_length_and_
from_cs != &my_charset_bin &&
cast_cs != &my_charset_bin);
collation.set(cast_cs, DERIVATION_IMPLICIT);
- char_length= (cast_length >= 0) ?
- cast_length :
- args[0]->max_length / args[0]->collation.collation->mbmaxlen;
+ char_length= (cast_length >= 0) ? cast_length :
+ args[0]->max_length /
+ (cast_cs == &my_charset_bin ? 1 : args[0]->collation.collation->mbmaxlen);
max_length= char_length * cast_cs->mbmaxlen;
}
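
As a worked example (numbers are illustrative): a utf8 argument of 30 characters has max_length 90 bytes and mbmaxlen 3, so CAST(... AS CHAR) still yields 30 characters, while CAST(... AS BINARY) now divides by 1 and keeps the full 90 bytes instead of truncating the result to 30.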
=== modified file 'sql/item_timefunc.h'
--- a/sql/item_timefunc.h 2009-02-07 15:50:31 +0000
+++ b/sql/item_timefunc.h 2009-12-13 20:29:50 +0000
@@ -305,6 +305,16 @@ public:
Item_func_unix_timestamp(Item *a) :Item_int_func(a) {}
longlong val_int();
const char *func_name() const { return "unix_timestamp"; }
+ bool check_partition_func_processor(uchar *int_arg) {return FALSE;}
+ /*
+ UNIX_TIMESTAMP() depends on the current timezone
+ (and thus may not be used as a partitioning function)
+ when its argument is NOT of the TIMESTAMP type.
+ */
+ bool is_timezone_dependent_processor(uchar *int_arg)
+ {
+ return !has_timestamp_args();
+ }
void fix_length_and_dec()
{
decimals=0;
=== modified file 'sql/log.cc'
--- a/sql/log.cc 2010-01-15 15:27:55 +0000
+++ b/sql/log.cc 2010-03-04 08:03:07 +0000
@@ -1475,7 +1475,7 @@ binlog_end_trans(THD *thd, binlog_trx_da
if (all || !(thd->options & (OPTION_BEGIN | OPTION_NOT_AUTOCOMMIT)))
{
if (trx_data->has_incident())
- mysql_bin_log.write_incident(thd, TRUE);
+ error= mysql_bin_log.write_incident(thd, TRUE);
trx_data->reset();
}
else // ...statement
@@ -1910,6 +1910,22 @@ void MYSQL_LOG::init(enum_log_type log_t
}
+bool MYSQL_LOG::init_and_set_log_file_name(const char *log_name,
+ const char *new_name,
+ enum_log_type log_type_arg,
+ enum cache_type io_cache_type_arg)
+{
+ init(log_type_arg, io_cache_type_arg);
+
+ if (new_name && !strmov(log_file_name, new_name))
+ return TRUE;
+ else if (!new_name && generate_new_name(log_file_name, log_name))
+ return TRUE;
+
+ return FALSE;
+}
+
+
/*
Open a (new) log file.
@@ -1942,17 +1958,14 @@ bool MYSQL_LOG::open(const char *log_nam
write_error= 0;
- init(log_type_arg, io_cache_type_arg);
-
if (!(name= my_strdup(log_name, MYF(MY_WME))))
{
name= (char *)log_name; // for the error message
goto err;
}
- if (new_name)
- strmov(log_file_name, new_name);
- else if (generate_new_name(log_file_name, name))
+ if (init_and_set_log_file_name(name, new_name,
+ log_type_arg, io_cache_type_arg))
goto err;
if (io_cache_type == SEQ_READ_APPEND)
@@ -2440,7 +2453,7 @@ const char *MYSQL_LOG::generate_name(con
{
char *p= fn_ext(log_name);
uint length= (uint) (p - log_name);
- strmake(buff, log_name, min(length, FN_REFLEN));
+ strmake(buff, log_name, min(length, FN_REFLEN-1));
return (const char*)buff;
}
return log_name;
@@ -2462,7 +2475,7 @@ MYSQL_BIN_LOG::MYSQL_BIN_LOG()
*/
index_file_name[0] = 0;
bzero((char*) &index_file, sizeof(index_file));
- bzero((char*) &purge_temp, sizeof(purge_temp));
+ bzero((char*) &purge_index_file, sizeof(purge_index_file));
}
/* this is called only once */
@@ -2511,7 +2524,7 @@ void MYSQL_BIN_LOG::init_pthread_objects
bool MYSQL_BIN_LOG::open_index_file(const char *index_file_name_arg,
- const char *log_name)
+ const char *log_name, bool need_mutex)
{
File index_file_nr= -1;
DBUG_ASSERT(!my_b_inited(&index_file));
@@ -2536,7 +2549,8 @@ bool MYSQL_BIN_LOG::open_index_file(cons
init_io_cache(&index_file, index_file_nr,
IO_SIZE, WRITE_CACHE,
my_seek(index_file_nr,0L,MY_SEEK_END,MYF(0)),
- 0, MYF(MY_WME | MY_WAIT_IF_FULL)))
+ 0, MYF(MY_WME | MY_WAIT_IF_FULL)) ||
+ DBUG_EVALUATE_IF("fault_injection_openning_index", 1, 0))
{
/*
TODO: all operations creating/deleting the index file or a log, should
@@ -2547,6 +2561,28 @@ bool MYSQL_BIN_LOG::open_index_file(cons
my_close(index_file_nr,MYF(0));
return TRUE;
}
+
+#ifdef HAVE_REPLICATION
+ /*
+ Sync the index by purging any binary log file that is not registered.
+ In other words, either purge binary log files that were removed from
+ the index but not purged from the file system due to a crash or purge
+ any binary log file that was created but not register in the index
+ due to a crash.
+ */
+
+ if (set_purge_index_file_name(index_file_name_arg) ||
+ open_purge_index_file(FALSE) ||
+ purge_index_entry(NULL, NULL, need_mutex) ||
+ close_purge_index_file() ||
+ DBUG_EVALUATE_IF("fault_injection_recovering_index", 1, 0))
+ {
+ sql_print_error("MYSQL_BIN_LOG::open_index_file failed to sync the index "
+ "file.");
+ return TRUE;
+ }
+#endif
+
return FALSE;
}
@@ -2571,17 +2607,44 @@ bool MYSQL_BIN_LOG::open(const char *log
enum cache_type io_cache_type_arg,
bool no_auto_events_arg,
ulong max_size_arg,
- bool null_created_arg)
+ bool null_created_arg,
+ bool need_mutex)
{
File file= -1;
+
DBUG_ENTER("MYSQL_BIN_LOG::open");
DBUG_PRINT("enter",("log_type: %d",(int) log_type_arg));
- write_error=0;
+ if (init_and_set_log_file_name(log_name, new_name, log_type_arg,
+ io_cache_type_arg))
+ {
+ sql_print_error("MSYQL_BIN_LOG::open failed to generate new file name.");
+ DBUG_RETURN(1);
+ }
+
+#ifdef HAVE_REPLICATION
+ if (open_purge_index_file(TRUE) ||
+ register_create_index_entry(log_file_name) ||
+ sync_purge_index_file() ||
+ DBUG_EVALUATE_IF("fault_injection_registering_index", 1, 0))
+ {
+ sql_print_error("MSYQL_BIN_LOG::open failed to sync the index file.");
+ DBUG_RETURN(1);
+ }
+ DBUG_EXECUTE_IF("crash_create_non_critical_before_update_index", abort(););
+#endif
+
+ write_error= 0;
/* open the main log file */
- if (MYSQL_LOG::open(log_name, log_type_arg, new_name, io_cache_type_arg))
+ if (MYSQL_LOG::open(log_name, log_type_arg, new_name,
+ io_cache_type_arg))
+ {
+#ifdef HAVE_REPLICATION
+ close_purge_index_file();
+#endif
DBUG_RETURN(1); /* all warnings issued */
+ }
init(no_auto_events_arg, max_size_arg);
@@ -2607,9 +2670,6 @@ bool MYSQL_BIN_LOG::open(const char *log
write_file_name_to_index_file= 1;
}
- DBUG_ASSERT(my_b_inited(&index_file) != 0);
- reinit_io_cache(&index_file, WRITE_CACHE,
- my_b_filelength(&index_file), 0, 0);
if (need_start_event && !no_auto_events)
{
/*
@@ -2667,23 +2727,44 @@ bool MYSQL_BIN_LOG::open(const char *log
if (write_file_name_to_index_file)
{
+#ifdef HAVE_REPLICATION
+ DBUG_EXECUTE_IF("crash_create_critical_before_update_index", abort(););
+#endif
+
+ DBUG_ASSERT(my_b_inited(&index_file) != 0);
+ reinit_io_cache(&index_file, WRITE_CACHE,
+ my_b_filelength(&index_file), 0, 0);
/*
As this is a new log file, we write the file name to the index
file. As every time we write to the index file, we sync it.
*/
- if (my_b_write(&index_file, (uchar*) log_file_name,
- strlen(log_file_name)) ||
- my_b_write(&index_file, (uchar*) "\n", 1) ||
- flush_io_cache(&index_file) ||
+ if (DBUG_EVALUATE_IF("fault_injection_updating_index", 1, 0) ||
+ my_b_write(&index_file, (uchar*) log_file_name,
+ strlen(log_file_name)) ||
+ my_b_write(&index_file, (uchar*) "\n", 1) ||
+ flush_io_cache(&index_file) ||
my_sync(index_file.file, MYF(MY_WME)))
- goto err;
+ goto err;
+
+#ifdef HAVE_REPLICATION
+ DBUG_EXECUTE_IF("crash_create_after_update_index", abort(););
+#endif
}
}
log_state= LOG_OPENED;
+#ifdef HAVE_REPLICATION
+ close_purge_index_file();
+#endif
+
DBUG_RETURN(0);
err:
+#ifdef HAVE_REPLICATION
+ if (is_inited_purge_index_file())
+ purge_index_entry(NULL, NULL, need_mutex);
+ close_purge_index_file();
+#endif
sql_print_error("Could not use %s for logging (error %d). \
Turning logging off for the whole duration of the MySQL server process. \
To turn it on again: fix the cause, \
@@ -2940,8 +3021,15 @@ bool MYSQL_BIN_LOG::reset_logs(THD* thd)
name=0; // Protect against free
close(LOG_CLOSE_TO_BE_OPENED);
- /* First delete all old log files */
+ /*
+ First delete all old log files and then update the index file.
+ As we first delete the log files and do not use sort of logging,
+ a crash may lead to an inconsistent state where the index has
+ references to non-existent files.
+ We need to invert the steps and use the purge_index_file methods
+ in order to make the operation safe.
+ */
if (find_log_pos(&linfo, NullS, 0))
{
error=1;
@@ -2964,7 +3052,7 @@ bool MYSQL_BIN_LOG::reset_logs(THD* thd)
}
else
{
- push_warning_printf(current_thd, MYSQL_ERROR::WARN_LEVEL_ERROR,
+ push_warning_printf(current_thd, MYSQL_ERROR::WARN_LEVEL_WARN,
ER_BINLOG_PURGE_FATAL_ERR,
"a problem with deleting %s; "
"consider examining correspondence "
@@ -2995,7 +3083,7 @@ bool MYSQL_BIN_LOG::reset_logs(THD* thd)
}
else
{
- push_warning_printf(current_thd, MYSQL_ERROR::WARN_LEVEL_ERROR,
+ push_warning_printf(current_thd, MYSQL_ERROR::WARN_LEVEL_WARN,
ER_BINLOG_PURGE_FATAL_ERR,
"a problem with deleting %s; "
"consider examining correspondence "
@@ -3008,8 +3096,8 @@ bool MYSQL_BIN_LOG::reset_logs(THD* thd)
}
if (!thd->slave_thread)
need_start_event=1;
- if (!open_index_file(index_file_name, 0))
- open(save_name, log_type, 0, io_cache_type, no_auto_events, max_size, 0);
+ if (!open_index_file(index_file_name, 0, FALSE))
+ open(save_name, log_type, 0, io_cache_type, no_auto_events, max_size, 0, FALSE);
my_free((uchar*) save_name, MYF(0));
err:
@@ -3196,7 +3284,7 @@ int MYSQL_BIN_LOG::purge_logs(const char
bool need_update_threads,
ulonglong *decrease_log_space)
{
- int error;
+ int error= 0;
bool exit_loop= 0;
LOG_INFO log_info;
THD *thd= current_thd;
@@ -3207,33 +3295,15 @@ int MYSQL_BIN_LOG::purge_logs(const char
pthread_mutex_lock(&LOCK_index);
if ((error=find_log_pos(&log_info, to_log, 0 /*no mutex*/)))
{
- sql_print_error("MYSQL_LOG::purge_logs was called with file %s not "
+ sql_print_error("MYSQL_BIN_LOG::purge_logs was called with file %s not "
"listed in the index.", to_log);
goto err;
}
- /*
- For crash recovery reasons the index needs to be updated before
- any files are deleted. Move files to be deleted into a temp file
- to be processed after the index is updated.
- */
- if (!my_b_inited(&purge_temp))
- {
- if ((error=open_cached_file(&purge_temp, mysql_tmpdir, TEMP_PREFIX,
- DISK_BUFFER_SIZE, MYF(MY_WME))))
- {
- sql_print_error("MYSQL_LOG::purge_logs failed to open purge_temp");
- goto err;
- }
- }
- else
+ if ((error= open_purge_index_file(TRUE)))
{
- if ((error=reinit_io_cache(&purge_temp, WRITE_CACHE, 0, 0, 1)))
- {
- sql_print_error("MYSQL_LOG::purge_logs failed to reinit purge_temp "
- "for write");
- goto err;
- }
+ sql_print_error("MYSQL_BIN_LOG::purge_logs failed to sync the index file.");
+ goto err;
}
/*
@@ -3243,51 +3313,177 @@ int MYSQL_BIN_LOG::purge_logs(const char
if ((error=find_log_pos(&log_info, NullS, 0 /*no mutex*/)))
goto err;
while ((strcmp(to_log,log_info.log_file_name) || (exit_loop=included)) &&
+ !is_active(log_info.log_file_name) &&
!log_in_use(log_info.log_file_name))
{
- if ((error=my_b_write(&purge_temp, (const uchar*)log_info.log_file_name,
- strlen(log_info.log_file_name))) ||
- (error=my_b_write(&purge_temp, (const uchar*)"\n", 1)))
+ if ((error= register_purge_index_entry(log_info.log_file_name)))
{
- sql_print_error("MYSQL_LOG::purge_logs failed to copy %s to purge_temp",
+ sql_print_error("MYSQL_BIN_LOG::purge_logs failed to copy %s to register file.",
log_info.log_file_name);
goto err;
}
if (find_next_log(&log_info, 0) || exit_loop)
break;
- }
+ }
+
+ DBUG_EXECUTE_IF("crash_purge_before_update_index", abort(););
+
+ if ((error= sync_purge_index_file()))
+ {
+ sql_print_error("MSYQL_BIN_LOG::purge_logs failed to flush register file.");
+ goto err;
+ }
/* We know how many files to delete. Update index file. */
if ((error=update_log_index(&log_info, need_update_threads)))
{
- sql_print_error("MSYQL_LOG::purge_logs failed to update the index file");
+ sql_print_error("MSYQL_BIN_LOG::purge_logs failed to update the index file");
goto err;
}
- DBUG_EXECUTE_IF("crash_after_update_index", abort(););
+ DBUG_EXECUTE_IF("crash_purge_critical_after_update_index", abort(););
+
+err:
+ /* Read each entry from purge_index_file and delete the file. */
+ if (is_inited_purge_index_file() &&
+ (error= purge_index_entry(thd, decrease_log_space, FALSE)))
+ sql_print_error("MSYQL_BIN_LOG::purge_logs failed to process registered files"
+ " that would be purged.");
+ close_purge_index_file();
+
+ DBUG_EXECUTE_IF("crash_purge_non_critical_after_update_index", abort(););
+
+ if (need_mutex)
+ pthread_mutex_unlock(&LOCK_index);
+ DBUG_RETURN(error);
+}
+
+int MYSQL_BIN_LOG::set_purge_index_file_name(const char *base_file_name)
+{
+ int error= 0;
+ DBUG_ENTER("MYSQL_BIN_LOG::set_purge_index_file_name");
+ if (fn_format(purge_index_file_name, base_file_name, mysql_data_home,
+ ".~rec~", MYF(MY_UNPACK_FILENAME | MY_SAFE_PATH |
+ MY_REPLACE_EXT)) == NULL)
+ {
+ error= 1;
+ sql_print_error("MYSQL_BIN_LOG::set_purge_index_file_name failed to set "
+ "file name.");
+ }
+ DBUG_RETURN(error);
+}
+
+int MYSQL_BIN_LOG::open_purge_index_file(bool destroy)
+{
+ int error= 0;
+ File file= -1;
+
+ DBUG_ENTER("MYSQL_BIN_LOG::open_purge_index_file");
+
+ if (destroy)
+ close_purge_index_file();
+
+ if (!my_b_inited(&purge_index_file))
+ {
+ if ((file= my_open(purge_index_file_name, O_RDWR | O_CREAT | O_BINARY,
+ MYF(MY_WME | ME_WAITTANG))) < 0 ||
+ init_io_cache(&purge_index_file, file, IO_SIZE,
+ (destroy ? WRITE_CACHE : READ_CACHE),
+ 0, 0, MYF(MY_WME | MY_NABP | MY_WAIT_IF_FULL)))
+ {
+ error= 1;
+ sql_print_error("MYSQL_BIN_LOG::open_purge_index_file failed to open register "
+ " file.");
+ }
+ }
+ DBUG_RETURN(error);
+}
+
+int MYSQL_BIN_LOG::close_purge_index_file()
+{
+ int error= 0;
+
+ DBUG_ENTER("MYSQL_BIN_LOG::close_purge_index_file");
+
+ if (my_b_inited(&purge_index_file))
+ {
+ end_io_cache(&purge_index_file);
+ error= my_close(purge_index_file.file, MYF(0));
+ }
+ my_delete(purge_index_file_name, MYF(0));
+ bzero((char*) &purge_index_file, sizeof(purge_index_file));
+
+ DBUG_RETURN(error);
+}
+
+bool MYSQL_BIN_LOG::is_inited_purge_index_file()
+{
+ DBUG_ENTER("MYSQL_BIN_LOG::is_inited_purge_index_file");
+ DBUG_RETURN (my_b_inited(&purge_index_file));
+}
+
+int MYSQL_BIN_LOG::sync_purge_index_file()
+{
+ int error= 0;
+ DBUG_ENTER("MYSQL_BIN_LOG::sync_purge_index_file");
+
+ if ((error= flush_io_cache(&purge_index_file)) ||
+ (error= my_sync(purge_index_file.file, MYF(MY_WME))))
+ DBUG_RETURN(error);
+
+ DBUG_RETURN(error);
+}
+
+int MYSQL_BIN_LOG::register_purge_index_entry(const char *entry)
+{
+ int error= 0;
+ DBUG_ENTER("MYSQL_BIN_LOG::register_purge_index_entry");
+
+ if ((error=my_b_write(&purge_index_file, (const uchar*)entry, strlen(entry))) ||
+ (error=my_b_write(&purge_index_file, (const uchar*)"\n", 1)))
+ DBUG_RETURN (error);
+
+ DBUG_RETURN(error);
+}
+
+int MYSQL_BIN_LOG::register_create_index_entry(const char *entry)
+{
+ DBUG_ENTER("MYSQL_BIN_LOG::register_create_index_entry");
+ DBUG_RETURN(register_purge_index_entry(entry));
+}
- /* Switch purge_temp for read. */
- if ((error=reinit_io_cache(&purge_temp, READ_CACHE, 0, 0, 0)))
+int MYSQL_BIN_LOG::purge_index_entry(THD *thd, ulonglong *decrease_log_space,
+ bool need_mutex)
+{
+ MY_STAT s;
+ int error= 0;
+ LOG_INFO log_info;
+ LOG_INFO check_log_info;
+
+ DBUG_ENTER("MYSQL_BIN_LOG:purge_index_entry");
+
+ DBUG_ASSERT(my_b_inited(&purge_index_file));
+
+ if ((error=reinit_io_cache(&purge_index_file, READ_CACHE, 0, 0, 0)))
{
- sql_print_error("MSYQL_LOG::purge_logs failed to reinit purge_temp "
+ sql_print_error("MSYQL_BIN_LOG::purge_index_entry failed to reinit register file "
"for read");
goto err;
}
- /* Read each entry from purge_temp and delete the file. */
for (;;)
{
uint length;
- if ((length=my_b_gets(&purge_temp, log_info.log_file_name,
+ if ((length=my_b_gets(&purge_index_file, log_info.log_file_name,
FN_REFLEN)) <= 1)
{
- if (purge_temp.error)
+ if (purge_index_file.error)
{
- error= purge_temp.error;
- sql_print_error("MSYQL_LOG::purge_logs error %d reading from "
- "purge_temp", error);
+ error= purge_index_file.error;
+ sql_print_error("MSYQL_BIN_LOG::purge_index_entry error %d reading from "
+ "register file.", error);
goto err;
}
@@ -3298,9 +3494,6 @@ int MYSQL_BIN_LOG::purge_logs(const char
/* Get rid of the trailing '\n' */
log_info.log_file_name[length-1]= 0;
- ha_binlog_index_purge_file(current_thd, log_info.log_file_name);
-
- MY_STAT s;
if (!my_stat(log_info.log_file_name, &s, MYF(0)))
{
if (my_errno == ENOENT)
@@ -3326,7 +3519,7 @@ int MYSQL_BIN_LOG::purge_logs(const char
*/
if (thd)
{
- push_warning_printf(thd, MYSQL_ERROR::WARN_LEVEL_ERROR,
+ push_warning_printf(thd, MYSQL_ERROR::WARN_LEVEL_WARN,
ER_BINLOG_PURGE_FATAL_ERR,
"a problem with getting info on being purged %s; "
"consider examining correspondence "
@@ -3348,64 +3541,92 @@ int MYSQL_BIN_LOG::purge_logs(const char
}
else
{
- DBUG_PRINT("info",("purging %s",log_info.log_file_name));
- if (!my_delete(log_info.log_file_name, MYF(0)))
- {
- if (decrease_log_space)
- *decrease_log_space-= s.st_size;
- }
- else
+ if ((error= find_log_pos(&check_log_info, log_info.log_file_name, need_mutex)))
{
- if (my_errno == ENOENT)
+ if (error != LOG_INFO_EOF)
{
if (thd)
{
push_warning_printf(thd, MYSQL_ERROR::WARN_LEVEL_WARN,
- ER_LOG_PURGE_NO_FILE, ER(ER_LOG_PURGE_NO_FILE),
+ ER_BINLOG_PURGE_FATAL_ERR,
+ "a problem with deleting %s and "
+ "reading the binlog index file",
log_info.log_file_name);
}
- sql_print_information("Failed to delete file '%s'",
- log_info.log_file_name);
- my_errno= 0;
+ else
+ {
+ sql_print_information("Failed to delete file '%s' and "
+ "read the binlog index file",
+ log_info.log_file_name);
+ }
+ goto err;
+ }
+
+ error= 0;
+ if (!need_mutex)
+ {
+ /*
+ This is to avoid triggering an error in NDB.
+ */
+ ha_binlog_index_purge_file(current_thd, log_info.log_file_name);
+ }
+
+ DBUG_PRINT("info",("purging %s",log_info.log_file_name));
+ if (!my_delete(log_info.log_file_name, MYF(0)))
+ {
+ if (decrease_log_space)
+ *decrease_log_space-= s.st_size;
}
else
{
- if (thd)
+ if (my_errno == ENOENT)
{
- push_warning_printf(thd, MYSQL_ERROR::WARN_LEVEL_ERROR,
- ER_BINLOG_PURGE_FATAL_ERR,
- "a problem with deleting %s; "
- "consider examining correspondence "
- "of your binlog index file "
- "to the actual binlog files",
- log_info.log_file_name);
+ if (thd)
+ {
+ push_warning_printf(thd, MYSQL_ERROR::WARN_LEVEL_WARN,
+ ER_LOG_PURGE_NO_FILE, ER(ER_LOG_PURGE_NO_FILE),
+ log_info.log_file_name);
+ }
+ sql_print_information("Failed to delete file '%s'",
+ log_info.log_file_name);
+ my_errno= 0;
}
else
{
- sql_print_information("Failed to delete file '%s'; "
+ if (thd)
+ {
+ push_warning_printf(thd, MYSQL_ERROR::WARN_LEVEL_WARN,
+ ER_BINLOG_PURGE_FATAL_ERR,
+ "a problem with deleting %s; "
"consider examining correspondence "
"of your binlog index file "
"to the actual binlog files",
log_info.log_file_name);
- }
- if (my_errno == EMFILE)
- {
- DBUG_PRINT("info",
- ("my_errno: %d, set ret = LOG_INFO_EMFILE", my_errno));
- error= LOG_INFO_EMFILE;
+ }
+ else
+ {
+ sql_print_information("Failed to delete file '%s'; "
+ "consider examining correspondence "
+ "of your binlog index file "
+ "to the actual binlog files",
+ log_info.log_file_name);
+ }
+ if (my_errno == EMFILE)
+ {
+ DBUG_PRINT("info",
+ ("my_errno: %d, set ret = LOG_INFO_EMFILE", my_errno));
+ error= LOG_INFO_EMFILE;
+ goto err;
+ }
+ error= LOG_INFO_FATAL;
goto err;
}
- error= LOG_INFO_FATAL;
- goto err;
}
}
}
}
err:
- close_cached_file(&purge_temp);
- if (need_mutex)
- pthread_mutex_unlock(&LOCK_index);
DBUG_RETURN(error);
}
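
The purge hunks above replace the old purge_temp cache with a persistent purge_index_file: the names of the binlogs to drop are registered and synced to that file first, and only afterwards are the files stat'ed and deleted, so a crash between the two steps leaves a record of what still has to be purged. A minimal stand-alone sketch of that delete loop (hypothetical helper, POSIX stat(), not the actual MYSQL_BIN_LOG API):

    #include <cstdio>
    #include <string>
    #include <vector>
    #include <sys/stat.h>

    /* Sketch only: the real code streams names out of an IO_CACHE. */
    static int purge_registered_logs(const std::vector<std::string> &names,
                                     unsigned long long *decrease_log_space)
    {
      for (const std::string &name : names)
      {
        struct stat s;
        if (stat(name.c_str(), &s) != 0)
          continue;                          /* already gone: not fatal */
        if (std::remove(name.c_str()) == 0 && decrease_log_space)
          *decrease_log_space-= (unsigned long long) s.st_size;
      }
      return 0;
    }
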
@@ -3445,7 +3666,8 @@ int MYSQL_BIN_LOG::purge_logs_before_dat
goto err;
while (strcmp(log_file_name, log_info.log_file_name) &&
- !log_in_use(log_info.log_file_name))
+ !is_active(log_info.log_file_name) &&
+ !log_in_use(log_info.log_file_name))
{
if (!my_stat(log_info.log_file_name, &stat_area, MYF(0)))
{
@@ -3454,14 +3676,6 @@ int MYSQL_BIN_LOG::purge_logs_before_dat
/*
It's not fatal if we can't stat a log file that does not exist.
*/
- if (thd)
- {
- push_warning_printf(thd, MYSQL_ERROR::WARN_LEVEL_WARN,
- ER_LOG_PURGE_NO_FILE, ER(ER_LOG_PURGE_NO_FILE),
- log_info.log_file_name);
- }
- sql_print_information("Failed to execute my_stat on file '%s'",
- log_info.log_file_name);
my_errno= 0;
}
else
@@ -3471,7 +3685,7 @@ int MYSQL_BIN_LOG::purge_logs_before_dat
*/
if (thd)
{
- push_warning_printf(thd, MYSQL_ERROR::WARN_LEVEL_ERROR,
+ push_warning_printf(thd, MYSQL_ERROR::WARN_LEVEL_WARN,
ER_BINLOG_PURGE_FATAL_ERR,
"a problem with getting info on being purged %s; "
"consider examining correspondence "
@@ -3493,7 +3707,7 @@ int MYSQL_BIN_LOG::purge_logs_before_dat
if (stat_area.st_mtime < purge_time)
strmake(to_log,
log_info.log_file_name,
- sizeof(log_info.log_file_name));
+ sizeof(log_info.log_file_name) - 1);
else
break;
}
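
This hunk, like the FN_REFLEN-5 change in flush_error_log() and the MAX_HOSTNAME-1 changes in sql_acl.cc further down, shrinks a strmake() length argument by one: strmake()'s length is the maximum number of characters copied, excluding the terminating NUL, so a buffer of N bytes must be passed N-1. A tiny stand-in with the same convention (not the mysys implementation):

    #include <cstddef>
    #include <cstring>

    /* Copies at most `length` chars and always NUL-terminates, so the
       destination must hold length + 1 bytes. */
    static char *strmake_like(char *dst, const char *src, std::size_t length)
    {
      std::size_t n= std::strlen(src);
      if (n > length)
        n= length;
      std::memcpy(dst, src, n);
      dst[n]= '\0';
      return dst + n;
    }

    /* Usage: for char buf[FN_REFLEN], pass sizeof(buf) - 1 as the length. */
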
@@ -3656,9 +3870,9 @@ void MYSQL_BIN_LOG::new_file_impl(bool n
*/
/* reopen index binlog file, BUG#34582 */
- if (!open_index_file(index_file_name, 0))
- open(old_name, log_type, new_name_ptr,
- io_cache_type, no_auto_events, max_size, 1);
+ if (!open_index_file(index_file_name, 0, FALSE))
+ open(old_name, log_type, new_name_ptr,
+ io_cache_type, no_auto_events, max_size, 1, FALSE);
my_free(old_name,MYF(0));
end:
@@ -4108,12 +4322,20 @@ bool MYSQL_BIN_LOG::write(Log_event *eve
#if defined(USING_TRANSACTIONS)
/*
Should we write to the binlog cache or to the binlog on disk?
+
Write to the binlog cache if:
- - it is already not empty (meaning we're in a transaction; note that the
- present event could be about a non-transactional table, but still we need
- to write to the binlog cache in that case to handle updates to mixed
- trans/non-trans table types the best possible in binlogging)
- - or if the event asks for it (cache_stmt == TRUE).
+ 1 - a transactional engine/table is updated (stmt_has_updated_trans_table == TRUE);
+ 2 - or the event asks for it (cache_stmt == TRUE);
+ 3 - or the cache is already not empty (meaning we're in a transaction;
+ note that the present event could be about a non-transactional table, but
+ still we need to write to the binlog cache in that case to handle updates
+ to mixed trans/non-trans table types).
+
+ Write to the binlog on disk if only a non-transactional engine is
+ updated and:
+ 1 - the binlog cache is empty; or
+ 2 - --binlog-direct-non-transactional-updates is set and we are about to
+ use the statement format (with the row format, cache_stmt == TRUE, so
+ this direct path does not apply).
*/
if (opt_using_transactions && thd)
{
@@ -4124,8 +4346,9 @@ bool MYSQL_BIN_LOG::write(Log_event *eve
(binlog_trx_data*) thd_get_ha_data(thd, binlog_hton);
IO_CACHE *trans_log= &trx_data->trans_log;
my_off_t trans_log_pos= my_b_tell(trans_log);
- if (event_info->get_cache_stmt() || trans_log_pos != 0 ||
- stmt_has_updated_trans_table(thd))
+ if (event_info->get_cache_stmt() || stmt_has_updated_trans_table(thd) ||
+ (!thd->variables.binlog_direct_non_trans_update &&
+ trans_log_pos != 0))
{
DBUG_PRINT("info", ("Using trans_log: cache: %d, trans_log_pos: %lu",
event_info->get_cache_stmt(),
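
The rewritten comment and condition above boil down to one predicate: cache the event if it asks for caching (row format), if the statement touched a transactional table, or if a transaction cache is already in use and direct writes are not requested. Condensed into a sketch (plain booleans stand in for the THD flags; names are illustrative only):

    static bool use_trans_cache(bool cache_stmt,                  /* event asks for caching */
                                bool stmt_updates_trans_table,
                                bool binlog_direct_non_trans_update,
                                unsigned long long trans_log_pos) /* bytes already cached */
    {
      return cache_stmt ||
             stmt_updates_trans_table ||
             (!binlog_direct_non_trans_update && trans_log_pos != 0);
    }
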
@@ -4305,6 +4528,9 @@ bool general_log_write(THD *thd, enum en
void MYSQL_BIN_LOG::rotate_and_purge(uint flags)
{
+#ifdef HAVE_REPLICATION
+ bool check_purge= false;
+#endif
if (!(flags & RP_LOCK_LOG_IS_ALREADY_LOCKED))
pthread_mutex_lock(&LOCK_log);
if ((flags & RP_FORCE_ROTATE) ||
@@ -4312,16 +4538,24 @@ void MYSQL_BIN_LOG::rotate_and_purge(uin
{
new_file_without_locking();
#ifdef HAVE_REPLICATION
- if (expire_logs_days)
- {
- time_t purge_time= my_time(0) - expire_logs_days*24*60*60;
- if (purge_time >= 0)
- purge_logs_before_date(purge_time);
- }
+ check_purge= true;
#endif
}
if (!(flags & RP_LOCK_LOG_IS_ALREADY_LOCKED))
pthread_mutex_unlock(&LOCK_log);
+
+#ifdef HAVE_REPLICATION
+ /*
+ NOTE: Run purge_logs without holding LOCK_log, as it would
+ otherwise deadlock in ndbcluster_binlog_index_purge_file
+ */
+ if (check_purge && expire_logs_days)
+ {
+ time_t purge_time= my_time(0) - expire_logs_days*24*60*60;
+ if (purge_time >= 0)
+ purge_logs_before_date(purge_time);
+ }
+#endif
}
uint MYSQL_BIN_LOG::next_file_id()
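
rotate_and_purge() now only notes under LOCK_log that a rotation happened and runs the expiry purge after the mutex is released, since the purge path calls ndbcluster_binlog_index_purge_file and would otherwise deadlock. The shape of the change, reduced to a self-contained sketch (std::mutex and stub helpers stand in for the server's locks and functions):

    #include <ctime>
    #include <mutex>

    /* All names below are placeholders for this sketch, not the server API. */
    static std::mutex lock_log_mutex;
    static bool rotation_needed()              { return true; }
    static void rotate_binlog()                {}
    static void purge_logs_before(std::time_t) {}

    static void rotate_and_purge_sketch(bool force_rotate, long expire_logs_days)
    {
      bool check_purge= false;
      {
        std::lock_guard<std::mutex> guard(lock_log_mutex);   /* LOCK_log in the diff */
        if (force_rotate || rotation_needed())
        {
          rotate_binlog();                                    /* new_file_without_locking() */
          check_purge= true;
        }
      }                                                       /* lock released here */
      if (check_purge && expire_logs_days)
      {
        std::time_t purge_time= std::time(0) - expire_logs_days * 24 * 60 * 60;
        if (purge_time >= 0)
          purge_logs_before(purge_time);                      /* runs without the lock held */
      }
    }
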
@@ -4514,7 +4748,7 @@ bool MYSQL_BIN_LOG::write_incident(THD *
Incident_log_event ev(thd, incident, write_error_msg);
if (lock)
pthread_mutex_lock(&LOCK_log);
- ev.write(&log_file);
+ error= ev.write(&log_file);
if (lock)
{
if (!error && !(error= flush_and_sync()))
@@ -4834,11 +5068,11 @@ bool flush_error_log()
if (opt_error_log)
{
char err_renamed[FN_REFLEN], *end;
- end= strmake(err_renamed,log_error_file,FN_REFLEN-4);
+ end= strmake(err_renamed,log_error_file,FN_REFLEN-5);
strmov(end, "-old");
VOID(pthread_mutex_lock(&LOCK_error_log));
#ifdef __WIN__
- char err_temp[FN_REFLEN+4];
+ char err_temp[FN_REFLEN+5];
/*
On Windows a temporary file is necessary in order to rename
the current error file.
@@ -5563,7 +5797,7 @@ int TC_LOG_BINLOG::open(const char *opt_
if (using_heuristic_recover())
{
/* generate a new binlog to mask a corrupted one */
- open(opt_name, LOG_BIN, 0, WRITE_CACHE, 0, max_binlog_size, 0);
+ open(opt_name, LOG_BIN, 0, WRITE_CACHE, 0, max_binlog_size, 0, TRUE);
cleanup();
return 1;
}
=== modified file 'sql/log.h'
--- a/sql/log.h 2009-06-18 13:52:46 +0000
+++ b/sql/log.h 2009-12-04 14:40:42 +0000
@@ -172,6 +172,10 @@ public:
enum_log_type log_type,
const char *new_name,
enum cache_type io_cache_type_arg);
+ bool init_and_set_log_file_name(const char *log_name,
+ const char *new_name,
+ enum_log_type log_type_arg,
+ enum cache_type io_cache_type_arg);
void init(enum_log_type log_type_arg,
enum cache_type io_cache_type_arg);
void close(uint exiting);
@@ -233,14 +237,15 @@ class MYSQL_BIN_LOG: public TC_LOG, priv
pthread_cond_t update_cond;
ulonglong bytes_written;
IO_CACHE index_file;
+ char index_file_name[FN_REFLEN];
/*
- purge_temp is a temp file used in purge_logs so that the index file
+ purge_index_file is a temp file used in purge_logs so that the index file
can be updated before deleting files from disk, yielding better crash
recovery. It is created on demand the first time purge_logs is called
and then reused for subsequent calls. It is cleaned up in cleanup().
*/
- IO_CACHE purge_temp;
- char index_file_name[FN_REFLEN];
+ IO_CACHE purge_index_file;
+ char purge_index_file_name[FN_REFLEN];
/*
The max size before rotation (usable only if log_type == LOG_BIN: binary
logs and relay logs).
@@ -349,9 +354,10 @@ public:
const char *new_name,
enum cache_type io_cache_type_arg,
bool no_auto_events_arg, ulong max_size,
- bool null_created);
+ bool null_created,
+ bool need_mutex);
bool open_index_file(const char *index_file_name_arg,
- const char *log_name);
+ const char *log_name, bool need_mutex);
/* Use this to start writing a new log file */
void new_file();
@@ -384,6 +390,16 @@ public:
ulonglong *decrease_log_space);
int purge_logs_before_date(time_t purge_time);
int purge_first_log(Relay_log_info* rli, bool included);
+ int set_purge_index_file_name(const char *base_file_name);
+ int open_purge_index_file(bool destroy);
+ bool is_inited_purge_index_file();
+ int close_purge_index_file();
+ int clean_purge_index_file();
+ int sync_purge_index_file();
+ int register_purge_index_entry(const char* entry);
+ int register_create_index_entry(const char* entry);
+ int purge_index_entry(THD *thd, ulonglong *decrease_log_space,
+ bool need_mutex);
bool reset_logs(THD* thd);
void close(uint exiting);
=== modified file 'sql/log_event.cc'
--- a/sql/log_event.cc 2010-01-15 15:27:55 +0000
+++ b/sql/log_event.cc 2010-03-04 08:03:07 +0000
@@ -2294,10 +2294,22 @@ bool Query_log_event::write(IO_CACHE* fi
int8store(start, table_map_for_update);
start+= 8;
}
+ if (master_data_written != 0)
+ {
+ /*
+ Q_MASTER_DATA_WRITTEN_CODE only exists in relay logs where the master
+ has binlog_version<4 and the slave has binlog_version=4. See comment
+ for master_data_written in log_event.h for details.
+ */
+ *start++= Q_MASTER_DATA_WRITTEN_CODE;
+ int4store(start, master_data_written);
+ start+= 4;
+ }
+
/*
NOTE: When adding new status vars, please don't forget to update
- the MAX_SIZE_LOG_EVENT_STATUS in log_event.h and update function
- code_name in this file.
+ the MAX_SIZE_LOG_EVENT_STATUS in log_event.h and update the function
+ code_name() in this file.
Here there could be code like
if (command-line-option-which-says-"log_this_variable" && inited)
@@ -2373,7 +2385,8 @@ Query_log_event::Query_log_event(THD* th
auto_increment_offset(thd_arg->variables.auto_increment_offset),
lc_time_names_number(thd_arg->variables.lc_time_names->number),
charset_database_number(0),
- table_map_for_update((ulonglong)thd_arg->table_map_for_update)
+ table_map_for_update((ulonglong)thd_arg->table_map_for_update),
+ master_data_written(0)
{
time_t end_time;
@@ -2497,6 +2510,7 @@ code_name(int code)
case Q_LC_TIME_NAMES_CODE: return "Q_LC_TIME_NAMES_CODE";
case Q_CHARSET_DATABASE_CODE: return "Q_CHARSET_DATABASE_CODE";
case Q_TABLE_MAP_FOR_UPDATE_CODE: return "Q_TABLE_MAP_FOR_UPDATE_CODE";
+ case Q_MASTER_DATA_WRITTEN_CODE: return "Q_MASTER_DATA_WRITTEN_CODE";
}
sprintf(buf, "CODE#%d", code);
return buf;
@@ -2534,7 +2548,7 @@ Query_log_event::Query_log_event(const c
flags2_inited(0), sql_mode_inited(0), charset_inited(0),
auto_increment_increment(1), auto_increment_offset(1),
time_zone_len(0), lc_time_names_number(0), charset_database_number(0),
- table_map_for_update(0)
+ table_map_for_update(0), master_data_written(0)
{
ulong data_len;
uint32 tmp;
@@ -2590,6 +2604,18 @@ Query_log_event::Query_log_event(const c
DBUG_PRINT("info", ("Query_log_event has status_vars_len: %u",
(uint) status_vars_len));
tmp-= 2;
+ }
+ else
+ {
+ /*
+ An event from a master with server version < 5.0 (binlog_version < 4)
+ is relay-logged with the original event size stored in the
+ Q_MASTER_DATA_WRITTEN_CODE status variable. That size is restored
+ when the Q_MASTER_DATA_WRITTEN_CODE-marked event is read back from
+ the relay log.
+ */
+ DBUG_ASSERT(description_event->binlog_version < 4);
+ master_data_written= data_written;
}
/*
We have parsed everything we know in the post header for QUERY_EVENT,
@@ -2681,6 +2707,11 @@ Query_log_event::Query_log_event(const c
table_map_for_update= uint8korr(pos);
pos+= 8;
break;
+ case Q_MASTER_DATA_WRITTEN_CODE:
+ CHECK_SPACE(pos, end, 4);
+ data_written= master_data_written= uint4korr(pos);
+ pos+= 4;
+ break;
default:
/* That's why you must write status vars in growing order of code */
DBUG_PRINT("info",("Query_log_event has unknown status vars (first has\
@@ -3170,7 +3201,18 @@ START SLAVE; . Query: '%s'", expected_er
compare_errors:
- /*
+ /*
+ In the slave thread, we may sometimes execute some DROP / * 40005
+ TEMPORARY * / TABLE that come from parts of binlogs (likely if we
+ use RESET SLAVE or CHANGE MASTER TO), while the temporary table
+ has already been dropped. To ignore such irrelevant "table does
+ not exist errors", we silently clear the error if TEMPORARY was used.
+ */
+ if (thd->lex->sql_command == SQLCOM_DROP_TABLE && thd->lex->drop_temporary &&
+ thd->is_error() && thd->main_da.sql_errno() == ER_BAD_TABLE_ERROR &&
+ !expected_error)
+ thd->main_da.reset_diagnostics_area();
+ /*
If we expected a non-zero error code, and we don't get the same error
code, and it should be ignored or is related to a concurrency issue.
*/
@@ -4005,6 +4047,7 @@ uint Load_log_event::get_query_buffer_le
return
5 + db_len + 3 + // "use DB; "
18 + fname_len + 2 + // "LOAD DATA INFILE 'file''"
+ 11 + // "CONCURRENT "
7 + // LOCAL
9 + // " REPLACE or IGNORE "
13 + table_name_len*2 + // "INTO TABLE `table`"
@@ -4032,6 +4075,9 @@ void Load_log_event::print_query(bool ne
pos= strmov(pos, "LOAD DATA ");
+ if (thd->lex->lock_option == TL_WRITE_CONCURRENT_INSERT)
+ pos= strmov(pos, "CONCURRENT ");
+
if (fn_start)
*fn_start= pos;
@@ -5851,7 +5897,7 @@ Slave_log_event::Slave_log_event(const c
int Slave_log_event::do_apply_event(Relay_log_info const *rli)
{
if (mysql_bin_log.is_open())
- mysql_bin_log.write(this);
+ return mysql_bin_log.write(this);
return 0;
}
#endif /* !MYSQL_CLIENT */
@@ -7598,7 +7644,7 @@ static int rows_event_stmt_cleanup(Relay
(assume the last master's transaction is ignored by the slave because of
replicate-ignore rules).
*/
- thd->binlog_flush_pending_rows_event(true);
+ error= thd->binlog_flush_pending_rows_event(true);
/*
If this event is not in a transaction, the call below will, if some
@@ -7609,7 +7655,7 @@ static int rows_event_stmt_cleanup(Relay
are involved, commit the transaction and flush the pending event to the
binlog.
*/
- error= ha_autocommit_or_rollback(thd, 0);
+ error|= ha_autocommit_or_rollback(thd, error);
/*
Now what if this is not a transactional engine? we still need to
@@ -7913,10 +7959,10 @@ Table_map_log_event::Table_map_log_event
plus one or three bytes (see pack.c:net_store_length) for number of
elements in the field metadata array.
*/
- if (m_field_metadata_size > 255)
- m_data_size+= m_field_metadata_size + 3;
- else
+ if (m_field_metadata_size < 251)
m_data_size+= m_field_metadata_size + 1;
+ else
+ m_data_size+= m_field_metadata_size + 3;
bzero(m_null_bits, num_null_bytes);
for (unsigned int i= 0 ; i < m_table->s->fields ; ++i)
=== modified file 'sql/log_event.h'
--- a/sql/log_event.h 2010-01-15 15:27:55 +0000
+++ b/sql/log_event.h 2010-03-04 08:03:07 +0000
@@ -263,7 +263,8 @@ struct sql_ex_info
1 + 1 + 255 /* type, length, time_zone */ + \
1 + 2 /* type, lc_time_names_number */ + \
1 + 2 /* type, charset_database_number */ + \
- 1 + 8 /* type, table_map_for_update */)
+ 1 + 8 /* type, table_map_for_update */ + \
+ 1 + 4 /* type, master_data_written */)
#define MAX_LOG_EVENT_HEADER ( /* in order of Query_log_event::write */ \
LOG_EVENT_HEADER_LEN + /* write_header */ \
QUERY_HEADER_LEN + /* write_data */ \
@@ -330,6 +331,10 @@ struct sql_ex_info
#define Q_TABLE_MAP_FOR_UPDATE_CODE 9
+#define Q_MASTER_DATA_WRITTEN_CODE 10
+
+/* Intvar event post-header */
+
/* Intvar event data */
#define I_TYPE_OFFSET 0
#define I_VAL_OFFSET 1
@@ -1620,6 +1625,16 @@ public:
statement, for other query statements, this will be zero.
*/
ulonglong table_map_for_update;
+ /*
+ Holds the original length of a Query_log_event that comes from a
+ master of version < 5.0 (i.e., binlog_version < 4). When the IO
+ thread writes the relay log, it augments the Query_log_event with a
+ Q_MASTER_DATA_WRITTEN_CODE status_var that holds the original event
+ length. This field is initialized to non-zero in the SQL thread when
+ it reads this augmented event. The SQL thread does not write
+ Q_MASTER_DATA_WRITTEN_CODE to the slave server's binlog.
+ */
+ uint32 master_data_written;
#ifndef MYSQL_CLIENT
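
Putting the two sides of the new status variable together: the IO thread appends one code byte plus a 4-byte little-endian length (int4store) when it relays an event from a pre-5.0 master, and the SQL thread reads it back with uint4korr to restore data_written. A compact byte-layout sketch with plain pointer arithmetic in place of the mysys macros:

    #include <cstdint>

    enum { Q_MASTER_DATA_WRITTEN_CODE_SKETCH= 10 };   /* value taken from the diff */

    /* Writer side: append "code byte + 4-byte LE length" when non-zero. */
    static unsigned char *write_master_data_written(unsigned char *start,
                                                    uint32_t master_data_written)
    {
      if (master_data_written != 0)
      {
        *start++= Q_MASTER_DATA_WRITTEN_CODE_SKETCH;
        for (int i= 0; i < 4; i++)                    /* little-endian, like int4store */
          *start++= (unsigned char) (master_data_written >> (8 * i));
      }
      return start;
    }

    /* Reader side: recover the original event length from the 4 bytes. */
    static uint32_t read_master_data_written(const unsigned char *pos)
    {
      uint32_t v= 0;
      for (int i= 0; i < 4; i++)                      /* little-endian, like uint4korr */
        v|= (uint32_t) pos[i] << (8 * i);
      return v;
    }
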
@@ -1766,7 +1781,7 @@ private:
@verbatim
(1) USE db;
- (2) LOAD DATA [LOCAL] INFILE 'file_name'
+ (2) LOAD DATA [CONCURRENT] [LOCAL] INFILE 'file_name'
(3) [REPLACE | IGNORE]
(4) INTO TABLE 'table_name'
(5) [FIELDS
=== modified file 'sql/log_event_old.cc'
--- a/sql/log_event_old.cc 2009-12-03 11:19:05 +0000
+++ b/sql/log_event_old.cc 2010-03-04 08:03:07 +0000
@@ -1541,7 +1541,15 @@ int Old_rows_log_event::do_apply_event(R
NOTE: For this new scheme there should be no pending event:
need to add code to assert that is the case.
*/
- thd->binlog_flush_pending_rows_event(false);
+ error= thd->binlog_flush_pending_rows_event(false);
+ if (error)
+ {
+ rli->report(ERROR_LEVEL, ER_SLAVE_FATAL_ERROR,
+ ER(ER_SLAVE_FATAL_ERROR),
+ "call to binlog_flush_pending_rows_event() failed");
+ thd->is_slave_error= 1;
+ DBUG_RETURN(error);
+ }
TABLE_LIST *tables= rli->tables_to_lock;
close_tables_for_reopen(thd, &tables);
@@ -1831,7 +1839,7 @@ int Old_rows_log_event::do_apply_event(R
(assume the last master's transaction is ignored by the slave because of
replicate-ignore rules).
*/
- thd->binlog_flush_pending_rows_event(true);
+ int binlog_error= thd->binlog_flush_pending_rows_event(true);
/*
If this event is not in a transaction, the call below will, if some
@@ -1842,12 +1850,13 @@ int Old_rows_log_event::do_apply_event(R
are involved, commit the transaction and flush the pending event to the
binlog.
*/
- if ((error= ha_autocommit_or_rollback(thd, 0)))
+ if ((error= ha_autocommit_or_rollback(thd, binlog_error)))
rli->report(ERROR_LEVEL, error,
"Error in %s event: commit of row events failed, "
"table `%s`.`%s`",
get_type_str(), m_table->s->db.str,
m_table->s->table_name.str);
+ error|= binlog_error;
/*
Now what if this is not a transactional engine? we still need to
=== modified file 'sql/mysql_priv.h'
--- a/sql/mysql_priv.h 2010-02-10 19:06:24 +0000
+++ b/sql/mysql_priv.h 2010-03-04 08:03:07 +0000
@@ -112,6 +112,10 @@ char* query_table_status(THD *thd,const
#define PREV_BITS(type,A) ((type) (((type) 1 << (A)) -1))
#define all_bits_set(A,B) ((A) & (B) != (B))
+/* Version numbers for deprecation messages */
+#define VER_BETONY "5.5"
+#define VER_CELOSIA "5.6"
+
#define WARN_DEPRECATED(Thd,Ver,Old,New) \
do { \
DBUG_ASSERT(strncmp(Ver, MYSQL_SERVER_VERSION, sizeof(Ver)-1) > 0); \
@@ -121,7 +125,7 @@ char* query_table_status(THD *thd,const
(Old), (Ver), (New)); \
else \
sql_print_warning("The syntax '%s' is deprecated and will be removed " \
- "in MySQL %s. Please use %s instead.", (Old), (Ver), (New)); \
+ "in a future release. Please use %s instead.", (Old), (New)); \
} while(0)
extern MYSQL_PLUGIN_IMPORT CHARSET_INFO *system_charset_info;
@@ -1045,8 +1049,8 @@ check_and_unset_inject_value(int value)
#endif
-void write_bin_log(THD *thd, bool clear_error,
- char const *query, ulong query_length);
+int write_bin_log(THD *thd, bool clear_error,
+ char const *query, ulong query_length);
/* sql_connect.cc */
int check_user(THD *thd, enum enum_server_command command,
@@ -1434,8 +1438,18 @@ bool get_schema_tables_result(JOIN *join
enum enum_schema_table_state executed_place);
enum enum_schema_tables get_schema_table_idx(ST_SCHEMA_TABLE *schema_table);
-#define is_schema_db(X) \
- !my_strcasecmp(system_charset_info, INFORMATION_SCHEMA_NAME.str, (X))
+inline bool is_schema_db(const char *name, size_t len)
+{
+ return (INFORMATION_SCHEMA_NAME.length == len &&
+ !my_strcasecmp(system_charset_info,
+ INFORMATION_SCHEMA_NAME.str, name));
+}
+
+inline bool is_schema_db(const char *name)
+{
+ return !my_strcasecmp(system_charset_info,
+ INFORMATION_SCHEMA_NAME.str, name);
+}
/* sql_prepare.cc */
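
The old macro is replaced by two inline overloads so callers that already know the name length can reject mismatches before the case-insensitive compare. A self-contained analogue of that shape (POSIX strcasecmp stands in for my_strcasecmp):

    #include <cstring>
    #include <strings.h>   /* strcasecmp (POSIX) */

    static const char SCHEMA_NAME[]= "information_schema";
    static const std::size_t SCHEMA_NAME_LEN= sizeof(SCHEMA_NAME) - 1;

    /* Same shape as the new length-aware overload: compare lengths first so
       most non-matches never reach the string comparison. */
    static bool is_schema_db_sketch(const char *name, std::size_t len)
    {
      return SCHEMA_NAME_LEN == len && strcasecmp(SCHEMA_NAME, name) == 0;
    }
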
=== modified file 'sql/mysqld.cc'
--- a/sql/mysqld.cc 2010-03-09 19:22:24 +0000
+++ b/sql/mysqld.cc 2010-03-10 09:12:23 +0000
@@ -1322,7 +1322,6 @@ void clean_up(bool print_message)
lex_free(); /* Free some memory */
item_create_cleanup();
set_var_free();
- free_charsets();
if (!opt_noacl)
{
#ifdef HAVE_DLOPEN
@@ -2033,10 +2032,10 @@ bool one_thread_per_connection_end(THD *
/* It's safe to broadcast outside a lock (COND... is not deleted here) */
DBUG_PRINT("signal", ("Broadcasting COND_thread_count"));
+ DBUG_LEAVE; // Must match DBUG_ENTER()
my_thread_end();
(void) pthread_cond_broadcast(&COND_thread_count);
- DBUG_LEAVE; // Must match DBUG_ENTER()
pthread_exit(0);
return 0; // Avoid compiler warnings
}
@@ -4053,7 +4052,7 @@ a file name for --log-bin-index option",
my_free(opt_bin_logname, MYF(MY_ALLOW_ZERO_PTR));
opt_bin_logname=my_strdup(buf, MYF(0));
}
- if (mysql_bin_log.open_index_file(opt_binlog_index_name, ln))
+ if (mysql_bin_log.open_index_file(opt_binlog_index_name, ln, TRUE))
{
unireg_abort(1);
}
@@ -4225,7 +4224,7 @@ a file name for --log-bin-index option",
}
if (opt_bin_log && mysql_bin_log.open(opt_bin_logname, LOG_BIN, 0,
- WRITE_CACHE, 0, max_binlog_size, 0))
+ WRITE_CACHE, 0, max_binlog_size, 0, TRUE))
unireg_abort(1);
#ifdef HAVE_REPLICATION
@@ -5761,6 +5760,7 @@ enum options_mysqld
OPT_DISCONNECT_SLAVE_EVENT_COUNT, OPT_TC_HEURISTIC_RECOVER,
OPT_ABORT_SLAVE_EVENT_COUNT,
OPT_LOG_BIN_TRUST_FUNCTION_CREATORS,
+ OPT_LOG_BIN_TRUST_FUNCTION_CREATORS_OLD,
OPT_ENGINE_CONDITION_PUSHDOWN, OPT_NDB_CONNECTSTRING,
OPT_NDB_USE_EXACT_COUNT, OPT_NDB_USE_TRANSACTIONS,
OPT_NDB_FORCE_SEND, OPT_NDB_AUTOINCREMENT_PREFETCH_SZ,
@@ -5810,6 +5810,7 @@ enum options_mysqld
OPT_MYISAM_BLOCK_SIZE, OPT_MYISAM_MAX_EXTRA_SORT_FILE_SIZE,
OPT_MYISAM_MAX_SORT_FILE_SIZE, OPT_MYISAM_SORT_BUFFER_SIZE,
OPT_MYISAM_USE_MMAP, OPT_MYISAM_REPAIR_THREADS,
+ OPT_MYISAM_MMAP_SIZE,
OPT_MYISAM_STATS_METHOD,
OPT_PAGECACHE_BUFFER_SIZE,
@@ -5845,6 +5846,7 @@ enum options_mysqld
OPT_EXPIRE_LOGS_DAYS,
OPT_GROUP_CONCAT_MAX_LEN,
OPT_DEFAULT_COLLATION,
+ OPT_DEFAULT_COLLATION_OLD,
OPT_CHARACTER_SET_CLIENT_HANDSHAKE,
OPT_CHARACTER_SET_FILESYSTEM,
OPT_LC_TIME_NAMES,
@@ -5871,6 +5873,9 @@ enum options_mysqld
OPT_TABLE_LOCK_WAIT_TIMEOUT,
OPT_PLUGIN_LOAD,
OPT_PLUGIN_DIR,
+ OPT_SYMBOLIC_LINKS,
+ OPT_WARNINGS,
+ OPT_RECORD_BUFFER_OLD,
OPT_LOG_OUTPUT,
OPT_PORT_OPEN_TIMEOUT,
OPT_PROFILING,
@@ -5897,7 +5902,9 @@ enum options_mysqld
OPT_LOG_SLOW_FILTER,
OPT_GENERAL_LOG_FILE,
OPT_SLOW_QUERY_LOG_FILE,
- OPT_IGNORE_BUILTIN_INNODB
+ OPT_IGNORE_BUILTIN_INNODB,
+ OPT_BINLOG_DIRECT_NON_TRANS_UPDATE,
+ OPT_DEFAULT_CHARACTER_SET_OLD
};
@@ -6050,10 +6057,11 @@ struct my_option my_long_options[] =
{"debug-flush", OPT_DEBUG_FLUSH, "Default debug log with flush after write",
(uchar**) 0, (uchar**) 0, 0, GET_NO_ARG, NO_ARG, 0, 0, 0, 0, 0, 0},
#endif
- {"default-character-set", 'C', "Set the default character set (deprecated option, use --character-set-server instead).",
+ {"default-character-set", OPT_DEFAULT_CHARACTER_SET_OLD,
+ "Set the default character set (deprecated option, use --character-set-server instead).",
(uchar**) &default_character_set_name, (uchar**) &default_character_set_name,
0, GET_STR, REQUIRED_ARG, 0, 0, 0, 0, 0, 0 },
- {"default-collation", OPT_DEFAULT_COLLATION, "Set the default collation (deprecated option, use --collation-server instead).",
+ {"default-collation", OPT_DEFAULT_COLLATION_OLD, "Set the default collation (deprecated option, use --collation-server instead).",
(uchar**) &default_collation_name, (uchar**) &default_collation_name,
0, GET_STR, REQUIRED_ARG, 0, 0, 0, 0, 0, 0 },
{"default-storage-engine", OPT_STORAGE_ENGINE,
@@ -6152,7 +6160,8 @@ Disable with --skip-large-pages.",
#endif
{"init-rpl-role", OPT_INIT_RPL_ROLE, "Set the replication role.", 0, 0, 0,
GET_STR, REQUIRED_ARG, 0, 0, 0, 0, 0, 0},
- {"init-slave", OPT_INIT_SLAVE, "Command(s) that are executed when a slave connects to this master",
+ {"init-slave", OPT_INIT_SLAVE, "Command(s) that are executed by a slave server \
+each time the SQL thread starts.",
(uchar**) &opt_init_slave, (uchar**) &opt_init_slave, 0, GET_STR_ALLOC,
REQUIRED_ARG, 0, 0, 0, 0, 0, 0},
{"language", 'L',
@@ -6192,7 +6201,7 @@ Disable with --skip-large-pages.",
compatibility; the behaviour was also changed to apply only to functions
(and triggers). In a future release this old name could be removed.
*/
- {"log-bin-trust-routine-creators", OPT_LOG_BIN_TRUST_FUNCTION_CREATORS,
+ {"log-bin-trust-routine-creators", OPT_LOG_BIN_TRUST_FUNCTION_CREATORS_OLD,
"(deprecated) Use log-bin-trust-function-creators.",
(uchar**) &trust_function_creators, (uchar**) &trust_function_creators, 0,
GET_BOOL, NO_ARG, 0, 0, 0, 0, 0, 0},
@@ -6745,7 +6754,7 @@ log and this option does nothing anymore
{"transaction-isolation", OPT_TX_ISOLATION,
"Default transaction isolation level.", 0, 0, 0, GET_STR, REQUIRED_ARG, 0,
0, 0, 0, 0, 0},
- {"use-symbolic-links", 's', "Enable symbolic link support. Deprecated option; use --symbolic-links instead.",
+ {"use-symbolic-links", OPT_SYMBOLIC_LINKS, "Enable symbolic link support. Deprecated option; use --symbolic-links instead.",
(uchar**) &my_use_symdir, (uchar**) &my_use_symdir, 0, GET_BOOL, NO_ARG,
IF_VALGRIND(0,1), 0, 0, 0, 0, 0},
{"user", 'u', "Run mysqld daemon as user.", 0, 0, 0, GET_STR, REQUIRED_ARG,
@@ -6755,7 +6764,7 @@ log and this option does nothing anymore
0, 0},
{"version", 'V', "Output version information and exit.", 0, 0, 0, GET_NO_ARG,
NO_ARG, 0, 0, 0, 0, 0, 0},
- {"warnings", 'W', "Deprecated; use --log-warnings instead.",
+ {"warnings", OPT_WARNINGS, "Deprecated; use --log-warnings instead.",
(uchar**) &global_system_variables.log_warnings,
(uchar**) &max_system_variables.log_warnings, 0, GET_ULONG, OPT_ARG,
1, 0, (longlong) ULONG_MAX, 0, 0, 0},
@@ -7033,6 +7042,10 @@ The minimum value for this variable is 4
(uchar**) &max_system_variables.myisam_max_sort_file_size, 0,
GET_ULL, REQUIRED_ARG, (longlong) LONG_MAX, 0, (ulonglong) MAX_FILE_SIZE,
0, 1024*1024, 0},
+ {"myisam_mmap_size", OPT_MYISAM_MMAP_SIZE,
+ "Can be used to restrict the total memory used for memory mmaping of myisam files",
+ (uchar**) &myisam_mmap_size, (uchar**) &myisam_mmap_size, 0,
+ GET_ULL, REQUIRED_ARG, SIZE_T_MAX, MEMMAP_EXTRA_MARGIN, SIZE_T_MAX, 0, 1, 0},
{"myisam_repair_threads", OPT_MYISAM_REPAIR_THREADS,
"Number of threads to use when repairing MyISAM tables. The value of 1 disables parallel repair.",
(uchar**) &global_system_variables.myisam_repair_threads,
@@ -7180,8 +7193,8 @@ The minimum value for this variable is 4
(uchar**) &max_system_variables.read_rnd_buff_size, 0,
GET_ULONG, REQUIRED_ARG, 256*1024L, IO_SIZE*2+MALLOC_OVERHEAD,
INT_MAX32, MALLOC_OVERHEAD, IO_SIZE, 0},
- {"record_buffer", OPT_RECORD_BUFFER,
- "Alias for read_buffer_size",
+ {"record_buffer", OPT_RECORD_BUFFER_OLD,
+ "Alias for read_buffer_size. This variable is deprecated and will be removed in a future release.",
(uchar**) &global_system_variables.read_buff_size,
(uchar**) &max_system_variables.read_buff_size,0, GET_ULONG, REQUIRED_ARG,
128*1024L, IO_SIZE*2+MALLOC_OVERHEAD, INT_MAX32, MALLOC_OVERHEAD, IO_SIZE, 0},
@@ -7305,6 +7318,10 @@ The minimum value for this variable is 4
(uchar**) &max_system_variables.net_wait_timeout, 0, GET_ULONG,
REQUIRED_ARG, NET_WAIT_TIMEOUT, 1, IF_WIN(INT_MAX32/1000, LONG_TIMEOUT),
0, 1, 0},
+ {"binlog-direct-non-transactional-updates", OPT_BINLOG_DIRECT_NON_TRANS_UPDATE,
+ "Causes updates to non-transactional engines using statement format to be written directly to binary log. Before using this option make sure that there are no dependencies between transactional and non-transactional tables such as in the statement INSERT INTO t_myisam SELECT * FROM t_innodb; otherwise, slaves may diverge from the master.",
+ (uchar**) &global_system_variables.binlog_direct_non_trans_update, (uchar**) &max_system_variables.binlog_direct_non_trans_update, 0, GET_BOOL, NO_ARG, 0,
+ 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, GET_NO_ARG, NO_ARG, 0, 0, 0, 0, 0, 0}
};
@@ -8193,6 +8210,9 @@ mysqld_get_one_option(int optid,
opt_endinfo=1; /* unireg: memory allocation */
break;
#endif
+ case '0':
+ WARN_DEPRECATED(NULL, VER_CELOSIA, "--log-long-format", "--log-short-format");
+ break;
case 'a':
global_system_variables.sql_mode= fix_sql_mode(MODE_ANSI);
global_system_variables.tx_isolation= ISO_SERIALIZABLE;
@@ -8200,6 +8220,11 @@ mysqld_get_one_option(int optid,
case 'b':
strmake(mysql_home,argument,sizeof(mysql_home)-1);
break;
+ case OPT_DEFAULT_CHARACTER_SET_OLD: // --default-character-set
+ WARN_DEPRECATED(NULL, VER_CELOSIA,
+ "--default-character-set",
+ "--character-set-server");
+ /* Fall through */
case 'C':
if (default_collation_name == compiled_default_collation_name)
default_collation_name= 0;
@@ -8223,6 +8248,9 @@ mysqld_get_one_option(int optid,
case 'L':
strmake(language, argument, sizeof(language)-1);
break;
+ case 'O':
+ WARN_DEPRECATED(NULL, VER_CELOSIA, "--set-variable", "--variable-name=value");
+ break;
#ifdef HAVE_REPLICATION
case OPT_SLAVE_SKIP_ERRORS:
init_slave_skip_errors(argument);
@@ -8245,6 +8273,9 @@ mysqld_get_one_option(int optid,
print_version();
exit(0);
#endif /*EMBEDDED_LIBRARY*/
+ case OPT_WARNINGS:
+ WARN_DEPRECATED(NULL, VER_CELOSIA, "--warnings", "--log-warnings");
+ /* Note: fall-through to 'W' */
case 'W':
if (!argument)
global_system_variables.log_warnings++;
@@ -8257,6 +8288,18 @@ mysqld_get_one_option(int optid,
test_flags= argument ? (uint) atoi(argument) : 0;
opt_endinfo=1;
break;
+ case (int) OPT_DEFAULT_COLLATION_OLD:
+ WARN_DEPRECATED(NULL, VER_CELOSIA, "--default-collation", "--collation-server");
+ break;
+ case (int) OPT_SAFE_SHOW_DB:
+ WARN_DEPRECATED(NULL, VER_CELOSIA, "--safe-show-database", "GRANT SHOW DATABASES");
+ break;
+ case (int) OPT_LOG_BIN_TRUST_FUNCTION_CREATORS_OLD:
+ WARN_DEPRECATED(NULL, VER_CELOSIA, "--log-bin-trust-routine-creators", "--log-bin-trust-function-creators");
+ break;
+ case (int) OPT_ENABLE_LOCK:
+ WARN_DEPRECATED(NULL, VER_CELOSIA, "--enable-locking", "--external-locking");
+ break;
case (int) OPT_BIG_TABLES:
thd_startup_options|=OPTION_BIG_TABLES;
break;
@@ -8267,6 +8310,7 @@ mysqld_get_one_option(int optid,
opt_myisam_log=1;
break;
case (int) OPT_UPDATE_LOG:
+ WARN_DEPRECATED(NULL, VER_CELOSIA, "--log-update", "--log-bin");
opt_update_log=1;
break;
case (int) OPT_BIN_LOG:
@@ -8438,8 +8482,18 @@ mysqld_get_one_option(int optid,
"give threads different priorities.");
break;
case (int) OPT_SKIP_LOCK:
+ WARN_DEPRECATED(NULL, VER_CELOSIA, "--skip-locking", "--skip-external-locking");
opt_external_locking=0;
break;
+ case (int) OPT_SQL_BIN_UPDATE_SAME:
+ WARN_DEPRECATED(NULL, VER_CELOSIA, "--sql-bin-update-same", "the binary log");
+ break;
+ case (int) OPT_RECORD_BUFFER_OLD:
+ WARN_DEPRECATED(NULL, VER_CELOSIA, "record_buffer", "read_buffer_size");
+ break;
+ case (int) OPT_SYMBOLIC_LINKS:
+ WARN_DEPRECATED(NULL, VER_CELOSIA, "--use-symbolic-links", "--symbolic-links");
+ break;
case (int) OPT_SKIP_HOST_CACHE:
opt_specialflag|= SPECIAL_NO_HOST_CACHE;
break;
@@ -8465,6 +8519,7 @@ mysqld_get_one_option(int optid,
test_flags|=TEST_NO_STACKTRACE;
break;
case (int) OPT_SKIP_SYMLINKS:
+ WARN_DEPRECATED(NULL, VER_CELOSIA, "--skip-symlink", "--skip-symbolic-links");
my_use_symdir=0;
break;
case (int) OPT_BIND_ADDRESS:
@@ -8559,6 +8614,9 @@ mysqld_get_one_option(int optid,
server_id_supplied = 1;
break;
case OPT_DELAY_KEY_WRITE_ALL:
+ WARN_DEPRECATED(NULL, VER_CELOSIA,
+ "--delay-key-write-for-all-tables",
+ "--delay-key-write=ALL");
if (argument != disabled_my_option)
argument= (char*) "ALL";
/* Fall through */
=== modified file 'sql/rpl_injector.cc'
--- a/sql/rpl_injector.cc 2008-04-28 16:24:05 +0000
+++ b/sql/rpl_injector.cc 2010-03-04 08:03:07 +0000
@@ -59,10 +59,14 @@ injector::transaction::~transaction()
my_free(the_memory, MYF(0));
}
+/**
+ @retval 0 transaction committed
+ @retval 1 transaction rolled back
+ */
int injector::transaction::commit()
{
DBUG_ENTER("injector::transaction::commit()");
- m_thd->binlog_flush_pending_rows_event(true);
+ int error= m_thd->binlog_flush_pending_rows_event(true);
/*
Cluster replication does not preserve statement or
transaction boundaries of the master. Instead, a new
@@ -82,9 +86,9 @@ int injector::transaction::commit()
is committed by committing the statement transaction
explicitly.
*/
- ha_autocommit_or_rollback(m_thd, 0);
- end_trans(m_thd, COMMIT);
- DBUG_RETURN(0);
+ error |= ha_autocommit_or_rollback(m_thd, error);
+ end_trans(m_thd, error ? ROLLBACK : COMMIT);
+ DBUG_RETURN(error);
}
int injector::transaction::use_table(server_id_type sid, table tbl)
@@ -110,16 +114,17 @@ int injector::transaction::write_row (se
record_type record)
{
DBUG_ENTER("injector::transaction::write_row(...)");
-
- if (int error= check_state(ROW_STATE))
+
+ int error= check_state(ROW_STATE);
+ if (error)
DBUG_RETURN(error);
server_id_type save_id= m_thd->server_id;
m_thd->set_server_id(sid);
- m_thd->binlog_write_row(tbl.get_table(), tbl.is_transactional(),
- cols, colcnt, record);
+ error= m_thd->binlog_write_row(tbl.get_table(), tbl.is_transactional(),
+ cols, colcnt, record);
m_thd->set_server_id(save_id);
- DBUG_RETURN(0);
+ DBUG_RETURN(error);
}
@@ -129,15 +134,16 @@ int injector::transaction::delete_row(se
{
DBUG_ENTER("injector::transaction::delete_row(...)");
- if (int error= check_state(ROW_STATE))
+ int error= check_state(ROW_STATE);
+ if (error)
DBUG_RETURN(error);
server_id_type save_id= m_thd->server_id;
m_thd->set_server_id(sid);
- m_thd->binlog_delete_row(tbl.get_table(), tbl.is_transactional(),
- cols, colcnt, record);
+ error= m_thd->binlog_delete_row(tbl.get_table(), tbl.is_transactional(),
+ cols, colcnt, record);
m_thd->set_server_id(save_id);
- DBUG_RETURN(0);
+ DBUG_RETURN(error);
}
@@ -147,15 +153,16 @@ int injector::transaction::update_row(se
{
DBUG_ENTER("injector::transaction::update_row(...)");
- if (int error= check_state(ROW_STATE))
+ int error= check_state(ROW_STATE);
+ if (error)
DBUG_RETURN(error);
server_id_type save_id= m_thd->server_id;
m_thd->set_server_id(sid);
- m_thd->binlog_update_row(tbl.get_table(), tbl.is_transactional(),
- cols, colcnt, before, after);
+ error= m_thd->binlog_update_row(tbl.get_table(), tbl.is_transactional(),
+ cols, colcnt, before, after);
m_thd->set_server_id(save_id);
- DBUG_RETURN(0);
+ DBUG_RETURN(error);
}
=== modified file 'sql/rpl_record.cc'
--- a/sql/rpl_record.cc 2010-01-28 11:35:10 +0000
+++ b/sql/rpl_record.cc 2010-03-04 08:03:07 +0000
@@ -231,6 +231,22 @@ unpack_row(Relay_log_info const *rli,
{
DBUG_PRINT("debug", ("Was NULL; null mask: 0x%x; null bits: 0x%x",
null_mask, null_bits));
+ /**
+ Calling reset just in case we are unpacking on top of a
+ record that still holds data.
+
+ This could probably go into set_null(), but doing so
+ (i) triggers assertions in other parts of the code at
+ the moment, and (ii) it would make us reset the field
+ every time it is set to null, which right now doesn't
+ seem to be needed anywhere else except here.
+
+ TODO: maybe in the future we should consider moving
+ the reset to make it part of set_null. But then
+ the assertions triggered need to be
+ addressed/revisited.
+ */
+ f->reset();
f->set_null();
}
else
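
The new f->reset() call matters when a NULL column is unpacked on top of a record that still holds bytes from a previous row: setting only the NULL flag would leave stale data in the field buffer. A toy illustration of the idea (a small struct instead of Field):

    #include <cassert>
    #include <cstring>

    struct ToyField
    {
      char buf[8];
      bool is_null;
      void reset()    { std::memset(buf, 0, sizeof(buf)); }
      void set_null() { is_null= true; }
    };

    int main()
    {
      ToyField f= {{'o','l','d',' ','d','a','t','a'}, false};
      /* Unpacking a NULL column over old data: reset first, then set_null,
         so nothing later relies on stale bytes in the record buffer. */
      f.reset();
      f.set_null();
      assert(f.is_null && f.buf[0] == '\0');
      return 0;
    }
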
=== modified file 'sql/rpl_rli.cc'
--- a/sql/rpl_rli.cc 2010-01-15 15:27:55 +0000
+++ b/sql/rpl_rli.cc 2010-03-04 08:03:07 +0000
@@ -177,10 +177,10 @@ a file name for --relay-log-index option
note, that if open() fails, we'll still have index file open
but a destructor will take care of that
*/
- if (rli->relay_log.open_index_file(opt_relaylog_index_name, ln) ||
+ if (rli->relay_log.open_index_file(opt_relaylog_index_name, ln, TRUE) ||
rli->relay_log.open(ln, LOG_BIN, 0, SEQ_READ_APPEND, 0,
(max_relay_log_size ? max_relay_log_size :
- max_binlog_size), 1))
+ max_binlog_size), 1, TRUE))
{
pthread_mutex_unlock(&rli->data_lock);
sql_print_error("Failed in open_log() called from init_relay_log_info()");
@@ -1017,7 +1017,7 @@ err:
false - condition not met
*/
-bool Relay_log_info::is_until_satisfied(my_off_t master_beg_pos)
+bool Relay_log_info::is_until_satisfied(THD *thd, Log_event *ev)
{
const char *log_name;
ulonglong log_pos;
@@ -1027,8 +1027,12 @@ bool Relay_log_info::is_until_satisfied(
if (until_condition == UNTIL_MASTER_POS)
{
+ if (ev && ev->server_id == (uint32) ::server_id && !replicate_same_server_id)
+ DBUG_RETURN(FALSE);
log_name= group_master_log_name;
- log_pos= master_beg_pos;
+ log_pos= (!ev)? group_master_log_pos :
+ ((thd->options & OPTION_BEGIN || !ev->log_pos) ?
+ group_master_log_pos : ev->log_pos - ev->data_written);
}
else
{ /* until_condition == UNTIL_RELAY_POS */
=== modified file 'sql/rpl_rli.h'
--- a/sql/rpl_rli.h 2009-05-19 09:28:05 +0000
+++ b/sql/rpl_rli.h 2010-03-04 08:03:07 +0000
@@ -303,7 +303,7 @@ public:
void close_temporary_tables();
/* Check if UNTIL condition is satisfied. See slave.cc for more. */
- bool is_until_satisfied(my_off_t master_beg_pos);
+ bool is_until_satisfied(THD *thd, Log_event *ev);
inline ulonglong until_pos()
{
return ((until_condition == UNTIL_MASTER_POS) ? group_master_log_pos :
=== modified file 'sql/rpl_utility.h'
--- a/sql/rpl_utility.h 2009-03-25 10:53:56 +0000
+++ b/sql/rpl_utility.h 2010-01-05 06:25:29 +0000
@@ -95,6 +95,7 @@ public:
case MYSQL_TYPE_LONG_BLOB:
case MYSQL_TYPE_DOUBLE:
case MYSQL_TYPE_FLOAT:
+ case MYSQL_TYPE_GEOMETRY:
{
/*
These types store a single byte.
=== modified file 'sql/set_var.cc'
--- a/sql/set_var.cc 2010-03-08 13:57:32 +0000
+++ b/sql/set_var.cc 2010-03-09 19:23:30 +0000
@@ -151,6 +151,7 @@ static void sys_default_general_log_path
static bool sys_update_slow_log_path(THD *thd, set_var * var);
static void sys_default_slow_log_path(THD *thd, enum_var_type type);
static void fix_sys_log_slow_filter(THD *thd, enum_var_type);
+static uchar *get_myisam_mmap_size(THD *thd);
/*
Variable definition list
@@ -184,6 +185,8 @@ static sys_var_long_ptr sys_binlog_cache
&binlog_cache_size);
static sys_var_thd_binlog_format sys_binlog_format(&vars, "binlog_format",
&SV::binlog_format);
+static sys_var_thd_bool sys_binlog_direct_non_trans_update(&vars, "binlog_direct_non_transactional_updates",
+ &SV::binlog_direct_non_trans_update);
static sys_var_thd_ulong sys_bulk_insert_buff_size(&vars, "bulk_insert_buffer_size",
&SV::bulk_insert_buff_size);
static sys_var_const_os sys_character_sets_dir(&vars,
@@ -939,6 +942,10 @@ sys_var_str sys_var_slow_log_path(&vars,
opt_slow_logname);
static sys_var_log_output sys_var_log_output_state(&vars, "log_output", &log_output_options,
&log_output_typelib, 0);
+static sys_var_readonly sys_myisam_mmap_size(&vars, "myisam_mmap_size",
+ OPT_GLOBAL,
+ SHOW_LONGLONG,
+ get_myisam_mmap_size);
bool sys_var::check(THD *thd, set_var *var)
@@ -3275,6 +3282,12 @@ static uchar *get_tmpdir(THD *thd)
return (uchar*)mysql_tmpdir;
}
+static uchar *get_myisam_mmap_size(THD *thd)
+{
+ return (uchar *)&myisam_mmap_size;
+}
+
+
/****************************************************************************
Main handling of variables:
- Initialisation
@@ -4183,7 +4196,7 @@ bool process_key_caches(process_key_cach
void sys_var_trust_routine_creators::warn_deprecated(THD *thd)
{
- WARN_DEPRECATED(thd, "6.0", "@@log_bin_trust_routine_creators",
+ WARN_DEPRECATED(thd, VER_CELOSIA, "@@log_bin_trust_routine_creators",
"'@@log_bin_trust_function_creators'");
}
=== modified file 'sql/share/errmsg.txt'
--- a/sql/share/errmsg.txt 2009-12-06 17:26:12 +0000
+++ b/sql/share/errmsg.txt 2010-03-04 08:03:07 +0000
@@ -5138,11 +5138,11 @@ ER_SP_BADSTATEMENT 0A000
eng "%s is not allowed in stored procedures"
ger "%s ist in gespeicherten Prozeduren nicht erlaubt"
ER_UPDATE_LOG_DEPRECATED_IGNORED 42000
- eng "The update log is deprecated and replaced by the binary log; SET SQL_LOG_UPDATE has been ignored"
- ger "Das Update-Log ist veraltet und wurde durch das Bin�Log ersetzt. SET SQL_LOG_UPDATE wird ignoriert"
+ eng "The update log is deprecated and replaced by the binary log; SET SQL_LOG_UPDATE has been ignored. This option will be removed in MySQL 5.6."
+ ger "Das Update-Log ist veraltet und wurde durch das Bin�Log ersetzt. SET SQL_LOG_UPDATE wird ignoriert. Diese Option wird in MySQL 5.6 entfernt."
ER_UPDATE_LOG_DEPRECATED_TRANSLATED 42000
- eng "The update log is deprecated and replaced by the binary log; SET SQL_LOG_UPDATE has been translated to SET SQL_LOG_BIN"
- ger "Das Update-Log ist veraltet und wurde durch das Bin�Log ersetzt. SET SQL_LOG_UPDATE wurde in SET SQL_LOG_BIN �tzt"
+ eng "The update log is deprecated and replaced by the binary log; SET SQL_LOG_UPDATE has been translated to SET SQL_LOG_BIN. This option will be removed in MySQL 5.6."
+ ger "Das Update-Log ist veraltet und wurde durch das Bin�Log ersetzt. SET SQL_LOG_UPDATE wurde in SET SQL_LOG_BIN �tzt. Diese Option wird in MySQL 5.6 entfernt."
ER_QUERY_INTERRUPTED 70100
eng "Query execution was interrupted"
ger "Ausf� der Abfrage wurde unterbrochen"
@@ -5696,8 +5696,8 @@ ER_PARTITION_WRONG_NO_SUBPART_ERROR
eng "Wrong number of subpartitions defined, mismatch with previous setting"
ger "Falsche Anzahl von Unterpartitionen definiert, stimmt nicht mit vorherigen Einstellungen �n"
swe "Antal subpartitioner definierade och antal subpartitioner �inte lika"
-ER_CONST_EXPR_IN_PARTITION_FUNC_ERROR
- eng "Constant/Random expression in (sub)partitioning function is not allowed"
+ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR
+ eng "Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed"
ger "Konstante oder Random-Ausdr�n (Unter-)Partitionsfunktionen sind nicht erlaubt"
swe "Konstanta uttryck eller slumpm�iga uttryck �inte till�a (sub)partitioneringsfunktioner"
ER_NO_CONST_EXPR_IN_RANGE_OR_LIST_ERROR
=== modified file 'sql/slave.cc'
--- a/sql/slave.cc 2010-01-29 10:42:31 +0000
+++ b/sql/slave.cc 2010-03-04 08:03:07 +0000
@@ -2218,9 +2218,7 @@ static int exec_relay_log_event(THD* thd
hits the UNTIL barrier.
*/
if (rli->until_condition != Relay_log_info::UNTIL_NONE &&
- rli->is_until_satisfied((rli->is_in_group() || !ev->log_pos) ?
- rli->group_master_log_pos :
- ev->log_pos - ev->data_written))
+ rli->is_until_satisfied(thd, ev))
{
char buf[22];
sql_print_information("Slave SQL thread stopped because it reached its"
@@ -2963,7 +2961,7 @@ log '%s' at position %s, relay log '%s'
*/
pthread_mutex_lock(&rli->data_lock);
if (rli->until_condition != Relay_log_info::UNTIL_NONE &&
- rli->is_until_satisfied(rli->group_master_log_pos))
+ rli->is_until_satisfied(thd, NULL))
{
char buf[22];
sql_print_information("Slave SQL thread stopped because it reached its"
=== modified file 'sql/sp.cc'
--- a/sql/sp.cc 2009-11-21 11:18:21 +0000
+++ b/sql/sp.cc 2010-01-25 02:55:05 +0000
@@ -896,10 +896,13 @@ sp_create_routine(THD *thd, int type, sp
bool store_failed= FALSE;
+ bool save_binlog_row_based;
+
DBUG_ENTER("sp_create_routine");
DBUG_PRINT("enter", ("type: %d name: %.*s",type, (int) sp->m_name.length,
sp->m_name.str));
String retstr(64);
+ retstr.set_charset(system_charset_info);
DBUG_ASSERT(type == TYPE_ENUM_PROCEDURE ||
type == TYPE_ENUM_FUNCTION);
@@ -912,6 +915,7 @@ sp_create_routine(THD *thd, int type, sp
row-based replication. The flag will be reset at the end of the
statement.
*/
+ save_binlog_row_based= thd->current_stmt_binlog_row_based;
thd->clear_current_stmt_binlog_row_based();
saved_count_cuted_fields= thd->count_cuted_fields;
@@ -1104,9 +1108,10 @@ sp_create_routine(THD *thd, int type, sp
/* restore sql_mode when binloging */
thd->variables.sql_mode= saved_mode;
/* Such a statement can always go directly to binlog, no trans cache */
- thd->binlog_query(THD::MYSQL_QUERY_TYPE,
- log_query.c_ptr(), log_query.length(),
- FALSE, FALSE, 0);
+ if (thd->binlog_query(THD::MYSQL_QUERY_TYPE,
+ log_query.c_ptr(), log_query.length(),
+ FALSE, FALSE, 0))
+ ret= SP_INTERNAL_ERROR;
thd->variables.sql_mode= 0;
}
@@ -1117,6 +1122,8 @@ done:
thd->variables.sql_mode= saved_mode;
close_thread_tables(thd);
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(ret);
}
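
sp.cc and sql_acl.cc now save thd->current_stmt_binlog_row_based before clearing it and put it back on every exit path, rather than relying on end-of-statement cleanup. The recurring pattern, reduced to a sketch with a plain struct in place of THD:

    /* Sketch of the save/clear/restore pattern used across these hunks. */
    struct ToyTHD { bool current_stmt_binlog_row_based; };

    static int create_routine_sketch(ToyTHD *thd)
    {
      bool save_binlog_row_based= thd->current_stmt_binlog_row_based;
      thd->current_stmt_binlog_row_based= false;  /* clear_current_stmt_binlog_row_based() */

      int ret= 0;
      /* ... open tables, store the routine, write_bin_log(), ... */

      thd->current_stmt_binlog_row_based= save_binlog_row_based;  /* on every return path */
      return ret;
    }
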
@@ -1141,6 +1148,7 @@ sp_drop_routine(THD *thd, int type, sp_n
{
TABLE *table;
int ret;
+ bool save_binlog_row_based;
DBUG_ENTER("sp_drop_routine");
DBUG_PRINT("enter", ("type: %d name: %.*s",
type, (int) name->m_name.length, name->m_name.str));
@@ -1153,6 +1161,7 @@ sp_drop_routine(THD *thd, int type, sp_n
row-based replication. The flag will be reset at the end of the
statement.
*/
+ save_binlog_row_based= thd->current_stmt_binlog_row_based;
thd->clear_current_stmt_binlog_row_based();
if (!(table= open_proc_table_for_update(thd)))
@@ -1165,11 +1174,14 @@ sp_drop_routine(THD *thd, int type, sp_n
if (ret == SP_OK)
{
- write_bin_log(thd, TRUE, thd->query(), thd->query_length());
+ if (write_bin_log(thd, TRUE, thd->query(), thd->query_length()))
+ ret= SP_INTERNAL_ERROR;
sp_cache_invalidate();
}
close_thread_tables(thd);
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(ret);
}
@@ -1196,6 +1208,7 @@ sp_update_routine(THD *thd, int type, sp
{
TABLE *table;
int ret;
+ bool save_binlog_row_based;
DBUG_ENTER("sp_update_routine");
DBUG_PRINT("enter", ("type: %d name: %.*s",
type, (int) name->m_name.length, name->m_name.str));
@@ -1207,6 +1220,7 @@ sp_update_routine(THD *thd, int type, sp
row-based replication. The flag will be reset at the end of the
statement.
*/
+ save_binlog_row_based= thd->current_stmt_binlog_row_based;
thd->clear_current_stmt_binlog_row_based();
if (!(table= open_proc_table_for_update(thd)))
@@ -1235,11 +1249,14 @@ sp_update_routine(THD *thd, int type, sp
if (ret == SP_OK)
{
- write_bin_log(thd, TRUE, thd->query(), thd->query_length());
+ if (write_bin_log(thd, TRUE, thd->query(), thd->query_length()))
+ ret= SP_INTERNAL_ERROR;
sp_cache_invalidate();
}
close_thread_tables(thd);
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(ret);
}
@@ -1403,6 +1420,7 @@ sp_find_routine(THD *thd, int type, sp_n
64 -- size of "returns" column of mysql.proc.
*/
String retstr(64);
+ retstr.set_charset(sp->get_creation_ctx()->get_client_cs());
DBUG_PRINT("info", ("found: 0x%lx", (ulong)sp));
if (sp->m_first_free_instance)
=== modified file 'sql/sp_head.cc'
--- a/sql/sp_head.cc 2010-01-15 15:27:55 +0000
+++ b/sql/sp_head.cc 2010-03-04 08:03:07 +0000
@@ -1790,6 +1790,7 @@ sp_head::execute_function(THD *thd, Item
push_warning(thd, MYSQL_ERROR::WARN_LEVEL_WARN, ER_UNKNOWN_ERROR,
"Invoked ROUTINE modified a transactional table but MySQL "
"failed to reflect this change in the binary log");
+ err_status= TRUE;
}
reset_dynamic(&thd->user_var_events);
/* Forget those values, in case more function calls are binlogged: */
@@ -2772,8 +2773,15 @@ sp_lex_keeper::reset_lex_and_exec_core(T
m_lex->mark_as_requiring_prelocking(NULL);
}
thd->rollback_item_tree_changes();
- /* Update the state of the active arena. */
- thd->stmt_arena->state= Query_arena::EXECUTED;
+ /*
+ Update the state of the active arena if no errors occurred
+ during the open_tables stage.
+ */
+ if (!res || !thd->is_error() ||
+ (thd->main_da.sql_errno() != ER_CANT_REOPEN_TABLE &&
+ thd->main_da.sql_errno() != ER_NO_SUCH_TABLE &&
+ thd->main_da.sql_errno() != ER_UPDATE_TABLE_USED))
+ thd->stmt_arena->state= Query_arena::EXECUTED;
/*
Merge here with the saved parent's values
=== modified file 'sql/sp_pcontext.h'
--- a/sql/sp_pcontext.h 2009-04-29 02:59:10 +0000
+++ b/sql/sp_pcontext.h 2009-12-18 18:44:24 +0000
@@ -71,7 +71,7 @@ typedef struct sp_label
typedef struct sp_cond_type
{
enum { number, state, warning, notfound, exception } type;
- char sqlstate[6];
+ char sqlstate[SQLSTATE_LENGTH+1];
uint mysqlerr;
} sp_cond_type_t;
=== modified file 'sql/sql_acl.cc'
--- a/sql/sql_acl.cc 2010-01-15 15:27:55 +0000
+++ b/sql/sql_acl.cc 2010-03-04 08:03:07 +0000
@@ -310,7 +310,7 @@ static my_bool acl_load(THD *thd, TABLE_
{
TABLE *table;
READ_RECORD read_record_info;
- my_bool return_val= 1;
+ my_bool return_val= TRUE;
bool check_no_resolve= specialflag & SPECIAL_NO_RESOLVE;
char tmp_name[NAME_LEN+1];
int password_length;
@@ -623,7 +623,7 @@ static my_bool acl_load(THD *thd, TABLE_
init_check_host();
initialized=1;
- return_val=0;
+ return_val= FALSE;
end:
thd->variables.sql_mode= old_sql_mode;
@@ -674,7 +674,7 @@ my_bool acl_reload(THD *thd)
DYNAMIC_ARRAY old_acl_hosts,old_acl_users,old_acl_dbs;
MEM_ROOT old_mem;
bool old_initialized;
- my_bool return_val= 1;
+ my_bool return_val= TRUE;
DBUG_ENTER("acl_reload");
if (thd->locked_tables)
@@ -701,8 +701,13 @@ my_bool acl_reload(THD *thd)
if (simple_open_n_lock_tables(thd, tables))
{
- sql_print_error("Fatal error: Can't open and lock privilege tables: %s",
- thd->main_da.message());
+ /*
+ Execution might have been interrupted; only print the error message
+ if an error condition has been raised.
+ */
+ if (thd->main_da.is_error())
+ sql_print_error("Fatal error: Can't open and lock privilege tables: %s",
+ thd->main_da.message());
goto end;
}
@@ -1061,7 +1066,7 @@ int acl_getroot(THD *thd, USER_RESOURCES
*mqh= acl_user->user_resource;
if (acl_user->host.hostname)
- strmake(sctx->priv_host, acl_user->host.hostname, MAX_HOSTNAME);
+ strmake(sctx->priv_host, acl_user->host.hostname, MAX_HOSTNAME - 1);
else
*sctx->priv_host= 0;
}
@@ -1162,7 +1167,7 @@ bool acl_getroot_no_password(Security_co
sctx->priv_user= acl_user->user ? user : (char *) "";
if (acl_user->host.hostname)
- strmake(sctx->priv_host, acl_user->host.hostname, MAX_HOSTNAME);
+ strmake(sctx->priv_host, acl_user->host.hostname, MAX_HOSTNAME - 1);
else
*sctx->priv_host= 0;
}
@@ -1655,8 +1660,8 @@ bool change_password(THD *thd, const cha
acl_user->host.hostname ? acl_user->host.hostname : "",
new_password));
thd->clear_error();
- thd->binlog_query(THD::MYSQL_QUERY_TYPE, buff, query_length,
- FALSE, FALSE, 0);
+ result= thd->binlog_query(THD::MYSQL_QUERY_TYPE, buff, query_length,
+ FALSE, FALSE, 0);
}
end:
close_thread_tables(thd);
@@ -2974,6 +2979,7 @@ int mysql_table_grant(THD *thd, TABLE_LI
TABLE_LIST tables[3];
bool create_new_users=0;
char *db_name, *table_name;
+ bool save_binlog_row_based;
DBUG_ENTER("mysql_table_grant");
if (!initialized)
@@ -3069,6 +3075,7 @@ int mysql_table_grant(THD *thd, TABLE_LI
row-based replication. The flag will be reset at the end of the
statement.
*/
+ save_binlog_row_based= thd->current_stmt_binlog_row_based;
thd->clear_current_stmt_binlog_row_based();
#ifdef HAVE_REPLICATION
@@ -3084,7 +3091,11 @@ int mysql_table_grant(THD *thd, TABLE_LI
*/
tables[0].updating= tables[1].updating= tables[2].updating= 1;
if (!(thd->spcont || rpl_filter->tables_ok(0, tables)))
+ {
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(FALSE);
+ }
}
#endif
@@ -3097,6 +3108,8 @@ int mysql_table_grant(THD *thd, TABLE_LI
if (simple_open_n_lock_tables(thd,tables))
{ // Should never happen
close_thread_tables(thd); /* purecov: deadcode */
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(TRUE); /* purecov: deadcode */
}
@@ -3213,7 +3226,7 @@ int mysql_table_grant(THD *thd, TABLE_LI
if (!result) /* success */
{
- write_bin_log(thd, TRUE, thd->query(), thd->query_length());
+ result= write_bin_log(thd, TRUE, thd->query(), thd->query_length());
}
rw_unlock(&LOCK_grant);
@@ -3223,6 +3236,8 @@ int mysql_table_grant(THD *thd, TABLE_LI
/* Tables are automatically closed */
thd->lex->restore_backup_query_tables_list(&backup);
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(result);
}
@@ -3251,6 +3266,7 @@ bool mysql_routine_grant(THD *thd, TABLE
TABLE_LIST tables[2];
bool create_new_users=0, result=0;
char *db_name, *table_name;
+ bool save_binlog_row_based;
DBUG_ENTER("mysql_routine_grant");
if (!initialized)
@@ -3286,6 +3302,7 @@ bool mysql_routine_grant(THD *thd, TABLE
row-based replication. The flag will be reset at the end of the
statement.
*/
+ save_binlog_row_based= thd->current_stmt_binlog_row_based;
thd->clear_current_stmt_binlog_row_based();
#ifdef HAVE_REPLICATION
@@ -3301,13 +3318,19 @@ bool mysql_routine_grant(THD *thd, TABLE
*/
tables[0].updating= tables[1].updating= 1;
if (!(thd->spcont || rpl_filter->tables_ok(0, tables)))
+ {
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(FALSE);
+ }
}
#endif
if (simple_open_n_lock_tables(thd,tables))
{ // Should never happen
close_thread_tables(thd);
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(TRUE);
}
@@ -3379,10 +3402,13 @@ bool mysql_routine_grant(THD *thd, TABLE
if (write_to_binlog)
{
- write_bin_log(thd, TRUE, thd->query(), thd->query_length());
+ if (write_bin_log(thd, FALSE, thd->query(), thd->query_length()))
+ result= TRUE;
}
rw_unlock(&LOCK_grant);
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
/* Tables are automatically closed */
DBUG_RETURN(result);
@@ -3397,6 +3423,7 @@ bool mysql_grant(THD *thd, const char *d
char tmp_db[NAME_LEN+1];
bool create_new_users=0;
TABLE_LIST tables[2];
+ bool save_binlog_row_based;
DBUG_ENTER("mysql_grant");
if (!initialized)
{
@@ -3425,6 +3452,7 @@ bool mysql_grant(THD *thd, const char *d
row-based replication. The flag will be reset at the end of the
statement.
*/
+ save_binlog_row_based= thd->current_stmt_binlog_row_based;
thd->clear_current_stmt_binlog_row_based();
#ifdef HAVE_REPLICATION
@@ -3440,13 +3468,19 @@ bool mysql_grant(THD *thd, const char *d
*/
tables[0].updating= tables[1].updating= 1;
if (!(thd->spcont || rpl_filter->tables_ok(0, tables)))
+ {
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(FALSE);
+ }
}
#endif
if (simple_open_n_lock_tables(thd,tables))
{ // This should never happen
close_thread_tables(thd); /* purecov: deadcode */
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(TRUE); /* purecov: deadcode */
}
@@ -3498,7 +3532,7 @@ bool mysql_grant(THD *thd, const char *d
if (!result)
{
- write_bin_log(thd, TRUE, thd->query(), thd->query_length());
+ result= write_bin_log(thd, TRUE, thd->query(), thd->query_length());
}
rw_unlock(&LOCK_grant);
@@ -3506,6 +3540,8 @@ bool mysql_grant(THD *thd, const char *d
if (!result)
my_ok(thd);
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(result);
}
@@ -3766,11 +3802,11 @@ static my_bool grant_reload_procs_priv(T
DBUG_RETURN(TRUE);
}
+ rw_wrlock(&LOCK_grant);
/* Save a copy of the current hash if we need to undo the grant load */
old_proc_priv_hash= proc_priv_hash;
old_func_priv_hash= func_priv_hash;
- rw_wrlock(&LOCK_grant);
if ((return_val= grant_load_procs_priv(table.table)))
{
/* Error; Reverting to old hash */
@@ -5661,6 +5697,7 @@ bool mysql_create_user(THD *thd, List <L
List_iterator <LEX_USER> user_list(list);
TABLE_LIST tables[GRANT_TABLES];
bool some_users_created= FALSE;
+ bool save_binlog_row_based;
DBUG_ENTER("mysql_create_user");
/*
@@ -5668,11 +5705,16 @@ bool mysql_create_user(THD *thd, List <L
row-based replication. The flag will be reset at the end of the
statement.
*/
+ save_binlog_row_based= thd->current_stmt_binlog_row_based;
thd->clear_current_stmt_binlog_row_based();
/* CREATE USER may be skipped on replication client. */
if ((result= open_grant_tables(thd, tables)))
+ {
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(result != 1);
+ }
rw_wrlock(&LOCK_grant);
VOID(pthread_mutex_lock(&acl_cache->lock));
@@ -5710,10 +5752,12 @@ bool mysql_create_user(THD *thd, List <L
my_error(ER_CANNOT_USER, MYF(0), "CREATE USER", wrong_users.c_ptr_safe());
if (some_users_created)
- write_bin_log(thd, FALSE, thd->query(), thd->query_length());
+ result |= write_bin_log(thd, FALSE, thd->query(), thd->query_length());
rw_unlock(&LOCK_grant);
close_thread_tables(thd);
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(result);
}
@@ -5740,6 +5784,7 @@ bool mysql_drop_user(THD *thd, List <LEX
TABLE_LIST tables[GRANT_TABLES];
bool some_users_deleted= FALSE;
ulong old_sql_mode= thd->variables.sql_mode;
+ bool save_binlog_row_based;
DBUG_ENTER("mysql_drop_user");
/*
@@ -5747,11 +5792,16 @@ bool mysql_drop_user(THD *thd, List <LEX
row-based replication. The flag will be reset at the end of the
statement.
*/
+ save_binlog_row_based= thd->current_stmt_binlog_row_based;
thd->clear_current_stmt_binlog_row_based();
/* DROP USER may be skipped on replication client. */
if ((result= open_grant_tables(thd, tables)))
+ {
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(result != 1);
+ }
thd->variables.sql_mode&= ~MODE_PAD_CHAR_TO_FULL_LENGTH;
@@ -5783,11 +5833,13 @@ bool mysql_drop_user(THD *thd, List <LEX
my_error(ER_CANNOT_USER, MYF(0), "DROP USER", wrong_users.c_ptr_safe());
if (some_users_deleted)
- write_bin_log(thd, FALSE, thd->query(), thd->query_length());
+ result |= write_bin_log(thd, FALSE, thd->query(), thd->query_length());
rw_unlock(&LOCK_grant);
close_thread_tables(thd);
thd->variables.sql_mode= old_sql_mode;
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(result);
}
@@ -5814,6 +5866,7 @@ bool mysql_rename_user(THD *thd, List <L
List_iterator <LEX_USER> user_list(list);
TABLE_LIST tables[GRANT_TABLES];
bool some_users_renamed= FALSE;
+ bool save_binlog_row_based;
DBUG_ENTER("mysql_rename_user");
/*
@@ -5821,11 +5874,16 @@ bool mysql_rename_user(THD *thd, List <L
row-based replication. The flag will be reset at the end of the
statement.
*/
+ save_binlog_row_based= thd->current_stmt_binlog_row_based;
thd->clear_current_stmt_binlog_row_based();
/* RENAME USER may be skipped on replication client. */
if ((result= open_grant_tables(thd, tables)))
+ {
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(result != 1);
+ }
rw_wrlock(&LOCK_grant);
VOID(pthread_mutex_lock(&acl_cache->lock));
@@ -5868,10 +5926,12 @@ bool mysql_rename_user(THD *thd, List <L
my_error(ER_CANNOT_USER, MYF(0), "RENAME USER", wrong_users.c_ptr_safe());
if (some_users_renamed && mysql_bin_log.is_open())
- write_bin_log(thd, FALSE, thd->query(), thd->query_length());
+ result |= write_bin_log(thd, FALSE, thd->query(), thd->query_length());
rw_unlock(&LOCK_grant);
close_thread_tables(thd);
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(result);
}
@@ -5896,6 +5956,7 @@ bool mysql_revoke_all(THD *thd, List <L
int result;
ACL_DB *acl_db;
TABLE_LIST tables[GRANT_TABLES];
+ bool save_binlog_row_based;
DBUG_ENTER("mysql_revoke_all");
/*
@@ -5903,10 +5964,15 @@ bool mysql_revoke_all(THD *thd, List <L
row-based replication. The flag will be reset at the end of the
statement.
*/
+ save_binlog_row_based= thd->current_stmt_binlog_row_based;
thd->clear_current_stmt_binlog_row_based();
if ((result= open_grant_tables(thd, tables)))
+ {
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(result != 1);
+ }
rw_wrlock(&LOCK_grant);
VOID(pthread_mutex_lock(&acl_cache->lock));
@@ -6050,15 +6116,19 @@ bool mysql_revoke_all(THD *thd, List <L
VOID(pthread_mutex_unlock(&acl_cache->lock));
- write_bin_log(thd, FALSE, thd->query(), thd->query_length());
+ int binlog_error=
+ write_bin_log(thd, FALSE, thd->query(), thd->query_length());
rw_unlock(&LOCK_grant);
close_thread_tables(thd);
- if (result)
+ /* error for writing binary log has already been reported */
+ if (result && !binlog_error)
my_message(ER_REVOKE_GRANTS, ER(ER_REVOKE_GRANTS), MYF(0));
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
- DBUG_RETURN(result);
+ DBUG_RETURN(result || binlog_error);
}
@@ -6140,6 +6210,7 @@ bool sp_revoke_privileges(THD *thd, cons
TABLE_LIST tables[GRANT_TABLES];
HASH *hash= is_proc ? &proc_priv_hash : &func_priv_hash;
Silence_routine_definer_errors error_handler;
+ bool save_binlog_row_based;
DBUG_ENTER("sp_revoke_privileges");
if ((result= open_grant_tables(thd, tables)))
@@ -6156,6 +6227,7 @@ bool sp_revoke_privileges(THD *thd, cons
row-based replication. The flag will be reset at the end of the
statement.
*/
+ save_binlog_row_based= thd->current_stmt_binlog_row_based;
thd->clear_current_stmt_binlog_row_based();
/* Remove procedure access */
@@ -6192,6 +6264,8 @@ bool sp_revoke_privileges(THD *thd, cons
close_thread_tables(thd);
thd->pop_internal_handler();
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(error_handler.has_errors());
}
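
The ACL hunks above (mysql_create_user(), mysql_drop_user(), mysql_rename_user(), mysql_revoke_all() and sp_revoke_privileges()) all follow one pattern: remember the session's row-based binlogging flag, force statement-based logging while the grant tables are updated, restore the flag on every exit path, and fold the return value of write_bin_log() into the statement result. A minimal sketch of that pattern, with THD reduced to a stand-in struct and the privilege-table work elided:

/* Sketch only: simplified stand-in types, not the real THD/write_bin_log(). */
struct THD_sketch
{
  bool current_stmt_binlog_row_based;
  void clear_current_stmt_binlog_row_based() { current_stmt_binlog_row_based= false; }
};

static bool user_ddl_sketch(THD_sketch *thd, bool table_error, bool binlog_error)
{
  bool save_binlog_row_based= thd->current_stmt_binlog_row_based;
  thd->clear_current_stmt_binlog_row_based();  /* grant tables are logged statement-based */

  bool result= table_error;                    /* stands in for updating the privilege tables */
  result|= binlog_error;                       /* write_bin_log() failures now fail the DDL */

  /* restored before every return, exactly as each early-return branch above does */
  thd->current_stmt_binlog_row_based= save_binlog_row_based;
  return result;
}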
=== modified file 'sql/sql_base.cc'
--- a/sql/sql_base.cc 2010-02-10 19:06:24 +0000
+++ b/sql/sql_base.cc 2010-03-04 08:03:07 +0000
@@ -1336,7 +1336,7 @@ void close_thread_tables(THD *thd)
handled either before writing a query log event (inside
binlog_query()) or when preparing a pending event.
*/
- thd->binlog_flush_pending_rows_event(TRUE);
+ (void)thd->binlog_flush_pending_rows_event(TRUE);
mysql_unlock_tables(thd, thd->lock);
thd->lock=0;
}
@@ -1550,7 +1550,11 @@ void close_temporary_tables(THD *thd)
qinfo.db= db.ptr();
qinfo.db_len= db.length();
thd->variables.character_set_client= cs_save;
- mysql_bin_log.write(&qinfo);
+ if (mysql_bin_log.write(&qinfo))
+ {
+ push_warning(thd, MYSQL_ERROR::WARN_LEVEL_ERROR, MYF(0),
+ "Failed to write the DROP statement for temporary tables to binary log");
+ }
thd->variables.pseudo_thread_id= save_pseudo_thread_id;
}
else
@@ -4049,9 +4053,13 @@ retry:
end = strxmov(strmov(query, "DELETE FROM `"),
share->db.str,"`.`",share->table_name.str,"`", NullS);
int errcode= query_error_code(thd, TRUE);
- thd->binlog_query(THD::STMT_QUERY_TYPE,
- query, (ulong)(end-query),
- FALSE, FALSE, errcode);
+ if (thd->binlog_query(THD::STMT_QUERY_TYPE,
+ query, (ulong)(end-query),
+ FALSE, FALSE, errcode))
+ {
+ my_free(query, MYF(0));
+ goto err;
+ }
my_free(query, MYF(0));
}
else
@@ -5698,7 +5706,8 @@ find_field_in_view(THD *thd, TABLE_LIST
if (!my_strcasecmp(system_charset_info, field_it.name(), name))
{
// in PS use own arena or data will be freed after prepare
- if (register_tree_change && thd->stmt_arena->is_stmt_prepare_or_first_sp_execute())
+ if (register_tree_change &&
+ thd->stmt_arena->is_stmt_prepare_or_first_stmt_execute())
arena= thd->activate_stmt_arena_if_needed(&backup);
/*
create_item() may, or may not create a new Item, depending on
=== modified file 'sql/sql_class.h'
--- a/sql/sql_class.h 2009-12-03 11:34:11 +0000
+++ b/sql/sql_class.h 2010-03-04 08:03:07 +0000
@@ -359,6 +359,7 @@ struct system_variables
ulong ndb_index_stat_cache_entries;
ulong ndb_index_stat_update_freq;
ulong binlog_format; // binlog format for this thd (see enum_binlog_format)
+ my_bool binlog_direct_non_trans_update;
/*
In slave thread we need to know in behalf of which
thread the query is being run to replicate temp tables properly
@@ -558,6 +559,8 @@ public:
{ return state == INITIALIZED_FOR_SP; }
inline bool is_stmt_prepare_or_first_sp_execute() const
{ return (int)state < (int)PREPARED; }
+ inline bool is_stmt_prepare_or_first_stmt_execute() const
+ { return (int)state <= (int)PREPARED; }
inline bool is_first_stmt_execute() const { return state == PREPARED; }
inline bool is_stmt_execute() const
{ return state == PREPARED || state == EXECUTED; }
@@ -2636,7 +2639,7 @@ public:
{}
int prepare(List<Item> &list, SELECT_LEX_UNIT *u);
- void binlog_show_create_table(TABLE **tables, uint count);
+ int binlog_show_create_table(TABLE **tables, uint count);
void store_values(List<Item> &values);
void send_error(uint errcode,const char *err);
bool send_eof();
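
The new Query_arena predicate differs from the existing one only in whether the PREPARED state itself qualifies, which is what lets find_field_in_view() (in the sql_base.cc hunk above) switch to the statement arena on the first execution of a prepared statement rather than only while preparing it; presumably this is what kept view field items from being allocated in the short-lived execution arena. A small illustration; the state values are assumptions made for the example, not the real definition:

/* Illustrative only: assumed ordering, increasing as a statement progresses. */
enum arena_state { INITIALIZED= 0, INITIALIZED_FOR_SP= 1, PREPARED= 2, EXECUTED= 3 };

static bool is_stmt_prepare_or_first_sp_execute(arena_state s)
{ return (int) s <  (int) PREPARED; }   /* old check: PREPARED excluded */

static bool is_stmt_prepare_or_first_stmt_execute(arena_state s)
{ return (int) s <= (int) PREPARED; }   /* new check: PREPARED included */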
=== modified file 'sql/sql_connect.cc'
--- a/sql/sql_connect.cc 2010-02-23 12:04:58 +0000
+++ b/sql/sql_connect.cc 2010-03-04 08:03:07 +0000
@@ -710,7 +710,7 @@ static int check_connection(THD *thd)
ulong server_capabilites;
{
/* buff[] needs to be big enough to hold the server_version variable */
- char buff[SERVER_VERSION_LENGTH + SCRAMBLE_LENGTH + 64];
+ char buff[SERVER_VERSION_LENGTH + 1 + SCRAMBLE_LENGTH + 1 + 64];
server_capabilites= CLIENT_BASIC_FLAGS;
if (opt_using_transactions)
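
The one-line sql_connect.cc change widens the handshake buffer by two bytes: both the server version string and the scramble are written NUL-terminated, so each needs one byte beyond its nominal length. A hedged illustration of the arithmetic, using made-up constants rather than the real server values:

#include <cstddef>

const std::size_t SERVER_VERSION_LENGTH= 60;  /* assumed: version length without '\0' */
const std::size_t SCRAMBLE_LENGTH= 20;        /* assumed: scramble bytes, sent NUL-terminated */

const std::size_t handshake_buf=
    SERVER_VERSION_LENGTH + 1 +               /* version string + '\0' */
    SCRAMBLE_LENGTH + 1 +                     /* scramble + '\0' */
    64;                                       /* fixed-size fields and padding */

static_assert(handshake_buf == SERVER_VERSION_LENGTH + SCRAMBLE_LENGTH + 64 + 2,
              "the old size was exactly these two terminator bytes short");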
=== modified file 'sql/sql_crypt.cc'
--- a/sql/sql_crypt.cc 2009-09-07 20:50:10 +0000
+++ b/sql/sql_crypt.cc 2010-03-04 08:03:07 +0000
@@ -28,14 +28,7 @@
#include "mysql_priv.h"
-SQL_CRYPT::SQL_CRYPT(const char *password, uint length)
-{
- ulong rand_nr[2];
- hash_password(rand_nr,password, length);
- crypt_init(rand_nr);
-}
-
-void SQL_CRYPT::crypt_init(ulong *rand_nr)
+void SQL_CRYPT::init(ulong *rand_nr)
{
uint i;
my_rnd_init(&rand,rand_nr[0],rand_nr[1]);
=== modified file 'sql/sql_crypt.h'
--- a/sql/sql_crypt.h 2009-09-07 20:50:10 +0000
+++ b/sql/sql_crypt.h 2010-03-04 08:03:07 +0000
@@ -23,15 +23,15 @@ class SQL_CRYPT :public Sql_alloc
struct my_rnd_struct rand,org_rand;
char decode_buff[256],encode_buff[256];
uint shift;
- void crypt_init(ulong *seed);
public:
- SQL_CRYPT(const char *seed, uint length);
+ SQL_CRYPT() {}
SQL_CRYPT(ulong *seed)
{
- crypt_init(seed);
+ init(seed);
}
~SQL_CRYPT() {}
- void init() { shift=0; rand=org_rand; }
+ void init(ulong *seed);
+ void reinit() { shift=0; rand=org_rand; }
void encode(char *str, uint length);
void decode(char *str, uint length);
};
=== modified file 'sql/sql_db.cc'
--- a/sql/sql_db.cc 2009-12-03 11:19:05 +0000
+++ b/sql/sql_db.cc 2010-03-04 08:03:07 +0000
@@ -178,13 +178,13 @@ uchar* dboptions_get_key(my_dbopt_t *opt
Helper function to write a query to binlog used by mysql_rm_db()
*/
-static inline void write_to_binlog(THD *thd, char *query, uint q_len,
- char *db, uint db_len)
+static inline int write_to_binlog(THD *thd, char *query, uint q_len,
+ char *db, uint db_len)
{
Query_log_event qinfo(thd, query, q_len, 0, 0, 0);
qinfo.db= db;
qinfo.db_len= db_len;
- mysql_bin_log.write(&qinfo);
+ return mysql_bin_log.write(&qinfo);
}
@@ -618,7 +618,7 @@ int mysql_create_db(THD *thd, char *db,
DBUG_ENTER("mysql_create_db");
/* do not create 'information_schema' db */
- if (!my_strcasecmp(system_charset_info, db, INFORMATION_SCHEMA_NAME.str))
+ if (is_schema_db(db))
{
my_error(ER_DB_CREATE_EXISTS, MYF(0), db);
DBUG_RETURN(-1);
@@ -695,6 +695,7 @@ int mysql_create_db(THD *thd, char *db,
file. In this case it's best to just continue as if nothing has
happened. (This is a very unlikely scenario)
*/
+ thd->clear_error();
}
not_silent:
@@ -746,7 +747,11 @@ not_silent:
qinfo.db_len = strlen(db);
/* These DDL methods and logging protected with LOCK_mysql_create_db */
- mysql_bin_log.write(&qinfo);
+ if (mysql_bin_log.write(&qinfo))
+ {
+ error= -1;
+ goto exit;
+ }
}
my_ok(thd, result);
}
@@ -810,9 +815,9 @@ bool mysql_alter_db(THD *thd, const char
if (mysql_bin_log.is_open())
{
- int errcode= query_error_code(thd, TRUE);
+ thd->clear_error();
Query_log_event qinfo(thd, thd->query(), thd->query_length(), 0,
- /* suppress_use */ TRUE, errcode);
+ /* suppress_use */ TRUE, 0);
/*
Write should use the database being created as the "current
@@ -822,9 +827,9 @@ bool mysql_alter_db(THD *thd, const char
qinfo.db = db;
qinfo.db_len = strlen(db);
- thd->clear_error();
/* These DDL methods and logging protected with LOCK_mysql_create_db */
- mysql_bin_log.write(&qinfo);
+ if ((error= mysql_bin_log.write(&qinfo)))
+ goto exit;
}
my_ok(thd, result);
@@ -962,9 +967,9 @@ bool mysql_rm_db(THD *thd,char *db,bool
}
if (mysql_bin_log.is_open())
{
- int errcode= query_error_code(thd, TRUE);
+ thd->clear_error();
Query_log_event qinfo(thd, query, query_length, 0,
- /* suppress_use */ TRUE, errcode);
+ /* suppress_use */ TRUE, 0);
/*
Write should use the database being created as the "current
database" and not the threads current database, which is the
@@ -973,9 +978,12 @@ bool mysql_rm_db(THD *thd,char *db,bool
qinfo.db = db;
qinfo.db_len = strlen(db);
- thd->clear_error();
/* These DDL methods and logging protected with LOCK_mysql_create_db */
- mysql_bin_log.write(&qinfo);
+ if (mysql_bin_log.write(&qinfo))
+ {
+ error= -1;
+ goto exit;
+ }
}
thd->clear_error();
thd->server_status|= SERVER_STATUS_DB_DROPPED;
@@ -1003,7 +1011,11 @@ bool mysql_rm_db(THD *thd,char *db,bool
if (query_pos + tbl_name_len + 1 >= query_end)
{
/* These DDL methods and logging protected with LOCK_mysql_create_db */
- write_to_binlog(thd, query, query_pos -1 - query, db, db_len);
+ if (write_to_binlog(thd, query, query_pos -1 - query, db, db_len))
+ {
+ error= -1;
+ goto exit;
+ }
query_pos= query_data_start;
}
@@ -1016,7 +1028,11 @@ bool mysql_rm_db(THD *thd,char *db,bool
if (query_pos != query_data_start)
{
/* These DDL methods and logging protected with LOCK_mysql_create_db */
- write_to_binlog(thd, query, query_pos -1 - query, db, db_len);
+ if (write_to_binlog(thd, query, query_pos -1 - query, db, db_len))
+ {
+ error= -1;
+ goto exit;
+ }
}
}
@@ -1554,8 +1570,7 @@ bool mysql_change_db(THD *thd, const LEX
}
}
- if (my_strcasecmp(system_charset_info, new_db_name->str,
- INFORMATION_SCHEMA_NAME.str) == 0)
+ if (is_schema_db(new_db_name->str, new_db_name->length))
{
/* Switch the current database to INFORMATION_SCHEMA. */
@@ -1963,7 +1978,7 @@ bool mysql_upgrade_db(THD *thd, LEX_STRI
Query_log_event qinfo(thd, thd->query(), thd->query_length(),
0, TRUE, errcode);
thd->clear_error();
- mysql_bin_log.write(&qinfo);
+ error|= mysql_bin_log.write(&qinfo);
}
/* Step9: Let's do "use newdb" if we renamed the current database */
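
In the mysql_rm_db() hunks above, dropped-table names are accumulated into one DROP TABLE statement and flushed to the binary log whenever the next name would overflow the buffer; after this patch every flush is checked and a failure aborts the statement with error= -1. A self-contained sketch of that batching loop, with an illustrative buffer size and a stand-in for write_to_binlog():

#include <cstdio>
#include <cstring>

/* Stand-in for write_to_binlog(); non-zero means the write failed. */
static int flush_to_binlog(const char *query, std::size_t len)
{
  return std::printf("binlog: %.*s\n", (int) len, query) < 0;
}

static int drop_tables_sketch(const char *const *names, std::size_t count)
{
  char query[64]= "DROP TABLE IF EXISTS ";
  const std::size_t start= std::strlen(query);
  std::size_t pos= start;

  for (std::size_t i= 0; i < count; i++)
  {
    std::size_t len= std::strlen(names[i]);
    if (len + start + 2 > sizeof(query))
      return -1;                            /* sketch-only guard: single name too long */
    if (pos + len + 1 >= sizeof(query))     /* next name would overflow: flush first */
    {
      if (flush_to_binlog(query, pos - 1))  /* pos - 1 drops the trailing comma */
        return -1;                          /* the write result is no longer ignored */
      pos= start;
    }
    std::memcpy(query + pos, names[i], len);
    pos+= len;
    query[pos++]= ',';
  }
  if (pos != start && flush_to_binlog(query, pos - 1))
    return -1;
  return 0;
}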
=== modified file 'sql/sql_delete.cc'
--- a/sql/sql_delete.cc 2010-02-10 19:06:24 +0000
+++ b/sql/sql_delete.cc 2010-03-04 08:03:07 +0000
@@ -850,9 +850,10 @@ void multi_delete::abort()
if (mysql_bin_log.is_open())
{
int errcode= query_error_code(thd, thd->killed == THD::NOT_KILLED);
- thd->binlog_query(THD::ROW_QUERY_TYPE,
- thd->query(), thd->query_length(),
- transactional_tables, FALSE, errcode);
+ /* possible error of writing binary log is ignored deliberately */
+ (void) thd->binlog_query(THD::ROW_QUERY_TYPE,
+ thd->query(), thd->query_length(),
+ transactional_tables, FALSE, errcode);
}
thd->transaction.all.modified_non_trans_table= true;
}
@@ -1176,8 +1177,9 @@ end:
{
/* In RBR, the statement is not binlogged if the table is temporary. */
if (!is_temporary_table || !thd->current_stmt_binlog_row_based)
- write_bin_log(thd, TRUE, thd->query(), thd->query_length());
- my_ok(thd); // This should return record count
+ error= write_bin_log(thd, TRUE, thd->query(), thd->query_length());
+ if (!error)
+ my_ok(thd); // This should return record count
}
VOID(pthread_mutex_lock(&LOCK_open));
unlock_table_name(thd, table_list);
=== modified file 'sql/sql_insert.cc'
--- a/sql/sql_insert.cc 2010-01-15 15:27:55 +0000
+++ b/sql/sql_insert.cc 2010-03-04 08:03:07 +0000
@@ -807,12 +807,21 @@ bool mysql_insert(THD *thd,TABLE_LIST *t
restore_record(table,s->default_values); // Get empty record
else
{
+ TABLE_SHARE *share= table->s;
+
/*
Fix delete marker. No need to restore rest of record since it will
be overwritten by fill_record() anyway (and fill_record() does not
use default values in this case).
*/
- table->record[0][0]= table->s->default_values[0];
+ table->record[0][0]= share->default_values[0];
+
+ /* Fix undefined null_bits. */
+ if (share->null_bytes > 1 && share->last_null_bit_pos)
+ {
+ table->record[0][share->null_bytes - 1]=
+ share->default_values[share->null_bytes - 1];
+ }
}
if (fill_record_n_invoke_before_triggers(thd, table->field, *values, 0,
table->triggers,
@@ -2765,10 +2774,11 @@ bool Delayed_insert::handle_inserts(void
will be binlogged together as one single Table_map event and one
single Rows event.
*/
- thd.binlog_query(THD::ROW_QUERY_TYPE,
- row->query.str, row->query.length,
- FALSE, FALSE, errcode);
-
+ if (thd.binlog_query(THD::ROW_QUERY_TYPE,
+ row->query.str, row->query.length,
+ FALSE, FALSE, errcode))
+ goto err;
+
thd.time_zone_used = backup_time_zone_used;
thd.variables.time_zone = backup_time_zone;
}
@@ -2837,8 +2847,9 @@ bool Delayed_insert::handle_inserts(void
TODO: Move the logging to last in the sequence of rows.
*/
- if (thd.current_stmt_binlog_row_based)
- thd.binlog_flush_pending_rows_event(TRUE);
+ if (thd.current_stmt_binlog_row_based &&
+ thd.binlog_flush_pending_rows_event(TRUE))
+ goto err;
if ((error=table->file->extra(HA_EXTRA_NO_CACHE)))
{ // This shouldn't happen
@@ -3289,16 +3300,21 @@ bool select_insert::send_eof()
events are in the transaction cache and will be written when
ha_autocommit_or_rollback() is issued below.
*/
- if (mysql_bin_log.is_open())
+ if (mysql_bin_log.is_open() &&
+ (!error || thd->transaction.stmt.modified_non_trans_table))
{
int errcode= 0;
if (!error)
thd->clear_error();
else
errcode= query_error_code(thd, killed_status == THD::NOT_KILLED);
- thd->binlog_query(THD::ROW_QUERY_TYPE,
+ if (thd->binlog_query(THD::ROW_QUERY_TYPE,
thd->query(), thd->query_length(),
- trans_table, FALSE, errcode);
+ trans_table, FALSE, errcode))
+ {
+ table->file->ha_release_auto_increment();
+ DBUG_RETURN(1);
+ }
}
table->file->ha_release_auto_increment();
@@ -3367,9 +3383,10 @@ void select_insert::abort() {
if (mysql_bin_log.is_open())
{
int errcode= query_error_code(thd, thd->killed == THD::NOT_KILLED);
- thd->binlog_query(THD::ROW_QUERY_TYPE, thd->query(),
- thd->query_length(),
- transactional_table, FALSE, errcode);
+ /* error of writing binary log is ignored */
+ (void) thd->binlog_query(THD::ROW_QUERY_TYPE, thd->query(),
+ thd->query_length(),
+ transactional_table, FALSE, errcode);
}
if (!thd->current_stmt_binlog_row_based && !can_rollback_data())
thd->transaction.all.modified_non_trans_table= TRUE;
@@ -3633,7 +3650,8 @@ select_create::prepare(List<Item> &value
!table->s->tmp_table &&
!ptr->get_create_info()->table_existed)
{
- ptr->binlog_show_create_table(tables, count);
+ if (int error= ptr->binlog_show_create_table(tables, count))
+ return error;
}
return 0;
}
@@ -3740,7 +3758,7 @@ select_create::prepare(List<Item> &value
DBUG_RETURN(0);
}
-void
+int
select_create::binlog_show_create_table(TABLE **tables, uint count)
{
/*
@@ -3779,12 +3797,13 @@ select_create::binlog_show_create_table(
if (mysql_bin_log.is_open())
{
int errcode= query_error_code(thd, thd->killed == THD::NOT_KILLED);
- thd->binlog_query(THD::STMT_QUERY_TYPE,
- query.ptr(), query.length(),
- /* is_trans */ TRUE,
- /* suppress_use */ FALSE,
- errcode);
+ result= thd->binlog_query(THD::STMT_QUERY_TYPE,
+ query.ptr(), query.length(),
+ /* is_trans */ TRUE,
+ /* suppress_use */ FALSE,
+ errcode);
}
+ return result;
}
void select_create::store_values(List<Item> &values)
@@ -3882,7 +3901,8 @@ void select_create::abort()
select_insert::abort();
thd->transaction.stmt.modified_non_trans_table= FALSE;
reenable_binlog(thd);
- thd->binlog_flush_pending_rows_event(TRUE);
+ /* possible error of writing binary log is ignored deliberately */
+ (void)thd->binlog_flush_pending_rows_event(TRUE);
if (m_plock)
{
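
The mysql_insert() hunk above initializes not only the delete-marker byte but also the last byte of the null bitmap from the default record: when the bitmap does not end on a byte boundary (share->last_null_bit_pos != 0), the unused tail bits would otherwise remain undefined and could leak garbage into row images. A small model of that initialization; the struct is a stand-in, not the real TABLE_SHARE:

#include <cstddef>
#include <cstdint>

struct share_sketch
{
  std::uint8_t default_values[16];   /* default row image */
  std::size_t  null_bytes;           /* bytes holding the delete marker and null bits */
  std::size_t  last_null_bit_pos;    /* used bits in the last null byte; 0 if it is full */
};

static void init_insert_record(std::uint8_t *record, const share_sketch &s)
{
  record[0]= s.default_values[0];    /* delete marker plus the first null bits */
  if (s.null_bytes > 1 && s.last_null_bit_pos)
    record[s.null_bytes - 1]=        /* also define the partially used last null byte */
        s.default_values[s.null_bytes - 1];
  /* fill_record() overwrites the actual column values afterwards */
}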
=== modified file 'sql/sql_load.cc'
--- a/sql/sql_load.cc 2010-01-15 15:27:55 +0000
+++ b/sql/sql_load.cc 2010-03-04 08:03:07 +0000
@@ -122,7 +122,7 @@ int mysql_load(THD *thd,sql_exchange *ex
char name[FN_REFLEN];
File file;
TABLE *table= NULL;
- int error;
+ int error= 0;
String *field_term=ex->field_term,*escaped=ex->escaped;
String *enclosed=ex->enclosed;
bool is_fifo=0;
@@ -504,18 +504,20 @@ int mysql_load(THD *thd,sql_exchange *ex
{
int errcode= query_error_code(thd, killed_status == THD::NOT_KILLED);
+ /* Since there is already an error, the possible error of
+ writing to the binary log will be ignored */
if (thd->transaction.stmt.modified_non_trans_table)
- write_execute_load_query_log_event(thd, ex,
- table_list->db,
- table_list->table_name,
- handle_duplicates, ignore,
- transactional_table,
- errcode);
+ (void) write_execute_load_query_log_event(thd, ex,
+ table_list->db,
+ table_list->table_name,
+ handle_duplicates, ignore,
+ transactional_table,
+ errcode);
else
{
Delete_file_log_event d(thd, db, transactional_table);
d.flags|= LOG_EVENT_UPDATE_TABLE_MAP_VERSION_F;
- mysql_bin_log.write(&d);
+ (void) mysql_bin_log.write(&d);
}
}
}
@@ -541,7 +543,7 @@ int mysql_load(THD *thd,sql_exchange *ex
after this point.
*/
if (thd->current_stmt_binlog_row_based)
- thd->binlog_flush_pending_rows_event(true);
+ error= thd->binlog_flush_pending_rows_event(true);
else
{
/*
@@ -553,13 +555,15 @@ int mysql_load(THD *thd,sql_exchange *ex
if (lf_info.wrote_create_file)
{
int errcode= query_error_code(thd, killed_status == THD::NOT_KILLED);
- write_execute_load_query_log_event(thd, ex,
- table_list->db, table_list->table_name,
- handle_duplicates, ignore,
- transactional_table,
- errcode);
+ error= write_execute_load_query_log_event(thd, ex,
+ table_list->db, table_list->table_name,
+ handle_duplicates, ignore,
+ transactional_table,
+ errcode);
}
}
+ if (error)
+ goto err;
}
#endif /*!EMBEDDED_LIBRARY*/
@@ -640,7 +644,11 @@ static bool write_execute_load_query_log
if (n++)
pfields.append(", ");
if (item->name)
+ {
+ pfields.append("`");
pfields.append(item->name);
+ pfields.append("`");
+ }
else
item->print(&pfields, QT_ORDINARY);
}
@@ -660,7 +668,9 @@ static bool write_execute_load_query_log
val= lv++;
if (n++)
pfields.append(", ");
+ pfields.append("`");
pfields.append(item->name);
+ pfields.append("`");
pfields.append("=");
val->print(&pfields, QT_ORDINARY);
}
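
The sql_load.cc hunks above wrap every column name in backticks when the LOAD DATA statement is rebuilt for the binary log; without the quoting, a column named after a reserved word would make the replayed statement unparsable on a slave. A small sketch of the effect (std::string stands in for the String object used in write_execute_load_query_log_event(); like the patch, it does not double backticks embedded in the name itself):

#include <cstddef>
#include <string>
#include <vector>

static std::string build_field_list(const std::vector<std::string> &names)
{
  std::string pfields;
  for (std::size_t n= 0; n < names.size(); n++)
  {
    if (n)
      pfields+= ", ";
    pfields+= "`";       /* added by the patch: quote the identifier */
    pfields+= names[n];
    pfields+= "`";
  }
  return pfields;
}

/* build_field_list({"a", "key"}) now yields "`a`, `key`" instead of "a, key",
   which a replica could not re-parse because KEY is a reserved word. */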
=== modified file 'sql/sql_parse.cc'
--- a/sql/sql_parse.cc 2010-01-29 10:42:31 +0000
+++ b/sql/sql_parse.cc 2010-03-04 08:03:07 +0000
@@ -626,8 +626,10 @@ void free_items(Item *item)
DBUG_VOID_RETURN;
}
-/* This works because items are allocated with sql_alloc() */
-
+/**
+ This works because items are allocated with sql_alloc().
+ @note The function also handles null pointers (empty list).
+*/
void cleanup_items(Item *item)
{
DBUG_ENTER("cleanup_items");
@@ -1323,8 +1325,7 @@ bool dispatch_command(enum enum_server_c
table_list.alias= table_list.table_name= conv_name.str;
packet= arg_end + 1;
- if (!my_strcasecmp(system_charset_info, table_list.db,
- INFORMATION_SCHEMA_NAME.str))
+ if (is_schema_db(table_list.db, table_list.db_length))
{
ST_SCHEMA_TABLE *schema_table= find_schema_table(thd, table_list.alias);
if (schema_table)
@@ -1386,7 +1387,7 @@ bool dispatch_command(enum enum_server_c
break;
}
if (check_access(thd, CREATE_ACL, db.str , 0, 1, 0,
- is_schema_db(db.str)))
+ is_schema_db(db.str, db.length)))
break;
general_log_print(thd, command, "%.*s", db.length, db.str);
bzero(&create_info, sizeof(create_info));
@@ -1405,7 +1406,8 @@ bool dispatch_command(enum enum_server_c
my_error(ER_WRONG_DB_NAME, MYF(0), db.str ? db.str : "NULL");
break;
}
- if (check_access(thd, DROP_ACL, db.str, 0, 1, 0, is_schema_db(db.str)))
+ if (check_access(thd, DROP_ACL, db.str, 0, 1, 0,
+ is_schema_db(db.str, db.length)))
break;
if (thd->locked_tables || thd->active_transaction())
{
@@ -2710,6 +2712,8 @@ mysql_execute_command(THD *thd)
{
lex->link_first_table_back(create_table, link_to_local);
create_table->create= TRUE;
+ /* Base table and temporary table are not in the same name space. */
+ create_table->skip_temporary= 1;
}
if (!(res= open_and_lock_tables(thd, lex->query_tables)))
@@ -3041,7 +3045,7 @@ end_with_restore_list:
/*
Presumably, REPAIR and binlog writing don't require synchronization
*/
- write_bin_log(thd, TRUE, thd->query(), thd->query_length());
+ res= write_bin_log(thd, TRUE, thd->query(), thd->query_length());
}
select_lex->table_list.first= (uchar*) first_table;
lex->query_tables=all_tables;
@@ -3075,7 +3079,7 @@ end_with_restore_list:
/*
Presumably, ANALYZE and binlog writing don't require synchronization
*/
- write_bin_log(thd, TRUE, thd->query(), thd->query_length());
+ res= write_bin_log(thd, TRUE, thd->query(), thd->query_length());
}
select_lex->table_list.first= (uchar*) first_table;
lex->query_tables=all_tables;
@@ -3099,7 +3103,7 @@ end_with_restore_list:
/*
Presumably, OPTIMIZE and binlog writing don't require synchronization
*/
- write_bin_log(thd, TRUE, thd->query(), thd->query_length());
+ res= write_bin_log(thd, TRUE, thd->query(), thd->query_length());
}
select_lex->table_list.first= (uchar*) first_table;
lex->query_tables=all_tables;
@@ -3216,7 +3220,7 @@ end_with_restore_list:
if (incident)
{
Incident_log_event ev(thd, incident);
- mysql_bin_log.write(&ev);
+ (void) mysql_bin_log.write(&ev); /* error is ignored */
mysql_bin_log.rotate_and_purge(RP_FORCE_ROTATE);
}
DBUG_PRINT("debug", ("Just after generate_incident()"));
@@ -3409,9 +3413,9 @@ end_with_restore_list:
select_lex->where,
0, (ORDER *)NULL, (ORDER *)NULL, (Item *)NULL,
(ORDER *)NULL,
- select_lex->options | thd->options |
+ (select_lex->options | thd->options |
SELECT_NO_JOIN_CACHE | SELECT_NO_UNLOCK |
- OPTION_SETUP_TABLES_DONE,
+ OPTION_SETUP_TABLES_DONE) & ~OPTION_BUFFER_RESULT,
del_result, unit, select_lex);
res|= thd->is_error();
if (res)
@@ -3434,17 +3438,6 @@ end_with_restore_list:
}
else
{
- /*
- If this is a slave thread, we may sometimes execute some
- DROP / * 40005 TEMPORARY * / TABLE
- that come from parts of binlogs (likely if we use RESET SLAVE or CHANGE
- MASTER TO), while the temporary table has already been dropped.
- To not generate such irrelevant "table does not exist errors",
- we silently add IF EXISTS if TEMPORARY was used.
- */
- if (thd->slave_thread)
- lex->drop_if_exists= 1;
-
/* So that DROP TEMPORARY TABLE gets to binlog at commit/rollback */
thd->options|= OPTION_KEEP_LOG;
}
@@ -3658,7 +3651,7 @@ end_with_restore_list:
}
#endif
if (check_access(thd,CREATE_ACL,lex->name.str, 0, 1, 0,
- is_schema_db(lex->name.str)))
+ is_schema_db(lex->name.str, lex->name.length)))
break;
res= mysql_create_db(thd,(lower_case_table_names == 2 ? alias :
lex->name.str), &create_info, 0);
@@ -3693,7 +3686,7 @@ end_with_restore_list:
}
#endif
if (check_access(thd,DROP_ACL,lex->name.str,0,1,0,
- is_schema_db(lex->name.str)))
+ is_schema_db(lex->name.str, lex->name.length)))
break;
if (thd->locked_tables || thd->active_transaction())
{
@@ -3727,9 +3720,12 @@ end_with_restore_list:
my_error(ER_WRONG_DB_NAME, MYF(0), db->str);
break;
}
- if (check_access(thd, ALTER_ACL, db->str, 0, 1, 0, is_schema_db(db->str)) ||
- check_access(thd, DROP_ACL, db->str, 0, 1, 0, is_schema_db(db->str)) ||
- check_access(thd, CREATE_ACL, db->str, 0, 1, 0, is_schema_db(db->str)))
+ if (check_access(thd, ALTER_ACL, db->str, 0, 1, 0,
+ is_schema_db(db->str, db->length)) ||
+ check_access(thd, DROP_ACL, db->str, 0, 1, 0,
+ is_schema_db(db->str, db->length)) ||
+ check_access(thd, CREATE_ACL, db->str, 0, 1, 0,
+ is_schema_db(db->str, db->length)))
{
res= 1;
break;
@@ -3772,7 +3768,8 @@ end_with_restore_list:
break;
}
#endif
- if (check_access(thd, ALTER_ACL, db->str, 0, 1, 0, is_schema_db(db->str)))
+ if (check_access(thd, ALTER_ACL, db->str, 0, 1, 0,
+ is_schema_db(db->str, db->length)))
break;
if (thd->locked_tables || thd->active_transaction())
{
@@ -3928,7 +3925,8 @@ end_with_restore_list:
first_table ? &first_table->grant.privilege : 0,
first_table ? 0 : 1, 0,
first_table ? (bool) first_table->schema_table :
- select_lex->db ? is_schema_db(select_lex->db) : 0))
+ select_lex->db ?
+ is_schema_db(select_lex->db) : 0))
goto error;
if (thd->security_ctx->user) // If not replication
@@ -4051,7 +4049,8 @@ end_with_restore_list:
*/
if (!lex->no_write_to_binlog && write_to_binlog)
{
- write_bin_log(thd, FALSE, thd->query(), thd->query_length());
+ if ((res= write_bin_log(thd, FALSE, thd->query(), thd->query_length())))
+ break;
}
my_ok(thd);
}
@@ -4271,7 +4270,8 @@ end_with_restore_list:
}
if (check_access(thd, CREATE_PROC_ACL, lex->sphead->m_db.str, 0, 0, 0,
- is_schema_db(lex->sphead->m_db.str)))
+ is_schema_db(lex->sphead->m_db.str,
+ lex->sphead->m_db.length)))
goto create_sp_error;
if (end_active_trans(thd))
@@ -4628,12 +4628,12 @@ create_sp_error:
case SP_KEY_NOT_FOUND:
if (lex->drop_if_exists)
{
- write_bin_log(thd, TRUE, thd->query(), thd->query_length());
+ res= write_bin_log(thd, TRUE, thd->query(), thd->query_length());
push_warning_printf(thd, MYSQL_ERROR::WARN_LEVEL_NOTE,
ER_SP_DOES_NOT_EXIST, ER(ER_SP_DOES_NOT_EXIST),
SP_COM_STRING(lex), lex->spname->m_name.str);
- res= FALSE;
- my_ok(thd);
+ if (!res)
+ my_ok(thd);
break;
}
my_error(ER_SP_DOES_NOT_EXIST, MYF(0),
@@ -4926,7 +4926,8 @@ create_sp_error:
res= mysql_xa_recover(thd);
break;
case SQLCOM_ALTER_TABLESPACE:
- if (check_access(thd, ALTER_ACL, thd->db, 0, 1, 0, thd->db ? is_schema_db(thd->db) : 0))
+ if (check_access(thd, ALTER_ACL, thd->db, 0, 1, 0,
+ thd->db ? is_schema_db(thd->db, thd->db_length) : 0))
break;
if (!(res= mysql_alter_tablespace(thd, lex->alter_tablespace_info)))
my_ok(thd);
@@ -6332,8 +6333,7 @@ TABLE_LIST *st_select_lex::add_table_to_
ptr->force_index= test(table_options & TL_OPTION_FORCE_INDEX);
ptr->ignore_leaves= test(table_options & TL_OPTION_IGNORE_LEAVES);
ptr->derived= table->sel;
- if (!ptr->derived && !my_strcasecmp(system_charset_info, ptr->db,
- INFORMATION_SCHEMA_NAME.str))
+ if (!ptr->derived && is_schema_db(ptr->db, ptr->db_length))
{
ST_SCHEMA_TABLE *schema_table= find_schema_table(thd, ptr->table_name);
if (!schema_table ||
@@ -6861,13 +6861,13 @@ bool reload_acl_and_cache(THD *thd, ulon
thd->store_globals();
lex_start(thd);
}
-
+
if (thd)
{
bool reload_acl_failed= acl_reload(thd);
bool reload_grants_failed= grant_reload(thd);
bool reload_servers_failed= servers_reload(thd);
-
+
if (reload_acl_failed || reload_grants_failed || reload_servers_failed)
{
result= 1;
@@ -7023,7 +7023,10 @@ bool reload_acl_and_cache(THD *thd, ulon
if (options & REFRESH_USER_RESOURCES)
reset_mqh((LEX_USER *) NULL, 0); /* purecov: inspected */
*write_to_binlog= tmp_write_to_binlog;
- return result;
+ /*
+ If the query was killed then this function must fail.
+ */
+ return result || (thd ? thd->killed : 0);
}
=== modified file 'sql/sql_partition.cc'
--- a/sql/sql_partition.cc 2010-01-15 15:27:55 +0000
+++ b/sql/sql_partition.cc 2010-03-09 15:03:54 +0000
@@ -870,6 +870,8 @@ int check_signed_flag(partition_info *pa
part_info Reference to partitioning data structure
is_sub_part Is the table subpartitioned as well
is_field_to_be_setup Flag if we are to set-up field arrays
+ is_create_table_ind Indicator of whether openfrm was called as part of
+ CREATE or ALTER TABLE
RETURN VALUE
TRUE An error occurred, something was wrong with the
@@ -892,8 +894,9 @@ int check_signed_flag(partition_info *pa
on the field object.
*/
-bool fix_fields_part_func(THD *thd, Item* func_expr, TABLE *table,
- bool is_sub_part, bool is_field_to_be_setup)
+static bool fix_fields_part_func(THD *thd, Item* func_expr, TABLE *table,
+ bool is_sub_part, bool is_field_to_be_setup,
+ bool is_create_table_ind)
{
partition_info *part_info= table->part_info;
uint dir_length, home_dir_length;
@@ -1005,10 +1008,31 @@ bool fix_fields_part_func(THD *thd, Item
thd->where= save_where;
if (unlikely(func_expr->const_item()))
{
- my_error(ER_CONST_EXPR_IN_PARTITION_FUNC_ERROR, MYF(0));
+ my_error(ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR, MYF(0));
clear_field_flag(table);
goto end;
}
+
+ /*
+ We don't allow creating partitions with timezone-dependent expressions as
+ a (sub)partitioning function, but we want to allow such expressions when
+ opening existing tables for easier maintenance. This exception should be
+ deprecated at some point in future so that we always throw an error.
+ */
+ if (func_expr->walk(&Item::is_timezone_dependent_processor,
+ 0, NULL))
+ {
+ if (is_create_table_ind)
+ {
+ my_error(ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR, MYF(0));
+ goto end;
+ }
+ else
+ push_warning(thd, MYSQL_ERROR::WARN_LEVEL_WARN,
+ ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR,
+ ER(ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR));
+ }
+
if ((!is_sub_part) && (error= check_signed_flag(part_info)))
goto end;
result= FALSE;
@@ -1616,7 +1640,8 @@ bool fix_partition_func(THD *thd, TABLE
else
{
if (unlikely(fix_fields_part_func(thd, part_info->subpart_expr,
- table, TRUE, TRUE)))
+ table, TRUE, TRUE,
+ is_create_table_ind)))
goto end;
if (unlikely(part_info->subpart_expr->result_type() != INT_RESULT))
{
@@ -1644,7 +1669,8 @@ bool fix_partition_func(THD *thd, TABLE
else
{
if (unlikely(fix_fields_part_func(thd, part_info->part_expr,
- table, FALSE, TRUE)))
+ table, FALSE, TRUE,
+ is_create_table_ind)))
goto end;
if (unlikely(part_info->part_expr->result_type() != INT_RESULT))
{
@@ -1658,7 +1684,8 @@ bool fix_partition_func(THD *thd, TABLE
{
const char *error_str;
if (unlikely(fix_fields_part_func(thd, part_info->part_expr,
- table, FALSE, TRUE)))
+ table, FALSE, TRUE,
+ is_create_table_ind)))
goto end;
if (part_info->part_type == RANGE_PARTITION)
{
@@ -2851,16 +2878,13 @@ int get_partition_id_range(partition_inf
part_func_value-= 0x8000000000000000ULL;
while (max_part_id > min_part_id)
{
- loc_part_id= (max_part_id + min_part_id + 1) >> 1;
+ loc_part_id= (max_part_id + min_part_id) / 2;
if (range_array[loc_part_id] <= part_func_value)
min_part_id= loc_part_id + 1;
else
- max_part_id= loc_part_id - 1;
+ max_part_id= loc_part_id;
}
loc_part_id= max_part_id;
- if (part_func_value >= range_array[loc_part_id])
- if (loc_part_id != max_partition)
- loc_part_id++;
*part_id= (uint32)loc_part_id;
if (loc_part_id == max_partition &&
part_func_value >= range_array[loc_part_id] &&
@@ -2934,6 +2958,7 @@ uint32 get_partition_id_range_for_endpoi
bool include_endpoint)
{
longlong *range_array= part_info->range_int_array;
+ longlong part_end_val;
uint max_partition= part_info->no_parts - 1;
uint min_part_id= 0, max_part_id= max_partition, loc_part_id;
/* Get the partitioning function value for the endpoint */
@@ -2967,46 +2992,46 @@ uint32 get_partition_id_range_for_endpoi
}
}
-
if (unsigned_flag)
part_func_value-= 0x8000000000000000ULL;
if (left_endpoint && !include_endpoint)
part_func_value++;
+
+ /*
+ Search for the partition containing part_func_value
+ (including the right endpoint).
+ */
while (max_part_id > min_part_id)
{
- loc_part_id= (max_part_id + min_part_id + 1) >> 1;
- if (range_array[loc_part_id] <= part_func_value)
+ loc_part_id= (max_part_id + min_part_id) / 2;
+ if (range_array[loc_part_id] < part_func_value)
min_part_id= loc_part_id + 1;
else
- max_part_id= loc_part_id - 1;
+ max_part_id= loc_part_id;
}
loc_part_id= max_part_id;
- if (loc_part_id < max_partition &&
- part_func_value >= range_array[loc_part_id+1])
- {
- loc_part_id++;
- }
+
+ /* Adjust for endpoints */
+ part_end_val= range_array[loc_part_id];
if (left_endpoint)
{
- longlong bound= range_array[loc_part_id];
/*
In case of PARTITION p VALUES LESS THAN MAXVALUE
the maximum value is in the current partition.
*/
- if (part_func_value > bound ||
- (part_func_value == bound &&
- (!part_info->defined_max_value || loc_part_id < max_partition)))
+ if (part_func_value > part_end_val ||
+ (part_func_value == part_end_val &&
+ (loc_part_id < max_partition || !part_info->defined_max_value)))
loc_part_id++;
}
else
{
- if (loc_part_id < max_partition)
- {
- if (part_func_value == range_array[loc_part_id])
- loc_part_id += test(include_endpoint);
- else if (part_func_value > range_array[loc_part_id])
- loc_part_id++;
- }
+ /* if 'WHERE <= X' and partition is LESS THAN (X) include next partition */
+ if (include_endpoint && loc_part_id < max_partition &&
+ part_func_value == part_end_val)
+ loc_part_id++;
+
+ /* Right endpoint, set end after correct partition */
loc_part_id++;
}
DBUG_RETURN(loc_part_id);
@@ -4089,8 +4114,9 @@ static int fast_end_partition(THD *thd,
}
if ((!is_empty) && (!written_bin_log) &&
- (!thd->lex->no_write_to_binlog))
- write_bin_log(thd, FALSE, thd->query(), thd->query_length());
+ (!thd->lex->no_write_to_binlog) &&
+ write_bin_log(thd, FALSE, thd->query(), thd->query_length()))
+ DBUG_RETURN(TRUE);
my_snprintf(tmp_name, sizeof(tmp_name), ER(ER_INSERT_INFO),
(ulong) (copied + deleted),
@@ -5681,8 +5707,7 @@ static bool write_log_drop_partition(ALT
part_info->first_log_entry= NULL;
build_table_filename(path, sizeof(path) - 1, lpt->db,
lpt->table_name, "", 0);
- build_table_filename(tmp_path, sizeof(tmp_path) - 1, lpt->db,
- lpt->table_name, "#", 0);
+ build_table_shadow_filename(tmp_path, sizeof(tmp_path) - 1, lpt);
pthread_mutex_lock(&LOCK_gdl);
if (write_log_dropped_partitions(lpt, &next_entry, (const char*)path,
FALSE))
@@ -5738,8 +5763,7 @@ static bool write_log_add_change_partiti
build_table_filename(path, sizeof(path) - 1, lpt->db,
lpt->table_name, "", 0);
- build_table_filename(tmp_path, sizeof(tmp_path) - 1, lpt->db,
- lpt->table_name, "#", 0);
+ build_table_shadow_filename(tmp_path, sizeof(tmp_path) - 1, lpt);
pthread_mutex_lock(&LOCK_gdl);
if (write_log_dropped_partitions(lpt, &next_entry, (const char*)path,
FALSE))
@@ -5964,7 +5988,7 @@ void handle_alter_part_error(ALTER_PARTI
partition_info *part_info= lpt->part_info;
DBUG_ENTER("handle_alter_part_error");
- if (!part_info->first_log_entry &&
+ if (part_info->first_log_entry &&
execute_ddl_log_entry(current_thd,
part_info->first_log_entry->entry_pos))
{
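
The get_partition_id_range() rewrite above replaces the old midpoint formula and its post-loop fix-up with a plain lower-bound search over the per-partition upper bounds: the loop ends on the first partition whose bound exceeds the value, and only the MAXVALUE check remains afterwards. A standalone version of that search, following the diff; the data layout is illustrative:

/* Returns the partition index for value, or -1 when value is above the last
   bound and no MAXVALUE partition is defined.  Sketch of the logic above. */
static int get_range_partition(const long long *range_array, unsigned parts,
                               bool defined_max_value, long long value)
{
  unsigned min_part= 0, max_part= parts - 1;
  while (max_part > min_part)
  {
    unsigned mid= (max_part + min_part) / 2;  /* was (max + min + 1) >> 1 */
    if (range_array[mid] <= value)
      min_part= mid + 1;                      /* value belongs to a later partition */
    else
      max_part= mid;                          /* mid itself may still be the answer */
  }
  /* max_part is now the first partition whose bound is greater than value */
  if (max_part == parts - 1 && value >= range_array[max_part] && !defined_max_value)
    return -1;
  return (int) max_part;
}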
=== modified file 'sql/sql_partition.h'
--- a/sql/sql_partition.h 2007-11-20 10:21:00 +0000
+++ b/sql/sql_partition.h 2009-12-13 20:29:50 +0000
@@ -91,9 +91,6 @@ uint32 get_list_array_idx_for_endpoint(p
uint32 get_partition_id_range_for_endpoint(partition_info *part_info,
bool left_endpoint,
bool include_endpoint);
-bool fix_fields_part_func(THD *thd, Item* func_expr, TABLE *table,
- bool is_sub_part, bool is_field_to_be_setup);
-
bool check_part_func_fields(Field **ptr, bool ok_with_charsets);
bool field_is_partition_charset(Field *field);
=== modified file 'sql/sql_plugin.cc'
--- a/sql/sql_plugin.cc 2010-03-08 17:05:09 +0000
+++ b/sql/sql_plugin.cc 2010-03-09 19:23:30 +0000
@@ -2100,7 +2100,7 @@ static int check_func_set(THD *thd, stru
&error, &error_len, &not_used);
if (error_len)
{
- strmake(buff, error, min(sizeof(buff), error_len));
+ strmake(buff, error, min(sizeof(buff) - 1, error_len));
strvalue= buff;
goto err;
}
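
The sql_plugin.cc change fixes an off-by-one: strmake(dst, src, n) is documented to copy at most n characters and always write a terminating NUL, so dst must own n + 1 bytes and the bound has to be sizeof(buff) - 1. A simplified re-implementation to show why (an illustration of the assumed contract, not the library function itself):

#include <cstddef>

static char *strmake_sketch(char *dst, const char *src, std::size_t n)
{
  std::size_t i= 0;
  while (i < n && src[i])
  {
    dst[i]= src[i];
    i++;
  }
  dst[i]= '\0';            /* may write at dst[n]: the caller must own n + 1 bytes */
  return dst + i;
}

static void report_bad_set_value(const char *error, std::size_t error_len)
{
  char buff[64];
  std::size_t n= sizeof(buff) - 1 < error_len ? sizeof(buff) - 1 : error_len;
  strmake_sketch(buff, error, n);  /* with plain sizeof(buff) the NUL could land at buff[64] */
  (void) buff;
}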
=== modified file 'sql/sql_prepare.cc'
--- a/sql/sql_prepare.cc 2010-01-15 15:27:55 +0000
+++ b/sql/sql_prepare.cc 2010-03-04 08:03:07 +0000
@@ -1600,6 +1600,8 @@ static bool mysql_test_create_table(Prep
{
lex->link_first_table_back(create_table, link_to_local);
create_table->create= TRUE;
+ /* Base table and temporary table are not in the same name space. */
+ create_table->skip_temporary= true;
}
if (open_normal_and_derived_tables(stmt->thd, lex->query_tables, 0))
=== modified file 'sql/sql_rename.cc'
--- a/sql/sql_rename.cc 2009-10-16 10:29:42 +0000
+++ b/sql/sql_rename.cc 2010-01-24 07:03:23 +0000
@@ -34,6 +34,7 @@ static TABLE_LIST *reverse_table_list(TA
bool mysql_rename_tables(THD *thd, TABLE_LIST *table_list, bool silent)
{
bool error= 1;
+ bool binlog_error= 0;
TABLE_LIST *ren_table= 0;
int to_table;
char *rename_log_table[2]= {NULL, NULL};
@@ -174,11 +175,11 @@ bool mysql_rename_tables(THD *thd, TABLE
*/
pthread_mutex_unlock(&LOCK_open);
- /* Lets hope this doesn't fail as the result will be messy */
if (!silent && !error)
{
- write_bin_log(thd, TRUE, thd->query(), thd->query_length());
- my_ok(thd);
+ binlog_error= write_bin_log(thd, TRUE, thd->query(), thd->query_length());
+ if (!binlog_error)
+ my_ok(thd);
}
if (!error)
@@ -190,7 +191,7 @@ bool mysql_rename_tables(THD *thd, TABLE
err:
start_waiting_global_read_lock(thd);
- DBUG_RETURN(error);
+ DBUG_RETURN(error || binlog_error);
}
=== modified file 'sql/sql_repl.cc'
--- a/sql/sql_repl.cc 2009-12-03 11:19:05 +0000
+++ b/sql/sql_repl.cc 2010-03-04 08:03:07 +0000
@@ -711,11 +711,14 @@ impossible position";
thd_proc_info(thd, "Finished reading one binlog; switching to next binlog");
switch (mysql_bin_log.find_next_log(&linfo, 1)) {
- case LOG_INFO_EOF:
- loop_breaker = (flags & BINLOG_DUMP_NON_BLOCK);
- break;
case 0:
break;
+ case LOG_INFO_EOF:
+ if (mysql_bin_log.is_active(log_file_name))
+ {
+ loop_breaker = (flags & BINLOG_DUMP_NON_BLOCK);
+ break;
+ }
default:
errmsg = "could not find next log";
my_errno= ER_MASTER_FATAL_ERROR_READING_BINLOG;
@@ -1001,8 +1004,8 @@ int reset_slave(THD *thd, Master_info* m
MY_STAT stat_area;
char fname[FN_REFLEN];
int thread_mask= 0, error= 0;
- uint sql_errno=0;
- const char* errmsg=0;
+ uint sql_errno=ER_UNKNOWN_ERROR;
+ const char* errmsg= "Unknown error occurred while resetting slave";
DBUG_ENTER("reset_slave");
lock_slave_threads(mi);
@@ -1668,7 +1671,8 @@ err:
replication events along LOAD DATA processing.
@param file pointer to io-cache
- @return 0
+ @retval 0 success
+ @retval 1 failure
*/
int log_loaded_block(IO_CACHE* file)
{
@@ -1695,7 +1699,8 @@ int log_loaded_block(IO_CACHE* file)
Append_block_log_event a(lf_info->thd, lf_info->thd->db, buffer,
min(block_len, max_event_size),
lf_info->log_delayed);
- mysql_bin_log.write(&a);
+ if (mysql_bin_log.write(&a))
+ DBUG_RETURN(1);
}
else
{
@@ -1703,7 +1708,8 @@ int log_loaded_block(IO_CACHE* file)
buffer,
min(block_len, max_event_size),
lf_info->log_delayed);
- mysql_bin_log.write(&b);
+ if (mysql_bin_log.write(&b))
+ DBUG_RETURN(1);
lf_info->wrote_create_file= 1;
DBUG_SYNC_POINT("debug_lock.created_file_event",10);
}
=== modified file 'sql/sql_select.cc'
--- a/sql/sql_select.cc 2010-03-09 19:22:24 +0000
+++ b/sql/sql_select.cc 2010-03-10 09:12:23 +0000
@@ -529,7 +529,7 @@ JOIN::prepare(Item ***rref_pointer_array
thd->lex->allow_sum_func= save_allow_sum_func;
}
- if (!thd->lex->view_prepare_mode)
+ if (!thd->lex->view_prepare_mode && !(select_options & SELECT_DESCRIBE))
{
Item_subselect *subselect;
/* Is it subselect? */
@@ -549,13 +549,26 @@ JOIN::prepare(Item ***rref_pointer_array
if (order)
{
+ bool real_order= FALSE;
ORDER *ord;
for (ord= order; ord; ord= ord->next)
{
Item *item= *ord->item;
+ /*
+ Disregard sort order if there's only "{VAR}CHAR(0) NOT NULL" fields
+ there. Such fields don't contain any data to sort.
+ */
+ if (!real_order &&
+ (item->type() != Item::FIELD_ITEM ||
+ ((Item_field *) item)->field->maybe_null() ||
+ ((Item_field *) item)->field->sort_length()))
+ real_order= TRUE;
+
if (item->with_sum_func && item->type() != Item::SUM_FUNC_ITEM)
item->split_sum_func(thd, ref_pointer_array, all_fields);
}
+ if (!real_order)
+ order= NULL;
}
if (having && having->with_sum_func)
@@ -952,6 +965,7 @@ JOIN::optimize()
DBUG_PRINT("info",("Select tables optimized away"));
zero_result_cause= "Select tables optimized away";
tables_list= 0; // All tables resolved
+ const_tables= tables;
/*
Extract all table-independent conditions and replace the WHERE
clause with them. All other conditions were computed by opt_sum_query
@@ -1632,6 +1646,11 @@ JOIN::reinit()
if (join_tab_save)
memcpy(join_tab, join_tab_save, sizeof(JOIN_TAB) * tables);
+ /* need to reset ref access state (see join_read_key) */
+ if (join_tab)
+ for (uint i= 0; i < tables; i++)
+ join_tab[i].ref.key_err= TRUE;
+
if (tmp_join)
restore_tmp();
@@ -3722,20 +3741,20 @@ add_ft_keys(DYNAMIC_ARRAY *keyuse_array,
cond_func=(Item_func_match *)cond;
else if (func->arg_count == 2)
{
- Item_func *arg0=(Item_func *)(func->arguments()[0]),
- *arg1=(Item_func *)(func->arguments()[1]);
- if (arg1->const_item() &&
- arg0->type() == Item::FUNC_ITEM &&
- arg0->functype() == Item_func::FT_FUNC &&
+ Item *arg0= func->arguments()[0],
+ *arg1= func->arguments()[1];
+ if (arg1->const_item() && arg1->cols() == 1 &&
+ arg0->type() == Item::FUNC_ITEM &&
+ ((Item_func *) arg0)->functype() == Item_func::FT_FUNC &&
((functype == Item_func::GE_FUNC && arg1->val_real() > 0) ||
- (functype == Item_func::GT_FUNC && arg1->val_real() >=0)))
- cond_func=(Item_func_match *) arg0;
- else if (arg0->const_item() &&
- arg1->type() == Item::FUNC_ITEM &&
- arg1->functype() == Item_func::FT_FUNC &&
+ (functype == Item_func::GT_FUNC && arg1->val_real() >= 0)))
+ cond_func= (Item_func_match *) arg0;
+ else if (arg0->const_item() && arg0->cols() == 1 &&
+ arg1->type() == Item::FUNC_ITEM &&
+ ((Item_func *) arg1)->functype() == Item_func::FT_FUNC &&
((functype == Item_func::LE_FUNC && arg0->val_real() > 0) ||
- (functype == Item_func::LT_FUNC && arg0->val_real() >=0)))
- cond_func=(Item_func_match *) arg1;
+ (functype == Item_func::LT_FUNC && arg0->val_real() >= 0)))
+ cond_func= (Item_func_match *) arg1;
}
}
else if (cond->type() == Item::COND_ITEM)
@@ -5950,6 +5969,7 @@ inline void add_cond_and_fix(Item **e1,
{
*e1= res;
res->quick_fix_field();
+ res->update_used_tables();
}
}
else
@@ -7158,6 +7178,7 @@ static void update_depend_map(JOIN *join
table_map depend_map;
order->item[0]->update_used_tables();
order->depend_map=depend_map=order->item[0]->used_tables();
+ order->used= 0;
// Not item_sum(), RAND() and no reference to table outside of sub select
if (!(order->depend_map & (OUTER_REF_TABLE_BIT | RAND_TABLE_BIT))
&& !order->item[0]->with_sum_func)
@@ -7215,7 +7236,19 @@ remove_const(JOIN *join,ORDER *first_ord
for (order=first_order; order ; order=order->next)
{
table_map order_tables=order->item[0]->used_tables();
- if (order->item[0]->with_sum_func)
+ if (order->item[0]->with_sum_func ||
+ /*
+ If the outer table of an outer join is const (either by itself or
+ after applying WHERE condition), grouping on a field from such a
+ table will be optimized away and filesort without temporary table
+ will be used unless we prevent that now. Filesort is not fit to
+ handle joins and the join condition is not applied. We can't detect
+ the case without an expensive test, however, so we force temporary
+ table for all queries containing more than one table, ROLLUP, and an
+ outer join.
+ */
+ (join->tables > 1 && join->rollup.state == ROLLUP::STATE_INITED &&
+ join->outer_join))
*simple_order=0; // Must do a temp table to sort
else if (!(order_tables & not_const_tables))
{
@@ -7623,7 +7656,7 @@ static bool check_simple_equality(Item *
already contains a constant and its value is not equal to
the value of const_item.
*/
- item_equal->add(const_item);
+ item_equal->add(const_item, field_item);
}
else
{
@@ -13804,7 +13837,7 @@ check_reverse_order:
select->quick=tmp;
}
}
- else if (tab->type != JT_NEXT &&
+ else if (tab->type != JT_NEXT && tab->type != JT_REF_OR_NULL &&
tab->ref.key >= 0 && tab->ref.key_parts <= used_key_parts)
{
/*
=== modified file 'sql/sql_select.h'
--- a/sql/sql_select.h 2010-01-15 15:27:55 +0000
+++ b/sql/sql_select.h 2010-03-04 08:03:07 +0000
@@ -722,6 +722,12 @@ public:
my_bitmap_map *old_map= dbug_tmp_use_all_columns(table,
table->write_set);
int res= item->save_in_field(to_field, 1);
+ /*
+ Item::save_in_field() may call Item::val_xxx(). And if this is a subquery
+ we need to check for errors executing it and react accordingly
+ */
+ if (!res && table->in_use->is_error())
+ res= 2;
dbug_tmp_restore_column_map(table->write_set, old_map);
null_key= to_field->is_null() || item->null_value;
return (err != 0 || res > 2 ? STORE_KEY_FATAL : (store_key_result) res);
@@ -755,6 +761,12 @@ protected:
if (!err)
err= res;
}
+ /*
+ Item::save_in_field() may call Item::val_xxx(). And if this is a subquery
+ we need to check for errors executing it and react accordingly
+ */
+ if (!err && to_field->table->in_use->is_error())
+ err= 2;
}
null_key= to_field->is_null() || item->null_value;
return (err > 2 ? STORE_KEY_FATAL : (store_key_result) err);
=== modified file 'sql/sql_servers.cc'
--- a/sql/sql_servers.cc 2009-03-20 14:27:53 +0000
+++ b/sql/sql_servers.cc 2010-01-13 11:39:00 +0000
@@ -241,8 +241,14 @@ bool servers_reload(THD *thd)
if (simple_open_n_lock_tables(thd, tables))
{
- sql_print_error("Can't open and lock privilege tables: %s",
- thd->main_da.message());
+ /*
+ Execution might have been interrupted; only print the error message
+ if an error condition has been raised.
+ */
+ if (thd->main_da.is_error())
+ sql_print_error("Can't open and lock privilege tables: %s",
+ thd->main_da.message());
+ return_val= FALSE;
goto end;
}
=== modified file 'sql/sql_show.cc'
--- a/sql/sql_show.cc 2010-01-15 15:27:55 +0000
+++ b/sql/sql_show.cc 2010-03-04 08:03:07 +0000
@@ -828,8 +828,7 @@ bool mysqld_show_create_db(THD *thd, cha
DBUG_RETURN(TRUE);
}
#endif
- if (!my_strcasecmp(system_charset_info, dbname,
- INFORMATION_SCHEMA_NAME.str))
+ if (is_schema_db(dbname))
{
dbname= INFORMATION_SCHEMA_NAME.str;
create.default_table_charset= system_charset_info;
@@ -2797,8 +2796,8 @@ int make_db_list(THD *thd, List<LEX_STRI
*/
if (lookup_field_vals->db_value.str)
{
- if (!my_strcasecmp(system_charset_info, INFORMATION_SCHEMA_NAME.str,
- lookup_field_vals->db_value.str))
+ if (is_schema_db(lookup_field_vals->db_value.str,
+ lookup_field_vals->db_value.length))
{
*with_i_schema= 1;
if (files->push_back(i_s_name_copy))
@@ -3385,11 +3384,11 @@ int get_all_tables(THD *thd, TABLE_LIST
while ((db_name= it++))
{
#ifndef NO_EMBEDDED_ACCESS_CHECKS
- if (!check_access(thd,SELECT_ACL, db_name->str,
- &thd->col_access, 0, 1, with_i_schema) ||
+ if (!(check_access(thd,SELECT_ACL, db_name->str,
+ &thd->col_access, 0, 1, with_i_schema) ||
+ (!thd->col_access && check_grant_db(thd, db_name->str))) ||
sctx->master_access & (DB_ACLS | SHOW_DB_ACL) ||
- acl_get(sctx->host, sctx->ip, sctx->priv_user, db_name->str, 0) ||
- !check_grant_db(thd, db_name->str))
+ acl_get(sctx->host, sctx->ip, sctx->priv_user, db_name->str, 0))
#endif
{
thd->no_warnings_for_error= 1;
@@ -5250,7 +5249,7 @@ copy_event_to_schema_table(THD *thd, TAB
*/
if (thd->lex->sql_command != SQLCOM_SHOW_EVENTS &&
check_access(thd, EVENT_ACL, et.dbname.str, 0, 0, 1,
- is_schema_db(et.dbname.str)))
+ is_schema_db(et.dbname.str, et.dbname.length)))
DBUG_RETURN(0);
/* ->field[0] is EVENT_CATALOG and is by default NULL */
=== modified file 'sql/sql_table.cc'
--- a/sql/sql_table.cc 2010-02-10 19:06:24 +0000
+++ b/sql/sql_table.cc 2010-03-04 08:03:07 +0000
@@ -647,7 +647,7 @@ static bool read_ddl_log_file_entry(uint
Write one entry from ddl log file
SYNOPSIS
write_ddl_log_file_entry()
- entry_no Entry number to read
+ entry_no Entry number to write
RETURN VALUES
TRUE Error
FALSE Success
@@ -748,10 +748,10 @@ static uint read_ddl_log_header()
else
successful_open= TRUE;
}
- entry_no= uint4korr(&file_entry_buf[DDL_LOG_NUM_ENTRY_POS]);
- global_ddl_log.name_len= uint4korr(&file_entry_buf[DDL_LOG_NAME_LEN_POS]);
if (successful_open)
{
+ entry_no= uint4korr(&file_entry_buf[DDL_LOG_NUM_ENTRY_POS]);
+ global_ddl_log.name_len= uint4korr(&file_entry_buf[DDL_LOG_NAME_LEN_POS]);
global_ddl_log.io_size= uint4korr(&file_entry_buf[DDL_LOG_IO_SIZE_POS]);
DBUG_ASSERT(global_ddl_log.io_size <=
sizeof(global_ddl_log.file_entry_buf));
@@ -832,6 +832,7 @@ static bool init_ddl_log()
goto end;
global_ddl_log.io_size= IO_SIZE;
+ global_ddl_log.name_len= FN_LEN;
create_ddl_log_file_name(file_name);
if ((global_ddl_log.file_id= my_create(file_name,
CREATE_MODE,
@@ -884,6 +885,13 @@ static int execute_ddl_log_action(THD *t
{
DBUG_RETURN(FALSE);
}
+ DBUG_PRINT("ddl_log",
+ ("execute type %c next %u name '%s' from_name '%s' handler '%s'",
+ ddl_log_entry->action_type,
+ ddl_log_entry->next_entry,
+ ddl_log_entry->name,
+ ddl_log_entry->from_name,
+ ddl_log_entry->handler_name));
handler_name.str= (char*)ddl_log_entry->handler_name;
handler_name.length= strlen(ddl_log_entry->handler_name);
init_sql_alloc(&mem_root, TABLE_ALLOC_BLOCK_SIZE, 0);
@@ -1091,6 +1099,15 @@ bool write_ddl_log_entry(DDL_LOG_ENTRY *
DBUG_RETURN(TRUE);
}
error= FALSE;
+ DBUG_PRINT("ddl_log",
+ ("write type %c next %u name '%s' from_name '%s' handler '%s'",
+ (char) global_ddl_log.file_entry_buf[DDL_LOG_ACTION_TYPE_POS],
+ ddl_log_entry->next_entry,
+ (char*) &global_ddl_log.file_entry_buf[DDL_LOG_NAME_POS],
+ (char*) &global_ddl_log.file_entry_buf[DDL_LOG_NAME_POS
+ + FN_LEN],
+ (char*) &global_ddl_log.file_entry_buf[DDL_LOG_NAME_POS
+ + (2*FN_LEN)]));
if (write_ddl_log_file_entry((*active_entry)->entry_pos))
{
error= TRUE;
@@ -1731,9 +1748,10 @@ end:
file
*/
-void write_bin_log(THD *thd, bool clear_error,
- char const *query, ulong query_length)
+int write_bin_log(THD *thd, bool clear_error,
+ char const *query, ulong query_length)
{
+ int error= 0;
if (mysql_bin_log.is_open())
{
int errcode= 0;
@@ -1741,9 +1759,10 @@ void write_bin_log(THD *thd, bool clear_
thd->clear_error();
else
errcode= query_error_code(thd, TRUE);
- thd->binlog_query(THD::STMT_QUERY_TYPE,
- query, query_length, FALSE, FALSE, errcode);
+ error= thd->binlog_query(THD::STMT_QUERY_TYPE,
+ query, query_length, FALSE, FALSE, errcode);
}
+ return error;
}
@@ -2091,7 +2110,7 @@ int mysql_rm_table_part2(THD *thd, TABLE
tables). In this case, we can write the original query into
the binary log.
*/
- write_bin_log(thd, !error, thd->query(), thd->query_length());
+ error |= write_bin_log(thd, !error, thd->query(), thd->query_length());
}
else if (thd->current_stmt_binlog_row_based &&
tmp_table_deleted)
@@ -2113,7 +2132,7 @@ int mysql_rm_table_part2(THD *thd, TABLE
*/
built_query.chop(); // Chop off the last comma
built_query.append(" /* generated by server */");
- write_bin_log(thd, !error, built_query.ptr(), built_query.length());
+ error|= write_bin_log(thd, !error, built_query.ptr(), built_query.length());
}
/*
@@ -2132,7 +2151,7 @@ int mysql_rm_table_part2(THD *thd, TABLE
*/
built_tmp_query.chop(); // Chop off the last comma
built_tmp_query.append(" /* generated by server */");
- write_bin_log(thd, !error, built_tmp_query.ptr(), built_tmp_query.length());
+ error|= write_bin_log(thd, !error, built_tmp_query.ptr(), built_tmp_query.length());
}
}
@@ -2577,7 +2596,7 @@ mysql_prepare_create_table(THD *thd, HA_
!(sql_field->charset= get_charset_by_csname(sql_field->charset->csname,
MY_CS_BINSORT,MYF(0))))
{
- char tmp[64];
+ char tmp[65];
strmake(strmake(tmp, save_cs->csname, sizeof(tmp)-4),
STRING_WITH_LEN("_bin"));
my_error(ER_UNKNOWN_COLLATION, MYF(0), tmp);
@@ -3541,9 +3560,9 @@ void sp_prepare_create_field(THD *thd, C
RETURN VALUES
NONE
*/
-static inline void write_create_table_bin_log(THD *thd,
- const HA_CREATE_INFO *create_info,
- bool internal_tmp_table)
+static inline int write_create_table_bin_log(THD *thd,
+ const HA_CREATE_INFO *create_info,
+ bool internal_tmp_table)
{
/*
Don't write statement if:
@@ -3556,7 +3575,8 @@ static inline void write_create_table_bi
(!thd->current_stmt_binlog_row_based ||
(thd->current_stmt_binlog_row_based &&
!(create_info->options & HA_LEX_CREATE_TMP_TABLE))))
- write_bin_log(thd, TRUE, thd->query(), thd->query_length());
+ return write_bin_log(thd, TRUE, thd->query(), thd->query_length());
+ return 0;
}
@@ -3823,8 +3843,7 @@ bool mysql_create_table_no_lock(THD *thd
push_warning_printf(thd, MYSQL_ERROR::WARN_LEVEL_NOTE,
ER_TABLE_EXISTS_ERROR, ER(ER_TABLE_EXISTS_ERROR),
alias);
- error= 0;
- write_create_table_bin_log(thd, create_info, internal_tmp_table);
+ error= write_create_table_bin_log(thd, create_info, internal_tmp_table);
goto err;
}
my_error(ER_TABLE_EXISTS_ERROR, MYF(0), alias);
@@ -3952,8 +3971,7 @@ bool mysql_create_table_no_lock(THD *thd
thd->thread_specific_used= TRUE;
}
- write_create_table_bin_log(thd, create_info, internal_tmp_table);
- error= FALSE;
+ error= write_create_table_bin_log(thd, create_info, internal_tmp_table);
unlock_and_end:
VOID(pthread_mutex_unlock(&LOCK_open));
@@ -3968,7 +3986,7 @@ warn:
ER_TABLE_EXISTS_ERROR, ER(ER_TABLE_EXISTS_ERROR),
alias);
create_info->table_existed= 1; // Mark that table existed
- write_create_table_bin_log(thd, create_info, internal_tmp_table);
+ error= write_create_table_bin_log(thd, create_info, internal_tmp_table);
goto unlock_and_end;
}
@@ -5444,18 +5462,20 @@ binlog:
create_info, FALSE /* show_database */);
DBUG_ASSERT(result == 0); // store_create_info() always return 0
- write_bin_log(thd, TRUE, query.ptr(), query.length());
+ if (write_bin_log(thd, TRUE, query.ptr(), query.length()))
+ goto err;
}
}
else // Case 1
- write_bin_log(thd, TRUE, thd->query(), thd->query_length());
+ if (write_bin_log(thd, TRUE, thd->query(), thd->query_length()))
+ goto err;
}
/*
Case 3 and 4 does nothing under RBR
*/
}
- else
- write_bin_log(thd, TRUE, thd->query(), thd->query_length());
+ else if (write_bin_log(thd, TRUE, thd->query(), thd->query_length()))
+ goto err;
res= FALSE;
@@ -5543,7 +5563,7 @@ mysql_discard_or_import_tablespace(THD *
error=1;
if (error)
goto err;
- write_bin_log(thd, FALSE, thd->query(), thd->query_length());
+ error= write_bin_log(thd, FALSE, thd->query(), thd->query_length());
err:
ha_autocommit_or_rollback(thd, error);
@@ -6571,11 +6591,13 @@ bool mysql_alter_table(THD *thd,char *ne
thd->clear_error();
Query_log_event qinfo(thd, thd->query(), thd->query_length(),
0, FALSE, 0);
- mysql_bin_log.write(&qinfo);
+ if ((error= mysql_bin_log.write(&qinfo)))
+ goto view_err_unlock;
}
my_ok(thd);
}
+view_err_unlock:
unlock_table_names(thd, table_list, (TABLE_LIST*) 0);
view_err:
@@ -6828,8 +6850,9 @@ view_err:
if (!error)
{
- write_bin_log(thd, TRUE, thd->query(), thd->query_length());
- my_ok(thd);
+ error= write_bin_log(thd, TRUE, thd->query(), thd->query_length());
+ if (!error)
+ my_ok(thd);
}
else if (error > 0)
{
@@ -7322,8 +7345,9 @@ view_err:
if (rename_temporary_table(thd, new_table, new_db, new_name))
goto err1;
/* We don't replicate alter table statement on temporary tables */
- if (!thd->current_stmt_binlog_row_based)
- write_bin_log(thd, TRUE, thd->query(), thd->query_length());
+ if (!thd->current_stmt_binlog_row_based &&
+ write_bin_log(thd, TRUE, thd->query(), thd->query_length()))
+ DBUG_RETURN(TRUE);
goto end_temporary;
}
@@ -7486,7 +7510,8 @@ view_err:
DBUG_ASSERT(!(mysql_bin_log.is_open() &&
thd->current_stmt_binlog_row_based &&
(create_info->options & HA_LEX_CREATE_TMP_TABLE)));
- write_bin_log(thd, TRUE, thd->query(), thd->query_length());
+ if (write_bin_log(thd, TRUE, thd->query(), thd->query_length()))
+ DBUG_RETURN(TRUE);
if (ha_check_storage_engine_flag(old_db_type, HTON_FLUSH_AFTER_RENAME))
{
=== modified file 'sql/sql_tablespace.cc'
--- a/sql/sql_tablespace.cc 2009-12-03 11:19:05 +0000
+++ b/sql/sql_tablespace.cc 2010-03-04 08:03:07 +0000
@@ -67,6 +67,6 @@ int mysql_alter_tablespace(THD *thd, st_
hton_name(hton)->str,
"TABLESPACE or LOGFILE GROUP");
}
- write_bin_log(thd, FALSE, thd->query(), thd->query_length());
- DBUG_RETURN(FALSE);
+ error= write_bin_log(thd, FALSE, thd->query(), thd->query_length());
+ DBUG_RETURN(error);
}
=== modified file 'sql/sql_test.cc'
--- a/sql/sql_test.cc 2009-09-07 20:50:10 +0000
+++ b/sql/sql_test.cc 2010-03-04 08:03:07 +0000
@@ -168,6 +168,21 @@ TEST_join(JOIN *join)
uint i,ref;
DBUG_ENTER("TEST_join");
+ /*
+ Assemble results of all the calls to full_name() first,
+ in order not to garble the tabular output below.
+ */
+ String ref_key_parts[MAX_TABLES];
+ for (i= 0; i < join->tables; i++)
+ {
+ JOIN_TAB *tab= join->join_tab + i;
+ for (ref= 0; ref < tab->ref.key_parts; ref++)
+ {
+ ref_key_parts[i].append(tab->ref.items[ref]->full_name());
+ ref_key_parts[i].append(" ");
+ }
+ }
+
DBUG_LOCK_FILE;
VOID(fputs("\nInfo about JOIN\n",DBUG_FILE));
for (i=0 ; i < join->tables ; i++)
@@ -199,13 +214,8 @@ TEST_join(JOIN *join)
}
if (tab->ref.key_parts)
{
- VOID(fputs(" refs: ",DBUG_FILE));
- for (ref=0 ; ref < tab->ref.key_parts ; ref++)
- {
- Item *item=tab->ref.items[ref];
- fprintf(DBUG_FILE,"%s ", item->full_name());
- }
- VOID(fputc('\n',DBUG_FILE));
+ fprintf(DBUG_FILE,
+ " refs: %s\n", ref_key_parts[i].ptr());
}
}
DBUG_UNLOCK_FILE;
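The sql_test.cc change collects every full_name() result into a String array
before the tabular DBUG output starts, because calling full_name() while the
table is being written can interleave debug output and garble it. The same
buffer-first-then-print idea in a self-contained sketch (describe_column is a
hypothetical stand-in for a call that may emit its own output):

  #include <cstdio>
  #include <string>
  #include <vector>

  // Imagine this call can write to the same stream as the report below.
  static std::string describe_column(int i)
  {
    return "col" + std::to_string(i);
  }

  int main()
  {
    const int n = 3;

    // Pass 1: run everything that might produce side output, buffering it.
    std::vector<std::string> names;
    for (int i = 0; i < n; i++)
      names.push_back(describe_column(i));

    // Pass 2: emit the tabular report from the buffered strings only.
    for (int i = 0; i < n; i++)
      std::printf("refs: %s\n", names[i].c_str());
    return 0;
  }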
=== modified file 'sql/sql_trigger.cc'
--- a/sql/sql_trigger.cc 2009-12-03 11:19:05 +0000
+++ b/sql/sql_trigger.cc 2010-03-04 08:03:07 +0000
@@ -507,7 +507,7 @@ end:
if (!result)
{
- write_bin_log(thd, TRUE, stmt_query.ptr(), stmt_query.length());
+ result= write_bin_log(thd, TRUE, stmt_query.ptr(), stmt_query.length());
}
VOID(pthread_mutex_unlock(&LOCK_open));
=== modified file 'sql/sql_udf.cc'
--- a/sql/sql_udf.cc 2009-10-16 10:29:42 +0000
+++ b/sql/sql_udf.cc 2010-01-25 02:55:05 +0000
@@ -398,6 +398,7 @@ int mysql_create_function(THD *thd,udf_f
TABLE *table;
TABLE_LIST tables;
udf_func *u_d;
+ bool save_binlog_row_based;
DBUG_ENTER("mysql_create_function");
if (!initialized)
@@ -437,8 +438,8 @@ int mysql_create_function(THD *thd,udf_f
Turn off row binlogging of this statement and use statement-based
so that all supporting tables are updated for CREATE FUNCTION command.
*/
- if (thd->current_stmt_binlog_row_based)
- thd->clear_current_stmt_binlog_row_based();
+ save_binlog_row_based= thd->current_stmt_binlog_row_based;
+ thd->clear_current_stmt_binlog_row_based();
rw_wrlock(&THR_LOCK_udf);
if ((hash_search(&udf_hash,(uchar*) udf->name.str, udf->name.length)))
@@ -506,14 +507,22 @@ int mysql_create_function(THD *thd,udf_f
rw_unlock(&THR_LOCK_udf);
/* Binlog the create function. */
- write_bin_log(thd, TRUE, thd->query(), thd->query_length());
-
+ if (write_bin_log(thd, TRUE, thd->query(), thd->query_length()))
+ {
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
+ DBUG_RETURN(1);
+ }
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(0);
err:
if (new_dl)
dlclose(dl);
rw_unlock(&THR_LOCK_udf);
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(1);
}
@@ -525,6 +534,7 @@ int mysql_drop_function(THD *thd,const L
udf_func *udf;
char *exact_name_str;
uint exact_name_len;
+ bool save_binlog_row_based;
DBUG_ENTER("mysql_drop_function");
if (!initialized)
@@ -540,8 +550,8 @@ int mysql_drop_function(THD *thd,const L
Turn off row binlogging of this statement and use statement-based
so that all supporting tables are updated for DROP FUNCTION command.
*/
- if (thd->current_stmt_binlog_row_based)
- thd->clear_current_stmt_binlog_row_based();
+ save_binlog_row_based= thd->current_stmt_binlog_row_based;
+ thd->clear_current_stmt_binlog_row_based();
rw_wrlock(&THR_LOCK_udf);
if (!(udf=(udf_func*) hash_search(&udf_hash,(uchar*) udf_name->str,
@@ -581,11 +591,19 @@ int mysql_drop_function(THD *thd,const L
rw_unlock(&THR_LOCK_udf);
/* Binlog the drop function. */
- write_bin_log(thd, TRUE, thd->query(), thd->query_length());
-
+ if (write_bin_log(thd, TRUE, thd->query(), thd->query_length()))
+ {
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
+ DBUG_RETURN(1);
+ }
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(0);
err:
rw_unlock(&THR_LOCK_udf);
+ /* Restore the state of binlog format */
+ thd->current_stmt_binlog_row_based= save_binlog_row_based;
DBUG_RETURN(1);
}
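Both mysql_create_function() and mysql_drop_function() now remember the
session's row-based flag, force statement-based logging for the duration of
the call, and restore the flag on every exit path, including the new binlog
error paths. The same guarantee can be expressed with a scope guard; this is
only an illustrative sketch (Session and BinlogFormatGuard are made-up names,
not part of the patch):

  #include <cstdio>

  struct Session
  {
    bool row_based_binlog;       // stands in for current_stmt_binlog_row_based
  };

  // Saves the flag, forces statement-based logging, restores on scope exit.
  class BinlogFormatGuard
  {
  public:
    explicit BinlogFormatGuard(Session *s)
      : session_(s), saved_(s->row_based_binlog)
    {
      session_->row_based_binlog = false;
    }
    ~BinlogFormatGuard() { session_->row_based_binlog = saved_; }
  private:
    Session *session_;
    bool saved_;
  };

  static int create_function(Session *s)
  {
    BinlogFormatGuard guard(s);  // restored automatically on every return
    // ... update the mysql.func table, write the statement to the binlog ...
    return 0;
  }

  int main()
  {
    Session s = { true };
    int rc = create_function(&s);
    std::printf("row_based restored: %d\n", (int) s.row_based_binlog);
    return rc;
  }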
=== modified file 'sql/sql_union.cc'
--- a/sql/sql_union.cc 2009-09-07 20:50:10 +0000
+++ b/sql/sql_union.cc 2010-03-04 08:03:07 +0000
@@ -335,6 +335,35 @@ bool st_select_lex_unit::prepare(THD *th
}
}
+ /*
+ Disable the usage of fulltext searches in the last union branch.
+ This is a temporary 5.x limitation because of the way the fulltext
+ search functions are handled by the optimizer.
+ This is manifestation of the more general problems of "taking away"
+ parts of a SELECT statement post-fix_fields(). This is generally not
+ doable since various flags are collected in various places (e.g.
+ SELECT_LEX) that carry information about the presence of certain
+ expressions or constructs in the parts of the query.
+ When part of the query is taken away it's not clear how to "divide"
+ the meaning of these accumulated flags and what to carry over to the
+ recipient query (SELECT_LEX).
+ */
+ if (global_parameters->ftfunc_list->elements &&
+ global_parameters->order_list.elements &&
+ global_parameters != fake_select_lex)
+ {
+ ORDER *ord;
+ Item_func::Functype ft= Item_func::FT_FUNC;
+ for (ord= (ORDER*)global_parameters->order_list.first; ord; ord= ord->next)
+ if ((*ord->item)->walk (&Item::find_function_processor, FALSE,
+ (uchar *) &ft))
+ {
+ my_error (ER_CANT_USE_OPTION_HERE, MYF(0), "MATCH()");
+ goto err;
+ }
+ }
+
+
create_options= (first_sl->options | thd_arg->options |
TMP_TABLE_ALL_COLUMNS);
/*
@@ -669,7 +698,7 @@ bool st_select_lex_unit::cleanup()
{
ORDER *ord;
for (ord= (ORDER*)global_parameters->order_list.first; ord; ord= ord->next)
- (*ord->item)->cleanup();
+ (*ord->item)->walk (&Item::cleanup_processor, 0, 0);
}
}
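The new st_select_lex_unit::prepare() check walks every expression in the
global ORDER BY with find_function_processor and rejects the statement as
soon as a MATCH() call is found anywhere in an ORDER BY item. A toy version
of that recursive walk over a hypothetical expression tree (Expr, FT_FUNC and
contains() are illustrative names only):

  #include <cstdio>
  #include <vector>

  enum FuncType { FT_FUNC, OTHER_FUNC };

  struct Expr
  {
    FuncType type;
    std::vector<Expr*> args;

    // True as soon as any node in the subtree matches the wanted type,
    // mirroring Item::walk() with find_function_processor.
    bool contains(FuncType wanted) const
    {
      if (type == wanted)
        return true;
      for (const Expr *a : args)
        if (a->contains(wanted))
          return true;
      return false;
    }
  };

  int main()
  {
    Expr match_expr = { FT_FUNC, {} };
    Expr plus_expr  = { OTHER_FUNC, { &match_expr } };  // e.g. MATCH(...) + 1

    std::vector<Expr*> order_by = { &plus_expr };
    for (const Expr *item : order_by)
      if (item->contains(FT_FUNC))
      {
        std::fprintf(stderr, "MATCH() cannot be used here\n");
        return 1;
      }
    return 0;
  }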
=== modified file 'sql/sql_update.cc'
--- a/sql/sql_update.cc 2009-12-03 11:19:05 +0000
+++ b/sql/sql_update.cc 2010-03-04 08:03:07 +0000
@@ -23,6 +23,7 @@
#include "sql_select.h"
#include "sp_head.h"
#include "sql_trigger.h"
+#include "debug_sync.h"
/* Return 0 if row hasn't changed */
@@ -828,7 +829,7 @@ int mysql_update(THD *thd,
if (error < 0)
{
- char buff[STRING_BUFFER_USUAL_SIZE];
+ char buff[MYSQL_ERRMSG_SIZE];
my_snprintf(buff, sizeof(buff), ER(ER_UPDATE_INFO), (ulong) found, (ulong) updated,
(ulong) thd->cuted_fields);
thd->row_count_func=
@@ -1143,8 +1144,11 @@ reopen_tables:
items from 'fields' list, so the cleanup above is necessary to.
*/
cleanup_items(thd->free_list);
-
+ cleanup_items(thd->stmt_arena->free_list);
close_tables_for_reopen(thd, &table_list);
+
+ DEBUG_SYNC(thd, "multi_update_reopen_tables");
+
goto reopen_tables;
}
@@ -1864,9 +1868,10 @@ void multi_update::abort()
into repl event.
*/
int errcode= query_error_code(thd, thd->killed == THD::NOT_KILLED);
- thd->binlog_query(THD::ROW_QUERY_TYPE,
- thd->query(), thd->query_length(),
- transactional_tables, FALSE, errcode);
+ /* the error of binary logging is ignored */
+ (void)thd->binlog_query(THD::ROW_QUERY_TYPE,
+ thd->query(), thd->query_length(),
+ transactional_tables, FALSE, errcode);
}
thd->transaction.all.modified_non_trans_table= TRUE;
}
=== modified file 'sql/sql_view.cc'
--- a/sql/sql_view.cc 2009-12-03 11:19:05 +0000
+++ b/sql/sql_view.cc 2010-03-04 08:03:07 +0000
@@ -268,11 +268,11 @@ bool create_view_precheck(THD *thd, TABL
table (i.e. user will not get some privileges by view creation)
*/
if ((check_access(thd, CREATE_VIEW_ACL, view->db, &view->grant.privilege,
- 0, 0, is_schema_db(view->db)) ||
+ 0, 0, is_schema_db(view->db, view->db_length)) ||
check_grant(thd, CREATE_VIEW_ACL, view, 0, 1, 0)) ||
(mode != VIEW_CREATE_NEW &&
(check_access(thd, DROP_ACL, view->db, &view->grant.privilege,
- 0, 0, is_schema_db(view->db)) ||
+ 0, 0, is_schema_db(view->db, view->db_length)) ||
check_grant(thd, DROP_ACL, view, 0, 1, 0))))
goto err;
@@ -662,8 +662,9 @@ bool mysql_create_view(THD *thd, TABLE_L
buff.append(views->source.str, views->source.length);
int errcode= query_error_code(thd, TRUE);
- thd->binlog_query(THD::STMT_QUERY_TYPE,
- buff.ptr(), buff.length(), FALSE, FALSE, errcode);
+ if (thd->binlog_query(THD::STMT_QUERY_TYPE,
+ buff.ptr(), buff.length(), FALSE, FALSE, errcode))
+ res= TRUE;
}
VOID(pthread_mutex_unlock(&LOCK_open));
@@ -1652,7 +1653,8 @@ bool mysql_drop_view(THD *thd, TABLE_LIS
/* if something goes wrong, bin-log with possible error code,
otherwise bin-log with error code cleared.
*/
- write_bin_log(thd, !something_wrong, thd->query(), thd->query_length());
+ if (write_bin_log(thd, !something_wrong, thd->query(), thd->query_length()))
+ something_wrong= 1;
}
VOID(pthread_mutex_unlock(&LOCK_open));
@@ -1771,7 +1773,7 @@ bool check_key_in_view(THD *thd, TABLE_L
if (!fld->item->fixed && fld->item->fix_fields(thd, &fld->item))
{
thd->mark_used_columns= save_mark_used_columns;
- return TRUE;
+ DBUG_RETURN(TRUE);
}
}
thd->mark_used_columns= save_mark_used_columns;
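The last sql_view.cc hunk replaces a bare "return TRUE" with DBUG_RETURN:
a function that opens a frame with DBUG_ENTER has to leave it through
DBUG_RETURN, otherwise the debug trace's nesting is left unbalanced. A toy
model of that pairing, assuming nothing about the real dbug library beyond
the enter/return discipline (TRACE_ENTER and TRACE_RETURN are invented names):

  #include <cstdio>

  static int depth = 0;

  // Toy stand-ins for DBUG_ENTER / DBUG_RETURN: the nesting depth only
  // stays correct if every traced entry is matched by a traced return.
  #define TRACE_ENTER(name)  const char *func_name_ = (name); \
                             std::printf("%*s> %s\n", 2 * depth++, "", func_name_)
  #define TRACE_RETURN(val)  do { std::printf("%*s< %s\n", 2 * --depth, "", \
                                              func_name_); return (val); } while (0)

  static bool check_key(bool fail)
  {
    TRACE_ENTER("check_key");
    if (fail)
      TRACE_RETURN(true);   // a bare "return true;" would leave depth off by one
    TRACE_RETURN(false);
  }

  int main()
  {
    check_key(true);
    check_key(false);
    return depth;           // 0 when every enter was matched by a return
  }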
=== modified file 'sql/sql_yacc.yy'
--- a/sql/sql_yacc.yy 2010-01-17 17:22:46 +0000
+++ b/sql/sql_yacc.yy 2010-03-04 08:03:07 +0000
@@ -596,6 +596,35 @@ Item* handle_sql2003_note184_exception(T
DBUG_RETURN(result);
}
+
+static bool add_create_index_prepare (LEX *lex, Table_ident *table)
+{
+ lex->sql_command= SQLCOM_CREATE_INDEX;
+ if (!lex->current_select->add_table_to_list(lex->thd, table, NULL,
+ TL_OPTION_UPDATING))
+ return TRUE;
+ lex->alter_info.reset();
+ lex->alter_info.flags= ALTER_ADD_INDEX;
+ lex->col_list.empty();
+ lex->change= NullS;
+ return FALSE;
+}
+
+
+static bool add_create_index (LEX *lex, Key::Keytype type, const char *name,
+ KEY_CREATE_INFO *info= NULL, bool generated= 0)
+{
+ Key *key;
+ key= new Key(type, name, info ? info : &lex->key_create_info, generated,
+ lex->col_list);
+ if (key == NULL)
+ return TRUE;
+
+ lex->alter_info.key_list.push_back(key);
+ lex->col_list.empty();
+ return FALSE;
+}
+
%}
%union {
int num;
@@ -1335,7 +1364,7 @@ bool my_yyoverflow(short **a, YYSTYPE **
option_type opt_var_type opt_var_ident_type
%type <key_type>
- key_type opt_unique_or_fulltext constraint_key_type
+ normal_key_type opt_unique constraint_key_type fulltext spatial
%type <key_alg>
btree_or_rtree
@@ -1434,7 +1463,10 @@ bool my_yyoverflow(short **a, YYSTYPE **
view_suid view_tail view_list_opt view_list view_select
view_check_option trigger_tail sp_tail sf_tail udf_tail event_tail
install uninstall partition_entry binlog_base64_event
- init_key_options key_options key_opts key_opt key_using_alg
+ init_key_options normal_key_options normal_key_opts all_key_opt
+ spatial_key_options fulltext_key_options normal_key_opt
+ fulltext_key_opt spatial_key_opt fulltext_key_opts spatial_key_opts
+ key_using_alg
server_def server_options_list server_option
definer_opt no_definer definer
END_OF_INPUT
@@ -1828,35 +1860,37 @@ create:
$5->table.str);
}
}
- | CREATE opt_unique_or_fulltext INDEX_SYM ident key_alg ON
+ | CREATE opt_unique INDEX_SYM ident key_alg ON table_ident
+ {
+ if (add_create_index_prepare(Lex, $7))
+ MYSQL_YYABORT;
+ }
+ '(' key_list ')' normal_key_options
+ {
+ if (add_create_index(Lex, $2, $4.str))
+ MYSQL_YYABORT;
+ }
+ | CREATE fulltext INDEX_SYM ident init_key_options ON
table_ident
{
- LEX *lex=Lex;
- lex->sql_command= SQLCOM_CREATE_INDEX;
- if (!lex->current_select->add_table_to_list(lex->thd, $7,
- NULL,
- TL_OPTION_UPDATING))
+ if (add_create_index_prepare(Lex, $7))
MYSQL_YYABORT;
- lex->alter_info.reset();
- lex->alter_info.flags= ALTER_ADD_INDEX;
- lex->col_list.empty();
- lex->change=NullS;
}
- '(' key_list ')' key_options
+ '(' key_list ')' fulltext_key_options
{
- LEX *lex=Lex;
- Key *key;
- if ($2 != Key::FULLTEXT && lex->key_create_info.parser_name.str)
- {
- my_parse_error(ER(ER_SYNTAX_ERROR));
+ if (add_create_index(Lex, $2, $4.str))
MYSQL_YYABORT;
- }
- key= new Key($2, $4.str, &lex->key_create_info, 0,
- lex->col_list);
- if (key == NULL)
+ }
+ | CREATE spatial INDEX_SYM ident init_key_options ON
+ table_ident
+ {
+ if (add_create_index_prepare(Lex, $7))
+ MYSQL_YYABORT;
+ }
+ '(' key_list ')' spatial_key_options
+ {
+ if (add_create_index(Lex, $2, $4.str))
MYSQL_YYABORT;
- lex->alter_info.key_list.push_back(key);
- lex->col_list.empty();
}
| CREATE DATABASE opt_if_not_exists ident
{
@@ -4082,7 +4116,7 @@ part_func_expr:
lex->safe_to_cache_query= 1;
if (not_corr_func)
{
- my_parse_error(ER(ER_CONST_EXPR_IN_PARTITION_FUNC_ERROR));
+ my_parse_error(ER(ER_WRONG_EXPR_IN_PARTITION_FUNC_ERROR));
MYSQL_YYABORT;
}
$$=$1;
@@ -4822,32 +4856,28 @@ column_def:
;
key_def:
- key_type opt_ident key_alg '(' key_list ')' key_options
+ normal_key_type opt_ident key_alg '(' key_list ')' normal_key_options
{
- LEX *lex=Lex;
- if ($1 != Key::FULLTEXT && lex->key_create_info.parser_name.str)
- {
- my_parse_error(ER(ER_SYNTAX_ERROR));
+ if (add_create_index (Lex, $1, $2))
MYSQL_YYABORT;
- }
- Key *key= new Key($1, $2, &lex->key_create_info, 0,
- lex->col_list);
- if (key == NULL)
+ }
+ | fulltext opt_key_or_index opt_ident init_key_options
+ '(' key_list ')' fulltext_key_options
+ {
+ if (add_create_index (Lex, $1, $3))
+ MYSQL_YYABORT;
+ }
+ | spatial opt_key_or_index opt_ident init_key_options
+ '(' key_list ')' spatial_key_options
+ {
+ if (add_create_index (Lex, $1, $3))
MYSQL_YYABORT;
- lex->alter_info.key_list.push_back(key);
- lex->col_list.empty(); /* Alloced by sql_alloc */
}
| opt_constraint constraint_key_type opt_ident key_alg
- '(' key_list ')' key_options
+ '(' key_list ')' normal_key_options
{
- LEX *lex=Lex;
- const char *key_name= $3 ? $3 : $1;
- Key *key= new Key($2, key_name, &lex->key_create_info, 0,
- lex->col_list);
- if (key == NULL)
+ if (add_create_index (Lex, $2, $3 ? $3 : $1))
MYSQL_YYABORT;
- lex->alter_info.key_list.push_back(key);
- lex->col_list.empty(); /* Alloced by sql_alloc */
}
| opt_constraint FOREIGN KEY_SYM opt_ident '(' key_list ')' references
{
@@ -4863,13 +4893,9 @@ key_def:
if (key == NULL)
MYSQL_YYABORT;
lex->alter_info.key_list.push_back(key);
- key= new Key(Key::MULTIPLE, key_name,
- &default_key_create_info, 1,
- lex->col_list);
- if (key == NULL)
+ if (add_create_index (lex, Key::MULTIPLE, key_name,
+ &default_key_create_info, 1))
MYSQL_YYABORT;
- lex->alter_info.key_list.push_back(key);
- lex->col_list.empty(); /* Alloced by sql_alloc */
/* Only used for ALTER TABLE. Ignored otherwise. */
lex->alter_info.flags|= ALTER_FOREIGN_KEY;
}
@@ -5437,19 +5463,8 @@ delete_option:
| SET DEFAULT { $$= (int) Foreign_key::FK_OPTION_DEFAULT; }
;
-key_type:
+normal_key_type:
key_or_index { $$= Key::MULTIPLE; }
- | FULLTEXT_SYM opt_key_or_index { $$= Key::FULLTEXT; }
- | SPATIAL_SYM opt_key_or_index
- {
-#ifdef HAVE_SPATIAL
- $$= Key::SPATIAL;
-#else
- my_error(ER_FEATURE_DISABLED, MYF(0),
- sym_group_geom.name, sym_group_geom.needed_define);
- MYSQL_YYABORT;
-#endif
- }
;
constraint_key_type:
@@ -5473,11 +5488,17 @@ keys_or_index:
| INDEXES {}
;
-opt_unique_or_fulltext:
+opt_unique:
/* empty */ { $$= Key::MULTIPLE; }
| UNIQUE_SYM { $$= Key::UNIQUE; }
- | FULLTEXT_SYM { $$= Key::FULLTEXT;}
- | SPATIAL_SYM
+ ;
+
+fulltext:
+ FULLTEXT_SYM { $$= Key::FULLTEXT;}
+ ;
+
+spatial:
+ SPATIAL_SYM
{
#ifdef HAVE_SPATIAL
$$= Key::SPATIAL;
@@ -5506,14 +5527,34 @@ key_alg:
| init_key_options key_using_alg
;
-key_options:
+normal_key_options:
+ /* empty */ {}
+ | normal_key_opts
+ ;
+
+fulltext_key_options:
/* empty */ {}
- | key_opts
+ | fulltext_key_opts
+ ;
+
+spatial_key_options:
+ /* empty */ {}
+ | spatial_key_opts
+ ;
+
+normal_key_opts:
+ normal_key_opt
+ | normal_key_opts normal_key_opt
;
-key_opts:
- key_opt
- | key_opts key_opt
+spatial_key_opts:
+ spatial_key_opt
+ | spatial_key_opts spatial_key_opt
+ ;
+
+fulltext_key_opts:
+ fulltext_key_opt
+ | fulltext_key_opts fulltext_key_opt
;
key_using_alg:
@@ -5521,10 +5562,22 @@ key_using_alg:
| TYPE_SYM btree_or_rtree { Lex->key_create_info.algorithm= $2; }
;
-key_opt:
- key_using_alg
- | KEY_BLOCK_SIZE opt_equal ulong_num
+all_key_opt:
+ KEY_BLOCK_SIZE opt_equal ulong_num
{ Lex->key_create_info.block_size= $3; }
+ ;
+
+normal_key_opt:
+ all_key_opt
+ | key_using_alg
+ ;
+
+spatial_key_opt:
+ all_key_opt
+ ;
+
+fulltext_key_opt:
+ all_key_opt
| WITH PARSER_SYM IDENT_sys
{
if (plugin_is_ready(&$3, MYSQL_FTPARSER_PLUGIN))
@@ -8895,7 +8948,7 @@ interval_time_stamp:
implementation without changing its
resolution.
*/
- WARN_DEPRECATED(yythd, "6.2", "FRAC_SECOND", "MICROSECOND");
+ WARN_DEPRECATED(yythd, VER_CELOSIA, "FRAC_SECOND", "MICROSECOND");
}
;
=== modified file 'sql/table.cc'
--- a/sql/table.cc 2010-03-08 13:57:32 +0000
+++ b/sql/table.cc 2010-03-09 19:23:30 +0000
@@ -212,10 +212,7 @@ TABLE_CATEGORY get_table_category(const
DBUG_ASSERT(db != NULL);
DBUG_ASSERT(name != NULL);
- if ((db->length == INFORMATION_SCHEMA_NAME.length) &&
- (my_strcasecmp(system_charset_info,
- INFORMATION_SCHEMA_NAME.str,
- db->str) == 0))
+ if (is_schema_db(db->str, db->length))
{
return TABLE_CATEGORY_INFORMATION;
}
@@ -1850,8 +1847,8 @@ int open_table_from_share(THD *thd, TABL
{
if (work_part_info_used)
tmp= fix_partition_func(thd, outparam, is_create_table);
- outparam->part_info->item_free_list= part_func_arena.free_list;
}
+ outparam->part_info->item_free_list= part_func_arena.free_list;
partititon_err:
if (tmp)
{
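Both the sql_view.cc precheck and get_table_category() above now go through
is_schema_db() with an explicit length, so the INFORMATION_SCHEMA test does a
cheap length comparison before the case-insensitive name comparison. A
portable sketch of that check (the server uses my_strcasecmp with
system_charset_info; the plain ASCII loop here is only an approximation):

  #include <cctype>
  #include <cstdio>

  static const char   SCHEMA_NAME[]   = "information_schema";
  static const size_t SCHEMA_NAME_LEN = sizeof(SCHEMA_NAME) - 1;

  // Length test first, then an ASCII case-insensitive comparison.
  static bool is_schema_db(const char *name, size_t length)
  {
    if (length != SCHEMA_NAME_LEN)
      return false;
    for (size_t i = 0; i < length; i++)
      if (std::tolower((unsigned char) name[i]) != SCHEMA_NAME[i])
        return false;
    return true;
  }

  int main()
  {
    std::printf("%d %d\n",
                (int) is_schema_db("INFORMATION_SCHEMA", 18),
                (int) is_schema_db("mysql", 5));
    return 0;
  }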
=== modified file 'storage/archive/ha_archive.cc'
--- a/storage/archive/ha_archive.cc 2010-01-15 15:27:55 +0000
+++ b/storage/archive/ha_archive.cc 2010-03-04 08:03:07 +0000
@@ -1490,7 +1490,7 @@ int ha_archive::info(uint flag)
stats.create_time= (ulong) file_stat.st_ctime;
stats.update_time= (ulong) file_stat.st_mtime;
stats.mean_rec_length= stats.records ?
- stats.data_file_length / stats.records : table->s->reclength;
+ ulong(stats.data_file_length / stats.records) : table->s->reclength;
stats.max_data_file_length= MAX_FILE_SIZE;
}
stats.delete_length= 0;
=== modified file 'storage/ibmdb2i/db2i_constraints.cc'
--- a/storage/ibmdb2i/db2i_constraints.cc 2009-03-09 21:20:14 +0000
+++ b/storage/ibmdb2i/db2i_constraints.cc 2009-12-11 07:16:57 +0000
@@ -329,7 +329,7 @@ char* ha_ibmdb2i::get_foreign_key_create
/* Process the constraint name. */
- info.strncat(STRING_WITH_LEN(" CONSTRAINT "));
+ info.strncat(STRING_WITH_LEN(",\n CONSTRAINT "));
convNameForCreateInfo(thd, info,
FKCstDef->CstName.Name, FKCstDef->CstName.Len);
@@ -398,7 +398,6 @@ char* ha_ibmdb2i::get_foreign_key_create
if ((i+1) < cstCnt)
{
- info.strcat(',');
tempPtr = (char*)cstHdr + cstHdr->CstLen;
cstHdr = (constraint_hdr_t*)(tempPtr);
}
@@ -671,28 +670,3 @@ uint ha_ibmdb2i::referenced_by_foreign_k
}
DBUG_RETURN(count);
}
-
-
-bool ha_ibmdb2i::check_if_incompatible_data(HA_CREATE_INFO *info,
- uint table_changes)
-{
- DBUG_ENTER("ha_ibmdb2i::check_if_incompatible_data");
- uint i;
- /* Check that auto_increment value and field definitions were
- not changed. */
- if ((info->used_fields & HA_CREATE_USED_AUTO &&
- info->auto_increment_value != 0) ||
- table_changes != IS_EQUAL_YES)
- DBUG_RETURN(COMPATIBLE_DATA_NO);
- /* Check if any fields were renamed. */
- for (i= 0; i < table->s->fields; i++)
- {
- Field *field= table->field[i];
- if (field->flags & FIELD_IS_RENAMED)
- {
- DBUG_PRINT("info", ("Field has been renamed, copy table"));
- DBUG_RETURN(COMPATIBLE_DATA_NO);
- }
- }
- DBUG_RETURN(COMPATIBLE_DATA_YES);
-}
=== modified file 'storage/ibmdb2i/ha_ibmdb2i.cc'
--- a/storage/ibmdb2i/ha_ibmdb2i.cc 2009-07-08 09:10:01 +0000
+++ b/storage/ibmdb2i/ha_ibmdb2i.cc 2009-12-11 07:01:16 +0000
@@ -284,7 +284,7 @@ static int ibmdb2i_init_func(void *p)
was_ILE_inited = false;
ibmdb2i_hton= (handlerton *)p;
VOID(pthread_mutex_init(&ibmdb2i_mutex,MY_MUTEX_INIT_FAST));
- (void) hash_init(&ibmdb2i_open_tables,system_charset_info,32,0,0,
+ (void) hash_init(&ibmdb2i_open_tables,table_alias_charset,32,0,0,
(hash_get_key) ibmdb2i_get_key,0,0);
ibmdb2i_hton->state= SHOW_OPTION_YES;
=== modified file 'storage/innobase/fil/fil0fil.c'
--- a/storage/innobase/fil/fil0fil.c 2009-07-10 10:40:31 +0000
+++ b/storage/innobase/fil/fil0fil.c 2009-12-21 10:20:32 +0000
@@ -1740,6 +1740,8 @@ fil_op_write_log(
MLOG_FILE_DELETE, or
MLOG_FILE_RENAME */
ulint space_id, /* in: space id */
+ ulint log_flags, /* in: redo log flags (stored
+ in the page number field) */
const char* name, /* in: table name in the familiar
'databasename/tablename' format, or
the file path in the case of
@@ -1760,8 +1762,8 @@ fil_op_write_log(
return;
}
- log_ptr = mlog_write_initial_log_record_for_file_op(type, space_id, 0,
- log_ptr, mtr);
+ log_ptr = mlog_write_initial_log_record_for_file_op(
+ type, space_id, log_flags, log_ptr, mtr);
/* Let us store the strings as null-terminated for easier readability
and handling */
@@ -1810,11 +1812,11 @@ fil_op_log_parse_or_replay(
not fir completely between ptr and end_ptr */
byte* end_ptr, /* in: buffer end */
ulint type, /* in: the type of this log record */
- ibool do_replay, /* in: TRUE if we want to replay the
- operation, and not just parse the log record */
- ulint space_id) /* in: if do_replay is TRUE, the space id of
- the tablespace in question; otherwise
- ignored */
+ ulint space_id, /* in: the space id of the tablespace in
+ question, or 0 if the log record should
+ only be parsed but not replayed */
+ ulint log_flags) /* in: redo log flags
+ (stored in the page number parameter) */
{
ulint name_len;
ulint new_name_len;
@@ -1868,7 +1870,7 @@ fil_op_log_parse_or_replay(
printf("new name %s\n", new_name);
}
*/
- if (do_replay == FALSE) {
+ if (!space_id) {
return(ptr);
}
@@ -1917,6 +1919,8 @@ fil_op_log_parse_or_replay(
} else if (fil_get_space_id_for_table(name)
!= ULINT_UNDEFINED) {
/* Do nothing */
+ } else if (log_flags & MLOG_FILE_FLAG_TEMP) {
+ /* Temporary table, do nothing */
} else {
/* Create the database directory for name, if it does
not exist yet */
@@ -2078,7 +2082,7 @@ try_again:
to write any log record */
mtr_start(&mtr);
- fil_op_write_log(MLOG_FILE_DELETE, id, path, NULL, &mtr);
+ fil_op_write_log(MLOG_FILE_DELETE, id, 0, path, NULL, &mtr);
mtr_commit(&mtr);
#endif
mem_free(path);
@@ -2349,7 +2353,7 @@ retry:
mtr_start(&mtr);
- fil_op_write_log(MLOG_FILE_RENAME, id, old_name, new_name,
+ fil_op_write_log(MLOG_FILE_RENAME, id, 0, old_name, new_name,
&mtr);
mtr_commit(&mtr);
}
@@ -2525,8 +2529,9 @@ error_exit2:
mtr_start(&mtr);
- fil_op_write_log(MLOG_FILE_CREATE, *space_id, tablename,
- NULL, &mtr);
+ fil_op_write_log(MLOG_FILE_CREATE, *space_id,
+ is_temp ? MLOG_FILE_FLAG_TEMP : 0,
+ tablename, NULL, &mtr);
mtr_commit(&mtr);
}
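The fil0fil.c changes reuse the page-number field of the MLOG_FILE_* records,
which was previously always written as 0, to carry flags such as
MLOG_FILE_FLAG_TEMP, so that crash recovery can skip replaying file creation
for temporary tables. A minimal sketch of packing and testing such a flag
word (FileOpRecord and replay() are illustrative names, not InnoDB symbols):

  #include <cstdio>

  typedef unsigned long ulint;

  // The page-number slot of the record used to be written as 0, so it is
  // free to carry a small set of flags instead.
  static const ulint FILE_FLAG_TEMP = 1;

  struct FileOpRecord
  {
    ulint space_id;
    ulint log_flags;   // stored where the page number used to be
  };

  static void replay(const FileOpRecord &rec)
  {
    if (rec.log_flags & FILE_FLAG_TEMP)
    {
      std::printf("space %lu: temporary table, nothing to replay\n",
                  rec.space_id);
      return;
    }
    std::printf("space %lu: replaying file operation\n", rec.space_id);
  }

  int main()
  {
    replay(FileOpRecord{ 5, FILE_FLAG_TEMP });
    replay(FileOpRecord{ 6, 0 });
    return 0;
  }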
=== modified file 'storage/innobase/handler/ha_innodb.cc'
--- a/storage/innobase/handler/ha_innodb.cc 2010-01-15 15:27:55 +0000
+++ b/storage/innobase/handler/ha_innodb.cc 2010-03-04 08:03:07 +0000
@@ -40,12 +40,6 @@ have disabled the InnoDB inlining in thi
#include "ha_innodb.h"
#include <mysql/plugin.h>
-#ifndef MYSQL_SERVER
-/* This is needed because of Bug #3596. Let us hope that pthread_mutex_t
-is defined the same in both builds: the MySQL server and the InnoDB plugin. */
-extern pthread_mutex_t LOCK_thread_count;
-#endif /* MYSQL_SERVER */
-
/** to protect innobase_open_files */
static pthread_mutex_t innobase_share_mutex;
/** to force correct commit order in binlog */
@@ -2592,57 +2586,150 @@ normalize_table_name(
}
/************************************************************************
+Get the upper limit of the MySQL integral and floating-point type. */
+static
+ulonglong
+innobase_get_int_col_max_value(
+/*===========================*/
+ /* out: maximum allowed value for the field */
+ const Field* field) /* in: MySQL field */
+{
+ ulonglong max_value = 0;
+
+ switch(field->key_type()) {
+ /* TINY */
+ case HA_KEYTYPE_BINARY:
+ max_value = 0xFFULL;
+ break;
+ case HA_KEYTYPE_INT8:
+ max_value = 0x7FULL;
+ break;
+ /* SHORT */
+ case HA_KEYTYPE_USHORT_INT:
+ max_value = 0xFFFFULL;
+ break;
+ case HA_KEYTYPE_SHORT_INT:
+ max_value = 0x7FFFULL;
+ break;
+ /* MEDIUM */
+ case HA_KEYTYPE_UINT24:
+ max_value = 0xFFFFFFULL;
+ break;
+ case HA_KEYTYPE_INT24:
+ max_value = 0x7FFFFFULL;
+ break;
+ /* LONG */
+ case HA_KEYTYPE_ULONG_INT:
+ max_value = 0xFFFFFFFFULL;
+ break;
+ case HA_KEYTYPE_LONG_INT:
+ max_value = 0x7FFFFFFFULL;
+ break;
+ /* BIG */
+ case HA_KEYTYPE_ULONGLONG:
+ max_value = 0xFFFFFFFFFFFFFFFFULL;
+ break;
+ case HA_KEYTYPE_LONGLONG:
+ max_value = 0x7FFFFFFFFFFFFFFFULL;
+ break;
+ case HA_KEYTYPE_FLOAT:
+ /* We use the maximum as per IEEE754-2008 standard, 2^24 */
+ max_value = 0x1000000ULL;
+ break;
+ case HA_KEYTYPE_DOUBLE:
+ /* We use the maximum as per IEEE754-2008 standard, 2^53 */
+ max_value = 0x20000000000000ULL;
+ break;
+ default:
+ ut_error;
+ }
+
+ return(max_value);
+}
+
+/************************************************************************
Set the autoinc column max value. This should only be called once from
ha_innobase::open(). Therefore there's no need for a covering lock. */
-ulong
+void
ha_innobase::innobase_initialize_autoinc()
/*======================================*/
{
- dict_index_t* index;
ulonglong auto_inc;
- const char* col_name;
- ulint error = DB_SUCCESS;
- dict_table_t* innodb_table = prebuilt->table;
-
- col_name = table->found_next_number_field->field_name;
- index = innobase_get_index(table->s->next_number_index);
+ const Field* field = table->found_next_number_field;
- /* Execute SELECT MAX(col_name) FROM TABLE; */
- error = row_search_max_autoinc(index, col_name, &auto_inc);
+ if (field != NULL) {
+ auto_inc = innobase_get_int_col_max_value(field);
+ } else {
+ /* We have no idea what's been passed in to us as the
+ autoinc column. We set it to the MAX_INT of our table
+ autoinc type. */
+ auto_inc = 0xFFFFFFFFFFFFFFFFULL;
- if (error == DB_SUCCESS) {
+ ut_print_timestamp(stderr);
+ fprintf(stderr, " InnoDB: Unable to determine the AUTOINC "
+ "column name\n");
+ }
- /* At the this stage we dont' know the increment
- or the offset, so use default inrement of 1. */
- ++auto_inc;
+ if (srv_force_recovery >= SRV_FORCE_NO_IBUF_MERGE) {
+ /* If the recovery level is set so high that writes
+ are disabled we force the AUTOINC counter to the MAX
+ value effectively disabling writes to the table.
+ Secondly, we avoid reading the table in case the read
+ results in failure due to a corrupted table/index.
+
+ We will not return an error to the client, so that the
+ tables can be dumped with minimal hassle. If an error
+ were returned in this case, the first attempt to read
+ the table would fail and subsequent SELECTs would succeed. */
+ } else if (field == NULL) {
+ my_error(ER_AUTOINC_READ_FAILED, MYF(0));
+ } else {
+ dict_index_t* index;
+ const char* col_name;
+ ulonglong read_auto_inc;
+ ulint err;
- dict_table_autoinc_initialize(innodb_table, auto_inc);
+ update_thd(ha_thd());
+ col_name = field->field_name;
+ index = innobase_get_index(table->s->next_number_index);
- } else if (error == DB_RECORD_NOT_FOUND) {
- ut_print_timestamp(stderr);
- fprintf(stderr, " InnoDB: MySQL and InnoDB data "
- "dictionaries are out of sync.\n"
- "InnoDB: Unable to find the AUTOINC column %s in the "
- "InnoDB table %s.\n"
- "InnoDB: We set the next AUTOINC column value to the "
- "maximum possible value,\n"
- "InnoDB: in effect disabling the AUTOINC next value "
- "generation.\n"
- "InnoDB: You can either set the next AUTOINC value "
- "explicitly using ALTER TABLE\n"
- "InnoDB: or fix the data dictionary by recreating "
- "the table.\n",
- col_name, index->table->name);
+ /* Execute SELECT MAX(col_name) FROM TABLE; */
+ err = row_search_max_autoinc(index, col_name, &read_auto_inc);
- auto_inc = 0xFFFFFFFFFFFFFFFFULL;
+ switch (err) {
+ case DB_SUCCESS:
+ /* At the this stage we do not know the increment
+ or the offset, so use a default increment of 1. */
+ auto_inc = read_auto_inc + 1;
+ break;
- dict_table_autoinc_initialize(innodb_table, auto_inc);
+ case DB_RECORD_NOT_FOUND:
+ ut_print_timestamp(stderr);
+ fprintf(stderr, " InnoDB: MySQL and InnoDB data "
+ "dictionaries are out of sync.\n"
+ "InnoDB: Unable to find the AUTOINC column "
+ "%s in the InnoDB table %s.\n"
+ "InnoDB: We set the next AUTOINC column "
+ "value to the maximum possible value,\n"
+ "InnoDB: in effect disabling the AUTOINC "
+ "next value generation.\n"
+ "InnoDB: You can either set the next "
+ "AUTOINC value explicitly using ALTER TABLE\n"
+ "InnoDB: or fix the data dictionary by "
+ "recreating the table.\n",
+ col_name, index->table->name);
- error = DB_SUCCESS;
- } /* else other errors are still fatal */
+ my_error(ER_AUTOINC_READ_FAILED, MYF(0));
+ break;
+ default:
+ /* row_search_max_autoinc() should only return
+ one of DB_SUCCESS or DB_RECORD_NOT_FOUND. */
+ ut_error;
+ }
+ }
- return(ulong(error));
+ dict_table_autoinc_initialize(prebuilt->table, auto_inc);
}
/*********************************************************************
@@ -2840,8 +2927,6 @@ retry:
/* Only if the table has an AUTOINC column. */
if (prebuilt->table != NULL && table->found_next_number_field != NULL) {
- ulint error;
-
dict_table_autoinc_lock(prebuilt->table);
/* Since a table can already be "open" in InnoDB's internal
@@ -2850,8 +2935,7 @@ retry:
autoinc value from a previous MySQL open. */
if (dict_table_autoinc_read(prebuilt->table) == 0) {
- error = innobase_initialize_autoinc();
- ut_a(error == DB_SUCCESS);
+ innobase_initialize_autoinc();
}
dict_table_autoinc_unlock(prebuilt->table);
@@ -3667,67 +3751,6 @@ skip_field:
}
/************************************************************************
-Get the upper limit of the MySQL integral and floating-point type. */
-
-ulonglong
-ha_innobase::innobase_get_int_col_max_value(
-/*========================================*/
- const Field* field)
-{
- ulonglong max_value = 0;
-
- switch(field->key_type()) {
- /* TINY */
- case HA_KEYTYPE_BINARY:
- max_value = 0xFFULL;
- break;
- case HA_KEYTYPE_INT8:
- max_value = 0x7FULL;
- break;
- /* SHORT */
- case HA_KEYTYPE_USHORT_INT:
- max_value = 0xFFFFULL;
- break;
- case HA_KEYTYPE_SHORT_INT:
- max_value = 0x7FFFULL;
- break;
- /* MEDIUM */
- case HA_KEYTYPE_UINT24:
- max_value = 0xFFFFFFULL;
- break;
- case HA_KEYTYPE_INT24:
- max_value = 0x7FFFFFULL;
- break;
- /* LONG */
- case HA_KEYTYPE_ULONG_INT:
- max_value = 0xFFFFFFFFULL;
- break;
- case HA_KEYTYPE_LONG_INT:
- max_value = 0x7FFFFFFFULL;
- break;
- /* BIG */
- case HA_KEYTYPE_ULONGLONG:
- max_value = 0xFFFFFFFFFFFFFFFFULL;
- break;
- case HA_KEYTYPE_LONGLONG:
- max_value = 0x7FFFFFFFFFFFFFFFULL;
- break;
- case HA_KEYTYPE_FLOAT:
- /* We use the maximum as per IEEE754-2008 standard, 2^24 */
- max_value = 0x1000000ULL;
- break;
- case HA_KEYTYPE_DOUBLE:
- /* We use the maximum as per IEEE754-2008 standard, 2^53 */
- max_value = 0x20000000000000ULL;
- break;
- default:
- ut_error;
- }
-
- return(max_value);
-}
-
-/************************************************************************
This special handling is really to overcome the limitations of MySQL's
binlogging. We need to eliminate the non-determinism that will arise in
INSERT ... SELECT type of statements, since MySQL binlog only stores the
@@ -7410,8 +7433,8 @@ innodb_show_status(
mutex_enter_noninline(&srv_monitor_file_mutex);
rewind(srv_monitor_file);
- srv_printf_innodb_monitor(srv_monitor_file,
- &trx_list_start, &trx_list_end);
+ srv_printf_innodb_monitor(srv_monitor_file, FALSE,
+ &trx_list_start, &trx_list_end);
flen = ftell(srv_monitor_file);
os_file_set_eof(srv_monitor_file);
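innobase_initialize_autoinc() no longer asserts when the counter cannot be
read: with writes disabled by a high recovery level it pins the counter at
the column type's maximum, a missing AUTOINC column raises
ER_AUTOINC_READ_FAILED, and the normal path still seeds the counter with
MAX(col) + 1. A condensed sketch of that decision ladder with stand-in names
(column_max, initialize_autoinc); the real code dispatches on key_type():

  #include <cstdint>
  #include <cstdio>

  enum LookupResult { FOUND, NOT_FOUND };

  // Stand-in for innobase_get_int_col_max_value(): the per-type upper bound.
  static uint64_t column_max(bool is_signed_bigint)
  {
    return is_signed_bigint ? 0x7FFFFFFFFFFFFFFFULL : 0xFFFFFFFFFFFFFFFFULL;
  }

  static uint64_t initialize_autoinc(bool writes_disabled, LookupResult lookup,
                                     uint64_t stored_max)
  {
    const uint64_t type_max = column_max(true);

    if (writes_disabled)        // e.g. recovery level that disables writes
      return type_max;          // effectively freezes the counter
    if (lookup == NOT_FOUND)    // dictionaries out of sync: report an error
    {
      std::fprintf(stderr, "unable to read the AUTOINC column\n");
      return type_max;
    }
    return stored_max + 1;      // SELECT MAX(col) + 1, default increment of 1
  }

  int main()
  {
    std::printf("%llu\n",
                (unsigned long long) initialize_autoinc(false, FOUND, 41));
    return 0;
  }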
=== modified file 'storage/innobase/handler/ha_innodb.h'
--- a/storage/innobase/handler/ha_innodb.h 2009-09-24 14:52:52 +0000
+++ b/storage/innobase/handler/ha_innodb.h 2010-01-22 09:57:02 +0000
@@ -78,9 +78,8 @@ class ha_innobase: public handler
ulong innobase_reset_autoinc(ulonglong auto_inc);
ulong innobase_get_autoinc(ulonglong* value);
ulong innobase_update_autoinc(ulonglong auto_inc);
- ulong innobase_initialize_autoinc();
+ void innobase_initialize_autoinc();
dict_index_t* innobase_get_index(uint keynr);
- ulonglong innobase_get_int_col_max_value(const Field* field);
/* Init values for the class: */
public:
=== modified file 'storage/innobase/include/fil0fil.h'
--- a/storage/innobase/include/fil0fil.h 2006-06-01 06:34:04 +0000
+++ b/storage/innobase/include/fil0fil.h 2009-12-21 10:20:32 +0000
@@ -330,11 +330,11 @@ fil_op_log_parse_or_replay(
not fir completely between ptr and end_ptr */
byte* end_ptr, /* in: buffer end */
ulint type, /* in: the type of this log record */
- ibool do_replay, /* in: TRUE if we want to replay the
- operation, and not just parse the log record */
- ulint space_id); /* in: if do_replay is TRUE, the space id of
- the tablespace in question; otherwise
- ignored */
+ ulint space_id, /* in: the space id of the tablespace in
+ question, or 0 if the log record should
+ only be parsed but not replayed */
+ ulint log_flags); /* in: redo log flags
+ (stored in the page number parameter) */
/***********************************************************************
Deletes a single-table tablespace. The tablespace must be cached in the
memory cache. */
=== modified file 'storage/innobase/include/lock0lock.h'
--- a/storage/innobase/include/lock0lock.h 2008-12-14 20:00:37 +0000
+++ b/storage/innobase/include/lock0lock.h 2009-12-23 06:59:34 +0000
@@ -579,10 +579,15 @@ lock_rec_print(
/*************************************************************************
Prints info of locks for all transactions. */
-void
+ibool
lock_print_info_summary(
/*====================*/
- FILE* file); /* in: file where to print */
+ /* out: FALSE if not able to obtain
+ kernel mutex and exits without
+ printing info */
+ FILE* file, /* in: file where to print */
+ ibool nowait);/* in: whether to wait for the kernel
+ mutex */
/*************************************************************************
Prints info of locks for each transaction. */
=== modified file 'storage/innobase/include/mtr0mtr.h'
--- a/storage/innobase/include/mtr0mtr.h 2006-10-20 18:36:15 +0000
+++ b/storage/innobase/include/mtr0mtr.h 2009-12-21 10:20:32 +0000
@@ -134,6 +134,12 @@ flag value must give the length also! */
#define MLOG_BIGGEST_TYPE ((byte)46) /* biggest value (used in
asserts) */
+/* Flags for MLOG_FILE operations (stored in the page number
+parameter, called log_flags in the functions). The page number
+parameter was initially written as 0. */
+#define MLOG_FILE_FLAG_TEMP 1 /* identifies TEMPORARY TABLE in
+ MLOG_FILE_CREATE */
+
/*******************************************************************
Starts a mini-transaction and creates a mini-transaction handle
and buffer in the memory buffer given by the caller. */
=== modified file 'storage/innobase/include/srv0srv.h'
--- a/storage/innobase/include/srv0srv.h 2009-05-19 08:20:28 +0000
+++ b/storage/innobase/include/srv0srv.h 2009-12-23 06:59:34 +0000
@@ -146,7 +146,8 @@ extern ibool srv_print_innodb_tablespace
extern ibool srv_print_verbose_log;
extern ibool srv_print_innodb_table_monitor;
-extern ibool srv_lock_timeout_and_monitor_active;
+extern ibool srv_lock_timeout_active;
+extern ibool srv_monitor_active;
extern ibool srv_error_monitor_active;
extern ulong srv_n_spin_wait_rounds;
@@ -427,12 +428,21 @@ srv_release_mysql_thread_if_suspended(
que_thr_t* thr); /* in: query thread associated with the
MySQL OS thread */
/*************************************************************************
-A thread which wakes up threads whose lock wait may have lasted too long.
-This also prints the info output by various InnoDB monitors. */
+A thread which wakes up threads whose lock wait may have lasted too
+long. */
os_thread_ret_t
-srv_lock_timeout_and_monitor_thread(
-/*================================*/
+srv_lock_timeout_thread(
+/*====================*/
+ /* out: a dummy parameter */
+ void* arg); /* in: a dummy parameter required by
+ os_thread_create */
+/*************************************************************************
+A thread which prints the info output by various InnoDB monitors. */
+
+os_thread_ret_t
+srv_monitor_thread(
+/*===============*/
/* out: a dummy parameter */
void* arg); /* in: a dummy parameter required by
os_thread_create */
@@ -449,10 +459,14 @@ srv_error_monitor_thread(
/**********************************************************************
Outputs to a file the output of the InnoDB Monitor. */
-void
+ibool
srv_printf_innodb_monitor(
/*======================*/
+ /* out: FALSE if not all information printed
+ due to failure to obtain necessary mutex */
FILE* file, /* in: output stream */
+ ibool nowait, /* in: whether to wait for kernel
+ mutex. */
ulint* trx_start, /* out: file position of the start of
the list of active transactions */
ulint* trx_end); /* out: file position of the end of
=== modified file 'storage/innobase/lock/lock0lock.c'
--- a/storage/innobase/lock/lock0lock.c 2009-12-01 10:38:40 +0000
+++ b/storage/innobase/lock/lock0lock.c 2009-12-23 06:59:34 +0000
@@ -4192,12 +4192,27 @@ lock_get_n_rec_locks(void)
/*************************************************************************
Prints info of locks for all transactions. */
-void
+ibool
lock_print_info_summary(
/*====================*/
- FILE* file) /* in: file where to print */
-{
- lock_mutex_enter_kernel();
+ /* out: FALSE if not able to obtain
+ kernel mutex and exit without
+ printing lock info */
+ FILE* file, /* in: file where to print */
+ ibool nowait) /* in: whether to wait for the kernel
+ mutex */
+{
+
+ /* if nowait is FALSE, wait on the kernel mutex,
+ otherwise return immediately if fail to obtain the
+ mutex. */
+ if (!nowait) {
+ lock_mutex_enter_kernel();
+ } else if (mutex_enter_nowait(&kernel_mutex)) {
+ fputs("FAIL TO OBTAIN KERNEL MUTEX, "
+ "SKIP LOCK INFO PRINTING\n", file);
+ return(FALSE);
+ }
if (lock_deadlock_found) {
fputs("------------------------\n"
@@ -4231,6 +4246,7 @@ lock_print_info_summary(
"Total number of lock structs in row lock hash table %lu\n",
(ulong) lock_get_n_rec_locks());
#endif /* PRINT_NUM_OF_LOCK_STRUCTS */
+ return(TRUE);
}
/*************************************************************************
=== modified file 'storage/innobase/log/log0log.c'
--- a/storage/innobase/log/log0log.c 2007-07-10 14:34:21 +0000
+++ b/storage/innobase/log/log0log.c 2009-12-23 06:59:34 +0000
@@ -3045,7 +3045,7 @@ loop:
if (srv_fast_shutdown < 2
&& (srv_error_monitor_active
- || srv_lock_timeout_and_monitor_active)) {
+ || srv_lock_timeout_active || srv_monitor_active)) {
mutex_exit(&kernel_mutex);
=== modified file 'storage/innobase/log/log0recv.c'
--- a/storage/innobase/log/log0recv.c 2007-08-28 00:41:29 +0000
+++ b/storage/innobase/log/log0recv.c 2009-12-21 10:20:32 +0000
@@ -939,8 +939,7 @@ recv_parse_or_apply_log_rec_body(
case MLOG_FILE_CREATE:
case MLOG_FILE_RENAME:
case MLOG_FILE_DELETE:
- ptr = fil_op_log_parse_or_replay(ptr, end_ptr, type, FALSE,
- ULINT_UNDEFINED);
+ ptr = fil_op_log_parse_or_replay(ptr, end_ptr, type, 0, 0);
break;
default:
ptr = NULL;
@@ -1938,8 +1937,8 @@ loop:
point to the datadir we should use there */
if (NULL == fil_op_log_parse_or_replay(
- body, end_ptr, type, TRUE,
- space)) {
+ body, end_ptr, type,
+ space, page_no)) {
fprintf(stderr,
"InnoDB: Error: file op"
" log record of type %lu"
=== modified file 'storage/innobase/row/row0mysql.c'
--- a/storage/innobase/row/row0mysql.c 2009-11-02 14:59:19 +0000
+++ b/storage/innobase/row/row0mysql.c 2010-01-22 09:55:50 +0000
@@ -3245,19 +3245,13 @@ check_next_foreign:
"END;\n"
, FALSE, trx);
- if (err != DB_SUCCESS) {
- ut_a(err == DB_OUT_OF_FILE_SPACE);
-
- err = DB_MUST_GET_MORE_FILE_SPACE;
-
- row_mysql_handle_errors(&err, trx, NULL, NULL);
-
- ut_error;
- } else {
+ switch (err) {
ibool is_path;
const char* name_or_path;
mem_heap_t* heap;
+ case DB_SUCCESS:
+
heap = mem_heap_create(200);
/* Clone the name, in case it has been allocated
@@ -3322,7 +3316,27 @@ check_next_foreign:
}
mem_heap_free(heap);
+ break;
+
+ case DB_TOO_MANY_CONCURRENT_TRXS:
+ /* Cannot even find a free slot for the
+ the undo log. We can directly exit here
+ and return the DB_TOO_MANY_CONCURRENT_TRXS
+ error. */
+ break;
+
+ case DB_OUT_OF_FILE_SPACE:
+ err = DB_MUST_GET_MORE_FILE_SPACE;
+
+ row_mysql_handle_errors(&err, trx, NULL, NULL);
+
+ /* Fall through to raise error */
+
+ default:
+ /* No other possible error returns */
+ ut_error;
}
+
funct_exit:
trx_commit_for_mysql(trx);
=== modified file 'storage/innobase/srv/srv0srv.c'
--- a/storage/innobase/srv/srv0srv.c 2009-05-19 08:20:28 +0000
+++ b/storage/innobase/srv/srv0srv.c 2009-12-23 06:59:34 +0000
@@ -64,7 +64,8 @@ ulint srv_fatal_semaphore_wait_threshold
in microseconds, in order to reduce the lagging of the purge thread. */
ulint srv_dml_needed_delay = 0;
-ibool srv_lock_timeout_and_monitor_active = FALSE;
+ibool srv_lock_timeout_active = FALSE;
+ibool srv_monitor_active = FALSE;
ibool srv_error_monitor_active = FALSE;
const char* srv_main_thread_op_info = "";
@@ -122,6 +123,16 @@ ulint srv_log_file_size = ULINT_MAX; /*
ulint srv_log_buffer_size = ULINT_MAX; /* size in database pages */
ulong srv_flush_log_at_trx_commit = 1;
+/* Maximum number of times allowed to conditionally acquire
+mutex before switching to blocking wait on the mutex */
+#define MAX_MUTEX_NOWAIT 20
+
+/* Check whether the number of failed nonblocking mutex
+acquisition attempts exceeds maximum allowed value. If so,
+srv_printf_innodb_monitor() will request mutex acquisition
+with mutex_enter(), which will wait until it gets the mutex. */
+#define MUTEX_NOWAIT(mutex_skipped) ((mutex_skipped) < MAX_MUTEX_NOWAIT)
+
byte srv_latin1_ordering[256] /* The sort order table of the latin1
character set. The following table is
the MySQL order as of Feb 10th, 2002 */
@@ -1626,10 +1637,13 @@ srv_refresh_innodb_monitor_stats(void)
/**********************************************************************
Outputs to a file the output of the InnoDB Monitor. */
-void
+ibool
srv_printf_innodb_monitor(
/*======================*/
+ /* out: FALSE if not all information printed
+ due to failure to obtain necessary mutex */
FILE* file, /* in: output stream */
+ ibool nowait, /* in: whether to wait for the mutex. */
ulint* trx_start, /* out: file position of the start of
the list of active transactions */
ulint* trx_end) /* out: file position of the end of
@@ -1638,6 +1652,7 @@ srv_printf_innodb_monitor(
double time_elapsed;
time_t current_time;
ulint n_reserved;
+ ibool ret;
mutex_enter(&srv_innodb_monitor_mutex);
@@ -1682,24 +1697,31 @@ srv_printf_innodb_monitor(
mutex_exit(&dict_foreign_err_mutex);
- lock_print_info_summary(file);
- if (trx_start) {
- long t = ftell(file);
- if (t < 0) {
- *trx_start = ULINT_UNDEFINED;
- } else {
- *trx_start = (ulint) t;
+ /* Only if lock_print_info_summary proceeds correctly,
+ before we call the lock_print_info_all_transactions
+ to print all the lock information. */
+ ret = lock_print_info_summary(file, nowait);
+
+ if (ret) {
+ if (trx_start) {
+ long t = ftell(file);
+ if (t < 0) {
+ *trx_start = ULINT_UNDEFINED;
+ } else {
+ *trx_start = (ulint) t;
+ }
}
- }
- lock_print_info_all_transactions(file);
- if (trx_end) {
- long t = ftell(file);
- if (t < 0) {
- *trx_end = ULINT_UNDEFINED;
- } else {
- *trx_end = (ulint) t;
+ lock_print_info_all_transactions(file);
+ if (trx_end) {
+ long t = ftell(file);
+ if (t < 0) {
+ *trx_end = ULINT_UNDEFINED;
+ } else {
+ *trx_end = (ulint) t;
+ }
}
}
+
fputs("--------\n"
"FILE I/O\n"
"--------\n", file);
@@ -1804,6 +1826,8 @@ srv_printf_innodb_monitor(
"============================\n", file);
mutex_exit(&srv_innodb_monitor_mutex);
fflush(file);
+
+ return(ret);
}
/**********************************************************************
@@ -1883,26 +1907,23 @@ srv_export_innodb_status(void)
}
/*************************************************************************
-A thread which wakes up threads whose lock wait may have lasted too long.
-This also prints the info output by various InnoDB monitors. */
+A thread prints the info output by various InnoDB monitors. */
os_thread_ret_t
-srv_lock_timeout_and_monitor_thread(
-/*================================*/
+srv_monitor_thread(
+/*===============*/
/* out: a dummy parameter */
void* arg __attribute__((unused)))
/* in: a dummy parameter required by
os_thread_create */
{
- srv_slot_t* slot;
double time_elapsed;
time_t current_time;
time_t last_table_monitor_time;
time_t last_tablespace_monitor_time;
time_t last_monitor_time;
- ibool some_waits;
- double wait_time;
- ulint i;
+ ulint mutex_skipped;
+ ibool last_srv_print_monitor;
#ifdef UNIV_DEBUG_THREAD_CREATION
fprintf(stderr, "Lock timeout thread starts, id %lu\n",
@@ -1913,13 +1934,15 @@ srv_lock_timeout_and_monitor_thread(
last_table_monitor_time = time(NULL);
last_tablespace_monitor_time = time(NULL);
last_monitor_time = time(NULL);
+ mutex_skipped = 0;
+ last_srv_print_monitor = srv_print_innodb_monitor;
loop:
- srv_lock_timeout_and_monitor_active = TRUE;
+ srv_monitor_active = TRUE;
- /* When someone is waiting for a lock, we wake up every second
- and check if a timeout has passed for a lock wait */
+ /* Wake up every 5 seconds to see if we need to print
+ monitor information. */
- os_thread_sleep(1000000);
+ os_thread_sleep(5000000);
current_time = time(NULL);
@@ -1929,14 +1952,40 @@ loop:
last_monitor_time = time(NULL);
if (srv_print_innodb_monitor) {
- srv_printf_innodb_monitor(stderr, NULL, NULL);
 + /* Reset mutex_skipped counter every time
+ srv_print_innodb_monitor changes. This is to
+ ensure we will not be blocked by kernel_mutex
+ for short duration information printing,
+ such as requested by sync_array_print_long_waits() */
+ if (!last_srv_print_monitor) {
+ mutex_skipped = 0;
+ last_srv_print_monitor = TRUE;
+ }
+
+ if (!srv_printf_innodb_monitor(stderr,
+ MUTEX_NOWAIT(mutex_skipped),
+ NULL, NULL)) {
+ mutex_skipped++;
+ } else {
+ /* Reset the counter */
+ mutex_skipped = 0;
+ }
+ } else {
+ last_srv_print_monitor = FALSE;
}
+
if (srv_innodb_status) {
mutex_enter(&srv_monitor_file_mutex);
rewind(srv_monitor_file);
- srv_printf_innodb_monitor(srv_monitor_file, NULL,
- NULL);
+ if (!srv_printf_innodb_monitor(srv_monitor_file,
+ MUTEX_NOWAIT(mutex_skipped),
+ NULL, NULL)) {
+ mutex_skipped++;
+ } else {
+ mutex_skipped = 0;
+ }
+
os_file_set_eof(srv_monitor_file);
mutex_exit(&srv_monitor_file_mutex);
}
@@ -1989,6 +2038,56 @@ loop:
}
}
+ if (srv_shutdown_state >= SRV_SHUTDOWN_CLEANUP) {
+ goto exit_func;
+ }
+
+ if (srv_print_innodb_monitor
+ || srv_print_innodb_lock_monitor
+ || srv_print_innodb_tablespace_monitor
+ || srv_print_innodb_table_monitor) {
+ goto loop;
+ }
+
+ srv_monitor_active = FALSE;
+
+ goto loop;
+
+exit_func:
+ srv_monitor_active = FALSE;
+
+ /* We count the number of threads in os_thread_exit(). A created
+ thread should always use that to exit and not use return() to exit. */
+
+ os_thread_exit(NULL);
+
+ OS_THREAD_DUMMY_RETURN;
+}
+
+/*************************************************************************
+A thread which wakes up threads whose lock wait may have lasted too long. */
+
+os_thread_ret_t
+srv_lock_timeout_thread(
+/*====================*/
+ /* out: a dummy parameter */
+ void* arg __attribute__((unused)))
+ /* in: a dummy parameter required by
+ os_thread_create */
+{
+ srv_slot_t* slot;
+ ibool some_waits;
+ double wait_time;
+ ulint i;
+
+loop:
+ /* When someone is waiting for a lock, we wake up every second
+ and check if a timeout has passed for a lock wait */
+
+ os_thread_sleep(1000000);
+
+ srv_lock_timeout_active = TRUE;
+
mutex_enter(&kernel_mutex);
some_waits = FALSE;
@@ -2033,17 +2132,11 @@ loop:
goto exit_func;
}
- if (some_waits || srv_print_innodb_monitor
- || srv_print_innodb_lock_monitor
- || srv_print_innodb_tablespace_monitor
- || srv_print_innodb_table_monitor) {
+ if (some_waits) {
goto loop;
}
- /* No one was waiting for a lock and no monitor was active:
- suspend this thread */
-
- srv_lock_timeout_and_monitor_active = FALSE;
+ srv_lock_timeout_active = FALSE;
#if 0
/* The following synchronisation is disabled, since
@@ -2053,7 +2146,7 @@ loop:
goto loop;
exit_func:
- srv_lock_timeout_and_monitor_active = FALSE;
+ srv_lock_timeout_active = FALSE;
/* We count the number of threads in os_thread_exit(). A created
thread should always use that to exit and not use return() to exit. */
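The split monitor thread now tries mutex_enter_nowait() on the kernel mutex
and only falls back to a blocking mutex_enter() after MAX_MUTEX_NOWAIT
consecutive failures, so a congested lock system delays rather than blocks
the status printout. A self-contained sketch of that try-then-block policy,
with std::mutex standing in for the kernel mutex and MAX_NOWAIT mirroring
MAX_MUTEX_NOWAIT:

  #include <cstdio>
  #include <mutex>

  static const int MAX_NOWAIT = 20;

  // Returns false when the nonblocking attempt failed and printing was skipped.
  static bool print_status(std::mutex &kernel_mutex, bool allow_nowait)
  {
    if (allow_nowait)
    {
      if (!kernel_mutex.try_lock())
      {
        std::puts("kernel mutex busy, skipping lock info this round");
        return false;
      }
    }
    else
      kernel_mutex.lock();      // skipped too often: wait for the mutex

    std::puts("... lock info ...");
    kernel_mutex.unlock();
    return true;
  }

  int main()
  {
    std::mutex kernel_mutex;
    int skipped = 0;
    for (int round = 0; round < 3; round++)
    {
      if (!print_status(kernel_mutex, skipped < MAX_NOWAIT))
        skipped++;              // keep trying nonblocking until the limit
      else
        skipped = 0;            // success resets the counter
    }
    return 0;
  }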
=== modified file 'storage/innobase/srv/srv0start.c'
--- a/storage/innobase/srv/srv0start.c 2009-05-06 12:03:24 +0000
+++ b/storage/innobase/srv/srv0start.c 2010-03-04 08:03:07 +0000
@@ -87,8 +87,8 @@ static os_file_t files[1000];
static mutex_t ios_mutex;
static ulint ios;
-static ulint n[SRV_MAX_N_IO_THREADS + 5];
-static os_thread_id_t thread_ids[SRV_MAX_N_IO_THREADS + 5];
+static ulint n[SRV_MAX_N_IO_THREADS + 6];
+static os_thread_id_t thread_ids[SRV_MAX_N_IO_THREADS + 6];
/* We use this mutex to test the return value of pthread_mutex_trylock
on successful locking. HP-UX does NOT return 0, though Linux et al do. */
@@ -1596,15 +1596,20 @@ innobase_start_or_create_for_mysql(void)
/* fprintf(stderr, "Max allowed record size %lu\n",
page_get_free_space_of_empty() / 2); */
- /* Create the thread which watches the timeouts for lock waits
- and prints InnoDB monitor info */
+ /* Create the thread which watches the timeouts for lock
+ waits */
- os_thread_create(&srv_lock_timeout_and_monitor_thread, NULL,
+ os_thread_create(&srv_lock_timeout_thread, NULL,
thread_ids + 2 + SRV_MAX_N_IO_THREADS);
/* Create the thread which warns of long semaphore waits */
os_thread_create(&srv_error_monitor_thread, NULL,
thread_ids + 3 + SRV_MAX_N_IO_THREADS);
+
+ /* Create the thread which prints InnoDB monitor info */
+ os_thread_create(&srv_monitor_thread, NULL,
+ thread_ids + 4 + SRV_MAX_N_IO_THREADS);
+
srv_was_started = TRUE;
srv_is_being_started = FALSE;
=== modified file 'storage/innodb_plugin/CMakeLists.txt'
--- a/storage/innodb_plugin/CMakeLists.txt 2010-01-15 15:27:55 +0000
+++ b/storage/innodb_plugin/CMakeLists.txt 2010-03-04 08:03:07 +0000
@@ -83,4 +83,4 @@ SET(INNODB_PLUGIN_SOURCES btr/btr0btr.c
ADD_DEFINITIONS(-DHAVE_WINDOWS_ATOMICS -DIB_HAVE_PAUSE_INSTRUCTION)
#Disable storage engine, as we are using XtraDB
-#MYSQL_STORAGE_ENGINE(INNOBASE)
+#MYSQL_STORAGE_ENGINE(INNODB_PLUGIN)
=== modified file 'storage/innodb_plugin/handler/ha_innodb.cc'
--- a/storage/innodb_plugin/handler/ha_innodb.cc 2009-12-08 09:26:11 +0000
+++ b/storage/innodb_plugin/handler/ha_innodb.cc 2010-01-13 10:28:42 +0000
@@ -110,9 +110,6 @@ extern "C" {
# ifndef MYSQL_PLUGIN_IMPORT
# define MYSQL_PLUGIN_IMPORT /* nothing */
# endif /* MYSQL_PLUGIN_IMPORT */
-/* This is needed because of Bug #3596. Let us hope that pthread_mutex_t
-is defined the same in both builds: the MySQL server and the InnoDB plugin. */
-extern MYSQL_PLUGIN_IMPORT pthread_mutex_t LOCK_thread_count;
#if MYSQL_VERSION_ID < 50124
/* this is defined in mysql_priv.h inside #ifdef MYSQL_SERVER
=== modified file 'storage/myisam/mi_packrec.c'
--- a/storage/myisam/mi_packrec.c 2009-10-15 21:38:29 +0000
+++ b/storage/myisam/mi_packrec.c 2010-03-04 08:03:07 +0000
@@ -1493,20 +1493,54 @@ static int _mi_read_rnd_mempack_record(M
my_bool _mi_memmap_file(MI_INFO *info)
{
MYISAM_SHARE *share=info->s;
+ my_bool eom;
+
DBUG_ENTER("mi_memmap_file");
if (!info->s->file_map)
{
+ my_off_t data_file_length= share->state.state.data_file_length;
+
+ if (myisam_mmap_size != SIZE_T_MAX)
+ {
+ pthread_mutex_lock(&THR_LOCK_myisam_mmap);
+ eom= data_file_length > myisam_mmap_size - myisam_mmap_used - MEMMAP_EXTRA_MARGIN;
+ if (!eom)
+ myisam_mmap_used+= data_file_length + MEMMAP_EXTRA_MARGIN;
+ pthread_mutex_unlock(&THR_LOCK_myisam_mmap);
+ }
+ else
+ eom= data_file_length > myisam_mmap_size - MEMMAP_EXTRA_MARGIN;
+
+ if (eom)
+ {
+ DBUG_PRINT("warning", ("File is too large for mmap"));
+ DBUG_RETURN(0);
+ }
if (my_seek(info->dfile,0L,MY_SEEK_END,MYF(0)) <
share->state.state.data_file_length+MEMMAP_EXTRA_MARGIN)
{
DBUG_PRINT("warning",("File isn't extended for memmap"));
+ if (myisam_mmap_size != SIZE_T_MAX)
+ {
+ pthread_mutex_lock(&THR_LOCK_myisam_mmap);
+ myisam_mmap_used-= data_file_length + MEMMAP_EXTRA_MARGIN;
+ pthread_mutex_unlock(&THR_LOCK_myisam_mmap);
+ }
DBUG_RETURN(0);
}
if (mi_dynmap_file(info,
share->state.state.data_file_length +
MEMMAP_EXTRA_MARGIN))
+ {
+ if (myisam_mmap_size != SIZE_T_MAX)
+ {
+ pthread_mutex_lock(&THR_LOCK_myisam_mmap);
+ myisam_mmap_used-= data_file_length + MEMMAP_EXTRA_MARGIN;
+ pthread_mutex_unlock(&THR_LOCK_myisam_mmap);
+ }
DBUG_RETURN(0);
+ }
}
info->opt_flag|= MEMMAP_USED;
info->read_record= share->read_record= _mi_read_mempack_record;
@@ -1519,6 +1553,13 @@ void _mi_unmap_file(MI_INFO *info)
{
VOID(my_munmap((char*) info->s->file_map,
(size_t) info->s->mmaped_length + MEMMAP_EXTRA_MARGIN));
+
+ if (myisam_mmap_size != SIZE_T_MAX)
+ {
+ pthread_mutex_lock(&THR_LOCK_myisam_mmap);
+ myisam_mmap_used-= info->s->mmaped_length + MEMMAP_EXTRA_MARGIN;
+ pthread_mutex_unlock(&THR_LOCK_myisam_mmap);
+ }
}
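The mi_packrec.c hunks charge every mapping against the global
myisam_mmap_size budget under THR_LOCK_myisam_mmap, and credit the budget
back both when the map attempt fails and in _mi_unmap_file(). The accounting
pattern, reduced to a sketch with illustrative names (reserve/release are not
the MyISAM symbols):

  #include <cstdio>
  #include <mutex>

  static std::mutex budget_mutex;
  static unsigned long long budget_limit = 1024;  // like myisam_mmap_size
  static unsigned long long budget_used  = 0;     // like myisam_mmap_used

  // Reserve `bytes` from the shared budget; false if it would overflow.
  static bool reserve(unsigned long long bytes)
  {
    std::lock_guard<std::mutex> lock(budget_mutex);
    if (bytes > budget_limit - budget_used)
      return false;
    budget_used += bytes;
    return true;
  }

  // Give the reservation back, e.g. when mmap() fails or on unmap.
  static void release(unsigned long long bytes)
  {
    std::lock_guard<std::mutex> lock(budget_mutex);
    budget_used -= bytes;
  }

  int main()
  {
    if (!reserve(600))
      return 1;
    if (!reserve(600))                  // second map would exceed the limit
      std::puts("file too large for mmap");
    release(600);                       // unmap: accounting back to zero
    return 0;
  }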
=== modified file 'storage/myisam/mi_static.c'
--- a/storage/myisam/mi_static.c 2009-10-27 13:20:34 +0000
+++ b/storage/myisam/mi_static.c 2009-12-24 06:34:31 +0000
@@ -40,7 +40,7 @@ ulong myisam_concurrent_insert= 0;
my_off_t myisam_max_temp_length= MAX_FILE_SIZE;
ulong myisam_bulk_insert_tree_size=8192*1024;
ulong myisam_data_pointer_size=4;
-
+ulonglong myisam_mmap_size= SIZE_T_MAX, myisam_mmap_used= 0;
static int always_valid(const char *filename __attribute__((unused)))
{
=== modified file 'storage/myisam/myisamdef.h'
--- a/storage/myisam/myisamdef.h 2010-02-10 19:06:24 +0000
+++ b/storage/myisam/myisamdef.h 2010-03-04 08:03:07 +0000
@@ -392,7 +392,6 @@ struct st_myisam_info
#define MI_MAX_BLOCK_LENGTH ((((ulong) 1 << 24)-1) & (~ (ulong) (MI_DYN_ALIGN_SIZE-1)))
#define MI_REC_BUFF_OFFSET ALIGN_SIZE(MI_DYN_DELETE_BLOCK_HEADER+sizeof(uint32))
-#define MEMMAP_EXTRA_MARGIN 7 /* Write this as a suffix for file */
#define PACK_TYPE_SELECTED 1 /* Bits in field->pack_type */
#define PACK_TYPE_SPACE_FIELDS 2
=== modified file 'storage/myisammrg/ha_myisammrg.cc'
--- a/storage/myisammrg/ha_myisammrg.cc 2009-10-15 21:38:29 +0000
+++ b/storage/myisammrg/ha_myisammrg.cc 2010-03-04 08:03:07 +0000
@@ -382,7 +382,7 @@ static MI_INFO *myisammrg_attach_childre
my_errno= HA_ERR_WRONG_MRG_TABLE_DEF;
}
DBUG_PRINT("myrg", ("MyISAM handle: 0x%lx my_errno: %d",
- (long) myisam, my_errno));
+ my_errno ? NULL : (long) myisam, my_errno));
err:
DBUG_RETURN(my_errno ? NULL : myisam);
=== modified file 'strings/Makefile.am'
--- a/strings/Makefile.am 2009-08-13 21:12:12 +0000
+++ b/strings/Makefile.am 2010-03-04 08:03:07 +0000
@@ -21,13 +21,13 @@ pkglib_LIBRARIES = libmystrings.a
# Exact one of ASSEMBLER_X
if ASSEMBLER_x86
ASRCS = strings-x86.s longlong2str-x86.s my_strtoll10-x86.s
-CSRCS = bfill.c bmove.c bmove512.c bchange.c strxnmov.c int2str.c str2int.c r_strinstr.c strtod.c bcmp.c strtol.c strtoul.c strtoll.c strtoull.c llstr.c strnlen.c ctype.c ctype-simple.c ctype-mb.c ctype-big5.c ctype-cp932.c ctype-czech.c ctype-eucjpms.c ctype-euc_kr.c ctype-gb2312.c ctype-gbk.c ctype-sjis.c ctype-tis620.c ctype-ujis.c ctype-utf8.c ctype-ucs2.c ctype-uca.c ctype-win1250ch.c ctype-bin.c ctype-latin1.c my_vsnprintf.c xml.c decimal.c ctype-extra.c str_alloc.c longlong2str_asm.c my_strchr.c strmov_overlapp.c
+CSRCS = bfill.c bmove.c bmove512.c bchange.c strxnmov.c int2str.c str2int.c r_strinstr.c strtod.c bcmp.c strtol.c strtoul.c strtoll.c strtoull.c llstr.c strnlen.c ctype.c ctype-simple.c ctype-mb.c ctype-big5.c ctype-cp932.c ctype-czech.c ctype-eucjpms.c ctype-euc_kr.c ctype-gb2312.c ctype-gbk.c ctype-sjis.c ctype-tis620.c ctype-ujis.c ctype-utf8.c ctype-ucs2.c ctype-uca.c ctype-win1250ch.c ctype-bin.c ctype-latin1.c my_vsnprintf.c xml.c decimal.c ctype-extra.c str_alloc.c longlong2str_asm.c my_strchr.c strmov.c strmov_overlapp.c
else
if ASSEMBLER_sparc32
# These file MUST all be on the same line!! Otherwise automake
# generats a very broken makefile
ASRCS = bmove_upp-sparc.s strappend-sparc.s strend-sparc.s strinstr-sparc.s strmake-sparc.s strmov-sparc.s strnmov-sparc.s strstr-sparc.s
-CSRCS = strcont.c strfill.c strcend.c is_prefix.c longlong2str.c bfill.c bmove.c bmove512.c bchange.c strxnmov.c int2str.c str2int.c r_strinstr.c strtod.c bcmp.c strtol.c strtoul.c strtoll.c strtoull.c llstr.c strnlen.c strxmov.c ctype.c ctype-simple.c ctype-mb.c ctype-big5.c ctype-cp932.c ctype-czech.c ctype-eucjpms.c ctype-euc_kr.c ctype-gb2312.c ctype-gbk.c ctype-sjis.c ctype-tis620.c ctype-ujis.c ctype-utf8.c ctype-ucs2.c ctype-uca.c ctype-win1250ch.c ctype-bin.c ctype-latin1.c my_vsnprintf.c xml.c decimal.c ctype-extra.c my_strtoll10.c str_alloc.c my_strchr.c strmov_overlapp.c
+CSRCS = strcont.c strfill.c strcend.c is_prefix.c longlong2str.c bfill.c bmove.c bmove512.c bchange.c strxnmov.c int2str.c str2int.c r_strinstr.c strtod.c bcmp.c strtol.c strtoul.c strtoll.c strtoull.c llstr.c strnlen.c strxmov.c ctype.c ctype-simple.c ctype-mb.c ctype-big5.c ctype-cp932.c ctype-czech.c ctype-eucjpms.c ctype-euc_kr.c ctype-gb2312.c ctype-gbk.c ctype-sjis.c ctype-tis620.c ctype-ujis.c ctype-utf8.c ctype-ucs2.c ctype-uca.c ctype-win1250ch.c ctype-bin.c ctype-latin1.c my_vsnprintf.c xml.c decimal.c ctype-extra.c my_strtoll10.c str_alloc.c my_strchr.c strmov.c strmov_overlapp.c
else
#no assembler
ASRCS =
=== modified file 'strings/ctype-ucs2.c'
--- a/strings/ctype-ucs2.c 2009-12-03 12:02:37 +0000
+++ b/strings/ctype-ucs2.c 2010-03-04 08:03:07 +0000
@@ -1611,16 +1611,6 @@ fill_max_and_min:
*min_str++= *max_str++ = ptr[1];
}
- /* Temporary fix for handling w_one at end of string (key compression) */
- {
- char *tmp;
- for (tmp= min_str ; tmp-1 > min_org && tmp[-1] == '\0' && tmp[-2]=='\0';)
- {
- *--tmp=' ';
- *--tmp='\0';
- }
- }
-
*min_length= *max_length = (size_t) (min_str - min_org);
while (min_str + 1 < min_end)
{
=== modified file 'strings/strmov.c'
--- a/strings/strmov.c 2006-12-23 19:17:15 +0000
+++ b/strings/strmov.c 2010-03-04 08:03:07 +0000
@@ -24,11 +24,6 @@
#include <my_global.h>
#include "m_string.h"
-#ifdef BAD_STRING_COMPILER
-#undef strmov
-#define strmov strmov_overlapp
-#endif
-
#ifndef strmov
#if !defined(MC68000) && !defined(DS90)
=== modified file 'support-files/Makefile.am'
--- a/support-files/Makefile.am 2009-10-23 16:48:54 +0000
+++ b/support-files/Makefile.am 2010-03-04 08:03:07 +0000
@@ -1,4 +1,4 @@
-# Copyright (C) 2000-2001, 2003-2006 MySQL AB
+# Copyright (C) 2000-2006 MySQL AB, 2008-2010 Sun Microsystems, Inc.
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Library General Public
@@ -121,6 +121,7 @@ SUFFIXES = .sh
-e 's!@''SHARED_LIB_VERSION''@!@SHARED_LIB_VERSION@!' \
-e 's!@''MYSQL_BASE_VERSION''@!@MYSQL_BASE_VERSION@!' \
-e 's!@''MYSQL_NO_DASH_VERSION''@!@MYSQL_NO_DASH_VERSION@!' \
+ -e 's!@''MYSQL_U_SCORE_VERSION''@!@MYSQL_U_SCORE_VERSION@!' \
-e 's!@''MYSQL_COPYRIGHT_YEAR''@!@MYSQL_COPYRIGHT_YEAR@!' \
-e 's!@''MYSQL_TCP_PORT''@!@MYSQL_TCP_PORT@!' \
-e 's!@''PERL_DBI_VERSION''@!@PERL_DBI_VERSION@!' \
=== modified file 'support-files/mysql.spec.sh'
--- a/support-files/mysql.spec.sh 2009-12-03 11:19:05 +0000
+++ b/support-files/mysql.spec.sh 2010-03-04 08:03:07 +0000
@@ -1,4 +1,4 @@
-# Copyright 2000-2008 MySQL AB, 2008 Sun Microsystems, Inc.
+# Copyright (C) 2000-2008 MySQL AB, 2008-2010 Sun Microsystems, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -56,9 +56,9 @@
%{!?_with_maria:%define MARIA_BUILD 0}
%if %{STATIC_BUILD}
-%define release 0
+%define release 1
%else
-%define release 0.glibc23
+%define release 1.glibc23
%endif
%define mysql_license GPL
%define mysqld_user mysql
@@ -70,6 +70,19 @@
# See BUG#998 for details.
%define _unpackaged_files_terminate_build 0
+# ------------------------------------------------------------------------------
+# RPM build tools now automatically detects Perl module dependencies. This
+# detection gives problems as it is broken in some versions, and it also
+# give unwanted dependencies from mandatory scripts in our package.
+# Might not be possible to disable in all RPM tool versions, but here we
+# try. We keep the "AutoReqProv: no" for the "test" sub package, as disabling
+# here might fail, and that package has the most problems.
+# See http://fedoraproject.org/wiki/Packaging/Perl#Filtering_Requires:_and_Provid…
+# http://www.wideopen.com/archives/rpm-list/2002-October/msg00343.html
+# ------------------------------------------------------------------------------
+%undefine __perl_provides
+%undefine __perl_requires
+
%define see_base For a description of MySQL see the base MySQL RPM or http://www.mysql.com
# On SuSE 9 no separate "debuginfo" package is built. To enable basic
@@ -92,7 +105,7 @@
Name: MySQL
Summary: MySQL: a very fast and reliable SQL database server
Group: Applications/Databases
-Version: @MYSQL_NO_DASH_VERSION@
+Version: @MYSQL_U_SCORE_VERSION@
Release: %{release}
License: Copyright 2000-2008 MySQL AB, @MYSQL_COPYRIGHT_YEAR@ Sun Microsystems, Inc. All rights reserved. Use is subject to license terms. Under %{mysql_license} license as shown in the Description field.
Source: http://www.mysql.com/Downloads/MySQL-@MYSQL_BASE_VERSION@/mysql-%{mysql_ver…
@@ -210,7 +223,7 @@ They should be used with caution.
%endif
%package test
-Requires: %{name}-client perl-DBI perl
+Requires: %{name}-client perl
Summary: MySQL - Test suite
Group: Applications/Databases
Provides: mysql-test
@@ -917,6 +930,12 @@ fi
# itself - note that they must be ordered by date (important when
# merging BK trees)
%changelog
+* Mon Jan 11 2010 Joerg Bruehe <joerg.bruehe(a)sun.com>
+
+- Change RPM file naming:
+ - Suffix like "-m2", "-rc" becomes part of version as "_m2", "_rc".
+ - Release counts from 1, not 0.
+
* Mon Aug 24 2009 Jonathan Perkin <jperkin(a)sun.com>
- Add conditionals for bundled zlib and innodb plugin
=== modified file 'win/configure.js'
--- a/win/configure.js 2009-10-07 21:00:29 +0000
+++ b/win/configure.js 2010-03-04 08:03:07 +0000
@@ -155,10 +155,10 @@ function GetValue(str, key)
function GetVersion(str)
{
- var key = "AM_INIT_AUTOMAKE(mysql, ";
+ var key = "AC_INIT([MariaDB Server], [";
var key2 = "AM_INIT_AUTOMAKE(mariadb, ";
var key_len = key.length;
- var pos = str.indexOf(key); //5.0.6-beta)
+ var pos = str.indexOf(key);
if (pos == -1)
{
pos = str.indexOf(key2);
@@ -166,7 +166,7 @@ function GetVersion(str)
}
if (pos == -1) return null;
pos += key_len;
- var end = str.indexOf(")", pos);
+ var end = str.indexOf("]", pos);
if (end == -1) return null;
return str.substring(pos, end);
}
[Maria-developers] bzr commit into MariaDB 5.1, with Maria 1.5:maria branch (knielsen:2826)
by knielsen@knielsen-hq.org 10 Mar '10
#At lp:maria
2826 knielsen(a)knielsen-hq.org 2010-03-10
Fixes for two test failures in Buildbot.
- Adjust timing in the test case to avoid failures caused by high load
on the build machines and the resulting race conditions.
- Add another variant of the Valgrind suppression for a memory leak in system
libraries when unloading dynamic object files.
modified:
mysql-test/r/information_schema.result
mysql-test/t/information_schema.test
mysql-test/valgrind.supp
per-file messages:
mysql-test/r/information_schema.result
Adjust timing to avoid test failures due to races.
mysql-test/t/information_schema.test
Adjust timing to avoid test failures due to races.
mysql-test/valgrind.supp
Add another variant of valgrind suppression for leak in system libs.
=== modified file 'mysql-test/r/information_schema.result'
--- a/mysql-test/r/information_schema.result 2010-01-15 15:58:25 +0000
+++ b/mysql-test/r/information_schema.result 2010-03-10 09:11:02 +0000
@@ -1386,7 +1386,7 @@ who
other connection here
SELECT IF(`time` > 0, 'OK', `time`) AS time_low,
IF(`time` < 1000, 'OK', `time`) AS time_high,
-IF(time_ms > 1500, 'OK', time_ms) AS time_ms_low,
+IF(time_ms >= 1000, 'OK', time_ms) AS time_ms_low,
IF(time_ms < 1000000, 'OK', time_ms) AS time_ms_high
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE ID=@tid;
=== modified file 'mysql-test/t/information_schema.test'
--- a/mysql-test/t/information_schema.test 2009-11-30 13:36:06 +0000
+++ b/mysql-test/t/information_schema.test 2010-03-10 09:11:02 +0000
@@ -1114,7 +1114,7 @@ eval SET @tid=$ID;
--enable_query_log
SELECT IF(`time` > 0, 'OK', `time`) AS time_low,
IF(`time` < 1000, 'OK', `time`) AS time_high,
- IF(time_ms > 1500, 'OK', time_ms) AS time_ms_low,
+ IF(time_ms >= 1000, 'OK', time_ms) AS time_ms_low,
IF(time_ms < 1000000, 'OK', time_ms) AS time_ms_high
FROM INFORMATION_SCHEMA.PROCESSLIST
WHERE ID=@tid;
=== modified file 'mysql-test/valgrind.supp'
--- a/mysql-test/valgrind.supp 2010-01-18 12:56:10 +0000
+++ b/mysql-test/valgrind.supp 2010-03-10 09:11:02 +0000
@@ -469,6 +469,26 @@
}
{
+ dlclose memory loss from plugin variant 8
+ Memcheck:Leak
+ fun:calloc
+ fun:_dlerror_run
+ fun:dlclose
+ fun:_Z15free_plugin_memP12st_plugin_dl
+ fun:_Z13plugin_dl_delPK19st_mysql_lex_string
+}
+
+{
+ dlclose memory loss from plugin variant 9
+ Memcheck:Leak
+ fun:calloc
+ fun:_dlerror_run
+ fun:dlclose
+ fun:_ZL15free_plugin_memP12st_plugin_dl
+ fun:_ZL13plugin_dl_delPK19st_mysql_lex_string
+}
+
+{
dlopen / ptread_cancel_init memory loss on Suse Linux 10.3 32/64 bit ver 1
Memcheck:Leak
fun:*alloc
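These suppressions only matter when the server and test clients are run under Valgrind via mysql-test-run, which picks up mysql-test/valgrind.supp automatically. A minimal sketch of such an invocation, assuming a built source tree (the test name here is just an example):

cd mysql-test
./mysql-test-run.pl --valgrind information_schema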
[Maria-developers] Rev 27: Increased run time to 15 minutes to get more reliable results. in file:///Users/hakan/work/monty_program/mariadb-tools/
by Hakan Kuecuekyilmaz 10 Mar '10
At file:///Users/hakan/work/monty_program/mariadb-tools/
------------------------------------------------------------
revno: 27
revision-id: hakan(a)askmonty.org-20100310010046-hwv56n4wfn4t4odp
parent: hakan(a)askmonty.org-20100310000201-nlm71rud41hxpk38
committer: Hakan Kuecuekyilmaz <hakan(a)askmonty.org>
branch nick: mariadb-tools
timestamp: Wed 2010-03-10 02:00:46 +0100
message:
Increased run time to 15 minutes to get more reliable results.
=== modified file 'sysbench/run-sysbench-myisam.sh'
--- a/sysbench/run-sysbench-myisam.sh 2010-03-10 00:02:01 +0000
+++ b/sysbench/run-sysbench-myisam.sh 2010-03-10 01:00:46 +0000
@@ -87,7 +87,7 @@
TABLE_SIZE=20000000
# The run time we use for sysbench.
-RUN_TIME=300
+RUN_TIME=900
# Warm up time we use for sysbench.
WARM_UP_TIME=180
[Maria-developers] Rev 26: sysbench prepare needs --max-time or --max-requests in file:///Users/hakan/work/monty_program/mariadb-tools/
by Hakan Kuecuekyilmaz 10 Mar '10
At file:///Users/hakan/work/monty_program/mariadb-tools/
------------------------------------------------------------
revno: 26
revision-id: hakan(a)askmonty.org-20100310000201-nlm71rud41hxpk38
parent: hakan(a)askmonty.org-20100309233127-vgj4q029apf0un2x
committer: Hakan Kuecuekyilmaz <hakan(a)askmonty.org>
branch nick: mariadb-tools
timestamp: Wed 2010-03-10 01:02:01 +0100
message:
sysbench prepare needs --max-time or --max-requests
=== modified file 'sysbench/run-sysbench-myisam.sh'
--- a/sysbench/run-sysbench-myisam.sh 2010-03-09 23:31:27 +0000
+++ b/sysbench/run-sysbench-myisam.sh 2010-03-10 00:02:01 +0000
@@ -291,7 +291,7 @@
echo "[$(date "+%Y-%m-%d %H:%M:%S")] Preparing and loading data for $SYSBENCH_TEST."
SYSBENCH_OPTIONS="${SYSBENCH_OPTIONS} --test=${TEST_DIR}/${SYSBENCH_TEST}"
- $SYSBENCH $SYSBENCH_OPTIONS prepare
+ $SYSBENCH $SYSBENCH_OPTIONS --max-time=$RUN_TIME prepare
$MYSQLADMIN $MYSQLADMIN_OPTIONS shutdown
sync
=== modified file 'sysbench/run-sysbench.sh'
--- a/sysbench/run-sysbench.sh 2010-03-09 23:31:27 +0000
+++ b/sysbench/run-sysbench.sh 2010-03-10 00:02:01 +0000
@@ -287,7 +287,7 @@
echo "[$(date "+%Y-%m-%d %H:%M:%S")] Preparing and loading data for $SYSBENCH_TEST."
SYSBENCH_OPTIONS="${SYSBENCH_OPTIONS} --test=${TEST_DIR}/${SYSBENCH_TEST}"
- $SYSBENCH $SYSBENCH_OPTIONS prepare
+ $SYSBENCH $SYSBENCH_OPTIONS --max-time=$RUN_TIME prepare
$MYSQLADMIN $MYSQLADMIN_OPTIONS shutdown
sync
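For context, a minimal sketch of the prepare step after this change. The values and the TEST_DIR path are placeholders, not taken from the conf/*.inc files; with --max-requests=0 the scripts now have to pass the run time explicitly, as the patch above does:

# Placeholders; real values come from the configuration includes.
RUN_TIME=900
TEST_DIR=/usr/local/share/sysbench/tests/db
SYSBENCH_OPTIONS="--oltp-table-size=20000000 --max-requests=0 \
                  --mysql-user=root --test=${TEST_DIR}/oltp_complex_rw.lua"

sysbench $SYSBENCH_OPTIONS --max-time=$RUN_TIME prepare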
[Maria-developers] Rev 25: Added status information about warm up and actual run. in file:///Users/hakan/work/monty_program/mariadb-tools/
by Hakan Kuecuekyilmaz 10 Mar '10
At file:///Users/hakan/work/monty_program/mariadb-tools/
------------------------------------------------------------
revno: 25
revision-id: hakan(a)askmonty.org-20100309233127-vgj4q029apf0un2x
parent: hakan(a)askmonty.org-20100309220617-3ti82bv9tvlvx5w6
committer: Hakan Kuecuekyilmaz <hakan(a)askmonty.org>
branch nick: mariadb-tools
timestamp: Wed 2010-03-10 00:31:27 +0100
message:
Added status information about warm up and actual run.
=== modified file 'sysbench/conf/lu0012.inc'
--- a/sysbench/conf/lu0012.inc 2010-03-09 22:05:59 +0000
+++ b/sysbench/conf/lu0012.inc 2010-03-09 23:31:27 +0000
@@ -22,7 +22,7 @@
# System statistic binaries.
IOSTAT='/usr/bin/iostat'
IOSTAT_DEVICE='/dev/sda'
-# For CPU utilization statistics
+# For CPU utilization statistics.
MPSTAT='/usr/bin/mpstat'
# Directories.
=== modified file 'sysbench/conf/perro.inc'
--- a/sysbench/conf/perro.inc 2010-03-09 22:05:59 +0000
+++ b/sysbench/conf/perro.inc 2010-03-09 23:31:27 +0000
@@ -22,7 +22,7 @@
# System statistic binaries.
IOSTAT='/usr/bin/iostat'
IOSTAT_DEVICE='/dev/sda'
-# For CPU utilization statistics
+# For CPU utilization statistics.
MPSTAT='/usr/bin/mpstat'
# Other binaries.
=== modified file 'sysbench/conf/work.inc'
--- a/sysbench/conf/work.inc 2010-03-09 22:05:59 +0000
+++ b/sysbench/conf/work.inc 2010-03-09 23:31:27 +0000
@@ -22,7 +22,7 @@
# System statistic binaries.
IOSTAT='/usr/bin/iostat'
IOSTAT_DEVICE='/dev/sda'
-# For CPU utilization statistics
+# For CPU utilization statistics.
MPSTAT='/usr/bin/mpstat'
# Other binaries.
=== modified file 'sysbench/run-sysbench-myisam.sh'
--- a/sysbench/run-sysbench-myisam.sh 2010-03-09 22:03:48 +0000
+++ b/sysbench/run-sysbench-myisam.sh 2010-03-09 23:31:27 +0000
@@ -271,6 +271,9 @@
#
echo $MYSQL_OPTIONS > ${RESULT_DIR}/${TODAY}/${PRODUCT}/mysqld_options.txt
echo $SYSBENCH_OPTIONS > ${RESULT_DIR}/${TODAY}/${PRODUCT}/sysbench_options.txt
+echo '' >> ${RESULT_DIR}/${TODAY}/${PRODUCT}/sysbench_options.txt
+echo "Warm up time is: $WARM_UP_TIME" >> ${RESULT_DIR}/${TODAY}/${PRODUCT}/sysbench_options.txt
+echo "Run time is: $RUN_TIME" >> ${RESULT_DIR}/${TODAY}/${PRODUCT}/sysbench_options.txt
for SYSBENCH_TEST in $SYSBENCH_TESTS
do
@@ -325,8 +328,12 @@
start_mysqld
sync
+ echo "[$(date "+%Y-%m-%d %H:%M:%S")] Starting warm up of $WARM_UP_TIME seconds."
$SYSBENCH $SYSBENCH_OPTIONS_WARM_UP run
sync
+ echo "[$(date "+%Y-%m-%d %H:%M:%S")] Finnished warm up."
+
+ echo "[$(date "+%Y-%m-%d %H:%M:%S")] Starting actual sysbench run."
$SYSBENCH $SYSBENCH_OPTIONS_RUN run > ${THIS_RESULT_DIR}/result${k}.txt 2>&1
grep "write requests:" ${THIS_RESULT_DIR}/result${k}.txt | awk '{ print $4 }' | sed -e 's/(//' >> ${THIS_RESULT_DIR}/results.txt
=== modified file 'sysbench/run-sysbench.sh'
--- a/sysbench/run-sysbench.sh 2010-03-09 22:03:48 +0000
+++ b/sysbench/run-sysbench.sh 2010-03-09 23:31:27 +0000
@@ -267,6 +267,9 @@
#
echo $MYSQL_OPTIONS > ${RESULT_DIR}/${TODAY}/${PRODUCT}/mysqld_options.txt
echo $SYSBENCH_OPTIONS > ${RESULT_DIR}/${TODAY}/${PRODUCT}/sysbench_options.txt
+echo '' >> ${RESULT_DIR}/${TODAY}/${PRODUCT}/sysbench_options.txt
+echo "Warm up time is: $WARM_UP_TIME" >> ${RESULT_DIR}/${TODAY}/${PRODUCT}/sysbench_options.txt
+echo "Run time is: $RUN_TIME" >> ${RESULT_DIR}/${TODAY}/${PRODUCT}/sysbench_options.txt
for SYSBENCH_TEST in $SYSBENCH_TESTS
do
@@ -321,8 +324,12 @@
start_mysqld
sync
+ echo "[$(date "+%Y-%m-%d %H:%M:%S")] Starting warm up of $WARM_UP_TIME seconds."
$SYSBENCH $SYSBENCH_OPTIONS_WARM_UP run
sync
+ echo "[$(date "+%Y-%m-%d %H:%M:%S")] Finnished warm up."
+
+ echo "[$(date "+%Y-%m-%d %H:%M:%S")] Starting actual sysbench run."
$SYSBENCH $SYSBENCH_OPTIONS_RUN run > ${THIS_RESULT_DIR}/result${k}.txt 2>&1
grep "write requests:" ${THIS_RESULT_DIR}/result${k}.txt | awk '{ print $4 }' | sed -e 's/(//' >> ${THIS_RESULT_DIR}/results.txt
[Maria-developers] Rev 24: Merge. in file:///Users/hakan/work/monty_program/mariadb-tools/
by Hakan Kuecuekyilmaz 09 Mar '10
At file:///Users/hakan/work/monty_program/mariadb-tools/
------------------------------------------------------------
revno: 24 [merge]
revision-id: hakan(a)askmonty.org-20100309220617-3ti82bv9tvlvx5w6
parent: hakan(a)askmonty.org-20100309220559-6u2f1d4hcjcchc1n
parent: hakan(a)askmonty.org-20100309220348-qpzbd0lavfwxneuy
committer: Hakan Kuecuekyilmaz <hakan(a)askmonty.org>
branch nick: mariadb-tools
timestamp: Tue 2010-03-09 23:06:17 +0100
message:
Merge.
modified:
sysbench/run-sysbench-myisam.sh runsysbenchmyisam.sh-20100303192900-8z3drtmisv1b19uy-1
sysbench/run-sysbench.sh runsysbench.sh-20100219052618-ybly665ohw916cxl-3
=== modified file 'sysbench/run-sysbench-myisam.sh'
--- a/sysbench/run-sysbench-myisam.sh 2010-03-09 15:06:22 +0000
+++ b/sysbench/run-sysbench-myisam.sh 2010-03-09 22:03:48 +0000
@@ -7,6 +7,16 @@
# killall -9, which can cause severe side effects!
# * By bzr pull we mean bzr merge --pull
#
+# Index sizes for 20 mio rows (--table-size=20000000).
+# * delete.lua: 313M sbtest.MYI
+# * insert.lua: 4.0K sbtest.MYI
+# * oltp_complex_ro.lua: 313M sbtest.MYI
+# * oltp_complex_rw.lua: 313M sbtest.MYI
+# * oltp_simple.lua: 325M sbtest.MYI
+# * select.lua: 313M sbtest.MYI
+# * update_index.lua: 313M sbtest.MYI
+# * update_non_index.lua: 313M sbtest.MYI
+#
# Hakan Kuecuekyilmaz <hakan at askmonty dot org> 2010-02-19.
#
@@ -79,6 +89,9 @@
# The run time we use for sysbench.
RUN_TIME=300
+# Warm up time we use for sysbench.
+WARM_UP_TIME=180
+
# How many times we run each test.
LOOP_COUNT=3
@@ -99,7 +112,6 @@
# otherwise we get a table full error while preparing the run.
#
SYSBENCH_OPTIONS="--oltp-table-size=$TABLE_SIZE \
- --max-time=$RUN_TIME \
--max-requests=0 \
--mysql-table-engine=MyISAM \
--mysql-user=root \
@@ -293,7 +305,8 @@
echo "[$(date "+%Y-%m-%d %H:%M:%S")] Running $SYSBENCH_TEST with $THREADS threads and $LOOP_COUNT iterations for $PRODUCT" | tee ${THIS_RESULT_DIR}/results.txt
echo '' >> ${THIS_RESULT_DIR}/results.txt
- SYSBENCH_OPTIONS="$SYSBENCH_OPTIONS --num-threads=$THREADS"
+ SYSBENCH_OPTIONS_WARM_UP="${SYSBENCH_OPTIONS} --num-threads=1 --max-time=$WARM_UP_TIME"
+ SYSBENCH_OPTIONS_RUN="${SYSBENCH_OPTIONS} --num-threads=$THREADS --max-time=$RUN_TIME"
k=0
while [ $k -lt $LOOP_COUNT ]
@@ -312,7 +325,9 @@
start_mysqld
sync
- $SYSBENCH $SYSBENCH_OPTIONS run > ${THIS_RESULT_DIR}/result${k}.txt 2>&1
+ $SYSBENCH $SYSBENCH_OPTIONS_WARM_UP run
+ sync
+ $SYSBENCH $SYSBENCH_OPTIONS_RUN run > ${THIS_RESULT_DIR}/result${k}.txt 2>&1
grep "write requests:" ${THIS_RESULT_DIR}/result${k}.txt | awk '{ print $4 }' | sed -e 's/(//' >> ${THIS_RESULT_DIR}/results.txt
=== modified file 'sysbench/run-sysbench.sh'
--- a/sysbench/run-sysbench.sh 2010-03-09 15:06:22 +0000
+++ b/sysbench/run-sysbench.sh 2010-03-09 22:03:48 +0000
@@ -90,6 +90,9 @@
# The run time we use for sysbench.
RUN_TIME=300
+# Warm up time we use for sysbench.
+WARM_UP_TIME=180
+
# How many times we run each test.
LOOP_COUNT=3
@@ -106,7 +109,6 @@
update_non_index.lua"
SYSBENCH_OPTIONS="--oltp-table-size=$TABLE_SIZE \
- --max-time=$RUN_TIME \
--max-requests=0 \
--mysql-table-engine=InnoDB \
--mysql-user=root \
@@ -299,7 +301,8 @@
echo "[$(date "+%Y-%m-%d %H:%M:%S")] Running $SYSBENCH_TEST with $THREADS threads and $LOOP_COUNT iterations for $PRODUCT" | tee ${THIS_RESULT_DIR}/results.txt
echo '' >> ${THIS_RESULT_DIR}/results.txt
- SYSBENCH_OPTIONS="$SYSBENCH_OPTIONS --num-threads=$THREADS"
+ SYSBENCH_OPTIONS_WARM_UP="${SYSBENCH_OPTIONS} --num-threads=1 --max-time=$WARM_UP_TIME"
+ SYSBENCH_OPTIONS_RUN="${SYSBENCH_OPTIONS} --num-threads=$THREADS --max-time=$RUN_TIME"
k=0
while [ $k -lt $LOOP_COUNT ]
@@ -318,7 +321,9 @@
start_mysqld
sync
- $SYSBENCH $SYSBENCH_OPTIONS run > ${THIS_RESULT_DIR}/result${k}.txt 2>&1
+ $SYSBENCH $SYSBENCH_OPTIONS_WARM_UP run
+ sync
+ $SYSBENCH $SYSBENCH_OPTIONS_RUN run > ${THIS_RESULT_DIR}/result${k}.txt 2>&1
grep "write requests:" ${THIS_RESULT_DIR}/result${k}.txt | awk '{ print $4 }' | sed -e 's/(//' >> ${THIS_RESULT_DIR}/results.txt
[Maria-developers] Rev 23: Use mpstat instead of sar for CPU utilization statistics. in file:///Users/hakan/work/monty_program/mariadb-tools/
by Hakan Kuecuekyilmaz 09 Mar '10
At file:///Users/hakan/work/monty_program/mariadb-tools/
------------------------------------------------------------
revno: 23
revision-id: hakan(a)askmonty.org-20100309220559-6u2f1d4hcjcchc1n
parent: hakan(a)askmonty.org-20100309150622-0cpj0wxp3oxsuhep
committer: Hakan Kuecuekyilmaz <hakan(a)askmonty.org>
branch nick: mariadb-tools
timestamp: Tue 2010-03-09 23:05:59 +0100
message:
Use mpstat instead of sar for CPU utilization statistics.
=== modified file 'sysbench/conf/lu0012.inc'
--- a/sysbench/conf/lu0012.inc 2010-03-04 02:03:03 +0000
+++ b/sysbench/conf/lu0012.inc 2010-03-09 22:05:59 +0000
@@ -22,7 +22,8 @@
# System statistic binaries.
IOSTAT='/usr/bin/iostat'
IOSTAT_DEVICE='/dev/sda'
-SAR='/usr/bin/sar'
+# For CPU utilization statistics
+MPSTAT='/usr/bin/mpstat'
# Directories.
TEMP_DIR='/tmp'
=== modified file 'sysbench/conf/perro.inc'
--- a/sysbench/conf/perro.inc 2010-03-09 14:04:29 +0000
+++ b/sysbench/conf/perro.inc 2010-03-09 22:05:59 +0000
@@ -22,7 +22,8 @@
# System statistic binaries.
IOSTAT='/usr/bin/iostat'
IOSTAT_DEVICE='/dev/sda'
-SAR='/usr/bin/sar'
+# For CPU utilization statistics
+MPSTAT='/usr/bin/mpstat'
# Other binaries.
SUDO=/my/local/bin/sur
=== modified file 'sysbench/conf/work.inc'
--- a/sysbench/conf/work.inc 2010-03-09 14:04:29 +0000
+++ b/sysbench/conf/work.inc 2010-03-09 22:05:59 +0000
@@ -22,7 +22,8 @@
# System statistic binaries.
IOSTAT='/usr/bin/iostat'
IOSTAT_DEVICE='/dev/sda'
-SAR='/usr/bin/sar'
+# For CPU utilization statistics
+MPSTAT='/usr/bin/mpstat'
# Other binaries.
SUDO=/my/local/bin/sur
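The patch only replaces the SAR variable with MPSTAT; the actual invocation is not part of this revision, so the following is an assumed usage sketch for collecting CPU utilization alongside a benchmark run (sampling interval and output file are placeholders):

MPSTAT='/usr/bin/mpstat'
# Sample all CPUs every 10 seconds in the background for the whole run.
$MPSTAT -P ALL 10 > cpu_utilization.txt 2>&1 &
MPSTAT_PID=$!
# ... start mysqld and run sysbench here ...
kill $MPSTAT_PID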
[Maria-developers] bzr commit into MariaDB 5.1, with Maria 1.5:maria branch (knielsen:2800)
by knielsen@knielsen-hq.org 09 Mar '10
#At lp:maria
2800 knielsen(a)knielsen-hq.org 2010-03-09 [merge]
Automerge for buildbot test
modified:
BUILD/compile-pentium64-gcov
BUILD/compile-pentium64-gprof
include/my_sys.h
mysql-test/r/variables.result
mysys/my_sync.c
sql-bench/test-select.sh
sql/mysqld.cc
sql/sql_select.cc
storage/maria/ma_key_recover.h
storage/maria/ma_page.c
storage/maria/ma_rkey.c
storage/maria/ma_search.c
storage/maria/ma_write.c
storage/maria/maria_def.h
=== modified file 'BUILD/compile-pentium64-gcov'
--- a/BUILD/compile-pentium64-gcov 2007-08-16 00:10:16 +0000
+++ b/BUILD/compile-pentium64-gcov 2010-03-09 19:22:24 +0000
@@ -9,9 +9,9 @@ export CCACHE_DISABLE
export LDFLAGS="$gcov_link_flags"
-extra_flags="$pentium64_cflags $debug_cflags $max_cflags $gcov_compile_flags"
+extra_flags="$pentium64_cflags $max_cflags $gcov_compile_flags"
c_warnings="$c_warnings $debug_extra_warnings"
cxx_warnings="$cxx_warnings $debug_extra_warnings"
-extra_configs="$pentium64_configs $debug_configs $gcov_configs $max_configs"
+extra_configs="$pentium_configs $debug_configs $gcov_configs $max_configs --with-zlib-dir=bundled"
. "$path/FINISH.sh"
=== modified file 'BUILD/compile-pentium64-gprof'
--- a/BUILD/compile-pentium64-gprof 2007-08-16 00:10:16 +0000
+++ b/BUILD/compile-pentium64-gprof 2010-03-09 19:22:24 +0000
@@ -4,6 +4,6 @@ path=`dirname $0`
. "$path/SETUP.sh"
extra_flags="$pentium64_cflags $gprof_compile_flags"
-extra_configs="$pentium64_configs $debug_configs $gprof_link_flags"
+extra_configs="$pentium_configs $max_configs $gprof_link_flags --with-zlib-dir=bundled"
. "$path/FINISH.sh"
=== modified file 'include/my_sys.h'
--- a/include/my_sys.h 2010-03-04 08:03:07 +0000
+++ b/include/my_sys.h 2010-03-09 19:31:28 +0000
@@ -247,6 +247,7 @@ extern CHARSET_INFO compiled_charsets[];
/* statistics */
extern ulong my_file_opened,my_stream_opened, my_tmp_file_created;
extern ulong my_file_total_opened;
+extern ulong my_sync_count;
extern uint mysys_usage_id;
extern my_bool my_init_done;
=== modified file 'mysql-test/r/variables.result'
--- a/mysql-test/r/variables.result 2010-03-04 08:03:07 +0000
+++ b/mysql-test/r/variables.result 2010-03-09 19:31:28 +0000
@@ -575,8 +575,6 @@ set storage_engine=myisam;
set global thread_cache_size=100;
set timestamp=1, timestamp=default;
set tmp_table_size=100;
-Warnings:
-Warning 1292 Truncated incorrect tmp_table_size value: '100'
set tx_isolation="READ-COMMITTED";
set wait_timeout=100;
set log_warnings=1;
=== modified file 'mysys/my_sync.c'
--- a/mysys/my_sync.c 2010-01-15 15:27:55 +0000
+++ b/mysys/my_sync.c 2010-03-09 19:22:24 +0000
@@ -17,6 +17,8 @@
#include "mysys_err.h"
#include <errno.h>
+ulong my_sync_count; /* Count number of sync calls */
+
/*
Sync data in file to disk
@@ -46,6 +48,7 @@ int my_sync(File fd, myf my_flags)
DBUG_ENTER("my_sync");
DBUG_PRINT("my",("fd: %d my_flags: %d", fd, my_flags));
+ statistic_increment(my_sync_count,&THR_LOCK_open);
do
{
#if defined(F_FULLFSYNC)
=== modified file 'sql-bench/test-select.sh'
--- a/sql-bench/test-select.sh 2010-02-17 20:10:02 +0000
+++ b/sql-bench/test-select.sh 2010-03-09 19:22:24 +0000
@@ -68,7 +68,8 @@ do_many($dbh,$server->create("bench1",
["region char(1) NOT NULL",
"idn integer(6) NOT NULL",
"rev_idn integer(6) NOT NULL",
- "grp integer(6) NOT NULL"],
+ "grp integer(6) NOT NULL",
+ "grp_no_key integer(6) NOT NULL"],
["primary key (region,idn)",
"unique (region,rev_idn)",
"unique (region,grp,idn)"]));
@@ -105,10 +106,10 @@ for ($id=0,$rev_id=$opt_loop_count-1 ; $
{
$grp=$id*3 % $opt_groups;
$region=chr(65+$id%$opt_regions);
- do_query($dbh,"$query'$region',$id,$rev_id,$grp)");
+ do_query($dbh,"$query'$region',$id,$rev_id,$grp,$grp)");
if ($id == $half_done)
{ # Test with different insert
- $query="insert into bench1 (region,idn,rev_idn,grp) values (";
+ $query="insert into bench1 (region,idn,rev_idn,grp,grp_no_key) values (";
}
}
@@ -323,6 +324,26 @@ if ($limits->{'group_functions'})
$end_time=new Benchmark;
print "Time for count_group_on_key_parts ($i:$rows): " .
timestr(timediff($end_time, $loop_time),"all") . "\n";
+
+ $loop_time=new Benchmark;
+ $rows=0;
+ for ($i=0 ; $i < $opt_medium_loop_count ; $i++)
+ {
+ $rows+=fetch_all_rows($dbh,"select grp_no_key,count(*) from bench1 group by grp_no_key");
+ }
+ $end_time=new Benchmark;
+ print "Time for count_group ($i:$rows): " .
+ timestr(timediff($end_time, $loop_time),"all") . "\n";
+
+ $loop_time=new Benchmark;
+ $rows=0;
+ for ($i=0 ; $i < $opt_medium_loop_count ; $i++)
+ {
+ $rows+=fetch_all_rows($dbh,"select grp_no_key,count(*) as cnt from bench1 group by grp_no_key order by cnt");
+ }
+ $end_time=new Benchmark;
+ print "Time for count_group_with_order ($i:$rows): " .
+ timestr(timediff($end_time, $loop_time),"all") . "\n";
}
if ($limits->{'group_distinct_functions'})
=== modified file 'sql/mysqld.cc'
--- a/sql/mysqld.cc 2010-03-04 08:03:07 +0000
+++ b/sql/mysqld.cc 2010-03-09 19:31:28 +0000
@@ -7286,10 +7286,10 @@ The minimum value for this variable is 4
0, GET_STR, REQUIRED_ARG, 0, 0, 0, 0, 0, 0},
{"tmp_table_size", OPT_TMP_TABLE_SIZE,
"If an internal in-memory temporary table exceeds this size, MySQL will"
- " automatically convert it to an on-disk MyISAM table.",
+ " automatically convert it to an on-disk MyISAM/Maria table.",
(uchar**) &global_system_variables.tmp_table_size,
(uchar**) &max_system_variables.tmp_table_size, 0, GET_ULL,
- REQUIRED_ARG, 16*1024*1024L, 1024, MAX_MEM_TABLE_SIZE, 0, 1, 0},
+ REQUIRED_ARG, 16*1024*1024L, 0, MAX_MEM_TABLE_SIZE, 0, 1, 0},
{"transaction_alloc_block_size", OPT_TRANS_ALLOC_BLOCK_SIZE,
"Allocation block size for transactions to be stored in binary log",
(uchar**) &global_system_variables.trans_alloc_block_size,
@@ -7795,6 +7795,7 @@ SHOW_VAR status_vars[]= {
{"Ssl_verify_mode", (char*) &show_ssl_get_verify_mode, SHOW_FUNC},
{"Ssl_version", (char*) &show_ssl_get_version, SHOW_FUNC},
#endif /* HAVE_OPENSSL */
+ {"Syncs", (char*) &my_sync_count, SHOW_LONG_NOFLUSH},
{"Table_locks_immediate", (char*) &locks_immediate, SHOW_LONG},
{"Table_locks_waited", (char*) &locks_waited, SHOW_LONG},
#ifdef HAVE_MMAP
=== modified file 'sql/sql_select.cc'
--- a/sql/sql_select.cc 2010-03-09 19:23:30 +0000
+++ b/sql/sql_select.cc 2010-03-09 19:31:28 +0000
@@ -10201,7 +10201,8 @@ create_tmp_table(THD *thd,TMP_TABLE_PARA
/* future: storage engine selection can be made dynamic? */
if (blob_count || using_unique_constraint ||
(select_options & (OPTION_BIG_TABLES | SELECT_SMALL_RESULT)) ==
- OPTION_BIG_TABLES || (select_options & TMP_TABLE_FORCE_MYISAM))
+ OPTION_BIG_TABLES || (select_options & TMP_TABLE_FORCE_MYISAM) ||
+ !thd->variables.tmp_table_size)
{
share->db_plugin= ha_lock_engine(0, TMP_ENGINE_HTON);
table->file= get_new_handler(share, &table->mem_root,
@@ -10740,7 +10741,7 @@ static bool create_internal_tmp_table(TA
{
/* Create an unique key */
bzero((char*) &keydef,sizeof(keydef));
- keydef.flag=HA_NOSAME | HA_BINARY_PACK_KEY | HA_PACK_KEY;
+ keydef.flag=HA_NOSAME;
keydef.keysegs= keyinfo->key_parts;
keydef.seg= seg;
}
@@ -10765,7 +10766,7 @@ static bool create_internal_tmp_table(TA
seg->type= keyinfo->key_part[i].type;
/* Tell handler if it can do suffic space compression */
if (field->real_type() == MYSQL_TYPE_STRING &&
- keyinfo->key_part[i].length > 4)
+ keyinfo->key_part[i].length > 32)
seg->flag|= HA_SPACE_PACK;
}
if (!(field->flags & NOT_NULL_FLAG))
=== modified file 'storage/maria/ma_key_recover.h'
--- a/storage/maria/ma_key_recover.h 2008-09-01 17:31:40 +0000
+++ b/storage/maria/ma_key_recover.h 2010-03-09 19:22:24 +0000
@@ -63,7 +63,6 @@ extern my_bool write_hook_for_undo_key_i
extern my_bool write_hook_for_undo_key_delete(enum translog_record_type type,
TRN *trn, MARIA_HA *tbl_info,
LSN *lsn, void *hook_arg);
-void _ma_unpin_all_pages(MARIA_HA *info, LSN undo_lsn);
my_bool _ma_log_prefix(MARIA_PAGE *page, uint changed_length, int move_length);
my_bool _ma_log_suffix(MARIA_PAGE *page, uint org_length,
=== modified file 'storage/maria/ma_page.c'
--- a/storage/maria/ma_page.c 2009-05-06 12:03:24 +0000
+++ b/storage/maria/ma_page.c 2010-03-09 19:22:24 +0000
@@ -64,6 +64,15 @@ void _ma_page_setup(MARIA_PAGE *page, MA
share->base.key_reflength : 0);
}
+#ifdef IDENTICAL_PAGES_AFTER_RECOVERY
+void page_cleanup(MARIA_SHARE *share, MARIA_PAGE *page)
+{
+ uint length= page->size;
+ DBUG_ASSERT(length <= block_size - KEYPAGE_CHECKSUM_SIZE);
+ bzero(page->buff + length, share->block_size - length);
+}
+#endif
+
/**
Fetch a key-page in memory
@@ -102,8 +111,10 @@ my_bool _ma_fetch_keypage(MARIA_PAGE *pa
if (lock != PAGECACHE_LOCK_LEFT_UNLOCKED)
{
- DBUG_ASSERT(lock == PAGECACHE_LOCK_WRITE);
- page_link.unlock= PAGECACHE_LOCK_WRITE_UNLOCK;
+ DBUG_ASSERT(lock == PAGECACHE_LOCK_WRITE || PAGECACHE_LOCK_READ);
+ page_link.unlock= (lock == PAGECACHE_LOCK_WRITE ?
+ PAGECACHE_LOCK_WRITE_UNLOCK :
+ PAGECACHE_LOCK_READ_UNLOCK);
page_link.changed= 0;
push_dynamic(&info->pinned_pages, (void*) &page_link);
page->link_offset= info->pinned_pages.elements-1;
@@ -209,14 +220,7 @@ my_bool _ma_write_keypage(MARIA_PAGE *pa
}
#endif
-#ifdef IDENTICAL_PAGES_AFTER_RECOVERY
- {
- uint length= page->size;
- DBUG_ASSERT(length <= block_size - KEYPAGE_CHECKSUM_SIZE);
- bzero(buff + length, block_size - length);
- }
-#endif
-
+ page_cleanup(share, page);
res= pagecache_write(share->pagecache,
&share->kfile,
(pgcache_page_no_t) (page->pos / block_size),
=== modified file 'storage/maria/ma_rkey.c'
--- a/storage/maria/ma_rkey.c 2008-06-26 05:18:28 +0000
+++ b/storage/maria/ma_rkey.c 2010-03-09 19:22:24 +0000
@@ -82,6 +82,9 @@ int maria_rkey(MARIA_HA *info, uchar *bu
rw_rdlock(&keyinfo->root_lock);
nextflag= maria_read_vec[search_flag] | key.flag;
+ if (search_flag != HA_READ_KEY_EXACT ||
+ ((keyinfo->flag & (HA_NOSAME | HA_NULL_PART)) != HA_NOSAME))
+ nextflag|= SEARCH_SAVE_BUFF;
switch (keyinfo->key_alg) {
#ifdef HAVE_RTREE_KEYS
=== modified file 'storage/maria/ma_search.c'
--- a/storage/maria/ma_search.c 2009-05-06 12:03:24 +0000
+++ b/storage/maria/ma_search.c 2010-03-09 19:22:24 +0000
@@ -18,6 +18,10 @@
#include "ma_fulltext.h"
#include "m_ctype.h"
+static int _ma_search_no_save(register MARIA_HA *info, MARIA_KEY *key,
+ uint32 nextflag, register my_off_t pos,
+ MARIA_PINNED_PAGE **res_page_link,
+ uchar **res_page_buff);
static my_bool _ma_get_prev_key(MARIA_KEY *key, MARIA_PAGE *ma_page,
uchar *keypos);
@@ -57,7 +61,51 @@ int _ma_check_index(MARIA_HA *info, int
*/
int _ma_search(register MARIA_HA *info, MARIA_KEY *key, uint32 nextflag,
- register my_off_t pos)
+ my_off_t pos)
+{
+ int error;
+ MARIA_PINNED_PAGE *page_link;
+ uchar *page_buff;
+
+ info->page_changed= 1; /* If page not saved */
+ if (!(error= _ma_search_no_save(info, key, nextflag, pos, &page_link,
+ &page_buff)))
+ {
+ if (nextflag & SEARCH_SAVE_BUFF)
+ {
+ bmove512(info->keyread_buff, page_buff, info->s->block_size);
+
+ /* Save position for a possible read next / previous */
+ info->int_keypos= info->keyread_buff + (ulonglong) info->int_keypos;
+ info->int_maxpos= info->keyread_buff + (ulonglong) info->int_maxpos;
+ info->int_keytree_version= key->keyinfo->version;
+ info->last_search_keypage= info->last_keypage;
+ info->page_changed= 0;
+ info->keyread_buff_used= 0;
+ }
+ }
+ _ma_unpin_all_pages(info, LSN_IMPOSSIBLE);
+ return (error);
+}
+
+/**
+ @breif Search after row by a key
+
+ ret_page_link Will contain pointer to page where we found key
+
+ @note
+ Position to row is stored in info->lastpos
+
+ @return
+ @retval 0 ok (key found)
+ @retval -1 Not found
+ @retval 1 If one should continue search on higher level
+*/
+
+static int _ma_search_no_save(register MARIA_HA *info, MARIA_KEY *key,
+ uint32 nextflag, register my_off_t pos,
+ MARIA_PINNED_PAGE **res_page_link,
+ uchar **res_page_buff)
{
my_bool last_key_not_used;
int error,flag;
@@ -66,6 +114,7 @@ int _ma_search(register MARIA_HA *info,
uchar lastkey[MARIA_MAX_KEY_BUFF];
MARIA_KEYDEF *keyinfo= key->keyinfo;
MARIA_PAGE page;
+ MARIA_PINNED_PAGE *page_link;
DBUG_ENTER("_ma_search");
DBUG_PRINT("enter",("pos: %lu nextflag: %u lastpos: %lu",
(ulong) pos, nextflag, (ulong) info->cur_row.lastpos));
@@ -81,10 +130,11 @@ int _ma_search(register MARIA_HA *info,
}
if (_ma_fetch_keypage(&page, info, keyinfo, pos,
- PAGECACHE_LOCK_LEFT_UNLOCKED,
- DFLT_INIT_HITS, info->keyread_buff,
- test(!(nextflag & SEARCH_SAVE_BUFF))))
+ PAGECACHE_LOCK_READ, DFLT_INIT_HITS, 0, 0))
goto err;
+ page_link= dynamic_element(&info->pinned_pages,
+ info->pinned_pages.elements-1,
+ MARIA_PINNED_PAGE*);
DBUG_DUMP("page", page.buff, page.size);
flag= (*keyinfo->bin_search)(key, &page, nextflag, &keypos, lastkey,
@@ -98,8 +148,9 @@ int _ma_search(register MARIA_HA *info,
if (flag)
{
- if ((error= _ma_search(info, key, nextflag,
- _ma_kpos(nod_flag,keypos))) <= 0)
+ if ((error= _ma_search_no_save(info, key, nextflag,
+ _ma_kpos(nod_flag,keypos),
+ res_page_link, res_page_buff)) <= 0)
DBUG_RETURN(error);
if (flag >0)
@@ -118,26 +169,15 @@ int _ma_search(register MARIA_HA *info,
((keyinfo->flag & (HA_NOSAME | HA_NULL_PART)) != HA_NOSAME ||
(key->flag & SEARCH_PART_KEY) || info->s->base.born_transactional))
{
- if ((error= _ma_search(info, key, (nextflag | SEARCH_FIND) &
- ~(SEARCH_BIGGER | SEARCH_SMALLER | SEARCH_LAST),
- _ma_kpos(nod_flag,keypos))) >= 0 ||
+ if ((error= _ma_search_no_save(info, key, (nextflag | SEARCH_FIND) &
+ ~(SEARCH_BIGGER | SEARCH_SMALLER |
+ SEARCH_LAST),
+ _ma_kpos(nod_flag,keypos),
+ res_page_link, res_page_buff)) >= 0 ||
my_errno != HA_ERR_KEY_NOT_FOUND)
DBUG_RETURN(error);
- info->last_keypage= HA_OFFSET_ERROR; /* Buffer not in mem */
}
}
- if (pos != info->last_keypage)
- {
- uchar *old_buff= page.buff;
- if (_ma_fetch_keypage(&page, info, keyinfo, pos,
- PAGECACHE_LOCK_LEFT_UNLOCKED,DFLT_INIT_HITS,
- info->keyread_buff,
- test(!(nextflag & SEARCH_SAVE_BUFF))))
- goto err;
- /* Restore position if page buffer moved */
- keypos= page.buff + (keypos - old_buff);
- maxpos= page.buff + (maxpos - old_buff);
- }
info->last_key.keyinfo= keyinfo;
if ((nextflag & (SEARCH_SMALLER | SEARCH_LAST)) && flag != 0)
@@ -172,16 +212,15 @@ int _ma_search(register MARIA_HA *info,
}
info->cur_row.lastpos= _ma_row_pos_from_key(&info->last_key);
info->cur_row.trid= _ma_trid_from_key(&info->last_key);
- /* Save position for a possible read next / previous */
- info->int_keypos= info->keyread_buff + (keypos - page.buff);
- info->int_maxpos= info->keyread_buff + (maxpos - page.buff);
- info->int_nod_flag=nod_flag;
- info->int_keytree_version=keyinfo->version;
- info->last_search_keypage=info->last_keypage;
- info->page_changed=0;
- /* Set marker that buffer was used (Marker for mi_search_next()) */
- info->keyread_buff_used= (info->keyread_buff != page.buff);
+ /* Store offset to key */
+ info->int_keypos= (uchar*) (keypos - page.buff);
+ info->int_maxpos= (uchar*) (maxpos - page.buff);
+ info->int_nod_flag= nod_flag;
+ info->last_keypage= pos;
+ *res_page_link= page_link;
+ *res_page_buff= page.buff;
+
DBUG_PRINT("exit",("found key at %lu",(ulong) info->cur_row.lastpos));
DBUG_RETURN(0);
@@ -190,7 +229,7 @@ err:
info->cur_row.lastpos= HA_OFFSET_ERROR;
info->page_changed=1;
DBUG_RETURN (-1);
-} /* _ma_search */
+}
/*
=== modified file 'storage/maria/ma_write.c'
--- a/storage/maria/ma_write.c 2009-02-19 09:01:25 +0000
+++ b/storage/maria/ma_write.c 2010-03-09 19:22:24 +0000
@@ -587,6 +587,12 @@ my_bool _ma_enlarge_root(MARIA_HA *info,
/*
Search after a position for a key and store it there
+ TODO:
+ Change this to use pagecache directly instead of creating a copy
+ of the page. To do this, we must however change write-key-on-page
+ algorithm to not overwrite the buffer but instead store any overflow
+ key in a separate buffer.
+
@return
@retval -1 error
@retval 0 ok
=== modified file 'storage/maria/maria_def.h'
--- a/storage/maria/maria_def.h 2010-02-10 19:06:24 +0000
+++ b/storage/maria/maria_def.h 2010-03-09 19:22:24 +0000
@@ -979,6 +979,11 @@ extern ulonglong transid_get_packed(MARI
#define page_store_info(share, page) \
_ma_store_keypage_flag((share), (page)->buff, (page)->flag); \
_ma_store_page_used((share), (page)->buff, (page)->size);
+#ifdef IDENTICAL_PAGES_AFTER_RECOVERY
+void page_cleanup(MARIA_SHARE *share, MARIA_PAGE *page)
+#else
+#define page_cleanup(A,B) while (0)
+#endif
extern MARIA_KEY *_ma_make_key(MARIA_HA *info, MARIA_KEY *int_key, uint keynr,
uchar *key, const uchar *record,
@@ -1197,7 +1202,7 @@ void _ma_tmp_disable_logging_for_table(M
my_bool log_incomplete);
my_bool _ma_reenable_logging_for_table(MARIA_HA *info, my_bool flush_pages);
my_bool write_log_record_for_bulk_insert(MARIA_HA *info);
-
+void _ma_unpin_all_pages(MARIA_HA *info, LSN undo_lsn);
#define MARIA_NO_CRC_NORMAL_PAGE 0xffffffff
#define MARIA_NO_CRC_BITMAP_PAGE 0xfffffffe
[Maria-developers] bzr commit into MariaDB 5.1, with Maria 1.5:maria branch (knielsen:2799)
by knielsen@knielsen-hq.org 09 Mar '10
#At lp:maria
2799 knielsen(a)knielsen-hq.org 2010-03-09 [merge]
Automerge for buildbot test
modified:
sql/item_subselect.cc
=== modified file 'sql/item_subselect.cc'
--- a/sql/item_subselect.cc 2010-03-09 15:03:54 +0000
+++ b/sql/item_subselect.cc 2010-03-09 19:29:05 +0000
@@ -1776,6 +1776,10 @@ int subselect_single_select_engine::prep
{
if (prepared)
return 0;
+ if (select_lex->join)
+ {
+ select_lex->cleanup();
+ }
join= new JOIN(thd, select_lex->item_list,
select_lex->options | SELECT_NO_UNLOCK, result);
if (!join || !result)
[Maria-developers] bzr commit into MariaDB 5.1, with Maria 1.5:maria branch (knielsen:2826)
by knielsen@knielsen-hq.org 09 Mar '10
#At lp:maria
2826 knielsen(a)knielsen-hq.org 2010-03-09
Fix a Buildbot memory leak caused by JOIN::destroy() not being called for an
EXPLAIN query:
- When a subquery is located in ORDER BY, EXPLAIN runs as follows:
select_describe() runs JOIN::prepare()/optimize() for the subquery;
then at some point subselect_single_select_engine::prepare() is called,
which creates another JOIN and runs join->prepare().
In mainline MySQL this is not a problem, because the subquery's JOIN is
destroyed after the first call.
In MariaDB it is not (table elimination needs to keep JOIN objects around
longer in order to know which tables were eliminated when constructing the
EXPLAIN EXTENDED warning).
Fix the memory leak by calling select_lex->cleanup() in
subselect_single_select_engine::prepare().
modified:
sql/item_subselect.cc
=== modified file 'sql/item_subselect.cc'
--- a/sql/item_subselect.cc 2010-03-09 15:03:54 +0000
+++ b/sql/item_subselect.cc 2010-03-09 19:29:05 +0000
@@ -1776,6 +1776,10 @@ int subselect_single_select_engine::prep
{
if (prepared)
return 0;
+ if (select_lex->join)
+ {
+ select_lex->cleanup();
+ }
join= new JOIN(thd, select_lex->item_list,
select_lex->options | SELECT_NO_UNLOCK, result);
if (!join || !result)
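Not part of the commit, but for illustration: a statement of the shape described above, a scalar subquery in the ORDER BY clause of an EXPLAINed query, can be produced with something like the following sketch. Table and column names are made up:

mysql -uroot test <<'SQL'
CREATE TABLE t1 (a INT, b INT);
CREATE TABLE t2 (a INT, b INT);
-- select_describe() prepares the subquery's JOIN first; without the
-- cleanup() call above, the engine's prepare() then built a second JOIN
-- on top of it and leaked the first one.
EXPLAIN EXTENDED
  SELECT a FROM t1 ORDER BY (SELECT b FROM t2 WHERE t2.a = t1.a);
DROP TABLE t1, t2;
SQL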