
Re: [Maria-developers] bb2db687a05: 26 aug (MDEV-371 Unique indexes for blobs)
by Sergei Golubchik 10 Oct '18
Hi, Sachin!
Could you please explain what you store in the frm file and how you modify
keyinfo/keypart structures in memory?
Needs to be tested with partitioned tables. Particularly with MDEV-14005.
Tests were great and thorough, good job.
It'd be useful to have at least some tests for innodb too, though.
See other comments below.
On Oct 08, Sachin Setiya wrote:
> diff --git a/mysql-test/main/long_unique.result b/mysql-test/main/long_unique.result
> new file mode 100644
> index 00000000000..1a06c230c72
> --- /dev/null
> +++ b/mysql-test/main/long_unique.result
> @@ -0,0 +1,1391 @@
> +#Structure of tests
> +#First we will check all option for
> +#table containing single unique column
> +#table containing keys like unique(a,b,c,d) etc
> +#then table containing 2 blob unique etc
> +set @allowed_packet= @@max_allowed_packet;
> +#table with single long blob column;
> +create table t1(a blob unique);
> +insert into t1 values(1),(2),(3),(56),('sachin'),('maria'),(123456789034567891),(null),(null),(123456789034567890);
> +#table structure;
> +desc t1;
> +Field Type Null Key Default Extra
> +a blob YES UNI NULL
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + UNIQUE KEY `a` (`a`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table t1
> +Non_unique 0
> +Key_name a
> +Seq_in_index 1
> +Column_name a
> +Collation A
> +Cardinality NULL
> +Sub_part NULL
> +Packed NULL
> +Null YES
> +Index_type HASH_INDEX
I'm not sure about it. One cannot write UNIQUE (a) USING HASH_INDEX
in CREATE TABLE, right?
Maybe just report HASH here? (and fix the parser to behave accordingly)
> +Comment
> +Index_comment
> +
> +MyISAM file: DATADIR/test/t1
> +Record format: Packed
> +Character set: latin1_swedish_ci (8)
> +Data records: 10 Deleted blocks: 0
> +Recordlength: 20
> +
> +table description:
> +Key Start Len Index Type
> +1 12 8 multip. ulonglong NULL
good idea
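For readers following along: the ulonglong key shown above is the heart of the long-unique approach. Instead of indexing the blob itself, the server stores a hash of the value (the hidden db_row_hash_n field seen later in this thread), indexes that, and on a hash match falls back to comparing the full values so hash collisions are not reported as duplicates. A minimal sketch of that check, in Python with placeholder names and a stand-in hash function (illustrative only, not the server code):

```python
import zlib

class LongUniqueTable:
    """Toy model of a table with one hash-backed unique blob column."""

    def __init__(self):
        self.rows = []        # full blob values, in insert order
        self.hash_index = {}  # hash -> list of row positions with that hash

    @staticmethod
    def row_hash(value):
        # Stand-in for the server's 64-bit hash; NULL gets no hash entry,
        # so NULLs never conflict with each other (as in the test above).
        return zlib.crc32(value) if value is not None else None

    def insert(self, value):
        h = self.row_hash(value)
        if h is not None:
            for pos in self.hash_index.get(h, []):
                # Hash matched: compare actual values to rule out a collision.
                if self.rows[pos] == value:
                    raise ValueError("Duplicate entry %r for key 'a'" % value)
        self.rows.append(value)
        if h is not None:
            self.hash_index.setdefault(h, []).append(len(self.rows) - 1)

t = LongUniqueTable()
t.insert(b"sachin")
t.insert(None)
t.insert(None)        # repeated NULLs are fine
try:
    t.insert(b"sachin")
except ValueError as e:
    print(e)          # duplicate caught via hash lookup + value compare
```

The index entry stays a fixed 8 bytes no matter how long the blob is, which is exactly what the `1 12 8 multip. ulonglong` line in the myisamchk output reflects.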
> +select * from information_schema.columns where table_schema = 'test' and table_name = 't1';
> +TABLE_CATALOG TABLE_SCHEMA TABLE_NAME COLUMN_NAME ORDINAL_POSITION COLUMN_DEFAULT IS_NULLABLE DATA_TYPE CHARACTER_MAXIMUM_LENGTH CHARACTER_OCTET_LENGTH NUMERIC_PRECISION NUMERIC_SCALE DATETIME_PRECISION CHARACTER_SET_NAME COLLATION_NAME COLUMN_TYPE COLUMN_KEY EXTRA PRIVILEGES COLUMN_COMMENT IS_GENERATED GENERATION_EXPRESSION
> +def test t1 a 1 NULL YES blob 65535 65535 NULL NULL NULL NULL NULL blob UNI select,insert,update,references NEVER NULL
> +select * from information_schema.statistics where table_schema = 'test' and table_name = 't1';
> +TABLE_CATALOG TABLE_SCHEMA TABLE_NAME NON_UNIQUE INDEX_SCHEMA INDEX_NAME SEQ_IN_INDEX COLUMN_NAME COLLATION CARDINALITY SUB_PART PACKED NULLABLE INDEX_TYPE COMMENT INDEX_COMMENT
> +def test t1 0 test a 1 a A NULL NULL NULL YES HASH_INDEX
> +select * from information_schema.key_column_usage where table_schema= 'test' and table_name= 't1';
> +CONSTRAINT_CATALOG def
> +CONSTRAINT_SCHEMA test
> +CONSTRAINT_NAME a
> +TABLE_CATALOG def
> +TABLE_SCHEMA test
> +TABLE_NAME t1
> +COLUMN_NAME a
> +ORDINAL_POSITION 1
> +POSITION_IN_UNIQUE_CONSTRAINT NULL
> +REFERENCED_TABLE_SCHEMA NULL
> +REFERENCED_TABLE_NAME NULL
> +REFERENCED_COLUMN_NAME NULL
> +# table select we should not be able to see db_row_hash_column;
> +select * from t1;
> +a
> +1
> +2
> +3
> +56
> +sachin
> +maria
> +123456789034567891
> +NULL
> +NULL
> +123456789034567890
> +select db_row_hash_1 from t1;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 'field list'
> +#duplicate entry test;
> +insert into t1 values(2);
> +ERROR 23000: Duplicate entry '2' for key 'a'
> +insert into t1 values('sachin');
> +ERROR 23000: Duplicate entry 'sachin' for key 'a'
> +insert into t1 values(123456789034567891);
> +ERROR 23000: Duplicate entry '123456789034567891' for key 'a'
looks great :)
> +select * from t1;
> +a
> +1
> +2
> +3
> +56
> +sachin
> +maria
> +123456789034567891
> +NULL
> +NULL
> +123456789034567890
> +insert into t1 values(11),(22),(33);
> +insert into t1 values(12),(22);
> +ERROR 23000: Duplicate entry '22' for key 'a'
> +select * from t1;
> +a
> +1
> +2
> +3
> +56
> +sachin
> +maria
> +123456789034567891
> +NULL
> +NULL
> +123456789034567890
> +11
> +22
> +33
> +12
> +insert into t1 values(repeat('s',4000*10)),(repeat('s',4001*10));
> +insert into t1 values(repeat('m',4000*10)),(repeat('m',4000*10));
> +ERROR 23000: Duplicate entry 'mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm' for key 'a'
> +insert into t1 values(repeat('m',4001)),(repeat('m',4002));
> +truncate table t1;
> +insert into t1 values(1),(2),(3),(4),(5),(8),(7);
> +
> +MyISAM file: DATADIR/test/t1
> +Record format: Packed
> +Character set: latin1_swedish_ci (8)
> +Data records: 7 Deleted blocks: 0
> +Recordlength: 20
> +
> +table description:
> +Key Start Len Index Type
> +1 12 8 multip. ulonglong NULL
> +#now some alter commands;
> +alter table t1 add column b int;
> +desc t1;
> +Field Type Null Key Default Extra
> +a blob YES UNI NULL
> +b int(11) YES NULL
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` int(11) DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(65535))
I think there shouldn't be (65535) here
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +insert into t1 values(1,2);
> +ERROR 23000: Duplicate entry '1' for key 'a'
> +insert into t1 values(2,2);
> +ERROR 23000: Duplicate entry '2' for key 'a'
> +select db_row_hash_1 from t1;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 'field list'
> +#now try to change db_row_hash_1 column;
> +alter table t1 drop column db_row_hash_1;
> +ERROR 42000: Can't DROP COLUMN `db_row_hash_1`; check that it exists
> +alter table t1 add column d int , add column e int , drop column db_row_hash_1;
> +ERROR 42000: Can't DROP COLUMN `db_row_hash_1`; check that it exists
> +alter table t1 modify column db_row_hash_1 int ;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +alter table t1 add column a int , add column b int, modify column db_row_hash_1 int ;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +alter table t1 change column db_row_hash_1 dsds int;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +alter table t1 add column asd int, change column db_row_hash_1 dsds int;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +alter table t1 drop column b , add column c int;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `c` int(11) DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(65535))
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +#now add some column with name db_row_hash;
> +alter table t1 add column db_row_hash_1 int unique;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `c` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(65535)),
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +insert into t1 values(45,1,55),(46,1,55);
> +ERROR 23000: Duplicate entry '55' for key 'db_row_hash_1'
> +alter table t1 add column db_row_hash_2 int, add column db_row_hash_3 int;
> +desc t1;
> +Field Type Null Key Default Extra
> +a blob YES UNI NULL
> +c int(11) YES NULL
> +db_row_hash_1 int(11) YES UNI NULL
> +db_row_hash_2 int(11) YES NULL
> +db_row_hash_3 int(11) YES NULL
> +#this should also drop the unique index ;
> +alter table t1 drop column a;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `c` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + `db_row_hash_3` int(11) DEFAULT NULL,
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 db_row_hash_1 1 db_row_hash_1 A NULL NULL NULL YES BTREE
> +#add column with unique index on blob ;
> +alter table t1 add column a blob unique;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `c` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + `db_row_hash_3` int(11) DEFAULT NULL,
> + `a` blob DEFAULT NULL,
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`),
> + UNIQUE KEY `a` (`a`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +# try to change the blob unique column name;
> +#this will change index to b tree;
> +alter table t1 modify column a int ;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `c` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + `db_row_hash_3` int(11) DEFAULT NULL,
> + `a` int(11) DEFAULT NULL,
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`),
> + UNIQUE KEY `a` (`a`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 db_row_hash_1 1 db_row_hash_1 A NULL NULL NULL YES BTREE
> +t1 0 a 1 a A NULL NULL NULL YES BTREE
> +alter table t1 add column clm blob unique;
> +#try changing the name ;
> +alter table t1 change column clm clm_changed blob;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `c` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + `db_row_hash_3` int(11) DEFAULT NULL,
> + `a` int(11) DEFAULT NULL,
> + `clm_changed` blob DEFAULT NULL,
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`),
> + UNIQUE KEY `a` (`a`),
> + UNIQUE KEY `clm` (`clm_changed`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 db_row_hash_1 1 db_row_hash_1 A NULL NULL NULL YES BTREE
> +t1 0 a 1 a A NULL NULL NULL YES BTREE
> +t1 0 clm 1 clm_changed A NULL NULL NULL YES HASH_INDEX
> +#now drop the unique key;
> +alter table t1 drop key clm;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `c` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + `db_row_hash_3` int(11) DEFAULT NULL,
> + `a` int(11) DEFAULT NULL,
> + `clm_changed` blob DEFAULT NULL,
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`),
> + UNIQUE KEY `a` (`a`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 db_row_hash_1 1 db_row_hash_1 A NULL NULL NULL YES BTREE
> +t1 0 a 1 a A NULL NULL NULL YES BTREE
> +drop table t1;
> +create table t1 (a TEXT CHARSET latin1 COLLATE latin1_german2_ci unique);
> +desc t1;
> +Field Type Null Key Default Extra
> +a text YES UNI NULL
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 a 1 a A NULL NULL NULL YES HASH_INDEX
> +insert into t1 values ('ae');
> +insert into t1 values ('AE');
> +ERROR 23000: Duplicate entry 'AE' for key 'a'
> +insert into t1 values ('Ä');
this should've failed, shouldn't it?
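Background for why a failure is expected here: latin1_german2_ci is the German "phone-book" (DIN 5007-2) collation, in which Ä, Ö, Ü and ß compare equal to AE, OE, UE and ss respectively, so 'Ä' should collide with the 'AE' row already inserted. A rough sketch of that expansion rule (an approximation of the collation, not the server's implementation):

```python
# DIN 5007-2 "phone-book" expansions used by latin1_german2_ci.
EXPANSIONS = {"ä": "ae", "ö": "oe", "ü": "ue", "ß": "ss"}

def german2_key(s):
    # Case-fold, then apply the phone-book expansions to get a comparison key.
    return "".join(EXPANSIONS.get(ch, ch) for ch in s.lower())

print(german2_key("Ä") == german2_key("AE"))  # True: a duplicate under this collation
```

So whichever hash the unique check uses must be computed from the collation's sort key, not from the raw bytes, or equal-but-differently-spelled values slip through.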
> +drop table t1;
> +create table t1 (a int primary key, b blob unique);
> +desc t1;
> +Field Type Null Key Default Extra
> +a int(11) NO PRI NULL
> +b blob YES UNI NULL
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 PRIMARY 1 a A 0 NULL NULL BTREE
> +t1 0 b 1 b A NULL NULL NULL YES HASH_INDEX
> +insert into t1 values(1,1),(2,2),(3,3);
> +insert into t1 values(1,1);
> +ERROR 23000: Duplicate entry '1' for key 'b'
> +insert into t1 values(7,1);
> +ERROR 23000: Duplicate entry '1' for key 'b'
> +drop table t1;
> +#table with multiple long blob column and varchar text column ;
> +create table t1(a blob unique, b int , c blob unique , d text unique , e varchar(3000) unique);
> +insert into t1 values(1,2,3,4,5),(2,11,22,33,44),(3111,222,333,444,555),(5611,2222,3333,4444,5555),
> +('sachin',341,'fdf','gfgfgfg','hghgr'),('maria',345,'frter','dasd','utyuty'),
> +(123456789034567891,353534,53453453453456,64565464564564,45435345345345),
> +(123456789034567890,43545,657567567567,78967657567567,657567567567567676);
> +#table structure;
> +desc t1;
> +Field Type Null Key Default Extra
> +a blob YES UNI NULL
> +b int(11) YES NULL
> +c blob YES UNI NULL
> +d text YES UNI NULL
> +e varchar(3000) YES UNI NULL
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` int(11) DEFAULT NULL,
> + `c` blob DEFAULT NULL,
> + `d` text DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + UNIQUE KEY `a` (`a`),
> + UNIQUE KEY `c` (`c`),
> + UNIQUE KEY `d` (`d`),
> + UNIQUE KEY `e` (`e`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 a 1 a A NULL NULL NULL YES HASH_INDEX
> +t1 0 c 1 c A NULL NULL NULL YES HASH_INDEX
> +t1 0 d 1 d A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 1 e A NULL NULL NULL YES HASH_INDEX
> +
> +MyISAM file: DATADIR/test/t1
> +Record format: Packed
> +Character set: latin1_swedish_ci (8)
> +Data records: 8 Deleted blocks: 0
> +Recordlength: 3072
> +
> +table description:
> +Key Start Len Index Type
> +1 3063 8 multip. ulonglong NULL
> +2 3055 8 multip. ulonglong NULL
> +3 3047 8 multip. ulonglong NULL
> +4 3039 8 multip. ulonglong prefix NULL
why "prefix" here? how is it different from others?
> +select * from information_schema.columns where table_schema = 'test' and table_name = 't1';
> +TABLE_CATALOG TABLE_SCHEMA TABLE_NAME COLUMN_NAME ORDINAL_POSITION COLUMN_DEFAULT IS_NULLABLE DATA_TYPE CHARACTER_MAXIMUM_LENGTH CHARACTER_OCTET_LENGTH NUMERIC_PRECISION NUMERIC_SCALE DATETIME_PRECISION CHARACTER_SET_NAME COLLATION_NAME COLUMN_TYPE COLUMN_KEY EXTRA PRIVILEGES COLUMN_COMMENT IS_GENERATED GENERATION_EXPRESSION
> +def test t1 a 1 NULL YES blob 65535 65535 NULL NULL NULL NULL NULL blob UNI select,insert,update,references NEVER NULL
> +def test t1 b 2 NULL YES int NULL NULL 10 0 NULL NULL NULL int(11) select,insert,update,references NEVER NULL
> +def test t1 c 3 NULL YES blob 65535 65535 NULL NULL NULL NULL NULL blob UNI select,insert,update,references NEVER NULL
> +def test t1 d 4 NULL YES text 65535 65535 NULL NULL NULL latin1 latin1_swedish_ci text UNI select,insert,update,references NEVER NULL
> +def test t1 e 5 NULL YES varchar 3000 3000 NULL NULL NULL latin1 latin1_swedish_ci varchar(3000) UNI select,insert,update,references NEVER NULL
> +select * from information_schema.statistics where table_schema = 'test' and table_name = 't1';
> +TABLE_CATALOG TABLE_SCHEMA TABLE_NAME NON_UNIQUE INDEX_SCHEMA INDEX_NAME SEQ_IN_INDEX COLUMN_NAME COLLATION CARDINALITY SUB_PART PACKED NULLABLE INDEX_TYPE COMMENT INDEX_COMMENT
> +def test t1 0 test a 1 a A NULL NULL NULL YES HASH_INDEX
> +def test t1 0 test c 1 c A NULL NULL NULL YES HASH_INDEX
> +def test t1 0 test d 1 d A NULL NULL NULL YES HASH_INDEX
> +def test t1 0 test e 1 e A NULL NULL NULL YES HASH_INDEX
> +select * from information_schema.key_column_usage where table_schema= 'test' and table_name= 't1';
> +CONSTRAINT_CATALOG CONSTRAINT_SCHEMA CONSTRAINT_NAME TABLE_CATALOG TABLE_SCHEMA TABLE_NAME COLUMN_NAME ORDINAL_POSITION POSITION_IN_UNIQUE_CONSTRAINT REFERENCED_TABLE_SCHEMA REFERENCED_TABLE_NAME REFERENCED_COLUMN_NAME
> +def test a def test t1 a 1 NULL NULL NULL NULL
> +def test c def test t1 c 1 NULL NULL NULL NULL
> +def test d def test t1 d 1 NULL NULL NULL NULL
> +def test e def test t1 e 1 NULL NULL NULL NULL
> +#table select we should not be able to see db_row_hash_1 column;
> +select * from t1;
> +a b c d e
> +1 2 3 4 5
> +2 11 22 33 44
> +3111 222 333 444 555
> +5611 2222 3333 4444 5555
> +sachin 341 fdf gfgfgfg hghgr
> +maria 345 frter dasd utyuty
> +123456789034567891 353534 53453453453456 64565464564564 45435345345345
> +123456789034567890 43545 657567567567 78967657567567 657567567567567676
> +select db_row_hash_1 from t1;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 'field list'
> +select db_row_hash_2 from t1;
> +ERROR 42S22: Unknown column 'db_row_hash_2' in 'field list'
> +select db_row_hash_3 from t1;
> +ERROR 42S22: Unknown column 'db_row_hash_3' in 'field list'
> +#duplicate entry test;
> +insert into t1 values(21,2,3,42,51);
> +ERROR 23000: Duplicate entry '3' for key 'c'
> +insert into t1 values('sachin',null,null,null,null);
> +ERROR 23000: Duplicate entry 'sachin' for key 'a'
> +insert into t1 values(1234567890345671890,4353451,6575675675617,789676575675617,657567567567567676);
> +ERROR 23000: Duplicate entry '657567567567567676' for key 'e'
> +select * from t1;
> +a b c d e
> +1 2 3 4 5
> +2 11 22 33 44
> +3111 222 333 444 555
> +5611 2222 3333 4444 5555
> +sachin 341 fdf gfgfgfg hghgr
> +maria 345 frter dasd utyuty
> +123456789034567891 353534 53453453453456 64565464564564 45435345345345
> +123456789034567890 43545 657567567567 78967657567567 657567567567567676
> +insert into t1 values(repeat('s',4000*10),100,repeat('s',4000*10),repeat('s',4000*10),
> +repeat('s',400)),(repeat('s',4001*10),1000,repeat('s',4001*10),repeat('s',4001*10),
> +repeat('s',2995));
> +insert into t1 values(repeat('m',4000*11),10,repeat('s',4000*11),repeat('s',4000*11),repeat('s',2995));
> +ERROR 23000: Duplicate entry 'ssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss' for key 'e'
> +truncate table t1;
> +insert into t1 values(1,2,3,4,5),(2,11,22,33,44),(3111,222,333,444,555),(5611,2222,3333,4444,5555);
> +#now some alter commands;
> +alter table t1 add column f int;
> +desc t1;
> +Field Type Null Key Default Extra
> +a blob YES UNI NULL
> +b int(11) YES NULL
> +c blob YES UNI NULL
> +d text YES UNI NULL
> +e varchar(3000) YES UNI NULL
> +f int(11) YES NULL
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` int(11) DEFAULT NULL,
> + `c` blob DEFAULT NULL,
> + `d` text DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` int(11) DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(65535)),
> + UNIQUE KEY `c` (`c`(65535)),
> + UNIQUE KEY `d` (`d`(65535)),
> + UNIQUE KEY `e` (`e`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +#unique key should not break;
> +insert into t1 values(1,2,3,4,5,6);
> +ERROR 23000: Duplicate entry '1' for key 'a'
> +select db_row_hash_1 , db_row_hash_2, db_row_hash_3 from t1;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 'field list'
> +#now try to change db_row_hash_1 column;
> +alter table t1 drop column db_row_hash_1, drop column db_row_hash_2, drop column db_row_hash_3;
> +ERROR 42000: Can't DROP COLUMN `db_row_hash_1`; check that it exists
> +alter table t1 add column dg int , add column ef int , drop column db_row_hash_1;
> +ERROR 42000: Can't DROP COLUMN `db_row_hash_1`; check that it exists
> +alter table t1 modify column db_row_hash_1 int , modify column db_row_hash_2 int, modify column db_row_hash_3 int;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +alter table t1 add column ar int , add column rb int, modify column db_row_hash_1 int , modify column db_row_hash_3 int;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +alter table t1 change column db_row_hash_1 dsds int , change column db_row_hash_2 dfdf int , change column db_row_hash_3 gdfg int ;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +alter table t1 add column asd int, drop column a, change column db_row_hash_1 dsds int, change db_row_hash_3 fdfdfd int;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +alter table t1 drop column b , add column g int;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `c` blob DEFAULT NULL,
> + `d` text DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` int(11) DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(65535)),
> + UNIQUE KEY `c` (`c`(65535)),
> + UNIQUE KEY `d` (`d`(65535)),
> + UNIQUE KEY `e` (`e`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +#now add some column with name db_row_hash;
> +alter table t1 add column db_row_hash_1 int unique;
> +alter table t1 add column db_row_hash_2 int unique;
> +alter table t1 add column db_row_hash_3 int unique;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `c` blob DEFAULT NULL,
> + `d` text DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` int(11) DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + `db_row_hash_3` int(11) DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(65535)),
> + UNIQUE KEY `c` (`c`(65535)),
> + UNIQUE KEY `d` (`d`(65535)),
> + UNIQUE KEY `e` (`e`),
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`),
> + UNIQUE KEY `db_row_hash_2` (`db_row_hash_2`),
> + UNIQUE KEY `db_row_hash_3` (`db_row_hash_3`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +alter table t1 add column db_row_hash_7 int, add column db_row_hash_5 int , add column db_row_hash_4 int ;
> +alter table t1 drop column db_row_hash_7,drop column db_row_hash_3, drop column db_row_hash_4;
> +desc t1;
> +Field Type Null Key Default Extra
> +a blob YES UNI NULL
> +c blob YES UNI NULL
> +d text YES UNI NULL
> +e varchar(3000) YES UNI NULL
> +f int(11) YES NULL
> +g int(11) YES NULL
> +db_row_hash_1 int(11) YES UNI NULL
> +db_row_hash_2 int(11) YES UNI NULL
> +db_row_hash_5 int(11) YES NULL
> +#this show now break anything;
"this should not" ?
> +insert into t1 values(1,2,3,4,5,6,23,5,6);
> +ERROR 23000: Duplicate entry '1' for key 'a'
> +#this should also drop the unique index;
> +alter table t1 drop column a, drop column c;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `d` text DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` int(11) DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + `db_row_hash_5` int(11) DEFAULT NULL,
> + UNIQUE KEY `d` (`d`(65535)),
> + UNIQUE KEY `e` (`e`),
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`),
> + UNIQUE KEY `db_row_hash_2` (`db_row_hash_2`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 d 1 d A NULL 65535 NULL YES HASH_INDEX
> +t1 0 e 1 e A NULL NULL NULL YES HASH_INDEX
> +t1 0 db_row_hash_1 1 db_row_hash_1 A NULL NULL NULL YES BTREE
> +t1 0 db_row_hash_2 1 db_row_hash_2 A NULL NULL NULL YES BTREE
> +#add column with unique index on blob;
> +alter table t1 add column a blob unique;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `d` text DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` int(11) DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + `db_row_hash_5` int(11) DEFAULT NULL,
> + `a` blob DEFAULT NULL,
> + UNIQUE KEY `d` (`d`(65535)),
> + UNIQUE KEY `e` (`e`),
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`),
> + UNIQUE KEY `db_row_hash_2` (`db_row_hash_2`),
> + UNIQUE KEY `a` (`a`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 d 1 d A NULL 65535 NULL YES HASH_INDEX
> +t1 0 e 1 e A NULL NULL NULL YES HASH_INDEX
> +t1 0 db_row_hash_1 1 db_row_hash_1 A NULL NULL NULL YES BTREE
> +t1 0 db_row_hash_2 1 db_row_hash_2 A NULL NULL NULL YES BTREE
> +t1 0 a 1 a A NULL NULL NULL YES HASH_INDEX
> +#try to change the blob unique column name;
> +#this will change index to b tree;
> +alter table t1 modify column a int , modify column e int;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `d` text DEFAULT NULL,
> + `e` int(11) DEFAULT NULL,
> + `f` int(11) DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + `db_row_hash_5` int(11) DEFAULT NULL,
> + `a` int(11) DEFAULT NULL,
> + UNIQUE KEY `d` (`d`(65535)),
> + UNIQUE KEY `e` (`e`),
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`),
> + UNIQUE KEY `db_row_hash_2` (`db_row_hash_2`),
> + UNIQUE KEY `a` (`a`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 d 1 d A NULL 65535 NULL YES HASH_INDEX
> +t1 0 e 1 e A NULL NULL NULL YES BTREE
> +t1 0 db_row_hash_1 1 db_row_hash_1 A NULL NULL NULL YES BTREE
> +t1 0 db_row_hash_2 1 db_row_hash_2 A NULL NULL NULL YES BTREE
> +t1 0 a 1 a A NULL NULL NULL YES BTREE
> +alter table t1 add column clm1 blob unique,add column clm2 blob unique;
> +#try changing the name;
> +alter table t1 change column clm1 clm_changed1 blob, change column clm2 clm_changed2 blob;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `d` text DEFAULT NULL,
> + `e` int(11) DEFAULT NULL,
> + `f` int(11) DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + `db_row_hash_5` int(11) DEFAULT NULL,
> + `a` int(11) DEFAULT NULL,
> + `clm_changed1` blob DEFAULT NULL,
> + `clm_changed2` blob DEFAULT NULL,
> + UNIQUE KEY `d` (`d`(65535)),
> + UNIQUE KEY `e` (`e`),
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`),
> + UNIQUE KEY `db_row_hash_2` (`db_row_hash_2`),
> + UNIQUE KEY `a` (`a`),
> + UNIQUE KEY `clm1` (`clm_changed1`),
> + UNIQUE KEY `clm2` (`clm_changed2`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 d 1 d A NULL 65535 NULL YES HASH_INDEX
> +t1 0 e 1 e A NULL NULL NULL YES BTREE
> +t1 0 db_row_hash_1 1 db_row_hash_1 A NULL NULL NULL YES BTREE
> +t1 0 db_row_hash_2 1 db_row_hash_2 A NULL NULL NULL YES BTREE
> +t1 0 a 1 a A NULL NULL NULL YES BTREE
> +t1 0 clm1 1 clm_changed1 A NULL NULL NULL YES HASH_INDEX
> +t1 0 clm2 1 clm_changed2 A NULL NULL NULL YES HASH_INDEX
> +#now drop the unique key;
> +alter table t1 drop key clm1, drop key clm2;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `d` text DEFAULT NULL,
> + `e` int(11) DEFAULT NULL,
> + `f` int(11) DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + `db_row_hash_5` int(11) DEFAULT NULL,
> + `a` int(11) DEFAULT NULL,
> + `clm_changed1` blob DEFAULT NULL,
> + `clm_changed2` blob DEFAULT NULL,
> + UNIQUE KEY `d` (`d`(65535)),
> + UNIQUE KEY `e` (`e`),
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`),
> + UNIQUE KEY `db_row_hash_2` (`db_row_hash_2`),
> + UNIQUE KEY `a` (`a`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 d 1 d A NULL 65535 NULL YES HASH_INDEX
> +t1 0 e 1 e A NULL NULL NULL YES BTREE
> +t1 0 db_row_hash_1 1 db_row_hash_1 A NULL NULL NULL YES BTREE
> +t1 0 db_row_hash_2 1 db_row_hash_2 A NULL NULL NULL YES BTREE
> +t1 0 a 1 a A NULL NULL NULL YES BTREE
> +drop table t1;
> +#now the table with key on multiple columns; the ultimate test;
> +create table t1(a blob, b int , c varchar(2000) , d text , e varchar(3000) , f longblob , g int , h text ,
> +unique(a,b,c), unique(c,d,e),unique(e,f,g,h), unique(b,d,g,h));
> +insert into t1 values(1,1,1,1,1,1,1,1),(2,2,2,2,2,2,2,2),(3,3,3,3,3,3,3,3),(4,4,4,4,4,4,4,4),(5,5,5,5,5,5,5,5),
> +('maria',6,'maria','maria','maria','maria',6,'maria'),('mariadb',7,'mariadb','mariadb','mariadb','mariadb',8,'mariadb')
> +,(null,null,null,null,null,null,null,null),(null,null,null,null,null,null,null,null);
> +#table structure;
> +desc t1;
> +Field Type Null Key Default Extra
> +a blob YES MUL NULL
> +b int(11) YES MUL NULL
> +c varchar(2000) YES MUL NULL
> +d text YES NULL
> +e varchar(3000) YES MUL NULL
> +f longblob YES NULL
> +g int(11) YES NULL
> +h text YES NULL
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` int(11) DEFAULT NULL,
> + `c` varchar(2000) DEFAULT NULL,
> + `d` text DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` longblob DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `h` text DEFAULT NULL,
> + UNIQUE KEY `a` (`a`,`b`,`c`),
> + UNIQUE KEY `c` (`c`,`d`,`e`),
> + UNIQUE KEY `e` (`e`,`f`,`g`,`h`),
> + UNIQUE KEY `b` (`b`,`d`,`g`,`h`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 a 1 a A NULL NULL NULL YES HASH_INDEX
> +t1 0 a 2 b A NULL NULL NULL YES HASH_INDEX
> +t1 0 a 3 c A NULL NULL NULL YES HASH_INDEX
> +t1 0 c 1 c A NULL NULL NULL YES HASH_INDEX
> +t1 0 c 2 d A 0 NULL NULL YES HASH_INDEX
> +t1 0 c 3 e A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 1 e A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 2 f A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 4 h A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 1 b A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 2 d A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 4 h A 0 NULL NULL YES HASH_INDEX
> +
> +MyISAM file: DATADIR/test/t1
> +Record format: Packed
> +Character set: latin1_swedish_ci (8)
> +Data records: 9 Deleted blocks: 0
> +Recordlength: 5092
> +
> +table description:
> +Key Start Len Index Type
> +1 5081 8 multip. ulonglong prefix NULL
> +2 5073 8 multip. ulonglong prefix NULL
> +3 5065 8 multip. ulonglong prefix NULL
> +4 5057 8 multip. ulonglong NULL
> +select * from information_schema.columns where table_schema = 'test' and table_name = 't1';
> +TABLE_CATALOG TABLE_SCHEMA TABLE_NAME COLUMN_NAME ORDINAL_POSITION COLUMN_DEFAULT IS_NULLABLE DATA_TYPE CHARACTER_MAXIMUM_LENGTH CHARACTER_OCTET_LENGTH NUMERIC_PRECISION NUMERIC_SCALE DATETIME_PRECISION CHARACTER_SET_NAME COLLATION_NAME COLUMN_TYPE COLUMN_KEY EXTRA PRIVILEGES COLUMN_COMMENT IS_GENERATED GENERATION_EXPRESSION
> +def test t1 a 1 NULL YES blob 65535 65535 NULL NULL NULL NULL NULL blob MUL select,insert,update,references NEVER NULL
> +def test t1 b 2 NULL YES int NULL NULL 10 0 NULL NULL NULL int(11) MUL select,insert,update,references NEVER NULL
> +def test t1 c 3 NULL YES varchar 2000 2000 NULL NULL NULL latin1 latin1_swedish_ci varchar(2000) MUL select,insert,update,references NEVER NULL
> +def test t1 d 4 NULL YES text 65535 65535 NULL NULL NULL latin1 latin1_swedish_ci text select,insert,update,references NEVER NULL
> +def test t1 e 5 NULL YES varchar 3000 3000 NULL NULL NULL latin1 latin1_swedish_ci varchar(3000) MUL select,insert,update,references NEVER NULL
> +def test t1 f 6 NULL YES longblob 4294967295 4294967295 NULL NULL NULL NULL NULL longblob select,insert,update,references NEVER NULL
> +def test t1 g 7 NULL YES int NULL NULL 10 0 NULL NULL NULL int(11) select,insert,update,references NEVER NULL
> +def test t1 h 8 NULL YES text 65535 65535 NULL NULL NULL latin1 latin1_swedish_ci text select,insert,update,references NEVER NULL
> +select * from information_schema.statistics where table_schema = 'test' and table_name = 't1';
> +TABLE_CATALOG TABLE_SCHEMA TABLE_NAME NON_UNIQUE INDEX_SCHEMA INDEX_NAME SEQ_IN_INDEX COLUMN_NAME COLLATION CARDINALITY SUB_PART PACKED NULLABLE INDEX_TYPE COMMENT INDEX_COMMENT
> +def test t1 0 test a 1 a A NULL NULL NULL YES HASH_INDEX
> +def test t1 0 test a 2 b A NULL NULL NULL YES HASH_INDEX
> +def test t1 0 test a 3 c A NULL NULL NULL YES HASH_INDEX
> +def test t1 0 test c 1 c A NULL NULL NULL YES HASH_INDEX
> +def test t1 0 test c 2 d A 0 NULL NULL YES HASH_INDEX
> +def test t1 0 test c 3 e A 0 NULL NULL YES HASH_INDEX
> +def test t1 0 test e 1 e A 0 NULL NULL YES HASH_INDEX
> +def test t1 0 test e 2 f A 0 NULL NULL YES HASH_INDEX
> +def test t1 0 test e 3 g A 0 NULL NULL YES HASH_INDEX
> +def test t1 0 test e 4 h A 0 NULL NULL YES HASH_INDEX
> +def test t1 0 test b 1 b A 0 NULL NULL YES HASH_INDEX
> +def test t1 0 test b 2 d A 0 NULL NULL YES HASH_INDEX
> +def test t1 0 test b 3 g A 0 NULL NULL YES HASH_INDEX
> +def test t1 0 test b 4 h A 0 NULL NULL YES HASH_INDEX
> +select * from information_schema.key_column_usage where table_schema= 'test' and table_name= 't1';
> +CONSTRAINT_CATALOG CONSTRAINT_SCHEMA CONSTRAINT_NAME TABLE_CATALOG TABLE_SCHEMA TABLE_NAME COLUMN_NAME ORDINAL_POSITION POSITION_IN_UNIQUE_CONSTRAINT REFERENCED_TABLE_SCHEMA REFERENCED_TABLE_NAME REFERENCED_COLUMN_NAME
> +def test a def test t1 a 1 NULL NULL NULL NULL
> +def test a def test t1 b 2 NULL NULL NULL NULL
> +def test a def test t1 c 3 NULL NULL NULL NULL
> +def test c def test t1 c 1 NULL NULL NULL NULL
> +def test c def test t1 d 2 NULL NULL NULL NULL
> +def test c def test t1 e 3 NULL NULL NULL NULL
> +def test e def test t1 e 1 NULL NULL NULL NULL
> +def test e def test t1 f 2 NULL NULL NULL NULL
> +def test e def test t1 g 3 NULL NULL NULL NULL
> +def test e def test t1 h 4 NULL NULL NULL NULL
> +def test b def test t1 b 1 NULL NULL NULL NULL
> +def test b def test t1 d 2 NULL NULL NULL NULL
> +def test b def test t1 g 3 NULL NULL NULL NULL
> +def test b def test t1 h 4 NULL NULL NULL NULL
> +# table select we should not be able to see db_row_hash_1 column;
> +select * from t1;
> +a b c d e f g h
> +1 1 1 1 1 1 1 1
> +2 2 2 2 2 2 2 2
> +3 3 3 3 3 3 3 3
> +4 4 4 4 4 4 4 4
> +5 5 5 5 5 5 5 5
> +maria 6 maria maria maria maria 6 maria
> +mariadb 7 mariadb mariadb mariadb mariadb 8 mariadb
> +NULL NULL NULL NULL NULL NULL NULL NULL
> +NULL NULL NULL NULL NULL NULL NULL NULL
> +select db_row_hash_1 from t1;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 'field list'
> +select db_row_hash_2 from t1;
> +ERROR 42S22: Unknown column 'db_row_hash_2' in 'field list'
> +select db_row_hash_3 from t1;
> +ERROR 42S22: Unknown column 'db_row_hash_3' in 'field list'
> +#duplicate entry test;
> +#duplicate keys entry;
> +insert into t1 values(1,1,1,0,0,0,0,0);
> +ERROR 23000: Duplicate entry '1-1-1' for key 'a'
> +insert into t1 values(0,0,1,1,1,0,0,0);
> +ERROR 23000: Duplicate entry '1-1-1' for key 'c'
> +insert into t1 values(0,0,0,0,1,1,1,1);
> +ERROR 23000: Duplicate entry '1-1-1-1' for key 'e'
> +insert into t1 values(1,1,1,1,1,0,0,0);
> +ERROR 23000: Duplicate entry '1-1-1' for key 'a'
> +insert into t1 values(0,0,0,0,1,1,1,1);
> +ERROR 23000: Duplicate entry '1-1-1-1' for key 'e'
> +insert into t1 values(1,1,1,1,1,1,1,1);
> +ERROR 23000: Duplicate entry '1-1-1' for key 'a'
> +select db_row_hash_1,db_row_hash_2,db_row_hash_3,db_row_hash_4,db_row_hash_5 from t1;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 'field list'
> +alter table t1 drop column db_row_hash_1, drop column db_row_hash_2, drop column db_row_hash_3;
> +ERROR 42000: Can't DROP COLUMN `db_row_hash_1`; check that it exists
> +alter table t1 add column dg int , add column ef int , drop column db_row_hash_1;
> +ERROR 42000: Can't DROP COLUMN `db_row_hash_1`; check that it exists
> +alter table t1 modify column db_row_hash_1 int , modify column db_row_hash_2 int, modify column db_row_hash_3 int;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +alter table t1 add column ar int , add column rb int, modify column db_row_hash_1 int , modify column db_row_hash_3 int;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +alter table t1 change column db_row_hash_1 dsds int , change column db_row_hash_2 dfdf int , change column db_row_hash_3 gdfg int ;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +alter table t1 add column asd int, drop column a, change column db_row_hash_1 dsds int, change db_row_hash_3 fdfdfd int;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` int(11) DEFAULT NULL,
> + `c` varchar(2000) DEFAULT NULL,
> + `d` text DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` longblob DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `h` text DEFAULT NULL,
> + UNIQUE KEY `a` (`a`,`b`,`c`),
> + UNIQUE KEY `c` (`c`,`d`,`e`),
> + UNIQUE KEY `e` (`e`,`f`,`g`,`h`),
> + UNIQUE KEY `b` (`b`,`d`,`g`,`h`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +# add column named db_row_hash_*;
> +alter table t1 add column db_row_hash_7 int , add column db_row_hash_5 int,
> +add column db_row_hash_1 int, add column db_row_hash_2 int;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` int(11) DEFAULT NULL,
> + `c` varchar(2000) DEFAULT NULL,
> + `d` text DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` longblob DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `h` text DEFAULT NULL,
> + `db_row_hash_7` int(11) DEFAULT NULL,
> + `db_row_hash_5` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(65535),`b`,`c`),
> + UNIQUE KEY `c` (`c`,`d`(65535),`e`),
> + UNIQUE KEY `e` (`e`,`f`,`g`,`h`(65535)),
> + UNIQUE KEY `b` (`b`,`d`(65535),`g`,`h`(65535))
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 a 1 a A NULL 65535 NULL YES HASH_INDEX
> +t1 0 a 2 b A NULL NULL NULL YES HASH_INDEX
> +t1 0 a 3 c A NULL NULL NULL YES HASH_INDEX
> +t1 0 c 1 c A NULL NULL NULL YES HASH_INDEX
> +t1 0 c 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 c 3 e A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 1 e A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 2 f A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 4 h A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 1 b A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 4 h A 0 65535 NULL YES HASH_INDEX
> +alter table t1 drop column db_row_hash_7 , drop column db_row_hash_5 ,
> +drop column db_row_hash_1, drop column db_row_hash_2 ;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` int(11) DEFAULT NULL,
> + `c` varchar(2000) DEFAULT NULL,
> + `d` text DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` longblob DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `h` text DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(65535),`b`,`c`),
> + UNIQUE KEY `c` (`c`,`d`(65535),`e`),
> + UNIQUE KEY `e` (`e`,`f`,`g`,`h`(65535)),
> + UNIQUE KEY `b` (`b`,`d`(65535),`g`,`h`(65535))
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 a 1 a A NULL 65535 NULL YES HASH_INDEX
> +t1 0 a 2 b A NULL NULL NULL YES HASH_INDEX
> +t1 0 a 3 c A NULL NULL NULL YES HASH_INDEX
> +t1 0 c 1 c A NULL NULL NULL YES HASH_INDEX
> +t1 0 c 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 c 3 e A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 1 e A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 2 f A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 4 h A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 1 b A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 4 h A 0 65535 NULL YES HASH_INDEX
> +#try to change column names;
> +alter table t1 change column a aa blob , change column b bb blob , change column d dd blob;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `aa` blob DEFAULT NULL,
> + `bb` blob DEFAULT NULL,
> + `c` varchar(2000) DEFAULT NULL,
> + `dd` blob DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` longblob DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `h` text DEFAULT NULL,
> + UNIQUE KEY `e` (`e`,`f`,`g`,`h`(65535)),
> + UNIQUE KEY `a` (`aa`(65535),`bb`,`c`),
> + UNIQUE KEY `c` (`c`,`dd`(65535),`e`),
> + UNIQUE KEY `b` (`bb`,`dd`(65535),`g`,`h`(65535))
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 e 1 e A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 2 f A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 3 g A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 4 h A NULL 65535 NULL YES HASH_INDEX
> +t1 0 a 1 aa A 0 65535 NULL YES HASH_INDEX
> +t1 0 a 2 bb A 0 NULL NULL YES HASH_INDEX
> +t1 0 a 3 c A 0 NULL NULL YES HASH_INDEX
> +t1 0 c 1 c A 0 NULL NULL YES HASH_INDEX
> +t1 0 c 2 dd A 0 65535 NULL YES HASH_INDEX
> +t1 0 c 3 e A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 1 bb A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 2 dd A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 4 h A 0 65535 NULL YES HASH_INDEX
> +alter table t1 change column aa a blob , change column bb b blob , change column dd d blob;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` blob DEFAULT NULL,
> + `c` varchar(2000) DEFAULT NULL,
> + `d` blob DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` longblob DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `h` text DEFAULT NULL,
> + UNIQUE KEY `e` (`e`,`f`,`g`,`h`(65535)),
> + UNIQUE KEY `a` (`a`(65535),`b`,`c`),
> + UNIQUE KEY `c` (`c`,`d`(65535),`e`),
> + UNIQUE KEY `b` (`b`,`d`(65535),`g`,`h`(65535))
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 e 1 e A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 2 f A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 3 g A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 4 h A NULL 65535 NULL YES HASH_INDEX
> +t1 0 a 1 a A 0 65535 NULL YES HASH_INDEX
> +t1 0 a 2 b A 0 NULL NULL YES HASH_INDEX
> +t1 0 a 3 c A 0 NULL NULL YES HASH_INDEX
> +t1 0 c 1 c A 0 NULL NULL YES HASH_INDEX
> +t1 0 c 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 c 3 e A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 1 b A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 4 h A 0 65535 NULL YES HASH_INDEX
> +#now we will change the data type to int and varchar limit so that we no longer require hash_index;
> +#on key a_b_c;
> +alter table t1 modify column a varchar(20) , modify column b varchar(20) , modify column c varchar(20);
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` varchar(20) DEFAULT NULL,
> + `b` varchar(20) DEFAULT NULL,
> + `c` varchar(20) DEFAULT NULL,
> + `d` blob DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` longblob DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `h` text DEFAULT NULL,
> + UNIQUE KEY `e` (`e`,`f`,`g`,`h`(65535)),
> + UNIQUE KEY `a` (`a`,`b`,`c`),
> + UNIQUE KEY `c` (`c`,`d`(65535),`e`),
> + UNIQUE KEY `b` (`b`,`d`(65535),`g`,`h`(65535))
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 e 1 e A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 2 f A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 3 g A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 4 h A NULL 65535 NULL YES HASH_INDEX
> +t1 0 a 1 a A NULL NULL NULL YES BTREE
> +t1 0 a 2 b A NULL NULL NULL YES BTREE
> +t1 0 a 3 c A 0 NULL NULL YES BTREE
> +t1 0 c 1 c A 0 NULL NULL YES HASH_INDEX
> +t1 0 c 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 c 3 e A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 1 b A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 4 h A 0 65535 NULL YES HASH_INDEX
> +#change it back;
> +alter table t1 modify column a blob , modify column b blob , modify column c blob;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` blob DEFAULT NULL,
> + `c` blob DEFAULT NULL,
> + `d` blob DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` longblob DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `h` text DEFAULT NULL,
> + UNIQUE KEY `e` (`e`,`f`,`g`,`h`(65535)),
> + UNIQUE KEY `a` (`a`,`b`,`c`),
> + UNIQUE KEY `c` (`c`,`d`(65535),`e`),
> + UNIQUE KEY `b` (`b`,`d`(65535),`g`,`h`(65535))
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 e 1 e A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 2 f A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 3 g A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 4 h A NULL 65535 NULL YES HASH_INDEX
> +t1 0 a 1 a A 0 NULL NULL YES HASH_INDEX
> +t1 0 a 2 b A 0 NULL NULL YES HASH_INDEX
> +t1 0 a 3 c A 0 NULL NULL YES HASH_INDEX
> +t1 0 c 1 c A 0 NULL NULL YES HASH_INDEX
> +t1 0 c 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 c 3 e A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 1 b A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 4 h A 0 65535 NULL YES HASH_INDEX
> +#try to delete blob column in unique;
> +truncate table t1;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` blob DEFAULT NULL,
> + `c` blob DEFAULT NULL,
> + `d` blob DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` longblob DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `h` text DEFAULT NULL,
> + UNIQUE KEY `e` (`e`,`f`,`g`,`h`(65535)),
> + UNIQUE KEY `a` (`a`,`b`,`c`),
> + UNIQUE KEY `c` (`c`,`d`(65535),`e`),
> + UNIQUE KEY `b` (`b`,`d`(65535),`g`,`h`(65535))
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 e 1 e A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 2 f A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 3 g A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 4 h A NULL 65535 NULL YES HASH_INDEX
> +t1 0 a 1 a A 0 NULL NULL YES HASH_INDEX
> +t1 0 a 2 b A 0 NULL NULL YES HASH_INDEX
> +t1 0 a 3 c A 0 NULL NULL YES HASH_INDEX
> +t1 0 c 1 c A 0 NULL NULL YES HASH_INDEX
> +t1 0 c 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 c 3 e A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 1 b A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 4 h A 0 65535 NULL YES HASH_INDEX
> +#now try to delete keys;
> +alter table t1 drop key c, drop key e;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` blob DEFAULT NULL,
> + `c` blob DEFAULT NULL,
> + `d` blob DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` longblob DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `h` text DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(65535),`b`(65535),`c`(65535)),
> + UNIQUE KEY `b` (`b`(65535),`d`(65535),`g`,`h`(65535))
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 a 1 a A NULL 65535 NULL YES HASH_INDEX
> +t1 0 a 2 b A NULL 65535 NULL YES HASH_INDEX
> +t1 0 a 3 c A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 1 b A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 4 h A 0 65535 NULL YES HASH_INDEX
> +drop table t1;
> +#now alter table containing some data basically some tests with ignore;
> +create table t1 (a blob);
> +insert into t1 values(1),(2),(3);
> +#normal alter table;
> +alter table t1 add unique key(a);
> +alter table t1 drop key a;
> +truncate table t1;
> +insert into t1 values(1),(1),(2),(2),(3);
> +alter table t1 add unique key(a);
> +ERROR 23000: Duplicate entry '1' for key 'a'
> +alter ignore table t1 add unique key(a);
> +select * from t1;
> +a
> +1
> +2
> +3
> +insert into t1 values(1);
> +ERROR 23000: Duplicate entry '1' for key 'a'
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(65535))
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 a 1 a A NULL 65535 NULL YES HASH_INDEX
> +drop table t1;
> +#Now with multiple keys;
> +create table t1(a blob , b blob, c blob , d blob , e int);
> +insert into t1 values (1,1,1,1,1);
> +insert into t1 values (1,1,1,1,1);
> +insert into t1 values (2,1,1,1,1);
> +insert into t1 values (2,2,2,2,2);
> +insert into t1 values (3,3,4,4,4);
> +insert into t1 values (4,4,4,4,4);
> +alter table t1 add unique key(a,c), add unique key(b,d), add unique key(e);
> +ERROR 23000: Duplicate entry '1-1' for key 'a'
> +alter ignore table t1 add unique key(a,c), add unique key(b,d), add unique key(e);
> +select * from t1;
> +a b c d e
> +1 1 1 1 1
> +2 2 2 2 2
> +3 3 4 4 4
> +insert into t1 values (1,12,1,13,14);
> +ERROR 23000: Duplicate entry '1-1' for key 'a'
> +insert into t1 values (12,1,14,1,14);
> +ERROR 23000: Duplicate entry '1-1' for key 'b'
> +insert into t1 values (13,12,13,14,4);
> +ERROR 23000: Duplicate entry '4' for key 'e'
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` blob DEFAULT NULL,
> + `c` blob DEFAULT NULL,
> + `d` blob DEFAULT NULL,
> + `e` int(11) DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(65535),`c`(65535)),
> + UNIQUE KEY `b` (`b`(65535),`d`(65535)),
> + UNIQUE KEY `e` (`e`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 a 1 a A NULL 65535 NULL YES HASH_INDEX
> +t1 0 a 2 c A NULL 65535 NULL YES HASH_INDEX
> +t1 0 b 1 b A NULL 65535 NULL YES HASH_INDEX
> +t1 0 b 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 e 1 e A 0 NULL NULL YES BTREE
> +drop table t1;
> +#visibility of db_row_hash
> +create table t1 (a blob unique , b blob unique);
> +desc t1;
> +Field Type Null Key Default Extra
> +a blob YES UNI NULL
> +b blob YES UNI NULL
> +insert into t1 values(1,19);
> +insert into t1 values(2,29);
> +insert into t1 values(3,39);
> +insert into t1 values(4,49);
> +create table t2 (DB_ROW_HASH_1 int, DB_ROW_HASH_2 int);
> +insert into t2 values(11,1);
> +insert into t2 values(22,2);
> +insert into t2 values(33,3);
> +insert into t2 values(44,4);
> +select * from t1;
> +a b
> +1 19
> +2 29
> +3 39
> +4 49
> +select * from t2;
> +DB_ROW_HASH_1 DB_ROW_HASH_2
> +11 1
> +22 2
> +33 3
> +44 4
> +select DB_ROW_HASH_1, DB_ROW_HASH_2 from t1;
> +ERROR 42S22: Unknown column 'DB_ROW_HASH_1' in 'field list'
> +#bug
> +select DB_ROW_HASH_1, DB_ROW_HASH_2 from t1,t2;
> +DB_ROW_HASH_1 DB_ROW_HASH_2
> +11 1
> +11 1
> +11 1
> +11 1
> +22 2
> +22 2
> +22 2
> +22 2
> +33 3
> +33 3
> +33 3
> +33 3
> +44 4
> +44 4
> +44 4
> +44 4
> +select * from t1 where DB_ROW_HASH_1 in (select DB_ROW_HASH_1 from t2);
> +ERROR 42S22: Unknown column 'DB_ROW_HASH_1' in 'IN/ALL/ANY subquery'
> +select DB_ROW_HASH_1, DB_ROW_HASH_2 from t1,t2 where DB_ROW_HASH_1 in (select DB_ROW_HASH_1 from t2);
> +DB_ROW_HASH_1 DB_ROW_HASH_2
> +11 1
> +22 2
> +33 3
> +44 4
> +11 1
> +22 2
> +33 3
> +44 4
> +11 1
> +22 2
> +33 3
> +44 4
> +11 1
> +22 2
> +33 3
> +44 4
> +select * from t2 where DB_ROW_HASH_1 in (select DB_ROW_HASH_1 from t1);
> +DB_ROW_HASH_1 DB_ROW_HASH_2
> +11 1
> +22 2
> +33 3
> +44 4
> +select DB_ROW_HASH_1 from t1,t2 where t1.DB_ROW_HASH_1 = t2.DB_ROW_HASH_2;
> +ERROR 42S22: Unknown column 't1.DB_ROW_HASH_1' in 'where clause'
> +select DB_ROW_HASH_1 from t1 inner join t2 on t1.a = t2.DB_ROW_HASH_2;
> +DB_ROW_HASH_1
> +11
> +22
> +33
> +44
> +drop table t1,t2;
> +#very long blob entry;
> +SET @@GLOBAL.max_allowed_packet=67108864;
> +connect 'newcon', localhost, root,,;
> +connection newcon;
> +show variables like 'max_allowed_packet';
> +Variable_name Value
> +max_allowed_packet 67108864
> +create table t1(a longblob unique, b longblob , c longblob , unique(b,c));
> +desc t1;
> +Field Type Null Key Default Extra
> +a longblob YES UNI NULL
> +b longblob YES MUL NULL
> +c longblob YES NULL
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` longblob DEFAULT NULL,
> + `b` longblob DEFAULT NULL,
> + `c` longblob DEFAULT NULL,
> + UNIQUE KEY `a` (`a`),
> + UNIQUE KEY `b` (`b`,`c`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 a 1 a A NULL NULL NULL YES HASH_INDEX
> +t1 0 b 1 b A NULL NULL NULL YES HASH_INDEX
> +t1 0 b 2 c A 0 NULL NULL YES HASH_INDEX
> +insert into t1 values(concat(repeat('sachin',10000000),'1'),concat(repeat('sachin',10000000),'1'),
> +concat(repeat('sachin',10000000),'1'));
> +insert into t1 values(concat(repeat('sachin',10000000),'2'),concat(repeat('sachin',10000000),'2'),
> +concat(repeat('sachin',10000000),'1'));
> +insert into t1 values(concat(repeat('sachin',10000000),'2'),concat(repeat('sachin',10000000),'2'),
> +concat(repeat('sachin',10000000),'4'));
> +ERROR 23000: Duplicate entry 'sachinsachinsachinsachinsachinsachinsachinsachinsachinsachinsach' for key 'a'
> +insert into t1 values(concat(repeat('sachin',10000000),'3'),concat(repeat('sachin',10000000),'1'),
> +concat(repeat('sachin',10000000),'1'));
> +ERROR 23000: Duplicate entry 'sachinsachinsachinsachinsachinsachinsachinsachinsachinsachinsach' for key 'b'
> +drop table t1;
> +#long key unique with different key length
> +create table t1(a blob, unique(a(3000)));
> +desc t1;
> +Field Type Null Key Default Extra
> +a blob YES UNI NULL
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 a 1 a A NULL 3000 NULL YES HASH_INDEX
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(3000))
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +insert into t1 value(concat(repeat('s',3000),'1'));
> +insert into t1 value(concat(repeat('s',3000),'2'));
> +ERROR 23000: Duplicate entry 'ssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss' for key 'a'
> +insert into t1 value(concat(repeat('a',3000),'2'));
> +drop table t1;
> +create table t1(a varchar(4000), b longblob , c varchar(5000), d longblob,
> +unique(a(3500), b(1000000)), unique(c(4500), d(10000000)));
> +desc t1;
> +Field Type Null Key Default Extra
> +a varchar(4000) YES MUL NULL
> +b longblob YES NULL
> +c varchar(5000) YES MUL NULL
> +d longblob YES NULL
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` varchar(4000) DEFAULT NULL,
> + `b` longblob DEFAULT NULL,
> + `c` varchar(5000) DEFAULT NULL,
> + `d` longblob DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(3500),`b`),
> + UNIQUE KEY `c` (`c`(4500),`d`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 a 1 a A NULL 3500 NULL YES HASH_INDEX
> +t1 0 a 2 b A NULL NULL NULL YES HASH_INDEX
> +t1 0 c 1 c A 0 4500 NULL YES HASH_INDEX
> +t1 0 c 2 d A 0 NULL NULL YES HASH_INDEX
> +drop table t1;
> +disconnect newcon;
> +connection default;
> +SET @@GLOBAL.max_allowed_packet=4194304;
> +#ext bug
> +create table t1(a int primary key, b blob unique, c int, d blob , index(c));
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` int(11) NOT NULL,
> + `b` blob DEFAULT NULL,
> + `c` int(11) DEFAULT NULL,
> + `d` blob DEFAULT NULL,
> + PRIMARY KEY (`a`),
> + UNIQUE KEY `b` (`b`),
> + KEY `c` (`c`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +insert into t1 values(1,23,1,33);
> +insert into t1 values(2,23,1,33);
> +ERROR 23000: Duplicate entry '23' for key 'b'
> +drop table t1;
> +create table t2 (a blob unique , c int , index(c));
> +show create table t2;
> +Table Create Table
> +t2 CREATE TABLE `t2` (
> + `a` blob DEFAULT NULL,
> + `c` int(11) DEFAULT NULL,
> + UNIQUE KEY `a` (`a`),
> + KEY `c` (`c`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +insert into t2 values(1,1);
> +insert into t2 values(2,1);
> +drop table t2;
> +#not null test //todo solve warnings
> +create table t1(a blob unique not null);
> +desc t1;
> +Field Type Null Key Default Extra
> +a blob NO PRI NULL
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob NOT NULL,
> + UNIQUE KEY `a` (`a`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +insert into t1 values(1);
> +insert into t1 values(3);
> +insert into t1 values(1);
> +ERROR 23000: Duplicate entry '1' for key 'a'
> +drop table t1;
> +create table t1(a int primary key, b blob unique , c blob unique not null);
> +insert into t1 values(1,1,1);
> +insert into t1 values(2,1,2);
> +ERROR 23000: Duplicate entry '1' for key 'b'
> +insert into t1 values(3,3,1);
> +ERROR 23000: Duplicate entry '1' for key 'c'
> +drop table t1;
> +create table t1 (a blob unique not null, b blob not null, c blob not null, unique(b,c));
> +desc t1;
> +Field Type Null Key Default Extra
> +a blob NO PRI NULL
> +b blob NO MUL NULL
> +c blob NO NULL
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob NOT NULL,
> + `b` blob NOT NULL,
> + `c` blob NOT NULL,
> + UNIQUE KEY `a` (`a`),
> + UNIQUE KEY `b` (`b`,`c`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +insert into t1 values (1, 2, 3);
> +insert into t1 values (2, 1, 3);
> +insert into t1 values (2, 1, 3);
> +ERROR 23000: Duplicate entry '2' for key 'a'
> +drop table t1;
> +set @@GLOBAL.max_allowed_packet= @allowed_packet;
> diff --git a/sql/field.h b/sql/field.h
> index b6f28808e2e..13eea28fe8a 100644
> --- a/sql/field.h
> +++ b/sql/field.h
> @@ -539,6 +539,7 @@ class Virtual_column_info: public Sql_alloc
> Item *expr;
> LEX_CSTRING name; /* Name of constraint */
> uint flags;
> + LEX_CSTRING hash_expr;
It is not a good idea to put hash-unique-constraint-specific
data into the common Virtual_column_info.
Luckily, you don't need it. Just as the "current_timestamp()" expression
is generated on the fly as needed in parse_vcol_defs(), you can
generate the hash expression there too; there is no need to do it
in TABLE_SHARE::init_from_binary_frm_image().
So, just remove hash_expr.
>
> Virtual_column_info()
> : vcol_type((enum_vcol_info_type)VCOL_TYPE_NONE),
> @@ -681,7 +688,7 @@ class Field: public Value_source
> GEOM_MULTIPOINT = 4, GEOM_MULTILINESTRING = 5, GEOM_MULTIPOLYGON = 6,
> GEOM_GEOMETRYCOLLECTION = 7
> };
> - enum imagetype { itRAW, itMBR};
> + enum imagetype { itRAW, itMBR, itHASH};
itHASH isn't used anywhere in your patch :)
>
> utype unireg_check;
> uint32 field_length; // Length of field
> diff --git a/sql/field.cc b/sql/field.cc
> index dc854826ed6..f75f52141e3 100644
> --- a/sql/field.cc
> +++ b/sql/field.cc
> @@ -9822,7 +9822,7 @@ int Field_bit::key_cmp(const uchar *str, uint length)
> }
>
>
> -int Field_bit::cmp_offset(uint row_offset)
> +int Field_bit::cmp_offset(long row_offset)
Maybe my_ptrdiff_t?
> {
> if (bit_len)
> {
> diff --git a/sql/item_func.h b/sql/item_func.h
> index 3b6cb4ceeac..0391de0c5ad 100644
> --- a/sql/item_func.h
> +++ b/sql/item_func.h
> @@ -803,6 +803,19 @@ class Item_long_func: public Item_int_func
> };
>
>
> +class Item_func_hash: public Item_int_func
> +{
> +public:
> + Item_func_hash(THD *thd, List<Item> &item): Item_int_func(thd, item)
> + {}
> + longlong val_int();
> + void fix_length_and_dec();
> + const Type_handler *type_handler() const { return &type_handler_long; }
> + Item *get_copy(THD *thd)
> + { return get_item_copy<Item_func_hash>(thd, this); }
> + const char *func_name() const { return "HASH"; }
Is it a user-visible function? Can one write
SELECT HASH(a) FROM t1 ?
Looks like yes, so:
1. Please add tests for it.
2. We need to document exactly which hash function is used.
3. It may be better to call it MARIADB_HASH, to highlight its internal-use
nature and home-baked hash function, and to reduce the chance of
a collision with user-defined functions.
4. An even better approach would be not to create any user-visible function
at all. Currently unpack_vcol_info_from_frm() calls the parser and
then initializes vcol_info and calls fix_fields. If you split it in two,
you can do `new Item_func_hash` manually and skip the first part
of unpack_vcol_info_from_frm(). Then HASH doesn't need to be user-visible.
As a minor optimization, the same can be done for "current_timestamp()"
> +};
> +
> class Item_longlong_func: public Item_int_func
> {
> public:
> diff --git a/sql/sql_show.cc b/sql/sql_show.cc
> index 4923c628bf9..c620ea23165 100644
> --- a/sql/sql_show.cc
> +++ b/sql/sql_show.cc
> @@ -2300,7 +2300,7 @@ int show_create_table(THD *thd, TABLE_LIST *table_list, String *packet,
> */
> packet->append(STRING_WITH_LEN("PRIMARY KEY"));
> }
> - else if (key_info->flags & HA_NOSAME)
> + else if (key_info->flags & HA_NOSAME || key_info->flags & HA_LONG_UNIQUE_HASH)
Is it needed? As far as I can see, you always set HA_NOSAME
whenever you set HA_LONG_UNIQUE_HASH.
> packet->append(STRING_WITH_LEN("UNIQUE KEY "));
> else if (key_info->flags & HA_FULLTEXT)
> packet->append(STRING_WITH_LEN("FULLTEXT KEY "));
> diff --git a/include/my_base.h b/include/my_base.h
> index c36072c0bfa..ba17f8d80dc 100644
> --- a/include/my_base.h
> +++ b/include/my_base.h
> @@ -290,6 +291,11 @@ enum ha_base_keytype {
> #define HA_KEY_HAS_PART_KEY_SEG 65536
> /* Internal Flag Can be calcaluted */
> #define HA_INVISIBLE_KEY 2<<18
> +/*
> + Some more flags for keys these are not stored in
> + frm it is calculated on the fly in init_from_binary_frm_image
> +*/
1. I don't think you need to make the same comment more verbose every time
you repeat it :) Just say, like
#define HA_INVISIBLE_KEY 2<<18 /* this is calculated too */
#define HA_LONG_UNIQUE_HASH 2<<19 /* this is calculated too */
2. Do you need a separate flag for that, just HA_INVISIBLE_KEY isn't enough?
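For reference, 2<<18 is the same value as 1<<19, so the two flags do land on distinct bits. A trivial standalone check of that arithmetic (names reused from the patch purely for illustration):

```cpp
#include <cassert>
#include <cstdint>

// Same arithmetic as the patch's defines, in standalone form.
constexpr uint32_t HA_INVISIBLE_KEY    = 2 << 18;  // == 1 << 19
constexpr uint32_t HA_LONG_UNIQUE_HASH = 2 << 19;  // == 1 << 20

// The two flags occupy distinct bits, so they can be set and tested independently.
constexpr bool distinct_bits = (HA_INVISIBLE_KEY & HA_LONG_UNIQUE_HASH) == 0;
```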
> +#define HA_LONG_UNIQUE_HASH 2<<19
> /* Automatic bits in key-flag */
>
> #define HA_SPACE_PACK_USED 4 /* Test for if SPACE_PACK used */
> @@ -317,6 +323,13 @@ enum ha_base_keytype {
> #define HA_BIT_PART 1024
> #define HA_CAN_MEMCMP 2048 /* internal, never stored in frm */
>
> +/*
> + Used for key parts whole length is greater then > file->max_key_part_length
> + Only used for HA_LONG_UNIQUE_HASH keys
> +*/ //TODO a better name ??
> +#define HA_HASH_KEY_PART_FLAG 4096
> +/* Field need to be frees externally */
> +#define HA_FIELD_EX_FREED 8192
Okay, this is a weird name, agree. But it is not used anywhere, so
there's an easy solution :)
> /* optionbits for database */
> #define HA_OPTION_PACK_RECORD 1U
> #define HA_OPTION_PACK_KEYS 2U
> diff --git a/sql/table.h b/sql/table.h
> index 785fd9f3427..9ed7fcd7beb 100644
> --- a/sql/table.h
> +++ b/sql/table.h
> @@ -346,9 +346,39 @@ enum field_visibility_t {
> INVISIBLE_FULL
> };
>
> -#define INVISIBLE_MAX_BITS 3
> +#define INVISIBLE_MAX_BITS 3
> +/* We will store the info into 3rd bit if field is hash field */
> +#define HASH_FIELD_MASK 15
> +#define HASH_FIELD_MASK_SHIFT 4
> +#define HA_HASH_FIELD_LENGTH 8
> +#define HA_HASH_KEY_LENGTH_WITHOUT_NULL 8
> +#define HA_HASH_KEY_LENGTH_WITH_NULL 9
> +//TODO is this correct ? how about size of char ptr on 32/16 bit machine?
I don't understand that, sorry
> +#define HA_HASH_KEY_PART_LENGTH 4 + 8 // 4 for length , 8 for portable size of char ptr
you almost always should use parentheses in macro expressions
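To show why: macro expansion is textual, so surrounding operators bind into the expansion. A minimal standalone example (macro names are made up for illustration):

```cpp
#include <cassert>

#define KEY_PART_LEN_BAD  4 + 8    // expands textually, no grouping
#define KEY_PART_LEN_GOOD (4 + 8)  // parenthesized, evaluates as intended

// 2 * KEY_PART_LEN_BAD expands to 2 * 4 + 8, i.e. 16, not the intended 24.
constexpr int bad  = 2 * KEY_PART_LEN_BAD;
constexpr int good = 2 * KEY_PART_LEN_GOOD;
```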
>
> +const LEX_CSTRING ha_hash_str {STRING_WITH_LEN("HASH")};
???
you create a copy of the ha_hash_str symbol in every .o file that
includes table.h?
>
> +
> +int find_field_pos_in_hash(Item *hash_item, const char * field_name);
> +
> +int fields_in_hash_str(Item *hash_item);
> +
> +Field * field_ptr_in_hash_str(Item *hash_item, int index);
> +
> +int get_key_part_length(KEY *keyinfo, int index);
> +
> +void calc_hash_for_unique(ulong &nr1, ulong &nr2, String *str);
> +
> +void create_update_handler(THD *thd, TABLE *table);
> +
> +void delete_update_handler(THD *thd, TABLE *table);
> +
> +void setup_table_hash(TABLE *table);
> +
> +void re_setup_table(TABLE *table);
> +
> +int get_hash_key(THD *thd, TABLE *table, handler *h, uint key_index, uchar *rec_buf,
> + uchar *key_buff);
> /**
> Category of table found in the table share.
> */
> @@ -1094,6 +1124,17 @@ struct TABLE
> THD *in_use; /* Which thread uses this */
>
> uchar *record[3]; /* Pointer to records */
> + /* record buf to resolve hash collisions for long UNIQUE constraints */
> + uchar *check_unique_buf;
> + handler *update_handler; /* Handler used in case of update */
> + /*
> + In the case of write row for long unique we are unable of find
> + Whick key is voilated. Because we in case of duplicate hash we never reach
> + handler write_row function. So print_error will always print that
> + key 0 is voilated. We store which key is voilated in this variable
> + by default this should be initialized to -1
typos :)
"unable to find", "which", "violated"
> + */
> + int dupp_hash_key;
> uchar *write_row_record; /* Used as optimisation in
> THD::write_row */
> uchar *insert_values; /* used by INSERT ... UPDATE */
> @@ -2898,6 +2939,7 @@ void append_unescaped(String *res, const char *pos, size_t length);
> void prepare_frm_header(THD *thd, uint reclength, uchar *fileinfo,
> HA_CREATE_INFO *create_info, uint keys, KEY *key_info);
> const char *fn_frm_ext(const char *name);
> +void calc_hash_for_unique(ulong &nr1, ulong &nr2, String *str);
you have this prototype twice in table.h
>
> /* Check that the integer is in the internal */
> static inline int set_zone(int nr,int min_zone,int max_zone)
> diff --git a/sql/table.cc b/sql/table.cc
> index 73b1a6bd9b2..8cd4db2844d 100644
> --- a/sql/table.cc
> +++ b/sql/table.cc
> @@ -747,7 +748,13 @@ static bool create_key_infos(const uchar *strpos, const uchar *frm_image_end,
> if (i == 0)
> {
> ext_key_parts+= (share->use_ext_keys ? first_keyinfo->user_defined_key_parts*(keys-1) : 0);
> + /*
> + Some keys can be HA_LONG_UNIQUE_HASH , but we do not know at this point ,
> + how many ?, but will always be less than or equal to total num of
> + keys. Each HA_LONG_UNIQUE_HASH key require one extra key_part in which
> + it stored hash. On safe side we will allocate memory for each key.
> + */
> - n_length=keys * sizeof(KEY) + ext_key_parts * sizeof(KEY_PART_INFO);
> + n_length=keys * sizeof(KEY) + (ext_key_parts +keys) * sizeof(KEY_PART_INFO);
Hmm, why wouldn't you store the number of HA_KEY_ALG_LONG_HASH keys in
EXTRA2_LONG_UNIQUES ? Then you'll know it here.
> if (!(keyinfo= (KEY*) alloc_root(&share->mem_root,
> n_length + len)))
> return 1;
> @@ -798,6 +805,14 @@ static bool create_key_infos(const uchar *strpos, const uchar *frm_image_end,
> }
> key_part->store_length=key_part->length;
> }
> + if (keyinfo->algorithm == HA_KEY_ALG_LONG_HASH)
> + {
> + keyinfo->flags|= HA_LONG_UNIQUE_HASH | HA_NOSAME;
> + keyinfo->key_length= 0;
> + share->ext_key_parts++;
> + // This empty key_part for storing Hash
> + key_part++;
> + }
so, you write keyinfo's for HA_LONG_UNIQUE_HASH keys into the frm?
why don't you write keysegs for them? I mean, it is not very logical.
I'd have thought you wouldn't write either keyinfos or keysegs. Or, okay, you could
write both. But only keyinfos and no keysegs? That's strange
>
> /*
> Add primary key to end of extended keys for non unique keys for
> @@ -1143,13 +1159,21 @@ bool parse_vcol_defs(THD *thd, MEM_ROOT *mem_root, TABLE *table,
> pos+= expr_length;
> }
>
> - /* Now, initialize CURRENT_TIMESTAMP fields */
> + /* Now, initialize CURRENT_TIMESTAMP and UNIQUE_INDEX_HASH_FIELD fields */
> for (field_ptr= table->field; *field_ptr; field_ptr++)
> {
> Field *field= *field_ptr;
> - if (field->has_default_now_unireg_check())
> + if (field->vcol_info && (length = field->vcol_info->hash_expr.length))
> {
> expr_str.length(parse_vcol_keyword.length);
> + expr_str.append((char*)field->vcol_info->hash_expr.str, length);
> + vcol= unpack_vcol_info_from_frm(thd, mem_root, table, &expr_str,
> + &(field->vcol_info), error_reported);
see above about not going through the parser for hash keys.
> + *(vfield_ptr++)= *field_ptr;
> +
> + }
> + if (field->has_default_now_unireg_check())
> + {
> expr_str.append(STRING_WITH_LEN("current_timestamp("));
> expr_str.append_ulonglong(field->decimals());
> expr_str.append(')');
> @@ -2106,7 +2132,8 @@ int TABLE_SHARE::init_from_binary_frm_image(THD *thd, bool write,
> uchar flags= *extra2_field_flags++;
> if (flags & VERS_OPTIMIZED_UPDATE)
> reg_field->flags|= VERS_UPDATE_UNVERSIONED_FLAG;
> -
> + if (flags & EXTRA2_LONG_UNIQUE_HASH_FIELD)
> + reg_field->flags|= LONG_UNIQUE_HASH_FIELD;
so, you write LONG_UNIQUE_HASH_FIELD fields to frm too. Why?
> reg_field->invisible= f_visibility(flags);
> }
> if (reg_field->invisible == INVISIBLE_USER)
> @@ -2350,6 +2438,8 @@ int TABLE_SHARE::init_from_binary_frm_image(THD *thd, bool write,
> key_part= keyinfo->key_part;
> uint key_parts= share->use_ext_keys ? keyinfo->ext_key_parts :
> keyinfo->user_defined_key_parts;
> + if (keyinfo->flags & HA_LONG_UNIQUE_HASH)
> + key_parts++;
key_parts++ ? Doesn't your HA_LONG_UNIQUE_HASH key have only one part, always?
> for (i=0; i < key_parts; key_part++, i++)
> {
> Field *field;
> @@ -2363,7 +2453,16 @@ int TABLE_SHARE::init_from_binary_frm_image(THD *thd, bool write,
>
> field= key_part->field= share->field[key_part->fieldnr-1];
> key_part->type= field->key_type();
> + if (keyinfo->flags & HA_LONG_UNIQUE_HASH
> + &&(key_part->length > handler_file->max_key_part_length()
> + || key_part->length == 0))
1. fix the spacing and the indentation here, please
2. what should happen if HA_LONG_UNIQUE_HASH flag is set,
but the key_part->length is small ?
> + {
> + key_part->key_part_flag= HA_HASH_KEY_PART_FLAG;
> + key_part->store_length= HA_HASH_KEY_PART_LENGTH;
> + }
> + /* Invisible Full is currently used by long uniques */
> - if (field->invisible > INVISIBLE_USER && !field->vers_sys_field())
> + if ((field->invisible == INVISIBLE_USER ||
> + field->invisible == INVISIBLE_SYSTEM )&& !field->vers_sys_field())
why is this change?
> keyinfo->flags |= HA_INVISIBLE_KEY;
> if (field->null_ptr)
> {
> @@ -2428,7 +2527,8 @@ int TABLE_SHARE::init_from_binary_frm_image(THD *thd, bool write,
> field->part_of_sortkey= share->keys_in_use;
> }
> }
> - if (field->key_length() != key_part->length)
> + if (field->key_length() != key_part->length &&
> + !(keyinfo->flags & HA_LONG_UNIQUE_HASH))
why is that?
> {
> #ifndef TO_BE_DELETED_ON_PRODUCTION
> if (field->type() == MYSQL_TYPE_NEWDECIMAL)
> @@ -2470,7 +2570,8 @@ int TABLE_SHARE::init_from_binary_frm_image(THD *thd, bool write,
> if (!(key_part->key_part_flag & (HA_BLOB_PART | HA_VAR_LENGTH_PART |
> HA_BIT_PART)) &&
> key_part->type != HA_KEYTYPE_FLOAT &&
> - key_part->type == HA_KEYTYPE_DOUBLE)
> + key_part->type == HA_KEYTYPE_DOUBLE &&
> + !(keyinfo->flags & HA_LONG_UNIQUE_HASH))
why is that?
> key_part->key_part_flag|= HA_CAN_MEMCMP;
> }
> keyinfo->usable_key_parts= usable_parts; // Filesort
> @@ -8346,6 +8454,377 @@ double KEY::actual_rec_per_key(uint i)
> }
>
>
> +/*
> + find out the field positoin in hash_str()
> + position starts from 0
> + else return -1;
> +*/
> +int find_field_pos_in_hash(Item *hash_item, const char * field_name)
> +{
> + Item_func_or_sum * temp= static_cast<Item_func_or_sum *>(hash_item);
> + Item_args * t_item= static_cast<Item_args *>(temp);
> + uint arg_count= t_item->argument_count();
> + Item ** arguments= t_item->arguments();
> + Field * t_field;
> +
> + for (uint j=0; j < arg_count; j++)
> + {
> + DBUG_ASSERT(arguments[j]->type() == Item::FIELD_ITEM ||
> + arguments[j]->type() == Item::FUNC_ITEM);
> + if (arguments[j]->type() == Item::FIELD_ITEM)
> + {
> + t_field= static_cast<Item_field *>(arguments[j])->field;
> + }
> + else
> + {
> + Item_func_left *fnc= static_cast<Item_func_left *>(arguments[j]);
> + t_field= static_cast<Item_field *>(fnc->arguments()[0])->field;
> + }
> + if (!my_strcasecmp(system_charset_info, t_field->field_name.str, field_name))
> + return j;
> + }
> + return -1;
> +}
> +
> +/*
> + find total number of field in hash_str
> +*/
> +int fields_in_hash_str(Item * hash_item)
can be static
> +{
> + Item_func_or_sum * temp= static_cast<Item_func_or_sum *>(hash_item);
> + Item_args * t_item= static_cast<Item_args *>(temp);
> + return t_item->argument_count();
> +}
> +
> +/*
> + Returns fields ptr given by hash_str index
> + Index starts from 0
> +*/
> +Field * field_ptr_in_hash_str(Item *hash_item, int index)
> +{
> + Item_func_or_sum * temp= static_cast<Item_func_or_sum *>(hash_item);
> + Item_args * t_item= static_cast<Item_args *>(temp);
> + return static_cast<Item_field *>(t_item->arguments()[index])->field;
> +}
> +
> +//NO longer Needed
remove it, then :)
> +int get_key_part_length(KEY *keyinfo, int index)
> +{
> + DBUG_ASSERT(keyinfo->flags & HA_LONG_UNIQUE_HASH);
> + TABLE *tbl= keyinfo->table;
> + Item *h_item= keyinfo->key_part->field->vcol_info->expr;
> + Field *fld= field_ptr_in_hash_str(h_item, index);
> + if (!index)
> + return fld->pack_length()+HA_HASH_KEY_LENGTH_WITH_NULL+1;
> + return fld->pack_length()+1;
> +}
> +/**
> + @brief clone of current handler.
> + Creates a clone of handler used in update for
> + unique hash key.
> + @param thd Thread Object
> + @param table Table Object
> + @return handler object
> +*/
> +void create_update_handler(THD *thd, TABLE *table)
better, clone_handler_for_update
> +{
> + handler *update_handler= NULL;
> + for (uint i= 0; i < table->s->keys; i++)
indentation
> + {
> + if (table->key_info[i].flags & HA_LONG_UNIQUE_HASH)
> + {
> + update_handler= table->file->clone(table->s->normalized_path.str,
> + thd->mem_root);
> + update_handler->ha_external_lock(thd, F_RDLCK);
> + table->update_handler= update_handler;
why do you store it in TABLE? you can just keep it in a local variable
in mysql_update, you don't need it outside of mysql_update anyway.
(and multi-update)
also, you shouldn't need to scan all table keys here. Use some
flag or property in TABLE_SHARE to check quickly whether the table
has these long unique keys.
> + return;
> + }
> + }
> + return;
> +}
> +
> +/**
> + @brief Deletes update handler object
> + @param thd Thread Object
> + @param table Table Object
> +*/
> +void delete_update_handler(THD *thd, TABLE *table)
> +{
> + if (table->update_handler)
> + {
> + table->update_handler->ha_external_lock(thd, F_UNLCK);
> + table->update_handler->ha_close();
> + delete table->update_handler;
> + table->update_handler= NULL;
> + }
> +}
> +/**
> + @brief This function makes table object with
> + long unique keys ready for storage engine.
> + It makes key_part of HA_LONG_UNIQUE_HASH point to
> + hash key_part.
> + @param table Table object
> + */
> +void setup_table_hash(TABLE *table)
this and other functions could be methods in TABLE
> +{
> + /*
> + Extra parts of long unique key which are used only at server level
> + for example in key unique(a, b, c) //a b c are blob
> + extra_key_part_hash is 3
> + */
> + uint extra_key_part_hash= 0;
> + uint hash_parts= 0;
> + KEY *s_keyinfo= table->s->key_info;
> + KEY *keyinfo= table->key_info;
> + /*
> + Sometime s_keyinfo or key_info can be null. So
> + two different loop for keyinfo and s_keyinfo
> + reference test case:- main.subselect_sj2
how could they be null here?
> + */
> +
> + if (keyinfo)
> + {
> + for (uint i= 0; i < table->s->keys; i++, keyinfo++)
> + {
> + if (keyinfo->flags & HA_LONG_UNIQUE_HASH)
> + {
> + DBUG_ASSERT(keyinfo->user_defined_key_parts ==
> + keyinfo->ext_key_parts);
> + keyinfo->flags&= ~(HA_NOSAME | HA_LONG_UNIQUE_HASH);
> + keyinfo->algorithm= HA_KEY_ALG_UNDEF;
> + extra_key_part_hash+= keyinfo->ext_key_parts;
> + hash_parts++;
> + keyinfo->key_part= keyinfo->key_part+ keyinfo->ext_key_parts;
> + keyinfo->user_defined_key_parts= keyinfo->usable_key_parts=
> + keyinfo->ext_key_parts= 1;
> + keyinfo->key_length= keyinfo->key_part->store_length;
> + }
> + }
> + table->s->key_parts-= extra_key_part_hash;
> + table->s->key_parts+= hash_parts;
> + table->s->ext_key_parts-= extra_key_part_hash;
I don't understand what you're doing here. Could you add a comment explaining
the resulting structure of keys and keysegs and what's where?
> + }
> + if (s_keyinfo)
> + {
> + for (uint i= 0; i < table->s->keys; i++, s_keyinfo++)
> + {
> + if (s_keyinfo->flags & HA_LONG_UNIQUE_HASH)
> + {
> + DBUG_ASSERT(s_keyinfo->user_defined_key_parts ==
> + s_keyinfo->ext_key_parts);
> + s_keyinfo->flags&= ~(HA_NOSAME | HA_LONG_UNIQUE_HASH);
> + s_keyinfo->algorithm= HA_KEY_ALG_BTREE;
> + extra_key_part_hash+= s_keyinfo->ext_key_parts;
> + s_keyinfo->key_part= s_keyinfo->key_part+ s_keyinfo->ext_key_parts;
> + s_keyinfo->user_defined_key_parts= s_keyinfo->usable_key_parts=
> + s_keyinfo->ext_key_parts= 1;
> + s_keyinfo->key_length= s_keyinfo->key_part->store_length;
> + }
> + }
> + if (!keyinfo)
> + {
> + table->s->key_parts-= extra_key_part_hash;
> + table->s->key_parts+= hash_parts;
> + table->s->ext_key_parts-= extra_key_part_hash;
> + }
> + }
> +}
> +
> +/**
> + @brief Revert the effect of setup_table_hash
> + @param table Table Object
> + */
> +void re_setup_table(TABLE *table)
> +{
> + //extra key parts excluding hash , which needs to be added in keyparts
> + uint extra_key_parts_ex_hash= 0;
> + uint extra_hash_parts= 0; // this var for share->extra_hash_parts
> + KEY *s_keyinfo= table->s->key_info;
> + KEY *keyinfo= table->key_info;
> + /*
> + Sometime s_keyinfo can be null so
> + two different loop for keyinfo and s_keyinfo
> + ref test case:- main.subselect_sj2
> + */
> + if (keyinfo)
> + {
> + for (uint i= 0; i < table->s->keys; i++, keyinfo++)
> + {
> + if (keyinfo->user_defined_key_parts == 1 &&
> + keyinfo->key_part->field->flags & LONG_UNIQUE_HASH_FIELD)
> + {
> + keyinfo->flags|= (HA_NOSAME | HA_LONG_UNIQUE_HASH);
> + keyinfo->algorithm= HA_KEY_ALG_LONG_HASH;
> + /* Sometimes it can happen, that we does not parsed hash_str.
> + Like when this function is called in ha_create. So we will
> + Use field from table->field rather then share->field*/
> + Item *h_item= table->field[keyinfo->key_part->fieldnr - 1]->
> + vcol_info->expr;
> + uint hash_parts= fields_in_hash_str(h_item);
> + keyinfo->key_part= keyinfo->key_part- hash_parts;
> + keyinfo->user_defined_key_parts= keyinfo->usable_key_parts=
> + keyinfo->ext_key_parts= hash_parts;
> + extra_key_parts_ex_hash+= hash_parts;
> + extra_hash_parts++;
> + keyinfo->key_length= -1;
> + }
> + }
> + table->s->key_parts-= extra_hash_parts;
> + table->s->key_parts+= extra_key_parts_ex_hash;
> + table->s->ext_key_parts+= extra_key_parts_ex_hash + extra_hash_parts;
> + }
> + if (s_keyinfo)
> + {
> + for (uint i= 0; i < table->s->keys; i++, s_keyinfo++)
> + {
> + if (s_keyinfo->user_defined_key_parts == 1 &&
> + s_keyinfo->key_part->field->flags & LONG_UNIQUE_HASH_FIELD)
> + {
> + s_keyinfo->flags|= (HA_NOSAME | HA_LONG_UNIQUE_HASH);
> + s_keyinfo->algorithm= HA_KEY_ALG_LONG_HASH;
> + extra_hash_parts++;
> + /* Sometimes it can happen, that we does not parsed hash_str.
> + Like when this function is called in ha_create. So we will
> + Use field from table->field rather then share->field*/
> + Item *h_item= table->field[s_keyinfo->key_part->fieldnr - 1]->
> + vcol_info->expr;
> + uint hash_parts= fields_in_hash_str(h_item);
> + s_keyinfo->key_part= s_keyinfo->key_part- hash_parts;
> + s_keyinfo->user_defined_key_parts= s_keyinfo->usable_key_parts=
> + s_keyinfo->ext_key_parts= hash_parts;
> + extra_key_parts_ex_hash+= hash_parts;
> + s_keyinfo->key_length= -1;
> + }
> + }
> + if (!keyinfo)
> + {
> + table->s->key_parts-= extra_hash_parts;
> + table->s->key_parts+= extra_key_parts_ex_hash;
> + table->s->ext_key_parts+= extra_key_parts_ex_hash + extra_hash_parts;
> + }
> + }
> +}
> +
> +/**
> + @brief set_hash
> + @param table
> + @param key_index
> + @param key_buff
> +
> +int get_hash_key(THD *thd,TABLE *table, handler *h, uint key_index,
> + uchar * rec_buff, uchar *key_buff)
unused?
> +{
> + KEY *keyinfo= &table->key_info[key_index];
> + DBUG_ASSERT(keyinfo->flags & HA_LONG_UNIQUE_HASH);
> + KEY_PART_INFO *temp_key_part= table->key_info[key_index].key_part;
> + Field *fld[keyinfo->user_defined_key_parts];
> + Field *t_field;
> + uchar hash_buff[9];
> + ulong nr1= 1, nr2= 4;
> + String *str1, temp1;
> + String *str2, temp2;
> + bool is_null= false;
> + bool is_same= true;
> + bool is_index_inited= h->inited;
> + /* difference between field->ptr and start of rec_buff *
> + long diff = rec_buff- table->record[0];
> + int result= 0;
> + for (uint i=0; i < keyinfo->user_defined_key_parts; i++, temp_key_part++)
> + {
> + uint maybe_null= MY_TEST(temp_key_part->null_bit);
> + fld[i]= temp_key_part->field->new_key_field(thd->mem_root, table,
> + key_buff + maybe_null, 12,
> + maybe_null?key_buff:0, 1,
> + temp_key_part->key_part_flag
> + & HA_HASH_KEY_PART_FLAG);
> + if (fld[i]->is_real_null())
> + {
> + is_null= true;
> + break;
> + }
> + str1= fld[i]->val_str(&temp1);
> + calc_hash_for_unique(nr1, nr2, str1);
> + key_buff+= temp_key_part->store_length;
> + }
> + if (is_null && !(keyinfo->flags & HA_NULL_PART_KEY))
> + return HA_ERR_KEY_NOT_FOUND;
> + if (keyinfo->flags & HA_NULL_PART_KEY)
> + {
> + hash_buff[0]= is_null;
> + int8store(hash_buff + 1, nr1);
> + }
> + else
> + int8store(hash_buff, nr1);
> + if (!is_index_inited)
> + result= h->ha_index_init(key_index, 0);
> + if (result)
> + return result;
> +
> + setup_table_hash(table);
> + result= h->ha_index_read_map(rec_buff, hash_buff, HA_WHOLE_KEY,
> + HA_READ_KEY_EXACT);
> + re_setup_table(table);
> + if (!result)
> + {
> + for (uint i=0; i < keyinfo->user_defined_key_parts; i++)
> + {
> + t_field= keyinfo->key_part[i].field;
> + t_field->move_field(t_field->ptr+diff,
> + t_field->null_ptr+diff, t_field->null_bit);
> + }
> + do
> + {
> + re_setup_table(table);
> + is_same= true;
> + for (uint i=0; i < keyinfo->user_defined_key_parts; i++)
> + {
> + t_field= keyinfo->key_part[i].field;
> + if (fld[i]->is_real_null() && t_field->is_real_null())
> + continue;
> + if (!fld[i]->is_real_null() && !t_field->is_real_null())
> + {
> + str1= t_field->val_str(&temp1);
> + str2= fld[i]->val_str(&temp2);
> + if (my_strcasecmp(str1->charset(), str1->c_ptr_safe(),
> + str2->c_ptr_safe()))
> + {
> + is_same= false;
> + break;
> + }
> + }
> + else
> + {
> + is_same= false;
> + break;
> + }
> + }
> + setup_table_hash(table);
> + }
> + while (!is_same && !(result= h->ha_index_next_same(rec_buff, hash_buff,
> + keyinfo->key_length)));
> + for (uint i=0; i < keyinfo->user_defined_key_parts; i++)
> + {
> + t_field= keyinfo->key_part[i].field;
> + t_field->move_field(t_field->ptr-diff,
> + t_field->null_ptr-diff, t_field->null_bit);
> + }
> + }
> + if (!is_index_inited)
> + h->ha_index_end();
> + re_setup_table(table);
> + for (uint i=0; i < keyinfo->user_defined_key_parts; i++)
> + {
> + if (keyinfo->key_part[i].key_part_flag & HA_FIELD_EX_FREED)
> + {
> + Field_blob *blb= static_cast<Field_blob *>(fld[i]);
> + uchar * addr;
> + blb->get_ptr(&addr);
> + my_free(addr);
> + }
> + }
> + return result;
> +}
> +*/
> LEX_CSTRING *fk_option_name(enum_fk_option opt)
> {
> static LEX_CSTRING names[]=
> @@ -8814,3 +9293,15 @@ bool TABLE::export_structure(THD *thd, Row_definition_list *defs)
> }
> return false;
> }
> +
> +void calc_hash_for_unique(ulong &nr1, ulong &nr2, String *str)
> +{
> + CHARSET_INFO *cs;
> + uchar l[4];
> + int4store(l, str->length());
> + cs= &my_charset_bin;
> + cs->coll->hash_sort(cs, l, sizeof(l), &nr1, &nr2);
> + cs= str->charset();
> + cs->coll->hash_sort(cs, (uchar *)str->ptr(), str->length(), &nr1, &nr2);
> + sql_print_information("setiya %lu, %s", nr1, str->ptr());
really? :)
> +}
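And re point 2 near Item_func_hash above (documenting the hash): here is a standalone sketch of the scheme, assuming the binary collation's hash_sort behaves like the classic nr1/nr2 per-byte accumulator (an approximation for illustration, not the actual server code). It also shows why hashing the 4-byte length before each value matters: without it, ('ab','c') and ('a','bc') feed an identical byte stream into the accumulator and always collide:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <string>

// Illustrative stand-in for the binary collation's hash_sort: the classic
// nr1/nr2 per-byte accumulator (an assumption for this sketch only).
void hash_sort_bytes(const unsigned char *p, size_t len,
                     uint64_t *nr1, uint64_t *nr2)
{
  for (size_t i= 0; i < len; i++)
  {
    *nr1^= (((*nr1 & 63) + *nr2) * p[i]) + (*nr1 << 8);
    *nr2+= 3;
  }
}

// Hash one key part: a 4-byte length prefix first, then the value bytes
// (memcpy of the native uint32_t stands in for int4store; little-endian assumed).
void hash_part(const std::string &s, uint64_t *nr1, uint64_t *nr2)
{
  unsigned char l[4];
  uint32_t len= (uint32_t) s.size();
  memcpy(l, &len, 4);
  hash_sort_bytes(l, sizeof(l), nr1, nr2);
  hash_sort_bytes((const unsigned char *) s.data(), s.size(), nr1, nr2);
}

// Without the length prefix, the split between parts is invisible to the
// accumulator, so ("ab","c") and ("a","bc") are guaranteed to collide.
uint64_t hash_no_len(const std::string &a, const std::string &b)
{
  uint64_t nr1= 1, nr2= 4;   // same seeds as in the patch
  hash_sort_bytes((const unsigned char *) a.data(), a.size(), &nr1, &nr2);
  hash_sort_bytes((const unsigned char *) b.data(), b.size(), &nr1, &nr2);
  return nr1;
}

uint64_t hash_with_len(const std::string &a, const std::string &b)
{
  uint64_t nr1= 1, nr2= 4;
  hash_part(a, &nr1, &nr2);
  hash_part(b, &nr1, &nr2);
  return nr1;
}
```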
> diff --git a/sql/sql_table.cc b/sql/sql_table.cc
> index 4ac8b102d79..f42687c0377 100644
> --- a/sql/sql_table.cc
> +++ b/sql/sql_table.cc
> @@ -3343,6 +3343,87 @@ mysql_add_invisible_index(THD *thd, List<Key> *key_list,
> key_list->push_back(key, thd->mem_root);
> return key;
> }
> +/**
> + Add hidden level 3 hash field to table in case of long
s/hidden level 3/fully invisible/
> + unique column
> + @param thd Thread Context.
> + @param create_list List of table fields.
> + @param cs Field Charset
> + @param key_info Whole Keys buffer
> + @param key_index Index of current key
1. instead of key_info and key_index, you can just take a key_info of the current
key here (that is key_info + key_index).
2. what is the "current key"? Is it the unique key over blobs that you create
a hash field for? please clarify the comment.
> +*/
> +
> +static void add_hash_field(THD * thd, List<Create_field> *create_list,
> + CHARSET_INFO *cs, KEY *key_info, int key_index)
> +{
> + List_iterator<Create_field> it(*create_list);
> + Create_field *dup_field, *cf= new (thd->mem_root) Create_field();
> + cf->flags|= UNSIGNED_FLAG | LONG_UNIQUE_HASH_FIELD;
> + cf->charset= cs;
> + cf->decimals= 0;
> + cf->length= cf->char_length= cf->pack_length= HA_HASH_FIELD_LENGTH;
> + cf->invisible= INVISIBLE_FULL;
> + cf->pack_flag|= FIELDFLAG_MAYBE_NULL;
> + uint num= 1;
> + char *temp_name= (char *)thd->alloc(30);
> + my_snprintf(temp_name, 30, "DB_ROW_HASH_%u", num);
please, give names to the 30 and to the whole my_snprintf: macros, functions,
whatever. And better use a LEX_STRING for it, for example:
#define LONG_HASH_FIELD_NAME_LENGTH 30
static inline void make_long_hash_field_name(LEX_STRING *buf, uint num)
{
  buf->length= my_snprintf(buf->str, LONG_HASH_FIELD_NAME_LENGTH, "DB_ROW_HASH_%u", num);
}
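A self-contained sketch of such a helper plus the collision loop, with std::string standing in for LEX_STRING (purely illustrative, not the server API):

```cpp
#include <cassert>
#include <cstdio>
#include <string>
#include <vector>

#define LONG_HASH_FIELD_NAME_LENGTH 30

// Build "DB_ROW_HASH_<num>" and keep the length around, so no strlen() later.
std::string make_long_hash_field_name(unsigned num)
{
  char buf[LONG_HASH_FIELD_NAME_LENGTH];
  int len= snprintf(buf, sizeof(buf), "DB_ROW_HASH_%u", num);
  return std::string(buf, (size_t) len);
}

// Pick the first DB_ROW_HASH_<n> not already used as a column name
// (case-insensitive comparison of the real code omitted for brevity).
std::string pick_hash_field_name(const std::vector<std::string> &columns)
{
  unsigned num= 1;
  std::string name= make_long_hash_field_name(num);
  for (size_t i= 0; i < columns.size(); )
  {
    if (columns[i] == name)              // collision: bump the suffix, rescan
    {
      name= make_long_hash_field_name(++num);
      i= 0;
    }
    else
      i++;
  }
  return name;
}
```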
> + /*
> + Check for collusions
collisions :)
> + */
> + while ((dup_field= it++))
> + {
> + if (!my_strcasecmp(system_charset_info, temp_name, dup_field->field_name.str))
> + {
> + num++;
> + my_snprintf(temp_name, 30, "DB_ROW_HASH_%u", num);
> + it.rewind();
> + }
> + }
> + it.rewind();
> + cf->field_name.str= temp_name;
> + cf->field_name.length= strlen(temp_name);
you won't need strlen here, if you use LEX_STRING above
> + cf->set_handler(&type_handler_longlong);
> + /*
> + We have added db_row_hash field in starting of
> + fields array , So we have to change key_part
> + field index
> + for (int i= 0; i <= key_index; i++, key_info++)
> + {
> + KEY_PART_INFO *info= key_info->key_part;
> + for (uint j= 0; j < key_info->user_defined_key_parts; j++, info++)
> + {
> + info->fieldnr+= 1;
> + info->offset+= HA_HASH_FIELD_LENGTH;
> + }
> + }*/
Forgot to remove it?
> + key_info[key_index].flags|= HA_NOSAME;
> + key_info[key_index].algorithm= HA_KEY_ALG_LONG_HASH;
> + it.rewind();
> + uint record_offset= 0;
> + while ((dup_field= it++))
> + {
> + dup_field->offset= record_offset;
> + if (dup_field->stored_in_db())
> + record_offset+= dup_field->pack_length;
Why do you change all field offsets? You only need to put your field last,
that's all. Like
while ((dup_field= it++))
set_if_bigger(record_offset, dup_field->offset + dup_field->pack_length)
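That is, the new field's offset is just the end of the current record, and no existing offset has to move. A standalone sketch of the idea (the struct and names are illustrative):

```cpp
#include <cassert>
#include <vector>

struct FieldDef { unsigned offset; unsigned pack_length; };

// Place the hash field after the last stored byte of the existing record,
// leaving every other field's offset untouched.
unsigned end_of_record(const std::vector<FieldDef> &fields)
{
  unsigned record_offset= 0;
  for (const FieldDef &f : fields)
    if (record_offset < f.offset + f.pack_length)   // set_if_bigger()
      record_offset= f.offset + f.pack_length;
  return record_offset;
}
```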
> + }
> + cf->offset= record_offset;
> + /*
> + it.rewind();
> + while ((sql_field= it++))
> + {
> + if (!sql_field->stored_in_db())
> + {
> + sql_field->offset= record_offset;
> + record_offset+= sql_field->pack_length;
> + }
> + }
why is that?
> + */
> + /* hash column should be fully hidden */
> + //prepare_create_field(cf, NULL, 0);
Forgot to remove it?
> + create_list->push_back(cf,thd->mem_root);
> +}
> +
> +
> /*
> Preparation for table creation
>
> @@ -3868,6 +3951,8 @@ mysql_prepare_create_table(THD *thd, HA_CREATE_INFO *create_info,
> }
>
> cols2.rewind();
> + key_part_info->fieldnr= field;
> + key_part_info->offset= (uint16) sql_field->offset;
why did you need to move that?
> if (key->type == Key::FULLTEXT)
> {
> if ((sql_field->real_field_type() != MYSQL_TYPE_STRING &&
> @@ -3922,8 +4007,19 @@ mysql_prepare_create_table(THD *thd, HA_CREATE_INFO *create_info,
> column->length= MAX_LEN_GEOM_POINT_FIELD;
> if (!column->length)
> {
> - my_error(ER_BLOB_KEY_WITHOUT_LENGTH, MYF(0), column->field_name.str);
> - DBUG_RETURN(TRUE);
> + if (key->type == Key::PRIMARY)
> + {
> + my_error(ER_BLOB_KEY_WITHOUT_LENGTH, MYF(0), column->field_name.str);
> + DBUG_RETURN(TRUE);
> + }
> + else if (!is_hash_field_added)
> + {
> + add_hash_field(thd, &alter_info->create_list,
> + create_info->default_table_charset,
> + *key_info_buffer, key_number);
> + column->length= 0;
why column->length= 0 ?
> + is_hash_field_added= true;
> + }
> }
> }
> #ifdef HAVE_SPATIAL
> @@ -4062,11 +4159,29 @@ mysql_prepare_create_table(THD *thd, HA_CREATE_INFO *create_info,
> }
> else
> {
> - my_error(ER_TOO_LONG_KEY, MYF(0), key_part_length);
> - DBUG_RETURN(TRUE);
> - }
> + if(key->type == Key::UNIQUE)
> + {
> + if (!is_hash_field_added)
> + {
> + add_hash_field(thd, &alter_info->create_list,
> + create_info->default_table_charset,
> + *key_info_buffer, key_number);
> + is_hash_field_added= true;
instead of is_hash_field_added and many add_hash_field() here and there, I'd
rather rename the variable to hash_field_needed. And only do hash_field_needed= true.
And at the end if (hash_field_needed) add_hash_field(...)
> + }
> + }
> + else
> + {
> + my_error(ER_TOO_LONG_KEY, MYF(0), key_part_length);
any test cases for this error?
> + DBUG_RETURN(TRUE);
> + }
> + }
> }
> - key_part_info->length= (uint16) key_part_length;
> + /* We can not store key_part_length more then 2^16 - 1 in frm
> + So we will simply make it zero */
Not really. If someone explicitly asks for a long prefix, it should be an error.
Like CREATE TABLE t1 (a blob, UNIQUE (a(65537))); -- this should be an error
> + if (is_hash_field_added && key_part_length > (2<<16) - 1)
> + key_part_info->length= 0;
> + else
> + key_part_info->length= (uint16) key_part_length;
> /* Use packed keys for long strings on the first column */
> if (!((*db_options) & HA_OPTION_NO_PACK_KEYS) &&
> !((create_info->table_options & HA_OPTION_NO_PACK_KEYS)) &&
> @@ -4122,12 +4237,37 @@ mysql_prepare_create_table(THD *thd, HA_CREATE_INFO *create_info,
> if (key->type == Key::UNIQUE && !(key_info->flags & HA_NULL_PART_KEY))
> unique_key=1;
> key_info->key_length=(uint16) key_length;
> - if (key_length > max_key_length && key->type != Key::FULLTEXT)
> + if (key_length > max_key_length && key->type != Key::FULLTEXT &&
> + !is_hash_field_added)
> {
> my_error(ER_TOO_LONG_KEY,MYF(0),max_key_length);
> DBUG_RETURN(TRUE);
> }
>
> + if (is_hash_field_added)
> + {
> + if (key_info->flags & HA_NULL_PART_KEY)
> + null_fields++;
> + else
> + {
> + uint elements= alter_info->create_list.elements;
> + Create_field *hash_fld= static_cast<Create_field *>(alter_info->
> + create_list.elem(elements -1 ));
> + hash_fld->flags|= NOT_NULL_FLAG;
> + hash_fld->pack_flag&= ~FIELDFLAG_MAYBE_NULL;
> + /*
> + Althought we do not need default value anywhere in code , but if we create
> + table with non null long columns , then at the time of insert we get warning.
> + So default value is used so solve this warning.
> + Virtual_column_info *default_value= new (thd->mem_root) Virtual_column_info();
> + char * def_str= (char *)alloc_root(thd->mem_root, 2);
> + strncpy(def_str, "0", 1);
> + default_value->expr_str.str= def_str;
> + default_value->expr_str.length= 1;
> + default_value->expr_item= new (thd->mem_root) Item_int(thd,0);
> + hash_fld->default_value= default_value; */
> + }
Forgot to remove it?
> + }
> if (validate_comment_length(thd, &key->key_create_info.comment,
> INDEX_COMMENT_MAXLEN,
> ER_TOO_LONG_INDEX_COMMENT,
> @@ -8328,6 +8468,11 @@ mysql_prepare_alter_table(THD *thd, TABLE *table,
> enum Key::Keytype key_type;
> LEX_CSTRING tmp_name;
> bzero((char*) &key_create_info, sizeof(key_create_info));
> + if (key_info->flags & HA_LONG_UNIQUE_HASH)
> + {
> + key_info->flags&= ~(HA_LONG_UNIQUE_HASH);
> + key_info->algorithm= HA_KEY_ALG_UNDEF;
> + }
Why?
> key_create_info.algorithm= key_info->algorithm;
> /*
> We copy block size directly as some engines, like Area, sets this
> diff --git a/sql/handler.cc b/sql/handler.cc
> index b77b2a3fa2c..cda53dfdc87 100644
> --- a/sql/handler.cc
> +++ b/sql/handler.cc
> @@ -6179,6 +6185,185 @@ int handler::ha_reset()
> DBUG_RETURN(reset());
> }
>
> +static int check_duplicate_long_entry_key(TABLE *table, handler *h, uchar *new_rec,
> + uint key_no)
> +{
> + Field *hash_field;
> + int result, error= 0;
> + if (!(table->key_info[key_no].user_defined_key_parts == 1
> + && table->key_info[key_no].key_part->field->flags & LONG_UNIQUE_HASH_FIELD ))
> + return 0;
What if LONG_UNIQUE_HASH_FIELD is set but user_defined_key_parts != 1?
> + hash_field= table->key_info[key_no].key_part->field;
> + DBUG_ASSERT((table->key_info[key_no].flags & HA_NULL_PART_KEY &&
> + table->key_info[key_no].key_length == HA_HASH_KEY_LENGTH_WITH_NULL)
> + || table->key_info[key_no].key_length == HA_HASH_KEY_LENGTH_WITHOUT_NULL);
> + uchar ptr[HA_HASH_KEY_LENGTH_WITH_NULL];
> +
> + if (hash_field->is_real_null())
> + return 0;
> +
> + key_copy(ptr, new_rec, &table->key_info[key_no],
> + table->key_info[key_no].key_length, false);
good, use existing function, no need to reinvent the wheel
> +
> + if (!table->check_unique_buf)
> + table->check_unique_buf= (uchar *)alloc_root(&table->mem_root,
> + table->s->reclength*sizeof(uchar));
> +
> + result= h->ha_index_init(key_no, 0);
> + if (result)
> + return result;
> + result= h->ha_index_read_map(table->check_unique_buf,
> + ptr, HA_WHOLE_KEY, HA_READ_KEY_EXACT);
> + if (!result)
> + {
> + bool is_same;
> + do
> + {
> + Item_func_or_sum * temp= static_cast<Item_func_or_sum *>(hash_field->
> + vcol_info->expr);
> + Item_args * t_item= static_cast<Item_args *>(temp);
> + uint arg_count= t_item->argument_count();
> + Item ** arguments= t_item->arguments();
> + long diff= table->check_unique_buf - new_rec;
> + Field * t_field;
> + is_same= true;
> + for (uint j=0; j < arg_count; j++)
> + {
> + DBUG_ASSERT(arguments[j]->type() == Item::FIELD_ITEM ||
> + // this one for left(fld_name,length)
> + arguments[j]->type() == Item::FUNC_ITEM);
> + if (arguments[j]->type() == Item::FIELD_ITEM)
> + {
> + t_field= static_cast<Item_field *>(arguments[j])->field;
> + if (t_field->cmp_offset(diff))
> + is_same= false;
> + }
> + else
> + {
> + Item_func_left *fnc= static_cast<Item_func_left *>(arguments[j]);
> + DBUG_ASSERT(!my_strcasecmp(system_charset_info, "left", fnc->func_name()));
> + //item_data= fnc->val_str(&tmp1);
> + DBUG_ASSERT(fnc->arguments()[0]->type() == Item::FIELD_ITEM);
> + t_field= static_cast<Item_field *>(fnc->arguments()[0])->field;
> + // field_data= t_field->val_str(&tmp2);
> + // if (my_strnncoll(t_field->charset(),(const uchar *)item_data->ptr(),
> + // item_data->length(),
> + // (const uchar *)field_data.ptr(),
> + // item_data->length()))
> + // return 0;
> + uint length= fnc->arguments()[1]->val_int();
> + if (t_field->cmp_max(t_field->ptr, t_field->ptr + diff, length))
> + is_same= false;
> + }
> + }
you don't need all that code, use key_cmp() or key_cmp_if_same() from key.cc
> + }
> + while (!is_same && !(result= table->file->ha_index_next_same(table->check_unique_buf,
> + ptr, table->key_info[key_no].key_length)));
> + if (is_same)
> + {
> + table->dupp_hash_key= key_no;
> + error= HA_ERR_FOUND_DUPP_KEY;
> + goto exit;
> + }
> + else
> + goto exit;
> + }
> + if (result == HA_ERR_LOCK_WAIT_TIMEOUT)
> + {
> + table->dupp_hash_key= key_no;
> + //TODO check if this is the only case
> + error= HA_ERR_FOUND_DUPP_KEY;
Why?
> + }
> + exit:
> + h->ha_index_end();
> + return error;
> +}
> +/** @brief
> + check whether inserted/updated records breaks the
> + unique constraint on long columns.
> + In the case of update we just need to check the specic key
> + reason for that is consider case
> + create table t1(a blob , b blob , x blob , y blob ,unique(a,b)
> + ,unique(x,y))
> + and update statement like this
> + update t1 set a=23+a; in this case if we try to scan for
> + whole keys in table then index scan on x_y will return 0
> + because data is same so in the case of update we take
> + key as a parameter in normal insert key should be -1
> + @returns 0 if no duplicate else returns error
> + */
> +static int check_duplicate_long_entries(TABLE *table, handler *h, uchar *new_rec)
> +{
> + table->dupp_hash_key= -1;
> + int result;
> + for (uint i= 0; i < table->s->keys; i++)
> + {
> + if ((result= check_duplicate_long_entry_key(table, h, new_rec, i)))
> + return result;
> + }
> + return 0;
> +}
> +
> +/** @brief
> + check whether updated records breaks the
> + unique constraint on long columns.
> + @returns 0 if no duplicate else returns error
> + */
> +static int check_duplicate_long_entries_update(TABLE *table, handler *h, uchar *new_rec)
> +{
> + Field **f, *field;
> + Item *h_item;
> + int error= 0;
> + bool is_update_handler_null= false;
> + /*
> + Here we are comparing whether new record and old record are same
> + with respect to fields in hash_str
> + */
> + long reclength= table->record[1]-table->record[0];
> + for (uint i= 0; i < table->s->keys; i++)
> + {
> + if (table->key_info[i].user_defined_key_parts == 1 &&
> + table->key_info[i].key_part->field->flags & LONG_UNIQUE_HASH_FIELD)
> + {
> + /*
> + Currently mysql_update is pacthed so that it will automatically set the
> + Update handler and then free it but ha_update_row is used in many function (
> + like in case of reinsert) Instead of patching them all here we check is
> + update_handler is null then set it And then set it null again
> + */
> + if (!table->update_handler)
> + {
> + create_update_handler(current_thd, table);
> + is_update_handler_null= true;
> + }
> + h_item= table->key_info[i].key_part->field->vcol_info->expr;
> + for (f= table->field; f && (field= *f); f++)
> + {
> + if ( find_field_pos_in_hash(h_item, field->field_name.str) != -1)
> + {
> + /* Compare fields if they are different then check for duplicates*/
> + if(field->cmp_binary_offset(reclength))
> + {
> + if((error= check_duplicate_long_entry_key(table, table->update_handler,
> + new_rec, i)))
> + goto exit;
> + /*
> + break beacuse check_duplicate_long_entrie_key will
> + take care of remaning fields
> + */
> + break;
> + }
> + }
> + }
> + }
> + }
> + exit:
> + if (is_update_handler_null)
> + {
> + delete_update_handler(current_thd, table);
> + }
> + return error;
> +}
>
> int handler::ha_write_row(uchar *buf)
> {
> @@ -6189,14 +6374,21 @@ int handler::ha_write_row(uchar *buf)
> DBUG_ENTER("handler::ha_write_row");
> DEBUG_SYNC_C("ha_write_row_start");
>
> + setup_table_hash(table);
No-no. You cannot modify the table structure back and forth for *every inserted row*.
It's ok to do it once, when a table is opened, but not for every row.
> MYSQL_INSERT_ROW_START(table_share->db.str, table_share->table_name.str);
> mark_trx_read_write();
> increment_statistics(&SSV::ha_write_count);
>
> + if ((error= check_duplicate_long_entries(table, table->file, buf)))
> + {
> + re_setup_table(table);
> + DBUG_RETURN(error);
> + }
> TABLE_IO_WAIT(tracker, m_psi, PSI_TABLE_WRITE_ROW, MAX_KEY, 0,
> { error= write_row(buf); })
>
> MYSQL_INSERT_ROW_DONE(error);
> + re_setup_table(table);
> if (likely(!error) && !row_already_logged)
> {
> rows_changed++;
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org
Hello Alexander,
Can you review this patch for MDEV-17359?
Best regards.
This message and any attachments are confidential and intended solely for the addressees. Any unauthorized use or disclosure, either whole or partial is prohibited. E-mails are susceptible to alteration; CEGID shall therefore not be liable for the content of this message. If you are not the intended recipient of this message, please delete it and notify the sender.
Re: [Maria-developers] d1ccc60: MDEV-14005 Remove need for Partition Key to be part of Primary Key.
by Sergei Golubchik 05 Oct '18
Hi, Alexey!
Some minor comments, questions, see below...
On Aug 21, Alexey Botchkov wrote:
> revision-id: d1ccc60360c79cd456abbebafc1c80c8043b7dce (mariadb-10.3.6-106-gd1ccc60)
> parent(s): ead9a34a3e934f607c2ea7a6c68f7f6d9d29b5bd
> committer: Alexey Botchkov
> timestamp: 2018-08-21 14:47:45 +0400
> message:
>
> MDEV-14005 Remove need for Partition Key to be part of Primary Key.
>
> The limitation was removed, so now we check all the partition
> for unique key duplicates in these cases.
>
> diff --git a/mysql-test/main/partition_unique.result b/mysql-test/main/partition_unique.result
> new file mode 100644
> index 0000000..17a9550
> --- /dev/null
> +++ b/mysql-test/main/partition_unique.result
> @@ -0,0 +1,51 @@
> +CREATE TABLE t1 (
> +id int(10) NOT NULL,
> +status varchar(1) DEFAULT NULL,
> +PRIMARY KEY (`id`)
> +)
> +PARTITION BY LIST COLUMNS(`status`)
> +(
> +PARTITION `a` VALUES IN ('A'),
> +PARTITION `b` VALUES IN ('B'),
> +PARTITION `c` VALUES IN ('C'),
> +PARTITION `d` DEFAULT);
> +INSERT INTO t1 VALUES (4, 'A');
> +INSERT INTO t1 VALUES (6, 'A');
> +INSERT INTO t1 VALUES (4, 'C');
> +ERROR 23000: Duplicate entry '4' for key 'PRIMARY'
> +INSERT INTO t1 VALUES (5, 'C');
> +UPDATE t1 SET id=4 WHERE id=5;
> +ERROR 23000: Duplicate entry '4' for key 'PRIMARY'
> +UPDATE t1 SET id=4 WHERE id=5 AND status='C';
> +ERROR 23000: Duplicate entry '4' for key 'PRIMARY'
> +UPDATE t1 SET id=6 WHERE id=4 AND status='A';
> +ERROR 23000: Duplicate entry '6' for key 'PRIMARY'
> +select * from t1;
> +id status
> +4 A
> +6 A
> +5 C
> +connect con1,localhost,root,,test;
> +connect con2,localhost,root,,test;
> +connection con1;
> +SET DEBUG_SYNC= 'berfore_part_unique_check SIGNAL bpu_hit WAIT_FOR bpu_flushed';
maybe s/berfore/before/ ? :)
> +INSERT INTO t1 VALUES(7, 'A');
> +connection con2;
> +SET DEBUG_SYNC= 'now WAIT_FOR bpu_hit';
> +INSERT INTO t1 VALUES(7, 'C');
> +connection default;
> +SET DEBUG_SYNC= 'now SIGNAL bpu_flushed';
> +connection con1;
> +connection con2;
> +ERROR 23000: Duplicate entry '7' for key 'PRIMARY'
> +disconnect con1;
> +disconnect con2;
> +connection default;
> +select * from t1;
> +id status
> +4 A
> +6 A
> +7 A
> +5 C
> +DROP TABLE t1;
> +set debug_sync= "RESET";
> diff --git a/mysql-test/main/partition_unique.test b/mysql-test/main/partition_unique.test
> new file mode 100644
> index 0000000..f292359
> --- /dev/null
> +++ b/mysql-test/main/partition_unique.test
> @@ -0,0 +1,58 @@
> +--source include/have_innodb.inc
> +--source include/have_partition.inc
> +
> +CREATE TABLE t1 (
> + id int(10) NOT NULL,
> + status varchar(1) DEFAULT NULL,
> + PRIMARY KEY (`id`)
> + )
> + PARTITION BY LIST COLUMNS(`status`)
> + (
> + PARTITION `a` VALUES IN ('A'),
> + PARTITION `b` VALUES IN ('B'),
> + PARTITION `c` VALUES IN ('C'),
> + PARTITION `d` DEFAULT);
you don't seem to test innodb, this looks like a partitioned myisam table
> +
> +INSERT INTO t1 VALUES (4, 'A');
> +INSERT INTO t1 VALUES (6, 'A');
> +--error ER_DUP_ENTRY
> +INSERT INTO t1 VALUES (4, 'C');
> +INSERT INTO t1 VALUES (5, 'C');
> +--error ER_DUP_ENTRY
> +UPDATE t1 SET id=4 WHERE id=5;
> +--error ER_DUP_ENTRY
> +UPDATE t1 SET id=4 WHERE id=5 AND status='C';
> +--error ER_DUP_ENTRY
> +UPDATE t1 SET id=6 WHERE id=4 AND status='A';
> +select * from t1;
> +
> +connect (con1,localhost,root,,test);
> +connect (con2,localhost,root,,test);
> +connection con1;
> +SET DEBUG_SYNC= 'berfore_part_unique_check SIGNAL bpu_hit WAIT_FOR bpu_flushed';
> +send INSERT INTO t1 VALUES(7, 'A');
> +
> +connection con2;
> +SET DEBUG_SYNC= 'now WAIT_FOR bpu_hit';
> +send INSERT INTO t1 VALUES(7, 'C');
add also a test that you can insert some other value in 'A' and 'C'
partitions. That is, partitions aren't completely locked.
> +
> +connection default;
> +SET DEBUG_SYNC= 'now SIGNAL bpu_flushed';
> +
> +connection con1;
> +--reap
> +connection con2;
> +--error ER_DUP_ENTRY
> +--reap
> +
> +disconnect con1;
> +disconnect con2;
> +
> +connection default;
> +
> +select * from t1;
> +
> +DROP TABLE t1;
> +
> +set debug_sync= "RESET";
> +
> diff --git a/sql/ha_partition.cc b/sql/ha_partition.cc
> index 262e791..da07075 100644
> --- a/sql/ha_partition.cc
> +++ b/sql/ha_partition.cc
> @@ -368,6 +368,7 @@ void ha_partition::init_handler_variables()
> part_share= NULL;
> m_new_partitions_share_refs.empty();
> m_part_ids_sorted_by_num_of_records= NULL;
> + m_cu_errkey= (uint) -1;
rather confusing name. Could you de-confuse it?
Why not simply m_errkey? Or m_err_dupkey, or whatever.
And why did you make it unsigned? You need to cast in every second
occurrence of it :)
> m_partitions_to_open= NULL;
>
> m_range_info= NULL;
> @@ -4198,6 +4199,95 @@ void ha_partition::try_semi_consistent_read(bool yes)
> }
>
>
> +int ha_partition::check_files_for_key(uchar *key, int n_key,
> + int part_begin, int part_end,
> + int part_to_skip,
> + int *res)
> +{
> + DBUG_ASSERT(inited == NONE ||
> + (inited == RND && !m_scan_value));
> +
> + for (int n=part_begin; n < part_end; n++)
> + {
> + handler *f= m_file[n];
> + init_stat sav_inited;
> +
> + if ((int) n == part_to_skip)
> + continue;
> +
> + if ((sav_inited= f->inited) == RND)
> + f->ha_rnd_end();
When can this happen?
> +
> + *res= f->index_read_idx_map(m_part_info->table->record[0],
> + n_key, key, HA_WHOLE_KEY, HA_READ_KEY_EXACT);
> +
> + f->ha_end_keyread();
> +
> + if (sav_inited == RND)
> + {
> + int ires= f->ha_rnd_init(FALSE);
> + if (ires)
> + *res= ires;
> + }
> +
> + if (*res == HA_ERR_KEY_NOT_FOUND)
> + continue;
> +
> + if (*res == 0)
> + *res= HA_ERR_FOUND_DUPP_KEY;
> +
> + m_last_part= n;
> + m_cu_errkey= n_key;
> + return 1;
> + }
> +
> + *res= 0;
> + return 0;
> +}
> +
> +
> +int ha_partition::check_uniques_insert(uchar *buf,
> + int part_begin, int part_end,
> + KEY **dup_key,
> + int *res)
> +{
> + uint n_key;
> +
> + for (n_key=0; n_key < m_part_info->n_uniques_to_check; n_key++)
> + {
> + uchar *cbuf= m_part_info->unique_key_buf[0];
> + KEY *ckey= m_part_info->uniques_to_check[n_key];
> + uint n;
> +
> + for (n=0; n < ckey->user_defined_key_parts; n++)
> + {
can you use key_copy() here?
> + const KEY_PART_INFO *cpart= ckey->key_part + n;
> + uint maybe_null= MY_TEST(cpart->null_bit);
> + Field *f= cpart->field;
> + my_ptrdiff_t ofs= buf-table->record[0];
> +
> + f->move_field_offset(ofs);
> + if (maybe_null)
> + cbuf[0]= f->is_null();
> + f->get_key_image(cbuf+maybe_null, cpart->length, Field::itRAW);
> + cbuf+= cpart->store_length;
> + f->move_field_offset(-ofs);
> + }
> +
> + if (check_files_for_key(m_part_info->unique_key_buf[0],
> + table->key_info - ckey,
> + part_begin, part_end, -1, res))
> + {
> + *dup_key= ckey;
> + return 1;
> + }
> + }
> +
> + *res= 0;
> + return 0;
> +}
> +
> +
> /****************************************************************************
> MODULE change record
> ****************************************************************************/
> @@ -4307,7 +4398,41 @@ int ha_partition::write_row(uchar * buf)
> start_part_bulk_insert(thd, part_id);
>
> tmp_disable_binlog(thd); /* Do not replicate the low-level changes. */
> +
> + /*
> + Check unique keys (if there is any) for duplications in
> + partitions 0 .. inserted partition, then
> + do the write_row then check the unique in
> + partitions inserted partition +1 .. m_tot_parts.
> + We do so to keep the order of locking always same to
> + avoid deadlocks.
> + */
> + if (table->part_info->n_uniques_to_check &&
> + check_uniques_insert(buf, 0, part_id, &dup_key, &error))
> + {
> + goto exit;
> + }
> +
> error= m_file[part_id]->ha_write_row(buf);
> +
> + DEBUG_SYNC(thd, "berfore_part_unique_check");
> +
> + if (!error && table->part_info->n_uniques_to_check &&
> + check_uniques_insert(buf, part_id+1, m_tot_parts, &dup_key, &error))
> + {
> + /*
> + Errors here are ignored, as the error already found.
> + and i don't have a good idea what to do if things go
> + wrong here.
> + */
> + if (!(m_file[part_id]->ha_index_read_idx_map(table->record[1],
> + dup_key - table->key_info, table->part_info->unique_key_buf[0],
> + HA_WHOLE_KEY, HA_READ_KEY_EXACT)))
1. why do you need ha_index_read_idx_map here?
you've just did ha_write_row, isn't it enough?
2. if the table is transational and no IGNORE was used,
you don't need to delete anything, the statement will be
rolled back anyway. an easy optimization.
> + {
> + (void) m_file[part_id]->ha_delete_row(buf);
> + }
> + }
> +
> if (have_auto_increment && !table->s->next_number_keypart)
> set_auto_increment_if_higher(table->next_number_field);
> reenable_binlog(thd);
> @@ -4317,6 +4442,59 @@ int ha_partition::write_row(uchar * buf)
> table->auto_increment_field_not_null= saved_auto_inc_field_not_null;
> DBUG_RETURN(error);
> }
> +
> +
> +int ha_partition::check_uniques_update(const uchar *old_data,
> + const uchar *new_data,
> + int new_part_id, int *res)
> +{
> + uint n_key;
> +
> + for (n_key=0; n_key < m_part_info->n_uniques_to_check; n_key++)
> + {
> + uchar *buf0= m_part_info->unique_key_buf[0];
> + uchar *buf1= m_part_info->unique_key_buf[1];
> + KEY *ckey= m_part_info->uniques_to_check[n_key];
> + uint n;
> +
> + for (n=0; n < ckey->user_defined_key_parts; n++)
> + {
> + const KEY_PART_INFO *cpart= ckey->key_part + n;
> + uint maybe_null= MY_TEST(cpart->null_bit);
> + Field *f= cpart->field;
> + my_ptrdiff_t dif;
> +
> + dif= old_data - table->record[0];
> + f->move_field_offset(dif);
> + if (maybe_null)
> + buf0[0]= f->is_null();
> + f->get_key_image(buf0+maybe_null, cpart->length, Field::itRAW);
> + buf0+= cpart->store_length;
> + f->move_field_offset(-dif);
> +
> + dif= new_data - table->record[0];
> + f->move_field_offset(dif);
> + if (maybe_null)
> + buf1[0]= f->is_null();
> + f->get_key_image(buf1+maybe_null, cpart->length, Field::itRAW);
> + buf1+= cpart->store_length;
> + f->move_field_offset(-dif);
> + }
> +
> + if (memcmp(m_part_info->unique_key_buf[0], m_part_info->unique_key_buf[1],
> + buf0 - m_part_info->unique_key_buf[0]) == 0)
> + {
> + /* Key did not change. */
> + continue;
> + }
> +
> + if (check_files_for_key(m_part_info->unique_key_buf[1],
> + table->key_info - ckey, 0, m_tot_parts, new_part_id, res))
How does that work?
I'd expect you to do like you did for write_row:
- check from 0 to new_part_id, skipping old_part_id,
- insert the row
- check from new_part_id+1 to m_last_part, skipping old_part_id.
> + return 1;
> + }
> +
> + return 0;
> +}
>
>
> /*
> @@ -5754,11 +5935,14 @@ int ha_partition::index_read_idx_map(uchar *buf, uint index,
> get_partition_set(table, buf, index, &m_start_key, &m_part_spec);
>
> /*
> - We have either found exactly 1 partition
> + If there is no 'unbound' unique keys where not all keyparts
> + are partition definition fields,
> + we have either found exactly 1 partition
I don't understand. It's HA_READ_KEY_EXACT, so it can never
find more than one row in a unique key, and one row can never
be in more than one partition. Or do you mean unique key over
nullable column that can have many null values? Then mention it
in the comment, please.
> (in which case start_part == end_part)
> - or no matching partitions (start_part > end_part)
> + or no matching partitions (start_part > end_part),
> */
> - DBUG_ASSERT(m_part_spec.start_part >= m_part_spec.end_part);
> + DBUG_ASSERT(m_part_spec.start_part >= m_part_spec.end_part ||
> + m_part_info->n_uniques_to_check);
> /* The start part is must be marked as used. */
> DBUG_ASSERT(m_part_spec.start_part > m_part_spec.end_part ||
> bitmap_is_set(&(m_part_info->read_partitions),
> @@ -8315,15 +8499,20 @@ int ha_partition::info(uint flag)
> {
> handler *file= m_file[m_last_part];
> DBUG_PRINT("info", ("info: HA_STATUS_ERRKEY"));
> - /*
> - This flag is used to get index number of the unique index that
> - reported duplicate key
> - We will report the errkey on the last handler used and ignore the rest
> - Note: all engines does not support HA_STATUS_ERRKEY, so set errkey.
> - */
> - file->errkey= errkey;
> - file->info(HA_STATUS_ERRKEY | no_lock_flag);
> - errkey= file->errkey;
> + if ((int) m_cu_errkey >= 0)
> + errkey= m_cu_errkey;
> + else
> + {
> + /*
> + This flag is used to get index number of the unique index that
> + reported duplicate key
> + We will report the errkey on the last handler used and ignore the rest
> + Note: all engines does not support HA_STATUS_ERRKEY, so set errkey.
I don't understand what this means (I know it's an old comment, but still)
Is it "not all engines support HA_STATUS_ERRKEY"?
If yes, please change it to say that.
> + */
> + file->errkey= errkey;
> + file->info(HA_STATUS_ERRKEY | no_lock_flag);
> + errkey= file->errkey;
> + }
> }
> if (flag & HA_STATUS_TIME)
> {
> diff --git a/sql/partition_info.h b/sql/partition_info.h
> index e00a2c4..eeac0d9 100644
> --- a/sql/partition_info.h
> +++ b/sql/partition_info.h
> @@ -283,6 +283,14 @@ class partition_info : public Sql_alloc
> bool is_auto_partitioned;
> bool has_null_value;
> bool column_list; // COLUMNS PARTITIONING, 5.5+
> + /*
> + Unique keys that don't have all the partitioning fields in them
> + need to be checked when INSERT/UPDATE.
> + So they are collected here.
> + */
> + KEY **uniques_to_check;
> + uint n_uniques_to_check;
> + uchar *unique_key_buf[2];
Is it safe? Isn't partition_info shared between many handlers
so that every handler can only use it read-only?
it's not quite clear to me. it seems that every TABLE has its own
partition_info. But cloned handlers all share the same partition_info.
>
> partition_info()
> : get_partition_id(NULL), get_part_partition_id(NULL),
> diff --git a/sql/sql_partition.cc b/sql/sql_partition.cc
> index 3133b94..6cd46dd 100644
> --- a/sql/sql_partition.cc
> +++ b/sql/sql_partition.cc
> @@ -988,15 +945,15 @@ static bool check_unique_keys(TABLE *table)
> check_fields_in_PF(table->part_info->full_part_field_array,
> &all_fields, &some_fields);
> clear_indicator_in_key_fields(table->key_info+i);
> - if (unlikely(!all_fields))
> + if (!all_fields)
> {
> - my_error(ER_UNIQUE_KEY_NEED_ALL_FIELDS_IN_PF,MYF(0),"UNIQUE INDEX");
you can rename ER_UNIQUE_KEY_NEED_ALL_FIELDS_IN_PF to
ER_UNUSED with the next free number
> - result= TRUE;
> - break;
> + ++keys_found;
> + if (table->part_info->n_uniques_to_check >= keys_found)
> + table->part_info->uniques_to_check[keys_found-1]= table->key_info+i;
> }
> }
> }
> - DBUG_RETURN(result);
> + DBUG_RETURN(keys_found);
> }
>
>
> @@ -2798,6 +2765,47 @@ bool partition_key_modified(TABLE *table, const MY_BITMAP *fields)
> }
>
>
> + /*
> + Check if there are unique keys that not entirely 'inside' the partition
> + key and if any fields of such keys are modified.
> + If it's so, it usually means we have to use tamporary storage for records
"temporary"
> + handling the UPDATE command.
> +
> + SYNOPSIS
please, use doxygen format for new functions
> + partition_unique_modified
> + table TABLE object for which partition fields are set-up
> + fields Bitmap representing fields to be modified
> +
> + RETURN VALUES
> + TRUE Need special handling of UPDATE
> + FALSE Normal UPDATE handling is ok
> +*/
> +
> +bool partition_unique_modified(TABLE *table, const MY_BITMAP *fields)
> +{
> + partition_info *part_info= table->part_info;
> + DBUG_ENTER("partition_unique_modified");
> +
> + if (!part_info)
> + DBUG_RETURN(FALSE);
> +
> + if (part_info->uniques_to_check == 0)
> + DBUG_RETURN(FALSE);
> +
> + for (uint n_key=0; n_key < part_info->n_uniques_to_check; n_key++)
> + {
> + KEY *ckey= part_info->uniques_to_check[n_key];
> + for (uint n_part=0; n_part < ckey->user_defined_key_parts; n_part++)
> + {
> + if (bitmap_is_set(fields, ckey->key_part[n_part].field->field_index))
> + DBUG_RETURN(TRUE);
> + }
> + }
> +
> + DBUG_RETURN(FALSE);
> +}
> +
> +
> /*
> A function to handle correct handling of NULL values in partition
> functions.
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org
Re: [Maria-developers] [Commits] 0f97f6b8398: MDEV-17346 parallel slave start and stop races to workers disappeared
by Kristian Nielsen 05 Oct '18
Hi Andrei!
andrei.elkin(a)pp.inet.fi writes:
> revision-id: 0f97f6b8398054ccb0507fbacc76c9deeddd47a4 (mariadb-10.1.35-71-g0f97f6b8398)
> author: Andrei Elkin
> timestamp: 2018-10-03 15:42:12 +0300
> message:
>
> MDEV-17346 parallel slave start and stop races to workers disappeared
Ooh, that's a nice catch. The code around slave start/stop really is
terribly tricky, glad that you were able to sort this one out.
Patch looks good to push.
What about test case? I assume this was caught from occasional random
failures in buildbot? That should be ok as test case for this bug, I think.
It can be a _lot_ of work to write reliable tests for stuff like this, and
the start/stop slave is stressed quite a bit by the existing replication
tests.
> The bug appears as a slave SQL thread hanging in
> rpl_parallel_thread_pool::get_thread() while there are no slave worker
> threads to awake it.
>
> The hang could occur at parallel slave worker pool activation by a
> "new" started SQL thread when the pool was concurrently deactivated by
> being terminated "old" SQL thread. At reading the current pool size
> the SQL thread did not employ necessary protection designed by
> MDEV-9573. The pool can't be deactivated when there is an active slave
> but the "new" slave might set its active status too late while seeing
> the pool still non-empty. The end product of four computaional events
>
> Old_slave:any_slave_sql_running() => 0
> New_slave:slave_running= "active"
> New_slave:global_rpl_thread_pool.size > 0 => true
> Old_slave:global_rpl_thread_pool.size := 0
>
> could led to the observed hang. The new SQL thread proceeds to scheduling
> having all workers gone.
>
> Fixed with making the SQL thread at the pool activation first
> to grab the same lock as potential deactivator also does prior
> to destroy the pool.
At first I did not see how this fixes the problem. But it is because of the
extra check inside rpl_parallel_change_thread_count():
if (!new_count && !force) {
if (any_slave_sql_running()) {
pool_mark_not_busy(pool);
return 0; // Ok to not resize pool
}
}
So one possibility is that New_slave has set running=active when Old_slave
hits that test. Then Old_slave will leave the pool intact thanks to this
test.
The other possibility is that New_slave has not yet set running=active at
this point, so Old_slave can empty the pool. But then Old_slave already
marked the pool busy. So the patch will ensure that New_slave will see that
the pool has been emptied, and will know to re-populate it before starting.
Seems solid.
Thanks for fixing!
- Kristian.
> ---
> sql/rpl_parallel.cc | 25 ++++++++++++++++++++++---
> 1 file changed, 22 insertions(+), 3 deletions(-)
>
> diff --git a/sql/rpl_parallel.cc b/sql/rpl_parallel.cc
> index 35cddee6d4d..8fef2d66635 100644
> --- a/sql/rpl_parallel.cc
> +++ b/sql/rpl_parallel.cc
> @@ -1617,13 +1617,32 @@ int rpl_parallel_resize_pool_if_no_slaves(void)
> }
>
>
> +/**
> + Pool activation is preceeded by taking a "lock" of pool_mark_busy
> + which guarantees the number of running slaves drops to zero atomicly
> + with the number of pool workers.
> + This resolves race between the function caller thread and one
> + that may be attempting to deactivate the pool.
> +*/
> int
> rpl_parallel_activate_pool(rpl_parallel_thread_pool *pool)
> {
> + int rc= 0;
> +
> + if ((rc= pool_mark_busy(pool, current_thd)))
> + return rc; // killed
> +
> if (!pool->count)
> - return rpl_parallel_change_thread_count(pool, opt_slave_parallel_threads,
> - 0);
> - return 0;
> + {
> + pool_mark_not_busy(pool);
> + rc= rpl_parallel_change_thread_count(pool, opt_slave_parallel_threads,
> + 0);
> + }
> + else
> + {
> + pool_mark_not_busy(pool);
> + }
> + return rc;
> }
Re: [Maria-developers] c45b4a7: MDEV-17200 - pthread_detach called for already detached threads
by Sergei Golubchik 05 Oct '18
Hi, Sergey!
ok to push!
On Sep 21, Sergey Vojtovich wrote:
> revision-id: c45b4a774b6d1404a080a1c1759f780fa78f223b (mariadb-10.1.35-65-gc45b4a7)
> parent(s): 82524239c48a2603f115a6117e061751f6427d08
> author: Sergey Vojtovich
> committer: Sergey Vojtovich
> timestamp: 2018-09-21 16:24:23 +0400
> message:
>
> MDEV-17200 - pthread_detach called for already detached threads
>
> pthread_detach_this_thread() was intended to be defined to something
> meaningful only on some ancient unixes, which don't have
> pthread_attr_setdetachstate() defined. Otherwise, on normal unixes,
> threads are created detached in the first place.
>
> This was broken in 0f01bf267680244ec488adaf65a42838756ed48e so that
> we started calling pthread_detach() for already detached threads.
> Intention was to detach aria checkpoint thread.
>
> However in 87007dc2f71634cc460271eb277ad851ec69c04b aria service threads
> were made joinable with appropriate handling, which makes breaking
> revision unneccessary.
>
> Revert remnants of 0f01bf267680244ec488adaf65a42838756ed48e, so that
> pthread_detach_this_thread() is meaningful only on some ancient unixes
> again.
>
> ---
> include/my_pthread.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/my_pthread.h b/include/my_pthread.h
> index 6b830ca..ae2f912 100644
> --- a/include/my_pthread.h
> +++ b/include/my_pthread.h
> @@ -184,7 +184,7 @@ int pthread_cancel(pthread_t thread);
> #define pthread_key(T,V) pthread_key_t V
> #define my_pthread_getspecific_ptr(T,V) my_pthread_getspecific(T,(V))
> #define my_pthread_setspecific_ptr(T,V) pthread_setspecific(T,(void*) (V))
> -#define pthread_detach_this_thread() { pthread_t tmp=pthread_self() ; pthread_detach(tmp); }
> +#define pthread_detach_this_thread()
> #define pthread_handler_t EXTERNC void *
> typedef void *(* pthread_handler)(void *);
>
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org
Re: [Maria-developers] [Commits] 0f97f6b8398: MDEV-17346 parallel slave start and stop races to workers disappeared
by andrei.elkin@pp.inet.fi 03 Oct '18
Kristian, hello.
Perhaps you could review this worker pool patch though it actually follows up
changes made by MDEV-9573, by Monty.
Cheers,
Andrei
> revision-id: 0f97f6b8398054ccb0507fbacc76c9deeddd47a4 (mariadb-10.1.35-71-g0f97f6b8398)
> parent(s): 1fc5a6f30c3a9c047dcf9a36b00026d98f286f6b
> author: Andrei Elkin
> committer: Andrei Elkin
> timestamp: 2018-10-03 15:42:12 +0300
> message:
>
> MDEV-17346 parallel slave start and stop races to workers disappeared
>
> The bug appears as a slave SQL thread hanging in
> rpl_parallel_thread_pool::get_thread() while there are no slave worker
> threads to awake it.
>
> The hang could occur at parallel slave worker pool activation by a
> "new" started SQL thread when the pool was concurrently deactivated by
> being terminated "old" SQL thread. At reading the current pool size
> the SQL thread did not employ necessary protection designed by
> MDEV-9573. The pool can't be deactivated when there is an active slave
> but the "new" slave might set its active status too late while seeing
> the pool still non-empty. The end product of four computaional events
>
> Old_slave:any_slave_sql_running() => 0
> New_slave:slave_running= "active"
> New_slave:global_rpl_thread_pool.size > 0 => true
> Old_slave:global_rpl_thread_pool.size := 0
>
> could led to the observed hang. The new SQL thread proceeds to scheduling
> having all workers gone.
>
> Fixed with making the SQL thread at the pool activation first
> to grab the same lock as potential deactivator also does prior
> to destroy the pool.
>
> ---
> sql/rpl_parallel.cc | 25 ++++++++++++++++++++++---
> 1 file changed, 22 insertions(+), 3 deletions(-)
>
> diff --git a/sql/rpl_parallel.cc b/sql/rpl_parallel.cc
> index 35cddee6d4d..8fef2d66635 100644
> --- a/sql/rpl_parallel.cc
> +++ b/sql/rpl_parallel.cc
> @@ -1617,13 +1617,32 @@ int rpl_parallel_resize_pool_if_no_slaves(void)
> }
>
>
> +/**
> + Pool activation is preceeded by taking a "lock" of pool_mark_busy
> + which guarantees the number of running slaves drops to zero atomicly
> + with the number of pool workers.
> + This resolves race between the function caller thread and one
> + that may be attempting to deactivate the pool.
> +*/
> int
> rpl_parallel_activate_pool(rpl_parallel_thread_pool *pool)
> {
> + int rc= 0;
> +
> + if ((rc= pool_mark_busy(pool, current_thd)))
> + return rc; // killed
> +
> if (!pool->count)
> - return rpl_parallel_change_thread_count(pool, opt_slave_parallel_threads,
> - 0);
> - return 0;
> + {
> + pool_mark_not_busy(pool);
> + rc= rpl_parallel_change_thread_count(pool, opt_slave_parallel_threads,
> + 0);
> + }
> + else
> + {
> + pool_mark_not_busy(pool);
> + }
> + return rc;
> }
>
>
> _______________________________________________
> commits mailing list
> commits(a)mariadb.org
> https://lists.askmonty.org/cgi-bin/mailman/listinfo/commits
Hi,
I'd like to know the best way to track memory consumption / leaks in MariaDB.
Our application makes intensive use of stored procedures, and with only 35 connections we have to restart MariaDB every 1.5 hours because it uses 15 GB of memory very quickly.
mysqladmin doesn't show excessive memory consumption:
| Max_used_connections | 42 |
| Memory_used | 2422894288 |
| Memory_used_initial | 291786624 |
But:
[root@vmdevmdb ~]# ps -up 20039
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
mysql 20039 10.5 73.8 14574168 12000852 ? Ssl 14:30 11:19 /usr/sbin/mysqld
--> 12 GB of resident memory after only 11 minutes of CPU time.
Best regards
Jérôme.
This message and any attachments are confidential and intended solely for the addressees. Any unauthorized use or disclosure, either whole or partial is prohibited. E-mails are susceptible to alteration; CEGID shall therefore not be liable for the content of this message. If you are not the intended recipient of this message, please delete it and notify the sender.
Hello,
Can I get an explanation of why
https://jira.mariadb.org/browse/CONC-312
was closed as WON'T FIX?
--
MySQL 8.0 implemented "caching_sha2_password" as its default
authentication plugin / method.
MariaDB doesn't support it, so mariadb-connector-c cannot be used
with MySQL 8.
That is a problem for nearly every Linux OS, because they chose MariaDB
over MySQL as the default implementation.
I'd like to know MariaDB's stance on this.
Do you see it as an issue as well?
Are there any plans to deal with it? How?
Michal Schorm
--
Michal Schorm
Associate Software Engineer
Core Services - Databases Team
Red Hat
Re: [Maria-developers] e11e0e6c8b9: MDEV-17167 - InnoDB: Failing assertion: table->get_ref_count() == 0 upon
by Sergey Vojtovich 24 Sep '18
Hi Sergei,
On Mon, Sep 24, 2018 at 03:02:40PM +0200, Sergei Golubchik wrote:
> Hi, Sergey!
>
> On Sep 12, Sergey Vojtovich wrote:
> > revision-id: e11e0e6c8b9a85ca7f16c6b9b28fb505d464a96e (mariadb-10.3.7-186-ge11e0e6c8b9)
> > parent(s): f2f661b848c02ebd7c444ba645b26416d3e4d2ad
> > author: Sergey Vojtovich
> > committer: Sergey Vojtovich
> > timestamp: 2018-09-12 16:36:45 +0400
> > message:
> >
> > MDEV-17167 - InnoDB: Failing assertion: table->get_ref_count() == 0 upon
> > truncating a temporary table
> >
> > TRUNCATE expects only one TABLE instance (which is used by TRUNCATE
> > itself) to be open. However this requirement wasn't enforced after
> > "MDEV-5535: Cannot reopen temporary table".
> >
> > Fixed by closing unused table instances before performing TRUNCATE.
>
> > diff --git a/sql/sql_class.h b/sql/sql_class.h
> > index 6256522bf5c..e7dcd997e05 100644
> > --- a/sql/sql_class.h
> > +++ b/sql/sql_class.h
> > @@ -4612,6 +4612,7 @@ class THD :public Statement,
> >
> > TMP_TABLE_SHARE* save_tmp_table_share(TABLE *table);
> > void restore_tmp_table_share(TMP_TABLE_SHARE *share);
> > + void close_unused_temporary_table_instances(const TABLE_LIST *tl);
> >
> > private:
> > /* Whether a lock has been acquired? */
> > diff --git a/sql/sql_truncate.cc b/sql/sql_truncate.cc
> > index 201825d4593..695fcb538f9 100644
> > --- a/sql/sql_truncate.cc
> > +++ b/sql/sql_truncate.cc
> > @@ -401,6 +401,8 @@ bool Sql_cmd_truncate_table::truncate_table(THD *thd, TABLE_LIST *table_ref)
> > /* In RBR, the statement is not binlogged if the table is temporary. */
> > binlog_stmt= !thd->is_current_stmt_binlog_format_row();
> >
> > + thd->close_unused_temporary_table_instances(table_ref);
>
> Is it only TRUNCATE that needs it? What about ALTER? REPAIR? Other
> similar commands?
Right. Do you think it is worth expanding the scope of this bug and looking
for other possible issues?
> > +*/
> > +
> > +void THD::close_unused_temporary_table_instances(const TABLE_LIST *tl)
> > +{
> > + TMP_TABLE_SHARE *share= find_tmp_table_share(tl);
> > +
> > + if (share)
> > + {
> > + Share_free_tables::List purge_tables;
> > + All_share_tables_list::Iterator tables_it(share->all_tmp_tables);
> > +
> > + while (TABLE *table= tables_it++)
> > + {
> > + if (table->query_id == 0)
> > + purge_tables.push_front(table);
> > + }
> > +
> > + while (TABLE *table= purge_tables.pop_front())
> > + {
> > + share->all_tmp_tables.remove(table);
> > + free_temporary_table(table);
> > + }
>
> Why are you doing it in two loops? Because free_temporary_table()
> invalidates the iterator?
Not free_temporary_table(), but rather share->all_tmp_tables.remove(table).
OTOH it seems to be safe to remove the current element with it++, but not with ++it.
I'll try to remove the extra loop if that works.
Thanks,
Sergey