October 2018
- 14 participants
- 27 discussions
Hello,
can I get a status update on MDEV-13912, please?
It is still an issue for a customer on my side, and I'd love to see it
fixed in the 5.5.62 release.
Michal
--
Michal Schorm
Associate Software Engineer
Core Services - Databases Team
Red Hat
Re: [Maria-developers] cf9bafc: MDEV-14815 - Server crash or AddressSanitizer errors or valgrind warnings
by Sergei Golubchik 19 Oct '18
Hi, Sergey!
On Oct 19, Sergey Vojtovich wrote:
> revision-id: cf9bafc8cb0943c914d118525888140ea15e6b55 (mariadb-10.0.36-50-gcf9bafc)
> parent(s): 8e716138cef2d9be603cdf71701d82bcb72dfd69
> author: Sergey Vojtovich
> committer: Sergey Vojtovich
> timestamp: 2018-10-18 23:50:50 +0400
> message:
>
> MDEV-14815 - Server crash or AddressSanitizer errors or valgrind warnings
> in thr_lock / has_old_lock upon FLUSH TABLES
>
> Explicit partition access of partitioned MEMORY table under LOCK TABLES
> may cause subsequent statements to crash the server, deadlock, trigger
> valgrind warnings or ASAN errors. Freed memory was being used due to
> incorrect cleanup.
>
> At least MyISAM and InnoDB don't seem to be affected, since their
> THR_LOCK structures don't survive FLUSH TABLES. MEMORY keeps table shared
> data (including THR_LOCK) even if there're no open instances.
>
> There's partition_info::lock_partitions bitmap, which holds bits of
> partitions allowed to be accessed after pruning. This bitmap is
> updated for each individual statement.
>
> This bitmap was abused in ha_partition::store_lock() such that when we
> need to unlock a table, locked by LOCK TABLES, only locks for partitions
> that were accessed by previous statement were released.
>
> Eventually FLUSH TABLES frees THR_LOCK_DATA objects, which are still
> linked into THR_LOCK lists. When such THR_LOCK gets reused we end up with
> freed memory access.
>
> Fixed by using ha_partition::m_locked_partitions bitmap similarly to
> ha_partition::external_lock().
Thanks, good comment.
> Some unused code removed.
Generally, it's better to do that in a separate commit.
It's very easy to do with git citool, and doesn't require any advance
planning, you just edit whatever you like and then run 'git citool'
twice, selecting what lines you want to commit each time.
> diff --git a/sql/ha_partition.cc b/sql/ha_partition.cc
> index 0488ebf..02de92a 100644
> --- a/sql/ha_partition.cc
> +++ b/sql/ha_partition.cc
> @@ -3907,9 +3907,14 @@ THR_LOCK_DATA **ha_partition::store_lock(THD *thd,
> }
> else
> {
> - for (i= bitmap_get_first_set(&(m_part_info->lock_partitions));
> + MY_BITMAP *used_partitions= lock_type == TL_UNLOCK ||
> + lock_type == TL_IGNORE ?
why TL_IGNORE too? external_lock only checks for TL_UNLOCK.
> + &m_locked_partitions :
> + &m_part_info->lock_partitions;
> +
> + for (i= bitmap_get_first_set(used_partitions);
> i < m_tot_parts;
> - i= bitmap_get_next_set(&m_part_info->lock_partitions, i))
> + i= bitmap_get_next_set(used_partitions, i))
> {
> DBUG_PRINT("info", ("store lock %d iteration", i));
> to= m_file[i]->store_lock(thd, to, lock_type);
Regards,
Sergei
Chief Architect MariaDB
and security@mariadb.org
Re: [Maria-developers] 7ce7184cf0e: MDEV-17133 dump thread reads from a past position
by Sergei Golubchik 19 Oct '18
Hi, Andrei!
ok to push.
But please add a meaningful commit comment.
Thanks!
On Oct 19, Andrei Elkin wrote:
> revision-id: 7ce7184cf0e354a1c9114bb31df1fb63b1fbbcc7 (mariadb-10.1.35-59-g7ce7184cf0e)
> parent(s): d3a8b5aa9cee24a6397662e30df2e915f45460e0
> author: Andrei Elkin <andrei.elkin@mariadb.com>
> committer: Andrei Elkin <andrei.elkin@mariadb.com>
> timestamp: 2018-09-18 23:21:18 +0300
> message:
>
> MDEV-17133 dump thread reads from a past position
>
> ---
> mysys/mf_iocache.c | 8 +++--
> unittest/sql/mf_iocache-t.cc | 74 +++++++++++++++++++++++++++++++++++++++++++-
> 2 files changed, 79 insertions(+), 3 deletions(-)
>
> diff --git a/mysys/mf_iocache.c b/mysys/mf_iocache.c
> index 56b1ae3fc6e..8f829a8c9c2 100644
> --- a/mysys/mf_iocache.c
> +++ b/mysys/mf_iocache.c
> @@ -563,7 +563,7 @@ int _my_b_write(IO_CACHE *info, const uchar *Buffer, size_t Count)
>
> int _my_b_cache_read(IO_CACHE *info, uchar *Buffer, size_t Count)
> {
> - size_t length, diff_length, left_length= 0, max_length;
> + size_t length= 0, diff_length, left_length= 0, max_length;
> my_off_t pos_in_file;
> DBUG_ENTER("_my_b_cache_read");
>
> @@ -668,7 +668,11 @@ int _my_b_cache_read(IO_CACHE *info, uchar *Buffer, size_t Count)
> else
> {
> info->error= 0;
> - DBUG_RETURN(0); /* EOF */
> + if (length == 0)
> + {
> + DBUG_RETURN(0); // EOF
> + }
> + length= 0;
> }
> }
> else if ((length= mysql_file_read(info->file,info->buffer, max_length,
Regards,
Sergei
Chief Architect MariaDB
and security@mariadb.org
19 Oct '18
Hi Bar,
Currently, if we generate the SQL file using mysqlbinlog -v and the
binlog contains a big statement, mysqlbinlog adds a comment in the middle
of the BINLOG 'XXX' statement. Unfortunately base64_decode does not
understand it and throws an error, since # is not a base64 character.
This patch solves that. There are two patches: one for 5.5 and a second
for 10.0 and above.
Please review them.
Re: [Maria-developers] 9b453c1ebf4: MDEV-10963 Fragmented BINLOG query
by Sergei Golubchik 18 Oct '18
Hi, Andrei!
On Sep 07, andrei.elkin@pp.inet.fi wrote:
> revision-id: 9b453c1ebf47a20ea1436de3284f8810d0e64c4c (mariadb-10.1.34-8-g9b453c1ebf4)
> parent(s): c09a8b5b36edb494e2bcc93074c06e26cd9f2b92
> author: Andrei Elkin
> committer: Andrei Elkin
> timestamp: 2018-09-07 20:36:16 +0300
> message:
>
> MDEV-10963 Fragmented BINLOG query
>
> The problem was originally stated in
> http://bugs.mysql.com/bug.php?id=82212
> The size of an base64-encoded Rows_log_event exceeds its
> vanilla byte representation in 4/3 times.
> When a binlogged event size is about 1GB mysqlbinlog generates
> a BINLOG query that can't be send out due to its size.
>
> It is fixed with fragmenting the BINLOG argument C-string into
> (approximate) halves when the base64 encoded event is over 1GB size.
> The mysqlbinlog in such case puts out
>
> SET @binlog_fragment_0='base64-encoded-fragment_0';
> SET @binlog_fragment_1='base64-encoded-fragment_1';
> BINLOG 2, 'binlog_fragment';
No, that's totally ridiculous. A statement that internally walks the
list of user variables and collects values from variables named
according to a specific pattern, seriously?
Please, make it
BINLOG CONCAT(@binlog_fragment_0, @binlog_fragment_1)
that'll work with no questions asked, everybody understands what it
means. The parser doesn't need to accept an arbitrary expression here,
it'd be simpler and safer to hard-code the syntax as above.
A couple of minor generic comments below, but I've stopped reviewing,
because the code will change anyway.
> to represent a big BINLOG 'base64-encoded-"total"'.
> Two more statements are composed to promptly release memory
> SET @binlog_fragment_0=NULL;
> SET @binlog_fragment_1=NULL;
>
> diff --git a/client/mysqlbinlog.cc b/client/mysqlbinlog.cc
> index 9753125dd67..f0689631cfb 100644
> --- a/client/mysqlbinlog.cc
> +++ b/client/mysqlbinlog.cc
> @@ -56,7 +56,13 @@ Rpl_filter *binlog_filter= 0;
>
> #define BIN_LOG_HEADER_SIZE 4
> #define PROBE_HEADER_LEN (EVENT_LEN_OFFSET+4)
> -
> +/*
> + 2 fragments can always represent near 1GB row-based base64-encoded event as
> + two strings each of size less than max(max_allowed_packet).
> + Bigger number of fragments does not safe from potential need to tune (increase)
> + @@max_allowed_packet before to process the fragments. So 2 is safe and enough.
> +*/
> +#define BINLOG_ROWS_EVENT_ENCODED_FRAGMENTS 2
>
> #define CLIENT_CAPABILITIES (CLIENT_LONG_PASSWORD | CLIENT_LONG_FLAG | CLIENT_LOCAL_FILES)
>
> @@ -71,6 +77,9 @@ ulong bytes_sent = 0L, bytes_received = 0L;
> ulong mysqld_net_retry_count = 10L;
> ulong open_files_limit;
> ulong opt_binlog_rows_event_max_size;
> +#ifndef DBUG_OFF
> +ulong opt_binlog_rows_event_max_encoded_size;
> +#endif
Try to avoid unnecessary ifdefs. Here you could've defined the variable
unconditionally and suppressed the warning with __attribute__((unused)),
or you could've used it unconditionally as a splitting threshold below
and only hidden the command line option.
> uint test_flags = 0;
> static uint opt_protocol= 0;
> static FILE *result_file;
> @@ -813,7 +822,14 @@ write_event_header_and_base64(Log_event *ev, FILE *result_file,
>
> /* Write header and base64 output to cache */
> ev->print_header(head, print_event_info, FALSE);
> - ev->print_base64(body, print_event_info, FALSE);
> +
> + /* the assert states the only current use case for the function */
> + DBUG_ASSERT(print_event_info->base64_output_mode ==
> + BASE64_OUTPUT_ALWAYS);
1. don't document the obvious
2. don't wrap the line if it fits in 80-char line
> +
> + ev->print_base64(body, print_event_info,
> + print_event_info->base64_output_mode !=
> + BASE64_OUTPUT_DECODE_ROWS);
>
> /* Read data from cache and write to result file */
> if (copy_event_cache_to_file_and_reinit(head, result_file) ||
> @@ -1472,6 +1490,15 @@ that may lead to an endless loop.",
> "This value must be a multiple of 256.",
> &opt_binlog_rows_event_max_size, &opt_binlog_rows_event_max_size, 0,
> GET_ULONG, REQUIRED_ARG, UINT_MAX, 256, ULONG_MAX, 0, 256, 0},
> +#ifndef DBUG_OFF
> + {"binlog-row-event-max-encoded-size", 0,
all debug command-line options must start with "debug-", that is,
either rename it to "debug-binlog-row-event-max-encoded-size" or
make it non-debug.
> + "The maximum size of base64-encoded rows-event in one BINLOG pseudo-query "
> + "instance. When the computed actual size exceeds the limit "
> + "the BINLOG's argument string is fragmented in two.",
> + &opt_binlog_rows_event_max_encoded_size,
> + &opt_binlog_rows_event_max_encoded_size, 0,
> + GET_ULONG, REQUIRED_ARG, UINT_MAX/4, 256, ULONG_MAX, 0, 256, 0},
> +#endif
> {"verify-binlog-checksum", 'c', "Verify checksum binlog events.",
> (uchar**) &opt_verify_binlog_checksum, (uchar**) &opt_verify_binlog_checksum,
> 0, GET_BOOL, NO_ARG, 0, 0, 0, 0, 0, 0},
Regards,
Sergei
Chief Architect MariaDB
and security@mariadb.org
Re: [Maria-developers] [Commits] 2f4a0c5be2c: Fix accumulation of old rows in mysql.gtid_slave_pos
by Kristian Nielsen 13 Oct '18
Hi Andrei!
Can you review this patch?
This fixes a rather serious bug in parallel replication that was reported on
maria-discuss@ here: https://lists.launchpad.net/maria-discuss/msg05253.html
The user ended up with millions of rows in mysql.gtid_slave_pos, which is
quite bad.
I think this should go in 10.1. The bug is probably also present in 10.0,
but it will be much harder to trigger since 10.0 does not have optimistic
parallel replication. So leaving it unfixed in 10.0 seems ok.
I plan to manually merge it to 10.2 and 10.3 as well, since the merge to
10.3 is not entirely trivial, due to interaction with
@@gtid_pos_auto_engines.
Github branch here:
https://github.com/knielsen/server/commits/gtid_table_garbage_rows
- Kristian.
Kristian Nielsen <knielsen@knielsen-hq.org> writes:
> revision-id: 2f4a0c5be2c5d5153c4253a49ba8820ab333a9a0 (mariadb-10.1.35-71-g2f4a0c5be2c)
> parent(s): 1fc5a6f30c3a9c047dcf9a36b00026d98f286f6b
> author: Kristian Nielsen
> committer: Kristian Nielsen
> timestamp: 2018-10-07 18:59:52 +0200
> message:
>
> Fix accumulation of old rows in mysql.gtid_slave_pos
>
> This would happen especially in optimistic parallel replication, where there
> is a good chance that a transaction will be rolled back (due to conflicts)
> after it has executed record_gtid(). If the transaction did any deletions of
> old rows as part of record_gtid(), those deletions will be undone as well.
> And the code did not properly ensure that the deletions would be re-tried.
>
> This patch makes record_gtid() remember the list of deletions done as part
> of a transaction. Then in rpl_slave_state::update() when the changes have
> been committed, we discard the list. However, in case of error and rollback,
> in cleanup_context() we will instead put the list back into
> rpl_global_gtid_slave_state so that the deletions will be re-tried later.
>
> Probably fixes part of the cause of MDEV-12147 as well.
>
> Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
>
> ---
> .../suite/rpl/r/rpl_parallel_optimistic.result | 6 ++
> .../suite/rpl/t/rpl_parallel_optimistic.test | 17 +++++
> sql/log_event.cc | 6 +-
> sql/rpl_gtid.cc | 64 ++++++++++++------
> sql/rpl_gtid.h | 2 +-
> sql/rpl_rli.cc | 79 ++++++++++++++++++++++
> sql/rpl_rli.h | 11 +++
> 7 files changed, 161 insertions(+), 24 deletions(-)
>
> diff --git a/mysql-test/suite/rpl/r/rpl_parallel_optimistic.result b/mysql-test/suite/rpl/r/rpl_parallel_optimistic.result
> index 3cd4f8231bf..99bd8562ffe 100644
> --- a/mysql-test/suite/rpl/r/rpl_parallel_optimistic.result
> +++ b/mysql-test/suite/rpl/r/rpl_parallel_optimistic.result
> @@ -571,4 +571,10 @@ SET GLOBAL slave_parallel_mode=@old_parallel_mode;
> SET GLOBAL slave_parallel_threads=@old_parallel_threads;
> include/start_slave.inc
> DROP TABLE t1, t2, t3;
> +include/save_master_gtid.inc
> +include/sync_with_master_gtid.inc
> +Check that no more than the expected last two GTIDs are in mysql.gtid_slave_pos
> +select count(*) from mysql.gtid_slave_pos order by domain_id, sub_id;
> +count(*)
> +2
> include/rpl_end.inc
> diff --git a/mysql-test/suite/rpl/t/rpl_parallel_optimistic.test b/mysql-test/suite/rpl/t/rpl_parallel_optimistic.test
> index 9f6669279db..3867a3fdf3a 100644
> --- a/mysql-test/suite/rpl/t/rpl_parallel_optimistic.test
> +++ b/mysql-test/suite/rpl/t/rpl_parallel_optimistic.test
> @@ -549,5 +549,22 @@ SET GLOBAL slave_parallel_threads=@old_parallel_threads;
>
> --connection server_1
> DROP TABLE t1, t2, t3;
> +--source include/save_master_gtid.inc
> +
> +--connection server_2
> +--source include/sync_with_master_gtid.inc
> +# Check for left-over rows in table mysql.gtid_slave_pos (MDEV-12147).
> +#
> +# There was a bug when a transaction got a conflict and was rolled back. It
> +# might have also handled deletion of some old rows, and these deletions would
> +# then also be rolled back. And since the deletes were never re-tried, old no
> +# longer needed rows would accumulate in the table without limit.
> +#
> +# The earlier part of this test file have plenty of transactions being rolled
> +# back. But the last DROP TABLE statement runs on its own and should never
> +# conflict, thus at this point the mysql.gtid_slave_pos table should be clean.
> +--echo Check that no more than the expected last two GTIDs are in mysql.gtid_slave_pos
> +select count(*) from mysql.gtid_slave_pos order by domain_id, sub_id;
>
> +--connection server_1
> --source include/rpl_end.inc
> diff --git a/sql/log_event.cc b/sql/log_event.cc
> index e1912ad4620..e07b7002398 100644
> --- a/sql/log_event.cc
> +++ b/sql/log_event.cc
> @@ -4429,7 +4429,7 @@ int Query_log_event::do_apply_event(rpl_group_info *rgi,
>
> gtid= rgi->current_gtid;
> if (rpl_global_gtid_slave_state->record_gtid(thd, &gtid, sub_id,
> - true, false))
> + rgi, false))
> {
> int errcode= thd->get_stmt_da()->sql_errno();
> if (!is_parallel_retry_error(rgi, errcode))
> @@ -7132,7 +7132,7 @@ Gtid_list_log_event::do_apply_event(rpl_group_info *rgi)
> {
> if ((ret= rpl_global_gtid_slave_state->record_gtid(thd, &list[i],
> sub_id_list[i],
> - false, false)))
> + NULL, false)))
> return ret;
> rpl_global_gtid_slave_state->update_state_hash(sub_id_list[i], &list[i],
> NULL);
> @@ -7639,7 +7639,7 @@ int Xid_log_event::do_apply_event(rpl_group_info *rgi)
> rgi->gtid_pending= false;
>
> gtid= rgi->current_gtid;
> - err= rpl_global_gtid_slave_state->record_gtid(thd, &gtid, sub_id, true,
> + err= rpl_global_gtid_slave_state->record_gtid(thd, &gtid, sub_id, rgi,
> false);
> if (err)
> {
> diff --git a/sql/rpl_gtid.cc b/sql/rpl_gtid.cc
> index 7b1acf17ef5..94944b5b3e5 100644
> --- a/sql/rpl_gtid.cc
> +++ b/sql/rpl_gtid.cc
> @@ -77,7 +77,7 @@ rpl_slave_state::record_and_update_gtid(THD *thd, rpl_group_info *rgi)
> rgi->gtid_pending= false;
> if (rgi->gtid_ignore_duplicate_state!=rpl_group_info::GTID_DUPLICATE_IGNORE)
> {
> - if (record_gtid(thd, &rgi->current_gtid, sub_id, false, false))
> + if (record_gtid(thd, &rgi->current_gtid, sub_id, NULL, false))
> DBUG_RETURN(1);
> update_state_hash(sub_id, &rgi->current_gtid, rgi);
> }
> @@ -328,6 +328,8 @@ rpl_slave_state::update(uint32 domain_id, uint32 server_id, uint64 sub_id,
> }
> }
> rgi->gtid_ignore_duplicate_state= rpl_group_info::GTID_DUPLICATE_NULL;
> +
> + rgi->pending_gtid_deletes_clear();
> }
>
> if (!(list_elem= (list_element *)my_malloc(sizeof(*list_elem), MYF(MY_WME))))
> @@ -377,15 +379,24 @@ int
> rpl_slave_state::put_back_list(uint32 domain_id, list_element *list)
> {
> element *e;
> + int err= 0;
> +
> + mysql_mutex_lock(&LOCK_slave_state);
> if (!(e= (element *)my_hash_search(&hash, (const uchar *)&domain_id, 0)))
> - return 1;
> + {
> + err= 1;
> + goto end;
> + }
> while (list)
> {
> list_element *next= list->next;
> e->add(list);
> list= next;
> }
> - return 0;
> +
> +end:
> + mysql_mutex_unlock(&LOCK_slave_state);
> + return err;
> }
>
>
> @@ -468,12 +479,12 @@ gtid_check_rpl_slave_state_table(TABLE *table)
> /*
> Write a gtid to the replication slave state table.
>
> - Do it as part of the transaction, to get slave crash safety, or as a separate
> - transaction if !in_transaction (eg. MyISAM or DDL).
> -
> gtid The global transaction id for this event group.
> sub_id Value allocated within the sub_id when the event group was
> read (sub_id must be consistent with commit order in master binlog).
> + rgi rpl_group_info context, if we are recording the gtid transactionally
> + as part of replicating a transactional event. NULL if called from
> + outside of a replicated transaction.
>
> Note that caller must later ensure that the new gtid and sub_id is inserted
> into the appropriate HASH element with rpl_slave_state.add(), so that it can
> @@ -481,13 +492,13 @@ gtid_check_rpl_slave_state_table(TABLE *table)
> */
> int
> rpl_slave_state::record_gtid(THD *thd, const rpl_gtid *gtid, uint64 sub_id,
> - bool in_transaction, bool in_statement)
> + rpl_group_info *rgi, bool in_statement)
> {
> TABLE_LIST tlist;
> int err= 0;
> bool table_opened= false;
> TABLE *table;
> - list_element *elist= 0, *next;
> + list_element *elist= 0, *cur, *next;
> element *elem;
> ulonglong thd_saved_option= thd->variables.option_bits;
> Query_tables_list lex_backup;
> @@ -558,7 +569,7 @@ rpl_slave_state::record_gtid(THD *thd, const rpl_gtid *gtid, uint64 sub_id,
> thd->wsrep_ignore_table= true;
> #endif
>
> - if (!in_transaction)
> + if (!rgi)
> {
> DBUG_PRINT("info", ("resetting OPTION_BEGIN"));
> thd->variables.option_bits&=
> @@ -601,9 +612,9 @@ rpl_slave_state::record_gtid(THD *thd, const rpl_gtid *gtid, uint64 sub_id,
> if ((elist= elem->grab_list()) != NULL)
> {
> /* Delete any old stuff, but keep around the most recent one. */
> - list_element *cur= elist;
> - uint64 best_sub_id= cur->sub_id;
> + uint64 best_sub_id= elist->sub_id;
> list_element **best_ptr_ptr= &elist;
> + cur= elist;
> while ((next= cur->next))
> {
> if (next->sub_id > best_sub_id)
> @@ -636,7 +647,8 @@ rpl_slave_state::record_gtid(THD *thd, const rpl_gtid *gtid, uint64 sub_id,
> table->file->print_error(err, MYF(0));
> goto end;
> }
> - while (elist)
> + cur = elist;
> + while (cur)
> {
> uchar key_buffer[4+8];
>
> @@ -646,9 +658,9 @@ rpl_slave_state::record_gtid(THD *thd, const rpl_gtid *gtid, uint64 sub_id,
> /* `break' does not work inside DBUG_EXECUTE_IF */
> goto dbug_break; });
>
> - next= elist->next;
> + next= cur->next;
>
> - table->field[1]->store(elist->sub_id, true);
> + table->field[1]->store(cur->sub_id, true);
> /* domain_id is already set in table->record[0] from write_row() above. */
> key_copy(key_buffer, table->record[0], &table->key_info[0], 0, false);
> if (table->file->ha_index_read_map(table->record[1], key_buffer,
> @@ -662,8 +674,7 @@ rpl_slave_state::record_gtid(THD *thd, const rpl_gtid *gtid, uint64 sub_id,
> not want to endlessly error on the same element in case of table
> corruption or such.
> */
> - my_free(elist);
> - elist= next;
> + cur= next;
> if (err)
> break;
> }
> @@ -686,18 +697,31 @@ IF_DBUG(dbug_break:, )
> */
> if (elist)
> {
> - mysql_mutex_lock(&LOCK_slave_state);
> put_back_list(gtid->domain_id, elist);
> - mysql_mutex_unlock(&LOCK_slave_state);
> + elist = 0;
> }
>
> ha_rollback_trans(thd, FALSE);
> }
> close_thread_tables(thd);
> - if (in_transaction)
> + if (rgi)
> + {
> thd->mdl_context.release_statement_locks();
> + /*
> + Save the list of old gtid entries we deleted. If this transaction
> + fails later for some reason and is rolled back, the deletion of those
> + entries will be rolled back as well, and we will need to put them back
> + on the to-be-deleted list so we can re-do the deletion. Otherwise
> + redundant rows in mysql.gtid_slave_pos may accumulate if transactions
> + are rolled back and retried after record_gtid().
> + */
> + rgi->pending_gtid_deletes_save(gtid->domain_id, elist);
> + }
> else
> + {
> thd->mdl_context.release_transactional_locks();
> + rpl_group_info::pending_gtid_deletes_free(elist);
> + }
> }
> thd->lex->restore_backup_query_tables_list(&lex_backup);
> thd->variables.option_bits= thd_saved_option;
> @@ -1080,7 +1104,7 @@ rpl_slave_state::load(THD *thd, char *state_from_master, size_t len,
>
> if (gtid_parser_helper(&state_from_master, end, &gtid) ||
> !(sub_id= next_sub_id(gtid.domain_id)) ||
> - record_gtid(thd, &gtid, sub_id, false, in_statement) ||
> + record_gtid(thd, &gtid, sub_id, NULL, in_statement) ||
> update(gtid.domain_id, gtid.server_id, sub_id, gtid.seq_no, NULL))
> return 1;
> if (state_from_master == end)
> diff --git a/sql/rpl_gtid.h b/sql/rpl_gtid.h
> index 79d566bddbf..7bd639b768f 100644
> --- a/sql/rpl_gtid.h
> +++ b/sql/rpl_gtid.h
> @@ -182,7 +182,7 @@ struct rpl_slave_state
> uint64 seq_no, rpl_group_info *rgi);
> int truncate_state_table(THD *thd);
> int record_gtid(THD *thd, const rpl_gtid *gtid, uint64 sub_id,
> - bool in_transaction, bool in_statement);
> + rpl_group_info *rgi, bool in_statement);
> uint64 next_sub_id(uint32 domain_id);
> int iterate(int (*cb)(rpl_gtid *, void *), void *data,
> rpl_gtid *extra_gtids, uint32 num_extra,
> diff --git a/sql/rpl_rli.cc b/sql/rpl_rli.cc
> index 64a1b535307..b35130c1505 100644
> --- a/sql/rpl_rli.cc
> +++ b/sql/rpl_rli.cc
> @@ -1680,6 +1680,7 @@ rpl_group_info::reinit(Relay_log_info *rli)
> long_find_row_note_printed= false;
> did_mark_start_commit= false;
> gtid_ev_flags2= 0;
> + pending_gtid_delete_list= NULL;
> last_master_timestamp = 0;
> gtid_ignore_duplicate_state= GTID_DUPLICATE_NULL;
> speculation= SPECULATE_NO;
> @@ -1804,6 +1805,12 @@ void rpl_group_info::cleanup_context(THD *thd, bool error)
> erroneously update the GTID position.
> */
> gtid_pending= false;
> +
> + /*
> + Rollback will have undone any deletions of old rows we might have made
> + in mysql.gtid_slave_pos. Put those rows back on the list to be deleted.
> + */
> + pending_gtid_deletes_put_back();
> }
> m_table_map.clear_tables();
> slave_close_thread_tables(thd);
> @@ -2027,6 +2034,78 @@ rpl_group_info::unmark_start_commit()
> }
>
>
> +/*
> + When record_gtid() has deleted any old rows from the table
> + mysql.gtid_slave_pos as part of a replicated transaction, save the list of
> + rows deleted here.
> +
> + If later the transaction fails (eg. optimistic parallel replication), the
> + deletes will be undone when the transaction is rolled back. Then we can
> + put back the list of rows into the rpl_global_gtid_slave_state, so that
> + we can re-do the deletes and avoid accumulating old rows in the table.
> +*/
> +void
> +rpl_group_info::pending_gtid_deletes_save(uint32 domain_id,
> + rpl_slave_state::list_element *list)
> +{
> + /*
> + We should never get to a state where we try to save a new pending list of
> + gtid deletes while we still have an old one. But make sure we handle it
> + anyway just in case, so we avoid leaving stray entries in the
> + mysql.gtid_slave_pos table.
> + */
> + DBUG_ASSERT(!pending_gtid_delete_list);
> + if (unlikely(pending_gtid_delete_list))
> + pending_gtid_deletes_put_back();
> +
> + pending_gtid_delete_list= list;
> + pending_gtid_delete_list_domain= domain_id;
> +}
> +
> +
> +/*
> + Take the list recorded by pending_gtid_deletes_save() and put it back into
> + rpl_global_gtid_slave_state. This is needed if deletion of the rows was
> + rolled back due to transaction failure.
> +*/
> +void
> +rpl_group_info::pending_gtid_deletes_put_back()
> +{
> + if (pending_gtid_delete_list)
> + {
> + rpl_global_gtid_slave_state->put_back_list(pending_gtid_delete_list_domain,
> + pending_gtid_delete_list);
> + pending_gtid_delete_list= NULL;
> + }
> +}
> +
> +
> +/*
> + Free the list recorded by pending_gtid_deletes_save(). Done when the deletes
> + in the list have been permanently committed.
> +*/
> +void
> +rpl_group_info::pending_gtid_deletes_clear()
> +{
> + pending_gtid_deletes_free(pending_gtid_delete_list);
> + pending_gtid_delete_list= NULL;
> +}
> +
> +
> +void
> +rpl_group_info::pending_gtid_deletes_free(rpl_slave_state::list_element *list)
> +{
> + rpl_slave_state::list_element *next;
> +
> + while (list)
> + {
> + next= list->next;
> + my_free(list);
> + list= next;
> + }
> +}
> +
> +
> rpl_sql_thread_info::rpl_sql_thread_info(Rpl_filter *filter)
> : rpl_filter(filter)
> {
> diff --git a/sql/rpl_rli.h b/sql/rpl_rli.h
> index 74d5b6fe416..b40a34a54e6 100644
> --- a/sql/rpl_rli.h
> +++ b/sql/rpl_rli.h
> @@ -676,6 +676,11 @@ struct rpl_group_info
> /* Needs room for "Gtid D-S-N\x00". */
> char gtid_info_buf[5+10+1+10+1+20+1];
>
> + /* List of not yet committed deletions in mysql.gtid_slave_pos. */
> + rpl_slave_state::list_element *pending_gtid_delete_list;
> + /* Domain associated with pending_gtid_delete_list. */
> + uint32 pending_gtid_delete_list_domain;
> +
> /*
> The timestamp, from the master, of the commit event.
> Used to do delayed update of rli->last_master_timestamp, for getting
> @@ -817,6 +822,12 @@ struct rpl_group_info
> char *gtid_info();
> void unmark_start_commit();
>
> + static void pending_gtid_deletes_free(rpl_slave_state::list_element *list);
> + void pending_gtid_deletes_save(uint32 domain_id,
> + rpl_slave_state::list_element *list);
> + void pending_gtid_deletes_put_back();
> + void pending_gtid_deletes_clear();
> +
> time_t get_row_stmt_start_timestamp()
> {
> return row_stmt_start_timestamp;
> _______________________________________________
> commits mailing list
> commits@mariadb.org
> https://lists.askmonty.org/cgi-bin/mailman/listinfo/commits
Hello Alexander,
As stored procedures seem to be very memory-hungry, I found two possible optimizations to avoid keeping three strings containing the stored procedure code in memory.
Probably the same optimizations can be applied to packages.
I haven't added any tests because I think everything is covered by the existing ones.
What do you think?
As a second step, I am thinking of working on the SP cache.
Currently it's a simple hash table, and the whole table is flushed as soon as the threshold (stored_program_cache) is reached.
That's not very good.
We could:
- use a more intelligent cache (LFU?)
- add some performance counters
Regards.
This message and any attachments are confidential and intended solely for the addressees. Any unauthorized use or disclosure, either whole or partial is prohibited. E-mails are susceptible to alteration; CEGID shall therefore not be liable for the content of this message. If you are not the intended recipient of this message, please delete it and notify the sender.
[Maria-developers] [beginner] how can I assign an existing ticket to myself?
by Takashi Sasaki 12 Oct '18
Hello,
I wanted to contribute something to MariaDB, so I signed up for Jira.
How can I assign an existing ticket to myself?
[programming experience]:
I have been programming for over 10 years, recently using Java, Python,
and Node.js at work; I played with C++ a little a few years ago.
[knowledge of the MariaDB source]
I just started reading the code recently and I am almost an amateur.
[how much you know about using MySQL/MariaDB]
I use MySQL as a WordPress backend and Oracle at work, so I have
general RDBMS knowledge, but I do not have deep knowledge specific to
these products.
Thanks,
Takashi Sasaki
Re: [Maria-developers] [Commits] 1edec241473: MDEV-17137: Syntax errors with VIEW using MEDIAN
by Sergey Petrunia 11 Oct '18
Hi Varun,
On Wed, Sep 12, 2018 at 06:13:23PM +0530, Varun wrote:
> revision-id: 1edec241473a122553fa9a27be995f3aef52654e (mariadb-10.3.7-171-g1edec241473)
> parent(s): 5a1868b58d26b286b6ad433096e7184895953311
> author: Varun Gupta
> committer: Varun Gupta
> timestamp: 2018-09-12 18:10:16 +0530
> message:
>
> MDEV-17137: Syntax errors with VIEW using MEDIAN
>
> The syntax error happened because we had not implemented a different print for
> percentile functions. The syntax is a bit different when we use percentile functions
> as window functions in comparision to normal window functions.
> Implemented a seperate print function for percentile functions
>
...
> diff --git a/sql/item_windowfunc.h b/sql/item_windowfunc.h
> index b3e23748246..268182a2788 100644
> --- a/sql/item_windowfunc.h
> +++ b/sql/item_windowfunc.h
> @@ -1311,6 +1311,7 @@ class Item_window_func : public Item_func_or_sum
> bool resolve_window_name(THD *thd);
>
> void print(String *str, enum_query_type query_type);
> + void print_percentile_functions(String *str, enum_query_type query_type);
>
* Please make this function private
* I think "print_percentile_functions" is a bad name as it implies multiple
functions are printed. In fact, only one function is printed. How about
"print_for_percentile_functions"?
Ok to push after the above is addressed.
BR
Sergei
--
Sergei Petrunia, Software Developer
MariaDB Corporation | Skype: sergefp | Blog: http://s.petrunia.net/blog
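For context, the syntax difference the patch has to print can be sketched roughly as follows. This is an illustrative Python model (the names and helpers are mine, not MariaDB's C++ code): MEDIAN(c) is shorthand for PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY c), so printing it like an ordinary window function yields text the parser later rejects when the view is re-read.

```python
# Hypothetical sketch, not the server's actual implementation: why
# percentile window functions need their own print path when a VIEW
# stores the printed SQL text.

def print_window_func(name, args, window):
    """Ordinary window function: func(args) OVER (window)."""
    return f"{name}({', '.join(args)}) OVER ({window})"

def print_for_percentile_func(name, fraction, order_col, window):
    """Percentile function: the ordering argument goes into WITHIN GROUP."""
    return (f"{name}({fraction}) WITHIN GROUP (ORDER BY {order_col}) "
            f"OVER ({window})")

# An ordinary window function prints fine with the generic path ...
print(print_window_func("row_number", [], "PARTITION BY g"))
# ... but a percentile function printed that way would lose the
# WITHIN GROUP clause, producing the MDEV-17137 syntax error.
print(print_for_percentile_func("percentile_cont", 0.5, "c", "PARTITION BY g"))
```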
Re: [Maria-developers] bb2db687a05: 26 aug (MDEV-371 Unique indexes for blobs)
by Sergei Golubchik 10 Oct '18
Hi, Sachin!
Could you please explain what you store in the frm file and how you modify
keyinfo/keypart structures in memory?
Needs to be tested with partitioned tables. Particularly with MDEV-14005.
The tests were great and thorough, good job.
It'd be useful to have at least some tests for innodb too, though.
See other comments below.
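The idea under review can be sketched in a few lines (assumptions mine, not the patch's actual code): a UNIQUE key on a blob is enforced through a hidden 8-byte hash column (the db_row_hash_N fields seen in the tests below). The engine indexes only the hash; on a hash match it compares the full stored values to rule out collisions.

```python
# Rough model of MDEV-371's hash-based long unique key; the hash
# function and class are stand-ins, only the scheme is the point.
import hashlib

class LongUniqueTable:
    def __init__(self):
        self.rows = []            # stored blob values
        self.hash_index = {}      # 64-bit hash -> list of row ids

    @staticmethod
    def row_hash(value: bytes) -> int:
        # Stand-in for the server's hash; only the 8-byte width matters.
        return int.from_bytes(hashlib.sha256(value).digest()[:8], "big")

    def insert(self, value):
        if value is not None:     # NULLs never collide, as with SQL UNIQUE
            h = self.row_hash(value)
            for rid in self.hash_index.get(h, []):
                if self.rows[rid] == value:   # true duplicate, not a collision
                    raise ValueError(f"Duplicate entry {value!r}")
            self.hash_index.setdefault(h, []).append(len(self.rows))
        self.rows.append(value)

t = LongUniqueTable()
t.insert(b"sachin")
t.insert(None)
t.insert(None)                    # two NULLs are fine
try:
    t.insert(b"sachin")           # duplicate is rejected
except ValueError as e:
    print(e)
```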
On Oct 08, Sachin Setiya wrote:
> diff --git a/mysql-test/main/long_unique.result b/mysql-test/main/long_unique.result
> new file mode 100644
> index 00000000000..1a06c230c72
> --- /dev/null
> +++ b/mysql-test/main/long_unique.result
> @@ -0,0 +1,1391 @@
> +#Structure of tests
> +#First we will check all option for
> +#table containing single unique column
> +#table containing keys like unique(a,b,c,d) etc
> +#then table containing 2 blob unique etc
> +set @allowed_packet= @@max_allowed_packet;
> +#table with single long blob column;
> +create table t1(a blob unique);
> +insert into t1 values(1),(2),(3),(56),('sachin'),('maria'),(123456789034567891),(null),(null),(123456789034567890);
> +#table structure;
> +desc t1;
> +Field Type Null Key Default Extra
> +a blob YES UNI NULL
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + UNIQUE KEY `a` (`a`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table t1
> +Non_unique 0
> +Key_name a
> +Seq_in_index 1
> +Column_name a
> +Collation A
> +Cardinality NULL
> +Sub_part NULL
> +Packed NULL
> +Null YES
> +Index_type HASH_INDEX
I'm not sure about it. One cannot write UNIQUE (a) USING HASH_INDEX
in CREATE TABLE, right?
Maybe just report HASH here? (and fix the parser to behave accordingly)
> +Comment
> +Index_comment
> +
> +MyISAM file: DATADIR/test/t1
> +Record format: Packed
> +Character set: latin1_swedish_ci (8)
> +Data records: 10 Deleted blocks: 0
> +Recordlength: 20
> +
> +table description:
> +Key Start Len Index Type
> +1 12 8 multip. ulonglong NULL
good idea
> +select * from information_schema.columns where table_schema = 'test' and table_name = 't1';
> +TABLE_CATALOG TABLE_SCHEMA TABLE_NAME COLUMN_NAME ORDINAL_POSITION COLUMN_DEFAULT IS_NULLABLE DATA_TYPE CHARACTER_MAXIMUM_LENGTH CHARACTER_OCTET_LENGTH NUMERIC_PRECISION NUMERIC_SCALE DATETIME_PRECISION CHARACTER_SET_NAME COLLATION_NAME COLUMN_TYPE COLUMN_KEY EXTRA PRIVILEGES COLUMN_COMMENT IS_GENERATED GENERATION_EXPRESSION
> +def test t1 a 1 NULL YES blob 65535 65535 NULL NULL NULL NULL NULL blob UNI select,insert,update,references NEVER NULL
> +select * from information_schema.statistics where table_schema = 'test' and table_name = 't1';
> +TABLE_CATALOG TABLE_SCHEMA TABLE_NAME NON_UNIQUE INDEX_SCHEMA INDEX_NAME SEQ_IN_INDEX COLUMN_NAME COLLATION CARDINALITY SUB_PART PACKED NULLABLE INDEX_TYPE COMMENT INDEX_COMMENT
> +def test t1 0 test a 1 a A NULL NULL NULL YES HASH_INDEX
> +select * from information_schema.key_column_usage where table_schema= 'test' and table_name= 't1';
> +CONSTRAINT_CATALOG def
> +CONSTRAINT_SCHEMA test
> +CONSTRAINT_NAME a
> +TABLE_CATALOG def
> +TABLE_SCHEMA test
> +TABLE_NAME t1
> +COLUMN_NAME a
> +ORDINAL_POSITION 1
> +POSITION_IN_UNIQUE_CONSTRAINT NULL
> +REFERENCED_TABLE_SCHEMA NULL
> +REFERENCED_TABLE_NAME NULL
> +REFERENCED_COLUMN_NAME NULL
> +# table select we should not be able to see db_row_hash_column;
> +select * from t1;
> +a
> +1
> +2
> +3
> +56
> +sachin
> +maria
> +123456789034567891
> +NULL
> +NULL
> +123456789034567890
> +select db_row_hash_1 from t1;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 'field list'
> +#duplicate entry test;
> +insert into t1 values(2);
> +ERROR 23000: Duplicate entry '2' for key 'a'
> +insert into t1 values('sachin');
> +ERROR 23000: Duplicate entry 'sachin' for key 'a'
> +insert into t1 values(123456789034567891);
> +ERROR 23000: Duplicate entry '123456789034567891' for key 'a'
looks great :)
> +select * from t1;
> +a
> +1
> +2
> +3
> +56
> +sachin
> +maria
> +123456789034567891
> +NULL
> +NULL
> +123456789034567890
> +insert into t1 values(11),(22),(33);
> +insert into t1 values(12),(22);
> +ERROR 23000: Duplicate entry '22' for key 'a'
> +select * from t1;
> +a
> +1
> +2
> +3
> +56
> +sachin
> +maria
> +123456789034567891
> +NULL
> +NULL
> +123456789034567890
> +11
> +22
> +33
> +12
> +insert into t1 values(repeat('s',4000*10)),(repeat('s',4001*10));
> +insert into t1 values(repeat('m',4000*10)),(repeat('m',4000*10));
> +ERROR 23000: Duplicate entry 'mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm' for key 'a'
> +insert into t1 values(repeat('m',4001)),(repeat('m',4002));
> +truncate table t1;
> +insert into t1 values(1),(2),(3),(4),(5),(8),(7);
> +
> +MyISAM file: DATADIR/test/t1
> +Record format: Packed
> +Character set: latin1_swedish_ci (8)
> +Data records: 7 Deleted blocks: 0
> +Recordlength: 20
> +
> +table description:
> +Key Start Len Index Type
> +1 12 8 multip. ulonglong NULL
> +#now some alter commands;
> +alter table t1 add column b int;
> +desc t1;
> +Field Type Null Key Default Extra
> +a blob YES UNI NULL
> +b int(11) YES NULL
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` int(11) DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(65535))
I think there shouldn't be (65535) here
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +insert into t1 values(1,2);
> +ERROR 23000: Duplicate entry '1' for key 'a'
> +insert into t1 values(2,2);
> +ERROR 23000: Duplicate entry '2' for key 'a'
> +select db_row_hash_1 from t1;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 'field list'
> +#now try to change db_row_hash_1 column;
> +alter table t1 drop column db_row_hash_1;
> +ERROR 42000: Can't DROP COLUMN `db_row_hash_1`; check that it exists
> +alter table t1 add column d int , add column e int , drop column db_row_hash_1;
> +ERROR 42000: Can't DROP COLUMN `db_row_hash_1`; check that it exists
> +alter table t1 modify column db_row_hash_1 int ;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +alter table t1 add column a int , add column b int, modify column db_row_hash_1 int ;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +alter table t1 change column db_row_hash_1 dsds int;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +alter table t1 add column asd int, change column db_row_hash_1 dsds int;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +alter table t1 drop column b , add column c int;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `c` int(11) DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(65535))
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +#now add some column with name db_row_hash;
> +alter table t1 add column db_row_hash_1 int unique;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `c` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(65535)),
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +insert into t1 values(45,1,55),(46,1,55);
> +ERROR 23000: Duplicate entry '55' for key 'db_row_hash_1'
> +alter table t1 add column db_row_hash_2 int, add column db_row_hash_3 int;
> +desc t1;
> +Field Type Null Key Default Extra
> +a blob YES UNI NULL
> +c int(11) YES NULL
> +db_row_hash_1 int(11) YES UNI NULL
> +db_row_hash_2 int(11) YES NULL
> +db_row_hash_3 int(11) YES NULL
> +#this should also drop the unique index ;
> +alter table t1 drop column a;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `c` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + `db_row_hash_3` int(11) DEFAULT NULL,
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 db_row_hash_1 1 db_row_hash_1 A NULL NULL NULL YES BTREE
> +#add column with unique index on blob ;
> +alter table t1 add column a blob unique;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `c` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + `db_row_hash_3` int(11) DEFAULT NULL,
> + `a` blob DEFAULT NULL,
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`),
> + UNIQUE KEY `a` (`a`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +# try to change the blob unique column name;
> +#this will change index to b tree;
> +alter table t1 modify column a int ;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `c` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + `db_row_hash_3` int(11) DEFAULT NULL,
> + `a` int(11) DEFAULT NULL,
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`),
> + UNIQUE KEY `a` (`a`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 db_row_hash_1 1 db_row_hash_1 A NULL NULL NULL YES BTREE
> +t1 0 a 1 a A NULL NULL NULL YES BTREE
> +alter table t1 add column clm blob unique;
> +#try changing the name ;
> +alter table t1 change column clm clm_changed blob;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `c` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + `db_row_hash_3` int(11) DEFAULT NULL,
> + `a` int(11) DEFAULT NULL,
> + `clm_changed` blob DEFAULT NULL,
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`),
> + UNIQUE KEY `a` (`a`),
> + UNIQUE KEY `clm` (`clm_changed`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 db_row_hash_1 1 db_row_hash_1 A NULL NULL NULL YES BTREE
> +t1 0 a 1 a A NULL NULL NULL YES BTREE
> +t1 0 clm 1 clm_changed A NULL NULL NULL YES HASH_INDEX
> +#now drop the unique key;
> +alter table t1 drop key clm;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `c` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + `db_row_hash_3` int(11) DEFAULT NULL,
> + `a` int(11) DEFAULT NULL,
> + `clm_changed` blob DEFAULT NULL,
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`),
> + UNIQUE KEY `a` (`a`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 db_row_hash_1 1 db_row_hash_1 A NULL NULL NULL YES BTREE
> +t1 0 a 1 a A NULL NULL NULL YES BTREE
> +drop table t1;
> +create table t1 (a TEXT CHARSET latin1 COLLATE latin1_german2_ci unique);
> +desc t1;
> +Field Type Null Key Default Extra
> +a text YES UNI NULL
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 a 1 a A NULL NULL NULL YES HASH_INDEX
> +insert into t1 values ('ae');
> +insert into t1 values ('AE');
> +ERROR 23000: Duplicate entry 'AE' for key 'a'
> +insert into t1 values ('Ä');
this should've failed, shouldn't it?
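The expectation here comes from the collation: latin1_german2_ci ("phone book" / DIN-2 order) expands umlauts, so 'Ä' collates equal to 'AE' and, case-insensitively, to 'ae'. A quick sketch of the equivalence (the expansion table here is illustrative, not the server's actual collation data):

```python
# Simplified model of latin1_german2_ci: case-insensitive (_ci)
# comparison plus DIN-2 expansions, to show why inserting 'Ä' after
# 'ae' should hit the unique-key duplicate check.
GERMAN2_EXPANSIONS = {"ä": "ae", "ö": "oe", "ü": "ue", "ß": "ss"}

def german2_key(s: str) -> str:
    """Collation key: lowercase, then expand umlauts and sharp s."""
    return "".join(GERMAN2_EXPANSIONS.get(ch, ch) for ch in s.lower())

# All three strings collapse to the same key, so the hash of the
# collation key (not of the raw bytes) must be what gets indexed.
print(german2_key("ae"), german2_key("AE"), german2_key("Ä"))
```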
> +drop table t1;
> +create table t1 (a int primary key, b blob unique);
> +desc t1;
> +Field Type Null Key Default Extra
> +a int(11) NO PRI NULL
> +b blob YES UNI NULL
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 PRIMARY 1 a A 0 NULL NULL BTREE
> +t1 0 b 1 b A NULL NULL NULL YES HASH_INDEX
> +insert into t1 values(1,1),(2,2),(3,3);
> +insert into t1 values(1,1);
> +ERROR 23000: Duplicate entry '1' for key 'b'
> +insert into t1 values(7,1);
> +ERROR 23000: Duplicate entry '1' for key 'b'
> +drop table t1;
> +#table with multiple long blob column and varchar text column ;
> +create table t1(a blob unique, b int , c blob unique , d text unique , e varchar(3000) unique);
> +insert into t1 values(1,2,3,4,5),(2,11,22,33,44),(3111,222,333,444,555),(5611,2222,3333,4444,5555),
> +('sachin',341,'fdf','gfgfgfg','hghgr'),('maria',345,'frter','dasd','utyuty'),
> +(123456789034567891,353534,53453453453456,64565464564564,45435345345345),
> +(123456789034567890,43545,657567567567,78967657567567,657567567567567676);
> +#table structure;
> +desc t1;
> +Field Type Null Key Default Extra
> +a blob YES UNI NULL
> +b int(11) YES NULL
> +c blob YES UNI NULL
> +d text YES UNI NULL
> +e varchar(3000) YES UNI NULL
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` int(11) DEFAULT NULL,
> + `c` blob DEFAULT NULL,
> + `d` text DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + UNIQUE KEY `a` (`a`),
> + UNIQUE KEY `c` (`c`),
> + UNIQUE KEY `d` (`d`),
> + UNIQUE KEY `e` (`e`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 a 1 a A NULL NULL NULL YES HASH_INDEX
> +t1 0 c 1 c A NULL NULL NULL YES HASH_INDEX
> +t1 0 d 1 d A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 1 e A NULL NULL NULL YES HASH_INDEX
> +
> +MyISAM file: DATADIR/test/t1
> +Record format: Packed
> +Character set: latin1_swedish_ci (8)
> +Data records: 8 Deleted blocks: 0
> +Recordlength: 3072
> +
> +table description:
> +Key Start Len Index Type
> +1 3063 8 multip. ulonglong NULL
> +2 3055 8 multip. ulonglong NULL
> +3 3047 8 multip. ulonglong NULL
> +4 3039 8 multip. ulonglong prefix NULL
why "prefix" here? how is it different from others?
> +select * from information_schema.columns where table_schema = 'test' and table_name = 't1';
> +TABLE_CATALOG TABLE_SCHEMA TABLE_NAME COLUMN_NAME ORDINAL_POSITION COLUMN_DEFAULT IS_NULLABLE DATA_TYPE CHARACTER_MAXIMUM_LENGTH CHARACTER_OCTET_LENGTH NUMERIC_PRECISION NUMERIC_SCALE DATETIME_PRECISION CHARACTER_SET_NAME COLLATION_NAME COLUMN_TYPE COLUMN_KEY EXTRA PRIVILEGES COLUMN_COMMENT IS_GENERATED GENERATION_EXPRESSION
> +def test t1 a 1 NULL YES blob 65535 65535 NULL NULL NULL NULL NULL blob UNI select,insert,update,references NEVER NULL
> +def test t1 b 2 NULL YES int NULL NULL 10 0 NULL NULL NULL int(11) select,insert,update,references NEVER NULL
> +def test t1 c 3 NULL YES blob 65535 65535 NULL NULL NULL NULL NULL blob UNI select,insert,update,references NEVER NULL
> +def test t1 d 4 NULL YES text 65535 65535 NULL NULL NULL latin1 latin1_swedish_ci text UNI select,insert,update,references NEVER NULL
> +def test t1 e 5 NULL YES varchar 3000 3000 NULL NULL NULL latin1 latin1_swedish_ci varchar(3000) UNI select,insert,update,references NEVER NULL
> +select * from information_schema.statistics where table_schema = 'test' and table_name = 't1';
> +TABLE_CATALOG TABLE_SCHEMA TABLE_NAME NON_UNIQUE INDEX_SCHEMA INDEX_NAME SEQ_IN_INDEX COLUMN_NAME COLLATION CARDINALITY SUB_PART PACKED NULLABLE INDEX_TYPE COMMENT INDEX_COMMENT
> +def test t1 0 test a 1 a A NULL NULL NULL YES HASH_INDEX
> +def test t1 0 test c 1 c A NULL NULL NULL YES HASH_INDEX
> +def test t1 0 test d 1 d A NULL NULL NULL YES HASH_INDEX
> +def test t1 0 test e 1 e A NULL NULL NULL YES HASH_INDEX
> +select * from information_schema.key_column_usage where table_schema= 'test' and table_name= 't1';
> +CONSTRAINT_CATALOG CONSTRAINT_SCHEMA CONSTRAINT_NAME TABLE_CATALOG TABLE_SCHEMA TABLE_NAME COLUMN_NAME ORDINAL_POSITION POSITION_IN_UNIQUE_CONSTRAINT REFERENCED_TABLE_SCHEMA REFERENCED_TABLE_NAME REFERENCED_COLUMN_NAME
> +def test a def test t1 a 1 NULL NULL NULL NULL
> +def test c def test t1 c 1 NULL NULL NULL NULL
> +def test d def test t1 d 1 NULL NULL NULL NULL
> +def test e def test t1 e 1 NULL NULL NULL NULL
> +#table select we should not be able to see db_row_hash_1 column;
> +select * from t1;
> +a b c d e
> +1 2 3 4 5
> +2 11 22 33 44
> +3111 222 333 444 555
> +5611 2222 3333 4444 5555
> +sachin 341 fdf gfgfgfg hghgr
> +maria 345 frter dasd utyuty
> +123456789034567891 353534 53453453453456 64565464564564 45435345345345
> +123456789034567890 43545 657567567567 78967657567567 657567567567567676
> +select db_row_hash_1 from t1;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 'field list'
> +select db_row_hash_2 from t1;
> +ERROR 42S22: Unknown column 'db_row_hash_2' in 'field list'
> +select db_row_hash_3 from t1;
> +ERROR 42S22: Unknown column 'db_row_hash_3' in 'field list'
> +#duplicate entry test;
> +insert into t1 values(21,2,3,42,51);
> +ERROR 23000: Duplicate entry '3' for key 'c'
> +insert into t1 values('sachin',null,null,null,null);
> +ERROR 23000: Duplicate entry 'sachin' for key 'a'
> +insert into t1 values(1234567890345671890,4353451,6575675675617,789676575675617,657567567567567676);
> +ERROR 23000: Duplicate entry '657567567567567676' for key 'e'
> +select * from t1;
> +a b c d e
> +1 2 3 4 5
> +2 11 22 33 44
> +3111 222 333 444 555
> +5611 2222 3333 4444 5555
> +sachin 341 fdf gfgfgfg hghgr
> +maria 345 frter dasd utyuty
> +123456789034567891 353534 53453453453456 64565464564564 45435345345345
> +123456789034567890 43545 657567567567 78967657567567 657567567567567676
> +insert into t1 values(repeat('s',4000*10),100,repeat('s',4000*10),repeat('s',4000*10),
> +repeat('s',400)),(repeat('s',4001*10),1000,repeat('s',4001*10),repeat('s',4001*10),
> +repeat('s',2995));
> +insert into t1 values(repeat('m',4000*11),10,repeat('s',4000*11),repeat('s',4000*11),repeat('s',2995));
> +ERROR 23000: Duplicate entry 'ssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss' for key 'e'
> +truncate table t1;
> +insert into t1 values(1,2,3,4,5),(2,11,22,33,44),(3111,222,333,444,555),(5611,2222,3333,4444,5555);
> +#now some alter commands;
> +alter table t1 add column f int;
> +desc t1;
> +Field Type Null Key Default Extra
> +a blob YES UNI NULL
> +b int(11) YES NULL
> +c blob YES UNI NULL
> +d text YES UNI NULL
> +e varchar(3000) YES UNI NULL
> +f int(11) YES NULL
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` int(11) DEFAULT NULL,
> + `c` blob DEFAULT NULL,
> + `d` text DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` int(11) DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(65535)),
> + UNIQUE KEY `c` (`c`(65535)),
> + UNIQUE KEY `d` (`d`(65535)),
> + UNIQUE KEY `e` (`e`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +#unique key should not break;
> +insert into t1 values(1,2,3,4,5,6);
> +ERROR 23000: Duplicate entry '1' for key 'a'
> +select db_row_hash_1 , db_row_hash_2, db_row_hash_3 from t1;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 'field list'
> +#now try to change db_row_hash_1 column;
> +alter table t1 drop column db_row_hash_1, drop column db_row_hash_2, drop column db_row_hash_3;
> +ERROR 42000: Can't DROP COLUMN `db_row_hash_1`; check that it exists
> +alter table t1 add column dg int , add column ef int , drop column db_row_hash_1;
> +ERROR 42000: Can't DROP COLUMN `db_row_hash_1`; check that it exists
> +alter table t1 modify column db_row_hash_1 int , modify column db_row_hash_2 int, modify column db_row_hash_3 int;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +alter table t1 add column ar int , add column rb int, modify column db_row_hash_1 int , modify column db_row_hash_3 int;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +alter table t1 change column db_row_hash_1 dsds int , change column db_row_hash_2 dfdf int , change column db_row_hash_3 gdfg int ;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +alter table t1 add column asd int, drop column a, change column db_row_hash_1 dsds int, change db_row_hash_3 fdfdfd int;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +alter table t1 drop column b , add column g int;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `c` blob DEFAULT NULL,
> + `d` text DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` int(11) DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(65535)),
> + UNIQUE KEY `c` (`c`(65535)),
> + UNIQUE KEY `d` (`d`(65535)),
> + UNIQUE KEY `e` (`e`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +#now add some column with name db_row_hash;
> +alter table t1 add column db_row_hash_1 int unique;
> +alter table t1 add column db_row_hash_2 int unique;
> +alter table t1 add column db_row_hash_3 int unique;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `c` blob DEFAULT NULL,
> + `d` text DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` int(11) DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + `db_row_hash_3` int(11) DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(65535)),
> + UNIQUE KEY `c` (`c`(65535)),
> + UNIQUE KEY `d` (`d`(65535)),
> + UNIQUE KEY `e` (`e`),
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`),
> + UNIQUE KEY `db_row_hash_2` (`db_row_hash_2`),
> + UNIQUE KEY `db_row_hash_3` (`db_row_hash_3`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +alter table t1 add column db_row_hash_7 int, add column db_row_hash_5 int , add column db_row_hash_4 int ;
> +alter table t1 drop column db_row_hash_7,drop column db_row_hash_3, drop column db_row_hash_4;
> +desc t1;
> +Field Type Null Key Default Extra
> +a blob YES UNI NULL
> +c blob YES UNI NULL
> +d text YES UNI NULL
> +e varchar(3000) YES UNI NULL
> +f int(11) YES NULL
> +g int(11) YES NULL
> +db_row_hash_1 int(11) YES UNI NULL
> +db_row_hash_2 int(11) YES UNI NULL
> +db_row_hash_5 int(11) YES NULL
> +#this show now break anything;
"this should not" ?
> +insert into t1 values(1,2,3,4,5,6,23,5,6);
> +ERROR 23000: Duplicate entry '1' for key 'a'
> +#this should also drop the unique index;
> +alter table t1 drop column a, drop column c;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `d` text DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` int(11) DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + `db_row_hash_5` int(11) DEFAULT NULL,
> + UNIQUE KEY `d` (`d`(65535)),
> + UNIQUE KEY `e` (`e`),
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`),
> + UNIQUE KEY `db_row_hash_2` (`db_row_hash_2`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 d 1 d A NULL 65535 NULL YES HASH_INDEX
> +t1 0 e 1 e A NULL NULL NULL YES HASH_INDEX
> +t1 0 db_row_hash_1 1 db_row_hash_1 A NULL NULL NULL YES BTREE
> +t1 0 db_row_hash_2 1 db_row_hash_2 A NULL NULL NULL YES BTREE
> +#add column with unique index on blob;
> +alter table t1 add column a blob unique;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `d` text DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` int(11) DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + `db_row_hash_5` int(11) DEFAULT NULL,
> + `a` blob DEFAULT NULL,
> + UNIQUE KEY `d` (`d`(65535)),
> + UNIQUE KEY `e` (`e`),
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`),
> + UNIQUE KEY `db_row_hash_2` (`db_row_hash_2`),
> + UNIQUE KEY `a` (`a`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 d 1 d A NULL 65535 NULL YES HASH_INDEX
> +t1 0 e 1 e A NULL NULL NULL YES HASH_INDEX
> +t1 0 db_row_hash_1 1 db_row_hash_1 A NULL NULL NULL YES BTREE
> +t1 0 db_row_hash_2 1 db_row_hash_2 A NULL NULL NULL YES BTREE
> +t1 0 a 1 a A NULL NULL NULL YES HASH_INDEX
> +#try to change the blob unique column name;
> +#this will change index to b tree;
> +alter table t1 modify column a int , modify column e int;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `d` text DEFAULT NULL,
> + `e` int(11) DEFAULT NULL,
> + `f` int(11) DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + `db_row_hash_5` int(11) DEFAULT NULL,
> + `a` int(11) DEFAULT NULL,
> + UNIQUE KEY `d` (`d`(65535)),
> + UNIQUE KEY `e` (`e`),
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`),
> + UNIQUE KEY `db_row_hash_2` (`db_row_hash_2`),
> + UNIQUE KEY `a` (`a`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 d 1 d A NULL 65535 NULL YES HASH_INDEX
> +t1 0 e 1 e A NULL NULL NULL YES BTREE
> +t1 0 db_row_hash_1 1 db_row_hash_1 A NULL NULL NULL YES BTREE
> +t1 0 db_row_hash_2 1 db_row_hash_2 A NULL NULL NULL YES BTREE
> +t1 0 a 1 a A NULL NULL NULL YES BTREE
> +alter table t1 add column clm1 blob unique,add column clm2 blob unique;
> +#try changing the name;
> +alter table t1 change column clm1 clm_changed1 blob, change column clm2 clm_changed2 blob;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `d` text DEFAULT NULL,
> + `e` int(11) DEFAULT NULL,
> + `f` int(11) DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + `db_row_hash_5` int(11) DEFAULT NULL,
> + `a` int(11) DEFAULT NULL,
> + `clm_changed1` blob DEFAULT NULL,
> + `clm_changed2` blob DEFAULT NULL,
> + UNIQUE KEY `d` (`d`(65535)),
> + UNIQUE KEY `e` (`e`),
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`),
> + UNIQUE KEY `db_row_hash_2` (`db_row_hash_2`),
> + UNIQUE KEY `a` (`a`),
> + UNIQUE KEY `clm1` (`clm_changed1`),
> + UNIQUE KEY `clm2` (`clm_changed2`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 d 1 d A NULL 65535 NULL YES HASH_INDEX
> +t1 0 e 1 e A NULL NULL NULL YES BTREE
> +t1 0 db_row_hash_1 1 db_row_hash_1 A NULL NULL NULL YES BTREE
> +t1 0 db_row_hash_2 1 db_row_hash_2 A NULL NULL NULL YES BTREE
> +t1 0 a 1 a A NULL NULL NULL YES BTREE
> +t1 0 clm1 1 clm_changed1 A NULL NULL NULL YES HASH_INDEX
> +t1 0 clm2 1 clm_changed2 A NULL NULL NULL YES HASH_INDEX
> +#now drop the unique key;
> +alter table t1 drop key clm1, drop key clm2;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `d` text DEFAULT NULL,
> + `e` int(11) DEFAULT NULL,
> + `f` int(11) DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + `db_row_hash_5` int(11) DEFAULT NULL,
> + `a` int(11) DEFAULT NULL,
> + `clm_changed1` blob DEFAULT NULL,
> + `clm_changed2` blob DEFAULT NULL,
> + UNIQUE KEY `d` (`d`(65535)),
> + UNIQUE KEY `e` (`e`),
> + UNIQUE KEY `db_row_hash_1` (`db_row_hash_1`),
> + UNIQUE KEY `db_row_hash_2` (`db_row_hash_2`),
> + UNIQUE KEY `a` (`a`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 d 1 d A NULL 65535 NULL YES HASH_INDEX
> +t1 0 e 1 e A NULL NULL NULL YES BTREE
> +t1 0 db_row_hash_1 1 db_row_hash_1 A NULL NULL NULL YES BTREE
> +t1 0 db_row_hash_2 1 db_row_hash_2 A NULL NULL NULL YES BTREE
> +t1 0 a 1 a A NULL NULL NULL YES BTREE
> +drop table t1;
> +#now the table with key on multiple columns; the ultimate test;
> +create table t1(a blob, b int , c varchar(2000) , d text , e varchar(3000) , f longblob , g int , h text ,
> +unique(a,b,c), unique(c,d,e),unique(e,f,g,h), unique(b,d,g,h));
> +insert into t1 values(1,1,1,1,1,1,1,1),(2,2,2,2,2,2,2,2),(3,3,3,3,3,3,3,3),(4,4,4,4,4,4,4,4),(5,5,5,5,5,5,5,5),
> +('maria',6,'maria','maria','maria','maria',6,'maria'),('mariadb',7,'mariadb','mariadb','mariadb','mariadb',8,'mariadb')
> +,(null,null,null,null,null,null,null,null),(null,null,null,null,null,null,null,null);
> +#table structure;
> +desc t1;
> +Field Type Null Key Default Extra
> +a blob YES MUL NULL
> +b int(11) YES MUL NULL
> +c varchar(2000) YES MUL NULL
> +d text YES NULL
> +e varchar(3000) YES MUL NULL
> +f longblob YES NULL
> +g int(11) YES NULL
> +h text YES NULL
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` int(11) DEFAULT NULL,
> + `c` varchar(2000) DEFAULT NULL,
> + `d` text DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` longblob DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `h` text DEFAULT NULL,
> + UNIQUE KEY `a` (`a`,`b`,`c`),
> + UNIQUE KEY `c` (`c`,`d`,`e`),
> + UNIQUE KEY `e` (`e`,`f`,`g`,`h`),
> + UNIQUE KEY `b` (`b`,`d`,`g`,`h`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 a 1 a A NULL NULL NULL YES HASH_INDEX
> +t1 0 a 2 b A NULL NULL NULL YES HASH_INDEX
> +t1 0 a 3 c A NULL NULL NULL YES HASH_INDEX
> +t1 0 c 1 c A NULL NULL NULL YES HASH_INDEX
> +t1 0 c 2 d A 0 NULL NULL YES HASH_INDEX
> +t1 0 c 3 e A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 1 e A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 2 f A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 4 h A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 1 b A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 2 d A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 4 h A 0 NULL NULL YES HASH_INDEX
> +
> +MyISAM file: DATADIR/test/t1
> +Record format: Packed
> +Character set: latin1_swedish_ci (8)
> +Data records: 9 Deleted blocks: 0
> +Recordlength: 5092
> +
> +table description:
> +Key Start Len Index Type
> +1 5081 8 multip. ulonglong prefix NULL
> +2 5073 8 multip. ulonglong prefix NULL
> +3 5065 8 multip. ulonglong prefix NULL
> +4 5057 8 multip. ulonglong NULL
> +select * from information_schema.columns where table_schema = 'test' and table_name = 't1';
> +TABLE_CATALOG TABLE_SCHEMA TABLE_NAME COLUMN_NAME ORDINAL_POSITION COLUMN_DEFAULT IS_NULLABLE DATA_TYPE CHARACTER_MAXIMUM_LENGTH CHARACTER_OCTET_LENGTH NUMERIC_PRECISION NUMERIC_SCALE DATETIME_PRECISION CHARACTER_SET_NAME COLLATION_NAME COLUMN_TYPE COLUMN_KEY EXTRA PRIVILEGES COLUMN_COMMENT IS_GENERATED GENERATION_EXPRESSION
> +def test t1 a 1 NULL YES blob 65535 65535 NULL NULL NULL NULL NULL blob MUL select,insert,update,references NEVER NULL
> +def test t1 b 2 NULL YES int NULL NULL 10 0 NULL NULL NULL int(11) MUL select,insert,update,references NEVER NULL
> +def test t1 c 3 NULL YES varchar 2000 2000 NULL NULL NULL latin1 latin1_swedish_ci varchar(2000) MUL select,insert,update,references NEVER NULL
> +def test t1 d 4 NULL YES text 65535 65535 NULL NULL NULL latin1 latin1_swedish_ci text select,insert,update,references NEVER NULL
> +def test t1 e 5 NULL YES varchar 3000 3000 NULL NULL NULL latin1 latin1_swedish_ci varchar(3000) MUL select,insert,update,references NEVER NULL
> +def test t1 f 6 NULL YES longblob 4294967295 4294967295 NULL NULL NULL NULL NULL longblob select,insert,update,references NEVER NULL
> +def test t1 g 7 NULL YES int NULL NULL 10 0 NULL NULL NULL int(11) select,insert,update,references NEVER NULL
> +def test t1 h 8 NULL YES text 65535 65535 NULL NULL NULL latin1 latin1_swedish_ci text select,insert,update,references NEVER NULL
> +select * from information_schema.statistics where table_schema = 'test' and table_name = 't1';
> +TABLE_CATALOG TABLE_SCHEMA TABLE_NAME NON_UNIQUE INDEX_SCHEMA INDEX_NAME SEQ_IN_INDEX COLUMN_NAME COLLATION CARDINALITY SUB_PART PACKED NULLABLE INDEX_TYPE COMMENT INDEX_COMMENT
> +def test t1 0 test a 1 a A NULL NULL NULL YES HASH_INDEX
> +def test t1 0 test a 2 b A NULL NULL NULL YES HASH_INDEX
> +def test t1 0 test a 3 c A NULL NULL NULL YES HASH_INDEX
> +def test t1 0 test c 1 c A NULL NULL NULL YES HASH_INDEX
> +def test t1 0 test c 2 d A 0 NULL NULL YES HASH_INDEX
> +def test t1 0 test c 3 e A 0 NULL NULL YES HASH_INDEX
> +def test t1 0 test e 1 e A 0 NULL NULL YES HASH_INDEX
> +def test t1 0 test e 2 f A 0 NULL NULL YES HASH_INDEX
> +def test t1 0 test e 3 g A 0 NULL NULL YES HASH_INDEX
> +def test t1 0 test e 4 h A 0 NULL NULL YES HASH_INDEX
> +def test t1 0 test b 1 b A 0 NULL NULL YES HASH_INDEX
> +def test t1 0 test b 2 d A 0 NULL NULL YES HASH_INDEX
> +def test t1 0 test b 3 g A 0 NULL NULL YES HASH_INDEX
> +def test t1 0 test b 4 h A 0 NULL NULL YES HASH_INDEX
> +select * from information_schema.key_column_usage where table_schema= 'test' and table_name= 't1';
> +CONSTRAINT_CATALOG CONSTRAINT_SCHEMA CONSTRAINT_NAME TABLE_CATALOG TABLE_SCHEMA TABLE_NAME COLUMN_NAME ORDINAL_POSITION POSITION_IN_UNIQUE_CONSTRAINT REFERENCED_TABLE_SCHEMA REFERENCED_TABLE_NAME REFERENCED_COLUMN_NAME
> +def test a def test t1 a 1 NULL NULL NULL NULL
> +def test a def test t1 b 2 NULL NULL NULL NULL
> +def test a def test t1 c 3 NULL NULL NULL NULL
> +def test c def test t1 c 1 NULL NULL NULL NULL
> +def test c def test t1 d 2 NULL NULL NULL NULL
> +def test c def test t1 e 3 NULL NULL NULL NULL
> +def test e def test t1 e 1 NULL NULL NULL NULL
> +def test e def test t1 f 2 NULL NULL NULL NULL
> +def test e def test t1 g 3 NULL NULL NULL NULL
> +def test e def test t1 h 4 NULL NULL NULL NULL
> +def test b def test t1 b 1 NULL NULL NULL NULL
> +def test b def test t1 d 2 NULL NULL NULL NULL
> +def test b def test t1 g 3 NULL NULL NULL NULL
> +def test b def test t1 h 4 NULL NULL NULL NULL
> +# table select we should not be able to see db_row_hash_1 column;
> +select * from t1;
> +a b c d e f g h
> +1 1 1 1 1 1 1 1
> +2 2 2 2 2 2 2 2
> +3 3 3 3 3 3 3 3
> +4 4 4 4 4 4 4 4
> +5 5 5 5 5 5 5 5
> +maria 6 maria maria maria maria 6 maria
> +mariadb 7 mariadb mariadb mariadb mariadb 8 mariadb
> +NULL NULL NULL NULL NULL NULL NULL NULL
> +NULL NULL NULL NULL NULL NULL NULL NULL
> +select db_row_hash_1 from t1;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 'field list'
> +select db_row_hash_2 from t1;
> +ERROR 42S22: Unknown column 'db_row_hash_2' in 'field list'
> +select db_row_hash_3 from t1;
> +ERROR 42S22: Unknown column 'db_row_hash_3' in 'field list'
> +#duplicate entry test;
> +#duplicate keys entry;
> +insert into t1 values(1,1,1,0,0,0,0,0);
> +ERROR 23000: Duplicate entry '1-1-1' for key 'a'
> +insert into t1 values(0,0,1,1,1,0,0,0);
> +ERROR 23000: Duplicate entry '1-1-1' for key 'c'
> +insert into t1 values(0,0,0,0,1,1,1,1);
> +ERROR 23000: Duplicate entry '1-1-1-1' for key 'e'
> +insert into t1 values(1,1,1,1,1,0,0,0);
> +ERROR 23000: Duplicate entry '1-1-1' for key 'a'
> +insert into t1 values(0,0,0,0,1,1,1,1);
> +ERROR 23000: Duplicate entry '1-1-1-1' for key 'e'
> +insert into t1 values(1,1,1,1,1,1,1,1);
> +ERROR 23000: Duplicate entry '1-1-1' for key 'a'
> +select db_row_hash_1,db_row_hash_2,db_row_hash_3,db_row_hash_4,db_row_hash_5 from t1;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 'field list'
> +alter table t1 drop column db_row_hash_1, drop column db_row_hash_2, drop column db_row_hash_3;
> +ERROR 42000: Can't DROP COLUMN `db_row_hash_1`; check that it exists
> +alter table t1 add column dg int , add column ef int , drop column db_row_hash_1;
> +ERROR 42000: Can't DROP COLUMN `db_row_hash_1`; check that it exists
> +alter table t1 modify column db_row_hash_1 int , modify column db_row_hash_2 int, modify column db_row_hash_3 int;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +alter table t1 add column ar int , add column rb int, modify column db_row_hash_1 int , modify column db_row_hash_3 int;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +alter table t1 change column db_row_hash_1 dsds int , change column db_row_hash_2 dfdf int , change column db_row_hash_3 gdfg int ;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +alter table t1 add column asd int, drop column a, change column db_row_hash_1 dsds int, change db_row_hash_3 fdfdfd int;
> +ERROR 42S22: Unknown column 'db_row_hash_1' in 't1'
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` int(11) DEFAULT NULL,
> + `c` varchar(2000) DEFAULT NULL,
> + `d` text DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` longblob DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `h` text DEFAULT NULL,
> + UNIQUE KEY `a` (`a`,`b`,`c`),
> + UNIQUE KEY `c` (`c`,`d`,`e`),
> + UNIQUE KEY `e` (`e`,`f`,`g`,`h`),
> + UNIQUE KEY `b` (`b`,`d`,`g`,`h`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +# add column named db_row_hash_*;
> +alter table t1 add column db_row_hash_7 int , add column db_row_hash_5 int,
> +add column db_row_hash_1 int, add column db_row_hash_2 int;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` int(11) DEFAULT NULL,
> + `c` varchar(2000) DEFAULT NULL,
> + `d` text DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` longblob DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `h` text DEFAULT NULL,
> + `db_row_hash_7` int(11) DEFAULT NULL,
> + `db_row_hash_5` int(11) DEFAULT NULL,
> + `db_row_hash_1` int(11) DEFAULT NULL,
> + `db_row_hash_2` int(11) DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(65535),`b`,`c`),
> + UNIQUE KEY `c` (`c`,`d`(65535),`e`),
> + UNIQUE KEY `e` (`e`,`f`,`g`,`h`(65535)),
> + UNIQUE KEY `b` (`b`,`d`(65535),`g`,`h`(65535))
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 a 1 a A NULL 65535 NULL YES HASH_INDEX
> +t1 0 a 2 b A NULL NULL NULL YES HASH_INDEX
> +t1 0 a 3 c A NULL NULL NULL YES HASH_INDEX
> +t1 0 c 1 c A NULL NULL NULL YES HASH_INDEX
> +t1 0 c 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 c 3 e A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 1 e A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 2 f A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 4 h A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 1 b A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 4 h A 0 65535 NULL YES HASH_INDEX
> +alter table t1 drop column db_row_hash_7 , drop column db_row_hash_5 ,
> +drop column db_row_hash_1, drop column db_row_hash_2 ;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` int(11) DEFAULT NULL,
> + `c` varchar(2000) DEFAULT NULL,
> + `d` text DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` longblob DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `h` text DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(65535),`b`,`c`),
> + UNIQUE KEY `c` (`c`,`d`(65535),`e`),
> + UNIQUE KEY `e` (`e`,`f`,`g`,`h`(65535)),
> + UNIQUE KEY `b` (`b`,`d`(65535),`g`,`h`(65535))
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 a 1 a A NULL 65535 NULL YES HASH_INDEX
> +t1 0 a 2 b A NULL NULL NULL YES HASH_INDEX
> +t1 0 a 3 c A NULL NULL NULL YES HASH_INDEX
> +t1 0 c 1 c A NULL NULL NULL YES HASH_INDEX
> +t1 0 c 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 c 3 e A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 1 e A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 2 f A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 e 4 h A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 1 b A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 4 h A 0 65535 NULL YES HASH_INDEX
> +#try to change column names;
> +alter table t1 change column a aa blob , change column b bb blob , change column d dd blob;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `aa` blob DEFAULT NULL,
> + `bb` blob DEFAULT NULL,
> + `c` varchar(2000) DEFAULT NULL,
> + `dd` blob DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` longblob DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `h` text DEFAULT NULL,
> + UNIQUE KEY `e` (`e`,`f`,`g`,`h`(65535)),
> + UNIQUE KEY `a` (`aa`(65535),`bb`,`c`),
> + UNIQUE KEY `c` (`c`,`dd`(65535),`e`),
> + UNIQUE KEY `b` (`bb`,`dd`(65535),`g`,`h`(65535))
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 e 1 e A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 2 f A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 3 g A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 4 h A NULL 65535 NULL YES HASH_INDEX
> +t1 0 a 1 aa A 0 65535 NULL YES HASH_INDEX
> +t1 0 a 2 bb A 0 NULL NULL YES HASH_INDEX
> +t1 0 a 3 c A 0 NULL NULL YES HASH_INDEX
> +t1 0 c 1 c A 0 NULL NULL YES HASH_INDEX
> +t1 0 c 2 dd A 0 65535 NULL YES HASH_INDEX
> +t1 0 c 3 e A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 1 bb A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 2 dd A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 4 h A 0 65535 NULL YES HASH_INDEX
> +alter table t1 change column aa a blob , change column bb b blob , change column dd d blob;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` blob DEFAULT NULL,
> + `c` varchar(2000) DEFAULT NULL,
> + `d` blob DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` longblob DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `h` text DEFAULT NULL,
> + UNIQUE KEY `e` (`e`,`f`,`g`,`h`(65535)),
> + UNIQUE KEY `a` (`a`(65535),`b`,`c`),
> + UNIQUE KEY `c` (`c`,`d`(65535),`e`),
> + UNIQUE KEY `b` (`b`,`d`(65535),`g`,`h`(65535))
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 e 1 e A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 2 f A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 3 g A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 4 h A NULL 65535 NULL YES HASH_INDEX
> +t1 0 a 1 a A 0 65535 NULL YES HASH_INDEX
> +t1 0 a 2 b A 0 NULL NULL YES HASH_INDEX
> +t1 0 a 3 c A 0 NULL NULL YES HASH_INDEX
> +t1 0 c 1 c A 0 NULL NULL YES HASH_INDEX
> +t1 0 c 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 c 3 e A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 1 b A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 4 h A 0 65535 NULL YES HASH_INDEX
> +#now we will change the data type to int and varchar limit so that we no longer require hash_index;
> +#on key a_b_c;
> +alter table t1 modify column a varchar(20) , modify column b varchar(20) , modify column c varchar(20);
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` varchar(20) DEFAULT NULL,
> + `b` varchar(20) DEFAULT NULL,
> + `c` varchar(20) DEFAULT NULL,
> + `d` blob DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` longblob DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `h` text DEFAULT NULL,
> + UNIQUE KEY `e` (`e`,`f`,`g`,`h`(65535)),
> + UNIQUE KEY `a` (`a`,`b`,`c`),
> + UNIQUE KEY `c` (`c`,`d`(65535),`e`),
> + UNIQUE KEY `b` (`b`,`d`(65535),`g`,`h`(65535))
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 e 1 e A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 2 f A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 3 g A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 4 h A NULL 65535 NULL YES HASH_INDEX
> +t1 0 a 1 a A NULL NULL NULL YES BTREE
> +t1 0 a 2 b A NULL NULL NULL YES BTREE
> +t1 0 a 3 c A 0 NULL NULL YES BTREE
> +t1 0 c 1 c A 0 NULL NULL YES HASH_INDEX
> +t1 0 c 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 c 3 e A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 1 b A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 4 h A 0 65535 NULL YES HASH_INDEX
> +#change it back;
> +alter table t1 modify column a blob , modify column b blob , modify column c blob;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` blob DEFAULT NULL,
> + `c` blob DEFAULT NULL,
> + `d` blob DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` longblob DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `h` text DEFAULT NULL,
> + UNIQUE KEY `e` (`e`,`f`,`g`,`h`(65535)),
> + UNIQUE KEY `a` (`a`,`b`,`c`),
> + UNIQUE KEY `c` (`c`,`d`(65535),`e`),
> + UNIQUE KEY `b` (`b`,`d`(65535),`g`,`h`(65535))
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 e 1 e A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 2 f A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 3 g A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 4 h A NULL 65535 NULL YES HASH_INDEX
> +t1 0 a 1 a A 0 NULL NULL YES HASH_INDEX
> +t1 0 a 2 b A 0 NULL NULL YES HASH_INDEX
> +t1 0 a 3 c A 0 NULL NULL YES HASH_INDEX
> +t1 0 c 1 c A 0 NULL NULL YES HASH_INDEX
> +t1 0 c 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 c 3 e A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 1 b A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 4 h A 0 65535 NULL YES HASH_INDEX
> +#try to delete blob column in unique;
> +truncate table t1;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` blob DEFAULT NULL,
> + `c` blob DEFAULT NULL,
> + `d` blob DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` longblob DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `h` text DEFAULT NULL,
> + UNIQUE KEY `e` (`e`,`f`,`g`,`h`(65535)),
> + UNIQUE KEY `a` (`a`,`b`,`c`),
> + UNIQUE KEY `c` (`c`,`d`(65535),`e`),
> + UNIQUE KEY `b` (`b`,`d`(65535),`g`,`h`(65535))
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 e 1 e A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 2 f A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 3 g A NULL NULL NULL YES HASH_INDEX
> +t1 0 e 4 h A NULL 65535 NULL YES HASH_INDEX
> +t1 0 a 1 a A 0 NULL NULL YES HASH_INDEX
> +t1 0 a 2 b A 0 NULL NULL YES HASH_INDEX
> +t1 0 a 3 c A 0 NULL NULL YES HASH_INDEX
> +t1 0 c 1 c A 0 NULL NULL YES HASH_INDEX
> +t1 0 c 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 c 3 e A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 1 b A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 4 h A 0 65535 NULL YES HASH_INDEX
> +#now try to delete keys;
> +alter table t1 drop key c, drop key e;
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` blob DEFAULT NULL,
> + `c` blob DEFAULT NULL,
> + `d` blob DEFAULT NULL,
> + `e` varchar(3000) DEFAULT NULL,
> + `f` longblob DEFAULT NULL,
> + `g` int(11) DEFAULT NULL,
> + `h` text DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(65535),`b`(65535),`c`(65535)),
> + UNIQUE KEY `b` (`b`(65535),`d`(65535),`g`,`h`(65535))
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 a 1 a A NULL 65535 NULL YES HASH_INDEX
> +t1 0 a 2 b A NULL 65535 NULL YES HASH_INDEX
> +t1 0 a 3 c A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 1 b A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 b 3 g A 0 NULL NULL YES HASH_INDEX
> +t1 0 b 4 h A 0 65535 NULL YES HASH_INDEX
> +drop table t1;
> +#now alter table containing some data basically some tests with ignore;
> +create table t1 (a blob);
> +insert into t1 values(1),(2),(3);
> +#normal alter table;
> +alter table t1 add unique key(a);
> +alter table t1 drop key a;
> +truncate table t1;
> +insert into t1 values(1),(1),(2),(2),(3);
> +alter table t1 add unique key(a);
> +ERROR 23000: Duplicate entry '1' for key 'a'
> +alter ignore table t1 add unique key(a);
> +select * from t1;
> +a
> +1
> +2
> +3
> +insert into t1 values(1);
> +ERROR 23000: Duplicate entry '1' for key 'a'
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(65535))
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 a 1 a A NULL 65535 NULL YES HASH_INDEX
> +drop table t1;
> +#Now with multiple keys;
> +create table t1(a blob , b blob, c blob , d blob , e int);
> +insert into t1 values (1,1,1,1,1);
> +insert into t1 values (1,1,1,1,1);
> +insert into t1 values (2,1,1,1,1);
> +insert into t1 values (2,2,2,2,2);
> +insert into t1 values (3,3,4,4,4);
> +insert into t1 values (4,4,4,4,4);
> +alter table t1 add unique key(a,c), add unique key(b,d), add unique key(e);
> +ERROR 23000: Duplicate entry '1-1' for key 'a'
> +alter ignore table t1 add unique key(a,c), add unique key(b,d), add unique key(e);
> +select * from t1;
> +a b c d e
> +1 1 1 1 1
> +2 2 2 2 2
> +3 3 4 4 4
> +insert into t1 values (1,12,1,13,14);
> +ERROR 23000: Duplicate entry '1-1' for key 'a'
> +insert into t1 values (12,1,14,1,14);
> +ERROR 23000: Duplicate entry '1-1' for key 'b'
> +insert into t1 values (13,12,13,14,4);
> +ERROR 23000: Duplicate entry '4' for key 'e'
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + `b` blob DEFAULT NULL,
> + `c` blob DEFAULT NULL,
> + `d` blob DEFAULT NULL,
> + `e` int(11) DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(65535),`c`(65535)),
> + UNIQUE KEY `b` (`b`(65535),`d`(65535)),
> + UNIQUE KEY `e` (`e`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 a 1 a A NULL 65535 NULL YES HASH_INDEX
> +t1 0 a 2 c A NULL 65535 NULL YES HASH_INDEX
> +t1 0 b 1 b A NULL 65535 NULL YES HASH_INDEX
> +t1 0 b 2 d A 0 65535 NULL YES HASH_INDEX
> +t1 0 e 1 e A 0 NULL NULL YES BTREE
> +drop table t1;
> +#visibility of db_row_hash
> +create table t1 (a blob unique , b blob unique);
> +desc t1;
> +Field Type Null Key Default Extra
> +a blob YES UNI NULL
> +b blob YES UNI NULL
> +insert into t1 values(1,19);
> +insert into t1 values(2,29);
> +insert into t1 values(3,39);
> +insert into t1 values(4,49);
> +create table t2 (DB_ROW_HASH_1 int, DB_ROW_HASH_2 int);
> +insert into t2 values(11,1);
> +insert into t2 values(22,2);
> +insert into t2 values(33,3);
> +insert into t2 values(44,4);
> +select * from t1;
> +a b
> +1 19
> +2 29
> +3 39
> +4 49
> +select * from t2;
> +DB_ROW_HASH_1 DB_ROW_HASH_2
> +11 1
> +22 2
> +33 3
> +44 4
> +select DB_ROW_HASH_1, DB_ROW_HASH_2 from t1;
> +ERROR 42S22: Unknown column 'DB_ROW_HASH_1' in 'field list'
> +#bug
> +select DB_ROW_HASH_1, DB_ROW_HASH_2 from t1,t2;
> +DB_ROW_HASH_1 DB_ROW_HASH_2
> +11 1
> +11 1
> +11 1
> +11 1
> +22 2
> +22 2
> +22 2
> +22 2
> +33 3
> +33 3
> +33 3
> +33 3
> +44 4
> +44 4
> +44 4
> +44 4
> +select * from t1 where DB_ROW_HASH_1 in (select DB_ROW_HASH_1 from t2);
> +ERROR 42S22: Unknown column 'DB_ROW_HASH_1' in 'IN/ALL/ANY subquery'
> +select DB_ROW_HASH_1, DB_ROW_HASH_2 from t1,t2 where DB_ROW_HASH_1 in (select DB_ROW_HASH_1 from t2);
> +DB_ROW_HASH_1 DB_ROW_HASH_2
> +11 1
> +22 2
> +33 3
> +44 4
> +11 1
> +22 2
> +33 3
> +44 4
> +11 1
> +22 2
> +33 3
> +44 4
> +11 1
> +22 2
> +33 3
> +44 4
> +select * from t2 where DB_ROW_HASH_1 in (select DB_ROW_HASH_1 from t1);
> +DB_ROW_HASH_1 DB_ROW_HASH_2
> +11 1
> +22 2
> +33 3
> +44 4
> +select DB_ROW_HASH_1 from t1,t2 where t1.DB_ROW_HASH_1 = t2.DB_ROW_HASH_2;
> +ERROR 42S22: Unknown column 't1.DB_ROW_HASH_1' in 'where clause'
> +select DB_ROW_HASH_1 from t1 inner join t2 on t1.a = t2.DB_ROW_HASH_2;
> +DB_ROW_HASH_1
> +11
> +22
> +33
> +44
> +drop table t1,t2;
> +#very long blob entry;
> +SET @@GLOBAL.max_allowed_packet=67108864;
> +connect 'newcon', localhost, root,,;
> +connection newcon;
> +show variables like 'max_allowed_packet';
> +Variable_name Value
> +max_allowed_packet 67108864
> +create table t1(a longblob unique, b longblob , c longblob , unique(b,c));
> +desc t1;
> +Field Type Null Key Default Extra
> +a longblob YES UNI NULL
> +b longblob YES MUL NULL
> +c longblob YES NULL
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` longblob DEFAULT NULL,
> + `b` longblob DEFAULT NULL,
> + `c` longblob DEFAULT NULL,
> + UNIQUE KEY `a` (`a`),
> + UNIQUE KEY `b` (`b`,`c`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 a 1 a A NULL NULL NULL YES HASH_INDEX
> +t1 0 b 1 b A NULL NULL NULL YES HASH_INDEX
> +t1 0 b 2 c A 0 NULL NULL YES HASH_INDEX
> +insert into t1 values(concat(repeat('sachin',10000000),'1'),concat(repeat('sachin',10000000),'1'),
> +concat(repeat('sachin',10000000),'1'));
> +insert into t1 values(concat(repeat('sachin',10000000),'2'),concat(repeat('sachin',10000000),'2'),
> +concat(repeat('sachin',10000000),'1'));
> +insert into t1 values(concat(repeat('sachin',10000000),'2'),concat(repeat('sachin',10000000),'2'),
> +concat(repeat('sachin',10000000),'4'));
> +ERROR 23000: Duplicate entry 'sachinsachinsachinsachinsachinsachinsachinsachinsachinsachinsach' for key 'a'
> +insert into t1 values(concat(repeat('sachin',10000000),'3'),concat(repeat('sachin',10000000),'1'),
> +concat(repeat('sachin',10000000),'1'));
> +ERROR 23000: Duplicate entry 'sachinsachinsachinsachinsachinsachinsachinsachinsachinsachinsach' for key 'b'
> +drop table t1;
> +#long key unique with different key length
> +create table t1(a blob, unique(a(3000)));
> +desc t1;
> +Field Type Null Key Default Extra
> +a blob YES UNI NULL
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 a 1 a A NULL 3000 NULL YES HASH_INDEX
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(3000))
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +insert into t1 value(concat(repeat('s',3000),'1'));
> +insert into t1 value(concat(repeat('s',3000),'2'));
> +ERROR 23000: Duplicate entry 'ssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss' for key 'a'
> +insert into t1 value(concat(repeat('a',3000),'2'));
> +drop table t1;
> +create table t1(a varchar(4000), b longblob , c varchar(5000), d longblob,
> +unique(a(3500), b(1000000)), unique(c(4500), d(10000000)));
> +desc t1;
> +Field Type Null Key Default Extra
> +a varchar(4000) YES MUL NULL
> +b longblob YES NULL
> +c varchar(5000) YES MUL NULL
> +d longblob YES NULL
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` varchar(4000) DEFAULT NULL,
> + `b` longblob DEFAULT NULL,
> + `c` varchar(5000) DEFAULT NULL,
> + `d` longblob DEFAULT NULL,
> + UNIQUE KEY `a` (`a`(3500),`b`),
> + UNIQUE KEY `c` (`c`(4500),`d`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +show keys from t1;
> +Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
> +t1 0 a 1 a A NULL 3500 NULL YES HASH_INDEX
> +t1 0 a 2 b A NULL NULL NULL YES HASH_INDEX
> +t1 0 c 1 c A 0 4500 NULL YES HASH_INDEX
> +t1 0 c 2 d A 0 NULL NULL YES HASH_INDEX
> +drop table t1;
> +disconnect newcon;
> +connection default;
> +SET @@GLOBAL.max_allowed_packet=4194304;
> +#ext bug
> +create table t1(a int primary key, b blob unique, c int, d blob , index(c));
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` int(11) NOT NULL,
> + `b` blob DEFAULT NULL,
> + `c` int(11) DEFAULT NULL,
> + `d` blob DEFAULT NULL,
> + PRIMARY KEY (`a`),
> + UNIQUE KEY `b` (`b`),
> + KEY `c` (`c`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +insert into t1 values(1,23,1,33);
> +insert into t1 values(2,23,1,33);
> +ERROR 23000: Duplicate entry '23' for key 'b'
> +drop table t1;
> +create table t2 (a blob unique , c int , index(c));
> +show create table t2;
> +Table Create Table
> +t2 CREATE TABLE `t2` (
> + `a` blob DEFAULT NULL,
> + `c` int(11) DEFAULT NULL,
> + UNIQUE KEY `a` (`a`),
> + KEY `c` (`c`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +insert into t2 values(1,1);
> +insert into t2 values(2,1);
> +drop table t2;
> +#not null test //todo solve warnings
> +create table t1(a blob unique not null);
> +desc t1;
> +Field Type Null Key Default Extra
> +a blob NO PRI NULL
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob NOT NULL,
> + UNIQUE KEY `a` (`a`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +insert into t1 values(1);
> +insert into t1 values(3);
> +insert into t1 values(1);
> +ERROR 23000: Duplicate entry '1' for key 'a'
> +drop table t1;
> +create table t1(a int primary key, b blob unique , c blob unique not null);
> +insert into t1 values(1,1,1);
> +insert into t1 values(2,1,2);
> +ERROR 23000: Duplicate entry '1' for key 'b'
> +insert into t1 values(3,3,1);
> +ERROR 23000: Duplicate entry '1' for key 'c'
> +drop table t1;
> +create table t1 (a blob unique not null, b blob not null, c blob not null, unique(b,c));
> +desc t1;
> +Field Type Null Key Default Extra
> +a blob NO PRI NULL
> +b blob NO MUL NULL
> +c blob NO NULL
> +show create table t1;
> +Table Create Table
> +t1 CREATE TABLE `t1` (
> + `a` blob NOT NULL,
> + `b` blob NOT NULL,
> + `c` blob NOT NULL,
> + UNIQUE KEY `a` (`a`),
> + UNIQUE KEY `b` (`b`,`c`)
> +) ENGINE=MyISAM DEFAULT CHARSET=latin1
> +insert into t1 values (1, 2, 3);
> +insert into t1 values (2, 1, 3);
> +insert into t1 values (2, 1, 3);
> +ERROR 23000: Duplicate entry '2' for key 'a'
> +drop table t1;
> +set @@GLOBAL.max_allowed_packet= @allowed_packet;
> diff --git a/sql/field.h b/sql/field.h
> index b6f28808e2e..13eea28fe8a 100644
> --- a/sql/field.h
> +++ b/sql/field.h
> @@ -539,6 +539,7 @@ class Virtual_column_info: public Sql_alloc
> Item *expr;
> LEX_CSTRING name; /* Name of constraint */
> uint flags;
> + LEX_CSTRING hash_expr;
It is not a good idea to put hash-unique-constraint-specific
data into the common Virtual_column_info.
Luckily, you don't need it. Just as the "current_timestamp()" expression
is generated on the fly as needed in parse_vcol_defs(), you can
generate the hash expression there too; there is no need to do it
in TABLE_SHARE::init_from_binary_frm_image().
So, just remove hash_expr.
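To make that concrete, roughly (pseudocode only, using the names from
your patch; not meant to compile):

```
/* inside parse_vcol_defs(), per long-unique hash key: */
List<Item> *args= new List<Item>;
for (/* each key part of the hash key */)
  args->push_back(new Item_field(thd, part->field));  /* hashed columns */
hash_field->vcol_info->expr= new Item_func_hash(thd, *args);
/* then fix_fields() as for any other generated column */
```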
>
> Virtual_column_info()
> : vcol_type((enum_vcol_info_type)VCOL_TYPE_NONE),
> @@ -681,7 +688,7 @@ class Field: public Value_source
> GEOM_MULTIPOINT = 4, GEOM_MULTILINESTRING = 5, GEOM_MULTIPOLYGON = 6,
> GEOM_GEOMETRYCOLLECTION = 7
> };
> - enum imagetype { itRAW, itMBR};
> + enum imagetype { itRAW, itMBR, itHASH};
itHASH isn't used anywhere in your patch :)
>
> utype unireg_check;
> uint32 field_length; // Length of field
> diff --git a/sql/field.cc b/sql/field.cc
> index dc854826ed6..f75f52141e3 100644
> --- a/sql/field.cc
> +++ b/sql/field.cc
> @@ -9822,7 +9822,7 @@ int Field_bit::key_cmp(const uchar *str, uint length)
> }
>
>
> -int Field_bit::cmp_offset(uint row_offset)
> +int Field_bit::cmp_offset(long row_offset)
Maybe my_ptrdiff_t?
> {
> if (bit_len)
> {
> diff --git a/sql/item_func.h b/sql/item_func.h
> index 3b6cb4ceeac..0391de0c5ad 100644
> --- a/sql/item_func.h
> +++ b/sql/item_func.h
> @@ -803,6 +803,19 @@ class Item_long_func: public Item_int_func
> };
>
>
> +class Item_func_hash: public Item_int_func
> +{
> +public:
> + Item_func_hash(THD *thd, List<Item> &item): Item_int_func(thd, item)
> + {}
> + longlong val_int();
> + void fix_length_and_dec();
> + const Type_handler *type_handler() const { return &type_handler_long; }
> + Item *get_copy(THD *thd)
> + { return get_item_copy<Item_func_hash>(thd, this); }
> + const char *func_name() const { return "HASH"; }
Is it a user-visible function? That is, can one write
  SELECT HASH(a) FROM t1;
It looks like yes, so:
1. Please add tests for it.
2. We need to document exactly which hash function is used.
3. It may be better to call it MARIADB_HASH, to highlight its
   internal-use nature and home-baked hash function, and to reduce the
   chance of a collision with user-defined functions.
4. An even better approach would be not to create any user-visible
   function at all. Currently unpack_vcol_info_from_frm() calls the
   parser, then initializes vcol_info and calls fix_fields. If you
   split it in two, you can do `new Item_func_hash` manually and skip
   the first part of unpack_vcol_info_from_frm(), and you won't need
   HASH to be user-visible at all.
   As a minor optimization, the same can be done for "current_timestamp()".
> +};
> +
> class Item_longlong_func: public Item_int_func
> {
> public:
> diff --git a/sql/sql_show.cc b/sql/sql_show.cc
> index 4923c628bf9..c620ea23165 100644
> --- a/sql/sql_show.cc
> +++ b/sql/sql_show.cc
> @@ -2300,7 +2300,7 @@ int show_create_table(THD *thd, TABLE_LIST *table_list, String *packet,
> */
> packet->append(STRING_WITH_LEN("PRIMARY KEY"));
> }
> - else if (key_info->flags & HA_NOSAME)
> + else if (key_info->flags & HA_NOSAME || key_info->flags & HA_LONG_UNIQUE_HASH)
Is it needed? As far as I can see, you always set HA_NOSAME
whenever you set HA_LONG_UNIQUE_HASH.
> packet->append(STRING_WITH_LEN("UNIQUE KEY "));
> else if (key_info->flags & HA_FULLTEXT)
> packet->append(STRING_WITH_LEN("FULLTEXT KEY "));
> diff --git a/include/my_base.h b/include/my_base.h
> index c36072c0bfa..ba17f8d80dc 100644
> --- a/include/my_base.h
> +++ b/include/my_base.h
> @@ -290,6 +291,11 @@ enum ha_base_keytype {
> #define HA_KEY_HAS_PART_KEY_SEG 65536
> /* Internal Flag Can be calcaluted */
> #define HA_INVISIBLE_KEY 2<<18
> +/*
> + Some more flags for keys these are not stored in
> + frm it is calculated on the fly in init_from_binary_frm_image
> +*/
1. I don't think you need to make the same comment more verbose every
   time you repeat it :) Just write something like
   #define HA_INVISIBLE_KEY 2<<18 /* this is calculated too */
   #define HA_LONG_UNIQUE_HASH 2<<19 /* this is calculated too */
2. Do you need a separate flag for this at all? Isn't HA_INVISIBLE_KEY
   enough?
> +#define HA_LONG_UNIQUE_HASH 2<<19
> /* Automatic bits in key-flag */
>
> #define HA_SPACE_PACK_USED 4 /* Test for if SPACE_PACK used */
> @@ -317,6 +323,13 @@ enum ha_base_keytype {
> #define HA_BIT_PART 1024
> #define HA_CAN_MEMCMP 2048 /* internal, never stored in frm */
>
> +/*
> + Used for key parts whole length is greater then > file->max_key_part_length
> + Only used for HA_LONG_UNIQUE_HASH keys
> +*/ //TODO a better name ??
> +#define HA_HASH_KEY_PART_FLAG 4096
> +/* Field need to be frees externally */
> +#define HA_FIELD_EX_FREED 8192
Okay, this is a weird name, agreed. But it is not used anywhere, so
there's an easy solution :)
> /* optionbits for database */
> #define HA_OPTION_PACK_RECORD 1U
> #define HA_OPTION_PACK_KEYS 2U
> diff --git a/sql/table.h b/sql/table.h
> index 785fd9f3427..9ed7fcd7beb 100644
> --- a/sql/table.h
> +++ b/sql/table.h
> @@ -346,9 +346,39 @@ enum field_visibility_t {
> INVISIBLE_FULL
> };
>
> -#define INVISIBLE_MAX_BITS 3
> +#define INVISIBLE_MAX_BITS 3
> +/* We will store the info into 3rd bit if field is hash field */
> +#define HASH_FIELD_MASK 15
> +#define HASH_FIELD_MASK_SHIFT 4
> +#define HA_HASH_FIELD_LENGTH 8
> +#define HA_HASH_KEY_LENGTH_WITHOUT_NULL 8
> +#define HA_HASH_KEY_LENGTH_WITH_NULL 9
> +//TODO is this correct ? how about size of char ptr on 32/16 bit machine?
I don't understand that, sorry
> +#define HA_HASH_KEY_PART_LENGTH 4 + 8 // 4 for length , 8 for portable size of char ptr
you should almost always use parentheses in macro expressions
>
> +const LEX_CSTRING ha_hash_str {STRING_WITH_LEN("HASH")};
???
you create a copy of the ha_hash_str symbol in every .o file that
includes table.h?
>
> +
> +int find_field_pos_in_hash(Item *hash_item, const char * field_name);
> +
> +int fields_in_hash_str(Item *hash_item);
> +
> +Field * field_ptr_in_hash_str(Item *hash_item, int index);
> +
> +int get_key_part_length(KEY *keyinfo, int index);
> +
> +void calc_hash_for_unique(ulong &nr1, ulong &nr2, String *str);
> +
> +void create_update_handler(THD *thd, TABLE *table);
> +
> +void delete_update_handler(THD *thd, TABLE *table);
> +
> +void setup_table_hash(TABLE *table);
> +
> +void re_setup_table(TABLE *table);
> +
> +int get_hash_key(THD *thd, TABLE *table, handler *h, uint key_index, uchar *rec_buf,
> + uchar *key_buff);
> /**
> Category of table found in the table share.
> */
> @@ -1094,6 +1124,17 @@ struct TABLE
> THD *in_use; /* Which thread uses this */
>
> uchar *record[3]; /* Pointer to records */
> + /* record buf to resolve hash collisions for long UNIQUE constraints */
> + uchar *check_unique_buf;
> + handler *update_handler; /* Handler used in case of update */
> + /*
> + In the case of write row for long unique we are unable of find
> + Whick key is voilated. Because we in case of duplicate hash we never reach
> + handler write_row function. So print_error will always print that
> + key 0 is voilated. We store which key is voilated in this variable
> + by default this should be initialized to -1
typos :)
"unable to find", "which", "violated"
> + */
> + int dupp_hash_key;
> uchar *write_row_record; /* Used as optimisation in
> THD::write_row */
> uchar *insert_values; /* used by INSERT ... UPDATE */
> @@ -2898,6 +2939,7 @@ void append_unescaped(String *res, const char *pos, size_t length);
> void prepare_frm_header(THD *thd, uint reclength, uchar *fileinfo,
> HA_CREATE_INFO *create_info, uint keys, KEY *key_info);
> const char *fn_frm_ext(const char *name);
> +void calc_hash_for_unique(ulong &nr1, ulong &nr2, String *str);
you have this prototype twice in table.h
>
> /* Check that the integer is in the internal */
> static inline int set_zone(int nr,int min_zone,int max_zone)
> diff --git a/sql/table.cc b/sql/table.cc
> index 73b1a6bd9b2..8cd4db2844d 100644
> --- a/sql/table.cc
> +++ b/sql/table.cc
> @@ -747,7 +748,13 @@ static bool create_key_infos(const uchar *strpos, const uchar *frm_image_end,
> if (i == 0)
> {
> ext_key_parts+= (share->use_ext_keys ? first_keyinfo->user_defined_key_parts*(keys-1) : 0);
> + /*
> + Some keys can be HA_LONG_UNIQUE_HASH , but we do not know at this point ,
> + how many ?, but will always be less than or equal to total num of
> + keys. Each HA_LONG_UNIQUE_HASH key require one extra key_part in which
> + it stored hash. On safe side we will allocate memory for each key.
> + */
> - n_length=keys * sizeof(KEY) + ext_key_parts * sizeof(KEY_PART_INFO);
> + n_length=keys * sizeof(KEY) + (ext_key_parts +keys) * sizeof(KEY_PART_INFO);
Hmm, why wouldn't you store the number of HA_KEY_ALG_LONG_HASH keys in
EXTRA2_LONG_UNIQUES ? Then you'll know it here.
> if (!(keyinfo= (KEY*) alloc_root(&share->mem_root,
> n_length + len)))
> return 1;
> @@ -798,6 +805,14 @@ static bool create_key_infos(const uchar *strpos, const uchar *frm_image_end,
> }
> key_part->store_length=key_part->length;
> }
> + if (keyinfo->algorithm == HA_KEY_ALG_LONG_HASH)
> + {
> + keyinfo->flags|= HA_LONG_UNIQUE_HASH | HA_NOSAME;
> + keyinfo->key_length= 0;
> + share->ext_key_parts++;
> + // This empty key_part for storing Hash
> + key_part++;
> + }
so, you write keyinfo's for HA_LONG_UNIQUE_HASH keys into the frm?
why don't you write keysegs for them? I mean, it is not very logical.
I'd have thought you'd write neither keyinfos nor keysegs. Or, okay, you could
write both. But only keyinfos and no keysegs? That's strange
>
> /*
> Add primary key to end of extended keys for non unique keys for
> @@ -1143,13 +1159,21 @@ bool parse_vcol_defs(THD *thd, MEM_ROOT *mem_root, TABLE *table,
> pos+= expr_length;
> }
>
> - /* Now, initialize CURRENT_TIMESTAMP fields */
> + /* Now, initialize CURRENT_TIMESTAMP and UNIQUE_INDEX_HASH_FIELD fields */
> for (field_ptr= table->field; *field_ptr; field_ptr++)
> {
> Field *field= *field_ptr;
> - if (field->has_default_now_unireg_check())
> + if (field->vcol_info && (length = field->vcol_info->hash_expr.length))
> {
> expr_str.length(parse_vcol_keyword.length);
> + expr_str.append((char*)field->vcol_info->hash_expr.str, length);
> + vcol= unpack_vcol_info_from_frm(thd, mem_root, table, &expr_str,
> + &(field->vcol_info), error_reported);
see above about not going through the parser for hash keys.
> + *(vfield_ptr++)= *field_ptr;
> +
> + }
> + if (field->has_default_now_unireg_check())
> + {
> expr_str.append(STRING_WITH_LEN("current_timestamp("));
> expr_str.append_ulonglong(field->decimals());
> expr_str.append(')');
> @@ -2106,7 +2132,8 @@ int TABLE_SHARE::init_from_binary_frm_image(THD *thd, bool write,
> uchar flags= *extra2_field_flags++;
> if (flags & VERS_OPTIMIZED_UPDATE)
> reg_field->flags|= VERS_UPDATE_UNVERSIONED_FLAG;
> -
> + if (flags & EXTRA2_LONG_UNIQUE_HASH_FIELD)
> + reg_field->flags|= LONG_UNIQUE_HASH_FIELD;
so, you write LONG_UNIQUE_HASH_FIELD fields to frm too. Why?
> reg_field->invisible= f_visibility(flags);
> }
> if (reg_field->invisible == INVISIBLE_USER)
> @@ -2350,6 +2438,8 @@ int TABLE_SHARE::init_from_binary_frm_image(THD *thd, bool write,
> key_part= keyinfo->key_part;
> uint key_parts= share->use_ext_keys ? keyinfo->ext_key_parts :
> keyinfo->user_defined_key_parts;
> + if (keyinfo->flags & HA_LONG_UNIQUE_HASH)
> + key_parts++;
key_parts++ ? Doesn't your HA_LONG_UNIQUE_HASH key have only one part, always?
> for (i=0; i < key_parts; key_part++, i++)
> {
> Field *field;
> @@ -2363,7 +2453,16 @@ int TABLE_SHARE::init_from_binary_frm_image(THD *thd, bool write,
>
> field= key_part->field= share->field[key_part->fieldnr-1];
> key_part->type= field->key_type();
> + if (keyinfo->flags & HA_LONG_UNIQUE_HASH
> + &&(key_part->length > handler_file->max_key_part_length()
> + || key_part->length == 0))
1. fix the spacing and the indentation here, please
2. what should happen if HA_LONG_UNIQUE_HASH flag is set,
but the key_part->length is small ?
> + {
> + key_part->key_part_flag= HA_HASH_KEY_PART_FLAG;
> + key_part->store_length= HA_HASH_KEY_PART_LENGTH;
> + }
> + /* Invisible Full is currently used by long uniques */
> - if (field->invisible > INVISIBLE_USER && !field->vers_sys_field())
> + if ((field->invisible == INVISIBLE_USER ||
> + field->invisible == INVISIBLE_SYSTEM )&& !field->vers_sys_field())
why is this change?
> keyinfo->flags |= HA_INVISIBLE_KEY;
> if (field->null_ptr)
> {
> @@ -2428,7 +2527,8 @@ int TABLE_SHARE::init_from_binary_frm_image(THD *thd, bool write,
> field->part_of_sortkey= share->keys_in_use;
> }
> }
> - if (field->key_length() != key_part->length)
> + if (field->key_length() != key_part->length &&
> + !(keyinfo->flags & HA_LONG_UNIQUE_HASH))
why is that?
> {
> #ifndef TO_BE_DELETED_ON_PRODUCTION
> if (field->type() == MYSQL_TYPE_NEWDECIMAL)
> @@ -2470,7 +2570,8 @@ int TABLE_SHARE::init_from_binary_frm_image(THD *thd, bool write,
> if (!(key_part->key_part_flag & (HA_BLOB_PART | HA_VAR_LENGTH_PART |
> HA_BIT_PART)) &&
> key_part->type != HA_KEYTYPE_FLOAT &&
> - key_part->type == HA_KEYTYPE_DOUBLE)
> + key_part->type == HA_KEYTYPE_DOUBLE &&
> + !(keyinfo->flags & HA_LONG_UNIQUE_HASH))
why is that?
> key_part->key_part_flag|= HA_CAN_MEMCMP;
> }
> keyinfo->usable_key_parts= usable_parts; // Filesort
> @@ -8346,6 +8454,377 @@ double KEY::actual_rec_per_key(uint i)
> }
>
>
> +/*
> + find out the field positoin in hash_str()
> + position starts from 0
> + else return -1;
> +*/
> +int find_field_pos_in_hash(Item *hash_item, const char * field_name)
> +{
> + Item_func_or_sum * temp= static_cast<Item_func_or_sum *>(hash_item);
> + Item_args * t_item= static_cast<Item_args *>(temp);
> + uint arg_count= t_item->argument_count();
> + Item ** arguments= t_item->arguments();
> + Field * t_field;
> +
> + for (uint j=0; j < arg_count; j++)
> + {
> + DBUG_ASSERT(arguments[j]->type() == Item::FIELD_ITEM ||
> + arguments[j]->type() == Item::FUNC_ITEM);
> + if (arguments[j]->type() == Item::FIELD_ITEM)
> + {
> + t_field= static_cast<Item_field *>(arguments[j])->field;
> + }
> + else
> + {
> + Item_func_left *fnc= static_cast<Item_func_left *>(arguments[j]);
> + t_field= static_cast<Item_field *>(fnc->arguments()[0])->field;
> + }
> + if (!my_strcasecmp(system_charset_info, t_field->field_name.str, field_name))
> + return j;
> + }
> + return -1;
> +}
> +
> +/*
> + find total number of field in hash_str
> +*/
> +int fields_in_hash_str(Item * hash_item)
can be static
> +{
> + Item_func_or_sum * temp= static_cast<Item_func_or_sum *>(hash_item);
> + Item_args * t_item= static_cast<Item_args *>(temp);
> + return t_item->argument_count();
> +}
> +
> +/*
> + Returns fields ptr given by hash_str index
> + Index starts from 0
> +*/
> +Field * field_ptr_in_hash_str(Item *hash_item, int index)
> +{
> + Item_func_or_sum * temp= static_cast<Item_func_or_sum *>(hash_item);
> + Item_args * t_item= static_cast<Item_args *>(temp);
> + return static_cast<Item_field *>(t_item->arguments()[index])->field;
> +}
> +
> +//NO longer Needed
remove it, then :)
> +int get_key_part_length(KEY *keyinfo, int index)
> +{
> + DBUG_ASSERT(keyinfo->flags & HA_LONG_UNIQUE_HASH);
> + TABLE *tbl= keyinfo->table;
> + Item *h_item= keyinfo->key_part->field->vcol_info->expr;
> + Field *fld= field_ptr_in_hash_str(h_item, index);
> + if (!index)
> + return fld->pack_length()+HA_HASH_KEY_LENGTH_WITH_NULL+1;
> + return fld->pack_length()+1;
> +}
> +/**
> + @brief clone of current handler.
> + Creates a clone of handler used in update for
> + unique hash key.
> + @param thd Thread Object
> + @param table Table Object
> + @return handler object
> +*/
> +void create_update_handler(THD *thd, TABLE *table)
better, clone_handler_for_update
> +{
> + handler *update_handler= NULL;
> + for (uint i= 0; i < table->s->keys; i++)
indentation
> + {
> + if (table->key_info[i].flags & HA_LONG_UNIQUE_HASH)
> + {
> + update_handler= table->file->clone(table->s->normalized_path.str,
> + thd->mem_root);
> + update_handler->ha_external_lock(thd, F_RDLCK);
> + table->update_handler= update_handler;
why do you store it in TABLE? you can just keep it in a local variable
in mysql_update, you don't need it outside of mysql_update anyway.
(and multi-update)
also, you shouldn't need to scan all table keys here. use some
flag or property in TABLE_SHARE to check quickly whether the table
has these long unique keys.
> + return;
> + }
> + }
> + return;
> +}
> +
> +/**
> + @brief Deletes update handler object
> + @param thd Thread Object
> + @param table Table Object
> +*/
> +void delete_update_handler(THD *thd, TABLE *table)
> +{
> + if (table->update_handler)
> + {
> + table->update_handler->ha_external_lock(thd, F_UNLCK);
> + table->update_handler->ha_close();
> + delete table->update_handler;
> + table->update_handler= NULL;
> + }
> +}
> +/**
> + @brief This function makes table object with
> + long unique keys ready for storage engine.
> + It makes key_part of HA_LONG_UNIQUE_HASH point to
> + hash key_part.
> + @param table Table object
> + */
> +void setup_table_hash(TABLE *table)
this and other functions could be methods in TABLE
> +{
> + /*
> + Extra parts of long unique key which are used only at server level
> + for example in key unique(a, b, c) //a b c are blob
> + extra_key_part_hash is 3
> + */
> + uint extra_key_part_hash= 0;
> + uint hash_parts= 0;
> + KEY *s_keyinfo= table->s->key_info;
> + KEY *keyinfo= table->key_info;
> + /*
> + Sometime s_keyinfo or key_info can be null. So
> + two different loop for keyinfo and s_keyinfo
> + reference test case:- main.subselect_sj2
how could they be null here?
> + */
> +
> + if (keyinfo)
> + {
> + for (uint i= 0; i < table->s->keys; i++, keyinfo++)
> + {
> + if (keyinfo->flags & HA_LONG_UNIQUE_HASH)
> + {
> + DBUG_ASSERT(keyinfo->user_defined_key_parts ==
> + keyinfo->ext_key_parts);
> + keyinfo->flags&= ~(HA_NOSAME | HA_LONG_UNIQUE_HASH);
> + keyinfo->algorithm= HA_KEY_ALG_UNDEF;
> + extra_key_part_hash+= keyinfo->ext_key_parts;
> + hash_parts++;
> + keyinfo->key_part= keyinfo->key_part+ keyinfo->ext_key_parts;
> + keyinfo->user_defined_key_parts= keyinfo->usable_key_parts=
> + keyinfo->ext_key_parts= 1;
> + keyinfo->key_length= keyinfo->key_part->store_length;
> + }
> + }
> + table->s->key_parts-= extra_key_part_hash;
> + table->s->key_parts+= hash_parts;
> + table->s->ext_key_parts-= extra_key_part_hash;
I don't understand what you're doing here. Could you add a comment, explaining
the resulting structure of keys and keysegs and what's where?
> + }
> + if (s_keyinfo)
> + {
> + for (uint i= 0; i < table->s->keys; i++, s_keyinfo++)
> + {
> + if (s_keyinfo->flags & HA_LONG_UNIQUE_HASH)
> + {
> + DBUG_ASSERT(s_keyinfo->user_defined_key_parts ==
> + s_keyinfo->ext_key_parts);
> + s_keyinfo->flags&= ~(HA_NOSAME | HA_LONG_UNIQUE_HASH);
> + s_keyinfo->algorithm= HA_KEY_ALG_BTREE;
> + extra_key_part_hash+= s_keyinfo->ext_key_parts;
> + s_keyinfo->key_part= s_keyinfo->key_part+ s_keyinfo->ext_key_parts;
> + s_keyinfo->user_defined_key_parts= s_keyinfo->usable_key_parts=
> + s_keyinfo->ext_key_parts= 1;
> + s_keyinfo->key_length= s_keyinfo->key_part->store_length;
> + }
> + }
> + if (!keyinfo)
> + {
> + table->s->key_parts-= extra_key_part_hash;
> + table->s->key_parts+= hash_parts;
> + table->s->ext_key_parts-= extra_key_part_hash;
> + }
> + }
> +}
> +
> +/**
> + @brief Revert the effect of setup_table_hash
> + @param table Table Object
> + */
> +void re_setup_table(TABLE *table)
> +{
> + //extra key parts excluding hash , which needs to be added in keyparts
> + uint extra_key_parts_ex_hash= 0;
> + uint extra_hash_parts= 0; // this var for share->extra_hash_parts
> + KEY *s_keyinfo= table->s->key_info;
> + KEY *keyinfo= table->key_info;
> + /*
> + Sometime s_keyinfo can be null so
> + two different loop for keyinfo and s_keyinfo
> + ref test case:- main.subselect_sj2
> + */
> + if (keyinfo)
> + {
> + for (uint i= 0; i < table->s->keys; i++, keyinfo++)
> + {
> + if (keyinfo->user_defined_key_parts == 1 &&
> + keyinfo->key_part->field->flags & LONG_UNIQUE_HASH_FIELD)
> + {
> + keyinfo->flags|= (HA_NOSAME | HA_LONG_UNIQUE_HASH);
> + keyinfo->algorithm= HA_KEY_ALG_LONG_HASH;
> + /* Sometimes it can happen, that we does not parsed hash_str.
> + Like when this function is called in ha_create. So we will
> + Use field from table->field rather then share->field*/
> + Item *h_item= table->field[keyinfo->key_part->fieldnr - 1]->
> + vcol_info->expr;
> + uint hash_parts= fields_in_hash_str(h_item);
> + keyinfo->key_part= keyinfo->key_part- hash_parts;
> + keyinfo->user_defined_key_parts= keyinfo->usable_key_parts=
> + keyinfo->ext_key_parts= hash_parts;
> + extra_key_parts_ex_hash+= hash_parts;
> + extra_hash_parts++;
> + keyinfo->key_length= -1;
> + }
> + }
> + table->s->key_parts-= extra_hash_parts;
> + table->s->key_parts+= extra_key_parts_ex_hash;
> + table->s->ext_key_parts+= extra_key_parts_ex_hash + extra_hash_parts;
> + }
> + if (s_keyinfo)
> + {
> + for (uint i= 0; i < table->s->keys; i++, s_keyinfo++)
> + {
> + if (s_keyinfo->user_defined_key_parts == 1 &&
> + s_keyinfo->key_part->field->flags & LONG_UNIQUE_HASH_FIELD)
> + {
> + s_keyinfo->flags|= (HA_NOSAME | HA_LONG_UNIQUE_HASH);
> + s_keyinfo->algorithm= HA_KEY_ALG_LONG_HASH;
> + extra_hash_parts++;
> + /* Sometimes it can happen, that we does not parsed hash_str.
> + Like when this function is called in ha_create. So we will
> + Use field from table->field rather then share->field*/
> + Item *h_item= table->field[s_keyinfo->key_part->fieldnr - 1]->
> + vcol_info->expr;
> + uint hash_parts= fields_in_hash_str(h_item);
> + s_keyinfo->key_part= s_keyinfo->key_part- hash_parts;
> + s_keyinfo->user_defined_key_parts= s_keyinfo->usable_key_parts=
> + s_keyinfo->ext_key_parts= hash_parts;
> + extra_key_parts_ex_hash+= hash_parts;
> + s_keyinfo->key_length= -1;
> + }
> + }
> + if (!keyinfo)
> + {
> + table->s->key_parts-= extra_hash_parts;
> + table->s->key_parts+= extra_key_parts_ex_hash;
> + table->s->ext_key_parts+= extra_key_parts_ex_hash + extra_hash_parts;
> + }
> + }
> +}
> +
> +/**
> + @brief set_hash
> + @param table
> + @param key_index
> + @param key_buff
> +
> +int get_hash_key(THD *thd,TABLE *table, handler *h, uint key_index,
> + uchar * rec_buff, uchar *key_buff)
unused?
> +{
> + KEY *keyinfo= &table->key_info[key_index];
> + DBUG_ASSERT(keyinfo->flags & HA_LONG_UNIQUE_HASH);
> + KEY_PART_INFO *temp_key_part= table->key_info[key_index].key_part;
> + Field *fld[keyinfo->user_defined_key_parts];
> + Field *t_field;
> + uchar hash_buff[9];
> + ulong nr1= 1, nr2= 4;
> + String *str1, temp1;
> + String *str2, temp2;
> + bool is_null= false;
> + bool is_same= true;
> + bool is_index_inited= h->inited;
> + /* difference between field->ptr and start of rec_buff *
> + long diff = rec_buff- table->record[0];
> + int result= 0;
> + for (uint i=0; i < keyinfo->user_defined_key_parts; i++, temp_key_part++)
> + {
> + uint maybe_null= MY_TEST(temp_key_part->null_bit);
> + fld[i]= temp_key_part->field->new_key_field(thd->mem_root, table,
> + key_buff + maybe_null, 12,
> + maybe_null?key_buff:0, 1,
> + temp_key_part->key_part_flag
> + & HA_HASH_KEY_PART_FLAG);
> + if (fld[i]->is_real_null())
> + {
> + is_null= true;
> + break;
> + }
> + str1= fld[i]->val_str(&temp1);
> + calc_hash_for_unique(nr1, nr2, str1);
> + key_buff+= temp_key_part->store_length;
> + }
> + if (is_null && !(keyinfo->flags & HA_NULL_PART_KEY))
> + return HA_ERR_KEY_NOT_FOUND;
> + if (keyinfo->flags & HA_NULL_PART_KEY)
> + {
> + hash_buff[0]= is_null;
> + int8store(hash_buff + 1, nr1);
> + }
> + else
> + int8store(hash_buff, nr1);
> + if (!is_index_inited)
> + result= h->ha_index_init(key_index, 0);
> + if (result)
> + return result;
> +
> + setup_table_hash(table);
> + result= h->ha_index_read_map(rec_buff, hash_buff, HA_WHOLE_KEY,
> + HA_READ_KEY_EXACT);
> + re_setup_table(table);
> + if (!result)
> + {
> + for (uint i=0; i < keyinfo->user_defined_key_parts; i++)
> + {
> + t_field= keyinfo->key_part[i].field;
> + t_field->move_field(t_field->ptr+diff,
> + t_field->null_ptr+diff, t_field->null_bit);
> + }
> + do
> + {
> + re_setup_table(table);
> + is_same= true;
> + for (uint i=0; i < keyinfo->user_defined_key_parts; i++)
> + {
> + t_field= keyinfo->key_part[i].field;
> + if (fld[i]->is_real_null() && t_field->is_real_null())
> + continue;
> + if (!fld[i]->is_real_null() && !t_field->is_real_null())
> + {
> + str1= t_field->val_str(&temp1);
> + str2= fld[i]->val_str(&temp2);
> + if (my_strcasecmp(str1->charset(), str1->c_ptr_safe(),
> + str2->c_ptr_safe()))
> + {
> + is_same= false;
> + break;
> + }
> + }
> + else
> + {
> + is_same= false;
> + break;
> + }
> + }
> + setup_table_hash(table);
> + }
> + while (!is_same && !(result= h->ha_index_next_same(rec_buff, hash_buff,
> + keyinfo->key_length)));
> + for (uint i=0; i < keyinfo->user_defined_key_parts; i++)
> + {
> + t_field= keyinfo->key_part[i].field;
> + t_field->move_field(t_field->ptr-diff,
> + t_field->null_ptr-diff, t_field->null_bit);
> + }
> + }
> + if (!is_index_inited)
> + h->ha_index_end();
> + re_setup_table(table);
> + for (uint i=0; i < keyinfo->user_defined_key_parts; i++)
> + {
> + if (keyinfo->key_part[i].key_part_flag & HA_FIELD_EX_FREED)
> + {
> + Field_blob *blb= static_cast<Field_blob *>(fld[i]);
> + uchar * addr;
> + blb->get_ptr(&addr);
> + my_free(addr);
> + }
> + }
> + return result;
> +}
> +*/
> LEX_CSTRING *fk_option_name(enum_fk_option opt)
> {
> static LEX_CSTRING names[]=
> @@ -8814,3 +9293,15 @@ bool TABLE::export_structure(THD *thd, Row_definition_list *defs)
> }
> return false;
> }
> +
> +void calc_hash_for_unique(ulong &nr1, ulong &nr2, String *str)
> +{
> + CHARSET_INFO *cs;
> + uchar l[4];
> + int4store(l, str->length());
> + cs= &my_charset_bin;
> + cs->coll->hash_sort(cs, l, sizeof(l), &nr1, &nr2);
> + cs= str->charset();
> + cs->coll->hash_sort(cs, (uchar *)str->ptr(), str->length(), &nr1, &nr2);
> + sql_print_information("setiya %lu, %s", nr1, str->ptr());
really? :)
> +}
> diff --git a/sql/sql_table.cc b/sql/sql_table.cc
> index 4ac8b102d79..f42687c0377 100644
> --- a/sql/sql_table.cc
> +++ b/sql/sql_table.cc
> @@ -3343,6 +3343,87 @@ mysql_add_invisible_index(THD *thd, List<Key> *key_list,
> key_list->push_back(key, thd->mem_root);
> return key;
> }
> +/**
> + Add hidden level 3 hash field to table in case of long
s/hidden level 3/fully invisible/
> + unique column
> + @param thd Thread Context.
> + @param create_list List of table fields.
> + @param cs Field Charset
> + @param key_info Whole Keys buffer
> + @param key_index Index of current key
1. instead of key_info and key_index, you can just take a key_info of the current
key here (that is key_info + key_index).
2. what is the "current key"? Is it the unique key over blobs that you create
a hash field for? please clarify the comment.
> +*/
> +
> +static void add_hash_field(THD * thd, List<Create_field> *create_list,
> + CHARSET_INFO *cs, KEY *key_info, int key_index)
> +{
> + List_iterator<Create_field> it(*create_list);
> + Create_field *dup_field, *cf= new (thd->mem_root) Create_field();
> + cf->flags|= UNSIGNED_FLAG | LONG_UNIQUE_HASH_FIELD;
> + cf->charset= cs;
> + cf->decimals= 0;
> + cf->length= cf->char_length= cf->pack_length= HA_HASH_FIELD_LENGTH;
> + cf->invisible= INVISIBLE_FULL;
> + cf->pack_flag|= FIELDFLAG_MAYBE_NULL;
> + uint num= 1;
> + char *temp_name= (char *)thd->alloc(30);
> + my_snprintf(temp_name, 30, "DB_ROW_HASH_%u", num);
please, give names for 30 and for the whole my_snprintf - macros, functions,
whatever. And better use a LEX_STRING for it, for example:
#define LONG_HASH_FIELD_NAME_LENGTH 30
static inline void make_long_hash_field_name(LEX_STRING *buf, uint num)
{
  buf->length= my_snprintf(buf->str, LONG_HASH_FIELD_NAME_LENGTH, "DB_ROW_HASH_%u", num);
}
> + /*
> + Check for collusions
collisions :)
> + */
> + while ((dup_field= it++))
> + {
> + if (!my_strcasecmp(system_charset_info, temp_name, dup_field->field_name.str))
> + {
> + num++;
> + my_snprintf(temp_name, 30, "DB_ROW_HASH_%u", num);
> + it.rewind();
> + }
> + }
> + it.rewind();
> + cf->field_name.str= temp_name;
> + cf->field_name.length= strlen(temp_name);
you won't need strlen here, if you use LEX_STRING above
> + cf->set_handler(&type_handler_longlong);
> + /*
> + We have added db_row_hash field in starting of
> + fields array , So we have to change key_part
> + field index
> + for (int i= 0; i <= key_index; i++, key_info++)
> + {
> + KEY_PART_INFO *info= key_info->key_part;
> + for (uint j= 0; j < key_info->user_defined_key_parts; j++, info++)
> + {
> + info->fieldnr+= 1;
> + info->offset+= HA_HASH_FIELD_LENGTH;
> + }
> + }*/
Forgot to remove it?
> + key_info[key_index].flags|= HA_NOSAME;
> + key_info[key_index].algorithm= HA_KEY_ALG_LONG_HASH;
> + it.rewind();
> + uint record_offset= 0;
> + while ((dup_field= it++))
> + {
> + dup_field->offset= record_offset;
> + if (dup_field->stored_in_db())
> + record_offset+= dup_field->pack_length;
Why do you change all field offsets? You only need to put your field last,
that's all. Like
while ((dup_field= it++))
  set_if_bigger(record_offset, dup_field->offset + dup_field->pack_length);
> + }
> + cf->offset= record_offset;
> + /*
> + it.rewind();
> + while ((sql_field= it++))
> + {
> + if (!sql_field->stored_in_db())
> + {
> + sql_field->offset= record_offset;
> + record_offset+= sql_field->pack_length;
> + }
> + }
why is that?
> + */
> + /* hash column should be fully hidden */
> + //prepare_create_field(cf, NULL, 0);
Forgot to remove it?
> + create_list->push_back(cf,thd->mem_root);
> +}
> +
> +
> /*
> Preparation for table creation
>
> @@ -3868,6 +3951,8 @@ mysql_prepare_create_table(THD *thd, HA_CREATE_INFO *create_info,
> }
>
> cols2.rewind();
> + key_part_info->fieldnr= field;
> + key_part_info->offset= (uint16) sql_field->offset;
why did you need to move that?
> if (key->type == Key::FULLTEXT)
> {
> if ((sql_field->real_field_type() != MYSQL_TYPE_STRING &&
> @@ -3922,8 +4007,19 @@ mysql_prepare_create_table(THD *thd, HA_CREATE_INFO *create_info,
> column->length= MAX_LEN_GEOM_POINT_FIELD;
> if (!column->length)
> {
> - my_error(ER_BLOB_KEY_WITHOUT_LENGTH, MYF(0), column->field_name.str);
> - DBUG_RETURN(TRUE);
> + if (key->type == Key::PRIMARY)
> + {
> + my_error(ER_BLOB_KEY_WITHOUT_LENGTH, MYF(0), column->field_name.str);
> + DBUG_RETURN(TRUE);
> + }
> + else if (!is_hash_field_added)
> + {
> + add_hash_field(thd, &alter_info->create_list,
> + create_info->default_table_charset,
> + *key_info_buffer, key_number);
> + column->length= 0;
why column->length= 0 ?
> + is_hash_field_added= true;
> + }
> }
> }
> #ifdef HAVE_SPATIAL
> @@ -4062,11 +4159,29 @@ mysql_prepare_create_table(THD *thd, HA_CREATE_INFO *create_info,
> }
> else
> {
> - my_error(ER_TOO_LONG_KEY, MYF(0), key_part_length);
> - DBUG_RETURN(TRUE);
> - }
> + if(key->type == Key::UNIQUE)
> + {
> + if (!is_hash_field_added)
> + {
> + add_hash_field(thd, &alter_info->create_list,
> + create_info->default_table_charset,
> + *key_info_buffer, key_number);
> + is_hash_field_added= true;
instead of is_hash_field_added and many add_hash_field() calls here and there,
I'd rather rename the variable to hash_field_needed and only do
hash_field_needed= true. And at the end: if (hash_field_needed) add_hash_field(...)
> + }
> + }
> + else
> + {
> + my_error(ER_TOO_LONG_KEY, MYF(0), key_part_length);
any test cases for this error?
> + DBUG_RETURN(TRUE);
> + }
> + }
> }
> - key_part_info->length= (uint16) key_part_length;
> + /* We can not store key_part_length more then 2^16 - 1 in frm
> + So we will simply make it zero */
Not really. If someone explicitly asks for a long prefix, it should be an error.
Like CREATE TABLE t1 (a blob, UNIQUE (a(65537))); -- this is an error
> + if (is_hash_field_added && key_part_length > (2<<16) - 1)
> + key_part_info->length= 0;
> + else
> + key_part_info->length= (uint16) key_part_length;
> /* Use packed keys for long strings on the first column */
> if (!((*db_options) & HA_OPTION_NO_PACK_KEYS) &&
> !((create_info->table_options & HA_OPTION_NO_PACK_KEYS)) &&
> @@ -4122,12 +4237,37 @@ mysql_prepare_create_table(THD *thd, HA_CREATE_INFO *create_info,
> if (key->type == Key::UNIQUE && !(key_info->flags & HA_NULL_PART_KEY))
> unique_key=1;
> key_info->key_length=(uint16) key_length;
> - if (key_length > max_key_length && key->type != Key::FULLTEXT)
> + if (key_length > max_key_length && key->type != Key::FULLTEXT &&
> + !is_hash_field_added)
> {
> my_error(ER_TOO_LONG_KEY,MYF(0),max_key_length);
> DBUG_RETURN(TRUE);
> }
>
> + if (is_hash_field_added)
> + {
> + if (key_info->flags & HA_NULL_PART_KEY)
> + null_fields++;
> + else
> + {
> + uint elements= alter_info->create_list.elements;
> + Create_field *hash_fld= static_cast<Create_field *>(alter_info->
> + create_list.elem(elements -1 ));
> + hash_fld->flags|= NOT_NULL_FLAG;
> + hash_fld->pack_flag&= ~FIELDFLAG_MAYBE_NULL;
> + /*
> + Althought we do not need default value anywhere in code , but if we create
> + table with non null long columns , then at the time of insert we get warning.
> + So default value is used so solve this warning.
> + Virtual_column_info *default_value= new (thd->mem_root) Virtual_column_info();
> + char * def_str= (char *)alloc_root(thd->mem_root, 2);
> + strncpy(def_str, "0", 1);
> + default_value->expr_str.str= def_str;
> + default_value->expr_str.length= 1;
> + default_value->expr_item= new (thd->mem_root) Item_int(thd,0);
> + hash_fld->default_value= default_value; */
> + }
Forgot to remove it?
> + }
> if (validate_comment_length(thd, &key->key_create_info.comment,
> INDEX_COMMENT_MAXLEN,
> ER_TOO_LONG_INDEX_COMMENT,
> @@ -8328,6 +8468,11 @@ mysql_prepare_alter_table(THD *thd, TABLE *table,
> enum Key::Keytype key_type;
> LEX_CSTRING tmp_name;
> bzero((char*) &key_create_info, sizeof(key_create_info));
> + if (key_info->flags & HA_LONG_UNIQUE_HASH)
> + {
> + key_info->flags&= ~(HA_LONG_UNIQUE_HASH);
> + key_info->algorithm= HA_KEY_ALG_UNDEF;
> + }
Why?
> key_create_info.algorithm= key_info->algorithm;
> /*
> We copy block size directly as some engines, like Area, sets this
> diff --git a/sql/handler.cc b/sql/handler.cc
> index b77b2a3fa2c..cda53dfdc87 100644
> --- a/sql/handler.cc
> +++ b/sql/handler.cc
> @@ -6179,6 +6185,185 @@ int handler::ha_reset()
> DBUG_RETURN(reset());
> }
>
> +static int check_duplicate_long_entry_key(TABLE *table, handler *h, uchar *new_rec,
> + uint key_no)
> +{
> + Field *hash_field;
> + int result, error= 0;
> + if (!(table->key_info[key_no].user_defined_key_parts == 1
> + && table->key_info[key_no].key_part->field->flags & LONG_UNIQUE_HASH_FIELD ))
> + return 0;
What if LONG_UNIQUE_HASH_FIELD is set but user_defined_key_parts != 1?
> + hash_field= table->key_info[key_no].key_part->field;
> + DBUG_ASSERT((table->key_info[key_no].flags & HA_NULL_PART_KEY &&
> + table->key_info[key_no].key_length == HA_HASH_KEY_LENGTH_WITH_NULL)
> + || table->key_info[key_no].key_length == HA_HASH_KEY_LENGTH_WITHOUT_NULL);
> + uchar ptr[HA_HASH_KEY_LENGTH_WITH_NULL];
> +
> + if (hash_field->is_real_null())
> + return 0;
> +
> + key_copy(ptr, new_rec, &table->key_info[key_no],
> + table->key_info[key_no].key_length, false);
good, use existing function, no need to reinvent the wheel
> +
> + if (!table->check_unique_buf)
> + table->check_unique_buf= (uchar *)alloc_root(&table->mem_root,
> + table->s->reclength*sizeof(uchar));
> +
> + result= h->ha_index_init(key_no, 0);
> + if (result)
> + return result;
> + result= h->ha_index_read_map(table->check_unique_buf,
> + ptr, HA_WHOLE_KEY, HA_READ_KEY_EXACT);
> + if (!result)
> + {
> + bool is_same;
> + do
> + {
> + Item_func_or_sum * temp= static_cast<Item_func_or_sum *>(hash_field->
> + vcol_info->expr);
> + Item_args * t_item= static_cast<Item_args *>(temp);
> + uint arg_count= t_item->argument_count();
> + Item ** arguments= t_item->arguments();
> + long diff= table->check_unique_buf - new_rec;
> + Field * t_field;
> + is_same= true;
> + for (uint j=0; j < arg_count; j++)
> + {
> + DBUG_ASSERT(arguments[j]->type() == Item::FIELD_ITEM ||
> + // this one for left(fld_name,length)
> + arguments[j]->type() == Item::FUNC_ITEM);
> + if (arguments[j]->type() == Item::FIELD_ITEM)
> + {
> + t_field= static_cast<Item_field *>(arguments[j])->field;
> + if (t_field->cmp_offset(diff))
> + is_same= false;
> + }
> + else
> + {
> + Item_func_left *fnc= static_cast<Item_func_left *>(arguments[j]);
> + DBUG_ASSERT(!my_strcasecmp(system_charset_info, "left", fnc->func_name()));
> + //item_data= fnc->val_str(&tmp1);
> + DBUG_ASSERT(fnc->arguments()[0]->type() == Item::FIELD_ITEM);
> + t_field= static_cast<Item_field *>(fnc->arguments()[0])->field;
> + // field_data= t_field->val_str(&tmp2);
> + // if (my_strnncoll(t_field->charset(),(const uchar *)item_data->ptr(),
> + // item_data->length(),
> + // (const uchar *)field_data.ptr(),
> + // item_data->length()))
> + // return 0;
> + uint length= fnc->arguments()[1]->val_int();
> + if (t_field->cmp_max(t_field->ptr, t_field->ptr + diff, length))
> + is_same= false;
> + }
> + }
you don't need all that code, use key_cmp() or key_cmp_if_same() from key.cc
> + }
> + while (!is_same && !(result= table->file->ha_index_next_same(table->check_unique_buf,
> + ptr, table->key_info[key_no].key_length)));
> + if (is_same)
> + {
> + table->dupp_hash_key= key_no;
> + error= HA_ERR_FOUND_DUPP_KEY;
> + goto exit;
> + }
> + else
> + goto exit;
> + }
> + if (result == HA_ERR_LOCK_WAIT_TIMEOUT)
> + {
> + table->dupp_hash_key= key_no;
> + //TODO check if this is the only case
> + error= HA_ERR_FOUND_DUPP_KEY;
Why?
> + }
> + exit:
> + h->ha_index_end();
> + return error;
> +}
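To make the mechanism above concrete: a long UNIQUE column is backed by an index on a generated hash of the value, so an index hit on the hash is only a *candidate* duplicate, and every candidate row must be re-compared on the real column values before reporting HA_ERR_FOUND_DUPP_KEY. A minimal sketch of that idea (illustrative only, not MariaDB code; names like `LongUniqueTable` are made up):

```python
import hashlib
from collections import defaultdict

def hash_col(value):
    # Stand-in for the generated hash column backing the unique key.
    return hashlib.sha1(value.encode()).hexdigest()[:8]

class LongUniqueTable:
    def __init__(self):
        self.rows = []
        self.hash_index = defaultdict(list)  # hash value -> row ids

    def insert(self, value):
        key = hash_col(value)
        # The index lookup yields all rows with the same hash ...
        for row_id in self.hash_index[key]:
            # ... but only a match on the actual value is a duplicate
            # (this is the collision check the patch performs by hand).
            if self.rows[row_id] == value:
                raise ValueError("duplicate key")
        self.hash_index[key].append(len(self.rows))
        self.rows.append(value)

t = LongUniqueTable()
t.insert("a" * 5000)
t.insert("b" * 5000)
try:
    t.insert("a" * 5000)
    duplicate_detected = False
except ValueError:
    duplicate_detected = True
```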
> +/** @brief
> +  Check whether an inserted/updated record breaks the
> +  unique constraint on long columns.
> +  In the case of UPDATE we only need to check the specific key.
> +  The reason: consider
> +    create table t1(a blob, b blob, x blob, y blob,
> +                    unique(a,b), unique(x,y))
> +  and an update statement like
> +    update t1 set a=23+a;
> +  If we scanned all keys of the table, the index scan on x_y would
> +  find a match (its data is unchanged), so in the UPDATE case we take
> +  the key as a parameter; for a normal insert the key should be -1.
> +  @returns 0 if no duplicate else returns error
> + */
> +static int check_duplicate_long_entries(TABLE *table, handler *h, uchar *new_rec)
> +{
> + table->dupp_hash_key= -1;
> + int result;
> + for (uint i= 0; i < table->s->keys; i++)
> + {
> + if ((result= check_duplicate_long_entry_key(table, h, new_rec, i)))
> + return result;
> + }
> + return 0;
> +}
> +
> +/** @brief
> +  Check whether an updated record breaks the
> +  unique constraint on long columns.
> +  @returns 0 if no duplicate else returns error
> + */
> +static int check_duplicate_long_entries_update(TABLE *table, handler *h, uchar *new_rec)
> +{
> + Field **f, *field;
> + Item *h_item;
> + int error= 0;
> + bool is_update_handler_null= false;
> + /*
> + Compare whether the new record and the old record are the same
> + with respect to the fields covered by the hash
> + */
> + long reclength= table->record[1]-table->record[0];
> + for (uint i= 0; i < table->s->keys; i++)
> + {
> + if (table->key_info[i].user_defined_key_parts == 1 &&
> + table->key_info[i].key_part->field->flags & LONG_UNIQUE_HASH_FIELD)
> + {
> + /*
> + Currently mysql_update is patched so that it will automatically set
> + the update handler and then free it, but ha_update_row is used in
> + many functions (e.g. in case of reinsert). Instead of patching them
> + all, here we check whether update_handler is NULL: if so we set it,
> + and set it back to NULL afterwards.
> + */
> + if (!table->update_handler)
> + {
> + create_update_handler(current_thd, table);
> + is_update_handler_null= true;
> + }
> + h_item= table->key_info[i].key_part->field->vcol_info->expr;
> + for (f= table->field; f && (field= *f); f++)
> + {
> + if ( find_field_pos_in_hash(h_item, field->field_name.str) != -1)
> + {
> + /* Compare fields if they are different then check for duplicates*/
> + if(field->cmp_binary_offset(reclength))
> + {
> + if((error= check_duplicate_long_entry_key(table, table->update_handler,
> + new_rec, i)))
> + goto exit;
> + /*
> + break because check_duplicate_long_entry_key will
> + take care of the remaining fields
> + */
> + break;
> + }
> + }
> + }
> + }
> + }
> + exit:
> + if (is_update_handler_null)
> + {
> + delete_update_handler(current_thd, table);
> + }
> + return error;
> +}
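The update path above boils down to: a hash-backed unique key only needs re-checking when at least one of the fields it covers actually changed between the old and new row image; otherwise the index scan would just find the row being updated and falsely report a duplicate. A hedged sketch of that selection logic (assumed names, not MariaDB code):

```python
def keys_to_recheck(hash_keys, old_row, new_row):
    """Return the unique keys whose covered fields changed.

    hash_keys maps a key name to the list of field names its
    hash expression covers (mirroring the vcol_info->expr walk
    in check_duplicate_long_entries_update).
    """
    changed = {f for f in new_row if new_row[f] != old_row[f]}
    return [k for k, fields in hash_keys.items() if changed & set(fields)]

# The example from the comment: unique(a,b) and unique(x,y),
# then "update t1 set a=23+a" touches only field a.
old = {"a": "x", "b": "y", "x": "p", "y": "q"}
new = dict(old, a="23+x")
keys = {"a_b": ["a", "b"], "x_y": ["x", "y"]}
recheck = keys_to_recheck(keys, old, new)
# only unique(a,b) must be re-checked; unique(x,y) is untouched
```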
>
> int handler::ha_write_row(uchar *buf)
> {
> @@ -6189,14 +6374,21 @@ int handler::ha_write_row(uchar *buf)
> DBUG_ENTER("handler::ha_write_row");
> DEBUG_SYNC_C("ha_write_row_start");
>
> + setup_table_hash(table);
No-no. You cannot modify the table structure back and forth for *every inserted row*.
It's OK to do it once, when the table is opened, but not for every row.
> MYSQL_INSERT_ROW_START(table_share->db.str, table_share->table_name.str);
> mark_trx_read_write();
> increment_statistics(&SSV::ha_write_count);
>
> + if ((error= check_duplicate_long_entries(table, table->file, buf)))
> + {
> + re_setup_table(table);
> + DBUG_RETURN(error);
> + }
> TABLE_IO_WAIT(tracker, m_psi, PSI_TABLE_WRITE_ROW, MAX_KEY, 0,
> { error= write_row(buf); })
>
> MYSQL_INSERT_ROW_DONE(error);
> + re_setup_table(table);
> if (likely(!error) && !row_already_logged)
> {
> rows_changed++;
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org