
Re: [Maria-developers] [Commits] cf04c06: MDEV-10435 crash with bad stat tables.
by Sergei Golubchik 01 Dec '16
Hi, Alexey!
On Nov 02, Alexey Botchkov wrote:
> revision-id: cf04c06a58e3d0e700491f8f9167e9323cf1de1d (mariadb-10.1.18-28-gcf04c06)
> parent(s): c18054deb2b5cfcf1f13aa71574406f2bafb87c6
> committer: Alexey Botchkov
> timestamp: 2016-11-02 13:02:32 +0400
> message:
>
> MDEV-10435 crash with bad stat tables.
>
> Functions from sql/statistics.cc don't seem to expect
> stat tables to fail or to have inadequate structure.
> Table open errors suppressed and some validity checks added.
>
> diff --git a/sql/sql_statistics.cc b/sql/sql_statistics.cc
> index 4020cbc..3f341ac 100644
> --- a/sql/sql_statistics.cc
> +++ b/sql/sql_statistics.cc
> @@ -129,6 +129,30 @@ inline void init_table_list_for_single_stat_table(TABLE_LIST *tbl,
> }
>
>
> +static
> +inline int stat_tables_are_inadequate(TABLE_LIST *tables)
> +{
> + TABLE_SHARE *cur_s;
> +
> + /* If the number of tables changes, we should revise this function. */
> + DBUG_ASSERT(STATISTICS_TABLES == 3);
> +
> + cur_s= tables[TABLE_STAT].table->s;
> + if (cur_s->fields < TABLE_STAT_N_FIELDS || cur_s->keys == 0)
> + return TRUE;
> +
> + cur_s= tables[COLUMN_STAT].table->s;
> + if (cur_s->fields < COLUMN_STAT_N_FIELDS || cur_s->keys == 0)
> + return TRUE;
> +
> + cur_s= tables[INDEX_STAT].table->s;
> + if (cur_s->fields < INDEX_STAT_N_FIELDS || cur_s->keys == 0)
> + return TRUE;
> +
> + return FALSE;
> +}
I'd suggest using the Table_check_intact interface instead.
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org

[Maria-developers] 10.2 tree, innodb_gis.alter_spatial_index test broken after your push
by Sergey Petrunia 01 Dec '16
Hi Jan,
Your push
https://github.com/MariaDB/server/commit/dc9f919f27fccfeb0de3ab392f33bc5efd…
broke the innodb_gis.alter_spatial_index test in 10.2 tree.
BR
Sergei
--
Sergei Petrunia, Software Developer
MariaDB Corporation | Skype: sergefp | Blog: http://s.petrunia.net/blog

Re: [Maria-developers] [Commits] 2c9bb42: MDEV-11432 Change the informational redo log format tag to "MariaDB 10.2.3"
by Jan Lindström 01 Dec '16
Hi,
ok to push.
R: Jan
On Thu, Dec 1, 2016 at 8:36 AM, <marko.makela(a)mariadb.com> wrote:
> revision-id: 2c9bb42d901fc4f48f4884e4a85af74eae6d0929
> (mariadb-10.2.2-91-g2c9bb42)
> parent(s): dc9f919f27fccfeb0de3ab392f33bc5efdfd59a0
> author: Marko Mäkelä
> committer: Marko Mäkelä
> timestamp: 2016-12-01 08:28:59 +0200
> message:
>
> MDEV-11432 Change the informational redo log format tag to "MariaDB 10.2.3"
>
> MariaDB 10.2 incorporates MySQL 5.7. MySQL 5.7.9 (the first GA release
> of the series) introduced an informational field to the InnoDB redo log
> header, which identifies the server version where the redo log files
> were created (initialized, resized or updated), in
> WL#8845: InnoDB: Redo log format version identifier.
>
> The informational message would be displayed to the user, for example
> if someone tries to start up MySQL 8.0 after killing a MariaDB 10.2 server.
> In the current MariaDB 10.2 source code, the identifier string would
> misleadingly say "MySQL 5.7.14" (using the hard-coded version number in
> univ.i) instead of "MariaDB 10.2.3" (using the contents of the VERSION
> file, the build system copies to config.h and my_config.h).
>
> This is only a cosmetic change. The compatibility check is based on a
> numeric identifier.
>
> We should probably also change the numeric identifier and some logic
> around it. MariaDB 10.2 should refuse to recover from a crashed MySQL 5.7
> instance, because the redo log might contain references to shared
> tablespaces,
> which are not supported by MariaDB 10.2. Also, when MariaDB 10.2 creates
> an encrypted redo log, there should be a redo log format version tag that
> will prevent MySQL 5.7 or 8.0 from starting up.
>
> ---
> storage/innobase/include/log0log.h | 6 +++++-
> 1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/storage/innobase/include/log0log.h
> b/storage/innobase/include/log0log.h
> index c1f2570..caa067c 100644
> --- a/storage/innobase/include/log0log.h
> +++ b/storage/innobase/include/log0log.h
> @@ -555,7 +555,11 @@ or the MySQL version that created the redo log file.
> */
> /** End of the log file creator field. */
> #define LOG_HEADER_CREATOR_END (LOG_HEADER_CREATOR + 32)
> /** Contents of the LOG_HEADER_CREATOR field */
> -#define LOG_HEADER_CREATOR_CURRENT "MySQL " INNODB_VERSION_STR
> +#define LOG_HEADER_CREATOR_CURRENT \
> + "MariaDB " \
> + IB_TO_STR(MYSQL_VERSION_MAJOR) "." \
> + IB_TO_STR(MYSQL_VERSION_MINOR) "." \
> + IB_TO_STR(MYSQL_VERSION_PATCH)
>
> /** The redo log format identifier corresponding to the current format
> version.
> Stored in LOG_HEADER_FORMAT. */
> _______________________________________________
> commits mailing list
> commits(a)mariadb.org
> https://lists.askmonty.org/cgi-bin/mailman/listinfo/commits

Re: [Maria-developers] Please review MDEV-11298 Split Item_func_hex::val_str_ascii() into virtual methods in Type_handler
by Alexander Barkov 01 Dec '16
Hello Alexey,
Thanks for review!
I addressed your suggestions.
Also, Vicentiu expressed a concern (when reviewing another patch with
the same approach) that this kind of cast that I used in sql_type.cc
might be unsafe because it's non-standard. We're discussing other
possible solutions now.
So I thought that in the meanwhile for the built-in data types we can
create methods:
String *val_str_ascii_from_val_int(String *str);
String *val_str_ascii_from_val_real(String *str);
String *val_str_ascii_from_val_str(String *str);
and use them from Type_handler_xxx.
A new patch is attached. Please see comments below.
On 11/30/2016 05:46 PM, Alexey Botchkov wrote:
> Hi, Alexander!
>
> Basically ok with the patch.
> Few comments though:
>
> > +String *Item_func_hex::val_str_ascii_from_int(String *str, ulonglong
> num)
> > {
> ...
> > + char ans[65], *ptr;
> > + if (!(ptr= longlong2str(num, ans, 16)) ||
> > + str->copy(ans, (uint32) (ptr - ans), &my_charset_numeric))
> > + return make_empty_result(); // End of memory
> > + return str;
> > }
>
>
> We can avoid extra copying here like this:
>
> {
> char *n_end;
> str->set_charset(&my_charset_numeric);
> if (str->alloc(65) ||
> !(n_end= longlong2str(num, (char *) str->ptr(), 16)))
> return make_empty_result();
> str->length((uint32) (n_end - str->ptr());
> return str;
> }
Thanks for the idea. It's nice to fix it at once.
I moved this to String rather than Item_func_hex though:
- this helps to avoid (char*) str->ptr().
- it can be useful for other purposes
I tried to move octet2hex() from password.c to int2str.c
and reuse it in sql_string.cc, but failed because of strange
compilation failures in mysqlbinlog.cc, which includes both
password.c and sql_string.cc via #include.
From my understanding there is a bug in password.c:
instead of
#include <mysql.h>
it should be:
#include "mysql.h"
I would like to avoid making changes here for now,
so I gave up trying to reuse octet2hex().
Instead of that I added APPEND_HEX() and reused it in 3
places in sql_string.cc.
>
> > class Item_func_hex :public Item_str_ascii_func
> > {
> ....
>
> Shouldn't we implement this too: ?
> const Type_handler *Item_func_hex::type_handler()
> { return m_handler; }
Not really. Type handler for Item_func_hex is type_handler_varchar,
and it does not depend on the data type of the argument.
To avoid confusion, I renamed m_handler to m_arg0_type_handler.
> Personally i wouldn't add the m_handler member at all and just do
> const Type_handler *Item_func_hex::type_handler()
> { return args[0]->m_handler; }
> Are you sure the m_handler member works faster? Well you can just forget
> that
> comment by now, but i'd like to test this someday.
I think we should cache it, because it can be expensive to
get the handler of args[0] on every row, as it involves virtual
methods, which can call more virtual methods recursively.
I added a comment about this.
>
> > +class Item_func_hex_str: public Item_func_hex
> > +{
> > +public:
> > + String *val_str_ascii(String *str)
> > + {
> > + /* Convert given string to a hex string, character by character */
> > + String *res= args[0]->val_str(str);
> > + if ((null_value= (!res || tmp_value.alloc(res->length() * 2 + 1))))
> > + return 0;
> > + tmp_value.length(res->length() * 2);
> > + tmp_value.set_charset(&my_charset_latin1);
> > + octet2hex((char*) tmp_value.ptr(), res->ptr(), res->length());
> > + return &tmp_value;
> > + }
> > +};
>
> I think it makes more sense to switch the *str and tmp_value
> usage here. So the caller gets the 'str' value he sends to the
> function as a result. Not that it changes a lot, still i think it's
> nicer.
Good idea! Done.
>
> Best regards.
> HF
>
Hi Varun,
Please find my review comments below. We'll need another review pass once
these are addressed.
> commit e9d1969a9758801674828f26431306350bb4c0ab
> Author: Varun <varunraiko1803(a)gmail.com>
> Date: Mon Nov 28 15:09:03 2016 +0530
>
> LIMIT Clause without optimisation added to GROUP_CONCAT
>
Please mention MDEV-11297 in the commit comment. The idea is that it should be
possible to search for MDEV number in 'git log' output and find all commits
with the code for the issue.
> diff --git a/mysql-test/t/group_concat_limit.test b/mysql-test/t/group_concat_limit.test
> index 82baf54..7230e80 100644
OK, this is the .test file. Where is the .result file? Did you forget to add it?
Please also add testcases where the GROUP_CONCAT argument is NULL.
> --- a/mysql-test/t/group_concat_limit.test
> +++ b/mysql-test/t/group_concat_limit.test
> @@ -18,30 +18,30 @@ insert into t1 values (3,7,"E","c");
>
> # Test of MySQL simple request
>
> -select grp,group_concat(c limit 1 ) from t1 group by grp;
> +select grp,group_concat(c limit 0 ) from t1 group by grp;
> select grp,group_concat(c limit 1,1 ) from t1 group by grp;
>
> select grp,group_concat(c limit 1,10 ) from t1 group by grp;
> select grp,group_concat(distinct c limit 1,10 ) from t1 group by grp;
>
> select grp,group_concat(c order by a) from t1 group by grp;
> -elect grp,group_concat(c order by a limit 2 ) from t1 group by grp;
> +select grp,group_concat(c order by a limit 2 ) from t1 group by grp;
>
> select grp,group_concat(c order by a limit 1,1 ) from t1 group by grp;
> select grp,group_concat(c order by a limit 1,2 ) from t1 group by grp;
> select grp,group_concat(c order by a limit 10 ) from t1 group by grp;
>
> -#select grp,group_concat(c order by c) from t1 group by grp;
> -#select grp,group_concat(c order by c limit 2) from t1 group by grp;
> +select grp,group_concat(c order by c) from t1 group by grp;
> +select grp,group_concat(c order by c limit 2) from t1 group by grp;
>
> -#select grp,group_concat(c order by c desc) from t1 group by grp;
> -#select grp,group_concat(c order by c desc limit 2) from t1 group by grp;
> +select grp,group_concat(c order by c desc) from t1 group by grp;
> +select grp,group_concat(c order by c desc limit 2) from t1 group by grp;
>
> -#select grp,group_concat(d order by a) from t1 group by grp;
> -#select grp,group_concat(d order by a limit 2) from t1 group by grp;
> +select grp,group_concat(d order by a) from t1 group by grp;
> +select grp,group_concat(d order by a limit 2) from t1 group by grp;
>
> -#select grp,group_concat(d order by a desc) from t1 group by grp;
> -#select grp,group_concat(d order by a desc limit 1) from t1 group by grp;
> +select grp,group_concat(d order by a desc) from t1 group by grp;
> +select grp,group_concat(d order by a desc limit 1) from t1 group by grp;
>
>
What about this:
create table t2 (a int, b varchar(10));
insert into t2 values
(1,'a'),(1,'b'),(1,'c'),
(2,'x'),(2,'y');
MariaDB [j10]> select group_concat(a,b) from t2;
+-------------------+
| group_concat(a,b) |
+-------------------+
| 1a,1b,1c,2x,2y |
+-------------------+
1 row in set (0.00 sec)
MariaDB [j10]> select group_concat(a,b limit 3) from t2;
+---------------------------+
| group_concat(a,b limit 3) |
+---------------------------+
| 1a,1 |
+---------------------------+
1 row in set (0.01 sec)
Do you think it's correct? We need to discuss this.
> drop table t1;
> diff --git a/sql/item_sum.cc b/sql/item_sum.cc
> index 51a6c2b..681b800 100644
> --- a/sql/item_sum.cc
> +++ b/sql/item_sum.cc
> @@ -3114,6 +3114,11 @@ int dump_leaf_key(void* key_arg, element_count count __attribute__((unused)),
> String *result= &item->result;
> Item **arg= item->args, **arg_end= item->args + item->arg_count_field;
> uint old_length= result->length();
> + ulonglong *offset_limit= &item->offset_limit;
> + ulonglong *row_limit = &item->row_limit;
> +
> + if(item->limit_clause && !(*row_limit))
> + return 1;
>
> if (item->no_appended)
> item->no_appended= FALSE;
> @@ -3148,7 +3153,31 @@ int dump_leaf_key(void* key_arg, element_count count __attribute__((unused)),
> res= (*arg)->val_str(&tmp);
> }
> if (res)
> - result->append(*res);
> + {
> + // This can be further optimised if we calculate the values for the fields only when it is necessary that is afer #offset_limit
> + if(item->limit_clause)
> + {
> + if(*offset_limit)
> + {
> + (*offset_limit)--;
> + item->no_appended= TRUE;
> + }
> + else
> + {
> + if(*row_limit)
> + {
> + result->append(*res);
> + (*row_limit)--;
> + if(!(*row_limit))
> + item->no_appended= TRUE;
I've commented out the above two lines and the testcase still produces the
same result.
I claim that they are not needed. If you think otherwise, please add test
coverage.
Please do the same for the assignment right below.
> + }
> + else
> + item->no_appended= TRUE;
> + }
> + }
> + else
> + result->append(*res);
> + }
> }
>
> item->row_count++;
> @@ -3199,7 +3228,8 @@ int dump_leaf_key(void* key_arg, element_count count __attribute__((unused)),
> Item_func_group_concat(THD *thd, Name_resolution_context *context_arg,
> bool distinct_arg, List<Item> *select_list,
> const SQL_I_List<ORDER> &order_list,
> - String *separator_arg)
> + String *separator_arg, bool limit_clause,
> + ulonglong row_limit, ulonglong offset_limit)
> :Item_sum(thd), tmp_table_param(0), separator(separator_arg), tree(0),
> unique_filter(NULL), table(0),
> order(0), context(context_arg),
> @@ -3208,7 +3238,9 @@ int dump_leaf_key(void* key_arg, element_count count __attribute__((unused)),
> row_count(0),
> distinct(distinct_arg),
> warning_for_row(FALSE),
> - force_copy_fields(0), original(0)
> + force_copy_fields(0), original(0),
> + row_limit(row_limit), offset_limit(offset_limit),limit_clause(limit_clause),
> + copy_offset_limit(offset_limit), copy_row_limit(row_limit)
> {
> Item *item_select;
> Item **arg_ptr;
> @@ -3269,7 +3301,9 @@ int dump_leaf_key(void* key_arg, element_count count __attribute__((unused)),
> warning_for_row(item->warning_for_row),
> always_null(item->always_null),
> force_copy_fields(item->force_copy_fields),
> - original(item)
> + original(item), row_limit(item->row_limit),
> + offset_limit(item->offset_limit),limit_clause(item->limit_clause),
> + copy_offset_limit(item->copy_offset_limit), copy_row_limit(item->row_limit)
> {
> quick_group= item->quick_group;
> result.set_charset(collation.collation);
> @@ -3382,6 +3416,8 @@ void Item_func_group_concat::clear()
> null_value= TRUE;
> warning_for_row= FALSE;
> no_appended= TRUE;
> + offset_limit= copy_offset_limit;
> + row_limit= copy_row_limit;
> if (tree)
> reset_tree(tree);
> if (unique_filter)
> diff --git a/sql/item_sum.h b/sql/item_sum.h
> index b9075db..a38c5ca 100644
> --- a/sql/item_sum.h
> +++ b/sql/item_sum.h
> @@ -1583,6 +1583,12 @@ class Item_func_group_concat : public Item_sum
> bool always_null;
> bool force_copy_fields;
> bool no_appended;
> + bool limit_clause;
> + ulonglong row_limit;
> + ulonglong offset_limit;
> + ulonglong copy_offset_limit;
> + ulonglong copy_row_limit;
Please add (short) comments describing each new member variable.
> +
> /*
> Following is 0 normal object and pointer to original one for copy
> (to correctly free resources)
> @@ -1602,7 +1608,8 @@ class Item_func_group_concat : public Item_sum
> public:
> Item_func_group_concat(THD *thd, Name_resolution_context *context_arg,
> bool is_distinct, List<Item> *is_select,
> - const SQL_I_List<ORDER> &is_order, String *is_separator);
> + const SQL_I_List<ORDER> &is_order, String *is_separator,
> + bool limit_clause, ulonglong row_limit, ulonglong offset_limit);
>
> Item_func_group_concat(THD *thd, Item_func_group_concat *item);
> ~Item_func_group_concat();
> diff --git a/sql/sql_yacc.yy b/sql/sql_yacc.yy
> index 163322d..d87e6fb 100644
> --- a/sql/sql_yacc.yy
> +++ b/sql/sql_yacc.yy
> @@ -1786,7 +1786,7 @@ bool my_yyoverflow(short **a, YYSTYPE **b, ulong *yystacksize);
> %type <num>
> order_dir lock_option
> udf_type opt_local opt_no_write_to_binlog
> - opt_temporary all_or_any opt_distinct
> + opt_temporary all_or_any opt_distinct opt_glimit_clause
> opt_ignore_leaves fulltext_options union_option
> opt_not
> select_derived_init transaction_access_mode_types
> @@ -1844,7 +1844,7 @@ bool my_yyoverflow(short **a, YYSTYPE **b, ulong *yystacksize);
> opt_escape
> sp_opt_default
> simple_ident_nospvar simple_ident_q
> - field_or_var limit_option
> + field_or_var limit_option glimit_option
> part_func_expr
> window_func_expr
> window_func
> @@ -10449,16 +10449,36 @@ sum_expr:
> | GROUP_CONCAT_SYM '(' opt_distinct
> { Select->in_sum_expr++; }
> expr_list opt_gorder_clause
> - opt_gconcat_separator
> + opt_gconcat_separator opt_glimit_clause
> ')'
> {
> SELECT_LEX *sel= Select;
> sel->in_sum_expr--;
> - $$= new (thd->mem_root)
> - Item_func_group_concat(thd, Lex->current_context(), $3, $5,
> - sel->gorder_list, $7);
> - if ($$ == NULL)
> - MYSQL_YYABORT;
> + if(!$8)
Coding style requires a space after "if": "if (...)". Please fix this
everywhere in this patch.
IIRC it also requires that a multi-line statement body inside an if
be enclosed in {...}.
> + $$= new (thd->mem_root)
> + Item_func_group_concat(thd, Lex->current_context(), $3, $5,
> + sel->gorder_list, $7, FALSE, 0, 0);
> + else
> + {
> + if(sel->select_limit && sel->offset_limit)
> + $$= new (thd->mem_root)
> + Item_func_group_concat(thd, Lex->current_context(), $3, $5,
> + sel->gorder_list, $7, TRUE, sel->select_limit->val_int(),
> + sel->offset_limit->val_int());
There is at least one case where you can't call val_int from the parser: when
LIMIT number is a prepared statement parameter.
consider an example:
create table t2 (a int, b varchar(10));
insert into t2 values
(1,'a'),(1,'b'),(1,'c'),
(2,'x'),(2,'y');
prepare STMT from 'select group_concat(b order by a limit ?) from t2' ;
This crashes:
Program received signal SIGABRT, Aborted.
0x00007ffff5d7a425 in __GI_raise (sig=<optimized out>) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
64 ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
#2 0x00007ffff5d730ee in __assert_fail_base (fmt=<optimized out>, assertion=0x5555566d8bd7 "0", file=0x5555566d8f08 "/home/psergey/dev-git/10.2-varun-mdev11297/sql/item.cc", line=<optimized out>, function=<optimized out>) at assert.c:94
#3 0x00007ffff5d73192 in __GI___assert_fail (assertion=0x5555566d8bd7 "0", file=0x5555566d8f08 "/home/psergey/dev-git/10.2-varun-mdev11297/sql/item.cc", line=3689, function=0x5555566dbd60 "virtual longlong Item_param::val_int()") at assert.c:103
#4 0x0000555555de54a0 in Item_param::val_int (this=0x7fff40054160) at /home/psergey/dev-git/10.2-varun-mdev11297/sql/item.cc:3689
#5 0x0000555555d63277 in MYSQLparse (thd=0x7fff40000b00) at /home/psergey/dev-git/10.2-varun-mdev11297/sql/sql_yacc.yy:10471
#6 0x0000555555b6e696 in parse_sql (thd=0x7fff40000b00, parser_state=0x7fffdc0c8cb0, creation_ctx=0x0, do_pfs_digest=false) at /home/psergey/dev-git/10.2-varun-mdev11297/sql/sql_parse.cc:9815
#7 0x0000555555b848b2 in Prepared_statement::prepare (this=0x7fff4004bd80, packet=0x7fff400133a0 "select group_concat(b order by a limit ?) from t2", packet_len=49) at /home/psergey/dev-git/10.2-varun-mdev11297/sql/sql_prepare.cc:3835
...
> + else if(sel->select_limit)
> + $$= new (thd->mem_root)
> + Item_func_group_concat(thd, Lex->current_context(), $3, $5,
> + sel->gorder_list, $7, TRUE, sel->select_limit->val_int(),0);
> + else
> + $$ = NULL; //need more thinking and maybe some error handling to inform the user the problem with the query
First, the indentation is wrong: this 'else' is actually attached to the
"if(sel->select_limit)". One doesn't make such mistakes when
using {...} for all multi-line if-elses.
Second, can the "$$ = NULL" branch ever execute at all? It would run when the
LIMIT clause is present (as $8 != 0) but neither sel->select_limit nor
sel->offset_limit is set. I think the grammar prevents that.
You can add an assert if you still want the code here to cover all cases.
> + }
> + if ($$ == NULL)
> + MYSQL_YYABORT;
> +
> + sel->select_limit= NULL;
> + sel->offset_limit= NULL;
> + sel->explicit_limit= 0;
> +
> $5->empty();
> sel->gorder_list.empty();
> }
> @@ -10680,6 +10700,105 @@ gorder_list:
> { if (add_gorder_to_list(thd, $1,(bool) $2)) MYSQL_YYABORT; }
> ;
>
> +opt_glimit_clause:
> + /* empty */ { $$ = 0; }
> + | glimit_clause { $$ = 1; }
> + ;
> +
> +glimit_clause_init:
> + LIMIT{}
> + ;
> +
> +glimit_clause:
> + glimit_clause_init glimit_options{}
> + | glimit_clause_init glimit_options
> + ROWS_SYM EXAMINED_SYM glimit_rows_option{}
> + | glimit_clause_init ROWS_SYM EXAMINED_SYM glimit_rows_option{}
> + ;
> +
> +glimit_options:
> + glimit_option
> + {
> + SELECT_LEX *sel= Select;
> + sel->select_limit= $1;
> + sel->offset_limit= 0;
> + sel->explicit_limit= 1;
> + }
> + | glimit_option ',' glimit_option
> + {
> + SELECT_LEX *sel= Select;
> + sel->select_limit= $3;
> + sel->offset_limit= $1;
> + sel->explicit_limit= 1;
> + }
> + | glimit_option OFFSET_SYM glimit_option
> + {
> + SELECT_LEX *sel= Select;
> + sel->select_limit= $1;
> + sel->offset_limit= $3;
> + sel->explicit_limit= 1;
> + }
> + ;
> +
> +glimit_option:
> + ident
> + {
> + Item_splocal *splocal;
> + LEX *lex= thd->lex;
> + Lex_input_stream *lip= & thd->m_parser_state->m_lip;
> + sp_variable *spv;
> + sp_pcontext *spc = lex->spcont;
> + if (spc && (spv = spc->find_variable($1, false)))
> + {
> + splocal= new (thd->mem_root)
> + Item_splocal(thd, $1, spv->offset, spv->sql_type(),
> + lip->get_tok_start() - lex->sphead->m_tmp_query,
> + lip->get_ptr() - lip->get_tok_start());
> + if (splocal == NULL)
> + MYSQL_YYABORT;
> +#ifndef DBUG_OFF
> + splocal->m_sp= lex->sphead;
> +#endif
> + lex->safe_to_cache_query=0;
> + }
> + else
> + my_yyabort_error((ER_SP_UNDECLARED_VAR, MYF(0), $1.str));
> + if (splocal->type() != Item::INT_ITEM)
> + my_yyabort_error((ER_WRONG_SPVAR_TYPE_IN_LIMIT, MYF(0)));
> + splocal->limit_clause_param= TRUE;
> + $$= splocal;
> + }
> + | param_marker
> + {
> + $1->limit_clause_param= TRUE;
> + }
> + | ULONGLONG_NUM
> + {
> + $$= new (thd->mem_root) Item_uint(thd, $1.str, $1.length);
> + if ($$ == NULL)
> + MYSQL_YYABORT;
> + }
> + | LONG_NUM
> + {
> + $$= new (thd->mem_root) Item_uint(thd, $1.str, $1.length);
> + if ($$ == NULL)
> + MYSQL_YYABORT;
> + }
> + | NUM
> + {
> + $$= new (thd->mem_root) Item_uint(thd, $1.str, $1.length);
> + if ($$ == NULL)
> + MYSQL_YYABORT;
> + }
> + ;
> +
> +glimit_rows_option:
> + glimit_option
> + {
> + LEX *lex=Lex;
> + lex->limit_rows_examined= $1;
> + }
> +
> in_sum_expr:
> opt_all
> {
When compiling, I'm getting warnings like this:
|| [ 76%] Building CXX object sql/CMakeFiles/sql.dir/item_sum.cc.o
|| /home/psergey/dev-git/10.2-varun-mdev11297/sql/item_sum.h: In constructor ‘Item_func_group_concat::Item_func_group_concat(THD*, Name_resolution_context*, bool, List<Item>*, const SQL_I_List<st_order>&, String*, bool, ulonglong, ulonglong)’:
/home/psergey/dev-git/10.2-varun-mdev11297/sql/item_sum.h|1596 col 27| warning: ‘Item_func_group_concat::original’ will be initialized after [-Wreorder]
/home/psergey/dev-git/10.2-varun-mdev11297/sql/item_sum.h|1587 col 13| warning: ‘ulonglong Item_func_group_concat::row_limit’ [-Wreorder]
item_sum.cc|3227 col 1| warning: when initialized here [-Wreorder]
/home/psergey/dev-git/10.2-varun-mdev11297/sql/item_sum.h|1588 col 13| warning: ‘Item_func_group_concat::offset_limit’ will be initialized after [-Wreorder]
/home/psergey/dev-git/10.2-varun-mdev11297/sql/item_sum.h|1586 col 8| warning: ‘bool Item_func_group_concat::limit_clause’ [-Wreorder]
item_sum.cc|3227 col 1| warning: when initialized here [-Wreorder]
|| /home/psergey/dev-git/10.2-varun-mdev11297/sql/item_sum.h: In constructor ‘Item_func_group_concat::Item_func_group_concat(THD*, Item_func_group_concat*)’:
/home/psergey/dev-git/10.2-varun-mdev11297/sql/item_sum.h|1596 col 27| warning: ‘Item_func_group_concat::original’ will be initialized after [-Wreorder]
/home/psergey/dev-git/10.2-varun-mdev11297/sql/item_sum.h|1587 col 13| warning: ‘ulonglong Item_func_group_concat::row_limit’ [-Wreorder]
item_sum.cc|3288 col 1| warning: when initialized here [-Wreorder]
/home/psergey/dev-git/10.2-varun-mdev11297/sql/item_sum.h|1588 col 13| warning: ‘Item_func_group_concat::offset_limit’ will be initialized after [-Wreorder]
/home/psergey/dev-git/10.2-varun-mdev11297/sql/item_sum.h|1586 col 8| warning: ‘bool Item_func_group_concat::limit_clause’ [-Wreorder]
item_sum.cc|3288 col 1| warning: when initialized here [-Wreorder]
|| Linking CXX static library libsql.a
BR
Sergei
--
Sergei Petrunia, Software Developer
MariaDB Corporation | Skype: sergefp | Blog: http://s.petrunia.net/blog
Hi, Georg!
On Nov 03, Georg Richter wrote:
> Hello Serg,
>
> attached you will find the patch for MDEV-9069.
>
> /Georg
>
> --
> Georg Richter, Senior Software Engineer
> MariaDB Corporation Ab
> From bde4b9d0683f115fa55bb6d2e2164739f89d732c Mon Sep 17 00:00:00 2001
> From: Georg Richter <georg(a)mariadb.com>
> Date: Thu, 3 Nov 2016 07:44:15 +0100
> Subject: [PATCH] Initial implementation for MDEV-9069: "extend AES_ENCRYPT()
> and AES_DECRYPT() to support IV and the algorithm"
>
> - Extended function syntax:
> AES_ENCRYPT(plaintext, key, [[iv, [aad]] using block_encryption_mode);
> AES_DECRYPT(plaintext, key, [[iv, [aad]] using block_encryption_mode);
This doesn't look right. At least the square brackets aren't paired,
and I'd also expect [using block_encryption_mode] to be in brackets;
the comma after key should be inside the brackets too.
> AES_ crypt functions will return an error, if
> - length of IV is too small
> - the number of parameter doesn't match (e.g. if an iv was specified for block cipher mode ECB)
> AES_crypt functions will return a warning, if
> - key_size is too small
>
> - Added new session variable block_encryption_mode
> - Added a new status variable block_encryption_mode_list, which lists available block_encryption modes
This is absolutely not necessary, the list of available values for
block_encryption_mode can be retrieved as
select enum_value_list from system_variables
where variable_name='block_encryption_mode';
>
> - Added new tests for AES_ crypt functions: AES_ECB and AES_CBC vectors from
> NIST Cryptographic validation program (CAVP)
> [http://csrc.nist.gov/groups/STM/cavp/]
What license are these tests available under?
>
> - To prevent too many warnings all keys in tests now are hashed by sha2() function
What does that mean? That is, no need to explain, I'll see it from the code,
but the commit comment is confusing, please reformulate this sentence.
>
> - This patch also includes bug fix for MDEV-11174 (my_aes_crypt crashes in GCM mode)
>
> Note:
> - AES_GCM and AES_CTR are not well tested - CAVP test vectors couldn't be used for AES_GCM since the auth tag after encryption differs.
why does it differ?
> - A key which is longer than the requested key size will be truncated to the appropriate size.
What do you mean by "truncated"? There is a key derivation
function that creates the real key from the user-specified long key
in Item_aes_crypt::create_key.
> diff --git a/mysys_ssl/my_crypt.cc b/mysys_ssl/my_crypt.cc
> index 49bd9af..5485576 100644
> --- a/mysys_ssl/my_crypt.cc
> +++ b/mysys_ssl/my_crypt.cc
> @@ -155,6 +155,7 @@ class MyCTX_gcm : public MyCTX
> int real_ivlen= EVP_CIPHER_CTX_iv_length(&ctx);
> aad= iv + real_ivlen;
> aadlen= ivlen - real_ivlen;
> + EVP_CIPHER_CTX_ctrl(&ctx, EVP_CTRL_GCM_SET_IVLEN, 12, NULL);
Hmm, why is that needed?
> return res;
> }
>
> @@ -205,12 +208,56 @@ const EVP_CIPHER *(*ciphers[])(uint)= {
> aes_ecb, aes_cbc
> #ifdef HAVE_EncryptAes128Ctr
> , aes_ctr
> +#endif
> #ifdef HAVE_EncryptAes128Gcm
> , aes_gcm
> #endif
> +};
> +
> +const char *my_aes_block_encryption_mode_str[]= {
> + "aes-128-ecb",
> + "aes-192-ecb",
> + "aes-256-ecb",
> + "aes-128-cbc",
> + "aes-192-cbc",
> + "aes-256-cbc"
> +#ifdef HAVE_EncryptAes128Ctr
> + , "aes-128-ctr",
> + "aes-192-ctr",
> + "aes-256-ctr"
> #endif
> +#ifdef HAVE_EncryptAes128Gcm
> + , "aes-128-gcm",
> + "aes-192-gcm",
> + "aes-256-gcm"
> +#endif
> + , NULL
> };
>
> +struct st_my_aes_info my_aes_info[]= {
> + { my_aes_128_ecb, MY_AES_ECB, 128, 0, 2},
> + { my_aes_192_ecb, MY_AES_ECB, 192, 0, 2},
> + { my_aes_256_ecb, MY_AES_ECB, 256, 0, 2},
> + { my_aes_128_cbc, MY_AES_CBC, 128, MY_AES_BLOCK_SIZE, 3},
> + { my_aes_192_cbc, MY_AES_CBC, 192, MY_AES_BLOCK_SIZE, 3},
> + { my_aes_256_cbc, MY_AES_CBC, 256, MY_AES_BLOCK_SIZE, 3},
> +#ifdef HAVE_EncryptAes128Ctr
> + { my_aes_128_ctr, MY_AES_CTR, 128, MY_AES_BLOCK_SIZE, 3},
> + { my_aes_192_ctr, MY_AES_CTR, 192, MY_AES_BLOCK_SIZE, 3},
> + { my_aes_256_ctr, MY_AES_CTR, 256, MY_AES_BLOCK_SIZE, 3},
> +#endif
> +#ifdef HAVE_EncryptAes128Gcm
> + { my_aes_128_gcm, MY_AES_GCM, 128, 12, 4 },
> + { my_aes_192_gcm, MY_AES_GCM, 192, 12, 4 },
> + { my_aes_256_gcm, MY_AES_GCM, 256, 12, 4 },
> +#endif
> +};
> +
> +TYPELIB block_encryption_mode_typelib=
> + { my_aes_block_encryption_end,
> + "block_encryption_modes",
> + my_aes_block_encryption_mode_str, NULL};
> +
I don't think all that belongs in my_crypt.cc and my_crypt.h.
This is part of the SQL interface to AES; it should be in item_strfunc.cc.
> extern "C" {
>
> int my_aes_crypt_init(void *ctx, enum my_aes_mode mode, int flags,
> diff --git a/sql/item_strfunc.cc b/sql/item_strfunc.cc
> index acd3d74..9a4108c 100644
> --- a/sql/item_strfunc.cc
> +++ b/sql/item_strfunc.cc
> @@ -359,47 +359,116 @@ void Item_func_sha2::fix_length_and_dec()
> }
>
> /* Implementation of AES encryption routines */
> -void Item_aes_crypt::create_key(String *user_key, uchar *real_key)
> +void Item_aes_crypt::create_item(String *user_item, uchar *real_item, uint item_size)
> {
> - uchar *real_key_end= real_key + AES_KEY_LENGTH / 8;
> + uchar *real_item_end= real_item + item_size;
> uchar *ptr;
> - const char *sptr= user_key->ptr();
> - const char *key_end= sptr + user_key->length();
> + const char *sptr= user_item->ptr();
> + const char *item_end= sptr + user_item->length();
>
> - bzero(real_key, AES_KEY_LENGTH / 8);
> + bzero(real_item, item_size);
>
> - for (ptr= real_key; sptr < key_end; ptr++, sptr++)
> + for (ptr= real_item; sptr < item_end; ptr++, sptr++)
> {
> - if (ptr == real_key_end)
> - ptr= real_key;
> + if (ptr == real_item_end)
> + return;
> *ptr ^= (uchar) *sptr;
> }
> }
>
> -
> String *Item_aes_crypt::val_str(String *str)
> {
> DBUG_ASSERT(fixed == 1);
> - StringBuffer<80> user_key_buf;
> + StringBuffer<80> user_buf;
> String *sptr= args[0]->val_str(str);
> - String *user_key= args[1]->val_str(&user_key_buf);
> - uint32 aes_length;
> + enum my_aes_mode block_cipher;
> +
> + if (block_encryption_mode == (ulong)-1)
> + block_encryption_mode= current_thd->variables.block_encryption_mode;
> +
> + block_cipher= my_aes_info[block_encryption_mode].aes_mode;
> +
> + if (arg_count > my_aes_info[block_encryption_mode].param_count)
> + {
> + null_value= 1;
> + my_error(ER_WRONG_PARAMCOUNT_TO_NATIVE_FCT, MYF(0), what ? "AES_ENCRYPT" : "AES_DECRYPT");
> + return 0;
> + }
>
> - if (sptr && user_key) // we need both arguments to be not NULL
> + if (sptr)
> {
> + uchar riv[128]; // MY_AES_BLOCK_SIZE + MAX_AADLEN (=720 bits)
> + uint32 digest_size; // size of digest in bytes
> + String *user_iv;
> + uint iv_size;
> +
> +
> + digest_size= my_aes_get_size(block_cipher, sptr->length());
> + iv_size= my_aes_info[block_encryption_mode].iv_size;
> +
> + if (iv_size) // check if block encryption mode requires IV
> + {
I thought you could have an assert here: DBUG_ASSERT(arg_count > 2)
because you do ER_WRONG_PARAMCOUNT_TO_NATIVE_FCT earlier.
> + if (arg_count > 2)
> + {
> + user_iv= args[2]->val_str(&user_buf);
> + // throw error if iv is too short
> + if (!user_iv || user_iv->length() < iv_size)
> + {
> + null_value= 1;
> + my_error(ER_AES_INVALID_IV, MYF(0), iv_size);
This needs to be documented carefully, because it's inconsistent
with the behavior for the key argument.
> + return 0;
> + }
> + create_item(user_iv, riv, iv_size);
> + } else {
> + null_value= 1;
> + my_error(ER_AES_INVALID_IV, MYF(0), iv_size);
> + return 0;
> + }
> + }
> +#ifdef HAVE_EncryptAes128Gcm
> + if (arg_count > 3 &&
> + block_cipher == MY_AES_GCM)
Why do you check for MY_AES_GCM? Isn't arg_count > 3 enough?
> + {
> + String *user_aad= args[3]->val_str(&user_buf);
> + if (user_aad)
> + {
> + create_item(user_aad, riv + iv_size, user_aad->length());
> + iv_size+= user_aad->length();
> + }
> + }
> +#endif
> null_value=0;
> - aes_length=my_aes_get_size(MY_AES_ECB, sptr->length());
>
> - if (!str_value.alloc(aes_length)) // Ensure that memory is free
> + if (!str_value.alloc(digest_size)) // Ensure that memory is free
> {
> - uchar rkey[AES_KEY_LENGTH / 8];
> - create_key(user_key, rkey);
> + String *user_key= args[1]->val_str(&user_buf);
> + uint32 key_size= my_aes_info[block_encryption_mode].key_size;
> + uchar rkey[MY_AES_MAX_KEY_LENGTH];
>
> - if (!my_aes_crypt(MY_AES_ECB, what, (uchar*)sptr->ptr(), sptr->length(),
> - (uchar*)str_value.ptr(), &aes_length,
> - rkey, AES_KEY_LENGTH / 8, 0, 0))
> + if (!user_key || !user_key->length())
> {
> - str_value.length((uint) aes_length);
> + null_value= 1;
> + return 0;
> + }
Better to check the key before allocating any memory,
just as the old code did.
> +
> + // if key is too short, throw a warning
> + if (user_key->length() < (key_size / 8))
> + {
> + push_warning_printf(current_thd, Sql_condition::WARN_LEVEL_WARN,
> + ER_WARN_AES_KEY_TOO_SHORT,
> + ER_THD(current_thd, ER_WARN_AES_KEY_TOO_SHORT),
> + key_size / 8);
> + }
> +
> + create_item(user_key, rkey, key_size / 8);
> +
> + if (!my_aes_crypt(block_cipher,
> + what, (uchar*)sptr->ptr(), sptr->length(),
> + (uchar*)str_value.ptr(), &digest_size,
> + rkey, key_size / 8, (arg_count > 2) ? riv : 0,
> + (arg_count > 2) ? iv_size : 0))
> + {
> + str_value.length((uint) digest_size);
> return &str_value;
> }
> }
> diff --git a/sql/item_strfunc.h b/sql/item_strfunc.h
> index 25b63eb..153ca7b 100644
> --- a/sql/item_strfunc.h
> +++ b/sql/item_strfunc.h
> @@ -184,8 +188,9 @@ class Item_func_aes_encrypt :public Item_aes_crypt
> class Item_func_aes_decrypt :public Item_aes_crypt
> {
> public:
> - Item_func_aes_decrypt(THD *thd, Item *a, Item *b):
> - Item_aes_crypt(thd, a, b) {}
> + Item_func_aes_decrypt(THD *thd, ulong mode, List<Item> &list_item):
> + Item_aes_crypt(thd, list_item)
> + { block_encryption_mode= mode; }
You also need to implement check_vcol_func_processor()
and check_valid_arguments_processor() to make sure that AES_ENCRYPT/AES_DECRYPT
without the USING clause are not used in partitioning expressions and
in stored generated columns. And add tests for that, please.
After I push MDEV-5800 (or now in the bb-10.2-vcols branch), see Item_func_week as
an example (it also has an optional argument that by default takes a value
from a session variable).
> void fix_length_and_dec();
> const char *func_name() const { return "aes_decrypt"; }
> Item *get_copy(THD *thd, MEM_ROOT *mem_root)
> diff --git a/sql/mysqld.cc b/sql/mysqld.cc
> index 310ccb0..b2f677e 100644
> --- a/sql/mysqld.cc
> +++ b/sql/mysqld.cc
> @@ -8317,6 +8317,27 @@ static int show_memory_used(THD *thd, SHOW_VAR *var, char *buff,
> return 0;
> }
>
> +static int show_block_encrypt_mode_list(THD *thd, SHOW_VAR *var, char *buff,
> + struct system_status_var *status_var,
> + enum enum_var_type scope)
> +{
> + uint i;
> + char *end= buff + SHOW_VAR_FUNC_BUFF_SIZE;
> + var->type= SHOW_CHAR;
> + var->value= buff;
> +
> + for (i=0; my_aes_block_encryption_mode_str[i] &&
> + buff + strlen(my_aes_block_encryption_mode_str[i]) < end &&
> + i < my_aes_block_encryption_end; i++)
> + {
> + buff= strnmov(buff, my_aes_block_encryption_mode_str[i], end-buff-1);
> + *buff++= ',';
> + }
> + if (i)
> + buff--;
> + *buff= 0;
> + return 0;
> +}
No need for this, because the block_encryption_mode_list status variable is not
needed either.
>
> #ifndef DBUG_OFF
> static int debug_status_func(THD *thd, SHOW_VAR *var, char *buff,
> diff --git a/sql/share/errmsg-utf8.txt b/sql/share/errmsg-utf8.txt
> index d42611b..5a4c8bf 100644
> --- a/sql/share/errmsg-utf8.txt
> +++ b/sql/share/errmsg-utf8.txt
> @@ -7231,4 +7231,16 @@ ER_PARTITION_DEFAULT_ERROR
> eng "Only one DEFAULT partition allowed"
> ukr "Припустимо мати тільки один DEFAULT розділ"
> ER_REFERENCED_TRG_DOES_NOT_EXIST
> - eng "Referenced trigger '%s' for the given action time and event type does not exist"
> + eng "Referenced trigger '%s' for the given action time and event type does not exist"
> +ER_AES_INVALID_IV
> + eng "The initialization vector supplied is too short. Must be at least %d bytes long"
> + ger "Der angegebene Initalisierungsvektor ist zu kurz. Die Mindestlänge beträgt %d Bytes"
> +ER_AES_INVALID_MODE
> + eng "Unknown block encryption mode '%s'"
> + ger "Der angegebene Blockcodierungsmodus '%s' exstiert nicht"
> +ER_AES_INVALID_KEY
> + eng "Invalid key"
> + ger "Ungültiger Schlüssel"
This error message seems to be unused.
> +ER_WARN_AES_KEY_TOO_SHORT
> + eng "The provided key is too short. It is more secure to use a key with %d bytes"
s/with %d bytes/of at least %d bytes long/
> + ger "Der angegebene Schlüssel ist zu kurz. Es ist empfehlenswert einen Schlüssel mit %d Bytes zu verwenden"
> diff --git a/sql/sql_yacc.yy b/sql/sql_yacc.yy
> index e17a514..12e79d3 100644
> --- a/sql/sql_yacc.yy
> +++ b/sql/sql_yacc.yy
> @@ -6670,7 +6673,6 @@ charset:
> CHAR_SYM SET {}
> | CHARSET {}
> ;
> -
Huh?
> charset_name:
> ident_or_text
> {
> @@ -7102,6 +7104,16 @@ opt_component:
> | '.' ident { $$= $2; }
> ;
>
> +block_encryption_mode:
> + ident_or_text
> + {
> + if (!($$=find_type($1.str, &block_encryption_mode_typelib, MYF(0))))
> + my_yyabort_error((ER_AES_INVALID_MODE, MYF(0), $1.str));
> + $$-= 1;
> + }
> + | DEFAULT { $$= (ulong)-1;}
Why do you want to support USING DEFAULT?
> + ;
> +
> string_list:
> text_string
> { Lex->last_field->interval_list.push_back($1, thd->mem_root); }
> diff --git a/sql/sys_vars.cc b/sql/sys_vars.cc
> index b6359ff..8b39462 100644
> --- a/sql/sys_vars.cc
> +++ b/sql/sys_vars.cc
> @@ -554,6 +554,10 @@ static Sys_var_mybool Sys_explicit_defaults_for_timestamp(
> READ_ONLY GLOBAL_VAR(opt_explicit_defaults_for_timestamp),
> CMD_LINE(OPT_ARG), DEFAULT(FALSE), NO_MUTEX_GUARD, NOT_IN_BINLOG);
>
> +static Sys_var_enum Sys_block_encryption_mode(
> + "block_encryption_mode", "mode for AES_ENCRYPT/AES_DECRYPT",
"for AES_ENCRYPT and AES_DECRYPT"
> + SESSION_VAR(block_encryption_mode), CMD_LINE(REQUIRED_ARG),
> + my_aes_block_encryption_mode_str, DEFAULT(my_aes_128_ecb));
>
> static Sys_var_ulonglong Sys_bulk_insert_buff_size(
> "bulk_insert_buffer_size", "Size of tree cache used in bulk "
> diff --git a/mysql-test/r/aes_cbc_crypt.result b/mysql-test/r/aes_cbc_crypt.result
> new file mode 100644
> index 0000000..87ed84b
> --- /dev/null
> +++ b/mysql-test/r/aes_cbc_crypt.result
> @@ -0,0 +1,19300 @@
> +#
> +# Error checking
> +#
> +SELECT HEX(AES_ENCRYPT("Die Katze tritt die Treppe krumm", "foo", "1234567890123456" USING "aes-128-cbc"));
I think it's better to use underscores, aes_128_cbc; then the mode can be used as
an identifier:
AES_ENCRYPT("Die Katze", "foo", "1234567890123456" USING aes_128_cbc);
(like charset names). Otherwise it's not at all intuitive that
SELECT AES_ENCRYPT("Die Katze", "foo", "1234567890123456"
USING "aes-128-cbc");
works, but
SELECT AES_ENCRYPT("Die Katze", "foo", "1234567890123456"
USING concat_ws("-", "aes", "128", "cbc"));
doesn't. So, either exclude the possibility of this confusion by not
allowing a string literal as a mode. Or (actually better) allow any
arbitrary non-constant expression as the mode.
> +#
> +# Session variable tests
> +#
> +SET session block_encryption_mode="aes-111-cbc";
> +ERROR 42000: Variable 'block_encryption_mode' can't be set to the value of 'aes-111-cbc'
> +SET session block_encryption_mode="aes-128-cbc";
> +SELECT LEFT(AES_ENCRYPT(X'53696E676C6520626C6F636B206D7367', X'06A9214036B8A15B512E03D534120006', X'3DAFBA429D9EB430B422DA802C9FAC41'), 16) = X'E353779C1079AEB82708942DBE77181A';
> +LEFT(AES_ENCRYPT(X'53696E676C6520626C6F636B206D7367', X'06A9214036B8A15B512E03D534120006', X'3DAFBA429D9EB430B422DA802C9FAC41'), 16) = X'E353779C1079AEB82708942DBE77181A'
> +1
> +SELECT AES_DECRYPT(AES_ENCRYPT(X'53696E676C6520626C6F636B206D7367', X'06A9214036B8A15B512E03D534120006', X'3DAFBA429D9EB430B422DA802C9FAC41'), X'06A9214036B8A15B512E03D534120006', X'3DAFBA429D9EB430B422DA802C9FAC41') = X'53696E676C6520626C6F636B206D7367';
> +AES_DECRYPT(AES_ENCRYPT(X'53696E676C6520626C6F636B206D7367', X'06A9214036B8A15B512E03D534120006', X'3DAFBA429D9EB430B422DA802C9FAC41'), X'06A9214036B8A15B512E03D534120006', X'3DAFBA429D9EB430B422DA802C9FAC41') = X'53696E676C6520626C6F636B206D7367'
Please add tests for views, the DEFAULT clause, virtual columns, and partitioning.
> +1
> ...
I don't think it's necessary to commit megabytes of CAVP tests into the
server repository. Maybe just a few KB of them will be enough?
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org
Hi!
I have now started to work on the spider patches for MariaDB 10.2, MDEV-7698.
I have moved all code from maria-10.1-spider to a new branch
maria-10.2-spider and added some more patches.
I have closed all related MDEVs in MDEV-7698 that are now included in
10.2-spider.
While doing this, I noticed that spider/handler test was not included
in the test suite. I added the missing suite.pm and suite.opt files
and got the tests to work.
However, when I tried to run the tests to verify my changes, I noticed that
a lot of tests in spider/handler were failing:
mysql-test-run --suite=spider/handler
produces these failures:
spider/handler.spider3_fixes spider/handler.direct_aggregate
spider/handler.direct_update spider/handler.spider_fixes
spider/handler.function spider/handler.ha spider/handler.vp_fixes
All failures are because the .test and .result files don't match.
I checked the patch file:
http://spiderformysql.com/downloads/spider-3.2/patch_mariadb-10.1.8.tgz
but this doesn't include any updates to the handler test files:
grep mysql_test * returns nothing.
However the .tar file:
http://spiderformysql.com/downloads/spider-3.2/mariadb-10.1.8-spider-3.2-vp…
contains a lot of updated .test and .result files.
Kentoku, do you have patches for the test files, or should I just take
them from the above spider branch or from somewhere else?
Another question:
After applying the patches:
013_mariadb-10.0.15.vp_handler.diff
034_mariadb-10.0.15.vp_handler2.diff
005_mariadb-10.0.15.hs.diff
041_mariadb-10.0.15.vp_handler2.diff
I get the following change in spider/handler/basic_sql.result:
--- a/storage/spider/mysql-test/spider/handler/r/basic_sql.result
+++ b/storage/spider/mysql-test/spider/handler/r/basic_sql.result
@@ -70,6 +70,12 @@ CREATE TABLE ta_l (
PRIMARY KEY(a)
) MASTER_1_ENGINE MASTER_1_CHARSET MASTER_1_COMMENT_2_1
IGNORE SELECT a, b, c FROM tb_l
+Warnings:
+Warning 1062 Duplicate entry '1' for key 'PRIMARY'
+Warning 1062 Duplicate entry '2' for key 'PRIMARY'
+Warning 1062 Duplicate entry '3' for key 'PRIMARY'
+Warning 1062 Duplicate entry '4' for key 'PRIMARY'
+Warning 1062 Duplicate entry '5' for key 'PRIMARY'
I can't figure out why we get the above warnings.
This is from a patch we discussed at booking.com one year ago. Any
explanation for the above warnings would be appreciated.
You can branch 10.2-spider and check the current state.
Regards,
Monty
Re: [Maria-developers] 1579140: MDEV-11005: Incorrect error message when using ONLINE alter table with GIS
by Sergei Golubchik 30 Nov '16
Hi, Jan!
On Oct 10, Jan Lindström wrote:
> parent(s): a6f032af5778018051d41fc8ba7e9c983b4b7fbf
> author: Jan Lindström
> committer: Jan Lindström
> timestamp: 2016-10-10 12:34:06 +0300
> message:
>
> MDEV-11005: Incorrect error message when using ONLINE alter table with GIS
>
> Fix incorrect error message on ONLINE alter table with GIS indexes and
> add rtree page type to page type check.
>
> diff --git a/sql/share/errmsg-utf8.txt b/sql/share/errmsg-utf8.txt
> index d42611b..2892342 100644
> --- a/sql/share/errmsg-utf8.txt
> +++ b/sql/share/errmsg-utf8.txt
> @@ -6861,6 +6861,9 @@ ER_ALTER_OPERATION_NOT_SUPPORTED_REASON_CHANGE_FTS
> ER_ALTER_OPERATION_NOT_SUPPORTED_REASON_FTS
> eng "Fulltext index creation requires a lock"
>
> +ER_ALTER_OPERATION_NOT_SUPPORTED_REASON_GIS
> + eng "Do not support online operation on table with GIS index"
> +
No, this is absolutely not possible in 10.2.
ER_SQL_SLAVE_SKIP_COUNTER_NOT_SETTABLE_IN_GTID_MODE, the next error
message, was added in 10.0. If you add a new error message here, then
the error number for ER_SQL_SLAVE_SKIP_COUNTER_NOT_SETTABLE_IN_GTID_MODE
will change. But 10.0 is GA, and error numbers cannot be changed in GA
versions.
You can add your new ER_ALTER_OPERATION_NOT_SUPPORTED_REASON_GIS
after ER_CANNOT_DISCARD_TEMPORARY_TABLE, or, preferably, copy all MySQL
errors from 3000 and up to ER_ALTER_OPERATION_NOT_SUPPORTED_REASON_GIS.
This will ensure that ER_ALTER_OPERATION_NOT_SUPPORTED_REASON_GIS will
have the same number in MariaDB and in MySQL.
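To make the renumbering concrete, here is a toy sketch (a hypothetical illustration, not the real errmsg-utf8.txt tooling): error codes are assigned by position in the file, so inserting a symbol in the middle shifts the code of every symbol after it.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Toy model of positional error numbering (an illustration, not the real
// errmsg-utf8.txt machinery): each symbol's error code is a base number plus
// its position in the list, so inserting a new symbol renumbers everything
// that comes after it.
int error_number(const std::vector<std::string> &symbols,
                 const std::string &name, int base = 7000)
{
  for (std::size_t i = 0; i < symbols.size(); i++)
    if (symbols[i] == name)
      return base + static_cast<int>(i);
  return -1;  // unknown symbol
}
```

Inserting the new GIS symbol before ER_SQL_SLAVE_SKIP_COUNTER_NOT_SETTABLE_IN_GTID_MODE bumps the latter's code by one, which is exactly the incompatibility a GA series must avoid; hence the rule to append only at the end.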
> ER_SQL_SLAVE_SKIP_COUNTER_NOT_SETTABLE_IN_GTID_MODE
> eng "sql_slave_skip_counter can not be set when the server is running with GTID_MODE = ON. Instead, for each transaction that you want to skip, generate an empty transaction with the same GTID as the transaction"
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org
Re: [Maria-developers] [Commits] dabf6ca: MDEV-10340: support COM_RESET_CONNECTION
by Sergei Golubchik 30 Nov '16
Hi, Oleksandr!
On Oct 17, Oleksandr Byelkin wrote:
> revision-id: dabf6cac60987e88266396a28e40b341899704e6 (mariadb-10.2.2-49-gdabf6ca)
> parent(s): 8303aded294ce905bbc513e7ee42623d5f1fdb50
> committer: Oleksandr Byelkin
> timestamp: 2016-10-17 16:59:36 +0200
> message:
>
> MDEV-10340: support COM_RESET_CONNECTION
>
> draft to check with client
>
> ---
> include/mysql.h.pp | 2 +
> include/mysql_com.h | 2 +
> mysql-test/r/mysqld--help.result | 2 +-
> .../sys_vars/r/sysvars_server_embedded.result | 4 +-
> .../sys_vars/r/sysvars_server_notembedded.result | 4 +-
> sql/sql_class.cc | 65 ++++++++++++++++++++++
> sql/sql_class.h | 1 +
> sql/sql_parse.cc | 13 ++++-
> 8 files changed, 86 insertions(+), 7 deletions(-)
>
> diff --git a/include/mysql_com.h b/include/mysql_com.h
> index 461800f..e1b129a 100644
> --- a/include/mysql_com.h
> +++ b/include/mysql_com.h
> @@ -111,6 +111,8 @@ enum enum_server_command
> COM_TABLE_DUMP, COM_CONNECT_OUT, COM_REGISTER_SLAVE,
> COM_STMT_PREPARE, COM_STMT_EXECUTE, COM_STMT_SEND_LONG_DATA, COM_STMT_CLOSE,
> COM_STMT_RESET, COM_SET_OPTION, COM_STMT_FETCH, COM_DAEMON,
> + COM_UNIMPLEMENTED,
> + COM_RESET_CONNECTION,
What's COM_UNIMPLEMENTED?
> /* don't forget to update const char *command_name[] in sql_parse.cc */
> COM_MDB_GAP_BEG,
> COM_MDB_GAP_END=250,
> diff --git a/sql/sql_class.cc b/sql/sql_class.cc
> index 1af3b9a..0cb58d4 100644
> --- a/sql/sql_class.cc
> +++ b/sql/sql_class.cc
> @@ -1575,6 +1575,71 @@ void THD::change_user(void)
> }
>
>
> +/*
> + Do what's needed when one invokes change user
> +
> + SYNOPSIS
> + cleanup_connection()
> +
> + IMPLEMENTATION
> + Reset all resources that are connection specific
> +*/
> +
> +void THD::cleanup_connection(void)
> +{
Why do you need a dedicated method for that, instead of simply invoking
THD::change_user() for COM_RESET_CONNECTION?
> +}
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org
[Maria-developers] Please review a 10.3 patch "Moving functions cast_stmt_xxx and add_select_to_union_list as methods to LEX"
by Alexander Barkov 29 Nov '16
Hello Sanja,
As we agreed during our discussion on Slack,
we should port a few preparatory patches
from bb-10.2-compatibility to 10.3, to make our
further compatibility-related development easier.
This is the first patch.
Please review.
Thanks!
[Maria-developers] MDEV-11343 LOAD DATA INFILE fails to load data with an escape character followed by a multi-byte character
by Alexander Barkov 28 Nov '16
Hello Sergei,
Please review a patch for MDEV-11343.
Thanks!
Re: [Maria-developers] [Commits] 44f3058: Prevent undefined behavior if the table is already initialized
by Sergei Golubchik 28 Nov '16
Hi, Vicențiu!
On Sep 20, Vicențiu Ciorbaru wrote:
> Hi Sergey, Monty!
>
> CCed Monty as he last touched this code as part of MDEV-8408.
>
> This patch comes after I found a warning during compilation that says that
> we might be using the error variable as uninitialised.
> Looking at the code:
> int error;
> /* ..... */
> if (!table->file->inited &&
> (error= table->file->ha_index_init(idx, 1)))
> /* ... */
> DBUG_RETURN(error != 0);
>
> Here, if table->file->inited is actually set to true, the error
> variable is never set. The problem is that I'm not sure if we should
> be returning a failure or not. I considered that having the table
> initialised _before_ this call would lead to "not-an-error". Then
> again, the semantics are strange and I couldn't figure out exactly
> which is the correct return value.
>
> Thoughts?
Can table->file->inited be true here at all?
I've added an assertion there and run the main test suite (in normal and
--ps-protocol, just in case) - it has never fired.
So, I'd speculate that table->file->inited must always be false there and
that an assert looks more appropriate than the if() there.
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org
[Maria-developers] Please review MDEV-11365 Split the data type and attribute related code in Item_sum_hybrid::fix_fields into Type_handler::Item_sum_hybrid_fix_length_and_dec()
by Alexander Barkov 28 Nov '16
Hello Sanja,
Please review MDEV-11365.
Thanks!
[Maria-developers] [Commits] 1c9da8d: MDEV-9312: storage engine not enforced during
by Sachin Setiya 28 Nov '16
Hi Nirbhay,
if (IF_WSREP(thd->wsrep_applier,1))
{
plugin_thdvar_init(thd);
}
This code in the commit breaks log writing on other nodes (in Galera). It sets
thd->variables to the global system variables.
And this
if (wsrep_emulate_bin_log || !(thd->variables.option_bits & OPTION_BIN_LOG))
DBUG_RETURN(0);
code here makes the write_transaction_to_binlog() function exit.
I am unable to understand the purpose of applying plugin_thdvar_init(thd)
for wsrep_applier threads, because it is applied to the first thread only
and not to the remaining threads.
--
Regards
Sachin Setiya
Software Engineer at MariaDB
28 Nov '16
Hi Sergei,
Please let me know if you want to review the binlog encryption tests;
otherwise I can just push them into 10.1. There are no changes apart
from the tests, and in this form they've passed a buildbot round on a
development tree.
After they're merged into 10.2, several tests will fail; this is
expected. I will attach to the JIRA issue a separate patch which should
fix them after the merge.
Regards,
/E
-------- Forwarded Message --------
Subject: [Commits] ff7bee0: MDEV-9038 Binlog encryption tests
Date: Mon, 28 Nov 2016 02:47:14 +0200 (EET)
From: elenst(a)montyprogram.com
Reply-To: maria-developers(a)lists.launchpad.net
To: commits(a)mariadb.org
revision-id: ff7bee04f8abb6d95adc73fcfe7e213b5972d698 (mariadb-10.1.19-14-gff7bee0)
parent(s): a68d1352b60bfc3a424fd290a4f5a1beae1bb71e
author: Elena Stepanova
committer: Elena Stepanova
timestamp: 2016-11-27 18:09:19 +0200
message:
MDEV-9038 Binlog encryption tests
---
mysql-test/mysql-test-run.pl | 1 +
.../suite/binlog_encryption/binlog_index.result | 187 ++
.../suite/binlog_encryption/binlog_index.test | 276 ++
.../suite/binlog_encryption/binlog_ioerr.result | 32 +
.../suite/binlog_encryption/binlog_ioerr.test | 33 +
.../binlog_mysqlbinlog-cp932-master.opt | 1 +
.../binlog_mysqlbinlog-cp932.result | 19 +
.../binlog_mysqlbinlog-cp932.test | 32 +
.../binlog_row_annotate-master.opt | 1 +
.../binlog_encryption/binlog_row_annotate.result | 724 ++++++
.../binlog_encryption/binlog_row_annotate.test | 160 ++
.../binlog_encryption/binlog_write_error.result | 108 +
.../binlog_encryption/binlog_write_error.test | 106 +
.../binlog_encryption/binlog_xa_recover-master.opt | 1 +
.../binlog_encryption/binlog_xa_recover.result | 240 ++
.../suite/binlog_encryption/binlog_xa_recover.test | 304 +++
mysql-test/suite/binlog_encryption/disabled.def | 1 +
.../binlog_encryption/encrypted_master.result | 694 +++++
.../suite/binlog_encryption/encrypted_master.test | 188 ++
.../encrypted_master_lost_key.result | 111 +
.../encrypted_master_lost_key.test | 206 ++
.../encrypted_master_switch_to_unencrypted.test | 137 +
.../suite/binlog_encryption/encrypted_slave.cnf | 12 +
.../suite/binlog_encryption/encrypted_slave.result | 247 ++
.../suite/binlog_encryption/encrypted_slave.test | 122 +
.../encryption_algorithms.combinations | 5 +
.../binlog_encryption/encryption_algorithms.inc | 2 +
.../suite/binlog_encryption/encryption_combo.cnf | 5 +
.../binlog_encryption/encryption_combo.result | 94 +
.../suite/binlog_encryption/encryption_combo.test | 136 +
mysql-test/suite/binlog_encryption/grep_binlog.inc | 54 +
.../master_switch_to_unencrypted.cnf | 4 +
mysql-test/suite/binlog_encryption/multisource.cnf | 17 +
.../suite/binlog_encryption/multisource.result | 228 ++
.../suite/binlog_encryption/multisource.test | 335 +++
mysql-test/suite/binlog_encryption/my.cnf | 27 +
.../suite/binlog_encryption/restart_server.inc | 35 +
.../suite/binlog_encryption/rpl_binlog_errors.cnf | 7 +
.../binlog_encryption/rpl_binlog_errors.result | 280 ++
.../suite/binlog_encryption/rpl_binlog_errors.test | 439 ++++
.../rpl_cant_read_event_incident.result | 26 +
.../rpl_cant_read_event_incident.test | 92 +
.../suite/binlog_encryption/rpl_checksum.cnf | 10 +
.../suite/binlog_encryption/rpl_checksum.result | 157 ++
.../suite/binlog_encryption/rpl_checksum.test | 326 +++
.../binlog_encryption/rpl_checksum_cache.result | 136 +
.../binlog_encryption/rpl_checksum_cache.test | 271 ++
.../suite/binlog_encryption/rpl_corruption.cnf | 9 +
.../suite/binlog_encryption/rpl_corruption.result | 63 +
.../suite/binlog_encryption/rpl_corruption.test | 193 ++
.../suite/binlog_encryption/rpl_gtid_basic.cnf | 24 +
.../suite/binlog_encryption/rpl_gtid_basic.result | 558 ++++
.../suite/binlog_encryption/rpl_gtid_basic.test | 644 +++++
.../suite/binlog_encryption/rpl_incident.cnf | 7 +
.../suite/binlog_encryption/rpl_incident.result | 42 +
.../suite/binlog_encryption/rpl_incident.test | 72 +
.../binlog_encryption/rpl_init_slave_errors.result | 22 +
.../binlog_encryption/rpl_init_slave_errors.test | 100 +
.../binlog_encryption/rpl_loaddata_local.result | 134 +
.../binlog_encryption/rpl_loaddata_local.test | 233 ++
.../suite/binlog_encryption/rpl_loadfile.result | 262 ++
.../suite/binlog_encryption/rpl_loadfile.test | 137 +
.../rpl_mixed_binlog_max_cache_size.result | 210 ++
.../rpl_mixed_binlog_max_cache_size.test | 489 ++++
mysql-test/suite/binlog_encryption/rpl_packet.cnf | 10 +
.../suite/binlog_encryption/rpl_packet.result | 83 +
mysql-test/suite/binlog_encryption/rpl_packet.test | 199 ++
.../suite/binlog_encryption/rpl_parallel.result | 2025 +++++++++++++++
.../suite/binlog_encryption/rpl_parallel.test | 2729 ++++++++++++++++++++
.../rpl_parallel_show_binlog_events_purge_logs.cnf | 6 +
...l_parallel_show_binlog_events_purge_logs.result | 13 +
...rpl_parallel_show_binlog_events_purge_logs.test | 39 +
.../binlog_encryption/rpl_relayrotate-slave.opt | 5 +
.../suite/binlog_encryption/rpl_relayrotate.result | 20 +
.../suite/binlog_encryption/rpl_relayrotate.test | 21 +
.../suite/binlog_encryption/rpl_semi_sync.result | 488 ++++
.../suite/binlog_encryption/rpl_semi_sync.test | 606 +++++
.../binlog_encryption/rpl_skip_replication.cnf | 6 +
.../binlog_encryption/rpl_skip_replication.result | 312 +++
.../binlog_encryption/rpl_skip_replication.test | 401 +++
.../binlog_encryption/rpl_special_charset.opt | 1 +
.../binlog_encryption/rpl_special_charset.result | 10 +
.../binlog_encryption/rpl_special_charset.test | 35 +
.../rpl_sporadic_master-master.opt | 1 +
.../binlog_encryption/rpl_sporadic_master.result | 28 +
.../binlog_encryption/rpl_sporadic_master.test | 34 +
mysql-test/suite/binlog_encryption/rpl_ssl.result | 55 +
mysql-test/suite/binlog_encryption/rpl_ssl.test | 126 +
.../rpl_stm_relay_ign_space-slave.opt | 1 +
.../rpl_stm_relay_ign_space.result | 6 +
.../binlog_encryption/rpl_stm_relay_ign_space.test | 117 +
.../rpl_switch_stm_row_mixed.result | 457 ++++
.../rpl_switch_stm_row_mixed.test | 634 +++++
.../suite/binlog_encryption/rpl_sync-master.opt | 2 +
.../suite/binlog_encryption/rpl_sync-slave.opt | 2 +
mysql-test/suite/binlog_encryption/rpl_sync.result | 53 +
mysql-test/suite/binlog_encryption/rpl_sync.test | 180 ++
.../rpl_temporal_format_default_to_default.cnf | 6 +
.../rpl_temporal_format_default_to_default.result | 91 +
.../rpl_temporal_format_default_to_default.test | 82 +
.../rpl_temporal_format_mariadb53_to_mysql56.cnf | 6 +
...rpl_temporal_format_mariadb53_to_mysql56.result | 95 +
.../rpl_temporal_format_mariadb53_to_mysql56.test | 18 +
.../rpl_temporal_format_mysql56_to_mariadb53.cnf | 6 +
...rpl_temporal_format_mysql56_to_mariadb53.result | 95 +
.../rpl_temporal_format_mysql56_to_mariadb53.test | 8 +
.../suite/binlog_encryption/rpl_typeconv.result | 548 ++++
.../suite/binlog_encryption/rpl_typeconv.test | 81 +
mysql-test/suite/binlog_encryption/suite.pm | 18 +
mysql-test/suite/binlog_encryption/testdata.inc | 207 ++
mysql-test/suite/binlog_encryption/testdata.opt | 1 +
mysql-test/unstable-tests | 4 +
112 files changed, 19799 insertions(+)
diff --git a/mysql-test/mysql-test-run.pl b/mysql-test/mysql-test-run.pl
index 2bd89f5..2485b94 100755
--- a/mysql-test/mysql-test-run.pl
+++ b/mysql-test/mysql-test-run.pl
@@ -170,6 +170,7 @@ my @DEFAULT_SUITES= qw(
main-
archive-
binlog-
+ binlog_encryption-
csv-
encryption-
federated-
diff --git a/mysql-test/suite/binlog_encryption/binlog_index.result b/mysql-test/suite/binlog_encryption/binlog_index.result
new file mode 100644
index 0000000..8cdca86
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/binlog_index.result
@@ -0,0 +1,187 @@
+call mtr.add_suppression('Attempting backtrace');
+call mtr.add_suppression('MSYQL_BIN_LOG::purge_logs failed to process registered files that would be purged.');
+call mtr.add_suppression('MSYQL_BIN_LOG::open failed to sync the index file');
+call mtr.add_suppression('Turning logging off for the whole duration of the MySQL server process.');
+call mtr.add_suppression('Could not open .*');
+call mtr.add_suppression('MSYQL_BIN_LOG::purge_logs failed to clean registers before purging logs.');
+flush tables;
+RESET MASTER;
+flush logs;
+flush logs;
+flush logs;
+show binary logs;
+Log_name File_size
+master-bin.000001 #
+master-bin.000002 #
+master-bin.000003 #
+master-bin.000004 #
+flush tables;
+purge binary logs TO 'master-bin.000004';
+Warnings:
+Warning 1612 Being purged log master-bin.000001 was not found
+*** must show a list starting from the 'TO' argument of PURGE ***
+show binary logs;
+Log_name File_size
+master-bin.000004 #
+reset master;
+flush logs;
+flush logs;
+flush logs;
+*** must be a warning master-bin.000001 was not found ***
+Warnings:
+Warning 1612 Being purged log master-bin.000001 was not found
+*** must show one record, of the active binlog, left in the index file after PURGE ***
+show binary logs;
+Log_name File_size
+master-bin.000004 #
+reset master;
+flush logs;
+flush logs;
+flush logs;
+purge binary logs TO 'master-bin.000002';
+ERROR HY000: Fatal error during log purge
+show warnings;
+Level Code Message
+Warning 1377 a problem with deleting master-bin.000001; consider examining correspondence of your binlog index file to the actual binlog files
+Error 1377 Fatal error during log purge
+reset master;
+# crash_purge_before_update_index
+flush logs;
+SET SESSION debug_dbug="+d,crash_purge_before_update_index";
+purge binary logs TO 'master-bin.000002';
+ERROR HY000: Lost connection to MySQL server during query
+SET @index=LOAD_FILE('MYSQLTEST_VARDIR/mysqld.1/data//master-bin.index');
+SELECT @index;
+@index
+master-bin.000001
+master-bin.000002
+master-bin.000003
+
+# crash_purge_non_critical_after_update_index
+flush logs;
+SET SESSION debug_dbug="+d,crash_purge_non_critical_after_update_index";
+purge binary logs TO 'master-bin.000004';
+ERROR HY000: Lost connection to MySQL server during query
+SET @index=LOAD_FILE('MYSQLTEST_VARDIR/mysqld.1/data//master-bin.index');
+SELECT @index;
+@index
+master-bin.000004
+master-bin.000005
+
+# crash_purge_critical_after_update_index
+flush logs;
+SET SESSION debug_dbug="+d,crash_purge_critical_after_update_index";
+purge binary logs TO 'master-bin.000006';
+ERROR HY000: Lost connection to MySQL server during query
+SET @index=LOAD_FILE('MYSQLTEST_VARDIR/mysqld.1/data//master-bin.index');
+SELECT @index;
+@index
+master-bin.000006
+master-bin.000007
+
+# crash_create_non_critical_before_update_index
+SET SESSION debug_dbug="+d,crash_create_non_critical_before_update_index";
+flush logs;
+ERROR HY000: Lost connection to MySQL server during query
+SET @index=LOAD_FILE('MYSQLTEST_VARDIR/mysqld.1/data//master-bin.index');
+SELECT @index;
+@index
+master-bin.000006
+master-bin.000007
+master-bin.000008
+
+# crash_create_critical_before_update_index
+SET SESSION debug_dbug="+d,crash_create_critical_before_update_index";
+flush logs;
+ERROR HY000: Lost connection to MySQL server during query
+SET @index=LOAD_FILE('MYSQLTEST_VARDIR/mysqld.1/data//master-bin.index');
+SELECT @index;
+@index
+master-bin.000006
+master-bin.000007
+master-bin.000008
+master-bin.000009
+
+# crash_create_after_update_index
+SET SESSION debug_dbug="+d,crash_create_after_update_index";
+flush logs;
+ERROR HY000: Lost connection to MySQL server during query
+SET @index=LOAD_FILE('MYSQLTEST_VARDIR/mysqld.1/data//master-bin.index');
+SELECT @index;
+@index
+master-bin.000006
+master-bin.000007
+master-bin.000008
+master-bin.000009
+master-bin.000010
+master-bin.000011
+
+#
+# This should put the server in unsafe state and stop
+# accepting any command. If we inject a fault at this
+# point and continue the execution the server crashes.
+#
+SET @index=LOAD_FILE('MYSQLTEST_VARDIR/mysqld.1/data//master-bin.index');
+SELECT @index;
+@index
+master-bin.000006
+master-bin.000007
+master-bin.000008
+master-bin.000009
+master-bin.000010
+master-bin.000011
+
+# fault_injection_registering_index
+SET SESSION debug_dbug="+d,fault_injection_registering_index";
+flush logs;
+ERROR HY000: Can't open file: 'master-bin.000012' (errno: 1 "Operation not permitted")
+SET @index=LOAD_FILE('MYSQLTEST_VARDIR/mysqld.1/data//master-bin.index');
+SELECT @index;
+@index
+master-bin.000006
+master-bin.000007
+master-bin.000008
+master-bin.000009
+master-bin.000010
+master-bin.000011
+
+SET @index=LOAD_FILE('MYSQLTEST_VARDIR/mysqld.1/data//master-bin.index');
+SELECT @index;
+@index
+master-bin.000006
+master-bin.000007
+master-bin.000008
+master-bin.000009
+master-bin.000010
+master-bin.000011
+master-bin.000012
+
+# fault_injection_updating_index
+SET SESSION debug_dbug="+d,fault_injection_updating_index";
+flush logs;
+ERROR HY000: Can't open file: 'master-bin.000013' (errno: 1 "Operation not permitted")
+SET @index=LOAD_FILE('MYSQLTEST_VARDIR/mysqld.1/data//master-bin.index');
+SELECT @index;
+@index
+master-bin.000006
+master-bin.000007
+master-bin.000008
+master-bin.000009
+master-bin.000010
+master-bin.000011
+master-bin.000012
+
+SET @index=LOAD_FILE('MYSQLTEST_VARDIR/mysqld.1/data//master-bin.index');
+SELECT @index;
+@index
+master-bin.000006
+master-bin.000007
+master-bin.000008
+master-bin.000009
+master-bin.000010
+master-bin.000011
+master-bin.000012
+master-bin.000013
+
+SET SESSION debug_dbug="";
+End of tests
diff --git a/mysql-test/suite/binlog_encryption/binlog_index.test b/mysql-test/suite/binlog_encryption/binlog_index.test
new file mode 100644
index 0000000..b85bf6c
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/binlog_index.test
@@ -0,0 +1,276 @@
+#
+# The test was taken as is from the binlog suite
+#
+
+#
+# testing of purging of binary log files bug#18199/Bug#18453
+#
+source include/have_log_bin.inc;
+source include/not_embedded.inc;
+# Don't test this under valgrind, memory leaks will occur
+--source include/not_valgrind.inc
+source include/have_debug.inc;
+# Avoid CrashReporter popup on Mac
+--source include/not_crashrep.inc
+call mtr.add_suppression('Attempting backtrace');
+call mtr.add_suppression('MSYQL_BIN_LOG::purge_logs failed to process registered files that would be purged.');
+call mtr.add_suppression('MSYQL_BIN_LOG::open failed to sync the index file');
+call mtr.add_suppression('Turning logging off for the whole duration of the MySQL server process.');
+call mtr.add_suppression('Could not open .*');
+call mtr.add_suppression('MSYQL_BIN_LOG::purge_logs failed to clean registers before purging logs.');
+flush tables;
+
+let $old=`select @@debug`;
+
+RESET MASTER;
+
+let $MYSQLD_DATADIR= `select @@datadir`;
+let $INDEX=$MYSQLD_DATADIR/master-bin.index;
+
+#
+# testing purge binary logs TO
+#
+
+flush logs;
+flush logs;
+flush logs;
+
+source include/show_binary_logs.inc;
+remove_file $MYSQLD_DATADIR/master-bin.000001;
+flush tables;
+
+# there must be a warning with file names
+replace_regex /\.[\\\/]master/master/;
+--source include/wait_for_binlog_checkpoint.inc
+purge binary logs TO 'master-bin.000004';
+
+--echo *** must show a list starting from the 'TO' argument of PURGE ***
+source include/show_binary_logs.inc;
+
+#
+# testing purge binary logs BEFORE
+#
+
+reset master;
+
+flush logs;
+flush logs;
+flush logs;
+remove_file $MYSQLD_DATADIR/master-bin.000001;
+
+--echo *** must be a warning master-bin.000001 was not found ***
+let $date=`select NOW() + INTERVAL 1 MINUTE`;
+--disable_query_log
+replace_regex /\.[\\\/]master/master/;
+--source include/wait_for_binlog_checkpoint.inc
+eval purge binary logs BEFORE '$date';
+--enable_query_log
+
+--echo *** must show one record, of the active binlog, left in the index file after PURGE ***
+source include/show_binary_logs.inc;
+
+#
+# testing a fatal error
+# Turning a binlog file into a directory must be a portable setup
+#
+
+reset master;
+
+flush logs;
+flush logs;
+flush logs;
+
+remove_file $MYSQLD_DATADIR/master-bin.000001;
+mkdir $MYSQLD_DATADIR/master-bin.000001;
+
+--source include/wait_for_binlog_checkpoint.inc
+--error ER_BINLOG_PURGE_FATAL_ERR
+purge binary logs TO 'master-bin.000002';
+replace_regex /\.[\\\/]master/master/;
+show warnings;
+rmdir $MYSQLD_DATADIR/master-bin.000001;
+--disable_warnings
+reset master;
+--enable_warnings
+
+--echo # crash_purge_before_update_index
+flush logs;
+
+--exec echo "restart" > $MYSQLTEST_VARDIR/tmp/mysqld.1.expect
+SET SESSION debug_dbug="+d,crash_purge_before_update_index";
+--source include/wait_for_binlog_checkpoint.inc
+--error 2013
+purge binary logs TO 'master-bin.000002';
+
+--enable_reconnect
+--source include/wait_until_connected_again.inc
+
+file_exists $MYSQLD_DATADIR/master-bin.000001;
+file_exists $MYSQLD_DATADIR/master-bin.000002;
+file_exists $MYSQLD_DATADIR/master-bin.000003;
+--chmod 0644 $INDEX
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval SET @index=LOAD_FILE('$index')
+-- replace_regex /\.[\\\/]master/master/
+SELECT @index;
+
+--echo # crash_purge_non_critical_after_update_index
+flush logs;
+
+--exec echo "restart" > $MYSQLTEST_VARDIR/tmp/mysqld.1.expect
+SET SESSION debug_dbug="+d,crash_purge_non_critical_after_update_index";
+--source include/wait_for_binlog_checkpoint.inc
+--error 2013
+purge binary logs TO 'master-bin.000004';
+
+--enable_reconnect
+--source include/wait_until_connected_again.inc
+
+--error 1
+file_exists $MYSQLD_DATADIR/master-bin.000001;
+--error 1
+file_exists $MYSQLD_DATADIR/master-bin.000002;
+--error 1
+file_exists $MYSQLD_DATADIR/master-bin.000003;
+--chmod 0644 $INDEX
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval SET @index=LOAD_FILE('$index')
+-- replace_regex /\.[\\\/]master/master/
+SELECT @index;
+
+--echo # crash_purge_critical_after_update_index
+flush logs;
+
+--exec echo "restart" > $MYSQLTEST_VARDIR/tmp/mysqld.1.expect
+SET SESSION debug_dbug="+d,crash_purge_critical_after_update_index";
+--source include/wait_for_binlog_checkpoint.inc
+--error 2013
+purge binary logs TO 'master-bin.000006';
+
+--enable_reconnect
+--source include/wait_until_connected_again.inc
+
+--error 1
+file_exists $MYSQLD_DATADIR/master-bin.000004;
+--error 1
+file_exists $MYSQLD_DATADIR/master-bin.000005;
+file_exists $MYSQLD_DATADIR/master-bin.000006;
+file_exists $MYSQLD_DATADIR/master-bin.000007;
+--error 1
+file_exists $MYSQLD_DATADIR/master-bin.000008;
+--chmod 0644 $INDEX
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval SET @index=LOAD_FILE('$index')
+-- replace_regex /\.[\\\/]master/master/
+SELECT @index;
+
+--echo # crash_create_non_critical_before_update_index
+--exec echo "restart" > $MYSQLTEST_VARDIR/tmp/mysqld.1.expect
+SET SESSION debug_dbug="+d,crash_create_non_critical_before_update_index";
+--error 2013
+flush logs;
+
+--enable_reconnect
+--source include/wait_until_connected_again.inc
+
+file_exists $MYSQLD_DATADIR/master-bin.000008;
+--error 1
+file_exists $MYSQLD_DATADIR/master-bin.000009;
+--chmod 0644 $INDEX
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval SET @index=LOAD_FILE('$index')
+-- replace_regex /\.[\\\/]master/master/
+SELECT @index;
+
+--echo # crash_create_critical_before_update_index
+--exec echo "restart" > $MYSQLTEST_VARDIR/tmp/mysqld.1.expect
+SET SESSION debug_dbug="+d,crash_create_critical_before_update_index";
+--error 2013
+flush logs;
+
+--enable_reconnect
+--source include/wait_until_connected_again.inc
+
+file_exists $MYSQLD_DATADIR/master-bin.000009;
+--error 1
+file_exists $MYSQLD_DATADIR/master-bin.000010;
+--error 1
+file_exists $MYSQLD_DATADIR/master-bin.000011;
+--chmod 0644 $INDEX
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval SET @index=LOAD_FILE('$index')
+-- replace_regex /\.[\\\/]master/master/
+SELECT @index;
+
+--echo # crash_create_after_update_index
+--exec echo "restart" > $MYSQLTEST_VARDIR/tmp/mysqld.1.expect
+SET SESSION debug_dbug="+d,crash_create_after_update_index";
+--error 2013
+flush logs;
+
+--enable_reconnect
+--source include/wait_until_connected_again.inc
+
+file_exists $MYSQLD_DATADIR/master-bin.000010;
+file_exists $MYSQLD_DATADIR/master-bin.000011;
+--chmod 0644 $INDEX
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval SET @index=LOAD_FILE('$index')
+-- replace_regex /\.[\\\/]master/master/
+SELECT @index;
+
+--echo #
+--echo # This should put the server in unsafe state and stop
+--echo # accepting any command. If we inject a fault at this
+--echo # point and continue the execution the server crashes.
+--echo #
+
+--chmod 0644 $INDEX
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval SET @index=LOAD_FILE('$index')
+-- replace_regex /\.[\\\/]master/master/
+SELECT @index;
+
+--echo # fault_injection_registering_index
+SET SESSION debug_dbug="+d,fault_injection_registering_index";
+-- replace_regex /\.[\\\/]master/master/
+-- error ER_CANT_OPEN_FILE
+flush logs;
+
+--chmod 0644 $INDEX
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval SET @index=LOAD_FILE('$index')
+-- replace_regex /\.[\\\/]master/master/
+SELECT @index;
+
+--source include/restart_mysqld.inc
+
+--chmod 0644 $INDEX
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval SET @index=LOAD_FILE('$index')
+-- replace_regex /\.[\\\/]master/master/
+SELECT @index;
+
+--echo # fault_injection_updating_index
+SET SESSION debug_dbug="+d,fault_injection_updating_index";
+-- replace_regex /\.[\\\/]master/master/
+-- error ER_CANT_OPEN_FILE
+flush logs;
+
+--chmod 0644 $INDEX
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval SET @index=LOAD_FILE('$index')
+-- replace_regex /\.[\\\/]master/master/
+SELECT @index;
+
+--source include/restart_mysqld.inc
+
+--chmod 0644 $INDEX
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval SET @index=LOAD_FILE('$index')
+-- replace_regex /\.[\\\/]master/master/
+SELECT @index;
+
+eval SET SESSION debug_dbug="$old";
+
+--echo End of tests
diff --git a/mysql-test/suite/binlog_encryption/binlog_ioerr.result b/mysql-test/suite/binlog_encryption/binlog_ioerr.result
new file mode 100644
index 0000000..6b3120b
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/binlog_ioerr.result
@@ -0,0 +1,32 @@
+CALL mtr.add_suppression("Error writing file 'master-bin'");
+RESET MASTER;
+CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=innodb;
+INSERT INTO t1 VALUES(0);
+SET SESSION debug_dbug='+d,fail_binlog_write_1';
+INSERT INTO t1 VALUES(1);
+ERROR HY000: Error writing file 'master-bin' (errno: 28 "No space left on device")
+INSERT INTO t1 VALUES(2);
+ERROR HY000: Error writing file 'master-bin' (errno: 28 "No space left on device")
+SET SESSION debug_dbug='';
+INSERT INTO t1 VALUES(3);
+SELECT * FROM t1;
+a
+0
+3
+SHOW BINLOG EVENTS;
+Log_name Pos Event_type Server_id End_log_pos Info
+BINLOG POS Format_desc 1 ENDPOS Server ver: #, Binlog ver: #
+BINLOG POS Start_encryption 1 ENDPOS
+BINLOG POS Gtid_list 1 ENDPOS []
+BINLOG POS Binlog_checkpoint 1 ENDPOS master-bin.000001
+BINLOG POS Gtid 1 ENDPOS GTID 0-1-1
+BINLOG POS Query 1 ENDPOS use `test`; CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=innodb
+BINLOG POS Gtid 1 ENDPOS BEGIN GTID 0-1-2
+BINLOG POS Query 1 ENDPOS use `test`; INSERT INTO t1 VALUES(0)
+BINLOG POS Xid 1 ENDPOS COMMIT /* XID */
+BINLOG POS Gtid 1 ENDPOS BEGIN GTID 0-1-3
+BINLOG POS Gtid 1 ENDPOS BEGIN GTID 0-1-4
+BINLOG POS Gtid 1 ENDPOS BEGIN GTID 0-1-5
+BINLOG POS Query 1 ENDPOS use `test`; INSERT INTO t1 VALUES(3)
+BINLOG POS Xid 1 ENDPOS COMMIT /* XID */
+DROP TABLE t1;
diff --git a/mysql-test/suite/binlog_encryption/binlog_ioerr.test b/mysql-test/suite/binlog_encryption/binlog_ioerr.test
new file mode 100644
index 0000000..4758d86
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/binlog_ioerr.test
@@ -0,0 +1,33 @@
+#
+# The test was taken from the binlog suite as is, only the result file was modified
+#
+
+source include/have_debug.inc;
+source include/have_log_bin.inc;
+source include/have_binlog_format_mixed_or_statement.inc;
+
+CALL mtr.add_suppression("Error writing file 'master-bin'");
+
+RESET MASTER;
+
+CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=innodb;
+INSERT INTO t1 VALUES(0);
+SET SESSION debug_dbug='+d,fail_binlog_write_1';
+--error ER_ERROR_ON_WRITE
+INSERT INTO t1 VALUES(1);
+--error ER_ERROR_ON_WRITE
+INSERT INTO t1 VALUES(2);
+SET SESSION debug_dbug='';
+INSERT INTO t1 VALUES(3);
+SELECT * FROM t1;
+
+# Actually the output from this currently shows a bug.
+# The injected IO error leaves partially written transactions in the binlog in
+# the form of stray "BEGIN" events.
+# These should disappear from the output if binlog error handling is improved
+# (see MySQL Bug#37148 and WL#1790).
+--replace_regex /\/\* xid=.* \*\//\/* XID *\// /Server ver: .*, Binlog ver: .*/Server ver: #, Binlog ver: #/ /table_id: [0-9]+/table_id: #/
+--replace_column 1 BINLOG 2 POS 5 ENDPOS
+SHOW BINLOG EVENTS;
+
+DROP TABLE t1;
diff --git a/mysql-test/suite/binlog_encryption/binlog_mysqlbinlog-cp932-master.opt b/mysql-test/suite/binlog_encryption/binlog_mysqlbinlog-cp932-master.opt
new file mode 100644
index 0000000..bb0cda4
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/binlog_mysqlbinlog-cp932-master.opt
@@ -0,0 +1 @@
+--max-binlog-size=8192
diff --git a/mysql-test/suite/binlog_encryption/binlog_mysqlbinlog-cp932.result b/mysql-test/suite/binlog_encryption/binlog_mysqlbinlog-cp932.result
new file mode 100644
index 0000000..cbf6159
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/binlog_mysqlbinlog-cp932.result
@@ -0,0 +1,19 @@
+RESET MASTER;
+create table t3 (f text character set utf8);
+create table t4 (f text character set cp932);
+flush logs;
+rename table t3 to t03, t4 to t04;
+select HEX(f) from t03;
+HEX(f)
+E382BD
+select HEX(f) from t3;
+HEX(f)
+E382BD
+select HEX(f) from t04;
+HEX(f)
+835C
+select HEX(f) from t4;
+HEX(f)
+835C
+drop table t3, t4, t03, t04;
+End of 5.0 tests
diff --git a/mysql-test/suite/binlog_encryption/binlog_mysqlbinlog-cp932.test b/mysql-test/suite/binlog_encryption/binlog_mysqlbinlog-cp932.test
new file mode 100644
index 0000000..68056e1
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/binlog_mysqlbinlog-cp932.test
@@ -0,0 +1,32 @@
+#
+# The test was taken almost as is from the binlog suite, except that
+# mysqlbinlog cannot read encrypted files directly, so it reads
+# from the server instead
+#
+
+# disabled in embedded until tools running is fixed with embedded
+--source include/not_embedded.inc
+
+-- source include/have_binlog_format_mixed_or_statement.inc
+-- source include/have_cp932.inc
+-- source include/have_log_bin.inc
+
+RESET MASTER;
+
+# Bug#16217 (mysql client did not know how not to switch its internal charset)
+create table t3 (f text character set utf8);
+create table t4 (f text character set cp932);
+--exec $MYSQL --default-character-set=utf8 test -e "insert into t3 values(_utf8'ソ')"
+--exec $MYSQL --default-character-set=cp932 test -e "insert into t4 values(_cp932'\');"
+flush logs;
+rename table t3 to t03, t4 to t04;
+let $MYSQLD_DATADIR= `select @@datadir`;
+--exec $MYSQL_BINLOG --read-from-remote-server --port=$MASTER_MYPORT -uroot --short-form master-bin.000001 | $MYSQL --default-character-set=utf8
+# original and recovered data must be equal
+select HEX(f) from t03;
+select HEX(f) from t3;
+select HEX(f) from t04;
+select HEX(f) from t4;
+
+drop table t3, t4, t03, t04;
+--echo End of 5.0 tests
diff --git a/mysql-test/suite/binlog_encryption/binlog_row_annotate-master.opt b/mysql-test/suite/binlog_encryption/binlog_row_annotate-master.opt
new file mode 100644
index 0000000..344a4ff
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/binlog_row_annotate-master.opt
@@ -0,0 +1 @@
+--timezone=GMT-3 --binlog-do-db=test1 --binlog-do-db=test2 --binlog-do-db=test3 --binlog-checksum=NONE
diff --git a/mysql-test/suite/binlog_encryption/binlog_row_annotate.result b/mysql-test/suite/binlog_encryption/binlog_row_annotate.result
new file mode 100644
index 0000000..d32b80b
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/binlog_row_annotate.result
@@ -0,0 +1,724 @@
+#####################################################################################
+# The following Annotate_rows events should appear below:
+# - INSERT INTO test2.t2 VALUES (1), (2), (3)
+# - INSERT INTO test3.t3 VALUES (1), (2), (3)
+# - DELETE test1.t1, test2.t2 FROM <...>
+# - INSERT INTO test2.t2 VALUES (1), (2), (3)
+# - DELETE xtest1.xt1, test2.t2 FROM <...>
+#####################################################################################
+show binlog events in 'master-bin.000001' from <start_pos>;
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Start_encryption 1 #
+master-bin.000001 # Gtid_list 1 # []
+master-bin.000001 # Binlog_checkpoint 1 # master-bin.000001
+master-bin.000001 # Gtid 1 # GTID 0-1-1
+master-bin.000001 # Query 1 # DROP DATABASE IF EXISTS test1
+master-bin.000001 # Gtid 1 # GTID 0-1-2
+master-bin.000001 # Query 1 # DROP DATABASE IF EXISTS test2
+master-bin.000001 # Gtid 1 # GTID 0-1-3
+master-bin.000001 # Query 1 # DROP DATABASE IF EXISTS test3
+master-bin.000001 # Gtid 1 # GTID 0-1-4
+master-bin.000001 # Query 1 # CREATE DATABASE test1
+master-bin.000001 # Gtid 1 # GTID 0-1-5
+master-bin.000001 # Query 1 # CREATE DATABASE test2
+master-bin.000001 # Gtid 1 # GTID 0-1-6
+master-bin.000001 # Query 1 # CREATE DATABASE test3
+master-bin.000001 # Gtid 1 # BEGIN GTID 0-1-7
+master-bin.000001 # Table_map 1 # table_id: # (test1.t1)
+master-bin.000001 # Write_rows_v1 1 # table_id: # flags: STMT_END_F
+master-bin.000001 # Query 1 # COMMIT
+master-bin.000001 # Gtid 1 # BEGIN GTID 0-1-8
+master-bin.000001 # Annotate_rows 1 # INSERT INTO test2.t2 VALUES (1), (2), (3)
+master-bin.000001 # Table_map 1 # table_id: # (test2.t2)
+master-bin.000001 # Write_rows_v1 1 # table_id: # flags: STMT_END_F
+master-bin.000001 # Query 1 # COMMIT
+master-bin.000001 # Gtid 1 # BEGIN GTID 0-1-9
+master-bin.000001 # Annotate_rows 1 # INSERT INTO test3.t3 VALUES (1), (2), (3)
+master-bin.000001 # Table_map 1 # table_id: # (test3.t3)
+master-bin.000001 # Write_rows_v1 1 # table_id: # flags: STMT_END_F
+master-bin.000001 # Query 1 # COMMIT
+master-bin.000001 # Gtid 1 # BEGIN GTID 0-1-10
+master-bin.000001 # Annotate_rows 1 # DELETE test1.t1, test2.t2
+FROM test1.t1 INNER JOIN test2.t2 INNER JOIN test3.t3
+WHERE test1.t1.a=test2.t2.a AND test2.t2.a=test3.t3.a
+master-bin.000001 # Table_map 1 # table_id: # (test1.t1)
+master-bin.000001 # Table_map 1 # table_id: # (test2.t2)
+master-bin.000001 # Delete_rows_v1 1 # table_id: #
+master-bin.000001 # Delete_rows_v1 1 # table_id: # flags: STMT_END_F
+master-bin.000001 # Query 1 # COMMIT
+master-bin.000001 # Gtid 1 # BEGIN GTID 0-1-11
+master-bin.000001 # Annotate_rows 1 # INSERT INTO test2.v2 VALUES (1), (2), (3)
+master-bin.000001 # Table_map 1 # table_id: # (test2.t2)
+master-bin.000001 # Write_rows_v1 1 # table_id: # flags: STMT_END_F
+master-bin.000001 # Query 1 # COMMIT
+master-bin.000001 # Gtid 1 # BEGIN GTID 0-1-12
+master-bin.000001 # Annotate_rows 1 # DELETE xtest1.xt1, test2.t2
+FROM xtest1.xt1 INNER JOIN test2.t2 INNER JOIN test3.t3
+WHERE xtest1.xt1.a=test2.t2.a AND test2.t2.a=test3.t3.a
+master-bin.000001 # Table_map 1 # table_id: # (test2.t2)
+master-bin.000001 # Delete_rows_v1 1 # table_id: # flags: STMT_END_F
+master-bin.000001 # Query 1 # COMMIT
+master-bin.000001 # Rotate 1 # master-bin.000002;pos=4
+#
+#####################################################################################
+# mysqlbinlog --read-from-remote-server
+# The following Annotates should appear in this output:
+# - INSERT INTO test2.t2 VALUES (1), (2), (3)
+# - INSERT INTO test3.t3 VALUES (1), (2), (3)
+# - DELETE test1.t1, test2.t2 FROM <...> (with two subsequent Table maps)
+# - INSERT INTO test2.t2 VALUES (1), (2), (3)
+# - DELETE xtest1.xt1, test2.t2 FROM <...> (with one subsequent Table map)
+#####################################################################################
+/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
+/*!40019 SET @@session.max_insert_delayed_threads=0*/;
+/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
+DELIMITER /*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Start: binlog v 4, server v #.##.## created 010909 4:46:40 at startup
+ROLLBACK/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Gtid list []
+# at #
+#010909 4:46:40 server id # end_log_pos # Binlog checkpoint master-bin.000001
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-1 ddl
+/*!100101 SET @@session.skip_parallel_replication=0*//*!*/;
+/*!100001 SET @@session.gtid_domain_id=0*//*!*/;
+/*!100001 SET @@session.server_id=1*//*!*/;
+/*!100001 SET @@session.gtid_seq_no=1*//*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+SET @@session.pseudo_thread_id=#/*!*/;
+SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/;
+SET @@session.sql_mode=0/*!*/;
+SET @@session.auto_increment_increment=1, @@session.auto_increment_offset=1/*!*/;
+/*!\C latin1 *//*!*/;
+SET @@session.character_set_client=8,@@session.collation_connection=8,@@session.collation_server=8/*!*/;
+SET @@session.lc_time_names=0/*!*/;
+SET @@session.collation_database=DEFAULT/*!*/;
+DROP DATABASE IF EXISTS test1
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-2 ddl
+/*!100001 SET @@session.gtid_seq_no=2*//*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+DROP DATABASE IF EXISTS test2
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-3 ddl
+/*!100001 SET @@session.gtid_seq_no=3*//*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+DROP DATABASE IF EXISTS test3
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-4 ddl
+/*!100001 SET @@session.gtid_seq_no=4*//*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+CREATE DATABASE test1
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-5 ddl
+/*!100001 SET @@session.gtid_seq_no=5*//*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+CREATE DATABASE test2
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-6 ddl
+/*!100001 SET @@session.gtid_seq_no=6*//*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+CREATE DATABASE test3
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-7
+/*!100001 SET @@session.gtid_seq_no=7*//*!*/;
+BEGIN
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Table_map: `test1`.`t1` mapped to number #
+# at #
+#010909 4:46:40 server id # end_log_pos # Write_rows: table id # flags: STMT_END_F
+### INSERT INTO `test1`.`t1`
+### SET
+### @1=1 /* INT meta=0 nullable=1 is_null=0 */
+### INSERT INTO `test1`.`t1`
+### SET
+### @1=2 /* INT meta=0 nullable=1 is_null=0 */
+### INSERT INTO `test1`.`t1`
+### SET
+### @1=3 /* INT meta=0 nullable=1 is_null=0 */
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+COMMIT
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-8
+/*!100001 SET @@session.gtid_seq_no=8*//*!*/;
+BEGIN
+/*!*/;
+# at #
+# at #
+#010909 4:46:40 server id # end_log_pos # Annotate_rows:
+#Q> INSERT INTO test2.t2 VALUES (1), (2), (3)
+#010909 4:46:40 server id # end_log_pos # Table_map: `test2`.`t2` mapped to number #
+# at #
+#010909 4:46:40 server id # end_log_pos # Write_rows: table id # flags: STMT_END_F
+### INSERT INTO `test2`.`t2`
+### SET
+### @1=1 /* INT meta=0 nullable=1 is_null=0 */
+### INSERT INTO `test2`.`t2`
+### SET
+### @1=2 /* INT meta=0 nullable=1 is_null=0 */
+### INSERT INTO `test2`.`t2`
+### SET
+### @1=3 /* INT meta=0 nullable=1 is_null=0 */
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+COMMIT
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-9
+/*!100001 SET @@session.gtid_seq_no=9*//*!*/;
+BEGIN
+/*!*/;
+# at #
+# at #
+#010909 4:46:40 server id # end_log_pos # Annotate_rows:
+#Q> INSERT INTO test3.t3 VALUES (1), (2), (3)
+#010909 4:46:40 server id # end_log_pos # Table_map: `test3`.`t3` mapped to number #
+# at #
+#010909 4:46:40 server id # end_log_pos # Write_rows: table id # flags: STMT_END_F
+### INSERT INTO `test3`.`t3`
+### SET
+### @1=1 /* INT meta=0 nullable=1 is_null=0 */
+### INSERT INTO `test3`.`t3`
+### SET
+### @1=2 /* INT meta=0 nullable=1 is_null=0 */
+### INSERT INTO `test3`.`t3`
+### SET
+### @1=3 /* INT meta=0 nullable=1 is_null=0 */
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+COMMIT
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-10
+/*!100001 SET @@session.gtid_seq_no=10*//*!*/;
+BEGIN
+/*!*/;
+# at #
+# at #
+#010909 4:46:40 server id # end_log_pos # Annotate_rows:
+#Q> DELETE test1.t1, test2.t2
+#Q> FROM test1.t1 INNER JOIN test2.t2 INNER JOIN test3.t3
+#Q> WHERE test1.t1.a=test2.t2.a AND test2.t2.a=test3.t3
+#010909 4:46:40 server id # end_log_pos # Table_map: `test1`.`t1` mapped to number #
+# at #
+#010909 4:46:40 server id # end_log_pos # Table_map: `test2`.`t2` mapped to number #
+# at #
+# at #
+#010909 4:46:40 server id # end_log_pos # Delete_rows: table id #
+#010909 4:46:40 server id # end_log_pos # Delete_rows: table id # flags: STMT_END_F
+### DELETE FROM `test1`.`t1`
+### WHERE
+### @1=1 /* INT meta=0 nullable=1 is_null=0 */
+### DELETE FROM `test1`.`t1`
+### WHERE
+### @1=2 /* INT meta=0 nullable=1 is_null=0 */
+### DELETE FROM `test1`.`t1`
+### WHERE
+### @1=3 /* INT meta=0 nullable=1 is_null=0 */
+### DELETE FROM `test2`.`t2`
+### WHERE
+### @1=1 /* INT meta=0 nullable=1 is_null=0 */
+### DELETE FROM `test2`.`t2`
+### WHERE
+### @1=2 /* INT meta=0 nullable=1 is_null=0 */
+### DELETE FROM `test2`.`t2`
+### WHERE
+### @1=3 /* INT meta=0 nullable=1 is_null=0 */
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+COMMIT
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-11
+/*!100001 SET @@session.gtid_seq_no=11*//*!*/;
+BEGIN
+/*!*/;
+# at #
+# at #
+#010909 4:46:40 server id # end_log_pos # Annotate_rows:
+#Q> INSERT INTO test2.v2 VALUES (1), (2), (3)
+#010909 4:46:40 server id # end_log_pos # Table_map: `test2`.`t2` mapped to number #
+# at #
+#010909 4:46:40 server id # end_log_pos # Write_rows: table id # flags: STMT_END_F
+### INSERT INTO `test2`.`t2`
+### SET
+### @1=1 /* INT meta=0 nullable=1 is_null=0 */
+### INSERT INTO `test2`.`t2`
+### SET
+### @1=2 /* INT meta=0 nullable=1 is_null=0 */
+### INSERT INTO `test2`.`t2`
+### SET
+### @1=3 /* INT meta=0 nullable=1 is_null=0 */
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+COMMIT
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-12
+/*!100001 SET @@session.gtid_seq_no=12*//*!*/;
+BEGIN
+/*!*/;
+# at #
+# at #
+#010909 4:46:40 server id # end_log_pos # Annotate_rows:
+#Q> DELETE xtest1.xt1, test2.t2
+#Q> FROM xtest1.xt1 INNER JOIN test2.t2 INNER JOIN test3.t3
+#Q> WHERE xtest1.xt1.a=test2.t2.a AND test2.t2.a=test3.t3
+#010909 4:46:40 server id # end_log_pos # Table_map: `test2`.`t2` mapped to number #
+# at #
+#010909 4:46:40 server id # end_log_pos # Delete_rows: table id # flags: STMT_END_F
+### DELETE FROM `test2`.`t2`
+### WHERE
+### @1=3 /* INT meta=0 nullable=1 is_null=0 */
+### DELETE FROM `test2`.`t2`
+### WHERE
+### @1=2 /* INT meta=0 nullable=1 is_null=0 */
+### DELETE FROM `test2`.`t2`
+### WHERE
+### @1=1 /* INT meta=0 nullable=1 is_null=0 */
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+COMMIT
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Rotate to master-bin.000002 pos: 4
+DELIMITER ;
+# End of log file
+ROLLBACK /* added by mysqlbinlog */;
+/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
+/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;
+#
+#####################################################################################
+# mysqlbinlog --read-from-remote-server --database=test1
+# The following Annotate should appear in this output:
+# - DELETE test1.t1, test2.t2 FROM <...>
+#####################################################################################
+/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
+/*!40019 SET @@session.max_insert_delayed_threads=0*/;
+/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
+DELIMITER /*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Start: binlog v 4, server v #.##.## created 010909 4:46:40 at startup
+ROLLBACK/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Gtid list []
+# at #
+#010909 4:46:40 server id # end_log_pos # Binlog checkpoint master-bin.000001
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-1 ddl
+/*!100101 SET @@session.skip_parallel_replication=0*//*!*/;
+/*!100001 SET @@session.gtid_domain_id=0*//*!*/;
+/*!100001 SET @@session.server_id=1*//*!*/;
+/*!100001 SET @@session.gtid_seq_no=1*//*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+SET @@session.pseudo_thread_id=#/*!*/;
+SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/;
+SET @@session.sql_mode=0/*!*/;
+SET @@session.auto_increment_increment=1, @@session.auto_increment_offset=1/*!*/;
+/*!\C latin1 *//*!*/;
+SET @@session.character_set_client=8,@@session.collation_connection=8,@@session.collation_server=8/*!*/;
+SET @@session.lc_time_names=0/*!*/;
+SET @@session.collation_database=DEFAULT/*!*/;
+DROP DATABASE IF EXISTS test1
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-2 ddl
+/*!100001 SET @@session.gtid_seq_no=2*//*!*/;
+# at #
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-3 ddl
+/*!100001 SET @@session.gtid_seq_no=3*//*!*/;
+# at #
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-4 ddl
+/*!100001 SET @@session.gtid_seq_no=4*//*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+CREATE DATABASE test1
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-5 ddl
+/*!100001 SET @@session.gtid_seq_no=5*//*!*/;
+# at #
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-6 ddl
+/*!100001 SET @@session.gtid_seq_no=6*//*!*/;
+# at #
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-7
+/*!100001 SET @@session.gtid_seq_no=7*//*!*/;
+BEGIN
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Table_map: `test1`.`t1` mapped to number #
+# at #
+#010909 4:46:40 server id # end_log_pos # Write_rows: table id # flags: STMT_END_F
+### INSERT INTO `test1`.`t1`
+### SET
+### @1=1 /* INT meta=0 nullable=1 is_null=0 */
+### INSERT INTO `test1`.`t1`
+### SET
+### @1=2 /* INT meta=0 nullable=1 is_null=0 */
+### INSERT INTO `test1`.`t1`
+### SET
+### @1=3 /* INT meta=0 nullable=1 is_null=0 */
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+COMMIT
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-8
+/*!100001 SET @@session.gtid_seq_no=8*//*!*/;
+BEGIN
+/*!*/;
+# at #
+# at #
+# at #
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+COMMIT
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-9
+/*!100001 SET @@session.gtid_seq_no=9*//*!*/;
+BEGIN
+/*!*/;
+# at #
+# at #
+# at #
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+COMMIT
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-10
+/*!100001 SET @@session.gtid_seq_no=10*//*!*/;
+BEGIN
+/*!*/;
+# at #
+# at #
+#010909 4:46:40 server id # end_log_pos # Annotate_rows:
+#Q> DELETE test1.t1, test2.t2
+#Q> FROM test1.t1 INNER JOIN test2.t2 INNER JOIN test3.t3
+#Q> WHERE test1.t1.a=test2.t2.a AND test2.t2.a=test3.t3
+#010909 4:46:40 server id # end_log_pos # Table_map: `test1`.`t1` mapped to number #
+# at #
+# at #
+# at #
+#010909 4:46:40 server id # end_log_pos # Delete_rows: table id #
+### DELETE FROM `test1`.`t1`
+### WHERE
+### @1=1 /* INT meta=0 nullable=1 is_null=0 */
+### DELETE FROM `test1`.`t1`
+### WHERE
+### @1=2 /* INT meta=0 nullable=1 is_null=0 */
+### DELETE FROM `test1`.`t1`
+### WHERE
+### @1=3 /* INT meta=0 nullable=1 is_null=0 */
+'/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+COMMIT
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-11
+/*!100001 SET @@session.gtid_seq_no=11*//*!*/;
+BEGIN
+/*!*/;
+# at #
+# at #
+# at #
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+COMMIT
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-12
+/*!100001 SET @@session.gtid_seq_no=12*//*!*/;
+BEGIN
+/*!*/;
+# at #
+# at #
+# at #
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+COMMIT
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Rotate to master-bin.000002 pos: 4
+DELIMITER ;
+# End of log file
+ROLLBACK /* added by mysqlbinlog */;
+/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
+/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;
+#
+#####################################################################################
+# mysqlbinlog --read-from-remote-server --skip-annotate-row-events
+# No Annotates should appear in this output
+#####################################################################################
+/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
+/*!40019 SET @@session.max_insert_delayed_threads=0*/;
+/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
+DELIMITER /*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Start: binlog v 4, server v #.##.## created 010909 4:46:40 at startup
+ROLLBACK/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Gtid list []
+# at #
+#010909 4:46:40 server id # end_log_pos # Binlog checkpoint master-bin.000001
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-1 ddl
+/*!100101 SET @@session.skip_parallel_replication=0*//*!*/;
+/*!100001 SET @@session.gtid_domain_id=0*//*!*/;
+/*!100001 SET @@session.server_id=1*//*!*/;
+/*!100001 SET @@session.gtid_seq_no=1*//*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+SET @@session.pseudo_thread_id=#/*!*/;
+SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/;
+SET @@session.sql_mode=0/*!*/;
+SET @@session.auto_increment_increment=1, @@session.auto_increment_offset=1/*!*/;
+/*!\C latin1 *//*!*/;
+SET @@session.character_set_client=8,@@session.collation_connection=8,@@session.collation_server=8/*!*/;
+SET @@session.lc_time_names=0/*!*/;
+SET @@session.collation_database=DEFAULT/*!*/;
+DROP DATABASE IF EXISTS test1
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-2 ddl
+/*!100001 SET @@session.gtid_seq_no=2*//*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+DROP DATABASE IF EXISTS test2
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-3 ddl
+/*!100001 SET @@session.gtid_seq_no=3*//*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+DROP DATABASE IF EXISTS test3
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-4 ddl
+/*!100001 SET @@session.gtid_seq_no=4*//*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+CREATE DATABASE test1
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-5 ddl
+/*!100001 SET @@session.gtid_seq_no=5*//*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+CREATE DATABASE test2
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-6 ddl
+/*!100001 SET @@session.gtid_seq_no=6*//*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+CREATE DATABASE test3
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-7
+/*!100001 SET @@session.gtid_seq_no=7*//*!*/;
+BEGIN
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Table_map: `test1`.`t1` mapped to number #
+# at #
+#010909 4:46:40 server id # end_log_pos # Write_rows: table id # flags: STMT_END_F
+### INSERT INTO `test1`.`t1`
+### SET
+### @1=1 /* INT meta=0 nullable=1 is_null=0 */
+### INSERT INTO `test1`.`t1`
+### SET
+### @1=2 /* INT meta=0 nullable=1 is_null=0 */
+### INSERT INTO `test1`.`t1`
+### SET
+### @1=3 /* INT meta=0 nullable=1 is_null=0 */
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+COMMIT
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-8
+/*!100001 SET @@session.gtid_seq_no=8*//*!*/;
+BEGIN
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Table_map: `test2`.`t2` mapped to number #
+# at #
+#010909 4:46:40 server id # end_log_pos # Write_rows: table id # flags: STMT_END_F
+### INSERT INTO `test2`.`t2`
+### SET
+### @1=1 /* INT meta=0 nullable=1 is_null=0 */
+### INSERT INTO `test2`.`t2`
+### SET
+### @1=2 /* INT meta=0 nullable=1 is_null=0 */
+### INSERT INTO `test2`.`t2`
+### SET
+### @1=3 /* INT meta=0 nullable=1 is_null=0 */
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+COMMIT
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-9
+/*!100001 SET @@session.gtid_seq_no=9*//*!*/;
+BEGIN
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Table_map: `test3`.`t3` mapped to number #
+# at #
+#010909 4:46:40 server id # end_log_pos # Write_rows: table id # flags: STMT_END_F
+### INSERT INTO `test3`.`t3`
+### SET
+### @1=1 /* INT meta=0 nullable=1 is_null=0 */
+### INSERT INTO `test3`.`t3`
+### SET
+### @1=2 /* INT meta=0 nullable=1 is_null=0 */
+### INSERT INTO `test3`.`t3`
+### SET
+### @1=3 /* INT meta=0 nullable=1 is_null=0 */
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+COMMIT
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-10
+/*!100001 SET @@session.gtid_seq_no=10*//*!*/;
+BEGIN
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Table_map: `test1`.`t1` mapped to number #
+# at #
+#010909 4:46:40 server id # end_log_pos # Table_map: `test2`.`t2` mapped to number #
+# at #
+# at #
+#010909 4:46:40 server id # end_log_pos # Delete_rows: table id #
+#010909 4:46:40 server id # end_log_pos # Delete_rows: table id # flags: STMT_END_F
+### DELETE FROM `test1`.`t1`
+### WHERE
+### @1=1 /* INT meta=0 nullable=1 is_null=0 */
+### DELETE FROM `test1`.`t1`
+### WHERE
+### @1=2 /* INT meta=0 nullable=1 is_null=0 */
+### DELETE FROM `test1`.`t1`
+### WHERE
+### @1=3 /* INT meta=0 nullable=1 is_null=0 */
+### DELETE FROM `test2`.`t2`
+### WHERE
+### @1=1 /* INT meta=0 nullable=1 is_null=0 */
+### DELETE FROM `test2`.`t2`
+### WHERE
+### @1=2 /* INT meta=0 nullable=1 is_null=0 */
+### DELETE FROM `test2`.`t2`
+### WHERE
+### @1=3 /* INT meta=0 nullable=1 is_null=0 */
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+COMMIT
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-11
+/*!100001 SET @@session.gtid_seq_no=11*//*!*/;
+BEGIN
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Table_map: `test2`.`t2` mapped to number #
+# at #
+#010909 4:46:40 server id # end_log_pos # Write_rows: table id # flags: STMT_END_F
+### INSERT INTO `test2`.`t2`
+### SET
+### @1=1 /* INT meta=0 nullable=1 is_null=0 */
+### INSERT INTO `test2`.`t2`
+### SET
+### @1=2 /* INT meta=0 nullable=1 is_null=0 */
+### INSERT INTO `test2`.`t2`
+### SET
+### @1=3 /* INT meta=0 nullable=1 is_null=0 */
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+COMMIT
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # GTID 0-1-12
+/*!100001 SET @@session.gtid_seq_no=12*//*!*/;
+BEGIN
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Table_map: `test2`.`t2` mapped to number #
+# at #
+#010909 4:46:40 server id # end_log_pos # Delete_rows: table id # flags: STMT_END_F
+### DELETE FROM `test2`.`t2`
+### WHERE
+### @1=3 /* INT meta=0 nullable=1 is_null=0 */
+### DELETE FROM `test2`.`t2`
+### WHERE
+### @1=2 /* INT meta=0 nullable=1 is_null=0 */
+### DELETE FROM `test2`.`t2`
+### WHERE
+### @1=1 /* INT meta=0 nullable=1 is_null=0 */
+# at #
+#010909 4:46:40 server id # end_log_pos # Query thread_id=# exec_time=# error_code=0
+SET TIMESTAMP=1000000000/*!*/;
+COMMIT
+/*!*/;
+# at #
+#010909 4:46:40 server id # end_log_pos # Rotate to master-bin.000002 pos: 4
+DELIMITER ;
+# End of log file
+ROLLBACK /* added by mysqlbinlog */;
+/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
+/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;
diff --git a/mysql-test/suite/binlog_encryption/binlog_row_annotate.test b/mysql-test/suite/binlog_encryption/binlog_row_annotate.test
new file mode 100644
index 0000000..2e6790f
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/binlog_row_annotate.test
@@ -0,0 +1,160 @@
+#
+# The test was taken from the binlog suite, a part of it was removed
+# because mysqlbinlog is not able to read encrypted logs directly
+#
+
+###############################################################################
+# WL47: Store in binlog text of statements that caused RBR events
+# new event: ANNOTATE_ROWS_EVENT
+# new master option: --binlog-annotate-row-events
+# new mysqlbinlog option: --skip-annotate-row-events
+#
+# Intended to test that:
+# *** If the --binlog-annotate-row-events option is switched on on master
+# then Annotate_rows events:
+# - are generated;
+# - are generated only once for "multi-table-maps" rbr queries;
+# - are not generated when the corresponding queries are filtered away;
+# - are generated when the corresponding queries are filtered away partially
+# (e.g. in case of multi-delete).
+# *** Annotate_rows events are printed by mysqlbinlog started without
+# --skip-annotate-row-events options both in remote and local cases.
+# *** Annotate_rows events are not printed by mysqlbinlog started with
+# --skip-annotate-row-events options both in remote and local cases.
+###############################################################################
+
+--source include/have_log_bin.inc
+--source include/have_binlog_format_row.inc
+--source include/binlog_start_pos.inc
+
+--disable_query_log
+
+set sql_mode="";
+
+# Fix timestamp to avoid varying results
+SET timestamp=1000000000;
+
+# Delete all existing binary logs
+RESET MASTER;
+
+--disable_warnings
+DROP DATABASE IF EXISTS test1;
+DROP DATABASE IF EXISTS test2;
+DROP DATABASE IF EXISTS test3;
+DROP DATABASE IF EXISTS xtest1;
+DROP DATABASE IF EXISTS xtest2;
+--enable_warnings
+
+CREATE DATABASE test1;
+CREATE TABLE test1.t1(a int);
+
+CREATE DATABASE test2;
+CREATE TABLE test2.t2(a int);
+CREATE VIEW test2.v2 AS SELECT * FROM test2.t2;
+
+CREATE DATABASE test3;
+CREATE TABLE test3.t3(a int);
+
+CREATE DATABASE xtest1;
+CREATE TABLE xtest1.xt1(a int);
+
+CREATE DATABASE xtest2;
+CREATE TABLE xtest2.xt2(a int);
+
+# By default SESSION binlog_annotate_row_events = OFF
+
+INSERT INTO test1.t1 VALUES (1), (2), (3);
+
+SET SESSION binlog_annotate_row_events = ON;
+
+INSERT INTO test2.t2 VALUES (1), (2), (3);
+INSERT INTO test3.t3 VALUES (1), (2), (3);
+
+# This query generates two Table maps but the Annotate
+# event should appear only once before the first Table map
+DELETE test1.t1, test2.t2
+ FROM test1.t1 INNER JOIN test2.t2 INNER JOIN test3.t3
+ WHERE test1.t1.a=test2.t2.a AND test2.t2.a=test3.t3.a;
+
+# This event should be filtered out together with Annotate event
+INSERT INTO xtest1.xt1 VALUES (1), (2), (3);
+
+# This event should pass the filter
+INSERT INTO test2.v2 VALUES (1), (2), (3);
+
+# This event should pass the filter only for test2.t2 part
+DELETE xtest1.xt1, test2.t2
+ FROM xtest1.xt1 INNER JOIN test2.t2 INNER JOIN test3.t3
+ WHERE xtest1.xt1.a=test2.t2.a AND test2.t2.a=test3.t3.a;
+
+# These events should be filtered out together with Annotate events
+INSERT INTO xtest1.xt1 VALUES (1), (2), (3);
+INSERT INTO xtest2.xt2 VALUES (1), (2), (3);
+DELETE xtest1.xt1, xtest2.xt2
+ FROM xtest1.xt1 INNER JOIN xtest2.xt2 INNER JOIN test3.t3
+ WHERE xtest1.xt1.a=xtest2.xt2.a AND xtest2.xt2.a=test3.t3.a;
+
+FLUSH LOGS;
+
+let $MYSQLD_DATADIR= `select @@datadir`;
+
+--enable_query_log
+
+--echo #####################################################################################
+--echo # The following Annotate_rows events should appear below:
+--echo # - INSERT INTO test2.t2 VALUES (1), (2), (3)
+--echo # - INSERT INTO test3.t3 VALUES (1), (2), (3)
+--echo # - DELETE test1.t1, test2.t2 FROM <...>
+--echo # - INSERT INTO test2.t2 VALUES (1), (2), (3)
+--echo # - DELETE xtest1.xt1, test2.t2 FROM <...>
+--echo #####################################################################################
+
+let $start_pos= `select @binlog_start_pos`;
+--replace_column 2 # 5 #
+--replace_result $start_pos <start_pos>
+--replace_regex /table_id: [0-9]+/table_id: #/ /\/\* xid=.* \*\//\/* xid= *\//
+--eval show binlog events in 'master-bin.000001' from $start_pos
+
+--echo #
+--echo #####################################################################################
+--echo # mysqlbinlog --read-from-remote-server
+--echo # The following Annotates should appear in this output:
+--echo # - INSERT INTO test2.t2 VALUES (1), (2), (3)
+--echo # - INSERT INTO test3.t3 VALUES (1), (2), (3)
+--echo # - DELETE test1.t1, test2.t2 FROM <...> (with two subsequent Table maps)
+--echo # - INSERT INTO test2.t2 VALUES (1), (2), (3)
+--echo # - DELETE xtest1.xt1, test2.t2 FROM <...> (with one subsequent Table map)
+--echo #####################################################################################
+
+--replace_regex /server id [0-9]*/server id #/ /server v [^ ]*/server v #.##.##/ /exec_time=[0-9]*/exec_time=#/ /thread_id=[0-9]*/thread_id=#/ /table id [0-9]*/table id #/ /mapped to number [0-9]*/mapped to number #/ /end_log_pos [0-9]*/end_log_pos #/ /# at [0-9]*/# at #/
+--exec $MYSQL_BINLOG --base64-output=decode-rows -v -v --read-from-remote-server --user=root --host=localhost --port=$MASTER_MYPORT master-bin.000001
+
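The `--replace_regex` lines above mask run-specific values (server ids, byte offsets, timestamps) so the recorded output stays stable across runs. Outside of mysqltest, the same normalization can be sketched with `sed`; the sample input line below is hypothetical, not taken from this test:

```shell
# Mask volatile fields in a line of mysqlbinlog output, mirroring the
# test's --replace_regex patterns for "server id" and "end_log_pos".
echo '#010909 4:46:40 server id 1 end_log_pos 256 Query' |
  sed -E -e 's/server id [0-9]*/server id #/' \
         -e 's/end_log_pos [0-9]*/end_log_pos #/'
```

Each `-e` expression plays the role of one `/pattern/replacement/` pair in the mysqltest command.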
+--echo #
+--echo #####################################################################################
+--echo # mysqlbinlog --read-from-remote-server --database=test1
+--echo # The following Annotate should appear in this output:
+--echo # - DELETE test1.t1, test2.t2 FROM <...>
+--echo #####################################################################################
+
+--replace_regex /server id [0-9]*/server id #/ /server v [^ ]*/server v #.##.##/ /exec_time=[0-9]*/exec_time=#/ /thread_id=[0-9]*/thread_id=#/ /table id [0-9]*/table id #/ /mapped to number [0-9]*/mapped to number #/ /end_log_pos [0-9]*/end_log_pos #/ /# at [0-9]*/# at #/
+--exec $MYSQL_BINLOG --base64-output=decode-rows --database=test1 -v -v --read-from-remote-server --user=root --host=localhost --port=$MASTER_MYPORT master-bin.000001
+
+--echo #
+--echo #####################################################################################
+--echo # mysqlbinlog --read-from-remote-server --skip-annotate-row-events
+--echo # No Annotates should appear in this output
+--echo #####################################################################################
+
+--replace_regex /server id [0-9]*/server id #/ /server v [^ ]*/server v #.##.##/ /exec_time=[0-9]*/exec_time=#/ /thread_id=[0-9]*/thread_id=#/ /table id [0-9]*/table id #/ /mapped to number [0-9]*/mapped to number #/ /end_log_pos [0-9]*/end_log_pos #/ /# at [0-9]*/# at #/
+--exec $MYSQL_BINLOG --base64-output=decode-rows --skip-annotate-row-events -v -v --read-from-remote-server --user=root --host=localhost --port=$MASTER_MYPORT master-bin.000001
+
+# Clean-up
+
+--disable_query_log
+DROP DATABASE test1;
+DROP DATABASE test2;
+DROP DATABASE test3;
+DROP DATABASE xtest1;
+DROP DATABASE xtest2;
+--enable_query_log
+
diff --git a/mysql-test/suite/binlog_encryption/binlog_write_error.result b/mysql-test/suite/binlog_encryption/binlog_write_error.result
new file mode 100644
index 0000000..28cffb3
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/binlog_write_error.result
@@ -0,0 +1,108 @@
+#
+# Initialization
+#
+DROP TABLE IF EXISTS t1, t2;
+DROP FUNCTION IF EXISTS f1;
+DROP FUNCTION IF EXISTS f2;
+DROP PROCEDURE IF EXISTS p1;
+DROP PROCEDURE IF EXISTS p2;
+DROP TRIGGER IF EXISTS tr1;
+DROP TRIGGER IF EXISTS tr2;
+DROP VIEW IF EXISTS v1, v2;
+#
+# Test injecting binlog write error when executing queries
+#
+SET GLOBAL debug_dbug='d,injecting_fault_writing';
+CREATE TABLE t1 (a INT);
+CREATE TABLE t1 (a INT);
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug_dbug='';
+INSERT INTO t1 VALUES (1),(2),(3);
+SET GLOBAL debug_dbug='d,injecting_fault_writing';
+INSERT INTO t1 VALUES (4),(5),(6);
+INSERT INTO t1 VALUES (4),(5),(6);
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug_dbug='';
+SET GLOBAL debug_dbug='d,injecting_fault_writing';
+UPDATE t1 set a=a+1;
+UPDATE t1 set a=a+1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug_dbug='';
+SET GLOBAL debug_dbug='d,injecting_fault_writing';
+DELETE FROM t1;
+DELETE FROM t1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug_dbug='';
+SET GLOBAL debug_dbug='d,injecting_fault_writing';
+CREATE TRIGGER tr1 AFTER INSERT ON t1 FOR EACH ROW INSERT INTO t1 VALUES (new.a + 100);
+CREATE TRIGGER tr1 AFTER INSERT ON t1 FOR EACH ROW INSERT INTO t1 VALUES (new.a + 100);
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug_dbug='';
+SET GLOBAL debug_dbug='d,injecting_fault_writing';
+DROP TRIGGER tr1;
+DROP TRIGGER tr1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug_dbug='';
+SET GLOBAL debug_dbug='d,injecting_fault_writing';
+ALTER TABLE t1 ADD (b INT);
+ALTER TABLE t1 ADD (b INT);
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug_dbug='';
+SET GLOBAL debug_dbug='d,injecting_fault_writing';
+CREATE VIEW v1 AS SELECT a FROM t1;
+CREATE VIEW v1 AS SELECT a FROM t1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug_dbug='';
+SET GLOBAL debug_dbug='d,injecting_fault_writing';
+DROP VIEW v1;
+DROP VIEW v1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug_dbug='';
+SET GLOBAL debug_dbug='d,injecting_fault_writing';
+CREATE PROCEDURE p1(OUT rows INT) SELECT count(*) INTO rows FROM t1;
+CREATE PROCEDURE p1(OUT rows INT) SELECT count(*) INTO rows FROM t1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug_dbug='';
+SET GLOBAL debug_dbug='d,injecting_fault_writing';
+DROP PROCEDURE p1;
+DROP PROCEDURE p1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug_dbug='';
+SET GLOBAL debug_dbug='d,injecting_fault_writing';
+DROP TABLE t1;
+DROP TABLE t1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug_dbug='';
+SET GLOBAL debug_dbug='d,injecting_fault_writing';
+CREATE FUNCTION f1() RETURNS INT return 1;
+CREATE FUNCTION f1() RETURNS INT return 1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug_dbug='';
+SET GLOBAL debug_dbug='d,injecting_fault_writing';
+DROP FUNCTION f1;
+DROP FUNCTION f1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug_dbug='';
+SET GLOBAL debug_dbug='d,injecting_fault_writing';
+CREATE USER user1;
+CREATE USER user1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug_dbug='';
+SET GLOBAL debug_dbug='d,injecting_fault_writing';
+REVOKE ALL PRIVILEGES, GRANT OPTION FROM user1;
+REVOKE ALL PRIVILEGES, GRANT OPTION FROM user1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug_dbug='';
+SET GLOBAL debug_dbug='d,injecting_fault_writing';
+DROP USER user1;
+DROP USER user1;
+ERROR HY000: Error writing file 'master-bin' ((errno: #)
+SET GLOBAL debug_dbug='';
+#
+# Cleanup
+#
+DROP TABLE IF EXISTS t1, t2;
+DROP FUNCTION IF EXISTS f1;
+DROP PROCEDURE IF EXISTS p1;
+DROP TRIGGER IF EXISTS tr1;
+DROP VIEW IF EXISTS v1, v2;
diff --git a/mysql-test/suite/binlog_encryption/binlog_write_error.test b/mysql-test/suite/binlog_encryption/binlog_write_error.test
new file mode 100644
index 0000000..aa0e705
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/binlog_write_error.test
@@ -0,0 +1,106 @@
+#
+# The test was taken from the binlog suite as is
+#
+
+#
+# === Name ===
+#
+# binlog_write_error.test
+#
+# === Description ===
+#
+# This test case checks whether errors writing to the binlog file are
+# properly reported and handled when executing statements.
+#
+# === Related Bugs ===
+#
+# BUG#37148
+#
+
+source include/have_log_bin.inc;
+source include/have_debug.inc;
+source include/have_binlog_format_mixed_or_statement.inc;
+
+--echo #
+--echo # Initialization
+--echo #
+
+disable_warnings;
+DROP TABLE IF EXISTS t1, t2;
+DROP FUNCTION IF EXISTS f1;
+DROP FUNCTION IF EXISTS f2;
+DROP PROCEDURE IF EXISTS p1;
+DROP PROCEDURE IF EXISTS p2;
+DROP TRIGGER IF EXISTS tr1;
+DROP TRIGGER IF EXISTS tr2;
+DROP VIEW IF EXISTS v1, v2;
+enable_warnings;
+
+--echo #
+--echo # Test injecting binlog write error when executing queries
+--echo #
+
+let $query= CREATE TABLE t1 (a INT);
+source include/binlog_inject_error.inc;
+
+INSERT INTO t1 VALUES (1),(2),(3);
+
+let $query= INSERT INTO t1 VALUES (4),(5),(6);
+source include/binlog_inject_error.inc;
+
+let $query= UPDATE t1 set a=a+1;
+source include/binlog_inject_error.inc;
+
+let $query= DELETE FROM t1;
+source include/binlog_inject_error.inc;
+
+let $query= CREATE TRIGGER tr1 AFTER INSERT ON t1 FOR EACH ROW INSERT INTO t1 VALUES (new.a + 100);
+source include/binlog_inject_error.inc;
+
+let $query= DROP TRIGGER tr1;
+source include/binlog_inject_error.inc;
+
+let $query= ALTER TABLE t1 ADD (b INT);
+source include/binlog_inject_error.inc;
+
+let $query= CREATE VIEW v1 AS SELECT a FROM t1;
+source include/binlog_inject_error.inc;
+
+let $query= DROP VIEW v1;
+source include/binlog_inject_error.inc;
+
+let $query= CREATE PROCEDURE p1(OUT rows INT) SELECT count(*) INTO rows FROM t1;
+source include/binlog_inject_error.inc;
+
+let $query= DROP PROCEDURE p1;
+source include/binlog_inject_error.inc;
+
+let $query= DROP TABLE t1;
+source include/binlog_inject_error.inc;
+
+let $query= CREATE FUNCTION f1() RETURNS INT return 1;
+source include/binlog_inject_error.inc;
+
+let $query= DROP FUNCTION f1;
+source include/binlog_inject_error.inc;
+
+let $query= CREATE USER user1;
+source include/binlog_inject_error.inc;
+
+let $query= REVOKE ALL PRIVILEGES, GRANT OPTION FROM user1;
+source include/binlog_inject_error.inc;
+
+let $query= DROP USER user1;
+source include/binlog_inject_error.inc;
+
+--echo #
+--echo # Cleanup
+--echo #
+
+disable_warnings;
+DROP TABLE IF EXISTS t1, t2;
+DROP FUNCTION IF EXISTS f1;
+DROP PROCEDURE IF EXISTS p1;
+DROP TRIGGER IF EXISTS tr1;
+DROP VIEW IF EXISTS v1, v2;
+enable_warnings;
diff --git a/mysql-test/suite/binlog_encryption/binlog_xa_recover-master.opt b/mysql-test/suite/binlog_encryption/binlog_xa_recover-master.opt
new file mode 100644
index 0000000..3c44f9f
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/binlog_xa_recover-master.opt
@@ -0,0 +1 @@
+--skip-stack-trace --skip-core-file --loose-debug-dbug=+d,xa_recover_expect_master_bin_000004
diff --git a/mysql-test/suite/binlog_encryption/binlog_xa_recover.result b/mysql-test/suite/binlog_encryption/binlog_xa_recover.result
new file mode 100644
index 0000000..6719d89
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/binlog_xa_recover.result
@@ -0,0 +1,240 @@
+SET GLOBAL max_binlog_size= 4096;
+SET GLOBAL innodb_flush_log_at_trx_commit= 1;
+RESET MASTER;
+CREATE TABLE t1 (a INT PRIMARY KEY, b MEDIUMTEXT) ENGINE=Innodb;
+INSERT INTO t1 VALUES (100, REPEAT("x", 4100));
+INSERT INTO t1 VALUES (101, REPEAT("x", 4100));
+INSERT INTO t1 VALUES (102, REPEAT("x", 4100));
+connect con1,localhost,root,,;
+SET DEBUG_SYNC= "ha_commit_trans_before_log_and_order SIGNAL con1_wait WAIT_FOR con1_cont";
+SET DEBUG_SYNC= "commit_after_group_release_commit_ordered SIGNAL con1_ready WAIT_FOR _ever";
+INSERT INTO t1 VALUES (1, REPEAT("x", 4100));
+connection default;
+SET DEBUG_SYNC= "now WAIT_FOR con1_wait";
+connect con2,localhost,root,,;
+SET DEBUG_SYNC= "ha_commit_trans_before_log_and_order SIGNAL con2_wait WAIT_FOR con2_cont";
+SET DEBUG_SYNC= "commit_after_group_release_commit_ordered SIGNAL con2_ready WAIT_FOR _ever";
+INSERT INTO t1 VALUES (2, NULL);
+connection default;
+SET DEBUG_SYNC= "now WAIT_FOR con2_wait";
+connect con3,localhost,root,,;
+SET DEBUG_SYNC= "ha_commit_trans_before_log_and_order SIGNAL con3_wait WAIT_FOR con3_cont";
+SET DEBUG_SYNC= "commit_after_group_release_commit_ordered SIGNAL con3_ready WAIT_FOR _ever";
+INSERT INTO t1 VALUES (3, REPEAT("x", 4100));
+connection default;
+SET DEBUG_SYNC= "now WAIT_FOR con3_wait";
+connect con4,localhost,root,,;
+SET DEBUG_SYNC= "ha_commit_trans_before_log_and_order SIGNAL con4_wait WAIT_FOR con4_cont";
+SET SESSION debug_dbug="+d,crash_commit_after_log";
+INSERT INTO t1 VALUES (4, NULL);
+connection default;
+SET DEBUG_SYNC= "now WAIT_FOR con4_wait";
+SET DEBUG_SYNC= "now SIGNAL con1_cont";
+SET DEBUG_SYNC= "now WAIT_FOR con1_ready";
+SET DEBUG_SYNC= "now SIGNAL con2_cont";
+SET DEBUG_SYNC= "now WAIT_FOR con2_ready";
+SET DEBUG_SYNC= "now SIGNAL con3_cont";
+SET DEBUG_SYNC= "now WAIT_FOR con3_ready";
+show binary logs;
+Log_name File_size
+master-bin.000001 #
+master-bin.000002 #
+master-bin.000003 #
+master-bin.000004 #
+master-bin.000005 #
+master-bin.000006 #
+include/show_binlog_events.inc
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000003 # Format_desc # # SERVER_VERSION, BINLOG_VERSION
+master-bin.000003 # Start_encryption # #
+master-bin.000003 # Gtid_list # # [#-#-#]
+master-bin.000003 # Binlog_checkpoint # # master-bin.000002
+master-bin.000003 # Binlog_checkpoint # # master-bin.000003
+master-bin.000003 # Gtid # # BEGIN GTID #-#-#
+master-bin.000003 # Table_map # # table_id: # (test.t1)
+master-bin.000003 # Write_rows_v1 # # table_id: # flags: STMT_END_F
+master-bin.000003 # Xid # # COMMIT /* XID */
+master-bin.000003 # Rotate # # master-bin.000004;pos=POS
+include/show_binlog_events.inc
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000004 # Format_desc # # SERVER_VERSION, BINLOG_VERSION
+master-bin.000004 # Start_encryption # #
+master-bin.000004 # Gtid_list # # [#-#-#]
+master-bin.000004 # Binlog_checkpoint # # master-bin.000003
+master-bin.000004 # Binlog_checkpoint # # master-bin.000004
+master-bin.000004 # Gtid # # BEGIN GTID #-#-#
+master-bin.000004 # Table_map # # table_id: # (test.t1)
+master-bin.000004 # Write_rows_v1 # # table_id: # flags: STMT_END_F
+master-bin.000004 # Xid # # COMMIT /* XID */
+master-bin.000004 # Rotate # # master-bin.000005;pos=POS
+include/show_binlog_events.inc
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000005 # Format_desc # # SERVER_VERSION, BINLOG_VERSION
+master-bin.000005 # Start_encryption # #
+master-bin.000005 # Gtid_list # # [#-#-#]
+master-bin.000005 # Binlog_checkpoint # # master-bin.000004
+master-bin.000005 # Gtid # # BEGIN GTID #-#-#
+master-bin.000005 # Table_map # # table_id: # (test.t1)
+master-bin.000005 # Write_rows_v1 # # table_id: # flags: STMT_END_F
+master-bin.000005 # Xid # # COMMIT /* XID */
+master-bin.000005 # Gtid # # BEGIN GTID #-#-#
+master-bin.000005 # Table_map # # table_id: # (test.t1)
+master-bin.000005 # Write_rows_v1 # # table_id: # flags: STMT_END_F
+master-bin.000005 # Xid # # COMMIT /* XID */
+master-bin.000005 # Rotate # # master-bin.000006;pos=POS
+include/show_binlog_events.inc
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000006 # Format_desc # # SERVER_VERSION, BINLOG_VERSION
+master-bin.000006 # Start_encryption # #
+master-bin.000006 # Gtid_list # # [#-#-#]
+master-bin.000006 # Binlog_checkpoint # # master-bin.000004
+PURGE BINARY LOGS TO "master-bin.000006";
+show binary logs;
+Log_name File_size
+master-bin.000004 #
+master-bin.000005 #
+master-bin.000006 #
+SET DEBUG_SYNC= "now SIGNAL con4_cont";
+connection con4;
+Got one of the listed errors
+connection default;
+SELECT a FROM t1 ORDER BY a;
+a
+1
+2
+3
+4
+100
+101
+102
+Test that with multiple binlog checkpoints, recovery starts from the last one.
+SET GLOBAL max_binlog_size= 4096;
+SET GLOBAL innodb_flush_log_at_trx_commit= 1;
+RESET MASTER;
+connect con10,localhost,root,,;
+SET DEBUG_SYNC= "commit_after_group_release_commit_ordered SIGNAL con10_ready WAIT_FOR con10_cont";
+INSERT INTO t1 VALUES (10, REPEAT("x", 4100));
+connection default;
+SET DEBUG_SYNC= "now WAIT_FOR con10_ready";
+connect con11,localhost,root,,;
+SET DEBUG_SYNC= "commit_after_group_release_commit_ordered SIGNAL con11_ready WAIT_FOR con11_cont";
+INSERT INTO t1 VALUES (11, REPEAT("x", 4100));
+connection default;
+SET DEBUG_SYNC= "now WAIT_FOR con11_ready";
+connect con12,localhost,root,,;
+SET DEBUG_SYNC= "commit_after_group_release_commit_ordered SIGNAL con12_ready WAIT_FOR con12_cont";
+INSERT INTO t1 VALUES (12, REPEAT("x", 4100));
+connection default;
+SET DEBUG_SYNC= "now WAIT_FOR con12_ready";
+INSERT INTO t1 VALUES (13, NULL);
+show binary logs;
+Log_name File_size
+master-bin.000001 #
+master-bin.000002 #
+master-bin.000003 #
+master-bin.000004 #
+include/show_binlog_events.inc
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000004 # Format_desc # # SERVER_VERSION, BINLOG_VERSION
+master-bin.000004 # Start_encryption # #
+master-bin.000004 # Gtid_list # # [#-#-#]
+master-bin.000004 # Binlog_checkpoint # # master-bin.000001
+master-bin.000004 # Gtid # # BEGIN GTID #-#-#
+master-bin.000004 # Table_map # # table_id: # (test.t1)
+master-bin.000004 # Write_rows_v1 # # table_id: # flags: STMT_END_F
+master-bin.000004 # Xid # # COMMIT /* XID */
+SET DEBUG_SYNC= "now SIGNAL con10_cont";
+connection con10;
+connection default;
+SET @old_dbug= @@global.DEBUG_DBUG;
+SET GLOBAL debug_dbug="+d,binlog_background_checkpoint_processed";
+SET DEBUG_SYNC= "now SIGNAL con12_cont";
+connection con12;
+connection default;
+SET DEBUG_SYNC= "now WAIT_FOR binlog_background_checkpoint_processed";
+SET GLOBAL debug_dbug= @old_dbug;
+SET DEBUG_SYNC= "now SIGNAL con11_cont";
+connection con11;
+connection default;
+Checking that master-bin.000004 is the last binlog checkpoint
+include/show_binlog_events.inc
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000004 # Format_desc # # SERVER_VERSION, BINLOG_VERSION
+master-bin.000004 # Start_encryption # #
+master-bin.000004 # Gtid_list # # [#-#-#]
+master-bin.000004 # Binlog_checkpoint # # master-bin.000001
+master-bin.000004 # Gtid # # BEGIN GTID #-#-#
+master-bin.000004 # Table_map # # table_id: # (test.t1)
+master-bin.000004 # Write_rows_v1 # # table_id: # flags: STMT_END_F
+master-bin.000004 # Xid # # COMMIT /* XID */
+master-bin.000004 # Binlog_checkpoint # # master-bin.000002
+master-bin.000004 # Binlog_checkpoint # # master-bin.000004
+Now crash the server
+SET SESSION debug_dbug="+d,crash_commit_after_log";
+INSERT INTO t1 VALUES (14, NULL);
+Got one of the listed errors
+connection default;
+SELECT a FROM t1 ORDER BY a;
+a
+1
+2
+3
+4
+10
+11
+12
+13
+14
+100
+101
+102
+*** Check that recovery works if we crashed early during rotate, before
+*** binlog checkpoint event could be written.
+SET GLOBAL max_binlog_size= 4096;
+SET GLOBAL innodb_flush_log_at_trx_commit= 1;
+RESET MASTER;
+INSERT INTO t1 VALUES (21, REPEAT("x", 4100));
+INSERT INTO t1 VALUES (22, REPEAT("x", 4100));
+INSERT INTO t1 VALUES (23, REPEAT("x", 4100));
+SET SESSION debug_dbug="+d,crash_before_write_checkpoint_event";
+INSERT INTO t1 VALUES (24, REPEAT("x", 4100));
+Got one of the listed errors
+SELECT a FROM t1 ORDER BY a;
+a
+1
+2
+3
+4
+10
+11
+12
+13
+14
+21
+22
+23
+24
+100
+101
+102
+show binary logs;
+Log_name File_size
+master-bin.000001 #
+master-bin.000002 #
+master-bin.000003 #
+master-bin.000004 #
+master-bin.000005 #
+include/show_binlog_events.inc
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000004 # Format_desc # # SERVER_VERSION, BINLOG_VERSION
+master-bin.000004 # Start_encryption # #
+master-bin.000004 # Gtid_list # # [#-#-#]
+master-bin.000004 # Binlog_checkpoint # # master-bin.000003
+master-bin.000004 # Binlog_checkpoint # # master-bin.000004
+master-bin.000004 # Gtid # # BEGIN GTID #-#-#
+master-bin.000004 # Table_map # # table_id: # (test.t1)
+master-bin.000004 # Write_rows_v1 # # table_id: # flags: STMT_END_F
+master-bin.000004 # Xid # # COMMIT /* XID */
+master-bin.000004 # Rotate # # master-bin.000005;pos=POS
+connection default;
+DROP TABLE t1;
diff --git a/mysql-test/suite/binlog_encryption/binlog_xa_recover.test b/mysql-test/suite/binlog_encryption/binlog_xa_recover.test
new file mode 100644
index 0000000..6aa90ae
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/binlog_xa_recover.test
@@ -0,0 +1,304 @@
+#
+# The test was taken from the binlog suite as is, only result file was modified
+#
+
+--source include/have_debug.inc
+--source include/have_debug_sync.inc
+--source include/have_binlog_format_row.inc
+# Valgrind does not work well with test that crashes the server
+--source include/not_valgrind.inc
+
+--enable_connect_log
+
+# (We do not need to restore these settings, as we crash the server).
+SET GLOBAL max_binlog_size= 4096;
+SET GLOBAL innodb_flush_log_at_trx_commit= 1;
+RESET MASTER;
+
+CREATE TABLE t1 (a INT PRIMARY KEY, b MEDIUMTEXT) ENGINE=Innodb;
+# Insert some data to force a couple binlog rotations (3), so we get some
+# normal binlog checkpoints before starting the test.
+INSERT INTO t1 VALUES (100, REPEAT("x", 4100));
+# Wait for the master-bin.000002 binlog checkpoint to appear.
+--let $wait_for_all= 0
+--let $show_statement= SHOW BINLOG EVENTS IN "master-bin.000002"
+--let $field= Info
+--let $condition= = "master-bin.000002"
+--disable_connect_log
+--source include/wait_show_condition.inc
+--enable_connect_log
+INSERT INTO t1 VALUES (101, REPEAT("x", 4100));
+--let $wait_for_all= 0
+--let $show_statement= SHOW BINLOG EVENTS IN "master-bin.000003"
+--let $field= Info
+--let $condition= = "master-bin.000003"
+--disable_connect_log
+--source include/wait_show_condition.inc
+--enable_connect_log
+INSERT INTO t1 VALUES (102, REPEAT("x", 4100));
+--let $wait_for_all= 0
+--let $show_statement= SHOW BINLOG EVENTS IN "master-bin.000004"
+--let $field= Info
+--let $condition= = "master-bin.000004"
+--disable_connect_log
+--source include/wait_show_condition.inc
+--enable_connect_log
+
+# Now start a bunch of transactions that span multiple binlog
+# files. Leave then in the state prepared-but-not-committed in the engine
+# and crash the server. Check that crash recovery is able to recover all
+# of them.
+#
+# We use debug_sync to get all the transactions into the prepared state before
+# we commit any of them. This is because the prepare step flushes the InnoDB
+# redo log - including any commits made before, so recovery would become
+# unnecessary, decreasing the value of this test.
+#
+# We arrange to have con1 with a prepared transaction in master-bin.000004,
+# con2 and con3 with a prepared transaction in master-bin.000005, and a new
+# empty master-bin.000006. So the latest binlog checkpoint should be
+# master-bin.000006.
+
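The crash-recovery contract exercised below can be modelled outside the server: at startup, recovery collects the XIDs left prepared in the engine, scans the binlog for their durably written Xid (commit) events, commits the ones it finds and rolls back the rest. A minimal Python sketch of that decision (illustrative names only, not server code):

```python
def xa_recover(prepared_xids, binlogged_xids):
    """Toy model of binlog-based XA recovery.

    prepared_xids:  XIDs left in prepared state in the engine at crash time.
    binlogged_xids: XIDs whose commit (Xid event) reached the binlog
                    before the crash.
    """
    # A prepared transaction whose commit made it to the binlog is committed;
    # one whose commit never reached the binlog is rolled back.
    commit = sorted(x for x in prepared_xids if x in binlogged_xids)
    rollback = sorted(x for x in prepared_xids if x not in binlogged_xids)
    return commit, rollback

# XIDs 1-3 were binlogged before the crash; 5 was prepared but never binlogged.
commit, rollback = xa_recover({1, 2, 3, 5}, {1, 2, 3, 4})
```

This mirrors why the test checks `SELECT a FROM t1 ORDER BY a` after restart: every row whose transaction was binlogged must survive recovery.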
+connect(con1,localhost,root,,);
+# First wait after prepare and before write to binlog.
+SET DEBUG_SYNC= "ha_commit_trans_before_log_and_order SIGNAL con1_wait WAIT_FOR con1_cont";
+# Then complete InnoDB commit in memory (but not commit checkpoint / write to
+# disk), and hang until crash, leaving a transaction to be XA recovered.
+SET DEBUG_SYNC= "commit_after_group_release_commit_ordered SIGNAL con1_ready WAIT_FOR _ever";
+send INSERT INTO t1 VALUES (1, REPEAT("x", 4100));
+
+connection default;
+SET DEBUG_SYNC= "now WAIT_FOR con1_wait";
+
+connect(con2,localhost,root,,);
+SET DEBUG_SYNC= "ha_commit_trans_before_log_and_order SIGNAL con2_wait WAIT_FOR con2_cont";
+SET DEBUG_SYNC= "commit_after_group_release_commit_ordered SIGNAL con2_ready WAIT_FOR _ever";
+send INSERT INTO t1 VALUES (2, NULL);
+
+connection default;
+SET DEBUG_SYNC= "now WAIT_FOR con2_wait";
+
+connect(con3,localhost,root,,);
+SET DEBUG_SYNC= "ha_commit_trans_before_log_and_order SIGNAL con3_wait WAIT_FOR con3_cont";
+SET DEBUG_SYNC= "commit_after_group_release_commit_ordered SIGNAL con3_ready WAIT_FOR _ever";
+send INSERT INTO t1 VALUES (3, REPEAT("x", 4100));
+
+connection default;
+SET DEBUG_SYNC= "now WAIT_FOR con3_wait";
+
+connect(con4,localhost,root,,);
+SET DEBUG_SYNC= "ha_commit_trans_before_log_and_order SIGNAL con4_wait WAIT_FOR con4_cont";
+SET SESSION debug_dbug="+d,crash_commit_after_log";
+send INSERT INTO t1 VALUES (4, NULL);
+
+connection default;
+SET DEBUG_SYNC= "now WAIT_FOR con4_wait";
+
+SET DEBUG_SYNC= "now SIGNAL con1_cont";
+SET DEBUG_SYNC= "now WAIT_FOR con1_ready";
+SET DEBUG_SYNC= "now SIGNAL con2_cont";
+SET DEBUG_SYNC= "now WAIT_FOR con2_ready";
+SET DEBUG_SYNC= "now SIGNAL con3_cont";
+SET DEBUG_SYNC= "now WAIT_FOR con3_ready";
+
+# Check that everything is committed in binary log.
+--disable_connect_log
+--source include/show_binary_logs.inc
+--let $binlog_file= master-bin.000003
+--let $binlog_start= 4
+--source include/show_binlog_events.inc
+--let $binlog_file= master-bin.000004
+--source include/show_binlog_events.inc
+--let $binlog_file= master-bin.000005
+--source include/show_binlog_events.inc
+--let $binlog_file= master-bin.000006
+--source include/show_binlog_events.inc
+--enable_connect_log
+
+
+# Check that server will not purge too much.
+PURGE BINARY LOGS TO "master-bin.000006";
+--disable_connect_log
+--source include/show_binary_logs.inc
+--enable_connect_log
+
+# Now crash the server with one more transaction in prepared state.
+--write_file $MYSQLTEST_VARDIR/tmp/mysqld.1.expect
+wait-binlog_xa_recover.test
+EOF
+--error 0,2006,2013
+SET DEBUG_SYNC= "now SIGNAL con4_cont";
+connection con4;
+--error 2006,2013
+reap;
+
+--append_file $MYSQLTEST_VARDIR/tmp/mysqld.1.expect
+restart-group_commit_binlog_pos.test
+EOF
+
+connection default;
+--enable_reconnect
+--disable_connect_log
+--source include/wait_until_connected_again.inc
+--enable_connect_log
+
+# Check that all transactions are recovered.
+SELECT a FROM t1 ORDER BY a;
+
+--echo Test that with multiple binlog checkpoints, recovery starts from the last one.
+SET GLOBAL max_binlog_size= 4096;
+SET GLOBAL innodb_flush_log_at_trx_commit= 1;
+RESET MASTER;
+
+# Rotate to binlog master-bin.000003 while delaying binlog checkpoints.
+# So we get multiple binlog checkpoints in master-bin.000003.
+# Then complete the checkpoints, crash, and check that we only scan
+# the necessary binlog file (ie. that we use the _last_ checkpoint).
+
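The property verified in this section, that recovery scans only from the file named by the last Binlog_checkpoint event, can be illustrated with a toy event list (a sketch of the selection rule, not the server's event parser):

```python
def recovery_scan_start(events):
    """Return the binlog file named by the LAST Binlog_checkpoint event.

    events: (event_type, info) tuples from the newest binlog file,
    in the order they were written.
    """
    start = None
    for event_type, info in events:
        if event_type == "Binlog_checkpoint":
            start = info  # a later checkpoint supersedes earlier ones
    return start

# Shaped like the SHOW BINLOG EVENTS output checked below: delayed
# checkpoints for 000001 and 000002 are followed by the final one.
events = [
    ("Format_desc", ""),
    ("Binlog_checkpoint", "master-bin.000001"),
    ("Binlog_checkpoint", "master-bin.000002"),
    ("Binlog_checkpoint", "master-bin.000004"),
]
```

Recovery that honoured only the first checkpoint would scan from master-bin.000001 and waste work; the test's error insert crashes the server if scanning starts anywhere but master-bin.000004.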
+connect(con10,localhost,root,,);
+SET DEBUG_SYNC= "commit_after_group_release_commit_ordered SIGNAL con10_ready WAIT_FOR con10_cont";
+send INSERT INTO t1 VALUES (10, REPEAT("x", 4100));
+
+connection default;
+SET DEBUG_SYNC= "now WAIT_FOR con10_ready";
+
+connect(con11,localhost,root,,);
+SET DEBUG_SYNC= "commit_after_group_release_commit_ordered SIGNAL con11_ready WAIT_FOR con11_cont";
+send INSERT INTO t1 VALUES (11, REPEAT("x", 4100));
+
+connection default;
+SET DEBUG_SYNC= "now WAIT_FOR con11_ready";
+
+connect(con12,localhost,root,,);
+SET DEBUG_SYNC= "commit_after_group_release_commit_ordered SIGNAL con12_ready WAIT_FOR con12_cont";
+send INSERT INTO t1 VALUES (12, REPEAT("x", 4100));
+
+connection default;
+SET DEBUG_SYNC= "now WAIT_FOR con12_ready";
+INSERT INTO t1 VALUES (13, NULL);
+
+--disable_connect_log
+--source include/show_binary_logs.inc
+--let $binlog_file= master-bin.000004
+--let $binlog_start= 4
+--source include/show_binlog_events.inc
+--enable_connect_log
+
+SET DEBUG_SYNC= "now SIGNAL con10_cont";
+connection con10;
+reap;
+connection default;
+
+# We need to sync the test case with the background processing of the
+# commit checkpoint, otherwise we get nondeterministic results.
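The SIGNAL/WAIT_FOR handshake used throughout this test to order sessions deterministically can be modelled with a condition variable: a session announces it has reached a sync point, then blocks until the controlling connection lets it continue. A hedged Python sketch of that pattern (not the server's DEBUG_SYNC implementation):

```python
import threading

class DebugSync:
    """Toy model of the DEBUG_SYNC signal/wait handshake."""
    def __init__(self):
        self._cond = threading.Condition()
        self._signals = set()

    def signal(self, name):
        with self._cond:
            self._signals.add(name)
            self._cond.notify_all()

    def wait_for(self, name, timeout=5):
        with self._cond:
            return self._cond.wait_for(lambda: name in self._signals, timeout)

sync = DebugSync()
results = []

def session():
    # Like a connection stopped at a sync point: announce readiness,
    # then block until the controller signals continuation.
    sync.signal("con_ready")
    sync.wait_for("con_cont")
    results.append("committed")

t = threading.Thread(target=session)
t.start()
sync.wait_for("con_ready")   # controller: SET DEBUG_SYNC= "now WAIT_FOR con_ready"
sync.signal("con_cont")      # controller: SET DEBUG_SYNC= "now SIGNAL con_cont"
t.join()
```

Signals persist once raised, so a waiter that arrives after the signal does not block, which is why the test can signal and wait in either order without racing.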
+SET @old_dbug= @@global.DEBUG_DBUG;
+SET GLOBAL debug_dbug="+d,binlog_background_checkpoint_processed";
+
+SET DEBUG_SYNC= "now SIGNAL con12_cont";
+connection con12;
+reap;
+connection default;
+SET DEBUG_SYNC= "now WAIT_FOR binlog_background_checkpoint_processed";
+SET GLOBAL debug_dbug= @old_dbug;
+
+SET DEBUG_SYNC= "now SIGNAL con11_cont";
+connection con11;
+reap;
+
+connection default;
+# Wait for the last (master-bin.000004) binlog checkpoint to appear.
+--let $wait_for_all= 0
+--let $show_statement= SHOW BINLOG EVENTS IN "master-bin.000004"
+--let $field= Info
+--let $condition= = "master-bin.000004"
+--disable_connect_log
+--source include/wait_show_condition.inc
+
+--echo Checking that master-bin.000004 is the last binlog checkpoint
+--source include/show_binlog_events.inc
+--enable_connect_log
+
+--echo Now crash the server
+# It is not too easy to test XA recovery, as it runs early during server
+# startup, before any connections can be made.
+# What we do is set a DBUG error insert which will crash if XA recovery
+# starts from any other binlog than master-bin.000004 (check the file
+# binlog_xa_recover-master.opt). Then we will fail here if XA recovery
+# would start from the wrong place.
+--write_file $MYSQLTEST_VARDIR/tmp/mysqld.1.expect
+wait-binlog_xa_recover.test
+EOF
+SET SESSION debug_dbug="+d,crash_commit_after_log";
+--error 2006,2013
+INSERT INTO t1 VALUES (14, NULL);
+
+--append_file $MYSQLTEST_VARDIR/tmp/mysqld.1.expect
+restart-group_commit_binlog_pos.test
+EOF
+
+connection default;
+--enable_reconnect
+--disable_connect_log
+--source include/wait_until_connected_again.inc
+--enable_connect_log
+
+# Check that all transactions are recovered.
+SELECT a FROM t1 ORDER BY a;
+
+
+--echo *** Check that recovery works if we crashed early during rotate, before
+--echo *** binlog checkpoint event could be written.
+
+SET GLOBAL max_binlog_size= 4096;
+SET GLOBAL innodb_flush_log_at_trx_commit= 1;
+RESET MASTER;
+
+# We need some initial data to reach binlog master-bin.000004. Otherwise
+# crash recovery fails due to the error insert used for previous test.
+INSERT INTO t1 VALUES (21, REPEAT("x", 4100));
+INSERT INTO t1 VALUES (22, REPEAT("x", 4100));
+# Wait for the master-bin.000003 binlog checkpoint to appear.
+--let $wait_for_all= 0
+--let $show_statement= SHOW BINLOG EVENTS IN "master-bin.000003"
+--let $field= Info
+--let $condition= = "master-bin.000003"
+--disable_connect_log
+--source include/wait_show_condition.inc
+INSERT INTO t1 VALUES (23, REPEAT("x", 4100));
+# Wait for the last (master-bin.000004) binlog checkpoint to appear.
+--let $wait_for_all= 0
+--let $show_statement= SHOW BINLOG EVENTS IN "master-bin.000004"
+--let $field= Info
+--let $condition= = "master-bin.000004"
+--source include/wait_show_condition.inc
+--enable_connect_log
+
+--write_file $MYSQLTEST_VARDIR/tmp/mysqld.1.expect
+wait-binlog_xa_recover.test
+EOF
+SET SESSION debug_dbug="+d,crash_before_write_checkpoint_event";
+--error 2006,2013
+INSERT INTO t1 VALUES (24, REPEAT("x", 4100));
+
+--append_file $MYSQLTEST_VARDIR/tmp/mysqld.1.expect
+restart-group_commit_binlog_pos.test
+EOF
+
+--enable_reconnect
+--disable_connect_log
+--source include/wait_until_connected_again.inc
+--enable_connect_log
+
+# Check that all transactions are recovered.
+SELECT a FROM t1 ORDER BY a;
+
+--disable_connect_log
+--source include/show_binary_logs.inc
+--let $binlog_file= master-bin.000004
+--let $binlog_start= 4
+--source include/show_binlog_events.inc
+--enable_connect_log
+
+# Cleanup
+connection default;
+DROP TABLE t1;
diff --git a/mysql-test/suite/binlog_encryption/disabled.def b/mysql-test/suite/binlog_encryption/disabled.def
new file mode 100644
index 0000000..cb780c4
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/disabled.def
@@ -0,0 +1 @@
+encrypted_master_switch_to_unencrypted : MDEV-11288 (server crash)
diff --git a/mysql-test/suite/binlog_encryption/encrypted_master.result b/mysql-test/suite/binlog_encryption/encrypted_master.result
new file mode 100644
index 0000000..efa92e0
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/encrypted_master.result
@@ -0,0 +1,694 @@
+#################
+# Initialization
+#################
+include/rpl_init.inc [topology=1->2]
+connection server_2;
+include/stop_slave_sql.inc
+connection server_1;
+SET @binlog_annotate_row_events.save= @@global.binlog_annotate_row_events;
+SET @binlog_checksum.save= @@global.binlog_checksum;
+SET @master_verify_checksum.save= @@global.master_verify_checksum;
+SET @binlog_row_image.save= @@global.binlog_row_image;
+####################################################
+# Test 1: simple binlog, no checksum, no annotation
+####################################################
+connection server_1;
+SET binlog_annotate_row_events= 0;
+SET GLOBAL binlog_annotate_row_events= 0;
+SET GLOBAL binlog_checksum= NONE;
+SET GLOBAL master_verify_checksum= 0;
+call mtr.add_suppression("Unsafe statement written to the binary log using statement format since BINLOG_FORMAT = STATEMENT");
+CREATE DATABASE database_name_to_encrypt;
+USE database_name_to_encrypt;
+CREATE USER user_name_to_encrypt;
+GRANT ALL ON database_name_to_encrypt.* TO user_name_to_encrypt;
+SET PASSWORD FOR user_name_to_encrypt = PASSWORD('password_to_encrypt');
+CREATE TABLE innodb_table_name_to_encrypt (
+int_column_name_to_encrypt INT AUTO_INCREMENT PRIMARY KEY,
+timestamp_column_name_to_encrypt TIMESTAMP(6) NULL,
+blob_column_name_to_encrypt BLOB,
+virt_column_name_to_encrypt INT AS (int_column_name_to_encrypt % 10) VIRTUAL,
+pers_column_name_to_encrypt INT AS (int_column_name_to_encrypt) PERSISTENT,
+INDEX `index_name_to_encrypt`(`timestamp_column_name_to_encrypt`)
+) ENGINE=InnoDB
+PARTITION BY RANGE (int_column_name_to_encrypt)
+SUBPARTITION BY KEY (int_column_name_to_encrypt)
+SUBPARTITIONS 2 (
+PARTITION partition0_name_to_encrypt VALUES LESS THAN (100),
+PARTITION partition1_name_to_encrypt VALUES LESS THAN (MAXVALUE)
+)
+;
+CREATE TABLE myisam_table_name_to_encrypt (
+int_column_name_to_encrypt INT AUTO_INCREMENT PRIMARY KEY,
+char_column_name_to_encrypt VARCHAR(255),
+datetime_column_name_to_encrypt DATETIME,
+text_column_name_to_encrypt TEXT
+) ENGINE=MyISAM;
+CREATE TABLE aria_table_name_to_encrypt (
+int_column_name_to_encrypt INT AUTO_INCREMENT PRIMARY KEY,
+varchar_column_name_to_encrypt VARCHAR(1024),
+enum_column_name_to_encrypt ENUM(
+'enum_value1_to_encrypt',
+'enum_value2_to_encrypt'
+ ),
+timestamp_column_name_to_encrypt TIMESTAMP(6) NULL,
+blob_column_name_to_encrypt BLOB
+) ENGINE=Aria;
+CREATE TRIGGER trigger_name_to_encrypt
+AFTER INSERT ON myisam_table_name_to_encrypt FOR EACH ROW
+INSERT INTO aria_table_name_to_encrypt (varchar_column_name_to_encrypt)
+VALUES (NEW.char_column_name_to_encrypt);
+CREATE DEFINER=user_name_to_encrypt VIEW view_name_to_encrypt
+AS SELECT * FROM innodb_table_name_to_encrypt;
+CREATE FUNCTION func_name_to_encrypt (func_parameter_to_encrypt INT)
+RETURNS VARCHAR(64)
+RETURN 'func_result_to_encrypt';
+CREATE PROCEDURE proc_name_to_encrypt (
+IN proc_in_parameter_to_encrypt CHAR(32),
+OUT proc_out_parameter_to_encrypt INT
+)
+BEGIN
+DECLARE procvar_name_to_encrypt CHAR(64) DEFAULT 'procvar_val_to_encrypt';
+DECLARE cursor_name_to_encrypt CURSOR FOR
+SELECT virt_column_name_to_encrypt FROM innodb_table_name_to_encrypt;
+DECLARE EXIT HANDLER FOR NOT FOUND
+BEGIN
+SET @stmt_var_to_encrypt = CONCAT(
+"SELECT
+ IF (RAND()>0.5,'enum_value2_to_encrypt','enum_value1_to_encrypt')
+ FROM innodb_table_name_to_encrypt
+ INTO OUTFILE '", proc_in_parameter_to_encrypt, "'");
+PREPARE stmt_to_encrypt FROM @stmt_var_to_encrypt;
+EXECUTE stmt_to_encrypt;
+DEALLOCATE PREPARE stmt_to_encrypt;
+END;
+OPEN cursor_name_to_encrypt;
+proc_label_to_encrypt: LOOP
+FETCH cursor_name_to_encrypt INTO procvar_name_to_encrypt;
+END LOOP;
+CLOSE cursor_name_to_encrypt;
+END $$
+CREATE SERVER server_name_to_encrypt
+FOREIGN DATA WRAPPER mysql
+OPTIONS (HOST 'host_name_to_encrypt');
+connect con1,localhost,user_name_to_encrypt,password_to_encrypt,database_name_to_encrypt;
+CREATE TEMPORARY TABLE tmp_table_name_to_encrypt (
+float_column_name_to_encrypt FLOAT,
+binary_column_name_to_encrypt BINARY(64)
+);
+disconnect con1;
+connection server_1;
+CREATE INDEX index_name_to_encrypt
+ON myisam_table_name_to_encrypt (datetime_column_name_to_encrypt);
+ALTER DATABASE database_name_to_encrypt CHARACTER SET utf8;
+ALTER TABLE innodb_table_name_to_encrypt
+MODIFY timestamp_column_name_to_encrypt TIMESTAMP NOT NULL
+DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
+;
+ALTER ALGORITHM=MERGE VIEW view_name_to_encrypt
+AS SELECT * FROM innodb_table_name_to_encrypt;
+RENAME TABLE innodb_table_name_to_encrypt TO new_table_name_to_encrypt;
+ALTER TABLE new_table_name_to_encrypt RENAME TO innodb_table_name_to_encrypt;
+set @user_var1_to_encrypt= 'dyncol1_val_to_encrypt';
+set @user_var2_to_encrypt= 'dyncol2_name_to_encrypt';
+INSERT INTO view_name_to_encrypt VALUES
+(1, NOW(6), COLUMN_CREATE('dyncol1_name_to_encrypt',@user_var1_to_encrypt), NULL, NULL),
+(2, NOW(6), COLUMN_CREATE(@user_var2_to_encrypt,'dyncol2_val_to_encrypt'), NULL, NULL)
+;
+BEGIN NOT ATOMIC
+DECLARE counter_name_to_encrypt INT DEFAULT 0;
+START TRANSACTION;
+WHILE counter_name_to_encrypt<12 DO
+INSERT INTO innodb_table_name_to_encrypt
+SELECT NULL, NOW(6), blob_column_name_to_encrypt, NULL, NULL
+FROM innodb_table_name_to_encrypt
+ORDER BY int_column_name_to_encrypt;
+SET counter_name_to_encrypt = counter_name_to_encrypt+1;
+END WHILE;
+COMMIT;
+END
+$$
+INSERT INTO myisam_table_name_to_encrypt
+SELECT NULL, 'char_literal_to_encrypt', NULL, 'text_to_encrypt';
+INSERT INTO myisam_table_name_to_encrypt (char_column_name_to_encrypt)
+SELECT char_column_name_to_encrypt FROM myisam_table_name_to_encrypt;
+INSERT INTO myisam_table_name_to_encrypt (char_column_name_to_encrypt)
+SELECT char_column_name_to_encrypt FROM myisam_table_name_to_encrypt;
+INSERT INTO myisam_table_name_to_encrypt (char_column_name_to_encrypt)
+SELECT char_column_name_to_encrypt FROM myisam_table_name_to_encrypt;
+CALL proc_name_to_encrypt('file_name_to_encrypt',@useless_var_to_encrypt);
+TRUNCATE TABLE aria_table_name_to_encrypt;
+LOAD DATA INFILE 'file_name_to_encrypt' INTO TABLE aria_table_name_to_encrypt
+(enum_column_name_to_encrypt);
+LOAD DATA LOCAL INFILE '<DATADIR>/database_name_to_encrypt/file_name_to_encrypt'
+INTO TABLE aria_table_name_to_encrypt (enum_column_name_to_encrypt);
+UPDATE view_name_to_encrypt SET blob_column_name_to_encrypt =
+COLUMN_CREATE('dyncol1_name_to_encrypt',func_name_to_encrypt(0))
+;
+DELETE FROM aria_table_name_to_encrypt ORDER BY int_column_name_to_encrypt LIMIT 10;
+ANALYZE TABLE myisam_table_name_to_encrypt;
+CHECK TABLE aria_table_name_to_encrypt;
+CHECKSUM TABLE innodb_table_name_to_encrypt, myisam_table_name_to_encrypt;
+RENAME USER user_name_to_encrypt to new_user_name_to_encrypt;
+REVOKE ALL PRIVILEGES, GRANT OPTION FROM new_user_name_to_encrypt;
+DROP DATABASE database_name_to_encrypt;
+DROP USER new_user_name_to_encrypt;
+DROP SERVER server_name_to_encrypt;
+####################################################
+# Test 2: binlog with checksum, no annotated events
+####################################################
+connection server_1;
+SET binlog_annotate_row_events= 0;
+SET GLOBAL binlog_annotate_row_events= 0;
+SET GLOBAL binlog_checksum= CRC32;
+SET GLOBAL master_verify_checksum= 1;
+call mtr.add_suppression("Unsafe statement written to the binary log using statement format since BINLOG_FORMAT = STATEMENT");
+CREATE DATABASE database_name_to_encrypt;
+USE database_name_to_encrypt;
+CREATE USER user_name_to_encrypt;
+GRANT ALL ON database_name_to_encrypt.* TO user_name_to_encrypt;
+SET PASSWORD FOR user_name_to_encrypt = PASSWORD('password_to_encrypt');
+CREATE TABLE innodb_table_name_to_encrypt (
+int_column_name_to_encrypt INT AUTO_INCREMENT PRIMARY KEY,
+timestamp_column_name_to_encrypt TIMESTAMP(6) NULL,
+blob_column_name_to_encrypt BLOB,
+virt_column_name_to_encrypt INT AS (int_column_name_to_encrypt % 10) VIRTUAL,
+pers_column_name_to_encrypt INT AS (int_column_name_to_encrypt) PERSISTENT,
+INDEX `index_name_to_encrypt`(`timestamp_column_name_to_encrypt`)
+) ENGINE=InnoDB
+PARTITION BY RANGE (int_column_name_to_encrypt)
+SUBPARTITION BY KEY (int_column_name_to_encrypt)
+SUBPARTITIONS 2 (
+PARTITION partition0_name_to_encrypt VALUES LESS THAN (100),
+PARTITION partition1_name_to_encrypt VALUES LESS THAN (MAXVALUE)
+)
+;
+CREATE TABLE myisam_table_name_to_encrypt (
+int_column_name_to_encrypt INT AUTO_INCREMENT PRIMARY KEY,
+char_column_name_to_encrypt VARCHAR(255),
+datetime_column_name_to_encrypt DATETIME,
+text_column_name_to_encrypt TEXT
+) ENGINE=MyISAM;
+CREATE TABLE aria_table_name_to_encrypt (
+int_column_name_to_encrypt INT AUTO_INCREMENT PRIMARY KEY,
+varchar_column_name_to_encrypt VARCHAR(1024),
+enum_column_name_to_encrypt ENUM(
+'enum_value1_to_encrypt',
+'enum_value2_to_encrypt'
+ ),
+timestamp_column_name_to_encrypt TIMESTAMP(6) NULL,
+blob_column_name_to_encrypt BLOB
+) ENGINE=Aria;
+CREATE TRIGGER trigger_name_to_encrypt
+AFTER INSERT ON myisam_table_name_to_encrypt FOR EACH ROW
+INSERT INTO aria_table_name_to_encrypt (varchar_column_name_to_encrypt)
+VALUES (NEW.char_column_name_to_encrypt);
+CREATE DEFINER=user_name_to_encrypt VIEW view_name_to_encrypt
+AS SELECT * FROM innodb_table_name_to_encrypt;
+CREATE FUNCTION func_name_to_encrypt (func_parameter_to_encrypt INT)
+RETURNS VARCHAR(64)
+RETURN 'func_result_to_encrypt';
+CREATE PROCEDURE proc_name_to_encrypt (
+IN proc_in_parameter_to_encrypt CHAR(32),
+OUT proc_out_parameter_to_encrypt INT
+)
+BEGIN
+DECLARE procvar_name_to_encrypt CHAR(64) DEFAULT 'procvar_val_to_encrypt';
+DECLARE cursor_name_to_encrypt CURSOR FOR
+SELECT virt_column_name_to_encrypt FROM innodb_table_name_to_encrypt;
+DECLARE EXIT HANDLER FOR NOT FOUND
+BEGIN
+SET @stmt_var_to_encrypt = CONCAT(
+"SELECT
+ IF (RAND()>0.5,'enum_value2_to_encrypt','enum_value1_to_encrypt')
+ FROM innodb_table_name_to_encrypt
+ INTO OUTFILE '", proc_in_parameter_to_encrypt, "'");
+PREPARE stmt_to_encrypt FROM @stmt_var_to_encrypt;
+EXECUTE stmt_to_encrypt;
+DEALLOCATE PREPARE stmt_to_encrypt;
+END;
+OPEN cursor_name_to_encrypt;
+proc_label_to_encrypt: LOOP
+FETCH cursor_name_to_encrypt INTO procvar_name_to_encrypt;
+END LOOP;
+CLOSE cursor_name_to_encrypt;
+END $$
+CREATE SERVER server_name_to_encrypt
+FOREIGN DATA WRAPPER mysql
+OPTIONS (HOST 'host_name_to_encrypt');
+connect con1,localhost,user_name_to_encrypt,password_to_encrypt,database_name_to_encrypt;
+CREATE TEMPORARY TABLE tmp_table_name_to_encrypt (
+float_column_name_to_encrypt FLOAT,
+binary_column_name_to_encrypt BINARY(64)
+);
+disconnect con1;
+connection server_1;
+CREATE INDEX index_name_to_encrypt
+ON myisam_table_name_to_encrypt (datetime_column_name_to_encrypt);
+ALTER DATABASE database_name_to_encrypt CHARACTER SET utf8;
+ALTER TABLE innodb_table_name_to_encrypt
+MODIFY timestamp_column_name_to_encrypt TIMESTAMP NOT NULL
+DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
+;
+ALTER ALGORITHM=MERGE VIEW view_name_to_encrypt
+AS SELECT * FROM innodb_table_name_to_encrypt;
+RENAME TABLE innodb_table_name_to_encrypt TO new_table_name_to_encrypt;
+ALTER TABLE new_table_name_to_encrypt RENAME TO innodb_table_name_to_encrypt;
+set @user_var1_to_encrypt= 'dyncol1_val_to_encrypt';
+set @user_var2_to_encrypt= 'dyncol2_name_to_encrypt';
+INSERT INTO view_name_to_encrypt VALUES
+(1, NOW(6), COLUMN_CREATE('dyncol1_name_to_encrypt',@user_var1_to_encrypt), NULL, NULL),
+(2, NOW(6), COLUMN_CREATE(@user_var2_to_encrypt,'dyncol2_val_to_encrypt'), NULL, NULL)
+;
+BEGIN NOT ATOMIC
+DECLARE counter_name_to_encrypt INT DEFAULT 0;
+START TRANSACTION;
+WHILE counter_name_to_encrypt<12 DO
+INSERT INTO innodb_table_name_to_encrypt
+SELECT NULL, NOW(6), blob_column_name_to_encrypt, NULL, NULL
+FROM innodb_table_name_to_encrypt
+ORDER BY int_column_name_to_encrypt;
+SET counter_name_to_encrypt = counter_name_to_encrypt+1;
+END WHILE;
+COMMIT;
+END
+$$
+INSERT INTO myisam_table_name_to_encrypt
+SELECT NULL, 'char_literal_to_encrypt', NULL, 'text_to_encrypt';
+INSERT INTO myisam_table_name_to_encrypt (char_column_name_to_encrypt)
+SELECT char_column_name_to_encrypt FROM myisam_table_name_to_encrypt;
+INSERT INTO myisam_table_name_to_encrypt (char_column_name_to_encrypt)
+SELECT char_column_name_to_encrypt FROM myisam_table_name_to_encrypt;
+INSERT INTO myisam_table_name_to_encrypt (char_column_name_to_encrypt)
+SELECT char_column_name_to_encrypt FROM myisam_table_name_to_encrypt;
+CALL proc_name_to_encrypt('file_name_to_encrypt',@useless_var_to_encrypt);
+TRUNCATE TABLE aria_table_name_to_encrypt;
+LOAD DATA INFILE 'file_name_to_encrypt' INTO TABLE aria_table_name_to_encrypt
+(enum_column_name_to_encrypt);
+LOAD DATA LOCAL INFILE '<DATADIR>/database_name_to_encrypt/file_name_to_encrypt'
+INTO TABLE aria_table_name_to_encrypt (enum_column_name_to_encrypt);
+UPDATE view_name_to_encrypt SET blob_column_name_to_encrypt =
+COLUMN_CREATE('dyncol1_name_to_encrypt',func_name_to_encrypt(0))
+;
+DELETE FROM aria_table_name_to_encrypt ORDER BY int_column_name_to_encrypt LIMIT 10;
+ANALYZE TABLE myisam_table_name_to_encrypt;
+CHECK TABLE aria_table_name_to_encrypt;
+CHECKSUM TABLE innodb_table_name_to_encrypt, myisam_table_name_to_encrypt;
+RENAME USER user_name_to_encrypt to new_user_name_to_encrypt;
+REVOKE ALL PRIVILEGES, GRANT OPTION FROM new_user_name_to_encrypt;
+DROP DATABASE database_name_to_encrypt;
+DROP USER new_user_name_to_encrypt;
+DROP SERVER server_name_to_encrypt;
+####################################################
+# Test 3: binlog with checksum and annotated events
+####################################################
+connection server_1;
+SET binlog_annotate_row_events= 1;
+SET GLOBAL binlog_annotate_row_events= 1;
+SET GLOBAL binlog_checksum= CRC32;
+SET GLOBAL master_verify_checksum= 1;
+call mtr.add_suppression("Unsafe statement written to the binary log using statement format since BINLOG_FORMAT = STATEMENT");
+CREATE DATABASE database_name_to_encrypt;
+USE database_name_to_encrypt;
+CREATE USER user_name_to_encrypt;
+GRANT ALL ON database_name_to_encrypt.* TO user_name_to_encrypt;
+SET PASSWORD FOR user_name_to_encrypt = PASSWORD('password_to_encrypt');
+CREATE TABLE innodb_table_name_to_encrypt (
+int_column_name_to_encrypt INT AUTO_INCREMENT PRIMARY KEY,
+timestamp_column_name_to_encrypt TIMESTAMP(6) NULL,
+blob_column_name_to_encrypt BLOB,
+virt_column_name_to_encrypt INT AS (int_column_name_to_encrypt % 10) VIRTUAL,
+pers_column_name_to_encrypt INT AS (int_column_name_to_encrypt) PERSISTENT,
+INDEX `index_name_to_encrypt`(`timestamp_column_name_to_encrypt`)
+) ENGINE=InnoDB
+PARTITION BY RANGE (int_column_name_to_encrypt)
+SUBPARTITION BY KEY (int_column_name_to_encrypt)
+SUBPARTITIONS 2 (
+PARTITION partition0_name_to_encrypt VALUES LESS THAN (100),
+PARTITION partition1_name_to_encrypt VALUES LESS THAN (MAXVALUE)
+)
+;
+CREATE TABLE myisam_table_name_to_encrypt (
+int_column_name_to_encrypt INT AUTO_INCREMENT PRIMARY KEY,
+char_column_name_to_encrypt VARCHAR(255),
+datetime_column_name_to_encrypt DATETIME,
+text_column_name_to_encrypt TEXT
+) ENGINE=MyISAM;
+CREATE TABLE aria_table_name_to_encrypt (
+int_column_name_to_encrypt INT AUTO_INCREMENT PRIMARY KEY,
+varchar_column_name_to_encrypt VARCHAR(1024),
+enum_column_name_to_encrypt ENUM(
+'enum_value1_to_encrypt',
+'enum_value2_to_encrypt'
+ ),
+timestamp_column_name_to_encrypt TIMESTAMP(6) NULL,
+blob_column_name_to_encrypt BLOB
+) ENGINE=Aria;
+CREATE TRIGGER trigger_name_to_encrypt
+AFTER INSERT ON myisam_table_name_to_encrypt FOR EACH ROW
+INSERT INTO aria_table_name_to_encrypt (varchar_column_name_to_encrypt)
+VALUES (NEW.char_column_name_to_encrypt);
+CREATE DEFINER=user_name_to_encrypt VIEW view_name_to_encrypt
+AS SELECT * FROM innodb_table_name_to_encrypt;
+CREATE FUNCTION func_name_to_encrypt (func_parameter_to_encrypt INT)
+RETURNS VARCHAR(64)
+RETURN 'func_result_to_encrypt';
+CREATE PROCEDURE proc_name_to_encrypt (
+IN proc_in_parameter_to_encrypt CHAR(32),
+OUT proc_out_parameter_to_encrypt INT
+)
+BEGIN
+DECLARE procvar_name_to_encrypt CHAR(64) DEFAULT 'procvar_val_to_encrypt';
+DECLARE cursor_name_to_encrypt CURSOR FOR
+SELECT virt_column_name_to_encrypt FROM innodb_table_name_to_encrypt;
+DECLARE EXIT HANDLER FOR NOT FOUND
+BEGIN
+SET @stmt_var_to_encrypt = CONCAT(
+"SELECT
+ IF (RAND()>0.5,'enum_value2_to_encrypt','enum_value1_to_encrypt')
+ FROM innodb_table_name_to_encrypt
+ INTO OUTFILE '", proc_in_parameter_to_encrypt, "'");
+PREPARE stmt_to_encrypt FROM @stmt_var_to_encrypt;
+EXECUTE stmt_to_encrypt;
+DEALLOCATE PREPARE stmt_to_encrypt;
+END;
+OPEN cursor_name_to_encrypt;
+proc_label_to_encrypt: LOOP
+FETCH cursor_name_to_encrypt INTO procvar_name_to_encrypt;
+END LOOP;
+CLOSE cursor_name_to_encrypt;
+END $$
+CREATE SERVER server_name_to_encrypt
+FOREIGN DATA WRAPPER mysql
+OPTIONS (HOST 'host_name_to_encrypt');
+connect con1,localhost,user_name_to_encrypt,password_to_encrypt,database_name_to_encrypt;
+CREATE TEMPORARY TABLE tmp_table_name_to_encrypt (
+float_column_name_to_encrypt FLOAT,
+binary_column_name_to_encrypt BINARY(64)
+);
+disconnect con1;
+connection server_1;
+CREATE INDEX index_name_to_encrypt
+ON myisam_table_name_to_encrypt (datetime_column_name_to_encrypt);
+ALTER DATABASE database_name_to_encrypt CHARACTER SET utf8;
+ALTER TABLE innodb_table_name_to_encrypt
+MODIFY timestamp_column_name_to_encrypt TIMESTAMP NOT NULL
+DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
+;
+ALTER ALGORITHM=MERGE VIEW view_name_to_encrypt
+AS SELECT * FROM innodb_table_name_to_encrypt;
+RENAME TABLE innodb_table_name_to_encrypt TO new_table_name_to_encrypt;
+ALTER TABLE new_table_name_to_encrypt RENAME TO innodb_table_name_to_encrypt;
+set @user_var1_to_encrypt= 'dyncol1_val_to_encrypt';
+set @user_var2_to_encrypt= 'dyncol2_name_to_encrypt';
+INSERT INTO view_name_to_encrypt VALUES
+(1, NOW(6), COLUMN_CREATE('dyncol1_name_to_encrypt',@user_var1_to_encrypt), NULL, NULL),
+(2, NOW(6), COLUMN_CREATE(@user_var2_to_encrypt,'dyncol2_val_to_encrypt'), NULL, NULL)
+;
+BEGIN NOT ATOMIC
+DECLARE counter_name_to_encrypt INT DEFAULT 0;
+START TRANSACTION;
+WHILE counter_name_to_encrypt<12 DO
+INSERT INTO innodb_table_name_to_encrypt
+SELECT NULL, NOW(6), blob_column_name_to_encrypt, NULL, NULL
+FROM innodb_table_name_to_encrypt
+ORDER BY int_column_name_to_encrypt;
+SET counter_name_to_encrypt = counter_name_to_encrypt+1;
+END WHILE;
+COMMIT;
+END
+$$
+INSERT INTO myisam_table_name_to_encrypt
+SELECT NULL, 'char_literal_to_encrypt', NULL, 'text_to_encrypt';
+INSERT INTO myisam_table_name_to_encrypt (char_column_name_to_encrypt)
+SELECT char_column_name_to_encrypt FROM myisam_table_name_to_encrypt;
+INSERT INTO myisam_table_name_to_encrypt (char_column_name_to_encrypt)
+SELECT char_column_name_to_encrypt FROM myisam_table_name_to_encrypt;
+INSERT INTO myisam_table_name_to_encrypt (char_column_name_to_encrypt)
+SELECT char_column_name_to_encrypt FROM myisam_table_name_to_encrypt;
+CALL proc_name_to_encrypt('file_name_to_encrypt',@useless_var_to_encrypt);
+TRUNCATE TABLE aria_table_name_to_encrypt;
+LOAD DATA INFILE 'file_name_to_encrypt' INTO TABLE aria_table_name_to_encrypt
+(enum_column_name_to_encrypt);
+LOAD DATA LOCAL INFILE '<DATADIR>/database_name_to_encrypt/file_name_to_encrypt'
+INTO TABLE aria_table_name_to_encrypt (enum_column_name_to_encrypt);
+UPDATE view_name_to_encrypt SET blob_column_name_to_encrypt =
+COLUMN_CREATE('dyncol1_name_to_encrypt',func_name_to_encrypt(0))
+;
+DELETE FROM aria_table_name_to_encrypt ORDER BY int_column_name_to_encrypt LIMIT 10;
+ANALYZE TABLE myisam_table_name_to_encrypt;
+CHECK TABLE aria_table_name_to_encrypt;
+CHECKSUM TABLE innodb_table_name_to_encrypt, myisam_table_name_to_encrypt;
+RENAME USER user_name_to_encrypt to new_user_name_to_encrypt;
+REVOKE ALL PRIVILEGES, GRANT OPTION FROM new_user_name_to_encrypt;
+DROP DATABASE database_name_to_encrypt;
+DROP USER new_user_name_to_encrypt;
+DROP SERVER server_name_to_encrypt;
+####################################################
+# Test 4: binlog with annotated events and binlog_row_image=minimal
+####################################################
+connection server_1;
+SET binlog_annotate_row_events= 1;
+SET GLOBAL binlog_annotate_row_events= 1;
+SET GLOBAL binlog_checksum= NONE;
+SET GLOBAL master_verify_checksum= 0;
+SET GLOBAL binlog_row_image= MINIMAL;
+SET binlog_row_image= MINIMAL;
+call mtr.add_suppression("Unsafe statement written to the binary log using statement format since BINLOG_FORMAT = STATEMENT");
+CREATE DATABASE database_name_to_encrypt;
+USE database_name_to_encrypt;
+CREATE USER user_name_to_encrypt;
+GRANT ALL ON database_name_to_encrypt.* TO user_name_to_encrypt;
+SET PASSWORD FOR user_name_to_encrypt = PASSWORD('password_to_encrypt');
+CREATE TABLE innodb_table_name_to_encrypt (
+int_column_name_to_encrypt INT AUTO_INCREMENT PRIMARY KEY,
+timestamp_column_name_to_encrypt TIMESTAMP(6) NULL,
+blob_column_name_to_encrypt BLOB,
+virt_column_name_to_encrypt INT AS (int_column_name_to_encrypt % 10) VIRTUAL,
+pers_column_name_to_encrypt INT AS (int_column_name_to_encrypt) PERSISTENT,
+INDEX `index_name_to_encrypt`(`timestamp_column_name_to_encrypt`)
+) ENGINE=InnoDB
+PARTITION BY RANGE (int_column_name_to_encrypt)
+SUBPARTITION BY KEY (int_column_name_to_encrypt)
+SUBPARTITIONS 2 (
+PARTITION partition0_name_to_encrypt VALUES LESS THAN (100),
+PARTITION partition1_name_to_encrypt VALUES LESS THAN (MAXVALUE)
+)
+;
+CREATE TABLE myisam_table_name_to_encrypt (
+int_column_name_to_encrypt INT AUTO_INCREMENT PRIMARY KEY,
+char_column_name_to_encrypt VARCHAR(255),
+datetime_column_name_to_encrypt DATETIME,
+text_column_name_to_encrypt TEXT
+) ENGINE=MyISAM;
+CREATE TABLE aria_table_name_to_encrypt (
+int_column_name_to_encrypt INT AUTO_INCREMENT PRIMARY KEY,
+varchar_column_name_to_encrypt VARCHAR(1024),
+enum_column_name_to_encrypt ENUM(
+'enum_value1_to_encrypt',
+'enum_value2_to_encrypt'
+ ),
+timestamp_column_name_to_encrypt TIMESTAMP(6) NULL,
+blob_column_name_to_encrypt BLOB
+) ENGINE=Aria;
+CREATE TRIGGER trigger_name_to_encrypt
+AFTER INSERT ON myisam_table_name_to_encrypt FOR EACH ROW
+INSERT INTO aria_table_name_to_encrypt (varchar_column_name_to_encrypt)
+VALUES (NEW.char_column_name_to_encrypt);
+CREATE DEFINER=user_name_to_encrypt VIEW view_name_to_encrypt
+AS SELECT * FROM innodb_table_name_to_encrypt;
+CREATE FUNCTION func_name_to_encrypt (func_parameter_to_encrypt INT)
+RETURNS VARCHAR(64)
+RETURN 'func_result_to_encrypt';
+CREATE PROCEDURE proc_name_to_encrypt (
+IN proc_in_parameter_to_encrypt CHAR(32),
+OUT proc_out_parameter_to_encrypt INT
+)
+BEGIN
+DECLARE procvar_name_to_encrypt CHAR(64) DEFAULT 'procvar_val_to_encrypt';
+DECLARE cursor_name_to_encrypt CURSOR FOR
+SELECT virt_column_name_to_encrypt FROM innodb_table_name_to_encrypt;
+DECLARE EXIT HANDLER FOR NOT FOUND
+BEGIN
+SET @stmt_var_to_encrypt = CONCAT(
+"SELECT
+ IF (RAND()>0.5,'enum_value2_to_encrypt','enum_value1_to_encrypt')
+ FROM innodb_table_name_to_encrypt
+ INTO OUTFILE '", proc_in_parameter_to_encrypt, "'");
+PREPARE stmt_to_encrypt FROM @stmt_var_to_encrypt;
+EXECUTE stmt_to_encrypt;
+DEALLOCATE PREPARE stmt_to_encrypt;
+END;
+OPEN cursor_name_to_encrypt;
+proc_label_to_encrypt: LOOP
+FETCH cursor_name_to_encrypt INTO procvar_name_to_encrypt;
+END LOOP;
+CLOSE cursor_name_to_encrypt;
+END $$
+CREATE SERVER server_name_to_encrypt
+FOREIGN DATA WRAPPER mysql
+OPTIONS (HOST 'host_name_to_encrypt');
+connect con1,localhost,user_name_to_encrypt,password_to_encrypt,database_name_to_encrypt;
+CREATE TEMPORARY TABLE tmp_table_name_to_encrypt (
+float_column_name_to_encrypt FLOAT,
+binary_column_name_to_encrypt BINARY(64)
+);
+disconnect con1;
+connection server_1;
+CREATE INDEX index_name_to_encrypt
+ON myisam_table_name_to_encrypt (datetime_column_name_to_encrypt);
+ALTER DATABASE database_name_to_encrypt CHARACTER SET utf8;
+ALTER TABLE innodb_table_name_to_encrypt
+MODIFY timestamp_column_name_to_encrypt TIMESTAMP NOT NULL
+DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
+;
+ALTER ALGORITHM=MERGE VIEW view_name_to_encrypt
+AS SELECT * FROM innodb_table_name_to_encrypt;
+RENAME TABLE innodb_table_name_to_encrypt TO new_table_name_to_encrypt;
+ALTER TABLE new_table_name_to_encrypt RENAME TO innodb_table_name_to_encrypt;
+set @user_var1_to_encrypt= 'dyncol1_val_to_encrypt';
+set @user_var2_to_encrypt= 'dyncol2_name_to_encrypt';
+INSERT INTO view_name_to_encrypt VALUES
+(1, NOW(6), COLUMN_CREATE('dyncol1_name_to_encrypt',@user_var1_to_encrypt), NULL, NULL),
+(2, NOW(6), COLUMN_CREATE(@user_var2_to_encrypt,'dyncol2_val_to_encrypt'), NULL, NULL)
+;
+BEGIN NOT ATOMIC
+DECLARE counter_name_to_encrypt INT DEFAULT 0;
+START TRANSACTION;
+WHILE counter_name_to_encrypt<12 DO
+INSERT INTO innodb_table_name_to_encrypt
+SELECT NULL, NOW(6), blob_column_name_to_encrypt, NULL, NULL
+FROM innodb_table_name_to_encrypt
+ORDER BY int_column_name_to_encrypt;
+SET counter_name_to_encrypt = counter_name_to_encrypt+1;
+END WHILE;
+COMMIT;
+END
+$$
+INSERT INTO myisam_table_name_to_encrypt
+SELECT NULL, 'char_literal_to_encrypt', NULL, 'text_to_encrypt';
+INSERT INTO myisam_table_name_to_encrypt (char_column_name_to_encrypt)
+SELECT char_column_name_to_encrypt FROM myisam_table_name_to_encrypt;
+INSERT INTO myisam_table_name_to_encrypt (char_column_name_to_encrypt)
+SELECT char_column_name_to_encrypt FROM myisam_table_name_to_encrypt;
+INSERT INTO myisam_table_name_to_encrypt (char_column_name_to_encrypt)
+SELECT char_column_name_to_encrypt FROM myisam_table_name_to_encrypt;
+CALL proc_name_to_encrypt('file_name_to_encrypt',@useless_var_to_encrypt);
+TRUNCATE TABLE aria_table_name_to_encrypt;
+LOAD DATA INFILE 'file_name_to_encrypt' INTO TABLE aria_table_name_to_encrypt
+(enum_column_name_to_encrypt);
+LOAD DATA LOCAL INFILE '<DATADIR>/database_name_to_encrypt/file_name_to_encrypt'
+INTO TABLE aria_table_name_to_encrypt (enum_column_name_to_encrypt);
+UPDATE view_name_to_encrypt SET blob_column_name_to_encrypt =
+COLUMN_CREATE('dyncol1_name_to_encrypt',func_name_to_encrypt(0))
+;
+DELETE FROM aria_table_name_to_encrypt ORDER BY int_column_name_to_encrypt LIMIT 10;
+ANALYZE TABLE myisam_table_name_to_encrypt;
+CHECK TABLE aria_table_name_to_encrypt;
+CHECKSUM TABLE innodb_table_name_to_encrypt, myisam_table_name_to_encrypt;
+RENAME USER user_name_to_encrypt to new_user_name_to_encrypt;
+REVOKE ALL PRIVILEGES, GRANT OPTION FROM new_user_name_to_encrypt;
+DROP DATABASE database_name_to_encrypt;
+DROP USER new_user_name_to_encrypt;
+DROP SERVER server_name_to_encrypt;
+#############################
+# Final checks for the master
+#############################
+#
+# The next step will cause a perl error if the search does not
+# meet the expectations.
+# Pattern to look for: _to_encrypt
+# Files to search: master-bin.0*
+# Expected result: 0
+# (0 means the pattern should not be found, 1 means it should be found)
+#
+Did not find any occurrences of '_to_encrypt' in master-bin.0*
+#
+# The next step will cause a perl error if the search does not
+# meet the expectations.
+# Pattern to look for: COMMIT
+# Files to search: master-bin.0*
+# Expected result: 0
+# (0 means the pattern should not be found, 1 means it should be found)
+#
+Did not find any occurrences of 'COMMIT' in master-bin.0*
+#
+# The next step will cause a perl error if the search does not
+# meet the expectations.
+# Pattern to look for: TIMESTAMP
+# Files to search: master-bin.0*
+# Expected result: 0
+# (0 means the pattern should not be found, 1 means it should be found)
+#
+Did not find any occurrences of 'TIMESTAMP' in master-bin.0*
+include/save_master_pos.inc
+#############################
+# Final checks for the slave
+#############################
+connection server_2;
+include/sync_io_with_master.inc
+#
+# The next step will cause a perl error if the search does not
+# meet the expectations.
+# Pattern to look for: _to_encrypt
+# Files to search: slave-relay-bin.0*
+# Expected result: 1
+# (0 means the pattern should not be found, 1 means it should be found)
+#
+Found occurrences of '_to_encrypt' in slave-relay-bin.0*
+#
+# The next step will cause a perl error if the search does not
+# meet the expectations.
+# Pattern to look for: COMMIT
+# Files to search: slave-relay-bin.0*
+# Expected result: 1
+# (0 means the pattern should not be found, 1 means it should be found)
+#
+Found occurrences of 'COMMIT' in slave-relay-bin.0*
+#
+# The next step will cause a perl error if the search does not
+# meet the expectations.
+# Pattern to look for: TIMESTAMP
+# Files to search: slave-relay-bin.0*
+# Expected result: 1
+# (0 means the pattern should not be found, 1 means it should be found)
+#
+Found occurrences of 'TIMESTAMP' in slave-relay-bin.0*
+include/start_slave.inc
+include/sync_slave_sql_with_io.inc
+#
+# The next step will cause a perl error if the search does not
+# meet the expectations.
+# Pattern to look for: _to_encrypt
+# Files to search: slave-bin.0*
+# Expected result: 1
+# (0 means the pattern should not be found, 1 means it should be found)
+#
+Found occurrences of '_to_encrypt' in slave-bin.0*
+#
+# The next step will cause a perl error if the search does not
+# meet the expectations.
+# Pattern to look for: COMMIT
+# Files to search: slave-bin.0*
+# Expected result: 1
+# (0 means the pattern should not be found, 1 means it should be found)
+#
+Found occurrences of 'COMMIT' in slave-bin.0*
+#
+# The next step will cause a perl error if the search does not
+# meet the expectations.
+# Pattern to look for: TIMESTAMP
+# Files to search: slave-bin.0*
+# Expected result: 1
+# (0 means the pattern should not be found, 1 means it should be found)
+#
+Found occurrences of 'TIMESTAMP' in slave-bin.0*
+##########
+# Cleanup
+##########
+connection server_1;
+SET GLOBAL binlog_annotate_row_events= @binlog_annotate_row_events.save;
+SET GLOBAL binlog_checksum= @binlog_checksum.save;
+SET GLOBAL master_verify_checksum= @master_verify_checksum.save;
+SET GLOBAL binlog_row_image= @binlog_row_image.save;
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/encrypted_master.test b/mysql-test/suite/binlog_encryption/encrypted_master.test
new file mode 100644
index 0000000..cbf79c0
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/encrypted_master.test
@@ -0,0 +1,188 @@
+#
+# The test checks that basic DDL and DML events are encrypted
+# in the binary log on master.
+# The test is to be run with all binlog formats
+# (combinations for rpl_init.inc take care of that).
+#
+#
+# The test runs with the encrypted master and non-encrypted slave.
+# It generates a sequence of events on master, and checks that
+# - all events are encrypted on master;
+# - slave is able to replicate from the master;
+# - relay logs and binary logs are not encrypted on slave.
+#
+# The same exercise is repeated
+# - without annotated binlog events and without binlog checksums;
+# - with binlog checksums;
+# - with annotated events and binlog checksums;
+# - with annotated events, default checksums and minimal binlog row image
+#
+
+--source encryption_algorithms.inc
+--source include/have_innodb.inc
+--enable_connect_log
+
+--echo #################
+--echo # Initialization
+--echo #################
+
+--disable_connect_log
+--let $rpl_topology= 1->2
+--source include/rpl_init.inc
+--enable_connect_log
+
+# We stop SQL thread because we want to have
+# all relay logs at the end of the test flow
+
+--connection server_2
+--disable_connect_log
+--source include/stop_slave_sql.inc
+--enable_connect_log
+
+--connection server_1
+
+SET @binlog_annotate_row_events.save= @@global.binlog_annotate_row_events;
+SET @binlog_checksum.save= @@global.binlog_checksum;
+SET @master_verify_checksum.save= @@global.master_verify_checksum;
+SET @binlog_row_image.save= @@global.binlog_row_image;
+
+--echo ####################################################
+--echo # Test 1: simple binlog, no checksum, no annotation
+--echo ####################################################
+
+--connection server_1
+
+SET binlog_annotate_row_events= 0;
+SET GLOBAL binlog_annotate_row_events= 0;
+SET GLOBAL binlog_checksum= NONE;
+SET GLOBAL master_verify_checksum= 0;
+
+--source testdata.inc
+
+--echo ####################################################
+--echo # Test 2: binlog with checksum, no annotated events
+--echo ####################################################
+
+--connection server_1
+
+SET binlog_annotate_row_events= 0;
+SET GLOBAL binlog_annotate_row_events= 0;
+SET GLOBAL binlog_checksum= CRC32;
+SET GLOBAL master_verify_checksum= 1;
+
+--source testdata.inc
+
+--echo ####################################################
+--echo # Test 3: binlog with checksum and annotated events
+--echo ####################################################
+
+--connection server_1
+
+SET binlog_annotate_row_events= 1;
+SET GLOBAL binlog_annotate_row_events= 1;
+SET GLOBAL binlog_checksum= CRC32;
+SET GLOBAL master_verify_checksum= 1;
+
+--source testdata.inc
+
+--echo ####################################################
+--echo # Test 4: binlog with annotated events and binlog_row_image=minimal
+--echo ####################################################
+
+--connection server_1
+
+SET binlog_annotate_row_events= 1;
+SET GLOBAL binlog_annotate_row_events= 1;
+SET GLOBAL binlog_checksum= NONE;
+SET GLOBAL master_verify_checksum= 0;
+SET GLOBAL binlog_row_image= MINIMAL;
+SET binlog_row_image= MINIMAL;
+
+--source testdata.inc
+
+--echo #############################
+--echo # Final checks for the master
+--echo #############################
+
+--let search_files=master-bin.0*
+--let search_pattern= _to_encrypt
+--let search_result= 0
+--source grep_binlog.inc
+
+--let search_files=master-bin.0*
+--let search_pattern= COMMIT
+--let search_result= 0
+--source grep_binlog.inc
+
+--let search_files=master-bin.0*
+--let search_pattern= TIMESTAMP
+--let search_result= 0
+--source grep_binlog.inc
+
+--disable_connect_log
+--source include/save_master_pos.inc
+--enable_connect_log
+
+--echo #############################
+--echo # Final checks for the slave
+--echo #############################
+
+# Wait for the IO thread to write everything to relay logs
+
+--connection server_2
+
+--disable_connect_log
+--source include/sync_io_with_master.inc
+
+# Check that relay logs are unencrypted
+
+--let search_files=slave-relay-bin.0*
+--let search_pattern= _to_encrypt
+--let search_result= 1
+--source grep_binlog.inc
+
+--let search_files=slave-relay-bin.0*
+--let search_pattern= COMMIT
+--let search_result= 1
+--source grep_binlog.inc
+
+--let search_files=slave-relay-bin.0*
+--let search_pattern= TIMESTAMP
+--let search_result= 1
+--source grep_binlog.inc
+
+
+# Re-enable SQL thread, let it catch up with IO thread
+# and check slave binary logs
+
+--source include/start_slave.inc
+--source include/sync_slave_sql_with_io.inc
+--enable_connect_log
+
+--let search_files=slave-bin.0*
+--let search_pattern= _to_encrypt
+--let search_result= 1
+--source grep_binlog.inc
+
+--let search_files=slave-bin.0*
+--let search_pattern= COMMIT
+--let search_result= 1
+--source grep_binlog.inc
+
+--let search_files=slave-bin.0*
+--let search_pattern= TIMESTAMP
+--let search_result= 1
+--source grep_binlog.inc
+
+--echo ##########
+--echo # Cleanup
+--echo ##########
+
+--connection server_1
+SET GLOBAL binlog_annotate_row_events= @binlog_annotate_row_events.save;
+SET GLOBAL binlog_checksum= @binlog_checksum.save;
+SET GLOBAL master_verify_checksum= @master_verify_checksum.save;
+SET GLOBAL binlog_row_image= @binlog_row_image.save;
+
+--disable_connect_log
+--source include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/encrypted_master_lost_key.result b/mysql-test/suite/binlog_encryption/encrypted_master_lost_key.result
new file mode 100644
index 0000000..13af8cb
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/encrypted_master_lost_key.result
@@ -0,0 +1,111 @@
+#################
+# Initialization
+#################
+include/rpl_init.inc [topology=1->2]
+connection server_2;
+include/stop_slave.inc
+#####################################################
+# Pre-test 1: Initial key value
+#####################################################
+connection server_1;
+CREATE TABLE table1_to_encrypt (
+pk INT AUTO_INCREMENT PRIMARY KEY,
+ts TIMESTAMP NULL,
+b BLOB
+) ENGINE=MyISAM;
+INSERT INTO table1_to_encrypt VALUES (NULL,NOW(),'data_to_encrypt');
+INSERT INTO table1_to_encrypt SELECT NULL,NOW(),b FROM table1_to_encrypt;
+SET binlog_format=ROW;
+INSERT INTO table1_to_encrypt SELECT NULL,NOW(),b FROM table1_to_encrypt;
+INSERT INTO table1_to_encrypt SELECT NULL,NOW(),b FROM table1_to_encrypt;
+#
+# The next step will cause a perl error if the search does not
+# meet the expectations.
+# Pattern to look for: table1_to_encrypt
+# Files to search: master-bin.0*
+# Expected result: 0
+# (0 means the pattern should not be found, 1 means it should be found)
+#
+Did not find any occurrences of 'table1_to_encrypt' in master-bin.0*
+#######################################################
+# Pre-test 2: restart master with a different key value
+#######################################################
+connection default;
+connection server_1;
+CREATE TABLE table2_to_encrypt (
+pk INT AUTO_INCREMENT PRIMARY KEY,
+ts TIMESTAMP NULL,
+b BLOB
+) ENGINE=MyISAM;
+INSERT INTO table2_to_encrypt VALUES (NULL,NOW(),'data_to_encrypt');
+INSERT INTO table2_to_encrypt SELECT NULL,NOW(),b FROM table2_to_encrypt;
+SET binlog_format=ROW;
+INSERT INTO table2_to_encrypt SELECT NULL,NOW(),b FROM table2_to_encrypt;
+INSERT INTO table2_to_encrypt SELECT NULL,NOW(),b FROM table2_to_encrypt;
+#
+# The next step will cause a perl error if the search does not
+# meet the expectations.
+# Pattern to look for: table2_to_encrypt
+# Files to search: master-bin.0*
+# Expected result: 0
+# (0 means the pattern should not be found, 1 means it should be found)
+#
+Did not find any occurrences of 'table2_to_encrypt' in master-bin.0*
+#####################################################
+# Pre-test 3: restart master again with the right key
+#####################################################
+connection default;
+connection server_1;
+CREATE TABLE table3_to_encrypt (
+pk INT AUTO_INCREMENT PRIMARY KEY,
+ts TIMESTAMP NULL,
+b BLOB
+) ENGINE=MyISAM;
+INSERT INTO table3_to_encrypt VALUES (NULL,NOW(),'data_to_encrypt');
+INSERT INTO table3_to_encrypt SELECT NULL,NOW(),b FROM table3_to_encrypt;
+INSERT INTO table3_to_encrypt SELECT NULL,NOW(),b FROM table3_to_encrypt;
+FLUSH BINARY LOGS;
+INSERT INTO table3_to_encrypt SELECT NULL,NOW(),b FROM table3_to_encrypt;
+# WARNING: Part of the test was disabled due to MDEV-11323
+#####################################################
+# Test 2: check that replication works if it starts
+# from a good binary log
+#####################################################
+connection server_2;
+include/stop_slave.inc
+Warnings:
+Note 1255 Slave already has been stopped
+RESET SLAVE ALL;
+DROP DATABASE test;
+CREATE DATABASE test;
+USE test;
+CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=<MASTER_PORT>, MASTER_USER='root', MASTER_LOG_FILE='master-bin.000003';
+include/start_slave.inc
+SHOW TABLES;
+Tables_in_test
+table3_to_encrypt
+#####################################################
+# Test 3: check that replication works if we purge
+# master logs up to the good one
+#####################################################
+connection server_2;
+connection server_1;
+PURGE BINARY LOGS TO 'master-bin.000003';
+connection server_2;
+include/stop_slave.inc
+RESET SLAVE ALL;
+DROP DATABASE test;
+CREATE DATABASE test;
+USE test;
+CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=<MASTER_PORT>, MASTER_USER='root';
+include/start_slave.inc
+SHOW TABLES;
+Tables_in_test
+table3_to_encrypt
+##########
+# Cleanup
+##########
+connection server_1;
+DROP TABLE table1_to_encrypt, table2_to_encrypt, table3_to_encrypt;
+connection server_2;
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/encrypted_master_lost_key.test b/mysql-test/suite/binlog_encryption/encrypted_master_lost_key.test
new file mode 100644
index 0000000..2e6cbfd
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/encrypted_master_lost_key.test
@@ -0,0 +1,206 @@
+#
+# TODO: finalize the test when MDEV-11323 is fixed or clarified;
+# and write here what it actually checks.
+# The part where slave synchronizes with master will need
+# to be changed, but how it is changed depends on MDEV-11323.
+#
+# The test checks effects and workarounds for the situation when
+# the key used to encrypt previous binary logs on master has been lost,
+# and master runs with a different one.
+#
+# The test starts with encrypted binlogs on master.
+# It stops replication, generates a few statement and row events
+# on the master, then restarts the server with encrypted binlog,
+# but with a different value for key 1.
+#
+# Then it resumes replication and checks what happens when the server
+# tries to feed the binary logs to the slave (it should not work).
+#
+# Then it resets the slave, configures it to start from a "good"
+# master binlog log, for which the master has a key, starts replication
+# and checks that it works.
+#
+# Then it resets the slave again, purges binary logs on master up
+# to the "good" one, starts replication and checks that it works.
+#
+
+--source include/have_binlog_format_mixed.inc
+
+--echo #################
+--echo # Initialization
+--echo #################
+
+--let $rpl_topology= 1->2
+--source include/rpl_init.inc
+
+--enable_connect_log
+
+# We stop replication because we want it to happen after the switch
+
+--connection server_2
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+
+--echo #####################################################
+--echo # Pre-test 1: Initial key value
+--echo #####################################################
+
+--connection server_1
+
+CREATE TABLE table1_to_encrypt (
+ pk INT AUTO_INCREMENT PRIMARY KEY,
+ ts TIMESTAMP NULL,
+ b BLOB
+) ENGINE=MyISAM;
+
+INSERT INTO table1_to_encrypt VALUES (NULL,NOW(),'data_to_encrypt');
+INSERT INTO table1_to_encrypt SELECT NULL,NOW(),b FROM table1_to_encrypt;
+SET binlog_format=ROW;
+INSERT INTO table1_to_encrypt SELECT NULL,NOW(),b FROM table1_to_encrypt;
+INSERT INTO table1_to_encrypt SELECT NULL,NOW(),b FROM table1_to_encrypt;
+
+# Make sure that binary logs are encrypted
+
+--let search_files=master-bin.0*
+--let search_pattern= table1_to_encrypt
+--let search_result= 0
+--source grep_binlog.inc
+
+--echo #######################################################
+--echo # Pre-test 2: restart master with a different key value
+--echo #######################################################
+
+--write_file $MYSQL_TMP_DIR/master_lose_key.key
+1;00000AAAAAAAAAAAAAAAAAAAAAA00000
+EOF
+
+--let $rpl_server_parameters= --loose-file-key-management-filename=$MYSQL_TMP_DIR/master_lose_key.key
+
+--let $rpl_server_number= 1
+--source restart_server.inc
+
+CREATE TABLE table2_to_encrypt (
+ pk INT AUTO_INCREMENT PRIMARY KEY,
+ ts TIMESTAMP NULL,
+ b BLOB
+) ENGINE=MyISAM;
+
+INSERT INTO table2_to_encrypt VALUES (NULL,NOW(),'data_to_encrypt');
+INSERT INTO table2_to_encrypt SELECT NULL,NOW(),b FROM table2_to_encrypt;
+SET binlog_format=ROW;
+INSERT INTO table2_to_encrypt SELECT NULL,NOW(),b FROM table2_to_encrypt;
+INSERT INTO table2_to_encrypt SELECT NULL,NOW(),b FROM table2_to_encrypt;
+
+# Make sure that binary logs are encrypted
+
+--let search_files=master-bin.0*
+--let search_pattern= table2_to_encrypt
+--let search_result= 0
+--source grep_binlog.inc
+
+--echo #####################################################
+--echo # Pre-test 3: restart master again with the right key
+--echo #####################################################
+
+--let $rpl_server_parameters=
+--let $rpl_server_number= 1
+--source restart_server.inc
+
+--let $good_master_binlog= query_get_value(SHOW MASTER STATUS,File,1)
+
+CREATE TABLE table3_to_encrypt (
+ pk INT AUTO_INCREMENT PRIMARY KEY,
+ ts TIMESTAMP NULL,
+ b BLOB
+) ENGINE=MyISAM;
+
+INSERT INTO table3_to_encrypt VALUES (NULL,NOW(),'data_to_encrypt');
+INSERT INTO table3_to_encrypt SELECT NULL,NOW(),b FROM table3_to_encrypt;
+INSERT INTO table3_to_encrypt SELECT NULL,NOW(),b FROM table3_to_encrypt;
+FLUSH BINARY LOGS;
+INSERT INTO table3_to_encrypt SELECT NULL,NOW(),b FROM table3_to_encrypt;
+
+--save_master_pos
+
+# TODO: Fix and re-enable after MDEV-11323 is closed
+--disable_parsing
+
+--echo #####################################################
+--echo # Test 1: check how replication goes when it reaches
+--echo # the log with a wrong key
+--echo #####################################################
+--connection server_2
+
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+--sync_with_master
+
+--sorted_result
+SHOW TABLES;
+
+--enable_parsing
+--echo # WARNING: Part of the test was disabled due to MDEV-11323
+
+--echo #####################################################
+--echo # Test 2: check that replication works if it starts
+--echo # from a good binary log
+--echo #####################################################
+--connection server_2
+
+--disable_connect_log
+--source include/stop_slave.inc
+RESET SLAVE ALL;
+DROP DATABASE test;
+CREATE DATABASE test;
+USE test;
+--replace_result $SERVER_MYPORT_1 <MASTER_PORT>
+eval CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=$SERVER_MYPORT_1, MASTER_USER='root', MASTER_LOG_FILE='$good_master_binlog';
+--source include/start_slave.inc
+--enable_connect_log
+--sync_with_master
+
+--sorted_result
+SHOW TABLES;
+
+--echo #####################################################
+--echo # Test 3: check that replication works if we purge
+--echo # master logs up to the good one
+--echo #####################################################
+--connection server_2
+
+--connection server_1
+eval PURGE BINARY LOGS TO '$good_master_binlog';
+
+--connection server_2
+--disable_connect_log
+--source include/stop_slave.inc
+RESET SLAVE ALL;
+DROP DATABASE test;
+CREATE DATABASE test;
+USE test;
+--replace_result $SERVER_MYPORT_1 <MASTER_PORT>
+eval CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=$SERVER_MYPORT_1, MASTER_USER='root';
+--source include/start_slave.inc
+--enable_connect_log
+--sync_with_master
+
+--sorted_result
+SHOW TABLES;
+
+--echo ##########
+--echo # Cleanup
+--echo ##########
+
+--connection server_1
+
+DROP TABLE table1_to_encrypt, table2_to_encrypt, table3_to_encrypt;
+
+--save_master_pos
+
+--connection server_2
+--sync_with_master
+
+--disable_connect_log
+--source include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/encrypted_master_switch_to_unencrypted.test b/mysql-test/suite/binlog_encryption/encrypted_master_switch_to_unencrypted.test
new file mode 100644
index 0000000..ec8be86
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/encrypted_master_switch_to_unencrypted.test
@@ -0,0 +1,137 @@
+#
+# TODO: write here what the test checks after MDEV-11288 is fixed
+#
+# The test starts with unencrypted master.
+# It stops replication, generates a few statement and row events
+# on the master, then restarts the server with encrypted binlog,
+# generates some more events and restarts it back without encryption
+# (no encryption plugin).
+# Then it resumes replication and checks what happens when the server
+# tries to feed the binary logs (including the encrypted ones)
+# to the slave.
+#
+
+--source include/have_binlog_format_mixed.inc
+
+--echo #################
+--echo # Initialization
+--echo #################
+
+--let $rpl_topology= 1->2
+--source include/rpl_init.inc
+
+--enable_connect_log
+
+# We stop replication because we want it to happen after the switch
+
+--connection server_2
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+
+--echo #####################################################
+--echo # Part 1: unencrypted master
+--echo #####################################################
+
+--connection server_1
+
+CREATE TABLE table1_no_encryption (
+ pk INT AUTO_INCREMENT PRIMARY KEY,
+ ts TIMESTAMP NULL,
+ b BLOB
+) ENGINE=MyISAM;
+
+INSERT INTO table1_no_encryption VALUES (NULL,NOW(),'data_no_encryption');
+INSERT INTO table1_no_encryption SELECT NULL,NOW(),b FROM table1_no_encryption;
+FLUSH BINARY LOGS;
+SET binlog_format=ROW;
+INSERT INTO table1_no_encryption SELECT NULL,NOW(),b FROM table1_no_encryption;
+INSERT INTO table1_no_encryption SELECT NULL,NOW(),b FROM table1_no_encryption;
+
+# Make sure that binary logs are not encrypted
+
+--let search_files=master-bin.0*
+--let search_pattern= table1_no_encryption
+--let search_result= 1
+--source grep_binlog.inc
+
+--echo #####################################################
+--echo # Part 2: restart master, now with binlog encryption
+--echo #####################################################
+
+--let $rpl_server_parameters= --encrypt-binlog=1 --plugin-load-add=$FILE_KEY_MANAGEMENT_SO --file-key-management --loose-file-key-management-filename=$MYSQL_TEST_DIR/std_data/keys.txt
+
+--let $rpl_server_number= 1
+--source restart_server.inc
+
+CREATE TABLE table2_to_encrypt (
+ pk INT AUTO_INCREMENT PRIMARY KEY,
+ ts TIMESTAMP NULL,
+ b BLOB
+) ENGINE=MyISAM;
+
+INSERT INTO table2_to_encrypt VALUES (NULL,NOW(),'data_to_encrypt');
+INSERT INTO table2_to_encrypt SELECT NULL,NOW(),b FROM table2_to_encrypt;
+FLUSH BINARY LOGS;
+SET binlog_format=ROW;
+INSERT INTO table2_to_encrypt SELECT NULL,NOW(),b FROM table2_to_encrypt;
+INSERT INTO table2_to_encrypt SELECT NULL,NOW(),b FROM table2_to_encrypt;
+
+# Make sure that binary logs are encrypted
+
+--let search_files=master-bin.0*
+--let search_pattern= table2_to_encrypt
+--let search_result= 0
+--source grep_binlog.inc
+
+--echo #####################################################
+--echo # Part 3: restart master again without encryption
+--echo #####################################################
+
+--let $rpl_server_parameters= --encrypt-binlog=0
+--let $rpl_server_number= 1
+--source restart_server.inc
+
+CREATE TABLE table3_no_encryption (
+ pk INT AUTO_INCREMENT PRIMARY KEY,
+ ts TIMESTAMP NULL,
+ b BLOB
+) ENGINE=MyISAM;
+
+INSERT INTO table3_no_encryption VALUES (NULL,NOW(),'data_no_encryption');
+INSERT INTO table3_no_encryption SELECT NULL,NOW(),b FROM table3_no_encryption;
+INSERT INTO table3_no_encryption SELECT NULL,NOW(),b FROM table3_no_encryption;
+
+--save_master_pos
+
+--echo #####################################################
+--echo # Check: resume replication and check how it goes
+--echo #####################################################
+--connection server_2
+
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+--sync_with_master
+
+--sorted_result
+SHOW TABLES;
+
+--echo ##########
+--echo # Cleanup
+--echo ##########
+
+--connection server_1
+
+SELECT COUNT(*) FROM table1_no_encryption;
+SELECT COUNT(*) FROM table2_to_encrypt;
+SELECT COUNT(*) FROM table3_no_encryption;
+DROP TABLE table1_no_encryption, table2_to_encrypt, table3_no_encryption;
+
+--save_master_pos
+
+--connection server_2
+--sync_with_master
+
+--disable_connect_log
+--source include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/encrypted_slave.cnf b/mysql-test/suite/binlog_encryption/encrypted_slave.cnf
new file mode 100644
index 0000000..fac94db
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/encrypted_slave.cnf
@@ -0,0 +1,12 @@
+!include my.cnf
+
+[mysqld.1]
+encrypt-binlog=0
+
+[mysqld.2]
+#log-slave-updates
+encrypt-binlog
+plugin-load-add= @ENV.FILE_KEY_MANAGEMENT_SO
+file-key-management
+loose-file-key-management-filename= @ENV.MYSQL_TEST_DIR/std_data/keys.txt
+binlog_checksum=NONE
diff --git a/mysql-test/suite/binlog_encryption/encrypted_slave.result b/mysql-test/suite/binlog_encryption/encrypted_slave.result
new file mode 100644
index 0000000..b02904e
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/encrypted_slave.result
@@ -0,0 +1,247 @@
+#################
+# Initialization
+#################
+include/rpl_init.inc [topology=1->2]
+connection server_2;
+include/stop_slave_sql.inc
+#################
+# Test flow
+#################
+connection server_1;
+call mtr.add_suppression("Unsafe statement written to the binary log using statement format since BINLOG_FORMAT = STATEMENT");
+CREATE DATABASE database_name_to_encrypt;
+USE database_name_to_encrypt;
+CREATE USER user_name_to_encrypt;
+GRANT ALL ON database_name_to_encrypt.* TO user_name_to_encrypt;
+SET PASSWORD FOR user_name_to_encrypt = PASSWORD('password_to_encrypt');
+CREATE TABLE innodb_table_name_to_encrypt (
+int_column_name_to_encrypt INT AUTO_INCREMENT PRIMARY KEY,
+timestamp_column_name_to_encrypt TIMESTAMP(6) NULL,
+blob_column_name_to_encrypt BLOB,
+virt_column_name_to_encrypt INT AS (int_column_name_to_encrypt % 10) VIRTUAL,
+pers_column_name_to_encrypt INT AS (int_column_name_to_encrypt) PERSISTENT,
+INDEX `index_name_to_encrypt`(`timestamp_column_name_to_encrypt`)
+) ENGINE=InnoDB
+PARTITION BY RANGE (int_column_name_to_encrypt)
+SUBPARTITION BY KEY (int_column_name_to_encrypt)
+SUBPARTITIONS 2 (
+PARTITION partition0_name_to_encrypt VALUES LESS THAN (100),
+PARTITION partition1_name_to_encrypt VALUES LESS THAN (MAXVALUE)
+)
+;
+CREATE TABLE myisam_table_name_to_encrypt (
+int_column_name_to_encrypt INT AUTO_INCREMENT PRIMARY KEY,
+char_column_name_to_encrypt VARCHAR(255),
+datetime_column_name_to_encrypt DATETIME,
+text_column_name_to_encrypt TEXT
+) ENGINE=MyISAM;
+CREATE TABLE aria_table_name_to_encrypt (
+int_column_name_to_encrypt INT AUTO_INCREMENT PRIMARY KEY,
+varchar_column_name_to_encrypt VARCHAR(1024),
+enum_column_name_to_encrypt ENUM(
+'enum_value1_to_encrypt',
+'enum_value2_to_encrypt'
+ ),
+timestamp_column_name_to_encrypt TIMESTAMP(6) NULL,
+blob_column_name_to_encrypt BLOB
+) ENGINE=Aria;
+CREATE TRIGGER trigger_name_to_encrypt
+AFTER INSERT ON myisam_table_name_to_encrypt FOR EACH ROW
+INSERT INTO aria_table_name_to_encrypt (varchar_column_name_to_encrypt)
+VALUES (NEW.char_column_name_to_encrypt);
+CREATE DEFINER=user_name_to_encrypt VIEW view_name_to_encrypt
+AS SELECT * FROM innodb_table_name_to_encrypt;
+CREATE FUNCTION func_name_to_encrypt (func_parameter_to_encrypt INT)
+RETURNS VARCHAR(64)
+RETURN 'func_result_to_encrypt';
+CREATE PROCEDURE proc_name_to_encrypt (
+IN proc_in_parameter_to_encrypt CHAR(32),
+OUT proc_out_parameter_to_encrypt INT
+)
+BEGIN
+DECLARE procvar_name_to_encrypt CHAR(64) DEFAULT 'procvar_val_to_encrypt';
+DECLARE cursor_name_to_encrypt CURSOR FOR
+SELECT virt_column_name_to_encrypt FROM innodb_table_name_to_encrypt;
+DECLARE EXIT HANDLER FOR NOT FOUND
+BEGIN
+SET @stmt_var_to_encrypt = CONCAT(
+"SELECT
+  IF (RAND()>0.5,'enum_value2_to_encrypt','enum_value1_to_encrypt')
+  FROM innodb_table_name_to_encrypt
+  INTO OUTFILE '", proc_in_parameter_to_encrypt, "'");
+PREPARE stmt_to_encrypt FROM @stmt_var_to_encrypt;
+EXECUTE stmt_to_encrypt;
+DEALLOCATE PREPARE stmt_to_encrypt;
+END;
+OPEN cursor_name_to_encrypt;
+proc_label_to_encrypt: LOOP
+FETCH cursor_name_to_encrypt INTO procvar_name_to_encrypt;
+END LOOP;
+CLOSE cursor_name_to_encrypt;
+END $$
+CREATE SERVER server_name_to_encrypt
+FOREIGN DATA WRAPPER mysql
+OPTIONS (HOST 'host_name_to_encrypt');
+connect con1,localhost,user_name_to_encrypt,password_to_encrypt,database_name_to_encrypt;
+CREATE TEMPORARY TABLE tmp_table_name_to_encrypt (
+float_column_name_to_encrypt FLOAT,
+binary_column_name_to_encrypt BINARY(64)
+);
+disconnect con1;
+connection server_1;
+CREATE INDEX index_name_to_encrypt
+ON myisam_table_name_to_encrypt (datetime_column_name_to_encrypt);
+ALTER DATABASE database_name_to_encrypt CHARACTER SET utf8;
+ALTER TABLE innodb_table_name_to_encrypt
+MODIFY timestamp_column_name_to_encrypt TIMESTAMP NOT NULL
+DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
+;
+ALTER ALGORITHM=MERGE VIEW view_name_to_encrypt
+AS SELECT * FROM innodb_table_name_to_encrypt;
+RENAME TABLE innodb_table_name_to_encrypt TO new_table_name_to_encrypt;
+ALTER TABLE new_table_name_to_encrypt RENAME TO innodb_table_name_to_encrypt;
+set @user_var1_to_encrypt= 'dyncol1_val_to_encrypt';
+set @user_var2_to_encrypt= 'dyncol2_name_to_encrypt';
+INSERT INTO view_name_to_encrypt VALUES
+(1, NOW(6), COLUMN_CREATE('dyncol1_name_to_encrypt',@user_var1_to_encrypt), NULL, NULL),
+(2, NOW(6), COLUMN_CREATE(@user_var2_to_encrypt,'dyncol2_val_to_encrypt'), NULL, NULL)
+;
+BEGIN NOT ATOMIC
+DECLARE counter_name_to_encrypt INT DEFAULT 0;
+START TRANSACTION;
+WHILE counter_name_to_encrypt<12 DO
+INSERT INTO innodb_table_name_to_encrypt
+SELECT NULL, NOW(6), blob_column_name_to_encrypt, NULL, NULL
+FROM innodb_table_name_to_encrypt
+ORDER BY int_column_name_to_encrypt;
+SET counter_name_to_encrypt = counter_name_to_encrypt+1;
+END WHILE;
+COMMIT;
+END
+$$
+INSERT INTO myisam_table_name_to_encrypt
+SELECT NULL, 'char_literal_to_encrypt', NULL, 'text_to_encrypt';
+INSERT INTO myisam_table_name_to_encrypt (char_column_name_to_encrypt)
+SELECT char_column_name_to_encrypt FROM myisam_table_name_to_encrypt;
+INSERT INTO myisam_table_name_to_encrypt (char_column_name_to_encrypt)
+SELECT char_column_name_to_encrypt FROM myisam_table_name_to_encrypt;
+INSERT INTO myisam_table_name_to_encrypt (char_column_name_to_encrypt)
+SELECT char_column_name_to_encrypt FROM myisam_table_name_to_encrypt;
+CALL proc_name_to_encrypt('file_name_to_encrypt',@useless_var_to_encrypt);
+TRUNCATE TABLE aria_table_name_to_encrypt;
+LOAD DATA INFILE 'file_name_to_encrypt' INTO TABLE aria_table_name_to_encrypt
+(enum_column_name_to_encrypt);
+LOAD DATA LOCAL INFILE '<DATADIR>/database_name_to_encrypt/file_name_to_encrypt'
+INTO TABLE aria_table_name_to_encrypt (enum_column_name_to_encrypt);
+UPDATE view_name_to_encrypt SET blob_column_name_to_encrypt =
+COLUMN_CREATE('dyncol1_name_to_encrypt',func_name_to_encrypt(0))
+;
+DELETE FROM aria_table_name_to_encrypt ORDER BY int_column_name_to_encrypt LIMIT 10;
+ANALYZE TABLE myisam_table_name_to_encrypt;
+CHECK TABLE aria_table_name_to_encrypt;
+CHECKSUM TABLE innodb_table_name_to_encrypt, myisam_table_name_to_encrypt;
+RENAME USER user_name_to_encrypt to new_user_name_to_encrypt;
+REVOKE ALL PRIVILEGES, GRANT OPTION FROM new_user_name_to_encrypt;
+DROP DATABASE database_name_to_encrypt;
+DROP USER new_user_name_to_encrypt;
+DROP SERVER server_name_to_encrypt;
+#################
+# Master binlog checks
+#################
+#
+# The next step will cause a perl error if the search does not
+# meet the expectations.
+# Pattern to look for: _to_encrypt
+# Files to search: master-bin.0*
+# Expected result: 1
+# (0 means the pattern should not be found, 1 means it should be found)
+#
+Found occurrences of '_to_encrypt' in master-bin.0*
+#
+# The next step will cause a perl error if the search does not
+# meet the expectations.
+# Pattern to look for: COMMIT
+# Files to search: master-bin.0*
+# Expected result: 1
+# (0 means the pattern should not be found, 1 means it should be found)
+#
+Found occurrences of 'COMMIT' in master-bin.0*
+#
+# The next step will cause a perl error if the search does not
+# meet the expectations.
+# Pattern to look for: TIMESTAMP
+# Files to search: master-bin.0*
+# Expected result: 1
+# (0 means the pattern should not be found, 1 means it should be found)
+#
+Found occurrences of 'TIMESTAMP' in master-bin.0*
+include/save_master_pos.inc
+#################
+# Relay log checks
+#################
+connection server_2;
+include/sync_io_with_master.inc
+#
+# The next step will cause a perl error if the search does not
+# meet the expectations.
+# Pattern to look for: _to_encrypt
+# Files to search: slave-relay-bin.0*
+# Expected result: 0
+# (0 means the pattern should not be found, 1 means it should be found)
+#
+Did not find any occurrences of '_to_encrypt' in slave-relay-bin.0*
+#
+# The next step will cause a perl error if the search does not
+# meet the expectations.
+# Pattern to look for: COMMIT
+# Files to search: slave-relay-bin.0*
+# Expected result: 0
+# (0 means the pattern should not be found, 1 means it should be found)
+#
+Did not find any occurrences of 'COMMIT' in slave-relay-bin.0*
+#
+# The next step will cause a perl error if the search does not
+# meet the expectations.
+# Pattern to look for: TIMESTAMP
+# Files to search: slave-relay-bin.0*
+# Expected result: 0
+# (0 means the pattern should not be found, 1 means it should be found)
+#
+Did not find any occurrences of 'TIMESTAMP' in slave-relay-bin.0*
+#################
+# Slave binlog checks
+#################
+include/start_slave.inc
+include/sync_slave_sql_with_io.inc
+include/sync_io_with_master.inc
+#
+# The next step will cause a perl error if the search does not
+# meet the expectations.
+# Pattern to look for: _to_encrypt
+# Files to search: slave-bin.0*
+# Expected result: 0
+# (0 means the pattern should not be found, 1 means it should be found)
+#
+Did not find any occurrences of '_to_encrypt' in slave-bin.0*
+#
+# The next step will cause a perl error if the search does not
+# meet the expectations.
+# Pattern to look for: COMMIT
+# Files to search: slave-bin.0*
+# Expected result: 0
+# (0 means the pattern should not be found, 1 means it should be found)
+#
+Did not find any occurrences of 'COMMIT' in slave-bin.0*
+#
+# The next step will cause a perl error if the search does not
+# meet the expectations.
+# Pattern to look for: TIMESTAMP
+# Files to search: slave-bin.0*
+# Expected result: 0
+# (0 means the pattern should not be found, 1 means it should be found)
+#
+Did not find any occurrences of 'TIMESTAMP' in slave-bin.0*
+##########
+# Cleanup
+##########
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/encrypted_slave.test b/mysql-test/suite/binlog_encryption/encrypted_slave.test
new file mode 100644
index 0000000..b13d6d8
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/encrypted_slave.test
@@ -0,0 +1,122 @@
+#
+# The test checks that basic DDL and DML events are encrypted
+# in the relay and binary logs on the slave.
+# The test is to be run with all binlog formats
+# (combinations for rpl_init.inc take care of that).
+#
+# The test runs with a non-encrypted master and an encrypted slave.
+# It generates a sequence of events on the master and checks that
+# the relay logs and binary logs on the slave are encrypted.
+#
+
+--source encryption_algorithms.inc
+--source include/have_innodb.inc
+
+--echo #################
+--echo # Initialization
+--echo #################
+
+--let $rpl_topology= 1->2
+--source include/rpl_init.inc
+
+--enable_connect_log
+--connection server_2
+
+# We stop SQL thread because we want to have
+# all relay logs at the end of the test flow
+
+--disable_connect_log
+--source include/stop_slave_sql.inc
+--enable_connect_log
+
+--echo #################
+--echo # Test flow
+--echo #################
+
+--connection server_1
+--source testdata.inc
+
+--echo #################
+--echo # Master binlog checks
+--echo #################
+
+--let search_files=master-bin.0*
+--let search_pattern= _to_encrypt
+--let search_result= 1
+--source grep_binlog.inc
+
+--let search_files=master-bin.0*
+--let search_pattern= COMMIT
+--let search_result= 1
+--source grep_binlog.inc
+
+--let search_files=master-bin.0*
+--let search_pattern= TIMESTAMP
+--let search_result= 1
+--source grep_binlog.inc
+
+--disable_connect_log
+--source include/save_master_pos.inc
+--enable_connect_log
+
+--echo #################
+--echo # Relay log checks
+--echo #################
+
+--connection server_2
+--disable_connect_log
+--source include/sync_io_with_master.inc
+--enable_connect_log
+
+--let search_files=slave-relay-bin.0*
+--let search_pattern= _to_encrypt
+--let search_result= 0
+--source grep_binlog.inc
+
+--let search_files=slave-relay-bin.0*
+--let search_pattern= COMMIT
+--let search_result= 0
+--source grep_binlog.inc
+
+--let search_files=slave-relay-bin.0*
+--let search_pattern= TIMESTAMP
+--let search_result= 0
+--source grep_binlog.inc
+
+--echo #################
+--echo # Slave binlog checks
+--echo #################
+
+# Re-enable SQL thread, let it catch up with IO thread
+# and check slave binary logs
+
+--disable_connect_log
+--source include/start_slave.inc
+--source include/sync_slave_sql_with_io.inc
+--enable_connect_log
+
+--disable_connect_log
+--source include/sync_io_with_master.inc
+--enable_connect_log
+
+--let search_files=slave-bin.0*
+--let search_pattern= _to_encrypt
+--let search_result= 0
+--source grep_binlog.inc
+
+--let search_files=slave-bin.0*
+--let search_pattern= COMMIT
+--let search_result= 0
+--source grep_binlog.inc
+
+--let search_files=slave-bin.0*
+--let search_pattern= TIMESTAMP
+--let search_result= 0
+--source grep_binlog.inc
+
+--echo ##########
+--echo # Cleanup
+--echo ##########
+
+--disable_connect_log
+--source include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/encryption_algorithms.combinations b/mysql-test/suite/binlog_encryption/encryption_algorithms.combinations
new file mode 100644
index 0000000..6bda5a6
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/encryption_algorithms.combinations
@@ -0,0 +1,5 @@
+[ctr]
+loose-file-key-management-encryption-algorithm=aes_ctr
+
+[cbc]
+loose-file-key-management-encryption-algorithm=aes_cbc
diff --git a/mysql-test/suite/binlog_encryption/encryption_algorithms.inc b/mysql-test/suite/binlog_encryption/encryption_algorithms.inc
new file mode 100644
index 0000000..ca55962
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/encryption_algorithms.inc
@@ -0,0 +1,2 @@
+# Empty include file just to enable encryption algorithm combinations
+# for those tests which need them
diff --git a/mysql-test/suite/binlog_encryption/encryption_combo.cnf b/mysql-test/suite/binlog_encryption/encryption_combo.cnf
new file mode 100644
index 0000000..bc4ecbc
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/encryption_combo.cnf
@@ -0,0 +1,5 @@
+!include my.cnf
+
+[mysqld.1]
+encrypt-binlog=0
+
diff --git a/mysql-test/suite/binlog_encryption/encryption_combo.result b/mysql-test/suite/binlog_encryption/encryption_combo.result
new file mode 100644
index 0000000..06d7862
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/encryption_combo.result
@@ -0,0 +1,94 @@
+#################
+# Initialization
+#################
+include/rpl_init.inc [topology=1->2]
+connection server_2;
+include/stop_slave.inc
+#####################################################
+# Part 1: unencrypted master
+#####################################################
+connection server_1;
+CREATE TABLE table1_no_encryption (
+pk INT AUTO_INCREMENT PRIMARY KEY,
+ts TIMESTAMP NULL,
+b BLOB
+) ENGINE=MyISAM;
+INSERT INTO table1_no_encryption VALUES (NULL,NOW(),'data_no_encryption');
+INSERT INTO table1_no_encryption SELECT NULL,NOW(),b FROM table1_no_encryption;
+FLUSH BINARY LOGS;
+SET binlog_format=ROW;
+INSERT INTO table1_no_encryption SELECT NULL,NOW(),b FROM table1_no_encryption;
+INSERT INTO table1_no_encryption SELECT NULL,NOW(),b FROM table1_no_encryption;
+#
+# The next step will cause a perl error if the search does not
+# meet the expectations.
+# Pattern to look for: table1_no_encryption
+# Files to search: master-bin.0*
+# Expected result: 1
+# (0 means the pattern should not be found, 1 means it should be found)
+#
+Found occurrences of 'table1_no_encryption' in master-bin.0*
+#####################################################
+# Part 2: restart master, now with binlog encryption
+#####################################################
+connection default;
+connection server_1;
+CREATE TABLE table2_to_encrypt (
+pk INT AUTO_INCREMENT PRIMARY KEY,
+ts TIMESTAMP NULL,
+b BLOB
+) ENGINE=MyISAM;
+INSERT INTO table2_to_encrypt VALUES (NULL,NOW(),'data_to_encrypt');
+INSERT INTO table2_to_encrypt SELECT NULL,NOW(),b FROM table2_to_encrypt;
+FLUSH BINARY LOGS;
+SET binlog_format=ROW;
+INSERT INTO table2_to_encrypt SELECT NULL,NOW(),b FROM table2_to_encrypt;
+INSERT INTO table2_to_encrypt SELECT NULL,NOW(),b FROM table2_to_encrypt;
+#
+# The next step will cause a perl error if the search does not
+# meet the expectations.
+# Pattern to look for: table2_to_encrypt
+# Files to search: master-bin.0*
+# Expected result: 0
+# (0 means the pattern should not be found, 1 means it should be found)
+#
+Did not find any occurrences of 'table2_to_encrypt' in master-bin.0*
+#####################################################
+# Part 3: restart master again without encryption
+#####################################################
+connection default;
+connection server_1;
+CREATE TABLE table3_no_encryption (
+pk INT AUTO_INCREMENT PRIMARY KEY,
+ts TIMESTAMP NULL,
+b BLOB
+) ENGINE=MyISAM;
+INSERT INTO table3_no_encryption VALUES (NULL,NOW(),'data_no_encryption');
+INSERT INTO table3_no_encryption SELECT NULL,NOW(),b FROM table3_no_encryption;
+INSERT INTO table3_no_encryption SELECT NULL,NOW(),b FROM table3_no_encryption;
+#####################################################
+# Check: resume replication and check that it works
+#####################################################
+connection server_2;
+include/start_slave.inc
+SHOW TABLES;
+Tables_in_test
+table1_no_encryption
+table2_to_encrypt
+table3_no_encryption
+##########
+# Cleanup
+##########
+connection server_1;
+SELECT COUNT(*) FROM table1_no_encryption;
+COUNT(*)
+8
+SELECT COUNT(*) FROM table2_to_encrypt;
+COUNT(*)
+8
+SELECT COUNT(*) FROM table3_no_encryption;
+COUNT(*)
+4
+DROP TABLE table1_no_encryption, table2_to_encrypt, table3_no_encryption;
+connection server_2;
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/encryption_combo.test b/mysql-test/suite/binlog_encryption/encryption_combo.test
new file mode 100644
index 0000000..fe733fd
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/encryption_combo.test
@@ -0,0 +1,136 @@
+#
+# The test checks that a master with decryption capabilities can switch
+# between encrypted and unencrypted logs (with a server restart),
+# and can feed the mix of encrypted/unencrypted logs to a slave.
+#
+# The test starts with an unencrypted master.
+# It stops replication, generates a few statement and row events
+# on the master, then restarts the server with encrypted binlog,
+# generates some more events and restarts it back with unencrypted binlog.
+# Then it resumes replication and checks that all events
+# are replicated successfully.
+#
+
+--source include/have_binlog_format_mixed.inc
+
+--echo #################
+--echo # Initialization
+--echo #################
+
+--let $rpl_topology= 1->2
+--source include/rpl_init.inc
+
+--enable_connect_log
+
+# We stop replication because we want it to happen after the switch
+
+--connection server_2
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+
+--echo #####################################################
+--echo # Part 1: unencrypted master
+--echo #####################################################
+
+--connection server_1
+
+CREATE TABLE table1_no_encryption (
+ pk INT AUTO_INCREMENT PRIMARY KEY,
+ ts TIMESTAMP NULL,
+ b BLOB
+) ENGINE=MyISAM;
+
+INSERT INTO table1_no_encryption VALUES (NULL,NOW(),'data_no_encryption');
+INSERT INTO table1_no_encryption SELECT NULL,NOW(),b FROM table1_no_encryption;
+FLUSH BINARY LOGS;
+SET binlog_format=ROW;
+INSERT INTO table1_no_encryption SELECT NULL,NOW(),b FROM table1_no_encryption;
+INSERT INTO table1_no_encryption SELECT NULL,NOW(),b FROM table1_no_encryption;
+
+# Make sure that binary logs are not encrypted
+
+--let search_files=master-bin.0*
+--let search_pattern= table1_no_encryption
+--let search_result= 1
+--source grep_binlog.inc
+
+--echo #####################################################
+--echo # Part 2: restart master, now with binlog encryption
+--echo #####################################################
+
+--let $rpl_server_parameters= --encrypt-binlog=1
+--let $rpl_server_number= 1
+--source restart_server.inc
+
+CREATE TABLE table2_to_encrypt (
+ pk INT AUTO_INCREMENT PRIMARY KEY,
+ ts TIMESTAMP NULL,
+ b BLOB
+) ENGINE=MyISAM;
+
+INSERT INTO table2_to_encrypt VALUES (NULL,NOW(),'data_to_encrypt');
+INSERT INTO table2_to_encrypt SELECT NULL,NOW(),b FROM table2_to_encrypt;
+FLUSH BINARY LOGS;
+SET binlog_format=ROW;
+INSERT INTO table2_to_encrypt SELECT NULL,NOW(),b FROM table2_to_encrypt;
+INSERT INTO table2_to_encrypt SELECT NULL,NOW(),b FROM table2_to_encrypt;
+
+# Make sure that binary logs are encrypted
+
+--let search_files=master-bin.0*
+--let search_pattern= table2_to_encrypt
+--let search_result= 0
+--source grep_binlog.inc
+
+--echo #####################################################
+--echo # Part 3: restart master again without encryption
+--echo #####################################################
+
+--let $rpl_server_parameters= --encrypt-binlog=0
+--let $rpl_server_number= 1
+--source restart_server.inc
+
+CREATE TABLE table3_no_encryption (
+ pk INT AUTO_INCREMENT PRIMARY KEY,
+ ts TIMESTAMP NULL,
+ b BLOB
+) ENGINE=MyISAM;
+
+INSERT INTO table3_no_encryption VALUES (NULL,NOW(),'data_no_encryption');
+INSERT INTO table3_no_encryption SELECT NULL,NOW(),b FROM table3_no_encryption;
+INSERT INTO table3_no_encryption SELECT NULL,NOW(),b FROM table3_no_encryption;
+
+--save_master_pos
+
+--echo #####################################################
+--echo # Check: resume replication and check that it works
+--echo #####################################################
+--connection server_2
+
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+--sync_with_master
+
+--sorted_result
+SHOW TABLES;
+
+--echo ##########
+--echo # Cleanup
+--echo ##########
+
+--connection server_1
+
+SELECT COUNT(*) FROM table1_no_encryption;
+SELECT COUNT(*) FROM table2_to_encrypt;
+SELECT COUNT(*) FROM table3_no_encryption;
+DROP TABLE table1_no_encryption, table2_to_encrypt, table3_no_encryption;
+
+--save_master_pos
+
+--connection server_2
+--sync_with_master
+
+--disable_connect_log
+--source include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/grep_binlog.inc b/mysql-test/suite/binlog_encryption/grep_binlog.inc
new file mode 100644
index 0000000..378ae56
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/grep_binlog.inc
@@ -0,0 +1,54 @@
+#
+# This include file checks whether the given pattern is present
+# in the given binary log(s)
+# The search logic returns 0 if no occurrences are found,
+# and 1 otherwise.
+# The result can be inverted depending on search_result configured
+# in the outer test.
+#
+# Usage:
+# --let search_files= filename | filename_prefix*
+# --let search_pattern= <search pattern> (default '_to_encrypt')
+# --let search_result= 0|1 (default 0, which means "should not be found")
+# --source grep_binlog.inc
+
+if (! $search_result)
+{
+ --let search_result= 0
+}
+if (! $search_pattern)
+{
+ --let search_pattern= _to_encrypt
+}
+--let datadir= `SELECT @@datadir`
+
+--echo #
+--echo # The next step will cause a perl error if the search does not
+--echo # meet the expectations.
+--echo # Pattern to look for: $search_pattern
+--echo # Files to search: $search_files
+--echo # Expected result: $search_result
+--echo # (0 means the pattern should not be found, 1 means it should be found)
+--echo #
+
+--error $search_result
+perl;
+ $|= 1;
+ use strict;
+ use warnings;
+ my @content= ();
+ my @fnames= glob("$ENV{datadir}/$ENV{search_files}");
+ if (not scalar(@fnames)) {
    die "File pattern $ENV{search_files} must be wrong, no files found\n";
+ }
+ foreach my $f (@fnames) {
+ open(FILE, "<", $f) or die "Could not open file $f: $!\n";
+ @content= (@content, grep(/$ENV{search_pattern}/, <FILE>));
+ close FILE;
+ }
  print( (scalar(@content) ? "Found" : "Did not find any"). " occurrences of '$ENV{search_pattern}' in $ENV{search_files}\n");
+ exit (scalar(@content) ? 1 : 0);
+EOF
+
+# Unset the mandatory option
+--let search_files=
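For reference, the search performed by grep_binlog.inc can be sketched outside mtr. This is an illustrative Python sketch (the helper name `grep_binlogs` and its signature are invented here, not part of the patch); it mirrors the perl logic above: glob the binlog files, grep each one for the pattern, fail if the glob matches nothing, and report whether any occurrence was found.

```python
import glob
import re

def grep_binlogs(datadir, search_files, search_pattern):
    """Return True if search_pattern occurs in any file matching
    datadir/search_files, mirroring the perl block in grep_binlog.inc."""
    fnames = glob.glob(datadir + "/" + search_files)
    if not fnames:
        # Same sanity check as the perl snippet: a wrong glob is an error,
        # not a "pattern not found" result.
        raise RuntimeError(
            "File pattern %s must be wrong, no files found" % search_files)
    pattern = re.compile(search_pattern.encode())
    # Binlogs are binary (and unreadable once encrypted), so search raw
    # bytes; an encrypted log should yield no plaintext matches.
    found = False
    for fname in fnames:
        with open(fname, "rb") as f:
            if pattern.search(f.read()):
                found = True
    return found
```

In the test files above, `search_result= 1` corresponds to expecting `True` (plaintext visible, log not encrypted) and `search_result= 0` to expecting `False` (log encrypted).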
diff --git a/mysql-test/suite/binlog_encryption/master_switch_to_unencrypted.cnf b/mysql-test/suite/binlog_encryption/master_switch_to_unencrypted.cnf
new file mode 100644
index 0000000..1cbb6cf
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/master_switch_to_unencrypted.cnf
@@ -0,0 +1,4 @@
+!include my.cnf
+
+[mysqld.1]
+encrypt-binlog=0
diff --git a/mysql-test/suite/binlog_encryption/multisource.cnf b/mysql-test/suite/binlog_encryption/multisource.cnf
new file mode 100644
index 0000000..52db51d
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/multisource.cnf
@@ -0,0 +1,17 @@
+!include my.cnf
+
+[mysqld.1]
+log-bin=master-bin
+
+[mysqld.2]
+log-bin=master-bin
+
+[mysqld.3]
+innodb
+log-bin=slave-bin
+server-id=3
+log-warnings=2
+
+[ENV]
+SERVER_MYPORT_3= @mysqld.3.port
+SERVER_MYSOCK_3= @mysqld.3.socket
diff --git a/mysql-test/suite/binlog_encryption/multisource.result b/mysql-test/suite/binlog_encryption/multisource.result
new file mode 100644
index 0000000..d99a377
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/multisource.result
@@ -0,0 +1,228 @@
+connect slave,127.0.0.1,root,,,$SERVER_MYPORT_3;
+change master 'abc' to relay_log_file='';
+ERROR HY000: Failed initializing relay log position: Could not find target log during relay log initialization
+change master 'abc2' to master_host='';
+ERROR HY000: Incorrect arguments to MASTER_HOST
+change master 'master1' to
+master_port=MYPORT_1,
+master_host='127.0.0.1',
+master_user='root';
+start slave 'master1';
+set default_master_connection = 'master1';
+include/wait_for_slave_to_start.inc
+connect master1,127.0.0.1,root,,,$SERVER_MYPORT_1;
+connection slave;
+#
+# Checking SHOW SLAVE 'master1' STATUS
+#
+Master_Port = 'MYPORT_1'
+Relay_Log_File = 'mysqld-relay-bin-master1.000002'
+Slave_IO_Running = 'Yes'
+Slave_SQL_Running = 'Yes'
+Last_Errno = '0'
+Last_SQL_Errno = '0'
+#
+# Checking SHOW SLAVE STATUS
+#
+Master_Port = 'MYPORT_1'
+Relay_Log_File = 'mysqld-relay-bin-master1.000002'
+Slave_IO_Running = 'Yes'
+Slave_SQL_Running = 'Yes'
+Last_Errno = '0'
+Last_SQL_Errno = '0'
+#
+# Checking SHOW ALL SLAVES STATUS
+#
+Connection_name = 'master1'
+Master_Port = 'MYPORT_1'
+Relay_Log_File = 'mysqld-relay-bin-master1.000002'
+Slave_IO_Running = 'Yes'
+Slave_SQL_Running = 'Yes'
+Last_Errno = '0'
+Last_SQL_Errno = '0'
+Slave_heartbeat_period = '60.000'
+#
+connection master1;
+drop database if exists db1;
+create database db1;
+use db1;
+create table t1 (i int auto_increment, f1 varchar(16), primary key pk (i,f1)) engine=MyISAM;
+insert into t1 (f1) values ('one'),('two');
+connection slave;
+select * from db1.t1;
+i f1
+1 one
+2 two
+# List of relay log files in the datadir
+mysqld-relay-bin-master1.000001
+mysqld-relay-bin-master1.000002
+mysqld-relay-bin-master1.index
+include/show_events.inc
+Log_name Pos Event_type Server_id End_log_pos Info
+mysqld-relay-bin-master1.000001 # Format_desc # # SERVER_VERSION, BINLOG_VERSION
+mysqld-relay-bin-master1.000001 # Rotate # # mysqld-relay-bin-master1.000002;pos=4
+include/show_events.inc
+Log_name Pos Event_type Server_id End_log_pos Info
+mysqld-relay-bin-master1.000002 # Format_desc # # SERVER_VERSION, BINLOG_VERSION
+mysqld-relay-bin-master1.000002 # Rotate # # master-bin.000001;pos=POS
+mysqld-relay-bin-master1.000002 # Format_desc # # SERVER_VERSION, BINLOG_VERSION
+mysqld-relay-bin-master1.000002 # Gtid_list # # []
+mysqld-relay-bin-master1.000002 # Binlog_checkpoint # # master-bin.000001
+mysqld-relay-bin-master1.000002 # Gtid # # GTID #-#-#
+mysqld-relay-bin-master1.000002 # Query # # drop database if exists db1
+mysqld-relay-bin-master1.000002 # Gtid # # GTID #-#-#
+mysqld-relay-bin-master1.000002 # Query # # create database db1
+mysqld-relay-bin-master1.000002 # Gtid # # GTID #-#-#
+mysqld-relay-bin-master1.000002 # Query # # use `db1`; create table t1 (i int auto_increment, f1 varchar(16), primary key pk (i,f1)) engine=MyISAM
+mysqld-relay-bin-master1.000002 # Gtid # # BEGIN GTID #-#-#
+mysqld-relay-bin-master1.000002 # Intvar # # INSERT_ID=1
+mysqld-relay-bin-master1.000002 # Query # # use `db1`; insert into t1 (f1) values ('one'),('two')
+mysqld-relay-bin-master1.000002 # Query # # COMMIT
+change master 'master1' to
+master_port=MYPORT_2,
+master_host='127.0.0.1',
+master_user='root';
+ERROR HY000: This operation cannot be performed as you have a running slave 'master1'; run STOP SLAVE 'master1' first
+change master to
+master_port=MYPORT_2,
+master_host='127.0.0.1',
+master_user='root';
+ERROR HY000: This operation cannot be performed as you have a running slave 'master1'; run STOP SLAVE 'master1' first
+change master 'master2' to
+master_port=MYPORT_1,
+master_host='127.0.0.1',
+master_user='root';
+ERROR HY000: Connection 'master2' conflicts with existing connection 'master1'
+set default_master_connection = '';
+change master to
+master_port=MYPORT_2,
+master_host='127.0.0.1',
+master_user='root';
+start slave;
+include/wait_for_slave_to_start.inc
+#
+# Checking SHOW ALL SLAVES STATUS
+#
+Connection_name = ''
+Connection_name = 'master1'
+Master_Port = 'MYPORT_2'
+Master_Port = 'MYPORT_1'
+Relay_Log_File = 'mysqld-relay-bin.000002'
+Relay_Log_File = 'mysqld-relay-bin-master1.000002'
+Slave_IO_Running = 'Yes'
+Slave_IO_Running = 'Yes'
+Slave_SQL_Running = 'Yes'
+Slave_SQL_Running = 'Yes'
+Last_Errno = '0'
+Last_Errno = '0'
+Last_SQL_Errno = '0'
+Last_SQL_Errno = '0'
+Slave_heartbeat_period = '60.000'
+Slave_heartbeat_period = '60.000'
+#
+connection master1;
+insert into t1 (f1) values ('three');
+connect master2,127.0.0.1,root,,,$SERVER_MYPORT_2;
+drop database if exists db2;
+create database db2;
+use db2;
+create table t1 (pk int auto_increment primary key, f1 int) engine=InnoDB;
+begin;
+insert into t1 (f1) values (1),(2);
+connection slave;
+connection master2;
+connection slave;
+select * from db1.t1;
+i f1
+1 one
+2 two
+3 three
+select * from db2.t1;
+pk f1
+connection master2;
+commit;
+connection slave;
+select * from db2.t1;
+pk f1
+1 1
+2 2
+connection master1;
+flush logs;
+connection slave;
+connection master1;
+purge binary logs to 'master-bin.000002';
+show binary logs;
+Log_name File_size
+master-bin.000002 filesize
+insert into t1 (f1) values ('four');
+create table db1.t3 (f1 int) engine=InnoDB;
+connection slave;
+#
+# Checking SHOW ALL SLAVES STATUS
+#
+Connection_name = ''
+Connection_name = 'master1'
+Master_Port = 'MYPORT_2'
+Master_Port = 'MYPORT_1'
+Relay_Log_File = 'mysqld-relay-bin.000002'
+Relay_Log_File = 'mysqld-relay-bin-master1.000004'
+Slave_IO_Running = 'Yes'
+Slave_IO_Running = 'Yes'
+Slave_SQL_Running = 'Yes'
+Slave_SQL_Running = 'Yes'
+Last_Errno = '0'
+Last_Errno = '0'
+Last_SQL_Errno = '0'
+Last_SQL_Errno = '0'
+Slave_heartbeat_period = '60.000'
+Slave_heartbeat_period = '60.000'
+#
+select * from db1.t1;
+i f1
+1 one
+2 two
+3 three
+4 four
+include/show_events.inc
+Log_name Pos Event_type Server_id End_log_pos Info
+mysqld-relay-bin.000001 # Format_desc # # SERVER_VERSION, BINLOG_VERSION
+mysqld-relay-bin.000001 # Rotate # # mysqld-relay-bin.000002;pos=4
+include/show_events.inc
+Log_name Pos Event_type Server_id End_log_pos Info
+mysqld-relay-bin.000002 # Format_desc # # SERVER_VERSION, BINLOG_VERSION
+mysqld-relay-bin.000002 # Rotate # # master-bin.000001;pos=POS
+mysqld-relay-bin.000002 # Format_desc # # SERVER_VERSION, BINLOG_VERSION
+mysqld-relay-bin.000002 # Gtid_list # # []
+mysqld-relay-bin.000002 # Binlog_checkpoint # # master-bin.000001
+mysqld-relay-bin.000002 # Gtid # # GTID #-#-#
+mysqld-relay-bin.000002 # Query # # drop database if exists db2
+mysqld-relay-bin.000002 # Gtid # # GTID #-#-#
+mysqld-relay-bin.000002 # Query # # create database db2
+mysqld-relay-bin.000002 # Gtid # # GTID #-#-#
+mysqld-relay-bin.000002 # Query # # use `db2`; create table t1 (pk int auto_increment primary key, f1 int) engine=InnoDB
+mysqld-relay-bin.000002 # Gtid # # BEGIN GTID #-#-#
+mysqld-relay-bin.000002 # Intvar # # INSERT_ID=1
+mysqld-relay-bin.000002 # Query # # use `db2`; insert into t1 (f1) values (1),(2)
+mysqld-relay-bin.000002 # Xid # # COMMIT /* XID */
+disconnect slave;
+connect slave,127.0.0.1,root,,,$SERVER_MYPORT_3;
+stop slave io_thread;
+show status like 'Slave_running';
+Variable_name Value
+Slave_running OFF
+set default_master_connection = 'master1';
+show status like 'Slave_running';
+Variable_name Value
+Slave_running ON
+drop database db1;
+drop database db2;
+include/reset_master_slave.inc
+disconnect slave;
+connection master1;
+drop database db1;
+include/reset_master_slave.inc
+disconnect master1;
+connection master2;
+drop database db2;
+include/reset_master_slave.inc
+disconnect master2;
diff --git a/mysql-test/suite/binlog_encryption/multisource.test b/mysql-test/suite/binlog_encryption/multisource.test
new file mode 100644
index 0000000..28ad114
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/multisource.test
@@ -0,0 +1,335 @@
+#
+# The test was taken from the multi_source suite, with some minor changes.
+# The purpose is to check that a slave can replicate from an encrypted
+# and non-encrypted master simultaneously. In the test, server_1 is
+# encrypted, and server_2 is not
+#
+
+#
+# Test basic replication functionality
+# in multi-source setup
+#
+
+--source include/not_embedded.inc
+--source include/binlog_start_pos.inc
+--let $rpl_server_count= 0
+
+--enable_connect_log
+
+--connect (slave,127.0.0.1,root,,,$SERVER_MYPORT_3)
+
+# MDEV-3984: crash/read of freed memory when changing master with named connection
+# This fails after adding the new master 'abc', check we do not free twice.
+--error ER_RELAY_LOG_INIT
+change master 'abc' to relay_log_file='';
+# This fails before adding the new master, check that we do free it.
+--error ER_WRONG_ARGUMENTS
+change master 'abc2' to master_host='';
+
+
+# Start replication from the first master
+
+--replace_result $SERVER_MYPORT_1 MYPORT_1
eval change master 'master1' to
+master_port=$SERVER_MYPORT_1,
+master_host='127.0.0.1',
+master_user='root';
+
+start slave 'master1';
+set default_master_connection = 'master1';
+--disable_connect_log
+--source include/wait_for_slave_to_start.inc
+--enable_connect_log
+
+--connect (master1,127.0.0.1,root,,,$SERVER_MYPORT_1)
+--save_master_pos
+
+--connection slave
+--sync_with_master 0,'master1'
+
+# Here and further: add an extra check on SQL thread status
+# as the normal sync is not always enough
+--disable_connect_log
+--source suite/multi_source/wait_for_sql_thread_read_all.inc
+--enable_connect_log
+
+# each of the 3 commands should produce
+# 'master1' status
+
+let $wait_for_all= 1;
+let $show_statement= SHOW ALL SLAVES STATUS;
+let $field= Slave_IO_State;
+let $condition= = 'Waiting for master to send event';
+--disable_connect_log
+--source include/wait_show_condition.inc
+--enable_connect_log
+
+--echo #
+--echo # Checking SHOW SLAVE 'master1' STATUS
+--echo #
+--let $status_items= Master_Port, Relay_Log_File, Slave_IO_Running, Slave_SQL_Running, Last_Errno, Last_SQL_Errno
+--let $slave_field_result_replace= /$SERVER_MYPORT_1/MYPORT_1/
+--let $slave_name= 'master1'
+--disable_connect_log
+--source include/show_slave_status.inc
+--enable_connect_log
+--let $slave_name=
+
+--echo #
+--echo # Checking SHOW SLAVE STATUS
+--echo #
+--disable_connect_log
+--source include/show_slave_status.inc
+--enable_connect_log
+
+--echo #
+--echo # Checking SHOW ALL SLAVES STATUS
+--echo #
+--let $all_slaves_status= 1
+--let $status_items= Connection_name, Master_Port, Relay_Log_File, Slave_IO_Running, Slave_SQL_Running, Last_Errno, Last_SQL_Errno, Slave_heartbeat_period
+--disable_connect_log
+--source include/show_slave_status.inc
+--enable_connect_log
+--let $all_slaves_status=
+--echo #
+
+
+# Check that replication actually works
+
+--connection master1
+
+--disable_warnings
+drop database if exists db1;
+--enable_warnings
+create database db1;
+use db1;
+create table t1 (i int auto_increment, f1 varchar(16), primary key pk (i,f1)) engine=MyISAM;
+insert into t1 (f1) values ('one'),('two');
+--save_master_pos
+
+--connection slave
+--sync_with_master 0,'master1'
+
+--sorted_result
+select * from db1.t1;
+
+--let $datadir = `SELECT @@datadir`
+
+--echo # List of relay log files in the datadir
+--list_files $datadir mysqld-relay-bin-master1.*
+
+# Check that relay logs are recognizable
+
+let binlog_start=4;
+let binlog_file=;
+--disable_connect_log
+source include/show_relaylog_events.inc;
+let binlog_file= mysqld-relay-bin-master1.000002;
+source include/show_relaylog_events.inc;
+--enable_connect_log
+
+# Try to configure connection with the same name again,
+# should get an error because the slave is running
+
+--replace_result $SERVER_MYPORT_2 MYPORT_2
+--error ER_SLAVE_MUST_STOP
+eval change master 'master1' to
+master_port=$SERVER_MYPORT_2,
+master_host='127.0.0.1',
+master_user='root';
+
+# Try to configure using the default connection name
+# (which is 'master1' at the moment),
+# again, should get an error
+
+--replace_result $SERVER_MYPORT_2 MYPORT_2
+--error ER_SLAVE_MUST_STOP
+eval change master to
+master_port=$SERVER_MYPORT_2,
+master_host='127.0.0.1',
+master_user='root';
+
+# Try to configure a connection with the same master
+# using a different name, should get a conflict
+
+--replace_result $SERVER_MYPORT_1 MYPORT_1
+--error ER_CONNECTION_ALREADY_EXISTS
+eval change master 'master2' to
+master_port=$SERVER_MYPORT_1,
+master_host='127.0.0.1',
+master_user='root';
+
+
+# Set up a proper 'default' connection to master2
+
+set default_master_connection = '';
+
+--replace_result $SERVER_MYPORT_2 MYPORT_2
+eval change master to
+master_port=$SERVER_MYPORT_2,
+master_host='127.0.0.1',
+master_user='root';
+
+start slave;
+--disable_connect_log
+--source include/wait_for_slave_to_start.inc
+
+--source suite/multi_source/wait_for_sql_thread_read_all.inc
+--enable_connect_log
+
+# See both connections in the same status output
+
+let $wait_for_all= 1;
+let $show_statement= SHOW ALL SLAVES STATUS;
+let $field= Slave_IO_State;
+let $condition= = 'Waiting for master to send event';
+--disable_connect_log
+--source include/wait_show_condition.inc
+--enable_connect_log
+
+--echo #
+--echo # Checking SHOW ALL SLAVES STATUS
+--echo #
+--let $all_slaves_status= 1
+--let $status_items= Connection_name, Master_Port, Relay_Log_File, Slave_IO_Running, Slave_SQL_Running, Last_Errno, Last_SQL_Errno, Slave_heartbeat_period
+--let $slave_field_result_replace= /$SERVER_MYPORT_1/MYPORT_1/ /$SERVER_MYPORT_2/MYPORT_2/
+--disable_connect_log
+--source include/show_slave_status.inc
+--enable_connect_log
+--let $all_slaves_status=
+--echo #
+
+# Check that replication from two servers actually works
+
+--connection master1
+
+insert into t1 (f1) values ('three');
+--save_master_pos
+
+--connect (master2,127.0.0.1,root,,,$SERVER_MYPORT_2)
+
+--disable_warnings
+drop database if exists db2;
+--enable_warnings
+create database db2;
+use db2;
+create table t1 (pk int auto_increment primary key, f1 int) engine=InnoDB;
+begin;
+insert into t1 (f1) values (1),(2);
+
+--connection slave
+--sync_with_master 0,'master1'
+
+--connection master2
+--save_master_pos
+
+--connection slave
+--sync_with_master 0
+--sorted_result
+select * from db1.t1;
+select * from db2.t1;
+
+--connection master2
+commit;
+--save_master_pos
+
+--connection slave
+--sync_with_master 0
+--sorted_result
+select * from db2.t1;
+
+# Flush and purge logs on one master,
+# make sure slaves don't get confused
+
+--connection master1
+flush logs;
+--disable_connect_log
+--source include/wait_for_binlog_checkpoint.inc
+--enable_connect_log
+--save_master_pos
+--connection slave
+--sync_with_master 0, 'master1'
+
+--connection master1
+purge binary logs to 'master-bin.000002';
+let filesize=`select $binlog_start_pos+119+36`;
+--replace_result $filesize filesize
+show binary logs;
+insert into t1 (f1) values ('four');
+create table db1.t3 (f1 int) engine=InnoDB;
+--save_master_pos
+
+--connection slave
+--sync_with_master 0,'master1'
+
+--disable_connect_log
+--source suite/multi_source/wait_for_sql_thread_read_all.inc
+--enable_connect_log
+
+let $wait_for_all= 1;
+let $show_statement= SHOW ALL SLAVES STATUS;
+let $field= Slave_IO_State;
+let $condition= = 'Waiting for master to send event';
+--disable_connect_log
+--source include/wait_show_condition.inc
+--enable_connect_log
+
+--echo #
+--echo # Checking SHOW ALL SLAVES STATUS
+--echo #
+--let $all_slaves_status= 1
+--let $status_items= Connection_name, Master_Port, Relay_Log_File, Slave_IO_Running, Slave_SQL_Running, Last_Errno, Last_SQL_Errno, Slave_heartbeat_period
+--let $slave_field_result_replace= /$SERVER_MYPORT_1/MYPORT_1/ /$SERVER_MYPORT_2/MYPORT_2/
+--disable_connect_log
+--source include/show_slave_status.inc
+--enable_connect_log
+--let $all_slaves_status=
+--echo #
+
+--sorted_result
+select * from db1.t1;
+
+# This should show relay log events for the default master
+# (the one with the empty name)
+let binlog_file=;
+--disable_connect_log
+source include/show_relaylog_events.inc;
+let binlog_file= mysqld-relay-bin.000002;
+source include/show_relaylog_events.inc;
+--enable_connect_log
+
+# Make sure we don't lose control over replication connections
+# after reconnecting to the slave
+
+--disconnect slave
+--connect (slave,127.0.0.1,root,,,$SERVER_MYPORT_3)
+
+stop slave io_thread;
+show status like 'Slave_running';
+set default_master_connection = 'master1';
+show status like 'Slave_running';
+
+# Cleanup
+
+drop database db1;
+drop database db2;
+
+--disable_connect_log
+--source suite/multi_source/reset_master_slave.inc
+--enable_connect_log
+--disconnect slave
+
+--connection master1
+drop database db1;
+--disable_connect_log
+--source suite/multi_source/reset_master_slave.inc
+--enable_connect_log
+--disconnect master1
+
+--connection master2
+drop database db2;
+--disable_connect_log
+--source suite/multi_source/reset_master_slave.inc
+--enable_connect_log
+--disconnect master2
+
diff --git a/mysql-test/suite/binlog_encryption/my.cnf b/mysql-test/suite/binlog_encryption/my.cnf
new file mode 100644
index 0000000..d787ebe
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/my.cnf
@@ -0,0 +1,27 @@
+!include include/default_mysqld.cnf
+!include include/default_client.cnf
+
+[mysqld.1]
+innodb
+plugin-load-add= @ENV.FILE_KEY_MANAGEMENT_SO
+loose-file-key-management-filename= @ENV.MYSQLTEST_VARDIR/std_data/keys.txt
+encrypt-binlog
+log-basename= master
+
+[mysqld.2]
+#!use-slave-opt
+innodb
+log-slave-updates
+log-basename= slave
+
+[ENV]
+
+# We will adopt tests with master-slave setup as well as rpl_init setup,
+# so need both sets of variables
+MASTER_MYPORT= @mysqld.1.port
+SERVER_MYPORT_1= @mysqld.1.port
+SERVER_MYSOCK_1= @mysqld.1.socket
+
+SLAVE_MYPORT= @mysqld.2.port
+SERVER_MYPORT_2= @mysqld.2.port
+SERVER_MYSOCK_2= @mysqld.2.socket
diff --git a/mysql-test/suite/binlog_encryption/restart_server.inc b/mysql-test/suite/binlog_encryption/restart_server.inc
new file mode 100644
index 0000000..6cd0788
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/restart_server.inc
@@ -0,0 +1,35 @@
+#
+# We can not use the common include/restart_mysqld.inc or include/rpl_restart_server.inc,
+# because they have hardcoded connection names (master, master1)
+# which are not initiated by rpl_init.inc.
+# This is the relevant and simplified part of the same set of scripts.
+#
+# ==== Usage ====
+#
+# --let $rpl_server_number= N
+# Number to identify the server that needs to reconnect.
+# 1 is the master server, 2 the slave server
+# [--let $rpl_server_parameters= --flag1 --flag2 ...]
+# --source restart_server.inc
+#
+
+--let $_cur_con= $CURRENT_CONNECTION
+
+--connection default
+--enable_reconnect
+
+--connection $_cur_con
+--enable_reconnect
+--exec echo "wait" > $MYSQLTEST_VARDIR/tmp/mysqld.$rpl_server_number.expect
+
+shutdown_server 10;
+
+--source include/wait_until_disconnected.inc
+
+--let $_rpl_start_server_command= restart
+if ($rpl_server_parameters)
+{
+ --let $_rpl_start_server_command= restart:$rpl_server_parameters
+}
+--exec echo "$_rpl_start_server_command" > $MYSQLTEST_VARDIR/tmp/mysqld.$rpl_server_number.expect
+--source include/wait_until_connected_again.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_binlog_errors.cnf b/mysql-test/suite/binlog_encryption/rpl_binlog_errors.cnf
new file mode 100644
index 0000000..2d3db66
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_binlog_errors.cnf
@@ -0,0 +1,7 @@
+!include my.cnf
+
+[mysqld.1]
+max_binlog_size=4096
+
+[mysqld.2]
+skip-slave-start
diff --git a/mysql-test/suite/binlog_encryption/rpl_binlog_errors.result b/mysql-test/suite/binlog_encryption/rpl_binlog_errors.result
new file mode 100644
index 0000000..6c111ee
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_binlog_errors.result
@@ -0,0 +1,280 @@
+include/master-slave.inc
+[connection master]
+#######################################################################
+####################### PART 1: MASTER TESTS ##########################
+#######################################################################
+connection slave;
+include/stop_slave.inc
+connection master;
+call mtr.add_suppression("Can't generate a unique log-filename");
+call mtr.add_suppression("Writing one row to the row-based binary log failed.*");
+call mtr.add_suppression("Error writing file .*");
+SET @old_debug= @@global.debug;
+SELECT repeat('x',8192) INTO OUTFILE 'MYSQLTEST_VARDIR/tmp/bug_46166.data';
+SELECT repeat('x',10) INTO OUTFILE 'MYSQLTEST_VARDIR/tmp/bug_46166-2.data';
+RESET MASTER;
+###################### TEST #1
+FLUSH LOGS;
+# assert: must show two binlogs
+show binary logs;
+Log_name File_size
+master-bin.000001 #
+master-bin.000002 #
+###################### TEST #2
+RESET MASTER;
+SET GLOBAL debug_dbug="+d,error_unique_log_filename";
+FLUSH LOGS;
+ERROR HY000: Can't generate a unique log-filename master-bin.(1-999)
+
+# assert: must show one binlog
+show binary logs;
+Log_name File_size
+master-bin.000001 #
+SET GLOBAL debug_dbug=@old_debug;
+RESET MASTER;
+###################### TEST #3
+CREATE TABLE t1 (a INT);
+CREATE TABLE t2 (a VARCHAR(16384)) Engine=InnoDB;
+CREATE TABLE t4 (a VARCHAR(16384));
+INSERT INTO t1 VALUES (1);
+RESET MASTER;
+LOAD DATA INFILE 'MYSQLTEST_VARDIR/tmp/bug_46166.data' INTO TABLE t2;
+# assert: must show two binlog
+show binary logs;
+Log_name File_size
+master-bin.000001 #
+master-bin.000002 #
+SET GLOBAL debug_dbug=@old_debug;
+DELETE FROM t2;
+RESET MASTER;
+###################### TEST #4
+SET GLOBAL debug_dbug="+d,error_unique_log_filename";
+LOAD DATA INFILE 'MYSQLTEST_VARDIR/tmp/bug_46166.data' INTO TABLE t2;
+ERROR HY000: Can't generate a unique log-filename master-bin.(1-999)
+
+# assert: must show one entry
+SELECT count(*) FROM t2;
+count(*)
+1
+SET GLOBAL debug_dbug=@old_debug;
+DELETE FROM t2;
+RESET MASTER;
+###################### TEST #5
+SET GLOBAL debug_dbug="+d,error_unique_log_filename";
+LOAD DATA INFILE 'MYSQLTEST_VARDIR/tmp/bug_46166-2.data' INTO TABLE t2;
+# assert: must show one entry
+SELECT count(*) FROM t2;
+count(*)
+1
+SET GLOBAL debug_dbug=@old_debug;
+DELETE FROM t2;
+RESET MASTER;
+###################### TEST #6
+SET GLOBAL debug_dbug="+d,error_unique_log_filename";
+SET AUTOCOMMIT=0;
+INSERT INTO t2 VALUES ('muse');
+LOAD DATA INFILE 'MYSQLTEST_VARDIR/tmp/bug_46166.data' INTO TABLE t2;
+INSERT INTO t2 VALUES ('muse');
+COMMIT;
+ERROR HY000: Can't generate a unique log-filename master-bin.(1-999)
+
+# assert: must show three entries
+SELECT count(*) FROM t2;
+count(*)
+3
+SET AUTOCOMMIT= 1;
+SET GLOBAL debug_dbug=@old_debug;
+DELETE FROM t2;
+RESET MASTER;
+###################### TEST #7
+SET GLOBAL debug_dbug="+d,error_unique_log_filename";
+SELECT count(*) FROM t4;
+count(*)
+0
+LOAD DATA INFILE 'MYSQLTEST_VARDIR/tmp/bug_46166.data' INTO TABLE t4;
+ERROR HY000: Can't generate a unique log-filename master-bin.(1-999)
+
+# assert: must show 1 entry
+SELECT count(*) FROM t4;
+count(*)
+1
+### check that the incident event is written to the current log
+SET GLOBAL debug_dbug=@old_debug;
+include/show_binlog_events.inc
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Incident # # #1 (LOST_EVENTS)
+DELETE FROM t4;
+RESET MASTER;
+###################### TEST #8
+SET GLOBAL debug_dbug="+d,error_unique_log_filename";
+# must show 0 entries
+SELECT count(*) FROM t4;
+count(*)
+0
+SELECT count(*) FROM t2;
+count(*)
+0
+LOAD DATA INFILE 'MYSQLTEST_VARDIR/tmp/bug_46166.data' INTO TABLE t4;
+ERROR HY000: Can't generate a unique log-filename master-bin.(1-999)
+
+LOAD DATA INFILE 'MYSQLTEST_VARDIR/tmp/bug_46166.data' INTO TABLE t2;
+ERROR HY000: Can't generate a unique log-filename master-bin.(1-999)
+
+INSERT INTO t2 VALUES ('aaa'), ('bbb'), ('ccc');
+ERROR HY000: Can't generate a unique log-filename master-bin.(1-999)
+
+# INFO: Count(*) Before Offending DELETEs
+# assert: must show 1 entry
+SELECT count(*) FROM t4;
+count(*)
+1
+# assert: must show 4 entries
+SELECT count(*) FROM t2;
+count(*)
+4
+DELETE FROM t4;
+ERROR HY000: Can't generate a unique log-filename master-bin.(1-999)
+
+DELETE FROM t2;
+ERROR HY000: Can't generate a unique log-filename master-bin.(1-999)
+
+# INFO: Count(*) After Offending DELETEs
+# assert: must show zero entries
+SELECT count(*) FROM t4;
+count(*)
+0
+SELECT count(*) FROM t2;
+count(*)
+0
+SET GLOBAL debug_dbug=@old_debug;
+###################### TEST #9
+SET GLOBAL debug_dbug="+d,error_unique_log_filename";
+SET SQL_LOG_BIN=0;
+INSERT INTO t2 VALUES ('aaa'), ('bbb'), ('ccc'), ('ddd');
+INSERT INTO t4 VALUES ('eee'), ('fff'), ('ggg'), ('hhh');
+# assert: must show four entries
+SELECT count(*) FROM t2;
+count(*)
+4
+SELECT count(*) FROM t4;
+count(*)
+4
+DELETE FROM t2;
+DELETE FROM t4;
+# assert: must show zero entries
+SELECT count(*) FROM t2;
+count(*)
+0
+SELECT count(*) FROM t4;
+count(*)
+0
+SET SQL_LOG_BIN=1;
+SET GLOBAL debug_dbug=@old_debug;
+###################### TEST #10
+call mtr.add_suppression("MSYQL_BIN_LOG::open failed to sync the index file.");
+call mtr.add_suppression("Could not open .*");
+RESET MASTER;
+SHOW WARNINGS;
+Level Code Message
+SET GLOBAL debug_dbug="+d,fault_injection_registering_index";
+FLUSH LOGS;
+ERROR HY000: Can't open file: 'master-bin.000002' (errno: 1 "Operation not permitted")
+SET GLOBAL debug_dbug="-d,fault_injection_registering_index";
+SHOW BINARY LOGS;
+ERROR HY000: You are not using binary logging
+CREATE TABLE t5 (a INT);
+INSERT INTO t4 VALUES ('bbbbb');
+INSERT INTO t2 VALUES ('aaaaa');
+DELETE FROM t4;
+DELETE FROM t2;
+DROP TABLE t5;
+###################### TEST #11
+include/rpl_restart_server.inc [server_number=1]
+SET GLOBAL debug_dbug="+d,fault_injection_openning_index";
+FLUSH LOGS;
+ERROR HY000: Can't open file: 'master-bin.index' (errno: 1 "Operation not permitted")
+SET GLOBAL debug_dbug="-d,fault_injection_openning_index";
+RESET MASTER;
+ERROR HY000: Binlog closed, cannot RESET MASTER
+CREATE TABLE t5 (a INT);
+INSERT INTO t4 VALUES ('bbbbb');
+INSERT INTO t2 VALUES ('aaaaa');
+DELETE FROM t4;
+DELETE FROM t2;
+DROP TABLE t5;
+include/rpl_restart_server.inc [server_number=1]
+###################### TEST #12
+SET GLOBAL debug_dbug="+d,fault_injection_new_file_rotate_event";
+FLUSH LOGS;
+ERROR HY000: Can't open file: 'master-bin' (errno: 2 "No such file or directory")
+SET GLOBAL debug_dbug="-d,fault_injection_new_file_rotate_event";
+RESET MASTER;
+ERROR HY000: Binlog closed, cannot RESET MASTER
+CREATE TABLE t5 (a INT);
+INSERT INTO t4 VALUES ('bbbbb');
+INSERT INTO t2 VALUES ('aaaaa');
+DELETE FROM t4;
+DELETE FROM t2;
+DROP TABLE t5;
+include/rpl_restart_server.inc [server_number=1]
+DROP TABLE t1, t2, t4;
+RESET MASTER;
+connection slave;
+include/start_slave.inc
+connection master;
+#######################################################################
+####################### PART 2: SLAVE TESTS ###########################
+#######################################################################
+include/rpl_reset.inc
+connection slave;
+call mtr.add_suppression("Slave I/O: Relay log write failure: could not queue event from master.*");
+call mtr.add_suppression("Error writing file .*");
+call mtr.add_suppression("Could not open .*");
+call mtr.add_suppression("MSYQL_BIN_LOG::open failed to sync the index file.");
+call mtr.add_suppression("Can't generate a unique log-filename .*");
+###################### TEST #13
+SET @old_debug=@@global.debug;
+include/stop_slave.inc
+SET GLOBAL debug_dbug="+d,error_unique_log_filename";
+START SLAVE io_thread;
+include/wait_for_slave_io_error.inc [errno=1595]
+Last_IO_Error = 'Relay log write failure: could not queue event from master'
+SET GLOBAL debug_dbug="-d,error_unique_log_filename";
+SET GLOBAL debug_dbug=@old_debug;
+include/rpl_restart_server.inc [server_number=2]
+###################### TEST #14
+SET @old_debug=@@global.debug;
+include/stop_slave.inc
+SET GLOBAL debug_dbug="+d,fault_injection_new_file_rotate_event";
+START SLAVE io_thread;
+include/wait_for_slave_io_error.inc [errno=1595]
+Last_IO_Error = 'Relay log write failure: could not queue event from master'
+SET GLOBAL debug_dbug="-d,fault_injection_new_file_rotate_event";
+SET GLOBAL debug_dbug=@old_debug;
+include/rpl_restart_server.inc [server_number=2]
+###################### TEST #15
+SET @old_debug=@@global.debug;
+include/stop_slave.inc
+SET GLOBAL debug_dbug="+d,fault_injection_registering_index";
+START SLAVE io_thread;
+include/wait_for_slave_io_error.inc [errno=1595]
+Last_IO_Error = 'Relay log write failure: could not queue event from master'
+SET GLOBAL debug_dbug="-d,fault_injection_registering_index";
+SET GLOBAL debug_dbug=@old_debug;
+include/rpl_restart_server.inc [server_number=2]
+###################### TEST #16
+SET @old_debug=@@global.debug;
+include/stop_slave.inc
+SET GLOBAL debug_dbug="+d,fault_injection_openning_index";
+START SLAVE io_thread;
+include/wait_for_slave_io_error.inc [errno=1595]
+Last_IO_Error = 'Relay log write failure: could not queue event from master'
+SET GLOBAL debug_dbug="-d,fault_injection_openning_index";
+SET GLOBAL debug_dbug=@old_debug;
+include/rpl_restart_server.inc [server_number=2]
+include/stop_slave_sql.inc
+Warnings:
+Note 1255 Slave already has been stopped
+RESET SLAVE;
+RESET MASTER;
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_binlog_errors.test b/mysql-test/suite/binlog_encryption/rpl_binlog_errors.test
new file mode 100644
index 0000000..b830652
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_binlog_errors.test
@@ -0,0 +1,439 @@
+#
+# The test was taken from the rpl suite as is
+#
+
+# BUG#46166: MYSQL_BIN_LOG::new_file_impl is not propagating error
+# when generating new name.
+#
+# WHY
+# ===
+#
+# We want to check whether error is reported or not when
+# new_file_impl fails (this may happen when rotation is not
+# possible because there is some problem finding an
+# unique filename).
+#
+# HOW
+# ===
+#
+# Test cases are documented inline.
+
+-- source include/have_debug.inc
+-- source include/master-slave.inc
+
+--enable_connect_log
+
+-- echo #######################################################################
+-- echo ####################### PART 1: MASTER TESTS ##########################
+-- echo #######################################################################
+
+
+### ACTION: stopping slave as it is not needed for the first part of
+### the test
+
+-- connection slave
+--disable_connect_log
+-- source include/stop_slave.inc
+--enable_connect_log
+-- connection master
+
+call mtr.add_suppression("Can't generate a unique log-filename");
+call mtr.add_suppression("Writing one row to the row-based binary log failed.*");
+call mtr.add_suppression("Error writing file .*");
+
+SET @old_debug= @@global.debug;
+
+### ACTION: create a large file (> 4096 bytes) that will be later used
+### in LOAD DATA INFILE to check binlog errors in its vicinity
+-- let $load_file= $MYSQLTEST_VARDIR/tmp/bug_46166.data
+-- let $MYSQLD_DATADIR= `select @@datadir`
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval SELECT repeat('x',8192) INTO OUTFILE '$load_file'
+
+### ACTION: create a small file (< 4096 bytes) that will be later used
+### in LOAD DATA INFILE to check for absence of binlog errors
+### when file loading this file does not force flushing and
+### rotating the binary log
+-- let $load_file2= $MYSQLTEST_VARDIR/tmp/bug_46166-2.data
+-- let $MYSQLD_DATADIR= `select @@datadir`
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval SELECT repeat('x',10) INTO OUTFILE '$load_file2'
+
+RESET MASTER;
+
+-- echo ###################### TEST #1
+
+### ASSERTION: no problem flushing logs (should show two binlogs)
+FLUSH LOGS;
+-- echo # assert: must show two binlogs
+--disable_connect_log
+-- source include/show_binary_logs.inc
+--enable_connect_log
+
+-- echo ###################### TEST #2
+
+### ASSERTION: check that FLUSH LOGS actually fails and reports
+### failure back to the user if find_uniq_filename fails
+### (should show just one binlog)
+
+RESET MASTER;
+SET GLOBAL debug_dbug="+d,error_unique_log_filename";
+-- error ER_NO_UNIQUE_LOGFILE
+FLUSH LOGS;
+-- echo # assert: must show one binlog
+--disable_connect_log
+-- source include/show_binary_logs.inc
+--enable_connect_log
+
+### ACTION: clean up and move to next test
+SET GLOBAL debug_dbug=@old_debug;
+RESET MASTER;
+
+-- echo ###################### TEST #3
+
+### ACTION: create some tables (t1, t2, t4) and insert some values in
+### table t1
+CREATE TABLE t1 (a INT);
+CREATE TABLE t2 (a VARCHAR(16384)) Engine=InnoDB;
+CREATE TABLE t4 (a VARCHAR(16384));
+INSERT INTO t1 VALUES (1);
+RESET MASTER;
+
+### ASSERTION: we force rotation of the binary log because it exceeds
+### the max_binlog_size option (should show two binary
+### logs)
+
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval LOAD DATA INFILE '$load_file' INTO TABLE t2
+
+# shows two binary logs
+-- echo # assert: must show two binlog
+--disable_connect_log
+-- source include/show_binary_logs.inc
+--enable_connect_log
+
+# clean up the table and the binlog to be used in next part of test
+SET GLOBAL debug_dbug=@old_debug;
+DELETE FROM t2;
+RESET MASTER;
+
+-- echo ###################### TEST #4
+
+### ASSERTION: load the big file into a transactional table and check
+### that it reports error. The table will contain the
+### changes performed despite the fact that it reported an
+### error.
+
+SET GLOBAL debug_dbug="+d,error_unique_log_filename";
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- error ER_NO_UNIQUE_LOGFILE
+-- eval LOAD DATA INFILE '$load_file' INTO TABLE t2
+
+# show table
+-- echo # assert: must show one entry
+SELECT count(*) FROM t2;
+
+# clean up the table and the binlog to be used in next part of test
+SET GLOBAL debug_dbug=@old_debug;
+DELETE FROM t2;
+RESET MASTER;
+
+-- echo ###################### TEST #5
+
+### ASSERTION: load the small file into a transactional table and
+### check that it succeeds
+
+SET GLOBAL debug_dbug="+d,error_unique_log_filename";
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval LOAD DATA INFILE '$load_file2' INTO TABLE t2
+
+# show table
+-- echo # assert: must show one entry
+SELECT count(*) FROM t2;
+
+# clean up the table and the binlog to be used in next part of test
+SET GLOBAL debug_dbug=@old_debug;
+DELETE FROM t2;
+RESET MASTER;
+
+-- echo ###################### TEST #6
+
+### ASSERTION: check that even if one is using a transactional table
+### and explicit transactions (no autocommit) if rotation
+### fails we get the error. Transaction is not rolled back
+### because rotation happens after the commit.
+
+SET GLOBAL debug_dbug="+d,error_unique_log_filename";
+SET AUTOCOMMIT=0;
+INSERT INTO t2 VALUES ('muse');
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- eval LOAD DATA INFILE '$load_file' INTO TABLE t2
+INSERT INTO t2 VALUES ('muse');
+-- error ER_NO_UNIQUE_LOGFILE
+COMMIT;
+
+### ACTION: Show the contents of the table after the test
+-- echo # assert: must show three entries
+SELECT count(*) FROM t2;
+
+### ACTION: clean up and move to the next test
+SET AUTOCOMMIT= 1;
+SET GLOBAL debug_dbug=@old_debug;
+DELETE FROM t2;
+RESET MASTER;
+
+-- echo ###################### TEST #7
+
+### ASSERTION: check that on a non-transactional table, if rotation
+### fails then an error is reported and an incident event
+### is written to the current binary log.
+
+SET GLOBAL debug_dbug="+d,error_unique_log_filename";
+SELECT count(*) FROM t4;
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- error ER_NO_UNIQUE_LOGFILE
+-- eval LOAD DATA INFILE '$load_file' INTO TABLE t4
+
+-- echo # assert: must show 1 entry
+SELECT count(*) FROM t4;
+
+-- echo ### check that the incident event is written to the current log
+SET GLOBAL debug_dbug=@old_debug;
+-- let $binlog_limit= 5,1
+--disable_connect_log
+-- source include/show_binlog_events.inc
+--enable_connect_log
+
+# clean up and move to next test
+DELETE FROM t4;
+RESET MASTER;
+
+-- echo ###################### TEST #8
+
+### ASSERTION: check that statements end up in error but they succeed
+### on changing the data.
+
+SET GLOBAL debug_dbug="+d,error_unique_log_filename";
+-- echo # must show 0 entries
+SELECT count(*) FROM t4;
+SELECT count(*) FROM t2;
+
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- error ER_NO_UNIQUE_LOGFILE
+-- eval LOAD DATA INFILE '$load_file' INTO TABLE t4
+-- replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+-- error ER_NO_UNIQUE_LOGFILE
+-- eval LOAD DATA INFILE '$load_file' INTO TABLE t2
+-- error ER_NO_UNIQUE_LOGFILE
+INSERT INTO t2 VALUES ('aaa'), ('bbb'), ('ccc');
+
+-- echo # INFO: Count(*) Before Offending DELETEs
+-- echo # assert: must show 1 entry
+SELECT count(*) FROM t4;
+-- echo # assert: must show 4 entries
+SELECT count(*) FROM t2;
+
+-- error ER_NO_UNIQUE_LOGFILE
+DELETE FROM t4;
+-- error ER_NO_UNIQUE_LOGFILE
+DELETE FROM t2;
+
+-- echo # INFO: Count(*) After Offending DELETEs
+-- echo # assert: must show zero entries
+SELECT count(*) FROM t4;
+SELECT count(*) FROM t2;
+
+# remove fault injection
+SET GLOBAL debug_dbug=@old_debug;
+
+-- echo ###################### TEST #9
+
+### ASSERTION: check that if we disable binlogging, then statements
+### succeed.
+SET GLOBAL debug_dbug="+d,error_unique_log_filename";
+SET SQL_LOG_BIN=0;
+INSERT INTO t2 VALUES ('aaa'), ('bbb'), ('ccc'), ('ddd');
+INSERT INTO t4 VALUES ('eee'), ('fff'), ('ggg'), ('hhh');
+-- echo # assert: must show four entries
+SELECT count(*) FROM t2;
+SELECT count(*) FROM t4;
+DELETE FROM t2;
+DELETE FROM t4;
+-- echo # assert: must show zero entries
+SELECT count(*) FROM t2;
+SELECT count(*) FROM t4;
+SET SQL_LOG_BIN=1;
+SET GLOBAL debug_dbug=@old_debug;
+
+-- echo ###################### TEST #10
+
+### ASSERTION: check that error is reported if there is a failure
+### while registering the index file and the binary log
+### file or failure to write the rotate event.
+
+call mtr.add_suppression("MSYQL_BIN_LOG::open failed to sync the index
file.");
+call mtr.add_suppression("Could not open .*");
+
+RESET MASTER;
+SHOW WARNINGS;
+
+# +d,fault_injection_registering_index => injects fault on MYSQL_BIN_LOG::open
+SET GLOBAL debug_dbug="+d,fault_injection_registering_index";
+-- replace_regex /\.[\\\/]master/master/
+-- error ER_CANT_OPEN_FILE
+FLUSH LOGS;
+SET GLOBAL debug_dbug="-d,fault_injection_registering_index";
+
+-- error ER_NO_BINARY_LOGGING
+SHOW BINARY LOGS;
+
+# issue some statements and check that they don't fail
+CREATE TABLE t5 (a INT);
+INSERT INTO t4 VALUES ('bbbbb');
+INSERT INTO t2 VALUES ('aaaaa');
+DELETE FROM t4;
+DELETE FROM t2;
+DROP TABLE t5;
+
+-- echo ###################### TEST #11
+
+### ASSERTION: check that error is reported if there is a failure
+### while opening the index file and the binary log file or
+### failure to write the rotate event.
+
+# restart the server so that we have binlog again
+--let $rpl_server_number= 1
+--disable_connect_log
+--source include/rpl_restart_server.inc
+--enable_connect_log
+
+# +d,fault_injection_openning_index => injects fault on MYSQL_BIN_LOG::open_index_file
+SET GLOBAL debug_dbug="+d,fault_injection_openning_index";
+-- replace_regex /\.[\\\/]master/master/
+-- error ER_CANT_OPEN_FILE
+FLUSH LOGS;
+SET GLOBAL debug_dbug="-d,fault_injection_openning_index";
+
+-- error ER_FLUSH_MASTER_BINLOG_CLOSED
+RESET MASTER;
+
+# issue some statements and check that they don't fail
+CREATE TABLE t5 (a INT);
+INSERT INTO t4 VALUES ('bbbbb');
+INSERT INTO t2 VALUES ('aaaaa');
+DELETE FROM t4;
+DELETE FROM t2;
+DROP TABLE t5;
+
+# restart the server so that we have binlog again
+--let $rpl_server_number= 1
+--disable_connect_log
+--source include/rpl_restart_server.inc
+--enable_connect_log
+
+-- echo ###################### TEST #12
+
+### ASSERTION: check that error is reported if there is a failure
+### while writing the rotate event when creating a new log
+### file.
+
+# +d,fault_injection_new_file_rotate_event => injects fault on MYSQL_BIN_LOG::MYSQL_BIN_LOG::new_file_impl
+SET GLOBAL debug_dbug="+d,fault_injection_new_file_rotate_event";
+-- error ER_ERROR_ON_WRITE
+FLUSH LOGS;
+SET GLOBAL debug_dbug="-d,fault_injection_new_file_rotate_event";
+
+-- error ER_FLUSH_MASTER_BINLOG_CLOSED
+RESET MASTER;
+
+# issue some statements and check that they don't fail
+CREATE TABLE t5 (a INT);
+INSERT INTO t4 VALUES ('bbbbb');
+INSERT INTO t2 VALUES ('aaaaa');
+DELETE FROM t4;
+DELETE FROM t2;
+DROP TABLE t5;
+
+# restart the server so that we have binlog again
+--let $rpl_server_number= 1
+--disable_connect_log
+--source include/rpl_restart_server.inc
+--enable_connect_log
+
+## clean up
+DROP TABLE t1, t2, t4;
+RESET MASTER;
+
+# restart slave again
+-- connection slave
+--disable_connect_log
+-- source include/start_slave.inc
+--enable_connect_log
+-- connection master
+
+-- echo #######################################################################
+-- echo ####################### PART 2: SLAVE TESTS ###########################
+-- echo #######################################################################
+
+### setup
+--disable_connect_log
+--source include/rpl_reset.inc
+--enable_connect_log
+-- connection slave
+
+# slave suppressions
+
+call mtr.add_suppression("Slave I/O: Relay log write failure: could not
queue event from master.*");
+call mtr.add_suppression("Error writing file .*");
+call mtr.add_suppression("Could not open .*");
+call mtr.add_suppression("MSYQL_BIN_LOG::open failed to sync the index
file.");
+call mtr.add_suppression("Can't generate a unique log-filename .*");
+-- echo ###################### TEST #13
+
+#### ASSERTION: check against unique log filename error
+-- let $io_thd_injection_fault_flag= error_unique_log_filename
+-- let $slave_io_errno= 1595
+-- let $show_slave_io_error= 1
+--disable_connect_log
+-- source include/io_thd_fault_injection.inc
+--enable_connect_log
+
+-- echo ###################### TEST #14
+
+#### ASSERTION: check against rotate failing
+-- let $io_thd_injection_fault_flag= fault_injection_new_file_rotate_event
+-- let $slave_io_errno= 1595
+-- let $show_slave_io_error= 1
+--disable_connect_log
+-- source include/io_thd_fault_injection.inc
+--enable_connect_log
+
+-- echo ###################### TEST #15
+
+#### ASSERTION: check against relay log open failure
+-- let $io_thd_injection_fault_flag= fault_injection_registering_index
+-- let $slave_io_errno= 1595
+-- let $show_slave_io_error= 1
+--disable_connect_log
+-- source include/io_thd_fault_injection.inc
+--enable_connect_log
+
+-- echo ###################### TEST #16
+
+#### ASSERTION: check against relay log index open failure
+-- let $io_thd_injection_fault_flag= fault_injection_openning_index
+-- let $slave_io_errno= 1595
+-- let $show_slave_io_error= 1
+--disable_connect_log
+-- source include/io_thd_fault_injection.inc
+--enable_connect_log
+
+### clean up
+--disable_connect_log
+-- source include/stop_slave_sql.inc
+--enable_connect_log
+RESET SLAVE;
+RESET MASTER;
+--let $rpl_only_running_threads= 1
+--disable_connect_log
+--source include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_cant_read_event_incident.result b/mysql-test/suite/binlog_encryption/rpl_cant_read_event_incident.result
new file mode 100644
index 0000000..5aff978
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_cant_read_event_incident.result
@@ -0,0 +1,26 @@
+include/master-slave.inc
+[connection master]
+connection slave;
+include/stop_slave.inc
+connection master;
+call mtr.add_suppression("Error in Log_event::read_log_event()");
+include/rpl_stop_server.inc [server_number=1]
+include/rpl_start_server.inc [server_number=1]
+show binlog events;
ERROR HY000: Error when executing command SHOW BINLOG EVENTS: Wrong offset or I/O error
+connection slave;
+call mtr.add_suppression("Slave I/O: Got fatal error 1236 from master
when reading data from binary log");
+reset slave;
+start slave;
+include/wait_for_slave_param.inc [Last_IO_Errno]
+Last_IO_Errno = '1236'
Last_IO_Error = 'Got fatal error 1236 from master when reading data from binary log: 'binlog truncated in the middle of event; consider out of disk space on master; the first event '.' at XXX, the last event read from 'master-bin.000001' at XXX, the last byte read from 'master-bin.000001' at XXX.''
+connection master;
+reset master;
+connection slave;
+stop slave;
+reset slave;
+drop table if exists t;
+reset master;
+End of the tests
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_cant_read_event_incident.test b/mysql-test/suite/binlog_encryption/rpl_cant_read_event_incident.test
new file mode 100644
index 0000000..5aa70b8
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_cant_read_event_incident.test
@@ -0,0 +1,92 @@
+#
+# The test was taken from the rpl suite as is
+#
+
+#
+# Bug#11747416 : 32228 A disk full makes binary log corrupt.
+#
+#
+# The test demonstrates how an error when reading from the binlog is
+# propagated to the slave and reported there.
+# Conditions for the bug include a crash at the time the last event
+# was only partly written to the binlog. With the fixes the event is
+# no longer sent out; instead, the dump thread sends a sound error message.
+#
+# Crash is not simulated. A binlog with a partly written event at its
+# end is installed and replication is started from it.
+#
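The "partly written event" condition can be sketched outside the server: each v4-format event carries its own size at offset 9 of the 19-byte common header, so a reader notices truncation when the declared size overruns the file (a simplified illustration, not the server's actual reader):

```python
import struct

def is_truncated(data: bytes) -> bool:
    """Walk v4-format binlog events; report True if the stream ends mid-event."""
    pos = 0
    while pos < len(data):
        header = data[pos:pos + 19]            # v4 common header is 19 bytes
        if len(header) < 19:
            return True                        # not even a whole header left
        (event_size,) = struct.unpack_from("<I", header, 9)
        if pos + event_size > len(data):
            return True                        # declared size overruns the file
        pos += event_size
    return False

# fabricate one 26-byte event: 19-byte header (size field = 26) + 7-byte payload
event = struct.pack("<IBIIIH", 0, 15, 1, 26, 26, 0) + b"payload"
assert not is_truncated(event)
assert is_truncated(event[:-3])                # cut mid-event, as on disk full
```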
+
+--source include/master-slave.inc
+--source include/have_binlog_format_mixed.inc
+
+--enable_connect_log
+
+--connection slave
+# Make sure the slave is stopped while we are messing with master.
+# Otherwise we get occasional failures as the slave manages to re-connect
+# to the newly started master and we get extra events applied, causing
+# conflicts.
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+
+--connection master
+call mtr.add_suppression("Error in Log_event::read_log_event()");
+--let $datadir= `SELECT @@datadir`
+
+--let $rpl_server_number= 1
+--disable_connect_log
+--source include/rpl_stop_server.inc
+--enable_connect_log
+
+--remove_file $datadir/master-bin.000001
+--copy_file $MYSQL_TEST_DIR/std_data/bug11747416_32228_binlog.000001 $datadir/master-bin.000001
+
+--let $rpl_server_number= 1
+--disable_connect_log
+--source include/rpl_start_server.inc
+
+--source include/wait_until_connected_again.inc
+--enable_connect_log
+
+# evidence of the partial binlog
+--error ER_ERROR_WHEN_EXECUTING_COMMAND
+show binlog events;
+
+--connection slave
+call mtr.add_suppression("Slave I/O: Got fatal error 1236 from master when reading data from binary log");
+reset slave;
+start slave;
+
+# ER_MASTER_FATAL_ERROR_READING_BINLOG 1236
+--let $slave_param=Last_IO_Errno
+--let $slave_param_value=1236
+--disable_connect_log
+--source include/wait_for_slave_param.inc
+
+--let $slave_field_result_replace= / at [0-9]*/ at XXX/
+--let $status_items= Last_IO_Errno, Last_IO_Error
+--source include/show_slave_status.inc
+--enable_connect_log
+
+#
+# Cleanup
+#
+
+--connection master
+reset master;
+
+--connection slave
+stop slave;
+reset slave;
+# Table was created from binlog, it may not be created if SQL thread is running
+# slowly and IO thread reaches incident before SQL thread applies it.
+--disable_warnings
+drop table if exists t;
+--enable_warnings
+reset master;
+
+--echo End of the tests
+--let $rpl_only_running_threads= 1
+--disable_connect_log
+--source include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_checksum.cnf b/mysql-test/suite/binlog_encryption/rpl_checksum.cnf
new file mode 100644
index 0000000..9d7ada8
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_checksum.cnf
@@ -0,0 +1,10 @@
+!include my.cnf
+
+[mysqld.1]
+binlog-checksum=CRC32
+
+[mysqld.2]
+binlog-checksum=CRC32
+plugin-load-add= @ENV.FILE_KEY_MANAGEMENT_SO
+loose-file-key-management-filename=@ENV.MYSQLTEST_VARDIR/std_data/keys.txt
+encrypt-binlog
diff --git a/mysql-test/suite/binlog_encryption/rpl_checksum.result b/mysql-test/suite/binlog_encryption/rpl_checksum.result
new file mode 100644
index 0000000..1ba9ff8
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_checksum.result
@@ -0,0 +1,157 @@
+include/master-slave.inc
+[connection master]
+call mtr.add_suppression('Slave can not handle replication events with the checksum that master is configured to log');
+call mtr.add_suppression('Replication event checksum verification failed');
+call mtr.add_suppression('Relay log write failure: could not queue event from master');
+call mtr.add_suppression('Master is configured to log replication events with checksum, but will not send such events to slaves that cannot process them');
+connection slave;
+set @save_slave_sql_verify_checksum = @@global.slave_sql_verify_checksum;
+connection master;
+set @save_master_verify_checksum = @@global.master_verify_checksum;
+show binary logs;
+Log_name File_size
+master-bin.000001 #
+set @@global.binlog_checksum = NONE;
+select @@global.binlog_checksum;
+@@global.binlog_checksum
+NONE
+*** must be rotations seen ***
+show binary logs;
+Log_name File_size
+master-bin.000001 #
+master-bin.000002 #
+connection master;
+set @@global.binlog_checksum = NONE;
+create table t1 (a int);
+flush logs;
+flush logs;
+flush logs;
+connection slave;
+flush logs;
+flush logs;
+flush logs;
+select count(*) as zero from t1;
+zero
+0
+include/stop_slave.inc
+connection master;
+set @@global.binlog_checksum = CRC32;
+insert into t1 values (1) /* will not be applied on slave due to simulation */;
+connection slave;
+set @@global.debug_dbug='d,simulate_slave_unaware_checksum';
+start slave;
+include/wait_for_slave_io_error.inc [errno=1236]
+select count(*) as zero from t1;
+zero
+0
+set @@global.debug_dbug='';
+include/start_slave.inc
+connection master;
+set @@global.master_verify_checksum = 1;
+set @@session.debug_dbug='d,simulate_checksum_test_failure';
+show binlog events;
+ERROR HY000: Error when executing command SHOW BINLOG EVENTS: Wrong offset or I/O error
+set @@session.debug_dbug='';
+set @@global.master_verify_checksum = default;
+connection slave;
+connection slave;
+include/stop_slave.inc
+connection master;
+create table t2 (a int);
+connection slave;
+set @@global.debug_dbug='d,simulate_checksum_test_failure';
+start slave io_thread;
+include/wait_for_slave_io_error.inc [errno=1595,1913]
+set @@global.debug_dbug='';
+start slave io_thread;
+include/wait_for_slave_param.inc [Read_Master_Log_Pos]
+set @@global.slave_sql_verify_checksum = 1;
+set @@global.debug_dbug='d,simulate_checksum_test_failure';
+start slave sql_thread;
+include/wait_for_slave_sql_error.inc [errno=1593]
+Last_SQL_Error = 'Error initializing relay log position: I/O error reading event at position 4'
+set @@global.debug_dbug='';
+include/start_slave.inc
+connection master;
+connection slave;
+select count(*) as 'must be zero' from t2;
+must be zero
+0
+connection slave;
+stop slave;
+reset slave;
+set @@global.binlog_checksum= IF(floor((rand()*1000)%2), "CRC32", "NONE");
+flush logs;
+connection master;
+set @@global.binlog_checksum= CRC32;
+reset master;
+flush logs;
+create table t3 (a int, b char(5));
+connection slave;
+include/start_slave.inc
+connection master;
+connection slave;
+select count(*) as 'must be zero' from t3;
+must be zero
+0
+include/stop_slave.inc
+change master to master_host='127.0.0.1',master_port=MASTER_PORT, master_user='root';
+connection master;
+flush logs;
+reset master;
+insert into t3 value (1, @@global.binlog_checksum);
+connection slave;
+include/start_slave.inc
+flush logs;
+connection master;
+connection slave;
+select count(*) as 'must be one' from t3;
+must be one
+1
+connection master;
+set @@global.binlog_checksum= IF(floor((rand()*1000)%2), "CRC32", "NONE");
+insert into t3 value (1, @@global.binlog_checksum);
+connection slave;
+connection master;
+drop table t1, t2, t3;
+set @@global.master_verify_checksum = @save_master_verify_checksum;
+connection slave;
+*** Bug#59123 / MDEV-5799: INCIDENT_EVENT checksum written to error log as garbage characters ***
+connection master;
+CREATE TABLE t4 (a INT PRIMARY KEY);
+INSERT INTO t4 VALUES (1);
+SET sql_log_bin=0;
+CALL mtr.add_suppression("\\[ERROR\\] Can't generate a unique log-filename");
+SET sql_log_bin=1;
+SET @old_dbug= @@GLOBAL.debug_dbug;
+SET debug_dbug= '+d,binlog_inject_new_name_error';
+FLUSH LOGS;
+ERROR HY000: Can't generate a unique log-filename master-bin.(1-999)
+
+SET debug_dbug= @old_dbug;
+INSERT INTO t4 VALUES (2);
+connection slave;
+include/wait_for_slave_sql_error.inc [errno=1590]
+Last_SQL_Error = 'The incident LOST_EVENTS occurred on the master. Message: error writing to the binary log'
+FOUND /Slave SQL: The incident LOST_EVENTS occurred on the master\. Message: error writing to the binary log, Internal MariaDB error code: 1590/ in mysqld.2.err
+SELECT * FROM t4 ORDER BY a;
+a
+1
+STOP SLAVE IO_THREAD;
+SET sql_slave_skip_counter= 1;
+include/start_slave.inc
+connection master;
+connection slave;
+SELECT * FROM t4 ORDER BY a;
+a
+1
+2
+connection slave;
+set @@global.binlog_checksum = CRC32;
+set @@global.slave_sql_verify_checksum = @save_slave_sql_verify_checksum;
+End of tests
+connection master;
+DROP TABLE t4;
+set @@global.binlog_checksum = CRC32;
+set @@global.master_verify_checksum = @save_master_verify_checksum;
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_checksum.test b/mysql-test/suite/binlog_encryption/rpl_checksum.test
new file mode 100644
index 0000000..2334c63
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_checksum.test
@@ -0,0 +1,326 @@
+#
+# The test was taken from the rpl suite, parts related to checking
+# the variable itself were removed. The test is run with encrypted
+# binlogs on master and slave
+#
+
+# WL2540 replication events checksum
+# Testing configuration parameters
+
+--source include/master-slave.inc
+--source include/have_debug.inc
+--source include/have_binlog_format_mixed.inc
+
+--enable_connect_log
+
+call mtr.add_suppression('Slave can not handle replication events with the checksum that master is configured to log');
+call mtr.add_suppression('Replication event checksum verification failed');
+# due to C failure simulation
+call mtr.add_suppression('Relay log write failure: could not queue event from master');
+call mtr.add_suppression('Master is configured to log replication events with checksum, but will not send such events to slaves that cannot process them');
+
+connection slave;
+
+set @save_slave_sql_verify_checksum = @@global.slave_sql_verify_checksum;
+
+connection master;
+
+set @save_master_verify_checksum = @@global.master_verify_checksum;
+
+--disable_connect_log
+source include/show_binary_logs.inc;
+set @@global.binlog_checksum = NONE;
+select @@global.binlog_checksum;
+--echo *** must be rotations seen ***
+source include/show_binary_logs.inc;
+--enable_connect_log
+
+#
+# B. Old Slave to New master conditions
+#
+# while master does not send a checksum-ed binlog the Old Slave can
+# work with the New Master
+
+connection master;
+
+set @@global.binlog_checksum = NONE;
+create table t1 (a int);
+
+# testing that binlog rotation preserves opt_binlog_checksum value
+flush logs;
+flush logs;
+flush logs;
+
+sync_slave_with_master;
+#connection slave;
+# checking that rotation on the slave side leaves slave stable
+flush logs;
+flush logs;
+flush logs;
+select count(*) as zero from t1;
+
+--disable_connect_log
+source include/stop_slave.inc;
+--enable_connect_log
+
+connection master;
+set @@global.binlog_checksum = CRC32;
+--disable_connect_log
+-- source include/wait_for_binlog_checkpoint.inc
+--enable_connect_log
+insert into t1 values (1) /* will not be applied on slave due to simulation */;
+
+# instruction to the dump thread
+
+connection slave;
+set @@global.debug_dbug='d,simulate_slave_unaware_checksum';
+start slave;
+--let $slave_io_errno= 1236
+--let $show_slave_io_error= 0
+--disable_connect_log
+source include/wait_for_slave_io_error.inc;
+--enable_connect_log
+
+select count(*) as zero from t1;
+set @@global.debug_dbug='';
+--disable_connect_log
+source include/start_slave.inc;
+--enable_connect_log
+
+#
+# C. checksum failure simulations
+#
+
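The framing these simulations attack can be illustrated with a short sketch (the CRC32 algorithm that `binlog_checksum=CRC32` enables, not server code): each event is followed by a 4-byte little-endian CRC32 of the event bytes, and `simulate_checksum_test_failure` amounts to that trailer not matching on verification:

```python
import struct
import zlib

def add_checksum(event: bytes) -> bytes:
    """Append the 4-byte little-endian CRC32 trailer to an event."""
    return event + struct.pack("<I", zlib.crc32(event) & 0xFFFFFFFF)

def verify_checksum(framed: bytes) -> bool:
    """Recompute the CRC over the body and compare with the stored trailer."""
    body, (stored,) = framed[:-4], struct.unpack("<I", framed[-4:])
    return (zlib.crc32(body) & 0xFFFFFFFF) == stored

framed = add_checksum(b"\x0f\x01\x00example-event-bytes")
assert verify_checksum(framed)
corrupted = framed[:-1] + bytes([framed[-1] ^ 0xFF])   # flip one trailer byte
assert not verify_checksum(corrupted)                  # checksum test failure
```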
+# C1. Failure by a client thread
+connection master;
+set @@global.master_verify_checksum = 1;
+set @@session.debug_dbug='d,simulate_checksum_test_failure';
+--error ER_ERROR_WHEN_EXECUTING_COMMAND
+show binlog events;
+set @@session.debug_dbug='';
+set @@global.master_verify_checksum = default;
+
+#connection master;
+sync_slave_with_master;
+
+connection slave;
+--disable_connect_log
+source include/stop_slave.inc;
+--enable_connect_log
+
+connection master;
+create table t2 (a int);
+let $pos_master= query_get_value(SHOW MASTER STATUS, Position, 1);
+
+connection slave;
+
+# C2. Failure by IO thread
+# instruction to io thread
+set @@global.debug_dbug='d,simulate_checksum_test_failure';
+start slave io_thread;
+# When the checksum error is detected, the slave sets error code 1913
+# (ER_NETWORK_READ_EVENT_CHECKSUM_FAILURE) in queue_event(), then immediately
+# sets error 1595 (ER_SLAVE_RELAY_LOG_WRITE_FAILURE) in handle_slave_io().
+# So we usually get 1595, but it is occasionally possible to get 1913.
+--let $slave_io_errno= 1595,1913
+--let $show_slave_io_error= 0
+--disable_connect_log
+source include/wait_for_slave_io_error.inc;
+--enable_connect_log
+set @@global.debug_dbug='';
+
+# to make IO thread re-read it again w/o the failure
+start slave io_thread;
+let $slave_param= Read_Master_Log_Pos;
+let $slave_param_value= $pos_master;
+--disable_connect_log
+source include/wait_for_slave_param.inc;
+--enable_connect_log
+
+# C3. Failure by SQL thread
+# instruction to sql thread;
+set @@global.slave_sql_verify_checksum = 1;
+
+set @@global.debug_dbug='d,simulate_checksum_test_failure';
+
+start slave sql_thread;
+--let $slave_sql_errno= 1593
+--let $show_slave_sql_error= 1
+--disable_connect_log
+source include/wait_for_slave_sql_error.inc;
+--enable_connect_log
+
+# resuming SQL thread to parse out the event w/o the failure
+
+set @@global.debug_dbug='';
+--disable_connect_log
+source include/start_slave.inc;
+--enable_connect_log
+
+connection master;
+sync_slave_with_master;
+
+#connection slave;
+select count(*) as 'must be zero' from t2;
+
+#
+# D. Reset slave, Change-Master, Binlog & Relay-log rotations with
+#    random value on binlog_checksum on both master and slave
+#
+connection slave;
+stop slave;
+reset slave;
+
+# randomize slave server's own checksum policy
+set @@global.binlog_checksum= IF(floor((rand()*1000)%2), "CRC32", "NONE");
+flush logs;
+
+connection master;
+set @@global.binlog_checksum= CRC32;
+reset master;
+flush logs;
+create table t3 (a int, b char(5));
+
+connection slave;
+--disable_connect_log
+source include/start_slave.inc;
+--enable_connect_log
+
+connection master;
+sync_slave_with_master;
+
+#connection slave;
+select count(*) as 'must be zero' from t3;
+--disable_connect_log
+source include/stop_slave.inc;
+--enable_connect_log
+--replace_result $MASTER_MYPORT MASTER_PORT
+eval change master to master_host='127.0.0.1',master_port=$MASTER_MYPORT, master_user='root';
+
+connection master;
+flush logs;
+reset master;
+insert into t3 value (1, @@global.binlog_checksum);
+
+connection slave;
+--disable_connect_log
+source include/start_slave.inc;
+--enable_connect_log
+flush logs;
+
+connection master;
+sync_slave_with_master;
+
+#connection slave;
+select count(*) as 'must be one' from t3;
+
+connection master;
+set @@global.binlog_checksum= IF(floor((rand()*1000)%2), "CRC32", "NONE");
+insert into t3 value (1, @@global.binlog_checksum);
+sync_slave_with_master;
+
+#connection slave;
+
+#clean-up
+
+connection master;
+drop table t1, t2, t3;
+set @@global.master_verify_checksum = @save_master_verify_checksum;
+
+#
+# BUG#58564: flush_read_lock fails in mysql-trunk-bugfixing after merging with WL#2540
+#
+# Sanity check that verifies that no assertions are triggered because
+# of old FD events (generated by versions prior to server released with
+# checksums feature)
+#
+# There is no need for query log, if something wrong this should trigger
+# an assertion
+
+--disable_query_log
+
+BINLOG '
+MfmqTA8BAAAAZwAAAGsAAAABAAQANS41LjctbTMtZGVidWctbG9nAAAAAAAAAAAAAAAAAAAAAAAA
+AAAAAAAAAAAAAAAAAAAx+apMEzgNAAgAEgAEBAQEEgAAVAAEGggAAAAICAgCAA==
+';
+
+--enable_query_log
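The BINLOG statement above replays a format description event generated by a pre-checksum 5.5.7 server. Its base64 payload can be inspected independently of the server; reading the v4 common header layout (timestamp, type code, server id, event size) shows it is a FORMAT_DESCRIPTION_EVENT (type 15) from server id 1, carrying the version string:

```python
import base64
import struct

# the exact payload from the BINLOG statement in the test
payload = base64.b64decode(
    "MfmqTA8BAAAAZwAAAGsAAAABAAQANS41LjctbTMtZGVidWctbG9nAAAAAAAAAAAAAAAAAAAAAAAA"
    "AAAAAAAAAAAAAAAAAAAx+apMEzgNAAgAEgAEBAQEEgAAVAAEGggAAAAICAgCAA=="
)
# v4 common header: timestamp(4) type(1) server_id(4) event_size(4), little-endian
timestamp, type_code, server_id, event_size = struct.unpack_from("<IBII", payload, 0)
assert type_code == 15                    # FORMAT_DESCRIPTION_EVENT
assert server_id == 1
assert event_size == 103                  # matches len(payload)
assert b"5.5.7-m3-debug-log" in payload   # old, checksum-unaware server version
```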
+
+#connection slave;
+sync_slave_with_master;
+
+
+--echo *** Bug#59123 / MDEV-5799: INCIDENT_EVENT checksum written to error log as garbage characters ***
+
+--connection master
+
+--disable_connect_log
+--source include/wait_for_binlog_checkpoint.inc
+--enable_connect_log
+CREATE TABLE t4 (a INT PRIMARY KEY);
+INSERT INTO t4 VALUES (1);
+
+SET sql_log_bin=0;
+CALL mtr.add_suppression("\\[ERROR\\] Can't generate a unique log-filename");
+SET sql_log_bin=1;
+SET @old_dbug= @@GLOBAL.debug_dbug;
+SET debug_dbug= '+d,binlog_inject_new_name_error';
+--error ER_NO_UNIQUE_LOGFILE
+FLUSH LOGS;
+SET debug_dbug= @old_dbug;
+
+INSERT INTO t4 VALUES (2);
+
+--connection slave
+--let $slave_sql_errno= 1590
+--disable_connect_log
+--source include/wait_for_slave_sql_error.inc
+--enable_connect_log
+
+# Search the error log for the error message.
+# The bug was that 4 garbage bytes were output in the middle of the error
+# message; by searching for a pattern that spans that location, we can
+# catch the error.
+let $log_error_= `SELECT @@GLOBAL.log_error`;
+if(!$log_error_)
+{
+ # MySQL Server on windows is started with --console and thus
+ # does not know the location of its .err log, use default location
+ let $log_error_ = $MYSQLTEST_VARDIR/log/mysqld.2.err;
+}
+--let SEARCH_FILE= $log_error_
+--let SEARCH_RANGE=-50000
+--let SEARCH_PATTERN= Slave SQL: The incident LOST_EVENTS occurred on the master\. Message: error writing to the binary log, Internal MariaDB error code: 1590
+--disable_connect_log
+--source include/search_pattern_in_file.inc
+--enable_connect_log
+
+SELECT * FROM t4 ORDER BY a;
+STOP SLAVE IO_THREAD;
+SET sql_slave_skip_counter= 1;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+--connection master
+--save_master_pos
+
+--connection slave
+--sync_with_master
+SELECT * FROM t4 ORDER BY a;
+
+
+--connection slave
+set @@global.binlog_checksum = CRC32;
+set @@global.slave_sql_verify_checksum = @save_slave_sql_verify_checksum;
+
+--echo End of tests
+
+--connection master
+DROP TABLE t4;
+
+set @@global.binlog_checksum = CRC32;
+set @@global.master_verify_checksum = @save_master_verify_checksum;
+
+--disable_connect_log
+--source include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_checksum_cache.result b/mysql-test/suite/binlog_encryption/rpl_checksum_cache.result
new file mode 100644
index 0000000..d316749
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_checksum_cache.result
@@ -0,0 +1,136 @@
+include/master-slave.inc
+[connection master]
+call mtr.add_suppression("Unsafe statement written to the binary log
using statement format since BINLOG_FORMAT = STATEMENT. Statement is
unsafe because it uses a system function that may return a different
value on the slave. Statement: insert into t2 set data=repeat.*'a',
@act_size.*");
+call mtr.add_suppression("Unsafe statement written to the binary log
using statement format since BINLOG_FORMAT = STATEMENT. Statement is
unsafe because it uses a system function that may return a different
value on the slave. Statement: insert into t1 values.*
NAME_CONST.*'n',.*, @data .*");
+connection master;
+set @save_binlog_cache_size = @@global.binlog_cache_size;
+set @save_binlog_checksum = @@global.binlog_checksum;
+set @save_master_verify_checksum = @@global.master_verify_checksum;
+set @@global.binlog_cache_size = 4096;
+set @@global.binlog_checksum = CRC32;
+set @@global.master_verify_checksum = 1;
+connection slave;
+include/stop_slave.inc
+include/start_slave.inc
+connection master;
+flush status;
+show status like "binlog_cache_use";
+Variable_name Value
+Binlog_cache_use 0
+show status like "binlog_cache_disk_use";
+Variable_name Value
+Binlog_cache_disk_use 0
+drop table if exists t1;
+create table t1 (a int PRIMARY KEY, b CHAR(32)) engine=innodb;
+create procedure test.p_init (n int, size int)
+begin
+while n > 0 do
+select round(RAND() * size) into @act_size;
+set @data = repeat('a', @act_size);
+insert into t1 values(n, @data );
+set n= n-1;
+end while;
+end|
+begin;
+call test.p_init(4000, 32);
+commit;
+show status like "binlog_cache_use";
+Variable_name Value
+Binlog_cache_use 1
+*** binlog_cache_disk_use must be non-zero ***
+show status like "binlog_cache_disk_use";
+Variable_name Value
+Binlog_cache_disk_use 1
+connection slave;
+include/diff_tables.inc [master:test.t1, slave:test.t1]
+connection master;
+begin;
+delete from t1;
+commit;
+connection slave;
+connection master;
+flush status;
+create table t2(a int auto_increment primary key, data VARCHAR(12288)) ENGINE=Innodb;
+show status like "binlog_cache_use";
+Variable_name Value
+Binlog_cache_use 1
+*** binlog_cache_disk_use must be non-zero ***
+show status like "binlog_cache_disk_use";
+Variable_name Value
+Binlog_cache_disk_use 1
+connection slave;
+include/diff_tables.inc [master:test.t2, slave:test.t2]
+connection master;
+begin;
+delete from t2;
+commit;
+connection slave;
+connection master;
+flush status;
+create table t3(a int auto_increment primary key, data VARCHAR(8192)) engine=innodb;
+show status like "binlog_cache_use";
+Variable_name Value
+Binlog_cache_use 1
+*** binlog_cache_disk_use must be non-zero ***
+show status like "binlog_cache_disk_use";
+Variable_name Value
+Binlog_cache_disk_use 1
+connection slave;
+include/diff_tables.inc [master:test.t3, slave:test.t3]
+connection master;
+begin;
+delete from t3;
+commit;
+connection slave;
+connection master;
+flush status;
+create procedure test.p1 (n int)
+begin
+while n > 0 do
+case (select (round(rand()*100) % 3) + 1)
+when 1 then
+select round(RAND() * 32) into @act_size;
+set @data = repeat('a', @act_size);
+insert into t1 values(n, @data);
+when 2 then
+begin
+select round(8192 + RAND() * 4096) into @act_size;
+insert into t2 set data=repeat('a', @act_size);
+end;
+when 3 then
+begin
+select round(3686.4000 + RAND() * 819.2000) into @act_size;
+insert into t3 set data= repeat('a', @act_size);
+end;
+end case;
+set n= n-1;
+end while;
+end|
+set autocommit= 0;
+begin;
+call test.p1(1000);
+commit;
+show status like "binlog_cache_use";
+Variable_name Value
+Binlog_cache_use 1
+*** binlog_cache_disk_use must be non-zero ***
+show status like "binlog_cache_disk_use";
+Variable_name Value
+Binlog_cache_disk_use 1
+connection slave;
+include/diff_tables.inc [master:test.t1, slave:test.t1]
+include/diff_tables.inc [master:test.t2, slave:test.t2]
+include/diff_tables.inc [master:test.t3, slave:test.t3]
+connection master;
+begin;
+delete from t1;
+delete from t2;
+delete from t3;
+commit;
+drop table t1, t2, t3;
+set @@global.binlog_cache_size = @save_binlog_cache_size;
+set @@global.binlog_checksum = @save_binlog_checksum;
+set @@global.master_verify_checksum = @save_master_verify_checksum;
+drop procedure test.p_init;
+drop procedure test.p1;
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_checksum_cache.test b/mysql-test/suite/binlog_encryption/rpl_checksum_cache.test
new file mode 100644
index 0000000..2629e1f
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_checksum_cache.test
@@ -0,0 +1,271 @@
+#
+# The test was taken from the rpl suite as is
+#
+
+-- source include/master-slave.inc
+
+--enable_connect_log
+
+--disable_warnings
+call mtr.add_suppression("Unsafe statement written to the binary log
using statement format since BINLOG_FORMAT = STATEMENT. Statement is
unsafe because it uses a system function that may return a different
value on the slave. Statement: insert into t2 set data=repeat.*'a',
@act_size.*");
+call mtr.add_suppression("Unsafe statement written to the binary log using statement format since BINLOG_FORMAT = STATEMENT. Statement is unsafe because it uses a system function that may return a different value on the slave. Statement: insert into t1 values.* NAME_CONST.*'n',.*, @data .*");
+--enable_warnings
+
+connection master;
+set @save_binlog_cache_size = @@global.binlog_cache_size;
+set @save_binlog_checksum = @@global.binlog_checksum;
+set @save_master_verify_checksum = @@global.master_verify_checksum;
+set @@global.binlog_cache_size = 4096;
+set @@global.binlog_checksum = CRC32;
+set @@global.master_verify_checksum = 1;
+
+# restart slave to force the dump thread to verify events (on master side)
+connection slave;
+--disable_connect_log
+source include/stop_slave.inc;
+source include/start_slave.inc;
+--enable_connect_log
+
+connection master;
+
+#
+# Testing a critical part of checksum handling dealing with transaction cache.
+# The cache's buffer size is set to be less than the transaction's footprint
+# in binlog.
+#
+# To verify combined buffer-by-buffer read out of the file and fixing crc per event
+# there are the following parts:
+#
+# 1. the event size is much less than the cache's buffer
+# 2. the event size is bigger than the cache's buffer
+# 3. the event size is approximately the same as the cache's buffer
+# 4. all of the above
+
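The four regimes listed above can be sketched numerically. Below is a hypothetical illustration (not part of the test suite; the thresholds are my own choice) of how each part's event size relates to the `binlog_cache_size = 4096` configured earlier:

```python
# Hypothetical sketch (not part of the test suite): the three size regimes
# the test exercises, relative to the binlog_cache_size = 4096 set above.
CACHE_SIZE = 4096

def regime(event_size: int, cache_size: int = CACHE_SIZE) -> str:
    """Classify an event's size against the transaction cache buffer."""
    if event_size < cache_size // 4:
        return "much smaller"   # part 1: many small events fill the cache
    if event_size > cache_size:
        return "bigger"         # part 2: a single event overflows the buffer
    return "comparable"         # part 3: an event roughly fills the buffer

# Rows in part 1 are at most ~32 bytes, part 2 uses ~2-3x the cache size,
# part 3 about 0.9-1.1x of it; part 4 mixes all three at random.
assert regime(32) == "much smaller"
assert regime(3 * 4096) == "bigger"
assert regime(3700) == "comparable"
```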
+#
+# 1. the event size is much less than the cache's buffer
+#
+
+flush status;
+show status like "binlog_cache_use";
+show status like "binlog_cache_disk_use";
+--disable_warnings
+drop table if exists t1;
+--enable_warnings
+
+#
+# parameter to ensure the test slightly varies binlog content
+# between different invocations
+#
+let $deviation_size=32;
+eval create table t1 (a int PRIMARY KEY, b CHAR($deviation_size)) engine=innodb;
+
+# Now we are going to create transaction which is long enough so its
+# transaction binlog will be flushed to disk...
+
+delimiter |;
+create procedure test.p_init (n int, size int)
+begin
+ while n > 0 do
+ select round(RAND() * size) into @act_size;
+ set @data = repeat('a', @act_size);
+ insert into t1 values(n, @data );
+ set n= n-1;
+ end while;
+end|
+
+delimiter ;|
+
+let $1 = 4000; # PB2 can run it slow to time out on following sync_slave_with_master:s
+
+begin;
+--disable_warnings
+# todo: check if it is really so.
+#+Note 1592 Unsafe statement binlogged in statement format since BINLOG_FORMAT = STATEMENT. Reason for unsafeness: Statement uses a system function whose value may differ on slave.
+eval call test.p_init($1, $deviation_size);
+--enable_warnings
+commit;
+
+show status like "binlog_cache_use";
+--echo *** binlog_cache_disk_use must be non-zero ***
+show status like "binlog_cache_disk_use";
+
+sync_slave_with_master;
+
+let $diff_tables=master:test.t1, slave:test.t1;
+--disable_connect_log
+source include/diff_tables.inc;
+--enable_connect_log
+
+# undoing changes with verifying the above once again
+connection master;
+
+begin;
+delete from t1;
+commit;
+
+sync_slave_with_master;
+
+
+#
+# 2. the event size is bigger than the cache's buffer
+#
+connection master;
+
+flush status;
+let $t2_data_size= `select 3 * @@global.binlog_cache_size`;
+let $t2_aver_size= `select 2 * @@global.binlog_cache_size`;
+let $t2_max_rand= `select 1 * @@global.binlog_cache_size`;
+
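As a cross-check of the arithmetic in the `let` statements above: with `binlog_cache_size = 4096`, every part-2 row body lands between 2x and 3x the cache buffer, so each insert must spill the transaction cache to disk. A rough Python re-derivation (illustrative only; the constants mirror the `let` statements, nothing here is MariaDB API):

```python
# Illustrative only: reproduce part 2's size arithmetic in Python,
# assuming binlog_cache_size = 4096 as configured earlier in the test.
import random

CACHE = 4096
t2_aver_size = 2 * CACHE   # matches `let $t2_aver_size`
t2_max_rand = 1 * CACHE    # matches `let $t2_max_rand`

def part2_row_size(rng: random.Random) -> int:
    # round($t2_aver_size + RAND() * $t2_max_rand)
    return round(t2_aver_size + rng.random() * t2_max_rand)

rng = random.Random(0)
sizes = [part2_row_size(rng) for _ in range(100)]
# Every row is at least twice the cache buffer, so each insert spills.
assert all(2 * CACHE <= s <= 3 * CACHE for s in sizes)
```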
+eval create table t2(a int auto_increment primary key, data VARCHAR($t2_data_size)) ENGINE=Innodb;
+let $1=100;
+--disable_query_log
+begin;
+while ($1)
+{
+ eval select round($t2_aver_size + RAND() * $t2_max_rand) into @act_size;
+ set @data = repeat('a', @act_size);
+ insert into t2 set data = @data;
+ dec $1;
+}
+commit;
+--enable_query_log
+show status like "binlog_cache_use";
+--echo *** binlog_cache_disk_use must be non-zero ***
+show status like "binlog_cache_disk_use";
+
+sync_slave_with_master;
+
+let $diff_tables=master:test.t2, slave:test.t2;
+--disable_connect_log
+source include/diff_tables.inc;
+--enable_connect_log
+
+# undoing changes with verifying the above once again
+connection master;
+
+begin;
+delete from t2;
+commit;
+
+sync_slave_with_master;
+
+#
+# 3. the event size is approximately the same as the cache's buffer
+#
+
+connection master;
+
+flush status;
+let $t3_data_size= `select 2 * @@global.binlog_cache_size`;
+let $t3_aver_size= `select (9 * @@global.binlog_cache_size) / 10`;
+let $t3_max_rand= `select (2 * @@global.binlog_cache_size) / 10`;
+
+eval create table t3(a int auto_increment primary key, data VARCHAR($t3_data_size)) engine=innodb;
+
+let $1= 300;
+--disable_query_log
+begin;
+while ($1)
+{
+ eval select round($t3_aver_size + RAND() * $t3_max_rand) into @act_size;
+ insert into t3 set data= repeat('a', @act_size);
+ dec $1;
+}
+commit;
+--enable_query_log
+show status like "binlog_cache_use";
+--echo *** binlog_cache_disk_use must be non-zero ***
+show status like "binlog_cache_disk_use";
+
+sync_slave_with_master;
+
+let $diff_tables=master:test.t3, slave:test.t3;
+--disable_connect_log
+source include/diff_tables.inc;
+--enable_connect_log
+
+# undoing changes with verifying the above once again
+connection master;
+
+begin;
+delete from t3;
+commit;
+
+sync_slave_with_master;
+
+
+#
+# 4. all of the above
+#
+
+connection master;
+flush status;
+
+delimiter |;
+eval create procedure test.p1 (n int)
+begin
+ while n > 0 do
+ case (select (round(rand()*100) % 3) + 1)
+ when 1 then
+ select round(RAND() * $deviation_size) into @act_size;
+ set @data = repeat('a', @act_size);
+ insert into t1 values(n, @data);
+ when 2 then
+ begin
+ select round($t2_aver_size + RAND() * $t2_max_rand) into @act_size;
+ insert into t2 set data=repeat('a', @act_size);
+ end;
+ when 3 then
+ begin
+ select round($t3_aver_size + RAND() * $t3_max_rand) into @act_size;
+ insert into t3 set data= repeat('a', @act_size);
+ end;
+ end case;
+ set n= n-1;
+ end while;
+end|
+delimiter ;|
+
+let $1= 1000;
+set autocommit= 0;
+begin;
+--disable_warnings
+eval call test.p1($1);
+--enable_warnings
+commit;
+
+show status like "binlog_cache_use";
+--echo *** binlog_cache_disk_use must be non-zero ***
+show status like "binlog_cache_disk_use";
+
+sync_slave_with_master;
+
+let $diff_tables=master:test.t1, slave:test.t1;
+--disable_connect_log
+source include/diff_tables.inc;
+
+let $diff_tables=master:test.t2, slave:test.t2;
+source include/diff_tables.inc;
+
+let $diff_tables=master:test.t3, slave:test.t3;
+source include/diff_tables.inc;
+--enable_connect_log
+
+
+connection master;
+
+begin;
+delete from t1;
+delete from t2;
+delete from t3;
+commit;
+
+drop table t1, t2, t3;
+set @@global.binlog_cache_size = @save_binlog_cache_size;
+set @@global.binlog_checksum = @save_binlog_checksum;
+set @@global.master_verify_checksum = @save_master_verify_checksum;
+drop procedure test.p_init;
+drop procedure test.p1;
+
+--disable_connect_log
+--source include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_corruption.cnf b/mysql-test/suite/binlog_encryption/rpl_corruption.cnf
new file mode 100644
index 0000000..7f7d0ee
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_corruption.cnf
@@ -0,0 +1,9 @@
+!include my.cnf
+
+[mysqld.1]
+binlog-checksum=CRC32
+master-verify-checksum=1
+
+[mysqld.2]
+binlog-checksum=CRC32
+slave-sql-verify-checksum=1
diff --git a/mysql-test/suite/binlog_encryption/rpl_corruption.result b/mysql-test/suite/binlog_encryption/rpl_corruption.result
new file mode 100644
index 0000000..14a67b3
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_corruption.result
@@ -0,0 +1,63 @@
+include/master-slave.inc
+[connection master]
+call mtr.add_suppression('Found invalid event in binary log');
+call mtr.add_suppression('Slave I/O: Relay log write failure: could not queue event from master');
+call mtr.add_suppression('event read from binlog did not pass crc check');
+call mtr.add_suppression('Replication event checksum verification failed');
+call mtr.add_suppression('Event crc check failed! Most likely there is event corruption');
+call mtr.add_suppression('Slave SQL: Error initializing relay log position: I/O error reading event at position .*, error.* 1593');
+SET @old_master_verify_checksum = @@master_verify_checksum;
+# 1. Creating test table/data and set corruption position for testing
+connection master;
+* insert/update/delete rows in table t1 *
+CREATE TABLE t1 (a INT NOT NULL PRIMARY KEY, b VARCHAR(10), c VARCHAR(100));
+include/stop_slave.inc
+# 2. Corruption in master binlog and SHOW BINLOG EVENTS
+SET GLOBAL debug_dbug="+d,corrupt_read_log_event_char";
+SHOW BINLOG EVENTS;
+ERROR HY000: Error when executing command SHOW BINLOG EVENTS: Wrong offset or I/O error
+SET GLOBAL debug_dbug="-d,corrupt_read_log_event_char";
+# 3. Master read a corrupted event from binlog and send the error to slave
+SET GLOBAL debug_dbug="+d,corrupt_read_log_event2_set";
+connection slave;
+START SLAVE IO_THREAD;
+include/wait_for_slave_io_error.inc [errno=1236]
+connection master;
+SET GLOBAL debug_dbug="-d,corrupt_read_log_event2_set";
+# 4. Master read a corrupted event from binlog and send it to slave
+connection master;
+SET GLOBAL master_verify_checksum=0;
+SET GLOBAL debug_dbug="+d,corrupt_read_log_event2_set";
+connection slave;
+START SLAVE IO_THREAD;
+include/wait_for_slave_io_error.inc [errno=1595,1913]
+connection master;
+SET GLOBAL debug_dbug="-d,corrupt_read_log_event2_set";
+SET GLOBAL debug_dbug= "";
+SET GLOBAL master_verify_checksum=1;
+# 5. Slave. Corruption in network
+connection slave;
+SET GLOBAL debug_dbug="+d,corrupt_queue_event";
+START SLAVE IO_THREAD;
+include/wait_for_slave_io_error.inc [errno=1595,1913]
+SET GLOBAL debug_dbug="-d,corrupt_queue_event";
+# 6. Slave. Corruption in relay log
+SET GLOBAL debug_dbug="+d,corrupt_read_log_event_char";
+START SLAVE SQL_THREAD;
+include/wait_for_slave_sql_error.inc [errno=1593]
+SET GLOBAL debug_dbug="-d,corrupt_read_log_event_char";
+SET GLOBAL debug_dbug= "";
+# 7. Seek diff for tables on master and slave
+connection slave;
+include/start_slave.inc
+connection master;
+connection slave;
+include/diff_tables.inc [master:test.t1, slave:test.t1]
+# 8. Clean up
+connection master;
+SET GLOBAL debug_dbug= "";
+SET GLOBAL master_verify_checksum = @old_master_verify_checksum;
+DROP TABLE t1;
+connection slave;
+SET GLOBAL debug_dbug= "";
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_corruption.test b/mysql-test/suite/binlog_encryption/rpl_corruption.test
new file mode 100644
index 0000000..6413369
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_corruption.test
@@ -0,0 +1,193 @@
+#
+# The test was taken from the rpl suite as is
+#
+
+############################################################
+# Purpose: WL#5064 Testing with corrupted events.
+# The test emulates the corruption at the various stages
+# of replication:
+# - in binlog file
+# - in network
+# - in relay log
+############################################################
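For orientation, the checksums this test corrupts work roughly as follows (a simplified sketch of the idea, not the server's actual event layout or code): with `binlog_checksum=CRC32`, each event carries a trailing 4-byte CRC32 of its bytes, and readers recompute it to detect exactly the kinds of corruption injected below.

```python
# Simplified sketch of CRC32-protected events (assumption: this mirrors,
# in reduced form, how binlog_checksum=CRC32 protects each binlog event;
# it is NOT the server's actual event format).
import struct
import zlib

def add_checksum(event: bytes) -> bytes:
    """Append a little-endian 4-byte CRC32 of the event bytes."""
    return event + struct.pack("<I", zlib.crc32(event) & 0xFFFFFFFF)

def verify_checksum(data: bytes) -> bool:
    """Recompute the CRC over the body and compare with the stored value."""
    event, stored = data[:-4], struct.unpack("<I", data[-4:])[0]
    return (zlib.crc32(event) & 0xFFFFFFFF) == stored

ev = add_checksum(b"example event payload")
assert verify_checksum(ev)              # a clean event passes
corrupted = b"X" + ev[1:]               # flip one byte, as the injections below do
assert not verify_checksum(corrupted)   # the crc check fails
```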
+
+#
+# The tests intensively utilize @@global.debug. Note,
+# Bug#11765758 - 58754,
+# @@global.debug is read by the slave threads through dbug-interface.
+# Hence, before a client thread sets @@global.debug we have to ensure that:
+# (a) the slave threads are stopped, or (b) the slave threads are in
+# sync and waiting.
+
+--source include/have_debug.inc
+--source include/master-slave.inc
+
+--enable_connect_log
+
+# Block legal errors for MTR
+call mtr.add_suppression('Found invalid event in binary log');
+call mtr.add_suppression('Slave I/O: Relay log write failure: could not queue event from master');
+call mtr.add_suppression('event read from binlog did not pass crc check');
+call mtr.add_suppression('Replication event checksum verification failed');
+call mtr.add_suppression('Event crc check failed! Most likely there is event corruption');
+call mtr.add_suppression('Slave SQL: Error initializing relay log position: I/O error reading event at position .*, error.* 1593');
+
+SET @old_master_verify_checksum = @@master_verify_checksum;
+
+# Creating test table/data and set corruption position for testing
+--echo # 1. Creating test table/data and set corruption position for testing
+--connection master
+--echo * insert/update/delete rows in table t1 *
+# Corruption algorithm modifies only the first event and
+# then will be reset. To avoid checking always the first event
+# from binlog (usually it is FD) we randomly execute different
+# statements and set position for corruption inside events.
+
+CREATE TABLE t1 (a INT NOT NULL PRIMARY KEY, b VARCHAR(10), c VARCHAR(100));
+--disable_query_log
+let $i=`SELECT 3+CEILING(10*RAND())`;
+let $j=1;
+let $pos=0;
+while ($i) {
+ eval INSERT INTO t1 VALUES ($j, 'a', NULL);
+ if (`SELECT RAND() > 0.7`)
+ {
+ eval UPDATE t1 SET c = REPEAT('a', 20) WHERE a = $j;
+ }
+ if (`SELECT RAND() > 0.8`)
+ {
+ eval DELETE FROM t1 WHERE a = $j;
+ }
+ if (!$pos) {
+ let $pos= query_get_value(SHOW MASTER STATUS, Position, 1);
+ --sync_slave_with_master
+ --disable_connect_log
+ --source include/stop_slave.inc
+ --enable_connect_log
+ --disable_query_log
+ --connection master
+ }
+ dec $i;
+ inc $j;
+}
+--enable_query_log
+
+
+# Emulate corruption in binlog file when SHOW BINLOG EVENTS is executing
+--echo # 2. Corruption in master binlog and SHOW BINLOG EVENTS
+SET GLOBAL debug_dbug="+d,corrupt_read_log_event_char";
+--echo SHOW BINLOG EVENTS;
+--disable_query_log
+send_eval SHOW BINLOG EVENTS FROM $pos;
+--enable_query_log
+--error ER_ERROR_WHEN_EXECUTING_COMMAND
+reap;
+
+SET GLOBAL debug_dbug="-d,corrupt_read_log_event_char";
+
+# Emulate corruption on master with crc checking on master
+--echo # 3. Master read a corrupted event from binlog and send the error to slave
+
+# We have a rare but nasty potential race here: if the dump thread on
+# the master for the _old_ slave connection has not yet discovered
+# that the slave has disconnected, we will inject the corrupt event on
+# the wrong connection, and the test will fail
+# (+d,corrupt_read_log_event2 corrupts only one event).
+# So kill any lingering dump thread (we need to kill; otherwise dump thread
+# could manage to send all events down the socket before seeing it close, and
+# hang forever waiting for new binlog events to be created).
+let $id= `select id from information_schema.processlist where command = "Binlog Dump"`;
+if ($id)
+{
+ --disable_query_log
+ --error 0,1094
+ eval kill $id;
+ --enable_query_log
+}
+let $wait_condition=
+ SELECT COUNT(*)=0 FROM INFORMATION_SCHEMA.PROCESSLIST WHERE command = 'Binlog Dump';
+--disable_connect_log
+--source include/wait_condition.inc
+--enable_connect_log
+
+SET GLOBAL debug_dbug="+d,corrupt_read_log_event2_set";
+--connection slave
+START SLAVE IO_THREAD;
+let $slave_io_errno= 1236;
+--let $slave_timeout= 10
+--disable_connect_log
+--source include/wait_for_slave_io_error.inc
+--enable_connect_log
+--connection master
+SET GLOBAL debug_dbug="-d,corrupt_read_log_event2_set";
+
+# Emulate corruption on master without crc checking on master
+--echo # 4. Master read a corrupted event from binlog and send it to slave
+--connection master
+SET GLOBAL master_verify_checksum=0;
+SET GLOBAL debug_dbug="+d,corrupt_read_log_event2_set";
+--connection slave
+START SLAVE IO_THREAD;
+# When the checksum error is detected, the slave sets error code 1913
+# (ER_NETWORK_READ_EVENT_CHECKSUM_FAILURE) in queue_event(), then immediately
+# sets error 1595 (ER_SLAVE_RELAY_LOG_WRITE_FAILURE) in handle_slave_io().
+# So we usually get 1595, but it is occasionally possible to get 1913.
+let $slave_io_errno= 1595,1913;
+--disable_connect_log
+--source include/wait_for_slave_io_error.inc
+--enable_connect_log
+--connection master
+SET GLOBAL debug_dbug="-d,corrupt_read_log_event2_set";
+SET GLOBAL debug_dbug= "";
+SET GLOBAL master_verify_checksum=1;
+
+# Emulate corruption in network
+--echo # 5. Slave. Corruption in network
+--connection slave
+SET GLOBAL debug_dbug="+d,corrupt_queue_event";
+START SLAVE IO_THREAD;
+let $slave_io_errno= 1595,1913;
+--disable_connect_log
+--source include/wait_for_slave_io_error.inc
+--enable_connect_log
+SET GLOBAL debug_dbug="-d,corrupt_queue_event";
+
+# Emulate corruption in relay log
+--echo # 6. Slave. Corruption in relay log
+
+SET GLOBAL debug_dbug="+d,corrupt_read_log_event_char";
+
+START SLAVE SQL_THREAD;
+let $slave_sql_errno= 1593;
+--disable_connect_log
+--source include/wait_for_slave_sql_error.inc
+--enable_connect_log
+
+SET GLOBAL debug_dbug="-d,corrupt_read_log_event_char";
+SET GLOBAL debug_dbug= "";
+
+# Start normal replication and compare same table on master
+# and slave
+--echo # 7. Seek diff for tables on master and slave
+--connection slave
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+--connection master
+--sync_slave_with_master
+let $diff_tables= master:test.t1, slave:test.t1;
+--disable_connect_log
+--source include/diff_tables.inc
+--enable_connect_log
+
+# Clean up
+--echo # 8. Clean up
+--connection master
+SET GLOBAL debug_dbug= "";
+SET GLOBAL master_verify_checksum = @old_master_verify_checksum;
+DROP TABLE t1;
+--sync_slave_with_master
+SET GLOBAL debug_dbug= "";
+
+--disable_connect_log
+--source include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_gtid_basic.cnf b/mysql-test/suite/binlog_encryption/rpl_gtid_basic.cnf
new file mode 100644
index 0000000..ae47ef7
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_gtid_basic.cnf
@@ -0,0 +1,24 @@
+!include my.cnf
+
+[mysqld.1]
+log-slave-updates
+
+[mysqld.2]
+init-rpl-role= slave
+master-retry-count= 10
+skip-slave-start
+
+[mysqld.3]
+log-slave-updates
+innodb
+
+[mysqld.4]
+log-slave-updates
+innodb
+
+[ENV]
+SERVER_MYPORT_3= @mysqld.3.port
+SERVER_MYSOCK_3= @mysqld.3.socket
+
+SERVER_MYPORT_4= @mysqld.4.port
+SERVER_MYSOCK_4= @mysqld.4.socket
diff --git a/mysql-test/suite/binlog_encryption/rpl_gtid_basic.result b/mysql-test/suite/binlog_encryption/rpl_gtid_basic.result
new file mode 100644
index 0000000..c1fbeb8
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_gtid_basic.result
@@ -0,0 +1,558 @@
+include/rpl_init.inc [topology=1->2->3->4]
+connection server_1;
+*** GTID position should be empty here ***
+SELECT BINLOG_GTID_POS('<BINLOG_FILE>',<BINLOG_POS>);
+BINLOG_GTID_POS('<BINLOG_FILE>',<BINLOG_POS>)
+
+CREATE TABLE t1 (a INT PRIMARY KEY, b VARCHAR(10)) ENGINE=MyISAM;
+CREATE TABLE t2 (a INT PRIMARY KEY, b VARCHAR(10)) ENGINE=InnoDB;
+INSERT INTO t1 VALUES (1, "m1");
+INSERT INTO t1 VALUES (2, "m2"), (3, "m3"), (4, "m4");
+INSERT INTO t2 VALUES (1, "i1");
+BEGIN;
+INSERT INTO t2 VALUES (2, "i2"), (3, "i3");
+INSERT INTO t2 VALUES (4, "i4");
+COMMIT;
+*** GTID position should be non-empty here ***
+SELECT BINLOG_GTID_POS('<BINLOG_FILE>',<BINLOG_POS>);
+BINLOG_GTID_POS('<BINLOG_FILE>',<BINLOG_POS>)
+<GTID_POS_SERVER_1>
+connection server_2;
+*** GTID position should be the same as on server_1 ***
+SELECT BINLOG_GTID_POS('<BINLOG_FILE>',<BINLOG_POS>);
+BINLOG_GTID_POS('<BINLOG_FILE>',<BINLOG_POS>)
+<GTID_POS_SERVER_1>
+SELECT * FROM t1 ORDER BY a;
+a b
+1 m1
+2 m2
+3 m3
+4 m4
+SELECT * FROM t2 ORDER BY a;
+a b
+1 i1
+2 i2
+3 i3
+4 i4
+connection server_3;
+SELECT * FROM t1 ORDER BY a;
+a b
+1 m1
+2 m2
+3 m3
+4 m4
+SELECT * FROM t2 ORDER BY a;
+a b
+1 i1
+2 i2
+3 i3
+4 i4
+connection server_4;
+SELECT * FROM t1 ORDER BY a;
+a b
+1 m1
+2 m2
+3 m3
+4 m4
+SELECT * FROM t2 ORDER BY a;
+a b
+1 i1
+2 i2
+3 i3
+4 i4
+*** Now take out D, let it fall behind a bit, and then test re-attaching it to A ***
+connection server_4;
+include/stop_slave.inc
+connection server_1;
+INSERT INTO t1 VALUES (5, "m1a");
+INSERT INTO t2 VALUES (5, "i1a");
+connection server_4;
+CHANGE MASTER TO master_host = '127.0.0.1', master_port = MASTER_PORT,
+MASTER_USE_GTID=CURRENT_POS;
+include/start_slave.inc
+SELECT * FROM t1 ORDER BY a;
+a b
+1 m1
+2 m2
+3 m3
+4 m4
+5 m1a
+SELECT * FROM t2 ORDER BY a;
+a b
+1 i1
+2 i2
+3 i3
+4 i4
+5 i1a
+*** Now move B to D (C is still replicating from B) ***
+connection server_2;
+include/stop_slave.inc
+CHANGE MASTER TO master_host = '127.0.0.1', master_port = SERVER_MYPORT_4,
+MASTER_USE_GTID=CURRENT_POS;
+include/start_slave.inc
+connection server_4;
+UPDATE t2 SET b="j1a" WHERE a=5;
+connection server_2;
+SELECT * FROM t1 ORDER BY a;
+a b
+1 m1
+2 m2
+3 m3
+4 m4
+5 m1a
+SELECT * FROM t2 ORDER BY a;
+a b
+1 i1
+2 i2
+3 i3
+4 i4
+5 j1a
+*** Now move C to D, after letting it fall a little behind ***
+connection server_3;
+include/stop_slave.inc
+connection server_1;
+INSERT INTO t2 VALUES (6, "i6b");
+INSERT INTO t2 VALUES (7, "i7b");
+include/save_master_gtid.inc
+connection server_3;
+CHANGE MASTER TO master_host = '127.0.0.1', master_port = SERVER_MYPORT_4,
+MASTER_USE_GTID=CURRENT_POS;
+include/start_slave.inc
+include/sync_with_master_gtid.inc
+SELECT * FROM t2 ORDER BY a;
+a b
+1 i1
+2 i2
+3 i3
+4 i4
+5 j1a
+6 i6b
+7 i7b
+*** Now change everything back to what it was, to make rpl_end.inc happy
+connection server_2;
+include/sync_with_master_gtid.inc
+include/stop_slave.inc
+CHANGE MASTER TO master_host = '127.0.0.1', master_port = MASTER_MYPORT;
+include/start_slave.inc
+include/wait_for_slave_to_start.inc
+connection server_3;
+include/stop_slave.inc
+CHANGE MASTER TO master_host = '127.0.0.1', master_port = SLAVE_MYPORT;
+include/start_slave.inc
+include/sync_with_master_gtid.inc
+connection server_4;
+include/stop_slave.inc
+CHANGE MASTER TO master_host = '127.0.0.1', master_port = SERVER_MYPORT_3;
+include/start_slave.inc
+connection server_1;
+DROP TABLE t1,t2;
+include/save_master_gtid.inc
+*** A few more checks for BINLOG_GTID_POS function ***
+SELECT BINLOG_GTID_POS();
+ERROR 42000: Incorrect parameter count in the call to native function 'BINLOG_GTID_POS'
+SELECT BINLOG_GTID_POS('a');
+ERROR 42000: Incorrect parameter count in the call to native function 'BINLOG_GTID_POS'
+SELECT BINLOG_GTID_POS('a',1,NULL);
+ERROR 42000: Incorrect parameter count in the call to native function 'BINLOG_GTID_POS'
+SELECT BINLOG_GTID_POS(1,'a');
+BINLOG_GTID_POS(1,'a')
+NULL
+Warnings:
+Warning 1292 Truncated incorrect INTEGER value: 'a'
+SELECT BINLOG_GTID_POS(NULL,NULL);
+BINLOG_GTID_POS(NULL,NULL)
+NULL
+SELECT BINLOG_GTID_POS('',1);
+BINLOG_GTID_POS('',1)
+
+SELECT BINLOG_GTID_POS('a',1);
+BINLOG_GTID_POS('a',1)
+NULL
+SELECT BINLOG_GTID_POS('master-bin.000001',-1);
+BINLOG_GTID_POS('master-bin.000001',-1)
+NULL
+SELECT BINLOG_GTID_POS('master-bin.000001',0);
+BINLOG_GTID_POS('master-bin.000001',0)
+
+SELECT BINLOG_GTID_POS('master-bin.000001',18446744073709551615);
+BINLOG_GTID_POS('master-bin.000001',18446744073709551615)
+NULL
+SELECT BINLOG_GTID_POS('master-bin.000001',18446744073709551616);
+BINLOG_GTID_POS('master-bin.000001',18446744073709551616)
+NULL
+Warnings:
+Warning 1916 Got overflow when converting '18446744073709551616' to INT. Value truncated.
+*** Some tests of @@GLOBAL.gtid_binlog_state ***
+connection server_2;
+include/sync_with_master_gtid.inc
+include/stop_slave.inc
+connection server_1;
+SET @old_state= @@GLOBAL.gtid_binlog_state;
+SET GLOBAL gtid_binlog_state = '';
+ERROR HY000: This operation is not allowed if any GTID has been logged to the binary log. Run RESET MASTER first to erase the log
+RESET MASTER;
+SET GLOBAL gtid_binlog_state = '';
+FLUSH LOGS;
+show binary logs;
+Log_name File_size
+master-bin.000001 #
+master-bin.000002 #
+SET GLOBAL gtid_binlog_state = '0-1-10,1-2-20,0-3-30';
+show binary logs;
+Log_name File_size
+master-bin.000001 #
+include/show_binlog_events.inc
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000001 # Format_desc # # SERVER_VERSION, BINLOG_VERSION
+master-bin.000001 # Start_encryption # #
+master-bin.000001 # Gtid_list # # [#-#-#]
+master-bin.000001 # Binlog_checkpoint # # master-bin.000001
+SET GLOBAL gtid_binlog_state = @old_state;
+ERROR HY000: This operation is not allowed if any GTID has been logged to the binary log. Run RESET MASTER first to erase the log
+RESET MASTER;
+SET GLOBAL gtid_binlog_state = @old_state;
+CREATE TABLE t1 (a INT PRIMARY KEY);
+SET gtid_seq_no=100;
+INSERT INTO t1 VALUES (1);
+include/save_master_gtid.inc
+connection server_2;
+include/start_slave.inc
+include/sync_with_master_gtid.inc
+SELECT * FROM t1;
+a
+1
+Gtid_IO_Pos = '0-1-100'
+*** Test @@LAST_GTID and MASTER_GTID_WAIT() ***
+connection server_1;
+DROP TABLE t1;
+CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
+connection server_2;
+include/stop_slave.inc
+connect m1,127.0.0.1,root,,test,$SERVER_MYPORT_1,;
+SELECT @@last_gtid;
+@@last_gtid
+
+SET gtid_seq_no=110;
+SELECT @@last_gtid;
+@@last_gtid
+
+BEGIN;
+SELECT @@last_gtid;
+@@last_gtid
+
+INSERT INTO t1 VALUES (2);
+SELECT @@last_gtid;
+@@last_gtid
+
+COMMIT;
+SELECT @@last_gtid;
+@@last_gtid
+0-1-110
+connect s1,127.0.0.1,root,,test,$SERVER_MYPORT_2,;
+SET @pos= '0-1-110';
+SELECT master_gtid_wait(NULL);
+master_gtid_wait(NULL)
+NULL
+SELECT master_gtid_wait('', NULL);
+master_gtid_wait('', NULL)
+0
+SHOW STATUS LIKE 'Master_gtid_wait_count';
+Variable_name Value
+Master_gtid_wait_count 1
+SHOW STATUS LIKE 'Master_gtid_wait_timeouts';
+Variable_name Value
+Master_gtid_wait_timeouts 0
+SHOW STATUS LIKE 'Master_gtid_wait_time';
+Variable_name Value
+Master_gtid_wait_time 0
+SELECT master_gtid_wait(@pos, 0.5);
+master_gtid_wait(@pos, 0.5)
+-1
+SELECT * FROM t1 ORDER BY a;
+a
+SELECT master_gtid_wait(@pos);
+connection server_2;
+include/start_slave.inc
+connection s1;
+master_gtid_wait(@pos)
+0
+SELECT * FROM t1 ORDER BY a;
+a
+2
+include/stop_slave.inc
+connection server_1;
+SET gtid_domain_id= 1;
+INSERT INTO t1 VALUES (3);
+connection s1;
+SET @pos= 'POS';
+SELECT master_gtid_wait(@pos, 0);
+master_gtid_wait(@pos, 0)
+-1
+SELECT * FROM t1 WHERE a >= 3;
+a
+SELECT master_gtid_wait(@pos, -1);
+connection server_2;
+include/start_slave.inc
+connection s1;
+master_gtid_wait(@pos, -1)
+0
+SELECT * FROM t1 WHERE a >= 3;
+a
+3
+SELECT master_gtid_wait('1-1-1', 0);
+master_gtid_wait('1-1-1', 0)
+0
+connection s1;
+SELECT master_gtid_wait('2-1-1,1-1-4,0-1-110');
+connect s2,127.0.0.1,root,,test,$SERVER_MYPORT_2,;
+SELECT master_gtid_wait('0-1-1000', 0.5);
+connect s3,127.0.0.1,root,,test,$SERVER_MYPORT_2,;
+SELECT master_gtid_wait('0-1-2000');
+connect s4,127.0.0.1,root,,test,$SERVER_MYPORT_2,;
+SELECT master_gtid_wait('2-1-10');
+connect s5,127.0.0.1,root,,test,$SERVER_MYPORT_2,;
+SELECT master_gtid_wait('2-1-6', 1);
+connect s6,127.0.0.1,root,,test,$SERVER_MYPORT_2,;
+SELECT master_gtid_wait('2-1-5');
+connect s7,127.0.0.1,root,,test,$SERVER_MYPORT_2,;
+SELECT master_gtid_wait('2-1-10');
+connect s8,127.0.0.1,root,,test,$SERVER_MYPORT_2,;
+SELECT master_gtid_wait('2-1-5,1-1-4,0-1-110');
+connect s9,127.0.0.1,root,,test,$SERVER_MYPORT_2,;
+SELECT master_gtid_wait('2-1-2');
+connection server_2;
+SHOW STATUS LIKE 'Master_gtid_wait_timeouts';
+Variable_name Value
+Master_gtid_wait_timeouts 0
+SHOW STATUS LIKE 'Master_gtid_wait_count';
+Variable_name Value
+Master_gtid_wait_count 3
+SELECT master_gtid_wait('1-1-1');
+master_gtid_wait('1-1-1')
+0
+SHOW STATUS LIKE 'Master_gtid_wait_timeouts';
+Variable_name Value
+Master_gtid_wait_timeouts 0
+SHOW STATUS LIKE 'Master_gtid_wait_count';
+Variable_name Value
+Master_gtid_wait_count 4
+SET @a= MASTER_GTID_WAIT_TIME;
+SELECT IF(@a <= 100*1000*1000, "OK", CONCAT("Error: wait time ", @a, " is larger than expected"))
+AS Master_gtid_wait_time_as_expected;
+Master_gtid_wait_time_as_expected
+OK
+connect s10,127.0.0.1,root,,test,$SERVER_MYPORT_2,;
+SELECT master_gtid_wait('0-1-109');
+connection server_2;
+SHOW STATUS LIKE 'Master_gtid_wait_timeouts';
+Variable_name Value
+Master_gtid_wait_timeouts 0
+SHOW STATUS LIKE 'Master_gtid_wait_count';
+Variable_name Value
+Master_gtid_wait_count 4
+SELECT master_gtid_wait('2-1-2', 0.5);
+master_gtid_wait('2-1-2', 0.5)
+-1
+SHOW STATUS LIKE 'Master_gtid_wait_timeouts';
+Variable_name Value
+Master_gtid_wait_timeouts 1
+SHOW STATUS LIKE 'Master_gtid_wait_count';
+Variable_name Value
+Master_gtid_wait_count 5
+SET @a= MASTER_GTID_WAIT_TIME;
+SELECT IF(@a BETWEEN 0.4*1000*1000 AND 100*1000*1000, "OK", CONCAT("Error: wait time ", @a, " not as expected")) AS Master_gtid_wait_time_as_expected;
+Master_gtid_wait_time_as_expected
+OK
+KILL QUERY KILL_ID;
+connection s3;
+ERROR 70100: Query execution was interrupted
+connection server_1;
+SET gtid_domain_id=2;
+SET gtid_seq_no=2;
+INSERT INTO t1 VALUES (4);
+connection s9;
+master_gtid_wait('2-1-2')
+0
+connection server_2;
+KILL CONNECTION KILL_ID;
+connection s6;
+Got one of the listed errors
+connection server_1;
+SET gtid_domain_id=1;
+SET gtid_seq_no=4;
+INSERT INTO t1 VALUES (5);
+SET gtid_domain_id=2;
+SET gtid_seq_no=5;
+INSERT INTO t1 VALUES (6);
+connection s8;
+master_gtid_wait('2-1-5,1-1-4,0-1-110')
+0
+connection s1;
+master_gtid_wait('2-1-1,1-1-4,0-1-110')
+0
+connection s2;
+master_gtid_wait('0-1-1000', 0.5)
+-1
+connection s5;
+master_gtid_wait('2-1-6', 1)
+-1
+connection s10;
+master_gtid_wait('0-1-109')
+0
+connection server_1;
+SET gtid_domain_id=2;
+SET gtid_seq_no=10;
+INSERT INTO t1 VALUES (7);
+connection s4;
+master_gtid_wait('2-1-10')
+0
+connection s7;
+master_gtid_wait('2-1-10')
+0
+*** Test gtid_slave_pos when used with GTID ***
+connection server_2;
+include/stop_slave.inc
+connection server_1;
+SET gtid_domain_id=2;
+SET gtid_seq_no=1000;
+INSERT INTO t1 VALUES (10);
+INSERT INTO t1 VALUES (11);
+connection server_2;
+SET sql_slave_skip_counter= 1;
+include/start_slave.inc
+SELECT * FROM t1 WHERE a >= 10 ORDER BY a;
+a
+11
+SELECT IF(LOCATE("2-1-1001", @@GLOBAL.gtid_slave_pos)>0, "Ok", CONCAT("ERROR! expected GTID 2-1-1001 not found in gtid_slave_pos: ", @@GLOBAL.gtid_slave_pos)) AS status;
+status
+Ok
+include/stop_slave.inc
+connection server_1;
+SET gtid_domain_id=2;
+SET gtid_seq_no=1010;
+INSERT INTO t1 VALUES (12);
+INSERT INTO t1 VALUES (13);
+connection server_2;
+SET sql_slave_skip_counter= 2;
+include/start_slave.inc
+SELECT * FROM t1 WHERE a >= 10 ORDER BY a;
+a
+11
+13
+SELECT IF(LOCATE("2-1-1011", @@GLOBAL.gtid_slave_pos)>0, "Ok", CONCAT("ERROR! expected GTID 2-1-1011 not found in gtid_slave_pos: ", @@GLOBAL.gtid_slave_pos)) AS status;
+status
+Ok
+include/stop_slave.inc
+connection server_1;
+SET gtid_domain_id=2;
+SET gtid_seq_no=1020;
+INSERT INTO t1 VALUES (14);
+INSERT INTO t1 VALUES (15);
+INSERT INTO t1 VALUES (16);
+connection server_2;
+SET sql_slave_skip_counter= 3;
+include/start_slave.inc
+SELECT * FROM t1 WHERE a >= 10 ORDER BY a;
+a
+11
+13
+15
+16
+SELECT IF(LOCATE("2-1-1022", @@GLOBAL.gtid_slave_pos)>0, "Ok", CONCAT("ERROR! expected GTID 2-1-1022 not found in gtid_slave_pos: ", @@GLOBAL.gtid_slave_pos)) AS status;
+status
+Ok
+include/stop_slave.inc
+connection server_1;
+SET gtid_domain_id=2;
+SET gtid_seq_no=1030;
+INSERT INTO t1 VALUES (17);
+INSERT INTO t1 VALUES (18);
+INSERT INTO t1 VALUES (19);
+connection server_2;
+SET sql_slave_skip_counter= 5;
+include/start_slave.inc
+SELECT * FROM t1 WHERE a >= 10 ORDER BY a;
+a
+11
+13
+15
+16
+19
+SELECT IF(LOCATE("2-1-1032", @@GLOBAL.gtid_slave_pos)>0, "Ok", CONCAT("ERROR! expected GTID 2-1-1032 not found in gtid_slave_pos: ", @@GLOBAL.gtid_slave_pos)) AS status;
+status
+Ok
+include/stop_slave.inc
+connection server_1;
+SET gtid_domain_id=3;
+SET gtid_seq_no=100;
+CREATE TABLE t2 (a INT PRIMARY KEY);
+DROP TABLE t2;
+SET gtid_domain_id=2;
+SET gtid_seq_no=1040;
+INSERT INTO t1 VALUES (20);
+connection server_2;
+SET @saved_mode= @@GLOBAL.slave_ddl_exec_mode;
+SET GLOBAL slave_ddl_exec_mode=STRICT;
+SET sql_slave_skip_counter=1;
+START SLAVE UNTIL master_gtid_pos="3-1-100";
+include/sync_with_master_gtid.inc
+include/wait_for_slave_to_stop.inc
+SELECT * FROM t2;
+ERROR 42S02: Table 'test.t2' doesn't exist
+SELECT IF(LOCATE("3-1-100", @@GLOBAL.gtid_slave_pos)>0, "Ok", CONCAT("ERROR! expected GTID 3-1-100 not found in gtid_slave_pos: ", @@GLOBAL.gtid_slave_pos)) AS status;
+status
+Ok
+SET sql_log_bin=0;
+CALL mtr.add_suppression("Slave: Unknown table 'test\\.t2' Error_code: 1051");
+SET sql_log_bin=1;
+START SLAVE;
+include/wait_for_slave_sql_error.inc [errno=1051]
+SELECT IF(LOCATE("3-1-100", @@GLOBAL.gtid_slave_pos)>0, "Ok", CONCAT("ERROR! expected GTID 3-1-100 not found in gtid_slave_pos: ", @@GLOBAL.gtid_slave_pos)) AS status;
+status
+Ok
+STOP SLAVE IO_THREAD;
+SET sql_slave_skip_counter=2;
+include/start_slave.inc
+SELECT * FROM t1 WHERE a >= 20 ORDER BY a;
+a
+20
+SELECT IF(LOCATE("3-1-101", @@GLOBAL.gtid_slave_pos)>0, "Ok", CONCAT("ERROR! expected GTID 3-1-101 not found in gtid_slave_pos: ", @@GLOBAL.gtid_slave_pos)) AS status;
+status
+Ok
+SELECT IF(LOCATE("2-1-1040", @@GLOBAL.gtid_slave_pos)>0, "Ok", CONCAT("ERROR! expected GTID 2-1-1040 not found in gtid_slave_pos: ", @@GLOBAL.gtid_slave_pos)) AS status;
+status
+Ok
+SET GLOBAL slave_ddl_exec_mode= @saved_mode;
+*** Test GTID-connecting to a master with out-of-order sequence numbers in the binlog. ***
+connection server_1;
+SET gtid_domain_id= @@GLOBAL.gtid_domain_id;
+INSERT INTO t1 VALUES (31);
+connection server_2;
+SET gtid_domain_id= @@GLOBAL.gtid_domain_id;
+INSERT INTO t1 VALUES (32);
+connection server_1;
+INSERT INTO t1 VALUES (33);
+connection server_2;
+connection server_3;
+include/stop_slave.inc
+connection server_1;
+INSERT INTO t1 VALUES (34);
+connection server_2;
+connection server_3;
+include/start_slave.inc
+SELECT * FROM t1 WHERE a >= 30 ORDER BY a;
+a
+31
+32
+33
+34
+connection server_4;
+SELECT * FROM t1 WHERE a >= 30 ORDER BY a;
+a
+31
+32
+33
+34
+connection server_1;
+DROP TABLE t1;
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_gtid_basic.test b/mysql-test/suite/binlog_encryption/rpl_gtid_basic.test
new file mode 100644
index 0000000..1ce2a25
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_gtid_basic.test
@@ -0,0 +1,644 @@
+#
+# The test was taken from the rpl suite as is
+#
+
+--let $rpl_topology=1->2->3->4
+--source include/rpl_init.inc
+
+--enable_connect_log
+
+# Set up a 4-deep replication topology, then test various fail-overs
+# using GTID.
+#
+# A -> B -> C -> D
+
+connection server_1;
+--disable_connect_log
+--source include/wait_for_binlog_checkpoint.inc
+--enable_connect_log
+--let $binlog_file = query_get_value(SHOW MASTER STATUS,File,1)
+--let $binlog_pos = query_get_value(SHOW MASTER STATUS,Position,1)
+--echo *** GTID position should be empty here ***
+--replace_result $binlog_file <BINLOG_FILE> $binlog_pos <BINLOG_POS>
+eval SELECT BINLOG_GTID_POS('$binlog_file',$binlog_pos);
+
+CREATE TABLE t1 (a INT PRIMARY KEY, b VARCHAR(10)) ENGINE=MyISAM;
+CREATE TABLE t2 (a INT PRIMARY KEY, b VARCHAR(10)) ENGINE=InnoDB;
+INSERT INTO t1 VALUES (1, "m1");
+INSERT INTO t1 VALUES (2, "m2"), (3, "m3"), (4, "m4");
+INSERT INTO t2 VALUES (1, "i1");
+BEGIN;
+INSERT INTO t2 VALUES (2, "i2"), (3, "i3");
+INSERT INTO t2 VALUES (4, "i4");
+COMMIT;
+save_master_pos;
+--disable_connect_log
+source include/wait_for_binlog_checkpoint.inc;
+--enable_connect_log
+--let $binlog_file = query_get_value(SHOW MASTER STATUS,File,1)
+--let $binlog_pos = query_get_value(SHOW MASTER STATUS,Position,1)
+--let $gtid_pos_server_1 = `SELECT @@gtid_binlog_pos`
+--echo *** GTID position should be non-empty here ***
+--replace_result $binlog_file <BINLOG_FILE> $binlog_pos <BINLOG_POS> $gtid_pos_server_1 <GTID_POS_SERVER_1>
+eval SELECT BINLOG_GTID_POS('$binlog_file',$binlog_pos);
+
+connection server_2;
+sync_with_master;
+--disable_connect_log
+source include/wait_for_binlog_checkpoint.inc;
+--enable_connect_log
+--let $binlog_file = query_get_value(SHOW MASTER STATUS,File,1)
+--let $binlog_pos = query_get_value(SHOW MASTER STATUS,Position,1)
+--echo *** GTID position should be the same as on server_1 ***
+--replace_result $binlog_file <BINLOG_FILE> $binlog_pos <BINLOG_POS> $gtid_pos_server_1 <GTID_POS_SERVER_1>
+eval SELECT BINLOG_GTID_POS('$binlog_file',$binlog_pos);
+SELECT * FROM t1 ORDER BY a;
+SELECT * FROM t2 ORDER BY a;
+save_master_pos;
+
+connection server_3;
+sync_with_master;
+SELECT * FROM t1 ORDER BY a;
+SELECT * FROM t2 ORDER BY a;
+save_master_pos;
+
+connection server_4;
+sync_with_master;
+SELECT * FROM t1 ORDER BY a;
+SELECT * FROM t2 ORDER BY a;
+
+
+--echo *** Now take out D, let it fall behind a bit, and then test re-attaching it to A ***
+connection server_4;
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+
+connection server_1;
+INSERT INTO t1 VALUES (5, "m1a");
+INSERT INTO t2 VALUES (5, "i1a");
+save_master_pos;
+
+connection server_4;
+--replace_result $MASTER_MYPORT MASTER_PORT
+eval CHANGE MASTER TO master_host = '127.0.0.1', master_port = $MASTER_MYPORT,
+ MASTER_USE_GTID=CURRENT_POS;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+sync_with_master;
+SELECT * FROM t1 ORDER BY a;
+SELECT * FROM t2 ORDER BY a;
+
+--echo *** Now move B to D (C is still replicating from B) ***
+connection server_2;
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+--replace_result $SERVER_MYPORT_4 SERVER_MYPORT_4
+eval CHANGE MASTER TO master_host = '127.0.0.1', master_port = $SERVER_MYPORT_4,
+ MASTER_USE_GTID=CURRENT_POS;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+connection server_4;
+UPDATE t2 SET b="j1a" WHERE a=5;
+save_master_pos;
+
+connection server_2;
+sync_with_master;
+SELECT * FROM t1 ORDER BY a;
+SELECT * FROM t2 ORDER BY a;
+
+--echo *** Now move C to D, after letting it fall a little behind ***
+connection server_3;
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+
+connection server_1;
+INSERT INTO t2 VALUES (6, "i6b");
+INSERT INTO t2 VALUES (7, "i7b");
+--disable_connect_log
+--source include/save_master_gtid.inc
+--enable_connect_log
+
+connection server_3;
+--replace_result $SERVER_MYPORT_4 SERVER_MYPORT_4
+eval CHANGE MASTER TO master_host = '127.0.0.1', master_port = $SERVER_MYPORT_4,
+ MASTER_USE_GTID=CURRENT_POS;
+--disable_connect_log
+--source include/start_slave.inc
+--source include/sync_with_master_gtid.inc
+--enable_connect_log
+SELECT * FROM t2 ORDER BY a;
+
+--echo *** Now change everything back to what it was, to make rpl_end.inc happy
+# Also check that MASTER_USE_GTID=CURRENT_POS is still enabled.
+connection server_2;
+# We need to sync up server_2 before switching. If it happened to have reached
+# the point 'UPDATE t2 SET b="j1a" WHERE a=5' it will fail to connect to
+# server_1, which is (deliberately) missing that transaction.
+--disable_connect_log
+--source include/sync_with_master_gtid.inc
+--source include/stop_slave.inc
+--replace_result $MASTER_MYPORT MASTER_MYPORT
+eval CHANGE MASTER TO master_host = '127.0.0.1', master_port = $MASTER_MYPORT;
+--source include/start_slave.inc
+--source include/wait_for_slave_to_start.inc
+--enable_connect_log
+
+connection server_3;
+--disable_connect_log
+--source include/stop_slave.inc
+--replace_result $SLAVE_MYPORT SLAVE_MYPORT
+eval CHANGE MASTER TO master_host = '127.0.0.1', master_port = $SLAVE_MYPORT;
+--source include/start_slave.inc
+--source include/sync_with_master_gtid.inc
+--enable_connect_log
+
+connection server_4;
+--disable_connect_log
+--source include/stop_slave.inc
+--replace_result $SERVER_MYPORT_3 SERVER_MYPORT_3
+eval CHANGE MASTER TO master_host = '127.0.0.1', master_port = $SERVER_MYPORT_3;
+--source include/start_slave.inc
+--enable_connect_log
+
+connection server_1;
+DROP TABLE t1,t2;
+--disable_connect_log
+--source include/save_master_gtid.inc
+--enable_connect_log
+
+--echo *** A few more checks for BINLOG_GTID_POS function ***
+--let $valid_binlog_name = query_get_value(SHOW BINARY LOGS,Log_name,1)
+--error ER_WRONG_PARAMCOUNT_TO_NATIVE_FCT
+SELECT BINLOG_GTID_POS();
+--error ER_WRONG_PARAMCOUNT_TO_NATIVE_FCT
+SELECT BINLOG_GTID_POS('a');
+--error ER_WRONG_PARAMCOUNT_TO_NATIVE_FCT
+SELECT BINLOG_GTID_POS('a',1,NULL);
+SELECT BINLOG_GTID_POS(1,'a');
+SELECT BINLOG_GTID_POS(NULL,NULL);
+SELECT BINLOG_GTID_POS('',1);
+SELECT BINLOG_GTID_POS('a',1);
+eval SELECT BINLOG_GTID_POS('$valid_binlog_name',-1);
+eval SELECT BINLOG_GTID_POS('$valid_binlog_name',0);
+eval SELECT BINLOG_GTID_POS('$valid_binlog_name',18446744073709551615);
+eval SELECT BINLOG_GTID_POS('$valid_binlog_name',18446744073709551616);
+
+
+--echo *** Some tests of @@GLOBAL.gtid_binlog_state ***
+--connection server_2
+--disable_connect_log
+--source include/sync_with_master_gtid.inc
+--source include/stop_slave.inc
+--enable_connect_log
+
+--connection server_1
+SET @old_state= @@GLOBAL.gtid_binlog_state;
+
+--error ER_BINLOG_MUST_BE_EMPTY
+SET GLOBAL gtid_binlog_state = '';
+RESET MASTER;
+SET GLOBAL gtid_binlog_state = '';
+FLUSH LOGS;
+--disable_connect_log
+--source include/show_binary_logs.inc
+SET GLOBAL gtid_binlog_state = '0-1-10,1-2-20,0-3-30';
+--source include/show_binary_logs.inc
+--let $binlog_file= master-bin.000001
+--let $binlog_start= 4
+--source include/show_binlog_events.inc
+--enable_connect_log
+#SELECT @@GLOBAL.gtid_binlog_pos;
+#SELECT @@GLOBAL.gtid_binlog_state;
+--error ER_BINLOG_MUST_BE_EMPTY
+SET GLOBAL gtid_binlog_state = @old_state;
+RESET MASTER;
+SET GLOBAL gtid_binlog_state = @old_state;
+
+# Check that slave can reconnect again, despite the RESET MASTER, as we
+# restored the state.
+
+CREATE TABLE t1 (a INT PRIMARY KEY);
+SET gtid_seq_no=100;
+INSERT INTO t1 VALUES (1);
+--disable_connect_log
+--source include/save_master_gtid.inc
+--enable_connect_log
+
+--connection server_2
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+# We cannot just use sync_with_master as we've done RESET MASTER, so
+# slave old-style position is wrong.
+# So sync on gtid position instead.
+--disable_connect_log
+--source include/sync_with_master_gtid.inc
+--enable_connect_log
+
+SELECT * FROM t1;
+# Check that the IO gtid position in SHOW SLAVE STATUS is also correct.
+--let $status_items= Gtid_IO_Pos
+--disable_connect_log
+--source include/show_slave_status.inc
+--enable_connect_log
+
+--echo *** Test @@LAST_GTID and MASTER_GTID_WAIT() ***
+
+--connection server_1
+DROP TABLE t1;
+CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
+--save_master_pos
+
+--connection server_2
+--sync_with_master
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+
+--connect (m1,127.0.0.1,root,,test,$SERVER_MYPORT_1,)
+SELECT @@last_gtid;
+SET gtid_seq_no=110;
+SELECT @@last_gtid;
+BEGIN;
+SELECT @@last_gtid;
+INSERT INTO t1 VALUES (2);
+SELECT @@last_gtid;
+COMMIT;
+SELECT @@last_gtid;
+--let $pos= `SELECT @@gtid_binlog_pos`
+
+--connect (s1,127.0.0.1,root,,test,$SERVER_MYPORT_2,)
+eval SET @pos= '$pos';
+# Check NULL argument.
+SELECT master_gtid_wait(NULL);
+# Check empty argument returns immediately.
+SELECT master_gtid_wait('', NULL);
+# Check this gets counted
+SHOW STATUS LIKE 'Master_gtid_wait_count';
+SHOW STATUS LIKE 'Master_gtid_wait_timeouts';
+SHOW STATUS LIKE 'Master_gtid_wait_time';
+# Let's check that we get a timeout
+SELECT master_gtid_wait(@pos, 0.5);
+SELECT * FROM t1 ORDER BY a;
+# Now actually wait until the slave reaches the position
+send SELECT master_gtid_wait(@pos);
+
+--connection server_2
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+--connection s1
+reap;
+SELECT * FROM t1 ORDER BY a;
+
+# Test waiting on a domain that does not exist yet.
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+
+--connection server_1
+SET gtid_domain_id= 1;
+INSERT INTO t1 VALUES (3);
+--let $pos= `SELECT @@gtid_binlog_pos`
+
+--connection s1
+--replace_result $pos POS
+eval SET @pos= '$pos';
+SELECT master_gtid_wait(@pos, 0);
+SELECT * FROM t1 WHERE a >= 3;
+send SELECT master_gtid_wait(@pos, -1);
+
+--connection server_2
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+--connection s1
+reap;
+SELECT * FROM t1 WHERE a >= 3;
+# Waiting for only part of the position.
+SELECT master_gtid_wait('1-1-1', 0);
+
+# Now test a lot of parallel master_gtid_wait() calls, completing in
different
+# order, and some of which time out or get killed on the way.
+
+--connection s1
+send SELECT master_gtid_wait('2-1-1,1-1-4,0-1-110');
+
+--connect (s2,127.0.0.1,root,,test,$SERVER_MYPORT_2,)
+# This will time out. No event 0-1-1000 exists
+send SELECT master_gtid_wait('0-1-1000', 0.5);
+
+--connect (s3,127.0.0.1,root,,test,$SERVER_MYPORT_2,)
+# This one we will kill
+--let $kill1_id= `SELECT connection_id()`
+send SELECT master_gtid_wait('0-1-2000');
+
+--connect (s4,127.0.0.1,root,,test,$SERVER_MYPORT_2,)
+send SELECT master_gtid_wait('2-1-10');
+
+--connect (s5,127.0.0.1,root,,test,$SERVER_MYPORT_2,)
+send SELECT master_gtid_wait('2-1-6', 1);
+
+# This one we will kill also.
+--connect (s6,127.0.0.1,root,,test,$SERVER_MYPORT_2,)
+--let $kill2_id= `SELECT connection_id()`
+send SELECT master_gtid_wait('2-1-5');
+
+--connect (s7,127.0.0.1,root,,test,$SERVER_MYPORT_2,)
+send SELECT master_gtid_wait('2-1-10');
+
+--connect (s8,127.0.0.1,root,,test,$SERVER_MYPORT_2,)
+send SELECT master_gtid_wait('2-1-5,1-1-4,0-1-110');
+
+--connect (s9,127.0.0.1,root,,test,$SERVER_MYPORT_2,)
+send SELECT master_gtid_wait('2-1-2');
+
+--connection server_2
+# This one completes immediately.
+SHOW STATUS LIKE 'Master_gtid_wait_timeouts';
+SHOW STATUS LIKE 'Master_gtid_wait_count';
+SELECT master_gtid_wait('1-1-1');
+SHOW STATUS LIKE 'Master_gtid_wait_timeouts';
+SHOW STATUS LIKE 'Master_gtid_wait_count';
+let $wait_time = query_get_value(SHOW STATUS LIKE 'Master_gtid_wait_time', Value, 1);
+--replace_result $wait_time MASTER_GTID_WAIT_TIME
+eval SET @a= $wait_time;
+SELECT IF(@a <= 100*1000*1000, "OK", CONCAT("Error: wait time ", @a, " is larger than expected"))
+ AS Master_gtid_wait_time_as_expected;
+
+
+--connect (s10,127.0.0.1,root,,test,$SERVER_MYPORT_2,)
+send SELECT master_gtid_wait('0-1-109');
+
+--connection server_2
+# This one should time out.
+SHOW STATUS LIKE 'Master_gtid_wait_timeouts';
+SHOW STATUS LIKE 'Master_gtid_wait_count';
+SELECT master_gtid_wait('2-1-2', 0.5);
+SHOW STATUS LIKE 'Master_gtid_wait_timeouts';
+SHOW STATUS LIKE 'Master_gtid_wait_count';
+let $wait_time = query_get_value(SHOW STATUS LIKE 'Master_gtid_wait_time', Value, 1);
+--replace_result $wait_time MASTER_GTID_WAIT_TIME
+eval SET @a= $wait_time;
+# We expect a wait time of just a bit over 0.5 seconds. But thread scheduling
+# and timer inaccuracies could introduce significant jitter. So allow a
+# generous interval.
+SELECT IF(@a BETWEEN 0.4*1000*1000 AND 100*1000*1000, "OK", CONCAT("Error: wait time ", @a, " not as expected")) AS Master_gtid_wait_time_as_expected;
+
+--replace_result $kill1_id KILL_ID
+eval KILL QUERY $kill1_id;
+--connection s3
+--error ER_QUERY_INTERRUPTED
+reap;
+
+--connection server_1
+SET gtid_domain_id=2;
+SET gtid_seq_no=2;
+INSERT INTO t1 VALUES (4);
+
+--connection s9
+reap;
+
+--connection server_2
+--replace_result $kill2_id KILL_ID
+eval KILL CONNECTION $kill2_id;
+
+--connection s6
+--error 2013,ER_CONNECTION_KILLED
+reap;
+
+--connection server_1
+SET gtid_domain_id=1;
+SET gtid_seq_no=4;
+INSERT INTO t1 VALUES (5);
+SET gtid_domain_id=2;
+SET gtid_seq_no=5;
+INSERT INTO t1 VALUES (6);
+
+--connection s8
+reap;
+--connection s1
+reap;
+--connection s2
+reap;
+--connection s5
+reap;
+--connection s10
+reap;
+
+--connection server_1
+SET gtid_domain_id=2;
+SET gtid_seq_no=10;
+INSERT INTO t1 VALUES (7);
+
+--connection s4
+reap;
+--connection s7
+reap;
+
+
+--echo *** Test gtid_slave_pos when used with GTID ***
+
+--connection server_2
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+
+--connection server_1
+SET gtid_domain_id=2;
+SET gtid_seq_no=1000;
+INSERT INTO t1 VALUES (10);
+INSERT INTO t1 VALUES (11);
+--save_master_pos
+
+--connection server_2
+SET sql_slave_skip_counter= 1;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+--sync_with_master
+SELECT * FROM t1 WHERE a >= 10 ORDER BY a;
+SELECT IF(LOCATE("2-1-1001", @@GLOBAL.gtid_slave_pos)>0, "Ok", CONCAT("ERROR! expected GTID 2-1-1001 not found in gtid_slave_pos: ", @@GLOBAL.gtid_slave_pos)) AS status;
+
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+
+--connection server_1
+SET gtid_domain_id=2;
+SET gtid_seq_no=1010;
+INSERT INTO t1 VALUES (12);
+INSERT INTO t1 VALUES (13);
+--save_master_pos
+
+--connection server_2
+SET sql_slave_skip_counter= 2;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+--sync_with_master
+SELECT * FROM t1 WHERE a >= 10 ORDER BY a;
+SELECT IF(LOCATE("2-1-1011", @@GLOBAL.gtid_slave_pos)>0, "Ok", CONCAT("ERROR! expected GTID 2-1-1011 not found in gtid_slave_pos: ", @@GLOBAL.gtid_slave_pos)) AS status;
+
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+
+--connection server_1
+SET gtid_domain_id=2;
+SET gtid_seq_no=1020;
+INSERT INTO t1 VALUES (14);
+INSERT INTO t1 VALUES (15);
+INSERT INTO t1 VALUES (16);
+--save_master_pos
+
+--connection server_2
+SET sql_slave_skip_counter= 3;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+--sync_with_master
+SELECT * FROM t1 WHERE a >= 10 ORDER BY a;
+SELECT IF(LOCATE("2-1-1022", @@GLOBAL.gtid_slave_pos)>0, "Ok", CONCAT("ERROR! expected GTID 2-1-1022 not found in gtid_slave_pos: ", @@GLOBAL.gtid_slave_pos)) AS status;
+
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+
+--connection server_1
+SET gtid_domain_id=2;
+SET gtid_seq_no=1030;
+INSERT INTO t1 VALUES (17);
+INSERT INTO t1 VALUES (18);
+INSERT INTO t1 VALUES (19);
+--save_master_pos
+
+--connection server_2
+SET sql_slave_skip_counter= 5;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+--sync_with_master
+SELECT * FROM t1 WHERE a >= 10 ORDER BY a;
+SELECT IF(LOCATE("2-1-1032", @@GLOBAL.gtid_slave_pos)>0, "Ok", CONCAT("ERROR! expected GTID 2-1-1032 not found in gtid_slave_pos: ", @@GLOBAL.gtid_slave_pos)) AS status;
+
+
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+
+--connection server_1
+SET gtid_domain_id=3;
+SET gtid_seq_no=100;
+CREATE TABLE t2 (a INT PRIMARY KEY);
+DROP TABLE t2;
+SET gtid_domain_id=2;
+SET gtid_seq_no=1040;
+INSERT INTO t1 VALUES (20);
+--save_master_pos
+
+--connection server_2
+SET @saved_mode= @@GLOBAL.slave_ddl_exec_mode;
+SET GLOBAL slave_ddl_exec_mode=STRICT;
+SET sql_slave_skip_counter=1;
+START SLAVE UNTIL master_gtid_pos="3-1-100";
+--let $master_pos=3-1-100
+--disable_connect_log
+--source include/sync_with_master_gtid.inc
+--source include/wait_for_slave_to_stop.inc
+--enable_connect_log
+--error ER_NO_SUCH_TABLE
+SELECT * FROM t2;
+SELECT IF(LOCATE("3-1-100", @@GLOBAL.gtid_slave_pos)>0, "Ok", CONCAT("ERROR! expected GTID 3-1-100 not found in gtid_slave_pos: ", @@GLOBAL.gtid_slave_pos)) AS status;
+
+# Start the slave again, it should fail on the DROP TABLE as the table is not there.
+SET sql_log_bin=0;
+CALL mtr.add_suppression("Slave: Unknown table 'test\\.t2' Error_code: 1051");
+SET sql_log_bin=1;
+START SLAVE;
+--let $slave_sql_errno=1051
+--disable_connect_log
+--source include/wait_for_slave_sql_error.inc
+--enable_connect_log
+SELECT IF(LOCATE("3-1-100", @@GLOBAL.gtid_slave_pos)>0, "Ok", CONCAT("ERROR! expected GTID 3-1-100 not found in gtid_slave_pos: ", @@GLOBAL.gtid_slave_pos)) AS status;
+
+STOP SLAVE IO_THREAD;
+SET sql_slave_skip_counter=2;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+--sync_with_master
+
+SELECT * FROM t1 WHERE a >= 20 ORDER BY a;
+SELECT IF(LOCATE("3-1-101", @@GLOBAL.gtid_slave_pos)>0, "Ok", CONCAT("ERROR! expected GTID 3-1-101 not found in gtid_slave_pos: ", @@GLOBAL.gtid_slave_pos)) AS status;
+SELECT IF(LOCATE("2-1-1040", @@GLOBAL.gtid_slave_pos)>0, "Ok", CONCAT("ERROR! expected GTID 2-1-1040 not found in gtid_slave_pos: ", @@GLOBAL.gtid_slave_pos)) AS status;
+
+SET GLOBAL slave_ddl_exec_mode= @saved_mode;
+
+
+--echo *** Test GTID-connecting to a master with out-of-order sequence numbers in the binlog. ***
+
+# Create an out-of-order binlog on server 2.
+# Let server 3 replicate to an out-of-order point, stop it, restart it,
+# and check that it replicates correctly despite the out-of-order.
+
+--connection server_1
+SET gtid_domain_id= @@GLOBAL.gtid_domain_id;
+INSERT INTO t1 VALUES (31);
+--save_master_pos
+
+--connection server_2
+--sync_with_master
+SET gtid_domain_id= @@GLOBAL.gtid_domain_id;
+INSERT INTO t1 VALUES (32);
+
+--connection server_1
+INSERT INTO t1 VALUES (33);
+--save_master_pos
+
+--connection server_2
+--sync_with_master
+--save_master_pos
+
+--connection server_3
+--sync_with_master
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+
+--connection server_1
+INSERT INTO t1 VALUES (34);
+--save_master_pos
+
+--connection server_2
+--sync_with_master
+--save_master_pos
+
+--connection server_3
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+--sync_with_master
+SELECT * FROM t1 WHERE a >= 30 ORDER BY a;
+--save_master_pos
+
+--connection server_4
+--sync_with_master
+SELECT * FROM t1 WHERE a >= 30 ORDER BY a;
+
+
+# Clean up.
+--connection server_1
+DROP TABLE t1;
+
+
+--disable_connect_log
+--source include/rpl_end.inc
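[Editorial aside, not part of the patch: the fail-over pattern this test exercises repeatedly can be sketched as plain SQL against a MariaDB replica. Host and port values below are placeholders, not taken from the test.]

```sql
-- Stop replication, re-point the replica at a new master, and let GTID
-- mode compute the resume position from @@gtid_current_pos.
STOP SLAVE;
CHANGE MASTER TO
  master_host = '127.0.0.1',
  master_port = 3306,            -- placeholder port
  MASTER_USE_GTID = CURRENT_POS;
START SLAVE;

-- Optionally block until the replica has applied a known GTID position;
-- MASTER_GTID_WAIT returns 0 on success and -1 on timeout (seconds).
SELECT MASTER_GTID_WAIT('0-1-100', 30);
```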
diff --git a/mysql-test/suite/binlog_encryption/rpl_incident.cnf b/mysql-test/suite/binlog_encryption/rpl_incident.cnf
new file mode 100644
index 0000000..7294976
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_incident.cnf
@@ -0,0 +1,7 @@
+!include my.cnf
+
+[mysqld.1]
+binlog_checksum=NONE
+
+[mysqld.2]
+binlog_checksum=NONE
diff --git a/mysql-test/suite/binlog_encryption/rpl_incident.result b/mysql-test/suite/binlog_encryption/rpl_incident.result
new file mode 100644
index 0000000..ab1aa50
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_incident.result
@@ -0,0 +1,42 @@
+include/master-slave.inc
+[connection master]
+RESET MASTER;
+CREATE TABLE t1 (a INT);
+INSERT INTO t1 VALUES (1),(2),(3);
+SELECT * FROM t1;
+a
+1
+2
+3
+SET GLOBAL debug_dbug= '+d,incident_database_resync_on_replace,*';
+REPLACE INTO t1 VALUES (4);
+SELECT * FROM t1;
+a
+1
+2
+3
+4
+connection slave;
+call mtr.add_suppression("Slave SQL.*The incident LOST_EVENTS occurred on the master.* 1590");
+include/wait_for_slave_sql_error.inc [errno=1590]
+Last_SQL_Error = 'The incident LOST_EVENTS occurred on the master. Message: <none>'
+SELECT * FROM t1;
+a
+1
+2
+3
+SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1;
+START SLAVE;
+SELECT * FROM t1;
+a
+1
+2
+3
+4
+connection master;
+DROP TABLE t1;
+FLUSH LOGS;
+Contain RELOAD DATABASE
+1
+connection slave;
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_incident.test b/mysql-test/suite/binlog_encryption/rpl_incident.test
new file mode 100644
index 0000000..a069a01
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_incident.test
@@ -0,0 +1,72 @@
+#
+# The test is a combination of rpl.rpl_incident and binlog.binlog_incident
+# tests, with some modifications in each
+#
+
+--source include/master-slave.inc
+--source include/have_debug.inc
+
+--enable_connect_log
+
+RESET MASTER;
+
+--let $binlog_start_pos= query_get_value(SHOW MASTER STATUS,Position,1)
+--let $binlog_file= query_get_value(SHOW MASTER STATUS,File,1)
+
+CREATE TABLE t1 (a INT);
+
+INSERT INTO t1 VALUES (1),(2),(3);
+SELECT * FROM t1;
+
+let $debug_save= `SELECT @@GLOBAL.debug`;
+SET GLOBAL debug_dbug= '+d,incident_database_resync_on_replace,*';
+
+# This will generate an incident log event and store it in the binary
+# log before the replace statement.
+REPLACE INTO t1 VALUES (4);
+--save_master_pos
+SELECT * FROM t1;
+
+--disable_query_log
+eval SET GLOBAL debug_dbug= '$debug_save';
+--enable_query_log
+
+connection slave;
+# Wait until SQL thread stops with error LOST_EVENT on master
+call mtr.add_suppression("Slave SQL.*The incident LOST_EVENTS occurred on the master.* 1590");
+let $slave_sql_errno= 1590;
+let $show_slave_sql_error= 1;
+--disable_query_log
+source include/wait_for_slave_sql_error.inc;
+--enable_query_log
+
+# The 4 should not be inserted into the table, since the incident log
+# event should have stop the slave.
+SELECT * FROM t1;
+
+SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1;
+START SLAVE;
+--sync_with_master
+
+# Now, we should have inserted the row into the table and the slave
+# should be running. We should also have rotated to a new binary log.
+
+SELECT * FROM t1;
+
+connection master;
+
+DROP TABLE t1;
+FLUSH LOGS;
+
+exec $MYSQL_BINLOG --read-from-remote-server --port=$MASTER_MYPORT --protocol=tcp -uroot --start-position=$binlog_start_pos $binlog_file >$MYSQLTEST_VARDIR/tmp/binlog_incident.sql;
+--disable_query_log
+eval SELECT cont LIKE '%RELOAD DATABASE; # Shall generate syntax error%' AS `Contain RELOAD DATABASE` FROM (SELECT load_file('$MYSQLTEST_VARDIR/tmp/binlog_incident.sql') AS cont) AS tbl;
+--enable_query_log
+
+remove_file $MYSQLTEST_VARDIR/tmp/binlog_incident.sql;
+
+--sync_slave_with_master
+
+--disable_connect_log
+
+--source include/rpl_end.inc
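[Editorial aside, not part of the patch: the recovery path that rpl_incident verifies is the documented procedure for resuming after an incident event. As a sketch:]

```sql
-- After the SQL thread stops with ER_SLAVE_INCIDENT (1590), skip the
-- incident event and resume replication on the replica.
SET GLOBAL sql_slave_skip_counter = 1;
START SLAVE;
-- SHOW SLAVE STATUS should then report Slave_SQL_Running = Yes,
-- and the statement following the incident (the REPLACE) is applied.
```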
diff --git a/mysql-test/suite/binlog_encryption/rpl_init_slave_errors.result b/mysql-test/suite/binlog_encryption/rpl_init_slave_errors.result
new file mode 100644
index 0000000..9174281
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_init_slave_errors.result
@@ -0,0 +1,22 @@
+include/master-slave.inc
+[connection master]
+connection slave;
+stop slave;
+reset slave;
+connection slave;
+SET GLOBAL debug_dbug= "d,simulate_io_slave_error_on_init,simulate_sql_slave_error_on_init";
+start slave;
+include/wait_for_slave_sql_error.inc [errno=1593]
+Last_SQL_Error = 'Failed during slave thread initialization'
+call mtr.add_suppression("Failed during slave.* thread initialization");
+SET GLOBAL debug_dbug= "";
+connection slave;
+reset slave;
+SET GLOBAL init_slave= "garbage";
+start slave;
+include/wait_for_slave_sql_error.inc [errno=1064]
+Last_SQL_Error = 'Slave SQL thread aborted. Can't execute init_slave query'
+SET GLOBAL init_slave= "";
+include/stop_slave_io.inc
+RESET SLAVE;
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_init_slave_errors.test b/mysql-test/suite/binlog_encryption/rpl_init_slave_errors.test
new file mode 100644
index 0000000..8447852
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_init_slave_errors.test
@@ -0,0 +1,100 @@
+#
+# The test was taken from the rpl suite as is
+#
+
+######################################################################
+# Some errors that cause the slave SQL thread to stop are not shown in
+# the Slave_SQL_Error column of "SHOW SLAVE STATUS". Instead, the error
+# is only in the server's error log.
+#
+# Two failures and their respective reporting are verified:
+# +# 1 - Failures during slave thread initialization
+# 2 - Failures while processing queries passed through the init_slave
+# option.
+#
+# In order to check the first type of failure, we inject a fault in the
+# SQL/IO Threads through SET GLOBAL debug.
+#
+# To check the second type, we set @@global.init_slave to an invalid
+# command thus preventing the initialization of the SQL Thread.
+#
+# Obs:
+# 1 - Note that testing failures while initializing the relay log position
+# is hard as the same function is called before the code reaches the point
+# that we want to test.
+#
+# 2 - This test does not target failures that are reported while applying
+# events such as duplicate keys, errors while reading the relay-log.bin*,
+# etc. Such errors are already checked on other tests.
+######################################################################
+
+######################################################################
+# Configuring the Environment
+######################################################################
+source include/have_debug.inc;
+source include/master-slave.inc;
+source include/have_log_bin.inc;
+
+--enable_connect_log
+
+connection slave;
+
+--disable_warnings
+stop slave;
+--enable_warnings
+reset slave;
+
+######################################################################
+# Injecting faults in the threads' initialization
+######################################################################
+connection slave;
+
+# Set debug flags on slave to force errors to occur
+SET GLOBAL debug_dbug= "d,simulate_io_slave_error_on_init,simulate_sql_slave_error_on_init";
+
+start slave;
+
+#
+# slave is going to stop because of emulated failures
+# but there won't be any crashes nor asserts hit.
+#
+# 1593 = ER_SLAVE_FATAL_ERROR
+--let $slave_sql_errno= 1593
+--let $show_slave_sql_error= 1
+--disable_connect_log
+--source include/wait_for_slave_sql_error.inc
+--enable_connect_log
+
+call mtr.add_suppression("Failed during slave.* thread initialization");
+
+SET GLOBAL debug_dbug= "";
+
+######################################################################
+# Injecting faults in the init_slave option
+######################################################################
+connection slave;
+
+reset slave;
+
+SET GLOBAL init_slave= "garbage";
+
+start slave;
+# 1064 = ER_PARSE_ERROR
+--let $slave_sql_errno= 1064
+--let $show_slave_sql_error= 1
+--disable_connect_log
+--source include/wait_for_slave_sql_error.inc
+--enable_connect_log
+
+######################################################################
+# Clean up
+######################################################################
+SET GLOBAL init_slave= "";
+
+# Clean up Last_SQL_Error
+--disable_connect_log
+--source include/stop_slave_io.inc
+RESET SLAVE;
+--let $rpl_only_running_threads= 1
+--source include/rpl_end.inc
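[Editorial aside, not part of the patch: init_slave holds SQL that the slave SQL thread executes each time it starts, which is why the deliberately unparsable value "garbage" above stops the thread with ER_PARSE_ERROR. A well-formed value would look like the following (illustrative setting only):]

```sql
-- A valid init_slave value: executed by the SQL thread at startup.
SET GLOBAL init_slave = 'SET SESSION wait_timeout = 3600';

-- Restore the default afterwards.
SET GLOBAL init_slave = '';
```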
diff --git a/mysql-test/suite/binlog_encryption/rpl_loaddata_local.result b/mysql-test/suite/binlog_encryption/rpl_loaddata_local.result
new file mode 100644
index 0000000..f0d24df
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_loaddata_local.result
@@ -0,0 +1,134 @@
+include/master-slave.inc
+[connection master]
+create table t1(a int);
+select * into outfile 'MYSQLD_DATADIR/rpl_loaddatalocal.select_outfile' from t1;
+truncate table t1;
+load data local infile 'MYSQLD_DATADIR/rpl_loaddatalocal.select_outfile' into table t1;
+connection slave;
+select a,count(*) from t1 group by a;
+a count(*)
+1 10000
+connection master;
+drop table t1;
+connection slave;
+connection master;
+create table t1(a int);
+insert into t1 values (1), (2), (2), (3);
+select * into outfile 'MYSQLD_DATADIR/rpl_loaddatalocal.select_outfile' from t1;
+drop table t1;
+create table t1(a int primary key);
+load data local infile 'MYSQLD_DATADIR/rpl_loaddatalocal.select_outfile' into table t1;
+Warnings:
+Warning 1062 Duplicate entry '2' for key 'PRIMARY'
+SELECT * FROM t1 ORDER BY a;
+a
+1
+2
+3
+connection slave;
+SELECT * FROM t1 ORDER BY a;
+a
+1
+2
+3
+connection master;
+drop table t1;
+connection slave;
+==== Bug22504 Initialize ====
+connection master;
+SET sql_mode='ignore_space';
+CREATE TABLE t1(a int);
+insert into t1 values (1), (2), (3), (4);
+select * into outfile 'MYSQLD_DATADIR/rpl_loaddatalocal.select_outfile' from t1;
+truncate table t1;
+load data local infile 'MYSQLD_DATADIR/rpl_loaddatalocal.select_outfile' into table t1;
+SELECT * FROM t1 ORDER BY a;
+a
+1
+2
+3
+4
+connection slave;
+SELECT * FROM t1 ORDER BY a;
+a
+1
+2
+3
+4
+==== Clean up ====
+connection master;
+DROP TABLE t1;
+connection slave;
+
+Bug #43746:
+"return wrong query string when parse 'load data infile' sql statement"
+
+connection master;
+SELECT @@SESSION.sql_mode INTO @old_mode;
+SET sql_mode='ignore_space';
+CREATE TABLE t1(a int);
+INSERT INTO t1 VALUES (1), (2), (3), (4);
+SELECT * INTO OUTFILE 'MYSQLD_DATADIR/bug43746.sql' FROM t1;
+TRUNCATE TABLE t1;
+LOAD DATA LOCAL INFILE 'MYSQLD_DATADIR/bug43746.sql' INTO TABLE t1;
+LOAD/* look mum, with comments in weird places! */DATA/* oh hai */LOCAL INFILE 'MYSQLD_DATADIR/bug43746.sql'/* we are */INTO/* from the internets */TABLE t1;
+LOAD DATA/*!10000 LOCAL */INFILE 'MYSQLD_DATADIR/bug43746.sql' INTO TABLE t1;
+LOAD DATA LOCAL INFILE 'MYSQLD_DATADIR/bug43746.sql' /*!10000 INTO */ TABLE t1;
+LOAD DATA LOCAL INFILE 'MYSQLD_DATADIR/bug43746.sql' /*!10000 INTO TABLE */ t1;
+LOAD DATA /*!10000 LOCAL INFILE 'MYSQLD_DATADIR/bug43746.sql' INTO TABLE */ t1;
+LOAD DATA/*!10000 LOCAL */INFILE 'MYSQLD_DATADIR/bug43746.sql'/*!10000 INTO*/TABLE t1;
+LOAD DATA/*!10000 LOCAL */INFILE 'MYSQLD_DATADIR/bug43746.sql'/* empty */INTO TABLE t1;
+LOAD DATA/*!10000 LOCAL */INFILE 'MYSQLD_DATADIR/bug43746.sql' INTO/*
empty */TABLE t1;
+LOAD/*!999999 special comments that do not expand */DATA/*!999999 code
from the future */LOCAL INFILE 'MYSQLD_DATADIR/bug43746.sql'/*!999999
have flux capacitor */INTO/*!999999 will travel */TABLE t1;
+SET
sql_mode='PIPES_AS_CONCAT,ANSI_QUOTES,NO_KEY_OPTIONS,NO_TABLE_OPTIONS,NO_FIELD_OPTIONS,STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER';
+LOAD DATA LOCAL INFILE 'MYSQLD_DATADIR/bug43746.sql' INTO TABLE t1;
+connection slave;
+
+Bug #59267:
+"LOAD DATA LOCAL INFILE not executed on slave with SBR"
+
+connection master;
+SELECT * INTO OUTFILE 'MYSQLD_DATADIR/bug59267.sql' FROM t1;
+TRUNCATE TABLE t1;
+LOAD DATA LOCAL INFILE 'MYSQLD_DATADIR/bug59267.sql' INTO TABLE t1;
+SELECT 'Master', COUNT(*) FROM t1;
+Master COUNT(*)
+Master 44
+connection slave;
+SELECT 'Slave', COUNT(*) FROM t1;
+Slave COUNT(*)
+Slave 44
+connection master;
+DROP TABLE t1;
+SET SESSION sql_mode=@old_mode;
+connection slave;
+connection master;
+
+Bug #60580/#11902767:
+"statement improperly replicated crashes slave sql thread"
+
+connection master;
+CREATE TABLE t1(f1 INT, f2 INT);
+CREATE TABLE t2(f1 INT, f2 TIMESTAMP);
+INSERT INTO t2 VALUES(1, '2011-03-22 21:01:28');
+INSERT INTO t2 VALUES(2, '2011-03-21 21:01:28');
+INSERT INTO t2 VALUES(3, '2011-03-20 21:01:28');
+CREATE TABLE t3 AS SELECT * FROM t2;
+CREATE VIEW v1 AS SELECT * FROM t2
+WHERE f1 IN (SELECT f1 FROM t3 WHERE (t3.f2 IS NULL));
+SELECT 1 INTO OUTFILE 'MYSQLD_DATADIR/bug60580.csv' FROM DUAL;
+LOAD DATA LOCAL INFILE 'MYSQLD_DATADIR/bug60580.csv' INTO TABLE t1
(@f1) SET f2 = (SELECT f1 FROM v1 WHERE f1=@f1);
+SELECT * FROM t1;
+f1 f2
+NULL NULL
+connection slave;
+SELECT * FROM t1;
+f1 f2
+NULL NULL
+connection master;
+DROP VIEW v1;
+DROP TABLE t1, t2, t3;
+connection slave;
+connection master;
+include/rpl_end.inc
+# End of 5.1 tests
diff --git a/mysql-test/suite/binlog_encryption/rpl_loaddata_local.test b/mysql-test/suite/binlog_encryption/rpl_loaddata_local.test
new file mode 100644
index 0000000..937ae77
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_loaddata_local.test
@@ -0,0 +1,233 @@
+#
+# The test was taken from the rpl suite (rpl_loaddatalocal.test)
+#
+
+# See if "LOAD DATA LOCAL INFILE" is well replicated
+# (LOAD DATA LOCAL INFILE is not written to the binlog
+# the same way as LOAD DATA INFILE : Append_blocks are smaller).
+# In MySQL 4.0 <4.0.12 there were 2 bugs with LOAD DATA LOCAL INFILE :
+# - the loaded file was not written entirely to the master's binlog,
+# only the first 4KB, 8KB or 16KB usually.
+# - the loaded file's first line was not written entirely to the
+# master's binlog (1st char was absent)
+source include/master-slave.inc;
+
+--enable_connect_log
+
+create table t1(a int);
+let $1=10000;
+disable_query_log;
+set SQL_LOG_BIN=0;
+while ($1)
+{
+ insert into t1 values(1);
+ dec $1;
+}
+set SQL_LOG_BIN=1;
+enable_query_log;
+let $MYSQLD_DATADIR= `select @@datadir`;
+--replace_result $MYSQLD_DATADIR MYSQLD_DATADIR
+eval select * into outfile
'$MYSQLD_DATADIR/rpl_loaddatalocal.select_outfile' from t1;
+# This will generate a 20KB file; now test LOAD DATA LOCAL
+truncate table t1;
+--replace_result $MYSQLD_DATADIR MYSQLD_DATADIR
+eval load data local infile
'$MYSQLD_DATADIR/rpl_loaddatalocal.select_outfile' into table t1;
+--remove_file $MYSQLD_DATADIR/rpl_loaddatalocal.select_outfile
+sync_slave_with_master;
+select a,count(*) from t1 group by a;
+connection master;
+drop table t1;
+sync_slave_with_master;
+
+# End of 4.1 tests
+
+#
+# Now let us test how well we replicate LOAD DATA LOCAL in the situation
+# when we meet duplicates in the tables to which we are adding rows.
+# (It is assumed that LOAD DATA LOCAL ignores such errors.)
+#
+connection master;
+create table t1(a int);
+insert into t1 values (1), (2), (2), (3);
+--replace_result $MYSQLD_DATADIR MYSQLD_DATADIR
+eval select * into outfile
'$MYSQLD_DATADIR/rpl_loaddatalocal.select_outfile' from t1;
+drop table t1;
+create table t1(a int primary key);
+--replace_result $MYSQLD_DATADIR MYSQLD_DATADIR
+eval load data local infile
'$MYSQLD_DATADIR/rpl_loaddatalocal.select_outfile' into table t1;
+--remove_file $MYSQLD_DATADIR/rpl_loaddatalocal.select_outfile
+SELECT * FROM t1 ORDER BY a;
+save_master_pos;
+connection slave;
+sync_with_master;
+SELECT * FROM t1 ORDER BY a;
+connection master;
+drop table t1;
+save_master_pos;
+connection slave;
+sync_with_master;
+
+
+#
+# Bug22504 load data infile sql statement in replication architecture get error
+#
+--echo ==== Bug22504 Initialize ====
+
+--connection master
+
+SET sql_mode='ignore_space';
+CREATE TABLE t1(a int);
+insert into t1 values (1), (2), (3), (4);
+--replace_result $MYSQLD_DATADIR MYSQLD_DATADIR
+eval select * into outfile
'$MYSQLD_DATADIR/rpl_loaddatalocal.select_outfile' from t1;
+truncate table t1;
+--replace_result $MYSQLD_DATADIR MYSQLD_DATADIR
+eval load data local infile
'$MYSQLD_DATADIR/rpl_loaddatalocal.select_outfile' into table t1;
+--remove_file $MYSQLD_DATADIR/rpl_loaddatalocal.select_outfile
+SELECT * FROM t1 ORDER BY a;
+
+sync_slave_with_master;
+SELECT * FROM t1 ORDER BY a;
+
+--echo ==== Clean up ====
+
+connection master;
+DROP TABLE t1;
+
+sync_slave_with_master;
+
+--echo
+--echo Bug #43746:
+--echo "return wrong query string when parse 'load data infile' sql
statement"
+--echo
+
+connection master;
+let $MYSQLD_DATADIR= `select @@datadir`;
+SELECT @@SESSION.sql_mode INTO @old_mode;
+
+SET sql_mode='ignore_space';
+
+CREATE TABLE t1(a int);
+INSERT INTO t1 VALUES (1), (2), (3), (4);
+
+--replace_result $MYSQLD_DATADIR MYSQLD_DATADIR
+eval SELECT * INTO OUTFILE '$MYSQLD_DATADIR/bug43746.sql' FROM t1;
+TRUNCATE TABLE t1;
+
+--replace_result $MYSQLD_DATADIR MYSQLD_DATADIR
+eval LOAD DATA LOCAL INFILE '$MYSQLD_DATADIR/bug43746.sql' INTO TABLE t1;
+
+--replace_result $MYSQLD_DATADIR MYSQLD_DATADIR
+eval LOAD/* look mum, with comments in weird places! */DATA/* oh hai
*/LOCAL INFILE '$MYSQLD_DATADIR/bug43746.sql'/* we are */INTO/* from the
internets */TABLE t1;
+
+--replace_result $MYSQLD_DATADIR MYSQLD_DATADIR
+eval LOAD DATA/*!10000 LOCAL */INFILE '$MYSQLD_DATADIR/bug43746.sql'
INTO TABLE t1;
+
+--replace_result $MYSQLD_DATADIR MYSQLD_DATADIR
+eval LOAD DATA LOCAL INFILE '$MYSQLD_DATADIR/bug43746.sql' /*!10000
INTO */ TABLE t1;
+
+--replace_result $MYSQLD_DATADIR MYSQLD_DATADIR
+eval LOAD DATA LOCAL INFILE '$MYSQLD_DATADIR/bug43746.sql' /*!10000
INTO TABLE */ t1;
+
+--replace_result $MYSQLD_DATADIR MYSQLD_DATADIR
+eval LOAD DATA /*!10000 LOCAL INFILE '$MYSQLD_DATADIR/bug43746.sql'
INTO TABLE */ t1;
+
+--replace_result $MYSQLD_DATADIR MYSQLD_DATADIR
+eval LOAD DATA/*!10000 LOCAL */INFILE
'$MYSQLD_DATADIR/bug43746.sql'/*!10000 INTO*/TABLE t1;
+
+--replace_result $MYSQLD_DATADIR MYSQLD_DATADIR
+eval LOAD DATA/*!10000 LOCAL */INFILE '$MYSQLD_DATADIR/bug43746.sql'/*
empty */INTO TABLE t1;
+
+--replace_result $MYSQLD_DATADIR MYSQLD_DATADIR
+eval LOAD DATA/*!10000 LOCAL */INFILE '$MYSQLD_DATADIR/bug43746.sql'
INTO/* empty */TABLE t1;
+
+--replace_result $MYSQLD_DATADIR MYSQLD_DATADIR
+eval LOAD/*!999999 special comments that do not expand */DATA/*!999999
code from the future */LOCAL INFILE
'$MYSQLD_DATADIR/bug43746.sql'/*!999999 have flux capacitor
*/INTO/*!999999 will travel */TABLE t1;
+
+SET
sql_mode='PIPES_AS_CONCAT,ANSI_QUOTES,NO_KEY_OPTIONS,NO_TABLE_OPTIONS,NO_FIELD_OPTIONS,STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER';
+
+--replace_result $MYSQLD_DATADIR MYSQLD_DATADIR
+eval LOAD DATA LOCAL INFILE '$MYSQLD_DATADIR/bug43746.sql' INTO TABLE t1;
+
+sync_slave_with_master;
+
+--echo
+--echo Bug #59267:
+--echo "LOAD DATA LOCAL INFILE not executed on slave with SBR"
+--echo
+
+connection master;
+
+--replace_result $MYSQLD_DATADIR MYSQLD_DATADIR
+eval SELECT * INTO OUTFILE '$MYSQLD_DATADIR/bug59267.sql' FROM t1;
+TRUNCATE TABLE t1;
+
+--replace_result $MYSQLD_DATADIR MYSQLD_DATADIR
+eval LOAD DATA LOCAL INFILE '$MYSQLD_DATADIR/bug59267.sql' INTO TABLE t1;
+
+SELECT 'Master', COUNT(*) FROM t1;
+
+--sync_slave_with_master
+SELECT 'Slave', COUNT(*) FROM t1;
+
+# cleanup
+connection master;
+
+--remove_file $MYSQLD_DATADIR/bug43746.sql
+--remove_file $MYSQLD_DATADIR/bug59267.sql
+
+DROP TABLE t1;
+SET SESSION sql_mode=@old_mode;
+
+sync_slave_with_master;
+
+connection master;
+
+--echo
+--echo Bug #60580/#11902767:
+--echo "statement improperly replicated crashes slave sql thread"
+--echo
+
+connection master;
+let $MYSQLD_DATADIR= `select @@datadir`;
+
+CREATE TABLE t1(f1 INT, f2 INT);
+CREATE TABLE t2(f1 INT, f2 TIMESTAMP);
+
+INSERT INTO t2 VALUES(1, '2011-03-22 21:01:28');
+INSERT INTO t2 VALUES(2, '2011-03-21 21:01:28');
+INSERT INTO t2 VALUES(3, '2011-03-20 21:01:28');
+
+CREATE TABLE t3 AS SELECT * FROM t2;
+
+CREATE VIEW v1 AS SELECT * FROM t2
+ WHERE f1 IN (SELECT f1 FROM t3 WHERE (t3.f2 IS NULL));
+
+--replace_result $MYSQLD_DATADIR MYSQLD_DATADIR
+eval SELECT 1 INTO OUTFILE '$MYSQLD_DATADIR/bug60580.csv' FROM DUAL;
+
+--replace_result $MYSQLD_DATADIR MYSQLD_DATADIR
+eval LOAD DATA LOCAL INFILE '$MYSQLD_DATADIR/bug60580.csv' INTO TABLE
t1 (@f1) SET f2 = (SELECT f1 FROM v1 WHERE f1=@f1);
+
+SELECT * FROM t1;
+
+sleep 1;
+
+sync_slave_with_master;
+
+SELECT * FROM t1;
+
+--remove_file $MYSQLD_DATADIR/bug60580.csv
+
+connection master;
+
+DROP VIEW v1;
+DROP TABLE t1, t2, t3;
+
+sync_slave_with_master;
+
+connection master;
+--disable_connect_log
+--source include/rpl_end.inc
+
+--echo # End of 5.1 tests
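[Editor's note, not part of the patch: the duplicate-key behavior the first half of rpl_loaddata_local.test relies on can be reproduced standalone. A minimal SQL sketch follows; the /tmp path is illustrative. With LOCAL, the server cannot abort the transfer midway, so duplicate-key conflicts are demoted to warnings (IGNORE semantics) rather than aborting the load.]

```sql
-- LOAD DATA LOCAL skips conflicting rows and reports them as warnings.
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1), (2), (2), (3);
SELECT * INTO OUTFILE '/tmp/dup.txt' FROM t1;
DROP TABLE t1;
CREATE TABLE t1 (a INT PRIMARY KEY);
LOAD DATA LOCAL INFILE '/tmp/dup.txt' INTO TABLE t1;
SHOW WARNINGS;               -- expect: Warning 1062 Duplicate entry '2' for key 'PRIMARY'
SELECT * FROM t1 ORDER BY a; -- expect: 1, 2, 3
```

This matches the expected output recorded in the .result file above (one 1062 warning, rows 1, 2, 3 on both master and slave).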
diff --git a/mysql-test/suite/binlog_encryption/rpl_loadfile.result b/mysql-test/suite/binlog_encryption/rpl_loadfile.result
new file mode 100644
index 0000000..e6f2a37
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_loadfile.result
@@ -0,0 +1,262 @@
+include/master-slave.inc
+[connection master]
+connection master;
+DROP PROCEDURE IF EXISTS test.p1;
+DROP TABLE IF EXISTS test.t1;
+CREATE TABLE test.t1 (a INT, blob_column LONGBLOB, PRIMARY KEY(a));
+INSERT INTO test.t1 VALUES(1,'test');
+UPDATE test.t1 SET blob_column=LOAD_FILE('../../std_data/words2.dat')
WHERE a=1;
+create procedure test.p1()
+begin
+INSERT INTO test.t1 VALUES(2,'test');
+UPDATE test.t1 SET blob_column=LOAD_FILE('../../std_data/words2.dat')
WHERE a=2;
+end|
+CALL test.p1();
+SELECT * FROM test.t1 ORDER BY blob_column;
+a blob_column
+1 abase
+abased
+abasement
+abasements
+abases
+abash
+abashed
+abashes
+abashing
+abasing
+abate
+abated
+abatement
+abatements
+abater
+abates
+abating
+Abba
+abbe
+abbey
+abbeys
+abbot
+abbots
+Abbott
+abbreviate
+abbreviated
+abbreviates
+abbreviating
+abbreviation
+abbreviations
+Abby
+abdomen
+abdomens
+abdominal
+abduct
+abducted
+abduction
+abductions
+abductor
+abductors
+abducts
+Abe
+abed
+Abel
+Abelian
+Abelson
+Aberdeen
+Abernathy
+aberrant
+aberration
+
+2 abase
+abased
+abasement
+abasements
+abases
+abash
+abashed
+abashes
+abashing
+abasing
+abate
+abated
+abatement
+abatements
+abater
+abates
+abating
+Abba
+abbe
+abbey
+abbeys
+abbot
+abbots
+Abbott
+abbreviate
+abbreviated
+abbreviates
+abbreviating
+abbreviation
+abbreviations
+Abby
+abdomen
+abdomens
+abdominal
+abduct
+abducted
+abduction
+abductions
+abductor
+abductors
+abducts
+Abe
+abed
+Abel
+Abelian
+Abelson
+Aberdeen
+Abernathy
+aberrant
+aberration
+
+connection slave;
+connection slave;
+SELECT * FROM test.t1 ORDER BY blob_column;
+a blob_column
+1 abase
+abased
+abasement
+abasements
+abases
+abash
+abashed
+abashes
+abashing
+abasing
+abate
+abated
+abatement
+abatements
+abater
+abates
+abating
+Abba
+abbe
+abbey
+abbeys
+abbot
+abbots
+Abbott
+abbreviate
+abbreviated
+abbreviates
+abbreviating
+abbreviation
+abbreviations
+Abby
+abdomen
+abdomens
+abdominal
+abduct
+abducted
+abduction
+abductions
+abductor
+abductors
+abducts
+Abe
+abed
+Abel
+Abelian
+Abelson
+Aberdeen
+Abernathy
+aberrant
+aberration
+
+2 abase
+abased
+abasement
+abasements
+abases
+abash
+abashed
+abashes
+abashing
+abasing
+abate
+abated
+abatement
+abatements
+abater
+abates
+abating
+Abba
+abbe
+abbey
+abbeys
+abbot
+abbots
+Abbott
+abbreviate
+abbreviated
+abbreviates
+abbreviating
+abbreviation
+abbreviations
+Abby
+abdomen
+abdomens
+abdominal
+abduct
+abducted
+abduction
+abductions
+abductor
+abductors
+abducts
+Abe
+abed
+Abel
+Abelian
+Abelson
+Aberdeen
+Abernathy
+aberrant
+aberration
+
+connection master;
+DROP PROCEDURE IF EXISTS test.p1;
+DROP TABLE test.t1;
+connection slave;
+include/rpl_reset.inc
+connection master;
+SELECT repeat('x',20) INTO OUTFILE 'MYSQLTEST_VARDIR/tmp/bug_39701.data';
+DROP TABLE IF EXISTS t1;
+CREATE TABLE t1 (t text);
+CREATE PROCEDURE p(file varchar(4096))
+BEGIN
+INSERT INTO t1 VALUES (LOAD_FILE(file));
+END|
+connection slave;
+include/stop_slave.inc
+connection master;
+CALL p('MYSQLTEST_VARDIR/tmp/bug_39701.data');
+connection slave;
+include/start_slave.inc
+connection master;
+connection slave;
+include/diff_tables.inc [master:t1, slave:t1]
+connection master;
+DROP TABLE t1;
+DROP PROCEDURE p;
+include/rpl_end.inc
+#
+# Check that the loaded data is encrypted in the master binlog
+#
+#
+# The next step will cause a perl error if the search does not
+# meet the expectations.
+# Pattern to look for: xxxxxxxxxxx
+# Files to search: master-bin.0*
+# Expected result: 0
+# (0 means the pattern should not be found, 1 means it should be found)
+#
+Did not find any occurrences of 'xxxxxxxxxxx' in master-bin.0*
diff --git a/mysql-test/suite/binlog_encryption/rpl_loadfile.test b/mysql-test/suite/binlog_encryption/rpl_loadfile.test
new file mode 100644
index 0000000..2452cd8
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_loadfile.test
@@ -0,0 +1,137 @@
+#
+# The test was taken from the rpl suite, with an additional check
+# for binlog encryption at the end
+#
+
+#############################################################################
+# Original Author: JBM                                                      #
+# Original Date: Aug/18/2005                                                #
+#############################################################################
+# TEST: To test the LOAD_FILE() in rbr                                      #
+#############################################################################
+# Change Author: JBM
+# Change Date: 2006-01-16
+##########
+
+# Includes
+-- source include/have_binlog_format_mixed_or_row.inc
+-- source include/master-slave.inc
+--enable_connect_log
+-- source extra/rpl_tests/rpl_loadfile.test
+
+# BUG#39701: Mixed binlog format does not switch to row mode on LOAD_FILE
+#
+# DESCRIPTION
+#
+# Problem: when using load_file string function and mixed binlogging format
+#          there was no switch to row based binlogging format. This leads
+#          to scenarios on which the slave replicates the statement and it
+#          will try to load the file from local file system, which in most
+#          likely it will not exist.
+#
+# Solution:
+#          Marking this function as unsafe for statement format, makes the
+#          statement using it to be logged in row based format. As such, data
+#          replicated from the master, becomes the content of the loaded file.
+#          Consequently, the slave receives the necessary data to complete
+#          the load_file instruction correctly.
+#
+# IMPLEMENTATION
+#
+# The test is implemented as follows:
+#
+# On Master,
+#   i) write to file the desired content.
+#   ii) create table and stored procedure with load_file
+#   iii) stop slave
+#   iii) execute load_file
+#   iv) remove file
+#
+# On Slave,
+#   v) start slave
+#   vi) sync it with master so that it gets the updates from binlog (which
+#      should have bin logged in row format).
+#
+#      If the binlog format does not change to row, then the assertion
+#      done in the following step fails. This happens because tables differ
+#      since the file does not exist anymore, meaning that when slave
+#      attempts to execute LOAD_FILE statement it inserts NULL on table
+#      instead of the same contents that the master loaded when it executed
+#      the procedure (which was executed when file existed).
+#
+#   vii) assert that the contents of master and slave
+#        table are the same
+
+--disable_connect_log
+--source include/rpl_reset.inc
+--enable_connect_log
+
+connection master;
+let $file= $MYSQLTEST_VARDIR/tmp/bug_39701.data;
+
+--replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+--eval SELECT repeat('x',20) INTO OUTFILE '$file'
+
+disable_warnings;
+DROP TABLE IF EXISTS t1;
+enable_warnings;
+
+CREATE TABLE t1 (t text);
+DELIMITER |;
+CREATE PROCEDURE p(file varchar(4096))
+  BEGIN
+ INSERT INTO t1 VALUES (LOAD_FILE(file));
+ END|
+DELIMITER ;|
+
+# stop slave before issuing the load_file on master
+connection slave;
+--disable_connect_log
+source include/stop_slave.inc;
+--enable_connect_log
+
+connection master;
+
+# test: check that logging falls back to rbr.
+--replace_result $MYSQLTEST_VARDIR MYSQLTEST_VARDIR
+--eval CALL p('$file')
+
+# test: remove the file from the filesystem and assert that slave still
+# gets the loaded file
+remove_file $file;
+
+# now that the file is removed it is safe (regarding what we want to test)
+# to start slave
+connection slave;
+--disable_connect_log
+source include/start_slave.inc;
+--enable_connect_log
+
+connection master;
+sync_slave_with_master;
+
+# assertion: assert that the slave got the updates even
+# if the file was removed before the slave started,
+# meaning that contents were indeed transfered
+# through binlog (in row format)
+let $diff_tables= master:t1, slave:t1;
+--disable_connect_log
+source include/diff_tables.inc;
+--enable_connect_log
+
+# CLEAN UP
+--connection master
+DROP TABLE t1;
+DROP PROCEDURE p;
+
+--disable_connect_log
+--source include/rpl_end.inc
+
+--echo #
+--echo # Check that the loaded data is encrypted in the master binlog
+--echo #
+
+--let search_files=master-bin.0*
+--let search_pattern= xxxxxxxxxxx
+--let search_result= 0
+--source grep_binlog.inc
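[Editor's note, not part of the patch: as background for the Bug#39701 portion of this test, LOAD_FILE() is marked unsafe for statement-based logging, so under MIXED format the statement is binlogged as row events and the slave never needs the file on its own disk. A standalone sketch; table and path names are illustrative.]

```sql
-- Under binlog_format=MIXED, a statement containing LOAD_FILE() switches
-- to row-based logging, so the file contents travel inside the binlog.
SET SESSION binlog_format = 'MIXED';
CREATE TABLE t1 (t TEXT);
INSERT INTO t1 VALUES (LOAD_FILE('/tmp/bug_39701.data'));
-- The insert should appear as Table_map + Write_rows events rather than
-- as the original statement text:
SHOW BINLOG EVENTS;
```

This is exactly why the test can remove the file before starting the slave: the loaded bytes are already inside the row events.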
diff --git a/mysql-test/suite/binlog_encryption/rpl_mixed_binlog_max_cache_size.result b/mysql-test/suite/binlog_encryption/rpl_mixed_binlog_max_cache_size.result
new file mode 100644
index 0000000..388c8e6
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_mixed_binlog_max_cache_size.result
@@ -0,0 +1,210 @@
+include/master-slave.inc
+[connection master]
+call mtr.add_suppression("Unsafe statement written to the binary log
using statement format since BINLOG_FORMAT = STATEMENT");
+SET GLOBAL max_binlog_cache_size = 4096;
+SET GLOBAL binlog_cache_size = 4096;
+SET GLOBAL max_binlog_stmt_cache_size = 4096;
+SET GLOBAL binlog_stmt_cache_size = 4096;
+disconnect master;
+connect master,127.0.0.1,root,,test,$MASTER_MYPORT,;
+CREATE TABLE t1(a INT PRIMARY KEY, data VARCHAR(30000)) ENGINE=Innodb;
+CREATE TABLE t2(a INT PRIMARY KEY, data VARCHAR(30000)) ENGINE=MyIsam;
+CREATE TABLE t3(a INT PRIMARY KEY, data VARCHAR(30000)) ENGINE=Innodb;
+########################################################################################
+# 1 - SINGLE STATEMENT
+########################################################################################
+connection master;
+*** Single statement on transactional table ***
+Got one of the listed errors
+*** Single statement on non-transactional table ***
+Got one of the listed errors
+include/wait_for_slave_sql_error_and_skip.inc [errno=1590]
+*** Single statement on both transactional and non-transactional
tables. ***
+Got one of the listed errors
+include/wait_for_slave_sql_error_and_skip.inc [errno=1590]
+include/diff_tables.inc [master:t1,slave:t1]
+########################################################################################
+# 2 - BEGIN - IMPLICIT COMMIT by DDL
+########################################################################################
+connection master;
+TRUNCATE TABLE t1;
+TRUNCATE TABLE t2;
+TRUNCATE TABLE t3;
+set default_storage_engine=innodb;
+BEGIN;
+Got one of the listed errors
+Got one of the listed errors
+Got one of the listed errors
+INSERT INTO t1 (a, data) VALUES (7, 's');;
+INSERT INTO t2 (a, data) VALUES (8, 's');;
+INSERT INTO t1 (a, data) VALUES (9, 's');;
+ALTER TABLE t3 ADD COLUMN d int;
+BEGIN;
+Got one of the listed errors
+Got one of the listed errors
+INSERT INTO t1 (a, data) VALUES (19, 's');;
+INSERT INTO t2 (a, data) VALUES (20, 's');;
+INSERT INTO t1 (a, data) VALUES (21, 's');;
+CREATE TABLE t4 SELECT * FROM t1;
+BEGIN;
+Got one of the listed errors
+Got one of the listed errors
+INSERT INTO t1 (a, data) VALUES (27, 's');;
+INSERT INTO t2 (a, data) VALUES (28, 's');;
+INSERT INTO t1 (a, data) VALUES (29, 's');;
+CREATE TABLE t5 (a int);
+connection slave;
+include/diff_tables.inc [master:t1,slave:t1]
+########################################################################################
+# 3 - BEGIN - COMMIT
+########################################################################################
+connection master;
+TRUNCATE TABLE t1;
+TRUNCATE TABLE t2;
+TRUNCATE TABLE t3;
+BEGIN;
+Got one of the listed errors
+Got one of the listed errors
+Got one of the listed errors
+COMMIT;
+connection slave;
+include/diff_tables.inc [master:t1,slave:t1]
+########################################################################################
+# 4 - BEGIN - ROLLBACK
+########################################################################################
+connection master;
+TRUNCATE TABLE t1;
+TRUNCATE TABLE t2;
+TRUNCATE TABLE t3;
+BEGIN;
+Got one of the listed errors
+Got one of the listed errors
+Got one of the listed errors
+ROLLBACK;
+Warnings:
+Warning 1196 Some non-transactional changed tables couldn't be rolled back
+connection slave;
+include/diff_tables.inc [master:t1,slave:t1]
+########################################################################################
+# 5 - PROCEDURE
+########################################################################################
+connection master;
+TRUNCATE TABLE t1;
+TRUNCATE TABLE t2;
+TRUNCATE TABLE t3;
+CREATE PROCEDURE p1(pd VARCHAR(30000))
+BEGIN
+INSERT INTO t1 (a, data) VALUES (1, pd);
+INSERT INTO t1 (a, data) VALUES (2, pd);
+INSERT INTO t1 (a, data) VALUES (3, pd);
+INSERT INTO t1 (a, data) VALUES (4, pd);
+INSERT INTO t1 (a, data) VALUES (5, 's');
+END//
+TRUNCATE TABLE t1;
+TRUNCATE TABLE t1;
+BEGIN;
+Got one of the listed errors
+COMMIT;
+TRUNCATE TABLE t1;
+BEGIN;
+Got one of the listed errors
+ROLLBACK;
+connection slave;
+include/diff_tables.inc [master:t1,slave:t1]
+########################################################################################
+# 6 - XID
+########################################################################################
+connection master;
+TRUNCATE TABLE t1;
+TRUNCATE TABLE t2;
+TRUNCATE TABLE t3;
+BEGIN;
+Got one of the listed errors
+Got one of the listed errors
+Got one of the listed errors
+INSERT INTO t1 (a, data) VALUES (7, 's');;
+INSERT INTO t2 (a, data) VALUES (8, 's');;
+INSERT INTO t1 (a, data) VALUES (9, 's');;
+ROLLBACK TO sv;
+Warnings:
+Warning 1196 Some non-transactional changed tables couldn't be rolled back
+COMMIT;
+connection slave;
+include/diff_tables.inc [master:t1,slave:t1]
+########################################################################################
+# 7 - NON-TRANS TABLE
+########################################################################################
+connection master;
+TRUNCATE TABLE t1;
+TRUNCATE TABLE t2;
+TRUNCATE TABLE t3;
+BEGIN;
+Got one of the listed errors
+Got one of the listed errors
+Got one of the listed errors
+INSERT INTO t1 (a, data) VALUES (8, 's');;
+INSERT INTO t1 (a, data) VALUES (9, 's');;
+INSERT INTO t2 (a, data) VALUES (10, 's');;
+INSERT INTO t1 (a, data) VALUES (11, 's');;
+COMMIT;
+BEGIN;
+Got one of the listed errors
+COMMIT;
+connection slave;
+include/diff_tables.inc [master:t1,slave:t1]
+########################################################################
+# 8 - Bug#55375(Regression Bug) Transaction bigger than
+# max_binlog_cache_size crashes slave
+########################################################################
+# [ On Slave ]
+SET GLOBAL max_binlog_cache_size = 4096;
+SET GLOBAL binlog_cache_size = 4096;
+SET GLOBAL max_binlog_stmt_cache_size = 4096;
+SET GLOBAL binlog_stmt_cache_size = 4096;
+include/stop_slave.inc
+include/start_slave.inc
+CALL mtr.add_suppression("Multi-statement transaction required more
than 'max_binlog_cache_size' bytes of storage.*");
+CALL mtr.add_suppression("Multi-statement transaction required more
than 'max_binlog_stmt_cache_size' bytes of storage.*");
+CALL mtr.add_suppression("Writing one row to the row-based binary log
failed.*");
+CALL mtr.add_suppression("Slave SQL.*The incident LOST_EVENTS occurred
on the master. Message: error writing to the binary log");
+connection master;
+TRUNCATE t1;
+connection slave;
+connection master;
+SET GLOBAL max_binlog_cache_size= ORIGINAL_VALUE;
+SET GLOBAL binlog_cache_size= ORIGINAL_VALUE;
+SET GLOBAL max_binlog_stmt_cache_size= ORIGINAL_VALUE;
+SET GLOBAL binlog_stmt_cache_size= ORIGINAL_VALUE;
+disconnect master;
+connect master,127.0.0.1,root,,test,$MASTER_MYPORT,;
+BEGIN;
+Repeat statement 'INSERT INTO t1 VALUES($n, repeat("a", 32))' 128 times
+COMMIT;
+connection slave;
+include/wait_for_slave_sql_error.inc [errno=1197]
+SELECT count(*) FROM t1;
+count(*)
+0
+include/show_binlog_events.inc
+SET GLOBAL max_binlog_cache_size= ORIGINAL_VALUE;
+SET GLOBAL binlog_cache_size= ORIGINAL_VALUE;
+SET GLOBAL max_binlog_stmt_cache_size= ORIGINAL_VALUE;
+SET GLOBAL binlog_stmt_cache_size= ORIGINAL_VALUE;
+include/stop_slave.inc
+include/start_slave.inc
+connection master;
+connection slave;
+SELECT count(*) FROM t1;
+count(*)
+128
+########################################################################################
+# CLEAN
+########################################################################################
+connection master;
+DROP TABLE t1;
+DROP TABLE t2;
+DROP TABLE t3;
+DROP TABLE IF EXISTS t4;
+DROP TABLE t5;
+DROP PROCEDURE p1;
+include/rpl_end.inc
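[Editor's note, not part of the patch: the 4096-byte cache settings at the top of this test are what trigger the "Got one of the listed errors" lines throughout the result file. Any transaction whose binlog events exceed the cache fails with ER_TRANS_CACHE_FULL (1197) or ER_STMT_CACHE_FULL. A minimal sketch of the failure mode; values are illustrative, and note that the test reconnects because a session samples binlog_cache_size when it starts.]

```sql
-- Shrink the binlog caches, then overflow them with one oversized row.
SET GLOBAL max_binlog_cache_size = 4096;
SET GLOBAL binlog_cache_size = 4096;
-- (reconnect here so the session picks up the new cache sizes)
CREATE TABLE big (a INT PRIMARY KEY, data VARCHAR(30000)) ENGINE=InnoDB;
BEGIN;
-- Expected to fail once the row events no longer fit in the
-- 4096-byte transaction cache (ER_TRANS_CACHE_FULL):
INSERT INTO big VALUES (1, REPEAT('a', 20000));
ROLLBACK;
```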
diff --git a/mysql-test/suite/binlog_encryption/rpl_mixed_binlog_max_cache_size.test b/mysql-test/suite/binlog_encryption/rpl_mixed_binlog_max_cache_size.test
new file mode 100644
index 0000000..28c43d2
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_mixed_binlog_max_cache_size.test
@@ -0,0 +1,489 @@
+#
+# The test was taken from the rpl suite, with some structural changes
+#
+
+--source include/master-slave.inc
+--source include/not_embedded.inc
+--source include/not_windows.inc
+--source include/have_binlog_format_mixed.inc
+
+--enable_connect_log
+
+########################################################################################
+# This test verifies that the binlog is not corrupted when the cache buffer
+# is not big enough to accommodate the changes. It is divided into five steps:
+#
+# 1 - Single Statements:
+# 1.1 - Single statement on transactional table.
+#   1.2 - Single statement on non-transactional table.
+#   1.3 - Single statement on both transactional and non-transactional tables.
+#         In both 1.2 and 1.3, an incident event is logged to notify the user
+#         that the master and slave are diverging.
+#
+# 2 - Transactions ended by an implicit commit.
+#
+# 3 - Transactions ended by a COMMIT.
+#
+# 4 - Transactions ended by a ROLLBACK.
+#
+# 5 - Transactions with a failing statement that updates a non-transactional
+#     table. In this case, a failure means that the statement does not get
+#     into the cache and an incident event is logged to notify the user that
+#     the master and slave are diverging.
+#
+########################################################################################
+
+call mtr.add_suppression("Unsafe statement written to the binary log
using statement format since BINLOG_FORMAT = STATEMENT");
+
+let $old_max_binlog_cache_size= query_get_value(SHOW VARIABLES LIKE
"max_binlog_cache_size", Value, 1);
+let $old_binlog_cache_size= query_get_value(SHOW VARIABLES LIKE
"binlog_cache_size", Value, 1);
+let $old_max_binlog_stmt_cache_size= query_get_value(SHOW VARIABLES
LIKE "max_binlog_stmt_cache_size", Value, 1);
+let $old_binlog_stmt_cache_size= query_get_value(SHOW VARIABLES LIKE
"binlog_stmt_cache_size", Value, 1);
+
+SET GLOBAL max_binlog_cache_size = 4096;
+SET GLOBAL binlog_cache_size = 4096;
+SET GLOBAL max_binlog_stmt_cache_size = 4096;
+SET GLOBAL binlog_stmt_cache_size = 4096;
+disconnect master;
+connect (master,127.0.0.1,root,,test,$MASTER_MYPORT,);
+
+CREATE TABLE t1(a INT PRIMARY KEY, data VARCHAR(30000)) ENGINE=Innodb;
+CREATE TABLE t2(a INT PRIMARY KEY, data VARCHAR(30000)) ENGINE=MyIsam;
+CREATE TABLE t3(a INT PRIMARY KEY, data VARCHAR(30000)) ENGINE=Innodb;
+
+let $data = `select concat('"', repeat('a',2000), '"')`;
+
+--echo
########################################################################################
+--echo # 1 - SINGLE STATEMENT
+--echo
########################################################################################
+
+connection master;
+
+--echo *** Single statement on transactional table ***
+--disable_query_log
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+eval INSERT INTO t1 (a, data) VALUES (1,
+ CONCAT($data, $data, $data, $data, $data));
+--enable_query_log
+
+--echo *** Single statement on non-transactional table ***
+--disable_query_log
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+eval INSERT INTO t2 (a, data) VALUES (2,
+ CONCAT($data, $data, $data, $data, $data, $data));
+--enable_query_log
+
+# Incident event
+# 1590=ER_SLAVE_INCIDENT
+--let $slave_sql_errno= 1590
+--disable_connect_log
+--source include/wait_for_slave_sql_error_and_skip.inc
+--enable_connect_log
+
+--disable_query_log
+eval INSERT INTO t1 (a, data) VALUES (3, $data);
+eval INSERT INTO t1 (a, data) VALUES (4, $data);
+eval INSERT INTO t1 (a, data) VALUES (5, $data);
+eval INSERT INTO t2 (a, data) VALUES (3, $data);
+eval INSERT INTO t2 (a, data) VALUES (4, $data);
+eval INSERT INTO t2 (a, data) VALUES (5, $data);
+--enable_query_log
+
+--echo *** Single statement on both transactional and non-transactional
tables. ***
+--disable_query_log
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+eval UPDATE t2, t1 SET t2.data = CONCAT($data, $data, $data, $data),
+ t1.data = CONCAT($data, $data, $data, $data);
+--enable_query_log
+
+# 1590=ER_SLAVE_INCIDENT
+--let $slave_sql_errno= 1590
+--let $slave_skip_counter= `SELECT IF(@@binlog_format = 'ROW', 2, 1)`
+--disable_connect_log
+--source include/wait_for_slave_sql_error_and_skip.inc
+
+--let $diff_tables= master:t1,slave:t1
+--source include/diff_tables.inc
+--enable_connect_log
+
+--echo
########################################################################################
+--echo # 2 - BEGIN - IMPLICIT COMMIT by DDL
+--echo
########################################################################################
+
+connection master;
+TRUNCATE TABLE t1;
+TRUNCATE TABLE t2;
+TRUNCATE TABLE t3;
+set default_storage_engine=innodb;
+
+BEGIN;
+--disable_query_log
+--eval INSERT INTO t1 (a, data) VALUES (1, $data);
+--eval INSERT INTO t1 (a, data) VALUES (2, $data);
+--eval INSERT INTO t1 (a, data) VALUES (3, $data);
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+--eval INSERT INTO t1 (a, data) VALUES (4, $data);
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+--eval INSERT INTO t1 (a, data) VALUES (5, $data);
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+--eval INSERT INTO t1 (a, data) VALUES (6, $data);
+--enable_query_log
+--eval INSERT INTO t1 (a, data) VALUES (7, 's');
+--eval INSERT INTO t2 (a, data) VALUES (8, 's');
+--eval INSERT INTO t1 (a, data) VALUES (9, 's');
+
+ALTER TABLE t3 ADD COLUMN d int;
+
+--disable_query_log
+--eval INSERT INTO t2 (a, data) VALUES (10, $data);
+--eval INSERT INTO t2 (a, data) VALUES (11, $data);
+--eval INSERT INTO t2 (a, data) VALUES (12, $data);
+--eval INSERT INTO t2 (a, data) VALUES (13, $data);
+--enable_query_log
+
+BEGIN;
+--disable_query_log
+--eval INSERT INTO t1 (a, data) VALUES (14, $data);
+--eval INSERT INTO t1 (a, data) VALUES (15, $data);
+--eval INSERT INTO t1 (a, data) VALUES (16, $data);
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+--eval INSERT INTO t1 (a, data) VALUES (17, $data);
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+--eval INSERT INTO t1 (a, data) VALUES (18, $data);
+--enable_query_log
+--eval INSERT INTO t1 (a, data) VALUES (19, 's');
+--eval INSERT INTO t2 (a, data) VALUES (20, 's');
+--eval INSERT INTO t1 (a, data) VALUES (21, 's');
+
+if (`SELECT @@binlog_format = 'STATEMENT' || @@binlog_format = 'MIXED'`)
+{
+ CREATE TABLE t4 SELECT * FROM t1;
+}
+if (`SELECT @@binlog_format = 'ROW'`)
+{
+ --error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+ CREATE TABLE t4 SELECT * FROM t1;
+}
+
+--disable_query_log
+--eval INSERT INTO t2 (a, data) VALUES (15, $data);
+--enable_query_log
+
+BEGIN;
+--disable_query_log
+--eval INSERT INTO t1 (a, data) VALUES (22, $data);
+--eval INSERT INTO t1 (a, data) VALUES (23, $data);
+--eval INSERT INTO t1 (a, data) VALUES (24, $data);
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+--eval INSERT INTO t1 (a, data) VALUES (25, $data);
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+--eval INSERT INTO t1 (a, data) VALUES (26, $data);
+--enable_query_log
+--eval INSERT INTO t1 (a, data) VALUES (27, 's');
+--eval INSERT INTO t2 (a, data) VALUES (28, 's');
+--eval INSERT INTO t1 (a, data) VALUES (29, 's');
+
+CREATE TABLE t5 (a int);
+
+--sync_slave_with_master
+--let $diff_tables= master:t1,slave:t1
+--disable_connect_log
+--source include/diff_tables.inc
+--enable_connect_log
+
+--echo ########################################################################################
+--echo # 3 - BEGIN - COMMIT
+--echo ########################################################################################
+
+connection master;
+TRUNCATE TABLE t1;
+TRUNCATE TABLE t2;
+TRUNCATE TABLE t3;
+
+BEGIN;
+--disable_query_log
+--eval INSERT INTO t1 (a, data) VALUES (1, $data);
+--eval INSERT INTO t1 (a, data) VALUES (2, $data);
+--eval INSERT INTO t1 (a, data) VALUES (3, $data);
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+--eval INSERT INTO t1 (a, data) VALUES (4, $data);
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+--eval INSERT INTO t1 (a, data) VALUES (5, $data);
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+--eval INSERT INTO t1 (a, data) VALUES (6, $data);
+--eval INSERT INTO t1 (a, data) VALUES (7, 's');
+--eval INSERT INTO t2 (a, data) VALUES (8, 's');
+--eval INSERT INTO t1 (a, data) VALUES (9, 's');
+--enable_query_log
+COMMIT;
+
+--sync_slave_with_master
+--let $diff_tables= master:t1,slave:t1
+--disable_connect_log
+--source include/diff_tables.inc
+--enable_connect_log
+
+--echo ########################################################################################
+--echo # 4 - BEGIN - ROLLBACK
+--echo ########################################################################################
+
+connection master;
+TRUNCATE TABLE t1;
+TRUNCATE TABLE t2;
+TRUNCATE TABLE t3;
+
+BEGIN;
+--disable_query_log
+--eval INSERT INTO t1 (a, data) VALUES (1, $data);
+--eval INSERT INTO t1 (a, data) VALUES (2, $data);
+--eval INSERT INTO t1 (a, data) VALUES (3, $data);
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+--eval INSERT INTO t1 (a, data) VALUES (4, $data);
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+--eval INSERT INTO t1 (a, data) VALUES (5, $data);
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+--eval INSERT INTO t1 (a, data) VALUES (6, $data);
+--eval INSERT INTO t1 (a, data) VALUES (7, 's');
+--eval INSERT INTO t2 (a, data) VALUES (8, 's');
+--eval INSERT INTO t1 (a, data) VALUES (9, 's');
+--enable_query_log
+ROLLBACK;
+
+--sync_slave_with_master
+--let $diff_tables= master:t1,slave:t1
+--disable_connect_log
+--source include/diff_tables.inc
+--enable_connect_log
+
+--echo ########################################################################################
+--echo # 5 - PROCEDURE
+--echo ########################################################################################
+
+connection master;
+TRUNCATE TABLE t1;
+TRUNCATE TABLE t2;
+TRUNCATE TABLE t3;
+
+DELIMITER //;
+
+CREATE PROCEDURE p1(pd VARCHAR(30000))
+BEGIN
+ INSERT INTO t1 (a, data) VALUES (1, pd);
+ INSERT INTO t1 (a, data) VALUES (2, pd);
+ INSERT INTO t1 (a, data) VALUES (3, pd);
+ INSERT INTO t1 (a, data) VALUES (4, pd);
+ INSERT INTO t1 (a, data) VALUES (5, 's');
+END//
+
+DELIMITER ;//
+
+TRUNCATE TABLE t1;
+
+--disable_query_log
+eval CALL p1($data);
+--enable_query_log
+
+TRUNCATE TABLE t1;
+
+BEGIN;
+--disable_query_log
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+eval CALL p1($data);
+--enable_query_log
+COMMIT;
+
+TRUNCATE TABLE t1;
+
+BEGIN;
+--disable_query_log
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+eval CALL p1($data);
+--enable_query_log
+ROLLBACK;
+
+--sync_slave_with_master
+--let $diff_tables= master:t1,slave:t1
+--disable_connect_log
+--source include/diff_tables.inc
+--enable_connect_log
+
+--echo ########################################################################################
+--echo # 6 - XID
+--echo ########################################################################################
+
+connection master;
+TRUNCATE TABLE t1;
+TRUNCATE TABLE t2;
+TRUNCATE TABLE t3;
+
+BEGIN;
+--disable_query_log
+--eval INSERT INTO t1 (a, data) VALUES (1, $data);
+--eval INSERT INTO t1 (a, data) VALUES (2, $data);
+--eval INSERT INTO t1 (a, data) VALUES (3, $data);
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+--eval INSERT INTO t1 (a, data) VALUES (4, $data);
+SAVEPOINT sv;
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+--eval INSERT INTO t1 (a, data) VALUES (5, $data);
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+--eval INSERT INTO t1 (a, data) VALUES (6, $data);
+--enable_query_log
+--eval INSERT INTO t1 (a, data) VALUES (7, 's');
+--eval INSERT INTO t2 (a, data) VALUES (8, 's');
+--eval INSERT INTO t1 (a, data) VALUES (9, 's');
+ROLLBACK TO sv;
+COMMIT;
+
+--sync_slave_with_master
+--let $diff_tables= master:t1,slave:t1
+--disable_connect_log
+--source include/diff_tables.inc
+--enable_connect_log
+
+--echo ########################################################################################
+--echo # 7 - NON-TRANS TABLE
+--echo ########################################################################################
+
+connection master;
+TRUNCATE TABLE t1;
+TRUNCATE TABLE t2;
+TRUNCATE TABLE t3;
+
+BEGIN;
+--disable_query_log
+--eval INSERT INTO t1 (a, data) VALUES (1, $data);
+--eval INSERT INTO t1 (a, data) VALUES (2, $data);
+--eval INSERT INTO t2 (a, data) VALUES (3, $data);
+--eval INSERT INTO t1 (a, data) VALUES (4, $data);
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+--eval INSERT INTO t1 (a, data) VALUES (5, $data);
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+--eval INSERT INTO t1 (a, data) VALUES (6, $data);
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+--eval INSERT INTO t1 (a, data) VALUES (7, $data);
+--eval UPDATE t2 SET data= CONCAT($data, $data);
+--enable_query_log
+--eval INSERT INTO t1 (a, data) VALUES (8, 's');
+--eval INSERT INTO t1 (a, data) VALUES (9, 's');
+--eval INSERT INTO t2 (a, data) VALUES (10, 's');
+--eval INSERT INTO t1 (a, data) VALUES (11, 's');
+COMMIT;
+
+BEGIN;
+--disable_query_log
+--eval INSERT INTO t1 (a, data) VALUES (15, $data);
+--eval INSERT INTO t1 (a, data) VALUES (16, $data);
+--eval INSERT INTO t2 (a, data) VALUES (17, $data);
+--eval INSERT INTO t1 (a, data) VALUES (18, $data);
+--error ER_TRANS_CACHE_FULL, ER_STMT_CACHE_FULL, ER_ERROR_ON_WRITE
+--eval INSERT INTO t1 (a, data) VALUES (19, $data);
+--enable_query_log
+COMMIT;
+
+--sync_slave_with_master
+--let $diff_tables= master:t1,slave:t1
+--disable_connect_log
+--source include/diff_tables.inc
+--enable_connect_log
+
+--echo ########################################################################
+--echo # 8 - Bug#55375(Regression Bug) Transaction bigger than
+--echo # max_binlog_cache_size crashes slave
+--echo ########################################################################
+
+--echo # [ On Slave ]
+SET GLOBAL max_binlog_cache_size = 4096;
+SET GLOBAL binlog_cache_size = 4096;
+SET GLOBAL max_binlog_stmt_cache_size = 4096;
+SET GLOBAL binlog_stmt_cache_size = 4096;
+
+--disable_connect_log
+source include/stop_slave.inc;
+source include/start_slave.inc;
+--enable_connect_log
+CALL mtr.add_suppression("Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage.*");
+CALL mtr.add_suppression("Multi-statement transaction required more than 'max_binlog_stmt_cache_size' bytes of storage.*");
+CALL mtr.add_suppression("Writing one row to the row-based binary log failed.*");
+CALL mtr.add_suppression("Slave SQL.*The incident LOST_EVENTS occurred on the master. Message: error writing to the binary log");
+
+connection master;
+TRUNCATE t1;
+
+sync_slave_with_master;
+--let binlog_start= query_get_value(SHOW MASTER STATUS, Position, 1)
+--let binlog_file= query_get_value(SHOW MASTER STATUS, File, 1)
+
+connection master;
+--replace_result $old_max_binlog_cache_size ORIGINAL_VALUE
+--eval SET GLOBAL max_binlog_cache_size= $old_max_binlog_cache_size
+--replace_result $old_binlog_cache_size ORIGINAL_VALUE
+--eval SET GLOBAL binlog_cache_size= $old_binlog_cache_size
+--replace_result $old_max_binlog_stmt_cache_size ORIGINAL_VALUE
+--eval SET GLOBAL max_binlog_stmt_cache_size= $old_max_binlog_stmt_cache_size
+--replace_result $old_binlog_stmt_cache_size ORIGINAL_VALUE
+--eval SET GLOBAL binlog_stmt_cache_size= $old_binlog_stmt_cache_size
+disconnect master;
+connect (master,127.0.0.1,root,,test,$MASTER_MYPORT,);
+
+--let $n=128
+BEGIN;
+--disable_query_log
+--echo Repeat statement 'INSERT INTO t1 VALUES(\$n, repeat("a", 32))' $n times
+while ($n)
+{
+ --eval INSERT INTO t1 VALUES ($n, repeat("a", 32))
+ --dec $n
+}
+--enable_query_log
+COMMIT;
+
+--connection slave
+--let $slave_sql_errno= 1197
+if (`SELECT @@binlog_format = 'ROW'`)
+{
+ --let $slave_sql_errno= 1534
+}
+--disable_connect_log
+source include/wait_for_slave_sql_error.inc;
+
+SELECT count(*) FROM t1;
+source include/show_binlog_events.inc;
+--enable_connect_log
+
+--replace_result $old_max_binlog_cache_size ORIGINAL_VALUE
+--eval SET GLOBAL max_binlog_cache_size= $old_max_binlog_cache_size
+--replace_result $old_binlog_cache_size ORIGINAL_VALUE
+--eval SET GLOBAL binlog_cache_size= $old_binlog_cache_size
+--replace_result $old_max_binlog_stmt_cache_size ORIGINAL_VALUE
+--eval SET GLOBAL max_binlog_stmt_cache_size= $old_max_binlog_stmt_cache_size
+--replace_result $old_binlog_stmt_cache_size ORIGINAL_VALUE
+--eval SET GLOBAL binlog_stmt_cache_size= $old_binlog_stmt_cache_size
+
+--disable_connect_log
+source include/stop_slave.inc;
+source include/start_slave.inc;
+--enable_connect_log
+
+connection master;
+sync_slave_with_master;
+SELECT count(*) FROM t1;
+
+--echo
########################################################################################
+--echo # CLEAN
+--echo
########################################################################################
+
+connection master;
+DROP TABLE t1;
+DROP TABLE t2;
+DROP TABLE t3;
+# t4 exists only if binlog_format!=row, so a warning is generated
+# if binlog_format=row
+--disable_warnings
+DROP TABLE IF EXISTS t4;
+--enable_warnings
+DROP TABLE t5;
+DROP PROCEDURE p1;
+
+--disable_connect_log
+--source include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_packet.cnf b/mysql-test/suite/binlog_encryption/rpl_packet.cnf
new file mode 100644
index 0000000..0f01aec
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_packet.cnf
@@ -0,0 +1,10 @@
+!include my.cnf
+
+[mysqld.1]
+max_allowed_packet=1024
+net_buffer_length=1024
+
+[mysqld.2]
+max_allowed_packet=1024
+net_buffer_length=1024
+slave_max_allowed_packet=1024
diff --git a/mysql-test/suite/binlog_encryption/rpl_packet.result b/mysql-test/suite/binlog_encryption/rpl_packet.result
new file mode 100644
index 0000000..4a2a5d7
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_packet.result
@@ -0,0 +1,83 @@
+include/master-slave.inc
+[connection master]
+call mtr.add_suppression("Slave I/O: Got a packet bigger than 'slave_max_allowed_packet' bytes, .*error.* 1153");
+call mtr.add_suppression("Log entry on master is longer than slave_max_allowed_packet");
+drop database if exists DB_NAME_OF_MAX_LENGTH_AKA_NAME_LEN_64_BYTES_____________________;
+create database DB_NAME_OF_MAX_LENGTH_AKA_NAME_LEN_64_BYTES_____________________;
+connection master;
+SET @@global.max_allowed_packet=1024;
+SET @@global.net_buffer_length=1024;
+connection slave;
+include/stop_slave.inc
+include/start_slave.inc
+disconnect master;
+connect master,localhost,root,,DB_NAME_OF_MAX_LENGTH_AKA_NAME_LEN_64_BYTES_____________________;
+connection master;
+select @@net_buffer_length, @@max_allowed_packet;
+@@net_buffer_length @@max_allowed_packet
+1024 1024
+create table `t1` (`f1` LONGTEXT) ENGINE=MyISAM;
+INSERT INTO `t1`(`f1`) VALUES ('aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa1023');
+connection slave;
+select count(*) from `DB_NAME_OF_MAX_LENGTH_AKA_NAME_LEN_64_BYTES_____________________`.`t1` /* must be 1 */;
+count(*)
+1
+SHOW STATUS LIKE 'Slave_running';
+Variable_name Value
+Slave_running ON
+select * from information_schema.session_status where variable_name= 'SLAVE_RUNNING';
+VARIABLE_NAME VARIABLE_VALUE
+SLAVE_RUNNING ON
+connection master;
+drop database DB_NAME_OF_MAX_LENGTH_AKA_NAME_LEN_64_BYTES_____________________;
+connection slave;
+connection master;
+SET @@global.max_allowed_packet=4096;
+SET @@global.net_buffer_length=4096;
+connection slave;
+include/stop_slave.inc
+include/start_slave.inc
+disconnect master;
+connect master, localhost, root;
+connection master;
+CREATE TABLE `t1` (`f1` LONGTEXT) ENGINE=MyISAM;
+connection slave;
+connection master;
+INSERT INTO `t1`(`f1`) VALUES ('aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaAaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
+aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa2048');
+connection slave;
+include/wait_for_slave_io_error.inc [errno=1153]
+Last_IO_Error = 'Got a packet bigger than 'slave_max_allowed_packet' bytes'
+include/stop_slave_sql.inc
+include/rpl_reset.inc
+connection master;
+DROP TABLE t1;
+connection slave;
+connection master;
+CREATE TABLE t1 (f1 int PRIMARY KEY, f2 LONGTEXT, f3 LONGTEXT) ENGINE=MyISAM;
+connection slave;
+connection master;
+INSERT INTO t1(f1, f2, f3) VALUES(1, REPEAT('a', @@global.max_allowed_packet), REPEAT('b', @@global.max_allowed_packet));
+connection slave;
+include/wait_for_slave_io_error.inc [errno=1153]
+Last_IO_Error = 'Got a packet bigger than 'slave_max_allowed_packet' bytes'
+STOP SLAVE;
+RESET SLAVE;
+connection master;
+RESET MASTER;
+SET @max_allowed_packet_0= @@session.max_allowed_packet;
+SHOW BINLOG EVENTS;
+SET @max_allowed_packet_1= @@session.max_allowed_packet;
+SHOW BINLOG EVENTS;
+SET @max_allowed_packet_2= @@session.max_allowed_packet;
+==== clean up ====
+connection master;
+DROP TABLE t1;
+SET @@global.max_allowed_packet= 1024;
+Warnings:
+Warning 1708 The value of 'max_allowed_packet' should be no less than the value of 'net_buffer_length'
+SET @@global.net_buffer_length= 1024;
+SET @@global.slave_max_allowed_packet= 1073741824;
+connection slave;
+DROP TABLE t1;
+RESET SLAVE;
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_packet.test b/mysql-test/suite/binlog_encryption/rpl_packet.test
new file mode 100644
index 0000000..42b77a8
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_packet.test
@@ -0,0 +1,199 @@
+#
+# The test was taken from the rpl suite as is
+#
+
+# ==== Purpose ====
+#
+# Check replication protocol packet size handling
+#
+# ==== Related bugs ====
+# Bug#19402 SQL close to the size of the max_allowed_packet fails on slave
+# BUG#23755: Replicated event larger that max_allowed_packet infinitely re-transmits
+# BUG#42914: No LAST_IO_ERROR for max_allowed_packet errors
+# BUG#55322: SHOW BINLOG EVENTS increases @@SESSION.MAX_ALLOWED_PACKET
+
+# max-out size db name
+source include/master-slave.inc;
+source include/have_binlog_format_row.inc;
+
+--enable_connect_log
+
+call mtr.add_suppression("Slave I/O: Got a packet bigger than 'slave_max_allowed_packet' bytes, .*error.* 1153");
+call mtr.add_suppression("Log entry on master is longer than slave_max_allowed_packet");
+let $db= DB_NAME_OF_MAX_LENGTH_AKA_NAME_LEN_64_BYTES_____________________;
+disable_warnings;
+eval drop database if exists $db;
+enable_warnings;
+eval create database $db;
+
+connection master;
+let $old_max_allowed_packet= `SELECT @@global.max_allowed_packet`;
+let $old_net_buffer_length= `SELECT @@global.net_buffer_length`;
+let $old_slave_max_allowed_packet= `SELECT @@global.slave_max_allowed_packet`;
+SET @@global.max_allowed_packet=1024;
+SET @@global.net_buffer_length=1024;
+
+sync_slave_with_master;
+# Restart slave for setting to take effect
+--disable_connect_log
+source include/stop_slave.inc;
+source include/start_slave.inc;
+--enable_connect_log
+
+# Reconnect to master for new setting to take effect
+disconnect master;
+
+# alas, can't use eval here; if db name changed apply the change here
+connect (master,localhost,root,,DB_NAME_OF_MAX_LENGTH_AKA_NAME_LEN_64_BYTES_____________________);
+
+connection master;
+select @@net_buffer_length, @@max_allowed_packet;
+
+create table `t1` (`f1` LONGTEXT) ENGINE=MyISAM;
+
+INSERT INTO `t1`(`f1`) VALUES
('aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaa
aaaaaaaaaaaaaaaaaaa1023');
+sync_slave_with_master;
+
+eval select count(*) from `$db`.`t1` /* must be 1 */;
+
+SHOW STATUS LIKE 'Slave_running';
+select * from information_schema.session_status where variable_name=
'SLAVE_RUNNING';
+connection master;
+eval drop database $db;
+sync_slave_with_master;
+
+#
+# Bug #23755: Replicated event larger that max_allowed_packet
infinitely re-transmits
+#
+# Check that a situation when the size of event on the master is
greater than +# max_allowed_packet on the slave does not lead to
infinite re-transmits.
+
+connection master;
+
+# Change the max packet size on master
+
+SET @@global.max_allowed_packet=4096;
+SET @@global.net_buffer_length=4096;
+
+# Restart slave for new setting to take effect
+connection slave;
+--disable_connect_log
+source include/stop_slave.inc;
+source include/start_slave.inc;
+--enable_connect_log
+
+# Reconnect to master for new setting to take effect
+disconnect master;
+connect (master, localhost, root);
+connection master;
+
+CREATE TABLE `t1` (`f1` LONGTEXT) ENGINE=MyISAM;
+
+sync_slave_with_master;
+
+connection master;
+
+INSERT INTO `t1`(`f1`) VALUES
('aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa2048');
+
+
+#
+# Bug#42914: The slave I/O thread must stop after trying to read the above
+# event, However there is no Last_IO_Error report.
+#
+
+# The slave I/O thread must stop after trying to read the above event
+connection slave;
+# 1153 = ER_NET_PACKET_TOO_LARGE
+--let $slave_io_errno= 1153
+--let $show_slave_io_error= 1
+--disable_connect_log
+--source include/wait_for_slave_io_error.inc
+--enable_connect_log
+
+# TODO: this is needed because of BUG#55790. Remove once that is fixed.
+--disable_connect_log
+--source include/stop_slave_sql.inc
+--enable_connect_log
+
+#
+# Bug#42914: On the master, if a binary log event is larger than
+# max_allowed_packet, the error message
ER_MASTER_FATAL_ERROR_READING_BINLOG
+# is sent to a slave when it requests a dump from the master, thus
leading the
+# I/O thread to stop. However, there is no Last_IO_Error reported.
+#
+
+--let $rpl_only_running_threads= 1
+--disable_connect_log
+--source include/rpl_reset.inc
+--enable_connect_log
+--connection master
+DROP TABLE t1;
+--sync_slave_with_master
+
+
+connection master;
+CREATE TABLE t1 (f1 int PRIMARY KEY, f2 LONGTEXT, f3 LONGTEXT)
ENGINE=MyISAM;
+sync_slave_with_master;
+
+connection master;
+INSERT INTO t1(f1, f2, f3) VALUES(1, REPEAT('a',
@@global.max_allowed_packet), REPEAT('b', @@global.max_allowed_packet));
+
+connection slave;
+# The slave I/O thread must stop after receiving
+# 1153 = ER_NET_PACKET_TOO_LARGE
+--let $slave_io_errno= 1153
+--let $show_slave_io_error= 1
+--disable_connect_log
+--source include/wait_for_slave_io_error.inc
+--enable_connect_log
+
+# Remove the bad binlog and clear error status on slave.
+STOP SLAVE;
+RESET SLAVE;
+--connection master
+RESET MASTER;
+
+
+#
+# BUG#55322: SHOW BINLOG EVENTS increases @@SESSION.MAX_ALLOWED_PACKET
+#
+# In BUG#55322, @@session.max_allowed_packet increased each time SHOW
+# BINLOG EVENTS was issued. To verify that this bug is fixed, we
+# execute SHOW BINLOG EVENTS twice and check that max_allowed_packet
+# never changes. We turn off the result log because we don't care
+# about the contents of the binlog.
+
+--disable_result_log
+SET @max_allowed_packet_0= @@session.max_allowed_packet;
+SHOW BINLOG EVENTS;
+SET @max_allowed_packet_1= @@session.max_allowed_packet;
+SHOW BINLOG EVENTS;
+SET @max_allowed_packet_2= @@session.max_allowed_packet;
+--enable_result_log
+if (`SELECT NOT(@max_allowed_packet_0 = @max_allowed_packet_1 AND
@max_allowed_packet_1 = @max_allowed_packet_2)`)
+{
+ --echo ERROR: max_allowed_packet changed after executing SHOW BINLOG
EVENTS
+ --disable_connect_log
+ --source include/show_rpl_debug_info.inc
+ --enable_connect_log
+ SELECT @max_allowed_packet_0, @max_allowed_packet_1,
@max_allowed_packet_2;
+ --die @max_allowed_packet changed after executing SHOW BINLOG EVENTS
+}
+
+
+--echo ==== clean up ====
+connection master;
+DROP TABLE t1;
+eval SET @@global.max_allowed_packet= $old_max_allowed_packet;
+eval SET @@global.net_buffer_length= $old_net_buffer_length;
+eval SET @@global.slave_max_allowed_packet= $old_slave_max_allowed_packet;
+# slave is stopped
+connection slave;
+DROP TABLE t1;
+
+# Clear Last_IO_Error
+RESET SLAVE;
+
+--disable_connect_log
+--source include/rpl_end.inc
+# End of tests
diff --git a/mysql-test/suite/binlog_encryption/rpl_parallel.result
b/mysql-test/suite/binlog_encryption/rpl_parallel.result
new file mode 100644
index 0000000..9774836
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_parallel.result
@@ -0,0 +1,2025 @@
+include/master-slave.inc
+[connection master]
+connection server_2;
+SET @old_parallel_threads=@@GLOBAL.slave_parallel_threads;
+SET GLOBAL slave_parallel_threads=10;
+ERROR HY000: This operation cannot be performed as you have a running
slave ''; run STOP SLAVE '' first
+include/stop_slave.inc
+SET GLOBAL slave_parallel_threads=10;
+SELECT IF(COUNT(*) < 10, "OK", CONCAT("Found too many system user
processes: ", COUNT(*))) FROM information_schema.processlist WHERE user
= "system user";
+IF(COUNT(*) < 10, "OK", CONCAT("Found too many system user processes:
", COUNT(*)))
+OK
+CHANGE MASTER TO master_use_gtid=slave_pos;
+include/start_slave.inc
+SELECT IF(COUNT(*) >= 10, "OK", CONCAT("Found too few system user
processes: ", COUNT(*))) FROM information_schema.processlist WHERE user
= "system user";
+IF(COUNT(*) >= 10, "OK", CONCAT("Found too few system user processes:
", COUNT(*)))
+OK
+include/stop_slave.inc
+SELECT IF(COUNT(*) < 10, "OK", CONCAT("Found too many system user
processes: ", COUNT(*))) FROM information_schema.processlist WHERE user
= "system user";
+IF(COUNT(*) < 10, "OK", CONCAT("Found too many system user processes:
", COUNT(*)))
+OK
+include/start_slave.inc
+SELECT IF(COUNT(*) >= 10, "OK", CONCAT("Found too few system user
processes: ", COUNT(*))) FROM information_schema.processlist WHERE user
= "system user";
+IF(COUNT(*) >= 10, "OK", CONCAT("Found too few system user processes:
", COUNT(*)))
+OK
+*** Test long-running query in domain 1 can run in parallel with short
queries in domain 0 ***
+connection server_1;
+ALTER TABLE mysql.gtid_slave_pos ENGINE=InnoDB;
+CREATE TABLE t1 (a int PRIMARY KEY) ENGINE=MyISAM;
+CREATE TABLE t2 (a int PRIMARY KEY) ENGINE=InnoDB;
+INSERT INTO t1 VALUES (1);
+INSERT INTO t2 VALUES (1);
+connection server_2;
+connect con_temp1,127.0.0.1,root,,test,$SERVER_MYPORT_2,;
+LOCK TABLE t1 WRITE;
+connection server_1;
+SET gtid_domain_id=1;
+INSERT INTO t1 VALUES (2);
+SET gtid_domain_id=0;
+INSERT INTO t2 VALUES (2);
+INSERT INTO t2 VALUES (3);
+BEGIN;
+INSERT INTO t2 VALUES (4);
+INSERT INTO t2 VALUES (5);
+COMMIT;
+INSERT INTO t2 VALUES (6);
+connection server_2;
+SELECT * FROM t2 ORDER by a;
+a
+1
+2
+3
+4
+5
+6
+connection con_temp1;
+SELECT * FROM t1;
+a
+1
+UNLOCK TABLES;
+connection server_2;
+SELECT * FROM t1 ORDER BY a;
+a
+1
+2
+*** Test two transactions in different domains committed in opposite
order on slave but in a single group commit. ***
+connection server_2;
+include/stop_slave.inc
+connection server_1;
+SET sql_log_bin=0;
+CREATE FUNCTION foo(x INT, d1 VARCHAR(500), d2 VARCHAR(500))
+RETURNS INT DETERMINISTIC
+BEGIN
+RETURN x;
+END
+||
+SET sql_log_bin=1;
+SET @old_format= @@SESSION.binlog_format;
+SET binlog_format='statement';
+SET gtid_domain_id=1;
+INSERT INTO t2 VALUES (foo(10,
+'commit_before_enqueue SIGNAL ready1 WAIT_FOR cont1',
+'commit_after_release_LOCK_prepare_ordered SIGNAL ready2'));
+connection server_2;
+FLUSH LOGS;
+SET sql_log_bin=0;
+CREATE FUNCTION foo(x INT, d1 VARCHAR(500), d2 VARCHAR(500))
+RETURNS INT DETERMINISTIC
+BEGIN
+IF d1 != '' THEN
+SET debug_sync = d1;
+END IF;
+IF d2 != '' THEN
+SET debug_sync = d2;
+END IF;
+RETURN x;
+END
+||
+SET sql_log_bin=1;
+SET @old_format=@@GLOBAL.binlog_format;
+SET GLOBAL binlog_format=statement;
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+include/start_slave.inc
+SET debug_sync='now WAIT_FOR ready1';
+connection server_1;
+SET gtid_domain_id=2;
+INSERT INTO t2 VALUES (foo(11,
+'commit_before_enqueue SIGNAL ready3 WAIT_FOR cont3',
+'commit_after_release_LOCK_prepare_ordered SIGNAL ready4 WAIT_FOR cont4'));
+SET gtid_domain_id=0;
+SELECT * FROM t2 WHERE a >= 10 ORDER BY a;
+a
+10
+11
+connection server_2;
+SET debug_sync='now WAIT_FOR ready3';
+SET debug_sync='now SIGNAL cont3';
+SET debug_sync='now WAIT_FOR ready4';
+SET debug_sync='now SIGNAL cont1';
+SET debug_sync='now WAIT_FOR ready2';
+SET debug_sync='now SIGNAL cont4';
+SELECT * FROM t2 WHERE a >= 10 ORDER BY a;
+a
+10
+11
+include/show_binlog_events.inc
+Log_name Pos Event_type Server_id End_log_pos Info
+slave-bin.000002 # Binlog_checkpoint # # slave-bin.000002
+slave-bin.000002 # Gtid # # BEGIN GTID #-#-# cid=#
+slave-bin.000002 # Query # # use `test`; INSERT INTO t2 VALUES (foo(11,
+'commit_before_enqueue SIGNAL ready3 WAIT_FOR cont3',
+'commit_after_release_LOCK_prepare_ordered SIGNAL ready4 WAIT_FOR cont4'))
+slave-bin.000002 # Xid # # COMMIT /* XID */
+slave-bin.000002 # Gtid # # BEGIN GTID #-#-# cid=#
+slave-bin.000002 # Query # # use `test`; INSERT INTO t2 VALUES (foo(10,
+'commit_before_enqueue SIGNAL ready1 WAIT_FOR cont1',
+'commit_after_release_LOCK_prepare_ordered SIGNAL ready2'))
+slave-bin.000002 # Xid # # COMMIT /* XID */
+FLUSH LOGS;
+connection server_2;
+include/stop_slave.inc
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+SET debug_sync='RESET';
+include/start_slave.inc
+*** Test that group-committed transactions on the master can replicate in parallel on the slave. ***
+connection server_1;
+SET debug_sync='RESET';
+FLUSH LOGS;
+CREATE TABLE t3 (a INT PRIMARY KEY, b INT) ENGINE=InnoDB;
+INSERT INTO t3 VALUES (1,1), (3,3), (5,5), (7,7);
+connection server_2;
+connection con_temp1;
+BEGIN;
+INSERT INTO t3 VALUES (2,102);
+connect con_temp2,127.0.0.1,root,,test,$SERVER_MYPORT_2,;
+BEGIN;
+INSERT INTO t3 VALUES (4,104);
+connect con_temp3,127.0.0.1,root,,test,$SERVER_MYPORT_1,;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued1 WAIT_FOR master_cont1';
+SET binlog_format=statement;
+INSERT INTO t3 VALUES (2, foo(12,
+'commit_after_release_LOCK_prepare_ordered SIGNAL slave_queued1 WAIT_FOR slave_cont1',
+''));
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued1';
+connect con_temp4,127.0.0.1,root,,test,$SERVER_MYPORT_1,;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued2';
+SET binlog_format=statement;
+INSERT INTO t3 VALUES (4, foo(14,
+'commit_after_release_LOCK_prepare_ordered SIGNAL slave_queued2',
+''));
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued2';
+connect con_temp5,127.0.0.1,root,,test,$SERVER_MYPORT_1,;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued3';
+SET binlog_format=statement;
+INSERT INTO t3 VALUES (6, foo(16,
+'group_commit_waiting_for_prior SIGNAL slave_queued3',
+''));
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued3';
+SET debug_sync='now SIGNAL master_cont1';
+connection con_temp3;
+connection con_temp4;
+connection con_temp5;
+SET debug_sync='RESET';
+connection server_1;
+SELECT * FROM t3 ORDER BY a;
+a b
+1 1
+2 12
+3 3
+4 14
+5 5
+6 16
+7 7
+include/show_binlog_events.inc
+Log_name Pos Event_type Server_id End_log_pos Info
+master-bin.000002 # Binlog_checkpoint # # master-bin.000001
+master-bin.000002 # Binlog_checkpoint # # master-bin.000002
+master-bin.000002 # Gtid # # GTID #-#-#
+master-bin.000002 # Query # # use `test`; CREATE TABLE t3 (a INT PRIMARY KEY, b INT) ENGINE=InnoDB
+master-bin.000002 # Gtid # # BEGIN GTID #-#-#
+master-bin.000002 # Query # # use `test`; INSERT INTO t3 VALUES (1,1), (3,3), (5,5), (7,7)
+master-bin.000002 # Xid # # COMMIT /* XID */
+master-bin.000002 # Gtid # # BEGIN GTID #-#-# cid=#
+master-bin.000002 # Query # # use `test`; INSERT INTO t3 VALUES (2, foo(12,
+'commit_after_release_LOCK_prepare_ordered SIGNAL slave_queued1 WAIT_FOR slave_cont1',
+''))
+master-bin.000002 # Xid # # COMMIT /* XID */
+master-bin.000002 # Gtid # # BEGIN GTID #-#-# cid=#
+master-bin.000002 # Query # # use `test`; INSERT INTO t3 VALUES (4, foo(14,
+'commit_after_release_LOCK_prepare_ordered SIGNAL slave_queued2',
+''))
+master-bin.000002 # Xid # # COMMIT /* XID */
+master-bin.000002 # Gtid # # BEGIN GTID #-#-# cid=#
+master-bin.000002 # Query # # use `test`; INSERT INTO t3 VALUES (6, foo(16,
+'group_commit_waiting_for_prior SIGNAL slave_queued3',
+''))
+master-bin.000002 # Xid # # COMMIT /* XID */
+connection server_2;
+SET debug_sync='now WAIT_FOR slave_queued3';
+connection con_temp1;
+ROLLBACK;
+connection server_2;
+SET debug_sync='now WAIT_FOR slave_queued1';
+connection con_temp2;
+ROLLBACK;
+connection server_2;
+SET debug_sync='now WAIT_FOR slave_queued2';
+SET debug_sync='now SIGNAL slave_cont1';
+SELECT * FROM t3 ORDER BY a;
+a b
+1 1
+2 12
+3 3
+4 14
+5 5
+6 16
+7 7
+include/show_binlog_events.inc
+Log_name Pos Event_type Server_id End_log_pos Info
+slave-bin.000003 # Binlog_checkpoint # # slave-bin.000003
+slave-bin.000003 # Gtid # # GTID #-#-#
+slave-bin.000003 # Query # # use `test`; CREATE TABLE t3 (a INT PRIMARY KEY, b INT) ENGINE=InnoDB
+slave-bin.000003 # Gtid # # BEGIN GTID #-#-#
+slave-bin.000003 # Query # # use `test`; INSERT INTO t3 VALUES (1,1), (3,3), (5,5), (7,7)
+slave-bin.000003 # Xid # # COMMIT /* XID */
+slave-bin.000003 # Gtid # # BEGIN GTID #-#-# cid=#
+slave-bin.000003 # Query # # use `test`; INSERT INTO t3 VALUES (2, foo(12,
+'commit_after_release_LOCK_prepare_ordered SIGNAL slave_queued1 WAIT_FOR slave_cont1',
+''))
+slave-bin.000003 # Xid # # COMMIT /* XID */
+slave-bin.000003 # Gtid # # BEGIN GTID #-#-# cid=#
+slave-bin.000003 # Query # # use `test`; INSERT INTO t3 VALUES (4, foo(14,
+'commit_after_release_LOCK_prepare_ordered SIGNAL slave_queued2',
+''))
+slave-bin.000003 # Xid # # COMMIT /* XID */
+slave-bin.000003 # Gtid # # BEGIN GTID #-#-# cid=#
+slave-bin.000003 # Query # # use `test`; INSERT INTO t3 VALUES (6, foo(16,
+'group_commit_waiting_for_prior SIGNAL slave_queued3',
+''))
+slave-bin.000003 # Xid # # COMMIT /* XID */
+*** Test STOP SLAVE in parallel mode ***
+connection server_2;
+include/stop_slave.inc
+SET debug_sync='RESET';
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+connection server_1;
+SET binlog_direct_non_transactional_updates=0;
+SET sql_log_bin=0;
+CALL mtr.add_suppression("Statement is unsafe because it accesses a non-transactional table after accessing a transactional table within the same transaction");
+SET sql_log_bin=1;
+BEGIN;
+INSERT INTO t2 VALUES (20);
+INSERT INTO t1 VALUES (20);
+INSERT INTO t2 VALUES (21);
+INSERT INTO t3 VALUES (20, 20);
+COMMIT;
+INSERT INTO t3 VALUES(21, 21);
+INSERT INTO t3 VALUES(22, 22);
+SET binlog_format=@old_format;
+connection con_temp1;
+BEGIN;
+INSERT INTO t2 VALUES (21);
+connection server_2;
+START SLAVE;
+connection con_temp2;
+SET @old_dbug= @@GLOBAL.debug_dbug;
+SET GLOBAL debug_dbug="+d,rpl_parallel_wait_for_done_trigger";
+STOP SLAVE;
+connection con_temp1;
+SET debug_sync='now WAIT_FOR wait_for_done_waiting';
+ROLLBACK;
+connection con_temp2;
+SET GLOBAL debug_dbug=@old_dbug;
+SET debug_sync='RESET';
+connection server_2;
+include/wait_for_slave_to_stop.inc
+SELECT * FROM t1 WHERE a >= 20 ORDER BY a;
+a
+20
+SELECT * FROM t2 WHERE a >= 20 ORDER BY a;
+a
+20
+21
+SELECT * FROM t3 WHERE a >= 20 ORDER BY a;
+a b
+20 20
+include/start_slave.inc
+SELECT * FROM t1 WHERE a >= 20 ORDER BY a;
+a
+20
+SELECT * FROM t2 WHERE a >= 20 ORDER BY a;
+a
+20
+21
+SELECT * FROM t3 WHERE a >= 20 ORDER BY a;
+a b
+20 20
+21 21
+22 22
+connection server_2;
+include/stop_slave.inc
+SET GLOBAL binlog_format=@old_format;
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+include/start_slave.inc
+*** Test killing slave threads at various wait points ***
+*** 1. Test killing transaction waiting in commit for previous transaction to commit ***
+connection con_temp3;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued1 WAIT_FOR master_cont1';
+SET binlog_format=statement;
+INSERT INTO t3 VALUES (31, foo(31,
+'commit_before_prepare_ordered WAIT_FOR t2_waiting',
+'commit_after_prepare_ordered SIGNAL t1_ready WAIT_FOR t1_cont'));
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued1';
+connection con_temp4;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued2';
+SET binlog_format=statement;
+BEGIN;
+INSERT INTO t3 VALUES (32, foo(32,
+'ha_write_row_end SIGNAL t2_query WAIT_FOR t2_cont',
+''));
+INSERT INTO t3 VALUES (33, foo(33,
+'group_commit_waiting_for_prior SIGNAL t2_waiting',
+'group_commit_waiting_for_prior_killed SIGNAL t2_killed'));
+COMMIT;
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued2';
+connection con_temp5;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued3';
+SET binlog_format=statement;
+INSERT INTO t3 VALUES (34, foo(34,
+'',
+''));
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued3';
+SET debug_sync='now SIGNAL master_cont1';
+connection con_temp3;
+connection con_temp4;
+connection con_temp5;
+connection server_1;
+SELECT * FROM t3 WHERE a >= 30 ORDER BY a;
+a b
+31 31
+32 32
+33 33
+34 34
+SET debug_sync='RESET';
+connection server_2;
+SET sql_log_bin=0;
+CALL mtr.add_suppression("Query execution was interrupted");
+CALL mtr.add_suppression("Commit failed due to failure of an earlier commit on which this one depends");
+CALL mtr.add_suppression("Slave: Connection was killed");
+SET sql_log_bin=1;
+SET debug_sync='now WAIT_FOR t2_query';
+SET debug_sync='now SIGNAL t2_cont';
+SET debug_sync='now WAIT_FOR t1_ready';
+KILL THD_ID;
+SET debug_sync='now WAIT_FOR t2_killed';
+SET debug_sync='now SIGNAL t1_cont';
+include/wait_for_slave_sql_error.inc [errno=1317,1927,1964]
+STOP SLAVE IO_THREAD;
+SELECT * FROM t3 WHERE a >= 30 ORDER BY a;
+a b
+31 31
+SET debug_sync='RESET';
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+SET sql_log_bin=0;
+DROP FUNCTION foo;
+CREATE FUNCTION foo(x INT, d1 VARCHAR(500), d2 VARCHAR(500))
+RETURNS INT DETERMINISTIC
+BEGIN
+RETURN x;
+END
+||
+SET sql_log_bin=1;
+connection server_1;
+INSERT INTO t3 VALUES (39,0);
+connection server_2;
+include/start_slave.inc
+SELECT * FROM t3 WHERE a >= 30 ORDER BY a;
+a b
+31 31
+32 32
+33 33
+34 34
+39 0
+SET sql_log_bin=0;
+DROP FUNCTION foo;
+CREATE FUNCTION foo(x INT, d1 VARCHAR(500), d2 VARCHAR(500))
+RETURNS INT DETERMINISTIC
+BEGIN
+IF d1 != '' THEN
+SET debug_sync = d1;
+END IF;
+IF d2 != '' THEN
+SET debug_sync = d2;
+END IF;
+RETURN x;
+END
+||
+SET sql_log_bin=1;
+connection server_2;
+include/stop_slave.inc
+SET GLOBAL binlog_format=@old_format;
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+include/start_slave.inc
+*** 2. Same as (1), but without restarting IO thread after kill of SQL threads ***
+connection con_temp3;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued1 WAIT_FOR master_cont1';
+SET binlog_format=statement;
+INSERT INTO t3 VALUES (41, foo(41,
+'commit_before_prepare_ordered WAIT_FOR t2_waiting',
+'commit_after_prepare_ordered SIGNAL t1_ready WAIT_FOR t1_cont'));
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued1';
+connection con_temp4;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued2';
+SET binlog_format=statement;
+BEGIN;
+INSERT INTO t3 VALUES (42, foo(42,
+'ha_write_row_end SIGNAL t2_query WAIT_FOR t2_cont',
+''));
+INSERT INTO t3 VALUES (43, foo(43,
+'group_commit_waiting_for_prior SIGNAL t2_waiting',
+'group_commit_waiting_for_prior_killed SIGNAL t2_killed'));
+COMMIT;
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued2';
+connection con_temp5;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued3';
+SET binlog_format=statement;
+INSERT INTO t3 VALUES (44, foo(44,
+'',
+''));
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued3';
+SET debug_sync='now SIGNAL master_cont1';
+connection con_temp3;
+connection con_temp4;
+connection con_temp5;
+connection server_1;
+SELECT * FROM t3 WHERE a >= 40 ORDER BY a;
+a b
+41 41
+42 42
+43 43
+44 44
+SET debug_sync='RESET';
+connection server_2;
+SET debug_sync='now WAIT_FOR t2_query';
+SET debug_sync='now SIGNAL t2_cont';
+SET debug_sync='now WAIT_FOR t1_ready';
+KILL THD_ID;
+SET debug_sync='now WAIT_FOR t2_killed';
+SET debug_sync='now SIGNAL t1_cont';
+include/wait_for_slave_sql_error.inc [errno=1317,1927,1964]
+SET debug_sync='RESET';
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+SET sql_log_bin=0;
+DROP FUNCTION foo;
+CREATE FUNCTION foo(x INT, d1 VARCHAR(500), d2 VARCHAR(500))
+RETURNS INT DETERMINISTIC
+BEGIN
+RETURN x;
+END
+||
+SET sql_log_bin=1;
+connection server_1;
+INSERT INTO t3 VALUES (49,0);
+connection server_2;
+START SLAVE SQL_THREAD;
+SELECT * FROM t3 WHERE a >= 40 ORDER BY a;
+a b
+41 41
+42 42
+43 43
+44 44
+49 0
+SET sql_log_bin=0;
+DROP FUNCTION foo;
+CREATE FUNCTION foo(x INT, d1 VARCHAR(500), d2 VARCHAR(500))
+RETURNS INT DETERMINISTIC
+BEGIN
+IF d1 != '' THEN
+SET debug_sync = d1;
+END IF;
+IF d2 != '' THEN
+SET debug_sync = d2;
+END IF;
+RETURN x;
+END
+||
+SET sql_log_bin=1;
+connection server_2;
+include/stop_slave.inc
+SET GLOBAL binlog_format=@old_format;
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+include/start_slave.inc
+*** 3. Same as (2), but not using gtid mode ***
+connection server_2;
+include/stop_slave.inc
+CHANGE MASTER TO master_use_gtid=no;
+include/start_slave.inc
+connection server_1;
+connection con_temp3;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued1 WAIT_FOR master_cont1';
+SET binlog_format=statement;
+INSERT INTO t3 VALUES (51, foo(51,
+'commit_before_prepare_ordered WAIT_FOR t2_waiting',
+'commit_after_prepare_ordered SIGNAL t1_ready WAIT_FOR t1_cont'));
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued1';
+connection con_temp4;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued2';
+SET binlog_format=statement;
+BEGIN;
+INSERT INTO t3 VALUES (52, foo(52,
+'ha_write_row_end SIGNAL t2_query WAIT_FOR t2_cont',
+''));
+INSERT INTO t3 VALUES (53, foo(53,
+'group_commit_waiting_for_prior SIGNAL t2_waiting',
+'group_commit_waiting_for_prior_killed SIGNAL t2_killed'));
+COMMIT;
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued2';
+connection con_temp5;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued3';
+SET binlog_format=statement;
+INSERT INTO t3 VALUES (54, foo(54,
+'',
+''));
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued3';
+SET debug_sync='now SIGNAL master_cont1';
+connection con_temp3;
+connection con_temp4;
+connection con_temp5;
+connection server_1;
+SELECT * FROM t3 WHERE a >= 50 ORDER BY a;
+a b
+51 51
+52 52
+53 53
+54 54
+SET debug_sync='RESET';
+connection server_2;
+SET debug_sync='now WAIT_FOR t2_query';
+SET debug_sync='now SIGNAL t2_cont';
+SET debug_sync='now WAIT_FOR t1_ready';
+KILL THD_ID;
+SET debug_sync='now WAIT_FOR t2_killed';
+SET debug_sync='now SIGNAL t1_cont';
+include/wait_for_slave_sql_error.inc [errno=1317,1927,1964]
+SELECT * FROM t3 WHERE a >= 50 ORDER BY a;
+a b
+51 51
+SET debug_sync='RESET';
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+SET sql_log_bin=0;
+DROP FUNCTION foo;
+CREATE FUNCTION foo(x INT, d1 VARCHAR(500), d2 VARCHAR(500))
+RETURNS INT DETERMINISTIC
+BEGIN
+RETURN x;
+END
+||
+SET sql_log_bin=1;
+connection server_1;
+INSERT INTO t3 VALUES (59,0);
+connection server_2;
+START SLAVE SQL_THREAD;
+SELECT * FROM t3 WHERE a >= 50 ORDER BY a;
+a b
+51 51
+52 52
+53 53
+54 54
+59 0
+SET sql_log_bin=0;
+DROP FUNCTION foo;
+CREATE FUNCTION foo(x INT, d1 VARCHAR(500), d2 VARCHAR(500))
+RETURNS INT DETERMINISTIC
+BEGIN
+IF d1 != '' THEN
+SET debug_sync = d1;
+END IF;
+IF d2 != '' THEN
+SET debug_sync = d2;
+END IF;
+RETURN x;
+END
+||
+SET sql_log_bin=1;
+include/stop_slave.inc
+CHANGE MASTER TO master_use_gtid=slave_pos;
+include/start_slave.inc
+connection server_2;
+include/stop_slave.inc
+SET GLOBAL binlog_format=@old_format;
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=4;
+include/start_slave.inc
+*** 4. Test killing thread that is waiting to start transaction until previous transaction commits ***
+connection server_1;
+SET binlog_format=statement;
+SET gtid_domain_id=2;
+BEGIN;
+INSERT INTO t3 VALUES (70, foo(70,
+'rpl_parallel_start_waiting_for_prior SIGNAL t4_waiting', ''));
+INSERT INTO t3 VALUES (60, foo(60,
+'ha_write_row_end SIGNAL d2_query WAIT_FOR d2_cont2',
+'rpl_parallel_end_of_group SIGNAL d2_done WAIT_FOR d2_cont'));
+COMMIT;
+SET gtid_domain_id=0;
+connection server_2;
+SET debug_sync='now WAIT_FOR d2_query';
+connection server_1;
+SET gtid_domain_id=1;
+BEGIN;
+INSERT INTO t3 VALUES (61, foo(61,
+'rpl_parallel_start_waiting_for_prior SIGNAL t3_waiting',
+'rpl_parallel_start_waiting_for_prior_killed SIGNAL t3_killed'));
+INSERT INTO t3 VALUES (62, foo(62,
+'ha_write_row_end SIGNAL d1_query WAIT_FOR d1_cont2',
+'rpl_parallel_end_of_group SIGNAL d1_done WAIT_FOR d1_cont'));
+COMMIT;
+SET gtid_domain_id=0;
+connection server_2;
+SET debug_sync='now WAIT_FOR d1_query';
+connection server_1;
+SET gtid_domain_id=0;
+INSERT INTO t3 VALUES (63, foo(63,
+'ha_write_row_end SIGNAL d0_query WAIT_FOR d0_cont2',
+'rpl_parallel_end_of_group SIGNAL d0_done WAIT_FOR d0_cont'));
+connection server_2;
+SET debug_sync='now WAIT_FOR d0_query';
+connection server_1;
+SET gtid_domain_id=3;
+BEGIN;
+INSERT INTO t3 VALUES (68, foo(68,
+'rpl_parallel_start_waiting_for_prior SIGNAL t2_waiting', ''));
+INSERT INTO t3 VALUES (69, foo(69,
+'ha_write_row_end SIGNAL d3_query WAIT_FOR d3_cont2',
+'rpl_parallel_end_of_group SIGNAL d3_done WAIT_FOR d3_cont'));
+COMMIT;
+SET gtid_domain_id=0;
+connection server_2;
+SET debug_sync='now WAIT_FOR d3_query';
+SET debug_sync='now SIGNAL d2_cont2';
+SET debug_sync='now WAIT_FOR d2_done';
+SET debug_sync='now SIGNAL d1_cont2';
+SET debug_sync='now WAIT_FOR d1_done';
+SET debug_sync='now SIGNAL d0_cont2';
+SET debug_sync='now WAIT_FOR d0_done';
+SET debug_sync='now SIGNAL d3_cont2';
+SET debug_sync='now WAIT_FOR d3_done';
+connection con_temp3;
+SET binlog_format=statement;
+INSERT INTO t3 VALUES (64, foo(64,
+'rpl_parallel_before_mark_start_commit SIGNAL t1_waiting WAIT_FOR t1_cont', ''));
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued2 WAIT_FOR master_cont2';
+INSERT INTO t3 VALUES (65, foo(65, '', ''));
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued2';
+connection con_temp4;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued3';
+INSERT INTO t3 VALUES (66, foo(66, '', ''));
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued3';
+connection con_temp5;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued4';
+INSERT INTO t3 VALUES (67, foo(67, '', ''));
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued4';
+SET debug_sync='now SIGNAL master_cont2';
+connection con_temp3;
+connection con_temp4;
+connection con_temp5;
+connection server_1;
+SELECT * FROM t3 WHERE a >= 60 ORDER BY a;
+a b
+60 60
+61 61
+62 62
+63 63
+64 64
+65 65
+66 66
+67 67
+68 68
+69 69
+70 70
+SET debug_sync='RESET';
+connection server_2;
+SET debug_sync='now SIGNAL d0_cont';
+SET debug_sync='now WAIT_FOR t1_waiting';
+SET debug_sync='now SIGNAL d3_cont';
+SET debug_sync='now WAIT_FOR t2_waiting';
+SET debug_sync='now SIGNAL d1_cont';
+SET debug_sync='now WAIT_FOR t3_waiting';
+SET debug_sync='now SIGNAL d2_cont';
+SET debug_sync='now WAIT_FOR t4_waiting';
+KILL THD_ID;
+SET debug_sync='now WAIT_FOR t3_killed';
+SET debug_sync='now SIGNAL t1_cont';
+include/wait_for_slave_sql_error.inc [errno=1317,1927,1964]
+STOP SLAVE IO_THREAD;
+SELECT * FROM t3 WHERE a >= 60 AND a != 65 ORDER BY a;
+a b
+60 60
+61 61
+62 62
+63 63
+64 64
+68 68
+69 69
+70 70
+SET debug_sync='RESET';
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+SET sql_log_bin=0;
+DROP FUNCTION foo;
+CREATE FUNCTION foo(x INT, d1 VARCHAR(500), d2 VARCHAR(500))
+RETURNS INT DETERMINISTIC
+BEGIN
+RETURN x;
+END
+||
+SET sql_log_bin=1;
+connection server_1;
+UPDATE t3 SET b=b+1 WHERE a=60;
+connection server_2;
+include/start_slave.inc
+SELECT * FROM t3 WHERE a >= 60 ORDER BY a;
+a b
+60 61
+61 61
+62 62
+63 63
+64 64
+65 65
+66 66
+67 67
+68 68
+69 69
+70 70
+SET sql_log_bin=0;
+DROP FUNCTION foo;
+CREATE FUNCTION foo(x INT, d1 VARCHAR(500), d2 VARCHAR(500))
+RETURNS INT DETERMINISTIC
+BEGIN
+IF d1 != '' THEN
+SET debug_sync = d1;
+END IF;
+IF d2 != '' THEN
+SET debug_sync = d2;
+END IF;
+RETURN x;
+END
+||
+SET sql_log_bin=1;
+connection server_2;
+include/stop_slave.inc
+SET GLOBAL binlog_format=@old_format;
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+include/start_slave.inc
+*** 5. Test killing thread that is waiting for queue of max length to shorten ***
+SET @old_max_queued= @@GLOBAL.slave_parallel_max_queued;
+SET GLOBAL slave_parallel_max_queued=9000;
+connection server_1;
+SET binlog_format=statement;
+INSERT INTO t3 VALUES (80, foo(0,
+'ha_write_row_end SIGNAL query_waiting WAIT_FOR query_cont', ''));
+connection server_2;
+SET debug_sync='now WAIT_FOR query_waiting';
+SET @old_dbug= @@GLOBAL.debug_dbug;
+SET GLOBAL debug_dbug="+d,rpl_parallel_wait_queue_max";
+connection server_1;
+SELECT * FROM t3 WHERE a >= 80 ORDER BY a;
+a b
+80 0
+81 10000
+connection server_2;
+SET debug_sync='now WAIT_FOR wait_queue_ready';
+KILL THD_ID;
+SET debug_sync='now WAIT_FOR wait_queue_killed';
+SET debug_sync='now SIGNAL query_cont';
+include/wait_for_slave_sql_error.inc [errno=1317,1927,1964]
+STOP SLAVE IO_THREAD;
+SET GLOBAL debug_dbug=@old_dbug;
+SET GLOBAL slave_parallel_max_queued= @old_max_queued;
+connection server_1;
+INSERT INTO t3 VALUES (82,0);
+SET binlog_format=@old_format;
+connection server_2;
+SET debug_sync='RESET';
+include/start_slave.inc
+SELECT * FROM t3 WHERE a >= 80 ORDER BY a;
+a b
+80 0
+81 10000
+82 0
+connection server_2;
+include/stop_slave.inc
+SET GLOBAL binlog_format=@old_format;
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+include/start_slave.inc
+*** MDEV-5788 Incorrect free of rgi->deferred_events in parallel replication ***
+connection server_2;
+include/stop_slave.inc
+SET GLOBAL replicate_ignore_table="test.t3";
+SET GLOBAL slave_parallel_threads=2;
+include/start_slave.inc
+connection server_1;
+INSERT INTO t3 VALUES (100, rand());
+INSERT INTO t3 VALUES (101, rand());
+connection server_2;
+connection server_1;
+INSERT INTO t3 VALUES (102, rand());
+INSERT INTO t3 VALUES (103, rand());
+INSERT INTO t3 VALUES (104, rand());
+INSERT INTO t3 VALUES (105, rand());
+connection server_2;
+include/stop_slave.inc
+SET GLOBAL replicate_ignore_table="";
+include/start_slave.inc
+connection server_1;
+INSERT INTO t3 VALUES (106, rand());
+INSERT INTO t3 VALUES (107, rand());
+connection server_2;
+SELECT * FROM t3 WHERE a >= 100 ORDER BY a;
+a b
+106 #
+107 #
+*** MDEV-5921: In parallel replication, an error is not correctly signalled to the next transaction ***
+connection server_2;
+include/stop_slave.inc
+SET GLOBAL slave_parallel_threads=10;
+include/start_slave.inc
+connection server_1;
+INSERT INTO t3 VALUES (110, 1);
+connection server_2;
+SELECT * FROM t3 WHERE a >= 110 ORDER BY a;
+a b
+110 1
+SET sql_log_bin=0;
+INSERT INTO t3 VALUES (111, 666);
+SET sql_log_bin=1;
+connection server_1;
+connect con1,127.0.0.1,root,,test,$SERVER_MYPORT_1,;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued1 WAIT_FOR master_cont1';
+INSERT INTO t3 VALUES (111, 2);
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued1';
+connect con2,127.0.0.1,root,,test,$SERVER_MYPORT_1,;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued2';
+INSERT INTO t3 VALUES (112, 3);
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued2';
+SET debug_sync='now SIGNAL master_cont1';
+connection con1;
+connection con2;
+SET debug_sync='RESET';
+connection server_2;
+include/wait_for_slave_sql_error.inc [errno=1062]
+include/wait_for_slave_sql_to_stop.inc
+SELECT * FROM t3 WHERE a >= 110 ORDER BY a;
+a b
+110 1
+111 666
+SET sql_log_bin=0;
+DELETE FROM t3 WHERE a=111 AND b=666;
+SET sql_log_bin=1;
+START SLAVE SQL_THREAD;
+SELECT * FROM t3 WHERE a >= 110 ORDER BY a;
+a b
+110 1
+111 2
+112 3
+***MDEV-5914: Parallel replication deadlock due to InnoDB lock conflicts ***
+connection server_2;
+include/stop_slave.inc
+connection server_1;
+CREATE TABLE t4 (a INT PRIMARY KEY, b INT, KEY b_idx(b)) ENGINE=InnoDB;
+INSERT INTO t4 VALUES (1,NULL), (2,2), (3,NULL), (4,4), (5, NULL), (6, 6);
+connection con1;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued1 WAIT_FOR master_cont1';
+UPDATE t4 SET b=NULL WHERE a=6;
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued1';
+connection con2;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued2';
+DELETE FROM t4 WHERE b <= 3;
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued2';
+SET debug_sync='now SIGNAL master_cont1';
+connection con1;
+connection con2;
+SET debug_sync='RESET';
+connection server_2;
+include/start_slave.inc
+include/stop_slave.inc
+SELECT * FROM t4 ORDER BY a;
+a b
+1 NULL
+3 NULL
+4 4
+5 NULL
+6 NULL
+connection server_1;
+DELETE FROM t4;
+INSERT INTO t4 VALUES (1,NULL), (2,2), (3,NULL), (4,4), (5, NULL), (6, 6);
+connection con1;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued1 WAIT_FOR master_cont1';
+INSERT INTO t4 VALUES (7, NULL);
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued1';
+connection con2;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued2';
+DELETE FROM t4 WHERE b <= 3;
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued2';
+SET debug_sync='now SIGNAL master_cont1';
+connection con1;
+connection con2;
+SET debug_sync='RESET';
+connection server_2;
+include/start_slave.inc
+include/stop_slave.inc
+SELECT * FROM t4 ORDER BY a;
+a b
+1 NULL
+3 NULL
+4 4
+5 NULL
+6 6
+7 NULL
+connection server_1;
+DELETE FROM t4;
+INSERT INTO t4 VALUES (1,NULL), (2,2), (3,NULL), (4,4), (5, NULL), (6, 6);
+connection con1;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued1 WAIT_FOR master_cont1';
+UPDATE t4 SET b=NULL WHERE a=6;
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued1';
+connection con2;
+SET @old_format= @@SESSION.binlog_format;
+SET binlog_format='statement';
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued2';
+DELETE FROM t4 WHERE b <= 1;
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued2';
+SET debug_sync='now SIGNAL master_cont1';
+connection con1;
+connection con2;
+SET @old_format=@@GLOBAL.binlog_format;
+SET debug_sync='RESET';
+connection server_2;
+SET @old_dbug= @@GLOBAL.debug_dbug;
+SET GLOBAL debug_dbug="+d,disable_thd_need_ordering_with";
+include/start_slave.inc
+SET GLOBAL debug_dbug=@old_dbug;
+SELECT * FROM t4 ORDER BY a;
+a b
+1 NULL
+2 2
+3 NULL
+4 4
+5 NULL
+6 NULL
+SET @last_gtid= 'GTID';
+SELECT IF(@@gtid_slave_pos LIKE CONCAT('%',@last_gtid,'%'), "GTID found ok",
+CONCAT("GTID ", @last_gtid, " not found in gtid_slave_pos=", @@gtid_slave_pos))
+AS result;
+result
+GTID found ok
+SELECT "ROW FOUND" AS `Is the row found?`
+ FROM mysql.gtid_slave_pos
+WHERE CONCAT(domain_id, "-", server_id, "-", seq_no) = @last_gtid;
+Is the row found?
+ROW FOUND
+*** MDEV-5938: Exec_master_log_pos not updated at log rotate in parallel replication ***
+connection server_2;
+include/stop_slave.inc
+SET GLOBAL slave_parallel_threads=1;
+SET DEBUG_SYNC= 'RESET';
+include/start_slave.inc
+connection server_1;
+CREATE TABLE t5 (a INT PRIMARY KEY, b INT);
+INSERT INTO t5 VALUES (1,1);
+INSERT INTO t5 VALUES (2,2), (3,8);
+INSERT INTO t5 VALUES (4,16);
+connection server_2;
+test_check
+OK
+test_check
+OK
+connection server_1;
+FLUSH LOGS;
+connection server_2;
+test_check
+OK
+test_check
+OK
+*** MDEV_6435: Incorrect error handling when query binlogged partially on master with "killed" error ***
+connection server_1;
+CREATE TABLE t6 (a INT) ENGINE=MyISAM;
+CREATE TRIGGER tr AFTER INSERT ON t6 FOR EACH ROW SET @a = 1;
+connection con1;
+SET @old_format= @@binlog_format;
+SET binlog_format= statement;
+SET debug_sync='sp_head_execute_before_loop SIGNAL ready WAIT_FOR cont';
+INSERT INTO t6 VALUES (1), (2), (3);
+connection server_1;
+SET debug_sync='now WAIT_FOR ready';
+KILL QUERY CONID;
+SET debug_sync='now SIGNAL cont';
+connection con1;
+ERROR 70100: Query execution was interrupted
+SET binlog_format= @old_format;
+SET debug_sync='RESET';
+connection server_1;
+SET debug_sync='RESET';
+connection server_2;
+include/wait_for_slave_sql_error.inc [errno=1317]
+STOP SLAVE IO_THREAD;
+SET GLOBAL gtid_slave_pos= 'AFTER_ERROR_GTID_POS';
+include/start_slave.inc
+connection server_1;
+INSERT INTO t6 VALUES (4);
+SELECT * FROM t6 ORDER BY a;
+a
+1
+4
+connection server_2;
+SELECT * FROM t6 ORDER BY a;
+a
+4
+*** MDEV-6551: Some replication errors are ignored if slave_parallel_threads > 0 ***
+connection server_1;
+INSERT INTO t2 VALUES (31);
+include/save_master_gtid.inc
+connection server_2;
+include/sync_with_master_gtid.inc
+include/stop_slave.inc
+SET GLOBAL slave_parallel_threads= 0;
+include/start_slave.inc
+SET sql_log_bin= 0;
+INSERT INTO t2 VALUES (32);
+SET sql_log_bin= 1;
+connection server_1;
+INSERT INTO t2 VALUES (32);
+FLUSH LOGS;
+INSERT INTO t2 VALUES (33);
+INSERT INTO t2 VALUES (34);
+SELECT * FROM t2 WHERE a >= 30 ORDER BY a;
+a
+31
+32
+33
+34
+include/save_master_gtid.inc
+connection server_2;
+include/wait_for_slave_sql_error.inc [errno=1062]
+connection server_2;
+include/stop_slave_io.inc
+SET GLOBAL slave_parallel_threads=10;
+START SLAVE;
+include/wait_for_slave_sql_error.inc [errno=1062]
+START SLAVE SQL_THREAD;
+include/wait_for_slave_sql_error.inc [errno=1062]
+SELECT * FROM t2 WHERE a >= 30 ORDER BY a;
+a
+31
+32
+SET sql_slave_skip_counter= 1;
+ERROR HY000: When using parallel replication and GTID with multiple replication domains, @@sql_slave_skip_counter can not be used. Instead, setting @@gtid_slave_pos explicitly can be used to skip to after a given GTID position.
+include/stop_slave_io.inc
+include/start_slave.inc
+include/sync_with_master_gtid.inc
+SELECT * FROM t2 WHERE a >= 30 ORDER BY a;
+a
+31
+32
+33
+34
+*** MDEV-6775: Wrong binlog order in parallel replication ***
+connection server_1;
+DELETE FROM t4;
+INSERT INTO t4 VALUES (1,NULL), (3,NULL), (4,4), (5, NULL), (6, 6);
+include/save_master_gtid.inc
+connection server_2;
+include/sync_with_master_gtid.inc
+include/stop_slave.inc
+SET @old_dbug= @@GLOBAL.debug_dbug;
+SET GLOBAL debug_dbug="+d,inject_binlog_commit_before_get_LOCK_log";
+SET @old_format=@@GLOBAL.binlog_format;
+SET GLOBAL binlog_format=ROW;
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+connection con1;
+SET @old_format= @@binlog_format;
+SET binlog_format= statement;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued1 WAIT_FOR master_cont1';
+UPDATE t4 SET b=NULL WHERE a=6;
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued1';
+connection con2;
+SET @old_format= @@binlog_format;
+SET binlog_format= statement;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued2';
+DELETE FROM t4 WHERE b <= 3;
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued2';
+SET debug_sync='now SIGNAL master_cont1';
+connection con1;
+SET binlog_format= @old_format;
+connection con2;
+SET binlog_format= @old_format;
+SET debug_sync='RESET';
+SELECT * FROM t4 ORDER BY a;
+a b
+1 NULL
+3 NULL
+4 4
+5 NULL
+6 NULL
+connection server_2;
+include/start_slave.inc
+SET debug_sync= 'now WAIT_FOR waiting';
+SELECT * FROM t4 ORDER BY a;
+a b
+1 NULL
+3 NULL
+4 4
+5 NULL
+6 NULL
+SET debug_sync= 'now SIGNAL cont';
+include/stop_slave.inc
+SET GLOBAL debug_dbug=@old_dbug;
+SET GLOBAL binlog_format= @old_format;
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+include/start_slave.inc
+*** MDEV-7237: Parallel replication: incorrect relaylog position after stop/start the slave ***
+connection server_1;
+INSERT INTO t2 VALUES (40);
+connection server_2;
+include/stop_slave.inc
+CHANGE MASTER TO master_use_gtid=no;
+SET @old_dbug= @@GLOBAL.debug_dbug;
+SET GLOBAL debug_dbug="+d,rpl_parallel_scheduled_gtid_0_x_100";
+SET GLOBAL debug_dbug="+d,rpl_parallel_wait_for_done_trigger";
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+connection server_1;
+INSERT INTO t2 VALUES (41);
+INSERT INTO t2 VALUES (42);
+SET @old_format= @@binlog_format;
+SET binlog_format= statement;
+DELETE FROM t2 WHERE a=40;
+SET binlog_format= @old_format;
+INSERT INTO t2 VALUES (43);
+INSERT INTO t2 VALUES (44);
+FLUSH LOGS;
+INSERT INTO t2 VALUES (45);
+SET gtid_seq_no=100;
+INSERT INTO t2 VALUES (46);
+connection con_temp2;
+BEGIN;
+SELECT * FROM t2 WHERE a=40 FOR UPDATE;
+a
+40
+connection server_2;
+include/start_slave.inc
+SET debug_sync= 'now WAIT_FOR scheduled_gtid_0_x_100';
+STOP SLAVE;
+connection con_temp2;
+SET debug_sync= 'now WAIT_FOR wait_for_done_waiting';
+ROLLBACK;
+connection server_2;
+include/wait_for_slave_sql_to_stop.inc
+SELECT * FROM t2 WHERE a >= 40 ORDER BY a;
+a
+41
+42
+include/start_slave.inc
+SELECT * FROM t2 WHERE a >= 40 ORDER BY a;
+a
+41
+42
+43
+44
+45
+46
+include/stop_slave.inc
+SET GLOBAL debug_dbug=@old_dbug;
+SET DEBUG_SYNC= 'RESET';
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+CHANGE MASTER TO master_use_gtid=slave_pos;
+include/start_slave.inc
+*** MDEV-7326 Server deadlock in connection with parallel replication ***
+connection server_2;
+include/stop_slave.inc
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=3;
+SET GLOBAL debug_dbug="+d,rpl_parallel_simulate_temp_err_xid";
+include/start_slave.inc
+connection server_1;
+SET @old_format= @@SESSION.binlog_format;
+SET binlog_format= STATEMENT;
+INSERT INTO t1 VALUES (foo(50,
+"rpl_parallel_start_waiting_for_prior SIGNAL t3_ready",
+"rpl_parallel_end_of_group SIGNAL prep_ready WAIT_FOR prep_cont"));
+connection server_2;
+SET DEBUG_SYNC= "now WAIT_FOR prep_ready";
+connection server_1;
+INSERT INTO t2 VALUES (foo(50,
+"rpl_parallel_simulate_temp_err_xid SIGNAL t1_ready1 WAIT_FOR t1_cont1",
+"rpl_parallel_retry_after_unmark SIGNAL t1_ready2 WAIT_FOR t1_cont2"));
+connection server_2;
+SET DEBUG_SYNC= "now WAIT_FOR t1_ready1";
+connection server_1;
+INSERT INTO t1 VALUES (foo(51,
+"rpl_parallel_before_mark_start_commit SIGNAL t2_ready1 WAIT_FOR t2_cont1",
+"rpl_parallel_after_mark_start_commit SIGNAL t2_ready2"));
+connection server_2;
+SET DEBUG_SYNC= "now WAIT_FOR t2_ready1";
+SET DEBUG_SYNC= "now SIGNAL t1_cont1";
+SET DEBUG_SYNC= "now WAIT_FOR t1_ready2";
+connection server_1;
+INSERT INTO t1 VALUES (52);
+SET BINLOG_FORMAT= @old_format;
+SELECT * FROM t2 WHERE a>=50 ORDER BY a;
+a
+50
+SELECT * FROM t1 WHERE a>=50 ORDER BY a;
+a
+50
+51
+52
+connection server_2;
+SET DEBUG_SYNC= "now SIGNAL prep_cont";
+SET DEBUG_SYNC= "now WAIT_FOR t3_ready";
+SET DEBUG_SYNC= "now SIGNAL t2_cont1";
+SET DEBUG_SYNC= "now WAIT_FOR t2_ready2";
+SET DEBUG_SYNC= "now SIGNAL t1_cont2";
+connection server_1;
+connection server_2;
+SELECT * FROM t2 WHERE a>=50 ORDER BY a;
+a
+50
+SELECT * FROM t1 WHERE a>=50 ORDER BY a;
+a
+50
+51
+52
+SET DEBUG_SYNC="reset";
+include/stop_slave.inc
+SET GLOBAL debug_dbug=@old_dbug;
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+include/start_slave.inc
+*** MDEV-7326 Server deadlock in connection with parallel replication ***
+connection server_2;
+include/stop_slave.inc
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=3;
+SET GLOBAL debug_dbug="+d,rpl_parallel_simulate_temp_err_xid";
+include/start_slave.inc
+connection server_1;
+SET @old_format= @@SESSION.binlog_format;
+SET binlog_format= STATEMENT;
+INSERT INTO t1 VALUES (foo(60,
+"rpl_parallel_start_waiting_for_prior SIGNAL t3_ready",
+"rpl_parallel_end_of_group SIGNAL prep_ready WAIT_FOR prep_cont"));
+connection server_2;
+SET DEBUG_SYNC= "now WAIT_FOR prep_ready";
+connection server_1;
+INSERT INTO t2 VALUES (foo(60,
+"rpl_parallel_simulate_temp_err_xid SIGNAL t1_ready1 WAIT_FOR t1_cont1",
+"rpl_parallel_retry_after_unmark SIGNAL t1_ready2 WAIT_FOR t1_cont2"));
+connection server_2;
+SET DEBUG_SYNC= "now WAIT_FOR t1_ready1";
+connection con_temp3;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued1 WAIT_FOR master_cont1';
+SET binlog_format=statement;
+INSERT INTO t1 VALUES (foo(61,
+"rpl_parallel_before_mark_start_commit SIGNAL t2_ready1 WAIT_FOR t2_cont1",
+"rpl_parallel_after_mark_start_commit SIGNAL t2_ready2"));
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued1';
+connection con_temp4;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued2';
+INSERT INTO t6 VALUES (62);
+connection server_1;
+SET debug_sync='now WAIT_FOR master_queued2';
+SET debug_sync='now SIGNAL master_cont1';
+connection con_temp3;
+connection con_temp4;
+connection server_1;
+SET debug_sync='RESET';
+SET BINLOG_FORMAT= @old_format;
+SELECT * FROM t2 WHERE a>=60 ORDER BY a;
+a
+60
+SELECT * FROM t1 WHERE a>=60 ORDER BY a;
+a
+60
+61
+SELECT * FROM t6 WHERE a>=60 ORDER BY a;
+a
+62
+connection server_2;
+SET DEBUG_SYNC= "now WAIT_FOR t2_ready1";
+SET DEBUG_SYNC= "now SIGNAL t1_cont1";
+SET DEBUG_SYNC= "now WAIT_FOR t1_ready2";
+connection server_2;
+SET DEBUG_SYNC= "now SIGNAL prep_cont";
+SET DEBUG_SYNC= "now WAIT_FOR t3_ready";
+SET DEBUG_SYNC= "now SIGNAL t2_cont1";
+SET DEBUG_SYNC= "now WAIT_FOR t2_ready2";
+SET DEBUG_SYNC= "now SIGNAL t1_cont2";
+connection server_1;
+connection server_2;
+SELECT * FROM t2 WHERE a>=60 ORDER BY a;
+a
+60
+SELECT * FROM t1 WHERE a>=60 ORDER BY a;
+a
+60
+61
+SELECT * FROM t6 WHERE a>=60 ORDER BY a;
+a
+62
+SET DEBUG_SYNC="reset";
+include/stop_slave.inc
+SET GLOBAL debug_dbug=@old_dbug;
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+include/start_slave.inc
+*** MDEV-7335: Potential parallel slave deadlock with specific binlog corruption ***
+connection server_2;
+include/stop_slave.inc
+SET GLOBAL slave_parallel_threads=1;
+SET @old_dbug= @@GLOBAL.debug_dbug;
+SET GLOBAL debug_dbug="+d,slave_discard_xid_for_gtid_0_x_1000";
+connection server_1;
+INSERT INTO t2 VALUES (101);
+INSERT INTO t2 VALUES (102);
+INSERT INTO t2 VALUES (103);
+INSERT INTO t2 VALUES (104);
+INSERT INTO t2 VALUES (105);
+SET gtid_seq_no=1000;
+INSERT INTO t2 VALUES (106);
+INSERT INTO t2 VALUES (107);
+INSERT INTO t2 VALUES (108);
+INSERT INTO t2 VALUES (109);
+INSERT INTO t2 VALUES (110);
+INSERT INTO t2 VALUES (111);
+INSERT INTO t2 VALUES (112);
+INSERT INTO t2 VALUES (113);
+INSERT INTO t2 VALUES (114);
+INSERT INTO t2 VALUES (115);
+INSERT INTO t2 VALUES (116);
+INSERT INTO t2 VALUES (117);
+INSERT INTO t2 VALUES (118);
+INSERT INTO t2 VALUES (119);
+INSERT INTO t2 VALUES (120);
+INSERT INTO t2 VALUES (121);
+INSERT INTO t2 VALUES (122);
+INSERT INTO t2 VALUES (123);
+INSERT INTO t2 VALUES (124);
+INSERT INTO t2 VALUES (125);
+INSERT INTO t2 VALUES (126);
+INSERT INTO t2 VALUES (127);
+INSERT INTO t2 VALUES (128);
+INSERT INTO t2 VALUES (129);
+INSERT INTO t2 VALUES (130);
+include/save_master_gtid.inc
+connection server_2;
+include/start_slave.inc
+include/sync_with_master_gtid.inc
+SELECT * FROM t2 WHERE a >= 100 ORDER BY a;
+a
+101
+102
+103
+104
+105
+107
+108
+109
+110
+111
+112
+113
+114
+115
+116
+117
+118
+119
+120
+121
+122
+123
+124
+125
+126
+127
+128
+129
+130
+include/stop_slave.inc
+SET GLOBAL debug_dbug=@old_dbug;
+SET GLOBAL slave_parallel_threads=10;
+include/start_slave.inc
+*** MDEV-6676 - test syntax of @@slave_parallel_mode ***
+connection server_2;
+Parallel_Mode = 'conservative'
+include/stop_slave.inc
+SET GLOBAL slave_parallel_mode='aggressive';
+Parallel_Mode = 'aggressive'
+SET GLOBAL slave_parallel_mode='conservative';
+Parallel_Mode = 'conservative'
+*** MDEV-6676 - test that empty parallel_mode does not replicate in parallel ***
+connection server_1;
+INSERT INTO t2 VALUES (1040);
+include/save_master_gtid.inc
+connection server_2;
+SET GLOBAL slave_parallel_mode='none';
+SET @old_dbug= @@GLOBAL.debug_dbug;
+SET GLOBAL debug_dbug="+d,slave_crash_if_parallel_apply";
+include/start_slave.inc
+include/sync_with_master_gtid.inc
+SELECT * FROM t2 WHERE a >= 1040 ORDER BY a;
+a
+1040
+include/stop_slave.inc
+SET GLOBAL debug_dbug=@old_dbug;
+*** MDEV-6676 - test disabling domain-based parallel replication ***
+connection server_1;
+SET gtid_domain_id = 1;
+INSERT INTO t2 VALUES (1041);
+INSERT INTO t2 VALUES (1042);
+INSERT INTO t2 VALUES (1043);
+INSERT INTO t2 VALUES (1044);
+INSERT INTO t2 VALUES (1045);
+INSERT INTO t2 VALUES (1046);
+DELETE FROM t2 WHERE a >= 1041;
+SET gtid_domain_id = 2;
+INSERT INTO t2 VALUES (1041);
+INSERT INTO t2 VALUES (1042);
+INSERT INTO t2 VALUES (1043);
+INSERT INTO t2 VALUES (1044);
+INSERT INTO t2 VALUES (1045);
+INSERT INTO t2 VALUES (1046);
+SET gtid_domain_id = 0;
+include/save_master_gtid.inc
+connection server_2;
+SET GLOBAL slave_parallel_mode=minimal;
+include/start_slave.inc
+include/sync_with_master_gtid.inc
+SELECT * FROM t2 WHERE a >= 1040 ORDER BY a;
+a
+1040
+1041
+1042
+1043
+1044
+1045
+1046
+include/stop_slave.inc
+SET GLOBAL slave_parallel_mode='conservative';
+include/start_slave.inc
+*** MDEV-7847: "Slave worker thread retried transaction 10 time(s) in vain, giving up", followed by replication hanging ***
+*** MDEV-7882: Excessive transaction retry in parallel replication ***
+connection server_1;
+CREATE TABLE t7 (a int PRIMARY KEY, b INT) ENGINE=InnoDB;
+CREATE TABLE t8 (a int PRIMARY KEY, b INT) ENGINE=InnoDB;
+connection server_2;
+include/stop_slave.inc
+SET GLOBAL slave_parallel_threads=40;
+SELECT @old_retries:=@@GLOBAL.slave_transaction_retries;
+@old_retries:=@@GLOBAL.slave_transaction_retries
+10
+SET GLOBAL slave_transaction_retries= 5;
+connection server_1;
+INSERT INTO t7 VALUES (1,1), (2,2), (3,3), (4,4), (5,5);
+SET @old_dbug= @@SESSION.debug_dbug;
+SET @commit_id= 42;
+SET SESSION debug_dbug="+d,binlog_force_commit_id";
+INSERT INTO t8 VALUES (1,1);
+INSERT INTO t8 VALUES (2,2);
+INSERT INTO t8 VALUES (3,3);
+INSERT INTO t8 VALUES (4,4);
+INSERT INTO t8 VALUES (5,5);
+INSERT INTO t8 VALUES (6,6);
+INSERT INTO t8 VALUES (7,7);
+INSERT INTO t8 VALUES (8,8);
+UPDATE t7 SET b=9 WHERE a=3;
+UPDATE t7 SET b=10 WHERE a=3;
+UPDATE t7 SET b=11 WHERE a=3;
+INSERT INTO t8 VALUES (12,12);
+INSERT INTO t8 VALUES (13,13);
+UPDATE t7 SET b=14 WHERE a=3;
+UPDATE t7 SET b=15 WHERE a=3;
+INSERT INTO t8 VALUES (16,16);
+UPDATE t7 SET b=17 WHERE a=3;
+INSERT INTO t8 VALUES (18,18);
+INSERT INTO t8 VALUES (19,19);
+UPDATE t7 SET b=20 WHERE a=3;
+INSERT INTO t8 VALUES (21,21);
+UPDATE t7 SET b=22 WHERE a=3;
+INSERT INTO t8 VALUES (23,24);
+INSERT INTO t8 VALUES (24,24);
+UPDATE t7 SET b=25 WHERE a=3;
+INSERT INTO t8 VALUES (26,26);
+UPDATE t7 SET b=27 WHERE a=3;
+BEGIN;
+INSERT INTO t8 VALUES (28,28);
+INSERT INTO t8 VALUES (29,28), (30,28);
+INSERT INTO t8 VALUES (31,28);
+INSERT INTO t8 VALUES (32,28);
+INSERT INTO t8 VALUES (33,28);
+INSERT INTO t8 VALUES (34,28);
+INSERT INTO t8 VALUES (35,28);
+INSERT INTO t8 VALUES (36,28);
+INSERT INTO t8 VALUES (37,28);
+INSERT INTO t8 VALUES (38,28);
+INSERT INTO t8 VALUES (39,28);
+INSERT INTO t8 VALUES (40,28);
+INSERT INTO t8 VALUES (41,28);
+INSERT INTO t8 VALUES (42,28);
+COMMIT;
+SET @commit_id=43;
+INSERT INTO t8 VALUES (43,43);
+INSERT INTO t8 VALUES (44,44);
+UPDATE t7 SET b=45 WHERE a=3;
+INSERT INTO t8 VALUES (46,46);
+INSERT INTO t8 VALUES (47,47);
+UPDATE t7 SET b=48 WHERE a=3;
+INSERT INTO t8 VALUES (49,49);
+INSERT INTO t8 VALUES (50,50);
+SET @commit_id=44;
+INSERT INTO t8 VALUES (51,51);
+INSERT INTO t8 VALUES (52,52);
+UPDATE t7 SET b=53 WHERE a=3;
+INSERT INTO t8 VALUES (54,54);
+INSERT INTO t8 VALUES (55,55);
+UPDATE t7 SET b=56 WHERE a=3;
+INSERT INTO t8 VALUES (57,57);
+UPDATE t7 SET b=58 WHERE a=3;
+INSERT INTO t8 VALUES (58,58);
+INSERT INTO t8 VALUES (59,59);
+INSERT INTO t8 VALUES (60,60);
+INSERT INTO t8 VALUES (61,61);
+UPDATE t7 SET b=62 WHERE a=3;
+INSERT INTO t8 VALUES (63,63);
+INSERT INTO t8 VALUES (64,64);
+INSERT INTO t8 VALUES (65,65);
+INSERT INTO t8 VALUES (66,66);
+UPDATE t7 SET b=67 WHERE a=3;
+INSERT INTO t8 VALUES (68,68);
+UPDATE t7 SET b=69 WHERE a=3;
+UPDATE t7 SET b=70 WHERE a=3;
+UPDATE t7 SET b=71 WHERE a=3;
+INSERT INTO t8 VALUES (72,72);
+UPDATE t7 SET b=73 WHERE a=3;
+UPDATE t7 SET b=74 WHERE a=3;
+UPDATE t7 SET b=75 WHERE a=3;
+UPDATE t7 SET b=76 WHERE a=3;
+INSERT INTO t8 VALUES (77,77);
+UPDATE t7 SET b=78 WHERE a=3;
+INSERT INTO t8 VALUES (79,79);
+UPDATE t7 SET b=80 WHERE a=3;
+INSERT INTO t8 VALUES (81,81);
+UPDATE t7 SET b=82 WHERE a=3;
+INSERT INTO t8 VALUES (83,83);
+UPDATE t7 SET b=84 WHERE a=3;
+SET @commit_id=45;
+INSERT INTO t8 VALUES (85,85);
+UPDATE t7 SET b=86 WHERE a=3;
+INSERT INTO t8 VALUES (87,87);
+SET @commit_id=46;
+INSERT INTO t8 VALUES (88,88);
+INSERT INTO t8 VALUES (89,89);
+INSERT INTO t8 VALUES (90,90);
+SET SESSION debug_dbug=@old_dbug;
+INSERT INTO t8 VALUES (91,91);
+INSERT INTO t8 VALUES (92,92);
+INSERT INTO t8 VALUES (93,93);
+INSERT INTO t8 VALUES (94,94);
+INSERT INTO t8 VALUES (95,95);
+INSERT INTO t8 VALUES (96,96);
+INSERT INTO t8 VALUES (97,97);
+INSERT INTO t8 VALUES (98,98);
+INSERT INTO t8 VALUES (99,99);
+SELECT * FROM t7 ORDER BY a;
+a b
+1 1
+2 2
+3 86
+4 4
+5 5
+SELECT * FROM t8 ORDER BY a;
+a b
+1 1
+2 2
+3 3
+4 4
+5 5
+6 6
+7 7
+8 8
+12 12
+13 13
+16 16
+18 18
+19 19
+21 21
+23 24
+24 24
+26 26
+28 28
+29 28
+30 28
+31 28
+32 28
+33 28
+34 28
+35 28
+36 28
+37 28
+38 28
+39 28
+40 28
+41 28
+42 28
+43 43
+44 44
+46 46
+47 47
+49 49
+50 50
+51 51
+52 52
+54 54
+55 55
+57 57
+58 58
+59 59
+60 60
+61 61
+63 63
+64 64
+65 65
+66 66
+68 68
+72 72
+77 77
+79 79
+81 81
+83 83
+85 85
+87 87
+88 88
+89 89
+90 90
+91 91
+92 92
+93 93
+94 94
+95 95
+96 96
+97 97
+98 98
+99 99
+include/save_master_gtid.inc
+connection server_2;
+include/start_slave.inc
+include/sync_with_master_gtid.inc
+SELECT * FROM t7 ORDER BY a;
+a b
+1 1
+2 2
+3 86
+4 4
+5 5
+SELECT * FROM t8 ORDER BY a;
+a b
+1 1
+2 2
+3 3
+4 4
+5 5
+6 6
+7 7
+8 8
+12 12
+13 13
+16 16
+18 18
+19 19
+21 21
+23 24
+24 24
+26 26
+28 28
+29 28
+30 28
+31 28
+32 28
+33 28
+34 28
+35 28
+36 28
+37 28
+38 28
+39 28
+40 28
+41 28
+42 28
+43 43
+44 44
+46 46
+47 47
+49 49
+50 50
+51 51
+52 52
+54 54
+55 55
+57 57
+58 58
+59 59
+60 60
+61 61
+63 63
+64 64
+65 65
+66 66
+68 68
+72 72
+77 77
+79 79
+81 81
+83 83
+85 85
+87 87
+88 88
+89 89
+90 90
+91 91
+92 92
+93 93
+94 94
+95 95
+96 96
+97 97
+98 98
+99 99
+include/stop_slave.inc
+SET GLOBAL slave_transaction_retries= @old_retries;
+SET GLOBAL slave_parallel_threads=10;
+include/start_slave.inc
+*** MDEV-7888: ANALYZE TABLE does wakeup_subsequent_commits(), causing wrong binlog order and parallel replication hang ***
+connection server_2;
+include/stop_slave.inc
+SET @old_dbug= @@GLOBAL.debug_dbug;
+SET GLOBAL debug_dbug= '+d,inject_analyze_table_sleep';
+connection server_1;
+SET @old_dbug= @@SESSION.debug_dbug;
+SET SESSION debug_dbug="+d,binlog_force_commit_id";
+SET @commit_id= 10000;
+ANALYZE TABLE t2;
+Table Op Msg_type Msg_text
+test.t2 analyze status OK
+INSERT INTO t3 VALUES (120, 0);
+SET @commit_id= 10001;
+INSERT INTO t3 VALUES (121, 0);
+SET SESSION debug_dbug=@old_dbug;
+SELECT * FROM t3 WHERE a >= 120 ORDER BY a;
+a b
+120 0
+121 0
+include/save_master_gtid.inc
+connection server_2;
+include/start_slave.inc
+include/sync_with_master_gtid.inc
+SELECT * FROM t3 WHERE a >= 120 ORDER BY a;
+a b
+120 0
+121 0
+include/stop_slave.inc
+SET GLOBAL debug_dbug= @old_dbug;
+include/start_slave.inc
+*** MDEV-7929: record_gtid() for non-transactional event group calls wakeup_subsequent_commits() too early, causing slave hang. ***
+connection server_2;
+include/stop_slave.inc
+SET @old_dbug= @@GLOBAL.debug_dbug;
+SET GLOBAL debug_dbug= '+d,inject_record_gtid_serverid_100_sleep';
+connection server_1;
+SET @old_dbug= @@SESSION.debug_dbug;
+SET SESSION debug_dbug="+d,binlog_force_commit_id";
+SET @old_server_id= @@SESSION.server_id;
+SET SESSION server_id= 100;
+SET @commit_id= 10010;
+ALTER TABLE t1 COMMENT "Hulubulu!";
+SET SESSION server_id= @old_server_id;
+INSERT INTO t3 VALUES (130, 0);
+SET @commit_id= 10011;
+INSERT INTO t3 VALUES (131, 0);
+SET SESSION debug_dbug=@old_dbug;
+SELECT * FROM t3 WHERE a >= 130 ORDER BY a;
+a b
+130 0
+131 0
+include/save_master_gtid.inc
+connection server_2;
+include/start_slave.inc
+include/sync_with_master_gtid.inc
+SELECT * FROM t3 WHERE a >= 130 ORDER BY a;
+a b
+130 0
+131 0
+include/stop_slave.inc
+SET GLOBAL debug_dbug= @old_dbug;
+include/start_slave.inc
+*** MDEV-8031: Parallel replication stops on "connection killed" error (probably incorrectly handled deadlock kill) ***
+connection server_1;
+INSERT INTO t3 VALUES (201,0), (202,0);
+include/save_master_gtid.inc
+connection server_2;
+include/sync_with_master_gtid.inc
+include/stop_slave.inc
+SET @old_dbug= @@GLOBAL.debug_dbug;
+SET GLOBAL debug_dbug= '+d,inject_mdev8031';
+connection server_1;
+SET @old_dbug= @@SESSION.debug_dbug;
+SET SESSION debug_dbug="+d,binlog_force_commit_id";
+SET @commit_id= 10200;
+INSERT INTO t3 VALUES (203, 1);
+INSERT INTO t3 VALUES (204, 1);
+INSERT INTO t3 VALUES (205, 1);
+UPDATE t3 SET b=b+1 WHERE a=201;
+UPDATE t3 SET b=b+1 WHERE a=201;
+UPDATE t3 SET b=b+1 WHERE a=201;
+UPDATE t3 SET b=b+1 WHERE a=202;
+UPDATE t3 SET b=b+1 WHERE a=202;
+UPDATE t3 SET b=b+1 WHERE a=202;
+UPDATE t3 SET b=b+1 WHERE a=202;
+UPDATE t3 SET b=b+1 WHERE a=203;
+UPDATE t3 SET b=b+1 WHERE a=203;
+UPDATE t3 SET b=b+1 WHERE a=204;
+UPDATE t3 SET b=b+1 WHERE a=204;
+UPDATE t3 SET b=b+1 WHERE a=204;
+UPDATE t3 SET b=b+1 WHERE a=203;
+UPDATE t3 SET b=b+1 WHERE a=205;
+UPDATE t3 SET b=b+1 WHERE a=205;
+SET SESSION debug_dbug=@old_dbug;
+SELECT * FROM t3 WHERE a>=200 ORDER BY a;
+a b
+201 3
+202 4
+203 4
+204 4
+205 3
+include/save_master_gtid.inc
+connection server_2;
+include/start_slave.inc
+include/sync_with_master_gtid.inc
+SELECT * FROM t3 WHERE a>=200 ORDER BY a;
+a b
+201 3
+202 4
+203 4
+204 4
+205 3
+include/stop_slave.inc
+SET GLOBAL debug_dbug= @old_dbug;
+include/start_slave.inc
+*** Check getting deadlock killed inside open_binlog() during retry. ***
+connection server_2;
+include/stop_slave.inc
+SET @old_dbug= @@GLOBAL.debug_dbug;
+SET GLOBAL debug_dbug= '+d,inject_retry_event_group_open_binlog_kill';
+SET @old_max= @@GLOBAL.max_relay_log_size;
+SET GLOBAL max_relay_log_size= 4096;
+connection server_1;
+SET @old_dbug= @@SESSION.debug_dbug;
+SET SESSION debug_dbug="+d,binlog_force_commit_id";
+SET @commit_id= 10210;
+Omit long queries that cause relaylog rotations and transaction retries...
+SET SESSION debug_dbug=@old_dbug;
+SELECT * FROM t3 WHERE a>=200 ORDER BY a;
+a b
+201 6
+202 8
+203 7
+204 7
+205 5
+include/save_master_gtid.inc
+connection server_2;
+include/start_slave.inc
+include/sync_with_master_gtid.inc
+SELECT * FROM t3 WHERE a>=200 ORDER BY a;
+a b
+201 6
+202 8
+203 7
+204 7
+205 5
+include/stop_slave.inc
+SET GLOBAL debug_dbug= @old_dbug;
+SET GLOBAL max_relay_log_size= @old_max;
+include/start_slave.inc
+*** MDEV-8302: Duplicate key with parallel replication ***
+connection server_2;
+include/stop_slave.inc
+/* Inject a small sleep which makes the race easier to hit. */
+SET @old_dbug=@@GLOBAL.debug_dbug;
+SET GLOBAL debug_dbug="+d,inject_mdev8302";
+connection server_1;
+INSERT INTO t7 VALUES (100,1), (101,2), (102,3), (103,4), (104,5);
+SET @old_dbug= @@SESSION.debug_dbug;
+SET @commit_id= 20000;
+SET SESSION debug_dbug="+d,binlog_force_commit_id";
+SET SESSION debug_dbug=@old_dbug;
+SELECT * FROM t7 ORDER BY a;
+a b
+1 1
+2 2
+3 86
+4 4
+5 5
+100 5
+101 1
+102 2
+103 3
+104 4
+include/save_master_gtid.inc
+connection server_2;
+include/start_slave.inc
+include/sync_with_master_gtid.inc
+SELECT * FROM t7 ORDER BY a;
+a b
+1 1
+2 2
+3 86
+4 4
+5 5
+100 5
+101 1
+102 2
+103 3
+104 4
+include/stop_slave.inc
+SET GLOBAL debug_dbug=@old_dbug;
+include/start_slave.inc
+*** MDEV-8725: Assertion on ROLLBACK statement in the binary log ***
+connection server_1;
+BEGIN;
+INSERT INTO t2 VALUES (2000);
+INSERT INTO t1 VALUES (2000);
+INSERT INTO t2 VALUES (2001);
+ROLLBACK;
+SELECT * FROM t1 WHERE a>=2000 ORDER BY a;
+a
+2000
+SELECT * FROM t2 WHERE a>=2000 ORDER BY a;
+a
+include/save_master_gtid.inc
+connection server_2;
+include/sync_with_master_gtid.inc
+SELECT * FROM t1 WHERE a>=2000 ORDER BY a;
+a
+2000
+SELECT * FROM t2 WHERE a>=2000 ORDER BY a;
+a
+connection server_2;
+include/stop_slave.inc
+SET GLOBAL slave_parallel_threads=@old_parallel_threads;
+include/start_slave.inc
+SET DEBUG_SYNC= 'RESET';
+connection server_1;
+DROP function foo;
+DROP TABLE t1,t2,t3,t4,t5,t6,t7,t8;
+SET DEBUG_SYNC= 'RESET';
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_parallel.test b/mysql-test/suite/binlog_encryption/rpl_parallel.test
new file mode 100644
index 0000000..51eae32
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_parallel.test
@@ -0,0 +1,2729 @@
+#
+# The test was taken from the rpl suite as is
+#
+
+--source include/have_debug.inc
+--source include/have_debug_sync.inc
+--source include/master-slave.inc
+
+--enable_connect_log
+
+# Test various aspects of parallel replication.
+
+--connection server_2
+SET @old_parallel_threads=@@GLOBAL.slave_parallel_threads;
+--error ER_SLAVE_MUST_STOP
+SET GLOBAL slave_parallel_threads=10;
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL slave_parallel_threads=10;
+
+# Check that we do not spawn any worker threads when no slave is running.
+SELECT IF(COUNT(*) < 10, "OK", CONCAT("Found too many system user processes: ", COUNT(*))) FROM information_schema.processlist WHERE user = "system user";
+
+CHANGE MASTER TO master_use_gtid=slave_pos;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+# Check that worker threads get spawned when slave starts.
+SELECT IF(COUNT(*) >= 10, "OK", CONCAT("Found too few system user processes: ", COUNT(*))) FROM information_schema.processlist WHERE user = "system user";
+# ... and that worker threads get removed when slave stops.
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SELECT IF(COUNT(*) < 10, "OK", CONCAT("Found too many system user processes: ", COUNT(*))) FROM information_schema.processlist WHERE user = "system user";
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+SELECT IF(COUNT(*) >= 10, "OK", CONCAT("Found too few system user processes: ", COUNT(*))) FROM information_schema.processlist WHERE user = "system user";
+
+--echo *** Test long-running query in domain 1 can run in parallel with short queries in domain 0 ***
+
+--connection server_1
+ALTER TABLE mysql.gtid_slave_pos ENGINE=InnoDB;
+CREATE TABLE t1 (a int PRIMARY KEY) ENGINE=MyISAM;
+CREATE TABLE t2 (a int PRIMARY KEY) ENGINE=InnoDB;
+INSERT INTO t1 VALUES (1);
+INSERT INTO t2 VALUES (1);
+--save_master_pos
+
+--connection server_2
+--sync_with_master
+
+# Block the table t1 to simulate a replicated query taking a long time.
+--connect (con_temp1,127.0.0.1,root,,test,$SERVER_MYPORT_2,)
+LOCK TABLE t1 WRITE;
+
+--connection server_1
+SET gtid_domain_id=1;
+# This query will be blocked on the slave until UNLOCK TABLES.
+INSERT INTO t1 VALUES (2);
+SET gtid_domain_id=0;
+# These t2 queries can be replicated in parallel with the prior t1 query, as
+# they are in a separate replication domain.
+INSERT INTO t2 VALUES (2);
+INSERT INTO t2 VALUES (3);
+BEGIN;
+INSERT INTO t2 VALUES (4);
+INSERT INTO t2 VALUES (5);
+COMMIT;
+INSERT INTO t2 VALUES (6);
+
+--connection server_2
+--let $wait_condition= SELECT COUNT(*) = 6 FROM t2
+--disable_connect_log
+--source include/wait_condition.inc
+--enable_connect_log
+
+SELECT * FROM t2 ORDER by a;
+
+--connection con_temp1
+SELECT * FROM t1;
+UNLOCK TABLES;
+
+--connection server_2
+--let $wait_condition= SELECT COUNT(*) = 2 FROM t1
+--disable_connect_log
+--source include/wait_condition.inc
+--enable_connect_log
+
+SELECT * FROM t1 ORDER BY a;
+
+
+--echo *** Test two transactions in different domains committed in opposite order on slave but in a single group commit. ***
+--connection server_2
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+
+--connection server_1
+# Use a stored function to inject a debug_sync into the appropriate THD.
+# The function does nothing on the master, and on the slave it injects the
+# desired debug_sync action(s).
+SET sql_log_bin=0;
+--delimiter ||
+CREATE FUNCTION foo(x INT, d1 VARCHAR(500), d2 VARCHAR(500))
+ RETURNS INT DETERMINISTIC
+ BEGIN
+ RETURN x;
+ END
+||
+--delimiter ;
+SET sql_log_bin=1;
+
+SET @old_format= @@SESSION.binlog_format;
+SET binlog_format='statement';
+SET gtid_domain_id=1;
+INSERT INTO t2 VALUES (foo(10,
+ 'commit_before_enqueue SIGNAL ready1 WAIT_FOR cont1',
+ 'commit_after_release_LOCK_prepare_ordered SIGNAL ready2'));
+
+--connection server_2
+FLUSH LOGS;
+--disable_connect_log
+--source include/wait_for_binlog_checkpoint.inc
+--enable_connect_log
+SET sql_log_bin=0;
+--delimiter ||
+CREATE FUNCTION foo(x INT, d1 VARCHAR(500), d2 VARCHAR(500))
+ RETURNS INT DETERMINISTIC
+ BEGIN
+ IF d1 != '' THEN
+ SET debug_sync = d1;
+ END IF;
+ IF d2 != '' THEN
+ SET debug_sync = d2;
+ END IF;
+ RETURN x;
+ END
+||
+--delimiter ;
+SET sql_log_bin=1;
+SET @old_format=@@GLOBAL.binlog_format;
+SET GLOBAL binlog_format=statement;
+# We need to restart all parallel threads for the new global setting to
+# be copied to the session-level values.
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+# First make sure the first insert is ready to commit, but not queued yet.
+SET debug_sync='now WAIT_FOR ready1';
+
+--connection server_1
+SET gtid_domain_id=2;
+INSERT INTO t2 VALUES (foo(11,
+ 'commit_before_enqueue SIGNAL ready3 WAIT_FOR cont3',
+ 'commit_after_release_LOCK_prepare_ordered SIGNAL ready4 WAIT_FOR cont4'));
+SET gtid_domain_id=0;
+SELECT * FROM t2 WHERE a >= 10 ORDER BY a;
+
+--connection server_2
+# Now wait for the second insert to queue itself as the leader, and then
+# wait for more commits to queue up.
+SET debug_sync='now WAIT_FOR ready3';
+SET debug_sync='now SIGNAL cont3';
+SET debug_sync='now WAIT_FOR ready4';
+# Now allow the first insert to queue up to participate in group commit.
+SET debug_sync='now SIGNAL cont1';
+SET debug_sync='now WAIT_FOR ready2';
+# Finally allow the second insert to proceed and do the group commit.
+SET debug_sync='now SIGNAL cont4';
+
+--let $wait_condition= SELECT COUNT(*) = 2 FROM t2 WHERE a >= 10
+--disable_connect_log
+--source include/wait_condition.inc
+--enable_connect_log
+SELECT * FROM t2 WHERE a >= 10 ORDER BY a;
+# The two INSERT transactions should have been committed in opposite order,
+# but in the same group commit (seen by presence of cid=# in the SHOW
+# BINLOG output).
+--let $binlog_file= slave-bin.000002
+--disable_connect_log
+--source include/show_binlog_events.inc
+--enable_connect_log
+FLUSH LOGS;
+--disable_connect_log
+--source include/wait_for_binlog_checkpoint.inc
+--enable_connect_log
+
+# Restart all the slave parallel worker threads, to clear all debug_sync actions.
+--connection server_2
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+SET debug_sync='RESET';
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+
+--echo *** Test that group-committed transactions on the master can replicate in parallel on the slave. ***
+--connection server_1
+SET debug_sync='RESET';
+FLUSH LOGS;
+--disable_connect_log
+--source include/wait_for_binlog_checkpoint.inc
+--enable_connect_log
+CREATE TABLE t3 (a INT PRIMARY KEY, b INT) ENGINE=InnoDB;
+# Create some sentinel rows so that the rows inserted in parallel fall into
+# separate gaps and do not cause gap lock conflicts.
+INSERT INTO t3 VALUES (1,1), (3,3), (5,5), (7,7);
+--save_master_pos
+--connection server_2
+--sync_with_master
+
+# We want to test that the transactions can execute out-of-order on
+# the slave, but still end up committing in-order, and in a single
+# group commit.
+#
+# The idea is to group-commit three transactions together on the master:
+# A, B, and C. On the slave, C will execute the insert first, then A,
+# and then B. But B manages to complete before A has time to commit, so
+# all three end up committing together.
+#
+# So we start by setting up some row locks that will block transactions
+# A and B from executing, allowing C to run first.
+
+--connection con_temp1
+BEGIN;
+INSERT INTO t3 VALUES (2,102);
+--connect (con_temp2,127.0.0.1,root,,test,$SERVER_MYPORT_2,)
+BEGIN;
+INSERT INTO t3 VALUES (4,104);
+
+# On the master, queue three INSERT transactions as a single group commit.
+--connect (con_temp3,127.0.0.1,root,,test,$SERVER_MYPORT_1,)
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued1 WAIT_FOR master_cont1';
+SET binlog_format=statement;
+send INSERT INTO t3 VALUES (2, foo(12,
+ 'commit_after_release_LOCK_prepare_ordered SIGNAL slave_queued1 WAIT_FOR slave_cont1',
+ ''));
+
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued1';
+
+--connect (con_temp4,127.0.0.1,root,,test,$SERVER_MYPORT_1,)
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued2';
+SET binlog_format=statement;
+send INSERT INTO t3 VALUES (4, foo(14,
+ 'commit_after_release_LOCK_prepare_ordered SIGNAL slave_queued2',
+ ''));
+
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued2';
+
+--connect (con_temp5,127.0.0.1,root,,test,$SERVER_MYPORT_1,)
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued3';
+SET binlog_format=statement;
+send INSERT INTO t3 VALUES (6, foo(16,
+ 'group_commit_waiting_for_prior SIGNAL slave_queued3',
+ ''));
+
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued3';
+SET debug_sync='now SIGNAL master_cont1';
+
+--connection con_temp3
+REAP;
+--connection con_temp4
+REAP;
+--connection con_temp5
+REAP;
+SET debug_sync='RESET';
+
+--connection server_1
+SELECT * FROM t3 ORDER BY a;
+--let $binlog_file= master-bin.000002
+--disable_connect_log
+--source include/show_binlog_events.inc
+--enable_connect_log
+
+# First, wait until insert 3 is ready to queue up for group commit, but is
+# waiting for insert 2 to commit before it can do so itself.
+--connection server_2
+SET debug_sync='now WAIT_FOR slave_queued3';
+
+# Next, let insert 1 proceed, and allow it to queue up as the group commit
+# leader, but let it wait for insert 2 to also queue up before proceeding.
+--connection con_temp1
+ROLLBACK;
+--connection server_2
+SET debug_sync='now WAIT_FOR slave_queued1';
+
+# Now let insert 2 proceed and queue up.
+--connection con_temp2
+ROLLBACK;
+--connection server_2
+SET debug_sync='now WAIT_FOR slave_queued2';
+# And finally, we can let insert 1 proceed and do the group commit with all
+# three insert transactions together.
+SET debug_sync='now SIGNAL slave_cont1';
+
+# Wait for the commit to complete and check that all three transactions
+# group-committed together (will be seen in the binlog as all three having
+# cid=# on their GTID event).
+--let $wait_condition= SELECT COUNT(*) = 3 FROM t3 WHERE a IN (2,4,6)
+--disable_connect_log
+--source include/wait_condition.inc
+--enable_connect_log
+SELECT * FROM t3 ORDER BY a;
+--let $binlog_file= slave-bin.000003
+--disable_connect_log
+--source include/show_binlog_events.inc
+--enable_connect_log
+
+
+--echo *** Test STOP SLAVE in parallel mode ***
+--connection server_2
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+# Respawn all worker threads to clear any left-over debug_sync or other stuff.
+SET debug_sync='RESET';
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+
+--connection server_1
+# Set up a couple of transactions. The first will be blocked halfway
+# through on a lock, and while it is blocked we initiate STOP SLAVE.
+# We then test that the halfway-initiated transaction is allowed to
+# complete, but no subsequent ones.
+# We have to use statement-based mode and set
+# binlog_direct_non_transactional_updates=0; otherwise the binlog will
+# be split into two event groups, one for the MyISAM part and one for the
+# InnoDB part.
+SET binlog_direct_non_transactional_updates=0;
+SET sql_log_bin=0;
+CALL mtr.add_suppression("Statement is unsafe because it accesses a non-transactional table after accessing a transactional table within the same transaction");
+SET sql_log_bin=1;
+BEGIN;
+INSERT INTO t2 VALUES (20);
+--disable_warnings
+INSERT INTO t1 VALUES (20);
+--enable_warnings
+INSERT INTO t2 VALUES (21);
+INSERT INTO t3 VALUES (20, 20);
+COMMIT;
+INSERT INTO t3 VALUES(21, 21);
+INSERT INTO t3 VALUES(22, 22);
+SET binlog_format=@old_format;
+--save_master_pos
+
+# Start a connection that will block the replicated transaction halfway.
+--connection con_temp1
+BEGIN;
+INSERT INTO t2 VALUES (21);
+
+--connection server_2
+START SLAVE;
+# Wait for the MyISAM change to be visible, after which replication will wait
+# for con_temp1 to roll back.
+--let $wait_condition= SELECT COUNT(*) = 1 FROM t1 WHERE a=20
+--disable_connect_log
+--source include/wait_condition.inc
+--enable_connect_log
+
+--connection con_temp2
+# Initiate slave stop. It will have to wait for the current event group
+# to complete.
+# The dbug injection causes debug_sync to signal 'wait_for_done_waiting'
+# when the SQL driver thread is ready.
+SET @old_dbug= @@GLOBAL.debug_dbug;
+SET GLOBAL debug_dbug="+d,rpl_parallel_wait_for_done_trigger";
+send STOP SLAVE;
+
+--connection con_temp1
+SET debug_sync='now WAIT_FOR wait_for_done_waiting';
+ROLLBACK;
+
+--connection con_temp2
+REAP;
+SET GLOBAL debug_dbug=@old_dbug;
+SET debug_sync='RESET';
+
+--connection server_2
+--disable_connect_log
+--source include/wait_for_slave_to_stop.inc
+--enable_connect_log
+# We should see the first transaction applied, but not the two others.
+SELECT * FROM t1 WHERE a >= 20 ORDER BY a;
+SELECT * FROM t2 WHERE a >= 20 ORDER BY a;
+SELECT * FROM t3 WHERE a >= 20 ORDER BY a;
+
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+--sync_with_master
+SELECT * FROM t1 WHERE a >= 20 ORDER BY a;
+SELECT * FROM t2 WHERE a >= 20 ORDER BY a;
+SELECT * FROM t3 WHERE a >= 20 ORDER BY a;
+
+
+--connection server_2
+# Respawn all worker threads to clear any left-over debug_sync or other
stuff.
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL binlog_format=@old_format;
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+
+--echo *** Test killing slave threads at various wait points ***
+--echo *** 1. Test killing transaction waiting in commit for previous transaction to commit ***
+
+# Set up three transactions on the master that will be group-committed
+# together so they can be replicated in parallel on the slave.
+--connection con_temp3
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued1 WAIT_FOR master_cont1';
+SET binlog_format=statement;
+send INSERT INTO t3 VALUES (31, foo(31,
+ 'commit_before_prepare_ordered WAIT_FOR t2_waiting',
+ 'commit_after_prepare_ordered SIGNAL t1_ready WAIT_FOR t1_cont'));
+
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued1';
+
+--connection con_temp4
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued2';
+SET binlog_format=statement;
+BEGIN;
+# This insert is just so we can get T2 to wait while a query is running that we
+# can see in SHOW PROCESSLIST so we can get its thread_id to kill later.
+INSERT INTO t3 VALUES (32, foo(32,
+ 'ha_write_row_end SIGNAL t2_query WAIT_FOR t2_cont',
+ ''));
+# This insert sets up debug_sync points so that T2 will tell when it is at its
+# wait point where we want to kill it - and when it has been killed.
+INSERT INTO t3 VALUES (33, foo(33,
+ 'group_commit_waiting_for_prior SIGNAL t2_waiting',
+ 'group_commit_waiting_for_prior_killed SIGNAL t2_killed'));
+send COMMIT;
+
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued2';
+
+--connection con_temp5
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued3';
+SET binlog_format=statement;
+send INSERT INTO t3 VALUES (34, foo(34,
+ '',
+ ''));
+
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued3';
+SET debug_sync='now SIGNAL master_cont1';
+
+--connection con_temp3
+REAP;
+--connection con_temp4
+REAP;
+--connection con_temp5
+REAP;
+
+--connection server_1
+SELECT * FROM t3 WHERE a >= 30 ORDER BY a;
+SET debug_sync='RESET';
+
+--connection server_2
+SET sql_log_bin=0;
+CALL mtr.add_suppression("Query execution was interrupted");
+CALL mtr.add_suppression("Commit failed due to failure of an earlier commit on which this one depends");
+CALL mtr.add_suppression("Slave: Connection was killed");
+SET sql_log_bin=1;
+# Wait until T2 is inside executing its insert of 32, then find it in SHOW
+# PROCESSLIST to know its thread id for KILL later.
+SET debug_sync='now WAIT_FOR t2_query';
+--let $thd_id= `SELECT ID FROM INFORMATION_SCHEMA.PROCESSLIST WHERE INFO LIKE '%foo(32%' AND INFO NOT LIKE '%LIKE%'`
+SET debug_sync='now SIGNAL t2_cont';
+
+# Wait until T2 has entered its wait for T1 to commit, and T1 has
+# progressed into its commit phase.
+SET debug_sync='now WAIT_FOR t1_ready';
+
+# Now kill the transaction T2.
+--replace_result $thd_id THD_ID
+eval KILL $thd_id;
+
+# Wait until T2 has reacted on the kill.
+SET debug_sync='now WAIT_FOR t2_killed';
+
+# Now we can allow T1 to proceed.
+SET debug_sync='now SIGNAL t1_cont';
+
+--let $slave_sql_errno= 1317,1927,1964
+--disable_connect_log
+--source include/wait_for_slave_sql_error.inc
+--enable_connect_log
+STOP SLAVE IO_THREAD;
+SELECT * FROM t3 WHERE a >= 30 ORDER BY a;
+
+# Now we have to disable the debug_sync statements, so they do not trigger
+# when the events are retried.
+SET debug_sync='RESET';
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+SET sql_log_bin=0;
+DROP FUNCTION foo;
+--delimiter ||
+CREATE FUNCTION foo(x INT, d1 VARCHAR(500), d2 VARCHAR(500))
+ RETURNS INT DETERMINISTIC
+ BEGIN
+ RETURN x;
+ END
+||
+--delimiter ;
+SET sql_log_bin=1;
+
+--connection server_1
+INSERT INTO t3 VALUES (39,0);
+--save_master_pos
+
+--connection server_2
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+--sync_with_master
+SELECT * FROM t3 WHERE a >= 30 ORDER BY a;
+# Restore the foo() function.
+SET sql_log_bin=0;
+DROP FUNCTION foo;
+--delimiter ||
+CREATE FUNCTION foo(x INT, d1 VARCHAR(500), d2 VARCHAR(500))
+ RETURNS INT DETERMINISTIC
+ BEGIN
+ IF d1 != '' THEN
+ SET debug_sync = d1;
+ END IF;
+ IF d2 != '' THEN
+ SET debug_sync = d2;
+ END IF;
+ RETURN x;
+ END
+||
+--delimiter ;
+SET sql_log_bin=1;
+
+
+--connection server_2
+# Respawn all worker threads to clear any left-over debug_sync or other stuff.
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL binlog_format=@old_format;
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+
+--echo *** 2. Same as (1), but without restarting IO thread after kill of SQL threads ***
+
+# Set up three transactions on the master that will be group-committed
+# together so they can be replicated in parallel on the slave.
+--connection con_temp3
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued1 WAIT_FOR master_cont1';
+SET binlog_format=statement;
+send INSERT INTO t3 VALUES (41, foo(41,
+ 'commit_before_prepare_ordered WAIT_FOR t2_waiting',
+ 'commit_after_prepare_ordered SIGNAL t1_ready WAIT_FOR t1_cont'));
+
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued1';
+
+--connection con_temp4
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued2';
+SET binlog_format=statement;
+BEGIN;
+# This insert is just so we can get T2 to wait while a query is running that we
+# can see in SHOW PROCESSLIST so we can get its thread_id to kill later.
+INSERT INTO t3 VALUES (42, foo(42,
+ 'ha_write_row_end SIGNAL t2_query WAIT_FOR t2_cont',
+ ''));
+# This insert sets up debug_sync points so that T2 will tell when it is at its
+# wait point where we want to kill it - and when it has been killed.
+INSERT INTO t3 VALUES (43, foo(43,
+ 'group_commit_waiting_for_prior SIGNAL t2_waiting',
+ 'group_commit_waiting_for_prior_killed SIGNAL t2_killed'));
+send COMMIT;
+
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued2';
+
+--connection con_temp5
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued3';
+SET binlog_format=statement;
+send INSERT INTO t3 VALUES (44, foo(44,
+ '',
+ ''));
+
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued3';
+SET debug_sync='now SIGNAL master_cont1';
+
+--connection con_temp3
+REAP;
+--connection con_temp4
+REAP;
+--connection con_temp5
+REAP;
+
+--connection server_1
+SELECT * FROM t3 WHERE a >= 40 ORDER BY a;
+SET debug_sync='RESET';
+
+--connection server_2
+# Wait until T2 is inside executing its insert of 42, then find it in SHOW
+# PROCESSLIST to know its thread id for KILL later.
+SET debug_sync='now WAIT_FOR t2_query';
+--let $thd_id= `SELECT ID FROM INFORMATION_SCHEMA.PROCESSLIST WHERE INFO LIKE '%foo(42%' AND INFO NOT LIKE '%LIKE%'`
+SET debug_sync='now SIGNAL t2_cont';
+
+# Wait until T2 has entered its wait for T1 to commit, and T1 has
+# progressed into its commit phase.
+SET debug_sync='now WAIT_FOR t1_ready';
+
+# Now kill the transaction T2.
+--replace_result $thd_id THD_ID
+eval KILL $thd_id;
+
+# Wait until T2 has reacted on the kill.
+SET debug_sync='now WAIT_FOR t2_killed';
+
+# Now we can allow T1 to proceed.
+SET debug_sync='now SIGNAL t1_cont';
+
+--let $slave_sql_errno= 1317,1927,1964
+--disable_connect_log
+--source include/wait_for_slave_sql_error.inc
+--enable_connect_log
+
+# Now we have to disable the debug_sync statements, so they do not trigger
+# when the events are retried.
+SET debug_sync='RESET';
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+SET sql_log_bin=0;
+DROP FUNCTION foo;
+--delimiter ||
+CREATE FUNCTION foo(x INT, d1 VARCHAR(500), d2 VARCHAR(500))
+ RETURNS INT DETERMINISTIC
+ BEGIN
+ RETURN x;
+ END
+||
+--delimiter ;
+SET sql_log_bin=1;
+
+--connection server_1
+INSERT INTO t3 VALUES (49,0);
+--save_master_pos
+
+--connection server_2
+START SLAVE SQL_THREAD;
+--sync_with_master
+SELECT * FROM t3 WHERE a >= 40 ORDER BY a;
+# Restore the foo() function.
+SET sql_log_bin=0;
+DROP FUNCTION foo;
+--delimiter ||
+CREATE FUNCTION foo(x INT, d1 VARCHAR(500), d2 VARCHAR(500))
+ RETURNS INT DETERMINISTIC
+ BEGIN
+ IF d1 != '' THEN
+ SET debug_sync = d1;
+ END IF;
+ IF d2 != '' THEN
+ SET debug_sync = d2;
+ END IF;
+ RETURN x;
+ END
+||
+--delimiter ;
+SET sql_log_bin=1;
+
+
+--connection server_2
+# Respawn all worker threads to clear any left-over debug_sync or other stuff.
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL binlog_format=@old_format;
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+
+--echo *** 3. Same as (2), but not using gtid mode ***
+
+--connection server_2
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+CHANGE MASTER TO master_use_gtid=no;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+--connection server_1
+# Set up three transactions on the master that will be group-committed
+# together so they can be replicated in parallel on the slave.
+--connection con_temp3
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued1 WAIT_FOR master_cont1';
+SET binlog_format=statement;
+send INSERT INTO t3 VALUES (51, foo(51,
+ 'commit_before_prepare_ordered WAIT_FOR t2_waiting',
+ 'commit_after_prepare_ordered SIGNAL t1_ready WAIT_FOR t1_cont'));
+
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued1';
+
+--connection con_temp4
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued2';
+SET binlog_format=statement;
+BEGIN;
+# This insert is just so we can get T2 to wait while a query is running that we
+# can see in SHOW PROCESSLIST so we can get its thread_id to kill later.
+INSERT INTO t3 VALUES (52, foo(52,
+ 'ha_write_row_end SIGNAL t2_query WAIT_FOR t2_cont',
+ ''));
+# This insert sets up debug_sync points so that T2 will tell when it is at its
+# wait point where we want to kill it - and when it has been killed.
+INSERT INTO t3 VALUES (53, foo(53,
+ 'group_commit_waiting_for_prior SIGNAL t2_waiting',
+ 'group_commit_waiting_for_prior_killed SIGNAL t2_killed'));
+send COMMIT;
+
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued2';
+
+--connection con_temp5
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued3';
+SET binlog_format=statement;
+send INSERT INTO t3 VALUES (54, foo(54,
+ '',
+ ''));
+
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued3';
+SET debug_sync='now SIGNAL master_cont1';
+
+--connection con_temp3
+REAP;
+--connection con_temp4
+REAP;
+--connection con_temp5
+REAP;
+
+--connection server_1
+SELECT * FROM t3 WHERE a >= 50 ORDER BY a;
+SET debug_sync='RESET';
+
+--connection server_2
+# Wait until T2 is inside executing its insert of 52, then find it in SHOW
+# PROCESSLIST to know its thread id for KILL later.
+SET debug_sync='now WAIT_FOR t2_query';
+--let $thd_id= `SELECT ID FROM INFORMATION_SCHEMA.PROCESSLIST WHERE INFO LIKE '%foo(52%' AND INFO NOT LIKE '%LIKE%'`
+SET debug_sync='now SIGNAL t2_cont';
+
+# Wait until T2 has entered its wait for T1 to commit, and T1 has
+# progressed into its commit phase.
+SET debug_sync='now WAIT_FOR t1_ready';
+
+# Now kill the transaction T2.
+--replace_result $thd_id THD_ID
+eval KILL $thd_id;
+
+# Wait until T2 has reacted on the kill.
+SET debug_sync='now WAIT_FOR t2_killed';
+
+# Now we can allow T1 to proceed.
+SET debug_sync='now SIGNAL t1_cont';
+
+--let $slave_sql_errno= 1317,1927,1964
+--disable_connect_log
+--source include/wait_for_slave_sql_error.inc
+--enable_connect_log
+SELECT * FROM t3 WHERE a >= 50 ORDER BY a;
+
+# Now we have to disable the debug_sync statements, so they do not trigger
+# when the events are retried.
+SET debug_sync='RESET';
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+SET sql_log_bin=0;
+DROP FUNCTION foo;
+--delimiter ||
+CREATE FUNCTION foo(x INT, d1 VARCHAR(500), d2 VARCHAR(500))
+ RETURNS INT DETERMINISTIC
+ BEGIN
+ RETURN x;
+ END
+||
+--delimiter ;
+SET sql_log_bin=1;
+
+--connection server_1
+INSERT INTO t3 VALUES (59,0);
+--save_master_pos
+
+--connection server_2
+START SLAVE SQL_THREAD;
+--sync_with_master
+SELECT * FROM t3 WHERE a >= 50 ORDER BY a;
+# Restore the foo() function.
+SET sql_log_bin=0;
+DROP FUNCTION foo;
+--delimiter ||
+CREATE FUNCTION foo(x INT, d1 VARCHAR(500), d2 VARCHAR(500))
+ RETURNS INT DETERMINISTIC
+ BEGIN
+ IF d1 != '' THEN
+ SET debug_sync = d1;
+ END IF;
+ IF d2 != '' THEN
+ SET debug_sync = d2;
+ END IF;
+ RETURN x;
+ END
+||
+--delimiter ;
+SET sql_log_bin=1;
+
+
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+CHANGE MASTER TO master_use_gtid=slave_pos;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+--connection server_2
+# Respawn all worker threads to clear any left-over debug_sync or other stuff.
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL binlog_format=@old_format;
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=4;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+
+--echo *** 4. Test killing thread that is waiting to start transaction until previous transaction commits ***
+
+# We set up four transactions T1, T2, T3, and T4 on the master. T2, T3, and T4
+# can run in parallel with each other (same group commit and commit id),
+# but not in parallel with T1.
+#
+# We use four worker threads, each Ti will be queued on each their own
+# worker thread. We will delay T1 commit, T3 will wait for T1 to begin
+# commit before it can start. We will kill T3 during this wait, and
+# check that everything works correctly.
+#
+# It is rather tricky to get the correct thread id of the worker to kill.
+# We start by injecting four dummy transactions in a debug_sync-controlled
+# manner to be able to get known thread ids for the workers in a pool with
+# just 4 worker threads. Then we let in each of the real test transactions
+# T1-T4 one at a time in a way which allows us to know which transaction
+# ends up with which thread id.
+
+--connection server_1
+SET binlog_format=statement;
+SET gtid_domain_id=2;
+BEGIN;
+# This debug_sync will linger on and be used to control T4 later.
+INSERT INTO t3 VALUES (70, foo(70,
+ 'rpl_parallel_start_waiting_for_prior SIGNAL t4_waiting', ''));
+INSERT INTO t3 VALUES (60, foo(60,
+ 'ha_write_row_end SIGNAL d2_query WAIT_FOR d2_cont2',
+ 'rpl_parallel_end_of_group SIGNAL d2_done WAIT_FOR d2_cont'));
+COMMIT;
+SET gtid_domain_id=0;
+
+--connection server_2
+SET debug_sync='now WAIT_FOR d2_query';
+--let $d2_thd_id= `SELECT ID FROM INFORMATION_SCHEMA.PROCESSLIST WHERE INFO LIKE '%foo(60%' AND INFO NOT LIKE '%LIKE%'`
+
+--connection server_1
+SET gtid_domain_id=1;
+BEGIN;
+# These debug_sync's will linger on and be used to control T3 later.
+INSERT INTO t3 VALUES (61, foo(61,
+ 'rpl_parallel_start_waiting_for_prior SIGNAL t3_waiting',
+ 'rpl_parallel_start_waiting_for_prior_killed SIGNAL t3_killed'));
+INSERT INTO t3 VALUES (62, foo(62,
+ 'ha_write_row_end SIGNAL d1_query WAIT_FOR d1_cont2',
+ 'rpl_parallel_end_of_group SIGNAL d1_done WAIT_FOR d1_cont'));
+COMMIT;
+SET gtid_domain_id=0;
+
+--connection server_2
+SET debug_sync='now WAIT_FOR d1_query';
+--let $d1_thd_id= `SELECT ID FROM INFORMATION_SCHEMA.PROCESSLIST WHERE INFO LIKE '%foo(62%' AND INFO NOT LIKE '%LIKE%'`
+
+--connection server_1
+SET gtid_domain_id=0;
+INSERT INTO t3 VALUES (63, foo(63,
+ 'ha_write_row_end SIGNAL d0_query WAIT_FOR d0_cont2',
+ 'rpl_parallel_end_of_group SIGNAL d0_done WAIT_FOR d0_cont'));
+
+--connection server_2
+SET debug_sync='now WAIT_FOR d0_query';
+--let $d0_thd_id= `SELECT ID FROM INFORMATION_SCHEMA.PROCESSLIST WHERE INFO LIKE '%foo(63%' AND INFO NOT LIKE '%LIKE%'`
+
+--connection server_1
+SET gtid_domain_id=3;
+BEGIN;
+# These debug_sync's will linger on and be used to control T2 later.
+INSERT INTO t3 VALUES (68, foo(68,
+ 'rpl_parallel_start_waiting_for_prior SIGNAL t2_waiting', ''));
+INSERT INTO t3 VALUES (69, foo(69,
+ 'ha_write_row_end SIGNAL d3_query WAIT_FOR d3_cont2',
+ 'rpl_parallel_end_of_group SIGNAL d3_done WAIT_FOR d3_cont'));
+COMMIT;
+SET gtid_domain_id=0;
+
+--connection server_2
+SET debug_sync='now WAIT_FOR d3_query';
+--let $d3_thd_id= `SELECT ID FROM INFORMATION_SCHEMA.PROCESSLIST WHERE INFO LIKE '%foo(69%' AND INFO NOT LIKE '%LIKE%'`
+
+SET debug_sync='now SIGNAL d2_cont2';
+SET debug_sync='now WAIT_FOR d2_done';
+SET debug_sync='now SIGNAL d1_cont2';
+SET debug_sync='now WAIT_FOR d1_done';
+SET debug_sync='now SIGNAL d0_cont2';
+SET debug_sync='now WAIT_FOR d0_done';
+SET debug_sync='now SIGNAL d3_cont2';
+SET debug_sync='now WAIT_FOR d3_done';
+
+# Now prepare the real transactions T1, T2, T3, T4 on the master.
+
+--connection con_temp3
+# Create transaction T1.
+SET binlog_format=statement;
+INSERT INTO t3 VALUES (64, foo(64,
+  'rpl_parallel_before_mark_start_commit SIGNAL t1_waiting WAIT_FOR t1_cont', ''));
+
+# Create transaction T2, as a group commit leader on the master.
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued2 WAIT_FOR master_cont2';
+send INSERT INTO t3 VALUES (65, foo(65, '', ''));
+
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued2';
+
+--connection con_temp4
+# Create transaction T3, participating in T2's group commit.
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued3';
+send INSERT INTO t3 VALUES (66, foo(66, '', ''));
+
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued3';
+
+--connection con_temp5
+# Create transaction T4, participating in group commit with T2 and T3.
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued4';
+send INSERT INTO t3 VALUES (67, foo(67, '', ''));
+
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued4';
+SET debug_sync='now SIGNAL master_cont2';
+
+--connection con_temp3
+REAP;
+--connection con_temp4
+REAP;
+--connection con_temp5
+REAP;
+
+--connection server_1
+SELECT * FROM t3 WHERE a >= 60 ORDER BY a;
+SET debug_sync='RESET';
+
+--connection server_2
+# Now we have the four transactions pending for replication on the slave.
+# Let them be queued for our three worker threads in a controlled fashion.
+# We put them at a stage where T1 is delayed and T3 is waiting for T1 to
+# commit before T3 can start. Then we kill T3.
+
+# Make the worker D0 free, and wait for T1 to be queued in it.
+SET debug_sync='now SIGNAL d0_cont';
+SET debug_sync='now WAIT_FOR t1_waiting';
+
+# Make the worker D3 free, and wait for T2 to be queued in it.
+SET debug_sync='now SIGNAL d3_cont';
+SET debug_sync='now WAIT_FOR t2_waiting';
+
+# Now release worker D1, and wait for T3 to be queued in it.
+# T3 will wait for T1 to commit before it can start.
+SET debug_sync='now SIGNAL d1_cont';
+SET debug_sync='now WAIT_FOR t3_waiting';
+
+# Release worker D2. Wait for T4 to be queued, so we are sure it has
+# received the debug_sync signal (else we might overwrite it with the
+# next debug_sync).
+SET debug_sync='now SIGNAL d2_cont';
+SET debug_sync='now WAIT_FOR t4_waiting';
+
+# Now we kill the waiting transaction T3 in worker D1.
+--replace_result $d1_thd_id THD_ID
+eval KILL $d1_thd_id;
+
+# Wait until T3 has reacted on the kill.
+SET debug_sync='now WAIT_FOR t3_killed';
+
+# Now we can allow T1 to proceed.
+SET debug_sync='now SIGNAL t1_cont';
+
+--let $slave_sql_errno= 1317,1927,1964
+--disable_connect_log
+--source include/wait_for_slave_sql_error.inc
+--enable_connect_log
+STOP SLAVE IO_THREAD;
+# Since T2, T3, and T4 run in parallel, we can not be sure if T2 will have time
+# to commit or not before the stop. However, T1 should commit, and T3/T4 may
+# not have committed. (After slave restart we check that all become committed
+# eventually).
+SELECT * FROM t3 WHERE a >= 60 AND a != 65 ORDER BY a;
+
+# Now we have to disable the debug_sync statements, so they do not trigger
+# when the events are retried.
+SET debug_sync='RESET';
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+SET sql_log_bin=0;
+DROP FUNCTION foo;
+--delimiter ||
+CREATE FUNCTION foo(x INT, d1 VARCHAR(500), d2 VARCHAR(500))
+ RETURNS INT DETERMINISTIC
+ BEGIN
+ RETURN x;
+ END
+||
+--delimiter ;
+SET sql_log_bin=1;
+
+--connection server_1
+UPDATE t3 SET b=b+1 WHERE a=60;
+--save_master_pos
+
+--connection server_2
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+--sync_with_master
+SELECT * FROM t3 WHERE a >= 60 ORDER BY a;
+# Restore the foo() function.
+SET sql_log_bin=0;
+DROP FUNCTION foo;
+--delimiter ||
+CREATE FUNCTION foo(x INT, d1 VARCHAR(500), d2 VARCHAR(500))
+ RETURNS INT DETERMINISTIC
+ BEGIN
+ IF d1 != '' THEN
+ SET debug_sync = d1;
+ END IF;
+ IF d2 != '' THEN
+ SET debug_sync = d2;
+ END IF;
+ RETURN x;
+ END
+||
+--delimiter ;
+SET sql_log_bin=1;
+
+--connection server_2
+# Respawn all worker threads to clear any left-over debug_sync or other stuff.
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL binlog_format=@old_format;
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+
+--echo *** 5. Test killing thread that is waiting for queue of max length to shorten ***
+
+# Find the thread id of the driver SQL thread that we want to kill.
+--let $wait_condition= SELECT COUNT(*) = 1 FROM INFORMATION_SCHEMA.PROCESSLIST WHERE STATE LIKE '%Slave has read all relay log%'
+--disable_connect_log
+--source include/wait_condition.inc
+--enable_connect_log
+--let $thd_id= `SELECT ID FROM INFORMATION_SCHEMA.PROCESSLIST WHERE STATE LIKE '%Slave has read all relay log%'`
+SET @old_max_queued= @@GLOBAL.slave_parallel_max_queued;
+SET GLOBAL slave_parallel_max_queued=9000;
+
+--connection server_1
+--let bigstring= `SELECT REPEAT('x', 10000)`
+SET binlog_format=statement;
+# Create an event that will wait to be signalled.
+INSERT INTO t3 VALUES (80, foo(0,
+ 'ha_write_row_end SIGNAL query_waiting WAIT_FOR query_cont', ''));
+
+--connection server_2
+SET debug_sync='now WAIT_FOR query_waiting';
+# Inject that the SQL driver thread will signal `wait_queue_ready' to debug_sync
+# as it goes to wait for the event queue to become smaller than the value of
+# @@slave_parallel_max_queued.
+SET @old_dbug= @@GLOBAL.debug_dbug;
+SET GLOBAL debug_dbug="+d,rpl_parallel_wait_queue_max";
+
+--connection server_1
+--disable_query_log
+# Create an event that will fill up the queue.
+# The Xid event at the end of the event group will have to wait for the Query
+# event with the INSERT to drain so the queue becomes shorter. However that in
+# turn waits for the prior event group to continue.
+eval INSERT INTO t3 VALUES (81, LENGTH('$bigstring'));
+--enable_query_log
+SELECT * FROM t3 WHERE a >= 80 ORDER BY a;
+
+--connection server_2
+SET debug_sync='now WAIT_FOR wait_queue_ready';
+
+--replace_result $thd_id THD_ID
+eval KILL $thd_id;
+
+SET debug_sync='now WAIT_FOR wait_queue_killed';
+SET debug_sync='now SIGNAL query_cont';
+
+--let $slave_sql_errno= 1317,1927,1964
+--disable_connect_log
+--source include/wait_for_slave_sql_error.inc
+--enable_connect_log
+STOP SLAVE IO_THREAD;
+
+SET GLOBAL debug_dbug=@old_dbug;
+SET GLOBAL slave_parallel_max_queued= @old_max_queued;
+
+--connection server_1
+INSERT INTO t3 VALUES (82,0);
+SET binlog_format=@old_format;
+--save_master_pos
+
+--connection server_2
+SET debug_sync='RESET';
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+--sync_with_master
+SELECT * FROM t3 WHERE a >= 80 ORDER BY a;
+
+
+--connection server_2
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL binlog_format=@old_format;
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+--echo *** MDEV-5788 Incorrect free of rgi->deferred_events in parallel replication ***
+
+--connection server_2
+# Use just two worker threads, so we are sure to get the rpl_group_info added
+# to the free list, which is what triggered the bug.
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL replicate_ignore_table="test.t3";
+SET GLOBAL slave_parallel_threads=2;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+--connection server_1
+INSERT INTO t3 VALUES (100, rand());
+INSERT INTO t3 VALUES (101, rand());
+
+--save_master_pos
+
+--connection server_2
+--sync_with_master
+
+--connection server_1
+INSERT INTO t3 VALUES (102, rand());
+INSERT INTO t3 VALUES (103, rand());
+INSERT INTO t3 VALUES (104, rand());
+INSERT INTO t3 VALUES (105, rand());
+
+--save_master_pos
+
+--connection server_2
+--sync_with_master
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL replicate_ignore_table="";
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+--connection server_1
+INSERT INTO t3 VALUES (106, rand());
+INSERT INTO t3 VALUES (107, rand());
+--save_master_pos
+
+--connection server_2
+--sync_with_master
+--replace_column 2 #
+SELECT * FROM t3 WHERE a >= 100 ORDER BY a;
+
+
+--echo *** MDEV-5921: In parallel replication, an error is not correctly signalled to the next transaction ***
+
+--connection server_2
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL slave_parallel_threads=10;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+--connection server_1
+INSERT INTO t3 VALUES (110, 1);
+--save_master_pos
+
+--connection server_2
+--sync_with_master
+SELECT * FROM t3 WHERE a >= 110 ORDER BY a;
+# Inject a duplicate key error.
+SET sql_log_bin=0;
+INSERT INTO t3 VALUES (111, 666);
+SET sql_log_bin=1;
+
+--connection server_1
+
+# Create a group commit with two inserts, the first one conflicts with a row on the slave
+--connect (con1,127.0.0.1,root,,test,$SERVER_MYPORT_1,)
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued1 WAIT_FOR master_cont1';
+send INSERT INTO t3 VALUES (111, 2);
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued1';
+
+--connect (con2,127.0.0.1,root,,test,$SERVER_MYPORT_1,)
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued2';
+send INSERT INTO t3 VALUES (112, 3);
+
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued2';
+SET debug_sync='now SIGNAL master_cont1';
+
+--connection con1
+REAP;
+--connection con2
+REAP;
+SET debug_sync='RESET';
+--save_master_pos
+
+--connection server_2
+--let $slave_sql_errno= 1062
+--disable_connect_log
+--source include/wait_for_slave_sql_error.inc
+--source include/wait_for_slave_sql_to_stop.inc
+--enable_connect_log
+# We should not see the row (112,3) here, it should be rolled back due to
+# error signal from the prior transaction.
+SELECT * FROM t3 WHERE a >= 110 ORDER BY a;
+SET sql_log_bin=0;
+DELETE FROM t3 WHERE a=111 AND b=666;
+SET sql_log_bin=1;
+START SLAVE SQL_THREAD;
+--sync_with_master
+SELECT * FROM t3 WHERE a >= 110 ORDER BY a;
+
+
+--echo *** MDEV-5914: Parallel replication deadlock due to InnoDB lock conflicts ***
+--connection server_2
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+
+--connection server_1
+CREATE TABLE t4 (a INT PRIMARY KEY, b INT, KEY b_idx(b)) ENGINE=InnoDB;
+INSERT INTO t4 VALUES (1,NULL), (2,2), (3,NULL), (4,4), (5, NULL), (6, 6);
+
+# Create a group commit with UPDATE and DELETE, in that order.
+# The bug was that while the UPDATE's row lock does not block the DELETE, the
+# DELETE's gap lock _does_ block the UPDATE. This could cause a deadlock
+# on the slave.
+--connection con1
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued1 WAIT_FOR master_cont1';
+send UPDATE t4 SET b=NULL WHERE a=6;
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued1';
+
+--connection con2
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued2';
+send DELETE FROM t4 WHERE b <= 3;
+
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued2';
+SET debug_sync='now SIGNAL master_cont1';
+
+--connection con1
+REAP;
+--connection con2
+REAP;
+SET debug_sync='RESET';
+--save_master_pos
+
+--connection server_2
+--disable_connect_log
+--source include/start_slave.inc
+--sync_with_master
+--source include/stop_slave.inc
+--enable_connect_log
+
+SELECT * FROM t4 ORDER BY a;
+
+
+# Another example, this one with INSERT vs. DELETE
+--connection server_1
+DELETE FROM t4;
+INSERT INTO t4 VALUES (1,NULL), (2,2), (3,NULL), (4,4), (5, NULL), (6, 6);
+
+# Create a group commit with INSERT and DELETE, in that order.
+# The bug was that while the INSERT's insert intention lock does not block
+# the DELETE, the DELETE's gap lock _does_ block the INSERT. This could cause
+# a deadlock on the slave.
+--connection con1
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued1 WAIT_FOR master_cont1';
+send INSERT INTO t4 VALUES (7, NULL);
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued1';
+
+--connection con2
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued2';
+send DELETE FROM t4 WHERE b <= 3;
+
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued2';
+SET debug_sync='now SIGNAL master_cont1';
+
+--connection con1
+REAP;
+--connection con2
+REAP;
+SET debug_sync='RESET';
+--save_master_pos
+
+--connection server_2
+--disable_connect_log
+--source include/start_slave.inc
+--sync_with_master
+--source include/stop_slave.inc
+--enable_connect_log
+
+SELECT * FROM t4 ORDER BY a;
+
+
+# MDEV-6549, failing to update gtid_slave_pos for a transaction that was retried.
+# The problem was that when a transaction updates the mysql.gtid_slave_pos
+# table, it clears the flag that marks that there is a GTID position that
+# needs to be updated. Then, if the transaction got killed after that due
+# to a deadlock, the subsequent retry would fail to notice that the GTID needs
+# to be recorded in gtid_slave_pos.
+#
+# (In the original bug report, the symptom was an assertion; this was however
+# just a side effect of the missing update of gtid_slave_pos, which also
+# happened to cause a missing clear of OPTION_GTID_BEGIN).
+--connection server_1
+DELETE FROM t4;
+INSERT INTO t4 VALUES (1,NULL), (2,2), (3,NULL), (4,4), (5, NULL), (6, 6);
+
+# Create two transactions that can run in parallel on the slave but cause
+# a deadlock if the second runs before the first.
+--connection con1
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued1 WAIT_FOR master_cont1';
+send UPDATE t4 SET b=NULL WHERE a=6;
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued1';
+
+--connection con2
+# Must use statement-based binlogging. Otherwise the transaction will not be
+# binlogged at all, as it modifies no rows.
+SET @old_format= @@SESSION.binlog_format;
+SET binlog_format='statement';
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued2';
+send DELETE FROM t4 WHERE b <= 1;
+
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued2';
+SET debug_sync='now SIGNAL master_cont1';
+
+--connection con1
+REAP;
+--connection con2
+REAP;
+SET binlog_format= @old_format;
+SET debug_sync='RESET';
+--save_master_pos
+--let $last_gtid= `SELECT @@last_gtid`
+
+--connection server_2
+# Disable the usual skip of gap locks for transactions that are run in
+# parallel, using DBUG. This allows the deadlock to occur, and this in turn
+# triggers a retry of the second transaction, and the code that was buggy and
+# caused the gtid_slave_pos update to be skipped in the retry.
+SET @old_dbug= @@GLOBAL.debug_dbug;
+SET GLOBAL debug_dbug="+d,disable_thd_need_ordering_with";
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+--sync_with_master
+SET GLOBAL debug_dbug=@old_dbug;
+
+SELECT * FROM t4 ORDER BY a;
+# Check that the GTID of the second transaction was correctly recorded in
+# gtid_slave_pos, in the variable as well as in the table.
+--replace_result $last_gtid GTID
+eval SET @last_gtid= '$last_gtid';
+SELECT IF(@@gtid_slave_pos LIKE CONCAT('%',@last_gtid,'%'), "GTID found ok",
+       CONCAT("GTID ", @last_gtid, " not found in gtid_slave_pos=", @@gtid_slave_pos))
+  AS result;
+SELECT "ROW FOUND" AS `Is the row found?`
+ FROM mysql.gtid_slave_pos
+ WHERE CONCAT(domain_id, "-", server_id, "-", seq_no) = @last_gtid;
+
+
+--echo *** MDEV-5938: Exec_master_log_pos not updated at log rotate in parallel replication ***
+--connection server_2
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL slave_parallel_threads=1;
+SET DEBUG_SYNC= 'RESET';
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+--connection server_1
+CREATE TABLE t5 (a INT PRIMARY KEY, b INT);
+INSERT INTO t5 VALUES (1,1);
+INSERT INTO t5 VALUES (2,2), (3,8);
+INSERT INTO t5 VALUES (4,16);
+--save_master_pos
+
+--connection server_2
+--sync_with_master
+let $io_file= query_get_value(SHOW SLAVE STATUS, Master_Log_File, 1);
+let $io_pos= query_get_value(SHOW SLAVE STATUS, Read_Master_Log_Pos, 1);
+let $sql_file= query_get_value(SHOW SLAVE STATUS, Relay_Master_Log_File, 1);
+let $sql_pos= query_get_value(SHOW SLAVE STATUS, Exec_Master_Log_Pos, 1);
+--disable_query_log
+eval SELECT IF('$io_file' = '$sql_file', "OK", "Not ok, $io_file <> $sql_file") AS test_check;
+eval SELECT IF('$io_pos' = '$sql_pos', "OK", "Not ok, $io_pos <> $sql_pos") AS test_check;
+--enable_query_log
+
+--connection server_1
+FLUSH LOGS;
+--disable_connect_log
+--source include/wait_for_binlog_checkpoint.inc
+--enable_connect_log
+--save_master_pos
+
+--connection server_2
+--sync_with_master
+let $io_file= query_get_value(SHOW SLAVE STATUS, Master_Log_File, 1);
+let $io_pos= query_get_value(SHOW SLAVE STATUS, Read_Master_Log_Pos, 1);
+let $sql_file= query_get_value(SHOW SLAVE STATUS, Relay_Master_Log_File, 1);
+let $sql_pos= query_get_value(SHOW SLAVE STATUS, Exec_Master_Log_Pos, 1);
+--disable_query_log
+eval SELECT IF('$io_file' = '$sql_file', "OK", "Not ok, $io_file <> $sql_file") AS test_check;
+eval SELECT IF('$io_pos' = '$sql_pos', "OK", "Not ok, $io_pos <> $sql_pos") AS test_check;
+--enable_query_log
+
+
+--echo *** MDEV-6435: Incorrect error handling when query binlogged partially on master with "killed" error ***
+
+--connection server_1
+CREATE TABLE t6 (a INT) ENGINE=MyISAM;
+CREATE TRIGGER tr AFTER INSERT ON t6 FOR EACH ROW SET @a = 1;
+
+--connection con1
+SET @old_format= @@binlog_format;
+SET binlog_format= statement;
+--let $conid = `SELECT CONNECTION_ID()`
+SET debug_sync='sp_head_execute_before_loop SIGNAL ready WAIT_FOR cont';
+send INSERT INTO t6 VALUES (1), (2), (3);
+
+--connection server_1
+SET debug_sync='now WAIT_FOR ready';
+--replace_result $conid CONID
+eval KILL QUERY $conid;
+SET debug_sync='now SIGNAL cont';
+
+--connection con1
+--error ER_QUERY_INTERRUPTED
+--reap
+SET binlog_format= @old_format;
+SET debug_sync='RESET';
+--let $after_error_gtid_pos= `SELECT @@gtid_binlog_pos`
+
+--connection server_1
+SET debug_sync='RESET';
+
+
+--connection server_2
+--let $slave_sql_errno= 1317
+--disable_connect_log
+--source include/wait_for_slave_sql_error.inc
+--enable_connect_log
+STOP SLAVE IO_THREAD;
+--replace_result $after_error_gtid_pos AFTER_ERROR_GTID_POS
+eval SET GLOBAL gtid_slave_pos= '$after_error_gtid_pos';
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+--connection server_1
+INSERT INTO t6 VALUES (4);
+SELECT * FROM t6 ORDER BY a;
+--save_master_pos
+
+--connection server_2
+--sync_with_master
+SELECT * FROM t6 ORDER BY a;
+
+
+--echo *** MDEV-6551: Some replication errors are ignored if slave_parallel_threads > 0 ***
+
+--connection server_1
+INSERT INTO t2 VALUES (31);
+--let $gtid1= `SELECT @@LAST_GTID`
+--disable_connect_log
+--source include/save_master_gtid.inc
+--enable_connect_log
+
+--connection server_2
+--disable_connect_log
+--source include/sync_with_master_gtid.inc
+--source include/stop_slave.inc
+SET GLOBAL slave_parallel_threads= 0;
+--source include/start_slave.inc
+--enable_connect_log
+
+# Force a duplicate key error on the slave.
+SET sql_log_bin= 0;
+INSERT INTO t2 VALUES (32);
+SET sql_log_bin= 1;
+
+--connection server_1
+INSERT INTO t2 VALUES (32);
+--let $gtid2= `SELECT @@LAST_GTID`
+# Rotate the binlog; the bug is triggered when the master binlog file changes
+# after the event group that causes the duplicate key error.
+FLUSH LOGS;
+INSERT INTO t2 VALUES (33);
+INSERT INTO t2 VALUES (34);
+SELECT * FROM t2 WHERE a >= 30 ORDER BY a;
+--disable_connect_log
+--source include/save_master_gtid.inc
+--enable_connect_log
+
+--connection server_2
+--let $slave_sql_errno= 1062
+--disable_connect_log
+--source include/wait_for_slave_sql_error.inc
+--enable_connect_log
+
+--connection server_2
+--disable_connect_log
+--source include/stop_slave_io.inc
+--enable_connect_log
+SET GLOBAL slave_parallel_threads=10;
+START SLAVE;
+
+--let $slave_sql_errno= 1062
+--disable_connect_log
+--source include/wait_for_slave_sql_error.inc
+--enable_connect_log
+
+# Note: IO thread is still running at this point.
+# The bug seems to have been that restarting the SQL thread after an error with
+# the IO thread still running, somehow picks up a later relay log position and
+# thus ends up skipping the failing event, rather than re-executing.
+
+START SLAVE SQL_THREAD;
+--let $slave_sql_errno= 1062
+--disable_connect_log
+--source include/wait_for_slave_sql_error.inc
+--enable_connect_log
+
+SELECT * FROM t2 WHERE a >= 30 ORDER BY a;
+
+# Skip the duplicate error, so we can proceed.
+--error ER_SLAVE_SKIP_NOT_IN_GTID
+SET sql_slave_skip_counter= 1;
+--disable_connect_log
+--source include/stop_slave_io.inc
+--enable_connect_log
+--disable_query_log
+eval SET GLOBAL gtid_slave_pos = REPLACE(@@gtid_slave_pos, "$gtid1", "$gtid2");
+--enable_query_log
+--disable_connect_log
+--source include/start_slave.inc
+--source include/sync_with_master_gtid.inc
+--enable_connect_log
+
+SELECT * FROM t2 WHERE a >= 30 ORDER BY a;
+
+
+--echo *** MDEV-6775: Wrong binlog order in parallel replication ***
+--connection server_1
+# A bit tricky bug to reproduce. On the master, we binlog in statement-mode
+# two transactions, an UPDATE followed by a DELETE. On the slave, we replicate
+# with binlog-mode set to ROW, which means the DELETE, which modifies no rows,
+# is not binlogged. Then we inject a wait in the group commit code on the
+# slave, shortly before the actual commit of the UPDATE. The bug was that the
+# DELETE could wake up from wait_for_prior_commit() before the commit of the
+# UPDATE. So the test could see the slave position updated to after DELETE,
+# while the UPDATE was still not visible.
+DELETE FROM t4;
+INSERT INTO t4 VALUES (1,NULL), (3,NULL), (4,4), (5, NULL), (6, 6);
+--disable_connect_log
+--source include/save_master_gtid.inc
+--enable_connect_log
+
+--connection server_2
+--disable_connect_log
+--source include/sync_with_master_gtid.inc
+--source include/stop_slave.inc
+--enable_connect_log
+SET @old_dbug= @@GLOBAL.debug_dbug;
+SET GLOBAL debug_dbug="+d,inject_binlog_commit_before_get_LOCK_log";
+SET @old_format=@@GLOBAL.binlog_format;
+SET GLOBAL binlog_format=ROW;
+# Re-spawn the worker threads to be sure they pick up the new binlog format
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+
+--connection con1
+SET @old_format= @@binlog_format;
+SET binlog_format= statement;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued1 WAIT_FOR master_cont1';
+send UPDATE t4 SET b=NULL WHERE a=6;
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued1';
+
+--connection con2
+SET @old_format= @@binlog_format;
+SET binlog_format= statement;
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued2';
+send DELETE FROM t4 WHERE b <= 3;
+
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued2';
+SET debug_sync='now SIGNAL master_cont1';
+
+--connection con1
+REAP;
+SET binlog_format= @old_format;
+--connection con2
+REAP;
+SET binlog_format= @old_format;
+SET debug_sync='RESET';
+--save_master_pos
+SELECT * FROM t4 ORDER BY a;
+
+--connection server_2
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+SET debug_sync= 'now WAIT_FOR waiting';
+--sync_with_master
+SELECT * FROM t4 ORDER BY a;
+SET debug_sync= 'now SIGNAL cont';
+
+# Re-spawn the worker threads to remove any DBUG injections or DEBUG_SYNC.
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL debug_dbug=@old_dbug;
+SET GLOBAL binlog_format= @old_format;
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+
+--echo *** MDEV-7237: Parallel replication: incorrect relaylog position after stop/start the slave ***
+--connection server_1
+INSERT INTO t2 VALUES (40);
+--save_master_pos
+
+--connection server_2
+--sync_with_master
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+CHANGE MASTER TO master_use_gtid=no;
+SET @old_dbug= @@GLOBAL.debug_dbug;
+# This DBUG injection causes a DEBUG_SYNC signal "scheduled_gtid_0_x_100" when
+# GTID 0-1-100 has been scheduled for and fetched by a worker thread.
+SET GLOBAL debug_dbug="+d,rpl_parallel_scheduled_gtid_0_x_100";
+# This DBUG injection causes a DEBUG_SYNC signal "wait_for_done_waiting" when
+# STOP SLAVE has signalled all worker threads to stop.
+SET GLOBAL debug_dbug="+d,rpl_parallel_wait_for_done_trigger";
+# Reset worker threads to make DBUG setting catch on.
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+
+
+--connection server_1
+# Setup some transaction for the slave to replicate.
+INSERT INTO t2 VALUES (41);
+INSERT INTO t2 VALUES (42);
+# Need to log the DELETE in statement format, so we can see it in processlist.
+SET @old_format= @@binlog_format;
+SET binlog_format= statement;
+DELETE FROM t2 WHERE a=40;
+SET binlog_format= @old_format;
+INSERT INTO t2 VALUES (43);
+INSERT INTO t2 VALUES (44);
+# Force the slave to switch to a new relay log file.
+FLUSH LOGS;
+INSERT INTO t2 VALUES (45);
+# Inject a GTID 0-1-100, which will trigger a DEBUG_SYNC signal when this
+# transaction has been fetched by a worker thread.
+SET gtid_seq_no=100;
+INSERT INTO t2 VALUES (46);
+--save_master_pos
+
+--connection con_temp2
+# Temporarily block the DELETE on a=40 from completing.
+BEGIN;
+SELECT * FROM t2 WHERE a=40 FOR UPDATE;
+
+
+--connection server_2
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+# Wait for a worker thread to start on the DELETE that will be blocked
+# temporarily by the SELECT FOR UPDATE.
+--let $wait_condition= SELECT count(*) > 0 FROM information_schema.processlist WHERE state='updating' and info LIKE '%DELETE FROM t2 WHERE a=40%'
+--disable_connect_log
+--source include/wait_condition.inc
+--enable_connect_log
+
+# The DBUG injection set above will make the worker thread signal the following
+# debug_sync when the GTID 0-1-100 has been reached by a worker thread.
+# Thus, at this point, the SQL driver thread has reached the next
+# relay log file name, while a worker thread is still processing a
+# transaction in the previous relay log file, blocked on the SELECT FOR
+# UPDATE.
+SET debug_sync= 'now WAIT_FOR scheduled_gtid_0_x_100';
+# At this point, the SQL driver thread is in the new relay log file, while
+# the DELETE from the old relay log file is not yet complete. We will stop
+# the slave at this point. The bug was that the DELETE statement would
+# update the slave position to the _new_ relay log file name instead of
+# its own old file name. Thus, by stopping and restarting the slave at this
+# point, we would get an error at restart due to incorrect position. (If
+# we let the slave catch up before stopping, the incorrect position
+# would be corrected by a later transaction).
+
+send STOP SLAVE;
+
+--connection con_temp2
+# Wait for STOP SLAVE to have proceeded sufficiently that it has signalled
+# all worker threads to stop; this ensures that we will stop after the DELETE
+# transaction (and not after a later transaction that might have been able
+# to set a fixed position).
+SET debug_sync= 'now WAIT_FOR wait_for_done_waiting';
+# Now release the row lock that was blocking the replication of DELETE.
+ROLLBACK;
+
+--connection server_2
+reap;
+--disable_connect_log
+--source include/wait_for_slave_sql_to_stop.inc
+--enable_connect_log
+SELECT * FROM t2 WHERE a >= 40 ORDER BY a;
+# Now restart the slave. With the bug present, this would start at an
+# incorrect relay log position, causing relay log read error (or if unlucky,
+# silently skip a number of events).
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+--sync_with_master
+SELECT * FROM t2 WHERE a >= 40 ORDER BY a;
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL debug_dbug=@old_dbug;
+SET DEBUG_SYNC= 'RESET';
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+CHANGE MASTER TO master_use_gtid=slave_pos;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+
+--echo *** MDEV-7326 Server deadlock in connection with parallel replication ***
+# We use three transactions, each in a separate group commit.
+# T1 does mark_start_commit(), then gets a deadlock error.
+# T2 wakes up and starts running
+# T1 does unmark_start_commit()
+# T3 goes to wait for T2 to start its commit
+# T2 does mark_start_commit()
+# The bug was that at this point, T3 got deadlocked. Because T1 has unmarked(),
+# T3 did not yet see the count_committing_event_groups reach its target value
+# yet. But when T1 later re-did mark_start_commit(), it failed to send a wakeup
+# to T3.
+
+--connection server_2
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=3;
+SET GLOBAL debug_dbug="+d,rpl_parallel_simulate_temp_err_xid";
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+--connection server_1
+SET @old_format= @@SESSION.binlog_format;
+SET binlog_format= STATEMENT;
+# This debug_sync will linger on and be used to control T3 later.
+INSERT INTO t1 VALUES (foo(50,
+ "rpl_parallel_start_waiting_for_prior SIGNAL t3_ready",
+ "rpl_parallel_end_of_group SIGNAL prep_ready WAIT_FOR prep_cont"));
+--save_master_pos
+--connection server_2
+# Wait for the debug_sync point for T3 to be set. But let the preparation
+# transaction remain hanging, so that T1 and T2 will be scheduled for the
+# remaining two worker threads.
+SET DEBUG_SYNC= "now WAIT_FOR prep_ready";
+
+--connection server_1
+INSERT INTO t2 VALUES (foo(50,
+ "rpl_parallel_simulate_temp_err_xid SIGNAL t1_ready1 WAIT_FOR t1_cont1",
+ "rpl_parallel_retry_after_unmark SIGNAL t1_ready2 WAIT_FOR t1_cont2"));
+--save_master_pos
+
+--connection server_2
+SET DEBUG_SYNC= "now WAIT_FOR t1_ready1";
+# T1 has now done mark_start_commit(). It will later do a rollback and retry.
+
+--connection server_1
+# Use a MyISAM table for T2 and T3, so they do not trigger the
+# rpl_parallel_simulate_temp_err_xid DBUG insertion on XID event.
+INSERT INTO t1 VALUES (foo(51,
+  "rpl_parallel_before_mark_start_commit SIGNAL t2_ready1 WAIT_FOR t2_cont1",
+ "rpl_parallel_after_mark_start_commit SIGNAL t2_ready2"));
+
+--connection server_2
+SET DEBUG_SYNC= "now WAIT_FOR t2_ready1";
+# T2 has now started running, but has not yet done mark_start_commit()
+SET DEBUG_SYNC= "now SIGNAL t1_cont1";
+SET DEBUG_SYNC= "now WAIT_FOR t1_ready2";
+# T1 has now done unmark_start_commit() in preparation for its retry.
+
+--connection server_1
+INSERT INTO t1 VALUES (52);
+SET BINLOG_FORMAT= @old_format;
+SELECT * FROM t2 WHERE a>=50 ORDER BY a;
+SELECT * FROM t1 WHERE a>=50 ORDER BY a;
+
+--connection server_2
+# Let the preparation transaction complete, so that the same worker thread
+# can continue with the transaction T3.
+SET DEBUG_SYNC= "now SIGNAL prep_cont";
+SET DEBUG_SYNC= "now WAIT_FOR t3_ready";
+# T3 has now gone to wait for T2 to start committing
+SET DEBUG_SYNC= "now SIGNAL t2_cont1";
+SET DEBUG_SYNC= "now WAIT_FOR t2_ready2";
+# T2 has now done mark_start_commit().
+# Let things run, and check that T3 does not get deadlocked.
+SET DEBUG_SYNC= "now SIGNAL t1_cont2";
+--sync_with_master
+
+--connection server_1
+--save_master_pos
+--connection server_2
+--sync_with_master
+SELECT * FROM t2 WHERE a>=50 ORDER BY a;
+SELECT * FROM t1 WHERE a>=50 ORDER BY a;
+SET DEBUG_SYNC="reset";
+
+# Re-spawn the worker threads to remove any DBUG injections or DEBUG_SYNC.
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL debug_dbug=@old_dbug;
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+
+--echo *** MDEV-7326 Server deadlock in connection with parallel replication ***
+# Similar to the previous test, but with T2 and T3 in the same GCO.
+# We use three transactions, T1 in one group commit and T2/T3 in another.
+# T1 does mark_start_commit(), then gets a deadlock error.
+# T2 wakes up and starts running
+# T1 does unmark_start_commit()
+# T3 goes to wait for T1 to start its commit
+# T2 does mark_start_commit()
+# The bug was that at this point, T3 got deadlocked. T2 increments the
+# count_committing_event_groups but does not signal T3, as they are in
+# the same GCO. Then later when T1 increments, it would also not signal
+# T3, because now the count_committing_event_groups is not equal to the
+# wait_count of T3 (it is one larger).
+
+--connection server_2
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=3;
+SET GLOBAL debug_dbug="+d,rpl_parallel_simulate_temp_err_xid";
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+--connection server_1
+SET @old_format= @@SESSION.binlog_format;
+SET binlog_format= STATEMENT;
+# This debug_sync will linger on and be used to control T3 later.
+INSERT INTO t1 VALUES (foo(60,
+ "rpl_parallel_start_waiting_for_prior SIGNAL t3_ready",
+ "rpl_parallel_end_of_group SIGNAL prep_ready WAIT_FOR prep_cont"));
+--save_master_pos
+--connection server_2
+# Wait for the debug_sync point for T3 to be set. But let the preparation
+# transaction remain hanging, so that T1 and T2 will be scheduled for the
+# remaining two worker threads.
+SET DEBUG_SYNC= "now WAIT_FOR prep_ready";
+
+--connection server_1
+INSERT INTO t2 VALUES (foo(60,
+ "rpl_parallel_simulate_temp_err_xid SIGNAL t1_ready1 WAIT_FOR t1_cont1",
+ "rpl_parallel_retry_after_unmark SIGNAL t1_ready2 WAIT_FOR t1_cont2"));
+--save_master_pos
+
+--connection server_2
+SET DEBUG_SYNC= "now WAIT_FOR t1_ready1";
+# T1 has now done mark_start_commit(). It will later do a rollback and retry.
+
+# Do T2 and T3 in a single group commit.
+# Use a MyISAM table for T2 and T3, so they do not trigger the
+# rpl_parallel_simulate_temp_err_xid DBUG insertion on XID event.
+--connection con_temp3
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued1 WAIT_FOR master_cont1';
+SET binlog_format=statement;
+send INSERT INTO t1 VALUES (foo(61,
+  "rpl_parallel_before_mark_start_commit SIGNAL t2_ready1 WAIT_FOR t2_cont1",
+ "rpl_parallel_after_mark_start_commit SIGNAL t2_ready2"));
+
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued1';
+
+--connection con_temp4
+SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued2';
+send INSERT INTO t6 VALUES (62);
+
+--connection server_1
+SET debug_sync='now WAIT_FOR master_queued2';
+SET debug_sync='now SIGNAL master_cont1';
+
+--connection con_temp3
+REAP;
+--connection con_temp4
+REAP;
+
+--connection server_1
+SET debug_sync='RESET';
+SET BINLOG_FORMAT= @old_format;
+SELECT * FROM t2 WHERE a>=60 ORDER BY a;
+SELECT * FROM t1 WHERE a>=60 ORDER BY a;
+SELECT * FROM t6 WHERE a>=60 ORDER BY a;
+
+--connection server_2
+SET DEBUG_SYNC= "now WAIT_FOR t2_ready1";
+# T2 has now started running, but has not yet done mark_start_commit()
+SET DEBUG_SYNC= "now SIGNAL t1_cont1";
+SET DEBUG_SYNC= "now WAIT_FOR t1_ready2";
+# T1 has now done unmark_start_commit() in preparation for its retry.
+
+--connection server_2
+# Let the preparation transaction complete, so that the same worker thread
+# can continue with the transaction T3.
+SET DEBUG_SYNC= "now SIGNAL prep_cont";
+SET DEBUG_SYNC= "now WAIT_FOR t3_ready";
+# T3 has now gone to wait for T2 to start committing
+SET DEBUG_SYNC= "now SIGNAL t2_cont1";
+SET DEBUG_SYNC= "now WAIT_FOR t2_ready2";
+# T2 has now done mark_start_commit().
+# Let things run, and check that T3 does not get deadlocked.
+SET DEBUG_SYNC= "now SIGNAL t1_cont2";
+--sync_with_master
+
+--connection server_1
+--save_master_pos
+--connection server_2
+--sync_with_master
+SELECT * FROM t2 WHERE a>=60 ORDER BY a;
+SELECT * FROM t1 WHERE a>=60 ORDER BY a;
+SELECT * FROM t6 WHERE a>=60 ORDER BY a;
+SET DEBUG_SYNC="reset";
+
+# Re-spawn the worker threads to remove any DBUG injections or DEBUG_SYNC.
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL debug_dbug=@old_dbug;
+SET GLOBAL slave_parallel_threads=0;
+SET GLOBAL slave_parallel_threads=10;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+--echo *** MDEV-7335: Potential parallel slave deadlock with specific binlog corruption ***
+
+--connection server_2
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL slave_parallel_threads=1;
+SET @old_dbug= @@GLOBAL.debug_dbug;
+SET GLOBAL debug_dbug="+d,slave_discard_xid_for_gtid_0_x_1000";
+
+--connection server_1
+INSERT INTO t2 VALUES (101);
+INSERT INTO t2 VALUES (102);
+INSERT INTO t2 VALUES (103);
+INSERT INTO t2 VALUES (104);
+INSERT INTO t2 VALUES (105);
+# Inject a partial event group (missing XID at the end). The bug was that such
+# partial group was not handled appropriately, leading to server deadlock.
+SET gtid_seq_no=1000;
+INSERT INTO t2 VALUES (106);
+INSERT INTO t2 VALUES (107);
+INSERT INTO t2 VALUES (108);
+INSERT INTO t2 VALUES (109);
+INSERT INTO t2 VALUES (110);
+INSERT INTO t2 VALUES (111);
+INSERT INTO t2 VALUES (112);
+INSERT INTO t2 VALUES (113);
+INSERT INTO t2 VALUES (114);
+INSERT INTO t2 VALUES (115);
+INSERT INTO t2 VALUES (116);
+INSERT INTO t2 VALUES (117);
+INSERT INTO t2 VALUES (118);
+INSERT INTO t2 VALUES (119);
+INSERT INTO t2 VALUES (120);
+INSERT INTO t2 VALUES (121);
+INSERT INTO t2 VALUES (122);
+INSERT INTO t2 VALUES (123);
+INSERT INTO t2 VALUES (124);
+INSERT INTO t2 VALUES (125);
+INSERT INTO t2 VALUES (126);
+INSERT INTO t2 VALUES (127);
+INSERT INTO t2 VALUES (128);
+INSERT INTO t2 VALUES (129);
+INSERT INTO t2 VALUES (130);
+--disable_connect_log
+--source include/save_master_gtid.inc
+--enable_connect_log
+
+--connection server_2
+--disable_connect_log
+--source include/start_slave.inc
+--source include/sync_with_master_gtid.inc
+--enable_connect_log
+# The partial event group (a=106) should be rolled back and thus missing.
+SELECT * FROM t2 WHERE a >= 100 ORDER BY a;
+
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL debug_dbug=@old_dbug;
+SET GLOBAL slave_parallel_threads=10;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+--echo *** MDEV-6676 - test syntax of @@slave_parallel_mode ***
+--connection server_2
+
+--let $status_items= Parallel_Mode
+--disable_connect_log
+--source include/show_slave_status.inc
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL slave_parallel_mode='aggressive';
+--let $status_items= Parallel_Mode
+--disable_connect_log
+--source include/show_slave_status.inc
+SET GLOBAL slave_parallel_mode='conservative';
+--let $status_items= Parallel_Mode
+--source include/show_slave_status.inc
+--enable_connect_log
+
+
+--echo *** MDEV-6676 - test that empty parallel_mode does not replicate in parallel ***
+--connection server_1
+INSERT INTO t2 VALUES (1040);
+--disable_connect_log
+--source include/save_master_gtid.inc
+--enable_connect_log
+
+--connection server_2
+SET GLOBAL slave_parallel_mode='none';
+# Test that we do not use parallel apply, by injecting an unconditional
+# crash in the parallel apply code.
+SET @old_dbug= @@GLOBAL.debug_dbug;
+SET GLOBAL debug_dbug="+d,slave_crash_if_parallel_apply";
+--disable_connect_log
+--source include/start_slave.inc
+--source include/sync_with_master_gtid.inc
+SELECT * FROM t2 WHERE a >= 1040 ORDER BY a;
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL debug_dbug=@old_dbug;
+
+
+--echo *** MDEV-6676 - test disabling domain-based parallel replication ***
+--connection server_1
+# Let's do a bunch of transactions that will conflict if run out-of-order in
+# domain-based parallel replication mode.
+SET gtid_domain_id = 1;
+INSERT INTO t2 VALUES (1041);
+INSERT INTO t2 VALUES (1042);
+INSERT INTO t2 VALUES (1043);
+INSERT INTO t2 VALUES (1044);
+INSERT INTO t2 VALUES (1045);
+INSERT INTO t2 VALUES (1046);
+DELETE FROM t2 WHERE a >= 1041;
+SET gtid_domain_id = 2;
+INSERT INTO t2 VALUES (1041);
+INSERT INTO t2 VALUES (1042);
+INSERT INTO t2 VALUES (1043);
+INSERT INTO t2 VALUES (1044);
+INSERT INTO t2 VALUES (1045);
+INSERT INTO t2 VALUES (1046);
+SET gtid_domain_id = 0;
+--disable_connect_log
+--source include/save_master_gtid.inc
+--enable_connect_log
+--connection server_2
+SET GLOBAL slave_parallel_mode=minimal;
+--disable_connect_log
+--source include/start_slave.inc
+--source include/sync_with_master_gtid.inc
+SELECT * FROM t2 WHERE a >= 1040 ORDER BY a;
+--source include/stop_slave.inc
+SET GLOBAL slave_parallel_mode='conservative';
+--source include/start_slave.inc
+--enable_connect_log
+
+
+--echo *** MDEV-7847: "Slave worker thread retried transaction 10 time(s) in vain, giving up", followed by replication hanging ***
+--echo *** MDEV-7882: Excessive transaction retry in parallel replication ***
+
+--connection server_1
+CREATE TABLE t7 (a int PRIMARY KEY, b INT) ENGINE=InnoDB;
+CREATE TABLE t8 (a int PRIMARY KEY, b INT) ENGINE=InnoDB;
+--save_master_pos
+
+--connection server_2
+--sync_with_master
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL slave_parallel_threads=40;
+SELECT @old_retries:=@@GLOBAL.slave_transaction_retries;
+SET GLOBAL slave_transaction_retries= 5;
+
+
+# Using dbug error injection, we artificially create event groups with a lot of
+# conflicting transactions in each event group. The bugs were originally seen
+# "in the wild" with transactions that did not conflict on the master, and only
+# conflicted very rarely on the slave (maybe some edge case with InnoDB btree
+# page splits or something like that). The event groups here loosely reflect
+# the structure of the original failure's group commits.
+
+
+--connection server_1
+INSERT INTO t7 VALUES (1,1), (2,2), (3,3), (4,4), (5,5);
+SET @old_dbug= @@SESSION.debug_dbug;
+SET @commit_id= 42;
+SET SESSION debug_dbug="+d,binlog_force_commit_id";
+INSERT INTO t8 VALUES (1,1);
+INSERT INTO t8 VALUES (2,2);
+INSERT INTO t8 VALUES (3,3);
+INSERT INTO t8 VALUES (4,4);
+INSERT INTO t8 VALUES (5,5);
+INSERT INTO t8 VALUES (6,6);
+INSERT INTO t8 VALUES (7,7);
+INSERT INTO t8 VALUES (8,8);
+
+UPDATE t7 SET b=9 WHERE a=3;
+UPDATE t7 SET b=10 WHERE a=3;
+UPDATE t7 SET b=11 WHERE a=3;
+
+INSERT INTO t8 VALUES (12,12);
+INSERT INTO t8 VALUES (13,13);
+
+UPDATE t7 SET b=14 WHERE a=3;
+UPDATE t7 SET b=15 WHERE a=3;
+
+INSERT INTO t8 VALUES (16,16);
+
+UPDATE t7 SET b=17 WHERE a=3;
+
+INSERT INTO t8 VALUES (18,18);
+INSERT INTO t8 VALUES (19,19);
+
+UPDATE t7 SET b=20 WHERE a=3;
+
+INSERT INTO t8 VALUES (21,21);
+
+UPDATE t7 SET b=22 WHERE a=3;
+
+INSERT INTO t8 VALUES (23,24);
+INSERT INTO t8 VALUES (24,24);
+
+UPDATE t7 SET b=25 WHERE a=3;
+
+INSERT INTO t8 VALUES (26,26);
+
+UPDATE t7 SET b=27 WHERE a=3;
+
+BEGIN;
+INSERT INTO t8 VALUES (28,28);
+INSERT INTO t8 VALUES (29,28), (30,28);
+INSERT INTO t8 VALUES (31,28);
+INSERT INTO t8 VALUES (32,28);
+INSERT INTO t8 VALUES (33,28);
+INSERT INTO t8 VALUES (34,28);
+INSERT INTO t8 VALUES (35,28);
+INSERT INTO t8 VALUES (36,28);
+INSERT INTO t8 VALUES (37,28);
+INSERT INTO t8 VALUES (38,28);
+INSERT INTO t8 VALUES (39,28);
+INSERT INTO t8 VALUES (40,28);
+INSERT INTO t8 VALUES (41,28);
+INSERT INTO t8 VALUES (42,28);
+COMMIT;
+
+
+SET @commit_id=43;
+INSERT INTO t8 VALUES (43,43);
+INSERT INTO t8 VALUES (44,44);
+
+UPDATE t7 SET b=45 WHERE a=3;
+
+INSERT INTO t8 VALUES (46,46);
+INSERT INTO t8 VALUES (47,47);
+
+UPDATE t7 SET b=48 WHERE a=3;
+
+INSERT INTO t8 VALUES (49,49);
+INSERT INTO t8 VALUES (50,50);
+
+
+SET @commit_id=44;
+INSERT INTO t8 VALUES (51,51);
+INSERT INTO t8 VALUES (52,52);
+
+UPDATE t7 SET b=53 WHERE a=3;
+
+INSERT INTO t8 VALUES (54,54);
+INSERT INTO t8 VALUES (55,55);
+
+UPDATE t7 SET b=56 WHERE a=3;
+
+INSERT INTO t8 VALUES (57,57);
+
+UPDATE t7 SET b=58 WHERE a=3;
+
+INSERT INTO t8 VALUES (58,58);
+INSERT INTO t8 VALUES (59,59);
+INSERT INTO t8 VALUES (60,60);
+INSERT INTO t8 VALUES (61,61);
+
+UPDATE t7 SET b=62 WHERE a=3;
+
+INSERT INTO t8 VALUES (63,63);
+INSERT INTO t8 VALUES (64,64);
+INSERT INTO t8 VALUES (65,65);
+INSERT INTO t8 VALUES (66,66);
+
+UPDATE t7 SET b=67 WHERE a=3;
+
+INSERT INTO t8 VALUES (68,68);
+
+UPDATE t7 SET b=69 WHERE a=3;
+UPDATE t7 SET b=70 WHERE a=3;
+UPDATE t7 SET b=71 WHERE a=3;
+
+INSERT INTO t8 VALUES (72,72);
+
+UPDATE t7 SET b=73 WHERE a=3;
+UPDATE t7 SET b=74 WHERE a=3;
+UPDATE t7 SET b=75 WHERE a=3;
+UPDATE t7 SET b=76 WHERE a=3;
+
+INSERT INTO t8 VALUES (77,77);
+
+UPDATE t7 SET b=78 WHERE a=3;
+
+INSERT INTO t8 VALUES (79,79);
+
+UPDATE t7 SET b=80 WHERE a=3;
+
+INSERT INTO t8 VALUES (81,81);
+
+UPDATE t7 SET b=82 WHERE a=3;
+
+INSERT INTO t8 VALUES (83,83);
+
+UPDATE t7 SET b=84 WHERE a=3;
+
+
+SET @commit_id=45;
+INSERT INTO t8 VALUES (85,85);
+UPDATE t7 SET b=86 WHERE a=3;
+INSERT INTO t8 VALUES (87,87);
+
+
+SET @commit_id=46;
+INSERT INTO t8 VALUES (88,88);
+INSERT INTO t8 VALUES (89,89);
+INSERT INTO t8 VALUES (90,90);
+
+SET SESSION debug_dbug=@old_dbug;
+
+INSERT INTO t8 VALUES (91,91);
+INSERT INTO t8 VALUES (92,92);
+INSERT INTO t8 VALUES (93,93);
+INSERT INTO t8 VALUES (94,94);
+INSERT INTO t8 VALUES (95,95);
+INSERT INTO t8 VALUES (96,96);
+INSERT INTO t8 VALUES (97,97);
+INSERT INTO t8 VALUES (98,98);
+INSERT INTO t8 VALUES (99,99);
+
+
+SELECT * FROM t7 ORDER BY a;
+SELECT * FROM t8 ORDER BY a;
+--disable_connect_log
+--source include/save_master_gtid.inc
+--enable_connect_log
+
+
+--connection server_2
+--disable_connect_log
+--source include/start_slave.inc
+--source include/sync_with_master_gtid.inc
+SELECT * FROM t7 ORDER BY a;
+SELECT * FROM t8 ORDER BY a;
+
+--source include/stop_slave.inc
+SET GLOBAL slave_transaction_retries= @old_retries;
+SET GLOBAL slave_parallel_threads=10;
+--source include/start_slave.inc
+--enable_connect_log
+
+
+--echo *** MDEV-7888: ANALYZE TABLE does wakeup_subsequent_commits(), causing wrong binlog order and parallel replication hang ***
+
+--connection server_2
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET @old_dbug= @@GLOBAL.debug_dbug;
+SET GLOBAL debug_dbug= '+d,inject_analyze_table_sleep';
+
+--connection server_1
+# Inject two group commits. The bug was that ANALYZE TABLE would call
+# wakeup_subsequent_commits() too early, allowing the following transaction
+# in the same group to run ahead and binlog and free the GCO. Then we get
+# wrong binlog order and later access freed GCO, which causes lost wakeup
+# of following GCO and thus replication hang.
+# We injected a small sleep in ANALYZE to make the race easier to hit (this
+# can only cause false negatives in versions with the bug, not false positives,
+# so sleep is ok here. And it's in general not possible to trigger reliably
+# the race with debug_sync, since the bugfix makes the race impossible).
+
+SET @old_dbug= @@SESSION.debug_dbug;
+SET SESSION debug_dbug="+d,binlog_force_commit_id";
+
+# Group commit with cid=10000, two event groups.
+SET @commit_id= 10000;
+ANALYZE TABLE t2;
+INSERT INTO t3 VALUES (120, 0);
+
+# Group commit with cid=10001, one event group.
+SET @commit_id= 10001;
+INSERT INTO t3 VALUES (121, 0);
+
+SET SESSION debug_dbug=@old_dbug;
+
+SELECT * FROM t3 WHERE a >= 120 ORDER BY a;
+--disable_connect_log
+--source include/save_master_gtid.inc
+--enable_connect_log
+
+--connection server_2
+--disable_connect_log
+--source include/start_slave.inc
+--source include/sync_with_master_gtid.inc
+
+SELECT * FROM t3 WHERE a >= 120 ORDER BY a;
+
+--source include/stop_slave.inc
+SET GLOBAL debug_dbug= @old_dbug;
+--source include/start_slave.inc
+--enable_connect_log
+
+
+--echo *** MDEV-7929: record_gtid() for non-transactional event group calls wakeup_subsequent_commits() too early, causing slave hang. ***
+
+--connection server_2
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET @old_dbug= @@GLOBAL.debug_dbug;
+SET GLOBAL debug_dbug= '+d,inject_record_gtid_serverid_100_sleep';
+
+--connection server_1
+# Inject two group commits. The bug was that record_gtid for a
+# non-transactional event group would commit its own transaction, which would
+# cause ha_commit_trans() to call wakeup_subsequent_commits() too early. This
+# in turn led to access to a freed group_commit_orderer object, losing a wakeup
+# and causing slave threads to hang.
+# We inject a small sleep in the corresponding record_gtid() to make the race
+# easier to hit.
+
+SET @old_dbug= @@SESSION.debug_dbug;
+SET SESSION debug_dbug="+d,binlog_force_commit_id";
+
+# Group commit with cid=10010, two event groups.
+SET @old_server_id= @@SESSION.server_id;
+SET SESSION server_id= 100;
+SET @commit_id= 10010;
+ALTER TABLE t1 COMMENT "Hulubulu!";
+SET SESSION server_id= @old_server_id;
+INSERT INTO t3 VALUES (130, 0);
+
+# Group commit with cid=10011, one event group.
+SET @commit_id= 10011;
+INSERT INTO t3 VALUES (131, 0);
+
+SET SESSION debug_dbug=@old_dbug;
+
+SELECT * FROM t3 WHERE a >= 130 ORDER BY a;
+--disable_connect_log
+--source include/save_master_gtid.inc
+--enable_connect_log
+
+--connection server_2
+--disable_connect_log
+--source include/start_slave.inc
+--source include/sync_with_master_gtid.inc
+
+SELECT * FROM t3 WHERE a >= 130 ORDER BY a;
+
+--source include/stop_slave.inc
+SET GLOBAL debug_dbug= @old_dbug;
+--source include/start_slave.inc
+--enable_connect_log
+
+
+--echo *** MDEV-8031: Parallel replication stops on "connection killed" error (probably incorrectly handled deadlock kill) ***
+
+--connection server_1
+INSERT INTO t3 VALUES (201,0), (202,0);
+--disable_connect_log
+--source include/save_master_gtid.inc
+--enable_connect_log
+
+--connection server_2
+--disable_connect_log
+--source include/sync_with_master_gtid.inc
+--source include/stop_slave.inc
+--enable_connect_log
+SET @old_dbug= @@GLOBAL.debug_dbug;
+SET GLOBAL debug_dbug= '+d,inject_mdev8031';
+
+--connection server_1
+# We artificially create a situation that hopefully resembles the original
+# bug which was only seen "in the wild", and only once.
+# Setup a fake group commit with lots of conflicts that will lead to deadlock
+# kill. The slave DBUG injection causes the slave to be deadlock killed at
+# a particular point during the retry, and then later do a small sleep at
+# another critical point where the prior transaction then has a chance to
+# complete. Finally an extra KILL check catches an unhandled, lingering
+# deadlock kill. So rather artificial, but at least it exercises the
+# relevant code paths.
+SET @old_dbug= @@SESSION.debug_dbug;
+SET SESSION debug_dbug="+d,binlog_force_commit_id";
+
+SET @commit_id= 10200;
+INSERT INTO t3 VALUES (203, 1);
+INSERT INTO t3 VALUES (204, 1);
+INSERT INTO t3 VALUES (205, 1);
+UPDATE t3 SET b=b+1 WHERE a=201;
+UPDATE t3 SET b=b+1 WHERE a=201;
+UPDATE t3 SET b=b+1 WHERE a=201;
+UPDATE t3 SET b=b+1 WHERE a=202;
+UPDATE t3 SET b=b+1 WHERE a=202;
+UPDATE t3 SET b=b+1 WHERE a=202;
+UPDATE t3 SET b=b+1 WHERE a=202;
+UPDATE t3 SET b=b+1 WHERE a=203;
+UPDATE t3 SET b=b+1 WHERE a=203;
+UPDATE t3 SET b=b+1 WHERE a=204;
+UPDATE t3 SET b=b+1 WHERE a=204;
+UPDATE t3 SET b=b+1 WHERE a=204;
+UPDATE t3 SET b=b+1 WHERE a=203;
+UPDATE t3 SET b=b+1 WHERE a=205;
+UPDATE t3 SET b=b+1 WHERE a=205;
+SET SESSION debug_dbug=@old_dbug;
+
+SELECT * FROM t3 WHERE a>=200 ORDER BY a;
+--disable_connect_log
+--source include/save_master_gtid.inc
+--enable_connect_log
+
+--connection server_2
+--disable_connect_log
+--source include/start_slave.inc
+--source include/sync_with_master_gtid.inc
+
+SELECT * FROM t3 WHERE a>=200 ORDER BY a;
+--source include/stop_slave.inc
+SET GLOBAL debug_dbug= @old_dbug;
+--source include/start_slave.inc
+--enable_connect_log
+
+
+--echo *** Check getting deadlock killed inside open_binlog() during retry. ***
+
+--connection server_2
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET @old_dbug= @@GLOBAL.debug_dbug;
+SET GLOBAL debug_dbug= '+d,inject_retry_event_group_open_binlog_kill';
+SET @old_max= @@GLOBAL.max_relay_log_size;
+SET GLOBAL max_relay_log_size= 4096;
+
+--connection server_1
+SET @old_dbug= @@SESSION.debug_dbug;
+SET SESSION debug_dbug="+d,binlog_force_commit_id";
+
+--let $large= `SELECT REPEAT("*", 8192)`
+SET @commit_id= 10210;
+--echo Omit long queries that cause relaylog rotations and transaction retries...
+--disable_query_log
+eval UPDATE t3 SET b=b+1 WHERE a=201 /* $large */;
+eval UPDATE t3 SET b=b+1 WHERE a=201 /* $large */;
+eval UPDATE t3 SET b=b+1 WHERE a=201 /* $large */;
+eval UPDATE t3 SET b=b+1 WHERE a=202 /* $large */;
+eval UPDATE t3 SET b=b+1 WHERE a=202 /* $large */;
+eval UPDATE t3 SET b=b+1 WHERE a=202 /* $large */;
+eval UPDATE t3 SET b=b+1 WHERE a=202 /* $large */;
+eval UPDATE t3 SET b=b+1 WHERE a=203 /* $large */;
+eval UPDATE t3 SET b=b+1 WHERE a=203 /* $large */;
+eval UPDATE t3 SET b=b+1 WHERE a=204 /* $large */;
+eval UPDATE t3 SET b=b+1 WHERE a=204 /* $large */;
+eval UPDATE t3 SET b=b+1 WHERE a=204 /* $large */;
+eval UPDATE t3 SET b=b+1 WHERE a=203 /* $large */;
+eval UPDATE t3 SET b=b+1 WHERE a=205 /* $large */;
+eval UPDATE t3 SET b=b+1 WHERE a=205 /* $large */;
+--enable_query_log
+SET SESSION debug_dbug=@old_dbug;
+
+SELECT * FROM t3 WHERE a>=200 ORDER BY a;
+--disable_connect_log
+--source include/save_master_gtid.inc
+--enable_connect_log
+
+--connection server_2
+--disable_connect_log
+--source include/start_slave.inc
+--source include/sync_with_master_gtid.inc
+
+SELECT * FROM t3 WHERE a>=200 ORDER BY a;
+--source include/stop_slave.inc
+SET GLOBAL debug_dbug= @old_dbug;
+SET GLOBAL max_relay_log_size= @old_max;
+--source include/start_slave.inc
+--enable_connect_log
+
+
+--echo *** MDEV-8302: Duplicate key with parallel replication ***
+
+--connection server_2
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+/* Inject a small sleep which makes the race easier to hit. */
+SET @old_dbug=@@GLOBAL.debug_dbug;
+SET GLOBAL debug_dbug="+d,inject_mdev8302";
+
+
+--connection server_1
+INSERT INTO t7 VALUES (100,1), (101,2), (102,3), (103,4), (104,5);
+
+# Artificially create a bunch of group commits with conflicting transactions.
+# The bug happened when T1 and T2 were in one group commit, and T3 was in the
+# following group commit. T2 is a DELETE of a row with the same primary key as a
+# row that T3 inserts. T1 and T2 can conflict, causing T2 to be deadlock
+# killed after starting to commit. The bug was that T2 could roll back before
+# doing unmark_start_commit(); this could allow T3 to run before the retry
+# of T2, causing a duplicate key violation.
+
+SET @old_dbug= @@SESSION.debug_dbug;
+SET @commit_id= 20000;
+SET SESSION debug_dbug="+d,binlog_force_commit_id";
+
+--let $n = 100
+--disable_query_log
+while ($n)
+{
+ eval UPDATE t7 SET b=b+1 WHERE a=100+($n MOD 5);
+ eval DELETE FROM t7 WHERE a=100+($n MOD 5);
+
+ SET @commit_id = @commit_id + 1;
+ eval INSERT INTO t7 VALUES (100+($n MOD 5), $n);
+ SET @commit_id = @commit_id + 1;
+ dec $n;
+}
+--enable_query_log
+SET SESSION debug_dbug=@old_dbug;
+
+
+SELECT * FROM t7 ORDER BY a;
+--disable_connect_log
+--source include/save_master_gtid.inc
+--enable_connect_log
+
+
+--connection server_2
+--disable_connect_log
+--source include/start_slave.inc
+--source include/sync_with_master_gtid.inc
+SELECT * FROM t7 ORDER BY a;
+
+--source include/stop_slave.inc
+SET GLOBAL debug_dbug=@old_dbug;
+--source include/start_slave.inc
+--enable_connect_log
+
+
+
+--echo *** MDEV-8725: Assertion on ROLLBACK statement in the binary log ***
+--connection server_1
+# Inject an event group terminated by ROLLBACK, by mixing MyISAM and InnoDB
+# in a transaction. The bug was an assertion on the ROLLBACK due to
+# mark_start_commit() being already called.
+--disable_warnings
+BEGIN;
+INSERT INTO t2 VALUES (2000);
+INSERT INTO t1 VALUES (2000);
+INSERT INTO t2 VALUES (2001);
+ROLLBACK;
+--enable_warnings
+SELECT * FROM t1 WHERE a>=2000 ORDER BY a;
+SELECT * FROM t2 WHERE a>=2000 ORDER BY a;
+--disable_connect_log
+--source include/save_master_gtid.inc
+--enable_connect_log
+
+--connection server_2
+--disable_connect_log
+--source include/sync_with_master_gtid.inc
+--enable_connect_log
+SELECT * FROM t1 WHERE a>=2000 ORDER BY a;
+SELECT * FROM t2 WHERE a>=2000 ORDER BY a;
+
+
+# Clean up.
+--connection server_2
+--disable_connect_log
+--source include/stop_slave.inc
+SET GLOBAL slave_parallel_threads=@old_parallel_threads;
+--source include/start_slave.inc
+SET DEBUG_SYNC= 'RESET';
+--enable_connect_log
+
+--connection server_1
+DROP function foo;
+DROP TABLE t1,t2,t3,t4,t5,t6,t7,t8;
+SET DEBUG_SYNC= 'RESET';
+
+--disable_connect_log
+--source include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_parallel_show_binlog_events_purge_logs.cnf b/mysql-test/suite/binlog_encryption/rpl_parallel_show_binlog_events_purge_logs.cnf
new file mode 100644
index 0000000..b8e22e9
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_parallel_show_binlog_events_purge_logs.cnf
@@ -0,0 +1,6 @@
+!include my.cnf
+
+[mysqld.2]
+plugin-load-add= @ENV.FILE_KEY_MANAGEMENT_SO
+loose-file-key-management-filename=@ENV.MYSQLTEST_VARDIR/std_data/keys.txt
+encrypt-binlog
diff --git a/mysql-test/suite/binlog_encryption/rpl_parallel_show_binlog_events_purge_logs.result b/mysql-test/suite/binlog_encryption/rpl_parallel_show_binlog_events_purge_logs.result
new file mode 100644
index 0000000..204db2b
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_parallel_show_binlog_events_purge_logs.result
@@ -0,0 +1,13 @@
+include/master-slave.inc
+[connection master]
+connection slave;
+SET DEBUG_SYNC= 'after_show_binlog_events SIGNAL on_show_binlog_events WAIT_FOR end';
+SHOW BINLOG EVENTS;
+connection slave1;
+SET DEBUG_SYNC= 'now WAIT_FOR on_show_binlog_events';
+FLUSH LOGS;
+SET DEBUG_SYNC= 'now SIGNAL end';
+connection slave;
+SET DEBUG_SYNC= 'RESET';
+connection master;
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_parallel_show_binlog_events_purge_logs.test b/mysql-test/suite/binlog_encryption/rpl_parallel_show_binlog_events_purge_logs.test
new file mode 100644
index 0000000..fa21200
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_parallel_show_binlog_events_purge_logs.test
@@ -0,0 +1,39 @@
+#
+# The test was taken from the rpl suite as is. It is run with encrypted
+# binlogs on master and slave
+#
+
+# BUG#13979418: SHOW BINLOG EVENTS MAY CRASH THE SERVER
+#
+# The function mysql_show_binlog_events has a local stack variable
+# 'LOG_INFO linfo;', which is assigned to thd->current_linfo, however
+# this variable goes out of scope and is destroyed before clean
+# thd->current_linfo.
+#
+# This test case runs SHOW BINLOG EVENTS and FLUSH LOGS to make sure
+# that with the fix local variable linfo is valid along all
+# mysql_show_binlog_events function scope.
+#
+--source include/have_debug_sync.inc
+--source include/master-slave.inc
+
+--enable_connect_log
+
+--connection slave
+SET DEBUG_SYNC= 'after_show_binlog_events SIGNAL on_show_binlog_events WAIT_FOR end';
+--send SHOW BINLOG EVENTS
+
+--connection slave1
+SET DEBUG_SYNC= 'now WAIT_FOR on_show_binlog_events';
+FLUSH LOGS;
+SET DEBUG_SYNC= 'now SIGNAL end';
+
+--connection slave
+--disable_result_log
+--reap
+--enable_result_log
+SET DEBUG_SYNC= 'RESET';
+
+--connection master
+--disable_connect_log
+--source include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_relayrotate-slave.opt b/mysql-test/suite/binlog_encryption/rpl_relayrotate-slave.opt
new file mode 100644
index 0000000..1665aec
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_relayrotate-slave.opt
@@ -0,0 +1,5 @@
+--max_relay_log_size=16384
+--log-warnings
+--plugin-load-add=$FILE_KEY_MANAGEMENT_SO
+--loose-file-key-management-filename=$MYSQLTEST_VARDIR/std_data/keys.txt
+--encrypt-binlog
diff --git a/mysql-test/suite/binlog_encryption/rpl_relayrotate.result b/mysql-test/suite/binlog_encryption/rpl_relayrotate.result
new file mode 100644
index 0000000..142626e
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_relayrotate.result
@@ -0,0 +1,20 @@
+include/master-slave.inc
+[connection master]
+connection master;
+connection slave;
+connection slave;
+stop slave;
+connection master;
+create table t1 (a int) engine=innodb;
+connection slave;
+reset slave;
+start slave;
+stop slave;
+start slave;
+select max(a) from t1;
+max(a)
+8000
+connection master;
+drop table t1;
+connection slave;
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_relayrotate.test b/mysql-test/suite/binlog_encryption/rpl_relayrotate.test
new file mode 100644
index 0000000..ee5266e
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_relayrotate.test
@@ -0,0 +1,21 @@
+#
+# The test was taken from the rpl suite as is, but we will run it
+# with the encrypted slave (not just encrypted master as most of
+# adopted tests)
+#
+
+#######################################################
+# Wrapper for rpl_relayrotate.test to allow multi #
+# Engines to reuse test code. By JBM 2006-02-15 #
+#######################################################
+
+# Slow test, don't run during staging part
+-- source include/not_staging.inc
+-- source include/master-slave.inc
+
+--enable_connect_log
+
+let $engine_type=innodb;
+-- source extra/rpl_tests/rpl_relayrotate.test
+--disable_connect_log
+--source include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_semi_sync.result b/mysql-test/suite/binlog_encryption/rpl_semi_sync.result
new file mode 100644
index 0000000..6d57468
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_semi_sync.result
@@ -0,0 +1,488 @@
+include/master-slave.inc
+[connection master]
+connection master;
+call mtr.add_suppression("Timeout waiting for reply of binlog");
+call mtr.add_suppression("Read semi-sync reply");
+call mtr.add_suppression("Unsafe statement written to the binary log using statement format since BINLOG_FORMAT = STATEMENT.");
+connection slave;
+call mtr.add_suppression("Master server does not support semi-sync");
+call mtr.add_suppression("Semi-sync slave .* reply");
+call mtr.add_suppression("Slave SQL.*Request to stop slave SQL Thread received while applying a group that has non-transactional changes; waiting for completion of the group");
+connection master;
+#
+# Uninstall semi-sync plugins on master and slave
+#
+connection slave;
+include/stop_slave.inc
+reset slave;
+set global rpl_semi_sync_master_enabled= 0;
+set global rpl_semi_sync_slave_enabled= 0;
+connection master;
+reset master;
+set global rpl_semi_sync_master_enabled= 0;
+set global rpl_semi_sync_slave_enabled= 0;
+#
+# Main test of semi-sync replication start here
+#
+connection master;
+set global rpl_semi_sync_master_timeout= 60000;
+[ default state of semi-sync on master should be OFF ]
+show variables like 'rpl_semi_sync_master_enabled';
+Variable_name Value
+rpl_semi_sync_master_enabled OFF
+[ enable semi-sync on master ]
+set global rpl_semi_sync_master_enabled = 1;
+show variables like 'rpl_semi_sync_master_enabled';
+Variable_name Value
+rpl_semi_sync_master_enabled ON
+[ status of semi-sync on master should be ON even without any semi-sync slaves ]
+show status like 'Rpl_semi_sync_master_clients';
+Variable_name Value
+Rpl_semi_sync_master_clients 0
+show status like 'Rpl_semi_sync_master_status';
+Variable_name Value
+Rpl_semi_sync_master_status ON
+show status like 'Rpl_semi_sync_master_yes_tx';
+Variable_name Value
+Rpl_semi_sync_master_yes_tx 0
+#
+# BUG#45672 Semisync repl: ActiveTranx:insert_tranx_node: transaction node allocation failed
+# BUG#45673 Semisynch reports correct operation even if no slave is connected
+#
+[ status of semi-sync on master should be OFF ]
+show status like 'Rpl_semi_sync_master_clients';
+Variable_name Value
+Rpl_semi_sync_master_clients 0
+show status like 'Rpl_semi_sync_master_status';
+Variable_name Value
+Rpl_semi_sync_master_status OFF
+show status like 'Rpl_semi_sync_master_yes_tx';
+Variable_name Value
+Rpl_semi_sync_master_yes_tx 0
+reset master;
+connection slave;
+[ default state of semi-sync on slave should be OFF ]
+show variables like 'rpl_semi_sync_slave_enabled';
+Variable_name Value
+rpl_semi_sync_slave_enabled OFF
+[ enable semi-sync on slave ]
+set global rpl_semi_sync_slave_enabled = 1;
+show variables like 'rpl_semi_sync_slave_enabled';
+Variable_name Value
+rpl_semi_sync_slave_enabled ON
+include/start_slave.inc
+connection master;
+[ initial master state after the semi-sync slave connected ]
+show status like 'Rpl_semi_sync_master_clients';
+Variable_name Value
+Rpl_semi_sync_master_clients 1
+show status like 'Rpl_semi_sync_master_status';
+Variable_name Value
+Rpl_semi_sync_master_status ON
+show status like 'Rpl_semi_sync_master_no_tx';
+Variable_name Value
+Rpl_semi_sync_master_no_tx 0
+show status like 'Rpl_semi_sync_master_yes_tx';
+Variable_name Value
+Rpl_semi_sync_master_yes_tx 0
+create table t1(a int) engine = ENGINE_TYPE;
+[ master state after CREATE TABLE statement ]
+show status like 'Rpl_semi_sync_master_status';
+Variable_name Value
+Rpl_semi_sync_master_status ON
+show status like 'Rpl_semi_sync_master_no_tx';
+Variable_name Value
+Rpl_semi_sync_master_no_tx 0
+show status like 'Rpl_semi_sync_master_yes_tx';
+Variable_name Value
+Rpl_semi_sync_master_yes_tx 1
+select CONNECTIONS_NORMAL_SLAVE - CONNECTIONS_NORMAL_SLAVE as 'Should be 0';
+Should be 0
+0
+[ insert records to table ]
+insert t1 values (10);
+insert t1 values (9);
+insert t1 values (8);
+insert t1 values (7);
+insert t1 values (6);
+insert t1 values (5);
+insert t1 values (4);
+insert t1 values (3);
+insert t1 values (2);
+insert t1 values (1);
+[ master status after inserts ]
+show status like 'Rpl_semi_sync_master_status';
+Variable_name Value
+Rpl_semi_sync_master_status ON
+show status like 'Rpl_semi_sync_master_no_tx';
+Variable_name Value
+Rpl_semi_sync_master_no_tx 0
+show status like 'Rpl_semi_sync_master_yes_tx';
+Variable_name Value
+Rpl_semi_sync_master_yes_tx 11
+connection slave;
+[ slave status after replicated inserts ]
+show status like 'Rpl_semi_sync_slave_status';
+Variable_name Value
+Rpl_semi_sync_slave_status ON
+select count(distinct a) from t1;
+count(distinct a)
+10
+select min(a) from t1;
+min(a)
+1
+select max(a) from t1;
+max(a)
+10
+
+# BUG#50157
+# semi-sync replication crashes when replicating a transaction which
+# include 'CREATE TEMPORARY TABLE `MyISAM_t` SELECT * FROM `Innodb_t` ;
+connection master;
+SET SESSION AUTOCOMMIT= 0;
+CREATE TABLE t2(c1 INT) ENGINE=innodb;
+connection slave;
+connection master;
+BEGIN;
+
+# Even though it is in a transaction, this statement is binlogged into binlog
+# file immediately.
+CREATE TEMPORARY TABLE t3 SELECT c1 FROM t2 where 1=1;
+
+# These statements will not be binlogged until the transaction is committed
+INSERT INTO t2 VALUES(11);
+INSERT INTO t2 VALUES(22);
+COMMIT;
+DROP TABLE t2, t3;
+SET SESSION AUTOCOMMIT= 1;
+connection slave;
+#
+# Test semi-sync master will switch OFF after one transaction
+# timeout waiting for slave reply.
+#
+connection slave;
+include/stop_slave.inc
+connection master;
+set global rpl_semi_sync_master_timeout= 5000;
+[ master status should be ON ]
+show status like 'Rpl_semi_sync_master_status';
+Variable_name Value
+Rpl_semi_sync_master_status ON
+show status like 'Rpl_semi_sync_master_no_tx';
+Variable_name Value
+Rpl_semi_sync_master_no_tx 0
+show status like 'Rpl_semi_sync_master_yes_tx';
+Variable_name Value
+Rpl_semi_sync_master_yes_tx 14
+show status like 'Rpl_semi_sync_master_clients';
+Variable_name Value
+Rpl_semi_sync_master_clients 1
+[ semi-sync replication of these transactions will fail ]
+insert into t1 values (500);
+[ master status should be OFF ]
+show status like 'Rpl_semi_sync_master_status';
+Variable_name Value
+Rpl_semi_sync_master_status OFF
+show status like 'Rpl_semi_sync_master_no_tx';
+Variable_name Value
+Rpl_semi_sync_master_no_tx 1
+show status like 'Rpl_semi_sync_master_yes_tx';
+Variable_name Value
+Rpl_semi_sync_master_yes_tx 14
+delete from t1 where a=10;
+delete from t1 where a=9;
+delete from t1 where a=8;
+delete from t1 where a=7;
+delete from t1 where a=6;
+delete from t1 where a=5;
+delete from t1 where a=4;
+delete from t1 where a=3;
+delete from t1 where a=2;
+delete from t1 where a=1;
+insert into t1 values (100);
+[ master status should be OFF ]
+show status like 'Rpl_semi_sync_master_status';
+Variable_name Value
+Rpl_semi_sync_master_status OFF
+show status like 'Rpl_semi_sync_master_no_tx';
+Variable_name Value
+Rpl_semi_sync_master_no_tx 12
+show status like 'Rpl_semi_sync_master_yes_tx';
+Variable_name Value
+Rpl_semi_sync_master_yes_tx 14
+#
+# Test semi-sync status on master will be ON again when slave catches up
+#
+connection slave;
+[ slave status should be OFF ]
+show status like 'Rpl_semi_sync_slave_status';
+Variable_name Value
+Rpl_semi_sync_slave_status OFF
+include/start_slave.inc
+[ slave status should be ON ]
+show status like 'Rpl_semi_sync_slave_status';
+Variable_name Value
+Rpl_semi_sync_slave_status ON
+select count(distinct a) from t1;
+count(distinct a)
+2
+select min(a) from t1;
+min(a)
+100
+select max(a) from t1;
+max(a)
+500
+connection master;
+[ master status should be ON again after slave catches up ]
+show status like 'Rpl_semi_sync_master_status';
+Variable_name Value
+Rpl_semi_sync_master_status ON
+show status like 'Rpl_semi_sync_master_no_tx';
+Variable_name Value
+Rpl_semi_sync_master_no_tx 12
+show status like 'Rpl_semi_sync_master_yes_tx';
+Variable_name Value
+Rpl_semi_sync_master_yes_tx 14
+show status like 'Rpl_semi_sync_master_clients';
+Variable_name Value
+Rpl_semi_sync_master_clients 1
+#
+# Test disable/enable master semi-sync on the fly.
+#
+drop table t1;
+connection slave;
+include/stop_slave.inc
+#
+# Flush status
+#
+connection master;
+[ Semi-sync master status variables before FLUSH STATUS ]
+SHOW STATUS LIKE 'Rpl_semi_sync_master_no_tx';
+Variable_name Value
+Rpl_semi_sync_master_no_tx 12
+SHOW STATUS LIKE 'Rpl_semi_sync_master_yes_tx';
+Variable_name Value
+Rpl_semi_sync_master_yes_tx 15
+FLUSH NO_WRITE_TO_BINLOG STATUS;
+[ Semi-sync master status variables after FLUSH STATUS ]
+SHOW STATUS LIKE 'Rpl_semi_sync_master_no_tx';
+Variable_name Value
+Rpl_semi_sync_master_no_tx 0
+SHOW STATUS LIKE 'Rpl_semi_sync_master_yes_tx';
+Variable_name Value
+Rpl_semi_sync_master_yes_tx 0
+connection master;
+show master logs;
+Log_name master-bin.000001
+File_size #
+show variables like 'rpl_semi_sync_master_enabled';
+Variable_name Value
+rpl_semi_sync_master_enabled ON
+[ disable semi-sync on the fly ]
+set global rpl_semi_sync_master_enabled=0;
+show variables like 'rpl_semi_sync_master_enabled';
+Variable_name Value
+rpl_semi_sync_master_enabled OFF
+show status like 'Rpl_semi_sync_master_status';
+Variable_name Value
+Rpl_semi_sync_master_status OFF
+[ enable semi-sync on the fly ]
+set global rpl_semi_sync_master_enabled=1;
+show variables like 'rpl_semi_sync_master_enabled';
+Variable_name Value
+rpl_semi_sync_master_enabled ON
+show status like 'Rpl_semi_sync_master_status';
+Variable_name Value
+Rpl_semi_sync_master_status ON
+#
+# Test RESET MASTER/SLAVE
+#
+connection slave;
+include/start_slave.inc
+connection master;
+create table t1 (a int) engine = ENGINE_TYPE;
+drop table t1;
+connection slave;
+show status like 'Rpl_relay%';
+Variable_name Value
+[ test reset master ]
+connection master;
+reset master;
+show status like 'Rpl_semi_sync_master_status';
+Variable_name Value
+Rpl_semi_sync_master_status ON
+show status like 'Rpl_semi_sync_master_no_tx';
+Variable_name Value
+Rpl_semi_sync_master_no_tx 0
+show status like 'Rpl_semi_sync_master_yes_tx';
+Variable_name Value
+Rpl_semi_sync_master_yes_tx 0
+connection slave;
+include/stop_slave.inc
+reset slave;
+connection master;
+kill query _tid;
+connection slave;
+include/start_slave.inc
+connection master;
+create table t1 (a int) engine = ENGINE_TYPE;
+insert into t1 values (1);
+insert into t1 values (2), (3);
+connection slave;
+select * from t1;
+a
+1
+2
+3
+connection master;
+[ master semi-sync status should be ON ]
+show status like 'Rpl_semi_sync_master_status';
+Variable_name Value
+Rpl_semi_sync_master_status ON
+show status like 'Rpl_semi_sync_master_no_tx';
+Variable_name Value
+Rpl_semi_sync_master_no_tx 0
+show status like 'Rpl_semi_sync_master_yes_tx';
+Variable_name Value
+Rpl_semi_sync_master_yes_tx 3
+#
+# Start semi-sync replication without SUPER privilege
+#
+connection slave;
+include/stop_slave.inc
+reset slave;
+connection master;
+reset master;
+kill query _tid;
+set sql_log_bin=0;
+grant replication slave on *.* to rpl@127.0.0.1 identified by 'rpl_password';
+flush privileges;
+set sql_log_bin=1;
+connection slave;
+grant replication slave on *.* to rpl@127.0.0.1 identified by 'rpl_password';
+flush privileges;
+change master to master_user='rpl',master_password='rpl_password';
+include/start_slave.inc
+show status like 'Rpl_semi_sync_slave_status';
+Variable_name Value
+Rpl_semi_sync_slave_status ON
+connection master;
+[ master semi-sync should be ON ]
+show status like 'Rpl_semi_sync_master_clients';
+Variable_name Value
+Rpl_semi_sync_master_clients 1
+show status like 'Rpl_semi_sync_master_status';
+Variable_name Value
+Rpl_semi_sync_master_status ON
+show status like 'Rpl_semi_sync_master_no_tx';
+Variable_name Value
+Rpl_semi_sync_master_no_tx 0
+show status like 'Rpl_semi_sync_master_yes_tx';
+Variable_name Value
+Rpl_semi_sync_master_yes_tx 0
+insert into t1 values (4);
+insert into t1 values (5);
+[ master semi-sync should be ON ]
+show status like 'Rpl_semi_sync_master_clients';
+Variable_name Value
+Rpl_semi_sync_master_clients 1
+show status like 'Rpl_semi_sync_master_status';
+Variable_name Value
+Rpl_semi_sync_master_status ON
+show status like 'Rpl_semi_sync_master_no_tx';
+Variable_name Value
+Rpl_semi_sync_master_no_tx 0
+show status like 'Rpl_semi_sync_master_yes_tx';
+Variable_name Value
+Rpl_semi_sync_master_yes_tx 2
+#
+# Test semi-sync slave connect to non-semi-sync master
+#
+connection slave;
+include/stop_slave.inc
+SHOW STATUS LIKE 'Rpl_semi_sync_slave_status';
+Variable_name Value
+Rpl_semi_sync_slave_status OFF
+connection master;
+kill query _tid;
+[ Semi-sync status on master should be ON ]
+show status like 'Rpl_semi_sync_master_clients';
+Variable_name Value
+Rpl_semi_sync_master_clients 0
+show status like 'Rpl_semi_sync_master_status';
+Variable_name Value
+Rpl_semi_sync_master_status ON
+set global rpl_semi_sync_master_enabled= 0;
+connection slave;
+SHOW VARIABLES LIKE 'rpl_semi_sync_slave_enabled';
+Variable_name Value
+rpl_semi_sync_slave_enabled ON
+include/start_slave.inc
+connection master;
+insert into t1 values (8);
+[ master semi-sync clients should be 1, status should be OFF ]
+show status like 'Rpl_semi_sync_master_clients';
+Variable_name Value
+Rpl_semi_sync_master_clients 1
+show status like 'Rpl_semi_sync_master_status';
+Variable_name Value
+Rpl_semi_sync_master_status OFF
+connection slave;
+show status like 'Rpl_semi_sync_slave_status';
+Variable_name Value
+Rpl_semi_sync_slave_status ON
+connection slave;
+include/stop_slave.inc
+connection master;
+set global rpl_semi_sync_master_enabled= 0;
+connection slave;
+SHOW VARIABLES LIKE 'rpl_semi_sync_slave_enabled';
+Variable_name Value
+rpl_semi_sync_slave_enabled ON
+include/start_slave.inc
+connection master;
+insert into t1 values (10);
+connection slave;
+#
+# Test non-semi-sync slave connect to semi-sync master
+#
+connection master;
+set global rpl_semi_sync_master_timeout= 5000;
+set global rpl_semi_sync_master_enabled= 1;
+connection slave;
+include/stop_slave.inc
+SHOW STATUS LIKE 'Rpl_semi_sync_slave_status';
+Variable_name Value
+Rpl_semi_sync_slave_status OFF
+[ uninstall semi-sync slave plugin ]
+set global rpl_semi_sync_slave_enabled= 0;
+[ reinstall semi-sync slave plugin and disable semi-sync ]
+SHOW VARIABLES LIKE 'rpl_semi_sync_slave_enabled';
+Variable_name Value
+rpl_semi_sync_slave_enabled OFF
+SHOW STATUS LIKE 'Rpl_semi_sync_slave_status';
+Variable_name Value
+Rpl_semi_sync_slave_status OFF
+include/start_slave.inc
+SHOW STATUS LIKE 'Rpl_semi_sync_slave_status';
+Variable_name Value
+Rpl_semi_sync_slave_status OFF
+#
+# Clean up
+#
+connection slave;
+include/stop_slave.inc
+set global rpl_semi_sync_slave_enabled= 0;
+connection master;
+set global rpl_semi_sync_master_enabled= 0;
+connection slave;
+change master to master_user='root',master_password='';
+include/start_slave.inc
+connection master;
+drop table t1;
+connection slave;
+connection master;
+drop user rpl@127.0.0.1;
+flush privileges;
+set global rpl_semi_sync_master_timeout= default;
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_semi_sync.test b/mysql-test/suite/binlog_encryption/rpl_semi_sync.test
new file mode 100644
index 0000000..275e095
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_semi_sync.test
@@ -0,0 +1,606 @@
+#
+# The test was taken from the rpl suite as is
+#
+
+source include/have_semisync.inc;
+source include/not_embedded.inc;
+source include/master-slave.inc;
+
+--enable_connect_log
+
+let $engine_type= InnoDB;
+
+# Suppress warnings that might be generated during the test
+connection master;
+call mtr.add_suppression("Timeout waiting for reply of binlog");
+call mtr.add_suppression("Read semi-sync reply");
+call mtr.add_suppression("Unsafe statement written to the binary log using statement format since BINLOG_FORMAT = STATEMENT.");
+connection slave;
+call mtr.add_suppression("Master server does not support semi-sync");
+call mtr.add_suppression("Semi-sync slave .* reply");
+call mtr.add_suppression("Slave SQL.*Request to stop slave SQL Thread received while applying a group that has non-transactional changes; waiting for completion of the group");
+connection master;
+
+# wait for dying connections (if any) to disappear
+let $wait_condition= select count(*) = 0 from information_schema.processlist where command='killed';
+--disable_connect_log
+--source include/wait_condition.inc
+--enable_connect_log
+
+# After fix of BUG#45848, semi-sync slave should not create any extra
+# connections on master, save the count of connections before start
+# semi-sync slave for comparison below.
+let $_connections_normal_slave= query_get_value(SHOW STATUS LIKE 'Threads_connected', Value, 1);
+
+--echo #
+--echo # Uninstall semi-sync plugins on master and slave
+--echo #
+connection slave;
+--disable_connect_log
+source include/stop_slave.inc;
+--enable_connect_log
+reset slave;
+set global rpl_semi_sync_master_enabled= 0;
+set global rpl_semi_sync_slave_enabled= 0;
+
+connection master;
+reset master;
+set global rpl_semi_sync_master_enabled= 0;
+set global rpl_semi_sync_slave_enabled= 0;
+
+--echo #
+--echo # Main test of semi-sync replication start here
+--echo #
+
+connection master;
+
+set global rpl_semi_sync_master_timeout= 60000; # 60s
+
+echo [ default state of semi-sync on master should be OFF ];
+show variables like 'rpl_semi_sync_master_enabled';
+
+echo [ enable semi-sync on master ];
+set global rpl_semi_sync_master_enabled = 1;
+show variables like 'rpl_semi_sync_master_enabled';
+
+echo [ status of semi-sync on master should be ON even without any semi-sync slaves ];
+show status like 'Rpl_semi_sync_master_clients';
+show status like 'Rpl_semi_sync_master_status';
+show status like 'Rpl_semi_sync_master_yes_tx';
+
+--echo #
+--echo # BUG#45672 Semisync repl: ActiveTranx:insert_tranx_node: transaction node allocation failed
+--echo # BUG#45673 Semisynch reports correct operation even if no slave is connected
+--echo #
+
+# BUG#45672 When semi-sync is enabled on master, it would allocate
+# transaction node even without semi-sync slave connected, and would
+# finally result in transaction node allocation error.
+#
+# Semi-sync master will pre-allocate 'max_connections' transaction
+# nodes, so here we do more than that much transactions to check if it
+# will fail or not.
+# select @@global.max_connections + 1;
+let $i= `select @@global.max_connections + 1`;
+disable_query_log;
+eval create table t1 (a int) engine=$engine_type;
+while ($i)
+{
+ eval insert into t1 values ($i);
+ dec $i;
+}
+drop table t1;
+enable_query_log;
+
+# BUG#45673
+echo [ status of semi-sync on master should be OFF ];
+show status like 'Rpl_semi_sync_master_clients';
+show status like 'Rpl_semi_sync_master_status';
+--replace_result 305 304
+show status like 'Rpl_semi_sync_master_yes_tx';
+
+# reset master to make sure the following test will start with a clean environment
+reset master;
+
+connection slave;
+
+echo [ default state of semi-sync on slave should be OFF ];
+show variables like 'rpl_semi_sync_slave_enabled';
+
+echo [ enable semi-sync on slave ];
+set global rpl_semi_sync_slave_enabled = 1;
+show variables like 'rpl_semi_sync_slave_enabled';
+--disable_connect_log
+source include/start_slave.inc;
+--enable_connect_log
+
+connection master;
+
+# NOTE: Rpl_semi_sync_master_client will only be updated when
+# semi-sync slave has started binlog dump request
+let $status_var= Rpl_semi_sync_master_clients;
+let $status_var_value= 1;
+--disable_connect_log
+source include/wait_for_status_var.inc;
+--enable_connect_log
+
+echo [ initial master state after the semi-sync slave connected ];
+show status like 'Rpl_semi_sync_master_clients';
+show status like 'Rpl_semi_sync_master_status';
+show status like 'Rpl_semi_sync_master_no_tx';
+show status like 'Rpl_semi_sync_master_yes_tx';
+
+replace_result $engine_type ENGINE_TYPE;
+eval create table t1(a int) engine = $engine_type;
+
+echo [ master state after CREATE TABLE statement ];
+show status like 'Rpl_semi_sync_master_status';
+show status like 'Rpl_semi_sync_master_no_tx';
+show status like 'Rpl_semi_sync_master_yes_tx';
+
+# After fix of BUG#45848, semi-sync slave should not create any extra
+# connections on master.
+let $_connections_semisync_slave= query_get_value(SHOW STATUS LIKE 'Threads_connected', Value, 1);
+replace_result $_connections_normal_slave CONNECTIONS_NORMAL_SLAVE $_connections_semisync_slave CONNECTIONS_SEMISYNC_SLAVE;
+eval select $_connections_semisync_slave - $_connections_normal_slave as 'Should be 0';
+
+echo [ insert records to table ];
+insert t1 values (10);
+insert t1 values (9);
+insert t1 values (8);
+insert t1 values (7);
+insert t1 values (6);
+insert t1 values (5);
+insert t1 values (4);
+insert t1 values (3);
+insert t1 values (2);
+insert t1 values (1);
+
+echo [ master status after inserts ];
+show status like 'Rpl_semi_sync_master_status';
+show status like 'Rpl_semi_sync_master_no_tx';
+show status like 'Rpl_semi_sync_master_yes_tx';
+
+sync_slave_with_master;
+
+echo [ slave status after replicated inserts ];
+show status like 'Rpl_semi_sync_slave_status';
+
+select count(distinct a) from t1;
+select min(a) from t1;
+select max(a) from t1;
+
+--echo
+--echo # BUG#50157
+--echo # semi-sync replication crashes when replicating a transaction which
+--echo # include 'CREATE TEMPORARY TABLE `MyISAM_t` SELECT * FROM `Innodb_t` ;
+
+connection master;
+SET SESSION AUTOCOMMIT= 0;
+CREATE TABLE t2(c1 INT) ENGINE=innodb;
+sync_slave_with_master;
+
+connection master;
+BEGIN;
+--echo
+--echo # Even though it is in a transaction, this statement is binlogged into binlog
+--echo # file immediately.
+--disable_warnings
+CREATE TEMPORARY TABLE t3 SELECT c1 FROM t2 where 1=1;
+--enable_warnings
+--echo
+--echo # These statements will not be binlogged until the transaction is committed
+INSERT INTO t2 VALUES(11);
+INSERT INTO t2 VALUES(22);
+COMMIT;
+
+DROP TABLE t2, t3;
+SET SESSION AUTOCOMMIT= 1;
+sync_slave_with_master;
+
+
+--echo #
+--echo # Test semi-sync master will switch OFF after one transaction
+--echo # timeout waiting for slave reply.
+--echo #
+connection slave;
+--disable_connect_log
+source include/stop_slave.inc;
+--enable_connect_log
+
+connection master;
+set global rpl_semi_sync_master_timeout= 5000;
+
+# The first semi-sync check should be on because after slave stop,
+# there are no transactions on the master.
+echo [ master status should be ON ];
+show status like 'Rpl_semi_sync_master_status';
+show status like 'Rpl_semi_sync_master_no_tx';
+--replace_result 305 304
+show status like 'Rpl_semi_sync_master_yes_tx';
+show status like 'Rpl_semi_sync_master_clients';
+
+echo [ semi-sync replication of these transactions will fail ];
+insert into t1 values (500);
+
+# Wait for the semi-sync replication of this transaction to timeout
+let $status_var= Rpl_semi_sync_master_status;
+let $status_var_value= OFF;
+--disable_connect_log
+source include/wait_for_status_var.inc;
+--enable_connect_log
+
+# The second semi-sync check should be off because one transaction
+# times out during waiting.
+echo [ master status should be OFF ];
+show status like 'Rpl_semi_sync_master_status';
+show status like 'Rpl_semi_sync_master_no_tx';
+--replace_result 305 304
+show status like 'Rpl_semi_sync_master_yes_tx';
+
+# Semi-sync status on master is now OFF, so all these transactions
+# will be replicated asynchronously.
+delete from t1 where a=10;
+delete from t1 where a=9;
+delete from t1 where a=8;
+delete from t1 where a=7;
+delete from t1 where a=6;
+delete from t1 where a=5;
+delete from t1 where a=4;
+delete from t1 where a=3;
+delete from t1 where a=2;
+delete from t1 where a=1;
+
+insert into t1 values (100);
+
+echo [ master status should be OFF ];
+show status like 'Rpl_semi_sync_master_status';
+show status like 'Rpl_semi_sync_master_no_tx';
+--replace_result 305 304
+show status like 'Rpl_semi_sync_master_yes_tx';
+
+--echo #
+--echo # Test semi-sync status on master will be ON again when slave catches up
+--echo #
+
+# Save the master position for later use.
+save_master_pos;
+
+connection slave;
+
+echo [ slave status should be OFF ];
+show status like 'Rpl_semi_sync_slave_status';
+--disable_connect_log
+source include/start_slave.inc;
+--enable_connect_log
+sync_with_master;
+
+echo [ slave status should be ON ];
+show status like 'Rpl_semi_sync_slave_status';
+
+select count(distinct a) from t1;
+select min(a) from t1;
+select max(a) from t1;
+
+connection master;
+
+# The master semi-sync status should be on again after slave catches up.
+echo [ master status should be ON again after slave catches up ];
+show status like 'Rpl_semi_sync_master_status';
+show status like 'Rpl_semi_sync_master_no_tx';
+--replace_result 305 304
+show status like 'Rpl_semi_sync_master_yes_tx';
+show status like 'Rpl_semi_sync_master_clients';
+
+--echo #
+--echo # Test disable/enable master semi-sync on the fly.
+--echo #
+
+drop table t1;
+sync_slave_with_master;
+
+--disable_connect_log
+source include/stop_slave.inc;
+--enable_connect_log
+
+--echo #
+--echo # Flush status
+--echo #
+connection master;
+echo [ Semi-sync master status variables before FLUSH STATUS ];
+SHOW STATUS LIKE 'Rpl_semi_sync_master_no_tx';
+SHOW STATUS LIKE 'Rpl_semi_sync_master_yes_tx';
+# Do not write the FLUSH STATUS to binlog, to make sure we'll get a
+# clean status after this.
+FLUSH NO_WRITE_TO_BINLOG STATUS;
+echo [ Semi-sync master status variables after FLUSH STATUS ];
+SHOW STATUS LIKE 'Rpl_semi_sync_master_no_tx';
+SHOW STATUS LIKE 'Rpl_semi_sync_master_yes_tx';
+
+connection master;
+
+--disable_connect_log
+source include/show_master_logs.inc;
+--enable_connect_log
+show variables like 'rpl_semi_sync_master_enabled';
+
+echo [ disable semi-sync on the fly ];
+set global rpl_semi_sync_master_enabled=0;
+show variables like 'rpl_semi_sync_master_enabled';
+show status like 'Rpl_semi_sync_master_status';
+
+echo [ enable semi-sync on the fly ];
+set global rpl_semi_sync_master_enabled=1;
+show variables like 'rpl_semi_sync_master_enabled';
+show status like 'Rpl_semi_sync_master_status';
+
+--echo #
+--echo # Test RESET MASTER/SLAVE
+--echo #
+
+connection slave;
+
+--disable_connect_log
+source include/start_slave.inc;
+--enable_connect_log
+
+connection master;
+
+replace_result $engine_type ENGINE_TYPE;
+eval create table t1 (a int) engine = $engine_type;
+drop table t1;
+
+##show status like 'Rpl_semi_sync_master_status';
+
+sync_slave_with_master;
+--replace_column 2 #
+show status like 'Rpl_relay%';
+
+echo [ test reset master ];
+connection master;
+
+reset master;
+
+show status like 'Rpl_semi_sync_master_status';
+show status like 'Rpl_semi_sync_master_no_tx';
+show status like 'Rpl_semi_sync_master_yes_tx';
+
+connection slave;
+
+--disable_connect_log
+source include/stop_slave.inc;
+--enable_connect_log
+reset slave;
+
+# Kill the dump thread on master for previous slave connection and
+# wait for it to exit
+connection master;
+let $_tid= `select id from information_schema.processlist where command = 'Binlog Dump' limit 1`;
+if ($_tid)
+{
+ --replace_result $_tid _tid
+ eval kill query $_tid;
+
+ # After dump thread exit, Rpl_semi_sync_master_clients will be 0
+ let $status_var= Rpl_semi_sync_master_clients;
+ let $status_var_value= 0;
+--disable_connect_log
+ source include/wait_for_status_var.inc;
+--enable_connect_log
+}
+
+connection slave;
+--disable_connect_log
+source include/start_slave.inc;
+--enable_connect_log
+
+connection master;
+
+# Wait for dump thread to start, Rpl_semi_sync_master_clients will be
+# 1 after dump thread started.
+let $status_var= Rpl_semi_sync_master_clients;
+let $status_var_value= 1;
+--disable_connect_log
+source include/wait_for_status_var.inc;
+--enable_connect_log
+
+replace_result $engine_type ENGINE_TYPE;
+eval create table t1 (a int) engine = $engine_type;
+insert into t1 values (1);
+insert into t1 values (2), (3);
+
+sync_slave_with_master;
+
+select * from t1;
+
+connection master;
+
+echo [ master semi-sync status should be ON ];
+show status like 'Rpl_semi_sync_master_status';
+show status like 'Rpl_semi_sync_master_no_tx';
+show status like 'Rpl_semi_sync_master_yes_tx';
+
+--echo #
+--echo # Start semi-sync replication without SUPER privilege
+--echo #
+connection slave;
+--disable_connect_log
+source include/stop_slave.inc;
+--enable_connect_log
+reset slave;
+connection master;
+reset master;
+
+# Kill the dump thread on master for previous slave connection and wait for it to exit
+let $_tid= `select id from information_schema.processlist where command = 'Binlog Dump' limit 1`;
+if ($_tid)
+{
+ --replace_result $_tid _tid
+ eval kill query $_tid;
+
+ # After dump thread exit, Rpl_semi_sync_master_clients will be 0
+ let $status_var= Rpl_semi_sync_master_clients;
+ let $status_var_value= 0;
+ --disable_connect_log
+ source include/wait_for_status_var.inc;
+ --enable_connect_log
+}
+
+# Do not binlog the following statement because it will generate
+# different events for ROW and STATEMENT format
+set sql_log_bin=0;
+grant replication slave on *.* to rpl@127.0.0.1 identified by 'rpl_password';
+flush privileges;
+set sql_log_bin=1;
+connection slave;
+grant replication slave on *.* to rpl@127.0.0.1 identified by 'rpl_password';
+flush privileges;
+change master to master_user='rpl',master_password='rpl_password';
+--disable_connect_log
+source include/start_slave.inc;
+--enable_connect_log
+show status like 'Rpl_semi_sync_slave_status';
+connection master;
+
+# Wait for the semi-sync binlog dump thread to start
+let $status_var= Rpl_semi_sync_master_clients;
+let $status_var_value= 1;
+--disable_connect_log
+source include/wait_for_status_var.inc;
+--enable_connect_log
+echo [ master semi-sync should be ON ];
+show status like 'Rpl_semi_sync_master_clients';
+show status like 'Rpl_semi_sync_master_status';
+show status like 'Rpl_semi_sync_master_no_tx';
+show status like 'Rpl_semi_sync_master_yes_tx';
+insert into t1 values (4);
+insert into t1 values (5);
+echo [ master semi-sync should be ON ];
+show status like 'Rpl_semi_sync_master_clients';
+show status like 'Rpl_semi_sync_master_status';
+show status like 'Rpl_semi_sync_master_no_tx';
+show status like 'Rpl_semi_sync_master_yes_tx';
+
+--echo #
+--echo # Test semi-sync slave connect to non-semi-sync master
+--echo #
+
+# Disable semi-sync on master
+connection slave;
+--disable_connect_log
+source include/stop_slave.inc;
+--enable_connect_log
+SHOW STATUS LIKE 'Rpl_semi_sync_slave_status';
+
+connection master;
+
+# Kill the dump thread on master for previous slave connection and wait for it to exit
+let $_tid= `select id from information_schema.processlist where command = 'Binlog Dump' limit 1`;
+if ($_tid)
+{
+ --replace_result $_tid _tid
+ eval kill query $_tid;
+
+ # After dump thread exit, Rpl_semi_sync_master_clients will be 0
+ let $status_var= Rpl_semi_sync_master_clients;
+ let $status_var_value= 0;
+ --disable_connect_log
+ source include/wait_for_status_var.inc;
+ --enable_connect_log
+}
+
+echo [ Semi-sync status on master should be ON ];
+show status like 'Rpl_semi_sync_master_clients';
+show status like 'Rpl_semi_sync_master_status';
+set global rpl_semi_sync_master_enabled= 0;
+
+connection slave;
+SHOW VARIABLES LIKE 'rpl_semi_sync_slave_enabled';
+--disable_connect_log
+source include/start_slave.inc;
+--enable_connect_log
+connection master;
+insert into t1 values (8);
+let $status_var= Rpl_semi_sync_master_clients;
+let $status_var_value= 1;
+--disable_connect_log
+source include/wait_for_status_var.inc;
+--enable_connect_log
+echo [ master semi-sync clients should be 1, status should be OFF ];
+show status like 'Rpl_semi_sync_master_clients';
+show status like 'Rpl_semi_sync_master_status';
+sync_slave_with_master;
+show status like 'Rpl_semi_sync_slave_status';
+
+# Uninstall semi-sync plugin on master
+connection slave;
+--disable_connect_log
+source include/stop_slave.inc;
+--enable_connect_log
+connection master;
+set global rpl_semi_sync_master_enabled= 0;
+
+connection slave;
+SHOW VARIABLES LIKE 'rpl_semi_sync_slave_enabled';
+--disable_connect_log
+source include/start_slave.inc;
+--enable_connect_log
+
+connection master;
+insert into t1 values (10);
+sync_slave_with_master;
+
+--echo #
+--echo # Test non-semi-sync slave connect to semi-sync master
+--echo #
+
+connection master;
+set global rpl_semi_sync_master_timeout= 5000; # 5s
+set global rpl_semi_sync_master_enabled= 1;
+
+connection slave;
+--disable_connect_log
+source include/stop_slave.inc;
+--enable_connect_log
+SHOW STATUS LIKE 'Rpl_semi_sync_slave_status';
+
+echo [ uninstall semi-sync slave plugin ];
+set global rpl_semi_sync_slave_enabled= 0;
+
+echo [ reinstall semi-sync slave plugin and disable semi-sync ];
+SHOW VARIABLES LIKE 'rpl_semi_sync_slave_enabled';
+SHOW STATUS LIKE 'Rpl_semi_sync_slave_status';
+--disable_connect_log
+source include/start_slave.inc;
+--enable_connect_log
+SHOW STATUS LIKE 'Rpl_semi_sync_slave_status';
+
+--echo #
+--echo # Clean up
+--echo #
+
+connection slave;
+--disable_connect_log
+source include/stop_slave.inc;
+--enable_connect_log
+set global rpl_semi_sync_slave_enabled= 0;
+
+connection master;
+set global rpl_semi_sync_master_enabled= 0;
+
+connection slave;
+change master to master_user='root',master_password='';
+--disable_connect_log
+source include/start_slave.inc;
+--enable_connect_log
+
+connection master;
+drop table t1;
+sync_slave_with_master;
+
+connection master;
+drop user rpl@127.0.0.1;
+flush privileges;
+set global rpl_semi_sync_master_timeout= default;
+--disable_connect_log
+--source include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_skip_replication.cnf b/mysql-test/suite/binlog_encryption/rpl_skip_replication.cnf
new file mode 100644
index 0000000..b8e22e9
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_skip_replication.cnf
@@ -0,0 +1,6 @@
+!include my.cnf
+
+[mysqld.2]
+plugin-load-add= @ENV.FILE_KEY_MANAGEMENT_SO
+loose-file-key-management-filename=@ENV.MYSQLTEST_VARDIR/std_data/keys.txt
+encrypt-binlog
diff --git a/mysql-test/suite/binlog_encryption/rpl_skip_replication.result b/mysql-test/suite/binlog_encryption/rpl_skip_replication.result
new file mode 100644
index 0000000..ded85f3
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_skip_replication.result
@@ -0,0 +1,312 @@
+include/master-slave.inc
+[connection master]
+connection slave;
+CREATE USER 'nonsuperuser'@'127.0.0.1';
+GRANT ALTER,CREATE,DELETE,DROP,EVENT,INSERT,PROCESS,REPLICATION SLAVE,
+SELECT,UPDATE ON *.* TO 'nonsuperuser'@'127.0.0.1';
+connect nonpriv, 127.0.0.1, nonsuperuser,, test, $SLAVE_MYPORT,;
+connection nonpriv;
+SET GLOBAL replicate_events_marked_for_skip=FILTER_ON_MASTER;
+ERROR 42000: Access denied; you need (at least one of) the SUPER privilege(s) for this operation
+disconnect nonpriv;
+connection slave;
+DROP USER'nonsuperuser'@'127.0.0.1';
+SELECT @@global.replicate_events_marked_for_skip;
+@@global.replicate_events_marked_for_skip
+REPLICATE
+SET GLOBAL replicate_events_marked_for_skip=FILTER_ON_SLAVE;
+ERROR HY000: This operation cannot be performed as you have a running slave ''; run STOP SLAVE '' first
+SELECT @@global.replicate_events_marked_for_skip;
+@@global.replicate_events_marked_for_skip
+REPLICATE
+STOP SLAVE;
+SET SESSION replicate_events_marked_for_skip=FILTER_ON_MASTER;
+ERROR HY000: Variable 'replicate_events_marked_for_skip' is a GLOBAL variable and should be set with SET GLOBAL
+SELECT @@global.replicate_events_marked_for_skip;
+@@global.replicate_events_marked_for_skip
+REPLICATE
+SET GLOBAL replicate_events_marked_for_skip=FILTER_ON_MASTER;
+SELECT @@global.replicate_events_marked_for_skip;
+@@global.replicate_events_marked_for_skip
+FILTER_ON_MASTER
+START SLAVE;
+connection master;
+SELECT @@skip_replication;
+@@skip_replication
+0
+SET GLOBAL skip_replication=1;
+ERROR HY000: Variable 'skip_replication' is a SESSION variable and can't be used with SET GLOBAL
+SELECT @@skip_replication;
+@@skip_replication
+0
+CREATE TABLE t1 (a INT PRIMARY KEY, b INT) ENGINE=myisam;
+CREATE TABLE t2 (a INT PRIMARY KEY, b INT) ENGINE=innodb;
+INSERT INTO t1(a) VALUES (1);
+INSERT INTO t2(a) VALUES (1);
+SET skip_replication=1;
+CREATE TABLE t3 (a INT PRIMARY KEY, b INT) ENGINE=myisam;
+INSERT INTO t1(a) VALUES (2);
+INSERT INTO t2(a) VALUES (2);
+FLUSH NO_WRITE_TO_BINLOG LOGS;
+connection slave;
+connection slave;
+SHOW TABLES;
+Tables_in_test
+t1
+t2
+SELECT * FROM t1;
+a b
+1 NULL
+SELECT * FROM t2;
+a b
+1 NULL
+connection master;
+DROP TABLE t3;
+FLUSH NO_WRITE_TO_BINLOG LOGS;
+connection slave;
+connection slave;
+STOP SLAVE;
+SET GLOBAL replicate_events_marked_for_skip=FILTER_ON_SLAVE;
+START SLAVE;
+connection master;
+SET skip_replication=1;
+CREATE TABLE t3 (a INT PRIMARY KEY, b INT) ENGINE=myisam;
+INSERT INTO t1(a) VALUES (3);
+INSERT INTO t2(a) VALUES (3);
+FLUSH NO_WRITE_TO_BINLOG LOGS;
+connection slave;
+connection slave;
+SHOW TABLES;
+Tables_in_test
+t1
+t2
+SELECT * FROM t1;
+a b
+1 NULL
+SELECT * FROM t2;
+a b
+1 NULL
+connection master;
+DROP TABLE t3;
+FLUSH NO_WRITE_TO_BINLOG LOGS;
+connection slave;
+connection slave;
+STOP SLAVE;
+SET GLOBAL replicate_events_marked_for_skip=REPLICATE;
+START SLAVE;
+connection master;
+SET skip_replication=1;
+CREATE TABLE t3 (a INT PRIMARY KEY, b INT) ENGINE=myisam;
+INSERT INTO t3(a) VALUES(2);
+connection slave;
+connection slave;
+SELECT * FROM t3;
+a b
+2 NULL
+connection master;
+DROP TABLE t3;
+TRUNCATE t1;
+connection slave;
+connection slave;
+RESET MASTER;
+connection master;
+SET skip_replication=0;
+INSERT INTO t1 VALUES (1,0);
+SET skip_replication=1;
+INSERT INTO t1 VALUES (2,0);
+SET skip_replication=0;
+INSERT INTO t1 VALUES (3,0);
+connection slave;
+connection slave;
+SELECT * FROM t1 ORDER by a;
+a b
+1 0
+2 0
+3 0
+STOP SLAVE;
+SET GLOBAL replicate_events_marked_for_skip=FILTER_ON_MASTER;
+connection master;
+TRUNCATE t1;
+SELECT * FROM t1 ORDER by a;
+a b
+1 0
+2 0
+3 0
+connection slave;
+START SLAVE;
+connection master;
+connection slave;
+connection slave;
+SELECT * FROM t1 ORDER by a;
+a b
+1 0
+3 0
+connection master;
+TRUNCATE t1;
+connection slave;
+connection slave;
+STOP SLAVE;
+SET GLOBAL sql_slave_skip_counter=6;
+SET GLOBAL replicate_events_marked_for_skip=FILTER_ON_SLAVE;
+START SLAVE;
+connection master;
+SET @old_binlog_format= @@binlog_format;
+SET binlog_format= statement;
+SET skip_replication=0;
+INSERT INTO t1 VALUES (1,5);
+SET skip_replication=1;
+INSERT INTO t1 VALUES (2,5);
+SET skip_replication=0;
+INSERT INTO t1 VALUES (3,5);
+INSERT INTO t1 VALUES (4,5);
+SET binlog_format= @old_binlog_format;
+connection slave;
+connection slave;
+SELECT * FROM t1;
+a b
+4 5
+connection slave;
+include/stop_slave.inc
+SET @old_slave_binlog_format= @@global.binlog_format;
+SET GLOBAL binlog_format= row;
+include/start_slave.inc
+connection master;
+TRUNCATE t1;
+SET @old_binlog_format= @@binlog_format;
+SET binlog_format= row;
+BINLOG 'wlZOTw8BAAAA8QAAAPUAAAAAAAQANS41LjIxLU1hcmlhREItZGVidWctbG9nAAAAAAAAAAAAAAAA
+AAAAAAAAAAAAAAAAAAAAAAAAEzgNAAgAEgAEBAQEEgAA2QAEGggAAAAICAgCAAAAAAAAAAAAAAAA
+AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
+AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
+AAAAAAAAAAAA371saA==';
+BINLOG 'wlZOTxMBAAAAKgAAAGMBAAAAgCkAAAAAAAEABHRlc3QAAnQxAAIDAwAC
+wlZOTxcBAAAAJgAAAIkBAAAAgCkAAAAAAAEAAv/8AQAAAAgAAAA=';
+BINLOG 'wlZOTxMBAAAAKgAAADwCAAAAACkAAAAAAAEABHRlc3QAAnQxAAIDAwAC
+wlZOTxcBAAAAJgAAAGICAAAAACkAAAAAAAEAAv/8AgAAAAgAAAA=';
+SET binlog_format= @old_binlog_format;
+SELECT * FROM t1 ORDER BY a;
+a b
+1 8
+2 8
+connection slave;
+connection slave;
+SELECT * FROM t1 ORDER by a;
+a b
+2 8
+include/stop_slave.inc
+SET GLOBAL binlog_format= @old_slave_binlog_format;
+include/start_slave.inc
+connection master;
+SET skip_replication=0;
+BEGIN;
+SET skip_replication=0;
+ERROR HY000: Cannot modify @@session.skip_replication inside a transaction
+SET skip_replication=1;
+ERROR HY000: Cannot modify @@session.skip_replication inside a transaction
+ROLLBACK;
+SET skip_replication=1;
+BEGIN;
+SET skip_replication=0;
+ERROR HY000: Cannot modify @@session.skip_replication inside a transaction
+SET skip_replication=1;
+ERROR HY000: Cannot modify @@session.skip_replication inside a transaction
+COMMIT;
+SET autocommit=0;
+INSERT INTO t2(a) VALUES(100);
+SET skip_replication=1;
+ERROR HY000: Cannot modify @@session.skip_replication inside a transaction
+ROLLBACK;
+SET autocommit=1;
+SET skip_replication=1;
+CREATE FUNCTION foo (x INT) RETURNS INT BEGIN SET SESSION skip_replication=x; RETURN x; END|
+CREATE PROCEDURE bar(x INT) BEGIN SET SESSION skip_replication=x; END|
+CREATE FUNCTION baz (x INT) RETURNS INT BEGIN CALL bar(x); RETURN x; END|
+SELECT foo(0);
+ERROR HY000: Cannot modify @@session.skip_replication inside a stored function or trigger
+SELECT baz(0);
+ERROR HY000: Cannot modify @@session.skip_replication inside a stored function or trigger
+SET @a= foo(1);
+ERROR HY000: Cannot modify @@session.skip_replication inside a stored function or trigger
+SET @a= baz(1);
+ERROR HY000: Cannot modify @@session.skip_replication inside a stored function or trigger
+UPDATE t2 SET b=foo(0);
+ERROR HY000: Cannot modify @@session.skip_replication inside a stored function or trigger
+UPDATE t2 SET b=baz(0);
+ERROR HY000: Cannot modify @@session.skip_replication inside a stored function or trigger
+INSERT INTO t1 VALUES (101, foo(1));
+ERROR HY000: Cannot modify @@session.skip_replication inside a stored function or trigger
+INSERT INTO t1 VALUES (101, baz(0));
+ERROR HY000: Cannot modify @@session.skip_replication inside a stored function or trigger
+SELECT @@skip_replication;
+@@skip_replication
+1
+CALL bar(0);
+SELECT @@skip_replication;
+@@skip_replication
+0
+CALL bar(1);
+SELECT @@skip_replication;
+@@skip_replication
+1
+DROP FUNCTION foo;
+DROP PROCEDURE bar;
+DROP FUNCTION baz;
+connection master;
+SET skip_replication= 0;
+TRUNCATE t1;
+connection slave;
+connection slave;
+STOP SLAVE;
+SET GLOBAL replicate_events_marked_for_skip=FILTER_ON_MASTER;
+START SLAVE IO_THREAD;
+connection master;
+SET skip_replication= 1;
+INSERT INTO t1(a) VALUES (1);
+SET skip_replication= 0;
+INSERT INTO t1(a) VALUES (2);
+include/save_master_pos.inc
+connection slave;
+include/sync_io_with_master.inc
+STOP SLAVE IO_THREAD;
+SET GLOBAL replicate_events_marked_for_skip=REPLICATE;
+START SLAVE;
+connection master;
+connection slave;
+connection slave;
+SELECT * FROM t1;
+a b
+2 NULL
+connection master;
+SET skip_replication= 0;
+TRUNCATE t1;
+connection slave;
+connection slave;
+STOP SLAVE;
+SET GLOBAL replicate_events_marked_for_skip=FILTER_ON_SLAVE;
+START SLAVE IO_THREAD;
+connection master;
+SET skip_replication= 1;
+INSERT INTO t1(a) VALUES (1);
+SET skip_replication= 0;
+INSERT INTO t1(a) VALUES (2);
+include/save_master_pos.inc
+connection slave;
+include/sync_io_with_master.inc
+STOP SLAVE IO_THREAD;
+SET GLOBAL replicate_events_marked_for_skip=REPLICATE;
+START SLAVE;
+connection master;
+connection slave;
+connection slave;
+SELECT * FROM t1 ORDER BY a;
+a b
+1 NULL
+2 NULL
+connection master;
+SET skip_replication=0;
+DROP TABLE t1,t2;
+connection slave;
+STOP SLAVE;
+SET GLOBAL replicate_events_marked_for_skip=REPLICATE;
+START SLAVE;
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_skip_replication.test b/mysql-test/suite/binlog_encryption/rpl_skip_replication.test
new file mode 100644
index 0000000..84672f1
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_skip_replication.test
@@ -0,0 +1,401 @@
+#
+# The test was taken from the rpl suite.
+# Since mysqlbinlog cannot read encrypted logs directly, it was
+# switched to read-from-remote-server mode.
+# The test is run with master and slave binlogs encrypted
+#
+
+--source include/master-slave.inc
+--enable_connect_log
+
+connection slave;
+# Test that SUPER is required to change @@replicate_events_marked_for_skip.
+CREATE USER 'nonsuperuser'@'127.0.0.1';
+GRANT ALTER,CREATE,DELETE,DROP,EVENT,INSERT,PROCESS,REPLICATION SLAVE,
+ SELECT,UPDATE ON *.* TO 'nonsuperuser'@'127.0.0.1';
+connect(nonpriv, 127.0.0.1, nonsuperuser,, test, $SLAVE_MYPORT,);
+connection nonpriv;
+--error ER_SPECIFIC_ACCESS_DENIED_ERROR
+SET GLOBAL replicate_events_marked_for_skip=FILTER_ON_MASTER;
+disconnect nonpriv;
+connection slave;
+DROP USER'nonsuperuser'@'127.0.0.1';
+
+SELECT @@global.replicate_events_marked_for_skip;
+--error ER_SLAVE_MUST_STOP
+SET GLOBAL replicate_events_marked_for_skip=FILTER_ON_SLAVE;
+SELECT @@global.replicate_events_marked_for_skip;
+STOP SLAVE;
+--error ER_GLOBAL_VARIABLE
+SET SESSION replicate_events_marked_for_skip=FILTER_ON_MASTER;
+SELECT @@global.replicate_events_marked_for_skip;
+SET GLOBAL replicate_events_marked_for_skip=FILTER_ON_MASTER;
+SELECT @@global.replicate_events_marked_for_skip;
+START SLAVE;
+
+connection master;
+SELECT @@skip_replication;
+--error ER_LOCAL_VARIABLE
+SET GLOBAL skip_replication=1;
+SELECT @@skip_replication;
+
+CREATE TABLE t1 (a INT PRIMARY KEY, b INT) ENGINE=myisam;
+CREATE TABLE t2 (a INT PRIMARY KEY, b INT) ENGINE=innodb;
+INSERT INTO t1(a) VALUES (1);
+INSERT INTO t2(a) VALUES (1);
+
+
+# Test that master-side filtering works.
+SET skip_replication=1;
+
+CREATE TABLE t3 (a INT PRIMARY KEY, b INT) ENGINE=myisam;
+INSERT INTO t1(a) VALUES (2);
+INSERT INTO t2(a) VALUES (2);
+
+# Inject a rotate event in the binlog stream sent to slave (otherwise we will
+# fail sync_slave_with_master as the last event on the master is not present
+# on the slave).
+FLUSH NO_WRITE_TO_BINLOG LOGS;
+
+sync_slave_with_master;
+connection slave;
+SHOW TABLES;
+SELECT * FROM t1;
+SELECT * FROM t2;
+
+connection master;
+DROP TABLE t3;
+
+FLUSH NO_WRITE_TO_BINLOG LOGS;
+sync_slave_with_master;
+
+
+# Test that slave-side filtering works.
+connection slave;
+STOP SLAVE;
+SET GLOBAL replicate_events_marked_for_skip=FILTER_ON_SLAVE;
+START SLAVE;
+
+connection master;
+SET skip_replication=1;
+CREATE TABLE t3 (a INT PRIMARY KEY, b INT) ENGINE=myisam;
+INSERT INTO t1(a) VALUES (3);
+INSERT INTO t2(a) VALUES (3);
+
+# Inject a rotate event in the binlog stream sent to slave (otherwise we will
+# fail sync_slave_with_master as the last event on the master is not present
+# on the slave).
+FLUSH NO_WRITE_TO_BINLOG LOGS;
+
+sync_slave_with_master;
+connection slave;
+SHOW TABLES;
+SELECT * FROM t1;
+SELECT * FROM t2;
+
+connection master;
+DROP TABLE t3;
+
+FLUSH NO_WRITE_TO_BINLOG LOGS;
+sync_slave_with_master;
+connection slave;
+STOP SLAVE;
+SET GLOBAL replicate_events_marked_for_skip=REPLICATE;
+START SLAVE;
+
+
+# Test that events with @@skip_replication=1 are not filtered when filtering is
+# not set on slave.
+connection master;
+SET skip_replication=1;
+CREATE TABLE t3 (a INT PRIMARY KEY, b INT) ENGINE=myisam;
+INSERT INTO t3(a) VALUES(2);
+sync_slave_with_master;
+connection slave;
+SELECT * FROM t3;
+connection master;
+DROP TABLE t3;
+
+#
+# Test that the slave will preserve the @@skip_replication flag in its
+# own binlog.
+#
+
+TRUNCATE t1;
+sync_slave_with_master;
+connection slave;
+RESET MASTER;
+
+connection master;
+SET skip_replication=0;
+INSERT INTO t1 VALUES (1,0);
+SET skip_replication=1;
+INSERT INTO t1 VALUES (2,0);
+SET skip_replication=0;
+INSERT INTO t1 VALUES (3,0);
+
+sync_slave_with_master;
+connection slave;
+# Since slave has @@replicate_events_marked_for_skip=REPLICATE, it should have
+# applied all events.
+SELECT * FROM t1 ORDER by a;
+
+STOP SLAVE;
+SET GLOBAL replicate_events_marked_for_skip=FILTER_ON_MASTER;
+let $SLAVE_DATADIR= `select @@datadir`;
+
+connection master;
+TRUNCATE t1;
+
+# Now apply the slave binlog to the master, to check that both the slave
+# and mysqlbinlog will preserve the @@skip_replication flag.
+--exec $MYSQL_BINLOG --read-from-remote-server --protocol=tcp --host=127.0.0.1 --port=$SLAVE_MYPORT -uroot slave-bin.000001 > $MYSQLTEST_VARDIR/tmp/rpl_skip_replication.binlog
+--exec $MYSQL test < $MYSQLTEST_VARDIR/tmp/rpl_skip_replication.binlog
+
+# The master should have all three events.
+SELECT * FROM t1 ORDER by a;
+
+# The slave should be missing event 2, which is marked with the
+# @@skip_replication flag.
+
+connection slave;
+START SLAVE;
+
+connection master;
+sync_slave_with_master;
+
+connection slave;
+SELECT * FROM t1 ORDER by a;
+
+#
+# Test that @@sql_slave_skip_counter does not count skipped @@skip_replication
+# events.
+#
+
+connection master;
+TRUNCATE t1;
+
+sync_slave_with_master;
+connection slave;
+STOP SLAVE;
+# We will skip two INSERTs (in addition to any skipped due to
+# @@skip_replication). Since from 5.5 every statement is wrapped in
+# BEGIN ... END, we need to skip 6 events for this.
+SET GLOBAL sql_slave_skip_counter=6;
+SET GLOBAL replicate_events_marked_for_skip=FILTER_ON_SLAVE;
+START SLAVE;
+
+connection master;
+# Need to fix @@binlog_format to get consistent event count.
+SET @old_binlog_format= @@binlog_format;
+SET binlog_format= statement;
+SET skip_replication=0;
+INSERT INTO t1 VALUES (1,5);
+SET skip_replication=1;
+INSERT INTO t1 VALUES (2,5);
+SET skip_replication=0;
+INSERT INTO t1 VALUES (3,5);
+INSERT INTO t1 VALUES (4,5);
+SET binlog_format= @old_binlog_format;
+
+sync_slave_with_master;
+connection slave;
+
+# The slave should have skipped the first three inserts (number 1 and 3 due
+# to @@sql_slave_skip_counter=2, number 2 due to
+# @@replicate_events_marked_for_skip=FILTER_ON_SLAVE). So only number 4
+# should be left.
+SELECT * FROM t1;
+
+
+#
+# Check that BINLOG statement preserves the @@skip_replication flag.
+#
+connection slave;
+# Need row @@binlog_format for BINLOG statements containing row events.
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET @old_slave_binlog_format= @@global.binlog_format;
+SET GLOBAL binlog_format= row;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+connection master;
+TRUNCATE t1;
+
+SET @old_binlog_format= @@binlog_format;
+SET binlog_format= row;
+# Format description log event.
+BINLOG 'wlZOTw8BAAAA8QAAAPUAAAAAAAQANS41LjIxLU1hcmlhREItZGVidWctbG9nAAAAAAAAAAAAAAAA
+AAAAAAAAAAAAAAAAAAAAAAAAEzgNAAgAEgAEBAQEEgAA2QAEGggAAAAICAgCAAAAAAAAAAAAAAAA
+AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
+AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
+AAAAAAAAAAAA371saA==';
+# INSERT INTO t1 VALUES (1,8) # with @@skip_replication=1
+BINLOG 'wlZOTxMBAAAAKgAAAGMBAAAAgCkAAAAAAAEABHRlc3QAAnQxAAIDAwAC
+wlZOTxcBAAAAJgAAAIkBAAAAgCkAAAAAAAEAAv/8AQAAAAgAAAA=';
+# INSERT INTO t1 VALUES (2,8) # with @@skip_replication=0
+BINLOG 'wlZOTxMBAAAAKgAAADwCAAAAACkAAAAAAAEABHRlc3QAAnQxAAIDAwAC
+wlZOTxcBAAAAJgAAAGICAAAAACkAAAAAAAEAAv/8AgAAAAgAAAA=';
+SET binlog_format= @old_binlog_format;
+
+SELECT * FROM t1 ORDER BY a;
+sync_slave_with_master;
+connection slave;
+# Slave should have only the second insert, the first should be ignored due to
+# the @@skip_replication flag.
+SELECT * FROM t1 ORDER by a;
+
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+SET GLOBAL binlog_format= @old_slave_binlog_format;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+
+# Test that it is not possible to change @@skip_replication inside a
+# transaction or statement, thereby replicating only parts of statements
+# or transactions.
+connection master;
+SET skip_replication=0;
+
+BEGIN;
+--error ER_INSIDE_TRANSACTION_PREVENTS_SWITCH_SKIP_REPLICATION
+SET skip_replication=0;
+--error ER_INSIDE_TRANSACTION_PREVENTS_SWITCH_SKIP_REPLICATION
+SET skip_replication=1;
+ROLLBACK;
+SET skip_replication=1;
+BEGIN;
+--error ER_INSIDE_TRANSACTION_PREVENTS_SWITCH_SKIP_REPLICATION
+SET skip_replication=0;
+--error ER_INSIDE_TRANSACTION_PREVENTS_SWITCH_SKIP_REPLICATION
+SET skip_replication=1;
+COMMIT;
+SET autocommit=0;
+INSERT INTO t2(a) VALUES(100);
+--error ER_INSIDE_TRANSACTION_PREVENTS_SWITCH_SKIP_REPLICATION
+SET skip_replication=1;
+ROLLBACK;
+SET autocommit=1;
+
+SET skip_replication=1;
+--delimiter |
+CREATE FUNCTION foo (x INT) RETURNS INT BEGIN SET SESSION skip_replication=x; RETURN x; END|
+CREATE PROCEDURE bar(x INT) BEGIN SET SESSION skip_replication=x; END|
+CREATE FUNCTION baz (x INT) RETURNS INT BEGIN CALL bar(x); RETURN x; END|
+--delimiter ;
+--error ER_STORED_FUNCTION_PREVENTS_SWITCH_SKIP_REPLICATION
+SELECT foo(0);
+--error ER_STORED_FUNCTION_PREVENTS_SWITCH_SKIP_REPLICATION
+SELECT baz(0);
+--error ER_STORED_FUNCTION_PREVENTS_SWITCH_SKIP_REPLICATION
+SET @a= foo(1);
+--error ER_STORED_FUNCTION_PREVENTS_SWITCH_SKIP_REPLICATION
+SET @a= baz(1);
+--error ER_STORED_FUNCTION_PREVENTS_SWITCH_SKIP_REPLICATION
+UPDATE t2 SET b=foo(0);
+--error ER_STORED_FUNCTION_PREVENTS_SWITCH_SKIP_REPLICATION
+UPDATE t2 SET b=baz(0);
+--error ER_STORED_FUNCTION_PREVENTS_SWITCH_SKIP_REPLICATION
+INSERT INTO t1 VALUES (101, foo(1));
+--error ER_STORED_FUNCTION_PREVENTS_SWITCH_SKIP_REPLICATION
+INSERT INTO t1 VALUES (101, baz(0));
+SELECT @@skip_replication;
+CALL bar(0);
+SELECT @@skip_replication;
+CALL bar(1);
+SELECT @@skip_replication;
+DROP FUNCTION foo;
+DROP PROCEDURE bar;
+DROP FUNCTION baz;
+
+
+# Test that master-side filtering happens on the master side, and that
+# slave-side filtering happens on the slave.
+
+# First test that events do not reach the slave when master-side filtering
+# is configured. Do this by replicating first with only the IO thread running
+# and master-side filtering; then change to no filtering and start the SQL
+# thread. This should still skip the events, as master-side filtering
+# means the events never reached the slave.
+connection master;
+SET skip_replication= 0;
+TRUNCATE t1;
+sync_slave_with_master;
+connection slave;
+STOP SLAVE;
+SET GLOBAL replicate_events_marked_for_skip=FILTER_ON_MASTER;
+START SLAVE IO_THREAD;
+connection master;
+SET skip_replication= 1;
+INSERT INTO t1(a) VALUES (1);
+SET skip_replication= 0;
+INSERT INTO t1(a) VALUES (2);
+--disable_connect_log
+--source include/save_master_pos.inc
+--enable_connect_log
+connection slave;
+--disable_connect_log
+--source include/sync_io_with_master.inc
+--enable_connect_log
+STOP SLAVE IO_THREAD;
+SET GLOBAL replicate_events_marked_for_skip=REPLICATE;
+START SLAVE;
+connection master;
+sync_slave_with_master;
+connection slave;
+# Now only the second insert of (2) should be visible, as the first was
+# filtered on the master, so even though the SQL thread ran without skipping
+# events, it will never see the event in the first place.
+SELECT * FROM t1;
+
+# Now tests that when slave-side filtering is configured, events _do_ reach
+# the slave.
+connection master;
+SET skip_replication= 0;
+TRUNCATE t1;
+sync_slave_with_master;
+connection slave;
+STOP SLAVE;
+SET GLOBAL replicate_events_marked_for_skip=FILTER_ON_SLAVE;
+START SLAVE IO_THREAD;
+connection master;
+SET skip_replication= 1;
+INSERT INTO t1(a) VALUES (1);
+SET skip_replication= 0;
+INSERT INTO t1(a) VALUES (2);
+--disable_connect_log
+--source include/save_master_pos.inc
+--enable_connect_log
+connection slave;
+--disable_connect_log
+--source include/sync_io_with_master.inc
+--enable_connect_log
+STOP SLAVE IO_THREAD;
+SET GLOBAL replicate_events_marked_for_skip=REPLICATE;
+START SLAVE;
+connection master;
+sync_slave_with_master;
+connection slave;
+# Now both inserts should be visible. Since filtering was configured to be
+# slave-side, the event is in the relay log, and when the SQL thread ran we
+# had disabled filtering again.
+SELECT * FROM t1 ORDER BY a;
+
+
+# Clean up.
+connection master;
+SET skip_replication=0;
+DROP TABLE t1,t2;
+connection slave;
+STOP SLAVE;
+SET GLOBAL replicate_events_marked_for_skip=REPLICATE;
+START SLAVE;
+
+--disable_connect_log
+--source include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_special_charset.opt
b/mysql-test/suite/binlog_encryption/rpl_special_charset.opt
new file mode 100644
index 0000000..b071fb2
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_special_charset.opt
@@ -0,0 +1 @@
+--character-set-server=utf16
diff --git
a/mysql-test/suite/binlog_encryption/rpl_special_charset.result
b/mysql-test/suite/binlog_encryption/rpl_special_charset.result
new file mode 100644
index 0000000..218ced9
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_special_charset.result
@@ -0,0 +1,10 @@
+include/master-slave.inc
+[connection master]
+call mtr.add_suppression("Cannot use utf16 as character_set_client");
+CREATE TABLE t1(i VARCHAR(20));
+INSERT INTO t1 VALUES (0xFFFF);
+connection slave;
+include/diff_tables.inc [master:t1, slave:t1]
+connection master;
+DROP TABLE t1;
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_special_charset.test
b/mysql-test/suite/binlog_encryption/rpl_special_charset.test
new file mode 100644
index 0000000..9ad630d
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_special_charset.test
@@ -0,0 +1,35 @@
+#
+# The test was taken from the rpl suite as is
+#
+
+################################################################################
+# Bug#19855907 IO THREAD AUTHENTICATION ISSUE WITH SOME CHARACTER SETS
+# Problem: IO thread fails to connect to master if servers are configured with
+# special character sets like utf16, utf32, ucs2.
+#
+# Analysis: MySQL server does not support a few special character sets like
+# utf16, utf32 and ucs2 as the client's character set.
+# When IO thread is trying to connect to Master, it sets server's character
+# set as client's character set. When Slave server is started with these
+# special character sets, IO thread (a connection to Master) fails because
+# of the above said reason.
+#
+# Fix: If server's character set is not supported as client's character set,
+# then set the default client character set (latin1) as client's character set.
+###############################################################################
+--source include/master-slave.inc
+--enable_connect_log
+
+call mtr.add_suppression("Cannot use utf16 as character_set_client");
+CREATE TABLE t1(i VARCHAR(20));
+INSERT INTO t1 VALUES (0xFFFF);
+--sync_slave_with_master
+--let diff_tables=master:t1, slave:t1
+--disable_connect_log
+--source include/diff_tables.inc
+--enable_connect_log
+# Cleanup
+--connection master
+DROP TABLE t1;
+--disable_connect_log
+--source include/rpl_end.inc
diff --git
a/mysql-test/suite/binlog_encryption/rpl_sporadic_master-master.opt
b/mysql-test/suite/binlog_encryption/rpl_sporadic_master-master.opt
new file mode 100644
index 0000000..5f038b6
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_sporadic_master-master.opt
@@ -0,0 +1 @@
+--debug-sporadic-binlog-dump-fail --debug-max-binlog-dump-events=2
diff --git
a/mysql-test/suite/binlog_encryption/rpl_sporadic_master.result
b/mysql-test/suite/binlog_encryption/rpl_sporadic_master.result
new file mode 100644
index 0000000..32ae637
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_sporadic_master.result
@@ -0,0 +1,28 @@
+include/master-slave.inc
+[connection master]
+create table t2(n int);
+create table t1(n int not null auto_increment primary key);
+insert into t1 values (NULL),(NULL);
+truncate table t1;
+insert into t1 values (4),(NULL);
+connection slave;
+include/stop_slave.inc
+include/start_slave.inc
+connection master;
+insert into t1 values (NULL),(NULL);
+flush logs;
+truncate table t1;
+insert into t1 values (10),(NULL),(NULL),(NULL),(NULL),(NULL);
+connection slave;
+select * from t1 ORDER BY n;
+n
+10
+11
+12
+13
+14
+15
+connection master;
+drop table t1,t2;
+connection slave;
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_sporadic_master.test
b/mysql-test/suite/binlog_encryption/rpl_sporadic_master.test
new file mode 100644
index 0000000..e6c14f8
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_sporadic_master.test
@@ -0,0 +1,34 @@
+#
+# The test was taken from the rpl suite as is
+#
+
+# test to see if replication can continue when master sporadically fails on
+# COM_BINLOG_DUMP and additionally limits the number of events per dump
+
+source include/master-slave.inc;
+--enable_connect_log
+
+create table t2(n int);
+create table t1(n int not null auto_increment primary key);
+insert into t1 values (NULL),(NULL);
+truncate table t1;
+# We have to use 4 in the following to make this test work with all table types
+insert into t1 values (4),(NULL);
+sync_slave_with_master;
+--disable_connect_log
+--source include/stop_slave.inc
+--source include/start_slave.inc
+--enable_connect_log
+connection master;
+insert into t1 values (NULL),(NULL);
+flush logs;
+truncate table t1;
+insert into t1 values (10),(NULL),(NULL),(NULL),(NULL),(NULL);
+sync_slave_with_master;
+select * from t1 ORDER BY n;
+connection master;
+drop table t1,t2;
+sync_slave_with_master;
+
+--disable_connect_log
+--source include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_ssl.result
b/mysql-test/suite/binlog_encryption/rpl_ssl.result
new file mode 100644
index 0000000..0b3a6cd
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_ssl.result
@@ -0,0 +1,55 @@
+include/master-slave.inc
+[connection master]
+connection master;
+create user replssl@localhost;
+grant replication slave on *.* to replssl@localhost require ssl;
+create table t1 (t int auto_increment, KEY(t));
+connection slave;
+stop slave;
+change master to
+master_user='replssl',
+master_password='',
+master_ssl=1,
+master_ssl_ca ='MYSQL_TEST_DIR/std_data/cacert.pem',
+master_ssl_cert='MYSQL_TEST_DIR/std_data/client-cert.pem',
+master_ssl_key='MYSQL_TEST_DIR/std_data/client-key.pem';
+start slave;
+connection master;
+insert into t1 values(1);
+connection slave;
+select * from t1;
+t
+1
+Master_SSL_Allowed = 'Yes'
+Master_SSL_CA_Path = ''
+Master_SSL_CA_File = 'MYSQL_TEST_DIR/std_data/cacert.pem'
+Master_SSL_Cert = 'MYSQL_TEST_DIR/std_data/client-cert.pem'
+Master_SSL_Key = 'MYSQL_TEST_DIR/std_data/client-key.pem'
+include/check_slave_is_running.inc
+STOP SLAVE;
+select * from t1;
+t
+1
+connection master;
+insert into t1 values (NULL);
+connection slave;
+include/wait_for_slave_to_start.inc
+Master_SSL_Allowed = 'Yes'
+Master_SSL_CA_Path = ''
+Master_SSL_CA_File = 'MYSQL_TEST_DIR/std_data/cacert.pem'
+Master_SSL_Cert = 'MYSQL_TEST_DIR/std_data/client-cert.pem'
+Master_SSL_Key = 'MYSQL_TEST_DIR/std_data/client-key.pem'
+include/check_slave_is_running.inc
+connection master;
+drop user replssl@localhost;
+drop table t1;
+connection slave;
+include/stop_slave.inc
+CHANGE MASTER TO
+master_user = 'root',
+master_ssl = 0,
+master_ssl_ca = '',
+master_ssl_cert = '',
+master_ssl_key = '';
+End of 5.0 tests
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_ssl.test
b/mysql-test/suite/binlog_encryption/rpl_ssl.test
new file mode 100644
index 0000000..ed39045
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_ssl.test
@@ -0,0 +1,126 @@
+#
+# The test was taken from the rpl suite as is
+#
+
+--enable_connect_log
+
+--disable_connect_log
+source include/have_ssl_communication.inc;
+source include/master-slave.inc;
+--enable_connect_log
+
+# create a user for replication that requires ssl encryption
+connection master;
+create user replssl@localhost;
+grant replication slave on *.* to replssl@localhost require ssl;
+create table t1 (t int auto_increment, KEY(t));
+
+sync_slave_with_master;
+
+# Set slave to use SSL for connection to master
+stop slave;
+--replace_result $MYSQL_TEST_DIR MYSQL_TEST_DIR
+eval change master to
+ master_user='replssl',
+ master_password='',
+ master_ssl=1,
+ master_ssl_ca ='$MYSQL_TEST_DIR/std_data/cacert.pem',
+ master_ssl_cert='$MYSQL_TEST_DIR/std_data/client-cert.pem',
+ master_ssl_key='$MYSQL_TEST_DIR/std_data/client-key.pem';
+start slave;
+
+# Switch to master and insert one record, then sync it to slave
+connection master;
+insert into t1 values(1);
+sync_slave_with_master;
+
+# The record should now be on slave
+select * from t1;
+
+# The slave is synced and waiting/reading from master
+# SHOW SLAVE STATUS will show "Waiting for master to send event"
+let $status_items= Master_SSL_Allowed, Master_SSL_CA_Path, Master_SSL_CA_File, Master_SSL_Cert, Master_SSL_Key;
+--disable_connect_log
+source include/show_slave_status.inc;
+source include/check_slave_is_running.inc;
+--enable_connect_log
+
+# Stop the slave, as reported in bug#21871 it would hang
+STOP SLAVE;
+
+select * from t1;
+
+# Do the same thing a number of times
+disable_query_log;
+disable_result_log;
+# 2007-11-27 mats Bug #32756 Starting and stopping the slave in a loop can lose rows
+# After discussions with Engineering, I'm disabling this part of the test to avoid it causing
+# red trees.
+disable_parsing;
+let $i= 100;
+while ($i)
+{
+ start slave;
+ connection master;
+ insert into t1 values (NULL);
+ select * from t1; # Some variance
+ connection slave;
+ select * from t1; # Some variance
+ stop slave;
+ dec $i;
+}
+enable_parsing;
+START SLAVE;
+enable_query_log;
+enable_result_log;
+connection master;
+# INSERT one more record to make sure
+# the sync has something to do
+insert into t1 values (NULL);
+let $master_count= `select count(*) from t1`;
+
+sync_slave_with_master;
+--disable_connect_log
+--source include/wait_for_slave_to_start.inc
+source include/show_slave_status.inc;
+source include/check_slave_is_running.inc;
+--enable_connect_log
+
+let $slave_count= `select count(*) from t1`;
+
+if ($slave_count != $master_count)
+{
+ echo master and slave differed in number of rows;
+ echo master: $master_count;
+ echo slave: $slave_count;
+
+ connection master;
+ echo === master ===;
+ select count(*) t1;
+ select * from t1;
+ connection slave;
+ echo === slave ===;
+ select count(*) t1;
+ select * from t1;
+ query_vertical show slave status;
+}
+
+connection master;
+drop user replssl@localhost;
+drop table t1;
+sync_slave_with_master;
+
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+CHANGE MASTER TO
+ master_user = 'root',
+ master_ssl = 0,
+ master_ssl_ca = '',
+ master_ssl_cert = '',
+ master_ssl_key = '';
+
+--echo End of 5.0 tests
+--let $rpl_only_running_threads= 1
+--disable_connect_log
+--source include/rpl_end.inc
diff --git
a/mysql-test/suite/binlog_encryption/rpl_stm_relay_ign_space-slave.opt
b/mysql-test/suite/binlog_encryption/rpl_stm_relay_ign_space-slave.opt
new file mode 100644
index 0000000..f780540
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_stm_relay_ign_space-slave.opt
@@ -0,0 +1 @@
+--relay-log-space-limit=8192 --relay-log-purge --max-relay-log-size=4096
diff --git
a/mysql-test/suite/binlog_encryption/rpl_stm_relay_ign_space.result
b/mysql-test/suite/binlog_encryption/rpl_stm_relay_ign_space.result
new file mode 100644
index 0000000..3113eec
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_stm_relay_ign_space.result
@@ -0,0 +1,6 @@
+include/master-slave.inc
+[connection master]
+include/assert.inc [Assert that relay log space is close to the limit]
+include/diff_tables.inc [master:test.t1,slave:test.t1]
+connection slave;
+include/rpl_end.inc
diff --git
a/mysql-test/suite/binlog_encryption/rpl_stm_relay_ign_space.test
b/mysql-test/suite/binlog_encryption/rpl_stm_relay_ign_space.test
new file mode 100644
index 0000000..164577b
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_stm_relay_ign_space.test
@@ -0,0 +1,117 @@
+#
+# The test was taken from the rpl suite as is
+#
+
+--enable_connect_log
+
+#
+# BUG#12400313 / BUG#64503 test case
+#
+#
+# Description
+# -----------
+#
+# This test case starts the slave server with:
+# --relay-log-space-limit=8192 --relay-log-purge --max-relay-log-size=4096
+#
+# Then it issues some queries that will cause the slave to reach
+# relay-log-space-limit. We lock the table so that the SQL thread is
+# not able to purge the log and then we issue some more statements.
+#
+# The purpose is to show that the IO thread will honor the limits
+# while the SQL thread is not able to purge the relay logs, which did
+# not happen before this patch. In addition we assert that while
+# ignoring the limit (SQL thread needs to rotate before purging), the
+# IO thread does not do it in an uncontrolled manner.
+
+--disable_connect_log
+--source include/have_binlog_format_statement.inc
+--source include/master-slave.inc
+--enable_connect_log
+
+--disable_query_log
+CREATE TABLE t1 (c1 TEXT) engine=InnoDB;
+
+INSERT INTO t1 VALUES
('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxx');
+INSERT INTO t1 VALUES
('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxx');
+INSERT INTO t1 VALUES
('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxx');
+INSERT INTO t1 VALUES
('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxx');
+
+--sync_slave_with_master
+
+# wait for the SQL thread to sleep
+--let $show_statement= SHOW PROCESSLIST
+--let $field= State
+--let $condition= = 'Slave has read all relay log; waiting for the slave I/O thread to update it'
+--disable_connect_log
+--source include/wait_show_condition.inc
+--enable_connect_log
+
+# now the io thread has set rli->ignore_space_limit
+# lets lock the table so that once the SQL thread awakes
+# it blocks there and does not set rli->ignore_space_limit
+# back to zero
+LOCK TABLE t1 WRITE;
+
+# now issue more statements that will overflow the
+# rli->log_space_limit (in this case ~10K)
+--connection master
+
+INSERT INTO t1 VALUES
('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxx');
+INSERT INTO t1 VALUES
('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxx');
+INSERT INTO t1 VALUES
('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxx');
+INSERT INTO t1 VALUES
('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxx');
+INSERT INTO t1 VALUES
('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxx');
+INSERT INTO t1 VALUES
('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxx');
+INSERT INTO t1 VALUES
('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxx');
+INSERT INTO t1 VALUES
('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxx');
+INSERT INTO t1 VALUES
('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxx');
+INSERT INTO t1 VALUES
('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxx');
+
+--connection slave
+
+# ASSERT that the IO thread waits for the SQL thread to release some
+# space before continuing
+--let $show_statement= SHOW PROCESSLIST
+--let $field= State
+--let $condition= LIKE 'Waiting for %'
+# before the patch (IO would have transferred everything)
+#--let $condition= = 'Waiting for master to send event'
+# after the patch (now it waits for space to be freed)
+#--let $condition= = 'Waiting for the slave SQL thread to free enough relay log space'
+--disable_connect_log
+--source include/wait_show_condition.inc
+--enable_connect_log
+
+# without the patch we can uncomment the following two lines and
+# watch the IO thread synchronize with the master, thus writing
+# relay logs way over the space limit
+#--connection master
+#--source include/sync_slave_io_with_master.inc
+
+## ASSERT that the IO thread has honored the limit+few bytes required to be able to purge
+--let $relay_log_space_while_sql_is_executing = query_get_value(SHOW SLAVE STATUS, Relay_Log_Space, 1)
+--let $relay_log_space_limit = query_get_value(SHOW VARIABLES LIKE "relay_log_space_limit", Value, 1)
+--let $assert_text= Assert that relay log space is close to the limit
+--let $assert_cond= $relay_log_space_while_sql_is_executing <= $relay_log_space_limit * 1.15
+--disable_connect_log
+--source include/assert.inc
+--enable_connect_log
+
+# unlock the table and let SQL thread continue applying events
+UNLOCK TABLES;
+
+--connection master
+--sync_slave_with_master
+--let $diff_tables=master:test.t1,slave:test.t1
+--disable_connect_log
+--source include/diff_tables.inc
+--enable_connect_log
+
+--connection master
+DROP TABLE t1;
+--enable_query_log
+--sync_slave_with_master
+
+--disable_connect_log
+--source include/rpl_end.inc
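Editorial aside: the assertion in the test above tolerates Relay_Log_Space overshooting relay_log_space_limit by up to 15%, since the IO thread must be allowed a few extra bytes so the SQL thread can purge. A minimal Python sketch of that tolerance check (illustrative only, not part of the patch; the function name is hypothetical):

```python
# Illustrative mirror of the mysqltest assert_cond
# "$relay_log_space_while_sql_is_executing <= $relay_log_space_limit * 1.15".
def relay_space_within_limit(space_used: int, space_limit: int, slack: float = 0.15) -> bool:
    """Return True when relay log usage stays within the limit plus slack."""
    return space_used <= space_limit * (1 + slack)

# Example: a 4096-byte limit tolerates up to ~4710 bytes (15% overshoot).
print(relay_space_within_limit(4500, 4096))  # within tolerance
print(relay_space_within_limit(5000, 4096))  # over tolerance
```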
diff --git a/mysql-test/suite/binlog_encryption/rpl_switch_stm_row_mixed.result b/mysql-test/suite/binlog_encryption/rpl_switch_stm_row_mixed.result
new file mode 100644
index 0000000..289e516
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_switch_stm_row_mixed.result
@@ -0,0 +1,457 @@
+include/master-slave.inc
+[connection master]
+connection master;
+drop database if exists mysqltest1;
+create database mysqltest1;
+use mysqltest1;
+set @my_binlog_format= @@global.binlog_format;
+set session binlog_format=mixed;
+show session variables like "binlog_format%";
+Variable_name Value
+binlog_format MIXED
+set session binlog_format=statement;
+show session variables like "binlog_format%";
+Variable_name Value
+binlog_format STATEMENT
+set session binlog_format=row;
+show session variables like "binlog_format%";
+Variable_name Value
+binlog_format ROW
+set global binlog_format=DEFAULT;
+show global variables like "binlog_format%";
+Variable_name Value
+binlog_format STATEMENT
+set global binlog_format=MIXED;
+show global variables like "binlog_format%";
+Variable_name Value
+binlog_format MIXED
+set global binlog_format=STATEMENT;
+show global variables like "binlog_format%";
+Variable_name Value
+binlog_format STATEMENT
+set global binlog_format=ROW;
+show global variables like "binlog_format%";
+Variable_name Value
+binlog_format ROW
+show session variables like "binlog_format%";
+Variable_name Value
+binlog_format ROW
+select @@global.binlog_format, @@session.binlog_format;
+@@global.binlog_format @@session.binlog_format
+ROW ROW
+CREATE TABLE t1 (a varchar(100));
+prepare stmt1 from 'insert into t1 select concat(UUID(),?)';
+set @string="emergency_1_";
+insert into t1 values("work_2_");
+execute stmt1 using @string;
+deallocate prepare stmt1;
+prepare stmt1 from 'insert into t1 select ?';
+insert into t1 values(concat(UUID(),"work_3_"));
+execute stmt1 using @string;
+deallocate prepare stmt1;
+insert into t1 values(concat("for_4_",UUID()));
+insert into t1 select "yesterday_5_";
+create temporary table tmp(a char(100));
+insert into tmp values("see_6_");
+set binlog_format=statement;
+ERROR HY000: Cannot switch out of the row-based binary log format when the session has open temporary tables
+insert into t1 select * from tmp;
+drop temporary table tmp;
+set binlog_format=statement;
+show global variables like "binlog_format%";
+Variable_name Value
+binlog_format ROW
+show session variables like "binlog_format%";
+Variable_name Value
+binlog_format STATEMENT
+select @@global.binlog_format, @@session.binlog_format;
+@@global.binlog_format @@session.binlog_format
+ROW STATEMENT
+set global binlog_format=statement;
+show global variables like "binlog_format%";
+Variable_name Value
+binlog_format STATEMENT
+show session variables like "binlog_format%";
+Variable_name Value
+binlog_format STATEMENT
+select @@global.binlog_format, @@session.binlog_format;
+@@global.binlog_format @@session.binlog_format
+STATEMENT STATEMENT
+prepare stmt1 from 'insert into t1 select ?';
+set @string="emergency_7_";
+insert into t1 values("work_8_");
+execute stmt1 using @string;
+deallocate prepare stmt1;
+prepare stmt1 from 'insert into t1 select ?';
+insert into t1 values("work_9_");
+execute stmt1 using @string;
+deallocate prepare stmt1;
+insert into t1 values("for_10_");
+insert into t1 select "yesterday_11_";
+set binlog_format=statement;
+select @@global.binlog_format, @@session.binlog_format;
+@@global.binlog_format @@session.binlog_format
+STATEMENT STATEMENT
+set global binlog_format=statement;
+select @@global.binlog_format, @@session.binlog_format;
+@@global.binlog_format @@session.binlog_format
+STATEMENT STATEMENT
+prepare stmt1 from 'insert into t1 select ?';
+set @string="emergency_12_";
+insert into t1 values("work_13_");
+execute stmt1 using @string;
+deallocate prepare stmt1;
+prepare stmt1 from 'insert into t1 select ?';
+insert into t1 values("work_14_");
+execute stmt1 using @string;
+deallocate prepare stmt1;
+insert into t1 values("for_15_");
+insert into t1 select "yesterday_16_";
+set global binlog_format=mixed;
+select @@global.binlog_format, @@session.binlog_format;
+@@global.binlog_format @@session.binlog_format
+MIXED STATEMENT
+set binlog_format=default;
+select @@global.binlog_format, @@session.binlog_format;
+@@global.binlog_format @@session.binlog_format
+MIXED MIXED
+prepare stmt1 from 'insert into t1 select concat(UUID(),?)';
+set @string="emergency_17_";
+insert into t1 values("work_18_");
+execute stmt1 using @string;
+deallocate prepare stmt1;
+prepare stmt1 from 'insert into t1 select ?';
+insert into t1 values(concat(UUID(),"work_19_"));
+execute stmt1 using @string;
+deallocate prepare stmt1;
+insert into t1 values(concat("for_20_",UUID()));
+insert into t1 select "yesterday_21_";
+prepare stmt1 from 'insert into t1 select ?';
+insert into t1 values(concat(UUID(),"work_22_"));
+execute stmt1 using @string;
+deallocate prepare stmt1;
+insert into t1 values(concat("for_23_",UUID()));
+insert into t1 select "yesterday_24_";
+create table t2 ENGINE=MyISAM select rpad(UUID(),100,' ');
+create table t3 select 1 union select UUID();
+create table t4 select * from t1 where 3 in (select 1 union select 2 union select UUID() union select 3);
+create table t5 select * from t1 where 3 in (select 1 union select 2 union select curdate() union select 3);
+Warnings:
+Warning 1292 Incorrect datetime value: '3'
+insert into t5 select UUID() from t1 where 3 in (select 1 union select 2 union select 3 union select * from t4);
+create procedure foo()
+begin
+insert into t1 values("work_25_");
+insert into t1 values(concat("for_26_",UUID()));
+insert into t1 select "yesterday_27_";
+end|
+create procedure foo2()
+begin
+insert into t1 values(concat("emergency_28_",UUID()));
+insert into t1 values("work_29_");
+insert into t1 values(concat("for_30_",UUID()));
+set session binlog_format=row; # accepted for stored procs
+insert into t1 values("more work_31_");
+set session binlog_format=mixed;
+end|
+create function foo3() returns bigint unsigned
+begin
+set session binlog_format=row; # rejected for stored funcs
+insert into t1 values("alarm");
+return 100;
+end|
+create procedure foo4(x varchar(100))
+begin
+insert into t1 values(concat("work_250_",x));
+insert into t1 select "yesterday_270_";
+end|
+call foo();
+call foo2();
+call foo4("hello");
+call foo4(UUID());
+call foo4("world");
+select foo3();
+ERROR HY000: Cannot change the binary logging format inside a stored function or trigger
+select * from t1 where a="alarm";
+a
+drop function foo3;
+create function foo3() returns bigint unsigned
+begin
+insert into t1 values("foo3_32_");
+call foo();
+return 100;
+end|
+insert into t2 select foo3();
+prepare stmt1 from 'insert into t2 select foo3()';
+execute stmt1;
+execute stmt1;
+deallocate prepare stmt1;
+create function foo4() returns bigint unsigned
+begin
+insert into t2 select foo3();
+return 100;
+end|
+select foo4();
+foo4()
+100
+prepare stmt1 from 'select foo4()';
+execute stmt1;
+foo4()
+100
+execute stmt1;
+foo4()
+100
+deallocate prepare stmt1;
+create function foo5() returns bigint unsigned
+begin
+insert into t2 select UUID();
+return 100;
+end|
+select foo5();
+foo5()
+100
+prepare stmt1 from 'select foo5()';
+execute stmt1;
+foo5()
+100
+execute stmt1;
+foo5()
+100
+deallocate prepare stmt1;
+create function foo6(x varchar(100)) returns bigint unsigned
+begin
+insert into t2 select x;
+return 100;
+end|
+select foo6("foo6_1_");
+foo6("foo6_1_")
+100
+select foo6(concat("foo6_2_",UUID()));
+foo6(concat("foo6_2_",UUID()))
+100
+prepare stmt1 from 'select foo6(concat("foo6_3_",UUID()))';
+execute stmt1;
+foo6(concat("foo6_3_",UUID()))
+100
+execute stmt1;
+foo6(concat("foo6_3_",UUID()))
+100
+deallocate prepare stmt1;
+create view v1 as select uuid();
+create table t11 (data varchar(255));
+insert into t11 select * from v1;
+insert into t11 select TABLE_NAME from INFORMATION_SCHEMA.TABLES where TABLE_SCHEMA='mysqltest1' and TABLE_NAME IN ('v1','t11');
+prepare stmt1 from "insert into t11 select TABLE_NAME from INFORMATION_SCHEMA.TABLES where TABLE_SCHEMA='mysqltest1' and TABLE_NAME IN ('v1','t11')";
+execute stmt1;
+execute stmt1;
+deallocate prepare stmt1;
+create trigger t11_bi before insert on t11 for each row
+begin
+set NEW.data = concat(NEW.data,UUID());
+end|
+insert into t11 values("try_560_");
+insert delayed into t2 values("delay_1_");
+insert delayed into t2 values(concat("delay_2_",UUID()));
+insert delayed into t2 values("delay_6_");
+insert delayed into t2 values(rand());
+set @a=2.345;
+insert delayed into t2 values(@a);
+connection slave;
+connection master;
+create table t20 select * from t1;
+create table t21 select * from t2;
+create table t22 select * from t3;
+drop table t1,t2,t3;
+create table t1 (a int primary key auto_increment, b varchar(100));
+create table t2 (a int primary key auto_increment, b varchar(100));
+create table t3 (b varchar(100));
+create function f (x varchar(100)) returns int deterministic
+begin
+insert into t1 values(null,x);
+insert into t2 values(null,x);
+return 1;
+end|
+select f("try_41_");
+f("try_41_")
+1
+connection slave;
+use mysqltest1;
+insert into t2 values(2,null),(3,null),(4,null);
+delete from t2 where a>=2;
+connection master;
+select f("try_42_");
+f("try_42_")
+1
+connection slave;
+insert into t2 values(3,null),(4,null);
+delete from t2 where a>=3;
+connection master;
+prepare stmt1 from 'select f(?)';
+set @string="try_43_";
+insert into t1 values(null,"try_44_");
+execute stmt1 using @string;
+f(?)
+1
+deallocate prepare stmt1;
+connection slave;
+connection master;
+create table t12 select * from t1;
+drop table t1;
+create table t1 (a int, b varchar(100), key(a));
+select f("try_45_");
+f("try_45_")
+1
+create table t13 select * from t1;
+drop table t1;
+create table t1 (a int primary key auto_increment, b varchar(100));
+drop function f;
+create table t14 (unique (a)) select * from t2;
+truncate table t2;
+create function f1 (x varchar(100)) returns int deterministic
+begin
+insert into t1 values(null,x);
+return 1;
+end|
+create function f2 (x varchar(100)) returns int deterministic
+begin
+insert into t2 values(null,x);
+return 1;
+end|
+select f1("try_46_"),f2("try_47_");
+f1("try_46_") f2("try_47_")
+1 1
+connection slave;
+insert into t2 values(2,null),(3,null),(4,null);
+delete from t2 where a>=2;
+connection master;
+select f1("try_48_"),f2("try_49_");
+f1("try_48_") f2("try_49_")
+1 1
+insert into t3 values(concat("try_50_",f1("try_51_"),f2("try_52_")));
+connection slave;
+connection master;
+drop function f2;
+create function f2 (x varchar(100)) returns int deterministic
+begin
+declare y int;
+insert into t1 values(null,x);
+set y = (select count(*) from t2);
+return y;
+end|
+select f1("try_53_"),f2("try_54_");
+f1("try_53_") f2("try_54_")
+1 3
+connection slave;
+connection master;
+drop function f2;
+create trigger t1_bi before insert on t1 for each row
+begin
+insert into t2 values(null,"try_55_");
+end|
+insert into t1 values(null,"try_56_");
+alter table t1 modify a int, drop primary key;
+insert into t1 values(null,"try_57_");
+connection slave;
+connection master;
+CREATE TEMPORARY TABLE t15 SELECT UUID();
+create table t16 like t15;
+INSERT INTO t16 SELECT * FROM t15;
+insert into t16 values("try_65_");
+drop table t15;
+insert into t16 values("try_66_");
+connection slave;
+connection master;
+select count(*) from t1;
+count(*)
+7
+select count(*) from t2;
+count(*)
+5
+select count(*) from t3;
+count(*)
+1
+select count(*) from t4;
+count(*)
+29
+select count(*) from t5;
+count(*)
+58
+select count(*) from t11;
+count(*)
+8
+select count(*) from t20;
+count(*)
+66
+select count(*) from t21;
+count(*)
+19
+select count(*) from t22;
+count(*)
+2
+select count(*) from t12;
+count(*)
+4
+select count(*) from t13;
+count(*)
+1
+select count(*) from t14;
+count(*)
+4
+select count(*) from t16;
+count(*)
+3
+connection slave;
+connection master;
+DROP TABLE IF EXISTS t11;
+SET SESSION BINLOG_FORMAT=STATEMENT;
+CREATE TABLE t11 (song VARCHAR(255));
+LOCK TABLES t11 WRITE;
+SET SESSION BINLOG_FORMAT=ROW;
+INSERT INTO t11 VALUES('Several Species of Small Furry Animals Gathered Together in a Cave and Grooving With a Pict');
+SET SESSION BINLOG_FORMAT=STATEMENT;
+INSERT INTO t11 VALUES('Careful With That Axe, Eugene');
+UNLOCK TABLES;
+SELECT * FROM t11;
+song Several Species of Small Furry Animals Gathered Together in a Cave and Grooving With a Pict
+song Careful With That Axe, Eugene
+connection slave;
+USE mysqltest1;
+SELECT * FROM t11;
+song Several Species of Small Furry Animals Gathered Together in a Cave and Grooving With a Pict
+song Careful With That Axe, Eugene
+connection master;
+DROP TABLE IF EXISTS t12;
+SET SESSION BINLOG_FORMAT=MIXED;
+CREATE TABLE t12 (data LONG);
+LOCK TABLES t12 WRITE;
+INSERT INTO t12 VALUES(UUID());
+UNLOCK TABLES;
+connection slave;
+connection master;
+CREATE FUNCTION my_user()
+RETURNS CHAR(64)
+BEGIN
+DECLARE user CHAR(64);
+SELECT USER() INTO user;
+RETURN user;
+END $$
+CREATE FUNCTION my_current_user()
+RETURNS CHAR(64)
+BEGIN
+DECLARE user CHAR(64);
+SELECT CURRENT_USER() INTO user;
+RETURN user;
+END $$
+DROP TABLE IF EXISTS t13;
+CREATE TABLE t13 (data CHAR(64));
+INSERT INTO t13 VALUES (USER());
+INSERT INTO t13 VALUES (my_user());
+INSERT INTO t13 VALUES (CURRENT_USER());
+INSERT INTO t13 VALUES (my_current_user());
+connection slave;
+connection master;
+drop database mysqltest1;
+connection slave;
+connection master;
+set global binlog_format =@my_binlog_format;
+include/rpl_end.inc
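Editorial aside: the result file above exercises two restrictions on changing binlog_format — a session with open temporary tables cannot switch out of ROW (ER_TEMP_TABLE_PREVENTS_SWITCH_OUT_OF_RBR), and the format cannot be changed at all inside a stored function or trigger (ER_STORED_FUNCTION_PREVENTS_SWITCH_BINLOG_FORMAT). A rough Python model of those two rules (illustrative only; the function and its signature are hypothetical, not server code):

```python
# Hypothetical model of the two SET binlog_format restrictions shown above.
def can_switch_session_binlog_format(current, new, open_temp_tables=0,
                                     in_stored_function=False):
    """Return True if the session may switch from `current` to `new` format."""
    if in_stored_function:
        # ER_STORED_FUNCTION_PREVENTS_SWITCH_BINLOG_FORMAT
        return False
    if current == "ROW" and new != "ROW" and open_temp_tables > 0:
        # ER_TEMP_TABLE_PREVENTS_SWITCH_OUT_OF_RBR
        return False
    return True
```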
diff --git a/mysql-test/suite/binlog_encryption/rpl_switch_stm_row_mixed.test b/mysql-test/suite/binlog_encryption/rpl_switch_stm_row_mixed.test
new file mode 100644
index 0000000..fc43c6a
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_switch_stm_row_mixed.test
@@ -0,0 +1,634 @@
+#
+# The test was taken from the rpl suite with cosmetic changes
+#
+
+#
+# The rpl_switch_stm_row_mixed test covers
+#
+# - Master is switching explicitly between STATEMENT, ROW, and MIXED
+#   binlog format showing when it is possible and when not.
+# - Master switching from MIXED to RBR implicitly listing all use
+#   cases, e.g. a query invokes UUID(), thereafter to serve as the
+#   definition of MIXED binlog format
+# - correctness of execution
+
+--enable_connect_log
+
+--disable_connect_log
+-- source include/have_binlog_format_mixed_or_row.inc
+-- source include/master-slave.inc
+--enable_connect_log
+
+# Since this test generates row-based events in the binary log, the
+# slave SQL thread cannot be in STATEMENT mode to execute this test,
+# so we only execute it for MIXED and ROW as default value of
+# BINLOG_FORMAT.
+
+connection master;
+--disable_warnings
+drop database if exists mysqltest1;
+create database mysqltest1;
+--enable_warnings
+use mysqltest1;
+
+# Save binlog format
+set @my_binlog_format= @@global.binlog_format;
+
+# play with switching
+set session binlog_format=mixed;
+show session variables like "binlog_format%";
+set session binlog_format=statement;
+show session variables like "binlog_format%";
+set session binlog_format=row;
+show session variables like "binlog_format%";
+
+set global binlog_format=DEFAULT;
+show global variables like "binlog_format%";
+set global binlog_format=MIXED;
+show global variables like "binlog_format%";
+set global binlog_format=STATEMENT;
+show global variables like "binlog_format%";
+set global binlog_format=ROW;
+show global variables like "binlog_format%";
+show session variables like "binlog_format%";
+select @@global.binlog_format, @@session.binlog_format;
+
+CREATE TABLE t1 (a varchar(100));
+
+prepare stmt1 from 'insert into t1 select concat(UUID(),?)';
+set @string="emergency_1_";
+insert into t1 values("work_2_");
+execute stmt1 using @string;
+deallocate prepare stmt1;
+
+prepare stmt1 from 'insert into t1 select ?';
+insert into t1 values(concat(UUID(),"work_3_"));
+execute stmt1 using @string;
+deallocate prepare stmt1;
+
+insert into t1 values(concat("for_4_",UUID()));
+insert into t1 select "yesterday_5_";
+
+# verify that temp tables prevent a switch to SBR
+create temporary table tmp(a char(100));
+insert into tmp values("see_6_");
+--error ER_TEMP_TABLE_PREVENTS_SWITCH_OUT_OF_RBR
+set binlog_format=statement;
+insert into t1 select * from tmp;
+drop temporary table tmp;
+
+# Now we go to SBR
+set binlog_format=statement;
+show global variables like "binlog_format%";
+show session variables like "binlog_format%";
+select @@global.binlog_format, @@session.binlog_format;
+set global binlog_format=statement;
+show global variables like "binlog_format%";
+show session variables like "binlog_format%";
+select @@global.binlog_format, @@session.binlog_format;
+
+prepare stmt1 from 'insert into t1 select ?';
+set @string="emergency_7_";
+insert into t1 values("work_8_");
+execute stmt1 using @string;
+deallocate prepare stmt1;
+
+prepare stmt1 from 'insert into t1 select ?';
+insert into t1 values("work_9_");
+execute stmt1 using @string;
+deallocate prepare stmt1;
+
+insert into t1 values("for_10_");
+insert into t1 select "yesterday_11_";
+
+# test statement (is not default after wl#3368)
+set binlog_format=statement;
+select @@global.binlog_format, @@session.binlog_format;
+set global binlog_format=statement;
+select @@global.binlog_format, @@session.binlog_format;
+
+prepare stmt1 from 'insert into t1 select ?';
+set @string="emergency_12_";
+insert into t1 values("work_13_");
+execute stmt1 using @string;
+deallocate prepare stmt1;
+
+prepare stmt1 from 'insert into t1 select ?';
+insert into t1 values("work_14_");
+execute stmt1 using @string;
+deallocate prepare stmt1;
+
+insert into t1 values("for_15_");
+insert into t1 select "yesterday_16_";
+
+# and now the mixed mode
+
+set global binlog_format=mixed;
+select @@global.binlog_format, @@session.binlog_format;
+set binlog_format=default;
+select @@global.binlog_format, @@session.binlog_format;
+
+prepare stmt1 from 'insert into t1 select concat(UUID(),?)';
+set @string="emergency_17_";
+insert into t1 values("work_18_");
+execute stmt1 using @string;
+deallocate prepare stmt1;
+
+prepare stmt1 from 'insert into t1 select ?';
+insert into t1 values(concat(UUID(),"work_19_"));
+execute stmt1 using @string;
+deallocate prepare stmt1;
+
+insert into t1 values(concat("for_20_",UUID()));
+insert into t1 select "yesterday_21_";
+
+prepare stmt1 from 'insert into t1 select ?';
+insert into t1 values(concat(UUID(),"work_22_"));
+execute stmt1 using @string;
+deallocate prepare stmt1;
+
+insert into t1 values(concat("for_23_",UUID()));
+insert into t1 select "yesterday_24_";
+
+# Test of CREATE TABLE SELECT
+
+create table t2 ENGINE=MyISAM select rpad(UUID(),100,' ');
+create table t3 select 1 union select UUID();
+--disable_warnings
+create table t4 select * from t1 where 3 in (select 1 union select 2 union select UUID() union select 3);
+--enable_warnings
+create table t5 select * from t1 where 3 in (select 1 union select 2 union select curdate() union select 3);
+# what if UUID() is first:
+--disable_warnings
+insert into t5 select UUID() from t1 where 3 in (select 1 union select 2 union select 3 union select * from t4);
+--enable_warnings
+
+# inside a stored procedure
+
+delimiter |;
+create procedure foo()
+begin
+insert into t1 values("work_25_");
+insert into t1 values(concat("for_26_",UUID()));
+insert into t1 select "yesterday_27_";
+end|
+create procedure foo2()
+begin
+insert into t1 values(concat("emergency_28_",UUID()));
+insert into t1 values("work_29_");
+insert into t1 values(concat("for_30_",UUID()));
+set session binlog_format=row; # accepted for stored procs
+insert into t1 values("more work_31_");
+set session binlog_format=mixed;
+end|
+create function foo3() returns bigint unsigned
+begin
+ set session binlog_format=row; # rejected for stored funcs
+ insert into t1 values("alarm");
+ return 100;
+end|
+create procedure foo4(x varchar(100))
+begin
+insert into t1 values(concat("work_250_",x));
+insert into t1 select "yesterday_270_";
+end|
+delimiter ;|
+call foo();
+call foo2();
+call foo4("hello");
+call foo4(UUID());
+call foo4("world");
+
+# test that can't SET in a stored function
+--error ER_STORED_FUNCTION_PREVENTS_SWITCH_BINLOG_FORMAT
+select foo3();
+select * from t1 where a="alarm";
+
+# Tests of stored functions/triggers/views for BUG#20930 "Mixed
+# binlogging mode does not work with stored functions, triggers,
+# views"
+
+# Function which calls procedure
+drop function foo3;
+delimiter |;
+create function foo3() returns bigint unsigned
+begin
+ insert into t1 values("foo3_32_");
+ call foo();
+ return 100;
+end|
+delimiter ;|
+insert into t2 select foo3();
+
+prepare stmt1 from 'insert into t2 select foo3()';
+execute stmt1;
+execute stmt1;
+deallocate prepare stmt1;
+
+# Test if stored function calls stored function which calls procedure
+# which requires row-based.
+
+delimiter |;
+create function foo4() returns bigint unsigned
+begin
+ insert into t2 select foo3();
+ return 100;
+end|
+delimiter ;|
+select foo4();
+
+prepare stmt1 from 'select foo4()';
+execute stmt1;
+execute stmt1;
+deallocate prepare stmt1;
+
+# A simple stored function
+delimiter |;
+create function foo5() returns bigint unsigned
+begin
+ insert into t2 select UUID();
+ return 100;
+end|
+delimiter ;|
+select foo5();
+
+prepare stmt1 from 'select foo5()';
+execute stmt1;
+execute stmt1;
+deallocate prepare stmt1;
+
+# A simple stored function where UUID() is in the argument
+delimiter |;
+create function foo6(x varchar(100)) returns bigint unsigned
+begin
+ insert into t2 select x;
+ return 100;
+end|
+delimiter ;|
+select foo6("foo6_1_");
+select foo6(concat("foo6_2_",UUID()));
+
+prepare stmt1 from 'select foo6(concat("foo6_3_",UUID()))';
+execute stmt1;
+execute stmt1;
+deallocate prepare stmt1;
+
+
+# Test of views using UUID()
+
+create view v1 as select uuid();
+create table t11 (data varchar(255));
+insert into t11 select * from v1;
+# Test of querying INFORMATION_SCHEMA which parses the view's body,
+# to verify that it binlogs statement-based (is not polluted by
+# the parsing of the view's body).
+insert into t11 select TABLE_NAME from INFORMATION_SCHEMA.TABLES where TABLE_SCHEMA='mysqltest1' and TABLE_NAME IN ('v1','t11');
+prepare stmt1 from "insert into t11 select TABLE_NAME from INFORMATION_SCHEMA.TABLES where TABLE_SCHEMA='mysqltest1' and TABLE_NAME IN ('v1','t11')";
+execute stmt1;
+execute stmt1;
+deallocate prepare stmt1;
+
+# Test of triggers with UUID()
+delimiter |;
+create trigger t11_bi before insert on t11 for each row
+begin
+ set NEW.data = concat(NEW.data,UUID());
+end|
+delimiter ;|
+insert into t11 values("try_560_");
+
+# Test that INSERT DELAYED works in mixed mode (BUG#20649)
+insert delayed into t2 values("delay_1_");
+insert delayed into t2 values(concat("delay_2_",UUID()));
+insert delayed into t2 values("delay_6_");
+
+# Test for BUG#20633 (INSERT DELAYED RAND()/user_variable does not
+# replicate fine in statement-based ; we test that in mixed mode it
+# works).
+insert delayed into t2 values(rand());
+set @a=2.345;
+insert delayed into t2 values(@a);
+
+# With INSERT DELAYED, rows are written to the binlog after they are
+# written to the table. Therefore, it is not enough to wait until the
+# rows make it to t2 on the master (the rows may not be in the binlog
+# at that time, and may still not be in the binlog when
+# sync_slave_with_master is later called). Instead, we wait until the
+# rows make it to t2 on the slave. We first call
+# sync_slave_with_master, so that we are sure that t2 has been created
+# on the slave.
+sync_slave_with_master;
+let $wait_condition= SELECT COUNT(*) = 19 FROM mysqltest1.t2;
+
+--disable_connect_log
+--source include/wait_condition.inc
+--enable_connect_log
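Editorial aside: include/wait_condition.inc simply re-evaluates $wait_condition until it holds or a timeout expires, which is why the comment above stresses waiting on the slave's row count rather than on binlog position. A generic polling loop of that shape (a Python sketch, not MTR's actual implementation):

```python
import time

# Generic poll-until-true wait, similar in spirit to mysqltest's
# include/wait_condition.inc (not its actual implementation).
def wait_for(condition, timeout=30.0, poll_interval=0.1):
    """Poll `condition` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while True:
        if condition():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(poll_interval)
```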
+connection master;
+
+# If you want to do manual testing of the mixed mode regarding UDFs (not
+# testable automatically as quite platform- and compiler-dependent),
+# you just need to set the variable below to 1, and to
+# "make udf_example.so" in sql/, and to copy sql/udf_example.so to
+# MYSQL_TEST_DIR/lib/mysql.
+let $you_want_to_test_UDF=0;
+if ($you_want_to_test_UDF)
+{
+ CREATE FUNCTION metaphon RETURNS STRING SONAME 'udf_example.so';
+ prepare stmt1 from 'insert into t1 select metaphon(?)';
+ set @string="emergency_133_";
+ insert into t1 values("work_134_");
+ execute stmt1 using @string;
+ deallocate prepare stmt1;
+ prepare stmt1 from 'insert into t1 select ?';
+ insert into t1 values(metaphon("work_135_"));
+ execute stmt1 using @string;
+ deallocate prepare stmt1;
+ insert into t1 values(metaphon("for_136_"));
+ insert into t1 select "yesterday_137_";
+ create table t6 select metaphon("for_138_");
+ create table t7 select 1 union select metaphon("for_139_");
+ create table t8 select * from t1 where 3 in (select 1 union select 2 union select metaphon("for_140_") union select 3);
+ create table t9 select * from t1 where 3 in (select 1 union select 2 union select curdate() union select 3);
+}
+
+create table t20 select * from t1; # save for comparing later
+create table t21 select * from t2;
+create table t22 select * from t3;
+drop table t1,t2,t3;
+
+# This tests the fix to
+# BUG#19630 stored function inserting into two auto_increment breaks statement-based binlog
+# We verify that under the mixed binlog mode, a stored function
+# modifying at least two tables having an auto_increment column
+# is binlogged row-based. Indeed, in statement-based binlogging,
+# only the auto_increment value generated for the first table
+# is recorded in the binlog; the value generated for the 2nd
+# table is missing.
+
+create table t1 (a int primary key auto_increment, b varchar(100));
+create table t2 (a int primary key auto_increment, b varchar(100));
+create table t3 (b varchar(100));
+delimiter |;
+create function f (x varchar(100)) returns int deterministic
+begin
+ insert into t1 values(null,x);
+ insert into t2 values(null,x);
+ return 1;
+end|
+delimiter ;|
+select f("try_41_");
+# Two operations which compensate each other except that their net
+# effect is that they advance the auto_increment counter of t2 on slave:
+sync_slave_with_master;
+use mysqltest1;
+insert into t2 values(2,null),(3,null),(4,null);
+delete from t2 where a>=2;
+
+connection master;
+# this is the call which didn't replicate well
+select f("try_42_");
+sync_slave_with_master;
+
+# now use prepared statement and test again, just to see that the RBB
+# mode isn't set at PREPARE but at EXECUTE.
+
+insert into t2 values(3,null),(4,null);
+delete from t2 where a>=3;
+
+connection master;
+prepare stmt1 from 'select f(?)';
+set @string="try_43_";
+insert into t1 values(null,"try_44_"); # should be SBB
+execute stmt1 using @string; # should be RBB
+deallocate prepare stmt1;
+sync_slave_with_master;
+
+# verify that if only one table has auto_inc, it does not trigger RBB
+# (we'll check in binlog further below)
+
+connection master;
+create table t12 select * from t1; # save for comparing later
+drop table t1;
+create table t1 (a int, b varchar(100), key(a));
+select f("try_45_");
+
+# restore table's key
+create table t13 select * from t1;
+drop table t1;
+create table t1 (a int primary key auto_increment, b varchar(100));
+
+# now test if it's two functions, each of them inserts in one table
+
+drop function f;
+# we need a unique key to have sorting of rows by mysqldump
+create table t14 (unique (a)) select * from t2;
+truncate table t2;
+delimiter |;
+create function f1 (x varchar(100)) returns int deterministic
+begin
+ insert into t1 values(null,x);
+ return 1;
+end|
+create function f2 (x varchar(100)) returns int deterministic
+begin
+ insert into t2 values(null,x);
+ return 1;
+end|
+delimiter ;|
+select f1("try_46_"),f2("try_47_");
+
+sync_slave_with_master;
+insert into t2 values(2,null),(3,null),(4,null);
+delete from t2 where a>=2;
+
+connection master;
+# Test with SELECT and INSERT
+select f1("try_48_"),f2("try_49_");
+insert into t3 values(concat("try_50_",f1("try_51_"),f2("try_52_")));
+sync_slave_with_master;
+
+# verify that if f2 does only read on an auto_inc table, this does not
+# switch to RBB
+connection master;
+drop function f2;
+delimiter |;
+create function f2 (x varchar(100)) returns int deterministic
+begin
+ declare y int;
+ insert into t1 values(null,x);
+ set y = (select count(*) from t2);
+ return y;
+end|
+delimiter ;|
+select f1("try_53_"),f2("try_54_");
+sync_slave_with_master;
+
+# And now, a normal statement with a trigger (no stored functions)
+
+connection master;
+drop function f2;
+delimiter |;
+create trigger t1_bi before insert on t1 for each row
+begin
+ insert into t2 values(null,"try_55_");
+end|
+delimiter ;|
+insert into t1 values(null,"try_56_");
+# and now remove one auto_increment and verify SBB
+alter table t1 modify a int, drop primary key;
+insert into t1 values(null,"try_57_");
+sync_slave_with_master;
+
+# Test for BUG#20499 "mixed mode with temporary table breaks binlog"
+# Slave used to have only 2 rows instead of 3.
+connection master;
+CREATE TEMPORARY TABLE t15 SELECT UUID();
+create table t16 like t15;
+INSERT INTO t16 SELECT * FROM t15;
+# we'll verify that this one is done RBB
+insert into t16 values("try_65_");
+drop table t15;
+# we'll verify that this one is done SBB
+insert into t16 values("try_66_");
+sync_slave_with_master;
+
+# and now compare:
+
+connection master;
+
+# first check that data on master is sensible
+select count(*) from t1;
+select count(*) from t2;
+select count(*) from t3;
+select count(*) from t4;
+select count(*) from t5;
+select count(*) from t11;
+select count(*) from t20;
+select count(*) from t21;
+select count(*) from t22;
+select count(*) from t12;
+select count(*) from t13;
+select count(*) from t14;
+select count(*) from t16;
+if ($you_want_to_test_UDF)
+{
+ select count(*) from t6;
+ select count(*) from t7;
+ select count(*) from t8;
+ select count(*) from t9;
+}
+
+sync_slave_with_master;
+
+#
+# Bug#20863 If binlog format is changed between update and unlock of
+# tables, wrong binlog
+#
+
+connection master;
+DROP TABLE IF EXISTS t11;
+SET SESSION BINLOG_FORMAT=STATEMENT;
+CREATE TABLE t11 (song VARCHAR(255));
+LOCK TABLES t11 WRITE;
+SET SESSION BINLOG_FORMAT=ROW;
+INSERT INTO t11 VALUES('Several Species of Small Furry Animals Gathered Together in a Cave and Grooving With a Pict');
+SET SESSION BINLOG_FORMAT=STATEMENT;
+INSERT INTO t11 VALUES('Careful With That Axe, Eugene');
+UNLOCK TABLES;
+
+--query_vertical SELECT * FROM t11
+sync_slave_with_master;
+USE mysqltest1;
+--query_vertical SELECT * FROM t11
+
+connection master;
+DROP TABLE IF EXISTS t12;
+SET SESSION BINLOG_FORMAT=MIXED;
+CREATE TABLE t12 (data LONG);
+LOCK TABLES t12 WRITE;
+INSERT INTO t12 VALUES(UUID());
+UNLOCK TABLES;
+sync_slave_with_master;
+
+#
+# BUG#28086: SBR of USER() becomes corrupted on slave
+#
+
+connection master;
+
+# Just to get something that is non-trivial, albeit still simple, we
+# stuff the result of USER() and CURRENT_USER() into a variable.
+--delimiter $$
+CREATE FUNCTION my_user()
+ RETURNS CHAR(64)
+BEGIN
+ DECLARE user CHAR(64);
+ SELECT USER() INTO user;
+ RETURN user;
+END $$
+--delimiter ;
+
+--delimiter $$
+CREATE FUNCTION my_current_user()
+ RETURNS CHAR(64)
+BEGIN
+ DECLARE user CHAR(64);
+ SELECT CURRENT_USER() INTO user;
+ RETURN user;
+END $$
+--delimiter ;
+
+DROP TABLE IF EXISTS t13;
+CREATE TABLE t13 (data CHAR(64));
+INSERT INTO t13 VALUES (USER());
+INSERT INTO t13 VALUES (my_user());
+INSERT INTO t13 VALUES (CURRENT_USER());
+INSERT INTO t13 VALUES (my_current_user());
+
+sync_slave_with_master;
+
+# as we're using UUID we don't SELECT but use "diff" like in rpl_row_UUID
+--exec $MYSQL_DUMP --compact --order-by-primary --skip-extended-insert --no-create-info mysqltest1 > $MYSQLTEST_VARDIR/tmp/rpl_switch_stm_row_mixed_master.sql
+--exec $MYSQL_DUMP_SLAVE --compact --order-by-primary --skip-extended-insert --no-create-info mysqltest1 > $MYSQLTEST_VARDIR/tmp/rpl_switch_stm_row_mixed_slave.sql
+
+# Let's compare. Note: if they match, the test will pass; if they do
+# not match, the test will show that the diff statement failed and no
+# reject file will be created. You will need to go to the mysql-test
+# dir and diff the files yourself to see what is not matching.
+
+diff_files $MYSQLTEST_VARDIR/tmp/rpl_switch_stm_row_mixed_master.sql $MYSQLTEST_VARDIR/tmp/rpl_switch_stm_row_mixed_slave.sql;
+
+connection master;
+
+# Now test that mysqlbinlog works fine on a binlog generated by the
+# mixed mode
+
+# BUG#11312 "DELIMITER is not written to the binary log that causes
+# syntax error" means that mysqlbinlog will fail if we pass it the
+# text of queries; this forces us to use --base64-output here.
+
+# BUG#20929 "BINLOG command causes invalid free plus assertion
+# failure" makes mysqld segfault when receiving --base64-output
+
+# So I can't enable this piece of test
+# SIGH
+
+if ($enable_when_11312_or_20929_fixed)
+{
+--exec $MYSQL_BINLOG --base64-output $MYSQLTEST_VARDIR/log/master-bin.000001 > $MYSQLTEST_VARDIR/tmp/mysqlbinlog_mixed.sql
+drop database mysqltest1;
+--exec $MYSQL < $MYSQLTEST_VARDIR/tmp/mysqlbinlog_mixed.sql
+--exec $MYSQL_DUMP --compact --order-by-primary --skip-extended-insert --no-create-info mysqltest1 > $MYSQLTEST_VARDIR/tmp/rpl_switch_stm_row_mixed_master.sql
+# the old mysqldump output on slave is the same as what it was on
+# master before restoring on master.
+diff_files $MYSQLTEST_VARDIR/tmp/rpl_switch_stm_row_mixed_master.sql $MYSQLTEST_VARDIR/tmp/rpl_switch_stm_row_mixed_slave.sql;
+}
+
+drop database mysqltest1;
+sync_slave_with_master;
+
+connection master;
+# Restore binlog format setting
+set global binlog_format =@my_binlog_format;
+--disable_connect_log
+--source include/rpl_end.inc
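The dump-and-compare step above (mysqldump on master and slave, then `diff_files`) can be sketched outside the mtr harness. This is a minimal shell sketch, assuming the two dump files were already produced with the same mysqldump flags as in the test; the paths and the one-line dump content here are purely illustrative:

```shell
# Minimal sketch of the dump-and-diff consistency check. In the real test
# the files come from:
#   mysqldump --compact --order-by-primary --skip-extended-insert \
#     --no-create-info mysqltest1
# run against the master and the slave respectively.
master_dump=/tmp/rpl_master_demo.sql   # illustrative paths
slave_dump=/tmp/rpl_slave_demo.sql
printf 'INSERT INTO t1 VALUES (1);\n' > "$master_dump"
printf 'INSERT INTO t1 VALUES (1);\n' > "$slave_dump"
# diff_files passes only when the dumps are byte-identical:
if diff -q "$master_dump" "$slave_dump" >/dev/null; then
  echo "replicas in sync"
else
  echo "replicas differ"
fi
```

The byte-for-byte comparison is why the test forces a deterministic row order (`--order-by-primary`) and one row per INSERT (`--skip-extended-insert`).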
diff --git a/mysql-test/suite/binlog_encryption/rpl_sync-master.opt b/mysql-test/suite/binlog_encryption/rpl_sync-master.opt
new file mode 100644
index 0000000..04b06bf
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_sync-master.opt
@@ -0,0 +1,2 @@
+--default-storage-engine=MyISAM
+--loose-innodb-file-per-table=0
diff --git a/mysql-test/suite/binlog_encryption/rpl_sync-slave.opt b/mysql-test/suite/binlog_encryption/rpl_sync-slave.opt
new file mode 100644
index 0000000..2e8be18
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_sync-slave.opt
@@ -0,0 +1,2 @@
+--sync-relay-log-info=1 --relay-log-recovery=1 --loose-innodb_file_format_check=1 --default-storage-engine=MyISAM --loose-innodb-file-per-table=0
+--skip-core-file --skip-slave-start
diff --git a/mysql-test/suite/binlog_encryption/rpl_sync.result b/mysql-test/suite/binlog_encryption/rpl_sync.result
new file mode 100644
index 0000000..1240c44
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_sync.result
@@ -0,0 +1,53 @@
+=====Configuring the enviroment=======;
+include/master-slave.inc
+[connection master]
+call mtr.add_suppression('Attempting backtrace');
+call mtr.add_suppression("Recovery from master pos .* and file master-bin.000001");
+ALTER TABLE mysql.gtid_slave_pos ENGINE=InnoDB;
+flush tables;
+CREATE TABLE t1(a INT, PRIMARY KEY(a)) engine=innodb;
+insert into t1(a) values(1);
+insert into t1(a) values(2);
+insert into t1(a) values(3);
+=====Inserting data on the master but without the SQL Thread being running=======;
+connection slave;
+connection slave;
+include/stop_slave_sql.inc
+connection master;
+insert into t1(a) values(4);
+insert into t1(a) values(5);
+insert into t1(a) values(6);
+=====Removing relay log files and crashing/recoverying the slave=======;
+connection slave;
+include/stop_slave_io.inc
+SET SESSION debug_dbug="d,crash_before_rotate_relaylog";
+FLUSH LOGS;
+ERROR HY000: Lost connection to MySQL server during query
+include/rpl_reconnect.inc
+=====Dumping and comparing tables=======;
+include/start_slave.inc
+connection master;
+connection slave;
+include/diff_tables.inc [master:t1,slave:t1]
+=====Corrupting the master.info=======;
+connection slave;
+include/stop_slave.inc
+connection master;
+FLUSH LOGS;
+insert into t1(a) values(7);
+insert into t1(a) values(8);
+insert into t1(a) values(9);
+connection slave;
+SET SESSION debug_dbug="d,crash_before_rotate_relaylog";
+FLUSH LOGS;
+ERROR HY000: Lost connection to MySQL server during query
+include/rpl_reconnect.inc
+=====Dumping and comparing tables=======;
+include/start_slave.inc
+connection master;
+connection slave;
+include/diff_tables.inc [master:t1,slave:t1]
+=====Clean up=======;
+connection master;
+drop table t1;
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_sync.test b/mysql-test/suite/binlog_encryption/rpl_sync.test
new file mode 100644
index 0000000..6f97165
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_sync.test
@@ -0,0 +1,180 @@
+#
+# The test was taken from the rpl suite, with cosmetic fixes
+#
+
+--enable_connect_log
+
+########################################################################################
+# This test verifies the options --sync-relay-log-info and --relay-log-recovery by
+# crashing the slave in two different situations:
+# (case-1) - Corrupt the relay log with changes which were not processed by
+# the SQL Thread and crashes it.
+# (case-2) - Corrupt the master.info with wrong coordinates and crashes it.
+#
+# Case 1:
+# 1 - Stops the SQL Thread
+# 2 - Inserts new records into the master.
+# 3 - Corrupts the relay-log.bin* which most likely has such changes.
+# 4 - Crashes the slave
+# 5 - Verifies if the slave is in sync with the master, which means that
+# the information loss was circumvented by the recovery process.
+#
+# Case 2:
+# 1 - Stops the SQL/IO Threads
+# 2 - Inserts new records into the master.
+# 3 - Corrupts the master.info with wrong coordinates.
+# 4 - Crashes the slave
+# 5 - Verifies if the slave is in sync with the master, which means that
+# the information loss was circumvented by the recovery process.
+########################################################################################
+
+########################################################################################
+# Configuring the environment
+########################################################################################
+--echo =====Configuring the enviroment=======;
+
+--disable_connect_log
+--source include/master-slave.inc
+--source include/not_embedded.inc
+--source include/not_valgrind.inc
+--source include/have_debug.inc
+--source include/not_crashrep.inc
+--enable_connect_log
+
+call mtr.add_suppression('Attempting backtrace');
+call mtr.add_suppression("Recovery from master pos .* and file master-bin.000001");
+# Use innodb so we do not get "table should be repaired" issues.
+ALTER TABLE mysql.gtid_slave_pos ENGINE=InnoDB;
+flush tables;
+CREATE TABLE t1(a INT, PRIMARY KEY(a)) engine=innodb;
+
+insert into t1(a) values(1);
+insert into t1(a) values(2);
+insert into t1(a) values(3);
+
+########################################################################################
+# Case 1: Corrupt a relay-log.bin*
+########################################################################################
+--echo =====Inserting data on the master but without the SQL Thread being running=======;
+sync_slave_with_master;
+
+connection slave;
+let $MYSQLD_SLAVE_DATADIR= `select @@datadir`;
+--replace_result $MYSQLD_SLAVE_DATADIR MYSQLD_SLAVE_DATADIR
+--copy_file $MYSQLD_SLAVE_DATADIR/master.info $MYSQLD_SLAVE_DATADIR/master.backup
+--disable_connect_log
+--source include/stop_slave_sql.inc
+--enable_connect_log
+
+connection master;
+insert into t1(a) values(4);
+insert into t1(a) values(5);
+insert into t1(a) values(6);
+
+--echo =====Removing relay log files and crashing/recoverying the slave=======;
+connection slave;
+--disable_connect_log
+--source include/stop_slave_io.inc
+--enable_connect_log
+
+let $file= query_get_value("SHOW SLAVE STATUS", Relay_Log_File, 1);
+
+--let FILE_TO_CORRUPT= $MYSQLD_SLAVE_DATADIR/$file
+perl;
+$file= $ENV{'FILE_TO_CORRUPT'};
+open(FILE, ">$file") || die "Unable to open $file.";
+truncate(FILE,0);
+print FILE "failure";
+close ($file);
+EOF
+
+--exec echo "restart" > $MYSQLTEST_VARDIR/tmp/mysqld.2.expect
+SET SESSION debug_dbug="d,crash_before_rotate_relaylog";
+--error 2013
+FLUSH LOGS;
+
+--let $rpl_server_number= 2
+--disable_connect_log
+--source include/rpl_reconnect.inc
+--enable_connect_log
+
+--echo =====Dumping and comparing tables=======;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+connection master;
+sync_slave_with_master;
+
+let $diff_tables=master:t1,slave:t1;
+--disable_connect_log
+source include/diff_tables.inc;
+--enable_connect_log
+
+########################################################################################
+# Case 2: Corrupt a master.info
+########################################################################################
+--echo =====Corrupting the master.info=======;
+connection slave;
+--disable_connect_log
+--source include/stop_slave.inc
+--enable_connect_log
+
+connection master;
+FLUSH LOGS;
+
+insert into t1(a) values(7);
+insert into t1(a) values(8);
+insert into t1(a) values(9);
+
+connection slave;
+let MYSQLD_SLAVE_DATADIR=`select @@datadir`;
+
+--perl
+use strict;
+use warnings;
+my $src= "$ENV{'MYSQLD_SLAVE_DATADIR'}/master.backup";
+my $dst= "$ENV{'MYSQLD_SLAVE_DATADIR'}/master.info";
+open(FILE, "<", $src) or die;
+my @content= <FILE>;
+close FILE;
+open(FILE, ">", $dst) or die;
+binmode FILE;
+print FILE @content;
+close FILE;
+EOF
+
+--exec echo "restart" > $MYSQLTEST_VARDIR/tmp/mysqld.2.expect
+SET SESSION debug_dbug="d,crash_before_rotate_relaylog";
+--error 2013
+FLUSH LOGS;
+
+--let $rpl_server_number= 2
+--disable_connect_log
+--source include/rpl_reconnect.inc
+--enable_connect_log
+
+--echo =====Dumping and comparing tables=======;
+--disable_connect_log
+--source include/start_slave.inc
+--enable_connect_log
+
+connection master;
+sync_slave_with_master;
+
+let $diff_tables=master:t1,slave:t1;
+--disable_connect_log
+source include/diff_tables.inc;
+--enable_connect_log
+
+########################################################################################
+# Clean up
+########################################################################################
+--echo =====Clean up=======;
+connection master;
+drop table t1;
+
+--remove_file $MYSQLD_SLAVE_DATADIR/master.backup
+--disable_connect_log
+--source include/rpl_end.inc
+
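The perl block in case 1 above corrupts the current relay log by truncating it and writing a short garbage marker, so that on restart `--relay-log-recovery` must rebuild the relay log from the master coordinates. A shell sketch of just that corruption step, with a purely illustrative path instead of the slave's real datadir:

```shell
# Hedged sketch of the relay-log corruption step, mirroring the test's
# perl: open ">$file" / truncate(FILE,0) / print FILE "failure".
relay_log=/tmp/demo-relay-bin.000001   # illustrative, not the real datadir
printf 'pretend relay log events' > "$relay_log"
: > "$relay_log"                  # truncate to zero bytes
printf 'failure' > "$relay_log"   # garbage payload recovery must survive
echo "corrupted size: $(wc -c < "$relay_log" | tr -d ' ') bytes"
```

After this, any attempt to read the file as a relay log fails, which is exactly the state the crash-and-reconnect sequence then recovers from.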
diff --git a/mysql-test/suite/binlog_encryption/rpl_temporal_format_default_to_default.cnf b/mysql-test/suite/binlog_encryption/rpl_temporal_format_default_to_default.cnf
new file mode 100644
index 0000000..b8e22e9
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_temporal_format_default_to_default.cnf
@@ -0,0 +1,6 @@
+!include my.cnf
+
+[mysqld.2]
+plugin-load-add= @ENV.FILE_KEY_MANAGEMENT_SO
+loose-file-key-management-filename=@ENV.MYSQLTEST_VARDIR/std_data/keys.txt
+encrypt-binlog
diff --git a/mysql-test/suite/binlog_encryption/rpl_temporal_format_default_to_default.result b/mysql-test/suite/binlog_encryption/rpl_temporal_format_default_to_default.result
new file mode 100644
index 0000000..d61255c
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_temporal_format_default_to_default.result
@@ -0,0 +1,91 @@
+include/master-slave.inc
+[connection master]
+connection master;
+SELECT @@global.mysql56_temporal_format AS on_master;
+on_master
+1
+connection slave;
+SELECT @@global.mysql56_temporal_format AS on_slave;
+on_slave
+1
+connection master;
+CREATE TABLE t1
+(
+c0 TIME(0),
+c1 TIME(1),
+c2 TIME(2),
+c3 TIME(3),
+c4 TIME(4),
+c5 TIME(5),
+c6 TIME(6)
+);
+CREATE TABLE t2
+(
+c0 TIMESTAMP(0),
+c1 TIMESTAMP(1),
+c2 TIMESTAMP(2),
+c3 TIMESTAMP(3),
+c4 TIMESTAMP(4),
+c5 TIMESTAMP(5),
+c6 TIMESTAMP(6)
+);
+CREATE TABLE t3
+(
+c0 DATETIME(0),
+c1 DATETIME(1),
+c2 DATETIME(2),
+c3 DATETIME(3),
+c4 DATETIME(4),
+c5 DATETIME(5),
+c6 DATETIME(6)
+);
+INSERT INTO t1 VALUES ('01:01:01','01:01:01.1','01:01:01.11','01:01:01.111','01:01:01.1111','01:01:01.11111','01:01:01.111111');
+INSERT INTO t2 VALUES ('2001-01-01 01:01:01','2001-01-01 01:01:01.1','2001-01-01 01:01:01.11','2001-01-01 01:01:01.111','2001-01-01 01:01:01.1111','2001-01-01 01:01:01.11111','2001-01-01 01:01:01.111111');
+INSERT INTO t3 VALUES ('2001-01-01 01:01:01','2001-01-01 01:01:01.1','2001-01-01 01:01:01.11','2001-01-01 01:01:01.111','2001-01-01 01:01:01.1111','2001-01-01 01:01:01.11111','2001-01-01 01:01:01.111111');
+SELECT TABLE_NAME, TABLE_ROWS, AVG_ROW_LENGTH,DATA_LENGTH FROM INFORMATION_SCHEMA.TABLES
+WHERE TABLE_NAME RLIKE 't[1-3]' ORDER BY TABLE_NAME;
+TABLE_NAME TABLE_ROWS AVG_ROW_LENGTH DATA_LENGTH
+t1 1 34 34
+t2 1 41 41
+t3 1 48 48
+connection slave;
+connection slave;
+SELECT * FROM t1;;
+c0 01:01:01
+c1 01:01:01.1
+c2 01:01:01.11
+c3 01:01:01.111
+c4 01:01:01.1111
+c5 01:01:01.11111
+c6 01:01:01.111111
+SELECT * FROM t2;;
+c0 2001-01-01 01:01:01
+c1 2001-01-01 01:01:01.1
+c2 2001-01-01 01:01:01.11
+c3 2001-01-01 01:01:01.111
+c4 2001-01-01 01:01:01.1111
+c5 2001-01-01 01:01:01.11111
+c6 2001-01-01 01:01:01.111111
+SELECT * FROM t3;;
+c0 2001-01-01 01:01:01
+c1 2001-01-01 01:01:01.1
+c2 2001-01-01 01:01:01.11
+c3 2001-01-01 01:01:01.111
+c4 2001-01-01 01:01:01.1111
+c5 2001-01-01 01:01:01.11111
+c6 2001-01-01 01:01:01.111111
+SELECT TABLE_NAME, TABLE_ROWS, AVG_ROW_LENGTH,DATA_LENGTH FROM INFORMATION_SCHEMA.TABLES
+WHERE TABLE_NAME RLIKE 't[1-3]' ORDER BY TABLE_NAME;
+TABLE_NAME TABLE_ROWS AVG_ROW_LENGTH DATA_LENGTH
+t1 1 34 34
+t2 1 41 41
+t3 1 48 48
+connection master;
+DROP TABLE t1;
+DROP TABLE t2;
+DROP TABLE t3;
+connection slave;
+SET @@global.mysql56_temporal_format=DEFAULT;
+connection master;
+SET @@global.mysql56_temporal_format=DEFAULT;
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_temporal_format_default_to_default.test b/mysql-test/suite/binlog_encryption/rpl_temporal_format_default_to_default.test
new file mode 100644
index 0000000..74d8968
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_temporal_format_default_to_default.test
@@ -0,0 +1,82 @@
+#
+# The test was taken from the rpl suite as is
+#
+
+--source include/master-slave.inc
+--enable_connect_log
+
+if ($force_master_mysql56_temporal_format)
+{
+ connection master;
+ eval SET @@global.mysql56_temporal_format=$force_master_mysql56_temporal_format;
+}
+
+if ($force_slave_mysql56_temporal_format)
+{
+ connection slave;
+ eval SET @@global.mysql56_temporal_format=$force_slave_mysql56_temporal_format;
+}
+
+connection master;
+SELECT @@global.mysql56_temporal_format AS on_master;
+connection slave;
+SELECT @@global.mysql56_temporal_format AS on_slave;
+connection master;
+
+CREATE TABLE t1
+(
+ c0 TIME(0),
+ c1 TIME(1),
+ c2 TIME(2),
+ c3 TIME(3),
+ c4 TIME(4),
+ c5 TIME(5),
+ c6 TIME(6)
+);
+CREATE TABLE t2
+(
+ c0 TIMESTAMP(0),
+ c1 TIMESTAMP(1),
+ c2 TIMESTAMP(2),
+ c3 TIMESTAMP(3),
+ c4 TIMESTAMP(4),
+ c5 TIMESTAMP(5),
+ c6 TIMESTAMP(6)
+);
+
+CREATE TABLE t3
+(
+ c0 DATETIME(0),
+ c1 DATETIME(1),
+ c2 DATETIME(2),
+ c3 DATETIME(3),
+ c4 DATETIME(4),
+ c5 DATETIME(5),
+ c6 DATETIME(6)
+);
+INSERT INTO t1 VALUES ('01:01:01','01:01:01.1','01:01:01.11','01:01:01.111','01:01:01.1111','01:01:01.11111','01:01:01.111111');
+INSERT INTO t2 VALUES ('2001-01-01 01:01:01','2001-01-01 01:01:01.1','2001-01-01 01:01:01.11','2001-01-01 01:01:01.111','2001-01-01 01:01:01.1111','2001-01-01 01:01:01.11111','2001-01-01 01:01:01.111111');
+INSERT INTO t3 VALUES ('2001-01-01 01:01:01','2001-01-01 01:01:01.1','2001-01-01 01:01:01.11','2001-01-01 01:01:01.111','2001-01-01 01:01:01.1111','2001-01-01 01:01:01.11111','2001-01-01 01:01:01.111111');
+SELECT TABLE_NAME, TABLE_ROWS, AVG_ROW_LENGTH,DATA_LENGTH FROM INFORMATION_SCHEMA.TABLES
+WHERE TABLE_NAME RLIKE 't[1-3]' ORDER BY TABLE_NAME;
+sync_slave_with_master;
+
+connection slave;
+--query_vertical SELECT * FROM t1;
+--query_vertical SELECT * FROM t2;
+--query_vertical SELECT * FROM t3;
+SELECT TABLE_NAME, TABLE_ROWS, AVG_ROW_LENGTH,DATA_LENGTH FROM INFORMATION_SCHEMA.TABLES
+WHERE TABLE_NAME RLIKE 't[1-3]' ORDER BY TABLE_NAME;
+
+connection master;
+DROP TABLE t1;
+DROP TABLE t2;
+DROP TABLE t3;
+
+connection slave;
+SET @@global.mysql56_temporal_format=DEFAULT;
+connection master;
+SET @@global.mysql56_temporal_format=DEFAULT;
+
+--disable_connect_log
+--source include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_temporal_format_mariadb53_to_mysql56.cnf b/mysql-test/suite/binlog_encryption/rpl_temporal_format_mariadb53_to_mysql56.cnf
new file mode 100644
index 0000000..b8e22e9
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_temporal_format_mariadb53_to_mysql56.cnf
@@ -0,0 +1,6 @@
+!include my.cnf
+
+[mysqld.2]
+plugin-load-add= @ENV.FILE_KEY_MANAGEMENT_SO
+loose-file-key-management-filename=@ENV.MYSQLTEST_VARDIR/std_data/keys.txt
+encrypt-binlog
diff --git a/mysql-test/suite/binlog_encryption/rpl_temporal_format_mariadb53_to_mysql56.result b/mysql-test/suite/binlog_encryption/rpl_temporal_format_mariadb53_to_mysql56.result
new file mode 100644
index 0000000..5c51816
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_temporal_format_mariadb53_to_mysql56.result
@@ -0,0 +1,95 @@
+include/master-slave.inc
+[connection master]
+connection master;
+SET @@global.mysql56_temporal_format=false;;
+connection slave;
+SET @@global.mysql56_temporal_format=true;;
+connection master;
+SELECT @@global.mysql56_temporal_format AS on_master;
+on_master
+0
+connection slave;
+SELECT @@global.mysql56_temporal_format AS on_slave;
+on_slave
+1
+connection master;
+CREATE TABLE t1
+(
+c0 TIME(0),
+c1 TIME(1),
+c2 TIME(2),
+c3 TIME(3),
+c4 TIME(4),
+c5 TIME(5),
+c6 TIME(6)
+);
+CREATE TABLE t2
+(
+c0 TIMESTAMP(0),
+c1 TIMESTAMP(1),
+c2 TIMESTAMP(2),
+c3 TIMESTAMP(3),
+c4 TIMESTAMP(4),
+c5 TIMESTAMP(5),
+c6 TIMESTAMP(6)
+);
+CREATE TABLE t3
+(
+c0 DATETIME(0),
+c1 DATETIME(1),
+c2 DATETIME(2),
+c3 DATETIME(3),
+c4 DATETIME(4),
+c5 DATETIME(5),
+c6 DATETIME(6)
+);
+INSERT INTO t1 VALUES ('01:01:01','01:01:01.1','01:01:01.11','01:01:01.111','01:01:01.1111','01:01:01.11111','01:01:01.111111');
+INSERT INTO t2 VALUES ('2001-01-01 01:01:01','2001-01-01 01:01:01.1','2001-01-01 01:01:01.11','2001-01-01 01:01:01.111','2001-01-01 01:01:01.1111','2001-01-01 01:01:01.11111','2001-01-01 01:01:01.111111');
+INSERT INTO t3 VALUES ('2001-01-01 01:01:01','2001-01-01 01:01:01.1','2001-01-01 01:01:01.11','2001-01-01 01:01:01.111','2001-01-01 01:01:01.1111','2001-01-01 01:01:01.11111','2001-01-01 01:01:01.111111');
+SELECT TABLE_NAME, TABLE_ROWS, AVG_ROW_LENGTH,DATA_LENGTH FROM INFORMATION_SCHEMA.TABLES
+WHERE TABLE_NAME RLIKE 't[1-3]' ORDER BY TABLE_NAME;
+TABLE_NAME TABLE_ROWS AVG_ROW_LENGTH DATA_LENGTH
+t1 1 33 33
+t2 1 41 41
+t3 1 50 50
+connection slave;
+connection slave;
+SELECT * FROM t1;;
+c0 01:01:01
+c1 01:01:01.1
+c2 01:01:01.11
+c3 01:01:01.111
+c4 01:01:01.1111
+c5 01:01:01.11111
+c6 01:01:01.111111
+SELECT * FROM t2;;
+c0 2001-01-01 01:01:01
+c1 2001-01-01 01:01:01.1
+c2 2001-01-01 01:01:01.11
+c3 2001-01-01 01:01:01.111
+c4 2001-01-01 01:01:01.1111
+c5 2001-01-01 01:01:01.11111
+c6 2001-01-01 01:01:01.111111
+SELECT * FROM t3;;
+c0 2001-01-01 01:01:01
+c1 2001-01-01 01:01:01.1
+c2 2001-01-01 01:01:01.11
+c3 2001-01-01 01:01:01.111
+c4 2001-01-01 01:01:01.1111
+c5 2001-01-01 01:01:01.11111
+c6 2001-01-01 01:01:01.111111
+SELECT TABLE_NAME, TABLE_ROWS, AVG_ROW_LENGTH,DATA_LENGTH FROM INFORMATION_SCHEMA.TABLES
+WHERE TABLE_NAME RLIKE 't[1-3]' ORDER BY TABLE_NAME;
+TABLE_NAME TABLE_ROWS AVG_ROW_LENGTH DATA_LENGTH
+t1 1 34 34
+t2 1 41 41
+t3 1 48 48
+connection master;
+DROP TABLE t1;
+DROP TABLE t2;
+DROP TABLE t3;
+connection slave;
+SET @@global.mysql56_temporal_format=DEFAULT;
+connection master;
+SET @@global.mysql56_temporal_format=DEFAULT;
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_temporal_format_mariadb53_to_mysql56.test b/mysql-test/suite/binlog_encryption/rpl_temporal_format_mariadb53_to_mysql56.test
new file mode 100644
index 0000000..6ac5fe6
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_temporal_format_mariadb53_to_mysql56.test
@@ -0,0 +1,18 @@
+#
+# The test was taken from the rpl suite as is
+#
+
+#
+# MariaDB-5.3 fractional temporal types do not store metadata
+# when running with --binlog-format=row, thus can replicate
+# only into a field with exactly the same data type and format.
+#
+# Skip when running with --binlog-format=row.
+# But mixed and statement formats should work without problems.
+#
+-- source include/have_binlog_format_mixed_or_statement.inc
+
+--let $force_master_mysql56_temporal_format=false;
+--let $force_slave_mysql56_temporal_format=true;
+
+--source rpl_temporal_format_default_to_default.test
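The wrapper test above just sets two `--let` variables and sources the shared base test, which applies them with `eval SET` when they are defined. The same parameterize-then-source pattern can be sketched in shell; the script path, variable names, and output line are illustrative, not mtr's real mechanism:

```shell
# Hedged sketch of the wrapper-test pattern: mtr "--let" variables that the
# shared base test reads are modeled here as environment variables.
base=/tmp/base_temporal_demo.sh
cat > "$base" <<'EOF'
# Base script: fall back to defaults unless the wrapper overrides them.
: "${FORCE_MASTER_FORMAT:=default}"
: "${FORCE_SLAVE_FORMAT:=default}"
echo "master=$FORCE_MASTER_FORMAT slave=$FORCE_SLAVE_FORMAT"
EOF
# Wrapper, analogous to rpl_temporal_format_mariadb53_to_mysql56.test:
FORCE_MASTER_FORMAT=false FORCE_SLAVE_FORMAT=true sh "$base"
# prints: master=false slave=true
```

This keeps all the replication logic in one base test while each wrapper pins a different master/slave temporal-format combination.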
diff --git a/mysql-test/suite/binlog_encryption/rpl_temporal_format_mysql56_to_mariadb53.cnf b/mysql-test/suite/binlog_encryption/rpl_temporal_format_mysql56_to_mariadb53.cnf
new file mode 100644
index 0000000..b8e22e9
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_temporal_format_mysql56_to_mariadb53.cnf
@@ -0,0 +1,6 @@
+!include my.cnf
+
+[mysqld.2]
+plugin-load-add= @ENV.FILE_KEY_MANAGEMENT_SO
+loose-file-key-management-filename=@ENV.MYSQLTEST_VARDIR/std_data/keys.txt
+encrypt-binlog
diff --git a/mysql-test/suite/binlog_encryption/rpl_temporal_format_mysql56_to_mariadb53.result b/mysql-test/suite/binlog_encryption/rpl_temporal_format_mysql56_to_mariadb53.result
new file mode 100644
index 0000000..9d086d3
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_temporal_format_mysql56_to_mariadb53.result
@@ -0,0 +1,95 @@
+include/master-slave.inc
+[connection master]
+connection master;
+SET @@global.mysql56_temporal_format=true;;
+connection slave;
+SET @@global.mysql56_temporal_format=false;;
+connection master;
+SELECT @@global.mysql56_temporal_format AS on_master;
+on_master
+1
+connection slave;
+SELECT @@global.mysql56_temporal_format AS on_slave;
+on_slave
+0
+connection master;
+CREATE TABLE t1
+(
+c0 TIME(0),
+c1 TIME(1),
+c2 TIME(2),
+c3 TIME(3),
+c4 TIME(4),
+c5 TIME(5),
+c6 TIME(6)
+);
+CREATE TABLE t2
+(
+c0 TIMESTAMP(0),
+c1 TIMESTAMP(1),
+c2 TIMESTAMP(2),
+c3 TIMESTAMP(3),
+c4 TIMESTAMP(4),
+c5 TIMESTAMP(5),
+c6 TIMESTAMP(6)
+);
+CREATE TABLE t3
+(
+c0 DATETIME(0),
+c1 DATETIME(1),
+c2 DATETIME(2),
+c3 DATETIME(3),
+c4 DATETIME(4),
+c5 DATETIME(5),
+c6 DATETIME(6)
+);
+INSERT INTO t1 VALUES ('01:01:01','01:01:01.1','01:01:01.11','01:01:01.111','01:01:01.1111','01:01:01.11111','01:01:01.111111');
+INSERT INTO t2 VALUES ('2001-01-01 01:01:01','2001-01-01 01:01:01.1','2001-01-01 01:01:01.11','2001-01-01 01:01:01.111','2001-01-01 01:01:01.1111','2001-01-01 01:01:01.11111','2001-01-01 01:01:01.111111');
+INSERT INTO t3 VALUES ('2001-01-01 01:01:01','2001-01-01 01:01:01.1','2001-01-01 01:01:01.11','2001-01-01 01:01:01.111','2001-01-01 01:01:01.1111','2001-01-01 01:01:01.11111','2001-01-01 01:01:01.111111');
+SELECT TABLE_NAME, TABLE_ROWS, AVG_ROW_LENGTH,DATA_LENGTH FROM INFORMATION_SCHEMA.TABLES
+WHERE TABLE_NAME RLIKE 't[1-3]' ORDER BY TABLE_NAME;
+TABLE_NAME TABLE_ROWS AVG_ROW_LENGTH DATA_LENGTH
+t1 1 34 34
+t2 1 41 41
+t3 1 48 48
+connection slave;
+connection slave;
+SELECT * FROM t1;;
+c0 01:01:01
+c1 01:01:01.1
+c2 01:01:01.11
+c3 01:01:01.111
+c4 01:01:01.1111
+c5 01:01:01.11111
+c6 01:01:01.111111
+SELECT * FROM t2;;
+c0 2001-01-01 01:01:01
+c1 2001-01-01 01:01:01.1
+c2 2001-01-01 01:01:01.11
+c3 2001-01-01 01:01:01.111
+c4 2001-01-01 01:01:01.1111
+c5 2001-01-01 01:01:01.11111
+c6 2001-01-01 01:01:01.111111
+SELECT * FROM t3;;
+c0 2001-01-01 01:01:01
+c1 2001-01-01 01:01:01.1
+c2 2001-01-01 01:01:01.11
+c3 2001-01-01 01:01:01.111
+c4 2001-01-01 01:01:01.1111
+c5 2001-01-01 01:01:01.11111
+c6 2001-01-01 01:01:01.111111
+SELECT TABLE_NAME, TABLE_ROWS, AVG_ROW_LENGTH,DATA_LENGTH FROM INFORMATION_SCHEMA.TABLES
+WHERE TABLE_NAME RLIKE 't[1-3]' ORDER BY TABLE_NAME;
+TABLE_NAME TABLE_ROWS AVG_ROW_LENGTH DATA_LENGTH
+t1 1 33 33
+t2 1 41 41
+t3 1 50 50
+connection master;
+DROP TABLE t1;
+DROP TABLE t2;
+DROP TABLE t3;
+connection slave;
+SET @@global.mysql56_temporal_format=DEFAULT;
+connection master;
+SET @@global.mysql56_temporal_format=DEFAULT;
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_temporal_format_mysql56_to_mariadb53.test b/mysql-test/suite/binlog_encryption/rpl_temporal_format_mysql56_to_mariadb53.test
new file mode 100644
index 0000000..cf135fb
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_temporal_format_mysql56_to_mariadb53.test
@@ -0,0 +1,8 @@
+#
+# The test was taken from the rpl suite as is
+#
+
+--let $force_master_mysql56_temporal_format=true;
+--let $force_slave_mysql56_temporal_format=false;
+
+--source rpl_temporal_format_default_to_default.test
diff --git a/mysql-test/suite/binlog_encryption/rpl_typeconv.result b/mysql-test/suite/binlog_encryption/rpl_typeconv.result
new file mode 100644
index 0000000..988962f
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_typeconv.result
@@ -0,0 +1,548 @@
+include/master-slave.inc
+[connection master]
+connection slave;
+set @saved_slave_type_conversions = @@global.slave_type_conversions;
+CREATE TABLE type_conversions (
+TestNo INT AUTO_INCREMENT PRIMARY KEY,
+Source TEXT,
+Target TEXT,
+Flags TEXT,
+On_Master TEXT,
+On_Slave TEXT,
+Expected TEXT,
+Compare INT,
+Error TEXT);
+SELECT @@global.slave_type_conversions;
+@@global.slave_type_conversions
+
+SET GLOBAL SLAVE_TYPE_CONVERSIONS='';
+SELECT @@global.slave_type_conversions;
+@@global.slave_type_conversions
+
+SET GLOBAL SLAVE_TYPE_CONVERSIONS='ALL_NON_LOSSY';
+SELECT @@global.slave_type_conversions;
+@@global.slave_type_conversions
+ALL_NON_LOSSY
+SET GLOBAL SLAVE_TYPE_CONVERSIONS='ALL_LOSSY';
+SELECT @@global.slave_type_conversions;
+@@global.slave_type_conversions
+ALL_LOSSY
+SET GLOBAL SLAVE_TYPE_CONVERSIONS='ALL_LOSSY,ALL_NON_LOSSY';
+SELECT @@global.slave_type_conversions;
+@@global.slave_type_conversions
+ALL_LOSSY,ALL_NON_LOSSY
+SET GLOBAL SLAVE_TYPE_CONVERSIONS='ALL_LOSSY,ALL_NON_LOSSY,NONEXISTING_BIT';
+ERROR 42000: Variable 'slave_type_conversions' can't be set to the value of 'NONEXISTING_BIT'
+SELECT @@global.slave_type_conversions;
+@@global.slave_type_conversions
+ALL_LOSSY,ALL_NON_LOSSY
+connection slave;
+SET GLOBAL SLAVE_TYPE_CONVERSIONS='';
+**** Running tests with @@SLAVE_TYPE_CONVERSIONS = '' ****
+include/rpl_reset.inc
+connection slave;
+SET GLOBAL SLAVE_TYPE_CONVERSIONS='ALL_NON_LOSSY';
+**** Running tests with @@SLAVE_TYPE_CONVERSIONS = 'ALL_NON_LOSSY' ****
+include/rpl_reset.inc
+connection slave;
+SET GLOBAL SLAVE_TYPE_CONVERSIONS='ALL_LOSSY';
+**** Running tests with @@SLAVE_TYPE_CONVERSIONS = 'ALL_LOSSY' ****
+include/rpl_reset.inc
+connection slave;
+SET GLOBAL SLAVE_TYPE_CONVERSIONS='ALL_LOSSY,ALL_NON_LOSSY';
+**** Running tests with @@SLAVE_TYPE_CONVERSIONS = 'ALL_LOSSY,ALL_NON_LOSSY' ****
+include/rpl_reset.inc
+connection slave;
+**** Result of conversions ****
+Source_Type Target_Type All_Type_Conversion_Flags Value_On_Slave
+TINYBLOB TINYBLOB <Correct value>
+TINYBLOB BLOB <Correct error>
+TINYBLOB MEDIUMBLOB <Correct error>
+TINYBLOB LONGBLOB <Correct error>
+BLOB TINYBLOB <Correct error>
+BLOB BLOB <Correct value>
+BLOB MEDIUMBLOB <Correct error>
+BLOB LONGBLOB <Correct error>
+MEDIUMBLOB TINYBLOB <Correct error>
+MEDIUMBLOB BLOB <Correct error>
+MEDIUMBLOB MEDIUMBLOB <Correct value>
+MEDIUMBLOB LONGBLOB <Correct error>
+LONGBLOB TINYBLOB <Correct error>
+LONGBLOB BLOB <Correct error>
+LONGBLOB MEDIUMBLOB <Correct error>
+LONGBLOB LONGBLOB <Correct value>
+GEOMETRY BLOB <Correct error>
+BLOB GEOMETRY <Correct error>
+GEOMETRY GEOMETRY <Correct value>
+BIT(1) BIT(1) <Correct value>
+DATE DATE <Correct value>
+ENUM('master',' ENUM('master',' <Correct value>
+CHAR(10) ENUM('master',' <Correct error>
+CHAR(10) SET('master','s <Correct error>
+ENUM('master',' CHAR(10) <Correct error>
+SET('master','s CHAR(10) <Correct error>
+SET('master','s SET('master','s <Correct value>
+SET('master','s SET('master','s <Correct value>
+SET('0','1','2' SET('0','1','2' <Correct value>
+SET('0','1','2' SET('0','1','2' <Correct error>
+SET('0','1','2' SET('0','1','2' <Correct error>
+SET('0','1','2' SET('0','1','2' <Correct error>
+TINYINT TINYINT <Correct value>
+TINYINT SMALLINT <Correct error>
+TINYINT MEDIUMINT <Correct error>
+TINYINT INT <Correct error>
+TINYINT BIGINT <Correct error>
+SMALLINT TINYINT <Correct error>
+SMALLINT TINYINT <Correct error>
+SMALLINT TINYINT UNSIGNE <Correct error>
+SMALLINT SMALLINT <Correct value>
+SMALLINT MEDIUMINT <Correct error>
+SMALLINT INT <Correct error>
+SMALLINT BIGINT <Correct error>
+MEDIUMINT TINYINT <Correct error>
+MEDIUMINT TINYINT <Correct error>
+MEDIUMINT TINYINT UNSIGNE <Correct error>
+MEDIUMINT SMALLINT <Correct error>
+MEDIUMINT MEDIUMINT <Correct value>
+MEDIUMINT INT <Correct error>
+MEDIUMINT BIGINT <Correct error>
+INT TINYINT <Correct error>
+INT TINYINT <Correct error>
+INT TINYINT UNSIGNE <Correct error>
+INT SMALLINT <Correct error>
+INT MEDIUMINT <Correct error>
+INT INT <Correct value>
+INT BIGINT <Correct error>
+BIGINT TINYINT <Correct error>
+BIGINT SMALLINT <Correct error>
+BIGINT MEDIUMINT <Correct error>
+BIGINT INT <Correct error>
+BIGINT BIGINT <Correct value>
+CHAR(20) CHAR(20) <Correct value>
+CHAR(20) CHAR(30) <Correct error>
+CHAR(20) CHAR(10) <Correct error>
+CHAR(20) VARCHAR(20) <Correct error>
+CHAR(20) VARCHAR(30) <Correct error>
+CHAR(20) VARCHAR(10) <Correct error>
+CHAR(20) TINYTEXT <Correct error>
+CHAR(20) TEXT <Correct error>
+CHAR(20) MEDIUMTEXT <Correct error>
+CHAR(20) LONGTEXT <Correct error>
+VARCHAR(20) VARCHAR(20) <Correct value>
+VARCHAR(20) VARCHAR(30) <Correct error>
+VARCHAR(20) VARCHAR(10) <Correct error>
+VARCHAR(20) CHAR(30) <Correct error>
+VARCHAR(20) CHAR(10) <Correct error>
+VARCHAR(20) TINYTEXT <Correct error>
+VARCHAR(20) TEXT <Correct error>
+VARCHAR(20) MEDIUMTEXT <Correct error>
+VARCHAR(20) LONGTEXT <Correct error>
+VARCHAR(500) VARCHAR(500) <Correct value>
+VARCHAR(500) VARCHAR(510) <Correct error>
+VARCHAR(500) VARCHAR(255) <Correct error>
+VARCHAR(500) TINYTEXT <Correct error>
+VARCHAR(500) TEXT <Correct error>
+VARCHAR(500) MEDIUMTEXT <Correct error>
+VARCHAR(500) LONGTEXT <Correct error>
+TINYTEXT VARCHAR(500) <Correct error>
+TEXT VARCHAR(500) <Correct error>
+MEDIUMTEXT VARCHAR(500) <Correct error>
+LONGTEXT VARCHAR(500) <Correct error>
+TINYTEXT CHAR(255) <Correct error>
+TINYTEXT CHAR(250) <Correct error>
+TEXT CHAR(255) <Correct error>
+MEDIUMTEXT CHAR(255) <Correct error>
+LONGTEXT CHAR(255) <Correct error>
+TINYTEXT TINYTEXT <Correct value>
+TINYTEXT TEXT <Correct error>
+TEXT TINYTEXT <Correct error>
+DECIMAL(10,5) DECIMAL(10,5) <Correct value>
+DECIMAL(10,5) DECIMAL(10,6) <Correct error>
+DECIMAL(10,5) DECIMAL(11,5) <Correct error>
+DECIMAL(10,5) DECIMAL(11,6) <Correct error>
+DECIMAL(10,5) DECIMAL(10,4) <Correct error>
+DECIMAL(10,5) DECIMAL(9,5) <Correct error>
+DECIMAL(10,5) DECIMAL(9,4) <Correct error>
+FLOAT DECIMAL(10,5) <Correct error>
+DOUBLE DECIMAL(10,5) <Correct error>
+DECIMAL(10,5) FLOAT <Correct error>
+DECIMAL(10,5) DOUBLE <Correct error>
+FLOAT FLOAT <Correct value>
+DOUBLE DOUBLE <Correct value>
+FLOAT DOUBLE <Correct error>
+DOUBLE FLOAT <Correct error>
+BIT(5) BIT(5) <Correct value>
+BIT(5) BIT(6) <Correct error>
+BIT(6) BIT(5) <Correct error>
+BIT(5) BIT(12) <Correct error>
+BIT(12) BIT(5) <Correct error>
+TINYBLOB TINYBLOB ALL_NON_LOSSY <Correct value>
+TINYBLOB BLOB ALL_NON_LOSSY <Correct value>
+TINYBLOB MEDIUMBLOB ALL_NON_LOSSY <Correct value>
+TINYBLOB LONGBLOB ALL_NON_LOSSY <Correct value>
+BLOB TINYBLOB ALL_NON_LOSSY <Correct error>
+BLOB BLOB ALL_NON_LOSSY <Correct value>
+BLOB MEDIUMBLOB ALL_NON_LOSSY <Correct value>
+BLOB LONGBLOB ALL_NON_LOSSY <Correct value>
+MEDIUMBLOB TINYBLOB ALL_NON_LOSSY <Correct error>
+MEDIUMBLOB BLOB ALL_NON_LOSSY <Correct error>
+MEDIUMBLOB MEDIUMBLOB ALL_NON_LOSSY <Correct value>
+MEDIUMBLOB LONGBLOB ALL_NON_LOSSY <Correct value>
+LONGBLOB TINYBLOB ALL_NON_LOSSY <Correct error>
+LONGBLOB BLOB ALL_NON_LOSSY <Correct error>
+LONGBLOB MEDIUMBLOB ALL_NON_LOSSY <Correct error>
+LONGBLOB LONGBLOB ALL_NON_LOSSY <Correct value>
+GEOMETRY BLOB ALL_NON_LOSSY <Correct error>
+BLOB GEOMETRY ALL_NON_LOSSY <Correct error>
+GEOMETRY GEOMETRY ALL_NON_LOSSY <Correct value>
+BIT(1) BIT(1) ALL_NON_LOSSY <Correct value>
+DATE DATE ALL_NON_LOSSY <Correct value>
+ENUM('master',' ENUM('master',' ALL_NON_LOSSY <Correct value>
+CHAR(10) ENUM('master',' ALL_NON_LOSSY <Correct error>
+CHAR(10) SET('master','s ALL_NON_LOSSY <Correct error>
+ENUM('master',' CHAR(10) ALL_NON_LOSSY <Correct error>
+SET('master','s CHAR(10) ALL_NON_LOSSY <Correct error>
+SET('master','s SET('master','s ALL_NON_LOSSY <Correct value>
+SET('master','s SET('master','s ALL_NON_LOSSY <Correct value>
+SET('0','1','2' SET('0','1','2' ALL_NON_LOSSY <Correct value>
+SET('0','1','2' SET('0','1','2' ALL_NON_LOSSY <Correct value>
+SET('0','1','2' SET('0','1','2' ALL_NON_LOSSY <Correct error>
+SET('0','1','2' SET('0','1','2' ALL_NON_LOSSY <Correct error>
+TINYINT TINYINT ALL_NON_LOSSY <Correct value>
+TINYINT SMALLINT ALL_NON_LOSSY <Correct value>
+TINYINT MEDIUMINT ALL_NON_LOSSY <Correct value>
+TINYINT INT ALL_NON_LOSSY <Correct value>
+TINYINT BIGINT ALL_NON_LOSSY <Correct value>
+SMALLINT TINYINT ALL_NON_LOSSY <Correct error>
+SMALLINT TINYINT ALL_NON_LOSSY <Correct error>
+SMALLINT TINYINT UNSIGNE ALL_NON_LOSSY <Correct error>
+SMALLINT SMALLINT ALL_NON_LOSSY <Correct value>
+SMALLINT MEDIUMINT ALL_NON_LOSSY <Correct value>
+SMALLINT INT ALL_NON_LOSSY <Correct value>
+SMALLINT BIGINT ALL_NON_LOSSY <Correct value>
+MEDIUMINT TINYINT ALL_NON_LOSSY <Correct error>
+MEDIUMINT TINYINT ALL_NON_LOSSY <Correct error>
+MEDIUMINT TINYINT UNSIGNE ALL_NON_LOSSY <Correct error>
+MEDIUMINT SMALLINT ALL_NON_LOSSY <Correct error>
+MEDIUMINT MEDIUMINT ALL_NON_LOSSY <Correct value>
+MEDIUMINT INT ALL_NON_LOSSY <Correct value>
+MEDIUMINT BIGINT ALL_NON_LOSSY <Correct value>
+INT TINYINT ALL_NON_LOSSY <Correct error>
+INT TINYINT ALL_NON_LOSSY <Correct error>
+INT TINYINT UNSIGNE ALL_NON_LOSSY <Correct error>
+INT SMALLINT ALL_NON_LOSSY <Correct error>
+INT MEDIUMINT ALL_NON_LOSSY <Correct error>
+INT INT ALL_NON_LOSSY <Correct value>
+INT BIGINT ALL_NON_LOSSY <Correct value>
+BIGINT TINYINT ALL_NON_LOSSY <Correct error>
+BIGINT SMALLINT ALL_NON_LOSSY <Correct error>
+BIGINT MEDIUMINT ALL_NON_LOSSY <Correct error>
+BIGINT INT ALL_NON_LOSSY <Correct error>
+BIGINT BIGINT ALL_NON_LOSSY <Correct value>
+CHAR(20) CHAR(20) ALL_NON_LOSSY <Correct value>
+CHAR(20) CHAR(30) ALL_NON_LOSSY <Correct value>
+CHAR(20) CHAR(10) ALL_NON_LOSSY <Correct error>
+CHAR(20) VARCHAR(20) ALL_NON_LOSSY <Correct value>
+CHAR(20) VARCHAR(30) ALL_NON_LOSSY <Correct value>
+CHAR(20) VARCHAR(10) ALL_NON_LOSSY <Correct error>
+CHAR(20) TINYTEXT ALL_NON_LOSSY <Correct value>
+CHAR(20) TEXT ALL_NON_LOSSY <Correct value>
+CHAR(20) MEDIUMTEXT ALL_NON_LOSSY <Correct value>
+CHAR(20) LONGTEXT ALL_NON_LOSSY <Correct value>
+VARCHAR(20) VARCHAR(20) ALL_NON_LOSSY <Correct value>
+VARCHAR(20) VARCHAR(30) ALL_NON_LOSSY <Correct value>
+VARCHAR(20) VARCHAR(10) ALL_NON_LOSSY <Correct error>
+VARCHAR(20) CHAR(30) ALL_NON_LOSSY <Correct value>
+VARCHAR(20) CHAR(10) ALL_NON_LOSSY <Correct error>
+VARCHAR(20) TINYTEXT ALL_NON_LOSSY <Correct value>
+VARCHAR(20) TEXT ALL_NON_LOSSY <Correct value>
+VARCHAR(20) MEDIUMTEXT ALL_NON_LOSSY <Correct value>
+VARCHAR(20) LONGTEXT ALL_NON_LOSSY <Correct value>
+VARCHAR(500) VARCHAR(500) ALL_NON_LOSSY <Correct value>
+VARCHAR(500) VARCHAR(510) ALL_NON_LOSSY <Correct value>
+VARCHAR(500) VARCHAR(255) ALL_NON_LOSSY <Correct error>
+VARCHAR(500) TINYTEXT ALL_NON_LOSSY <Correct error>
+VARCHAR(500) TEXT ALL_NON_LOSSY <Correct value>
+VARCHAR(500) MEDIUMTEXT ALL_NON_LOSSY <Correct value>
+VARCHAR(500) LONGTEXT ALL_NON_LOSSY <Correct value>
+TINYTEXT VARCHAR(500) ALL_NON_LOSSY <Correct value>
+TEXT VARCHAR(500) ALL_NON_LOSSY <Correct error>
+MEDIUMTEXT VARCHAR(500) ALL_NON_LOSSY <Correct error>
+LONGTEXT VARCHAR(500) ALL_NON_LOSSY <Correct error>
+TINYTEXT CHAR(255) ALL_NON_LOSSY <Correct value>
+TINYTEXT CHAR(250) ALL_NON_LOSSY <Correct error>
+TEXT CHAR(255) ALL_NON_LOSSY <Correct error>
+MEDIUMTEXT CHAR(255) ALL_NON_LOSSY <Correct error>
+LONGTEXT CHAR(255) ALL_NON_LOSSY <Correct error>
+TINYTEXT TINYTEXT ALL_NON_LOSSY <Correct value>
+TINYTEXT TEXT ALL_NON_LOSSY <Correct value>
+TEXT TINYTEXT ALL_NON_LOSSY <Correct error>
+DECIMAL(10,5) DECIMAL(10,5) ALL_NON_LOSSY <Correct value>
+DECIMAL(10,5) DECIMAL(10,6) ALL_NON_LOSSY <Correct value>
+DECIMAL(10,5) DECIMAL(11,5) ALL_NON_LOSSY <Correct value>
+DECIMAL(10,5) DECIMAL(11,6) ALL_NON_LOSSY <Correct value>
+DECIMAL(10,5) DECIMAL(10,4) ALL_NON_LOSSY <Correct error>
+DECIMAL(10,5) DECIMAL(9,5) ALL_NON_LOSSY <Correct error>
+DECIMAL(10,5) DECIMAL(9,4) ALL_NON_LOSSY <Correct error>
+FLOAT DECIMAL(10,5) ALL_NON_LOSSY <Correct error>
+DOUBLE DECIMAL(10,5) ALL_NON_LOSSY <Correct error>
+DECIMAL(10,5) FLOAT ALL_NON_LOSSY <Correct error>
+DECIMAL(10,5) DOUBLE ALL_NON_LOSSY <Correct error>
+FLOAT FLOAT ALL_NON_LOSSY <Correct value>
+DOUBLE DOUBLE ALL_NON_LOSSY <Correct value>
+FLOAT DOUBLE ALL_NON_LOSSY <Correct value>
+DOUBLE FLOAT ALL_NON_LOSSY <Correct error>
+BIT(5) BIT(5) ALL_NON_LOSSY <Correct value>
+BIT(5) BIT(6) ALL_NON_LOSSY <Correct value>
+BIT(6) BIT(5) ALL_NON_LOSSY <Correct error>
+BIT(5) BIT(12) ALL_NON_LOSSY <Correct value>
+BIT(12) BIT(5) ALL_NON_LOSSY <Correct error>
+TINYBLOB TINYBLOB ALL_LOSSY <Correct value>
+TINYBLOB BLOB ALL_LOSSY <Correct error>
+TINYBLOB MEDIUMBLOB ALL_LOSSY <Correct error>
+TINYBLOB LONGBLOB ALL_LOSSY <Correct error>
+BLOB TINYBLOB ALL_LOSSY <Correct value>
+BLOB BLOB ALL_LOSSY <Correct value>
+BLOB MEDIUMBLOB ALL_LOSSY <Correct error>
+BLOB LONGBLOB ALL_LOSSY <Correct error>
+MEDIUMBLOB TINYBLOB ALL_LOSSY <Correct value>
+MEDIUMBLOB BLOB ALL_LOSSY <Correct value>
+MEDIUMBLOB MEDIUMBLOB ALL_LOSSY <Correct value>
+MEDIUMBLOB LONGBLOB ALL_LOSSY <Correct error>
+LONGBLOB TINYBLOB ALL_LOSSY <Correct value>
+LONGBLOB BLOB ALL_LOSSY <Correct value>
+LONGBLOB MEDIUMBLOB ALL_LOSSY <Correct value>
+LONGBLOB LONGBLOB ALL_LOSSY <Correct value>
+GEOMETRY BLOB ALL_LOSSY <Correct error>
+BLOB GEOMETRY ALL_LOSSY <Correct error>
+GEOMETRY GEOMETRY ALL_LOSSY <Correct value>
+BIT(1) BIT(1) ALL_LOSSY <Correct value>
+DATE DATE ALL_LOSSY <Correct value>
+ENUM('master',' ENUM('master',' ALL_LOSSY <Correct value>
+CHAR(10) ENUM('master',' ALL_LOSSY <Correct error>
+CHAR(10) SET('master','s ALL_LOSSY <Correct error>
+ENUM('master',' CHAR(10) ALL_LOSSY <Correct error>
+SET('master','s CHAR(10) ALL_LOSSY <Correct error>
+SET('master','s SET('master','s ALL_LOSSY <Correct value>
+SET('master','s SET('master','s ALL_LOSSY <Correct value>
+SET('0','1','2' SET('0','1','2' ALL_LOSSY <Correct value>
+SET('0','1','2' SET('0','1','2' ALL_LOSSY <Correct error>
+SET('0','1','2' SET('0','1','2' ALL_LOSSY <Correct value>
+SET('0','1','2' SET('0','1','2' ALL_LOSSY <Correct value>
+TINYINT TINYINT ALL_LOSSY <Correct value>
+TINYINT SMALLINT ALL_LOSSY <Correct error>
+TINYINT MEDIUMINT ALL_LOSSY <Correct error>
+TINYINT INT ALL_LOSSY <Correct error>
+TINYINT BIGINT ALL_LOSSY <Correct error>
+SMALLINT TINYINT ALL_LOSSY <Correct value>
+SMALLINT TINYINT ALL_LOSSY <Correct value>
+SMALLINT TINYINT UNSIGNE ALL_LOSSY <Correct value>
+SMALLINT SMALLINT ALL_LOSSY <Correct value>
+SMALLINT MEDIUMINT ALL_LOSSY <Correct error>
+SMALLINT INT ALL_LOSSY <Correct error>
+SMALLINT BIGINT ALL_LOSSY <Correct error>
+MEDIUMINT TINYINT ALL_LOSSY <Correct value>
+MEDIUMINT TINYINT ALL_LOSSY <Correct value>
+MEDIUMINT TINYINT UNSIGNE ALL_LOSSY <Correct value>
+MEDIUMINT SMALLINT ALL_LOSSY <Correct value>
+MEDIUMINT MEDIUMINT ALL_LOSSY <Correct value>
+MEDIUMINT INT ALL_LOSSY <Correct error>
+MEDIUMINT BIGINT ALL_LOSSY <Correct error>
+INT TINYINT ALL_LOSSY <Correct value>
+INT TINYINT ALL_LOSSY <Correct value>
+INT TINYINT UNSIGNE ALL_LOSSY <Correct value>
+INT SMALLINT ALL_LOSSY <Correct value>
+INT MEDIUMINT ALL_LOSSY <Correct value>
+INT INT ALL_LOSSY <Correct value>
+INT BIGINT ALL_LOSSY <Correct error>
+BIGINT TINYINT ALL_LOSSY <Correct value>
+BIGINT SMALLINT ALL_LOSSY <Correct value>
+BIGINT MEDIUMINT ALL_LOSSY <Correct value>
+BIGINT INT ALL_LOSSY <Correct value>
+BIGINT BIGINT ALL_LOSSY <Correct value>
+CHAR(20) CHAR(20) ALL_LOSSY <Correct value>
+CHAR(20) CHAR(30) ALL_LOSSY <Correct error>
+CHAR(20) CHAR(10) ALL_LOSSY <Correct value>
+CHAR(20) VARCHAR(20) ALL_LOSSY <Correct error>
+CHAR(20) VARCHAR(30) ALL_LOSSY <Correct error>
+CHAR(20) VARCHAR(10) ALL_LOSSY <Correct value>
+CHAR(20) TINYTEXT ALL_LOSSY <Correct error>
+CHAR(20) TEXT ALL_LOSSY <Correct error>
+CHAR(20) MEDIUMTEXT ALL_LOSSY <Correct error>
+CHAR(20) LONGTEXT ALL_LOSSY <Correct error>
+VARCHAR(20) VARCHAR(20) ALL_LOSSY <Correct value>
+VARCHAR(20) VARCHAR(30) ALL_LOSSY <Correct error>
+VARCHAR(20) VARCHAR(10) ALL_LOSSY <Correct value>
+VARCHAR(20) CHAR(30) ALL_LOSSY <Correct error>
+VARCHAR(20) CHAR(10) ALL_LOSSY <Correct value>
+VARCHAR(20) TINYTEXT ALL_LOSSY <Correct error>
+VARCHAR(20) TEXT ALL_LOSSY <Correct error>
+VARCHAR(20) MEDIUMTEXT ALL_LOSSY <Correct error>
+VARCHAR(20) LONGTEXT ALL_LOSSY <Correct error>
+VARCHAR(500) VARCHAR(500) ALL_LOSSY <Correct value>
+VARCHAR(500) VARCHAR(510) ALL_LOSSY <Correct error>
+VARCHAR(500) VARCHAR(255) ALL_LOSSY <Correct value>
+VARCHAR(500) TINYTEXT ALL_LOSSY <Correct value>
+VARCHAR(500) TEXT ALL_LOSSY <Correct error>
+VARCHAR(500) MEDIUMTEXT ALL_LOSSY <Correct error>
+VARCHAR(500) LONGTEXT ALL_LOSSY <Correct error>
+TINYTEXT VARCHAR(500) ALL_LOSSY <Correct error>
+TEXT VARCHAR(500) ALL_LOSSY <Correct value>
+MEDIUMTEXT VARCHAR(500) ALL_LOSSY <Correct value>
+LONGTEXT VARCHAR(500) ALL_LOSSY <Correct value>
+TINYTEXT CHAR(255) ALL_LOSSY <Correct error>
+TINYTEXT CHAR(250) ALL_LOSSY <Correct value>
+TEXT CHAR(255) ALL_LOSSY <Correct value>
+MEDIUMTEXT CHAR(255) ALL_LOSSY <Correct value>
+LONGTEXT CHAR(255) ALL_LOSSY <Correct value>
+TINYTEXT TINYTEXT ALL_LOSSY <Correct value>
+TINYTEXT TEXT ALL_LOSSY <Correct error>
+TEXT TINYTEXT ALL_LOSSY <Correct value>
+DECIMAL(10,5) DECIMAL(10,5) ALL_LOSSY <Correct value>
+DECIMAL(10,5) DECIMAL(10,6) ALL_LOSSY <Correct error>
+DECIMAL(10,5) DECIMAL(11,5) ALL_LOSSY <Correct error>
+DECIMAL(10,5) DECIMAL(11,6) ALL_LOSSY <Correct error>
+DECIMAL(10,5) DECIMAL(10,4) ALL_LOSSY <Correct value>
+DECIMAL(10,5) DECIMAL(9,5) ALL_LOSSY <Correct value>
+DECIMAL(10,5) DECIMAL(9,4) ALL_LOSSY <Correct value>
+FLOAT DECIMAL(10,5) ALL_LOSSY <Correct value>
+DOUBLE DECIMAL(10,5) ALL_LOSSY <Correct value>
+DECIMAL(10,5) FLOAT ALL_LOSSY <Correct value>
+DECIMAL(10,5) DOUBLE ALL_LOSSY <Correct value>
+FLOAT FLOAT ALL_LOSSY <Correct value>
+DOUBLE DOUBLE ALL_LOSSY <Correct value>
+FLOAT DOUBLE ALL_LOSSY <Correct error>
+DOUBLE FLOAT ALL_LOSSY <Correct value>
+BIT(5) BIT(5) ALL_LOSSY <Correct value>
+BIT(5) BIT(6) ALL_LOSSY <Correct error>
+BIT(6) BIT(5) ALL_LOSSY <Correct value>
+BIT(5) BIT(12) ALL_LOSSY <Correct error>
+BIT(12) BIT(5) ALL_LOSSY <Correct value>
+TINYBLOB TINYBLOB ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+TINYBLOB BLOB ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+TINYBLOB MEDIUMBLOB ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+TINYBLOB LONGBLOB ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+BLOB TINYBLOB ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+BLOB BLOB ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+BLOB MEDIUMBLOB ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+BLOB LONGBLOB ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+MEDIUMBLOB TINYBLOB ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+MEDIUMBLOB BLOB ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+MEDIUMBLOB MEDIUMBLOB ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+MEDIUMBLOB LONGBLOB ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+LONGBLOB TINYBLOB ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+LONGBLOB BLOB ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+LONGBLOB MEDIUMBLOB ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+LONGBLOB LONGBLOB ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+GEOMETRY BLOB ALL_LOSSY,ALL_NON_LOSSY <Correct error>
+BLOB GEOMETRY ALL_LOSSY,ALL_NON_LOSSY <Correct error>
+GEOMETRY GEOMETRY ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+BIT(1) BIT(1) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+DATE DATE ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+ENUM('master',' ENUM('master',' ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+CHAR(10) ENUM('master',' ALL_LOSSY,ALL_NON_LOSSY <Correct error>
+CHAR(10) SET('master','s ALL_LOSSY,ALL_NON_LOSSY <Correct error>
+ENUM('master',' CHAR(10) ALL_LOSSY,ALL_NON_LOSSY <Correct error>
+SET('master','s CHAR(10) ALL_LOSSY,ALL_NON_LOSSY <Correct error>
+SET('master','s SET('master','s ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+SET('master','s SET('master','s ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+SET('0','1','2' SET('0','1','2' ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+SET('0','1','2' SET('0','1','2' ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+SET('0','1','2' SET('0','1','2' ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+SET('0','1','2' SET('0','1','2' ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+TINYINT TINYINT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+TINYINT SMALLINT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+TINYINT MEDIUMINT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+TINYINT INT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+TINYINT BIGINT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+SMALLINT TINYINT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+SMALLINT TINYINT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+SMALLINT TINYINT UNSIGNE ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+SMALLINT SMALLINT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+SMALLINT MEDIUMINT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+SMALLINT INT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+SMALLINT BIGINT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+MEDIUMINT TINYINT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+MEDIUMINT TINYINT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+MEDIUMINT TINYINT UNSIGNE ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+MEDIUMINT SMALLINT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+MEDIUMINT MEDIUMINT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+MEDIUMINT INT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+MEDIUMINT BIGINT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+INT TINYINT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+INT TINYINT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+INT TINYINT UNSIGNE ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+INT SMALLINT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+INT MEDIUMINT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+INT INT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+INT BIGINT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+BIGINT TINYINT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+BIGINT SMALLINT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+BIGINT MEDIUMINT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+BIGINT INT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+BIGINT BIGINT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+CHAR(20) CHAR(20) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+CHAR(20) CHAR(30) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+CHAR(20) CHAR(10) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+CHAR(20) VARCHAR(20) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+CHAR(20) VARCHAR(30) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+CHAR(20) VARCHAR(10) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+CHAR(20) TINYTEXT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+CHAR(20) TEXT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+CHAR(20) MEDIUMTEXT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+CHAR(20) LONGTEXT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+VARCHAR(20) VARCHAR(20) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+VARCHAR(20) VARCHAR(30) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+VARCHAR(20) VARCHAR(10) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+VARCHAR(20) CHAR(30) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+VARCHAR(20) CHAR(10) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+VARCHAR(20) TINYTEXT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+VARCHAR(20) TEXT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+VARCHAR(20) MEDIUMTEXT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+VARCHAR(20) LONGTEXT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+VARCHAR(500) VARCHAR(500) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+VARCHAR(500) VARCHAR(510) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+VARCHAR(500) VARCHAR(255) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+VARCHAR(500) TINYTEXT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+VARCHAR(500) TEXT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+VARCHAR(500) MEDIUMTEXT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+VARCHAR(500) LONGTEXT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+TINYTEXT VARCHAR(500) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+TEXT VARCHAR(500) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+MEDIUMTEXT VARCHAR(500) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+LONGTEXT VARCHAR(500) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+TINYTEXT CHAR(255) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+TINYTEXT CHAR(250) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+TEXT CHAR(255) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+MEDIUMTEXT CHAR(255) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+LONGTEXT CHAR(255) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+TINYTEXT TINYTEXT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+TINYTEXT TEXT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+TEXT TINYTEXT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+DECIMAL(10,5) DECIMAL(10,5) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+DECIMAL(10,5) DECIMAL(10,6) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+DECIMAL(10,5) DECIMAL(11,5) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+DECIMAL(10,5) DECIMAL(11,6) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+DECIMAL(10,5) DECIMAL(10,4) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+DECIMAL(10,5) DECIMAL(9,5) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+DECIMAL(10,5) DECIMAL(9,4) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+FLOAT DECIMAL(10,5) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+DOUBLE DECIMAL(10,5) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+DECIMAL(10,5) FLOAT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+DECIMAL(10,5) DOUBLE ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+FLOAT FLOAT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+DOUBLE DOUBLE ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+FLOAT DOUBLE ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+DOUBLE FLOAT ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+BIT(5) BIT(5) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+BIT(5) BIT(6) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+BIT(6) BIT(5) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+BIT(5) BIT(12) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+BIT(12) BIT(5) ALL_LOSSY,ALL_NON_LOSSY <Correct value>
+DROP TABLE type_conversions;
+call mtr.add_suppression("Slave SQL.*Column 1 of table .test.t1. cannot be converted from type.* error.* 1677");
+connection master;
+DROP TABLE t1;
+connection slave;
+set global slave_type_conversions = @saved_slave_type_conversions;
+include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/rpl_typeconv.test b/mysql-test/suite/binlog_encryption/rpl_typeconv.test
new file mode 100644
index 0000000..9d565c4
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/rpl_typeconv.test
@@ -0,0 +1,81 @@
+#
+# The test was taken from the rpl suite as is
+#
+
+--enable_connect_log
+
+--disable_connect_log
+--source include/have_binlog_format_row.inc
+--source include/master-slave.inc
+--enable_connect_log
+
+connection slave;
+set @saved_slave_type_conversions = @@global.slave_type_conversions;
+CREATE TABLE type_conversions (
+ TestNo INT AUTO_INCREMENT PRIMARY KEY,
+ Source TEXT,
+ Target TEXT,
+ Flags TEXT,
+ On_Master TEXT,
+ On_Slave TEXT,
+ Expected TEXT,
+ Compare INT,
+ Error TEXT);
+
+SELECT @@global.slave_type_conversions;
+SET GLOBAL SLAVE_TYPE_CONVERSIONS='';
+SELECT @@global.slave_type_conversions;
+SET GLOBAL SLAVE_TYPE_CONVERSIONS='ALL_NON_LOSSY';
+SELECT @@global.slave_type_conversions;
+SET GLOBAL SLAVE_TYPE_CONVERSIONS='ALL_LOSSY';
+SELECT @@global.slave_type_conversions;
+SET GLOBAL SLAVE_TYPE_CONVERSIONS='ALL_LOSSY,ALL_NON_LOSSY';
+SELECT @@global.slave_type_conversions;
+--error ER_WRONG_VALUE_FOR_VAR
+SET GLOBAL SLAVE_TYPE_CONVERSIONS='ALL_LOSSY,ALL_NON_LOSSY,NONEXISTING_BIT';
+SELECT @@global.slave_type_conversions;
+
+# Checking strict interpretation of type conversions
+connection slave;
+SET GLOBAL SLAVE_TYPE_CONVERSIONS='';
+source extra/rpl_tests/type_conversions.test;
+
+# Checking lossy integer type conversions
+connection slave;
+SET GLOBAL SLAVE_TYPE_CONVERSIONS='ALL_NON_LOSSY';
+source extra/rpl_tests/type_conversions.test;
+
+# Checking non-lossy integer type conversions
+connection slave;
+SET GLOBAL SLAVE_TYPE_CONVERSIONS='ALL_LOSSY';
+source extra/rpl_tests/type_conversions.test;
+
+# Checking all type conversions
+connection slave;
+SET GLOBAL SLAVE_TYPE_CONVERSIONS='ALL_LOSSY,ALL_NON_LOSSY';
+source extra/rpl_tests/type_conversions.test;
+
+connection slave;
+--echo **** Result of conversions ****
+disable_query_log;
+SELECT RPAD(Source, 15, ' ') AS Source_Type,
+ RPAD(Target, 15, ' ') AS Target_Type,
+ RPAD(Flags, 25, ' ') AS All_Type_Conversion_Flags,
+ IF(Compare IS NULL AND Error IS NOT NULL, '<Correct error>',
+ IF(Compare, '<Correct value>',
+ CONCAT("'", On_Slave, "' != '", Expected, "'")))
+ AS Value_On_Slave
+ FROM type_conversions;
+enable_query_log;
+DROP TABLE type_conversions;
+
+call mtr.add_suppression("Slave SQL.*Column 1 of table .test.t1. cannot be converted from type.* error.* 1677");
+
+connection master;
+DROP TABLE t1;
+sync_slave_with_master;
+
+set global slave_type_conversions = @saved_slave_type_conversions;
+
+--disable_connect_log
+--source include/rpl_end.inc
diff --git a/mysql-test/suite/binlog_encryption/suite.pm b/mysql-test/suite/binlog_encryption/suite.pm
new file mode 100644
index 0000000..f1d5e3a
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/suite.pm
@@ -0,0 +1,18 @@
+package My::Suite::BinlogEncryption;
+
+@ISA = qw(My::Suite);
+
+return "No file key management plugin" unless defined $ENV{FILE_KEY_MANAGEMENT_SO};
+
+sub skip_combinations {
+ my @combinations;
+
+ $skip{'encryption_algorithms.combinations'} = [ 'ctr' ]
+ unless $::mysqld_variables{'version-ssl-library'} =~ /OpenSSL (\S+)/
+ and $1 ge "1.0.1";
+
+ %skip;
+}
+
+bless { };
+
diff --git a/mysql-test/suite/binlog_encryption/testdata.inc b/mysql-test/suite/binlog_encryption/testdata.inc
new file mode 100644
index 0000000..f949911
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/testdata.inc
@@ -0,0 +1,207 @@
+#
+# This include file creates some basic events which should go to the binary log.
+# What happens to the binary log depends on the test which calls the file,
+# and should be checked from the test.
+#
+# Names are intentionally long and ugly, to make grepping more reliable.
+#
+# Some of events are considered unsafe for SBR (not necessarily correctly,
+# but here isn't the place to check the logic), so we just suppress the warning.
+#
+# For those few queries which produce result sets (e.g. ANALYZE, CHECKSUM etc.),
+# we don't care about the result, so it will not be printed to the output.
+
+call mtr.add_suppression("Unsafe statement written to the binary log using statement format since BINLOG_FORMAT = STATEMENT");
+
+#
+# Some DDL
+#
+
+CREATE DATABASE database_name_to_encrypt;
+USE database_name_to_encrypt;
+
+CREATE USER user_name_to_encrypt;
+GRANT ALL ON database_name_to_encrypt.* TO user_name_to_encrypt;
+SET PASSWORD FOR user_name_to_encrypt = PASSWORD('password_to_encrypt');
+
+CREATE TABLE innodb_table_name_to_encrypt (
+ int_column_name_to_encrypt INT AUTO_INCREMENT PRIMARY KEY,
+ timestamp_column_name_to_encrypt TIMESTAMP(6) NULL,
+ blob_column_name_to_encrypt BLOB,
+ virt_column_name_to_encrypt INT AS (int_column_name_to_encrypt % 10) VIRTUAL,
+ pers_column_name_to_encrypt INT AS (int_column_name_to_encrypt) PERSISTENT,
+ INDEX `index_name_to_encrypt`(`timestamp_column_name_to_encrypt`)
+) ENGINE=InnoDB
+ PARTITION BY RANGE (int_column_name_to_encrypt)
+ SUBPARTITION BY KEY (int_column_name_to_encrypt)
+ SUBPARTITIONS 2 (
+ PARTITION partition0_name_to_encrypt VALUES LESS THAN (100),
+ PARTITION partition1_name_to_encrypt VALUES LESS THAN (MAXVALUE)
+ )
+;
+
+CREATE TABLE myisam_table_name_to_encrypt (
+ int_column_name_to_encrypt INT AUTO_INCREMENT PRIMARY KEY,
+ char_column_name_to_encrypt VARCHAR(255),
+ datetime_column_name_to_encrypt DATETIME,
+ text_column_name_to_encrypt TEXT
+) ENGINE=MyISAM;
+
+CREATE TABLE aria_table_name_to_encrypt (
+ int_column_name_to_encrypt INT AUTO_INCREMENT PRIMARY KEY,
+ varchar_column_name_to_encrypt VARCHAR(1024),
+ enum_column_name_to_encrypt ENUM(
+ 'enum_value1_to_encrypt',
+ 'enum_value2_to_encrypt'
+ ),
+ timestamp_column_name_to_encrypt TIMESTAMP(6) NULL,
+ blob_column_name_to_encrypt BLOB
+) ENGINE=Aria;
+
+CREATE TRIGGER trigger_name_to_encrypt
+ AFTER INSERT ON myisam_table_name_to_encrypt FOR EACH ROW
+ INSERT INTO aria_table_name_to_encrypt (varchar_column_name_to_encrypt)
+ VALUES (NEW.char_column_name_to_encrypt);
+
+CREATE DEFINER=user_name_to_encrypt VIEW view_name_to_encrypt
+ AS SELECT * FROM innodb_table_name_to_encrypt;
+
+CREATE FUNCTION func_name_to_encrypt (func_parameter_to_encrypt INT)
+ RETURNS VARCHAR(64)
+ RETURN 'func_result_to_encrypt';
+
+--delimiter $$
+CREATE PROCEDURE proc_name_to_encrypt (
+ IN proc_in_parameter_to_encrypt CHAR(32),
+ OUT proc_out_parameter_to_encrypt INT
+)
+BEGIN
+ DECLARE procvar_name_to_encrypt CHAR(64) DEFAULT 'procvar_val_to_encrypt';
+ DECLARE cursor_name_to_encrypt CURSOR FOR
+ SELECT virt_column_name_to_encrypt FROM innodb_table_name_to_encrypt;
+ DECLARE EXIT HANDLER FOR NOT FOUND
+ BEGIN
+ SET @stmt_var_to_encrypt = CONCAT(
+ "SELECT
+ IF (RAND()>0.5,'enum_value2_to_encrypt','enum_value1_to_encrypt')
+ FROM innodb_table_name_to_encrypt
+ INTO OUTFILE '", proc_in_parameter_to_encrypt, "'");
+ PREPARE stmt_to_encrypt FROM @stmt_var_to_encrypt;
+ EXECUTE stmt_to_encrypt;
+ DEALLOCATE PREPARE stmt_to_encrypt;
+ END;
+ OPEN cursor_name_to_encrypt;
+ proc_label_to_encrypt: LOOP
+ FETCH cursor_name_to_encrypt INTO procvar_name_to_encrypt;
+ END LOOP;
+ CLOSE cursor_name_to_encrypt;
+END $$
+--delimiter ;
+
+CREATE SERVER server_name_to_encrypt
+ FOREIGN DATA WRAPPER mysql
+ OPTIONS (HOST 'host_name_to_encrypt');
+
+--let $_cur_con= $CURRENT_CONNECTION
+--connect (con1,localhost,user_name_to_encrypt,password_to_encrypt,database_name_to_encrypt)
+CREATE TEMPORARY TABLE tmp_table_name_to_encrypt (
+ float_column_name_to_encrypt FLOAT,
+ binary_column_name_to_encrypt BINARY(64)
+);
+--disconnect con1
+--connection $_cur_con
+
+CREATE INDEX index_name_to_encrypt
+ ON myisam_table_name_to_encrypt (datetime_column_name_to_encrypt);
+
+ALTER DATABASE database_name_to_encrypt CHARACTER SET utf8;
+
+ALTER TABLE innodb_table_name_to_encrypt
+ MODIFY timestamp_column_name_to_encrypt TIMESTAMP NOT NULL
+ DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
+;
+
+ALTER ALGORITHM=MERGE VIEW view_name_to_encrypt
+ AS SELECT * FROM innodb_table_name_to_encrypt;
+
+RENAME TABLE innodb_table_name_to_encrypt TO new_table_name_to_encrypt;
+ALTER TABLE new_table_name_to_encrypt RENAME TO innodb_table_name_to_encrypt;
+
+#
+# Some DML
+#
+
+--disable_warnings
+
+set @user_var1_to_encrypt= 'dyncol1_val_to_encrypt';
+set @user_var2_to_encrypt= 'dyncol2_name_to_encrypt';
+
+INSERT INTO view_name_to_encrypt VALUES
+ (1, NOW(6), COLUMN_CREATE('dyncol1_name_to_encrypt',@user_var1_to_encrypt), NULL, NULL),
+ (2, NOW(6), COLUMN_CREATE(@user_var2_to_encrypt,'dyncol2_val_to_encrypt'), NULL, NULL)
+;
+--delimiter $$
+BEGIN NOT ATOMIC
+ DECLARE counter_name_to_encrypt INT DEFAULT 0;
+ START TRANSACTION;
+ WHILE counter_name_to_encrypt<12 DO
+ INSERT INTO innodb_table_name_to_encrypt
+ SELECT NULL, NOW(6), blob_column_name_to_encrypt, NULL, NULL
+ FROM innodb_table_name_to_encrypt
+ ORDER BY int_column_name_to_encrypt;
+ SET counter_name_to_encrypt = counter_name_to_encrypt+1;
+ END WHILE;
+ COMMIT;
+ END
+$$
+--delimiter ;
+
+INSERT INTO myisam_table_name_to_encrypt
+ SELECT NULL, 'char_literal_to_encrypt', NULL, 'text_to_encrypt';
+INSERT INTO myisam_table_name_to_encrypt (char_column_name_to_encrypt)
+ SELECT char_column_name_to_encrypt FROM myisam_table_name_to_encrypt;
+INSERT INTO myisam_table_name_to_encrypt (char_column_name_to_encrypt)
+ SELECT char_column_name_to_encrypt FROM myisam_table_name_to_encrypt;
+INSERT INTO myisam_table_name_to_encrypt (char_column_name_to_encrypt)
+ SELECT char_column_name_to_encrypt FROM myisam_table_name_to_encrypt;
+
+CALL proc_name_to_encrypt('file_name_to_encrypt',@useless_var_to_encrypt);
+
+TRUNCATE TABLE aria_table_name_to_encrypt;
+
+LOAD DATA INFILE 'file_name_to_encrypt' INTO TABLE aria_table_name_to_encrypt
+ (enum_column_name_to_encrypt);
+
+--let datadir= `SELECT @@datadir`
+--replace_result $datadir <DATADIR>
+eval LOAD DATA LOCAL INFILE '$datadir/database_name_to_encrypt/file_name_to_encrypt'
+ INTO TABLE aria_table_name_to_encrypt (enum_column_name_to_encrypt);
+--remove_file $datadir/database_name_to_encrypt/file_name_to_encrypt
+
+UPDATE view_name_to_encrypt SET blob_column_name_to_encrypt =
+ COLUMN_CREATE('dyncol1_name_to_encrypt',func_name_to_encrypt(0))
+;
+
+DELETE FROM aria_table_name_to_encrypt ORDER BY int_column_name_to_encrypt LIMIT 10;
+
+--enable_warnings
+
+#
+# Other statements
+#
+
+--disable_result_log
+ANALYZE TABLE myisam_table_name_to_encrypt;
+CHECK TABLE aria_table_name_to_encrypt;
+CHECKSUM TABLE innodb_table_name_to_encrypt, myisam_table_name_to_encrypt;
+--enable_result_log
+RENAME USER user_name_to_encrypt to new_user_name_to_encrypt;
+REVOKE ALL PRIVILEGES, GRANT OPTION FROM new_user_name_to_encrypt;
+
+#
+# Cleanup
+#
+
+DROP DATABASE database_name_to_encrypt;
+DROP USER new_user_name_to_encrypt;
+DROP SERVER server_name_to_encrypt;
diff --git a/mysql-test/suite/binlog_encryption/testdata.opt b/mysql-test/suite/binlog_encryption/testdata.opt
new file mode 100644
index 0000000..b0c5b9c
--- /dev/null
+++ b/mysql-test/suite/binlog_encryption/testdata.opt
@@ -0,0 +1 @@
+--partition
diff --git a/mysql-test/unstable-tests b/mysql-test/unstable-tests
index 8c79ab0..afd7a3c 100644
--- a/mysql-test/unstable-tests
+++ b/mysql-test/unstable-tests
@@ -89,6 +89,10 @@ binlog.binlog_xa_recover : MDEV-8517 - Extra checkpoint
#----------------------------------------------------------------
+binlog_encryption.* : Added in 10.1.20
+
+#----------------------------------------------------------------
+
connect.tbl : MDEV-9844, MDEV-10179 - sporadic crashes, valgrind warnings, wrong results
connect.jdbc : New test, added on 2016-07-15
connect.jdbc-new : New test, added on 2016-07-14

[Maria-developers] MDEV-11360 Dynamic SQL: DEFAULT as a bind parameter
by Alexander Barkov 27 Nov '16
Hello Sanja,
Please review MDEV-11360.
Thanks!

[Maria-developers] Please review MDEV-11357 Split Item_cache::get_cache() into virtual methods in Type_handler
by Alexander Barkov 26 Nov '16
Hello Sanja,
Can you please review a patch for MDEV-11357?
This is another prerequisite task for:
MDEV-4912 Add a plugin to field types (column types)
Thanks!

[Maria-developers] MDEV-11297: Re why tree_delete doesn't work in Item_func_group_concat code
by Sergey Petrunia 25 Nov '16
Hi Varun,
Part#1 of the followup to our discussion about removing from the TREE object.
So I tried to get tree_remove() to work.
- I enabled deletion from the tree based on your analysis.
- then, I got the tree_remove code to compile by making these changes:
diff --git a/sql/item_sum.cc b/sql/item_sum.cc
index e137720..8bcd7c6 100644
--- a/sql/item_sum.cc
+++ b/sql/item_sum.cc
@@ -3835,10 +3835,13 @@ bool Item_func_group_concat::add()
return 1;
if(limit_clause && (tree->elements_in_tree > row_limit+offset_limit))
{
- //tree_search_edge(tree,tree->parents,&tree->parents,
- //offsetof(TREE_ELEMENT, left));
- //tree_delete(tree, table->record[0] + table->s->null_bytes, 0,
- // tree->custom_arg);
+ TREE_ELEMENT *parents[MAX_TREE_HEIGHT+1];
+ TREE_ELEMENT **pos;
+ void *key;
+ key= tree_search_edge(tree, parents, &pos,
+ offsetof(TREE_ELEMENT, left));
+ uint tree_key_length= table->s->reclength - table->s->null_bytes;
+ tree_delete(tree, key, tree_key_length, tree->custom_arg);
}
}
/*
@@ -4036,7 +4039,10 @@ bool Item_func_group_concat::setup(THD *thd)
thd->variables.sortbuff_size/16), 0,
tree_key_length,
group_concat_key_cmp_with_order, NULL, (void*) this,
- MYF(MY_THREAD_SPECIFIC));
+ //MYF(MY_THREAD_SPECIFIC));
+ limit_clause? MYF(MY_TREE_WITH_DELETE) : MYF(0));
+ // psergey-note: it passes MY_THREAD_SPECIFIC, while init_tree
+ // actually checks for MY_TREE_WITH_DELETE!
}
if (distinct)
It still didn't remove the element!
I started debugging, ended up here:
#0 group_concat_key_cmp_with_order (arg=0x7fffc78604d0, key1=0x7fffc78595a8, key2=0x7fffc78595a8) at /home/psergey/dev-git/10.2-varun/sql/item_sum.cc:3464
#1 0x00005555563d54e3 in tree_delete (tree=0x7fffc7860620, key=0x7fffc78595a8, key_size=11, custom_arg=0x7fffc78604d0) at /home/psergey/dev-git/10.2-varun/mysys/tree.c:293
#2 0x0000555555e14546 in Item_func_group_concat::add (this=0x7fffc78604d0) at /home/psergey/dev-git/10.2-varun/sql/item_sum.cc:3844
#3 0x0000555555e15b8b in Aggregator_simple::add (this=0x7fffc78632a8) at /home/psergey/dev-git/10.2-varun/sql/item_sum.h:679
#4 0x0000555555b52e89 in Item_sum::aggregator_add (this=0x7fffc78604d0) at /home/psergey/dev-git/10.2-varun/sql/item_sum.h:527
#5 0x0000555555b48a98 in update_sum_func (func_ptr=0x7fffc78619b0) at /home/psergey/dev-git/10.2-varun/sql/sql_select.cc:23240
and the code at the end of group_concat_key_cmp_with_order says:
/*
We can't return 0 because in that case the tree class would remove this
item as double value. This would cause problems for case-changes and
if the returned values are not the same we do the sort on.
*/
return 1;
So, group_concat_key_cmp_with_order() never returns 0, which means
tree_delete() can't find the element it wants to delete, and that's why no
element is ever deleted.
(The rest of discussion is coming in separate emails).
BR
Sergei
--
Sergei Petrunia, Software Developer
MariaDB Corporation | Skype: sergefp | Blog: http://s.petrunia.net/blog

[Maria-developers] MDEV-11337 Split Item::save_in_field() into virtual methods in Type_handler
by Alexander Barkov 25 Nov '16
Hello Nirbhay,
Can you please review a patch for 10.3:
MDEV-11337 Split Item::save_in_field() into virtual methods in Type_handler
It also automatically fixed two problems:
MDEV-11331 Wrong result for INSERT INTO t1 (datetime_field) VALUES
(hybrid_function_of_TIME_data_type)
MDEV-11333 Expect "Impossible where condition" for WHERE
timestamp_field>=DATE_ADD(TIMESTAMP'9999-01-01 00:00:00',INTERVAL 1000 YEAR)
because the new code is now symmetric for all data types.
Thanks!

[Maria-developers] Please review MDEV-11347 Move add_create_index_prepare(), add_key_to_list(), set_trigger_new_row(), set_local_variable(), set_system_variable(), create_item_for_sp_var() as methods to LEX
by Alexander Barkov 24 Nov '16
Hello Sanja,
Please review a patch for MDEV-11347.
It turns a few other function into methods in LEX,
to be able to reuse them easier.
Thanks!

[Maria-developers] MDEV-11344 Split Arg_comparator::set_compare_func() into virtual methods in Type_handler
by Alexander Barkov 24 Nov '16
Hello Sanja,
Can you please review a patch for MDEV-11344?
Thanks.

[Maria-developers] Please review MDEV-11330 Split Item_func_hybrid_field_type::val_xxx() into methods in Type_handler
by Alexander Barkov 22 Nov '16
Hello Vicențiu,
Please review a patch for MDEV-11330.
Thanks!

[Maria-developers] Please review MDEV-11298 also fixing GIS bugs MDEV-9405 and MDEV-9425
by Alexander Barkov 21 Nov '16
Hello Alexey,
Please review a patch for:
MDEV-11302 Add class Type_ext_attributes and
Type_handler::join_type_ext_attributes()
A detailed description can be found in the task ticket:
https://jira.mariadb.org/browse/MDEV-11302
This patch also fixes the problems reported in:
MDEV-9405 Hybrid functions, SP do not preserve geometry type
MDEV-9425 Hybrid functions and UNION do not preserve spatial REF_SYSTEM_ID
Some calls for geometry_type() and/or srid() were forgotten in the old code.
The new code replaces calls for geometry_type() and srid()
to a generic type_ext_attributes() and makes maintaining/adding
of similar data type specific attributes easier.
The new class Type_ext_attributes will be used in a few field creation
methods in Type_handler later.
Thanks!

Re: [Maria-developers] [Commits] 41a12f9: MDEV-8320 Allow index usage for DATE(datetime_column) = const.
by Sergey Petrunia 21 Nov '16
In-Reply-To: <20160928105123.6D948140DDC(a)nebo.localdomain>
Hi Alexey,
Thanks for your patience in waiting for the review. Please find it below.
On Wed, Sep 28, 2016 at 02:50:19PM +0400, Alexey Botchkov wrote:
> revision-id: 41a12f990519fb68eaa66ecc6860985471e6ba5a (mariadb-10.1.8-264-g41a12f9)
> parent(s): 28f441e36aaaec15ce7d447ef709fad7fbc7cf7d
> committer: Alexey Botchkov
> timestamp: 2016-09-28 14:48:54 +0400
> message:
>
> MDEV-8320 Allow index usage for DATE(datetime_column) = const.
>
> Test for 'sargable functions' added.
>
First, t/range.test crashes after I apply the patch. MTR output is here:
https://gist.github.com/spetrunia/d10165820664e0d18d4a667d44d226ee
but I've got the crash on two different machines, so it should be easy to repeat.
> diff --git a/sql/item_cmpfunc.h b/sql/item_cmpfunc.h
> index 6d432bd..516bb07 100644
> --- a/sql/item_cmpfunc.h
> +++ b/sql/item_cmpfunc.h
> @@ -136,6 +136,14 @@ class Item_bool_func :public Item_int_func
> {
> protected:
> /*
> + Some functions modify it's arguments for the optimizer.
> + So for example the condition 'Func(fieldX) = constY' turned into
> + 'fieldX = cnuR(constY)' so that optimizer can use an index on fieldX.
> + */
What's cnuR?
Ok, I eventually got it, but the comments should not have such puzzles.
> + Item *opt_args[3];
> + uint opt_arg_count;
> +
> + /*
> +static Item_field *get_local_field (Item *field)
> +{
> + Item *ri= field->real_item();
> + return (ri->type() == Item::FIELD_ITEM
> + && !(field->used_tables() & OUTER_REF_TABLE_BIT)
> + && !((Item_field *)ri)->get_depended_from()) ? (Item_field *) ri : 0;
> +}
Please fix indentation and add comments.
Does this function do what is_local_field does, or there is some difference?
> +
> +
> +static Item_field *field_in_sargable_func(Item *fn)
> +{
> + fn= fn->real_item();
> +
> + if (fn->type() == Item::FUNC_ITEM &&
> + strcmp(((Item_func *)fn)->func_name(), "cast_as_date") == 0)
> +
> + {
> + Item_date_typecast *dt= (Item_date_typecast *) fn;
> + return get_local_field(dt->arguments()[0]);
> + }
> + return 0;
Please use NULL instead of 0, and !strcmp() instead of strcmp()=0.
> @@ -5036,6 +5060,25 @@ Item_func_like::add_key_fields(JOIN *join, KEY_FIELD **key_fields,
> }
>
>
> +bool Item_bool_rowready_func2::add_extra_key_fields(THD *thd,
> + JOIN *join, KEY_FIELD **key_fields,
> + uint *and_level,
> + table_map usable_tables,
> + SARGABLE_PARAM **sargables)
> +{
> + Item_field *f;
> + if ((f= field_in_sargable_func(args[0])) && args[1]->const_item())
What is the difference between add_key_fields and add_extra_key_fields? Any
cases where one should call one but not the other?
Please also do indentation as coding style specifies.
> diff --git a/sql/item_timefunc.cc b/sql/item_timefunc.cc
> index 41dc967..3124444 100644
> --- a/sql/item_timefunc.cc
> +++ b/sql/item_timefunc.cc
> @@ -2569,6 +2569,39 @@ bool Item_date_typecast::get_date(MYSQL_TIME *ltime, ulonglong fuzzy_date)
> }
>
>
> +bool Item_date_typecast::create_reverse_func(enum Functype cmp_type,
> + THD *thd, Item *r_arg, uint *a_cnt, Item** a)
> +{
We need a specification of what exactly this function does, and a usage
scenario in the comment.
This function actually creates multiple (up to 3?) functions. If one has a
condition
DATE(t1.d) < '2000-01-04'
then we get
(gdb) p ((Item*)cond)->opt_arg_count
$37 = 3
(gdb) p dbug_print_item(((Item*)cond)->opt_args[0])
$38 = 0x555557083e20 <dbug_item_print_buf> "t1.d"
(gdb) p dbug_print_item(((Item*)cond)->opt_args[1])
$39 = 0x555557083e20 <dbug_item_print_buf> "day_begin('2000-01-19')"
(gdb) p dbug_print_item(((Item*)cond)->opt_args[2])
$40 = 0x555557083e20 <dbug_item_print_buf> "day_end('2000-01-19')"
which makes sense, but the description is lacking. Probably the name
"create_reverse_func" is not good, because 1. multiple functions are created
and 2. neither of them is the reverse.
I can't suggest a better name at the moment, though. Let's both think about how
to make this code clearer for an uninformed reader.
BR
Sergei
--
Sergei Petrunia, Software Developer
MariaDB Corporation | Skype: sergefp | Blog: http://s.petrunia.net/blog

Re: [Maria-developers] MDEV-11245 Move prepare_create_field and sp_prepare_create_field() as methods to Column_definition
by Alexander Barkov 20 Nov '16
Hi Sanja,
On 11/20/2016 12:29 AM, Oleksandr Byelkin wrote:
> Hi!
>
> Patch is OK to push.
Thanks for reviewing!
> Actually it is good sign that 'refefence->field'
> turned to 'field' now!
Yeah, this makes the code easier to read.
Greetings!
>
> On Fri, Nov 18, 2016 at 12:43 PM, Alexander Barkov <bar(a)mariadb.org
> <mailto:bar@mariadb.org>> wrote:
>
> Hello Sanja,
>
> can you please review a patch for MDEV-11245 for 10.3?
>
> It's a small refactoring intended to make patches for these tasks
> look better:
>
> - MDEV-10577 sql_mode=ORACLE: %TYPE in variable declarations
> - MDEV-10914 ROW data type for stored routine variables
>
> Thanks!
>
>

[Maria-developers] MDEV-11245 Move prepare_create_field and sp_prepare_create_field() as methods to Column_definition
by Alexander Barkov 18 Nov '16
Hello Sanja,
can you please review a patch for MDEV-11245 for 10.3?
It's a small refactoring intended to make patches for these tasks look
better:
- MDEV-10577 sql_mode=ORACLE: %TYPE in variable declarations
- MDEV-10914 ROW data type for stored routine variables
Thanks!

17 Nov '16
Hello Alexey,
I have a question about REF_SYSTEM_ID.
DROP TABLE IF EXISTS t1;
CREATE TABLE t1 (a POINT REF_SYSTEM_ID=10);
SELECT G_TABLE_NAME,G_GEOMETRY_COLUMN,SRID FROM
INFORMATION_SCHEMA.GEOMETRY_COLUMNS WHERE G_TABLE_NAME='t1';
The above script returns:
+--------------+-------------------+------+
| G_TABLE_NAME | G_GEOMETRY_COLUMN | SRID |
+--------------+-------------------+------+
| t1 | a | 10 |
+--------------+-------------------+------+
This looks correct. The REF_SYSTEM_ID (which is 10) is correctly
displayed in the SRID column.
But now if I do:
SHOW CREATE TABLE t1;
It prints:
+-------+-------------------------------------------------------------------------------------+
| Table | Create Table
|
+-------+-------------------------------------------------------------------------------------+
| t1 | CREATE TABLE `t1` (
`a` point DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
+-------+-------------------------------------------------------------------------------------+
It does not display the REF_SYSTEM_ID part of the column definition.
Looks like a bug from a glance.
Can you comment please?

[Maria-developers] Please review MDEV-11298 Split Item_func_hex::val_str_ascii() into virtual methods in Type_handler
by Alexander Barkov 17 Nov '16
Hello Alexey,
Please review a patch for MDEV-11298 (for 10.3).
Thanks.

[Maria-developers] Please review MDEV-11294 Move definitions of Derivation, DTCollation, Type_std_attributes from field.h and item.h to sql_type.h
by Alexander Barkov 16 Nov '16
Hello Alexey,
Please review a patch for MDEV-11294.
Thanks!
Hi, Igor!
On 11/15/2016 08:23 AM, Igor Babaev wrote:
> commit 9ed264a391b160ef84121c8699bb3ff61c0059ce
> Author: Igor Babaev <igor(a)askmonty.org>
> Commit: Igor Babaev <igor(a)askmonty.org>
>
> Fixed bug mdev-11072.
> In a general case the conditions with outer fields cannot
> be pushed into materialized views / derived tables.
> However if the outer field in the condition refers to a
> single row table then the condition may be pushable.
> In this case a special care should be taken for outer
> fields when pushing the condition into a materialized view /
> derived table.
Thank you a lot, OK to push!
[skip]

[Maria-developers] Mdev-10715 -- Galera: Replicate MariaDB GTID to other nodes in the cluster
by Sachin Setiya 15 Nov '16
Hi Nirbhay, Serg, Kristian,
In this mail I am answering some questions raised by Kristian, and giving a
blueprint for this task.
1. Questions of Kristian
<knielsen> sachin_setiya_7: so maybe the problem is - that a node
broadcasts its write set before the commit order has been determined?
I do not think , this is the problem. Galera enforces the commit order.
Yes, it broadcast write set in prepare phase. but it also
guarantees that t1->t2 order will be maintained in all participating N
nodes.
<knielsen> sachin_setiya_7: how is the galera internal transaction id
allocated and broadcast?
I am here assuming that we are talking about gtid-sequence no.
Suppose our initial seqno is S. So basically at this time all N have same
sequence no.
Some transaction T is executed at node Ni .It broadcast the writeset with
its current sequence no S.
At all Node Nj (including Ni).It receives this message. It checks some
conditions
Like it “totally ordered action”. If yes then Nj updates its sequence no to
+ 1.
Here is the relevant code.
if (gu_likely(GCS_ACT_TORDERED == rcvd->act.type &&
GCS_GROUP_PRIMARY == group->state &&
group->nodes[sender_idx].status >= GCS_NODE_STATE_DONOR
&&
!(group->frag_reset && local) &&
commonly_supported_version)) {
/* Common situation -
* increment and assign act_id only for totally ordered actions
* and only in PRIM (skip messages while in state exchange) */
rcvd->id = ++group->act_id_;
}
In function
static inline ssize_t
gcs_group_handle_act_msg (gcs_group_t* const group,
const gcs_act_frag_t* const frg,
const gcs_recv_msg_t* const msg,
struct gcs_act_rcvd* const rcvd,
bool commonly_supported_version)
So basically it is like certification (generated at each node), but done at
quite an early phase.
Blueprint of the task: we can do something like the Galera GTID. We will
take the initial sequence number from the server, add one more variable in
gcs_group_t named s_sequence_no, and increment it at each node. We also
have to create a GTID event and append it to the message received at Nj,
so that at later stages wsrep_apply_cb() can take care of the GTID.
Please let me know what you think.
--
Regards
Sachin Setiya
Software Engineer at MariaDB

[Maria-developers] Changing the error message for ER_LOCK_WAIT_TIMEOUT
by Sergey Petrunia 12 Nov '16
Hi Sergei and everyone,
MariaDB defines ER_LOCK_WAIT_TIMEOUT as
share/errmsg-utf8.txt: eng "Lock wait timeout exceeded; try restarting transaction"
facebook/mysql-5.6 has an enhancement: it also shows what kind of lock is held:
share/errmsg-utf8.txt: eng "Lock wait timeout exceeded; try restarting transaction: %-.256s"
the new error messages have more info and look like this:
https://gist.github.com/spetrunia/266272c384a1b43081572e1ba2baf3f3
note that MyRocks also provides extra information.
So, the questions are:
- Is it (generally) possible to change error message texts in 10.2 still?
- Can/should we change the ER_LOCK_WAIT_TIMEOUT error text?
- (non-question) I assume that adding another error code with the new error
text is not a good solution: two error codes for the same error will be very
confusing.
The number of times ER_LOCK_WAIT_TIMEOUT is used in the source is actually
quite small: https://gist.github.com/spetrunia/2bc2ed7040a930d75b39162becbc7963
(25 occurrences, and most of them actually don't care about the error message).
There are lots of .result files to update, though.
Any opinions?
BR
Sergei
--
Sergei Petrunia, Software Developer
MariaDB Corporation | Skype: sergefp | Blog: http://s.petrunia.net/blog

11 Nov '16
MariaDB -
On the 10.1 branch, a grep in mysql-test for encrypt.binlog returns:
./r/mysqld--help.result:180: --encrypt-binlog Encrypt binary logs
(including relay logs)
./r/mysqld--help.result:1179:encrypt-binlog FALSE
./suite/sys_vars/r/sysvars_server_embedded.result:698:VARIABLE_NAME
ENCRYPT_BINLOG
./suite/sys_vars/r/sysvars_server_notembedded.result:712:VARIABLE_NAME
ENCRYPT_BINLOG
Thus it does not look like the feature has actual tests. How do you
know it works?
Thanks,
--
Laurynas

Re: [Maria-developers] Please review: support for --force-restart in MTR
by Sergey Petrunia 10 Nov '16
On Mon, Nov 07, 2016 at 06:53:30AM +0100, Sergei Golubchik wrote:
> On November 7, 2016 1:34:18 AM GMT+01:00, Sergey Petrunia <sergey(a)mariadb.com> wrote:
> >Hi Sergei, Elena
> >
> >I'm not sure who maintains mysql-test-run.pl but you two seem to have
> >contributed to it in the past, this is why I'm addressing this to you.
> >
> >I'm going to get "MTR v2" to support --force-restart, just like MTR v1
> >did;
> >
> >http://lists.askmonty.org/pipermail/commits/2016-November/010062.html
> >http://lists.askmonty.org/pipermail/commits/2016-November/010063.html
> >
> >Since I don't really understand this MTRv1 vs MTRv2 change (and why
> >some things
> >are in v1 but not in v2, etc), I wanted to check this change with you.
>
> No, please, don't add that. I've removed force-restart and all
> other pseudo-arguments few years ago. There are many
> ways to guarantee a restart without it - add some dummy
> option to .opt file, add a .cnf file, add a .sh file, or - that's
> what I did in old tests that needed force-restart (only about
> 5-10% out of those, that used it!) - restart the server from
> inside the test file.
Ok, I've reverted the above two patches, will restart the server from the .test
file.
BR
Sergei
--
Sergei Petrunia, Software Developer
MariaDB Corporation | Skype: sergefp | Blog: http://s.petrunia.net/blog
We currently have more than 200 branches in the MariaDB Server github repo,
most of which look like they are no longer used. I would like to delete the
unused ones to reduce clutter, e.g. in `git branch -la` or the drop-downs on
the github webpage.
Below is a list of a little less than two-thirds of the branches that I
determined to be probably unused. I was fairly conservative in my criteria,
ie. if there was reasonable doubt, I omitted a branch from the list to
delete.
Please take a look at the list and see if there is any branch that should
_not_ be deleted from the github repository. Or if someone objects to
deleting anything on principle grounds, let me know as well, of course.
(I Cc'ed people that were the last to commit on a branch to be deleted).
Note that if one of these branches should become needed again later,
eg. some bb-XXX branch not used for a long time, there is no problem
re-creating it. The only problem with removing a branch should be if it
contains contents that will be needed in the future, and which is not
available anywhere else (and I tried to avoid putting such branches on the
list in the first place).
Also note that once these are deleted, a simple `git pull` will not remove
them from the local clone. Running `git fetch --prune` will effect
this.
If there are no objections, I will do the deletions in two weeks.
- Kristian.
-----------------------------------------------------------------------
These branches have not been updated for a long time, and looked like they
were not used any more. They are annotated with the date and the committer
of the last commit:
origin/bb-10.2-decimal 5 months ago Monty <monty(a)mariadb.org>
origin/bb-10.1-jan2 8 months ago Jan Lindström <jan.lindstrom(a)mariadb.com>
origin/bb-10.1-xtrabackup 8 months ago Vladislav Vaintroub <wlad(a)mariadb.com>
origin/10.2-window_simple 9 months ago Sergei Petrunia <psergey(a)askmonty.org>
origin/bb-10.2-vicentiu-create 10 months ago Vicențiu Ciorbaru <vicentiu(a)mariadb.org>
origin/bb-10.0-galera-jan 11 months ago Jan Lindström <jan.lindstrom(a)mariadb.com>
origin/10.2-travis-ci 12 months ago Otto Kekäläinen <otto(a)mariadb.org>
origin/bb-10.1-systemd 1 year ago Sergey Vojtovich <svoj(a)mariadb.org>
origin/bb-10.1-jan-encryption 1 year, 1 month ago Jan Lindström <jan.lindstrom(a)mariadb.com>
origin/10.1-spider 1 year, 1 month ago Michael Widenius <monty(a)mariadb.org>
origin/bb-svoj 1 year, 1 month ago Sergey Vojtovich <svoj(a)mariadb.org>
origin/wip-binlog-encryption 1 year, 2 months ago Sergei Golubchik <serg(a)mariadb.org>
origin/bb-5.5-inno 1 year, 3 months ago Jan Lindström <jan.lindstrom(a)mariadb.com>
origin/bb-10.1-default 1 year, 3 months ago Monty <monty(a)mariadb.org>
origin/10.0-FusionIO-Galera 1 year, 4 months ago Jan Lindström <jan.lindstrom(a)mariadb.com>
origin/10.0-custombld 1 year, 4 months ago Kristian Nielsen <knielsen(a)knielsen-hq.org>
origin/10.1-window 1 year, 4 months ago Vicentiu Ciorbaru <vicentiu(a)mariadb.org>
origin/10.0-FusionIO 1 year, 5 months ago Jan Lindström <jan.lindstrom(a)mariadb.com>
origin/bb-10.1-galera-merge 1 year, 5 months ago Nirbhay Choubey <nirbhay(a)mariadb.com>
origin/bb-10.1-binlog_row_image 1 year, 5 months ago Vicențiu Ciorbaru <vicentiu(a)mariadb.org>
origin/bb-5.5-galera-merge 1 year, 7 months ago Nirbhay Choubey <nirbhay(a)mariadb.com>
origin/bb-10.1-explain-analyze 1 year, 7 months ago Sergei Petrunia <psergey(a)askmonty.org>
origin/10.0-power 1 year, 7 months ago Sergey Vojtovich <svoj(a)mariadb.org>
origin/bb-10.0-slave 1 year, 8 months ago Jan Lindström <jan.lindstrom(a)mariadb.com>
origin/bb-10.1-icheck 1 year, 8 months ago Jan Lindström <jan.lindstrom(a)mariadb.com>
origin/bb-10.1-logf 1 year, 8 months ago Jan Lindström <jan.lindstrom(a)mariadb.com>
origin/bb-10.0-validation 1 year, 8 months ago Sergei Golubchik <serg(a)mariadb.org>
origin/bb-10.1-sema 1 year, 9 months ago Jan Lindström <jan.lindstrom(a)skysql.com>
origin/sanja-old-10.0-batch 1 year, 10 months ago Oleksandr Byelkin <sanja(a)mariadb.com>
origin/bb-10.1-atomics 1 year, 10 months ago Sergei Golubchik <serg(a)mariadb.org>
origin/bb-lf-no-oom 1 year, 10 months ago Sergei Golubchik <serg(a)mariadb.org>
origin/bb-10.1-encryption 1 year, 11 months ago Jan Lindström <jan.lindstrom(a)skysql.com>
origin/bb-page-compress 1 year, 11 months ago Jan Lindström <jan.lindstrom(a)skysql.com>
origin/bb-10.1-eperi 1 year, 11 months ago Michael Widenius <monty(a)mariadb.org>
origin/bb-10.1-explain-json 1 year, 11 months ago Sergei Petrunia <psergey(a)askmonty.org>
origin/bb-lf-iterator 1 year, 11 months ago Sergey Vojtovich <svoj(a)mariadb.org>
origin/bb-10.1-4ksectors 2 years ago Jan Lindström <jan.lindstrom(a)skysql.com>
origin/bb-10.1-set-statement 2 years ago Oleksandr Byelkin <sanja(a)mariadb.com>
origin/bb-set-statement 2 years, 1 month ago Oleksandr Byelkin <sanja(a)mariadb.com>
origin/bb-10.1-orderby-fixes 2 years, 1 month ago Sergei Petrunia <psergey(a)askmonty.org>
origin/bb-10.1-galera 2 years, 2 months ago Jan Lindström <jan.lindstrom(a)skysql.com>
origin/bb-deprecate-condpush-flag 2 years, 2 months ago Sergei Golubchik <serg(a)mariadb.org>
origin/bb-sysvars 2 years, 2 months ago Sergei Golubchik <serg(a)mariadb.org>
origin/bb-userstat 2 years, 2 months ago Sergei Golubchik <serg(a)mariadb.org>
origin/bb-10.1-innodb-defrag 2 years, 3 months ago Jan Lindström <jan.lindstrom(a)skysql.com>
origin/bb-no-ndb 2 years, 3 months ago Sergei Golubchik <serg(a)mariadb.org>
origin/10.1-fus 2 years, 4 months ago Jan Lindström <jan.lindstrom(a)skysql.com>
origin/bb-10.1-fus 2 years, 4 months ago Jan Lindström <jan.lindstrom(a)skysql.com>
origin/bb-10.1-fusionio 2 years, 4 months ago Jan Lindström <jan.lindstrom(a)skysql.com>
origin/bb-10.1-acl-cleanup 2 years, 5 months ago Sergei Golubchik <serg(a)mariadb.org>
origin/bb-10.1-cmake4plugins 2 years, 5 months ago Sergei Golubchik <serg(a)mariadb.org>
origin/bb-10.1-rediscover 2 years, 5 months ago Sergei Golubchik <serg(a)mariadb.org>
origin/bb-10.1-relro 2 years, 5 months ago Sergei Golubchik <serg(a)mariadb.org>
origin/bb-10.1-set-sysvars 2 years, 5 months ago Sergei Golubchik <serg(a)mariadb.org>
origin/10.1-explain-analyze 2 years, 5 months ago Sergei Petrunia <psergey(a)askmonty.org>
origin/10.1-explain-json 2 years, 5 months ago Sergei Petrunia <psergey(a)askmonty.org>
These branches all contain an MDEV in their name that has been closed:
origin/10.1-MDEV-6877-binlog_row_image MDEV-6877 Closed
origin/10.1-MDEV-7811 MDEV-7811 Closed
origin/10.1-MDEV-7813 MDEV-7813 Closed
origin/10.2-MDEV-3944 MDEV-3944 Closed
origin/10.2-MDEV-8348 MDEV-8348 Closed
origin/10.2-MDEV-8931 MDEV-8931 Closed
origin/10.2-MDEV-9114 MDEV-9114 Closed
origin/bb-10.0.22-mdev8989 MDEV-8989 Closed
origin/bb-10.0-galera-mdev8496-hf MDEV-8496 Closed
origin/bb-10.0-mdev-10341 MDEV-10341 Closed
origin/bb-10.0-mdev7474 MDEV-7474 Closed
origin/bb-10.1-mdev5429 MDEV-5429 Closed
origin/bb-10.1-MDEV-6066 MDEV-6066 Closed
origin/bb-10.1-mdev6657 MDEV-6657 Closed
origin/bb-10.1-mdev6657-r2 MDEV-6657 Closed
origin/bb-10.1-MDEV-7006 MDEV-7006 Closed
origin/bb-10.1-mdev7110 MDEV-7110 Closed
origin/bb-10.1-mdev7572 MDEV-7572 Closed
origin/bb-10.1-mdev-8063 MDEV-8063 Closed
origin/bb-10.1-MDEV-8241 MDEV-8241 Closed
origin/bb-10.1-MDEV-8346 MDEV-8346 Closed
origin/bb-10.1-mdev8646 MDEV-8646 Closed
origin/bb-10.1-mdev8989 MDEV-8989 Closed
origin/bb-10.1-mdev9007 MDEV-9007 Closed
origin/bb-10.1-mdev9021 MDEV-9021 Closed
origin/bb-10.1-mdev-9304 MDEV-9304 Closed
origin/bb-10.1-mdev9362 MDEV-9362 Closed
origin/bb-10.1-mdev-9468 MDEV-9468 Closed
origin/bb-10.2-mdev10813 MDEV-10813 Closed
origin/bb-10.2-mdev5492 MDEV-5492 Closed
origin/bb-10.2-mdev-5535 MDEV-5535 Closed
origin/bb-10.2-MDEV-6720 MDEV-6720 Closed
origin/bb-10.2-mdev7660 MDEV-7660 Closed
origin/bb-10.2-mdev8646 MDEV-8646 Closed
origin/bb-10.2-mdev8789 MDEV-8789 Closed
origin/bb-10.2-mdev9857 MDEV-9857 Closed
origin/bb-10.2-mdev9864 MDEV-9864 Closed
origin/bb-5.5-mdev6735 MDEV-6735 Closed
origin/bb-5.5-MDEV-7445-7565-7846 MDEV-7445 Closed
origin/bb-5.5-mdev-7912 MDEV-7912 Closed
origin/bb-5.5-mdev-9304 MDEV-9304 Closed
origin/bb-MDEV-5317 MDEV-5317 Closed
origin/bb-mdev6089 MDEV-6089 Closed
origin/bb-mdev7715 MDEV-7715 Closed
origin/bb-mdev7728 MDEV-7728 Closed
origin/bb-mdev7793 MDEV-7793 Closed
origin/bb-mdev7894 MDEV-7894 Closed
origin/bb-mdev7895 MDEV-7895 Closed
origin/bb-mdev7922 MDEV-7922 Closed
origin/bb-vicentiu-mdev7978 MDEV-7978 Closed
origin/hf-10.1-mdev9021 MDEV-9021 Closed
origin/hf-10.1-mdev9853 MDEV-9853 Closed
origin/mdev-60-merge MDEV-60 Closed
origin/MDEV-7015 MDEV-7015 Closed
origin/MDEV-8909 MDEV-8909 Closed
origin/mdev-8380 MDEV-8380 Closed
These branches are fully merged into an existing main tree (5.5, 10.0, 10.1,
10.2, 5.5-galera, or 10.0-galera), and have not been updated for a while:
origin/MDEV-8947 5 months ago Kristian Nielsen <knielsen(a)knielsen-hq.org>
origin/bb-fast-connect 5 months ago Sergei Golubchik <serg(a)mariadb.org>
origin/10.2-test1234 6 months ago Galina Shalygina <galashalygina(a)gmail.com>
origin/10.2-connector-c-integ 7 months ago Vladislav Vaintroub <wlad(a)mariadb.com>
origin/bb-10.2-mdev9543 7 months ago Sergei Petrunia <psergey(a)askmonty.org>
origin/10.2-ssl 9 months ago Vladislav Vaintroub <wlad(a)mariadb.com>
origin/svoj-gittest 1 year, 6 months ago Sergey Vojtovich <svoj(a)mariadb.org>
origin/10.0-defragment 1 year, 6 months ago Vicentiu Ciorbaru <vicentiu(a)mariadb.org>
origin/bb-5.5-knielsen 1 year, 8 months ago Kristian Nielsen <knielsen(a)knielsen-hq.org>
origin/bb-power 1 year, 10 months ago Sergey Vojtovich <svoj(a)mariadb.org>
origin/bb-10.1-explain-json 1 year, 11 months ago Sergei Petrunia <psergey(a)askmonty.org>
origin/bb-10.1-orderby-fixes 2 years, 1 month ago Sergei Petrunia <psergey(a)askmonty.org>
origin/bb-10.1-galera 2 years, 2 months ago Jan Lindström <jan.lindstrom(a)skysql.com>
origin/bb-10.1-fusionio 2 years, 4 months ago Jan Lindström <jan.lindstrom(a)skysql.com>
origin/bb-10.1-cmake4plugins 2 years, 5 months ago Sergei Golubchik <serg(a)mariadb.org>
origin/10.1-explain-analyze 2 years, 5 months ago Sergei Petrunia <psergey(a)askmonty.org>
origin/10.1-explain-json 2 years, 5 months ago Sergei Petrunia <psergey(a)askmonty.org>

[Maria-developers] Please review a fix for MDEV-10780 (and likely for MDEV-10806 and MDEV-10910)
by Alexander Barkov 10 Nov '16
Hello Sanja, Elena, Wlad,
Sanja, please review a patch for MDEV-10780.
The patch is for 10.1. I could not reproduce it in 10.0.
But it should be safe to push it into 10.0 anyway.
This patch most likely also fixes MDEV-10806 (assigned to Sanja)
and MDEV-10910 (assigned to Elena).
Thanks!
Hi,
I'm new on this mailing list, so sorry if it's not the right place.
I'm trying to build MariaDB 10.3 on Windows 10 with Visual Studio Express 2014.
Compilation failed with many "fatal error C1189: #error: Macro definition of snprintf conflicts with Standard Library function declaration" errors.
To correct this, in ma_global.h, I replaced the line
#define snprintf _snprintf
by
#ifdef _MSC_VER
#if _MSC_VER < 1900
#define snprintf _snprintf
#endif
#else
#define snprintf _snprintf
#endif
and the build succeeded.
Best regards.
J.Brauge
-----Original Message-----
From: Maria-developers [mailto:maria-developers-bounces+j.brauge=qualiac.com@lists.launchpad.net] On behalf of Sergei Golubchik
Sent: Friday, 4 November 2016 11:41
To: maria-developers(a)lists.launchpad.net
Subject: [Maria-developers] security spring cleaning in MariaDB org on github
Hi,
Now that GitHub has per-user ownership rights and suggests migrating away from the legacy admin teams
https://help.github.com/articles/migrating-your-previous-admin-teams-to-the…
we're performing some spring cleaning in this area.
The legacy admin team (named "Core") is removed. Most of its members lost admin access to the org. Currently only the MariaDB Foundation CEO and a few board members (those who have actually used admin access
recently) retained their admin rights.
Everyone who was in the Core team should still have write access to repositories; if you find that this is not the case, please complain asap.
If you think you need admin access, please request it (again).
Note that
- only members of the organization can have it, being in the
Developers group is a plus
- 2FA is required for all admins (and highly recommended for
all other members)
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org
_______________________________________________
Mailing list: https://launchpad.net/~maria-developers
Post to : maria-developers(a)lists.launchpad.net
Unsubscribe : https://launchpad.net/~maria-developers
More help : https://help.launchpad.net/ListHelp

[Maria-developers] Please review MDEV-11245 Move prepare_create_field and sp_prepare_create_field() as methods to Column_definition
by Alexander Barkov 07 Nov '16
Hello Monty,
I'm done with:
MDEV-10577 sql_mode=ORACLE: %TYPE in variable declarations
But I'd like to move a certain part of MDEV-10577 into a separate patch.
I created an MDEV entry for this:
MDEV-11245 Move prepare_create_field and sp_prepare_create_field() as
methods to Column_definition
Can you please review it?
See attached.
There's also a copy here: hasky.askmonty.org:/tmp/MDEV-11245.diff
The patch is for 10.3.
Thanks.

[Maria-developers] Community Involvement - GSoC Mentor Summit impressions
by Vicențiu Ciorbaru 07 Nov '16
Hi everyone!
Since MariaDB has completed another Google Summer of Code program this
year, with some great projects as well, the MariaDB Foundation was able to
send two mentors to the summit in California. I was one of the
people who had the privilege to go.
There were a lot of talks there about community involvement and how to
encourage people to contribute. I've picked the ideas that I've found to be
most interesting and useful and have blogged [1] about them. I intend to
try and apply some of them if possible. Since this is a community matter
(albeit more GSoC related), it makes sense to ask for feedback from the
community. I think some of the ideas are at least partly implemented but we
can always do better.
If you're interested, let's have a discussion on this.
Vicențiu
[1]
http://vicentiu.ciorbaru.io/community-involvement-gsoc-mentor-summit-impres…

07 Nov '16
Hi,
Now that GitHub has per-user ownership rights and suggests migrating
away from the legacy admin teams
https://help.github.com/articles/migrating-your-previous-admin-teams-to-the…
we're performing some spring cleaning in this area.
The legacy admin team (named "Core") is removed. Most of its members
lost admin access to the org. Currently only the MariaDB Foundation CEO
and a few board members (those who have actually used admin access
recently) retained their admin rights.
Everyone who was in the Core team should still have write access to
repositories; if you find that this is not the case, please complain
asap.
If you think you need admin access, please request it (again).
Note that
- only members of the organization can have it, being in the
Developers group is a plus
- 2FA is required for all admins (and highly recommended for
all other members)
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org

07 Nov '16
Hi Sergei, Elena
I'm not sure who maintains mysql-test-run.pl but you two seem to have
contributed to it in the past, this is why I'm addressing this to you.
I'm going to get "MTR v2" to support --force-restart, just like MTR v1 did;
http://lists.askmonty.org/pipermail/commits/2016-November/010062.html
http://lists.askmonty.org/pipermail/commits/2016-November/010063.html
Since I don't really understand this MTRv1 vs MTRv2 change (and why some things
are in v1 but not in v2, etc), I wanted to check this change with you.
Thanks,
Sergei
--
Sergei Petrunia, Software Developer
MariaDB Corporation | Skype: sergefp | Blog: http://s.petrunia.net/blog

Re: [Maria-developers] [MariaDB/server] MDEV-11065 - Compressed binary log (#247)
by Kristian Nielsen 03 Nov '16
vinchen <notifications(a)github.com> writes:
> The new code is here:
> https://github.com/vinchen/server/commits/GCSAdmin-10.2-binlog-compressed2-2
> And added two fixed:
> 1.Avoid overflowing buffers in case of corrupt events
> 2.Check the compressed algorithm.
Looks fine, thanks for fixing this.
I have now merged and pushed this to MariaDB 10.2.
> > Rows_log_event::write_data_body(). I *think* the reason for that is that
> > the
> > SQL thread never sees the compressed events (they are uncompressed by the
> > IO
> > thread), but I would like your confirmation that my understanding is
> > correct.
> > 3. Did you think about testing that BINLOG statements with compressed
> As mentioned above, the compressed events would uncompressed in constructor
> in SQL thread.
Right, thanks, I understand now.
So the BINLOG statements in the output of mysqlbinlog have the uncompressed
data. This makes sense, just like the SQL queries are uncompressed before
being output by mysqlbinlog.
So in fact, one change I suggested is wrong, to accept compressed event
types in BINLOG statements. So I reverted this change again, and also added
some test cases:
https://github.com/MariaDB/server/commit/56a041cde657e5618c519a3c50e8075136…
BTW, I think this is recently merged code (from the delayed replication
feature) that would not have appeared in your original patch.
Thanks for the explanation to help me understand this.
I also fixed a .result file - this is a failure that would only show up when
running the test suite with --embedded:
https://github.com/MariaDB/server/commit/3c0ff6153f75bb8e63c08ff3c828235512…
So I think everything should be fine now and this should appear in MariaDB
10.2.3, if I understand correctly.
Once again thanks for the patch. It was a pleasure to see a replication
addition that was so well done, and with so much attention to detail.
- Kristian.

[Maria-developers] MDEV-11219 main.null fails in buldbot and outside with ps-protocol
by Alexander Barkov 03 Nov '16
Hello Sergei,
Please review a patch for MDEV-11219.
Thanks!

[Maria-developers] Please review MDEV-10811 Change design from "Item is Type_handler" to "Item has Type_handler"
by Alexander Barkov 03 Nov '16
Hello Alexey,
Please review a patch for 10.3 for
MDEV-10811 Change design from "Item is Type_handler" to "Item has
Type_handler
Thanks.

03 Nov '16
GCSAdmin <notifications(a)github.com> writes:
> We add new event types to support compress the binlog as follow:
> QUERY_COMPRESSED_EVENT,
> WRITE_ROWS_COMPRESSED_EVENT_V1,
> UPDATE_ROWS_COMPRESSED_EVENT_V1,
> DELETE_POWS_COMPRESSED_EVENT_V1,
> WRITE_ROWS_COMPRESSED_EVENT,
> UPDATE_ROWS_COMPRESSED_EVENT,
> DELETE_POWS_COMPRESSED_EVENT
>
Thanks for the patch!
Overall, I'm much impressed with the quality, I see a lot of attention to
detail. Below, I have a number of comments and suggestions, but they are all
minor for a patch of this complexity.
A number of the comments are style fixes or simple changes, and I found it
easier to just do the changes to the source for you to review. I hope that
is ok. I rebased the series without intermediate merge to simplify review to
a single patch. The rebase with my suggested changes on top is here:
https://github.com/knielsen/server/commits/GCSAdmin-10.2-binlog-compressed2…
Please check the changes I made and let me know if you disagree with
anything or I misunderstood something. Changes are mostly:
- A bunch of code style (indentation, lines <= 80 chars, clarify comments,
and so on).
- Change LOG_EVENT_IS_QUERY() etc. from macros to static inline functions.
- check_event_type() updated (new code merged into 10.2 recently).
- Minor .result file update, also from code merged into 10.2 recently.
I also have the following questions/suggestions:
1. I think the code should sanity-check the compressed event header, to
check that it does not access outside the event buffer in corrupt
events. Eg. if lenlen is 7 and there is less than 7 bytes available in the
event. I know that replication code in general is not robust to corrupt
events, but it seems best that new code avoids overflowing buffers in case
of corrupt event data.
2. There are some places where the compressed event types are not added to
switch() or if() statements, mainly related to minimal binlog images, I
think: Rows_log_event::read_write_bitmaps_cmp(),
Rows_log_event::get_data_size(), Rows_log_event::do_apply_event(),
Rows_log_event::write_data_body(). I *think* the reason for that is that the
SQL thread never sees the compressed events (they are uncompressed by the IO
thread), but I would like your confirmation that my understanding is
correct.
3. Did you think about testing that BINLOG statements with compressed events
can execute correctly? This happens with mysqlbinlog | mysql, when there are
(compressed) row-based events in the binlog. I'm wondering if this could
expose compressed events to code that normally runs in the SQL thread and
expects to have events uncompressed for it by the IO thread?
A few detailed comments on the patch below (most comments are done as
changes in the above-linked git branch).
Otherwise, after answer to above 3 questions, I think the patch looks good
to go into 10.2.
- Kristian.
-----------------------------------------------------------------------
> +#define BINLOG_COMPRESSED_HEADER_LEN 1
> +#define BINLOG_COMPRESSED_ORIGINAL_LENGTH_MAX_BYTES 4
> +/**
> + Compressed Record
> + Record Header: 1 Byte
> + 0 Bit: Always 1, mean compressed;
> + 1-3 Bit: Reversed, compressed algorithm. Always 0, means zlib
> + 4-7 Bit: Bytes of "Record Original Length"
> + Record Original Length: 1-4 Bytes
> + Compressed Buf:
I did not understand this. Why "reversed"? It seems to be bits 0-2 that have
the bytes of "Record Original Length" and bit 7 that has the '1' bit meaning
compression enabled? Maybe this can be clarified.
> +int query_event_uncompress(const Format_description_log_event *description_event, bool contain_checksum,
> + const char *src, char* buf, ulong buf_size, bool* is_malloc,
> + char **dst, ulong *newlen)
> +{
> + ulong len = uint4korr(src + EVENT_LEN_OFFSET);
> + const char *tmp = src;
> +
> + DBUG_ASSERT((uchar)src[EVENT_TYPE_OFFSET] == QUERY_COMPRESSED_EVENT);
> +
> + uint8 common_header_len= description_event->common_header_len;
> + uint8 post_header_len= description_event->post_header_len[QUERY_COMPRESSED_EVENT-1];
> +
> + tmp += common_header_len;
> +
> + uint db_len = (uint)tmp[Q_DB_LEN_OFFSET];
> + uint16 status_vars_len= uint2korr(tmp + Q_STATUS_VARS_LEN_OFFSET);
> +
> + tmp += post_header_len + status_vars_len + db_len + 1;
> +
> + uint32 un_len = binlog_get_uncompress_len(tmp);
> + *newlen = (tmp - src) + un_len;
This is one place I would like to see a check on the data in the
event. Maybe just that 'len' is larger than 1+(tmp[0]&7). Or maybe simply
that len is at least 10, the minimal size for compressed events.
> +int row_log_event_uncompress(const Format_description_log_event *description_event, bool contain_checksum,
> + const char *src, char* buf, ulong buf_size, bool* is_malloc,
> + char **dst, ulong *newlen)
> +{
> + uint32 un_len = binlog_get_uncompress_len(tmp);
> + *newlen = (tmp - src) + un_len;
Another place where I'd like a check against looking outside the event
buffer.
> +int binlog_buf_uncompress(const char *src, char *dst, uint32 len, uint32 *newlen)
> +{
> + if((src[0] & 0x80) == 0)
> + {
> + return 1;
> + }
> +
> + uint32 lenlen = src[0] & 0x07;
> + uLongf buflen = *newlen;
> + if(uncompress((Bytef *)dst, &buflen, (const Bytef*)src + 1 + lenlen, len) != Z_OK)
Again check on length.

02 Nov '16
Hello Ian,
In 10.2 we made a few changes under terms of these bug reports:
MDEV-9874 LOAD XML INFILE does not handle well broken multi-byte characters
MDEV-9823 LOAD DATA INFILE silently truncates incomplete byte sequences
MDEV-9842 LOAD DATA INFILE does not work well with a TEXT column when
using sjis
MDEV-9811 LOAD DATA INFILE does not work well with gbk in some cases
MDEV-9824 LOAD DATA does not work with multi-byte strings in LINES
TERMINATED BY when IGNORE is specified
The idea is that the LOAD FILE behavior is now more consistent with
INSERT/UPDATE behavior, to store as much data as possible.
When a broken byte sequence is found, LOAD DATA now replaces the broken
bytes with question marks and keeps loading the value. In older versions
LOAD truncated the value at the leftmost broken byte.
So suppose I create a file with these bytes:
SELECT CONCAT('aaa',0xF09F988E,'bbb') INTO OUTFILE '/tmp/test.txt';
where {{0xF09F988E}} is UTF8MB4 encoding for the character "U+1F60E
SMILING FACE WITH SUNGLASSES".
and now erroneously load it as a 3-byte utf8:
DROP TABLE IF EXISTS t1;
CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET utf8);
LOAD DATA INFILE '/tmp/test.txt' INTO TABLE t1 CHARACTER SET utf8;
SHOW WARNINGS;
SELECT * FROM t1;
(notice CHARACTER SET utf8 instead of CHARACTER SET utf8mb4 in LOAD).
In 5.5 the above script would return:
+---------+------+-------------------------------------------------------------------------+
| Level   | Code | Message                                                                 |
+---------+------+-------------------------------------------------------------------------+
| Warning | 1366 | Incorrect string value: '\xF0\x9F\x98\x8Ebb...' for column 'a' at row 1 |
+---------+------+-------------------------------------------------------------------------+
+------+
| a    |
+------+
| aaa  |
+------+
In 10.2 it returns the same warning, but loads more data:
+------------+
| a |
+------------+
| aaa????bbb |
+------------+
Valerii suggests that we document these changes more precisely,
mentioning that these are actually incompatible changes!
The sad thing is that there is even yet another different behavior in
10.0.26:
MDEV-11217 Regression: LOAD DATA INFILE started to fail with an error
We're currently thinking about what to do with that.
Folks,
At the MariaDB Developer meetup there was some discussion about the
limits of compatibility especially regarding replication.
I've started a section on this page:
https://mariadb.com/kb/en/mariadb/mariadb-vs-mysql-compatibility/
If you can identify what can/cannot work in various replication modes
between MySQL and MariaDB please document it on this page.
If you cannot edit this for some reason reply to the MariaDB Discuss
list and eventually the results can be correlated and the page can be
updated.
As usual other material changes to improving the KB documentation is
welcome.
Daniel

[Maria-developers] MDEV-7660 MySQL WL#6671 "Improve scalability by not using thr_lock.c locks for InnoDB tables"
by Sergei Golubchik 29 Oct '16
Hi, Sergey,
I think it's fine. I had a few questions though, see below:
> commit 47aa5f9
> Author: Sergey Vojtovich <svoj(a)mariadb.org>
> Date: Fri May 6 13:44:07 2016 +0400
>
> MDEV-7660 - MySQL WL#6671 "Improve scalability by not using thr_lock.c locks
> for InnoDB tables"
>
> Don't use thr_lock.c locks for InnoDB tables.
>
> Let HANDLER READ call external_lock() even if SE is not going to be locked by
> THR_LOCK. This fixes at least main.implicit_commit failure.
>
> Removed tests for BUG#45143 and BUG#55930 which cover InnoDB + THR_LOCK. To
> operate properly these tests require code flow to go through THR_LOCK debug
> sync points, which is not the case after this patch. These tests are removed
> by WL#6671 as well. An alternative is to port them to different storage engine.
>
> For the very same reason partition_debug_sync test was adjusted to use MyISAM.
>
> diff --git a/sql/sql_handler.cc b/sql/sql_handler.cc
> index e8ade81..76107ae 100644
> --- a/sql/sql_handler.cc
> +++ b/sql/sql_handler.cc
> @@ -752,11 +752,12 @@ bool mysql_ha_read(THD *thd, TABLE_LIST *tables,
> tables->table= table; // This is used by fix_fields
> table->pos_in_table_list= tables;
>
> - if (handler->lock->lock_count > 0)
> + if (handler->lock->table_count > 0)
> {
> int lock_error;
>
> - handler->lock->locks[0]->type= handler->lock->locks[0]->org_type;
> + if (handler->lock->lock_count > 0)
> + handler->lock->locks[0]->type= handler->lock->locks[0]->org_type;
I don't understand this code in mysql_ha_read() at all :(
even before your changes
> /* save open_tables state */
> TABLE* backup_open_tables= thd->open_tables;
> diff --git a/storage/xtradb/handler/ha_innodb.h b/storage/xtradb/handler/ha_innodb.h
> index 2027a59..efb8120 100644
> --- a/storage/xtradb/handler/ha_innodb.h
> +++ b/storage/xtradb/handler/ha_innodb.h
> @@ -218,6 +218,7 @@ class ha_innobase: public handler
> bool can_switch_engines();
> uint referenced_by_foreign_key();
> void free_foreign_key_create_info(char* str);
> + uint lock_count(void) const;
> THR_LOCK_DATA **store_lock(THD *thd, THR_LOCK_DATA **to,
> enum thr_lock_type lock_type);
> void init_table_handle_for_HANDLER();
>
> commit 5645626
> Author: Sergey Vojtovich <svoj(a)mariadb.org>
> Date: Tue May 24 12:25:56 2016 +0400
>
> MDEV-7660 - MySQL WL#6671 "Improve scalability by not using thr_lock.c locks
> for InnoDB tables"
>
> - InnoDB now acquires shared lock for HANDLER ... READ
Why?
> - LOCK TABLES now disables autocommit implicitely
> - UNLOCK TABLES now re-enables autocommit implicitely if it was disabled by
> LOCK TABLES
> - adjusted test cases to this new behavior
>
> diff --git a/sql/sql_base.cc b/sql/sql_base.cc
> index 3091bd6..7d39484 100644
> --- a/sql/sql_base.cc
> +++ b/sql/sql_base.cc
> @@ -2824,6 +2824,8 @@ Locked_tables_list::unlock_locked_tables(THD *thd)
> request for metadata locks and TABLE_LIST elements.
> */
> reset();
> + if (thd->variables.option_bits & OPTION_AUTOCOMMIT)
> + thd->variables.option_bits&= ~(OPTION_NOT_AUTOCOMMIT);
1. Was it possible - before your change - for OPTION_AUTOCOMMIT and
OPTION_NOT_AUTOCOMMIT to be out of sync?
2. What if someone changes @@autocommit under LOCK TABLES?
Do you have a test for that?
3. Do you need to set SERVER_STATUS_AUTOCOMMIT here?
> }
>
>
> diff --git a/mysql-test/t/innodb_mysql_lock.test b/mysql-test/t/innodb_mysql_lock.test
> index cb57c09..85ba418 100644
> --- a/mysql-test/t/innodb_mysql_lock.test
> +++ b/mysql-test/t/innodb_mysql_lock.test
> @@ -150,14 +150,16 @@ let $wait_condition=
> --source include/wait_condition.inc
> LOCK TABLES t1 READ;
> SELECT release_lock('bug42147_lock');
> +let $wait_condition=
> + SELECT COUNT(*) > 0 FROM information_schema.processlist
> + WHERE state = 'executing'
> + AND info = 'INSERT INTO t1 SELECT get_lock(\'bug42147_lock\', 60)';
> +--source include/wait_condition.inc
> +UNLOCK TABLES;
I don't understand the original test case. But after your changes it
actually makes sense :)
>
> connection default;
> --reap
>
> -connection con2;
> -UNLOCK TABLES;
> -
> -connection default;
> disconnect con2;
> DROP TABLE t1;
>
> diff --git a/mysql-test/r/partition_explicit_prune.result b/mysql-test/r/partition_explicit_prune.result
> index 765803d..7b9c53d 100644
> --- a/mysql-test/r/partition_explicit_prune.result
> +++ b/mysql-test/r/partition_explicit_prune.result
> @@ -281,7 +281,7 @@ UNLOCK TABLES;
> SELECT * FROM INFORMATION_SCHEMA.SESSION_STATUS
> WHERE VARIABLE_NAME LIKE 'HANDLER_%' AND VARIABLE_VALUE > 0;
> VARIABLE_NAME VARIABLE_VALUE
> -HANDLER_COMMIT 2
> +HANDLER_COMMIT 3
why is that?
> HANDLER_READ_RND_NEXT 52
> HANDLER_TMP_WRITE 72
> HANDLER_WRITE 2
> diff --git a/mysql-test/suite/handler/handler.inc b/mysql-test/suite/handler/handler.inc
> index b1e881f..8cad6a5 100644
> --- a/mysql-test/suite/handler/handler.inc
> +++ b/mysql-test/suite/handler/handler.inc
> @@ -1091,6 +1091,12 @@ connection default;
> --reap
> drop table t2;
>
> +# This test expects "handler t1 read a next" to get blocked on table level
> +# lock so that further "drop table t1" can break the lock and close handler.
> +# This notification mechanism doesn't work with InnoDB since it bypasses
> +# table level locks.
what happens for InnoDB then?
I'd expect "handler t1 read a next" to get blocked inside the InnoDB.
what does then "drop table t1" do?
> +if ($engine_type != 'InnoDB')
> +{
> --echo #
> --echo # Bug #46224 HANDLER statements within a transaction might
> --echo # lead to deadlocks
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org

[Maria-developers] Patch for unaligned word access in CONNECT storage engine
by Kristian Nielsen 28 Oct '16
Who should be contacted about issues in the CONNECT storage engine?
The attached patch is from Debian Bug#838914
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=838914
Apparently the code does direct unaligned accesses of word data. This works
fine on the x86 architecture, but on some other architectures (like MIPS),
it causes a bus error.
In other places in the server code, the similar issue is handled correctly
with uint4korr() and similar macros (though these also deal with byte
order).
I think the patch (or something similar) is good and should be
upstreamed. But I am not sure how the CONNECT storage engine is maintained -
should this go directly into MariaDB? If there is an upstream-maintained
CONNECT storage engine, it should probably go there first?
- Kristian.

[Maria-developers] MariaDB connectors - dynamic libraries for Fedora (and all OSs that forks from it)
by Michal Schorm 27 Oct '16
Hello all,
I am the new guy trying to add the MariaDB connectors to Fedora. (And in the
future, probably most of the MariaDB stuff will rest on me in Fedora.)
So far I encountered one great issue.
In Fedora, we only use dynamic libraries (which makes, for example,
maintaining different packages with same library much much easier).
But you guys, you don't seem like you are all for it.
Right now, I stopped at mariadb-connector-odbc package.
There is an issue which makes it unusable when built as a dynamic library.
At that point it uses functions from the Connector/C library, but the
Connector/C library seems to keep its secrets to itself - concretely, the
wrapper functions.
Here are impossible solutions:
* use static libraries in Fedora (can't - dynamic libs are one of the
fundamental things in Fedora, same priority as using only FOSS)
* link odbc connector towards mariadb-libs (can't - connectors are meant as
replacement for mariadb-libs)
Here are some solutions that are at least not denied by default:
* start to support building all of your components as dynamic libraries
(Best solution for us (for Fedora), because the closer to upstream code we
are, the better. Needless to say, over 40 OSs fork somehow from Fedora)
* export everything from C connector library (I assume, that is not what
you want to do)
* export just what is used by other components - mainly wrappers - some
examples for ODBC connector: my_no_flags_free, my_malloc,
my_charset_utf8_general_ci, my_strndup, my_realloc, my_strdup,
my_snprintf (Compromise
for both sides, but can rise more issues later)
Are there any reasons to not support dynamic libraries?
What could be the best solution from your point of view?
Cheers,
Michal
--
Michal Schorm
Core Services - Databases Team
mail: mschorm(a)redhat.com
Brno-IRC: mschorm

Re: [Maria-developers] [Commits] b976f9f: MDEV-11126: Crash while altering persistent virtual column
by Sergei Golubchik 26 Oct '16
Hi, Jan!
Ok to push!
On Oct 25, Jan Lindström wrote:
> revision-id: b976f9f9ed8ce8d023b80f4d09c5ad5f74aae1fb (mariadb-10.0.27-14-gb976f9f)
> parent(s): 3321f1adc74b54e7534000c06eeca166730ccc4a
> author: Jan Lindström
> committer: Jan Lindström
> timestamp: 2016-10-25 15:08:15 +0300
> message:
>
> MDEV-11126: Crash while altering persistent virtual column
>
> Problem was that if old virtual column is computed and stored there
> was no check if new column is really virtual column.
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org

Re: [Maria-developers] 65b0617: MDEV-10846 Running mysqldump backup twice returns error: Table
by Sergei Golubchik 26 Oct '16
Hi, Alexey!
Ok to push, thanks!
Just fix the test first (see below)
On Oct 22, Alexey Botchkov wrote:
> revision-id: 65b0617836390bb76104cb37094e16bafa6da8b2 (mariadb-10.0.27-12-g65b0617)
> parent(s): fb38d2642011c574cc9103ae1a1f9dd77f7f027e
> committer: Alexey Botchkov
> timestamp: 2016-10-22 12:08:15 +0400
> message:
>
> MDEV-10846 Running mysqldump backup twice returns error: Table
> 'mysql.proc' doesn't exist.
>
> The mysql_rm_db() doesn't seem to expect the 'mysql' database
> to be deleted. Checks for that added.
> Also fixed the bug MDEV-11105 Table named 'db' has weird side effect.
> The db.opt file now removed separately.
>
> ---
> mysql-test/r/drop.result | 6 ++++++
> mysql-test/t/drop.test | 9 +++++++++
> sql/sql_db.cc | 26 +++++++++++++++++++++-----
> 3 files changed, 36 insertions(+), 5 deletions(-)
>
> diff --git a/mysql-test/r/drop.result b/mysql-test/r/drop.result
> index c23ffbe3..ee1758f 100644
> --- a/mysql-test/r/drop.result
> +++ b/mysql-test/r/drop.result
> @@ -209,3 +209,9 @@ INSERT INTO table1 VALUES (1);
> ERROR 42S02: Unknown table 't.notable'
> DROP TABLE table1,table2;
> # End BUG#34750
> +#
> +# MDEV-11105 Table named 'db' has weird side effect.
> +#
> +CREATE DATABASE mysqltest;
> +CREATE TABLE mysqltest.t1(id INT);
huh? the table name is supposed to be 'db' here :)
> +DROP DATABASE mysqltest;
> diff --git a/sql/sql_db.cc b/sql/sql_db.cc
> index e89c3d9..0a3ff64 100644
> --- a/sql/sql_db.cc
> +++ b/sql/sql_db.cc
> @@ -784,7 +784,7 @@ bool mysql_alter_db(THD *thd, const char *db, HA_CREATE_INFO *create_info)
> bool mysql_rm_db(THD *thd,char *db,bool if_exists, bool silent)
> {
> ulong deleted_tables= 0;
> - bool error= true;
> + bool error= true, rm_mysql_schema;
> char path[FN_REFLEN + 16];
> MY_DIR *dirp;
> uint length;
> @@ -809,6 +809,18 @@ bool mysql_rm_db(THD *thd,char *db,bool if_exists, bool silent)
> length= build_table_filename(path, sizeof(path) - 1, db, "", "", 0);
> strmov(path+length, MY_DB_OPT_FILE); // Append db option file name
> del_dbopt(path); // Remove dboption hash entry
> + /*
> + Now remove the db.opt file.
> + The 'find_db_tables_and_rm_known_files' doesn't remove this file
> + if there exists a table with the name 'db', so let's just do it
> + separately. We know this file exists and needs to be deleted anyway.
ah, thanks, I see now. I wondered why a table 'db' would matter.
> + */
> + if (my_delete_with_symlink(path, MYF(0)) && my_errno != ENOENT)
> + {
> + my_error(EE_DELETE, MYF(0), path, my_errno);
> + DBUG_RETURN(true);
here I don't think this error is quite correct. nowhere else
mysql_rm_db() seems to fail with EE_DELETE. It always says
"Error dropping database (can't rmdir './exp_db', errno: 39 "Directory not empty")"
Maybe you should report this EE_DELETE as a warning instead?
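Sketched standalone, the suggestion amounts to downgrading the failed db.opt delete to a warning (names like `remove_dbopt_sketch` and `DropDbDiagnostics` are invented for illustration, not the server's actual API):

```cpp
#include <cassert>
#include <cerrno>
#include <string>
#include <vector>

// Illustrative stand-in for the warning list a statement accumulates;
// not the server's real Diagnostics_area.
struct DropDbDiagnostics {
  std::vector<std::string> warnings;
};

// Instead of failing mysql_rm_db() with EE_DELETE, record a warning and
// let DROP DATABASE continue to its usual error reporting, if any.
void remove_dbopt_sketch(int delete_errno, DropDbDiagnostics &da)
{
  if (delete_errno != 0 && delete_errno != ENOENT)
    da.warnings.push_back("Error on delete of db.opt (errno: " +
                          std::to_string(delete_errno) + ")");
  // ENOENT is fine: the file may already be gone.
}
```

This keeps the drop-database error reporting consistent: the only hard failure remains the rmdir itself.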
> + }
> +
> path[length]= '\0'; // Remove file name
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org
Hi, Sergei.
I'd like to draw your attention to this old issue:
https://jira.mariadb.org/browse/MDEV-7389
The idea was to do something bigger - to modify the plugin API
so it is easier to use and lets the user do more. In particular, to
deliver warnings to the audit plugins for this 7389 task.
That was done with this task: https://jira.mariadb.org/browse/MDEV-5313
Just to refresh our memory:
I proposed to get rid of the API versions and version-dependent
memory structures that are used to transfer data to and from the plugin.
All we need to do is add a new 'audit_plugin_service', which is
just a normal service that offers methods for the auditing plugin to
send commands to the server and get the server data. You can look at the
patch http://lists.askmonty.org/pipermail/commits/2016-February/009025.html
So, Serg, do you have anything to say on that subject?
Best regards.
HF

Re: [Maria-developers] REVIEW: Fwd: [Commits] 8cd4778: MDEV-9114: Bulk operations (Array binding)
by Michael Widenius 24 Oct '16
Hi!
On Sun, Oct 9, 2016 at 1:45 AM, Andrea <montyw(a)askmonty.org> wrote:
>
>
>
> --- Forwarded message ---
> From: Oleksandr Byelkin <sanja(a)montyprogram.com>
> Date: 8 October 2016 1:01:58 p.m.
> Subject: REVIEW: Fwd: [Commits] 8cd4778: MDEV-9114: Bulk operations (Array
> binding)
> To: Michael Widenius <monty(a)mariadb.com>
>
> Hi, Monty!
>
>
> Here is the patch. There are 2 "TODO" marks left about embedded; I think the
> best variant is to leave them as is, but maybe you have another opinion.
Embedded is ok to not have bulk insert (for now), but we should give
an error, not an assert if this is used!
Review of bulk operations patch
> diff --git a/include/mysql.h.pp b/include/mysql.h.pp
> index 857f5b9..c985792 100644
> --- a/include/mysql.h.pp
> +++ b/include/mysql.h.pp
> @@ -11,11 +11,17 @@ enum enum_server_command
> COM_STMT_RESET, COM_SET_OPTION, COM_STMT_FETCH, COM_DAEMON,
> COM_MDB_GAP_BEG,
> COM_MDB_GAP_END=250,
> - COM_SLAVE_WORKER,
> - COM_SLAVE_IO,
> - COM_SLAVE_SQL,
> - COM_MULTI,
> - COM_END
> + COM_SLAVE_WORKER=251,
> + COM_SLAVE_IO=252,
> + COM_SLAVE_SQL=253,
> + COM_MULTI=254,
> + COM_END=255
> +};
Why the numbering ?
(Just curious as this shouldn't be needed)
> +enum enum_indicator_type
> +{
> + STMT_INDICATOR_NONE= 0,
> + STMT_INDICATOR_NULL,
> + STMT_INDICATOR_DEFAULT
> };
Please add comment what this enum is used for.
Note that this enum_indicator_type isn't used in this commit.
(I have a fix for this later on)
You should probably just rename it to indicator_type.
(One can always use 'enum indicator_type' if one wants to point out in the code
that it's an enum).
> diff --git a/sql/item.cc b/sql/item.cc
> index 61635ea..41a7aaf 100644
> --- a/sql/item.cc
> +++ b/sql/item.cc
> @@ -812,6 +818,13 @@ bool Item_ident::collect_outer_ref_processor(void *param)
> return FALSE;
> }
>
> +void Item_ident::set_default_value_target(Item *item)
> +{
> + if ((default_value_target= item) && fixed &&
Please add a comment under which circumstances fixed could be false.
Also add a comment what it means when set_default_value_source isn't called.
Will it be called later or isn't it needed ?
> + type() == FIELD_ITEM)
> + default_value_target->set_default_value_source(((Item_field *)this)->
> + field);
> +}
> @@ -7242,6 +7300,7 @@ bool Item_ref::fix_fields(THD *thd, Item **reference)
> Item_field* fld;
> if (!(fld= new (thd->mem_root) Item_field(thd, from_field)))
> goto error;
> + fld->set_default_value_target(default_value_target);
Add a comment like: /* Note that fld->fixed isn't set here */
> @@ -8428,36 +8487,38 @@ bool Item_default_value::eq(const Item *item, bool binary_cmp) const
> bool Item_default_value::fix_fields(THD *thd, Item **items)
> {
> Item *real_arg;
> - Item_field *field_arg;
> Field *def_field;
> DBUG_ASSERT(fixed == 0);
>
> - if (!arg)
> + if (!arg && !arg_fld)
> {
> fixed= 1;
> return FALSE;
> }
> - if (!arg->fixed && arg->fix_fields(thd, &arg))
> - goto error;
> + if (arg)
> + {
> + if (!arg->fixed && arg->fix_fields(thd, &arg))
> + goto error;
>
>
> - real_arg= arg->real_item();
> - if (real_arg->type() != FIELD_ITEM)
> - {
> - my_error(ER_NO_DEFAULT_FOR_FIELD, MYF(0), arg->name);
> - goto error;
> - }
> + real_arg= arg->real_item();
> + if (real_arg->type() != FIELD_ITEM)
> + {
> + my_error(ER_NO_DEFAULT_FOR_FIELD, MYF(0), arg->name);
> + goto error;
> + }
>
> - field_arg= (Item_field *)real_arg;
> - if ((field_arg->field->flags & NO_DEFAULT_VALUE_FLAG))
> + arg_fld= ((Item_field *)real_arg)->field;
> + }
What is the reason for having arg_field (type field) instead of
field_arg (type Item_field) ?
arg_field is a bad name for a class variable as it doesn't tell us
what the variable actually contains. A variable name in a function is
not that critical to name right as one can understand what it contains
by looking at is usage. For a class variable this isn't the case.
field_in_table or just field would be a bit more clear (but not yet perfect).
I looked at the usage of arg_field and don't understand why it needs to be
part of the class and not just a local variable.
As arg_fld is basically a copy of def_field, another possible name
for it would be def_field_org.
> --- a/sql/item.h
> +++ b/sql/item.h
> @@ -1872,6 +1872,9 @@ class Item: public Value_source,
> {
> marker &= ~EXTRACTION_MASK;
> }
> +
> + virtual void set_default_value_source(Field *fld) {};
> + virtual bool set_default_if_needed() { return FALSE; };
> };
>
>
> @@ -2351,6 +2354,8 @@ class Item_ident :public Item_result_field
> const char *orig_table_name;
> const char *orig_field_name;
>
remove the extra empty line here
> + Item *default_value_target;
> +
> @@ -2727,6 +2736,15 @@ class Item_param :public Item_basic_value,
> DECIMAL_VALUE
> } state;
>
> + Field *default_value_ref;
> + Item_default_value *default_value_source;
> + /*
> + Used for bulk protocol. Indicates if we should expect
> + indicators byte before value of the parameter
> + */
> + my_bool indicators;
> + uint indicator;
> @@ -4980,14 +5011,19 @@ class Item_default_value : public Item_field
> void calculate();
> public:
> Item *arg;
> + Field *arg_fld;
Why is the above needed ?
How is it used ?
> +++ b/sql/sql_base.cc
> @@ -7810,9 +7810,10 @@ fill_record(THD *thd, TABLE *table_arg, List<Item> &fields, List<Item> &values,
> if (table->next_number_field &&
> rfield->field_index == table->next_number_field->field_index)
> table->auto_increment_field_not_null= TRUE;
> - if (rfield->vcol_info &&
> - value->type() != Item::DEFAULT_VALUE_ITEM &&
> - value->type() != Item::NULL_ITEM &&
> + Item::Type type= value->type();
> + if (rfield->vcol_info &&
> + type != Item::DEFAULT_VALUE_ITEM &&
> + type != Item::NULL_ITEM &&
> table->s->table_category != TABLE_CATEGORY_TEMPORARY)
The above code is slower than the original as it will require an extra
function call.
Better to do:
if (rfield->vcol_info)
{
Item::Type type= value->type();
if (type != Item::DEFAULT_VALUE_ITEM &&
type != Item::NULL_ITEM &&
table->s->table_category != TABLE_CATEGORY_TEMPORARY)
...
> @@ -7820,6 +7821,8 @@ fill_record(THD *thd, TABLE *table_arg, List<Item> &fields, List<Item> &values,
> ER_THD(thd, ER_WARNING_NON_DEFAULT_VALUE_FOR_VIRTUAL_COLUMN),
> rfield->field_name, table->s->table_name.str);
> }
> + if (value->set_default_if_needed())
> + goto err;
For which use case is the above needed?
(Would like to understand the exact usage case and preferably have this
documented in the code as it's far from clear why this is needed).
Normal default handling should be handled by the following code:
if (!update && table_arg->default_field &&
table_arg->update_default_fields(0, ignore_errors))
goto err;
Why doesn't this work for bulk operations?
If the value is 'default', then simply not storing it into the field should
let the above code store the default value into the field.
> @@ -8060,9 +8063,10 @@ fill_record(THD *thd, TABLE *table, Field **ptr, List<Item> &values,
> value=v++;
> if (field->field_index == autoinc_index)
> table->auto_increment_field_not_null= TRUE;
> - if (field->vcol_info &&
> - value->type() != Item::DEFAULT_VALUE_ITEM &&
> - value->type() != Item::NULL_ITEM &&
> + Item::Type type= value->type();
> + if (field->vcol_info &&
> + type != Item::DEFAULT_VALUE_ITEM &&
> + type != Item::NULL_ITEM &&
> table->s->table_category != TABLE_CATEGORY_TEMPORARY)
Move calling of value->type() down (see other comment above for similar code)
> @@ -8070,6 +8074,8 @@ fill_record(THD *thd, TABLE *table, Field **ptr, List<Item> &values,
> ER_THD(thd, ER_WARNING_NON_DEFAULT_VALUE_FOR_VIRTUAL_COLUMN),
> field->field_name, table->s->table_name.str);
> }
> + if (value->set_default_if_needed())
> + goto err;
Why is the above code needed ?
> +++ b/sql/sql_class.cc
> @@ -5766,6 +5767,17 @@ int THD::decide_logging_format(TABLE_LIST *tables)
> !(wsrep_binlog_format() == BINLOG_FORMAT_STMT &&
> !binlog_filter->db_ok(db)))
> {
> +
> + if (is_bulk_op())
> + {
> + if (wsrep_binlog_format() == BINLOG_FORMAT_STMT)
> + {
> + my_error(ER_BINLOG_NON_SUPPORTED_BULK, MYF(0));
> + DBUG_PRINT("info",
> + ("decision: no logging since an error was generated"));
no logging -> aborting execution
> diff --git a/sql/sql_class.h b/sql/sql_class.h
> index 51642ec..b444b36 100644
> --- a/sql/sql_class.h
> +++ b/sql/sql_class.h
> @@ -2465,6 +2465,8 @@ class THD :public Statement,
> */
> Query_arena *stmt_arena;
>
> + void *bulk_param;
> +
> /*
> map for tables that will be updated for a multi-table update query
> statement, for other query statements, this will be zero.
> @@ -3440,6 +3442,12 @@ class THD :public Statement,
> To raise this flag, use my_error().
> */
> inline bool is_error() const { return m_stmt_da->is_error(); }
> + void set_bulk_execution(void *bulk)
> + {
> + bulk_param= bulk;
> + m_stmt_da->set_bulk_execution(MY_TEST(bulk));
> + }
> + bool is_bulk_op() const { return m_stmt_da->is_bulk_op(); }
Why not instead do:
bool is_bulk_op() const { return MY_TEST(bulk_param); }
If this isn't the same it would be good to know why.
> index 1d234c5..34d608f 100644
> --- a/sql/sql_error.cc
> +++ b/sql/sql_error.cc
> @@ -320,7 +320,7 @@ Sql_condition::set_sqlstate(const char* sqlstate)
> }
>
> Diagnostics_area::Diagnostics_area(bool initialize)
> - : m_main_wi(0, false, initialize)
> + : is_bulk_execution(0), m_main_wi(0, false, initialize)
> {
Please add a comment why you need is_bulk_execution both in THD and
Diagnostics_area. If this is the same and this is just a cache, it would be
good to do an assert check in set_ok_status() to verify this
> @@ -376,7 +377,7 @@ Diagnostics_area::set_ok_status(ulonglong affected_rows,
> const char *message)
> {
> DBUG_ENTER("set_ok_status");
> - DBUG_ASSERT(! is_set());
> + DBUG_ASSERT(!is_set() || (m_status == DA_OK_BULK && is_bulk_op()));
> /*
> In production, refuse to overwrite an error or a custom response
> with an OK packet.
> @@ -384,14 +385,23 @@ Diagnostics_area::set_ok_status(ulonglong affected_rows,
> if (is_error() || is_disabled())
> return;
>
> - m_statement_warn_count= current_statement_warn_count();
> - m_affected_rows= affected_rows;
Add here the following comment:
/*
When running a bulk operation, m_status will be DA_OK for the first operation
and set to DA_OK_BULK for all following operations
*/
> + if (m_status == DA_OK_BULK)
> + {
> + DBUG_ASSERT(is_bulk_op());
You don't need the above assert as you have already tested this a few lines
above.
> + m_statement_warn_count+= current_statement_warn_count();
> + m_affected_rows+= affected_rows;
> + }
> + else
> + {
> + m_statement_warn_count= current_statement_warn_count();
> + m_affected_rows= affected_rows;
> + m_status= (is_bulk_op() ? DA_OK_BULK : DA_OK);
> + }
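The accumulation semantics of the hunk above, in a standalone sketch (DiagSketch is an illustrative stand-in, not the real Diagnostics_area):

```cpp
#include <cassert>

struct DiagSketch {
  enum Status { DA_EMPTY, DA_OK, DA_OK_BULK };
  Status status= DA_EMPTY;
  unsigned long long affected_rows= 0;
  unsigned statement_warn_count= 0;

  void set_ok_status(bool bulk_op, unsigned long long rows, unsigned warns)
  {
    if (status == DA_OK_BULK)
    {
      // Every operation after the first in a bulk execution accumulates.
      affected_rows+= rows;
      statement_warn_count+= warns;
    }
    else
    {
      // First operation: plain assignment, then remember the bulk mode.
      affected_rows= rows;
      statement_warn_count= warns;
      status= bulk_op ? DA_OK_BULK : DA_OK;
    }
  }
};
```

So a bulk execution of N operations reports the summed affected rows and warnings, while a normal statement keeps the old overwrite behaviour.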
> --- a/sql/protocol.cc
> +++ b/sql/protocol.cc
> @@ -572,6 +572,7 @@ void Protocol::end_statement()
> thd->get_stmt_da()->statement_warn_count());
> break;
> case Diagnostics_area::DA_OK:
> + case Diagnostics_area::DA_OK_BULK:
> error= send_ok(thd->server_status,
> thd->get_stmt_da()->statement_warn_count(),
> thd->get_stmt_da()->affected_rows(),
For bulk operations, why do we send a packet for every array element?
Isn't it enough to just send a summary at the end?
> diff --git a/sql/sql_insert.cc b/sql/sql_insert.cc
> @@ -637,6 +638,66 @@ static void save_insert_query_plan(THD* thd, TABLE_LIST *table_list)
> }
>
Please add a detailed comment what the following function does
>
> +inline static void set_defaults_relation(Item *fld, Item *val)
> +{
> + Item::Type type= fld->type();
> + if (type == Item::FIELD_ITEM)
> + {
> + Item_field *item_field= (Item_field *)fld;
Why test fixed?
Shouldn't all items already be fixed this late in the game?
> + if (item_field->fixed)
> + val->set_default_value_source(item_field->field);
> + else
> + item_field->set_default_value_target(val);
> + }
> + else if (type == Item::REF_ITEM)
> + {
> + Item_ref *item_field= (Item_ref *)fld;
> + // may turn to Item_field after fix_fields()
> + if (!item_field->fixed)
> + item_field->set_default_value_target(val);
> + }
> +}
> +
Please add a detailed comment what the following function does.
For example, does this only need to be executed when using bulk operations?
> +void setup_deault_parameters(TABLE_LIST *table, List<Item> *fields,
> + List<Item> *values)
Typo. should be setup_default_parameters
> +{
> +
> + List_iterator_fast<Item> itv(*values);
> + Item *val;
> + if (fields->elements)
> + {
> + List_iterator_fast<Item> itf(*fields);
> + Item *fld;
> + while((fld= itf++) && (val= itv++))
> + {
> + set_defaults_relation(fld->real_item(), val);
> + }
> + }
> + else if (table != NULL)
I checked both calls to setup_default_parameters and it looks impossible that
table could be NULL when this function is called. (Both calls have
table_list->next_local= 0 just before the call.)
Please add a DBUG_ASSERT(table) at the beginning of this function
instead of the above test.
> + {
> + if (table->view)
> + {
> + Field_iterator_view field_it;
> + field_it.set(table);
> + for (; !field_it.end_of_fields() && (val= itv++); field_it.next())
> + {
> + set_defaults_relation(field_it.item()->real_item(), val);
> + }
> + }
> + else
> + {
I assume this is for a multi-table update?
If yes, please add a comment.
> + Field_iterator_table_ref field_it;
> + field_it.set(table);
> + for (; !field_it.end_of_fields() && (val= itv++); field_it.next())
> + {
> + Field *fld= field_it.field();
> + val->set_default_value_source(fld);
> + }
> + }
> + }
> +}
I have to acknowledge that I am not completely sure when one should use
set_default_value_source, set_defaults_relation or set_default_value_target.
Can you please document somewhere in the code the purpose of the above
functions and when they should be used.
> @@ -770,6 +833,7 @@ bool mysql_insert(THD *thd,TABLE_LIST *table_list,
> if (setup_fields(thd, Ref_ptr_array(), *values, MARK_COLUMNS_READ, 0, 0))
> goto abort;
> switch_to_nullable_trigger_fields(*values, table);
Why do we loop over all values an extra time below?
If this is only needed for bulk operations, then we should have
some extra tests there!
> + setup_deault_parameters(table_list, &fields, values);
> @@ -885,105 +949,113 @@ bool mysql_insert(THD *thd,TABLE_LIST *table_list,
> goto values_loop_end;
> }
> }
> -
> - while ((values= its++))
> + for (ulong iteration= 0; iteration < bulk_iterations; iteration++)
Change to a do-while as we always execute this loop at least once.
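The suggested do-while shape, sketched standalone (run_bulk_loop() is an invented name; the body stands in for one bulk statement execution):

```cpp
#include <cassert>

// A do-while makes it explicit that the body always runs at least once,
// so the first iteration needs no special-case guard.
unsigned long run_bulk_loop(unsigned long bulk_iterations)
{
  unsigned long executed= 0;
  unsigned long iteration= 0;
  do
  {
    // iteration 0 uses the parameters already read from the packet;
    // later iterations would fetch the next bulk parameter set here
    executed++;
    iteration++;
  } while (iteration < bulk_iterations);
  return executed;
}
```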
> {
> - if (fields.elements || !value_count)
> +
> + if (iteration && bulk_parameters_set(thd))
> + goto abort;
Wouldn't a better name be 'get_bulk_arguments_from_client()'
This would more closely reflect what the function does.
Can you please send me a copy of your sql_insert.cc. It was very hard
to review some parts of this file as there were many indentation
changes. I tried to apply your patch to my 10.2 version, but it didn't
apply.
> @@ -1445,6 +1517,7 @@ bool mysql_prepare_insert(THD *thd, TABLE_LIST *table_list,
> /* Prepare the fields in the statement. */
> if (values)
> {
> +
Remove extra empty line
> /* if we have INSERT ... VALUES () we cannot have a GROUP BY clause */
> DBUG_ASSERT (!select_lex->group_list.elements);
>
> @@ -1463,6 +1536,10 @@ bool mysql_prepare_insert(THD *thd, TABLE_LIST *table_list,
> check_insert_fields(thd, context->table_list, fields, *values,
> !insert_into_view, 0, &map));
>
> + setup_deault_parameters(table_list, &fields, values);
Is the above call always needed?
> diff --git a/sql/sql_prepare.cc b/sql/sql_prepare.cc
> @@ -960,11 +973,65 @@ static bool insert_params(Prepared_statement *stmt, uchar *null_array,
> }
>
>
> +static bool insert_bulk_params(Prepared_statement *stmt,
> + uchar **read_pos, uchar *data_end,
> + bool reset)
> +{
> + Item_param **begin= stmt->param_array;
> + Item_param **end= begin + stmt->param_count;
> +
> + DBUG_ENTER("insert_params");
> +
> + for (Item_param **it= begin; it < end; ++it)
> + {
> + Item_param *param= *it;
> + if (reset)
> + param->reset();
> + if (param->state != Item_param::LONG_DATA_VALUE)
> + {
> + if (param->indicators)
> + param->indicator= *((*read_pos)++);
> + else
> + param->indicator= STMT_INDICATOR_NONE;
> + if ((*read_pos) > data_end)
> + DBUG_RETURN(1);
> + switch (param->indicator) {
move { to next line
> + case STMT_INDICATOR_NONE:
> + if ((*read_pos) >= data_end)
> + DBUG_RETURN(1);
> + param->set_param_func(param, read_pos, (uint) (data_end - (*read_pos)));
> + if (param->state == Item_param::NO_VALUE)
> + DBUG_RETURN(1);
> + break;
> + case STMT_INDICATOR_NULL:
> + param->set_null();
> + break;
> + case STMT_INDICATOR_DEFAULT:
> + if (param->set_default(TRUE))
> + DBUG_RETURN(1);
If all the default handling comes down to the above, we should be able to
do things much more simply by just not marking the field as having been
given a value and letting the normal default handling take over.
> + break;
> + }
> + }
> + /*
> + A long data stream was supplied for this parameter marker.
> + This was done after prepare, prior to providing a placeholder
> + type (the types are supplied at execute). Check that the
> + supplied type of placeholder can accept a data stream.
> + */
> + else
Move comment after else. Check also that the comment is accurate as there
are no checks below.
> + DBUG_RETURN(1); // long is not supported here
> + }
> + DBUG_RETURN(0);
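The indicator-byte handling above can be condensed into a standalone sketch. The enum mirrors STMT_INDICATOR_*; ParamSketch and read_bulk_params() are invented for illustration, and each value is a single byte here, unlike the real protocol:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

enum Indicator : uint8_t { IND_NONE= 0, IND_NULL= 1, IND_DEFAULT= 2 };

struct ParamSketch
{
  bool is_null= false;
  bool use_default= false;
  int value= 0;
};

// Returns true on malformed input, like the DBUG_RETURN(1) paths above.
bool read_bulk_params(const uint8_t *pos, const uint8_t *end,
                      std::vector<ParamSketch> &params)
{
  for (ParamSketch &p : params)
  {
    if (pos >= end)
      return true;                  // ran out of data
    uint8_t ind= *pos++;
    switch (ind)
    {
    case IND_NONE:                  // a real value follows
      if (pos >= end)
        return true;
      p.value= *pos++;
      break;
    case IND_NULL:
      p.is_null= true;
      break;
    case IND_DEFAULT:
      p.use_default= true;
      break;
    default:
      return true;                  // unknown indicator
    }
  }
  return false;
}
```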
> @@ -2982,12 +3056,16 @@ void mysqld_stmt_execute(THD *thd, char *packet_arg, uint packet_length)
> thd->profiling.set_query_source(stmt->query(), stmt->query_length());
> #endif
> DBUG_PRINT("exec_query", ("%s", stmt->query()));
> - DBUG_PRINT("info",("stmt: 0x%lx", (long) stmt));
> + DBUG_PRINT("info",("stmt: 0x%lx iterations: %lu", (long) stmt, iterations));
>
When changing a DBUG_PRINT that contains 0x%lx, please change to %p
and remove the cast to long.
<cut>
> +my_bool Prepared_statement::set_bulk_parameters(bool reset)
> +{
> + DBUG_ENTER("Prepared_statement::set_bulk_parameters");
> + DBUG_PRINT("info", ("iteration: %lu", iterations));
> + if (iterations)
> + {
> +#ifndef EMBEDDED_LIBRARY
> + if ((*set_bulk_params)(this, &packet, packet_end, reset))
> +#else
> + DBUG_ASSERT(0); //TODO: support bulk parameters for embedded server
> +#endif
Please add an error instead when trying to do a bulk operation on the
client side in the embedded server. We shouldn't crash because
something isn't supported.
> +bool
> +Prepared_statement::execute_bulk_loop(String *expanded_query,
> + bool open_cursor,
> + uchar *packet_arg,
> + uchar *packet_end_arg,
> + ulong iterations_arg)
> +{
> + Reprepare_observer reprepare_observer;
> + bool error= 0;
> + packet= packet_arg;
> + packet_end= packet_end_arg;
> + iterations= iterations_arg;
> + start_param= true;
> +#ifndef DBUG_OFF
> + Item *free_list_state= thd->free_list;
> +#endif
> + thd->select_number= select_number_after_prepare;
> + thd->set_bulk_execution((void *)this);
> + /* Check if we got an error when sending long data */
> + if (state == Query_arena::STMT_ERROR)
> + {
> + my_message(last_errno, last_error, MYF(0));
> + thd->set_bulk_execution(0);
> + return TRUE;
> + }
> +
> + if (!(sql_command_flags[lex->sql_command] & CF_SP_BULK_SAFE))
> + {
> + my_error(ER_UNSUPPORTED_PS, MYF(0));
> + thd->set_bulk_execution(0);
> + return TRUE;
> + }
> +
> +#ifndef EMBEDDED_LIBRARY
> + if (setup_conversion_functions(this, &packet, packet_end, TRUE))
> +#else
> + DBUG_ASSERT(0); //TODO: support bulk parameters for embedded server
> +#endif
> + {
> + my_error(ER_WRONG_ARGUMENTS, MYF(0),
> + "mysqld_stmt_bulk_execute");
> + reset_stmt_params(this);
> + thd->set_bulk_execution(0);
> + return true;
> + }
> +
> +#ifdef NOT_YET_FROM_MYSQL_5_6
> + if (unlikely(thd->security_ctx->password_expired &&
> + !lex->is_change_password))
> + {
> + my_error(ER_MUST_CHANGE_PASSWORD, MYF(0));
> + thd->set_bulk_execution(0);
> + return true;
> + }
> +#endif
> +
> + while((iterations || start_param) && !error && !thd->is_error())
Please add space after while
Add also a comment how the code works.
As 'iterations' doesn't change at all in the loop, the above while is a bit
strange.
> + {
> + int reprepare_attempt= 0;
> +
Add a comment how the following code works.
I understand that if there is a bulk insert, then the loop that fetches
data will happen in mysql_insert(). However, I don't understand how this will
work with other bulk operations that don't have CF_SP_BULK_OPTIMIZED set.
> + if (!(sql_command_flags[lex->sql_command] & CF_SP_BULK_OPTIMIZED))
> + {
> + if (set_bulk_parameters(TRUE))
> + {
> + thd->set_bulk_execution(0);
> + return true;
> + }
> + }
> +
> +reexecute:
---------------------
Conclusion:
Most of bulk insert code looks ok. Some missing features (embedded
server error handling, no binlog logging) that must be fixed before
pushing (at least binlog logging is critical).
However, I don't like the default handling in the new code:
- Too much new and complex code (a lot of similar functions that
work slightly differently)
- A lot of extra overhead to loop over all fields and values, that is not
needed.
- Totally different DEFAULT handling compared to how things are normally done.
I would prefer that all the new default handling would be removed and instead
we should do things the "normal way":
- If a field is given the value DEFAULT, it should not be assigned a value. In
that case the normal default handling should be able to take over and
handle it.
If this is impossible, then we should try to ensure that the code will work
exactly as if we would have given column=DEFAULT in SQL.
As we don't need any loops over all values for handling the above case
we shouldn't need that for bulk inserts either.
I will be back working normally on Friday, so we can discuss the DEFAULT
handling on IRC then.
Regards,
Monty
Hi, Sergey.
I'd like your opinion about this difference in the result of the null.test:
----------------------------------------------------------------------------------------------
--- /home/hf/wgit/cmt-mdev-9143/mysql-test/r/null.result 2016-10-23
13:33:54.050093010 +0400
+++ /home/hf/wgit/cmt-mdev-9143/mysql-test/r/null.reject 2016-10-24
17:31:55.553248271 +0400
@@ -1584,7 +1584,7 @@
id select_type table type possible_keys key key_len
ref rows filtered Extra
1 SIMPLE t1 ALL NULL NULL NULL NULL 3
100.00 Using where
Warnings:
-Note 1003 select `test`.`t1`.`c1` AS `c1` from `test`.`t1` where
(((`test`.`t1`.`c1` is not null) >= <cache>((not(1)))) is not null)
+Note 1003 select `test`.`t1`.`c1` AS `c1` from `test`.`t1` where
(((`test`.`t1`.`c1` is not null) >= 0) is not null)
SELECT * FROM t1 WHERE ((c1 IS NOT NULL) >= (NOT TRUE)) IS NOT NULL;
c1
1
----------------------------------------------------------------------------------------------
I mean I made a change in the code that changed the test result, which
is normally not good.
Though I think here it is rather an improvement.
What do you think? Would you approve that change in the result?
See below for the patch that caused all this:
----------------------------------------------------------------------------------------------
diff --git a/sql/item.cc b/sql/item.cc
index 448e34b..2388679 100644
--- a/sql/item.cc
+++ b/sql/item.cc
@@ -2900,6 +2900,14 @@ void Item_int::print(String *str, enum_query_type
query_type)
}
+Item *Item_bool::neg_transformer(THD *thd)
+{
+ value= !value;
+ name= 0;
+ return this;
+}
+
+
Item_uint::Item_uint(THD *thd, const char *str_arg, uint length):
Item_int(thd, str_arg, length)
{
diff --git a/sql/item.h b/sql/item.h
index 7644235..ab70fdb 100644
--- a/sql/item.h
+++ b/sql/item.h
@@ -3008,6 +3008,7 @@ class Item_bool :public Item_int
Item_bool(THD *thd, const char *str_arg, longlong i):
Item_int(thd, str_arg, i, 1) {}
bool is_bool_type() { return true; }
+ Item *neg_transformer(THD *thd);
};
----------------------------------------------------------------------------------------------
Best regards.
HF

Re: [Maria-developers] [Commits] aa9bd40: MDEV-10824 - Crash in CREATE OR REPLACE TABLE t1 AS SELECT spfunc()
by Vicențiu Ciorbaru 24 Oct '16
Hi Sergey!
I think you committed the AAA.test file by mistake.
Regards,
Vicentiu
On Mon, 24 Oct 2016 at 13:47 Sergey Vojtovich <svoj(a)mariadb.org> wrote:
> revision-id: aa9bd40f8067e1421ad71d2ada367544a6db78ca
> (mariadb-10.0.27-8-gaa9bd40)
> parent(s): 4dfb6a3f54cfb26535636197cc5fa70fe5bacc2e
> committer: Sergey Vojtovich
> timestamp: 2016-10-24 15:26:11 +0400
> message:
>
> MDEV-10824 - Crash in CREATE OR REPLACE TABLE t1 AS SELECT spfunc()
>
> Code flow hit incorrect branch while closing table instances before
> removal.
> This branch expects thread to hold open table instance, whereas CREATE OR
> REPLACE doesn't actually hold open table instance.
>
> Before CREATE OR REPLACE TABLE it was impossible to hit this condition in
> LTM_PRELOCKED mode, thus the problem didn't expose itself during DROP TABLE
> or DROP DATABASE.
>
> Fixed by adjusting condition to take into account LTM_PRELOCKED mode,
> which can
> be set during CREATE OR REPLACE TABLE.
>
> ---
> mysql-test/r/create_or_replace.result | 11 +++++++++++
> mysql-test/t/AAA.test | 24 ++++++++++++++++++++++++
> mysql-test/t/create_or_replace.test | 12 ++++++++++++
> sql/sql_parse.cc | 12 ------------
> sql/sql_table.cc | 3 ++-
> 5 files changed, 49 insertions(+), 13 deletions(-)
>
> diff --git a/mysql-test/r/create_or_replace.result
> b/mysql-test/r/create_or_replace.result
> index 3a894e9..a43dc2e 100644
> --- a/mysql-test/r/create_or_replace.result
> +++ b/mysql-test/r/create_or_replace.result
> @@ -442,3 +442,14 @@ KILL QUERY con_id;
> ERROR 70100: Query execution was interrupted
> drop table t1;
> DROP TABLE t2;
> +#
> +# MDEV-10824 - Crash in CREATE OR REPLACE TABLE t1 AS SELECT spfunc()
> +#
> +CREATE TABLE t1(a INT);
> +CREATE FUNCTION f1() RETURNS VARCHAR(16383) RETURN 'test';
> +CREATE OR REPLACE TABLE t1 AS SELECT f1();
> +LOCK TABLE t1 WRITE;
> +CREATE OR REPLACE TABLE t1 AS SELECT f1();
> +UNLOCK TABLES;
> +DROP FUNCTION f1;
> +DROP TABLE t1;
> diff --git a/mysql-test/t/AAA.test b/mysql-test/t/AAA.test
> new file mode 100644
> index 0000000..22e22dd
> --- /dev/null
> +++ b/mysql-test/t/AAA.test
> @@ -0,0 +1,24 @@
> +CREATE TABLE t1(a INT);
> +DELIMITER $$;
> +CREATE FUNCTION f2() RETURNS VARCHAR(16383) RETURN 'test';
> +CREATE FUNCTION f1() RETURNS VARCHAR(16383)
> +BEGIN
> + INSERT INTO t1 VALUES(1);
> + RETURN 'test';
> +END;
> +$$
> +CREATE PROCEDURE p1() CREATE OR REPLACE TABLE t1 AS SELECT f2();
> +$$
> +DELIMITER ;$$
> +
> +CALL p1;
> +
> +#CREATE OR REPLACE TABLE t1 AS SELECT f1();
> +LOCK TABLE t1 WRITE;
> +#CREATE OR REPLACE TABLE t1 AS SELECT f1();
> +UNLOCK TABLES;
> +
> +DROP PROCEDURE p1;
> +DROP FUNCTION f1;
> +DROP FUNCTION f2;
> +DROP TABLE t1;
> diff --git a/mysql-test/t/create_or_replace.test
> b/mysql-test/t/create_or_replace.test
> index 7bba2b3..b37417f 100644
> --- a/mysql-test/t/create_or_replace.test
> +++ b/mysql-test/t/create_or_replace.test
> @@ -386,3 +386,15 @@ drop table t1;
> # Cleanup
> #
> DROP TABLE t2;
> +
> +--echo #
> +--echo # MDEV-10824 - Crash in CREATE OR REPLACE TABLE t1 AS SELECT
> spfunc()
> +--echo #
> +CREATE TABLE t1(a INT);
> +CREATE FUNCTION f1() RETURNS VARCHAR(16383) RETURN 'test';
> +CREATE OR REPLACE TABLE t1 AS SELECT f1();
> +LOCK TABLE t1 WRITE;
> +CREATE OR REPLACE TABLE t1 AS SELECT f1();
> +UNLOCK TABLES;
> +DROP FUNCTION f1;
> +DROP TABLE t1;
> diff --git a/sql/sql_parse.cc b/sql/sql_parse.cc
> index cbf723c..70511fc 100644
> --- a/sql/sql_parse.cc
> +++ b/sql/sql_parse.cc
> @@ -2858,12 +2858,6 @@ case SQLCOM_PREPARE:
> }
>
> /*
> - For CREATE TABLE we should not open the table even if it exists.
> - If the table exists, we should either not create it or replace it
> - */
> - lex->query_tables->open_strategy= TABLE_LIST::OPEN_STUB;
> -
> - /*
> If we are a slave, we should add OR REPLACE if we don't have
> IF EXISTS. This will help a slave to recover from
> CREATE TABLE OR EXISTS failures by dropping the table and
> @@ -8225,12 +8219,6 @@ bool create_table_precheck(THD *thd, TABLE_LIST
> *tables,
> if (check_fk_parent_table_access(thd, &lex->create_info,
> &lex->alter_info, create_table->db))
> goto err;
>
> - /*
> - For CREATE TABLE we should not open the table even if it exists.
> - If the table exists, we should either not create it or replace it
> - */
> - lex->query_tables->open_strategy= TABLE_LIST::OPEN_STUB;
> -
> error= FALSE;
>
> err:
> diff --git a/sql/sql_table.cc b/sql/sql_table.cc
> index 7cf31ee..050a338 100644
> --- a/sql/sql_table.cc
> +++ b/sql/sql_table.cc
> @@ -2464,7 +2464,8 @@ int mysql_rm_table_no_locks(THD *thd, TABLE_LIST
> *tables, bool if_exists,
> if (table_type && table_type != view_pseudo_hton)
> ha_lock_engine(thd, table_type);
>
> - if (thd->locked_tables_mode)
> + if (thd->locked_tables_mode == LTM_LOCK_TABLES ||
> + thd->locked_tables_mode == LTM_PRELOCKED_UNDER_LOCK_TABLES)
> {
> if (wait_while_table_is_used(thd, table->table,
> HA_EXTRA_NOT_USED))
> {
> _______________________________________________
> commits mailing list
> commits(a)mariadb.org
> https://lists.askmonty.org/cgi-bin/mailman/listinfo/commits
Who should be contacted about issues in the mroonga storage engine?
The attached patch is from Debian Bug#838914
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=838914
Apparently, libatomic is needed on this platform to support 64-bit atomic
operations.
The patch looks reasonable and should probably be upstreamed. But I am not
sure how the mroonga storage engine is maintained - should this go directly
into MariaDB? If there is an upstream-maintained mroonga storage engine,
should it preferably go there first?
- Kristian.
Filed https://jira.mariadb.org/browse/MDEV-11101 but I have no idea who
will fix this. All other engines use less than 20M while tokudb uses 161M
including one file in mysql-test that is more than 30M.
A lot of time, network bandwidth, and disk space is being wasted by this.
--
Mark Callaghan
mdcallag(a)gmail.com
Re: [Maria-developers] [MariaDB/server] MDEV-11064 - Restrict the speed of reading binlog from Master (#246)
by Kristian Nielsen 21 Oct '16
vinchen <notifications(a)github.com> writes:
> cli_safe_read_reallen() and my_net_read_packet_reallen() are a good
> way to fix the ABI problem. I will fix it like this.
>
> Also, the minimum precision of slave_sleep() is one second, and it
> takes the mutex. I think that is too heavy in most cases (the wait
> will usually be just milliseconds).
> So I will use my_sleep() when waiting less than 1 second, and otherwise use slave_sleep().
>
> Is it ok?
Yes, it looks fine thanks!
And thanks for the explanation, yes now I see that usually the sleep will be
quite small, unless there is a huge event. So I agree with using my_sleep()
in the common case.
I'll merge it like this after checking that everything compiles and tests
ok.
Thanks!
- Kristian.
Re: [Maria-developers] [MariaDB/server] Restrict the speed of reading binlog from Master (#246)
by Kristian Nielsen 19 Oct '16
GCSAdmin <notifications(a)github.com> writes:
> In some cases, the speed of reading the binlog from the master can be high, especially when setting up a new replica.
> This can cause high traffic on the master.
> So we introduce a new variable, "read_binlog_speed_limit", to control the binlog read rate of the IO thread and solve this problem.
> It also works when slave_compressed_protocol is on.
> But it may not work well when a binlog event is very big.
> You can view, comment on, or merge this pull request online at:
>
> https://github.com/MariaDB/server/pull/246
> -- Patch Links --
>
> https://github.com/MariaDB/server/pull/246.patch
> https://github.com/MariaDB/server/pull/246.diff
Overall this looks clean and simple.
There is one problem. The patch adds a field real_network_read_len to the
NET structure. This will break the client library ABI, because NET is part
of MYSQL. Client programs linked with a different version of the client
library could then crash, so this needs to be changed (if I understand
correctly).
One option might be to introduce new functions like cli_safe_read_reallen()
and my_net_read_packet_reallen(), which return in addition the actual amount
of bytes read from the server. The old cli_safe_read() and
my_net_read_packet() could then become simple wrapper functions around
those. And cli_safe_read_reallen() can be used in read_event() in
sql/slave.cc.
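A minimal toy model of that wrapper pattern could look like the sketch below. FakeNet and all names here are illustrative only, not the real client-library code; the point is that the extra value travels through an out-parameter instead of a new NET field, so the struct layout seen by existing clients is untouched.

```cpp
#include <cassert>

// Toy model of the suggested ABI-safe extension: instead of adding a
// field to NET (which changes the struct layout that client programs
// were compiled against), add a new *_reallen() function that returns
// the extra value through an out-parameter, and keep the old entry
// point as a thin wrapper.
struct FakeNet {
  unsigned long wire_bytes;     // bytes that actually crossed the network
  unsigned long payload_bytes;  // bytes after decompression
};

unsigned long net_read_packet_reallen(FakeNet *net, unsigned long *real_len) {
  *real_len = net->wire_bytes;  // what a rate limiter must count
  return net->payload_bytes;    // what callers have always received
}

// Old entry point: unchanged signature, so the ABI is preserved.
unsigned long net_read_packet(FakeNet *net) {
  unsigned long ignored;
  return net_read_packet_reallen(net, &ignored);
}
```

With slave_compressed_protocol, the wire byte count and the payload length differ, which is exactly why the rate limiter needs the new out-parameter.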
A smaller issue is that in case of a large packet, a large my_sleep() may be
invoked, which will cause STOP SLAVE to hang. I think this can be solved
simply by calling slave_sleep() instead; it handles terminating the wait
early when interrupted by STOP SLAVE.
Detailed comments on the patch below. I rebased the series against the latest
10.2 to get a clean diff (the pull request includes a couple of merges against
the main 10.2 tree; these changes are unrelated to the patch). The rebase is
in https://github.com/knielsen/server/tree/GCSAdmin-10.2-binlog-speed-limit-2
> diff --git a/include/mysql.h.pp b/include/mysql.h.pp
> index 857f5b9..1da038c 100644
> --- a/include/mysql.h.pp
> +++ b/include/mysql.h.pp
> @@ -35,6 +35,7 @@ typedef struct st_net {
> my_bool thread_specific_malloc;
> unsigned char compress;
> my_bool unused3;
> + unsigned long real_network_read_len;
As explained above, I believe this would break the ABI (that's the purpose
of mysql.h.pp, to catch such problems).
> diff --git a/sql/slave.cc b/sql/slave.cc
> index 20bf68e..52bb668 100644
> --- a/sql/slave.cc
> +++ b/sql/slave.cc
> @@ -3307,13 +3308,14 @@ static int request_dump(THD *thd, MYSQL* mysql, Master_info* mi,
> try a reconnect. We do not want to print anything to
> the error log in this case because this a anormal
> event in an idle server.
> + network_read_len get the real network read length in VIO, especially using compressed protocol
>
> RETURN VALUES
> 'packet_error' Error
> number Length of packet
> */
>
> -static ulong read_event(MYSQL* mysql, Master_info *mi, bool* suppress_warnings)
> +static ulong read_event(MYSQL* mysql, Master_info *mi, bool* suppress_warnings, ulong* network_read_len)
Generally, lines longer than 80 characters should be avoided (coding style).
> @@ -4473,6 +4479,34 @@ Stopping slave I/O thread due to out-of-memory error from master");
> goto err;
> }
>
> + /* Control the binlog read speed of master when read_binlog_speed_limit is non-zero
> + */
> + ulonglong read_binlog_speed_limit_in_bytes = opt_read_binlog_speed_limit * 1024;
> + if (read_binlog_speed_limit_in_bytes)
> + {
> + /* prevent the tokenamount become a large value,
> + for example, the IO thread doesn't work for a long time
> + */
> + if (tokenamount > read_binlog_speed_limit_in_bytes * 2)
> + {
> + lastchecktime = my_hrtime().val;
> + tokenamount = read_binlog_speed_limit_in_bytes * 2;
> + }
> +
> + do
> + {
> + ulonglong currenttime = my_hrtime().val;
> + tokenamount += (currenttime - lastchecktime) * read_binlog_speed_limit_in_bytes / (1000*1000);
> + lastchecktime = currenttime;
> + if(tokenamount < network_read_len)
> + {
> + ulonglong micro_sleeptime = 1000*1000 * (network_read_len - tokenamount) / read_binlog_speed_limit_in_bytes ;
> + my_sleep(micro_sleeptime > 1000 ? micro_sleeptime : 1000); // at least sleep 1000 micro second
> + }
> + }while(tokenamount < network_read_len);
> + tokenamount -= network_read_len;
> + }
> +
As explained above, probably better to use slave_sleep() here to allow STOP
SLAVE to interrupt a long sleep.
Would it make sense to do this wait after calling queue_event()? This way
the SQL thread can start applying the event immediately, reducing slave lag.
What do you think?
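For reference, the token-bucket logic in the quoted hunk can be written as a small standalone unit. Everything below is a sketch with our own names (TokenBucket, wait_us), not the patch's; the current time is passed in explicitly so the logic is testable without a clock, and the sleep itself is left to the caller.

```cpp
#include <cassert>
#include <cstdint>

// Standalone sketch of the token-bucket loop from the quoted patch.
struct TokenBucket {
  uint64_t limit = 0;    // allowed bytes per second (0 = unlimited)
  uint64_t tokens = 0;   // accumulated byte budget
  uint64_t last_us = 0;  // time of the previous refill, in microseconds

  // Refill the bucket, then return how many microseconds the caller
  // should sleep before a packet of `len` bytes may be processed.
  uint64_t wait_us(uint64_t now_us, uint64_t len) {
    if (limit == 0) return 0;
    // Cap the burst at two seconds' worth, as the patch does, so an
    // idle I/O thread cannot bank an unbounded budget.
    if (tokens > limit * 2) tokens = limit * 2;
    tokens += (now_us - last_us) * limit / 1000000;
    last_us = now_us;
    if (tokens >= len) { tokens -= len; return 0; }
    return 1000000 * (len - tokens) / limit;
  }
};
```

In the real patch the returned wait would be served by slave_sleep()/my_sleep() as discussed above.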
Thanks,
- Kristian.
[Maria-developers] Consolidated patch for mdev-8646 introduced a bug MDEV-11081
by Alexander Barkov 18 Oct '16
Hello Igor,
This patch:
commit 2cfc450bf78c2d951729d1a0e8f731c0d987b1d5
Author: Igor Babaev <igor(a)askmonty.org>
Date: Tue Feb 9 12:35:59 2016 -0800
seems to have introduced this bug:
MDEV-11081 Cursor fetches NULL for aggregate functions
Can you please have a look?
Thanks!
[Maria-developers] Article on git rebasing - follow up from Developer Meetup
by Vicențiu Ciorbaru 17 Oct '16
Hi everyone!
During the Developer Meetup in Amsterdam, we had quite a few productive
discussions. One of these was on how to use git rebase to clean up
our history before pushing final changes. The presentation on it was a bit
rushed, and perhaps some of you did not get the full gist of it.
I've decided to write an article on this and go into more detail. The
content should be generally useful and should serve as a primer for anyone
not familiar with how to use git rebase.
You can find the article at:
http://vicentiu.ciorbaru.io/keeping-development-history-clean-using-git-reb…
Hope it is of use.
Regards,
Vicențiu
17 Oct '16
Hi!
I was trying to run a test that fails in the upcoming bb-10.2-jan on
the normal 10.2 tree, when I noticed this strange issue:
- Test fails with timeout when running with --debug
- When looking at the trace file, I notice that we get a duplicate key
error for the gtid_slave_pos table (a MyISAM table). Is this normal?
To repeat:
Store the included test in suite/rpl and run it with:
mysql-test-run --debug --record rpl_skr
Issue 2:
bb-10.2-jan tree is a working tree for a merge of MariaDB 10.2 and MySQL 5.7
When running rpl_skr in 10.2 it takes 2 seconds
When running it in the bb-10.2-jan tree it takes either a long time
or we get a timeout.
This is probably because of the new lock code in lock0lock.cc and
lock0wait.cc, which doesn't break conflicting transactions but instead
waits for a timeout.
Would appreciate any help with this!
Note that rpl_skr is a test that was originally part of
rpl_parallel.test, but I have made it separate to be able to more
easily test for this issue.
Before testing bb-10.2-jan, please apply this patch that fixes one
critical issue in this tree related to group-commit:
diff --git a/storage/innobase/handler/ha_innodb.cc
b/storage/innobase/handler/ha_innodb.cc
index b69468d..0e9caed 100644
--- a/storage/innobase/handler/ha_innodb.cc
+++ b/storage/innobase/handler/ha_innodb.cc
@@ -4976,16 +4953,15 @@ innobase_commit(
thd_wakeup_subsequent_commits(thd, 0);
/* Now do a write + flush of logs. */
- if (!read_only) {
- trx_commit_complete_for_mysql(trx);
- }
+ trx_commit_complete_for_mysql(trx);
trx_deregister_from_2pc(trx);
} else {
/* We just mark the SQL statement ended and do not do a
transaction commit */
+ DBUG_PRINT("info", ("Just mark SQL statement"));
/* If we had reserved the auto-inc lock for some
table in this SQL statement we release it now */
Regards,
Monty
Re: [Maria-developers] [MariaDB/server] MDEV-11039 - Add new scheduling algorithm for reducing tail latencies (#245)
by Kristian Nielsen 17 Oct '16
Jiamin Huang <notifications(a)github.com> writes:
> @sensssz pushed 1 commit.
>
> 5dc7ad8 Reduce conflict during in-order replication.
Cool, that looks nice and simple. Thanks!
- Kristian.
Phil Sweeney <launchpad(a)sweeneymail.com> writes:
> Re: https://jira.mariadb.org/browse/MDEV-7145
> Delayed replication (a feature shipped in MySQL 5.6) has been a MariaDB
> feature request since 2014, and has been listed as 'major/red' priority in
> Can anyone comment on whether this is still likely to make it into 10.2?
I am planning to look into this.
- Kristian.
Re: [Maria-developers] [MariaDB/server] MDEV-11064 - Restrict the speed of reading binlog from Master (#246)
by Kristian Nielsen 15 Oct '16
Sergey Vojtovich <notifications(a)github.com> writes:
> Similar to other open source projects, the MariaDB Foundation needs to
> have shared ownership of all code that is included in the MariaDB
> distribution. The easiest way to achieve this is by submitting your
> code under the BSD-new license.
This is a deliberate lie. There is no such requirement. In fact, most of the
code in MariaDB Server is contributed only under the GPLv2, including code
from the MariaDB Corporation, Oracle, Galera, Tokutek, ...
You already indicated that your code is contributed under the GPLv2 by
publishing it on a public github branch here:
https://github.com/GCSAdmin/MariaDB/tree/10.2-binlog-speed-limit
Please consider keeping your contribution under the GPLv2. Like other open
source projects, the MariaDB server needs the strong protection that the GPL
gives.
You are of course free to use whatever license you choose for your own
work. But I cannot be seen as supporting this continued abuse from managers
at the MariaDB Corporation, people who will not even stand forward
publicly. So if you choose to support the BSD/MCA, I regret that I will
not be able to look into your pull request.
Otherwise, I will be happy to look into it, probably sometime during the
coming week.
Thanks,
- Kristian.
Re: [Maria-developers] [Commits] d0064c6: MDEV-7635: Convert log_queries_not_using_indexes to ulong
by Daniel Black 14 Oct '16
So is this making log-queries-not-using-indexes take on the same
meaning as min_examined_row_limit for the no-index case, with both
applying?
Is thd->get_examined_row_count() the right comparison value if the
non-indexed table had far fewer rows examined?
Note -
https://mariadb.com/kb/en/mariadb/server-system-variables/#log_queries_not_…
needs updating and needs to reference min_examined_row_limit as well.
Are log_slow_admin_statements / log_slow_slave_statements going to be
modified in the same way, and is min-examined-row-limit to be deprecated?
Is giving log-queries-not-using-indexes a default value of 5000 (as
suggested for min-examined-row-limit) applicable here?
On 14/10/16 10:09, Nirbhay Choubey wrote:
> revision-id: d0064c6e94414cfd6bbfcf171b1efabababe1d2e (mariadb-10.2.1-55-gd0064c6)
> parent(s): 0d70fd0f9b7b8480b6053ef2dfcb55d917de2bca
> author: Nirbhay Choubey
> committer: Nirbhay Choubey
> timestamp: 2016-10-13 19:09:53 -0400
> message:
>
> MDEV-7635: Convert log_queries_not_using_indexes to ulong
>
> ---
> mysql-test/r/mysqld--help.result | 7 ++++---
> mysql-test/r/show_check.result | 10 +++++-----
> mysql-test/r/variables.result | 4 ++--
> .../r/log_queries_not_using_indexes_basic.result | 18 ++++++++++--------
> .../t/log_queries_not_using_indexes_basic.test | 14 +++++++-------
> mysql-test/t/show_check.test | 4 ++--
> sql/mysqld.cc | 2 +-
> sql/mysqld.h | 2 +-
> sql/sql_parse.cc | 3 ++-
> sql/sys_vars.cc | 9 +++++----
> 10 files changed, 39 insertions(+), 34 deletions(-)
>
> diff --git a/mysql-test/r/mysqld--help.result b/mysql-test/r/mysqld--help.result
> index cb1399d..2a570f8 100644
> --- a/mysql-test/r/mysqld--help.result
> +++ b/mysql-test/r/mysqld--help.result
> @@ -365,9 +365,10 @@ The following options may be given as the first argument:
> --log-isam[=name] Log all MyISAM changes to file.
> --log-output=name How logs should be written. Any combination of: NONE,
> FILE, TABLE
> - --log-queries-not-using-indexes
> + --log-queries-not-using-indexes[=#]
> Log queries that are executed without benefit of any
> - index to the slow log if it is open
> + index and that examined fewer rows than specified to the
> + slow log if it is open
> --log-short-format Don't log extra information to update and slow-query
> logs.
> --log-slave-updates Tells the slave to log the updates from the slave thread
> @@ -1257,7 +1258,7 @@ log-bin-trust-function-creators FALSE
> log-error
> log-isam myisam.log
> log-output FILE
> -log-queries-not-using-indexes FALSE
> +log-queries-not-using-indexes 0
> log-short-format FALSE
> log-slave-updates FALSE
> log-slow-admin-statements FALSE
> diff --git a/mysql-test/r/show_check.result b/mysql-test/r/show_check.result
> index 19a2597..db1a0ee 100644
> --- a/mysql-test/r/show_check.result
> +++ b/mysql-test/r/show_check.result
> @@ -1228,27 +1228,27 @@ use test;
> flush status;
> show variables like "log_queries_not_using_indexes";
> Variable_name Value
> -log_queries_not_using_indexes ON
> +log_queries_not_using_indexes 1
> select 1 from information_schema.tables limit 1;
> 1
> 1
> show status like 'slow_queries';
> Variable_name Value
> Slow_queries 1
> -set global log_queries_not_using_indexes=OFF;
> +set global log_queries_not_using_indexes=0;
> show variables like "log_queries_not_using_indexes";
> Variable_name Value
> -log_queries_not_using_indexes OFF
> +log_queries_not_using_indexes 0
> select 1 from information_schema.tables limit 1;
> 1
> 1
> show status like 'slow_queries';
> Variable_name Value
> Slow_queries 1
> -set global log_queries_not_using_indexes=ON;
> +set global log_queries_not_using_indexes=1;
> show variables like "log_queries_not_using_indexes";
> Variable_name Value
> -log_queries_not_using_indexes ON
> +log_queries_not_using_indexes 1
> select 1 from information_schema.tables limit 1;
> 1
> 1
> diff --git a/mysql-test/r/variables.result b/mysql-test/r/variables.result
> index 50379a5..1423284 100644
> --- a/mysql-test/r/variables.result
> +++ b/mysql-test/r/variables.result
> @@ -995,10 +995,10 @@ select @@log_queries_not_using_indexes;
> 0
> show variables like 'log_queries_not_using_indexes';
> Variable_name Value
> -log_queries_not_using_indexes OFF
> +log_queries_not_using_indexes 0
> select * from information_schema.session_variables where variable_name like 'log_queries_not_using_indexes';
> VARIABLE_NAME VARIABLE_VALUE
> -LOG_QUERIES_NOT_USING_INDEXES OFF
> +LOG_QUERIES_NOT_USING_INDEXES 0
> select @@"";
> ERROR 42000: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '""' at line 1
> select @@&;
> diff --git a/mysql-test/suite/sys_vars/r/log_queries_not_using_indexes_basic.result b/mysql-test/suite/sys_vars/r/log_queries_not_using_indexes_basic.result
> index fcb5abb..9dafcb7 100644
> --- a/mysql-test/suite/sys_vars/r/log_queries_not_using_indexes_basic.result
> +++ b/mysql-test/suite/sys_vars/r/log_queries_not_using_indexes_basic.result
> @@ -29,10 +29,12 @@ SELECT @@global.log_queries_not_using_indexes;
> @@global.log_queries_not_using_indexes
> 0
> SET GLOBAL log_queries_not_using_indexes= ON;
> +ERROR 42000: Incorrect argument type to variable 'log_queries_not_using_indexes'
> SELECT @@global.log_queries_not_using_indexes;
> @@global.log_queries_not_using_indexes
> -1
> +0
> SET GLOBAL log_queries_not_using_indexes= OFF;
> +ERROR 42000: Incorrect argument type to variable 'log_queries_not_using_indexes'
> SELECT @@global.log_queries_not_using_indexes;
> @@global.log_queries_not_using_indexes
> 0
> @@ -47,20 +49,20 @@ SELECT @@global.log_queries_not_using_indexes;
> @@global.log_queries_not_using_indexes
> 0
> SET @@global.log_queries_not_using_indexes= 'DEFAULT';
> -ERROR 42000: Variable 'log_queries_not_using_indexes' can't be set to the value of 'DEFAULT'
> +ERROR 42000: Incorrect argument type to variable 'log_queries_not_using_indexes'
> SET @@global.log_queries_not_using_indexes= 'true';
> -ERROR 42000: Variable 'log_queries_not_using_indexes' can't be set to the value of 'true'
> +ERROR 42000: Incorrect argument type to variable 'log_queries_not_using_indexes'
> SET @@global.log_queries_not_using_indexes= BLABLA;
> -ERROR 42000: Variable 'log_queries_not_using_indexes' can't be set to the value of 'BLABLA'
> +ERROR 42000: Incorrect argument type to variable 'log_queries_not_using_indexes'
> SET @@global.log_queries_not_using_indexes= 25;
> -ERROR 42000: Variable 'log_queries_not_using_indexes' can't be set to the value of '25'
> SET GLOBAL log_queries_not_using_indexes= -1;
> -ERROR 42000: Variable 'log_queries_not_using_indexes' can't be set to the value of '-1'
> +Warnings:
> +Warning 1292 Truncated incorrect log_queries_not_using_indexes value: '-1'
> SET @badvar= 'true';
> SET @@global.log_queries_not_using_indexes= @badvar;
> -ERROR 42000: Variable 'log_queries_not_using_indexes' can't be set to the value of 'true'
> +ERROR 42000: Incorrect argument type to variable 'log_queries_not_using_indexes'
> SET GLOBAL log_queries_not_using_indexes= 'DEFAULT';
> -ERROR 42000: Variable 'log_queries_not_using_indexes' can't be set to the value of 'DEFAULT'
> +ERROR 42000: Incorrect argument type to variable 'log_queries_not_using_indexes'
> SET log_queries_not_using_indexes= TRUE;
> ERROR HY000: Variable 'log_queries_not_using_indexes' is a GLOBAL variable and should be set with SET GLOBAL
> SET SESSION log_queries_not_using_indexes= TRUE;
> diff --git a/mysql-test/suite/sys_vars/t/log_queries_not_using_indexes_basic.test b/mysql-test/suite/sys_vars/t/log_queries_not_using_indexes_basic.test
> index a726bff..806030d 100644
> --- a/mysql-test/suite/sys_vars/t/log_queries_not_using_indexes_basic.test
> +++ b/mysql-test/suite/sys_vars/t/log_queries_not_using_indexes_basic.test
> @@ -47,9 +47,11 @@ SELECT @@global.log_queries_not_using_indexes;
> SET GLOBAL log_queries_not_using_indexes= DEFAULT;
> SELECT @@global.log_queries_not_using_indexes;
>
> +--error ER_WRONG_TYPE_FOR_VAR
> SET GLOBAL log_queries_not_using_indexes= ON;
> SELECT @@global.log_queries_not_using_indexes;
>
> +--error ER_WRONG_TYPE_FOR_VAR
> SET GLOBAL log_queries_not_using_indexes= OFF;
> SELECT @@global.log_queries_not_using_indexes;
>
> @@ -66,26 +68,24 @@ SELECT @@global.log_queries_not_using_indexes;
> # Check if the value in GLOBAL Table matches value in variable #
> #################################################################
>
> ---error ER_WRONG_VALUE_FOR_VAR
> +--error ER_WRONG_TYPE_FOR_VAR
> SET @@global.log_queries_not_using_indexes= 'DEFAULT';
>
> ---error ER_WRONG_VALUE_FOR_VAR
> +--error ER_WRONG_TYPE_FOR_VAR
> SET @@global.log_queries_not_using_indexes= 'true';
>
> ---error ER_WRONG_VALUE_FOR_VAR
> +--error ER_WRONG_TYPE_FOR_VAR
> SET @@global.log_queries_not_using_indexes= BLABLA;
>
> ---error ER_WRONG_VALUE_FOR_VAR
> SET @@global.log_queries_not_using_indexes= 25;
>
> ---error ER_WRONG_VALUE_FOR_VAR
> SET GLOBAL log_queries_not_using_indexes= -1;
>
> SET @badvar= 'true';
> ---error ER_WRONG_VALUE_FOR_VAR
> +--error ER_WRONG_TYPE_FOR_VAR
> SET @@global.log_queries_not_using_indexes= @badvar;
>
> ---error ER_WRONG_VALUE_FOR_VAR
> +--error ER_WRONG_TYPE_FOR_VAR
> SET GLOBAL log_queries_not_using_indexes= 'DEFAULT';
>
> --error ER_GLOBAL_VARIABLE
> diff --git a/mysql-test/t/show_check.test b/mysql-test/t/show_check.test
> index a14c42d..9644b77 100644
> --- a/mysql-test/t/show_check.test
> +++ b/mysql-test/t/show_check.test
> @@ -945,11 +945,11 @@ flush status;
> show variables like "log_queries_not_using_indexes";
> select 1 from information_schema.tables limit 1;
> show status like 'slow_queries';
> -set global log_queries_not_using_indexes=OFF;
> +set global log_queries_not_using_indexes=0;
> show variables like "log_queries_not_using_indexes";
> select 1 from information_schema.tables limit 1;
> show status like 'slow_queries';
> -set global log_queries_not_using_indexes=ON;
> +set global log_queries_not_using_indexes=1;
> show variables like "log_queries_not_using_indexes";
> select 1 from information_schema.tables limit 1;
> show status like 'slow_queries';
> diff --git a/sql/mysqld.cc b/sql/mysqld.cc
> index 28e91e2..f02f2b0 100644
> --- a/sql/mysqld.cc
> +++ b/sql/mysqld.cc
> @@ -395,7 +395,7 @@ my_bool disable_log_notes;
> static my_bool opt_abort;
> ulonglong log_output_options;
> my_bool opt_userstat_running;
> -my_bool opt_log_queries_not_using_indexes= 0;
> +ulong opt_log_queries_not_using_indexes= 0;
> bool opt_error_log= IF_WIN(1,0);
> bool opt_disable_networking=0, opt_skip_show_db=0;
> bool opt_skip_name_resolve=0;
> diff --git a/sql/mysqld.h b/sql/mysqld.h
> index 846a01a..d577a9b 100644
> --- a/sql/mysqld.h
> +++ b/sql/mysqld.h
> @@ -114,7 +114,7 @@ extern my_bool opt_backup_history_log;
> extern my_bool opt_backup_progress_log;
> extern ulonglong log_output_options;
> extern ulong log_backup_output_options;
> -extern my_bool opt_log_queries_not_using_indexes;
> +extern ulong opt_log_queries_not_using_indexes;
> extern bool opt_disable_networking, opt_skip_show_db;
> extern bool opt_skip_name_resolve;
> extern bool opt_ignore_builtin_innodb;
> diff --git a/sql/sql_parse.cc b/sql/sql_parse.cc
> index c76e22a..de2da9e 100644
> --- a/sql/sql_parse.cc
> +++ b/sql/sql_parse.cc
> @@ -2417,7 +2417,8 @@ void log_slow_statement(THD *thd)
> if (((thd->server_status & SERVER_QUERY_WAS_SLOW) ||
> ((thd->server_status &
> (SERVER_QUERY_NO_INDEX_USED | SERVER_QUERY_NO_GOOD_INDEX_USED)) &&
> - opt_log_queries_not_using_indexes &&
> + (opt_log_queries_not_using_indexes > 0) &&
> + (thd->get_examined_row_count() >= opt_log_queries_not_using_indexes) &&
> !(sql_command_flags[thd->lex->sql_command] & CF_STATUS_COMMAND))) &&
> thd->get_examined_row_count() >= thd->variables.min_examined_row_limit)
> {
> diff --git a/sql/sys_vars.cc b/sql/sys_vars.cc
> index 47a0a38..d11d690 100644
> --- a/sql/sys_vars.cc
> +++ b/sql/sys_vars.cc
> @@ -1178,12 +1178,13 @@ static Sys_var_charptr Sys_log_error(
> CMD_LINE(OPT_ARG, OPT_LOG_ERROR),
> IN_FS_CHARSET, DEFAULT(disabled_my_option));
>
> -static Sys_var_mybool Sys_log_queries_not_using_indexes(
> +static Sys_var_ulong Sys_log_queries_not_using_indexes(
> "log_queries_not_using_indexes",
> - "Log queries that are executed without benefit of any index to the "
> - "slow log if it is open",
> + "Log queries that are executed without benefit of any index and that "
> + "examined fewer rows than specified to the slow log if it is open",
> GLOBAL_VAR(opt_log_queries_not_using_indexes),
> - CMD_LINE(OPT_ARG), DEFAULT(FALSE));
> + CMD_LINE(OPT_ARG), VALID_RANGE(0, UINT_MAX), DEFAULT(0),
> + BLOCK_SIZE(1));
>
> static Sys_var_mybool Sys_log_slow_admin_statements(
> "log_slow_admin_statements",
Re: [Maria-developers] [Commits] a3d469b: MDEV-11035: Restore removed disallow-writes for Galera
by Nirbhay Choubey 13 Oct '16
Hi Jan!
On Wed, Oct 12, 2016 at 7:31 AM, Jan Lindström <jan.lindstrom(a)mariadb.com>
wrote:
> revision-id: a3d469b991732dddfafe966011e996166cd98671
> (mariadb-10.2.2-39-ga3d469b)
> parent(s): 6e46de4a674c55858ec5b2528dcebb69010b34d6
> author: Jan Lindström
> committer: Jan Lindström
> timestamp: 2016-10-12 14:29:36 +0300
> message:
>
> MDEV-11035: Restore removed disallow-writes for Galera
>
> Found actually only one missing condition.
>
The patch is unfortunately incomplete. There are a bunch of
WAIT_ALLOW_WRITES uses (when compared to 10.1) that are currently missing
in 10.2. Also, there is a memory leak, as InnoDB does not free this event
on shutdown (srv_free()).
You can use galera.galera_var_innodb_disallow_writes to verify the fix.
Best,
Nirbhay
> ---
> storage/innobase/handler/ha_innodb.cc | 6 ++++--
> storage/innobase/srv/srv0srv.cc | 4 +---
> 2 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/storage/innobase/handler/ha_innodb.cc
> b/storage/innobase/handler/ha_innodb.cc
> index 1e74154..dc7e6eb 100644
> --- a/storage/innobase/handler/ha_innodb.cc
> +++ b/storage/innobase/handler/ha_innodb.cc
> @@ -23276,10 +23276,11 @@ innobase_disallow_writes_update(
> {
> *(my_bool*)var_ptr = *(my_bool*)save;
> ut_a(srv_allow_writes_event);
> - if (*(my_bool*)var_ptr)
> + if (*(my_bool*)var_ptr) {
> os_event_reset(srv_allow_writes_event);
> - else
> + } else {
> os_event_set(srv_allow_writes_event);
> + }
> }
>
> static MYSQL_SYSVAR_BOOL(disallow_writes, innobase_disallow_writes,
> @@ -23287,6 +23288,7 @@ static MYSQL_SYSVAR_BOOL(disallow_writes,
> innobase_disallow_writes,
> "Tell InnoDB to stop any writes to disk",
> NULL, innobase_disallow_writes_update, FALSE);
> #endif /* WITH_INNODB_DISALLOW_WRITES */
> +
> static MYSQL_SYSVAR_BOOL(random_read_ahead, srv_random_read_ahead,
> PLUGIN_VAR_NOCMDARG,
> "Whether to use read ahead for random access within an extent.",
> diff --git a/storage/innobase/srv/srv0srv.cc
> b/storage/innobase/srv/srv0srv.cc
> index 49de954..a46f62f 100644
> --- a/storage/innobase/srv/srv0srv.cc
> +++ b/storage/innobase/srv/srv0srv.cc
> @@ -1969,9 +1969,7 @@ DECLARE_THREAD(srv_error_monitor_thread)(
> if (sync_array_print_long_waits(&waiter, &sema)
> && sema == old_sema && os_thread_eq(waiter, old_waiter)) {
> #if defined(WITH_WSREP) && defined(WITH_INNODB_DISALLOW_WRITES)
> - if (true) {
> - // JAN: TODO: MySQL 5.7
> - //if (srv_allow_writes_event->is_set) {
> + if (os_event_is_set(srv_allow_writes_event)) {
> #endif /* WITH_WSREP */
> fatal_cnt++;
> #if defined(WITH_WSREP) && defined(WITH_INNODB_DISALLOW_WRITES)
[Maria-developers] Error building mariadb-10.0 with Clang 3.8: Unqualified lookup in templates
by Luke Benes 13 Oct '16
Building mariadb-10.0 with Clang results in:
graph_concepts.hpp:93:17: error: call to function 'out_edges' that is neither visible in the template definition nor found by argument-dependent lookup
Full log:
http://clang.debian.net/logs/2016-08-30/mariadb-10.0_10.0.26-3_unstable_cla…
Re: [Maria-developers] [MariaDB/server] Add new scheduling algorithm for reducing tail latencies (#245)
by Kristian Nielsen 12 Oct '16
Jiamin Huang <notifications(a)github.com> writes:
> This branch introduces a new scheduling algorithm
> (Variance-Aware-Transaction-Scheduling, VATS) for the record lock manager
> of InnoDB and XtraDB. Instead of using First-Come-First-Served (FCFS), the
> newly introduced algorithm uses an Eldest-Transaction-First (ETF)
> heuristic, which prefers older transactions over new ones.
It could be interesting to extend this to also understand commit order for
in-order parallel replication.
For in-order parallel replication, the commit order of transactions is fixed
from the start. Suppose transactions Tm and Tn are both requesting the same
record lock, and that Tm goes before Tn. If Tn gets the lock, it will be
immediately killed since Tm must be allowed to go first. But there is no
guarantee that Tm will be the older transaction. So preferring Tm over Tn
here based on the commit order can potentially avoid some transaction
rollback/retry.
The function thd_deadlock_victim_preference() seems appropriate for
overriding the age-of-transaction preference. If this function returns a
preferred deadlock victim, give the lock to the other transaction. If not,
the VATS algorithm can be applied.
Especially in aggressive parallel replication, there can be a lot of
conflicting transactions (one user reported 20% of transactions conflicting,
while still getting good speedup from parallel replication). So it might be
a worthwhile thing to do.
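A minimal sketch of such a grant policy (illustrative names only, not InnoDB's actual lock-queue code): among the waiters for a record lock, a fixed replication commit order overrides the oldest-transaction-first (VATS/ETF) heuristic, since granting against commit order just forces a kill and retry of the later transaction.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Waiter {
  long start_time;  // lower = older transaction
  long commit_seq;  // fixed commit order; -1 if not in-order replication
};

// Return the index of the waiter that should be granted the lock:
// commit order when both waiters are replication workers, else age.
size_t pick_waiter(const std::vector<Waiter> &w) {
  size_t best = 0;
  for (size_t i = 1; i < w.size(); i++) {
    bool both_repl = w[i].commit_seq >= 0 && w[best].commit_seq >= 0;
    if (both_repl
            ? w[i].commit_seq < w[best].commit_seq   // commit order wins
            : w[i].start_time < w[best].start_time)  // else oldest first
      best = i;
  }
  return best;
}
```

In the server, the commit-order comparison would come from something like thd_deadlock_victim_preference(), as suggested above, rather than an explicit sequence number.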
- Kristian.
11 Oct '16
Yoshinori Matsunobu <yoshinori(a)fb.com> writes:
> About transaction ids, it's not visible from MyRocks yet. We're currently working on
> RocksDB to add an API to get transaction id, and making it available via MyRocks.
Ok, I see.
Optimistic parallel replication needs the ability to somehow find the THD
that is holding the row lock that is blocking another THD. If we have T1
followed by T2, T2 will only commit after T1 has. So if there is a
conflicting row lock, we need a way to identify T2 so that the conflict can
be resolved. Lock wait timeout is not sufficient here because of the
requirement of in-order commit.
> Row locks are normally released at transaction commit or rollback, but there are some exceptions.
> - Auto-increment id allocation is implemented as std::atomic<longlong> and
> the lock is released earlier than statement/transaction, like
> InnoDB. I hope this doesn't matter for
> parallel replication, since auto-inc ids are always given on slaves
> (either RBR image, or insert_id with SBR).
Agree, it does not matter, MyRocks just should not report these lock
conflicts with thd_rpl_deadlock_check() (and since this is using a different
mechanism, there is no reason it would).
> - MyRocks has data dictionary
> (https://github.com/facebook/mysql-5.6/wiki/MyRocks-data-dictionary-format)
> and data dictionary operations' transaction scope is different from
> applications'. For example, internal index
> id allocation is done (and committed) immediately. There is no SQL
> statements to directly manipulate data dictionary,
> so I assume this won't matter for replication either.
Agree, it shouldn't.
Optimistic parallel replication handles DDL pessimistically anyway - DDL is
not run in parallel with any other statements.
Thanks,
- Kristian.
10 Oct '16
Sergey, Yoshinori, it was great talking to you about MyRocks in Amsterdam.
I took a first look at how to extend MyRocks to work with optimistic
parallel replication. It looks conceptually quite simple.
Sergey, I understand you have more pressing priorities right now (like
getting a tree to build :), so let us revisit this in more detail when you
get to it.
It looks like the fix is conceptually as simple as this patch, which calls
thd_rpl_deadlock_check() whenever a transaction is blocked on a row lock:
-----------------------------------------------------------------------
diff --git a/utilities/transactions/transaction_lock_mgr.cc b/utilities/transactions/transaction_lock_mgr.cc
index 28e8598..5ff291f 100644
--- a/utilities/transactions/transaction_lock_mgr.cc
+++ b/utilities/transactions/transaction_lock_mgr.cc
@@ -317,6 +317,8 @@ Status TransactionLockMgr::AcquireWithTimeout(LockMap* lock_map,
return result;
}
+extern "C" int thd_rpl_deadlock_check(MYSQL_THD thd, MYSQL_THD other_thd);
+
// Try to lock this key after we have acquired the mutex.
// Sets *expire_time to the expiration time in microseconds
// or 0 if no expiration.
@@ -340,6 +342,9 @@ Status TransactionLockMgr::AcquireLocked(LockMap* lock_map,
lock_info.expiration_time = txn_lock_info.expiration_time;
// lock_cnt does not change
} else {
+ THD *blocked_thd = getTHD(txn_lock_info.txn_id);
+ THD *blocking_thd = getTHD(lock_info.txn_id);
+ thd_rpl_deadlock_check(blocked_thd, blocking_thd);
result = Status::TimedOut(Status::SubCode::kLockTimeout);
}
}
-----------------------------------------------------------------------
A real patch will need some plumbing to put the code in the right place and
have the right information available. Ie. probably the
thd_rpl_deadlock_check() call will go into an overridden virtual method in
ha_rocksdb.cc. I also did not check if/how one can get from txn_id to THD
(what is called getTHD() above), I assume it can be implemented reasonably
easily if it is not already there? Hints will be appreciated here as I am new
to the MyRocks and RocksDB codebases.
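To make the intended plumbing concrete, here is a minimal, self-contained sketch. THD, getTHD() and the txn-id-to-THD registry are assumed stand-ins (the real THD and thd_rpl_deadlock_check() live in the server, and MyRocks would have to maintain the mapping); this is not the actual MyRocks code.

```cpp
#include <cstdint>
#include <unordered_map>

// Stand-in for the server's THD; the registry and getTHD() are
// assumptions from the mail above, not existing MyRocks names.
struct THD { int id; };

static std::unordered_map<uint64_t, THD*> txn_to_thd;

static THD* getTHD(uint64_t txn_id) {
  auto it = txn_to_thd.find(txn_id);
  return it == txn_to_thd.end() ? nullptr : it->second;
}

// Mock of the server callback so the sketch is runnable; the real
// thd_rpl_deadlock_check() decides whether to kill the blocking thd.
static int deadlock_checks = 0;
extern "C" int thd_rpl_deadlock_check(THD* blocked_thd, THD* blocking_thd) {
  ++deadlock_checks;
  return 0;
}

// The hook at the point where AcquireLocked() gives up on a row lock:
// report the (blocked, blocking) pair to the server.
void on_lock_conflict(uint64_t waiter_txn_id, uint64_t holder_txn_id) {
  THD* blocked = getTHD(waiter_txn_id);
  THD* blocking = getTHD(holder_txn_id);
  if (blocked && blocking)
    thd_rpl_deadlock_check(blocked, blocking);
}
```

The null checks matter: a transaction id with no registered THD (e.g. an internal RocksDB transaction) should simply not be reported.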
When are row locks released? I am interested in whether row locks can be
released earlier than at transaction commit time. If so, the simple patch
above will give false positives, and it might be worth it to investigate
ways to not report locks that are released earlier than commit. Eg. in
InnoDB, auto-increment locks are released earlier than commit, and thus are
not reported.
Once something like this is in place, I think optimistic parallel
replication should work. In case of a conflict between transactions T1 and
T2, thd_rpl_deadlock_check(T1, T2) will be called and will cause T2 to be
killed so that T1 can proceed and T2 be re-tried afterwards. So things look
good now; let us revisit this when there is a tree to work on.
- Kristian.
08 Oct '16
Hi, Alexey!
On Oct 08, Alexey Botchkov wrote:
> > Other comments, that you didn't reply to, does it mean you agree with
> > them (like adding tests with values that need json-escaping)? Okay.
>
> Right, it means that I just agreed with the comment.
>
> I don't think we're going to need more unittest files, so yep, I will
> move it to strings/. I will check for missing functions from MySQL, try
> running MySQL tests with us, then push.
>
> After that we'll still have one inconsistency with MySQL - that they
> have that JSON type for fields and values. JSON values let the user
> compare values as 'json'-s. Not sure anybody ever uses it. But
> JSON fields are probably good to import from MySQL, maybe in read-only
> mode. Just to let people switch to MariaDB easily over these JSON
> tables.
Yes... I think, out of missing features, JSON_TABLE is the most
interesting, but also the complexity is a big unknown. The second one
would be GeoJSON (and it's fairly predictable), then goes JSON
comparison and JSON data type.
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org
08 Oct '16
Hi, Alexey!
On Sep 21, Alexey Botchkov wrote:
>
> > > + In the worst case one character from the 'a' string
> > > + turns into '\uXXXX\uXXXX' which is 12.
>
> > how comes? add an example, please. Like "for example, character x'1234'
> > in the charset A becomes '\u1234\u5678' if JSON string is in the charset B"
>
> For instance the 'SMILING FACE WITH SUNGLASSES' character,
> which is utf32(0001F60E) should be represented as \uD83D\uDE0E.
I hope you put this in the comment in the code...
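For reference, the 12-byte worst case comes from UTF-16 surrogate pairs: one supplementary-plane character becomes '\uXXXX\uXXXX' in the JSON string. A minimal sketch of the computation (assuming a valid code point above the BMP, U+10000..U+10FFFF):

```cpp
#include <cstdint>
#include <utility>

// Split a supplementary-plane code point into its UTF-16 surrogate pair.
// Valid only for cp in U+10000..U+10FFFF; BMP characters need no pair.
std::pair<uint16_t, uint16_t> to_surrogates(uint32_t cp) {
  uint32_t v = cp - 0x10000;            // 20-bit value to split
  uint16_t high = 0xD800 + (v >> 10);   // top 10 bits -> high surrogate
  uint16_t low  = 0xDC00 + (v & 0x3FF); // low 10 bits -> low surrogate
  return {high, low};
}
```

For U+1F60E (SMILING FACE WITH SUNGLASSES) this yields 0xD83D and 0xDE0E, i.e. the \uD83D\uDE0E mentioned above.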
Other comments, that you didn't reply to, does it mean you agree with
them (like adding tests with values that need json-escaping)? Okay.
Thanks for unit tests. There weren't many, but if you think they cover
all the library functionality, then it's fine. Just one thought - do you
expect any more json_lib related test *files* in the future? If not, you
could move json_lib-t.cc into unittest/strings. And why is the test .cc?
The file looked pure C to me.
Anyway, I think it's ok to push after resolving these minor issues. Thanks!
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org
Re: [Maria-developers] [Commits] 1c9da8d: MDEV-9312: storage engine not enforced during galera cluster replication
by Jan Lindström 29 Sep '16
29 Sep '16
Hi Nirbhay,
This looks ok but one question (no need to change now):
On Wed, Sep 28, 2016 at 7:36 PM, Nirbhay Choubey <nirbhay(a)mariadb.com>
wrote:
>
> + Since some wsrep threads (THDs) are created before plugins are
> + initialized, LOCK_plugin mutex needs to be initialized here.
> + */
>
Is there some fundamental reason why we can't create wsrep threads after
other plugins and related global system variables (that are really needed
for wsrep) are initialized?
R: Jan
Re: [Maria-developers] [Commits] a8162d4: MDEV-9312: storage engine not enforced during galera cluster replication
by Sergei Golubchik 28 Sep '16
28 Sep '16
Hi, Nirbhay!
I don't understand, why do you need to create a dummy plugin here?
On Sep 27, Nirbhay Choubey wrote:
> revision-id: a8162d4a8737cff67889390fad0153acc175391d (mariadb-10.1.17-22-ga8162d4)
> parent(s): 6a6b253a6ecbd4d3dd254044d12ec64475453275
> author: Nirbhay Choubey
> committer: Nirbhay Choubey
> timestamp: 2016-09-27 09:03:26 -0400
> message:
>
> MDEV-9312: storage engine not enforced during galera cluster replication
>
> Perform a post initialization of plugin-related variables
> of wsrep threads after their global counterparts have been
> initialized.
...
> +#ifdef WITH_WSREP
> +
> +/*
> + Placeholder for global_system_variables.table_plugin required during
> + initialization of startup wsrep threads.
> +*/
> +static st_plugin_int *wsrep_dummy_plugin;
> +
> +/*
> + Initialize wsrep_dummy_plugin and assign it to
> + global_system_variables.table_plugin.
> +*/
> +void wsrep_plugins_pre_init()
> +{
> + wsrep_dummy_plugin=
> + (st_plugin_int *) my_malloc(sizeof(st_plugin_int), MYF(0));
> + wsrep_dummy_plugin->state= PLUGIN_IS_DISABLED;
> + global_system_variables.table_plugin= plugin_int_to_ref(wsrep_dummy_plugin);
> +}
> +
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org
Re: [Maria-developers] 12ffe70: MDEV-9416: MariaDB galera got signal 11 when altering table add unique index
by Sergei Golubchik 28 Sep '16
28 Sep '16
Hi, Nirbhay!
On Sep 24, Nirbhay Choubey wrote:
> revision-id: 12ffe70831c9c69c3d6ef83431443696455c332b (mariadb-10.1.17-23-g12ffe70)
> parent(s): 8ffded0a78c0a4912b32acac4c3f58f04c3bcd87
> author: Nirbhay Choubey
> committer: Nirbhay Choubey
> timestamp: 2016-09-24 00:27:38 -0400
> message:
>
> MDEV-9416: MariaDB galera got signal 11 when altering table add unique index
>
> When a BF thread attempts to abort a victim thread's transaction,
> the victim thread is not locked and thus it's not safe to rely on
> its data structures like htons registered for the trx.
>
> So, instead of getting the registered htons from victim, innodb's
> hton can be looked up directly from installed_htons[] and used to
> abort the transaction. (Same technique is used in older versions)
>
> diff --git a/sql/handler.cc b/sql/handler.cc
> index 4e4c8fa..6a32fec 100644
> --- a/sql/handler.cc
> +++ b/sql/handler.cc
> @@ -6108,29 +6108,16 @@ int ha_abort_transaction(THD *bf_thd, THD *victim_thd, my_bool signal)
> DBUG_RETURN(0);
> }
>
> - /* Try statement transaction if standard one is not set. */
> - THD_TRANS *trans= (victim_thd->transaction.all.ha_list) ?
> - &victim_thd->transaction.all : &victim_thd->transaction.stmt;
> -
> - Ha_trx_info *ha_info= trans->ha_list, *ha_info_next;
> -
> - for (; ha_info; ha_info= ha_info_next)
> + handlerton *hton= installed_htons[DB_TYPE_INNODB];
> + if (hton && hton->abort_transaction)
> {
> - handlerton *hton= ha_info->ht();
> - if (!hton->abort_transaction)
> - {
> - /* Skip warning for binlog & wsrep. */
> - if (hton->db_type != DB_TYPE_BINLOG && hton != wsrep_hton)
> - {
> - WSREP_WARN("Cannot abort transaction.");
> - }
> - }
> - else
> - {
> - hton->abort_transaction(hton, bf_thd, victim_thd, signal);
> - }
> - ha_info_next= ha_info->next();
> + hton->abort_transaction(hton, bf_thd, victim_thd, signal);
Okay, but please add a comment that it's safe for one thread (bf_thd) to
abort the transaction running in another thread (victim_thd), because
innodb's lock_sys and trx_mutex guarantee the necessary protection.
Also note that it's not safe to access victim_thd->transaction, because it's
not protected from concurrent accesses, and that it would be overkill to
take LOCK_plugin and iterate the whole installed_htons[] array every time.
Then it's ok to push.
> }
> + else
> + {
> + WSREP_WARN("Cannot abort InnoDB transaction");
> + }
> +
> DBUG_RETURN(0);
> }
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org
Re: [Maria-developers] [Commits] f2aea43: MDEV-10649: Optimizer sometimes use "index" instead of "range" access for UPDATE
by Jan Lindström 28 Sep '16
28 Sep '16
On Tue, Sep 6, 2016 at 8:37 PM, Sergei Petrunia <psergey(a)askmonty.org>
wrote:
> revision-id: f2aea435df7e92fcf8f09f8f6c160161168c5bed
> parent(s): a14f61ef749ad9f9ab2b0f5badf6754ba7443c9e
> committer: Sergei Petrunia
> branch nick: 10.0
> timestamp: 2016-09-06 20:37:21 +0300
> message:
>
> MDEV-10649: Optimizer sometimes use "index" instead of "range" access for
> UPDATE
>
> (XtraDB variant only, for now)
>
> Re-opening a TABLE object (after e.g. FLUSH TABLES or open table cache
> eviction) causes ha_innobase to call
> dict_stats_update(DICT_STATS_FETCH_ONLY_IF_NOT_IN_MEMORY).
>
> Inside this call, the following is done:
> dict_stats_empty_table(table);
> dict_stats_copy(table, t);
>
> On the other hand, commands like UPDATE make this call to get the "rows in
> table" statistics in table->stats.records:
>
> ha_innobase->info(HA_STATUS_VARIABLE|HA_STATUS_NO_LOCK)
>
> note the HA_STATUS_NO_LOCK parameter. It means, no locks are taken by
> ::info() If the ::info() call happens between dict_stats_empty_table
> and dict_stats_copy calls, the UPDATE's optimizer will get an estimate
> of table->stats.records=1, which causes it to pick a full table scan,
> which in turn will take a lot of row locks and cause other bad
> consequences.
>
> ---
> storage/xtradb/dict/dict0stats.cc | 29 +++++++++++++++++++----------
> 1 file changed, 19 insertions(+), 10 deletions(-)
>
> diff --git a/storage/xtradb/dict/dict0stats.cc b/storage/xtradb/dict/
> dict0stats.cc
> index b073398..a4aa436 100644
> --- a/storage/xtradb/dict/dict0stats.cc
> +++ b/storage/xtradb/dict/dict0stats.cc
> @@ -673,7 +673,10 @@ Write all zeros (or 1 where it makes sense) into a
> table and its indexes'
> dict_stats_copy(
> /*============*/
> dict_table_t* dst, /*!< in/out: destination table */
> - const dict_table_t* src) /*!< in: source table */
> + const dict_table_t* src, /*!< in: source table */
> + bool reset_ignored_indexes) /*!< in: if true, set ignored
> indexes
> + to have the same statistics
> as if
> + the table was empty */
> {
> dst->stats_last_recalc = src->stats_last_recalc;
> dst->stat_n_rows = src->stat_n_rows;
> @@ -692,7 +695,16 @@ Write all zeros (or 1 where it makes sense) into a
> table and its indexes'
> && (src_idx = dict_table_get_next_index(src_idx)))) {
>
> if (dict_stats_should_ignore_index(dst_idx)) {
> - continue;
> + if (reset_ignored_indexes) {
> + /* Reset index statistics for all ignored
> indexes,
> + unless they are FT indexes (these have no
> statistics)*/
> + if (dst_idx->type & DICT_FTS) {
> + continue;
> + }
> + dict_stats_empty_index(dst_idx);
>
Does this really help? Yes, we hold the dict_sys mutex here, so
dict_stats_empty_index is safe for all readers using the same mutex.
However, as you pointed out above, info() uses no locking method.
Thus, we really do not know what it will return: all values from before
dict_stats_copy(), some of the indexes with new stats, or all indexes with
new stats.
>
> @@ -3240,13 +3252,10 @@ N*AVG(Ui). In each call it searches for the
> currently fetched index into
>
> dict_table_stats_lock(table, RW_X_LATCH);
>
> - /* Initialize all stats to dummy values before
> - copying because dict_stats_table_clone_create()
> does
> - skip corrupted indexes so our dummy object 't' may
> - have less indexes than the real object 'table'. */
> - dict_stats_empty_table(table);
> -
> - dict_stats_copy(table, t);
> + /* Pass reset_ignored_indexes=true as parameter
> + to dict_stats_copy. This will cause statistics
> + for corrupted indexes to be set to empty values */
> + dict_stats_copy(table, t, true);
>
This is better solution than the original but as noted above, is this
enough ?
R: Jan
28 Sep '16
Hello Monty.
I checked your patch for MDEV-6112 multiple triggers per table.
Here are some quick notes. I'll send a detailed review separately.
1. It would be nice to push the refactoring changes in a separate patch:
a. do sql_mode_t refactoring
b. "trg_idx -> Trigger *" refactoring
c. opt_debugging related things
2. Why not reuse the FOLLOWING_SYM and PRECEDING_SYM keywords
instead of adding the new ones, FOLLOWS_SYM and PRECEDES_SYM?
(this is a non-standard syntax anyway)
3. I'd prefer PRECEDES/FOLLOWS to go before FOR EACH ROW.
"FOR EACH ROW stmt" is a kind of single clause responsible for the action.
It's very confusing to have PRECEDES/FOLLOWS characteristics in between
"FOR EACH ROW" and stmt.
4. You have a lot of changes related to trim_whitespace() where you declare
an unused "prefix_removed" variable. It would be nice to avoid declaring it.
Possible ways:
a. Perhaps the new function version can be defined like this:
size_t trim_whitespace(CHARSET_INFO *cs, LEX_STRING *str);
and make it return "prefix length removed", instead of adding a new
parameter.
b. Another option is just to overload it:
extern void trim_whitespace(CHARSET_INFO *cs, LEX_STRING *str,
uint *prefix_removed);
inline void trim_whitespace(CHARSET_INFO *cs, LEX_STRING *str)
{
uint not_used;
trim_whitespace(cs, str, &not_used);
}
5. The "CREATED" field is not populated well. I'm getting strange values
like 1970-04-24 09:35:00.30.
Btw, this query returns a correct value:
SELECT FROM_UNIXTIME(UNIX_TIMESTAMP(CREATED)*100)
FROM INFORMATION_SCHEMA.triggers;
You must have forgotten to multiply or divide by 100 somewhere.
6. In replication tests it's better to make sure that CREATED is safely
replicated (instead of changing it to #).
By the way, in other tests it's also a good idea not to use # for CREATED.
(see the previous problem)
You can use this:
SET TIMESTAMP=UNIX_TIMESTAMP('2001-01-01 10:20:30');
...
SET TIMESTAMP=DEFAULT;
7. I suggest moving this code into a method, say find_trigger():
+ if (table->triggers)
+ {
+ Trigger *t;
+ for (t= table->triggers->get_trigger(TRG_EVENT_INSERT,
+ TRG_ACTION_BEFORE);
+ t;
+ t= t->next)
+ if (t->is_fields_updated_in_trigger(&full_part_field_set))
+ DBUG_RETURN(false);
+ }
It's repeated at least two times.
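A hedged sketch of that suggested find_trigger() helper, with simplified stand-in types (the real Trigger and trigger-list classes live in the server and carry much more state):

```cpp
// Simplified stand-ins for the server's field bitmap and Trigger chain.
struct FieldSet { unsigned bits; };

struct Trigger {
  Trigger* next;           // next trigger for the same event/time
  unsigned updated_bits;   // stand-in for the fields-updated bitmap
  bool is_fields_updated_in_trigger(const FieldSet* s) const {
    return (updated_bits & s->bits) != 0;
  }
};

// The proposed helper: return the first trigger in the chain that
// updates any field in 'fields', or nullptr if none does.
const Trigger* find_trigger(const Trigger* head, const FieldSet* fields) {
  for (const Trigger* t = head; t; t = t->next)
    if (t->is_fields_updated_in_trigger(fields))
      return t;
  return nullptr;
}
```

The two call sites then reduce to a null check on the return value instead of repeating the loop.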
8. The following structure fields are defined two times:
enum trigger_order_type ordering_clause;
LEX_STRING anchor_trigger_name;
The first time for st_trg_chistics.
The second time for %union in sql_yacc.yy
Note, there will be the third time in sql_yacc_ora.yy.
Please define a structure in sql_lex.h instead:
struct st_trg_execution_order_chistics
{
/**
FOLLOWS or PRECEDES as specified in the CREATE TRIGGER statement.
*/
enum trigger_order_type ordering_clause;
/**
Trigger name referenced in the FOLLOWS/PRECEDES clause of the
CREATE TRIGGER statement.
*/
LEX_STRING anchor_trigger_name;
};
and reuse it in here:
struct st_trg_chistics: public st_trg_execution_order_chistics
{
...
};
and in here:
%union {
...
st_trg_execution_order_chistics trg_execution_order_chistics;
...
};
Please also rename trg_characteristics to trg_execution_order_chistics,
because "trg_characteristics" is more suitable for something having the type
st_trg_chistics rather than st_trg_execution_order_chistics.
Greetings.
[Maria-developers] Possibly a problem in "Allowed to use WITH clauses before SELECT in CREATE ... SELECT and INSERT ... SELECT"
by Alexander Barkov 28 Sep '16
28 Sep '16
Hello Igor,
I was rebasing bb-10.2-compatibility on top of the latest 10.2 and
noticed this grammar change:
> create_select_query_specification:
> - SELECT_SYM create_select_part2 create_select_part3 create_select_part4
> + SELECT_SYM opt_with_clause create_select_part2 create_select_part3
> + create_select_part4
> + {
> + Select->set_with_clause($2);
> + }
> ;
I have some questions:
- From my understanding, opt_with_clause should go before SELECT_SYM.
Why does it go after?
- Why is this grammar change not covered by tests?
- What is the MDEV number for this change?
Thanks!
Re: [Maria-developers] [Commits] 13b5098: MDEV-9531: GROUP_CONCAT with ORDER BY inside takes a lot of memory while it's executed
by Sergei Golubchik 26 Sep '16
26 Sep '16
Hi, Oleksandr!
On Jun 28, Oleksandr Byelkin wrote:
> revision-id: 13b5098fcaa888173472d255e29aff22bcc5baae (mariadb-10.1.13-18-g13b5098)
> parent(s): 732adec0a4c75d99389230feeb0deca0ad668de7
> committer: Oleksandr Byelkin
> timestamp: 2016-06-28 10:59:59 +0200
> message:
>
> MDEV-9531: GROUP_CONCAT with ORDER BY inside takes a lot of memory while it's executed
>
> Limitation added to Red-Black tree.
>
> ---
> include/my_tree.h | 14 +++-
> mysql-test/r/group_concat_big.result | 6 ++
> mysql-test/t/group_concat_big.result | 6 ++
> mysql-test/t/group_concat_big.test | 6 ++
> mysys/tree.c | 156 ++++++++++++++++++++++++-----------
> sql/item_sum.cc | 45 ++++++++--
> 6 files changed, 177 insertions(+), 56 deletions(-)
>
> diff --git a/include/my_tree.h b/include/my_tree.h
> index f8be55f..f1916b9 100644
> --- a/include/my_tree.h
> +++ b/include/my_tree.h
> @@ -57,11 +57,14 @@ typedef struct st_tree_element {
> } TREE_ELEMENT;
>
> #define ELEMENT_CHILD(element, offs) (*(TREE_ELEMENT**)((char*)element + offs))
> +#define R_ELEMENT_CHILD(element, offs) ((TREE_ELEMENT**)((char*)element + offs))
Please,
#define ELEMENT_CHILD_PTR(element, offs) ((TREE_ELEMENT**)((char*)element + offs))
#define ELEMENT_CHILD(element, offs) (*ELEMENT_CHILD_PTR(element, offs))
> typedef struct st_tree {
> TREE_ELEMENT *root,null_element;
> TREE_ELEMENT **parents[MAX_TREE_HEIGHT];
> + TREE_ELEMENT *free_element;
> uint offset_to_key,elements_in_tree,size_of_element;
> + uint elements_limit, del_direction;
> size_t memory_limit, allocated;
> qsort_cmp2 compare;
> void *custom_arg;
> diff --git a/mysql-test/r/group_concat_big.result b/mysql-test/r/group_concat_big.result
> new file mode 100644
> index 0000000..4de0ebb
> --- /dev/null
> +++ b/mysql-test/r/group_concat_big.result
> @@ -0,0 +1,6 @@
> +SELECT GROUP_CONCAT( seq, seq, seq, seq, seq, seq, seq, seq ORDER BY
> +2,1,3,4,6,5,8,7 ) AS cfield1 FROM seq_1_to_50000000;
> +cfield1
> +11111111,22222222,33333333,44444444,55555555,66666666,77777777,88888888,99999999,1010101010101010,1111111111111111,1212121212121212,1313131313131313,1414141414141414,1515151515151515,1616161616161616,1717171717171717,1818181818181818,1919191919191919,2020202020202020,2121212121212121,2222222222222222,2323232323232323,2424242424242424,2525252525252525,2626262626262626,2727272727272727,2828282828282828,2929292929292929,3030303030303030,3131313131313131,3232323232323232,3333333333333333,3434343434343434,3535353535353535,3636363636363636,3737373737373737,3838383838383838,3939393939393939,4040404040404040,4141414141414141,4242424242424242,4343434343434343,4444444444444444,4545454545454545,4646464646464646,4747474747474747,4848484848484848,4949494949494949,5050505050505050,5151515151515151,5252525252525252,5353535353535353,5454545454545454,5555555555555555,5656565656565656,5757575757575757,5858585858585858,5959595959595959,6060606060606060,6161616161616161,6262626262626262,6363636
> 363636363,6464646464646464,65656565
> +Warnings:
> +Warning 1260 Row 65 was cut by GROUP_CONCAT()
1. Same result without your patch?
2. I suppose all rows after 65 skipped completely, aren't they?
> diff --git a/mysql-test/t/group_concat_big.result b/mysql-test/t/group_concat_big.result
> new file mode 100644
> index 0000000..4de0ebb
> --- /dev/null
> +++ b/mysql-test/t/group_concat_big.result
> @@ -0,0 +1,6 @@
> +SELECT GROUP_CONCAT( seq, seq, seq, seq, seq, seq, seq, seq ORDER BY
> +2,1,3,4,6,5,8,7 ) AS cfield1 FROM seq_1_to_50000000;
> +cfield1
> +11111111,22222222,33333333,44444444,55555555,66666666,77777777,88888888,99999999,1010101010101010,1111111111111111,1212121212121212,1313131313131313,1414141414141414,1515151515151515,1616161616161616,1717171717171717,1818181818181818,1919191919191919,2020202020202020,2121212121212121,2222222222222222,2323232323232323,2424242424242424,2525252525252525,2626262626262626,2727272727272727,2828282828282828,2929292929292929,3030303030303030,3131313131313131,3232323232323232,3333333333333333,3434343434343434,3535353535353535,3636363636363636,3737373737373737,3838383838383838,3939393939393939,4040404040404040,4141414141414141,4242424242424242,4343434343434343,4444444444444444,4545454545454545,4646464646464646,4747474747474747,4848484848484848,4949494949494949,5050505050505050,5151515151515151,5252525252525252,5353535353535353,5454545454545454,5555555555555555,5656565656565656,5757575757575757,5858585858585858,5959595959595959,6060606060606060,6161616161616161,6262626262626262,6363636
> 363636363,6464646464646464,65656565
> +Warnings:
> +Warning 1260 Row 65 was cut by GROUP_CONCAT()
Interesting. How did you manage to include the same file twice in a
commit diff?
> diff --git a/mysys/tree.c b/mysys/tree.c
> index a9fc542..6c094d9 100644
> --- a/mysys/tree.c
> +++ b/mysys/tree.c
> @@ -157,6 +172,10 @@ static void free_tree(TREE *tree, myf free_flags)
> free_root(&tree->mem_root, free_flags);
> }
> }
> + if (tree->free_element && tree->with_delete && tree->free)
> + (*tree->free)(tree->free_element, free_free,
> + tree->custom_arg);
"&& tree->with_delete" is wrong here
> + tree->free_element= 0;
> tree->root= &tree->null_element;
> tree->elements_in_tree=0;
> tree->allocated=0;
> @@ -190,6 +209,42 @@ static void delete_tree_element(TREE *tree, TREE_ELEMENT *element)
> }
>
function comment would be nice here
> +void tree_exclude(TREE *tree, TREE_ELEMENT ***parent)
> +{
> + int remove_colour;
> + TREE_ELEMENT ***org_parent, *nod;
> + TREE_ELEMENT *element= **parent;
> @@ -202,7 +257,32 @@ TREE_ELEMENT *tree_insert(TREE *tree, void *key, uint key_size,
> void* custom_arg)
> {
> int cmp;
> - TREE_ELEMENT *element,***parent;
> + TREE_ELEMENT *element, ***parent;
> +
> + if (tree->elements_limit && tree->elements_in_tree &&
no need for "&& tree->elements_in_tree"
> + tree->elements_in_tree >= tree->elements_limit)
> + {
> + /*
> + The limit reached so we should remove one.
> + It is done on the very beginning because:
> + 1) "parents" will be used
> + 2) removing make incorrect search pass
what does that mean?
> + If we will not insert now, we leave freed element for future use
> + */
> + DBUG_ASSERT(key_size == 0);
why?
> +
> + parent= tree->parents;
> + *parent = &tree->root; element= tree->root;
> + while (element != &tree->null_element)
> + {
> + *++parent= R_ELEMENT_CHILD(element, tree->del_direction);
> + element= ELEMENT_CHILD(element, tree->del_direction);
> + }
> + parent--;
> + tree->free_element= **parent;
> + tree_exclude(tree, parent);
> + tree->elements_in_tree--;
> + }
>
> parent= tree->parents;
> *parent = &tree->root; element= tree->root;
> diff --git a/sql/item_sum.cc b/sql/item_sum.cc
> index 1cfee1a..8bcc2ce 100644
> --- a/sql/item_sum.cc
> +++ b/sql/item_sum.cc
> @@ -3390,7 +3390,7 @@ Item_func_group_concat::fix_fields(THD *thd, Item **ref)
> fixed= 1;
> return FALSE;
> }
> -
> +struct tree_node_size {void *left,*right; uint32 f;};
why do you need that?
>
> bool Item_func_group_concat::setup(THD *thd)
> {
> @@ -3499,17 +3499,50 @@ bool Item_func_group_concat::setup(THD *thd)
>
> if (arg_count_order)
> {
> + uint min_rec_length= 1; //at least 1 byte per rec (comma)
> + {
> + List_iterator_fast<Item> li(all_fields);
> + Item *item;
> + while ((item= li++))
> + {
> + switch (item->result_type())
> + {
> + case STRING_RESULT:
> + break; // could be empty string
> + case DECIMAL_RESULT:
> + case REAL_RESULT:
> + min_rec_length+=2;
> + break;
> + case INT_RESULT:
> + min_rec_length++;
> + break;
> + case TIME_RESULT:
> + min_rec_length+=6;
> + break;
> + default:
> + case ROW_RESULT:
> + DBUG_ASSERT(0);
> + break;
> + }
> + }
> + }
yeah... that'll work, of course, but for strings (most common case)
you'll allocate tens or hundreds (if not thousands) of times more memory
than needed :(
A hackish workaround could be to adjust tree->elements_limit (in
Item_func_group_concat::add) after each insertion. But in this case it
would be simpler to limit the tree by size (in bytes) and adjust tree
size after each insertion. What do you think about it?
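A minimal sketch of that byte-budget idea, using a standard ordered container as a stand-in for the red-black TREE in mysys/tree.c (names, the eviction direction, and the container are all illustrative, not the actual tree.c code):

```cpp
#include <set>
#include <string>

// Keep ordered rows under a byte budget: after each insertion, evict
// from the "discard" end until the accumulated size fits again. This
// mirrors limiting the tree by size in bytes rather than by row count.
struct BoundedOrderedRows {
  std::multiset<std::string> rows;  // stand-in for the red-black TREE
  size_t bytes = 0;
  size_t budget;
  explicit BoundedOrderedRows(size_t b) : budget(b) {}

  void insert(const std::string& row) {
    rows.insert(row);
    bytes += row.size();
    // Evict the largest-sorting rows (one possible del_direction);
    // they could never appear in the truncated GROUP_CONCAT result.
    while (bytes > budget && !rows.empty()) {
      auto last = std::prev(rows.end());
      bytes -= last->size();
      rows.erase(last);
    }
  }
};
```

With a byte budget tied to group_concat_max_len, variable-length string rows no longer force a worst-case per-row estimate.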
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org
24 Sep '16
I am basing my work on the async_queries.c example.
In my code, I am calling mysql_real_connect_start(), and the return value
indicates that MySQL is waiting for READ and TIMEOUT. However, if I call
mysql_get_timeout_value() (or the _ms() version) right after that, it
returns 0. Is this expected? As far as I can see, this is causing the next
call to mysql_real_connect_cont() (which happens immediately, because the
timeout triggers right away) to return an error.
This is on OS X 10.11.6 with the MariaDB client just installed via brew;
mariadb_config --version says 5.5.1.
This is the code I'm calling:
status = mysql_real_connect_start(&ret, &mysql, h, u, p, d, 0,
0, 0);
printf("MYSQL connect %d", status);
if (status & MYSQL_WAIT_TIMEOUT) {
unsigned int ms = mysql_get_timeout_value_ms(&mysql);
printf(" and includes a timeout of %u ms", ms);
}
printf("\n");
That code prints: MYSQL connect 9 and includes a timeout of 0 ms
I did find two ways to make the code work:
1. Not passing MYSQL_WAIT_TIMEOUT back to the mysql_real_connect_cont()
(so it thinks a timeout has not happened at all). This allows all the
queries to actually complete, but the code ends up polling heavily.
2. Forcing the return value of mysql_get_timeout_value_ms() to be > 0.
This makes the code work beautifully, but of course I would rather follow
MySQL's hint as to the timeout extension.
I would appreciate any hints as to what I am missing. Thanks much in
advance.
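Workaround 1 above can be made less poll-heavy by translating the status mask into a poll() timeout and treating a reported 0 ms timeout as "no timeout". A sketch of that translation (the flag value 8 is inferred from the status 9 = READ|TIMEOUT printed above and is an assumption here, as is the helper name):

```cpp
// Stand-ins for the client library's wait flags; only the timeout bit
// matters for this sketch (READ=1, WRITE=2, EXCEPT=4, TIMEOUT=8 assumed).
const int WAIT_TIMEOUT_FLAG = 8;

// Map (status, mysql_get_timeout_value_ms() result) to a poll() timeout.
// -1 means "block until I/O is ready"; a reported 0 ms timeout is treated
// as unknown so the event loop does not spin retrying immediately.
int poll_timeout_ms(int status, unsigned int reported_ms) {
  if (!(status & WAIT_TIMEOUT_FLAG))
    return -1;                 // no timeout requested: wait for I/O only
  if (reported_ms == 0)
    return -1;                 // suspicious 0 ms value: don't busy-loop
  return (int)reported_ms;
}
```

When poll() then returns because of I/O readiness rather than expiry, only the READ/WRITE flags are passed back to mysql_real_connect_cont(), so the bogus timeout never fires.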
--
Gonzalo Diethelm
gonzalo.diethelm(a)gmail.com
Re: [Maria-developers] 8ffded0: MDEV-9312: storage engine not enforced during galera cluster replication
by Sergei Golubchik 24 Sep '16
24 Sep '16
Hi, Nirbhay!
On Sep 24, Nirbhay Choubey wrote:
> diff --git a/sql/sql_plugin.cc b/sql/sql_plugin.cc
> index 60248f3..99b3311 100644
> --- a/sql/sql_plugin.cc
> +++ b/sql/sql_plugin.cc
> @@ -3116,7 +3118,12 @@ void plugin_thdvar_init(THD *thd)
> thd->variables.dynamic_variables_size= 0;
> thd->variables.dynamic_variables_ptr= 0;
>
> - if (IF_WSREP((!WSREP(thd) || !thd->wsrep_applier),1))
> + /*
> + The following initializations are deferred for some wsrep system threads
> + created during startup as they could be created even before LOCK_plugin
> + and plugins are initialized.
> + */
> + if (IF_WSREP((plugins_are_initialized),1))
why do you need that? I think you can safely call plugin_thdvar_init()
twice, so you don't really need to skip the first invocation.
It's the principle that the one responsible for the problem should pay
for it; others shouldn't suffer :)
In ha_maria:implicit_commit, it was Aria's hack, and that if() was added
inside Aria. Here is purely wsrep problem, putting an if() on a common
code path for all threads looks just wrong.
> {
> mysql_mutex_lock(&LOCK_plugin);
> thd->variables.table_plugin=
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org
[Maria-developers] Unique Blob Index for lower transaction isolation levels
by Sachin Setia 22 Sep '16
by Sachin Setia 22 Sep '16
22 Sep '16
Hi Sergei,
Actually I was thinking a different way of implementing this.
in function ha_write_row , after it calls storage engine write, we
can check for inserted values and see if it is inserted more than
once.
Of course, it can be I/O intensive (or it may not be , because data
should be in buffer ). We have to do this only when there is multiple
clients doing inserts in same table. But I am not sure if there is any
way to find this. What do you think ?
Regards
sachin
Re: [Maria-developers] [Commits] 949e2ae: MDEV-10315 - Online ALTER TABLE may get stuck in tdc_remove_table
by Sergei Golubchik 21 Sep '16
21 Sep '16
Hi, Sergey!
On Jul 01, Sergey Vojtovich wrote:
> revision-id: 949e2aede909686053ab83ebf2741901be416a07 (mariadb-10.0.26-4-g949e2ae)
> parent(s): 0fdb17e6c3f50ae22eb97b6363bcbd8b0cd9e040
> committer: Sergey Vojtovich
> timestamp: 2016-07-01 13:57:18 +0400
> message:
>
> MDEV-10315 - Online ALTER TABLE may get stuck in tdc_remove_table
>
> There was a race condition between online ALTER TABLE and statements
> performing TABLE_SHARE release without marking it flushed (e.g. in case of
> table cache overflow, SET @@global.table_open_cache, the manager thread
> purging the table cache).
>
> The reason was missing mysql_cond_broadcast().
ok to push
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org
[Maria-developers] MDEV-10425 Assertion `collation.derivation == DERIVATION_IMPLICIT' failed in Item_func_conv_charset::fix_length_and_dec()
by Alexander Barkov 21 Sep '16
21 Sep '16
Hello Sergei,
Please review.
Thanks!
Re: [Maria-developers] [Commits] a288cb6: MDEV-8320 Allow index usage for DATE(datetime_column) = const.
by Sergey Petrunia 20 Sep '16
20 Sep '16
Hi Alexey,
The patch doesn't have any testcase. Did you forget to add them?
On Tue, Sep 20, 2016 at 01:22:19PM +0400, Alexey Botchkov wrote:
> revision-id: a288cb698195b1e57abbb426f1cc9a804d65ff45 (mariadb-10.1.8-262-ga288cb6)
> parent(s): cb575abf76be82553b9c1c12c9112cbc6f53a547
> committer: Alexey Botchkov
> timestamp: 2016-09-20 13:19:08 +0400
> message:
>
> MDEV-8320 Allow index usage for DATE(datetime_column) = const.
>
> create_reverse_func() method added so functions can specify how
> to unpack field argument out of it.
> opt_arguments added to Item_bool_func2 so it can have different
> arguments for the optimizer and the calculation itself.
>
> ---
> sql/item.h | 8 +++++
> sql/item_cmpfunc.h | 52 ++++++++++++-------------------
> sql/item_func.h | 5 +++
> sql/item_timefunc.cc | 87 ++++++++++++++++++++++++++++++++++++++++++++++++++++
> sql/item_timefunc.h | 49 ++++++++++++++++++++++++++++-
> sql/opt_range.cc | 48 +++++++++++++++++++++++++++++
> sql/sql_select.cc | 70 ++++++++++++++++++++++++++++++++++++++----
> 7 files changed, 279 insertions(+), 40 deletions(-)
>
> diff --git a/sql/item.h b/sql/item.h
> index 5b82548..200e2e0 100644
> --- a/sql/item.h
> +++ b/sql/item.h
> @@ -1212,6 +1212,14 @@ class Item: public Value_source,
> {
> return;
> }
> + virtual bool add_extra_key_fields(THD *thd,
> + JOIN *join, KEY_FIELD **key_fields,
> + uint *and_level,
> + table_map usable_tables,
> + SARGABLE_PARAM **sargables)
> + {
> + return false;
> + }
> /*
> Make a select tree for all keys in a condition or a condition part
> @param param Context
> diff --git a/sql/item_cmpfunc.h b/sql/item_cmpfunc.h
> index 6d432bd..516bb07 100644
> --- a/sql/item_cmpfunc.h
> +++ b/sql/item_cmpfunc.h
> @@ -136,6 +136,14 @@ class Item_bool_func :public Item_int_func
> {
> protected:
> /*
> + Some functions modify their arguments for the optimizer.
> + So for example the condition 'Func(fieldX) = constY' turned into
> + 'fieldX = cnuR(constY)' so that optimizer can use an index on fieldX.
> + */
> + Item *opt_args[3];
> + uint opt_arg_count;
> +
> + /*
> Build a SEL_TREE for a simple predicate
> @param param PARAM from SQL_SELECT::test_quick_select
> @param field field in the predicate
> @@ -189,12 +197,12 @@ class Item_bool_func :public Item_int_func
> KEY_PART *key_part,
> Item_func::Functype type, Item *value);
> public:
> - Item_bool_func(THD *thd): Item_int_func(thd) {}
> - Item_bool_func(THD *thd, Item *a): Item_int_func(thd, a) {}
> - Item_bool_func(THD *thd, Item *a, Item *b): Item_int_func(thd, a, b) {}
> - Item_bool_func(THD *thd, Item *a, Item *b, Item *c): Item_int_func(thd, a, b, c) {}
> - Item_bool_func(THD *thd, List<Item> &list): Item_int_func(thd, list) { }
> - Item_bool_func(THD *thd, Item_bool_func *item) :Item_int_func(thd, item) {}
> + Item_bool_func(THD *thd): Item_int_func(thd), opt_arg_count(0) {}
> + Item_bool_func(THD *thd, Item *a): Item_int_func(thd, a), opt_arg_count(0) {}
> + Item_bool_func(THD *thd, Item *a, Item *b): Item_int_func(thd, a, b), opt_arg_count(0) {}
> + Item_bool_func(THD *thd, Item *a, Item *b, Item *c): Item_int_func(thd, a, b, c), opt_arg_count(0) {}
> + Item_bool_func(THD *thd, List<Item> &list): Item_int_func(thd, list), opt_arg_count(0) { }
> + Item_bool_func(THD *thd, Item_bool_func *item) :Item_int_func(thd, item), opt_arg_count(0) {}
> bool is_bool_type() { return true; }
> virtual CHARSET_INFO *compare_collation() const { return NULL; }
> void fix_length_and_dec() { decimals=0; max_length=1; }
> @@ -436,33 +444,7 @@ class Item_bool_func2_with_rev :public Item_bool_func2
> Item_bool_func2_with_rev(THD *thd, Item *a, Item *b):
> Item_bool_func2(thd, a, b) { }
> virtual enum Functype rev_functype() const= 0;
> - SEL_TREE *get_mm_tree(RANGE_OPT_PARAM *param, Item **cond_ptr)
> - {
> - DBUG_ENTER("Item_bool_func2_with_rev::get_mm_tree");
> - DBUG_ASSERT(arg_count == 2);
> - SEL_TREE *ftree;
> - /*
> - Even if get_full_func_mm_tree_for_args(param, args[0], args[1]) will not
> - return a range predicate it may still be possible to create one
> - by reversing the order of the operands. Note that this only
> - applies to predicates where both operands are fields. Example: A
> - query of the form
> -
> - WHERE t1.a OP t2.b
> -
> - In this case, args[0] == t1.a and args[1] == t2.b.
> - When creating range predicates for t2,
> - get_full_func_mm_tree_for_args(param, args[0], args[1])
> - will return NULL because 'field' belongs to t1 and only
> - predicates that applies to t2 are of interest. In this case a
> - call to get_full_func_mm_tree_for_args() with reversed operands
> - may succeed.
> - */
> - if (!(ftree= get_full_func_mm_tree_for_args(param, args[0], args[1])) &&
> - !(ftree= get_full_func_mm_tree_for_args(param, args[1], args[0])))
> - ftree= Item_func::get_mm_tree(param, cond_ptr);
> - DBUG_RETURN(ftree);
> - }
> + SEL_TREE *get_mm_tree(RANGE_OPT_PARAM *param, Item **cond_ptr);
> };
>
>
> @@ -504,6 +486,10 @@ class Item_bool_rowready_func2 :public Item_bool_func2_with_rev
> Item_bool_func2::cleanup();
> cmp.cleanup();
> }
> + bool add_extra_key_fields(THD *thd,
> + JOIN *join, KEY_FIELD **key_fields,
> + uint *and_level, table_map usable_tables,
> + SARGABLE_PARAM **sargables);
> void add_key_fields(JOIN *join, KEY_FIELD **key_fields,
> uint *and_level, table_map usable_tables,
> SARGABLE_PARAM **sargables)
> diff --git a/sql/item_func.h b/sql/item_func.h
> index ca7c481..1f802db 100644
> --- a/sql/item_func.h
> +++ b/sql/item_func.h
> @@ -358,6 +358,11 @@ class Item_func :public Item_func_or_sum
> - or replaced to an Item_int_with_ref
> */
> bool setup_args_and_comparator(THD *thd, Arg_comparator *cmp);
> + virtual bool create_reverse_func(enum Functype cmp_type,
> + THD *thd, Item *r_arg, uint *a_cnt, Item** a)
> + {
> + return false;
> + }
> };
>
>
> diff --git a/sql/item_timefunc.cc b/sql/item_timefunc.cc
> index 41dc967..3124444 100644
> --- a/sql/item_timefunc.cc
> +++ b/sql/item_timefunc.cc
> @@ -2569,6 +2569,39 @@ bool Item_date_typecast::get_date(MYSQL_TIME *ltime, ulonglong fuzzy_date)
> }
>
>
> +bool Item_date_typecast::create_reverse_func(enum Functype cmp_type,
> + THD *thd, Item *r_arg, uint *a_cnt, Item** a)
> +{
> + switch (cmp_type)
> + {
> + case GT_FUNC:
> + case LE_FUNC:
> + (*a_cnt)++;
> + if (!(a[0]= new (thd->mem_root) Item_func_day_end(thd, r_arg)) ||
> + a[0]->fix_fields(thd, a+1))
> + return true;
> + break;
> + case LT_FUNC:
> + case GE_FUNC:
> + (*a_cnt)++;
> + if (!(a[0]= new (thd->mem_root) Item_func_day_begin(thd, r_arg)) ||
> + a[0]->fix_fields(thd, a+1))
> + return true;
> + break;
> + case EQ_FUNC:
> + (*a_cnt)+= 2;
> + if (!(a[0]= new (thd->mem_root) Item_func_day_begin(thd, r_arg)) ||
> + a[0]->fix_fields(thd, a+1))
> + return true;
> + if (!(a[1]= new (thd->mem_root) Item_func_day_end(thd, r_arg)) ||
> + a[1]->fix_fields(thd, a+2))
> + return true;
> + default:;
> + }
> + return false;
> +}
> +
> +
> bool Item_datetime_typecast::get_date(MYSQL_TIME *ltime, ulonglong fuzzy_date)
> {
> fuzzy_date |= sql_mode_for_dates(current_thd);
> @@ -3240,3 +3273,57 @@ bool Item_func_last_day::get_date(MYSQL_TIME *ltime, ulonglong fuzzy_date)
> ltime->time_type= MYSQL_TIMESTAMP_DATE;
> return (null_value= 0);
> }
> +
> +
> +bool Item_func_day_begin::get_date(MYSQL_TIME *res, ulonglong fuzzy_date)
> +{
> + if (get_arg0_date(res, fuzzy_date))
> + return (null_value=1);
> +
> + res->second_part= res->second= res->minute= res->hour= 0;
> + res->time_type= MYSQL_TIMESTAMP_DATETIME;
> +
> + return null_value= 0;
> +}
> +
> +
> +bool Item_func_day_end::get_date(MYSQL_TIME *res, ulonglong fuzzy_date)
> +{
> + if (get_arg0_date(res, fuzzy_date))
> + return (null_value=1);
> +
> + res->hour= 23;
> + res->second= res->minute= 59;
> + res->second_part= 999999;
> + res->time_type= MYSQL_TIMESTAMP_DATETIME;
> + return null_value= 0;
> +}
> +
> +
> +bool Item_func_year_begin::get_date(MYSQL_TIME *res, ulonglong fuzzy_date)
> +{
> + res->year= args[0]->val_int();
> + if ((null_value= args[0]->null_value || res->year >= 9999))
> + return 0;
> +
> + res->day= res->month= 1;
> + res->second_part= res->second= res->minute= res->hour= 0;
> + res->time_type= MYSQL_TIMESTAMP_DATETIME;
> + return null_value= 0;
> +}
> +
> +
> +bool Item_func_year_end::get_date(MYSQL_TIME *res, ulonglong fuzzy_date)
> +{
> + res->year= args[0]->val_int();
> + if ((null_value= args[0]->null_value || res->year >= 9999))
> + return 0;
> +
> + res->month= 12;
> + res->day= 31;
> + res->hour= 23;
> + res->second= res->minute= 59;
> + res->second_part= 999999;
> + res->time_type= MYSQL_TIMESTAMP_DATETIME;
> + return null_value= 0;
> +}
> diff --git a/sql/item_timefunc.h b/sql/item_timefunc.h
> index a853c63..b4f64ef 100644
> --- a/sql/item_timefunc.h
> +++ b/sql/item_timefunc.h
> @@ -745,7 +745,7 @@ class Item_func_now_local :public Item_func_now
> {
> public:
> Item_func_now_local(THD *thd, uint dec): Item_func_now(thd, dec) {}
> - const char *func_name() const { return "now"; }
> + const char *func_name() const { return "day_start"; }
> virtual void store_now_in_TIME(THD *thd, MYSQL_TIME *now_time);
> virtual enum Functype functype() const { return NOW_FUNC; }
> Item *get_copy(THD *thd, MEM_ROOT *mem_root)
> @@ -1074,6 +1074,8 @@ class Item_date_typecast :public Item_temporal_typecast
> bool get_date(MYSQL_TIME *ltime, ulonglong fuzzy_date);
> const char *cast_type() const { return "date"; }
> enum_field_types field_type() const { return MYSQL_TYPE_DATE; }
> + bool create_reverse_func(enum Functype cmp_type,
> + THD *thd, Item *r_arg, uint *a_cnt, Item** a);
> Item *get_copy(THD *thd, MEM_ROOT *mem_root)
> { return get_item_copy<Item_date_typecast>(thd, mem_root, this); }
> };
> @@ -1268,4 +1270,49 @@ class Item_func_last_day :public Item_datefunc
> { return get_item_copy<Item_func_last_day>(thd, mem_root, this); }
> };
>
> +
> +class Item_func_day_begin :public Item_datetimefunc
> +{
> +public:
> + Item_func_day_begin(THD *thd, Item *a): Item_datetimefunc(thd, a) {}
> + const char *func_name() const { return "day_begin"; }
> + bool get_date(MYSQL_TIME *res, ulonglong fuzzy_date);
> + Item *get_copy(THD *thd, MEM_ROOT *mem_root)
> + { return get_item_copy<Item_func_day_begin>(thd, mem_root, this); }
> +};
> +
> +
> +class Item_func_day_end :public Item_datetimefunc
> +{
> +public:
> + Item_func_day_end(THD *thd, Item *a): Item_datetimefunc(thd, a) {}
> + const char *func_name() const { return "day_end"; }
> + bool get_date(MYSQL_TIME *res, ulonglong fuzzy_date);
> + Item *get_copy(THD *thd, MEM_ROOT *mem_root)
> + { return get_item_copy<Item_func_day_end>(thd, mem_root, this); }
> +};
> +
> +
> +class Item_func_year_begin :public Item_datetimefunc
> +{
> +public:
> + Item_func_year_begin(THD *thd, Item *a): Item_datetimefunc(thd, a) {}
> + const char *func_name() const { return "year_begin"; }
> + bool get_date(MYSQL_TIME *res, ulonglong fuzzy_date);
> + Item *get_copy(THD *thd, MEM_ROOT *mem_root)
> + { return get_item_copy<Item_func_year_begin>(thd, mem_root, this); }
> +};
> +
> +
> +class Item_func_year_end :public Item_datetimefunc
> +{
> +public:
> + Item_func_year_end(THD *thd, Item *a): Item_datetimefunc(thd, a) {}
> + const char *func_name() const { return "year_end"; }
> + bool get_date(MYSQL_TIME *res, ulonglong fuzzy_date);
> + Item *get_copy(THD *thd, MEM_ROOT *mem_root)
> + { return get_item_copy<Item_func_year_end>(thd, mem_root, this); }
> +};
> +
> +
> #endif /* ITEM_TIMEFUNC_INCLUDED */
> diff --git a/sql/opt_range.cc b/sql/opt_range.cc
> index 3ea9f4e..e533608 100644
> --- a/sql/opt_range.cc
> +++ b/sql/opt_range.cc
> @@ -6998,6 +6998,54 @@ SEL_TREE *Item_bool_func::get_ne_mm_tree(RANGE_OPT_PARAM *param,
> }
>
>
> +SEL_TREE *Item_bool_func2_with_rev::get_mm_tree(RANGE_OPT_PARAM *param, Item **cond_ptr)
> +{
> + DBUG_ENTER("Item_bool_func2_with_rev::get_mm_tree");
> + DBUG_ASSERT(arg_count == 2);
> + SEL_TREE *ftree;
> + /*
> + Even if get_full_func_mm_tree_for_args(param, args[0], args[1]) will not
> + return a range predicate it may still be possible to create one
> + by reversing the order of the operands. Note that this only
> + applies to predicates where both operands are fields. Example: A
> + query of the form
> +
> + WHERE t1.a OP t2.b
> +
> + In this case, args[0] == t1.a and args[1] == t2.b.
> + When creating range predicates for t2,
> + get_full_func_mm_tree_for_args(param, args[0], args[1])
> + will return NULL because 'field' belongs to t1 and only
> + predicates that applies to t2 are of interest. In this case a
> + call to get_full_func_mm_tree_for_args() with reversed operands
> + may succeed.
> + */
> + if (opt_arg_count)
> + {
> + if (opt_arg_count == 2)
> + {
> + ftree= get_full_func_mm_tree_for_args(param, opt_args[0], opt_args[1]);
> + }
> + else if (opt_arg_count == 3)
> + {
> + Field *f= ((Item_field *) opt_args[0])->field;
> + ftree= get_mm_parts(param, f, Item_func::GE_FUNC, opt_args[1]);
> + if (ftree)
> + {
> + ftree= tree_and(param, ftree,
> + get_mm_parts(param, f,
> + Item_func::LE_FUNC, opt_args[2]));
> + }
> + }
> + }
> + if (!ftree &&
> + !(ftree= get_full_func_mm_tree_for_args(param, args[0], args[1])) &&
> + !(ftree= get_full_func_mm_tree_for_args(param, args[1], args[0])))
> + ftree= Item_func::get_mm_tree(param, cond_ptr);
> + DBUG_RETURN(ftree);
> +};
> +
> +
> SEL_TREE *Item_func_between::get_func_mm_tree(RANGE_OPT_PARAM *param,
> Field *field, Item *value)
> {
> diff --git a/sql/sql_select.cc b/sql/sql_select.cc
> index aa08420..51f6204 100644
> --- a/sql/sql_select.cc
> +++ b/sql/sql_select.cc
> @@ -4833,6 +4833,30 @@ is_local_field (Item *field)
> }
>
>
> +static Item_field *get_local_field (Item *field)
> +{
> + Item *ri= field->real_item();
> + return (ri->type() == Item::FIELD_ITEM
> + && !(field->used_tables() & OUTER_REF_TABLE_BIT)
> + && !((Item_field *)ri)->get_depended_from()) ? (Item_field *) ri : 0;
> +}
> +
> +
> +static Item_field *field_in_sargable_func(Item *fn)
> +{
> + fn= fn->real_item();
> +
> + if (fn->type() == Item::FUNC_ITEM &&
> + strcmp(((Item_func *)fn)->func_name(), "cast_as_date") == 0)
> +
> + {
> + Item_date_typecast *dt= (Item_date_typecast *) fn;
> + return get_local_field(dt->arguments()[0]);
> + }
> + return 0;
> +}
> +
> +
> /*
> In this and other functions, and_level is a number that is ever-growing
> and is different for the contents of every AND or OR clause. For example,
> @@ -5036,6 +5060,25 @@ Item_func_like::add_key_fields(JOIN *join, KEY_FIELD **key_fields,
> }
>
>
> +bool Item_bool_rowready_func2::add_extra_key_fields(THD *thd,
> + JOIN *join, KEY_FIELD **key_fields,
> + uint *and_level,
> + table_map usable_tables,
> + SARGABLE_PARAM **sargables)
> +{
> + Item_field *f;
> + if ((f= field_in_sargable_func(args[0])) && args[1]->const_item())
> + {
> + opt_arg_count= 1;
> + opt_args[0]= f;
> + if (((Item_func *) args[0])->create_reverse_func(
> + functype(), thd, args[1], &opt_arg_count, opt_args+1))
> + return true;
> + }
> + return false;
> +}
> +
> +
> void
> Item_bool_func2::add_key_fields_optimize_op(JOIN *join, KEY_FIELD **key_fields,
> uint *and_level,
> @@ -5043,19 +5086,28 @@ Item_bool_func2::add_key_fields_optimize_op(JOIN *join, KEY_FIELD **key_fields,
> SARGABLE_PARAM **sargables,
> bool equal_func)
> {
> + Item_field *f;
> /* If item is of type 'field op field/constant' add it to key_fields */
> - if (is_local_field(args[0]))
> + if ((f= get_local_field(args[0])))
> {
> - add_key_equal_fields(join, key_fields, *and_level, this,
> - (Item_field*) args[0]->real_item(), equal_func,
> + add_key_equal_fields(join, key_fields, *and_level, this, f, equal_func,
> args + 1, 1, usable_tables, sargables);
> }
> - if (is_local_field(args[1]))
> + else if ((f= get_local_field(args[1])))
> {
> - add_key_equal_fields(join, key_fields, *and_level, this,
> - (Item_field*) args[1]->real_item(), equal_func,
> + add_key_equal_fields(join, key_fields, *and_level, this, f, equal_func,
> args, 1, usable_tables, sargables);
> }
> + if (opt_arg_count == 2)
> + {
> + add_key_equal_fields(join, key_fields, *and_level, this, opt_args[0],
> + equal_func, opt_args+1, 1, usable_tables, sargables);
> + }
> + else if (opt_arg_count == 3)
> + {
> + add_key_equal_fields(join, key_fields, *and_level, this, opt_args[0],
> + false, opt_args+1, 2, usable_tables, sargables);
> + }
> }
>
>
> @@ -5521,8 +5573,14 @@ update_ref_and_keys(THD *thd, DYNAMIC_ARRAY *keyuse,JOIN_TAB *join_tab,
> if (cond)
> {
> KEY_FIELD *saved_field= field;
> +
> + if (cond->add_extra_key_fields(thd, join_tab->join, &end, &and_level,
> + normal_tables, sargables))
> + DBUG_RETURN(TRUE);
> +
> cond->add_key_fields(join_tab->join, &end, &and_level, normal_tables,
> sargables);
> +
> for (; field != end ; field++)
> {
>
> _______________________________________________
> commits mailing list
> commits(a)mariadb.org
> https://lists.askmonty.org/cgi-bin/mailman/listinfo/commits
--
BR
Sergei
--
Sergei Petrunia, Software Developer
MariaDB Corporation | Skype: sergefp | Blog: http://s.petrunia.net/blog
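To make the MDEV-8320 rewrite above concrete: for an equality like DATE(col) = const, the patch builds day_begin(const) and day_end(const) bounds so that the optimizer can evaluate the predicate as a range scan on an index over col. A rough Python sketch of those bounds (the function names mirror the patch; everything else is illustrative):

```python
from datetime import datetime, date

def day_begin(d: date) -> datetime:
    # 00:00:00.000000 of the given day, like Item_func_day_begin
    return datetime(d.year, d.month, d.day)

def day_end(d: date) -> datetime:
    # 23:59:59.999999 of the given day, like Item_func_day_end
    return datetime(d.year, d.month, d.day, 23, 59, 59, 999999)

d = date(2016, 9, 20)
lo, hi = day_begin(d), day_end(d)
# DATE(col) = '2016-09-20' holds exactly when col falls within [lo, hi],
# which an index range scan over col can evaluate.
assert lo <= datetime(2016, 9, 20, 12, 30) <= hi
assert not (lo <= datetime(2016, 9, 21) <= hi)
```

For GT/LE and LT/GE comparisons the patch accordingly needs only one of the two bounds, which is what create_reverse_func() above produces per comparison type.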
15 Sep '16
Hi, Alexey!
On Sep 13, Alexey Botchkov wrote:
> revision-id: 8cd65cd3e31aa6129573c6fbba9feb06c714e00c (mariadb-10.1.8-247-g8cd65cd)
> parent(s): 76a0ed2e03c6ae1ff791534e742c812d5e83ba63
> committer: Alexey Botchkov
> timestamp: 2016-09-13 15:48:03 +0400
> message:
>
> MDEV-9143 JSON_xxx functions.
>
> Library added to operate JSON format.
> SQL functions to handle JSON data added:
> required by SQL standard:
> JSON_VALUE
> JSON_QUERY
> JSON_EXISTS
> MySQL functions:
> JSON_VALID
> JSON_ARRAY
> JSON_ARRAY_APPEND
> JSON_CONTAINS_PATH
> JSON_EXTRACT
> JSON_OBJECT
> JSON_QUOTE
> JSON_MERGE
> Some MySQL functions are still missing, but there are no specific
> difficulties in implementing them too in the same manner.
> JSON_TABLE of the SQL standard is missing as it's not
> clear how and whether we'd like to implement it.
Good comment, thanks.
> include/CMakeLists.txt | 1 +
> include/json_lib.h | 338 ++++++++++
> mysql-test/r/func_json.result | 99 +++
> mysql-test/t/func_json.test | 44 ++
> sql/CMakeLists.txt | 2 +-
> sql/item.h | 15 +
> sql/item_create.cc | 351 ++++++++++
> sql/item_jsonfunc.cc | 898 ++++++++++++++++++++++++++
> sql/item_jsonfunc.h | 240 +++++++
> sql/item_xmlfunc.cc | 21 +-
> sql/item_xmlfunc.h | 5 -
> sql/sql_yacc.yy | 4 +-
> strings/CMakeLists.txt | 2 +-
> strings/ctype-ucs2.c | 19 +-
> strings/json_lib.c | 1426 +++++++++++++++++++++++++++++++++++++++++
> 15 files changed, 3433 insertions(+), 32 deletions(-)
Is the json library code the same as in the previous commit?
Or should I look at it again?
And *please* add unit tests for the json library.
> diff --git a/include/json_lib.h b/include/json_lib.h
> new file mode 100644
> index 0000000..592096f
> --- /dev/null
> +++ b/include/json_lib.h
...
> + do
> + {
> + // The parser has read next piece of JSON
> + // and set fields of j_eng structure accordingly.
> + // So let's see what we have:
Oops. Didn't notice that earlier, sorry.
No C++ comments in C files, please, some older compilers don't like that.
> diff --git a/mysql-test/r/func_json.result b/mysql-test/r/func_json.result
> new file mode 100644
> index 0000000..1ebe0b2
> --- /dev/null
> +++ b/mysql-test/r/func_json.result
> @@ -0,0 +1,99 @@
> +select json_value('{"key1":[1,2,3]}', '$.key1');
> +json_value('{"key1":[1,2,3]}', '$.key1')
> +NULL
I'd expect an error here, but the standard explicitly says:
5) If <JSON value error behavior> is not specified, then NULL ON ERROR
is implicit.
that is, returning NULL on errors is correct.
Would be nice to have a comment about it in the code (around
Item_func_json_value::val_str), otherwise someone might change this behavior to
return an error.
> +select json_value('{"key1": [1,2,3], "key1":123}', '$.key1');
> +json_value('{"key1": [1,2,3], "key1":123}', '$.key1')
> +123
Eh... Okay, so you show the *last* key, right?
SQL standard says it's implementation defined, so I suppose that's fine.
> +select json_object("ki", 1, "mi", "ya");
> +json_object("ki", 1, "mi", "ya")
> +{"ki": 1, "mi": "ya"}
add tests for json constructing functions (json_object,
json_array_append, etc) with key/values that need quoting or escaping.
I mean, tests to show that json_object (for example) is more complex
than concat().
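To illustrate the point with a quick sketch (Python's json module standing in for the server functions): building a JSON object requires escaping keys and values, which plain concatenation would get wrong:

```python
import json

# Keys/values containing quotes, backslashes or control characters must be
# escaped on serialization; naive string concatenation would emit broken JSON.
obj = json.dumps({'k"ey': 'va\\lue', 'nl': 'a\nb'})
assert '\\"' in obj      # the embedded double quote was escaped
assert '\\\\' in obj     # the backslash was escaped
assert '\\n' in obj      # the newline became a two-character escape
```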
> diff --git a/mysql-test/t/func_json.test b/mysql-test/t/func_json.test
> new file mode 100644
> index 0000000..a5b84e0c
> --- /dev/null
> +++ b/mysql-test/t/func_json.test
...
> +
> +select json_merge('string', 123);
how come this test is not in the result file?
> diff --git a/sql/item_jsonfunc.h b/sql/item_jsonfunc.h
> new file mode 100644
> index 0000000..aff19ee
> --- /dev/null
> +++ b/sql/item_jsonfunc.h
> +class json_path_with_flags
> +{
> +public:
> + json_path_t p;
> + bool constant;
> + bool parsed;
> + json_path_step_t *cur_step;
> + void set_constant_flag(bool s_constant)
> + {
> + constant= s_constant;
> + parsed= FALSE;
why set_constant_flag() resets 'parsed'?
Better to set parsed=false in the constructor.
> + }
> +};
> +
> diff --git a/sql/item_jsonfunc.cc b/sql/item_jsonfunc.cc
> new file mode 100644
> index 0000000..5649989
> --- /dev/null
> +++ b/sql/item_jsonfunc.cc
...
> +/*
> + Appends ASCII string to the String object taking it's charset in
> + consideration.
> +*/
> +static int st_append_ascii(String *s, const char *ascii, uint ascii_len)
Why did you do that, instead of using String::append() ?
I'd use String::append(), and if that would happen to be too slow, I'd
implement String::append_ascii(), not something json-specific.
...
> +/*
> + Appends arbitrary String to the JSON string taking charsets in
> + consideration.
> +*/
> +static int st_append_escaped(String *s, const String *a)
> +{
> + /*
> + In the worst case one character from the 'a' string
> + turns into '\uXXXX\uXXXX' which is 12.
how come? Add an example, please. Like "for example, character x'1234'
in the charset A becomes '\u1234\u5678' if the JSON string is in the charset B"
> + */
> + int str_len= a->length() * 12 * s->charset()->mbmaxlen /
> + a->charset()->mbminlen;
> + if (!s->reserve(str_len, 1024) &&
> + (str_len=
> + json_escape(a->charset(), (uchar *) a->ptr(), (uchar *)a->end(),
> + s->charset(),
> + (uchar *) s->end(), (uchar *)s->end() + str_len)) > 0)
> + {
> + s->length(s->length() + str_len);
> + return 0;
> + }
> +
> + return a->length();
> +}
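As a concrete illustration of the worst case mentioned in the comment above (using Python's json module as a stand-in): a single non-BMP character escapes to a UTF-16 surrogate pair, i.e. twelve characters of output:

```python
import json

# U+1F600 is a single character, but with ASCII-only escaping it becomes
# the surrogate pair '\ud83d\ude00' -- 12 characters, the worst case.
escaped = json.dumps('\U0001F600', ensure_ascii=True)
inner = escaped.strip('"')
assert inner == '\\ud83d\\ude00'
assert len(inner) == 12
```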
...
> +longlong Item_func_json_exists::val_int()
> +{
> + json_engine_t je;
> + String *js= args[0]->val_str(&tmp_js);
> +
> + if (!path.parsed)
> + {
> + String *s_p= args[1]->val_str(&tmp_path);
> + if (s_p &&
> + json_path_setup(&path.p, s_p->charset(), (const uchar *) s_p->ptr(),
> + (const uchar *) s_p->ptr() + s_p->length()))
1. why wouldn't you do it in fix_length_and_dec for constant paths?
2. please add tests when path is not constant
> + goto err_return;
> + path.parsed= path.constant;
> + }
> +
> + if ((null_value= args[0]->null_value || args[1]->null_value))
> + {
> + null_value= 1;
> + return 0;
> + }
> +
> + null_value= 0;
> + json_scan_start(&je, js->charset(),(const uchar *) js->ptr(),
> + (const uchar *) js->ptr() + js->length());
> +
> + path.cur_step= path.p.steps;
> + if (json_find_value(&je, &path.p, &path.cur_step))
> + {
> + if (je.s.error)
> + goto err_return;
> + return 0;
> + }
> +
> + return 1;
> +
> +err_return:
> + null_value= 1;
> + return 0;
> +}
...
> +longlong Item_func_json_contains_path::val_int()
> +{
> + String *js= args[0]->val_str(&tmp_js);
> + json_engine_t je;
> + uint n_arg;
> + longlong result;
> +
> + if ((null_value= args[0]->null_value))
> + return 0;
...
> + result= !mode_one;
> + for (n_arg=2; n_arg < arg_count; n_arg++)
> + {
> + json_path_with_flags *c_path= paths + n_arg - 2;
> + if (!c_path->parsed)
> + {
> + String *s_p= args[n_arg]->val_str(tmp_paths+(n_arg-2));
> + if (s_p &&
> + json_path_setup(&c_path->p,s_p->charset(),(const uchar *) s_p->ptr(),
> + (const uchar *) s_p->ptr() + s_p->length()))
> + goto error;
> + c_path->parsed= TRUE;
eh? not c_path->parsed= c_path->constant ?
> + }
> +
> + json_scan_start(&je, js->charset(),(const uchar *) js->ptr(),
> + (const uchar *) js->ptr() + js->length());
> +
> + c_path->cur_step= c_path->p.steps;
> + if (json_find_value(&je, &c_path->p, &c_path->cur_step))
> + {
> + /* Path wasn't found. */
> + if (je.s.error)
> + goto error;
> +
> + if (!mode_one)
> + {
> + result= 0;
> + break;
> + }
> + }
> + else if (mode_one)
> + {
> + result= 1;
> + break;
> + }
> + }
> +
> +
> + return result;
> +
> +error:
> + null_value= 1;
> + return 0;
> +}
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org
14 Sep '16
Hi Vicențiu,
> item_windowfunc.cc|168| peer_tracker = new Group_bound_tracker(thd, window_spec->order_list);
> item_windowfunc.cc|176| peer_tracker = new Group_bound_tracker(thd, window_spec->order_list);
> item_windowfunc.cc|218| peer_tracker = new Group_bound_tracker(thd, window_spec->order_list);
Coding style violation: Use peer_tracker=.
> sql_window.cc:
> // TODO check if this can be placed outside the loop.
> err= tbl->file->ha_update_row(tbl->record[1], tbl->record[0]);
Yes, it should be. Can you try it? (and if it doesn't work that's surprising
and we could discuss why..)
> grep fech_prev_row *.h *.cc
> sql_window.cc|633| bool fetch_prev_row()
This function is not used anymore?
> static bool is_computed_with_remove(Item_sum::Sumfunctype sum_func)
What is the difference between this function and Item_sum::supports_removal ?
Does one imply/require the other?
> sql_window.h
> /*
> This handles computation of one window function.
>
> Currently, we make a separate filesort() call for each window function.
> */
>
> class Window_func_runner : public Sql_alloc
Both statements in the comment are not true anymore, right?
Can you update the comment?
> sql_window.cc
> /*
> Regular frame cursors add or remove values from the sum functions they
> manage. By calling this method, they will only perform the required
> movement within the table, but no adding/removing will happen.
> */
> void set_no_action()
> {
> perform_no_action= true;
> }
Is there any difference between perform_no_action=true and
Frame_cursor::sum_functions being an empty list? Please clarify.
> sql_window.cc
> /*
> A class that owns cursor objects associated with a specific window function.
> */
> class Cursor_manager
>
But as I understand, your recent changes make window functions share the frame
cursors.
That is, window frame has its cursors, and all window functions that are
sharing the frame, share the cursors.
Is Cursor_manager still needed?
BR
Sergei
--
Sergei Petrunia, Software Developer
MariaDB Corporation | Skype: sergefp | Blog: http://s.petrunia.net/blog
Hello Everyone!
This thread is about solving this issue
https://jira.mariadb.org/browse/MDEV-10220.
I made a Google doc:
https://docs.google.com/document/d/1sor3nVGz2v9Jm4-MDDFfMSn7d7xOip7Qio_T0Aq…
in which I explain my idea, which I think should solve this problem.
Please have a look and tell me what you think?
Regards
sachin
Re: [Maria-developers] [Commits] 49b2502: Fix assertion/hang in read_init_file()
by Kristian Nielsen 13 Sep '16
Monty,
My latest push uncovered a problem in the 10.2 tree; however the problem was
there before, just not triggered by the testsuite. I pushed the below patch
to fix the test failures in Buildbot, but I am not sure it is the correct
solution, so please check it.
It appears the problem was introduced with this patch:
commit 3d4a7390c1a94ef6e07b04b52ea94a95878cda1b
Author: Monty <monty(a)mariadb.org>
Date: Mon Feb 1 12:45:39 2016 +0200
MDEV-6150 Speed up connection speed by moving creation of THD to new thread
diff --git a/sql/sql_parse.cc b/sql/sql_parse.cc
index 84f0c63..355f62d 100644
--- a/sql/sql_parse.cc
+++ b/sql/sql_parse.cc
@@ -844,12 +844,13 @@ void do_handle_bootstrap(THD *thd)
delete thd;
#ifndef EMBEDDED_LIBRARY
- thread_safe_decrement32(&thread_count);
+ DBUG_ASSERT(thread_count == 1);
in_bootstrap= FALSE;
-
- mysql_mutex_lock(&LOCK_thread_count);
- mysql_cond_broadcast(&COND_thread_count);
- mysql_mutex_unlock(&LOCK_thread_count);
+ /*
+ dec_thread_count will signal bootstrap() function that we have ended as
+ thread_count will become 0.
+ */
+ dec_thread_count();
my_thread_end();
pthread_exit(0);
#endif
The problem is that do_handle_bootstrap is not only used for bootstrap - it
is also used for read_init_file(). In the latter case, it is _not_
guaranteed that thread_count will drop to zero. For example, the binlog
background thread might be running. You can see this in 10.2 (before my
just-pushed fix) by running
./mtr main.init_file --mysqld=--log-bin=mysql-bin
It will assert, or hang if assert is disabled.
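The lost wakeup can be modelled with a small Python sketch (the names are mine, not the server's): if the exiting thread signals the condition only when the counter reaches zero, a lingering background thread keeps the count above zero, so the notification never happens and the waiter times out:

```python
import threading

lock = threading.Lock()
cond = threading.Condition(lock)

def make_exiter(state, signal_only_at_zero):
    def thread_exit():
        with lock:
            state['count'] -= 1
            # buggy variant only notifies when the count hits zero
            if not signal_only_at_zero or state['count'] == 0:
                cond.notify_all()
    return thread_exit

def run(signal_only_at_zero):
    # count == 2: the exiting bootstrap thread plus a background thread
    # (e.g. the binlog background thread) that never exits here.
    state = {'count': 2}
    t = threading.Thread(target=make_exiter(state, signal_only_at_zero))
    with lock:
        t.start()
        woken = cond.wait(timeout=0.5)   # False means we timed out
    t.join()
    return woken

assert run(signal_only_at_zero=True) is False    # wakeup lost: waiter times out
assert run(signal_only_at_zero=False) is True    # unconditional broadcast wakes it
```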
To get rid of the failures in Buildbot, I pushed the below patch. I think
the patch is perfectly safe. However, maybe there is a better way - it seems
a bit messy, mixing bootstrap and read_init_file() and having these
if (opt_bootstrap) conditions. And maybe there are other problems caused by
this as well that I did not notice, as I am not very familiar with how this
thread count checking / bootstrap / init file handling works.
Thanks,
- Kristian.
Kristian Nielsen <knielsen(a)knielsen-hq.org> writes:
> revision-id: 49b25020ef512866751f192b91d8439670d0430b (mariadb-10.1.8-257-g49b2502)
> parent(s): be2b833c426b420073c50564125049e2b4a95e8b
> committer: Kristian Nielsen
> timestamp: 2016-09-09 18:09:59 +0200
> message:
>
> Fix assertion/hang in read_init_file()
>
> If there are other threads running (for example binlog background
> thread), then the thread count may not drop to zero at the end of
> do_handle_bootstrap(). This caused an assertion and missing wakeup of
> the main thread. The missing wakeup is because THD::~THD() only
> signals the COND_thread_count mutex when the number of threads drops
> to zero.
>
> Signed-off-by: Kristian Nielsen <knielsen(a)knielsen-hq.org>
>
> ---
> sql/sql_parse.cc | 13 ++++++++++++-
> 1 file changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/sql/sql_parse.cc b/sql/sql_parse.cc
> index 7263082..6baff31 100644
> --- a/sql/sql_parse.cc
> +++ b/sql/sql_parse.cc
> @@ -1082,9 +1082,20 @@ void do_handle_bootstrap(THD *thd)
> end:
> in_bootstrap= FALSE;
> delete thd;
> + if (!opt_bootstrap)
> + {
> + /*
> + We need to wake up main thread in case of read_init_file().
> + This is not done by THD::~THD() when there are other threads running
> + (binlog background thread, for example). So do it here again.
> + */
> + mysql_mutex_lock(&LOCK_thread_count);
> + mysql_cond_broadcast(&COND_thread_count);
> + mysql_mutex_unlock(&LOCK_thread_count);
> + }
>
> #ifndef EMBEDDED_LIBRARY
> - DBUG_ASSERT(thread_count == 0);
> + DBUG_ASSERT(!opt_bootstrap || thread_count == 0);
> my_thread_end();
> pthread_exit(0);
> #endif
Hi Sergei,
Weekly Report for 4th week of gsoc
1. Field properties is_row_hash and field_visibility are successfully saved to
and retrieved from the frm, using the extra2 space.
2. Some tests added.
3. Solved the error when there is another primary key (it used to accept
duplicates in this case).
4. Added hidden in the parser.
5. Identified the memory leaks: the first is because I did not free the
malloc'ed db_row_hash string; I am still searching for the second one.
Work for this week:
1. First, solve the memory leak problem.
2. Work on FULL_HIDDEN_FIELDS.
3. In mysql_prepare_create_table I am using an iterator; it would be better
if I could add the custom field at the point where the error is reported,
so I would not have to use the iterator, as you suggested.
4. Rename the hash field automatically in case of a clash.
On Thu, Jun 16, 2016 at 11:46 PM, Sergei Golubchik <serg(a)mariadb.org> wrote:
> Hi, Sachin!
>
> On Jun 15, Sachin Setia wrote:
> >
> > But the major problem is:-
> > Consider this case
> >
> > create table tbl(abc int primary key,xyz blob unique);
> >
> > In this case , second key_info will have one user_defined_key_parts but
> two
> > ext_key_parts
> > second key_part refers to primary key.
> > because of this ha_index_read_idx_map always return HA_ERR_KEY_NOT_FOUND
> > I am trying to solve this problem.
>
> I've seen you solved this, but I do not understand the problem (and so I
> cannot understand the fix either).
The problem was the following; consider this:
create table tbl(abc int primary key , xyz blob unique);
insert into tbl value(1,12);
insert into tbl value(2,12); # no error , details in commit comment
https://github.com/MariaDB/server/commit/baecc73380084c61b9323a30f3e2597717…
> Please, try to add a test case for
> the problem you're fixing. In the same commit, preferrably.
>
Now you can still commit a test case for this problem and your fix,
> then, I hope, I'll be able to understand better what the problem was.
>
> Regards,
> Sergei
> Chief Architect MariaDB
> and security(a)mariadb.org
>

Re: [Maria-developers] [Commits] d6f760d: MDEV-10296 - Multi-instance table cache
by Sergei Golubchik 09 Sep '16
Hi, Sergey!
On Sep 08, Sergey Vojtovich wrote:
> revision-id: d6f760d344f45177936c72cb65ba1ffff596710e (mariadb-10.1.8-230-gd6f760d)
> parent(s): 9f7e77b5414b5b2ef9b5f50f131966821fefb9c7
> committer: Sergey Vojtovich
> timestamp: 2016-09-08 16:02:49 +0400
> message:
>
> MDEV-10296 - Multi-instance table cache
>
> Table cache instances autosizing.
Looks fairly simple, good :)
Code-wise it's ok, I've added a comment in Jira.
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org

Re: [Maria-developers] [Commits] 9f7e77b: MDEV-10296 - Multi-instance table cache
by Sergei Golubchik 09 Sep '16
Hi, Sergey!
On Sep 07, Sergey Vojtovich wrote:
> revision-id: 9f7e77b5414b5b2ef9b5f50f131966821fefb9c7 (mariadb-10.1.8-229-g9f7e77b)
> parent(s): 58634f6e50b40b28533a03a1afcb68f139937351
> committer: Sergey Vojtovich
> timestamp: 2016-09-07 12:47:43 +0400
> message:
>
> MDEV-10296 - Multi-instance table cache
>
> - simplified access to per-share free tables list
> - explain paddings
>
> @@ -43,7 +52,9 @@ struct TDC_element
> for this share.
> */
> All_share_tables_list all_tables;
> - char pad[CPU_LEVEL1_DCACHE_LINESIZE]; // free_tables follows this immediately
> + /** Avoid false sharing between TDC_element and free_tables */
> + char pad[CPU_LEVEL1_DCACHE_LINESIZE];
> + Share_free_tables free_tables[0];
> };
I vaguely remember that zero-length arrays are not supported
by some compilers (they're a gcc extension, after all).
So I generally use [1] in cases like this, not [0].
Otherwise ok.
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org

Re: [Maria-developers] [Commits] 66b4459: MDEV-9143 JSON_xxx functions.
by Sergei Golubchik 09 Sep '16
Hi, Alexey!
On Sep 05, Alexey Botchkov wrote:
> revision-id: 66b4459e38f5df4913ecce1c9e3d71c7afa7860d (mariadb-10.2.1-21-g66b4459)
> parent(s): 31a8cf54c8a7913338480a0571feaf32143b5f64
> committer: Alexey Botchkov
> timestamp: 2016-09-05 14:03:33 +0400
> message:
>
> MDEV-9143 JSON_xxx functions.
>
> Library with JSON-related functions added.
I've just briefly looked through the code, assuming it's mostly the same
as in the previous commit. Looks much better comment- and naming-wise,
thanks.
Could you please add some unit tests for it into unittest/strings?
In a separate commit, if you'd like, that doesn't matter.
Regards,
Sergei
Chief Architect MariaDB
and security(a)mariadb.org

Re: [Maria-developers] [Commits] 0a86a91: MDEV-10765: Wrong result - query does not retrieve values from default partition on a table partitioned by list columns
by Sergey Petrunia 09 Sep '16
Hi Sanja,
> revision-id: 0a86a915842d268477c5febd8481263f00d6c792 (mariadb-10.1.8-242-g0a86a91)
> parent(s): effb65bc863da0f1115e16ef5f11d11a13cdc7a0
> committer: Oleksandr Byelkin
> timestamp: 2016-09-08 19:43:09 +0200
> message:
>
> MDEV-10765: Wrong result - query does not retrieve values from default partition on a table partitioned by list columns
>
> Partial matches should be treated as non-exact ones.
>
So I'm running this example:
create table t14n
(
a int not null,
b int not null,
c int
)
partition by list columns(a,b)
(
partition p1 values in ((10,10)),
partition p2 values in ((10,20)),
partition p3 values in ((10,30)),
partition p4 values in ((10,40)),
partition p5 values in ((10,50))
);
insert into t14n values
(10,10,1234),
(10,20,1234),
(10,30,1234),
(10,40,1234),
(10,50,1234);
explain partitions
select * from t14n
where a>=10 and (a <=10 and b <=30);
and then I get:
#1 0x0000555555eb88ef in get_part_iter_for_interval_cols_via_map (part_info=0x7fff90046208, is_subpart=false, store_length_array=0x7ffff436a0a0, min_value=0x7fff90069518 "\n", max_value=0x7fff90069528 "\n", min_len=8, max_len=4, flags=0, part_iter=0x7ffff436a858) at /home/psergey/dev-git/10.2/sql/sql_partition.cc:7751
This means I get all the way to the
memcmp(min_value, max_value, min_len)
call, where min_len=8, max_len=4, which means we're comparing garbage.
Please add a check that min_len==max_len.
Ok to push after this is addressed.
On Thu, Sep 08, 2016 at 07:43:10PM +0200, Oleksandr Byelkin wrote:
> diff --git a/sql/sql_partition.cc b/sql/sql_partition.cc
> index 54396b9..24dff23 100644
> --- a/sql/sql_partition.cc
> +++ b/sql/sql_partition.cc
> @@ -7723,6 +7723,7 @@ int get_part_iter_for_interval_cols_via_map(partition_info *part_info,
> bool can_match_multiple_values;
> uint32 nparts;
> get_col_endpoint_func UNINIT_VAR(get_col_endpoint);
> + uint full_length= 0;
> DBUG_ENTER("get_part_iter_for_interval_cols_via_map");
>
> if (part_info->part_type == RANGE_PARTITION)
> @@ -7740,9 +7741,13 @@ int get_part_iter_for_interval_cols_via_map(partition_info *part_info,
> else
> assert(0);
>
> + for (uint32 i= 0; i < part_info->num_columns; i++)
> + full_length+= store_length_array[i];
> +
> can_match_multiple_values= ((flags &
> (NO_MIN_RANGE | NO_MAX_RANGE | NEAR_MIN |
> NEAR_MAX)) ||
> + (min_len != full_length) ||
> memcmp(min_value, max_value, min_len));
> DBUG_ASSERT(can_match_multiple_values || (flags & EQ_RANGE) || flags == 0);
> if (can_match_multiple_values && part_info->has_default_partititon())
> _______________________________________________
> commits mailing list
> commits(a)mariadb.org
> https://lists.askmonty.org/cgi-bin/mailman/listinfo/commits
--
BR
Sergei
--
Sergei Petrunia, Software Developer
MariaDB Corporation | Skype: sergefp | Blog: http://s.petrunia.net/blog

[Maria-developers] Contents of information_schema.partitions.PARTITION_DESCRIPTION for DEFAULT partitions?
by Sergey Petrunia 08 Sep '16
Hi Sanja,
I'm looking at I_S description for the DEFAULT partition and I see this:
PARTITION_DESCRIPTION: (MAXVALUE,MAXVALUE)
Is this normal?
The full example below:
show create table t10;
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table |
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| t10 | CREATE TABLE `t10` (
`i` varchar(10) DEFAULT NULL,
`j` varchar(10) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50500 PARTITION BY LIST COLUMNS(i,j)
(PARTITION p1 VALUES IN (('10','10')) ENGINE = InnoDB,
PARTITION p2 DEFAULT ENGINE = InnoDB) */ |
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.01 sec)
MariaDB [j4]> select * from information_schema.partitions where table_name='t10'\G
*************************** 1. row ***************************
TABLE_CATALOG: def
TABLE_SCHEMA: j4
TABLE_NAME: t10
PARTITION_NAME: p1
SUBPARTITION_NAME: NULL
PARTITION_ORDINAL_POSITION: 1
SUBPARTITION_ORDINAL_POSITION: NULL
PARTITION_METHOD: LIST COLUMNS
SUBPARTITION_METHOD: NULL
PARTITION_EXPRESSION: `i`,`j`
SUBPARTITION_EXPRESSION: NULL
PARTITION_DESCRIPTION: ('10','10')
TABLE_ROWS: 0
AVG_ROW_LENGTH: 0
DATA_LENGTH: 16384
MAX_DATA_LENGTH: NULL
INDEX_LENGTH: 0
DATA_FREE: 0
CREATE_TIME: 2016-09-08 21:22:45
UPDATE_TIME: NULL
CHECK_TIME: NULL
CHECKSUM: NULL
PARTITION_COMMENT:
NODEGROUP: default
TABLESPACE_NAME: NULL
*************************** 2. row ***************************
TABLE_CATALOG: def
TABLE_SCHEMA: j4
TABLE_NAME: t10
PARTITION_NAME: p2
SUBPARTITION_NAME: NULL
PARTITION_ORDINAL_POSITION: 2
SUBPARTITION_ORDINAL_POSITION: NULL
PARTITION_METHOD: LIST COLUMNS
SUBPARTITION_METHOD: NULL
PARTITION_EXPRESSION: `i`,`j`
SUBPARTITION_EXPRESSION: NULL
PARTITION_DESCRIPTION: (MAXVALUE,MAXVALUE)
TABLE_ROWS: 0
AVG_ROW_LENGTH: 0
DATA_LENGTH: 16384
MAX_DATA_LENGTH: NULL
INDEX_LENGTH: 0
DATA_FREE: 0
CREATE_TIME: 2016-09-08 21:22:45
UPDATE_TIME: NULL
CHECK_TIME: NULL
CHECKSUM: NULL
PARTITION_COMMENT:
NODEGROUP: default
TABLESPACE_NAME: NULL
2 rows in set (0.01 sec)
BR
Sergei
--
Sergei Petrunia, Software Developer
MariaDB Corporation | Skype: sergefp | Blog: http://s.petrunia.net/blog

Re: [Maria-developers] [Maria-discuss] Known limitation with TokuDB in Read Free Replication & parallel replication ?
by Kristian Nielsen 08 Sep '16
Rich Prohaska <prohaska7(a)gmail.com> writes:
> The group lock retry algorithm is on the https://github.com/
> prohaska7/tokuft/tree/killwait branch. Its unit tests pass. Needed to add
> some test only functions to get reproducible behaviour.
>
> The group lock retry algorithm is integrated into my mariadb server on the
> https://github.com/prohaska7/mariadb-server/tree/toku_opr3 branch. Ran
> sysbench oltp on a small 1000 row table successfully.
Looks great, thanks! It passes tests for me, as well.
> I am going to write up the tokudb lock tree races that were fixed and email
> to George Lorch @ Percona so that this code can be integrated into
> PerconaFT.
Ok, sounds great!
I will push the replication part of the patch to MariaDB 10.1, then (the
async deadlock kill).
From the git history, it looks like new TokuDB releases (from Percona
Server) are regularly merged into MariaDB 10.1, so I'm thinking that we can
get your TokuDB/tokuft changes into MariaDB that way, in the next regular
TokuDB merges. I will check it and add any missing MariaDB stuff, if it is
not part of the changes that go upstream.
Does that sound ok to you?
> Removed the lock wait for report from the lock request start method since
> it is redundant with the report that will occur when the lock request is
> retried in the lock request wait method.
The reason I added this reporting originally was for the case where a
deadlock is detected.
If transaction T1 tries to get a lock with lock_request::start(), but a
deadlock is detected (DB_LOCK_DEADLOCK is returned), the lock request wait
method will not be called (if I understand the code correctly), so the
reporting in lock_request::start() was not redundant.
The rationale is that if T1 gets aborted due to a deadlock with T2, and T2
is later in the replication commit order, then when T1 is run again by
replication, it will almost certainly conflict with T2 again. So we might as
well get T2 killed early (by doing the report already in start()).
But on the other hand, things will work correctly without any reporting in
start(), and with only a slight delay in case of a conflict. And the
assumption in optimistic parallel replication is that conflicts will be
relatively rare. So I'm fine without reporting in start(), as you have in
your current code.
Looks like we are close now to having optimistic parallel replication
working with TokuDB. Thanks for all your work on this, Rich!
- Kristian.

07 Sep '16
Hello Sergey and Igor,
You recently closed MDEV-10057 and MDEV-10058, but forgot to add tests.
- MDEV-10057 does not have tests at all.
- MDEV-10058 has only the parser related test,
but does not have this test:
EXPLAIN SELECT * FROM (WITH a AS (SELECT * FROM t1) SELECT * FROM t2
NATURAL JOIN t3) AS d1;
Can you please add the missing tests?
Adding patches without tests makes merging painful.
You never know if a merge (especially an automatic one)
breaks something.
Thanks!
Hi Sergei,
Please review a patch for MDEV-8909.
It removes some old remainders that are not needed anymore.
Thanks.