On Tue, 18 Oct 2022 at 01:01, Sergei Golubchik <serg@mariadb.org> wrote:
diff --git a/sql/log_event_server.cc b/sql/log_event_server.cc
index 422d496d5c3..25705d13641 100644
--- a/sql/log_event_server.cc
+++ b/sql/log_event_server.cc
@@ -6062,7 +6065,7 @@ int Rows_log_event::do_apply_event(rpl_group_info *rgi)
     */
     sql_mode_t saved_sql_mode= thd->variables.sql_mode;
     if (!is_auto_inc_in_extra_columns())
-      thd->variables.sql_mode= (rpl_data.is_online_alter() ? saved_sql_mode :0)
+      thd->variables.sql_mode= (m_online_alter ? saved_sql_mode :0)
                                | MODE_NO_AUTO_VALUE_ON_ZERO;
wouldn't it be more correct to use saved_sql_mode also in normal replication?
Using MODE_NO_AUTO_VALUE_ON_ZERO was a fix for Bug #56662 "Assertion failed: next_insert_id == 0, file .\handler.cc". See commit d5bf6b8aa8bb.
No, I didn't mean to ask "why MODE_NO_AUTO_VALUE_ON_ZERO", I meant to ask why not
thd->variables.sql_mode= saved_sql_mode | MODE_NO_AUTO_VALUE_ON_ZERO;
that is, why not "use saved_sql_mode also in normal replication"?
Oh, sorry for the misinterpretation. I read what I wanted to read :)

Meanwhile I had time to think. IMO it should be this way in replication. It is a background process, right? So we have no control over sql_mode on the replica nodes (if I understand the design correctly), and we should make it as permissive as possible. I have no opinion on NO_ZERO_DATE and other "View" (in terms of Model-View-Controller) modes, like EMPTY_STRING_IS_NULL: if they don't affect storage, they are unimportant. For the rest, we should allow as much execution, and as many conversions, as possible, which, as I recall, is achieved by zeroing the mode; MODE_NO_AUTO_VALUE_ON_ZERO is an exception that was found once.

On the contrary, in ALTER TABLE it is clear that we should preserve the user mode, as we have full control over it, and we want to preserve the usual ALTER TABLE behavior. Though I suppose there is a known issue that we should handle, so we have to override MODE_NO_AUTO_VALUE_ON_ZERO. This is very likely, as there's also such code in the beginning of copy_data_between_tables:

    if (def->field)
    {
      if (*ptr == to->next_number_field)
      {
        auto_increment_field_copied= TRUE;
        /*
          If we are going to copy contents of one auto_increment column to
          another auto_increment column it is sensible to preserve zeroes.
          This condition also covers case when we are don't actually alter
          auto_increment column.
        */
        if (def->field == from->found_next_number_field)
          thd->variables.sql_mode|= MODE_NO_AUTO_VALUE_ON_ZERO;
      }
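To restate the behavior argued for above as code (a minimal sketch using the names from the diff at the top of the thread; not a new change):

    sql_mode_t saved_sql_mode= thd->variables.sql_mode;
    if (!is_auto_inc_in_extra_columns())
      thd->variables.sql_mode=
          (m_online_alter ? saved_sql_mode  /* online ALTER: keep the user's mode */
                          : 0)              /* normal replication: zero it, be permissive */
          | MODE_NO_AUTO_VALUE_ON_ZERO;     /* the one known exception: preserve explicit zeroes */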
+  uint p= 0;
+  for (;p < key->ext_key_parts; p++)
+  {
+    Field *f= key->key_part[p].field;
+    uint non_deterministic_default= f->default_value &&
+      f->default_value->flags | VCOL_NOT_STRICTLY_DETERMINISTIC;
+
+    int next_number_field= f->table->next_number_field ?
+      f->table->next_number_field->field_index : -1;
eh? 1. why is it *in the loop*? 2. why not simply
|| f == f->table->next_number_field
or, to be extra safe
|| f->table->field[f->field_index] == f->table->next_number_field
Thanks! Turns out there's no need for extra safety, it just works.
by "extra safety" I meant the case when
f != f->table->field[f->field_index]
(here f= key->key_part[p].field). This is the case for varchar/blob prefix keys. But as they cannot be auto-increment anyway, the direct comparison should work, I suppose.
Hmm, thanks for noticing, I really didn't consider this blob case.
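For completeness, what the check reduces to after this exchange might look roughly like this (a sketch with a hypothetical local name; f is the loop variable from the hunk quoted earlier):

    /* Prefix key parts (varchar/blob) reference a separate Field object, so f
       may differ from f->table->field[f->field_index]; but such columns cannot
       be AUTO_INCREMENT, so the direct pointer comparison is safe. */
    bool key_part_is_autoinc= (f == f->table->next_number_field);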
+
+    if (f->field_index >= master_columns
I don't think this condition ^^^ is needed. Same as elsewhere, if f->field_index < master_columns but !bitmap_is_set, does it mean the keyseg is ok? I don't think so.
It *is* needed.
An extra field with a default can be used in the key lookup. A usual field should have an explicit value; if it doesn't, that means it wasn't sent (MINIMAL/NOBLOB) and we don't know what the real value is.
I don't understand how this answers my question. Let me rephrase. What will break if you remove the condition
f->field_index >= master_columns &&
?
MINIMAL/NOBLOB replication will break. You can remove the line and see what breaks. I think this one can fail, for example:

create or replace table t (a int primary key, b int default (a+1));
insert into t values (10, 10),(20, 20),(30, 30);
--sync_slave_with_master
alter table t drop primary key, add c int default(a), add primary key(b);
--connection master
delete from t where a = 20;
update t set a = a + 2 where a = 10;
--sync_slave_with_master
select * from t;
--echo # Cleanup
--connection master
drop table t;

Here, we add a primary key on the field b, which is a master column, though it wasn't replicated. Even though it has a deterministic default, it can't be used, or the values would not be 10, 30 but 13, 31.
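To restate the intended per-keyseg condition as code (a rough sketch based on this discussion; the bitmap name used here is an assumption, not the patch):

    /* A key part is usable for the lookup if either the master sent a value
       for the field, or the field is a slave-only extra column (index beyond
       the columns present on the master) with a strictly deterministic
       default that we can recompute locally. */
    bool keyseg_usable=
        bitmap_is_set(cols, f->field_index) ||  /* value present in the event (assumed bitmap) */
        (f->field_index >= master_columns &&    /* extra slave-side column ...                 */
         !non_deterministic_default);           /* ... with a deterministic default            */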
@@ -8041,33 +8107,38 @@ int Rows_log_event::find_key()
   {
     if (!m_table->s->keys_in_use.is_set(i))
       continue;
+    parts_suit= key_parts_suit_event(key);
+    if (!parts_suit)
+      continue;
     /*
       We cannot use a unique key with NULL-able columns to uniquely identify
       a row (but we can still select it for range scan below if nothing better
       is available).
     */
-    if ((key->flags & (HA_NOSAME | HA_NULL_PART_KEY)) == HA_NOSAME)
+    if ((key->flags & (HA_NOSAME | HA_NULL_PART_KEY)) == HA_NOSAME &&
+        parts_suit == key->user_defined_key_parts)
this doesn't make use of ext key parts. Should be something like
if (parts_suit == key->ext_key_parts || (parts_suit >= key->user_defined_key_parts && (key->flags & ....)))
why not just use ext_key_parts (which I'm using now btw)?
Now, you mean, in the next patch? I'll check it out.
The logic of my if() above is "either it uses the extended key (which is always truly unique) - or it uses only user-specified key parts and has HA_NOSAME but no HA_NULL_PART_KEY".
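Spelled out per that description (a sketch filling in the flags check elided as "...." in the snippet above; not the committed code), the condition would read roughly:

    if (parts_suit == key->ext_key_parts ||
        (parts_suit >= key->user_defined_key_parts &&
         (key->flags & (HA_NOSAME | HA_NULL_PART_KEY)) == HA_NOSAME))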
Why can't we use part of the extended key in the same way? I don't know a good example, but assume it could be long uniques. As long as the index algorithm is BTREE, I'd expect it to work. And for other indexes, like HASH, I think there was no support anyway.

--
Yours truly,
Nikita Malyavin