Hi, Aleksey! On Apr 04, Aleksey Midenkov wrote:
Hello, Sergei!
In the unversioned -> versioned scenario in the code below, it first gets to Set time 4 and creates some records (on the slave); by then the seconds on the slave have increased (X+1), while on the master the seconds have not yet increased (still X). Then we get to Set time 3 and reset the time on the slave to X.0, so we go back in time and all the stored records with timestamp X.n end up in the future. The 'n' came from ++system_time.sec_part in Set time 4.
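Roughly, what happens is this (a toy sketch only; system_time.sec_part is the real field name, everything else here is made up for illustration):

  #include <cstdio>

  int main() {
    const unsigned long X = 100;                     // master's (frozen) second
    struct { unsigned long sec, sec_part; } system_time = {X, 0};

    // "Set time 4": rows are stored with timestamps X.0, X.1, X.2 ...
    unsigned long stored[3];
    for (unsigned long &n : stored) {
      n = system_time.sec_part;
      ++system_time.sec_part;
    }

    // "Set time 3": the time on the slave is reset back to X.0 ...
    system_time.sec_part = 0;

    // ... so the rows with timestamp X.n (n > 0) are now in the future.
    for (unsigned long n : stored)
      std::printf("stored %lu.%06lu vs current %lu.%06lu\n",
                  X, n, system_time.sec, system_time.sec_part);
  }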
That's not how it is supposed to work. As long as the master sends events with seconds=X, the slave will generate the microseconds itself: X.0, X.1, X.2, etc. When the master sends an event with a new timestamp, Y, the slave goes back to Y.0 and continues with Y.1, etc.
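In other words, something like this (a toy model of the intended behaviour; next_row_time and SysTime are made-up names, not the actual event execution code):

  #include <cstdio>

  struct SysTime { unsigned long sec, sec_part; };
  static SysTime system_time = {0, 0};

  // 'master_sec' is the seconds value taken from the replicated event.
  SysTime next_row_time(unsigned long master_sec) {
    if (master_sec != system_time.sec) {
      system_time.sec = master_sec;  // new second Y from the master:
      system_time.sec_part = 0;      // go back to Y.0
    }
    SysTime t = system_time;         // this row gets X.n
    ++system_time.sec_part;          // the next one gets X.(n+1)
    return t;
  }

  int main() {
    unsigned long master_secs[] = {100, 100, 100, 101, 101};
    for (unsigned long s : master_secs) {
      SysTime t = next_row_time(s);
      std::printf("%lu.%06lu\n", t.sec, t.sec_part);  // 100.0 100.1 100.2 101.0 101.1
    }
  }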
Why did you decide on this logic of getting the seconds from the master and the microseconds from the slave? Since the microseconds sooner or later reset to 0, it is not much better than just assigning some random number. And what is the point of the master sending microseconds only conditionally?
Because the master has been sending microseconds conditionally since 5.3, and the slave had to cope with that somehow anyway. And I didn't want to force the master to include microseconds in every single event for every single user just in case someone decided to do unversioned->versioned replication. Also, I thought that processing 1,000,000 Query_log_event's in one second was not realistic.

But now I see some issues with that. One can freeze the time on the master with 'SET TIMESTAMP' and send an arbitrary number of events with the same timestamp. Or one can generate a query event that includes microseconds, to force the slave to count not from X.0 but from, say, X.999998. So a wraparound is possible and we need some fix for it.

Regards,
Sergei
Chief Architect MariaDB
and security@mariadb.org
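P.S. A toy model of the wraparound, with one possible guard against it (the names and the guard itself are hypothetical, not what the code currently does):

  #include <cstdio>

  struct SysTime { unsigned long sec, sec_part; };
  static SysTime system_time = {0, 0};
  static const unsigned long NO_USEC = ~0UL;   // master sent no microseconds

  SysTime next_row_time(unsigned long master_sec, unsigned long master_usec) {
    if (master_sec != system_time.sec) {
      system_time.sec = master_sec;
      system_time.sec_part = 0;
    }
    if (master_usec != NO_USEC && master_usec >= system_time.sec_part)
      system_time.sec_part = master_usec;      // e.g. jump straight to X.999998
    if (system_time.sec_part > 999999) {       // hypothetical guard: instead of
      ++system_time.sec;                       // wrapping back to X.0, borrow
      system_time.sec_part = 0;                // the next second
    }
    SysTime t = system_time;
    ++system_time.sec_part;
    return t;
  }

  int main() {
    next_row_time(100, 999998);                // master included usec: X.999998
    SysTime a = next_row_time(100, NO_USEC);   // X.999999
    SysTime b = next_row_time(100, NO_USEC);   // would wrap -> guarded to (X+1).0
    std::printf("%lu.%06lu %lu.%06lu\n", a.sec, a.sec_part, b.sec, b.sec_part);
  }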