Forgot to attach the error log... it covers the period since the last successful start of the slave. The line after "2021-02-01 8:32:30" is where I had to kill the server with 'kill -9'.

2021-01-29 3:08:28 139988321904384 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin.005586' at position 621910335, relay log '/var/lib/mysql/relay-bin.000001' position: 4; GTID position '0-1-8694429449'
2021-01-29 3:08:28 139988858074880 [Note] Slave I/O thread: connected to master 'replication_user@172.16.1.1:3306',replication starts at GTID position '0-1-8694429449'
2021-01-29 6:03:01 139988341872384 [ERROR] SpotSystem.APInventory: Found row where the auto_increment column has the value 0
2021-02-01 8:32:30 139988355536640 [Note] /usr/sbin/mysqld: Normal shutdown
2021-02-01 8:32:30 139988355536640 [Note] Event Scheduler: Purging the queue. 0 events
2021-02-01 8:32:30 139988858074880 [Note] Slave I/O thread exiting, read up to log 'mysql-bin.005595', position 90701825; GTID position 0-1-8704989874
2021-02-01 8:45:13 140290042407168 [Note] InnoDB: Using mutexes to ref count buffer pool pages
2021-02-01 8:45:13 140290042407168 [Note] InnoDB: The InnoDB memory heap is disabled
2021-02-01 8:45:13 140290042407168 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2021-02-01 8:45:13 140290042407168 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
2021-02-01 8:45:13 140290042407168 [Note] InnoDB: Compressed tables use zlib 1.2.7
2021-02-01 8:45:13 140290042407168 [Note] InnoDB: Using Linux native AIO
2021-02-01 8:45:13 140290042407168 [Note] InnoDB: Using SSE crc32 instructions
2021-02-01 8:45:13 140290042407168 [Note] InnoDB: Initializing buffer pool, size = 96.0G
2021-02-01 8:45:16 140290042407168 [Note] InnoDB: Completed initialization of buffer pool
2021-02-01 8:45:17 140290042407168 [Note] InnoDB: Highest supported file format is Barracuda.
2021-02-01 8:45:17 140290042407168 [Note] InnoDB: The log sequence number 8320414590808 in ibdata file do not match the log sequence number 8433639723027 in the ib_logfiles!
2021-02-01 8:45:24 140290042407168 [Note] InnoDB: Restoring possible half-written data pages from the doublewrite buffer...
InnoDB: 5 transaction(s) which must be rolled back or cleaned up
InnoDB: in total 11 row operations to undo
InnoDB: Trx id counter is 33839639552
InnoDB: Last MySQL binlog file position 0 399790365, file name /var/lib/mysql/mysql-bin.001280
2021-02-01 8:45:38 140290042407168 [Note] InnoDB: 128 rollback segment(s) are active.
2021-02-01 8:45:38 140177859147520 [Note] InnoDB: Starting in background the rollback of recovered transactions
2021-02-01 8:45:38 140177859147520 [Note] InnoDB: To roll back: 5 transactions, 11 rows
2021-02-01 8:45:38 140290042407168 [Note] InnoDB: Waiting for purge to start
2021-02-01 8:45:38 140177859147520 [Note] InnoDB: Rollback of trx with id 33839605832 completed
2021-02-01 8:45:38 140177859147520 [Note] InnoDB: Rollback of trx with id 33839605831 completed
2021-02-01 8:45:38 140177859147520 [Note] InnoDB: Rollback of trx with id 33839605828 completed
2021-02-01 8:45:38 140177859147520 [Note] InnoDB: Rollback of trx with id 33839605824 completed
2021-02-01 8:45:38 140177859147520 [Note] InnoDB: Rollback of trx with id 33839605823 completed
2021-02-01 8:45:38 140177859147520 [Note] InnoDB: Rollback of non-prepared transactions completed
2021-02-01 8:45:38 140290042407168 [Note] InnoDB: Percona XtraDB (http://www.percona.com) 5.6.49-89.0 started; log sequence number 8433639723027
2021-02-01 8:45:38 140290042407168 [Note] Plugin 'FEEDBACK' is disabled.
2021-02-01 8:45:38 140290042407168 [Note] Recovering after a crash using /var/lib/mysql/mysql-bin
2021-02-01 8:45:38 140290042407168 [Note] Starting crash recovery...
2021-02-01 8:45:38 140177372579584 [Note] InnoDB: Dumping buffer pool(s) not yet started
2021-02-01 8:45:38 140290042407168 [Note] Crash recovery finished.
2021-02-01 8:45:38 140290042407168 [Note] Server socket created on IP: '::'.
2021-02-01 8:45:38 140290042407168 [Warning] 'user' entry 'root@mysql-a.zcom.it' ignored in --skip-name-resolve mode.
2021-02-01 8:45:38 140290042407168 [Warning] 'proxies_priv' entry '@% root@mysql-a.zcom.it' ignored in --skip-name-resolve mode.
2021-02-01 8:45:40 140290041370368 [Note] Slave I/O thread: connected to master 'replication_user@172.16.1.1:3306',replication starts at GTID position '0-1-8696229561'
2021-02-01 8:45:40 140290041063168 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin.005586' at position 1073742303, relay log '/var/lib/mysql/relay-bin.000001' position: 4; GTID position '0-1-8696229561'
2021-02-01 8:45:40 140290042407168 [Note] /usr/sbin/mysqld: ready for connections. Version: '10.1.48-MariaDB' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server

Since 08:45:40 the console has been stuck on the 'stop slave;' command....

On 01.02.2021 at 09:36, Thomas wrote:
Hello, we upgraded our MySQL master/slave servers from 10.1.24 to 10.1.48 on 25.01. Backups worked as usual, but since Friday (29.01.) our backups no longer work. I have noticed that entering 'stop slave;' on the mysql CLI hangs forever, while in the background (on a second console) I can see replication going on, but the GTID is far behind the master. According to MaxScale, it is '0-1-8696229561' on the slave and '0-1-8705275411' on the master, and it is not moving forward on the slave.
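For completeness, the GTID positions can also be read directly on each server (a sketch using the standard MariaDB GTID variables, which is roughly what MaxScale reports):

```sql
-- on the slave: last GTID applied by the replication SQL thread
SELECT @@gtid_slave_pos;
-- on the master: current GTID position of the binlog
SELECT @@gtid_current_pos;
```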
Slave 'show slave status':

MariaDB [(none)]> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.16.1.1
                  Master_User: replication_user
                  Master_Port: 3306
                Connect_Retry: 10
              Master_Log_File: mysql-bin.005595
          Read_Master_Log_Pos: 581894706
               Relay_Log_File: relay-bin.000002
                Relay_Log_Pos: 693
        Relay_Master_Log_File: mysql-bin.005587
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 967647646
              Relay_Log_Space: 8205026277
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
               Master_SSL_Crl:
           Master_SSL_Crlpath:
                   Using_Gtid: Slave_Pos
                  Gtid_IO_Pos: 0-1-8705427292
      Replicate_Do_Domain_Ids:
  Replicate_Ignore_Domain_Ids:
                Parallel_Mode: optimistic
1 row in set (0.00 sec)
Master status is this:

MariaDB [(none)]> show master status\G
*************************** 1. row ***************************
            File: mysql-bin.005595
        Position: 581594839
    Binlog_Do_DB:
Binlog_Ignore_DB:
1 row in set (0.01 sec)
If I try to stop the server, it hangs as well. After I saw the backups hanging on the 'stop slave' command, I waited half an hour and then killed the server with kill -9. It restarted without complaining, but the hang still occurs when doing a 'stop slave'. Can anybody help me diagnose and possibly resolve this issue?
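If it helps, I can capture more state the next time the hang occurs; a sketch of what I would run on the slave while 'stop slave' is hanging:

```sql
-- see what every connection and thread is doing during the hang
SHOW FULL PROCESSLIST;
-- InnoDB semaphore, lock, and transaction state
SHOW ENGINE INNODB STATUS\G
-- replication-related status counters
SHOW GLOBAL STATUS LIKE 'Slave%';
```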
The server is bare metal with a battery-backed RAID10. The OS is CentOS 7, fully updated. Thanks, Thomas
_______________________________________________
Mailing list: https://launchpad.net/~maria-discuss
Post to     : maria-discuss@lists.launchpad.net
Unsubscribe : https://launchpad.net/~maria-discuss
More help   : https://help.launchpad.net/ListHelp