[Maria-discuss] Galera cluster as slave: Duplicate entry errors

Hi all, I'm new to this list :)

My setup: MySQL Server 5.5.37 -> replication -> Galera 10 Cluster (currently 2 nodes). The old MySQL master receives inserts into a table with an auto_increment field (primary key). The Galera cluster is configured as a classic slave of the master.

From time to time I'm getting duplicate entry errors on the Galera cluster:

140905 12:06:15 [ERROR] Slave SQL: Error 'Duplicate entry '16383' for key 'PRIMARY'' on query. ...
140905 12:06:15 [Warning] Slave: Duplicate entry '16383' for key 'PRIMARY' Error_code: 1062

I cannot see where the duplicates could come from. My thought was: if the insert (auto_increment) succeeds on the master, then it must also succeed on the Galera side. Does anyone have an idea?

BR
Otto
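(A quick way to see what the slave tripped over is to look at the slave status and at the auto_increment settings on the Galera node; a minimal sketch, run from the mysql client on one cluster node:)

    -- on the Galera node acting as slave: Last_SQL_Errno / Last_SQL_Error
    -- will show the 1062 duplicate-key failure and the offending event
    SHOW SLAVE STATUS\G
    -- the per-node auto_increment settings the cluster applies
    SHOW GLOBAL VARIABLES LIKE 'auto_increment%';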

Hmm, maybe an idea of my own about the cause. The initial sync to Galera was done as follows:

- dump on the (locked) master
- imported the dump on a Galera node
- started replication (CHANGE MASTER TO ...) with the saved state of the master

(A rough SQL sketch of these steps follows below.) I've read somewhere that Galera handles auto_increments with an offset per node (by design), so you cannot rely on specific values. That is no problem for the application at all. But since I use ROW-based replication, the auto_increment values created on the master could already exist on the Galera side, right?

What would be a way to avoid this? I think I need another migration scenario...

Otto

On 05.09.14 at 18:59, Otto Berger wrote:
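(For reference, a rough SQL sketch of that migration scenario; the host, credentials and binlog coordinates below are placeholders, not the actual values used:)

    -- on the master: lock, note the binlog coordinates, take the dump
    FLUSH TABLES WITH READ LOCK;
    SHOW MASTER STATUS;              -- note File and Position
    -- ... mysqldump, UNLOCK TABLES, import the dump on one Galera node ...

    -- on the Galera node: point it at the saved master state and start replication
    CHANGE MASTER TO
        MASTER_HOST='192.0.2.10', MASTER_USER='repl', MASTER_PASSWORD='...',
        MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=4;
    START SLAVE;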

----- Original Message -----
This should only matter if you are writing directly to the Galera nodes. It can be disabled, however:
http://galeracluster.com/documentation-webpages/mysqlwsrepoptions.html#wsrep...
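(A minimal sketch of disabling it while the cluster only acts as a slave; to make the settings survive a restart they would also go into my.cnf on each node:)

    -- on each Galera node: stop Galera from managing the auto_increment offsets
    SET GLOBAL wsrep_auto_increment_control = OFF;
    SET GLOBAL auto_increment_increment = 1;
    SET GLOBAL auto_increment_offset = 1;
    -- verify
    SHOW GLOBAL VARIABLES LIKE 'auto_increment%';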
Did you compare the contents of the row on the master and on Galera?
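(Something along these lines, run on both the master and a Galera node; the table and column names are placeholders:)

    -- compare the conflicting row on both sides
    SELECT * FROM mydb.mytable WHERE id = 16383;
    -- or compare the whole table at once
    CHECKSUM TABLE mydb.mytable;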
> What would be a way to avoid this? I think I need another migration scenario...
It looks quite sane to me.

--
Daniel Black, Engineer @ Open Query (http://openquery.com.au)
Remote expertise & maintenance for MySQL/MariaDB server environments.

Hi all,

I ran into almost the same situation. My migration was like this:

1. use xtrabackup to back up a MySQL 5.5.x database
2. set up the first node of PXC 5.6 with the backup
3. start master-slave replication between MySQL 5.5 and the first node of PXC 5.6
4. add a new node to the PXC 5.6 cluster

When I performed step 4, I found that replication had stopped because of the auto-increment issue. I also found that the binlog format of MySQL 5.5.x was statement-based, so I set up an intermediate node with a row-based binlog as the master of PXC, and it works. The steps were as follows:

1. use xtrabackup to back up a MySQL 5.5.x database
2. set up an intermediate MySQL 5.6.x node with the backup and binlog-format=row
3. start master-slave replication between MySQL 5.5 and MySQL 5.6.x
4. set up the first node of PXC 5.6 with the backup
5. start master-slave replication between MySQL 5.6.x and the first node of PXC 5.6
6. add a new node to the PXC 5.6 cluster

(A rough sketch of the intermediate node's settings is below.) I have not dug into the reason, but there must be something different in how PXC and a normal slave handle auto-increment fields. With a row-based binlog there is no need to regenerate the auto-increment values, so the behaviour is the same on both.

On Sep 6, 2014, at 8:00, Daniel Black <daniel.black@openquery.com> wrote:
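(A rough sketch of the intermediate MySQL 5.6 node's relevant settings, with example values. binlog_format can be changed at runtime, but log_bin and log_slave_updates have to be set in my.cnf and need a restart:)

    -- on the intermediate 5.6 node: make sure it writes ROW-format binlogs and
    -- logs events received from the 5.5 master (log-bin, log-slave-updates,
    -- binlog_format=ROW in my.cnf)
    SHOW GLOBAL VARIABLES
        WHERE Variable_name IN ('log_bin', 'log_slave_updates', 'binlog_format');
    -- then chain the replication: 5.5 master -> this node -> first PXC node,
    -- with CHANGE MASTER TO ... / START SLAVE on each downstream server.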
participants (3)
- Daniel Black
- Otto Berger
- oucboy