[Maria-developers] unexpected "Duplicate entry" error
Hello,

We sometimes (not always, not often, but sometimes) get an unexpected "Duplicate entry" error on the following table and with the following query:

* table

CREATE TABLE `category` (
  `uid` bigint(20) unsigned NOT NULL,
  `category_uid` bigint(20) unsigned NOT NULL DEFAULT '0',
  `base_category_uid` bigint(20) unsigned NOT NULL DEFAULT '0',
  `category_strict_membership` tinyint(1) NOT NULL DEFAULT '0',
  PRIMARY KEY (`uid`,`category_uid`,`base_category_uid`,`category_strict_membership`),
  KEY `Membership` (`category_uid`,`base_category_uid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

* query

BEGIN;
(some queries for other tables)
DELETE FROM category WHERE uid=14026268;
INSERT INTO category VALUES (14026268, 3131502, 2074, 1),(14026268, 2992, 2992, 0),(14026268, 3133077, 191, 1),(14026268, 1212264, 2992, 1);
(some queries for other tables)
COMMIT;

* error

Duplicate entry '14026268-3131502-2074-1' for key 'PRIMARY'

But this duplicate entry error should not happen, because the DELETE is issued in the same transaction, even if similar queries are sent from another connection at the same time.

We confirmed that this kind of issue happens on the following environments, using MySQL_python-1.2.3:

* MariaDB 5.5.25, XtraDB
* MariaDB 5.5.29, XtraDB
* MariaDB 5.5.29, InnoDB plugin

Since it only happens SOMETIMES, I cannot reproduce the issue with a small test case. Do you have any idea how we can investigate this further?

Thanks in advance!
--
Kazuhiko Shiozaki
Nexedi: Consulting and Development of Free / Open Source Software
http://www.nexedi.com/
ERP5: Full Featured High End Open Source ERP
http://www.erp5.com/
Kazuhiko Shiozaki <kazuhiko@nexedi.com> writes:
We sometimes (not always, not often, but sometimes) get an unexpected "Duplicate entry" error on the following table and with the following query:
BEGIN;
(some queries for other tables)
DELETE FROM category WHERE uid=14026268;
INSERT INTO category VALUES (14026268, 3131502, 2074, 1),(14026268, 2992, 2992, 0),(14026268, 3133077, 191, 1),(14026268, 1212264, 2992, 1);
(some queries for other tables)
COMMIT;
What transaction isolation level are you running these transactions with?

 - Kristian.
Hi Kristian, On 18/02/2013 15:36, Kristian Nielsen wrote:
We sometimes (not always, not often, but sometimes) get an unexpected "Duplicate entry" error on the following table and with the following query:
BEGIN;
(some queries for other tables)
DELETE FROM category WHERE uid=14026268;
INSERT INTO category VALUES (14026268, 3131502, 2074, 1),(14026268, 2992, 2992, 0),(14026268, 3133077, 191, 1),(14026268, 1212264, 2992, 1);
(some queries for other tables)
COMMIT;
What transaction isolation level are you running these transactions with?
It's REPEATABLE-READ.

Regards,
--
Kazuhiko Shiozaki, Nexedi SA Senior Consultant
Nexedi: Consulting and Development of Free / Open Source Software
http://www.nexedi.com/
ERP5: Full Featured High End Open Source ERP
http://www.erp5.com/
On 18/02/2013 16:31, Kazuhiko Shiozaki wrote:
On 18/02/2013 15:36, Kristian Nielsen wrote:
We sometimes (not always, not often, but sometimes) get an unexpected "Duplicate entry" error on the following table and with the following query:
BEGIN;
(some queries for other tables)
DELETE FROM category WHERE uid=14026268;
INSERT INTO category VALUES (14026268, 3131502, 2074, 1),(14026268, 2992, 2992, 0),(14026268, 3133077, 191, 1),(14026268, 1212264, 2992, 1);
(some queries for other tables)
COMMIT;
What transaction isolation level are you running these transactions with?
It's REPEATABLE-READ.
And innodb_locks_unsafe_for_binlog is ON; could that be the reason?

Regards,
--
Kazuhiko Shiozaki, Nexedi SA Senior Consultant
Nexedi: Consulting and Development of Free / Open Source Software
http://www.nexedi.com/
ERP5: Full Featured High End Open Source ERP
http://www.erp5.com/
Kazuhiko Shiozaki <kazuhiko@nexedi.com> writes:
It's REPEATABLE-READ.
And innodb_locks_unsafe_for_binlog is ON; could that be the reason?
Yes.

A> SELECT @@global.innodb_locks_unsafe_for_binlog;
@@global.innodb_locks_unsafe_for_binlog
1
A> SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
A> BEGIN;

B> SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
B> BEGIN;
B> DELETE FROM t1 WHERE a=1;

A> DELETE FROM t1 WHERE a=1;

B> INSERT INTO t1 VALUES (1);
B> COMMIT;

A> INSERT INTO t1 VALUES (1);
ERROR 1062 (23000): Duplicate entry '1' for key 'PRIMARY'

With --innodb-locks-unsafe-for-binlog=0, the DELETE from connection A waits for a next-key lock set by the DELETE in B, and there is no error. With --innodb-locks-unsafe-for-binlog=1, no such next-key lock is set. Therefore, you get a duplicate key error, depending on exactly how the statements in the two transactions run.

You can perhaps solve it by setting the transaction isolation level to SERIALIZABLE for just these queries. On the other hand, you probably need to be prepared to handle deadlocks and retry the transactions anyway, so you could just retry in this case also.

Hope this helps,

 - Kristian.
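For reference, a minimal sketch (an illustration only, not taken from the thread) of what "SERIALIZABLE for just these queries" could look like, reusing the category table and uid value from the original report. The plain SET TRANSACTION form, without GLOBAL or SESSION, applies only to the next transaction started on the connection:

    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;  -- affects only the next transaction
    BEGIN;
    DELETE FROM category WHERE uid=14026268;
    INSERT INTO category VALUES
      (14026268, 3131502, 2074, 1),
      (14026268, 2992, 2992, 0),
      (14026268, 3133077, 191, 1),
      (14026268, 1212264, 2992, 1);
    COMMIT;
    -- Later transactions on this connection revert to the session/global default.

Under SERIALIZABLE, reads inside the transaction become locking reads, so the deadlock handling and retry logic Kristian mentions are still needed.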
Hi Kristian, On 18/02/2013 16:59, Kristian Nielsen wrote:
It's REPEATABLE-READ.
And innodb_locks_unsafe_for_binlog is ON; could that be the reason?
Yes. (snip) With --innodb-locks-unsafe-for-binlog=0, the DELETE from connection A waits for a next-key lock set by the DELETE in B, and there is no error. With --innodb-locks-unsafe-for-binlog=1, no such next-key lock is set. Therefore, you get a duplicate key error, depending on exactly how the statements in the two transactions run.
You can perhaps solve it by setting the transaction isolation level to SERIALIZABLE for just these queries. On the other hand, you probably need to be prepared to handle deadlocks and retry the transactions anyway, so you could just retry in this case also.
Thanks for your help! I forgot that I enabled innodb_locks_unsafe_for_binlog when I posted this question...

I'm trying the SERIALIZABLE isolation level and will check the performance impact of this change.

Regards,
--
Kazuhiko Shiozaki, Nexedi SA Senior Consultant
Nexedi: Consulting and Development of Free / Open Source Software
http://www.nexedi.com/
ERP5: Full Featured High End Open Source ERP
http://www.erp5.com/
Hi Kristian, On 18/02/2013 17:38, Kazuhiko Shiozaki wrote:
On 18/02/2013 16:59, Kristian Nielsen wrote:
It's REPEATABLE-READ.
And innodb_locks_unsafe_for_binlog is ON; could that be the reason?
Yes. (snip) With --innodb-locks-unsafe-for-binlog=0, the DELETE from connection A waits for a next-key lock set by the DELETE in B, and there is no error. With --innodb-locks-unsafe-for-binlog=1, no such next-key lock is set. Therefore, you get a duplicate key error, depending on exactly how the statements in the two transactions run.
You can perhaps solve it by setting the transaction isolation level to SERIALIZABLE for just these queries. On the other hand, you probably need to be prepared to handle deadlocks and retry the transactions anyway, so you could just retry in this case also.
Thanks for your help! I forgot that I enabled innodb_locks_unsafe_for_binlog when I posted this question...
I'm trying the SERIALIZABLE isolation level and will check the performance impact of this change.
I tried setting the SERIALIZABLE isolation level globally and confirmed that (many) more deadlocks happened. But unfortunately the "Duplicate entry" error still happens.

@@GLOBAL.tx_isolation: SERIALIZABLE
@@tx_isolation: SERIALIZABLE
@@innodb_locks_unsafe_for_binlog: 1

Is this normal?

Regards,
--
Kazuhiko Shiozaki, Nexedi SA Senior Consultant
Nexedi: Consulting and Development of Free / Open Source Software
http://www.nexedi.com/
ERP5: Full Featured High End Open Source ERP
http://www.erp5.com/
Kazuhiko Shiozaki <kazuhiko@nexedi.com> writes:
I tried setting the SERIALIZABLE isolation level globally and confirmed that (many) more deadlocks happened. But unfortunately the "Duplicate entry" error still happens.
@@GLOBAL.tx_isolation: SERIALIZABLE
@@tx_isolation: SERIALIZABLE
@@innodb_locks_unsafe_for_binlog: 1
I don't know.

I am not sure what the semantics is for setting innodb_locks_unsafe_for_binlog at the same time as SERIALIZABLE. They seem kind of contradictory to me.

Do you still get the deadlocks if you use SERIALIZABLE without innodb_locks_unsafe_for_binlog?

By the way, you do not need to set SERIALIZABLE globally. Why not set it only for that particular kind of transaction?

 - Kristian.
Hi Kristian, On 21/02/2013 13:31, Kristian Nielsen wrote:
I tried setting the SERIALIZABLE isolation level globally and confirmed that (many) more deadlocks happened. But unfortunately the "Duplicate entry" error still happens.
@@GLOBAL.tx_isolation: SERIALIZABLE
@@tx_isolation: SERIALIZABLE
@@innodb_locks_unsafe_for_binlog: 1
I don't know.
I am not sure what the semantics is for setting innodb_locks_unsafe_for_binlog at the same time as SERIALIZABLE. They seem kind of contradictory to me.
Do you still get the deadlocks if you use SERIALIZABLE without innodb_locks_unsafe_for_binlog?
Sorry for the late reply. Yes, deadlocks still happen often when using SERIALIZABLE without innodb_locks_unsafe_for_binlog. Considering our use cases, it seems that using REPEATABLE-READ with innodb_locks_unsafe_for_binlog is faster than the other options we tried, because it produces far fewer deadlocks, even though our application then has to handle the "Duplicate entry" error properly and retry those transactions.
By the way, you do not need to set SERIALIZABLE globally. Why not set it only for that particular kind of transaction?
Yes, I know. I set it globally only because it was the easiest way to try it without modifying the application at all...

Regards,
--
Kazuhiko Shiozaki, Nexedi SA Senior Consultant
Nexedi: Consulting and Development of Free / Open Source Software
http://www.nexedi.com/
ERP5: Full Featured High End Open Source ERP
http://www.erp5.com/
Hi.

Bumping this thread with a slightly different finding which may help understanding the issue. I'll use two command lines for this test, "MariaDB 1" and "MariaDB 2", also distinguished by indentation level for readability.

Server version: 5.5.30-MariaDB-log MariaDB Server

Some setup:

MariaDB 1> SET autocommit = 0;
Query OK, 0 rows affected (0.00 sec)

MariaDB 1> CREATE TABLE `tst` (
    ->   `uid` int(10) unsigned NOT NULL,
    ->   `foo` smallint(6) NOT NULL DEFAULT '-1',
    ->   PRIMARY KEY (`uid`)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
Query OK, 0 rows affected (0.03 sec)

MariaDB 1> insert into tst values (1, -2);
Query OK, 1 row affected (0.00 sec)

MariaDB 1> commit;
Query OK, 0 rows affected (0.00 sec)

Here starts the actual test:

MariaDB 1> select count(*) from tst\G
*************************** 1. row ***************************
count(*): 1
1 row in set (0.00 sec)

    MariaDB 2> select count(*) from tst\G
    *************************** 1. row ***************************
    count(*): 1
    1 row in set (0.00 sec)

    MariaDB 2> insert into tst values (2, 4), (3, 0);
    Query OK, 2 rows affected (0.01 sec)
    Records: 2  Duplicates: 0  Warnings: 0

MariaDB 1> select count(*) from tst\G
*************************** 1. row ***************************
count(*): 1
1 row in set (0.00 sec)

So far, all good.

MariaDB 1> delete from tst;
Query OK, 3 rows affected (0.00 sec)

Why "3 rows affected"?

MariaDB 1> commit;
Query OK, 0 rows affected (0.01 sec)

MariaDB 1> select * from tst;
Empty set (0.00 sec)

The query Kazuhiko started this thread with can run in parallel with highly-probably-different values for the "uid" column (the first column in both tables and in the index reporting the duplicate key). Probable, but not guaranteed. So a case like this is possible:

c 1> delete ... where uid=1 ... (x lines affected)
    c 2> delete ... where uid=1 ... (y lines affected, none with uid 1)
c 1> insert ... values (1, ...), ...
c 1> commit (all fine)
    c 2> insert ... values (1, ...), ...
    c 2> commit (duplicate key error)

I'm surprised by delete's behaviour, as I am mentally applying transaction isolation below what SQL can query, to the btree the table is. Is it just a bad assumption on my part? Is there a way to make this assumption true in MariaDB?

Regards,
--
Vincent Pelletier
ERP5 - open source ERP/CRM for flexible enterprises
On Tue, 21 May 2013 11:35:46 +0200, Vincent Pelletier <vincent@nexedi.com> wrote:
I'm surprised by delete's behaviour, as I am mentally applying transaction isolation below what SQL can query, to the btree the table is.
...to the btree the table (or index) is implemented with.
Is it just a bad assumption on my part?
Looks like I just found part of an answer:
http://dev.mysql.com/doc/refman/5.5/en/delete.html#c7917

Quoting:

    It's probably worth to mention that DELETE FROM doesn't use the same
    isolation level in transaction as SELECT. Even if you set isolation level
    as REPEATABLE READ it doesn't change DELETE behaviour which work as READ
    COMMITTED. (it affects InnoDB engine in MySQL4 and MySQL5)

So at least it shouldn't be news (MySQL4!). Any plan to change this?
--
Vincent Pelletier
ERP5 - open source ERP/CRM for flexible enterprises
Hi!
"Vincent" == Vincent Pelletier <vincent@nexedi.com> writes:
Vincent> Hi.
Vincent> Bumping this thread with a slightly different finding which may help
Vincent> understanding the issue.
Vincent> I'll use two command lines for this test, "MariaDB 1" and "MariaDB 2",
Vincent> also distinguished by indentation level for readability.
Vincent> Server version: 5.5.30-MariaDB-log MariaDB Server

<cut>

Vincent> So far, all good.

Vincent> MariaDB 1> delete from tst;
Vincent> Query OK, 3 rows affected (0.00 sec)

Vincent> Why "3 rows affected" ?

This is because in InnoDB, DELETE is run under the READ COMMITTED isolation level, not REPEATABLE READ as SELECT is. This is an optimization that Heikki did a long time ago that has this strange side effect.

See the comment posted by Radek on http://dev.mysql.com/doc/refman/5.5/en/delete.html

Regards,
Monty
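The same "current read" behaviour can be seen without DELETE: a locking read in InnoDB reads the latest committed row versions rather than the transaction's snapshot. A small sketch (illustration only, assuming tst again contains just the row (1, -2) as in Vincent's setup):

    -- connection 1
    SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    BEGIN;
    SELECT COUNT(*) FROM tst;             -- consistent read from the snapshot: 1

        -- connection 2, meanwhile
        INSERT INTO tst VALUES (2, 4), (3, 0);
        COMMIT;

    -- connection 1 again
    SELECT COUNT(*) FROM tst;             -- still 1: the snapshot is unchanged
    SELECT COUNT(*) FROM tst FOR UPDATE;  -- locking ("current") read: 3, the same rows the DELETE saw

So the surprising row count is not specific to DELETE; it applies to any statement that has to lock the rows it touches.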
On Tue, 21 May 2013 19:45:02 +0300, Michael Widenius <monty@askmonty.org> wrote:
This is an optimization that Heikki did a long time ago that has this strange side effect.
See the comment posted by Radek on http://dev.mysql.com/doc/refman/5.5/en/delete.html
I found that a bit after posting, yes... Is there any way to undo that optimisation (configuration...)? What kind of problems happened before that optimisation?

On a loosely related topic: I've witnessed, on InnoDB tables with a lot of inserts & deletes ("0 rows, then 300k rows, then back to 0 rows" kind of life cycle, the table being used as a task queue), that queries can become slow even when the table is empty. When this happens, "explain select * from foo" reports a large number of rows to scan, although count(*) returns 0. I suspect there is a GC-ish bug in InnoDB which keeps deleted records around for too long (as I believe rows are not actually freed on the spot, but kept for a while, maybe while a snapshot keeps referencing them). The cure is usually to restart MariaDB; even closing all client connections doesn't solve the issue. Any chance it could be linked to such "bending of the isolation level"?

Regards,
--
Vincent Pelletier
ERP5 - open source ERP/CRM for flexible enterprises
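One way to check whether unpurged old row versions are behind the "empty but slow" table (rather than guessing) is the purge backlog that InnoDB reports. A rough sketch, with the caveat that the exact output wording varies between versions:

    SHOW ENGINE INNODB STATUS\G
    -- In the TRANSACTIONS section, look for a line such as:
    --   History list length 1234567
    -- A large value that does not shrink while the server is idle means purge
    -- is lagging and delete-marked rows are still sitting in the indexes,
    -- which can inflate the row estimates that EXPLAIN reports.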
Participants (4):
- Kazuhiko Shiozaki
- Kristian Nielsen
- Michael Widenius
- Vincent Pelletier