[Maria-developers] MariaDB mutex contention (was: Re: MariaDB 10.0 benchmarks on upcoming IVB-EP processors ...)
Hi, Michael Widenius wrote:
10.0.4 should be very soon ready for benchmarking (Tuesday or Wednesday). We are in the final stage of fixing the failing test cases after our last merge with MySQL 5.6 GA.
I would just prefer to see Sergei Vojtovich push his code that changes how the table cache works.
This solves the same problem as the split table cache in 5.6, but is a much simpler and better implementation.
After this, I would expect that MariaDB 10.0 performance would be closer to 5.6 performance for this benchmark.
Axel, could you at some point run the same benchmark with MariaDB 10.0.4 and MySQL 5.6 with gprof to see where time is spent?
I have now beefed up my knowledge of performance_schema and used it to look at Mutex/RW-lock contention for the readonly OLTP benchmark.

-----
TL;DR: 10.0.4 spends a very long time in LOCK_PLUGIN; LOCK_OPEN is indeed much better than in 10.0.3, but not(?) nearly as good as in MySQL-5.6.
-----

Contestants:
- MySQL-5.6.13 (release)
- MariaDB-10.0.3 (release)
- MariaDB-10.0 r3799 (bzr trunk)
all 3 compiled from source on the benchmark machine using the same build script

Environment:
- many-core machine (32 cores, 64 threads)
- sysbench-0.5 OLTP readonly, 10M rows, single table
- mysqld bound to 48 cores (sysbench to the other 8)
- tested with 64 threads (this is the sweet spot)
- using tcmalloc (preloaded)
- mutexes and rw-locks fully instrumented via performance_schema, see my.cnf

Benchmark results (transactions per second):

          P_S on   P_S off
MySQL:    ~15K     ~18K
10.0.3:   ~7K      ~7.5K
10.0.4:   ~6.5K    ~9K

After stabilizing I took 3 probes from P_S, sorting by sum_timer_wait desc. Times shown by P_S are picoseconds; divide sum_timer_wait by 10^12 to get seconds (if this were seconds, the decimal point would sit roughly between the "m" and "_" of "sum_timer_wait"). So, for example, 10.0.4 waited about 242 seconds on LOCK_PLUGIN during the 10 second measurement interval. Since mysqld ran on 48 cores, there are 480 core-seconds in total, and 10.0.4 wasted about 50% of that on this one mutex (a small worked example of this arithmetic follows after the tables).

Questions:
- what is protected by LOCK_PLUGIN?
- how is LOCK_OPEN split in MySQL-5.6? If some of that is now covered by THR_LOCK::mutex, then it would explain why this is the busiest mutex in MySQL-5.6.

Let me know if you need more data from me or if you have ideas what else I should test.

XL

Top 20 Mutexes/RW-locks

MySQL-5.6.13

| event_name | count_star | sum_timer_wait |
| wait/synch/mutex/mysys/THR_LOCK::mutex | 3397732 | 123367802416236 |
| wait/synch/rwlock/innodb/btr_search_latch | 1632176 | 22207357475142 |
| wait/synch/rwlock/sql/LOCK_grant | 1682983 | 1526912505972 |
| wait/synch/mutex/sql/LOCK_table_cache | 3397786 | 1247140492941 |
| wait/synch/mutex/sql/THD::LOCK_thd_data | 9219327 | 1076510130276 |
| wait/synch/rwlock/innodb/hash_table_locks | 4086413 | 1026755227155 |
| wait/synch/mutex/innodb/trx_sys_mutex | 364146 | 639270297372 |
| wait/synch/mutex/innodb/os_mutex | 149491 | 630758755641 |
| wait/synch/rwlock/innodb/index_tree_rw_lock | 1016436 | 344264365074 |
| wait/synch/rwlock/sql/MDL_lock::rwlock | 242778 | 183463650006 |
| wait/synch/mutex/sql/MDL_map::mutex | 121342 | 69121460394 |
| wait/synch/mutex/mysys/BITMAP::mutex | 242667 | 54426177477 |
| wait/synch/mutex/innodb/fil_system_mutex | 179586 | 20229032103 |
| wait/synch/mutex/sql/LOCK_plugin | 121352 | 15004585617 |
| wait/synch/mutex/innodb/trx_mutex | 121475 | 13105527141 |
| wait/synch/mutex/innodb/lock_mutex | 121374 | 12627692868 |
| wait/synch/mutex/innodb/buf_pool_mutex | 2240 | 228234006 |
| wait/synch/mutex/innodb/log_sys_mutex | 540 | 119752122 |
| wait/synch/mutex/innodb/flush_list_mutex | 704 | 86802870 |
| wait/synch/mutex/innodb/buf_dblwr_mutex | 640 | 30728817 |

MariaDB-10.0.3

| event_name | count_star | sum_timer_wait |
| wait/synch/mutex/sql/LOCK_open | 3334966 | 251668176449112 |
| wait/synch/mutex/mysys/THR_LOCK::mutex | 2206782 | 2761461343332 |
| wait/synch/mutex/sql/LOCK_plugin | 2540899 | 1471913471196 |
| wait/synch/mutex/sql/THD::LOCK_thd_data | 6485120 | 770989733856 |
| wait/synch/rwlock/innodb/hash table locks | 3315788 | 674878255920 |
| wait/synch/rwlock/innodb/btr_search_latch | 1358771 | 540950632524 |
| wait/synch/mutex/innodb/trx_sys_mutex | 380210 | 245662486416 |
| wait/synch/rwlock/sql/MDL_lock::rwlock | 191698 | 125257499748 |
| wait/synch/mutex/sql/TABLE_SHARE::LOCK_ha_data | 1067704 | 95258676492 |
| wait/synch/mutex/mysys/THR_LOCK_lock | 157621 | 52329482172 |
| wait/synch/mutex/mysys/BITMAP::mutex | 154870 | 49415303736 |
| wait/synch/mutex/sql/MDL_map::mutex | 79173 | 41936664096 |
| wait/synch/mutex/innodb/fil_system_mutex | 129610 | 40258209024 |
| wait/synch/mutex/innodb/lock_mutex | 93667 | 26848457928 |
| wait/synch/mutex/innodb/trx_mutex | 94131 | 11398946760 |
| wait/synch/mutex/innodb/buf_pool_mutex | 12672 | 571983480 |
| wait/synch/mutex/innodb/buf_dblwr_mutex | 9152 | 312365052 |
| wait/synch/mutex/innodb/log_sys_mutex | 456 | 138632868 |
| wait/synch/mutex/innodb/flush_list_mutex | 480 | 109226268 |
| wait/synch/mutex/innodb/os_mutex | 350 | 71117100 |

MariaDB-10.0-r3799

| event_name | count_star | sum_timer_wait |
| wait/synch/mutex/sql/LOCK_plugin | 4194896 | 242595666533160 |
| wait/synch/mutex/sql/LOCK_open | 1835243 | 33790056812460 |
| wait/synch/mutex/mysys/THR_LOCK::mutex | 1835221 | 3136249420092 |
| wait/synch/rwlock/sql/LOCK_tdc | 911928 | 1088553356028 |
| wait/synch/mutex/sql/THD::LOCK_thd_data | 5766813 | 695664363024 |
| wait/synch/rwlock/innodb/hash table locks | 2144715 | 513517462020 |
| wait/synch/rwlock/innodb/btr_search_latch | 915043 | 399309801912 |
| wait/synch/rwlock/sql/MDL_lock::rwlock | 131171 | 176610102480 |
| wait/synch/mutex/innodb/trx_sys_mutex | 196751 | 154609451460 |
| wait/synch/rwlock/innodb/index_tree_rw_lock | 527543 | 137903039712 |
| wait/synch/mutex/sql/MDL_map::mutex | 158268 | 95133455712 |
| wait/synch/mutex/innodb/fil_system_mutex | 97493 | 27285605172 |
| wait/synch/mutex/mysys/BITMAP::mutex | 131099 | 26589125196 |
| wait/synch/mutex/mysys/THR_LOCK_lock | 131105 | 23952666708 |
| wait/synch/mutex/innodb/lock_mutex | 65579 | 10355822676 |
| wait/synch/mutex/innodb/trx_mutex | 65812 | 9790097724 |
| wait/synch/mutex/innodb/buf_pool_mutex | 11550 | 644569236 |
| wait/synch/rwlock/innodb/checkpoint_lock | 2 | 236298492 |
| wait/synch/mutex/innodb/buf_dblwr_mutex | 3880 | 156907368 |
| wait/synch/mutex/innodb/flush_list_mutex | 692 | 124740900 |

Comparison
----------
(columns: count_star | sum_timer_wait, per server; "-" means the event is not in that server's top 20)

| event_name | r3799 count | r3799 wait | 10.0.3 count | 10.0.3 wait | 5.6.13 count | 5.6.13 wait |
| wait/synch/mutex/sql/LOCK_plugin | 4194896 | 242595666533160 | 2540899 | 1471913471196 | 121352 | 15004585617 |
| wait/synch/mutex/sql/LOCK_open | 1835243 | 33790056812460 | 3334966 | 251668176449112 | - | - |
| wait/synch/mutex/mysys/THR_LOCK::mutex | 1835221 | 3136249420092 | 2206782 | 2761461343332 | 3397732 | 123367802416236 |
| wait/synch/rwlock/sql/LOCK_tdc | 911928 | 1088553356028 | - | - | - | - |
| wait/synch/mutex/sql/THD::LOCK_thd_data | 5766813 | 695664363024 | 6485120 | 770989733856 | 9219327 | 1076510130276 |
| wait/synch/rwlock/innodb/hash table locks | 2144715 | 513517462020 | 3315788 | 674878255920 | 4086413 | 1026755227155 |
| wait/synch/rwlock/innodb/btr_search_latch | 915043 | 399309801912 | 1358771 | 540950632524 | 1632176 | 22207357475142 |
| wait/synch/rwlock/sql/MDL_lock::rwlock | 131171 | 176610102480 | 191698 | 125257499748 | 242778 | 183463650006 |
| wait/synch/mutex/innodb/trx_sys_mutex | 196751 | 154609451460 | 380210 | 245662486416 | 364146 | 639270297372 |
| wait/synch/rwlock/innodb/index_tree_rw_lock | 527543 | 137903039712 | - | - | 1016436 | 344264365074 |
| wait/synch/mutex/sql/MDL_map::mutex | 158268 | 95133455712 | 79173 | 41936664096 | 121342 | 69121460394 |
| wait/synch/mutex/innodb/fil_system_mutex | 97493 | 27285605172 | 129610 | 40258209024 | 179586 | 20229032103 |
| wait/synch/mutex/mysys/BITMAP::mutex | 131099 | 26589125196 | 154870 | 49415303736 | 242667 | 54426177477 |
| wait/synch/mutex/mysys/THR_LOCK_lock | 131105 | 23952666708 | 157621 | 52329482172 | - | - |
| wait/synch/mutex/innodb/lock_mutex | 65579 | 10355822676 | 93667 | 26848457928 | 121374 | 12627692868 |
| wait/synch/mutex/innodb/trx_mutex | 65812 | 9790097724 | 94131 | 11398946760 | 121475 | 13105527141 |
| wait/synch/mutex/innodb/buf_pool_mutex | 11550 | 644569236 | 12672 | 571983480 | 2240 | 228234006 |
| wait/synch/rwlock/innodb/checkpoint_lock | 2 | 236298492 | - | - | - | - |
| wait/synch/mutex/innodb/buf_dblwr_mutex | 3880 | 156907368 | 9152 | 312365052 | 640 | 30728817 |
| wait/synch/mutex/innodb/flush_list_mutex | 692 | 124740900 | 480 | 109226268 | 704 | 86802870 |
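As a quick check of the arithmetic above, here is a small self-contained C++ snippet; the constants are simply copied from the r3799 LOCK_plugin row and from the 48-core / 10-second setup described in the message, so it is only an illustration of the picosecond-to-core-seconds conversion:

    #include <cstdio>

    int main() {
        // sum_timer_wait for LOCK_plugin in MariaDB-10.0-r3799, in picoseconds
        const double lock_plugin_wait_ps = 242595666533160.0;
        const double ps_per_second = 1e12;
        // 48 cores observed over a 10 second measurement interval
        const double core_seconds_available = 48.0 * 10.0;

        double wait_seconds = lock_plugin_wait_ps / ps_per_second;
        double wasted_fraction = wait_seconds / core_seconds_available;

        printf("LOCK_plugin wait: %.1f s of %.0f core-seconds (%.0f%%)\n",
               wait_seconds, core_seconds_available, wasted_fraction * 100.0);
        return 0;
    }

This prints roughly "242.6 s of 480 core-seconds (51%)", matching the "about 50%" figure quoted above.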
Hi Axel,

thanks for testing - interesting results. Though our results diverge a lot, generally I confirm that PFS is eating up 10-20% of performance and that we're still slower than 5.6. But in my scenario even a single thread is about 10% slower than in 5.6.

Further comments/questions inline.

On 16.08.2013, at 18:11, Axel Schwenke <axel@askmonty.org> wrote:
Hi,
Michael Widenius wrote:
10.0.4 should be very soon ready for benchmarking (Tuesday or Wednesday). We are in the final stage of fixing the failing test cases after our last merge with MySQL 5.6 GA.
I would just prefer to see Sergei Vojtovich push his code that changes how the table cache works.
This solves the same problem as the split table cache in 5.6, but is a much simpler and better implementation.
After this, I would expect that MariaDB 10.0 performance would be closer to 5.6 performance for this benchmark.
Axel, could you at some point run the same benchmark with MariaDB 10.0.4 and MySQL 5.6 with gprof to see where time is spent?
I have now beefed up my knowledge of performance_schema and used it to look at Mutex/RW-lock contention for readonly OLTP benchmark.
-----
TL;DR: 10.0.4 spends very long time in LOCK_PLUGIN, LOCK_OPEN is indeed much better than 10.0.3 but not(?) nearly as good as MySQL-5.6.
-----
Contestants:
- MySQL-5.6.13 (release) - MariaDB-10.0.3 (release) - MariaDB-10.0 r3799 (bzr trunk)
all 3 compiled from source on benchmark machine using the same build script
Environment:
- many core machine (32 cores, 64 threads) - sysbench-0.5 OLTP readonly, 10M rows, single table - mysqld bound to 48 cores (sysbench to the other 8) - tested with 64 threads (this is the sweet spot) - using tcmalloc (preloaded) - mutexes and rw locks fully instrumented via performance_schema, see my.cnf
Benchmark results (transactions per second):
P_S on P_S off MySQL: ~15K ~18K 10.0.3: ~7K ~7.5K 10.0.4: ~6.5K ~9K
After stabilizing I took 3 probes from P_S, sorting by sum_timer_wait desc. Times shown by P_S are picoseconds. If this were seconds, the decimal point would be in the column between "m" and "_" in "sum_timer_wait". So i.e. 10.0.4 waited 242 seconds for LOCK_PLUGIN during the 10 second measurement interval. Since we have 48 cores, there are total 480 seconds and 10.0.4 wasted about 50% of that.
Questions:
- what is protected by LOCK_PLUGIN?

Mainly the array of plugins and the array of shared objects containing these plugins, but see below.
- how is LOCK_OPEN split in MySQL-5.6? If some of that is now covered by THR_LOCK::mutex, then it would explain why this is the busiest mutex in MySQL-5.6.

Nope. It is just split into multiple table caches (partitioned), protected by LOCK_table_cache and LOCK_open. I guess we have some optimization for THR_LOCK::mutex which Oracle doesn't have.
Let me know if you need more data from me or if you have ideas what else I should test.
XL
Top 20 Mutexes/RW-locks
MySQL-5.6.13
[MySQL-5.6.13 top-20 table snipped; see the full table in the original message above]
Indeed, strange that THR_LOCK::mutex takes so much time; I have never seen it in my tests. It is also strange that MDL_map::mutex is acquired 121 342 times whereas LOCK_table_cache is acquired 3 397 786 times (they're almost identical in my tests).
MariaDB-10.0.3
[MariaDB-10.0.3 top-20 table snipped; see the full table in the original message above]
MariaDB-10.0-r3799
[MariaDB-10.0-r3799 top-20 table snipped; see the full table in the original message above]
From the above I'd say:
- 911 928 - number of executed queries (1x LOCK_tdc per query)
- 1 835 243 / 911 928 = ~2 - LOCK_open was acquired ~2x per query
- 4 194 896 / 911 928 = ~4 - LOCK_plugin was acquired ~4x per query

Most probably this means that the table definition cache is full and every query reopens the table (at least 1x LOCK_plugin) and evicts some old table from the cache (at least 1x LOCK_plugin). It may be a bug in the new code. From the 5.6 and 10.0.3 metrics I can say that they're not running out of table cache. If the above is true, a more complex code path is executed under LOCK_open, which may explain the worse numbers.
Comparison ----------
[comparison table snipped; see the full table in the original message above]
<files.zip>
It would be nice to include LOCK_table_cache here.

Just for your reference, here is the command I use to gather stats (it gives seconds out of the box):

  select EVENT_NAME, COUNT_STAR,
         ROUND(SUM_TIMER_WAIT/1000000000000, 2) AS SECONDS,
         SUM_TIMER_WAIT, MIN_TIMER_WAIT, AVG_TIMER_WAIT, MAX_TIMER_WAIT
  from performance_schema.events_waits_summary_global_by_event_name
  order by sum_timer_wait desc limit 20

Regards,
Sergey
From the above I'd say: 911 928 - number of executed queries (1x LOCK_tdc per query) 1 835 243 / 911 928= ~2 - LOCK_open was acquired 2x per query 4 194 896 / 911 928= ~4 - LOCK_plugin was acquired 4x per query
Most probably it means that table definition cache is full and every query reopens table (at least 1x LOCK_plugin), evicts some old table from cache (at least 1x LOCK_plugin). It may be a bug in new code.
I wonder how the MariaDB under test was compiled and what flags were used. AFAIK the semisync plugins are compiled in by default. That means that for each transaction ha_commit_trans() will execute

  RUN_HOOK(transaction, after_commit, (thd, FALSE));

which involves locking LOCK_plugin when the semisync plugin is "unlocked" (even though it is compiled in statically). Also trans_commit() has this:

  RUN_HOOK(transaction, after_commit, (thd, FALSE));

which also involves locking of LOCK_plugin.

Pavel
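For readers who have not looked at rpl_handler.cc, a minimal self-contained C++ model of the locking shape Pavel describes may help. The names registry_mutex, Observer and run_after_commit_hook are invented for illustration; this is a sketch of the pattern, not the real server code:

    #include <mutex>
    #include <vector>

    std::mutex registry_mutex;          // stand-in for LOCK_plugin

    struct Observer {
        int ref_count = 0;              // keeps the plugin from being unloaded
        void (*after_commit)() = nullptr;
    };

    std::vector<Observer*> observers;   // stand-in for the transaction observers

    // Stand-in for RUN_HOOK(transaction, after_commit, ...): pin each observer,
    // call it, then release it. The release path goes back to the global mutex
    // even if the observer itself does nothing, so the mutex is taken on every
    // commit as soon as such an observer is registered.
    void run_after_commit_hook() {
        for (Observer* obs : observers) {
            {
                std::lock_guard<std::mutex> guard(registry_mutex);  // "lock" the plugin
                ++obs->ref_count;
            }
            if (obs->after_commit)
                obs->after_commit();                                // plugin work (may be a no-op)
            {
                std::lock_guard<std::mutex> guard(registry_mutex);  // "unlock" the plugin
                --obs->ref_count;
            }
        }
    }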
Hi Pavel, Pavel Ivanov wrote:
From the above I'd say: 911 928 - number of executed queries (1x LOCK_tdc per query) 1 835 243 / 911 928= ~2 - LOCK_open was acquired 2x per query 4 194 896 / 911 928= ~4 - LOCK_plugin was acquired 4x per query
I wonder how the MariaDB under test was compiled and what flags were used. AFAIK by default semisync plugins are compiled in. That means that for each transaction ha_commit_trans() will execute
RUN_HOOK(transaction, after_commit, (thd, FALSE));
It involves locking of LOCK_plugin when semisync plugin is "unlocked" (even though it's compiled in statically). Also trans_commit() has this:
RUN_HOOK(transaction, after_commit, (thd, FALSE));
Which also involves locking of LOCK_plugin.
Good catch! Indeed the default cmake configuration activates the semisync master and slave plugins, while the release build explicitly disables them. Removing those two plugins greatly reduced contention on LOCK_plugin. Since the binlog was disabled for this benchmark, I regard this behavior as a bug.

Another plugin that should be disabled is QUERY_RESPONSE_TIME. It has quite some impact on LOCK_plugin:

-DWITH_QUERY_RESPONSE_TIME=1

| event_name | count_star | sum_timer_wait |
| wait/synch/mutex/sql/LOCK_open | 2743723 | 160451183703111 |
| wait/synch/mutex/mysys/THR_LOCK::mutex | 2743651 | 19122327109683 |
| wait/synch/mutex/sql/LOCK_plugin | 3135747 | 18611700478728 |
| wait/synch/rwlock/sql/LOCK_tdc | 1356784 | 1360918935810 |

-DWITHOUT_QUERY_RESPONSE_TIME=1

completely removes LOCK_plugin from the top-20 list. This is also considered a bug, as the plugin API should be lightweight.

XL
hi!
Another plugin that should be disabled is QUERY_RESPONSE_TIME. It has quite some impact on LOCK_plugin:
-DWITH_QUERY_RESPONSE_TIME=1
+-----------------------------------------+------------+-----------------+ | event_name | count_star | sum_timer_wait | +-----------------------------------------+------------+-----------------+ | wait/synch/mutex/sql/LOCK_open | 2743723 | 160451183703111 | | wait/synch/mutex/mysys/THR_LOCK::mutex | 2743651 | 19122327109683 | | wait/synch/mutex/sql/LOCK_plugin | 3135747 | 18611700478728 | | wait/synch/rwlock/sql/LOCK_tdc | 1356784 | 1360918935810 |
-DWITHOUT_QUERY_RESPONSE_TIME=1
completely removes LOCK_plugin from the top 20 list. This is also considered a bug as the plugin API should be lightweight.
Should the "SET GLOBAL query_response_time_stats=0" solve this contention problem? or the LOCK_plugin is called always, doesn't matter if it's "disabled" or not? -- Roberto Spadim
On Mon, Aug 19, 2013 at 5:17 AM, Roberto Spadim <roberto@spadim.com.br> wrote:
hi!
Another plugin that should be disabled is QUERY_RESPONSE_TIME. It has quite some impact on LOCK_plugin:
-DWITH_QUERY_RESPONSE_TIME=1
+-----------------------------------------+------------+-----------------+ | event_name | count_star | sum_timer_wait | +-----------------------------------------+------------+-----------------+ | wait/synch/mutex/sql/LOCK_open | 2743723 | 160451183703111 | | wait/synch/mutex/mysys/THR_LOCK::mutex | 2743651 | 19122327109683 | | wait/synch/mutex/sql/LOCK_plugin | 3135747 | 18611700478728 | | wait/synch/rwlock/sql/LOCK_tdc | 1356784 | 1360918935810 |
-DWITHOUT_QUERY_RESPONSE_TIME=1
completely removes LOCK_plugin from the top 20 list. This is also considered a bug as the plugin API should be lightweight.
Should the "SET GLOBAL query_response_time_stats=0" solve this contention problem? or the LOCK_plugin is called always, doesn't matter if it's "disabled" or not?
If it's implemented the same way as the semisync plugins (and it should be implemented like that), then LOCK_plugin is locked even when the plugin's functionality is disabled, because it gets disabled using a plugin-specific variable which can be checked only inside the plugin code.

Pavel
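In other words (a hedged sketch with invented function names; query_response_time_stats itself is the variable mentioned above): the on/off switch is only visible from inside the plugin, so the dispatcher has to go through the plugin-locking machinery before the switch can short-circuit anything.

    #include <atomic>

    // Hypothetical sketch: the switch toggled by
    //   SET GLOBAL query_response_time_stats = 0
    // lives inside the plugin, so only plugin code can test it.
    static std::atomic<bool> query_response_time_stats{true};

    void query_response_time_audit_hook() {        // invented name
        if (!query_response_time_stats.load())     // the "disabled" check happens
            return;                                // here, inside the plugin
        /* ... collect statistics ... */
    }
    // The server-side dispatcher cannot see this flag; it has to lock and call
    // into the plugin first, which is why LOCK_plugin shows up even when the
    // feature is switched off.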
Is it reasonable for semi-sync to need to lock/unlock LOCK_plugin that frequently? The plugin architecture, like most of the rest of MySQL, wasn't described to people on the outside (like me). On Mon, Aug 19, 2013 at 6:32 AM, Pavel Ivanov <pivanof@google.com> wrote:
[...]
If it's implemented the same way as semisync plugins (and it should be implemented like that) then LOCK_plugin is locked even when plugin's functionality disabled, because it gets disabled using plugin-specific variable which can be checked only inside the plugin code.
Pavel
-- Mark Callaghan mdcallag@gmail.com
Hmmm, some doubts, since I never used/wrote an audit plugin... The LOCK_plugin use by query response time: it is an audit plugin + an information schema plugin, right? Is the audit plugin 'called' at query start and query end, or are there more locks used? When it is used, what is the lock for - to lock against the UNINSTALL PLUGIN command? It would be nice to have a flow of these plugin locks.
Hi, Roberto! On Aug 19, Roberto Spadim wrote:
hummm some doubts since i never used/write an audit plugin... the lock_plugin of query response time, is a audit plugin + a information schema plugin right?
Query response time is a pair of plugins, yes: one information schema plugin and one audit plugin.
he audit plugin is 'called' at query start and query end? or there's more locks used?
An audit plugin may be called many times per query. But it's locked only once, at the query start, then it's cached in the THD.
when used the locked is used in what case? to lock the UNINSTALL PLUGIN command? should be nice a flow about this plugin locks
LOCK_plugin is used to protect the list of loaded plugins and the hash of plugin names. One takes it to make sure the plugin can not be unloaded by another thread at this very moment. Statically compiled plugins cannot be unloaded, and they don't take LOCK_plugin or do any reference counting at all. Regards, Sergei
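A condensed, self-contained C++ sketch of the rule Sergei describes above; the names PluginEntry, pin_plugin and unpin_plugin are invented, and this is a model of the behaviour rather than the server's actual data structures:

    #include <mutex>
    #include <string>
    #include <unordered_map>

    // LOCK_plugin guards the plugin list / name hash; pinning exists only so
    // that UNINSTALL PLUGIN cannot pull a dynamic plugin out from under a
    // running thread.
    std::mutex lock_plugin;

    struct PluginEntry {
        void* dl_handle = nullptr;   // nullptr for statically compiled plugins
        int   ref_count = 0;         // only meaningful for dynamically loaded ones
    };

    std::unordered_map<std::string, PluginEntry*> plugin_by_name;  // guarded by lock_plugin

    void pin_plugin(PluginEntry* p) {
        if (p->dl_handle == nullptr)
            return;                  // built-in: cannot be unloaded, skip everything
        std::lock_guard<std::mutex> guard(lock_plugin);
        ++p->ref_count;
    }

    void unpin_plugin(PluginEntry* p) {
        if (p->dl_handle == nullptr)
            return;                  // mirrors the plugin_unlock() fast path quoted later in the thread
        std::lock_guard<std::mutex> guard(lock_plugin);
        --p->ref_count;
    }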
No, it's not reasonable for semi-sync to lock/unlock LOCK_plugin. It's the plugin infrastructure that does that.

I was actually terrified to learn that each call into the semi-sync plugin is surrounded with pthread_rwlock_rdlock/pthread_rwlock_unlock (which is not cheap, I believe). And for each such call it also "locks"/"unlocks" the semi-sync plugin. And although "locking" the plugin avoids taking LOCK_plugin when the plugin is linked statically, "unlocking" doesn't do that. All of these calls into the semi-sync plugin happen, I believe, twice for each COMMIT and twice for each binlog event sent to each slave. I'd say that's a big performance hog...

On Mon, Aug 19, 2013 at 8:27 AM, MARK CALLAGHAN <mdcallag@gmail.com> wrote:
Is it reasonable for semi-sync to need to lock/unlock LOCK_plugin that frequently? The plugin architecture, like most of the rest of MySQL, wasn't described to people on the outside (like me).
[...]
It would seem reasonable to have an option to lock certain plugins into a "permanent" loaded state, so that they cannot be unloaded at runtime; all of that per-entry locking could then be avoided with e.g. if (!plugin->permanent) checks around the lock/unlock (a rough sketch of this follows below). A server restart would then be required to unload the plugin.

Regards,
Jeremy

On Mon, Aug 19, 2013 at 10:44 AM, Pavel Ivanov <pivanof@google.com> wrote:
No, it's not reasonable for semi-sync to lock/unlock LOCK_plugin. It's plugin infrastructure that does that.
I've actually was terrified to learn that each call into semi-sync plugin is surrounded with pthread_rwlock_rdlock/pthread_rwlock_unlock (which is not cheap I believe). And also for each such call it "locks"/"unlocks" the semi-sync plugin. And although "locking" plugin avoids locking LOCK_plugin when plugin is linked statically, "unlocking" doesn't do that. And with all that calls into semi-sync plugin happen I believe twice for each COMMIT and twice for each binlog event sent to each slave. I'd say that's a big performance hog...
[...]
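A minimal sketch of what Jeremy suggests above, using the same kind of invented toy structures as the earlier sketches in this thread; the permanent member is purely hypothetical, nothing like it exists in the server today:

    #include <mutex>

    std::mutex lock_plugin;              // stand-in for LOCK_plugin

    struct PluginEntry {
        void* dl_handle = nullptr;       // nullptr for statically linked plugins
        bool  permanent = false;         // hypothetical: locked into loaded state
        int   ref_count = 0;
    };

    // With a "permanent" flag, pinning could skip both the reference count and
    // the global mutex for plugins the DBA has locked into place; unloading
    // them would then require a server restart, as Jeremy suggests.
    void pin_plugin(PluginEntry* p) {
        if (p->dl_handle == nullptr || p->permanent)
            return;                      // nothing can unload this plugin anyway
        std::lock_guard<std::mutex> guard(lock_plugin);
        ++p->ref_count;
    }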
Hi, Pavel! On Aug 19, Pavel Ivanov wrote:
No, it's not reasonable for semi-sync to lock/unlock LOCK_plugin. It's plugin infrastructure that does that.
I've actually was terrified to learn that each call into semi-sync plugin is surrounded with pthread_rwlock_rdlock/pthread_rwlock_unlock (which is not cheap I believe). And also for each such call it "locks"/"unlocks" the semi-sync plugin. And although "locking" plugin avoids locking LOCK_plugin when plugin is linked statically, "unlocking" doesn't do that.
Unlocking does that too, it's completely symmetrical. See plugin_unlock() function, it starts from

  /* built-in plugins don't need ref counting */
  if (!plugin_dlib(plugin))
    DBUG_VOID_RETURN;

Regards,
Sergei
On Tue, Aug 20, 2013 at 1:09 AM, Sergei Golubchik <serg@mariadb.org> wrote:
Hi, Pavel!
On Aug 19, Pavel Ivanov wrote:
No, it's not reasonable for semi-sync to lock/unlock LOCK_plugin. It's plugin infrastructure that does that.
I've actually was terrified to learn that each call into semi-sync plugin is surrounded with pthread_rwlock_rdlock/pthread_rwlock_unlock (which is not cheap I believe). And also for each such call it "locks"/"unlocks" the semi-sync plugin. And although "locking" plugin avoids locking LOCK_plugin when plugin is linked statically, "unlocking" doesn't do that.
Unlocking does that too, it's completely symmetrical. See plugin_unlock() function, it starts from
/* built-in plugins don't need ref counting */ if (!plugin_dlib(plugin)) DBUG_VOID_RETURN;
Sure, but the macro FOREACH_OBSERVER inside rpl_handler.cc doesn't use this function. It uses plugin_unlock_list(), which always locks LOCK_plugin.

BTW, MariaDB still supports compiling the semi-sync plugins dynamically, but it seems that it doesn't do anything to prevent unloading a semi-sync plugin in the middle of a transaction. Did anyone think about this?

Pavel
Hi, Pavel! On Aug 20, Pavel Ivanov wrote:
On Tue, Aug 20, 2013 at 1:09 AM, Sergei Golubchik <serg@mariadb.org> wrote:
On Aug 19, Pavel Ivanov wrote:
No, it's not reasonable for semi-sync to lock/unlock LOCK_plugin. It's plugin infrastructure that does that.
I've actually was terrified to learn that each call into semi-sync plugin is surrounded with pthread_rwlock_rdlock/ pthread_rwlock_unlock (which is not cheap I believe). And also for each such call it "locks"/"unlocks" the semi-sync plugin. And although "locking" plugin avoids locking LOCK_plugin when plugin is linked statically, "unlocking" doesn't do that.
Sure, but macro FOREACH_OBSERVER inside rpl_handler.cc doesn't use this function. It uses plugin_unlock_list() which always locks LOCK_plugin.
BTW, MariaDB still supports compiling semi-sync plugins dynamically, but it seems that it doesn't do anything against unloading semi-sync plugins in the middle of transactions. Did anyone think about this?
Judging from the replication plugin API - I don't think so. But if it adds that much overhead, I suppose we'll need to fix it. Then we fix the unloading too. Regards, Sergei
Is the workaround to use static rather than dynamic plugins? On Tue, Aug 20, 2013 at 7:31 AM, Sergei Golubchik <serg@mariadb.org> wrote:
[...]
Judging from the replication plugin API - I don't think so.
But if it adds that much overhead, I suppose we'll need to fix it. Then we fix the unloading too.
Regards, Sergei
-- Mark Callaghan mdcallag@gmail.com
Workaround for which problem? On Tue, Aug 20, 2013 at 8:17 AM, MARK CALLAGHAN <mdcallag@gmail.com> wrote:
Is the workaround to use static rather than dynamic plugins?
[...]
Mutex contention from various plugins On Tue, Aug 20, 2013 at 9:12 AM, Pavel Ivanov <pivanof@google.com> wrote:
Workaround for which problem?
[...]
-- Mark Callaghan mdcallag@gmail.com
I guess sometimes yes, having statically linked plugins is a workaround for mutex contention. But it's not the case for semi-sync plugins. On Tue, Aug 20, 2013 at 10:11 AM, MARK CALLAGHAN <mdcallag@gmail.com> wrote:
Mutex contention from various plugins
[...]
Hi Monty,
Axel and I did additional analysis of LOCK_open wait time in 10.0.4. Below are a few interesting things that we discovered:
- with 1 CPU, LOCK_open and THR_LOCK::mutex wait times are very close
- with 4 CPUs, LOCK_open wait time is about 5.5 * THR_LOCK::mutex wait time
- in 5.6 with 4 CPUs and table-cache-instances=1, THR_LOCK::mutex wait time is about 1.7% of execution time; with table-cache-instances=16 it is 9.8%
- tc_acquire_table/tc_release_table update 3 lists each, which is ~20 pointer updates per query
- thr_lock/thr_unlock seem to do fewer pointer updates under a read-only load
I guess every pointer update invalidates the corresponding CPU cache lines on the other CPUs and causes expensive main memory reads (see the sketch below). So now I'm trying to find an alternative cache algorithm that will either read or update fewer pointers.
Regards, Sergey
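To make the cache-line argument concrete, here is a minimal sketch. The type and function names are made up for illustration (this is not the actual tc_acquire_table()/tc_release_table() code); only the general pattern of a global mutex guarding several intrusive doubly-linked lists comes from the analysis above.

// Illustrative sketch only; just the pattern (global mutex plus intrusive
// doubly-linked lists updated on every table acquire/release) reflects the
// discussion above.

#include <mutex>

struct TableNode
{
  TableNode *prev = nullptr;  // intrusive doubly-linked list pointers
  TableNode *next = nullptr;
};

struct TableList
{
  TableNode head;             // dummy head, circularly linked
  TableList() { head.prev = head.next = &head; }
};

static std::mutex cache_mutex;                     // stand-in for LOCK_open

// Unlinking a node writes 2 pointers that live in the neighbours' cache lines...
static void unlink_node(TableNode *n)
{
  n->prev->next = n->next;    // write #1
  n->next->prev = n->prev;    // write #2
}

// ...and relinking it at the front of another list writes 4 more.
static void link_front(TableList *l, TableNode *n)
{
  n->next = l->head.next;     // write #3
  n->prev = &l->head;         // write #4
  l->head.next->prev = n;     // write #5 (old front element's cache line)
  l->head.next = n;           // write #6 (list head's cache line)
}

// Moving one table object between two lists (e.g. "unused" -> "in use") is
// already ~6 pointer writes; per the numbers above, the real acquire/release
// pair updates 3 lists each, i.e. on the order of 20 writes per query.
// Every write invalidates that cache line on all other cores, so the next
// core entering the table cache stalls on main-memory reads even when the
// mutex itself is free.
void move_to_front(TableList *to, TableNode *tab)
{
  std::lock_guard<std::mutex> guard(cache_mutex);
  unlink_node(tab);           // tab is assumed to already be on some list
  link_front(to, tab);
}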
participants (7)
- Axel Schwenke
- Jeremy Cole
- MARK CALLAGHAN
- Pavel Ivanov
- Roberto Spadim
- Sergei Golubchik
- Sergey Vojtovich