From the above I'd say:
- 911 928 - number of executed queries (1x LOCK_tdc per query)
- 1 835 243 / 911 928 = ~2 - LOCK_open was acquired 2x per query
- 4 194 896 / 911 928 = ~4.6 - LOCK_plugin was acquired 4-5x per query
Most probably it means that the table definition cache is full and every query reopens the table (at least 1x LOCK_plugin) and evicts some old table from the cache (at least 1x LOCK_plugin). It may be a bug in the new code.
I wonder how the MariaDB under test was compiled and what flags were used. AFAIK the semisync plugins are compiled in by default. That means that for each transaction ha_commit_trans() will execute

  RUN_HOOK(transaction, after_commit, (thd, FALSE));

which involves locking LOCK_plugin when the semisync plugin is "unlocked" (even though it's compiled in statically). trans_commit() also has this:

  RUN_HOOK(transaction, after_commit, (thd, FALSE));

which likewise involves locking LOCK_plugin.

Pavel

On Sat, Aug 17, 2013 at 11:30 AM, Sergey Vojtovich <svoj@mariadb.org> wrote:
Hi Axel,
thanks for testing, and interesting results. Though our results diverge a lot, generally I confirm that PFS is eating up 10-20% of performance and we're still slower than 5.6. But in my scenario even a single thread is about 10% slower than in 5.6.
Further comments/questions inline.
On 16.08.2013, at 18:11, Axel Schwenke <axel@askmonty.org> wrote:
Hi,
Michael Widenius wrote:
10.0.4 should be very soon ready for benchmarking (Tuesday or Wednesday). We are in the final stage of fixing the failing test cases after our last merge with MySQL 5.6 GA.
I would just prefer to see Sergei Vojtovich push his code that changes how the table cache works.
This solves the same problem as the split table cache in 5.6, but is a much simpler and better implementation.
After this, I would expect that MariaDB 10.0 performance would be closer to 5.6 performance for this benchmark.
Axel, could you at some point run the same benchmark with MariaDB 10.0.4 and MySQL 5.6 with gprof to see where time is spent?
I have now beefed up my knowledge of performance_schema and used it to look at Mutex/RW-lock contention for readonly OLTP benchmark.
-----
TL;DR: 10.0.4 spends a very long time in LOCK_PLUGIN; LOCK_OPEN is indeed much better than in 10.0.3, but not(?) nearly as good as in MySQL-5.6.
-----
Contestants:
- MySQL-5.6.13 (release)
- MariaDB-10.0.3 (release)
- MariaDB-10.0 r3799 (bzr trunk)
all 3 compiled from source on benchmark machine using the same build script
Environment:
- many-core machine (32 cores, 64 threads)
- sysbench-0.5 OLTP readonly, 10M rows, single table
- mysqld bound to 48 cores (sysbench to the other 8)
- tested with 64 threads (this is the sweet spot)
- using tcmalloc (preloaded)
- mutexes and rw-locks fully instrumented via performance_schema, see my.cnf
Benchmark results (transactions per second):
         P_S on   P_S off
MySQL:   ~15K     ~18K
10.0.3:  ~7K      ~7.5K
10.0.4:  ~6.5K    ~9K
After stabilizing I took 3 probes from P_S, sorted by sum_timer_wait desc. Times shown by P_S are in picoseconds; if these were seconds, the decimal point would sit in the column between "m" and "_" in "sum_timer_wait". So, for example, 10.0.4 waited 242 seconds on LOCK_PLUGIN during the 10-second measurement interval. Since we have 48 cores, there are 480 core-seconds in total, and 10.0.4 wasted about 50% of that.
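As a quick sanity check on that arithmetic (illustrative sketch only, not part of the benchmark; the LOCK_plugin figure is the one quoted in the 10.0-r3799 table further down):

```python
# Convert a P_S sum_timer_wait (picoseconds) into seconds and into a
# share of the available CPU time during the 10 s measurement interval.
PICO = 10 ** 12
lock_plugin_wait_ps = 242_595_666_533_160  # 10.0-r3799 LOCK_plugin, from the table
seconds = lock_plugin_wait_ps / PICO       # ~242.6 s of accumulated waiting
core_seconds = 48 * 10                     # 48 cores * 10 s interval = 480 s
share = seconds / core_seconds             # ~0.5, i.e. about 50% of capacity
print(f"{seconds:.1f} s waited = {share:.0%} of {core_seconds} core-seconds")
```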
Questions:
- what is protected by LOCK_PLUGIN?
  Mainly the array of plugins and the array of shared objects containing these plugins, but see below.
- how is LOCK_OPEN split in MySQL-5.6? If some of that is now covered by THR_LOCK::mutex, then it would explain why this is the busiest mutex in MySQL-5.6.
  Nope. It is just split into multiple table caches (or partitioned), protected by LOCK_table_cache and LOCK_open. I guess we have some optimization for THR_LOCK::mutex which Oracle doesn't.
Let me know if you need more data from me or if you have ideas what else I should test.
XL
Top 20 Mutexes/RW-locks
MySQL-5.6.13
+---------------------------------------------+------------+-----------------+
| event_name                                  | count_star | sum_timer_wait  |
+---------------------------------------------+------------+-----------------+
| wait/synch/mutex/mysys/THR_LOCK::mutex      |    3397732 | 123367802416236 |
| wait/synch/rwlock/innodb/btr_search_latch   |    1632176 |  22207357475142 |
| wait/synch/rwlock/sql/LOCK_grant            |    1682983 |   1526912505972 |
| wait/synch/mutex/sql/LOCK_table_cache       |    3397786 |   1247140492941 |
| wait/synch/mutex/sql/THD::LOCK_thd_data     |    9219327 |   1076510130276 |
| wait/synch/rwlock/innodb/hash_table_locks   |    4086413 |   1026755227155 |
| wait/synch/mutex/innodb/trx_sys_mutex       |     364146 |    639270297372 |
| wait/synch/mutex/innodb/os_mutex            |     149491 |    630758755641 |
| wait/synch/rwlock/innodb/index_tree_rw_lock |    1016436 |    344264365074 |
| wait/synch/rwlock/sql/MDL_lock::rwlock      |     242778 |    183463650006 |
| wait/synch/mutex/sql/MDL_map::mutex         |     121342 |     69121460394 |
| wait/synch/mutex/mysys/BITMAP::mutex        |     242667 |     54426177477 |
| wait/synch/mutex/innodb/fil_system_mutex    |     179586 |     20229032103 |
| wait/synch/mutex/sql/LOCK_plugin            |     121352 |     15004585617 |
| wait/synch/mutex/innodb/trx_mutex           |     121475 |     13105527141 |
| wait/synch/mutex/innodb/lock_mutex          |     121374 |     12627692868 |
| wait/synch/mutex/innodb/buf_pool_mutex      |       2240 |       228234006 |
| wait/synch/mutex/innodb/log_sys_mutex       |        540 |       119752122 |
| wait/synch/mutex/innodb/flush_list_mutex    |        704 |        86802870 |
| wait/synch/mutex/innodb/buf_dblwr_mutex     |        640 |        30728817 |
+---------------------------------------------+------------+-----------------+
Indeed, strange that THR_LOCK::mutex takes so much time; I've never seen it in my tests. Also strange that MDL_map::mutex is acquired 121 342 times whereas LOCK_table_cache is acquired 3 397 786 times (they're almost identical in my tests).
MariaDB-10.0.3
+------------------------------------------------+------------+-----------------+
| event_name                                     | count_star | sum_timer_wait  |
+------------------------------------------------+------------+-----------------+
| wait/synch/mutex/sql/LOCK_open                 |    3334966 | 251668176449112 |
| wait/synch/mutex/mysys/THR_LOCK::mutex         |    2206782 |   2761461343332 |
| wait/synch/mutex/sql/LOCK_plugin               |    2540899 |   1471913471196 |
| wait/synch/mutex/sql/THD::LOCK_thd_data        |    6485120 |    770989733856 |
| wait/synch/rwlock/innodb/hash table locks      |    3315788 |    674878255920 |
| wait/synch/rwlock/innodb/btr_search_latch      |    1358771 |    540950632524 |
| wait/synch/mutex/innodb/trx_sys_mutex          |     380210 |    245662486416 |
| wait/synch/rwlock/sql/MDL_lock::rwlock         |     191698 |    125257499748 |
| wait/synch/mutex/sql/TABLE_SHARE::LOCK_ha_data |    1067704 |     95258676492 |
| wait/synch/mutex/mysys/THR_LOCK_lock           |     157621 |     52329482172 |
| wait/synch/mutex/mysys/BITMAP::mutex           |     154870 |     49415303736 |
| wait/synch/mutex/sql/MDL_map::mutex            |      79173 |     41936664096 |
| wait/synch/mutex/innodb/fil_system_mutex       |     129610 |     40258209024 |
| wait/synch/mutex/innodb/lock_mutex             |      93667 |     26848457928 |
| wait/synch/mutex/innodb/trx_mutex              |      94131 |     11398946760 |
| wait/synch/mutex/innodb/buf_pool_mutex         |      12672 |       571983480 |
| wait/synch/mutex/innodb/buf_dblwr_mutex        |       9152 |       312365052 |
| wait/synch/mutex/innodb/log_sys_mutex          |        456 |       138632868 |
| wait/synch/mutex/innodb/flush_list_mutex       |        480 |       109226268 |
| wait/synch/mutex/innodb/os_mutex               |        350 |        71117100 |
+------------------------------------------------+------------+-----------------+
MariaDB-10.0-r3799
+---------------------------------------------+------------+-----------------+
| event_name                                  | count_star | sum_timer_wait  |
+---------------------------------------------+------------+-----------------+
| wait/synch/mutex/sql/LOCK_plugin            |    4194896 | 242595666533160 |
| wait/synch/mutex/sql/LOCK_open              |    1835243 |  33790056812460 |
| wait/synch/mutex/mysys/THR_LOCK::mutex      |    1835221 |   3136249420092 |
| wait/synch/rwlock/sql/LOCK_tdc              |     911928 |   1088553356028 |
| wait/synch/mutex/sql/THD::LOCK_thd_data     |    5766813 |    695664363024 |
| wait/synch/rwlock/innodb/hash table locks   |    2144715 |    513517462020 |
| wait/synch/rwlock/innodb/btr_search_latch   |     915043 |    399309801912 |
| wait/synch/rwlock/sql/MDL_lock::rwlock      |     131171 |    176610102480 |
| wait/synch/mutex/innodb/trx_sys_mutex       |     196751 |    154609451460 |
| wait/synch/rwlock/innodb/index_tree_rw_lock |     527543 |    137903039712 |
| wait/synch/mutex/sql/MDL_map::mutex         |     158268 |     95133455712 |
| wait/synch/mutex/innodb/fil_system_mutex    |      97493 |     27285605172 |
| wait/synch/mutex/mysys/BITMAP::mutex        |     131099 |     26589125196 |
| wait/synch/mutex/mysys/THR_LOCK_lock        |     131105 |     23952666708 |
| wait/synch/mutex/innodb/lock_mutex          |      65579 |     10355822676 |
| wait/synch/mutex/innodb/trx_mutex           |      65812 |      9790097724 |
| wait/synch/mutex/innodb/buf_pool_mutex      |      11550 |       644569236 |
| wait/synch/rwlock/innodb/checkpoint_lock    |          2 |       236298492 |
| wait/synch/mutex/innodb/buf_dblwr_mutex     |       3880 |       156907368 |
| wait/synch/mutex/innodb/flush_list_mutex    |        692 |       124740900 |
+---------------------------------------------+------------+-----------------+
From the above I'd say:
- 911 928 - number of executed queries (1x LOCK_tdc per query)
- 1 835 243 / 911 928 = ~2 - LOCK_open was acquired 2x per query
- 4 194 896 / 911 928 = ~4.6 - LOCK_plugin was acquired 4-5x per query
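That per-query arithmetic can be replayed directly (numbers copied from the r3799 table above; this is just a sanity check, not a new measurement):

```python
# count_star values from the MariaDB-10.0-r3799 table
queries     = 911_928    # LOCK_tdc acquisitions, assumed 1x per query
lock_open   = 1_835_243  # LOCK_open acquisitions
lock_plugin = 4_194_896  # LOCK_plugin acquisitions

open_per_query   = lock_open / queries    # ~2.0
plugin_per_query = lock_plugin / queries  # ~4.6
print(f"LOCK_open/query: {open_per_query:.2f}, "
      f"LOCK_plugin/query: {plugin_per_query:.2f}")
```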
Most probably it means that the table definition cache is full and every query reopens the table (at least 1x LOCK_plugin) and evicts some old table from the cache (at least 1x LOCK_plugin). It may be a bug in the new code.
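The eviction hypothesis can be illustrated with a toy model (purely illustrative Python, not the server's actual table cache code): with an LRU cache even one entry smaller than the working set and a round-robin access pattern, every single lookup misses and evicts.

```python
from collections import OrderedDict

# Toy LRU "table definition cache" whose capacity is below the working set.
class TDCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()
        self.misses = self.evictions = 0

    def get_share(self, table):
        if table in self.cache:
            self.cache.move_to_end(table)   # hit: refresh LRU position
            return
        self.misses += 1                    # miss: "reopen" the table
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # evict the oldest share
            self.evictions += 1
        self.cache[table] = object()

cache = TDCache(capacity=4)
tables = [f"t{i}" for i in range(5)]        # working set of 5 > capacity of 4
for _ in range(100):                        # round-robin access pattern
    for t in tables:
        cache.get_share(t)
print(cache.misses, cache.evictions)        # all 500 accesses miss
```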
From the 5.6 and 10.0.3 metrics I can say that they're not running out of table cache.
If the above is true, a more complex code path is executed under LOCK_open, which may explain the worse numbers.
Comparison
----------
                                                MariaDB-10.0-r3799             MariaDB-10.0.3                 MySQL-5.6.13
+---------------------------------------------+------------+-----------------+------------+-----------------+------------+-----------------+
| event_name                                  | count_star | sum_timer_wait  | count_star | sum_timer_wait  | count_star | sum_timer_wait  |
+---------------------------------------------+------------+-----------------+------------+-----------------+------------+-----------------+
| wait/synch/mutex/sql/LOCK_plugin            |    4194896 | 242595666533160 |    2540899 |   1471913471196 |     121352 |     15004585617 |
| wait/synch/mutex/sql/LOCK_open              |    1835243 |  33790056812460 |    3334966 | 251668176449112 |            |                 |
| wait/synch/mutex/mysys/THR_LOCK::mutex      |    1835221 |   3136249420092 |    2206782 |   2761461343332 |    3397732 | 123367802416236 |
| wait/synch/rwlock/sql/LOCK_tdc              |     911928 |   1088553356028 |            |                 |            |                 |
| wait/synch/mutex/sql/THD::LOCK_thd_data     |    5766813 |    695664363024 |    6485120 |    770989733856 |    9219327 |   1076510130276 |
| wait/synch/rwlock/innodb/hash table locks   |    2144715 |    513517462020 |    3315788 |    674878255920 |    4086413 |   1026755227155 |
| wait/synch/rwlock/innodb/btr_search_latch   |     915043 |    399309801912 |    1358771 |    540950632524 |    1632176 |  22207357475142 |
| wait/synch/rwlock/sql/MDL_lock::rwlock      |     131171 |    176610102480 |     191698 |    125257499748 |     242778 |    183463650006 |
| wait/synch/mutex/innodb/trx_sys_mutex       |     196751 |    154609451460 |     380210 |    245662486416 |     364146 |    639270297372 |
| wait/synch/rwlock/innodb/index_tree_rw_lock |     527543 |    137903039712 |            |                 |    1016436 |    344264365074 |
| wait/synch/mutex/sql/MDL_map::mutex         |     158268 |     95133455712 |      79173 |     41936664096 |     121342 |     69121460394 |
| wait/synch/mutex/innodb/fil_system_mutex    |      97493 |     27285605172 |     129610 |     40258209024 |     179586 |     20229032103 |
| wait/synch/mutex/mysys/BITMAP::mutex        |     131099 |     26589125196 |     154870 |     49415303736 |     242667 |     54426177477 |
| wait/synch/mutex/mysys/THR_LOCK_lock        |     131105 |     23952666708 |     157621 |     52329482172 |            |                 |
| wait/synch/mutex/innodb/lock_mutex          |      65579 |     10355822676 |      93667 |     26848457928 |     121374 |     12627692868 |
| wait/synch/mutex/innodb/trx_mutex           |      65812 |      9790097724 |      94131 |     11398946760 |     121475 |     13105527141 |
| wait/synch/mutex/innodb/buf_pool_mutex      |      11550 |       644569236 |      12672 |       571983480 |       2240 |       228234006 |
| wait/synch/rwlock/innodb/checkpoint_lock    |          2 |       236298492 |            |                 |            |                 |
| wait/synch/mutex/innodb/buf_dblwr_mutex     |       3880 |       156907368 |       9152 |       312365052 |        640 |        30728817 |
| wait/synch/mutex/innodb/flush_list_mutex    |        692 |       124740900 |        480 |       109226268 |        704 |        86802870 |
+---------------------------------------------+------------+-----------------+------------+-----------------+------------+-----------------+
<files.zip>
Would be nice to include LOCK_table_cache here.
Just for your reference, here is the command that I use to gather stats (gives seconds out of the box):

select EVENT_NAME, COUNT_STAR,
       ROUND(SUM_TIMER_WAIT/1000000000000,2) AS SECONDS,
       SUM_TIMER_WAIT, MIN_TIMER_WAIT, AVG_TIMER_WAIT, MAX_TIMER_WAIT
  from performance_schema.events_waits_summary_global_by_event_name
 order by sum_timer_wait desc limit 20
Regards,
Sergey

_______________________________________________
Mailing list: https://launchpad.net/~maria-developers
Post to     : maria-developers@lists.launchpad.net
Unsubscribe : https://launchpad.net/~maria-developers
More help   : https://help.launchpad.net/ListHelp