Hello all, I am encouraged to see this. I thought I would pass this along from our testing. Near the end of releasing TokuDB v7, we tested 3.1.0, 3.2.0, 3.3.0, and 3.3.1. Performance and stability were the same on all of them for TokuDB, so jemalloc 3.1 sounds fine for us.

-Zardosht

On Wed, Apr 24, 2013 at 1:07 PM, Axel Schwenke <axel@askmonty.org> wrote:
Hi Sanja,
Oleksandr Byelkin wrote:
24.04.2013 19:00, Axel Schwenke wrote: [skip]
OLTP ro transactions per second relative to glibc malloc
----------------------------------------------------------------------
Threads   jemalloc-3.3.1   glibc malloc   tcmalloc   jemalloc-3.1.0
      4            +3.2%          +0.0%     +11.6%           +61.9%
      8            +0.9%          +0.0%      -2.3%            +8.0%
     16            +9.6%          +0.0%      +4.7%           +12.9%
     32            +1.8%          +0.0%      +0.9%            +5.8%
     64            -5.1%          +0.0%      +2.1%            +5.4%
    128            -4.8%          +0.0%      +0.2%            +4.5%
    256            -4.2%          +0.0%      +0.7%            +5.0%
    512           -13.3%          +0.0%      +1.8%            +4.1%
Strange that the results jump up and down, when Percona had quite a smooth curve. Is it possible that the computer was doing something else, or do we need more runs to take an average?
It's not so "jumpy" when you look at the absolute numbers. The difference to Percona is, that they tested their own server, where MariaDB tries to minimize the number of malloc() calls.
I wouldn't put too much weight on the numbers for 4 threads, since that is the buffer pool warmup phase. Also, I ran each test for only 100 seconds to get results quickly. Re "jumpy": see the attached PDF with the scatter plot. Each point represents the tps from a 5-second interval. The numbers are in fact rather smooth, except for 512 threads.
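For clarity, here is a minimal sketch (not part of the actual benchmark scripts, and with made-up sample values) of how the numbers fit together: each run produces per-5-second tps samples over the 100-second test, these are averaged into an overall tps, and each allocator is then expressed relative to the glibc malloc baseline, as in the table above. The standard deviation of the samples is what "jumpy" vs. "smooth" refers to.

  from statistics import mean, pstdev

  # Hypothetical per-interval tps samples: 20 intervals of 5 s = 100 s run.
  runs = {
      "glibc malloc":   [4100, 4150, 4120, 4080] * 5,
      "jemalloc-3.1.0": [4310, 4290, 4350, 4330] * 5,
      "jemalloc-3.3.1": [4180, 4200, 3950, 4170] * 5,
  }

  baseline = mean(runs["glibc malloc"])

  for name, samples in runs.items():
      tps = mean(samples)                       # overall tps for this allocator
      rel = (tps - baseline) / baseline * 100   # "relative to glibc malloc"
      spread = pstdev(samples)                  # how "jumpy" the run was
      print(f"{name:15s}  tps={tps:7.1f}  rel={rel:+5.1f}%  stddev={spread:6.1f}")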
The main takeaway is that I can reproduce the regression in jemalloc 3.3. With TokuDB there is no way around jemalloc anyway.
XL