
Thank you Simon!

MJ

On 3/19/25 15:53, Simon Avery wrote:
Hi,
(I would have contributed to this thread on Monday but I didn't see it then - but I agree with everything that's been said about using either jemalloc or tcmalloc. They have helped keep memory usage under control.)
This is how I did it some years ago. It works for both jemalloc and tcmalloc - just change the package and library path. This is for Debian.
1. Install the package with something like:
apt install libtcmalloc-minimal4
2. Deploy the following file to /etc/systemd/system/mariadb.service.d/mariadb_tcmalloc.conf:
---
[Service]
Environment="LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4"
---
3. systemctl daemon-reload
4. systemctl restart mariadb
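For what it's worth, a quick way to confirm the preload actually took effect after step 4 (a sketch; it assumes a single process named mariadbd):

grep -q libtcmalloc /proc/$(pidof mariadbd)/maps && echo "tcmalloc is in use" || echo "tcmalloc is NOT in use"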
For reference, we moved to jemalloc some years ago (before we heard about tcmalloc) for our 80-odd MariaDB servers, and it helped a lot in stopping the memory usage from climbing. Prior to this, a few of our servers would regularly OOM. I still have one using jemalloc that creeps upwards, and I plan to move it to tcmalloc soon. I've had another test machine running tcmalloc on EL9 for a good while with no ill effects.
Also for jemalloc, I stole this rather horrible test to see whether it was being used correctly. There are undoubtedly better ways.
pidof mysqld >/dev/null && perl -e 'if (`pmap \`pidof mysqld\` | grep all` =~ "libjemalloc") { print "jemalloc library is in use\n"; exit 0;} else { print "jemalloc library is NOT in use\n"; exit 1; }' || perl -e 'if (`pmap \`pidof mariadbd\` | grep all` =~ "libjemalloc") { print "jemalloc library is in use\n"; exit 0;} else { print "jemalloc library is NOT in use\n"; exit 1; }'
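A plainer variant of the same check (a sketch; it assumes a single server process named either mariadbd or mysqld, and that pmap is installed):

pid=$(pidof mariadbd || pidof mysqld) && \
  if pmap "$pid" | grep -q libjemalloc; then echo "jemalloc library is in use"; else echo "jemalloc library is NOT in use"; fi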
Hope that's useful.
Simon
-----Original Message-----
From: sacawulu via discuss <discuss@lists.mariadb.org>
Sent: 19 March 2025 14:18
To: discuss@lists.mariadb.org
Subject: [MariaDB discuss] Re: Issue with MariaDB eating all the server memory
Hi,
I've followed this thread with great interest.
Just to confirm: if we also want to change to tcmalloc, running on RHEL 8/9, a quick google tells me that the steps are:
1) sudo yum install gperftools-libs
2) rpm -ql gperftools-libs | grep tcmalloc
(save the file/path)
3) edit the mariadb systemctl file:
sudo systemctl edit mariadb
and add:
[Service]
Environment="LD_PRELOAD=/usr/lib64/libtcmalloc_minimal.so.4"
4) restart mariadb
5) verify from mariadb: SHOW GLOBAL VARIABLES LIKE 'jemalloc%';
6) optimize mariadb:
[mariadb]
malloc-lib=/usr/lib64/libtcmalloc_minimal.so.4
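Pulled together, a sketch of the whole sequence (assuming the path from step 2 turns out to be /usr/lib64/libtcmalloc_minimal.so.4, the systemd unit is called mariadb, and the drop-in filename is arbitrary):

sudo yum install gperftools-libs
rpm -ql gperftools-libs | grep tcmalloc    # note the library path it prints
sudo mkdir -p /etc/systemd/system/mariadb.service.d
printf '[Service]\nEnvironment="LD_PRELOAD=/usr/lib64/libtcmalloc_minimal.so.4"\n' | sudo tee /etc/systemd/system/mariadb.service.d/tcmalloc.conf
sudo systemctl daemon-reload
sudo systemctl restart mariadb
grep -q libtcmalloc /proc/$(pidof mariadbd)/maps && echo "tcmalloc is in use"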
Or is there a simpler/more standard method?
MJ
On 3/17/25 15:48, Derick Turner via discuss wrote:
System (default) made zero difference. tcmalloc, on the other hand, has been a winner! The server I switched over to it has sat at 15-16GB while the rest have continued to consume all of the memory.
I'll switch over all of the servers to use this and I will put the other settings back to where they originally were. (30GB RAM server with 22GB innodb_buffer_pool)
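For reference, the buffer pool setting in question would look something like this in the config (a sketch, assuming a my.cnf-style [mariadb] section):

[mariadb]
innodb_buffer_pool_size = 22G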
Thank you for your help with this. It is very much appreciated!
Kind regards
Derick
On 17/03/2025 12:13, Derick Turner via discuss wrote:
Thanks for the response, Sergei!
I switched over to jemalloc in an effort to resolve the issue, as I had seen some posts suggesting it as a potential option for dealing with memory leaks. I've removed this from one of the servers, which sets it back to system. On the next rotation of restarts I'll change another to tcmalloc, so I can track any differences between the three.
In case this is also related: we do not get any 'InnoDB: Memory pressure' events in the logs. The listener is being started on all instances - I'm assuming this is the mechanism that releases memory from the InnoDB cache back to the OS? There were logged instances of it running when all of the memory was consumed and the system was starting to use swap. However, the OOM killer eventually kicked in and killed the DB process, which is too much of a risk for us to have happen at the moment.
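One way to check for those events (a sketch; it assumes the server logs to the systemd journal under the mariadb unit):

journalctl -u mariadb | grep -i 'memory pressure'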
Kind regards
Derick
On 17/03/2025 11:55, Sergei Golubchik wrote:
Hi, Derick,
According to your SHOW GLOBAL STATUS
Memory_used 15460922288
That is, the server thinks it uses about 15GB.
The difference could be due to memory fragmentation: the server frees the memory, but it cannot be returned to the OS. In this case using a different memory allocator could help (try system or tcmalloc).
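A quick way to compare the two numbers side by side (a sketch; it assumes a local client login over the unix socket and a single process named mariadbd):

mariadb -e "SHOW GLOBAL STATUS LIKE 'Memory_used'"   # what the server thinks it uses, in bytes
ps -o rss= -p "$(pidof mariadbd)"                    # resident set size the OS sees, in KB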
Regards,
Sergei
Chief Architect, MariaDB Server
and security@mariadb.org
On Mar 17, Derick Turner via discuss wrote:
Hi all,
I was pointed to this list from a question I raised on StackExchange (https://dba.stackexchange.com/questions/345743/why-does-my-mariadb-application-use-more-memory-than-configured)
I have a cluster of MariaDB (11.4.5) primary/primary servers running on Ubuntu. I updated the OS on Saturday to 24.04 from 22.04 (and patched the DB to the 11.4.5 noble version) as we were occasionally hitting an OOM event which was causing the database process to be killed. Since then, the DB process takes all of the available server memory before being killed by the OOM killer.
DB is configured to use about 15GB of RAM (from config calculations). Servers currently have 50GB of RAM, and 95% of this is used within about an hour and a half.
Link to document with configuration settings, global status, mariadb.service override and InnoDB status is here - https://docs.google.com/spreadsheets/d/1ev9KRWP8l54FpRrhFeX4uxFhJnOlFkV4_vTCZWipcXA/edit?usp=sharing
Any help would be gratefully received.
Thanks in advance.
Derick
-- Derick Turner - He/Him