[Maria-discuss] limiting memory use by mariadb?
Is there some way to set a hard limit on RAM use by the MariaDB server? Thanks, -V
Use Linux control groups, or even simpler, run MariaDB with Docker (which uses control groups as well):

docker run --name mariadb -d -m 10G mariadb:10.1

On Wed, Dec 28, 2016 at 7:48 PM l vic <lvic4594@gmail.com> wrote:
> Is there some way to set a hard limit on RAM use by the MariaDB server? Thanks, -V
_______________________________________________
Mailing list: https://launchpad.net/~maria-discuss
Post to: maria-discuss@lists.launchpad.net
Unsubscribe: https://launchpad.net/~maria-discuss
More help: https://help.launchpad.net/ListHelp
--
Guillaume Lefranc
Remote DBA Services Manager
MariaDB Corporation
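For reference, the cgroup mechanism mentioned above can also be driven by hand; this is a minimal sketch of the steps "docker run -m" performs under the hood. The group name "mariadb" and the cgroup-v1 paths are assumptions (cgroups v2 uses memory.max instead), and the commands are echoed rather than executed so the sketch runs without root:

```shell
#!/bin/sh
# Sketch: the cgroup (v1) steps behind "docker run -m 10G", written out.
# Echoed, not executed; replace run() with eval "$*" to apply for real.
LIMIT=$((10 * 1024 * 1024 * 1024))   # 10G, matching the docker example
run() { echo "+ $*"; }
run "mkdir -p /sys/fs/cgroup/memory/mariadb"
run "echo $LIMIT > /sys/fs/cgroup/memory/mariadb/memory.limit_in_bytes"
run "echo \$(pidof mysqld) > /sys/fs/cgroup/memory/mariadb/cgroup.procs"
```

Once mysqld's pid is in cgroup.procs, the kernel enforces the cap the same way Docker's flag does.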
Hi Guillaume,
I already do, kind of... I run MariaDB in Docker, as a Mesos task. The problem: if memory use by mysqld exceeds the limit allocated to the Mesos task, we get an OOM and Mesos kills the container.
On Wed, Dec 28, 2016 at 1:51 PM, Guillaume Lefranc <guillaume.lefranc@mariadb.com> wrote:
Well, in this case I would take care not to set the buffers too high, so that they do not exceed the limit defined by Mesos. I do not think it warrants using Docker's "--memory" option, which would kill the service anyway... If you run InnoDB, the calculation is a bit complex, but you should be fine with 60% of maximum RAM. There can be overhead from the dictionary if you have many tables, but you can also limit that size.
On Wed, Dec 28, 2016 at 8:04 PM l vic <lvic4594@gmail.com> wrote:
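The 60% rule of thumb above can be sketched as a quick calculation against the container's memory cap. The cgroup-v1 file path is an assumption (it is what a Docker/Mesos container typically exposes); the sketch falls back to the host's MemTotal when it is absent:

```shell
#!/bin/sh
# Sketch: size innodb_buffer_pool_size at ~60% of the memory cap.
# Assumed cgroup v1 path; on cgroups v2 the file is memory.max instead.
limit_file=/sys/fs/cgroup/memory/memory.limit_in_bytes
if [ -r "$limit_file" ]; then
    mem_bytes=$(cat "$limit_file")
else
    # fall back to physical RAM (kB in /proc/meminfo, converted to bytes)
    mem_bytes=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) * 1024 ))
fi
# divide before multiplying to avoid 64-bit overflow on "unlimited" cgroups
pool_bytes=$(( mem_bytes / 100 * 60 ))
echo "innodb_buffer_pool_size = $pool_bytes"
```

The resulting line can be dropped into the [mysqld] section of the config.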
You mean set innodb_buffer_pool_size to use 60% of RAM, correct?
On Wed, Dec 28, 2016 at 2:12 PM, Guillaume Lefranc <guillaume.lefranc@mariadb.com> wrote:
Yes, that's right. It's a conservative figure; the amount of memory used by InnoDB is basically the InnoDB buffer pool + per-page overhead (~1462 bytes) + dictionary + adaptive hash index. On top of that there are the per-connection buffers, but as long as you keep the connection count low and don't mess around with temp tables, you should usually be fine with those.
On Wed, Dec 28, 2016 at 8:16 PM l vic <lvic4594@gmail.com> wrote:
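The estimate above can be worked through with concrete numbers. Apart from the ~1462-byte per-page figure quoted in this thread and the 2G buffer pool from this setup, the dictionary and adaptive-hash-index inputs below are illustrative assumptions, not measurements:

```shell
#!/bin/sh
# Worked example: buffer pool + per-page overhead + dictionary + AHI.
POOL=$((2 * 1024 * 1024 * 1024))          # innodb_buffer_pool_size = 2G
PAGE_SIZE=16384                           # innodb_page_size
OVERHEAD=$(( POOL / PAGE_SIZE * 1462 ))   # ~1462 bytes bookkeeping per page
AHI=$(( POOL / 64 ))                      # adaptive hash index (example fraction)
DICT=$((100 * 1024 * 1024))               # dictionary cache (example, many tables)
TOTAL=$(( POOL + OVERHEAD + AHI + DICT ))
echo "estimated InnoDB footprint: $(( TOTAL / 1024 / 1024 )) MiB"
# prints: estimated InnoDB footprint: 2362 MiB
```

So even before per-connection buffers, a 2G pool costs roughly 2.3-2.4 GiB, which is why the headline figure has to stay well under the container cap.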
From the documentation page: "Be sure the changes are under the [mysqld] section, not some other section." So, if I create a [mysqld] section in /etc/mysql/conf.d/mysql_server.cnf and specify a new value there, will it override the default value in my.cnf?
On Wed, Dec 28, 2016 at 2:26 PM, Guillaume Lefranc <guillaume.lefranc@mariadb.com> wrote:
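On the override question: MariaDB reads its option files in order and the last value read wins, so a [mysqld] entry in a conf.d/ file does override my.cnf in the usual Debian/Ubuntu layout, where my.cnf ends with an !includedir pointing at /etc/mysql/conf.d/. A minimal sketch (written to a temp dir so it runs unprivileged; the real path would be /etc/mysql/conf.d/mysql_server.cnf):

```shell
#!/bin/sh
# Sketch: a conf.d/ override file. Because conf.d/ is included at the end
# of my.cnf, this value wins over any earlier innodb_buffer_pool_size.
CONF_DIR=$(mktemp -d)
cat > "$CONF_DIR/mysql_server.cnf" <<'EOF'
[mysqld]
innodb_buffer_pool_size = 1G
EOF
# To confirm what the server would actually pick up, without starting it:
#   mysqld --print-defaults
cat "$CONF_DIR/mysql_server.cnf"
```

`mysqld --print-defaults` is the quick way to verify the merge order on a given machine.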
Hi, l! On Dec 28, l vic wrote:
> Is there some way to set hard limit for use of RAM by mariaDB server? Thanks, -V
Just FYI: in the next 10.1 release there will be a variable that sets a hard limit on the amount of RAM that can be allocated by one connection.
Regards,
Sergei
Chief Architect MariaDB
and security@mariadb.org
On 28.12.2016 at 19:47, l vic wrote:
> Is there some way to set hard limit for use of RAM by mariaDB server?
And what should happen after it is reached? A hard stop?
Why don't you just configure your server to match your needs?
We have MariaDB instances using between 35 MB and 10 GB of RAM depending on the use case - small instances with a small dataset don't need many caches and buffers, while machines running dbmail need really large buffers.
> and what should happen after reach it? a hard stop?
It should use virtual memory/swap.
> why don't you just configure your server to match your needs?
I don't have control over the cluster, and the MariaDB instances share hardware with other services.
On Wed, Dec 28, 2016 at 6:42 PM, Reindl Harald <h.reindl@thelounge.net> wrote:
On 30.12.2016 at 14:20, l vic wrote:
Jesus christ, fix your mail-client.
Using virtual memory/swap is not the job of the application; it's the job of the underlying operating system to swap out unused pages via LRU algorithms. In most cases, once your OS starts heavily swapping the game is over, so you have to adjust the resource consumption of your services or just add physical memory.
"I don't have control over the cluster and MariaDB instances share" - when you have no control at all you can't change anything; as long as you have control of your "my.cnf", adjust caches and buffers. As I told you, we have MariaDB instances consuming between 35 MB and many GB of RAM, depending on the size of the dataset, buffer-pool settings and so on.
How about max_binlog_cache_size or max_binlog_stmt_cache_size? max_binlog_cache_size defaults to 18446744073709547520. Can it be more than available RAM?
On Fri, Dec 30, 2016 at 9:55 AM, Reindl Harald <h.reindl@thelounge.net> wrote:
Don't come up with random solutions; instead, describe your problem properly and show your real config. The subject of this thread is nonsense, and the options you mention are *not* your problem in real life.
On 03.01.2017 at 20:13, l vic wrote:
I didn't suggest it as a "solution"; I was just wondering whether the huge default value can cause problems. The real problem is that the memory consumption of mysqld grows over time until the container OOMs and is killed. Would my "real config" be the list of system variables?

| Variable_name | Value |
| innodb_adaptive_flushing | ON |
| innodb_adaptive_flushing_lwm | 10.000000 |
| innodb_adaptive_hash_index | ON |
| innodb_adaptive_hash_index_partitions | 1 |
| innodb_adaptive_max_sleep_delay | 150000 |
| innodb_additional_mem_pool_size | 8388608 |
| innodb_api_bk_commit_interval | 5 |
| innodb_api_disable_rowlock | OFF |
| innodb_api_enable_binlog | OFF |
| innodb_api_enable_mdl | OFF |
| innodb_api_trx_level | 0 |
| innodb_autoextend_increment | 64 |
| innodb_autoinc_lock_mode | 2 |
| innodb_background_scrub_data_check_interval | 3600 |
| innodb_background_scrub_data_compressed | OFF |
| innodb_background_scrub_data_interval | 604800 |
| innodb_background_scrub_data_uncompressed | OFF |
| innodb_buf_dump_status_frequency | 0 |
| innodb_buffer_pool_dump_at_shutdown | OFF |
| innodb_buffer_pool_dump_now | OFF |
| innodb_buffer_pool_dump_pct | 100 |
| innodb_buffer_pool_filename | ib_buffer_pool |
| innodb_buffer_pool_instances | 8 |
| innodb_buffer_pool_load_abort | OFF |
| innodb_buffer_pool_load_at_startup | OFF |
| innodb_buffer_pool_load_now | OFF |
| innodb_buffer_pool_populate | OFF |
| innodb_buffer_pool_size | 2147483648 |
| innodb_change_buffer_max_size | 25 |
| innodb_change_buffering | all |
| innodb_checksum_algorithm | INNODB |
| innodb_checksums | ON |
| innodb_cleaner_lsn_age_factor | HIGH_CHECKPOINT |
| innodb_cmp_per_index_enabled | OFF |
| innodb_commit_concurrency | 0 |
| innodb_compression_algorithm | none |
| innodb_compression_failure_threshold_pct | 5 |
| innodb_compression_level | 6 |
| innodb_compression_pad_pct_max | 50 |
| innodb_concurrency_tickets | 5000 |
| innodb_corrupt_table_action | assert |
| innodb_data_file_path | ibdata1:12M:autoextend |
| innodb_data_home_dir | |
| innodb_default_encryption_key_id | 1 |
| innodb_defragment | OFF |
| innodb_defragment_fill_factor | 0.900000 |
| innodb_defragment_fill_factor_n_recs | 20 |
| innodb_defragment_frequency | 40 |
| innodb_defragment_n_pages | 7 |
| innodb_defragment_stats_accuracy | 0 |
| innodb_disable_sort_file_cache | OFF |
| innodb_disallow_writes | OFF |
| innodb_doublewrite | ON |
| innodb_empty_free_list_algorithm | BACKOFF |
| innodb_encrypt_log | OFF |
| innodb_encrypt_tables | OFF |
| innodb_encryption_rotate_key_age | 1 |
| innodb_encryption_rotation_iops | 100 |
| innodb_encryption_threads | 0 |
| innodb_fake_changes | OFF |
| innodb_fast_shutdown | 1 |
| innodb_fatal_semaphore_wait_threshold | 600 |
| innodb_file_format | Antelope |
| innodb_file_format_check | ON |
| innodb_file_format_max | Antelope |
| innodb_file_per_table | ON |
| innodb_flush_log_at_timeout | 1 |
| innodb_flush_log_at_trx_commit | 1 |
| innodb_flush_method | O_DIRECT |
| innodb_flush_neighbors | 1 |
| innodb_flushing_avg_loops | 30 |
| innodb_force_load_corrupted | OFF |
| innodb_force_primary_key | OFF |
| innodb_force_recovery | 0 |
| innodb_foreground_preflush | EXPONENTIAL_BACKOFF |
| innodb_ft_aux_table | |
| innodb_ft_cache_size | 8000000 |
| innodb_ft_enable_diag_print | OFF |
| innodb_ft_enable_stopword | ON |
| innodb_ft_max_token_size | 84 |
| innodb_ft_min_token_size | 3 |
| innodb_ft_num_word_optimize | 2000 |
| innodb_ft_result_cache_limit | 2000000000 |
| innodb_ft_server_stopword_table | |
| innodb_ft_sort_pll_degree | 2 |
| innodb_ft_total_cache_size | 640000000 |
| innodb_ft_user_stopword_table | |
| innodb_idle_flush_pct | 100 |
| innodb_immediate_scrub_data_uncompressed | OFF |
| innodb_instrument_semaphores | OFF |
| innodb_io_capacity | 400 |
| innodb_io_capacity_max | 2000 |
| innodb_kill_idle_transaction | 0 |
| innodb_large_prefix | OFF |
| innodb_lock_wait_timeout | 50 |
| innodb_locking_fake_changes | ON |
| innodb_locks_unsafe_for_binlog | ON |
| innodb_log_arch_dir | ./ |
| innodb_log_arch_expire_sec | 0 |
| innodb_log_archive | OFF |
| innodb_log_block_size | 512 |
| innodb_log_buffer_size | 8388608 |
| innodb_log_checksum_algorithm | INNODB |
| innodb_log_compressed_pages | OFF |
| innodb_log_file_size | 50331648 |
| innodb_log_files_in_group | 2 |
| innodb_log_group_home_dir | ./ |
| innodb_lru_scan_depth | 1024 |
| innodb_max_bitmap_file_size | 104857600 |
| innodb_max_changed_pages | 1000000 |
| innodb_max_dirty_pages_pct | 75.000000 |
| innodb_max_dirty_pages_pct_lwm | 0.001000 |
| innodb_max_purge_lag | 0 |
| innodb_max_purge_lag_delay | 0 |
| innodb_mirrored_log_groups | 1 |
| innodb_monitor_disable | |
| innodb_monitor_enable | |
| innodb_monitor_reset | |
| innodb_monitor_reset_all | |
| innodb_mtflush_threads | 8 |
| innodb_old_blocks_pct | 37 |
| innodb_old_blocks_time | 1000 |
| innodb_online_alter_log_max_size | 134217728 |
| innodb_open_files | 400 |
| innodb_optimize_fulltext_only | OFF |
| innodb_page_size | 16384 |
| innodb_prefix_index_cluster_optimization | OFF |
| innodb_print_all_deadlocks | OFF |
| innodb_purge_batch_size | 300 |
| innodb_purge_threads | 1 |
| innodb_random_read_ahead | OFF |
| innodb_read_ahead_threshold | 56 |
| innodb_read_io_threads | 4 |
| innodb_read_only | OFF |
| innodb_replication_delay | 0 |
| innodb_rollback_on_timeout | OFF |
| innodb_rollback_segments | 128 |
| innodb_sched_priority_cleaner | 19 |
| innodb_scrub_log | OFF |
| innodb_scrub_log_speed | 256 |
| innodb_show_locks_held | 10 |
| innodb_show_verbose_locks | 0 |
| innodb_simulate_comp_failures | 0 |
| innodb_sort_buffer_size | 1048576 |
| innodb_spin_wait_delay | 6 |
| innodb_stats_auto_recalc | ON |
| innodb_stats_method | nulls_equal |
| innodb_stats_modified_counter | 0 |
| innodb_stats_on_metadata | OFF |
| innodb_stats_persistent | ON |
| innodb_stats_persistent_sample_pages | 20 |
| innodb_stats_sample_pages | 8 |
| innodb_stats_traditional | ON |
| innodb_stats_transient_sample_pages | 8 |
| innodb_status_output | OFF |
| innodb_status_output_locks | OFF |
| innodb_strict_mode | OFF |
| innodb_support_xa | ON |
| innodb_sync_array_size | 1 |
| innodb_sync_spin_loops | 30 |
| innodb_table_locks | ON |
| innodb_thread_concurrency | 0 |
| innodb_thread_sleep_delay | 10000 |
| innodb_track_changed_pages | OFF |
| innodb_undo_directory | . |
| innodb_undo_logs | 128 |
| innodb_undo_tablespaces | 0 |
| innodb_use_atomic_writes | OFF |
| innodb_use_fallocate | OFF |
| innodb_use_global_flush_log_at_trx_commit | ON |
| innodb_use_mtflush | OFF |
| innodb_use_native_aio | ON |
| innodb_use_stacktrace | OFF |
| innodb_use_sys_malloc | ON |
| innodb_use_trim | OFF |
| innodb_version | 5.6.28-76.1 |
| innodb_write_io_threads | 4 |

On Tue, Jan 3, 2017 at 2:39 PM, Reindl Harald <h.reindl@thelounge.net> wrote:
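To narrow down what is actually growing before the OOM, it can help to record mysqld's resident size over time and correlate the curve with workload. A minimal sketch (Linux /proc layout assumed; pass mysqld's pid as the first argument, e.g. "$(pidof mysqld)" - it defaults to the current shell so the sketch is runnable standalone):

```shell
#!/bin/sh
# Sketch: sample a process's VmRSS at a fixed interval.
pid=${1:-$$}        # target pid; defaults to this shell for a dry run
count=${2:-3}       # number of samples
interval=${3:-1}    # seconds between samples
i=0
while [ "$i" -lt "$count" ] && kill -0 "$pid" 2>/dev/null; do
    rss_kb=$(awk '/^VmRSS/ {print $2}' "/proc/$pid/status")
    echo "$(date -u +%FT%TZ) pid=$pid rss_kb=$rss_kb"
    i=$((i + 1))
    sleep "$interval"
done
```

A steadily climbing rss_kb under constant load points at per-connection or dictionary growth rather than the fixed-size buffer pool.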
That list only shows InnoDB settings, which is not everything.
How large is your *dataset*?
How many connections do you have at peak times?
How much memory does your system have for the workload?
How is the query cache configured?
Why don't you just use MyISAM if you don't have suitable resources?
What is the content of your "my.cnf"?
If your container is OOM-killed you likely don't have enough physical resources for your dataset and should fix that properly instead of crippling services - we are in 2017, not in the 1990's when 64 MB was common.
On 04.01.2017 at 22:21, l vic wrote:
I didn't suggest it as "solution", just wondering if huge default value can cause problems. The real problem is that memory consumption of mysqld is growing in time until the container oom and is killed; What would be my "read config": list of system variables? +---------------------------------------------+------------------------+ | Variable_name | Value | +---------------------------------------------+------------------------+ | innodb_adaptive_flushing | ON | | innodb_adaptive_flushing_lwm | 10.000000 | | innodb_adaptive_hash_index | ON | | innodb_adaptive_hash_index_partitions | 1 | | innodb_adaptive_max_sleep_delay | 150000 | | innodb_additional_mem_pool_size | 8388608 | | innodb_api_bk_commit_interval | 5 | | innodb_api_disable_rowlock | OFF | | innodb_api_enable_binlog | OFF | | innodb_api_enable_mdl | OFF | | innodb_api_trx_level | 0 | | innodb_autoextend_increment | 64 | | innodb_autoinc_lock_mode | 2 | | innodb_background_scrub_data_check_interval | 3600 | | innodb_background_scrub_data_compressed | OFF | | innodb_background_scrub_data_interval | 604800 | | innodb_background_scrub_data_uncompressed | OFF | | innodb_buf_dump_status_frequency | 0 | | innodb_buffer_pool_dump_at_shutdown | OFF | | innodb_buffer_pool_dump_now | OFF | | innodb_buffer_pool_dump_pct | 100 | | innodb_buffer_pool_filename | ib_buffer_pool | | innodb_buffer_pool_instances | 8 | | innodb_buffer_pool_load_abort | OFF | | innodb_buffer_pool_load_at_startup | OFF | | innodb_buffer_pool_load_now | OFF | | innodb_buffer_pool_populate | OFF | | innodb_buffer_pool_size | 2147483648 | | innodb_change_buffer_max_size | 25 | | innodb_change_buffering | all | | innodb_checksum_algorithm | INNODB | | innodb_checksums | ON | | innodb_cleaner_lsn_age_factor | HIGH_CHECKPOINT | | innodb_cmp_per_index_enabled | OFF | | innodb_commit_concurrency | 0 | | innodb_compression_algorithm | none | | innodb_compression_failure_threshold_pct | 5 | | innodb_compression_level | 6 | | 
| innodb_compression_pad_pct_max              | 50                     |
| innodb_concurrency_tickets                  | 5000                   |
| innodb_corrupt_table_action                 | assert                 |
| innodb_data_file_path                       | ibdata1:12M:autoextend |
| innodb_data_home_dir                        |                        |
| innodb_default_encryption_key_id            | 1                      |
| innodb_defragment                           | OFF                    |
| innodb_defragment_fill_factor               | 0.900000               |
| innodb_defragment_fill_factor_n_recs        | 20                     |
| innodb_defragment_frequency                 | 40                     |
| innodb_defragment_n_pages                   | 7                      |
| innodb_defragment_stats_accuracy            | 0                      |
| innodb_disable_sort_file_cache              | OFF                    |
| innodb_disallow_writes                      | OFF                    |
| innodb_doublewrite                          | ON                     |
| innodb_empty_free_list_algorithm            | BACKOFF                |
| innodb_encrypt_log                          | OFF                    |
| innodb_encrypt_tables                       | OFF                    |
| innodb_encryption_rotate_key_age            | 1                      |
| innodb_encryption_rotation_iops             | 100                    |
| innodb_encryption_threads                   | 0                      |
| innodb_fake_changes                         | OFF                    |
| innodb_fast_shutdown                        | 1                      |
| innodb_fatal_semaphore_wait_threshold       | 600                    |
| innodb_file_format                          | Antelope               |
| innodb_file_format_check                    | ON                     |
| innodb_file_format_max                      | Antelope               |
| innodb_file_per_table                       | ON                     |
| innodb_flush_log_at_timeout                 | 1                      |
| innodb_flush_log_at_trx_commit              | 1                      |
| innodb_flush_method                         | O_DIRECT               |
| innodb_flush_neighbors                      | 1                      |
| innodb_flushing_avg_loops                   | 30                     |
| innodb_force_load_corrupted                 | OFF                    |
| innodb_force_primary_key                    | OFF                    |
| innodb_force_recovery                       | 0                      |
| innodb_foreground_preflush                  | EXPONENTIAL_BACKOFF    |
| innodb_ft_aux_table                         |                        |
| innodb_ft_cache_size                        | 8000000                |
| innodb_ft_enable_diag_print                 | OFF                    |
| innodb_ft_enable_stopword                   | ON                     |
| innodb_ft_max_token_size                    | 84                     |
| innodb_ft_min_token_size                    | 3                      |
| innodb_ft_num_word_optimize                 | 2000                   |
| innodb_ft_result_cache_limit                | 2000000000             |
| innodb_ft_server_stopword_table             |                        |
| innodb_ft_sort_pll_degree                   | 2                      |
| innodb_ft_total_cache_size                  | 640000000              |
| innodb_ft_user_stopword_table               |                        |
| innodb_idle_flush_pct                       | 100                    |
| innodb_immediate_scrub_data_uncompressed    | OFF                    |
| innodb_instrument_semaphores                | OFF                    |
| innodb_io_capacity                          | 400                    |
| innodb_io_capacity_max                      | 2000                   |
| innodb_kill_idle_transaction                | 0                      |
| innodb_large_prefix                         | OFF                    |
| innodb_lock_wait_timeout                    | 50                     |
| innodb_locking_fake_changes                 | ON                     |
| innodb_locks_unsafe_for_binlog              | ON                     |
| innodb_log_arch_dir                         | ./                     |
| innodb_log_arch_expire_sec                  | 0                      |
| innodb_log_archive                          | OFF                    |
| innodb_log_block_size                       | 512                    |
| innodb_log_buffer_size                      | 8388608                |
| innodb_log_checksum_algorithm               | INNODB                 |
| innodb_log_compressed_pages                 | OFF                    |
| innodb_log_file_size                        | 50331648               |
| innodb_log_files_in_group                   | 2                      |
| innodb_log_group_home_dir                   | ./                     |
| innodb_lru_scan_depth                       | 1024                   |
| innodb_max_bitmap_file_size                 | 104857600              |
| innodb_max_changed_pages                    | 1000000                |
| innodb_max_dirty_pages_pct                  | 75.000000              |
| innodb_max_dirty_pages_pct_lwm              | 0.001000               |
| innodb_max_purge_lag                        | 0                      |
| innodb_max_purge_lag_delay                  | 0                      |
| innodb_mirrored_log_groups                  | 1                      |
| innodb_monitor_disable                      |                        |
| innodb_monitor_enable                       |                        |
| innodb_monitor_reset                        |                        |
| innodb_monitor_reset_all                    |                        |
| innodb_mtflush_threads                      | 8                      |
| innodb_old_blocks_pct                       | 37                     |
| innodb_old_blocks_time                      | 1000                   |
| innodb_online_alter_log_max_size            | 134217728              |
| innodb_open_files                           | 400                    |
| innodb_optimize_fulltext_only               | OFF                    |
| innodb_page_size                            | 16384                  |
| innodb_prefix_index_cluster_optimization    | OFF                    |
| innodb_print_all_deadlocks                  | OFF                    |
| innodb_purge_batch_size                     | 300                    |
| innodb_purge_threads                        | 1                      |
| innodb_random_read_ahead                    | OFF                    |
| innodb_read_ahead_threshold                 | 56                     |
| innodb_read_io_threads                      | 4                      |
| innodb_read_only                            | OFF                    |
| innodb_replication_delay                    | 0                      |
| innodb_rollback_on_timeout                  | OFF                    |
| innodb_rollback_segments                    | 128                    |
| innodb_sched_priority_cleaner               | 19                     |
| innodb_scrub_log                            | OFF                    |
| innodb_scrub_log_speed                      | 256                    |
| innodb_show_locks_held                      | 10                     |
| innodb_show_verbose_locks                   | 0                      |
| innodb_simulate_comp_failures               | 0                      |
| innodb_sort_buffer_size                     | 1048576                |
| innodb_spin_wait_delay                      | 6                      |
| innodb_stats_auto_recalc                    | ON                     |
| innodb_stats_method                         | nulls_equal            |
| innodb_stats_modified_counter               | 0                      |
| innodb_stats_on_metadata                    | OFF                    |
| innodb_stats_persistent                     | ON                     |
| innodb_stats_persistent_sample_pages        | 20                     |
| innodb_stats_sample_pages                   | 8                      |
| innodb_stats_traditional                    | ON                     |
| innodb_stats_transient_sample_pages         | 8                      |
| innodb_status_output                        | OFF                    |
| innodb_status_output_locks                  | OFF                    |
| innodb_strict_mode                          | OFF                    |
| innodb_support_xa                           | ON                     |
| innodb_sync_array_size                      | 1                      |
| innodb_sync_spin_loops                      | 30                     |
| innodb_table_locks                          | ON                     |
| innodb_thread_concurrency                   | 0                      |
| innodb_thread_sleep_delay                   | 10000                  |
| innodb_track_changed_pages                  | OFF                    |
| innodb_undo_directory                       | .                      |
| innodb_undo_logs                            | 128                    |
| innodb_undo_tablespaces                     | 0                      |
| innodb_use_atomic_writes                    | OFF                    |
| innodb_use_fallocate                        | OFF                    |
| innodb_use_global_flush_log_at_trx_commit   | ON                     |
| innodb_use_mtflush                          | OFF                    |
| innodb_use_native_aio                       | ON                     |
| innodb_use_stacktrace                       | OFF                    |
| innodb_use_sys_malloc                       | ON                     |
| innodb_use_trim                             | OFF                    |
| innodb_version                              | 5.6.28-76.1            |
| innodb_write_io_threads                     | 4                      |
+---------------------------------------------+------------------------+
On Tue, Jan 3, 2017 at 2:39 PM, Reindl Harald <h.reindl@thelounge.net> wrote:
don't come with random solutions; instead, describe your problem properly and show your real config. The subject of this thread is nonsense and the options below are *not* your problem in real life
On 03.01.2017 at 20:13, l vic wrote:
How about max_binlog_cache_size or max_binlog_stmt_cache_size? max_binlog_cache_size defaults to 18446744073709547520. Can it be more than available RAM?
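A side note on that default value (a sketch; only the arithmetic below is claimed with certainty): it is not a preallocated buffer but a per-transaction upper bound, and the number itself is simply 2^64 minus 4096, i.e. an "effectively unlimited" sentinel:

```python
# max_binlog_cache_size's default is 2**64 - 4096: an "unlimited" sentinel,
# not an allocation, so it can legitimately exceed available RAM.
default_max_binlog_cache = 18446744073709547520
print(default_max_binlog_cache == 2**64 - 4096)  # True

# Memory is actually consumed per transaction in binlog_cache_size units
# (32 KB by default); larger transaction caches spill to a temporary disk
# file, and max_binlog_cache_size only caps that spill.
```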
On Fri, Dec 30, 2016 at 9:55 AM, Reindl Harald <h.reindl@thelounge.net> wrote:
On 30.12.2016 at 14:20, l vic wrote:
> and what should happen after reaching it? a hard stop?
Should use virt. memory/swap
> why don't you just configure your server to match your needs?
I don't have control over the cluster and MariaDB instances share hardware with other services
jesus christ fix your mail-client
using virtual memory / swap is not the job of the application; it's the job of the underlying operating system to swap out unused pages by LRU algorithms. In most cases, once your OS starts heavily swapping the game is over, so you have to adjust the resource consumption of your services or just add physical memory
"I don't have control over the cluster and MariaDB instances share" - when you have no control at all you can't change anything; as long as you have control of your "my.cnf", adjust caches and buffers as I told you. We have MariaDB instances consuming between 35 MB and many GB of RAM, depending on the size of the dataset, buffer-pool settings and so on
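The sizing advice above can be turned into a quick estimate. A rough sketch using the classic pessimistic formula (global buffers plus max_connections times per-connection buffers); the values in MB are taken from the my.cnf posted later in this thread, and real usage is normally lower because not every connection allocates every buffer at once:

```python
# Worst-case memory estimate for mysqld: global buffers plus
# max_connections times the per-connection buffers (all values in MB).
global_buffers = 256 + 128 + 64 + 8   # innodb_buffer_pool, key_buffer, query_cache, innodb_log_buffer
per_connection = 4 + 2 + 1 + 16 + 32  # sort, read, read_rnd, bulk_insert, tmp_table
max_connections = 100

worst_case = global_buffers + per_connection * max_connections
print(worst_case)  # 5956
```

At 5956 MB (~5.8 GB), the pessimistic bound is already above the ~5.3 GB container limit mentioned in the thread, which is consistent with the OOM kills under mysqlslap's 99 concurrent clients.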
On Wed, Dec 28, 2016 at 6:42 PM, Reindl Harald <h.reindl@thelounge.net> wrote:
On 28.12.2016 at 19:47, l vic wrote:
Is there some way to set hard limit for use of RAM by mariaDB server?
and what should happen after reaching it? a hard stop?
why don't you just configure your server to match your needs?
we have MariaDB instances between 35 MB and 10 GB RAM depending on the use case: small instances with a small dataset don't need many caches and buffers, machines running dbmail need really large buffers
dataset size: 1000000 rows. 5 int cols. 7 char cols
I run mysqlslap for 99 concurrent clients (out of 100 max)
Container memory is limited to 5.268G
Query cache configuration is default, I didn't try to change it
Why is the MyISAM engine more suitable in a resource-constrained environment? What are the cons of using it?
Below is my "my.cnf":

# MariaDB database server configuration file.
#
# You can copy this file to one of:
# - "/etc/mysql/my.cnf" to set global options,
# - "~/.my.cnf" to set user-specific options.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# For explanations see
# http://dev.mysql.com/doc/mysql/en/server-system-variables.html

# This will be passed to all mysql clients
# It has been reported that passwords should be enclosed with ticks/quotes
# especially if they contain "#" chars...
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.
[client]
port            = 3306
socket          = /var/run/mysqld/mysqld.sock

# Here is entries for some specific programs
# The following values assume you have at least 32M ram

# This was formally known as [safe_mysqld]. Both versions are currently parsed.
[mysqld_safe]
socket          = /var/run/mysqld/mysqld.sock
nice            = 0

[mysqld]
skip-host-cache
skip-name-resolve
#
# * Basic Settings
#
user            = mysql
pid-file        = /var/run/mysqld/mysqld.pid
socket          = /var/run/mysqld/mysqld.sock
port            = 3306
basedir         = /usr
datadir         = /var/lib/mysql
tmpdir          = /tmp
lc_messages_dir = /usr/share/mysql
lc_messages     = en_US
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
#bind-address           = 127.0.0.1
#
# * Fine Tuning
#
max_connections         = 100
connect_timeout         = 5
wait_timeout            = 600
max_allowed_packet      = 16M
thread_cache_size       = 128
sort_buffer_size        = 4M
bulk_insert_buffer_size = 16M
tmp_table_size          = 32M
max_heap_table_size     = 32M
#
# * MyISAM
#
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched. On error, make copy and try a repair.
myisam_recover_options  = BACKUP
key_buffer_size         = 128M
#open-files-limit       = 2000
table_open_cache        = 400
myisam_sort_buffer_size = 512M
concurrent_insert       = 2
read_buffer_size        = 2M
read_rnd_buffer_size    = 1M
#
# * Query Cache Configuration
#
# Cache only tiny result sets, so we can fit more in the query cache.
query_cache_limit       = 128K
query_cache_size        = 64M
# for more write intensive setups, set to DEMAND or OFF
#query_cache_type       = DEMAND
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
#general_log_file       = /var/log/mysql/mysql.log
#general_log            = 1
#
# Error logging goes to syslog due to /etc/mysql/conf.d/mysqld_safe_syslog.cnf.
#
# we do want to know about network errors and such
#log_warnings           = 2
#
# Enable the slow query log to see queries with especially long duration
#slow_query_log[={0|1}]
slow_query_log_file     = /var/log/mysql/mariadb-slow.log
long_query_time         = 10
#log_slow_rate_limit    = 1000
#log_slow_verbosity     = query_plan
#log-queries-not-using-indexes
#log_slow_admin_statements
#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
# other settings you may need to change.
#server-id              = 1
#report_host            = master1
#auto_increment_increment = 2
#auto_increment_offset  = 1
#log_bin                = /var/log/mysql/mariadb-bin
#log_bin_index          = /var/log/mysql/mariadb-bin.index
# not fab for performance, but safer
#sync_binlog            = 1
expire_logs_days        = 10
max_binlog_size         = 100M
# slaves
#relay_log              = /var/log/mysql/relay-bin
#relay_log_index        = /var/log/mysql/relay-bin.index
#relay_log_info_file    = /var/log/mysql/relay-bin.info
#log_slave_updates
#read_only
#
# If applications support it, this stricter sql_mode prevents some
# mistakes like inserting invalid dates etc.
#sql_mode               = NO_ENGINE_SUBSTITUTION,TRADITIONAL
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
default_storage_engine  = InnoDB
# you can't just change log file size, requires special procedure
#innodb_log_file_size   = 50M
innodb_buffer_pool_size = 256M
innodb_log_buffer_size  = 8M
innodb_file_per_table   = 1
innodb_open_files       = 400
innodb_io_capacity      = 400
innodb_flush_method     = O_DIRECT
#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem
#
# * Galera-related settings
#
[galera]
# Mandatory settings
#wsrep_on=ON
#wsrep_provider=
#wsrep_cluster_address=
#binlog_format=row
#default_storage_engine=InnoDB
#innodb_autoinc_lock_mode=2
#
# Allow server to accept connections on all interfaces.
#
#bind-address=0.0.0.0
#
# Optional setting
#wsrep_slave_threads=1
#innodb_flush_log_at_trx_commit=0

[mysqldump]
quick
quote-names
max_allowed_packet      = 16M

[mysql]
#no-auto-rehash # faster start of mysql but no tab completion

[isamchk]
key_buffer              = 16M

#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#
!includedir /etc/mysql/conf.d/

On Wed, Jan 4, 2017 at 6:31 PM, Reindl Harald <h.reindl@thelounge.net> wrote:
the stuff below only shows InnoDB settings, which is not everything

how large is your *dataset*
how many connections do you have at peak times
how much memory does your system have for the workload
how is the query cache configured
why don't you just use MyISAM if you don't have suitable resources
what is the content of your "my.cnf"

if your container is OOM-killed you likely don't have enough physical resources for your dataset and should fix that properly instead of crippling down services; we are in 2017, not the 1990's when 64 MB was common
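To see what limit the container actually received, the cgroup memory files can be read from inside it. A small sketch (the file paths are the standard cgroup v1/v2 locations; the sample values below are illustrative, not taken from this thread):

```python
def parse_cgroup_memory_limit(raw: str):
    """Parse the contents of a cgroup memory-limit file.
    The literal string 'max' (cgroup v2) means unlimited -> None."""
    raw = raw.strip()
    return None if raw == "max" else int(raw)

# Inside the container, the effective limit lives in one of:
#   cgroup v1: /sys/fs/cgroup/memory/memory.limit_in_bytes
#   cgroup v2: /sys/fs/cgroup/memory.max
print(parse_cgroup_memory_limit("5368709120"))  # 5368709120 (5 GiB)
print(parse_cgroup_memory_limit("max"))         # None (no limit set)
```

Comparing this number against the worst-case buffer math is a quick way to tell whether an OOM kill is a misconfiguration rather than a leak.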
On 04.01.2017 at 22:21, l vic wrote:
I didn't suggest it as a "solution", just wondering if the huge default value can cause problems. The real problem is that memory consumption of mysqld grows over time until the container OOMs and is killed. What would be my "real config": the list of system variables?

[full SHOW VARIABLES output trimmed; it is identical to the dump quoted earlier in the thread]
On 05.01.2017 at 17:13, l vic wrote:
dataset size: 1000000 rows. 5 int cols. 7 char cols
I run mysqlslap for 99 concurrent clients (out of 100 max)
Container memory is limited to 5.268G
Query cache configuration is default, i didn't try to change it
Why MyISAM engine is more suitable in resource constrained env? What's the cons of using it?
because when you disable InnoDB in my.cnf you don't have the overhead of the buffer pool and other InnoDB-related structures; MyISAM just works with the files on disk, and caching is done by the OS, aka the virtual memory you wanted to use for swapping at the beginning of the thread. Cons: no transaction support. Just read about the different storage engines; there is a lot of documentation on the web
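A minimal my.cnf sketch of that approach (the option names below assume a MariaDB 10.x server; `skip-innodb` is deprecated in later versions, so verify against your server's documentation before use):

```ini
[mysqld]
# Don't load InnoDB at all, freeing the buffer pool and related memory.
skip-innodb
# Required once InnoDB is unavailable, since InnoDB is the normal default.
default_storage_engine = MyISAM
# MyISAM's only large dedicated cache is the index key buffer;
# table data itself is cached by the OS page cache.
key_buffer_size        = 64M
```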
participants (4)
- Guillaume Lefranc
- l vic
- Reindl Harald
- Sergei Golubchik