How to resolve ClickHouse error code 241: Memory limit exceeded

In ClickHouse, executing large and complex queries or updating a significant amount of data, such as deduplicating a large table, may occasionally result in the following error:

SQL Error [241] [07000]: Code: 241. DB::Exception: Memory limit (total) exceeded: 
would use 4.67 GiB (attempt to allocate chunk of 8388608 bytes), maximum: 4.66 GiB. 
OvercommitTracker decision: Query was selected to stop by OvercommitTracker.: 
(while reading column dob): (while reading from part /var/lib/clickhouse/store/34a/34ab1fe4-3ef3-4595-9ec3-76d34a1ffb6d/c479ec7101faeb275dd8b201c7f16395_6718_50090_3/ from mark 0 with max_rows_to_read = 1048449):

Root cause:

By default, ClickHouse enforces several memory limits, both for the server as a whole and for individual queries. If a heavy query needs more memory than these limits allow, ClickHouse aborts it and reports this error code.

Solution:

To resolve this issue, modify the following three configurations:

Server-level config: /etc/clickhouse-server/config.xml

  1. max_server_memory_usage
    • Tip: The unit is bytes.
    • For example, if you set the value to 12000000000, the maximum memory is 12 GB.
  2. max_server_memory_usage_to_ram_ratio
    • The value is a fraction of the host's physical RAM and should be > 0; for example, 1 allows the server to use up to all of the available RAM.
    • Tip: In memory-constrained environments, you may need to set this value larger than 1 (see the sketch after this list).
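
For illustration, here is a minimal sketch of how these two settings might look in /etc/clickhouse-server/config.xml. The 12 GB figure is only an example; in recent ClickHouse versions the root element is <clickhouse> (older releases use <yandex>).

<clickhouse>
    <!-- Hard cap on total server memory usage, in bytes (12 GB in this example). -->
    <max_server_memory_usage>12000000000</max_server_memory_usage>
    <!-- Cap expressed as a fraction of physical RAM; it is used to derive the limit
         when max_server_memory_usage is left at its default of 0.
         A value of 1 allows up to all of the available RAM. -->
    <max_server_memory_usage_to_ram_ratio>1</max_server_memory_usage_to_ram_ratio>
</clickhouse>

Note that an explicit, non-zero max_server_memory_usage takes the place of the ratio-derived limit, so in practice you would typically tune one or the other.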

User-level config: /etc/clickhouse-server/users.xml

  1. max_memory_usage
    • Add this setting under clickhouse -> profiles -> default, i.e. inside the default profile.
    • Tip: The unit is bytes, just like the server-level configuration. ClickHouse applies the smaller of the user-level and server-level limits (see the sketch after this list).
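
For illustration, a minimal sketch of the corresponding entry in /etc/clickhouse-server/users.xml (again under the <clickhouse> root element); the 10 GB figure is only an example and should stay at or below the server-level cap:

<clickhouse>
    <profiles>
        <default>
            <!-- Maximum memory a single query may use, in bytes (10 GB in this example). -->
            <max_memory_usage>10000000000</max_memory_usage>
        </default>
    </profiles>
</clickhouse>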

After changing the above configurations, restart the ClickHouse server with the following command:

sudo service clickhouse-server restart
