MariaDB Performance in Docker is Half as Fast Compared to Running Directly on the Host System #629

Open
justinh998 opened this issue Feb 2, 2025 · 3 comments
Labels
need feedback Need feedback from user.

Comments

@justinh998

I initially ran the MariaDB Docker container without any additional options (the container was placed in a Docker network and had a volume mounted at /var/lib/mysql).

The performance seemed a bit slow to me (3000-4000 q/s when inserting 150000 rows), so I stopped the container and ran MariaDB directly on the host (also without any additional options).

For the same 150000 rows, I then had a performance of 7000-8000 q/s.

So MariaDB in the Docker container runs at roughly half the speed it reaches directly on the host system.

I also tried to speed up the container with various options, but that didn't help either. Here is my final Docker Compose:

version: '3'
volumes:
  data:
services:
  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: Sandburg123#
      MYSQL_USER: JUSTINH
      MYSQL_PASSWORD: Sandburg123#
    command: --performance-schema=true --innodb-buffer-pool-size=12884901888 --innodb_log_buffer_size=64M --max_connections=128 --thread_cache_size=32 --tmp_table_size=268435456 --query_cache_size=67108864 --query_cache_type=1 --innodb_flush_method=fsync
    volumes:
      - data:/var/lib/mysql
      #- /var/lib/mysql/socket:/var/run/mysqld
    ports:
      - "3306:3306"
    restart: unless-stopped
    networks:
      - link
networks:
  link:

Does anyone have an idea why MariaDB in Docker is so slow?

@fauust
Collaborator

fauust commented Feb 2, 2025

Hi @justinh998!
You should probably share more information about your setup (MariaDB version on the host, OS, Docker version, Docker volume setup, the SQL used for the inserts, hardware, etc.) so that we can try to reproduce it.
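
For example, a quick sketch of commands that would cover most of that (the container name db is assumed here; adjust it to your setup):

# on the host
mariadbd --version
docker version
docker info | grep -iE 'storage driver|backing filesystem'
uname -a
# in the container
docker exec db mariadbd --version
docker exec db df -hT /var/lib/mysql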

@grooverdan
Member

grooverdan commented Feb 3, 2025

innodb-log-file-size is one of the key settings to increase for bulk loads (up to the buffer pool size, or more).
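
For example, in the Compose file above that could look like the following (the value is only illustrative; it mirrors the 12G buffer pool already set there):

    command: --innodb-buffer-pool-size=12884901888 --innodb-log-file-size=12884901888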

If the workload creates a lot of on-disk temporary tables, MDEV-34577 seemed to show some overhead from overlayfs on /tmp. In that case, either a) set --temp-pool=1, or b) make /tmp a tmpfs filesystem for the container, like:

    tmpfs:
      - /tmp:rw,size=2G

As @fauust said, we're still guessing because there are many ways to insert. Perf measurements of the container (which need the CAP_PERFMON capability) and a comparison to the host (a differential flame graph?) would help.
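
A rough sketch of what that could look like, assuming perf is installed in the image and the service is named db as above:

    cap_add:
      - PERFMON   # CAP_PERFMON on kernel >= 5.8; SYS_ADMIN may be needed on older kernels

and then, while the insert workload runs:

docker exec db perf record -F 99 -a -g -- sleep 30
docker exec db perf report

Repeating the same on the host would allow comparing the two profiles (e.g. as a differential flame graph).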

@grooverdan added the need feedback label on Feb 5, 2025
@polarathene

polarathene commented Feb 17, 2025

If you're testing via a Docker bridge network, consider disabling the userland-proxy setting in /etc/docker/daemon.json; last I recall it caused roughly a 50% perf hit on the network (when I tested it was with iperf throughput, IIRC).
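
That would be something like the following in /etc/docker/daemon.json, followed by a restart of the Docker daemon:

{
  "userland-proxy": false
}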

EDIT: Sorry, not sure why I was thinking of network I/O when this is more of a disk I/O issue 😓


Another common gotcha I've noticed in the past is FD limits in containers being set to infinity, which, depending on the Docker host, meant the soft limit (ulimit -Sn) was raised to over a million or even over a billion. That difference in environment made troubleshooting the bugs I was tasked to resolve trickier until I knew about it, as it was a 1,000x multiplier.

Since Docker v25 that should be minimized, but another part of it depended on the containerd 2.0 release (roughly around the start of 2025), which should land in the Docker v28 release.

You can check in the container with ulimit -Sn to see what you have, and ulimit -Hn for the hard limit. Ideally these would be 1024 and 524288 respectively, to match the host system. Database software will typically want to raise the soft limit for file descriptors (FDs) above that 1024 value, and some images have implicitly relied on the earlier bug where containers already had the soft limit above a million, instead of the software in the container raising the soft limit at runtime as needed or configured.

You rarely need a higher hard limit, but I recall that for production databases there may be larger workloads where 524288 is insufficient. Envoy is another service where some enterprise deployments apparently need to exceed a billion FDs 🤷‍♂

FWIW, the higher soft limit should not be set at the container level, since some software can regress heavily in performance (excessive CPU usage iterating through FDs to close them, or excessive memory allocation, which was reported for MySQL).
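
If you want to pin those limits explicitly instead of relying on daemon defaults, a minimal sketch for the db service above (values matching the host defaults mentioned earlier):

    ulimits:
      nofile:
        soft: 1024    # software in the container can raise this itself as needed
        hard: 524288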
