Memory is something I generally don’t worry about when working with Docker. It just works. This is great… but what happens when it doesn’t?
We recently had a situation where one of the Docker solutions we’ve been developing at Fathom Data suddenly started having memory issues. And the timing was bad: we were using it live with a client. It did not look good.
This led to some frantic learning about memory management in Docker. This short post details some of what I learned.
Physical and Virtual Memory
There are two main types of memory: physical and virtual. Physical memory is RAM. It lives close to the CPU and it’s fast. The amount of physical memory is constrained by the amount of RAM you have installed on the machine. Virtual memory is not really memory at all. It’s a crafty operating system trick to emulate memory by swapping pages of physical memory out to disk. You can increase the amount of virtual memory available by allocating more disk space. This sounds great: expand the effective amount of memory available up to the size of your disk. Yes, you could. But relative to physical memory, virtual memory is slow. Very slow indeed. Virtual memory will allow you to load larger processes into memory, but they will be less responsive because of the latency of swapping data back and forth with the disk.
Setup
For the purpose of this post I set up a Virtual Machine with 4 GB of physical memory and 8 GB of virtual memory.
```
free --giga
               total        used        free      shared  buff/cache   available
Mem:               4           0           3           0           0           3
Swap:              8           0           8
```
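As an aside, if you need these numbers in a script, the output of `free` parses easily with `awk`. A small sketch (it assumes the row labels shown above):

```shell
# Extract total physical memory and total swap (in GB) from `free --giga`.
free --giga | awk '
  /^Mem:/  { print "RAM:  " $2 " GB" }
  /^Swap:/ { print "Swap: " $2 " GB" }
'
```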
Stress Test Tool
We’re going to use a Docker image for the `stress` tool, which allows you to stress test various aspects of a system. It’s a rather versatile tool, but we’re only going to use one component: the memory test.
Memory Limit
The `-m` (or `--memory`) option to `docker run` can be used to limit the amount of memory available to a container. Let’s use `stress` to allocate 128 MB of memory and hold it for 30 seconds. We’ll use the `-m` option to limit the memory allocated to the container to only 128 MB.
```
docker run -m 128M stress -t 30s --vm 1 --vm-bytes 128M --vm-keep
stress: info: [1] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: dbug: [1] using backoff sleep of 3000us
stress: dbug: [1] setting timeout to 30s
stress: dbug: [1] --> hogvm worker 1 [7] forked
stress: dbug: [7] allocating 134217728 bytes ...
stress: dbug: [1] <-- worker 7 signalled normally
stress: info: [1] successful run completed in 30s
```
Okay, it seems like that was successful. ✅
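If you want to double-check what limit Docker actually applied, `docker inspect` exposes it. A sketch (assumes a local Docker daemon; the container name `memtest` is illustrative):

```shell
# Run a container with a 128 MB limit and read the limit back in bytes.
docker run -d --name memtest -m 128M stress -t 60s --vm 1 --vm-bytes 64M --vm-keep
docker inspect -f '{{.HostConfig.Memory}}' memtest   # 134217728 bytes = 128 * 1024 * 1024
docker rm -f memtest
```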
Let’s try to consume 256 MB, more than what’s allocated.
```
docker run -m 128M stress -t 30s --vm 1 --vm-bytes 256M --vm-keep
stress: info: [1] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: dbug: [1] using backoff sleep of 3000us
stress: dbug: [1] setting timeout to 30s
stress: dbug: [1] --> hogvm worker 1 [9] forked
stress: dbug: [9] allocating 268435456 bytes ...
stress: FAIL: [1] (415) <-- worker 9 got signal 9
stress: WARN: [1] (417) now reaping child worker processes
stress: FAIL: [1] (421) kill error: No such process
stress: FAIL: [1] (451) failed run completed in 3s
```
The process was killed. 🔥 Makes sense, right?
Let’s just check one thing: using a little less memory (but still more than allocated).
```
docker run -m 128M stress -t 30s --vm 1 --vm-bytes 192M --vm-keep
stress: info: [1] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: dbug: [1] using backoff sleep of 3000us
stress: dbug: [1] setting timeout to 30s
stress: dbug: [1] --> hogvm worker 1 [7] forked
stress: dbug: [7] allocating 201326592 bytes ...
stress: dbug: [1] <-- worker 7 signalled normally
stress: info: [1] successful run completed in 30s
```
We allocated 128 MB and then successfully managed to use 192 MB. What’s going on here?
> By default, the host kernel can swap out a percentage of anonymous pages used by a container. (*Runtime options with Memory, CPUs, and GPUs*, Docker documentation)
Some swap space will automatically be made available to the container, up to “a percentage” of the allocated space. That vague percentage is dictated by the swappiness, which we’ll get back to later on. The available swap is implicitly allocated to the container. When we tried to allocate 256 MB we exceeded the implicit allocation.
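Concretely, the Docker documentation states that when `--memory-swap` is unset the container can use swap equal to the memory limit, so `-m 128M` implies a combined ceiling of 256 MB. That squares with the runs above: 192 MB fit under the ceiling, 256 MB did not. A quick sanity check of the arithmetic:

```shell
# Default memory+swap ceiling when only -m is set (per the Docker docs: 2x the limit).
mem=$((128 * 1024 * 1024))   # -m 128M, in bytes
ceiling=$((2 * mem))         # implicit memory + swap ceiling
echo "ceiling: $((ceiling / 1024 / 1024)) MB"   # prints: ceiling: 256 MB
```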
Explicit Swap Allowance
Use the `--memory-swap` option to explicitly allocate swap memory to a container.
```
docker run -m 128M --memory-swap 512M stress -t 30s --vm 1 --vm-bytes 384M --vm-keep
stress: info: [1] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: dbug: [1] using backoff sleep of 3000us
stress: dbug: [1] setting timeout to 30s
stress: dbug: [1] --> hogvm worker 1 [7] forked
stress: dbug: [7] allocating 402653184 bytes ...
stress: dbug: [1] <-- worker 7 signalled normally
stress: info: [1] successful run completed in 30s
```
Okay, so now we are able to allocate a decent chunk of memory, exceeding the implicitly allocated amount.
> `--memory-swap` represents the total amount of memory and swap that can be used, and `--memory` controls the amount used by non-swap memory. (*Runtime options with Memory, CPUs, and GPUs*, Docker documentation)
💡 The values of the `-m` and `--memory-swap` arguments are not additive.
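So with `-m 128M --memory-swap 512M`, the container gets 128 MB of memory plus 384 MB of swap, not 512 MB of swap. The arithmetic:

```shell
# --memory-swap is a combined ceiling, so swap = memory-swap - memory.
mem=$((128 * 1024 * 1024))     # --memory
total=$((512 * 1024 * 1024))   # --memory-swap (memory + swap combined)
swap=$((total - mem))          # swap actually available to the container
echo "swap available: $((swap / 1024 / 1024)) MB"   # prints: swap available: 384 MB
```

That’s why the 384 MB allocation above succeeded: it fits exactly within the 512 MB combined ceiling.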
Unlimited Swap Allowance
What happens if you know a container is going to need swap space, but you’re not quite sure exactly how much? You can allow unlimited access to swap by setting `--memory-swap` to `-1`.
Let’s live large and request 4 GB of space.
```
docker run -m 128M --memory-swap -1 stress -t 30s --vm 1 --vm-bytes 4G --vm-keep
stress: info: [1] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: dbug: [1] using backoff sleep of 3000us
stress: dbug: [1] setting timeout to 30s
stress: dbug: [1] --> hogvm worker 1 [8] forked
stress: dbug: [8] allocating 4294967296 bytes ...
stress: dbug: [1] <-- worker 8 signalled normally
stress: info: [1] successful run completed in 30s
```
No problem at all.
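“Unlimited” is still bounded by the swap the host actually has, so it’s worth checking before relying on `-1`. A one-liner (the output format assumed here is that of procps `free`):

```shell
# Report total and free host swap in MiB.
free --mebi | awk '/^Swap:/ { print $2 " MiB total, " $4 " MiB free" }'
```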
Swappiness
The `--memory-swappiness` argument can be used to vary how likely it is that pages from physical memory will get swapped out to virtual memory.
> By default, the host kernel can swap out a percentage of anonymous pages used by a container. You can set `--memory-swappiness` to a value between 0 and 100, to tune this percentage. See `--memory-swappiness` details. (*Runtime options with Memory, CPUs, and GPUs*, Docker documentation)
I have not found a simple way to illustrate the effect of this argument. However, one quick way to see it in action is to turn off implicit swap allocation.
```
docker run --memory-swappiness 0 -m 128M stress -t 30s --vm 1 --vm-bytes 192M --vm-keep
stress: info: [1] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: dbug: [1] using backoff sleep of 3000us
stress: dbug: [1] setting timeout to 30s
stress: dbug: [1] --> hogvm worker 1 [7] forked
stress: dbug: [7] allocating 201326592 bytes ...
stress: FAIL: [1] (415) <-- worker 7 got signal 9
stress: WARN: [1] (417) now reaping child worker processes
stress: FAIL: [1] (421) kill error: No such process
stress: FAIL: [1] (451) failed run completed in 0s
```
Without setting `--memory-swappiness` this would have succeeded (see earlier) due to implicit swap allocation.
If this argument is not specified then Docker will inherit swappiness from the system.
```
cat /proc/sys/vm/swappiness
60
```
Why would you want to meddle with the swappiness? The principal reason is performance. Swapping to virtual memory is slow. If you want a process to be performant you’d want to discourage its memory from being swapped out (by setting a low value of swappiness).
Resources
- The `Dockerfile` which I used to build the `stress` image.