Increase docker storage size

If your Red Hat operating system uses device mapper as the docker storage driver, the default base size limits the size of each image and container to 10G. This topic describes how to increase the docker storage size of a specific container. You do not have to increase the size for the overlay or overlay2 storage drivers, which have a default base size of 500GB.
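To confirm which storage driver and base size docker is currently using, you can check the docker info output (the Base Device Size field appears only when the driver is device mapper):

docker info | grep -E "Storage Driver|Base Device Size"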

As an example, you can enter the following command to see the size of a spark worker pod:

kubectl exec -it spark-worker-7b447945d4-6gdhn -n dsx -- bash -c "df -h"

In this example, the size of the spark worker is 25G by default.
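The output might look like the following (illustrative only; the device mapper path and usage figures vary by node):

Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/docker-...  25G   ...   ...   ..% /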

Increase docker storage size

Complete the following steps on each node:

  1. Stop the kubelet service: systemctl stop kubelet.service
  2. Stop the docker service: systemctl stop docker.service
  3. Change the docker base size to 110G in the configuration files. You can set the size either through DOCKER_STORAGE_OPTIONS in /etc/sysconfig/docker-storage or through dm.basesize in /etc/docker/daemon.json, but set it in only one of the two files and remove the option from the other so that the values do not conflict. Choose whichever file you normally maintain. For example, enter vi /etc/sysconfig/docker-storage and remove DOCKER_STORAGE_OPTIONS, then enter vi /etc/docker/daemon.json and change dm.basesize to 110G (see the example daemon.json after these steps).
  4. Restart the docker service: systemctl start docker.service. Verify that all services are up.
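A minimal daemon.json for step 3 might look like the following. This is a sketch; if your file already contains other settings, preserve them and change only the dm.basesize entry:

{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.basesize=110G"
  ]
}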

If you do not remove the current docker image, the container size does not change even after you increase the base size. You also need to remove all deployments that use that image. In this example, running docker images | grep spark shows that spark-worker uses the image idp-registry.sysibm-adm.svc.cluster.local:31006/spark:1.5.528-x86_64 with image ID 88886464062c. So you must remove all deployments (spark-master, spark-worker, spark-history) that use that image:

  1. Run docker ps -a | grep <spark-image-id>, and remove all containers, running or stopped, that use the spark image.
  2. Remove the image by running docker rmi <spark-image-id>.
  3. Run docker pull on the spark image:
    docker pull idp-registry.sysibm-adm.svc.cluster.local:31006/spark:1.5.528-x86_64
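Put together, the sequence might look like the following, using the image ID from this example. The container IDs come from the docker ps -a output, and docker rm -f removes a container even if it is still running:

docker ps -a | grep 88886464062c
docker rm -f <container-id>
docker rmi 88886464062c
docker pull idp-registry.sysibm-adm.svc.cluster.local:31006/spark:1.5.528-x86_64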

Sometimes different images share the same layers, so make sure the image is pulled completely fresh. If some layers show Already exists in the docker pull output, you must also remove the other images that share those layers. In this example, you must delete the images for wdp-dashboard-back and wdp-dashboard-calculator.
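In the docker pull output, a reused layer looks like the first line below, while a freshly downloaded layer shows the other statuses (layer IDs are placeholders):

<layer-id>: Already exists
<layer-id>: Pulling fs layer
<layer-id>: Pull complete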

On all of the nodes:

docker rmi <dash-back-image>
docker rmi <dash-calc-image>
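If you need to look up those image references first, a filter like the following can help (assuming the image names contain wdp-dashboard):

docker images | grep wdp-dashboard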

After you remove those images, run docker pull again; this time you can see that the image is pulled in its entirety.

Tip: If the size still has not changed, repeat step 2 and step 3, then run the docker system prune -a command to remove all stopped containers and unused images. Then run docker pull on the image again and check the size.
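You can also verify that docker picked up the new base size before pulling. On device mapper, docker info reports the value as Base Device Size:

docker info | grep "Base Device Size"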

Now start the kubelet service: systemctl start kubelet.service, and check the new container. You should see that the spark worker pod size has increased to 110G.
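For example, you can repeat the earlier check against the new pod. The pod name changes when the deployment is re-created, so substitute the new name:

kubectl exec -it <new-spark-worker-pod> -n dsx -- bash -c "df -h"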