{"id":60729,"date":"2024-03-14T13:44:07","date_gmt":"2024-03-14T08:14:07","guid":{"rendered":"https:\/\/www.tothenew.com\/blog\/?p=60729"},"modified":"2024-03-19T13:57:23","modified_gmt":"2024-03-19T08:27:23","slug":"running-docker-inside-docker-container-using-custom-docker-image","status":"publish","type":"post","link":"https:\/\/www.tothenew.com\/blog\/running-docker-inside-docker-container-using-custom-docker-image\/","title":{"rendered":"Running Docker Inside Docker Container using Custom Docker Image"},"content":{"rendered":"<h2><b>Introduction and Scenario<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">There are various use cases for running Docker inside a host Docker container, which we will cover later, but one that often comes in handy is <\/span><b>running a Docker container as a Jenkins agent. <\/b><span style=\"font-weight: 400;\">Suppose we want to build our application image, push it to a Docker registry such as ECR, and then deploy it to an EKS or ECS cluster via Jenkins or another CI\/CD tool such as Buildkite or GoCD. In that case, we can run Docker as the agent and, inside that Docker agent, run Docker commands to build and push the images.<\/span><\/p>\n<h2><b>Prerequisites<\/b><\/h2>\n<ul>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Docker should be pre-installed on the host machine<\/span><\/li>\n<\/ul>\n<h2><b>Use Cases<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">The goal is to run a Docker container inside another Docker container.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Some of the use cases for this are:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">One potential use case for Docker in Docker is a CI pipeline, 
where we need to build and push Docker images to a container registry after a successful code build.<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Building Docker images on a VM is straightforward, but when we use Docker containers as Jenkins agents<\/span> <span style=\"font-weight: 400;\">for our CI\/CD pipelines, Docker in Docker is a must-have.<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Sandboxed environments:<\/span>\n<ul>\n<li style=\"font-weight: 400;\"><b>Testing Environments<\/b><span style=\"font-weight: 400;\">: In CI\/CD pipelines, developers often need to run tests in an environment that closely mimics the production setup. By running Docker containers within another Docker container, we can create a sandboxed environment where we can quickly spin up test environments, run tests, and tear them down without affecting the host system.<\/span><\/li>\n<li style=\"font-weight: 400;\"><b>Isolation<\/b><span style=\"font-weight: 400;\">: Docker containers provide a level of isolation, but running them inside another Docker container adds an extra layer. This can be useful for running untrusted or potentially malicious code in a controlled environment without risking the host system.<\/span><\/li>\n<\/ul>\n<\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Experimentation on our local development workstation.<\/span><\/li>\n<\/ul>\n<h2><b>How does this work?<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">One of the features of Docker is \u201cprivileged\u201d mode for containers. It allows us to run certain containers with almost all the capabilities of their host machine regarding kernel features and host device access. <\/span><span style=\"font-weight: 400;\">Also, we can run Docker within Docker itself. 
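<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A quick way to see what privileged mode grants is to compare an ordinary container with a privileged one. The commands below are an illustrative sketch (they assume Docker is already installed on the host); mounting a filesystem is exactly the kind of kernel operation a nested Docker daemon needs:<\/span><\/p>\n<pre># Without --privileged, kernel operations like mounting are denied\r\n\r\ndocker run --rm ubuntu:latest sh -c 'mount -t tmpfs none \/mnt || echo \"mount denied\"'\r\n\r\n# With --privileged, the container can mount filesystems, manage cgroups,\r\n\r\n# and access host devices - everything an inner dockerd requires\r\n\r\ndocker run --rm --privileged ubuntu:latest sh -c 'mount -t tmpfs none \/mnt &amp;&amp; echo \"mount allowed\"'<\/pre>\n<p><span style=\"font-weight: 400;\">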
Here\u2019s how:<\/span><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-60731 size-large\" src=\"https:\/\/www.tothenew.com\/blog\/wp-ttn-blog\/uploads\/2024\/03\/DOCKER-1024x671.png\" alt=\"\" width=\"625\" height=\"410\" srcset=\"\/blog\/wp-ttn-blog\/uploads\/2024\/03\/DOCKER-1024x671.png 1024w, \/blog\/wp-ttn-blog\/uploads\/2024\/03\/DOCKER-300x197.png 300w, \/blog\/wp-ttn-blog\/uploads\/2024\/03\/DOCKER-768x503.png 768w, \/blog\/wp-ttn-blog\/uploads\/2024\/03\/DOCKER-1536x1007.png 1536w, \/blog\/wp-ttn-blog\/uploads\/2024\/03\/DOCKER-2048x1342.png 2048w, \/blog\/wp-ttn-blog\/uploads\/2024\/03\/DOCKER-624x409.png 624w\" sizes=\"(max-width: 625px) 100vw, 625px\" \/><\/p>\n<h2><b>Solution<\/b><\/h2>\n<ul>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">First, we need a Dockerfile with all the necessary configuration to support running Docker inside Docker:<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<\/ul>\n<pre style=\"text-align: left;\"><span style=\"font-weight: 400;\">FROM ubuntu:latest<\/span>\r\n\r\n<span style=\"font-weight: 400;\"># Update the container and install dependencies<\/span>\r\n\r\n<span style=\"font-weight: 400;\">RUN apt-get update &amp;&amp; apt-get install -y apt-transport-https ca-certificates curl lxc iptables<\/span>\r\n\r\n<span style=\"font-weight: 400;\"># Install Docker via the convenience script<\/span>\r\n\r\n<span style=\"font-weight: 400;\">RUN curl -sSL https:\/\/get.docker.com\/ | sh<\/span>\r\n\r\n<span style=\"font-weight: 400;\"># Install the wrapper script (shown below)<\/span>\r\n\r\n<span style=\"font-weight: 400;\">ADD .\/script.sh \/usr\/local\/bin\/script.sh<\/span>\r\n\r\n<span style=\"font-weight: 400;\">RUN chmod +x \/usr\/local\/bin\/script.sh<\/span>\r\n\r\n<span style=\"font-weight: 400;\"># Define a volume for the inner Docker's storage<\/span>\r\n\r\n<span style=\"font-weight: 400;\">VOLUME \/var\/lib\/docker<\/span>\r\n\r\n<span style=\"font-weight: 400;\">CMD 
[\"script.sh\"]<\/span><\/pre>\n<ul>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">The above Dockerfile does the following:<\/span>\n<ul>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">It installs a few packages like <\/span><b>lxc<\/b><span style=\"font-weight: 400;\"> and <\/span><b>iptables<\/b><span style=\"font-weight: 400;\"> (because Docker needs them), and <\/span><b>ca-certificates<\/b><span style=\"font-weight: 400;\"> (because when communicating with the Docker index and registry, Docker needs to validate their SSL certificates).<\/span><\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Also, <\/span><b>\/var\/lib\/docker<\/b><span style=\"font-weight: 400;\"> should be a volume because the filesystem of a container is a union mount point (AUFS or OverlayFS) composed of multiple branches, and those branches have to be \u201cnormal\u201d filesystems (i.e., not union mount points themselves). In other words, <\/span><span style=\"font-weight: 400;\">\/var\/lib\/docker<\/span><span style=\"font-weight: 400;\"> is where the inner Docker stores its containers, and it cannot itself be a union filesystem.<\/span><\/li>\n<\/ul>\n<\/li>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Here is the wrapper script (script.sh) that we copy into the image above; it runs when the container starts:<\/span><\/li>\n<\/ul>\n<pre>#!\/bin\/bash\r\n\r\n# Ensure that all nodes in \/dev\/mapper correspond to mapped devices\r\n\r\ndmsetup mknodes\r\n\r\n# Define the cgroup mount point for our child containers\r\n\r\nCGROUP=\/sys\/fs\/cgroup\r\n\r\n: ${LOG:=stdio}\r\n\r\n[ -d $CGROUP ] || mkdir $CGROUP\r\n\r\n# Mount the cgroup filesystem if it is not mounted already\r\n\r\nmountpoint -q $CGROUP || mount -n -t tmpfs -o uid=0,gid=0,mode=0755 cgroup $CGROUP || {\r\n\r\necho \"Could not mount the cgroup tmpfs\"\r\n\r\nexit 1\r\n\r\n}\r\n\r\n# Mount securityfs if it is present and not mounted already\r\n\r\nif [ -d \/sys\/kernel\/security ] &amp;&amp; ! 
mountpoint -q \/sys\/kernel\/security\r\n\r\nthen\r\n\r\n    mount -t securityfs none \/sys\/kernel\/security || {\r\n\r\n        echo \"Could not mount \/sys\/kernel\/security.\"\r\n\r\n    }\r\n\r\nfi\r\n\r\n# Remove any stale docker.pid left over from a previous run\r\n\r\nrm -f \/var\/run\/docker.pid\r\n\r\n# If a custom port was provided, run dockerd in the foreground listening on it\r\n\r\nif [ \"$PORT\" ]\r\n\r\nthen\r\n\r\nexec dockerd -H 0.0.0.0:$PORT -H unix:\/\/\/var\/run\/docker.sock \\\r\n\r\n$DOCKER_DAEMON_ARGS\r\n\r\nelse\r\n\r\n# Otherwise run dockerd in the background, logging to a file if LOG=file\r\n\r\nif [ \"$LOG\" == \"file\" ]\r\n\r\nthen\r\n\r\ndockerd $DOCKER_DAEMON_ARGS &amp;&gt;\/var\/log\/docker.log &amp;\r\n\r\nelse\r\n\r\ndockerd $DOCKER_DAEMON_ARGS &amp;\r\n\r\nfi\r\n\r\n# Run the command passed to the container, if any; otherwise open a login shell\r\n\r\n[[ $1 ]] &amp;&amp; exec \"$@\"\r\n\r\nexec bash --login\r\n\r\nfi<\/pre>\n<ul>\n<li><span style=\"font-weight: 400;\">The above script does the following:<\/span>\n<ul>\n<li><span style=\"font-weight: 400;\">It ensures that the cgroup pseudo-filesystems are properly mounted because Docker needs them.<\/span><\/li>\n<li><span style=\"font-weight: 400;\">It checks whether you specified a <\/span><b>PORT<\/b><span style=\"font-weight: 400;\"> environment variable through the <\/span><b>-e PORT<\/b><span style=\"font-weight: 400;\"> command-line option. The Docker daemon starts in the foreground and listens for API requests on the specified TCP port if provided. 
If no port is specified, it starts Docker in the background and gives you an interactive shell.<\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<ul>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Now, build the Docker image from the directory containing the Dockerfile and script.sh:<\/span><\/li>\n<\/ul>\n<pre>docker build -t docker:v1 .<\/pre>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-60726 size-large\" src=\"https:\/\/www.tothenew.com\/blog\/wp-ttn-blog\/uploads\/2024\/03\/docker1-1024x356.png\" alt=\"\" width=\"625\" height=\"217\" srcset=\"\/blog\/wp-ttn-blog\/uploads\/2024\/03\/docker1-1024x356.png 1024w, \/blog\/wp-ttn-blog\/uploads\/2024\/03\/docker1-300x104.png 300w, \/blog\/wp-ttn-blog\/uploads\/2024\/03\/docker1-768x267.png 768w, \/blog\/wp-ttn-blog\/uploads\/2024\/03\/docker1-1536x534.png 1536w, \/blog\/wp-ttn-blog\/uploads\/2024\/03\/docker1-2048x712.png 2048w, \/blog\/wp-ttn-blog\/uploads\/2024\/03\/docker1-624x217.png 624w\" sizes=\"(max-width: 625px) 100vw, 625px\" \/><\/p>\n<ul>\n<li style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Now, run the container in privileged mode:<\/span><\/li>\n<\/ul>\n<pre>docker run -itd --privileged docker:v1<\/pre>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone wp-image-60727 size-large\" src=\"https:\/\/www.tothenew.com\/blog\/wp-ttn-blog\/uploads\/2024\/03\/docker3-1024x123.png\" alt=\"\" width=\"625\" height=\"75\" srcset=\"\/blog\/wp-ttn-blog\/uploads\/2024\/03\/docker3-1024x123.png 1024w, \/blog\/wp-ttn-blog\/uploads\/2024\/03\/docker3-300x36.png 300w, \/blog\/wp-ttn-blog\/uploads\/2024\/03\/docker3-768x92.png 768w, \/blog\/wp-ttn-blog\/uploads\/2024\/03\/docker3-1536x184.png 1536w, \/blog\/wp-ttn-blog\/uploads\/2024\/03\/docker3-2048x246.png 2048w, \/blog\/wp-ttn-blog\/uploads\/2024\/03\/docker3-624x75.png 624w\" sizes=\"(max-width: 625px) 100vw, 625px\" \/><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\">Now, you can access Docker inside the 
above container. First, exec into it, then run your Docker commands:<\/span><\/li>\n<\/ul>\n<pre>docker exec -it &lt;container_name&gt; bash<\/pre>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignleft wp-image-60728 size-large\" src=\"https:\/\/www.tothenew.com\/blog\/wp-ttn-blog\/uploads\/2024\/03\/docker2-1024x156.png\" alt=\"\" width=\"625\" height=\"95\" srcset=\"\/blog\/wp-ttn-blog\/uploads\/2024\/03\/docker2-1024x156.png 1024w, \/blog\/wp-ttn-blog\/uploads\/2024\/03\/docker2-300x46.png 300w, \/blog\/wp-ttn-blog\/uploads\/2024\/03\/docker2-768x117.png 768w, \/blog\/wp-ttn-blog\/uploads\/2024\/03\/docker2-1536x234.png 1536w, \/blog\/wp-ttn-blog\/uploads\/2024\/03\/docker2-2048x312.png 2048w, \/blog\/wp-ttn-blog\/uploads\/2024\/03\/docker2-624x95.png 624w\" sizes=\"(max-width: 625px) 100vw, 625px\" \/><\/p>\n<h2><b>Conclusion<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">Running Docker containers inside another Docker container offers significant benefits, especially in scenarios where Docker containers serve as Jenkins agents for building and deploying images.<\/span><\/p>\n<p><b>Usefulness for Jenkins Agents:<\/b><\/p>\n<ul>\n<li style=\"font-weight: 400;\"><b>Build Isolation<\/b><span style=\"font-weight: 400;\">: Docker-in-Docker allows Jenkins to create isolated build environments for each job or pipeline, ensuring that dependencies or changes in one build do not affect others.<\/span><\/li>\n<li style=\"font-weight: 400;\"><b>Environment Consistency<\/b><span style=\"font-weight: 400;\">: By encapsulating build environments within Docker containers, Jenkins ensures consistent build environments across different machines and eliminates &#8220;works on my machine&#8221; issues.<\/span><\/li>\n<li style=\"font-weight: 400;\"><b>Scalability<\/b><span style=\"font-weight: 400;\">: Jenkins can dynamically spin up multiple Docker containers to handle concurrent builds or stages, enabling scalability and efficient resource 
utilization.<\/span><\/li>\n<li style=\"font-weight: 400;\"><b>Resource Efficiency<\/b><span style=\"font-weight: 400;\">: Docker containers offer lightweight virtualization, allowing Jenkins to run multiple build agents on a single host machine without significant overhead.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">Utilizing Docker-in-Docker for Jenkins agents enhances build isolation, environment consistency, scalability, and resource efficiency. While it offers significant advantages, carefully considering performance, security, and management complexities is necessary to ensure smooth operation and mitigate potential risks. With proper configuration and monitoring, Docker-in-Docker enables Jenkins to efficiently build and deploy Docker images in a controlled and reproducible manner.<\/span><\/p>\n<div class=\"ap-custom-wrapper\"><\/div><!--ap-custom-wrapper-->","protected":false},"excerpt":{"rendered":"<p>Introduction and Scenario There are various use cases for running Docker inside a host Docker container, which we will mention\u00a0later on, but one of the use cases that often comes in handy is when we run a Docker container as a Jenkins agent. 
Suppose we want to build and push our application image to any [&hellip;]<\/p>\n","protected":false},"author":1644,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"iawp_total_views":47},"categories":[2348],"tags":[5184,1883],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts\/60729"}],"collection":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/users\/1644"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/comments?post=60729"}],"version-history":[{"count":6,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts\/60729\/revisions"}],"predecessor-version":[{"id":60865,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts\/60729\/revisions\/60865"}],"wp:attachment":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/media?parent=60729"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/categories?post=60729"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/tags?post=60729"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}