There you are again. A freshly registered domain and a $5 VPS from your hosting provider of choice at the ready.
Surely this time the world is ready for <amazing_idea_final_v3> and it is going to be glorious. You quickly spin up a terminal to build your container.
The logs begin to scroll by, everything seems fine… until it doesn't.
Maybe you've seen this warning before — maybe you've even ignored it. But with credentials and user data leaking left and right, it's worth pausing to understand what Docker is trying to tell us here, and how to fix it so nothing holds back your launch.
If you're anything like me, the cold, reference-manual style of the Docker documentation can feel more like a hurdle than a helping hand. But don't worry! I've taken on that burden, so you can sit back and enjoy this guided tour through the concept of secret mounts.
Recovering secrets from Docker ARG and ENV
Before we get to Secret Mounts, let's discuss what's wrong with our current approach of stuffing secrets into ARG and ENV.
Most instructions in your Dockerfile are copied more or less one-to-one into the resulting image as so-called image layers; this also applies to ARG and ENV instructions. These image layers represent read-only snapshots of the filesystem and can be explored retroactively.
As an example, let's assume you've gotten hold of an image that was built using the following Dockerfile.
```dockerfile
FROM alpine:latest
ARG CMS_SECRET_ARG
ENV CMS_SECRET_ENV=$CMS_SECRET_ARG
RUN echo $CMS_SECRET_ENV
```
By running docker image history --no-trunc <image-name> you can inspect the output of all the commands invoked during the build process.
```
you@horo:~$ docker image history --no-trunc secrets-app
IMAGE              CREATED         CREATED BY                                                                              SIZE    COMMENT
sha256:70f8e4ecb…  2 minutes ago   RUN |1 CMS_SECRET_ARG=MY_SUPER_SECRET_VALUE /bin/sh -c echo $CMS_SECRET_ENV # buildkit  0B      buildkit.dockerfile.v0
<missing>          2 minutes ago   ENV CMS_SECRET_ENV=MY_SUPER_SECRET_VALUE                                                0B      buildkit.dockerfile.v0
<missing>          2 minutes ago   ARG CMS_SECRET_ARG=MY_SUPER_SECRET_VALUE                                                0B      buildkit.dockerfile.v0
<missing>          5 weeks ago     CMD ["/bin/sh"]                                                                         0B      buildkit.dockerfile.v0
<missing>          5 weeks ago     ADD alpine-minirootfs-3.22.2-x86_64.tar.gz / # buildkit                                 8.32MB  buildkit.dockerfile.v0
```
Notice that you can read the value of both the build argument and the environment variable directly from the output, even though you never built the image yourself.
This could be your image in the hands of a bad actor.
Note: You might scoff at this, as a real-world Dockerfile is more complicated. However, the use cases where we want to make use of secrets essentially simplify to the following:
- Securely inject a secret, e.g. to communicate with an external system
- Access the secret during build time, e.g. to cache initial data
- Access the secret during runtime, e.g. to continuously update data on user changes
Don't be fooled into thinking you'd be safe if your commands didn't explicitly print the secret during build time. If your application interacts with the secret, it had to be stored in the filesystem at some point, leaving traces behind. It's Linux, after all: everything is a file.
And no, retroactively changing or removing the value from the filesystem isn't an option either. Image layers are immutable and individually inspectable. With a bit more effort you can still extract the secret, e.g. by running docker save -o layers.tar <image-name> and sifting through the resulting archive. Be warned though: depending on your image, this archive can become quite large.
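To see why, here's a minimal sketch of the sifting step that needs no Docker at all. It simulates a single saved layer as a tar archive, with a hypothetical `.env` file standing in for whatever your build step wrote the secret into:

```shell
# Simulate a layer into which a build step wrote a secret (hypothetical .env file).
mkdir -p demo/app
echo "MY_SUPER_SECRET_VALUE" > demo/app/.env

# `docker save` produces per-layer tar archives of exactly this shape.
tar -cf layer.tar -C demo .

# Sifting through the archive recovers the secret; no running container needed.
tar -xOf layer.tar ./app/.env
# → MY_SUPER_SECRET_VALUE
```

Against a real image you'd point the same kind of extraction at the layer tarballs inside layers.tar.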
Your runtime container also inherits the problems of your image from build time. Since we bake the secret into the resulting Docker image, anyone with shell access to the running container can retrieve it by simply invoking the env command (or inspecting /proc/<pid>/environ).
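The procfs trick works for any process, not just containers. A quick local sketch (Linux only; `leaky` is a placeholder value):

```shell
# A process's environment is readable via procfs by its owner (or root).
# Entries are NUL-separated, so translate them to newlines before grepping.
CMS_SECRET=leaky sh -c 'tr "\0" "\n" < /proc/self/environ' | grep '^CMS_SECRET='
# → CMS_SECRET=leaky
```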
How to Pass Secrets Securely with Docker BuildKit and Docker Compose
To reduce the attack surface around exposing our secrets we can make use of Docker secret mounts. During build time, secrets are temporarily mounted either as a file or as an environment variable for the duration of the respective command. Since they are unmounted before the image layer is persisted, they leave no traces behind, making them preferable to ARG and ENV.
For comparison, the image's history of our prior example when adopting Docker secret mounts looks as follows.
```
you@horo:~$ docker image history --no-trunc secrets-app
IMAGE              CREATED         CREATED BY                                               SIZE    COMMENT
sha256:01b7b88c5…  6 seconds ago   RUN /bin/sh -c echo $CMS_SECRET_ENV # buildkit           0B      buildkit.dockerfile.v0
<missing>          5 weeks ago     CMD ["/bin/sh"]                                          0B      buildkit.dockerfile.v0
<missing>          5 weeks ago     ADD alpine-minirootfs-3.22.2-x86_64.tar.gz / # buildkit  8.32MB  buildkit.dockerfile.v0
```
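For reference, the Dockerfile behind that history could look something like this minimal sketch (the full migration for a real app follows below; cms-secret is the id you pass at build time, e.g. via docker build --secret id=cms-secret,src=secrets/CMS.txt .):

```dockerfile
FROM alpine:latest
# The secret is exposed as CMS_SECRET_ENV only while this RUN instruction
# executes and is gone before the layer is persisted.
RUN --mount=type=secret,id=cms-secret,env=CMS_SECRET_ENV echo $CMS_SECRET_ENV
```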
While there are multiple ways of creating and managing secrets, in this blog post we'll focus on managing them using Docker Compose.
Note: Docker secret mounts require Docker BuildKit, which is enabled by default in Docker Engine 23.0 or newer.
Migrating a Docker setup to Docker Secret Mounts
Your Dockerfile will most likely differ from my examples, depending on your application and requirements. I'll outline the necessary migration steps based on the following Node.js example. I'm assuming that both npm run build and npm start require access to the secret, which should cover most use cases.
```dockerfile
FROM node:22-alpine
ARG CMS_SECRET
ENV CMS_SECRET=$CMS_SECRET
WORKDIR /src
COPY package.json package-lock.json ./
RUN npm install
RUN npm run build
CMD ["npm", "start"]
```
```yaml
services:
  app:
    build:
      context: .
      args:
        - CMS_SECRET=MY_SUPER_SECRET_VALUE
```

Step 1: Register your secrets in Docker Compose
Identify all secrets that are currently passed via build or environment variables, such as passwords, API keys, tokens or other sensitive data. For each of these create a separate file that holds the secret value.
Creating a subdirectory makes it easy to ignore these files via .gitignore or .dockerignore, preventing accidental check-ins to version control or your build context. You might also want to consider moving them to a secure location outside of your project's directory altogether.
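A sketch of that setup, assuming a secrets/ subdirectory and the placeholder value from earlier:

```shell
# One file per secret, containing only the value (no trailing newline).
mkdir -p secrets
printf '%s' 'MY_SUPER_SECRET_VALUE' > secrets/CMS.txt

# Keep the directory out of version control and out of the build context.
echo 'secrets/' >> .gitignore
echo 'secrets/' >> .dockerignore
```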
```diff
 services:
   app:
     build:
       context: .
-      args:
-        - CMS_SECRET=MY_SUPER_SECRET_VALUE
```

Next, for each secret file you've just created, register it in your compose.yml.
```diff
 services:
   app:
     build:
       context: .
+
+secrets:
+  cms-secret:
+    file: secrets/CMS.txt
```

You can safely ignore any build arguments or environment variables that do not hold sensitive information.
Step 2: Use the secret at build time
Now that Docker Compose knows about our secrets, we need to expose them during build time and adjust our Dockerfile to consume them.
```diff
 services:
   app:
     build:
       context: .
+      secrets:
+        - cms-secret

 secrets:
   cms-secret:
     file: secrets/CMS.txt
```

```diff
 FROM node:22-alpine
-ARG CMS_SECRET
-ENV CMS_SECRET=$CMS_SECRET
 WORKDIR /src
 COPY package.json package-lock.json ./
 RUN npm install
-RUN npm run build
+RUN --mount=type=secret,id=cms-secret,env=CMS_SECRET npm run build
 CMD ["npm", "start"]
```
Be careful to mount the secret only for the commands that actually require it, and make sure those commands can be trusted not to leave traces of the secret in the resulting image layer.
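If a command expects a file rather than an environment variable, you can omit the env option; the secret then appears as a file under /run/secrets/<id> (BuildKit's default target path) for the duration of that command. A hedged sketch of this variant:

```dockerfile
# Mounted as a file at the default path /run/secrets/cms-secret,
# visible only while this RUN instruction executes.
RUN --mount=type=secret,id=cms-secret \
    CMS_SECRET="$(cat /run/secrets/cms-secret)" npm run build
```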
Step 3: Use the secret at runtime
Lastly, we also want runtime access to our secrets. Similar to build time, we need to expose the secrets in our compose.yml and adjust our Dockerfile to read them.
```diff
 services:
   app:
     build:
       context: .
       secrets:
         - cms-secret
+    secrets:
+      - cms-secret

 secrets:
   cms-secret:
     file: secrets/CMS.txt
```

```diff
 FROM node:22-alpine
 WORKDIR /src
 COPY package.json package-lock.json ./
 RUN npm install
 RUN --mount=type=secret,id=cms-secret,env=CMS_SECRET npm run build
-CMD ["npm", "start"]
+CMD ["sh", "-c", "CMS_SECRET=$(cat /run/secrets/cms-secret) npm start"]
```
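The quoting in that CMD is easy to get wrong: the variable assignment and the actual command must live inside a single sh -c string, otherwise the trailing arguments are silently consumed as positional parameters. A local sketch of the pattern, using /tmp in place of /run/secrets and printenv in place of npm start:

```shell
# Simulate the mounted secret file.
mkdir -p /tmp/run-secrets
printf '%s' 'MY_SUPER_SECRET_VALUE' > /tmp/run-secrets/cms-secret

# Read the file into an environment variable, then run the real command.
sh -c 'CMS_SECRET=$(cat /tmp/run-secrets/cms-secret) printenv CMS_SECRET'
# → MY_SUPER_SECRET_VALUE
```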
Limitations of Docker Secret Mounts and What to Do About Them
Docker secret mounts aren't a silver bullet. While they help to reduce the attack surface around your secrets, they come with certain limitations and caveats that you should be aware of.
Overcoming plaintext secrets
In all of the prior examples we've exposed our secrets as plaintext files on our host machine. This means that anyone with access to said machine can read the secrets directly from the filesystem.
While this is already a big improvement over baking the secrets directly into the image, it still leaves a lot to be desired from a security standpoint. We can do better and move the point at which our secrets are decrypted closer to the point of use.
This can be done using the orchestrator-native secret managers exposed by Docker Swarm or Kubernetes. With a little bit of configuration you can set up encryption at rest, meaning your secrets remain fully encrypted until the moment your container starts.
Want to move the encryption boundary even closer to the point of use? Consider a runtime secret manager such as Infisical, HashiCorp Vault or AWS Secrets Manager. As opposed to the aforementioned orchestrator-native secret managers, these integrate with your application runtime directly instead of your container runtime: your application fetches secrets on demand through a provided library. This way your secrets stay encrypted in the secret manager and are entirely absent from your machine until you actually need them.
Note, however, that once your application has obtained the secret it will no longer be encrypted. At some point your application needs the plaintext secret to make use of it, introducing a potential attack vector you need to handle yourself. In case your secret leaks, you should rotate it. Secret rotation is the process of periodically updating your secrets, rendering any leaked secrets useless after a certain time frame. Most runtime secret managers provide built-in automated support for this, without requiring application restarts.
Be aware that introducing a secret manager also adds complexity, either to your infrastructure or (in the case of a runtime secret manager) to your application code. Depending on your use case this might be an acceptable compromise, but for smaller hobby projects it might be overkill. Security decisions have tradeoffs, be it raw performance, architectural complexity, user experience or something else entirely; the reality is that there is no one-size-fits-all solution.
Runtime secrets are a lie
You may have noticed that the way we've mounted the runtime secret as an environment variable seemed a bit, for lack of a better word, clumsy.
The reason for this becomes apparent when we skip Docker Compose and instead use the plain Docker CLI to build and run our container. The build-time secret mount works as expected via docker build --tag secrets-app --secret id=cms-secret,src=secrets/CMS.txt . However, when you reach for docker run you will notice that there is no --secret flag available.
While this is an oversimplification, we should consider Docker Compose's runtime secrets a convenient facade around a read-only bind mount, rather than a core feature of Docker itself. The plain Docker CLI equivalent for mounting a secret file at runtime would look something like docker run --mount type=bind,src=./secrets/CMS.txt,dst=/run/secrets/cms-secret,readonly secrets-app.
To be fair, the line does get a bit blurry when you leave secret management to Docker Swarm: Docker Compose will request the secret from the Swarm manager before mounting it unencrypted into the container for you, so there is a bit more magic going on. I mention Docker Swarm specifically because Docker Compose lacks built-in support for retrieving secrets from Kubernetes.
Note: You might wonder why I even bother with runtime secrets. After all, if you're not using a secret manager, the secrets are unencrypted files on your host machine, similar to the .env file you might already be using.
You're completely right for pointing this out. When you're not running a full-blown secret manager you can simply add an environment variable and skip over mounting the secret file. Personally I still prefer the separate file approach to create a logical barrier between regular environment variables and our secrets, that also allows an easy migration path to a proper secret manager in the future.
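If you do go the plain environment variable route, the compose equivalent would be something like the sketch below, assuming the value comes from your shell environment or an .env file:

```yaml
services:
  app:
    environment:
      # Substituted from the host environment or an .env file at `docker compose up`.
      - CMS_SECRET=${CMS_SECRET}
```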
Opinionated Recommendations: Scaling from a $5 VPS to Kubernetes
So this is it… Everything you need to know to get started with securely managing secrets in your Node.js application using Docker Compose and Docker BuildKit. Personally I use this approach for all of my hobby projects, most of which are Next.js applications hosted on a 4€ VPS. And it works great.
As such, I recommend that hobbyists at least adopt build-time secret mounts using Docker Compose, as introduced in this post. Adoption complexity is extremely low, and it will give you peace of mind when publishing your images to container registries or sharing them via other means.
When your user base grows, you should revisit this decision. Your next focus will be scaling your application using an orchestrator such as Docker Swarm or Kubernetes. Both offer built-in secret management that integrates seamlessly with your secret mounts. Most likely this won't have a meaningful impact on your application's code, but the architectural complexity will increase.
If you don't run a container orchestrator or want to automate zero-downtime secret rotations, consider adopting a runtime secret manager. These services come with SDKs that allow you to interface with them directly from your code. However adopting the SDK will require substantial changes to your application. You should also keep in mind that these services often come with additional costs in terms of money, infrastructure, and, when self-hosted, maintenance.