I have a problem with installing `node_modules` inside the Docker container and keeping them synchronized with the host. My Docker version is 18.03.1-ce, build 9ee9f40, and my Docker Compose version is 1.21.2, build a133471.

My `docker-compose.yml` looks like:
```yaml
version: '2'
services:
  # Frontend Container.
  frontend:
    build: ./app/frontend
    volumes:
      - ./app/frontend:/usr/src/app
      - frontend-node-modules:/usr/src/app/node_modules
    ports:
      - 3000:3000
    environment:
      NODE_ENV: ${ENV}
    command: npm start

# Define all the external volumes.
volumes:
  frontend-node-modules: ~
```
My `Dockerfile`:
```dockerfile
# Set the base image.
FROM node:10

# Create and define the working directory.
RUN mkdir /usr/src/app
WORKDIR /usr/src/app

# Install the application's dependencies.
COPY package.json ./
COPY package-lock.json ./
RUN npm install
```
The trick with the external volume is described in a lot of blog posts and Stack Overflow answers. For example, this one.
The application works great. The source code is synchronized. The hot reloading works great too.
The only problem I have is that the `node_modules` folder is empty on the host. Is it possible to synchronize the `node_modules` folder that is inside the Docker container with the host?
I've already read these answers:

- docker-compose volume on node_modules but is empty
- Accessing node_modules after npm install inside Docker

Unfortunately, they didn't help me a lot. I don't like the first one, because I don't want to run `npm install` on my host due to possible cross-platform issues (e.g. the host is Windows or Mac and the Docker container is Debian 8 or Ubuntu 16.04). The second one is not good for me either, because I'd like to run `npm install` in my `Dockerfile` instead of running it after the Docker container is started.
Also, I've found this blog post. The author tries to solve the same problem I am faced with, but there `node_modules` won't be synchronized, because we're just copying them from the Docker container to the host.

I'd like my `node_modules` inside the Docker container to be synchronized with the host. Please take into account that I want:
- to install `node_modules` automatically instead of manually
- to install `node_modules` inside the Docker container instead of on the host
- to have `node_modules` synchronized with the host (if I install some new package inside the Docker container, it should be synchronized with the host automatically, without any manual actions)
I need to have `node_modules` on the host, because:

- of the possibility to read the source code when I need it
- the IDE needs `node_modules` to be installed locally so that it has access to the `devDependencies`, such as `eslint` or `prettier`. I don't want to install these `devDependencies` globally.
Thanks in advance.
At first, I would like to thank David Maze and trust512 for posting their answers. Unfortunately, they didn't help me solve my problem.

I would like to post my answer to this question. The solution consists of my `docker-compose.yml`, my `Dockerfile`, and, last but not least, my `entrypoint.sh`.

The trickiest part here is to install the `node_modules` into a `node_modules` cache directory (`/usr/src/cache`), which is defined in our `Dockerfile`. After that, `entrypoint.sh` will move the `node_modules` from the cache directory (`/usr/src/cache`) to our application directory (`/usr/src/app`). Thanks to this, the entire `node_modules` directory will appear on our host machine.

Looking at my question above, I wanted:
- to install `node_modules` automatically instead of manually
- to install `node_modules` inside the Docker container instead of on the host
- to have `node_modules` synchronized with the host

The first thing is done: `node_modules` are installed automatically. The second thing is done too: `node_modules` are installed inside the Docker container (so there will be no cross-platform issues). And the third thing is done too: the `node_modules` that were installed inside the Docker container will be visible on our host machine, and they will be synchronized! If we install some new package inside the Docker container, it will be synchronized with our host machine at once.

An important thing to note: truly speaking, a new package installed inside the Docker container will appear in `/usr/src/app/node_modules`. As this directory is synchronized with our host machine, the new package will appear in our host machine's `node_modules` directory too. But `/usr/src/cache/node_modules` will still contain the old build at this point (without the new package). Anyway, that is not a problem for us: during the next `docker-compose up --build` (`--build` is required), Docker will re-install the `node_modules` (because `package.json` has changed) and the `entrypoint.sh` file will move them to our `/usr/src/app/node_modules`.
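For illustration, the `entrypoint.sh` logic can be sketched as follows. This is a simplified sketch, not my exact file; the `CACHE_DIR`/`APP_DIR` variables and the `sync_node_modules` function are only there to make it easy to try outside a container, and the package names are hypothetical.

```shell
#!/bin/sh
# Sketch of the entrypoint.sh idea. It assumes the Dockerfile ran
# `npm install` inside /usr/src/cache at image build time (e.g.
# WORKDIR /usr/src/cache, COPY package*.json ./, RUN npm install).
CACHE_DIR="${CACHE_DIR:-/usr/src/cache}"
APP_DIR="${APP_DIR:-/usr/src/app}"

sync_node_modules() {
  # Copy the node_modules produced at image build time from the cache
  # into the bind-mounted application directory, so they reach the host.
  if [ -d "$CACHE_DIR/node_modules" ]; then
    mkdir -p "$APP_DIR/node_modules"
    cp -R "$CACHE_DIR/node_modules/." "$APP_DIR/node_modules/"
  fi
}

sync_node_modules

# Hand control over to the container command (e.g. `npm start`).
exec "$@"
```

In the container, this script would be set as the `ENTRYPOINT`, and the `command` from `docker-compose.yml` becomes `"$@"`.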
You should take into account one more important thing. If you `git pull` the code from the remote repository or `git checkout your-teammate-branch` while Docker is running, some new packages may have been added to the `package.json` file. In this case, you should stop Docker with CTRL + C and bring it up again with `docker-compose up --build` (`--build` is required). If your containers are running as a daemon, just execute `docker-compose stop` to stop the containers and bring them up again with `docker-compose up --build` (`--build` is required).

If you have any questions, please let me know in the comments.
Hope this helps.
I wouldn't suggest overlapping volumes; although I haven't seen any official docs ban it, I've had some issues with it in the past. How I do it is:
The above might be achieved by shortening your compose file a bit:
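For example, a sketch that takes the service definition from the question and simply drops the named volume (an illustration, not an exact file):

```yaml
version: '2'
services:
  frontend:
    build: ./app/frontend
    volumes:
      # node_modules now lives inside the bind mount itself, so whatever
      # the container installs shows up on the host.
      - ./app/frontend:/usr/src/app
    ports:
      - 3000:3000
    environment:
      NODE_ENV: ${ENV}
    command: npm start
```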
That means you might need two Dockerfiles - one for local development and one for deploying a fat image with all the application dist files layered inside.
That said, consider a development Dockerfile:
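A minimal sketch of such a development Dockerfile (an assumption about its shape, not an exact file): nothing is baked into the image; dependencies get installed at container start, straight into the bind-mounted directory.

```dockerfile
# Development image: no application files or dependencies are baked in.
FROM node:10

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# The default command installs dependencies into the bind-mounted
# /usr/src/app, so node_modules (with Linux-built binaries) appears on the
# host. docker-compose's `command:` can chain the app start on top, e.g.
#   command: sh -c "npm install && npm start"
CMD ["npm", "install"]
```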
The above makes the application create a full `node_modules` installation and map it to your host location, while the docker-compose-specified `command` would start your application off.
There are three things going on here:

1. When you run `docker build` or `docker-compose build`, your Dockerfile builds a new image containing a `/usr/src/app/node_modules` directory and a Node installation, but nothing else. In particular, your application isn't in the built image.
2. When you run `docker-compose up`, the `volumes: ['./app/frontend:/usr/src/app']` directive hides whatever was in `/usr/src/app` and mounts host system content on top of it.
3. Then the `volumes: ['frontend-node-modules:/usr/src/app/node_modules']` directive mounts the named volume on top of the `node_modules` tree, hiding the corresponding host system directory.

If you were to launch another container and attach the named volume to it, I expect you'd see the `node_modules` tree there. For what you're describing, you just don't want the named volume: delete the second line from the `volumes:` block and the `volumes:` section at the end of the `docker-compose.yml` file.

Thanks Vladyslav Turak for the answer with
`entrypoint.sh`, where we copy `node_modules` from the container to the host.

I implemented a similar thing, but I ran into an issue with the husky, @commitlint, and tslint npm packages: I couldn't push anything into the repository. The reason: I had copied `node_modules` from Linux to Windows. In my case, <5% of the files were different (`.bin` and most of the `package.json` files) and 95% were the same (example: image with diff).

So I returned to the solution of running `npm install` for Windows first (for the IDE and debugging), while the Docker image contains the Linux version of `node_modules`.

I know that this was resolved, but what about:
`Dockerfile`:

`docker-compose.yml`:

`volumes/app/package.json`:

After a run, `node_modules` will be present in your volumes, but its contents are generated within the container, so there are no cross-platform problems.
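A sketch of what the compose side of this could look like (assumptions: a plain Node image, and the `volumes/app` directory above, which holds `package.json`, mounted as the working directory):

```yaml
version: '2'
services:
  app:
    build: .
    volumes:
      - ./volumes/app:/usr/src/app
    working_dir: /usr/src/app
    # npm install runs inside the container, so node_modules is generated
    # there but lands in the mounted volume on the host.
    command: npm install
```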