Build docker image without docker installed

Posted 2019-05-10 17:43

Question:

Is it somehow possible to build images without having Docker installed? On the Maven build of my project I'd like to produce a Docker image, but I don't want to force others to install Docker on their machines.

I can think of a VirtualBox image with Docker installed, but that is a rather heavy solution. Is there some way to build the image with a Maven plugin only, some Go code, or an already prepared VirtualBox image made for exactly this purpose?

It boils down to the question of how to use Docker without forcing users to install anything, either just for the build or even for running Docker images.

UPDATE

There are some, not really up-to-date, Maven plugins for virtual machine provisioning with Vagrant or with VirtualBox. I have also found an article about building Docker images without Docker using Bazel. So far I see two options: either I somehow build the images only, or I run some VM with a Docker daemon inside (which could be used not only for builds, but also for integration tests).

Answer 1:

We can create a Docker image without Docker being installed.

Jib Maven and Gradle Plugins

Google has an open source tool called Jib that is relatively new, but quite interesting for a number of reasons. Probably the most interesting thing is that you don’t need docker to run it - it builds the image using the same standard output as you get from docker build but doesn’t use docker unless you ask it to - so it works in environments where docker is not installed (not uncommon in build servers). You also don’t need a Dockerfile (it would be ignored anyway), or anything in your pom.xml to get an image built in Maven (Gradle would require you to at least install the plugin in build.gradle).

Another interesting feature of Jib is that it is opinionated about layers, and it optimizes them in a slightly different way than a conventional multi-layer Dockerfile. Just like in the fat jar, Jib separates local application resources from dependencies, but it goes a step further and also puts snapshot dependencies into a separate layer, since they are more likely to change. There are configuration options for customizing the layout further.
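
As a minimal sketch (the plugin version and image name below are placeholders, not anything from the original question), Jib can build and push an image straight from Maven, without touching a Docker daemon and without any pom.xml changes:

$ mvn compile com.google.cloud.tools:jib-maven-plugin:1.0.2:build \
      -Dimage=registry.example.com/my-app:latest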

Please refer to https://cloud.google.com/blog/products/gcp/introducing-jib-build-java-docker-images-better

For an example with Spring Boot, see https://spring.io/blog/2018/11/08/spring-boot-in-a-container



Answer 2:

Have a look at the following tools:

  1. Fabric8-maven-plugin - http://maven.fabric8.io/ - good Maven integration, uses a remote Docker (OpenShift) cluster for the builds.
  2. Buildah - https://github.com/containers/buildah - builds without a Docker daemon but does have other prerequisites.

Fabric8-maven-plugin

The fabric8-maven-plugin brings your Java applications on to Kubernetes and OpenShift. It provides a tight integration into Maven and benefits from the build configuration already provided. This plugin focuses on two tasks: building Docker images and creating Kubernetes and OpenShift resource descriptors.

fabric8-maven-plugin seems particularly appropriate if you have a Kubernetes / OpenShift cluster available. It uses the OpenShift APIs to build and optionally deploy an image directly to your cluster.

I was able to build and deploy their zero-config spring-boot example extremely quickly: no Dockerfile necessary, just write your application code and it takes care of all the boilerplate.

Assuming you have the basic setup to connect to OpenShift from your desktop already, it will package up the project .jar in a container and start it on OpenShift. The minimum Maven configuration is to add the plugin to your pom.xml build/plugins section:

<plugin>
    <groupId>io.fabric8</groupId>
    <artifactId>fabric8-maven-plugin</artifactId>
    <version>3.5.41</version>
</plugin>

then build+deploy using

$ mvn fabric8:deploy

If you require more control and prefer to manage your own Dockerfile, it can handle that too; this is shown in samples/secret-config.


Buildah

Buildah is a tool that facilitates building Open Container Initiative (OCI) container images. The package provides a command line tool that can be used to:

  • create a working container, either from scratch or using an image as a starting point
  • create an image, either from a working container or via the instructions in a Dockerfile
  • build images in either the OCI image format or the traditional upstream Docker image format
  • mount a working container's root filesystem for manipulation
  • unmount a working container's root filesystem
  • use the updated contents of a container's root filesystem as a filesystem layer to create a new image
  • delete a working container or an image
  • rename a local container
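
A rough sketch of a Dockerfile-based build and push with Buildah, assuming Buildah is installed and using placeholder image and registry names, looks like this:

$ buildah bud -t my-app:latest .
$ buildah push my-app:latest docker://registry.example.com/my-app:latest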


Answer 3:

Google has released Kaniko for this purpose. It should be run as a container, whether in Kubernetes, Docker or gVisor.
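
For illustration only (paths, image and registry names here are placeholders), the Kaniko executor can be started as an ordinary container, given a build context containing a Dockerfile and a destination registry to push to; pushing also requires registry credentials mounted at /kaniko/.docker/config.json:

$ docker run --rm \
      -v "$PWD":/workspace \
      -v "$HOME/.docker/config.json":/kaniko/.docker/config.json:ro \
      gcr.io/kaniko-project/executor:latest \
      --dockerfile=/workspace/Dockerfile \
      --context=dir:///workspace \
      --destination=registry.example.com/my-app:latest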



Answer 4:

I don't want to force others to install docker on their machines.

If by "without Docker installed" you mean without having to install Docker locally on every machine running the build, you can leverage the Docker Engine API, which allows you to call a Docker daemon running on a distant host.

The Docker Engine API is a RESTful API accessed by an HTTP client such as wget or curl, or an HTTP library that is part of most modern programming languages.

For example, the Fabric8 Docker Maven Plugin does just that using the DOCKER_HOST parameter. You'll need a recent Docker version and you'll have to configure at least one Docker daemon properly so it can securely accept remote requests (there are lots of resources on this subject, starting with the official documentation). From then on, your Docker build can be done remotely without having to install Docker locally.
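
A rough sketch of both approaches, assuming a daemon listening on a placeholder host and port, and (for the Maven case) the fabric8 docker-maven-plugin already configured in the pom:

# call the Engine API directly over HTTP
$ curl http://build-host.example.com:2375/version

# or point a Docker-aware build at the remote daemon
$ export DOCKER_HOST=tcp://build-host.example.com:2375
$ mvn package docker:build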



Answer 5:

I was running into the same problems and did not find any solution, thus I developed odagrun. It is a runner for GitLab with an integrated registry API that can update DockerHub, Microbadger, etc.

It is open source and has an MIT license.

It is ideal for creating a Docker image on the fly, without the need for a Docker daemon, a root account, or any base image at all (image: scratch will do). It is currently still in development, but I use it every day.

Requirements

  1. a project repository on GitLab
  2. an OpenShift cluster (an openshift-online-starter cluster will do for most medium/small projects)

An extract showing how the Docker image for this project was created:

# create and push image to ImageStream:
build_rootfs:
  image: centos
  stage: build-image
  dependencies:
    - build
  before_script:
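    # assemble a minimal root filesystem: binaries from the build stage plus the host CA bundle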
    - mkdir -pv rootfs
    - cp -v output/oc-* rootfs/
    - mkdir -pv rootfs/etc/pki/tls/certs
    - mkdir -pv rootfs/bin-runner
    - cp -v /etc/pki/tls/certs/ca-bundle.crt rootfs/etc/pki/tls/certs/ca-bundle.crt
    - chmod -Rv 777  rootfs
  tags:
    - oc-runner-shared
  script:
    - registry_push --rootfs --name=test-$CI_PIPELINE_ID --ISR --config