Docker - /bin/sh: not found - bad ELF interpreter

Published 2019-04-12 17:31

Question:

UPDATE – Old question title:
Docker - How to execute unzipped/unpacked/extracted binary files during docker build (add files to docker build context)

--

I've been trying (for half a day :P) to execute a binary extracted during a docker build.

My Dockerfile contains roughly:

...
COPY setup /tmp/setup
RUN \
unzip -q /tmp/setup/x/y.zip -d /tmp/setup/a/b
...

Within directory b there is a binary file, imcl.

The error I was getting:

/bin/sh: 1: /tmp/setup/a/b/imcl: not found

What was confusing was that listing directory b (inside the Dockerfile, during the build) right before executing the binary showed the correct file in place:

RUN ls -la /tmp/setup/a/b/imcl  
-rwxr-xr-x  1 root root 63050 Aug  9  2012 imcl

RUN file /tmp/setup/a/b/imcl  
ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.2.5, not stripped

Being a Unix noob, at first I thought it was a permission issue (root on the host differing from root in the container, or something) but after checking, the UID was 0 for both, so it got even weirder.

Docker recommends against using sudo, so I tried su combinations:

su - -c "/tmp/setup/a/b/imcl"
su - root -c "/tmp/setup/a/b/imcl"

Both of these returned:

stdin: is not a tty
-su: /tmp/setup/a/b: No such file or directory

Well heck, I even went and defied Docker's recommendations and changed my base image from debian:jessie to the bloatish ubuntu:14.04 so I could try with sudo :D

Guess how that turned out?

sudo: unable to execute /tmp/setup/a/b/imcl: No such file or directory

Randomly googling I happened upon a piece of Docker docs which I believe is the reason to all this head bashing: "Note: docker build will return a no such file or directory error if the file or directory does not exist in the uploaded context. This may happen if there is no context, or if you specify a file that is elsewhere on the Host system. The context is limited to the current directory (and its children) for security reasons, and to ensure repeatable builds on remote Docker hosts. This is also the reason why ADD ../file will not work."

So my question is:

  • Is there a workaround to this?
  • Is there a way to add extracted files to docker build context during a build (within the dockerfile)?

Oh, and the machine I'm building this on is not connected to the internet...

I guess what I'm asking is similar to this (though I see no answer):
How to include files outside of Docker's build context?

So am I out of luck?

Do I need to unzip with a shell script before sending the build context to the Docker daemon, so that all files are used exactly as they were during the build command?

UPDATE:
Meh, the build context actually wasn't the problem. I tested this and was able to execute unpacked binary files during docker build.

My problem is actually this one: CentOS 64 bit bad ELF interpreter

Using debian:jessie and ubuntu:14.04 as base images only gave the No such file or directory error, but trying centos:7 and fedora:23 gave a better error message:

/bin/sh: /tmp/setup/a/b/imcl: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory

So that led me to the conclusion that this is actually a problem of running a 32-bit application on a 64-bit system.
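
The misleading error can be diagnosed directly: the kernel reports No such file or directory when the interpreter path embedded in the ELF header (here /lib/ld-linux.so.2) does not exist, not when the binary itself is missing. A minimal sketch of such a check (the helper function is my own, not from the question; readelf comes from binutils):

```shell
#!/bin/sh
# Print the ELF interpreter a binary requests and check that it exists.
# If the interpreter (e.g. /lib/ld-linux.so.2 for 32-bit binaries) is
# missing, executing the binary fails with "No such file or directory".
check_interpreter() {
    bin="$1"
    # PT_INTERP holds the interpreter path; readelf is part of binutils.
    interp=$(readelf -l "$bin" 2>/dev/null \
        | sed -n 's/.*program interpreter: \(.*\)\]/\1/p')
    if [ -n "$interp" ] && [ ! -e "$interp" ]; then
        echo "missing interpreter: $interp"
        return 1
    fi
    echo "interpreter ok: ${interp:-none found}"
}

# Usage (path from the question):
#   check_interpreter /tmp/setup/a/b/imcl
```

On a 64-bit image without 32-bit support, this would report /lib/ld-linux.so.2 as missing for imcl.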

Now the solution would be simple if I had internet access and repos enabled:

apt-get install ia32-libs

Or

yum install glibc.i686

However, I don't... :[

So the question becomes now:

  • What would be the best way to achieve the same result without repos or an internet connection?

According to IBM, the precise libraries I need are gtk2.i686 and libXtst.i686, and possibly libstdc++:

[root@localhost]# yum install gtk2.i686
[root@localhost]# yum install libXtst.i686
[root@localhost]# yum install compat-libstdc++
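
One way to get those i686 packages into the image without repos is to download them (with their dependency closure) on a connected machine using yumdownloader --resolve from yum-utils, copy the RPMs into the build context, and install them from local files. A sketch, where the rpms/ directory name and the exact package file set are my own assumptions:

```dockerfile
FROM centos:7
# RPMs fetched beforehand on a machine with internet access, e.g.:
#   yumdownloader --resolve gtk2.i686 libXtst.i686
# (package names may vary by release), then copied into the build
# context under rpms/
COPY rpms/ /tmp/rpms/
RUN rpm -Uvh /tmp/rpms/*.rpm && rm -rf /tmp/rpms
```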

Answer 1:

UPDATE:

So the question becomes now:

  • What would be the best way to achieve the same result without repos or an internet connection?

You could use various non-official 32-bit images available on DockerHub, search for debian32, ubuntu32, fedora32, etc.

If you can't trust them, you can build such an image yourself, and you can find instructions on DockerHub too, e.g.:

  • on f69m/ubuntu32 home page, there is a link to GitHub repo used to generate images;
  • on hugodby/fedora32 home page, there is an example of commands used to build the image;
  • and so on.

Alternatively, you can prepare your own image based on some official image and add 32-bit packages to it.

Say, you can use a Dockerfile like this:

FROM debian:wheezy
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y ia32-libs

...and use the produced image as a base (with the FROM directive) for the images you're building without internet access.
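
For example, a downstream Dockerfile on the offline host would then start from that prepared image (the mybase32 tag is illustrative, not from the answer):

```dockerfile
# mybase32 is whatever tag the prepared 32-bit-capable image was given;
# unzip must also be present in that base image.
FROM mybase32
COPY setup /tmp/setup
RUN unzip -q /tmp/setup/x/y.zip -d /tmp/setup/a/b \
 && /tmp/setup/a/b/imcl
```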

You can even create an automated build on DockerHub that will rebuild your image automatically when your Dockerfile (posted, say, on GitHub) or the mainline image (debian in the example above) changes.


No matter how you obtained an image with 32-bit support (whether you used an existing non-official image or built your own), you can then store it to a tar archive using the docker save command and import it on the offline machine using docker load.
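
A sketch of that save/load round-trip (the image and file names are illustrative; the helper functions are my own):

```shell
#!/bin/sh
# Move a Docker image to an offline host without a registry.
# docker save writes the image (all layers plus metadata) to a tar
# archive; docker load restores it on another host.

export_image() {   # export_image IMAGE TARFILE
    docker save -o "$2" "$1"
}

import_image() {   # import_image TARFILE
    docker load -i "$1"
}

# On the connected machine:
#   docker build -t mybase32 .
#   export_image mybase32 mybase32.tar
# Copy mybase32.tar to the offline machine (USB, scp, ...), then:
#   import_image mybase32.tar
# After that, FROM mybase32 works in Dockerfiles on the offline host.
```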



Answer 2:

You're in luck! You can do this using the ADD command. The docs say:

If <src> is a local tar archive in a recognized compression format (identity, gzip, bzip2 or xz) then it is unpacked as a directory... When a directory is copied or unpacked, it has the same behavior as tar -x: the result is the union of:

  1. Whatever existed at the destination path and
  2. The contents of the source tree, with conflicts resolved in favor of “2.” on a file-by-file basis.
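
Note, though, that this auto-extraction applies only to local tar archives in those compression formats; a zip file (like the y.zip in the question) is copied verbatim, so it would need repacking as a tar first. A sketch, where the y.tar.gz name is my own stand-in for the repacked archive:

```dockerfile
# A local tar archive given to ADD is unpacked automatically at the
# destination; a .zip would just be copied as-is.
ADD setup/x/y.tar.gz /tmp/setup/a/b/
RUN /tmp/setup/a/b/imcl
```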