I am running a Google Compute Engine instance with CoreOS (image: coreos-stable-1688-4-0-v20180327). Copying files from Cloud Storage to the local filesystem with gsutil appears to succeed, yet none of the supposedly copied files actually shows up on the filesystem. Running the same copy command on a compute instance without a container does work, so I suspect the issue is with the container, though I'm not sure what about it causes the copy to fail.
The command is: gsutil cp -r gs://my-bucket ./
You're hitting the issue described in https://github.com/GoogleCloudPlatform/gsutil/issues/453. An alias is set up for gsutil that runs it inside a container, which does not have access to the host filesystem, so the files are copied to that container's filesystem rather than to your GCE host's. Some workarounds are suggested in that thread.
EDIT for better reading (info from the GitHub issue thread):
Looks like GCE VMs have a nifty alias set up for gsutil:
$ type gsutil
gsutil is aliased to `(docker images google/cloud-sdk || docker pull google/cloud-sdk) > /dev/null;docker run -t -i --net=host -v /home/<USER>/.config:/root/.config google/cloud-sdk gsutil'
Potential workarounds on the CoreOS instance:
- Clone the gsutil repo, run git checkout <tag> to fetch the commit for the most recent release, install Python via these instructions, then run that local copy of gsutil directly on the CoreOS host rather than the containerized version.
- Override the gsutil alias, or define a new one, so that it mounts some part of the host filesystem into the container; this lets you access the newly written files after the container terminates.
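For the second workaround, a minimal sketch of an overridden alias might look like the following. It keeps the stock alias's config mount and adds a bind mount of the current working directory; the container-side path /output and the -w working-directory flag are illustrative choices, not part of the original alias:

```shell
# Redefine the gsutil alias so the host's current directory is
# mounted inside the container at /output and used as the working
# directory. Files gsutil writes there then persist in $PWD on the
# host after the container exits.
alias gsutil='docker run -t -i --net=host \
  -v /home/<USER>/.config:/root/.config \
  -v "$PWD":/output -w /output \
  google/cloud-sdk gsutil'
```

With this alias in place, gsutil cp -r gs://my-bucket ./ writes into the container's /output, which is the directory you ran the command from on the host.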