`docker stack` specific log locations

Posted 2019-08-25 10:47

Is there a way to specify a logging file for a deployed docker stack in the docker-compose.yml file?

I've been suffering intermittent crashes in docker services and stacks running on a docker swarm that leave no log trace behind, and I need some rotating logs I can look at when these happen.

My docker info output:

$ docker info
Containers: 114
 Running: 4
 Paused: 0
 Stopped: 110
Images: 95
Server Version: 17.09.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog

2 Answers
贼婆χ
Answered 2019-08-25 11:26

As for your question about rotating logs, you can add the following option to each service in your stack file:

  logging:
    driver: "json-file"
    options:
      max-size: "5m"
      max-file: "3"
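In context, a minimal stack file with those options might look like this (a sketch; the service name and image are placeholders):

```yaml
version: "3.3"
services:
  web:
    image: nginx:alpine
    logging:
      driver: "json-file"
      options:
        max-size: "5m"   # rotate once a log file reaches 5 MB
        max-file: "3"    # keep at most 3 rotated files per container
```

With these options, each container's json-file log is capped at roughly 15 MB instead of growing without bound.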

You can also use docker service logs service_name to see the logs of a specific service.

By default, when a container exits it is not removed, so you can still see its logs.

EDIT:

You could use the following to redirect that output into a file (the -f flag follows the output):

(nohup docker service logs service -f  >> /path-to-file/file.log)&

And the following to check which docker service logs outputs are already being redirected:

ps aux | grep "docker service logs"
Viruses.
Answered 2019-08-25 11:30

There are several reasons why you may not be getting what you want.

  1. Yes, @hichamx above is correct that you can change the file-size settings of the default JSON log driver for services, but that likely isn't your issue.

  2. Logs are deleted when containers are deleted. With a Swarm service, those "tasks" only hang around up to the Task History Retention Limit shown in docker info. This defaults to 5, which means 4 old tasks will hang around. If your service tasks are crashing quickly after startup, you get the idea... logs aren't around very long. Use docker swarm update --task-history-limit to set a bigger number, disk-space permitting.
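For example, raising the retention limit is a one-liner (a sketch; 20 is an arbitrary value, size it to your disk space):

docker swarm update --task-history-limit 20

You can then confirm the new value in the Swarm section of docker info, which lists "Task History Retention Limit".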

  3. docker service logs only shows what your apps report to stdout/stderr inside the containers. It won't show you the actions the Swarm orchestrator is taking. Use docker events for that. It doesn't store much history, so you'll need to keep it running in your shell or capture its output some other way for later reference. events tells you things like container creation/deletion/failure.
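As a sketch of capturing orchestrator activity (the service name and log path are placeholders; the com.docker.swarm.service.name label is set by Swarm on service task containers):

# Watch only container-level events for one service's tasks
docker events --filter 'type=container' --filter 'label=com.docker.swarm.service.name=myservice'

# Or capture everything to a file in the background for later inspection
docker events > /var/log/docker-events.log &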

  4. Ideally, if it's something the app is doing internally, it'll report an error on exit, which docker will store in the container's inspect metadata under .State.Error. One-liners are cool but sometimes not as flexible as we need them to be. Here's an example of one that first lists all tasks in a service, then gets the container ID for each task, then inspects each container and shows the Error, if any. Unfortunately it only works against containers on the local server you're connected to, but hopefully you get the idea of the nested commands inside each $().

docker inspect -f "{{.State.Error}}" $(docker inspect -f "{{.Status.ContainerStatus.ContainerID}}" $(docker service ps <servicename> -q))
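The same idea can be written as a loop, which is easier to extend (a sketch; it has the same limitation of only seeing containers on the local node, and <servicename> is the placeholder from above):

for task in $(docker service ps <servicename> -q); do
  cid=$(docker inspect -f '{{.Status.ContainerStatus.ContainerID}}' "$task")
  echo "task $task -> error: $(docker inspect -f '{{.State.Error}}' "$cid")"
done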
