Kubernetes - How to read logs that are written to files

Asked 2020-03-26 06:32

I have a pod in a state of CrashLoopBackOff. The logs I'm seeing from kubectl logs <pod-name> -p present only a partial picture; other logs are written to other files (e.g. /var/log/something/something.log).

Since the pod has crashed, I can't kubectl exec into a shell there and look at the files.

How can I look at the log files produced by a container that is no longer running?

To be more specific, I'm looking for the log file at $HOME/logs/es.log (in the container that failed).

Tags: kubernetes
3 Answers
Lonely孤独者° · answered 2020-03-26 06:57

Have you tried the --previous flag?

For example:

$ kubectl logs <pod-name> <container-name> --previous
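
If the pod has more than one container, the container can also be selected explicitly with the -c flag:

$ kubectl logs <pod-name> -c <container-name> --previous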
三岁会撩人 · answered 2020-03-26 06:59

I was so frustrated at finding no solution to this seemingly common problem that I built a Docker image that tails log files and sends them to stdout, to be used as a sidecar container.


Here's what I did:

  1. I added a volume with emptyDir: {} to the pod
  2. I mounted that volume into my main container, with mountPath set to the directory the app writes its logs to
  3. I added another container to the pod, called "logger", using the log-tracking image I wrote (lutraman/logger-sidecar:v2), and mounted the same volume to /logs (the script reads the logs from this directory)

Then all the logs written to that directory can be accessed with kubectl logs <pod-name> -c logger


Here is an example yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dummy
  labels:
    app: dummy
spec:
  selector:
    matchLabels:
      app: dummy
  template:
    metadata:
      labels:
        app: dummy
    spec:
      volumes:
        - name: logs
          emptyDir: {}
      containers:
        - name: dummy-app # the app that writes logs to files
          image: lutraman/dummy:v2
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
          env:
            - name: MESSAGE
              value: 'hello-test'
            - name: LOG_FILE
              value: '/var/log/app.log'
          volumeMounts:
            - name: logs
              mountPath: /var/log
        - name: logger # the sidecar container tracking logs and sending them to stdout
          image: lutraman/logger-sidecar:v2
          volumeMounts:
            - name: logs
              mountPath: /logs
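
With the manifest applied, the aggregated file logs can then be followed through the sidecar, for example (assuming the YAML above is saved as dummy.yaml):

$ kubectl apply -f dummy.yaml
$ kubectl logs deployment/dummy -c logger -f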

For anyone who is interested, here is how I made the sidecar container:

Dockerfile:

FROM alpine:3.9

# bash is needed by logtrack.sh; tail, sed and inotifyd ship with busybox
RUN apk add bash --no-cache

COPY addTail /addTail
COPY logtrack.sh /logtrack.sh

CMD ["./logtrack.sh"]

addTail:

#!/bin/sh
# Invoked by inotifyd as: addTail EVENT DIR FILE ($3 is the new file's name).
# Tail the file in the background, prefixing each line with the file name,
# and record the tail's PID so logtrack.sh can clean it up on exit.

(exec tail -F logs/$3 | sed "s/^/$3: /" ) &
echo $! >> /tmp/pids

logtrack.sh:

#!/bin/bash

trap cleanup INT

# Kill every background tail started by addTail.
function cleanup() {
  while read pid; do kill $pid; echo killed $pid; done < /tmp/pids
}

: > /tmp/pids

# Tail the log files that already exist in the directory...
for log in $(ls logs); do
  ./addTail n logs $log
done

# ...then watch the directory so newly created files get tailed too
# ("n" = a subfile was created).
inotifyd ./addTail `pwd`/logs:n
霸刀☆藐视天下 · answered 2020-03-26 06:59

Basically you have several options here.

If you want to proceed with your setup as is, you can access the files of an exited container from the host where the container ran.

First, find out which worker node the pod is on:

$ kubectl get pod my-pod -o custom-columns=Node:{.spec.nodeName} --no-headers
my-worker-node

Then, if you have access to this node (e.g. via ssh), you can find the container and copy the file out:

$ ID=$(docker ps -a | grep my-pod | grep -v POD | cut -d" " -f1)
$ docker cp $ID:/my.log .
$ cat my.log
log entry

If you don't have ssh access to the node you can use plugins like this one: https://github.com/kvaps/kubectl-enter

But generally this is not the best practice.

You shouldn't write logs into files; instead, your app should write them to stdout/stderr, which makes debugging much easier.
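
If changing the application is not an option, one common workaround (used, for example, by the official nginx image) is to symlink the log file to stdout in the image, so file writes end up in the container log anyway. A minimal sketch, assuming the app logs to /var/log/app.log:

# in the Dockerfile of the app image (the path is an assumption):
RUN ln -sf /dev/stdout /var/log/app.log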
