Logs sent to console using logback configuration

Posted 2019-07-22 04:59

I read somewhere in the Kubernetes docs that Kubernetes reads application logs from the stdout and stderr of the containers in a pod. I created a new application and configured it to send logs to a remote Splunk HEC endpoint (using the splunk-logback jars) and, at the same time, to the console. By default, the console logs in logback should go to System.out, which should then be visible with kubectl logs <POD_NAME>. But that is not happening in my application.

My logback.xml file:

<?xml version="1.0" encoding="UTF-8"?>

<configuration>

    <Appender name="SPLUNK" class="com.splunk.logging.HttpEventCollectorLogbackAppender">
        <url>${splunk_hec_url}</url>
        <token>${splunk_hec_token}</token>
        <index>${splunk_app_token}</index>
        <disableCertificateValidation>true</disableCertificateValidation>
        <batch_size_bytes>1000000</batch_size_bytes>
        <batch_size_count>${batch_size_count}</batch_size_count>
        <send_mode>sequential</send_mode>

        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern>%msg</pattern>
        </layout>
    </Appender>

    <Appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%msg</pattern>
        </encoder>
    </Appender>

    <Appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="STDOUT" />
    </Appender>

    <root level="INFO">
        <appender-ref ref="SPLUNK"/>
        <appender-ref ref="ASYNC"/>
    </root>

</configuration>

I am able to see the logs in Splunk, and if I log in to the container and start my Java application manually, I can also see the logs in the terminal. But if I let the container start on its own, the logs only go to Splunk and I can't view them with kubectl logs <POD_NAME>.

The Kubernetes YAML file for my logger app:

apiVersion: v1
kind: Pod
metadata:
  name: logging-pod
  labels:
    app: logging-pod
spec:
  containers:
    - name: logging-container
      image: logger-splunk:latest
      command: ["java", "-jar", "logger-splunk-1.0-SNAPSHOT.jar"]
      resources:
        requests:
          cpu: 1
          memory: 1Gi
        limits:
          cpu: 1
          memory: 1Gi

2 Answers

闹够了就滚 · answered 2019-07-22 05:33

According to the Kubernetes documentation, everything a containerized application writes to stdout and stderr is captured by the container runtime and, by default, redirected to a JSON file on the node. You can access it with kubectl logs.
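For reference (an assumption about a typical Docker/containerd node, not something stated in the documentation quoted above), these per-container JSON log files usually live on the node under /var/log/containers/, with file names built from the pod name, namespace, and container name:

$ ls /var/log/containers/
logging-pod_default_logging-container-<container-id>.log
$ tail -f /var/log/containers/logging-pod_default_logging-container-*.log

If kubectl logs shows nothing, checking this file directly on the node tells you whether the output ever left the container at all.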

Let's test this feature by creating a simple pod that writes numbers to stdout:

kubectl create -f https://k8s.io/docs/tasks/debug-application-cluster/counter-pod.yaml

counter-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c,
            'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']

where:
counter - the name of the pod
count - the name of the container inside the "counter" pod

You can access the content of that file by running:

$ kubectl logs counter

You can access the log file of a previously crashed container in the pod with the following command:

$ kubectl logs counter --previous

If the pod has multiple containers, you should also specify the name of the container:

$ kubectl logs counter -c count

When the pod is removed from the cluster, all its logs (current and previous) are also removed.

Make sure stdout is configured correctly in your application and that output to stdout is not being silently dropped for some reason.
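As a quick sanity check (a minimal sketch of my own, not part of the original question, assuming a plain JVM container), you can bypass Logback entirely and write to System.out directly. If these lines show up in kubectl logs, the plumbing between the JVM, the container runtime, and kubectl is fine, and the problem is in the logging configuration:

// StdoutProbe.java - minimal check that System.out reaches the container log
public class StdoutProbe {
    public static void main(String[] args) throws InterruptedException {
        int i = 0;
        while (true) {
            // println appends a newline and flushes System.out
            System.out.println("stdout probe " + i++);
            Thread.sleep(1000);
        }
    }
}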

smile是对你的礼貌 · answered 2019-07-22 05:33

OK, so this finally got resolved. The issue was that the logs were not being flushed.

The %n was missing from the PatternLayout, so everything apparently sat in a buffer and never reached the console.
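For completeness, a minimal sketch of the corrected console appender, assuming the rest of the logback.xml from the question stays the same; the only change is the trailing %n, which terminates each event with a newline:

<Appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
        <pattern>%msg%n</pattern>
    </encoder>
</Appender>

A richer pattern such as %d{ISO8601} %-5level [%thread] %logger{36} - %msg%n works just as well; the part that matters here is the trailing %n.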
