I read somewhere in the Kubernetes docs that Kubernetes reads application logs from stdout and stderr of the containers in a pod. I created a new application and configured it to send logs to a remote Splunk HEC endpoint (using the splunk-logback jars) and, at the same time, to the console. By default the console appender in logback writes to System.out, so those logs should be visible with kubectl logs. But that's not happening in my application.
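For reference, the application itself just logs through SLF4J; it is roughly like this (the class name and message are simplified placeholders, not the real code):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggerSplunkApp {
    private static final Logger log = LoggerFactory.getLogger(LoggerSplunkApp.class);

    public static void main(String[] args) throws InterruptedException {
        // Each call should go to both appenders configured below:
        // SPLUNK (HEC) and ASYNC -> STDOUT (console)
        while (true) {
            log.info("test message from logging-pod");
            Thread.sleep(5000);
        }
    }
}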
My logback file:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <Appender name="SPLUNK" class="com.splunk.logging.HttpEventCollectorLogbackAppender">
        <url>${splunk_hec_url}</url>
        <token>${splunk_hec_token}</token>
        <index>${splunk_app_token}</index>
        <disableCertificateValidation>true</disableCertificateValidation>
        <batch_size_bytes>1000000</batch_size_bytes>
        <batch_size_count>${batch_size_count}</batch_size_count>
        <send_mode>sequential</send_mode>
        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern>%msg</pattern>
        </layout>
    </Appender>

    <Appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%msg</pattern>
        </encoder>
    </Appender>

    <Appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="STDOUT" />
    </Appender>

    <root level="INFO">
        <appender-ref ref="SPLUNK"/>
        <appender-ref ref="ASYNC"/>
    </root>
</configuration>
I am able to see the logs in Splunk, and if I exec into the container and start the java application by hand, I can also see the logs on the terminal. But if I let the container start on its own with its default command, the logs only go to Splunk and I can't view them with kubectl logs <POD_NAME>.
The Kubernetes yml file for my logger app:
apiVersion: v1
kind: Pod
metadata:
  name: logging-pod
  labels:
    app: logging-pod
spec:
  containers:
    - name: logging-container
      image: logger-splunk:latest
      command: ["java", "-jar", "logger-splunk-1.0-SNAPSHOT.jar"]
      resources:
        requests:
          cpu: 1
          memory: 1Gi
        limits:
          cpu: 1
          memory: 1Gi