I run new modules of my system in Google-Container-Engine. I would like to bring stdout and stderr from them (running in pods) to my centralised logstash. Is there an easy way to forward logs from pods to external logging service, e.g., logstash or elasticsearch?
Since Elasticsearch 6.0 you can use Filebeat (see the Elastic blog). In short:
- Download the Filebeat DaemonSet manifest
- Update the Elasticsearch connection details
- Deploy it to Kubernetes
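As a sketch of the "update connection details" step: the Filebeat DaemonSet manifest published by Elastic exposes the Elasticsearch endpoint as container environment variables. The variable names below follow that upstream manifest, and the host and credentials are placeholders; check them against the version of the manifest you actually downloaded.

```yaml
# Fragment of the Filebeat DaemonSet container spec: point the output
# at your own Elasticsearch instance by editing these env vars.
env:
- name: ELASTICSEARCH_HOST
  value: "elasticsearch.example.internal"  # placeholder: your ES endpoint
- name: ELASTICSEARCH_PORT
  value: "9200"
- name: ELASTICSEARCH_USERNAME
  value: "elastic"                         # placeholder credentials
- name: ELASTICSEARCH_PASSWORD
  value: "changeme"
```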
You could try installing the following Kubernetes addon: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch
I haven't tried it myself, but I'm also looking for proper logging. The GCE logging is somewhat limited, in my opinion.
You can create a sink from the logs in Stackdriver to Pub/Sub, and then use the logstash-input-google_pubsub plugin, which exports all the logs to Elasticsearch via Logstash (see the plugin's source code).

Export logs to Pub/Sub:
1. Create a topic and a subscription in Pub/Sub (follow the instructions here).
2. In the Log Viewer page, click "Create Export". Make sure you are filtered to your app's logs (GKE Container -> cluster-name, app-name), enter a sink name, choose Cloud Pub/Sub as the Sink Service, and choose your topic as the Sink Destination. Logs from now on are exported to Pub/Sub.
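The console steps above can also be done from the CLI. A sketch, where the project, topic, subscription, and sink names are all placeholders and the log filter should be adjusted to your cluster and app:

```shell
# Placeholders: my-project, gke-logs, gke-logs-sub, gke-logs-sink
gcloud pubsub topics create gke-logs --project my-project
gcloud pubsub subscriptions create gke-logs-sub --topic gke-logs --project my-project

# Create the export sink from Stackdriver to the topic, filtered to your app's logs
gcloud logging sinks create gke-logs-sink \
  pubsub.googleapis.com/projects/my-project/topics/gke-logs \
  --log-filter 'resource.type="container" AND resource.labels.cluster_name="cluster-name"' \
  --project my-project
```

Note that the service account reported by the sink-create command needs publish permission on the topic before logs will flow.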
Configure the Logstash pipeline. Here is the pubsub-elastic.conf file, and here is my Dockerfile:
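The contents of pubsub-elastic.conf were not carried over above. As a sketch, a minimal pipeline using the logstash-input-google_pubsub plugin might look like the following, where the project, topic, subscription, key file path, and Elasticsearch host are all placeholders:

```
input {
  google_pubsub {
    project_id    => "my-gcloud-project-id"            # placeholder
    topic         => "gke-logs"                        # placeholder
    subscription  => "gke-logs-sub"                    # placeholder
    json_key_file => "/etc/logstash/credentials.json"  # service-account key
    codec         => "json"
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch.example.internal:9200"]   # placeholder
    index => "gke-logs-%{+YYYY.MM.dd}"
  }
}
```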
Now build the image and run it. If running on Kubernetes, use the following deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: logstash-input-google-pubsub
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: logstash-input-google-pubsub
    spec:
      containers:
      - name: logstash-input-google-pubsub
        image: us.gcr.io/my-gcloud-project-id/logstash-input-google_pubsub:1.0.0
Build your image and push it to the registry, then create the deployment:

kubectl create -f deployment.yaml

Done!
I decided to log directly to Elasticsearch, an external virtual machine that can be accessed at elasticsearch.c.my-project.internal (I am on Google Cloud Platform). It is quite easy. First, set up a Service of type ExternalName that points to the Elasticsearch instance:
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch
    kubernetes.io/name: "elasticsearch"
spec:
  type: ExternalName
  externalName: elasticsearch.c.my-project.internal
  ports:
  - port: 9200
    targetPort: 9200
Next, deploy fluentd-elasticsearch as a DaemonSet. fluentd-elasticsearch will automatically connect to the Service named elasticsearch-logging (based on the fluentd-elasticsearch deployment definition):

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    tier: monitoring
    app: fluentd-logging
    k8s-app: fluentd-logging
spec:
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: gcr.io/google_containers/fluentd-elasticsearch:1.19
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
Use kubectl logs fluentd-elasticsearch-... to check whether you were able to connect to the Elasticsearch instance. Now you can access Kibana and see the logs.
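A sketch of that check, assuming the labels from the DaemonSet manifest above (the pod name suffix is a placeholder that kubectl will show you):

```shell
# List the fluentd pods created by the DaemonSet (label from the manifest above)
kubectl get pods -n kube-system -l name=fluentd-elasticsearch

# Tail the logs of one of them to verify it connected to Elasticsearch
kubectl logs -n kube-system fluentd-elasticsearch-xxxxx --tail=50
```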