I have Kubernetes set up and running a gRPC service in a pod. I am successfully hitting an endpoint on the service, which has a print() statement in it, but I see no output in the logs. I have seen this before when I was running a (cron) job in Kubernetes and the logs only appeared after the job was done (as opposed to while the job was running). Is there a way to make Kubernetes write to the log right away? Is there a setting I can apply (either cluster-level or just for the pod)? Thanks for any help in advance!
One possibility is that the container is starved for CPU. We ran into this issue when running locally on minikube with the same resource limits that are enforced in our larger cluster. Try bumping the CPU resource limits on your pod.

If your CPU limit is around 20-40m, that may be too low to run a full Flask/Python app. Try raising it to closer to 100m; it will not crush your local machine.

Here is an example Kubernetes deployment YAML showing where those limits go:
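A minimal Deployment sketch with raised CPU limits (the names and image below are placeholders, not taken from the question):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-service            # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-service
  template:
    metadata:
      labels:
        app: grpc-service
    spec:
      containers:
        - name: grpc-service
          image: example/grpc-service:latest   # placeholder image
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 200m        # raised from a lower value such as 40m
              memory: 256Mi
```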
Found the root cause. Specifically, found it at "Python app does not print anything when running detached in docker". It was not that the print statement was not executing; its output was being buffered by Python's stdout and only flushed later. The solution is to set the environment variable PYTHONUNBUFFERED (any non-empty value disables buffering; PYTHONUNBUFFERED=1 is the conventional choice), or to start the interpreter with python -u. Doing the above will make the log lines appear immediately.
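If you would rather fix this in code instead of the environment, a small sketch of two equivalent approaches (the log message is illustrative; reconfigure() requires Python 3.7+):

```python
import sys

# Option 1: flush each print explicitly so the line reaches the
# container log as soon as the statement runs.
print("handling request", flush=True)

# Option 2: switch stdout to line buffering once at startup
# (sys.stdout.reconfigure exists since Python 3.7); every print
# is then flushed at each newline without per-call flush=True.
if hasattr(sys.stdout, "reconfigure"):
    sys.stdout.reconfigure(line_buffering=True)
print("this line is flushed at the newline")
```

Either approach makes `kubectl logs -f` show output as the endpoint handles requests, rather than in bursts when the buffer fills.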