I know that GKE is driven by Kubernetes underneath. What I still don't get is which part is handled by GKE and which by Kubernetes in the layering. The main purpose of both, as it appears to me, is to manage containers in a cluster. Basically, I am looking for a simpler explanation with an example.
GKE (Google Container Engine) is only a container platform, which Kubernetes can manage. It is not a Kubernetes-like system with "differences".
As mentioned in "Docker and Kubernetes and AppC" (May 2015, so this can change):
You can see Kubernetes used on GKE in this example: "Spinning Up Your First Kubernetes Cluster on GKE" from Rimantas Mocevicius.
The gcloud API still issues Kubernetes commands behind the scenes.
GKE organizes its platform through the Kubernetes master.
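As a minimal sketch of what that looks like in practice (the cluster name and zone below are just placeholders): gcloud provisions and manages the cluster, and from then on you interact with it through ordinary kubectl commands, exactly as with any other Kubernetes cluster.

```sh
# gcloud provisions the Kubernetes control plane and worker nodes for you
gcloud container clusters create my-cluster --zone us-central1-a --num-nodes 3

# Fetch credentials so kubectl can talk to the cluster's API server
gcloud container clusters get-credentials my-cluster --zone us-central1-a

# From here on it is plain Kubernetes: the usual kubectl commands apply
kubectl get nodes
kubectl create deployment hello --image=nginx
```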
In short, without getting into technical details, GKE is managed Kubernetes, similar to how Google's Cloud Composer is managed Apache Airflow and Cloud Dataflow is managed Apache Beam.
So, some of Google Cloud Platform's services (GKE, Cloud Composer, Cloud Dataflow) are managed implementations of various open source technologies (Kubernetes, Airflow, Beam).
In short, GKE is managed/hosted Kubernetes (i.e. it is managed for you, so you can concentrate on running your pod/container applications).
Kubernetes itself handles the core orchestration. In addition, there are several "add-ons" to Kubernetes, some of which are part of what makes GKE.
None of these add-ons come out of the box; although they are fairly easy to set up, you need to maintain them yourself. There is no real "logging" add-on, but there are various projects that cover this (using Logspout, Logstash, Elasticsearch, etc.).
In short, Kubernetes does the orchestration; the rest are services that run on top of Kubernetes.
GKE brings you all these components out of the box, and you don't have to maintain them. They're set up for you and more tightly integrated with the Google Cloud console.
One important thing that everyone needs is the load balancing part: since Pods are ephemeral containers that can be rescheduled anywhere and at any time, they are not static, so ingress traffic needs to be managed separately.
This can be done within Kubernetes by using a DaemonSet to pin a Pod to a specific node and using a `hostPort` for that Pod to bind to the node's IP. Obviously this lacks fault tolerance, so you could run multiple such Pods and do DNS round-robin load balancing. GKE takes care of all this too, with external load balancing. (On AWS it's similar, with an ALB taking care of load balancing for Kubernetes.)
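As a rough sketch of the GKE path (the name `web` and the nginx image are just placeholders): exposing a Deployment through a Service of type LoadBalancer makes GKE provision an external Google Cloud load balancer for you, so the DaemonSet/`hostPort` workaround above isn't needed.

```sh
# Run an application with a few replicas
kubectl create deployment web --image=nginx --replicas=3

# Expose it via a Service of type LoadBalancer; on GKE this provisions
# an external Google Cloud load balancer with a public IP
kubectl expose deployment web --type=LoadBalancer --port=80

# Wait for the EXTERNAL-IP column to be populated
kubectl get service web --watch
```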