I have 4 nodes (kubelets) configured with the label role=nginx:
master ~ # kubectl get node
NAME          LABELS                                          STATUS
10.1.141.34   kubernetes.io/hostname=10.1.141.34,role=nginx   Ready
10.1.141.40   kubernetes.io/hostname=10.1.141.40,role=nginx   Ready
10.1.141.42   kubernetes.io/hostname=10.1.141.42,role=nginx   Ready
10.1.141.43   kubernetes.io/hostname=10.1.141.43,role=nginx   Ready
I modified the replication controller and added these lines:

spec:
  replicas: 4
  selector:
    role: nginx
But when I fire it up, I get two pods on one host. What I want is one pod on each host. What am I missing?
Prior to DaemonSet being available, you can also specify that your pod uses a host port and set the number of replicas in your replication controller to something greater than or equal to your number of nodes. The host port constraint will allow only one such pod per host, so at most one pod lands on each node (any extras stay pending).
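As a sketch of that workaround (the names and image are illustrative, assuming the v1 API), declaring a hostPort in the pod template is what forces the scheduler to spread the pods:

```yaml
# Hypothetical RC sketch: because each pod claims hostPort 80,
# the scheduler can place at most one of these pods per node.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 4
  selector:
    role: nginx
  template:
    metadata:
      labels:
        role: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 80   # only one pod per host can bind this port
```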
I was able to achieve this by modifying the node labels as follows:
master ~ # kubectl get nodes -o wide
NAME          LABELS                                           STATUS
10.1.141.34   kubernetes.io/hostname=10.1.141.34,role=nginx1   Ready
10.1.141.40   kubernetes.io/hostname=10.1.141.40,role=nginx2   Ready
10.1.141.42   kubernetes.io/hostname=10.1.141.42,role=nginx3   Ready
10.1.141.43   kubernetes.io/hostname=10.1.141.43,role=nginx4   Ready
I then created 4 nginx replication controllers, each referencing one of the nginx{1|2|3|4} roles in its node selector and pod labels.
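A sketch of the first of those four controllers (names and image are illustrative); the nodeSelector is what pins the pod to the node carrying the matching role label:

```yaml
# Hypothetical per-node RC: one replica, pinned via nodeSelector.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-1
spec:
  replicas: 1
  selector:
    role: nginx1
  template:
    metadata:
      labels:
        role: nginx1
    spec:
      nodeSelector:
        role: nginx1     # schedules only onto the node labeled role=nginx1
      containers:
      - name: nginx
        image: nginx
```

The other three controllers are identical except for the nginx2/nginx3/nginx4 labels.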
A replication controller doesn't guarantee one pod per node; the scheduler will simply find the best fit for each pod. I think what you want is the DaemonSet controller, which is still under development. Your workaround posted above would work too.
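Once DaemonSet is available, a manifest along these lines (a sketch, with illustrative names, assuming the extensions/v1beta1 API it was introduced under) would run exactly one nginx pod on every node matching the selector, with no replica count to manage:

```yaml
# Hypothetical DaemonSet sketch: one pod per matching node, automatically.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx
spec:
  template:
    metadata:
      labels:
        role: nginx
    spec:
      nodeSelector:
        role: nginx      # restrict to nodes carrying this label
      containers:
      - name: nginx
        image: nginx
```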