Kubernetes pod can't connect to itself through its service

Posted 2019-02-09 07:07

I have a kubernetes single-node setup (see https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html )

I have a service and a replication controller creating pods. Those pods need to connect to the other pods behind the same service. (Note: this is ultimately so that I can get MongoDB running with replica sets (non-localhost), but this simple example demonstrates the problem that mongo runs into.)

When I connect to the service from any node, the connection is distributed (as expected) to one of the pods. This works until the service load-balances back to the pod I am connecting from, at which point the connection hangs.

Sorry to be verbose, but I am going to attach all my files so that you can see what I'm doing in this little example.

Dockerfile:

FROM ubuntu
MAINTAINER Eric H
RUN apt-get update && apt-get install -y netcat
EXPOSE 8080
COPY ./entry.sh /
ENTRYPOINT ["/entry.sh"]

Here is the entry point

#!/bin/bash
# wait for a connection, then tell them who we are 
while : ; do 
    echo "hello, the date=`date`; my host=`hostname`" | nc -l 8080 
    sleep .5
done

Build the image from the Dockerfile:

docker build -t echoserver .
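
(Not part of the original steps, but as a quick sanity check one can run the image locally and poke it with nc before pushing; the container name and port mapping below are purely illustrative.)

# Run the echo server locally, publishing container port 8080 on the host
docker run -d --name echoserver-test -p 8080:8080 echoserver

# Should print one "hello, the date=...; my host=..." line
# (depending on the nc variant, you may need to Ctrl-C afterwards)
nc localhost 8080

# Clean up the throwaway container
docker rm -f echoserver-test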

Tag and push it to my k8s cluster's registry:

docker tag -f echoserver:latest 127.0.0.1:5000/echoserver:latest
docker push 127.0.0.1:5000/echoserver:latest
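
(Assuming the local registry speaks the v2 API, which may or may not hold for this older CoreOS setup, the push can be verified with a couple of curl calls; this is just a sanity check, not from the original post.)

# List repositories known to the registry
curl http://127.0.0.1:5000/v2/_catalog

# List tags for the echoserver repository
curl http://127.0.0.1:5000/v2/echoserver/tags/list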

Here is my Replication Controller

apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    role: echo-server
    app: echo
  name: echo-server-1
spec:
  replicas: 3
  template:
    metadata:
      labels:
        entity: echo-server-1
        role: echo-server
        app: echo
    spec:
      containers:
      - name: echo-server-1
        image: 127.0.0.1:5000/echoserver:latest
        ports:
        - containerPort: 8080

And finally, here is my Service

apiVersion: v1
kind: Service
metadata:
  labels:
    app: echo
    role: echo-server
    name: echo-server-1
  name: echo-server-1
spec:
  selector:
    entity: echo-server-1
    role: echo-server
  ports:
    - port: 8080
      targetPort: 8080

Create my service: kubectl create -f echo.service.yaml

Create my rc: kubectl create -f echo.controller.yaml

Get my PODs

kubectl get po
NAME                  READY     STATUS    RESTARTS   AGE
echo-server-1-jp0aj   1/1       Running   0          39m
echo-server-1-shoz0   1/1       Running   0          39m
echo-server-1-y9bv2   1/1       Running   0          39m

Get the service IP

kubectl get svc
NAME            CLUSTER_IP   EXTERNAL_IP   PORT(S)    SELECTOR                                AGE
echo-server-1   10.3.0.246   <none>        8080/TCP   entity=echo-server-1,role=echo-server   39m
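
(A quick way to confirm the selector is matching the pods, not shown in the original post, is to look at the service's endpoints; every pod IP should be listed there.)

# The pod IPs backing the service appear under "Endpoints"
kubectl describe svc echo-server-1

# Or fetch the endpoints object directly
kubectl get endpoints echo-server-1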

Exec into one of the pods: kubectl exec -t -i echo-server-1-jp0aj /bin/bash

Now connect to the service multiple times. It returns the greeting from the other pods, but whenever the service routes back to the pod I am connecting from, the connection hangs.

root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
hello, the date=Mon Jan 11 22:02:38 UTC 2016; my host=echo-server-1-y9bv2
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
^C
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
hello, the date=Mon Jan 11 22:02:43 UTC 2016; my host=echo-server-1-shoz0
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
^C
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
hello, the date=Mon Jan 11 22:31:19 UTC 2016; my host=echo-server-1-y9bv2
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
hello, the date=Mon Jan 11 22:31:23 UTC 2016; my host=echo-server-1-shoz0
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
hello, the date=Mon Jan 11 22:31:26 UTC 2016; my host=echo-server-1-y9bv2
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080
hello, the date=Mon Jan 11 22:31:27 UTC 2016; my host=echo-server-1-shoz0
root@echo-server-1-jp0aj:/# nc 10.3.0.246 8080

How can I configure things so that all members of a service can connect to all other members, including themselves?

3 answers
时光不老,我们不散 · 2019-02-09 07:34

Thanks to all those who helped on GitHub.
The workaround turned out to be as follows:

tanen01 commented on Feb 4: "Seeing the same problem here on k8s v1.1.7 stable."

Issue occurs with:

kube-proxy --proxy-mode=iptables 

Once I changed it to:

kube-proxy --proxy-mode=userspace

(which was also the default at the time), it worked again.

So, if you are experiencing this, try removing the --proxy-mode=iptables flag (i.e. fall back to the userspace proxy) when you start kube-proxy.
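
(A minimal sketch of what that change looks like, assuming kube-proxy is started directly on the node; the --master value is a guess for the single-node Vagrant setup, and the exact unit or manifest that launches kube-proxy varies by install.)

# Before: iptables proxy (the mode that exhibits the hang)
kube-proxy --master=http://127.0.0.1:8080 --proxy-mode=iptables

# After: userspace proxy (the workaround); restart kube-proxy on the node afterwards
kube-proxy --master=http://127.0.0.1:8080 --proxy-mode=userspace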

劳资没心,怎么记你 · 2019-02-09 07:47

I have seen this reported by at least one other user. I filed an issue: https://github.com/kubernetes/kubernetes/issues/20475

I assume you used the version of Kubernetes from that link -- 1.1.2.
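
(If you're not sure which version you're running, kubectl reports both client and server versions.)

# Prints the client and the cluster (server) version
kubectl version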

爱情/是我丢掉的垃圾 · 2019-02-09 07:59

This is supposed to work; we've tested it extensively with the iptables proxy in Kubernetes v1.1 (not the default, but it will be in v1.2). Can you say more about your environment?
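
(One way to gather that, not from the original thread: on the node, check how kube-proxy was started and which iptables chains it has installed.)

# Show the flags kube-proxy was started with (look for --proxy-mode)
ps aux | grep [k]ube-proxy

# The iptables proxy installs KUBE-SERVICES/KUBE-SVC-* chains; the userspace proxy
# uses KUBE-PORTALS-* chains instead
sudo iptables-save | grep KUBE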
