Request to localhost from pod via its own service

Published 2019-05-30 17:19

Question:

I have a Service named foo with a selector matching the foo pod:

apiVersion: v1
kind: Service
metadata:
  labels:
    name: foo
  name: foo
  namespace: bar
spec:
  clusterIP: 172.20.166.230
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    name: foo
  sessionAffinity: None
  type: ClusterIP

I have a Deployment/pod named foo with the label name: foo:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "3"
  generation: 3
  labels:
    name: foo
  name: foo
  namespace: bar
spec:
  selector:
    matchLabels:
      name: foo
  template:
    metadata:
      labels:
        name: foo
    spec:
      containers:
      - image: my/image:tag
        imagePullPolicy: Always
        name: foo
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: ClusterFirst

When I make a request from the foo pod to the foo host, the hostname resolves, but the request hangs and never goes through:

$ curl -vvv foo:8080
* Rebuilt URL to: foo:8080/
*   Trying 172.20.166.230...
* TCP_NODELAY set

Is this supposed to work like that in Kubernetes?

I don't have any problems requesting foo from other pods from the same namespace.

The reason I don't simply use localhost:8080 (which works fine) is that I share the same config file with hosts across different pods, so I don't want to write per-pod logic.

Kubernetes 1.6.4, single-node cluster, kube-proxy in iptables mode.

Answer 1:

It looks like this is the default behavior when kube-proxy runs in iptables mode. Traffic from a pod to its own Service is DNAT'd back to the same pod ("hairpin" traffic), and that only works if hairpin mode is enabled on the pod's network interface, which is controlled by the kubelet's --hairpin-mode setting.
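A way to check this on the node is to inspect the hairpin_mode flag on the bridge ports (a sketch, assuming a bridge-based network plugin; the bridge name cbr0 is an assumption, it may be docker0 or cni0 depending on your setup, and you need shell access to the node):

```shell
# Print the hairpin_mode flag for every port attached to the bridge.
# 1 = hairpin enabled (pod can reach itself via its own Service);
# 0 = hairpin disabled (the request hangs, as in the question).
# NOTE: "cbr0" is an assumed bridge name -- adjust for your plugin.
for port in /sys/class/net/cbr0/brif/*; do
  echo "$(basename "$port"): $(cat "$port/hairpin_mode")"
done
```

If the flags show 0, the usual fix is to run the kubelet with --hairpin-mode=hairpin-veth (or --hairpin-mode=promiscuous-bridge for bridge-based setups) and restart it; this is a node-level config fragment, so how you pass the flag depends on how the kubelet is launched (systemd unit, manifest, etc.).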



Tags: kubernetes