Connecting to Docker Elasticsearch instance through Java API


Question:

I'm running an Elasticsearch instance from Docker. The image is from the jHipster Docker Hub repo: jhipster/jhipster-elasticsearch/ - I'm using image v1.3.2 because I need Elasticsearch 2.4.0 (to be in line with the project's Spring Boot version).

I'm starting the ES container along with the Logstash and Kibana images using docker-compose. These are the settings for starting the ES container:

jhipster-elasticsearch:
    image: jhipster/jhipster-elasticsearch:v1.3.2
    ports:
        - 9400:9200
        - 9500:9300
    volumes:
       - ./log-es-config/elasticsearch_custom.yml:/usr/share/elasticsearch/config/elasticsearch.yml

So I'm using 9400 for REST and 9500 for transport communication.

This is configuration inside elasticsearch_custom.yml that is mounted to ES config:

cluster.name: "log-cluster"
node.name: "log-node"
http.host: 0.0.0.0
transport.host: 127.0.0.1
transport.tcp.port: 9500
transport.publish_port: 9500

When I start the container, this is what I get from http://localhost:9400/_nodes:

"cluster_name": "log-cluster",
  "nodes": {
    "xLsGj2DyTdCF89I7sAToVw": {
      "name": "log-node",
      "transport_address": "127.0.0.1:9500",
      "host": "127.0.0.1",
      "ip": "127.0.0.1",
      "version": "2.4.0",
      "build": "ce9f0c7",
      "http_address": "172.18.0.5:9200",
      "settings": {
        "cluster": {
          "name": "log-cluster"
        },
        ... (I can post the full response if needed)

JAVA API:

Now I'm trying to connect to this ES node like this:

    @Bean
    public ElasticsearchOperations logsElasticsearchOperations() throws UnknownHostException {
        Settings settings = Settings.settingsBuilder()
            .put("cluster.name", "log-cluster")
            .put("node.name", "log-node")
            .build();

        Client client = TransportClient.builder()
            .settings(settings)
            .build()
            .addTransportAddress(new InetSocketTransportAddress(new InetSocketAddress("127.0.0.1", 9500)));

        ElasticsearchTemplate template = new ElasticsearchTemplate(client);
        template.createIndex(ProcessLog.class);
        log.debug("Elasticsearch for logs configured.");
        return template;
    }

The error I'm getting is the most famous one:

Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{127.0.0.1}{127.0.0.1:9500}]

I googled and tried different config approaches, also with client.transport.sniff set to false, but none of those worked. I've now spent a lot of time trying to configure this and I'm still missing something.
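For reference, the variant with sniffing disabled looked roughly like this (same bean as above, just with the standard client.transport.sniff setting added):

    Settings settings = Settings.settingsBuilder()
        .put("cluster.name", "log-cluster")
        // with sniffing off, the client only talks to the addresses added explicitly below,
        // instead of the publish addresses reported by the cluster state
        .put("client.transport.sniff", false)
        .build();

    Client client = TransportClient.builder()
        .settings(settings)
        .build()
        .addTransportAddress(new InetSocketTransportAddress(new InetSocketAddress("127.0.0.1", 9500)));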

Thanks in advance for help.

UPDATE:

There is also an embedded ES instance running when I start the app, so the host port configuration is like this:

  • Embedded ES: 9200 (http), 9300 (tcp)
  • Docker's ES: 9400 (http), 9500 (tcp)

Here is full docker-compose.yml:

    version: '2'
    services:
        jhipster-elasticsearch:
            # elasticsearch 2.4.0 - to be in line with spring boot version
            image: jhipster/jhipster-elasticsearch:v1.3.2
            ports:
                - 9400:9200
                - 9500:9300
            volumes:
                - ./log-es-config/elasticsearch_custom.yml:/usr/share/elasticsearch/config/elasticsearch.yml
        jhipster-logstash:
            image: jhipster/jhipster-logstash:v2.2.1
            command: logstash -f /conf/logstash_custom.conf
            ports:
                - 5000:5000/udp
                - 6000:6000/tcp
            volumes:
                - ./logstash-log-es-conf/:/conf
        jhipster-console:
            image: jhipster/jhipster-console:v2.0.1
            ports:
                - 5601:5601
        jhipster-zipkin:
            image: jhipster/jhipster-zipkin:v2.0.1
            ports:
                - 9411:9411
            environment:
                - ES_HOSTS=http://jhipster-elasticsearch:9400
                - ZIPKIN_UI_LOGS_URL=http://localhost:5601/app/kibana#/dashboard/logs-dashboard?_g=(refreshInterval:(display:Off,pause:!f,value:0),time:(from:now-1h,mode:quick,to:now))&_a=(filters:!(),options:(darkTheme:!f),panels:!((col:1,id:logs-levels,panelIndex:2,row:1,size_x:6,size_y:3,type:visualization),(col:7,columns:!(stack_trace),id:Stacktraces,panelIndex:7,row:1,size_x:4,size_y:3,sort:!('@timestamp',desc),type:search),(col:11,id:Log-forwarding-instructions,panelIndex:8,row:1,size_x:2,size_y:3,type:visualization),(col:1,columns:!(app_name,level,message),id:All-logs,panelIndex:9,row:4,size_x:12,size_y:7,sort:!('@timestamp',asc),type:search)),query:(query_string:(analyze_wildcard:!t,query:'{traceId}')),title:logs-dashboard,uiState:())

Answer 1:

I managed to get this working by setting transport.host to 0.0.0.0 inside elasticsearch_custom.yml, so the instance binds to the container's IP.

Maybe this should also be the default setup for elasticsearch.yml in the project's GitHub repo.



Answer 2:

From your docker compose file, port 9500 on the host is mapped to port 9300 inside the container, i.e.:

ports:
    - 9500:9300

So, since port 9500 is the TCP port only outside of the Docker container, your elasticsearch_custom.yml config file should have this instead:

transport.tcp.port: 9300
transport.publish_port: 9300

or simply leave those two lines out since 9300 is the default TCP port.
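Note that nothing needs to change on the Java client side for this: it should keep targeting the host-mapped port from the compose file, e.g. (sketch based on the bean in the question):

    Client client = TransportClient.builder()
        .settings(settings)
        .build()
        // 9500 is the host-side port; Docker forwards it to 9300 inside the container
        .addTransportAddress(new InetSocketTransportAddress(new InetSocketAddress("127.0.0.1", 9500)));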



Answer 3:

I would recommend taking a step back and installing the Head plugin (https://github.com/mobz/elasticsearch-head) so you can get a view of your cluster; it will display detailed information, including the cluster name.

Also maybe try sending a simple index request from the command line to make sure you can connect to your cluster at all.
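If you prefer to do that sanity check from Java instead of from the command line, a minimal sketch with the same 2.x TransportClient could look like this (the index name "smoke-test" and type "doc" are just placeholders):

    import java.net.InetSocketAddress;

    import org.elasticsearch.action.index.IndexResponse;
    import org.elasticsearch.client.Client;
    import org.elasticsearch.client.transport.TransportClient;
    import org.elasticsearch.common.settings.Settings;
    import org.elasticsearch.common.transport.InetSocketTransportAddress;

    public class EsSmokeTest {
        public static void main(String[] args) {
            Settings settings = Settings.settingsBuilder()
                .put("cluster.name", "log-cluster") // must match cluster.name in elasticsearch_custom.yml
                .build();

            Client client = TransportClient.builder()
                .settings(settings)
                .build()
                .addTransportAddress(new InetSocketTransportAddress(new InetSocketAddress("127.0.0.1", 9500)));
            try {
                // index one trivial document; if this succeeds, transport connectivity is fine
                IndexResponse response = client.prepareIndex("smoke-test", "doc")
                    .setSource("{\"message\":\"hello\"}")
                    .get();
                System.out.println("indexed into " + response.getIndex() + " with id " + response.getId());
            } finally {
                client.close();
            }
        }
    }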