I know this question has already been asked, but despite all my attempts I could not fix my issue.
I just installed Elasticsearch and started it. Here is the Elasticsearch log:
[2017-05-17T00:05:27,290][INFO ][o.e.n.Node ] [] initializing ...
[2017-05-17T00:05:27,394][INFO ][o.e.e.NodeEnvironment ] [xhkU1rX] using [1] data paths, mounts [[Data (D:)]], net usable_space [25gb], net total_space [138.4gb], spins? [unknown], types [NTFS]
[2017-05-17T00:05:27,395][INFO ][o.e.e.NodeEnvironment ] [xhkU1rX] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-05-17T00:05:28,331][INFO ][o.e.n.Node ] node name [xhkU1rX] derived from node ID [xhkU1rXqQ1OgNhIOdfkifg]; set [node.name] to override
[2017-05-17T00:05:28,332][INFO ][o.e.n.Node ] version[5.4.0], pid[7944], build[780f8c4/2017-04-28T17:43:27.229Z], OS[Windows 7/6.1/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_101/25.101-b13]
[2017-05-17T00:05:30,421][INFO ][o.e.p.PluginsService ] [xhkU1rX] loaded module [aggs-matrix-stats]
[2017-05-17T00:05:30,422][INFO ][o.e.p.PluginsService ] [xhkU1rX] loaded module [ingest-common]
[2017-05-17T00:05:30,422][INFO ][o.e.p.PluginsService ] [xhkU1rX] loaded module [lang-expression]
[2017-05-17T00:05:30,422][INFO ][o.e.p.PluginsService ] [xhkU1rX] loaded module [lang-groovy]
[2017-05-17T00:05:30,422][INFO ][o.e.p.PluginsService ] [xhkU1rX] loaded module [lang-mustache]
[2017-05-17T00:05:30,422][INFO ][o.e.p.PluginsService ] [xhkU1rX] loaded module [lang-painless]
[2017-05-17T00:05:30,423][INFO ][o.e.p.PluginsService ] [xhkU1rX] loaded module [percolator]
[2017-05-17T00:05:30,423][INFO ][o.e.p.PluginsService ] [xhkU1rX] loaded module [reindex]
[2017-05-17T00:05:30,423][INFO ][o.e.p.PluginsService ] [xhkU1rX] loaded module [transport-netty3]
[2017-05-17T00:05:30,423][INFO ][o.e.p.PluginsService ] [xhkU1rX] loaded module [transport-netty4]
[2017-05-17T00:05:30,424][INFO ][o.e.p.PluginsService ] [xhkU1rX] loaded plugin [x-pack]
[2017-05-17T00:05:34,242][DEBUG][o.e.a.ActionModule ] Using REST wrapper from plugin org.elasticsearch.xpack.XPackPlugin
[2017-05-17T00:05:34,804][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/11204] [Main.cc@128] controller (64 bit): Version 5.4.0 (Build 120b96fa7f6fa7) Copyright (c) 2017 Elasticsearch BV
[2017-05-17T00:05:34,863][INFO ][o.e.d.DiscoveryModule ] [xhkU1rX] using discovery type [zen]
[2017-05-17T00:05:36,769][INFO ][o.e.n.Node ] initialized
[2017-05-17T00:05:36,772][INFO ][o.e.n.Node ] [xhkU1rX] starting ...
[2017-05-17T00:05:38,384][INFO ][o.e.t.TransportService ] [xhkU1rX] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2017-05-17T00:05:41,441][INFO ][o.e.c.s.ClusterService ] [xhkU1rX] new_master {xhkU1rX}{xhkU1rXqQ1OgNhIOdfkifg}{Ki_4aUviS3m6n36CQ73tVw}{127.0.0.1}{127.0.0.1:9300}{ml.enabled=true}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-05-17T00:05:43,372][INFO ][o.e.l.LicenseService ] [xhkU1rX] license [78f22813-a953-4397-ab90-d9e057e7647e] mode [trial] - valid
[2017-05-17T00:05:43,399][INFO ][o.e.g.GatewayService ] [xhkU1rX] recovered [28] indices into cluster_state
[2017-05-17T00:05:45,462][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [xhkU1rX] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2017-05-17T00:05:45,474][INFO ][o.e.n.Node ] [xhkU1rX] started
[2017-05-17T00:05:47,834][INFO ][o.e.c.r.a.AllocationService] [xhkU1rX] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[my-index][3]] ...]).
So, Elasticsearch is fully running and listening on port 9300.
My cluster name is "my-application".
This is confirmed by elasticsearch.yml:
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-application
#
And it is also confirmed by requesting http://localhost:9200/:
{
"name" : "xhkU1rX",
"cluster_name" : "my-application",
"cluster_uuid" : "ygmbc6k0TpKdJP77OJeG9g",
"version" : {
"number" : "5.4.0",
"build_hash" : "780f8c4",
"build_date" : "2017-04-28T17:43:27.229Z",
"build_snapshot" : false,
"lucene_version" : "6.5.0"
},
"tagline" : "You Know, for Search"
}
Here are my Maven dependencies (the Elasticsearch and client versions are aligned):
<dependency>
<groupId>org.elasticsearch.client</groupId>
<artifactId>transport</artifactId>
<version>5.4.0</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-to-slf4j</artifactId>
<version>2.7</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.21</version>
</dependency>
And here is my Java code:
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Collections;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;
public class Test {
public static void main(String[] args) throws UnknownHostException {
Settings clusterSettings = Settings.builder()
.put("cluster.name", "my-application")
.build();
TransportClient transportClient = new PreBuiltTransportClient(clusterSettings)
.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
try {
SearchResponse response = transportClient
.prepareSearch()
.execute().actionGet();
System.out.println(response.toString());
} catch (Exception e) {
e.printStackTrace();
} finally {
transportClient.close();
}
}
}
Running it gives me the following logs:
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{nqelLN69RhSJabcb57_lWQ}{127.0.0.1}{127.0.0.1:9300}]]
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:348)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:246)
at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:59)
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:366)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:408)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:80)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:54)
at Test.main(Test.java:23)
If I execute this command in a terminal:
telnet 127.0.0.1 9300
I get a prompt where I can enter only 6 (invisible) characters, and the Elasticsearch console displays (I typed "azerty"):
[2017-05-17T00:30:33,573][WARN ][o.e.x.s.t.n.SecurityNetty4Transport] [xhkU1rX] exception caught on transport layer [[id: 0xda47cab8, L:/127.0.0.1:9300 - R:/127.0.0.1:64226]], closing connection
io.netty.handler.codec.DecoderException: java.io.StreamCorruptedException: invalid internal transport message format, got (61,7a,65,72)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:442) ~[netty-codec-4.1.9.Final.jar:4.1.9.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248) ~[netty-codec-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:624) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:524) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:478) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:438) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.9.Final.jar:4.1.9.Final]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_101]
Caused by: java.io.StreamCorruptedException: invalid internal transport message format, got (61,7a,65,72)
at org.elasticsearch.transport.TcpTransport.validateMessageHeader(TcpTransport.java:1257) ~[elasticsearch-5.4.0.jar:5.4.0]
at org.elasticsearch.transport.netty4.Netty4SizeHeaderFrameDecoder.decode(Netty4SizeHeaderFrameDecoder.java:36) ~[?:?]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411) ~[?:?]
... 19 more
[2017-05-17T00:30:33,577][WARN ][o.e.x.s.t.n.SecurityNetty4Transport] [xhkU1rX] exception caught on transport layer [[id: 0xda47cab8, L:/127.0.0.1:9300 ! R:/127.0.0.1:64226]], closing connection
io.netty.handler.codec.DecoderException: java.io.StreamCorruptedException: invalid internal transport message format, got (61,7a,65,72)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:442) ~[netty-codec-4.1.9.Final.jar:4.1.9.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:375) ~[netty-codec-4.1.9.Final.jar:4.1.9.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:342) ~[netty-codec-4.1.9.Final.jar:4.1.9.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:325) ~[netty-codec-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:224) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1329) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:245) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:231) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:908) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.AbstractChannel$AbstractUnsafe$7.run(AbstractChannel.java:744) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) [netty-common-4.1.9.Final.jar:4.1.9.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403) [netty-common-4.1.9.Final.jar:4.1.9.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442) [netty-transport-4.1.9.Final.jar:4.1.9.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.9.Final.jar:4.1.9.Final]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_101]
Caused by: java.io.StreamCorruptedException: invalid internal transport message format, got (61,7a,65,72)
at org.elasticsearch.transport.TcpTransport.validateMessageHeader(TcpTransport.java:1257) ~[elasticsearch-5.4.0.jar:5.4.0]
at org.elasticsearch.transport.netty4.Netty4SizeHeaderFrameDecoder.decode(Netty4SizeHeaderFrameDecoder.java:36) ~[?:?]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411) ~[?:?]
... 20 more
This means that port 9300 is the right one.
I don't understand what I am doing wrong.
Can someone help me?
EDIT: Here are the complete logs after configuring log4j with TRACE on org.elasticsearch:
0 [main] TRACE org.elasticsearch.plugins.PluginsService - plugin loaded from classpath [- Plugin information:
Name: org.elasticsearch.transport.Netty3Plugin
Description: classpath plugin
Version: NA
Native Controller: false
* Classname: org.elasticsearch.transport.Netty3Plugin]
21 [main] TRACE org.elasticsearch.plugins.PluginsService - plugin loaded from classpath [- Plugin information:
Name: org.elasticsearch.transport.Netty4Plugin
Description: classpath plugin
Version: NA
Native Controller: false
* Classname: org.elasticsearch.transport.Netty4Plugin]
24 [main] TRACE org.elasticsearch.plugins.PluginsService - plugin loaded from classpath [- Plugin information:
Name: org.elasticsearch.index.reindex.ReindexPlugin
Description: classpath plugin
Version: NA
Native Controller: false
* Classname: org.elasticsearch.index.reindex.ReindexPlugin]
25 [main] TRACE org.elasticsearch.plugins.PluginsService - plugin loaded from classpath [- Plugin information:
Name: org.elasticsearch.percolator.PercolatorPlugin
Description: classpath plugin
Version: NA
Native Controller: false
* Classname: org.elasticsearch.percolator.PercolatorPlugin]
28 [main] TRACE org.elasticsearch.plugins.PluginsService - plugin loaded from classpath [- Plugin information:
Name: org.elasticsearch.script.mustache.MustachePlugin
Description: classpath plugin
Version: NA
Native Controller: false
* Classname: org.elasticsearch.script.mustache.MustachePlugin]
41 [main] INFO org.elasticsearch.plugins.PluginsService - no modules loaded
43 [main] INFO org.elasticsearch.plugins.PluginsService - loaded plugin [org.elasticsearch.index.reindex.ReindexPlugin]
43 [main] INFO org.elasticsearch.plugins.PluginsService - loaded plugin [org.elasticsearch.percolator.PercolatorPlugin]
43 [main] INFO org.elasticsearch.plugins.PluginsService - loaded plugin [org.elasticsearch.script.mustache.MustachePlugin]
43 [main] INFO org.elasticsearch.plugins.PluginsService - loaded plugin [org.elasticsearch.transport.Netty3Plugin]
43 [main] INFO org.elasticsearch.plugins.PluginsService - loaded plugin [org.elasticsearch.transport.Netty4Plugin]
112 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [force_merge], size [1], queue size [unbounded]
116 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [fetch_shard_started], core [1], max [8], keep alive [5m]
116 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [listener], size [2], queue size [unbounded]
121 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [index], size [4], queue size [200]
122 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [refresh], core [1], max [2], keep alive [5m]
122 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [generic], core [4], max [128], keep alive [30s]
122 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [warmer], core [1], max [2], keep alive [5m]
123 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [search], size [7], queue size [1k]
123 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [flush], core [1], max [2], keep alive [5m]
123 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [fetch_shard_store], core [1], max [8], keep alive [5m]
124 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [management], core [1], max [5], keep alive [5m]
124 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [get], size [4], queue size [1k]
124 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [bulk], size [4], queue size [200]
124 [main] DEBUG org.elasticsearch.threadpool.ThreadPool - created thread pool: name [snapshot], core [1], max [2], keep alive [5m]
1301 [main] DEBUG org.elasticsearch.common.network.IfConfig - configuration:
lo
Software Loopback Interface 1
inet 127.0.0.1 netmask:255.0.0.0 broadcast:127.255.255.255 scope:host
inet6 ::1 prefixlen:128 scope:host
UP MULTICAST LOOPBACK mtu:-1 index:1
[...] other network interfaces [...]
2395 [main] TRACE org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService - parent circuit breaker with settings [parent,type=PARENT,limit=2622960435/2.4gb,overhead=1.0]
2396 [main] TRACE org.elasticsearch.indices.breaker.request - creating ChildCircuitBreaker with settings [request,type=MEMORY,limit=2248251801/2gb,overhead=1.0]
2396 [main] TRACE org.elasticsearch.indices.breaker.fielddata - creating ChildCircuitBreaker with settings [fielddata,type=MEMORY,limit=2248251801/2gb,overhead=1.03]
2396 [main] TRACE org.elasticsearch.indices.breaker.in_flight_requests - creating ChildCircuitBreaker with settings [in_flight_requests,type=MEMORY,limit=3747086336/3.4gb,overhead=1.0]
2538 [main] DEBUG org.elasticsearch.client.transport.TransportClientNodesService - node_sampler_interval[5s]
3373 [main] DEBUG org.elasticsearch.client.transport.TransportClientNodesService - adding address [{#transport#-1}{1hiyV12vSMewRrjSv9ZmmA}{192.168.56.1}{192.168.56.1:9300}]
4016 [main] TRACE org.elasticsearch.indices.breaker.request - [request] Adjusted breaker by [16440] bytes, now [16440]
4059 [elasticsearch[_client_][transport_client_boss][T#1]] TRACE org.elasticsearch.indices.breaker.request - [request] Adjusted breaker by [-16440] bytes, now [0]
4059 [elasticsearch[_client_][transport_client_boss][T#1]] TRACE org.elasticsearch.transport.TransportService.tracer - [1][internal:tcp/handshake] sent to [{#transport#-1}{1hiyV12vSMewRrjSv9ZmmA}{192.168.56.1}{192.168.56.1:9300}] (timeout: [null])
4085 [main] TRACE org.elasticsearch.indices.breaker.request - [request] Adjusted breaker by [16440] bytes, now [16440]
4087 [elasticsearch[_client_][transport_client_boss][T#1]] TRACE org.elasticsearch.indices.breaker.request - [request] Adjusted breaker by [-16440] bytes, now [0]
4217 [main] INFO org.elasticsearch.client.transport.TransportClientNodesService - failed to get node info for {#transport#-1}{1hiyV12vSMewRrjSv9ZmmA}{192.168.56.1}{192.168.56.1:9300}, disconnecting...
RemoteTransportException[[xhkU1rX][192.168.56.1:9300][cluster:monitor/nodes/liveness]]; nested: ElasticsearchSecurityException[missing authentication token for action [cluster:monitor/nodes/liveness]];
Caused by: ElasticsearchSecurityException[missing authentication token for action [cluster:monitor/nodes/liveness]]
at org.elasticsearch.xpack.security.support.Exceptions.authenticationError(Exceptions.java:39)
at org.elasticsearch.xpack.security.authc.DefaultAuthenticationFailureHandler.missingToken(DefaultAuthenticationFailureHandler.java:74)
at org.elasticsearch.xpack.security.authc.AuthenticationService$AuditableTransportRequest.anonymousAccessDenied(AuthenticationService.java:506)
at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.lambda$handleNullToken$14(AuthenticationService.java:331)
at org.elasticsearch.xpack.security.authc.AuthenticationService$Authenticator.handleNullToken(AuthenticationService.java:336)
[...] some additional stack trace [...]
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at java.lang.Thread.run(Thread.java:745)
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{1hiyV12vSMewRrjSv9ZmmA}{192.168.56.1}{192.168.56.1:9300}]]
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:348)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:246)
at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:59)
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:366)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:408)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:80)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:54)
at Test.main(Test.java:22)
If you are running Elasticsearch via a Docker image, then run the Docker image with the command below.
-e "transport.host=127.0.0.1" did not work.
Also, the client settings should be as below; do not add .put("client.transport.sniff", true).
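For illustration only, assuming the cluster name "my-application" and the address 127.0.0.1:9300 from the question, a minimal 5.x client setup without sniffing could look like this:
// Build the client settings WITHOUT enabling sniffing.
Settings settings = Settings.builder()
        .put("cluster.name", "my-application")   // must match the server's cluster.name
        // intentionally no .put("client.transport.sniff", true)
        .build();
TransportClient client = new PreBuiltTransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));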
This error may occur because of a version mismatch. Check the Elasticsearch logs; they give more details about the minimum Elasticsearch client version to be used. So either upgrade the client jar or run a lower version of Elasticsearch.
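For example, assuming the server reports version 6.3.1 in the response from GET http://localhost:9200/, the transport dependency from the question's pom.xml would be pinned to that same version:
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>transport</artifactId>
    <!-- must match the server version reported by GET http://localhost:9200/ -->
    <version>6.3.1</version>
</dependency>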
I had exactly this issue, and for me it was a mismatch between the Elasticsearch version running on my host and the Elasticsearch version in my IntelliJ pom.xml.
When I used the same version on both ends it worked, i.e. I downloaded the same version for my host as for my client and then it worked.
I modified this working sample to figure it out:
https://github.com/spring-projects/spring-boot/tree/master/spring-boot-samples/spring-boot-sample-data-elasticsearch
Use Elasticsearch 6.3.1 and Spring Boot 2.0.0.RELEASE and make the following changes:
Settings settings = Settings.builder()
        .put("cluster.name", "farkalit-cluster")
        .build();
transportClient = new PreBuiltTransportClient(settings)
        .addTransportAddress(new TransportAddress(InetAddress.getByName("localhost"), 9300));
This solves the problem.
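For completeness, a self-contained sketch of that 6.x setup might look like the following (the class name Test6 is just an example; the cluster name "farkalit-cluster" and localhost:9300 come from the snippet above, and the imports assume the 6.3.1 transport artifact):
import java.net.InetAddress;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

public class Test6 {
    public static void main(String[] args) throws Exception {
        // cluster.name must match the cluster.name in the server's elasticsearch.yml
        Settings settings = Settings.builder()
                .put("cluster.name", "farkalit-cluster")
                .build();
        // TransportAddress replaces InetSocketTransportAddress from the 5.x client
        TransportClient transportClient = new PreBuiltTransportClient(settings)
                .addTransportAddress(new TransportAddress(InetAddress.getByName("localhost"), 9300));
        try {
            System.out.println(transportClient.prepareSearch().get().toString());
        } finally {
            transportClient.close();
        }
    }
}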
I encountered the same problem and solved it by adding the
transport.host: localhost
property in the elasticsearch.yml file. After adding it, it works as expected. I hope it will help other readers of this thread.
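For reference, the property can be placed in the Network section of elasticsearch.yml, for example:
# ---------------------------------- Network -----------------------------------
#
# Bind the transport layer to localhost:
#
transport.host: localhost
#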
Just for reference, I was running 7.2 while I was trying to use
and the following code to connect to my local cluster, and the same error popped up.
When I checked the ES console, it had already warned me about the version issue,
so I just used the matching version to fix the error.