public static void main(String[] args) throws IOException {
    Settings settings = ImmutableSettings.settingsBuilder()
            .put("cluster.name", "foxzen")
            .put("node.name", "yu")
            .build();
    Client client = new TransportClient(settings)
            .addTransportAddress(new InetSocketTransportAddress("XXX.XXX.XXX.XXX", 9200));
    // XXX.XXX.XXX.XXX is my server's IP address

    IndexResponse response = client.prepareIndex("twitter", "tweet")
            .setSource(XContentFactory.jsonBuilder()
                    .startObject()
                    .field("productId", "1")
                    .field("productName", "XXX")
                    .endObject())
            .execute()
            .actionGet();

    System.out.println(response.getIndex());
    System.out.println(response.getType());
    System.out.println(response.getVersion());
    client.close();
}
I can access the server from my computer:

curl -XGET http://XXX.XXX.XXX.XXX:9200/

and get this:
{
  "status" : 200,
  "name" : "yu",
  "version" : {
    "number" : "1.1.0",
    "build_hash" : "2181e113dea80b4a9e31e58e9686658a2d46e363",
    "build_timestamp" : "2014-03-25T15:59:51Z",
    "build_snapshot" : false,
    "lucene_version" : "4.7"
  },
  "tagline" : "You Know, for Search"
}
Why do I get an error when using the Java API?
EDIT
Here is the cluster and node part of the config in elasticsearch.yml:
################################### Cluster ###################################
# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#
cluster.name: foxzen
#################################### Node #####################################
# Node names are generated dynamically on startup, so you're relieved
# from configuring them manually. You can tie this node to a specific name:
#
node.name: yu
Some suggestions:

1 - Use port 9300. [9300-9400] is for node-to-node communication, [9200-9300] is for HTTP traffic.

2 - Ensure the version of the Java API you are using matches the version of elasticsearch running on the server.

3 - Ensure that the name of your cluster is foxzen (check the elasticsearch.yml on the server).

4 - Remove put("node.name", "yu"); you aren't joining the cluster as a node since you are using the TransportClient, and even if you were, it appears your server node is already named yu, so you would want a different node name in any case.

If you are still having issues, even when using port 9300, and everything else seems to be configured correctly, try using an older version of elasticsearch.
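Putting suggestions 1, 3 and 4 together, the client setup from the question would become roughly the following. This is only a sketch against the elasticsearch 1.x Java API shown in the question; it still needs the elasticsearch jar on the classpath and a reachable cluster to actually run:

```java
// Sketch applying suggestions 1, 3 and 4 (elasticsearch 1.x Java API assumed):
// keep cluster.name, drop node.name, and connect to the transport port 9300.
Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", "foxzen")   // must match cluster.name in elasticsearch.yml
        .build();
Client client = new TransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress("XXX.XXX.XXX.XXX", 9300)); // 9300, not 9200
```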
I was getting this same error while using elasticsearch version 2.2.0, but as soon as I rolled back to version 1.7.5, my problem magically went away. Here's a link to someone else having this issue: older version solves problem
Another reason could be that your Elasticsearch Java client is a different version from your Elasticsearch server. The Java client version is simply the version of the elasticsearch jar in your code base - for example, in my code it's elasticsearch-2.4.0.jar. To verify the Elasticsearch server version, query the server root (as with the curl command above) and check the "number" field in the response.

In my case, I had downloaded the latest version of the Elastic server (5.2.2) but forgot to update the ES Java API client version (2.4.0). See https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/client.html
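The compatibility rule above can be expressed as a tiny check. VersionCheck and majorMatches are hypothetical names of my own, not part of any Elasticsearch API; the idea is just that the client jar's major version must match the server's version.number:

```java
// Hypothetical helper: the Elasticsearch Java client's major version should
// match the server's major version (e.g. a 2.4.0 client cannot talk to a 5.2.2 server).
public class VersionCheck {
    static boolean majorMatches(String clientVersion, String serverVersion) {
        // compare the leading number of each dotted version string
        return clientVersion.split("\\.")[0].equals(serverVersion.split("\\.")[0]);
    }

    public static void main(String[] args) {
        System.out.println(majorMatches("2.4.0", "5.2.2")); // false: the mismatch described above
        System.out.println(majorMatches("1.1.0", "1.1.0")); // true: the question's setup
    }
}
```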
You need to change your code to use port 9300 - the correct line would be:

    .addTransportAddress(new InetSocketTransportAddress("XXX.XXX.XXX.XXX", 9300));
The reason is that the Java API uses the internal transport protocol meant for inter-node communication, which defaults to port 9300. Port 9200 is the default for the REST API interface. This is a common issue to run into - check the sample code towards the bottom of the page, under Transport Client:
http://www.elasticsearch.org/guide/en/elasticsearch/client/java-api/current/client.html
Another solution may be to include io.netty:netty-all into the project dependencies explicitly. On addTransportAddresses, the method nodesSampler.sample() is executed, and the added addresses are checked for availability there. In my case a try-catch block swallowed ConnectTransportException because the method io.netty.channel.DefaultChannelId.newInstance() could not be found, so the added node was simply not treated as available.

I met this error too. I use ElasticSearch 2.4.1 as a standalone server (single node) in docker, programming with Grails 3/spring-data-elasticsearch. My fix was setting client.transport.sniff to false. Here is my core conf in application.yml:
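The application.yml contents did not survive above; under the stated Grails 3 / spring-data-elasticsearch setup, the relevant fragment might look like this sketch (the property names and values other than client.transport.sniff are my assumptions, not taken from the answer):

```yaml
# Sketch only: property names assumed for a spring-data-elasticsearch setup;
# the key point from the answer is client.transport.sniff: false.
spring:
  data:
    elasticsearch:
      cluster-name: elasticsearch
      cluster-nodes: localhost:9300
      properties:
        client:
          transport:
            sniff: false
```

With sniffing disabled, the TransportClient talks only to the addresses you add explicitly instead of trying to discover the rest of the cluster, which fails for a single node in docker whose advertised address is not reachable from the client.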