ElasticSearch 2.0 Transport Client - No Node Available exception - java

[Using Elasticsearch version 2.0]
In the /etc/hosts file, "esnode" is mapped to the IP address of another machine where ES is running, as shown:
192.168.2.219 esnode
The transport client code is:
public Client getClient() {
    if (this.client == null) {
        try {
            Settings settings = Settings.settingsBuilder()
                    .put("cluster.name", "myclustername").build();
            TransportClient tClient = TransportClient.builder().settings(settings).build();
            // COMMA and COLON are String constants ("," and ":") defined elsewhere
            String[] nodes = "esnode:9300".split(COMMA);
            for (String node : nodes) {
                String[] hostPort = node.split(COLON);
                tClient.addTransportAddress(new InetSocketTransportAddress(
                        InetAddress.getByName(hostPort[0]), Integer.parseInt(hostPort[1])));
            }
            this.client = tClient;
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    return this.client;
}
This client code runs, but when the following line executes:
this.getClient().prepareGet(indexName, typeName, String.valueOf(id)).get();
this exception is thrown:
NoNodeAvailableException[None of the configured nodes are available: []]
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:280)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:197)
at org.elasticsearch.client.transport.support.TransportProxyClient.execute(TransportProxyClient.java:55)
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:272)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:347)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:85)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:59)
at org.elasticsearch.action.ActionRequestBuilder.get(ActionRequestBuilder.java:67)
I have also tried using the IP address instead of the host name. The above code runs properly if esnode is mapped to 127.0.0.1.
Can somebody help...

Set the Elasticsearch host IP address as the network.host value in elasticsearch.yml:
network.host: es_host_ip
This solves the TransportClient NoNodeAvailableException: by default, Elasticsearch 2.x binds only to localhost, which is why your client works when esnode is mapped to 127.0.0.1 but not when it points at a remote machine.

Check if your Elasticsearch server is also version 2.0; if not, upgrade. Client and server must be on the same version to work together. I don't know why, but this solved my problem.
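If you want to see what the client actually reached, a minimal diagnostic sketch (assuming the ES 2.x TransportClient API, and reusing the tClient from the question's code) is:
// connectedNodes() lists the nodes the client completed a handshake with;
// an empty list is exactly what NoNodeAvailableException complains about.
// (requires import org.elasticsearch.cluster.node.DiscoveryNode)
for (DiscoveryNode node : tClient.connectedNodes()) {
    // print each reachable node's address and its Elasticsearch version
    System.out.println(node.getAddress() + " -> " + node.getVersion());
}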
Cheers,

Another reason could be that your Elasticsearch Java client is a different version from your Elasticsearch server.
The Elasticsearch Java client version is simply the version of the elasticsearch jar in your code base.
For example, in my code it's elasticsearch-2.4.0.jar.
To verify the Elasticsearch server version:
$ /Users/kkolipaka/elasticsearch/bin/elasticsearch -version
Version: 5.2.2, Build: f9d9b74/2017-02-24T17:26:45.835Z, JVM: 1.8.0_111
As you can see, I had downloaded the latest version of the Elastic server (5.2.2) but forgot to update the ES Java API client (2.4.0): https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/client.html
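If you can't run the binary directly, the server version is also visible in the HTTP root response (assuming port 9200 is reachable):
curl http://esnode:9200
The version.number field in the returned JSON must line up with the elasticsearch jar version on the client's classpath.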

Related

Unable to produce to Kafka topic that is running on WSL 2 from Windows

I am running the latest Kafka on Ubuntu in WSL2 successfully. I can start Zookeeper and the Kafka server, create topics, and console-produce and console-consume just fine from within the Ubuntu instance I have running on WSL. However, when I go into my IntelliJ on Windows and create a simple Java producer, it does not seem to be able to connect to the broker.
Versions & Hostname
Java version: 1.8
Kafka Version: 2.6
hostname (from Ubuntu): KDAAPPDEV04
hostname (from Powershell): KDAAPPDEV04
java.net.InetAddress.getLocalHost().getHostName() = KDAAPPDEV04
java.net.InetAddress.getLocalHost().getCanonicalHostName() = KDAAPPDEV04
netstat from CMD:
TCP [::1]:9092 [::]:0 LISTENING
server.properties
I found these settings in another SO answer, but they did not work for me.
advertised.listeners=PLAINTEXT://127.0.0.1:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT
listeners=PLAINTEXT://0.0.0.0:9092
Then I tried the following (and restarted Zookeeper and Kafka):
advertised.listeners=PLAINTEXT://KDAAPPDEV04:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT
listeners=PLAINTEXT://0.0.0.0:9092
Producer
I run this producer with three different values (hostname, localhost, and 127.0.0.1), but it never connects to the broker.
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ProducerDemo {
    private static Logger logger = LoggerFactory.getLogger(ProducerDemo.class);

    public static void main(String[] args) throws UnknownHostException {
        System.out.println(InetAddress.getLocalHost().getHostName());
        System.out.println(InetAddress.getLocalHost().getCanonicalHostName());
        String bootstrapServers = "127.0.0.1:9092";
        // String bootstrapServers = "localhost:9092";
        // String bootstrapServers = "KDAAPPDEV04:9092";

        // create producer properties
        Properties properties = new Properties();
        properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // create the producer
        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(properties);

        // create a producer record
        ProducerRecord<String, String> record = new ProducerRecord<String, String>("first-topic", "hola mundo");

        // send data
        producer.send(record);

        // flush + close
        producer.flush();
        producer.close();
    }
}
Error
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.6.0
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 62abe01bee039651
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1601666175706
[kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Connection to node -1 (KDAAPPDEV04/my-ipconfig-address-here:9092) could not be established. Broker may not be available.
Had this same issue. The root cause seems to be that WSL2 is broken with regard to IPv6 and localhost (see: https://github.com/microsoft/WSL/issues/4851).
The only fix I found that doesn't involve changing configs every time you reboot (per the "172.*" suggestion above) is to use the IPv6 loopback address ::1 in both the Kafka server config running in Linux and the Java client on Windows.
In server.properties I have this:
listeners=PLAINTEXT://[::1]:9092
And likewise in my Java client bootstrap server config I use
"[::1]:9092"
I had the exact problem you are having and I resolved it as follows:
1. I ran the following command in my WSL2 Ubuntu shell:
ip addr | grep "eth0"
2. I made note of the IP address against the inet property, for example, 172.27.10.68.
3. In my Kafka server.properties I replaced the listeners property value as follows:
listeners=PLAINTEXT://172.27.10.68:9092
I commented out the advertised.listeners property. You can alternatively assign the IP in question to that property and have the listeners property set to 0.0.0.0, but I assume you are using the Kafka installation for testing/learning purposes, so I would keep it simple. I made no change to Zookeeper's default ip:port.
4. I am using the Schema Registry, so I modified the Kafka bootstrap property as follows:
kafkastore.bootstrap.servers=PLAINTEXT://172.27.10.68:9092
I made no change to the default Schema Registry listener (listeners=http://0.0.0.0:8081).
5. I used the same IP (as listed above) in my IntelliJ Kafka producer.
It then happily connected to my Kafka broker in WSL2.
More information on WSL2 networking can be found at https://learn.microsoft.com/en-us/windows/wsl/compare-versions .
The only problem with this setup is that every time you shut down or restart your Windows machine, or close your Ubuntu terminal, the IP address for eth0 changes, and that means redoing steps 2, 4 and 5. I am sure there is a better way, but everything I tried failed, except for this.
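One way to reduce that churn (my own sketch, not part of the original answer) is to avoid hard-coding the address in the Java client and read it from an environment variable instead, so only one value needs updating after a reboot. KAFKA_BOOTSTRAP is a hypothetical variable name here:
// Hypothetical env var so the changing WSL2 IP lives in one place
String bootstrapServers = System.getenv().getOrDefault("KAFKA_BOOTSTRAP", "127.0.0.1:9092");
properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);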
WSL2 runs on a hypervisor, so you need a port proxy to connect to a Kafka broker running on WSL2.
Step 1: Check your WSL2 IP using the following command and copy the inet value:
$ ifconfig
inet 172.X.X.X
Step 2: Open cmd with Admin permissions and run:
netsh interface portproxy add v4tov4 listenport=9092 listenaddress=0.0.0.0 connectport=9092 connectaddress=172.X.X.X
You should be able to connect now.
Note: the WSL2 IP changes every time you restart the machine.
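Because of that, the proxy rule goes stale after a reboot; as far as I know, netsh lets you inspect and remove the old rule before re-adding it with the new connectaddress:
netsh interface portproxy show v4tov4
netsh interface portproxy delete v4tov4 listenport=9092 listenaddress=0.0.0.0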
I was able to find a workaround, thanks to Goose's comments:
I ran the following command in my WSL2 Ubuntu shell: ip addr
Then I took the IP address against the inet property for scope global eth0, for example: inet 172.20.XXX.XXX/20 .... scope global eth0
I replaced all occurrences of localhost with this IP address in docker-compose.yml.
I replaced localhost with this IP address in the Spring Boot yml or properties file.
My Kafka producer and consumer are now able to connect to the Kafka running in Ubuntu (WSL2) from Windows.
Stop Kafka and Zookeeper, then
Disable IPv6 on WSL2:
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
Start Kafka, and you're good to go!
I got this problem when running a Kafka producer in IntelliJ and a consumer in an Ubuntu terminal while on WSL2.
First, stop Kafka and Zookeeper. Then run these commands on WSL2, one by one:
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
After that, in the Kafka folder, go to config/server.properties and edit the file to add the line:
listeners=PLAINTEXT://localhost:9092
When these commands have succeeded, relaunch Zookeeper and Kafka.
https://www.conduktor.io/kafka/kafka-fundamentals
This is not the optimal solution, but you will be able to connect if you run your producer in Ubuntu/WSL. This means writing the code in a Windows IDE, then switching to Ubuntu and using a command-line compiler to run the producer. See this post: Error connecting to kafka server via IDE in WSL2
Edit the file /etc/sysctl.conf and add the following lines to it:
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
Replace listeners=PLAINTEXT://:9092 with listeners=PLAINTEXT://localhost:9092 in your server.properties.
Update the sysctl configuration using the following command (every time you restart your machine, this command needs to be run to update the configuration):
sudo sysctl -p

Spring Data Elasticsearch: Understanding path.home

I am following the Spring Data Elasticsearch tutorial.
In section 2.3, Java Configuration, there is:
@Value("${elasticsearch.home:/usr/local/Cellar/elasticsearch/5.6.0}")
private String elasticsearchHome;
which is used in:
@Bean
public Client client() {
    Settings elasticsearchSettings = Settings.builder()
            .put("client.transport.sniff", true)
            .put("path.home", elasticsearchHome)
            .put("cluster.name", clusterName).build();
    TransportClient client = new PreBuiltTransportClient(elasticsearchSettings);
    client.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
    return client;
}
First, I didn't understand why the client needs the installation directory of Elasticsearch. Second, I don't have Elasticsearch installed on the client computer; it's installed on a remote server. So what should I do with this setting, path.home?
(Of course I will change "127.0.0.1" too.)

Connect to aerospike running in a vagrant vm on MacOS

I followed the steps here to install Aerospike on my Mac: https://www.aerospike.com/docs/operations/install/vagrant/mac/index.html
I am able to open the AMC console on my system, but I am unable to connect to Aerospike via my Java Aerospike client.
I am creating my AerospikeClient like below in my Java code:
new AerospikeClient("172.28.128.3", 8081);
Getting the below error. Can anyone let me know what the problem is?
Factory method 'aerospikeClient' threw exception; nested exception is com.aerospike.client.AerospikeException$Connection: Error Code 11: Failed to connect to host(s):
172.28.128.3 8081 java.io.EOFException
Try connecting to port 3000, not 8081. Port 8081 is only open to allow AMC to connect from the macOS side to the cluster.
Install the tools package for macOS and try the following:
asinfo -h "172.28.128.3" -v version
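In the Java client, that means constructing the client against the service port rather than the AMC port, e.g.:
// 3000 is the port the answer points at; 8081 is only for the AMC console
AerospikeClient client = new AerospikeClient("172.28.128.3", 3000);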

ElasticSearch Java Transport Client NoNodeAvailableException on Ubuntu 14.04

I am running Elasticsearch v. 2.3.2, using Java 7. Following is the printout from curl http://172.31.11.83:9200:
{
  "name" : "ip-172-31-11-83",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.3.2",
    "build_hash" : "b9e4a6acad4008027e4038f6abed7f7dba346f94",
    "build_timestamp" : "2016-04-21T16:03:47Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}
... and I am using the following in my Java code:
<dependency>
  <groupId>org.elasticsearch</groupId>
  <artifactId>elasticsearch</artifactId>
  <version>2.3.2</version>
</dependency>
I have ports 9200 and 9300 open in my firewall rules for my ES server, and can successfully execute said Java code from my laptop (Mac OSX). Following is the code snippet that starts off the process (this works fine):
Settings settings = Settings.settingsBuilder()
        .put("cluster.name", "elasticsearch").build();
esClient = TransportClient.builder().settings(settings).build()
        .addTransportAddress(new InetSocketTransportAddress(
                new InetSocketAddress(InetAddress.getByName("172.31.11.83"), 9300)));
Then later, I try to issue an index request (this fails when I run the code on Ubuntu 14.04):
adminClient = esClient.admin().indices();
IndicesExistsResponse response = adminClient.exists(request).actionGet();
My elasticsearch.yml file contains the following network settings:
network.bind_host: 0
network.publish_host: 172.31.11.83
transport.tcp.port: 9300
http.port: 9200
I have also tried with network.bind_host: 172.31.11.83 to no avail. Using curl, I can get to port 9200 from all machines. The cluster name reported by curl is "elasticsearch".
When I start ES, I see the following in the elasticsearch.log:
publish_address {172.31.11.83:9300}, bound_addresses {[::]:9300}
And yet, the exception I get is as follows:
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{172.31.11.83}{172.31.11.83:9300}]]
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:290)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:207)
at org.elasticsearch.client.transport.support.TransportProxyClient.execute(TransportProxyClient.java:55)
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:283)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:347)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:336)
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1178)
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.exists(AbstractClient.java:1198)
Again, this exact code works from my local machine. Any thoughts?
Having an identical issue:
Upgraded Elastic from 1.7 to 2.3.2 on the same AWS kit
Ubuntu 14.04
Elastic binding transport on 9300 as before
Security group has the port open (not changed)
Now remote clients cannot connect via the transport layer (same error as above).
The only thing that has changed in my setup is the version of Elasticsearch.
OK, I solved this. It appears 2.3.2 doesn't default the TCP bind in the same way as 1.7.0.
I had to set this in my elasticsearch.yml:
network.bind_host: <AWS private IP address>

Elasticsearch & NetFlix Edda - NoNodeAvailableException: No node available

I am trying to get the Netflix open source solution Edda to work with Elasticsearch. I know I've installed Edda correctly because I can get it working with MongoDB as a backend. I'd prefer to use Elasticsearch so I can get the benefits of Kibana rather than write my own frontend. So I'm running Edda and Elasticsearch on the same server in AWS at the moment (just trying to get it working). Elasticsearch is operational:
{
  "name" : "Arsenic",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.1.0",
    "build_hash" : "72cd1f1a3eee09505e036106146dc1949dc5dc87",
    "build_timestamp" : "2015-11-18T22:40:03Z",
    "build_snapshot" : false,
    "lucene_version" : "5.3.1"
  },
  "tagline" : "You Know, for Search"
}
And to show it's listening:
netstat -tulpn | grep java
tcp 0 0 ::ffff:<myip>:9300 :::* LISTEN 2270/java
tcp 0 0 ::ffff:<myip>:9200 :::* LISTEN 2270/java
I updated my Java version from 1.7 to 1.8, as I believe the Java version used by Elasticsearch and the one running on the server have to match. I can't see a reason why 1.8 would be causing an issue:
java -version
openjdk version "1.8.0_65"
OpenJDK Runtime Environment (build 1.8.0_65-b17)
OpenJDK 64-Bit Server VM (build 25.65-b01, mixed mode)
Here's my edda properties file:
cat /home/ec2-user/edda/src/main/resources/edda.properties | grep elasticsearch
edda.datastore.current.class=com.netflix.edda.elasticsearch.ElasticSearchDatastore
edda.elector.class=com.netflix.edda.elasticsearch.ElasticSearchElector
edda.elasticsearch.cluster=elasticsearch
edda.elasticsearch.address=<myip>:9300
edda.elasticsearch.shards=5
edda.elasticsearch.replicas=0
# http://www.elasticsearch.org/guide/reference/api/index_/
edda.elasticsearch.writeConsistency=quorum
edda.elasticsearch.replicationType=async
edda.elasticsearch.scanBatchSize=1000
edda.elasticsearch.scanCursorDuration=60000
edda.elasticsearch.bulkBatchSize=0
And in my elasticsearch.yml file:
network.host: <myip>
I haven't specified a cluster name, so it assumes the default 'elasticsearch'.
So when I run Edda to poll AWS and populate Elasticsearch with the data it finds, I receive this error:
[Collection aws.hostedZones] init: caught org.elasticsearch.client.transport.NoNodeAvailableException: No node available
at com.netflix.edda.Collection$$anonfun$init$1.apply$mcV$sp(Collection.scala:471)
at com.netflix.edda.Utils$$anon$1.act(Utils.scala:169)
at scala.actors.Reactor$$anonfun$dostart$1.apply(Reactor.scala:224)
at scala.actors.Reactor$$anonfun$dostart$1.apply(Reactor.scala:224)
at scala.actors.ReactorTask.run(ReactorTask.scala:33)
at scala.actors.ReactorTask.compute(ReactorTask.scala:63)
at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:160)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Clearly it can't connect to the Elasticsearch cluster, yet the cluster name is correct, it's listening on the correct port and IP address as far as I can tell, and I don't think there's an issue with the Java version.
I'm probably missing something very simple.
Thanks in advance for all your assistance.
Regards
Neilos
I've figured it out. The Java client used in Edda is pinned to Elasticsearch version 0.90.0, which is set in build.gradle; if you install that version of Elasticsearch, it works. Obviously that's a very old version of Elasticsearch, which you are not likely to want to use. If you change the version number in this file, the build fails when it tries to compile due to broken paths (missing assemblies). I'm weighing up whether it's worth trying to resolve these assembly issues to get Edda working with the latest version of Elasticsearch, or whether to use MongoDB, which works without any code changes but will only provide REST API functionality. At least the problem is resolved.
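For reference, the pinned dependency in Edda's build.gradle would be a line along these lines (an illustration of the shape, not the exact entry; check the file itself):
compile 'org.elasticsearch:elasticsearch:0.90.0'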
