I am running Elasticsearch v. 2.3.2, using Java 7. Following is the printout from curl http://172.31.11.83:9200:
{
"name" : "ip-172-31-11-83",
"cluster_name" : "elasticsearch",
"version" : {
"number" : "2.3.2",
"build_hash" : "b9e4a6acad4008027e4038f6abed7f7dba346f94",
"build_timestamp" : "2016-04-21T16:03:47Z",
"build_snapshot" : false,
"lucene_version" : "5.5.0"
},
"tagline" : "You Know, for Search"
}
... and I am using the following dependency in my Java project:
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch</artifactId>
<version>2.3.2</version>
</dependency>
I have ports 9200 and 9300 open in my firewall rules for my ES server, and can successfully execute said Java code from my laptop (Mac OSX). Following is the code snippet that starts off the process (this works fine):
Settings settings = Settings.settingsBuilder()
        .put("cluster.name", "elasticsearch").build();
esClient = TransportClient.builder().settings(settings).build()
        .addTransportAddress(new InetSocketTransportAddress(
                new InetSocketAddress(InetAddress.getByName("172.31.11.83"), 9300)));
Then later, I try to issue an index request (this fails when I run the code on Ubuntu 14.04):
adminClient = esClient.admin().indices();
IndicesExistsResponse response = adminClient.exists(request).actionGet();
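As a side diagnostic (not part of the original post), it can help to ask the transport client which nodes it actually connected to before issuing the request; a minimal sketch, assuming the same esClient built above and the matching 2.3.2 client classes:
// Sketch: list the nodes the TransportClient managed to attach to.
// An empty list means the transport handshake to 172.31.11.83:9300 never succeeded.
TransportClient tc = (TransportClient) esClient;
for (DiscoveryNode node : tc.connectedNodes()) {   // org.elasticsearch.cluster.node.DiscoveryNode
    System.out.println("connected node: " + node.getName() + " at " + node.getAddress());
}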
My elasticsearch.yml file contains the following network settings:
network.bind_host: 0
network.publish_host: 172.31.11.83
transport.tcp.port: 9300
http.port: 9200
I have also tried with network.bind_host: 172.31.11.83 to no avail. Using curl, I can get to port 9200 from all machines. The cluster name reported by curl is "elasticsearch".
When I start ES, I see the following in the elasticsearch.log:
publish_address {172.31.11.83:9300}, bound_addresses {[::]:9300}
And yet, the exception I get is as follows:
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{172.31.11.83}{172.31.11.83:9300}]]
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:290)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:207)
at org.elasticsearch.client.transport.support.TransportProxyClient.execute(TransportProxyClient.java:55)
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:283)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:347)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:336)
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1178)
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.exists(AbstractClient.java:1198)
Again, this exact code works from my local machine. Any thoughts?
I'm having an identical issue.
Upgraded Elasticsearch from 1.7 to 2.3.2 on the same AWS kit
Ubuntu 14.04
Elastic binding transport on 9300 as before
Security group has port open (not changed)
Now remote clients cannot connect via transport layer - same error as above.
The only thing that has changed in my setup is the version of Elasticsearch
OK, I solved this. It appears 2.3.2 doesn't default the TCP bind in the same way as 1.7.0.
I had to set this in my elasticsearch.yml :
network.bind_host: {AWS private IP address}
Related
I am running the latest Kafka on Ubuntu on WSL2 successfully. I can start Zookeeper and the Kafka server, create topics, and console-produce and console-consume just fine from within the Ubuntu instance that I have running on WSL. However, when I go into my IntelliJ on Windows and create a simple Java producer, it does not seem to be able to connect to the broker.
Versions & Hostname
Java version: 1.8
Kafka Version: 2.6
hostname (from Ubuntu): KDAAPPDEV04
hostname (from Powershell): KDAAPPDEV04
java.net.InetAddress.getLocalHost().getHostName() = KDAAPPDEV04
java.net.InetAddress.getLocalHost().getCanonicalHostName() = KDAAPPDEV04
netstat from CMD:
TCP [::1]:9092 [::]:0 LISTENING
server.properties
I found these settings in another SO answer, but they did not work for me.
advertised.listeners=PLAINTEXT://127.0.0.1:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT
listeners=PLAINTEXT://0.0.0.0:9092
Then I tried the following (and restarted Zookeeper and Kafka):
advertised.listeners=PLAINTEXT://KDAAPPDEV04:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT
listeners=PLAINTEXT://0.0.0.0:9092
Producer
I run this producer with three different values: hostname, localhost, and 127.0.0.1, but it never connects to the broker.
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ProducerDemo {
    private static Logger logger = LoggerFactory.getLogger(ProducerDemo.class);

    public static void main(String[] args) throws UnknownHostException {
        System.out.println(InetAddress.getLocalHost().getHostName());
        System.out.println(InetAddress.getLocalHost().getCanonicalHostName());

        String bootstrapServers = "127.0.0.1:9092";
        // String bootstrapServers = "localhost:9092";
        // String bootstrapServers = "KDAAPPDEV04:9092";

        // create producer properties
        Properties properties = new Properties();
        properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // create the producer
        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(properties);

        // create a producer record
        ProducerRecord<String, String> record = new ProducerRecord<String, String>("first-topic", "hola mundo");

        // send data
        producer.send(record);

        // flush + close
        producer.flush();
        producer.close();
    }
}
Error
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.6.0
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 62abe01bee039651
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1601666175706
[kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Connection to node -1 (KDAAPPDEV04/my-ipconfig-address-here:9092) could not be established. Broker may not be available.
Had this same issue. The root cause seems to be that WSL2 is broken with regard to IPv6 and localhost (see: https://github.com/microsoft/WSL/issues/4851).
The only fix I found that doesn't involve changing configs every time you reboot (per the "172.*" suggestion above) is to use the IPv6 loopback address ::1 in both the Kafka server config running in Linux and the Java client in Windows.
In server.properties I have this:
listeners=PLAINTEXT://[::1]:9092
And likewise in my Java client bootstrap server config I use
"[::1]:9092"
I had the exact problem you are having and I resolved it as follows:
1. I ran the following command in my WSL2 Ubuntu shell: ip addr | grep "eth0", and made a note of the IP address against the inet property, for example, 172.27.10.68.
2. In my Kafka server.properties I replaced the listeners property value as follows: listeners=PLAINTEXT://172.27.10.68:9092. I commented out the advertised.listeners property. Alternatively, you can assign the IP in question to advertised.listeners and set listeners to 0.0.0.0, but I assume you are using the Kafka installation for testing/learning purposes, so I would keep it simple.
3. I made no change to Zookeeper's default ip:port.
4. I am using the Schema Registry, so I modified the Kafka bootstrap property as follows: kafkastore.bootstrap.servers=PLAINTEXT://172.27.10.68:9092. I made no change to the default schema registry listener listeners=http://0.0.0.0:8081.
5. I used the same IP (as listed above) in my IntelliJ Kafka producer.
It then happily connected to my Kafka broker in WSL2.
More information on WSL2 networking can be found at https://learn.microsoft.com/en-us/windows/wsl/compare-versions.
The only problem with this setup is that every time you shut down or restart your Windows machine, or close your Ubuntu terminal, the IP address for eth0 changes, and that means redoing steps 2, 4 and 5 (after checking the new IP again as in step 1). I am sure there is a better way, but everything I tried failed, except for this.
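To avoid editing the producer source after every reboot, one option (a sketch of my own, not part of the original answer; the KAFKA_BOOTSTRAP variable name is an arbitrary choice) is to read the bootstrap address from an environment variable and fall back to the example IP from step 1:
// Sketch: take the current WSL2 eth0 address from an environment variable so only one
// place needs updating after a reboot; 172.27.10.68 is the example IP noted in step 1.
String bootstrapServers = System.getenv().getOrDefault("KAFKA_BOOTSTRAP", "172.27.10.68:9092");
Properties properties = new Properties();
properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
KafkaProducer<String, String> producer = new KafkaProducer<>(properties);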
WSL2 runs on a hypervisor, so you need a port proxy to connect to a Kafka broker running on WSL2.
Step 1. Check your WSL2 IP using the following command and copy the inet value:
$ ifconfig
inet 172.X.X.X
Step 2. Open cmd with Admin permissions and run:
netsh interface portproxy add v4tov4 listenport=9092 listenaddress=0.0.0.0 connectport=9092 connectaddress=172.X.X.X
You should be able to connect now.
Note: the WSL2 IP changes every time you restart the machine.
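Once the proxy is in place, a plain socket test from the Windows side confirms whether the broker is reachable on localhost; a throwaway sketch (my addition, plain JDK only):
import java.net.InetSocketAddress;
import java.net.Socket;

// Sketch: verify that the netsh portproxy forwards localhost:9092 into WSL2.
public class PortCheck {
    public static void main(String[] args) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("localhost", 9092), 3000); // 3 second timeout
            System.out.println("localhost:9092 is reachable");
        } catch (Exception e) {
            System.out.println("localhost:9092 is NOT reachable: " + e);
        }
    }
}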
I was able to find a workaround, thanks to Goose's comments:
I ran the following command in my WSL2 Ubuntu shell: ip addr
Then I took the IP address against the inet property for scope global eth0, for example, inet 172.20.XXX.XXX/20 .... scope global eth0
I replaced all occurrences of localhost with this IP address in docker-compose.yml.
I replaced localhost with this IP address in the Spring Boot yml or properties file.
My Kafka producer and consumer were then able to connect from Windows to the Kafka running in Ubuntu on WSL2.
Stop Kafka and Zookeeper, then
Disable IPv6 on WSL2:
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
Start Kafka, and you're good to go!
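If you want to see which addresses the client JVM actually resolves localhost to (which is what trips over the IPv6 issue), here is a small diagnostic sketch; it is my addition, not part of the original answer:
import java.net.InetAddress;

// Sketch: print every address "localhost" resolves to from the client JVM,
// e.g. 127.0.0.1 and/or 0:0:0:0:0:0:0:1.
public class LocalhostCheck {
    public static void main(String[] args) throws Exception {
        for (InetAddress address : InetAddress.getAllByName("localhost")) {
            System.out.println(address.getHostAddress());
        }
    }
}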
I got this problem when running a Kafka producer in IntelliJ and a consumer in an Ubuntu terminal while on WSL2.
First, stop Kafka and Zookeeper. Then run these commands on WSL2, one by one:
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
After that, in the kafka folder, go to config/server.properties and edit the file to add the line:
listeners=PLAINTEXT://localhost:9092
When these commands have succeeded, relaunch Zookeeper and Kafka.
https://www.conduktor.io/kafka/kafka-fundamentals
This is not the optimal solution, but you will be able to connect if you run your producer in Ubuntu/WSL. That means writing the code in your Windows IDE, switching to Ubuntu, compiling with a command-line compiler, and running the producer there. See this post: Error connecting to kafka server via IDE in WSL2
Edit the file /etc/sysctl.conf and add the following lines to it:
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
Replace listeners=PLAINTEXT://:9092 with listeners=PLAINTEXT://localhost:9092 in your server.properties.
Update the sysctl configuration using the following command. (Every time you restart your machine, this command needs to be run to reapply the configuration.)
sudo sysctl -p
I'm deploying a Java app that runs on port 8761, and it works fine on localhost.
However, when I push it to the App Engine flexible environment, I get an HTTP 502 server error.
Here is my app.yaml:
runtime: java
env: flex
service: eureka
runtime_config:
  jdk: openjdk8
handlers:
- url: /.*
  script: ignore
  secure: always
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 2
The log from gcloud is fine and the server is running, but my request doesn't seem to hit the app at all.
I noticed that if I run on port 8080, it works. For now, it is not a problem to change the default port to 8080, but I would like to understand why I'm not able to run it on 8761.
I think you need to use the network settings section in the app.yaml config file:
network:
  forwarded_ports:
    - 8761/tcp
You might also need to set firewall rules in the Cloud Platform Console.
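If forwarding the port is not essential, the other route (hinted at by the fact that 8080 works for the poster) is to have the app bind to the port App Engine flex routes to by default. A rough sketch with a plain JDK server, assuming a simple entry point rather than the poster's actual Eureka setup:
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// Sketch: bind to 8080 (App Engine flex's default routed port),
// or to whatever PORT the environment provides.
public class App {
    public static void main(String[] args) throws Exception {
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        System.out.println("Listening on port " + port);
    }
}
Deployed this way, App Engine's default routing works without any forwarded_ports entry.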
I am trying to get Netflix open source solution Edda to work with Elasticsearch. I know I've installed Edda correctly because I can get it working with MongoDB as a backend successfully. I'd prefer to use Elasticsearch so I can get the benefits of Kibana rather than write my own frontend. So I'm running Edda and Elasticsearch on the same server in AWS at the moment (just trying to get it working). Elasticsearch is operational:
{
"name" : "Arsenic",
"cluster_name" : "elasticsearch",
"version" : {
"number" : "2.1.0",
"build_hash" : "72cd1f1a3eee09505e036106146dc1949dc5dc87",
"build_timestamp" : "2015-11-18T22:40:03Z",
"build_snapshot" : false,
"lucene_version" : "5.3.1"
},
"tagline" : "You Know, for Search"
}
And to show it's listening:
netstat -tulpn | grep java
tcp 0 0 ::ffff:<myip>:9300 :::* LISTEN 2270/java
tcp 0 0 ::ffff:<myip>:9200 :::* LISTEN 2270/java
I updated my Java version from 1.7 to 1.8, as I believe the Java version for Elasticsearch and the one running on the server have to match. I can't see a reason why 1.8 would be causing an issue:
java -version
openjdk version "1.8.0_65"
OpenJDK Runtime Environment (build 1.8.0_65-b17)
OpenJDK 64-Bit Server VM (build 25.65-b01, mixed mode)
Here's my edda properties file:
cat /home/ec2-user/edda/src/main/resources/edda.properties | grep elasticsearch
edda.datastore.current.class=com.netflix.edda.elasticsearch.ElasticSearchDatastore
edda.elector.class=com.netflix.edda.elasticsearch.ElasticSearchElector
edda.elasticsearch.cluster=elasticsearch
edda.elasticsearch.address=<myip>:9300
edda.elasticsearch.shards=5
edda.elasticsearch.replicas=0
# http://www.elasticsearch.org/guide/reference/api/index_/
edda.elasticsearch.writeConsistency=quorum
edda.elasticsearch.replicationType=async
edda.elasticsearch.scanBatchSize=1000
edda.elasticsearch.scanCursorDuration=60000
edda.elasticsearch.bulkBatchSize=0
And in my elasticsearch.yml file:
network.host: <myip>
I haven't specified a cluster name, so it assumes the default 'elasticsearch'.
So when I run Edda to poll AWS and populate Elasticsearch with the data it finds, I receive this error:
[Collection aws.hostedZones] init: caught org.elasticsearch.client.transport.NoNodeAvailableException: No node available
at com.netflix.edda.Collection$$anonfun$init$1.apply$mcV$sp(Collection.scala:471)
at com.netflix.edda.Utils$$anon$1.act(Utils.scala:169)
at scala.actors.Reactor$$anonfun$dostart$1.apply(Reactor.scala:224)
at scala.actors.Reactor$$anonfun$dostart$1.apply(Reactor.scala:224)
at scala.actors.ReactorTask.run(ReactorTask.scala:33)
at scala.actors.ReactorTask.compute(ReactorTask.scala:63)
at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:160)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Clearly it can't connect to the Elasticsearch cluster, yet the cluster name is correct, it's listening on the correct port and IP address as far as I can tell, and I don't think there's an issue with the Java version.
I'm probably missing something very simple.
Thanks in advance for all your assistance.
Regards
Neilos
I've figured it out: the Java client used in Edda is set to Elasticsearch version 0.90.0 in build.gradle, and if you install that version of Elasticsearch it works. Obviously that's a very old version of Elasticsearch which you are not likely to want to use. If you change the version number in this file, the build fails to compile due to broken paths (missing assemblies). I'm weighing up whether it's worth trying to resolve these assembly issues to get it working with the latest version of Elasticsearch, or whether to use MongoDB, which works without any code changes but only provides the REST API functionality. At least the problem is resolved.
[Using ElasticSearch version 2.0]
In the /etc/hosts file, "esnode" is mapped to an IP address (some other machine where ES is running), as shown:
192.168.2.219 esnode
The TransportClient code is:
public Client getClient() {
    if (this.client == null) {
        try {
            Settings settings = Settings.settingsBuilder()
                    .put("cluster.name", "myclustername").build();
            TransportClient tClient = TransportClient.builder().settings(settings).build();
            String[] nodes = "esnode:9300".split(COMMA);
            for (String node : nodes) {
                String[] hostPort = node.split(COLON);
                tClient.addTransportAddress(new InetSocketTransportAddress(
                        InetAddress.getByName(hostPort[0]), Integer.parseInt(hostPort[1])));
            }
            this.client = tClient;
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    return this.client;
}
This client code runs, but when executing the code below:
this.getClient().prepareGet(indexName, typeName, String.valueOf(id)).get();
The exception is thrown:
NoNodeAvailableException[None of the configured nodes are available: []]
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:280)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:197)
at org.elasticsearch.client.transport.support.TransportProxyClient.execute(TransportProxyClient.java:55)
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:272)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:347)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:85)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:59)
at org.elasticsearch.action.ActionRequestBuilder.get(ActionRequestBuilder.java:67)
I have also tried using the IP address instead of the hostname. The above code runs properly if
esnode is mapped to 127.0.0.1.
Can somebody help?
Set the Elasticsearch host IP address as the network.host value in elasticsearch.yml:
network.host: es_host_ip
This solves the TransportClient NoNodeAvailableException issue.
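It is also worth confirming, from the client machine, that esnode resolves to the same address Elasticsearch is now bound to; a small sketch (my addition), using only the JDK:
import java.net.InetAddress;

// Sketch: check what "esnode" resolves to on the client machine.
// It should print 192.168.2.219, the address network.host is bound to.
public class ResolveCheck {
    public static void main(String[] args) throws Exception {
        for (InetAddress address : InetAddress.getAllByName("esnode")) {
            System.out.println("esnode -> " + address.getHostAddress());
        }
    }
}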
Check if your Elasticsearch server is also version 2.0; if not, upgrade. Client and server must have the same version to work. I don't know why, but this solved my problem.
Cheers,
Another reason could be that your Elasticsearch Java client is a different version from your Elasticsearch server.
The Elasticsearch Java client version is simply the version of the elasticsearch jar in your code base.
For example, in my code it's elasticsearch-2.4.0.jar.
To verify the Elasticsearch server version:
$ /Users/kkolipaka/elasticsearch/bin/elasticsearch -version
Version: 5.2.2, Build: f9d9b74/2017-02-24T17:26:45.835Z, JVM: 1.8.0_111
As you can see, I had downloaded the latest version of the Elastic server (5.2.2) but forgot to update the ES Java API client, which was still version 2.4.0: https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/client.html
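A quick way to see which client version is actually on the classpath (a sketch of mine, assuming the elasticsearch core jar that ships org.elasticsearch.Version is present, which it is for the transport client) is to print Version.CURRENT and compare it with the server version reported above:
import org.elasticsearch.Version;

// Sketch: print the Elasticsearch version baked into the client jar.
// Compare this with the output of `bin/elasticsearch -version` on the server.
public class ClientVersionCheck {
    public static void main(String[] args) {
        System.out.println("Client jar version: " + Version.CURRENT);
    }
}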
I set up an elasticsearch container with the OFFICIAL REPO elasticsearch docker image, then ran it with:
docker run -dP elasticsearch
Easy, and it worked. The ps info is:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
658b49ed9551 elasticsearch:latest "/docker-entrypoint. 2 seconds ago Up 1 seconds 0.0.0.0:32769->9200/tcp, 0.0.0.0:32768->9300/tcp suspicious_albattani
And I can access the server with an HTTP client via port 32769->9200:
baihetekiMacBook-Pro:0 baihe$ curl 10.211.55.100:32769
{
"status" : 200,
"name" : "Scorpia",
"cluster_name" : "elasticsearch",
"version" : {
"number" : "1.4.5",
"build_hash" : "2aaf797f2a571dcb779a3b61180afe8390ab61f9",
"build_timestamp" : "2015-04-27T08:06:06Z",
"build_snapshot" : false,
"lucene_version" : "4.10.4"
},
"tagline" : "You Know, for Search"
}
Now I need my Java program to work with the dockerized elasticsearch. The Java client can only connect to elasticsearch through 32768->9300 (the cluster node communication port), so I configure the transport client in my Java code like this:
Settings settings = ImmutableSettings.settingsBuilder()
.put("client.transport.sniff", true)
.put("client.transport.ignore_cluster_name", true).build();
client = new TransportClient(settings);
((TransportClient) client)
.addTransportAddress(new InetSocketTransportAddress(
"10.211.55.100", 32768));
Then I get the following errors in the console:
Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:305)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:200)
at org.elasticsearch.client.transport.support.InternalTransportIndicesAdminClient.execute(InternalTransportIndicesAdminClient.java:86)
at org.elasticsearch.client.support.AbstractIndicesAdminClient.exists(AbstractIndicesAdminClient.java:170)
at org.elasticsearch.action.admin.indices.exists.indices.IndicesExistsRequestBuilder.doExecute(IndicesExistsRequestBuilder.java:53)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:91)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:65)
at cct.bigdata.yellowbook.service.impl.ResourceServiceImpl.<init>(ResourceServiceImpl.java:49)
at cct.bigdata.yellowbook.config.YellowBookConfig.resourceService(YellowBookConfig.java:21)
at cct.bigdata.yellowbook.config.YellowBookConfig$$EnhancerBySpringCGLIB$$e7d2ff3e.CGLIB$resourceService$0(<generated>)
at cct.bigdata.yellowbook.config.YellowBookConfig$$EnhancerBySpringCGLIB$$e7d2ff3e$$FastClassBySpringCGLIB$$72e3e213.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228)
at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:312)
at cct.bigdata.yellowbook.config.YellowBookConfig$$EnhancerBySpringCGLIB$$e7d2ff3e.resourceService(<generated>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:166)
... 31 common frames omitted
When I run elasticsearch directly on the host, everything is all right.
I checked all the elasticsearch Dockerfiles on Docker Hub. It seems all of them simply do the following:
EXPOSE 9200 9300
I wonder whether anyone has tried something similar. Is 9300 a normal TCP port or a UDP port? Do I need to do anything special when running the container? Thanks!
If you set "client.transport.sniff" to false, it should work.
If you still want to use sniffing, follow these instructions:
https://github.com/olivere/elastic/wiki/Docker
Detailed discussion here: https://github.com/olivere/elastic/issues/57#issuecomment-88697714
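In terms of the 1.4.5 client code from the question, that just means flipping the sniff flag; a sketch reusing the same classes and docker-mapped address as the question:
// Sketch: same transport client as in the question, with sniffing disabled so the client
// keeps using the docker-mapped address instead of the node's internal publish address.
Settings settings = ImmutableSettings.settingsBuilder()
        .put("client.transport.sniff", false)
        .put("client.transport.ignore_cluster_name", true).build();
Client client = new TransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress("10.211.55.100", 32768));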
This works for me (in docker-compose.yml).
version: "2"
services:
  elasticsearch5:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.5.3
    container_name: elasticsearch5
    environment:
      - cluster.name=elasticsearch5-cluster
      - http.host=0.0.0.0
      - network.publish_host=127.0.0.1
      - transport.tcp.port=9700
      - discovery.type=single-node
      - xpack.security.enabled=false
    ports:
      - "9600:9200"
      - "9700:9700"
Specifying network.publish_host and transport.tcp.port seems to do the trick. And sniff=true still works.
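For reference, a client pointed at this compose setup could look like the following; this is a sketch using the 5.x PreBuiltTransportClient (from the org.elasticsearch.client:transport artifact), which is my assumption about the reader's client rather than part of the original answer:
// Sketch: 5.x transport client for the compose file above
// (publish host 127.0.0.1, transport port 9700, cluster name elasticsearch5-cluster).
Settings settings = Settings.builder()
        .put("cluster.name", "elasticsearch5-cluster")
        .build();
TransportClient client = new PreBuiltTransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9700));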