Elasticsearch & Netflix Edda - NoNodeAvailableException: No node available - java

I am trying to get Netflix's open source solution Edda to work with Elasticsearch. I know I've installed Edda correctly because I can get it working successfully with MongoDB as a backend. I'd prefer to use Elasticsearch so I can get the benefits of Kibana rather than write my own frontend. I'm running Edda and Elasticsearch on the same server in AWS at the moment (just trying to get it working). Elasticsearch is operational:
{
  "name" : "Arsenic",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.1.0",
    "build_hash" : "72cd1f1a3eee09505e036106146dc1949dc5dc87",
    "build_timestamp" : "2015-11-18T22:40:03Z",
    "build_snapshot" : false,
    "lucene_version" : "5.3.1"
  },
  "tagline" : "You Know, for Search"
}
And to show it's listening:
netstat -tulpn | grep java
tcp 0 0 ::ffff:<myip>:9300 :::* LISTEN 2270/java
tcp 0 0 ::ffff:<myip>:9200 :::* LISTEN 2270/java
I updated Java from 1.7 to 1.8, as I believe the Java version used by Elasticsearch and the one running on the server have to match. I can't see a reason why 1.8 would be causing an issue:
java -version
openjdk version "1.8.0_65"
OpenJDK Runtime Environment (build 1.8.0_65-b17)
OpenJDK 64-Bit Server VM (build 25.65-b01, mixed mode)
Here's my edda properties file:
cat /home/ec2-user/edda/src/main/resources/edda.properties | grep elasticsearch
edda.datastore.current.class=com.netflix.edda.elasticsearch.ElasticSearchDatastore
edda.elector.class=com.netflix.edda.elasticsearch.ElasticSearchElector
edda.elasticsearch.cluster=elasticsearch
edda.elasticsearch.address=<myip>:9300
edda.elasticsearch.shards=5
edda.elasticsearch.replicas=0
# http://www.elasticsearch.org/guide/reference/api/index_/
edda.elasticsearch.writeConsistency=quorum
edda.elasticsearch.replicationType=async
edda.elasticsearch.scanBatchSize=1000
edda.elasticsearch.scanCursorDuration=60000
edda.elasticsearch.bulkBatchSize=0
And in my elasticsearch.yml file:
network.host: <myip>
I haven't specified a cluster name, so it assumes the default 'elasticsearch'.
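To take that assumption out of play, it may be worth pinning the cluster name explicitly in elasticsearch.yml so it visibly matches edda.elasticsearch.cluster. A minimal sketch (the default really is 'elasticsearch', so this only makes the match explicit; <myip> is the same placeholder as above):
cluster.name: elasticsearch
network.host: <myip>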
So when I run Edda to poll AWS and populate Elasticsearch with the data it finds, I receive this error:
[Collection aws.hostedZones] init: caught org.elasticsearch.client.transport.NoNodeAvailableException: No node available
at com.netflix.edda.Collection$$anonfun$init$1.apply$mcV$sp(Collection.scala:471)
at com.netflix.edda.Utils$$anon$1.act(Utils.scala:169)
at scala.actors.Reactor$$anonfun$dostart$1.apply(Reactor.scala:224)
at scala.actors.Reactor$$anonfun$dostart$1.apply(Reactor.scala:224)
at scala.actors.ReactorTask.run(ReactorTask.scala:33)
at scala.actors.ReactorTask.compute(ReactorTask.scala:63)
at scala.concurrent.forkjoin.RecursiveAction.exec(RecursiveAction.java:160)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Clearly it can't connect to the Elasticsearch cluster, yet the cluster name is correct, it's listening on the correct port and IP address as far as I can tell, and I don't think there's an issue with the Java version.
I'm probably missing something very simple.
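For what it's worth, a standalone connectivity check with a version-matched transport client can help separate a network problem from a client problem. This is only a sketch, assuming the 2.1.0 Elasticsearch Java client is on the classpath and using the same <myip> placeholder as above:

import java.net.InetAddress;
import java.net.InetSocketAddress;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class EsTransportCheck {
    public static void main(String[] args) throws Exception {
        // Same cluster name and transport port that Edda is configured with.
        Settings settings = Settings.settingsBuilder()
                .put("cluster.name", "elasticsearch")
                .build();
        TransportClient client = TransportClient.builder().settings(settings).build()
                .addTransportAddress(new InetSocketTransportAddress(
                        new InetSocketAddress(InetAddress.getByName("<myip>"), 9300)));
        // An empty list here means the transport handshake itself is failing
        // (wrong address, blocked port, or a client/server version mismatch).
        System.out.println("Connected nodes: " + client.connectedNodes());
        client.close();
    }
}

If this prints a node, the server side is fine and the problem is on Edda's side of the connection.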
Thanks in advance for all your assistance.
Regards
Neilos

I've figured it out: the Java client used in Edda is pinned to Elasticsearch version 0.90.0 in build.gradle; if you install that version of Elasticsearch, it works. Obviously that's a very old version of Elasticsearch, which you are not likely to want to use. If you change the version number in that file, it fails to compile due to broken paths (missing assemblies). I'm weighing up whether it's worth trying to resolve these assembly issues to get it working with the latest version of Elasticsearch, or whether to use MongoDB, which works without any code changes but only provides the REST API functionality. At least the problem is resolved.
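For anyone hitting the same wall: the pin lives in Edda's build.gradle, and the relevant dependency line presumably looks something like the following (a hypothetical excerpt, the exact coordinates are not verified here; it is shown only to illustrate where the client version is set):

dependencies {
    // hypothetical excerpt - the Elasticsearch Java client version Edda compiles against
    compile 'org.elasticsearch:elasticsearch:0.90.0'
}

That would also explain why simply bumping the version triggers the compile failures described above; the Java client API changed substantially between 0.90.x and 2.x.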

Related

Did upgrading OpenJDK to 8u292 break my AOSP build system?

Software environment:
Ubuntu 20.04 LTS server;
Android AOSP 8.0;
OpenJDK 8;
It worked very well until yesterday, when I upgraded OpenJDK from 8u282 to 8u292. Now the failing build log says:
Ensuring Jack server is installed and started
FAILED: setup-jack-server
/bin/bash -c "(prebuilts/sdk/tools/jack-admin install-server prebuilts/sdk/tools/jack-launcher.jar prebuilts/sdk/tools/jack-server-4.11.ALPHA.jar 2>&1 || (exit 0) ) && (JACK_SERVER_VM_ARGUMENTS=\"-Dfile.encoding=UTF-8 -XX:+TieredCompilation\" prebuilts/sdk/tools/jack-admin start-server 2>&1 ||
exit 0 ) && (prebuilts/sdk/tools/jack-admin update server prebuilts/sdk/tools/jack-server-4.11.ALPHA.jar 4.11.ALPHA 2>&1 || exit 0 ) && (prebuilts/sdk/tools/jack-admin update jack prebuilts/sdk/tools/jacks/jack-4.32.CANDIDATE.jar 4.32.CANDIDATE || exit 47 )"
Jack server already installed in "~/.jack-server"
Launching Jack server java -XX:MaxJavaStackTraceDepth=-1 -Djava.io.tmpdir=/tmp -Dfile.encoding=UTF-8 -XX:+TieredCompilation -cp ~/.jack-server/launcher.jar com.android.jack.launcher.ServerLauncher
Jack server failed to (re)start, try 'jack-diagnose' or see Jack server log
SSL error when connecting to the Jack server. Try 'jack-diagnose'
SSL error when connecting to the Jack server. Try 'jack-diagnose'
ninja: build stopped: subcommand failed.
10:11:50 ninja failed with: exit status 1
I checked the log in ~/.jack-server/log/xxxx-0-0.log. It contains nothing about the error.
I used the curl command to connect to the server, and it says:
$ curl https://127.0.0.1:8076/jack
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 127.0.0.1:8076
I changed the script in prebuilts/sdk/tools/jack-admin to print $CURL_CODE; the same as my shell curl command, it reports error code 35.
This URL discusses a similar problem:
https://forums.gentoo.org/viewtopic-t-1060536-start-0.html
But I am not sure.
Here is the source script link which prompts the above error:
https://android-opengrok.bangnimang.net/android-8.1.0_r81/xref/prebuilts/sdk/tools/jack-admin?r=692a2a62#89
I had the same issue, and it was fixed by removing "TLSv1, TLSv1.1" from the jdk.tls.disabledAlgorithms configuration in the file /etc/java-8-openjdk/security/java.security.
I think that there is a good chance that it is this:
https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8202343
Basically, they have turned off default support for TLS 1.0 and 1.1, starting in 8u291. These versions of TLS are old, insecure and deprecated; see https://en.wikipedia.org/wiki/Transport_Layer_Security
This is mentioned in the 8u291 release notes.
My advice would be to find out why your build system is not using TLS 1.2 or later. Then upgrade / fix that.
You can test if this is the problem by running curl with the --tlsv1.2 option.
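To confirm what the installed JDK is actually disabling, a quick check of the JDK-wide security property is enough. A minimal sketch in Java; it just prints the same jdk.tls.disabledAlgorithms value that the /etc/java-8-openjdk/security/java.security workaround edits:

import java.security.Security;

public class TlsCheck {
    public static void main(String[] args) {
        // On 8u291 and later this list is expected to include TLSv1 and TLSv1.1
        // unless the security file has been edited.
        System.out.println(Security.getProperty("jdk.tls.disabledAlgorithms"));
    }
}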
removing "TLSv1, TLSv1.1" in jdk.tls.disabledAlgorithms configuration in file /etc/java-8-openjdk/security/java.security.
It work for me.
Ubuntu update jdk 8u292 background, so it hard related to jdk .
Firsty, Some info link to change Jack port , I had change Jack port but it doesnot work.
Secondly, I have try update ubuntu16.04.2 and ubuntu16.04.7. but error of "SSL error when connecting to the Jack server. Try 'jack-diagnose'" still occurs.
Thanks #Guillaume P a lot.

How to access H2O Flow when using Google Colab

Does anyone know how to access H2O Flow when using Google Colab?
My code is as follows:
!pip install H2O
import h2o
h2o.init(bind_to_localhost=False, log_dir="./")
from google.colab.output import eval_js
print(eval_js("google.colab.kernel.proxyPort(54321)"))
This code shows the following output:
Checking whether there is an H2O instance running at http://localhost:54321 ..... not found.
Attempting to start a local H2O server...
Java Version: openjdk version "11.0.10" 2021-01-19; OpenJDK Runtime Environment (build 11.0.10+9-Ubuntu-0ubuntu1.18.04); OpenJDK 64-Bit Server VM (build 11.0.10+9-Ubuntu-0ubuntu1.18.04, mixed mode, sharing)
Starting server from /usr/local/lib/python3.7/dist-packages/h2o/backend/bin/h2o.jar
Ice root: /tmp/tmp5mullu7m
JVM stdout: /tmp/tmp5mullu7m/h2o_unknownUser_started_from_python.out
JVM stderr: /tmp/tmp5mullu7m/h2o_unknownUser_started_from_python.err
Server is running at http://127.0.0.1:54321
Connecting to H2O server at http://127.0.0.1:54321 ... successful.
H2O_cluster_uptime: 02 secs
H2O_cluster_timezone: Etc/UTC
H2O_data_parsing_timezone: UTC
H2O_cluster_version: 3.32.1.1
H2O_cluster_version_age: 3 days
H2O_cluster_name: H2O_from_python_unknownUser_0ttq4b
H2O_cluster_total_nodes: 1
H2O_cluster_free_memory: 3.180 Gb
H2O_cluster_total_cores: 2
H2O_cluster_allowed_cores: 2
H2O_cluster_status: accepting new members, healthy
H2O_connection_url: http://127.0.0.1:54321
H2O_connection_proxy: {"http": null, "https": null}
H2O_internal_security: False
H2O_API_Extensions: Amazon S3, XGBoost, Algos, AutoML, Core V3, TargetEncoder, Core V4
Python_version: 3.7.10 final
https://0258qgrdz6tx-496ff2e9c6d22116-54321-colab.googleusercontent.com/
Clicking https://0258qgrdz6tx-496ff2e9c6d22116-54321-colab.googleusercontent.com/ returns an HTTP 500 error with a "Not Implemented" message instead of the H2O Flow (Web UI) page.
It seems that the message is returned by the Persist class.
You can use localtunnel to expose the port that H2O.ai runs on:
Install localtunnel:
!npm install -g localtunnel
Start localtunnel:
!lt --port 54321
Then you can navigate to the URL it returns and access the H2O Flow UI.

Not able to connect to Spark cluster via sparklyr package when my custom package method is invoked via OpenCPU

I have created an R package that makes use of sparklyr capabilities within a dummy hello function. My package does a very simple thing: it connects to a Spark cluster, prints the Spark version, and disconnects. The package cleans and builds successfully and is successfully executed from R and RStudio.
# Connect to Spark cluster
spark_conn <- sparklyr::spark_connect(master = "spark://elenipc.home:7077", spark_home = '/home/eleni/spark-2.2.0-bin-hadoop2.7/')
# Print the version of Spark
sv<- sparklyr::spark_version(spark_conn)
print(sv)
# Disconnect from Spark
sparklyr::spark_disconnect(spark_conn)
It is very important for me to be able to execute the hello function from the OpenCPU REST API. (I have used the OpenCPU API to execute many other custom packages.)
When invoking the OpenCPU API like this:
curl http://localhost/ocpu/user/rstudio/library/myFirstBigDataPackage/R/hello/print -X POST
I get the following response:
Failed while connecting to sparklyr to port (8880) for sessionid (89615): Gateway in port (8880) did not respond.
Path: /home/eleni/spark-2.2.0-bin-hadoop2.7/bin/spark-submit
Parameters: --class, sparklyr.Shell, '/home/rstudio/R/x86_64-pc-linux-gnu-library/3.4/sparklyr/java/sparklyr-2.2-2.11.jar', 8880, 89615
Log: /tmp/ocpu-temp/file26b165c92166_spark.log
---- Output Log ----
Error occurred during initialization of VM
Could not allocate metaspace: 1073741824 bytes
---- Error Log ----
In call:
force(code)
Of course, allocating more memory to both the Java and Spark executors does not resolve the issue. Permission issues are also ruled out, as I have already configured the /etc/apparmor.d/opencpu.d/custom file to give OpenCPU rwx privileges on Spark. It seems to be a connectivity issue that I do not know how to tackle. During method invocation via the OpenCPU API, the Spark logs do not even print anything.
For your information, my environment configuration is as follows:
java version "1.8.0_65"
R version 3.4.1
RStudio version 1.0.153
spark-2.2.0-bin-hadoop2.7
opencpu 1.5 (compatible with my Ubuntu 14.04.3 LTS)
Thank you very much for your support and time!

ElasticSearch Java Transport Client NoNodeAvailableException on Ubuntu 14.04

I am running Elasticsearch v. 2.3.2, using Java 7. Following is the printout from curl http://172.31.11.83:9200:
{
  "name" : "ip-172-31-11-83",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.3.2",
    "build_hash" : "b9e4a6acad4008027e4038f6abed7f7dba346f94",
    "build_timestamp" : "2016-04-21T16:03:47Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}
... and I am using the following in my Java code:
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>2.3.2</version>
</dependency>
I have ports 9200 and 9300 open in my firewall rules for my ES server, and can successfully execute said Java code from my laptop (Mac OSX). Following is the code snippet that starts off the process (this works fine):
Settings settings = Settings.settingsBuilder()
        .put("cluster.name", "elasticsearch").build();
esClient = TransportClient.builder().settings(settings).build()
        .addTransportAddress(new InetSocketTransportAddress(
                new InetSocketAddress(InetAddress.getByName("172.31.11.83"), 9300)));
Then later, I try to issue an index request (this fails when I run the code on Ubuntu 14.04):
adminClient = esClient.admin().indices();
IndicesExistsResponse response = adminClient.exists(request).actionGet();
My elasticsearch.yml file contains the following network settings:
network.bind_host: 0
network.publish_host: 172.31.11.83
transport.tcp.port: 9300
http.port: 9200
I have also tried with network.bind_host: 172.31.11.83 to no avail. Using curl, I can get to port 9200 from all machines. The cluster name reported by curl is "elasticsearch".
When I start ES, I see the following in the elasticsearch.log:
publish_address {172.31.11.83:9300}, bound_addresses {[::]:9300}
And yet, the exception I get is as follows:
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{172.31.11.83}{172.31.11.83:9300}]]
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:290)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:207)
at org.elasticsearch.client.transport.support.TransportProxyClient.execute(TransportProxyClient.java:55)
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:283)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:347)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:336)
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1178)
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.exists(AbstractClient.java:1198)
Again, this exact code works from my local machine. Any thoughts?
Having an identical issue:
Upgraded Elastic from 1.7 to 2.3.2 on the same AWS kit
Ubuntu 14.04
Elastic binding transport on 9300 as before
Security group has the port open (not changed)
Now remote clients cannot connect via the transport layer - same error as above.
The only thing that has changed in my setup is the version of Elasticsearch.
OK, I solved this. It appears 2.3.2 doesn't default the TCP bind in the same way as 1.7.0: Elasticsearch 2.x binds only to localhost by default, whereas 1.x bound to a non-loopback address.
I had to set this in my elasticsearch.yml:
network.bind_host: {AWS private IP address}
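As a side note, in 2.x the single setting network.host covers both the bind and publish addresses, so an equivalent elasticsearch.yml sketch (assuming the same private IP as above) would be:
network.host: 172.31.11.83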

Unable to deploy in Netbeans 6.7.1 and Glassfish v2.1.1

I am trying to deploy a simple web service in NetBeans 6.7.1 and GlassFish v2.1.1 and am getting the following error. I am using GlassFish ESB v2.2 on a Windows 7 machine. I have tried googling and implemented the things shown in http://forums.netbeans.org/topic10055-0-asc-0.html, but I am still unable to deploy. Though the message says that the application server is not started, from the Server tab I can see a message indicating that GlassFish has started. Also, doing a netstat after trying to deploy returns the following, which means that GlassFish is running.
C:>netstat -an | findstr "4848"
TCP 0.0.0.0:4848 0.0.0.0:0 LISTENING
I have been trying real hard to get this resolved. Any help is highly appreciated.
Error Message :
The Sun Java System Application Server could not start.
More information about the cause is in the Server log file.
Possible reasons include:
- IDE timeout: refresh the server node to see if it's running now.
- Port conflicts. (use netstat -a to detect possible port numbers already used by the operating system.)
- Incorrect server configuration (domain.xml to be corrected manually)
- Corrupted Deployed Applications preventing the server to start.(This can be seen in the server.log file. In this case, domain.xml needs to be modified).
- Invalid installation location.
C:\Users\xyz\Documents\NetBeansProjects\HWebService\nbproject\build-impl.xml:564: Deployment error:
The Sun Java System Application Server could not start.
More information about the cause is in the Server log file.
Possible reasons include:
- IDE timeout: refresh the server node to see if it's running now.
- Port conflicts. (use netstat -a to detect possible port numbers already used by the operating system.)
- Incorrect server configuration (domain.xml to be corrected manually)
- Corrupted Deployed Applications preventing the server to start.(This can be seen in the server.log file. In this case, domain.xml needs to be modified).
- Invalid installation location.
See the server log for details.
BUILD FAILED (total time: 29 seconds)
I've described the solution; please visit here:
http://forums.netbeans.org/post-65058.html
