ZooKeeperBindException when starting MiniAccumuloCluster - java

I'm attempting to start up a MiniAccumuloCluster for testing, as described in the Accumulo docs:
TemporaryFolder folder = new TemporaryFolder();
File tempDir = folder.newFolder("AccumuloTempFolder");
MiniAccumuloCluster accumulo = new MiniAccumuloCluster(tempDir, "password");
accumulo.start();
Instance instance = new ZooKeeperInstance(accumulo.getInstanceName(), accumulo.getZooKeepers());
Connector conn = instance.getConnector("root", new PasswordToken("password"));
When calling accumulo.start(), a ZooKeeperBindException is thrown because "Zookeeper did not start within 20 seconds." Documentation and usage notes for MiniAccumuloCluster seem sparse. Can anyone help me understand what's going wrong here? I assumed that all of the ZooKeeper configuration was being handled under the covers of MiniAccumuloCluster, so I'm not even sure where to start looking for a solution.
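For reference, here is a self-contained sketch of the same flow with the imports the snippet needs. The class name, the explicit folder.create() call (TemporaryFolder throws IllegalStateException from newFolder() when it is not used as a JUnit @Rule and create() was never called), and the try/finally cleanup are additions for illustration, not part of the original question:

import java.io.File;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.Instance;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.minicluster.MiniAccumuloCluster;
import org.junit.rules.TemporaryFolder;

public class MiniAccumuloExample {
    public static void main(String[] args) throws Exception {
        TemporaryFolder folder = new TemporaryFolder();
        folder.create(); // required when TemporaryFolder is not used as a JUnit @Rule
        File tempDir = folder.newFolder("AccumuloTempFolder");
        MiniAccumuloCluster accumulo = new MiniAccumuloCluster(tempDir, "password");
        accumulo.start(); // spawns ZooKeeper and tablet server sub-processes
        try {
            Instance instance = new ZooKeeperInstance(accumulo.getInstanceName(), accumulo.getZooKeepers());
            Connector conn = instance.getConnector("root", new PasswordToken("password"));
            System.out.println("Connected to instance " + instance.getInstanceName());
        } finally {
            accumulo.stop(); // releases the ephemeral ZooKeeper port
            folder.delete();
        }
    }
}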

I ended up running a Docker container for the mini Fluo and mini Accumulo; it must have been something in my local environment causing the issue. The command below will print out the results or put a jar in the target directory of your local machine.
docker run -it --rm --name my-maven-project \
  -v "$(pwd)":/path/to/code/fluo-tour/src/main/java/ft \
  -w /path/to/code/fluo-tour/src/main/java/ft \
  maven:3.6.3-openjdk-8 mvn -q clean compile exec:java

Related

FSCrawler docker-compose NoSuchElementException

I'm trying to run FSCrawler via docker-compose, following the steps described in https://fscrawler.readthedocs.io/en/fscrawler-2.9/installation.html#using-docker-compose.
ELASTIC_VERSION = "7.17.8"
FSCRAWLER_VERSION = "2.9"
PWD = ""
I verified that Elasticsearch is running successfully.
On running docker-compose up fscrawler, I get the following exception.
You need to use the 2.10-SNAPSHOT, which is much more stable although the name does not say so ;)
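Assuming the .env file from the linked guide is what drives the compose file (the variable names below are the ones shown in the question), the fix should be a one-line version bump:

ELASTIC_VERSION=7.17.8
FSCRAWLER_VERSION=2.10-SNAPSHOT

and then re-run docker-compose up fscrawler.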

How to pass the -javaagent:/path/to/newrelic.jar parameter to the JVM running a HiveMetaStore server

I'm trying to get a New Relic Java agent to run in a Docker container, to monitor a HiveMetaStore server running in that container.
In order to get the New Relic agent started during startup of the JVM, I have to pass the -javaagent:/path/to/newrelic.jar flag to the JVM.
I tried:
hive --service metastore -javaagent /path/to/newrelic.jar
This failed with "Unrecognized Option" in the HiveMetaStore server code, where it should not have ended up at all.
The hive script invokes the bin/ext/metastore.sh script, which in turn invokes
exec $HADOOP jar $JAR $CLASS "$@"
So I tried to patch this invocation:
exec $HADOOP -javaagent /path/to/newrelic.jar jar $JAR $CLASS "$@"
This failed as well.
Then I took a deeper look at the hadoop script. Finally, the function hadoop_java_exec in libexec/hadoop-functions.sh invokes:
exec "${JAVA}" "-Dproc_${command}" ${HADOOP_OPTS} "${class}" "$#"
So I patched this code:
exec "${JAVA}" "-javaagent /path/to/newrelic.jar" "-Dproc_${command}" ${HADOOP_OPTS} "${class}" "$#"
This again failed.
Last but not least, I recognized that one can pass Java properties via HADOOP_OPTS (in libexec/hadoop-functions.sh):
function hadoop_finalize_hadoop_opts
{
hadoop_translate_cygwin_path HADOOP_LOG_DIR
hadoop_add_param HADOOP_OPTS hadoop.log.dir "-Dhadoop.log.dir=${HADOOP_LOG_DIR}"
hadoop_add_param HADOOP_OPTS hadoop.log.file "-Dhadoop.log.file=${HADOOP_LOGFILE}"
…
}
But I could not figure out how to pass -javaagent:/path/to/newrelic.jar using this mechanism.
Is there anyone out there who has tried this before and can help with that?
My apologies if this is a stupid question. Thanks upfront, Ute
function hadoop_finalize_hadoop_opts
{
hadoop_translate_cygwin_path HADOOP_LOG_DIR
hadoop_add_param HADOOP_OPTS hadoop.log.dir "-Dhadoop.log.dir=${HADOOP_LOG_DIR}"
hadoop_add_param HADOOP_OPTS hadoop.log.file "-Dhadoop.log.file=${HADOOP_LOGFILE}"
…
hadoop_add_param HADOOP_OPTS java.javaagent -javaagent:${NEWRELIC_AGENT_HOME}\/newrelic.jar
}
Adding the last statement got the agent started. I see in the container:
/usr/lib/jvm/default-jvm/bin/java -Dproc_jar -Dproc_metastore , … , NullAppender -javaagent:/opt/newrelic-agent-5.10.0/newrelic.jar org.apache.hadoop.util.RunJar /opt/apache-hive-3.1.2-bin/lib/hive-metastore-3.1.2.jar org.apache.hadoop.hive.metastore.HiveMetaStore
I don't understand the "NullAppender" part yet, but at least the agent seems to be running now.
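For comparison, a sketch of a less invasive route that avoids patching the Hadoop scripts (untested here; the agent path is the one from the log above). Since hadoop_java_exec expands ${HADOOP_OPTS} onto the final java command line, exporting the flag in the environment before starting the service should have the same effect:

# assumption: HADOOP_OPTS reaches the JVM unmodified via the hive/hadoop wrapper scripts
export HADOOP_OPTS="${HADOOP_OPTS} -javaagent:/opt/newrelic-agent-5.10.0/newrelic.jar"
hive --service metastore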

TypeError: 'JavaPackage' object is not callable (spark._jvm)

I'm setting up GeoSpark Python, and after installing all the prerequisites I'm running the very basic code examples to test it.
from pyspark.sql import SparkSession
from geo_pyspark.register import GeoSparkRegistrator
spark = SparkSession.builder.\
getOrCreate()
GeoSparkRegistrator.registerAll(spark)
df = spark.sql("""SELECT st_GeomFromWKT('POINT(6.0 52.0)') as geom""")
df.show()
I tried running it with python3 basic.py and spark-submit basic.py, both give me this error:
Traceback (most recent call last):
File "/home/jessica/Downloads/geo_pyspark/basic.py", line 8, in <module>
GeoSparkRegistrator.registerAll(spark)
File "/home/jessica/Downloads/geo_pyspark/geo_pyspark/register/geo_registrator.py", line 22, in registerAll
cls.register(spark)
File "/home/jessica/Downloads/geo_pyspark/geo_pyspark/register/geo_registrator.py", line 27, in register
spark._jvm. \
TypeError: 'JavaPackage' object is not callable
I'm using Java 8, Python 3, and Apache Spark 2.4; my JAVA_HOME is set correctly, and I'm running Linux Mint 19. My SPARK_HOME is also set:
$ printenv SPARK_HOME
/home/jessica/spark/
How can I fix this?
The jars for GeoSpark are not correctly registered with your Spark session. There are a few ways around this, ranging from a tad inconvenient to pretty seamless. For example, when you call spark-submit you can specify:
--jars jar1.jar,jar2.jar,jar3.jar
and the problem will go away; you can also provide a similar command to pyspark if that's your poison.
If, like me, you don't really want to be doing this every time you boot (and setting this as a .conf() in Jupyter will get tiresome), then instead you can go into $SPARK_HOME/conf/spark-defaults.conf and set:
spark.jars jar1.jar,jar2.jar,jar3.jar
which will then be loaded when you create a Spark instance. If you've not used the conf file before, it'll be there as spark-defaults.conf.template.
Of course, when I say jar1.jar..., what I really mean is something along the lines of:
/jars/geo_wrapper_2.11-0.3.0.jar,/jars/geospark-1.2.0.jar,/jars/geospark-sql_2.3-1.2.0.jar,/jars/geospark-viz_2.3-1.2.0.jar
but it's up to you to get the right ones from the geo_pyspark package.
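Putting that together, a full invocation might look like this (the jar paths are the illustrative ones above; basic.py is the script from the question):

spark-submit \
  --jars /jars/geo_wrapper_2.11-0.3.0.jar,/jars/geospark-1.2.0.jar,/jars/geospark-sql_2.3-1.2.0.jar,/jars/geospark-viz_2.3-1.2.0.jar \
  basic.py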
If you are using an EMR:
You need to set your cluster config json to
[
  {
    "classification": "spark-defaults",
    "properties": {
      "spark.jars": "/jars/geo_wrapper_2.11-0.3.0.jar,/jars/geospark-1.2.0.jar,/jars/geospark-sql_2.3-1.2.0.jar,/jars/geospark-viz_2.3-1.2.0.jar"
    },
    "configurations": []
  }
]
and also get your jars to upload as part of your bootstrap. You can do this from Maven but I just threw them on an S3 bucket:
#!/bin/bash
sudo mkdir /jars
sudo aws s3 cp s3://geospark-test-ds/bootstrap/geo_wrapper_2.11-0.3.0.jar /jars/
sudo aws s3 cp s3://geospark-test-ds/bootstrap/geospark-1.2.0.jar /jars/
sudo aws s3 cp s3://geospark-test-ds/bootstrap/geospark-sql_2.3-1.2.0.jar /jars/
sudo aws s3 cp s3://geospark-test-ds/bootstrap/geospark-viz_2.3-1.2.0.jar /jars/
If you are using an EMR Notebook:
You need a magic cell at the top of your notebook:
%%configure -f
{
  "jars": [
    "s3://geospark-test-ds/bootstrap/geo_wrapper_2.11-0.3.0.jar",
    "s3://geospark-test-ds/bootstrap/geospark-1.2.0.jar",
    "s3://geospark-test-ds/bootstrap/geospark-sql_2.3-1.2.0.jar",
    "s3://geospark-test-ds/bootstrap/geospark-viz_2.3-1.2.0.jar"
  ]
}
I was seeing a similar kind of issue with the SparkMeasure jars on a Windows 10 machine:
self.stagemetrics = self.sc._jvm.ch.cern.sparkmeasure.StageMetrics(self.sparksession._jsparkSession)
TypeError: 'JavaPackage' object is not callable
So what I did was:
Went to SPARK_HOME via the PySpark shell and installed the required jar:
bin/pyspark --packages ch.cern.sparkmeasure:spark-measure_2.12:0.16
Grabbed that jar (ch.cern.sparkmeasure_spark-measure_2.12-0.16.jar) and copied it into the jars folder of SPARK_HOME.
Reran the script, and now it worked without the above error.
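A possible alternative I haven't verified on Windows: pass the same Maven coordinates at submit time instead of copying the jar into SPARK_HOME (your_script.py is a placeholder):

spark-submit --packages ch.cern.sparkmeasure:spark-measure_2.12:0.16 your_script.py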

Elasticsearch: Could not find or load main class org.elasticsearch.tools.launchers.JavaVersionChecker

I'm using CentOS and have downloaded Elasticsearch 6.2.1. I created a new user "elastic" and when I run ./bin/elasticsearch I get the error:
Could not find or load main class org.elasticsearch.tools.launchers.JavaVersionChecker
I tried placing this user in an admin group ("wheel"), and the same problem occurred. If I try it with "sudo ./bin/elasticsearch" I get:
[2018-02-15T17:42:39,776][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-6.2.1.jar:6.2.1]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) ~[elasticsearch-6.2.1.jar:6.2.1]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.2.1.jar:6.2.1]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-6.2.1.jar:6.2.1]
at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.2.1.jar:6.2.1]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-6.2.1.jar:6.2.1]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) ~[elasticsearch-6.2.1.jar:6.2.1]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:105) ~[elasticsearch-6.2.1.jar:6.2.1]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.1.jar:6.2.1]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.1.jar:6.2.1]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:121) ~[elasticsearch-6.2.1.jar:6.2.1]
I searched a bit and saw that this error can be due to the Java version, but mine seems to be up to date:
[elastic#sandbox-hdp elasticsearch-6.1.1]$ sudo update-alternatives --config java
There are 3 programs which provide 'java'.
Selection Command
1 /usr/lib/jvm/jre-1.7.0-openjdk.x86_64/bin/java
*+ 2 /usr/lib/jvm/jre-1.8.0-openjdk.x86_64/bin/java
3 /usr/lib/jvm/jre-1.5.0-gcj/bin/java
This also happens if I try Elasticsearch 6.1
Any suggestions would be appreciated.
In case you still have this problem... or just for curiosity... take a look at the file ./bin/elasticsearch-env.sh; around line 70, look for the command:
"$JAVA" -cp "$ES_CLASSPATH" org.elasticsearch.tools.launchers.JavaVersionChecker
Check that it is correctly "spelled" for your environment; write this command just before it and see what you get:
echo "$JAVA" -cp "$ES_CLASSPATH" org.elasticsearch.tools.launchers.JavaVersionChecker
In my case... I was running in Git Bash on a Windows machine, and the command was pointing to:
/c/Users/Ualter/Developer/ELKStack/elasticsearch-6.2.2/lib/*
Changing to:
/Users/Ualter/Developer/ELKStack/elasticsearch-6.2.2/lib/*
As I was running on a Windows OS, that change got Elasticsearch up and running OK.
I was trying hard to use Elasticsearch 6.7.1 on Windows.
I had this exact error when running bin/elasticsearch.bat.
Long story short, there was probably an incompatibility between my Java version and Elasticsearch; it couldn't run the jars (I was using Java 8u211).
I installed the latest Elasticsearch 6.x from here:
https://www.elastic.co/downloads/past-releases#elasticsearch
unset the existing ES_HOME and ES_CLASSPATH environment variables that I had created, and all went well.
I also encountered this strange issue: the first time, it raised an exception about running Elasticsearch as root; after I added a new user elastic and ran it as elastic, I saw the exception "Could not find or load main class org.elasticsearch.tools.launchers.JavaVersionChecker".
I solved this problem by moving the program directories out of /root/; for example, I put them under /opt/elk/, and it works.
So I guess Elasticsearch may not allow running as root for security reasons, and thus we cannot put the program in the root user's directory. Hope this gives you some tips.
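A sketch of that setup as shell commands (the paths and tarball directory name are assumptions based on the versions mentioned above):

# run as root: create a dedicated user and move the install out of /root
useradd elastic
mkdir -p /opt/elk
mv /root/elasticsearch-6.2.1 /opt/elk/
chown -R elastic:elastic /opt/elk/elasticsearch-6.2.1
su - elastic -c "/opt/elk/elasticsearch-6.2.1/bin/elasticsearch"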
I was installing the Elasticsearch plugin es-sql on Windows.
In my case, running cmd in the administrator role fixed it.
Seems like you don't have the right permissions. Try setting them to 774 (rwxrwxr--) and check again:
sudo chmod 774 -R elasticsearch-6.3.2/

CCNx Java Code Help (ProcessBuilder)

Has anyone played around with the CCNx code from http://www.ccnx.org/?
I unzipped the project and loaded the .project file in the javasrc directory into Eclipse. The project builds with no errors.
I'm guessing I need to start the CCNDaemon (org.ccnx.ccn.impl.support.CCNDaemon), but I get an error at:
java.io.IOException: Cannot run program "../ccnd/agent/ccnd": error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
at org.ccnx.ccn.impl.support.CCNDaemon$CCNDWorkerThread.initialize(CCNDaemon.java:93)
at org.ccnx.ccn.impl.support.Daemon$WorkerThread.run(Daemon.java:125)
It looks like it's trying to build a new process with
private static final String DEFAULT_CCND_COMMAND_STRING = "../ccnd/agent/ccnd";
protected String _command = DEFAULT_CCND_COMMAND_STRING;
...
ProcessBuilder pb = new ProcessBuilder(_command);
I don't have the ccnd operating system process. Do I need to build the C++ code? Or is there some way to run this with pure java? Thanks for the help!
Based on my findings, it turns out that all CCN applications require a CCNx daemon (ccnd), which is implemented only in C right now. So you have to build the C code with all its dependencies. The Java code actually calls ccnd via the ProcessBuilder.
I wrote up a blog post about how I got it to work on Ubuntu... but it's basically:
C Source Dependencies:
sudo apt-get install git-core python-dev libssl-dev libpcap-dev libexpat1-dev athena-jot
Run:
./configure
Build CCN with:
make
Test with:
make test
Start the ccnd:
ccndstart
The blog post has more details.
If you add -start in the arguments block of the "Run configuration" dialog in Eclipse before you run the CCNDaemon, there should be no errors.
————————————————————————————————————————————
2011-10-5 19:49:39 org.ccnx.ccn.impl.support.Daemon startDaemon
INFO: Starting daemon with command line: java -Djava.library.path=.:/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java -cp /Users/thomas/Downloads/ccnx-0.4.1/javasrc/bin:/Applications/eclipse/plugins/org.junit_4.8.2.v4_8_2_v20110321-1705/junit.jar:/Applications/eclipse/plugins/org.hamcrest.core_1.1.0.v20090501071000.jar:/Users/thomas/Downloads/ccnx-0.4.1/javasrc/lib/bcprov-jdk16-143.jar:/Users/thomas/Downloads/ccnx-0.4.1/javasrc/lib/junit-4.3.1.jar:/Users/thomas/Downloads/ccnx-0.4.1/javasrc/lib/kxml2-2.3.0.jar org.ccnx.ccn.impl.support.CCNDaemon -daemon
Started daemon ccnd. PID 3127
2011-10-5 19:49:40 org.ccnx.ccn.impl.support.Daemon startDaemon
INFO: Started daemon ccnd. PID 3127
————————————————————————————————————————————
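For reference, that Eclipse run configuration corresponds to a command line along these lines (classpath abbreviated and illustrative, modeled on the log above; -start is the flag mentioned in the previous answer):

java -cp javasrc/bin:javasrc/lib/bcprov-jdk16-143.jar:javasrc/lib/kxml2-2.3.0.jar org.ccnx.ccn.impl.support.CCNDaemon -start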
