I have an issue packing the bazel remote worker into a deployable jar.
I ran the following command:
bazel build //src/tools/remote_worker:remote_worker_deploy.jar
But when I try to run the jar I get this error:
➜ bazel git:(master) ✗ java -jar remote_worker_deploy.jar --work_path=/tmp/test --listen_port=3030
*** Initializing in-memory cache server.
*** Not using remote cache. This should be used for testing only!
Exception in thread "main" java.lang.UnsatisfiedLinkError: no unix in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
at java.lang.Runtime.loadLibrary0(Runtime.java:870)
at java.lang.System.loadLibrary(System.java:1122)
at com.google.devtools.build.lib.UnixJniLoader.loadJni(UnixJniLoader.java:28)
at com.google.devtools.build.lib.unix.NativePosixFiles.<clinit>(NativePosixFiles.java:136)
at com.google.devtools.build.lib.unix.UnixFileSystem.createDirectory(UnixFileSystem.java:309)
at com.google.devtools.build.lib.vfs.Path.createDirectory(Path.java:829)
at com.google.devtools.build.lib.vfs.FileSystemUtils.createDirectoryAndParentsWithCache(FileSystemUtils.java:692)
at com.google.devtools.build.lib.vfs.FileSystemUtils.createDirectoryAndParents(FileSystemUtils.java:652)
at com.google.devtools.build.remote.RemoteWorker.<init>(RemoteWorker.java:114)
at com.google.devtools.build.remote.RemoteWorker.main(RemoteWorker.java:621)
The only way I can start it is by running the executable from bazel-bin:
bazel-bin/src/tools/remote_worker/remote_worker --work_path=/tmp/test --listen_port=3030
I'm running the latest Bazel (currently a3e26835890a543ff84cce90c879f9196ae06348) on macOS Sierra.
I tried it with both oracle-jdk-1.8.131 and openjdk-1.8.91, and it behaved the same.
The end goal is to create a Docker image that runs this jar, but even inside the openjdk:8 image the jar behaves the same...
Apparently we're not packing the native code into the deploy jar. I'd actually prefer to refactor the RemoteWorker to avoid most of Bazel's internal libraries, although that's unlikely to happen soon. You could ship the libunix.so with the deploy jar and set java.library.path appropriately, as sketched below. Alternatively, you can take the entire runfiles tree after building the remote worker (bazel-bin/src/tools/remote_worker/remote_worker.runfiles/).
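A minimal sketch of the libunix.so approach (the library's build location, and even its exact name on macOS, are assumptions; check what Bazel actually produced on your platform):
# Hypothetical path: put the JNI library next to the jar and point the JVM at it.
cp bazel-bin/src/main/native/libunix.so /opt/remote-worker/
java -Djava.library.path=/opt/remote-worker \
  -jar remote_worker_deploy.jar --work_path=/tmp/test --listen_port=3030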
Since the question was asked, the paths in the Bazel source tree have changed. Nowadays the build commands to get the _deploy.jar are as follows:
bazel build src/tools/remote:worker_deploy.jar
mkdir -p /tmp/cas /tmp/cache /tmp/work
java -jar bazel-bin/src/tools/remote/worker_deploy.jar \
--cas_path /tmp/cas --disk_cache /tmp/cache --work_path /tmp/work
bazel build --spawn_strategy=remote \
--remote_cache=grpc://${IP}:8080 --remote_executor=grpc://${IP}:8080
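For a quick local smoke test you can point ${IP} at the loopback address (the target name here is just a placeholder; 8080 matches the cache/executor endpoints used above):
# Worker and client on the same machine.
export IP=127.0.0.1
bazel build --spawn_strategy=remote \
  --remote_cache=grpc://${IP}:8080 --remote_executor=grpc://${IP}:8080 //your:target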
Related
Raspberry Pi 2B, running Debian Jessie, with Java version 1.8.0_65.
Downloaded the latest NukkitX from https://nukkitx.com.
Followed installation instructions at https://github.com/IntellectualCrafters/PlotSquared/wiki/Installation.
Plugins I have installed:
Plot Squared 18.07.21-aaa7088-2022
FastAsyncWorldEdit 18.07.21-a00345f-1159-20.4.0
DbLib 0.2.3
Error I am encountering:
java.lang.UnsatisfiedLinkError: org.sqlite.core.NativeDB._open(Ljava/lang/String;I)V
Stack trace: https://pastebin.com/C3DrUm0Q.
Full server log: https://pastebin.com/2iuvQmbC.
As you can see, it says that PlotSquared has been loaded, but none of the plot commands are available; it just says "unknown command" when I type one. I have tried several different versions of all of the plugins, and a couple of previous versions of NukkitX, and all have the same problem. I'm thinking it's something about my device, but I'm still pretty new to Linux and am not sure what to try next. Any suggestions would be amazing!
EDIT: I downloaded the driver from https://github.com/xerial/sqlite-jdbc and added it to the classpath when starting the NukkitX jar. This didn't fix the problem. Here is the .sh file that starts the nukkit jar:
#!/bin/sh
echo $USER
java -Xms1G -Xmx1G -cp ".;sqlite-jdbc-3.23.1.jar" -jar nukkit-1.0-SNAPSHOT.jar
I figured it out! For whatever reason, the DbLib sqlite driver apparently wasn't working. The solution was to remove DbLib (jar and folder) from the plugins folder, change the start.sh file (which I had created according to the installation instructions) to use a -classpath invocation instead of -jar, add the xerial sqlite driver to the classpath, and specify the main Nukkit class to execute, like so:
java -classpath nukkit-1.0-SNAPSHOT.jar:sqlite-jdbc-3.23.1.jar cn.nukkit.Nukkit
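For reference, the full revised start.sh would look something like this (a sketch; the key details are that -cp is silently ignored when -jar is present, and that the classpath separator on Linux is ':' rather than ';', which is why the earlier script had no effect):
#!/bin/sh
# Revised start.sh: no -jar flag, so -classpath is honored and the
# xerial sqlite driver is actually on the classpath.
java -Xms1G -Xmx1G -classpath nukkit-1.0-SNAPSHOT.jar:sqlite-jdbc-3.23.1.jar cn.nukkit.Nukkit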
Here's what I did:
Downloaded Apache ZooKeeper 3.4.6 (.tar file), extracted to C:\cygwin\home\user\zookeeper-3.4.6\
Ran ant at the root of the ZooKeeper folder (C:\cygwin\home\user\zookeeper-3.4.6)
Navigated to C:\cygwin\home\user\zookeeper-3.4.6\contrib\ZooInspector\
Ran ant again and got the following error:
Output:
Buildfile: C:\cygwin\home\Jean\zookeeper-3.4.6\contrib\ZooInspector\build.xml
BUILD FAILED
C:\cygwin\home\user\zookeeper-3.4.6\contrib\ZooInspector\build.xml:19: Cannot find C:\cygwin\home\user\zookeeper-3.4.6\contrib\build-contrib.xml imported from C:\cygwin\home\user\zookeeper-3.4.6\contrib\ZooInspector\build.xml
Total time: 0 seconds
This leaves me with no .cmd or .sh file to execute. Why isn't the build-contrib.xml file there?
Also, I noticed that there seems to be an already-compiled ZooInspector jar, zookeeper-3.4.6-ZooInspector.jar. However, attempting to run it with the following command also fails:
$ java -cp zookeeper-3.4.6-ZooInspector.jar:lib/* org.apache.zookeeper.inspector.ZooInspector
Error: Could not find or load main class org.apache.zookeeper.inspector.ZooInspector
This is a bit frustrating -- setting up the ZooKeeper server was straightforward but for some reason I just can't figure out how to run this standalone GUI. What am I missing?
For Windows:
@echo off
set cp="./*;./lib/*;../../*;../../lib/*"
java -cp %cp% org.apache.zookeeper.inspector.ZooInspector
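The same launcher on Linux/macOS, as a sketch (the only differences are the ':' classpath separator and the shell syntax):
#!/bin/sh
# Unix equivalent of the batch file above; run it from the same directory.
# Quoting keeps the shell from expanding the wildcards, so java expands them.
cp="./*:./lib/*:../../*:../../lib/*"
java -cp "$cp" org.apache.zookeeper.inspector.ZooInspector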
ZooInspector 3.4.6 (the one bundled with ZooKeeper 3.4.6) doesn't seem to be able to connect to a running ZooKeeper instance on Windows.
Better to use zkui:
https://github.com/echoma/zkui/wiki/Download
ZooInspector just needs three libraries and one jar to load the main class.
The main class lives in zookeeper-3.3.0-ZooInspector.jar, and it needs jtoaster-1.0.4.jar, zookeeper-3.3.0.jar, and log4j-1.2.15.jar.
After downloading the tar.gz file from the Apache servers, untar it and build with ant, then copy zookeeper-3.3.0.jar and log4j-1.2.15.jar to contrib/ZooInspector/lib/. Finally, cd to contrib/ZooInspector and launch this command (note that -cp is ignored when -jar is used, so the jar and its libraries all go on the classpath and the main class is named explicitly):
java -cp "zookeeper-3.3.0-ZooInspector.jar:lib/*" org.apache.zookeeper.inspector.ZooInspector
I met the same issue today and created a pre-compiled version, which should work on Windows as well. You can find the details here:
https://www.admon.org/scripts/zooinspector-zookeeper-graphic-interface/
I'm trying to run Hive locally on OS X Mountain Lion, following the instructions here:
https://github.com/twitter/hadoop-lzo
I've compiled the native OS X libraries and the jar, but I'm not sure how I'm supposed to launch Hive locally so that Hive/Hadoop uses the native libraries.
I've tried including it via the JAVA_LIBRARY_PATH environment variable, but I think that only applies to Hadoop in general.
export JAVA_LIBRARY_PATH="${SCRIPTS_DIR}/jars/native/Mac_OS_X-x86_64-64"
When I run Hive using the LzopCodec, e.g.:
SET mapred.output.compression.codec = com.hadoop.compression.lzo.LzopCodec;
I get the following error when I run a query that runs a map/reduce job:
SELECT COUNT(*) from test_table;
Job running in-process (local Hadoop)
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: native-lzo library not available
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:237)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:477)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:525)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at org.apache.hadoop.hive.ql.exec.GroupByOperator.forward(GroupByOperator.java:959)
at org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:995)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:557)
at org.apache.hadoop.hive.ql.exec.ExecReducer.close(ExecReducer.java:303)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:530)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:421)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:262)
Caused by: java.lang.RuntimeException: native-lzo library not available
at com.hadoop.compression.lzo.LzoCodec.getCompressorType(LzoCodec.java:155)
at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:100)
at com.hadoop.compression.lzo.LzopCodec.getCompressor(LzopCodec.java:135)
at com.hadoop.compression.lzo.LzopCodec.createOutputStream(LzopCodec.java:70)
at org.apache.hadoop.hive.ql.exec.Utilities.createCompressedStream(Utilities.java:868)
at org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat.getHiveRecordWriter(HiveIgnoreKeyTextOutputFormat.java:80)
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:246)
at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:234)
... 14 more
I've also tried setting LD_LIBRARY_PATH via mapred.child.env in a Hive script (no luck):
SET mapred.child.env="LD_LIBRARY_PATH=../../scripts/jars/native/Mac_OS_X-x86_64-64";
Re-reading the instructions, which are actually quite clear:
How do I configure Hadoop to use these classes?
# Copy the native library
tar -cBf - -C build/hadoop-gpl-compression-0.1.0-dev/lib/native . | tar -xBvf - -C /path/to/hadoop/dist/lib/native
Basically I just needed to copy the built native library into my Hadoop installation:
ant compile-native tar
cp -r build/hadoop-lzo-0.4.17-SNAPSHOT/lib/native/Mac_OS_X-x86_64-64 /usr/local/Cellar/hadoop/1.1.2/libexec/lib/native/
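To confirm the codec is picked up after the copy, the original failing query can be re-run non-interactively (a quick check; table name as in the question):
hive -e "SET mapred.output.compression.codec=com.hadoop.compression.lzo.LzopCodec; SELECT COUNT(*) FROM test_table;"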
Related to: Executing rake tasks on an exploded war on tomcat without jruby being installed
I'm trying to run rake tasks on my Tomcat server, which doesn't have JRuby installed. I'm using Warbler to create a war file.
Using the answer to the linked question, I ran:
java -cp lib/jruby-core*.jar:lib/jruby-stdlib*.jar org.jruby.Main -S rake -T
This gets me the error:
Exception in thread "main" java.lang.NoClassDefFoundError: org/jruby/Main
Caused by: java.lang.ClassNotFoundException: org.jruby.Main
at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
ls lib gets me:
gems-gems-activerecord-jdbc-adapter-1.2.2-lib-arjdbc-jdbc-adapter_java.jar
gems-gems-jdbc-sqlite3-3.7.2-lib-sqlite-jdbc-3.7.2.jar
gems-gems-jruby-jars-1.6.8-lib-jruby-core-1.6.8.jar
gems-gems-jruby-jars-1.6.8-lib-jruby-stdlib-1.6.8.jar
gems-gems-jruby-rack-1.1.10-lib-jruby-rack-1.1.10.jar
gems-gems-json-1.7.5-java-lib-json-ext-generator.jar
gems-gems-json-1.7.5-java-lib-json-ext-parser.jar
gems-gems-therubyrhino_jar-1.7.4-jar-rhino-1.7R4.jar
gems-gems-warbler-1.3.6-lib-warbler_jar.jar
jruby-core-1.6.8.jar
jruby-rack-1.1.10.jar
jruby-stdlib-1.6.8.jar
ojdbc6.jar
Opening up the jruby-core-1.6.8.jar file, I can see that there is indeed an org/jruby/Main.class file.
As one can see from the file listing, there is no jruby-complete jar file, so I can't run the command from https://stackoverflow.com/a/9982556/684934
What am I doing wrong, and is there by now a better way to do this?
I was working on a similar problem two months ago, so things may have changed, but I needed to include all the jars on my classpath, use bin stubs, and set GEM_HOME to get everything working. There may have been a simpler way, but the posts you referenced didn't work for me either.
I actually had JRuby installed (though I was only using it to build the concatenated classpath), so my setup was something like:
cd /path/to/application/
export GEM_HOME=/path/to/application/gems
export CLASSPATH=$(jruby -e 'puts Dir["lib/*.jar"].join(":")')
RAILS_ENV=production java -cp $CLASSPATH org.jruby.Main bin/rake -T
Also useful: the jruby-jars gem can be included in your Gemfile to pin the version of JRuby that Warbler bundles (I was using gem 'jruby-jars', '1.7.0.preview2', as 1.7.0 wasn't released yet).
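If JRuby isn't available on the machine at all, the same classpath can be assembled in plain shell instead. A sketch, assuming all the Warbler-unpacked jars live under lib/ as in the question's listing:
# Build the colon-separated classpath without JRuby, then invoke rake
# through org.jruby.Main exactly as above. The trailing ':' only adds
# the current directory to the classpath, which is harmless here.
export GEM_HOME=/path/to/application/gems
CLASSPATH=$(printf '%s:' lib/*.jar)
RAILS_ENV=production java -cp "$CLASSPATH" org.jruby.Main bin/rake -T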
My Java development is done on a Windows machine, and I run my processes on a CentOS machine.
I have a bash script that builds all my jars and SCPs them to the CentOS machine. I run this bash script in Cygwin (java -version is 1.5.0_12), but when I try to run the process on the CentOS machine, the JVM can't open the jars. Also, running jar -tf throws:
java.util.zip.ZipException: error in opening zip file
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:114)
at java.util.zip.ZipFile.<init>(ZipFile.java:75)
at sun.tools.jar.Main.list(Main.java:979)
at sun.tools.jar.Main.run(Main.java:224)
at sun.tools.jar.Main.main(Main.java:1149)
So the only way I can deploy is by running mvn commands in cmd.exe to build my jars and then copying everything over with WinSCP (that way the jars give me no problems).
Is there any known problem running mvn in Cygwin?
(Running mvn -version returned:
Apache Maven 2.2.1 (r801777; 2009-08-06 22:16:01+0300)
Java version: 1.6.0_26)
Thank you.
Solved it.
I found the solution in the question "cygwin sets file permission to 000".
Edit /etc/fstab and add this line at the end of the file:
none /cygdrive cygdrive binary,noacl,posix=0,user 0 0
Then close all Cygwin processes, open a new terminal, and run ls -l on your files again.
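To verify that the remount took effect (the project path here is a placeholder):
# The /cygdrive mounts should now list the noacl option, and the jars
# should no longer show 000 permissions.
mount | grep cygdrive
ls -l /cygdrive/c/path/to/project/*.jar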
Maven, being a Java application, runs the same whether launched via a Cygwin script or cmd.exe; the Java executable is the same tool in both cases.
First, you might want to post the copy command you are using in the bash script. Second, have you checked the permissions on the jar files once they are pushed to the CentOS box? Are the files actually readable by the process owner when sent via your bash script, and are the owners/permissions the same as when they are copied using WinSCP? A quick way to check is sketched below.
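For example, on the CentOS side (myapp.jar is a placeholder for one of your jars):
# Compare a checksum against the Windows-side copy to rule out corruption,
# then confirm the jar is readable and lists cleanly.
md5sum myapp.jar
ls -l myapp.jar
jar -tf myapp.jar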