JLink does not produce a redistributable image

For a while now I've been working with modular projects, but because I was constrained to filename-based automatic modules, I never got a chance to work with the jlink tool to produce a redistributable application image. Today I opted to start an independent project that does not import any external dependencies, to avoid falling back on that compatibility mode. The project consists of 3 modules and is built with Maven, so I will only post the jlink command snippet I'm using.
Project for reference: https://gitlab.com/Dragas/edu-day-demo; check out the modules-full tag. The project is built with the package goal to avoid polluting your local .m2 repository, and it is already configured to pull its dependencies so that packaging and deployment are easier.
The command I used to generate the jlinked image was as follows:
jlink \
--module-path edu-day-runtime/target/dependency/:edu-day-runtime/target/ \
--add-modules ALL-MODULE-PATH \
--output edu-day-jlinked \
--launcher edurun=edu.day.runtime
Invoking the command does indeed generate a jlinked image, which contains the minimum required modules, Java libraries and JVM binaries needed to run the project. On the machine that built the image, invoking
edu-day-jlinked/bin/edurun 1 1
does run the project and outputs the following
Result of sum is 2
Meanwhile, attempting to run the same thing in a containerized environment (here I'm using bash:5, a non-Java image, to simulate an environment where Java is not installed) does not yield similar results. Instead, the shell does not seem to find a binary named java:
docker run -it -v "$(pwd)/edu-day-jlinked:/app" bash:5
...(in container)
bash-5.0# /app/bin/edurun 1 1
/app/bin/edurun: line 4: /app/bin/java: not found
Upon inspection, the folder does indeed contain the binary called java
bash-5.0# ls -la
total 52
drwxr-xr-x 2 1000 1000 4096 Aug 23 07:53 .
drwxr-xr-x 7 1000 1000 4096 Aug 23 07:53 ..
-rwxr-xr-x 1 1000 1000 116 Aug 23 07:53 edurun
-rwxr-xr-x 1 1000 1000 16688 Aug 23 07:53 java
-rwxr-xr-x 1 1000 1000 16712 Aug 23 07:53 keytool
But even invoking it directly (to show the help message) does not yield any results, besides the same message that the binary cannot be found
(in /app/bin/ folder)
bash-5.0# ./java
bash: ./java: No such file or directory
What is more interesting is that even the keytool binary returns the same error
(in /app/bin/ folder)
bash-5.0# ./keytool
bash: ./keytool: No such file or directory
This raises a question: what went wrong? I haven't yet delved deeply into how jlink works, but my speculation is that it copies the binaries from my own Java installation (OpenJDK 11.0.8+10 from the Arch repositories) and considers that to be redistributable. Or did I just miss some command-line options?

Your issue is that the test container (bash:5) does not provide the run-time linker that the Java environment was built against.
The binary produced by jlink will only run if a compatible Linux run-time linker is present on the system.
The purpose of the run-time linker is to prepare the binary for execution on the system; the default run-time linker is hard-coded into the binary when the executable is built. You can inspect it using a tool such as readelf -l, or ldd (ldd only works if it can find the run-time linker).
The default run-time linker for amd64 linux (e.g. ubuntu) is: /lib64/ld-linux-x86-64.so.2
The default run-time linker for i386 linux is: /lib/ld-linux.so.2
On a bash:5 container, the default run-time linker is: /lib/ld-musl-x86_64.so.1
This is not compatible with the run-time linker that the JDK binaries expect.
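For example, on a machine with binutils installed you can ask readelf which run-time linker a binary requests (the output below is illustrative of a glibc-built binary, not captured from the project above):
$ readelf -l edu-day-jlinked/bin/java | grep interpreter
      [Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]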
The error /app/bin/java: not found is reported because the run-time linker for the binary cannot be found. A quick-and-dirty test of a jlinked VM in a bash:5 container gives the same error.
Checking the run-time linker for the java binary I used:
$ docker run --rm -it -v $(pwd)/edu-day-jlinked64:/app -w /here bash:5 bash
bash-5.0# /app/bin/java
bash: /app/bin/java: No such file or directory
bash-5.0# strings -a /app/bin/java | grep '^/lib'
/lib64/ld-linux-x86-64.so.2
bash-5.0# ls -l /lib64/ld-linux-x86-64.so.2
ls: /lib64/ld-linux-x86-64.so.2: No such file or directory
Testing with the run-time linker that's on-board:
bash-5.0# /lib/ld-musl-x86_64.so.1 --list /app/bin/java
/lib64/ld-linux-x86-64.so.2 (0x7fe2852a3000)
libjli.so => /app/bin/../lib/libjli.so (0x7fe28528c000)
libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7fe2852a3000)
libz.so.1 => /lib/libz.so.1 (0x7fe285272000)
libdl.so.2 => /lib64/ld-linux-x86-64.so.2 (0x7fe2852a3000)
libpthread.so.0 => /lib64/ld-linux-x86-64.so.2 (0x7fe2852a3000)
Error relocating /app/bin/../lib/libjli.so: __snprintf_chk: symbol not found
Error relocating /app/bin/../lib/libjli.so: __vfprintf_chk: symbol not found
Error relocating /app/bin/../lib/libjli.so: __read_chk: symbol not found
Error relocating /app/bin/../lib/libjli.so: __memmove_chk: symbol not found
Error relocating /app/bin/../lib/libjli.so: __printf_chk: symbol not found
Error relocating /app/bin/../lib/libjli.so: __fprintf_chk: symbol not found
Error relocating /app/bin/../lib/libjli.so: __sprintf_chk: symbol not found
so it definitely won't work here.
Let's use something 'standard'. Since I built the jlinked app in an ubuntu:focal container with Java installed, let's test in an ubuntu:focal container that doesn't have Java built in:
$ docker run --rm -it -v $(pwd)/edu-day-jlinked64:/app -w /here ubuntu:focal bash
root@865c9c12c029:/here# /app/bin/java
Usage: java [options] <mainclass> [args...]
           (to execute a class)
   or  java [options] -jar <jarfile> [args...]
           (to execute a jar file)
   or  java [options] -m <module>[/<mainclass>] [args...]
       java [options] --module <module>[/<mainclass>] [args...]
           (to execute the main class in a module)
   or  java [options] <sourcefile> [args]
           (to execute a single source-file program)
so it will work in this case.
Reproducibility:
Built using:
$ docker run --rm -it -v $(pwd):/here -w /here ubuntu:focal bash
# apt-get update
# DEBIAN_FRONTEND=noninteractive apt-get install -y git openjdk-14-jdk maven
# git clone https://gitlab.com/Dragas/edu-day-demo .
# git checkout modules-full
# ./mvnw package
# rm -rf edu-day-runtime/target/classes
# jlink --module-path edu-day-runtime/target/dependency/:edu-day-runtime/target/ --add-modules ALL-MODULE-PATH --output edu-day-jlinked --launcher edurun=edu.day.runtime
# ./edu-day-jlinked/bin/edurun 1 1
Result of sum is 2
In an adjacent directory:
$ docker run --rm -it -v $(pwd)/edu-day-jlinked:/app -w /here bash:5 bash
bash-5.0# /app/bin/edurun 1 1
/app/bin/edurun: line 4: /app/bin/java: not found
In another directory:
$ docker run --rm -it -v $(pwd)/edu-day-jlinked:/app -w /here ubuntu:focal bash
root@200b4a98f9ee:/here# /app/bin/edurun 1 1
Result of sum is 2

TL;DR — The bash:5 image uses a C library that is binary incompatible with the C library that was linked with your edu-day-jlinked/bin/java executable.
The long version
„…This raises a question: what went wrong?…“
What's going wrong is that your app/bin/java binary is failing to find the C library it was originally linked against when you built your edu-day-jlinked image on your local machine.
The problem occurs because the java binary that jlink produced is linked to the GNU glibc library that your locally-installed JDK uses…
$ ldd edu-day-demo-modules-full/edu-day-jlinked/bin/java
…
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fa61a95b000)
…
Whereas the bash:5 image is based on a BusyBox/musl flavour of Linux (Alpine), which does not use glibc…
bash-5.0# ldd app/bin/java
…
libjli.so => app/bin/../lib/jli/libjli.so (0x7f572a16d000)
…
libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f572a19f000)
Error relocating app/bin/../lib/jli/libjli.so: __snprintf_chk: symbol not found
Error relocating app/bin/../lib/jli/libjli.so: __vfprintf_chk: symbol not found
Error relocating app/bin/../lib/jli/libjli.so: __read_chk: symbol not found
Error relocating app/bin/../lib/jli/libjli.so: __memmove_chk: symbol not found
Error relocating app/bin/../lib/jli/libjli.so: __printf_chk: symbol not found
Error relocating app/bin/../lib/jli/libjli.so: __fprintf_chk: symbol not found
Error relocating app/bin/../lib/jli/libjli.so: __sprintf_chk: symbol not found
It uses a different C library: musl…
bash-5.0# find / -name '*musl*'
/lib/libc.musl-x86_64.so.1
/lib/ld-musl-x86_64.so.1
It helps to understand the Linking process. And it also helps to bear in mind that JLink builds a custom executable for a specific environment.
Your trial run on your local machine worked, because jlink built the executable specifically for your local environment.
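A quick way to see which interpreter (run-time linker) a binary was built for is the file utility; the output below is illustrative and abridged, not captured from this project:
$ file edu-day-jlinked/bin/java
edu-day-jlinked/bin/java: ELF 64-bit LSB executable, x86-64, dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux ...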
The proposed solution
„…docker here is intended to simulate an environment which does not have java installed…“
Here is a Dockerfile that successfully builds your application and the resulting image „does not have java installed“…
FROM maven:3.6.1-jdk-13-alpine as build
WORKDIR /app
COPY pom.xml .
COPY edu-day-sum edu-day-sum
COPY edu-day-runtime edu-day-runtime
COPY edu-day-api edu-day-api
RUN mvn package && \
jlink \
--module-path ${JAVA_HOME}/jmods:edu-day-runtime/target/dependency/:edu-day-runtime/target/edu-day-runtime-1.0-SNAPSHOT.jar \
--add-modules ALL-MODULE-PATH \
--output edu-day-jlinked \
--launcher edurun=edu.day.runtime
FROM alpine:latest
COPY --from=build /app/edu-day-jlinked /app
ENTRYPOINT ["/app/bin/edurun"]
CMD ["1", "1"]
Docker best practice advises: „Use multi-stage builds“ (like in the above Dockerfile) when your aim is to build „an environment which does not have java installed“.
The FROM maven:3.6.1-jdk-13-alpine stage of the multi-stage build uses an Alpine Linux image that has both Maven and a JDK specifically built to be compatible with the Alpine distribution.
The FROM alpine:latest stage is a very small Linux distro that does not have Java on it. The maven:3.6.1-jdk-13-alpine layer is discarded, as the Docker best-practice docs say. The only Java in the resulting image is the one in /app/bin.
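Assuming that Dockerfile sits in the project root, building and running it would look something like this (the image tag edu-day is an arbitrary choice); the CMD ["1", "1"] default supplies the operands, and you can pass different ones after the image name:
$ docker build -t edu-day .
$ docker run --rm edu-day
Result of sum is 2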

Related

"Unable to run a simple JNI program" error message when installing rJava on R 3.6 for ubuntu bionic beaver

I have the very common problem that rJava does not install correctly on Ubuntu.
This problem has been discussed in multiple places: here, here, here, to name a few.
The basic problem is that on installing the rJava package, the following error message is produced
configure: error: Unable to run a simple JNI program. Make sure you have configured R with Java support (see R documentation) and check config.log for failure reason.
Warning in system(cmd) : error in running command
ERROR: configuration failed for package ‘rJava’
* removing ‘/home/jonno/R/x86_64-pc-linux-gnu-library/3.6/rJava’
There are various closely related solutions to this problem. Most of them use sudo R CMD javareconf to configure Java for R (there is also a -e variant). Some suggest setting the JAVA_HOME path in the environment variables (others say not to). Others suggest uninstalling and reinstalling R, while yet others suggest installing rJava from CRAN. Several recommend update-alternatives. There are further variants of these solutions.
I have tried combinations of all of the above, and have got nowhere, so am clearly doing something wrong.
Entering echo $JAVA_HOME returns
/usr/lib/jvm/java-11-openjdk-amd64
My /etc/environment looks like this:
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/$
MKL_THREADING_LAYER=GNU
JAVA_HOME="/usr/lib/jvm/java-11-openjdk-amd64"
When I run R CMD javareconf, it looks like this:
Java interpreter : /usr/lib/jvm/java-11-openjdk-amd64/java
Java version : 11.0.4
Java home path : /usr/lib/jvm/java-11-openjdk-amd64
Java compiler : /usr/lib/jvm/java-11-openjdk-amd64/bin/javac
Java headers gen.: /usr/bin/javah
Java archive tool: /usr/lib/jvm/java-11-openjdk-amd64/bin/jar
trying to compile and link a JNI program
detected JNI cpp flags : -I$(JAVA_HOME)/include -I$(JAVA_HOME)/include/linux
detected JNI linker flags : -L$(JAVA_HOME)/lib/server -ljvm
gcc -std=gnu99 -I"/usr/share/R/include" -DNDEBUG -I/usr/lib/jvm/java-11-openjdk-amd64/include -I/usr/lib/jvm/java-11-openjdk-amd64/include/linux -fpic -g -O2 -fdebug-prefix-map=/build/r-base-uuRxut/r-base-3.6.1=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -g -c conftest.c -o conftest.o
gcc -std=gnu99 -shared -L/usr/lib/R/lib -Wl,-Bsymbolic-functions -Wl,-z,relro -o conftest.so conftest.o -L/usr/lib/jvm/java-11-openjdk-amd64/lib/server -ljvm -L/usr/lib/R/lib -lR
JAVA_HOME : /usr/lib/jvm/java-11-openjdk-amd64
Java library path: $(JAVA_HOME)/lib/server
JNI cpp flags : -I$(JAVA_HOME)/include -I$(JAVA_HOME)/include/linux
JNI linker flags : -L$(JAVA_HOME)/lib/server -ljvm
Updating Java configuration in /usr/lib/R
Done.
What am I doing wrong and how do I get rJava to install properly?
EDIT:
Having managed to successfully install rJava using sudo apt-get install r-cran-rjava, I now get the following error:
Error: package or namespace load failed for ‘rJava’:
.onLoad failed in loadNamespace() for 'rJava', details:
call: dyn.load(file, DLLpath = DLLpath, ...)
error: unable to load shared object '/usr/lib/R/site-library/rJava/libs/rJava.so':
libjvm.so: cannot open shared object file: No such file or directory
I've investigated with the original poster (we work at the same place) and the problem is that in OpenJDK11 they moved around some of the .so files that the JVM lives in, specifically libjvm.so which in the Ubuntu package is now in /usr/lib/jvm/java-1.11.0-openjdk-amd64/lib/server/.
This means that even if you install the Ubuntu package for rJava with apt install r-cran-rjava it fails when you try to library(rJava).
The solution is to add /usr/lib/jvm/java-1.11.0-openjdk-amd64/lib/server/ to your $LD_LIBRARY_PATH by adding:
export LD_LIBRARY_PATH=/usr/lib/jvm/java-1.11.0-openjdk-amd64/lib/server:$LD_LIBRARY_PATH
to the end of your ~/.bashrc and starting a new shell (or source ~/.bashrc).
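To verify the fix before retrying library(rJava), you can check that the dynamic linker now resolves libjvm.so for rJava's shared object (the path is the one from the error message above):
ldd /usr/lib/R/site-library/rJava/libs/rJava.so | grep libjvm
With the export in place this should show libjvm.so resolving to /usr/lib/jvm/java-1.11.0-openjdk-amd64/lib/server/libjvm.so rather than "not found".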
This is something we had to fix for our central installs of OpenJDK e.g. here: https://github.com/UCL-RITS/rcps-buildscripts/blob/master/adoptopenjdk-11.0.3_install.sh#L46
If you want to make this work with Rstudio launched from Gnome, you need to add that directory to ldconfig.
As root (or with sudo), create a file in /etc/ld.so.conf.d/ with a .conf extension, e.g. java.conf, containing the line:
/usr/lib/jvm/java-1.11.0-openjdk-amd64/lib/server
And then as root run
ldconfig -v
This should add the directory to the locations searched by executables launched through GNOME. This particular part of the problem (GNOME ignoring settings in .bashrc) has been an issue in Ubuntu since at least 9.04 (https://bugs.launchpad.net/ubuntu/+source/xorg/+bug/366728/).
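For reference, those two ldconfig steps can be condensed to the following (run as root; the path is the one quoted in this answer):
echo '/usr/lib/jvm/java-1.11.0-openjdk-amd64/lib/server' > /etc/ld.so.conf.d/java.conf
ldconfig -v | grep libjvm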

Create Java runtime image on one platform for another using Jlink

I created a runtime image using jlink on my Linux machine, and I see a linux folder under the include folder. Does that mean I can use this runtime image only on the Linux platform? If so, are there any ways to create runtime images on one platform for another (e.g. on Linux for Windows and vice versa)?
The include directory is for header files, such as jni.h, that are needed when compiling C/C++ code that uses JNI and other native interfaces. It's nothing to do with jlink.
The jlink tool can create a run-time image for another platform (cross targeting). You need to download two JDKs to do this. One for the platform where you run jlink, the other for the target platform. Run jlink with --module-path $TARGET/jmods where $TARGET is the directory where you've unzipped the JDK for the target platform.
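As a hedged sketch of what that looks like (directory and module names here are hypothetical), producing a Linux run-time image from another platform:
jlink --module-path ./linux-jdk/jmods:mods \
    --add-modules my.app \
    --output myapp-linux
Here ./linux-jdk is the unpacked JDK for the target platform and mods holds your compiled modules.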
Being generally unable to add anything to Alan Bateman's answers in terms of information, I'll offer a working example. This example illustrates using jlink on Mac OS and then running the binary on Ubuntu in a Docker container.
The salient points are as follows.
Given two simple modules, we compile on Mac OS:
javac -d build/modules \
--module-source-path src \
`find src -name "*.java"`
jar --create --file=lib/net.codetojoy.db#1.0.jar \
-C build/modules/net.codetojoy.db .
jar --create --file=lib/net.codetojoy.service#1.0.jar \
-C build/modules/net.codetojoy.service .
Assuming that the Linux 64 JDK is unpacked in a local directory (specified as command-line arg), we call jlink (on Mac OS in this example). JAVA_HOME is the crux of the solution:
# $1 is ./jdk9_linux_64/jdk-9.0.1
JAVA_HOME=$1
rm -rf serviceapp
jlink --module-path $JAVA_HOME/jmods:build/modules \
--add-modules net.codetojoy.service \
--output serviceapp
Then, assuming we've pulled the ubuntu image for Docker, we can execute the following in a Docker terminal (i.e. Linux):
docker run --rm -v $(pwd):/data ubuntu /data/serviceapp/bin/java net.codetojoy.service.impl.UserServiceImpl
TRACER : hello from UserServiceImpl
To reiterate this feature of Java 9 / jlink: Linux does not have Java installed, and the Linux binary was built on Mac OS.
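Not part of the original example, but as a hedged extension: the same cross-targeting jlink invocation also accepts --launcher (the launcher name run below is arbitrary), which ties this back to the question at the top of the page and avoids spelling out the main class in the docker command:
jlink --module-path $JAVA_HOME/jmods:build/modules \
    --add-modules net.codetojoy.service \
    --launcher run=net.codetojoy.service/net.codetojoy.service.impl.UserServiceImpl \
    --output serviceapp
docker run --rm -v $(pwd):/data ubuntu /data/serviceapp/bin/run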

running bazel remote executor test on separate machines

The remote worker guide of bazel (here) explains how to start the remote-worker locally and then run bazel against it.
I tried it and it did indeed work (with bugs that I reported on GitHub).
Another attempt was to run the remote worker on a separate virtual machine, by running it inside a Docker container and running bazel against it. But it failed in a different way, and I think this time I'm using it wrong.
Here's my docker file:
FROM openjdk:8
# install release bazel from apt
RUN echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | tee /etc/apt/sources.list.d/bazel.list
RUN curl https://bazel.build/bazel-release.pub.gpg | apt-key add -
RUN apt-get update && apt-get install -y zip bazel
# compile dev bazel from sources
RUN mkdir -p /usr/src/bazel
# "bazel" has the latest development code of bazel from github
COPY bazel /usr/src/bazel
WORKDIR /usr/src/bazel
RUN bazel build src/bazel
# compile remote_worker using latest development bazel
RUN bazel-bin/src/bazel build //src/tools/remote_worker
# prepare cache folder
RUN mkdir -p /tmp/test
# Run remote-worker
CMD ["bazel-bin/src/tools/remote_worker/remote_worker","--work_path=/tmp/test","--listen_port=3030"]
After building it, I simply ran the Docker image, binding the port to localhost:
$ docker build -t bazel-worker .
$ docker run -p 3030:3030 bazel-worker
Then I ran a bazel Java test against the remote worker:
(You can check out my test repo here.)
$ bazel --host_jvm_args=-Dbazel.DigestFunction=SHA1 test \
--spawn_strategy=remote \
--remote_executor=localhost:3030 \
--remote_cache=localhost:3030 \
--strategy=Javac=remote \
--remote_local_fallback=false \
--remote_timeout=600 \
//src/main/java/com/example/...
But I got this weird error message:
____Loading package: src/main/java/com/example
____Loading package: @bazel_tools//tools/cpp
____Loading package: @local_jdk//
____Loading package: @local_config_xcode//
____Loading package: @local_config_cc//
____Loading complete. Analyzing...
____Loading package: tools/defaults
____Loading package: @bazel_tools//third_party/java/jdk/langtools
____Loading package: @junit//jar
____Found 1 test target...
____Building...
____[0 / 2] BazelWorkspaceStatusAction stable-status.txt
____[2 / 4] Creating source manifest for //src/main/java/com/example:my_test
____From Extracting interface @junit//jar:jar:
/tmp/test/build-80057300-ffd2-49ea-a20b-3f234d9963db/external/bazel_tools/tools/jdk/ijar/ijar: 1: /tmp/test/build-80057300-ffd2-49ea-a20b-3f234d9963db/external/bazel_tools/tools/jdk/ijar/ijar: �����0��!H__PAGEZEROx__TEXTpp__text__TEXT/��__stubs__TEXT0p�__stub_helper__TEXT���__gcc_except_tab__TEXT�: not found
/tmp/test/build-80057300-ffd2-49ea-a20b-3f234d9963db/external/bazel_tools/tools/jdk/ijar/ijar: 2: /tmp/test/build-80057300-ffd2-49ea-a20b-3f234d9963db/external/bazel_tools/tools/jdk/ijar/ijar: Syntax error: word unexpected (expecting ")")
ERROR: /private/var/tmp/_bazel_ors/719f891d5db9fd5e73ade25b0c847fd1/external/junit/jar/BUILD.bazel:2:1: output 'external/junit/jar/_ijar/jar/external/junit/jar/junit-4.12-ijar.jar' was not created.
ERROR: /private/var/tmp/_bazel_ors/719f891d5db9fd5e73ade25b0c847fd1/external/junit/jar/BUILD.bazel:2:1: not all outputs were created or valid.
____Building complete.
Target //src/main/java/com/example:my_test failed to build
Use --verbose_failures to see the command lines of failed build steps.
____Elapsed time: 13.614s, Critical Path: 0.21s
Am I doing anything wrong? Do I need to run it differently when running the remote worker on an actual (or virtual) remote machine (vs. just running it locally)?
Important to mention: my machine is Mac OS X Sierra, and I believe the docker openjdk:8 image is Ubuntu-based. I'm running a locally built development version of bazel (sha 956810b6ee24289e457a4b8d0a84ff56eb32c264).
Running the remote worker on a different architecture / OS combination than Bazel itself isn't working yet. We still have a couple of places in Bazel where we inspect the local machine - they were added as temporary measures, but haven't been fixed yet.
Edit: It may work in some cases, especially for platform-independent code (e.g., Java or Scala).
If your build is test-heavy, you could try only running tests remotely with --test_strategy=remote; I'm not sure if the default Jvm configuration will work, though.
If you want to run the entire build remotely, then you need to tell Bazel what kind of machines / OS it's executing on. Right now, that'd require setting --host_cpu, and probably --crosstool_top / --host_crosstool_top to configure a C++ compiler for that platform.
Also, some combinations of platforms are more likely to work than others. In particular, combining macOS and Linux, or different flavors of Linux, is much more likely to work than Windows in any combination.
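As a hedged sketch of the question's invocation with those suggestions applied (the --host_cpu value k8, i.e. 64-bit Linux, is an assumption for this setup, and a C++-heavy build would still need the --crosstool_top flags mentioned above):
bazel --host_jvm_args=-Dbazel.DigestFunction=SHA1 test \
    --spawn_strategy=remote \
    --remote_executor=localhost:3030 \
    --remote_cache=localhost:3030 \
    --strategy=Javac=remote \
    --test_strategy=remote \
    --host_cpu=k8 \
    //src/main/java/com/example/...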

Unable to build java google cloud debugger on ubuntu server on virtualbox

I'm trying to build the java google cloud debugger on Ubuntu 15.10 Server (guest) running on Virtual Box 5.0.14 on Mac OS X El Capitan (host).
I'm following the build instructions from cloud-debug-java
After installing cmake, build-essential, Oracle Java 8, Maven 3, etc., I also had to make the following changes to src/agent/Makefile before running ./build.sh:
Changed the /path/to/java/ to /usr/lib/jvm/java-8-oracle/
Added this include: -I/usr/lib/jvm/java-8-oracle/include/linux
So, my INCLUDES declaration looks like this:
INCLUDES = \
-I/usr/lib/jvm/java-8-oracle/include \
-I/usr/lib/jvm/java-8-oracle/include/linux \
-I$(THIRD_PARTY_INCLUDE_PATH) \
-I$(ANTLR_CPP_LIB_INCLUDE) \
-I. \
-I../codegen \
-Iantlrgen \
After that, the build runs fine but eventually fails when trying to build expression_util.o
Error:
g++ -I/usr/lib/jvm/java-8-oracle/include -I/usr/lib/jvm/java-8-oracle/include/linux -I/home/ubuntu-java/Development/google-cloud-debugger/cloud-debug-java/third_party/install/include -I../../third_party/antlr/lib/cpp/v2_7_2/ -I. -I../codegen -Iantlrgen -m64 -std=c++11 -fPIC -Werror -Wall -Wno-unused-parameter -Wno-deprecated -Wno-ignored-qualifiers -Wno-sign-compare -Wno-array-bounds -g0 -DSTANDALONE_BUILD -DGCP_HUB_CLIENT -Wno-unused-but-set-variable -Wno-strict-aliasing -O3 -D NDEBUG -c expression_util.cc -o expression_util.o
In file included from expression_util.cc:25:0:
antlrgen/JavaExpressionLexer.hpp:4:54: fatal error: third_party/antlr/lib/cpp/antlr/config.hpp: No such file or directory
compilation terminated.
Makefile:190: recipe for target 'expression_util.o' failed
make: *** [expression_util.o] Error 1
In the generated JavaExpressionLexer.hpp file, it's trying to #include third_party/antlr/lib/cpp/antlr/config.hpp and fails to find it.
In the project, I do see a config.hpp, but it's under <project-root>/third_party/antlr/lib/cpp/v2_7_2/antlr/.
I'm not sure how to resolve this error.
Are you using the build.sh script? It should take care of ANTLR and the other third-party dependencies.
Specifically, the build needs the THIRD_PARTY_INCLUDE_PATH environment variable to be set similarly to how build.sh sets it.
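If you end up running make in src/agent by hand rather than through build.sh, a minimal hedged sketch would be to export the variable first; the path below is simply the -I value already visible in the failing g++ line above, so adjust it to your checkout:
export THIRD_PARTY_INCLUDE_PATH=/home/ubuntu-java/Development/google-cloud-debugger/cloud-debug-java/third_party/install/include
make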

Installing PyLucene 3.0.3 on Ubuntu 10.04

I'm attempting to install PyLucene 3.0.3 on Ubuntu 10.04. This has proven considerably challenging, but so far I've:
Patched setuptools to allow building of JCC, as instructed in the PyLucene docs.
Built JCC via: cd pylucene-3.0.3-1/jcc; python setup.py build
Built Lucene 3.0.3 via ant, and installed the jar to /usr/share/java/lucene-core-3.0.3-dev.jar. Note that I also have Ubuntu's default Lucene package installed at /usr/share/java/lucene-core-2.9.2.jar, which /usr/share/java/lucene-core.jar symlinks to.
I'm now trying to "make" PyLucene, but I get the error:
cd lucene-java-3.0.3; -Dversion=3.0.3
/bin/sh: -Dversion=3.0.3: not found
make: *** [lucene-java-3.0.3/build/lucene-core-3.0.3.jar] Error 127
The file pylucene-3.0.3-1/doc/documentation/install.html mentions "edit Makefile to match your environment", but I'm not sure what that means. The Makefile already seems to contain the same Lucene version number as the one I installed. How else do I need to edit my Makefile in order to build PyLucene?
Edit: After uncommenting a section in the Makefile (thanks Torsten) for compiling under Ubuntu 8.10 (seriously, 8.10?!), most of it seemed to compile fine, but I still received an error. Several components reported "BUILD SUCCESSFUL", but the final build ended with:
/usr/bin/python -m jcc --shared --jar lucene-java-3.0.3/build/lucene-core-3.0.3.jar --jar lucene-java-3.0.3/build/contrib/snowball/lucene-snowball-3.0.3.jar --jar lucene-java-3.0.3/build/contrib/analyzers/common/lucene-analyzers-3.0.3.jar --jar lucene-java-3.0.3/build/contrib/regex/lucene-regex-3.0.3.jar --jar lucene-java-3.0.3/build/contrib/memory/lucene-memory-3.0.3.jar --jar lucene-java-3.0.3/build/contrib/highlighter/lucene-highlighter-3.0.3.jar --jar lucene-java-3.0.3/build/contrib/queries/lucene-queries-3.0.3.jar --jar build/jar/extensions.jar --package java.lang java.lang.System java.lang.Runtime --package java.util java.util.Arrays java.text.SimpleDateFormat java.text.DecimalFormat java.text.Collator --package java.io java.io.StringReader java.io.InputStreamReader java.io.FileInputStream --exclude org.apache.lucene.queryParser.Token --exclude org.apache.lucene.queryParser.TokenMgrError --exclude org.apache.lucene.queryParser.QueryParserTokenManager --exclude org.apache.lucene.queryParser.ParseException --exclude org.apache.lucene.search.regex.JakartaRegexpCapabilities --exclude org.apache.regexp.RegexpTunnel --python lucene --mapping org.apache.lucene.document.Document 'get:(Ljava/lang/String;)Ljava/lang/String;' --mapping java.util.Properties 'getProperty:(Ljava/lang/String;)Ljava/lang/String;' --rename org.apache.lucene.search.highlight.SpanScorer=HighlighterSpanScorer --version 3.0.3 --module python/collections.py --files 200 --build
/usr/bin/python: jcc is a package and cannot be directly executed
make: *** [compile] Error 1
I did this before (but without installing Ubuntu's default Lucene package). I don't know exactly what Error 127 is, but in my case it helped to change NUM_FILES=2 to NUM_FILES=200 in my Makefile. For some reason, with NUM_FILES=2 it creates really huge files in memory which Ubuntu will not handle; with NUM_FILES=200 the chunks are smaller, and the installation worked for me in the end. For Python 2.6 you also have to change the JCC setting in the Makefile (see below).
Here is the part of the Makefile that was important for me:
# Linux (Ubuntu 8.10 64-bit, Python 2.5.2, OpenJDK 1.6, setuptools 0.6c9)
PREFIX_PYTHON=/usr
ANT=ant
PYTHON=$(PREFIX_PYTHON)/bin/python
JCC=$(PYTHON) -m jcc.__main__ --shared
NUM_FILES=200
