I'm running unit tests in Android Studio. I have a Java class that loads a native library with the following code
static {
    System.loadLibrary("mylibrary");
}
But when I test this class inside my src/test directory I get
java.lang.UnsatisfiedLinkError: no mylibrary in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1864)
at java.lang.Runtime.loadLibrary0(Runtime.java:870)
at java.lang.System.loadLibrary(System.java:1122)
How can I make it find the path of the native .so libraries, which are located at src/main/libs, so that I can unit test without errors?
Note: inside the src/main/libs directory I have 3 more subdirectories: armeabi, mips and x86. Each one of those contains the proper .so file. I'm using the non-experimental Gradle plugin for building the NDK libs.
I don't wanna use other 3rd party testing libraries as all my other "pure" java classes can be unit tested fine. But if that's not possible then I'm open to alternatives.
Here is my test code which throws the error
@Test
public void testNativeClass() throws Exception
{
    MyNativeJavaClass test = new MyNativeJavaClass("lalalal");
    List<String> results = test.getResultsFromNativeMethodAndPutThemInArrayList();
    assertEquals("There should be only three results", 3, results.size());
}
The only solution I found that works without hacks is to use JUnit through instrumentation testing (androidTest directory).
My class can now be tested fine but with help of the android device or emulator.
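For reference, this is roughly what the question's test looks like when moved under src/androidTest — a sketch assuming the support-library AndroidJUnit4 runner used elsewhere in this thread:

import static org.junit.Assert.assertEquals;

import android.support.test.runner.AndroidJUnit4;

import org.junit.Test;
import org.junit.runner.RunWith;

import java.util.List;

@RunWith(AndroidJUnit4.class)
public class MyNativeJavaClassInstrumentedTest {
    @Test
    public void testNativeClass() throws Exception {
        // Runs on a device/emulator, so System.loadLibrary() finds the .so packaged in the APK
        MyNativeJavaClass test = new MyNativeJavaClass("lalalal");
        List<String> results = test.getResultsFromNativeMethodAndPutThemInArrayList();
        assertEquals("There should be only three results", 3, results.size());
    }
}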
If the library is required for your test, use an AndroidTest (under src/androidTest/...) rather than a junit test. This will allow you to load and use the native library like you do elsewhere in your code.
If the library is not required for your test, simply wrap the system load in a try/catch. This will allow the JNI class to still work in junit tests (under src/test/...) and it is a safe workaround, given that it is unlikely to mask the error (something else will certainly fail, if the native lib is actually needed). From there, you can use something like mockito to stub out any method calls that still hit the JNI library.
For example in Kotlin:
companion object {
    init {
        try {
            System.loadLibrary("mylibrary")
        } catch (e: UnsatisfiedLinkError) {
            // log the error or track it in analytics
        }
    }
}
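From there, a plain JUnit test (in Java, reusing the class and method names from the question) can stub out the JNI-backed call — a rough sketch, assuming Mockito is on the test classpath and the static initializer swallows the UnsatisfiedLinkError as shown above:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

import java.util.Arrays;
import java.util.List;

public class MyNativeJavaClassUnitTest {
    @Test
    public void testWithStubbedNativeCall() {
        // The class loads because the static initializer caught the UnsatisfiedLinkError;
        // the JNI-backed method itself is stubbed out here.
        MyNativeJavaClass stub = mock(MyNativeJavaClass.class);
        when(stub.getResultsFromNativeMethodAndPutThemInArrayList())
                .thenReturn(Arrays.asList("a", "b", "c"));

        List<String> results = stub.getResultsFromNativeMethodAndPutThemInArrayList();
        assertEquals(3, results.size());
    }
}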
I am not sure whether this solves your problem, but so far nobody has mentioned the strategy pattern for dealing with classes that load a native library during their initialization.
Let's look at an example:
We want to implement a Fibonacci solver. Assuming that we have provided the implementation in native code and managed to generate the native library, we can define the following interface:
public interface Fibonacci {
long calculate(int steps);
}
Firstly, we provide our native implementation:
public final class FibonacciNative implements Fibonacci {
    static {
        System.loadLibrary("myfibonacci");
    }

    public native long calculate(int steps);
}
Secondly, we provide a Java implementation of the Fibonacci solver:
public final class FibonacciJava implements Fibonacci {
    @Override
    public long calculate(int steps) {
        if (steps > 1) {
            return calculate(steps - 2) + calculate(steps - 1);
        }
        return steps;
    }
}
Thirdly, we wrap the solvers in a class that chooses its implementation at class-initialization time:
public class FibonacciSolver implements Fibonacci {
    private static final Fibonacci STRATEGY;

    static {
        Fibonacci implementation;
        try {
            implementation = new FibonacciNative();
        } catch (Throwable e) {
            implementation = new FibonacciJava();
        }
        STRATEGY = implementation;
    }

    @Override
    public long calculate(int steps) {
        return STRATEGY.calculate(steps);
    }
}
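On the JVM, the loadLibrary call inside FibonacciNative throws (wrapped in an ExceptionInInitializerError), so a plain unit test transparently exercises the Java fallback; a minimal sketch:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class FibonacciSolverTest {
    @Test
    public void calculatesTenthFibonacciNumber() {
        // Under src/test this silently falls back to FibonacciJava;
        // on a device the same code would use FibonacciNative.
        Fibonacci solver = new FibonacciSolver();
        assertEquals(55, solver.calculate(10));
    }
}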
Thus, the strategy pattern sidesteps the problem of finding the path to the library. It does not help, however, if the native library itself really has to be exercised by the test, nor if the native library is a third-party library you cannot reimplement in Java.
Basically, this gets around the native library load problem by substituting Java code for the native code.
Hope this helps somehow :)
There is a way to configure the library path of the Gradle-run VM for local unit tests, and I'm going to describe it below, but spoiler: in my experience, @ThanosFisherman is right: local unit tests for stuff that uses the Android NDK seem to be a fool's errand right now.
So, for anyone else looking for a way to load shared (i.e. .so) libraries into unit tests with gradle, here's the somewhat lengthy abstract:
The goal is to set the shared library lookup path for the JVM running the unit tests.
Although many people suggest putting the lib path into java.library.path, I found that it doesn't work, at least not on my Linux machine (also, same results in this CodeRanch thread).
What does work, though, is setting the LD_LIBRARY_PATH OS environment variable (PATH is the closest equivalent on Windows).
Using Gradle:
// module-level build.gradle
apply plugin: 'com.android.library' // or application
android {
...
testOptions {
unitTests {
all {
// This is where we have access to the properties of gradle's Test class,
// look it up if you want to customize more test parameters
// next we take our cmake output dir for whatever architecture
// you can also put some 3rd party libs here, or override
// the implicitly linked stuff (libc, libm and others)
def libpath = '' + projectDir + '/build/intermediates/cmake/debug/obj/x86_64/' +
    ':/home/developer/my-project/some-sdk/lib'
environment 'LD_LIBRARY_PATH', libpath
}
}
}
}
With that, you can run, e.g. ./gradlew :mymodule:testDebugUnitTest and the native libs will be looked for in the paths that you specified.
Using Android Studio JUnit plugin
For Android Studio's JUnit plugin, you can specify the VM options and the environment variables in the test configuration's settings: run a JUnit test (by right-clicking on a test method or whatever) and then edit the Run Configuration, setting LD_LIBRARY_PATH there as an environment variable.
Although it sounds like "mission accomplished", I found that using libc.so, libm.so and the others from my OS's /usr/lib gave me version errors (probably because my own library is compiled by CMake with the Android NDK toolchain against its own platform libs). And using the platform libs from the NDK packages brought down the JVM with a SIGSEGV (due to incompatibility of the NDK platform libs with the host OS environment).
Update: As @AlexCohn incisively pointed out in the comments, one has to build against the host environment libs for this to work; even though your machine is most likely x86_64, the x86_64 binaries built against the NDK environment will not do.
There may be something I overlooked, obviously, and I'll appreciate any feedback, but for now I'm dropping the whole idea in favor of instrumented tests.
Just make sure the directory containing the library is included in the java.library.path system property.
From the test you could set it before you load the library:
System.setProperty("java.library.path", "... path to the library .../libs/x86");
You could hard-code the path, but that makes the project less portable to other environments, so I suggest building it up programmatically.
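For example, one way to build it up (a sketch; src/main/libs/x86 is taken from the question's layout, and the working directory is assumed to be the module directory, as is usual under Gradle):

import java.io.File;

public class NativeLibraryPaths {
    // Must run before the JVM loads its first native library,
    // because java.library.path is cached after that point.
    public static void pointAtProjectLibs() {
        File libDir = new File(System.getProperty("user.dir"), "src/main/libs/x86");
        System.setProperty("java.library.path", libDir.getAbsolutePath());
    }
}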
The .so files are to be placed under
src/main/jniLibs
Not under src/main/libs
(Tested with Android Studio 1.2.2)
For reference check the page - http://ph0b.com/android-studio-gradle-and-ndk-integration/, though some portions might be outdated.
This is very, very tricky. Setting java.library.path does not work, but trying to understand someone else’s Mac OSX approach I eventually found a working solution.
Legal release: all code examples directly copied into this post are available under CC0, but it would be appreciated to credit my employer ⮡ tarent, the LLCTO project at Deutsche Telekom, and the author mirabilos.
CAVEATS first:
with this, you’re testing a version of the native code compiled against your system libraries (usually glibc on GNU/Linux, and on BSD, Mac OSX and Windows it’s even trickier) so adding some instrumented tests should be done anyway, use the unittests only for faster testing of things that actually can be tested on the host OS
I’ve only tested this with a GNU/Linux host (and am, in fact, excluding these native tests on all other host OSes, see below)
it should work under unixoid OSes with GNU/BSD-style shared libraries as-is
with small adaptations from the “someone else’s” article linked above, it might well work on Mac OSX
Windows… no, just no. Use WSL, which is basically Linux anyway and makes things much easier, and so much closer to Android which is also basically Linux just not GNU
IDE integration needs manual steps at each developer’s machine (but these are easily documented, see (much) below)
Prerequisites
You’ll need to make sure that all build dependencies of your native code are also installed in the host system. This includes cmake (because we sadly cannot reuse the NDK cmake) and a host C compiler. Note that these introduce further differences in the build: you’re testing something that has been built with the host C compiler (often GCC, not clang like in Android) against the host C library and other libraries by the host clang. Do consider this when writing your tests. I had to move one of the tests to instrumented because it was impossible to test under glibc.
For filesystem layout, we assume the following:
~/MYPRJ/build.gradle is the top-level build file (generated by IntelliJ / Android Studio)
~/MYPRJ/app/build.gradle is where the Android code in question is built (generated by IntelliJ / Android Studio)
~/MYPRJ/app/src/main/native/CMakeLists.txt is where the native code is situated
This means build.gradle (for the app) has something like this already, by the point where you begin wondering about whether your project can be unittested:
externalNativeBuild {
cmake {
path "src/main/native/CMakeLists.txt"
return void // WTF‽
}
}
Make sure your code builds on the host
Doing this ought to be easy at first glance:
$ rm -rf /tmp/build
$ mkdir /tmp/build
$ cd /tmp/build
$ cmake ~/MYPRJ/app/src/main/native/
$ make
(Make sure you give cmake the path to the directory the main CMakeLists.txt file is in, but not to that file itself!)
This will fail for everything nōntrivial, of course. Most people would use Android logging. (It will also fail because it cannot find <jni.h>, and because GNU libc requires an extra _GNU_SOURCE definition to access some prototypes, etc…)
So I wrote a header to include instead of <android/log.h> which abstracts the logging away…
#ifndef MYPRJ_ALOG_H
#define MYPRJ_ALOG_H
#ifndef MYPRJ_ALOG_TAG
#define MYPRJ_ALOG_TAG "MYPRJ-JNI"
#endif
#if defined(MYPRJ_ALOG_TYPE) && (MYPRJ_ALOG_TYPE == 1)
#include <android/log.h>
#define ecnlog_err(msg, ...) __android_log_print(ANDROID_LOG_ERROR, \
MYPRJ_ALOG_TAG, msg, ##__VA_ARGS__)
#define ecnlog_warn(msg, ...) __android_log_print(ANDROID_LOG_WARN, \
MYPRJ_ALOG_TAG, msg, ##__VA_ARGS__)
#define ecnlog_info(msg, ...) __android_log_print(ANDROID_LOG_INFO, \
MYPRJ_ALOG_TAG, msg, ##__VA_ARGS__)
#elif defined(MYPRJ_ALOG_TYPE) && (MYPRJ_ALOG_TYPE == 2)
#include <stdio.h>
#define ecnlog_err(msg, ...) fprintf(stderr, \
"E: [" MYPRJ_ALOG_TAG "] " msg "\n", ##__VA_ARGS__)
#define ecnlog_warn(msg, ...) fprintf(stderr, \
"W: [" MYPRJ_ALOG_TAG "] " msg "\n", ##__VA_ARGS__)
#define ecnlog_info(msg, ...) fprintf(stderr, \
"I: [" MYPRJ_ALOG_TAG "] " msg "\n", ##__VA_ARGS__)
#else
# error What logging system to use?
#endif
#endif
… and updated my CMakeLists.txt to indicate whether building for NDK (must be default) or native:
cmake_minimum_required(VERSION 3.10)
project(myprj-native)
option(UNDER_NDK "Build under the Android NDK" ON)
add_compile_options(-fvisibility=hidden)
add_compile_options(-Wall -Wextra -Wformat)
add_library(myprj-native SHARED
alog.h
myprj-jni.c
)
if (UNDER_NDK)
add_definitions(-DMYPRJ_ALOG_TYPE=1)
find_library(log-lib log)
target_link_libraries(myprj-native ${log-lib})
else (UNDER_NDK)
add_definitions(-DMYPRJ_ALOG_TYPE=2)
include(FindJNI)
include_directories(${JNI_INCLUDE_DIRS})
add_definitions(-D_GNU_SOURCE)
endif (UNDER_NDK)
Note this also already includes the fix for <jni.h> (FindJNI) and the extra definitions.
Now let’s try to build it again:
$ rm -rf /tmp/build
$ mkdir /tmp/build
$ cd /tmp/build
$ cmake -DUNDER_NDK=OFF ~/MYPRJ/app/src/main/native/
$ make
In my case, this was sufficient. If you’re still not there, fix this first before proceeding. If you cannot fix this, give up on buildhost-local unit tests for your JNI code and move the respective tests to instrumented.
Let Gradle build the host-native code
Add the following to the app build.gradle:
def dirForNativeNoNDK = project.layout.buildDirectory.get().dir("native-noNDK")
def srcForNativeNoNDK = project.layout.projectDirectory.dir("src/main/native").asFile
task createNativeNoNDK() {
def dstdir = dirForNativeNoNDK.asFile
if (!dstdir.exists()) dstdir.mkdirs()
}
task buildCMakeNativeNoNDK(type: Exec) {
dependsOn createNativeNoNDK
workingDir dirForNativeNoNDK
commandLine "/usr/bin/env", "cmake", "-DUNDER_NDK=OFF", srcForNativeNoNDK.absolutePath
}
task buildGMakeNativeNoNDK(type: Exec) {
dependsOn buildCMakeNativeNoNDK
workingDir dirForNativeNoNDK
commandLine "/usr/bin/env", "make"
}
project.afterEvaluate {
if (org.gradle.internal.os.OperatingSystem.current().isLinux()) {
testDebugUnitTest {
dependsOn buildGMakeNativeNoNDK
systemProperty "java.library.path", dirForNativeNoNDK.asFile.absolutePath + ":" + System.getProperty("java.library.path")
}
testReleaseUnitTest {
dependsOn buildGMakeNativeNoNDK
systemProperty "java.library.path", dirForNativeNoNDK.asFile.absolutePath + ":" + System.getProperty("java.library.path")
}
}
}
This defines a few new tasks to compile the buildhost-native version of the shared library, and hooks this up if the host OS is “Linux”. (This syntax will also work for other unixoid OSes — BSD, Mac OSX — but not for Windows. But we can probably test this under Linux only anyway. WSL counts as Linux.) It also sets up the JVM library path so that ../gradlew test will let the JVM pick up the library from its path.
Loose ends
There’s a few loose ends you might have noticed here:
In the last paragraph of the previous section, I mentioned that ../gradlew test will pick up the library. Testing from the IDE will not work yet; this involves manual setup.
I mentioned that the relevant unit tests must be skipped if the buildhost OS is not “Linux”; we have yet to do that. Unfortunately, JUnit 4 lacks such facilities, but switching the unit tests to JUnit 5 “Jupiter” will allow us to do that. (We’re not switching the instrumented tests, though; that’d be more invasive.)
You’ll probably not yet have noticed, but the logging output from the native code will not show up thanks to Gradle’s default settings which we’ll need to change.
So, let’s do that. First, edit your app build.gradle file again. There will be a dependencies { block. We’ll need to fill that with suitable dependencies for either JUnit:
dependencies {
testImplementation 'org.junit.jupiter:junit-jupiter-api:5.7.0'
testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.7.0'
//noinspection GradleDependency
androidTestImplementation 'com.android.support.test:runner:1.0.1'
//noinspection GradleDependency
androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.1'
//noinspection GradleDependency
androidTestImplementation 'junit:junit:4.12'
}
You’ll also have a line apply plugin: 'com.android.application' (or perhaps apply plugin: 'com.android.library') at the top. Directly below that line, insert this one:
apply plugin: 'de.mannodermaus.android-junit5'
Also, make sure that, under android { defaultConfig { the testInstrumentationRunner is still "android.support.test.runner.AndroidJUnitRunner" (the default as generated by IntelliJ / Android Studio).
Next, edit the top-level ~/MYPRJ/build.gradle file. You’ll already have a buildscript { dependencies { and will have to add a line to that section to make the JUnit5 plugin available in the first place:
//noinspection GradleDependency
classpath 'de.mannodermaus.gradle.plugins:android-junit5:1.5.2.0'
Then, add a new section under allprojects {:
tasks.withType(Test) {
testLogging {
outputs.upToDateWhen { false }
showStandardStreams = true
exceptionFormat = 'full'
}
systemProperty 'java.util.logging.config.file', file('src/test/resources/logging.properties').getAbsolutePath()
}
This ensures that…
tests are never skipped because Gradle thinks them up-to-date
logging output and exceptions are shown in full
if you have a ~/MYPRJ/app/src/test/resources/logging.properties it will set up java.util.logging with this (recommended)
Now see to your test, something like ~/MYPRJ/app/src/test/java/org/example/packagename/JNITest.java. First, you should add a “test” that can always run (I use one that merely tests whether my JNI class can be loaded), and ensure it displays some information first:
// or Lombok @Log
private static final java.util.logging.Logger LOGGER =
    java.util.logging.Logger.getLogger(JNITest.class.getName());

@Test
public void testClassBoots() {
    LOGGER.info("running on " + System.getProperty("os.name"));
    if (!LINUX.isCurrentOs()) {
        LOGGER.warning("skipping JNI tests");
    }
    // for copy/paste into IntelliJ run options
    LOGGER.info("VM options: -Djava.library.path=" +
        System.getProperty("java.library.path"));
    LOGGER.info("testing Java™ part of JNI class…");
    […]
}
Then, annotate the actual JNI tests that need to be skipped on other OSes:
@Test
@EnabledOnOs(LINUX)
public void testJNIBoots() {
    LOGGER.info("testing JNI part of JNI class…");
    final long tid;
    try {
        tid = JNI.n_gettid();
    } catch (Throwable t) {
        LOGGER.log(Level.SEVERE, "it failed", t);
        Assertions.fail("JNI does not work");
        return;
    }
    LOGGER.info("it also works: " + tid);
    assertNotEquals(0, tid, "but is 0");
}
For comparison, instrumented tests (unittests that run on the Android device or emulator) — e.g. ~/MYPRJ/app/src/androidTest/java/org/example/packagename/JNIInstrumentedTest.java — look like this:
@RunWith(AndroidJUnit4.class)
public class JNIInstrumentedTest {
    @Test
    public void testJNIBoots() {
        Log.i("ECN-Bits-JNITest", "testing JNI part of JNI class…");
        final long tid;
        try {
            tid = JNI.n_gettid();
        } catch (Throwable t) {
            Log.e("ECN-Bits-JNITest", "it failed", t);
            fail("JNI does not work");
            return;
        }
        Log.i("ECN-Bits-JNITest", "it also works: " + tid);
        assertNotEquals("but is 0", 0, tid);
    }
}
See Testable.java if you need an assertThrows for instrumented tests (JUnit 5 already comes with one), by the way. (Note that this does not fall under the CC0 grant above but comes under a permissive licence.)
Now you can run both kinds of tests, unit tests and (if an Android emulator is started or a device is connected) instrumented tests:
../gradlew test connectedAndroidTest
Do so. Note the output of the VM options: logger call from the buildhost-native unit tests; in fact, copy it to the clipboard. You’ll now need it to set up testing in the IDE.
In the Project view (left-side tree), right-click either on your JNITest class or the entire src/test/java/ directory. Click on Run 'JNITest' (or Run 'Tests in 'java''), it will fail with an UnsatisfiedLinkError as in the original post.
Now click on the arrow in the test drop-down below the menu bar, then select Save JNITest configuration, then do it again and select Edit configurations… and select your configuration. Append the entire pasted thing to VM options: so the field will now look like -ea -Djava.library.path=/home/USER/MYPRJ/app/build/native-noNDK:/usr/java/packages/lib:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (of course, the actual value will differ) and click OK. Re-run the test, and it will succeed.
Unfortunately, you’ll have to do this for every native test class once and for the entire directory, so all possible ways of invocation will be covered. You’ll also have to do this manually, by clicking around, for every IDE instance, and these values depend on the path the code was checked out into. I’ve not found a way to automate these (if you know of one, do tell).
Exception backtraces
If you’re throwing custom exceptions from your code, you’ll most likely wish to include file/lineno/function information. Use a constructor like MyprjNativeException(final String file, final int line, final String func, final String msg, … /* custom data */, final Throwable cause) and, after calling super(msg, cause) (possibly with a changed message), do this:
StackTraceElement[] currentStack = getStackTrace();
StackTraceElement[] newStack = new StackTraceElement[currentStack.length + 1];
System.arraycopy(currentStack, 0, newStack, 1, currentStack.length);
newStack[0] = new StackTraceElement("<native>", func, file, line);
setStackTrace(newStack);
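Put together, the exception class might look roughly like this (custom data omitted; the class name is the placeholder from above):

public class MyprjNativeException extends Exception {
    public MyprjNativeException(final String file, final int line, final String func,
            final String msg, final Throwable cause) {
        super(msg, cause);
        // Prepend a synthetic frame so the native file/line/function
        // appear at the top of the Java backtrace.
        final StackTraceElement[] currentStack = getStackTrace();
        final StackTraceElement[] newStack = new StackTraceElement[currentStack.length + 1];
        System.arraycopy(currentStack, 0, newStack, 1, currentStack.length);
        newStack[0] = new StackTraceElement("<native>", func, file, line);
        setStackTrace(newStack);
    }
}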
Then, to throw an exception like this from native code:
#define throw(env,...) vthrow(__FILE__, __func__, env, __LINE__, __VA_ARGS__)
static void vthrow(const char *loc_file, const char *loc_func, JNIEnv *env,
int loc_line, /* custom args */ const char *msg, ...);
Use as follows:
if (func() != expected)
throw(env, /* custom args */ "foo");
Implementation (assuming you cache class and constructor method references) looks as follows (adjust for custom args):
static void vthrow(const char *loc_file, const char *loc_func, JNIEnv *env,
int loc_line, const char *fmt, ...)
{
jthrowable e;
va_list ap;
jstring jfile = NULL;
jint jline = loc_line;
jstring jfunc = NULL;
jstring jmsg = NULL;
jthrowable cause = NULL;
const char *msg;
char *msgbuf;
if ((*env)->PushLocalFrame(env, /* adjust for amount you need */ 5)) {
cause = (*env)->ExceptionOccurred(env);
(*env)->ExceptionClear(env);
(*env)->Throw(env, (*env)->NewObject(env, classreference, constructorreference,
jfile, jline, jfunc, jmsg, /* custom */…, cause));
return;
}
if ((cause = (*env)->ExceptionOccurred(env))) {
/* will be treated as cause */
(*env)->ExceptionClear(env);
}
va_start(ap, fmt);
if (vasprintf(&msgbuf, fmt, ap) == -1) {
msgbuf = NULL;
msg = fmt;
} else
msg = msgbuf;
va_end(ap);
jmsg = (*env)->NewStringUTF(env, msg);
free(msgbuf);
if (!jmsg)
goto onStringError;
if (!(jfunc = (*env)->NewStringUTF(env, loc_func)))
goto onStringError;
/* allocate NewStringUTF for any custom things you need */
/* exactly like the one for loc_func above */
/* increase PushLocalFrame argument for each */
jfile = (*env)->NewStringUTF(env, loc_file);
if (!jfile) {
onStringError:
(*env)->ExceptionClear(env);
}
e = (*env)->PopLocalFrame(env, (*env)->NewObject(env, classreference, constructorreference,
jfile, jline, jfunc, jmsg, /* custom */…, cause));
if (e)
(*env)->Throw(env, e);
}
Now using __FILE__ will put the full absolute path into the messages and backtraces. This is not very nice. There’s a compiler option to fix that, but NDK r21’s clang is much too old, so we need a workaround.
CMakeLists.txt:
if (NOT TOPLEV)
message(FATAL_ERROR "setting the top-level directory is mandatory")
endif (NOT TOPLEV)
[…]
if (UNDER_NDK)
[…]
execute_process(COMMAND ${CMAKE_CXX_COMPILER} --version OUTPUT_VARIABLE cxx_version_full)
string(REGEX REPLACE "^Android [^\n]* clang version ([0-9]+)\\.[0-9].*$" "\\1" cxx_version_major ${cxx_version_full})
if (${cxx_version_major} VERSION_GREATER_EQUAL 10)
add_definitions("-ffile-prefix-map=${TOPLEV}=«MyPrj»")
else (${cxx_version_major} VERSION_GREATER_EQUAL 10)
add_definitions(-DOLD_CLANG_SRCDIR_HACK="${TOPLEV}/")
endif (${cxx_version_major} VERSION_GREATER_EQUAL 10)
else (UNDER_NDK)
[…]
add_definitions("-ffile-prefix-map=${TOPLEV}=«MyPrj»")
endif (UNDER_NDK)
app build.gradle:
(straight after the apply plugin lines)
def dirToplev = project.layout.projectDirectory.asFile.absolutePath
(inside android { defaultConfig { add a new block)
externalNativeBuild {
cmake {
//noinspection GroovyAssignabilityCheck because Gradle and the IDE have different world views…
arguments "-DTOPLEV=" + dirToplev
}
return void // WTF‽
}
(later, where you call cmake)
commandLine "/usr/bin/env", "cmake", "-DTOPLEV=" + dirToplev, "-DUNDER_NDK=OFF", srcForNativeNoNDK.absolutePath
Then, replace the line jfile = (*env)->NewStringUTF(env, loc_file); with the following snippet:
#ifdef OLD_CLANG_SRCDIR_HACK
if (!strncmp(loc_file, OLD_CLANG_SRCDIR_HACK, sizeof(OLD_CLANG_SRCDIR_HACK) - 1) &&
asprintf(&msgbuf, "«MyPrj»/%s", loc_file + sizeof(OLD_CLANG_SRCDIR_HACK) - 1) != -1) {
msg = msgbuf;
} else {
msg = loc_file;
msgbuf = NULL;
}
#else
#define msg loc_file
#endif
jfile = (*env)->NewStringUTF(env, msg);
#ifdef OLD_CLANG_SRCDIR_HACK
free(msgbuf);
#else
#undef msg
#endif
Tying it all together
This all is implemented in the ECN-Bits project. I’m posting a permalink because it’s currently on a nōn-default branch but expected to be merged (once the actual functionality is no longer WIP), so be sure to check master at some point in time as well (although this permalink is probably a better example as it has the testing down and there’s not as much “actual” code to get in the way). Note that these links do not fall under the CC0 grant from above; the files all have a permissive licence though (the files which don’t have it explicit (gradle/cmake files) have the same as the unittest class permalink), but enough of it was reposted in this article so that should not be a problem for you; these only serve to show an actually-compiling-and-testing example.
In this project, it’s not in app/ but as a separate library module.
top-level build.gradle
library build.gradle
instrumented tests
unittest class
unittest logging configuration
native CMakeLists.txt
native alog.h
native code including caching the references
Java™ code to which the JNI native code attaches including the Exception class
Try running your test code with the java -XshowSettings:properties option and make sure that the directory holding your native libraries appears among the library path values shown in its output.
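Equivalently, you can print the effective value from inside a unit test and compare it against the directory that holds your .so files:

import org.junit.Test;

public class LibraryPathTest {
    @Test
    public void printLibraryPath() {
        // Compare this output with the directory that actually contains your .so files
        System.out.println(System.getProperty("java.library.path"));
    }
}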
Just to clarify: the System.loadLibrary() call was failing because the JUnit unit test uses the host/system environment, which was Windows in my case. Hence the loadLibrary() call was searching for the .so files in the standard shared-library folders. But this isn't what I was expecting to happen; instead I wanted the libxxx.so files to be loaded from the .aar file (which contains Android resources, jars and JNI libs).
This can happen in only two ways:
Baking the libxxx.so into the APK manually: we place the libxxx.so files into the jniLibs directory (which the system searches when loading shared libs) under the source root of the APK module, by copying the required files.
Adding the .aar as an external dependency in the module-level Gradle build script (implementation/androidTestImplementation); by doing this we make the .aar available to the APK build process.
In both cases the app runs in the Android environment/VM, so the System.loadLibrary() call resolves to the correct libxxx.so, which is part of the APK. Hence NO ISSUES.
However, unit tests do not require an instrument (i.e. an Android device) and run on the JVM of the host system where the tests are executed (e.g. Windows/Linux/Mac). There the call to System.loadLibrary() searches only the standard library paths of the host system for shared libs and does not refer to the Android system environment. Hence the ISSUES.
Fixes:
Unpack the libxxx.so into some temp dir and add this dir to the system's library search path (e.g. java.library.path, the %PATH% variable on Windows, etc.). Then run the unit tests that don't require the Android environment but do exercise native code via JNI. This should work!
(Efficient method) Simply move these kinds of unit tests to androidTest (i.e. instrumentation tests) so that the loading and packaging described above stay intact and System.loadLibrary() can find the libxxx.so when running inside the instrument (Android device/OS). This way you also ensure the appropriate lib variant (x86, x86-64, armeabi-v7a, arm64-v8a (AArch64)) is loaded on the target device and the tests run on the specific target devices.
I need to port a CPP project to Android but I somehow got stuck because of the following things that I am not sure of:
Do I need to use some kind of Java wrapper for my CPP project at all, i.e. is it necessary to use the Android SDK to integrate my application with Android? If there is another way, which one would that be?
I have seen some people claiming they have been able to manipulate their CMake file and a custom android-cmake toolchain to build their “.so” or eventually an “.apk” from their project. Would it be possible, without a Java wrapper, to manipulate the CMake files of the CPP project to build it? (source: Build Android NDK project with Cmake)
From my experience, I would go with using the Android Java entry point, otherwise you will most likely bump into problems (you can make fully native Android apps, but I strongly advise against it).
One of the reasons is that you will want SDK function calls from inside your CPP, and calling into the Java environment from CPP isn't trivial.
The steps would be the following:
Create a very simple piece of C code that will serve as a bridge to your CPP code. Here are some sample C functions that I've commonly used:
OnApplicationStart
OnApplicationPaused
OnApplicationResumed
OnApplicationUpdate
OnTouchReceived
Export these functions and load them in your Java code (lookup JNI on how to do this)
Handle all Android-specific actions in Java, and all application specific actions in cpp
So the answer to your 1st question is that a java wrapper isn't mandatory, but it's HIGHLY recommended.
To answer your 2nd question:
Yes, you can use CMake, ant, Cygwin, a lot of command-line tools that will end up creating your APK (it's up to you, whatever you feel comfortable with). You will still use the Android compiler since you are targeting ARM.
The difference is that you need to change the entry point of the application in your manifest.xml from a normal activity to a native activity (http://developer.android.com/reference/android/app/NativeActivity.html).
As you can see, the difference isn't the build system, it's the way you define your entry point.
As a friendly piece of advice, try the minimal Java wrapper approach. You might get to better results sooner, and it won't take you more than a week of research on the web on how to link Java code to CPP.
EDIT:
As requested, I will briefly explain how I would approach the process of porting a CPP application to Android:
Rethink your application to work as a shared library, using a C entry point:
a. Create a simple C application that will load yourCPPApp.dll (or .so if you are on Linux)
b. In your .dll, create the minimum necessary extern "C" functions to be exported in order for you to pass the necessary information to your dll
For simplicity, we'll assume we have 3 methods :
void StartApplication();
bool OnApplicationUpdate();
void OnScreenTouched(int x, int y);
c. Implement the simple C project that will make the calls to these methods externally (so the .exe will call the methods from the .dll!)
Sample code :
#include "myCPPapp.h"
int main(int arg, char** argv)
{
StartApplication();
srand(time(NULL));
while (OnApplicationUpdate())
{
// we assume we have a 480x640 resolution
OnScreenTouched(rand()%480,rand()%640);
}
return 0;
}
Now that we have things working fully natively with a .exe and a .dll, it's time to make it work with a .apk and a .so
a. Rename the exposed methods from myCppApp into Java-compatible prototypes
extern "C" {
JNIEXPORT void JNICALL Java_com_sample_nativebridge_OnApplicationStart(JNIEnv env, jobject thiz);
JNIEXPORT jboolean JNICALL Java_com_sample_nativebridge_OnApplcationUpdate(JNIEnv env, jobject thiz);
JNIEXPORT void JNICALL Java_com_sample_nativebridge_OnScreenTouched(JNIEnv env, jobject thiz, jint x, jint y);
}
b. Create the Java class nativebridge (case sensitive) in the package com.sample (you need to respect the names in order to get correct linkage to native)
class nativebridge {
    public static native void OnApplicationStart();
    public static native boolean OnApplicationUpdate();
    public static native void OnScreenTouched(int x, int y);
}
c. add the load library statement in the native bridge in order to have your library loaded at runtime
class nativebridge {
    ....
    static {
        System.loadLibrary("myCppApp");
        // notice that the lib prefix and .so suffix aren't part of the name
    }
}
d. Compile your CPP code with the armeabi compiler and create libmyCppApp.so (the NDK build system, or whatever you'd like; I would go with the NDK, it's quite easy: just go into the folder with Android.mk and call $ANDROID_NDK_BUILD_PATH/build)
At this point you will need to create a .mk file that will compile your myCppApp code, but this is out of scope; you will need to do this research on your own (it's quite trivial once you get the hang of it).
e. Use the native methods from the bridge inside your Java app wherever you see fit.
A very good tip would be to go through a hello world sample of ndk :
http://www.ntu.edu.sg/home/ehchua/programming/android/android_ndk.html
Enjoy.
I am trying to print out all of the capture devices that are supported using the getDeviceList() method in the CaptureDeviceManager class, but the returned Vector has a size of 0.
Why is that? I have a webcam that works - so there should be at least one. I am running Mac OS X Lion - using JMF 2.1.1e.
Thanks!
CaptureDeviceManager.getDeviceList(Format format) does not detect devices. Instead it reads from the JMF registry which is the jmf.properties file. It searches for the jmf.properties file in the classpath.
If your JMF install has succeeded, then the classpath would have been configured to include all the relevant JMF jars and directories. The JMF install comes with a jmf.properties file included in the 'lib' folder under the JMF installation directory. This means the jmf.properties would be located by JMStudio and you would usually see the JMStudio application executing correctly. (If your JMF install is under 'C:\Program Files', then run as administrator to get around UAC)
When you create your own application to detect the devices, the problem you described above might occur. I have seen a few questions related to the same problem. This is because your application's classpath might be different and might not include the environment classpath. Check out your IDE's properties here. The problem is that CaptureDeviceManager cannot find the jmf.properties file because it is not there.
As you have found out correctly, you can copy the jmf.properties file from the JMF installation folder. It would contain the correct device list since JMF detects it during the install (Check it out just to make sure anyway).
If you want do device detection yourself, then create an empty jmf.properties file and put it somewhere in your classpath (it might throw a java.io.EOFException initially during execution but that's properly handled by the JMF classes). Then use the following code for detecting webcams...
import javax.media.*;
import java.util.*;

public class DeviceLister {
    public static void main(String[] args) {
        // Running VFWAuto first detects the devices and records them in the JMF registry
        VFWAuto vfwObj = new VFWAuto();
        Vector devices = CaptureDeviceManager.getDeviceList(null);
        Enumeration deviceEnum = devices.elements();
        System.out.println("Device count : " + devices.size());
        while (deviceEnum.hasMoreElements()) {
            CaptureDeviceInfo cdi = (CaptureDeviceInfo) deviceEnum.nextElement();
            System.out.println("Device : " + cdi.getName());
        }
    }
}
The code for the VFWAuto class is given below. This is part of the JMStudio source code. You can get a good idea on how the devices are detected and recorded in the registry. Put both classes in the same package when you test. Disregard the main method in the VFWAuto class.
import com.sun.media.protocol.vfw.VFWCapture;
import java.util.*;
import javax.media.*;
public class VFWAuto {
public VFWAuto() {
Vector devices = (Vector) CaptureDeviceManager.getDeviceList(null).clone();
Enumeration deviceEnum = devices.elements();
while (deviceEnum.hasMoreElements()) {
CaptureDeviceInfo cdi = (CaptureDeviceInfo) deviceEnum.nextElement();
String name = cdi.getName();
if (name.startsWith("vfw:"))
CaptureDeviceManager.removeDevice(cdi);
}
int nDevices = 0;
for (int i = 0; i < 10; i++) {
String name = VFWCapture.capGetDriverDescriptionName(i);
if (name != null && name.length() > 1) {
System.err.println("Found device " + name);
System.err.println("Querying device. Please wait...");
com.sun.media.protocol.vfw.VFWSourceStream.autoDetect(i);
nDevices++;
}
}
}
public static void main(String [] args) {
VFWAuto a = new VFWAuto();
System.exit(0);
}
}
Assuming you are on a Windows platform and you have a working web-cam, then this code should detect the device and populate the jmf.properties file. On the next run you can also comment out the VFWAuto section and it's object references and you can see that CaptureDeviceManager reads from the jmf.properties file.
The VFWAuto class is part of jmf.jar. You can also see the DirectSoundAuto and JavaSoundAuto classes for detecting audio devices in the JMStudio sample source code. Try it out the same way as you did for VFWAuto.
My configuration was Windows 7 64 bit + JMF 2.1.1e windows performance pack + a web-cam.
I had the same issue and I solved it by invoking flush() on my ObjectOutputStream object.
According to the API documentation for ObjectInputStream's constructor:
The stream header containing the magic number and version number are read from the stream and verified. This method will block until the corresponding ObjectOutputStream has written and flushed the header.
This is a very important point to be aware of when trying to send objects in both directions over a socket because opening the streams in the wrong order will cause deadlock.
Consider for example what would happen if both client and server tried to construct an ObjectInputStream from a socket's input stream, prior to either constructing the corresponding ObjectOutputStream. The ObjectInputStream constructor on the client would block, waiting for the magic number and version number to arrive over the connection, while at the same time the ObjectInputStream constructor on the server side would also block for the same reason. Hence, deadlock.
Because of this, you should always make it a practice in your code to open the ObjectOutputStream and flush it first, before you open the ObjectInputStream. The ObjectOutputStream constructor will not block, and invoking flush() will force the magic number and version number to travel over the wire. If you follow this practice in both your client and server, you shouldn't have a problem with deadlock.
Credit goes to Tim Rohaly and his explanation here.
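For illustration, a minimal sketch of the safe ordering on either end of a connected Socket:

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.Socket;

public final class PeerStreams {
    final ObjectOutputStream out;
    final ObjectInputStream in;

    // Use the same order on both the client and the server.
    PeerStreams(Socket socket) throws IOException {
        out = new ObjectOutputStream(socket.getOutputStream());
        out.flush(); // pushes the serialization header to the peer immediately
        in = new ObjectInputStream(socket.getInputStream()); // reads the peer's header; no deadlock
    }
}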
Before calling CaptureDeviceManager.getDeviceList(), the available devices must be loaded into memory first.
You can do this manually by running JMFRegistry after installing JMF,
or do it programmatically with the help of the extension library FMJ (Free Media in Java). Here is the code:
import java.lang.reflect.Field;
import java.util.Vector;
import javax.media.*;
import javax.media.format.RGBFormat;
import net.sf.fmj.media.cdp.GlobalCaptureDevicePlugger;
public class FMJSandbox {
static {
System.setProperty("java.library.path", "D:/fmj-sf/native/win32-x86/");
try {
final Field sysPathsField = ClassLoader.class.getDeclaredField("sys_paths");
sysPathsField.setAccessible(true);
sysPathsField.set(null, null);
} catch (Exception e) {
e.printStackTrace();
}
}
public static void main(String args[]) {
GlobalCaptureDevicePlugger.addCaptureDevices();
Vector deviceInfo = CaptureDeviceManager.getDeviceList(new RGBFormat());
System.out.println(deviceInfo.size());
for (Object obj : deviceInfo ) {
System.out.println(obj);
}
}
}
Here is the output:
USB2.0 Camera : civil:\\?\usb#vid_5986&pid_02d3&mi_00#7&584a19f&0&0000#{65e8773d-8f56-11d0-a3b9-00a0c9223196}\global
RGB, -1-bit, Masks=-1:-1:-1, PixelStride=-1, LineStride=-1