Is there a Java library for accelerated vector computations?

I'm looking for a Java library that allows fast computations with vectors (and maybe matrices too).
By fast I mean that it takes advantage of GPU processing and/or SSE instructions. I'm wondering if it's possible to find something that is as portable as possible. I recognize that the JVM provides a thick abstraction layer over the hardware.
I've come across JCUDA, but there's a drawback: on a computer without an Nvidia graphics card it has to run in emulation mode (so I suspect it won't be as efficient as expected). Has anyone already tried it?

What about OpenCL? It should provide a good starting point for this kind of optimized operation.
There are many Java bindings for it, starting with jocl (but take a look also at JavaCL, or at LWJGL, which added OpenCL support in version 2.6).

If by fast you mean high speed rather than requiring support for your particular hardware, I'd recommend Colt. Vectors are called 1-d matrices in this library.
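For illustration, here is a minimal sketch of a dot product and a matrix-vector product with Colt's cern.colt.matrix classes (the class and method names are from Colt's standard API):

```java
import cern.colt.matrix.DoubleFactory2D;
import cern.colt.matrix.DoubleMatrix1D;
import cern.colt.matrix.DoubleMatrix2D;
import cern.colt.matrix.impl.DenseDoubleMatrix1D;
import cern.colt.matrix.linalg.Algebra;

public class ColtExample {
    public static void main(String[] args) {
        DoubleMatrix1D x = new DenseDoubleMatrix1D(new double[] {1, 2, 3});
        DoubleMatrix1D y = new DenseDoubleMatrix1D(new double[] {4, 5, 6});

        // Dot product of two "1-d matrices"
        double dot = x.zDotProduct(y); // 1*4 + 2*5 + 3*6 = 32

        // Matrix-vector product via the Algebra helper
        DoubleMatrix2D a = DoubleFactory2D.dense.make(
                new double[][] {{1, 0, 0}, {0, 2, 0}, {0, 0, 3}});
        DoubleMatrix1D ax = new Algebra().mult(a, x);

        System.out.println(dot);
        System.out.println(ax);
    }
}
```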

I'd recommend using UJMP (it wraps most if not all of the high-speed Java matrix libraries) and waiting for a decent GPGPU implementation to be written for it (I started hacking on it with JavaCL a while ago, but it needs a serious rewrite, maybe using the ScalaCLv2 that's in the works).
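As a rough idea of what UJMP code looks like, here is a hedged sketch; the Factory methods below are from recent UJMP versions, so check the docs for your release:

```java
// Hedged sketch of UJMP's org.ujmp.core.Matrix facade; UJMP dispatches
// operations like mtimes to the fastest available backend it wraps.
import org.ujmp.core.Matrix;

public class UjmpExample {
    public static void main(String[] args) {
        Matrix a = Matrix.Factory.rand(1000, 1000);
        Matrix b = Matrix.Factory.rand(1000, 1000);

        Matrix c = a.mtimes(b); // matrix-matrix product
        Matrix d = a.plus(b);   // element-wise sum

        System.out.println(c.getAsDouble(0, 0) + " " + d.getAsDouble(0, 0));
    }
}
```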

Related

Java best practices for vectorized computations

I'm researching methods for computing expensive vector operations in Java, e.g. dot-products or multiplications between large matrices. There are a few good threads on here on this topic, like this and this. It appears that there is no reliable way of having the JIT compile code to use CPU vector instructions (SSE2, AVX, MMX...). Moreover, high-performance linear algebra libraries (ND4J, jblas, ...) do in fact make JNI calls to BLAS/LAPACK libraries for the core routines. And I understand BLAS/LAPACK packages to be the de facto standard choices for native linear algebra computations.
On the other hand, others (JAMA, ...) implement algorithms in pure Java without native calls.
My questions are:
What are the best practices here?
Is making native calls to BLAS/LAPACK actually a recommended choice? Are there other libraries worth considering?
Is the overhead of JNI calls negligible compared to the performance gain? Does anyone have experience as to where the threshold lies (e.g. how small an input must be for JNI calls to become more expensive than a pure Java routine)?
How big is the portability tradeoff?
I hope this question could be of help both for those who develop their own computation routines, and for those who just want to make an educated choice between different implementations.
Insights are appreciated!
There are no clear best practices for every case. Whether you could/should use a pure Java solution (not using SIMD instructions) or (optimized with SIMD) native code through JNI depends on your particular application and specifically the size of your arrays and possible restrictions on the target system.
There could be a requirement that you are not allowed to install specific native libraries on the target system and BLAS is not already installed. In that case you simply have to use a Java library.
Pure Java libraries tend to perform better for arrays with length much smaller than 100 and at some point after that you get better performance using native libraries through JNI. As always, your mileage may vary.
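To locate that crossover on your own hardware, a rough harness like the one below can help. It assumes netlib-java's com.github.fommil.netlib.BLAS facade as the JNI-backed BLAS (an assumption; any BLAS wrapper works), and for serious numbers you would use JMH with proper warmup instead of System.nanoTime:

```java
// Rough harness for finding the pure-Java vs JNI-BLAS crossover for a
// dot product. Timings from a single cold run are noisy; treat the
// output as a hint and use JMH for a real benchmark.
import com.github.fommil.netlib.BLAS;

public class DotCrossover {
    static double javaDot(double[] x, double[] y) {
        double s = 0.0;
        for (int i = 0; i < x.length; i++) s += x[i] * y[i];
        return s;
    }

    public static void main(String[] args) {
        BLAS blas = BLAS.getInstance();
        for (int n : new int[] {16, 64, 256, 1024, 65536}) {
            double[] x = new double[n], y = new double[n];
            for (int i = 0; i < n; i++) { x[i] = i; y[i] = n - i; }

            long t0 = System.nanoTime();
            double a = javaDot(x, y);
            long t1 = System.nanoTime();
            double b = blas.ddot(n, x, 1, y, 1);
            long t2 = System.nanoTime();

            System.out.printf("n=%d java=%dns blas=%dns (%.1f vs %.1f)%n",
                    n, t1 - t0, t2 - t1, a, b);
        }
    }
}
```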
Pertinent benchmarks have been performed (in random order):
http://ojalgo.org/performance_ejml.html
http://lessthanoptimal.github.io/Java-Matrix-Benchmark/
Performance of Java matrix math libraries?
These benchmarks can be as confusing as they are informative. One library may be faster for some operation and slower for some other. Also keep in mind that there may be more than one implementation of BLAS available for your system. I currently have three installed on my system: blas, atlas and openblas. Apart from choosing a Java library wrapping a BLAS implementation, you also have to choose the underlying BLAS implementation.
This answer has a fairly up-to-date list, except it doesn't mention nd4j, which is rather new. Keep in mind that jeigen depends on Eigen, not on BLAS.

Using Java with Nvidia GPUs (CUDA)

I'm working on a business project that is done in Java, and it needs huge computation power to compute business markets. Simple math, but with huge amount of data.
We ordered some CUDA GPUs to try it with and since Java is not supported by CUDA, I'm wondering where to start. Should I build a JNI interface? Should I use JCUDA or are there other ways?
I don't have experience in this field, and I would appreciate it if someone could point me to something so I can start researching and learning.
First of all, you should be aware of the fact that CUDA will not automagically make computations faster. On the one hand, because GPU programming is an art, and it can be very, very challenging to get it right. On the other hand, because GPUs are well-suited only for certain kinds of computations.
This may sound confusing, because you can basically compute anything on the GPU. The key point is, of course, whether you will achieve a good speedup or not. The most important classification here is whether a problem is task parallel or data parallel. The first one refers, roughly speaking, to problems where several threads are working on their own tasks, more or less independently. The second one refers to problems where many threads are all doing the same - but on different parts of the data.
The latter is the kind of problem that GPUs are good at: They have many cores, and all the cores do the same, but operate on different parts of the input data.
You mentioned that you have "simple math but with huge amount of data". Although this may sound like a perfectly data-parallel problem and thus like it was well-suited for a GPU, there is another aspect to consider: GPUs are ridiculously fast in terms of theoretical computational power (FLOPS, Floating Point Operations Per Second). But they are often throttled down by the memory bandwidth.
This leads to another classification of problems, namely whether they are memory bound or compute bound.
The first one refers to problems where the number of instructions that are done for each data element is low. For example, consider a parallel vector addition: You'll have to read two data elements, then perform a single addition, and then write the sum into the result vector. You will not see a speedup when doing this on the GPU, because the single addition does not compensate for the efforts of reading/writing the memory.
The second term, "compute bound", refers to problems where the number of instructions is high compared to the number of memory reads/writes. For example, consider a matrix multiplication: The number of instructions will be O(n^3) when n is the size of the matrix. In this case, one can expect that the GPU will outperform a CPU at a certain matrix size. Another example could be when many complex trigonometric computations (sine/cosine etc) are performed on "few" data elements.
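To make the memory-bound/compute-bound distinction concrete, here is a back-of-the-envelope arithmetic-intensity calculation (flops per byte of memory traffic) for the two examples above; the 2n^3 flop and 3n^2 element counts are the usual textbook estimates for a naive matrix multiplication:

```java
// Back-of-the-envelope arithmetic intensity, illustrating why vector
// addition stays memory bound while matrix multiplication becomes
// compute bound as n grows. Counts are estimates, not measurements.
public class ArithmeticIntensity {
    public static void main(String[] args) {
        int n = 1024;

        // Vector add: n flops, 3n doubles moved (read x, read y, write z)
        double addIntensity = (double) n / (3.0 * n * 8);

        // Naive matmul: ~2n^3 flops, ~3n^2 doubles moved
        double mmIntensity = 2.0 * n * n * n / (3.0 * n * n * 8);

        System.out.printf("vector add: %.3f flop/byte%n", addIntensity); // ~0.04
        System.out.printf("matmul:     %.1f flop/byte%n", mmIntensity);  // grows with n
    }
}
```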
As a rule of thumb: You can assume that reading/writing one data element from the "main" GPU memory has a latency of about 500 instructions....
Therefore, another key point for the performance of GPUs is data locality: If you have to read or write data (and in most cases, you will have to ;-)), then you should make sure that the data is kept as close as possible to the GPU cores. GPUs thus have certain memory areas (referred to as "local memory" or "shared memory") that are usually only a few KB in size, but particularly efficient for data that is about to be involved in a computation.
So to emphasize this again: GPU programming is an art that is only remotely related to parallel programming on the CPU. Things like threads in Java, with all the concurrency infrastructure like ThreadPoolExecutors, ForkJoinPools etc., might give the impression that you just have to split your work somehow and distribute it among several processors. On the GPU, you may encounter challenges on a much lower level: occupancy, register pressure, shared memory pressure, memory coalescing... just to name a few.
However, when you have a data-parallel, compute-bound problem to solve, the GPU is the way to go.
A general remark: You specifically asked for CUDA. But I'd strongly recommend you to also have a look at OpenCL. It has several advantages. First of all, it's a vendor-independent, open industry standard, and there are implementations of OpenCL by AMD, Apple, Intel and NVIDIA. Additionally, there is much broader support for OpenCL in the Java world. The only case where I'd rather settle for CUDA is when you want to use the CUDA runtime libraries, like CUFFT for FFT or CUBLAS for BLAS (matrix/vector operations). Although there are approaches for providing similar libraries for OpenCL, they cannot directly be used from the Java side, unless you create your own JNI bindings for these libraries.
You might also find it interesting to hear that in October 2012, the OpenJDK HotSpot group started the project "Sumatra": http://openjdk.java.net/projects/sumatra/ . The goal of this project is to provide GPU support directly in the JVM, with support from the JIT. The current status and first results can be seen in their mailing list at http://mail.openjdk.java.net/mailman/listinfo/sumatra-dev
However, a while ago, I collected some resources related to "Java on the GPU" in general. I'll summarize these again here, in no particular order.
(Disclaimer: I'm the author of http://jcuda.org/ and http://jocl.org/ )
(Byte)code translation and OpenCL code generation:
https://github.com/aparapi/aparapi : An open-source library that was created and is actively maintained by AMD. In a special "Kernel" class, one can override a specific method which should be executed in parallel. The bytecode of this method is loaded at runtime using its own bytecode reader. The code is translated into OpenCL code, which is then compiled using the OpenCL compiler. The result can then be executed on the OpenCL device, which may be a GPU or a CPU. If the compilation into OpenCL is not possible (or no OpenCL is available), the code will still be executed in parallel, using a thread pool. (A minimal sketch follows after this list.)
https://github.com/pcpratts/rootbeer1 : An open-source library for converting parts of Java into CUDA programs. It offers dedicated interfaces that may be implemented to indicate that a certain class should be executed on the GPU. In contrast to Aparapi, it tries to automatically serialize the "relevant" data (that is, the complete relevant part of the object graph!) into a representation that is suitable for the GPU.
https://code.google.com/archive/p/java-gpu/ : A library for translating annotated Java code (with some limitations) into CUDA code, which is then compiled into a library that executes the code on the GPU. The library was developed in the context of a PhD thesis, which contains in-depth background information about the translation process.
https://github.com/ochafik/ScalaCL : Scala bindings for OpenCL. Allows special Scala collections to be processed in parallel with OpenCL. The functions that are called on the elements of the collections can be usual Scala functions (with some limitations) which are then translated into OpenCL kernels.
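To give an impression of the Aparapi approach mentioned at the top of this list, here is a minimal vector-addition sketch. It assumes the com.aparapi package names of the current open-source fork (older releases used com.amd.aparapi):

```java
// Minimal Aparapi sketch: the overridden run() method is translated
// from bytecode to OpenCL at runtime and executed on the GPU, or falls
// back to a Java thread pool if no OpenCL device is available.
import com.aparapi.Kernel;
import com.aparapi.Range;

public class VectorAddKernel {
    public static void main(String[] args) {
        final int n = 1_000_000;
        final float[] a = new float[n], b = new float[n], sum = new float[n];
        for (int i = 0; i < n; i++) { a[i] = i; b[i] = 2 * i; }

        Kernel kernel = new Kernel() {
            @Override public void run() {
                int i = getGlobalId();   // one work-item per element
                sum[i] = a[i] + b[i];
            }
        };
        kernel.execute(Range.create(n)); // compile to OpenCL and run
        kernel.dispose();

        System.out.println(sum[42]);     // 126.0
    }
}
```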
Language extensions
http://www.ateji.com/px/index.html : A language extension for Java that allows parallel constructs (e.g. parallel for loops, OpenMP style) which are then executed on the GPU with OpenCL. Unfortunately, this very promising project is no longer maintained.
http://www.habanero.rice.edu/Publications.html (JCUDA) : A library that can translate special Java Code (called JCUDA code) into Java- and CUDA-C code, which can then be compiled and executed on the GPU. However, the library does not seem to be publicly available.
https://www2.informatik.uni-erlangen.de/EN/research/JavaOpenMP/index.html : A Java language extension for OpenMP constructs, with a CUDA backend.
Java OpenCL/CUDA binding libraries
https://github.com/ochafik/JavaCL : Java bindings for OpenCL: An object-oriented OpenCL library, based on auto-generated low-level bindings
http://jogamp.org/jocl/www/ : Java bindings for OpenCL: An object-oriented OpenCL library, based on auto-generated low-level bindings
http://www.lwjgl.org/ : Java bindings for OpenCL: Auto-generated low-level bindings and object-oriented convenience classes
http://jocl.org/ : Java bindings for OpenCL: Low-level bindings that are a 1:1 mapping of the original OpenCL API
http://jcuda.org/ : Java bindings for CUDA: Low-level bindings that are a 1:1 mapping of the original CUDA API
Miscellaneous
http://sourceforge.net/projects/jopencl/ : Java bindings for OpenCL. Seems to be no longer maintained since 2010.
http://www.hoopoe-cloud.com/ : Java bindings for CUDA. Seems to be no longer maintained.
From the research I have done, if you are targeting Nvidia GPUs and have decided to use CUDA over OpenCL, I found three ways to use the CUDA API in Java.
JCuda (or an alternative) - http://www.jcuda.org/. This seems like the best solution for the problems I am working on. Many libraries such as CUBLAS are available through JCuda. Kernels are still written in C, though. (A short JCublas sketch follows after this list.)
JNI - JNI interfaces are not my favorite to write, but are very powerful and would allow you to do anything CUDA can do.
JavaCPP - This basically lets you make a JNI interface in Java without writing C code directly. There is an example of how to use this with CUDA Thrust here: "What is the easiest way to run working CUDA code in Java?". To me, it seems like you might as well just write a JNI interface.
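For a taste of the JCuda option above, here is a minimal SAXPY sketch using the JCublas wrapper, so no kernel code is needed. It follows the structure of the classic JCublas sample from jcuda.org; method names may differ slightly between JCuda versions:

```java
// Single-precision SAXPY (y = alpha*x + y) on the GPU via JCublas.
import jcuda.Pointer;
import jcuda.Sizeof;
import jcuda.jcublas.JCublas;

public class JCublasSaxpy {
    public static void main(String[] args) {
        int n = 1 << 20;
        float alpha = 2.5f;
        float[] x = new float[n], y = new float[n];
        for (int i = 0; i < n; i++) { x[i] = i; y[i] = 1.0f; }

        JCublas.cublasInit();

        // Allocate device memory and copy the host vectors over
        Pointer dX = new Pointer(), dY = new Pointer();
        JCublas.cublasAlloc(n, Sizeof.FLOAT, dX);
        JCublas.cublasAlloc(n, Sizeof.FLOAT, dY);
        JCublas.cublasSetVector(n, Sizeof.FLOAT, Pointer.to(x), 1, dX, 1);
        JCublas.cublasSetVector(n, Sizeof.FLOAT, Pointer.to(y), 1, dY, 1);

        JCublas.cublasSaxpy(n, alpha, dX, 1, dY, 1);   // runs on the GPU

        // Copy the result back and clean up
        JCublas.cublasGetVector(n, Sizeof.FLOAT, dY, 1, Pointer.to(y), 1);
        JCublas.cublasFree(dX);
        JCublas.cublasFree(dY);
        JCublas.cublasShutdown();

        System.out.println(y[10]);   // 2.5*10 + 1 = 26.0
    }
}
```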
All of these answers basically are just ways of using C/C++ code in Java. You should ask yourself why you need to use Java and if you can't do it in C/C++ instead.
If you like Java and know how to use it and don't want to work with all the pointer management and what-not that comes with C/C++ then JCuda is probably the answer. On the other hand, the CUDA Thrust library and other libraries like it can be used to do a lot of the pointer management in C/C++ and maybe you should look at that.
If you like C/C++ and don't mind pointer management, but there are other constraints forcing you to use Java, then JNI might be the best approach. Though, if your JNI methods are just going to be wrappers for kernel commands, you might as well just use JCuda.
There are a few alternatives to JCuda, such as Cuda4J and Rootbeer, but those do not seem to be maintained, whereas at the time of writing JCuda supports CUDA 10.1, which is the most up-to-date CUDA SDK.
Additionally, there are a few Java libraries that use CUDA, such as deeplearning4j and Hadoop, that may be able to do what you are looking for without requiring you to write kernel code directly. I have not looked into them too much, though.
I'd start by using one of the projects out there for Java and CUDA: http://www.jcuda.org/
Marco13 already provided an excellent answer.
In case you are searching for a way to use the GPU without implementing CUDA/OpenCL kernels, I would like to add a reference to the finmath-lib-cuda-extensions (finmath-lib-gpu-extensions) http://finmath.net/finmath-lib-cuda-extensions/ (disclaimer: I am the maintainer of this project).
The project provides an implementation of "vector classes" - to be precise, an interface called RandomVariable - which provides arithmetic operations and reductions on vectors. There are implementations for the CPU and the GPU, and there are implementations using algorithmic differentiation or plain valuations.
The performance improvements on the GPU are currently small (but for vectors of size 100,000 you may see a performance improvement of a factor > 10). This is due to the small kernel sizes; it will improve in a future version.
The GPU implementations use JCuda and JOCL and are available for Nvidia and ATI GPUs.
The library is Apache 2.0 licensed and available via Maven Central.
There is not much information on the nature of the problem and the data, so it is difficult to advise. However, I would recommend assessing the feasibility of other solutions that can be easier to integrate with Java and enable horizontal as well as vertical scaling. The first one I would suggest looking at is Apache Spark (https://spark.apache.org/), an open-source analytical engine that is available on Microsoft Azure and probably on other cloud IaaS providers too. If you stick with involving your GPU, then the suggestion is to look at other GPU-supported analytical databases on the market that fit the budget of your organisation.

MKL Accelerated Math Libraries for Java

I've looked at the related threads on StackOverflow and Googled with not much luck. I'm also very new to Java (I'm coming from a C# and .NET background) so please bear with me. There is so much available in the Java world it's pretty overwhelming.
I'm starting on a new Java-on-Linux project that requires some heavy and highly repetitious numerical calculations (i.e. statistics, FFT, linear algebra, matrices, etc.). So maximizing the performance of the mathematical operations is a requirement, as is ensuring the math is correct. Hence I am interested in finding a Java library that leverages native acceleration such as MKL and is proven (so commercial options are definitely a possibility here).
In the .NET space there are highly optimized and MKL accelerated commercial Mathematical libraries such as Centerspace NMath and Extreme Optimization. Is there anything comparable in Java?
Most of the math libraries I have found for Java either do not seem to be actively maintained (such as Colt) or do not appear to leverage MKL or other native acceleration (such as Apache Commons Math).
I have considered trying to leverage MKL directly from Java myself (e.g. via JNI), but being new to Java (let alone to interoperating between Java and native libraries), it seemed smarter to find a Java library that has already done this correctly and efficiently, and is proven.
Again, I apologize if I am mistaken or misguided (even regarding the libraries I've mentioned) and for my ignorance of the Java offerings. It's a whole new world for me coming from the heavily commercialized Microsoft stack, so I could easily be mistaken about where to look and about the Java libraries I've mentioned. I would greatly appreciate any help or advice.
For things like FFT (bulk operations on arrays), the range check in Java might kill your performance (at least it did recently). You probably want to look for libraries whose loops are written so that the index bounds are provably in range.
According to the HotSpot spec:
"The Java programming language specification requires array bounds checking to be performed with each array access. An index bounds check can be eliminated when the compiler can prove that an index used for an array access is within bounds."
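In practice, this means writing loops so that the bound is trivially provable. A small Java sketch of loop shapes that typically do and don't allow HotSpot to eliminate the per-access check (whether elimination actually happens depends on the JVM version and JIT heuristics):

```java
// Loop shapes and bounds-check elimination: a rough illustration.
public class BoundsCheckFriendly {
    // The JIT can typically prove i < a.length and hoist the check
    // out of the loop entirely.
    static double sum(double[] a) {
        double s = 0.0;
        for (int i = 0; i < a.length; i++) {
            s += a[i];
        }
        return s;
    }

    // Harder for the JIT: the bound comes from a different array, so a
    // check (or a deoptimization guard) may remain on every access.
    static double sumMismatched(double[] a, double[] b) {
        double s = 0.0;
        for (int i = 0; i < b.length; i++) {
            s += a[i];
        }
        return s;
    }
}
```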
I would actually look at JNI and do your bulk operations there if they are individually very large. The longer the operation takes (e.g. solving a large linear system, or a large FFT), the more it's worth using JNI (even if you have to memcpy there and back).
Personally, I agree with your general approach, offloading the heavyweight maths from Java to a commercial-grade library.
Googling around for Java / MKL integration, I found this, so what you propose is technically possible. Another option to consider would be the NAG libraries. I use MKL all the time, though I program in Fortran so there are no integration issues. I can certainly recommend their quality and performance. We tested, for instance, the MKL version of FFTW against a version we built from sources ourselves. The MKL implementation was faster by a small integer multiple.
If you have concerns about the performance of calling a library through JNI, then you should plan to structure your application to make fewer larger calls in preference to more smaller ones. As to the difficulties of using JNI, my view (I've done some JNI programming) is that the initial effort you have to make in learning how to use the interface will be well rewarded.
I note that you don't seem to be overwhelmed yet with suggestions of what Java maths libraries you could use. Like you I would be suspicious of research-quality, low-usage Java libraries trawled from the net.
You'd probably be better off avoiding them, I think. I could be wrong (it's not an area I'm too familiar with, so don't take too much from this unless a few others agree with me), but calling into JNI has quite a large overhead, since the call has to leave the JRE entirely. So unless you're grouping a lot of work into a single function call, the slight benefit of the external libraries will be hugely outweighed by the cost of calling them. I'd give up looking for an MKL library and find an optimized pure Java one. I can't say I know of any better than the standard libraries to recommend, though, sorry.

High volume SVM (machine learning) system

I'm working on a possible machine learning project that would be expected to do high-speed computations for machine learning using SVMs (support vector machines) and possibly some ANNs.
I'm reasonably comfortable working with these in matlab, but primarily on small datasets, just for experimentation. I'm wondering if this matlab-based approach will scale, or should I be looking into something else? C++ / GPU-based computing? Java wrapping of the matlab code and pushing it onto App Engine?
Incidentally, there seems to be a lot of literature on GPUs, but not much on how useful they are for machine learning applications using matlab. And with the cheapest CUDA-enabled GPU money can buy, is it even worth the trouble?
I work on pattern recognition problems. Let me give you some advice if you plan to work effectively on SVM/ANN problems and really don't have access to a computer cluster:
1) Don't use Matlab. Use Python and its large number of numerical libraries instead for Visualisation/Analysis of your computations.
2) Critical sections are better implemented in C. You can then integrate them with your Python scripts very easily.
3) CUDA/GPU is not a solution if you mostly deal with non-polynomial time complexity problems, which is typical in machine learning, so it brings no great speed-up; dot/matrix products are only a tiny part of SVM calculations - you will still have to deal with feature extraction and list/object processing. Instead, try to optimize your algorithms and devise effective algorithmic methods. If you need parallelism (e.g. for ANNs), use threads or processes.
4) Use the GCC compiler to compile your C program - it will build very fast executable code. To speed up numerical computations you can try GCC optimization flags (e.g. for Streaming SIMD Extensions).
5) Run your program on any modern CPU under Linux OS.
For really good performance, use Linux clusters.
Both libsvm and SVM light have matlab interfaces. Besides, most learning tasks are trivially parallelizable, so take a look at matlab commands like parfor and the rest of the Parallel Computing Toolbox.
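Since the rest of this thread is Java-centric, it may be worth noting that libsvm also ships a pure-Java port. A minimal training sketch using its bundled API (the parameter values are illustrative, not tuned):

```java
// Minimal libsvm (Java port) training example on a toy dataset.
import libsvm.*;

public class LibsvmExample {
    public static void main(String[] args) {
        // Two 2-d training points with labels +1 and -1
        double[][] features = {{0.0, 0.1}, {0.9, 1.0}};
        double[] labels = {1, -1};

        svm_problem prob = new svm_problem();
        prob.l = features.length;
        prob.y = labels;
        prob.x = new svm_node[features.length][];
        for (int i = 0; i < features.length; i++) {
            prob.x[i] = new svm_node[features[i].length];
            for (int j = 0; j < features[i].length; j++) {
                svm_node node = new svm_node();
                node.index = j + 1;          // libsvm indices are 1-based
                node.value = features[i][j];
                prob.x[i][j] = node;
            }
        }

        svm_parameter param = new svm_parameter();
        param.svm_type = svm_parameter.C_SVC;
        param.kernel_type = svm_parameter.RBF;
        param.gamma = 0.5;
        param.C = 1.0;
        param.eps = 1e-3;
        param.cache_size = 100;

        svm_model model = svm.svm_train(prob, param);
        System.out.println(svm.svm_predict(model, prob.x[0])); // ~1.0
    }
}
```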
I would advise against using Matlab for anything beyond prototyping.
When the project becomes more complex and extensive, the proportion of your own code will grow relative to the functionality provided by matlab and its toolboxes. The more developed the project becomes, the less you benefit from matlab and the more you need the features, libraries and, more importantly, the practices, processes and tools of general-purpose languages.
Scaling a matlab solution is achieved by interfacing with non-matlab code, and I've seen a matlab project turn into nothing more than glue calling modules written in general-purpose languages, causing everyday pain for everyone involved.
If you are comfortable with Java, I'd recommend using it together with some good math library (at least, you can always interface MKL). Even with recent Matlab optimisations, MKL + JVM are much faster - scaling and maintainability are beyond comparison.
C++ with processor-specific intrinsics can provide better performance, but at the price of development time and maintainability. Adding CUDA improves performance further, but the amount of work and specific knowledge is hardly worth it, certainly not if you don't have prior experience with GPU calculations. As soon as you go beyond a single processor, it's much more effective to add another CPU or two to the system than to struggle with GPU calculations.
Nothing as of now will scale beyond a limit. libsvm has a tool for subset selection, to select a set of data points for training. Forget about ANNs: they will not generalize, there is no theory that helps to choose the number of hidden nodes etc., they have to be manually tuned a lot, and they can get trapped in local minima. Go with SVMs only.
Here you can find some semiparametric approximations that can work with a high volume of data very fast:
http://www.dabi.temple.edu/budgetedsvm/
https://robedm.github.io/LIBIRWLS/

Using SIFT for Augmented Reality

I've come across MANY AR libraries/SDKs/APIs, all of them marker-based, until I found this video; from the description and the comments, it looks like he's using SIFT to detect the object and follow it around.
I need to do that for Android, so I'm gonna need a full implementation of SIFT in pure Java.
I'm willing to do that but I need to know how SIFT is used for augmented reality first.
I could make use of any information you give.
In my opinion, trying to implement SIFT for a portable device is madness. SIFT is an image feature extraction algorithm, which includes complex math and certainly requires a lot of computing power. SIFT is also patented.
Still, if you indeed want to go forth with this task, you should do quite some research at first. You need to check things like:
Any variants of SIFT that enhance performance, including different algorithms all around
I would recommend looking into SURF, which is very robust and much faster (but still one of those scary algorithms)
Android NDK (I'll explain later)
Lots and lots of publications
Why Android NDK? Because you'll probably have a much more significant performance gain by implementing the algorithm in a C library that's being used by your Java application.
Before starting anything, make sure you do that research, because it would be a pity to realize halfway through that image feature extraction algorithms are just too much for an Android phone. Implementing such an algorithm so that it provides good results and runs in an acceptable amount of time is a serious endeavor in itself, let alone using it to create an AR application.
As for how you would use that for AR: I guess the descriptors you get from running the algorithm on an image would have to be matched against data saved in a central database, and then the results can be displayed to the user. The features gathered from SURF are supposed to describe an image such that it can then be identified using them. I'm not really experienced at doing that, but there are always resources on the web; you'd probably want to start with generic topics such as object recognition. A simple sketch of the matching step follows below.
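A simple sketch of that matching step in Java: brute-force nearest neighbour over the stored descriptors, with Lowe's ratio test to reject ambiguous matches. The fixed-length float descriptor is an assumption (SIFT's is 128 floats):

```java
// Brute-force descriptor matching with a ratio test.
public class DescriptorMatcher {
    static double dist2(float[] a, float[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) {
            double t = a[i] - b[i];
            d += t * t;
        }
        return d;
    }

    /** Returns the index of the best database match, or -1 if ambiguous. */
    static int match(float[] query, float[][] database, double ratio) {
        double best = Double.MAX_VALUE, second = Double.MAX_VALUE;
        int bestIdx = -1;
        for (int i = 0; i < database.length; i++) {
            double d = dist2(query, database[i]);
            if (d < best) { second = best; best = d; bestIdx = i; }
            else if (d < second) { second = d; }
        }
        // Ratio test on squared distances: the best match must be
        // clearly better than the runner-up to count.
        return best < ratio * ratio * second ? bestIdx : -1;
    }
}
```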
Best of luck :)
I have tried SURF on a 330MHz Symbian mobile and it was still too slow, even with all optimizations and lookup tables. And SIFT should be even slower. Everyone is using FAST for mobiles now. Anyway, feature extraction is not the biggest problem; establishing correspondences and pruning the false positives among them is more difficult.
FAST link
http://svr-www.eng.cam.ac.uk/~er258/work/fast.html
If I were you, I'd look into how (and why) the SIFT feature works (as was said, its Wikipedia page offers a good concise explanation, and for more details check the scientific paper, which is linked from Wikipedia), and then build your own variant that suits your taste, i.e. one with the optimal balance between performance and CPU load needed for your application.
For instance, I think Gaussian smoothing might be replaced by some faster way of smoothing.
Also, when you build your own variant, you don't have to worry about patents (there already are lots of variants, like GLOH).
I would recommend you to start by looking at the features already implemented in the OpenCV library, which include SURF, MSER and others:
http://opencv.willowgarage.com/documentation/cpp/feature_detection.html
These might be enough for your application, and they are faster than SIFT. As mentioned above, SIFT is also patented.
Also, start by making performance tests on your mobile platform, just extracting the features at every frame; this way you'll get an idea of which ones can run in real time and which can't.
Have you tried OpenCV's FAST implementation in the Android port? I've tested it out and it runs blazingly fast.
You can also compute reduced histogram descriptors around the detected FAST keypoints. I've heard of using 3x3 grids rather than SIFT's standard 4x4. That has a decent chance of working in real time if you optimize it heavily with NEON instructions. Otherwise, I'd recommend something fast and simple like the sum of squared or absolute differences over a patch around the keypoints, which is very fast (a small sketch follows below).
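A small sketch of that patch comparison in Java, with grayscale images as int arrays and the patch radius as a free parameter:

```java
// Sum of absolute differences (SAD) between a patch around a keypoint
// in one grayscale image and a candidate location in another.
public class PatchSad {
    /** SAD over a (2r+1)x(2r+1) patch; lower means more similar. */
    static int sad(int[][] imgA, int ax, int ay,
                   int[][] imgB, int bx, int by, int r) {
        int sum = 0;
        for (int dy = -r; dy <= r; dy++) {
            for (int dx = -r; dx <= r; dx++) {
                sum += Math.abs(imgA[ay + dy][ax + dx]
                              - imgB[by + dy][bx + dx]);
            }
        }
        return sum;
    }
}
```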
SIFT is not a panacea. For real time video applications, it's usually overkill.
As always, Wikipedia is a good place to start from : http://en.wikipedia.org/wiki/Scale-invariant_feature_transform, but note that SIFT is patented.
