Using SIFT for Augmented Reality - java

I've come across MANY AR libraries/SDKs/APIs, and all of them are marker-based, until I found this video. From the description and the comments, it looks like he's using SIFT to detect the object and follow it around.
I need to do that for Android, so I'm gonna need a full implementation of SIFT in pure Java.
I'm willing to do that but I need to know how SIFT is used for augmented reality first.
I could make use of any information you give.

In my opinion, trying to implement SIFT for a portable device is madness. SIFT is an image feature extraction algorithm, which includes complex math and certainly requires a lot of computing power. SIFT is also patented.
Still, if you indeed want to go forth with this task, you should do quite some research at first. You need to check things like:
Any variants of SIFT that improve performance, as well as entirely different algorithms
I would recommend looking into SURF, which is very robust and much faster (but still one of those scary algorithms)
Android NDK (I'll explain later)
Lots and lots of publications
Why the Android NDK? Because you'll probably get a much more significant performance gain by implementing the algorithm in a C library that is called from your Java application.
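For illustration, here's roughly what the Java side of such an NDK bridge could look like; the library and method names (arfeatures, extractDescriptors) are made up for this sketch, and the actual extraction would live in C/C++ code built with the NDK:

    public final class NativeFeatures {
        static {
            // loads libarfeatures.so, a hypothetical C/C++ library built with the Android NDK
            System.loadLibrary("arfeatures");
        }

        // Grayscale camera frame in, packed keypoint descriptors out.
        // The heavy per-frame math runs on the native side; Java only marshals data.
        public static native float[] extractDescriptors(byte[] grayFrame, int width, int height);
    }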
Before starting anything, make sure you do that research, because it would be a pity to realize halfway through that image feature extraction algorithms are just too much for an Android phone. Implementing such an algorithm so that it gives good results and runs in an acceptable amount of time is a serious endeavor in itself, let alone using it to create an AR application.
As for how you would use it for AR, I guess the descriptors you get from running the algorithm on an image would have to be matched against data saved in a central database, and then the results can be displayed to the user. The features gathered from SURF are supposed to describe an image such that it can then be identified by them. I'm not really experienced in doing that, but there are always resources on the web. You'd probably want to start with generic topics such as object recognition.
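To make the matching idea concrete, here is a minimal brute-force matcher with Lowe's ratio test, assuming fixed-length float descriptors (128 values for SIFT, 64 for SURF); the "database" array stands in for whatever store of precomputed object descriptors you end up using:

    // Returns the index of the best-matching stored descriptor, or -1 if the match is ambiguous.
    static int matchDescriptor(float[] query, float[][] database) {
        double best = Double.MAX_VALUE, secondBest = Double.MAX_VALUE;
        int bestIndex = -1;
        for (int i = 0; i < database.length; i++) {
            double dist = 0;
            for (int k = 0; k < query.length; k++) {
                double diff = query[k] - database[i][k];
                dist += diff * diff; // squared Euclidean distance
            }
            if (dist < best) {
                secondBest = best;
                best = dist;
                bestIndex = i;
            } else if (dist < secondBest) {
                secondBest = dist;
            }
        }
        // Lowe's ratio test on squared distances: accept only clear winners.
        return best < 0.7 * 0.7 * secondBest ? bestIndex : -1;
    }

In an AR pipeline you would run this (or a faster approximate nearest-neighbour search) for each detected keypoint and then estimate the object's pose from the surviving matches.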
Best of luck :)

I have tried SURF on a 330 MHz Symbian mobile and it was still too slow, even with all optimizations and lookup tables. And SIFT should be even slower. Everyone is using FAST on mobiles now. Anyway, feature extraction is not the biggest problem; establishing correspondences and filtering out the false positives among them is more difficult.
FAST link: http://svr-www.eng.cam.ac.uk/~er258/work/fast.html

If I were you, I'd look into how (and why) the SIFT feature works (as was said, its Wikipedia page offers a good concise explanation, and for more details check the original paper, which is linked from Wikipedia), and then build your own variant that suits your needs, i.e. one with the optimal balance between performance and CPU load for your application.
For instance, I think Gaussian smoothing might be replaced by some faster way of smoothing.
Also, when you build your own variant, you sidestep the patent issue (there are already lots of variants, like GLOH).
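On the smoothing point above: a repeated box blur is a common cheap stand-in for Gaussian smoothing. A sketch of one horizontal pass in pure Java (edge pixels replicated; you would run it a few times horizontally and vertically and tune the radius):

    // One horizontal box-blur pass over a grayscale image stored row-major in 'src'.
    static void boxBlurRow(int[] src, int[] dst, int width, int height, int radius) {
        for (int y = 0; y < height; y++) {
            int rowStart = y * width;
            int sum = 0;
            // prime the running sum for the first pixel's window
            for (int x = -radius; x <= radius; x++) {
                sum += src[rowStart + clamp(x, width)];
            }
            for (int x = 0; x < width; x++) {
                dst[rowStart + x] = sum / (2 * radius + 1);
                // slide the window one pixel to the right
                sum += src[rowStart + clamp(x + radius + 1, width)];
                sum -= src[rowStart + clamp(x - radius, width)];
            }
        }
    }

    static int clamp(int x, int width) {
        return Math.max(0, Math.min(width - 1, x));
    }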

I would recommend you to start by looking at the features already implemented in the OpenCV library, which include SURF, MSER and others:
http://opencv.willowgarage.com/documentation/cpp/feature_detection.html
These might be enough for your application, and they are faster than SIFT. And as mentioned above, SIFT is patented.
Also, start by running performance tests on your mobile platform, just extracting the features at every frame; this way you'll get an idea of which ones can run in real time and which can't.
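A rough per-frame timing harness for that test might look like the sketch below; the class names are from recent OpenCV Java bindings (older Android ports exposed the same idea through FeatureDetector.create()), grayFrame is assumed to be the current camera image as a Mat, and you can swap in whichever detector you're evaluating:

    import android.util.Log;
    import org.opencv.core.Mat;
    import org.opencv.core.MatOfKeyPoint;
    import org.opencv.features2d.FastFeatureDetector;

    final class DetectorBenchmark {
        private final FastFeatureDetector detector = FastFeatureDetector.create();

        // Detect keypoints in one frame and log how long it took.
        long detectAndTime(Mat grayFrame) {
            MatOfKeyPoint keypoints = new MatOfKeyPoint();
            long start = System.nanoTime();
            detector.detect(grayFrame, keypoints);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            Log.d("features", keypoints.rows() + " keypoints in " + elapsedMs + " ms");
            return elapsedMs;
        }
    }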

Have you tried OpenCV's FAST implementation in the Android port? I've tested it out and it runs blazingly fast.
You can also compute reduced histogram descriptors around the detected FAST keypoints. I've heard of 3x3 grids rather than SIFT's standard 4x4. That has a decent chance of working in real time if you optimize it heavily with NEON instructions. Otherwise, I'd recommend something fast and simple like the sum of squared or absolute differences over a patch around each keypoint, which is very fast.
SIFT is not a panacea. For real time video applications, it's usually overkill.
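On the sum-of-differences suggestion above: a plain-Java sum of absolute differences over an 8x8 patch around each keypoint looks like this (the caller must keep keypoints at least 4 pixels from the image border; images are row-major grayscale byte arrays):

    // Lower return value = better match between the two patches.
    static int sad8x8(byte[] imgA, int strideA, int ax, int ay,
                      byte[] imgB, int strideB, int bx, int by) {
        int sum = 0;
        for (int dy = -4; dy < 4; dy++) {
            for (int dx = -4; dx < 4; dx++) {
                int a = imgA[(ay + dy) * strideA + (ax + dx)] & 0xFF; // unsigned pixel value
                int b = imgB[(by + dy) * strideB + (bx + dx)] & 0xFF;
                sum += Math.abs(a - b);
            }
        }
        return sum;
    }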

As always, Wikipedia is a good place to start from : http://en.wikipedia.org/wiki/Scale-invariant_feature_transform, but note that SIFT is patented.

Related

Most performant Bayesian Network Java API available?

I'm currently planning the implementation of a Bayesian network solution for inferring outcome probabilities given known node networks, in a Java application. I've been looking around the web for what Java APIs are available, and have come across a number of them: jSMILE, AgenaRisk, JavaBayes, netica-J, Jayes, WEKA(?), etc.
Now, I'm struggling to find any good comparison of these APIs in terms of performance, usability and price for commercial applications, to know which one is the best to go for. I have tested out the AgenaRisk API, which suited my needs; however, before committing to this product it'd be great to hear if anyone has any knowledge of:
variation in performance between different APIs (or is it negligible? i.e. they all rely on identical fundamental Bayesian calculations?)
robust free alternatives to AgenaRisk?
does it matter if one of these solutions seems to no longer be supported/relies on a very old version of Java (e.g. JavaBayes is on Java 1.1 I believe)?
(Bonus points) are Bayesian networks and Bayesian network classifiers one and the same thing? For example, WEKA advertises itself as providing the latter.
The last post on here looking for a good solution was from 2012, so I'm wondering if anyone would recommend any new solutions that have emerged since, or if it's still a good bet to work with those.
Thanks!
Well for anyone who comes across this and is interested:
Will be figuring this out myself soon and will update this (If I forget nudge me).
Jayes seems to do everything AgenaRisk does if you're simply creating a graph and want to get some beliefs out of it, minus the GUI. It also seems you can choose which inference algorithm to use, which I haven't seen with AgenaRisk (see the sketch at the end of this answer).
I'm going to stick to current solutions to be safe
...yes they are
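For anyone curious what the Jayes option looks like in code, here is a minimal sketch of building a two-node net and querying beliefs; the class and method names are taken from the Jayes examples as I remember them, so double-check them against the current API:

    import java.util.Arrays;
    import java.util.Collections;
    import org.eclipse.recommenders.jayes.BayesNet;
    import org.eclipse.recommenders.jayes.BayesNode;
    import org.eclipse.recommenders.jayes.inference.IBayesInferer;
    import org.eclipse.recommenders.jayes.inference.jtree.JunctionTreeAlgorithm;

    public class JayesSketch {
        public static void main(String[] args) {
            BayesNet net = new BayesNet();

            BayesNode rain = net.createNode("rain");
            rain.addOutcomes("true", "false");
            rain.setProbabilities(0.2, 0.8);

            BayesNode grassWet = net.createNode("grassWet");
            grassWet.addOutcomes("true", "false");
            grassWet.setParents(Arrays.asList(rain));
            // P(grassWet | rain=true) pair first, then P(grassWet | rain=false)
            grassWet.setProbabilities(0.9, 0.1, 0.2, 0.8);

            IBayesInferer inferer = new JunctionTreeAlgorithm();
            inferer.setNetwork(net);
            inferer.setEvidence(Collections.singletonMap(grassWet, "true"));
            double[] beliefs = inferer.getBeliefs(rain); // posterior over rain's outcomes
            System.out.println("P(rain | grassWet=true) = " + beliefs[0]);
        }
    }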

Java chart library for really large data?

I'm looking for a chart library capable of handling a large number of data points - 300 million per chart, and even more. Clearly, drawing, caching and approximation would need to be implemented intelligently there.
Actually I need to represent waveforms, but not only those.
Target platform is Java, data comes from files.
UPD: PC, Swing.
Not Java, but CERN does massive data crunching and their distributions/plots may well have these kinds of data volumes. They use the ROOT package, which is C++. You can download it, although I couldn't see a licence; it's probably open source.
Or alternatively, take a look at R which might do what you need.
I have been happy with my use of JChart2D. Switching to it from JFreeChart saved us considerable processor use, and it has traces that collapse multiple inputs into a mean point for speed and memory savings. I've never used them, since I haven't needed to yet. I have put extremely large sets of data into a normal trace by accident, and it didn't seem to be a problem.
There may be a better charting system out there, but this one gets the job done quickly and effectively; it's free, open source, based on JPanel, and the author is around to answer questions and correct problems.
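For what it's worth, basic JChart2D usage is only a few lines; Trace2DLtd below is the bounded trace that keeps just the newest N points in a ring buffer (the mean-computing traces mentioned above live in the same traces package), so treat this as a sketch rather than a tuned setup:

    import info.monitorenter.gui.chart.Chart2D;
    import info.monitorenter.gui.chart.ITrace2D;
    import info.monitorenter.gui.chart.traces.Trace2DLtd;
    import javax.swing.JFrame;

    public class ChartDemo {
        public static void main(String[] args) {
            Chart2D chart = new Chart2D();
            ITrace2D trace = new Trace2DLtd(100_000); // ring buffer: keeps the newest 100k points
            chart.addTrace(trace);
            for (int i = 0; i < 1_000_000; i++) {
                trace.addPoint(i, Math.sin(i / 1000.0)); // synthetic waveform data
            }
            JFrame frame = new JFrame("waveform");
            frame.getContentPane().add(chart);
            frame.setSize(800, 400);
            frame.setVisible(true);
        }
    }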
I don't see a way to handle that amount of data on an Android phone, whatever library you use. You should think about doing all the processing on a server or in the cloud, and then serving either an approximated data set that captures the chart's shape, or even the finished chart as an image file, so that Android phones can download it from the server without processing the data.
Regards,
stéphane
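Whichever charting library ends up being used, the usual trick for 300 million points is to decimate before plotting: keep the per-bucket min and max so the drawn envelope still looks like the waveform. A plain-Java sketch (it assumes the samples fit in memory or are streamed bucket by bucket from the file):

    // Reduce 'samples' to 'buckets' (min, max) pairs; buckets should be far smaller than samples.length.
    static double[][] decimate(double[] samples, int buckets) {
        double[][] out = new double[buckets][2]; // out[i][0] = min, out[i][1] = max of bucket i
        long n = samples.length;
        for (int i = 0; i < buckets; i++) {
            int from = (int) (n * (long) i / buckets);
            int to = (int) (n * (long) (i + 1) / buckets);
            double min = Double.POSITIVE_INFINITY;
            double max = Double.NEGATIVE_INFINITY;
            for (int j = from; j < to; j++) {
                min = Math.min(min, samples[j]);
                max = Math.max(max, samples[j]);
            }
            out[i][0] = min;
            out[i][1] = max;
        }
        return out;
    }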
I assume that you are talking about a Swing Application.
I make use of JGoodies for all my Swing applications including Graphs and Charts.
It takes a bit of getting used to, but once you are used to it, building UIs is fairly quick and easy.
The only problem is that there is a developer license cost involved.
You can download the Java Webstart examples to have a look at what it is capable of.

Is there a Java library for accelerated vector computations?

I'm looking for a Java lib that allows fast computations with vectors (and maybe matrices too).
By fast I mean that it takes advantage of GPU processing and/or SSE instructions. I'm wondering whether it's possible to find something as portable as possible; I realize the JVM provides a thick abstraction layer over the hardware.
I've come across JCUDA, but there's a drawback: on a computer without an Nvidia graphics card it has to run in emulation mode (so I expect it will not be as efficient as hoped). Has anyone already tried it?
What about OpenCL? It should provide a good starting point for this kind of optimized operation.
There are many Java bindings, starting with jocl (but also take a look at JavaCL, or LWJGL, which added OpenCL support in 2.6).
If by fast you mean high speed rather than requiring support for your particular hardware, I'd recommend Colt. Vectors are called 1-d matrices in this library.
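As a quick taste of Colt (pure Java, so no SSE/GPU acceleration, but well optimized), here is a small sketch; the class names below are from the cern.colt.matrix packages:

    import cern.colt.matrix.DoubleFactory1D;
    import cern.colt.matrix.DoubleFactory2D;
    import cern.colt.matrix.DoubleMatrix1D;
    import cern.colt.matrix.DoubleMatrix2D;
    import cern.colt.matrix.linalg.Algebra;

    public class ColtDemo {
        public static void main(String[] args) {
            DoubleMatrix1D v = DoubleFactory1D.dense.make(new double[] {1, 2, 3});
            DoubleMatrix1D w = DoubleFactory1D.dense.make(new double[] {4, 5, 6});
            double dot = v.zDotProduct(w); // vector dot product

            DoubleMatrix2D a = DoubleFactory2D.dense.make(new double[][] {{1, 0}, {0, 2}});
            DoubleMatrix1D x = DoubleFactory1D.dense.make(new double[] {3, 4});
            DoubleMatrix1D y = new Algebra().mult(a, x); // matrix-vector product

            System.out.println(dot + " / " + y);
        }
    }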
I'd recommend using UJMP (it wraps most if not all of the high-speed Java matrix libraries) and waiting for a decent GPGPU implementation to be written for it (I started hacking one with JavaCL a while ago, but it needs a serious rewrite, maybe using the ScalaCL v2 that's in the works).

Do external libraries make apps slower?

I am building an app that scrapes information from web pages. To do that I have chosen to use an HTML scraper called Jsoup because it's so simple to use. Jsoup also depends on the Apache Commons Lang library (together they make up a total of 385 kB).
So Jsoup will be used to download the page and parse it.
My question is whether the use of these convenience libraries, instead of Android's built-in libraries, will make my app slower (in terms of downloading data and parsing).
I was thinking that the internal libraries would be optimized for Android.
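For context, the jsoup usage in question is only a few lines; the URL and selector below are placeholders:

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;
    import org.jsoup.select.Elements;

    // Download the page and pull out every link: connect() fetches, get() parses.
    static void printLinks(String url) throws java.io.IOException {
        Document doc = Jsoup.connect(url).get();
        Elements links = doc.select("a[href]"); // CSS-style selector
        for (Element link : links) {
            System.out.println(link.attr("abs:href") + " -> " + link.text());
        }
    }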
The next release of jsoup will not require Apache Commons-Lang or any other external dependencies, which brings down the jar size to around 115K.
Internally, jsoup uses standard Java libraries (URL connection, HashMap etc) which are going to be reasonably well Android optimised.
I've spent a good amount of time optimising jsoup's parse execution time and data extractor methods; and certainly if you find any ways to improve it, I'm all ears.
If the question is, "Will external libraries INHERENTLY make my app slower than if I wrote the same code myself?", the answer is generally, "Yes, but not very much."
It will take the JVM some time to load an external library. It's likely that the library has functions or features that you aren't using, and loading these or reading past them will take some time. But in most cases this difference will be trivial, and I wouldn't worry about it unless you are in a highly constrained environment.
If what you mean is, "Can I write code that will do the same function faster than an external library?", the answer is, "Almost certainly yes, but is it worth your time?"
The odds are that any external library you use will have all sorts of features that you don't need but that are included to accommodate the needs of others. The authors of the library don't know exactly what every user is up to, so they have to optimize in a general way. If you wrote your own code, you could make it do exactly what you need and nothing more, optimized for exactly what you are doing.
Whether it's worth the trouble in your particular case is the big question.
The external libraries will also use the internal libraries that are optimized for Android. I guess the real question is: would your custom implementation be faster than the generic implementation of these libraries?
In most cases, third-party libraries solve the problem that you want to solve, but also other problems that you might not need to solve, and it's this part that might hurt performance. You have to find the balance between reinventing the wheel and using optimized code just for your basic needs.
Additionally, if these libraries weren't designed with the Android platform in mind, make sure to test them extensively.
It's the classical build-vs-buy argument.
If run-time performance is really important for your application, then you should consider rolling your own implementation or optimizing the library (assuming it's open source). However, before you do that you should know how good or bad the existing library's performance is, and you won't know that unless you actually use it and get some data.
As a first step I would recommend using the library and collecting data on its performance, OR asking someone who has already used this library on Android for performance numbers. The library may be slow, but if it's acceptable then I guess that's better than rolling your own.
Keep in mind that creating your own implementation will cost you time and money (design, coding, testing and maintenance), so you are trading runtime performance against reuse and reduced development cost.
EDIT: Another important point is that performance is a function of many things, for example the hardware, the Android version and the network. If your target device is running 2.1 or less, you may get a performance boost by moving to 2.2. On the other hand, if you want to target all versions, you have to adopt a different strategy.

MKL Accelerated Math Libraries for Java

I've looked at the related threads on StackOverflow and Googled with not much luck. I'm also very new to Java (I'm coming from a C# and .NET background) so please bear with me. There is so much available in the Java world it's pretty overwhelming.
I'm starting on a new Java-on-Linux project that requires some heavy and highly repetitious numerical calculations (i.e. statistics, FFT, linear algebra, matrices, etc.). So maximizing the performance of the mathematical operations is a requirement, as is ensuring the math is correct. Hence I have an interest in finding a Java library that leverages native acceleration such as MKL and is proven (so commercial options are definitely a possibility here).
In the .NET space there are highly optimized and MKL accelerated commercial Mathematical libraries such as Centerspace NMath and Extreme Optimization. Is there anything comparable in Java?
Most of the math libraries I have found for Java either do not seem to be actively maintained (such as Colt) or do not appear to leverage MKL or other native acceleration (such as Apache Commons Math).
I have considered trying to leverage MKL directly from Java myself (e.g. via JNI), but being new to Java (let alone to interoperating between Java and native libraries), it seemed smarter to find a Java library that has already done this correctly and efficiently, and is proven.
Again, I apologize if I am mistaken or misguided (even regarding any of the libraries I've mentioned) and for my ignorance of the Java offerings. It's a whole new world for me coming from the heavily commercialized Microsoft stack, so I could easily be mistaken about where to look and about the Java libraries I've mentioned. I would greatly appreciate any help or advice.
For things like FFT (bulk operations on arrays), the range check in Java might kill your performance (at least it did until fairly recently). You probably want to look for libraries written so that the compiler can prove their index bounds and eliminate the checks.
According to the HotSpot spec: "The Java programming language specification requires array bounds checking to be performed with each array access. An index bounds check can be eliminated when the compiler can prove that an index used for an array access is within bounds."
I would actually look at JNI and do your bulk operations there if they are individually very large. The longer the operation takes (e.g. solving a large linear system, or a large FFT), the more it's worth using JNI (even if you have to memcpy there and back).
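A sketch of the Java side of such a bridge, kept coarse-grained so the JNI crossing cost is paid once per big operation; the library and method names here are invented for illustration, and the native side would forward to MKL (or any native BLAS/FFT):

    public final class NativeMath {
        static {
            // libnativemath.so / nativemath.dll: a hypothetical native wrapper around the math library
            System.loadLibrary("nativemath");
        }

        // In-place complex FFT on interleaved re/im data of length 2*n.
        public static native void fft(double[] interleaved, int n);

        // Solve the dense n x n system A*x = b; A is row-major, the result is returned as a new array.
        public static native double[] solve(double[] aRowMajor, double[] b, int n);
    }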
Personally, I agree with your general approach, offloading the heavyweight maths from Java to a commercial-grade library.
Googling around for Java / MKL integration, I found this, so what you propose is technically possible. Another option to consider would be the NAG libraries. I use MKL all the time, though I program in Fortran so there are no integration issues. I can certainly recommend their quality and performance. We tested, for instance, the MKL version of FFTW against a version we built from source ourselves; the MKL implementation was faster by a small integer multiple.
If you have concerns about the performance of calling a library through JNI, then you should plan to structure your application to make fewer larger calls in preference to more smaller ones. As to the difficulties of using JNI, my view (I've done some JNI programming) is that the initial effort you have to make in learning how to use the interface will be well rewarded.
I note that you don't seem to be overwhelmed yet with suggestions of what Java maths libraries you could use. Like you I would be suspicious of research-quality, low-usage Java libraries trawled from the net.
You'd probably be better off avoiding them, I think. I could be wrong (it's not an area I'm too familiar with, so don't take too much from this unless a few others agree with me), but calling through JNI has quite a large overhead, since it has to go outside the JRE to do it. So unless you're grouping a lot of work together into a single call, the slight benefit of the external libraries will be hugely outweighed by the cost of calling them. I'd give up looking for an MKL library and find an optimized pure Java library. I can't say I know of any better than the standard one to recommend, though, sorry.
