I have searched and found a few image scaling libraries for Java, but I'm not sure which one to go with. I need to generate thumbnails from images uploaded to the server.
It would be great if you could tell me which ones are good and which are bad.
The list I have is:
JMagick
im4java
Thumbnailator
java-image-scaling
JAI
For my simple needs, Thumbnailator was perfect. Small lib; fluent, clean, well-documented API.
In my case, it was just the "net.coobird" % "thumbnailator" % "0.4.8" dependency and:
import net.coobird.thumbnailator.Thumbnails;
//..
Thumbnails.of(originalFile)
    .size(300, 300)
    .toFile(thumbnailFile);
//..
and done. Basically it’s a friendly wrapper on top of the Java 2D APIs. Useful for specific (thumbnailin') needs; no learning curve.
Unless you really need to do some heavy-lifting with images, I'd be wary of depending on an external binary (ImageMagick and wrappers like JMagick), which would add complexity and moving parts into the setup. Especially if your stack is something like mine: Scala/Java app running on Heroku. There’s stuff like heroku-buildpack-imagemagick-cedar-14, yes, but a simple dependency bundled with the app is infinitely cleaner.
The plain Java API is not good enough if you need high-quality thumbnails.
To generate high-quality thumbnails, use a framework like the ones you have listed. I have tried imgscalr (https://github.com/thebuzzmedia/imgscalr) and Thumbnailator.
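For reference, a basic resize with imgscalr is essentially a one-liner on a BufferedImage; a rough sketch (package and class names as in recent imgscalr releases, file paths are placeholders):

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import org.imgscalr.Scalr;

public class ImgscalrDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder paths; resize to a 300px thumbnail using the QUALITY method.
        BufferedImage original = ImageIO.read(new File("original.jpg"));
        BufferedImage thumb = Scalr.resize(original, Scalr.Method.QUALITY, 300);
        ImageIO.write(thumb, "jpg", new File("thumb.jpg"));
    }
}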
In my microbenchmark tests (which can't really be used for general-purpose conclusions), Thumbnailator is faster - though not by a lot - and easier, especially when generating thumbnails with transparency.
I also tried out JMagick but ran into issues just compiling the code and setting it up. I'm also a little worried about running into problems later with a framework whose core code is written in a language I don't understand at all - C.
You might also find useful the discussion here: How can I resize an image using Java?
In my private projects I don't use any specific library; the functionality provided by Java gives decent results for me. If you just want to do image scaling, a complete image-processing library would be too heavyweight.
I use the code snippets given in http://www.mkyong.com/java/how-to-resize-an-image-in-java/ which work quite well.
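The core of that approach is just drawing into a new BufferedImage with Graphics2D; roughly like this (a sketch, not the exact snippet from the article; paths and target size are placeholders):

import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class ResizeDemo {
    public static void main(String[] args) throws Exception {
        BufferedImage original = ImageIO.read(new File("original.jpg")); // placeholder paths
        BufferedImage scaled = new BufferedImage(300, 300, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = scaled.createGraphics();
        // Bilinear interpolation gives noticeably better thumbnails than the default.
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                           RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.drawImage(original, 0, 0, 300, 300, null); // scale into the target size
        g.dispose();
        ImageIO.write(scaled, "jpg", new File("thumb.jpg"));
    }
}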
Related
I'm trying to design a side scroller for android and a level editor for windows. A lot of the code between the two programs will be the same. The bit I'm currently working on is drawing textures.
What I currently have is a library that includes code related to loading and drawing the texture information. The bit that I'm stuck on is how would I incorporate OpenGL (for windows) and OpenGL ES (for android) into the library?
I thought about having an interface that includes all the drawing functions and then implementing that interface separately within each program (since one will use OpenGL and one OpenGL ES) but that still produces a lot of duplicated code (and kinda defeats the purpose of trying to create this shared library).
Is there a better way to approach this problem? Am I just overcomplicating this by trying to make it too flexible? I have been thinking about this for a few days now, so any input would be greatly appreciated!
Please ask if anything doesn't make sense!
OpenGL and OpenGL ES are similar enough that you can share large amounts of the codebase. Since you're probably going to target OpenGL ES 2 on Android, you would then use OpenGL 3 on the desktop, which includes a lot of what OpenGL ES 2 has. So I suggest you develop your code primarily against OpenGL ES 2 and add alternative codepaths only for the small differences towards OpenGL 3.
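To give an idea of what such an alternative codepath can look like, here is a rough, purely illustrative sketch (not a complete renderer): the fragment shader source is shared verbatim between both builds, and only the GL ES 2 precision preamble differs.

public final class ShaderSource {
    // GL ES 2 fragment shaders must declare a default float precision;
    // old-style desktop GLSL (1.10/1.20) does not use one.
    private static final String ES_PREAMBLE = "precision mediump float;\n";

    // The shader body itself is shared between the desktop and Android builds.
    public static String fragmentShader(boolean isGlEs) {
        return (isGlEs ? ES_PREAMBLE : "")
             + "varying vec2 vTexCoord;\n"
             + "uniform sampler2D uTexture;\n"
             + "void main() { gl_FragColor = texture2D(uTexture, vTexCoord); }\n";
    }
}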
I'm working on developing a media-player-like application in Java (it's a Swing-based application) and I want it to run smoothly with as many different file formats as possible. I want to be able to take in a bunch of music files, retrieve their tag information (artist/album/song name/etc.), and then later play them. I've done a bit of poking around, but it's hard to find a library which will support .m4a, .mp3, and maybe even .flac files. Does anyone know of a library which will do what I want? Thanks!
JMF is, to put it in the nicest possible way, rather out of date, unmaintained, difficult to distribute, and in my experience has quite a few annoying bugs that crop up where you least expect them. And if you can get FMJ to work at all, good luck - they pride themselves on it being an up-to-date, drop-in replacement, but my experience begs to differ on both those points.
Personally I wouldn't even consider it - just use separate libraries for each format or bunch of formats you want to support. JLayer would be a good one to start with as it handles a fair few, and JFlac will do your FLAC files on top of that.
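For plain MP3 playback, JLayer's Player class is about as small as it gets; a minimal sketch (the file name is just a placeholder, and play() blocks until the track ends):

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import javazoom.jl.player.Player;

public class Mp3Demo {
    public static void main(String[] args) throws Exception {
        // Placeholder path; Player reads the MP3 stream and plays it to completion.
        try (FileInputStream in = new FileInputStream("song.mp3")) {
            Player player = new Player(new BufferedInputStream(in));
            player.play(); // blocks until the stream finishes
        }
    }
}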
There's JMF - see http://en.wikipedia.org/wiki/Java_Media_Framework, which also lists some alternatives. I've had rather mixed success with JMF; it worked well for some static MPEG files but didn't seem compatible with the streaming sources we were using at the time (a couple of years ago).
An alternative to jFLAC for FLAC files is to use the official libFLAC, invoked via the Java native interface. See this blog post, under the headline “FLAC decoding with Java native interface” for an explanation of how it's done, with links to working code.
I've been searching and found jFreeChart, Python Google Chart and matplotlib. Searching here I also found CairoPlot. I've heard I might be able to use OpenOffice to do it too. Is the API easy to use? Or would it be simpler to stick to one of those libraries?
I have more experience with Java, but I've read most of Dive Into Python 3 and done some mockup programs in Python for simple things. I'm probably gonna have to spend more time doing it in Python, though I'm willing to do it as long as it isn't anything mindblowing. I want to automate some tests to put into a thesis, so I'm more worried about the end product.
So far I'm thinking of using matplotlib, simply because it's the only one that's had any recent updates, which leads me to assume there might be more documentation thanks to continued support. I've used jFreeChart in the past for some testing, and it was OK. But I was hoping to find something better, or at least something with more documentation/examples. Last time I couldn't customize the graphics' appearance the way I wanted - say, change the background in a line plot - due to the lack of examples/documentation.
I recommend matplotlib: it has high-quality backends and a lot of graphical representations, you'll have full control over your plots, and Python is a very handy and easy language for automating tests - very practical for what you want to do. Matplotlib also has a large community that can help you and a lot of documentation/examples; just remember that matplotlib has not been ported to Python 3.x yet, though I don't know if that matters for you.
What I absolutely don't recommend is CairoPlot, it is not maintained anymore and is a toy project.
Google's Visualization API is fantastic - and much cleaner if you're working in a web environment, since you just output some text JS with your HTML, don't have to call back and render an image.
JFree also has Eastwood, which is a reimplementation of the Google Charts API, in case you don't want to send your data to Google or need SSL. I don't think it's quite current, but it's a good subset.
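If you do stick with JFreeChart, the background change mentioned in the question is just a property on the plot object; a rough sketch of a minimal line chart saved to PNG (class and method names from the JFreeChart 1.0.x API, data is made up):

import java.awt.Color;
import java.io.File;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartUtilities;
import org.jfree.chart.JFreeChart;
import org.jfree.chart.plot.PlotOrientation;
import org.jfree.data.xy.XYSeries;
import org.jfree.data.xy.XYSeriesCollection;

public class LinePlotDemo {
    public static void main(String[] args) throws Exception {
        XYSeries series = new XYSeries("runtime");
        series.add(1, 2.3); // illustrative data points
        series.add(2, 4.1);
        XYSeriesCollection dataset = new XYSeriesCollection(series);
        JFreeChart chart = ChartFactory.createXYLineChart(
                "Test results", "run", "seconds", dataset,
                PlotOrientation.VERTICAL, true, false, false);
        chart.getPlot().setBackgroundPaint(Color.WHITE); // the background tweak
        ChartUtilities.saveChartAsPNG(new File("plot.png"), chart, 640, 480);
    }
}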
Do I need the Java Advanced Imaging API (JAI) to learn image processing in Java?
And is there any good link for learning image processing in Java?
If you are looking to learn very basic stuff, I'd say you should not start with JAI, because this library will help you process images easily, but not learn how to process images -- you won't get to know the really low-level stuff, like how to manipulate pixel arrays directly.
I started by reversing the image, cropping it, creating histograms, doing histogram equalization, applying various affine transformations (scaling, rotating), etc.
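To give a flavour of that kind of low-level work, a grey-level histogram is only a few lines once you read the pixels yourself (a rough sketch, assuming an 8-bit grayscale BufferedImage):

import java.awt.image.BufferedImage;

public class Histogram {
    // Count how many pixels fall into each of the 256 grey levels.
    // Assumes "img" is (or has been converted to) an 8-bit grayscale image.
    public static int[] of(BufferedImage img) {
        int[] counts = new int[256];
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                counts[img.getRGB(x, y) & 0xFF]++; // low byte == grey level here
            }
        }
        return counts;
    }
}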
This lecture might be a good start..
http://kevin.floorsoup.com/scholarship/ip2d/p12-burger.pdf
Sorry if you were looking for more sophisticated stuff :)
I'd say that if you want to learn image processing then you don't need JAI. At university we played around with PGM files, learned how to do basic transforms, basic filtering etc.
PGM is very easy to work with since it's greyscale and you can encode the image using ASCII. This means you can knock up a bit of code to read and write them in no time, and then you can start building your own implementations of image processing algorithms.
Check it out here: http://en.wikipedia.org/wiki/Portable_Gray_Map
Obviously if you want to do some serious image processing then check out a real API, but if you're wanting to play about and learn then this is where I'd start.
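To give an idea of how little code the ASCII variant takes, here is a rough sketch of writing a P2 (ASCII greyscale) PGM file; the class name and pixel layout are just illustrative:

import java.io.PrintWriter;

public class PgmWriter {
    // Write an 8-bit greyscale image as an ASCII (P2) PGM file.
    // "pixels" holds width*height grey values in 0..255, row by row.
    public static void write(String path, int[] pixels, int width, int height) throws Exception {
        try (PrintWriter out = new PrintWriter(path)) {
            out.println("P2");                  // magic number for ASCII greyscale
            out.println(width + " " + height);  // image dimensions
            out.println(255);                   // maximum grey value
            for (int y = 0; y < height; y++) {
                StringBuilder row = new StringBuilder();
                for (int x = 0; x < width; x++) {
                    row.append(pixels[y * width + x]).append(' ');
                }
                out.println(row.toString().trim());
            }
        }
    }
}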
It depends on what you mean by "to learn". If you want to understand the foundations of image processing, then, no, you don't need anything beyond a basic familiarity with the ImageIO class for reading and writing images, and the BufferedImage getRGB() and setRGB() methods.
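For instance, inverting an image with nothing more than those methods looks roughly like this (a sketch; file names are placeholders and the alpha channel is left untouched):

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class Invert {
    public static void main(String[] args) throws Exception {
        BufferedImage img = ImageIO.read(new File("in.png")); // placeholder file names
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                img.setRGB(x, y, rgb ^ 0x00FFFFFF); // flip R, G and B, keep alpha
            }
        }
        ImageIO.write(img, "png", new File("out.png"));
    }
}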
If you want to learn how to use existing APIs for image processing, you can still get a lot done with the Java2D API, but the JAI gives you more file formats and more filters.
But in that case, I'd say try Python Imaging Library!
If your needs go beyond basic operations - I think the best toolkit out there to learn the entire gamut of image processing (from basic to advanced topics such as object recognition/computer vision) is OpenCV. There are plenty of OpenCV wrappers for Java.
Consider Processing; it is based on Java (with some extensions, like a color data type). The tutorials provide good introductions to general image processing concepts.
"Processing is an open source programming language and environment for people who want to create images, animations, and interactions." http://www.processing.org
No. However, there is a good Java program called "ImageJ" (google it).
It will get you off to a flying start, as you can start writing your own plugins for it in Java, with minimal effort and complexity. Also, there are many plugins and classes contributed for it and freely available.
I've come across MANY AR libraries/SDKs/APIs, and all of them are marker-based. Then I found this video; from the description and the comments, it looks like he's using SIFT to detect the object and follow it around.
I need to do that for Android, so I'm gonna need a full implementation of SIFT in pure Java.
I'm willing to do that but I need to know how SIFT is used for augmented reality first.
I could make use of any information you give.
In my opinion, trying to implement SIFT for a portable device is madness. SIFT is an image feature extraction algorithm, which includes complex math and certainly requires a lot of computing power. SIFT is also patented.
Still, if you indeed want to go forward with this task, you should do quite a bit of research first. You need to check things like:
Any variants of SIFT that enhance performance, including different algorithms all around
I would recommend looking into SURF, which is very robust and much faster (but still one of those scary algorithms)
Android NDK (I'll explain later)
Lots and lots of publications
Why Android NDK? Because you'll probably have a much more significant performance gain by implementing the algorithm in a C library that's being used by your Java application.
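The rough shape of that split looks like this (purely illustrative - the library name, method signature and packed return format are made up, not from any existing project):

public class NativeFeatures {
    static {
        System.loadLibrary("features"); // loads libfeatures.so built with the NDK (illustrative name)
    }

    // Implemented on the native (C/C++) side via JNI; here it is assumed to return
    // packed keypoint data (e.g. x, y, scale, orientation per keypoint) for a grayscale frame.
    public static native float[] extractKeypoints(byte[] grayPixels, int width, int height);
}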
Before starting anything, make sure you do that research cause it would be a pity to realize halfway that the image feature extraction algorithms are just too much for an Android phone. It's a serious endeavor in itself implementing such an algorithm that provides good results and runs in an acceptable amount of time, let alone using it to create an AR application.
As for how you would use that for AR, I guess the descriptors you get from running the algorithm on an image would have to be matched against data saved in a central database. Then the results can be displayed to the user. The features gathered from SURF are supposed to describe an image such that it can then be identified using them. I'm not really experienced with doing that, but there are always resources on the web. You'd probably want to start with generic stuff such as Object Recognition.
Best of luck :)
I have tried SURF on a 330 MHz Symbian mobile and it was still too slow, even with all optimizations and lookup tables. And SIFT should be even slower. Everyone is using FAST for mobiles now. Anyway, feature extraction is not the biggest problem. Establishing correspondences and clearing out the false positives among them is more difficult.
FAST link
http://svr-www.eng.cam.ac.uk/~er258/work/fast.html
If I were you, I'd look into how (and why) the SIFT feature works (as was said, its Wikipedia page offers a good, concise explanation, and for more details check the original paper, which is linked from Wikipedia), and then build your own variant that suits your needs, i.e. one that has the optimal balance between performance and CPU load for your application.
For instance, I think Gaussian smoothing might be replaced by some faster way of smoothing.
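For example, a few repeated box blur passes approximate a Gaussian and are much cheaper per pixel; one horizontal pass over a raw greyscale array looks roughly like this (a sketch, not tuned code):

public class BoxBlur {
    // One horizontal pass of a box blur over an 8-bit greyscale image stored
    // row-major in "src"; a crude, fast stand-in for Gaussian smoothing.
    public static int[] horizontalPass(int[] src, int width, int height, int radius) {
        int[] dst = new int[src.length];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int sum = 0, count = 0;
                for (int dx = -radius; dx <= radius; dx++) {
                    int xx = x + dx;
                    if (xx >= 0 && xx < width) { // clamp at the image borders
                        sum += src[y * width + xx];
                        count++;
                    }
                }
                dst[y * width + x] = sum / count;
            }
        }
        return dst;
    }
}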
Also, when you build your own variant, you don't have to worry about the patents (there are already lots of variants, like GLOH).
I would recommend starting by looking at the feature detectors already implemented in the OpenCV library, which include SURF, MSER and others:
http://opencv.willowgarage.com/documentation/cpp/feature_detection.html
These might be enough for your application, and they are faster than SIFT. And as mentioned above, SIFT is patented.
Also, start by running performance tests on your mobile platform, just extracting the features at every frame; that way you'll get an idea of which ones can run in real time and which can't.
Have you tried OpenCV's FAST implementation in the Android port? I've tested it out and it runs blazingly fast.
You can also compute reduced histogram descriptors around the detected FAST keypoints. I've heard of using a 3x3 grid rather than SIFT's standard 4x4. That has a decent chance of working in real time if you optimize it heavily with NEON instructions. Otherwise, I'd recommend something fast and simple like the sum of squared or absolute differences over a patch around the keypoints, which is very fast.
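A patch comparison like that is just a couple of loops; a rough sketch of the sum of absolute differences between two square greyscale patches (array layout and sizes are illustrative):

public class PatchMatch {
    // Sum of absolute differences between two square greyscale patches of side
    // "size", stored row-major with values in 0..255; lower means more similar.
    public static int sad(int[] patchA, int[] patchB, int size) {
        int sum = 0;
        for (int i = 0; i < size * size; i++) {
            sum += Math.abs(patchA[i] - patchB[i]);
        }
        return sum;
    }
}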
SIFT is not a panacea. For real time video applications, it's usually overkill.
As always, Wikipedia is a good place to start from : http://en.wikipedia.org/wiki/Scale-invariant_feature_transform, but note that SIFT is patented.