Java2D thumbnails. Can I get thumbnails from the OS? - java

I am developing an application in which I want to display a grid with a list of images. For each image I create an instance of a class MyImage, which extends JComponent, creates a thumbnail and then draws it by overriding paintComponent(Graphics g).
Everything works, but with large images there is a long delay while the thumbnail is created.
Now I'm thinking that when I scan the folders for images (to build the list I mentioned above), I could create a thumbnail of each image and save it to disk. For each image I would have a database record storing the image path and the thumbnail path. Is this a good solution to the problem?
Is there a way to get the thumbnails the operating system creates for each image in the file manager? Or is there a more effective solution than what I am trying?
Thank you!!

Your best bet is to use something like ImageMagick to convert the image and create the thumbnail. There's a project called JMagick which provides JNI hooks into ImageMagick, but running it as an external process works too.
ImageMagick is heavily optimized C code for manipulating images. It will also be able to handle images that Java won't, and with much less memory usage.
I work for a website where we let users upload art and create thumbnails on the fly, and it absolutely needs to be fast, so that's what we use.
The following is Groovy code, but it can be modified to Java pretty easily:
public boolean createThumbnail(InputStream input, OutputStream output) {
    // Read the source image from stdin ("-[0]" = first frame) and write a JPEG thumbnail to stdout
    def cmd = "convert -colorspace RGB -auto-orient -thumbnail 125x125 -[0] jpg:-"
    Process p = cmd.execute()
    p.consumeProcessErrorStream(System.out)  // forward ImageMagick errors to the console
    p.consumeProcessOutputStream(output)     // pipe the thumbnail to the caller's stream
    p.out << input                           // feed the source image to convert's stdin
    p.out.close()
    p.waitForOrKill(8000)                    // don't let a hung convert block forever
    return p.exitValue() == 0
}
This creates a thumbnail using pipes, without actually writing any data to disk. The OutputStream could point to a file if you wanted to write the thumbnail out immediately as well.
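For reference, a rough Java translation might look like this. This is a sketch only; it assumes Java 9+ (for InputStream.transferTo) and that ImageMagick's convert is on the PATH:

import java.io.*;

public static boolean createThumbnail(InputStream input, OutputStream output)
        throws IOException, InterruptedException {
    Process p = new ProcessBuilder(
            "convert", "-colorspace", "RGB", "-auto-orient",
            "-thumbnail", "125x125", "-[0]", "jpg:-")
            .redirectError(ProcessBuilder.Redirect.INHERIT) // forward errors, like the Groovy version
            .start();
    // Feed the source image to convert's stdin on a separate thread to avoid pipe deadlock
    Thread feeder = new Thread(() -> {
        try (OutputStream stdin = p.getOutputStream()) {
            input.transferTo(stdin);
        } catch (IOException ignored) {
        }
    });
    feeder.start();
    // Copy the finished thumbnail from convert's stdout to the caller's stream
    p.getInputStream().transferTo(output);
    feeder.join();
    return p.waitFor() == 0;
}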

One way to avoid OS dependence is to use getScaledInstance(), as shown in this example. See the cited articles for certain limitations. If it's taking too long, use a SwingWorker to do the load and scale in the background.
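A minimal sketch of that idea (the file path, target size, and thumbnailLabel, a JLabel, are illustrative assumptions):

import java.awt.Image;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import javax.swing.*;

SwingWorker<Image, Void> worker = new SwingWorker<Image, Void>() {
    @Override
    protected Image doInBackground() throws Exception {
        // Load and scale off the Event Dispatch Thread
        BufferedImage full = ImageIO.read(new File("photo.jpg")); // hypothetical path
        return full.getScaledInstance(125, 125, Image.SCALE_SMOOTH);
    }

    @Override
    protected void done() {
        try {
            thumbnailLabel.setIcon(new ImageIcon(get())); // runs on the EDT
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
};
worker.execute();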

I haven't used it for the creation of thumbnails, but you may also want to take a look at the ImageIO API.

How to take a high-res picture while sensing depth using Project Tango

How do I take a picture using Project Tango?
I read this answer: Using the onFrameAvailable() in Jacobi Google Tango API
which works for grabbing a frame, but the picture quality is not great. Is there any takePicture equivalent?
Note that the Java API
public void onFrameAvailable(int cameraId) {
    if (cameraId == TangoCameraIntrinsics.TANGO_CAMERA_COLOR) {
        mTangoCameraPreview.onFrameAvailable();
    }
}
does not provide RGB data. If I use the Android camera to take a picture, Tango cannot sense depth, so I will have to use TangoCameraPreview.
Thanks
You don't have to use TangoCameraPreview to get frames in Java. It is really just a convenience class provided to help with getting video on the screen. It appears to be implemented entirely in Java with calls to com.google.atap.tangoservice.Tango (i.e. no calls to unpublished APIs). In fact, if you look inside the Tango SDK jar file, you can see that someone accidentally included a version of the source file - it has some diff annotations and may not be up to date, but examining it is still instructive.
I prefer not to use TangoCameraPreview and instead call Tango.connectTextureId() and Tango.updateTexture() myself to load frame pixels into an OpenGL texture that I can then use however I want. That is exactly what TangoCameraPreview does under the hood.
The best way to capture a frame in pure Java is to draw the texture at its exact size (1280x720) to an offscreen buffer and read it back. This also has the side effect of converting the texture from whatever YUV format it has into RGB (which may or may not be desirable). In OpenGL ES you do this using a framebuffer and renderbuffer.
Adding the framebuffer/renderbuffer stuff to a program that can already render to the screen isn't a lot of code - about on par with the amount needed to save a file - but it is tricky to get right when you do it for the first time. I created an Android Studio sample capture app that saves a Tango texture as a PNG to the pictures folder (when you tap the screen) in case that is helpful for anyone.
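If it helps, here is a rough sketch of the readback step using plain GLES20 calls. It assumes a GL context is current and that the camera texture has already been drawn into the framebuffer; the 1280x720 size is the one mentioned above:

import android.graphics.Bitmap;
import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// One-time setup: an offscreen framebuffer backed by a renderbuffer
int[] fbo = new int[1], rbo = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glGenRenderbuffers(1, rbo, 0);
GLES20.glBindRenderbuffer(GLES20.GL_RENDERBUFFER, rbo[0]);
GLES20.glRenderbufferStorage(GLES20.GL_RENDERBUFFER, GLES20.GL_RGB565, 1280, 720);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferRenderbuffer(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_RENDERBUFFER, rbo[0]);

// ... render the camera texture into the framebuffer with your quad shader here ...

// Read the pixels back; GL_RGBA/GL_UNSIGNED_BYTE readback is always supported in ES 2.0
ByteBuffer pixels = ByteBuffer.allocateDirect(1280 * 720 * 4).order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, 1280, 720, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);

// Wrap the bytes in a Bitmap (note: glReadPixels returns rows bottom-up, so flip if needed)
Bitmap bmp = Bitmap.createBitmap(1280, 720, Bitmap.Config.ARGB_8888);
bmp.copyPixelsFromBuffer(pixels);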

Desktop TCP Streaming (java)

I want to stream desktop screen captures using sockets.
I don't know the exact way to do this, so I went with AWT's Robot :)
Robot robot = new Robot();
BufferedImage image = robot.createScreenCapture(screenRectangle);
The problem is that images coming from the robot are too large to make a stream.
A 1440x900 capture is about 0.3MB and I can't transfer it fast enough to create a smooth 24fps stream.
Currently I'm using a TCP socket, because I had problems cutting down the image into multiple parts and sending them over with UDP.
Probably this isn't the right method, but what is? How are HD video streams transferred?
Thanks in advance
I think you'll need an external library to create video (it may be platform dependent).
The approach with images is simple, but you'll need to send each frame. When you use a video codec the size is smaller, because it sends only some frames in full size (keyframes) while the others contain only the changed parts of the picture.
See here:
http://en.wikipedia.org/wiki/Key_frame
http://en.wikipedia.org/wiki/I-frame
Here some open-source libs I just googled:
https://code.google.com/p/java-screen-recorder/
http://www.xuggle.com/xuggler/
I think you can also find some libraries to create a video stream from images...
How are HD video streams transferred?
Typically as a video stream, which a 'group of images' is not. Video codecs often have clever ways to compress groups of images further, e.g. by only showing the part of the next frame that is different to the previous one.
You might also want to look into encoding the images as high-compression JPEGs.
Having said that, I doubt you'll get a very good transfer rate at that size in pixels.
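For example, here is a sketch of compressing each captured frame with an explicit JPEG quality before writing it to the socket. ImageWriteParam is standard javax.imageio; the 0.3 quality and the length-prefix framing are assumptions:

import java.awt.*;
import java.awt.image.BufferedImage;
import java.io.*;
import javax.imageio.*;
import javax.imageio.stream.MemoryCacheImageOutputStream;

static void sendFrame(Robot robot, OutputStream socketOut) throws IOException {
    Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
    BufferedImage frame = robot.createScreenCapture(screen);

    ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
    ImageWriteParam param = writer.getDefaultWriteParam();
    param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
    param.setCompressionQuality(0.3f); // heavy compression: much smaller frames, visible artifacts

    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    writer.setOutput(new MemoryCacheImageOutputStream(buf));
    writer.write(null, new IIOImage(frame, null, null), param);
    writer.dispose();

    // Length-prefix each frame so the receiver knows where one JPEG ends and the next begins
    new DataOutputStream(socketOut).writeInt(buf.size());
    buf.writeTo(socketOut);
}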

What is the difference between the ways to read an Image file in Java?

There are various ways of reading an image file in Java, such as BufferedImage and ImageIcon, to name a few. What is the difference between them? Are they context dependent, so that in a particular case only one of them can be used?
What would be the best way of reading an image selected by the user with a JFileChooser and separating the color channels of the image?
A good way is to use the different ImageIO.read methods, which return BufferedImage objects.
Image is an abstract class, so I think the real question is which subclass is more efficient for your program. Use VolatileImage if you need hardware acceleration. More on that here.
ImageIcon (and Toolkit#createImage/Toolkit#getImage) use a background loading process. That is, after you call these methods, they will return immediately, having created a background thread to actually load the image data.
These were/are used when loading large images across slow connections, like ye old 28k modems (ah, how I remember the days). This means that your application can continue running while the images are being downloaded.
You'll find that the drawImage methods in the Graphics class accept an ImageObserver, and that java.awt.Component implements this interface; this allows components to automatically update themselves once the image has actually finished loading.
ImageIO on the other hand will not return until the image is fully loaded. It also makes it easier to introduce new readers/writers, making the API far more flexible than the original API. ImageIO also supports a wider range of images out of the box.
BufferedImage is also a far more flexible image class, especially when it comes to applying effects to the image.
Now, I, personally, prefer ImageIO. If I know I'm loading large images or images over a potentially slow connection, I will create my own background thread to load them. While a little more complicated, the trade-offs greatly outweigh the small amount of extra work - IMHO.
What would be the best way of reading a image selected by JFileChooser by the user and separating the color channels of an image?
ImageIO without a doubt. In order to do any serious manipulation of an image loaded using something like ImageIcon, you'd have to convert that image to a BufferedImage anyway.
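A sketch of that workflow, using nothing beyond the standard API: read the chooser's file with ImageIO, then unpack each pixel into its channels:

import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import javax.swing.JFileChooser;

JFileChooser chooser = new JFileChooser();
if (chooser.showOpenDialog(null) == JFileChooser.APPROVE_OPTION) {
    BufferedImage img = ImageIO.read(chooser.getSelectedFile());
    for (int y = 0; y < img.getHeight(); y++) {
        for (int x = 0; x < img.getWidth(); x++) {
            int argb = img.getRGB(x, y);   // packed as 0xAARRGGBB
            int a = (argb >>> 24) & 0xFF;  // alpha
            int r = (argb >>> 16) & 0xFF;  // red
            int g = (argb >>> 8) & 0xFF;   // green
            int b = argb & 0xFF;           // blue
            // ... process the separated channels here ...
        }
    }
}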

How to improve the image quality before the image processing starts in javacv or opencv?

I have a 400x400 image from which I need to identify different components. But when I try to identify components using it, most of the time it doesn't give correct answers. So I need to know whether there are methods in javacv or opencv to improve the quality of the image, or to increase the size of the image without affecting its quality.
This is the sample image that I use. (This is the maximum size that I can get, and I can't use any photo-editing software in the project, because the image is dynamically generated.)
In my image processing I need to identify squares and the rectangles that connect those squares. In particular, I need to get the width and height of those using pixel values.
You can scale it to any size, if you can vectorize it... and in your case vectorization is quite simple, as you have some simple geometrical objects in the image.
So, in my view your approach should be like this:
detect edges in the image with a high threshold (as you have very distinct objects)
vectorize them
scale them to any size
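A sketch of the first two steps with the OpenCV Java bindings (the file name and the Canny thresholds are assumptions to tune; high thresholds suit crisp synthetic images):

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

// requires System.loadLibrary(Core.NATIVE_LIBRARY_NAME) once at startup
Mat gray = Imgcodecs.imread("diagram.png", Imgcodecs.IMREAD_GRAYSCALE); // hypothetical path
Mat edges = new Mat();
Imgproc.Canny(gray, edges, 100, 200); // high thresholds for distinct, synthetic shapes

List<MatOfPoint> contours = new ArrayList<>();
Imgproc.findContours(edges, contours, new Mat(),
        Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

for (MatOfPoint c : contours) {
    Rect box = Imgproc.boundingRect(c);
    // width/height of each detected component, in pixels
    System.out.println("component at " + box.x + "," + box.y
            + " size " + box.width + "x" + box.height);
}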
You should also look at the following link: Increasing camera capture resolution in OpenCV.
If you stick to image processing, the easiest way to do it is to apply equalizeHist(). This will increase contrast and will improve subsequent steps.
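For example (OpenCV Java bindings; equalizeHist works on single-channel images, so the input is loaded as grayscale, and the file names are assumptions):

import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

Mat gray = Imgcodecs.imread("input.png", Imgcodecs.IMREAD_GRAYSCALE); // hypothetical path
Mat equalized = new Mat();
Imgproc.equalizeHist(gray, equalized); // stretches the histogram to boost contrast
Imgcodecs.imwrite("equalized.png", equalized);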
But, and this is a biiiig 'but', why are you doing it? Just reading this post, I saw another solution, and a quick google proved me right:
Kabeja is a Java library for parsing, processing and converting
Autodesk's DXF format. You can use Kabeja from the CommandLine or
embed into your application. All parsed data are accessible with the
DOM-like API.
That means you can extract directly all the data you want from that image in a text format. Probably something like "at position x, y there is a transistor, or whatever." So why would you render that file into an image, then analyse that image to extract the components?
If you do it for school (I know that many school projects are like this) I would recommend you to find a real problem to solve, and propose it to your teacher. You will be happier to do something that is not complete nonsense.
Vectorizing the image is the best option I guess, as suggested by mocap.
You can also use enhancement tools like sharpening, saturation adjustment, etc.

Fastest way to load and display a jpeg on a SurfaceView?

This is a bit of a followup to my last question: Canvas is drawing too slowly
Now that I can draw images more quickly, the problem I am faced with is that the actual loading of the images takes far too long.
In the app I am working on, the user is able to play back video frames (jpegs) in succession, as though he is viewing the video in real time. I have been using BitmapFactory.decodeFile() to load each jpeg into a Bitmap. I'm unable to load all images at once since there are about 240 of them, and that would use up all of my heap space. What I have been doing is preloading up to 6 at a time into an array by way of a separate thread in order to cut down on the time it takes for each image to display.
Unfortunately, it takes somewhere between 50 and 90ms to load an image, and I need to show an image every 42ms. Is there a faster way to load images possibly?
For clarification, these images are in a folder on the SD card, and they are all 720x480 jpegs. I am sampling them at half that size to cut down on memory usage.
I ended up doing this quite a bit differently than I had originally envisioned. There was quite a bit to it, but here's the gist of how I achieved my goal:
All images are stored on SD card and written to one file (each image takes up X bytes in the file)
Use native code to read from and write to the image file
When requesting an image, I pass the index of the image in the list and a bitmap object (RGB_565) to the native code using a JNI wrapper
The native code locks the bitmap surface, writes pixel data (as a uint8_t**) directly to the bitmap, then unlocks it
The image is rendered to the screen
By doing it this way, I only needed to store one image in memory at a time, and I was able to bypass garbage collection (since the bitmap was only created once and then repopulated natively). I hope someone else might find this strategy useful.
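The Java side of such a wrapper might look roughly like this (all names here are illustrative, not the poster's actual code; the native method would use the NDK's AndroidBitmap_lockPixels to fill the bitmap):

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.view.SurfaceHolder;

// One bitmap, created once at half resolution (360x240) and reused for every frame,
// so there is no per-frame allocation and no garbage collection
private final Bitmap frameBitmap = Bitmap.createBitmap(360, 240, Bitmap.Config.RGB_565);

// Implemented in C via JNI; seeks to index * frameSize in the packed file and writes
// the pixel data straight into the bitmap via AndroidBitmap_lockPixels
private native void nativeLoadFrame(String packedFile, int index, Bitmap target);

void drawFrame(SurfaceHolder holder, int index) {
    nativeLoadFrame("/sdcard/frames.bin", index, frameBitmap); // hypothetical path
    Canvas canvas = holder.lockCanvas();
    try {
        canvas.drawBitmap(frameBitmap, 0, 0, null);
    } finally {
        holder.unlockCanvasAndPost(canvas);
    }
}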
I guess you already tried all the methods in this tutorial http://www.higherpass.com/Android/Tutorials/Working-With-Images-In-Android/2/ and chose the fastest. Maybe tweaking the resizing can decrease loading time.
Best of all would of course be if you didn't have to resize the images at all. If you have full control of the images, maybe you could try to pack them as sprites; see this article: http://www.droidnova.com/2d-sprite-animation-in-android,471.html
