Full Resolution Camera Access in j2me - java

I'm trying to do an image capture on a high-end Nokia phone (N95). The phone's internal camera is very good (4 megapixels) but in J2ME I only seem to be able to get a maximum image size of 1360x1020. I drew largely from this example: http://developers.sun.com/mobility/midp/articles/picture/
What I did was start with 640x480 and increase the width and height by 80 and 60, respectively, until it failed. The line of code is:
jpg = mVideoControl.getSnapshot("encoding=jpeg&quality=100&width=" + width + "&height=" + height);
So the two issues are:
1. The phone throws an exception when getting an image larger than 1360x1020.
2. The higher resolution images appear to be just smoothed versions of the smaller ones. E.g. when I take a 640x480 image and upscale it in Photoshop, I can't tell the difference between it and one that is supposedly 1360x1020.
Is this a limitation of J2ME on the phone? If so, does anyone know of a way to get a higher resolution from within a J2ME application and/or how to access the native camera from within another application?

This explanation on the Nokia forum may help you.
It says that "The maximum image size that can be captured depends on selected image format, encoding options and free heap memory available."
and
"It is thus strongly adviced that at least larger images (larger than 1mpix) are captured as JPEG images and in a common image size (e.g. 1600x1200 for 2mpix an so on). Supported common image sizes are dependent on product and platform version."
So I suggest you try:
1. 1600x1200, 1024x768 and whatever image resolutions your N95 guide mentions (see the sketch below)
2. BMP and PNG encodings as well.
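For example, you could first ask the device what it claims to support and then fall back through a few candidate sizes. A rough sketch (video.snapshot.encodings is the standard MMAPI property; the size list is just a guess):
String encodings = System.getProperty("video.snapshot.encodings"); // device-reported formats/sizes
String[] sizes = { "width=2592&height=1944", "width=1600&height=1200", "width=1024&height=768" };
byte[] jpg = null;
for (int i = 0; i < sizes.length && jpg == null; i++) {
    try {
        jpg = mVideoControl.getSnapshot("encoding=jpeg&quality=100&" + sizes[i]);
    } catch (MediaException e) {
        // this size is not supported, try the next one
    }
}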
Anyway, based on my earlier experience (which could be outdated), J2ME implementations are full of bugs, so there may not be a working solution to your problem.

Your camera's native resolution is 2592 x 1944. Try capturing at that size to see how it goes.
This page: http://developers.sun.com/mobility/midp/articles/picture/index.html mentions the use of:
byte[] raw = mVideoControl.getSnapshot(null);
Image image = Image.createImage(raw, 0, raw.length);
Passing null and working with the raw byte array seems interesting as a way to get the camera's default (possibly native) resolution.

The 'quality' of a JPEG (as interpreted by the code) has nothing to do with the resolution. Rather, it controls how compressed the image is. A 640x480 image at quality 100 will look noticeably better than a 640x480 image at quality 50, but will use more storage space.
Try this instead:
jpg = mVideoControl.getSnapshot("encoding=jpeg&quality=100&width=2048&height=1536");

Related

JPEG to SVG and Image Tracing

Currently we have a requirement where we have an image depicting the blueprint of a mall (red marks the booked areas and white marks the available areas), and the image is available in a raster (JPEG) format.
We would like to drag and drop some icons onto the available (white) areas of the image. There should also be zoom-in and zoom-out functionality for this image.
Since JPEG is a lossy raster format, zooming beyond a certain limit results in a jagged image. One proposed solution is to convert the image to SVG (Scalable Vector Graphics).
Going by the expanded form of SVG, it simply tells us that the image is:
s => scalable (i.e. you can zoom to any level without compromising quality)
v => vector (i.e. co-ordinates are available)
So by simply looking at the XML form of the image, we could decide whether to allow dropping an object where fill=red or fill=white, red and white being the two colors in the image. This might not be an appropriate solution; I'm just guessing here.
Now the problems I see with this approach are:
Converting the image with an open-source tool (Inkscape): if we trace it with Inkscape, which uses Potrace internally to trace the image, it can handle only black and white.
Note: most of the tools come with some licensing restrictions.
Another problem with Inkscape is that it embeds the raster image into the SVG and does not create co-ordinates. If you trace it with Inkscape, it only creates an outline of the image.
Converting it with some online utility: not recommended in our case, and doing so results in a very large SVG. For a 700 KB file, the generated SVG is about 39 MB, which crashes the browser when opened.
Most of the time, when an image is converted to SVG it becomes far too large, which is a big factor to worry about. There are utilities like gzip to compress files, but that is a two-step route: first you convert, then you compress.
Using Delineate (which employs the Potrace and AutoTrace engines): the quality of the image produced is not good.
Using Java code: again the quality suffers, Java's imaging support is not really suited to this conversion, and the size is again far too large.
Converting the image to PDF and then to SVG: this also embeds the raster image into the SVG file, which is useless as no co-ordinates are available.
Does anybody have any idea how to deal with this situation? Can we handle the drag and drop on the raster (JPEG, PNG, etc.) image itself?
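I was also wondering whether something as simple as checking the pixel color under the drop point on the raster image would do. A rough sketch of what I mean (placeIcon is just a stand-in for our own drop handling, and 'plan' is the blueprint loaded as a BufferedImage):
int rgb = plan.getRGB(dropX, dropY) & 0xFFFFFF;   // strip the alpha channel
boolean available = (rgb == 0xFFFFFF);            // white = available, red = booked
if (available) {
    placeIcon(dropX, dropY);                      // stand-in for our actual drop handling
}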
Thanks
Dishant Anand

drawable sizes for different screen sizes in android

I have a problem in Android development that has been bothering me. My problem is screen size and dealing with it, especially with images. For example, I want to create a background image for my activity in Photoshop, and my background image contains the word "HELLO". But when I put it in the drawable-xhdpi folder, it looks blurry and not sharp! My phone is a Nexus 4, and according to the Google documentation I created the background image at 640 x 480.
When I create the background image at 960 x 720 it looks better, but still not perfect, and in this case the image file size is very high!
What is the standard way to do this? Please help me solve this problem once and for all. I have read the Google documentation but it does not solve my problem!
http://developer.android.com/guide/practices/screens_support.html
You should usually avoid creating background images for specific screen sizes, because there are thousands of different devices and you would have to create dozens of such images.
The first thing you need to be aware of is screen density.
Generally you create 3 to 5 images without even looking at screen size: low (120 dpi), medium (160 dpi), high (240 dpi), extra high (320 dpi) and 2x extra high (480 dpi). These go into drawable-Xdpi folders, where X is one of l, m, h, xh, xxh.
Next, when you want to have bigger images on bigger screens (bigger phones, small and big tablets), you may want to put images into folders like drawable-sw600dp-Xdpi. This is not the case for your phone.
The Nexus 4 is an xhdpi 640x384 dp device, but you should not treat it differently from a Samsung Galaxy S2 (hdpi 533x320 dp).
Create an image of smaller size for both phones and center it horizontally. E.g. 320x100 px for mdpi, 480x150 px for hdpi and 640x200 px for xhdpi (your phone).
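If you want to double-check which density bucket a device actually reports at runtime, a quick sketch (run inside any Activity; the fields come from android.util.DisplayMetrics):
DisplayMetrics metrics = getResources().getDisplayMetrics();
int dpi = metrics.densityDpi;                        // e.g. 320 on a Nexus 4 (xhdpi)
float scale = metrics.density;                       // px per dp, e.g. 2.0 on xhdpi
int widthDp = (int) (metrics.widthPixels / scale);   // screen width in dp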
The screen resolution of the Nexus 4 is 1280x768 (http://www.google.com/nexus/4/specs/), so resize the image to that resolution. Take special care: some images cannot handle the resolution change and end up looking disproportionate.
Also of interest, a resolution calculator: http://members.ping.de/~sven/dpi.html
This is a problem of Android fragmentation and you just cannot deal with it perfectly, as there are several hundred different devices. As the colleague above wrote, the Nexus 4 has a resolution of 1280 x 768, so an image resolution of 960 x 720 is certainly a good choice. I'm even surprised that Google suggests 640 x 480 for xhdpi; it's definitely too little.
So, as I said, you cannot make perfect-looking graphics for all existing devices. You should choose the most popular devices from every screen category (xhdpi, mdpi, ldpi, etc.) to cover the most important market share.
With 1600+ Android models, even after they are categorized into a few screen sizes and a few DPIs, it is very difficult to manage layouts. I suggest that you concentrate on designing layouts with respect to screen size and then create your views as resizable views to neutralize density effects.
Once you have created your layouts, resize the views. You can create a custom View and resize it in its onMeasure(), as in the sketch below.
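A rough sketch of such a resizable view (the halving factor is arbitrary, just to show the idea):
import android.content.Context;
import android.util.AttributeSet;
import android.view.View;

public class HalfSizeView extends View {
    public HalfSizeView(Context context, AttributeSet attrs) { super(context, attrs); }

    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        int w = MeasureSpec.getSize(widthMeasureSpec) / 2;   // take half of the available width
        int h = MeasureSpec.getSize(heightMeasureSpec) / 2;  // and half of the available height
        setMeasuredDimension(w, h);
    }
}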

How to scale down the size and quality of an BufferedImage?

I'm working on a project, a client-server application named 'remote desktop control'. What I need to do is take a screen capture of the client computer and send it to the server computer. I would probably need to send 3 to 5 images per second. But considering that sending a BufferedImage directly would be too costly, I need to reduce the size of the images. The image quality does not need to be lossless.
How can I reduce the byte size of the image? Any suggestions?
You can compress it with gzip very easily by using GZIPOutputStream when sending and GZIPInputStream on the other end of the socket.
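A minimal sketch of the sending side (assuming 'socket' is a connected Socket and 'capture' is the grabbed BufferedImage; the receiver mirrors this with GZIPInputStream and DataInputStream):
int w = capture.getWidth(), h = capture.getHeight();
int[] pixels = capture.getRGB(0, 0, w, h, null, 0, w);          // raw ARGB pixels
DataOutputStream out = new DataOutputStream(
        new GZIPOutputStream(socket.getOutputStream(), true));  // syncFlush so each frame is pushed out
out.writeInt(w);
out.writeInt(h);
for (int p : pixels) out.writeInt(p);                           // the gzip stream compresses all of this
out.flush();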
Edit:
Also note that you can create delta images for transmission. You can use a "transparent color", for example magic pink (#FF00FF), to indicate that no change was made in that part of the screen. On the other side you draw the new image over the last one, ignoring these magic pixels.
Note that if the picture already contains this color, you can change the real pink pixels to #FF00FE, for example. This is unnoticeable.
Another option is to transmit a 1-bit mask with every image (after painting the no-change pixels an arbitrary color). For this you can pick the color that is most common in the picture, to get the best compression ratio (optimal Huffman coding).
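A rough sketch of the magic-pink delta (the exact marker value and the #FF00FE nudge are just the conventions described above):
static final int MAGIC = 0xFFFF00FF;                 // opaque #FF00FF marker for "no change"

static BufferedImage delta(BufferedImage prev, BufferedImage curr) {
    int w = curr.getWidth(), h = curr.getHeight();
    BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int c = curr.getRGB(x, y);
            if (c == prev.getRGB(x, y)) {
                out.setRGB(x, y, MAGIC);                         // unchanged -> marker color
            } else {
                out.setRGB(x, y, c == MAGIC ? 0xFFFF00FE : c);   // nudge real pink so it never means "no change"
            }
        }
    }
    return out;
}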
Vbence's suggestion of using a GZIPInputStream is a good one. The way this is done in most commercial software - Windows Remote Desktop, VNC, etc. - is that only changes to the screen buffer are sent. So you keep a copy on the server of what the client 'sees', and with each consecutive capture you calculate what is different in terms of screen areas. Then you only send those screen areas along with their top-left coords, width and height, and update the server copy of the client 'view' with just these new areas.
That will MASSIVELY reduce the amount of network data you use. While I have been typing this answer, only 400 or so pixels (20x20) have been changing with each keystroke. On a 1920x1080 screen that is roughly 1/5,000th of the screen, so clearly worth thinking about.
The only expensive part is how you calculate the 'difference' between one frame and the next. There are plenty of libraries out there that do it cheaply; most of them are very mathematical (discrete cosine transform type stuff, way over my head), but it can be done relatively cheaply - see the naive sketch below.
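A brute-force version just scans for the bounding box of changed pixels (real implementations work on tiles or blocks; java.awt.Rectangle holds the result):
static Rectangle changedRegion(BufferedImage prev, BufferedImage curr) {
    int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE, maxX = -1, maxY = -1;
    for (int y = 0; y < curr.getHeight(); y++) {
        for (int x = 0; x < curr.getWidth(); x++) {
            if (curr.getRGB(x, y) != prev.getRGB(x, y)) {
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
        }
    }
    if (maxX < 0) return null;                                   // nothing changed this frame
    return new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1);
}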
See this thread for how to encode to JPEG with a controllable compression/quality level.
Ultimately it would be better to encode the images directly to a video codec that can be streamed, but I am a little hazy on the details.
One way would be to use the ImageIO API:
ImageIO.write(buffimg, "jpg", new File("buffimg.jpg"));
As for the quality and other parameters, I'm not sure, but it should be possible - just dig deeper.
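For reference, quality can be controlled explicitly with the standard javax.imageio classes; a short sketch (0.7f is an arbitrary value, 'buffimg' is the image from above):
ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
ImageWriteParam param = writer.getDefaultWriteParam();
param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
param.setCompressionQuality(0.7f);                               // 0.0 = smallest file, 1.0 = best quality
ImageOutputStream ios = ImageIO.createImageOutputStream(new File("buffimg.jpg"));
writer.setOutput(ios);
writer.write(null, new IIOImage(buffimg, null, null), param);
writer.dispose();
ios.close();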

Resize Image files

I have a lot of images taken with my digital camera at a very high resolution (3000 x 4000) and they take up a lot of hard disk space. I used Photoshop to open each image and resize it to a smaller resolution, but that takes a lot of time and effort.
I think I could write a simple program that opens the folder of images, reads each file, gets its width and height, and if they are very large, resizes the image and overwrites it with the smaller one.
Here is some code I use in a Java EE project (it should work in a normal application too):
// ResampleOp and ResampleFilters come from the java-image-scaling library
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

int rw = 1024; // the width I needed (example value)
BufferedImage image = ImageIO.read(new File(filename));
ResampleOp resampleOp = new ResampleOp(rw, (rw * image.getHeight()) / image.getWidth());
resampleOp.setFilter(ResampleFilters.getLanczos3Filter());
image = resampleOp.filter(image, null);
File tmpFile = new File(tmpName);
ImageIO.write(image, "jpg", tmpFile);
The resample filter comes from the java-image-scaling library. It also contains BSpline and Bicubic filters among others if you don't like Lanczos3. If the images are not in the sRGB color space, Java silently converts them to sRGB (which happened to be what I needed).
Also, Java loses all EXIF data, though it does provide some (very hard to use) methods to retrieve it. For color-correct rendering you may wish to at least add an sRGB flag to the file. For that, see here.
+1 to what some of the other folks said about not specifically needing Java for this, but I imagine you must have known this and were maybe asking because you either wanted to write such a utility or thought it would be fun?
Either way, getting the image file listing from a directory is straightforward; resizing the files correctly can take a bit more legwork, as you'll notice from Googling for best practices and seeing about 9 different ways to actually do it.
I wrote imgscalr to address this exact issue; it's a dead-simple API (single class, bunch of static methods) and has some good adoption in webapps and other tools utilizing it.
Steps to resize would look like this (roughly):
Get file list
BufferedImage image = ImageIO.read(files[i]);
image = Scalr.resize(image, width);                                     // keeps the original proportions
ImageIO.write(image, "jpg", new File("resized-" + files[i].getName())); // destination name is up to you
There are a multitude of "resize" methods to call on the Scalr class, and all of them honor the image's original proportions. So if you scale only using a targetWidth (say 1024 pixels) the height will be calculated for you to make sure the image still looks exactly right.
If you scale with width and height, but they would violate the proportions of the image and make it look "Stretched", then based on the orientation of the image (portrait or landscape) one dimension will be used as the anchor and the other incorrect dimension will be recalculated for you transparently.
There are also a multitude of different Quality settings and FIT-TO scaling modes you can use, but the library was designed to "do the right thing" always, so using it is very easy.
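For example (assuming the library's Method and Mode enums), fitting a photo to a 1024-pixel width at the higher-quality setting looks roughly like this:
BufferedImage resized = Scalr.resize(image, Scalr.Method.QUALITY, Scalr.Mode.FIT_TO_WIDTH, 1024);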
You can dig through the source, it is all Apache 2 licensed. You can see that it implements the Java2D team's best-practices for scaling images in Java and pedantically cleans up after itself so no memory gets leaked.
Hope that helps.
You do not need Java to do this; it's a waste of time and resources. If you have Photoshop you can do it by recording actions: batch resize using actions
AffineTransformOp offers the additional flexibility of choosing the interpolation type, as shown here.
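A minimal sketch using the standard java.awt classes (the 0.25 factor is just an example, and 'original' is the loaded BufferedImage):
double scale = 0.25;                                               // shrink to a quarter of the size
AffineTransform at = AffineTransform.getScaleInstance(scale, scale);
AffineTransformOp op = new AffineTransformOp(at, AffineTransformOp.TYPE_BILINEAR);
BufferedImage small = op.filter(original, null);                   // null lets the op create the destination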
You can individually or batch resize with our desktop image resizing application called Sizester. There's a full functioning 15-day free trial on our site (www.sizester.com).

Android Photoshop PNG Icons appear large and stretched

I am developing for Android. When I create icons in Photoshop (and convert them to PNG), they appear larger and stretched within my Android application. The emulator that I am using is medium density. Does anyone have some tips for how I can create my icons in Photoshop so that they appear normally on Android?
Thanks!
The dpi of the PNG isn't relevant in this instance; only the actual pixel size matters. How are you displaying the images? If you're using an ImageView, try setting android:scaleType="center" (which does no scaling). If you're setting its width and height with wrap_content it shouldn't matter, but it's worth a try.
Also, if you're accessing them from the drawable folder, try placing them under a new folder called drawable-mdpi. Android should detect that the emulator is set to medium density and automatically use the resources from the mdpi folder if they exist.
PNGs can store physical size information (dpi). That's probably why you see the image larger and stretched.
Check Photoshop's image size options; if necessary, fix the print size so the aspect ratio is preserved.
I also have this problem and I don't know why. I am using a Mac, and the PNG files it produces are extremely large. For example, where the normal size should be 8 KB, the file is 48 KB. If I use another editor to edit and save them, the size goes back to normal. It seems that Photoshop is saving some extra info into the PNG file.
