I'm using webview.capturePicture() to create a Picture object that contains all the drawing objects for a webpage.
I can successfully render this Picture object to a bitmap using the canvas.drawPicture(picture, dst) with no problems.
However, when I use picture.writeToStream(fos) to serialize the Picture object out to a file, and then
Picture.createFromStream(fis) to read the data back in and create a new Picture object, the resulting bitmap, when rendered as above, is missing any larger images (anything over roughly 20 KB, by observation).
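For reference, here is a minimal sketch of the round trip I'm describing (the WebView instance and the file path are just placeholders):

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Picture;
import java.io.FileInputStream;
import java.io.FileOutputStream;

// ... somewhere inside an Activity that already holds a WebView called webView ...
Picture picture = webView.capturePicture();

// Rendering the freshly captured Picture works fine, large images included.
Bitmap bitmap = Bitmap.createBitmap(picture.getWidth(), picture.getHeight(), Bitmap.Config.ARGB_8888);
new Canvas(bitmap).drawPicture(picture);

// Round trip through a file...
FileOutputStream fos = new FileOutputStream("/sdcard/page.spool");
picture.writeToStream(fos);
fos.close();

FileInputStream fis = new FileInputStream("/sdcard/page.spool");
Picture restored = Picture.createFromStream(fis);
fis.close();

// ...but rendering 'restored' the same way is missing the larger images.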
This occurs on all the Android OS versions that I have tested: 1.5, 1.6 and 2.1.
Looking at the native code for Skia (the underlying Android graphics library) and at the output file produced by picture.writeToStream(), I can see how the file format is constructed.
I can see that some of the images in this Skia spool file are not being written out (the larger ones). The code that appears to be the problem is in SkBitmap.cpp, in the method
void SkBitmap::flatten(SkFlattenableWriteBuffer& buffer) const;
It writes out the bitmap's fWidth, fHeight, fRowBytes, fConfig and isOpaque values, but then just writes out SERIALIZE_PIXELTYPE_NONE (0). This means the spool file does not contain any pixel information for the actual image and therefore cannot restore the Picture object correctly.
Effectively this renders the writeToStream() and createFromStream() APIs useless, as they do not reliably store and recreate the picture data.
Has anybody else seen this behaviour? If so, am I using the API incorrectly? Can it be worked around? Is there an explanation (e.g. incomplete API or a bug), and if so, are there any plans for a fix in a future release of Android?
Thanks in advance.
That's the way the API is meant to work. It was never intended for long-term storage, but to store the flattened data within the current process, or to send it to another process. What you are asking for will not be supported.
On the Honeycomb platform it appears that writeToStream() and createFromStream() now store and recreate the Picture object including large image data.
However it does come with the following caveats:
The image data used in a picture must be of an immutable type.
The image data must have been created with the following BitmapFactory.Options set to true: inInputShareable and inPurgeable. This can be done by using BitmapFactory.decodeResource() and passing in the BitmapFactory.Options.
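A minimal sketch of decoding a drawable with those options set (the resource id is only illustrative):

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inPurgeable = true;       // pixels may be purged and re-decoded on demand
opts.inInputShareable = true;  // the decoder may share a reference to the input data
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.my_image, opts);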
It so happens that Pictures created by WebView do contain images that meet these criteria, and they can therefore be serialized and restored.
I have not yet confirmed that Ice Cream Sandwich also works, but I am assuming/hoping that it will.
I'm trying to read the metadata of a PNG file with Java, following the solution proposed here.
But the method ImageIO.getImageReaders(inputStream) is returning an empty list of readers.
I assured that the stream is correct by reading it via ImageIO.read and rendering the resulting Image to the screen.
And this is why I'm confused: since ImageIO.read returns a valid image, I assume there is some ImageReader claiming to be able to interpret this stream. Is there a difference between interpreting image data and the metadata of the image?
Any hints or even solutions to this problem?
Thank you very much.
I believe that ImageIO.getImageReaders() expects an ImageInputStream; you can try to create one from your InputStream using ImageIO.createImageInputStream(). I guess that's what ImageIO.read(InputStream) does under the hood.
Anyway, if you already know that you have a PNG, why not use getImageReadersByFormatName("png")?
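For example, a sketch assuming the stream really does contain a PNG:

import java.io.InputStream;
import java.util.Iterator;
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.metadata.IIOMetadata;
import javax.imageio.stream.ImageInputStream;

ImageInputStream iis = ImageIO.createImageInputStream(inputStream);
Iterator<ImageReader> readers = ImageIO.getImageReaders(iis);
// or, since the format is known: ImageIO.getImageReadersByFormatName("png");
if (readers.hasNext()) {
    ImageReader reader = readers.next();
    reader.setInput(iis, true);
    IIOMetadata metadata = reader.getImageMetadata(0);
    // inspect metadata.getAsTree(metadata.getNativeMetadataFormatName()) ...
}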
BTW: height and width (and color model, etc.) can be considered "image metadata", in the sense that they are not part of the pixel values (which would be the real data), but in common parlance they are regarded rather as (essential) image properties. Image metadata is generally (and specifically in IIOMetadata) understood to be additional "miscellaneous" data (such as physical resolution or a timestamp) which is normally not needed to access the image data.
Suppose that I am drawing a set of images using Java graphics objects, and that Java is outputting these images to my monitor. Where are the file or files that are sent to the graphics card (the graphical representation files)? How can I take this file and save it to disk, or write it to an array, or combine the results of several such outputs (to the monitor) into a single file for saving? I don't want to use a screenshot feature; I want to be able to redirect (or also capture) the output to the monitor into some sort of byte stream. I note that monitors are much better than semaphores when you are talking about display capabilities; I don't need a counter-example.
I might not be asking the correct question. It might be that I want to capture the file while it is still in User Space, before it is put into 'Device Space'. I would like to try and capture the byte stream so that I can convert it to MPEG-4 format. I either need a streaming output from the MPEG-4 converter, coming from the streaming input, or else, I need to take static images at discrete times and convert the images.
What format will the output from User Space be in? What format will the Device Space output be in? Try to keep speculation to a minimum.
http://docs.oracle.com/javame/config/cdc/opt-pkgs/api/jsr927/index.html
I guess that Java has a means of displaying AWT objects on a television screen (Java TV).
In Java, both the "user" and "device" data are handled via images and rasters. If you dig a little deeper into the source files you'll see that SunGraphics2D, which is used for drawing behind the scenes, uses pipes (SunGraphics2D lines 154-159), which in turn use Blits to transfer the graphics data onto an image (or a 2-dimensional screen). Both the graphics and the pipes are programmed to do only this. So it is as MadProgrammer said - you need to render the graphics onto an image to extract them. And this is basically as low-level as you get before the native implementations kick in.
This can be done in parallel with the normal paint mechanisms running in Java. The thread MadProgrammer mentioned gives excellent examples, really.
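As a rough illustration, this is the usual pattern for rendering off-screen into a BufferedImage so the pixel data ends up in user space, where you can save it or hand frames to an MPEG-4 encoder (drawScene(), width and height stand in for your own painting code and dimensions):

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

BufferedImage frame = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
Graphics2D g = frame.createGraphics();
drawScene(g);                 // your existing painting code, drawn off-screen
g.dispose();

// The pixel data is now accessible in user space: save it, or feed it to an encoder.
ImageIO.write(frame, "png", new File("frame0001.png"));
int[] pixels = frame.getRGB(0, 0, width, height, null, 0, width);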
I need to send images over a very low-bandwidth connection from an Android phone (down to 10 kByte/s) and would like to send them in progressive (interlaced) mode so that the user at the other end starts seeing the image during the lengthy transfer. Right now, I am creating the image with the regular photo app:
Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
But this creates non-progressive photos and I have not been able to discover how to convince it to do otherwise. The second option I explored (reading and re-compressing the taken image) was foiled because Bitmap's compress method does not allow any encoding parameters besides the format name and compression factor, as far as I could determine:
bitmap.compress(Bitmap.CompressFormat.JPEG, 80, out);
My preferred solution would be to instruct the photo app to save in progressive mode.
The next best option would be a Java algorithm that losslessly converts the stored JPEG to progressive (jpegtran does this on Linux, but it is in C and relies on libjpeg).
The next best would be a method to specify the relevant encoding parameters to Android, allowing me to re-compress it, or an alternative Java library that does the same.
Further research revealed that the algorithms are already there (/system/lib/libjpeg.so) with the sources in ~/android-sdk-linux/source-tree/external/jpeg -- but there do not seem to be JNI wrappers readily available.
Have you seen this document?
http://docs.oracle.com/javase/6/docs/api/javax/imageio/plugins/jpeg/JPEGImageWriteParam.html
It seems to support writing progressive JPEGs.
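Roughly like this on a standard JVM (note this is only a sketch of the desktop javax.imageio API, which is not part of the Android class library):

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;

BufferedImage image = ImageIO.read(new File("input.jpg"));

ImageWriter writer = ImageIO.getImageWritersByFormatName("jpeg").next();
ImageWriteParam param = writer.getDefaultWriteParam();
param.setProgressiveMode(ImageWriteParam.MODE_DEFAULT);   // request progressive scans
param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
param.setCompressionQuality(0.8f);

writer.setOutput(ImageIO.createImageOutputStream(new File("progressive.jpg")));
writer.write(null, new IIOImage(image, null, null), param);
writer.dispose();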
Alternatively, you could use e.g. OpenJPEG through JNI. See http://www.linux-mag.com/id/7697/ as a start.
I am creating an application in Java which will be part of an external application. My application contains a viewport which shows some polygons and things like that. The external application needs to get an image of the viewport in GIF format. For that it calls a method in an interface (implemented by my application) and my application returns the image. The external application needs to store the image in a database (or something related to it, which I don't need to worry about).
My question is: what should be the data container type of the image when my application sends it to the external application? I mean, what should be the return type of the method?
Currently my gif encoder class returns a byte array. Is there any other 'better' option?
A byte array could be appropriate if you expect the GIFs to be small, but you might consider using an OutputStream so you can stream bits more efficiently.
Even if today you just return a fully-populated ByteArrayOutputStream, this would allow you to change your implementation in future without affecting client code.
A more intuitive return type might be java.awt.Image.
Here are some examples:
http://www.google.com/codesearch?q=java+gif+image&hl=en&btnG=Search+Code
If your 'application' is actually calling a Java method then it should understand Java return types and you should return java.awt.Image.
If you are doing this through some kind of remote procedure that can't understand Java types I would return a byte array and let the receiving app decode it.
I'd create two methods:
First method creates the image and returns a java.awt.Image. Here you can put the drawing part of your method.
The second method creates a gif representation of the java.awt.Image as requested by the external application. It should return OutputStream as already suggested.
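As a sketch, the interface could look something like this (names are purely illustrative):

import java.awt.Image;
import java.io.IOException;
import java.io.OutputStream;

public interface ViewportSnapshot {
    // Renders the current viewport (polygons etc.) and returns it as an Image.
    Image renderViewport();

    // Encodes the given image as GIF and writes it to the supplied stream.
    void writeGif(Image image, OutputStream out) throws IOException;
}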
I want to serve a resampled (downsized) version of images using JSP. The original images are stored in the database as blobs. I want to create a JSP that serves a downsampled image of decent quality (not pixelated) for the requested image width/height (e.g. getimage.jsp?imageid=xxxx&maxside=200). Can you point me to an open-source API or code that I can call from the JSP page?
Java already contains libraries for image manipulation. It should be easy to resize an image and output it from a JSP.
This servlet looks like it does a very similar thing to what you want your JSP to do.
Is there anything wrong with the built-in Image.getScaledInstance(w, h, hints)? (*)
Use hints=Image.SCALE_SMOOTH to get non-horrible thumbnailing. Then use ImageIO to convert to the required format for output.
*: well yes, there is something wrong with it, it's a bit slow, but with all the other web overhead to worry about that's not likely to be much of an issue. It's also not the best quality when upscaling images, where drawImage with a BICUBIC rendering hint is more suitable. But you're talking about downscaling only at the moment.
Be sure to check the sizes passed in so that a caller can't DoS your servlet by passing in enormous sizes, causing a memory-eatingly-huge image to be created.
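A rough sketch of what the resizing part could look like (loading the blob, the requestedMaxSide parameter and the 2000-pixel cap are assumptions):

import java.awt.Graphics2D;
import java.awt.Image;
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;

BufferedImage original = ImageIO.read(blobStream);        // InputStream from the database blob
int maxSide = Math.min(requestedMaxSide, 2000);           // guard against enormous requests

double scale = (double) maxSide / Math.max(original.getWidth(), original.getHeight());
int w = (int) Math.round(original.getWidth() * scale);
int h = (int) Math.round(original.getHeight() * scale);

Image scaled = original.getScaledInstance(w, h, Image.SCALE_SMOOTH);
BufferedImage resized = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
Graphics2D g = resized.createGraphics();
g.drawImage(scaled, 0, 0, null);
g.dispose();

ImageIO.write(resized, "jpeg", response.getOutputStream());   // or "png"/"gif" as needed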