I have ported a Java implementation of AnimateGIFWriter to Android here. It works fine except that I don't get the transparency. I tried passing an Options object to the decode() method like this:
Options opts = new Options();
opts.inPreferredConfig = Bitmap.Config.ARGB_8888;
Bitmap bitmap = BitmapFactory.decodeStream(fin, null, opts);
After reading the input image into a Bitmap, I grab the data as an int array through Bitmap.getPixels() and auto-detect the transparency from the alpha values. But this doesn't work: the resulting animated GIF is not transparent. (The input images are all transparent, and my code is supposed to preserve that through the auto-transparency detection; this works for the desktop application.)
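For reference, a minimal sketch of that detection step (the variable names are illustrative; classes are from android.graphics):

int[] pixels = new int[bitmap.getWidth() * bitmap.getHeight()];
bitmap.getPixels(pixels, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
boolean hasTransparency = false;
for (int p : pixels) {
    if ((p >>> 24) != 0xFF) { // any pixel with alpha below 255
        hasTransparency = true;
        break;
    }
}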
I am not quite familiar with Android related stuff. My question is given the above configuration, will BitmapFactory.decode() keep the transparency information of the input image?
Update: I found out that for images that support full alpha transparency, like PNG, decoding does keep the transparency information. But for single-color transparency images like GIF, it seems not to. This needs further confirmation.
My question is given the above configuration, will BitmapFactory.decode() keep the transparency information of the input image?
Yes, transparency/alpha should be loaded into that bitmap.
To verify this, you could try fetching the value of a single pixel -- maybe (0,0) -- and printing it to the log.
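Something like this, assuming the bitmap variable from the question (the log tag is arbitrary; Color and Log are from android.graphics and android.util):

int pixel = bitmap.getPixel(0, 0);
Log.d("AlphaCheck", "alpha = " + Color.alpha(pixel)); // 0 = fully transparent, 255 = opaque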
I don't have the time (or desire) to get into your full 1600 lines of code, but I would be suspicious of this part in the writeFrame method:
if(frame.getTransparencyFlag() == GIFFrame.TRANSPARENCY_INDEX_SET && frame.getTransparentColor() != -1) {
When I create a Bitmap and Canvas with the following code:
Bitmap bitmap = Bitmap.createBitmap(640, height, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(bitmap);
and I put a breakpoint at the second line to display the preview, it turns out the created Bitmap's size differs from what was specified. It's supposed to be 640px wide, but the preview says it's 377px instead.
The height is scaled down by the same factor (640/377 is the same as 1737/1024) so I presume some unwanted scaling takes place there.
Interestingly, bitmap.getWidth() returns 640.
I initially thought it might be an issue with the debugger's preview, but I checked: when I load some images using:
BitmapFactory.decodeResource(context.resources, R.drawable.image)
and I scale them later to the desired width:
Bitmap.createScaledBitmap(bitmap, desiredWidthPx, targetHeightToKeepAspectRatio, false)
the debugger's preview shows them correctly, i.e. their size matches desiredWidthPx and targetHeightToKeepAspectRatio. So it's not the preview's issue.
For context: I stumbled upon this issue while working on a Bitmap with a specific width in pixels. The Bitmap is going to be printed by a thermal printer later on, so the size must not depend on the Android device's screen size, density, and so on. I started debugging this issue after realizing the printer printed only around 60% of the image (it was cropped horizontally).
I'm not sure exactly what's going on, but by looking at the printouts, I think it's reasonable to suspect the scaling to be the issue here.
Bitmap size in the debugger's preview is limited to 1024px, regardless of the method used to create the bitmap; see this similar question and the issue tracker. Your issue with the printer is not related to it.
You could use a BufferedImage instead:
https://docs.oracle.com/javase/7/docs/api/java/awt/image/BufferedImage.html
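For example, a minimal sketch with plain Java2D (note that BufferedImage is desktop Java, not part of the Android SDK; the dimensions are illustrative):

BufferedImage image = new BufferedImage(3000, 500, BufferedImage.TYPE_INT_ARGB);
Graphics2D g = image.createGraphics();
// ... draw the graph here ...
g.dispose(); // release the graphics context when done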
I have the known problem that my bitmap/canvas is too big, which throws a java.lang.OutOfMemoryError.
My question is what would be the best approach for my needs.
The Canvas should draw a graph (with given points) and can be very wide (like 3000px and more; theoretically its width could be much larger, like 20000px). The height is fixed.
Because that's too wide for any screen, I put it in a ScrollView and draw the whole graph into the canvas.
So that's too wide for the bitmap, and I get the error.
The second possibility would be a fixed-size canvas, for which I'd write an "onScroll" method that redraws the graph depending on the user's swipe, so it would only draw a part of the graph at a time.
Would that be the better way, or is there a way to make the first option work?
Either way, please give me some hints and example code for the solution.
Here is the code:
Bitmap bitmap = Bitmap.createBitmap(speedCanvasWidth, speedCanvasHeight, Bitmap.Config.RGB_565); // I also tried ARGB_8888
speedCanvas = new Canvas(bitmap);
graph.setImageBitmap(bitmap);
Thanks in advance
You can handle this with a BitmapRegionDecoder. Just create an instance that points to your image; the system keeps a handle on the image, and you can then call decode on the decoder for whatever rectangle you want displayed within the canvas. Updates to the canvas have to be handled based on your needs. This avoids loading the entire large image into memory.
You can also get details of the Bitmap in question beforehand by decoding it with a BitmapFactory.Options whose inJustDecodeBounds flag is set to true. That keeps the Bitmap from actually being loaded into memory during the check.
For instance, a quick retrieval could be done with the following:
BitmapRegionDecoder decoder = BitmapRegionDecoder.newInstance("pathToFile", true);
Bitmap regionOfInterestBitmap = decoder.decodeRegion(rectWithinImage, null); // or pass the BitmapFactory.Options you decided to use
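For the bounds check mentioned above, a minimal sketch (the path is the same placeholder as before):

BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true; // decode only the header; no pixel memory is allocated
BitmapFactory.decodeFile("pathToFile", options);
int imageWidth = options.outWidth;
int imageHeight = options.outHeight;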
I have a lot of images taken by my digital camera at a very high resolution (3000×4000), and they take up a lot of hard disk space. I used Photoshop to open each image and resize it to a smaller resolution, but that takes a lot of time and effort.
I think I can write a simple program that opens the folder of images, reads each file, gets its width and height, and if they are very large, resizes the image and overwrites it with the smaller one.
Here is some code I use in a Java EE project (it should work in a normal application too):
// Needs: java.io.File, javax.imageio.ImageIO, java.awt.image.BufferedImage,
// and ResampleOp/ResampleFilters from com.mortennobel.imagescaling (java-image-scaling)
int rw = 800; // the target width I needed
BufferedImage image = ImageIO.read(new File(filename));
ResampleOp resampleOp = new ResampleOp(rw, (rw * image.getHeight()) / image.getWidth());
resampleOp.setFilter(ResampleFilters.getLanczos3Filter());
image = resampleOp.filter(image, null);
File tmpFile = new File(tmpName);
ImageIO.write(image, "jpg", tmpFile);
The resample filter comes from the java-image-scaling library. It also contains BSpline and bicubic filters, among others, if you don't like Lanczos3. If the images are not in the sRGB color space, Java silently converts them to sRGB (which happened to be what I needed).
Java also loses all EXIF data, though it does provide some (very hard to use) methods to retrieve it. For color-correct rendering you may wish to at least add an sRGB flag to the file. For that, see here.
+1 to what some of the other folks said about not specifically needing Java for this, but I imagine you must have known this and were maybe asking because you either wanted to write such a utility or thought it would be fun?
Either way, getting the image file listing from a directory is straightforward; resizing the files correctly can take a bit more legwork, as you'll notice from Googling for best practices and seeing about 9 different ways to actually do it.
I wrote imgscalr to address this exact issue; it's a dead-simple API (single class, bunch of static methods) and has some good adoption in webapps and other tools utilizing it.
Steps to resize would look like this (roughly; a fuller sketch follows below):
Get file list
BufferedImage image = ImageIO.read(files[i]);
image = Scalr.resize(image, width);
ImageIO.write(image, "jpg", files[i]);
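Assembled into a rough, self-contained sketch (the photos directory and the 1024px target are made-up examples; the Scalr package name depends on your imgscalr version):

import java.io.File;
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import org.imgscalr.Scalr;

for (File f : new File("photos").listFiles()) {
    if (!f.getName().toLowerCase().endsWith(".jpg")) continue; // skip non-JPEGs
    BufferedImage image = ImageIO.read(f);
    image = Scalr.resize(image, 1024); // proportions are preserved automatically
    ImageIO.write(image, "jpg", f);    // overwrite with the resized version
}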
There are a multitude of "resize" methods to call on the Scalr class, and all of them honor the image's original proportions. So if you scale only using a targetWidth (say 1024 pixels) the height will be calculated for you to make sure the image still looks exactly right.
If you scale with both width and height, but they would violate the proportions of the image and make it look stretched, then based on the orientation of the image (portrait or landscape), one dimension is used as the anchor and the other, incorrect dimension is recalculated for you transparently.
There are also a multitude of different Quality settings and FIT-TO scaling modes you can use, but the library was designed to "do the right thing" always, so using it is very easy.
You can dig through the source, it is all Apache 2 licensed. You can see that it implements the Java2D team's best-practices for scaling images in Java and pedantically cleans up after itself so no memory gets leaked.
Hope that helps.
You do not need Java to do this; it's a waste of time and resources. If you have Photoshop, you can do it by recording actions: batch resize using actions
AffineTransformOp offers the additional flexibility of choosing the interpolation type, as shown here.
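A minimal sketch of that approach (the scale factor and the bilinear interpolation are illustrative choices):

import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

double scale = 0.25; // shrink to a quarter of the original dimensions
AffineTransform transform = AffineTransform.getScaleInstance(scale, scale);
AffineTransformOp op = new AffineTransformOp(transform, AffineTransformOp.TYPE_BILINEAR);
BufferedImage scaled = op.filter(image, null); // null lets the op allocate the destination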
You can individually or batch resize with our desktop image resizing application called Sizester. There's a full functioning 15-day free trial on our site (www.sizester.com).
I need to translate the colors of a bitmap loaded into a BufferedImage from RGB to YCbCr (luminance and two chrominance channels) and back after processing.
I did this with functions like rgb2ycbcr() called in the main method for each pixel, but that isn't a very smart solution. I should use the ColorSpace and ColorModel classes to get a BufferedImage with the correct color space. That would be a more flexible method, but I don't know how to do it.
I'm lost and I need some tips. Can somebody help me?
As I understood your question, you want to do the following:
Load RGB image -> process YCbCr image -> Use RGB image again
And you want us to help you, to make this process as seamless as possible. First and foremost you want us to give you a simple way to avoid the -> (converting) parts.
Well, I looked into the BufferedImage documentation. It seems there is no way to change the ColorSpace of a BufferedImage once it has been created.
You could create a new BufferedImage with a YCbCr color space; for that you can use the predefined ICC_ColorSpace. Then you copy the data from your old image, possibly via ColorSpace.fromRGB, into the YCbCr color space, do the image processing, and convert back via ColorSpace.toRGB. This method requires you to fully convert the image before and after processing via existing methods. Furthermore, you have to know how ICC_ColorSpace converts your image to the YCbCr color space; otherwise you can't know which array indices correspond to the same pixel.
If you just want to create a wrapper around the RGB BufferedImage that lets you manipulate the image as if it were a YCbCr image, that isn't possible with BufferedImage.
EDIT:
To convert the color space of a BufferedImage use ColorConvertOp. The code would look something like this:
ColorConvertOp cco = new ColorConvertOp(new YCbCrColorSpace(), null);
BufferedImage ycbcrImage = cco.filter( oldRGBImage, null );
This requires you to either write your own ColorSpace class or download and use the classes mentioned here. If you just want to load a JPEG image, you should use the predefined classes.
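Converting back after processing works the same way in the other direction; a sketch, assuming the ycbcrImage from above (ColorConvertOp is in java.awt.image, ColorSpace in java.awt.color):

ColorConvertOp toRGB = new ColorConvertOp(ColorSpace.getInstance(ColorSpace.CS_sRGB), null);
BufferedImage rgbImage = toRGB.filter(ycbcrImage, null);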
I have a BufferedImage created using
new BufferedImage(wid,hgt,BufferedImage.TYPE_INT_ARGB);
to which I assemble a wallpaper using multiple other images. It works fine in Java SE, but when I tried to run the code on a J9 CDC/PP platform I discovered that the Personal Profile BufferedImage has no constructors!
Can anyone point me to how I can construct an alpha-channel supporting image using CDC 1.0 and Personal Profile 1.1?
Edit: For now I have created fallback code which handles NoSuchMethodError (et al.) and then simply creates an image with GraphicsConfiguration.createCompatibleImage(int,int). It may be that this creates an alpha-blending image, but it will be a few weeks before I can specifically test that due to other priorities (testing on handhelds is not my direct responsibility, so it's out of my hands).
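A sketch of that fallback, assuming the usual AWT way of obtaining a GraphicsConfiguration (whether the resulting image really supports alpha is exactly what remains to be tested):

Image img; // java.awt classes throughout
try {
    img = new BufferedImage(wid, hgt, BufferedImage.TYPE_INT_ARGB);
} catch (NoSuchMethodError e) {
    // Personal Profile: the BufferedImage constructors are missing
    GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
            .getDefaultScreenDevice().getDefaultConfiguration();
    img = gc.createCompatibleImage(wid, hgt);
}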
If I find a better answer, I will post it as an answer to this; in the meantime, if someone else beats me to it, be assured I will accept your answer if it works, and the answer will be of interest to me for the foreseeable future (I expect to still need an answer in 2-5 years).
The Image class (javax.microedition.lcdui.Image) contains a method getRGB(...) which parses the Image into an array of RGB+alpha values for each pixel in the image. Once you have the image in that format, it's easy to tweak the alpha values to increase their transparency before you layer the images. This is really the only dynamic way I've seen to edit the transparency of an image in J2ME.
To get the alpha (transparency) value out of the RGBA array, you have to use bit shifting like this:
int origAlpha = (rgba[j] >> 24) & 0xff; // mask off the sign-extended bits
and then, to change the alpha (transparency) value to something different (without changing the color at that pixel), you can use bit shifting to insert a different transparency level:
int newAlpha = 0x33; // or use whatever 0-255 value you want, with 255=opaque, 0=transparent
rgba[j] = (rgba[j] & 0x00ffffff);
rgba[j] = (rgba[j] | (newAlpha << 24));
Then there is the createImage(...) family of factory methods on Image; for a modified pixel array, Image.createRGBImage(int[] rgb, int width, int height, boolean processAlpha) can be used to create a new image out of your modified pixel data.
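Putting the pieces together, a rough fade sketch (w, h, and img are assumed to exist; Image is javax.microedition.lcdui.Image):

int[] rgba = new int[w * h];
img.getRGB(rgba, 0, w, 0, 0, w, h); // extract ARGB pixels
int newAlpha = 0x33; // mostly transparent
for (int j = 0; j < rgba.length; j++) {
    rgba[j] = (rgba[j] & 0x00ffffff) | (newAlpha << 24);
}
Image faded = Image.createRGBImage(rgba, w, h, true); // true = process alpha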
Also helpful: SonyEricsson's developer website has a tutorial with sample code called "Fade in and out images in MIDP 2.0", which explains "how to change the alpha value of an image to make it appear blended", which is essentially alpha blending.