Modify image data in ByteBuffer - java

I would like to modify the per-pixel RGB color data of a PNG image that I have loaded as a ByteBuffer, preferably a simple, lightweight solution.
I currently load the data directly from the file into a ByteBuffer using a ReadableByteChannel, which does not decode the PNG data.
So the question is, how do I:
1. Decode the ByteBuffer PNG data into something where I can modify the pixel data
2. Turn it back into a valid ByteBuffer ('valid' meaning it would be accepted by an OpenGL shader)

The PNG image encoding includes (among other things) zlib compression, so you cannot access the pixel values directly. Practically speaking, you need to decode the image, for example by reading it into a BufferedImage with ImageIO.read().
If for some reason (huge image, memory constraints) you don't want to load the full image into memory as a BufferedImage, you could use a progressive reader (e.g. PNGJ).
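Here is a minimal sketch of that decode/modify/re-encode round trip, assuming the raw PNG file bytes are already in a ByteBuffer named pngBuffer (positioned at the start of the data) and that the shader expects tightly packed RGBA8 data (e.g. for glTexImage2D with GL_RGBA / GL_UNSIGNED_BYTE):

    import java.awt.image.BufferedImage;
    import java.io.ByteArrayInputStream;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import javax.imageio.ImageIO;

    // pngBuffer holds the raw PNG file bytes read via the ReadableByteChannel
    byte[] pngBytes = new byte[pngBuffer.remaining()];
    pngBuffer.get(pngBytes);

    // 1. Decode the PNG into a BufferedImage so the pixels can be edited
    BufferedImage img = ImageIO.read(new ByteArrayInputStream(pngBytes));
    for (int y = 0; y < img.getHeight(); y++) {
        for (int x = 0; x < img.getWidth(); x++) {
            int argb = img.getRGB(x, y);
            // ... modify argb here ...
            img.setRGB(x, y, argb);
        }
    }

    // 2. Repack as tightly packed RGBA bytes for the texture upload
    ByteBuffer texBuffer = ByteBuffer.allocateDirect(img.getWidth() * img.getHeight() * 4)
            .order(ByteOrder.nativeOrder());
    for (int y = 0; y < img.getHeight(); y++) {
        for (int x = 0; x < img.getWidth(); x++) {
            int argb = img.getRGB(x, y);
            texBuffer.put((byte) ((argb >> 16) & 0xFF)); // R
            texBuffer.put((byte) ((argb >> 8) & 0xFF));  // G
            texBuffer.put((byte) (argb & 0xFF));         // B
            texBuffer.put((byte) ((argb >> 24) & 0xFF)); // A
        }
    }
    texBuffer.flip();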

Related

Android Camera2 API Image Color space

I used this tutorial to learn and try to understand how to make a simple picture-taking Android app using the Camera2 API. I have added some snippets from the code to see if you can help me answer some questions I have.
I am trying to find out what format the image is saved in. Is it RGB, or BGR?
Is it stored in the variable bytes?
ImageReader reader = ImageReader.newInstance(width, height, ImageFormat.JPEG, 1);

@Override
public void onImageAvailable(ImageReader reader) {
    Image image = null;
    try {
        image = reader.acquireLatestImage();
        // The single plane of a JPEG image holds the complete compressed file
        ByteBuffer buffer = image.getPlanes()[0].getBuffer();
        byte[] bytes = new byte[buffer.capacity()];
        buffer.get(bytes);
        save(bytes);
    } finally {
        if (image != null) {
            image.close();
        }
    }
}
The image is received in JPEG format (as specified in the first line). Android uses the YCbCr (a form of YUV) color space for JPEG. JPEG size is variable; it is compressed with lossy compression, and you have very little control over the level of compression.
Normally, you receive a JPEG buffer in onImageAvailable() and decode this JPEG to get a Bitmap. You can get the pixels of this Bitmap as an int array of packed sRGB pixels. The format of this array will be ARGB_8888.
You don't need JNI to convert it to BGR, see this answer.
You can access Bitmap objects from C++, see ndk/reference/group/bitmap. There you can find the pixel format of this bitmap. If it was decoded from JPEG, you should expect this to be ANDROID_BITMAP_FORMAT_RGBA_8888.
The variable bytes contains an entire compressed JPEG file. You need to decompress it to do anything much with it, such as with BitmapFactory.decodeByteArray or ImageDecoder (newer API levels).
It's not an uncompressed array of RGB values in any sense. If you want uncompressed data, the camera API supports the YUV_420_888 format, which will give you uncompressed 4:2:0 YUV data; still not RGB, though.
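For example, a minimal sketch of decoding that JPEG buffer into a Bitmap and reading its packed pixels, assuming bytes is the array captured in onImageAvailable() above:

    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;

    // Decompress the JPEG; the resulting Bitmap is ARGB_8888 by default
    Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);

    // Copy the pixels into an int array; each int packs the channels as 0xAARRGGBB
    int[] pixels = new int[bitmap.getWidth() * bitmap.getHeight()];
    bitmap.getPixels(pixels, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());

    int argb = pixels[0];
    int a = (argb >>> 24) & 0xFF; // alpha
    int r = (argb >> 16) & 0xFF;  // red
    int g = (argb >> 8) & 0xFF;   // green
    int b = argb & 0xFF;          // blue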

Extract image into a file from PDImageXObject without loading it into memory

This is related to How to extract image bytes out of PDF efficiently, but I'll try to restate the problem differently so it's less about PDF parsing and more about image processing.
I'm using PDFBox to extract images out of PDF files. There's a class, PDImageXObject, that represents an image inside the PDF; it contains image metadata (height, width, etc.) and exposes two APIs to pull out the image: BufferedImage getImage() and BufferedImage getImage(Rectangle rect, int subsampling).
The current code is straightforward:
BufferedImage image = pdImage.getImage();
ImageIO.write(image, "jpg", baos);
However, for a large image, I'm having an issue with memory usage, as BufferedImage is storing uncompressed image data in memory, which is a lot bigger than the compressed result.
Is there a way to avoid loading the whole image into memory by breaking it up into tiles (e.g. 1024x1024) and iterating over them using the getImage overload that takes a Rectangle? I'm seeing some promising information about JAI being able to use tiles to output a compressed image without holding the uncompressed content in memory at once, but I don't understand how to tie it together with what I have from PDImageXObject. Or is there another way to do it? Is JAI still an active project?
By the way, the purpose of extracting the image is to feed it into the next component in the pipeline that can handle multiple image formats. So, if some format other than jpg, is more suited for tiled processing, that should be ok.
I'm aware of one possibility using something like BigBufferedImage, but I was thinking processing a tile at a time looked promising.
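For illustration, here is a minimal sketch of the tile iteration described above, using the getImage(Rectangle, int) overload with no subsampling. Note that handleTile is a hypothetical callback standing in for whatever writer or downstream component consumes each tile, and whether this actually avoids decoding the whole image depends on the filter implementation in your PDFBox version:

    import java.awt.Rectangle;
    import java.awt.image.BufferedImage;

    int tileSize = 1024;
    int width = pdImage.getWidth();
    int height = pdImage.getHeight();
    for (int y = 0; y < height; y += tileSize) {
        for (int x = 0; x < width; x += tileSize) {
            int w = Math.min(tileSize, width - x);
            int h = Math.min(tileSize, height - y);
            // Only this tile's uncompressed pixels are held in memory at a time
            BufferedImage tile = pdImage.getImage(new Rectangle(x, y, w, h), 1);
            handleTile(x, y, tile); // hypothetical consumer, e.g. a tile-aware image writer
        }
    }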
OK, I found a library that may help: Commons Imaging. Its Imaging class might be useful for you.
You could also try the createInputStream() method to find out the size of the real data (byte length).

Lossless image extraction from PDF

I'm using PDFBox to extract images out of a PDF file and feed it to another image processing library (that can handle different image formats). My current code is like this:
PDImageXObject pdImage;
ByteArrayOutputStream baos = new ByteArrayOutputStream();
BufferedImage image = pdImage.getImage();
ImageIO.write(image, "png", baos);
byte[] imageBytes = baos.toByteArray();
This takes whatever is stored in the PDF file and uses Java graphics to convert it to PNG. Is there a better way to avoid conversion and extract the image in whatever format it is embedded in? I don't want to degrade image quality (I suppose that's mitigated by using a lossless format like PNG?) or incur conversion overhead.
The DEFLATE algorithm is used by the FlateDecode filter and by the PNG file format. However, a stream of FlateDecode-compressed data isn't itself a PNG file.
Also, you need to consider the colorspace representation of the Image XObject (e.g. DeviceCMYK) versus what PNG actually supports.
By targeting lossless compression for your output image file, you won't lose any information. (Be sure you actually need a lossless extracted image; people often assume lossy compression means their image will have so many changes that it's no longer recognizable, but in many cases, depending on the parameters, the loss is hardly noticeable to the naked eye, and you can benefit substantially from the size savings of lossy compression.)
If performance is slow it could simply be the quality of your PDF software responsible for extracting the image and saving it.

How can I convert a big BMP to a PNG?

I'm trying to convert a very big BMP file to a PNG.
I'm writing an app to make fractal images, and I want to produce a very high resolution image (like Ultra HD).
I save the bitmap pixel data directly into a file with RandomAccessFile, so I don't allocate any Bitmap object in memory. The problem is converting the temporary bitmap to a PNG.
I found BitmapRegionDecoder, but it is not useful for my problem.
It is not easy to convert an image without loading the full data. :(
I think a good solution could be a method that looks like: convertToPng(InputStream bitmapData, OutputStream pngStream).
My question is, how can I convert a very big bitmap to a PNG without getting an OutOfMemoryError?
You can try the PNGJ library (disclaimer: I'm the author), which allows you to read/write PNG images line by line.
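A minimal sketch of writing the PNG row by row with PNGJ's 2.x-style API, assuming width and height are known and the temporary file holds raw 8-bit RGB rows top-to-bottom (a real BMP additionally has a header, bottom-up row order and row padding that you would need to account for):

    import java.io.File;
    import java.io.RandomAccessFile;
    import ar.com.hjg.pngj.ImageInfo;
    import ar.com.hjg.pngj.ImageLineInt;
    import ar.com.hjg.pngj.PngWriter;

    // 8 bits per channel, RGB, no alpha
    ImageInfo imgInfo = new ImageInfo(width, height, 8, false);
    PngWriter pngWriter = new PngWriter(new File("fractal.png"), imgInfo);

    try (RandomAccessFile raf = new RandomAccessFile("fractal.raw", "r")) {
        byte[] rowBytes = new byte[width * 3];     // one raw RGB row
        ImageLineInt line = new ImageLineInt(imgInfo);
        int[] scanline = line.getScanline();       // R,G,B,R,G,B,...
        for (int row = 0; row < height; row++) {
            raf.readFully(rowBytes);
            for (int i = 0; i < rowBytes.length; i++) {
                scanline[i] = rowBytes[i] & 0xFF;
            }
            pngWriter.writeRow(line);              // only one row in memory at a time
        }
    }
    pngWriter.end();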

Read/write image files by lines or by pixels using Java

I have a specific object holding image data, and I want to read/write images (in BMP, JPEG and PNG formats) into/from this object.
Of course, I can read/write the image into/from a BufferedImage and then copy the pixels into/from my object. But I want to do it faster, without intermediate objects!
How can I do this with the standard Java library or with another Java library?
P.S.:
I know the PNGJ library, which allows reading PNG images line by line. Maybe you know similar libraries for BMP and JPEG (or for all of them: BMP, JPEG, PNG)?
Why don't you just have the BufferedImage in your Object? BufferedImage isn't all that bad. Otherwise, you will need to use something like BufferedImage to convert the image file into pixels, and then read the BufferedImage into something like an int[] array of pixels, but it still requires BufferedImage in the middle, and this would slow things down by adding an extra loop to read over and store the pixels.
Is there a reason why BufferedImage isn't suitable for your purpose?
If you want to do all the reading yourself, you can just open the file with a stream and read it into an array, but then you would need to work your way through the whole structure of the file (headers etc.), and if you are working with compressed formats you'd have to handle that too.
Best would be to use ImageIO to load/write your files; I don't believe there is a lot of overhead coming from reading/writing your images.
Also, if you expect to do heavy work on the images, know that BufferedImages can be hardware accelerated, which will improve performance a lot.
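For example, a minimal sketch of the ImageIO/BufferedImage route described above, copying the pixels into an int[] of packed ARGB values (the file names are placeholders):

    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    // Works for PNG, JPEG and BMP; ImageIO picks the reader from the file content
    BufferedImage img = ImageIO.read(new File("input.bmp"));

    // Packed ARGB pixels, row by row (0xAARRGGBB per int)
    int width = img.getWidth();
    int height = img.getHeight();
    int[] pixels = img.getRGB(0, 0, width, height, null, 0, width);

    // ... copy `pixels` into your own object here ...

    // Writing back out: wrap your pixel data in a BufferedImage and hand it to ImageIO
    BufferedImage out = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    out.setRGB(0, 0, width, height, pixels, 0, width);
    ImageIO.write(out, "png", new File("output.png"));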
