I'm converting a PNG image to JPEG with the following snippet of code:
ByteArrayOutputStream image1baos = new ByteArrayOutputStream();
image1 = resizeImage(cropImage(image1, rect1), 150);
ImageWriter writer = null;
Iterator<ImageWriter> iter = ImageIO.getImageWritersByFormatName("jpg");
if (iter.hasNext()) {
    writer = (ImageWriter) iter.next();
}
ImageOutputStream ios = ImageIO.createImageOutputStream(image1baos);
writer.setOutput(ios);
// set the compression quality
ImageWriteParam iwparam = new MyImageWriteParam();
iwparam.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
iwparam.setCompressionQuality(0.2f);
// write image 1
writer.write(null, new IIOImage(image1, null, null), iwparam);
ios.flush();
// set image 1
c.getItem1().setImageData(image1baos.toByteArray());
I'd like to convert the alpha channel to white, not black as it does by default, but I couldn't find a way to do that. Will appreciate any help!
My solution is ugly and probably slow, but it's a solution :)
BufferedImage img = <your image>
for (int i = 0; i < img.getWidth(); i++) {
    for (int j = 0; j < img.getHeight(); j++) {
        // get argb from pixel
        int coli = img.getRGB(i, j);
        int a = coli >> 24 & 0xFF;
        int r = coli >> 16 & 0xFF;
        int g = coli >> 8 & 0xFF;
        int b = coli & 0xFF;
        coli = 0; // clear the pixel, it is rebuilt below
        // do what you want with a, r, g and b; in your case:
        a = 0xFF;
        // save argb
        coli |= a << 24;
        coli |= r << 16;
        coli |= g << 8;
        coli |= b;
        img.setRGB(i, j, coli);
    }
}
Of course, you can reduce the code by 60% if you just need to adjust the alpha channel. I kept all the RGB stuff for further reference.
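For reference, a minimal sketch of that reduced form, assuming the same img as above and that you only need to force every pixel fully opaque:
for (int i = 0; i < img.getWidth(); i++) {
    for (int j = 0; j < img.getHeight(); j++) {
        // OR in an opaque alpha byte, leave R, G and B untouched
        img.setRGB(i, j, img.getRGB(i, j) | 0xFF000000);
    }
}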
I would like to use a quantized TensorFlow Lite model, but the ByteBuffer I currently have uses floating point and I would like an integer representation. Right now the model wants 270000 bytes and I am trying to pass it 1080000 bytes. Is it as simple as casting the float to int?
public ByteBuffer convertBitmapToByteBuffer(Bitmap bitmap) {
    // Preallocate memory for the ByteBuffer
    ByteBuffer byteBuffer = ByteBuffer.allocate(inputSize * inputSize * pixelSize);
    byteBuffer.order(ByteOrder.nativeOrder());

    // Initialize pixel data array and populate it from the bitmap
    int[] intArray = new int[inputSize * inputSize];
    bitmap.getPixels(intArray, 0, bitmap.getWidth(), 0, 0,
            bitmap.getWidth(), bitmap.getHeight());

    int pixel = 0; // pixel indexer
    for (int i = 0; i < inputSize; i++) {
        for (int j = 0; j < inputSize; j++) {
            int input = intArray[pixel++];
            byteBuffer.putFloat(((input >> 16 & 0x000000FF) - imageMean) / imageStd);
            byteBuffer.putFloat(((input >> 8 & 0x000000FF) - imageMean) / imageStd);
            byteBuffer.putFloat(((input & 0x000000FF) - imageMean) / imageStd);
        }
    }
    return byteBuffer;
}
Thanks for any tips you can provide.
Casting float to int is not the correct approach. The good news is that the quantized input the model expects (8-bit R, G, B values in sequence) matches the Bitmap pixel layout exactly, except that the model doesn't expect an alpha channel, so the conversion is actually simpler than with float inputs. It also explains the sizes: 270000 bytes is one byte per channel, while a float buffer is four times as large (1080000 bytes).
Here's what you might try instead (I'm assuming pixelSize is 3):
int pixel = 0; // pixel indexer
for (int i=0; i<inputSize; i++) {
for (int j=0; j<inputSize; j++) {
int input = intArray[pixel++]; // pixel containing ARGB.
byteBuffer
.put((byte)((input >> 16) & 0xFF)) // R
.put((byte)((input >> 8) & 0xFF)) // G
.put((byte)((input ) & 0xFF)); // B
}
}
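Putting it together, a rough sketch of the whole method for the quantized case, reusing the question's fields and imports (inputSize, and pixelSize assumed to be 3):
public ByteBuffer convertBitmapToByteBuffer(Bitmap bitmap) {
    // One byte per channel, three channels per pixel.
    ByteBuffer byteBuffer = ByteBuffer.allocate(inputSize * inputSize * pixelSize);
    byteBuffer.order(ByteOrder.nativeOrder());

    int[] intArray = new int[inputSize * inputSize];
    bitmap.getPixels(intArray, 0, bitmap.getWidth(), 0, 0,
            bitmap.getWidth(), bitmap.getHeight());

    int pixel = 0; // pixel indexer
    for (int i = 0; i < inputSize; i++) {
        for (int j = 0; j < inputSize; j++) {
            int input = intArray[pixel++]; // pixel containing ARGB
            byteBuffer
                    .put((byte) ((input >> 16) & 0xFF)) // R
                    .put((byte) ((input >> 8) & 0xFF))  // G
                    .put((byte) (input & 0xFF));        // B
        }
    }
    return byteBuffer;
}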
I have an image with a lot of anti-aliased lines in it, and I'm trying to remove pixels that fall below a certain alpha channel threshold (anything above the threshold gets converted to full 255 alpha). I've got this coded up and working; it's just not as fast as I would like when running it on large images. Does anyone have an alternative method they could suggest?
// This will convert all pixels with alpha > minAlpha to 255
public static void flattenImage(BufferedImage inSrcImg, int minAlpha)
{
    // loop through all the pixels in the image
    for (int y = 0; y < inSrcImg.getHeight(); y++)
    {
        for (int x = 0; x < inSrcImg.getWidth(); x++)
        {
            // get the current pixel (with alpha channel)
            Color c = new Color(inSrcImg.getRGB(x, y), true);
            // if the alpha value is above the threshold, convert it to full 255
            if (c.getAlpha() >= minAlpha)
            {
                inSrcImg.setRGB(x, y, new Color(c.getRed(), c.getGreen(), c.getBlue(), 255).getRGB());
            }
            // otherwise set it to fully transparent
            else
            {
                inSrcImg.setRGB(x, y, new Color(0, 0, 0, 0).getRGB());
            }
        }
    }
}
Per @BenoitCoudour's comments I've modified the code accordingly, but it appears to be affecting the resulting RGB values of the pixels; any idea what I might be doing wrong?
public static void flattenImage(BufferedImage src, int minAlpha)
{
    int w = src.getWidth();
    int h = src.getHeight();
    int[] rgbArray = src.getRGB(0, 0, w, h, null, 0, w);
    for (int i = 0; i < w * h; i++)
    {
        int a = (rgbArray[i] >> 24) & 0xff;
        int r = (rgbArray[i] >> 16) & 0xff;
        int b = (rgbArray[i] >> 8) & 0xff;
        int g = rgbArray[i] & 0xff;
        if (a >= minAlpha) { rgbArray[i] = (255 << 24) | (r << 16) | (g << 8) | b; }
        else { rgbArray[i] = (0 << 24) | (r << 16) | (g << 8) | b; }
    }
    src.setRGB(0, 0, w, h, rgbArray, 0, w);
}
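One thing to check in that edit: in a packed ARGB int, >> 8 gives the green channel and the low byte gives blue, so the reads above swap g and b while the writes recombine them in the usual order, which scrambles those two channels. A corrected extraction would look like this:
int a = (rgbArray[i] >> 24) & 0xff;
int r = (rgbArray[i] >> 16) & 0xff;
int g = (rgbArray[i] >> 8) & 0xff; // green is the second-lowest byte
int b = rgbArray[i] & 0xff;        // blue is the lowest byte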
What may slow you down is the instantiation of a Color object for every pixel.
Please see this answer on how to iterate over the pixels of a BufferedImage and access the alpha channel: https://stackoverflow.com/a/6176783/3721907
I'll just paste the code below
public Image alpha2gray(BufferedImage src) {
    if (src.getType() != BufferedImage.TYPE_INT_ARGB)
        throw new RuntimeException("Wrong image type.");

    int w = src.getWidth();
    int h = src.getHeight();
    int[] srcBuffer = src.getRGB(0, 0, w, h, null, 0, w); // packed ARGB ints
    int[] dstBuffer = new int[w * h];

    for (int i = 0; i < w * h; i++) {
        int a = (srcBuffer[i] >> 24) & 0xff;
        dstBuffer[i] = 0xff000000 | a << 16 | a << 8 | a; // opaque gray built from the alpha value
    }
    return Toolkit.getDefaultToolkit().createImage(
            new MemoryImageSource(w, h, dstBuffer, 0, w));
}
This is very close to what you want to achieve.
You have a theoretical complexity of O(n) in the number of pixels; the bit manipulation just lowers the constant factor.
You can go further and use threads (this is an embarrassingly parallel problem), but since most user machines have at most 8 physical threads it will not get you too far. You could add another level of optimization on top of this by processing parts of the image one at a time, sized to fit the memory buffers and the different cache levels in your system.
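For example, a rough multi-threaded sketch using parallel streams over rows of the packed ARGB array (names follow the question's flattenImage; Java 8+ and the same imports assumed):
import java.awt.image.BufferedImage;
import java.util.stream.IntStream;

public static void flattenImageParallel(BufferedImage src, int minAlpha) {
    int w = src.getWidth();
    int h = src.getHeight();
    int[] rgb = src.getRGB(0, 0, w, h, null, 0, w);
    // Each row is handled by one worker, so no two threads touch the same index.
    IntStream.range(0, h).parallel().forEach(y -> {
        for (int x = 0; x < w; x++) {
            int i = y * w + x;
            int a = (rgb[i] >>> 24) & 0xff;
            // keep R, G, B; force alpha to 255 above the threshold, 0 below it
            rgb[i] = (a >= minAlpha) ? (rgb[i] | 0xFF000000) : (rgb[i] & 0x00FFFFFF);
        }
    });
    src.setRGB(0, 0, w, h, rgb, 0, w);
}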
Since, as already mentioned, this is an embarrassingly parallel problem, the best solution is GPU programming.
You can follow this tutorial on simple image processing with CUDA and change the code of the filter to something like this:
__global__ void alphaThreshold(unsigned char* input_image, unsigned char* output_image,
                               int width, int height, unsigned char threshold) {
    const unsigned int offset = blockIdx.x * blockDim.x + threadIdx.x;
    const int currentoffset = offset * 4; // 4 bytes per pixel: R, G, B, A
    if (offset < width * height) {
        unsigned char output_red, output_green, output_blue, output_alpha;
        if (input_image[currentoffset + 3] >= threshold) {
            output_red   = input_image[currentoffset];
            output_green = input_image[currentoffset + 1];
            output_blue  = input_image[currentoffset + 2];
            output_alpha = 255;
        } else {
            output_red   = 0;
            output_green = 0;
            output_blue  = 0;
            output_alpha = 0;
        }
        output_image[currentoffset]     = output_red;
        output_image[currentoffset + 1] = output_green;
        output_image[currentoffset + 2] = output_blue;
        output_image[currentoffset + 3] = output_alpha;
    }
}
If you are set on using Java, there is a great answer here on how to get started using Java with an NVIDIA GPU.
Hi, how can we identify a blank image (all-white image)?
BufferedImage im = ImageIO.read(new File("samplePath"));
The image I am passing is empty but has some height and width, and I want to identify that.
This link should help you get started:
Get RGB values of a BufferedImage
In particular, this is the relevant part:
BufferedImage image = ImageIO.read(
new URL("http://upload.wikimedia.org/wikipedia/en/2/24/Lenna.png"));
int w = image.getWidth();
int h = image.getHeight();
int[] dataBuffInt = image.getRGB(0, 0, w, h, null, 0, w);
Color c = new Color(dataBuffInt[100]);
System.out.println(c.getRed()); // = (dataBuffInt[100] >> 16) & 0xFF
System.out.println(c.getGreen()); // = (dataBuffInt[100] >> 8) & 0xFF
System.out.println(c.getBlue()); // = (dataBuffInt[100] >> 0) & 0xFF
System.out.println(c.getAlpha()); // = (dataBuffInt[100] >> 24) & 0xFF
Then go through all the entries and make sure there are some different values.
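For instance, a minimal sketch of that check, assuming "blank" means every pixel is pure white:
boolean blank = true;
for (int pixel : dataBuffInt) {
    if ((pixel & 0x00FFFFFF) != 0x00FFFFFF) { // compare only R, G, B; ignore alpha
        blank = false;
        break;
    }
}
System.out.println(blank ? "blank/white image" : "image has content");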
I've written a Java method, but I have to use it in an Android project. Can someone help me convert it to Android, or tell me what I should do?
public Image getImage() {
    ColorModel cm = grayColorModel();
    if (n == 1) { // in case it's an 8 bit/pixel image
        return Toolkit.getDefaultToolkit().createImage(
                new MemoryImageSource(w, h, cm, pixData, 0, w));
    }
    return null; // other bit depths are not handled here
}

protected ColorModel grayColorModel() {
    byte[] r = new byte[256];
    for (int i = 0; i < 256; i++)
        r[i] = (byte) (i & 0xff);
    return new IndexColorModel(8, 256, r, r, r);
}
For instance, to convert a grayscale image (byte array, imageSrc) to a drawable:
byte[] imageSrc = [...];
// That's where the RGBA array goes.
byte[] imageRGBA = new byte[imageSrc.length * 4];
int i;
for (i = 0; i < imageSrc.length; i++) {
    // Invert the source bits
    imageRGBA[i * 4] = imageRGBA[i * 4 + 1] = imageRGBA[i * 4 + 2] = (byte) ~imageSrc[i];
    imageRGBA[i * 4 + 3] = -1; // 0xff, that's the alpha.
}
// Now put these nice RGBA pixels into a Bitmap object
Bitmap bm = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bm.copyPixelsFromBuffer(ByteBuffer.wrap(imageRGBA));
Code may differ depending on the input format.
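The snippet above stops at a Bitmap; if you need an actual Drawable, wrapping it is one more line (a sketch, assuming an android.content.Context named context is in scope):
import android.graphics.drawable.BitmapDrawable;
import android.graphics.drawable.Drawable;

// Wrap the Bitmap so it can be used anywhere a Drawable is expected.
Drawable d = new BitmapDrawable(context.getResources(), bm);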
So far I have this:
BufferedImage image = ImageIO.read(
new URL("http://upload.wikimedia.org/wikipedia/en/2/24/Lenna.png"));
int w = image.getWidth();
int h = image.getHeight();
int[] dataBuffInt = image.getRGB(0, 0, w, h, null, 0, w);
Color c = new Color(dataBuffInt[100]);
System.out.println(c.getRed()); // = (dataBuffInt[100] >> 16) & 0xFF
System.out.println(c.getGreen()); // = (dataBuffInt[100] >> 8) & 0xFF
System.out.println(c.getBlue()); // = (dataBuffInt[100] >> 0) & 0xFF
System.out.println(c.getAlpha()); // = (dataBuffInt[100] >> 24) & 0xFF
Earlier, I tried putting the getRed, getGreen, and getBlue calls in a for loop, but it only shows the same RGB value. How do I get all the RGB values in an image, given that I want to store them in different arrays?
I'm not entirely clear on the question, but assuming you mean unique RGB values, just loop and use, say, a java.util.Set implementation that maintains uniqueness:
Set<Color> colors = new HashSet<Color>();
for (int datum : dataBuffInt) {
    colors.add(new Color(datum));
}
System.out.println(String.format("%d different colors", colors.size()));
Or, if you mean the separate components:
Set<Integer> reds = new HashSet<Integer>();
Set<Integer> greens = new HashSet<Integer>();
Set<Integer> blues = new HashSet<Integer>();
for (int datum : dataBuffInt) {
    Color color = new Color(datum);
    reds.add(color.getRed());
    greens.add(color.getGreen());
    blues.add(color.getBlue());
}
System.out.println(String.format("reds: %d greens: %d blues: %d", reds.size(), greens.size(), blues.size()));
Are you certain that when you had the for loop you were using the index variable into the array, and not a fixed value like 100? When I run your code with a for loop I see different values:
for (int i = 0; i < dataBuffInt.length; i++) {
    Color c = new Color(dataBuffInt[i]);
    System.out.println("COLOR");
    System.out.println(c.getRed());   // = (dataBuffInt[i] >> 16) & 0xFF
    System.out.println(c.getGreen()); // = (dataBuffInt[i] >> 8) & 0xFF
    System.out.println(c.getBlue());  // = (dataBuffInt[i] >> 0) & 0xFF
    System.out.println(c.getAlpha()); // = (dataBuffInt[i] >> 24) & 0xFF
    System.out.println();
}
If you want unique colors you could build a set one pixel at a time:
final BufferedImage image = ImageIO.read(new URL("http://upload.wikimedia.org/wikipedia/en/2/24/Lenna.png"));
final Set<Color> uniqueColors = new HashSet<Color>(image.getWidth() * image.getHeight());
for (int y = 0; y < image.getHeight(); y++) {
    for (int x = 0; x < image.getWidth(); x++) {
        final int rgb = image.getRGB(x, y);
        uniqueColors.add(new Color(rgb));
    }
}
// format here is java.text.MessageFormat.format (static import assumed)
for (final Color color : uniqueColors) {
    System.out.println(format("red: {0}, green: {1}, blue: {2}, alpha: {3}",
            color.getRed(),
            color.getGreen(),
            color.getBlue(),
            color.getAlpha()));
}
Or use your existing code and dump the array into a set.
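A minimal sketch of that last option, reusing the dataBuffInt array from the question:
Set<Color> colors = new HashSet<Color>();
for (int rgb : dataBuffInt) {
    colors.add(new Color(rgb, true)); // true keeps the alpha component distinct
}
System.out.println(colors.size() + " unique colors");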