How can I remove the background color (white, in this case) from an image?
Or, put another way: I want to make the image background transparent.
There are two images.
Image 1: text on a white background.
Image 2: a green (or close to green) image.
I need to remove the white background of image 1 and make it transparent.
Here is an example that makes the BLUE color transparent in an image.
We have an image with a blue background and we want to display it in an applet with a white background. All we have to do is look for the blue pixels (whose alpha bits are set to opaque) and make them transparent.
You can adapt it to make the WHITE color transparent.
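For reference, a minimal sketch of that idea in plain Java (the class and method names here are only illustrative, not from the original example): an RGBImageFilter that clears the alpha bits of every pixel matching a given marker color.

import java.awt.Image;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.awt.image.FilteredImageSource;
import java.awt.image.ImageProducer;
import java.awt.image.RGBImageFilter;

public final class TransparencyUtil {
    // Returns an Image in which every pixel matching markerColor (alpha ignored)
    // is made fully transparent. Pass 0xFFFFFFFF for white, 0xFF0000FF for blue, etc.
    public static Image makeColorTransparent(BufferedImage source, final int markerColor) {
        RGBImageFilter filter = new RGBImageFilter() {
            @Override
            public int filterRGB(int x, int y, int rgb) {
                if ((rgb | 0xFF000000) == (markerColor | 0xFF000000)) {
                    return rgb & 0x00FFFFFF; // clear the alpha bits -> transparent pixel
                }
                return rgb; // leave every other pixel untouched
            }
        };
        ImageProducer producer = new FilteredImageSource(source.getSource(), filter);
        return Toolkit.getDefaultToolkit().createImage(producer);
    }
}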
You can also check this answer.
The question is: do you want to make the background transparent at runtime, or just for this one image? If it is just this one, you can edit the image with a tool such as IrfanView to remove the background color. Open the image in IrfanView, save it as PNG, tick "Save Transparent Color", untick "Use window background color", press Enter, and then select the white color as the transparent color.
You can do it easily with Photoshop.
Using the magic wand or lasso tool, select the area of the image you want to be transparent, and then just hit the DELETE key.
For additional information check this link.
You can also check this tutorial.
Here is a useful video.
When I had to solve that problem, I used a buffer...
private IntBuffer buffer;

public void toBuffer(File imgSource) {
    final Bitmap temp = BitmapFactory.decodeFile(imgSource.getAbsolutePath());
    buffer.rewind(); // set the buffer's index to 0
    temp.copyPixelsToBuffer(buffer);
}
Then you can simply edit all pixels (int values) in your buffer (as mentioned by https://stackoverflow.com/users/3850595/jordi-castilla)...
...changing the ARGB value 0xFFFFFFFF (opaque white) to 0x00?????? (any color will do, since it is fully transparent anyway).
So here is the code to edit transparency in the buffer:
public void convert() {
    for (int dy = 0; dy < imgHeight; dy++) {
        for (int dx = 0; dx < imgWidth; dx++) {
            int px = buffer.get();
            int result = px;
            if (px == 0xFFFFFFFF) { // only adjust the alpha when the pixel is opaque white
                result = px & 0x00FFFFFF; // clear the alpha bits: RGB is kept, the pixel becomes fully transparent
            }
            int pos = buffer.position();
            buffer.put(pos - 1, result);
        }
    }
}
Later you will want to write your image back into a converted file:
public void copyBufferIntoImage(File tempFile) throws IOException {
    buffer.rewind();
    Bitmap temp = Bitmap.createBitmap(imgWidth, imgHeight, Config.ARGB_8888);
    temp.copyPixelsFromBuffer(buffer);
    FileOutputStream out = new FileOutputStream(tempFile);
    temp.compress(Bitmap.CompressFormat.PNG, 90, out);
    out.flush();
    out.close();
}
Maybe you still want to know how to map the buffer:
public void mapBuffer(final File tempFile, long size) throws IOException {
    RandomAccessFile aFile = new RandomAccessFile(tempFile, "rw");
    aFile.setLength(4 * size); // 4 bytes per int
    FileChannel fc = aFile.getChannel();
    buffer = fc.map(FileChannel.MapMode.READ_WRITE, 0, fc.size()).asIntBuffer();
}
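To put the pieces together, the calls might be wired up roughly like this (a sketch: tempFile backs the mapped buffer, while sourceFile and resultFile are placeholder names for the input image and the output PNG; imgWidth/imgHeight are assumed to be fields set from the decoded bitmap):

mapBuffer(tempFile, (long) imgWidth * imgHeight); // map a file-backed buffer, 4 bytes per pixel
toBuffer(sourceFile);                             // decode the bitmap and copy its pixels into the buffer
convert();                                        // turn every opaque white pixel transparent
copyBufferIntoImage(resultFile);                  // write the converted pixels back out as a PNG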
Related
I want to be able to take the data of an image as an array, modify it, and then use that array to create a modified image. Here is what I attempted:
public class Blue {
    public static void main(String[] args) throws AWTException, IOException {
        Robot robot = new Robot();
        Dimension d = Toolkit.getDefaultToolkit().getScreenSize();
        BufferedImage img = robot.createScreenCapture(new Rectangle(0, 0, d.width, d.height));
        int[] pixels = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();
        int[] newPixels = IntStream.range(0, pixels.length).parallel().filter(i -> {
            int p = pixels[i];
            // get red
            int r = (p >> 16) & 0xff;
            // get green
            int g = (p >> 8) & 0xff;
            // get blue
            int b = p & 0xff;
            return b >= 200;
        }).toArray();
        int[] output = new int[pixels.length];
        for (int i = 0; i < newPixels.length; i++) {
            output[newPixels[i]] = 0x0000FF;
        }
        File f = new File("Result.jpg");
        ByteBuffer byteBuffer = ByteBuffer.allocate(output.length * 4);
        for (int i = 0; i < output.length; i++) {
            byteBuffer.putInt(output[i]);
        }
        byte[] array = byteBuffer.array();
        InputStream stream = new ByteArrayInputStream(array);
        BufferedImage image1 = ImageIO.read(stream);
        System.out.println(image1.getWidth());
        ImageIO.write(image1, "png", f);
    }
}
Here is how it works:
1. The robot takes a screen capture of the screen, which is stored in a BufferedImage.
2. The data of the image is stored in an integer array.
3. An IntStream is used to select all pixel locations that correspond to sufficiently blue pixels.
4. These blue pixels are placed in an array called output at the same locations they were taken from; the rest of the array has value 0.
5. A destination file for my modified image is created.
6. I create a byte buffer that is 4 times the length of the output array, and data from the output array is placed in it.
7. I create a byte array from the buffer, then create an input stream from it.
8. Finally, I read the stream to create an Image from it.
9. I use System.out.println() to print some data from the image to see if the image exists.
Step 9 is where the problem shows up.
I keep getting a NullPointerException, meaning that the image doesn't exist; it is null.
I don't understand what I did wrong.
I tried using ByteArrayInputStream instead of InputStream, but that doesn't work either. Then, I thought that maybe the first couple of bytes encode the format information for the image, so I tried copying that over to the output array, but that didn't solve the problem either. I am not sure why my byte array isn't turning into an image.
To summarize the comments in an answer: the problem is that you have an array of "raw" pixels, and you try to pass that to ImageIO.read(). ImageIO.read() reads images stored in a defined file format, like PNG, JPEG or TIFF (while the pixel array is just pixels; it does not contain information about image dimensions, color model, compression, etc.). If no plugin is found for the input, the method returns null (hence the NullPointerException).
To create a BufferedImage from the pixel array, you could create a raster around the array, pick a suitable color model and create a new BufferedImage using the constructor taking a Raster and ColorModel parameter. You can see how to do that in one of my other answers.
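For reference, a sketch of that raster-based construction could look like the following (assuming the array holds packed ARGB int pixels; the class and method names are only illustrative):

import java.awt.image.BufferedImage;
import java.awt.image.ColorModel;
import java.awt.image.DataBufferInt;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

public final class PixelImages {
    // Wraps an array of packed ARGB pixels in a BufferedImage without copying the data.
    public static BufferedImage fromArgbPixels(int[] pixels, int width, int height) {
        DataBufferInt dataBuffer = new DataBufferInt(pixels, pixels.length);
        int[] bandMasks = {0x00FF0000, 0x0000FF00, 0x000000FF, 0xFF000000}; // R, G, B, A
        WritableRaster raster = Raster.createPackedRaster(dataBuffer, width, height, width, bandMasks, null);
        ColorModel colorModel = ColorModel.getRGBdefault(); // standard ARGB DirectColorModel
        return new BufferedImage(colorModel, raster, colorModel.isAlphaPremultiplied(), null);
    }
}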
However, as you already have a BufferedImage and access to its pixels, it's much easier (and cheaper CPU/memory wise) to just reuse that.
You can replace your code with the following (see comments for details and relation to your steps):
public class Blue {
    public static void main(String[] args) throws AWTException, IOException {
        // 1. Create screen capture
        Robot robot = new Robot();
        Dimension d = Toolkit.getDefaultToolkit().getScreenSize();
        BufferedImage img = robot.createScreenCapture(new Rectangle(0, 0, d.width, d.height));
        // 2: Get backing array
        int[] pixels = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();
        // 3: Find all "sufficiently blue" pixels
        int[] bluePixels = IntStream.range(0, pixels.length).parallel()
                .filter(i -> (pixels[i] & 0xff) >= 200).toArray();
        // 4a: Clear all pixels to opaque black
        for (int i = 0; i < pixels.length; i++) {
            pixels[i] = 0xFF000000;
        }
        // 4b: Set all blue pixels to opaque blue
        for (int i = 0; i < bluePixels.length; i++) {
            pixels[bluePixels[i]] = 0xFF0000FF;
        }
        // 5: Make sure the file extension matches the file format for less confusion... 😀
        File f = new File("result.png");
        // 9: Print & write image (steps 6-8 are not needed)
        System.out.println(img);
        ImageIO.write(img, "png", f);
    }
}
I have one image in the Android resources folder, and a second image that is captured with the camera in my application.
Now I want to take the effect of the first image (from the resources folder) and apply the same effect to the image captured by the camera.
Approach applied:
First I read the RGB value of one pixel from the first image and the RGB value of one pixel from the second image, calculate the difference between the two RGB values, and then apply that difference to the RGB values of all pixels of the second image.
The output image comes out gray.
private void gettingRGBDifference() {
    Bitmap bmp = BitmapFactory.decodeResource(getResources(), R.drawable.lady);
    int width = (bmp.getWidth() * 88 / 100);
    int height = (bmp.getHeight() * 85 / 100);
    Bitmap resizedBitmap = Bitmap.createBitmap(bmp, 1220, 560, width - 1220, height - 560);
    cropCircleImageView.setImageBitmap(resizedBitmap);

    int colour = resizedBitmap.getPixel(160, 160);
    int red = Color.red(colour);
    int blue = Color.blue(colour);
    int green = Color.green(colour);
    int alpha = Color.alpha(colour);
    byte[] rgbArray;

    Bitmap circleImageBitmap = BitmapFactory.decodeResource(getResources(), R.drawable.circle);
    int colour1 = circleImageBitmap.getPixel(150, 150);
    int red1 = Color.red(colour1);
    int blue1 = Color.blue(colour1);
    int green1 = Color.green(colour1);
    int alpha1 = Color.alpha(colour1);

    redDifference = red - red1;
    blueDifference = blue - blue1;
    greenDifference = green - green1;
}
Can anyone correct my approach?
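One part that appears to be missing from the code above is the step that actually applies the computed differences to every pixel of the captured image. A rough sketch of how that could look (capturedBitmap is a placeholder for the camera image, and redDifference/greenDifference/blueDifference are the fields computed above; whether a single sample pixel is representative enough for the whole effect is a separate question):

private Bitmap applyDifference(Bitmap capturedBitmap) {
    Bitmap result = capturedBitmap.copy(Bitmap.Config.ARGB_8888, true); // mutable copy
    int width = result.getWidth();
    int height = result.getHeight();
    int[] pixels = new int[width * height];
    result.getPixels(pixels, 0, width, 0, 0, width, height);
    for (int i = 0; i < pixels.length; i++) {
        int p = pixels[i];
        int a = Color.alpha(p);
        // add the difference to each channel and clamp to the 0..255 range
        int r = Math.max(0, Math.min(255, Color.red(p) + redDifference));
        int g = Math.max(0, Math.min(255, Color.green(p) + greenDifference));
        int b = Math.max(0, Math.min(255, Color.blue(p) + blueDifference));
        pixels[i] = Color.argb(a, r, g, b);
    }
    result.setPixels(pixels, 0, width, 0, 0, width, height);
    return result;
}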
What I want to do is very simple: I have an image which is basically a single-color image with alpha.
EDIT: since my post was immediately tagged as a duplicate, here are the main differences from the other post:
The other topic's image has several colors; mine has only one.
The accepted answer over there is the one I implemented... and I'm saying I have an issue with it because it's too slow.
It means that all pixels are either:
White (255;255;255) and transparent (0 alpha),
or Black (0;0;0) and opaque (alpha between 0 and 255).
The alpha is not always the same; I have different shades of black.
My image is produced with Photoshop with the following settings:
Mode : RGB color, 8 Bits/Channel
Format : PNG
Compression : Smallest / Slow
Interlace : None
Considering it has only one color, I suppose I could use other modes (such as grayscale, maybe?), so tell me if you have suggestions about that.
What I want to do is REPLACE the black color with another color in my Java application.
Reading the other topics, it seems that changing the ColorModel is the right thing to do, but I'm honestly totally lost on how to do it correctly.
Here is what I did :
public Test() {
    BufferedImage image = null, newImage;
    IndexColorModel newModel;
    WritableRaster newRaster;
    try {
        image = ImageIO.read(new File("E:\\MyImage.png"));
    }
    catch (IOException e) {
        e.printStackTrace();
    }
    newModel = createColorModel();
    newRaster = newModel.createCompatibleWritableRaster(image.getWidth(), image.getHeight());
    newImage = new BufferedImage(newModel, newRaster, false, null);
    newImage.getGraphics().drawImage(image, 0, 0, null);
}

private IndexColorModel createColorModel() {
    int size = 256;
    byte[] r = new byte[size];
    byte[] g = new byte[size];
    byte[] b = new byte[size];
    byte[] a = new byte[size];
    for (int i = 0; i < size; i++) {
        r[i] = (byte) 21;
        g[i] = (byte) 0;
        b[i] = (byte) 149;
        a[i] = (byte) i;
    }
    return new IndexColorModel(16, size, r, g, b, a);
}
This produces the expected result.
newImage is the same as image, but with the new color (21;0;149).
But there is a flaw: the following line is too slow:
newImage.getGraphics().drawImage(image, 0, 0, null);
Doing this on a big image can take up to a few seconds, while I need this to be instantaneous.
I'm pretty sure I'm not doing this the right way.
Could you tell me how to achieve this efficiently?
Please consider that my image will always be a single color with alpha, so suggestions about the image format are welcome.
If your input image is always in palette + alpha mode, using an IndexColorModel compatible with the one from your createColorModel(), you should be able to do:
BufferedImage image ...; // As before
IndexColorModel newModel = createColorModel();
BufferedImage newImage = new BufferedImage(newModel, image.getRaster(), newModel.isAlphaPremultiplied(), null);
This will create a new image, with the new color model, but re-using the raster from the original. No large data arrays are created and no drawing is required, so it should be very fast. But note that image and newImage will now share the backing buffer, so any changes drawn in one of them will be reflected in the other (probably not a problem in your case).
PS: You might need to tweak your createColorModel() method to get the exact result you want, but as I don't have your input file, I can't verify whether it works or not.
I wrote Java code to change all the red values of a black-and-white image to 255, so the output should be a red image.
But it's not red; instead it outputs a brighter image.
What did I do wrong?
File bwgFile = new File("X:/Java/Documents/NetBeansProjects/colour/input/bwg.png");
BufferedImage bwgImage = ImageIO.read(bwgFile);
int width = bwgImage.getWidth();
int height = bwgImage.getHeight();
for (int w = 0; w < width; w++) {
    for (int h = 0; h < height; h++) {
        int pixel = bwgImage.getRGB(w, h);
        Color bwg = new Color(pixel);
        int c = bwg.getRed();
        Color red = new Color(255, c, c);
        int cpixel = red.getRGB();
        bwgImage.setRGB(w, h, cpixel);
    }
}
ImageIO.write(bwgImage, "png", new File("X:/Java/Documents/NetBeansProjects/colour/output/c.png"));
input
output
EDIT:
I have found out what the problem was: apparently, when the input is a greyscale image, the output is made a greyscale image as well, which makes it darker when the blue and green components are removed and brighter when red is added. Not using a greyscale image as input fixed it.
If I understand what you're trying to do, you're trying to create a greyscale image, except that it is "redscale", using only shades of red. Therefore, you need to compute the greyscale value of each pixel.
From Wikipedia (Greyscale), the luminance of a pixel is Y = 0.2126 R + 0.7152 G + 0.0722 B. So, try this:
int pixel = bwgImage.getRGB(w,h);
Color bwg = new Color(pixel);
float c = (0.2126f * bwg.getRed() + 0.7152f * bwg.getGreen() + 0.0722f * bwg.getBlue());
int cc = (int)Math.round(c);
Color red = new Color(cc, 0, 0);
int cpixel = red.getRGB();
bwgImage.setRGB(w,h,cpixel);
Alternatively, you can simply retain the red component and set green and blue to 0. This will leave you with just the "redness" of each pixel.
int pixel = bwgImage.getRGB(w,h);
Color bwg = new Color(pixel);
int c=bwg.getRed();
Color red = new Color(c,0,0);
int cpixel = red.getRGB();
bwgImage.setRGB(w,h,cpixel);
NOTE: This solution above only works on images that are not using IndexColorModel. You can check the color model using BufferedImage's getColorModel(). For IndexColorModel, setRGB() does not work directly and instead picks a color in the index closest to the set color, as per HaraldK's comment. To achieve the desired result for images using IndexColorModel, you can create a new BufferedImage with TYPE_INT_ARGB:
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
Then, write the calculated pixel colors to this new image and save the new image instead.
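Put together with the loops from the question, that could look roughly like this (a sketch reusing the same variable names as above):

BufferedImage output = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
for (int w = 0; w < width; w++) {
    for (int h = 0; h < height; h++) {
        Color bwg = new Color(bwgImage.getRGB(w, h));
        Color red = new Color(bwg.getRed(), 0, 0); // keep only the red component
        output.setRGB(w, h, red.getRGB()); // write into the ARGB image, not the indexed input
    }
}
ImageIO.write(output, "png", new File("X:/Java/Documents/NetBeansProjects/colour/output/c.png"));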
While working on a Java application which requires rendering sprites, I thought that, instead of loading a .png or .jpg file as an Image or BufferedImage, I could load a byte[] array containing indices into a color palette (16 colors per palette, so two pixels per byte), and then render that.
The method I currently have generates a BufferedImage from the byte[] array and color palette during initialization; it takes extra time to initialize but runs smoothly after that, which works fine, but there are only 4 sprites in the program so far. I'm worried that when there are 100+ sprites, storing all of them as BufferedImages will be too taxing on memory. And not only would that mean 1 BufferedImage per sprite, but actually 1 image for each sprite/palette combination I'd want to use.
This function creates the BufferedImage:
protected BufferedImage genImage(ColorPalette cp, int width, int height) { // Generate a BufferedImage to render from the byte[]
    BufferedImage ret = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB); // Create the image to return
    for (int j = 0; j < height; j++) { // Run a for loop for each pixel
        for (int i = 0; i < width; i++) {
            int index = (j * width + i) / 2; // Get the index of the needed byte
            int value = image[index] & 0x00ff; // Convert to "unsigned byte", i.e. int
            byte thing; // Declare the actual color index as a byte
            if (i % 2 == 0) thing = (byte) ((value & 0b11110000) >>> 4); // Even index (first of the pair): take the high 4 bits
            else thing = (byte) (value & 0b00001111); // Odd index: take the low 4 bits
            ret.setRGB(i, j, cp.getColor(thing & 0x00ff).getRGB()); // Set the pixel to the value from the color palette
        }
    }
    return ret;
}
And this one actually renders it to the screen:
public void render(Graphics g, int x, int y){ //Graphics to render to and x/y coords
g.drawImage(texture, x, y, TILE_WIDTH, TILE_HEIGHT, null); //Render it
}
I've experimented with another method that renders from the byte[] directly w/o the need for a BufferedImage, which should theoretically succeed in saving memory by avoiding use of a BufferedImage for each sprite, but it ended up being very, very slow. It took several seconds to render each frame w/ at most 25 sprites to render on the screen! Note that g is a Graphics object.
private void drawSquare(int x, int y, int scale, Color c) { // Draw each "pixel" to scale
    if (g == null) { // If null, quit
        return;
    }
    g.setColor(c); // Set the color
    for (int i = x; i < x + scale; i++) { // Loop through each pixel
        if (i < 0) continue;
        for (int j = y; j < y + scale; j++) {
            if (j < 0) continue;
            g.fillRect(x, y, scale, scale); // Fill the rect to make the "pixel"
        }
    }
}

public void drawBytes(byte[] image, int x, int y, int width, int height, int scale, ColorPalette palette) { // Draw a byte[] image with given byte[], x/y coords, width/height, scale, and color palette
    if (image.length < width * height / 2) { // If the image is too small, exit
        return;
    }
    for (int j = 0; j < height; j++) { // Loop through each pixel
        for (int i = 0; i < width; i++) {
            int index = (j * width + i) / 2; // Get the index
            int value = image[index]; // Get the byte
            byte thing; // Get the high or low nibble depending on even/odd
            if (i % 2 == 0) thing = (byte) ((value & 0b11110000) >>> 4);
            else thing = (byte) (value & 0b00001111);
            drawSquare((int) (x + scale * i), (int) (y + scale * j), scale, palette.getColor(thing)); // Draw the pixel
        }
    }
}
So is there a more efficient way to render these byte[] arrays without the need for BufferedImages? Or will it really not be problematic to have several hundred BufferedImages loaded into memory?
EDIT: I've also tried the no-BufferedImage methods with g being one large BufferedImage to which everything is rendered and which is then rendered to the Canvas. The primary difference is that g.fillRect(... is changed to g.setRGB(... in that method, but it was similarly slow.
EDIT: The images I'm dealing with are 16x16 and 32x32 pixels.
If memory usage is your main concern, I'd use BufferedImages with IndexColorModel (TYPE_BYTE_BINARY). This would perfectly reflect your byte[] image and ColorPalette, and waste very little memory. They will also be reasonably fast to draw.
This approach will use about 1/8th of the memory used by the initial use of TYPE_INT_RGB BufferedImages, because we retain the 4 bits per pixel, instead of 32 bits (an int is 32 bits) per pixel (plus some overhead for the palette, of course).
public static void main(String[] args) {
    byte[] palette = new byte[16 * 3]; // 16 color palette without alpha
    byte[] pixels = new byte[(16 * 16 * 4) / 8]; // 16 * 16 * 4 bit

    Random random = new Random(); // For test purposes, just fill arrays with random data
    random.nextBytes(palette);
    random.nextBytes(pixels);

    // Create ColorModel & Raster from palette and pixels
    IndexColorModel cm = new IndexColorModel(4, 16, palette, 0, false, -1); // -1 for no transparency
    DataBufferByte buffer = new DataBufferByte(pixels, pixels.length);
    WritableRaster raster = Raster.createPackedRaster(buffer, 16, 16, 4, null);

    // Create BufferedImage from CM and Raster
    final BufferedImage image = new BufferedImage(cm, raster, cm.isAlphaPremultiplied(), null);
    System.out.println("image: " + image); // "image: BufferedImage@...: type = 12 ..."

    SwingUtilities.invokeLater(new Runnable() {
        @Override
        public void run() {
            JFrame frame = new JFrame("Foo");
            frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
            frame.add(new JLabel(new ImageIcon(image)));
            frame.pack();
            frame.setVisible(true);
        }
    });
}
The above code will create fully opaque (Transparency.OPAQUE) images, that will occupy the entire 16 x 16 pixel block.
If you want bitmask (Transparency.BITMASK) transparency, where all pixels are either fully opaque or fully transparent, just change the last parameter of the IndexColorModel constructor to the palette index you want to be fully transparent.
int transparentIndex = ...;
IndexColorModel cm = new IndexColorModel(4, 16, palette, 0, false, transparentIndex);
// ...everything else as above
This will allow your sprites to have any shape you want.
If you want translucent pixels (Transparency.TRANSLUCENT), where pixels can be semi-transparent, you can also have that. You will then have to change the palette array to 16 * 4 entries, and include a sample for the alpha value as the 4th sample for each entry (quadruple). Then invoke the IndexColorModel constructor with the last parameter set to true (hasAlpha):
byte[] palette = new byte[16 * 4]; // 16 color palette with alpha (translucency)
// ...
IndexColorModel cm = new IndexColorModel(4, 16, palette, 0, true); // true for palette with alpha samples
// ...everything else as above
This will allow smoother gradients between the transparent and non-transparent parts of the sprites. But with only 16 colors in the palette, you won't have many entries available for transparency.
Note that it is possible to re-use the Rasters and IndexColorModels here, in all of the above examples, to save further memory for images using the same palette, or even images using the same image data with different palettes. There's one caveat though, that is the images sharing rasters will be "live views" of each other, so if you make any changes to one, you will change them all. But if your images are never changed, you could exploit this fact.
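A sketch of that sharing idea, building on the example above (palette1 and palette2 are placeholders for two different palette arrays):

// Two color models built from different palettes
IndexColorModel cm1 = new IndexColorModel(4, 16, palette1, 0, false, -1);
IndexColorModel cm2 = new IndexColorModel(4, 16, palette2, 0, false, -1);

// One shared raster wrapping the same pixel data
DataBufferByte buffer = new DataBufferByte(pixels, pixels.length);
WritableRaster sharedRaster = Raster.createPackedRaster(buffer, 16, 16, 4, null);

// Two images showing the same pixels through different palettes.
// Because the raster is shared, drawing into one image also changes the other.
BufferedImage spriteA = new BufferedImage(cm1, sharedRaster, cm1.isAlphaPremultiplied(), null);
BufferedImage spriteB = new BufferedImage(cm2, sharedRaster, cm2.isAlphaPremultiplied(), null);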
That said, the above really is a compromise between saving memory and having "reasonable" performance. If performance (i.e. frames per second) is more important, just ignore the memory usage and create BufferedImages that are compatible with your graphics card's/the OS's native pixel layout. You can do this by using component.getGraphicsConfiguration().createCompatibleImage(...) (where component is a displayed JComponent subclass) or gfxConfig.createCompatibleImage(...) (where gfxConfig is a GraphicsConfiguration obtained from the local GraphicsEnvironment).
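For example, a compatible, bitmask-transparent sprite image could be created like this (a sketch using the default screen device):

GraphicsConfiguration gfxConfig = GraphicsEnvironment.getLocalGraphicsEnvironment()
        .getDefaultScreenDevice()
        .getDefaultConfiguration();
// Image whose pixel layout matches the screen, so drawImage() needs no per-frame conversion
BufferedImage compatibleSprite = gfxConfig.createCompatibleImage(16, 16, Transparency.BITMASK);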