Java AlphaComposite blending massive quality loss

I'm working with timelapse photography and am trying to implement a script that smooths out the transitions by blending multiple photos into one frame: photos 1 through 30 yield frame 1, photos 2 through 31 yield frame 2, and so on.
When I merged 30 consecutive photographs in GIMP the result was super smooth; when I implement the same logic with Java BufferedImages and AlphaComposite I get what seem to be unacceptable compression artifacts.
I have already tried different things, like replacing ImageIO.write() with a compression-free ImageWriter, but the result is almost identical, and now I'm out of ideas. Could it be that the AlphaComposite blending causes the quality loss?
GIMP generated image
Java generated image
import java.awt.AlphaComposite;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import javax.imageio.ImageIO;
public class SimplePhotoBlender {
public static void main(String[] args) throws IOException {
if (args.length < 1)
System.out.println("Usage: java SimplePhotoBlender <PATH_TO_IMAGE_FOLDER>");
else
new SimplePhotoBlender(args[0]);
}
public SimplePhotoBlender(String path) throws IOException {
// Get all images in folder
List<File> fileList = getFilesInFolder(new File(path));
Collections.sort(fileList);
// Iterate through the list ...
for (int i = 0; i < fileList.size(); i++) {
File originalImage = fileList.get(i);
BufferedImage blended = ImageIO.read(originalImage);
// ... and blend it with the following 30 images.
for (int j = 1; j <= 30 && i + j < fileList.size() - 1; j++)
blended = blend(blended, ImageIO.read(fileList.get(i + j)), 0.9);
// Finally, save it in a new file.
File blendedFile = new File(originalImage.getParent() + "/blended/" + originalImage.getName());
blendedFile.getParentFile().mkdirs();
blendedFile.createNewFile();
// I double-checked but the JPG compression in ImageIO.write()
// doesn't account for the tremendous amount of quality loss.
ImageIO.write(blended, "jpg", blendedFile);
}
}
// Needless to say, I copied this from sources like
// http://www.informit.com/articles/article.aspx?p=1245201&seqNum=2
private BufferedImage blend(BufferedImage bi1, BufferedImage bi2, double weight) {
BufferedImage bi3 = new BufferedImage(bi1.getWidth(), bi1.getHeight(), BufferedImage.TYPE_INT_RGB);
Graphics2D g2d = bi3.createGraphics();
g2d.drawImage(bi1, null, 0, 0);
g2d.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, (float) (1.0 - weight)));
g2d.drawImage(bi2, null, 0, 0);
g2d.dispose();
return bi3;
}
public List<File> getFilesInFolder(final File folder) {
List<File> fileNames = new ArrayList<>();
for (final File fileEntry : folder.listFiles())
if (!fileEntry.isDirectory())
fileNames.add(fileEntry);
return fileNames;
}
}
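Whether or not it explains the artifacts, here is a minimal sketch (my own, not taken from the question) of a different blending strategy: accumulate per-channel sums in integer arrays and divide once at the end, so every output pixel is quantized to 8 bits a single time instead of once per blend step, and every frame gets the same weight. It assumes all frames share the same dimensions; writing the result as PNG additionally rules out any encoder-side loss when comparing against the GIMP output.

// Hypothetical helper: average `count` consecutive frames starting at `from`,
// accumulating channel sums in long[] so rounding happens only once per pixel.
private BufferedImage average(List<File> files, int from, int count) throws IOException {
    BufferedImage first = ImageIO.read(files.get(from));
    int w = first.getWidth(), h = first.getHeight();
    long[] r = new long[w * h], g = new long[w * h], b = new long[w * h];
    int n = 0;
    for (int i = from; i < from + count && i < files.size(); i++, n++) {
        BufferedImage img = (i == from) ? first : ImageIO.read(files.get(i));
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int rgb = img.getRGB(x, y);          // packed 0xAARRGGBB
                int idx = y * w + x;
                r[idx] += (rgb >> 16) & 0xFF;
                g[idx] += (rgb >> 8) & 0xFF;
                b[idx] += rgb & 0xFF;
            }
        }
    }
    BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int idx = y * w + x;
            out.setRGB(x, y, ((int) (r[idx] / n) << 16) | ((int) (g[idx] / n) << 8) | (int) (b[idx] / n));
        }
    }
    return out;
}
// Usage sketch: ImageIO.write(average(fileList, i, 30), "png", blendedFile);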

Related

Why is the color of my image changed after writing it as a jpg file?

I'm currently writing a method that converts a PPM file to JPG, PNG, and BMP files. The way I did it is by reading the content of the PPM file, creating a BufferedImage, and assigning each pixel from the PPM file to the corresponding pixel in the BufferedImage. My BMP and PNG files look correct; however, the JPG file looks completely different.
Below is my code:
import java.awt.*;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;
import javax.imageio.ImageIO;
public class readPPMOutputOthers {
public static void main(String[] args) throws InterruptedException {
// read a ppm file
Scanner sc;
// if the file is not found, it will throw an exception
try {
sc = new Scanner(new FileInputStream("res/test2.ppm"));
} catch (FileNotFoundException e) {
throw new IllegalArgumentException("File not found!");
}
// the file now is a StringBuilder
// read line by line to get information
StringBuilder builder = new StringBuilder();
while (sc.hasNextLine()) {
String s = sc.nextLine();
// ignore comment #
if (s.charAt(0) != '#') {
builder.append(s).append(System.lineSeparator());
}
}
sc = new Scanner(builder.toString());
String token;
token = sc.next();
// set the fields
// initial load image
int width = sc.nextInt();
int height = sc.nextInt();
int maxValue = sc.nextInt();
List<Integer> pixels = new ArrayList<>();
for (int i = 0; i < height; i++) {
for (int j = 0; j < width; j++) {
int r = sc.nextInt();
int g = sc.nextInt();
int b = sc.nextInt();
int rgb = r;
rgb = (rgb << 8) + g;
rgb = (rgb << 8) + b;
pixels.add(rgb);
}
}
// make a BufferedImage from pixels
BufferedImage outputImg = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
int[] outputImagePixelData = ((DataBufferInt) outputImg.getRaster().getDataBuffer()).getData();
for (int i = 0; i < pixels.size(); i++) {
outputImagePixelData[i] = pixels.get(i);
}
try {
ImageIO.write(outputImg, "png",
new File("res/test.png"));
ImageIO.write(outputImg, "jpg",
new File("res/test2.jpg"));
ImageIO.write(outputImg, "bmp",
new File("res/test.bmp"));
} catch (IOException e) {
System.out.println("Exception occurred :" + e.getMessage());
}
System.out.println("Images were written successfully.");
}
}
images comparison
The weird thing is that it works for a very large image but not for this small one. I need it to work for such small images for testing. I've been digging through posts about this on Google and still haven't found a way to solve it. Any help would be appreciated!
The reason for the strange colors is the YUV420 chroma subsampling used by the JPEG encoding.
In YUV420, every 2x2 block of pixels shares the same chroma information (the 2x2 pixels have the same color).
The 2x2 pixels have the same color, but each pixel keeps its own luminance (brightness).
YUV420 chroma subsampling is demonstrated on Wikipedia:
And in our case, the original 2x2 block becomes:
The brown color is a mixture of the original red, cyan, magenta and yellow colors (the brown color is "shared" by the 4 pixels).
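To make the mixing concrete, here is a small standalone sketch (my own illustration; the constants are the standard JFIF/BT.601 conversion formulas) that keeps each pixel's own luma but shares one averaged chroma value across the 2x2 block, the way a 4:2:0 encoder does:

// Demonstrates 4:2:0 chroma averaging on a 2x2 block of red, cyan, magenta and yellow.
public class ChromaSubsamplingDemo {
    public static void main(String[] args) {
        int[][] rgb = { {255, 0, 0}, {0, 255, 255}, {255, 0, 255}, {255, 255, 0} };
        double[] luma = new double[4];
        double cbSum = 0, crSum = 0;
        for (int i = 0; i < 4; i++) {
            double r = rgb[i][0], g = rgb[i][1], b = rgb[i][2];
            luma[i] = 0.299 * r + 0.587 * g + 0.114 * b;
            cbSum += 128 - 0.168736 * r - 0.331264 * g + 0.5 * b;
            crSum += 128 + 0.5 * r - 0.418688 * g - 0.081312 * b;
        }
        double cb = cbSum / 4, cr = crSum / 4;   // one shared chroma value for the whole block
        for (int i = 0; i < 4; i++) {
            int r = clamp(luma[i] + 1.402 * (cr - 128));
            int g = clamp(luma[i] - 0.344136 * (cb - 128) - 0.714136 * (cr - 128));
            int b = clamp(luma[i] + 1.772 * (cb - 128));
            System.out.printf("pixel %d -> (%d, %d, %d)%n", i, r, g, b);
        }
    }
    private static int clamp(double v) {
        return (int) Math.max(0, Math.min(255, Math.round(v)));
    }
}

The four output pixels share the same chroma and differ only in brightness, which matches the brownish mixture visible in the JPEG.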
Note:
Chroma subsampling is not considered "compression", in the sense that it is not performed as part of the JPEG compression stage.
We can't control the chroma subsampling by setting the compression quality parameter.
Chroma subsampling is part of the "color format conversion" pre-processing stage - converting from RGB to the YUV420 color format.
The commonly used JPEG color format is YUV420, but the JPEG standard also supports YUV444 (i.e. no chroma subsampling).
GIMP manages to save JPEG images with YUV444, i.e. without chroma subsampling.
Example (2x2 image, shown at original size and enlarged):
I couldn't find an example of saving a YUV444 JPEG in Java...
To some degree the effect you describe is to be expected.
From https://en.wikipedia.org/wiki/JPEG:
JPEG is a commonly used method of lossy compression for digital images, particularly for those images produced by digital photography. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality. JPEG typically achieves 10:1 compression with little perceptible loss in image quality.
Maybe when storing small files you can set the compression to be low and thus increase the quality. See Setting jpg compression level with ImageIO in Java.
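For reference, a minimal sketch of what that linked question boils down to, applied to the outputImg from the code above (in addition to the imports already present, this needs javax.imageio.ImageWriter, javax.imageio.ImageWriteParam, javax.imageio.IIOImage and javax.imageio.stream.ImageOutputStream):

// Write the JPEG through an explicit ImageWriter so the compression quality can be set.
ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
ImageWriteParam param = writer.getDefaultWriteParam();
param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
param.setCompressionQuality(0.95f);   // 0.0f = smallest file, 1.0f = best quality
try (ImageOutputStream out = ImageIO.createImageOutputStream(new File("res/test2.jpg"))) {
    writer.setOutput(out);
    writer.write(null, new IIOImage(outputImg, null, null), param);
} finally {
    writer.dispose();
}

As the previous answer points out, though, this only controls the quantization step; it does not necessarily change the chroma subsampling.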

Apache PDFBox - vertical match between image and text position

I need help to achieve a mapping between text and image objects in a PDF document.
As the first figure shows, my PDF documents have 3 images arranged randomly in the y-direction. To the left of them is text, which extends along the height of the images.
My goal is to combine the text into "ImObj" objects (see the ImObj class).
The 2nd figure shows that I want to use the height of each image to detect the position of the text (all text outside of the image's height should be ignored). In the example, 3 ImObj objects will be formed from the 3 images.
The link to the pdf file is here (on wetransfer):
[enter link description here][3]
But my mapping does not work, probably because I use the wrong coordinates for the image. I have already looked at some examples, but I still don't really understand how to get the coordinates of the text and the images to work together.
Here is my code:
import java.awt.Image;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.pdfbox.contentstream.operator.Operator;
import org.apache.pdfbox.cos.COSBase;
import org.apache.pdfbox.cos.COSName;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.PDPage;
import org.apache.pdfbox.pdmodel.PDResources;
import org.apache.pdfbox.pdmodel.graphics.PDXObject;
import org.apache.pdfbox.pdmodel.graphics.image.PDImageXObject;
import org.apache.pdfbox.text.PDFTextStripper;
import org.apache.pdfbox.text.TextPosition;
import org.apache.pdfbox.util.Matrix;
public class ImExample extends PDFTextStripper {
public static void main(String[] args) {
File file = new File("C://example document.pdf");
try {
PDDocument document = PDDocument.load(file);
ImExample example = new ImExample();
for (int pnr = 0; pnr < document.getPages().getCount(); pnr++) {
PDPage page = document.getPages().get(pnr);
PDResources res = page.getResources();
example.processPage(page);
int idx = 0;
for (COSName objName : res.getXObjectNames()) {
PDXObject xObj = res.getXObject(objName);
if (xObj instanceof PDImageXObject) {
System.out.println("...add a new image");
PDImageXObject imXObj = (PDImageXObject) xObj;
BufferedImage image = imXObj.getImage();
// Here is my mistake ... but I do not know how to solve it.
ImObj imObj = new ImObj(image, idx++, pnr, image.getMinY(), image.getMinY() + image.getHeight());
example.imObjects.add(imObj);
}
}
}
example.setSortByPosition(true);
example.getText(document);
// Output
for (ImObj iObj : example.imObjects)
System.out.println(iObj.idx + " -> " + iObj.text);
document.close();
} catch (Exception e) {
e.printStackTrace();
}
}
public List<ImObj> imObjects = new ArrayList<ImObj>();
public ImExample() throws IOException {
super();
}
@Override
protected void writeString(String text, List<TextPosition> textPositions) throws IOException {
// match between imagesize and textposition
TextPosition txtPos = textPositions.get(0);
for (ImObj im : imObjects) {
if(im.page == (this.getCurrentPageNo()-1))
if (im.minY < txtPos.getY() && (txtPos.getY() + txtPos.getHeight()) < im.maxY)
im.text.append(text + " ");
}
}
}
class ImObj {
float minY, maxY;
Image image = null;
StringBuilder text = new StringBuilder("");
int idx, page = 0;
public ImObj(Image im, int idx, int pnr, float yMin, float yMax) {
this.idx = idx;
this.image = im;
this.minY = yMin;
this.maxY = yMax;
this.page = pnr;
}
}
Best regards
You're looking for the images in the (somewhat) wrong place!
You iterate over the image XObject resources of the page itself and inspect them. But this is not helpful:
An image XObject resource is merely that, a resource. I.e. it can be used on the page, even more than once, but you cannot determine from the resource alone how it is used (where? at which scale? transformed somehow?).
There are other places an image can be stored and used on a page, e.g. in the resources of some form XObject or pattern used on the page, or inline in the content stream.
What you actually need is to parse the page content stream for uses of images and the current transformation matrix at the time of use. For a basic implementation of this have a look at the PDFBox example PrintImageLocations.
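A condensed sketch along the lines of that example (PDFBox 2.x; the class name and the boxes list are my own, and it needs imports for PDFStreamEngine, DrawObject, the operator.state classes and PDFormXObject in addition to those in the question):

// Records the position and drawn size of every image use by walking the content stream,
// loosely following the PDFBox PrintImageLocations example.
class ImageLocator extends PDFStreamEngine {
    final List<float[]> boxes = new ArrayList<>();   // {x, y, drawnWidth, drawnHeight} in page user space

    ImageLocator() {
        addOperator(new Concatenate());
        addOperator(new DrawObject());
        addOperator(new SetGraphicsStateParameters());
        addOperator(new Save());
        addOperator(new Restore());
        addOperator(new SetMatrix());
    }

    @Override
    protected void processOperator(Operator operator, List<COSBase> operands) throws IOException {
        if ("Do".equals(operator.getName())) {
            COSName objectName = (COSName) operands.get(0);
            PDXObject xObject = getResources().getXObject(objectName);
            if (xObject instanceof PDImageXObject) {
                // For images, the CTM maps the unit square onto the drawn rectangle.
                Matrix ctm = getGraphicsState().getCurrentTransformationMatrix();
                boxes.add(new float[] { ctm.getTranslateX(), ctm.getTranslateY(),
                                        ctm.getScalingFactorX(), ctm.getScalingFactorY() });
            } else if (xObject instanceof PDFormXObject) {
                showForm((PDFormXObject) xObject);   // images may also hide inside form XObjects
            }
        } else {
            super.processOperator(operator, operands);
        }
    }
}
// Usage sketch: new ImageLocator().processPage(page); then build the ImObj list from boxes.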
The next problem you'll run into is that the coordinates PDFBox returns from the TextPosition methods getX and getY are not from the original coordinate system of the PDF page in question but from a coordinate system normalized for easier handling in the text extraction code. Thus, you most likely should use the un-normalized coordinates instead.
You can find information on that in this answer.
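If it helps, a rough sketch of how the writeString override could compare against un-normalized coordinates instead, assuming im.minY and im.maxY were themselves derived from the image CTM (PDF user space, origin in the lower-left corner):

@Override
protected void writeString(String text, List<TextPosition> textPositions) throws IOException {
    TextPosition txtPos = textPositions.get(0);
    // Baseline y in the page's own coordinate system rather than the normalized one.
    float baselineY = txtPos.getTextMatrix().getTranslateY();
    for (ImObj im : imObjects) {
        if (im.page == getCurrentPageNo() - 1 && im.minY < baselineY && baselineY < im.maxY) {
            im.text.append(text).append(' ');
        }
    }
}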

Why does the same image (edit: PNG) produce two slightly different byte arrays in Java?

I am writing a simple application and one of the features is the ability to serialize an object to an image so that the user can share a photo of their project with the build instructions embedded within it. Currently the other user can drag-and-drop the image into a JavaFX ListView and a listener will execute some code to decode and parse the embedded JSON.
This works fine if the image is stored locally, but I also want to allow users to drag and drop images from a browser without having to save them locally first. To test this I linked a working image in a basic HTML file so I could render it in Chrome.
Originally I was using the path to the file taken from the Dragboard, but when the image is coming from a browser I need (I think) to be able to accept the image directly, hence the overloaded decode() method below.
The problem I am having is that I end up with two slightly different byte arrays depending on whether the image comes from a browser or from somewhere in my local storage. I don't know enough about images in Java (or in general) to understand why this is, and I haven't been able to find answers elsewhere. The browser-sourced drag-and-drop produces a byte array different enough that I can't decode the message 100% properly, and therefore cannot deserialize it into an object. However, if I right-click and save the browser-based image, it loads correctly.
I have included a reproducible snippet and a test image below. My JDK is the most recent version of Amazon Corretto 8.
Reproducible snippet:
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.control.ListView;
import javafx.scene.image.Image;
import javafx.scene.image.PixelFormat;
import javafx.scene.image.PixelReader;
import javafx.scene.image.WritablePixelFormat;
import javafx.scene.input.TransferMode;
import javafx.scene.layout.GridPane;
import javafx.stage.Stage;
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.File;
import java.io.IOException;
import java.net.URL;
import java.nio.ByteBuffer;
import java.util.List;
public class MainApp extends Application {
private static final int OFFSET = 40;
private ListView<String> listView;
private GridPane gridPane;
@Override
public void start(Stage primaryStage) throws Exception {
listView = new ListView<>();
gridPane = new GridPane();
gridPane.addRow(0, listView);
setDragDropAction();
Scene scene = new Scene(gridPane, 250, 250);
primaryStage.setScene(scene);
primaryStage.sizeToScene();
primaryStage.show();
}
// the same drag-drop logic I am using in my project
private void setDragDropAction() {
gridPane.setOnDragOver(event -> {
if (event.getDragboard().hasFiles() || event.getDragboard().hasImage()) {
event.acceptTransferModes(TransferMode.ANY);
}
event.consume();
});
gridPane.setOnDragDropped(event -> {
List<File> files = event.getDragboard().getFiles();
try {
if (!files.isEmpty()) {
String decoded = decode(files.get(0));
System.out.println(decoded);
} else if (event.getDragboard().hasImage()) {
Image image = event.getDragboard().getImage();
String decoded = decode(image);
System.out.println(decoded);
}
} catch (IOException e) {
e.printStackTrace();
}
});
}
// decode using the file path
public String decode(File file) throws IOException {
byte[] byteImage = imageToByteArray(file);
return getStringFromBytes(byteImage);
}
// this results in a different byte array and a failed decode
public String decode(Image image) {
int width = (int) image.getWidth();
int height = (int) image.getHeight();
PixelReader reader = image.getPixelReader();
byte[] byteImage = new byte[width * height * 4];
WritablePixelFormat<ByteBuffer> format = PixelFormat.getByteBgraInstance();
reader.getPixels(0, 0, width, height, format, byteImage, 0, width * 4);
return getStringFromBytes(byteImage);
}
private String getStringFromBytes(byte[] byteImage) {
int offset = OFFSET;
byte[] byteLength = new byte[4];
System.arraycopy(byteImage, 1, byteLength, 0, (offset / 8) - 1);
int length = byteArrayToInt(byteLength);
byte[] result = new byte[length];
for (int b = 0; b < length; ++b) {
for (int i = 0; i < 8; ++i, ++offset) {
result[b] = (byte) ((result[b] << 1) | (byteImage[offset] & 1));
}
}
return new String(result);
}
private int byteArrayToInt(byte[] b) {
return b[3] & 0xFF
| (b[2] & 0xFF) << 8
| (b[1] & 0xFF) << 16
| (b[0] & 0xFF) << 24;
}
private byte[] imageToByteArray(File file) throws IOException {
BufferedImage image;
URL path = file.toURI().toURL();
image = ImageIO.read(path);
return ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
}
}
Another attempt at the decode method using SwingFXUtils.fromFXImage() which is also not working:
// Create a buffered image with the same imageType (6) as the type returned from ImageIO.read() with a local drag-drop
public String decode(Image image) throws IOException {
int width = (int) image.getWidth();
int height = (int) image.getHeight();
BufferedImage buffImageFinal = new BufferedImage(width, height, 6); // imageType = 6
BufferedImage buffImageTemp = SwingFXUtils.fromFXImage(image, null);
Graphics2D g = buffImageFinal.createGraphics();
g.drawImage(buffImageTemp, 0, 0, width, height, null);
g.dispose();
return getStringFromBytes(((DataBufferByte) buffImageFinal.getRaster().getDataBuffer()).getData());
}
Images showing the changes in the data:
Corner of image. Dissimilar pixels are highlighted in red.
Example image for testing:
The decoded test message should print out "This is a test message for my minimal reproducible example". Dragging the image from the browser does not work; saving the image locally and drag-dropping it does.
Based on a comment from the OP, event.getDragboard().getImage() loads an image with premultiplied alpha, while ImageIO.read() does not. One simple way to circumvent that is to get the URL from the event (regardless of how the image is dragged and dropped) and use ImageIO.read().
gridPane.setOnDragDropped(event -> {
if (event.getDragboard().hasUrl()) {
try {
URL path = new URL(event.getDragboard().getUrl());
String decoded = decode(path);
System.out.println(decoded);
} catch (IOException e) {
e.printStackTrace();
}
}
});
With the following modifications:
public String decode(URL url) throws IOException {
byte[] byteImage = imageToByteArray(url);
return getStringFromBytes(byteImage);
}
private byte[] imageToByteArray(URL url) throws IOException {
BufferedImage image = ImageIO.read(url);
return ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
}
Note that loading from a URL supports only a limited set of formats, as seen here.

Getting different values from getRGB and setRGB after XORing two images

I want to XOR two images pixel by pixel. I am using the following code.
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.*;
import java.io.IOException;
import javax.imageio.ImageIO;
import java.util.Scanner;
import java.security.*;
import java.awt.AlphaComposite;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
public class sefi
{
public static void main(String[] args) throws Exception
{
encdec ed = new encdec();
String plainimagename = args[0];
String keyfilename = args[1];
String choice = args[2];
BufferedImage bikey = ImageIO.read(new File(keyfilename));
BufferedImage biplain = ImageIO.read(new File(plainimagename));
BufferedImage resizedImage = new BufferedImage(biplain.getWidth(), biplain.getHeight(), BufferedImage.TYPE_INT_RGB);
Graphics2D g = resizedImage.createGraphics();
g.drawImage(bikey, 0, 0, biplain.getWidth(), biplain.getHeight(), null);
g.dispose();
ImageIO.write(resizedImage, "jpg", new File("resizeimage.jpg"));
if(choice.equals("enc"))
{
ed.encrypt(resizedImage,biplain);
}
else if(choice.equals("dec"))
{
ed.decrypt(resizedImage,biplain);
}
}
}
class encdec
{
public void encrypt(BufferedImage bikey, BufferedImage biplain) throws Exception
{
BufferedImage xoredimage = xor(bikey, biplain);
File xored = new File("xored.jpg");
ImageIO.write(xoredimage, "JPEG", xored);
}
public void decrypt(BufferedImage bikey, BufferedImage biplain) throws Exception
{
BufferedImage xoredimage = xor(bikey, biplain);
File xored = new File("newplain.jpg");
ImageIO.write(xoredimage, "JPEG", xored);
}
private BufferedImage xor(BufferedImage image1, BufferedImage image2) throws Exception
{
BufferedImage outputimage = new BufferedImage(image1.getWidth(), image1.getHeight(), BufferedImage.TYPE_INT_RGB);
for (int y = 0; y < image1.getHeight(); y++)
{
for (int x = 0; x < image1.getWidth(); x++)
{
outputimage.setRGB(x,y,((image1.getRGB(x, y))^(image2.getRGB(x, y))));
System.out.println("one:" + image1.getRGB(x, y) + "\ttwo:" + image2.getRGB(x, y) + "\txor:" + ((image1.getRGB(x, y))^(image2.getRGB(x, y))));
System.out.println("one:" + Integer.toBinaryString(image1.getRGB(x, y)) + "\ttwo:" + Integer.toBinaryString(image2.getRGB(x, y)) + "\txor:" + Integer.toBinaryString((image1.getRGB(x, y)^image2.getRGB(x, y))));
}
}
return outputimage;
}
}
The first time I run this code, where image1 is a 4-pixel colored image and image2 is a 4-pixel white image, I get this output:
input: #java sefi white.jpg key.jpg enc
one:-201053 two:-1 xor:201052
one:11111111111111001110111010100011 two:11111111111111111111111111111111 xor:110001000101011100
one:-265579 two:-1 xor:265578
one:11111111111110111111001010010101 two:11111111111111111111111111111111 xor:1000000110101101010
one:-664247 two:-1 xor:664246
one:11111111111101011101110101001001 two:11111111111111111111111111111111 xor:10100010001010110110
one:-925624 two:-1 xor:925623
one:11111111111100011110000001001000 two:11111111111111111111111111111111 xor:11100001111110110111
The next time I run it with image1 as the XORed image file and image2 as the 4-pixel colored file, it should give me the original white image as output. But I get this instead:
Input:#java sefi xored.jpg key.jpg dec
one:-1 two:-16773753 xor:16773752
one:11111111111111111111111111111111 two:11111111000000000000110110000111 xor:111111111111001001111000
one:-1 two:-16773753 xor:16773752
one:11111111111111111111111111111111 two:11111111000000000000110110000111 xor:111111111111001001111000
one:-1 two:-15786601 xor:15786600
one:11111111111111111111111111111111 two:11111111000011110001110110010111 xor:111100001110001001101000
one:-1 two:-15786601 xor:15786600
one:11111111111111111111111111111111 two:11111111000011110001110110010111 xor:111100001110001001101000
If you look at the output, you can see that the colors of the XORed image from the first run have changed.
I am not able to understand why I am getting different color values for the same image file.
There's something wrong with your image selection. If you were reading the RGB values of the generated image, then the first pixel would be 110001000101011100 and not 11111111000000000000110110000111.
So my advice is to check whether you are using the correct images in the second step.
Your code looks OK, although you'd have to paste the whole code for me to have a better idea.
Found the answer.
It is because I am using JPEG images. JPEG compresses the raw image, but when it is decompressed it is not guaranteed to reproduce exactly the same colors as before compression. Hence the different color values before and after.
When I used BMP images, I got the same colors before and after.
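In code terms, the fix is just to write the intermediate result in a lossless format, for example in encrypt():

// PNG (or BMP) is lossless, so the XORed pixel values survive the round trip unchanged.
BufferedImage xoredimage = xor(bikey, biplain);
ImageIO.write(xoredimage, "png", new File("xored.png"));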

Replacing image pixel in Java2D

I am trying to replace some pixels in my source image (PNG format), but I am ending up with a confusing result. Basically, I am replacing a particular RGB value with black and white colors. Here is my code:
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import javax.imageio.ImageIO;
public class ChangePixel
{
public static void main(String args[]) throws IOException
{
File file = new File(System.getProperty("user.dir"), "D4635.png");
FileInputStream fis = new FileInputStream(file);
BufferedImage image = ImageIO.read(fis);
int[] replaceColors = new int[2];
replaceColors[0] = Color.BLACK.getRGB();
replaceColors[1] = Color.WHITE.getRGB();
Color src = new Color(133, 101, 51);
int srcRGBvalue = src.getRGB();
changeAlg2(image, srcRGBvalue, replaceColors);
}
private static void changeAlg2(BufferedImage image, int srcRGBvalue, int[] replaceColors) throws IOException
{
for (int width = 0; width < image.getWidth(); width++)
{
for (int height = 0; height < image.getHeight(); height++)
{
if (image.getRGB(width, height) == srcRGBvalue)
{
image.setRGB(width, height, ((width + height) % 2 == 0 ? replaceColors[0] : replaceColors[1]));
}
}
}
File file = new File(System.getProperty("user.dir"), "107.png");
ImageIO.write(image, "png", file);
}
}
It changes my source pixels to black and some other color, instead of white. Please advise me on what's going wrong here.
Since this is my first post, I am not able to attach my images. Sorry for the inconvenience.
Edit: I have uploaded the source and the output images to a site. Here are the URLs:
Source : http://s20.postimage.org/d7zdt7kwt/D4635.png
Output : http://s20.postimage.org/kdr4vntzx/107.png
Expected output: after each black pixel, a white pixel should come.
Edit: Resolved code as per Jan Dvorak's advice:
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import javax.imageio.ImageIO;
public class ChangePixel
{
public static void main(String args[]) throws IOException
{
File file = new File(System.getProperty("user.dir"), "D12014.gif");
FileInputStream fis = new FileInputStream(file);
BufferedImage image = ImageIO.read(fis);
Color src = new Color(223, 170, 66);
int srcRGBvalue = src.getRGB();
int[] replaceColors = new int[2];
replaceColors[0] = Color.MAGENTA.getRGB();
replaceColors[1] = Color.CYAN.getRGB();
changeAlg2(image, srcRGBvalue, replaceColors);
}
private static void changeAlg2(BufferedImage image, int srcRGBvalue, int[] replaceColors) throws IOException
{
BufferedImage image2 = new BufferedImage(image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_RGB);
for (int width = 0; width < image.getWidth(); width++)
{
for (int height = 0; height < image.getHeight(); height++)
{
if (image.getRGB(width, height) == srcRGBvalue)
{
image2.setRGB(width, height, ((width + height) % 2 == 0 ? replaceColors[0] : replaceColors[1]));
}
else
{
image2.setRGB(width, height, image.getRGB(width, height));
}
}
}
File file = new File(System.getProperty("user.dir"), "110.gif");
ImageIO.write(image2, "gif", file);
}
}
Regards
Raja.
Since you are adding colors that are not present in the original image palette, the pixels you are trying to set are clipped to the nearest color in the palette. You need a new color model: either convert to 24bpp RGB (true-color) or extend the palette with the new colors.
It doesn't seem to be possible to modify an existing BufferedImage ColorModel or assign a new one, but you can create a new buffer and copy the data there. Creating a new BufferedImage with the same Raster might work as well (only if the bit depth does not change?).
If you don't mind, you can always create a true-color image. Try:
{
BufferedImage old = image;
image = new BufferedImage(
old.getWidth(),
old.getHeight(),
BufferedImage.TYPE_INT_RGB
);
image.setData(old.getRaster());
} // old is no longer needed
API reference: http://docs.oracle.com/javase/1.4.2/docs/api/java/awt/image/BufferedImage.html
You could try to detect if the image is already in true-color (image.getColorModel() instanceof ???) to avoid having to copy the buffer when not needed.
You could try to extend the existing palette. If that is not possible (there is no palette to start with, or there is not enough space), you have to fall back to RGB.
See:
http://docs.oracle.com/javase/1.4.2/docs/api/java/awt/image/BufferedImage.html (getColorModel and the constructor taking a ColorModel and type)
http://docs.oracle.com/javase/1.4.2/docs/api/java/awt/image/IndexColorModel.html (getMapSize, getRGBs and the corresponding constructor)
From seeing your actual palette, you'll need some sort of deduplication logic, because your palette already has 256 entries - the maximum size of a PNG palette. Note that you should not save the image with a larger palette than there are colors in the image (especially when you want to add new colors later); your original file could have been saved with a 2-color palette, saving 762 bytes.
Note that you don't gain much from storing the image as indexed as opposed to true-color with the same number of colors. The reason is that the byte stream (palette = 1 byte per pixel, true-color = 3 or 4 bytes per pixel) is losslessly compressed (with DEFLATE) anyway. Indexed can save you a few bytes (or lose you a few, if the palette is big), but it won't reduce the file size to one third.
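As a rough sketch of the detection hinted at above (my own guess at filling in the "???"; it assumes checking for an IndexColorModel is enough, and needs java.awt.Graphics2D and java.awt.image.IndexColorModel imports):

// Only copy into a true-color buffer when the source image is palette-based.
BufferedImage image = ImageIO.read(fis);
if (image.getColorModel() instanceof IndexColorModel) {
    BufferedImage rgb = new BufferedImage(image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_RGB);
    Graphics2D g = rgb.createGraphics();
    g.drawImage(image, 0, 0, null);   // drawImage also converts the palette entries to RGB
    g.dispose();
    image = rgb;
}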
