I created a map editor in Java. The problem is that with byte precision I get a visible step for every byte value, so the map isn't smooth. Is it possible to change the BufferedImage raster data to float data and draw on it with float precision?
To answer your question: yes, you can create a BufferedImage with float precision. It is, however, a little unclear whether this will actually solve your problem.
In any case, here's working example code for creating a BufferedImage with float precision:
public class FloatImage {
public static void main(String[] args) {
// Define dimensions and layout of the image
int w = 300;
int h = 200;
int bands = 4; // 4 bands for ARGB, 3 for RGB etc
int[] bandOffsets = {0, 1, 2, 3}; // length == bands; band 0 == R, 1 == G, 2 == B, 3 == A
// Create a TYPE_FLOAT sample model (specifying how the pixels are stored)
SampleModel sampleModel = new PixelInterleavedSampleModel(DataBuffer.TYPE_FLOAT, w, h, bands, w * bands, bandOffsets);
// ...and data buffer (where the pixels are stored)
DataBuffer buffer = new DataBufferFloat(w * h * bands);
// Wrap it in a writable raster
WritableRaster raster = Raster.createWritableRaster(sampleModel, buffer, null);
// Create a color model compatible with this sample model/raster (TYPE_FLOAT)
// Note that the number of bands must equal the number of color components in the
// color space (3 for RGB) + 1 extra band if the color model contains alpha
ColorSpace colorSpace = ColorSpace.getInstance(ColorSpace.CS_sRGB);
ColorModel colorModel = new ComponentColorModel(colorSpace, true, false, Transparency.TRANSLUCENT, DataBuffer.TYPE_FLOAT);
// And finally create an image with this raster
BufferedImage image = new BufferedImage(colorModel, raster, colorModel.isAlphaPremultiplied(), null);
System.out.println("image = " + image);
}
}
For map elevation data, using a single band (bands = 1; bandOffsets = {0};), a grayscale color space (ColorSpace.CS_GRAY), and no transparency may make more sense.
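For illustration, here's a minimal sketch of that single-band grayscale variant (same approach as the code above, only the layout and color space change; variable names are illustrative, and for TYPE_FLOAT the samples are typically treated as normalized values in the 0.0–1.0 range):
int w = 300;
int h = 200;
int bands = 1; // single elevation band
int[] bandOffsets = {0};
SampleModel sampleModel = new PixelInterleavedSampleModel(DataBuffer.TYPE_FLOAT, w, h, bands, w * bands, bandOffsets);
DataBuffer buffer = new DataBufferFloat(w * h * bands);
WritableRaster raster = Raster.createWritableRaster(sampleModel, buffer, null);
ColorSpace grayCS = ColorSpace.getInstance(ColorSpace.CS_GRAY);
ColorModel grayModel = new ComponentColorModel(grayCS, false, false, Transparency.OPAQUE, DataBuffer.TYPE_FLOAT);
BufferedImage elevationImage = new BufferedImage(grayModel, raster, grayModel.isAlphaPremultiplied(), null);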
Motivation:
My goal is to convert an AWT BufferedImage to an SWT ImageData in the most efficient way. The typical answer to this question is a pixel-by-pixel conversion of the whole picture, which is O(n^2). It would be much more efficient if they could exchange the whole pixel matrix as it is. BufferedImage seems to be very flexible in specifying in detail how colors and alpha are encoded.
To give you some wider context: I wrote an on-demand SVG icon rasterizer using Apache Batik, but it is for an SWT (Eclipse) application. Batik renders only to a java.awt.image.BufferedImage, but SWT components require an org.eclipse.swt.graphics.Image.
Their backing raster objects, java.awt.image.Raster and org.eclipse.swt.graphics.ImageData, represent exactly the same thing; they are just wrappers around a 2D array of byte values representing pixels. If I can make one or the other use the same color encoding, voilà, I can reuse the backing array as it is.
I got pretty far, this works:
// define a blank "canvas" for the Batik transcoder to rasterize the SVG into
public BufferedImage createCanvasForBatik(int w, int h) {
return new BufferedImage(w, h, BufferedImage.TYPE_4BYTE_ABGR);
}
// convert AWT's BufferedImage to SWT's ImageData to be made into SWT Image later
public ImageData convertToSWT(BufferedImage bufferedImage) {
DataBuffer db = bufferedImage.getData().getDataBuffer();
byte[] matrix = ((DataBufferByte) db).getData();
PaletteData palette =
new PaletteData(0x0000FF, 0x00FF00, 0xFF0000); // BGR model (red mask in the low byte)
// the last argument contains the byte[] with the image data
int w = bufferedImage.getWidth();
int h = bufferedImage.getHeight();
ImageData swtimgdata = new ImageData(w, h, 32, palette);
swtimgdata.data = matrix; // ImageData has all fields public!
// ImageData swtimgdata = new ImageData(w, h, 32, palette, 4, matrix); ..also works
return swtimgdata;
}
It all works except transparency :(
It looks like ImageData requires (always?) alpha to be a separate raster (ImageData.alphaData), apart from the color raster (ImageData.data); both are byte[].
Is there a way to make ImageData accept an ARGB model, that is, alpha interleaved with the other color channels? I doubt it, so I went the other way: make BufferedImage use separate arrays (rasters, or "bands") for the colors and for alpha. ComponentColorModel and Raster.createBandedRaster seem to be intended exactly for this.
So far I got here:
public BufferedImage createCanvasForBatik(int w, int h) {
ColorSpace cs = ColorSpace.getInstance(ColorSpace.CS_sRGB);
int[] nBits = {8, 8, 8, 8}; // ??
ComponentColorModel colorModel = new ComponentColorModel(cs, nBits, true, false, Transparency.TRANSLUCENT, DataBuffer.TYPE_BYTE);
WritableRaster raster = Raster.createBandedRaster(
DataBuffer.TYPE_BYTE, w, h, 4, new Point(0,0));
boolean isPremultiplied = false;
Hashtable<?, ?> properties = null;
return new BufferedImage(colorModel, raster, isPremultiplied, properties);
}
That creates a separate raster (band) for alpha, but also one for every color separately, so I end up with 4 bands (4 rasters), which is again unusable for an SWT Image. Is it possible to create a banded raster with 2 bands: one for the colors in RGB or BGR order, and one for alpha only?
I don't know SWT in detail, but based on my understanding of the API docs, the following should work:
The trick is to use a custom DataBuffer implementation that masquerades as a "banded" buffer, but internally uses a combination of interleaved RGB and separate alpha array for storage. This works nicely with the standard BandedSampleModel. You will lose any chance of special (hardware) optimizations that are normally applied to BufferedImages using this model, but that should not matter as you are using SWT for display anyway.
I suggest you create your SWT image first, and then "wrap" the color and alpha arrays from the SWT image in the custom data buffer. If you do it this way, Batik should render directly to your SWT image, and you can just throw away the BufferedImage afterwards (if this is not practical, you can of course do it the other way around as well, but you may need to expose the internal arrays of the custom data buffer class below, to create the SWT image).
Code (important parts are the SWTDataBuffer class and createImage method):
public class SplitDataBufferTest {
/** Custom DataBuffer implementation using separate arrays for RGB and alpha.*/
public static class SWTDataBuffer extends DataBuffer {
private final byte[] rgb; // RGB or BGR interleaved
private final byte[] alpha;
public SWTDataBuffer(byte[] rgb, byte[] alpha) {
super(DataBuffer.TYPE_BYTE, alpha.length, 4); // Masquerade as banded data buffer
if (alpha.length * 3 != rgb.length) {
throw new IllegalArgumentException("Bad RGB/alpha array lengths");
}
this.rgb = rgb;
this.alpha = alpha;
}
@Override
public int getElem(int bank, int i) {
switch (bank) {
case 0:
case 1:
case 2:
return rgb[i * 3 + bank] & 0xFF; // mask to return unsigned values, as DataBufferByte does
case 3:
return alpha[i] & 0xFF;
}
throw new IndexOutOfBoundsException(String.format("bank %d >= number of banks, %d", bank, getNumBanks()));
}
@Override
public void setElem(int bank, int i, int val) {
switch (bank) {
case 0:
case 1:
case 2:
rgb[i * 3 + bank] = (byte) val;
return;
case 3:
alpha[i] = (byte) val;
return;
}
throw new IndexOutOfBoundsException(String.format("bank %d >= number of banks, %d", bank, getNumBanks()));
}
}
public static void main(String[] args) {
// These are given from your SWT image
int w = 300;
int h = 200;
byte[] rgb = new byte[w * h * 3];
byte[] alpha = new byte[w * h];
// Create an empty BufferedImage around the SWT image arrays
BufferedImage image = createImage(w, h, rgb, alpha);
// Just to demonstrate that it works
System.out.println("image: " + image);
paintSomething(image);
showIt(image);
}
private static BufferedImage createImage(int w, int h, byte[] rgb, byte[] alpha) {
DataBuffer buffer = new SWTDataBuffer(rgb, alpha);
// SampleModel sampleModel = new BandedSampleModel(DataBuffer.TYPE_BYTE, w, h, 4); // If SWT data is RGB, you can use simpler constructor
SampleModel sampleModel = new BandedSampleModel(DataBuffer.TYPE_BYTE, w, h, w,
new int[] {2, 1, 0, 3}, // Bank indices: R, G, B read from banks 2, 1, 0 (BGR data), alpha from bank 3
new int[] {0, 0, 0, 0});
WritableRaster raster = Raster.createWritableRaster(sampleModel, buffer, null);
ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB), true, false, Transparency.TRANSLUCENT, DataBuffer.TYPE_BYTE);
return new BufferedImage(colorModel, raster, colorModel.isAlphaPremultiplied(), null);
}
private static void showIt(final BufferedImage image) {
SwingUtilities.invokeLater(new Runnable() {
@Override
public void run() {
JFrame frame = new JFrame("Test");
frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
JLabel label = new JLabel(new ImageIcon(image));
label.setOpaque(true);
label.setBackground(Color.GRAY);
frame.add(label);
frame.pack();
frame.setLocationRelativeTo(null);
frame.setVisible(true);
}
});
}
private static void paintSomething(BufferedImage image) {
int w = image.getWidth();
int h = image.getHeight();
int qw = w / 4;
int qh = h / 4;
Graphics2D g = image.createGraphics();
g.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
g.setColor(Color.ORANGE);
g.fillOval(0, 0, w, h);
g.setColor(Color.RED);
g.fillRect(5, 5, qw, qh);
g.setColor(Color.WHITE);
g.drawString("R", 5, 30);
g.setColor(Color.GREEN);
g.fillRect(5 + 5 + qw, 5, qw, qh);
g.setColor(Color.BLACK);
g.drawString("G", 5 + 5 + qw, 30);
g.setColor(Color.BLUE);
g.fillRect(5 + (5 + qw) * 2, 5, qw, qh);
g.setColor(Color.WHITE);
g.drawString("B", 5 + (5 + qw) * 2, 30);
g.dispose();
}
}
I have a byte array with type TYPE_4BYTE_ABGR, and I know its width and height, I want to change it to BufferedImage, any ideas?
The fastest way to create a BufferedImage from a byte array in TYPE_4BYTE_ABGR form, is to wrap the array in a DataBufferByte and create an interleaved WritableRaster from that. This will make sure there are no additional byte array allocations. Then create the BufferedImage from the raster, and a matching color model:
public static void main(String[] args) {
int width = 300;
int height = 200;
int samplesPerPixel = 4; // This is the *4BYTE* in TYPE_4BYTE_ABGR
int[] bandOffsets = {3, 2, 1, 0}; // This is the order (ABGR) part in TYPE_4BYTE_ABGR
byte[] abgrPixelData = new byte[width * height * samplesPerPixel];
DataBuffer buffer = new DataBufferByte(abgrPixelData, abgrPixelData.length);
WritableRaster raster = Raster.createInterleavedRaster(buffer, width, height, samplesPerPixel * width, samplesPerPixel, bandOffsets, null);
ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB), true, false, Transparency.TRANSLUCENT, DataBuffer.TYPE_BYTE);
BufferedImage image = new BufferedImage(colorModel, raster, colorModel.isAlphaPremultiplied(), null);
System.out.println("image: " + image); // Should print: image: BufferedImage#<hash>: type = 6 ...
}
Note, however, that this image will be "unmanaged" (some hardware acceleration will be disabled), because you have direct access to the pixel array.
To avoid this, create the WritableRaster without passing the pixels, and copy the pixels into it afterwards. This will use twice as much memory, but keeps the image "managed", and thus possibly gives better display performance:
// Skip creating the data buffer
WritableRaster raster = Raster.createInterleavedRaster(DataBuffer.TYPE_BYTE, width, height, samplesPerPixel * width, samplesPerPixel, bandOffsets, null);
raster.setDataElements(0, 0, width, height, abgrPixelData);
// ...rest of code as above.
You could even do this (which might be more familiar):
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_4BYTE_ABGR);
WritableRaster raster = image.getRaster();
raster.setDataElements(0, 0, width, height, abgrPixelData);
Might not be very efficient, but a BufferedImage can be converted to another type this way:
public static BufferedImage convertToType(BufferedImage image, int type) {
BufferedImage newImage = new BufferedImage(image.getWidth(), image.getHeight(), type);
Graphics2D graphics = newImage.createGraphics();
graphics.drawImage(image, 0, 0, null);
graphics.dispose();
return newImage;
}
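Typical usage is a one-liner (originalImage here is just a placeholder for whatever image you already have):
BufferedImage argbCopy = convertToType(originalImage, BufferedImage.TYPE_INT_ARGB);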
About the method you want implemented: you would have to know the width or the height of the image to convert a byte[] to a BufferedImage (the other dimension can be derived from the array length and the pixel format).
Edit:
One way is to convert the ABGR byte[] into an int[] of packed ARGB values (TYPE_INT_ARGB layout) and use setRGB:
int[] dst = new int[width * height];
for (int i = 0, j = 0; i < dst.length; i++) {
int a = src[j++] & 0xff;
int b = src[j++] & 0xff;
int g = src[j++] & 0xff;
int r = src[j++] & 0xff;
dst[i] = (a << 24) | (r << 16) | (g << 8) | b;
}
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
image.setRGB(0, 0, width, height, dst, 0, width);
private BufferedImage outputImg;
for(int y = 0; y < inputImg.getHeight(); ++y)
{
for(int x = 0; x < inputImg.getWidth(); ++x)
{
Color originPixel = new Color(inputImg.getRGB(x, y));
double X = 0.412453 * originPixel.getRed() + 0.35758 * originPixel.getGreen() + 0.180423 * originPixel.getBlue();
double Y = 0.212671 * originPixel.getRed() + 0.71516 * originPixel.getGreen() + 0.072169 * originPixel.getBlue();
double Z = 0.019334 * originPixel.getRed() + 0.119193 * originPixel.getGreen() + 0.950227 * originPixel.getBlue();
//???
}
}
In my color space conversion function I take an RGB pixel and convert it into an XYZ pixel. But how do I store this result in outputImg?
Among the BufferedImage methods I only see setRGB, which takes packed RGB values.
To work with a BufferedImage in a different color model than RGB, you typically have to work with the Raster or DataBuffer directly.
The fastest way to convert from an RGB color space (like sRGB) to an XYZ color space (like CIEXYZ), is to use ColorConvertOp. However, I assume that this is an assignment, and your task is to implement this yourself.
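For reference, here's a minimal sketch of the ColorConvertOp approach (assuming inputImg is the sRGB BufferedImage from the question):
ColorSpace xyzCS = ColorSpace.getInstance(ColorSpace.CS_CIEXYZ);
ColorConvertOp toXYZ = new ColorConvertOp(xyzCS, null);
BufferedImage xyzImg = toXYZ.filter(inputImg, null); // passing null lets the op create a compatible destination image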
It's possible to create an XYZ BufferedImage like this:
int w = 1024, h = 1024; // or whatever you prefer
ColorSpace xyzCS = ColorSpace.getInstance(ColorSpace.CS_CIEXYZ);
ComponentColorModel cm = new ComponentColorModel(xyzCS, false, false, Transparency.OPAQUE, DataBuffer.TYPE_BYTE);
WritableRaster raster = cm.createCompatibleWritableRaster(w, h);
BufferedImage xyzImage = new BufferedImage(cm, raster, cm.isAlphaPremultiplied(), null);
You can then modify the samples/pixels through the WritableRaster, using raster.setPixel(x, y, pixelData) or raster.setPixels(x, y, w, h, pixelData) or one of the raster.setSample(x, y, band, ...)/setSamples(x, y, w, h, band, ...) methods.
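For example, inside the loop from the question you could write each converted pixel like this (a sketch: x, y, X, Y and Z come from the question's code, and since the raster is byte-based you should scale or clamp the values into the 0–255 range as appropriate for your conversion):
raster.setPixel(x, y, new double[] {X, Y, Z});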
You can also get the DataBuffer, using raster.getDataBuffer(), or if you really like to, access the backing array directly:
// The cast is safe, as long as you used DataBuffer.TYPE_BYTE for cm above
DataBufferByte buffer = (DataBufferByte) raster.getDataBuffer();
byte[] pixels = buffer.getData();
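If you go the direct array route, a sample's index can be computed like this (assuming the default pixel-interleaved layout created by createCompatibleWritableRaster, with 3 samples per pixel and no scanline padding; band and value are illustrative variables):
// band: 0 = X, 1 = Y, 2 = Z; (x, y) are the pixel coordinates
int index = (y * w + x) * 3 + band;
pixels[index] = (byte) value;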
Why would this method throw an index-out-of-bounds error? I'm trying to create an image from data I generate myself, and I expected this to work.
private BufferedImage getImageFromFloatArray(float[] data, int w, int h) {
BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
System.out.println("Image pixel array size: "
+ ((DataBufferInt) img.getRaster().getDataBuffer())
.getData().length);
System.out.println("Datasize: " + data.length);
WritableRaster raster = img.getRaster();
raster.setPixels(0, 0, w, h, data);
return img;
}
Stacktrace
Image pixel array size: 800000
Datasize: 800000
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 800000
at java.awt.image.SampleModel.setPixels(Unknown Source)
at java.awt.image.WritableRaster.setPixels(Unknown Source)
at image.PixelAraryToImageTest.getImageFromFloatArray(PixelAraryToImageTest.java:36)
Try using the Raster's width and height instead of the BufferedImage's width and height. Also use Raster.getMinX() and Raster.getMinY(), as shown below.
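In code, that suggestion looks roughly like this (a sketch against the method from the question; note it does not by itself fix the array-length issue explained below):
WritableRaster raster = img.getRaster();
raster.setPixels(raster.getMinX(), raster.getMinY(), raster.getWidth(), raster.getHeight(), data);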
Each value in the float array isn't a full pixel value; it is a single color component (sample). So a 2x1 image would actually need an array of length 8 (2 × 1 × 4), since there are four color components per pixel. To make a 2x1 image red, for example, would require something like this:
int numColorComponents = 4;
float[] data = new float[imgWidth * imgHeight * numColorComponents];
raster.setPixels(minX, minY, rasterWidth, rasterHeight, data);
Also, unlike some other graphics frameworks, the float buffer here doesn't hold normalized values; each sample is in the range [0, 255]. Note too that the samples are given per pixel in the raster's band order, which for TYPE_INT_ARGB is R, G, B, A. So, to set a 2x1 image to opaque red, the buffer would be:
float red = 255;
float alpha = 255;
float[] buffer = new float[] {red, 0, 0, alpha, red, 0, 0, alpha};
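Putting it together, here's a minimal corrected sketch of the method from the question (assuming data has length w * h * 4, with the samples for each pixel given in the raster's band order):
private BufferedImage getImageFromFloatArray(float[] data, int w, int h) {
    BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
    img.getRaster().setPixels(0, 0, w, h, data); // data.length must be w * h * 4
    return img;
}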
I want to extract the pixel values of a JPEG image using Java, and I need to store them in an array (bufferedArray) for further manipulation. How can I extract the pixel values from the JPEG format?
Have a look at BufferedImage.getRGB().
Here is a stripped-down instructional example of how to pull apart an image to do a conditional check/modify on the pixels. Add error/exception handling as necessary.
public static BufferedImage exampleForSO(BufferedImage image) {
BufferedImage imageIn = image;
BufferedImage imageOut =
new BufferedImage(imageIn.getWidth(), imageIn.getHeight(), BufferedImage.TYPE_4BYTE_ABGR);
int width = imageIn.getWidth();
int height = imageIn.getHeight();
int[] imageInPixels = imageIn.getRGB(0, 0, width, height, null, 0, width);
int[] imageOutPixels = new int[imageInPixels.length];
for (int i = 0; i < imageInPixels.length; i++) {
int inR = (imageInPixels[i] & 0x00FF0000) >> 16;
int inG = (imageInPixels[i] & 0x0000FF00) >> 8;
int inB = (imageInPixels[i] & 0x000000FF) >> 0;
if (conditionChecker_inRinGinB) { // placeholder for your own condition on inR/inG/inB
// modify imageOutPixels[i] here
} else {
imageOutPixels[i] = imageInPixels[i]; // keep the original pixel
}
}
imageOut.setRGB(0, 0, width, height, imageOutPixels, 0, width);
return imageOut;
}
The easiest way to get a JPEG into a java-readable object is the following:
BufferedImage image = ImageIO.read(new File("MyJPEG.jpg"));
BufferedImage provides methods for getting RGB values at exact pixel locations in the image (X-Y integer coordinates), so it'd be up to you to figure out how you want to store that in a single-dimensional array, but that's the gist of it.
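For example, a short sketch that pulls every pixel into a single int[] in row-major order (assuming image is the BufferedImage loaded above):
int w = image.getWidth();
int h = image.getHeight();
int[] pixels = image.getRGB(0, 0, w, h, null, 0, w); // packed ARGB; pixel (x, y) is at index y * w + x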
There is a way of taking a buffered image and converting it into an integer array, where each integer in the array represents the rgb value of a pixel in the image.
int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
The interesting thing is that when an element in the integer array is edited, the corresponding pixel in the image changes as well.
In order to find a pixel in the array from a set of x and y coordinates, you would use this method.
public void setPixel(int x, int y, int rgb) {
pixels[y * image.getWidth() + x] = rgb;
}
Even with the multiplication and addition of coordinates, it is still faster than using the setRGB() method in the BufferedImage class.
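A matching read helper (hypothetical, using the same pixels and image fields as the setPixel method above) would be:
public int getPixel(int x, int y) {
    return pixels[y * image.getWidth() + x];
}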
EDIT:
Also keep in mind that the image's type needs to be TYPE_INT_RGB, which it isn't by default. It can be converted by creating a new image of the same dimensions with type TYPE_INT_RGB, and then using the new image's graphics object to draw the original image onto it.
public BufferedImage toIntRGB(BufferedImage image){
if(image.getType() == BufferedImage.TYPE_INT_RGB)
return image;
BufferedImage newImage = new BufferedImage(image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_RGB);
newImage.getGraphics().drawImage(image, 0, 0, null);
return newImage;
}
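Typical usage, combining it with the direct buffer access shown earlier (variable names are illustrative):
BufferedImage rgbImage = toIntRGB(someImage);
int[] pixels = ((DataBufferInt) rgbImage.getRaster().getDataBuffer()).getData();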