Why are these pixel RGB values sometimes equal and sometimes not? I am learning image processing, and it would be great if someone could help me out here.
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class ColorTest1 {
    Color p1;
    Color p2;

    ColorTest1() throws IOException, InterruptedException {
        BufferedImage bi = ImageIO.read(new File("d:\\x.jpg"));
        for (int y = 0; y < bi.getHeight(); y++) {
            for (int x = 0; x < bi.getWidth() - 1; x++) {
                p1 = new Color(bi.getRGB(x, y));
                p2 = new Color(bi.getRGB(x + 1, y));
                int a = (p1.getAlpha() + p2.getAlpha()) / 2;
                int r = (p1.getRed() + p2.getRed()) / 2;
                int g = (p1.getGreen() + p2.getGreen()) / 2;
                int b = (p1.getBlue() + p2.getBlue()) / 2;
                int x1 = p1.getRGB();
                int x2 = p2.getRGB();
                int sum1 = (x1 + x2) / 2;
                int sum2 = a * 16777216 + r * 65536 + g * 256 + b;
                System.out.println(sum1 == sum2);
            }
        }
    }

    public static void main(String... areg) throws IOException, InterruptedException {
        new ColorTest1();
    }
}
This is the image:
Take two pixels. One is black. The other is nearly black but with a slight bit of red in it, just 1/255. Ignore alpha. r will be (0 + 1) / 2 = 0. g and b will be 0 too. x1 will be 0. x2 will be 65536, right? So sum1 will be 65536 / 2 = 32768. sum2 obviously will be 0.
Whenever the sum of the alpha, red, or green components of the two colours is odd, the integer division of the packed values carries the leftover bit into the high bit of the next channel down, leading to an unexpected result.
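In other words, if you want the average colour you have to average each channel separately and then repack, rather than averaging the packed ARGB ints. A minimal sketch (the helper name is mine, not from the question):

static int averageArgb(int argb1, int argb2) {
    // Average each channel separately, then repack, so no bit can
    // carry over from one channel into another.
    int a = (((argb1 >>> 24) & 0xFF) + ((argb2 >>> 24) & 0xFF)) / 2;
    int r = (((argb1 >>> 16) & 0xFF) + ((argb2 >>> 16) & 0xFF)) / 2;
    int g = (((argb1 >>> 8) & 0xFF) + ((argb2 >>> 8) & 0xFF)) / 2;
    int b = ((argb1 & 0xFF) + (argb2 & 0xFF)) / 2;
    return (a << 24) | (r << 16) | (g << 8) | b;
}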
I've been trying to implement a BC1 (DXT1) decompression algorithm in Java. Everything seems to work pretty precisely, but I've run into a problem with some blocks around transparent ones. I've been trying to resolve it for a few hours without success.
In short, after decompressing all blocks, everything looks good except for the blocks that are adjacent to transparent ones. During development I've been checking my results against DirectXTex (texconv), which is written in C++.
This is my result compared to the DirectXTex one:
Here is the code I'm using:
BufferedImage decompress(byte[] buffer, int width, int height)
and implementation:
BufferedImage result = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
int[] scanline = new int[4 * width]; // stores 4 horizontal lines (width/4 blocks)
RGBA[] blockPalette = new RGBA[4];   // stores RGBA values of the current block
int bufferOffset = 0;
for (int row = 0; row < height / 4; row++) {
    for (int col = 0; col < width / 4; col++) {
        short rgb0 = Short.reverseBytes(Bytes.getShort(buffer, bufferOffset));
        short rgb1 = Short.reverseBytes(Bytes.getShort(buffer, bufferOffset + 2));
        int bitmap = Integer.reverseBytes(Bytes.getInt(buffer, bufferOffset + 4));
        bufferOffset += 8;
        blockPalette[0] = R5G6B5.decode(rgb0);
        blockPalette[1] = R5G6B5.decode(rgb1);
        if (rgb0 <= rgb1) {
            int c2r = (blockPalette[0].getRed() + blockPalette[1].getRed()) / 2;
            int c2g = (blockPalette[0].getGreen() + blockPalette[1].getGreen()) / 2;
            int c2b = (blockPalette[0].getBlue() + blockPalette[1].getBlue()) / 2;
            blockPalette[2] = new RGBA(c2r, c2g, c2b, 255);
            blockPalette[3] = new RGBA(0, 0, 0, 0);
        } else {
            int c2r = (2 * blockPalette[0].getRed() + blockPalette[1].getRed()) / 3;
            int c2g = (2 * blockPalette[0].getGreen() + blockPalette[1].getGreen()) / 3;
            int c2b = (2 * blockPalette[0].getBlue() + blockPalette[1].getBlue()) / 3;
            int c3r = (blockPalette[0].getRed() + 2 * blockPalette[1].getRed()) / 3;
            int c3g = (blockPalette[0].getGreen() + 2 * blockPalette[1].getGreen()) / 3;
            int c3b = (blockPalette[0].getBlue() + 2 * blockPalette[1].getBlue()) / 3;
            blockPalette[2] = new RGBA(c2r, c2g, c2b, 255);
            blockPalette[3] = new RGBA(c3r, c3g, c3b, 255);
        }
        for (int i = 0; i < 16; i++, bitmap >>= 2) {
            int pi = (i / 4) * width + (col * 4 + i % 4);
            int index = bitmap & 3;
            scanline[pi] = A8R8G8B8.encode(blockPalette[index]);
        }
    }
    // copy scanline to buffered image
    result.setRGB(0, row * 4, width, 4, scanline, 0, width);
}
return result;
Does anyone have an idea where the problem is? I've been following exactly the same steps as the specification describes: Block Compression (Direct3D 10)
Is it that blockPalette[2].set(c2r, c2g, c2b); should be blockPalette[2].set(c2r, c2g, c2b, 255);? (in two locations)
For those who are interested, I've found that the problem was in comparing short values.
I've just changed:
if(rgb0 <= rgb1) {
to either
if(Short.compareUnsigned(rgb0, rgb1) <= 0) {
or
if((rgb0 & 0xffff) <= (rgb1 & 0xffff)) {
and this ensures that the colour values are compared as unsigned shorts (non-negative integers).
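To see why the signed comparison misbehaves: any 5:6:5 colour with its top red bit set (0x8000 or above) is negative when held in a Java short, so rgb0 <= rgb1 can take the wrong branch of the BC1 mode test. A small self-contained illustration (the example values are mine, not from the question):

public class ShortCompareDemo {
    public static void main(String[] args) {
        short rgb0 = (short) 0xF800; // pure red in R5G6B5; reads as -2048 when signed
        short rgb1 = (short) 0x07E0; // pure green in R5G6B5; reads as 2016

        System.out.println(rgb0 <= rgb1);                           // true  (signed view)
        System.out.println(Short.compareUnsigned(rgb0, rgb1) <= 0); // false (unsigned: 63488 > 2016)
        System.out.println((rgb0 & 0xffff) <= (rgb1 & 0xffff));     // false (same unsigned comparison)
    }
}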
So I am working on a program in Java which creates a rectangular image (see link below) as a PPM image that is then written into a PPM file. Creating the file and writing the image to it I understand. However, I am having difficulty generating the image dynamically so that it works for any width and height specified. From my understanding, a P3 PPM file simply follows this format for a 4x4 image:
P3
4 4
15
0 0 0 0 0 0 0 0 0 15 0 15
0 0 0 0 15 7 0 0 0 0 0 0
0 0 0 0 0 0 0 15 7 0 0 0
15 0 15 0 0 0 0 0 0 0 0 0
Here the first three lines are the header (magic number, dimensions, and maximum colour value) and the rest are simply the RGB values of each pixel. But I am having trouble figuring out how to build such a matrix for the image below, for any dimensions specified, since the picture is not made of solid colors in straight lines.
Image to be created:
I figured I could create an ArrayList which holds arrays of RGB values, so that each index in the list is one RGB triple followed by the next triple to its right. However, I am quite confused about what the RGB values should be. Here is what I have:
public static void createImage(int width, int height){
    pic = new ArrayList();
    for (int i = 0; i < width; i++) {
        for (int j = 0; j < height; j++) {
            int[] rgb = new int[3]; // one fresh triple per pixel
            rgb[0] = 255 - j; // random values as I'm not sure what they should be or how to calculate them
            rgb[1] = 0 + j;
            rgb[2] = 0 + j;
            pic.add(rgb);
        }
    }
}
Thanks in advance.
EDIT: Updated code
I have managed to fix most of the issues; however, the generated image does not match the one posted above. With this code I get the following image:
package ppm;

import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.ArrayList;

public class PPM {

    private BufferedImage img;
    private static final String imageDir = "Image/rect.ppm";
    private final static String filename = "assignment1_q1.ppm";
    private static byte bytes[] = null; // bytes which make up binary PPM image
    private static double doubles[] = null;
    private static int height = 0;
    private static int width = 0;
    private static ArrayList pic;
    private static String matrix = "";

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) throws IOException {
        createImage(200, 200);
        writeImage(filename);
    }

    public static void createImage(int width, int height){
        pic = new ArrayList();
        int[] rgb = new int[3];
        matrix += "P3\n" + width + "\n" + height + "\n255\n";
        for (int i = 0; i < height; i++) {
            for (int j = 0; j < width; j++) {
                Color c = getColor(width, height, j, i);
                //System.out.println(c);
                if (c == Color.red) {
                    rgb[0] = (int) (255 * factor(width, height, j, i));
                    rgb[1] = 0;
                    rgb[2] = 0;
                } else if (c == Color.green) {
                    rgb[0] = 0;
                    rgb[1] = (int) (255 * factor(width, height, j, i));
                    rgb[2] = 0;
                } else if (c == Color.blue) {
                    rgb[0] = 0;
                    rgb[1] = 0;
                    rgb[2] = (int) (255 * factor(width, height, j, i));
                } else if (c == Color.white) {
                    rgb[0] = (int) (255 * factor(width, height, j, i));
                    rgb[1] = (int) (255 * factor(width, height, j, i));
                    rgb[2] = (int) (255 * factor(width, height, j, i));
                }
                matrix += "" + rgb[0] + " " + rgb[1] + " " + rgb[2] + " ";
                //System.out.println("" + rgb[0] + " " + rgb[1] + " " + rgb[2] + " ");
                //pic.add(rgb);
            }
            matrix += "\n";
        }
    }

    public static Color getColor(int width, int height, int a, int b){
        double d1 = ((double) width / height) * a;
        double d2 = (((double) -width / height) * a + height);
        if (d1 > b && d2 > b) return Color.green;
        if (d1 > b && d2 < b) return Color.blue;
        if (d1 < b && d2 > b) return Color.red;
        return Color.white;
    }

    public static double factor(int width, int height, int a, int b){
        double factorX = (double) Math.min(a, width - a) / width * 2;
        double factorY = (double) Math.min(b, height - b) / height * 2;
        //System.out.println(Math.min(factorX, factorY));
        return Math.min(factorX, factorY);
    }

    public static void writeImage(String fn) throws FileNotFoundException, IOException {
        //if (pic != null) {
        FileOutputStream fos = new FileOutputStream(fn);
        fos.write(new String(matrix).getBytes());
        //fos.write(data.length);
        //System.out.println(data.length);
        fos.close();
        // }
    }
}
You can use linear functions to model the diagonals in the picture. Keep in mind, though, that the coordinates (0, 0) lie in the top-left corner of the image!
Say you want to create an image with dimensions width and height; the diagonal from the top-left to the bottom-right crosses the points (0, 0) and (width, height):
y = ax + t
0 = a * 0 + t => t = 0
height = a * width + 0 => a = height / width
d1(x) = (height / width) * x
Now we can calculate the function for the second diagonal. This diagonal goes through the points (0, height) and (width, 0), so:
y = ax + t
height = a * 0 + t => t = height
0 = a * width + height => a = -(height/width)
d2(x) = -(height/width) * x + height
From this we can determine whether a certain point in the image lies below or above a diagonal. As an example for the point (a, b):
if d1(a) > b: (a, b) lies above the first diagonal (left-top to right-bottom), thus it must be either blue or green. Otherwise it must be either red or white
if d2(a) > b: (a, b) lies above the second diagonal, thus it must be either red or green. Otherwise it must be white or blue
By applying both relationships it's easy to determine to which of the four colors a certain point belongs:
Color getColor(int width, int height, int a, int b){
    double d1 = ((double) height / width) * a;
    double d2 = ((double) -height / width) * a + height;
    if (d1 > b && d2 > b) return greenColor;
    if (d1 > b && d2 < b) return blueColor;
    if (d1 < b && d2 > b) return redColor;
    return whiteColor;
}
Now there's one last thing that we need to take into account: the image darkens towards its borders.
A darker version of a color can be created by multiplying each channel by a factor; the lower the factor, the darker the resulting color. For the sake of simplicity I'll assume the change in brightness is linear from the center of the image.
Since the brightness changes along the two axes independently, we model it by calculating the factor along each axis and taking the smaller of the two (the darker one).
The brightness change as a function of the distance from the center can be modeled using the distance to the nearer border of the image relative to the distance to the center (per axis):
deltaX = min(a, width - a) / (width / 2)
deltaY = min(b, height - b) / (height / 2)
So we can get the factor to multiply each color-channel by this way:
double factor(int width, int height, int a, int b){
    double factorX = (double) Math.min(a, width - a) / width * 2;
    double factorY = (double) Math.min(b, height - b) / height * 2;
    return Math.min(factorX, factorY);
}
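Putting the two pieces together, emitting one pixel of the P3 body could look roughly like this (a sketch in the question's style, assuming getColor returns a java.awt.Color and matrix is the output string being built):

// Sketch: one pixel (x, y) of the P3 body, darkened by factor().
Color c = getColor(width, height, x, y);
double f = factor(width, height, x, y);
int r = (int) (c.getRed() * f);
int g = (int) (c.getGreen() * f);
int b = (int) (c.getBlue() * f);
matrix += r + " " + g + " " + b + " ";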
I need to implement Gaussian blur in Java for 3x3, 5x5 and 7x7 matrices. Can you correct me if I'm wrong?
I have a 3x3 matrix (M) (the middle value is M(0, 0)):
1 2 1
2 4 2
1 2 1
I take one pixel (P) from the image and, for it and each of its nearest neighbours, accumulate:
s = M(-1, -1) * P(-1, -1) + M(-1, 0) * P(-1, 0) + ... + M(1, 1) * P(1, 1)
And then I divide it by the total value of the matrix:
P'(i, j) = s / (M(-1, -1) + M(-1, 0) + ... + M(1, 1))
That's all my program does. I leave the border pixels unchanged.
My program:
for (int i = 1; i < height - 1; i++) {
    for (int j = 1; j < width - 1; j++) {
        int sum = 0, l = 0;
        for (int m = -1; m <= 1; m++) {
            for (int n = -1; n <= 1; n++) {
                try {
                    System.out.print(l + " ");
                    sum += mask3[l++] * Byte.toUnsignedInt((byte) source[(i + m) * height + j + n]);
                } catch (ArrayIndexOutOfBoundsException e) {
                    int ii = (i + m) * height, jj = j + n;
                    System.out.println("Pixels[" + ii + "][" + jj + "] " + i + ", " + j);
                    System.exit(0);
                }
            }
            System.out.println();
        }
        System.out.println();
        output[i * width + j] = sum / maskSum[0];
    }
}
I get the source array from a BufferedImage like this:
int[] source = image.getRGB(0, 0, width, height, null, 0, width);
So for this image:
Result is this:
Can you tell me what is wrong with my program?
First of all, your formula for calculating the index in the source array is wrong. The image data is stored in the array one pixel row after the other, so the index for a given x and y is calculated like this:
index = x + y * width
For example, the pixel at x = 3, y = 2 in a 10-pixel-wide image sits at index 3 + 2 * 10 = 23. Furthermore, the color channels are stored in different bits of the int, so you cannot simply do the calculations on the whole packed int, since that lets one channel overflow into its neighbours.
The following solution should work (even though it just leaves the pixels at the bounds transparent):
public static BufferedImage blur(BufferedImage image, int[] filter, int filterWidth) {
    if (filter.length % filterWidth != 0) {
        throw new IllegalArgumentException("filter contains an incomplete row");
    }

    final int width = image.getWidth();
    final int height = image.getHeight();
    final int sum = IntStream.of(filter).sum();

    int[] input = image.getRGB(0, 0, width, height, null, 0, width);
    int[] output = new int[input.length];

    final int pixelIndexOffset = width - filterWidth;
    final int centerOffsetX = filterWidth / 2;
    final int centerOffsetY = filter.length / filterWidth / 2;

    // apply filter
    for (int h = height - filter.length / filterWidth + 1, w = width - filterWidth + 1, y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int r = 0;
            int g = 0;
            int b = 0;
            for (int filterIndex = 0, pixelIndex = y * width + x;
                    filterIndex < filter.length;
                    pixelIndex += pixelIndexOffset) {
                for (int fx = 0; fx < filterWidth; fx++, pixelIndex++, filterIndex++) {
                    int col = input[pixelIndex];
                    int factor = filter[filterIndex];

                    // sum up color channels separately
                    r += ((col >>> 16) & 0xFF) * factor;
                    g += ((col >>> 8) & 0xFF) * factor;
                    b += (col & 0xFF) * factor;
                }
            }
            r /= sum;
            g /= sum;
            b /= sum;
            // combine channels with full opacity
            output[x + centerOffsetX + (y + centerOffsetY) * width] = (r << 16) | (g << 8) | b | 0xFF000000;
        }
    }

    BufferedImage result = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    result.setRGB(0, 0, width, height, output, 0, width);
    return result;
}
int[] filter = {1, 2, 1, 2, 4, 2, 1, 2, 1};
int filterWidth = 3;
BufferedImage blurred = blur(img, filter, filterWidth);
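Since the question also covers 5x5 and 7x7, the same blur method can be reused with a larger kernel. One common choice for the 5x5 case is the binomial kernel below (the 7x7 case is analogous; the variable names here are mine):

// 5x5 binomial (Gaussian-like) kernel: outer product of {1, 4, 6, 4, 1}, sum = 256.
int[] filter5 = {
    1,  4,  6,  4, 1,
    4, 16, 24, 16, 4,
    6, 24, 36, 24, 6,
    4, 16, 24, 16, 4,
    1,  4,  6,  4, 1
};
BufferedImage blurred5 = blur(img, filter5, 5);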
I am trying to implement window level functionality (to apply bone, brain, lung, etc. presets on CT) for DICOM images in my application, and I implemented the formula as per the DICOM specification.
I am changing pixel values based on the formula below and creating a new image, but the images come out blank. What am I doing wrong, and is this the correct way to do it? Please help. Thanks
BufferedImage image = ...; // input image
double w = 2500; // window width
double c = 500;  // window center
double ymin = 0;
double ymax = 255;
double x = 0;
double y = 0;
double slope = dicomObject.get(Tag.RescaleSlope).getFloat(true);
double intercept = dicomObject.get(Tag.RescaleIntercept).getFloat(true);
int width = image.getWidth();
int height = image.getHeight();
double val = c - 0.5 - (w - 1) / 2;
double val2 = c - 0.5 + (w - 1) / 2;
for (int m = 0; m < height; m++) {
    for (int n = 0; n < width; n++) {
        int rgb = image.getRGB(n, m);
        int valrgb = image.getRGB(n, m);
        int a = (0xff000000 & valrgb) >>> 24;
        int r = (0x00ff0000 & valrgb) >> 16;
        int g = (0x0000ff00 & valrgb) >> 8;
        int b = (0x000000ff & valrgb);
        x = a + r + g + b;
        if (x <= val)
            y = ymin;
        else if (x > val2)
            y = ymax;
        else {
            y = ((x - (c - 0.5)) / (w - 1) + 0.5) * (ymax - ymin) + ymin;
        }
        y = y * slope + intercept;
        rgb = (int) y;
        image.setRGB(n, m, rgb);
    }
}
String filePath = "output fileName";
ImageIO.write(image, "jpeg", new File(filePath));
First of all, what's in your BufferedImage image?
There are three steps you want to take from the raw (decompressed) pixel data:
Get stored values - apply the BitsAllocated, BitsStored and HighBit transformation. (I guess your image has already passed that level.)
Get modality values - that's your Slope/Intercept transformation. After this transformation, your data will be in Hounsfield Units for CT.
Then apply the WW/WL (Value Of Interest) transformation, which maps that window of values into the grayscale color space; see the sketch below.
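As a rough illustration of steps 2 and 3, here is a minimal sketch (mine, not from the original answer) that takes one stored pixel value, applies the rescale slope/intercept, and then the linear windowing formula from the DICOM spec, producing an 8-bit display value:

// Sketch: stored value -> modality (Hounsfield) value -> 0..255 display value.
static int windowLevel(int storedValue, double slope, double intercept,
                       double windowCenter, double windowWidth) {
    double hu = storedValue * slope + intercept;            // modality transformation
    double lower = windowCenter - 0.5 - (windowWidth - 1) / 2;
    double upper = windowCenter - 0.5 + (windowWidth - 1) / 2;
    if (hu <= lower) return 0;                              // below the window
    if (hu > upper) return 255;                             // above the window
    return (int) (((hu - (windowCenter - 0.5)) / (windowWidth - 1) + 0.5) * 255);
}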
EDIT:
You've got to tell me where you got the "input image" from. After decompression, the pixel data should be in a byte array of size byte[width*height*2] (for a CT image BitsAllocated is always 16, hence the *2). You can get the stored values like this:
ushort code = (ushort)((pixel[0] + (pixel[1] << 8)) & (ushort)((1<<bitsStored) - 1));
int value = TwosComplementDecode(code);
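That snippet is C#-style; an equivalent sketch in Java (my wording, assuming little-endian pixel data, 16 bits allocated, and a signed pixel representation) might look like this:

// Sketch: read one stored value from the little-endian pixel data byte array.
static int storedValue(byte[] pixelData, int pixelIndex, int bitsStored, boolean signed) {
    int lo = pixelData[2 * pixelIndex] & 0xFF;
    int hi = pixelData[2 * pixelIndex + 1] & 0xFF;
    int code = ((hi << 8) | lo) & ((1 << bitsStored) - 1);
    if (signed && (code & (1 << (bitsStored - 1))) != 0) {
        code -= 1 << bitsStored; // two's-complement decode for signed data
    }
    return code;
}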
I appear to have hit a wall in my most recent project involving wave/ripple generation over an image. I made one that works with basic colors on a grid, and it works perfectly; heck, I even added shading to the colors depending on the height of the wave.
However, my overall goal was to make this effect work over an image like you would see here. I was following an algorithm that people call the Hugo Elias method (though I don't know if he truly came up with the design). His tutorial can be found here!
When following that tutorial I found his pseudocode challenging to follow. The concept mostly makes sense until I hit the height-map portion over an image. The problem is that the x and y offsets throw an ArrayIndexOutOfBoundsException because the offset is added to the corresponding x or y. If the wave is too big (in my case 512) it throws an error; yet if it is too small you can't see it.
Any ideas or fixes for my attempted implementation of his algorithm?
I can't really make a compilable version that is small and shows the issue, but I will give the three methods I'm using in the algorithm. Also keep in mind that buffer1 and buffer2 are the height maps for the wave (current and previous) and imgArray is a BufferedImage represented by an int[img.getWidth() * img.getHeight()] full of ARGB values.
Anyways here you go:
public class WaveRippleAlgorithmOnImage extends JPanel implements Runnable, MouseListener, MouseMotionListener
{
    private int[] buffer1;
    private int[] buffer2;
    private int[] imgArray;
    private int[] movedImgArray;
    private static double dampening = 0.96;
    private BufferedImage img;

    public WaveRippleAlgorithmOnImage(BufferedImage img)
    {
        this.img = img;

        imgArray = new int[img.getHeight()*img.getWidth()];
        movedImgArray = new int[img.getHeight()*img.getWidth()];

        imgArray = img.getRGB(0, 0,
                img.getWidth(), img.getHeight(),
                null, 0, img.getWidth());

        //OLD CODE
        /*for(int y = 0; y < img.getHeight(); y++)
        {
            for(int x = 0; x < img.getWidth(); x++)
            {
                imgArray[y][x] = temp[0 + (y-0)*img.getWidth() + (x-0)];
            }
        }*/

        buffer1 = new int[img.getHeight()*img.getWidth()];
        buffer2 = new int[img.getHeight()*img.getWidth()];

        buffer1[buffer1.length/2] = (img.getWidth() <= img.getHeight() ? img.getWidth() / 3 : img.getHeight() / 3);
        //buffer1[25][25] = 10;

        back = new BufferedImage(img.getWidth(), img.getHeight(), BufferedImage.TYPE_INT_ARGB);

        this.addMouseListener(this);
        this.addMouseMotionListener(this);
    }

    //<editor-fold defaultstate="collapsed" desc="Used Methods">
    @Override
    public void run()
    {
        while(true)
        {
            this.update();
            this.repaint();
            this.swap();
        }
    }

    //Called from Thread to update movedImgArray prior to being drawn.
    private void update()
    {
        //This is my attempt at converting his code to Java.
        for (int i = img.getWidth(); i < imgArray.length - 1; i++)
        {
            if(i % img.getWidth() == 0 || i >= imgArray.length - img.getWidth())
                continue;

            buffer2[i] = (
                    ((buffer1[i-1]+
                      buffer1[i+1]+
                      buffer1[i-img.getWidth()]+
                      buffer1[i+img.getWidth()]) >> 1)) - buffer2[i];

            buffer2[i] -= (buffer2[i] >> 5);
        }

        //Still my version of his code, because of the int[] instead of int[][].
        for (int y = 1; y < img.getHeight() - 2; y++)
        {
            for(int x = 1; x < img.getWidth() - 2; x++)
            {
                int xOffset = buffer1[((y)*img.getWidth()) + (x-1)] - buffer1[((y)*img.getWidth()) + (x+1)];
                int yOffset = buffer1[((y-1)*img.getWidth()) + (x)] - buffer1[((y+1)*img.getWidth()) + (x)];
                int shading = xOffset;

                //Here is where the error occurs (after a click or wave started), because yOffset becomes -512; which in turn gets
                //multiplied by y... Not good... -_-
                movedImgArray[(y*img.getWidth()) + x] = imgArray[((y+yOffset)*img.getWidth()) + (x+xOffset)] + shading;
            }
        }

        //This is my OLD code that kinda worked...
        //I threw it in here to show you how I was doing it before I switched to images.
        /*
        for(int y = 1; y < img.getHeight() - 1; y++)
        {
            for(int x = 1; x < img.getWidth() - 1; x++)
            {
                //buffer2[y][x] = ((buffer1[y][x-1] +
                //                  buffer1[y][x+1] +
                //                  buffer1[y+1][x] +
                //                  buffer1[y-1][x]) / 4) - buffer2[y][x];

                buffer2[y][x] = ((buffer1[y][x-1] +
                                  buffer1[y][x+1] +
                                  buffer1[y+1][x] +
                                  buffer1[y-1][x] +
                                  buffer1[y + 1][x-1] +
                                  buffer1[y + 1][x+1] +
                                  buffer1[y - 1][x - 1] +
                                  buffer1[y - 1][x + 1]) / 4) - buffer2[y][x];

                buffer2[y][x] = (int)(buffer2[y][x] * dampening);
            }
        }*/
    }

    //Swaps buffers
    private void swap()
    {
        int[] temp;
        temp = buffer2;
        buffer2 = buffer1;
        buffer1 = temp;
    }

    //This creates a wave upon clicking. It also is where that 512 is coming from.
    //512 was about right in my OLD code shown above, but helps to cause the Exception now.
    @Override
    public void mouseClicked(MouseEvent e)
    {
        if(e.getX() > 0 && e.getY() > 0 && e.getX() < img.getWidth() && e.getY() < img.getHeight())
            buffer2[((e.getY())*img.getWidth()) + (e.getX())] = 512;
    }

    private BufferedImage back;

    @Override
    public void paintComponent(Graphics g)
    {
        super.paintComponent(g);

        back.setRGB(0, 0, img.getWidth(), img.getHeight(), movedImgArray, 0, img.getWidth());
        g.drawImage(back, 0, 0, null);
    }
}
P.S. Here are two images of the old code working.
Looking at my original pseudocode, I assume the Array Out Of Bounds error is happening when you try to look up the texture based on the offset. The problem happens because the refraction in the water is allowing us to see outside of the texture.
for every pixel (x,y) in the buffer
    Xoffset = buffer(x-1, y) - buffer(x+1, y)
    Yoffset = buffer(x, y-1) - buffer(x, y+1)
    Shading = Xoffset
    t = texture(x+Xoffset, y+Yoffset)   // Array out of bounds?
    p = t + Shading
    plot pixel at (x,y) with colour p
end loop
The way to fix this is simply to either clamp the texture coordinates, or let them wrap. Also, if you find that the amount of refraction is too much, you can reduce it by bit-shifting the Xoffset and Yoffset values a little bit.
int clamp(int x, int min, int max)
{
    if (x < min) return min;
    if (x > max) return max;
    return x;
}

int wrap(int x, int min, int max)
{
    while (x < min)
        x += (1 + max - min);
    while (x > max)
        x -= (1 + max - min);
    return x;
}
for every pixel (x,y) in the buffer
    Xoffset = buffer(x-1, y) - buffer(x+1, y)
    Yoffset = buffer(x, y-1) - buffer(x, y+1)
    Shading = Xoffset

    Xoffset >>= 1    // Halve the amount of refraction
    Yoffset >>= 1    // if you want.

    Xcoordinate = clamp(x+Xoffset, 0, Xmax)   // Use clamp() or wrap() here
    Ycoordinate = clamp(y+Yoffset, 0, Ymax)   //

    t = texture(Xcoordinate, Ycoordinate)
    p = t + Shading
    plot pixel at (x,y) with colour p
end loop
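Translated to the 1D arrays in the question, the texture lookup inside update() could be guarded roughly like this (a sketch using the question's field names, with the offsets halved as suggested above):

// Sketch: clamp the refracted lookup coordinates before indexing imgArray.
int w = img.getWidth();
int h = img.getHeight();

int xOffset = (buffer1[y * w + (x - 1)] - buffer1[y * w + (x + 1)]) >> 1;
int yOffset = (buffer1[(y - 1) * w + x] - buffer1[(y + 1) * w + x]) >> 1;
int shading = xOffset;

int sx = Math.max(0, Math.min(w - 1, x + xOffset)); // clamp to [0, width - 1]
int sy = Math.max(0, Math.min(h - 1, y + yOffset)); // clamp to [0, height - 1]

movedImgArray[y * w + x] = imgArray[sy * w + sx] + shading;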