Finding pixel position - java

public static void sample(BufferedImage image) {
    int width = image.getWidth();
    int height = image.getHeight();
    int[][] value = new int[width][height];
    int[][] valueR = new int[width][height];
    int[][] valueG = new int[width][height];
    int[][] valueB = new int[width][height];
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            int pixel = image.getRGB(i, j);
            value[i][j] = pixel;
            Color c = new Color(pixel);
            valueR[i][j] = c.getRed();
            valueG[i][j] = c.getGreen();
            valueB[i][j] = c.getBlue();
            System.out.println("Red value = " + valueR[i][j]);
            System.out.println("Green value = " + valueG[i][j]);
            System.out.println("Blue value = " + valueB[i][j]);
        }
    }
}
The above code stores the packed pixel values and the separate R, G and B components of an image in arrays.
public static BigInteger modPow(BigInteger a1, BigInteger e, BigInteger n) {
    BigInteger r = BigInteger.ONE;
    for (int i = e.bitLength() - 1; i >= 0; i--) {
        r = (r.multiply(r)).mod(n);
        if (e.testBit(i)) {
            r = (r.multiply(a1)).mod(n);
        }
    }
    System.out.println("C value = " + r);
    long lng = 3;
    BigInteger bi = BigInteger.valueOf(lng);
    BigInteger a = r.divide(bi);
    BigInteger b = r.mod(bi);
    System.out.println("pixel position a = " + a);
    System.out.println("modulus value b = " + b);
    return r;
}
In the above code I am computing the pixel position where I need to embed the secret bit, so I need to go to that specific pixel to embed the message. In the first piece of code I store the pixel colors in the array value[][], so I need to map the position produced by this second piece of code to a location in value[][].
Note: a1 is the position of the current bit of the information file to embed, and {e, n} is the public key.
My question is: how do I find the pixel positions?
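For reference, one way to map such a position back to (x, y) coordinates, assuming a is a zero-based pixel index counted in row-major order (the same order as the loops in the first code); the helper name here is illustrative:
public static int[] toCoordinates(int position, int width, int height) {
    if (position < 0 || position >= width * height) {
        throw new IllegalArgumentException("position out of range");
    }
    int x = position % width; // column
    int y = position / width; // row
    return new int[] { x, y }; // value[x][y] or image.getRGB(x, y) is that pixel
}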

Finding the position of a pixel is a simple concept with a somewhat involved execution. I've written some code here that takes a BufferedImage and searches through it for a pixel of a specific color.
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import java.io.IOException;
public class pixelSearch {
    public static void main(String[] args) {
        // I don't know where you're getting your image from, so I'll read one from a file
        File image = new File("image.bmp");
        try {
            BufferedImage imageToSearch = ImageIO.read(image);
            Color colorToFind = new Color(255, 255, 255); // define the color to search for by its RGB values 255,255,255
            // for more information on constructing colors look here: http://docs.oracle.com/javase/7/docs/api/java/awt/Color.html
            int[] pixelCoordinates = pSearch(colorToFind, imageToSearch); // search for the pixel
            System.out.println("Found pixel at (" + pixelCoordinates[0] + "," + pixelCoordinates[1] + ")."); // display coordinates
        } catch (IOException e) {
            System.out.println(e.toString());
        }
    }

    private static int[] pSearch(Color c, BufferedImage pic) {
        int cVal = c.getRGB(); // integer value of the color we are trying to find
        int x1 = 0;
        int y1 = 0;
        int x2 = pic.getWidth();
        int y2 = pic.getHeight();
        int[] XArray = new int[x2 - x1]; // array holding every valid X coordinate in the image
        int iterator = 0;
        while (iterator < XArray.length) {
            XArray[iterator] = x1 + iterator;
            iterator++;
        }
        int[] YArray = new int[y2 - y1]; // array holding every valid Y coordinate in the image
        iterator = 0;
        while (iterator < YArray.length) {
            YArray[iterator] = y1 + iterator;
            iterator++;
        }
        // next we iterate through all the possible coordinates to check each pixel
        for (int yVal : YArray) {
            for (int xVal : XArray) {
                int color = pic.getRGB(xVal, yVal); // get the color of the pixel at (xVal, yVal)
                if (color == cVal) { // if it matches the color passed to the function
                    int[] cPos = {xVal, yVal}; // store the coordinates
                    return cPos; // and return them
                }
            }
        }
        int[] returnVal = {-1, -1}; // if we didn't find it, return (-1, -1)
        return returnVal;
    }
}

Related

Calculating 'color distance' between 2 points in a 3-dimensional space

I have a homework task where I have to write a class responsible for contour detection. It is essentially an image processing operation, using the Euclidean distance between two points in 3-dimensional space. The formula we were given to use is:
Math.sqrt(Math.pow(pix1.red - pix2.red,2) + Math.pow(pix1.green- pix2.green,2) + Math.pow(pix1.blue- pix2.blue,2));
We need to consider each entry of the two-dimensional array storing the colors of the pixels of an image; if, for some pixel pix, the color distance between pix and any of its neighbors is more than 70, we change the color of the pixel to black, otherwise we change it to white.
We are also given a separate class responsible for choosing an image and selecting an output, to which the method operationContouring is applied. Java syntax and conventions are very new to me, having started with Python. Conceptually, I'm struggling to understand what the difference between pix1 and pix2 is and how to define them. This is my code so far.
Given:
import java.awt.Color;
/* Interface for ensuring all image operations invoked in same manner */
public interface operationImage {
    public Color[][] operationDo(Color[][] imageArray);
}
My code:
import java.awt.Color;
public class operationContouring implements operationImage {
    public Color[][] operationDo(Color[][] imageArray) {
        int numberOfRows = imageArray.length;
        int numberOfColumns = imageArray[0].length;
        Color[][] results = new Color[numberOfRows][numberOfColumns];
        for (int i = 0; i < numberOfRows; i++)
            for (int j = 0; j < numberOfColumns; j++) {
                int red = imageArray[i][j].getRed();
                int green = imageArray[i][j].getGreen();
                int blue = imageArray[i][j].getBlue();
                double DistanceColor = Math.sqrt(Math.pow(pix1.red - pix2.red, 2) + Math.pow(pix1.green - pix2.green, 2) + Math.pow(pix1.blue - pix2.blue, 2));
                int LIMIT = 70;
                if (DistanceColor > LIMIT) {
                    results[i][j] = new Color((red = 0), (green = 0), (blue = 0));
                } else {
                    results[i][j] = new Color((red = 255), (green = 255), (blue = 255));
                }
            }
        return results;
    }
}
This is a solution I wrote that uses BufferedImages. I tested it and it should work. Try changing it such that it uses your data format (Color[][]) and it should work for you too. Note that "pix1" is nothing more than a description of the color of some pixel, and "pix2" is the description of the color of the pixel you are comparing it to (determining whether the color distance > 70).
public static boolean tooDifferent(Color c1, Color c2) {
    return Math.sqrt(Math.pow(c1.getRed() - c2.getRed(), 2) + Math.pow(c1.getGreen() - c2.getGreen(), 2) + Math.pow(c1.getBlue() - c2.getBlue(), 2)) > 70;
}

public static Color getColor(int x, int y, BufferedImage img) {
    return new Color(img.getRGB(x, y));
}

public static BufferedImage operationDo(BufferedImage img) {
    int numberOfRows = img.getHeight();
    int numberOfColumns = img.getWidth();
    BufferedImage results = new BufferedImage(numberOfColumns, numberOfRows, BufferedImage.TYPE_INT_ARGB);
    for (int y = 0; y < numberOfRows; y++) {
        for (int x = 0; x < numberOfColumns; x++) {
            Color color = new Color(img.getRGB(x, y));
            boolean aboveExists = y > 0;
            boolean belowExists = y < numberOfRows - 1;
            boolean leftExists = x > 0;
            boolean rightExists = x < numberOfColumns - 1;
            if ((aboveExists && tooDifferent(color, getColor(x, y - 1, img))) ||
                    (belowExists && tooDifferent(color, getColor(x, y + 1, img))) ||
                    (leftExists && tooDifferent(color, getColor(x - 1, y, img))) ||
                    (rightExists && tooDifferent(color, getColor(x + 1, y, img)))) {
                results.setRGB(x, y, Color.black.getRGB());
            } else {
                results.setRGB(x, y, Color.white.getRGB());
            }
        }
    }
    return results;
}
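If it helps, roughly the same logic translated back to the Color[][] format from your assignment might look like this (untested sketch, reusing the tooDifferent helper above):
public static Color[][] operationDo(Color[][] imageArray) {
    int numberOfRows = imageArray.length;
    int numberOfColumns = imageArray[0].length;
    Color[][] results = new Color[numberOfRows][numberOfColumns];
    for (int i = 0; i < numberOfRows; i++) {
        for (int j = 0; j < numberOfColumns; j++) {
            Color color = imageArray[i][j];
            boolean aboveExists = i > 0;
            boolean belowExists = i < numberOfRows - 1;
            boolean leftExists = j > 0;
            boolean rightExists = j < numberOfColumns - 1;
            // paint the pixel black if any existing neighbour is "too different", white otherwise
            if ((aboveExists && tooDifferent(color, imageArray[i - 1][j])) ||
                    (belowExists && tooDifferent(color, imageArray[i + 1][j])) ||
                    (leftExists && tooDifferent(color, imageArray[i][j - 1])) ||
                    (rightExists && tooDifferent(color, imageArray[i][j + 1]))) {
                results[i][j] = Color.black;
            } else {
                results[i][j] = Color.white;
            }
        }
    }
    return results;
}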

RGB to HSV values not being shown

I am trying to convert the RGB values of an image into its HSV values. I am trying to do this without using Color.RGBtoHSB because I don't like the float values; I want the numbers to be in the range 0-255. When I run this program my conversion algorithm prints out nothing, even though I ask it to print values.
public void splitChannels() {
    Mat firstImage = Imgcodecs.imread("darkGreen.jpg");
    Imgproc.cvtColor(firstImage, firstImage, Imgproc.COLOR_BGR2RGB);
    int width = 20;
    int height = 20;
    Rect roi = new Rect(100, 100, width, height);
    Mat smallImg = new Mat(firstImage, roi);
    Imgproc.cvtColor(smallImg, smallImg, Imgproc.COLOR_BGR2RGB);
    // 3 channels in smallImg
    int channels = smallImg.channels();
    int totalBytes = (int) (smallImg.total() * smallImg.channels());
    byte[] buff = new byte[totalBytes];
    smallImg.get(0, 0, buff);
    for (int i = 0; i < height; i++) {
        // stride is the number of bytes in a row of smallImg
        int stride = channels * width;
        for (int j = 0; j < stride; j += channels) {
            int r = buff[(i * stride) + j];
            int g = buff[(i * stride) + j + 1];
            int b = buff[(i * stride) + j + 2];
            RGBtoHSV(r, g, b);
        }
    }
}

private int[] RGBtoHSV(int r, int g, int b) {
    int computedH = 0;
    int computedS = 0;
    int computedV = 0;
    int[] HSVarr = new int[3];
    HSVarr[0] = computedH;
    HSVarr[1] = computedS;
    HSVarr[2] = computedV;
    if (r < 0 || g < 0 || b < 0 || r > 255 || g > 255 || b > 255) {
        System.err.println("RGB values must be in range 0 to 255");
    }
    r = r / 255; g = g / 255; b = b / 255;
    int minRGB = Math.min(r, Math.min(g, b));
    int maxRGB = Math.max(r, Math.min(g, b));
    // Black-gray-white
    if (minRGB == maxRGB) {
        computedV = minRGB;
        return HSVarr;
    }
    int d = (r == minRGB) ? g - b : ((b == minRGB) ? r - g : b - r);
    int h = (r == minRGB) ? 3 : ((b == minRGB) ? 1 : 5);
    computedH = 60 * (h - d / (maxRGB - minRGB));
    computedS = (maxRGB = minRGB) / maxRGB;
    computedV = maxRGB;
    System.out.println("H: " + computedH + " V: " + computedS + " S: " + computedV);
    return HSVarr;
}
I am trying to convert RGB values of an image and get the HSV values of it. I am trying to do this without using Color.RGBtoHSB because I don't like the float values.
Create a wrapper method for the Color.RGBtoHSB(...) method to convert the float values to the appropriate int values.
Something like:
import java.awt.*;

public class Main
{
    public static void main(String[] args) throws Exception
    {
        int[] hsb = RGBtoHSB(0, 0, 255);
        for (int value : hsb)
            System.out.println( value );
    }

    public static int[] RGBtoHSB(int red, int green, int blue)
    {
        float[] hsbFloat = Color.RGBtoHSB(red, green, blue, null);
        int[] hsbInt = new int[3];
        hsbInt[0] = Math.round( hsbFloat[0] * 360 );
        hsbInt[1] = Math.round( hsbFloat[1] * 100 );
        hsbInt[2] = Math.round( hsbFloat[2] * 100 );
        return hsbInt;
    }
}
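For example, RGBtoHSB(0, 0, 255) prints 240, 100 and 100: the hue scaled to degrees and the saturation and brightness scaled to percentages. Note this is the 360/100/100 convention rather than the 0-255 range mentioned in the question; adjust the scale factors if you really want all three values in 0-255.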

Applying Mean filter on an image using java

When I give it a picture with salt-and-pepper noise it returns an image that loses all detail, and I don't know what's wrong with my code:
public class Q1 {
    public static void main(String[] args) throws IOException {
        BufferedImage img = ImageIO.read(new File("task1input.png"));
        // get dimensions
        int maxHeight = img.getHeight();
        int maxWidth = img.getWidth();
        // create 2D array for the new picture
        int[][] pictureFile = new int[maxHeight][maxWidth];
        for (int i = 0; i < maxHeight; i++) {
            for (int j = 0; j < maxWidth; j++) {
                pictureFile[i][j] = img.getRGB(j, i);
            }
        }
        int[][] output = new int[maxHeight][maxWidth];
        // apply mean filter
        for (int v = 1; v < maxHeight; v++) {
            for (int u = 1; u < maxWidth; u++) {
                // compute filter result for position (u,v)
                int sum = 0;
                for (int j = -1; j <= 1; j++) {
                    for (int i = -1; i <= 1; i++) {
                        if (u + j >= 0 && v + i >= 0 && u + j < maxWidth && v + i < maxHeight) {
                            int p = pictureFile[v + i][u + j];
                            sum = sum + p;
                        }
                    }
                }
                int q = (int) (sum / 9);
                output[v][u] = q;
            }
        }
        // turn the 2D array back into an image
        BufferedImage theImage = new BufferedImage(
                maxHeight,
                maxWidth,
                BufferedImage.TYPE_INT_RGB);
        int value;
        for (int y = 1; y < maxHeight; y++) {
            for (int x = 1; x < maxWidth; x++) {
                value = output[y][x];
                theImage.setRGB(y, x, value);
            }
        }
        File outputfile = new File("task1output3x3.png");
        ImageIO.write(theImage, "png", outputfile);
    }
}
getRGB "Returns an integer pixel in the default RGB color model (TYPE_INT_ARGB)" so therefore you have to extract the R, G, and B and add them separately; then put them back together. One way is this
int pixel = pictureFile[v + i][u + j];
int rr = (pixel & 0x00ff0000) >> 16, rg = (pixel & 0x0000ff00) >> 8, rb = pixel & 0x000000ff;
sumr += rr;
sumg += rg;
sumb += rb;
then to put them back together
sumr /= 9; sumg /= 9; sumb /= 9;
newPixel = 0xff000000 | (sumr << 16) | (sumg << 8) | sumb;
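Putting it together, the filter loop from the question with per-channel averaging might look roughly like this (a sketch using the pictureFile and output arrays from the question; it divides by the number of in-bounds neighbours, so border pixels are averaged correctly instead of always dividing by 9):
for (int v = 0; v < maxHeight; v++) {
    for (int u = 0; u < maxWidth; u++) {
        int sumr = 0, sumg = 0, sumb = 0, count = 0;
        for (int j = -1; j <= 1; j++) {
            for (int i = -1; i <= 1; i++) {
                if (u + j >= 0 && v + i >= 0 && u + j < maxWidth && v + i < maxHeight) {
                    int pixel = pictureFile[v + i][u + j];
                    sumr += (pixel & 0x00ff0000) >> 16; // red
                    sumg += (pixel & 0x0000ff00) >> 8;  // green
                    sumb += pixel & 0x000000ff;         // blue
                    count++;
                }
            }
        }
        // recombine the averaged channels into one ARGB pixel
        output[v][u] = 0xff000000 | ((sumr / count) << 16) | ((sumg / count) << 8) | (sumb / count);
    }
}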

Edge detection using sobel operator

So I am trying to write a program that uses the Sobel operator to detect edges in an image. Below is my method.
/**
 * Detects edges.
 * @param url - filepath to the image.
 */
private void detect(String url) {
    BufferedImage orgImage = readImage(url);
    int width = orgImage.getWidth();
    int height = orgImage.getHeight();
    BufferedImage resImage = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_BINARY);
    WritableRaster inraster = orgImage.getRaster();
    WritableRaster outraster = resImage.getRaster();
    System.out.println("size: " + width + "X" + height);
    // Loop through every pixel, ignoring the edges as these will throw out of bounds.
    for (int i = 1; i < width - 2; i++) {
        for (int j = 1; j < height - 2; j++) {
            // Compute filter result, looping over a box pattern.
            int sum = 0;
            for (int x = -1; x <= 1; x++) {
                for (int y = -1; y <= 1; y++) {
                    int sum1 = i + y;
                    int sum2 = j + x;
                    int p = inraster.getSample(sum1, sum2, 0);
                    sum = sum + p;
                }
            }
            int q = (int) Math.round(sum / 9.0);
            if (q < 150) {
                q = 0;
            } else {
                q = 255;
            }
            outraster.setSample(i, j, 0, q);
        }
    }
    writeImage(resImage, "jpg", "EdgeDetection " + url);
}
This mostly just gives me a black and white image (before and after screenshots omitted here).
I am obviously calculating the pixel value wrong somehow. I am also not sure what value to use when deciding if the pixel should be black or white.
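For comparison, here is a minimal sketch of an actual Sobel pass over the same grayscale raster (the loop in the question computes a plain 3x3 average, not the Sobel gradient). It reuses width, height, inraster and outraster from the method above, applies the standard Gx/Gy kernels, and keeps the 150 threshold from the question:
int[][] gxKernel = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
int[][] gyKernel = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};
for (int i = 1; i < width - 1; i++) {
    for (int j = 1; j < height - 1; j++) {
        int gx = 0, gy = 0;
        for (int x = -1; x <= 1; x++) {
            for (int y = -1; y <= 1; y++) {
                int p = inraster.getSample(i + x, j + y, 0);
                gx += gxKernel[y + 1][x + 1] * p; // horizontal gradient
                gy += gyKernel[y + 1][x + 1] * p; // vertical gradient
            }
        }
        int magnitude = (int) Math.round(Math.sqrt(gx * gx + gy * gy));
        outraster.setSample(i, j, 0, magnitude > 150 ? 255 : 0);
    }
}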

How to flip an image horizontally

Hi, I was wondering how to flip an image horizontally. For a practice task I was given code that reads an image and converts it into an image indicating its brightness from 0-5, and I had to flip the image.
This is my code for reading an image and drawing it:
public int[][] readImage(String url) throws IOException
{
    // fetch the image
    BufferedImage img = ImageIO.read(new URL(url));
    // create the array to match the dimensions of the image
    int width = img.getWidth();
    int height = img.getHeight();
    int[][] imageArray = new int[width][height];
    // convert the pixels of the image into brightness values
    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < height; y++)
        {
            // get the pixel at (x,y)
            int rgb = img.getRGB(x, y);
            Color c = new Color(rgb);
            int red = c.getRed();
            int green = c.getGreen();
            int blue = c.getBlue();
            // convert to greyscale
            float[] hsb = Color.RGBtoHSB(red, green, blue, null);
            int brightness = (int) Math.round(hsb[2] * (PIXEL_CHARS.length - 1));
            imageArray[x][y] = brightness;
        }
    }
    return imageArray;
}
public void draw() throws IOException
{
    int[][] array = readImage("http://sfpl.org/images/graphics/chicklets/google-small.png");
    for (int i = 0; i < array.length; i++)
    {
        for (int pic = 0; pic < array[i].length; pic++)
        {
            if (array[pic][i] == 0)
            {
                System.out.print("X");
            }
            else if (array[pic][i] == 1)
            {
                System.out.print("8");
            }
            else if (array[pic][i] == 2)
            {
                System.out.print("0");
            }
            else if (array[pic][i] == 3)
            {
                System.out.print(":");
            }
            else if (array[pic][i] == 4)
            {
                System.out.print(".");
            }
            else if (array[pic][i] == 5)
            {
                System.out.print(" ");
            }
            else
            {
                System.out.print("error");
                break;
            }
        }
        System.out.println();
    }
}
and this is the code I tried to write to flip it horizontally:
void mirrorUpDown()
{
    int[][] array = readImage("http://sfpl.org/images/graphics/chicklets/google-small.png");
    int i = 0;
    for (int x = 0; x < array.length; x++)
    {
        for (int y = 0; y < array[i].length; y++)
        {
            {
                int temp = array[x][y];
                array[x][y] = array[-x][y];
                array[array[i].length - x][y] = temp;
            }
        }
    }
}
I get an error:
unreported exception java.io.IOException;
must be caught or declared to be thrown
I'd actually do it this way...
BufferedImage flip(BufferedImage sprite) {
    BufferedImage img = new BufferedImage(sprite.getWidth(), sprite.getHeight(), BufferedImage.TYPE_INT_ARGB);
    for (int xx = sprite.getWidth() - 1; xx >= 0; xx--) {
        for (int yy = 0; yy < sprite.getHeight(); yy++) {
            img.setRGB(sprite.getWidth() - 1 - xx, yy, sprite.getRGB(xx, yy));
        }
    }
    return img;
}
Just a loop whose x starts at the end of the first image and places its rgba value on the flipped position of the second image. Clean, easy code :)
Add a throws IOException to the function mirrorUpDown().
Also check the function from which you are calling these methods: does it handle the exception, i.e. is the call enclosed in a try/catch block, or is that function also declared to throw IOException? One of the two should be there.
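For instance, either of these illustrative sketches would satisfy the compiler:
// Option 1: declare the exception and let the caller deal with it
void mirrorUpDown() throws IOException {
    int[][] array = readImage("http://sfpl.org/images/graphics/chicklets/google-small.png");
    // ... rest of the method
}
// Option 2: handle it locally with try/catch
void mirrorUpDown() {
    try {
        int[][] array = readImage("http://sfpl.org/images/graphics/chicklets/google-small.png");
        // ... rest of the method
    } catch (IOException e) {
        e.printStackTrace();
    }
}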
How is your image supposed to know it should get its data from imageArray?
Instead, you should access the raster of your image and modify the data in it.
void flip(BufferedImage image) {
    WritableRaster raster = image.getRaster();
    int h = raster.getHeight();
    int w = raster.getWidth();
    int x0 = raster.getMinX();
    int y0 = raster.getMinY();
    for (int x = x0; x < x0 + w; x++) {
        for (int y = y0; y < y0 + h / 2; y++) {
            int[] pix1 = new int[3];
            pix1 = raster.getPixel(x, y, pix1);
            int[] pix2 = new int[3];
            pix2 = raster.getPixel(x, y0 + h - 1 - (y - y0), pix2);
            raster.setPixel(x, y, pix2);
            raster.setPixel(x, y0 + h - 1 - (y - y0), pix1);
        }
    }
}
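Note that flip above mirrors the image top to bottom (matching the mirrorUpDown name). For the horizontal flip the question title asks for, the same raster approach with columns swapped instead of rows might look like this (untested sketch):
void flipHorizontally(BufferedImage image) {
    WritableRaster raster = image.getRaster();
    int h = raster.getHeight();
    int w = raster.getWidth();
    int x0 = raster.getMinX();
    int y0 = raster.getMinY();
    for (int y = y0; y < y0 + h; y++) {
        for (int x = x0; x < x0 + w / 2; x++) {
            int[] pix1 = new int[3];
            pix1 = raster.getPixel(x, y, pix1);
            int[] pix2 = new int[3];
            pix2 = raster.getPixel(x0 + w - 1 - (x - x0), y, pix2);
            // swap the pixel with its mirror on the other side of the row
            raster.setPixel(x, y, pix2);
            raster.setPixel(x0 + w - 1 - (x - x0), y, pix1);
        }
    }
}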
Sorry about posting this here over a year later, but it should aid someone at some stage.
try {
    java.awt.image.BufferedImage bi = javax.imageio.ImageIO.read(getClass().getResource("Your image bro.jpg"));
    int[] h = bi.getRGB(0, 0, bi.getWidth(), bi.getHeight(), null, 0, bi.getWidth());
    int[] h1 = new int[h.length];
    System.out.println("" + h.length);
    for (int j = 0; 500 > j; j++) {
        for (int i = 500; i > 0; i--) {
            h1[j * 500 + (500 - i)] = h[(j * 500) + (i - 1)];
        }
    }
    bi.setRGB(0, 0, bi.getWidth(), bi.getHeight(), h1, 0, bi.getWidth());
} catch (Exception e) {
    e.printStackTrace();
}
Let's break the code down.
java.awt.image.BufferedImage bi =javax.imageio.ImageIO.read(getClass().getResource("Your image bro.jpg"));
This tries to read the image and stores it in the BufferedImage variable bi.
int[] h = bi.getRGB(0, 0, bi.getWidth(), bi.getHeight(), null, 0, bi.getWidth());
int [] h1 = new int[h.length];
This instantiates two arrays: h is the original RGB array and h1 will be the horizontally flipped RGB array.
for(int j = 0;500>j;j++){
for(int i = 500;i>0;i--){
h1[j*500+(500-i)] = h[(j*500)+(i-1)];
}
}
Let's look at one line in particular more closely:
h1[j*500+(500-i)] = h[(j*500)+(i-1)];
Images are scanned from position (0,0) to (x.length, y.length), but the pixels are returned in one continuous array. Thus we use pseudo-2D indexing to flip the image: j*500 selects the row (the Y value) and (500-i) selects the X position within that row.
bi.setRGB(0, 0, bi.getWidth(), bi.getHeight(), h1, 0, bi.getWidth());
Finally, the image gets stored back into the BufferedImage variable.
Note that the 500 constant refers to the horizontal resolution of the image. For example, a 1920 x 1080 image would use 1920. The logic is yours to adapt.
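If you'd rather not hardcode the 500, the same loop written against the image's actual dimensions might look like this (sketch, same logic as above):
int w = bi.getWidth();
int hgt = bi.getHeight();
int[] h = bi.getRGB(0, 0, w, hgt, null, 0, w);
int[] h1 = new int[h.length];
for (int j = 0; j < hgt; j++) {
    for (int i = w; i > 0; i--) {
        // copy pixel (i-1) of row j to the mirrored column (w-i)
        h1[j * w + (w - i)] = h[(j * w) + (i - 1)];
    }
}
bi.setRGB(0, 0, w, hgt, h1, 0, w);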
