I'm creating a Google Maps application on Android and I'm facing a problem. I have elevation data in text format that looks like this:
longitude latitude elevation
491222 163550 238.270000
491219 163551 242.130000
etc.
This elevation information is stored on a 10x10 meter grid, i.e. there is one elevation value for every 10 meters. The text file is too large to search directly for the information I need, so I would like to convert it into a bitmap.
What I need to do is, at certain moments, scan the elevation around my location. There can be a lot of points to scan, so I want it to be fast; that's why I'm thinking about a bitmap.
I don't know if it's even possible, but my idea is to have a bitmap the size of my text grid, where every pixel holds the elevation of that grid cell. It would be like an invisible map over the Google map, positioned according to the coordinates, and when I need to know the elevation around my location, I would just look at those pixels and read the elevation values back.
Do you think it is possible to create such a bitmap? I have just this idea but no idea how to implement it, e.g. how to store the elevation information in it, how to read it back, how to create the bitmap. I would be very grateful for any advice, direction, or source you can give me. Thank you so much!
BufferedImage is not available on Android, but android.graphics.Bitmap can be used instead. The bitmap must be saved in a lossless format (e.g. PNG).
double[] elevations = {238.27, 242.1301, 222, 1};
int[] pixels = doublesToInts(elevations);
// encoding
Bitmap bmp = Bitmap.createBitmap(2, 2, Config.ARGB_8888);
bmp.setPixels(pixels, 0, 2, 0, 0, 2, 2);
File file = new File(getCacheDir(), "bitmap.png");
try {
    FileOutputStream fos = new FileOutputStream(file);
    bmp.compress(CompressFormat.PNG, 100, fos);
    fos.close();
} catch (IOException e) {
    e.printStackTrace();
}
// decoding
Bitmap out = BitmapFactory.decodeFile(file.getPath());
if (out != null) {
    int[] outPixels = new int[out.getWidth() * out.getHeight()];
    out.getPixels(outPixels, 0, out.getWidth(), 0, 0, out.getWidth(), out.getHeight());
    double[] outElevations = intsToDoubles(outPixels);
}

static int[] doublesToInts(double[] elevations) {
    int[] out = new int[elevations.length];
    for (int i = 0; i < elevations.length; i++) {
        // scale to integer micrometers, then drop the low 8 bits to make
        // room for a fully opaque alpha in the top byte
        int tmp = (int) (elevations[i] * 1000000);
        out[i] = 0xFF000000 | (tmp >> 8);
    }
    return out;
}

static double[] intsToDoubles(int[] pixels) {
    double[] out = new double[pixels.length];
    for (int i = 0; i < pixels.length; i++) {
        // the left shift pushes the alpha byte out and restores the scale;
        // the lost low byte costs at most ~0.000256 m of precision
        out[i] = (pixels[i] << 8) / 1000000.0;
    }
    return out;
}
Store each elevation as a color with red, green, blue and alpha (opacity/transparency) components. Start with all pixels transparent, and fill in each known value as (R, G, B) with full alpha, the high eight bits, marking it as filled in (or use some other convention for "not filled in").
R, G and B form the lower 24 bits of an integer.
Map longitude and latitude to x and y.
Map elevation to an integer less than 0x01_00_00_00. And vice versa:
double elevation = 238.27;
int code = (int) (elevation * 100); // centimeter precision fits in 24 bits
Color c = new Color(code); // BufferedImage uses int, so 'code' suffices
code = c.getRGB();
elevation = ((double) code) / 100;
Write into a BufferedImage with setRGB(x, y, code) or similar (there are several possibilities). Use Oracle's javadoc, found by googling for BufferedImage and such.
To fill unused pixels, do an averaging pass into a second BufferedImage, so as never to average the original pixels.
P.S. For my Netherlands the elevation might be less than zero, so maybe add an offset.
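To make the lookup concrete, here is a minimal sketch of mapping grid coordinates to pixel indices, assuming a regular 10 m grid with a known origin; the class name, field names and origin values are all hypothetical, not taken from the question's data set:

```java
// Sketch only: the origin constants are made-up illustration values.
public class ElevationGrid {
    static final int ORIGIN_X = 491000; // assumed west edge of the grid
    static final int ORIGIN_Y = 163000; // assumed south edge of the grid
    static final int CELL_SIZE = 10;    // one elevation sample per 10 m

    // column of the pixel that stores the sample taken at coordinate x
    static int toPixelX(int x) {
        return (x - ORIGIN_X) / CELL_SIZE;
    }

    // row of the pixel; bitmap rows grow downward, so invert this index
    // if the grid coordinate grows northward
    static int toPixelY(int y) {
        return (y - ORIGIN_Y) / CELL_SIZE;
    }
}
```

With such a mapping, reading the elevation around a location is just a getPixels() call on the surrounding pixel block, decoded back to doubles as shown above.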
I am working on a Rubik's Cube side scanner to determine what state the cube is in. I am quite new to computer vision, so it has been a bit of a challenge. What I have done so far is use video capture, grab a certain frame, and save it for image processing. Here is what it looks like.
When the photo is taken, the cube is in the same position each time, so I don't have to worry about locating the stickers.
What I am having trouble with is getting a small range of pixels in each square to determine its HSV.
I know the ranges of HSV are roughly
Red = Hue(0...9) AND Hue(151..180)
Orange = Hue(10...15)
Yellow = Hue(16..45)
Green = Hue(46..100)
Blue = Hue(101..150)
White = Saturation(0..20) AND Value(230..255)
So after I have captured and loaded the image, I split it into HSV channels, but I don't know how to access certain pixel coordinates of the image. How do I do so?
BufferedImage getOneFrame() {
    currFrame++;
    // at the 120th frame, capture and save that frame
    if (currFrame == 120) {
        cap.read(mat2Img.mat);
        mat2Img.getImage(mat2Img.mat);
        Imgcodecs.imwrite("firstImage.png", mat2Img.mat);
    }
    cap.read(mat2Img.mat);
    return mat2Img.getImage(mat2Img.mat);
}
public void splitChannels() {
    IplImage firstShot = cvLoadImage("firstImage.png");
    // convert BGR to HSV first; without this, hsv stays uninitialized
    IplImage hsv = IplImage.create(firstShot.width(), firstShot.height(), firstShot.depth(), firstShot.nChannels());
    cvCvtColor(firstShot, hsv, CV_BGR2HSV);
    // split the channels so the pixel values can be inspected per channel
    IplImage hue = IplImage.create(firstShot.width(), firstShot.height(), firstShot.depth(), CV_8UC1);
    IplImage sat = IplImage.create(firstShot.width(), firstShot.height(), firstShot.depth(), CV_8UC1);
    IplImage val = IplImage.create(firstShot.width(), firstShot.height(), firstShot.depth(), CV_8UC1);
    cvSplit(hsv, hue, sat, val, null);
    // How do I get a small range of pixels of my image to determine their HSV?
}
If I understand your question correctly, you know the coordinates of all the areas that interest you. Save the information about each area in CvRect objects.
You can traverse a rectangular area with a double loop: in the outer loop start at rect.y and stop before rect.y + rect.height; in the inner loop do the same in the x direction. Inside the loop, use the CV_IMAGE_ELEM macro to access individual pixel values and compute whatever you need.
One piece of advice, though: there are several advantages to using Mat instead of IplImage when working with OpenCV. I recommend that you start using Mat, unless you have some special reason not to, of course. See the documentation and take a look at the constructor that takes one Mat and one Rect as parameters. This constructor is your good friend: you can create a new Mat object (without copying any data) which contains only the area inside the rectangle.
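As a sketch of that double loop, here the image channel is a plain 2D array standing in for the image (in real JavaCV code the array access would be the CV_IMAGE_ELEM macro and the rectangle a CvRect):

```java
import java.awt.Rectangle;

public class RegionStats {
    // average of the channel values inside rect: outer loop over rows
    // from rect.y, inner loop over columns from rect.x, as described above
    static double averageIn(int[][] channel, Rectangle rect) {
        long sum = 0;
        for (int y = rect.y; y < rect.y + rect.height; y++) {
            for (int x = rect.x; x < rect.x + rect.width; x++) {
                sum += channel[y][x];
            }
        }
        return (double) sum / (rect.width * rect.height);
    }
}
```

Comparing the average hue of each sticker rectangle against the ranges listed in the question then classifies the sticker color.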
I am trying to find the white pixels in a grayscale image and replace them with another color, but when I run my code, the whole grayscale image is changed to another color. Can anyone please tell me where the fault in the code is, or how I can get my desired result?
This is the code...
public class gray {
    public static void main(String args[]) throws IOException {
        int width;
        int height;
        BufferedImage myImage = null;
        File f = new File("E:\\eclipse\\workspace\\Graphs\\src\\ColorToGray\\1.png");
        myImage = ImageIO.read(f);
        width = myImage.getWidth();
        height = myImage.getHeight();
        BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        int pixels[];
        pixels = new int[width * height];
        myImage.getRGB(0, 0, width, height, pixels, 0, width);
        for (int i = 0; i < pixels.length; i++) {
            if (pixels[i] == 0xFFFFFF) {
                pixels[i] = 0x000000FF;
            }
        }
        File f2 = new File("E:\\eclipse\\workspace\\Graphs\\src\\ColorToGray\\out 1.png");
        image.setRGB(0, 0, width, height, pixels, 0, width);
        ImageIO.write(image, "jpg", f2);
    }
}
Image Before:
Image Before Output
Image After:
Image After Output
I looked into it and found a bunch of problems.
First of all, when specifying the filename to save, you supply a ".png" extension, but when you call the ImageIO.write() function, you specify the file type "jpg". That tends not to work very well: if you try to open the resulting file, most programs will give you a "this is not a valid .PNG file" error. Windows Explorer tries to be smart and re-interprets the .PNG as a .JPG, which spared you the chance of discovering your mistake.
This takes care of the strange redness problem.
However, if you specify "png" in ImageIO.write(), you still don't get the right image. One would expect an image that looks mostly like the original, with just a few patches of blue where bright white used to be, but instead what we get is an overall brighter version of the original image.
I do not have enough time to find out what is really wrong with your original image, but I suspect that it is actually a bright image with an alpha mask that makes it look less bright, AND that something in the way the image gets saved strips away the alpha information, hence the apparent added brightness.
So I tried your code with another image that I know has no tricks in it, and still your code did not appear to do anything. It turns out that the ARGB int values you get from myImage.getRGB() have "A" set to 255, which means you need to check for 0xFFFFFFFF, not 0x00FFFFFF.
And of course when you replace a value, you must replace it with 0xFF0000FF, specifying a full alpha value. Replacing a pixel with 0x000000FF has no visible effect, because regardless of the high blue value, the alpha is zero, so the pixel is rendered transparent.
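Putting those two fixes together, the replacement loop would look something like this sketch (the surrounding file handling stays as in the question, except for writing "png" instead of "jpg"):

```java
public class WhiteToBlue {
    // getRGB() packs pixels as ARGB with alpha in the top byte, so opaque
    // white is 0xFFFFFFFF; replace it with opaque blue, 0xFF0000FF
    static int[] replaceWhite(int[] pixels) {
        for (int i = 0; i < pixels.length; i++) {
            if (pixels[i] == 0xFFFFFFFF) {
                pixels[i] = 0xFF0000FF;
            }
        }
        return pixels;
    }
}
```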
I am a newbie in OpenGL programming. I am making a Java program with OpenGL in which I drew many cubes. I now want to implement a screenshot function in my program, but I just can't make it work. The situation is as follows:
I use FPSAnimator to refresh my drawable at 60 fps.
I drew dozens of cubes inside my display.
I added a KeyListener to my panel; when I press the Alt key, the program runs the following method:
public static void exportImage() {
    int[] bb = new int[Constants.PanelSize.width * Constants.PanelSize.height * 4];
    IntBuffer ib = IntBuffer.wrap(bb);
    ib.position(0);
    Constants.gl.glPixelStorei(GL2.GL_UNPACK_ALIGNMENT, 1);
    Constants.gl.glReadPixels(0, 0, Constants.PanelSize.width, Constants.PanelSize.height, GL2.GL_RGBA, GL2.GL_UNSIGNED_BYTE, ib);
    System.out.println(Constants.gl.glGetError());
    ImageExport.savePixelsToPNG(bb, Constants.PanelSize.width, Constants.PanelSize.height, "imageFilename.png");
}
// Constants is a class in which I store all my global variables as static fields
The output in the console was 0, which means no errors, yet when I printed the contents of the buffer they were all zeros, and the output file was only 1 kB.
What should I do? Are there any good suggestions for exporting the screen contents to an image file using OpenGL? I heard that there are several libraries available, but I don't know which one is suitable. Any help is appreciated T_T (please forgive any grammatical mistakes...)
You can do something like this, supposing you are drawing to the default framebuffer:
protected void saveImage(GL4 gl4, int width, int height) {
    try {
        BufferedImage screenshot = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        Graphics graphics = screenshot.getGraphics();
        ByteBuffer buffer = GLBuffers.newDirectByteBuffer(width * height * 4);
        gl4.glReadBuffer(GL_BACK);
        gl4.glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
        for (int h = 0; h < height; h++) {
            for (int w = 0; w < width; w++) {
                graphics.setColor(new Color((buffer.get() & 0xff), (buffer.get() & 0xff),
                        (buffer.get() & 0xff)));
                buffer.get(); // skip the alpha byte
                // glReadPixels delivers rows bottom-up, so invert the row index
                graphics.drawRect(w, height - h - 1, 1, 1);
            }
        }
        BufferUtils.destroyDirectBuffer(buffer);
        File outputfile = new File("D:\\Downloads\\texture.png");
        ImageIO.write(screenshot, "png", outputfile);
    } catch (IOException ex) {
        Logger.getLogger(EC_DepthPeeling.class.getName()).log(Level.SEVERE, null, ex);
    }
}
Essentially you create a BufferedImage and a direct buffer, then use Graphics to write the content of the back buffer into the BufferedImage pixel by pixel.
The extra buffer.get() is needed to consume the alpha byte of each pixel, and the row index must be inverted to flip the image vertically, because glReadPixels returns the bottom row first.
Edit: of course you need to read the pixels at a moment when the framebuffer contains what you are looking for. You have several options:
set a boolean flag and call the code directly from the display method, at the end, once everything you wanted has been rendered
disable automatic buffer swapping, call display() from the key listener, read the back buffer, and re-enable swapping
call from the key listener the same code you would call in display()
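Whichever option you choose, the rows read by glReadPixels arrive bottom-up; if you pack them into an int array instead of drawing per pixel, a small helper can do the vertical flip (a sketch, assuming tightly packed width*height pixels):

```java
public class FlipRows {
    // swap rows in place so the bottom-up glReadPixels output ends up
    // in the top-down order that image files expect
    static int[] flipVertically(int[] pixels, int width, int height) {
        int[] row = new int[width];
        for (int top = 0, bottom = height - 1; top < bottom; top++, bottom--) {
            System.arraycopy(pixels, top * width, row, 0, width);
            System.arraycopy(pixels, bottom * width, pixels, top * width, width);
            System.arraycopy(row, 0, pixels, bottom * width, width);
        }
        return pixels;
    }
}
```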
You could use the Robot class to take a screenshot:
BufferedImage screenshot = new Robot().createScreenCapture(new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()));
ImageIO.write(screenshot, "png", new File("screenshot.png"));
There are two things to consider:
You take the screenshot from the screen, so you should determine the coordinates of your viewport and capture only the part of interest.
Something can sit on top of your viewport (another window), in which case the viewport would be hidden by it; this is unlikely to occur, but it can happen.
When you use buffers with LWJGL, they almost always need to be directly allocated. The OpenGL library doesn't really understand how to interface with Java Arrays™, and in order for the underlying memory operations to work, they need to be applied on natively-allocated (or, in this context, directly allocated) memory.
If you're using LWJGL 3.x, that's pretty simple:
//Check the math, because for an image array, given that Ints are 4 bytes, I think you can just allocate without multiplying by 4.
IntBuffer ib = org.lwjgl.BufferUtils.createIntBuffer(Constants.PanelSize.width * Constants.PanelSize.height);
And if that function isn't available, this should suffice:
//Here you actually *do* have to multiply by 4.
IntBuffer ib = java.nio.ByteBuffer.allocateDirect(Constants.PanelSize.width * Constants.PanelSize.height * 4).asIntBuffer();
And then you do your normal code:
Constants.gl.glPixelStorei(GL2.GL_UNPACK_ALIGNMENT, 1);
Constants.gl.glReadPixels(0, 0, Constants.PanelSize.width, Constants.PanelSize.height, GL2.GL_RGBA, GL2.GL_UNSIGNED_BYTE, ib);
System.out.println(Constants.gl.glGetError());
int[] bb = new int[Constants.PanelSize.width * Constants.PanelSize.height];
ib.get(bb); //Stores the contents of the buffer into the int array.
ImageExport.savePixelsToPNG(bb, Constants.PanelSize.width, Constants.PanelSize.height, "imageFilename.png");
I have got an assignment where I need to validate images. I have two sets of folders: one contains the actual images and the other contains the expected images. These images are of some brands/companies.
Upon initial investigation, I found that the images of each brand have different dimensions but are of the same format, i.e. png.
What I have done so far: upon googling I found the code below, which compares two images. I ran this code for one of the brands and of course the result was false. Then I modified one of the images so that both images have the same dimensions; even then I got the same result.
public void testImage() throws InterruptedException {
    String file1 = "D:\\image\\bliss_url_2.png";
    String file2 = "D:\\bliss.png";
    Image image1 = Toolkit.getDefaultToolkit().getImage(file1);
    Image image2 = Toolkit.getDefaultToolkit().getImage(file2);
    PixelGrabber grab1 = new PixelGrabber(image1, 0, 0, -1, -1, true);
    PixelGrabber grab2 = new PixelGrabber(image2, 0, 0, -1, -1, true);
    int[] data1 = null;
    if (grab1.grabPixels()) {
        int width = grab1.getWidth();
        int height = grab1.getHeight();
        System.out.println("Initial width and height of Image 1:: " + width + ">>" + height);
        grab2.setDimensions(250, 100);
        System.out.println("width and height of Image 1:: " + width + ">>" + height);
        data1 = new int[width * height];
        data1 = (int[]) grab1.getPixels();
        System.out.println("Image 1:: " + data1);
    }
    int[] data2 = null;
    if (grab2.grabPixels()) {
        int width = grab2.getWidth();
        int height = grab2.getHeight();
        System.out.println("width and height of Image 2:: " + width + ">>" + height);
        data2 = new int[width * height];
        data2 = (int[]) grab2.getPixels();
        System.out.println("Image 2:: " + data2.toString());
    }
    System.out.println("Pixels equal: " + java.util.Arrays.equals(data1, data2));
}
I just want to verify that the content of the images is the same, i.e. that the images belong to the same brand, and if not, find out what the differences are.
Please help me with what I should do to make a valid comparison.
Maybe you should not use an external library, assuming it has to be your own work. From this point of view, one way to compare images is to get the average color of the same portion of both images and check whether the results are equal (or very similar, allowing for compression errors etc.).
Let's say we have two images.
Image 1 is 4 pixels (to simplify, each pixel is represented by a single number, but it would really be RGB):
1 2
3 4
[ (1+2+3+4) / 4 = 2.5 ]
Image 2 is twice as big:
1 1 2 2
1 1 2 2
3 3 4 4
3 3 4 4
[ ((4*1)+(4*2)+(4*3)+(4*4)) / 16 = 2.5]
The average pixel value (color) is 2.5 in both images.
(With real pixel colors, compare the R, G and B channels separately; all three should be equal or very close.)
That's the idea. Now, do this computation for each pixel of the smallest image and the corresponding block of pixels of the biggest one (according to the scale difference between the two images).
Hope you'll find a good solution!
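That per-block averaging can be sketched for one channel as follows, where scale is the size ratio between the two images (a real comparison would run this once per RGB channel):

```java
public class BlockAverage {
    // shrink the bigger image by averaging each scale-by-scale block,
    // so it can be compared cell by cell against the smaller image
    static double[][] downscale(int[][] big, int scale) {
        int h = big.length / scale, w = big[0].length / scale;
        double[][] out = new double[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                long sum = 0;
                for (int dy = 0; dy < scale; dy++)
                    for (int dx = 0; dx < scale; dx++)
                        sum += big[y * scale + dy][x * scale + dx];
                out[y][x] = (double) sum / (scale * scale);
            }
        }
        return out;
    }
}
```

On the example above, downscaling image 2 with scale 2 reproduces image 1 exactly.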
The method setDimensions doesn't scale the image; moreover, you shouldn't call it directly (see its javadoc). PixelGrabber is just a grabber for a subset of the pixels in an image. To scale the image, use Image.getScaledInstance(), http://docs.oracle.com/javase/7/docs/api/java/awt/Image.html#getScaledInstance(int,%20int,%20int), for instance.
Even if you get two images of the same size after scaling, you still cannot compare them pixel by pixel, since any scaling algorithm is lossy by nature. That means the only thing you can do is check the "similarity" of the images. I'd suggest taking a look at the great image processing library OpenCV, which has a Java wrapper:
Simple and fast method to compare images for similarity
http://docs.opencv.org/doc/tutorials/introduction/desktop_java/java_dev_intro.html
Is there a way to implement a "Duotone" effect in Java?
A good example of what I'd like to do is here or here.
I guess BandCombineOp might help.
It seems to me I should convert the image to grays first and then apply something like a threshold effect, but I didn't manage to achieve good output.
Also, I don't understand how I can set up the colors for this effect.
float[][] grayMatrix = new float[][] {
    new float[] {0.3f, 0.3f, 0.3f},
    new float[] {0.3f, 0.3f, 0.3f},
    new float[] {0.3f, 0.3f, 0.3f},
};
float[][] duoToneMatrix = new float[][] {
    new float[] {0.1f, 0.1f, 0.1f},
    new float[] {0.2f, 0.2f, 0.2f},
    new float[] {0.1f, 0.1f, 0.1f},
};
BufferedImage src = ImageIO.read(new File("X:\\photoshop_image_effects.jpg"));
WritableRaster srcRaster = src.getRaster();
// make it gray
BandCombineOp bco = new BandCombineOp(grayMatrix, null);
WritableRaster dstRaster = bco.createCompatibleDestRaster(srcRaster);
bco.filter(srcRaster, dstRaster);
// apply duotone
BandCombineOp duoToneBco = new BandCombineOp(duoToneMatrix, null);
WritableRaster dstRaster2 = duoToneBco.createCompatibleDestRaster(dstRaster);
duoToneBco.filter(dstRaster, dstRaster2);
BufferedImage result = new BufferedImage(src.getColorModel(), dstRaster2, src.getColorModel().isAlphaPremultiplied(), null);
ImageIO.write(result, "png", new File("X:\\result_duotone.png"));
My output
From what I can tell you are trying to change the colouring of an image without changing its luminosity. Note the difference from luminance.
Regardless of whether you are aiming for luminance or luminosity, your problem boils down to varying the relative contributions of B, G and R without changing their weighted sum. Your first matrix converts to greyscale by setting B, G and R to the same value, only slightly changing the luminance (.3 + .3 + .3 = .9). To use luminosity instead, use
greyMatrix = (.11, .59, .3,
              .11, .59, .3,
              .11, .59, .3); // note this is for BGR
Then you want to change their relative weighting without changing their weighted sum. First, note that since after greyscale conversion your B, G and R values are all the same, you could replace your matrix with
duoToneMatrix = (0, .3, 0,
                 0, .6, 0,
                 0, .3, 0)
and it would be equivalent. To conserve luminance you need to choose three factors whose sum is 1; those factors go into the duotone matrix. The larger a factor is, the more the image will be tinted with that colour. To preserve luminosity instead, you need three factors fb, fg, fr such that
fb*.11 + fg*.59 + fr*.3 = 1; // again for BGR
You can choose the factors fb, fg, fr to get the tint of your choosing.
Also, note that you can do this with one matrix; just combine the two matrices you already have:
[duoToneMatrix]*[greyMatrix]*vector = ([duoToneMatrix]*[greyMatrix])*vector
Compute the product of duoToneMatrix and greyMatrix (in that order) once, and process the image in a single step.
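That product is a plain 3x3 matrix multiplication, with duoToneMatrix on the left; a sketch of combining the two stages into a single BandCombineOp matrix:

```java
public class CombineMatrices {
    // c = a * b for 3x3 row-major matrices; build one BandCombineOp
    // from the result instead of filtering the raster twice
    static float[][] multiply(float[][] a, float[][] b) {
        float[][] c = new float[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }
}
```

For the matrices in the question, every row of the product ends up proportional to the grey weights, scaled by that row's duotone factor.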