Java OpenCV inRange thresholding function makes my image into three different images?

I am using Java OpenCV and these are the lines that I am executing:
Imgproc.cvtColor(originalImage, hsvImage, Imgproc.COLOR_BGR2HSV);
Core.inRange(hsvImage, low, high, thresholdImage);
low and high are Scalar values (of size 3 each). My original image, as you can see, has 3 channels, but my thresholdImage has only one channel. Why? As a result, when I try to display thresholdImage, I get three small images in my JFrame. How do I fix this?

It turns out that Core.inRange writes a single-channel image into its output argument (the last Mat). So in order to get 3 channels back, I needed to use the Imgproc.cvtColor function to convert the result back to a 3-channel image.
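A minimal sketch of that re-conversion (displayImage is just a name chosen here for illustration; assumes the usual org.opencv.core.Mat and org.opencv.imgproc.Imgproc imports):
// inRange produces a single-channel 8-bit mask; convert it back to 3 channels before displaying
Mat displayImage = new Mat();
Imgproc.cvtColor(thresholdImage, displayImage, Imgproc.COLOR_GRAY2BGR);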

Related

cvtColor in OpenCV creating array of stretched images?

I'm using the Android native sample activity. When I use the cvtColor function, my output is four copies of my input, shrunken and without color. Without the cvtColor call the input is perfect, except of course it has color.
To be more specific, the output is four columns and it is grey. If it were filming a face, the face would be stretched downward and look super long.
cvtColor(input, output, CV_BGR2GRAY);
Given my limited image processing knowledge I have no idea where to begin or what to do next. I am on a Moto X.
Android images are 4-channel, so you need something like:
cv::Mat gray;
cv::cvtColor(input, gray, CV_BGRA2GRAY);  // convert 4-channel color to 1-channel gray
cv::cvtColor(gray, output, CV_GRAY2BGRA); // convert 1-channel gray to 4-channel gray
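If you end up doing this from Java rather than native code, the equivalent with the Java OpenCV bindings would look roughly like this (a sketch; assumes org.opencv.core.Mat and org.opencv.imgproc.Imgproc, with input/output being the same Mats as above):
Mat gray = new Mat();
Mat output = new Mat();
Imgproc.cvtColor(input, gray, Imgproc.COLOR_BGRA2GRAY);  // 4-channel color to 1-channel gray
Imgproc.cvtColor(gray, output, Imgproc.COLOR_GRAY2BGRA); // 1-channel gray back to 4 channels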

How to scale down the size and quality of a BufferedImage?

I'm working on a project, a client-server application named 'remote desktop control'. What I need to do is take a screen capture of the client computer and send this screen capture to the server computer. I would probably need to send 3 to 5 images per second. But considering that sending a BufferedImage directly will be too costly for the process, I need to reduce the size of the images. The image quality need not be lossless.
How can I reduce the byte size of the image? Any suggestions?
You can compress it very easily with GZIP by using a GZIPOutputStream and its input counterpart on the other end of the socket.
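A rough sketch of that idea, assuming you already have a Socket and a screen capture as a BufferedImage (the method and variable names here are illustrative, not from the original answer; needs java.awt.image.BufferedImage, java.io.*, java.net.Socket and java.util.zip.GZIPOutputStream):
// Gzip the raw pixel data of one frame and send it over the socket.
void sendFrame(Socket socket, BufferedImage capture) throws IOException {
    GZIPOutputStream gzip = new GZIPOutputStream(socket.getOutputStream());
    DataOutputStream out = new DataOutputStream(gzip);
    out.writeInt(capture.getWidth());
    out.writeInt(capture.getHeight());
    for (int y = 0; y < capture.getHeight(); y++) {
        for (int x = 0; x < capture.getWidth(); x++) {
            out.writeInt(capture.getRGB(x, y));
        }
    }
    out.flush();
    gzip.finish(); // write the GZIP trailer without closing the socket
}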
Edit:
Also note that you can create delta images for transmission. You can use a "transparent color", for example magic pink (#FF00FF), to indicate that no change was made on that part of the screen. On the other side you can draw the new image over the last one, ignoring these magic pixels.
Note that if the picture already contains this color you can change the real pink pixels to #FF00FE, for example. This is unnoticeable.
Another option is to transmit a 1-bit mask with every image (after painting the no-change pixels an arbitrary color). For this you can use the color that occurs most often in the picture, which results in the best compression ratio (optimal Huffman coding).
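A rough sketch of the delta-image idea above, assuming both frames are the same size and TYPE_INT_RGB (method and variable names are my own):
// Paint unchanged pixels magic pink (0xFF00FF); changed pixels keep their new value.
static BufferedImage deltaImage(BufferedImage prev, BufferedImage curr) {
    int w = curr.getWidth(), h = curr.getHeight();
    BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int p = curr.getRGB(x, y);
            out.setRGB(x, y, p == prev.getRGB(x, y) ? 0xFF00FF : p);
        }
    }
    return out;
}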
Vbence's solution of using a GZIP stream is a good suggestion. The way this is done in most commercial software (Windows Remote Desktop, VNC, etc.) is that only changes to the screen buffer are sent. So you keep a copy on the server of what the client 'sees', and with each consecutive capture you calculate what is different in terms of screen areas. Then you only send these screen areas to the client, along with their top-left coords, width and height, and update the server copy of the client 'view' with just these new areas.
That will MASSIVELY reduce the amount of network data you use. While I have been typing this answer, only 400 or so pixels (20x20) have been changing with each keystroke. On a 1920x1080 screen that is roughly 1/5,000th of the screen, so clearly worth thinking about.
The only expensive part is calculating the 'difference' between one frame and the next. There are plenty of libraries out there to do that; most of them are very mathematical (discrete cosine transform type stuff, way over my head), but it can be done relatively cheaply.
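As a simple illustration (not from the original answer), a naive difference is just the bounding box of all pixels that changed; names and types here are my own, assuming equally sized TYPE_INT_RGB frames and java.awt.Rectangle:
// Returns the bounding rectangle of all differing pixels, or null if nothing changed.
static Rectangle changedRegion(BufferedImage prev, BufferedImage curr) {
    int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE, maxX = -1, maxY = -1;
    for (int y = 0; y < curr.getHeight(); y++) {
        for (int x = 0; x < curr.getWidth(); x++) {
            if (curr.getRGB(x, y) != prev.getRGB(x, y)) {
                minX = Math.min(minX, x);
                minY = Math.min(minY, y);
                maxX = Math.max(maxX, x);
                maxY = Math.max(maxY, y);
            }
        }
    }
    return maxX < 0 ? null : new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1);
}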
See this thread for how to encode to JPG with controllable compression/quality. The slider on the left is used to control the level.
Ultimately it would be better to encode the images directly to a video codec that can be streamed, but I am a little hazy on the details.
One way would be to use the ImageIO API:
ImageIO.write(buffimg, "jpg", new File("buffimg.jpg"));
As for the quality and other parameters, I'm not sure, but it should be possible; just dig deeper.
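For controlling the JPEG quality specifically, a sketch along these lines should work with the standard ImageIO/ImageWriter API (the 0.5f quality value is just an example; needs javax.imageio.* and javax.imageio.stream.ImageOutputStream):
ImageWriter writer = ImageIO.getImageWritersByFormatName("jpg").next();
ImageWriteParam param = writer.getDefaultWriteParam();
param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
param.setCompressionQuality(0.5f); // 0.0f = smallest file, 1.0f = best quality
ImageOutputStream out = ImageIO.createImageOutputStream(new File("buffimg.jpg"));
writer.setOutput(out);
writer.write(null, new IIOImage(buffimg, null, null), param);
out.close();
writer.dispose();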

Working with DrJava - How can I load and alter a jpeg?

I'm a complete beginner to programming and I've been trying to figure this out for a while, but I'm lost. There are a few different versions of the question, but I think I can figure the rest out after I have one finished piece of code, so I'm just going to explain the one. The first part asks to write a program using DrJava that will display an image, wait for a user response, and then reduce the image to have only 4 levels per color channel. It goes on to say this:
"What we want to do is reduce each color channel from the range 0-255 (8 bits) to the range 0-3 (2 bits). We can do this by dividing the color channel value by 64. However, since our actual display still uses 1 byte per color channel, a values 0-3 will all look very much like black (very low color intensity). To make it look right, we need to scale the values back up to the original range (multiply by 64). Note that, if integer division is used, this means that only 4 color channel values will occur: 0, 64, 128 and 192, imitating a 2-bit color palate."
I don't even get where I'm supposed to put the picture and get it to load from. Basically I need it explained like I'm five. Thanks in advance!
Java API documentation will be your best resource.
You can read a BufferedImage via the function ImageIO.read(File).
BufferedImage is an Image, so you can display it as part of a JLabel or JButton.
A BufferedImage can be created with different ColorModels: RGB, BGR, ARGB, one byte per colour, indexed colours and so on. Here you want to copy one BufferedImage to another with a different ColorModel.
Basically you can create a new BufferedImage with the differing ColorModel and call:
Graphics g = otherImg.getGraphics();
g.drawImage(originalImg, ...);
ImageIO.write(otherImg, ...);
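To tie it to the assignment text, here is a rough sketch of the divide-by-64 / multiply-by-64 reduction described in the question, done in place on the loaded image (the file names are only examples):
BufferedImage img = ImageIO.read(new File("picture.jpg"));
for (int y = 0; y < img.getHeight(); y++) {
    for (int x = 0; x < img.getWidth(); x++) {
        int rgb = img.getRGB(x, y);
        int r = ((rgb >> 16) & 0xFF) / 64 * 64; // integer division leaves only 0, 64, 128 or 192
        int g = ((rgb >> 8) & 0xFF) / 64 * 64;
        int b = (rgb & 0xFF) / 64 * 64;
        img.setRGB(x, y, (rgb & 0xFF000000) | (r << 16) | (g << 8) | b);
    }
}
ImageIO.write(img, "jpg", new File("reduced.jpg"));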

What values of an image should I use to produce a Haar wavelet?

I currently have a Java program that will get the RGB values for each of the pixels in an image. I also have a method to calculate a Haar wavelet on a 2D matrix of values. However, I don't know which values I should give to the method that calculates the Haar wavelet. Should I average each pixel's RGB values and compute a Haar wavelet on that? Or maybe just use one of R, G, B?
I am trying to create a unique fingerprint for an image. I read elsewhere that this was a good method as I can take the dot product of 2 wavelets to see how similar the images are to each other.
Please let me know what values I should be computing a Haar wavelet on.
Thanks,
Jess
You should regard the R/G/B components as different images: create one matrix each for R, G and B, then apply the wavelet to each of those independently.
You then reconstruct the R/G/B images from the 3 wavelet-compressed channels and finally combine those into a 3-channel bitmap.
Since eznme didn't answer your question (you want fingerprints; he explains compression and reconstruction), here's a method you'll often come across:
You separate color and brightness information (chrominance and luma), and weigh them differently. Sometimes you'll even throw away the chrominance and just use the luma part. This reduces the size of your fingerprint significantly (~factor three) and takes into account how we perceive an image - mainly by local brightness, not by absolute color. As a bonus you gain some robustness concerning color manipulation of the image.
The separation can be done in different ways, e.g. transforming your RGB image to YUV or YIQ color space. If you only want to keep the luma component, these two color spaces are equivalent. However, they encode the chrominance differently.
Here's the linear transformation for the luma Y from RGB:
Y = 0.299*R + 0.587*G + 0.114*B
When you take a look at the mathematics, you notice that we're doing nothing else than creating a grayscale image, taking into account that we perceive green as brighter than red, and red as brighter than blue, when they are all numerically equal.
In case you want to keep a bit of chrominance information, in order to keep your fingerprint as concise as possible, you could reduce the resolution of the two U, V components (each actually 8 bits). So you could join them both into one 8-bit value by reducing their information to 4 bits each and combining them with the shift operator (see the sketch below for one way to do that in Java). The chrominance should weigh less than the luma in the final fingerprint-distance calculation (the dot product you mentioned).
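As a rough illustration of that packing (the U/V formulas, the +128 offsets and the clamping are my own assumptions, following BT.601; only the Y formula comes from the answer above):
static double luma(int r, int g, int b) {
    return 0.299 * r + 0.587 * g + 0.114 * b;      // the formula above
}

// Quantize U and V to 4 bits each and pack them into one byte with shifts.
static int packedChroma(int r, int g, int b) {
    double y = luma(r, g, b);
    int u = clamp((int) (0.492 * (b - y)) + 128);  // BT.601 chroma, offset into 0..255
    int v = clamp((int) (0.877 * (r - y)) + 128);
    return ((u >> 4) << 4) | (v >> 4);             // top 4 bits of U, top 4 bits of V
}

static int clamp(int x) {
    return Math.max(0, Math.min(255, x));
}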

Bitwise operations on a png and bmp give different results? (Same 32 bit ARGB representation)

I'm trying to replicate some image filtering software on the Android platform. The desktop version works with BMPs but crashes out on PNG files.
When I come to XOR two pictures (the 32-bit ints of each corresponding pixel) I get very different results from the two pieces of software.
I'm sure my code isn't wrong as it's such a simple task, but here it is:
static final int aMask = 0xFF000000;

int xOrPixels(int p1, int p2) {
    return aMask | (p1 ^ p2);
}
The definition for the JAI library used by the Java desktop software can be found here and states:
The destination pixel values are defined by the pseudocode:
dst[x][y][b] = srcs[0][x][y][b] ^ srcs[1][x][y][b];
where b is the band (i.e. R, G, B).
Any thoughts? I have a similar problem with AND and OR.
Here is an image with the two source images XORed at the bottom on Android using a PNG. The same file as a bitmap, XORed, gives me a bitmap filled with 0xFFFFFFFF (white), no pixels at all. I checked the binary values of the Android app and it seems right to me....
Gav
NB: When I say (same 32-bit ARGB representation) I mean that Android allows you to decode a PNG file to this format. Whilst this might give room for some error (is PNG lossless?), I get completely different colours in the output.
I checked a couple of values from your screenshot.
The input pixels:
Upper left corners, 0xc3cbce^0x293029 = 0xeafbe7
Nape of the neck, 0xbdb221^0x424dd6 = 0xfffff7
are very similar to the corresponding output pixels.
Looks to me like you are XORing two images that are closely related (inverted in each color channel), so, necessarily, the output is near 0xffffff.
If you were to XOR two dissimilar images, perhaps you will get something more like what you expect.
The question is, why do you want to XOR pixel values?
The PNG could have the wrong gamma or color space, and it's getting converted on load, affecting the result. Some versions of Photoshop had a bug where they saved PNGs with the wrong gamma.
What are you doing prior to the code posted?
PNG is a compressed format, using the deflate algorithm (See Section 5 of RFC2083), so if you're just doing binary reads, you're not looking at actual pixels.
