Android OpenCV Create Rect - java

Good evening, I need some help getting a square area of RGB pixels with OpenCV.
I'm trying to get the pixels from a 50x50 area at the center of the screen, but haven't had any success: if I read from the cloned output I don't get any pixel back, and if I read from the original Mat (image) it picks up the top corner instead of the center.
Please, someone help.
@Override
public void onPreviewFrame(Mat image) {
    int w = image.cols();
    int h = image.rows();
    Mat roi = new Mat(image, new Rect(h/2, w/2, 50, 50));
    Mat output = roi.clone();
    int rw = output.cols();
    int rh = output.rows();
    //double[] arrp = output.get(rh, rw); // if I use the output I don't get any pixel
    double[] arrp = image.get(rh, rw);    // if I use the original Mat it reads near the top corner
    Log.i("", (int) arrp[2] + " " + (int) arrp[1] + " " + (int) arrp[0]);
}

I assume you can use copyTo. Note that Rect takes the x (column) offset first, so your offsets were swapped, and you need to subtract half the ROI size to actually center it:
Mat output(50, 50, image.type());
image(Rect(Point(w/2 - 25, h/2 - 25), output.size())).copyTo(output);
The matrix class has an operator (const Rect &roi) that returns a submatrix (without a deep data copy): http://docs.opencv.org/modules/core/doc/basic_structures.html#id6
EDIT:
Didn't see it was for Android. The Java bindings have no such operator, so you have to use the submat method:
image.submat(new Rect(w/2 - 25, h/2 - 25, 50, 50)).copyTo(output);
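For completeness, here is a minimal sketch of the corrected callback (plain OpenCV Java bindings, keeping the question's signature). It also fixes the out-of-bounds read: get(rh, rw) is one past the last valid index, since valid indices run from 0 to rows-1 / cols-1, which is why it returned no pixel.

@Override
public void onPreviewFrame(Mat image) {
    int w = image.cols();
    int h = image.rows();
    // 50x50 ROI actually centered on the frame; clone() detaches it from image.
    Mat output = image.submat(new Rect(w / 2 - 25, h / 2 - 25, 50, 50)).clone();
    // Sample a pixel that exists, e.g. the center of the ROI.
    double[] bgr = output.get(output.rows() / 2, output.cols() / 2);
    Log.i("ROI", (int) bgr[2] + " " + (int) bgr[1] + " " + (int) bgr[0]);
}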

Related

Best parameters for pupil detection using hough? java opencv

--------------read edit below---------------
I am trying to detect the edges of the pupils and irises within various images. I am altering parameters and such, but I can only ever manage to get one iris/pupil outline correct, or I get unnecessary outlines in the background, or none at all. Are there specific parameters I should try in order to get the correct outlines? Or is there a way to crop the image down to just the eyes, so the system can focus on that part?
This is my UPDATED method:
private void findPupilIris() throws IOException {
    // converts and saves image in grayscale
    Mat newimg = Imgcodecs.imread("/Users/.../pic.jpg");
    Mat des = new Mat(newimg.rows(), newimg.cols(), newimg.type());
    Mat norm = new Mat();
    Imgproc.cvtColor(newimg, des, Imgproc.COLOR_BGR2HSV);
    List<Mat> hsv = new ArrayList<Mat>();
    Core.split(des, hsv);
    Mat v = hsv.get(2); // the V channel serves as the grayscale version
    Imgcodecs.imwrite("/Users/Lisa-Maria/Documents/CapturedImages/B&Wpic.jpg", v); // imwrite only writes Mats
    CLAHE clahe = Imgproc.createCLAHE(2.0, new Size(8, 8));
    clahe.apply(v, v);
    // Imgproc.GaussianBlur(v, v, new Size(9,9), 3);  // adds left pupil boundary and a stray circle on 'a'
    // Imgproc.GaussianBlur(v, v, new Size(9,9), 13); // adds right outer iris boundary and a stray circle on 'a'
    Imgproc.GaussianBlur(v, v, new Size(9, 9), 7);    // adds left outer iris boundary and a stray circle by the hair
    // Imgproc.GaussianBlur(v, v, new Size(7,7), 15);
    Core.addWeighted(v, 1.5, v, -0.5, 0, v);
    Imgcodecs.imwrite("/Users/.../after.jpg", v);
    if (v != null) {
        Mat circles = new Mat();
        Imgproc.HoughCircles(v, circles, Imgproc.CV_HOUGH_GRADIENT, 2, v.rows(), 100, 20, 20, 200);
        List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
        System.out.println("circles.cols() " + circles.cols());
        if (circles.cols() > 0) {
            System.out.println("1");
            for (int x = 0; x < circles.cols(); x++) {
                System.out.println("2");
                double[] vCircle = circles.get(0, x);
                if (vCircle == null) {
                    break;
                }
                Point pt = new Point(Math.round(vCircle[0]), Math.round(vCircle[1]));
                int radius = (int) Math.round(vCircle[2]);
                // draw the found circle
                Imgproc.circle(v, pt, radius, new Scalar(255, 0, 0), 2);
                // Imgproc.circle(des, pt, radius/3, new Scalar(225,0,0), 2); // pupil
                Imgcodecs.imwrite("/Users/.../Houghpic.jpg", v);
                // draw the mask: white circle on black background
                // Mat mask = new Mat(new Size(des.cols(), des.rows()), CvType.CV_8UC1);
                // Imgproc.circle(mask, pt, radius, new Scalar(255,0,0), 2);
                // des.copyTo(des, mask);
                // Imgcodecs.imwrite("/Users/..../mask.jpg", des);
                Imgproc.logPolar(des, norm, pt, radius, Imgproc.WARP_FILL_OUTLIERS);
                Imgcodecs.imwrite("/Users/..../Normalised.jpg", norm);
            }
        }
    }
}
Result: [Hough circles output image]
Following the discussion in the comments, I am posting a general answer with some results I got on the worst-case image uploaded by the OP.
Note: the code I am posting is in Python, since it is the fastest for me to write.
Step 1. As you ask for a way to crop the image so as to focus on the eyes only, you might want to look at Face Detection. Since the image essentially only requires finding the eyes, I did the following:
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')
eyes = eye_cascade.detectMultiScale(v)  # v is the value channel of the HSV image
# "eyes" holds the rectangles where eyes were detected, each as [x, y, w, h]
# Just for drawing
cv2.rectangle(v, (x1, y1), (x1 + w1, y1 + h1), (0, 255, 0), 2)
cv2.rectangle(v, (x2, y2), (x2 + w2, y2 + h2), (0, 255, 0), 2)
Now, once you have the bounding rectangles, you can crop them from the image like:
crop_eye1 = v[y1:y1 + h1, x1:x1 + w1]
crop_eye2 = v[y2:y2 + h2, x2:x2 + w2]
After you obtain the rectangles, I would suggest looking into color spaces other than RGB/BGR, HSV/Lab/Luv in particular:
"Because the R, G, and B components of an object's color in a digital image are all correlated with the amount of light hitting the object, and therefore with each other, image descriptions in terms of those components make object discrimination difficult. Descriptions in terms of hue/lightness/chroma or hue/lightness/saturation are often more relevant."
Then, once you have the eyes, it's time to equalize the contrast of the image. I suggest using CLAHE and playing with the clipLimit and tileGridSize parameters. Here is some code I implemented a while back in Java:
private static Mat clahe(Mat image, int clipLimit, Size size) {
    CLAHE clahe = Imgproc.createCLAHE();
    clahe.setClipLimit(clipLimit);
    clahe.setTilesGridSize(size);
    Mat destImage = new Mat();
    clahe.apply(image, destImage);
    return destImage;
}
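Hypothetical usage of the helper above, with the same starting parameters used earlier in the thread (a clip limit of 2 and an 8x8 tile grid are common defaults, not tuned values):
Mat equalized = clahe(v, 2, new Size(8, 8));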
Once you are satisfied, you should sharpen the image so that HoughCircles is robust. You should look at unsharp masking. Here is the Java implementation of an unsharp mask I wrote:
private static Mat unsharpMask(Mat inputImage, Size size, double sigma) {
    // Make sure inputImage is grayscale.
    Mat sharpenedImage = new Mat(inputImage.rows(), inputImage.cols(), inputImage.type());
    Mat blurredImage = new Mat(inputImage.rows(), inputImage.cols(), inputImage.type());
    Imgproc.GaussianBlur(inputImage, blurredImage, size, sigma);
    Core.addWeighted(inputImage, 2.0D, blurredImage, -1.0D, 0.0D, sharpenedImage);
    return sharpenedImage;
}
Alternatively, you could use a bilateral filter, which is edge-preserving smoothing, or read through this for defining a custom kernel for sharpening the image.
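If it helps, here is a rough sketch of those two alternatives with the OpenCV Java bindings; the diameter, sigmas, and kernel values are illustrative defaults, not tuned ones:

Mat smoothed = new Mat();
// Edge-preserving smoothing; bilateralFilter must not run in place.
Imgproc.bilateralFilter(v, smoothed, 9, 75, 75);
// A classic 3x3 sharpening kernel applied with filter2D.
Mat kernel = new Mat(3, 3, CvType.CV_32F);
kernel.put(0, 0,
        0, -1,  0,
       -1,  5, -1,
        0, -1,  0);
Mat sharpened = new Mat();
Imgproc.filter2D(smoothed, sharpened, -1, kernel);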
Hope it helps and best of luck!

Getting HSV values of pixels in image OpenCV

I am working on a Rubik's Cube side scanner to determine what state the cube is in. I am quite new to computer vision, so it has been a bit of a challenge. What I have done so far is run a video capture, and at a certain frame I capture that frame and save it for image processing. Here is what it looks like.
When the photo is taken, the cube is in the same position each time, so I don't have to worry about locating the stickers.
What I am having trouble with is getting a small range of pixels in each square to determine its HSV.
I know the ranges of HSV are roughly
Red = Hue(0...9) AND Hue(151..180)
Orange = Hue(10...15)
Yellow = Hue(16..45)
Green = Hue(46..100)
Blue = Hue(101..150)
White = Saturation(0..20) AND Value(230..255)
So after I have captured the image I then load it and split the HSV values of the image but don't know how to get the certain pixel coordinates of the image. How do I do so?
BufferedImage getOneFrame() {
    currFrame++;
    // At the 120th frame, capture it and save it to disk
    if (currFrame == 120) {
        cap.read(mat2Img.mat);
        mat2Img.getImage(mat2Img.mat);
        Imgcodecs.imwrite("firstImage.png", mat2Img.mat);
    }
    cap.read(mat2Img.mat);
    return mat2Img.getImage(mat2Img.mat);
}

public void splitChannels() {
    IplImage firstShot = cvLoadImage("firstImage.png");
    // Split the channels so that I can determine the values in a pixel range
    IplImage hsv = IplImage.create(firstShot.width(), firstShot.height(), firstShot.depth(), firstShot.nChannels());
    IplImage hue = IplImage.create(firstShot.width(), firstShot.height(), firstShot.depth(), CV_8UC1);
    IplImage sat = IplImage.create(firstShot.width(), firstShot.height(), firstShot.depth(), CV_8UC1);
    IplImage val = IplImage.create(firstShot.width(), firstShot.height(), firstShot.depth(), CV_8UC1);
    cvCvtColor(firstShot, hsv, CV_BGR2HSV); // convert before splitting; hsv was never filled in the original
    cvSplit(hsv, hue, sat, val, null);
    // How do I get a small range of pixels from my image to determine their HSV?
}
If I understand your question correctly, you know the coordinates of all the areas that interest you. Save the information about each area in cvRect objects.
You can traverse a rectangle with a double loop: the outer loop runs from rect.y up to (but not including) rect.y + rect.height, and the inner loop does the same in the x direction. Inside the loop, use the CV_IMAGE_ELEM macro to access individual pixel values and compute whatever you need.
One piece of advice, though: there are several advantages to using Mat instead of IplImage when working with OpenCV, so I recommend switching to Mat unless you have a special reason not to. Take a look at the documentation, in particular the constructor that takes one Mat and one Rect as parameters. That constructor is your good friend: it creates a new Mat object (without copying any data) that contains only the area inside the rectangle.
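To make that concrete, here is a minimal sketch in the plain OpenCV Java API (the patch coordinates are hypothetical placeholders; you would use one patch per sticker):

Mat bgr = Imgcodecs.imread("firstImage.png");
Mat hsvMat = new Mat();
Imgproc.cvtColor(bgr, hsvMat, Imgproc.COLOR_BGR2HSV);
// Mat(Mat, Rect) creates a view of a small 10x10 patch inside one sticker, no copy.
Rect patch = new Rect(120, 80, 10, 10); // x, y, width, height of the sample area
Mat sticker = new Mat(hsvMat, patch);
// Average H, S and V over the patch, then classify with the ranges listed above
// (OpenCV stores hue as 0..179 for 8-bit images).
Scalar mean = Core.mean(sticker);
double hue = mean.val[0], sat = mean.val[1], val = mean.val[2];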

OpenCV - Ellipse not showing at all

I want to make an ellipse mask for cropping an image so that only the contents inside the ellipse are shown.
Could you inspect my code?
public static Mat cropImage(Mat imageOrig, MatOfPoint contour) {
    Rect rect = Imgproc.boundingRect(contour);
    MatOfPoint2f contour2f = new MatOfPoint2f(contour.toArray());
    RotatedRect boundElps = Imgproc.fitEllipse(contour2f);
    Mat out = imageOrig.submat(rect);
    // the line function is working
    Imgproc.line(out, new Point(0, 0), new Point(out.width(), out.height()), new Scalar(0, 0, 255), 5);
    // but not this one
    Imgproc.ellipse(out, boundElps, new Scalar(255, 0, 0), 99);
    return out;
} // cropImage
It seems like the ellipse is not drawn at all. You can see the test line I draw, so I know I'm working on the right image, but there is no ellipse.
Here's a sample output of my cropImage function.
TIA
You're computing the ellipse coordinates in the imageOrig coordinate system, but showing it on the cropped out image.
If you want to show the ellipse on the crop, you need to translate the ellipse center to account for the shift introduced by the crop (the top-left corner of rect), something like:
boundElps.center.x -= rect.x; boundElps.center.y -= rect.y;
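Putting it together, a sketch of the corrected method (same logic as yours, with the translation applied before drawing; center is a public field of RotatedRect in the Java bindings):

public static Mat cropImage(Mat imageOrig, MatOfPoint contour) {
    Rect rect = Imgproc.boundingRect(contour);
    MatOfPoint2f contour2f = new MatOfPoint2f(contour.toArray());
    RotatedRect boundElps = Imgproc.fitEllipse(contour2f);
    Mat out = imageOrig.submat(rect);
    // Shift the ellipse center from image coordinates into crop coordinates.
    boundElps.center.x -= rect.x;
    boundElps.center.y -= rect.y;
    Imgproc.ellipse(out, boundElps, new Scalar(255, 0, 0), 2);
    return out;
}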
You can try this out:
RotatedRect rRect = Imgproc.minAreaRect(contour2f);
Imgproc.ellipse(out, rRect, new Scalar(255, 0, 0), 3);
You should check the minimum requirements for using fitEllipse, as shown in this post: the function fitEllipse requires at least 5 points.
Note: although the reference I mention is for Python, the same applies in Java; a Java sketch follows the Python snippet below.
for cnt in contours:
    area = cv2.contourArea(cnt)
    # Probably this can help, but it is not required
    if area < 2000 or area > 4000:
        continue
    # This is the check I'm referring to
    if len(cnt) < 5:
        continue
    ellipse = cv2.fitEllipse(cnt)
    cv2.ellipse(roi, ellipse, (0, 255, 0), 2)
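For reference, a hedged Java translation of the check above, assuming contours is a List<MatOfPoint> from Imgproc.findContours and roi is the Mat being drawn on:

for (MatOfPoint cnt : contours) {
    double area = Imgproc.contourArea(cnt);
    if (area < 2000 || area > 4000) continue; // optional size filter
    if (cnt.total() < 5) continue;            // fitEllipse needs at least 5 points
    RotatedRect ellipse = Imgproc.fitEllipse(new MatOfPoint2f(cnt.toArray()));
    Imgproc.ellipse(roi, ellipse, new Scalar(0, 255, 0), 2);
}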
Hope it helps!

How to create a bitmap with information in every pixel?

I'm creating a Google Maps application on Android and I'm facing a problem. I have elevation data in text format. It looks like this:
longitude latitude elevation
491222 163550 238.270000
491219 163551 242.130000
etc.
The elevation information is stored on a 10x10 meter grid, meaning there is one elevation value for every 10 meters. The text file is too large to search directly for the information I need, so I want to encode it as a bitmap.
At certain moments I need to scan the elevation around my location. There can be a lot of points to scan and I want it to be quick; that's why I'm thinking about a bitmap.
I don't know if it's even possible, but my idea is a bitmap the size of my text grid where every pixel carries the elevation information. It would act like an invisible layer over the Google map, placed according to the coordinates, and when I need the elevation around my location I would just look at those pixels and read off the elevation values.
Do you think it is possible to create such a bitmap? I have just this idea but no idea how to implement it, e.g. how to store the elevation information in it, how to read it back, how to create the bitmap. I would be very grateful for any advice, direction, or source you can give me. Thank you so much!
BufferedImage is not available on Android, but android.graphics.Bitmap can be used. The bitmap must be saved in a lossless format (e.g. PNG).
double[] elevations = {238.27, 242.1301, 222, 1};
int[] pixels = doublesToInts(elevations);

// encoding
Bitmap bmp = Bitmap.createBitmap(2, 2, Config.ARGB_8888);
bmp.setPixels(pixels, 0, 2, 0, 0, 2, 2);
File file = new File(getCacheDir(), "bitmap.png");
try {
    FileOutputStream fos = new FileOutputStream(file);
    bmp.compress(CompressFormat.PNG, 100, fos);
    fos.close();
} catch (IOException e) {
    e.printStackTrace();
}

// decoding
Bitmap out = BitmapFactory.decodeFile(file.getPath());
if (out != null) {
    int[] outPixels = new int[out.getWidth() * out.getHeight()];
    out.getPixels(outPixels, 0, out.getWidth(), 0, 0, out.getWidth(), out.getHeight());
    double[] outElevations = intsToDoubles(outPixels);
}

static int[] doublesToInts(double[] elevations) {
    int[] out = new int[elevations.length];
    for (int i = 0; i < elevations.length; i++) {
        int tmp = (int) (elevations[i] * 1000000);
        out[i] = 0xFF000000 | tmp >> 8; // keep the top 24 bits of the fixed-point value, force opaque alpha
    }
    return out;
}

static double[] intsToDoubles(int[] pixels) {
    double[] out = new double[pixels.length];
    for (int i = 0; i < pixels.length; i++)
        out[i] = (pixels[i] << 8) / 1000000.0; // the << 8 shifts the alpha byte back out
    return out;
}
Store each value as a color with red, green, blue, and alpha (opacity/transparency) components. Start with all pixels transparent, and fill in each known value as an opaque (R, G, B) triple; the high eight bits (the alpha) then mark "filled in" (or pick another convention for "not filled in").
RGB forms the lower 24 bits of an integer.
Map longitude and latitude to x and y, and elevation to an integer less than 0x01000000. And vice versa:
double elevation = 238.27;
int code = (int) (elevation * 100);
Color c = new Color(code); // BufferedImage uses int, so 'code' suffices
code = c.getRGB();
elevation = ((double) code) / 100;
Write into a BufferedImage with setRGB(x, y, code) or similar (there are several possibilities). Use Oracle's Javadoc; search for BufferedImage and related classes.
To fill unused pixels, do an averaging pass into a second BufferedImage, so that you never average already-averaged pixels into the originals.
P.S. Here in the Netherlands elevation can be less than zero, so you may need to add an offset.
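A minimal desktop-Java sketch of this scheme, assuming a fixed-point encoding of centimeters in the lower 24 bits (the class and method names are illustrative):

import java.awt.image.BufferedImage;

public class ElevationCodec {
    // Pack an elevation in meters into an opaque ARGB pixel (two decimals survive);
    // negative elevations would need an offset, as the P.S. above notes.
    static int encode(double elevationMeters) {
        int code = (int) Math.round(elevationMeters * 100); // centimeters fit in 24 bits
        return 0xFF000000 | (code & 0x00FFFFFF);            // opaque alpha marks "filled in"
    }

    static double decode(int argb) {
        return (argb & 0x00FFFFFF) / 100.0;
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(10, 10, BufferedImage.TYPE_INT_ARGB);
        img.setRGB(3, 4, encode(238.27));
        System.out.println(decode(img.getRGB(3, 4))); // prints 238.27
    }
}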

Using warpPerspective() on a sequence of points given by HoughCircles(), OpenCV

I'm trying to detect the positions of billiards balls on a table from an image taken at a perspective angle. I'm using the getPerspectiveTransform() method to find the transformation matrix, and I want to apply it only to the circles I detect using HoughCircles. I'm trying to go from a rather large trapezoidal shape to a smaller rectangular shape. I don't want to do the transformation on the image first and then find the HoughCircles, because the warped image is too distorted for HoughCircles to provide useful results.
Here's my code:
CvMat mmat = cvCreateMat(3, 3, CV_32FC1);
double srcX1 = 462;
double srcX2 = 978;
double srcX3 = 1440;
double srcX4 = 0;
double srcY = 241;
double srcHeight = 772;
double dstX = 56.8;
double dstY = 33.5;
double dstWidth = 262.4;
double dstHeight = 447.3;

CvSeq seq = cvHoughCircles(newGray, circles, CV_HOUGH_GRADIENT, 2.1d, (double) newGray.height() / 40, 85d, 65d, 5, 50);
JavaCV.getPerspectiveTransform(
        new double[]{srcX1, srcY, srcX2, srcY, srcX3, srcHeight, srcX4, srcHeight},
        new double[]{dstX, dstY, dstWidth, dstY, dstWidth, dstHeight, dstX, dstHeight}, mmat);
cvWarpPerspective(seq, seq, mmat);

for (int j = 0; j < seq.total(); j++) {
    CvPoint3D32f point = new CvPoint3D32f(cvGetSeqElem(seq, j));
    float[] xyr = {point.x(), point.y(), point.z()};
    CvPoint center = new CvPoint(Math.round(xyr[0]), Math.round(xyr[1]));
    int radius = Math.round(xyr[2]);
    cvCircle(gray, center, 3, CvScalar.GREEN, -1, 8, 0);
    cvCircle(gray, center, radius, CvScalar.BLUE, 3, 8, 0);
}
The problem is I get this error on the warpPerspective() method:
error: (-215) seq->total > 0 && CV_ELEM_SIZE(seq->flags) == seq->elem_size in function cv::Mat cv::cvarrToMat(const CvArr*, bool, bool, int)
Also I guess it's worth mentioning that I'm using JavaCV, in case the method calls look a bit different than what you're used to. Thanks for any help.
Answer:
The problem with what you want to do (besides the obvious: OpenCV won't let you) is that the radius can't really be warped correctly. The x,y coordinates are easy enough to calculate:
x' = (m00*x + m01*y + m02) / (m20*x + m21*y + m22)
y' = (m10*x + m11*y + m12) / (m20*x + m21*y + m22)
where m is the transformation matrix (mIJ meaning the matrix entry m(i, j)). The radius you can approximate by transforming several points on the original circle and then taking the maximum distance between (x', y') and those warped points (at least if the radius in the warped image is expected to cover all of them).
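If you move to the plain OpenCV Java bindings, Core.perspectiveTransform applies exactly that formula to a point set, so you can warp just the detected centers; a sketch, with cx, cy, r standing in for one detected circle and transform for the 3x3 matrix:

MatOfPoint2f centers = new MatOfPoint2f(new Point(cx, cy));
MatOfPoint2f warpedCenters = new MatOfPoint2f();
Core.perspectiveTransform(centers, warpedCenters, transform);
// For the radius, warp a point on the rim and measure its distance to the warped
// center; this samples one rim point, the answer suggests taking the max of several.
MatOfPoint2f rim = new MatOfPoint2f(new Point(cx + r, cy));
MatOfPoint2f warpedRim = new MatOfPoint2f();
Core.perspectiveTransform(rim, warpedRim, transform);
Point c = warpedCenters.toArray()[0], p = warpedRim.toArray()[0];
double warpedRadius = Math.hypot(p.x - c.x, p.y - c.y);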
End Answer.
Everything I write refers to the C++ version; I've never used JavaCV, but from what I can see it's just a wrapper that calls the native C++ library.
CvSeq is a sequence data structure that behaves like a linked list.
The assert your application crashes on is
CV_Assert(seq->total > 0 && CV_ELEM_SIZE(seq->flags) == seq->elem_size);
which means that either your seq instance is empty (total is the number of elements in the sequence) or the inner seq flags are somehow corrupted.
I'd recommend checking the total member of your CvSeq and the cvHoughCircles call.
All of this happens before the actual implementation of cvWarpPerspective (it's the first line of the implementation, which only converts your CvSeq to cv::Mat), so it's not the warping but what you're doing before it.
Anyway, to understand what's wrong with cvHoughCircles we'd need more info about the creation of newGray and circles.
Here is an example I found on the JavaCV page (link):
IplImage gray = cvCreateImage(cvSize(img.width, img.height), IPL_DEPTH_8U, 1);
cvCvtColor(img, gray, CV_RGB2GRAY);
// smooth it, otherwise a lot of false circles may be detected
cvSmooth(gray, gray, CV_GAUSSIAN, 9, 9, 2, 2);
CvMemStorage circles = CvMemStorage.create();
CvSeq seq = cvHoughCircles(gray, circles.getPointer(), CV_HOUGH_GRADIENT,
        2, img.height/4, 100, 100, 0, 0);
for (int i = 0; i < seq.total; i++) {
    float[] xyr = cvGetSeqElem(seq, i).getFloatArray(0, 3);
    CvPoint center = new CvPoint(Math.round(xyr[0]), Math.round(xyr[1]));
    int radius = Math.round(xyr[2]);
    cvCircle(img, center.byValue(), 3, CvScalar.GREEN, -1, 8, 0);
    cvCircle(img, center.byValue(), radius, CvScalar.BLUE, 3, 8, 0);
}
From what I've seen in the implementation of cvHoughCircles, the answer is saved into the circles buffer, and at the end the returned CvSeq is created from it; so if you've allocated the circles buffer incorrectly, it won't work.
EDIT:
As you can see, the CvSeq returned by cvHoughCircles is a list of point values, which is probably why the assertion failed: you cannot convert this CvSeq into a cv::Mat, because it simply isn't one. To get only the circles returned from cvHoughCircles into a cv::Mat instance, you need to create a new cv::Mat and draw all the circles in the CvSeq onto it, as seen in the example above.
Then the warping will work (you'll have a cv::Mat instance, which is what the function expects: a cv::Mat as the only element in the CvSeq).
END EDIT
Here is the C++ reference for CvSeq, and if you want to fiddle with the source code:
cvarrToMat is in matrix.cpp
CV_ELEM_SIZE is in types_c.h
cvWarpPerspective is in imgwarp.cpp
cvHoughCircles is in hough.cpp
I hope that will help.
BTW, your next error will probably be:
cv::warpPerspective in the C++ OpenCV asserts that dst.data != src.data, thus
cvWarpPerspective(seq, seq, mmat);
won't work, because your source and destination Mats reference the same data.
Not all functions in OpenCV (or in image processing in general) work in place, either because no in-place algorithm exists or because it is slower than the out-of-place version (e.g. the transpose of an n*n Mat works in place, but n*m where n != m is harder to do in place and might be slower).
You can't assume that using the src matrix as the dst will work.
