I'm trying to detect similar objects in a picture. The purpose of the code is to detect the gold and click on it. I have tried scanning pixel by pixel, but it wasn't efficient and the results weren't satisfying. I'll add that the game runs in windowed mode, and classes like java.awt.Robot work fine. Also, the gold might be in a different place every time.
As a very quick example I wrote up this using your image:
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;
import java.util.stream.Collectors;

import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;

import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.highgui.Highgui;
import org.opencv.imgproc.Imgproc;

import nu.pattern.OpenCV; // from the org.openpnp:opencv Maven artifact (see the link below)

public class OpenCVTest {

    public static void main(String[] args) {
        OpenCV.loadLibrary();
        Mat m = Highgui.imread("/home/artur/Pictures/test.png", Highgui.CV_LOAD_IMAGE_GRAYSCALE);
        LoadImage(m);

        // binarize with an adaptive threshold
        Mat res = Mat.zeros(m.size(), m.type());
        Imgproc.adaptiveThreshold(m, res, 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 15, 20);
        LoadImage(res);

        // detect edges
        Mat cannyRes = Mat.zeros(m.size(), m.type());
        Imgproc.Canny(res, cannyRes, 55, 5.2);
        LoadImage(cannyRes);

        // dilate/erode to remove background noise
        Imgproc.dilate(cannyRes, cannyRes, new Mat(), new Point(-1, -1), 2);
        Imgproc.erode(cannyRes, cannyRes, new Mat(), new Point(-1, -1), 2);
        LoadImage(cannyRes);

        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(cannyRes, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
        System.err.println(contours.size());

        // note: for a MatOfPoint, size().area() is the number of points, not the geometric area
        contours = contours.stream()
                .filter(s -> s.size().area() > 50 && s.size().area() <= 100)
                .collect(Collectors.toList());
        for (MatOfPoint p : contours) {
            Size size = p.size();
            System.err.println("-- -- --");
            System.err.println(size.area());
        }

        // draws only contour #20; pass -1 as the index to draw all of them
        Imgproc.drawContours(cannyRes, contours, 20, new Scalar(233, 223, 212));
        LoadImage(cannyRes);
    }

    public static void LoadImage(Mat m) {
        Path path = Paths.get("/tmp/", UUID.randomUUID().toString() + ".png");
        Highgui.imwrite(path.toString(), m);

        JFrame frame = new JFrame("My GUI");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setResizable(true);
        frame.setLocationRelativeTo(null);
        // insert the image icon and size the window around it
        ImageIcon image = new ImageIcon(path.toString());
        frame.setSize(image.getIconWidth() + 10, image.getIconHeight() + 35);
        JLabel label1 = new JLabel(" ", image, JLabel.CENTER);
        frame.getContentPane().add(label1);
        frame.validate();
        frame.setVisible(true);
    }
}
I read the image.
I use adaptive threshold to create a binary image.
I use Canny to detect the edges in the image.
I use dilate/erode to remove background noise.
I use the contour finder to find objects in the image.
I dismiss any contour outside an arbitrary size range.
The resulting contours are roughly your yellow spots. This is not very accurate, as I didn't invest time playing with the different parameters, but you can fine-tune that.
Hope that helps. Have fun playing. You can see how to set up OpenCV here: Java OpenCV from Maven
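One more note: since you say the Robot class already works, clicking the detected gold is mostly coordinate mapping. A minimal sketch, assuming the screenshot was grabbed at screen offset (winX, winY), hypothetical values you would take from the game window's position:

import java.awt.AWTException;
import java.awt.Robot;
import java.awt.event.InputEvent;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;

public static void clickContour(MatOfPoint contour, int winX, int winY) throws AWTException {
    // center of the contour's bounding box, in screenshot coordinates
    Rect r = Imgproc.boundingRect(contour);
    Robot robot = new Robot();
    robot.mouseMove(winX + r.x + r.width / 2, winY + r.y + r.height / 2);
    robot.mousePress(InputEvent.BUTTON1_MASK);
    robot.mouseRelease(InputEvent.BUTTON1_MASK);
}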
I am trying to develop a mobile application with Xamarin, starting with Android. I want the OnCameraFrame function to automatically detect contours and measure the size of the object. As a first step, I am trying to detect the contours in real time. I have read a lot of forums and documents, but nothing has helped me.
public Mat OnCameraFrame(CameraBridgeViewBase.ICvCameraViewFrame inputFrame)
{
    Mat input = inputFrame.Rgba();
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    Mat gray = new Mat();
    //Mat hierarchy = new Mat();
    Imgproc.CvtColor(p0: input, p1: gray, p2: Imgproc.ColorRgb2gray);
    Mat blur = new Mat();
    Imgproc.GaussianBlur(gray, blur, new Size(7, 7), -2);
    Mat thresh = new Mat();
    Imgproc.Threshold(blur, thresh, 127, 250, Imgproc.ThreshBinary);
    Mat edged = new Mat(); // note: never used; Canny and Dilate run in place on 'thresh'
    Imgproc.Canny(thresh, thresh, 25, 50);
    Imgproc.Dilate(thresh, thresh, new Mat(), new Point(-1, 1), 1);
    Mat hierarchy = thresh.Clone();
    Imgproc.FindContours(hierarchy, contours, new Mat(),
        Imgproc.RetrExternal, Imgproc.ChainApproxNone);
    Java.Lang.JavaSystem.Out.Println("contours" + contours);
    if (contours != null)
    {
        Java.Lang.JavaSystem.Out.Println("found contours");
        for (int i = 0; i < contours.Count(); i++)
        {
            Imgproc.DrawContours(input, contours, i, new Scalar(255, 0, 0), -1);
        }
    }
    else
    {
        Java.Lang.JavaSystem.Out.Println("no contours");
    }
    return input;
}
I used the above logic in the code, but the output in the application shows the normal image without any contours drawn on it. If I return "thresh" instead, the Canny edge detection works perfectly, but DrawContours does not show anything.
I used contours.Count() because my Xamarin IDE shows an error for contours.Size().
I had a similar problem. In my case, I replaced
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
with
IList<MatOfPoint> contours = new JavaList<MatOfPoint>();
I have hand-detection code that draws a green rectangle around the hand when it is detected. However, I want to fill the rectangle with another JPEG image. I was thinking of using ImagePattern, where the image gets embedded inside the shape, but that did not seem to work in my program.
Here is my code for creating the rectangle around the hand :
Imgproc.rectangle(frame, handsArray[i].tl(), handsArray[i].br(), new Scalar(0, 255, 0), 3);
The frame is a Mat, and Imgproc is an OpenCV (JavaCV) class.
Can someone please help me with this problem? I am struggling.
Regards
(Image: green rectangle around the hand that needs to be filled.)
BufferedImage bi = null;
try {
    bi = ImageIO.read(new File(trollFace));
} catch (IOException e) {
    e.printStackTrace();
}
Mat mat = new Mat(bi.getHeight(), bi.getWidth(), CvType.CV_8UC3);
byte[] data = ((DataBufferByte) bi.getRaster().getDataBuffer()).getData();
mat.put(0, 0, data);
You can do it in these steps:
Submat the rectangle from your source image:
Mat submat = src.submat(new Rect(200, 200, 100, 100));
Now copy the second image into the submat:
src2.copyTo(submat);
Note: before copying, ensure the size of the second image is the same as the rectangle; if not, you can use the resize() function of the Imgproc class to resize the Mat. And that is it.
Update: below is an example.
Mat src = Imgcodecs.imread("C:\\src1.jpg");
Mat src2 = Imgcodecs.imread("C:\\src2.jpg");
Imgproc.resize(src2, src2, new Size(100, 100));
Mat submat = src.submat(new Rect(200, 200, 100, 100));
src2.copyTo(submat);
// now you can do what you want with src
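Tying this back to the hand-detection code in the question, a minimal sketch, assuming handsArray[i] is an org.opencv.core.Rect (as its tl()/br() usage suggests) and mat is the troll-face image already loaded from the BufferedImage:

// hypothetical: 'frame' is the camera Mat, 'mat' the overlay image from the question
Rect hand = handsArray[i];
Mat overlay = new Mat();
// make the overlay exactly the size of the hand rectangle
Imgproc.resize(mat, overlay, new Size(hand.width, hand.height));
// copy it into the region of 'frame' covered by the rectangle
overlay.copyTo(frame.submat(hand));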
I am working on real-time text detection and recognition with OpenCV4Android. The recognition part is completely done. However, I have a question about text detection: I'm using the MSER FeatureDetector to detect text.
This is the real-time part, which calls the method:
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    carrierMat = inputFrame.gray();
    carrierMat = General.MSER(carrierMat);
    return carrierMat;
}
And this is the basic MSER implementation:
private static FeatureDetector fd = FeatureDetector.create(FeatureDetector.MSER);
private static MatOfKeyPoint mokp = new MatOfKeyPoint();
private static Mat edges = new Mat();

public static Mat MSER(Mat mat) {
    // Canny edges are used as the detection mask
    Imgproc.Canny(mat, edges, 400, 450);
    fd.detect(mat, mokp, edges);
    // draw the detected keypoints
    Features2d.drawKeypoints(mat, mokp, mat);
    return mat;
}
It works fine for finding text with the edges mask. I would like to draw rectangles around the clusters, like these (example images omitted). You can assume that I have the right points.
As you can see, the fd.detect() method returns a MatOfKeyPoint, so I tried this method for drawing a rectangle:
public static Mat MSER_(Mat mat) {
    fd.detect(mat, mokp);
    KeyPoint[] refKp = mokp.toArray();
    Point[] refPts = new Point[refKp.length];
    for (int i = 0; i < refKp.length; i++) {
        refPts[i] = refKp[i].pt;
    }
    MatOfPoint2f refMatPt = new MatOfPoint2f(refPts);
    MatOfPoint2f approxCurve = new MatOfPoint2f();
    // processing on mMOP2f1, which is of type MatOfPoint2f
    double approxDistance = Imgproc.arcLength(refMatPt, true) * 0.02;
    Imgproc.approxPolyDP(refMatPt, approxCurve, approxDistance, true);
    // convert back to MatOfPoint
    MatOfPoint points = new MatOfPoint(approxCurve.toArray());
    // get the bounding rect
    Rect rect = Imgproc.boundingRect(points);
    // draw the enclosing rectangle (all the same color, but you could use i to make them unique)
    Imgproc.rectangle(mat, new Point(rect.x, rect.y), new Point(rect.x + rect.width, rect.y + rect.height), Detect_Color_, 5);
    //Features2d.drawKeypoints(mat, mokp, mat);
    return mat;
}
But when execution reaches the Imgproc.arcLength() call, it suddenly stops. I also tried giving Imgproc.approxPolyDP() a fixed approxDistance value, such as 0.1, but that doesn't work very well either.
So how can I draw rectangles around the detected text?
I tested your code and had exactly the same problem, and for now I still can't find the cause. But I found a project that uses both MSER and morphological operations; you can find it here. The project has a very simple structure, and the author put the text detection in the onCameraFrame method, just like you. I implemented the method from that project and it worked, but the result was still not very good.
If you are looking for better text detection tools, here are two of them:
Stroke Width Transform (SWT): a whole new method for finding text areas. It's fast and efficient; however, it is only available in C++ or Python. You can find some examples here.
Class-specific Extremal Regions using the ERFilter class: an advanced version of MSER. Unfortunately, it is only available in OpenCV 3.0.0-dev, so you can't use it in the current version of OpenCV4Android. The documentation is here.
To be honest, I am new to this area (2 months), but I hope this information can help you finish your project.
(Update: 2015/9/13)
I've translated a C++ method from a post. It works far better than the first GitHub project I mentioned. Here is the code:
public void apply(Mat src, Mat dst) {
    if (dst != src) {
        src.copyTo(dst);
    }
    Mat img_gray = new Mat();
    Imgproc.cvtColor(src, img_gray, Imgproc.COLOR_RGB2GRAY);
    Mat img_sobel = new Mat();
    Imgproc.Sobel(img_gray, img_sobel, CvType.CV_8U, 1, 0, 3, 1, 0, Core.BORDER_DEFAULT);
    Mat img_threshold = new Mat();
    Imgproc.threshold(img_sobel, img_threshold, 0, 255, Imgproc.THRESH_OTSU + Imgproc.THRESH_BINARY);
    // closing with a wide structuring element merges characters into text lines -- this does the trick
    Mat element = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(17, 3));
    Imgproc.morphologyEx(img_threshold, img_threshold, Imgproc.MORPH_CLOSE, element);
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    Mat hierarchy = new Mat();
    // mode 0 = RETR_EXTERNAL, method 1 = CHAIN_APPROX_NONE
    Imgproc.findContours(img_threshold, contours, hierarchy, 0, 1);
    List<MatOfPoint> contours_poly = new ArrayList<MatOfPoint>(contours.size());
    contours_poly.addAll(contours);
    MatOfPoint2f mMOP2f1 = new MatOfPoint2f();
    MatOfPoint2f mMOP2f2 = new MatOfPoint2f();
    for (int i = 0; i < contours.size(); i++) {
        if (contours.get(i).toList().size() > 100) {
            contours.get(i).convertTo(mMOP2f1, CvType.CV_32FC2);
            Imgproc.approxPolyDP(mMOP2f1, mMOP2f2, 3, true);
            mMOP2f2.convertTo(contours_poly.get(i), CvType.CV_32S);
            Rect appRect = Imgproc.boundingRect(contours_poly.get(i));
            // text regions are usually wider than they are tall
            if (appRect.width > appRect.height) {
                Imgproc.rectangle(dst, new Point(appRect.x, appRect.y),
                        new Point(appRect.x + appRect.width, appRect.y + appRect.height),
                        new Scalar(255, 0, 0));
            }
        }
    }
}
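For completeness, a minimal usage sketch of this method (the wrapper class name and file paths are placeholders I made up):

// assumes the OpenCV 3.x Java bindings, matching the Imgproc.rectangle call above
Mat src = Imgcodecs.imread("scene.jpg");
Mat dst = new Mat();
new TextRegionDetector().apply(src, dst);  // hypothetical class holding apply()
Imgcodecs.imwrite("scene_boxes.jpg", dst);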
I'm implementing, in Java, the OpenCV tutorial for finding an object in a scene using homography: http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html#feature-homography
Below is my implementation, where img1 is the scene and img2 is the object:
FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);
DescriptorExtractor descriptor = DescriptorExtractor.create(DescriptorExtractor.ORB);
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);

//set up img1 (scene) and calculate its descriptors
Mat descriptors1 = new Mat();
MatOfKeyPoint keypoints1 = new MatOfKeyPoint();
detector.detect(img1, keypoints1);
descriptor.compute(img1, keypoints1, descriptors1);

//set up img2 (template) and calculate its descriptors
Mat descriptors2 = new Mat();
MatOfKeyPoint keypoints2 = new MatOfKeyPoint();
detector.detect(img2, keypoints2);
descriptor.compute(img2, keypoints2, descriptors2);

//match the two images' descriptors
MatOfDMatch matches = new MatOfDMatch();
matcher.match(descriptors1, descriptors2, matches);

//calculate max and min distances between keypoints
double max_dist = 0;
double min_dist = 99;
List<DMatch> matchesList = matches.toList();
for (int i = 0; i < descriptors1.rows(); i++) {
    double dist = matchesList.get(i).distance;
    if (dist < min_dist) min_dist = dist;
    if (dist > max_dist) max_dist = dist;
}

//set up good matches: add matches if they are close enough
LinkedList<DMatch> good_matches = new LinkedList<DMatch>();
MatOfDMatch gm = new MatOfDMatch();
for (int i = 0; i < descriptors2.rows(); i++) {
    if (matchesList.get(i).distance < 3 * min_dist) {
        good_matches.addLast(matchesList.get(i));
    }
}
gm.fromList(good_matches);

//put the keypoint mats into lists
List<KeyPoint> keypoints1_List = keypoints1.toList();
List<KeyPoint> keypoints2_List = keypoints2.toList();

//put the keypoints into Point2f mats so Calib3d can use them to find the homography
LinkedList<Point> objList = new LinkedList<Point>();
LinkedList<Point> sceneList = new LinkedList<Point>();
for (int i = 0; i < good_matches.size(); i++) {
    objList.addLast(keypoints2_List.get(good_matches.get(i).queryIdx).pt);
    sceneList.addLast(keypoints1_List.get(good_matches.get(i).trainIdx).pt);
}
MatOfPoint2f obj = new MatOfPoint2f();
MatOfPoint2f scene = new MatOfPoint2f();
obj.fromList(objList);
scene.fromList(sceneList);

//output image
Mat outputImg = new Mat();
MatOfByte drawnMatches = new MatOfByte();
Features2d.drawMatches(img1, keypoints1, img2, keypoints2, gm, outputImg,
        Scalar.all(-1), Scalar.all(-1), drawnMatches, Features2d.NOT_DRAW_SINGLE_POINTS);

//run homography on the object and scene points
Mat H = Calib3d.findHomography(obj, scene, Calib3d.RANSAC, 5);

//get the corners of the object and project them into the scene
Mat tmp_corners = new Mat(4, 1, CvType.CV_32FC2);
Mat scene_corners = new Mat(4, 1, CvType.CV_32FC2);
tmp_corners.put(0, 0, new double[] {0, 0});
tmp_corners.put(1, 0, new double[] {img2.cols(), 0});
tmp_corners.put(2, 0, new double[] {img2.cols(), img2.rows()});
tmp_corners.put(3, 0, new double[] {0, img2.rows()});
Core.perspectiveTransform(tmp_corners, scene_corners, H);

//draw the projected bounding box
Core.line(outputImg, new Point(scene_corners.get(0, 0)), new Point(scene_corners.get(1, 0)), new Scalar(0, 255, 0), 4);
Core.line(outputImg, new Point(scene_corners.get(1, 0)), new Point(scene_corners.get(2, 0)), new Scalar(0, 255, 0), 4);
Core.line(outputImg, new Point(scene_corners.get(2, 0)), new Point(scene_corners.get(3, 0)), new Scalar(0, 255, 0), 4);
Core.line(outputImg, new Point(scene_corners.get(3, 0)), new Point(scene_corners.get(0, 0)), new Scalar(0, 255, 0), 4);
The program is able to calculate and display feature points from both images. However, the scene_corners returned are 4 points in a tight cluster (a small green blob), where they are supposed to represent the 4 corners of the perspective projection of the object onto the scene. I double-checked to make sure my program is as close to the C++ implementation as possible. What might be causing this?
I inspected the homography matrix, and it seems the corner coordinates are skewed by 2 very large values in the matrix. Is the homography incorrectly calculated?
I'd appreciate any input, thanks.
Update:
I played around with the filter threshold for good matches and found that 2.75*min_dist seems to work well with this set of images; I can now get good matches with zero outliers. However, the bounding box is still wrong: http://i.imgur.com/fuXeOqL.png
How do I know what threshold value gives the best matches, and how does the homography relate to them? Why was 3*min_dist used in the example?
I managed to solve the problem and use the homography correctly while investigating index-out-of-bounds errors. It turns out that when I added my good matches to my object and scene lists, I had swapped the query and train indices:
objList.addLast(keypoints2_List.get(good_matches.get(i).queryIdx).pt);
sceneList.addLast(keypoints1_List.get(good_matches.get(i).trainIdx).pt);
According to this question, OpenCV drawMatches -- queryIdx and trainIdx, since I called
matcher.match(descriptors1, descriptors2, matches);
with descriptors1 first and descriptors2 second, the correct indices are:
objList.addLast(keypoints2_List.get(good_matches.get(i).trainIdx).pt);
sceneList.addLast(keypoints1_List.get(good_matches.get(i).queryIdx).pt);
where queryIdx refers to keypoints1_List and trainIdx refers to keypoints2_List.
Here is an example result:
http://i.imgur.com/LZNBjY2.png
Currently I'm also implementing a 2D homography in Java, and I also found the OpenCV tutorial and then your question.
I don't think it will fix your results by itself, but note that in the OpenCV tutorial, when they compute the min and max distance, they loop over descriptors_object.rows, whereas in your code you loop over descriptors1.rows(), which is the scene descriptor, not the object descriptor.
Edit: I just noticed the same issue with the matcher. For you:
img1/descriptors1 -> the scene
img2/descriptors2 -> the object to find
In the tutorial:
matcher.match( descriptors_object, descriptors_scene, matches );
But in your code:
matcher.match(descriptors1, descriptors2, matches);
And the Javadoc:
void org.opencv.features2d.DescriptorMatcher.match(Mat queryDescriptors, Mat trainDescriptors, MatOfDMatch matches)
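In other words, queryIdx indexes the keypoints of the first argument and trainIdx those of the second. A short sketch of the tutorial's ordering (object as query, scene as train), under which the index bookkeeping matches the C++ code directly (distance filtering omitted):

matcher.match(descriptors2, descriptors1, matches);  // query = object, train = scene
for (DMatch m : matches.toList()) {
    objList.addLast(keypoints2_List.get(m.queryIdx).pt);   // object keypoints
    sceneList.addLast(keypoints1_List.get(m.trainIdx).pt); // scene keypoints
}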
I hope someone can help me here. I'm pretty much a noob trying to make an interactive graphic showing soccer teams moving up or down the FIFA rankings. I loaded pictures I created outside of Processing to represent the teams, and I want them to move based on a mouse-click event.
My problem right now is that when I test the app, it doesn't size according to the settings I put in, and most of the images get cut off. I tried frame.setResizable(), and while I can manipulate the size of the window to a degree, my images are still cut off.
Below is my code. I am working in Processing 2.0b7 on a MacBook Pro running OS X:
//Setting up the images that will go into the sketch
PImage img1;
PImage img2;
PImage img3;
PImage img4;
PImage img5;
PImage img6;
PImage img7;
PImage img8;
PImage img9;
PImage img10;
PImage img11;
PImage img12;
PImage img13;
PImage img14;
PImage img15;
PImage img16;
PImage img17;
//loading the images from the file
void setup() {
  size(600, 1200);
  frame.setResizable(true);
  img1 = loadImage("Click.png");
  img2 = loadImage("Team_Algeria.png");
  img3 = loadImage("Team_Angola.png");
  img4 = loadImage("Team_BurkinaFaso.png");
  img5 = loadImage("Team_CapeVerde.png");
  img6 = loadImage("Team_DRCongo.png");
  img7 = loadImage("Team_Ethiopia.png");
  img8 = loadImage("Team_Ghana.png");
  img9 = loadImage("Team_IvoryCoast.png");
  img10 = loadImage("Team_Mali.png");
  img11 = loadImage("Team_Morocco.png");
  img12 = loadImage("Team_Niger.png");
  img13 = loadImage("Team_Nigeria.png");
  img14 = loadImage("Team_SouthAfrica.png");
  img15 = loadImage("Team_Togo.png");
  img16 = loadImage("Team_Tunisia.png");
  img17 = loadImage("Team_Zambia.png");
}
int a = 0;
//Drawing the images into the sketch
void draw() {
  background(#000000);
  image(img1, 400, 100);
  image(img2, 100, 200);
  image(img3, 100, 260);
  image(img4, 100, 320);
  image(img5, 100, 380);
  image(img6, 100, 440);
  image(img7, 100, 500);
  image(img8, 100, 560);
  image(img9, 100, 620);
  image(img10, 100, 680);
  image(img11, 100, 740);
  image(img12, 100, 800);
  image(img13, 100, 860);
  image(img14, 100, 920);
  image(img15, 100, 980);
  image(img16, 100, 1040);
  image(img17, 100, 1100);
}
I am not sure what you mean by cut off, but remember you are only specifying positions, so if an image is taller than the 60 pixels of vertical space you leave between them, the images will overlap. What is the exact size of the PNG images you are loading?
Try also giving the images an explicit size, i.e. adding two more arguments as per the reference:
image(img2, 100, 200, whateverWidth, 60); // 60 since that is the vertical space you leave between images
Does this help?
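For what it's worth, a more compact sketch of the same layout using a PImage array and explicit draw sizes (file names taken from your code; the 150x50 draw size is just an assumption to fit the 60-pixel rows):

PImage clickImg;
PImage[] teams = new PImage[16];
String[] names = {
  "Team_Algeria", "Team_Angola", "Team_BurkinaFaso", "Team_CapeVerde",
  "Team_DRCongo", "Team_Ethiopia", "Team_Ghana", "Team_IvoryCoast",
  "Team_Mali", "Team_Morocco", "Team_Niger", "Team_Nigeria",
  "Team_SouthAfrica", "Team_Togo", "Team_Tunisia", "Team_Zambia"
};

void setup() {
  size(600, 1200);
  clickImg = loadImage("Click.png");
  for (int i = 0; i < names.length; i++) {
    teams[i] = loadImage(names[i] + ".png");
  }
}

void draw() {
  background(0);
  image(clickImg, 400, 100);
  for (int i = 0; i < teams.length; i++) {
    // x, y, width, height: scale each image into its 60-pixel row
    image(teams[i], 100, 200 + i * 60, 150, 50);
  }
}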