Ranking algorithm for image processing using OpenCV and Java

The algorithm below returns an image-comparison percentage using OpenCV in Java.
private double compare(Mat hist1, String f2) {
    double compare = 0;
    ArrayList<Mat> bgr_planes2 = new ArrayList<Mat>();
    boolean accumulate = false;
    MatOfInt histSize = new MatOfInt(180);
    MatOfInt channels = new MatOfInt(0);
    MatOfFloat histRanges = new MatOfFloat(0f, 180f);
    // Mat img2 = Imgcodecs.imread(f2);
    // img2 = resize(img2);
    Mat img2 = ContourUtils.findContour(f2, false);
    Imgproc.cvtColor(img2, img2, Imgproc.COLOR_RGBA2GRAY);
    img2.convertTo(img2, CvType.CV_32F);
    Mat hist2 = new Mat();
    Core.split(img2, bgr_planes2);
    Imgproc.calcHist(bgr_planes2, channels, new Mat(), hist2, histSize,
            histRanges, accumulate);
    Core.normalize(hist2, hist2, 0, hist2.rows(), Core.NORM_MINMAX, -1,
            new Mat());
    hist2.convertTo(hist2, CvType.CV_32F);
    // Correlation comparison: 1.0 means the histograms are identical.
    compare = Imgproc.compareHist(hist1, hist2, Imgproc.CV_COMP_CORREL);
    // Release the native memory held by the temporary Mats.
    img2.release();
    hist2.release();
    channels.release();
    histRanges.release();
    histSize.release();
    for (Mat m : bgr_planes2) {
        if (m != null) {
            m.release();
        }
    }
    return compare;
}
In the same way, I want to find a ranking instead of a percentage, i.e. how well the images are matched. Please suggest an idea for a ranking algorithm. Thanks.

I understand that your algorithm is comparing an image to a reference image. If you want a ranking of the best match, apply the function to your collection of input images; sorting the results in descending order then gives you the ranking.
Store the result as the key and the list of input image names as the value of a Map (since many inputs could have the same result value).
For example:
Map<Double, List<String>> ranking
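A minimal sketch of that idea, assuming the compare method above, a reference histogram hist1, and a hypothetical list files of candidate image paths; a TreeMap with a reversed comparator keeps the entries sorted by score, so the iteration order is the ranking:
// Sketch: rank candidate images by their comparison score, highest first.
Map<Double, List<String>> ranking = new TreeMap<>(Comparator.reverseOrder());
for (String file : files) {
    double score = compare(hist1, file);
    // Group files that happen to share the same score under one key.
    ranking.computeIfAbsent(score, k -> new ArrayList<>()).add(file);
}
int rank = 1;
for (Map.Entry<Double, List<String>> entry : ranking.entrySet()) {
    System.out.println("Rank " + rank++ + " (score " + entry.getKey() + "): " + entry.getValue());
}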

Related

Find Similarity Of Two Hand Sign Images OpenCv

I am working on a sign-language translator application for people with disabilities. In my application, the user provides one sign image from the camera or gallery; the given image is compared with the database images, and the result is shown with the alphabetic sign.
But my problem is that I am not getting good similarity between two images: sometimes the result is accurate, sometimes not.
Please refer me to some idea or source code.
Thanks in advance.
Scalar lowerThreshold = new Scalar(0, 48, 80); // lower HSV bound
Scalar upperThreshold = new Scalar(20, 255, 255); // upper HSV bound
FeatureDetector detector = FeatureDetector.create(FeatureDetector.PYRAMID_FAST);
DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.ORB);
// ORB descriptors with a brute-force Hamming matcher, plus a distance filter
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
// (SURF with a FLANN-based matcher crashed, hence brute force)
Mat img1 = new Mat();
Mat img2 = new Mat();
Utils.bitmapToMat(defaultImage, img1);
Utils.bitmapToMat(databaseImage, img2);
// first image
Mat descriptors1 = new Mat();
MatOfKeyPoint keypoints1 = new MatOfKeyPoint();
detector.detect(img1, keypoints1);
extractor.compute(img1, keypoints1, descriptors1);
// second image
Mat descriptors2 = new Mat();
MatOfKeyPoint keypoints2 = new MatOfKeyPoint();
detector.detect(img2, keypoints2);
extractor.compute(img2, keypoints2, descriptors2);
// match the two images' descriptors
MatOfDMatch matches = new MatOfDMatch();
matcher.match(descriptors1, descriptors2, matches);
// filter the matches by distance
MatOfDMatch filtered = filterMatchesByDistance(matches);
int total = (int) matches.size().height;
int match = (int) filtered.size().height;
Log.d("LOG", "total:" + total + " Match:" + match);
// percentage of matches that survived the distance filter
int percent = (int) ((match * 100.0f) / total);
if (percent > max) {
    max = percent;
    maximumPercentage.setMaximum(percent);
    maximumPercentage.setImageId(id);
    imageId = id;
    Log.d("Maximum Percentage: ", String.valueOf(max) + "%");
    Log.d("MaxId: ", String.valueOf(imageId));
}
id++;
Log.d("matchingOImages: ", String.valueOf(percent) + "%");
The filter matching result method:
private MatOfDMatch filterMatchesByDistance(MatOfDMatch matches) {
    List<DMatch> matches_original = matches.toList();
    List<DMatch> matches_filtered = new ArrayList<DMatch>();
    int DIST_LIMIT = 30;
    // Check each match's distance and keep it if it is within the limit.
    Log.d("DISTFILTER", "ORG SIZE:" + matches_original.size() + "");
    for (int i = 0; i < matches_original.size(); i++) {
        DMatch d = matches_original.get(i);
        if (Math.abs(d.distance) <= DIST_LIMIT) {
            matches_filtered.add(d);
        }
    }
    Log.d("DISTFILTER", "FIL SIZE:" + matches_filtered.size() + "");
    MatOfDMatch mat = new MatOfDMatch();
    mat.fromList(matches_filtered);
    return mat;
}
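As a side note (not in the original post), a common alternative to a fixed DIST_LIMIT is Lowe's ratio test via knnMatch, which often gives a more stable notion of a "good" match. A minimal sketch, assuming the same matcher and descriptors as above:
// Sketch: Lowe's ratio test instead of a fixed distance limit.
// A match is kept only if it is clearly better than the second-best candidate.
List<MatOfDMatch> knnMatches = new ArrayList<MatOfDMatch>();
matcher.knnMatch(descriptors1, descriptors2, knnMatches, 2);
List<DMatch> goodMatches = new ArrayList<DMatch>();
float ratio = 0.75f; // typical starting value; tune for your data
for (MatOfDMatch knn : knnMatches) {
    DMatch[] pair = knn.toArray();
    if (pair.length >= 2 && pair[0].distance < ratio * pair[1].distance) {
        goodMatches.add(pair[0]);
    }
}
MatOfDMatch ratioFiltered = new MatOfDMatch();
ratioFiltered.fromList(goodMatches);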
OK, well, I think you have just entered the modern age of neural networks.
As this stuff can be overwhelming and often takes years of study, there are some shortcuts to get things done.
For the quickest result, I think you might start here (assuming you would rather not dive deep into the inner workings of a neural net, but would rather use existing software or services): https://cloud.google.com/automl/

Replicate Gimp Unsharp Mask with Java - OpenCv

I'm trying to replicate GIMP's unsharp mask using Java and OpenCV. I use a grayscale image as input and apply an unsharp mask, but the results are not even close.
I am trying to implement this C++ code:
Mat blurred;
double sigma = 1, threshold = 5, amount = 1;
GaussianBlur(img, blurred, Size(), sigma, sigma);
Mat lowContrastMask = abs(img - blurred) < threshold;
Mat sharpened = img * (1 + amount) + blurred * (-amount);
img.copyTo(sharpened, lowContrastMask);
And this is my Java implementation:
double sigma = 1, threshold = 5, amount = 1;
Mat source = Imgcodecs.imread(input.getName());
Mat destination = new Mat();
Imgproc.GaussianBlur(source, destination, new Size(), sigma, sigma);
Mat lowContrastMask = new Mat();
Core.absdiff(source, destination, lowContrastMask);
Imgproc.threshold(lowContrastMask, lowContrastMask, 0, threshold, Imgproc.THRESH_BINARY);
Mat sharpened = new Mat();
Core.multiply(source, new Scalar(0), sharpened, amount+1);
Mat sharpened2 = new Mat();
Core.multiply(destination, new Scalar(0), sharpened2, -amount);
Core.add(sharpened2, sharpened, sharpened);
source.copyTo(sharpened, lowContrastMask);
Alternative Unsharp Masking method:
Mat source = Imgcodecs.imread(input.getName());
Mat destination = new Mat();
Imgproc.GaussianBlur(source, destination, new Size(0,0), 60);
Core.addWeighted(source, 1.5, destination, -1, 0, destination);
So, both methods run, but the results are not as good as the GIMP result. I'm open to any suggestion. I know it looks like a bad implementation; I'm a newbie and I appreciate any help.
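No answer was posted here, but for reference, a closer line-by-line port of the C++ snippet might look like the sketch below (untested against GIMP's output). Core.addWeighted replaces the two Core.multiply calls, which multiply by Scalar(0) and therefore zero out both terms, and an inverted threshold on the absdiff result stands in for abs(img - blurred) < threshold:
// Sketch: closer port of the C++ unsharp mask above.
double sigma = 1, threshold = 5, amount = 1;
Mat img = Imgcodecs.imread(input.getName(), Imgcodecs.IMREAD_GRAYSCALE);
Mat blurred = new Mat();
Imgproc.GaussianBlur(img, blurred, new Size(), sigma, sigma);
// lowContrastMask = abs(img - blurred) < threshold
Mat lowContrastMask = new Mat();
Core.absdiff(img, blurred, lowContrastMask);
Imgproc.threshold(lowContrastMask, lowContrastMask, threshold - 1, 255,
        Imgproc.THRESH_BINARY_INV);
// sharpened = img * (1 + amount) - blurred * amount
Mat sharpened = new Mat();
Core.addWeighted(img, 1 + amount, blurred, -amount, 0, sharpened);
// Keep the original pixels wherever the local contrast is low.
img.copyTo(sharpened, lowContrastMask);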

OpenCV color analysis in HSV

As the title suggests, I am interested in getting the HSV value of a specific pixel using Java CV. This sounds easy enough, and it seems to be straightforward in C++ or Python, but I simply can't figure out how to do it in Java. I am pretty new to OpenCV, and if I decide to do more projects using this library I will definitely write them in C++ or Python.
For reference, my goal is to do a color analysis of an object that has varying levels of lighting. The end goal is to be able to take an image of something like a t-shirt and be able to say "this t-shirt is x% red".
Here is some of the code I was using. Surprisingly, inRange() takes much longer than just looping through every pixel and getting the RGB values one by one. I want to be able to do exactly this, just in the HSV color space. If you know of a better way to accomplish this goal, please let me know, as this has destroyed my entire Saturday. Thanks!
Scalar min = new Scalar(22,11,3);
Scalar max = new Scalar(103,87,74);
int sum = 0;
System.loadLibrary( Core.NATIVE_LIBRARY_NAME );
File input = new File("bluesample.jpg");
BufferedImage image = ImageIO.read(input);
byte[] data = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
Mat mat = new Mat(image.getHeight(), image.getWidth(), CvType.CV_8UC3);
Mat mat1 = new Mat(image.getHeight(),image.getWidth(),CvType.CV_8UC3);
mat.put(0, 0, data);
Core.inRange(mat, min, max, mat1);
System.out.println(mat1.total());
System.out.println(mat1.total());
for (int i = 0; i < mat1.rows(); i++) {
    for (int j = 0; j < mat1.cols(); j++) {
        sum += mat.get(j, i, data);
    }
}
System.out.println(sum/mat1.total());
EDIT:
try {
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    File input = new File("singlehsvpix.jpg");
    BufferedImage image = ImageIO.read(input);
    byte[] data = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    Mat mat = new Mat(image.getHeight(), image.getWidth(), CvType.CV_8UC3);
    mat.put(0, 0, data);
    Mat mat1 = new Mat(image.getHeight(), image.getWidth(), CvType.CV_8UC1);
    Imgproc.cvtColor(mat, mat1, Imgproc.COLOR_RGB2HSV);
    System.out.println(mat1.dump());
    byte[] data1 = new byte[mat1.rows() * mat1.cols() * (int) (mat1.elemSize())];
    mat1.get(0, 0, data1);
    //BufferedImage image1 = new BufferedImage(mat1.cols(), mat1.rows(), BufferedImage.TYPE_BYTE_GRAY);
    BufferedImage image1 = new BufferedImage(mat1.cols(), mat1.rows(), 5);
    image1.getRaster().setDataElements(0, 0, mat1.cols(), mat1.rows(), data1);
    File output = new File("PLS!.jpg");
    ImageIO.write(image1, "jpg", output);
    System.out.println(mat1.get(0, 0, data1)); // RELEVANT LINE
    System.out.println("Done");
} catch (Exception e) {
    System.out.println("Error: " + e.getMessage());
}
This prints:
[ 54, 213, 193]
3
Done
For this picture, 54, 213, 193 are the BGR values... I guess I don't understand enough about OpenCV to know why my mat1.get is printing 3.
So, you want to convert RGB to HSV.
Imgproc.cvtColor(im_rgb, im_hsv, Imgproc.COLOR_RGB2HSV);
Then process as you like.
Edit: in your code, change mat to mat1:
for (int i = 0; i < mat1.rows(); i++) {
    for (int j = 0; j < mat1.cols(); j++) {
        sum += mat.get(j, i, data); // this line
    }
}
System.out.println(sum / mat1.total());
You are adding the values from the original matrix.
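As for why mat1.get(0, 0, data1) prints 3: that overload of Mat.get returns the number of bytes copied into the array, not the pixel values. To read a pixel, use the overload that returns a double[]. A minimal sketch of that, plus the original "x% red" goal via inRange and countNonZero (the HSV bounds here are placeholders; note the raster bytes of the BufferedImage are BGR-ordered, so COLOR_BGR2HSV is likely the right conversion):
// Read one pixel's HSV values: get(row, col) returns {H, S, V} as doubles.
Mat hsv = new Mat();
Imgproc.cvtColor(mat, hsv, Imgproc.COLOR_BGR2HSV);
double[] pixel = hsv.get(0, 0);
System.out.println("H=" + pixel[0] + " S=" + pixel[1] + " V=" + pixel[2]);
// "x% red": count the pixels inside an HSV range (bounds are placeholders).
Scalar lower = new Scalar(0, 70, 50);
Scalar upper = new Scalar(10, 255, 255);
Mat mask = new Mat();
Core.inRange(hsv, lower, upper, mask);
double percent = 100.0 * Core.countNonZero(mask) / (hsv.rows() * hsv.cols());
System.out.println(percent + "% of the pixels fall in the range");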

Quick & fast template matching on screen. Coordinates needed too. Java

I need a way to find an image on the screen. I've searched for ways to do this on SO, but some take extremely long. I need it to be fast and efficient; it does not need to be accurate. Basically, I'm planning to compare or search for a small pixelated image, say 11x10 pixels for example, on the screen.
I also need a way to know the x and y coordinates of the small image on the screen.
Although I've looked through many tools out there like JavaCV and OpenCV, I just wanted to see if there are any other ways to do this.
TL;DR
I need a fast way to search for a small (11x10, for example) image on the screen and know its x,y coordinates.
I think you may find this answer relevant, but it is for Windows and in C++. I'm sure you can convert it quite easily to any language.
This question is very old, but I'm trying to achieve the exact same thing here. I've found that combining these answers does the trick:
Convert BufferedImage TYPE_INT_RGB to OpenCV Mat Object
OpenCV Template Matching example in Android
The reason you need a conversion is that when you grab a screenshot with the awt.Robot class, it is in the INT_RGB format. The template matching example expects bytes, and you cannot grab byte data from this type of image directly.
Here's my implementation of these two answers. At first it was incomplete: the output was all screwed up, and I think it had something to do with the IntBuffer/ByteBuffers.
Edit:
I've added a new helper method that converts an INT_RGB image to BYTE_BGR. I can now grab the coordinates of the template on the image using matchLoc. This seems to work pretty well; I was able to use this with a robot that clicks the start menu for me based on the template.
private BufferedImage FindTemplate() {
    System.out.println("\nRunning Template Matching");
    int match_method = Imgproc.TM_SQDIFF;
    BufferedImage screenShot = null;
    try {
        Robot rob = new Robot();
        screenShot = rob.createScreenCapture(new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()));
    } catch (AWTException ex) {
        Logger.getLogger(MainGUI.class.getName()).log(Level.SEVERE, null, ex);
    }
    if (screenShot == null) return null;
    Mat img = BufferedImageToMat(convertIntRGBTo3ByteBGR(screenShot));
    String templateFile = "C:\\Temp\\template1.JPG";
    Mat templ = Highgui.imread(templateFile);
    // Create the result matrix
    int result_cols = img.cols() - templ.cols() + 1;
    int result_rows = img.rows() - templ.rows() + 1;
    Mat result = new Mat(result_rows, result_cols, CvType.CV_32FC1);
    // Do the matching and normalize
    Imgproc.matchTemplate(img, templ, result, match_method);
    Core.normalize(result, result, 0, 1, Core.NORM_MINMAX, -1, new Mat());
    Highgui.imwrite("out2.png", result);
    // Localize the best match with minMaxLoc
    MinMaxLocResult mmr = Core.minMaxLoc(result);
    Point matchLoc;
    if (match_method == Imgproc.TM_SQDIFF
            || match_method == Imgproc.TM_SQDIFF_NORMED) {
        matchLoc = mmr.minLoc;
    } else {
        matchLoc = mmr.maxLoc;
    }
    // Draw the match location on the screenshot (Point stores doubles).
    Graphics2D graphics = screenShot.createGraphics();
    graphics.setColor(Color.red);
    graphics.setStroke(new BasicStroke(3));
    graphics.drawRect((int) matchLoc.x, (int) matchLoc.y, templ.width(), templ.height());
    graphics.dispose();
    return screenShot;
}
private Mat BufferedImageToMat(BufferedImage img) {
    // Expects a TYPE_3BYTE_BGR image (see convertIntRGBTo3ByteBGR below),
    // so the raster is backed by a byte buffer.
    byte[] data = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
    Mat mat = new Mat(img.getHeight(), img.getWidth(), CvType.CV_8UC3);
    mat.put(0, 0, data);
    return mat;
}
private BufferedImage convertIntRGBTo3ByteBGR(BufferedImage img) {
    BufferedImage convertedImage = new BufferedImage(img.getWidth(), img.getHeight(), BufferedImage.TYPE_3BYTE_BGR);
    Graphics2D graphics = convertedImage.createGraphics();
    graphics.drawImage(img, 0, 0, null);
    graphics.dispose();
    return convertedImage;
}
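A possible usage sketch (the output file name is hypothetical):
// Run the matcher and save the annotated screenshot for inspection.
BufferedImage annotated = FindTemplate();
if (annotated != null) {
    try {
        ImageIO.write(annotated, "png", new File("match_result.png"));
    } catch (IOException ex) {
        ex.printStackTrace();
    }
}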

OpenCV: Fitting an object into a scene using homography and perspective transform in Java

I'm implementing, in Java, the OpenCV tutorial for finding an object in a scene using homography: http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html#feature-homography
Below is my implementation, where img1 is the scene and img2 is the object
FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);
DescriptorExtractor descriptor = DescriptorExtractor.create(DescriptorExtractor.ORB);
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);
//set up img1 (scene)
Mat descriptors1 = new Mat();
MatOfKeyPoint keypoints1 = new MatOfKeyPoint();
//calculate descriptor for img1
detector.detect(img1, keypoints1);
descriptor.compute(img1, keypoints1, descriptors1);
//set up img2 (template)
Mat descriptors2 = new Mat();
MatOfKeyPoint keypoints2 = new MatOfKeyPoint();
//calculate descriptor for img2
detector.detect(img2, keypoints2);
descriptor.compute(img2, keypoints2, descriptors2);
//match the two images' descriptors
MatOfDMatch matches = new MatOfDMatch();
matcher.match(descriptors1, descriptors2, matches);
//calculate max and min distances between keypoints
double max_dist = 0;
double min_dist = 99;
List<DMatch> matchesList = matches.toList();
for (int i = 0; i < descriptors1.rows(); i++) {
    double dist = matchesList.get(i).distance;
    if (dist < min_dist) min_dist = dist;
    if (dist > max_dist) max_dist = dist;
}
//set up good matches: add a match if it is close enough
LinkedList<DMatch> good_matches = new LinkedList<DMatch>();
MatOfDMatch gm = new MatOfDMatch();
for (int i = 0; i < descriptors2.rows(); i++) {
    if (matchesList.get(i).distance < 3 * min_dist) {
        good_matches.addLast(matchesList.get(i));
    }
}
gm.fromList(good_matches);
//put the keypoint mats into lists
List<KeyPoint> keypoints1_List = keypoints1.toList();
List<KeyPoint> keypoints2_List = keypoints2.toList();
//put the keypoints into Point2f mats so Calib3d can use them to find the homography
LinkedList<Point> objList = new LinkedList<Point>();
LinkedList<Point> sceneList = new LinkedList<Point>();
for (int i = 0; i < good_matches.size(); i++) {
    objList.addLast(keypoints2_List.get(good_matches.get(i).queryIdx).pt);
    sceneList.addLast(keypoints1_List.get(good_matches.get(i).trainIdx).pt);
}
MatOfPoint2f obj = new MatOfPoint2f();
MatOfPoint2f scene = new MatOfPoint2f();
obj.fromList(objList);
scene.fromList(sceneList);
//output image
Mat outputImg = new Mat();
MatOfByte drawnMatches = new MatOfByte();
Features2d.drawMatches(img1, keypoints1, img2, keypoints2, gm, outputImg, Scalar.all(-1), Scalar.all(-1), drawnMatches, Features2d.NOT_DRAW_SINGLE_POINTS);
//run homography on the object and scene points
Mat H = Calib3d.findHomography(obj, scene, Calib3d.RANSAC, 5);
Mat tmp_corners = new Mat(4, 1, CvType.CV_32FC2);
Mat scene_corners = new Mat(4, 1, CvType.CV_32FC2);
//get the corners of the object
tmp_corners.put(0, 0, new double[] {0, 0});
tmp_corners.put(1, 0, new double[] {img2.cols(), 0});
tmp_corners.put(2, 0, new double[] {img2.cols(), img2.rows()});
tmp_corners.put(3, 0, new double[] {0, img2.rows()});
Core.perspectiveTransform(tmp_corners, scene_corners, H);
//draw the projected bounding box of the object onto the output image
Core.line(outputImg, new Point(scene_corners.get(0, 0)), new Point(scene_corners.get(1, 0)), new Scalar(0, 255, 0), 4);
Core.line(outputImg, new Point(scene_corners.get(1, 0)), new Point(scene_corners.get(2, 0)), new Scalar(0, 255, 0), 4);
Core.line(outputImg, new Point(scene_corners.get(2, 0)), new Point(scene_corners.get(3, 0)), new Scalar(0, 255, 0), 4);
Core.line(outputImg, new Point(scene_corners.get(3, 0)), new Point(scene_corners.get(0, 0)), new Scalar(0, 255, 0), 4);
The program is able to calculate and display feature points from both images. However, the scene_corners returned are 4 points in a close cluster (a small green blob), where they are supposed to represent the 4 corners of the perspective projection of the object onto the scene. I double-checked to make sure my program is as close to the C++ implementation as possible. What might be causing this?
I checked the homography matrix, and it seems the corner coordinates are skewed by 2 very big values in the matrix. Is the homography matrix being calculated incorrectly?
I'd appreciate any input, thanks.
Update:
I played around with the filter threshold for good matches and found that 2.75*min_dist seems to work well with this set of images. I can now get good matches with zero outliers. However, the bounding box is still wrong: http://i.imgur.com/fuXeOqL.png
How do I know what threshold value to use for the best matches, and how does the homography relate to them? Why was 3*min_dist used in the example?
I managed to solve the problem and use the homography correctly while investigating index-out-of-bounds errors. It turns out that when I added my good matches to my object and scene lists, I had swapped round the query and train indices:
objList.addLast(keypoints2_List.get(good_matches.get(i).queryIdx).pt);
sceneList.addLast(keypoints1_List.get(good_matches.get(i).trainIdx).pt);
According to this question, OpenCV drawMatches -- queryIdx and trainIdx, since I called
matcher.match(descriptors1, descriptors2, matches);
with descriptors1 first and then descriptors2, the correct indices are:
objList.addLast(keypoints2_List.get(good_matches.get(i).trainIdx).pt);
sceneList.addLast(keypoints1_List.get(good_matches.get(i).queryIdx).pt);
where queryIdx refers to keypoints1_List and trainIdx refers to keypoints2_List.
Here is an example result:
http://i.imgur.com/LZNBjY2.png
Currently I'm also implementing a 2D homography in Java, and I also found the OpenCV tutorial and then your question.
I don't think it will enhance your results, but in the OpenCV tutorial, when they compute the min and max distances, they loop over descriptors_object.rows, whereas your code loops with descriptors1.rows(), which is the scene descriptor, not the object descriptor.
Edit: I just noticed the same thing with the matcher. For you:
img1/descriptors1 -> the scene
img2/descriptors2 -> the object to find
In the tutorial:
matcher.match( descriptors_object, descriptors_scene, matches );
But in your code:
matcher.match(descriptors1, descriptors2, matches);
And the Javadoc:
void org.opencv.features2d.DescriptorMatcher.match(Mat queryDescriptors, Mat trainDescriptors, MatOfDMatch matches)
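Restating the Javadoc for the call order used in the question (a clarifying sketch):
// With matcher.match(descriptors1, descriptors2, matches):
// queryIdx indexes keypoints1 (the first argument, the scene) and
// trainIdx indexes keypoints2 (the second argument, the object).
DMatch m = matches.toList().get(0);
Point scenePoint = keypoints1.toList().get(m.queryIdx).pt;
Point objectPoint = keypoints2.toList().get(m.trainIdx).pt;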
