What to do with ORB feature matches? - java

I am working on a logo detection application in OpenCV on Android. I have done a lot of searching and found that feature detection is most often used for this purpose.
So I tried different detectors and matchers, and finally wrote code that works well with an ORB feature detector and a brute-force matcher:
private DescriptorMatcher BruteMatcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
I computed the number of matches, the minimum distance, and the number of good matches like this:
List<DMatch> matches = mat_of_matches.toList();
double max_dist = 0, min_dist = 100;
int row_count = matches.size();
for (int i = 0; i < row_count; i++) {
    double dist = matches.get(i).distance;
    //System.out.println("dist=" + dist);
    if (dist < min_dist) min_dist = dist;
    if (dist > max_dist) max_dist = dist;
}
// Log.e("Max_dist,Min_dist", "Max=" + max_dist + ", Min=" + min_dist);
List<DMatch> good_matches = new ArrayList<DMatch>();
double good_dist = 3 * min_dist;
for (int i = 0; i < row_count; i++) {
    if (matches.get(i).distance < good_dist) {
        good_matches.add(matches.get(i));
        //Log.e("good_matches", "good_match_id=" + matches.get(i).trainIdx);
    }
}
Finally, I created a threshold like this:
if (row_count > 490 && good_matches.size() < 60 && min_dist < 12) logo_detected = true;
else logo_detected = false;
The problem is that many other objects also fall within this threshold, so the application keeps saying the logo is detected.
What should I do with the matched features? Is thresholding the right approach, or do I need to do something else to detect the logo?
Please help, thanks.
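For comparison, a widely used alternative to a fixed multiple of min_dist is Lowe's ratio test: keep a match only when its best distance is clearly below the second-best distance for the same query descriptor. A minimal plain-Java sketch (the distance values, the 0.75 threshold, and the helper name are illustrative assumptions, not OpenCV API):

```java
import java.util.ArrayList;
import java.util.List;

public class RatioTest {
    // Keep index i only if bestDist[i] < ratio * secondDist[i] (Lowe's ratio test).
    static List<Integer> filterByRatio(double[] bestDist, double[] secondDist, double ratio) {
        List<Integer> kept = new ArrayList<>();
        for (int i = 0; i < bestDist.length; i++) {
            if (bestDist[i] < ratio * secondDist[i]) {
                kept.add(i);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        // Hypothetical (best, second-best) Hamming distances for four query descriptors
        double[] best   = {10, 40, 12, 55};
        double[] second = {50, 45, 48, 60};
        // Only descriptors whose best match is clearly better than the runner-up survive
        System.out.println(filterByRatio(best, second, 0.75)); // [0, 2]
    }
}
```

With OpenCV this test is normally applied to the output of knnMatch with k=2, rather than to raw arrays as here.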

Related

Transforming google maps waypoint directions to via-directions

I am trying to change the URL of Google Maps directions from directions that have multiple waypoints to directions where these intermediate waypoints are deleted but the route remains the same.
Specifically from: https://www.google.nl/maps/dir/51.804323,5.8061076/51.8059489,5.7971745/51.8095767,5.8032703/#51.8068221,5.806553,16.5z/data=!4m2!4m1!3e2
to:
https://www.google.nl/maps/dir/51.804323,5.8061076/51.8095767,5.8032703/#51.8069622,5.8023697,17z/data=!4m9!4m8!1m5!3m4!1m2!1d5.7971218!2d51.8060231!3s0x47c7061292e15b39:0x4d7bcd7484c71cf3!1m0!3e2
EDIT: because I had to drag the route manually in the second URL, the coordinates of the middle marker are not exactly the same as in the first URL; this difference can be ignored.
The start part of these URLs seems pretty obvious, but the data parameter is still unclear to me (without it the route is not correct). I tried the Google Maps API, but it returns an XML or JSON file, whereas I just need the corresponding URL that I would also get using the web interface of Google Maps.
How can I transform the first URL into the second?
After a long time trying to figure out how the URL scheme works, I finally figured it out (for the directions interface).
The URL consists of the following steps:
You start off with "https://www.google.nl/maps/dir/"
This is followed by the start coordinates in the form "[LAT],[LONG]", the coordinates of intermediate waypoints in the same format, and then the coordinates of the end point. All these coordinates are separated by a "/" character.
This is followed by "#[LAT],[LONG],[ZOOM]/" where LAT LONG are the coordinates of the viewbox and ZOOM is the level of zoom (lower means more zoomed out).
This is followed by "data=" and then "!4m[(5x+4+y)]!4m[(5x+3+y)]" where x is the number of VIA-points and y is the number of intermediate waypoints in the route. So if you have a route from A to D with intermediate destinations B and C and VIA-points Q, W and R, you have x=3 and y=2, so you get the string "!4m21!4m20".
Next we get all VIA-points. This is done with the following scheme: you append "!1m[(5x)]" where x is the number of VIA-points between the current waypoint and the next. So "!1m5...[data]...!1m0" means that between the start and the first waypoint there is one VIA-point, and between the first waypoint and the end there are none. Each "!1m[(5x)]" is followed by x instances of "!1d[LONG]!2d[LAT]!3s[COORDINATE]". I am not entirely sure what COORDINATE does, but it has to be in the format "0x[HEX]:0x[HEX]" where HEX is a hexadecimal number; I simply use 0 for this. That works in all my test cases and does not seem to influence anything.
This is then followed by "!1m0". I believe this is necessary to indicate that after the last waypoint (the finish) there are no more VIA points, which is useless information but needed nevertheless.
Finally, we get the last parameter, which looks like "!3e[n]" where n is a discrete variable indicating the type of navigation: n=0 for driving by car, n=1 for cycling, n=2 for walking, and n=3 for public transportation.
That is mostly what I found out about the URL scheme by testing it relentlessly. There are more parameters you can add, but those need more testing.
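The counter arithmetic from step 3 above can be sketched as a tiny helper (the method name is mine, not part of any Google API):

```java
public class DataPrefix {
    // x = number of VIA-points, y = number of intermediate waypoints
    static String buildPrefix(int viaPoints, int waypoints) {
        return "!4m" + (5 * viaPoints + 4 + waypoints)
             + "!4m" + (5 * viaPoints + 3 + waypoints);
    }

    public static void main(String[] args) {
        // The example from the text: x = 3 VIA-points, y = 2 intermediate waypoints
        System.out.println(buildPrefix(3, 2)); // !4m21!4m20
    }
}
```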
Finally, I have included my implementation for transforming a URL with 0 or more waypoints and 0 or more VIA-points into a URL containing only VIA-points. Feel free to use it, and please let me know if you find any mistakes so I can fix them.
BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
System.out.print("Enter URL: ");
String originalURL = br.readLine();
// start of URL
String start = "https://www.google.nl/maps/dir/";
// navigation type
String type = "!3e1";
// getMatcher(String s, String regex) is a helper, presumably Pattern.compile(regex).matcher(s)
Matcher t = getMatcher(originalURL, "!3e\\d");
if (t.find()) {
    type = t.group();
}
// viewbox parameter
Matcher v = getMatcher(originalURL, "#[-]?[\\d]+\\.[\\d]+,[-]?[\\d]+.[\\d]+,[-]?[\\d]+[[.]+[\\d]+]*z");
v.find();
String viewbox = v.group();
// order of points when using VIA
String data = originalURL.substring(originalURL.indexOf("/data=") + 6);
ArrayList<String> order = new ArrayList<>();
Matcher o = getMatcher(data, "!1m[\\d]+");
while (o.find()) {
    order.add(o.group());
}
if (order.size() > 0) {
    // remove the last element, which is always !1m0 and should not
    // appear in the VIA-list
    order.remove(order.size() - 1);
}
// "!1m2" does not encode ordering; it only announces that coordinates follow
order.removeIf(a -> a.equals("!1m2"));
// coordinates of via-points
ArrayList<String> originalViaPoints = new ArrayList<>();
Matcher c = getMatcher(data, "!1d[-]?[\\d]+.[\\d]+!2d[-]?[\\d]+.[\\d]+");
while (c.find()) {
    String[] g = c.group().substring(3).split("!2d");
    originalViaPoints.add(g[1] + "," + g[0]);
}
// coordinates of start, end and intermediate points
originalURL = originalURL.substring(0, v.start());
ArrayList<String> waypoints = new ArrayList<>();
Matcher p = getMatcher(originalURL, "[-]?[\\d]+\\.[\\d]+,[-]?[\\d]+.[\\d]+");
while (p.find()) {
    waypoints.add(p.group());
}
// start and end must be displayed separately
String bound = waypoints.get(0) + "/" + waypoints.get(waypoints.size() - 1);
// add intermediate waypoints and via-points to a list of VIA points
ArrayList<String> viaPoints = new ArrayList<>();
if (!order.isEmpty()) {
    // we have VIA points to process
    int via_index = 0;
    int wp_index = 1;
    for (String step : order) {
        int iter = Integer.valueOf(step.substring(3)) / 5;
        for (int i = 0; i < iter; i++) {
            viaPoints.add(originalViaPoints.get(via_index++));
        }
        viaPoints.add(waypoints.get(wp_index++));
    }
} else {
    // there are only waypoints in the URL
    for (int i = 1; i < waypoints.size() - 1; i++) {
        viaPoints.add(waypoints.get(i));
    }
}
// calculate prefix according to the number of via-point nodes
int nodes = viaPoints.size();
String prefix = "!4m" + (5 * nodes + 4) + "!4m" + (5 * nodes + 3) + "!1m" + (5 * nodes);
// build the nodes string
String viaString = "";
for (String node : viaPoints) {
    viaString += "!3m4!1m2";
    String[] pieces = node.split(",");
    viaString += "!1d" + pieces[1]; // ALERT: the coordinates are flipped!
    viaString += "!2d" + pieces[0];
    viaString += "!3s0x0:0x0";
}
String url = start + bound + "/" + viewbox + "/data=" + prefix + viaString + "!1m0" + type;
According to this site, in the old URL scheme there should be 3 ways to add a via point to a route:
https://www.google.com/maps?dirflg=w&saddr=51.804323,5.8061076&daddr=51.8059489,5.7971745+to:51.8095767,5.8032703
https://www.google.com/maps?dirflg=w&saddr=51.804323,5.8061076&daddr=51.8095767,5.8032703&mrad=51.8059489,5.7971745
https://www.google.com/maps?dirflg=w&saddr=51.804323,5.8061076&daddr=51.8095767,5.8032703&via=51.8059489,5.7971745
But it seems they dropped support for mrad and via. And when using to, the address is shown as it would be in the new URL scheme.
As for the new URL scheme, there does not seem to be a lot of documentation on it, so I am not sure Google wants you to play with it. But here is how to do it with the new scheme.
According to this blog post, the !xx is a separator. Looking at your URL:
data=
!4m9
!4m8
!1m5
!3m4
!1m2
!1d5.7971218
!2d51.8060231
!3s0x47c7061292e15b39:0x4d7bcd7484c71cf3
!1m0
!3e2
It is really unclear what all of it does, but at least we can see your via lat and via lng in the !1d and !2d fields.
Also, the !3s in hex format looks like some kind of lat/lng; it might be the area of search. In decimal it reads 5172109373901724473:5583282063383403763.
In short, just change the !1d and !2d fields and it seems to work fine, like this:
https://www.google.nl/maps/dir/51.804323,5.8061076/51.8095767,5.8032703/#51.8769532,5.8550939,7.58z/data=!4m9!4m8!1m5!3m4!1m2!1d5.871218!2d52.8060231!3s0x47c7061292e15b39:0x4d7bcd7484c71cf3!1m0!3e2
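A minimal sketch of that !1d/!2d substitution with a plain regex (the helper name and the simplified pattern are my own; the coordinates come from the URLs above):

```java
public class ViaRewrite {
    // Replace the first !1d<lng>!2d<lat> pair in the data parameter
    static String replaceVia(String url, double lng, double lat) {
        return url.replaceFirst("!1d[-]?[0-9.]+!2d[-]?[0-9.]+", "!1d" + lng + "!2d" + lat);
    }

    public static void main(String[] args) {
        String data = "!4m9!4m8!1m5!3m4!1m2!1d5.7971218!2d51.8060231!3s0x0:0x0!1m0!3e2";
        System.out.println(replaceVia(data, 5.871218, 52.8060231));
        // !4m9!4m8!1m5!3m4!1m2!1d5.871218!2d52.8060231!3s0x0:0x0!1m0!3e2
    }
}
```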

JFreeChart addBin with race condition?

I'm currently working on a project where I want to plot some measured times. For this I'm using JFreeChart 1.0.13.
I want to create a Histogram with SimpleHistogramBins and then add data to these bins. Here's the code:
Double min = Collections.min(values);
Double max = Collections.max(values);
Double current = min;
int range = 1000;
double minimalOffset = 0.0000000001;
Double stepWidth = (max - min) / range;
SimpleHistogramDataset dataSet = new SimpleHistogramDataset("");
for (int i = 0; i <= range; i++) {
    SimpleHistogramBin bin;
    if (i != 0) {
        bin = new SimpleHistogramBin(current + minimalOffset, current + stepWidth);
    } else {
        bin = new SimpleHistogramBin(current, current + stepWidth);
    }
    dataSet.addBin(bin);
    current += stepWidth;
}
for (Double value : values) {
    System.out.println(value);
    dataSet.addObservation(value);
}
This crashes with Exception in thread "main" java.lang.RuntimeException: No bin. At first I thought this was caused by a value hitting a gap between the bins, but when I started debugging, the error did not occur: the program ran through and I got a plot. Then I added this:
Thread.sleep(1000);
before
for (Double value : values) {
System.out.println(value);
dataSet.addObservation(value);
}
and again, no error.
This got me thinking that maybe there is some kind of race condition. Does JFreeChart add the bins asynchronously? I would appreciate hints in any direction as to why I get this kind of behaviour.
Thanks
In case anyone has the same problem, I found a solution:
Instead of using SimpleHistogramBin I'm using HistogramBin. This basically reduces my code to a few lines:
HistogramDataset dataSet = new HistogramDataset();
dataSet.setType(HistogramType.FREQUENCY);
dataSet.addSeries("Hibernate", Doubles.toArray(values), 1000);
This approach automatically creates the bins I need and the problem is gone.
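As a side note, the original "No bin" error is consistent with a value landing in one of the tiny gaps that the minimalOffset construction leaves between bins, rather than with a race condition. JFreeChart itself is not needed to see the gap; the numbers below are illustrative, and inBin mirrors the default inclusive bounds of SimpleHistogramBin:

```java
public class BinGap {
    // Inclusive-bound membership test, mirroring SimpleHistogramBin's defaults
    static boolean inBin(double value, double lower, double upper) {
        return value >= lower && value <= upper;
    }

    public static void main(String[] args) {
        double min = 0.0, stepWidth = 0.5, minimalOffset = 0.0000000001;
        // Bins built as in the question: [0, 0.5], then [0.5 + offset, 1.0], ...
        double bin0Upper = min + stepWidth;
        double bin1Lower = bin0Upper + minimalOffset;
        // A value inside the offset gap belongs to no bin at all -> "No bin"
        double value = bin0Upper + minimalOffset / 2;
        System.out.println("in bin 0: " + inBin(value, min, bin0Upper));                 // false
        System.out.println("in bin 1: " + inBin(value, bin1Lower, min + 2 * stepWidth)); // false
    }
}
```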

How do I know that my neural network is being trained correctly

I've written an Adaline neural network. Everything I have compiles, so I know there isn't a problem with what I've written, but how do I know that I have the algorithm correct? When I try training the network, my computer just says the application is running, and it keeps going; after about 2 minutes I stopped it.
Does training normally take this long (I have 10 parameters and 669 observations)?
Do I just need to let it run longer?
Here is my train method:
public void trainNetwork()
{
    int good = 0;
    // train until all patterns are good
    while (good < trainingData.size())
    {
        for (int i = 0; i < trainingData.size(); i++)
        {
            this.setInputNodeValues(trainingData.get(i));
            adalineNode.run();
            if (nodeList.get(nodeList.size() - 1).getValue(Constants.NODE_VALUE) != adalineNode.getValue(Constants.NODE_VALUE))
            {
                adalineNode.learn();
            }
            else
            {
                good++;
            }
        }
    }
}
And here is my learn method
public void learn()
{
    Double nodeValue = value.get(Constants.NODE_VALUE);
    double nodeError = nodeValue * -2.0;
    error.put(Constants.NODE_ERROR, nodeError);
    BaseLink link;
    int count = inLinks.size();
    double delta;
    for (int i = 0; i < count; i++)
    {
        link = inLinks.get(i);
        Double learningRate = value.get(Constants.LEARNING_RATE);
        Double value = inLinks.get(i).getInValue(Constants.NODE_VALUE);
        delta = learningRate * value * nodeError;
        inLinks.get(i).updateWeight(delta);
    }
}
And here is my run method
public void run()
{
    double total = 0;
    // find out how many input links there are
    int count = inLinks.size();
    for (int i = 0; i < count - 1; i++)
    {
        // grab a specific link in sequence
        BaseLink specificInLink = inLinks.get(i);
        Double weightedValue = specificInLink.weightedInValue(Constants.NODE_VALUE);
        total += weightedValue;
    }
    this.setValue(Constants.NODE_VALUE, this.transferFunction(total));
}
These functions are part of a library that I'm writing; the entire thing is on GitHub here. Now that everything is written, I just don't know how I should go about testing to make sure I have the training method right.
I asked a similar question a few months ago.
Ten parameters with 669 observations is not a large data set. So there is probably an issue with your algorithm. There are two things you can do that will make debugging your algorithm much easier:
Print the sum of squared errors at the end of each iteration. This will help you determine if the algorithm is converging (at all), stuck at a local minimum, or just very slowly converging.
Test your code on a simple data set. Pick something easy, like a two-dimensional input that you know is linearly separable. Will your algorithm learn a simple AND function of two inputs? If so, will it learn an XOR function (2 inputs, 2 hidden nodes, 2 outputs)?
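Both suggestions above can be combined in a small, self-contained sketch: a single Adaline-style unit trained with the LMS rule on the AND function, printing the sum of squared errors each epoch. The learning rate, epoch count, and bipolar encoding are arbitrary choices of mine, not taken from the question's library:

```java
public class AdalineAnd {
    // Train a single Adaline unit with the LMS rule and return the learned weights.
    static double[] train(double[][] x, double[] t, double rate, int epochs) {
        double[] w = new double[x[0].length];
        for (int epoch = 0; epoch < epochs; epoch++) {
            double sse = 0;
            for (int i = 0; i < x.length; i++) {
                double net = 0;
                for (int j = 0; j < w.length; j++) net += w[j] * x[i][j];
                double err = t[i] - net;          // Adaline learns on the raw net value
                sse += err * err;
                for (int j = 0; j < w.length; j++) w[j] += rate * err * x[i][j];
            }
            // Watch this column: it should fall quickly and then flatten near its
            // least-squares floor; a value that never falls suggests a broken update rule.
            System.out.println("epoch " + epoch + " SSE = " + sse);
        }
        return w;
    }

    public static void main(String[] args) {
        // AND with bipolar targets; the first input is a constant bias of 1
        double[][] x = {{1, 0, 0}, {1, 0, 1}, {1, 1, 0}, {1, 1, 1}};
        double[] t = {-1, -1, -1, 1};
        double[] w = train(x, t, 0.1, 50);
        for (int i = 0; i < x.length; i++) {
            double net = w[0] * x[i][0] + w[1] * x[i][1] + w[2] * x[i][2];
            System.out.println((int) x[i][1] + " AND " + (int) x[i][2] + " -> " + (net > 0 ? 1 : -1));
        }
    }
}
```

The thresholded outputs should reproduce the AND truth table even though the SSE plateaus above zero, which is exactly the "acceptable error limit" point the other answer makes.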
You should add debug/test-mode messages to watch whether the weights are saturating and converging. It is likely that the loop condition good < trainingData.size() never becomes false, so the loop never terminates.
Based on Double nodeValue = value.get(Constants.NODE_VALUE); I assume NODE_VALUE is of type Double? If that's the case, then the check nodeList.get(nodeList.size()-1).getValue(Constants.NODE_VALUE) != adalineNode.getValue(Constants.NODE_VALUE) may never converge exactly, since it is a strict comparison of doubles whose values depend on many other parameters, and your convergence relies on it. Typically, when training a neural network, you stop when the error is within an acceptable limit, not at the strict equality you are checking.
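A minimal sketch of the tolerance-based check this suggests (the EPSILON value is an arbitrary choice, to be tuned to your problem):

```java
public class Tolerance {
    static final double EPSILON = 1e-6;

    // Compare two doubles within a tolerance instead of using != directly.
    static boolean closeEnough(double expected, double actual) {
        return Math.abs(expected - actual) < EPSILON;
    }

    public static void main(String[] args) {
        double target = 0.3;
        double computed = 0.1 + 0.2;                        // 0.30000000000000004 in IEEE 754
        System.out.println(computed == target);             // false: strict equality fails
        System.out.println(closeEnough(target, computed));  // true: within tolerance
    }
}
```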
Hope this helps

Bifurcation and ridge ending point

Is there any way to find bifurcation points and ridge ending points in an image (hand, vein) using Java code only, not Matlab etc.? Can I achieve this with the ImageJ library for Java?
A scientific description you find in Minutiae Extraction from Fingerprint Images.
Some algorithms are implemented in OpenCV see the segmentation section.
The OpenCV library can be linked to java using JNI.
There is an ImageJ plugin that could help you to do that:
AnalyzeSkeleton
(for the source see here )
You can extract branching points and endpoints with the help of its SkeletonResult class.
Many thanks for helping me out. I went through AnalyzeSkeleton and got the result in a SkeletonResult response using IJ. For this I used IJ.run(imp, "Skeletonize", "");
// Initialize AnalyzeSkeleton_
AnalyzeSkeleton_ skel = new AnalyzeSkeleton_();
skel.calculateShortestPath = true;
skel.setup("", imp);
// Perform analysis in silent mode
// (work on a copy of the ImagePlus if you don't want it displayed)
// run(int pruneIndex, boolean pruneEnds, boolean shortPath, ImagePlus origIP, boolean silent, boolean verbose)
SkeletonResult skelResult = skel.run(AnalyzeSkeleton_.NONE, false, true, null, true, false);
// Read the results
Object shortestPaths[] = skelResult.getShortestPathList().toArray();
double branchLengths[] = skelResult.getAverageBranchLength();
int branchNumbers[] = skelResult.getBranches();
long totalLength = 0;
for (int i = 0; i < branchNumbers.length; i++) {
    totalLength += branchNumbers[i] * branchLengths[i];
}
double cumulativeLengthOfShortestPaths = 0;
for (int i = 0; i < shortestPaths.length; i++) {
    cumulativeLengthOfShortestPaths += (Double) shortestPaths[i];
}
System.out.println("totalLength " + totalLength);
System.out.println("cumulativeLengthOfShortestPaths " + cumulativeLengthOfShortestPaths);
System.out.println("getNumOfTrees " + skelResult.getNumOfTrees());
System.out.println("getAverageBranchLength " + skelResult.getAverageBranchLength().length);
System.out.println("getBranches " + skelResult.getBranches().length);
System.out.println("getEndPoints " + skelResult.getEndPoints().length);
System.out.println("getGraph " + skelResult.getGraph().length);
System.out.println("getJunctions " + skelResult.getJunctions().length);
System.out.println("getJunctionVoxels " + skelResult.getJunctionVoxels().length);
System.out.println("getListOfEndPoints " + skelResult.getListOfEndPoints().size());
System.out.println("getListOfJunctionVoxels " + skelResult.getListOfJunctionVoxels().size());
System.out.println("getMaximumBranchLength " + skelResult.getMaximumBranchLength().length);
System.out.println("getNumberOfVoxels " + skelResult.getNumberOfVoxels().length);
System.out.println("getQuadruples " + skelResult.getQuadruples().length);
However, I am not able to find which method in the SkeletonResult class returns the bifurcation points. Could you please help me a little more? Thanks, Amar

How to find the index of the "train" image to which a matched keypoint belongs in Android

After a few hours of research, I still can't find the index of the "train" image to which a matched keypoint belongs. What I mean is:
FeatureDetector surfDetector = FeatureDetector.create(FeatureDetector.FAST);
MatOfKeyPoint vector = new MatOfKeyPoint();
surfDetector.detect(mImg, vector);
DescriptorExtractor siftDescriptor = DescriptorExtractor.create(DescriptorExtractor.BRIEF);
Mat descriptors = new Mat();
siftDescriptor.compute(mImg, vector, descriptors);
DescriptorMatcher matcherBruteForce = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_SL2);
List<MatOfDMatch> matches = new ArrayList<MatOfDMatch>();
matcherBruteForce.knnMatch(descriptors, descriptors, matches, 2);
I just use the same image as an example. After this, how do I find the index of the "train" image to which a matched keypoint belongs?
I think you are missing something: probably you are trying to find a specific object, and you want to know which image in a collection of several images has the "best match" with the keypoints of the object you are looking for. Judging from the sample code you provide, you extract all SIFT/SURF keypoints from an unknown image and apply the matcher between the object keypoints and your current image. What you need is some kind of metric that tells you how good the match between your images is, the simplest being the count of matched keypoints. Then you just need to remember which image in your collection led to the maximum number of matched keypoints. The number of matched keypoints is probably not the best metric to use, and you should check the vast literature on object detection using SIFT and related methods to find one that best suits your purpose.
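The "remember which image led to the maximum" bookkeeping is just an argmax over per-image match counts; a sketch with hypothetical counts (no OpenCV involved, names are mine):

```java
public class BestTrainImage {
    // Return the index of the train image with the most matched keypoints.
    static int bestImageIndex(int[] matchCounts) {
        int best = 0;
        for (int i = 1; i < matchCounts.length; i++) {
            if (matchCounts[i] > matchCounts[best]) best = i;
        }
        return best;
    }

    public static void main(String[] args) {
        // Hypothetical match counts for four train images in the collection
        int[] counts = {12, 87, 34, 9};
        System.out.println("best train image: " + bestImageIndex(counts)); // index 1
    }
}
```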
By the way, your code is quite confusing: you declare a feature detector named surfDetector but instantiate a "FAST" detector, and you declare a feature extractor named siftDescriptor but instantiate a "BRIEF" extractor. I suggest you keep the detector and extractor consistent where both exist, e.g. a SURF detector with a SURF extractor.
I guess this is what you are looking for, just adapt the variable names to your case:
//scene = query; object = train
Point point1 = keypoints_scene.get( matches.get(i).queryIdx ).pt;
Point point2 = keypoints_object.get( matches.get(i).trainIdx ).pt ;
To find the best match, do the following. Each match has a parameter called distance; the one with the smallest distance is the best match. So you go through the list of all your matches and find the one with the smallest distance:
float min_dist = Float.MAX_VALUE;
int min_position = 0;
for (int i = 0; i < matches.size(); i++) {
    if (matches.get(i).distance < min_dist) {
        min_dist = matches.get(i).distance;
        min_position = i;
    }
}
Point best_point_scene = keypoints_scene.get(matches.get(min_position).queryIdx).pt;
Point best_point_object = keypoints_object.get(matches.get(min_position).trainIdx).pt;
