I am getting wrong eigenvectors (also checked by running multiple times to be sure) when using matrix.eig(). The matrix is:
1.2290 1.2168 2.8760 2.6370 2.2949 2.6402
1.2168 0.9476 2.5179 2.1737 1.9795 2.2828
2.8760 2.5179 8.8114 8.6530 7.3910 8.1058
2.6370 2.1737 8.6530 7.6366 6.9503 7.6743
2.2949 1.9795 7.3910 6.9503 6.2722 7.3441
2.6402 2.2828 8.1058 7.6743 7.3441 7.6870
The function returns the eigenvectors:
-0.1698 0.6764 0.1442 -0.6929 -0.1069 0.0365
-0.1460 0.6478 0.1926 0.6898 0.0483 -0.2094
-0.5239 0.0780 -0.5236 0.1621 -0.2244 0.6072
-0.4906 -0.0758 -0.4573 -0.1279 0.2842 -0.6688
-0.4428 -0.2770 0.4307 0.0226 -0.6959 -0.2383
-0.4884 -0.1852 0.5228 -0.0312 0.6089 0.2865
Matlab gives the following eigenvectors for the same input:
0.1698 -0.6762 -0.1439 0.6931 0.1069 0.0365
0.1460 -0.6481 -0.1926 -0.6895 -0.0483 -0.2094
0.5237 -0.0780 0.5233 -0.1622 0.2238 0.6077
0.4907 0.0758 0.4577 0.1278 -0.2840 -0.6686
0.4425 0.2766 -0.4298 -0.0227 0.6968 -0.2384
0.4888 0.1854 -0.5236 0.0313 -0.6082 0.2857
The eigenvalues from Matlab and JAMA match, but in the eigenvectors the first 5 columns are reversed in sign and only the last column agrees.
Is there any restriction on the kind of input that Jama.Matrix.EigenvalueDecomposition.eig()
accepts, or some other problem? Please tell me how I can fix the error. Thanks in advance.
There is no error here; both results are correct, as is any other nonzero scalar times the eigenvectors.
There are infinitely many eigenvectors that work; it's just convention that most software reports the vectors that have length one. That JAMA reports eigenvectors equal to -1 times those of Matlab is probably just an artifact of the algorithm used.
For a given matrix, the eigenvalues are unique, and their number (counting multiplicity) equals the dimension of the matrix. The corresponding eigenvectors, however, may differ between implementations, because an eigenvector can be scaled arbitrarily along its direction. In your posted results, both the JAMA and Matlab versions are correct.
Also, you can check the D matrix, where the eigenvalues come from. You will find they are the same.
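To see why both outputs are valid: if A·v = λ·v, then A·(−v) = λ·(−v) as well, so the overall sign of each column is arbitrary. A minimal standalone check (using a small made-up symmetric matrix, not your data):

```java
public class EigenSignDemo {
    // multiply a 2x2 matrix by a vector
    static double[] mul(double[][] a, double[] v) {
        return new double[] {
            a[0][0] * v[0] + a[0][1] * v[1],
            a[1][0] * v[0] + a[1][1] * v[1]
        };
    }

    public static void main(String[] args) {
        double[][] a = { { 2, 1 }, { 1, 2 } }; // symmetric; eigenvalue 3 for direction (1,1)
        double s = Math.sqrt(0.5);
        double[] v = { s, s };                 // unit eigenvector
        double[] negV = { -s, -s };            // same vector, flipped sign
        double lambda = 3.0;

        double[] av = mul(a, v);
        double[] aNegV = mul(a, negV);

        // Both v and -v satisfy A*x = lambda*x
        System.out.println(Math.abs(av[0] - lambda * v[0]) < 1e-12);       // true
        System.out.println(Math.abs(aNegV[0] - lambda * negV[0]) < 1e-12); // true
    }
}
```

Both checks print true, which is exactly the situation with the JAMA and Matlab outputs above.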
I have a program that calculates the area of a polygon in square metres, and I would like to convert it to other units (as the user chooses) using the javax.measure library.
Measure<Double, Area> a = Measure.valueOf(area, SI.SQUARE_METRE);
So if I want hectares I can use:
a.doubleValue(NonSI.HECTARE);
but the only other Area quantity is Are.
While I can easily divide by 1000*1000 to get square kilometres, it gets messier when I try to get acres, square miles, or other common areal units.
A code snippet like NonSI.MILE.divide(8.0).times... tells me you are using the old, unfinished JSR 275 and implementations like JScience 4 in your solution.
Is there any reason to prefer that over the official, finished javax.measure JSR 363?
All of the above works perfectly fine in JSR 363, with a few variations (e.g. UnitFormat is an API interface now; the JSR 363 code would be SimpleUnitFormat.getInstance().label(acre, "acre");). Unlike 275, there is also a vast infrastructure of extension modules for the SI system and other unit systems, and a couple of major open-source projects already use the new standard. Give it a try.
After some investigation and experimentation I can generate square miles and square kilometres like so:
Unit<Area> sq_km = (Unit<Area>) SI.KILOMETER.times(SI.KILOMETER);
System.out.println(a.to(sq_km));
Unit<Area> sq_mile = (Unit<Area>) NonSI.MILE.times(NonSI.MILE);
System.out.println(a.to(sq_mile));
System.out.println(a.to(NonSI.HECTARE));
Which gives me the output:
4.872007325925411E11 m²
487200.7325925411 km²
188109.25449744106 mi²
4.8720073259254105E7 ha
But acres are escaping me. According to Wikipedia, an acre is 1 furlong times 66 ft, so I tried:
Unit<Area> acre = (Unit<Area>) NonSI.MILE.divide(8.0).times(NonSI.FOOT).times(66.0);
System.out.println(a.to(acre));
which gives the right answer but with a unit of m²*4046.8564224.
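For reference, the underlying arithmetic is just fixed conversion factors; a plain-Java sketch, independent of the javax.measure API, using the area value from the output above and the acre factor (4046.8564224 m²) that the unit printed:

```java
public class AreaConversions {
    static final double SQ_M_PER_KM2  = 1_000_000.0;      // 1 km² = 1e6 m²
    static final double SQ_M_PER_MI2  = 2_589_988.110336; // 1 mi² = 1609.344² m²
    static final double SQ_M_PER_HA   = 10_000.0;         // 1 ha  = 1e4 m²
    static final double SQ_M_PER_ACRE = 4_046.8564224;    // 1 acre = 1 furlong × 66 ft

    public static void main(String[] args) {
        double areaM2 = 4.872007325925411e11; // polygon area from the output above
        System.out.println(areaM2 / SQ_M_PER_KM2 + " km²");  // ≈ 487200.73 km²
        System.out.println(areaM2 / SQ_M_PER_MI2 + " mi²");  // ≈ 188109.25 mi²
        System.out.println(areaM2 / SQ_M_PER_HA + " ha");    // ≈ 4.872e7 ha
        System.out.println(areaM2 / SQ_M_PER_ACRE + " acres");
    }
}
```

The km², mi², and ha results match the library output above, which confirms the derived acre unit is numerically correct even though its label is ugly.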
Edit
So further experimentation gives me:
Unit<Area> acre = (Unit<Area>) NonSI.MILE.divide(8.0).times(NonSI.FOOT).times(66.0);
UnitFormat.getInstance().label(acre, "acre");
and the output (for a different polygon than before):
2.6529660563942477E7 acre
Further Update
GeoTools now uses JSR-363 units so the above becomes:
Unit<Area> sq_km = (Unit<Area>) MetricPrefix.KILO(SI.METRE).multiply(MetricPrefix.KILO(SI.METRE));
System.out.println(a.to(sq_km));
System.out.println(pop.divide(a.to(sq_km)));
Unit<Area> sq_mile = (Unit<Area>) USCustomary.MILE.multiply(USCustomary.MILE);
System.out.println(a.to(sq_mile));
System.out.println(a.to(NonSI.HECTARE));
System.out.println(a.to(USCustomary.ACRE).getValue() + " acres");
So acres are in, but for some reason they don't have a unit label defined in the java8 jar (though they do in master).
I am using the Apache Commons Math library to calculate the p-value with the ChiSquareTest:
I use the method chiSquareTest(double[] expected, long[] observed); but the values I get back don't make sense to me. So I tried numerous chi-square online calculators to find out what this function actually calculates.
An example:
Group 1: {25,25}
Group 2: {30,20}
(Taken from Wikipedia, German Chi Square Test article)
P-values from
http://www.quantpsy.org/chisq/chisq.htm and
http://vassarstats.net/newcs.html:
P = 0.3149 and 0.31490284 (without Yates correction)
P = 0.42154642 and 0.4201 (with Yates correction)
Apache Commons: 0.1489146731787664
Code:
ChiSquareTest tester = new ChiSquareTest();
long[] b = {25, 25};    // passed as observed
double[] a = {30, 20};  // passed as expected
tester.chiSquareTest(a, b);
Another thing I do not understand is the need to have a long and a double array. Why not two long arrays?
There are two functions in the lib:
chiSquareTest(double[] expected, long[] observed)
chiSquareTest(long[][] values)
The first one (which I used in the question above) computes a goodness-of-fit test, but I expected the result of the second one, the test of independence.
The answer was given to me on the Apache Commons user Mailinglist, I will add a link to the archive once it is there. But it is also written in the JavaDoc.
Update:
Mailinglist Archive
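To make the difference concrete, the statistic the online calculators compute for the 2×2 table can be worked out by hand; a minimal pure-Java sketch of the Pearson independence statistic:

```java
public class ChiSquareIndependence {
    // Pearson chi-square statistic for a 2x2 contingency table
    static double statistic(long[][] t) {
        double n = t[0][0] + t[0][1] + t[1][0] + t[1][1];
        double[] rowSum = { t[0][0] + t[0][1], t[1][0] + t[1][1] };
        double[] colSum = { t[0][0] + t[1][0], t[0][1] + t[1][1] };
        double chi2 = 0.0;
        for (int i = 0; i < 2; i++) {
            for (int j = 0; j < 2; j++) {
                double expected = rowSum[i] * colSum[j] / n; // expected count under independence
                double d = t[i][j] - expected;
                chi2 += d * d / expected;
            }
        }
        return chi2;
    }

    public static void main(String[] args) {
        long[][] counts = { { 25, 25 }, { 30, 20 } };
        System.out.println(statistic(counts)); // ≈ 1.0101; with 1 degree of freedom, p ≈ 0.3149
    }
}
```

With Apache Commons Math, new ChiSquareTest().chiSquareTest(counts) on the same long[][] table returns the p-value for this statistic directly (≈ 0.3149, matching the online calculators, not the 0.1489 goodness-of-fit result).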
I spent a week at the Gremlin shell trying to compose one query that
gets all incoming and outgoing vertices, including their edges and directions. I have tried everything, e.g.:
g.V("name","testname").bothE.as('both').select().back('both').bothV.as('bothV').select(){it.map()}
The output I need is (just an example structure):
[v{'name':"testname"}]___[ine{edge_name:"nameofincomingedge"}]____[v{name:'nameofconnectedvertex'}]
[v{'name':"testname"}]___[oute{edge_name:"nameofoutgoingedge"}]____[v{name:'nameofconnectedvertex'}]
So I just want to get 1) all vertices with an exact name, 2) each such vertex's edges (including whether they are inE or outE), and 3) the connected vertices. Ideally, after that, I want to get their map() so I have the complete object properties. I don't care about the output style; I just need all the information present so I can manipulate it afterwards. I need this to train my Gremlin, but Neo4j examples are welcome. Thanks!
There's a variety of ways to approach this. Here are a few ideas that will hopefully inspire you to an answer:
gremlin> g = TinkerGraphFactory.createTinkerGraph()
==>tinkergraph[vertices:6 edges:6]
gremlin> g.V('name','marko').transform{[v:it,inE:it.inE().as('e').outV().as('v').select().toList(),outE:it.outE().as('e').inV().as('v').select().toList()]}
==>{v=v[1], inE=[], outE=[[e:e[9][1-created->3], v:v[3]], [e:e[7][1-knows->2], v:v[2]], [e:e[8][1-knows->4], v:v[4]]]}
The transform converts the incoming vertex to a Map and does an internal traversal over the in/out edges. You could also use path as follows to get a similar output:
gremlin> g.V('name','marko').transform{[v:it,inE:it.inE().outV().path().toList(),outE:it.outE().inV().path().toList()]}
==>{v=v[1], inE=[], outE=[[v[1], e[9][1-created->3], v[3]], [v[1], e[7][1-knows->2], v[2]], [v[1], e[8][1-knows->4], v[4]]]}
I provided these answers using TinkerPop 2.x, as that looked like what you were using judging from the syntax. TinkerPop 3.x is now available, and if you are just getting started, you should take a look at what the latest version has to offer:
http://tinkerpop.incubator.apache.org/
Under 3.0 syntax you might do something like this:
gremlin> g.V().has('name','marko').as('a').bothE().bothV().where(neq('a')).path()
==>[v[1], e[9][1-created->3], v[3]]
==>[v[1], e[7][1-knows->2], v[2]]
==>[v[1], e[8][1-knows->4], v[4]]
I know that you wanted the direction of the edge in the output, but that's easy enough to detect by analyzing the path.
UPDATE: Here's the above query written with Daniel's suggestion of otherV usage:
gremlin> g.V().has('name','marko').bothE().otherV().path()
==>[v[1], e[9][1-created->3], v[3]]
==>[v[1], e[7][1-knows->2], v[2]]
==>[v[1], e[8][1-knows->4], v[4]]
To see the data from this you can use by() to pick apart each Path object. This extension of the above query applies valueMap to each piece of each Path:
gremlin> g.V().has('name','marko').bothE().otherV().path().by(__.valueMap(true))
==>[{label=person, name=[marko], id=1, age=[29]}, {label=created, weight=0.4, id=9}, {label=software, name=[lop], id=3, lang=[java]}]
==>[{label=person, name=[marko], id=1, age=[29]}, {label=knows, weight=0.5, id=7}, {label=person, name=[vadas], id=2, age=[27]}]
==>[{label=person, name=[marko], id=1, age=[29]}, {label=knows, weight=1.0, id=8}, {label=person, name=[josh], id=4, age=[32]}]
I am doing feature detection in two different images, and the resulting image from the descriptor matcher links features that do not belong to each other, as seen in img_1 below.
The steps I followed are as follows:
Feature detection using the SIFT algorithm. This step yields a MatOfKeyPoint object for each image, i.e.
MatKeyPts_1 and MatKeyPts_2
Descriptor extraction using the SURF algorithm.
Descriptor matching using the brute-force matcher. The code for this step is posted below; the descriptors of the query_img and the train_img were used as its input. I am also using my own classes, which I created to control and maintain this process.
The problem is that the result of step 3 is the image posted below (img_1), which links completely dissimilar features to each other. I expected, for example, a specific region of the hand (img_1, right) to be linked to the similar feature on the hand in (img_1, left), but as you can see I got mixed and unrelated features.
My question is: how can I get correct feature matching using SIFT and SURF as the feature detector and descriptor extractor, respectively?
private static void descriptorMatcher() {
    MatOfDMatch matDMatch = new MatOfDMatch(); // empty MatOfDMatch object
    // the descriptors of the query and the train image are used as parameters
    dm.match(matFactory.getComputedDescExtMatAt(0), matFactory.getComputedDescExtMatAt(1), matDMatch);
    matFactory.addRawMatchesMatDMatch(matDMatch);

    /* writing the raw DMatches */
    Mat outImg = new Mat();
    Features2d.drawMatches(matFactory.getMatAt(0), matFactory.getMatKeyPtsAt(0),
            matFactory.getMatAt(1), matFactory.getMatKeyPtsAt(1),
            MatFactory.lastAddedObj(matFactory.getRawMatchesMatDMatchList()), outImg);
    matFactory.addRawMatchedImage(outImg);
    MatFactory.writeMat(FilePathUtils.newOutputPath(SystemConstants.RAW_MATCHED_IMAGE),
            MatFactory.lastAddedObj(matFactory.getRawMatchedImageList())); // this produces img_2 posted below

    /* getting the top 10 shortest distances */
    List<DMatch> dMtchList = matDMatch.toList();
    List<DMatch> goodDMatchList = MatFactory.getTopGoodMatches(dMtchList, 0, 10); // sorts dMtchList ascending by distance and keeps only the top 10

    /* converting the good DMatches to a MatOfDMatch */
    MatOfDMatch goodMatDMatches = new MatOfDMatch();
    goodMatDMatches.fromList(goodDMatchList);
    matFactory.addGoodMatchMatDMatch(goodMatDMatches);

    /* drawing and writing the good-matches image */
    Features2d.drawMatches(matFactory.getMatAt(0), matFactory.getMatKeyPtsAt(0),
            matFactory.getMatAt(1), matFactory.getMatKeyPtsAt(1),
            MatFactory.lastAddedObj(matFactory.getGoodMatchMatDMatchList()), outImg);
    MatFactory.writeMat(FilePathUtils.newOutputPath(SystemConstants.GOOD_MATCHED_IMAGE), outImg); // this produces img_1 posted below
}
Img_1
I have two vectors represented as HashMaps, and I want to measure the similarity between them. I use the cosine similarity metric, as in the following code:
public static void cosineSimilarity(HashMap<Integer, Double> vector1, HashMap<Integer, Double> vector2) {
    double scalar = 0.0d, v1Norm = 0.0d, v2Norm = 0.0d;
    for (int featureId : vector1.keySet()) {
        scalar += (vector1.get(featureId) * vector2.get(featureId));
        v1Norm += (vector1.get(featureId) * vector1.get(featureId));
        v2Norm += (vector2.get(featureId) * vector2.get(featureId));
    }
    v1Norm = Math.sqrt(v1Norm);
    v2Norm = Math.sqrt(v2Norm);
    double cosine = scalar / (v1Norm * v2Norm);
    System.out.println("v1 is: " + v1Norm + " , v2 is: " + v2Norm + " Cosine is: " + cosine);
}
Strangely, two vectors that are supposed to be dissimilar score close to 0.9999, which is just wrong!
Please note that the keys are exactly the same for both maps.
data file is here: file
File format:
FeatureId vector1_value vector2_value
Your code is fine.
The vectors are dominated by several large features. In those features, the two vectors are almost collinear, which is why the similarity measure is close to 1.
I include the six largest features below. Look at the ratio of vec2 over vec1: it's almost identical across those features.
feature vec1 vec2 vec2/vec1
64806110 2875 1.85E+07 6.43E+03
64806108 5750 3.68E+07 6.40E+03
64806107 8625 5.49E+07 6.37E+03
64806106 11500 7.29E+07 6.34E+03
64806111 14375 9.07E+07 6.31E+03
64806109 17250 1.08E+08 6.28E+03
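To see the effect numerically, here is a quick standalone check using just the six dominant features from the table above (values rounded as shown in the table):

```java
public class DominantFeatureCosine {
    // cosine similarity of two dense vectors
    static double cosine(double[] v1, double[] v2) {
        double dot = 0, n1 = 0, n2 = 0;
        for (int i = 0; i < v1.length; i++) {
            dot += v1[i] * v2[i];
            n1 += v1[i] * v1[i];
            n2 += v2[i] * v2[i];
        }
        return dot / (Math.sqrt(n1) * Math.sqrt(n2));
    }

    public static void main(String[] args) {
        // the six largest features from the table above
        double[] vec1 = { 2875, 5750, 8625, 11500, 14375, 17250 };
        double[] vec2 = { 1.85e7, 3.68e7, 5.49e7, 7.29e7, 9.07e7, 1.08e8 };
        // nearly collinear, so the cosine is almost exactly 1
        System.out.println(cosine(vec1, vec2)); // > 0.999
    }
}
```

These six features alone yield a cosine above 0.999, and since they dominate the norms of both vectors, they pull the overall similarity toward 1 regardless of what the many small features do.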