Representing a graph with a 2D array - java

I have a question where I need to represent a graph as a 2D array.
I have a sample as well, but I have no idea how it works.
This is the graph I am given.
And this is how they represent it using a 2D array.
How does one translate to the other?
Also, this is part of an implementation of Dijkstra's algorithm.
Here is the code for your reference; it is taken from geeksforgeeks:
// A Java program for Dijkstra's single source shortest path algorithm.
// The program is for the adjacency matrix representation of the graph.
import java.util.*;
import java.lang.*;
import java.io.*;

class ShortestPath {

    static final int V = 9;

    // A utility function to find the vertex with minimum distance value,
    // from the set of vertices not yet included in the shortest path tree
    int minDistance(int dist[], Boolean sptSet[])
    {
        // Initialize min value
        int min = Integer.MAX_VALUE, min_index = -1;
        for (int v = 0; v < V; v++)
            if (sptSet[v] == false && dist[v] <= min) {
                min = dist[v];
                min_index = v;
            }
        return min_index;
    }

    // A utility function to print the constructed distance array
    void printSolution(int dist[])
    {
        System.out.println("Vertex \t\t Distance from Source");
        for (int i = 0; i < V; i++)
            System.out.println(i + " \t\t " + dist[i]);
    }

    // Function that implements Dijkstra's single source shortest path
    // algorithm for a graph represented using adjacency matrix
    // representation
    void dijkstra(int graph[][], int src)
    {
        int dist[] = new int[V]; // The output array. dist[i] will hold
                                 // the shortest distance from src to i

        // sptSet[i] will be true if vertex i is included in the shortest
        // path tree, or if the shortest distance from src to i is finalized
        Boolean sptSet[] = new Boolean[V];

        // Initialize all distances as INFINITE and sptSet[] as false
        for (int i = 0; i < V; i++) {
            dist[i] = Integer.MAX_VALUE;
            sptSet[i] = false;
        }

        // Distance of source vertex from itself is always 0
        dist[src] = 0;

        // Find shortest path for all vertices
        for (int count = 0; count < V - 1; count++) {
            // Pick the minimum distance vertex from the set of vertices
            // not yet processed. u is always equal to src in the first
            // iteration.
            int u = minDistance(dist, sptSet);

            // Mark the picked vertex as processed
            sptSet[u] = true;

            // Update dist value of the adjacent vertices of the
            // picked vertex.
            for (int v = 0; v < V; v++)
                // Update dist[v] only if it is not in sptSet, there is an
                // edge from u to v, and the total weight of the path from
                // src to v through u is smaller than the current dist[v]
                if (!sptSet[v] && graph[u][v] != 0 && dist[u] != Integer.MAX_VALUE
                        && dist[u] + graph[u][v] < dist[v])
                    dist[v] = dist[u] + graph[u][v];
        }

        // print the constructed distance array
        printSolution(dist);
    }

    // Driver method
    public static void main(String[] args)
    {
        /* Let us create the example graph discussed above */
        int graph[][] = new int[][] { { 0, 4, 0, 0, 0, 0, 0, 8, 0 },
                                      { 4, 0, 8, 0, 0, 0, 0, 11, 0 },
                                      { 0, 8, 0, 7, 0, 4, 0, 0, 2 },
                                      { 0, 0, 7, 0, 9, 14, 0, 0, 0 },
                                      { 0, 0, 0, 9, 0, 10, 0, 0, 0 },
                                      { 0, 0, 4, 14, 10, 0, 2, 0, 0 },
                                      { 0, 0, 0, 0, 0, 2, 0, 1, 6 },
                                      { 8, 11, 0, 0, 0, 0, 1, 0, 7 },
                                      { 0, 0, 2, 0, 0, 0, 6, 7, 0 } };
        ShortestPath t = new ShortestPath();
        t.dijkstra(graph, 0);
    }
}
If this is the graph given to me, for example, how would I represent it using a 2D array?

You can read it like a coordinate table.
Every row represents a single vertex, and the value in the Nth column of that row is the distance (edge weight) to the Nth vertex.
Examples:
(0, 1): the first row, second column has the value 4, which means vertex 0's distance to vertex 1 is 4.
(1, 7): the second row, eighth column has the value 11, which means vertex 1's distance to vertex 7 is 11.
And so on...
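As a minimal sketch of that lookup, using the same 9x9 graph[][] array as in the question's main method (only the first two rows are repeated here):

int[][] graph = { { 0, 4, 0, 0, 0, 0, 0, 8, 0 },
                  { 4, 0, 8, 0, 0, 0, 0, 11, 0 },
                  /* ... remaining rows as in the question ... */ };
System.out.println(graph[0][1]); // 4  -> weight of the edge between vertices 0 and 1
System.out.println(graph[1][7]); // 11 -> weight of the edge between vertices 1 and 7
System.out.println(graph[1][0]); // 4  -> symmetric, because the graph is undirected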

They have used an adjacency matrix to represent a weighted undirected graph. According to geeksforgeeks:
Adjacency Matrix: Adjacency Matrix is a 2D array of size V x V where V is the
number of vertices in a graph. Let the 2D array be adj[][], a slot adj[i][j] = 1
indicates that there is an edge from vertex i to vertex j. Adjacency matrix for
undirected graph is always symmetric. Adjacency Matrix is also used to represent
weighted graphs. If adj[i][j] = w, then there is an edge from vertex i to vertex
j with weight w.
As the last two lines state, an adjacency matrix can be used to store a weighted graph, which is what they have done in this case.
There can be a few problems if you use an adjacency matrix to represent weighted graphs. Here 0 is used to represent "no edge" between two vertices. If a graph has an edge of weight 0, a small change to the adjacency matrix will be needed, since 0 already represents "no edge".
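For illustration only, here is a hedged sketch of one such change (not part of the quoted code): reserve a sentinel value that can never be a legal weight, e.g. -1 when all weights are non-negative, and test against it instead of 0.

// Hypothetical sketch: -1 marks "no edge", so a real weight of 0 is allowed.
static final int NO_EDGE = -1;

static boolean hasEdge(int[][] graph, int u, int v) {
    return graph[u][v] != NO_EDGE;
}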
In the comments you mention another graph. That one, however, is not weighted. It is also not an undirected graph, since its matrix is not symmetric.
Do comment if you have any other doubts.

I think I've figured it out!
In the example graph there are 9 vertices, so a 9x9 2D matrix is created, such that:
Row 1 corresponds to vertex 0, Row 2 corresponds to vertex 1, and so on,
and
Column 1 corresponds to vertex 0, Column 2 corresponds to vertex 1, and so on,
and the array is filled so that it contains the weight of the edge between vertices x and y. For example: Row 1, column 2 (map[0][1]) contains the weight of the edge between vertex 0 and vertex 1.
Therefore, for a graph with n vertices, an n x n array needs to be created, such that location [x][y] in the array contains the weight of the edge from x to y.
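A small sketch of that construction (the helper name and edge-list format are mine, not from the original post): build the n x n array and record each edge symmetrically, since the graph is undirected.

// Hypothetical helper: build an n-by-n adjacency matrix from {x, y, weight} triples.
static int[][] buildMatrix(int n, int[][] edges) {
    int[][] map = new int[n][n]; // Java zero-fills, so 0 means "no edge" by default
    for (int[] e : edges) {
        int x = e[0], y = e[1], w = e[2];
        map[x][y] = w;
        map[y][x] = w; // undirected: store the weight in both directions
    }
    return map;
}

// Usage with the first two edges of the question's graph:
int[][] map = buildMatrix(9, new int[][] { { 0, 1, 4 }, { 0, 7, 8 } });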
As for the example graph I had asked the solution for, this would be its corresponding 2D matrix:
It is a little big, because we create a 16x16 2D matrix.

Related

How should I be implementing a view clipping plane in a 3D Engine?

This project is written entirely from scratch in Java. I've just been bored ever since Covid started, so I wanted something that would take up my time and teach me something cool. I've been stuck on this problem for about a week now, though. When I try to use my near-plane clipping method, it skews the new vertices to the opposite side of the screen, but sometimes it works just fine.
Failure Screenshot
Success Screenshot
So my thought is that maybe, since it works sometimes, I'm just not doing the clipping at the correct time in the pipeline?
I start by face culling and lighting,
Then I apply a camera view transformation to the vertices,
Then I clip on the near plane,
Finally I apply the projection matrix and clip any remaining off-screen triangles.
Code:
This calculates the intersection points. Sorry if it's messy or too long; I'm not very experienced in coding, my major is physics, not CS.
public Vertex vectorIntersectPlane(Vector3d planePos, Vector3d planeNorm, Vector3d lineStart, Vector3d lineEnd){
    float planeDot = planeNorm.dotProduct(planePos);
    float startDot = lineStart.dotProduct(planeNorm);
    float endDot = lineEnd.dotProduct(planeNorm);
    // Interpolation parameter of the intersection along the segment start -> end
    float midPoint = (planeDot - startDot) / (endDot - startDot);
    Vector3d lineStartEnd = lineEnd.sub(lineStart);
    Vector3d lineToIntersect = lineStartEnd.scale(midPoint);
    return new Vertex(lineStart.add(lineToIntersect));
}

public float distanceFromPlane(Vector3d planePos, Vector3d planeNorm, Vector3d vert){
    float x = planeNorm.getX() * vert.getX();
    float y = planeNorm.getY() * vert.getY();
    float z = planeNorm.getZ() * vert.getZ();
    // Signed distance: dot(normal, vert) - dot(normal, planePos)
    return (x + y + z - (planeNorm.dotProduct(planePos)));
}
//When a triangle gets clipped it has 4 possible outcomes
// 1 it doesn't actually need clipping and gets returned
// 2 it gets clipped into 1 new triangle, for testing these are red
// 3 it gets clipped into 2 new triangles, for testing 1 is green, and 1 is blue
// 4 it is outside the view planes and shouldn't be rendered
public void clipTriangles(){
    Vector3d planePos = new Vector3d(0, 0, ProjectionMatrix.fNear, 1f);
    Vector3d planeNorm = Z_AXIS.clone();
    final int length = triangles.size();
    for(int i = 0; i < length; i++) {
        Triangle t = triangles.get(i);
        if(!t.isDraw())
            continue;
        Vector3d[] insidePoint = new Vector3d[3];
        int insidePointCount = 0;
        Vector3d[] outsidePoint = new Vector3d[3];
        int outsidePointCount = 0;
        float d0 = distanceFromPlane(planePos, planeNorm, t.getVerticesVectors()[0]);
        float d1 = distanceFromPlane(planePos, planeNorm, t.getVerticesVectors()[1]);
        float d2 = distanceFromPlane(planePos, planeNorm, t.getVerticesVectors()[2]);
        //Storing distances from plane and counting inside/outside points
        {
            if (d0 >= 0){
                insidePoint[insidePointCount] = t.getVerticesVectors()[0];
                insidePointCount++;
            }else{
                outsidePoint[outsidePointCount] = t.getVerticesVectors()[0];
                outsidePointCount++;
            }
            if (d1 >= 0){
                insidePoint[insidePointCount] = t.getVerticesVectors()[1];
                insidePointCount++;
            }else{
                outsidePoint[outsidePointCount] = t.getVerticesVectors()[1];
                outsidePointCount++;
            }
            if (d2 >= 0){
                insidePoint[insidePointCount] = t.getVerticesVectors()[2];
                insidePointCount++;
            }else{
                outsidePoint[outsidePointCount] = t.getVerticesVectors()[2];
            }
        }
        //Triangle has 1 point still inside view, remove original triangle add new clipped triangle
        if (insidePointCount == 1) {
            t.dontDraw();
            Vertex newVert1 = vectorIntersectPlane(planePos, planeNorm, insidePoint[0], outsidePoint[0]);
            Vertex newVert2 = vectorIntersectPlane(planePos, planeNorm, insidePoint[0], outsidePoint[1]);
            vertices.add(newVert1);
            vertices.add(newVert2);
            //Triangles are stored with vertex references instead of the actual vertex object.
            Triangle temp = new Triangle(t.getVertKeys()[0], vertices.size() - 2, vertices.size() - 1, vertices);
            temp.setColor(1,0,0, t.getBrightness(), t.getAlpha());
            triangles.add(temp);
            continue;
        }
        //Triangle has two points inside, remove original add two new clipped triangles
        if (insidePointCount == 2) {
            t.dontDraw();
            Vertex newVert1 = vectorIntersectPlane(planePos, planeNorm, insidePoint[0], outsidePoint[0]);
            Vertex newVert2 = vectorIntersectPlane(planePos, planeNorm, insidePoint[1], outsidePoint[0]);
            vertices.add(newVert1);
            vertices.add(newVert2);
            Triangle temp = new Triangle(t.getVertKeys()[0], t.getVertKeys()[1], vertices.size() - 1, vertices);
            temp.setColor(0, 1, 0, t.getBrightness(), t.getAlpha());
            triangles.add(temp);
            temp = new Triangle(t.getVertKeys()[0], t.getVertKeys()[1], vertices.size() - 2, vertices);
            temp.setColor(0, 0, 1, t.getBrightness(), t.getAlpha());
            triangles.add(temp);
            continue;
        }
    }
}
I figured out the problem: the new clipped triangles were not being given the correct vertex references. They were just being given the first vertex of the triangle, regardless of whether that vertex was inside the view or not.
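For illustration, a hedged sketch of that kind of fix (the lookup below is my assumption, not the project's actual code): when building the single clipped triangle, the surviving original vertex's key should be looked up from the inside point instead of hard-coding index 0.

// Hypothetical sketch: find which of the triangle's original vertices survived,
// instead of always reusing t.getVertKeys()[0].
int insideIndex = 0;
for (int k = 0; k < 3; k++) {
    if (t.getVerticesVectors()[k] == insidePoint[0]) {
        insideIndex = k; // this original vertex is on the visible side of the plane
        break;
    }
}
Triangle temp = new Triangle(t.getVertKeys()[insideIndex],
        vertices.size() - 2, vertices.size() - 1, vertices);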

How to Convert Python Code to Java without numpy

I have a method in Python that makes use of OpenCV to remove the background from an image. I want the same functionality to work with Android's version of OpenCV, but I just can't seem to wrap my head around how the arrays work and how I can process them.
This is what I have so far in Java:
private Bitmap GetForeground(Bitmap source){
    source = scale(source,300,300);
    Mat mask = Mat.zeros(source.getHeight(),source.getWidth(),CvType.CV_8U);
    Mat bgModel = Mat.zeros(1,65,CvType.CV_64F);
    Mat ftModel = Mat.zeros(1,65,CvType.CV_64F);
    int x = (int)Math.round(source.getWidth()*0.1);
    int y = (int)Math.round(source.getHeight()*0.1);
    int width = (int)Math.round(source.getWidth()*0.8);
    int height = (int)Math.round(source.getHeight()*0.8);
    Rect rect = new Rect(x,y, width,height);
    Mat sourceMat = new Mat();
    Utils.bitmapToMat(source, sourceMat);
    Imgproc.grabCut(sourceMat, mask, rect, bgModel, ftModel, 5, Imgproc.GC_INIT_WITH_RECT);
    int frameSize=sourceMat.rows()*sourceMat.cols();
    byte[] buffer= new byte[frameSize];
    mask.get(0,0,buffer);
    for (int i = 0; i < frameSize; i++) {
        if (buffer[i] == 2 || buffer[i] == 0){
            buffer[i] = 0;
        }else{
            buffer[i] = 1;
        }
    }
    byte[][] sourceArray = getMultiChannelArray(sourceMat);
    byte[][][] reshapedMask = ReshapeArray(buffer, sourceMat.rows(), sourceMat.cols());
    return source;
}

private byte[][][] ReshapeArray(byte[] arr, int rows, int cols){
    byte[][][] out = new byte[rows][cols][1]; // rows-by-cols so out[i][j] below stays in bounds
    int index=0;
    for (int i = 0; i < rows; i++) {
        for (int j = 0; j < cols; j++) {
            out[i][j][0] = arr[index];
            index++;
        }
    }
    return out;
}

public static byte[][] getMultiChannelArray(Mat m) {
    //first index is pixel, second index is channel
    int numChannels=m.channels();//is 3 for 8UC3 (e.g. RGB)
    int frameSize=m.rows()*m.cols();
    byte[] byteBuffer= new byte[frameSize*numChannels];
    m.get(0,0,byteBuffer);
    //write to separate R,G,B arrays
    byte[][] out=new byte[frameSize][numChannels];
    for (int p=0,i = 0; p < frameSize; p++) {
        for (int n = 0; n < numChannels; n++,i++) {
            out[p][n]=byteBuffer[i];
        }
    }
    return out;
}
The python code I want to recreate :
image = cv2.imread('Images/handheld.jpg')
image = imutils.resize(image, height = 300)
mask = np.zeros(image.shape[:2],np.uint8)
bgModel = np.zeros((1,65),np.float64)
frModel = np.zeros((1,65),np.float64)
height, width, d = np.array(image).shape
rect = (int(width*0.1),int(height*0.1),int(width*0.8),int(height*0.8))
cv2.grabCut(image, mask, rect, bgModel,frModel, 5,cv2.GC_INIT_WITH_RECT)
mask = np.where((mask==2) | (mask == 0),0,1).astype('uint8')
image = image*mask[:,:,np.newaxis]
I have no idea how to convert the last two lines of the Python code. If there is a way to just run Python cleanly on an Android device within my own project, that would also be awesome.
At this point, you should consider taking a look at the SL4A project, which would allow you to run your Python code on Android through a Java app.
Here are some interesting links:
https://github.com/damonkohler/sl4a
https://norwied.wordpress.com/2012/04/11/run-sl4a-python-script-from-within-android-app/
http://jokar-johnk.blogspot.com/2011/02/how-to-make-android-app-with-sl4a.html
Let's look at both commands and try to convert them to Java API calls. It may not be a simple two lines of code.
mask = np.where((mask==2) | (mask == 0),0,1).astype('uint8')
In the above command, we are creating a new mask image with the uint8 pixel data type. The new mask matrix will have the value 0 at every position where the previous mask had a value of either 2 or 0, and 1 otherwise. Let's demonstrate this with an example:
mask = [
[0, 1, 1, 2],
[1, 0, 1, 3],
[0, 1, 1, 2],
[2, 3, 1, 0],
]
After this operation the output would be:
mask = [
[0, 1, 1, 0],
[1, 0, 1, 1],
[0, 1, 1, 0],
[0, 1, 1, 0],
]
So the above command simply generates a binary mask with only 0 and 1 values. This can be replicated in Java using the Core.compare() method as:
// Get a mask for all `1` values in matrix.
Mat mask1vals = new Mat();
Core.compare(mask, new Scalar(1), mask1vals, Core.CMP_EQ);
// Get a mask for all `3` values in matrix.
Mat mask3vals = new Mat();
Core.compare(mask, new Scalar(3), mask3vals, Core.CMP_EQ);
// Create a combined mask
Mat foregroundMask = new Mat();
Core.max(mask1vals, mask3vals, foregroundMask);
Now you need to multiply this foreground mask with the input image to get the final grabcut image as:
// First convert the single channel mat to 3 channel mat
Imgproc.cvtColor(foregroundMask, foregroundMask, Imgproc.COLOR_GRAY2BGR);
// Now simply take the min operation
Mat out = new Mat();
Core.min(foregroundMask, image, out);
Note that Core.compare() writes 255 (not 1) where the condition holds, which is why the last step uses Core.min() instead of a multiplication: min(255, pixel) keeps the pixel, and min(0, pixel) zeroes it out.

Java 2D array, possible to set individual size?

Quick question about Java 2D arrays: for my tile-based, top-down, 2D game (using Swing) I use a 2D array to create a map, like this:
public int[][] createMap(){
    return new int[][]{
        {0, 0, 1, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
        {0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
        {0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
        {0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0}};
}
I then use this in my gameComponents class, where I draw the individual tiles onto the map, like this:
protected void paintComponent(Graphics g){
    super.paintComponent(g);
    for (int row = 0; row < game.getMap().getWidth(); row++) {
        for (int col = 0; col < game.getMap().getHeight(); col++) {
            g.drawImage(tile.getTileImage().get(values()[game.getMap().getMapArray()[col][row]]), row * SIZE,
                    col * SIZE, this);
        }
    }
}
(where SIZE is the size of a tile)
This works, and it correctly draws each tile to the map as expected. However, this also causes a problem for collision detection. As you may have noted, while I do define the size between the tiles in the draw method, it is not defined in the array at all. Which, as you'd imagine, raises issues when checking for collision, as the drawn tile is not where the tile is in the 2D array (due to the size offset).
This is the code I use for checking collision (of course, not working due to ArrayIndexOutOfBounds).
public boolean collisionDetected(int xDirection, int yDirection, Game game, Player player){
    for (int row = 0; row < game.getMap().getHeight() * 16; row++){
        for (int col = 0; col < game.getMap().getWidth() * 16; col++) {
            System.out.println(col + xDirection + player.getPositionX());
            if(game.getMap().getTile(col + xDirection + player.getPositionX(),
                    row + yDirection + player.getPositionY()) == Tiles.GRASS ){
                System.out.println("COLLISION DETECTED");
                return true;
            }
        }
    }
    return false;
}
This method uses a method within the map class that returns the tile at a specific coordinate, like this:
public Tiles getTile(int col,int row){
    return Tiles.values()[mapArray[col][row]];
}
And, of course, as the 2D array doesn't know of the size offset, it just throws an ArrayIndexOutOfBoundsException.
My question is: is it possible to define a 2D map array with the size of a tile in mind? I appreciate any help and input I can get; after all, I am here to learn!
Extra clarification: all the tiles are in an enum class (i.e. AIR, GRASS, STONE...). Also worth noting that the player position is not bound by an array; I merely move it the number of pixels I want it to move.
Thanks in advance!
This method uses a method within the map class that returns the tile at that specific coordinate, like this:
public Tiles getTile(int col,int row){
    return Tiles.values()[mapArray[col][row]];
}
So if you have a "coordinate", why do you call the parameters col/row?
If you have a 10x10 grid and each tile is 20 pixels, then the grid size is 200x200, so you could have x/y values in the range 0-199.
So if you have a coordinate of (25, 35), you would simply calculate the row/column values as:
int row = 35 / 20;
int column = 25 / 20;
So your method would be something like:
public Tiles getTile(int x, int y)
{
    int row = y / 20;
    int column = x / 20;
    return Tiles.values()[mapArray[row][column]];
}
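A quick usage sketch of that method (the pixel position values are made up for illustration):

// A player at pixel position (25, 35) stands on row 35 / 20 = 1
// and column 25 / 20 = 1 of the map array.
Tiles t = getTile(25, 35);            // looks up mapArray[1][1]
boolean blocked = (t == Tiles.GRASS); // collision check against the tile type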

Ice Sliding Puzzle Path Finding

I apologize for the somewhat vague title; I'm unsure what you would call this puzzle.
I'm making a path-finding method to find the route with the least moves, not the least distance traveled.
The rules of the game are simple: you must traverse from the orange square to the green square, but you can only move in a straight line, and you cannot stop moving in that direction until you hit a boundary (either the wall of the arena or an obstacle), as if you were sliding across ice.
Example map, and unless I'm mistaken, the desired path (8 moves):
Arena.java: https://gist.github.com/CalebWhiting/3a6680d40610829b1b6d
ArenaTest.java: https://gist.github.com/CalebWhiting/9a4767508831ea5dc0da
I'm assuming this would be best handled with a Dijkstra or A* path-finding algorithm; however, I'm not only not very experienced with these algorithms, I also don't know how I would go about defining the path rules.
Thank you for any help in advance.
Here's my solution (Java) in case someone is still interested. As @tobias_k suggested in his comment above, BFS is indeed the way to go:
import java.util.LinkedList;

public class PokemonIceCave {
    public static void main(String[] args) {
        int[][] iceCave1 = {
                {0, 0, 0, 1, 0},
                {0, 0, 0, 0, 1},
                {0, 1, 1, 0, 0},
                {0, 1, 0, 0, 1},
                {0, 0, 0, 1, 0}
        };
        System.out.println(solve(iceCave1, 0, 0, 2, 4));
        System.out.println();
        int[][] iceCave2 = {
                {0, 0, 0, 1, 0},
                {0, 0, 0, 0, 1},
                {0, 1, 1, 0, 0},
                {0, 1, 0, 0, 1},
                {0, 0, 0, 1, 0},
                {0, 0, 0, 0, 0}
        };
        System.out.println(solve(iceCave2, 0, 0, 2, 5));
    }

    public static int solve(int[][] iceCave, int startX, int startY, int endX, int endY) {
        Point startPoint = new Point(startX, startY);
        LinkedList<Point> queue = new LinkedList<>();
        Point[][] iceCaveColors = new Point[iceCave.length][iceCave[0].length];
        // seed with a copy of the start point; the parent-chain count below relies on it
        queue.addLast(new Point(startX, startY));
        iceCaveColors[startY][startX] = startPoint;
        while (queue.size() != 0) {
            Point currPos = queue.pollFirst();
            System.out.println(currPos);
            // traverse adjacent nodes while sliding on the ice
            for (Direction dir : Direction.values()) {
                Point nextPos = move(iceCave, iceCaveColors, currPos, dir);
                System.out.println("\t" + nextPos);
                if (nextPos != null) {
                    queue.addLast(nextPos);
                    iceCaveColors[nextPos.getY()][nextPos.getX()] = new Point(currPos.getX(), currPos.getY());
                    if (nextPos.getY() == endY && nextPos.getX() == endX) {
                        // we found the end point
                        Point tmp = currPos; // if we start from nextPos we will count one too many edges
                        int count = 0;
                        while (tmp != startPoint) {
                            count++;
                            tmp = iceCaveColors[tmp.getY()][tmp.getX()];
                        }
                        return count;
                    }
                }
            }
            System.out.println();
        }
        return -1;
    }

    public static Point move(int[][] iceCave, Point[][] iceCaveColors, Point currPos, Direction dir) {
        int x = currPos.getX();
        int y = currPos.getY();
        int diffX = (dir == Direction.LEFT ? -1 : (dir == Direction.RIGHT ? 1 : 0));
        int diffY = (dir == Direction.UP ? -1 : (dir == Direction.DOWN ? 1 : 0));
        int i = 1;
        while (x + i * diffX >= 0
                && x + i * diffX < iceCave[0].length
                && y + i * diffY >= 0
                && y + i * diffY < iceCave.length
                && iceCave[y + i * diffY][x + i * diffX] != 1) {
            i++;
        }
        i--; // reverse the last step
        if (iceCaveColors[y + i * diffY][x + i * diffX] != null) {
            // we've already seen this point
            return null;
        }
        return new Point(x + i * diffX, y + i * diffY);
    }

    public static class Point {
        int x;
        int y;

        public Point(int x, int y) {
            this.x = x;
            this.y = y;
        }

        public int getX() {
            return x;
        }

        public int getY() {
            return y;
        }

        @Override
        public String toString() {
            return "Point{" +
                    "x=" + x +
                    ", y=" + y +
                    '}';
        }
    }

    public enum Direction {
        LEFT,
        RIGHT,
        UP,
        DOWN
    }
}
I think the best solution would probably be BFS, where you represent the state of the board with a "State" object with the following parameters: the number of moves made so far, and coordinates. It should also have a method to find the attainable next states (which should be fairly easy to code: just go N, S, E, W and return an array of the first blocking spots). See the sketch after the pseudocode below.
Create initial state (0 moves with initial coordinates)
Put it in a priority queue (sorted by number of moves)
while (priority queue has more states):
    Remove a state
    if it is a goal state:
        return the state
    Find all neighbors of the current state
    Add them to the priority queue (remembering to increment the number of moves by 1)
This uses an implicit graph representation. Optimality is guaranteed because of the priority queue: when the goal state is found, it will have been reached with the fewest moves. If the whole priority queue is exhausted and no state is returned, then no solution exists. This solution takes O(V^2 log V) time because of the priority queue, but I think it is the simplest to code. A straight-up O(V) BFS solution is possible, but you'll have to keep track of which states you have or have not visited yet and the fewest number of moves to reach them, which would take O(V) memory.
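A minimal sketch of the "State" object described above (the names are mine, not from the answer). Note that because every move costs exactly 1, a plain FIFO queue would give the same optimality guarantee as the priority queue:

// Hypothetical sketch of the BFS state.
class State {
    final int x, y;   // coordinates after the last slide
    final int moves;  // number of moves made so far

    State(int x, int y, int moves) {
        this.x = x;
        this.y = y;
        this.moves = moves;
    }

    // Successors would be generated by sliding N, S, E and W from (x, y)
    // until a wall or obstacle is hit, each successor having moves + 1.
}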

Decomposition of essential matrix leads to wrong rotation and translation

I am doing some SfM and having trouble getting R and T from the essential matrix.
Here is what I am doing in source code:
Mat fundamental = Calib3d.findFundamentalMat(object_left, object_right);
Mat E = new Mat();
Core.multiply(cameraMatrix.t(), fundamental, E); // cameraMatrix.t()*fundamental*cameraMatrix;
Core.multiply(E, cameraMatrix, E);
Mat R = new Mat();
Mat.zeros(3, 3, CvType.CV_64FC1).copyTo(R);
Mat T = new Mat();
calculateRT(E, R, T);
where `calculateRT` is defined as follows:
private void calculateRT(Mat E, Mat R, Mat T) {
    /*
     * //-- Step 6: calculate Rotation Matrix and Translation Vector
     * Matx34d P;
     * //decompose E
     * SVD svd(E,SVD::MODIFY_A);
     * Mat svd_u = svd.u;
     * Mat svd_vt = svd.vt;
     * Mat svd_w = svd.w;
     * Matx33d W(0,-1,0,1,0,0,0,0,1);//HZ 9.13
     * Mat_<double> R = svd_u * Mat(W) * svd_vt; //
     * Mat_<double> T = svd_u.col(2); //u3
     * if (!CheckCoherentRotation (R)) {
     *     std::cout<<"resulting rotation is not coherent\n";
     *     return 0;
     * }
     */
    Mat w = new Mat();
    Mat u = new Mat();
    Mat vt = new Mat();
    Core.SVDecomp(E, w, u, vt, Core.DECOMP_SVD); // Maybe use flags
    Mat W = new Mat(new Size(3,3), CvType.CV_64FC1);
    W.put(0, 0, W_Values);
    Core.multiply(u, W, R);
    Core.multiply(R, vt, R);
    T = u.col(2);
}
And here are the results of all matrices during and after the calculation.
Number matches: 10299
Number of good matches: 590
Number of obj_points left: 590.0
CameraMatrix:
[1133.601684570312, 0, 639.5;
0, 1133.601684570312, 383.5;
0, 0, 1]
DistortionCoeff: [0.06604336202144623; 0.21129509806633; 0; 0; -1.206771731376648]
Fundamental:
[4.209958176688844e-08, -8.477216249742946e-08, 9.132798068178793e-05;
3.165719895008366e-07, 6.437858397735847e-07, -0.0006976204595236443;
0.0004532506630569588, -0.0009224427024602799, 1]
Essential:
[0.05410018455525099, 0, 0;
0, 0.8272987826496967, 0;
0, 0, 1]
U: (SVD)
[0, 0, 1;
0, 0.9999999999999999, 0;
1, 0, 0]
W: (SVD)
[1; 0.8272987826496967; 0.05410018455525099]
vt: (SVD)
[0, 0, 1;
0, 1, 0;
1, 0, 0]
R:
[0, 0, 0;
0, 0, 0;
0, 0, 0]
T:
[1; 0; 0]
And for completeness, here are the images I am using: left and right.
Before calculating the feature points and so on, I undistort the images.
Can someone point out where something is going wrong or what I am doing wrong?
Edit: Question
Is it possible that my fundamental matrix is equal to the essential matrix, since I am in the calibrated situation, as Hartley and Zisserman say:
"11.7.3 The calibrated case:
In the case of calibrated cameras normalized image coordinates may be used, and the essential matrix E computed instead of the fundamental matrix"
I've found the mistake. This code is not doing the right matrix multiplication:
Mat E = new Mat();
Core.multiply(cameraMatrix.t(), fundamental, E);
Core.multiply(E, cameraMatrix, E);
I changed this to
Mat tmp = new Mat();
Core.gemm(cameraMatrix.t(), fundamental, 1, new Mat(), 0, tmp); // tmp = K^T * F
Core.gemm(tmp, cameraMatrix, 1, new Mat(), 0, E);               // E = tmp * K = K^T * F * K
which now does the right matrix multiplication. As far as I can tell from the documentation, Core.multiply does an element-wise multiplication, not the dot product of row*col.
First, unless you computed the fundamental matrix by explicitly taking into account the inverse of the camera matrix, you are not in the calibrated case; hence the fundamental matrix you estimate is not an essential matrix. This is also quite easy to test: you just have to take the SVD of the matrix and see whether its two non-zero singular values are equal (see § 9.6.1 in Hartley & Zisserman's book).
Second, both the fundamental matrix and the essential matrix are defined for two cameras and do not make sense if you consider only one camera. If you do have two cameras, with respective camera matrices K1 and K2, then you can obtain the essential matrix E12, given the fundamental matrix F12 (which maps points in I1 to lines in I2), using the following formula (see equation 9.12 in Hartley & Zisserman's book):
E12 = K2^T . F12 . K1
In your case, you used K2 on both sides.
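A short sketch of that formula with OpenCV's Java API (K1, K2 and F12 stand for the symbols in the formula above; Core.gemm is used because Core.multiply is element-wise):

// E12 = K2^T * F12 * K1, computed as two chained matrix multiplications.
Mat tmp = new Mat();
Mat E12 = new Mat();
Core.gemm(K2.t(), F12, 1, new Mat(), 0, tmp); // tmp = K2^T * F12
Core.gemm(tmp, K1, 1, new Mat(), 0, E12);     // E12 = tmp * K1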
