Related
I have a captured image that contains a table. I want to crop the table out of that image.
This is a sample image.
Can someone suggest what can be done?
I have to use it in android.
Use a Hough transform to find the lines in the image. OpenCV can do this easily and has Java bindings. See the tutorial at the link below for how to do something very similar.
https://docs.opencv.org/3.4.1/d9/db0/tutorial_hough_lines.html
Here is the java code provided in the tutorial:
import org.opencv.core.*;
import org.opencv.core.Point;
import org.opencv.highgui.HighGui;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

class HoughLinesRun {
    public void run(String[] args) {
        // Declare the output variables
        Mat dst = new Mat(), cdst = new Mat(), cdstP;
        String default_file = "../../../../data/sudoku.png";
        String filename = ((args.length > 0) ? args[0] : default_file);
        // Load an image
        Mat src = Imgcodecs.imread(filename, Imgcodecs.IMREAD_GRAYSCALE);
        // Check if image is loaded fine
        if( src.empty() ) {
            System.out.println("Error opening image!");
            System.out.println("Program Arguments: [image_name -- default "
                    + default_file +"] \n");
            System.exit(-1);
        }
        // Edge detection
        Imgproc.Canny(src, dst, 50, 200, 3, false);
        // Copy edges to the images that will display the results in BGR
        Imgproc.cvtColor(dst, cdst, Imgproc.COLOR_GRAY2BGR);
        cdstP = cdst.clone();
        // Standard Hough Line Transform
        Mat lines = new Mat(); // will hold the results of the detection
        Imgproc.HoughLines(dst, lines, 1, Math.PI/180, 150); // runs the actual detection
        // Draw the lines
        for (int x = 0; x < lines.rows(); x++) {
            double rho = lines.get(x, 0)[0],
                   theta = lines.get(x, 0)[1];
            double a = Math.cos(theta), b = Math.sin(theta);
            double x0 = a*rho, y0 = b*rho;
            Point pt1 = new Point(Math.round(x0 + 1000*(-b)), Math.round(y0 + 1000*(a)));
            Point pt2 = new Point(Math.round(x0 - 1000*(-b)), Math.round(y0 - 1000*(a)));
            Imgproc.line(cdst, pt1, pt2, new Scalar(0, 0, 255), 3, Imgproc.LINE_AA, 0);
        }
        // Probabilistic Line Transform
        Mat linesP = new Mat(); // will hold the results of the detection
        Imgproc.HoughLinesP(dst, linesP, 1, Math.PI/180, 50, 50, 10); // runs the actual detection
        // Draw the lines
        for (int x = 0; x < linesP.rows(); x++) {
            double[] l = linesP.get(x, 0);
            Imgproc.line(cdstP, new Point(l[0], l[1]), new Point(l[2], l[3]), new Scalar(0, 0, 255), 3, Imgproc.LINE_AA, 0);
        }
        // Show results
        HighGui.imshow("Source", src);
        HighGui.imshow("Detected Lines (in red) - Standard Hough Line Transform", cdst);
        HighGui.imshow("Detected Lines (in red) - Probabilistic Line Transform", cdstP);
        // Wait and Exit
        HighGui.waitKey();
        System.exit(0);
    }
}

public class HoughLines {
    public static void main(String[] args) {
        // Load the native library.
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        new HoughLinesRun().run(args);
    }
}
lines or linesP will contain the found lines. Instead of drawing them (as in the example), you will want to manipulate them a little further.
Sort the found lines by slope.
The two largest clusters will be horizontal lines and then vertical lines.
For the horizontal lines calculate and sort by the y intercept.
The smallest y intercept describes the top of the table (image y grows downward).
The largest y intercept is the bottom of the table.
For the vertical lines calculate and sort by the x intercept.
The largest x intercept is the right side of the table.
The smallest x intercept is the left side of the table.
You'll now have the coordinates of the four table corners and can do standard image manipulation to crop/rotate etc. OpenCV can help you with this step too.
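To make that concrete, here is a minimal sketch of the post-processing, meant to continue inside run() after the HoughLines call above. The 0.1 angle tolerance and the use of line intercepts instead of full slope clustering are simplifications for illustration, not part of the tutorial:

// Classify the standard-Hough lines as near-vertical or near-horizontal
// and keep the extreme intercepts as the table boundary.
// A line in (rho, theta) form satisfies x*cos(theta) + y*sin(theta) = rho.
double minX = Double.MAX_VALUE, maxX = -Double.MAX_VALUE;
double minY = Double.MAX_VALUE, maxY = -Double.MAX_VALUE;
for (int i = 0; i < lines.rows(); i++) {
    double rho = lines.get(i, 0)[0], theta = lines.get(i, 0)[1];
    if (Math.abs(Math.sin(theta)) < 0.1) {            // near-vertical line
        double xIntercept = rho / Math.cos(theta);
        minX = Math.min(minX, xIntercept);            // left edge
        maxX = Math.max(maxX, xIntercept);            // right edge
    } else if (Math.abs(Math.cos(theta)) < 0.1) {     // near-horizontal line
        double yIntercept = rho / Math.sin(theta);
        minY = Math.min(minY, yIntercept);            // top edge (y grows downward)
        maxY = Math.max(maxY, yIntercept);            // bottom edge
    }
}
// Crop the table region out of the source image.
Rect tableRect = new Rect(new Point(minX, minY), new Point(maxX, maxY));
Mat table = src.submat(tableRect);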
Convert your image to grayscale.
Threshold your image to drop noise.
Find the minimum area rect of the non-blank pixels.
In Python the code would look like:
import cv2
import numpy as np
img = cv2.imread('table.jpg')
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(imgray, 222, 255, cv2.THRESH_BINARY )
# write out the thresholded image to debug the 222 value
cv2.imwrite("thresh.png", thresh)
indices = np.where(thresh != 255)
coords = np.array([(b,a) for a, b in zip(*(indices[0], indices[1]))])
# coords = cv2.convexHull(coords)
rect = cv2.minAreaRect(coords)
box = cv2.boxPoints(rect)
box = np.int0(box)
cv2.drawContours(img, [box], 0, (0, 0, 255), 2)
cv2.imwrite("box.png", img)
For me this produces the following image.
If your image didn't have the red squares it would be a tighter fit.
I applied a Gabor filter to images with the following theta values: {0, 45, 90, 135}, but the resulting images were exactly the same, with the same orientation angle!
I expected that the result of applying a Gabor filter with theta = 90 would be different in orientation from the one with theta = 45, but after using the Gabor filter with different theta values, I get images with no difference in orientation!
Am I using the Gabor filter wrongly? I expect every image to have a different orientation according to the orientation angle specified in the Gabor filter.
The parameters I set for the Gabor filter were as follows:
kernel size = Size(5,5)
theta = {0, 45, 90, 135}
sigma = 0.2
type = CvType.CV_32F
lambda = 100
gamma = 0.5
psi = 5
code:
public static void main(String[] args) {
    MatFactory matFactory = new MatFactory();
    FilePathUtils.addInputPath(path_Obj);
    Mat bgrMat = matFactory.newMat(FilePathUtils.getInputFileFullPathList().get(0));
    Mat gsImg = SysUtils.rgbToGrayScaleMat(bgrMat);

    double[] theta = new double[4];
    theta[0] = 0;
    theta[1] = 45;
    theta[2] = 90;
    theta[3] = 135;

    for (int i = 0; i < 4; i++) {
        Mat gaborCoeff = Imgproc.getGaborKernel(new Size(3,3), 2, theta[i], 4.1, 54.1, 0, CvType.CV_32F);
        Mat dest = new Mat();
        Imgproc.filter2D(gsImg, dest, CvType.CV_32F, gaborCoeff);
        ImageUtils.showMat(dest, "theta = " + theta[i]);
    }
}
image_0 degree:
image_45 degree:
image_90 degree:
image_135 degree:
image after applying Gabor with theta=0,45,90,135, without smoothing :
Maybe the problem is here:
Mat gaborCoeff = Imgproc.getGaborKernel(new Size(3,3), 2, theta[i], 4.1, 54.1, 0, CvType.CV_32F);
I think it should be something like:
Mat gaborCoeff = Imgproc.getGaborKernel(new Size(3,3), sigma, theta[i], lambda, gamma, psi, CvType.CV_32F);
And I'm not sure about the values of lambda and psi. Maybe you should try different values and compare the resulting images.
Finally, the Gabor result is what you called image_0 degree, image_45 degree, and so on. Even though there is no significant change between them, I would change the parameters and compare again.
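As an illustration of that suggestion, here is how the loop might look with the parameters passed in the documented order (ksize, sigma, theta, lambda, gamma, psi, ktype). Note that getGaborKernel expects theta in radians, so the degree values need converting; the parameter values here are just the asker's, not tuned:

// Sketch: named parameters in getGaborKernel's documented order.
double sigma = 2, lambda = 100, gamma = 0.5, psi = 5;  // example values only
for (int i = 0; i < theta.length; i++) {
    // theta is expected in radians, so convert the degree values
    Mat gaborCoeff = Imgproc.getGaborKernel(new Size(5, 5), sigma,
            Math.toRadians(theta[i]), lambda, gamma, psi, CvType.CV_32F);
    Mat dest = new Mat();
    Imgproc.filter2D(gsImg, dest, CvType.CV_32F, gaborCoeff);
    ImageUtils.showMat(dest, "theta = " + theta[i]);
}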
I'm trying to stitch two images together, using the OpenCV Java API. However, I get the wrong output and I cannot work out the problem. I use the following steps:
1. detect features
2. extract features
3. match features.
4. find homography
5. find perspective transform
6. warp perspective
7. 'stitch' the 2 images, into a combined image.
but somewhere I'm going wrong. I think it's the way I'm combining the 2 images, but I'm not sure. I get 214 good feature matches between the 2 images, but cannot stitch them.
public class ImageStitching {

    static Mat image1;
    static Mat image2;

    static FeatureDetector fd;
    static DescriptorExtractor fe;
    static DescriptorMatcher fm;

    public static void initialise(){
        fd = FeatureDetector.create(FeatureDetector.BRISK);
        fe = DescriptorExtractor.create(DescriptorExtractor.SURF);
        fm = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);

        //images
        image1 = Highgui.imread("room2.jpg");
        image2 = Highgui.imread("room3.jpg");

        //structures for the keypoints from the 2 images
        MatOfKeyPoint keypoints1 = new MatOfKeyPoint();
        MatOfKeyPoint keypoints2 = new MatOfKeyPoint();

        //structures for the computed descriptors
        Mat descriptors1 = new Mat();
        Mat descriptors2 = new Mat();

        //structure for the matches
        MatOfDMatch matches = new MatOfDMatch();

        //getting the keypoints
        fd.detect(image1, keypoints1);
        fd.detect(image1, keypoints2);

        //getting the descriptors from the keypoints
        fe.compute(image1, keypoints1, descriptors1);
        fe.compute(image2, keypoints2, descriptors2);

        //getting the matches of the 2 sets of descriptors
        fm.match(descriptors2, descriptors1, matches);

        //turn the matches to a list
        List<DMatch> matchesList = matches.toList();

        Double maxDist = 0.0; //keep track of max distance from the matches
        Double minDist = 100.0; //keep track of min distance from the matches

        //calculate max & min distances between keypoints
        for(int i=0; i<keypoints1.rows(); i++){
            Double dist = (double) matchesList.get(i).distance;
            if (dist<minDist) minDist = dist;
            if (dist>maxDist) maxDist = dist;
        }

        System.out.println("max dist: " + maxDist);
        System.out.println("min dist: " + minDist);

        //structure for the good matches
        LinkedList<DMatch> goodMatches = new LinkedList<DMatch>();

        //use only the good matches (i.e. whose distance is less than 3*min_dist)
        for(int i=0; i<descriptors1.rows(); i++){
            if(matchesList.get(i).distance < 3*minDist){
                goodMatches.addLast(matchesList.get(i));
            }
        }

        //structures to hold points of the good matches (coordinates)
        LinkedList<Point> objList = new LinkedList<Point>(); // image1
        LinkedList<Point> sceneList = new LinkedList<Point>(); // image2

        List<KeyPoint> keypoints_objectList = keypoints1.toList();
        List<KeyPoint> keypoints_sceneList = keypoints2.toList();

        //putting the points of the good matches into above structures
        for(int i = 0; i<goodMatches.size(); i++){
            objList.addLast(keypoints_objectList.get(goodMatches.get(i).queryIdx).pt);
            sceneList.addLast(keypoints_sceneList.get(goodMatches.get(i).trainIdx).pt);
        }

        System.out.println("\nNum. of good matches" + goodMatches.size());

        MatOfDMatch gm = new MatOfDMatch();
        gm.fromList(goodMatches);

        //converting the points into the appropriate data structure
        MatOfPoint2f obj = new MatOfPoint2f();
        obj.fromList(objList);

        MatOfPoint2f scene = new MatOfPoint2f();
        scene.fromList(sceneList);

        //finding the homography matrix
        Mat H = Calib3d.findHomography(obj, scene);

        //LinkedList<Point> cornerList = new LinkedList<Point>();
        Mat obj_corners = new Mat(4, 1, CvType.CV_32FC2);
        Mat scene_corners = new Mat(4, 1, CvType.CV_32FC2);

        obj_corners.put(0, 0, new double[]{0, 0});
        obj_corners.put(0, 0, new double[]{image1.cols(), 0});
        obj_corners.put(0, 0, new double[]{image1.cols(), image1.rows()});
        obj_corners.put(0, 0, new double[]{0, image1.rows()});

        Core.perspectiveTransform(obj_corners, scene_corners, H);

        //structure to hold the result of the homography matrix
        Mat result = new Mat();

        //size of the new image - i.e. image 1 + image 2
        Size s = new Size(image1.cols() + image2.cols(), image1.rows());

        //using the homography matrix to warp the two images
        Imgproc.warpPerspective(image1, result, H, s);
        int i = image1.cols();
        Mat m = new Mat(result, new Rect(i, 0, image2.cols(), image2.rows()));
        image2.copyTo(m);

        Mat img_mat = new Mat();
        Features2d.drawMatches(image1, keypoints1, image2, keypoints2, gm, img_mat, new Scalar(254, 0, 0), new Scalar(254, 0, 0), new MatOfByte(), 2);

        //creating the output file
        boolean imageStitched = Highgui.imwrite("imageStitched.jpg", result);
        boolean imageMatched = Highgui.imwrite("imageMatched.jpg", img_mat);
    }

    public static void main(String args[]){
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        initialise();
    }
}
I cannot embed images or post more than 2 links because of reputation points, so I've linked the incorrectly stitched image and an image showing the matched features between the 2 images (to give an understanding of the issue):
incorrect stitched image: http://oi61.tinypic.com/11ac01c.jpg
detected features: http://oi57.tinypic.com/29m3wif.jpg
It seems that you have a lot of outliers that make the homography estimation incorrect. So you can use the RANSAC method, which iteratively rejects those outliers.
It doesn't take much effort: just pass a third parameter to the findHomography function:
Mat H = Calib3d.findHomography(obj, scene, Calib3d.RANSAC, 3);
Edit
Also, make sure that the images given to the detector are 8-bit grayscale images, as mentioned here.
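For example, a minimal sketch of that conversion before detection (Imgproc.cvtColor with COLOR_BGR2GRAY gives the 8-bit single-channel input the detectors expect):

// Convert both inputs to 8-bit grayscale before feature detection.
Mat gray1 = new Mat(), gray2 = new Mat();
Imgproc.cvtColor(image1, gray1, Imgproc.COLOR_BGR2GRAY);
Imgproc.cvtColor(image2, gray2, Imgproc.COLOR_BGR2GRAY);
fd.detect(gray1, keypoints1);
fd.detect(gray2, keypoints2);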
The "incorrectly stitched image" you post looks like having a bad conditioned H matrix. Apart from +dervish suggestions, run:
cv::determinant(H) > 0.01
To check if your H matrix is "usable". If the matrix is badly conditioned, you get the effect you are showing.
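In the Java bindings the same check would presumably look like this (Core.determinant works on the 3x3 CV_64F matrix that findHomography returns):

// Reject a homography whose determinant is close to zero.
if (Math.abs(Core.determinant(H)) < 0.01) {
    System.out.println("H is badly conditioned; discard this stitch");
}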
You are also drawing onto a 2x2 canvas; if that's the case, you won't see plenty of stitching configurations, i.e. it's OK for image A on the left of image B but not otherwise. Try drawing the output onto a 3x3 canvas instead, using the following snippet:
// Use the Homography Matrix to warp the images, but offset it to the
// center of the output canvas. Careful to pre-multiply, not post-multiply.
cv::Mat Offset = (cv::Mat_<double>(3,3) << 1, 0, width,
                                           0, 1, height,
                                           0, 0, 1);
H = Offset * H;

cv::Mat result;
cv::warpPerspective(mat_l, result, H, cv::Size(3*width, 3*height));

// Copy the reference image to the center of the 3x3 output canvas.
cv::Mat roi = result.colRange(width, 2*width).rowRange(height, 2*height);
mat_r.copyTo(roi);
where width and height are those of the input images, assumed to both be the same size. Note that this warps mat_l and leaves mat_r unchanged (flat), stitching the warped image onto it.
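Since the question uses the Java API, the same idea might look like the rough, untested sketch below, with width = image1.cols() and height = image1.rows() (my naming, not the answer's):

// Offset the homography so the warped image lands in the middle of a
// 3x3 canvas. Pre-multiply: the offset is applied after H.
Mat offset = Mat.eye(3, 3, CvType.CV_64F);
offset.put(0, 2, width);
offset.put(1, 2, height);
Mat offsetH = new Mat();
Core.gemm(offset, H, 1, new Mat(), 0, offsetH);   // offsetH = offset * H

Mat result = new Mat();
Imgproc.warpPerspective(image1, result, offsetH, new Size(3 * width, 3 * height));

// Copy the reference image into the center cell of the canvas.
Mat roi = result.submat(new Rect(width, height, width, height));
image2.copyTo(roi);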
This is my situation: I need to align a scanned image to compensate for incorrect scanning, and I must do the alignment in my Java program.
These are more details:
There is a table-like form printed on a sheet of paper, which will be scanned into an image file.
I will open the picture with Java, and I will have an OVERLAY of text boxes.
The text boxes are supposed to align correctly with the scanned image.
In order to align correctly, my Java program must analyze the scanned image, detect the coordinates of the edges of the table on it, and position the image and the text boxes so that both align properly (in case of incorrect scanning).
You see, the person scanning the page might not place it in a perfectly correct position, so I need my program to automatically align the scanned image as it loads it. This program will be reused on many such scanned images, so it needs to be flexible in this way.
My question is one of the following:
How can I use Java to detect the y coordinate of the upper edge of the table and the x coordinate of its leftmost edge? The table is a regular table with many cells and a thin black border, printed on a white sheet of paper (horizontal printout).
If an easier method exists to automatically align the scanned image in such a way that all scanned images have the graphical table aligned to the same x, y coordinates, then share this method :).
If you don't know the answer to the above two questions, do tell me where I should start. I don't know much about Java graphics programming and I have about 1 month to finish this program. Just assume that I have a tight schedule and have to make the graphics part as simple as possible.
Cheers and thank you.
Try to start from a simple scenario and then improve the approach.
1. Detect corners.
2. Find the corners in the boundaries of the form.
3. Using the form corner coordinates, calculate the rotation angle.
4. Rotate/scale the image.
5. Map the position of each field in the form relative to the form origin coordinates.
6. Match the textboxes.
The program presented at the end of this post does steps 1 to 3. It was implemented using the Marvin Framework. The image below shows the output image with the detected corners.
The program also outputs: Rotation angle:1.6365770416167182
Source code:
import java.awt.Color;
import java.awt.Point;
import marvin.image.MarvinImage;
import marvin.io.MarvinImageIO;
import marvin.plugin.MarvinImagePlugin;
import marvin.util.MarvinAttributes;
import marvin.util.MarvinPluginLoader;

public class FormCorners {

    public FormCorners(){
        // Load plug-in
        MarvinImagePlugin moravec = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.corner.moravec");
        MarvinAttributes attr = new MarvinAttributes();

        // Load image
        MarvinImage image = MarvinImageIO.loadImage("./res/printedForm.jpg");

        // Process and save output image
        moravec.setAttribute("threshold", 2000);
        moravec.process(image, null, attr);
        Point[] boundaries = boundaries(attr);
        image = showCorners(image, boundaries, 12);
        MarvinImageIO.saveImage(image, "./res/printedForm_output.jpg");

        // Print rotation angle
        double angle = (Math.atan2((boundaries[1].y*-1)-(boundaries[0].y*-1), boundaries[1].x-boundaries[0].x) * 180 / Math.PI);
        angle = angle >= 0 ? angle : angle + 360;
        System.out.println("Rotation angle:" + angle);
    }

    private Point[] boundaries(MarvinAttributes attr){
        Point upLeft = new Point(-1,-1);
        Point upRight = new Point(-1,-1);
        Point bottomLeft = new Point(-1,-1);
        Point bottomRight = new Point(-1,-1);
        double ulDistance=9999, blDistance=9999, urDistance=9999, brDistance=9999;
        double tempDistance=-1;
        int[][] cornernessMap = (int[][]) attr.get("cornernessMap");

        for(int x=0; x<cornernessMap.length; x++){
            for(int y=0; y<cornernessMap[0].length; y++){
                if(cornernessMap[x][y] > 0){
                    if((tempDistance = Point.distance(x, y, 0, 0)) < ulDistance){
                        upLeft.x = x; upLeft.y = y;
                        ulDistance = tempDistance;
                    }
                    if((tempDistance = Point.distance(x, y, cornernessMap.length, 0)) < urDistance){
                        upRight.x = x; upRight.y = y;
                        urDistance = tempDistance;
                    }
                    if((tempDistance = Point.distance(x, y, 0, cornernessMap[0].length)) < blDistance){
                        bottomLeft.x = x; bottomLeft.y = y;
                        blDistance = tempDistance;
                    }
                    if((tempDistance = Point.distance(x, y, cornernessMap.length, cornernessMap[0].length)) < brDistance){
                        bottomRight.x = x; bottomRight.y = y;
                        brDistance = tempDistance;
                    }
                }
            }
        }
        return new Point[]{upLeft, upRight, bottomRight, bottomLeft};
    }

    private MarvinImage showCorners(MarvinImage image, Point[] points, int rectSize){
        MarvinImage ret = image.clone();
        for(Point p : points){
            ret.fillRect(p.x-(rectSize/2), p.y-(rectSize/2), rectSize, rectSize, Color.red);
        }
        return ret;
    }

    public static void main(String[] args) {
        new FormCorners();
    }
}
Edge detection is something that is typically done by enhancing the contrast between neighboring pixels, such that you get an easily detectable line, which is suitable for further processing.
To do this, a "kernel" transforms a pixel according to the pixel's initial value and the values of that pixel's neighbors. A good edge detection kernel will enhance the differences between neighboring pixels, and reduce the strength of a pixel with similar neighbors.
I would start by looking at the Sobel operator. This might not return results that are immediately useful to you; however, it will get you far closer than you would be if you were to approach the problem with little knowledge of the field.
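To make the kernel idea concrete, here is a small, self-contained Sobel sketch over a grayscale BufferedImage. It is an illustration of the operator, not code from this thread:

import java.awt.image.BufferedImage;

public class SobelSketch {
    // 3x3 Sobel kernels for horizontal and vertical gradients.
    static final int[][] GX = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static final int[][] GY = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

    // Convolve both kernels and combine the gradients into an edge image.
    public static BufferedImage sobel(BufferedImage gray) {
        int w = gray.getWidth(), h = gray.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int gx = 0, gy = 0;
                for (int j = -1; j <= 1; j++) {
                    for (int i = -1; i <= 1; i++) {
                        int v = gray.getRGB(x + i, y + j) & 0xFF; // gray level
                        gx += GX[j + 1][i + 1] * v;
                        gy += GY[j + 1][i + 1] * v;
                    }
                }
                int mag = Math.min(255, (int) Math.hypot(gx, gy));
                out.setRGB(x, y, (mag << 16) | (mag << 8) | mag);
            }
        }
        return out;
    }
}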
After you have some crisp, clean edges, you can use larger kernels to detect points where it seems that a 90° bend in two lines occurs; that might give you the pixel coordinates of the outer rectangle, which might be enough for your purposes.
With those outer coordinates, it still takes a bit of math to composite the new pixels from the average values of the old pixels as they are rotated and moved to "match". The results (especially if you do not know about anti-aliasing math) can be pretty bad, adding blur to the image.
Sharpening filters might be a solution, but they come with their own issues: mainly, they make the picture sharper by adding graininess. Too much, and it becomes obvious that the original image is not a high-quality scan.
I researched the libraries, but in the end I found it more convenient to code up my own edge detection methods.
The class below detects the black/grayed-out edges of a scanned sheet of paper that contains such edges, and returns the x and y coordinates of those edges, scanning from the rightmost or lower end (reverse = true) or from the top or left edge (reverse = false). The program also takes ranges along vertical edges (rangex) and along horizontal edges (rangey), both measured in pixels. The ranges determine which of the detected points are outliers.
The program does 4 vertical cuts and 4 horizontal cuts using the specified arrays, and retrieves the coordinates of the dark dots it finds. It uses the ranges to eliminate outliers: sometimes a little spot on the paper may cause an outlier point, and the smaller the range, the fewer the outliers. However, sometimes the edge is slightly tilted, so you don't want to make the range too small.
Have fun. It works perfectly for me.
import java.awt.image.BufferedImage;
import java.awt.Color;
import java.util.ArrayList;
import java.lang.Math;
import java.awt.Point;

public class EdgeDetection {

    public int[] horizontalCuts = {120, 220, 320, 420};
    public int[] verticalCuts = {300, 350, 375, 400};

    public void printEdgesTest(BufferedImage image, boolean reversex, boolean reversey, int rangex, int rangey){
        int[] mx = horizontalCuts;
        int[] my = verticalCuts;

        //you are getting edge points here
        //verticalEdge = true: scan along x at each horizontal cut to find a vertical edge
        int[] xEdges = getEdges(image, mx, reversex, true);
        int edgex = getEdge(xEdges, rangex);
        for(int x = 0; x < xEdges.length; x++){
            System.out.println("EDGE = " + xEdges[x]);
        }
        System.out.println("THE EDGE = " + edgex);

        //verticalEdge = false: scan along y at each vertical cut to find a horizontal edge
        //reversey = true starts the scan at image.getHeight()-1; false starts at y = 0
        int[] yEdges = getEdges(image, my, reversey, false);
        int edgey = getEdge(yEdges, rangey);
        for(int y = 0; y < yEdges.length; y++){
            System.out.println("EDGE = " + yEdges[y]);
        }
        System.out.println("THE EDGE = " + edgey);
    }

    //This function takes an array of coordinates, detects outliers,
    //and computes the average of the non-outlier points.
    public int getEdge(int[] edges, int range){
        ArrayList<Integer> result = new ArrayList<Integer>();
        boolean[] passes = new boolean[edges.length];
        int[][] differences = new int[edges.length][edges.length-1];

        //THIS CODE SEGMENT SAVES THE DIFFERENCES BETWEEN THE POINTS INTO AN ARRAY
        for(int n = 0; n < edges.length; n++){
            for(int m = 0; m < edges.length; m++){
                if(m < n){
                    differences[n][m] = edges[n] - edges[m];
                }else if(m > n){
                    differences[n][m-1] = edges[n] - edges[m];
                }
            }
        }

        //This array determines which points are outliers or not (fall within range of other points)
        for(int n = 0; n < edges.length; n++){
            passes[n] = false;
            for(int m = 0; m < edges.length-1; m++){
                if(Math.abs(differences[n][m]) < range){
                    passes[n] = true;
                    System.out.println("EDGECHECK = TRUE" + n);
                    break;
                }
            }
        }

        //Create a new array only using valid points
        for(int i = 0; i < edges.length; i++){
            if(passes[i]){
                result.add(edges[i]);
            }
        }

        //Calculate the rounded mean... This will be the x/y coordinate of the edge.
        //Whether they are x or y values depends on the "reverse" variable used to calculate the edges array.
        int divisor = result.size();
        int addend = 0;
        double mean = 0;
        for(Integer i : result){
            addend += i;
        }
        mean = (double)addend / (double)divisor;

        //returns the mean of the valid points: this is the x or y coordinate of your calculated edge.
        if(mean - (int)mean >= .5){
            System.out.println("MEAN " + mean);
            return (int)mean + 1;
        }else{
            System.out.println("MEAN " + mean);
            return (int)mean;
        }
    }

    //this function finds "dark" points, which include light gray, to detect edges.
    //reverse - when true, starts scanning from x = image.getWidth()-1 (or y = image.getHeight()-1) and counts down to 0; when false, counts up from 0
    //verticalEdge - determines whether you want to detect a vertical edge or a horizontal edge
    //arr[] - the coordinates of the vertical or horizontal cuts to perform;
    //set the arr[] array according to the graphical layout of your scanned image
    //image - the image you want to detect the black/white edges of
    public int[] getEdges(BufferedImage image, int[] arr, boolean reverse, boolean verticalEdge){
        int red = 255;
        int green = 255;
        int blue = 255;
        int[] result = new int[arr.length];
        for(int n = 0; n < arr.length; n++){
            for(int m = reverse ? (verticalEdge ? image.getWidth() : image.getHeight())-1 : 0; reverse ? m >= 0 : m < (verticalEdge ? image.getWidth() : image.getHeight());){
                Color c = new Color(image.getRGB(verticalEdge ? m : arr[n], verticalEdge ? arr[n] : m));
                red = c.getRed();
                green = c.getGreen();
                blue = c.getBlue();
                //determine if the point is considered "dark" or not.
                //modify the range if you want to only include really dark spots.
                //occasionally, though, the edge might be blurred out, and light gray helps.
                if(red < 239 && green < 239 && blue < 239){
                    result[n] = m;
                    break;
                }
                //count forwards or backwards depending on the reverse variable
                if(reverse){
                    m--;
                }else{
                    m++;
                }
            }
        }
        return result;
    }
}
For a similar problem I solved in the past, I basically figured out the orientation of the form, re-aligned it, re-scaled it, and I was all set. You can use the Hough transform to detect the angular offset of the image (i.e. how much it is rotated), but you still need to detect the boundaries of the form. It also had to accommodate the boundaries of the piece of paper itself.
This was a lucky break for me, because it basically showed a black and white image in the middle of a big black border.
1. Apply an aggressive 5x5 median filter to remove some noise.
2. Convert from grayscale to black and white (rescale intensity values from [0,255] to [0,1]).
3. Calculate the Principal Component Analysis (i.e. calculate the eigenvectors of the covariance matrix of your image's pixel coordinates) (http://en.wikipedia.org/wiki/Principal_component_analysis#Derivation_of_PCA_using_the_covariance_method).
4. This gives you a basis vector. You simply use that to re-orient your image to a standard basis matrix (i.e. [1,0],[0,1]).
Your image is now aligned beautifully. I did this for normalizing the orientation of MRI scans of entire human brains.
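For the 2D case here, a bare-bones sketch of that orientation estimate (PCA on the coordinates of the dark pixels; the principal-axis angle comes from the 2x2 covariance matrix) might look like this, assuming a thresholded BufferedImage where the form pixels are dark:

// Estimate the dominant orientation of the dark pixels: the angle of
// the first eigenvector of their coordinate covariance matrix.
static double orientationDegrees(java.awt.image.BufferedImage img) {
    double sx = 0, sy = 0, n = 0;
    for (int y = 0; y < img.getHeight(); y++)
        for (int x = 0; x < img.getWidth(); x++)
            if ((img.getRGB(x, y) & 0xFF) < 128) { sx += x; sy += y; n++; }
    double mx = sx / n, my = sy / n;   // centroid of the dark pixels
    double cxx = 0, cyy = 0, cxy = 0;  // covariance accumulators
    for (int y = 0; y < img.getHeight(); y++)
        for (int x = 0; x < img.getWidth(); x++)
            if ((img.getRGB(x, y) & 0xFF) < 128) {
                cxx += (x - mx) * (x - mx);
                cyy += (y - my) * (y - my);
                cxy += (x - mx) * (y - my);
            }
    // Angle of the principal axis of the covariance matrix.
    return Math.toDegrees(0.5 * Math.atan2(2 * cxy, cxx - cyy));
}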
You also know that you have a massive black border around the actual image. You simply keep deleting rows from the top and bottom, and columns from both sides, until the border is gone. You can temporarily apply a 7x7 median or mode filter to a copy of the image at this point; it helps rule out too much border remaining in the final image from thumbprints, dirt, etc.
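A simple sketch of that border trimming, assuming a grayscale BufferedImage and a darkness threshold of 32 (both my assumptions):

// Shave the black scan border by dropping outer rows and columns whose
// mean intensity stays below the darkness threshold.
static java.awt.image.BufferedImage trimBorder(java.awt.image.BufferedImage img) {
    int top = 0, bottom = img.getHeight() - 1, left = 0, right = img.getWidth() - 1;
    while (top < bottom && rowMean(img, top) < 32) top++;
    while (bottom > top && rowMean(img, bottom) < 32) bottom--;
    while (left < right && colMean(img, left) < 32) left++;
    while (right > left && colMean(img, right) < 32) right--;
    return img.getSubimage(left, top, right - left + 1, bottom - top + 1);
}
static double rowMean(java.awt.image.BufferedImage img, int y) {
    long sum = 0;
    for (int x = 0; x < img.getWidth(); x++) sum += img.getRGB(x, y) & 0xFF;
    return (double) sum / img.getWidth();
}
static double colMean(java.awt.image.BufferedImage img, int x) {
    long sum = 0;
    for (int y = 0; y < img.getHeight(); y++) sum += img.getRGB(x, y) & 0xFF;
    return (double) sum / img.getHeight();
}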
I have computed the values of the points I need to draw and put them in a Point3f array; I have 526,435 points. The next step is to draw the points one after another so that it looks like a continuous line, or to draw lines between the points using a LineStripArray. Because I didn't find how to draw point by point, I've tried to use LineStripArray.
I've taken the code from here and replaced the content of the method "createLineTypes" with the following code:
Group lineGroup = new Group();
Appearance app = new Appearance();
ColoringAttributes ca = new ColoringAttributes(black, ColoringAttributes.SHADE_FLAT);
app.setColoringAttributes(ca);

Computing comp = new Computing();
//the following row computes the values of the points I want to draw
//and it works alright; the points are saved into
//**static List<Vector3> computed_values** from class Computing,
//where the Vector3 class actually defines a 3D point
comp.do_the_job();

Point3f[] dashPts = new Point3f[Computing.computed_values.size()];
for(int i = 0; i < Computing.computed_values.size(); i++)
{
    dashPts[i] = new Point3f((int)Computing.computed_values.get(i).getX(), (int)Computing.computed_values.get(i).getY(), (int)Computing.computed_values.get(i).getZ());
}
System.out.print(Computing.computed_values.size());

int[] a = {Computing.computed_values.size()};
LineStripArray dash = new LineStripArray(Computing.computed_values.size(), LineArray.COORDINATES, a);
dash.setCoordinates(0, dashPts);

LineAttributes dashLa = new LineAttributes();
dashLa.setLineWidth(1.0f);
dashLa.setLinePattern(LineAttributes.PATTERN_SOLID);

Shape3D dashShape = new Shape3D(dash, app);
lineGroup.addChild(dashShape);
return lineGroup;
The problem is that it only draws 1 vertical line. Any ideas?
I've found a solution using PointArray. Here it is:
Group lineGroup = new Group();
Appearance app = new Appearance();
ColoringAttributes ca = new ColoringAttributes(black, ColoringAttributes.SHADE_FLAT);
app.setColoringAttributes(ca);

Computing comp = new Computing();
comp.do_the_job();

Point3f[] plaPts = new Point3f[Computing.computed_values.size()];
for(int i = 0; i < Computing.computed_values.size(); i++)
{
    plaPts[i] = new Point3f((float)Computing.computed_values.get(i).getX(), (float)Computing.computed_values.get(i).getY(), (float)Computing.computed_values.get(i).getZ());
}

PointArray pla = new PointArray(Computing.computed_values.size(), GeometryArray.COORDINATES);
pla.setCoordinates(0, plaPts);

Shape3D plShape = new Shape3D(pla, app);
lineGroup.addChild(plShape);
return lineGroup;
Important: for my application, I had to cast the values to float, not to int as I initially did.