I have a number of TextButtons I can drag, and on release I check whether they overlap an Image. Currently, either a particular object registers a collision no matter where it sits on screen, or it never collides at all. Note that I'm not using the native DragAndDrop class, but have adapted a parallel implementation from a book.
Given that my TextButtons move when I drag them, I think the following function is updating the (x,y) of the object:
public void touchDragged(InputEvent event, float eventOffsetX, float eventOffsetY, int pointer)
{
    // move the actor by the distance the pointer traveled since the grab point
    float deltaX = eventOffsetX - grabOffsetX;
    float deltaY = eventOffsetY - grabOffsetY;
    a.moveBy(deltaX, deltaY);
}
Is it correct that the Actor a's x and y change due to moveBy? I ask because my collision detection later on - where I examine the dragged object's coordinates - reports the same x,y coordinates for the dragged object no matter where I release it. Here's the log for releasing the object from two different locations on the screen:
Does (626.8995, 393.1301)(923.8995, 393.1301)(923.8995, 499.1301)(626.8995, 499.1301) fit into (610.0, 256.0)(990.0, 256.0)(990.0, 677.0)(610.0, 677.0)?
Does (626.8995, 393.1301)(923.8995, 393.1301)(923.8995, 499.1301)(626.8995, 499.1301) fit into (610.0, 256.0)(990.0, 256.0)(990.0, 677.0)(610.0, 677.0)?
and here's the collision detection and sys out generating those log messages:
// this is called by the dragged obj, a, on touchUp() against each of the targets
public boolean overlaps(Actor other)
{
    // a is the first, dragged object; other is the target
    if (poly1 == null)
        poly1 = getPolygon(a);
    Polygon poly2 = getPolygon(other);

    float[] p1v = poly1.getVertices();
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < p1v.length - 1; i += 2)
        sb.append("(").append(p1v[i]).append(", ").append(p1v[i+1]).append(")");
    float[] p2v = poly2.getVertices();
    StringBuilder sb2 = new StringBuilder();
    for (int i = 0; i < p2v.length - 1; i += 2)
        sb2.append("(").append(p2v[i]).append(", ").append(p2v[i+1]).append(")");
    System.out.println("Does " + sb + " fit into " + sb2 + "?");

    // initial test to improve performance
    if (!poly1.getBoundingRectangle().overlaps(poly2.getBoundingRectangle()))
        return false;
    return Intersector.overlapConvexPolygons(poly1, poly2);
}
public Polygon getPolygon(Actor a) {
    Polygon p = new Polygon();
    // float[] vertex = { a.getOriginX(), a.getOriginY(), a.getOriginX(), a.getY(), a.getX(), a.getY(), a.getX(), a.getOriginY() };
    // float[] vertex = { 0, 0, a.getWidth(), 0, a.getWidth(), a.getHeight(), 0, a.getHeight() };
    float[] vertex = { a.getX(), a.getY(),
                       a.getX() + a.getWidth(), a.getY(),
                       a.getX() + a.getWidth(), a.getY() + a.getHeight(),
                       a.getX(), a.getY() + a.getHeight() };
    p.setVertices(vertex);
    // p.setPosition(a.getX(), a.getY());
    // p.setOrigin(a.getOriginX(), a.getOriginY());
    return p;
}
There is a HORDE of collision detection posts already on Stack Overflow, and they helped some in showing me how to form valid polygons. Perhaps the third-party drag and drop is why I'm not finding my answer in the wealth of knowledge out there, but I'm leaning towards some annoying mistake I'm overlooking.
Score another one for logic failure. Originally I thought I'd just be saving the dimensions of the dragged polygon when I decided to cache it, thinking it would save a few polygon creation steps as it was checked against a number of potential targets. Later, as I kept reworking what values to feed the polygon vertices, I tied in the location of the polygon as well. So it was just caching the first place I dragged it to, and using that every time I dragged it somewhere.
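In case it helps: a minimal sketch of the fix, using the same names as the code above. Drop the caching and rebuild the dragged object's polygon from its current position on every check:

public boolean overlaps(Actor other)
{
    Polygon poly1 = getPolygon(a);      // fresh vertices at the actor's current position
    Polygon poly2 = getPolygon(other);
    // initial test to improve performance
    if (!poly1.getBoundingRectangle().overlaps(poly2.getBoundingRectangle()))
        return false;
    return Intersector.overlapConvexPolygons(poly1, poly2);
}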
Thanks for the comment, it helped me move past thinking I wasn't understanding the classes. I'm doubtful this particular mistake/resolution will ever be of use to someone else, and would be very understanding if this post is removed.
I have tried to create an NPC character that can "see" the player by using cones of vision.
The NPC rotates back and forth at all times.
My problem is that the arc used for collision has a fixed, unchanging position, even though it looks correct when drawn to the screen.
[Screenshots of the collisions in action][1]
[GitHub link for java files][2]
I'm using Arc2D to draw the shape like this in my NPC class
// Update the shapes used in the npc
rect.setRect(x, y, w, h);
ellipse.setFrame(rect);
visionArc.setArcByCenter(cx, cy, visionDistance, visionAngle, visionAngle * 2, Arc2D.PIE);
// cx, cy: the center of the npc
// visionDistance: the radius of the arc (how far the npc can see)
// visionAngle: a constant around 45 degrees; the extent, visionAngle * 2, is around 90 degrees (to make a pie shape)
I've tried multiplying the position and the angles by the sin and cosine of the NPC's current angle
something like these:
visionArc.setArcByCenter(cx * Math.cos(Math.toRadians(angle)), cy * Math.sin(Math.toRadians(angle)), visionDistance, visionAngle, visionAngle * 2, Arc2D.PIE);
or
visionArc.setArcByCenter(cx, cy, visionDistance, visionAngle - angle, (visionAngle + angle) * 2, Arc2D.PIE);
or
visionArc.setArcByCenter(cx, cy, visionDistance, visionAngle * (Math.cos(Math.toRadians(angle))), visionAngle * 2, Arc2D.PIE);
I've tried a lot but can't seem to find what works. Making the vision angles not constant makes an arc that expands and contracts, and multiplying the position by the sin or cosine of the angle will make the arc fly around the screen, which doesn't really work either.
This is the function that draws the given NPC
public void drawNPC(NPC npc, Graphics2D g2, AffineTransform old) {
    // translate to the position of the npc and rotate
    AffineTransform npcTransform = AffineTransform.getRotateInstance(Math.toRadians(npc.angle), npc.x, npc.y);
    // Translate back a few units to keep the npc rotating about its own center point
    npcTransform.translate(-npc.halfWidth, -npc.halfHeight);
    g2.setTransform(npcTransform);

    // g2.draw(npc.rect); // <-- show bounding box if you want
    g2.setColor(npc.outlineColor);
    g2.draw(npc.visionArc);
    g2.setColor(Color.BLACK);
    g2.draw(npc.ellipse);

    g2.setTransform(old);
}
This is my collision detection algorithm - NPC is a superclass of Ninja (shorter range, wider peripheral vision):
public void checkNinjas(Level level) {
    for (int i = 0; i < level.ninjas.size(); i++) {
        Ninja ninja = level.ninjas.get(i);
        playerRect = level.player.rect;
        // Check collision
        if (playerRect.getBounds2D().intersects(ninja.visionArc.getBounds2D())) {
            // Create an area of the object for greater precision
            Area area = new Area(playerRect);
            area.intersect(new Area(ninja.visionArc));
            // After checking if the area intersects a second time, make the NPC "see" the player
            if (!area.isEmpty()) {
                ninja.seesPlayer = true;
            } else {
                ninja.seesPlayer = false;
            }
        }
    }
}
Can you help me correct the actual positions of the arcs for my collision detection? I tried creating separate shapes so I could have one to do math on and one to draw to the screen, but I scrapped that and am starting again from here.
[1]: https://i.stack.imgur.com/rUvTM.png
[2]: https://github.com/ShadowDraco/ArcCollisionDetection
After a few days of coding, learning, and testing new ideas, I came back to this program, implemented the collision detection using my original idea (ray casting), and created the equivalent with rays!
Screenshot of the new product
Github link to the project that taught me the solution
Here's the new math
public void setRays() {
    for (int i = 0; i < rays.length; i++) {
        // direction of ray i, offset from the NPC's current facing angle
        double rayStartAngleX = Math.sin(Math.toRadians((startAngle - angle) + i));
        double rayStartAngleY = Math.cos(Math.toRadians((startAngle - angle) + i));
        // each ray runs from the NPC's center out to visionDistance
        rays[i].setLine(cx, cy, cx + visionDistance * rayStartAngleX, cy + visionDistance * rayStartAngleY);
    }
}
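A minimal sketch of how the rays can then be checked against the player (assuming rays[] holds java.awt.geom.Line2D instances, which setLine suggests, and that the player exposes a Rectangle2D bounding box as in the earlier code):

// true if any ray segment crosses the player's bounding box
public boolean seesPlayer(Rectangle2D playerRect) {
    for (Line2D ray : rays) {
        if (ray.intersects(playerRect)) {
            return true;
        }
    }
    return false;
}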
Here is a link to the program I started after I asked this question and moved on to learn more, and an image of what the new product looks like.
(The original GitHub page has been updated with a new branch :) I'm learning GitHub right now too.)
I do not believe that using Arc2D in the way I intended is possible. There is a setArcByTangent method, and it may be possible to use that, but I wasn't going to get into it. Rays are cooler.
I'm trying to detect a collision between a small rectangle around the cursor and a "Connector", which is basically just a line between two points.
Now, I've decided to use the Intersector.intersectLinePolygon(p1, p2, polygon) method to do so, but when I run the code, it detects a collision every time any of the rectangle's X or Y points are in the same range as the line's bounding box, and I can't really get my head around it. The desired result is the collision being reported only when the rectangle is actually touching the line.
Vector3 worldPos = cam.unproject(new Vector3(mouseX, mouseY, 0));
Rectangle rect = new Rectangle(worldPos.x - 4, worldPos.y - 4, 8, 8);

Boolean connectorIntersected = false;

for (int i = 0; i < nodeConnectorHandler.getAllConnectors().size(); i++) {
    // Getting the two points that make the connector line
    Node n1 = nodeConnectorHandler.getAllConnectors().get(i).getFrom();
    Node n2 = nodeConnectorHandler.getAllConnectors().get(i).getTo();
    float x1 = n1.getCX();
    float y1 = n1.getCY();
    float x2 = n2.getCX();
    float y2 = n2.getCY();

    // Making a polygon out of rect
    Polygon p = new Polygon(new float[] {
        rect.getX(),
        rect.getY(),
        rect.getX() + 8f,
        rect.getY(),
        rect.getX() + 8f,
        rect.getY() + 8f,
        rect.getX(),
        rect.getY() + 8f
    });

    // Checking if the line intersects the polygon (representing the rectangle around the cursor)
    if (Intersector.intersectLinePolygon(new Vector2(x1, y1), new Vector2(x2, y2), p))
    {
        selectedIndex = nodeConnectorHandler.getAllConnectors().get(i).getID();
        System.out.println("ConnectorIntersected!");
        connectorIntersected = true;
    }
    break;
}
The code reports a collision every time the rectangle is in these areas (shown in yellow, approximately):
photoshopped image link
The red line in between those two dots is the "connector".
The cursor is right below the line. It reports a collision in those yellow areas, spanning across the whole game world.
I suppose I'm either not using the function properly or I've made some obvious mistake. Or is this how the function should react? I really don't know. Thanks for any help :)
Ok, apparently I used the wrong method. intersectSegmentPolygon works as expected. My bad ¯\_(ツ)_/¯.
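For anyone hitting the same thing: as I understand it, intersectLinePolygon treats the two points as defining an infinite line, so anything lined up with the connector's direction reports a hit, while intersectSegmentPolygon tests only the finite segment between the points. Against the code above it's a one-line change:

// test the finite segment between the two points, not the infinite line through them
if (Intersector.intersectSegmentPolygon(new Vector2(x1, y1), new Vector2(x2, y2), p))
{
    selectedIndex = nodeConnectorHandler.getAllConnectors().get(i).getID();
    System.out.println("ConnectorIntersected!");
    connectorIntersected = true;
}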
I started learning LibGdx and Java recently, and it has been going well so far.
I'm facing an issue with collision detection.
I have two sprites, which can be represented as two shapes - a polygon and a circle - that may collide/intersect at any given moment. Once these two shapes collide, something will get triggered.
So far, this is what I have done. It kinda works, but it is not accurate. This is called inside the Render() function:
public boolean CollectPowerUp(PowerUps powerUp) {
    if (powerUp.position.dst(position) < Constants.PLAYER_HEIGHT - 3) {
        Gdx.app.log("Collected PowerUp", "TRUE");
        EnablePowerUp(powerUp);
        return true;
    }
    return false;
}
I have searched many websites, and most of the solutions involve other software like 2DCube or PhysicsEditor. Is it possible to perform this intersection solely by using LibGdx and Java? If so, what should I look into?
Thanks
The Intersector class has many static methods that can be used for collision detection.
If your polygon is a rectangle, you can use:
Intersector.overlaps(Circle c, Rectangle r)
else
Polygon polygon = new Polygon();
polygon.setVertices(new float[]{0,0,.......});
Circle circle = new Circle(x, y, radius);

float points[] = polygon.getTransformedVertices();
for (int i = 0; i < points.length; i += 2) {
    if (circle.contains(points[i], points[i+1])) {
        System.out.println("Collide");
    }
}
EDIT
The above code only detects a collision when a polygon vertex is inside the circle. What about the cases where:
the circle is completely inside the polygon, or
some part of the circle is inside the polygon but all of the polygon's vertices are outside the circle?
Create a polygon for the circle that acts as a circle in the view and as a polygon in the model:
float radius = 100;
FloatArray floatArray = new FloatArray();
int accuracy = 24; // degrees per step; use 1 for a nearly complete circle
for (int angle = 0; angle < 360; angle += accuracy) {
    floatArray.add(radius * MathUtils.cosDeg(angle));
    floatArray.add(radius * MathUtils.sinDeg(angle));
}
Polygon circle = new Polygon(floatArray.toArray()); // a polygon whose vertices lie on the circumference of the circle

float[] circularPoint = circle.getTransformedVertices();
for (int i = 0; i < circularPoint.length; i += 2) {
    if (polygon.contains(circularPoint[i], circularPoint[i+1])) {
        System.out.println("Collide With circumference");
        break;
    }
}
There's a nice article on collision detection at www.gamedevelopment.blog which shows how to detect collisions for most shapes. This is the libGDX circle/polygon collision detection method shown in the article:
public boolean contains(Polygon poly, Circle circ) {
    final float[] vertices = poly.getTransformedVertices(); // get all points for this polygon (x and y)
    final int numFloats = vertices.length; // the number of coordinate values (x and y)
    // loop through each edge of the polygon
    for (int i = 0; i < numFloats; i += 2) {
        // get the first point of the edge (x and y of the first vertex)
        Vector2 start = new Vector2(vertices[i], vertices[i + 1]);
        // get the second point of the edge (modulo wraps the last edge back to the first vertex)
        Vector2 end = new Vector2(vertices[(i + 2) % numFloats], vertices[(i + 3) % numFloats]);
        // get the center of the circle
        Vector2 center = new Vector2(circ.x, circ.y);
        // get the squared radius
        float squareRadius = circ.radius * circ.radius;
        // report a collision as soon as any edge segment intersects the circle
        if (Intersector.intersectSegmentCircle(start, end, center, squareRadius)) {
            return true;
        }
    }
    return false;
}
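A quick usage sketch (the shapes and numbers here are made up for illustration):

Polygon hull = new Polygon(new float[]{ 0, 0, 100, 0, 100, 60, 0, 60 });
hull.setPosition(200, 150);
Circle ball = new Circle(230, 170, 25);
if (contains(hull, ball)) {
    System.out.println("Collide");
}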
There are many useful methods in the Intersector class which can be used for collision detection.
I'm trying to determine whether two rectangles border each other. If they share an edge or part of an edge, then I want to include them; if they only share a vertex, then I don't.
I've tried using android.graphics.Rect; I was hoping that the intersect method would return true, giving me a rectangle with 0 width but the points of the intersecting edge. I'm using AndEngine and also tried the collidesWith method of org.andengine.entity.primitive.Rectangle, but that returns true even if the rectangles only share one corner vertex.
Is there a nice way of doing this? The only other way I can think of is to create a collection of all the edges and then see whether they're equal or partly equal.
Here's an image to demonstrate what I want. If I click on rect 1 then I want to return rects 2,3 and 4, but not 5.
"Map":
It sounds like you need a new class to do this. I would take the coordinates of each corner of the rectangles. Then, when you select a rectangle, you can get those adjacent to it by finding them one side at a time. Starting with the top as an example: you check which other rectangles have corners at the same height, and from that list, you check which ones exist on at least one point between the two top corners. So, if the top left is 0,3 and the top right is 4,3, then you would look for the list of corners at y=3; from that list you find all corners where 0<=x<=4, and anything that fits is adjacent. You then do the same thing for each additional side. It should be an easy class to make, but I am not going to write any code, as I do not know how you stored your data or how you would reference this in your code. If you need help with that, write a comment.
Write a function that finds which rectangles share edges, checking each rectangle against all the considered rectangles.
Then, map the rectangles which share edges to one another. An Adjacency List is just a way of representing a graph in code.
Sometimes code is easier to understand, so here's code. I have not tested this, but it should get you most of the way there.
Also, I'm not sure what your end goal is here, but here's a question I answered that deals with rectangular compression.
List<Rectangle> allRectangles;

// true if r1 and r2 share an edge or part of an edge;
// touching at a single corner point does not count
public boolean shareAnEdge(Rectangle r1, Rectangle r2){
    int top1 = r1.y + r1.height;
    int top2 = r2.y + r2.height;
    int right1 = r1.x + r1.width;
    int right2 = r2.x + r2.width;
    // horizontal edges touch, and the x-intervals overlap in more than a single point
    boolean horizontalShared = (top1 == r2.y || top2 == r1.y)
            && Math.max(r1.x, r2.x) < Math.min(right1, right2);
    // vertical edges touch, and the y-intervals overlap in more than a single point
    boolean verticalShared = (right1 == r2.x || right2 == r1.x)
            && Math.max(r1.y, r2.y) < Math.min(top1, top2);
    return horizontalShared || verticalShared;
}

public List<Rectangle> findSharedEdgesFor(Rectangle input){
    List<Rectangle> output = new ArrayList<Rectangle>();
    for(Rectangle r : allRectangles){
        if(r != input && shareAnEdge(r, input)){
            output.add(r);
        }
    }
    return output;
}

public AdjacencyList createGraph(List<Rectangle> rectangles){
    AdjacencyList graph = new AdjacencyList();
    for(Rectangle r : rectangles){
        List<Rectangle> sharedEdges = findSharedEdgesFor(r);
        for(Rectangle shared : sharedEdges){
            graph.createEdgeBetween(r, shared);
        }
    }
    return graph;
}
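Hypothetical usage, with clickedRectangle standing in for whatever rectangle the user selected (AdjacencyList is the assumed graph type from above):

AdjacencyList graph = createGraph(allRectangles);
// for the example image: returns rects 2, 3 and 4, but not 5
List<Rectangle> adjacent = findSharedEdgesFor(clickedRectangle);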
This is my situation: I need to align a scanned image in my Java program, accounting for incorrect scanning.
These are more details:
There is a table-like form printed on a sheet of paper, which will be scanned into an image file.
I will open the picture with Java, and I will have an OVERLAY of text boxes.
The text boxes are supposed to align correctly with the scanned image.
In order to align correctly, my Java program must analyze the scanned image, detect the coordinates of the edges of the table in it, and then position the image and the textboxes so that both align properly (in case of incorrect scanning).
You see, the person scanning the image might not place the paper in a perfectly correct position, so I need my program to automatically align the scanned image as it loads it. This program will be reused on many such scanned images, so I need it to be flexible in this way.
My question is one of the following:
How can I use Java to detect the y-coordinate of the upper edge of the table and the x-coordinate of its leftmost edge? The table is a regular table with many cells, with a thin black border, printed on a white sheet of paper (horizontal printout).
If an easier method exists to automatically align the scanned image so that every scanned image ends up with the table at the same x, y coordinates, then share that method :).
If you don't know the answer to the above two questions, tell me where I should start. I don't know much about graphics programming in Java and I have about one month to finish this program. Just assume that I have a tight schedule and have to keep the graphics part as simple as possible.
Cheers and thank you.
Try to start from a simple scenario and then improve the approach.
1. Detect corners.
2. Find the corners at the boundaries of the form.
3. Using the form corner coordinates, calculate the rotation angle.
4. Rotate/scale the image.
5. Map the position of each field in the form relative to the form origin coordinates.
6. Match the textboxes.
The program presented at the end of this post performs steps 1 to 3. It was implemented using the Marvin Framework. The image below shows the output image with the detected corners.
The program also outputs: Rotation angle:1.6365770416167182
Source code:
import java.awt.Color;
import java.awt.Point;

import marvin.image.MarvinImage;
import marvin.io.MarvinImageIO;
import marvin.plugin.MarvinImagePlugin;
import marvin.util.MarvinAttributes;
import marvin.util.MarvinPluginLoader;

public class FormCorners {

    public FormCorners(){
        // Load plug-in
        MarvinImagePlugin moravec = MarvinPluginLoader.loadImagePlugin("org.marvinproject.image.corner.moravec");
        MarvinAttributes attr = new MarvinAttributes();

        // Load image
        MarvinImage image = MarvinImageIO.loadImage("./res/printedForm.jpg");

        // Process and save output image
        moravec.setAttribute("threshold", 2000);
        moravec.process(image, null, attr);
        Point[] boundaries = boundaries(attr);
        image = showCorners(image, boundaries, 12);
        MarvinImageIO.saveImage(image, "./res/printedForm_output.jpg");

        // Print rotation angle
        double angle = (Math.atan2((boundaries[1].y*-1)-(boundaries[0].y*-1), boundaries[1].x-boundaries[0].x) * 180 / Math.PI);
        angle = angle >= 0 ? angle : angle + 360;
        System.out.println("Rotation angle:" + angle);
    }

    private Point[] boundaries(MarvinAttributes attr){
        Point upLeft = new Point(-1,-1);
        Point upRight = new Point(-1,-1);
        Point bottomLeft = new Point(-1,-1);
        Point bottomRight = new Point(-1,-1);
        double ulDistance=9999, blDistance=9999, urDistance=9999, brDistance=9999;
        double tempDistance=-1;
        int[][] cornernessMap = (int[][]) attr.get("cornernessMap");

        for(int x=0; x<cornernessMap.length; x++){
            for(int y=0; y<cornernessMap[0].length; y++){
                if(cornernessMap[x][y] > 0){
                    if((tempDistance = Point.distance(x, y, 0, 0)) < ulDistance){
                        upLeft.x = x; upLeft.y = y;
                        ulDistance = tempDistance;
                    }
                    if((tempDistance = Point.distance(x, y, cornernessMap.length, 0)) < urDistance){
                        upRight.x = x; upRight.y = y;
                        urDistance = tempDistance;
                    }
                    if((tempDistance = Point.distance(x, y, 0, cornernessMap[0].length)) < blDistance){
                        bottomLeft.x = x; bottomLeft.y = y;
                        blDistance = tempDistance;
                    }
                    if((tempDistance = Point.distance(x, y, cornernessMap.length, cornernessMap[0].length)) < brDistance){
                        bottomRight.x = x; bottomRight.y = y;
                        brDistance = tempDistance;
                    }
                }
            }
        }
        return new Point[]{upLeft, upRight, bottomRight, bottomLeft};
    }

    private MarvinImage showCorners(MarvinImage image, Point[] points, int rectSize){
        MarvinImage ret = image.clone();
        for(Point p : points){
            ret.fillRect(p.x-(rectSize/2), p.y-(rectSize/2), rectSize, rectSize, Color.red);
        }
        return ret;
    }

    public static void main(String[] args) {
        new FormCorners();
    }
}
Edge detection is typically done by enhancing the contrast between neighboring pixels, such that you get an easily detectable line, which is suitable for further processing.
To do this, a "kernel" transforms a pixel according to the pixel's initial value and the values of that pixel's neighbors. A good edge detection kernel will enhance the differences between neighboring pixels, and reduce the strength of a pixel with similar neighbors.
I would start by looking at the Sobel operator. This might not return results that are immediately useful to you; however, it will get you far closer than you would be if you approached the problem with little knowledge of the field.
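For concreteness, here is a minimal sketch of a Sobel pass over a grayscale BufferedImage (plain Java, no libraries; the class and method names are just for illustration):

import java.awt.image.BufferedImage;

public class SobelSketch {
    // Returns a gradient-magnitude image; bright pixels mark strong edges.
    public static BufferedImage apply(BufferedImage src) {
        int[][] gx = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} }; // horizontal kernel
        int[][] gy = { {-1, -2, -1}, {0, 0, 0}, {1, 2, 1} }; // vertical kernel
        int w = src.getWidth(), h = src.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int sx = 0, sy = 0;
                for (int ky = -1; ky <= 1; ky++) {
                    for (int kx = -1; kx <= 1; kx++) {
                        int v = src.getRGB(x + kx, y + ky) & 0xFF; // gray level of the neighbor
                        sx += gx[ky + 1][kx + 1] * v;
                        sy += gy[ky + 1][kx + 1] * v;
                    }
                }
                int mag = Math.min(255, (int) Math.hypot(sx, sy));
                out.setRGB(x, y, (mag << 16) | (mag << 8) | mag);
            }
        }
        return out;
    }
}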
After you have some crisp, clean edges, you can use larger kernels to detect points where a roughly 90-degree bend in two lines occurs; that might give you the pixel coordinates of the outer rectangle, which might be enough for your purposes.
With those outer coordinates, it still is a bit of math to composite the new pixels from the average values of the old pixels, rotated and moved to "match". The results (especially if you do not know about anti-aliasing math) can be pretty bad, adding blur to the image.
Sharpening filters might be a solution, but they come with their own issues: they make the picture sharper by adding graininess. Add too much, and it becomes obvious that the original image is not a high-quality scan.
I researched the libraries, but in the end I found it more convenient to code up my own edge detection methods.
The class below detects the black/grayed-out edges of a scanned sheet of paper that contains such edges, and returns the x and y coordinates of the edges of the sheet, scanning from the right or bottom edge (reverse = true) or from the left or top edge (reverse = false). The program also takes ranges along vertical edges (rangex) and along horizontal edges (rangey), measured in pixels. The ranges determine which of the detected points are outliers.
The program does 4 vertical cuts and 4 horizontal cuts using the specified arrays, and retrieves the positions of the dark dots. It uses the ranges to eliminate outliers; sometimes a little spot on the paper causes an outlier point. The smaller the range, the fewer the outliers. However, sometimes the edge is slightly tilted, so you don't want to make the range too small.
Have fun. It works perfectly for me.
import java.awt.image.BufferedImage;
import java.awt.Color;
import java.util.ArrayList;

public class EdgeDetection {

    public int[] horizontalCuts = {120, 220, 320, 420};
    public int[] verticalCuts = {300, 350, 375, 400};

    public void printEdgesTest(BufferedImage image, boolean reversex, boolean reversey, int rangex, int rangey){
        int[] mx = horizontalCuts;
        int[] my = verticalCuts;

        // you are getting edge points here
        // the "true" parameter indicates that it performs a cut starting at 0 (left edge)
        int[] xEdges = getEdges(image, mx, reversex, true);
        int edgex = getEdge(xEdges, rangex);
        for(int x = 0; x < xEdges.length; x++){
            System.out.println("EDGE = " + xEdges[x]);
        }
        System.out.println("THE EDGE = " + edgex);

        // the "false" parameter indicates you are doing your cut starting at the end (image.getHeight())
        // and ending at 0; if the parameter was true, it would start the cuts at y = 0
        int[] yEdges = getEdges(image, my, reversey, false);
        int edgey = getEdge(yEdges, rangey);
        for(int y = 0; y < yEdges.length; y++){
            System.out.println("EDGE = " + yEdges[y]);
        }
        System.out.println("THE EDGE = " + edgey);
    }

    // This function takes an array of coordinates, detects outliers,
    // and computes the average of the non-outlier points.
    public int getEdge(int[] edges, int range){
        ArrayList<Integer> result = new ArrayList<Integer>();
        boolean[] passes = new boolean[edges.length];
        int[][] differences = new int[edges.length][edges.length-1];

        // Save the differences between the points into an array
        for(int n = 0; n < edges.length; n++){
            for(int m = 0; m < edges.length; m++){
                if(m < n){
                    differences[n][m] = edges[n] - edges[m];
                }else if(m > n){
                    differences[n][m-1] = edges[n] - edges[m];
                }
            }
        }

        // Determine which points are outliers (i.e. do not fall within range of any other point)
        for(int n = 0; n < edges.length; n++){
            passes[n] = false;
            for(int m = 0; m < edges.length-1; m++){
                if(Math.abs(differences[n][m]) < range){
                    passes[n] = true;
                    System.out.println("EDGECHECK = TRUE" + n);
                    break;
                }
            }
        }

        // Create a new list using only the valid points
        for(int i = 0; i < edges.length; i++){
            if(passes[i]){
                result.add(edges[i]);
            }
        }

        // Calculate the rounded mean... This will be the x/y coordinate of the edge.
        // Whether they are x or y values depends on the "reverse" variable used to calculate the edges array.
        int divisor = result.size();
        int addend = 0;
        double mean = 0;
        for(Integer i : result){
            addend += i;
        }
        mean = (double)addend / (double)divisor;

        // returns the mean of the valid points: this is the x or y coordinate of your calculated edge
        if(mean - (int)mean >= .5){
            System.out.println("MEAN " + mean);
            return (int)mean + 1;
        }else{
            System.out.println("MEAN " + mean);
            return (int)mean;
        }
    }

    // This function finds "dark" points, which include light gray, to detect edges.
    // reverse - when true, starts counting from x = image.getWidth() or y = image.getHeight() down to 0
    // verticalEdge - determines whether you want to detect a vertical edge or a horizontal edge
    // arr[] - the coordinates of the vertical or horizontal cuts you will do;
    //         set the arr[] array according to the graphical layout of your scanned image
    // image - the image you want to detect black/white edges of
    public int[] getEdges(BufferedImage image, int[] arr, boolean reverse, boolean verticalEdge){
        int red = 255;
        int green = 255;
        int blue = 255;
        int[] result = new int[arr.length];
        for(int n = 0; n < arr.length; n++){
            for(int m = reverse ? (verticalEdge ? image.getWidth() : image.getHeight()) - 1 : 0;
                reverse ? m >= 0 : m < (verticalEdge ? image.getWidth() : image.getHeight()); ){
                Color c = new Color(image.getRGB(verticalEdge ? m : arr[n], verticalEdge ? arr[n] : m));
                red = c.getRed();
                green = c.getGreen();
                blue = c.getBlue();
                // determine if the point is considered "dark" or not;
                // tighten the range if you want to include only really dark spots
                // (occasionally the edge is blurred out, and light gray helps)
                if(red < 239 && green < 239 && blue < 239){
                    result[n] = m;
                    break;
                }
                // count forwards or backwards depending on the reverse variable
                if(reverse){
                    m--;
                }else{
                    m++;
                }
            }
        }
        return result;
    }
}
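A quick hypothetical usage (the file name is a placeholder, and the cut coordinates above should be adjusted to your scan's layout):

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class EdgeDetectionDemo {
    public static void main(String[] args) throws Exception {
        BufferedImage scan = ImageIO.read(new File("scan.jpg")); // placeholder path
        EdgeDetection detector = new EdgeDetection();
        // detect the left and top edges, allowing 20 px of spread between the cut results
        detector.printEdgesTest(scan, false, false, 20, 20);
    }
}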
A similar problem I solved in the past basically figured out the orientation of the form, re-aligned it, re-scaled it, and I was all set. You can use the Hough transform to detect the angular offset of the image (i.e. how much it is rotated), but you still need to detect the boundaries of the form. It also had to accommodate the boundaries of the piece of paper itself.
This was a lucky break for me, because it basically showed a black and white image in the middle of a big black border.
1) Apply an aggressive 5x5 median filter to remove some noise.
2) Convert from grayscale to black and white (rescale intensity values from [0,255] to [0,1]).
3) Calculate the Principal Component Analysis (i.e. calculate the Eigenvectors of the covariance matrix for your image from the calculated Eigenvalues) (http://en.wikipedia.org/wiki/Principal_component_analysis#Derivation_of_PCA_using_the_covariance_method).
4) This gives you a basis vector. You simply use that to re-orient your image to a standard basis matrix (i.e. [1,0],[0,1]).
Your image is now aligned beautifully. I did this for normalizing the orientation of MRI scans of entire human brains.
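For step 3, a rough sketch of the idea (my own illustration, not the original code; it assumes a black-and-white BufferedImage where the form pixels are dark on a light background): collect the coordinates of the dark pixels, build their 2x2 covariance matrix, and take the angle of its principal eigenvector.

// Estimate the image's orientation from the covariance of dark-pixel coordinates.
// Returns the angle of the principal axis in radians.
public static double orientationAngle(BufferedImage bw) {
    double sumX = 0, sumY = 0;
    long n = 0;
    for (int y = 0; y < bw.getHeight(); y++)
        for (int x = 0; x < bw.getWidth(); x++)
            if ((bw.getRGB(x, y) & 0xFF) < 128) { sumX += x; sumY += y; n++; }
    double mx = sumX / n, my = sumY / n;
    double sxx = 0, sxy = 0, syy = 0;
    for (int y = 0; y < bw.getHeight(); y++)
        for (int x = 0; x < bw.getWidth(); x++)
            if ((bw.getRGB(x, y) & 0xFF) < 128) {
                double dx = x - mx, dy = y - my;
                sxx += dx * dx; sxy += dx * dy; syy += dy * dy;
            }
    // closed form for the principal eigenvector angle of a 2x2 covariance matrix
    return 0.5 * Math.atan2(2 * sxy, sxx - syy);
}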
You also know that you have a massive black border around the actual image. You simply keep deleting rows from the top and bottom, and columns from both sides of the image, until they are all gone. You can temporarily apply a 7x7 median or mode filter to a copy of the image at this point; it helps rule out too much border remaining in the final image from thumbprints, dirt, etc.