Detect rectangles from pixel data retrieved from binary png image - java

I have retrieved all pixels from a PNG image, which looks like this.
I started by scanning horizontally for filled runs to get the start and end pixel of each horizontal line, but while scanning a row I also pick up fragments of the vertical lines crossing it. I guess it is possible to code all of this manually, but it takes time. My question: does anyone have experience with OpenCV, ImageJ, or another library, and can tell whether any of them could solve this problem? I am also open to algorithm suggestions. (Using Java.)
-> The biggest problem is the thickness of the lines, between 1 and 4 pixels; otherwise I could easily retrieve the joint points.

Here is a possible solution using image morphology. The code below is in C++, since I only have little experience with Java.
To solve your problem, you need:
Thinning - to reduce thick lines to one-pixel-wide lines.
Hit-or-miss transform - to find patterns in a binary image, i.e. the corner and joint points.
The bad news is that neither operation is supported in OpenCV as of version 2.4.3. The good news is that I have implemented both; the code is available on my blog:
void thinning(cv::Mat& im)
void hitmiss(cv::Mat& src, cv::Mat& dst, cv::Mat& kernel)
I will be using my thinning() and hitmiss() functions and your test image.
After loading the image, convert it to a single-channel binary image.
cv::Mat im = cv::imread("D1Xnm.png");
cv::Mat bw;
cv::cvtColor(im, bw, CV_BGR2GRAY);
cv::threshold(~bw, bw, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
Since the width of the lines varies from 1 to 4 pixels, perform thinning to get one-pixel-wide lines.
thinning(bw);
From the thinned image, notice that there are both perfect and imperfect joint points, as shown in the figure below.
To cover both the perfect and imperfect joint points, we need the following kernels for the hit-or-miss transform.
std::vector<cv::Mat> k;
k.push_back((cv::Mat_<char>(5,5) << -1,-1, 0,-1,-1,
                                    -1,-1, 0,-1,-1,
                                     0, 0, 0, 0, 1,
                                    -1,-1, 0, 0,-1,
                                    -1,-1, 1,-1,-1 ));
k.push_back((cv::Mat_<char>(5,5) << -1,-1, 0,-1,-1,
                                    -1,-1, 0,-1,-1,
                                     1, 0, 0, 0, 0,
                                    -1, 0, 0,-1,-1,
                                    -1,-1, 1,-1,-1 ));
k.push_back((cv::Mat_<char>(5,5) << -1,-1, 1,-1,-1,
                                    -1,-1, 0,-1,-1,
                                     1, 0, 0, 0, 0,
                                    -1, 0, 0,-1,-1,
                                    -1,-1, 0,-1,-1 ));
k.push_back((cv::Mat_<char>(5,5) << -1,-1, 1,-1,-1,
                                    -1,-1, 0,-1,-1,
                                     0, 0, 0, 0, 1,
                                    -1,-1, 0, 0,-1,
                                    -1,-1, 0,-1,-1 ));
cv::Mat dst = cv::Mat::zeros(bw.size(), CV_8U);
for (int i = 0; i < k.size(); i++)
{
    cv::Mat tmp;
    hitmiss(bw, tmp, k[i]);
    dst |= tmp;
}
With the joint points successfully located, draw them on the original image to make the result clearer.
std::vector<std::vector<cv::Point> > cnt;
cv::findContours(dst.clone(), cnt, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
cv::drawContours(im, cnt, -1, CV_RGB(255,0,0), 10);
For the sake of completeness, here is the full code. With some effort you can port it to Java.
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

// One iteration of the Zhang-Suen thinning algorithm.
void thinningIteration(cv::Mat& im, int iter)
{
    cv::Mat marker = cv::Mat::zeros(im.size(), CV_8UC1);
    // Stop one pixel short of each border so the 8-neighborhood stays in bounds.
    for (int i = 1; i < im.rows - 1; i++)
    {
        for (int j = 1; j < im.cols - 1; j++)
        {
            uchar p2 = im.at<uchar>(i-1, j);
            uchar p3 = im.at<uchar>(i-1, j+1);
            uchar p4 = im.at<uchar>(i, j+1);
            uchar p5 = im.at<uchar>(i+1, j+1);
            uchar p6 = im.at<uchar>(i+1, j);
            uchar p7 = im.at<uchar>(i+1, j-1);
            uchar p8 = im.at<uchar>(i, j-1);
            uchar p9 = im.at<uchar>(i-1, j-1);

            int A  = (p2 == 0 && p3 == 1) + (p3 == 0 && p4 == 1) +
                     (p4 == 0 && p5 == 1) + (p5 == 0 && p6 == 1) +
                     (p6 == 0 && p7 == 1) + (p7 == 0 && p8 == 1) +
                     (p8 == 0 && p9 == 1) + (p9 == 0 && p2 == 1);
            int B  = p2 + p3 + p4 + p5 + p6 + p7 + p8 + p9;
            int m1 = iter == 0 ? (p2 * p4 * p6) : (p2 * p4 * p8);
            int m2 = iter == 0 ? (p4 * p6 * p8) : (p2 * p6 * p8);

            if (A == 1 && (B >= 2 && B <= 6) && m1 == 0 && m2 == 0)
                marker.at<uchar>(i,j) = 1;
        }
    }
    im &= ~marker;
}

// Reduce the white regions of a binary image to one-pixel-wide lines.
void thinning(cv::Mat& im)
{
    im /= 255;
    cv::Mat prev = cv::Mat::zeros(im.size(), CV_8UC1);
    cv::Mat diff;
    do {
        thinningIteration(im, 0);
        thinningIteration(im, 1);
        cv::absdiff(im, prev, diff);
        im.copyTo(prev);
    }
    while (cv::countNonZero(diff) > 0);
    im *= 255;
}

// Hit-or-miss transform: kernel entries are 1 (hit), -1 (miss), 0 (don't care).
void hitmiss(cv::Mat& src, cv::Mat& dst, cv::Mat& kernel)
{
    CV_Assert(src.type() == CV_8U && src.channels() == 1);
    cv::Mat k1 = (kernel == 1) / 255;
    cv::Mat k2 = (kernel == -1) / 255;
    cv::normalize(src, src, 0, 1, cv::NORM_MINMAX);
    cv::Mat e1, e2;
    cv::erode(src, e1, k1, cv::Point(-1,-1), 1, cv::BORDER_CONSTANT, cv::Scalar(0));
    cv::erode(1 - src, e2, k2, cv::Point(-1,-1), 1, cv::BORDER_CONSTANT, cv::Scalar(0));
    dst = e1 & e2;
}

int main()
{
    cv::Mat im = cv::imread("D1Xnm.png");
    if (im.empty())
        return -1;

    // Convert to an inverted single-channel binary image (lines become white).
    cv::Mat bw;
    cv::cvtColor(im, bw, CV_BGR2GRAY);
    cv::threshold(~bw, bw, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);

    // Reduce the 1-4 pixel wide lines to one-pixel width.
    thinning(bw);

    // Kernels covering both the perfect and imperfect joint points.
    std::vector<cv::Mat> k;
    k.push_back((cv::Mat_<char>(5,5) << -1,-1, 0,-1,-1,
                                        -1,-1, 0,-1,-1,
                                         0, 0, 0, 0, 1,
                                        -1,-1, 0, 0,-1,
                                        -1,-1, 1,-1,-1 ));
    k.push_back((cv::Mat_<char>(5,5) << -1,-1, 0,-1,-1,
                                        -1,-1, 0,-1,-1,
                                         1, 0, 0, 0, 0,
                                        -1, 0, 0,-1,-1,
                                        -1,-1, 1,-1,-1 ));
    k.push_back((cv::Mat_<char>(5,5) << -1,-1, 1,-1,-1,
                                        -1,-1, 0,-1,-1,
                                         1, 0, 0, 0, 0,
                                        -1, 0, 0,-1,-1,
                                        -1,-1, 0,-1,-1 ));
    k.push_back((cv::Mat_<char>(5,5) << -1,-1, 1,-1,-1,
                                        -1,-1, 0,-1,-1,
                                         0, 0, 0, 0, 1,
                                        -1,-1, 0, 0,-1,
                                        -1,-1, 0,-1,-1 ));

    // OR together the hit-or-miss responses of all four kernels.
    cv::Mat dst = cv::Mat::zeros(bw.size(), CV_8U);
    for (int i = 0; i < k.size(); i++)
    {
        cv::Mat tmp;
        hitmiss(bw, tmp, k[i]);
        dst |= tmp;
    }

    // Draw the detected joint points on the original image.
    std::vector<std::vector<cv::Point> > cnt;
    cv::findContours(dst.clone(), cnt, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    cv::drawContours(im, cnt, -1, CV_RGB(255,0,0), 10);

    cv::imshow("src", im);
    cv::imshow("bw", bw*255);
    cv::imshow("dst", dst*255);
    cv::waitKey();
    return 0;
}
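Note: the code above targets OpenCV 2.4.3, where both operations had to be hand-rolled. Newer OpenCV releases ship both building blocks, so a Java port can lean on the library instead. Below is a rough, untested sketch of the same pipeline, assuming the OpenCV 3.4+ Java bindings (which provide Imgproc.MORPH_HITMISS) plus the contrib ximgproc module (which provides Ximgproc.thinning); only the first of the four joint kernels is shown.

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.ximgproc.Ximgproc;

public class JointPoints {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Load, invert and binarize, as in the C++ version
        Mat im = Imgcodecs.imread("D1Xnm.png");
        Mat bw = new Mat();
        Imgproc.cvtColor(im, bw, Imgproc.COLOR_BGR2GRAY);
        Core.bitwise_not(bw, bw);
        Imgproc.threshold(bw, bw, 0, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);

        // Library thinning replaces the custom thinning() above
        Ximgproc.thinning(bw, bw);

        // First of the four 5x5 joint kernels; 1 = hit, -1 = miss, 0 = don't care
        Mat kernel = new Mat(5, 5, CvType.CV_32S);
        kernel.put(0, 0,
                -1, -1,  0, -1, -1,
                -1, -1,  0, -1, -1,
                 0,  0,  0,  0,  1,
                -1, -1,  0,  0, -1,
                -1, -1,  1, -1, -1);

        // Library hit-or-miss replaces the custom hitmiss() above;
        // repeat for the other three kernels and OR the results together
        Mat dst = new Mat();
        Imgproc.morphologyEx(bw, dst, Imgproc.MORPH_HITMISS, kernel);
    }
}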

Take a look at OpenCV squares.c demo, or one of the threads below:
Detect square using opencv2.4 (C)
OpenCV C++/Obj-C: Detecting a sheet of paper / Square Detection (C++)
OpenCV C++/Obj-C: Advanced square detection (C++)

You can use morphological operations on the binary image you got. In MATLAB you can play with bwmorph:
bw = I == 0; % look at dark lines in image I
[y x] = find( bwmorph( bw, 'branchpoints' ) );
This will give you the x, y coordinates of the junction points.

Related

How to Convert Python Code to Java without numpy

I have a method in Python that uses OpenCV to remove the background from an image. I want the same functionality to work with Android's version of OpenCV, but I just can't seem to wrap my head around how the arrays work and how I can process them.
This is what I have so far in Java:
private Bitmap GetForeground(Bitmap source){
    source = scale(source,300,300);
    Mat mask = Mat.zeros(source.getHeight(),source.getWidth(),CvType.CV_8U);
    Mat bgModel = Mat.zeros(1,65,CvType.CV_64F);
    Mat ftModel = Mat.zeros(1,65,CvType.CV_64F);
    int x = (int)Math.round(source.getWidth()*0.1);
    int y = (int)Math.round(source.getHeight()*0.1);
    int width = (int)Math.round(source.getWidth()*0.8);
    int height = (int)Math.round(source.getHeight()*0.8);
    Rect rect = new Rect(x,y, width,height);
    Mat sourceMat = new Mat();
    Utils.bitmapToMat(source, sourceMat);
    Imgproc.grabCut(sourceMat, mask, rect, bgModel, ftModel, 5, Imgproc.GC_INIT_WITH_RECT);
    int frameSize=sourceMat.rows()*sourceMat.cols();
    byte[] buffer= new byte[frameSize];
    mask.get(0,0,buffer);
    for (int i = 0; i < frameSize; i++) {
        if (buffer[i] == 2 || buffer[i] == 0){
            buffer[i] = 0;
        }else{
            buffer[i] = 1;
        }
    }
    byte[][] sourceArray = getMultiChannelArray(sourceMat);
    byte[][][] reshapedMask = ReshapeArray(buffer, sourceMat.rows(), sourceMat.cols());
    return source;
}

private byte[][][] ReshapeArray(byte[] arr, int rows, int cols){
    byte[][][] out = new byte[rows][cols][1]; // note: [rows][cols], to match the [i][j] indexing below
    int index=0;
    for (int i = 0; i < rows; i++) {
        for (int j = 0; j < cols; j++) {
            out[i][j][0] = arr[index];
            index++;
        }
    }
    return out;
}

public static byte[][] getMultiChannelArray(Mat m) {
    // first index is pixel, second index is channel
    int numChannels = m.channels(); // is 3 for 8UC3 (e.g. RGB)
    int frameSize = m.rows() * m.cols();
    byte[] byteBuffer = new byte[frameSize * numChannels];
    m.get(0, 0, byteBuffer);
    // write to separate R,G,B arrays
    byte[][] out = new byte[frameSize][numChannels];
    for (int p = 0, i = 0; p < frameSize; p++) {
        for (int n = 0; n < numChannels; n++, i++) {
            out[p][n] = byteBuffer[i];
        }
    }
    return out;
}
The python code I want to recreate :
image = cv2.imread('Images/handheld.jpg')
image = imutils.resize(image, height = 300)
mask = np.zeros(image.shape[:2],np.uint8)
bgModel = np.zeros((1,65),np.float64)
frModel = np.zeros((1,65),np.float64)
height, width, d = np.array(image).shape
rect = (int(width*0.1),int(height*0.1),int(width*0.8),int(height*0.8))
cv2.grabCut(image, mask, rect, bgModel,frModel, 5,cv2.GC_INIT_WITH_RECT)
mask = np.where((mask==2) | (mask == 0),0,1).astype('uint8')
image = image*mask[:,:,np.newaxis]
I have no idea how to convert the last two lines of the Python code. If there is a way to just run Python cleanly on an Android device within my own project, that would also be awesome.
At this point, you should consider taking a look at the SL4A project, which would allow you to run your Python code on Android through a Java app.
Here are some interesting links:
https://github.com/damonkohler/sl4a
https://norwied.wordpress.com/2012/04/11/run-sl4a-python-script-from-within-android-app/
http://jokar-johnk.blogspot.com/2011/02/how-to-make-android-app-with-sl4a.html
Let's look at both commands and try to convert them to Java API calls. It may not be as simple as two lines of code.
mask = np.where((mask==2) | (mask == 0),0,1).astype('uint8')
In the above command, we are creating a new mask image with uint8 pixel values. The new mask matrix has the value 0 at every position where the previous mask had a value of either 2 or 0, and 1 otherwise. Let's demonstrate this with an example:
mask = [
[0, 1, 1, 2],
[1, 0, 1, 3],
[0, 1, 1, 2],
[2, 3, 1, 0],
]
After this operation the output would be:
mask = [
[0, 1, 1, 0],
[1, 0, 1, 1],
[0, 1, 1, 0],
[0, 1, 1, 0],
]
So the above command simply generates a binary mask with only 0 and 1 values. This can be replicated in Java using the Core.compare() method:
// Get a mask for all `1` values in matrix.
Mat mask1vals = new Mat();
Core.compare(mask, new Scalar(1), mask1vals, Core.CMP_EQ);
// Get a mask for all `3` values in matrix.
Mat mask3vals = new Mat();
Core.compare(mask, new Scalar(3), mask3vals, Core.CMP_EQ);
// Create a combined mask
Mat foregroundMask = new Mat();
Core.max(mask1vals, mask3vals, foregroundMask);
Now you need to multiply this foreground mask with the input image to get the final grabcut image:
// First convert the single-channel mask to a 3-channel mat
Imgproc.cvtColor(foregroundMask, foregroundMask, Imgproc.COLOR_GRAY2BGR);
// Now simply take the min operation (the mask is 0/255 after compare)
Mat out = new Mat();
Core.min(foregroundMask, image, out);
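For completeness, here is a minimal end-to-end sketch (untested) of the two Python lines, reusing sourceMat and mask from the question's Java code. The variable names are mine; the named constants Imgproc.GC_FGD (1) and Imgproc.GC_PR_FGD (3) make the intent of the compares explicit:

// Binary foreground mask: 255 where grabCut marked (probable) foreground, 0 elsewhere
Mat maskFgd = new Mat();
Mat maskPrFgd = new Mat();
Core.compare(mask, new Scalar(Imgproc.GC_FGD), maskFgd, Core.CMP_EQ);
Core.compare(mask, new Scalar(Imgproc.GC_PR_FGD), maskPrFgd, Core.CMP_EQ);
Mat foregroundMask = new Mat();
Core.max(maskFgd, maskPrFgd, foregroundMask);

// image * mask[:, :, np.newaxis]: since the mask is 0/255, min() keeps
// foreground pixels unchanged and zeroes out the background
Imgproc.cvtColor(foregroundMask, foregroundMask, Imgproc.COLOR_GRAY2BGR);
Mat out = new Mat();
Core.min(foregroundMask, sourceMat, out);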

Create an OpenCV Mat with simple pattern

In OpenCV, is there a fast way to create a Mat object where:
odd columns are '1'
even columns are '0'
For example :
1 0 1 0 1 0
1 0 1 0 1 0
1 0 1 0 1 0
The pattern is always the same.
The Mat can be big, and generating this pattern by looping is really slow.
OpenCV's repeat is there for exactly this:
#include <opencv2/opencv.hpp>
#include <vector>

using namespace std;
using namespace cv;

int main(int argc, char** argv)
{
    int rows = 1000;
    int cols = 1000;

    vector<uchar> pattern = { 1, 0 }; // change to int, double, etc. according to the type you want
    Mat m;
    repeat(pattern, rows, cols / 2, m);

    return 0;
}
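A rough Java equivalent of the same idea, as an untested sketch assuming the standard OpenCV Java bindings (Core, CvType and Mat imported from org.opencv.core): build the 1x2 building block explicitly, then tile it with Core.repeat.

int rows = 1000;
int cols = 1000;

// 1x2 building block: [1, 0]
Mat pattern = new Mat(1, 2, CvType.CV_8UC1);
pattern.put(0, 0, 1, 0);

// Tile it rows times vertically and cols/2 times horizontally -> rows x cols
Mat m = new Mat();
Core.repeat(pattern, rows, cols / 2, m);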
COMPARISON WITH OTHER METHODS
Just a small test to measure the performance of the proposed (so far) methods:
Time in milliseconds:
#Miki          [repeat] : 0.442786
#RonaldoMessi  [copyTo] : 7.26822
#Derman        [merge]  : 1.17588
The code I used for the test:
#include <opencv2/opencv.hpp>
#include <vector>
#include <iostream>

using namespace std;
using namespace cv;

int main(int argc, char** argv)
{
    int rows = 1000;
    int cols = 1000;

    {
        // #Miki
        double tic = (double)getTickCount();

        vector<uchar> pattern = { 1, 0 };
        Mat m1;
        repeat(pattern, rows, cols / 2, m1);

        double toc = ((double)getTickCount() - tic) * 1000 / getTickFrequency();
        cout << "#Miki [repeat] \t\t: " << toc << endl;
    }

    {
        // #RonaldoMessi
        double tic = (double)getTickCount();

        Mat m2(rows, cols, CV_8UC1);
        Mat vZeros = Mat::zeros(rows, 1, CV_8UC1);
        Mat vOnes = Mat::ones(rows, 1, CV_8UC1);
        for (int i = 0; i < cols - 1; i += 2)
        {
            vOnes.col(0).copyTo(m2.col(i));
            vZeros.col(0).copyTo(m2.col(i + 1));
        }

        double toc = ((double)getTickCount() - tic) * 1000 / getTickFrequency();
        cout << "#RonaldoMessi [copyTo] \t: " << toc << endl;
    }

    {
        // #Derman
        // NOTE: corrected to give correct output
        double tic = (double)getTickCount();

        Mat myMat[2];
        myMat[0] = cv::Mat::ones(rows, cols / 2, CV_8UC1);
        myMat[1] = cv::Mat::zeros(rows, cols / 2, CV_8UC1);
        Mat m3;
        merge(myMat, 2, m3);
        m3 = m3.reshape(1);

        double toc = ((double)getTickCount() - tic) * 1000 / getTickFrequency();
        cout << "#Derman [merge] \t: " << toc << endl;
    }

    getchar();
    return 0;
}
You can create two column vectors vZeros and vOnes, then copy these columns into the matrix M:

int cols = M.cols;
int rows = M.rows;

Mat vZeros = Mat::zeros(rows, 1, CV_64F);
Mat vOnes = Mat::ones(rows, 1, CV_64F);

for (int i = 0; i < cols - 1; i += 2)
{
    vOnes.col(0).copyTo(M.col(i));
    vZeros.col(0).copyTo(M.col(i + 1));
}
If a two-channel matrix won't bother you, this could be your choice:
int rows = 5;
int cols = 5;
cv::Mat myMat[2];
myMat[0] = cv::Mat::ones(rows, cols, CV_32FC1);
myMat[1] = cv::Mat::zeros(rows, cols, CV_32FC1);
cv::Mat result;
cv::merge(myMat, 2, result);
And this is your result:
[1, 0, 1, 0, 1, 0, 1, 0, 1, 0;
1, 0, 1, 0, 1, 0, 1, 0, 1, 0;
1, 0, 1, 0, 1, 0, 1, 0, 1, 0;
1, 0, 1, 0, 1, 0, 1, 0, 1, 0;
1, 0, 1, 0, 1, 0, 1, 0, 1, 0]

image.getRaster().getDataBuffer() returns array of negative values

This answer suggests that it's over 10 times faster to loop over the pixel array than to use BufferedImage.getRGB. Such a difference is too important to be ignored in my computer vision program. For that reason, I rewrote my integralImage method to calculate the integral image using the pixel array:
/* Generate an integral image. Every pixel of such an image contains the sum of
   the colors of all the pixels before it, plus its own.
*/
public static double[][][] integralImage(BufferedImage image) {
    // Cache width and height in variables
    int w = image.getWidth();
    int h = image.getHeight();
    // Create the 2D array as large as the image is
    // Notice that I use [Y, X] coordinates to comply with the formula
    double integral_image[][][] = new double[h][w][3];
    // Variables for the image pixel array looping
    final int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
    //final byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    // If the image has alpha, there will be 4 elements per pixel
    final boolean hasAlpha = image.getAlphaRaster() != null;
    final int pixel_size = hasAlpha ? 4 : 3;
    // If there's alpha it's the first of 4 values, so we skip it
    final int pixel_offset = hasAlpha ? 1 : 0;
    // Coordinates, will be iterated too
    // It's faster than calculating them using % and multiplication
    int x = 0;
    int y = 0;
    int pixel = 0;
    // Tmp storage for color
    int[] color = new int[3];
    // Loop through pixel array
    for (int i = 0, l = pixels.length; i < l; i += pixel_size) {
        // Prepare all the colors in advance
        color[2] = ((int) pixels[pixel + pixel_offset] & 0xff); // blue;
        color[1] = ((int) pixels[pixel + pixel_offset + 1] & 0xff); // green;
        color[0] = ((int) pixels[pixel + pixel_offset + 2] & 0xff); // red;
        // For every color, calculate the integrals
        for (int j = 0; j < 3; j++) {
            // Calculate the integral image field
            double A = (x > 0 && y > 0) ? integral_image[y - 1][x - 1][j] : 0;
            double B = (x > 0) ? integral_image[y][x - 1][j] : 0;
            double C = (y > 0) ? integral_image[y - 1][x][j] : 0;
            integral_image[y][x][j] = -A + B + C + color[j];
        }
        // Iterate coordinates
        x++;
        if (x >= w) {
            x = 0;
            y++;
        }
    }
    // Return the array
    return integral_image;
}
The problem is that if I use this debug output in the for loop:
if (x == 0) {
    System.out.println("rgb[" + pixels[pixel + pixel_offset + 2] + ", " + pixels[pixel + pixel_offset + 1] + ", " + pixels[pixel + pixel_offset] + "]");
    System.out.println("rgb[" + color[0] + ", " + color[1] + ", " + color[2] + "]");
}
This is what I get:
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
rgb[-16777216, -16777216, -16777216]
rgb[0, 0, 0]
...
So how should I properly retrieve the pixel array for BufferedImage images?
A bug in the code above, one that is easily missed, is that the for loop doesn't loop the way you'd expect: the loop updates i, while the loop body uses pixel for its array indexing. Thus, you will only ever see the values of pixels 1, 2 and 3.
Apart from that:
The "problem" with the negative pixel values is most likely that the code assumes a BufferedImage that stores its pixels in "pixel interleaved" form; however, they are stored "pixel packed". That is, all samples (R, G, B and A) for one pixel are stored in a single element, an int. This will be the case for all BufferedImage.TYPE_INT_* types (while the BufferedImage.TYPE_nBYTE_* types are stored interleaved).
It's completely normal to have negative values in the raster; this will happen for any pixel that is less than 50% transparent (more than or equal to 50% opaque), because of how the 4 samples are packed into the int, and because int is a signed type in Java.
In this case, use:
int[] color = new int[3];

for (int i = 0; i < pixels.length; i++) {
    // Assuming TYPE_INT_RGB, TYPE_INT_ARGB or TYPE_INT_ARGB_PRE
    // For TYPE_INT_BGR, you need to reverse the colors.
    // You seem to ignore alpha, is that correct?
    color[0] = ((pixels[i] >> 16) & 0xff); // red;
    color[1] = ((pixels[i] >>  8) & 0xff); // green;
    color[2] = ( pixels[i]        & 0xff); // blue;
    // The rest of the computations...
}
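A quick sanity check, as a minimal sketch, ties the packing to the debug output above: opaque black in TYPE_INT_ARGB is 0xFF000000, which a signed int prints as -16777216.

int pixel = 0xFF000000;                   // A=255, R=G=B=0, packed into one int
System.out.println(pixel);                // -16777216, as in the question's output
System.out.println((pixel >> 16) & 0xff); // 0: the red sample, correctly unpacked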
Another possibility is that you have created a custom type image (BufferedImage.TYPE_CUSTOM) that really uses a 32-bit unsigned int per sample. This is possible; however, int is still a signed entity in Java, so you need to mask off the sign bit. To complicate this a little, in Java -1 & 0xFFFFFFFF == -1, because any computation on an int will still be an int, unless you explicitly say otherwise (doing the same on a byte or short value would have "scaled up" to int). To get a positive value, you need to use a long value like this: -1 & 0xFFFFFFFFL (which is 4294967295).
In this case, use:
long[] color = new long[3];

for (int i = 0; i < pixels.length; i += pixel_size) {
    // Somehow assuming BGR order in input, and RGB output (color)
    // Still ignoring alpha
    color[0] = (pixels[i + pixel_offset + 2] & 0xFFFFFFFFL); // red;
    color[1] = (pixels[i + pixel_offset + 1] & 0xFFFFFFFFL); // green;
    color[2] = (pixels[i + pixel_offset    ] & 0xFFFFFFFFL); // blue;
    // The rest of the computations...
}
I don't know what type of image you have, so I can't say for sure which one is the problem, but it's one of those. :-)
PS: BufferedImage.getAlphaRaster() is possibly an expensive and also inaccurate way to tell if the image has alpha. It's better to just use image.getColorModel().hasAlpha(). See also hasAlpha vs getAlphaRaster.
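That is, the suggested check becomes a one-liner:

final boolean hasAlpha = image.getColorModel().hasAlpha();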

How to implement the main() method while using Processing in Java?

I tried to implement the following Processing code in a JFrame in NetBeans.
The code was taken from http://www.geekmomprojects.com/mpu-6050-dmp-data-from-i2cdevlib/
However, after running MyFrame.java it only shows a still image in the frame;
it is not picking up the serial readings.
MyProcessingSketch.java
package testprocessing;

/*
 * To change this license header, choose License Headers in Project Properties.
 * To change this template file, choose Tools | Templates
 * and open the template in the editor.
 */
import java.util.Arrays;

import processing.core.*;
import processing.serial.*;
import processing.opengl.*;
import toxi.geom.*;
import toxi.processing.*;

public class MyProcessingSketch extends PApplet {
    ToxiclibsSupport gfx;
    Serial port;                         // The serial port
    char[] teapotPacket = new char[14];  // InvenSense Teapot packet
    int serialCount = 0;                 // current packet byte position
    int synced = 0;
    int interval = 0;
    float[] q = new float[4];
    Quaternion quat = new Quaternion(1, 0, 0, 0);
    float[] gravity = new float[3];
    float[] euler = new float[3];
    float[] ypr = new float[3];
    @Override
    public void setup() {
        // 300px square viewport using OpenGL rendering
        size(300, 300, OPENGL);
        gfx = new ToxiclibsSupport(this);

        // setup lights and antialiasing
        lights();
        smooth();

        // display serial port list for debugging/clarity
        System.out.println(Arrays.toString(Serial.list()));

        // get the first available port (use EITHER this OR the specific port code below)
        //String portName = Serial.list()[0];

        // get a specific serial port (use EITHER this OR the first-available code above)
        String portName = "COM10";

        // open the serial port
        port = new Serial(this, portName, 9600);

        // send single character to trigger DMP init/start
        // (expected by MPU6050_DMP6 example Arduino sketch)
        port.write('r');
    }
    @Override
    public void draw() {
        if (millis() - interval > 1000) {
            // resend single character to trigger DMP init/start
            // in case the MPU is halted/reset while applet is running
            port.write('r');
            interval = millis();
        }

        // black background
        background(0);

        // translate everything to the middle of the viewport
        pushMatrix();
        translate(width / 2, height / 2);

        // 3-step rotation from yaw/pitch/roll angles (gimbal lock!)
        // ...and other weirdness I haven't figured out yet
        //rotateY(-ypr[0]);
        //rotateZ(-ypr[1]);
        //rotateX(-ypr[2]);

        // toxiclibs direct angle/axis rotation from quaternion (NO gimbal lock!)
        // (axis order [1, 3, 2] and inversion [-1, +1, +1] is a consequence of
        // different coordinate system orientation assumptions between Processing
        // and InvenSense DMP)
        float[] axis = quat.toAxisAngle();
        rotate(axis[0], -axis[1], axis[3], axis[2]);

        // draw main body in red
        fill(255, 0, 0, 200);
        box(10, 10, 200);

        // draw front-facing tip in blue
        fill(0, 0, 255, 200);
        pushMatrix();
        translate(0, 0, -120);
        rotateX(PI/2);
        drawCylinder(0, 20, 20, 8);
        popMatrix();

        // draw wings and tail fin in green
        fill(0, 255, 0, 200);
        beginShape(TRIANGLES);
        vertex(-100,  2, 30); vertex(0,  2, -80); vertex(100,  2, 30);  // wing top layer
        vertex(-100, -2, 30); vertex(0, -2, -80); vertex(100, -2, 30);  // wing bottom layer
        vertex(-2, 0, 98); vertex(-2, -30, 98); vertex(-2, 0, 70);      // tail left layer
        vertex( 2, 0, 98); vertex( 2, -30, 98); vertex( 2, 0, 70);      // tail right layer
        endShape();
        beginShape(QUADS);
        vertex(-100, 2, 30); vertex(-100, -2, 30); vertex( 0, -2, -80); vertex( 0, 2, -80);
        vertex( 100, 2, 30); vertex( 100, -2, 30); vertex( 0, -2, -80); vertex( 0, 2, -80);
        vertex(-100, 2, 30); vertex(-100, -2, 30); vertex(100, -2, 30); vertex(100, 2, 30);
        vertex(-2, 0, 98); vertex(2, 0, 98); vertex(2, -30, 98); vertex(-2, -30, 98);
        vertex(-2, 0, 98); vertex(2, 0, 98); vertex(2, 0, 70); vertex(-2, 0, 70);
        vertex(-2, -30, 98); vertex(2, -30, 98); vertex(2, 0, 70); vertex(-2, 0, 70);
        endShape();

        popMatrix();
    }
    void serialEvent(Serial port) {
        interval = millis();
        while (port.available() > 0) {
            int ch = port.read();

            if (synced == 0 && ch != '$') return;   // initial synchronization - also used to resync/realign if needed
            synced = 1;
            print((char)ch);

            if ((serialCount == 1 && ch != 2)
                    || (serialCount == 12 && ch != '\r')
                    || (serialCount == 13 && ch != '\n')) {
                serialCount = 0;
                synced = 0;
                return;
            }

            if (serialCount > 0 || ch == '$') {
                teapotPacket[serialCount++] = (char)ch;
                if (serialCount == 14) {
                    serialCount = 0; // restart packet byte position

                    // get quaternion from data packet
                    q[0] = ((teapotPacket[2] << 8) | teapotPacket[3]) / 16384.0f;
                    q[1] = ((teapotPacket[4] << 8) | teapotPacket[5]) / 16384.0f;
                    q[2] = ((teapotPacket[6] << 8) | teapotPacket[7]) / 16384.0f;
                    q[3] = ((teapotPacket[8] << 8) | teapotPacket[9]) / 16384.0f;
                    for (int i = 0; i < 4; i++) if (q[i] >= 2) q[i] = -4 + q[i];

                    // set our toxilibs quaternion to new data
                    quat.set(q[0], q[1], q[2], q[3]);

                    /*
                    // below calculations unnecessary for orientation only using toxilibs

                    // calculate gravity vector
                    gravity[0] = 2 * (q[1]*q[3] - q[0]*q[2]);
                    gravity[1] = 2 * (q[0]*q[1] + q[2]*q[3]);
                    gravity[2] = q[0]*q[0] - q[1]*q[1] - q[2]*q[2] + q[3]*q[3];

                    // calculate Euler angles
                    euler[0] = atan2(2*q[1]*q[2] - 2*q[0]*q[3], 2*q[0]*q[0] + 2*q[1]*q[1] - 1);
                    euler[1] = -asin(2*q[1]*q[3] + 2*q[0]*q[2]);
                    euler[2] = atan2(2*q[2]*q[3] - 2*q[0]*q[1], 2*q[0]*q[0] + 2*q[3]*q[3] - 1);

                    // calculate yaw/pitch/roll angles
                    ypr[0] = atan2(2*q[1]*q[2] - 2*q[0]*q[3], 2*q[0]*q[0] + 2*q[1]*q[1] - 1);
                    ypr[1] = atan(gravity[0] / sqrt(gravity[1]*gravity[1] + gravity[2]*gravity[2]));
                    ypr[2] = atan(gravity[1] / sqrt(gravity[0]*gravity[0] + gravity[2]*gravity[2]));

                    // output various components for debugging
                    //println("q:\t" + round(q[0]*100.0f)/100.0f + "\t" + round(q[1]*100.0f)/100.0f + "\t" + round(q[2]*100.0f)/100.0f + "\t" + round(q[3]*100.0f)/100.0f);
                    //println("euler:\t" + euler[0]*180.0f/PI + "\t" + euler[1]*180.0f/PI + "\t" + euler[2]*180.0f/PI);
                    //println("ypr:\t" + ypr[0]*180.0f/PI + "\t" + ypr[1]*180.0f/PI + "\t" + ypr[2]*180.0f/PI);
                    */
                }
            }
        }
    }
    void drawCylinder(float topRadius, float bottomRadius, float tall, int sides) {
        float angle = 0;
        float angleIncrement = TWO_PI / sides;
        beginShape(QUAD_STRIP);
        for (int i = 0; i < sides + 1; ++i) {
            vertex(topRadius*cos(angle), 0, topRadius*sin(angle));
            vertex(bottomRadius*cos(angle), tall, bottomRadius*sin(angle));
            angle += angleIncrement;
        }
        endShape();

        // If it is not a cone, draw the circular top cap
        if (topRadius != 0) {
            angle = 0;
            beginShape(TRIANGLE_FAN);
            // Center point
            vertex(0, 0, 0);
            for (int i = 0; i < sides + 1; i++) {
                vertex(topRadius * cos(angle), 0, topRadius * sin(angle));
                angle += angleIncrement;
            }
            endShape();
        }

        // If it is not a cone, draw the circular bottom cap
        if (bottomRadius != 0) {
            angle = 0;
            beginShape(TRIANGLE_FAN);
            // Center point
            vertex(0, tall, 0);
            for (int i = 0; i < sides + 1; i++) {
                vertex(bottomRadius * cos(angle), tall, bottomRadius * sin(angle));
                angle += angleIncrement;
            }
            endShape();
        }
    }

    public static void main(String[] args) {
        // fully qualified class name, since the sketch lives in the testprocessing package
        PApplet.main(new String[] { "--present", "testprocessing.MyProcessingSketch" });
    }
}
MyFrame.java
package testprocessing;

import javax.swing.JFrame;

public class MyFrame extends JFrame {
    private MyProcessingSketch mysketch;

    public MyFrame() {
        setTitle("IMU");
        setDefaultCloseOperation(EXIT_ON_CLOSE);
        mysketch = new MyProcessingSketch();
        mysketch.init();
        add(mysketch);
    }

    public static void main(String[] args) {
        MyFrame frame = new MyFrame();
        frame.pack();
        frame.setVisible(true);
    }
}

Knights tour backtracking lasts too long

How long does it take to solve the knight's tour problem with backtracking on an 8x8 board? My algorithm has already been computing for a long time and it seems like it won't finish. But when I try a 6x6 or 5x5 board, it finishes successfully.
the code:
import java.awt.Point;

class KnightsTour {
    private boolean[][] board;
    private int count, places;
    private static final Point[] moves = new Point[]{
        new Point(-2, -1),
        new Point(-2, 1),
        new Point(2, -1),
        new Point(2, 1),
        new Point(-1, -2),
        new Point(-1, 2),
        new Point(1, -2),
        new Point(1, 2)
    };

    public KnightsTour(int n) {
        board = new boolean[n][n];
        places = n * n;
        count = 0;
    }

    public boolean ride(int x, int y) {
        board[x][y] = true;
        count++;
        if (count == places) {
            return true;
        }
        for (Point p : moves) {
            int nextX = x + p.x;
            int nextY = y + p.y;
            if (nextX < 0 || nextX >= board.length || nextY < 0 || nextY >= board.length || board[nextX][nextY]) {
                continue;
            }
            if (ride(nextX, nextY)) {
                return true;
            }
        }
        board[x][y] = false;
        count--;
        return false;
    }
}
I came across the same problem. Everything runs smoothly up to n=7, and suddenly it takes forever to calculate for n=8. I hope this helps someone :)
The problem lies in the order in which you check the moves. You are using:
xMove[8] = { -2, -2, 2, 2, -1, -1, 1, 1}
yMove[8] = { -1, 1, -1, 1, -2, 2, -2, 2}
If you plot these vectors in the 2D plane, they are haphazardly placed; in other words, they are not ordered in either a clockwise or an anticlockwise manner. Consider this instead:
xMove[8] = { 2, 1, -1, -2, -2, -1, 1, 2 }
yMove[8] = { 1, 2, 2, 1, -1, -2, -2, -1 }
If you plot these vectors, they are neatly arranged in an anticlockwise circle.
Somehow this makes the recursion run much more quickly for large values of n. Mind you, it still takes forever to calculate from n=9 onwards.
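Applied to the KnightsTour class from the question, the fix amounts to reordering its moves array; a drop-in sketch:

// The same eight knight moves, listed in anticlockwise order
private static final Point[] moves = new Point[]{
    new Point( 2,  1),
    new Point( 1,  2),
    new Point(-1,  2),
    new Point(-2,  1),
    new Point(-2, -1),
    new Point(-1, -2),
    new Point( 1, -2),
    new Point( 2, -1)
};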
