I am running template matching using the OpenCV 3.4.7 Android SDK (Java).
The code works almost perfectly: when the template matches, it draws a rectangle on the matching area. The problem is that even when there is no match, it still draws a random rectangle. I think that happens because the threshold is not set correctly. If so, can someone please help me out?
Here's the code:
public static void run(String inFile, String templateFile, String outFile,
int match_method) {
Mat img = Imgcodecs.imread(inFile);
Mat templ = Imgcodecs.imread(templateFile);
// / Create the result matrix
int result_cols = img.cols() - templ.cols() + 1;
int result_rows = img.rows() - templ.rows() + 1;
Mat result = new Mat(result_rows, result_cols, CvType.CV_32FC1);
// / Do the Matching and Normalize
Imgproc.matchTemplate(img, templ, result, match_method);
Core.normalize(result, result, 0, 1, Core.NORM_MINMAX, -1, new Mat());
// / Localizing the best match with minMaxLoc
Core.MinMaxLocResult mmr = Core.minMaxLoc(result);
Point matchLoc;
if (match_method == Imgproc.TM_SQDIFF
|| match_method == Imgproc.TM_SQDIFF_NORMED) {
matchLoc = mmr.minLoc;
} else {
matchLoc = mmr.maxLoc;
}
// / Show me what you got
Imgproc.rectangle(img, matchLoc, new Point(matchLoc.x + templ.cols(),
matchLoc.y + templ.rows()), new Scalar(0, 0, 128));
// Save the visualized detection.
System.out.println("Writing " + outFile);
Imgcodecs.imwrite(outFile, img);
}
You can use Imgproc.TM_CCOEFF_NORMED or Imgproc.TM_CCORR_NORMED and only accept the match when mmr.maxVal >= 0.8. That should take care of most of your false positives.
Sample Code:
import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import java.io.File;
import java.nio.file.Files;
public class templateMatchingTester {
private static String str = null;
static {
if (str == null) {
str = "initialised";
nu.pattern.OpenCV.loadShared();
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
}
}
private static Mat createMatrixFromImage(String imagePath) {
Mat imageMatrix = Imgcodecs.imread(imagePath);
Mat greyImage = new Mat();
Imgproc.cvtColor(imageMatrix, greyImage, Imgproc.COLOR_BGR2GRAY);
return greyImage;
}
private static boolean matchTemplate(String pathToInputImage,String pathToTemplate){
Mat inputImage = createMatrixFromImage(pathToInputImage);
Mat templateImage = createMatrixFromImage(pathToTemplate);
// Create the result matrix
int result_cols = inputImage.cols() - templateImage.cols() + 1;
int result_rows = inputImage.rows() - templateImage.rows() + 1;
Mat result = new Mat(result_rows, result_cols, CvType.CV_32FC1); // matchTemplate produces a 32-bit float, single-channel result
int match_method;
match_method = Imgproc.TM_CCOEFF_NORMED;//Imgproc.TM_CCORR_NORMED;
Imgproc.matchTemplate(inputImage, templateImage, result, match_method);
Core.MinMaxLocResult mmr = Core.minMaxLoc(result);
double minMatchQuality = 0.85;
System.out.println(mmr.maxVal);
return mmr.maxVal >= minMatchQuality;
}
public static void main(String args[]) {
String template = "path/to/your/templateImage";
final File folder = new File("path/to/your/testImagesFolder/");
int matchCount = 0;
for (final File fileEntry : folder.listFiles()){
if (matchTemplate(fileEntry.getPath(),template)){
matchCount+=1;
}else
System.out.println(fileEntry.getPath());
}
System.out.println(matchCount);
}
}
Use a normed match method to ensure your match value is in the [0..1] range.
Replace this line
Core.normalize(result, result, 0, 1, Core.NORM_MINMAX, -1, new Mat());
with a thresholding operation. Otherwise a best match of 0.9 would become 1 after the second normalization and you would lose the actual match "quality" information.
Normalizing the result of the template matching will always push your best match to 1, making it impossible to discard a bad match.
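For example, a minimal sketch of the drawing step with the normalization removed, using TM_CCOEFF_NORMED and a hypothetical 0.8 cutoff (tune it for your own template and images):
Imgproc.matchTemplate(img, templ, result, Imgproc.TM_CCOEFF_NORMED);
Core.MinMaxLocResult mmr = Core.minMaxLoc(result);
double threshold = 0.8; // hypothetical cutoff, not a universal value
if (mmr.maxVal >= threshold) {
    Point matchLoc = mmr.maxLoc;
    Imgproc.rectangle(img, matchLoc,
            new Point(matchLoc.x + templ.cols(), matchLoc.y + templ.rows()),
            new Scalar(0, 0, 128));
} // otherwise nothing is drawn, so a non-match is simply skipped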
I wrote an app that takes a screenshot of the game Overwatch and attempts to tell who is on each team, using template matching and OpenCV. The project needs to iterate over the result image and check the values against a threshold:
OpenCVUtils.getPointsFromMatAboveThreshold(result, 0.90f)
public static void scaleAndCheckAll(String guid){
Mat source = imread(IMG_PROC_PATH + guid); //load the source image
Mat scaledSrc = new Mat(defaultScreenshotSize, source.type());
resize(source, scaledSrc, defaultScreenshotSize);
Mat sourceGrey = new Mat(scaledSrc.size(), CV_8UC1);
cvtColor(scaledSrc, sourceGrey, COLOR_BGR2GRAY);
for (String hero : getCharacters()) {
Mat template = OpenCVUtils.matFromJar(TEMPLATES_FOLDER + hero + ".png", 0); //load a template
Size size = new Size(sourceGrey.cols()-template.cols()+1, sourceGrey.rows()-template.rows()+1);
Mat result = new Mat(size, CV_32FC1);
matchTemplate(sourceGrey, template, result, TM_CCORR_NORMED);// get results
Scalar color = OpenCVUtils.randColor();
List<Point> points = OpenCVUtils.getPointsFromMatAboveThreshold(result,
0.90f);
for (Point point : points) {
//rectangle(scaledSrc, new Rect(point.x(),point.y(),template.cols(),template.rows()), color, -2, 0, 0);
putText(scaledSrc, hero, point, FONT_HERSHEY_PLAIN, 2, color);
}
}
String withExt = IMG_PROC_PATH + guid +".png";
imwrite(withExt, scaledSrc);
File noExt = new File(IMG_PROC_PATH + guid);
File ext = new File(withExt);
noExt.delete();
ext.renameTo(noExt);
}
The other method:
public static List<Point> getPointsFromMatAboveThreshold(Mat m, float t){
List<Point> matches = new ArrayList<Point>();
FloatIndexer indexer = m.createIndexer();
for (int y = 0; y < m.rows(); y++) {
for (int x = 0; x < m.cols(); x++) {
if (indexer.get(y,x)>t) {
System.out.println("(" + x + "," + y +") = "+ indexer.get(y,x));
matches.add(new Point(x, y));
}
}
}
return matches;
}
You can just take the first point from the list, or check how close the points are to each other if you expect multiple matches.
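For instance, a small sketch of consuming the list inside the loop above, assuming at most one instance of each hero is expected per screenshot:
List<Point> points = OpenCVUtils.getPointsFromMatAboveThreshold(result, 0.90f);
if (!points.isEmpty()) {
    Point first = points.get(0); // neighbouring hits above the threshold usually belong to the same match
    System.out.println(hero + " found near (" + first.x() + "," + first.y() + ")");
}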
I am new to Apache POI and I have noticed that the image I insert is resized in the Word document. I am using the example that comes with the Apache POI documentation, only slightly modified. The image is considerably enlarged from its original size: when the created .docx document is opened, the picture is shown resized, and I find no explanation for this, since I am explicitly setting the size the picture should have.
Below is the code used:
public class SimpleImages {
public static void main(String[] args) throws IOException, InvalidFormatException {
try (XWPFDocument doc = new XWPFDocument()) {
XWPFParagraph p = doc.createParagraph();
XWPFRun r = p.createRun();
for (String imgFile : args) {
int format;
if (imgFile.endsWith(".emf")) {
format = XWPFDocument.PICTURE_TYPE_EMF;
} else if (imgFile.endsWith(".wmf")) {
format = XWPFDocument.PICTURE_TYPE_WMF;
} else if (imgFile.endsWith(".pict")) {
format = XWPFDocument.PICTURE_TYPE_PICT;
} else if (imgFile.endsWith(".jpeg") || imgFile.endsWith(".jpg")) {
format = XWPFDocument.PICTURE_TYPE_JPEG;
} else if (imgFile.endsWith(".png")) {
format = XWPFDocument.PICTURE_TYPE_PNG;
} else if (imgFile.endsWith(".dib")) {
format = XWPFDocument.PICTURE_TYPE_DIB;
} else if (imgFile.endsWith(".gif")) {
format = XWPFDocument.PICTURE_TYPE_GIF;
} else if (imgFile.endsWith(".tiff")) {
format = XWPFDocument.PICTURE_TYPE_TIFF;
} else if (imgFile.endsWith(".eps")) {
format = XWPFDocument.PICTURE_TYPE_EPS;
} else if (imgFile.endsWith(".bmp")) {
format = XWPFDocument.PICTURE_TYPE_BMP;
} else if (imgFile.endsWith(".wpg")) {
format = XWPFDocument.PICTURE_TYPE_WPG;
} else {
System.err.println("Unsupported picture: " + imgFile +
". Expected emf|wmf|pict|jpeg|png|dib|gif|tiff|eps|bmp|wpg");
continue;
}
r.setText(imgFile);
r.addBreak();
try (FileInputStream is = new FileInputStream(imgFile)) {
BufferedImage bimg = ImageIO.read(new File(imgFile));
int anchoImagen = bimg.getWidth();
int altoImagen = bimg.getHeight();
System.out.println("anchoImagen: " + anchoImagen);
System.out.println("altoImagen: " + altoImagen);
r.addPicture(is, format, imgFile, Units.toEMU(anchoImagen), Units.toEMU(altoImagen));
}
r.addBreak(BreakType.PAGE);
}
try (FileOutputStream out = new FileOutputStream("C:\\W_Ejm_Jasper\\example-poi-img\\src\\main\\java\\es\\eve\\example_poi_img\\images.docx")) {
doc.write(out);
System.out.println(" FIN " );
}
}
}
}
The image inside the Word document:
The original image is 131 × 216 pixels:
The image is scaled in the Word document.
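For what it's worth, Units.toEMU treats its argument as points, not pixels, so pixel dimensions come out roughly a third larger at 96 DPI. Below is a minimal sketch of the sizing call using pixels directly (assuming the POI version in use has Units.pixelToEMU; otherwise multiply by Units.EMU_PER_PIXEL):
// Sketch only: size the picture from pixel dimensions (assumes a 96 DPI source image)
int widthEmu = Units.pixelToEMU(anchoImagen);
int heightEmu = Units.pixelToEMU(altoImagen);
r.addPicture(is, format, imgFile, widthEmu, heightEmu);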
My question is about how to record and save for a specified amount of time, e.g. after two hours this app must finish recording and save the file into one folder.
public class per1 {
public static void main(String[] args) {
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
Scanner scan = new Scanner(System.in);
VideoCapture camera = new VideoCapture(0);
String cc = String.valueOf(camera.get(Videoio.CAP_PROP_FOURCC));
int fps = (int) camera.get(Videoio.CAP_PROP_FPS);
int width = (int) camera.get(Videoio.CAP_PROP_FRAME_WIDTH);
int height = (int) camera.get(Videoio.CAP_PROP_FRAME_HEIGHT);
final Size frameSize = new Size((int) camera.get(Videoio.CAP_PROP_FRAME_WIDTH), (int) camera.get(Videoio.CAP_PROP_FRAME_HEIGHT));
VideoWriter save = new VideoWriter("D:/video.mpg", Videoio.CAP_PROP_FOURCC, fps, frameSize, true);
if (camera.isOpened()) {
System.out.println("ON");
Mat framecam = new Mat();
boolean cekframe = camera.read(framecam);
System.out.println("cekframe " + cekframe);
try {
while (cekframe) {
camera.read(framecam);
save.write(framecam);
}
Thread.sleep(4000);
} catch (Exception e) {
System.out.println("OFF \n" + e);
}
camera.release();
save.release();
System.exit(1);
System.out.println("DOne");
}
}
}
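Not the original code, just a minimal sketch of a time-bounded capture loop (reusing camera, fps and frameSize from above, and assuming the writer is opened with a real fourcc such as MJPG); the loop stops once the requested duration has elapsed:
long durationMillis = 2L * 60 * 60 * 1000; // e.g. record for two hours
VideoWriter writer = new VideoWriter("D:/video.avi",
        VideoWriter.fourcc('M', 'J', 'P', 'G'), fps, frameSize, true);
long start = System.currentTimeMillis();
Mat frame = new Mat();
while (System.currentTimeMillis() - start < durationMillis && camera.read(frame)) {
    writer.write(frame); // write each grabbed frame until the time limit is reached
}
writer.release();
camera.release();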
I am trying to achieve EigenFace recognition in JavaCV, implementing it with this code:
public static void main(String[] args) {
String trainingDir = "C:/Users/user/Documents/NetBeansProjects/Face/testimg";
IplImage testImage = cvLoadImage("C:/Users/user/Desktop/aa.png");
File root = new File(trainingDir);
FilenameFilter pngFilter = new FilenameFilter() {
public boolean accept(File dir, String name) {
return name.toLowerCase().endsWith(".png");
}
};
File[] imageFiles = root.listFiles(pngFilter);
MatVector images = new MatVector(imageFiles.length);
int[] labels = new int[imageFiles.length];
int counter = 0;
int label;
IplImage img;
IplImage grayImg = null;
try {
for (File image : imageFiles) {
img = cvLoadImage(image.getAbsolutePath(), CV_BGR2GRAY);
int yer = image.getName().indexOf(".");
String isim = image.getName().substring(0, yer);
label = Integer.parseInt(isim);
images.put(counter, img);
labels[counter] = label;
counter++;
}
} catch (Exception e) {
e.printStackTrace();
}
IplImage greyTestImage = IplImage.create(testImage.width(), testImage.height(), IPL_DEPTH_8U, 1);
//FaceRecognizer faceRecognizer = createFisherFaceRecognizer();
FaceRecognizer faceRecognizer = createEigenFaceRecognizer();
//FaceRecognizer faceRecognizer = createLBPHFaceRecognizer()
faceRecognizer.train(images, labels);
cvCvtColor(testImage, greyTestImage, CV_BGR2GRAY);
int predictedLabel = faceRecognizer.predict(greyTestImage);
System.out.println("Predicted label: " + predictedLabel);
}
But each time I run it, it gives me this error:
OpenCV Error: Image step is wrong (The matrix is not continuous, thus its number of rows can not be changed) in cv::Mat::reshape, file ........\opencv\modules\core\src\matrix.cpp, line 802
Exception in thread "main" java.lang.RuntimeException: ........\opencv\modules\core\src\matrix.cpp:802: error: (-13) The matrix is not continuous, thus its number of rows can not be changed in function cv::Mat::reshape
I have read somewhere that this happens when the images are not all the same size or not a multiple of 8, but all my images are the same size and grayscaled too. The code I used for saving the detected face is:
Mat image_roi = new Mat(frame,rect_Crop);
Imgproc.cvtColor(image_roi, image_roi, Imgproc.COLOR_BGR2GRAY);
Size sz = new Size(240,240);
Imgproc.resize( image_roi, image_roi, sz );
String filename = "testimg\\" +jTextField1.getText() + ".png";
System.out.println(String.format("Writing %s", filename));
Imgcodecs.imwrite(filename, image_roi);
It also gives me a
java.lang.NumberFormatException
for my files, and I don't know why. Please help!
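As for the NumberFormatException: Integer.parseInt(isim) throws it for any file whose name (before the dot) is not purely numeric. A small sketch of a defensive version of the label parsing inside the loop over imageFiles, assuming only numeric names such as 1.png are meant to be training labels:
String name = image.getName();
String base = name.substring(0, name.lastIndexOf('.'));
if (!base.matches("\\d+")) {
    System.out.println("Skipping file with non-numeric name: " + name);
    continue; // skip files like aa.png instead of crashing
}
label = Integer.parseInt(base);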
I have a task to write a program with 1 camera, 1 Kinect, a lot of video processing, and then controlling a robot.
This code just shows the captured video frames without processing, but I only get approximately 20 frames/s. The same simple frame-displaying program in Matlab gave me 29 frames/s. I was hoping that I would gain some speed in Java, but it doesn't look like it. Am I doing something wrong? If not, how can I increase the speed?
public class Video implements Runnable {
//final int INTERVAL=1000;///you may use interval
IplImage image;
CanvasFrame canvas = new CanvasFrame("Web Cam");
public Video() {
canvas.setDefaultCloseOperation(javax.swing.JFrame.EXIT_ON_CLOSE);
}
@Override
public void run() {
FrameGrabber grabber = new VideoInputFrameGrabber(0); // 1 for next camera
int i=0;
try {
grabber.start();
IplImage img;
int g = 0;
long start2 = 0;
long stop = System.nanoTime();
long diff = 0;
start2 = System.nanoTime();
while (true) {
img = grabber.grab();
if (img != null) {
// cvFlip(img, img, 1);// l-r = 90_degrees_steps_anti_clockwise
// cvSaveImage((i++)+"-aa.jpg", img);
// show image on window
canvas.showImage(img);
}
g++;
if(g%200 == 0){
stop = System.nanoTime();
diff = stop - start2;
double d = (float)diff;
double dd = d/1000000000;
double dv = dd/g;
System.out.printf("frames = %.2f\n",1/dv);
}
//Thread.sleep(INTERVAL);
}
} catch (Exception e) {
}
}
public static void main(String[] args) {
Video gs = new Video();
Thread th = new Thread(gs);
th.start();
}
}
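One detail worth noting about the measurement itself: the printed figure is a cumulative average since start2, so any slow start-up drags it down. A small sketch (plain Java) of timing each 200-frame window separately inside the grab loop:
int windowFrames = 0;
long windowStart = System.nanoTime();
while (true) {
    img = grabber.grab();
    if (img != null) {
        canvas.showImage(img);
    }
    windowFrames++;
    if (windowFrames == 200) {
        double seconds = (System.nanoTime() - windowStart) / 1_000_000_000.0;
        System.out.printf("frames/s over the last 200 frames = %.2f%n", 200 / seconds);
        windowFrames = 0;
        windowStart = System.nanoTime(); // reset the window so each report stands on its own
    }
}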