OpenCV face detection in Java: design steps

I'm working on a webcam face-detection project using OpenCV.
The Viola-Jones approach to detecting objects in images combines four key concepts:
1. Simple rectangular features called Haar features (I can find these in the haarcascade_frontalface_alt.xml file).
2. An integral image for rapid feature detection.
3. The AdaBoost machine-learning method.
4. A cascaded classifier to combine many features efficiently.
My questions are:
- Does haarcascade_frontalface_alt.xml contain the cascaded classifier as well as the Haar features?
- How can I add the integral image and AdaBoost to my project, and how do I use them? Or is that already done automatically?

It seems you've read a lot of papers and pondered the ideas, but have not yet found the OpenCV implementation ;) To answer your questions directly: the XML file contains the complete trained cascade, i.e. the Haar features together with the boosted stage classifiers that AdaBoost produced at training time, and the integral image is computed internally by the detector, so you don't implement either yourself.
Using it is actually quite easy:
// set up a cascade classifier:
CascadeClassifier cascade;

// load a pretrained cascade file (and PLEASE CHECK the result!):
bool ok = cascade.load("haarcascade_frontalface_alt.xml");
if ( ! ok )
{
    // ... bail out, the cascade could not be loaded
}

// later, search for objects in your img:
Mat gray;            // uchar grayscale!
vector<Rect> faces;  // the result vector
cascade.detectMultiScale( gray, faces, 1.1, 3,
                          CV_HAAR_FIND_BIGGEST_OBJECT | CV_HAAR_DO_ROUGH_SEARCH,
                          cv::Size(20, 20) );

for ( size_t i = 0; i < faces.size(); i++ )
{
    // gray( faces[i] ) is the img portion that contains the detected object
}
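Since the question is about Java, here is a rough equivalent sketch using the OpenCV Java bindings (class names are from the 2.4-era org.opencv packages, so adjust to your version; gray stands for your 8-bit grayscale input Mat):

import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.objdetect.CascadeClassifier;

// set up and load the pretrained cascade (and check the result!):
CascadeClassifier cascade = new CascadeClassifier();
boolean ok = cascade.load("haarcascade_frontalface_alt.xml");
if (!ok) {
    // bail out - the cascade file was not found or could not be parsed
}

// later, search for faces in your image:
MatOfRect faces = new MatOfRect();
cascade.detectMultiScale(gray, faces);
for (Rect r : faces.toArray()) {
    Mat face = gray.submat(r); // the image region that contains the detected face
}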


Upgrading to the latest version of BoofCV

I'm currently running an old version (0.17) of BoofCV and want to upgrade. The documentation (https://boofcv.org/index.php?title=Download) is confusing:
The easiest way to use boofcv is to reference its jars on Maven
Central. See below for Maven and Gradle code. BoofCV is broken up into
many modules. To make it easier to use BoofCV all of its core
functionality can be referenced using the 'all' module. Individual
modules in "integration" still must be referenced individually.
Artifact List
boofcv-core : All the core functionality of BoofCV
boofcv-all : All the core and integration packages in BoofCV. YOU PROBABLY WANT CORE AND NOT THIS
This is self-contradictory - do we use "all" or "core"?
When I introduce version 0.32 of boofcv-core I get many unresolved references, such as:
Description Resource Path Location Type
ImageFloat32 cannot be resolved to a type BoofCVTest.java
Three parts of my question:
Have the fundamental types for images been renamed?
How will legacy code need editing?
What is the default set of libraries in Maven?
There's been a lot of refactoring since 0.17 because of how verbose things were getting and to simplify the API. For example, ImageFloat32 is now GrayF32. The easiest way to figure out all the changes is to look at the relevant example code.
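For instance, a minimal before/after sketch of the rename (buffered stands for a java.awt.image.BufferedImage you already have; ConvertBufferedImage kept its name across the refactor):

import boofcv.io.image.ConvertBufferedImage;
import boofcv.struct.image.GrayF32;

// BoofCV 0.17:
//   ImageFloat32 input = ConvertBufferedImage.convertFrom(buffered, (ImageFloat32) null);
// BoofCV 0.32+:
GrayF32 input = ConvertBufferedImage.convertFrom(buffered, (GrayF32) null);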
For modules, start with boofcv-core, then add modules listed in "integration" as needed. For example, if you need Android support, add boofcv-android. If you include boofcv-all you will have a lot of stuff you probably don't need, like Kinect support.
To help others who are upgrading, here is an example of the changes I have made to upgrade to the current BoofCV. They don't seem to be too difficult; I have simply used
s/ImageUInt/GrayU/g
and similar for other types. So far I have only found one method that needs changing (VisualizeBinaryData.renderBinary).
/** Thresholds an image.
 * Uses BoofCV 0.32 or later.
 * NOT YET TESTED
 *
 * @param image the input image
 * @param threshold the threshold value
 * @return the thresholded BufferedImage
 */
/* WAS BoofCV 0.17:
public static BufferedImage boofCVBinarization(BufferedImage image, int threshold) {
    ImageUInt8 input = ConvertBufferedImage.convertFrom(image, (ImageUInt8) null);
    ImageUInt8 binary = new ImageUInt8(input.getWidth(), input.getHeight());
    ThresholdImageOps.threshold(input, binary, threshold, false);
    BufferedImage outputImage = VisualizeBinaryData.renderBinary(binary, null);
    return outputImage;
}
The changes are ImageUInt8 => GrayU8 (etc.) and
VisualizeBinaryData.renderBinary(binary, null) => ConvertBufferedImage.extractBuffered(binary).
It compiles - but I haven't yet run it.
*/
public static BufferedImage boofCVBinarization(BufferedImage image, int threshold) {
    GrayU8 input = ConvertBufferedImage.convertFrom(image, (GrayU8) null);
    GrayU8 binary = new GrayU8(input.getWidth(), input.getHeight());
    ThresholdImageOps.threshold(input, binary, threshold, false);
    BufferedImage outputImage = ConvertBufferedImage.extractBuffered(binary);
    return outputImage;
}
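A quick usage sketch for the method above (file names hypothetical):

import java.io.File;
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;

BufferedImage img = ImageIO.read(new File("input.png"));   // load any raster image
BufferedImage bin = boofCVBinarization(img, 128);          // threshold at mid-gray
ImageIO.write(bin, "png", new File("binary.png"));         // save the binarized result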

H2O : NullPointerException error while building ensemble model using deep learning grid

I am trying to build a stacked ensemble model to predict merchant churn using R (version 3.3.3) and deep learning in h2o (version 3.10.5.1). The response variable is binary. At the moment I am trying to run the code to build a stacked ensemble model using the top 5 models developed by the grid search. However, when the code is run, I get a java.lang.NullPointerException with the following output:
java.lang.NullPointerException
at hex.StackedEnsembleModel.checkAndInheritModelProperties(StackedEnsembleModel.java:265)
at hex.ensemble.StackedEnsemble$StackedEnsembleDriver.computeImpl(StackedEnsemble.java:115)
at hex.ModelBuilder$Driver.compute2(ModelBuilder.java:173)
at water.H2O$H2OCountedCompleter.compute(H2O.java:1349)
at jsr166y.CountedCompleter.exec(CountedCompleter.java:468)
at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263)
at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:974)
at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1477)
at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
Below is the code that I've used to do the hyper-parameter grid search and build the ensemble model:
hyper_params <- list(
  activation = c("Rectifier", "Tanh", "Maxout", "RectifierWithDropout", "TanhWithDropout", "MaxoutWithDropout"),
  hidden = list(c(50,50), c(30,30,30), c(32,32,32,32,32), c(64,64,64,64,64), c(100,100,100,100,100)),
  input_dropout_ratio = seq(0, 0.2, 0.05),
  l1 = seq(0, 1e-4, 1e-6),
  l2 = seq(0, 1e-4, 1e-6),
  rho = c(0.9, 0.95, 0.99, 0.999),
  epsilon = c(1e-10, 1e-09, 1e-08, 1e-07, 1e-06, 1e-05, 1e-04)
)
search_criteria <- list(
  strategy = "RandomDiscrete",
  max_runtime_secs = 3600,
  max_models = 100,
  seed = 1234,
  stopping_metric = "misclassification",
  stopping_tolerance = 0.01,
  stopping_rounds = 5
)
dl_ensemble_grid <- h2o.grid(
  hyper_params = hyper_params,
  search_criteria = search_criteria,
  algorithm = "deeplearning",
  grid_id = "final_grid_ensemble_dl",
  x = predictors,
  y = response,
  training_frame = h2o.rbind(train, valid, test),
  nfolds = 5,
  fold_assignment = "Modulo",
  keep_cross_validation_predictions = TRUE,
  keep_cross_validation_fold_assignment = TRUE,
  epochs = 12,
  max_runtime_secs = 3600,
  stopping_metric = "misclassification",
  stopping_tolerance = 0.01,
  stopping_rounds = 5,
  seed = 1234,
  max_w2 = 10
)
DLsortedGridEnsemble_logloss <- h2o.getGrid("final_grid_ensemble_dl", sort_by = "logloss", decreasing = FALSE)
ensemble <- h2o.stackedEnsemble(x = predictors,
                                y = response,
                                training_frame = h2o.rbind(train, valid, test),
                                base_models = list(
                                  DLsortedGridEnsemble_logloss@model_ids[[1]],
                                  DLsortedGridEnsemble_logloss@model_ids[[2]],
                                  DLsortedGridEnsemble_logloss@model_ids[[3]],
                                  DLsortedGridEnsemble_logloss@model_ids[[4]],
                                  DLsortedGridEnsemble_logloss@model_ids[[5]]
                                ))
Note: what I have realised so far is that the h2o.stackedEnsemble function works when there is only one base model, and it gives the Java error as soon as there are two or more base models.
I would really appreciate some feedback on how this could be resolved.
The error refers to a line of the StackedEnsembleModel.java code that checks that the training_frame of the base models and the training_frame passed to h2o.stackedEnsemble() have the same checksum. I think the problem is caused by creating the training frame dynamically rather than defining it explicitly (even though that should work, since it's the same data in the end). So, rather than setting training_frame = h2o.rbind(train, valid, test) in the grid and ensemble functions, set the following at the top of your code:
df <- h2o.rbind(train, valid, test)
And then set training_frame = df in the grid and ensemble functions.
As a side note, you may get better DL models if you use a validation frame (for early stopping) rather than using all your data for the training frame. Also, if you want to use all the models in your grid (which might lead to better performance, but not always), you can set base_models = DLsortedGridEnsemble_logloss@model_ids in the h2o.stackedEnsemble() function.

How to get correctly matched features

I am doing feature detection in two different images, and the resulting image from the descriptor matcher links features that do not belong to each other, as seen in Img_1 below.
The steps I followed are as follows:
1. Feature detection using the SIFT algorithm. This step yields a MatOfKeyPoint object for each image, i.e. MatKeyPts_1 and MatKeyPts_2.
2. Descriptor extraction using the SURF algorithm.
3. Descriptor matching using the BruteForce matcher. The code for this step is posted below; the descriptors of query_img and train_img are its input. I am also using my own classes to control and maintain this process.
The problem is that the result of step 3 is the image posted below (Img_1), in which completely dissimilar features are linked to each other. I expected, for example, a specific region of the hand (Img_1, right) to be linked to the corresponding region of the hand (Img_1, left), but as you can see I got mixed and unrelated matches.
My question is: how do I get correct feature matches using SIFT and SURF as the feature detector and descriptor extractor, respectively?
private static void descriptorMatcher() {
    MatOfDMatch matDMatch = new MatOfDMatch(); // empty MatOfDMatch object
    // the descriptors of the query and the train image are used as parameters:
    dm.match(matFactory.getComputedDescExtMatAt(0), matFactory.getComputedDescExtMatAt(1), matDMatch);
    matFactory.addRawMatchesMatDMatch(matDMatch);

    /* drawing and writing the raw DMatches */
    Mat outImg = new Mat();
    Features2d.drawMatches(matFactory.getMatAt(0), matFactory.getMatKeyPtsAt(0),
            matFactory.getMatAt(1), matFactory.getMatKeyPtsAt(1),
            MatFactory.lastAddedObj(matFactory.getRawMatchesMatDMatchList()), outImg);
    matFactory.addRawMatchedImage(outImg);
    MatFactory.writeMat(FilePathUtils.newOutputPath(SystemConstants.RAW_MATCHED_IMAGE),
            MatFactory.lastAddedObj(matFactory.getRawMatchedImageList())); // this produces Img_2 posted below

    /* getting the top 10 shortest distances */
    List<DMatch> dMtchList = matDMatch.toList();
    // sorts dMtchList ascending by distance and keeps only the top 10:
    List<DMatch> goodDMatchList = MatFactory.getTopGoodMatches(dMtchList, 0, 10);

    /* converting the good DMatches to a MatOfDMatch */
    MatOfDMatch goodMatDMatches = new MatOfDMatch();
    goodMatDMatches.fromList(goodDMatchList);
    matFactory.addGoodMatchMatDMatch(goodMatDMatches);

    /* drawing the good matches and writing the good-matches image */
    Features2d.drawMatches(matFactory.getMatAt(0), matFactory.getMatKeyPtsAt(0),
            matFactory.getMatAt(1), matFactory.getMatKeyPtsAt(1),
            MatFactory.lastAddedObj(matFactory.getGoodMatchMatDMatchList()), outImg);
    MatFactory.writeMat(FilePathUtils.newOutputPath(SystemConstants.GOOD_MATCHED_IMAGE), outImg); // this produces Img_1 posted below
}
Img_1
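Picking the 10 smallest distances still keeps a match even when it is ambiguous. One widely used way to prune false matches is Lowe's ratio test: ask the matcher for the two nearest neighbours of each query descriptor and keep a match only when the best distance is clearly smaller than the second best. A minimal sketch with the 2.4-era OpenCV Java API (queryDescriptors and trainDescriptors stand for the two descriptor Mats, e.g. matFactory.getComputedDescExtMatAt(0) and (1)):

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.features2d.DMatch;
import org.opencv.features2d.DescriptorMatcher;

DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);
List<MatOfDMatch> knn = new ArrayList<MatOfDMatch>();
matcher.knnMatch(queryDescriptors, trainDescriptors, knn, 2); // 2 nearest neighbours per descriptor

List<DMatch> good = new ArrayList<DMatch>();
for (MatOfDMatch pair : knn) {
    DMatch[] m = pair.toArray();
    if (m.length >= 2 && m[0].distance < 0.75f * m[1].distance) {
        good.add(m[0]); // unambiguous match: best is clearly closer than the runner-up
    }
}

The surviving matches can then be drawn with Features2d.drawMatches() as before; a geometric check such as Calib3d.findHomography() with RANSAC can prune the remaining outliers.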

Copying Mat to raw array in OpenCV with Java? (Getting "multiple of channels count" error)

I'm trying to load an image in Scala using OpenCV with the Java bindings. After loading the image, I'd like to convert it to a traditional Scala Array[Float].
Following the suggestions in this post, I implemented the following code to achieve this:
val image = Highgui.imread(imgName)
image.convertTo(image, CvType.CV_32FC1) //convert 8-bit char -> single channel 32-bit float
val s = image.size()
val height = s.height.asInstanceOf[Int]
val width = s.width.asInstanceOf[Int]
val nChannels = image.channels()
printf("img size = %d, %d, %d \n", height, width, nChannels); // 512, 512, 3
//thanks: http://answers.opencv.org/question/4761/mat-to-byte-array/
val imageInFloats = new Array[Float](height * width * image.channels())
image.get(0, 0, imageInFloats)
When compiling the code, I get the following error:
[error] (run-main) java.lang.UnsupportedOperationException:
Provided data element number (1) should be multiple of the Mat channels count (3)
java.lang.UnsupportedOperationException: Provided data element number (1) should
be multiple of the Mat channels count (3)
at org.opencv.core.Mat.get(Mat.java:2587)
at HelloOpenCV$.main(conv.scala:25)
...
There are a couple of reasons why this error doesn't make sense to me:
The image should be single-channel because we call convertTo(..., CV_32FC1). Yet printing image.channels() reveals that there are 3 channels. Huh?
The size of imageInFloats is a multiple of image.channels(). I think this contradicts the error message about the element count not being a multiple of the number of channels.
Why does this code throw the "should be multiple of the Mat channels count" error?
Configuration details:
sbt 0.12.4
OpenCV 2.4.9
Final notes:
There's a more lightweight Scala library that would work as well as OpenCV for loading images into Scala. I'm using OpenCV for this at the moment because I've been doing a bunch of other vision work in Scala with OpenCV. That said, I'm willing to explore other libraries for image I/O.
If you do Highgui.imread(imgName), it loads the image as a 3-channel BGR image, and convertTo() changes the element depth but not the number of channels, which is why you still see 3 afterwards.
It should work as you expected if you either call Highgui.imread(imgName, 0) (load as grayscale) or apply cvtColor() to do a manual conversion.
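A minimal sketch of the grayscale route, shown with the raw 2.4-era Java binding calls that the Scala code wraps one-to-one (imgName is the path from the question):

import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.highgui.Highgui;

Mat gray = Highgui.imread(imgName, 0);   // flag 0 = load as grayscale -> 1 channel
gray.convertTo(gray, CvType.CV_32FC1);   // depth becomes 32-bit float; channel count is unchanged
float[] imageInFloats = new float[(int) gray.total() * gray.channels()];
gray.get(0, 0, imageInFloats);           // element count is now a multiple of channels()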

GLES2.0: Use GL_TEXTURE_EXTERNAL_OES via glEGLImageTargetTexture2DOES

I would like to render an image buffer in Java (NDK is no option in this case) and pass it to shaders via GL_TEXTURE_EXTERNAL_OES.
glTexImage2D does not work, as mentioned in the spec. But the function glEGLImageTargetTexture2DOES is only available via the GLES11Ext class, which seems kind of wrong to use.
Anyway, I tried it and it gives me GL_INVALID_OPERATION, which should happen if:
If the GL is unable to specify a texture object using the supplied
eglImageOES (if, for example, <image> refers to a multisampled
eglImageOES), the error INVALID_OPERATION is generated.
Sadly I cannot make heads or tails of this description, especially since the Android Java API doesn't seem to give me access to the eglImageOES functions. Nor have I found a Java example of the usage of this function.
Here is a small example:
// Bind the texture unit 0
GLES20.glActiveTexture( GLES20.GL_TEXTURE0 );
throwOnError( "glActiveTexture" );
GLES20.glBindTexture( GL_TEXTURE_EXTERNAL_OES, _samplerLocation );
throwOnError( "glBindTexture" );
// _output is ByteBuffer.allocateDirect(pixels * Integer.SIZE / Byte.SIZE).order(ByteOrder.nativeOrder()).asIntBuffer()
_output.rewind();
_output.limit( pixels );
GLES11Ext.glEGLImageTargetTexture2DOES( GL_TEXTURE_EXTERNAL_OES, _output );
throwOnError( "glEGLImageTargetTexture2DOES" ); // <-- throws
GLES20.glDrawArrays( GLES20.GL_TRIANGLE_STRIP, 0, 4 );
throwOnError( "glDrawArrays" );
Did anyone do that before or know whether this is possible or not?
EDIT:
I had a look at the glEGLImageTargetTexture2DOES implementation and it seems that I have to set up the buffer correctly. I added that, but I still get the same error.
There are some comments on this page which might help:
http://developer.android.com/reference/android/graphics/SurfaceTexture.html
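The usual way to get image data into a GL_TEXTURE_EXTERNAL_OES texture from Java is to let a SurfaceTexture own the texture and latch frames into it, rather than calling glEGLImageTargetTexture2DOES yourself. A minimal sketch (assumes a current GL context on this thread and some producer writing into the Surface):

import android.graphics.SurfaceTexture;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;

int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
// hand a new Surface(surfaceTexture) to a producer such as the camera or MediaPlayer,
// then once a frame has arrived (onFrameAvailable), on the GL thread:
surfaceTexture.updateTexImage(); // latches the newest frame into the external texture

For a buffer you fill yourself in Java there is, as far as I know, no public eglImageOES path; uploading the pixels into an ordinary GL_TEXTURE_2D via glTexImage2D, or drawing into the Surface via lockCanvas(), is the portable fallback.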
