Android OpenCV: Why are my HOG descriptors always zero? - java

I have been stuck on this problem for a couple of days. I want to make an Android app that takes a picture and extracts HOG features from that image for future processing. The problem is that the code below always returns the HOG descriptors with zero values.
@Override
public void onPictureTaken(byte[] data, Camera camera) {
Log.i(TAG, "Saving a bitmap to file");
// The camera preview was automatically stopped. Start it again.
mCamera.startPreview();
mCamera.setPreviewCallback(this);
this.disableView();
Bitmap bitmapPicture = BitmapFactory.decodeByteArray(data, 0, data.length);
myImage = new Mat(bitmapPicture.getHeight(), bitmapPicture.getWidth(), CvType.CV_8UC1); // Mat takes (rows, cols), i.e. (height, width); bitmapToMat reallocates it anyway
Utils.bitmapToMat(bitmapPicture, myImage);
Bitmap bm = Bitmap.createBitmap(myImage.cols(), myImage.rows(),Bitmap.Config.ARGB_8888);
Utils.matToBitmap(myImage.clone(), bm);
// find the imageview and draw it!
ImageView iv = (ImageView) getRootView().findViewById(R.id.imageView);
this.setVisibility(SurfaceView.GONE);
iv.setVisibility(ImageView.VISIBLE);
Mat forHOGim = new Mat();
org.opencv.core.Size sz = new org.opencv.core.Size(64,128);
Imgproc.resize( myImage, myImage, sz );
Imgproc.cvtColor(myImage,forHOGim,Imgproc.COLOR_RGB2GRAY);
//forHOGim = myImage.clone();
MatOfFloat descriptors = new MatOfFloat(); //an empty vector of descriptors
org.opencv.core.Size winStride = new org.opencv.core.Size(64/2,128/2); //50% overlap in the sliding window
org.opencv.core.Size padding = new org.opencv.core.Size(0,0); //no padding around the image
MatOfPoint locations = new MatOfPoint(); // an empty vector of locations, so perform a full search
//HOGDescriptor hog = new HOGDescriptor();
HOGDescriptor hog = new HOGDescriptor(sz,new org.opencv.core.Size(16,16),new org.opencv.core.Size(8,8),new org.opencv.core.Size(8,8),9);
Log.i(TAG,"Constructed");
hog.compute(forHOGim , descriptors, new org.opencv.core.Size(16,16), padding, locations);
Log.i(TAG,"Computed");
Log.i(TAG,String.valueOf(hog.getDescriptorSize())+" "+descriptors.size());
Log.i(TAG,String.valueOf(descriptors.get(12,0)[0]));
double dd=0.0;
for (int i=0;i<3780;i++){
if (descriptors.get(i,0)[0]!=dd) Log.i(TAG,"NOT ZERO");
}
Bitmap bm2 = Bitmap.createBitmap(forHOGim.cols(), forHOGim.rows(),Bitmap.Config.ARGB_8888);
Utils.matToBitmap(forHOGim,bm2);
iv.setImageBitmap(bm2);
}
So in the logcat I never get the NOT ZERO message. Whatever changes I make to this code, I always get zeros in the descriptors MatOfFloat... And the strange part is, if I uncomment the HOGDescriptor hog = new HOGDescriptor(); line and use that constructor instead of the one I am using now, my application crashes...
The rest of the code runs fine, the picture is always taken and displayed on my image view as I expect.
Any help will be appreciated.
Thanks in advance.

The problem was inside the library. When I executed the same code with OpenCV 2.4.13 for Linux instead of the Android build, it worked as expected. So I hope they will fix whatever is wrong with HOGDescriptor in OpenCV4Android.
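For anyone who wants to double-check on the desktop side, here is a minimal sketch of the same computation (assuming the OpenCV 2.4.x Java bindings; the image path and class name are placeholders). With these parameters the descriptor length is 3780, and on a desktop build the non-zero count should come out positive:
import org.opencv.core.*;
import org.opencv.highgui.Highgui;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.HOGDescriptor;

public class HogSanityCheck {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        // "person.png" is a placeholder; any test image will do
        Mat gray = Highgui.imread("person.png", Highgui.CV_LOAD_IMAGE_GRAYSCALE);
        Imgproc.resize(gray, gray, new Size(64, 128));
        // Same parameters as in the question:
        // 64x128 window, 16x16 block, 8x8 stride, 8x8 cell, 9 bins -> 3780 values
        HOGDescriptor hog = new HOGDescriptor(new Size(64, 128), new Size(16, 16),
                new Size(8, 8), new Size(8, 8), 9);
        MatOfFloat descriptors = new MatOfFloat();
        hog.compute(gray, descriptors);
        float[] vals = descriptors.toArray();
        int nonZero = 0;
        for (float v : vals) {
            if (v != 0f) nonZero++;
        }
        System.out.println("length=" + vals.length + " non-zero=" + nonZero);
    }
}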

Related

Java based OpenCV in Android : Want to overlap an image onto another image only where the mask has non-zero values

I know that my question is rather basic, but I have tried for several days to solve this problem and can't find the exact answer on Stack Overflow either.
I have a background image, and I want to overlay the foreground image onto it, but only in the regions where my mask has non-zero values. The code I wrote is below. The foreground image appears in the regions where the mask is non-zero (the upper left quarter of the whole image), but in the rest of the image the background does not appear; it is just black.
I would greatly appreciate it if anyone could help with this problem. Similar code works well with OpenCV for C.
The background image (background.bmp) and the foreground image (foreground.bmp) are read in without problems (this I have verified).
setContentView(R.layout.activity_main);
String pathToBackground = Environment.getExternalStorageDirectory()+"/background.bmp";
String pathToForeground = Environment.getExternalStorageDirectory()+"/foreground.bmp";
Mat foreground;
Mat background;
background = imread(pathToBackground); // read in the background image
foreground = imread(pathToForeground); // read in the foreground image
Bitmap back_bitmap = BitmapFactory.decodeFile(pathToBackground);
Utils.bitmapToMat(back_bitmap, background); // convert from bitmap to Mat
Mat mask2 = new Mat( new Size(foreground.cols(), foreground.rows()), CvType.CV_8UC1);
mask2.setTo( new Scalar( 0 ) ); // set the mask to all zero
Size sizeMask = mask2.size();
for (int i=0;i<sizeMask.height/2;i++ ) // making the upper left quarter
for (int j=0;j<sizeMask.width/2;j++ ) // of the whole mask image
mask2.put(i, j, 10); // non-zero
foreground.copyTo(background, mask2); // copy the foreground image onto
// the background image where mask2 has non-zero values
// --> not working as expected
Utils.matToBitmap(background, back_bitmap); // convert from Mat to Bitmap
ImageView v = (ImageView)findViewById(R.id.imageView);
v.setImageBitmap(back_bitmap); // only the foreground image appears in the
// upper left region. The remaining region is just black.
// The remaining region should show the background image.
Thanks to the comments from 'Rick M.' I was able to solve the problem with the code below (though it is not very neat). I create an empty bitmap ('back_bitmap') instead of loading an initial image into it. Before this change I could not see the background image.
setContentView(R.layout.activity_main);
String pathToBackground = Environment.getExternalStorageDirectory()+"/background.bmp";
String pathToForeground = Environment.getExternalStorageDirectory()+"/foreground.bmp";
Mat foreground;
Mat background;
background = imread(pathToBackground);
foreground = imread(pathToForeground);
Bitmap back_bitmap = Bitmap.createBitmap(foreground.cols(), foreground.rows(), Bitmap.Config.ARGB_8888); // Make an empty Bitmap
Mat mask2 = new Mat( new Size( foreground.cols(), foreground.rows() ), CvType.CV_8UC1 );
mask2.setTo( new Scalar( 0 ) );
Size sizeMask = mask2.size();
double[] data;
for (int i=0;i<sizeMask.height;i++ ) // mark the mask wherever the
for (int j=0;j<sizeMask.width;j++ ) { // foreground has a non-black pixel
data = foreground.get(i, j);
if (data[0] + data[1] + data[2] > 0)
mask2.put(i, j, 10);
}
foreground.copyTo(background, mask2);
Utils.matToBitmap(background, back_bitmap);
ImageView v = (ImageView)findViewById(R.id.imageView);
v.setImageBitmap(back_bitmap);
}
}
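One more detail worth checking (an assumption on my part, not something confirmed in the thread): imread returns Mats in BGR channel order, while Utils.matToBitmap expects RGB(A), so the red and blue channels can come out swapped in the displayed bitmap. A conversion before the final step avoids that:
// imread gives BGR; convert to RGBA before handing the Mat to matToBitmap
Imgproc.cvtColor(background, background, Imgproc.COLOR_BGR2RGBA);
Utils.matToBitmap(background, back_bitmap);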

OpenCV - Java - How to remove some pixels around a cluster

I am doing a project where I need to identify certain areas of the image. After processing the image and removing all the unnecessary things I finally get the area which I need as shown in the image (area inside the green circle).
I am unable to draw a circle around that area using OpenCV. I am currently using the Java version of OpenCV. If someone can point me to the right direction on how to implement that green circle over the image, it will be very helpful.
Things I have tried to detect that area:
blob detector - Did not achieve much.
Cluster - Same as blob detector.
HoughCircles - Draws unnecessary circles in the image.
FindContour - Did not draw anything, since the area is not a perfect circle, ellipse, or any other well-known polygon.
I appreciate your help.
Here is a solution (see the sketch below):
Opening, in order to clean the image of all the thin/elongated patterns.
Connected component labeling, in order to count the remaining patterns.
Size counting of each remaining pattern.
The biggest pattern is the one you want to circle.
Note: if you want to perfectly preserve the pattern, you can replace the opening with an opening by reconstruction (erosion + geodesic reconstruction).
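A rough sketch of those steps with the OpenCV Java bindings (assuming OpenCV 3.x, where connectedComponentsWithStats is available; the file names and kernel size are illustrative, not from the question):
// Sketch: opening + connected-component labeling + circle the largest blob
Mat src = Imgcodecs.imread("input.png", Imgcodecs.IMREAD_GRAYSCALE);
Mat bin = new Mat();
Imgproc.threshold(src, bin, 0, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);
// 1. Opening removes the thin/elongated patterns
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(9, 9));
Imgproc.morphologyEx(bin, bin, Imgproc.MORPH_OPEN, kernel);
// 2+3. Label the remaining patterns and measure their sizes
Mat labels = new Mat(), stats = new Mat(), centroids = new Mat();
int n = Imgproc.connectedComponentsWithStats(bin, labels, stats, centroids);
int biggest = -1, biggestArea = 0;
for (int i = 1; i < n; i++) { // label 0 is the background
    int area = (int) stats.get(i, Imgproc.CC_STAT_AREA)[0];
    if (area > biggestArea) { biggestArea = area; biggest = i; }
}
if (biggest > 0) {
    // Circle the biggest pattern in green
    double cx = centroids.get(biggest, 0)[0];
    double cy = centroids.get(biggest, 1)[0];
    int w = (int) stats.get(biggest, Imgproc.CC_STAT_WIDTH)[0];
    int h = (int) stats.get(biggest, Imgproc.CC_STAT_HEIGHT)[0];
    Mat out = new Mat();
    Imgproc.cvtColor(src, out, Imgproc.COLOR_GRAY2BGR);
    Imgproc.circle(out, new Point(cx, cy), Math.max(w, h) / 2 + 5,
            new Scalar(0, 255, 0), 2);
    Imgcodecs.imwrite("circled.png", out);
}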
I finally found a solution to my problem. I used the feature detector from the OpenCV library and gave the detector the right threshold. This did the trick for me. The Java code looks like this:
public static void main(String[] args){
try{
//Validation whether a file name is passed to the function
if(args.length == 0){
System.out.println("here...");
log.error("No file was passed to the function");
throw new IOException();
}
//Read the image from the input
Mat inputMat = Highgui.imread(args[0],Highgui.CV_LOAD_IMAGE_GRAYSCALE);
//Create a feature detector. In this case we are using SURF (Speeded-Up Robust Features) detector.
MatOfKeyPoint objectKeyPoints = new MatOfKeyPoint();
FeatureDetector featureDetector = FeatureDetector.create(FeatureDetector.SURF);
//A temporary file is created to input Hessian Threshold to the SURF detector
File tempFile = File.createTempFile("config", ".yml");
String settings = "%YAML:1.0\nhessianThreshold: 7000.\noctaves: 3\noctaveLayers: 4\nupright: 0\n";
FileWriter writer = new FileWriter(tempFile, false);
writer.write(settings);
writer.close();
//Read the configuration from the temporary file to assign the threshold for the detector
featureDetector.read(tempFile.getPath());
//Detect the features in the image provided
featureDetector.detect(inputMat, objectKeyPoints);
//Iterate through the list of key points detected in the previous step and find the Key Point with the largest size
List<KeyPoint> objectKeyPointList = objectKeyPoints.toList();
KeyPoint impKeyPoint = new KeyPoint(); // default size is 0, so any detected key point replaces it
for(int i=0; i<objectKeyPointList.size(); i++){
if(impKeyPoint.size < objectKeyPointList.get(i).size){
impKeyPoint = objectKeyPointList.get(i);
}
}
//Normalize the size of the Key Point to 120 so the drawn circle has a fixed radius
if(impKeyPoint.size != 120){
impKeyPoint.size = 120;
}
//Convert the Key Point to MatOfKeyPoint since drawKeyPoints accepts only MatOfKeyPoint
MatOfKeyPoint impMatOfKeyPoint = new MatOfKeyPoint(impKeyPoint);
//Mat for drawing the circle in the image
Mat outputImage = new Mat(inputMat.rows(), inputMat.cols(), CvType.CV_8UC3); //3-channel color Mat (CV_LOAD_IMAGE_COLOR is an imread flag, not a Mat type)
//Green color for the circle
Scalar greenCircle = new Scalar(0, 255, 0);
//Draw the circle around the optic nerve when detected
Features2d.drawKeypoints(inputMat, impMatOfKeyPoint, outputImage, greenCircle, Features2d.DRAW_RICH_KEYPOINTS);
//Write the image to a file
Highgui.imwrite("surf_keypoints.png", outputImage);
}catch(Exception e){
log.fatal(e.getMessage());
}
}
Hope this is helpful for others.

Cropping BufferedImage For Use in Xuggle encodeVideo

I have an application to capture video of the screen and save it to a file. I give the user the ability to pick between 480, 720, and "Full Screen" video sizes. 480 records in a small box on the screen, 720 records in a larger box, and "Full Screen" records in an even larger box. However, this full-screen box is NOT the actual screen resolution; it is the app window size, which happens to be around 1700x800. The video tool works perfectly for the 480 and 720 options, and will also work if "Full Screen" is overridden to be the entire 1920x1080 screen.
My question: Are only certain sizes allowed? Does it have to fit a certain aspect ratio, or be an "acceptable" resolution? My code, below, is modified from the xuggle CaptureScreenToFile.java file (the location of the problem is noted by comments):
public void run() {
try {
String parent = "Videos";
String outFile = parent + "example" + ".mp4";
file = new File(outFile);
// This is the robot for taking a snapshot of the screen. It's part of Java AWT
final Robot robot = new Robot();
final Rectangle customResolution = where; //defined resolution (custom record size - in this case, 1696x813)
final Toolkit toolkit = Toolkit.getDefaultToolkit();
final Rectangle fullResolution = new Rectangle(toolkit.getScreenSize()); //full resolution (1920x1080)
// First, let's make a IMediaWriter to write the file.
final IMediaWriter writer = ToolFactory.makeWriter(outFile);
writer.setForceInterleave(false);
// We tell it we're going to add one video stream, with id 0,
// at position 0, and that it will have a fixed frame rate of
// FRAME_RATE.
writer.addVideoStream(0, 0, FRAME_RATE, customResolution.width, customResolution.height); //if I use fullResolution, it works just fine - but captures more of the screen than I want.
// Now, we're going to loop
long startTime = System.nanoTime();
while (recording) {
// take the screen shot
BufferedImage screen = robot.createScreenCapture(fullResolution); //tried capturing using customResolution, but did not work. Instead, this captures full screen, then tries to trim it below (also does not work).
// convert to the right image type
BufferedImage bgrScreen = convertToType(screen, BufferedImage.TYPE_3BYTE_BGR); //Do I need to convert after trimming?
BufferedImage trimmedScreen = bgrScreen.getSubimage((int)customResolution.getX(), (int)customResolution.getY(), (int)customResolution.getWidth(), (int)customResolution.getHeight());
// encode the image
try{
//~~~~Problem is this line of code!~~~~ Error noted below.
writer.encodeVideo(0, trimmedScreen, System.nanoTime() - startTime, TimeUnit.NANOSECONDS); //tried using trimmedScreen and bgrScreen
} catch (Exception e) {
e.printStackTrace();
}
// sleep for framerate milliseconds
Thread.sleep((long) (1000 / FRAME_RATE.getDouble()));
}
// Finally we tell the writer to close and write the trailer if
// needed
writer.close();
} catch (Throwable e) {
System.err.println("an error occurred: " + e.getMessage());
}
}
public static BufferedImage convertToType(BufferedImage sourceImage, int targetType) {
BufferedImage image;
// if the source image is already the target type, return the source image
if (sourceImage.getType() == targetType)
image = sourceImage;
// otherwise create a new image of the target type and draw the new image
else {
image = new BufferedImage(sourceImage.getWidth(), sourceImage.getHeight(), targetType);
image.getGraphics().drawImage(sourceImage, 0, 0, null);
}
return image;
}
Error:
java.lang.RuntimeException: could not open stream com.xuggle.xuggler.IStream@2834912[index:0;id:0;streamcoder:com.xuggle.xuggler.IStreamCoder@2992432[codec=com.xuggle.xuggler.ICodec@2930320[type=CODEC_TYPE_VIDEO;id=CODEC_ID_H264;name=libx264;];time base=1/50;frame rate=0/0;pixel type=YUV420P;width=1696;height=813;];framerate:0/0;timebase:1/90000;direction:OUTBOUND;]: Operation not permitted
Note: The file is successfully created, but has size of zero, and cannot be opened by Windows Media Player, with the following error text:
Windows Media Player cannot play the file. The Player might not support the file type or might not support the codec that was used to compress the file.
Sorry for the wordy question. I'm interested in learning WHAT and WHY, not just a solution. So if anyone can explain why it isn't working, or point me towards material to help, I'd appreciate it. Thanks!
Try making the dimensions even numbers, e.g. 1696x812. The H.264 encoder here uses the YUV420P pixel format, which subsamples the chroma planes by a factor of two in each direction, so both the width and the height must be even.
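As a sketch, reusing the variable names from the question, the size can be forced even before the stream is opened:
// libx264 with YUV420P needs even dimensions: clear the lowest bit
int evenWidth = customResolution.width & ~1;   // 1696 -> 1696
int evenHeight = customResolution.height & ~1; // 813  -> 812
writer.addVideoStream(0, 0, FRAME_RATE, evenWidth, evenHeight);
// ...and crop the captured frame to the same even size
BufferedImage trimmedScreen = bgrScreen.getSubimage(
        (int) customResolution.getX(), (int) customResolution.getY(),
        evenWidth, evenHeight);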

Android: Display image from file in highest resolution

I have quite the annoying problem. I'm building an app where one can share photos. On the SurfaceView where you take the actual photo, the resolution is great. However, when I retrieve that image and display it in a ListView using Picasso, the resolution goes to crap. The pixelation is real. Is there anything I'm doing horrendously wrong to cause this? The first code snippet below is where I actually save the photo, and the one below it is the getItemView() method in my ListView adapter. Thanks in advance.
Note that the "photo" variable you see in my code is a Parse subclass I've created to make it easier working with data associated with each photo. I think you can safely ignore it.
EDIT:
SurfaceView of Camera:
Note that I attempt to set the camera parameters to the highest quality allowed. Unfortunately, when I LOG size.width and size.height, I can only get around 176x144. Is there a way to get a higher resolution for supported camera sizes itself?
camera.setDisplayOrientation(90);
Parameters parameters = camera.getParameters();
parameters.set("jpeg-quality", 70);
parameters.setPictureFormat(ImageFormat.JPEG);
List<Camera.Size> sizes = parameters.getSupportedPictureSizes();
Size size = sizes.get(sizes.size()-1);
parameters.setPictureSize(size.width, size.height);
camera.setParameters(parameters);
camera.setDisplayOrientation(90);
List<Size> sizes2 = parameters.getSupportedPreviewSizes();
Size size2 = sizes.get(0);
parameters.setPreviewSize(size2.width, size2.height);
camera.setPreviewDisplay(holder);
camera.startPreview();
Saving the photo:
// Freeze camera
camera.stopPreview();
// Resize photo
Bitmap mealImage = BitmapFactory.decodeByteArray(data, 0, data.length);
Bitmap mealImageScaled = Bitmap.createScaledBitmap(mealImage, 640, 640, false);
// Override Android default landscape orientation and save portrait
Matrix matrix = new Matrix();
matrix.postRotate(90);
Bitmap rotatedScaledMealImage = Bitmap.createBitmap(mealImageScaled, 0,
0, mealImageScaled.getWidth(), mealImageScaled.getHeight(),
matrix, true);
ByteArrayOutputStream bos = new ByteArrayOutputStream();
rotatedScaledMealImage.compress(Bitmap.CompressFormat.JPEG, 100, bos);
byte[] scaledData = bos.toByteArray();
// Save the scaled image to Parse with the date and time as its file name.
DateTime currentTime = new DateTime();
DateTimeFormatter fmt = DateTimeFormat.forPattern("HH MM SS");
photoFile = new ParseFile(currentTime.toString(fmt), scaledData);
photo.setPhotoFile(photoFile);
Displaying it:
final ParseImageView photoView = holder.photoView;
ParseFile photoFile = photo.getParseFile("photo");
Picasso.with(getContext())
.load(photoFile.getUrl())
.into(photoView, new Callback() {
@Override
public void onError() {
}
@Override
public void onSuccess() {
}
});
The problem is not with Picasso.
It is caused by this line of code:
parameters.set("jpeg-quality", 70);
and this:
List<Size> sizes2 = parameters.getSupportedPreviewSizes();
Size size2 = sizes.get(0);
When you set up the camera, you already turned the quality down to 70% (per the Android documentation, jpeg-quality ranges from 0 to 100).
You also need to check whether the camera size is correct, because that code just assumes the first entry is the best size (and note it reads from sizes rather than sizes2).
You can try this code to get the best preview size for your preferred width and height:
private Camera.Size getBestPreviewSize(int width, int height, Camera.Parameters parameters){
Camera.Size bestSize = null;
List<Camera.Size> sizeList = parameters.getSupportedPreviewSizes();
bestSize = sizeList.get(0);
for(int i = 1; i < sizeList.size(); i++){
if((sizeList.get(i).width * sizeList.get(i).height) >
(bestSize.width * bestSize.height)){
bestSize = sizeList.get(i);
}
}
return bestSize;
}
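A possible way to wire that into the question's setup (a sketch; it simply picks the largest supported preview size):
// Use the helper above instead of assuming the first list entry is best
Camera.Size best = getBestPreviewSize(1920, 1080, parameters);
parameters.setPreviewSize(best.width, best.height);
camera.setParameters(parameters);
camera.startPreview();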
I hope this answer helps you; if you have any other questions about it, feel free to ask in the comments :)

How to detect the number of faces in an image

I am creating an Android face detection app and when I run it on my device it always says:
"Sorry TakePic has suddenly stopped"
Here is my code for face detection and I believe that this is the source of the error:
bitmap = MediaStore.Images.Media.getBitmap(cr, selectedImage);
TextView detect = (TextView)findViewById(R.id.detect);
Bitmap maskBitmap = Bitmap.createBitmap( bitmap.getWidth(),bitmap.getHeight(), Bitmap.Config.RGB_565 );
Canvas c = new Canvas();
c.setBitmap(maskBitmap);
Paint p = new Paint();
p.setFilterBitmap(true); // possibly not necessary as there is no scaling
c.drawBitmap(bitmap,0,0,p);
bitmap.recycle();
detectedFaces=new FaceDetector.Face[NUMBER_OF_FACES];
faceDetector=new FaceDetector(maskBitmap.getWidth(),maskBitmap.getHeight(),NUMBER_OF_FACES);
NUMBER_OF_FACE_DETECTED=faceDetector.findFaces(maskBitmap, detectedFaces);
k.setImageBitmap(bitmap);
detect.setText(NUMBER_OF_FACE_DETECTED);
Toast.makeText(MainActivity.this, selectedImage.toString(), Toast.LENGTH_LONG).show();
What would the mistake be with this code?
The best practice in such cases is to inspect the LogCat output and look for exception messages to locate where your code is breaking.
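One likely culprit in the code shown (an educated guess from the snippet, not confirmed by a stack trace): TextView.setText(int) interprets an int argument as a string resource ID, so passing the raw face count throws android.content.res.Resources.NotFoundException and crashes the app. Converting the count to a String avoids that:
// setText(int) looks up R.string resources; pass the count as text instead
detect.setText(String.valueOf(NUMBER_OF_FACE_DETECTED));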
