How to identify three palm lines in a hand programmatically in Android - Java

I am working on an Android application in which I want to read palm lines when the user takes a photo of their hand.
I have tried to Google it but could not get the desired results. I have also integrated the OpenCV library in my project, but I am not sure how I can use it to detect those lines.
Is there any algorithm or library through which I can achieve this?
Any suggestion would be appreciated.
Thanks

Well, there is no specific library to achieve that, but you can use OpenCV. Either you can train your own classifier to detect the palm lines, or you can use edge detection to get all the edges and then apply some mathematical logic to isolate the palm lines.
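To make the edge-detection route concrete, here is a minimal, dependency-free Sobel gradient sketch in plain Java. In a real app you would run OpenCV's Imgproc.Canny (and, say, Imgproc.HoughLinesP) on the camera frame instead; this just shows the principle that palm lines show up as strong intensity gradients. The class and threshold value are illustrative.

```java
// Minimal Sobel edge detector on a grayscale image stored as int[height][width].
// In a real Android app you would call OpenCV's Canny on the camera frame;
// this dependency-free sketch just shows the idea: strong gradients = candidate lines.
public class SobelSketch {
    public static boolean[][] edges(int[][] gray, int threshold) {
        int h = gray.length, w = gray[0].length;
        boolean[][] edge = new boolean[h][w];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                // Sobel kernels for horizontal (gx) and vertical (gy) gradients
                int gx = -gray[y-1][x-1] - 2*gray[y][x-1] - gray[y+1][x-1]
                       +  gray[y-1][x+1] + 2*gray[y][x+1] + gray[y+1][x+1];
                int gy = -gray[y-1][x-1] - 2*gray[y-1][x] - gray[y-1][x+1]
                       +  gray[y+1][x-1] + 2*gray[y+1][x] + gray[y+1][x+1];
                // Compare squared gradient magnitude against squared threshold
                edge[y][x] = gx*gx + gy*gy > threshold * threshold;
            }
        }
        return edge;
    }
}
```

The "mathematical logic" step would then filter these edge pixels, for example keeping only long, roughly horizontal curves inside the detected hand region.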

Related

Android app using HED with OpenCV and deep learning (Java)

I am developing an Android app to scan documents with my phone. I am using OpenCV and Canny edge detection, and it works OK, but if I try to scan a document on a background without enough contrast between the document and the background, it fails. I have tried other apps in the Play Store and they are still able to scan documents with less contrast. So I was looking for ways to improve my edge detection and found this:
https://www.pyimagesearch.com/2019/03/04/holistically-nested-edge-detection-with-opencv-and-deep-learning/
But I can't figure out how to use HED in my Android Studio Java project. More precisely, I can't find out how to create the custom cropping layer class for the neural network in Java. I was able to get the rest of the tutorial to work, but I don't know how to create that custom cropping layer class.
At the moment I am registering an empty or wrong class as the cropping layer, and I'm getting blank images.
If any of you know something or can point me in the right direction, I'd be very thankful.
(edit) I did some research, and apparently you have to create the class in C++ and use it from Java, but I can't find instructions on how to achieve this.

Visualize Point Cloud in Project Tango from PCL

I would like to do the following with my application in Project Tango:
1. Read the point cloud data.
2. Filter the points using PCL.
3. Segment the points using PCL's RANSAC plane-fitting algorithm.
4. Color each segment.
5. Display the segmented point cloud on the screen, with a different color per segment.
I have reached step 3 now, and my problem is how to display the points. What I need is an output similar to the plane-fitting C++ example provided by Google. I'm using the Java point cloud example with native code. I need the display in order to verify the filter step and the segmentation output.
My problem is that I don't have any idea how I should perform the visualization in Android from PCL.
Thanks
For a simple task like this, pure OpenGL could work. The Tango sample code uses a rendering library called Rajawali, which is geared more toward games.
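For the pure-OpenGL route, the coloring step (4) boils down to packing each point plus its segment's color into an interleaved vertex buffer that a GLES20 GL_POINTS draw call can consume (glVertexAttribPointer with a 24-byte stride). A minimal sketch with illustrative names; nothing here is Tango or PCL API:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Packs segmented points into an interleaved XYZRGB FloatBuffer suitable for a
// plain GL_POINTS draw. Class and method names are hypothetical.
public class PointCloudPacker {
    // points: flat [x0,y0,z0, x1,y1,z1, ...]; segment[i] is the segment id of
    // point i; palette rows are {r, g, b} colors, one reused per segment id.
    public static FloatBuffer pack(float[] points, int[] segment, float[][] palette) {
        int n = segment.length;
        FloatBuffer buf = ByteBuffer.allocateDirect(n * 6 * 4)   // 6 floats per point
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        for (int i = 0; i < n; i++) {
            buf.put(points, i * 3, 3);                            // x, y, z
            buf.put(palette[segment[i] % palette.length]);        // r, g, b
        }
        buf.rewind();
        return buf;
    }
}
```

In the render thread you would upload this buffer once per frame and point the position attribute at offset 0 and the color attribute at offset 12, both with stride 24.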

Using OpenCV for IP camera

I'm just a student who has been given the project of doing people counting with an IP camera, without buying anything. I have done research for the past 3 days, and it seems that OpenCV is the only free option, but I'm unsure whether it is able to do people counting.
Are there any links that teach how to set up OpenCV? I found one at http://robocv.blogspot.it/2012/01/using-your-ip-camera-with-opencv.html but I'm not sure if I should configure it using the method stated in the link.
Your help is very much appreciated!
You can use OpenCV in Java by calling the native OpenCV C++ code, or you can use JavaCV.
Yes, people counting is possible using OpenCV. You have to use body detection and object tracking.
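A toy sketch of the counting side, assuming a detector (e.g. OpenCV's HOG people detector) already yields per-frame centroids. The nearest-neighbour tracker and the counting-line logic below are simplified illustrations, not a production approach:

```java
import java.util.ArrayList;
import java.util.List;

// Toy people counter: match each frame's detection centroids to existing
// tracks by nearest neighbour, and count a person whenever their tracked
// centroid crosses a horizontal counting line.
public class PeopleCounter {
    private final double lineY;
    private final List<double[]> tracks = new ArrayList<>(); // each: {x, y}
    private int count = 0;

    public PeopleCounter(double lineY) { this.lineY = lineY; }

    // One frame of detections: { {x, y}, ... }.
    public void update(double[][] detections) {
        for (double[] d : detections) {
            double[] best = null;
            double bestDist = 50;                  // max match distance (assumed)
            for (double[] t : tracks) {
                double dist = Math.hypot(t[0] - d[0], t[1] - d[1]);
                if (dist < bestDist) { bestDist = dist; best = t; }
            }
            if (best == null) { tracks.add(new double[]{d[0], d[1]}); continue; }
            // Signs differ when the old and new positions straddle the line.
            if ((best[1] - lineY) * (d[1] - lineY) < 0) count++;
            best[0] = d[0]; best[1] = d[1];
        }
    }

    public int getCount() { return count; }
}
```

A real system would also expire stale tracks and distinguish direction (in vs. out), but the crossing test is the core of it.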

How to get the projection and modelview transforms in Android in Java

I'm trying to calculate a ray in my 3D world from a tap on an Android phone screen.
I thought I could just take the projection * modelview transform and invert it. However, getting hold of the actual matrices isn't as straightforward as I expected in OpenGL ES.
After a good while of searching around the internet, the options seem to be:
Use a bunch of classes from an Android demo (MatrixGrabber, MatrixTrackingGL, ...), which many people seem to do.
Use glGetFloatv. However, this doesn't exist in GL10; it seems to have been added in GL11, with some problems and no documentation.
Implement your own matrix code.
Now it's almost two years since most of the material I've found was written, and I find it hard to believe that Google hasn't gotten around to making this easier for game developers on Android, so I'm assuming I'm just being blind. Is there an official or recommended way of doing this? Maybe something supported in the SDK?
Many thanks.
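A framework-free sketch of the last option, implementing your own matrix code: unproject the tap at the near plane (NDC z = -1) and the far plane (z = +1), then take the difference as the ray direction. On a device you would more likely keep the matrices you pass to your shaders and use android.opengl.Matrix.invertM/multiplyMV; this version solves the linear system directly so it runs anywhere. All names are illustrative.

```java
// Unprojects a normalized-device-coordinate point back into world space by
// solving (proj x model) * v = ndc with Gauss-Jordan elimination.
// Matrices are row-major float[16]; ndc = {x, y, z, 1} with x, y, z in [-1, 1].
public class Unproject {
    public static float[] unproject(float[] proj, float[] model, float[] ndc) {
        float[] m = mul(proj, model);
        double[][] a = new double[4][5];              // augmented system [M | ndc]
        for (int r = 0; r < 4; r++) {
            for (int c = 0; c < 4; c++) a[r][c] = m[r*4 + c];
            a[r][4] = ndc[r];
        }
        for (int col = 0; col < 4; col++) {           // elimination with partial pivoting
            int p = col;
            for (int r = col + 1; r < 4; r++)
                if (Math.abs(a[r][col]) > Math.abs(a[p][col])) p = r;
            double[] tmp = a[col]; a[col] = a[p]; a[p] = tmp;
            for (int r = 0; r < 4; r++) {
                if (r == col) continue;
                double f = a[r][col] / a[col][col];
                for (int c = col; c < 5; c++) a[r][c] -= f * a[col][c];
            }
        }
        float[] v = new float[4];
        for (int r = 0; r < 4; r++) v[r] = (float) (a[r][4] / a[r][r]);
        for (int i = 0; i < 3; i++) v[i] /= v[3];     // perspective divide
        v[3] = 1f;
        return v;
    }

    static float[] mul(float[] a, float[] b) {        // row-major 4x4 multiply
        float[] out = new float[16];
        for (int r = 0; r < 4; r++)
            for (int c = 0; c < 4; c++)
                for (int k = 0; k < 4; k++)
                    out[r*4 + c] += a[r*4 + k] * b[k*4 + c];
        return out;
    }
}
```

Unprojecting the same (x, y) at z = -1 and z = +1 and subtracting the results gives the pick ray through the tapped pixel.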

Image recognition in Android/Java

I am developing an application in Android. In this application I want to find the shape of an object in black-and-white snaps. If I have a triangular object in the snap, then I want to find the angles between the edges of the object programmatically. I do not want to draw lines manually on the object and measure the angles. What I actually need is to scan the image and find the object's angles using the pixel intensities.
Can anyone please suggest how to proceed with this work?
Thanks in advance.
The OpenCV library, which can be built for Android, can perform this kind of shape detection. Here's a tutorial about detecting triangles and other shapes. It shows how to extract the vertices, from which you should be able to get the angles easily. It's in C# but should be easy to port to C.
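Once the vertices are known (for example from a contour approximation such as OpenCV's approxPolyDP), each interior angle is a dot-product computation. A small sketch, with illustrative names:

```java
// Computes an interior angle of a triangle from its vertex coordinates,
// using the dot product of the two edge vectors meeting at that vertex.
public class TriangleAngles {
    // Returns the interior angle at vertex b of triangle (a, b, c), in degrees.
    // Each vertex is a {x, y} pair.
    public static double angleAt(double[] a, double[] b, double[] c) {
        double ux = a[0] - b[0], uy = a[1] - b[1];   // edge b -> a
        double vx = c[0] - b[0], vy = c[1] - b[1];   // edge b -> c
        double cos = (ux*vx + uy*vy) / (Math.hypot(ux, uy) * Math.hypot(vx, vy));
        // Clamp to [-1, 1] to guard against floating-point drift before acos.
        return Math.toDegrees(Math.acos(Math.max(-1, Math.min(1, cos))));
    }
}
```

Applying this at each of the three vertices gives all three angles, which should sum to roughly 180 degrees as a sanity check.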
I don't think you'll find a ready-to-use OpenCV Java binding for Android. But using the Android NDK, you could encapsulate calls to the OpenCV C API and expose a few functions to Java through JNI. A Google search for "opencv android java" yields a couple of tips.
I'm not really sure if this is a good solution for a mobile device, but it sounds like you should use a Hough transform to find the lines, and then look for a triangle formed by those lines.
It sounds like you need some edge detection, which the Hough transform builds on. There are many complex approaches to this process, but it is definitely a starting point to read up on.
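To illustrate the suggestion, here is a minimal, dependency-free Hough line transform in plain Java. In practice you would call OpenCV's Imgproc.HoughLines or Imgproc.HoughLinesP on a Canny edge image; this sketch only shows the voting idea, and returns just the single strongest line:

```java
// Minimal Hough line transform on a binary edge image: every edge pixel votes
// for all (rho, theta) lines passing through it; peaks in the accumulator
// correspond to straight lines in the image.
public class HoughSketch {
    // Returns {rho, thetaDegrees} of the accumulator cell with the most votes.
    public static int[] strongestLine(boolean[][] edges) {
        int h = edges.length, w = edges[0].length;
        int maxRho = (int) Math.ceil(Math.hypot(w, h));
        int[][] acc = new int[180][2 * maxRho + 1];   // rho index shifted by maxRho
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                if (!edges[y][x]) continue;
                for (int t = 0; t < 180; t++) {       // vote across all angles
                    double th = Math.toRadians(t);
                    int rho = (int) Math.round(x * Math.cos(th) + y * Math.sin(th));
                    acc[t][rho + maxRho]++;
                }
            }
        int bestT = 0, bestRIdx = 0;
        for (int t = 0; t < 180; t++)
            for (int r = 0; r < acc[t].length; r++)
                if (acc[t][r] > acc[bestT][bestRIdx]) { bestT = t; bestRIdx = r; }
        return new int[]{bestRIdx - maxRho, bestT};
    }
}
```

For triangle detection you would keep the top few peaks instead of one and then intersect the resulting lines pairwise to recover candidate vertices.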
