I would like to do the following with my application in Project Tango:
Read the point cloud data.
Filter the points using PCL.
Segment the points using the PCL RANSAC plane-fitting algorithm.
Color each segment.
Display the segmented point cloud on the screen, with a different color per segment.
I have reached step 3 now, and my problem is how to display the points. What I need is an output similar to the plane-fitting C++ example provided by Google. I'm using the Java point cloud example with native code. I need the display so I can verify the output of the filtering and segmentation steps.
My problem is that I don't have any idea how to perform the visualization from PCL in Android.
Thanks
For a simple task like this, pure OpenGL could work. The Tango sample code uses a rendering library called Rajawali, which is geared more toward games.
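A minimal sketch of that pure-OpenGL route, assuming the filtered and segmented points come back from the native side as flat float buffers (xyz positions plus one rgb color per point, chosen per RANSAC segment). The class and buffer names here are illustrative, not taken from the Tango samples:

```java
import android.opengl.GLES20;
import java.nio.FloatBuffer;

// Illustrative GLES 2.0 point renderer: draws one colored GL_POINT per point,
// with positions and per-point segment colors supplied as FloatBuffers.
public class SegmentedPointCloudRenderer {

    private static final String VERTEX_SHADER =
            "uniform mat4 uMvpMatrix;" +
            "attribute vec4 aPosition;" +
            "attribute vec3 aColor;" +
            "varying vec3 vColor;" +
            "void main() {" +
            "  vColor = aColor;" +
            "  gl_PointSize = 5.0;" +
            "  gl_Position = uMvpMatrix * aPosition;" +
            "}";

    private static final String FRAGMENT_SHADER =
            "precision mediump float;" +
            "varying vec3 vColor;" +
            "void main() { gl_FragColor = vec4(vColor, 1.0); }";

    private final int program;

    public SegmentedPointCloudRenderer() {
        int vs = compile(GLES20.GL_VERTEX_SHADER, VERTEX_SHADER);
        int fs = compile(GLES20.GL_FRAGMENT_SHADER, FRAGMENT_SHADER);
        program = GLES20.glCreateProgram();
        GLES20.glAttachShader(program, vs);
        GLES20.glAttachShader(program, fs);
        GLES20.glLinkProgram(program);
    }

    private static int compile(int type, String source) {
        int shader = GLES20.glCreateShader(type);
        GLES20.glShaderSource(shader, source);
        GLES20.glCompileShader(shader);
        return shader;
    }

    /** Call from onDrawFrame with xyz positions, rgb colors and the point count. */
    public void draw(float[] mvpMatrix, FloatBuffer positions, FloatBuffer colors, int pointCount) {
        GLES20.glUseProgram(program);

        int posHandle = GLES20.glGetAttribLocation(program, "aPosition");
        GLES20.glEnableVertexAttribArray(posHandle);
        GLES20.glVertexAttribPointer(posHandle, 3, GLES20.GL_FLOAT, false, 0, positions);

        int colorHandle = GLES20.glGetAttribLocation(program, "aColor");
        GLES20.glEnableVertexAttribArray(colorHandle);
        GLES20.glVertexAttribPointer(colorHandle, 3, GLES20.GL_FLOAT, false, 0, colors);

        int mvpHandle = GLES20.glGetUniformLocation(program, "uMvpMatrix");
        GLES20.glUniformMatrix4fv(mvpHandle, 1, false, mvpMatrix, 0);

        GLES20.glDrawArrays(GLES20.GL_POINTS, 0, pointCount);

        GLES20.glDisableVertexAttribArray(posHandle);
        GLES20.glDisableVertexAttribArray(colorHandle);
    }
}
```

The renderer doesn't care where the colors come from, so you can fill the color buffer on the native side right after the RANSAC step, one rgb triple per segment.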
Related
I am developing an Android app to scan documents with my phone. I am using OpenCV and Canny edge detection, and it works OK, but if I try to scan a document on a background without enough contrast between the document and the background, it fails. I have tried other apps in the Play Store and they are still able to scan the document with less contrast. So I was looking for ways to improve my edge detection and found this:
https://www.pyimagesearch.com/2019/03/04/holistically-nested-edge-detection-with-opencv-and-deep-learning/
But I can't figure out how to use HED in my Android Studio Java project. More precisely, I can't find out how to create the custom cropping layer class for the neural network in Java. I was able to get the rest of the tutorial to work, but I don't know how to create the custom cropping layer class.
At the moment I am registering an empty or wrong class as the cropping layer and I'm getting blank images.
If any of you know something or can point me in the right direction, I'd be very thankful.
(edit) I did some research and apparently you have to create the class in C++ and use it from Java, but I can't find instructions on how to achieve this.
I am working on an Android application in which I want to read palm lines when the user takes a photo of their hand.
I have tried to Google it but could not get the desired results. I have also integrated the OpenCV library in my project, but I'm not sure how I can use it to detect those lines.
Is there any algorithm or library through which I can achieve this?
Any suggestions would be appreciated.
Thanks
Well, there is no specific library for that, but you can use OpenCV. Either you can train your own classifier to detect the palm lines, or you can use edge detection to get all the edges and later apply some mathematical logic to pick out the palm lines.
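A rough sketch of the edge-detection route, assuming the OpenCV Java bindings are set up in your project; the thresholds are guesses to tune, and the later step of grouping edge pixels into actual palm lines (the "mathematical logic" part) is not shown:

```java
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class PalmEdges {
    // Returns a binary edge map of the palm photo; adjust the color-conversion
    // code to the actual channel order of your input image.
    static Mat extractPalmEdges(Mat colorPhoto) {
        Mat gray = new Mat();
        Imgproc.cvtColor(colorPhoto, gray, Imgproc.COLOR_BGR2GRAY);
        Imgproc.GaussianBlur(gray, gray, new Size(5, 5), 0); // suppress skin texture noise
        Mat edges = new Mat();
        Imgproc.Canny(gray, edges, 40, 120);                 // tune per lighting and contrast
        return edges;
    }
}
```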
This is my question: I want to use the API hosted at http://212.88.98.116:4050/. This is a simple JSON API showing some live data. I want to create an Android program that presents a graphical representation (bar or line graph) of the data, using Java. Please, I need your help within the next 5 hours.
You should probably use a third-party library for this.
MPAndroidChart is quite popular:
http://code.tutsplus.com/tutorials/add-charts-to-your-android-app-using-mpandroidchart--cms-23335
Download the data in your code and use this simple API to create the bar graph.
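A minimal sketch with MPAndroidChart, assuming the response has already been fetched off the main thread; the JSON field names ("items", "value") are made up, since the structure returned by that API isn't shown:

```java
import com.github.mikephil.charting.charts.BarChart;
import com.github.mikephil.charting.data.BarData;
import com.github.mikephil.charting.data.BarDataSet;
import com.github.mikephil.charting.data.BarEntry;
import org.json.JSONArray;
import org.json.JSONObject;
import java.util.ArrayList;
import java.util.List;

public class ChartHelper {
    // Parses the JSON body and pushes one bar per item into the chart.
    static void showBarChart(BarChart chart, String jsonBody) throws Exception {
        JSONArray items = new JSONObject(jsonBody).getJSONArray("items");     // hypothetical field
        List<BarEntry> entries = new ArrayList<>();
        for (int i = 0; i < items.length(); i++) {
            JSONObject item = items.getJSONObject(i);
            entries.add(new BarEntry(i, (float) item.getDouble("value")));    // hypothetical field
        }
        BarDataSet dataSet = new BarDataSet(entries, "Live data");
        chart.setData(new BarData(dataSet));
        chart.invalidate(); // redraw with the new data
    }
}
```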
I am developing an application on the Android platform. The app basically needs to capture an image using the camera (already done) and then analyse the captured image to see where one color ends and the next one starts (assuming the image would always have 2 or 3 dominant colors and would be really simple). Any ideas?
P.S. I have already tried OpenCV, but there are two problems: 1. The library needs to be installed on the phone beforehand for your app to work, and I can't have that since it will be a commercial app (I am not sure about this dependency, though). 2. The min SDK for my app is Android 2.2 and for OpenCV it's 2.3.
I have just started using OpenCV, and the problem you mentioned (installing the library) was one of the major issues I faced too. However, I found a solution to this problem by declaring the OpenCV initialization as static. When you make the initialization static, there is no need to pre-install those libraries.
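A minimal sketch of what that static initialization looks like, assuming the standard OpenCV Android SDK (OpenCVLoader) is on the classpath; initDebug() loads the OpenCV native libraries packaged inside your own APK, so the separate OpenCV Manager app does not need to be installed on the device:

```java
import android.app.Activity;
import android.os.Bundle;
import android.util.Log;
import org.opencv.android.OpenCVLoader;

public class MainActivity extends Activity {

    // Runs once when the class is loaded, before any OpenCV call is made.
    static {
        if (!OpenCVLoader.initDebug()) {
            Log.e("OpenCV", "Static OpenCV initialization failed");
        }
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // OpenCV classes (Mat, Imgproc, ...) are safe to use from here on.
    }
}
```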
Good luck!
OpenCV, while a good general-purpose library, is just a collection of utilities that deal with pixels. If the license and min SDK are issues, write it yourself. Segmentation is a matter of choosing a starting x,y location within the image and traversing in each direction until a pixel is encountered that meets or exceeds your threshold for "different". Use a stack to keep track of where you stepped in x and y, then backtrack by popping indices off the stack and follow another direction when you get back to where you were. Push indices onto the stack when you step in either x or y.
It's not difficult, just rather tedious, but that's why people wrote libraries to do this stuff.
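A rough sketch of that traversal in plain Java (no OpenCV), assuming the image is available as an int[] of ARGB pixels, e.g. from Bitmap.getPixels; the threshold value and the color-difference metric are placeholders:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class Segmenter {
    // Grows one segment from (startX, startY): visits neighboring pixels while they
    // stay within the color-difference threshold of the seed pixel, and labels every
    // visited pixel with the given segment id (labels[] starts out all zero).
    static void growSegment(int[] pixels, int width, int height,
                            int startX, int startY, int threshold,
                            int[] labels, int segmentId) {
        int seed = pixels[startY * width + startX];
        Deque<int[]> stack = new ArrayDeque<>();
        stack.push(new int[]{startX, startY});

        while (!stack.isEmpty()) {
            int[] p = stack.pop();
            int x = p[0], y = p[1];
            if (x < 0 || y < 0 || x >= width || y >= height) continue;    // outside the image
            int idx = y * width + x;
            if (labels[idx] != 0) continue;                               // already visited
            if (colorDistance(pixels[idx], seed) > threshold) continue;   // "different" pixel: stop

            labels[idx] = segmentId;
            stack.push(new int[]{x + 1, y});   // step right
            stack.push(new int[]{x - 1, y});   // step left
            stack.push(new int[]{x, y + 1});   // step down
            stack.push(new int[]{x, y - 1});   // step up
        }
    }

    // Simple per-channel difference; swap in any metric you prefer.
    static int colorDistance(int argbA, int argbB) {
        int dr = Math.abs(((argbA >> 16) & 0xFF) - ((argbB >> 16) & 0xFF));
        int dg = Math.abs(((argbA >> 8) & 0xFF) - ((argbB >> 8) & 0xFF));
        int db = Math.abs((argbA & 0xFF) - (argbB & 0xFF));
        return dr + dg + db;
    }
}
```

Calling growSegment repeatedly with a fresh segment id for each still-unlabeled pixel gives you one label per dominant color region, which is enough for the 2 or 3 simple regions described above.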
To do something like that you'll need to do image processing. A very popular library for C++/Java which can certainly handle this is OpenCV.
I am developing an application on Android. In this application I want to find the shape of an object in a black-and-white snap. If I have a triangular object in the snap, then I want to find the angle between the edges of the object programmatically. I do not want to draw the lines manually on the object and then find the angle. My actual need is to scan the image and find the angles of the object using the object's pixel intensity.
Can anyone please suggest how to proceed with this work?
Thanks in advance.
The OpenCV library, which can be built for Android, can perform such shape detection. Here's a tutorial about detecting triangles and other shapes. It shows how to extract vertices, from which you should be able to get the angles easily. It's in C# but should be easy to port to C.
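As an example of that last step, here is a hypothetical helper in plain Java: given the pixel coordinates of a triangle vertex and its two neighboring vertices (extracted as in the linked tutorial), the interior angle at that vertex follows from the dot product of the two edge vectors:

```java
public class TriangleAngles {
    // Interior angle (in degrees) at vertex (vx, vy), formed by edges toward (ax, ay) and (bx, by).
    static double angleAtVertexDegrees(double vx, double vy,
                                       double ax, double ay,
                                       double bx, double by) {
        double v1x = ax - vx, v1y = ay - vy;   // edge vector: vertex -> a
        double v2x = bx - vx, v2y = by - vy;   // edge vector: vertex -> b
        double dot = v1x * v2x + v1y * v2y;
        double norms = Math.hypot(v1x, v1y) * Math.hypot(v2x, v2y);
        return Math.toDegrees(Math.acos(dot / norms));
    }
}
```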
I don't think you'll find a ready-to-use OpenCV Java binding for Android. But using the Android NDK, you could encapsulate calls to the OpenCV C API and expose a few functions to Java through JNI. A Google search for "opencv android java" yields a couple of tips.
I'm not really sure if this is a good solution for a mobile device, but it sounds like you should use a Hough transform to find the lines, and then find a triangle using those lines.
It sounds like you need to use some edge detection, which is what the Hough transform builds on. There are many different, complex approaches to this process, but this is definitely a starting point to read up on.
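A minimal sketch of that starting point, assuming the OpenCV Java bindings are available in the project (otherwise the equivalent C calls can be wrapped through JNI as suggested above); the thresholds are placeholders to tune:

```java
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class LineDetection {
    // Edge detection followed by a probabilistic Hough transform. Returns one row
    // per detected line segment as (x1, y1, x2, y2); intersections of these
    // segments can then be grouped into triangle candidates and angles measured.
    static Mat detectLines(Mat grayImage) {
        Mat edges = new Mat();
        Imgproc.Canny(grayImage, edges, 50, 150);                       // placeholder thresholds
        Mat lines = new Mat();
        Imgproc.HoughLinesP(edges, lines, 1, Math.PI / 180, 60, 30, 5); // rho, theta, votes, minLen, maxGap
        return lines;
    }
}
```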