Loading an image using OpenCV in Android - java

I apologize if this question has already been covered on this site, but I can't seem to find a straightforward answer. I'm making an Android app that uses OpenCV to take a picture, process it to detect objects, and figure out the coordinates of those objects in the scene.
All the tutorials I can find for Android either process the camera preview in real time or use C++, and I would strongly prefer not to go down the C++ road. I'm sure I'm missing something simple, but I don't know what it is.
On another note, the objects I'm trying to detect are billiard balls on a table. What would be the best pre-processing technique to better detect the balls? I did a quick test using the Canny edge detector, and it seems that the light reflecting off the balls breaks up the circle shape.

To load images in Android, you can use
Bitmap bMap = BitmapFactory.decodeResource(getResources(), R.drawable.image1);
where image1 is an image in the res/drawable folder of your Android project.
Then convert the Bitmap to bytes or to a Mat and process it in C++ (OpenCV) or in Java with the Utils.bitmapToMat and Utils.matToBitmap methods from OpenCV for Android.
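For instance, a minimal round-trip between Bitmap and Mat might look like the sketch below (it assumes the OpenCV for Android library has already been initialized; the resource name is just the one from the question):
// Decode the drawable resource into a Bitmap.
Bitmap bMap = BitmapFactory.decodeResource(getResources(), R.drawable.image1);

// Convert the Bitmap into an OpenCV Mat (RGBA) for processing.
Mat mat = new Mat();
Utils.bitmapToMat(bMap, mat);

// ... run your OpenCV processing on mat here ...

// Convert the result back to a Bitmap, e.g. to show it in an ImageView.
Bitmap out = Bitmap.createBitmap(mat.cols(), mat.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(mat, out);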
If you are comfortable working with the Mat data type directly, you can also read an image from a file path (this requires the OpenCV for Android project):
Mat m = Highgui.imread("/media/path_to_image");
You can use blob detection or Hough circles to detect the billiard balls; there is no single right answer here. You can also experiment with normalized RGB or the Lab colorspace (to reduce the reflections on the balls) and see whether the detection (Hough circles, blob detectors) improves.
A real-time scenario would require machine-learning techniques: choose suitable feature vectors for the balls, then train and test a detector.
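As a rough starting point for the Hough-circle idea, using the mat from the conversion sketch above, something like the following may work (a sketch only: the parameter values are guesses that will need tuning, and older OpenCV versions name the constant CV_HOUGH_GRADIENT and put circle() in Core instead of Imgproc):
// Work on a grayscale, slightly blurred copy; the blur helps suppress
// the specular highlights that were breaking up the Canny edges.
Mat gray = new Mat();
Imgproc.cvtColor(mat, gray, Imgproc.COLOR_RGBA2GRAY);
Imgproc.GaussianBlur(gray, gray, new Size(9, 9), 2);

Mat circles = new Mat();
Imgproc.HoughCircles(gray, circles, Imgproc.HOUGH_GRADIENT,
        1,        // accumulator resolution (same as input)
        30,       // minimum distance between circle centers (assumed)
        100, 30,  // Canny upper threshold, accumulator threshold (assumed)
        10, 60);  // min/max radius in pixels (assumed)

// Each detected circle is (x, y, radius); draw them for inspection.
for (int i = 0; i < circles.cols(); i++) {
    double[] c = circles.get(0, i);
    Imgproc.circle(mat, new Point(c[0], c[1]), (int) c[2], new Scalar(0, 255, 0), 3);
}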

This works for me (I got the answer from here: How to get Mat with a drawable input in Android using OpenCV):
Mat img = null;
try {
    // Load the drawable resource directly into an OpenCV Mat.
    img = Utils.loadResource(this, R.drawable.image_id, CvType.CV_8UC4);
} catch (IOException e) {
    e.printStackTrace();
}

I know it's too late, but someone may still find this useful.
Highgui has been removed from OpenCV for Android.
You can use Imgcodecs instead.
Mat BGRMat = Imgcodecs.imread(getResources().getDrawable(R.drawable.img).toString());
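Note that Imgcodecs.imread expects a filesystem path rather than a drawable reference, so something along these lines (the path is only a hypothetical example) may be closer to what you want; for drawable resources, Utils.loadResource from the previous answer is usually simpler:
// Hypothetical path; imread returns an empty Mat if the file cannot be read.
Mat bgr = Imgcodecs.imread("/sdcard/Pictures/table.png");
if (bgr.empty()) {
    // handle the load failure
}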

Related

How to rotate an image on command using Codename One's Transform class?

I am currently doing a school project where we are creating an Asteroids game using Codename One. My current functionality works well, except when it comes to rotating the image of the ship. Using the Transform class has been ineffective; the image does not rotate, no matter how the Transform is applied or how the image is drawn.
Here is a sample portion of the code used to turn:
public void turnRight() //Rotates the ship 5 degrees clockwise
{
    if (direction == 355)
        direction = 0;
    else
        direction += 5;
    Transform tmpTransform = Transform.makeIdentity();
    theImage.getGraphics().getTransform(tmpTransform);
    tmpTransform.rotate((float) Math.toRadians(5), x, y);
    theImage.getGraphics().setTransform(tmpTransform);
    theImage.getGraphics().drawImage(shipPic, 0, 0);
}
Where:
theImage is a mutable Image (100x100)
shipPic is an Image created via Image.createImage(String path)
In addition, I have tried using the draw(Graphics g, Point p) method and passing theImage.getGraphics(), as well as passing shipPic.getGraphics().
I am at a loss, and Codename One's documentation on the subject is unhelpful.
Could I get some assistance, please?
You need to use a single Graphics object, so something like this:
Graphics g = theImage.getGraphics();
would be more correct. You also must test for transform support when rendering onto an image, as the low-level graphics API isn't always portable to every OS on every surface. A good example is iOS, where rendering onto an image uses a completely different low-level implementation than rendering to the display.
Normally I would render directly to the display, as that is hardware accelerated on modern devices, while mutable images are often implemented in software.
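Put together, a sketch of the suggested change might look like this (it reuses the field names from the question and guards the rotation with a transform-support check; treat it as a starting point rather than a drop-in fix):
// Reuse one Graphics object for the whole draw, and only apply the
// rotation when the platform supports transforms on image graphics.
Graphics g = theImage.getGraphics();
if (g.isTransformSupported()) {
    Transform t = Transform.makeIdentity();
    t.rotate((float) Math.toRadians(direction), x, y);
    g.setTransform(t);
}
g.drawImage(shipPic, 0, 0);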
About the documentation: did you read the graphics section in the developer guide?
It should contain explanations of everything, and if something is missing there is search. If you still can't find something or figure it out by yourself, note that you can also edit the docs and help us improve them.

OpenCV applying filters to specific place in image

Hello, I am using OpenCV for Java and want to blur the faces in an image, but I keep failing to apply the blur only to the face region. How can I do it?
Your question is too general. What basically needs to be done is to run a face detection algorithm (search for "face detection opencv"), extract the rectangle of pixels covering the face (a Rect/submat in the Java API), blur those pixels and write them back over the original. I suggest you read about face detection (try to understand the main ideas independently of OpenCV), read some tutorials on implementation, and then, if you still need assistance, write a more specific question here and people are sure to answer you.
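To make the steps concrete, here is a rough desktop-Java sketch of that pipeline (the image paths are hypothetical, the cascade file ships with OpenCV, and the kernel size is an arbitrary choice):
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

public class FaceBlur {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat img = Imgcodecs.imread("input.jpg"); // hypothetical path
        CascadeClassifier detector =
                new CascadeClassifier("haarcascade_frontalface_default.xml");

        MatOfRect faces = new MatOfRect();
        detector.detectMultiScale(img, faces);

        for (Rect face : faces.toArray()) {
            // submat is a view into img, so blurring it in place
            // overwrites the original face pixels.
            Mat roi = img.submat(face);
            Imgproc.GaussianBlur(roi, roi, new Size(55, 55), 0);
        }
        Imgcodecs.imwrite("blurred.jpg", img);
    }
}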

How to take high-res picture while sensing depth using project tango

How do I take a picture using Project Tango?
I read this answer: Using the onFrameAvailable() in Jacobi Google Tango API
which works for grabbing a frame, but the picture quality is not great. Is there any takePicture equivalent?
Note that the Java API callback
public void onFrameAvailable(int cameraId) {
    if (cameraId == TangoCameraIntrinsics.TANGO_CAMERA_COLOR) {
        mTangoCameraPreview.onFrameAvailable();
    }
}
does not provide RGB data. If I use the Android camera to take the picture, Tango cannot sense depth, so it seems I have to use TangoCameraPreview.
Thanks
You don't have to use TangoCameraPreview to get frames in Java. It is really just a convenience class provided to help with getting video on the screen. It appears to be implemented entirely in Java with calls to com.google.atap.tangoservice.Tango (i.e., no calls to unpublished APIs). In fact, if you look inside the Tango SDK jar file, you can see that someone accidentally included a version of the source file; it has some diff annotations and may not be up to date, but examining it is still instructive.
I prefer not to use TangoCameraPreview and instead call Tango.connectTextureId() and Tango.updateTexture() myself to load frame pixels into an OpenGL texture that I can then use however I want. That is exactly what TangoCameraPreview does under the hood.
The best way to capture a frame in pure Java is to draw the texture at its exact size (1280x720) to an offscreen buffer and read it back. This also has the side effect of converting the texture from whatever YUV format it has into RGB (which may or may not be desirable). In OpenGL ES you do this using a framebuffer and renderbuffer.
Adding the framebuffer/renderbuffer stuff to a program that can already render to the screen isn't a lot of code - about on par with the amount needed to save a file - but it is tricky to get right when you do it for the first time. I created an Android Studio sample capture app that saves a Tango texture as a PNG to the pictures folder (when you tap the screen) in case that is helpful for anyone.
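For reference, the offscreen-readback part might be sketched roughly as follows (assumptions: a current GL context, a caller-supplied routine that draws the camera texture into the bound framebuffer, and the 1280x720 frame size mentioned above; glReadPixels returns the image bottom-up, so the Bitmap may need a vertical flip):
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import android.graphics.Bitmap;
import android.opengl.GLES20;

public final class FrameCapture {
    public static Bitmap readOffscreen(Runnable drawTexturedQuad) {
        final int w = 1280, h = 720;

        // Create a texture to serve as the color attachment of the FBO.
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, w, h, 0,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);

        // Bind an offscreen framebuffer and attach the texture to it.
        int[] fbo = new int[1];
        GLES20.glGenFramebuffers(1, fbo, 0);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER,
                GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D, tex[0], 0);

        // Render the camera texture into the FBO (caller-supplied drawing code).
        GLES20.glViewport(0, 0, w, h);
        drawTexturedQuad.run();

        // Read the RGBA pixels back and wrap them in a Bitmap.
        ByteBuffer buf = ByteBuffer.allocateDirect(w * h * 4)
                .order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGBA,
                GLES20.GL_UNSIGNED_BYTE, buf);
        buf.rewind();
        Bitmap bmp = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        bmp.copyPixelsFromBuffer(buf);

        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        return bmp;
    }
}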

How should I do image processing in Java?

I'm making an applet that lets users crop out a piece of an image and save it. For cropping, I'm going to implement a "magic wand"-esque tool. I can do all this in Matlab, but I'm having some trouble figuring out the Java libraries. Here are a few tasks I need to perform:
Randomly access pixels in an image by (x, y) and get back a single object (java.awt.Color, an ARGB int, a short[], whatever -- as long as I'm not dealing with channels individually)
Create an alpha channel from a boolean[][]
Create an N-by-M image that's initialized to green
Any pros out there who can help me? Just some code snippets off the top of your head would be fine.
Many thanks,
Neal
You want to use the Java2D libraries. Specifically, you want to use the BufferedImage class from the library to deal with your images. You can access individual pixels and do all of the things you have specified above. Sun/Oracle has a good tutorial to get you started in the right direction. The second part in that tutorial goes over creating an alpha channel. Oh, and to access individual pixels, you want to use the WritableRaster class. So you can do something like this. Hope this gets you started.
// Get a writable view of the BufferedImage's pixel data.
WritableRaster imageRaster = bufferedImg.getRaster();

// Use Java's random generation to pick an x and y coordinate, then call this to
// access the pixel; passing null lets getPixel allocate the array for you.
int[] pixel = imageRaster.getPixel(x, y, (int[]) null);
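To make the question's three tasks concrete, here is a small self-contained sketch using only BufferedImage (the image size and the mask contents are illustrative assumptions):
import java.awt.Color;
import java.awt.image.BufferedImage;

public class Java2DSketch {
    public static void main(String[] args) {
        int n = 200, m = 100;

        // 3) An N-by-M image of type ARGB, initialized to opaque green.
        BufferedImage img = new BufferedImage(n, m, BufferedImage.TYPE_INT_ARGB);
        for (int y = 0; y < m; y++)
            for (int x = 0; x < n; x++)
                img.setRGB(x, y, Color.GREEN.getRGB());

        // 1) Random access to a single pixel as one packed ARGB int.
        int argb = img.getRGB(10, 20);
        Color c = new Color(argb, true);

        // 2) "Alpha channel" from a boolean mask: make pixels transparent where false.
        boolean[][] mask = new boolean[n][m];
        mask[10][20] = true;
        for (int y = 0; y < m; y++)
            for (int x = 0; x < n; x++)
                if (!mask[x][y])
                    img.setRGB(x, y, img.getRGB(x, y) & 0x00FFFFFF); // zero the alpha byte
    }
}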
ImageJ is a mature, open-source image processing framework that supports macros, plugins and a host of other features.
Marvin is a Java image processing framework that can help you. It provides algorithms for filtering, feature extraction, morphological analysis, transformations, segmentation and so forth. Moreover, its architecture supports real-time video processing with the same algorithms.

Hardware accelerate bitmap drawing in java

I want to be able to draw consecutive bitmaps (of type BufferedImage.TYPE_INT_RGB) of a video as quickly as possible in Java, and I want to know the best method of doing so. Does anyone have advice on where I should start? From what I've read, two options are:
1) Use GDI/GDI+ routines in a JNI DLL working with JAWT (I'm on Windows)
2) Use Java3D and apply Textures to a Box's face and rotate it to the camera
I'm interested in any advice on these topics as well as any others.
I have done a decent amount of GDI/GDI+ programming in VB when I created an ActiveX control, so using GDI should be painless, but I'm guessing Java3D will utilize the GPU more (I could be wrong) and give better performance. What do you think: GDI and JAWT with my previous experience, or starting a new API journey with Java3D?
Thanks in advance. :)
To obtain a fluid animation (if that is what you want), you need to use double buffering. To do this, create an offscreen java.awt.Image (or a subclass such as BufferedImage, or VolatileImage if you want hardware-accelerated rendering) for the frame you want to display. If you haven't already done so, call Image.getGraphics() to get a java.awt.Graphics object you can draw your content into. At the end, when your hidden Image is complete, draw it onto the on-screen Graphics (for example with Graphics.drawImage()) to update the display smoothly.
VolatileImage lives in video memory and is hardware accelerated, which makes it much faster. When VolatileImage.getGraphics() is called, it actually returns a Graphics2D, which is also part of the accelerated graphics pipeline.
It works on Windows, Linux and Solaris, but you need suitable graphics drivers (e.g. OpenGL) installed for your graphics card.
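A minimal sketch of that idea in Swing might look like the following (the component, its size and the clear color are illustrative; the current video frame would be drawn where the comment indicates):
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.image.VolatileImage;
import javax.swing.JPanel;

class VideoPanel extends JPanel {
    private VolatileImage buffer;

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        GraphicsConfiguration gc = getGraphicsConfiguration();
        // (Re)create the offscreen buffer if it is missing or has become invalid.
        if (buffer == null
                || buffer.validate(gc) == VolatileImage.IMAGE_INCOMPATIBLE) {
            buffer = gc.createCompatibleVolatileImage(getWidth(), getHeight());
        }
        Graphics2D g2 = buffer.createGraphics();
        try {
            g2.setColor(Color.BLACK);
            g2.fillRect(0, 0, getWidth(), getHeight());
            // ... draw the current video frame (a BufferedImage) into g2 here ...
        } finally {
            g2.dispose();
        }
        // Copy the finished offscreen frame to the display in one call.
        g.drawImage(buffer, 0, 0, null);
    }
}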
Some additional refs:
Accelerated graphic pipeline:
http://download.oracle.com/javase/1.5.0/docs/guide/2d/new_features.html
http://www.javalobby.org/forums/thread.jspa?threadID=16840&tstart=0
Double buffering:
http://www.java2s.com/Code/Java/2D-Graphics-GUI/Smoothmoveusingdoublebuffer.htm
http://www.heatonresearch.com/articles/23/page2.html
http://www.javacooperation.gmxhome.de/BildschirmflackernEng.html
