I am working on an OpenCV project that relies on finger detection. Currently I have an OpenCVFrameGrabber that grabs a frame and places it in an IplImage. I then draw that image onto my GUI.
This all works, but the image that is drawn seems to be in black and white even though I have a color camera. There are noticeable vertical lines in the image and when there is some color, it seems to be split into components along these lines.
Does anyone know of a way to get the original webcam image?
I recently started playing with JavaCV, and I always try to avoid the new classes and stick with the "original" OpenCV methods.
I suggest you try the following code and make sure that the simplest capture procedure works:
// Imports assume the original JavaCV packaging (pre-Bytedeco); adjust for newer versions.
import com.googlecode.javacv.cpp.opencv_core.IplImage;
import static com.googlecode.javacv.cpp.opencv_highgui.*;

public static void main(String[] args)
{
    // Open the default camera (index 0).
    CvCapture capture = cvCreateCameraCapture(0);
    if (capture == null)
    {
        System.out.println("!!! Failed cvCreateCameraCapture");
        return;
    }

    cvNamedWindow("camera_demo");

    IplImage grabbed_image = null;
    while (true)
    {
        // Grab and decode the next frame.
        grabbed_image = cvQueryFrame(capture);
        if (grabbed_image == null)
        {
            System.out.println("!!! Failed cvQueryFrame");
            break;
        }

        cvShowImage("camera_demo", grabbed_image);

        // Wait ~33 ms between frames; quit on ESC.
        int key = cvWaitKey(33);
        if (key == 27)
        {
            break;
        }
    }

    cvReleaseCapture(capture);
}
If this works, your problem might be related to OpenCVFrameGrabber. If it doesn't, you might want to test your code with another camera.
I just found out about Sikuli when I was looking for a library to find matches of a given image within a larger image (both loaded from files).
By default, Sikuli only supports loading the searched-for image from a file, but relies on a proprietary Screen class to take screenshots to use as the base for the search... and I'd like to be able to use an image file instead.
Looking for a solution led me to this question, but the answer is a bit vague considering that I have no prior experience with Sikuli, and the available documentation is not particularly helpful for my needs.
Does anyone have any examples of how to make a customized implementation of Screen, ScreenRegion, ImageScreen and ImageScreenLocation? Even a link to more detailed documentation on these classes would be a big help.
All I want is to obtain the coordinates of an image match within another image file, so if there's another library that could help with this task, I'd be more than happy to learn about it!
You can implement it by yourself with something like this:
import java.awt.image.BufferedImage;
import java.io.IOException;
import javax.imageio.ImageIO;

class MyImage {
    private BufferedImage img;
    private int imgWidth;
    private int imgHeight;

    public MyImage(String imagePath) {
        try {
            img = ImageIO.read(getClass().getResource(imagePath));
        } catch (IOException ioe) {
            System.out.println("Unable to open file");
        }
        init();
    }

    public MyImage(BufferedImage img) {
        this.img = img;
        init();
    }

    private void init() {
        imgWidth = img.getWidth();
        imgHeight = img.getHeight();
    }

    public boolean equals(BufferedImage other) {
        // Your algorithm for image comparison (see the choices described below).
        // This implements option 1: compare the RGB value of every pixel.
        if (other.getWidth() != imgWidth || other.getHeight() != imgHeight)
            return false;
        for (int x = 0; x < imgWidth; x++)
            for (int y = 0; y < imgHeight; y++)
                if (img.getRGB(x, y) != other.getRGB(x, y))
                    return false;
        return true;
    }

    public boolean contains(BufferedImage subImage) {
        int subWidth = subImage.getWidth();
        int subHeight = subImage.getHeight();
        if (subWidth > imgWidth || subHeight > imgHeight)
            throw new IllegalArgumentException("SubImage is larger than main image");
        // Slide a window over the main image, one pixel at a time.
        for (int x = 0; x <= imgWidth - subWidth; x++)
            for (int y = 0; y <= imgHeight - subHeight; y++) {
                BufferedImage cmpImage = img.getSubimage(x, y, subWidth, subHeight);
                if (new MyImage(cmpImage).equals(subImage))
                    return true;
            }
        return false;
    }
}
The contains method grabs a subimage from the main image and compares it with the given subimage; if they are not the same, it moves on to the next pixel until it has gone through the entire image. There are more efficient approaches than moving pixel by pixel, but this should work.
To compare 2 images for similarity
You have at least 2 options:
Scan pixel by pixel using a pair of nested loops to compare the RGB value of each pixel (just like comparing two 2D int arrays for equality).
Generate a hash for each of the 2 images and just compare the hash values (a small sketch follows below).
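For instance, here is a small sketch of the hashing option using the JDK's MessageDigest over the raw pixel data. Note that this only detects exact equality; matching near-duplicates would need a perceptual hash, which is beyond this sketch:

import java.awt.image.BufferedImage;
import java.security.MessageDigest;
import java.util.Arrays;

public class ImageHash {
    // Digest the RGB bytes of every pixel in row-major order.
    public static byte[] hash(BufferedImage img) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        for (int x = 0; x < img.getWidth(); x++)
            for (int y = 0; y < img.getHeight(); y++) {
                int rgb = img.getRGB(x, y);
                md.update(new byte[] {
                    (byte) (rgb >> 16), (byte) (rgb >> 8), (byte) rgb });
            }
        return md.digest();
    }

    // Two images are considered identical when their digests match.
    public static boolean sameImage(BufferedImage a, BufferedImage b) throws Exception {
        return Arrays.equals(hash(a), hash(b));
    }
}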
Aah... Sikuli has an answer for this too... You just didn't look closely enough. :)
Answer: the Finder class.
import org.sikuli.script.Finder;
import org.sikuli.script.Match;
import org.sikuli.script.Pattern;

Pattern searchImage = new Pattern("abc.png").similar(0.9f);
String screenImage = "xyz.png"; // in this case, the image you want to search within

Finder objFinder = new Finder(screenImage);
objFinder.find(searchImage); // searchImage is the image you want to find inside screenImage

int counter = 0;
while (objFinder.hasNext())
{
    Match objMatch = objFinder.next(); // objMatch gives you the matching region
    counter++;
}

if (counter != 0)
    System.out.println("Match Found!");
In the end I gave up on Sikuli and used pure OpenCV in my Android project: the Imgproc.matchTemplate() method did the trick, giving me a matrix in which every pixel has a "score" for the likelihood of it being the starting point of my subimage.
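For reference, here is a minimal sketch of that approach with OpenCV's desktop Java bindings (class names assume OpenCV 3.x, where image loading lives in Imgcodecs; on 2.4 it was Highgui, and the file names here are placeholders):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class TemplateMatchDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat image = Imgcodecs.imread("big.png");      // image to search in
        Mat template = Imgcodecs.imread("small.png"); // subimage to look for

        // Result matrix: one score per possible top-left position of the template.
        Mat result = new Mat();
        Imgproc.matchTemplate(image, template, result, Imgproc.TM_CCOEFF_NORMED);

        // With TM_CCOEFF_NORMED, the best match is the position of the maximum score.
        Core.MinMaxLocResult mmr = Core.minMaxLoc(result);
        System.out.println("Best match at " + mmr.maxLoc + " with score " + mmr.maxVal);
    }
}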
With Sikuli, you can check for the presence of an image inside another one.
In this example code, the pictures are loaded from files.
This code tells us whether the second picture is part of the first picture.
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

import org.sikuli.api.ImageTarget;
import org.sikuli.api.ScreenRegion;
import org.sikuli.api.StaticImageScreenRegion;
import org.sikuli.api.Target;

public static void main(String[] argv) {
    String img1Path = "/test/img1.png";
    String img2Path = "/test/img2.png";

    if (findPictureRegion(img1Path, img2Path) == null)
        System.out.println("Picture 2 was not found in picture 1");
    else
        System.out.println("Picture 2 is in picture 1");
}

public static ScreenRegion findPictureRegion(String refPictureName, String targetPictureName) {
    Target target = new ImageTarget(new File(targetPictureName));
    target.setMinScore(0.5); // precision of recognition, from 0 to 1
    BufferedImage refPicture = loadPicture(refPictureName);
    ScreenRegion screenRegion = new StaticImageScreenRegion(refPicture);
    return screenRegion.find(target);
}

public static BufferedImage loadPicture(String pictureFullPath) {
    try {
        return ImageIO.read(new File(pictureFullPath));
    } catch (IOException e) {
        e.printStackTrace();
        return null;
    }
}
To use the Sikuli package, I added this dependency with Maven:
<!-- SIKULI libraries -->
<dependency>
    <groupId>org.sikuli</groupId>
    <artifactId>sikuli-api</artifactId>
    <version>1.1.0</version>
</dependency>
I am developing a security-related project that needs to check whether a face is detected or not: if a face is detected, perform some action; if no face is detected, close the app.
Everything is working; I am using a SurfaceView that implements SurfaceHolder.Callback, open the camera in it, and use the camera's startFaceDetection method to detect faces.
Code for reference:
public class SurfaceViewPreview extends SurfaceView implements SurfaceHolder.Callback {

    private SurfaceHolder mHolder;
    private Camera mCamera;

    public SurfaceViewPreview(Context context, AttributeSet attrs) {
        super(context, attrs);
        setWillNotDraw(false);
        mHolder = getHolder();
        mHolder.addCallback(this);
        mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
    }

    public void surfaceCreated(SurfaceHolder holder) {
        try {
            if (Camera.getNumberOfCameras() <= 0
                    || ContextCompat.checkSelfPermission(getContext(), Manifest.permission.WRITE_EXTERNAL_STORAGE)
                            != PackageManager.PERMISSION_GRANTED)
                return;
            mCamera = Camera.open(0);
            mCamera.setPreviewDisplay(mHolder);
        } catch (Exception e) {
            e.printStackTrace();
            if (this.mCamera != null) {
                this.mCamera.release();
                this.mCamera = null;
            }
        }
    }

    public void surfaceDestroyed(SurfaceHolder holder) {
        if (Camera.getNumberOfCameras() <= 0
                || ContextCompat.checkSelfPermission(getContext(), Manifest.permission.WRITE_EXTERNAL_STORAGE)
                        != PackageManager.PERMISSION_GRANTED)
            return;
        mCamera.stopPreview();
        mCamera.release();
        mCamera = null;
    }

    public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
        if (Camera.getNumberOfCameras() <= 0
                || ContextCompat.checkSelfPermission(getContext(), Manifest.permission.WRITE_EXTERNAL_STORAGE)
                        != PackageManager.PERMISSION_GRANTED)
            return;
        mCamera.startPreview();
        mCamera.setFaceDetectionListener(new Camera.FaceDetectionListener() {
            @Override
            public void onFaceDetection(Camera.Face[] faces, Camera camera) {
                // A face was detected.
            }
        });
        mCamera.startFaceDetection();
    }
}
Now the problem: if I show a poster of a person to the camera, it is detected as a human, but I want to detect only real human faces, not fake poster faces.
Possible ways to handle my requirement:
1) Capture 10 images periodically and check whether they are all the same; if so, a static face is present (like a poster mounted on a wall).
2) Write a proper algorithm that tells whether the detected face is a real human face or a fake one.
3) Use a library that can tell whether a human face is really present or not.
If anyone has an idea, please suggest how to solve the above issue (if any code is available, please share it with me); any response is appreciated!
How can I use adaptive learning to decide whether a picture/video frame is real or fake?
You could use the parallax effect. First, take 2 pictures from 2 different locations about 2 cm apart, then compare the images and see (a rough similarity sketch follows below):
* If they are very similar (almost the same), the image is 2D and it is a poster.
* If they are very different, it is a 3D face.
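As a starting point, here is a minimal, naive sketch of that "very similar vs. very different" comparison, assuming both frames are already captured as equally sized BufferedImages. The threshold is a made-up placeholder, and a real parallax check would compare how much near and far regions shift rather than a single global score:

import java.awt.image.BufferedImage;

public class ParallaxCheck {

    // Mean absolute per-channel difference between two equally sized frames.
    public static double meanAbsDiff(BufferedImage a, BufferedImage b) {
        long total = 0;
        for (int x = 0; x < a.getWidth(); x++) {
            for (int y = 0; y < a.getHeight(); y++) {
                int p = a.getRGB(x, y);
                int q = b.getRGB(x, y);
                total += Math.abs(((p >> 16) & 0xFF) - ((q >> 16) & 0xFF)) // red
                       + Math.abs(((p >> 8) & 0xFF) - ((q >> 8) & 0xFF))   // green
                       + Math.abs((p & 0xFF) - (q & 0xFF));                // blue
            }
        }
        return (double) total / ((long) a.getWidth() * a.getHeight() * 3);
    }

    // Hypothetical threshold: a small difference suggests a flat 2D image.
    public static boolean looksFlat(BufferedImage a, BufferedImage b) {
        return meanAbsDiff(a, b) < 5.0;
    }
}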
Another way you could do this is by using the camera flash. The flash would cause a bit of reflection on a photograph, and it would also prevent people from using a video to bypass your system, as a screen would cause a lot of glare that would block the face and prevent the camera from detecting it. All you would need to do is add a flash (preferably blinking at around 100 Hz, so people can't see it but it shows up in a picture).
I hope this helps. :)
I had a challenge solving a problem similar to the one @YogeshRathi describes. I had an algorithm using the CV2 library (Python) recognizing faces in footage from a security camera.
I took a picture every 5 seconds, and the algorithm kept recognizing the faces in a poster hanging on the wall.
After testing different solutions (other algorithms, trained models...), what I finally did was build a buffer that always holds 5 pictures: one in, another out. Each picture enters the buffer together with the list of coordinates of all rectangles that contain a face (5 faces in the picture means 5 rectangles), and is compared with the rest of the pictures inside the buffer.
The comparison consists of comparing the rectangles (every rectangle has 4 coordinates) between both pictures, by subtracting every single coordinate. If a rectangle is static (a face in a poster has almost the same rectangle in different pictures), the difference between both rectangles is negligible. So, unless the pictures have a different number of rectangles, if all the rectangles in both pictures have negligible differences, the pictures are similar.
If a real person appears in a picture, we will have a different number of rectangles (the faces in the poster plus the one belonging to the real person), or at least one rectangle will differ from the list of rectangles of the picture it is being compared with.
If the rectangles in both pictures are similar, I record a flag of 0 in a history field; if there are different rectangles, the flag is 1.
You compare the picture entering the buffer one by one with the rest of the pictures in the buffer, so when you finish, every picture has a list of flags attached (like [0,0,0,1,1]).
When a picture goes out of the buffer, you evaluate its history field. If the list contains a 0, it means at least one other picture is identical, so you can consider that it contains no faces you need to identify; it contains only fake faces from a poster.
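Here is a minimal sketch of that buffering idea translated to Java (the original used Python/CV2). It assumes each frame's detected faces arrive as int[4] rectangles {x, y, width, height} in a consistent order, and the tolerance and buffer size are placeholder values; it also simplifies the 0/1 flag list into a single "all frames similar" check:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class StaticFaceFilter {
    private static final int BUFFER_SIZE = 5;
    private static final int TOLERANCE = 3; // max per-coordinate drift, in pixels

    private final Deque<List<int[]>> buffer = new ArrayDeque<>();

    // Returns true if the new frame's rectangles look static (poster-like).
    public boolean isStatic(List<int[]> rects) {
        boolean hadHistory = !buffer.isEmpty();
        boolean allSimilar = true;
        for (List<int[]> previous : buffer) {
            if (!similar(previous, rects)) {
                allSimilar = false;
                break;
            }
        }
        buffer.addLast(rects);
        if (buffer.size() > BUFFER_SIZE) buffer.removeFirst();
        return hadHistory && allSimilar;
    }

    // Two frames are similar when they have the same number of rectangles
    // and every coordinate differs by no more than the tolerance.
    private static boolean similar(List<int[]> a, List<int[]> b) {
        if (a.size() != b.size()) return false; // different face count => movement
        for (int i = 0; i < a.size(); i++)
            for (int j = 0; j < 4; j++)
                if (Math.abs(a.get(i)[j] - b.get(i)[j]) > TOLERANCE)
                    return false;
        return true;
    }
}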
I'm trying to create a loop using Java Swing Timer to constantly cycle through a set of images (i1, i2, i3....in where n is total number of images).
Each of the images is exactly the same size and must be displayed on a Label (say, l1).
There must be a delay of ten seconds between each image being displayed.
Any idea how I can go about this without using the Java TumbleItem applet? It seems much too complicated for a simple implementation such as mine (displaying special-deals posters in an online storefront application for school).
I am open to this being achieved in any other way.
Help would be greatly appreciated. Thanks in advance!
I'm trying to create a loop using Java Swing Timer to constantly cycle through a set of images
When you use a Timer you don't use a loop. When the Timer fires you just change the image. So somewhere you would need to keep a List of the images to display and an index of the currently displayed image.
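For instance, here is a minimal sketch of that approach with javax.swing.Timer; the class and method names are placeholders, and the images are assumed to be preloaded as ImageIcons:

import java.util.List;
import javax.swing.ImageIcon;
import javax.swing.JLabel;
import javax.swing.Timer;

public class ImageCycler {
    public static void start(JLabel label, List<ImageIcon> images) {
        final int[] index = {0};
        label.setIcon(images.get(0));            // show the first image immediately
        Timer timer = new Timer(10_000, e -> {   // fires every 10 seconds
            index[0] = (index[0] + 1) % images.size();
            label.setIcon(images.get(index[0])); // just swap the label's icon
        });
        timer.start();
    }
}

Using javax.swing.Timer has the advantage that the icon swap runs on the Event Dispatch Thread, which is where Swing components must be updated.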
Any idea how I can go about this without using the Java TumbleItem applet? It seems much too complicated for a simple implementation such as mine
How is it complicated? It displays a series of images, which is close to what you want.
Yes, there is some extra code that loads the images and doesn't start the animation until all the images are loaded, so you could simplify the code by not worrying about that. There is also code that animates from left-to-right and then right-to-left; you don't need that part either. And there is code that configures the animation speed; you can hard-code that.
So if you start with that example and then simplify the code you will have a simple solution. Give it a try and then post your code when you encounter a problem.
This is very simple. Use a timer like this:
import java.util.Timer;
import java.util.TimerTask;

Timer timer = new Timer();
timer.scheduleAtFixedRate(new TimerTask() {
    @Override
    public void run() {
        // code here
    }
}, 0, delayInMillis);
You can use an integer to keep track of the image:
public int image = 1;
In the run() function, use this to switch between the images:
if (image == 1) {
    image = 2;
} else if (image == 2) {
    image = 3;
} else if (image == 3) {
    image = 1; // wrap around to the first image
}
Now, wherever you are drawing your images, use this:
if (image == 1) {
    // draw first image
} else if (image == 2) {
    // draw second image
} else if (image == 3) {
    // draw third image
}
My project uses the Processing core jar and the GSVideo library on OS X 10.8.5 with Eclipse.
I cannot get GSVideo's jump(int frame) or jump(float time) to actually redraw the next frames. The displayed image toggles back and forth between frames when I repeatedly press RIGHT to advance the frame in the example program below. Because the example below works with *.mov but not *.mpg video, I want to ask if there are any known problems with gstreamer advancing frames in MPEG2 video. Or perhaps something's up with either java-gstreamer or GSVideo?
I'm working with video in MPEG2 format, and there is no problem just playing and pausing the MPEG2. It just seems that the movie.jump(frameNum or time) functions are not working. I've started looking for an example of frame stepping using playbin2's seek method.
Here is some info about the video I'm trying to jump within:
stream 0: type: CODEC_TYPE_VIDEO; codec: CODEC_ID_MPEG2VIDEO; duration: 7717710; start time: 433367; timebase: 1/90000; coder tb: 1001/60000;
width: 1920; height: 1080; format: YUV420P; frame-rate: 29.97;
The example code:
import processing.core.*;
import codeanticode.gsvideo.*;

public class FramesTest extends PApplet {
    GSPlayer player;
    GSMovie movie;
    int newFrame = 0;
    PFont font;

    public void setup() {
        size(320, 240);
        background(0);
        //movie = new GSMovie(this, "station.mov"); // sample works
        movie = new GSMovie(this, "myMovie.mpg");   // mpg does not
        movie.play();
        movie.goToBeginning();
        movie.pause();
        textSize(24);
    }

    public void movieEvent(GSMovie movie) {
        System.out.println("movie" + movie.frame());
        movie.read();
    }

    public void draw() {
        image(movie, 0, 0, width, height);
        fill(240, 20, 30);
        text(movie.frame() + " / " + (movie.length() - 1), 10, 30);
    }

    public void keyPressed() {
        if (movie.isSeeking()) return;
        if (key == CODED) {
            if (keyCode == LEFT) {
                if (0 < newFrame) newFrame--;
            } else if (keyCode == RIGHT) {
                if (newFrame < movie.length() - 1) newFrame++;
            }
        }
        movie.play();
        movie.jump(newFrame);
        movie.pause();
        if (movie.available()) {
            System.out.println(movie.frame());
            movie.read();
        }
        System.out.println(newFrame);
    }

    public static void main(String[] args) {
        PApplet.main(new String[] { FramesTest.class.getName() });
    }
}
The example code was pulled from here...
http://gsvideo.sourceforge.net/examples/Movie/Frames/Frames.pde
I've searched the internet for a few days, and I've attempted to contact this forum as well...
https://sourceforge.net/projects/gsvideo/forums
This post seems similar, but my problem is not playback (that works fine); I cannot jump to a specific frame: GStreamer: Play mpeg2
Many thanks to the SO community for any help I might receive.
Update:
To work around the MPEG2 compression issue (described by v.k. below), I am trying to create a gstreamer pipeline that does on-the-fly transcoding to mp4, using either a GSVideo Pipeline or java-gstreamer. The command below works in Ubuntu.
gst-launch-0.10 filesrc location=myMpeg2Video.mpg ! mpegdemux name=demux demux.video_00 ! ffdec_mpeg2video ! queue ! x264enc ! ffdec_h264 ! xvimagesink
But the following GSVideo Pipeline displays an empty gray window :(
pipeline = new GSPipeline(this, "filesrc location=file:/path/movie.mpg ! mpegdemux name=demux demux.video_00 ! ffdec_mpeg2video");
pipeline.play();
As v.k. pointed out, seeking is in general not accurate.
One important thing to note is that development on gsvideo has basically stopped; its main elements were ported to the built-in video library in Processing 2.0. I did some work in the built-in video library to try to improve seeking, and the Frames example in Libraries|video|Movie shows how (to try) to jump to specific frames by indicating a time value. Maybe this helps in your case? A sketch of that idea follows below.
Also, if you find a more accurate way of doing seeking, as you suggest in your last post, I could include that in the video library.
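For reference, here is a minimal sketch of that time-based jump with the Processing 2.x built-in video library; the frame rate, file name and frame index are assumptions, not values from the question:

import processing.core.PApplet;
import processing.video.Movie;

public class JumpDemo extends PApplet {
    Movie movie;
    float fps = 29.97f;    // assumed frame rate of the source video
    int frameToShow = 100; // arbitrary frame to jump to

    public void setup() {
        size(320, 240);
        movie = new Movie(this, "myMovie.mp4"); // placeholder file name
        movie.play();
        movie.pause();
        // jump() takes a time in seconds, so convert the frame index to a time offset.
        movie.jump(frameToShow / fps);
    }

    public void movieEvent(Movie m) {
        m.read(); // pull the decoded frame once it is available
    }

    public void draw() {
        image(movie, 0, 0, width, height);
    }

    public static void main(String[] args) {
        PApplet.main(new String[] { JumpDemo.class.getName() });
    }
}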
So, for starters, I'm learning canvas by expanding upon: http://www.helloandroid.com/tutorials/how-use-canvas-your-android-apps-part-1.
I figured that I wanted my program to run something every time it loops, so I decided to add a function call inside the run method, which now looks like this:
public void run() {
    Canvas c;
    while (_run) {
        displayHumanHand();
        c = null;
        try {
            c = _surfaceHolder.lockCanvas(null);
            synchronized (_surfaceHolder) {
                onDraw(c);
            }
        } finally {
            if (c != null) {
                _surfaceHolder.unlockCanvasAndPost(c);
            }
        }
    }
}
displayHumanHand just has an array of "cards" and arranges them numerically, and it should have no effect on the bitmaps being used (for now). However, adding this line of code visibly changes the quality of what is drawn.
Why? What causes the decrease in quality? How can I fix this?
Also, why does the image on the right look different from the one on the left when I am drawing the same icon (in the first imgur link)?
Bleugh. The code in that tutorial isn't so good. It's doing so many things wrong that I don't know where to start, a tight infinite loop being one of them. Follow this tutorial instead: http://blog.goltergaul.de/2010/03/android-game-project-basics-of-threads-and-canvas/