How to get FPS (frames per second) of a VideoCapture object - Java

I am using a VideoCapture object to capture and process frames of a video in OpenCV/JavaCV.
I don't know how to get the frame rate. I want a timer that runs in the background during live video capture; it should pause when a face is detected and resume afterwards.
Because of the Haar cascade processing, each frame takes a long time to process. How can I adjust the frame rate?
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
VideoCapture camera = new VideoCapture(0);

You can extract various parameters from VideoCapture, such as frame rate, frame height, and frame width. Note that the snippet below is C++; in the Java bindings the same properties are read with Videoio.CAP_PROP_* constants, as the next answer shows.
cv::VideoCapture input_video;
if (input_video.open(my_device)) {
    std::cout << "Video file open" << std::endl;
} else {
    std::cout << "Unable to open video file" << std::endl;
}
int fps = input_video.get(CV_CAP_PROP_FPS);
int frameCount = input_video.get(CV_CAP_PROP_FRAME_COUNT);
double fheight = input_video.get(CV_CAP_PROP_FRAME_HEIGHT);
double fwidth = input_video.get(CV_CAP_PROP_FRAME_WIDTH);

System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
VideoCapture VC = new VideoCapture(0); // the constructor with a device index already opens the camera
if (!VC.isOpened()) {
    VC.open(0); // reopen explicitly if the constructor failed
}
// Getting the frame rate:
double fps = VC.get(Videoio.CAP_PROP_FPS); // returns FPS as a double; many webcam drivers report 0 here
// Setting the frame rate:
VC.set(Videoio.CAP_PROP_FPS, 10.0); // the value is a double; not all drivers honor it
VC.release();

This answer helped me. There is a list of constants and their numeric values; you can pass the raw value straight to VideoCapture's get() method. For example, videoCapture.get(5) will return the FPS of the video, since CAP_PROP_FPS has the value 5.
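As a quick illustration of the equivalence (a sketch; the named constant is the more readable option):
double byNumber   = videoCapture.get(5);                    // raw property id from the constants list
double byConstant = videoCapture.get(Videoio.CAP_PROP_FPS); // same property by name
// byNumber == byConstant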

Related

Implement video files in JFrame

This time I got a really hard nut to crack. I was able to implement a slideshow program that displays a number of random pictures one after another at a given interval. The program also reacts to button presses.
Now I got the task to also make it display video files, and it's wrecking my head. The tasks that need to be solved are the following:
The resolution of the file should depend on the actual size of the screen. If an image or video has a greater resolution than the screen, it is supposed to be scaled down (see the example code).
Images and videos are supposed to be implemented as JComponents in a JFrame, which itself should be composed of several elements like an area for the image/video, an area for text, one for buttons, etc. (I solved this for pictures).
After a certain amount of time, the slideshow is supposed to show the next picture/video. With pictures the time is fixed, but when showing a video the time should depend on the duration of the video itself (we wouldn't want the slideshow to jump to the next slide in the middle of the video).
For easier explanation let me first show how I solved the implementation of the pictures into the slideshow:
public class DisplayImage extends JComponent {
    private static final long serialVersionUID = 2613775805584208452L;
    private static Image image;

    public static Image displayImage(File f, Dimension screenSize) throws IOException {
        // This method loads a file from the computer and resizes it relative to the
        // size of the computer screen. The image is then returned for further processing.
        BufferedImage img = ImageIO.read(f);
        Image dimg;
        double width = screenSize.getWidth() * 0.75;
        double z1 = img.getWidth() / width;
        double z2 = img.getHeight() / screenSize.getHeight();
        if (img.getHeight() / z1 <= width && img.getHeight() / z1 < screenSize.getHeight()) {
            dimg = img.getScaledInstance((int) (img.getWidth() / z1), (int) (img.getHeight() / z1), Image.SCALE_SMOOTH);
        } else {
            dimg = img.getScaledInstance((int) (img.getWidth() / z2), (int) (img.getHeight() / z2), Image.SCALE_SMOOTH);
        }
        return dimg;
    }

    public void setImage(Image image1) {
        // When an image is resized, it is given to this method.
        // It replaces the global variable "image" with the newly loaded image, so the
        // JFrame in the slideshow is actually reset and will display the new image.
        image = image1;
        repaint();
        invalidate();
    }
}
As you can see, I am completely fine with loading a new image and rewriting both the image and the JComponent of the class with it.
With a video file, however, it gets messy. I was able to get video files to load with code taken from somewhere here using Maven, but I didn't succeed in implementing it as a JComponent (I have been browsing Stack Overflow as well as Google for days but couldn't find a solution to my problem). So far the only thing I can do is start an extra player beside the slideshow, as if they had nothing in common:
public void playVideo(File f, Dimension screenSize) throws IOException, JCodecException {
    Picture img = FrameGrab.getFrameAtSec(f, 1);
    double width = screenSize.getWidth() * 0.75;
    double z1 = img.getWidth() / width;
    double z2 = img.getHeight() / screenSize.getHeight();
    NativeLibrary.addSearchPath(RuntimeUtil.getLibVlcLibraryName(), "C:\\Program Files\\VideoLAN\\VLC");
    Native.loadLibrary(RuntimeUtil.getLibVlcLibraryName(), LibVlc.class);
    JFrame frame = new JFrame("vlcj Tutorial");
    MediaPlayerFactory mediaPlayerFactory = new MediaPlayerFactory();
    Canvas c = new Canvas();
    c.setBackground(Color.black);
    JPanel p = new JPanel();
    p.setLayout(new BorderLayout());
    p.add(c, BorderLayout.CENTER);
    frame.add(p, BorderLayout.CENTER);
    EmbeddedMediaPlayer mediaPlayer = mediaPlayerFactory.newEmbeddedMediaPlayer();
    mediaPlayer.setVideoSurface(mediaPlayerFactory.newVideoSurface(c));
    if (img.getHeight() / z1 <= width && img.getHeight() / z1 < screenSize.getHeight()) {
        frame.setSize((int) (img.getWidth() / z1), (int) (img.getHeight() / z1));
    } else {
        frame.setSize((int) (img.getWidth() / z2), (int) (img.getHeight() / z2));
    }
    frame.setSize(screenSize);
    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    frame.setVisible(true);
    mediaPlayer.playMedia(f.getPath());
}
The mess starts already with me not being able to actually get the measurements of the video file itself (meaning width and height). I have been racking my brain over different frameworks like JavaCV, Xuggler, the Marvin Framework and many more, but it was no good at all. The only thing I can do is get a frame from the video as a Picture type, as shown in this example. But that doesn't let me hand back either a JComponent or a BufferedImage (as with the pictures in the first method). Even worse: I have found no way to make the JFrame actually reset when a video file is loaded, so it freezes dead as soon as a video is started in the new player. After that, only the kill switch is left.
So I'm lost here. Any help is greatly appreciated.
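For what it's worth, the same vlcj calls used in playVideo above can render into an existing Swing container instead of a fresh JFrame, which is the usual way to make the player part of a composite layout. A minimal sketch under that assumption, where slidePanel is a hypothetical reference to the slideshow's existing content panel:
// Reuse the slideshow's own panel as the video surface's parent instead of a new JFrame.
Canvas videoSurface = new Canvas();
videoSurface.setBackground(Color.black);
slidePanel.removeAll();                            // hypothetical: clear the current slide
slidePanel.add(videoSurface, BorderLayout.CENTER);
slidePanel.revalidate();                           // let Swing lay out the new canvas
slidePanel.repaint();

MediaPlayerFactory factory = new MediaPlayerFactory();
EmbeddedMediaPlayer player = factory.newEmbeddedMediaPlayer();
player.setVideoSurface(factory.newVideoSurface(videoSurface));
player.playMedia(f.getPath());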

Getting HSV values of pixels in image OpenCV

I am working on a Rubik's Cube side scanner to determine what state the cube is in. I am quite new to computer vision, so it has been a bit of a challenge. What I have done so far is use a video capture and, at a certain frame, capture that frame and save it for image processing. Here is what it looks like.
When the photo is taken the cube is in the same position each time so I don't have to worry about locating the stickers.
What I am having trouble doing is getting a small range of pixels in each square to determine its HSV.
I know the ranges of HSV are roughly
Red    = Hue(0..9) AND Hue(151..180)
Orange = Hue(10..15)
Yellow = Hue(16..45)
Green  = Hue(46..100)
Blue   = Hue(101..150)
White  = Saturation(0..20) AND Value(230..255)
So after I have captured the image, I load it and split the HSV values, but I don't know how to get at certain pixel coordinates of the image. How do I do so?
BufferedImage getOneFrame() {
    currFrame++;
    // At the 120th frame, capture that frame and save it to disk.
    if (currFrame == 120) {
        cap.read(mat2Img.mat);
        mat2Img.getImage(mat2Img.mat);
        Imgcodecs.imwrite("firstImage.png", mat2Img.mat);
    }
    cap.read(mat2Img.mat);
    return mat2Img.getImage(mat2Img.mat);
}
public void splitChannels() {
    IplImage firstShot = cvLoadImage("firstImage.png");
    // Convert BGR to HSV first, then split the channels so the pixel ranges can be examined.
    IplImage hsv = IplImage.create(firstShot.width(), firstShot.height(), firstShot.depth(), firstShot.nChannels());
    cvCvtColor(firstShot, hsv, CV_BGR2HSV); // without this conversion, hsv would hold no data
    IplImage hue = IplImage.create(firstShot.width(), firstShot.height(), firstShot.depth(), 1); // single-channel
    IplImage sat = IplImage.create(firstShot.width(), firstShot.height(), firstShot.depth(), 1);
    IplImage val = IplImage.create(firstShot.width(), firstShot.height(), firstShot.depth(), 1);
    cvSplit(hsv, hue, sat, val, null);
    // How do I get a small range of pixels of my image to determine their HSV?
}
If I understand your question right, you know the coordinates of all the areas that interest you. Save the information about each area into CvRect objects.
You can traverse a rectangular area with a double loop: the outer loop runs from rect.y up to (but not including) rect.y + rect.height, and the inner loop does the same in the x direction. Inside the loop, use the CV_IMAGE_ELEM macro to access individual pixel values and compute whatever you need.
One piece of advice, though: there are several advantages to using Mat instead of IplImage when working with OpenCV, so I recommend switching to Mat unless you have a special reason not to. Take a look at the Mat documentation, in particular the constructor that takes one Mat and one Rect as parameters. This constructor is your good friend: it creates a new Mat object (without copying any data) that refers only to the area inside the rectangle.
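A minimal sketch of that Mat-plus-Rect approach in OpenCV's Java bindings (the rectangle coordinates are hypothetical placeholders for one sticker's location):
Mat bgr = Imgcodecs.imread("firstImage.png");
Mat hsv = new Mat();
Imgproc.cvtColor(bgr, hsv, Imgproc.COLOR_BGR2HSV);

Rect sticker = new Rect(120, 80, 20, 20); // hypothetical x, y, width, height of one sticker
Mat region = new Mat(hsv, sticker);       // header only; shares pixel data with hsv
Scalar meanHsv = Core.mean(region);       // average H, S, V over the sticker area
System.out.printf("H=%.0f S=%.0f V=%.0f%n", meanHsv.val[0], meanHsv.val[1], meanHsv.val[2]);
Comparing meanHsv.val[0] (and, for white, val[1] and val[2]) against the ranges listed in the question then classifies the sticker.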

Store thousands of BufferedImages in ArrayList without using up all memory - Java

I am trying to make a screen-recording app. I have code that takes a screenshot using java.awt.Robot.createScreenCapture and then stores the output in an ArrayList. The ArrayList needs to store 7500 images, and I need to be able to access any of the BufferedImages very quickly. I have tried converting the BufferedImages into byte[] before storing them, but converting them back to BufferedImages takes too long (about 1 second). Is there a way I could do this without having to add command line arguments?
Error:
Exception in thread "AWT-EventQueue-0" java.lang.OutOfMemoryError: Java heap space
Code:
static ArrayList<BufferedImage> bilist = new ArrayList<BufferedImage>();

public static Timer recordingTimer = new Timer(40, new ActionListener() {
    public void actionPerformed(ActionEvent e) {
        try {
            BufferedImage bimage = robot.createScreenCapture(wholescreen);
            bilist.add(bimage);
            if (bilist.size() > 7500) bilist.remove(7500);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
});
Real solution: compress the frames with a hardware-accelerated video encoder (or a software encoder, if you can afford the CPU).
Old answer:
I have solved my problem! What I did was change 5 minutes of recording to 15 seconds, change the type of the BufferedImages to TYPE_BYTE_INDEXED, halve the images' dimensions, and lower the frame rate. In the future, I might make this same program work with Gilbert Le Blanc's system (see the comment above).

libgdx - ui SpriteBatch needs to scale based on window size?

I am currently using libgdx 1.5.4. Normally, to set the window size, I have done this:
public class DesktopLauncher {
    public static void main(String[] arg) {
        LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();
        int temp = 3; // scale viewport
        config.width = temp * 160;
        config.height = temp * 144;
        new LwjglApplication(new MyGame(), config);
    }
}
The temp variable above scales the desktop application window by some factor (3 in this case). In my render() loop, I have a batch (floatingBatch = new SpriteBatch();) that isn't modified by a camera, which I use to draw my UI elements:
floatingBatch.begin();
//bunch of floatingBatch.draw()'s...
floatingBatch.end();
I have noticed, though, that the coordinates in floatingBatch don't scale with my screen size. For example, the point (200, 200) in the floating batch will be off-screen if temp above is 1, but on-screen if temp is 3. This isn't what I want: floatingBatch should scale with the window. If I start with temp = 1 and just manually drag the window out to about 3x its starting size, everything is as I want, obviously.
One thing that I have tried is setting temp = 1 above and then doing this in my create() function:
Gdx.graphics.setDisplayMode(160*3, 144*3, false);
The above works, but it looks a little awkward visually (the screen first starts out at 160x144, then resizes to 3x that size). Is there a better way to do what I am describing? Can I change floatingBatch somehow so that it scales with the window?
You can use
Gdx.graphics.getWidth();
and
Gdx.graphics.getHeight();
to obtain the pixel dimensions of the device, then adjust your scene2d accordingly.
Refer to the API for more help.
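One concrete way to act on those values, going a step beyond the answer above (the camera approach is an assumption here, not something the answer spells out), is to give the UI batch a fixed virtual resolution:
// Camera with a fixed 160x144 virtual resolution; libgdx stretches it to the window.
OrthographicCamera uiCamera = new OrthographicCamera();
uiCamera.setToOrtho(false, 160, 144);

// In render():
uiCamera.update();
floatingBatch.setProjectionMatrix(uiCamera.combined);
floatingBatch.begin();
// ...draw UI in 160x144 coordinates, regardless of the real window size...
floatingBatch.end();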

Screen Capture in separate thread making Java application slow/unresponsive

I'm working on an application that records the user's screen, webcam and microphone while he/she is performing certain activities. It will be used for research purposes. The application has been successfully tested on Windows, but on Mac OS X (Mavericks with Java 7.0.45) it becomes slow and unresponsive when recording is started.
This is why I find this difficult to comprehend:
The recording is done in a separate thread, so how could it influence the responsiveness of other threads? Especially as after each run either Thread.yield() or Thread.sleep(...) is called.
Logs show that while attempting to record at 15 FPS, the resulting frame rate was 2 FPS. So it seems the code that captures a single frame might be too slow. But why, then, does it work fine on Windows?
Just a quick note: the application was successfully tested by tons of users on Windows, but I only got to test it on a single Mac. However, that one had just been formatted and given a clean install of OS X Mavericks, Java (and NetBeans).
Below you will find the code that records the screen and writes it to a video using Xuggler. The code for recording the webcam is similar, and I doubt recording the audio has anything to do with it. My questions are:
What might be the cause of the application becoming unresponsive? And
how could the code be made more efficient, so as to improve the FPS?
IMediaWriter writer = ToolFactory.makeWriter(file.getAbsolutePath());
Dimension size = Globals.sessionFrame.getBounds().getSize();
Rectangle screenRect;
BufferedImage capture;
BufferedImage mousePointImg;
writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_H264, size.width, size.height);

int i = 0;
while (stop == false) {
    // Get the mouse cursor position to draw over the screen image.
    PointerInfo mousePointer = MouseInfo.getPointerInfo();
    Point mousePoint = mousePointer.getLocation();
    Point screenPoint = new Point(
            (int) (mousePoint.getX() - Globals.sessionFrame.getBounds().getX()),
            (int) (mousePoint.getY() - Globals.sessionFrame.getBounds().getY()));

    // Get the screen image.
    try {
        screenRect = new Rectangle(Globals.sessionFrame.getBounds());
        capture = new Robot().createScreenCapture(screenRect);
    } catch ( ... ) { ... }

    // Convert and resize the screen image.
    BufferedImage image = ConverterFactory.convertToType(capture, BufferedImage.TYPE_3BYTE_BGR);
    IConverter converter = ConverterFactory.createConverter(image, IPixelFormat.Type.YUV420P);

    // Draw the mouse cursor if necessary.
    if (mouseWithinScreen()) {
        Graphics g = image.getGraphics();
        g.drawImage(mousePointImg, (int) screenPoint.getX(), (int) screenPoint.getY(), null);
    }

    // Prepare the frame.
    IVideoPicture frame = converter.toPicture(image,
            (System.currentTimeMillis() - startTimeMillis()) * 1000);
    frame.setKeyFrame(i % (getDesiredFPS() * getDesiredKeyframeSec()) == 0);

    // Write to the video.
    writer.encodeVideo(0, frame);

    // Delay the next capture if we are at the desired FPS.
    try {
        if (atDesiredFPS()) {
            Thread.yield();
        } else {
            Thread.sleep(1000 / getDesiredFPS());
        }
    } catch ( ... ) { ... }

    i++;
}
writer.close();
There are several architectural issues that I can see in your code:
First, if you want to execute something at a fixed rate, use ScheduledThreadPoolExecutor.scheduleAtFixedRate(...). It makes your entire delay code obsolete, as well as ensuring that OS timing issues will not interfere with your scheduling.
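For instance (a sketch; captureTask and desiredFps stand in for the question's capture Runnable and getDesiredFPS()):
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
scheduler.scheduleAtFixedRate(captureTask, 0, 1000 / desiredFps, TimeUnit.MILLISECONDS);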
Then, to make things faster, you need to take your code apart a bit. As far as I can see you have three tasks: the capture, the mouse-drawing/conversion, and the stream writing. If you put the capture part in a scheduled Runnable, submit the conversions as Callables to an Executor for parallel execution, and then have a third thread take the results from a result queue and write them into the stream, you can fully utilize multiple cores.
Pseudocode:
Global declarations (or hand them over to the various classes):
final static ExecutorService converterExecutor = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
final static LinkedBlockingQueue<Future<IVideoPicture>> imageQueue = new LinkedBlockingQueue<>();
// ...
Capture Runnable (scheduled at fixed rate):
capture = captureScreen();
final Converter converter = new Converter(capture);
final Future<IVideoPicture> conversionResult = converterExecutor.submit(converter);
imageQueue.offer(conversionResult); // returns false if queue is full
Conversion Callable:
class Converter implements Callable<IVideoPicture> {
    // ... variables and constructor
    public IVideoPicture call() {
        return convert(this.image);
    }
}
Writer Runnable:
IVideoPicture frame;
while (this.done == false) {
    frame = imageQueue.take().get(); // take() blocks for the next Future; get() waits for its conversion to finish
    writer.encodeVideo(0, frame);
}
You can ensure that imageQueue does not overflow with images to render when the CPU is too slow by limiting the size of the queue; see the constructor of LinkedBlockingQueue.
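For example (the capacity of 30 pending frames is an arbitrary assumption):
final static LinkedBlockingQueue<Future<IVideoPicture>> imageQueue =
        new LinkedBlockingQueue<>(30); // offer() starts returning false once 30 conversions are pending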
